
Securing the IoT with SDN

BRADLEY BARBARA | SUPERVISOR: Prof. Ing. Saviour Zammit COURSE: B.Sc. (Hons.) Computer Engineering

The number of internet of things (IoT) devices is increasing at a steady rate, with billions of IoT-connected devices emerging on a yearly basis. Hence, keeping the IoT environment secure is a task of the greatest importance. One of the prevalent threats in the IoT environment is the denial-of-service attack (DoS attack), which depletes the resources of its target, thus rendering it unusable. The main aim of this study was to address the abovementioned issue by using software-defined networking (SDN), a networking innovation that separates the data and control planes. This separation allows the creation of a centralised network-provisioning system, which in turn allows a greater degree of flexibility, programmability, and management.


This project proposes a testbed based on the GNS3 network emulator, whereby the testbed emulates DoS attacks that are subsequently detected and mitigated using algorithms developed for the purpose. The detection algorithm is based on entropy, which is a measure of uncertainty. An entropy-based detection algorithm was chosen because it does not incur significant overheads, while remaining one of the most efficient methods for detecting abnormal traffic patterns. In this work, the entropy was calculated according to the variability of the destination IP address. The standard deviation was calculated on the basis of the entropy measurements carried out and, once an attack was detected, the malign traffic was mitigated by dynamically installing a flow to drop the traffic.
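The detection code itself is not reproduced in this summary; the following is a minimal Python sketch of the approach described above, where the window size, history length and threshold factor are illustrative assumptions. In a Ryu application, a positive detection would trigger the dynamic installation of an OpenFlow rule dropping the offending traffic.

```python
import math
from collections import Counter, deque

def window_entropy(dst_ips):
    """Shannon entropy of the destination-IP distribution in one traffic window."""
    counts = Counter(dst_ips)
    total = len(dst_ips)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

class EntropyDetector:
    """Flags a traffic window whose entropy falls more than k standard
    deviations below the mean of recent (presumed benign) windows."""

    def __init__(self, history=30, k=3.0):
        self.entropies = deque(maxlen=history)
        self.k = k

    def is_attack(self, dst_ips):
        h = window_entropy(dst_ips)
        if len(self.entropies) >= 5:  # wait for a baseline to build up
            mean = sum(self.entropies) / len(self.entropies)
            var = sum((e - mean) ** 2 for e in self.entropies) / len(self.entropies)
            if h < mean - self.k * math.sqrt(var):
                # Entropy has collapsed: traffic is converging on one target.
                return True
        self.entropies.append(h)
        return False
```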

The proposed testbed consisted of the following: a Ryu SDN controller installed on an Ubuntu machine; an OpenFlow-enabled switch; IoT devices simulated using a Raspberry Pi virtual machine; and a Kali Linux appliance used to create malicious traffic. The simulation conducted on the testbed covered four separate test scenarios, with the last three scenarios aiming to overcome limitations present in the first.

Figure 1. Network diagram

Figure 2. Packet count during a DoS attack

Drone object-detection based on real-time sign language

GABRIELE BORG | SUPERVISOR: Prof. Matthew Montebello COURSE: B.Sc. IT (Hons.) Artificial Intelligence

In recent years, unmanned aerial vehicles (UAVs), commonly known as drones, have advanced in various aspects, including hardware technology, autonomous manoeuvring and computational power. Consequently, drones have become more commercially available. Combined with the increasing application of artificial intelligence (AI), this has widened the range of possible applications of UAVs.

In this project, drones, image processing and object recognition were merged with the aim of creating a personal assistant. However, this is not just any virtual personal assistant (PA), such as Alexa, Siri or Google Assistant, but an assistant that would communicate through sign language [1]. Such a PA would benefit the hearing-impaired community, which constitutes over 5% of the world’s population and amounts to 466 million individuals.

Sign language [2] is a visually transmitted language made up of sign patterns constructed together to form a specific meaning. Due to the complexities of capturing and translating sign language digitally, this research domain has not kept pace with the advanced speech recognition available today. Hence, this project combined a drone that follows the user closely, keeping their hand gestures in frame, with object recognition for characters of the American Sign Language (ASL).

In practical terms, the drone would follow the user closely, allowing them to spell out, letter by letter, a word referring to an object. The drone would then pivot while scanning the area for the object sought by the user [3]. Should the object be found, the drone would manoeuvre itself towards it.

The drone used in this project was a Tello EDU which, despite a limited battery life of around 13 minutes, allows Python code to be used as a means of control.
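As an illustration of this control path, the sketch below uses the open-source djitellopy library (an assumption, as the report does not name a specific library) to connect to a Tello EDU, stream video frames and pivot the drone while scanning for the target object. The recognise_letter() and find_object() functions are hypothetical placeholders for the YOLOv3-based recognisers mentioned above.

```python
from djitellopy import Tello

def recognise_letter(frame):
    """Hypothetical placeholder for the ASL-letter classifier (YOLOv3-based)."""
    ...

def find_object(frame, target_word):
    """Hypothetical placeholder for the object detector."""
    ...

tello = Tello()
tello.connect()
tello.streamon()
tello.takeoff()
frame_read = tello.get_frame_read()

# Phase 1: the user spells out a word, letter by letter.
letters = []
while len(letters) < 4:  # illustrative word length
    letter = recognise_letter(frame_read.frame)
    if letter:
        letters.append(letter)
target = "".join(letters)

# Phase 2: pivot on the spot, scanning each view for the target object.
for _ in range(12):  # 12 x 30 degrees = one full rotation
    tello.rotate_clockwise(30)
    if find_object(frame_read.frame, target):
        tello.move_forward(50)  # approach the detected object (cm)
        break

tello.land()
```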

Figure 1. High-level architecture of the system

Figure 2. Hand-gesture recognition results using a convolutional neural network (YOLOv3)

REFERENCES

[1] A. Menshchikov et al., “Data-Driven Body-Machine Interface for Drone Intuitive Control through Voice and Gestures”, 2019.

[2] C. Lucas and C. Valli, Linguistics of American Sign Language. Google Books, 2000.

[3] N. Tijtgat et al., “Embedded real-time object detection for a UAV warning system”, vol. 2018-January, Institute of Electrical and Electronics Engineers Inc., July 2017, pp. 2110–2118.

Radio Frequency Wideband Wilkinson Microstrip Power Couplers

KIM CAMILLERI | SUPERVISOR: Dr Inġ. Owen Casha COURSE: B.Sc. (Hons.) Computer Engineering

Designing high-frequency circuits is a challenging endeavour. At frequencies beyond the gigahertz range, discrete components such as capacitors and inductors start to behave in a non-ideal manner, and their correct operation is limited by their self-resonant frequency, which depends on their size and shape (see Figure 1) [1].

Microstrip structures, as shown in Figure 2, are used to circumvent the limitations of discrete components when designing beyond the gigahertz range [2].

This project set out to design, simulate and implement various wideband Wilkinson microstrip power couplers, such as the stepped-impedance variant. The design methodology consisted of initially creating a lumped-element model of the broadband circuits to be designed. These were then simulated in a circuit simulator to verify their correct operation and characteristics. The lumped-element models were subsequently converted into microstrip sections of the correct dimensions and simulated using an electromagnetic simulator. The same simulator was used to trim the designs to meet the specifications prior to sending them for fabrication. The designed structures were implemented using double-sided FR-4 printed circuit board technology.
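As a rough illustration of the lumped-to-microstrip conversion step, the sketch below evaluates the widely used Hammerstad closed-form equations for the characteristic impedance of a microstrip line from its width-to-height ratio and the substrate's relative permittivity. The FR-4 permittivity and the swept ratios are assumptions for illustration; an actual design flow would rely on the electromagnetic simulator mentioned above.

```python
import math

def microstrip_z0(w_over_h, er):
    """Characteristic impedance (ohms) of a microstrip line, using the
    Hammerstad closed-form equations, given the width/height ratio W/H
    and the substrate relative permittivity er."""
    u = w_over_h
    if u <= 1:
        eps_eff = (er + 1) / 2 + (er - 1) / 2 * (
            1 / math.sqrt(1 + 12 / u) + 0.04 * (1 - u) ** 2)
        return 60 / math.sqrt(eps_eff) * math.log(8 / u + u / 4)
    eps_eff = (er + 1) / 2 + (er - 1) / 2 / math.sqrt(1 + 12 / u)
    return (120 * math.pi) / (math.sqrt(eps_eff) *
                              (u + 1.393 + 0.667 * math.log(u + 1.444)))

# Illustrative values for an FR-4 board (er ~ 4.4): sweep the ratio to
# locate the width giving the ~70.7-ohm quarter-wave arms of a 50-ohm
# Wilkinson divider (Z = 50 * sqrt(2)).
for u in (0.5, 1.0, 1.5, 2.0):
    print(f"W/H = {u:.1f}  ->  Z0 = {microstrip_z0(u, 4.4):.1f} ohm")
```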

Finally, the microstrip circuits were characterised and tested using a vector network analyser. The measured scattering parameters were compared to those simulated using both the circuit simulator and electromagnetic simulator.

Figure 1. Frequency response of the impedance of a discrete capacitor (source: C. Bowick, 1982 [3])

Figure 2. Microstrip structure: its dimensions (width W and height H) determine its characteristic impedance.

REFERENCES

[1] R. Sturdivant, “Lumped Elements at Microwave and Millimeter-wave Frequencies”, 04 June 2016. [Online]. Available: http://www.ricksturdivant.com/2016/06/04/lumped-elements-at-microwave-and-millimeter-wave-frequencies/. [Accessed: 04-May-2021].

[2] T. Edwards and M. Steer, Foundations for Microstrip Circuit Design, 4th ed., John Wiley & Sons, 2016, p. 1.

[3] C. Bowick, “RF Circuit Design”, Oxford: Butterworth-Heinemann, 1982, p. 13.

Confirming the presence of body-worn sensors at critical indoor checkpoints

QUENTIN FALZON | SUPERVISOR: Dr Joshua Ellul | CO-SUPERVISOR: Prof. Ing. Carl Debono COURSE: B.Sc. (Hons.) Computing Science

Internet of Things (IoT) enabled systems afford low power consumption, but often at the expense of high-frequency data. This project combines low-frequency data from a depth camera with the received signal strength indication (RSSI) of a Bluetooth sensor, and investigates whether these data could be used to determine if a person is carrying the sensor through an indoor checkpoint (e.g., the front door of their home).

The system was developed to cover a hallway, measuring approximately 1.5m by 4m. Two IoT-enabled base stations were deployed in the environment: one at the inner end of the hallway and another at the opposite end, near the front door of the house. Both base stations would measure the RSSI from the sensor several times as the person walks through the hallway, away from the first base station and towards the second. Two base stations were used as they offer improved accuracy over a single reference point.

A depth camera, controlled by the second base station, was also set up near the exit. This was installed for detecting and tracking a person through the hallway as they approach, measuring how far away they would be from the exit in real time. Detection and tracking were accomplished using a single-shot detector (SSD) which is based on an implementation of a convolutional neural network (CNN).

Depth and RSSI measurements are obtained approximately 4 times per second and 3 times per second, respectively. The system was evaluated in a controlled environment of the hallway described above, with a single person wearing the sensor and maintaining line of sight with both base stations.
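A common way to turn RSSI readings into distance estimates that can be compared against the camera's depth measurements is the log-distance path-loss model. The sketch below is illustrative only; the reference power and path-loss exponent are assumptions that would need to be calibrated for the actual hallway.

```python
# Log-distance path-loss model: RSSI = RSSI_1m - 10 * n * log10(d).
# Solving for d gives a distance estimate from a single RSSI reading.

RSSI_AT_1M = -59.0   # assumed reference power at 1 m (dBm); calibrate on site
PATH_LOSS_N = 2.0    # assumed path-loss exponent for the hallway

def rssi_to_distance(rssi_dbm):
    """Estimate sensor-to-base-station distance (metres) from an RSSI reading."""
    return 10 ** ((RSSI_AT_1M - rssi_dbm) / (10 * PATH_LOSS_N))

def sensor_near_person(rssi_dbm, camera_depth_m, tolerance_m=1.0):
    """Crude fusion check: does the RSSI-derived distance to the exit
    base station agree with the depth camera's distance to the person?"""
    return abs(rssi_to_distance(rssi_dbm) - camera_depth_m) <= tolerance_m

# Example: -65 dBm implies roughly 2 m, consistent with a 2.2 m depth reading.
print(sensor_near_person(rssi_dbm=-65.0, camera_depth_m=2.2))
```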

Two practical applications of this project would be: a) in a medical setting, alerting family or health carers if an Alzheimer’s patient were to wander out of a safe zone (see Figure 1), and b) attaching the sensor to an ‘item of interest’, such as a set of keys, to alert individuals that they might be leaving the house without the item (see Figure 2).

Figure 1. Health carer being alerted about a wandering Alzheimer’s patient

Figure 2. Person being alerted that they are about to leave home without their keys

Evaluating centralised task-scheduling algorithms for multi-robot task-allocation problems

CHRIS FRENDO | SUPERVISOR: Dr Joshua Ellul | CO-SUPERVISOR: Prof. Ing. Saviour Zammit COURSE: B.Sc. (Hons.) Computing Science

This project investigates the problem of task allocation with swarms of robots. Swarm robotics enables multiple robots to work together with the aim of accomplishing a particular task. In some cases, it might be beneficial to split a large task into smaller tasks. This project explores tasks that are sequentially interdependent, meaning that Task A must be completed before Task B can be executed.

A foraging task, where robots must find food and return it to a nest, was used as an instance of the task-allocation problem. Foraging could be split into two tasks: harvesting and storing. Previous work in this area investigated the scenario where a harvester robot was to directly hand over the collected food item to a storing robot. In this project a cache with a finite capacity was used, thus enabling a harvester to place the food item within a temporary storage location. A storing robot would then collect the food item and take it to the nest.

The evaluation carried out in this project sought to determine whether a cache would result in a more efficient harvesting solution, and in which cases this might occur. Three algorithms were tested, namely: one where no cache was used (a robot performs both subtasks sequentially); one with fixed allocations; and another with a basic task-allocation algorithm that depends on the cache status, as sketched below. From the results obtained, it was observed that for the experimental scenario in this project, the best results occurred when a cache was not used. This could prove to be interesting, as the generally held view is that addressing smaller subtasks would yield a better result. The outcome of this experiment suggests that there might be more efficient and more complex task-allocation algorithms, which were not explored in this project.
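As a minimal sketch of the cache-dependent allocation idea (the actual controller ran within the ARGoS simulator and its exact rules are not reproduced here), a scheduler might switch a robot between harvesting and storing based on how full the cache is; the hysteresis thresholds below are assumptions.

```python
def allocate_role(cache_level, cache_capacity,
                  current_role="harvest", high_water=0.75, low_water=0.25):
    """Hysteresis-based role switch: harvest until the cache fills up,
    then store until it drains. Thresholds are illustrative assumptions."""
    fill = cache_level / cache_capacity
    if current_role == "harvest" and fill >= high_water:
        return "store"    # cache nearly full: move items to the nest
    if current_role == "store" and fill <= low_water:
        return "harvest"  # cache nearly empty: go collect food again
    return current_role

# Example: a harvesting robot seeing the cache at 8/10 switches to storing.
print(allocate_role(cache_level=8, cache_capacity=10))  # -> "store"
```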

Figure 1. Frame taken during an experiment running on the ARGoS simulator

Figure 2. Results from fixed-task allocation experiments showing the average food collected over 10 experiments for each combination of swarm size and allocation ratio.

Autonomous drone navigation for the delivery of objects between locations

CONNOR SANT FOURNIER | SUPERVISOR: Prof. Matthew Montebello CO-SUPERVISOR: Dr Conrad Attard | COURSE: B.Sc. IT (Hons.) Software Development

The work developed is a proof of concept that aims to address the potential of autonomous drone navigation within the transportation industry. The chosen approach was to carefully consider the key limitations identified by Frachtenberg [1], and a solution that could overcome these limitations for the autonomous collection and delivery of objects was conceptualised through careful investigation. In the process, autonomous navigation between two points was also developed, as a means of proving that navigation and delivery would be possible without human intervention. Since the pick-up and drop-off method needed to be autonomous, the entire process had to exclude human involvement.

The apparatus for the pick-up and drop-off experiment was created following an assessment of a number of sources, Kvæstad [2] in particular. The apparatus was fitted with a special case to house a magnet, which is its central feature. The case containing the magnet was designed to slide down a rail on launch, bringing the magnet into contact with the item to be transported. Upon landing, side struts would push the magnet upwards, forcing it to disconnect from the object being transported, effectively dropping it off.

Coupled with the developed apparatus is the transport controller, which proved essential for user involvement. As stated by Hikida et al. [3], a user interface would help simulate the concept of drone delivery, whilst implementing a means for users to plan routes. Hence, the host machine was provided with start and destination coordinates through the controller, autonomously constructing a list of commands to be followed to complete the delivery.
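The report does not specify the command format, but a minimal sketch of how a controller might turn start and destination coordinates into a sequence of drone instructions could look as follows (the command names, grid coordinates in metres and speeds are all hypothetical):

```python
import math

def plan_delivery(start, dest):
    """Build a simple command list: take off, rotate towards the
    destination, fly the straight-line distance, then land to drop off.
    Command names and units are illustrative assumptions."""
    dx, dy = dest[0] - start[0], dest[1] - start[1]
    heading_deg = math.degrees(math.atan2(dy, dx)) % 360
    distance_cm = math.hypot(dx, dy) * 100  # metres -> centimetres
    return [
        "takeoff",                      # magnet case slides down on launch
        f"rotate {heading_deg:.0f}",    # face the destination
        f"forward {distance_cm:.0f}",   # fly the straight-line leg
        "land",                         # struts lift the magnet: drop-off
    ]

# Example: a 5 m delivery leg at a 53-degree heading.
print(plan_delivery(start=(0.0, 0.0), dest=(3.0, 4.0)))
```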

Since the developed software is a proof-of-concept system on a small scale, the communication between the host machine and drone could only be maintained over a short distance. As a result, any notable limitations could be attributed to the experimental capacity of the work, being a proof-of-concept for autonomous navigation with an object pick-up/drop-off function.

Figure 1. Design for autonomous pick-up and drop-off apparatus

Figure 2. Project structure for the host machine and drone communication, and instruction automation

REFERENCES

[1] Frachtenberg, E., 2019. Practical Drone Delivery. Computer, 52(12), pp.53-57.

[2] Kvæstad, B., 2016. Autonomous Drone with Object Pickup Capabilities (Master’s thesis, NTNU).

[3] Hikida, T., Funabashi, Y. and Tomiyama, H., 2019, November. A Web-Based Routing and Visualization Tool for Drone Delivery. In 2019 Seventh International Symposium on Computing and Networking Workshops (CANDARW) (pp. 264-268). IEEE.

Indoor navigation and dynamic obstacle avoidance in assistive contexts, using low-cost wearable devices and beacon technologies

JOSHUA SPITERI | SUPERVISOR: Dr Peter Albert Xuereb | CO-SUPERVISOR: Dr Michel Camilleri COURSE: B.Sc. IT (Hons.) Software Development

With today’s advancing technology, the number of persons making use of navigation devices has increased significantly. While outdoor navigation systems are already available and in use, this is not the case for indoor navigation. Although past research into indoor navigation has shown some degree of success, further improvements in accuracy and response time would be required to make the technology generally usable. Navigating through large buildings, such as airports and shopping centres, or venues hosting events spread over large indoor areas, still presents significant challenges. Many people find navigating such large spaces without any technological aid both confusing and disorienting. Visually impaired persons encounter added difficulty when seeking to navigate unassisted, be it indoors or outdoors. While outdoor spaces tend to be more conducive to developing adequately robust and reliable technology, indoor spaces are very challenging in this respect. This research therefore proposes an artefact in the form of a portable device that would assist a user to determine their current position in an indoor environment, calculate a route to their destination, and provide real-time interactive assistance through an unobtrusive wearable device in the form of spectacles.

Previous research studies have utilised various location-sensing techniques with varying degrees of success. Some studies opted to make use of Bluetooth beacons or radio-frequency devices, both of which proved not to reach the level of accuracy required for such navigation [2]. On the other hand, the studies that used Wi-Fi and ultra-wideband achieved a satisfactory level of accuracy [1]. Although the latter provides the highest level of accuracy, it is difficult to implement without a substantial financial outlay. Hence, this study was carried out using a low-cost, low-latency indoor-location technology based on wireless (Wi-Fi) beacon signals.

A low-cost, off-the-shelf ESP32 device with inbuilt Wi-Fi capabilities was used to create the beacons, which were programmed in object-oriented C++ using the Arduino IDE. The beacons co-exist with existing Wi-Fi access points and provide fast updates of the user’s position, which is crucial in helping the user avoid dangerous situations. A map of the building in grid form could be managed through a web-based interface, on which important environment locations, such as doors, stairs, lifts and obstacles, were marked and colour-coded. The beacons should be distributed throughout the building, preferably in the corners of each room, to obtain the best possible coverage and level of accuracy. The microcontroller on the portable device would receive packets with information regarding the position of the beacons before calculating the route based on mapping points.
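The route calculation over the grid map could, for illustration, be implemented as a breadth-first search over walkable cells. This sketch is an assumption, as the report does not state which path-finding algorithm the device uses.

```python
from collections import deque

def shortest_route(grid, start, goal):
    """Breadth-first search over a grid map: 0 = walkable, 1 = obstacle
    (doors, stairs and lifts would simply be differently tagged walkable
    cells). Returns the list of cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk parent links back to start
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parents):
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(shortest_route(grid, (0, 0), (2, 0)))  # routes around the wall
```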

The accompanying schematic diagram outlines the technologies used. The location was calculated according to the distance of the user from the beacons in the area, making use of the received signal strength indicator (RSSI). A limit was set to ignore beacons with a low signal strength, so as to keep the level of accuracy as high as possible. In this study, indoor navigation and obstacle avoidance were computed by combining position and map data. Directional vibrations and voice prompts were then used to guide the user to the final destination.
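One simple way of combining several beacon readings into a position estimate, consistent with the RSSI cut-off described above, is a weighted centroid; the cut-off value and weighting scheme below are illustrative assumptions rather than the project's actual method.

```python
def estimate_position(readings, min_rssi=-80.0):
    """Weighted-centroid position estimate from beacon readings, each a
    ((x, y), rssi_dbm) pair. Beacons weaker than min_rssi are ignored,
    mirroring the accuracy cut-off described above; stronger beacons
    (RSSI closer to 0) receive proportionally more weight."""
    usable = [(pos, rssi) for pos, rssi in readings if rssi >= min_rssi]
    if not usable:
        return None
    # Shift weights so the weakest usable beacon still counts a little.
    weights = [rssi - min_rssi + 1.0 for _, rssi in usable]
    total = sum(weights)
    x = sum(w * pos[0] for (pos, _), w in zip(usable, weights)) / total
    y = sum(w * pos[1] for (pos, _), w in zip(usable, weights)) / total
    return (x, y)

readings = [((0.0, 0.0), -55.0), ((4.0, 0.0), -70.0), ((0.0, 4.0), -90.0)]
print(estimate_position(readings))  # third beacon falls below the cut-off
```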

Figure 1. Schematic diagram

REFERENCES

[1] B. Ozdenizci, K. Ok, V. Coskun and M. N. Aydin, “Development of an Indoor Navigation System Using NFC Technology,” 2011 Fourth International Conference on Information and Computing, 2011, pp. 11-14, doi: 10.1109/ICIC.2011.53.

[2] M. Ji, J. Kim, J. Jeon and Y. Cho, “Analysis of positioning accuracy corresponding to the number of BLE beacons in indoor positioning system,” 2015 17th International Conference on Advanced Communication Technology (ICACT), 2015, pp. 92-95, doi: 10.1109/ICACT.2015.7224764.

Ambient acoustic noise-monitoring solution based on NB-IoT

ALYSIA XERRI | SUPERVISOR: Prof. Ing. Carl Debono | CO-SUPERVISOR: Dr Mario Cordina COURSE: B.Sc. (Hons.) Computer Engineering

Ecoacoustics, or soundscape ecology, is the study of the effects of sound in our environment. It is an important field of study due to the ecological impact of noise on biodiversity. Nevertheless, research in this field is limited, due to a number of factors. One of these is the lack of wireless connectivity, making noise monitoring a time-consuming and expensive process, since it currently relies heavily on manually retrieving data from deployment sites. The use of a wireless system would reduce the time and cost currently required for gathering the necessary data.

In view of the growing internet of things (IoT) market, sound monitoring could be done efficiently through cheap wireless sensor systems. Low-power wide-area networks (LPWAN) were developed to facilitate communication for such wireless devices. For this project, the noise-monitoring application chosen was gunshot detection: a flagging system was created so that, once a gunshot is detected, a server is notified with the appropriate information.

The acoustic sensor chosen for recording the gunshots was the Audiomoth, a scalable open-source device used in various noise-monitoring applications, including the observation of different bird species and urban noise monitoring. With the gunshot-detection algorithm developed for this project, the Audiomoth could single out gunshots from all the sounds captured by the sensor. When a gunshot is detected, the appropriate information is sent out to a server using LPWAN technology. The technology chosen for this project was narrowband IoT (NB-IoT), owing to its low latency and the quality of service (QoS) it provides.
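The actual detection algorithm is not reproduced in this summary. As a minimal Python sketch of the general idea, a gunshot candidate could be flagged whenever the short-term energy of the audio jumps sharply above a slow-moving ambient estimate; the frame size and energy ratio below are assumptions.

```python
import numpy as np

def detect_gunshots(samples, rate_hz, frame_ms=20, ratio=25.0):
    """Flag frames whose short-term energy jumps far above the running
    ambient estimate; a crude stand-in for the project's detector.
    'samples' is a 1-D NumPy array of audio samples."""
    frame_len = int(rate_hz * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    energies = (frames.astype(np.float64) ** 2).mean(axis=1)

    events, ambient = [], None
    for i, e in enumerate(energies):
        if ambient is not None and e > ratio * ambient:
            events.append(i * frame_ms / 1000.0)  # event time in seconds
        # Slow-moving ambient estimate (exponential moving average).
        ambient = e if ambient is None else 0.99 * ambient + 0.01 * e
    return events
```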

The Arduino MKR NB 1500 was used for narrowband communication and was set to receive information relayed from the Audiomoth. The MKR NB 1500 then sent out this information to a server, where it was recorded for further investigation.

Figure 1. Audiomoth (left) and Arduino MKR NB 1500 (right)

Figure 2. System diagram
