Faculty of ICT - Education, Research & Collaboration - Final Year Projects 2022



WHAT IS THE CLOUD? The cloud is made up of servers in data centres all over the world. Moving to the cloud can save companies money and add convenience for users. The cloud lets us store, access and share data.

The cloud is part of the storage evolution: how data is stored has evolved over the past five decades, from the 1.44MB floppy disk to today's 2GB+ devices and beyond. CDs, USB drives and memory cards are still in use, but the cloud is the next step.

If you are storing photos, accessing music, sending emails or using online banking, you are already in the cloud!

eSkills Malta Foundation | /ESkillsMalta | @eSkills_Malta | eskills.org.mt


The Faculty of ICT 2022 Publication

Photo by Studio Konnect.

Editorial

Science, Technology, Engineering, and Mathematics: together, they form the acronym ‘STEM’ and the basis for our modern way of life. Yet these subjects can sometimes feel abstract, complicated, or inaccessible, and that’s a perception we at the Faculty of Information and Communication Technology (ICT) try to dispel through publications such as this one.

Why Is STEM Important? Whether we’re talking about the cars we drive, the buildings we live in, the food we consume, or the healthcare our doctors offer us, STEM now holds the key to every single area of our lives. In fact, as communities and individuals, we owe a lot to the people whose knowledge of these subjects has resulted in us leading longer, healthier, better, and more comfortable lives. Yet, as STEM continues to define our present and shape our future, particularly through ICT, we must understand that there are hurdles. And none are so significant as the fact that not enough students are taking up STEM-related programmes.

ICT Can’t Continue To Change the World Without You Just last March, the Malta Chamber of Scientists expressed concern about the declining number of STEM students here in Malta. That may come as a surprise to some, especially at the start of a publication chock-a-block with 80 ICT research projects taking place here in Malta, but it’s the truth. Sadly, some students feel that such subjects are completely at odds with what they would like to study or the type of job they’d like in the future. Yet, as many of our interviewees here can vouch, STEM is something that can be explored alongside any other degree, be it in Linguistics, Archaeology, Health, or Humanities, to mention but a few. All you need is a bit of creativity, the willingness to learn something new, and the right tools, which we are happy to provide. In return, you will be helping to build a fairer, more sustainable future for us all.

Start Them Young! Talking about the future, STEM is also something we can integrate into our children’s lives from a young age. This has been proven to encourage them to think outside the box, increase their creativity, better their problem-solving skills, and help them reach their goals. STEM, in other words, can help us shape tomorrow’s leaders and ensure that the next generation can tackle some of the biggest issues our species faces, including the elephant in every room: climate change.

Industry & Policymakers Also Need To Step Up Their Game Finally, it is also crucial for government, industry, and University to work together to create better research, education, and employment opportunities for all. This includes students who take up STEM but also everyone else, even if they’ve never set foot inside a university. More funding needs to be directed toward STEM and STEM education at all levels to ensure better digital literacy for everyone. After all, this is the only way that we can ensure that no one is left behind as the 21st century progresses. So, as we leave you to leaf through this edition of our annual FICT publication, we hope that the projects featured here will motivate you to begin your journey in STEM and ICT... And to keep in mind that it’s never too late to get into this interesting area! Happy reading!

Dr Conrad Attard & Mr Iggy Fenech

All the content in this publication is available online at https://ictprojects.mt/


> iGaming > Video Gaming > Esports

HOME OF GAMING EXCELLENCE GamingMalta is an independent non-profit foundation set up by the Government of Malta and the Malta Gaming Authority (MGA). Tasked with the remit of promoting Malta as a centre of excellence in the digital and remote gaming sector globally, it is also responsible for liaising with the local relevant authorities to improve Malta’s attractiveness as a jurisdiction and enhance the ecosystem surrounding the gaming industry.

GamingMalta, AM Business Centre, Level 0, Triq il-Labour, Zejtun ZTN 2401, Malta
T +356 2546 9000 | E info@gamingmalta.org | gamingmalta.org


A word from the dean

If I were pressed to sum up the Faculty of ICT in three words, these would undoubtedly be Education, Research, and Collaboration. Together, these three components make up our Faculty’s raison d’être and are our promise to students and society. Our Education component may seem straightforward: we are here to provide our 450 undergraduate and postgraduate students with the best tools to succeed. We take that role seriously and execute it to the best of our abilities. Still, we understand that for many of them, their studies aren’t just a stepping stone to a better career but also a way of achieving their ambitions and leading a more fulfilled life. This is the driving force behind our commitment to constantly offer new and better opportunities to help our students reach their full potential. Among our work in this area are two new courses we launched over the past year. The first is a Master’s in Data Science, which can be undertaken part-time over three academic years. You can read more about that on page 10. The second is a one-year, full-time Certificate in ICT Foundation Studies. This is a second-chance course aimed at students whose MATSEC grade is less than the Faculty requires for undergraduate enrolment. Like our other degrees, these two courses open the door to the wide world of possibilities digital education provides. This includes access to some fantastic Research, the second component on our list.

As you can see from this publication, our students and their lecturers are constantly working on research projects that can help industry and society advance. We take this seriously, especially since the need for more digitally-skilled personnel is ever-growing, both at a local and European level. To give our students the best possible start in their studies and careers, we continually sign Memorandums of Understanding with industry and government. These provide our students with research opportunities, internships, and industry connections, but they also give something back to our partners, which is where the third component, Collaboration, comes in. Over the past year, the Faculty has strengthened its many ongoing collaborations and started several others. This includes a one-year project where some of our lecturers and students will work jointly with MITA to use satellite imagery to locate illegal dumping sites in hard-to-reach areas. That means that while our students get first-hand experience, our society also gets a service that aims to make it better and healthier.

“ICT can change lives and bring about a better future for us all”

When these three components come together, they give us the power to find answers to questions and solutions to problems. So, as I leave you to pore over this publication, I hope you will keep in mind that we are incredibly proud to impart the knowledge and skills needed to help the next generation use ICT to change lives and bring about a better future for us all.

Prof. Inġ. Carl James Debono Dean of the Faculty of ICT



SHAPE YOUR CAREER – THE FUTURE OF PAYMENTS

We are a leading provider of global payment solutions and technologies on a single payment platform. Join us and benefIT from a wide range of attractive career opportunities in a global technology company.

rs2.com | recruitment@rs2.com | +356 2134 5857



A Voluntary Organization founded in 1978 by Engineers to represent the Engineering Profession and cater for the interests of Engineers in Malta. Your membership ensures a strengthened voice for the Profession.

BECOME A MEMBER: As a Chamber of Engineers member, you will benefit from: Enhanced Representation, Improved Career Opportunities, Participation In Various Events, Value Added Benefits. www.coe.org.mt/membership

Learn more at www.coe.org.mt. Contact us on info@coe.org.mt with any further enquiries.


#FICT22

Front cover and booklet design jeanclaudevancell.com

Editorial board Dr Conrad Attard & Mr Iggy Fenech

Lead Administrator of Publication Ms Samantha Pace

Printing deltamedia.services

Abstracts (main text) reviewed by Colette Grech @restylelinguistic

Administration of Publication Mr Rene Barun

Review of Abstracts Dr Chris Porter & Dr Chris Colombo

Photography Mr James Moffett Ms Sarah Zammit


Acknowledgements The Faculty of Information and Communication Technology gratefully acknowledges the following firms and organisations for supporting this year’s Faculty of ICT Publication 2022:

Gold Sponsors

Silver Sponsor

Main Sponsor of the event

Event Sponsors



Exhibition Map 2022 (#FICT22) – Level -1 Foyer, ICT Lab -1B01 and Common Area -1B10; entrances from the stairs and the common area. Exhibition areas: Data Science; Testing & Verification; Internet of Things; Deep Learning; Digital Health; Blockchain and Fintech; Audio, Speech & Language Technology; Software Engineering and Web Applications.


Discover the world of data science

Data is one of the cornerstones of our modern world, but its power still hasn’t been fully harnessed. Here, Professor JOHN ABELA talks to us about data science and the new, related master’s degree offered by the Faculty of ICT.


They say data is king, but a better analogy would be to call it an abandoned book: it could have a lot to teach us if we only bothered to open it. Yet most data isn’t ignored due to procrastination or boredom, but rather because there’s just too much of it. So what’s needed is a generation of data scientists who can crack its secrets. The new Master’s in Data Science is the first step to doing just that.

One exabyte is equal to one billion gigabytes – a vast, almost incomprehensible number; no wonder we call it ‘the data deluge’. But while many of us may not worry about data being deleted, the reality is that it has enormous value and could help business, industry, and academia alike.

“Humankind is generating more data than it has resources to analyse and store,” says Associate Professor John Abela, one of the coordinators of this master’s degree. “99.5% of all data created is discarded without ever being analysed, and that number could grow as we estimate that by 2025, we’ll be generating 463 exabytes of data per day.”

“Data science is a cross-disciplinary domain that uses various tools and techniques to analyse large amounts of data,” Professor Abela continues. “We do this through statistical, mathematical, and computational tools that help us collect, manage, curate, monitor, and analyse data sets [collections of related information]. This, then, helps organisations, institutions, and even governments improve their decision-making processes.” To better understand this, let’s use the diaper-beer syndrome, a folktale every data scientist learns early on in their education. As the story goes, an American retail chain decided to analyse data from its warehouse and discovered a curious correlation between the sale of beer and diapers. Upon investigating, it realised that fathers would pick up beer when they ran to the store to buy diapers. So what did the management do? It placed beer and diapers next to each other, making it easier – and more tempting – for its customers. That story is more legend than truth, but data has been used to make scientifically-driven decisions for generations. 168 years ago,

for example, English physician John Snow managed to locate the source of a cholera epidemic in London by mapping where the people who succumbed to the disease lived. It was well before computers, but it was still data-driven. “Today, we have computers to help us, yet data science’s primary purpose remains to find hidden patterns, things we wouldn’t otherwise notice. These findings allow leaders to make decisions based on fact rather than intuition or whim.” As its name suggests, data science operates through scientific approaches, using algorithms and frameworks to extract knowledge and insight from data sets. This means that data science is an extension of various data analytics fields, including data mining, statistics, predictive analysis, machine learning, visualisation, pattern recognition, probability modelling, data engineering, and signal processing.

“Data science is an exciting area to work in”

“Because of this, data science is an exciting area to work in. A job in this sector will give you many amazing opportunities to expand your skillset and knowledge, as there are endless ways of applying data to find answers and help make better data-driven decisions.” This is what the new Master’s in Data Science hopes to impart to those who enrol, who could then use their degree to join a whole array of industries, businesses, or academic circles. “This degree is being offered after much deliberation with stakeholders,” Professor Abela asserts. “We have spent two years crafting a course that gives a grounding in statistics, mathematics, data analysis, machine learning, and much more. This offers a solid base for anyone looking to become a data analyst, data engineer, data architect, business intelligence analyst, machine learning developer, or data scientist.” The Master’s in Data Science degree will help students achieve their potential and reach new heights in their careers and personal lives. It will also ensure that our businesses, industries, and society have data scientists who can extract and use information. “We’re very much looking forward to meeting and training you in this exciting field of study,” Professor Abela concludes. The Master’s in Data Science opens this October, with late applications closing on Friday, 30th September 2022. While an ICT-related first degree would prove helpful, the Faculty accepts applications from people with a mathematics or computer programming background. This course lasts three academic years and is offered on a part-time basis (evenings) only.



What if mastery of data and AI was the key to delivering on your purpose? EY, in collaboration with Hult International Business School, offers a fully accredited Master’s in Business Analytics or a Tech MBA for free to all EY employees. Speak to our Talent Team to find out how you can join our mission and more on these opportunities: careers@mt.ey.com


The leaders have their say We asked some of the Islands’ most influential people in Tech to share their views on ICT. This included why they think it’s crucial to their sector and how the entities they work for are helping to shape the present and the future of the industry. Photo by Daryl Cauchi.

Dr MARTHESE PORTELLI CEO at The Malta Chamber of Commerce, Enterprise & Industry “Tech is not just the present but the future. It is key to productive, profitable, and sustainable economic growth. Look at how Malta has moved from a closed economy dependent on heavy industry, low-cost manufacturing, and tourism to an open economy reliant on knowledge services, value-added manufacturing, and e-tourism. In other words, ‘traditional’ sectors have been revolutionised through tech. “Digital industries have been critical drivers of economic growth over the past decade. This is why Tech is one of The Malta Chamber’s five pillars, with the Tech Business Section focusing on the tech industry’s macro and micro requirements and how it can support other sectors. Tech also plays a crucial role in several of our horizontal thematics, including digital transformation, environmental sustainability, energy and energy conservation, urban planning, mobility, business transformation, and future cities. “Through the rapid advancement of technologies like AI, 5G, and IoT, as well as the growing network of connected devices with embedded intelligence and learning capabilities, we will have better access to data. Tech will continue playing an important role in increasing productivity and profitability. It is indeed a key driver towards an enhanced future.”

Mr DAMIAN HEATH Partner at Deloitte “As a company that provides audit and assurance, tax, consulting, financial, and risk advisory services, Deloitte requires a synergy of domain expertise. This is delivered through and with digital and technology solutions, which sustain our other core offering: helping our clients transform and introduce best practices through technology. “However, our involvement in ICT does not end there. Deloitte is a digital and technology implementor that works with Maltese companies while leveraging experience from international projects. We also engage in discussions with stakeholders on standards, regulations, and best practices in key areas such as cyber and fintech. Moreover, we conduct several studies, such as one on Rethinking Education, which we’re compiling alongside Deloitte firms in Italy and Greece. This explores the diversity, future job prospects, and potential skills required in STEM education. “In the future, Malta will need to address its skill gap and position itself in a way that is attractive to the increasingly mobile workforce – at least if it wants to sustain our core industries. This would also give it the capacity to support innovation for local enterprises and become an ‘incubator society’, where innovative social and environmental projects can be tried, honed, and refined in preparation for broader international adoption.”

The views expressed by each interviewee in this article belong solely to the respective interviewee and do not necessarily represent those of the editors, the Faculty, or the other interviewees in this article.

L-Università ta’ Malta

| 13


Inġ. MALCOLM ZAMMIT President of the Chamber of Engineers “Engineering is a highly versatile profession enriched by various fields that share a breadth of skills, tools, and technologies. ICT is one such element and is a building block of the Fourth Industrial Revolution. Dubbed ‘Industry 4.0’, this Revolution saw us automate manufacturing processes through technologies like machine learning and real-time data, making business activities more efficient. “Like engineering, ICT is an essential element in developing a forward-looking society, which is why, as the Chamber of Engineers, we support engineers who work in the ICT sector. We are also engaged in promoting the same industry through collaborations with relevant entities, including the Faculty of ICT and the Government. “We have full faith that, moving forward, ICT will continue to contribute to the country’s modernisation and will remain a key sector. Indeed, those who choose this career path will find numerous opportunities at a national and European level, which is why we believe the Government should invest more in STEM education to make this sector more appealing.”

Ms DANA FARRUGIA CEO at Tech.mt “Tech.mt was established in 2019 through a partnership between the Government and the Malta Chamber of Commerce, Enterprise, and Industry. Its main responsibility is to promote our national strategy on innovative technology and provide opportunities for local tech companies and professionals to showcase their innovations, expand their operations, and internationalise. “As Tech.mt, we are at the forefront of helping academia and talent through collaborations in global technology-related spheres. We also offer guidance to tech startups to help them take their next step in succeeding. “This, along with the work of other tech leaders, is paying off. The latest Eurostat study shows that Malta’s IT sector displayed the fastest productivity growth over the past decade, leading the industry to a gross value added of 10.2%. Meanwhile, the last Digital Economy and Society Index revealed that, at 6%, Malta’s share of ICT graduates is significantly higher than the EU average of 3.9%. This augurs well, and we look forward to an even brighter future for this industry. “Nevertheless, there’s still room for improvement. Malta should maintain its strategic and competitive advantage in this area by having more well-trained human resources to adapt and develop innovative technology.”

Mr MICHAEL AZZOPARDI Technology Consulting Leader at Ernst & Young (EY) Limited “There is no denying that ICT plays a vital role across all industries. After all, which company active today doesn’t continually look for newer, better, and faster ways for its people to interact, network, seek help, access information, and learn? Moreover, ICT contributes significantly to a company’s success by helping it address critical initiatives like regulatory obligations, efficiency goals, external challenges, and growth ambitions. “At EY, our diverse teams are connected across the whole business, giving us collective knowledge in ICT that can help clients with all this through data-driven insights. These can focus on enriching and personalising consumer experiences, liberating talent, creating opportunities through AI, and leveraging tech to provide transparency and form trust. For that reason, we will undoubtedly continue to invest in our people, including fresh graduates, and to partner up with major tech industry players like the Faculty of ICT.”


Mr MICHEL GANADO Partner Advisory at PwC Malta “At PwC, we use technology to digitise how we deliver audit, tax, and advisory services. This has helped improve the quality of our engagements and enabled us to work more collaboratively with our clients. But it’s also meant that our workforce has had to be upskilled, so we embarked on a programme to empower our people through training and tools that streamline their day-to-day work. “This digitisation of the industry is now inevitable. So much so that the European Commission is already seeking to mitigate cyber-attacks on financial systems through the DORA proposal and to introduce rules for how AI is used across all industries, including our line of work, through the AI Act. “Thankfully, Malta has made great strides in developing into an information society, and it’s creating an overarching digital strategy for 2021-27. This will include new sectoral strategies on digital public services, cybersecurity, e-commerce, and data. Yet Malta’s performance in digital skills has the potential to go further. “With its resources and expertise, PwC can support the public and private sector in every step of their digital journey. Among its many initiatives, PwC shares thought leadership with the industry, organises events for the public, assists the University of Malta and MCAST by providing employment paths, and organises training courses in ICT at PwC’s Academy.”

Prof. Inġ. EDWARD GATT Chair of the Institute of Electrical and Electronics Engineers (IEEE) Malta Section “We are all aware that ICT is a leading industry in Malta, which is why the IEEE Malta Section aims to be at the forefront of promoting new technologies, including IoT, AI, Smart Systems, and Big Data. These could quickly grow in the local market, but industry growth must be coupled with more students taking up related degrees at every level. “Yet Malta needs to continue investing in the skills required for the technology sector and increase women’s participation within it. This is essential to meet future demands, where companies will seek better technologies with more user-centric, mobile, agile, and data-driven capabilities, while their customers will have access to better technology products and services.”

“As the world’s largest technical professional organisation with over 400,000 members, IEEE is dedicated to ‘advancing technology for the benefit of humanity.’ Two of the ways it does so are its extensive technological library, which has more than five million articles on Engineering and ICT, and through the promotion of Engineering and Computing using the TryEngineering and TryComputing programmes.

“The IEEE Malta Section is committed to the same principles, so we provide regular in-service teacher training programmes that encourage using the aforementioned programmes and resources during lessons. We also sponsor events and workshops where we bring the top leading international guest speakers. All this is vital as our Islands establish themselves in research and innovation and continue to attract investment.”



Mr KENNETH BRINCAT CEO at Malta Digital Innovation Authority (MDIA) “Malta’s development into an information society has come a long way. Today, the country is the only EU Member State with complete coverage of ultrafast broadband networks. It also boasts many graduates in ICT and has introduced regulatory frameworks for AI and blockchain. And that’s just the tip of the iceberg. “In all this, the MDIA serves as a coordinated front that facilitates the promotion of digital technology in an all-encompassing manner, particularly in Malta’s ability to remain economically competitive. We also aim to support the development of forward-thinking technologies through various incentives. This is vital as ICT’s ever-increasing need for adaptability, affordability, and ease of access will coincide with a growing demand for more computing power and strengthened cybersecurity. “In terms of where we’re going, the future of ICT in Malta will not be different from what is happening elsewhere. ICT will significantly impact the Islands’ future because it will be essential to the economic development of key sectors like healthcare, education, infrastructure, finances, agriculture, manufacturing, and even governance.”

Mr CARMEL CACHIA Chief Administrator and Chair at eSkills Malta Foundation “In the past, a company that failed to computerise would be deemed inefficient and impractical. But today, any company that’s not ready to digitally transform its processes is practically digging its own grave. ICT is now crucial for any company that wants to compete locally and internationally. “That’s why, as the recognised National Coalition for Digital Skills and Jobs, the eSkills Malta Foundation contributes actively to government and stakeholders’ policies. We do this through the various studies we carry out, the reviewing of and contributing to content and implementation, the expansion of curriculums through extracurricular initiatives, and the championing of local and international digital initiatives. “Of course, Malta is already in the top tier of digitally developed EU countries, having placed fifth in the EU Digital Economy and Society Index (DESI). Nevertheless, we must tackle particular challenges, such as increasing the number of ICT practitioners and matching the industry’s competencies, skills, and attitudes. “I am hopeful about how the country’s stakeholders have reacted to these challenges, and I think we shall succeed.”


Contents

Filling the gaps: Malta-Sicily radars – 104
DEEP-FIR: enhancing CCTV footage – 106
Determining sentiment in Maltese – 108
Tiny circuit, major possibilities – 110
Digital health could revolutionise Karin Grech Hospital admissions – 112
Virtual crime scene: detecting malware attacks – 114
Do those drugs go together? – 116
Can Twitter help break the news? – 118
Exploring climate change – 120
2021 Awards: an overview – 122
Faculty of ICT Staff Awards 2021 – 128

#FICT22


#FICT22 Contents – Abstracts

Deep Learning
Driver-drowsiness detection app – 20
Convolutional neural network for ingredient detection – 21
Demand-sensitive bus dispatching system – 22
The crew pairing problem: An evolutionary metaheuristic – 23
Image segmentation using deep learning techniques – 24
Dynamic vehicle-assignment for carpooling – 25
Deep learning models for application in image restoration – 26
Visualising AI: A game presenting neural networks – 27
Personal generative landscapes – 28
Art generation through sound – 29
AlMessage: An anonymised spam-protected smart messaging application – 30

Testing & Verification
PEST – Personal Engaging Scheduler Task Manager: A mobile task-manager app designed for people with ADHD – 31
Investigating the use of machine learning for automated element location in test automation – 32
A comparison of predictive and measured KPIs using a GIS tool – 33
Constructing and analysing knowledge maps via source code repository analysis – 34
Program analysis: Towards the analysis of CPython bytecode – 35

Software Engineering & Web Applications
Sensation seeking and aesthetic preferences in the context of a supermarket e-commerce website – 36
JavaScript framework for Actor-based programming – 37
Scalable procedural crowd simulation in large virtual environments – 38
Cybersecurity for SMEs – 39
The effect of WCAG conformance levels on perceived usability – a study with non-disabled users – 40
Autonomous robot path-planning and obstacle avoidance in a dynamic environment – 41
An educational app to help improve social and communication skills in autistic children – 42
Steganography: A comparison on different algorithms – 43
A comparative study of concurrent queueing algorithms and their performance – 44
DashView: A project-status visualisation tool for software development managers – 45
A comparison between text-based and graphics-based methods in object-oriented code comprehension – 46
Investigating issues with computing and interpreting the truck factor metric – 47
Recommendations to workplace users when sharing knowledge – 48
Gamification of specific aspects of software project management to train students/developers – 49


Audio, Speech & Language Technology
SimplifyIt: Automatic simplification of text – 50
Find&Define: Automatic extraction of definitions from text – 51
Automated news aggregator – 52
Automatic transcription and summarisation of multi-speaker meetings – 53
Multilingual low-resource translation for Indo-European languages – 54
The applicability of Wav2Vec 2.0 for low-resource Maltese ASR – 55
Speech-driven app to assist people for assisted living – 56

Internet of Things
Implementation and analysis of an RSSI-based indoor positioning system – 57
A facial recognition system for classroom attendance – 58
The design of a piezoelectrically actuated scanning micromirror – 59
IoT-based domestic air-quality monitoring system – 60
A narrowband IoT network solution for air-quality monitoring – 61
Automated deployment of network services using the latest configuration protocols and languages – 62
Hardware implementation of an automatic collision-detection-and-avoidance system for a remote-controlled device – 63
Implementation of a home alarm system using IoT – 64
Building an air pollution dataset through low-cost IoT air monitoring – 65
Automated indoor navigation in a care context – 66
IoT-based voice-controlled home automation system – 67
Air quality, temperature and humidity monitoring using narrowband IoT – 68
Smart secure homes: A solution to the cyber security threats in our smart homes – 69
Towards macro programming of IoT devices and smart contracts – 70

Blockchain & Fintech
Assessing the feasibility of tokenisation in healthcare – 71
Towards seamless cross-platform smart-contract development – 72
Smart-contract proxy analysis – 73
Towards seamless multisig wallet key management – 74

Data Science
Grammatical inference applications in bioinformatics – 75
SADIP: Semi-automated data integration system for protein databases – 76
Crime Analysis – 77
Applying spatial data-modelling techniques and machine learning algorithms to road injury data for increased pedestrian safety – 78
Analysing diverse algorithms performing music genre recognition – 79
Minecraft settlement generator – 80
Analysis of the relation between words within the Voynich Manuscript – 81
Satellite imagery data analytics for remote-based sensing for hospital resource-requirement forecasting – 82
Safe navigation and obstacle avoidance in the real world when immersed in a virtual environment – 83
Automatic and enhanced camera composition in Daz Studio 3D – 84
Mining the CIA World Factbook – 85
DFA learning using SAT solvers – 86
Sentiment analyser for the Maltese language – 87
Solving the sports league scheduling problem using integer linear programming – 88
Learning DFAs from noisy training data – 89
A fast approximate light transport method for ray tracing – 90
Artificial intelligence in short-term meteorological forecasting – 91

Digital Health
Mobile gait analysis – 92
ITS4U: Intelligent Tracking System for You – 93
Using monitoring-oriented techniques to model the spread of disease – 94
A study of deep learning for automatic segmentation of healthy liver in abdominal computed tomography scans – 95
An intelligent healthcare mobile app – 96
Practical Artificial Caregiver (PAC): a multimodal application to support the care of persons with impairments in care institutions – 97
The application of customer relationship management for digital health systems with the management of outpatients as a use case – 98
Investigating cognitive stress in software engineering activities – 99


Deep Learning

Driver-drowsiness detection app
Bernard Borg | SUPERVISOR: Dr Lalit Garg
COURSE: B.Sc. IT (Hons.) Software Development
Drowsiness at the wheel is the cause of a significant portion of crashes annually. Various approaches have been studied and subsequently employed to detect drowsiness, among them subjective, physiological, vehicle-based, and behavioural methods. This project set out to tackle the issue by comparing some of the techniques available for the behavioural detection of drowsiness.

Figure 1. Example of face and landmark detection
Figure 2. Diagram depicting the drowsiness-detection process
After researching the current state-of-the-art solutions, different available facial-detection, feature-extraction and feature-classification methods were compared. The various facial-detection and feature-extraction methods were evaluated solely on their processing speed, as most existing face-detection and feature-extraction techniques are already accurate enough for frontal, well-lit images. On the other hand, the feature-classification methods were compared on both accuracy and speed. The MRL (Media Research Lab) eye dataset [1] was used to compare these methods. Python was used as the language of choice, as it provides many popular, well-documented machine learning libraries such as Keras [2] and Sklearn [3].

The final developed system was tested for accuracy, achieving good results, and was also compared with other state-of-the-art methods. The most suitable method was then implemented on a Flask server, which can receive video data, process it and then send the result back to be displayed to the user. A simple mobile application was also implemented using the Xamarin framework. The user could easily record a short video clip of their face using their mobile phone. The clip would then be sent to the server for analysis, and the result would be displayed on the user’s mobile phone. The proposed solution allows users to check whether they are fit for driving by simply using their mobile devices, thus contributing to reducing the number of drowsiness-related accidents on the road.
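As an illustration of the behavioural approach described above, the following is a minimal sketch pairing a small Keras eye-state classifier with a PERCLOS-style decision rule (the fraction of frames in which the eyes are closed). The architecture, input size and thresholds are illustrative assumptions, not the project's actual configuration.

```python
# Sketch only: a binary eye-state CNN plus a clip-level drowsiness rule,
# assuming 24x24 grayscale eye crops (e.g. extracted from the MRL eye dataset).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_eye_state_model(input_shape=(24, 24, 1)):
    """Classify a single eye crop as open (1) or closed (0)."""
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

def is_drowsy(open_eye_probs, closed_threshold=0.5, perclos_threshold=0.3):
    """Flag a clip as drowsy when the share of 'closed' frames is high."""
    closed = np.asarray(open_eye_probs) < closed_threshold
    return closed.mean() > perclos_threshold
```

In such a set-up, the server-side code would run the classifier on each frame of the uploaded clip and apply the clip-level rule to the resulting probabilities before replying to the mobile app.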

REFERENCES
[1] “MRL Eye Dataset | MRL”, Mrl.cs.vsb.cz. [Online]. Available: http://mrl.cs.vsb.cz/eyedataset. [Accessed: 28 Apr 2022].
[2] K. Team, “Keras: the Python deep learning API”, Keras.io. [Online]. Available: https://keras.io/. [Accessed: 28 Apr 2022].
[3] “scikit-learn: machine learning in Python — scikit-learn 1.0.2 documentation”, Scikit-learn.org. [Online]. Available: https://scikit-learn.org/stable/. [Accessed: 28 Apr 2022].



Deep Learning
Convolutional neural network for ingredient detection
Luke Dalli | SUPERVISOR: Prof. Inġ. Reuben Farrugia
COURSE: B.Sc. (Hons.) Computing Science
The niche technology of recognising ingredients has grown significantly in recent years, with food-logging increasing in popularity as a means of monitoring one’s personal diet. At a time when dietary awareness has reached very high levels, automatic ingredient recognition demands further experimentation in the quest to perfect existing solutions. This work adopted a supervised machine learning approach to address the problem of ingredient detection, which involves recognising a number of ingredients within an image. Various neural network designs were explored to yield an acceptable performance for this computer-vision task. Among other uses, such a technique could be extremely useful in detecting vegetarian/vegan-friendly food, as well as for lactose-intolerance testing. The exact position of the ingredients was not of any interest to this experiment, as that aspect lies within the domain of localisation, which is a different bracket of computer vision. The objective of this project was simply to mark the presence of an ingredient in the input data (hence, detection). However, this was not a straightforward task, given that it is arguably a multi-label classification problem. This implies that dishes may contain zero, one or multiple ingredients, and for this reason, an image would be classified into a number of different ‘classes’. The above-mentioned classification process can use deep learning, a concept whose architectures are inspired by the human visual cortex and composed of interconnected neurons. These neurons are grouped into layers, where they communicate adjacently to transport information from the input to the output layer. A convolutional neural network (CNN) would be adequate for handling such problems, as it uses a mathematical function called

convolution to extract features from two-dimensional data. Depth plays an important role, as it is usually indicative of the complexity and capability of the network. However, depth and other factors may have to be sacrificed due to hardware and timing limitations. One disadvantage of CNNs is that they require large amounts of data to undergo successful training. This is particularly the case in ingredient detection, as food has high visual variance; robust recognition would therefore require capturing different outcomes of cooking and preparation. Using a dataset containing 169,000 food images of Chinese cuisine, this project consisted of designing and training a model capable of recognising a number of ingredients present on a plate. It is to be noted that, although the available dataset contained 408 ingredients, this work might only consider a subset of these ingredients, depending on the model’s performance. The goal of this project was to analyse neural networks that resemble both AlexNet and ResNet with a view to achieving satisfactory results. These models consist of different techniques, which allow learning to take place in a feasible manner, i.e., with an acceptable number of parameters while avoiding overfitting. Performance was analysed through the use of evaluation metrics suited for multi-labelled solutions. Such metrics were expected to take into account the non-linearity of the data, as well as the multi-labelled outputs. At the time of writing, the results were still to be finalised. However, it is suspected that AlexNet would reach an F1 score of around 35%. Moreover, through the implementation of ResNet’s residual blocks, a significant improvement over the AlexNet value was anticipated.

Figure 1. CNN output example
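To make the multi-label set-up concrete, the sketch below shows a CNN with one sigmoid output per ingredient, trained with binary cross-entropy so that each label is predicted independently. The backbone is a small stand-in, not the AlexNet/ResNet variants investigated in the project.

```python
# Sketch only: multi-label ingredient detection (sigmoid head, not softmax).
from tensorflow import keras
from tensorflow.keras import layers

NUM_INGREDIENTS = 408  # full label set of the dataset; the work may use a subset

def build_multilabel_cnn(input_shape=(224, 224, 3), n_labels=NUM_INGREDIENTS):
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        # One independent probability per ingredient: an image can be
        # assigned zero, one or many labels at once.
        layers.Dense(n_labels, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```

An F1 score of the kind reported above could then be obtained by thresholding the sigmoid outputs (say at 0.5) and scoring the resulting binary label vectors against the ground truth.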



Deep Learning

Demand-sensitive bus dispatching system
Diana Darmanin | SUPERVISOR: Dr Josef Bajada
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
The effectiveness and reliability of a public transportation system is key to having a more sustainable mode of frequently used urban passenger transport. In densely populated areas, transportation systems often become less reliable as buses drift away from their scheduled arrival and departure times, due to traffic congestion, accidents and variability in passenger demand. Increased demand would delay a bus from leaving a bus stop on time, while the following bus would eventually catch up, causing the phenomenon referred to in the literature as bus bunching. This has led bus operators to gravitate towards a bus schedule that works with intervals between buses rather than fixed timetables. This research sought to investigate the use of reinforcement learning (RL) to dispatch buses dynamically by taking into account passenger demand. Two methods were implemented and compared: the first was a rule-based strategy using an if-then approach, and the second used a

deep Q-learning algorithm. The decisions the system could make were: bus holding; stop skipping; and enforcing boarding limits. Both strategies were tested by simulating a ring road with 24 bus stops and 14 buses, with varied passenger demand along the bus route. In both cases, the SUMO traffic simulator was used. This simulator provides a very good overview of the environment and, by using TraCI (SUMO’s Python API), observations and actions could be made programmatically to control the behaviour of the buses. Using a simulator made it possible to create different environments and scenarios over multiple episodes, allowing for trial and error until the agent learns a good policy. The two approaches outlined above, along with a no-control approach used as a benchmark, were evaluated using a headway-variance performance metric, whereby a high variance would indicate the likelihood that bus bunching has occurred.
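The headway-variance metric mentioned above is straightforward to compute from logged arrival times; a minimal sketch follows, with made-up timestamps rather than simulator output.

```python
# Sketch only: headway variance at a single bus stop.
import numpy as np

def headway_variance(arrival_times):
    """arrival_times: arrival timestamps (seconds) of successive buses."""
    headways = np.diff(np.sort(np.asarray(arrival_times, dtype=float)))
    return headways.var()

print(headway_variance([0, 600, 1200, 1800]))  # 0.0: perfectly even service
print(headway_variance([0, 600, 650, 1800]))   # large: bunching likely
```

In the simulated set-up, these arrival times would be observed programmatically through TraCI at each stop and aggregated across the episode.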

Figure 1. A depiction of bus bunching (Source: [1])
Figure 2. A bus stopping at a bus stop using the SUMO simulator

REFERENCES
[1] L. Moreira-Matias, C. Ferreira, J. Gama, J. Mendes-Moreira, and J. Freire de Sousa, “Bus Bunching Detection by Mining Sequences,” in Proceedings of the 12th Industrial Conference on Advances in Data Mining: Applications and Theoretical Aspects, vol. 12, pp. 77-91, Berlin, Germany: Springer-Verlag, 2012.



Deep Learning
The crew pairing problem: An evolutionary metaheuristic
Liam Gatt | SUPERVISOR: Prof. John Abela
COURSE: B.Sc. IT (Hons.) Software Development
The airline industry is one of the fastest-growing industries in the world. To keep up with demand and remain profitable, researchers have been actively seeking ways to optimise every aspect of an airline’s operations, as even a small boost in optimality could result in huge savings for the airline. One such operational problem is airline crew scheduling. Being NP-hard, it cannot be solved to optimality in polynomial time, so researchers have proposed various techniques for achieving near-optimality in polynomial time. These range from column generation (CG), the most common method found in the literature, to other heuristics such as branch and price or Lagrangian relaxation, as well as metaheuristics such as genetic algorithms and simulated annealing. This problem is also routinely sub-divided into crew pairing and crew assignment. This study has focused

on the former, with the objective of finding a combination of pairings that would cover all flights within a monthly period, at minimum cost. The proposed solution uses a graph in conjunction with a depth-first search to find all legal pairings under several predefined and variable legal, temporal, spatial and fleet constraints. These pairings were then used to generate random solutions of relatively high cost, which were inputted as the population into a genetic algorithm. The solutions obtained after optimising the initially generated chromosomes were then compared to an initial solution generated using CG, as provided with the dataset used, which was obtained from a major American carrier. Promising results were recorded and explained within the project. The conclusions drawn from these results also pointed towards possibilities for future work that could enhance the proposed solution.
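As a toy illustration of the evolutionary stage described above, the sketch below evolves subsets of candidate pairings towards full flight coverage at low cost. The pairing pool, costs and penalty weight are fabricated for illustration; in the project itself, the legal pairings come from the graph-based depth-first search.

```python
# Sketch only: a genetic algorithm over subsets of pairings (set covering).
import random

FLIGHTS = set(range(10))
# Each candidate pairing covers some flights at some cost (hypothetical data).
PAIRINGS = [(frozenset(random.sample(sorted(FLIGHTS), 3)), random.randint(5, 20))
            for _ in range(40)]

def fitness(chromosome):
    """Lower is better: total cost plus a penalty for uncovered flights."""
    covered = set().union(*(PAIRINGS[i][0] for i in chromosome)) if chromosome else set()
    cost = sum(PAIRINGS[i][1] for i in chromosome)
    return cost + 100 * len(FLIGHTS - covered)

def crossover(a, b):
    cut = random.randint(0, min(len(a), len(b)))
    return sorted(set(a[:cut] + b[cut:]))

def mutate(c, rate=0.1):
    if random.random() < rate:  # toggle one pairing in or out
        c = sorted(set(c) ^ {random.randrange(len(PAIRINGS))})
    return c

population = [sorted(random.sample(range(len(PAIRINGS)), 6)) for _ in range(30)]
for _ in range(200):
    population.sort(key=fitness)
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

print("best cost:", fitness(min(population, key=fitness)))
```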

Figure 1. Graph representing flight paths between airports
Figure 2. Diagram showing different steps in the algorithm



Deep Learning

Image segmentation using deep learning techniques
Luke Grixti | SUPERVISOR: Prof. John Abela
COURSE: B.Sc. IT (Hons.) Software Development
Image segmentation is the process of classifying each object within an image. This process has many important applications, such as assisting self-driving cars to analyse their surroundings, or helping medical professionals better detect tumours in images. The aim of this work was to compare established deep learning (DL) techniques adopted in image segmentation, as well as to analyse the performance of these techniques, incorporating Maltese road scenes into the training and testing processes. Local road-scene footage was recorded using a GoPro camera mounted on the rear-view mirror of a car driving through the streets of a number of Maltese towns. This footage was then converted into images, taking 10 frames per second of footage. Approximately 70 images were chosen for this task. These were then labelled manually for the purpose of training.

An important element in image segmentation is labelling, which is the process of marking each object in the image. Therefore, an image like the one shown on the left in Figure 1 would be labelled as displayed on the right. This would allow the DL model to compare its own prediction with the correct one, and thus learning could happen. The images taken in Malta and their labels were combined with foreign road-scene images, and were then used for the AI (artificial intelligence) training process. Once a DL model had been trained, it could scan a new image and try to segment it into discrete objects as accurately as possible. The segmentation result for the image in Figure 1 would be displayed as shown in Figure 2.

Figure 1. A local image and its label
Figure 2. Segmentation result for local image

A number of different DL models were trained in this work, and the results from each model were compared to identify the best techniques for this task. An analysis of the difference between models trained on foreign images only and models trained on both foreign and Maltese images was also undertaken.
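For comparing the trained models, one standard per-class measure is intersection-over-union (IoU) between predicted and ground-truth label masks. The sketch below is illustrative; the class ids are assumptions, not the project's label scheme.

```python
# Sketch only: IoU and mean IoU over integer-labelled segmentation masks.
import numpy as np

def iou(pred_mask, true_mask, class_id):
    pred = pred_mask == class_id
    true = true_mask == class_id
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return float("nan")  # class absent from both masks: skip it
    return np.logical_and(pred, true).sum() / union

def mean_iou(pred_mask, true_mask, class_ids):
    return np.nanmean([iou(pred_mask, true_mask, c) for c in class_ids])
```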



Deep Learning
Dynamic vehicle-assignment for carpooling
Francesca Maria Mizzi | SUPERVISOR: Dr Josef Bajada
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
The constantly increasing number of cars on Maltese roads has given rise to the need for an alternative solution for persons whose requirements are not satisfied by conventional public transport such as buses. One possible approach would be to provide a cab carpooling service, which would offer the same comfort and convenience of private vehicles, while also reducing the overall environmental impact of having one person per car, possibly even eliminating the need to own a private car. The proposed system takes into account the start (pick-up) and end (drop-off) coordinates of each upcoming trip, the vehicles available, their position and the maximum time detour. The goal was to calculate the best route for each vehicle to fulfil all trips, while minimising some cost function. A dashboard with this information, using an in-built dynamic map, would then display the computed routes, to be used by the users managing the service. The developed software makes use of two algorithms: tabu search and ant colony optimisation (ACO). The program runs all the predetermined criteria (the coordinates, number of vehicles and detour time) through both algorithms, and presents the user with the best solution. The overall cost of a route would be determined by several criteria, including the distance travelled by the vehicle, the amount of time taken to carry out the route, and the amount of time a passenger would be left waiting. Another aspect of the system is its ability to add new trips dynamically. Each of the algorithms takes into account the existing routes, generating a new solution that would incorporate the new trip by either utilising an idle vehicle or including the trip in an existing route. Overall, the algorithms performed equally well, each one finding the optimal route for the coordinates within a matter of seconds. It was found that, in some cases where a new pair of coordinates was added later on, the algorithms tended to behave differently. The tabu search algorithm tended to attempt to add the new coordinates to the existing routes, whereas the ACO algorithm generally preferred creating a new route when there was a car available.
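The route cost being minimised can be illustrated as a weighted sum of the criteria listed above. The weights and figures below are assumptions for illustration, not the project's tuned values.

```python
# Sketch only: a weighted route-cost function for comparing candidate routes.
def route_cost(distance_km, duration_min, waiting_min_per_passenger,
               w_dist=1.0, w_time=0.5, w_wait=2.0):
    return (w_dist * distance_km
            + w_time * duration_min
            + w_wait * sum(waiting_min_per_passenger))

# Comparing two candidate routes serving the same trips:
a = ("route A", route_cost(12.0, 35, [4, 9]))
b = ("route B", route_cost(10.5, 42, [2, 3]))
print(min(a, b, key=lambda t: t[1]))
```

Both tabu search and ACO would evaluate candidate solutions with a function of this kind, keeping whichever assignment yields the lower total cost.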

Figure 1. The built-in map indicating the generated route



Deep Learning

Deep learning models for application in image restoration
Jacques Leon Patiniott | SUPERVISOR: Mr Tony Spiteri Staines | CO-SUPERVISOR: Dr Clyde Meli
COURSE: B.Sc. IT (Hons.) Software Development
Images and photographs are an invaluable source of knowledge and information for historians, record-keepers and researchers. Old images serve as testimony to a moment in time. Sadly, however, many photographs taken in the early 20th century are at risk of being lost, as physical photographs have a lifespan of between 75 and 100 years. Using classical image-restoration techniques along with machine learning algorithms, these important images could be restored by eliminating the various defects or noise present in old photographs. The issues that could be improved on and fixed through these algorithms range from the low resolution and comparatively lacking technology of early cameras, to deterioration over time, even when photographs are kept in a suitable environment. Noise, blurriness, faded colour and sepia effect are all defects that arise from the above-mentioned factors. Various researchers have been seeking to create an application that could solve the complex degradation problem

of old photographs. The current state-of-the-art methods use different algorithms, with some being more suitable than others for different photograph and image types. Hence, this study focused on comparing and contrasting the different methods available, while also seeking to identify the weak points of current methods and how those issues could be resolved. It is to be noted that the classification methods used to quantitatively measure how well these algorithms perform raise an important issue: different methods use different evaluation approaches, which increases the difficulty of comparing them for performance and quality. Some of the most common evaluation methods tend to measure the ratio of signal to noise, meaning that methods that opt to focus on optimising these metrics still do not necessarily correlate with higher (perceived) image quality.
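A common example of the signal-to-noise style of metric discussed above is peak signal-to-noise ratio (PSNR); as noted, a higher score does not always correspond to better perceived quality. A minimal sketch:

```python
# Sketch only: PSNR between a restored image and a reference image.
import numpy as np

def psnr(reference, restored, max_value=255.0):
    reference = np.asarray(reference, dtype=float)
    restored = np.asarray(restored, dtype=float)
    mse = np.mean((reference - restored) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_value ** 2 / mse)
```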

Figure 1. An example using Microsoft’s method, which greatly increases the image quality and removes discolouration effects

Figure 2. An example of the highly effective IRCNN deblurring solution



Deep Learning
Visualising AI: A game presenting neural networks
Paul Psaila | SUPERVISOR: Dr Vanessa Camilleri | CO-SUPERVISOR: Prof. Matthew Montebello
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Artificial Intelligence (AI) is becoming a more integral part of daily life, yet many people still have a relatively weak grasp of this technology. Machine learning (ML) makes up a large portion of AI technologies and, since ML often makes use of black-box models, this raises the difficulty for users to understand and trust the process. Hence, education on such important topics is growing in importance. Visualisation and interactivity through video games for learning could well provide a very good foundation for increasing familiarity with AI. In fact, some projects [1] have already looked into creating educational games to provide users with a stronger idea of how certain AI models work. This final-year project has expanded on this notion through the research and development of a visual, game-like application to present and teach the topic of neural networks. The application involves the implementation of a neural network (NN) that could interface with an endless runner game, and a user interface designed with the function of being explicable, thus allowing users to interact and experiment with the NN. By making use of this, users could collect data, train and optimise the NN to play the endless runner game, presenting the structure and setup of NNs. Evaluation was carried out through a user study, where participants were presented with the game, as well as surveys carried out both before and after the experiment. These surveys were adopted and adapted from previous studies [2] and sought to gauge how effective the application would be in illustrating these concepts. The results indicate that such an application would be reasonably effective in helping people to understand more complex concepts.

Figure 1. A neural network being trained in the game environment
Figure 2. Explainable hint, based on user inputs
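To give a flavour of what the game visualises, here is a minimal feed-forward network mapping two game observations to a jump decision. The feature names, layer sizes and weights are illustrative assumptions, not the application's actual inputs or trained parameters.

```python
# Sketch only: a tiny feed-forward policy for an endless runner agent.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output

def decide(obstacle_distance, obstacle_speed):
    x = np.array([obstacle_distance, obstacle_speed])
    h = np.tanh(x @ W1 + b1)                          # hidden activations
    jump_prob = 1 / (1 + np.exp(-(h @ W2 + b2)))      # sigmoid output
    return jump_prob[0] > 0.5                         # jump or not

print(decide(obstacle_distance=0.3, obstacle_speed=0.8))
```

In a visual, explicable interface of the kind described, the hidden activations and the output probability are exactly the quantities a player would watch change as the network trains.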

REFERENCES
[1] L. B. Fulton, J. Lee, Q. Wang, Z. Yuan, J. Hammer, and A. Perer, “Getting playful with explainable AI: Games with a purpose to improve human understanding of AI,” 03 2020.
[2] G. Petri, C. Gresse von Wangenheim, and A. Borgatto, “MEEGA+: A method for the evaluation of educational games for computing education,” 07 2018.



Deep Learning

Personal generative landscapes
Luke Pullicino | SUPERVISOR: Dr Josef Bajada | CO-SUPERVISOR: Dr Trevor Borg
COURSE: B.Sc. IT (Hons.) Artificial Intelligence
Artificial intelligence (AI) can be applied to a wide variety of fields. In this project, the focus was on creating a program revolving around the concept of dynamically generated landscapes, where each user would be able to create their own personalised landscape, based on their position and the gestures performed. This was achieved using AI networks, terrain-generation software and depth sensors. The camera utilised in the experiment offered both depth and colour feeds, which could be used in different ways. The colour feed was primarily used to perform pose estimation using the available AI networks. One of the AI networks was used to obtain an estimated position of different body parts. The information obtained from this network made it possible to find the user’s position in relation to the devised 3D world when combined with the depth information obtained from the depth feed of the camera.

The colour feed was simultaneously applied to a different AI network that could provide more accurate positioning of the user’s hands. The data obtained made it possible to perform gesture recognition by passing the data to a trained neural network model that could detect different gestures performed by the user. Each gesture was linked to an action; for example, if the user were to push down with an open palm, a small mountain would appear within the 3D world. The 3D world was managed by terrain-generation software that could be controlled using different Python APIs. The software uses algorithms such as Perlin noise, which allow for the creation of procedurally generated terrain. This, in turn, increases the likelihood of each landscape being unique, even if two different users were to perform the same gestures and movements.
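A small sketch of the Perlin-noise idea behind such procedurally generated terrain follows, using the third-party noise package. The grid size and octave count are illustrative, and this stands in for, rather than reproduces, the terrain-generation software used in the project.

```python
# Sketch only: a Perlin-noise heightmap for procedural terrain.
import numpy as np
from noise import pnoise2  # pip install noise

def heightmap(width=128, height=128, scale=0.05, octaves=4, seed=0):
    grid = np.zeros((height, width))
    for y in range(height):
        for x in range(width):
            grid[y, x] = pnoise2(x * scale, y * scale,
                                 octaves=octaves, base=seed)
    return grid  # values roughly in [-1, 1]; rescale to world heights

terrain = heightmap()
print(terrain.shape, terrain.min(), terrain.max())
```

Varying the seed, scale or local offsets per user is what makes each generated landscape likely to be unique.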

Figure 1. Example of randomly generated terrain



Deep Learning

Art generation through sound

Yran Riahi | SUPERVISOR: Dr Vanessa Camilleri COURSE: B.Sc. IT (Hons.) Artificial Intelligence

This work pursued the prospect of cross-modal generation in computational creativity. The method employed was that of developing a specific type of artificial intelligence (AI) model capable of generating many illustrations in the form of a moving visual, based on the music provided. This would allow the creation of unique and singular pieces of the audio-visual art form. The AI model developed in this project is based on a generative adversarial network (GAN). This model pits two networks against each other, with one network trying to generate data that could pass off as real, while the other network tries to discriminate whether that data is real or not. This is crucial for the AI model to be able to generate realistic content similar to ‒ or possibly better than ‒ what a real person could produce. Furthermore, this model moves away from conventional GAN models by having the ability to take sound

as an input, and then generate a moving art piece based on the elements of the sound provided. The model primarily synchronises pitch, volume and tempo with the image, such that these features would control all of the textures, shapes and objects, as well as the movement between frames. The two networks compete until convergence is reached, meaning that, at a certain point, the generative network would improve to the extent that the discriminative network would no longer be able to tell the difference between real and fake. If the model were to continue training past this point, the quality of the content could drop. The system developed was tested for its ability, veracity and ingenuity, with satisfactory results. The main issue was the time required to generate output, which could be attributed to the available computational power and the size of the training data.
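As a rough illustration of the adversarial dynamic described above, the following PyTorch sketch shows one training step of a generic GAN. The model sizes, data shapes and the absence of audio conditioning are simplifying assumptions; the project's actual architecture is not reproduced here.

```python
# A minimal sketch of the generator-vs-discriminator training step behind a
# GAN. Sizes and layers are illustrative, and the audio conditioning used by
# the project is omitted for brevity.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # illustrative sizes

G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator: learn to separate real data from generated data.
    fake = G(torch.randn(n, latent_dim)).detach()  # detach: update D only
    loss_d = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Generator: learn to fool the discriminator into answering "real".
    loss_g = bce(D(G(torch.randn(n, latent_dim))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```

In practice, training would stop around the convergence point described above, when the discriminator's predictions approach chance level.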

Figure 2. A freeze-frame of an audiovisual art piece generated by the system

Figure 1. Overall workflow of the proposed method




Deep Learning

AlMessage: An anonymised spam-protected smart messaging application

Christabelle Saliba | SUPERVISOR: Dr Conrad Attard COURSE: B.Sc. IT (Hons.) Software Development

As the number of cars increases [1], parking has become increasingly difficult in Malta [2]. As a result, drivers park in front of garages or obstruct other cars. This final-year project proposes a prototype of an anonymous communication web app offering: 1) SMS-spam filtering, and 2) anonymous communication with the owners of objects, using the parking problem as a case study. Moreover, the project explores the potential of the internet of things (IoT), mobile app design and SMS-spam filtering. The latter was deemed particularly relevant because, as with all other messaging applications, this application could be misused by spammers. Additionally, the application was subjected to a usability study to determine usefulness and satisfaction, and to collect user feedback. A rule-based and a machine learning technique were used to identify spam messages. The rule-based technique determined whether a message is spam based on whether it is a new conversation, along with checks against the whitelist, blacklist, block settings and the sender's location distance. When none of the rules provided any judgment, the message's content proceeded to the machine learning model, which produced the final classification. The vector space model was used in combination with the artificial

neural network model in the final classification. This method provided 98.74% accuracy, with 0.21% of real messages misclassified as spam. In practical terms, the car owner would create an item within the website to represent their own car. The item would offer three access methods, namely: a QR code, a website link, and an item code placed on the car. By utilising the QR code, the sender would only need to scan it to communicate with the owner. Through this application, the sender would be able to communicate with the owner without having access to any personal details. Should the sender not log in prior to sending the message, the conversation would end after a single message. The process for creating an item and sending a message using a QR code is outlined in the accompanying image, and a sketch of the two-stage spam filter follows below. To evaluate the application, 10 participants from different age groups and levels of technological competence were interviewed in person. Most participants completed each activity in less than a minute, which suggests ease of use. Furthermore, user satisfaction with the system and its usefulness were measured using the Post-Study System Usability Questionnaire (PSSUQ) [3], yielding an average satisfaction score of 1.86 and an average usefulness score of 2.03.
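The two-stage design can be summarised in a few lines of Python: cheap rules decide first, and only when no rule fires does the message reach the learned classifier. The helper names, toy data and scikit-learn components below are assumptions chosen for illustration, not the project's actual implementation.

```python
# A sketch of the two-stage filter: rules first, ML classifier as fallback.
# The whitelist/blacklist names and the toy training data are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

train_texts = ["win a free prize now", "are you still parked outside my garage?"]
train_labels = ["spam", "ham"]

# Vector space model + small neural network, mirroring the reported design.
model = make_pipeline(TfidfVectorizer(), MLPClassifier(max_iter=500))
model.fit(train_texts, train_labels)

def classify(message, sender, whitelist, blacklist):
    if sender in whitelist:
        return "ham"   # trusted senders bypass the model
    if sender in blacklist:
        return "spam"  # blocked senders are rejected outright
    # No rule gave a judgment: fall through to the learned classifier.
    return model.predict([message])[0]
```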

Figure 1. The ‘create item’ and ‘send message’ processes

REFERENCES
[1] “The number of cars on Malta’s roads passes 400,000”, Times of Malta, 2021. [Online]. Available: https://timesofmalta.com/articles/view/the-number-ofcars-on-maltas-roads-passes-400000.850391. [Accessed: 28-Apr-2022]
[2] “Editorial: Tackling the parking problem”, Times of Malta, 2021. [Online]. Available: https://timesofmalta.com/articles/view/tackling-the-parkingproblem.899639. [Accessed: 28-Apr-2022]
[3] “PSSUQ (Post-Study System Usability Questionnaire)”, Uiuxtrend.com. [Online]. Available: https://uiuxtrend.com/pssuq-post-study-system-usabilityquestionnaire/. [Accessed: 10-Dec-2021]



PEST – Personal Engaging Scheduler Task Manager: A mobile task-manager app designed for people with ADHD

Liam Curmi de Gray | SUPERVISOR: Dr Peter Xuereb COURSE: B.Sc. IT (Hons.) Software Development

Testing & Verification

Struggling with time management is a recurring issue for persons with attention deficit hyperactivity disorder (ADHD). This impairment, sometimes referred to as time blindness, hinders such individuals from planning their time accordingly for medium- and long-term projects in order to meet deadlines. While the rise of mobile applications has prompted many tech companies to design innovative time-management tools, these tend to cater for a general audience rather than for niche groups, such as persons with ADHD. In view of the above, this study set out to analyse the time-management apps that persons with ADHD rely on, and thus to understand the lacunae in the apps that they do not utilise. An important supporting objective was therefore to design a time-management app that would match the unique requirements of this specific demographic.

Figure 2. Use-case diagram of the PEST mobile app

Figure 1. Architectural diagram of the PEST mobile app

A workshop organised for persons with ADHD evaluated which time-management tools and apps they made use of. Considering this feedback, an app was developed using Flutter, which permitted a single codebase that could be compiled for both Android and iOS devices. A week-long experiment was coordinated, in which half the participants were asked to use the prototype app in their daily activities, while the other half served as a control group. Participants in both groups were asked to write down their goals for the day, and to note what they had managed to accomplish by the end of the day. The preliminary results indicated that, when compared to the control group, those who utilised the app increased their productivity in achieving their goals. Indeed, the app users showed a keen interest in the app and highlighted aspects not present in the other apps that they had been using previously. Hence, the conclusion reached was that it would be possible to integrate useful time-management features boosting the productivity of individuals with ADHD into a single app.



Investigating the use of machine learning for automated element location in test automation

Testing & Verification

Kelsey Debono | SUPERVISOR: Dr Mark Micallef COURSE: B.Sc. (Hons.) Computing Science

Modern web application development is constantly evolving. Moreover, the growing demand for online businesses and the increased complexity of web applications have made automated testing ever more important. Automated testing tools, such as Selenium, allow testers to automate testing processes across multiple browsers. In order to test the functionality of web elements, Selenium requires web element locators, such as id and name attributes and XPaths, which are used to select and retrieve elements from the Document Object Model (DOM) using a given query. Despite the numerous benefits that automated testing brings to organisations, certain challenges in the implementation and maintenance of automated test suites must also be duly considered [1], as changes to the application under test (AUT) require the automated tests to be updated accordingly, to ensure that they continue to provide accurate results. This process can be time-consuming and costly. This study focused on investigating the use of machine learning (ML) techniques, trained on a preprocessed data set of HTML elements, to identify particular web elements on e-commerce websites using multiple attributes of each element, so that an element could still be identified if some of its attributes were to change. Previous studies have successfully developed a framework to assist in automating web

application testing through ML [2]. The methodology followed three steps: data collection and data set construction; data preprocessing and feature extraction; and classification and integration of the developed model with Selenium. In the first part of the implementation, a list of e-commerce websites was retrieved from analytical tools, with web-scraping tools used to extract the list of websites and generate a data set of the following HTML elements: search fields, ‘add to cart’ buttons, and checkout buttons. The generated data set was then preprocessed, i.e., cleaned by tokenising, lemmatising and removing stop words. A support-vector machine (SVM) model using bag-of-words (BoW) vectors as features was then trained, as sketched below. Tests were performed to determine whether the trained model could predict a web element, and the performance of the classifier was evaluated using standard performance metrics, yielding satisfactory results. An application programming interface (API) was developed to demonstrate how the trained model could be used with Selenium to generate test cases that check the functionality of a web element. This could enhance web automation testing, as it would enable test engineers to use higher-level, domain-specific abstractions instead of identifying web elements through low-level locators, thus simplifying test creation. Moreover, the automated tests would not be affected by any changes to the locators in the AUT.
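The classification stage lends itself to a compact sketch. The following Python snippet trains a linear SVM on bag-of-words vectors built from tokenised element attributes; the toy data and the scikit-learn pipeline are illustrative assumptions rather than the study's actual data set or code.

```python
# A condensed sketch of the classification stage: tokenised HTML-element
# attributes become bag-of-words vectors and are fed to an SVM.
# The toy training samples below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Each training sample is the preprocessed text of one element's attributes.
elements = [
    "input search text q placeholder search",
    "button add cart btn add-to-cart",
    "button checkout proceed checkout",
]
labels = ["search_field", "add_to_cart", "checkout"]

clf = make_pipeline(CountVectorizer(), SVC(kernel="linear"))
clf.fit(elements, labels)

# Even if one attribute changes (e.g. a renamed id), the remaining tokens
# can still let the model identify the element.
print(clf.predict(["button basket add item add-to-basket"]))
```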

Figure 1. Architecture of the web element classifier

REFERENCES
[1] Berner, S., Weber, R. & Keller, R. K. Observations and lessons learned from automated testing.
[2] Duyen Phuc Nguyen, Stephane Maag. Codeless web testing using Selenium and machine learning. ICSOFT 2020: 15th International Conference on Software Technologies, Jul 2020, Online, France. pp. 51-60. doi: 10.5220/0009885400510060. Available at: https://hal.archives-ouvertes.fr/hal-02909787/document



Testing & Verification

A comparison of predictive and measured KPIs using a GIS tool

Kristian Fenech | SUPERVISOR: Prof. Inġ. Saviour Zammit COURSE: B.Sc. (Hons.) Computer Engineering

This project is related to the field of telecommunications. It revolves around an analysis of a number of key performance indicators (KPIs) related to telecommunications, with real-world data acquired from specialised equipment compared against a number of predictive models created in this project. The signal strength across a number of points was predicted using three main predictive models. MATLAB, a program used to model many engineering scenarios, was used to help generate these models. An analysis of the predicted values was conducted to establish which model performed best. A post-analysis model-fitting exercise was also conducted on different sets of points, so that each model would better ‘fit’ the real data. All the data that was collected and modelled was then displayed on a web-based GIS (geographic information system) portal. This GIS portal consisted of an interactive dashboard displaying all

the information gathered during this exercise. The dashboard presented all the predictive data pertaining to the individual predictive models, as well as the real-world data. This also provided insight into the models generated, and made it easy to compare the results produced. A web server was also set up and linked to a geographical database, where information related to the analysis conducted would be stored. Apart from displaying the collected data on a web portal, a statistical analysis was carried out on the outputs of the various predictive models, to determine which model performed better. Two statistical parameters that proved to be of particular interest were the mean square error, which measures how close the predicted data points are to the measured ones, and the Pearson correlation coefficient, which calculates the closeness between two related sets of data; a sketch of both calculations follows below.
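The two statistics mentioned above are straightforward to compute; the following Python sketch does so on invented sample values, as an illustration only.

```python
# A sketch of the statistical comparison between a predictive model's output
# and the measured KPI values; the arrays below are invented sample data.
import numpy as np

measured = np.array([-78.2, -81.5, -75.0, -88.3])   # e.g. signal strength, dBm
predicted = np.array([-76.9, -83.1, -74.2, -90.0])

# Mean square error: how far, on average, predictions fall from measurements.
mse = np.mean((predicted - measured) ** 2)

# Pearson correlation coefficient: how closely the two series move together.
r = np.corrcoef(measured, predicted)[0, 1]

print(f"MSE = {mse:.2f}, Pearson r = {r:.3f}")
```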

Figure 1. System diagram showing the various levels encompassing the system implementation: the bottom layer shows the application used for editing geographic data, with Layer 1 showing the web server and Layer 2 showing the web portal, where data would be represented

Figure 2. Diagram outlining the predictive model: a more detailed view on how the data is generated and then manipulated, before eventually being published on the web GIS or geoportal




Constructing and analysing knowledge maps via source code repository analysis

Testing & Verification

Jack Piscopo | SUPERVISOR: Dr Mark Micallef COURSE: B.Sc. (Hons.) Computing Science

The most valuable asset held by software engineering organisations is, arguably, the knowledge held by their employees. It is the creation, transfer and application of such organisational knowledge that gives these companies the competitive edge required to provide value to their customers. With high staff turnover in the ICT industry, companies need to protect knowledge assets ‒ or rather, they need to find a way to mitigate and address knowledge risk. Knowledge risk is generally defined as an ‘operational risk caused by a dependency on, loss of, unsuccessful intended or unintended transfer of knowledge assets, and results in a lack of, or non-exclusivity of, these assets’. One example of knowledge risk involves scenarios whereby a few members of staff hold a disproportionate amount of knowledge about specific ‘knowledge assets’ when compared to the larger staff complement. This is an issue (i.e., a risk) because, in the event of such people leaving the company, access to important knowledge and/or expertise would be lost. The motivation for this project was to help mitigate knowledge risk by analysing source code repositories, and subsequently building knowledge maps for an organisation or project. The underlying hypothesis is that one could deduce

’what’ and ’how much’ a person knows about a particular knowledge asset on the basis of the frequency and nature of their commits of code related to that asset. Following a literature review, the approach taken to test this hypothesis was to create a tool that would pull information from source code repositories (e.g., Git), build a mathematical representation of an organisation’s knowledge using graphs, and then apply graph theory to identify potential instances of knowledge risk. The tool analyses the commits on a project to build a profile for every knowledge worker. Knowledge assets are then extracted from both the internal and external packages used in a particular commit. The knowledge maps produced by the tool are represented as graph structures, with knowledge assets and knowledge workers as vertices and the relationships between them as weighted edges, in order to identify potential knowledge risk and mitigate it; a sketch of this construction follows below. An evaluation exercise was carried out with the help of a number of open-source repositories, to determine the extent to which the constructed knowledge map would represent the real situation.
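A minimal sketch of this construction is shown below: commits mined from a repository become weighted edges in a bipartite graph between developers and knowledge assets, and a simple traversal flags assets known by only one person. The toy data, the `networkx` representation and the single-holder risk rule are illustrative assumptions, not the tool's actual algorithm.

```python
# A sketch of a knowledge map as a weighted bipartite graph: developers on
# one side, knowledge assets (packages/modules) on the other, with edge
# weights derived from commit counts. The data here is invented.
import networkx as nx

commits = [
    ("alice", "payments", 42),   # (developer, asset touched, no. of commits)
    ("alice", "auth", 3),
    ("bob", "auth", 27),
]

G = nx.Graph()
for dev, asset, count in commits:
    G.add_node(dev, kind="worker")
    G.add_node(asset, kind="asset")
    G.add_edge(dev, asset, weight=count)

# An asset whose edges all lead to a single developer is a knowledge risk:
for asset in (n for n, d in G.nodes(data=True) if d["kind"] == "asset"):
    holders = list(G.neighbors(asset))
    if len(holders) == 1:
        print(f"risk: only {holders[0]} knows about '{asset}'")
```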

Figure 1. Example of a generated knowledge map



Testing & Verification

Program analysis: Towards the analysis of CPython bytecode

André Theuma | SUPERVISOR: Dr Neville Grech COURSE: B.Sc. (Hons.) Computing Science

Program analysis methods offer static, compile-time techniques for predicting approximations to the set of values or dynamic behaviours that arise during a program’s runtime. These methods generate useful observations and characteristics about the underlying program in an automated way. PATH (Python Analysis Tooling Helper) is a static analysis tool created in this project, which generates a standardised intermediary representation (IR) for given functions, allowing analysis metrics to be generated from the facts produced by the tool. The goal of this project was to create a framework that would generate facts from a function, in addition to an IR amenable to further analysis. The developed framework seeks to simplify the engineering complexity of fact analysis for future use. PATH disassembles CPython bytecode into a more straightforward representation, facilitating any further analyses. The final findings of the project indicated that performing analysis on the IR generated by PATH is indeed a simpler task than generating facts manually and conducting block analysis without such a framework. These results were considered satisfactory, as they fulfilled the objective of this project.
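Although PATH defines its own IR, the standard-library `dis` module gives a feel for the raw CPython bytecode that such a tool consumes. The function below is an arbitrary example chosen for illustration.

```python
# The `dis` module exposes the CPython bytecode that a tool like PATH
# disassembles before building a higher-level representation.
import dis

def clamp(x, lo, hi):
    return max(lo, min(x, hi))

# Pretty-print the bytecode: each row shows the offset, opcode name and
# argument that an analysis would consume as raw facts.
dis.dis(clamp)

# Programmatic access, closer to what an IR generator would iterate over:
for ins in dis.get_instructions(clamp):
    print(ins.offset, ins.opname, ins.argrepr)
```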


Figure 1. CPython code execution

Figure 2. PATH high-level overview



Sensation seeking and aesthetic preferences in the context of a supermarket e-commerce website

Software Engineering & Web Applications

Aurora Attard | SUPERVISOR: Dr Colin Layfield | CO-SUPERVISOR: Prof. Gordon Sammut COURSE: B.Sc. IT (Hons.) Software Development

Website aesthetics determine the image of a website, thus playing a significant role in forming visitors’ first impressions, a website’s trustworthiness and the associated brand identity. Nevertheless, although appealing aesthetics are indeed desirable, studies have shown that it is one’s personality that ultimately determines what one finds aesthetically pleasing in websites. This study explored how sensation seeking affects website aesthetic preferences, with sensation seeking being defined by Zuckerman as one’s predisposition to seek “varied, novel and complex experiences, and the willingness to take physical, social, legal and financial risks for the sake of such experience”. An online questionnaire was designed to gather data by assessing user perceptions of a number of website aesthetic properties in a supermarket e-commerce setting, while also gauging the sensation-seeking level of the 334 participants, 85% of whom were Maltese. These properties included: navigation styles, colour diversity, and the artistic styles used in images. The results indicate that the effect of sensation seeking on website aesthetic preferences tends to be

significantly influenced by age group and gender. One finding is that younger users (18 to 34 years) tended to prefer a less conventional side menu, while older users (35+) favoured a conventional mega-menu for navigation purposes. Findings also suggest that attitudes towards the use of abstract artwork in websites were positively related to sensation seeking among male participants (r = .180) but negatively related among females within the 18-24 years age group (r = -.189). A sample-wide negative correlation was identified between sensation seeking and attitudes towards websites with very low colour diversity (r = -.098) ‒ a correlation that was stronger among females (r = -.149). Furthermore, it was established that the BSSS (Brief Sensation Seeking Scale) remained reliable when administered to a Maltese population (α = .776). This research is believed to be the first study to investigate the effect of sensation seeking on navigation-menu preferences. It has also highlighted the significance of age and gender when exploring the extent to which one’s personality influences website perception.

Figure 1. Designs assessing preferences on artwork type

Figure 2. Designs assessing colourfulness preferences



Software Engineering & Web Applications

JavaScript framework for Actor-based programming

Andrew Buhagiar | SUPERVISOR: Prof. Kevin Vella COURSE: B.Sc. (Hons.) Computing Science

Actors are concurrent, isolated units of computation, which process messages using their predefined behaviour. This study explores the suitability of the actor model for bringing concurrency and parallelism to JavaScript. The proposed implementation takes the form of two APIs, for the Node.js and browser environments respectively, allowing developers to reason intuitively about engineering JavaScript programs through the spawning of, and sending of messages to, actors. Isolated actors could be safely spawned on remote devices over a network, and could utilise multiple cores on a local processor. This allows for distributed and parallel computation, which has the potential to shorten the time taken to execute computationally intensive tasks. A WebSocket server was used to connect a finite number of Node.js instances and browsers hosting actors over the network. Faster communication links were explored using inter-process communication when hosting multiple processes on a local device. The framework abstracted the adaptive use of different communication links, providing location transparency for remote actors.

Figure 1. Network of devices running processes with actors

Figure 2. Speed-up introduced when running multiple workers

Benchmarks were set up to analyse the framework’s performance when used on a single instance using Node.js or a browser, as well as the speed-up introduced when utilising additional local or distributed cores working on the same task. The performance of the JavaScript framework developed through this project was evaluated against existing JVM and JavaScript actor-framework implementations. The relative performance of the communication links used when distributing actors was also explored. Nowadays, browsers are found on devices such as smart fridges and TVs, both of which could run JavaScript-defined behaviour. The increasingly popular Node.js environment for server-side applications would be able to host actors that could communicate with actors hosted on browsers, enabling uniform and flexible scaling of applications. This study also included an assessment of the limitations of the framework, as well as its untapped potential for freezing and migrating actors across the web.




Scalable procedural crowd simulation in large virtual environments

Software Engineering & Web Applications

Kyle Camilleri | SUPERVISOR: Dr Sandro Spina | CO-SUPERVISOR: Dr Keith Bugeja COURSE: B.Sc. (Hons.) Computing Science

This study explored the use of entity component systems (ECS) in the design of scalable crowd simulators. The procedural aspect of the envisaged crowd simulator is such that the system would be able to intelligently reduce the load, density and quality of the active crowd to suit the underlying hardware, while still generating believable agent behaviour. Crowd simulation for virtual environments poses a number of interesting challenges. Particularly in the case of rich open-world games, which immerse the player in large sprawling cities, the actualisation of agents incurs costs in terms of memory and computation ‒ not only to model their respective behaviours, but also to store, animate and visualise the assets tied to those behaviours. On memory- and/or performance-constrained devices, this could result in a reduction in crowd variety to the extent of breaking user immersion. The accompanying image shows how a simple ECS would function. The Person and Vehicle entities both have a Translation component, which is used by the Move system. The Vehicle entity also has a DriveTo component. The MoveVehicle system makes use of both the Translation component and the DriveTo component, meaning that only the Vehicle entity would be affected by this system. This project also explores the benefits of using data-oriented programming (DOP) and ECS over object-oriented

programming (OOP). Such benefits include the performance advantage of scaling with entities rather than with objects, and the principle of composition over inheritance, which solves the hierarchy issues caused by OOP. In this work, the Unity ECS implementation was used through Unity’s Data-Oriented Technology Stack (DOTS), which facilitates the writing of thread-safe code through its built-in checks and also enhances the performance of the system significantly. A case study was developed to tie together crowd simulation and the hierarchy issues of OOP, through the interaction of a crowd of people and a fleet of cars. The people could be equipped with a gun to damage the cars or harm other people; equipping a car with a gun would be as simple as adding a gun component to it. The people could also carry swords with different elemental powers, and combining powers would be as simple as adding the respective elemental components. This example was then built upon to create more complexity and to show further how using DOP/ECS tends to be more efficient than using OOP. Lastly, given the nature of DOP/ECS, the developed code is very modular and expandable. Moreover, since the systems and components are relatively isolated and independent (decoupled), the code allows new features to be added easily; a minimal sketch of the ECS pattern follows below.
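The Person/Vehicle example from Figure 1 can be sketched in a few lines of Python (rather than Unity DOTS, which the project actually used). Components are plain data keyed by entity id, and each system iterates only over the entities holding the components it requires.

```python
# A minimal sketch of the ECS idea from Figure 1: entities are bare ids,
# components are plain data keyed by entity, and systems iterate only over
# entities that carry the components they need. Names are illustrative.
translations = {}   # entity id -> (x, y) Translation component
drive_targets = {}  # entity id -> (x, y) DriveTo component

person, vehicle = 1, 2
translations[person] = (0.0, 0.0)
translations[vehicle] = (5.0, 5.0)
drive_targets[vehicle] = (10.0, 5.0)

def move_system(dx, dy):
    # Runs over every entity with a Translation component (Person and Vehicle).
    for e, (x, y) in translations.items():
        translations[e] = (x + dx, y + dy)

def move_vehicle_system():
    # Runs only over entities with BOTH Translation and DriveTo (Vehicle only).
    for e, (tx, ty) in drive_targets.items():
        x, y = translations[e]
        translations[e] = (x + 0.1 * (tx - x), y + 0.1 * (ty - y))

move_system(1.0, 0.0)
move_vehicle_system()
print(translations)
```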

Figure 1. A simple ECS example



Software Engineering & Web Applications

Cybersecurity for SMEs

Matteo Caruana Bond | SUPERVISOR: Dr Clyde Meli | CO-SUPERVISOR: Mr Tony Spiteri Staines COURSE: B.Sc. IT (Hons.) Computing and Business

Cyberattacks are an ever-growing threat, with small-to-medium-sized enterprises (SMEs) being particularly vulnerable to attack. While various factors influence this vulnerability, there are also myriad issues in mitigating it; a number of these factors and issues are discussed in this work. One major issue transpiring from the research process for this project was that cybersecurity training is often perceived as tedious. This tends to cause a lack of engagement from employees during training, with the result that the training becomes ineffective. This presents a substantial threat to SMEs, as cyberattacks would be more likely to have a devastating effect in this scenario. This project set out to address the aforementioned issue by exploring possible ways of making cybersecurity training more engaging, and thus more effective, through information technology. An Android application was developed that would

both educate and quiz the user on cybersecurity, with the main topic being social engineering ‒ a type of cyberattack that focuses on manipulating users into giving away sensitive information. A combination of videos and descriptive text was employed to educate trainees on the topic, while the quiz segment tested them on the information acquired. This project seeks to offer a novel learning approach, through an application designed to be used in 20-minute bursts, rather than over a number of hours, as with other training courses. The quiz functionality is an attempt at gamifying the learning process, employing the principles of active recall with the goal of helping trainees retain more of the information covered. The application has been published on the Google Play Store under the name ‘Cybersecurity Quiz App’, which made it possible for employees working in SMEs to review it. The effectiveness of the application was assessed with reference to the feedback gathered from the participating SME employees.

Figure 1. A screenshot of the learning page, which features the video being played at the top and a description of social engineering at the bottom

Figure 2. A screenshot of a quiz page, which is testing the user on what was learned in the learning segment




The effect of WCAG conformance levels on perceived usability – a study with non-disabled users

Software Engineering & Web Applications

Eric Curmi | SUPERVISOR: Dr Chris Porter COURSE: B.Sc. IT (Hons.) Computing and Business

Digital accessibility has become a very important consideration for both private and public entities. This is not only because legislation has been put in place to regulate the development of technology per se [1], or because it has a direct financial benefit for businesses [2], but also because there is a moral obligation to build inclusive technology that can be easily accessed by everyone. The Web Content Accessibility Guidelines (WCAG) [3], which are the reference point for most laws and regulations in this area (such as the EU Web Accessibility Directive), consist of over 70 success criteria for web accessibility across three levels of conformance: level A, AA, or AAA. Digital products may conform to any of these levels, each requiring an additional level of effort and more rigorous tests. Previous studies on the subject [4]-[7] indicated that a higher level of conformance would result in higher levels of perceived usability. For the purpose of this study, two functionally equivalent e-commerce websites were used in a controlled study. The first version did not adhere to any of the accessibility guidelines, while the second version complied with WCAG 2.1 at level AA, which is the level of conformance typically recommended in national and international legislation, including the EU Web Accessibility Directive [8]. The study was split into three parts. The first part set out to understand the participants’ familiarity with e-commerce websites. In the second part, participants were asked to perform a series of tasks on one of the websites (non-conformant or conformant). This study was conducted using a between-subjects approach: participants were split into two groups, namely a control group and an experimental group. Participants in the control group were asked to carry out a series of tasks on the non-conformant version of the

Figure 1. The four principles of accessibility in WCAG 2.1

e-commerce site, while the experimental group was asked to carry out the same set of tasks on the version conforming to WCAG 2.1 Level AA. Data points such as time-on-task and task-completion rates were recorded, and participants were also asked to rate the perceived workload for each task using the Single Ease Question. Finally, after completing all assigned tasks, participants were asked a series of questions regarding their experience, using standardised tools such as the System Usability Scale, along with a semi-structured interview aimed at shedding further light on their experience with the assigned site.

REFERENCES
[1] Web Accessibility Initiative (WAI), “Web Accessibility Laws & Policies”, 2021. [Online]. Available: https://www.w3.org/WAI/policies/. [Accessed: 30-Jun-2021]
[2] Web Accessibility Initiative (WAI), “The Business Case for Digital Accessibility”, 2021. [Online]. Available: https://www.w3.org/WAI/businesscase/#accessibility-is-good-for-business. [Accessed: June 2021]
[3] Web Accessibility Initiative (WAI), “WCAG 2 Overview”. [Online]. Available: https://www.w3.org/WAI/standards-guidelines/wcag/. [Accessed: June 2021]
[4] S. Schmutz, A. Sonderegger and J. Sauer, “Implementing Recommendations From Web Accessibility Guidelines: A Comparative Study of Nondisabled Users and Users With Visual Impairments”, Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 59, no. 6, pp. 956-972, 2017. doi: 10.1177/0018720817708397
[5] S. Schmutz, A. Sonderegger and J. Sauer, “Implementing Recommendations From Web Accessibility Guidelines: Would They Also Provide Benefits to Nondisabled Users,” Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 58, no. 4, pp. 611-629, 2016.
[6] A. Aizpurua, S. Harper and M. Vigo, “Exploring the Relationship between Web Accessibility,” International Journal of Human-Computer Studies, vol. 91, pp. 13-23, 2016.
[7] S. Schmutz, A. Sonderegger and J. Sauer, “Effects of Accessible Website Design on Nondisabled Users: Age and Device as,” Ergonomics, vol. 61, no. 5, pp. 697-709, 2017.
[8] Web Accessibility Initiative (WAI), “Web Accessibility Laws & Policies”. [Online]. Available: https://www.w3.org/WAI/policies/. [Accessed: June 2021]



Software Engineering & Web Applications

Autonomous robot path-planning and obstacle avoidance in a dynamic environment

Sean Farrugia | SUPERVISOR: Dr Ingrid Vella COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Robots have come a long way since they were first introduced, evolving from machinery merely repeating the same action in factories to human-like bodies, advanced driverless cars and much more. This progress has largely been a direct result of the continuous development of artificial intelligence (AI). An autonomous mobile robot can traverse an environment without the need for human interference. This is achieved by using path-planning and obstacle-avoidance techniques. So-called traditional techniques include artificial potential fields, cellular decomposition and vector field histograms. However, the last twenty years have witnessed more cutting-edge approaches using AI techniques, including A*, fuzzy logic, rapidly-exploring random trees and ant colony optimisation. In this project, two different heuristic algorithms were implemented in order to determine which would be the more suitable path-planning and obstacle-avoidance technique for dynamic environments. The first algorithm applied was a search-based algorithm referred

to as Hybrid A* [1]. Search-based algorithms operate by first converting the given environment into a grid. Hybrid A* first uses the same technique as the A* algorithm to find a path using its knowledge of the environment, and then simplifies the path to respect the robot’s movement constraints; a compact sketch of the underlying grid search follows below. The second algorithm was a sampling-based algorithm referred to as rapidly-exploring random tree* (RRT*) [2]. This algorithm creates a tree by randomly generating nodes around the free space of the environment until it reaches the goal. The two algorithms were tested in three different environments: a) without obstacles, b) with static obstacles, and c) with dynamic obstacles. In all tests, both algorithms successfully guided the robot around the static and dynamic obstacles without any collision. Moreover, it was observed that the Hybrid A* algorithm always managed to find the shorter path, while the rapidly-exploring random tree* algorithm always required less average RAM and CPU usage in the static tests performed.
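For reference, the sketch below shows a plain grid-based A* search, the foundation that Hybrid A* builds on. The Manhattan heuristic, the four-connected grid and the toy map are simplifying assumptions, and the sketch omits the vehicle-motion constraints that distinguish Hybrid A*.

```python
# A compact grid-based A* sketch. grid: 0 = free cell, 1 = obstacle.
import heapq

def astar(grid, start, goal):
    """Return a list of cells from start to goal, or None if unreachable."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible heuristic
    frontier = [(h(start), 0, start, [start])]  # (f-score, cost, cell, path)
    seen = set()
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```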

Figure 1. Path produced by the Hybrid A* algorithm

Figure 2. Path produced by the rapidly-exploring random tree* algorithm

REFERENCES
[1] D. Dolgov, S. Thrun, M. Montemerlo, and J. Diebel, “Path Planning for Autonomous Vehicles in Unknown Semi-structured Environments,” The International Journal of Robotics Research, vol. 29, no. 5, pp. 485-501, Apr. 2010.
[2] I. Noreen, A. Khan, and Z. Habib, “Optimal Path Planning using RRT* based Approaches: A Survey and Future Directions,” International Journal of Advanced Computer Science and Applications, vol. 7, Nov. 2016.




An educational app to help improve social and communication skills in autistic children

Software Engineering & Web Applications

Elena Fomiceva | SUPERVISOR: Dr Peter Xuereb COURSE: B.Sc. IT (Hons.) Software Development

Autism spectrum disorder (ASD) is a complex developmental disability that manifests during the early years of life through qualitative difficulties in verbal and non-verbal communication, social interactions, and a tendency towards stereotypical behaviour. Autistic children differ from one another in terms of their intellectual and functional abilities. However, all of them experience, to a certain extent, social difficulties and emotional impairments, which are quite challenging to overcome without professional support and assistance. While the majority of children acquire social-interaction skills and patterns naturally from their environment, this is not the case for children with ASD. The use of computer technologies ‒ and educational applications in particular ‒ is considered to have a beneficial influence on the development of social and communication skills in autistic children. Such technology helps them gain valuable communicative experience and useful behavioural patterns within a safe and emotionally comfortable environment. In view of the above, the development challenge of this project was to create an educational application that could

help autistic children develop essential social and communication skills. Extensive background research and consultations with domain experts identified the need for an application that would serve as an interactive support for manual Picture Exchange Communication System (PECS) training, while also acting as a bridge to facilitate the child’s communication with family, tutors and peers. The direct involvement of professional educators helped in carefully planning and designing the application. Subsequently, the app was implemented using React Native, which allowed the creation of a dynamic, intuitive and attractive user interface aimed at motivating and engaging a child in practising communication skills. The evaluation of this project focused on whether a mobile-based educational application could motivate and assist children with ASD in improving their social and communication skills. Professionals working with these children provided positive feedback, highlighting that the application was engaging and effective in assisting the children towards succeeding in their communication training programmes.

Figure 1. Vocabulary user interface displaying the ‘Domestic animals’ category

Figure 2. Use-case diagram



Software Engineering & Web Applications

Steganography: A comparison of different algorithms

André Muscat | SUPERVISOR: Mr Tony Spiteri Staines | CO-SUPERVISOR: Dr Peter Xuereb COURSE: B.Sc. IT (Hons.) Software Development

The purpose of this research was to compare the different methods that could be used for still-image steganography, and subsequently to create an algorithm implementing the best among these methods. Still-image steganography is the practice of hiding basic text within a digital still image by concealing the text inside the pixels of the image. The image could then be decrypted to reveal the message to another user. The function of this software is essentially to send important information discreetly to another user, without alerting third parties who may be trying to gather private data through illicit means. The usual method for still-image steganography involves editing the binary value of a number of pixels, depending on the message length, in such a way that the pixel colour changes only very slightly ‒ so slightly that it cannot be detected by the human eye. For this project, four steganographic methods were selected, namely: least significant bit (LSB); pixel value differencing; the discrete wavelet transform (DWT) technique; and random pixel embedding. These four methods were selected primarily for their popularity, availability and level of security. The results showed that none of the methods changed the image in a way that could be perceived by the human eye. Moreover, since the methods achieve this in different ways, it would be more difficult to detect the type of method being used. This research adopted the LSB steganography algorithm in order to gain a better understanding of how a typical steganography software would behave. Furthermore, this particular method was chosen because it is a relatively simple, yet very effective, method that is widely used both commercially and personally. The chosen method works by converting the hidden text and password into a binary string of 1s and 0s. It then converts every pixel inside the cover image into a decimal figure, according to its RGB value, and then into binary. Once both of these binary conversions are complete, the algorithm proceeds to replace the furthest-right bit (the LSB) in the cover image with one binary digit from the hidden text-and-password binary sequence. This is done sequentially for the entire length of the binary string of the hidden text. The furthest-right binary digit is chosen because changing it alters a single RGB value in a pixel to an almost negligible degree (usually by ±1) or makes no change at all. As a result, any change would be so small that it would be very difficult to detect with the human eye, and would require specific steganalysis software. The decryption of the image for the purpose of obtaining the data works in a similar way: the user enters the password into the software, which is then converted into binary and compared with the binary password hidden inside the image. Should it be a match, the rest of the binary string could be decrypted, and the hidden text would be output and shown to the user. A stripped-down sketch of LSB embedding and extraction follows below.
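The embedding and extraction described above can be sketched as follows using Pillow; the NUL-byte terminator and the omission of password handling are simplifying assumptions, and the message must fit within the available colour channels.

```python
# A stripped-down sketch of LSB steganography: each bit of the message
# replaces the least significant bit of one colour channel value.
from PIL import Image

def embed(cover_path, out_path, text):
    img = Image.open(cover_path).convert("RGB")
    bits = "".join(f"{b:08b}" for b in text.encode()) + "0" * 8  # NUL terminator
    flat = [v for px in img.getdata() for v in px]   # flatten RGB channels
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | int(bit)          # overwrite the LSB only
    img.putdata([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    img.save(out_path, "PNG")                        # lossless, preserves LSBs

def extract(stego_path):
    flat = [v for px in Image.open(stego_path).convert("RGB").getdata() for v in px]
    data = bytearray()
    for i in range(0, len(flat), 8):
        byte = int("".join(str(v & 1) for v in flat[i:i + 8]), 2)
        if byte == 0:                                # NUL terminator reached
            break
        data.append(byte)
    return data.decode()
```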

Figure 1. The least significant bit (LSB) method




A comparative study of concurrent queueing algorithms and their performance

Software Engineering & Web Applications

Luca Muscat | SUPERVISOR: Prof. Kevin Vella COURSE: B.Sc. (Hons.) Computing Science

Queues are among the most ubiquitous data structures in the field of computer science. With the advent of multiprocessor programming, concurrent queues are at the core of many concurrent and distributed algorithms. This work is an experimental study covering a number of concurrent queueing algorithms, with the aim of comparing their performance and replicating the results of other researchers. The production of a benchmarking framework for concurrent queues also forms part of this study’s contributions. Particular focus was placed on evaluating the ‘Baskets Queue’ of Hoffman et al. [1] and Valois’ queue [2], together with Michael and Scott’s lock-free and double-locked concurrent queues [3]. With the exception of Michael and Scott’s double-locked concurrent queue, each queue possesses the non-blocking progress condition, meaning that at least one thread is guaranteed to make progress in a bounded number of steps. This progress condition makes queues more resistant to random delays, unlike their blocking counterparts, where a delay in one thread could cause delays elsewhere.

Two benchmarks were used in the evaluation phase. Firstly, the pairwise enqueue-dequeue benchmark was applied, which consists of a number of enqueue-dequeue pairs, evenly split across each process. Secondly, the 50% enqueue benchmark was used, where each process would have a 50% chance of executing an enqueue; the choice of operation was decided by a uniformly distributed random-number generator. This randomness offers a more varied interleaving of operations, exercising a wider combination of code paths. The number of processes concurrently interacting with the queue and the length of the artificial delay between operations were used as control variables. Using readings from the initial stages of this study, the two provided plots were generated, showing that although blocking queues could compete with non-blocking queues at lower thread counts, the blocking queues were several orders of magnitude slower than their non-blocking counterparts.

Figure 1. Pairwise enqueue-dequeue benchmark plot

Figure 2. 50% enqueue benchmark plot

REFERENCES
[1] Hoffman, M., Shalev, O., Shavit, N. (2007). The Baskets Queue. In: Tovar, E., Tsigas, P., Fouchal, H. (eds) Principles of Distributed Systems. OPODIS 2007. Lecture Notes in Computer Science, vol 4878. Springer, Berlin.
[2] John D. Valois (1995). Lock-Free Data Structures. PhD thesis, Rochester Institute of Technology.
[3] Maged M. Michael and Michael L. Scott (May 1996). Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms. In Proceedings of the 15th ACM Symposium on Principles of Distributed Computing, pp. 267-275.



Software Engineering & Web Applications

DashView: A project-status visualisation tool for software development managers

Nina Maria Musumeci | SUPERVISOR: Prof. Ernest Cachia COURSE: B.Sc. IT (Hons.) Software Development

This work sought to visualise the project-management activity underpinning the development of software, in an attempt to demonstrate the efficacy of visually describing the status and progress of software development in promoting higher-quality management. Based on supporting research, the approach chosen was to create a visualisation solution in the form of a dashboard that embodies the status of any project in development through one unified view [1] and location, which a manager could access and use when needed ‒ as opposed to querying different data sources [2]. The proposed dashboard contains graphs that change according to the user’s needs, along with other forms of indicators.

The dashboard also has the ability to visualise previous states of a software project being undertaken, providing a level of traceability and customisability that further enhances its usability and applicability. The proposed dashboard should support and facilitate the daily routines of a software project manager, including activities such as report compilation and data comparison. Additionally, the dashboard can be accessed through an online portal, adding a level of access flexibility. React, Firebase Hosting and Firebase Firestore were the technologies chosen for the development of the proposed visual solution.

Figure 1. Screenshot of dashboard, displaying all data related to project progress

Figure 2. The structure used for the database created for this artefact

REFERENCES
[1] H. Kerzner, Project management metrics, KPIs, and dashboards: a guide to measuring and monitoring project performance. Hoboken, New Jersey: John Wiley & Sons, Inc, 2017.
[2] R. W. Selby, “Analytics-Driven Dashboards Enable Leading Indicators for Requirements and Designs of Large-Scale Systems,” IEEE Software, vol. 26, no. 1, pp. 41-49, Jan. 2009. doi: 10.1109/ms.2009.4




A comparison between text-based and graphics-based methods in object-oriented code comprehension

Software Engineering & Web Applications

Kristina Pace | SUPERVISOR: Dr Chris Porter | CO-SUPERVISOR: Dr Mark Micallef COURSE: B.Sc. IT (Hons.) Software Development

Code comprehension is one of the central activities in software development and tends to be a very time-consuming task. Code-comprehension approaches can be split into two main categories: (1) text-based (e.g., using an integrated development environment, or IDE) and (2) graphics-based (e.g., using a 3D representation). The aim of this study was to shed more light on the effectiveness of these two approaches in the context of object-oriented programs. The first step was to select a tool for each approach. Hence, a rigorous review of academic and grey literature was conducted to classify existing tools and select one for each approach against specific selection criteria (e.g., citation in the literature and support). For the graphics-based approach CodeMetropolis was selected, while the IntelliJ IDE was chosen for the text-based approach. CodeMetropolis is a tool that represents object-oriented code using a city metaphor, mapping code structures and code metrics onto a three-dimensional city.

A between-subjects study design was adopted, where participants were randomly assigned to either the text-based approach (using IntelliJ) or the graphics-based approach (using CodeMetropolis). Both approaches used the same object-oriented codebase, and participants in both groups were required to answer a series of questions with a view to gauging their level of code comprehension (functionality, coupling, cohesion, etc.). While carrying out the code-comprehension tasks, a number of data points were recorded, including the time taken to complete tasks, the accuracy of task completion, and the participant’s thought process (using concurrent think-aloud), along with overall perceptions of the tool. At the time of writing, the data indicated that no single method proved to be consistently and significantly better than the other. With CodeMetropolis, participants managed to better comprehend and highlight system metrics, such as coupling levels, while with IntelliJ, participants obtained a better understanding of system dependencies and the logic within the code.

Figure 1. The CodeMetropolis tool

Figure 2. The IntelliJ tool



Software Engineering & Web Applications

Investigating issues with computing and interpreting the truck factor metric

Jolene Sultana | SUPERVISOR: Dr Mark Micallef COURSE: B.Sc. (Hons.) Computing Science

The knowledge held by a company’s employees is a mission-critical asset in software engineering organisations. It is the creation, transfer and application of such organisational knowledge that provides these companies with the competitive edge required to deliver value to their customers. This asset is hard to manage and maintain, due to high labour-turnover rates, making it essential for companies to protect it as best as possible. The software engineering community is familiar with the so-called truck factor metric, defined as the minimal number of developers that would have to be, figuratively speaking, hit by a truck ‒ or leave ‒ before the project encounters difficulties. Of course, the notion of literally being hit by a truck is an extreme one, but more realistic threats, such as people leaving the company, would have similar repercussions. Viewed another way, this metric reveals the project’s concentration of knowledge and its key developers.

Figure 1. A sample truck factor method output, which shows the number of key developers found, together with their respective developer names, number of authored files and total percentage of files

Many algorithms have been proposed to calculate this metric by extracting maintenance-activity data from version control systems such as Git. Despite popular understanding of the metric’s notions, there are still challenges with its calculation and interpretation. Hence, this study proposes an implementation of an existing technique for calculating the metric, together with improvements and different thresholds. These improvements consider the recency of work done by developers, as well as line-level edits (as opposed to file-level edits) when calculating the truck factor for a project; a sketch of a basic file-level calculation follows below. To explore the validity of the different ways in which the truck factor could be calculated, the research involved an industry partner, who provided access to a Git repository. This repository was analysed using the three different calculation methods, each time drawing conclusions regarding possible knowledge risks for the project. These conclusions were then presented to the industry partner for feedback on their perceived validity.
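For illustration, the sketch below computes a basic file-level truck factor in the spirit described above: a developer "authors" a file if they contributed at least half of its changes, and developers are greedily removed until more than half of the files are orphaned. The 50% thresholds and the toy data are assumptions, not the study's calibrated settings.

```python
# A sketch of a file-level truck factor calculation from (developer, file)
# commit pairs, e.g. mined from `git log --name-only`. Thresholds illustrative.
from collections import Counter, defaultdict

commits = [("alice", "core.py"), ("alice", "core.py"), ("bob", "ui.py"),
           ("alice", "api.py"), ("carol", "ui.py"), ("bob", "ui.py")]

changes = defaultdict(Counter)
for dev, path in commits:
    changes[path][dev] += 1

# Degree-of-authorship: who "owns" each file under the 50% threshold.
authors = {f: {d for d, n in c.items() if n / sum(c.values()) >= 0.5}
           for f, c in changes.items()}

def truck_factor(authors):
    remaining = dict(authors)
    removed = 0
    while remaining:
        # Greedily remove the developer who authors the most remaining files.
        top = Counter(d for devs in remaining.values() for d in devs).most_common(1)[0][0]
        remaining = {f: devs for f, devs in remaining.items() if top not in devs}
        removed += 1
        if len(remaining) <= len(authors) / 2:   # >50% of files orphaned
            return removed
    return removed

print(truck_factor(authors))  # 1: removing alice orphans most of the files
```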

Figure 2. A bar chart showing different truck factor values with the 3 different calculation methods adopted




Recommendations to workplace users when sharing knowledge

Software Engineering & Web Applications

Jonathan Vella | SUPERVISOR: Dr Conrad Attard COURSE: B.Sc. IT (Hons.) Software Development

E-learning systems provide a wide range of educational resources through which users can enhance their learning. However, such systems tend to burden users with choice overload when vast amounts of material are presented. Recommender systems enhance the user’s learning experience by providing a select number of items that are more likely to be of interest to the user. Therefore, users would be less likely to feel overwhelmed, and would find it easier to continue learning when presented with learning objects catering for their interests. Three of the most common approaches to recommending items are content-based filtering, collaborative filtering and hybrid filtering. This study focused on the collaborative filtering approach, where learning objects are recommended to a user according to the preferences of similar users. A data set was created by web-scraping the MERLOT repository, an online e-learning system that provides

learning materials of various types, such as tutorials, quizzes and presentations. In this way, a data set containing users, items and ratings was created and passed as input to the algorithms used in the experimentation phase. This work adopted two techniques, namely: k-nearest neighbours (k-NN) and matrix factorisation (MF). The former finds the users most similar to the target user through mathematical functions called similarity metrics, such as cosine similarity, while the latter computes matrix multiplications to find the relationships between users and items. For a given user and item, both techniques can predict the rating of the user for that specific item; a sketch of the k-NN variant follows below. The experiments applied three variations of the k-NN technique and two variations of the MF technique. For each technique, a set of different parameters was used to test the respective variations. One of the MF variations obtained the best results.
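The k-NN variant can be illustrated with a small sketch: the target user's missing rating is predicted as a similarity-weighted average of the ratings given by the most similar users. The toy rating matrix and parameter choices below are invented for illustration, not the study's actual data or configuration.

```python
# A sketch of user-based collaborative filtering with cosine similarity.
import numpy as np

# Rows = users, columns = learning objects; 0 means "not rated".
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [1, 0, 4, 4]], dtype=float)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def predict(user, item, k=2):
    # Rank the other users (who rated the item) by similarity to the target.
    sims = [(cosine(R[user], R[other]), other)
            for other in range(len(R)) if other != user and R[other, item] > 0]
    top = sorted(sims, reverse=True)[:k]
    # Similarity-weighted average of the neighbours' ratings for the item.
    num = sum(s * R[o, item] for s, o in top)
    den = sum(s for s, o in top) + 1e-9
    return num / den

print(predict(user=1, item=1))  # predicted rating of user 1 for item 1
```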

Figure 2. Two of the many learning objects from the MERLOT online repository

Figure 1. An overview of the collaborative filtering approach



Software Engineering & Web Applications

Gamification of specific aspects of software project management to train students/developers

Julian Zammit | SUPERVISOR: Prof. Ernest Cachia COURSE: B.Sc. IT (Hons.) Software Development

The motivation behind this project was to demonstrate that a gamification approach to training software-project-management skills would be both possible and effective in preparing software developers for a managerial role. With this in mind, a scenario representing software projects was implemented, so as to convey the learnings from the game scenario to the actual job. Several aspects of software project management were implemented through game features that make the training more attractive, as shown in [1]-[3], and that enhance the assimilation of knowledge. In order for the trainee’s character to be portrayed within the game, the trainee is required to choose from a list of options representing their specific traits. The trainee is also required to manage the distribution of all the resources among the appropriate teams, including the developers within each team.

The main part of the game takes place when development commences. As in the real world, issues arise during the development process, depending on the decisions taken. The trainee needs to address these issues by reallocating resources as they deem best. Particular attention should be given to cost, ensuring that development remains within the allocated budget. The trainee is further constrained by the status of the resources themselves, and should ensure that resources are reallocated according to their properties and that none of the resources are utilised beyond their usage parameters. Finally, since time is limited and being a manager requires knowledge and effort, all the decisions taken during the development process expend energy. It is therefore the trainee’s task to ensure that they do not run out of energy, as this too would lead to an unsuccessful completion of the project.

Figure 1. Early design prototype of the gameplay

REFERENCES [1]

J. Molléri, J. Gonzalez-Huerta and K. Henningsson, “A Legacy Game for Project Management in Software Engineering Courses”, June 2018.

[2]

B. Marín, M. Vera and G. Giachetti, “An Adventure Serious Game for Teaching Effort Estimation in Software Engineering”, Universidad Tecnológica de Chile INACAP, Santiago, Chile.

[3]

A. Baker, E. O. Navarro and A. van der Hoek, “An experimental card game for teaching software engineering,” Proceedings of the 16th Conference on Software Engineering Education and Training (CSEE&T 2003), 2003, pp. 216-223, doi: 10.1109/CSEE.2003.1191379.

Figure 2. Flowchart of the gameplay of the serious game


Software Engineering & Web Applications

Julian Zammit | SUPERVISOR: Prof. Ernest Cachia COURSE: B.Sc. IT (Hons.) Software Development


SimplifyIt: Automatic simplification of text

Audio Speech & Language Technology

Benjamin Bezzina | SUPERVISOR: Dr Claudia Borg COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Health literacy is often overlooked in the public health sector. Medical reports are usually written to share a medical situation with other physicians and, in order to describe the patient’s needs as accurately as possible, highly technical language is used. In most cases, this makes it very hard for patients to understand what is being said about them, thus creating a health literacy problem. Being unable to understand their own medical situation from official medical reports, patients could resort to looking for information from less reliable sources, or misinterpret the physician’s reports. This situation provided the motivation for this project, which sought to simplify medical text by experimenting with text simplification techniques used in the field of natural language processing.

Generally, text simplification systems are trained on a two-part data set: one part contains the complex terminology, while the other offers the same content in simpler language. One of the primary challenges in the experiment was finding such training data in the medical domain. In this work, the limited data available was used to evaluate the effectiveness of fine-tuning a pre-trained model, such as BART (Bidirectional and Auto-Regressive Transformer) [1], for medical text simplification through three different experiments. More specifically, the experiments were set up to investigate the effect of the data used to fine-tune the base version of BART for medical text simplification. In the first experiment, the model was fine-tuned on EW-SEW (English Wikipedia and Simple English Wikipedia) [2], a general English corpus for text simplification compiled from Wikipedia. The second experiment extended the model trained in the first experiment by fine-tuning it further on the limited medical-domain data available, namely a subset of the EW-SEW corpus made up of only medical sentences. In the third experiment, the pre-trained model was fine-tuned only on the medical training data. All three models were evaluated on the same medical test set.

Figure 1. The set-up of the experiments

These models were evaluated using three evaluation metrics, namely: the BiLingual Evaluation Understudy (BLEU) score, the System output Against References and against the Input sentence (SARI) score [3], and the Flesch-Kincaid Grade Level (FKGL) score. The BLEU score was developed for evaluating machine translation tasks. Despite studies showing that it does not evaluate simplification accurately, early works used BLEU, and it was implemented in this project to provide a direct comparison with those studies. The SARI score, on the other hand, was developed specifically to evaluate text simplification systems and is now the standard for such tasks. While FKGL does not take into consideration whether the meaning is preserved in the output sentence, it was used to score the readability of the output. This project set out to address two research questions, namely: a) establishing the extent to which BART-base performs the task of text simplification, and b) establishing the extent to which it adapts to domain-specific language when fine-tuned accordingly.
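By way of illustration, the snippet below shows how a pre-trained BART model can be loaded and used to generate an output sentence with the Hugging Face transformers library; without the fine-tuning described above, the base model will largely reproduce its input, so this only sketches the mechanics, and the example sentence is invented.

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Hypothetical complex medical sentence to be simplified
sentence = "The patient presented with acute myocardial infarction."

inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
# Beam-search generation; a fine-tuned checkpoint would produce the simplified text
output_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```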

REFERENCES [1]

M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, “BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension,” CoRR, vol. abs/1910.13461, 2019. [Online]. Available: http://arxiv.org/abs/1910.13461

[2]

W. Hwang, H. Hajishirzi, M. Ostendorf, and W. Wu, “Aligning sentences from standard Wikipedia to Simple Wikipedia,” in Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Denver, Colorado: Association for Computational Linguistics, May–Jun. 2015, pp. 211–217. [Online]. Available: https://aclanthology.org/N15-1022

[3]

W. Xu, C. Napoles, E. Pavlick, Q. Chen and C. Callison-Burch, “Optimizing Statistical Machine Translation for Text Simplification”, Transactions of the Association for Computational Linguistics, vol. 4, pp. 401-415, 2016. doi: 10.1162/tacl_a_00107.



Find&Define: Automatic extraction of definitions from text

Definitions are the main means through which humans learn the meanings of concepts and, therefore, have a central role in the process of communication. The purpose of a dictionary is to help in the generalisation and circulation of these meanings. However, keeping a dictionary up to date is tedious work: many new definitions are introduced as language evolves, new words enter the language, and existing words shift in meaning. The automated extraction of textual definitions would be particularly beneficial, as it could facilitate the curation of dictionaries. It could also be used in a language-learning environment, whereby learners would have access to definitions of any new terms encountered. The automatic extraction of definitions from text using machine learning techniques is a growing research field. Traditionally, this was done in a limited, structured, well-defined way, using pattern matching. Recently, more advanced techniques, namely neural networks, have been used to solve this problem. These also make it possible to recognise definitions without specific defining phrases, such as ‘is’, ‘means’ and ‘is defined’. This project was composed of three main stages or components, which constituted the proposed definition-extraction pipeline. The first component simply established whether a given sentence is a defining sentence or not. The second component classified each token in the sentence with a label indicating its type, such as ‘term’, ‘definition’, ‘alias’ or ‘referential term’. These labels fed into the third and final component, which labelled the relation between tags in a sentence ‒ for example ‘direct-definition’, ‘AKA’ or ‘supplements’, amongst others ‒ towards finally forming a definition.

For each of the three steps mentioned above, a pre-trained neural network was fine-tuned for its specific purpose. Since training a neural network requires a large amount of data, this project employed the DEFT corpus. This corpus was constructed specifically for definition extraction and is the largest annotated semi-structured corpus in this field, containing sentences from open-source textbooks covering a wide range of subjects. The US Securities and Exchange Commission (SEC) EDGAR database was also used. The main aim of the project was to experiment with and analyse various preprocessing techniques and pre-trained language models, including BERT, RoBERTa and ALBERT, to address each of the three components. These large neural language models have been used in many NLP (natural language processing) applications. In fact, the research question at the centre of this project focuses on comparing the impact of these different models on the definition-extraction pipeline and its individual components. The experiments with these language models included freezing the weights and fine-tuning them, as well as checking the effect of balancing classes in the training data. The results achieved were then compared to those of the DEFT-Eval competition, a definition-extraction competition that uses the DEFT data set, thus serving as a good benchmark. However, the primary focus of this study remains a comparative study of the impact that large neural language models have on the definition-extraction task. The outcome of this project confirmed that definition extraction can be beneficial in a variety of situations. One of these is the building of dictionaries or knowledge graphs, which benefit from increased connectivity and relevance, especially for question-answering systems.
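A minimal sketch of the second component (token labelling) using the transformers library is shown below; the tag set is a simplified, hypothetical version of the DEFT labels, and the classification head here is randomly initialised, so real use would require the fine-tuning described above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-Term", "I-Term", "B-Definition", "I-Definition"]  # simplified tag set

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-cased", num_labels=len(labels))   # head is untrained here

sentence = "A polynomial is an expression consisting of variables and coefficients."
encoding = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits            # shape: (1, seq_len, num_labels)

predictions = logits.argmax(dim=-1)[0]
for token, label_id in zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"][0]),
                           predictions):
    print(f"{token:>15} -> {labels[label_id]}")
```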

Figure 1. The definition-extraction pipeline


Audio Speech & Language Technology

Paolo Bezzina | SUPERVISOR: Dr Claudia Borg COURSE: B.Sc. IT (Hons.) Artificial Intelligence


Automated news aggregator

Audio Speech & Language Technology

David Dimech | SUPERVISOR: Prof. Alexiei Dingli COURSE: B.Sc. IT (Hons.) Artificial Intelligence

The way readers consume news has evolved as a result of the rise of the internet and social media. Over the last two decades, newsrooms have expanded their operations online, and their stories are now published on social media, online web portals, and/or mobile applications. The internet has democratised and facilitated journalism, while social media has facilitated the exchange and spreading of news. While this is broadly a positive development, there are instances where it could have a detrimental impact: if the news is biased or inaccurate, it could distort the public’s perception of critical issues. The automated news aggregator (ANA) attempts to solve this problem by providing an online platform where articles on the same subject, published by multiple newsrooms, are aggregated into one article with minimal bias. Currently, existing systems merely group similar articles and stories together. This project seeks to go further by aggregating the articles’ content, generating as little bias as possible, while always working in a transparent and responsible manner. The original articles were scraped from their respective websites, preprocessed and translated. Using TF-IDF, the articles were transformed into vectors, in order to be queried and grouped into sets of similar articles. Each sentence of the similar articles was split out and inserted into one list of sentences. The sentences were then embedded into sentence vectors and clustered semantically. Clusters of similar sentences were then processed and scored according to specific criteria, such as: the sentiment of a sentence; the number of entities; the position of a sentence relative to the article; and the use of pronouns. The best-scoring sentence was then extracted from its cluster and added to the list of sentences for the newly aggregated article. ANA has the potential to take online news portals on a new trajectory, encouraging the spreading of consumable and unbiased media.

Figure 1. Aggregate article outline
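The grouping and clustering stages can be sketched with scikit-learn as follows; the project embedded sentences semantically, whereas this illustration uses TF-IDF vectors throughout for brevity, and the sentences are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical sentences pooled from several articles covering the same story
sentences = [
    "The minister announced a new budget on Monday.",
    "A new budget was unveiled by the minister.",
    "Opposition leaders criticised the proposal.",
    "The proposal drew criticism from the opposition.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)

# Cluster similar sentences; the best-scoring sentence per cluster would be kept
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for cluster_id, sentence in zip(kmeans.labels_, sentences):
    print(cluster_id, sentence)
```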

Figure 2. Aggregation pipeline process



Automatic transcription and summarisation of multi-speaker meetings

Businesses and organisations across the globe are continuously implementing solutions that utilise automatic speech recognition (ASR) and natural language processing (NLP) to streamline their operations. One such task is the automatic transcription of dialogue followed by conversation summarisation, which proves highly valuable in the case of lengthy meetings involving multiple speakers. This project sought to apply ASR to audio obtained during meetings, simultaneously segmenting between two or more speakers within these conversations. This proved to be a challenging task due to non-ideal acoustic conditions, coupled with regular speech overlaps (participants speaking over one another). Subsequently, the work focused on producing a brief summary containing the main points of the conversation. This task was also non-trivial due to the non-conventional structure of conversational text, with the main points being scattered across several speakers and utterances. Following an in-depth analysis of numerous relevant state-of-the-art models and techniques, a system was implemented that achieved this goal, successfully transcribing and summarising meeting recordings from a collection of business meetings. This system comprised a pipeline of models, one for each respective sub-task. The first of these was speaker diarisation, which detects speaker turns, answering the question, “Who spoke when?” This was followed by ASR, also referred to as speech-to-text, obtaining the spoken words for each speaker. This text was punctuated using a sequence-to-sequence model, and the result was a complete and enhanced transcript of the meeting. Finally, this transcript was used as the input to a summarisation model that returned a summary highlighting the salient points of the conversation. Two deep learning models were trained for the summarisation task on data obtained from a meeting corpus, augmented with synthetic data generated by the robust GPT-3 model. This was done in order to approach the results of a high-resource model with significantly less data. In the process, this increased the robustness of the summarisation model to noise and errors in the transcript obtained. Subsequently, the system was evaluated as a whole by altering various parts of the pipeline and examining the effect of these changes on the overall output. These experiments were conducted in order to gain insight into which components the system relied on most, and which could be sacrificed in order to save computational resources. This would pave the way for such a system to be implemented in low-resource languages and direct efforts towards the most crucial areas for advancement in this field.
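The shape of this pipeline can be expressed as in the sketch below; every stage function (diarise, transcribe, punctuate, summarise) is a hypothetical stub standing in for the respective model, not an actual API.

```python
# Hypothetical stub stages standing in for the real models in the pipeline
def diarise(path):                  # speaker diarisation: "who spoke when?"
    return [("SPK1", 0.0, 4.2), ("SPK2", 4.2, 9.8)]

def transcribe(path, start, end):   # ASR over one speaker turn
    return "we should ship the release on friday"

def punctuate(text):                # sequence-to-sequence punctuation restoration
    return text.capitalize() + "."

def summarise(transcript):          # abstractive summarisation model
    return "The team agreed to ship the release on Friday."

def summarise_meeting(audio_path):
    """Orchestrate diarisation -> ASR -> punctuation -> summarisation."""
    lines = []
    for speaker, start, end in diarise(audio_path):
        lines.append(f"{speaker}: {punctuate(transcribe(audio_path, start, end))}")
    return summarise("\n".join(lines))

print(summarise_meeting("meeting.wav"))
```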

Figure 1. A high-level system overview of the transcription and summarisation pipeline


Audio Speech & Language Technology

Mikea Dimech | SUPERVISOR: Dr Andrea DeMarco | CO-SUPERVISOR: Dr Claudia Borg COURSE: B.Sc. IT (Hons.) Artificial Intelligence


Multilingual low-resource translation for Indo-European languages

Audio Speech & Language Technology

Jake Sant | SUPERVISOR: Dr Claudia Borg COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Neural machine translation is the task of translating text from a source language to a target language using artificial neural networks. As a rule, training neural machine translation models is computationally expensive and requires vast amounts of data in order to build a model that translates accurately enough for use in the real world. Some of the best currently available translation models, such as those developed by Google and Facebook, are trained on billions of sentences where data is available. Although they provide coverage for over 100 languages, many of these languages do not have large parallel corpora, and this becomes evident in the translation output. This project focuses on neural machine translation in the context of low-resource languages, for which far fewer online corpora are available when compared with widely used languages such as English, Spanish and Mandarin. A data set was constructed from an amalgamation of different multilingual sources, in order to create a single, larger multilingual corpus covering English, Danish, German, Icelandic, Norwegian and Swedish. This involved preprocessing the texts, removing sentences beyond a certain length and splitting sentences into smaller units called ‘tokens’. Models already pre-trained for neural machine translation were obtained from the Hugging Face repository, and further trained and fine-tuned using the new data set. These models are known as transformers and have become the backbone of the architectures used by some of the best translation models. In order to achieve the best performance and results, a cloud computing platform ‒ Paperspace Gradient ‒ was used, where all models were fine-tuned and trained on high-performance hardware available on the platform. The accuracy of these models was evaluated using additional data sets provided by the organisers of a shared task at the Sixth Conference on Machine Translation. The accuracy of the system was compared with that of other models submitted to the shared task, using evaluation metrics including BLEU, TER and chrF. In addition, other pre-trained models were used as baselines: these were not fine-tuned on any additional data and were used ‘out of the box’ to give a clearer indication of the impact of fine-tuning on the translation tasks. Experiments were conducted to examine the relationship between the accuracy of the model and the amount of training data for low-resource languages. Different instances of the pre-trained models were fine-tuned using progressively larger subsets of the constructed data set, starting from just 10% of the complete data set and repeating until the entire data set was used. This process sought to investigate the question: what is the least amount of data that could be used while ensuring that the system remains accurate? The outcome of the experiments suggested that the additional training and fine-tuning performed on each model provided a more substantial benefit in accuracy when larger subsets of the constructed data set were used. Moreover, the linguistic relationship among the Germanic languages used in this task provided an overall increase in the quality of the translations obtained.
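The evaluation metrics mentioned above can be computed with the sacrebleu library, as sketched below on invented sentence pairs.

```python
import sacrebleu

# Hypothetical system outputs and one aligned reference stream
hypotheses = ["The cat sat on the mat.", "He went home early."]
refs = [["The cat sat on the mat.", "He returned home early."]]

print(sacrebleu.corpus_bleu(hypotheses, refs).score)  # BLEU
print(sacrebleu.corpus_ter(hypotheses, refs).score)   # TER
print(sacrebleu.corpus_chrf(hypotheses, refs).score)  # chrF
```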

Figure 1. The task pipeline for this project



The applicability of Wav2Vec 2.0 for low-resource Maltese ASR

This project sought to fine-tune the Wav2Vec 2.0 neural network architecture, with a view to applying it to automatic speech recognition (ASR) in Maltese. As of 2012, Maltese was spoken by around 520,000 native speakers. This quantitative limitation tends to create a number of difficulties when working on technical innovations for the language, hence its classification as a low-resource language. As part of this project, a novel speech corpus consisting of excerpts of parliamentary speeches in Maltese was collected and labelled. Speeches by twenty-three male and three female members of parliament, amounting to 2 hours and 30 minutes, were utilised. This data was combined with other available Maltese corpora ‒ including subsets from the MASRI and Common Voice projects ‒ to compile a data set totalling 50 hours. External research also shows that the use of additional data from auxiliary languages improves training performance. For this reason, a subset of the English LibriSpeech corpus was also used for further experimentation. Before 2016, the ASR field was dominated by supervised machine learning methods that rely heavily on vast quantities of labelled data for training. Recently, self-supervision ‒ a method that uses unlabelled data to pre-train a model ‒ has been popularised in various artificial intelligence (AI) fields, including speech systems. Such methods can leverage unlabelled data, which is far more readily available, even for low-resource languages.

Wav2Vec 2.0 is an end-to-end ASR system able to learn speech representations from massive amounts of unlabelled data during pre-training. These representations are then used to match audio samples to their linguistic representation, in a process called fine-tuning. Moreover, a multilingual variant, XLSR, was developed, allowing Wav2Vec 2.0 to learn cross-lingual speech representations from a varied pre-training data set. This project used a pre-trained XLSR model to conduct various experiments with the collected data sets in Maltese and English. The original Wav2Vec 2.0 work supported its applicability to low-resource settings using training splits of 10 minutes, 1 hour, 10 hours and 100 hours. To validate these claims, the Maltese data set was split into seven subsets: 10 minutes, 30 minutes, 1 hour, 5 hours, 10 hours, 20 hours and 50 hours. The 50-hour model was then fine-tuned further on the English speech data. The models were evaluated using the word error rate and character error rate (WER/CER). The experiments showed that model performance scales up with data quantity, although not in a linear manner. The best monolingual model achieved 15.62% WER and 4.59% CER. Upon further evaluation, the performance of the different models on a set of varied utterances was duly noted. The system was also applied to generating transcriptions for a number of samples of Maltese speech sourced from the web, with varying degrees of success.
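WER and CER figures of the kind reported above can be computed with the jiwer library; the transcript pair below is invented.

```python
import jiwer

reference = "il-membri tal-parlament iddiskutew il-bagit"   # hypothetical gold transcript
hypothesis = "il-membri tal-parlament iddiskutew bagit"     # hypothetical ASR output

print(f"WER: {jiwer.wer(reference, hypothesis):.2%}")   # word error rate
print(f"CER: {jiwer.cer(reference, hypothesis):.2%}")   # character error rate
```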

Figure 1. The implemented solution pipeline of this project


Audio Speech & Language Technology

Aiden Williams | SUPERVISOR: Dr Andrea DeMarco | CO-SUPERVISOR: Dr Claudia Borg COURSE: B.Sc. IT (Hons.) Artificial Intelligence


Speech-driven app to assist people in assisted living

Audio Speech & Language Technology

Kyle Zammit | SUPERVISOR: Dr Michel Camilleri COURSE: B.Sc. IT (Hons.) Software Development

Residents in care homes often find it difficult to communicate or carry out certain mental tasks without the help of carers or staff members. This is largely due to mild impairments that they might suffer from, such as cognitive, mobility and spatial issues. Assistance necessitates the availability of staff, which comes at a premium. This resource is nevertheless crucial to the residents’ quality of life, as without the help of carers they would feel isolated and demotivated. This study focused on employing technology-based approaches to facilitate the residents’ everyday tasks. As part of the study, an application was developed to help users in tasks such as checking reminders, meals, events and family visits, as well as accessing websites and writing e-mails, through a portable mobile app controlled by a speech-driven interface. Such a system could be managed by the staff of the care home and additional remote carers, such as relatives. The app thus attempts to bridge the gap between carers, residents and remote carers. A speech-driven approach was identified as one of the main methods for overcoming some of the impairments typically experienced by the target users. The proposed app is based on retrieving data from a database about the care facility and its events, meals, visits and reminders for various daily tasks, for example taking medication. Material.io components were used for a uniform look across screens. The technologies used include the Flutter framework, which is used to develop scalable, multiplatform and user-friendly applications.

Figure 1. A screenshot of the app, as viewed by the user (care home residents)

The software relies on a database to save the user’s data securely and update it in real time. Finally, the app includes a voice assistant, designed to interpret the user’s speech into actions that the app can execute, such as switching screens, and to provide voice feedback to the user when executing a command.
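The voice assistant’s mapping from recognised speech to app actions can be sketched as a simple keyword-based intent router; the intents and phrases below are hypothetical (the actual app implements this in Flutter).

```python
# Hypothetical mapping from keywords to app actions
INTENTS = {
    "reminder": "open_reminders_screen",
    "meal": "open_meals_screen",
    "visit": "open_visits_screen",
    "email": "open_email_screen",
}

def route(utterance: str) -> str:
    """Return the app action matching the first recognised keyword."""
    words = utterance.lower()
    for keyword, action in INTENTS.items():
        if keyword in words:
            return action
    return "say_not_understood"   # voice feedback when no intent matches

print(route("What meals are there today?"))  # -> open_meals_screen
```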

Figure 2. The architecture of the system



Implementation and analysis of an RSSI-based indoor positioning system

Internet of Things

Connor Attard | SUPERVISOR: Dr Michel Camilleri COURSE: B.Sc. IT (Hons.) Software Development

An indoor positioning system is a technology used to locate a person or an object within a building, or in an environment in which conventional GPS technology would not be reliable due to signal impairments. Such systems have been successfully deployed in commercial settings, such as retail outlets, factories and airports. The aim of this work was to set up and analyse the characteristics of a simple indoor positioning system using inexpensive digital radio modules operating on the 2.4GHz band. Particular attention was given to its feasibility as a navigational tool for indoor vehicles, such as smart wheelchairs. To test the system, a number of ‘anchors’ or ‘beacons’ were installed around the perimeter of a test environment. A receiving ‘tag’ was then placed on several markers around the room, the precise locations of which were pre-determined using a laser measuring device. As in typical indoor positioning systems, the receiving end exploited a characteristic of the signal that varies with the distance between the transmitter and the receiver ‒ in this case, the received signal strength indication (RSSI). Using a signal propagation model, the RSSI from the closest three beacons could be translated into Euclidean distance, and thus the position could be triangulated. The receiver module was connected to a single-board computer, which performed the necessary computation and logged the results to an SD card. This data was then analysed and visualised on a PC. The key performance metric was the positioning error ‒ i.e., the difference between the actual position and the position calculated by the system. Since raw RSSI readings tend to be highly prone to fluctuations, filtering and smoothing techniques were applied to improve the accuracy. To assess the robustness of the system, variables such as the number of beacons and the amount of clutter in the room were also taken into account.

Figure 1. Test environment and set-up

Figure 2. Mobile tag process diagram
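The RSSI-to-position computation can be sketched as follows: a log-distance path-loss model converts RSSI into distance, and least squares solves for the position. The beacon coordinates and model constants below are hypothetical, not the project’s calibration values.

```python
import numpy as np
from scipy.optimize import least_squares

# Log-distance path-loss model: rssi = rssi_1m - 10 * n * log10(d)
RSSI_1M, PATH_LOSS_EXP = -40.0, 2.0   # hypothetical calibration constants

def rssi_to_distance(rssi):
    return 10 ** ((RSSI_1M - rssi) / (10 * PATH_LOSS_EXP))

# Hypothetical beacon positions (metres) and smoothed RSSI readings (dBm)
beacons = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
rssi = np.array([-52.0, -58.0, -55.0])
distances = rssi_to_distance(rssi)

def residuals(p):
    # Difference between measured distances and distances to candidate position p
    return np.linalg.norm(beacons - p, axis=1) - distances

position = least_squares(residuals, x0=np.array([2.5, 2.5])).x
print(position)   # estimated (x, y) of the tag
```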


A facial recognition system for classroom attendance

Internet of Things

Cristina Barbara | SUPERVISOR: Prof. John Abela COURSE: B.Sc. IT (Hons.) Software Development

Taking classroom attendance manually is a very time-consuming and cumbersome task, but is nonetheless generally required for classes at any level of education. Reducing the time taken for such a task would increase the time available for teaching the course content. Automating classroom attendance would therefore benefit both the educator and the students, since it would not require user input and would be much faster. This work set out to research and build an efficient solution for automating classroom attendance using face recognition (FR). The system initially captures 100 pictures of each student’s face and labels them with the student’s name. Subsequently, the same images are used to train the FR algorithm. Prior to each lesson or lecture, the student is required to stand in front of the camera so that an image of their face can be captured and recognised by the system. Once the recognition process is completed (i.e., the algorithm has assigned a name to each captured image), the students’ names, the date and a timestamp are entered into an Excel sheet and sent as an e-mail attachment. Additionally, a face-detection algorithm was used to locate the face in each image inputted into the system. The existing model was executed on a PC but was originally developed to run on the Raspberry Pi. Once this system was fully functional, alternative algorithms were implemented and their efficiency was tested using a local data set. The results were then saved and evaluated through a spreadsheet. Testing the algorithms on frontal face pictures resulted in 100% identification accuracy for each algorithm, whereas testing them on different face poses yielded 99% identification accuracy for all algorithms. The ideal algorithm for this type of system would be the one offering the highest speed, as reducing the time taken for the task is essential in ensuring that lessons or lectures always start on time.

Figure 1. The system’s output displayed on the screen during real-time face recognition
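One algorithm commonly used for this kind of system is OpenCV’s LBPH recogniser; the sketch below (requiring opencv-contrib-python) shows the train/predict cycle on hypothetical image arrays, not the project’s actual data or necessarily its chosen algorithm.

```python
import cv2
import numpy as np

# Hypothetical training data: grayscale face crops and integer student IDs
faces = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1], dtype=np.int32)   # two students, two images each

recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(faces, labels)

# At attendance time: predict the identity of a newly captured face crop
probe = np.random.randint(0, 255, (100, 100), dtype=np.uint8)
student_id, confidence = recognizer.predict(probe)
print(student_id, confidence)   # lower confidence value = closer match
```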

Figure 2. The system’s internal recognition process for taking attendance



The design of a piezoelectrically actuated scanning micromirror

Internet of Things

Benjamin Barthet | SUPERVISOR: Prof. Ivan Grech COURSE: B.Sc. (Hons.) Computer Engineering

Microelectromechanical systems (MEMS) are becoming the norm in today’s devices. This is, in part, due to their low power consumption. More importantly, their small size allows them to take on physical properties that cannot be achieved by any other means. One application of MEMS is scanning mirrors, which are found in micro and pico projectors, and are significantly bringing down the cost of lidar technology, the main sensing technology in self-driving cars. Scanning micromirrors employ one of various actuation techniques to achieve resonance. An emerging technology that is gaining momentum is actuation by piezoelectric materials. This is due to a number of factors, mainly the low driving voltage required, the linearity of the effect, and its nature as a reciprocal transducer, which allows the performance of the mechanism to be measured and aids calibration. This project revolved around the research, design and testing necessary in devising a micromirror that would be piezoelectrically actuated, with a resonant frequency of around 230 kHz (the frequency required to scan or project 4K resolution horizontally at 60Hz). The micromirror in this project was designed using CoventorWare, a simulation package that can be used to design and carry out various finite-element simulations on MEMS devices. The PiezoMUMPs manufacturing process was used as the target process for the design of the mirror. This consisted of a silicon base housing the mirror and the actuator wings, which would be excited electrically with opposing charges at the resonant frequency of the mirror to induce motion in one dimension. A layer of piezoelectric material was then deposited on top. An oxide layer filled in any spaces, so as not to short-circuit the electrodes with the semi-conductive silicon base. Lastly, electrodes were placed atop the structure to induce a charge in the piezoelectric material.

Figure 1. The mirror design as seen from above, and a side view enlarged in the Z plane by 10 times (legend of the individual layers: blue = substrate; cyan = oxide; red = base; magenta = piezoelectric actuator; green = insulating oxide; yellow = electrode)

Figure 2. The motion of the mirror



IoT-based domestic air-quality monitoring system

Internet of Things

Abigail Cini | SUPERVISOR: Dr Inġ. Owen Casha COURSE: B.Sc. (Hons.) Computer Engineering

Due to the Covid-19 pandemic, people have become more aware of the importance of clean air around them, as this can effectively reduce health hazards. The aim of this final-year project was to develop a wireless sensor network to monitor the air quality in domestic households. The implemented system includes three separate nodes, positioned in strategic areas of the house, such as the garage, the kitchen and one of the bedrooms. These nodes were set to monitor specific gases, pressure, humidity and temperature. In order to display the data, a webpage was created, presenting all the data collected from these nodes in the form of a table. The internet of things (IoT) is a network of devices featuring hardware and software components, which connect and exchange data with other devices and systems via the internet [1]. Collecting data through sensors is crucial to the development of IoT systems such as alarm systems, burglar alarms and air-quality monitoring systems. Sensors are devices that detect physical data and produce signals and information that people and machines can understand and evaluate. From these sensors, data can be observed and analysed by members of the household, enabling the family to take measures to improve the air quality in specific areas around the home. In this project, a wireless sensor network (WSN) was built to monitor abnormalities in the air, with a view to reducing them. The communication backbone of this WSN is the nRF24L01+ transceiver module, a low-power, long-range (approximately 1 km) device. For two or more transceiver modules to communicate with each other, they need to be on the same channel, which can be any frequency within the 2.4 GHz (2,400–2,483.5 MHz) industrial, scientific and medical (ISM) band. The system can have multiple slaves and masters; however, in order to communicate with each other, they need to share the same address and run data through a pipe. The project employed a synchronous serial-communication interface ‒ more specifically, the Serial Peripheral Interface (SPI) ‒ to facilitate communication between the main hub and the sensor nodes deployed around the house. A NodeMCU with an onboard ESP-12F chip was used as a hub that communicates with the sensor nodes around the domestic environment, with the data being collected and processed onto the web server. The data sent by the sensor nodes is uploaded, and the user is informed about specific inconsistencies identified and recorded. In order to create the hub, where everything could be monitored, a Raspberry Pi was used to make the system as portable and efficient as possible. Each sensor node was designed around the Arduino UNO microcontroller, and featured a number of sensors to collect data, including temperature, pressure, humidity and carbon monoxide levels inside the garage, and a smoke sensor in the kitchen. This incorporated an alarm system that would be triggered, should any abnormal levels be measured.
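On the hub side, the webpage serving the collected readings in table form could look like the minimal Flask sketch below; the endpoint names and fields are hypothetical placeholders for the project’s actual web server.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
readings = []   # in-memory store of sensor-node reports

@app.route("/report", methods=["POST"])
def report():
    """Receive a JSON reading forwarded by the hub from a sensor node."""
    readings.append(request.get_json())
    return jsonify(status="ok")

@app.route("/")
def table():
    """Render the collected readings as a simple HTML table."""
    rows = "".join(
        f"<tr><td>{r.get('node')}</td><td>{r.get('temperature')}</td>"
        f"<td>{r.get('humidity')}</td></tr>" for r in readings)
    return f"<table><tr><th>Node</th><th>Temp</th><th>Humidity</th></tr>{rows}</table>"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```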

Figure 1. Block diagram of the project

REFERENCES [1]


Y. Ismail, “Internet of Things (IoT) Importance and Its Applications,” November 2019. [Online]. Available: https://www.intechopen.com/chapters/69788. [Accessed September 2021].



A narrowband IoT network solution for air-quality monitoring

Internet of Things

Isaac Debattista | SUPERVISOR: Prof. Inġ. Carl J. Debono | CO-SUPERVISOR: Dr Mario Cordina COURSE: B.Sc. (Hons.) Computer Engineering

With the growing interest in the internet of things (IoT), as well as rising concerns for the environment, this work seeks to bridge the two by producing an effective air-quality monitoring system. The selected approach involved utilising a carbon dioxide (CO2) sensor that could be deployed remotely in any type of environment. The project also included the development of a hybrid gateway capable of receiving CO2 concentration readings from multiple instances of the sensor and forwarding the information to a cloud dashboard. The hybrid gateway was built on an Arduino MEGA microcontroller, with two expansion shields stacked on it. The shields corresponded to the two communicating fronts of the network ‒ the receiving and sending sides. The receiving end was connected to the CO2 sensor via XBee 868 MHz, a wide-area, low-power radio frequency (RF) technology operating on the European licence-free ISM (industrial, scientific and medical) radio band. At the other end, narrowband IoT (NB-IoT), which operates on a narrow portion of the Long-Term Evolution (LTE) band, was utilised to upload the concentration readings onto the cloud dashboard. This technology proved to be the superior choice in terms of hard-to-reach coverage, reliability and power consumption. To further cut down on cost and power consumption, the microcontroller was designed to aggregate the received data and adapt to the environmental changes recorded by the sensors. The result is an intelligent understanding of when information should be sent to the user, reducing consumption by adaptively eliminating redundant information that would otherwise induce an LTE charge via the onboard SIM provided by Epic Communications Limited. Furthermore, the system can recognise which readings are of concern in the present environment and, hence, require the immediate attention of the user.

Figure 1. A high-level diagram depicting the network solution

Figure 2. A graph of collected CO2 concentration levels relative to their adaptive thresholds and forwarding behaviour
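The adaptive forwarding behaviour described above can be sketched as follows; the thresholds and readings are hypothetical.

```python
def should_forward(reading: float, last_sent: float, threshold: float) -> bool:
    """Forward a CO2 reading over NB-IoT only when it differs from the last
    transmitted value by more than an adaptive threshold, suppressing
    redundant (and billable) LTE transmissions."""
    return abs(reading - last_sent) > threshold

last_sent, threshold = 415.0, 10.0   # ppm; hypothetical starting values
for reading in [417.0, 423.0, 431.0, 432.0]:
    if should_forward(reading, last_sent, threshold):
        print(f"forwarding {reading} ppm")
        last_sent = reading
    else:
        print(f"suppressing {reading} ppm")
```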



Automated deployment of network services using the latest configuration protocols and languages

Internet of Things

Kyle Fearne | SUPERVISOR: Prof. Inġ. Saviour Zammit COURSE: B.Sc. (Hons.) Computer Engineering

Network automation is set to play an important role in the development of computing over the next few years, particularly for telecommunications companies and internet service providers. This is because automation saves time by reducing human interaction and, thus, removing the possibility of human error. In view of this potential, this project, undertaken in collaboration with GO plc, set out to automate the deployment of various network services using the NETCONF protocol and the YANG data modelling language. At present, the most commonly applied method for network management is a combination of the Simple Network Management Protocol (SNMP) for monitoring the network state and a command-line interface for effecting configuration changes on network devices. Most network vendors tend to limit the amount of configuration that can be done via SNMP, and opt for proprietary configuration interfaces. NETCONF helps solve this issue, as it is a standardised network management protocol that allows configuration changes to be made to network devices through remote procedure calls. The protocol can be used in the same way on devices made by different vendors, as long as a NETCONF agent is running on the device. For the implementation of the project, GNS3, a network simulation tool, was run on an Ubuntu 20.04 environment to emulate a variety of possible network services, together with the respective router configurations. More specifically, the project focused on the automatic provisioning and deprovisioning of an E-Line connection, which is defined by the Metro Ethernet Forum as a point-to-point link between two network interfaces, where both ends can only communicate with each other. The network service was then monitored to ensure that it remained in a healthy state. In addition to the above, a basic network inventory and service catalogue were developed using a MariaDB database.
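A NETCONF configuration change of the kind described above can be issued from Python with the ncclient library, as in this sketch; the host details and the YANG-modelled payload are placeholders, not the project’s actual configuration.

```python
from ncclient import manager

# Placeholder YANG-modelled configuration for one end of an E-Line service
config = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <!-- vendor/YANG-specific interface configuration goes here -->
</config>
"""

# Placeholder credentials for a router running a NETCONF agent on port 830
with manager.connect(host="192.0.2.1", port=830, username="admin",
                     password="admin", hostkey_verify=False) as m:
    m.edit_config(target="candidate", config=config)  # stage the change
    m.commit()                                        # apply it atomically
```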

Figure 1. Network management using NETCONF and the Yang Data Model

Figure 2. Diagram depicting an E-Line service, whereby two customer user network interfaces are connected to each other via a provider bridge device



Hardware implementation of an automatic collision-detection-and-avoidance system for a remote-controlled device

Internet of Things

Kyle Friggieri | SUPERVISOR: Prof. Inġ. Edward Gatt COURSE: B.Sc. (Hons.) Computer Engineering

Sensors have become essential for accessing the metrics and data required for monitoring and interaction. In situations where human access might be impaired or impossible, a mobile, flexible and scalable solution is required to provide access and monitoring. This project primarily consisted in creating hardware that would enable a remote-controlled device ‒ in this case, a toy car controlled via Bluetooth ‒ to gather and relay data from various onboard sensors to a locally hosted web server and to the remote application itself. It was ensured that the user would be at a safe distance from the area in question, while still being able to obtain a live video feed from the onboard camera. Adding automated collision detection and avoidance to the system allowed the device to assure its own safety independently, using internal logic. The implementation of multiple processing nodes ‒ a Raspberry Pi 4 as the main node and an Arduino UNO as the auxiliary node ‒ made it possible to eliminate the need for components such as analog-to-digital converters and logic-level converters, while allowing the use of available libraries and distributing the processing load accordingly. This allowed for a responsive system that could be easily expanded and modified to user-specific requirements. The processing nodes were required to read humidity, temperature, air pressure, carbon monoxide levels, frontal distance for collision detection, and speed. Due to cost constraints, this project was implemented by controlling a simple, inexpensive and widely available toy car. Nevertheless, the concept was designed to be scalable to a more robust platform, such as a custom or hobby-grade chassis.

Figure 1. The final product (hardware)

Figure 2. Top-level diagram of the project


Implementation of a home alarm system using IoT

Internet of Things

Amy Gatt | SUPERVISOR: Prof. Victor Buttigieg COURSE: B.Sc. (Hons.) Computer Engineering

A concern that many persons share is ensuring the safety of their homes. The aim of this project was to design and implement a system that would minimise the security vulnerabilities of a home by observing the state of the house and detecting any changes. The whole process is carried out automatically, without the need for a user to control the system. The proposed solution entails a variety of sensors attached to specific parts of a house. These include contact sensors (which monitor doors or windows and detect whether they have been opened or shut) and motion sensors (which detect movement, such as a person walking past the sensor). Two contact sensors and two motion sensors were used. The front door was fitted with a contact sensor, with a motion sensor observing that area. The other contact sensor was attached to a window, while the second motion sensor was positioned to observe another part of the house. Any changes in the sensor values are detected by the microcontrollers to which the sensors are attached. These microcontrollers act as microcomputers and have been equipped with software designed to forward the sensor values to a PC over wi-fi. The software on the PC gathers all the sensor data and checks whether a user’s smartphone is still connected to the home network, in order to establish whether the house is empty.

If a user’s smartphone is not connected and the sensor data changes, the software concludes that the home is empty and automatically switches on the alarm system. On the other hand, should the system be on and the software recognise the smartphone of the user on the home network, it automatically shuts off parts of the alarm system. When the system detects an intrusion, the alarm goes off and the user is notified via a smartphone notification, sent by the web server running on the PC. The web server was also set up to display the sensor values, the state of the system (i.e., whether the alarm system is on or off) and any warnings or intrusions detected. The system also includes sensors that can detect high temperatures and rain. Should the temperature sensor register a high reading, this reading is sent to the microcontroller, which forwards it to the PC over wi-fi. The event is then displayed on the web server as a warning, notifying the user. As for the rain sensors, should rain be detected, the system checks that all the windows are closed and raises a warning on the web server about any windows left open. The system described was tested and achieved generally successful results, with the main drawback being that only one user was considered. Features that could be added to the system include: support for multiple users; improved home security through components such as cameras to monitor intrusions; and a focus on increasing user comfort.
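The presence check that drives the arming logic can be sketched by pinging the smartphone’s address on the home network; the IP address and state names below are placeholders, and the ping flags assume a Linux environment.

```python
import subprocess

PHONE_IP = "192.168.1.50"   # placeholder address of the user's smartphone

def phone_on_network(ip: str) -> bool:
    """Return True if a single ping to the phone succeeds (Linux ping syntax)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "1", ip],
                            capture_output=True)
    return result.returncode == 0

def update_alarm(sensors_changed: bool) -> str:
    """Arm when the house looks empty and sensor data changes; disarm otherwise."""
    if phone_on_network(PHONE_IP):
        return "DISARM_PARTIAL"          # user is home: relax parts of the alarm
    if sensors_changed:
        return "ARM"                     # house empty and activity detected
    return "NO_CHANGE"

print(update_alarm(sensors_changed=True))
```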

Figure 1. Image of the web server



Building an air pollution data set through low-cost IoT air monitoring

Intensified industrialisation in recent years has brought about higher levels of air pollution. More specifically, this is due to the increasing use of private vehicles (car dependency), an increasing number of factories and a growing number of construction projects, to name a few causes. This project proposes an internet of things (IoT) air-quality monitoring system fitted with consumer-grade electronic parts that are available on the international and local market. This would allow a person with some technical knowledge and skill to replicate such a system, enabling them not only to evaluate the surrounding air quality, but also to build a data set ‒ all at a low cost. IoT describes a network that connects objects in the physical world to the internet. This study also explored the current state of air-quality monitoring, the current challenges, and what is being done to address them. Furthermore, the work included an observation of how the said challenges would affect the experiment. The chosen approach was to connect several sensors to a small programmable computer ‒ more precisely, a microcontroller ‒ to measure the quality of the surrounding air. The air-quality data collected at a remote location (by the sensing microcontrollers) was transmitted wirelessly through their LoRa modules, LoRa being a long-range wireless communications protocol designed to transmit small amounts of data at very low power levels. The wireless data was then received by the master microcontroller through its own LoRa module. This was physically connected to a PC, enabling the master microcontroller to write the data to the PC’s storage medium and, thus, build the data set. Sensors were selected to measure the following air-quality characteristics: particulate matter, temperature, pressure, humidity, carbon dioxide, carbon monoxide and ozone. The system was designed with the objective of making it scalable and low-maintenance, while providing accurate air-quality data. Hence, parts were selected primarily on the basis of cost, power consumption and sensor accuracy. Furthermore, the transmission technology was also selected based on power consumption and range. The sensors were calibrated to ensure accurate readings. The system was set to collect outdoor air-quality data over a one-month stretch. It was tested on power consumption and accuracy of results, which were then compared to existing solutions from other research papers. Data analysis was then performed on the compiled data, determining the different correlations and relationships between the air-quality features, and the results were compared with theory. As a basis for future work, the system was analysed to identify areas in which it could be improved. Data collection covering a longer period was also considered, to provide a data set that could be used to better calibrate the sensor readings through software-based calibration methods, particularly machine learning.
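A correlation analysis of the kind mentioned above can be reproduced with pandas on the logged data set; the file name and column names below are hypothetical.

```python
import pandas as pd

# Hypothetical CSV written by the master microcontroller
df = pd.read_csv("air_quality_log.csv", parse_dates=["timestamp"])

features = ["pm25", "temperature", "pressure", "humidity", "co2", "co", "o3"]

# Pairwise Pearson correlations between the measured air-quality features
print(df[features].corr())

# Example: daily averages to smooth out short-term sensor fluctuations
daily = df.set_index("timestamp")[features].resample("D").mean()
print(daily.head())
```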

Figure 1. Preview of the microcontroller electronic circuit


Internet of Things

Galin Petkov Gluhov | SUPERVISOR: Dr Inġ. Trevor Spiteri COURSE: B.Sc. (Hons.) Computer Engineering


Automated indoor navigation in a care context

Internet of Things

Andrew Magri | SUPERVISOR: Dr Michel Camilleri COURSE: B.Sc. IT (Hons.) Software Development

The main motivation behind this study was to improve the quality of care given to mobility-restricted individuals in environments such as hospitals and care homes for the elderly. In this context, smart wheelchairs could provide greater mobility and independence for wheelchair-dependent persons and reduce the amount of manpower needed for the indoor transportation of persons within the care sector. Such technologies require automated indoor navigation systems, and this study focused on the feasibility and efficacy of such a system. In a real-world application of the developed system, a smart wheelchair or a similar device would be used. However, due to current limitations, a simple robotic vehicle was constructed using a microcomputer, along with a series of other plug-and-play apparatus. The system developed focuses on four main aspects: indoor location, indoor navigation, obstacle detection and obstacle avoidance. The global positioning system (GPS) is not suitable for indoor environments, mainly due to the substantial loss in accuracy caused by signal interference from the infrastructure of the environment. This issue can be bypassed by using alternative technologies, such as Wi-Fi or Bluetooth, as indoor location beacons. The sole purpose of these beacons is to emit a signal that transmits information about the location of the beacon. The microcomputer receives these signals and computes its location relative to the beacons in the vicinity. The indoor navigation module uses the location module to calculate the direction and distance that the robot has to travel, given its own position in relation to the indoor environment and the beacons. While the robot traverses from the starting point to the end point of any given journey, the obstacle detection and avoidance modules constantly examine the robot’s surroundings, so as to minimise the chances of collision. In order to detect obstacles, the corresponding module was split into two parts. The first and most important part used a simple USB camera to detect whatever was in the path of the robot, while the second consisted of ultrasonic sensors on either side, as a pre-emptive measure for the camera’s blind spots. The camera sub-module applied filters, such as the Chambolle-Pock denoising filter and image inversion, in order to produce a binary object map (BOM). The BOM was then used by the obstacle avoidance module to detect the edges of the object(s) in the robot’s path and avoid them safely. The robot was expected to avoid obstacles detected by the camera by calculating the best change of angle needed to circumvent any possible collision. The ultrasonic sensors were used to detect any objects appearing outside the camera’s field of vision, alerting the robot to slow down until the object was either within view of the camera or had disappeared from the detection zone of the sensors.
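The binary-object-map step can be sketched with scikit-image; note that skimage’s denoise_tv_chambolle implements Chambolle’s total-variation algorithm, a close relative of the Chambolle-Pock scheme named above, and the threshold value is illustrative.

```python
import numpy as np
from skimage import data, color
from skimage.restoration import denoise_tv_chambolle

# Grayscale frame (a sample image stands in for the USB camera feed)
frame = color.rgb2gray(data.astronaut())

# Total-variation denoising, then inversion and thresholding into a binary map
denoised = denoise_tv_chambolle(frame, weight=0.1)
inverted = 1.0 - denoised
bom = (inverted > 0.5).astype(np.uint8)   # 1 = candidate obstacle pixel

print(bom.shape, bom.sum())
```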

Figure 1. Top-down diagram depicting the system working within an indoor environment



IoT-based voice-controlled home automation system

Internet of Things

Owen Micallef | SUPERVISOR: Prof. Inġ. Edward Gatt COURSE: B.Sc. (Hons.) Computer Engineering

This project proposes a voice-controlled smart home system, implemented through two microcontroller boards targeted at internet of things (IoT) applications, namely the Arduino Uno and the NodeMCU ESP8266. These were interfaced with a mobile application, various sensors, electronic devices and a real-time database. The aim of the project was to design a simple and cost-effective voice-controlled home automation system that could be used to gather sensory data and control home appliances with ease. The project presents the development of a real-world smart home solution built around the NodeMCU ESP8266 development board, the Arduino Uno board, the Firebase real-time database, and external electronic devices and sensors. In this project, the Arduino Uno microcontroller was interfaced with the NodeMCU ESP8266 microcontroller through serial communication to gather sensory data around the home environment. The NodeMCU ESP8266 microcontroller was also set up to send data to the real-time database, and then to the mobile application, in order to display the sensory data gathered. Moreover, the NodeMCU ESP8266 microcontroller was set up to host a web server, so as to interpret the HTTP requests received from the mobile application over wi-fi and switch the electronic appliances within the home environment on or off. The system was tested using different techniques to assess its performance in terms of feasibility in real-life scenarios and efficiency. The hardware section was tested using a multimeter and the MegunoLink serial capture tool, and different methods were used to calibrate each sensor. The software section was tested using both black-box and white-box testing. From the results obtained, the proposed solution was deemed feasible and efficient. In real-life scenarios, this system would be a very good candidate to consider when implementing a voice-controlled home automation solution.
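From the client’s side, switching an appliance reduces to an HTTP request to the web server hosted on the ESP8266; the sketch below uses Python’s requests library for illustration (the real client is the mobile app), and the address and endpoint are placeholders.

```python
import requests

ESP8266_ADDR = "http://192.168.1.80"   # placeholder address of the NodeMCU web server

def set_appliance(name: str, on: bool) -> bool:
    """Ask the ESP8266 web server to switch an appliance on or off."""
    # Placeholder endpoint; the firmware parses the path/query of the request
    response = requests.get(f"{ESP8266_ADDR}/{name}",
                            params={"state": "on" if on else "off"}, timeout=2)
    return response.status_code == 200

print(set_appliance("light", True))
```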

Figure 1. The actual hardware circuit

Figure 2. Casing for the prototype system



Air quality, temperature and humidity monitoring using narrowband IoT

Internet of Things

Luca Ruggier | SUPERVISOR: Prof. Inġ. Carl J. Debono | CO-SUPERVISOR: Dr Mario Cordina COURSE: B.Sc. (Hons.) Computer Engineering

This project sought to develop a dynamic data measurement system that would measure and transmit data over narrowband IoT (internet of things), while incorporating various algorithms to reduce unnecessary power consumption and data transmission. The environmental parameters monitored in this project were temperature, humidity and air quality (PM2.5). The set-up included two sensors to measure the desired quantities, a Quectel NB-IoT modem to transmit data, and an Atmel microcontroller to poll the sensors, pass data to the modem and run the algorithms. The measured data was then transmitted and forwarded to an IoT cloud interface. Three algorithms were implemented, namely: deep-sleep mechanisms within the central microcontroller and transmitting modem; an adjustable sample rate, to reduce power consumption in periods with a lower rate of change in the data; and minimum-variance transmission, to ensure that repeated values are not transmitted, wasting power, as well as to avoid unnecessary data transmission. The third, in particular, helps reduce costs to end-users, given that many providers charge according to the amount of data used. The measured temperature and humidity were also used to model the air-quality sensor’s dependence on temperature and humidity, thus increasing testing accuracy. The temperature and humidity sensor was factory-calibrated, and hence did not need further calibration. The algorithms and hardware set-up were evaluated by comparing the total power draw throughout a 24-hour cycle to that of a system continuously measuring and transmitting data. While a substantial reduction in power usage was noted, the shortcomings of this specific hardware set-up were visible, as such development boards are not optimised for ultra-low power draw. To avoid this in the future, the use of custom printed circuit boards and the ability to cut power to the modem and sensors would help bring about further reductions in power draw. This project was conducted in collaboration with Epic Communications Ltd, which provided the SIM card, the narrowband IoT base station and back-end connection information.
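The adjustable-sample-rate and minimum-variance ideas can be sketched together in a few lines of Python; all constants below are hypothetical.

```python
def next_interval(delta: float, base: float = 60.0,
                  fast: float = 15.0, slow: float = 300.0) -> float:
    """Sample faster when readings change quickly, slower when they are stable."""
    if abs(delta) > 1.0:      # rapid change: tighten the sampling interval
        return fast
    if abs(delta) < 0.1:      # stable: relax the sampling interval
        return slow
    return base

def should_transmit(value: float, last_sent: float, min_delta: float = 0.5) -> bool:
    """Minimum-variance gate: skip transmissions of near-identical values."""
    return abs(value - last_sent) >= min_delta

print(next_interval(delta=0.05))               # -> 300.0 (slow polling)
print(should_transmit(21.4, last_sent=21.2))   # -> False (suppressed)
```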

Figure 1. Hardware configuration



Smart secure homes: A solution to the cyber security threats in our smart homes Jayden Sinagra | SUPERVISOR: Prof. Victor Buttigieg COURSE: B.Sc. (Hons.) Computer Engineering

Internet of Things

A smart home is constituted by various smart devices connected to a network, each with a specific set of functions. These devices tend to simplify the lives of their users by providing services according to the users' requirements, thus making for a more comfortable, convenient and secure home. Notwithstanding the benefits of a smart home, problems occur when the cyber-security measures in the home are not strong enough (due to poor configuration or default passwords, for example) to stop unauthorised persons from compromising a device connected to the network. In addition, some of the devices installed in smart homes do not have enough processing power to implement strong security features. These problems could be solved by placing a security device between the modem connected to the internet service provider (ISP) and the rest of the home network, offering an extra layer of security. This device could act as a firewall and/or an intrusion detection system (IDS).

The aim of this project was to implement such a security device, using either a PC or a Raspberry Pi, which would secure a home network while requiring minimal technical know-how from the user. Snort 3, an IDS featuring a new design and a superset of Snort 2's functionality, was used to warn users if it detected a ping (Internet Control Message Protocol or ICMP) or SYN (half-open) flood attack, as illustrated in Figures 1 and 2. Results obtained during these attacks, such as network throughput, central processing unit (CPU) percentage and memory usage, determined which device was the most efficient and provided the best performance. Snort 3 was also configured to protect the internal network against such attacks. Ultimately, the system's strength could be further evaluated against other types of attacks, such as Hypertext Transfer Protocol (HTTP) and User Datagram Protocol (UDP) flood attacks.
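The rate-based principle behind such flood alerts can be shown in a few lines. The Python sketch below is not Snort 3's implementation; it merely illustrates threshold-based detection over a sliding time window, with made-up limits:

import time
from collections import defaultdict, deque

WINDOW, LIMIT = 1.0, 100   # hypothetical: flag a source exceeding 100 packets/second

history = defaultdict(deque)

def packet_seen(src_ip, now=None):
    # Record one ICMP echo or TCP SYN from src_ip; return True if it looks like a flood.
    now = time.time() if now is None else now
    q = history[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW:   # drop packets that fell outside the window
        q.popleft()
    return len(q) > LIMIT

# Example: a burst of 150 SYNs from one host within a second trips the alert.
print(any(packet_seen("10.0.0.5", now=t * 0.001) for t in range(150)))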

Figure 1. SYN flood alerts on a personal computer

Figure 2. ICMP flood alerts on a personal computer



Towards macro programming of IoT devices and smart contracts Joseph Cefai | SUPERVISOR: Prof. Joshua Ellul COURSE: B.Sc. IT (Hons.) Software Development

Blockchain & Fintech

This project consisted in developing a tool called Smart Tool, which has been designed to ease the burden of programming across internet of things (IoT) devices, smart contracts, and other systems that integrate with them, each of which ordinarily demands a different skill set. By using macro programming, Smart Tool attempts to alleviate some of the burdens associated with programming, enabling programmers to create source code and focus on a single abstract system. The accompanying image shows how the programmer's code would be passed through the tool, to be processed and converted into individual files according to the specified platforms. The ultimate goal was to allow developers to concentrate on business logic instead of being hindered by different systems or components. This would save programmers time by automatically 'connecting the dots' between how various systems communicate. In the long run, Smart Tool could provide increased value if more platforms were to become available (such as Ethereum and Arduino), which would make these single abstract systems cross-platform. While allowing the IoT platform, the blockchain platform (where smart contracts are deployed), and the main platform to function independently, Smart Tool would simplify communication between them. On the main platform, all function calls would be encapsulated within another function ‒ the run-time call ‒ which triggers a function on the other platforms and waits for a response. The run-time call function of Smart Tool would accept three parameters, namely the platform type, the function name, and some platform-specific settings.

At the time of writing, Smart Tool did not provide options for selecting additional platforms, such as Ethereum (blockchain) and Arduino (IoT). However, the tool has been designed in such a way that it could support other implementations. Both the Stratis and Raspberry Pi generators inherit an interface that determines the types of functions they could define. Therefore, if a different platform were required, an experienced programmer familiar with that platform would need to implement the interface provided by Smart Tool and write the functions in that platform's language. The run-time call function discussed previously (i.e., the function that would enable platforms to communicate with each other) would also require an implementation in addition to the generator. As with the generator, this code must be language-specific, and it should depend on whichever protocols would be the most appropriate, for example, Hypertext Transfer Protocol (HTTP) or Transmission Control Protocol (TCP). At present, the run-time call handles two cases, Stratis and Raspberry Pi, which communicate via HTTP requests. In comparing two systems ‒ that is, one generated by traditional methods and the other by Smart Tool ‒ it was determined that the lines of code generated by both were nearly identical. This suggests that the tool has a low overhead and that it is highly efficient. In the future, it is likely that more platforms would be supported, thus allowing users to benefit from a cross-platform tool and to create systems according to their preference.
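The generator interface described above can be sketched as follows. The class and method names here are hypothetical (the actual tool generates Stratis C# and Raspberry Pi code); the sketch only illustrates how per-platform generators plug into one common interface:

from abc import ABC, abstractmethod

class PlatformGenerator(ABC):
    # Hypothetical interface that each platform-specific generator implements.

    @abstractmethod
    def emit_function(self, name: str, settings: dict) -> str:
        ...

    @abstractmethod
    def emit_runtime_call(self, target_platform: str, function: str) -> str:
        ...

class RaspberryPiGenerator(PlatformGenerator):
    def emit_function(self, name, settings):
        return f"def {name}():  # generated IoT-side stub\n    pass\n"

    def emit_runtime_call(self, target_platform, function):
        # The prototype's platforms communicate over HTTP requests.
        return f"requests.post('http://{target_platform}/call/{function}')"

gen = RaspberryPiGenerator()
print(gen.emit_function("read_sensor", {}))
print(gen.emit_runtime_call("stratis-node", "storeReading"))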

Figure 1. Structural overview



Assessing the feasibility of tokenisation in healthcare Andrew Fauré | SUPERVISOR: Dr Lalit Garg COURSE: B.Sc. IT (Hons.) Computing and Business

Blockchain & Fintech

This project consisted in developing a framework for accessing and exchanging patient data using tokenisation, by placing patients' data in a decentralised database that could only be accessed through tokens. The proposed framework would enable organisations such as hospitals, public health authorities, private doctors, and insurance companies to access patients' health data on a decentralised database strictly upon having obtained the authorisation to access it. Therefore, when the patient authorised the sharing of their data, this would initiate a transaction between the external party and the patient, whereby the patient would receive tokens and the external party would in turn receive access to the individual patient's data. Tokens could be used by patients to pay for hospital services. The patient could acquire tokens by purchasing them, by collecting tokens from a previous visit, or by sharing their data with external parties. The accompanying image describes the different interactions between the hospital, patient and external organisations, and shows when tokens could be exchanged and used. Besides tokenising patients' health data and giving them an incentive for sharing their data, the proposed framework would also allow patients to gain complete control over their data. With the use of smart contracts, the patient would be able to decide with whom their data could be shared, and to specify to which health data the external entity would be allowed access. This would enable patients to monitor who is accessing their data.

This project also sought to create a custom smart contract for healthcare professionals to view patients' data. The smart contract was set up so that, every time a patient's data would be updated or modified after a hospital visit, a new asset would be created to keep any new information distinct from one visit to another, and to allow the patient to view their data more clearly. A survey was published to explore the opinion of the general public regarding healthcare data, and to gauge the potential uptake of tokens; it reached 151 persons. A separate survey was circulated among healthcare professionals only, with the purpose of establishing how they use health data in their work and how the introduction of tokenised health data would affect them; this survey reached 46 professionals. The premise of this research was that tokenising healthcare would greatly enhance healthcare systems. The data stored on the blockchain would be more secure, while the smart contracts would be conducive to efficient data sharing. Smart contracts could also allow the patient to decide which data may be shared, and keep them updated as to which entities would have gained access to their data.
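The token-mediated access rule can be sketched briefly. The following Python model is purely illustrative, with invented names and token amounts; in the proposed framework this logic would live in a smart contract over the decentralised database:

class PatientRecord:
    def __init__(self, owner):
        self.owner = owner
        self.tokens = 0
        self.authorised = set()   # external entities granted access by the patient

    def grant_access(self, entity, reward_tokens=1):
        # The patient authorises an entity and receives tokens in exchange.
        self.authorised.add(entity)
        self.tokens += reward_tokens

    def read(self, entity):
        if entity != self.owner and entity not in self.authorised:
            raise PermissionError(f"{entity} has no access to this record")
        return "health data"

record = PatientRecord("patient-1")
record.grant_access("insurer-A")
print(record.read("insurer-A"), "| patient token balance:", record.tokens)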

Figure 1. Tokenisation in the healthcare system



Towards seamless cross-platform smart-contract development Stefano Schembri | SUPERVISOR: Prof. Joshua Ellul COURSE: B.Sc. IT (Hons.) Software Development

Blockchain & Fintech

With the growing interest in blockchain technologies, smart-contract development has gained more and more traction in recent years. This is an exciting new field, which holds great potential in providing solutions for significant real-world problems. The novelty of this field stems from the decentralised and trustless nature of blockchain, attributes that other software solutions have lacked to date. However, developing solutions within a new field often comes hand-in-hand with a lack of tools for developing the desired solutions in an efficient manner. Currently, a smart-contract developer is required to keep up with the rapid evolution of the related technologies and the technical differences of a multitude of platforms, such as smart-contract language versions and different blockchain platforms. Keeping up to date with all the technologies and their frequent updates tends to hinder the developer's ability to approach the solution in a high-level manner and focus on the actual logic of the program. Moreover, the issue is exacerbated when developers are required to write similar code split across multiple files in order to support different platforms. This research set out to address the latter issue primarily by means of a framework that would allow developers to combine common code, which is shared between multiple platforms, with unique code for specific platforms, together in a single file. The developer would simply add annotations to the code in this file to specify to which platform the different blocks of code belong. The use cases in this study focused primarily on the Solidity smart-contract language, and mainly targeted the issue of frequent changes being released across different Solidity versions. These changes would often result in older code not working as expected, or even not compiling at all.

The work being proposed also analyses situations where a smart-contract developer might want to deploy their Solidity smart contract onto multiple blockchain platforms. Traditionally, a developer would have to create different files containing duplicate code for all the targeted platforms. Using the proposed framework, the developer could instead create one file and specify for which platform each code block would be intended. The framework would then automatically generate the necessary files with the deployable smart-contract code for all the different platforms. A use-case study of a smart contract designed to be deployed across six different platforms confirmed that, when using the proposed approach, developers could generate the deployable smart-contract code for all the different platforms by feeding just one file into the framework. The single file would consist of less than half the lines of code needed in a more traditional approach. This significant reduction in lines of code would also translate into less time and effort being required to maintain code for different platforms and versions, allowing developers to shift their focus towards high-level implementation. The above results show that, by using the proposed framework, developers would be able to write smart contracts intended for different platforms and versions in a seamless manner, focusing on the high-level logic.
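The single-file idea can be illustrated with a short sketch. The annotation syntax below (the '// @platform:' comments) is hypothetical, not the framework's actual notation; the Python snippet simply shows how one annotated source could be split into per-platform files:

import re

SOURCE = """\
// @platform: common
uint total;
// @platform: ethereum
function deposit() public payable { total += msg.value; }
// @platform: tron
function deposit() public payable { total += msg.value; }
"""

def split_by_platform(source):
    common, per_platform, current = [], {}, None
    for line in source.splitlines():
        m = re.match(r"//\s*@platform:\s*(\w+)", line)
        if m:
            current = m.group(1)
        elif current == "common":
            common.append(line)
        elif current:
            per_platform.setdefault(current, []).append(line)
    # Each deployable file is the shared code followed by its platform-specific code.
    return {p: common + lines for p, lines in per_platform.items()}

for platform, lines in split_by_platform(SOURCE).items():
    print(f"--- {platform} ---")
    print("\n".join(lines))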

Figure 1. Structure of a smart-contract source file



Smart-contract proxy analysis Tony Valentine | SUPERVISOR: Dr Neville Grech COURSE: B.Sc. (Hons.) Computing Science

Figure 1. Dataflow for diamond-pattern fallback function

Blockchain & Fintech

The evolution of smart-contract protocols, with respect to both size and complexity, has led to the creation of new design patterns, centred on modularity, maintainability and upgradeability. One such emerging pattern in the Ethereum space is the diamond pattern [1]. The diamond pattern is analogous to a reverse proxy in Web2 infrastructure, as it provides a singular endpoint to a smart-contract protocol whose implementation is split across multiple smart contracts. The state (storage) across the implementation contracts is consolidated in the proxy contract through the use of the delegatecall opcode. Although mechanisms exist to ensure implementation contracts could operate over segmented sections of the storage (state), a portion of the state would always remain shared and mutable. Incompatibilities in the manipulation of these storage variables across implementation contracts could introduce unique vulnerabilities, which might go unnoticed when observing a single contract. Currently available state-of-the-art static-analysis tools do not take into account the unique intricacies of having shared mutable state across multiple smart contracts. Clairvoyance [2] is arguably the one exception to this. However, its application is currently limited to the information flow of calls between contracts, rather than unified multi-contract protocols.

This study has identified a general technique for multi-contract analysis, through the modularisation of the Gigahorse [3] analysis framework and the propagation of storage facts between smart contracts during analysis execution. In fact, the project proposes a new tool called SOuL-Splitter, which generates multi-contract evaluation test sets through automated decomposition of existing smart contracts.
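The shared-storage hazard at the heart of this analysis can be modelled in a few lines. The Python sketch below (with invented slot assignments) mimics two facet contracts writing through delegatecall into the same proxy storage, showing how a layout mismatch silently corrupts state:

# The proxy's storage: one flat mapping of slots to values, shared by all facets.
proxy_storage = {}

def facet_a_set_owner(storage, owner):
    # Facet A's layout: slot 0 holds the contract owner.
    storage[0] = owner

def facet_b_reset_counter(storage):
    # Facet B's layout: slot 0 holds a counter. Same slot, different meaning.
    storage[0] = 0

facet_a_set_owner(proxy_storage, "0xOwner")
facet_b_reset_counter(proxy_storage)
print(proxy_storage[0])   # 0 -- the owner set by facet A has been clobbered

A single-contract analysis of either facet would find nothing wrong; the fault only emerges once storage facts are propagated across both contracts.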

Figure 2. Multi-contract analysis evaluation

REFERENCES
[1] N. Mudge, "EIP-2535: Diamonds, Multi-Facet Proxy", Ethereum Improvement Proposals, 2022. [Online]. Available: https://eips.ethereum.org/EIPS/eip-2535 [Accessed: 01 Apr 2022]
[2] Y. Xue, M. Ma, Y. Lin, Y. Sui, J. Ye and T. Peng, "Cross-Contract Static Analysis for Detecting Practical Reentrancy Vulnerabilities in Smart Contracts," 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE), 2020, pp. 1029-1040.
[3] N. Grech, L. Brent, B. Scholz and Y. Smaragdakis, "Gigahorse: Thorough, Declarative Decompilation of Smart Contracts," 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE), 2019, pp. 1176-1186, doi: 10.1109/ICSE.2019.00120.



Towards seamless multisig wallet key management

Blockchain & Fintech

Anthony Zammit | SUPERVISOR: Prof. Joshua Ellul COURSE: B.Sc. IT (Hons.) Software Development

The main focus of this project was on investigating a more secure way through which people could easily store and transfer their cryptocurrencies to other wallets, even in the event of attacks. The concept was tackled by creating an interface that could interact with the Ethereum blockchain, allowing a user to create a multisignature (or multisig) wallet. In order to create this type of wallet, the user would need to assign multiple wallet addresses to be linked to the new multisig wallet. By doing so, the multisig wallet owner/s would only be able to withdraw cryptocurrency when authorised by the specified number of signatures. This ensures that no one would be able to access the crypto without having access to the other assigned wallets, thus making it a more secure environment for storing crypto.

Besides ensuring a more secure wallet, a multisig approach brings various other benefits. One of these is that the interface would allow the user to create a multisig wallet by simply specifying the multisig owner (who would be the person paying any fees required for creating the wallet) and the addresses of the signers assigned to the new wallet. These keys would all be managed seamlessly and automatically for the user from the back end. A threshold for the new multisig wallet would also be required. This refers to the number of addresses, from the initial addresses assigned to the multisig wallet, that must sign in order for any money to be transferred out of the multisig wallet.

The interface would not only allow the user to create multisig wallets but would also offer other facilities, such as creating new regular crypto wallets and transferring crypto from a multisig wallet to any other wallet(s). To transfer from one multisig wallet to another wallet, the user would merely need to input the main details, namely: the multisig address; the signer key; the amount of money to be transferred; and to whom it is to be transferred. Thus, the multisig transaction could be conducted seamlessly. A link is also given to the user after each input, offering them transparency of all the interactions with the blockchain.

The system created was tested heavily to ensure that a transaction would be executed perfectly. This is of particular significance because, since the system would be handling cryptocurrencies, any errors in the system would almost certainly lead to a loss of money. In conclusion, the proposed system would allow users to manage their keys seamlessly in order to create a more secure wallet. This would enable users to store and transfer money in a very secure manner, without the fear of losing their entire wallet in cases of attacks or loss of a single key.
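The m-of-n approval rule at the core of the system can be sketched as follows. This is a toy Python model with invented names, not the actual Ethereum implementation:

class MultisigWallet:
    # Toy model of the m-of-n signature rule; the real wallet runs on Ethereum.

    def __init__(self, signers, threshold):
        assert 1 <= threshold <= len(signers)
        self.signers, self.threshold = set(signers), threshold
        self.approvals = {}   # transaction id -> set of approving signers

    def approve(self, tx_id, signer):
        if signer not in self.signers:
            raise PermissionError("not a registered signer")
        self.approvals.setdefault(tx_id, set()).add(signer)

    def can_execute(self, tx_id):
        return len(self.approvals.get(tx_id, set())) >= self.threshold

wallet = MultisigWallet(["alice", "bob", "carol"], threshold=2)
wallet.approve("tx-1", "alice")
print(wallet.can_execute("tx-1"))   # False: one signature of the required two
wallet.approve("tx-1", "bob")
print(wallet.can_execute("tx-1"))   # True: threshold reached, the transfer may proceed

An attacker who compromises a single key can approve a transaction but never execute it alone, which is precisely the property that protects the wallet.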


Figure 1. Flowchart of the process of creating a new multisig wallet



Grammatical inference applications in bioinformatics Malcolm Agius | SUPERVISOR: Dr Kristian Guillaumier COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Data Science

This bioinformatics-oriented project tackled the task of classifying amyloidogenic hexapeptides. Amyloids are proteins that may form fibrils instead of functioning as intended. These proteins may build up in organs such as the heart, kidneys, liver, and brain, and are responsible for a group of diseases referred to as amyloidosis, which includes Alzheimer's disease, Parkinson's disease and Type II diabetes. In essence, a protein consists of one or more chains of amino acids called polypeptides. In the field of bioinformatics, polypeptides could be represented as sequences of amino acids, where each amino acid is represented by its unique symbol; the amino acid alphabet contains 20 symbols, each representing a different amino acid. A hexapeptide is a chain of six amino acids, and it has been shown that short chains of six amino acids may be responsible for amyloidogenicity. Hence, these were sufficient for the classification problem at hand. This is particularly significant, as the majority of publicly available data sets of labelled peptides are hexapeptide data sets (data sets of labelled chains of six amino acids). It is currently too expensive and time-consuming to experimentally assess all possible sequences of hexapeptides as amyloid or non-amyloid. Therefore, it would be useful for researchers to be able to rely on computational models that could classify hexapeptides in a short amount of time, aiding them in their research on these diseases. Indeed, this project focused on investigating the application of grammatical inference techniques to this problem. Grammatical inference is a field of study in which a formal language is inferred from a set of example strings that belong to a language, and a set of counterexamples that do not belong to the language. In simpler terms, grammatical inference techniques infer a set of 'rules' ‒ usually in the form of a mathematically constructed abstract machine ‒ that could be used to classify a given sequence of amino acid symbols as amyloid or non-amyloid.

The task was accomplished by first researching the field of grammatical inference, in order to gain a solid background of the fundamentals of the field. The next step was an assessment of other studies that used the grammatical inference approach to solve this problem, with a view to replicating their experiments. With the acquired background knowledge of the field, it was possible to undertake an interpretation of the results obtained in the replicated studies. The final step in the project consisted in comparing the grammatical inference techniques implemented with more traditional machine learning methods, with the aim of identifying any differences in performance among the different methods.

Figure 1. Table displaying the 20 amino acids and their corresponding symbols [1]
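As a point of comparison with the grammatical inference techniques, a traditional machine learning baseline for this task can be sketched in a few lines. The peptides and labels below are a hypothetical toy sample, not the project's data set:

from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

# Toy data: hexapeptides labelled 1 (amyloid) or 0 (non-amyloid).
peptides = ["STVIIE", "KLVFFA", "GLMVGG", "PGSQAS"]
labels = [1, 1, 0, 0]

# Each hexapeptide becomes six categorical features, one per residue position.
X = [list(p) for p in peptides]
encoder = OneHotEncoder(handle_unknown="ignore")
model = LogisticRegression(max_iter=1000).fit(encoder.fit_transform(X), labels)

print(model.predict(encoder.transform([list("KLVFFA")])))   # expected: [1]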

REFERENCES
[1] Z. Zheng, Y. Chen, L. Chen, G. Guo, Y. Fan and X. Kong, "Signal-BNF: A Bayesian Network Fusing Approach to Predict Signal Peptides", Journal of Biomedicine & Biotechnology, vol. 2012, p. 492174, 2012.





SADIP: Semi-automated data integration system for protein databases Jurgen Aquilina | SUPERVISOR: Mr Joseph Bonello COURSE: B.Sc. IT (Hons.) Software Development

Data Science

Biologists frequently need to combine information from different databases, and currently do so by manually following hyperlinks between databases and using the distinct interfaces each one provides. This is cumbersome and time-consuming. Moreover, biologists require information on different aspects of proteins, such as the protein structure, sequence, function, and its interactions with other biomolecules. With the aim of facilitating this task, past research in data integration has attempted to provide a single access point for all required biological data. One approach is data warehousing, where data from different databases is combined within a centralised repository. However, the current state of the art in biological data warehousing requires bespoke software development and maintenance for each database. This is unfeasible, since the data is spread over many databases that are constantly changing.

This study aims to eliminate the process outlined above through a semi-automated data integration system. Given user-defined configurations, the developed prototype could automatically retrieve information from the original databases and load it onto a graph database. Using Apache Spark and the Hadoop distributed file system (HDFS) enables horizontal scalability within the preprocessor, transformer and merger components. The system would then offer a user interface that presents all identified information for a protein in a unified manner. The graph database also allows technical users to code more complex questions, such as: "Which proteins are known to interact with proteins involved in Alzheimer's disease?" In view of time and space constraints, the prototype was applied to a restricted set of protein databases and file formats. However, the results obtained were promising and, with further development, the tool could improve the time required to find protein information. Further work on the system would include: extending the transformer module to handle further cases; improving the performance of retrieving structural domains; and applying the tool to more databases and file formats.
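Once the merged graph is in place, questions such as the one above can be expressed directly as graph queries. The sketch below uses the Python Neo4j driver; the node labels, relationship types and connection details are assumptions for illustration, not the prototype's actual schema:

from neo4j import GraphDatabase

# Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

QUERY = """
MATCH (p:Protein)-[:INTERACTS_WITH]-(q:Protein)-[:ASSOCIATED_WITH]->(d:Disease)
WHERE d.name = $disease
RETURN DISTINCT p.name AS protein
"""

with driver.session() as session:
    for record in session.run(QUERY, disease="Alzheimer's disease"):
        print(record["protein"])
driver.close()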

Figure 1. High-level system architecture

Figure 2. Subset of the information retrieved for SCO5223



Crime analysis Karl Attard | SUPERVISOR: Dr Joel Azzopardi COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Data Science

This project focuses on using data-mining techniques within the crime-analysis domain, which has been increasing in popularity as many data sets become publicly available to researchers. Crime is a socio-economic problem, with an ongoing impact on quality of life and economic growth [1]. Criminology is also one of the most important fields to which to apply data-mining techniques, as these could produce significant results [2]. In fact, providing accurate and reliable crime predictions assists law enforcement authorities and other entities in effectively preventing crimes from recurring, while handling them effectively if and when they occur [3].

The study focused on three objectives, namely: predicting the type of crime (crime type prediction), predicting the number of future crimes that could occur (crime rate prediction), and establishing how much data would be needed to train these models. Two distinct American public data sets were chosen for this study ‒ one focusing on the city of San Francisco and the other on New York City (NYC). Each data set provided geographic, temporal, demographic, and historical crime data, and each was preprocessed differently, depending on the objective in question.

For the first objective, crime type prediction, a number of models were implemented. These ranged from traditional machine learning to deep learning methods, all of which could classify the crime category based on the input data, mainly: the time of crime occurrence, its location and whether an arrest was made. The results obtained were then compared to those presented in an existing research paper, with the twofold aim of replicating the findings in the said paper and improving upon its classification performance. It is worth noting that the current experiment created models that classified the crimes to a higher degree of accuracy than the said research paper.

The second objective concerned crime rate prediction, which essentially means forecasting the number of future crimes in a particular place. As with the first objective, the experiment set out to replicate and improve upon the performance of the models from the literature (which utilised the same NYC data set) by implementing both machine learning and deep learning models. This task took a regression-based approach, meaning that it predicted a continuous value (the expected future crimes), as opposed to the classification-based approach above (which predicted a category). Furthermore, additional data was integrated for this objective, which helped create models offering higher accuracy. As with the first objective, the results obtained were compared to a research paper; again, the experiment yielded better predictions overall.

The third objective required establishing the minimum amount of data required by the learning models in order to produce effective results. For each objective, the original size of the training set was reduced, while keeping the test set unchanged, so as to check how much data would be required by the developed models to retain their optimal performance. The results obtained were around 70% f1-score for crime type prediction and under 6% MAPE for crime rate prediction.

Figure 1. Methodology overview of the project
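The classification side of the first objective can be sketched briefly. The snippet below is an illustrative baseline with a hypothetical file path and pre-encoded feature columns, not the project's exact models:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical layout: numerically encoded time, location and arrest features.
df = pd.read_csv("sf_crime.csv")   # placeholder path
X = df[["hour", "weekday", "district_code", "arrest"]]
y = df["category"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))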

REFERENCES
[1] A. Bogomolov, B. Lepri, J. Staiano, N. Oliver, F. Pianesi and A. Pentland, "Once upon a crime: Towards crime prediction from demographics and mobile data," 2014. doi: 10.1145/2663204.2663254
[2] K. Zakir Hussain, M. Durairaj and G. Rabia Jahani Farzana, "Criminal behavior analysis by using data mining techniques," IEEE International Conference on Advances in Engineering, Science and Management (ICAESM 2012), 2012, pp. 656-658.
[3] J. Y. Lee, U. Kang, D. Koutra and C. Faloutsos, "Fast anomaly detection despite the duplicates," Proceedings of the 22nd International Conference on World Wide Web (WWW '13 Companion), New York, NY, USA, 2013, pp. 195-196. doi: 10.1145/2487788.2487886



Applying spatial data-modelling techniques and machine learning algorithms to road injury data for increased pedestrian safety Olesya Dmitrievna Borisova | SUPERVISOR: Dr Michel Camilleri COURSE: B.Sc. IT (Hons.) Computing and Business

Data Science

Nowadays many people seek to live a healthy lifestyle, which in turn is increasing awareness of active mobility and its importance to our health. The significance of safe pedestrian mobility is also recognised at a national level, as could be seen through the rise of incentives promoting car-free travel options and restricting motor access to city centres. While the above-mentioned incentives are undoubtedly an encouraging development, pedestrians still face a daily risk of injury from vehicles. This was particularly evident in the first four months of 2022, in which a total of 12 road-related fatalities occurred, 5 of them involving pedestrians being struck by a vehicle. Furthermore, due to an exponential rise in vehicles nationwide, the number of vehicles is expected to outnumber the residents. Therefore, development in this field may contribute to better planning and synergy among all road users.

This study focused on the road-traffic injury rate vis-à-vis pedestrians, by investigating collision patterns that could put pedestrians at risk. This was achieved by modelling the road traffic network and other related features, such as road-safety infrastructure, attractions, and points of interest. The model could then be used for visualisation purposes, and with the application of data-analytic techniques to allow for a more comprehensive investigation of the injury trends and their relation to the physical infrastructure. The outcome of this artefact would in turn contribute to a better understanding of issues and hazards within a road network, as well as serve as a basis for further development. The model developed for this study would allow viewing the various layers of geospatial data for Malta, with an analysis of point density and time-series clustering. The investigation of incidents and related features by means of spatial analysis may shed light on the dependent variables and injury-prone areas from historic data.

Another aspect of the work concerns the application of machine learning (ML) techniques to estimate future injury trends from the historic data set. A data set comprising traffic-related injury data sourced from the Malta Police Force provides an insight into the injury trends from 2014 to 2018. The data required cleaning, grouping and allocation within a database environment, which was then addressed using SQL commands for retrieval. The data tuples were also geocoded to obtain the latitudinal and longitudinal coordinates, for visual analysis and for placing them in the context of neighbouring features, such as schools, churches and shops. The ML-based element may be utilised to estimate the potential risk of injury at given locations and times. This requires the adjustment of the data to a time-series format to allow for prediction, based on the behaviour patterns of previous years. The analysis, techniques and technologies applied in the experimentation part of the study demonstrate how traffic-injury data could assist in planning for and anticipating hazards in order to safeguard road users. The information derived could also be made available to pedestrians, to be utilised in finding the safest walking route through a user-centred application.
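The hotspot side of the analysis can be illustrated with a short sketch. The file layout and the number of clusters below are assumptions, intended only to show how geocoded incidents can be grouped spatially:

import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical layout: one geocoded injury record per row.
injuries = pd.read_csv("injuries.csv")   # placeholder path
coords = injuries[["latitude", "longitude"]]

# Group incidents into k spatial clusters; cluster centres approximate hotspots.
kmeans = KMeans(n_clusters=10, n_init=10).fit(coords)
injuries["hotspot"] = kmeans.labels_
print(injuries.groupby("hotspot").size().sort_values(ascending=False).head())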

Figure 1. Distribution of geospatial data, where the red circles with a black cross indicate injuries between 01 January and 15 June 2014 in The Three Cities area. Other attributes: black spots indicate major/main intersections, light blue corresponds to points of interest, dark blue indicates museum points and perimeters, and yellow indicates church perimeters.



Analysing diverse algorithms performing music genre recognition Jamie Buttigieg Vella | SUPERVISOR: Mr Joseph Bonello COURSE: B.Sc. IT (Hons.) Computing and Business

Data Science

This project set out to determine the extent to which music genre recognition (MGR) could be performed, by evaluating different machine learning algorithms. The algorithms were applied to a curated benchmark of audio tracks with corresponding genre labels, and compared to similar models documented in the literature. Nowadays, we rely greatly on the applications we use for listening to music to discover (sometimes new) music that matches our taste. In the process, we allow algorithms to introduce us to new music genres that may be of interest to us. In view of this, it is especially important that these applications achieve a good understanding of our listening habits, taste in genre and connections with the performers, among other factors. A musical piece has various characteristics by which it could be described. Hence, songs with similar characteristics could be organised together in a single class, referred to as a musical genre. Although defining a genre is itself subjective, the musical genre is one of the most important descriptors of the songs themselves. The boundaries between one musical genre and another are not standard, being based instead on user perception. In fact, the genre labels that were chosen as targets for the training and testing of the algorithms were extracted from user-submitted, albeit curated, labels, precisely because there currently exists no official taxonomy of genres.

There are countless features that could be extracted from a digital version of an audio signal, such as the spectral centroid, zero-crossing rate and Mel-frequency cepstral coefficients (MFCCs), each describing some property of the signal in a different way. For this project, 13 MFCCs were extracted from each audio track, as these coefficients were found to be among the best features for approximating the human auditory system, in that they can describe timbral features, as well as features related to frequency, rhythm and amplitude. Subsequently, the MFCC values were fed into the different models used to perform MGR, which could predict the correct genre/s of a music track from among popular genre labels. There is also a variety of machine learning models that could perform MGR. This project implemented and evaluated three of the most popular models, namely: an artificial neural network (ANN), a convolutional neural network (CNN) and gradient boosting machines (GBMs). The ANN uses a model that is very loosely based on the human nervous system and is the earliest form of artificial intelligence to be developed, making it a very popular choice that is also widely used in this field. The CNN is a deep learning model that is used to classify images and similar three-dimensional data. GBMs are a relatively modern model based on an ensemble of models offering a flexible application scope, and studies have shown them to be a promising technique for MGR.
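Extracting the 13 MFCCs per track is a one-line operation with a standard audio library. The following sketch uses librosa with a placeholder file path; summarising each coefficient over time is one common simplification, not necessarily the exact pipeline used here:

import librosa

# Load a 30-second excerpt and extract 13 MFCCs for it.
y, sr = librosa.load("track.wav", duration=30.0)   # placeholder path
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
print(mfcc.shape)   # (13, number of frames)

# One fixed-length feature vector per track for the ANN, CNN and GBM models.
features = mfcc.mean(axis=1)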

Figure 1. A spectrum of 13 MFCCs for a 30-second audio track



Minecraft settlement generator Nathan Camilleri | SUPERVISOR: Dr Vanessa Camilleri | CO-SUPERVISOR: Dr Antonios Liapis COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Data Science

Procedural content generation (PCG) refers to algorithms used to automate the creation of game material that an individual would typically produce. Despite being used in various projects, PCG is still widely acknowledged as an under-researched area in artificial intelligence (AI). Hence, this study has attempted to offer a deeper understanding of PCG in Minecraft by seeking to define the problem thoroughly and reviewing the current state-of-the-art research. Heavily motivated by the Generative Design in Minecraft (GDMC) competition, the proposed solution is a Minecraft settlement generator that uses PCG to emulate humans in constructing settlements. By constantly analysing the changing environment, the generator ensures that structures are placed in such a way as to avoid overlaps. The settlement also includes districts, each generating a different ambience, in which structures adhere to a consistent architectural and aesthetic style. Moreover, more prominent structures are created towards the centre of the settlement, while smaller buildings are generated on the outskirts. Each newly created building is also connected to the settlement by producing the shortest path found between the newly generated building and the already existing buildings; this is achieved by utilising the A* pathfinding algorithm. The analysis of the proposed settlement generation employed a variety of methodologies. These included an expressive range analysis, and the setting up of a judging panel to assist in emulating the judging process of the GDMC competition. Three different settlements were used for this

process, two of which were the first and second-placed settlements from the competition's 2018 iteration, the third being the proposed generator. After the judges assessed the three settlements, a focus group session was conducted to obtain comprehensive insights about the generation under review. The results indicated that the implementation performed well, gaining the highest score in the aesthetics category and obtaining a good overall score.

Figure 1. Diagram illustrating the layout of a possible settlement generation using the implementation under review

Figure 2. Screenshot of a settlement generation using the proposed implementation
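The path-connection step described above lends itself to a compact sketch. Below is a minimal A* implementation over a 2D walkability grid with a Manhattan-distance heuristic; the generator itself operates on Minecraft terrain, so this is an illustration of the algorithm rather than the project's code:

import heapq

def a_star(grid, start, goal):
    # Shortest path on a grid where 0 marks walkable cells.
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, best = [(h(start), 0, start, [start])], {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                if cost + 1 < best.get((r, c), float("inf")):
                    best[(r, c)] = cost + 1
                    heapq.heappush(frontier,
                                   (cost + 1 + h((r, c)), cost + 1, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0], [1, 1, 0], [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))   # routes around the blocked middle row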

REFERENCES
[1] C. Salge, M. C. Green, R. Canaan and J. Togelius, "Generative design in Minecraft (GDMC): Settlement generation competition," Association for Computing Machinery, 2018.
[2] G. Smith, "Understanding procedural content generation: A design-centric analysis of the role of PCG in games," Association for Computing Machinery, 2014, pp. 917-926.



Analysis of the relation between words within the Voynich Manuscript Andrew Caruana | SUPERVISOR: Dr Colin Layfield COURSE: B.Sc. IT (Hons.) Software Development

Data Science

This research set out to analyse words within the Voynich Manuscript (VM), a text that has defied amateurs and academics alike since its discovery. The main purpose of this study was to seek to identify and determine any meaning hidden within its pages. The analysis employed natural language processing (NLP) techniques to remove stop words ‒ words that mainly serve a sole grammatical purpose ‒ and to plot co-occurrence graphs. Furthermore, these techniques were also applied to texts in medieval/early modern English and medieval Italian, in order to identify any linguistic patterns similar to those in the VM for comparison purposes. All the documents were also randomly scrambled, to determine whether any detected patterns would hold in random text.

The documents were preprocessed to facilitate analysis, by removing stop words, punctuation, and digits. Subsequently, co-occurrence graphs were generated for each document, containing words, word pairs and the frequency of use. Using this data, an algorithm was created to calculate the skewed pairs of each document appearing beyond a specific frequency threshold. For example, in the case of a word pair P and its inverse pair Q, a skewed pair would be one where P occurs much more frequently than Q, if at all. This skewed-pair calculation was done for all documents and their randomised counterparts. The results were plotted onto a graph showing the ratio of skewed pairs to regular pairs, expressed as a percentage. These results indicated a significant difference between the English and Italian texts and their random counterparts, with the normal versions scoring much higher. For the VM the difference was less pronounced, with its randomised version scoring roughly half as much as the normal variant. The outcome of this study would suggest that the VM is not a randomly generated text, which in turn could suggest that the VM might be a code or cipher.
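The skewed-pair calculation can be reproduced in a few lines. The sketch below counts ordered co-occurrences of adjacent words and flags pairs far more frequent than their inverse; the sample words are common Voynich (EVA transcription) tokens, and the thresholds are illustrative:

from collections import Counter

def skewed_pairs(tokens, threshold=5, ratio=3.0):
    # Flag ordered pairs occurring far more often than their inverse pair.
    pairs = Counter(zip(tokens, tokens[1:]))
    skewed = []
    for (a, b), n in pairs.items():
        inverse = pairs.get((b, a), 0)
        if n >= threshold and n >= ratio * max(inverse, 1):
            skewed.append(((a, b), n, inverse))
    return skewed

text = "daiin chedy qokey daiin chedy qokey daiin chedy qokey".split()
print(skewed_pairs(text, threshold=3, ratio=2.0))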

Figure 1. Co-occurrence graph for the word ‘colle’ in Dante’s Divina Commedia (Inferno)

Figure 2. Ratio of skewed and regular pairs



Satellite imagery data analytics for remote-based sensing for hospital resource-requirement forecasting Bernard Cassar | SUPERVISOR: Dr Lalit Garg COURSE: B.Sc. IT (Hons.) Computing and Business

Data Science

It is becoming evident that the traffic of hospital admissions and patient length of stay are on the increase. More specifically, different departments, particularly the emergency ward, tend to experience long and perpetual queues at certain periods. In such cases, effectively managing the resources of the relevant medical departments becomes a daunting task. Furthermore, economists have also been expressing their concern about how to make the best use of scarce resources in hospitals. In certain countries, this issue is reaching unsustainable levels and is being endured by the general population [1-2]. This situation has brought about a growing need to adequately forecast hospital resource requirements, particularly in recent years. Through the sustainable use of modelling, the allocation of resources could be significantly improved, so that the incidence of bed crises could be reduced to a minimum, or even avoided completely [3].

In parallel, the volume of satellite imagery has also grown significantly, as more satellites have been placed in orbit to monitor the status of our planet. This has facilitated the creation of various applications through which satellite data could be applied to this field of research. The availability of this expanding satellite data presents an ideal opportunity for discovering novel approaches, specifically data-driven methodologies; it has become far more feasible to undertake projects designed with the sole intention of tackling and solving real-world issues, such as the one at hand. Indeed, the Sentinel-3 Earth observation satellite, developed by the European Space Agency as part of the Copernicus Programme, was used to capture the land surface temperature (LST) data for the purpose of this project.

The work behind this study attempted to address the above-mentioned issues by identifying a tentative relationship between fluctuations in LST and hospital discharge data. The hospital data comprised anonymised information, including the admission and discharge data, as well as the admitting ward and discharging ward of every patient over a specific period of time. The LST data was compiled on a day-to-day basis, spanning one whole year. Moreover, an inset figure was drawn on each individual LST data frame to cover the geographical region surrounding the Maltese borders. This was done to obtain the best accuracy possible when comparing the two data sets simultaneously. These satellite images were compared, and changes in temperature were recorded over time. Ultimately, the results obtained from the experiment could be deemed useful by health professionals and policymakers who may require insight into the impact of temperature movements on hospital admission figures.

Figure 1. Process overview
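The comparison between the two daily series can be sketched with a simple lagged-correlation check. The file names, column names and lags below are hypothetical, illustrating the principle rather than the project's exact procedure:

import pandas as pd

# Hypothetical daily series: mean land surface temperature and admission counts.
lst = pd.read_csv("lst_daily.csv", parse_dates=["date"])           # placeholder paths
admissions = pd.read_csv("admissions_daily.csv", parse_dates=["date"])

merged = lst.merge(admissions, on="date")
# Lag the temperature by a few days, since any effect on admissions may be delayed.
for lag in (0, 3, 7):
    r = merged["lst_mean"].shift(lag).corr(merged["admissions"])
    print(f"correlation at {lag}-day lag: {r:.2f}")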

REFERENCES
[1] "Active ageing: a policy framework", World Health Organization, 2002. [Online]. Available: https://apps.who.int/iris/handle/10665/67215
[2] "A Disease-based Comparison of Health Systems - What is Best and at What Cost?", OECD Publications, France, 2003. [Online]. Available: https://www.oecd-ilibrary.org/social-issues-migration-health/a-disease-based-comparison-of-health-systems_9789264100053-en
[3] M. Mackay and M. Lee, "Choice of Models for the Analysis and Forecasting of Hospital Beds", Health Care Management Science, vol. 8, no. 3, pp. 221-230, 2005. doi: 10.1007/s10729-005-2013-y



Safe navigation and obstacle avoidance in the real world when immersed in a virtual environment André Desira | SUPERVISOR: Dr Michel Camilleri COURSE: B.Sc. IT (Hons.) Software Development

Data Science

Virtual reality (VR) is the technology of immersing an individual in a completely virtual environment to explore and perform tasks. It is set to become the next social platform for interacting, playing games, performing simulations, and even working. Currently, VR is experienced through a VR headset, such as the Oculus Quest 2. Currently available VR headsets lack obstacle avoidance and safe navigation. These are crucial features, since the user is immersed in the virtual world and therefore blind to the real-world environment. Leading VR companies have sought to mitigate the risk of injury by enabling the user to set up a virtual boundary box in an unobstructed playing space. The VR headset would then assist the user in remaining within that pre-set playing space. However, this type of solution does not offer insight about obstacles, whether within the playing space or outside it. Some of the issues with such systems include: a) the user is required to move any furniture outside the playing space in advance; and b) the system cannot track the movements of moving third parties that may enter the boundary playing area, such as animals or other humans. This inevitably increases the chances of colliding with someone passing through the space, or of hitting objects immediately outside the playing area, should the user overstep the boundary. This appears to be a common occurrence, as the VR community does not fully trust the boundary box, which offers limited to no insight into the surrounding physical environment.

This study set out to investigate current approaches to the difficulty outlined above and sought to propose a solution based on real-time object sensing and visual rendering, in order to improve safety and reduce the risk of injury while maintaining the sense of immersion. This was achieved through the use of an external depth sensor, such as a 3D lidar, mounted on the VR headset. Similar to a 3D scanner, the 3D lidar could accurately scan 3D objects in real time, compute their distance and then render a 3D mesh of the object as a visual cue on the headset whenever the wearer would be getting too close. This approach potentially improves the user's trust and ability to navigate safely across a demarcated environment, while avoiding obstacles along the way.

The experimental part of this study firstly consisted of participants testing the normal boundary-box solution provided by Oculus (called Guardian) which, along with certain experimental features, promises safe navigation by detecting obstacles within the playing area. Subsequently, the participants tested the proposed solution using the 3D lidar, navigating once again through an environment with obstacles. Lastly, the participants were requested to provide feedback on which of the two solutions they preferred, and why. Moreover, factors such as the time required for completing the task and the number of collisions were evaluated to determine which solution delivered the best performance.

Figure 1. Conceptual architecture
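The proximity check that would trigger the rendered cue can be sketched in a few lines. The warning radius and the simulated scan below are invented values; a real headset pipeline would also build the 3D mesh, which is omitted here:

import numpy as np

SAFE_DISTANCE = 0.8   # hypothetical warning radius, in metres

def proximity_warning(points):
    # points: an (N, 3) array of lidar returns in headset coordinates (metres).
    distances = np.linalg.norm(points, axis=1)
    nearest = distances.min()
    return nearest < SAFE_DISTANCE, nearest

# Simulated scan: one return 0.5 m in front of the wearer.
scan = np.array([[0.5, 0.0, 0.0], [2.0, 1.0, 0.3], [3.2, -0.4, 0.1]])
warn, nearest = proximity_warning(scan)
print(warn, f"nearest obstacle at {nearest:.2f} m")   # True: render the visual cue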



Automatic and enhanced camera composition in Daz Studio 3D Omar El Aida Chaffey | SUPERVISOR: Dr Clyde Meli | CO-SUPERVISOR: Mr Tony Spiteri Staines COURSE: B.Sc. IT (Hons.) Software Development

Data Science

The virtual camera in 3D graphics imitates real-life cameras by serving as a window through which objects within a scene are perceived. The positioning and orientation of the camera are therefore crucial, as they can significantly influence the final result. This project consists of a DAZ Studio 3D application plugin, which aims at alleviating the effort and time needed to direct the camera towards a subject within the scene. The user is provided with multiple drop-down menus from which to select specific shot options. Based on these predefined constraints, the system proceeds to find camera parameters that could create a high-quality, well-composed image as close to the selection as possible. Despite the availability of many plugins for DAZ Studio 3D, none of them offer functionality for setting high-quality camera positions based on advanced photography rules. However, with the development of more efficient hardware-rendering techniques, it has become possible to implement advanced composition rules in 3D applications that use image-processing techniques.

This project departed from the standard method of measuring camera-shot ratings by adopting a rendering approach that measures the camera shot according to predefined photographic rules, such as the rule of thirds, the phi grid, space to move, and headspace ‒ all of which are among the many rules used in portraiture. To solve the virtual camera composition (VCC) problem and automate the process of finding an optimal or suboptimal composition, the particle swarm optimisation (PSO) algorithm was used. By computing different movements and rotations, the algorithm explores the 3D space, comparing local and global best results until a satisfactory shot is obtained. Although real-time processing was not the main focus of this system, it was tested for accuracy, quality, and time ‒ and proved to be satisfactory. The main errors could be attributed to the local-minima problem, which tended to lead to premature convergence.
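The optimisation loop can be sketched compactly. Below is a minimal PSO in Python maximising a toy composition score (how close the subject sits to a rule-of-thirds line); the particle counts, coefficients and score function are illustrative, not the plugin's actual renderer-based rating:

import random

def pso(score, dims, n_particles=20, iters=100, w=0.7, c1=1.4, c2=1.4):
    # Minimal particle swarm: maximise score over parameters in [0, 1]^dims.
    pos = [[random.random() for _ in range(dims)] for _ in range(n_particles)]
    vel = [[0.0] * dims for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pbest, key=score)
    for _ in range(iters):
        for i, p in enumerate(pos):
            for d in range(dims):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d] + c1 * r1 * (pbest[i][d] - p[d])
                             + c2 * r2 * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if score(p) > score(pbest[i]):
                pbest[i] = p[:]
                if score(p) > score(gbest):
                    gbest = p[:]
    return gbest

# Toy score: best when the subject's horizontal position sits at x = 1/3.
best = pso(lambda p: -abs(p[0] - 1 / 3), dims=3)
print(round(best[0], 3))   # converges near 0.333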

Figure 1. Plugin UML activity diagram

Figure 2. Processing rendered shot during rating calculation



Mining the CIA World Factbook Matteo Farrugia | SUPERVISOR: Dr Joel Azzopardi COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Figure 1. Subgraph of data mined from the CIA World Factbook

Data Science

Recent years have seen impressive advancements across a vast number of areas within the domain of artificial intelligence (AI). Such developments translate into an increased use of AI and data mining with the aim of gaining insight into the many available data sets, so that practical conclusions may be drawn from the acquired information. These conclusions could then be used to increase profits, further advance research, or increase personal security by detecting fraudulent or illegitimate activities. This study involved the application of data-mining techniques to the CIA World Factbook, a large data set covering a wide range of topics that is constantly maintained and updated by the CIA (Central Intelligence Agency) to ensure that the data remains up to date. The project explored the extent to which data mining could be applied to such a large data set, and analysed whether the results could adequately reflect real-world situations. This was accomplished by making use of a number of data-mining and graph-analysis techniques, in order to bring together AI and the field of international relations.

The first phase of the project consisted in data extraction. This process involved extracting the data from the CIA World Factbook and modifying it into a format better suited to the task at hand. These files were then mined for useful information, which in turn was entered into a graph data structure and stored in a graph database. The second stage was the data-clustering phase, where algorithms were implemented in an attempt to divide the countries and territories contained within the CIA World Factbook into a number of clusters, using a number of different algorithms and representations of the data. This facilitated the process of determining whether the clusters would reflect real-world alliances and dependencies. The final stage consisted in data analysis. This part of the experiment entailed the application of other analytical techniques for mining further results from the graph. These results, along with the results from the previous phase of the project, were then analysed in order to formulate a number of conclusions, based on the data and the results of the implemented algorithms. This evaluation determined the ability of AI to reflect the situation of the world and international relations, despite the continuously shifting state of the real world and current affairs. Upon final analysis, it was established that the implemented algorithms detected a number of alliances between countries (such as those between EU Member States), identified various countries that are dependent on trade with others, and identified the main global superpowers.
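The dependency-detection idea can be illustrated with a small graph sketch. The edge weights below are invented, not Factbook figures; ranking export flows with PageRank is one way to surface countries that many others depend on:

import networkx as nx

# Hypothetical directed trade graph: an edge A -> B weighted by the share of
# A's exports that go to B.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("Malta", "Germany", 0.13), ("Malta", "France", 0.08),
    ("Germany", "USA", 0.09), ("France", "Germany", 0.14),
    ("Canada", "USA", 0.75),
])

for country, rank in sorted(nx.pagerank(G, weight="weight").items(),
                            key=lambda kv: -kv[1]):
    print(f"{country}: {rank:.3f}")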



DFA learning using SAT solvers Logan Formosa | SUPERVISOR: Dr Kristian Guillaumier | CO-SUPERVISOR: Prof. John Abela COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Data Science

Regular inference is the task of inferring a deterministic finite-state automaton (DFA) from a training set of positive strings and negative strings which, respectively, belong and do not belong to a regular language. The regular inference task is usually formulated as finding the minimum-state DFA that is consistent with the training data. This problem is NP-complete and is one of the more intensely studied areas in the broader field of grammatical inference. One of the most successful approaches to the problem is to apply so-called state-merging algorithms. A highly specific hypothesis called a prefix-tree acceptor (PTA) is created from the training data, allowing pairs of states to be iteratively selected and merged to compact and generalise the hypothesis.

Another interesting approach consists in reducing the problem to Boolean satisfiability (SAT). Here, the PTA is reduced to an instance of graph colouring which, in turn, is reduced to SAT. The generated clauses are then passed to a SAT solver, and the resulting truth assignment is used to construct a solution. On large problems, this exact algorithm produces millions of clauses, which prove to be too much for SAT solvers. Therefore, the PTA is first preprocessed using a state-merging algorithm, such as EDSM, to obtain a partially identified DFA, which is used as the input to the reduction to SAT. This greatly diminishes the size of the encoding, at the cost of no longer being an exact algorithm, owing to any incorrect merges performed during preprocessing.

This project set out to test the DFA-SAT algorithm, performing a comparative analysis on Abbadingo and Stamina problem instances. DFA-SAT was compared with current state-of-the-art state-merging algorithms, for which a library of DFA instances was created. This was used in comparing DFA-SAT against EDSM, windowed EDSM, and Blue-Fringe. The study also proposes ideas for other preprocessing methods.
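The graph-colouring-to-SAT step admits a compact sketch. The encoding below (one Boolean variable per node-colour pair) is the textbook reduction, shown as an illustration rather than the project's exact encoder; the clauses can be handed to any off-the-shelf SAT solver:

from itertools import combinations

def colouring_to_cnf(n_nodes, edges, k):
    # Variable v(n, c) is true iff node n receives colour c (DIMACS-style integers).
    v = lambda n, c: n * k + c + 1
    clauses = [[v(n, c) for c in range(k)] for n in range(n_nodes)]    # at least one colour
    for n in range(n_nodes):                                           # at most one colour
        clauses += [[-v(n, a), -v(n, b)] for a, b in combinations(range(k), 2)]
    for x, y in edges:                                                 # neighbours differ
        clauses += [[-v(x, c), -v(y, c)] for c in range(k)]
    return clauses

# Toy instance: a triangle needs three colours.
print(colouring_to_cnf(3, [(0, 1), (1, 2), (0, 2)], k=3)[:4])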

Figure 1. The DFA-SAT procedure: a PTA is built from the training data of a target DFA; the PTA is preprocessed and the partial DFA is then reduced to graph colouring, and then SAT. The truth assignment then offers a hypothesis to be evaluated with the testing data



Sentiment analyser for the Maltese language Christopher Galea | SUPERVISOR: Prof. Alexiei Dingli COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Figure 1. The concept of sentiment analysis

Data Science

In recent years, communication via the internet has increased rapidly, particularly through social media sites. This has also been the case across the Maltese Islands, with a marked increase in the popularity of social media websites, especially Facebook. As with other populations, the Maltese tend to make extensive use of this platform to make their voice heard, posting hundreds of reviews on various subjects. The purpose of this work was to propose a sentiment analyser for such comments/posts, using a novel methodology. Although there are quite a number of similar projects for the English language, this was not the case for Maltese, owing to the challenges it presents, mainly as a result of being considered a low-resource language. Sentiment analysis is the process of analysing data and categorising it as positive, negative, or neutral, depending on the overall sentiment of the given text. At the beginning of the project, a new data set was built specifically for the task at hand. This consisted of a number of comments extracted from popular Maltese-language Facebook pages, written by the Maltese public, and mainly in Maltese. Human annotators were tasked with correcting the said comments in terms of grammar and spelling. The comments were then classified into the aforementioned sentiment categories. This exercise yielded a labelled data set on which the proposed sentiment analyser would function.

The proposed sentiment analyser is capable of preprocessing the given data, cleaning the text of any redundant characters and preparing the data for feature extraction. The sentiment analyser is able to extract handcrafted features, thus allowing data classification at a context-window level. In this way, one could obtain the sentiment of any comment written in the Maltese language virtually within seconds. The developed sentiment analyser has an accuracy of around 84% on the corpus at hand, thus offering fairly reliable sentiment analysis for the Maltese language.
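A bare-bones supervised baseline for this task can be sketched as follows. The four comments below are an invented toy sample; the actual analyser relies on handcrafted, context-window features over a much larger annotated corpus:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled Maltese comments.
comments = ["Prosit, tajjeb ħafna!", "Servizz tal-biki", "Grazzi ħafna", "Ħażin immens"]
labels = ["positive", "negative", "positive", "negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)
print(model.predict(["tajjeb ħafna, grazzi"]))   # expected: ['positive']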

Figure 2. The developed sentiment analyser, explained



Solving the sports league scheduling problem using integer linear programming Oleg Grech | SUPERVISOR: Dr Colin Layfield COURSE: B.Sc. IT (Hons.) Software Development

Sports scheduling has interested various researchers and educators over the past years. The sports industry has become one of the largest businesses worldwide; a good schedule therefore benefits the multiple stakeholders involved, including club owners, staff and players, television broadcasters, and club supporters, as well as increasing the revenue generated. Manual scheduling is a time-consuming process that generates less revenue than automated scheduling, which delivers better schedules in less time. Automated scheduling allows derby and rival matches to be broadcast at more convenient times for supporters, resulting in a healthy relationship between the supporters and the sports associations involved, and consequently generating more revenue. Planning any form of sports league or tournament is essentially deciding when each match is to be played. Such a schedule is generally influenced by hard and soft constraints. Hard constraints are schedule features that cannot be violated and are generally set by the organisers. On the other hand, soft constraints (or restrictions) express preferences that should be respected. Soft constraints are typically imposed by stakeholders and other interested organisations, such as media companies and football teams, to maximise revenue received throughout the season; these vary between tournaments and leagues. This project incorporated research into different sports-scheduling algorithms and approaches, aimed at obtaining a more insightful look into this area. A meeting was held with a person involved in scheduling the Belgian Pro League, to discuss his experience of generating sports schedules. The data sets used were sourced from the International Timetabling Competition on Sports Timetabling. An integer linear programming algorithm was used to develop the schedules, the main aim being a schedule with the fewest breaks and soft-constraint violations. The results obtained were compared with previous results and checked with a validator provided by the International Timetabling Competition on Sports Timetabling.
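The following sketch illustrates the shape of such an integer linear programme for a small single round robin, assuming the open-source PuLP library as the modelling interface; the project's actual model, solver, and constraint set are not shown here. Each binary variable decides whether team i hosts team j in round r; the hard constraints enforce a valid schedule, while a hypothetical broadcaster preference is penalised as a soft constraint in the objective.

```python
# A minimal single-round-robin ILP sketch, assuming the PuLP library.
# The soft constraint below is an invented example of a stakeholder preference.

import pulp

teams, rounds = range(4), range(3)                  # n teams, n-1 rounds
prob = pulp.LpProblem("league", pulp.LpMinimize)

# x[(i, j, r)] = 1 if team i plays at home against team j in round r
x = pulp.LpVariable.dicts(
    "x", [(i, j, r) for i in teams for j in teams if i != j for r in rounds],
    cat="Binary")

# Hard constraint: every pair of teams meets exactly once.
for i in teams:
    for j in teams:
        if i < j:
            prob += pulp.lpSum(x[i, j, r] + x[j, i, r] for r in rounds) == 1

# Hard constraint: every team plays exactly one match per round.
for t in teams:
    for r in rounds:
        prob += pulp.lpSum(x[t, j, r] for j in teams if j != t) + \
                pulp.lpSum(x[i, t, r] for i in teams if i != t) == 1

# Soft constraint (illustrative): discourage team 0 playing at home in
# round 0, e.g. a broadcaster preference, by penalising it in the objective.
prob += pulp.lpSum(x[0, j, 0] for j in teams if j != 0)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for (i, j, r), v in x.items():
    if v.value() == 1:
        print(f"round {r}: team {i} (home) vs team {j}")
```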

Data Science


Figure 2. Sample output of a generated football league

Figure 1. High-level flowchart of working system



Learning DFAs from noisy training data Daniel Grech | SUPERVISOR: Dr Kristian Guillaumier | CO-SUPERVISOR: Dr Colin Layfield COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Grammatical inference is the process of learning formal grammars or languages from training data. DFA learning is a subfield of grammatical inference that deals with learning regular languages as deterministic finite-state automata (DFAs). DFA learning has been applied to robot navigation, financial forecasting, and bioinformatics, among other applications that rely on real-world data affected by the presence of noise. DFA learning is an NP-complete problem, made harder still when noise is present in the training data. Traditional DFA learning algorithms typically construct a highly specific tree-shaped DFA ‒ the augmented prefix-tree acceptor ‒ and repeatedly merge its states for generalisation. This is done according to some heuristic, such as the evidence-driven state-merging (EDSM) heuristic, which is known to perform well on this task. Unfortunately, these state-merging algorithms suffer in the presence of noisy training data. The current state-of-the-art DFA learning algorithms that are resistant to noise are evolutionary algorithms. However, these techniques have been found to have a high experimental time complexity. Therefore, while performing well when learning small DFAs, they fail to scale to larger problems, due to the size of the search space. Additionally, their relatively long running time makes them impractical to evaluate over a large pool of problem instances. This project explored a state-merging algorithm that is resistant to noise, with the added task of enhancing it further. This algorithm, called Blue*, relaxes the state-merging condition employed in traditional state-merging algorithms, using a statistical test to prevent overfitting to the training data. While Blue* does not perform competitively against evolutionary algorithms, it follows a heuristic-driven state-merging approach that allows it to scale to large DFAs. Based on the relevant analysis, the project presents a new heuristic based on EDSM, modified to handle noisy data. This was applied within three monotonic state-merging search frameworks, namely: windowed breadth-first search, blue-fringe, and exhaustive search. The proposed heuristic was evaluated over a variety of problem instances, consisting of target DFAs and training sets generated according to the Abbadingo One competition procedures. The performance of the algorithms used was compared with that of other state-merging algorithms over a number of DFA learning problems of varying target DFA size, training-set density, and level of noise. The implemented algorithms were also evaluated over the GECCO 2004 competition data sets, which consist of a variety of problem instances with 10% noise added to the training set. The evaluation yielded considerably better results than Blue* on noisy training data, with an average accuracy up to 24% higher than Blue* when learning 64-state target DFAs with training sets having up to 10% noise. Furthermore, the approaches taken suffered a slower degradation in accuracy as the level of noise increased. In the evaluation using the GECCO 2004 competition data sets, the implemented algorithms obtained competitive results against the best-performing evolutionary algorithms. On the basis of these experimental evaluations, the project offers a heuristic that can be applied within a variety of state-merging frameworks to handle noisy training data.
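The sketch below conveys the underlying idea of a noise-tolerant, EDSM-style merge score: rather than rejecting a merge on any accept/reject conflict, a bounded fraction of conflicting labels is tolerated as possible noise. The function, its threshold, and the label representation are hypothetical illustrations, not the heuristic developed in the project.

```python
# Illustrative sketch of a noise-tolerant EDSM-style merge score.
# Labels: True = accept, False = reject, None = unknown, aligned pairwise
# for the strings that reach each of the two candidate states.

def merge_score(labels_a, labels_b, max_conflict_ratio=0.1):
    """Score merging two states, tolerating a bounded fraction of conflicts."""
    agreements = conflicts = 0
    for la, lb in zip(labels_a, labels_b):
        if la is None or lb is None:
            continue
        if la == lb:
            agreements += 1           # evidence in favour of the merge
        else:
            conflicts += 1            # possibly caused by label noise
    total = agreements + conflicts
    if total and conflicts / total > max_conflict_ratio:
        return None                   # reject: too many conflicts to be noise
    return agreements - conflicts     # EDSM-style evidence score

print(merge_score([True, True, False, None], [True, True, False, True]))  # 3
print(merge_score([True] * 10, [False] * 10))                             # None
```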

Data Science


Figure 1. Degradation in accuracy as noise increases when running the proposed algorithms (green lines), Blue* (blue lines) and state-merging algorithms not resistant to noise (red lines).



A fast approximate light transport method for ray tracing Luke Bjorn Scerri | SUPERVISOR: Dr Sandro Spina | CO-SUPERVISOR: Dr Keith Bugeja COURSE: B.Sc. (Hons.) Computing Science

In computer graphics, rendering is the process of transforming a scene from a numerical format into a picture that can be visualised and displayed on a screen. Figure 1 outlines how an image is rendered by tracing multiple rays that scatter across the scene. Simulating this scattering when light interacts with a surface depends on the surface material. With some materials (e.g., perfect mirrors), the operation is straightforward and computationally inexpensive, whereas with others (e.g., translucent materials such as milk and marble), it is prohibitively expensive. The operation requires computing the closest point of intersection with a polygon in the light path; in large scenes with tens of millions of polygons, this operation is very expensive. Spatial data structures are often used to group the polygons in a scene and accelerate ray-intersection tests. A signed distance field (SDF) is a grid of values representing the closest

distance to the surface of an object; SDFs are commonly used in collision detection and other physical simulations. Figure 2 (top) shows the SDF of a dragon mesh: light blue denotes space inside the mesh and grey denotes space outside it, with the lightness of the colour varying to represent the closest distance to the dragon's surface. This project explored the use of signed distance fields to accelerate point-to-point intersection tests in a scene. The goal was to reduce the time spent carrying out occlusion tests for shadow rays and closest-point intersections, which are essential for propagating light across surfaces. The method was evaluated in terms of correctness, through comparisons against ground-truth images, and in terms of performance, by determining the viability of the pipeline for interactive rendering. Performance was also compared with that of other spatial data structures, such as the k-dimensional (k-d) tree.
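A common way to exploit an SDF for the occlusion tests mentioned above is sphere tracing: step along the ray by the distance value at each point, since nothing can be closer than that distance. The minimal Python sketch below uses an analytic sphere SDF in place of a grid-sampled field, and illustrates the general technique rather than the project's renderer.

```python
# Sphere-tracing sketch of a shadow-ray occlusion test: march along the ray by
# the signed distance at each point; if the distance drops to ~0 before the
# light is reached, the point is in shadow. The sphere SDF stands in for a
# grid-sampled field.

import math

def sphere_sdf(p, centre=(0.0, 0.0, 0.0), radius=1.0):
    return math.dist(p, centre) - radius

def occluded(origin, target, sdf, eps=1e-4, max_steps=128):
    """Sphere-trace from origin towards target; True if something blocks it."""
    total = math.dist(origin, target)
    direction = tuple((t - o) / total for o, t in zip(origin, target))
    travelled = 10 * eps                        # offset to avoid self-hits
    for _ in range(max_steps):
        p = tuple(o + travelled * d for o, d in zip(origin, direction))
        d = sdf(p)
        if d < eps:
            return True                         # hit a surface: occluded
        travelled += d                          # safe step: nothing is closer
        if travelled >= total:
            return False                        # reached the light unblocked
    return False

print(occluded((-3, 0, 0), (3, 0, 0), sphere_sdf))   # True: sphere in the way
print(occluded((-3, 2, 0), (3, 2, 0), sphere_sdf))   # False: ray passes above
```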

Figure 1. Simple diagram depicting ray tracing

Figure 2. 2D slice of the dragon’s signed distance field (SDF)

Data Science



Artificial intelligence in short-term meteorological forecasting Ethan Zammit | SUPERVISOR: Dr Joel Azzopardi COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Forecasting the weather has long been a point of focus among researchers seeking predictions of the highest possible accuracy. The main challenge lies in the chaotic variation of temperature: traditional techniques are unable to account for the non-linear relationships between meteorological parameters, or typically require supercomputer-scale processing power to model them. In recent years, advances in artificial intelligence (AI) have brought to light several new tools; through novel time-series modelling techniques, researchers could improve meteorological forecasting systems. AI-based tools have proven to be a lightweight and accurate solution for forecasting meteorological parameters, with interesting applications including the forecasting of wind, precipitation, solar radiation, sea level, and even pollutants such as PM2.5. This research investigated the suitability of data-driven techniques for short-term air temperature forecasting. Twenty-six months of real-world observation data from the local Marsaxlokk real-time weather station were used, spanning July 2016 to September 2018. An evaluation was carried out to explore the limits of the long short-term memory (LSTM) and temporal convolutional network (TCN) models when applied to forecasting local weather data. Each model was trained with look-backs of 12, 24, 48 and 168 hours, as well as forecast horizons of 3, 12, 24 and 48 hours. These parameters were tested in a grid search, along with model hyperparameters such as hidden layers, units per layer, dilations and kernel sizes. This allowed each parameter to be tested multiple times, across several different architectures; as a result, it was possible to keep a given parameter static and take the mean over all other variations, providing an accurate representation of that parameter's contribution. The findings indicated that both the LSTM and TCN models outperformed baselines and statistical techniques, with the LSTM performing marginally better than the TCN. The optimal look-back period was found to range between 24 and 48 hours, and predictions were still satisfactory up to 48 hours in advance. Furthermore, at least 6 months of training data would be recommended, as forecast accuracy tended to deteriorate sharply with smaller training sets. Finally, including external meteorological parameters in the input data did not produce as much improvement as expected; in fact, the results indicated that the models performed better when presented only with past temperature and time. This could be attributed to the short-term nature of the predictions, since the other meteorological parameters may affect temperature over a longer period. On this basis, it was concluded that AI techniques can indeed forecast air temperature accurately, surpassing statistical techniques while running on a regular workstation. In short, AI-based solutions tend to be an ideal compromise between accuracy and efficiency.
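As an illustration of how such look-back and forecast-horizon settings translate into training samples, the sketch below frames an hourly temperature series as supervised (window, target) pairs ready for an LSTM or TCN. The synthetic sinusoidal series and the function name are placeholders; the project used the Marsaxlokk observations.

```python
# Frame hourly readings for look-back/horizon experiments: each sample pairs a
# window of past values with the value `horizon` hours after the window ends.

import numpy as np

def make_windows(series, look_back=24, horizon=3):
    """Return (X, y): X[i] holds `look_back` past hours, y[i] the temperature
    `horizon` hours after the end of that window."""
    X, y = [], []
    for start in range(len(series) - look_back - horizon + 1):
        X.append(series[start:start + look_back])
        y.append(series[start + look_back + horizon - 1])
    return np.array(X), np.array(y)

hours = np.arange(24 * 30)                          # one synthetic month
temps = 20 + 5 * np.sin(2 * np.pi * hours / 24)     # daily temperature cycle
X, y = make_windows(temps, look_back=24, horizon=3)
print(X.shape, y.shape)    # (694, 24) windows ready for an LSTM/TCN
```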

Data Science


Figure 1. LSTM vs TCN 48-hour advance prediction on Marsaxlokk



Mobile gait analysis Owen Agius | SUPERVISOR: Prof. Alexiei Dingli COURSE: B.Sc. IT (Hons.) Artificial Intelligence

Gait analysis is the systematic study of human locomotion, employing observers' eyes and brains, together with apparatus to measure body movements, body mechanics, and muscle activation. Analysing gaits is a tedious and generally lengthy procedure, and there is currently a very limited number of professionals equipped to undertake the analysis. Typically, expensive and time-consuming marker-based technologies are employed, in conjunction with several infrared cameras, to generate kinematic data that concisely quantifies a person's walking behaviour. This often discourages people from getting their gait examined. This project sought to develop a mobile-based automated substitute for marker-based gait analysis, requiring fewer financial and computational resources. The proposed alternative could serve as a baseline for future implementations accessible from an ordinary smartphone or web browser, and therefore cheaper and more user-friendly. The flexibility of accessing gait analysis through a web application would encourage people to check their walking patterns more regularly and, if an issue were severe, to contact a specialist in a timely manner. In order for gait analysis to be possible on a mobile device, a substantial amount of data was required. By collaborating with Mater Dei Hospital and the Chinese

Academy of Sciences Institute of Automation (CASIA), a considerable amount of gait data was acquired. The data consisted of videos of people walking regularly or irregularly. Since the videos alone were not sufficient for developing the proposed system, they were fed into a pose estimator, whose goal was to outline the skeleton of the person throughout the video. Additionally, the pose estimator was modified to record the coordinates of the main joints in a gait cycle (i.e., hip, knee and ankle). These coordinates were then plotted as a scatter plot, from which the gait cycle could be generated. After the gait cycle of each video was extracted, the next step was to classify that gait cycle as either regular or irregular. This was achieved by passing the labelled gait cycles into a pattern-recognition architecture (model), which was then tested for accuracy, with satisfactory results. Finally, the gait cycle of a person could be classified by uploading a video of that person walking to the web application. Correct classification was achieved by processing the video to extract the gait cycle and passing it to the already trained model, which would output the nature of the gait cycle. The application was tested on persons having bad, good or slightly bad gaits to investigate the robustness of the system. After a series of experiments, it was concluded that the system performed with 94% accuracy.
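One small step in such a pipeline is turning the pose estimator's joint coordinates into an interpretable signal, for example the knee angle between the thigh and shank segments. The sketch below illustrates this computation on a single synthetic frame; it is not the project's code, and the coordinates are invented.

```python
# Turn pose-estimator joint coordinates (hip, knee, ankle) into a knee-angle
# value; tracked over frames, this yields a gait-cycle signal to classify.

import math

def knee_angle(hip, knee, ankle):
    """Angle (degrees) at the knee between the thigh and shank segments."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# One synthetic frame: hip above knee, ankle below, knee slightly flexed.
print(round(knee_angle((0.0, 1.0), (0.05, 0.5), (0.0, 0.0)), 1))  # ~168.5
```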

Digital Health

Figure 1. Regular and irregular gait cycles



ITS4U: Intelligent Tracking System for You Matthew Bonanno | SUPERVISOR: Prof. Matthew Montebello COURSE: B.Sc. IT (Hons.) Artificial Intelligence

This research applied artificial intelligence (AI) to facilitate the organisation and handling of a marathon, in such a way that the athletes running the course could be easily detected and tracked. From the data gathered, the timing of every 5km interval covered by the marathon runners was noted and stored in a case base. This data was then used to predict a new personal-best timing for the athletes, or even for recreational runners interested in running the marathon. Checkpoints were installed along the marathon track with a radio-frequency identification (RFID) system that could monitor the runners' interval timings and pace. The architecture of the proposed system (Figure 1) is made up of: a unique RFID tag, to be worn by the athlete; an RFID reader; an RFID antenna connected to the reader through an interface cable; and a computer system connected to the reader. The RFID system would monitor the

runners' timing at every interval checkpoint, collecting data that could be processed to infer useful information. The system makes use of the Universal Reader Assistant by ThingMagic. This allows the concurrent monitoring and detection of RFID tags throughout the marathon, displaying the name of the runner, the timestamp at which the runner passed the checkpoint, and the read count. Moreover, the software displays the unique tag count, i.e., the number of distinct runners who passed through a checkpoint during the whole marathon. The RFID timing system (Figure 2) was tested to determine the effectiveness and range of the antenna, with satisfactory results. Furthermore, the data gathered from the system was used to predict a new timing for each marathon runner, something that was previously not possible.
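As a simplified illustration of how interval timings can feed a prediction, the sketch below estimates a finish time from the 5km splits recorded so far, using plain pace extrapolation in place of the project's case-based reasoning. All figures are invented.

```python
# Estimate a marathon finish time from recorded 5 km split times by
# extrapolating the average pace so far over the remaining distance.

def predict_finish(splits_sec, split_km=5, total_km=42.195):
    """Estimate total marathon time (seconds) from 5 km interval times."""
    covered = split_km * len(splits_sec)
    pace = sum(splits_sec) / covered            # seconds per km so far
    return sum(splits_sec) + pace * (total_km - covered)

splits = [1500, 1520, 1540, 1560]               # four 5 km splits, in seconds
est = predict_finish(splits)
print(f"estimated finish: {est / 3600:.2f} hours")
```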

Digital Health

Figure 1. Top-level architecture of the system

Figure 2. The radio-frequency ID timing system



Using monitoring-oriented techniques to model the spread of disease Leanne Briffa | SUPERVISOR: Prof. Gordon Pace COURSE: B.Sc. (Hons.) Computing Science

Modelling the spread of a disease is a very important yet complex task, since each disease spreads in a unique manner, depending on factors such as its spread rate (R-factor). The spread of disease can also be affected by conditions such as whether two persons in close proximity are vaccinated. Modelling entails simulating these parameters and variables across various social settings, which makes the modelling process a major challenge. The simulation task also carries the undesired issue of mixing the code driving the system with the code controlling the spread. Monitoring-oriented programming (MOP) has previously been applied to obtain this separation of concerns. This technique allows the injection of new behaviour at specified points of program execution through the use of monitors. These monitors observe the system and can be programmed to interrupt it and execute certain actions when particular program traces are observed. A downside to this approach is that the monitors add overheads to the system. To investigate this, a simulator was built in Java, acting as a system to be monitored through the Larva tool [1]. Thus, it would be possible to determine whether MOP could effectively simulate transmission, and whether the approach would scale as the number of rules and blobs (persons and objects) in the simulation increased. Three scenarios of increasing complexity were implemented in the project: one where the people moved at random; a care home for elderly persons with four wards; and a restaurant. For the simulation of the spread of disease, the SEIR compartmental method was used. This

refers to each of five states ‒ Susceptible, Exposed, Infected, Recovered, or Removed (i.e., deceased) ‒ and each person in the simulation is considered to be in one state at any particular point in the simulation. Larva works by defining a rule through a notation known as a DATE, which is similar to an automaton. Each person was given a DATE comprising all of the above-mentioned five states, enabling the person to shift from Susceptible to Exposed depending on the duration of contact with the disease. This process was based on three models. The first accumulates exposure time based only on the presence of an infected blob; the second accumulates it depending on the number of exposures; and the third does not accumulate at all, considering only whether two blobs have been in constant contact for the set amount of time. After exposure, whether a person becomes infected, and whether they then recover, is determined probabilistically, effectively turning the automaton into a Markov chain. A total of nine Larva scripts were implemented, since each of the three models was implemented in three different ways, namely: polling; keeping note of arrival and departure times; and setting a Larva dynamic clock on initial contact to check whether two blobs were still in contact after the set amount of time. Through this project, it was shown that it is possible to mimic the spread of disease through MOP. Furthermore, the results obtained when evaluating the monitored system showed that it scaled well, making MOP a viable solution for disease modelling.
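The per-person behaviour described above can be pictured as a small state machine. The Python sketch below is an illustrative re-rendering of one such rule (the project expressed these as Larva DATEs in Java): exposure accumulates while an infected blob is nearby, and later transitions are probabilistic, as in a Markov chain. The probabilities and threshold are invented.

```python
# Illustrative per-person SEIR state machine: exposure time accumulates while
# an infected blob is nearby; once past the threshold, transitions become
# probabilistic (Markov-chain style), mirroring the monitor's rules.

import random

class Person:
    def __init__(self, p_infect=0.5, p_recover=0.9, threshold=3):
        self.state = "Susceptible"
        self.exposure = 0
        self.p_infect, self.p_recover = p_infect, p_recover
        self.threshold = threshold

    def tick(self, near_infected):
        """Advance one time step of the simulation for this person."""
        if self.state == "Susceptible" and near_infected:
            self.exposure += 1                  # model 1: accumulate exposure
            if self.exposure >= self.threshold:
                self.state = "Exposed"
        elif self.state == "Exposed":
            if random.random() < self.p_infect:
                self.state = "Infected"
        elif self.state == "Infected":
            self.state = "Recovered" if random.random() < self.p_recover \
                         else "Removed"

random.seed(1)
p = Person()
for step in range(8):
    p.tick(near_infected=True)
print(p.state)    # one possible trajectory through the five states
```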

Figure 1. The restaurant scenario

REFERENCES
[1] C. Colombo, G. J. Pace, and G. Schneider, “Larva — Safer Monitoring of Real-time Java Programs (Tool Paper),” in Seventh IEEE International Conference on Software Engineering and Formal Methods, SEFM 2009, Hanoi, Vietnam, 23-27 November 2009, D. V. Hung and P. Krishnan, Eds. IEEE Computer Society, 2009, pp. 33–37.

Digital Health



A study of deep learning for automatic segmentation of healthy liver in abdominal computed tomography scans Jeremy James Cachia | SUPERVISOR: Prof. Johann Briffa COURSE: B.Sc. (Hons.) Computer Engineering

Medical-image segmentation refers to a process in which regions of interest (ROIs), such as organs, are annotated in 2D or 3D images. Medical-image interpretation performed by radiologists and physicians has proved crucial for early clinical detection, diagnosis, and treatment. However, precise manual segmentation of medical images is a time-consuming process, which is a problem whenever the time between medical scanning and any required medical procedure must be kept short. In view of the above, the use of computers to assist medical-image interpretation, in the form of computer-aided diagnosis (CAD), has become an essential tool for radiologists. In recent years, automatic image segmentation based on deep learning (DL) models has become popular due to the fast and precise results that can be achieved, surpassing traditional methods. Although DL models have outdone traditional methods, segmentation in the medical sector demands a precision that DL models are yet to achieve. The high

variability from patient to patient, overlapping ROIs, the limited size of the data sets to learn from, and low-resolution images are the main factors hindering the development of a universal DL model suitable for any specific problem. The first stage of this project was a review of existing solutions, architectures and implementations for medical-image segmentation, highlighting the key concepts that emerged. Subsequently, state-of-the-art deep neural networks (DNNs) were implemented, namely U-Net and 3D U-Net. A proposed model was then implemented, in which the medical images were first preprocessed and classified as including the liver before passing through the DNN. The models were applied and compared on the liver CT scan data set publicly provided by the CHAOS challenge, in which the CT scans were acquired at portal phase after contrast-agent injection for the pre-evaluation of living liver donors. The proposed model was found to improve on the precision of U-Net, from a DICE score of 0.94 to 0.97.
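The DICE score mentioned above measures the overlap between a predicted mask and the ground truth: twice the intersection divided by the sum of the two mask sizes. The short sketch below computes it on toy binary masks.

```python
# Compute the DICE overlap coefficient between two binary segmentation masks
# (1 = liver, 0 = background); 1.0 means perfect overlap.

import numpy as np

def dice(pred, truth):
    """DICE coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

pred  = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
truth = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(round(dice(pred, truth), 2))   # 0.86: 2*3 / (4 + 3)
```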

Figure 1. The CHAOS challenge data set, representing the upper abdomen in 3D by a set of 2D scans (slices) stacked in the z-axis: (a) the transverse plane, (b) the coronal plane, (c) the sagittal plane

Digital Health

Figure 2. Proposed model



An intelligent healthcare mobile app Samira Noelle Cachia Spiteri | SUPERVISOR: Dr Lalit Garg COURSE: B.Sc. IT (Hons.) Software Development

The main aims of this project were to help reduce the strain on hospital resources, improve the accessibility of healthcare services, and empower patients to manage their health better. This was achieved by developing a mobile application for both patients and doctors. The main features chosen were based on the online services currently offered by the Ministry of Health ‒ such as myHealth, Pharmacy of Your Choice (POYC) and Telemedicine ‒ brought together in a single app. This was done to make these features available to patients at any time through their smartphones or tablets, while ensuring a better user experience. The proposed app also offers novel features, including patient education, differential diagnosis, and disease prevalence. Patient education seeks to provide users with knowledge on various medical topics, such as nutrition, first aid and holistic medicine. Differential diagnosis would allow users to select several symptoms and send them to a GP, who could determine the best course of treatment for them. Upon receiving the symptoms and the patient's medical history, the GP could assess the patient's status and determine whether the patient needs to seek immediate medical attention at Mater Dei Hospital (MDH). If the patient does not require hospitalisation, they could simply

visit a healthcare centre or book an online appointment with the relevant outpatient clinic. This feature was aimed at reducing the strain on the emergency department at MDH and improving triage assessment. Lastly, the ‘disease prevalence’ feature offers patients insights into the most common diseases present in Malta, the probability of developing each disease, and information on how to treat or control it. The generic probability would be based on publicly available data sets and calculated using machine learning; if a patient granted access to their medical history, their data could be used as part of the processing for more accurate results. The app can also assist patients in finding the nearest pharmacies, clinics, health centres, emergency departments and hospitals. It would also aid doctors in efficiently managing their patients’ health through remote monitoring and access to their medical history and critical information, as well as enabling them to schedule both physical and remote appointments and to manage patient medication. All of the data handled by the app was stored in Firebase, which also hosted the database and was responsible for user authentication. This platform was chosen to ensure secure storage of the data, given its sensitivity. The database is GDPR-compliant and is hosted in the EU.

Digital Health

Figure 1. A novel healthcare app to help patients better manage their health and to improve patient-doctor relations



The application of customer relationship management for digital health systems with the management of outpatients as a use case Nicole Fenech | SUPERVISOR: Dr Conrad Attard COURSE: B.Sc. IT (Hons.) Computing and Business

The demand for outpatient services at Mater Dei Hospital (MDH) has been on the rise in recent years. This gives rise to a number of issues, which in turn can lead to inefficiencies across departments. Consequently, healthcare services have undergone major changes at an increasing rate, and the rate of change is expected to accelerate further as both patients and professionals incorporate technology into their procedures. The case study chosen for this project concerns the challenges currently faced by staff at MDH in tracking patient readmission, owing to the way information is recorded, which has hindered them from making well-informed assumptions. Hence, the goal of this research was to examine clinical processes and create a simplified customer relationship management (CRM) system incorporating a dashboard that facilitates the clinical process and the visualisation of data. Additionally, prediction algorithms would be used to calculate the likelihood of patient readmission, in order to obtain better insight and thus plan more efficiently. In brief, the project relied primarily on CRM and machine learning. The CRM incorporated the functions mainly used in the clinical processes, and was designed to present relevant data in an effective format and to aid clinic analysis and the tracking of patient progress. Furthermore, using an external data set, prediction algorithms were used to calculate the possibility of a patient being readmitted to hospital, allowing for more efficient resource scheduling. At the time of undertaking the project, the data available at Sir Anthony Mamo Oncology Centre (SAMOC) at MDH did not offer any indication as to whether a patient might be readmitted into hospital. As a result, various issues could arise in the process of planning resources and scheduling appointments. The prediction algorithm being proposed would allow clinicians to prepare for future appointments by determining which patients would be likely to be readmitted and, consequently, be in a better position to schedule resources efficiently. Furthermore, a CRM dashboard displaying useful data, such as the one being proposed, would assist clinicians in understanding the state of the clinic and patient-throughput details. Clinicians could use these patient demographics to help them develop criteria for incoming readmissions, as well as to facilitate data visualisation and comprehension. Meetings with domain experts within the survivorship team at SAMOC were held to gather requirements, covering clinical practices, current challenges, the data aspects that would best be visualised in dashboards, and the clinic processes that should be included within the functionality of the CRM. A usability study with various respondents was carried out to gain feedback on the CRM, its functionality, and its design. Since the platforms currently in use, Microsoft Teams and Mosaic, did not record the necessary amount of clinical data, an external data set was required for applying the prediction models. On unseen data, the performance of these prediction models was very good, with the random forest model performing best; hence, it could be considered a solution for predicting patient readmission in the outpatient setting. Had MDH's current systems collected more data about clinics, this study could have been even more practical and applicable.
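As an illustration of the prediction step, the sketch below trains a random forest on synthetic patient features, assuming scikit-learn is available. The features, labels, and data are invented stand-ins for the external data set used in the study.

```python
# Train a random forest readmission classifier on synthetic patient features
# and report held-out accuracy. All data below is invented for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(40, 95, n),        # age
    rng.integers(0, 15, n),         # previous admissions
    rng.integers(1, 30, n),         # length of last stay (days)
])
# Synthetic label: older patients with many admissions readmit more often.
y = ((X[:, 0] > 70) & (X[:, 1] > 5)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```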

Digital Health


Figure 1. The CRM dashboard, which facilitates the visualisation and the comprehending of data



Practical Artificial Caregiver (PAC): a multimodal application to support the care of persons with impairments in care institutions Cedric Muscat | SUPERVISOR: Dr Peter Xuereb COURSE: B.Sc. IT (Hons.) Software Development

Disability is a public health issue, and it is estimated that more than 1 billion persons globally live with some form of impairment. The number of persons with disabilities is increasing constantly, and most people are likely to experience impairment at some point in their life. There is thus a need to support and improve healthcare services for individuals with such conditions. Combining different technologies, such as voice user interfaces (VUIs) and artificial intelligence (AI) assistants, to mimic human speech and behaviour is a promising concept. It could be applied on mobile devices, such as smartphones and smartwatches, to help tackle the issues of caregiver shortage, loneliness and/or physical disability. Practical Artificial Caregiver (PAC) is an assistive solution developed to support caregivers assisting persons with impairments and/or disabilities residing in care homes. The solution bridges the interactions and communication that take place between a resident and their caregiver, without the need for a physical presence. The architecture of the system was developed in such a way as to enable multiple mobile devices to access and make use of synchronised data from an online database. The system also offers a simple but effective user experience (UX) for persons with impairments, using assistive technologies together with natural language processing, voice recognition and VUIs. This was achieved by deploying mobile technologies, more specifically smartphones,

tablets, and smartwatches. The system was split into two applications: one for the caregivers, called PAC Carer, and one for the resident, called PAC Assistant. PAC Assistant was deployed on an Android smartwatch. This was deemed the ideal choice, as it allows the user to wear a non-intrusive device, eliminating the need to remember to carry a non-wearable device and consequently helping them feel more at ease. Moreover, a watch was chosen on the merit of being a familiar object to residents, with a simple interface; explaining that it was practically a watch that ‘speaks’, and one they could ‘talk to’, helped encourage the residents to use the technology. PAC Carer was deployed on an Android mobile device, and can be used on both a tablet and a mobile phone, depending on the caregiver’s preference. From this side of the solution, caregivers can view a resident’s journal of reminders and alarms ‒ essentially a list of tasks or activities that a patient needs to do, and be reminded of, on a daily basis. The caregiver can also create new entries in a resident’s journal, such as alarms (for example, when someone needs to be notified about tea-time) and reminders (for instance, when one needs to be reminded to take their medication at a specific time of day). The software also enables caregivers to send messages to their residents, to which the latter can reply using voice input from their watch application.

Digital Health

Figure 1. The architecture and functionality of the entire PAC system, where the left side outlines the resident-oriented application, while the right side outlines the caregiver-oriented application.



Investigating cognitive stress in software engineering activities Mark Xuereb | SUPERVISOR: Dr Chris Porter | CO-SUPERVISOR: Dr Mark Micallef COURSE: B.Sc. IT (Hons.) Software Development

The objective of this research was to investigate the pressures encountered by software engineers while carrying out cognitively intensive work. For this reason, a controlled, lab-based experiment was designed to study how typical stressors affect participants carrying out predetermined, cognitively demanding tasks. The tasks covered aspects such as rational thinking and the use of working memory, in a controlled environment that included stressors (e.g., distractions). During the study, both physiological and subjective workload measures were recorded. The insights obtained from this research would help establish the validity of data obtained through a smartwatch, as well as an eye-tracker, for accurately pinpointing periods of stress. The participants selected for this investigation were junior developers. The interface for the experiment was developed using the lab.js platform, through which a series of mental arithmetic questions and n-back questions were designed, reflecting the mental workload typically experienced while carrying out software engineering tasks (e.g., debugging or code comprehension). Variables such as time constraints and task difficulty were varied across tasks. Various metrics were recorded during the study, including task performance and task duration. Moreover, subjective scales were used throughout the experiment to obtain insights into the perceived stress levels and cognitive workload experienced by participants. Heart-rate readings, along with pupil dilation, were also collected, using a Fitbit Sense smartwatch and an Eye Tribe eye-tracker respectively. The obtained metrics helped determine whether correlations between overall task difficulty and the physiological responses did indeed exist, and to what extent. The subjective measures used throughout would, in turn, offer deeper insight when drawing conclusions from the participants’ data.
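For readers unfamiliar with the n-back task mentioned above, the sketch below generates a simple 2-back sequence and scores yes/no responses against the true targets. It is a toy illustration, not the lab.js implementation used in the experiment.

```python
# Generate an n-back sequence (does the current item match the one shown
# n steps earlier?) and score a participant's yes/no responses against it.

import random

def make_nback_trial(length=10, n=2, alphabet="ABC", seed=0):
    random.seed(seed)
    seq = [random.choice(alphabet) for _ in range(length)]
    targets = [i >= n and seq[i] == seq[i - n] for i in range(length)]
    return seq, targets

def score(responses, targets):
    """Fraction of trials where the yes/no answer was correct."""
    return sum(r == t for r, t in zip(responses, targets)) / len(targets)

seq, targets = make_nback_trial()
print("sequence:", "".join(seq))
perfect = score(targets, targets)          # a flawless participant
print(f"accuracy: {perfect:.0%}")
```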

Figure 1. The steps of the experiment

Figure 2. The equipment set-up of the experiment


Digital Health





Parallels: Simplicity over complexity

Parallels Inc., a global leader in cross-platform solutions, makes it simple for customers to use and access the applications and files they need on any device or operating system.

Parallels has been a “Gold partner” of the University of Malta since 2016

The heart of the Parallels Remote Application Server (RAS) development is in Malta

For UM students, there is the Parallels “Students Partnership Experience” initiative, which allows talented young individuals to take part in work-exposure and internship programmes and to try their hand at real-life projects as part of the RAS team.

Parallels RAS is a streamlined remote working solution providing secure access to virtual desktops and applications:

Working/Studying from home? Do you need full access to your applications wherever you are? We make it happen! We have already helped thousands of organizations continue working remotely with ease. Join our team and make it happen! parallels.com/eu/about/careers

Parallels is an integral part of Corel Corporation.

• Deliver virtual desktops and apps to any device, anywhere, anytime.
• Enhance data security by centrally monitoring and restricting access.
• Quickly scale your IT infrastructure on demand with auto-provisioning.



Going Places With Microsoft Business Solutions

Have you heard about our graduate programme for:

Junior Consultants & Junior Programmers

Scan this QR code for more information on our Graduate Programme:

For any questions contact: MBSMT-Careers@KPMG.com

© 2022 KPMG Crimsonwing (Malta) Limited is a subsidiary of KPMG Investments Malta Limited, a subsidiary of KPMG LLP, a UK limited liability partnership and a member firm of the KPMG global organisation of independent member firms affiliated with KPMG International Limited, a private English company limited by guarantee. All rights reserved.


FILLING THE GAPS:

Malta-Sicily radars

The sea between Malta and Sicily is constantly monitored to ensure our maritime security and avert oil spill disasters, yet the data we collect can sometimes be incomplete. Now, Master’s in AI graduate ANNE-MARIE CAMILLERI may have figured out how to fill those gaps.

Whether we’re talking about aviation, architecture, or security, our modern way of life depends on the collection and analysis of data. But what happens when the technology that provides this information breaks down or suffers connection losses? The answer is data gaps, which could have severe consequences for the task at hand. Anne-Marie Camilleri has been working on a system that could help us fill these gaps for the four high-frequency radars (HFRs) that monitor a particularly pertinent area of our seas.


“HFRs are antennas that bounce high-frequency radio waves off the sea’s surface, hence why they are always located at the water’s edge,” Anne-Marie explains. “These help scientists collect oceanographic data, which is crucial for both researchers and Malta’s authorities.” There are currently four such HFRs monitoring the stretch of sea that separates the Maltese Archipelago from the southernmost tip of Sicily, known as the Malta Channel. These are located at different points in Malta, Gozo, and Sicily, and at least two of their frequencies must overlap for data to be collected.


“This overlapping provides the University of Malta and other relevant authorities with the vector data [longitudes and latitudes] required to monitor the seabed, be alert for potential oil spillages, and maintain our maritime security.” Collecting and analysing this information is therefore essential. Nevertheless, as Anne-Marie points out, technology operating on the same radio frequency as the HFRs can sometimes interfere with the signal. Meanwhile, extreme weather events could result in a complete system breakdown.


“When this happens, the information we receive is incomplete, resulting in data gaps. This can reduce our maritime security or even cause delays in identifying environmental disasters. So it’s very fortunate that Artificial Intelligence (AI) can help us avoid this.”

Anne-Marie employed machine learning techniques to fill these gaps, a process through which a computer can be taught how to think like a human being. The exercise began with Anne-Marie gathering the data collected by the four HFRs over the previous two years. A computer then analysed this data set to extract knowledge and identify patterns, allowing it to fill gaps in the data even when the HFR signal is down. To achieve the best possible outcome in this operation, Anne-Marie ran the data set through three different machine learning models before comparing their results. The Long Short-Term Memory Neural Network (LSTM) emerged as the winner. “This neural network, which mimics how our brains work, can remember select patterns for longer, making it ideal for our task,” she explains. “But what’s even better is that we now have a deeper understanding of how to fill such gaps and make predictions.

“For example, we now know that including wind data from satellites in the mix doesn’t significantly improve results. We also know that older data is relevant to training a system, meaning it doesn’t need to be retrained with new data sets after fixed intervals. And, finally, we know that the data from the six hours immediately preceding a gap is the most important for the computer to fill a gap.”

Anne-Marie Camilleri.

Yet this system goes a step further. Through its knowledge, it can forecast future data up to 24 hours in advance, allowing researchers and authorities to anticipate certain events, such as strong currents before they even occur. This opens up a whole other world of possibilities. “I’m thrilled with the work so far, and I’d like to publish my findings in the future. But there’s much work to be done before that, and I would be happy if a prospective student took it over,” she concludes.


“It’s very fortunate that Artificial intelligence can help us … fill these gaps”

What will happen next with Anne-Marie’s project remains to be seen. Nevertheless, it cannot be denied that this is an excellent example of a system with transferable skills: all it needs is a different data set, and it can predict events in other countries and other seas. That in itself is a promising development that could guarantee its future.




DEEP-FIR: Enhancing CCTV footage

By enhancing the quality of CCTV images, the Deep-FIR project uses Artificial Intelligence to help criminal investigators in their work. Here, the researchers behind it explain the concept.

Have you seen that meme juxtaposing a crystal-clear image of the surface of Jupiter with a pixelated image from a bank’s security camera? Either way, we’re sure you’re aware that CCTV images and clips tend to lag in quality, which means they’re not always helpful to criminal investigators. Now, Artificial Intelligence (AI) may be on the cusp of solving that. “By their very nature, CCTV cameras are constantly capturing footage, resulting in vast amounts of data that need to be compressed in order to be more easily storable,” explains Dr Inġ. Christian Galea, whose PhD in Computer Vision focused on biometrics and forensics. “This often results in the reduction of image quality, which is rarely great to begin with, as such cameras tend to film in relatively low resolution and have low-quality lenses.”


This means that the low quality of such footage can sometimes defeat the purpose of capturing it in the first place. After all, while CCTV cameras can help deter crime, their primary function is to aid criminal investigators in determining the identity of perpetrators. “That may prove difficult with a pixelated image that doesn’t show much detail,” says Matthew Aquilina, a part-time research support officer currently reading for a PhD in Precision Medicine. “But this is where the Super Resolution (SR) techniques we have been working on come in.” These SR techniques use AI models trained to do two specific jobs. The first is to make an image sharper by increasing its resolution, while the second is to help reconstruct any missing details in that image, such as by reducing blurriness.


To do this, most basic SR models use just one low-resolution image to produce the new estimate, but the trio has been seeking to create a system that could use other information, such as the gender, age, or hair colour of a subject, to improve the results further. This project is a sum of its parts, as each researcher has taken on the creation of one piece of the puzzle. Keith George Ciantar, a software developer at Ascent Software with a Master’s in Signal Processing and Machine Learning, is responsible for the Meta-Attention side of the project. “This is the process by which we can provide the AI model with supplementary information that can be used to improve the accuracy of the super-resolved image,” he explains. “So, if we know the type of camera that was used, or how the footage was compressed, we can give the AI model that information to improve image quality.”

Meanwhile, Matthew’s job is to attempt to predict any degradation an image might have suffered, such as blurring due to a poor-resolution camera or compression from trying to save space. This process, called ‘blind’ SR, allows the AI model to automatically predict and reverse such degradations. “There are many types of blind models in the literature, and each one has its advantages and disadvantages,” Matthew asserts. “Ours has been programmed to understand how to represent each degradation so that it can then be plugged into our Meta-Attention model and boost its performance.”

“The system can also be used to restore images of vehicle license plates and old footage”

Mr Matthew Aquilina, Dr Inġ. Christian Galea, and Mr Keith George Ciantar.

While both the Meta-Attention and blind SR models are based on single images, Christian’s work uses the Face Multi-Frame SR. This means that he is looking to extract data from different stills in the same video to create a more complete picture. “Although this is still in the early stages, this type of SR allows us to use information captured from multiple frames that show a subject from different angles, distances, and sharpness levels to create more accurate estimates. This is then coupled with the Meta-Attention and blind SR models to provide the clearest picture possible,” Christian explains. Together, these three approaches have been dubbed the Deep-FIR project, and it could prove an invaluable tool for criminal investigators using CCTV images. But there’s still more to come. “The system can also be used to restore images of vehicle license plates and old footage,” Keith explains. “But, over and above that, we’re trying to find ways to add attributes mentioned in eyewitness accounts to enhance the image and make it even clearer. This is still in the preliminary stages, but it could be a great addition to the software.” Where the trio will take this software remains to be seen. However, with a paper describing the Meta-Attention model published in the peer-reviewed journal IEEE Signal Processing Letters, and the open-source software already up on GitHub under a dual license, Deep-FIR looks set for a bright future.



For years, scientists have taught computers to understand sentiment in several global languages. Now, Master’s in AI student DAWSON CAMILLERI wants to use the work that’s already there to boost our national language.

Our brains can process language with minimal effort. Even on paper, we can tell how a writer feels by their choice of words, tone of voice, and even letter case. To computers, however, all terms have the same value, meaning they cannot decipher whether a sentence has positive, negative, or neutral sentiment – at least, not unless researchers like Dawson Camilleri teach them how to. “The first step in this process is for scientists to explain which words fall in each category to an Artificially Intelligent (AI) computer system,” says Dawson, who previously studied Software Development at MCAST. This is done through data sets containing lists of terms connected with a quantifier. For example, phrases like ‘thanks’ and ‘great’ have a positive sentiment, while ‘expensive’ and ‘upset’ have a negative one. But while this may seem simple, creating them can be a massive undertaking, as each word also needs to be ranked.


Mr Dawson Camilleri.

Determining sentiment in Maltese

“Scientists have to look at each word and determine whether it tells us anything about sentiment,” Dawson continues. “So, while words like ‘beautiful’ or ‘horrible’ explain how a person feels about something, a noun, such as the name of a town, or a pronoun, such as ‘it’, don’t.” This process has already been undertaken in many major languages, giving AI systems enormous data sets in English and Italian. Maltese data sets also exist, though we are lagging due to fewer speakers, resources, and scientists to work on them. “That’s where my project comes in,” he explains. “My idea was to use the data sets already available in other languages to create a larger one in Maltese. This can then be used to determine sentiment.”


To test his theory, Dawson looked at using two separate approaches.


In the first one, Dawson translated a test data set from Maltese into English before feeding it into an AI system that had been trained on an English data set. In other words, the system knows ‘bad’ has a negative sentiment and knows that ‘ħażin’ is the Maltese word for ‘bad’, leading it to conclude that ‘ħażin’ carries negative sentiment.

Figure 1. Overall Architecture Approach 1

Meanwhile, in the second approach, Dawson translated an English data set and an Italian data set into Maltese before teaching the AI system what each word in Maltese inferred.


“The translation of these data sets was done automatically using Google Translate API, but each data set had to be processed in a way that would guarantee maximum accuracy… Moreover, the AI systems had to be taught to recognise Maltese characters.”

This system is still in the works, but its contribution to creating an accurate and usable data set of Maltese words in sentiment analysis could be enormous. “When these data sets are completed, they are then inputted into the sentiment classifier of the AI system so that it can run sentiment analyses. In other words, it can compare the terms in a sentence with those in the positive or negative rankings lists.”

“Such software can then be used in many industries,” he explains. “In marketing, this could allow companies to monitor related comments on social media and understand how people feel about the brand. In journalism, this could help give feedback on whether articles, which should be objective, have a hidden agenda or political bias. And, for owners of websites or news sites in Maltese, such a system could help eliminate spam in the comments section to ensure its integrity.”

“Such software can then be used in many industries”

A tool such as this also serves another purpose: to make working in Maltese as accessible and as easy as it is with other languages. This makes the continued use of our language possible, even as our society moves forward, showing that ICT is also about preserving our identity, culture, and language.

Figure 2. Overall Architecture Approach 2



TINY CIRCUIT, MAJOR POSSIBILITIES

Dr Inġ. FRANCARL GALEA’s Electrical Interfacing Circuit is small in scale but enormous in potential. So much so that it could change the way trackers, sensors, and even pacemakers are powered.

If there is one thing we take for granted, it’s that plugging an electronic device into a power socket and switching it on just works. But what happens when an electronic device is somewhere highly remote, where no sockets are available? You may think the answer is the trusty battery, but Francarl Galea’s idea may be a better solution overall. Francarl has spent the past seven years working on his PhD in Microelectronics and Nanoelectronics. For it, he developed an Electrical Interfacing Circuit (EIC) that can store the energy it derives from energy harvesters, effectively replacing both sockets and batteries in such situations. To understand this EIC, we first need to look at energy harvesters, which are devices that collect energy from a source like the Sun, the wind, or movement. These are typically used to power loads such as sensors in remote locations that alert us to natural disasters, trackers that help us study wildlife, and body implants


like pacemakers, which can aid people in achieving a more regular heartbeat. “These harvesters are indeed great, but they have one major flaw: they are only useful when their energy source is present. So if a sensor is solar powered and the Sun has set, the object will either stop working or require a battery to function.”

“The circuit only consumes two microwatts”

To counteract this, Francarl added two significant functions to energy harvesters through his initial EIC prototype. The first is the ability to store energy, so the device can still power the load even when the source isn’t active. The second is the Maximum Power Point Tracking (MPPT) function, a technique that maximises energy extraction from the source even when conditions vary.


Dr Inġ. Francarl Galea.

“This means that no matter how much energy the harvester collects, the amount of power supplied to the device is always controlled,” he explains. “All these functions were first placed together on a tiny, integrated circuit measuring no more than 1.4mm2,” he says. “The results were fantastic, as we achieved up to 99% efficiency levels for the MPPT, 63% electric power conversion rates, and 95% rectification levels.” Rectification is the conversion of alternating current (AC), where the voltage constantly changes from positive to negative and vice versa, into direct current (DC), where the flow of electricity is constant. “This is done by a novel AC-DC boost converter, which makes the EIC compatible with energy harvesters no matter what type of voltage they produce, and without using any external rectification blocks.” These functions were also included in Francarl’s second, more powerful prototype, which measures just 1.6mm2 and incorporates a regulated output voltage.

Surprisingly, the circuitry for this EIC was implemented using analogue electronics rather than digital ones. This may seem strange today, when we are used to more powerful digital electronics and microprocessors, but there was a good reason for this. “If this control system had used a digital microprocessor, all the power harvested would have been used up by the circuit itself. This way, the circuit only consumes two microwatts [two millionths of a watt], which is practically nothing.” Even better, the circuit also comes with plug-and-play functionality, allowing designers to decide what kind of energy they’d like the harvester to collect – be it solar, thermal, vibrational, or radiofrequency – and the type of voltage they’d like it to put out. “The conversion efficiency of this circuit reached 82%, which is great considering all these components take up just 0.05 mm² on the EIC. Indeed, this is quite an advancement from what was previously available in this area, namely a regulator that prevented the energy harvester from injecting too much voltage into the device at one go.”

Francarl’s hope for this EIC is that a prospective student will now take it over and improve it. Even so, in its current state, the possibilities are countless. For example, remember the pacemakers, the sensors, and the trackers we mentioned at the start? Well, this EIC has the potential to transform the way all of them are powered, reducing their need for batteries and making them more self-sustainable. Now isn’t that a thought?
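To get a feel for how small two microwatts is, here is a back-of-the-envelope calculation with invented storage numbers (the article only gives the 2 µW figure itself):

    overhead_w = 2e-6                    # control circuitry consumption: 2 µW
    storage_j = 0.5 * 100e-6 * 3.0**2    # energy in a hypothetical 100 µF capacitor at 3 V
    print(storage_j / overhead_w)        # ~225 s of control overhead per full charge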



Digital health could revolutionise Karin Grech Hospital admissions FRANCESCA MUSCAT’s PhD in Digital Health combines her experience as a physiotherapist with her interest in technology. In her studies, she’s adopted the digital health concept to create a paperless medical process that standardises the prediction of rehabilitation potential and aids doctors with admissions to Karin Grech Hospital.

Public healthcare is a wonder of our age. Thanks to it, we can all rest easy knowing that if we fall ill or have an accident, professionals will be ready to look after us in spaces equipped for the job. Nevertheless, operating a national healthcare system is a mammoth task that requires the input of countless individuals. So, as Francesca Muscat is all too aware, a bit of help from modern technology can go a long way.

“Karin Grech Hospital (KGH) is a unique establishment in Malta’s network of public healthcare facilities,” says Francesca, who has worked at the hospital as a physiotherapist for the past five years. “It’s a place where people are admitted for rehabilitation before going back into the community, so the idea is always that of a short stay.” However, admitting older adult patients to one of the seven wards dedicated to this isn’t always straightforward. As Francesca explains, many domains affect whether a person will require short-term rehabilitation or long-term care.

“This includes things like a person’s daily function, such as whether they can get dressed on their own or feed themselves; cognition, which is needed for tasks like buying groceries or remembering to switch off the oven; and physical strength, like whether they can get out of bed alone. Then there is also the home environment and social support: is their home accessible to them in their situation, and are there relatives or friends who can help?”

This information usually adds up to a couple of thick files’ worth of paper documents, some of which can sometimes be misplaced, leading to incomplete data. Either way, the consultant geriatricians must pore over these documents and use their expertise to determine whether a patient should be admitted for a short stay at KGH or whether they would fare better at a permanent care home.

But this is where Francesca’s PhD could help: bridging two University of Malta faculties, namely those of Health Sciences and ICT, Francesca is employing the concept of digital health – the use of ICT to enhance the efficiency of healthcare – to help facilitate this process. “My aim is to standardise the process through which consultant geriatricians can decide whether KGH would be the best place to care for the patient,” she explains. “I’m working on a system that can predict the rehabilitation potential of a patient, helping clinicians in their work.”

Ms Francesca Muscat.

“Our goal is to ensure that patients receive the right care”

To make the use of this technology simple, Francesca has created a tablet application that works on a predictive model, which is a system that uses statistics to forecast future outcomes. This, as she points out, has been created based on a systematic review of what assessors usually look at when admitting patients for rehabilitation, as well as on information received from a panel of rehabilitation experts, including clinicians and hospital managers. “The front-end of the app will have a series of questions that the geriatricians and the multidisciplinary team at Mater Dei Hospital (MDH) will fill in on behalf of the patient, either through one-on-one assessments or through the documents available,” she explains. “The app then gives a percentage score on how likely the patient is to benefit from being sent to KGH.” Yet Francesca is adamant that such a percentage is only meant to aid clinicians and not replace their expertise.
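The article does not disclose which model class Francesca’s system uses; a logistic regression is one simple predictive model that produces exactly this kind of percentage. The features and training data below are invented purely for illustration.

    from sklearn.linear_model import LogisticRegression

    # Invented toy data: [daily function, cognition, strength, home access, support],
    # each rated 0-10 by the assessing team; label 1 = benefited from rehabilitation.
    X = [[8, 7, 6, 9, 8], [3, 2, 4, 2, 1], [6, 8, 5, 7, 9], [2, 3, 2, 3, 2]]
    y = [1, 0, 1, 0]

    model = LogisticRegression().fit(X, y)
    patient = [[7, 6, 5, 8, 7]]
    print(f"{model.predict_proba(patient)[0][1]:.0%} likely to benefit from KGH")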

“The final call is always the clinicians’,” she asserts, “after all, human beings are more complex than statistics… It must also be said that we are not looking to deny anyone care. We aim to ensure that patients receive the right care for their conditions while freeing up spaces in respective hospitals.”

But there is another benefit to such an app: through it, the information collected could be stored digitally. This would make it accessible to healthcare workers in any hospital or care home the patient is in and reduces the risks of any information or papers going missing. “Over my years working in the healthcare industry, I realised that one profession cannot be independent of another. So using more than one discipline – in this case, healthcare and technology – means that we’re more likely to find a solution to any shortcomings and improve our patients’ lives – which is why we’re here after all, isn’t it?”

Indeed, although Francesca is still in the first year of her PhD, her project shows how multidisciplinary ideas can benefit all industries. From our end, we hope we’ll see more of these digital health projects make it into our publication in the future.

Francesca Muscat’s PhD is funded by TESS (The Tertiary Education Scholarship Scheme).



VIRTUAL CRIME SCENE: detecting malware attacks

High-profile people have high-profile information worth stealing – but with hackers now able to delete their tracks, pinning them down is getting harder. Could the answer lie in add-ons to certain Android apps? PhD student JENNIFER BELLIZZI believes so.

Our phones have become a lot more than just a tool for making calls. They’re now our personal assistants with access to our bank accounts, photo albums, private conversations, and health records. Unfortunately, they are also susceptible to attacks by hackers, which is problematic for everyone, but it’s doubly worrying when the phone belongs to high-profile individuals whose information could be used for malicious purposes. Now, Jennifer Bellizzi’s PhD may offer some hope.

“Over the years, hackers have become more astute,” says Jennifer, who is in the second year of her PhD in Android Digital Forensics. “We’ve reached a point where a hacker can bypass anti-virus software, install malware on your phone, and send messages to people in your phone book, load apps, or even see your screen.” What’s worse is that some of these pieces of malware, often referred to as ‘trojans’ or ‘worms’, can sometimes delete any traces of their activities, making it almost impossible to find out whether your phone has been hacked. This, in turn, places the victims of such stealthy attacks at a disadvantage and leaves criminal investigators with a gap in their timeline of how and when the attack may have happened.

But there may be a way to counteract this. Jennifer’s PhD focuses on a piece of software that can be added to apps on Android phones, which tend to be more prone to such attacks. “It’s called the Just In Time – Memory Forensics [JIT-MF] driver, and its job is a simple yet crucial one as it acts as a black box, meaning that it cannot be erased or manipulated. A mobile phone application can then ‘dump’ certain relevant information into it.” Basically, the JIT-MF driver records all the activities undertaken on a particular app.



Ms Jennifer Bellizzi.

So, among other things, criminal investigators could have access to information on when and to whom messages were sent, when the app was opened, whether the user had been active right before or after such activities took place, and so on. Nevertheless, for the JIT-MF driver to offer a complete log, the programmer needs to be specific about what the app is used for and what kind of attacks it could suffer. “Let’s take my first JIT-MF driver as an example,” she continues. “This was created specifically for messaging apps like WhatsApp, Telegram, and Signal. It was given clear instructions on what such apps can be used for, such as loading the app, messaging, calling, or sending photos. These are, of course, all legitimate actions the phone’s rightful owner may do multiple times a day, but they’re also the actions a hacker might undertake through a messaging hijack attack.

“So the JIT-MF driver will start recording every time the app is used for whatever action we have asked it to. This, in turn, will allow criminal investigators to know whether the app was opened when the user was otherwise inactive, or when messages were sent and deleted in quick succession.”

However, the idea behind this driver isn’t for users to have their messages monitored by default. Instead, it is an add-on that high-profile users can choose to install on specific apps to safeguard their business and personal information. It can then be used to keep a record that could help criminal investigators in instances of such hijackings, which can lead to the theft of money, the leaking of sensitive information, and even blackmail.

“The JIT-MF driver will hopefully … become an integral tool for investigators”

“As things stand, hackers have the upper hand in such situations, with even state-of-the-art forensic tools unable to find logs of certain activities. The JIT-MF driver will hopefully change that and become an integral tool for investigators to use when they’re putting together their timelines of such crimes,” Jennifer concludes.

The JIT-MF driver’s role in mobile phone-related crime detection and prevention is promising, particularly as it is so versatile. This means that as hackers up their game, those fighting crime can also have updated tools that keep users safe – and that’s undoubtedly one of the best uses of information and communication technology we can think of!

The next phase of JIT-MF driver research will be carried out as part of the Digital Evidence Targeting covErt Cyberattacks through the Timely Information Forensics (DETECTIF) project, launched in July 2022. Project DETECTIF is financed by the Malta Council for Science & Technology for and on behalf of the Foundation for Science and Technology through the FUSION: R&I Research Excellence Programme.
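JIT-MF itself instruments Android apps at the memory level; the sketch below only illustrates the “black box” idea in miniature – an append-only record in which each entry commits to the previous one, so silently deleting or editing an earlier event breaks the chain. All names here are our own, not the real driver’s API.

    import hashlib, json, time

    class EventLog:
        def __init__(self):
            self.entries, self.prev = [], "0" * 64

        def dump(self, action, detail):
            # Each entry stores the hash of its predecessor: tampering with
            # any earlier record invalidates every hash that follows it.
            entry = {"t": time.time(), "action": action, "detail": detail, "prev": self.prev}
            self.prev = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
            self.entries.append(entry)

    log = EventLog()
    log.dump("message_sent", {"chars": 42})
    log.dump("app_opened", {})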



Do those drugs go together? With more of us being put on multiple medications, the risks of drug-to-drug interactions are increasing. And while testing for these is possible, current methods are expensive and time-consuming. Now, Master’s in AI student LIZZY FARRUGIA may have taken the first step towards solving that.

Ms Lizzy Farrugia.


We’re fortunate to live in an age where medication can treat and prevent most diseases. However, this blessing doesn’t come without its troubles, including that consuming multiple drugs simultaneously can lead to adverse effects. Scientists are constantly testing for these, but the processes aren’t always straightforward or economical. Lizzy Farrugia’s project shows that that doesn’t have to be the case.

“The medical world knows and understands how most drugs react to each other,” she says. “In fact, pharmacists in Malta consult the British National Formulary, a highly-respected reference book that offers advice on prescribing and pairing drugs to avoid negative drug-to-drug interactions [DDI].” Nevertheless, the way two or more drugs react depends on multiple factors. This includes their chemical makeup, what they’re meant to treat, which organs they affect, the patient’s health, and how they are consumed (swallowed, injected, absorbed, etc.). “This means that every drug can react differently when combined with others under specific circumstances. So as more people are placed on multiple medications to treat separate issues, the number of unknown DDIs is ever-growing.”

According to a recent Polypharmacy Management Report funded by the European Union, 5% of all hospitalisations in Europe happen due to negative DDIs. That’s 430,000 people a year. “Since the pharmaceutical industry approves hundreds of new drugs per annum, these figures could grow,” Lizzy continues. “While rigorously tested and individually safe to prescribe, it is currently tough to check how each new drug would react with every other drug under every possible condition.”

“A cheaper, less labour-intensive, and potentially more accurate alternative”

Of course, that doesn’t mean that the industry and researchers don’t do it. Scientists currently conduct in vivo (inside a living body) and in vitro (such as in a test tube) experiments, which have picked up hundreds of thousands of potential DDIs, saving countless lives. Yet, as Lizzy points out, these are costly, labour-intensive solutions. “The other alternative is using DeepDDI, which is considered the state of the art by researchers in this field. This is a super-powerful, Artificially Intelligent [AI] model that can predict the way drugs will react with each other using their SMILES notations.”

Here, SMILES stands for ‘Simplified Molecular-Input Line Entry System’, and it’s essentially a string of symbols explaining a drug’s molecular formula to a computer. For example, in the case of aspirin, the SMILES notation is ‘O=C(C)Oc1ccccc1C(=O)O’.

“This, however, comes with its limitations as we don’t have such notations for all drug types just yet, and even when they’re available, they sometimes over-simplify the structure, leading to incomplete results.”
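To make the notation itself concrete despite those caveats: RDKit, one widely used open-source cheminformatics toolkit (not named in the article), can parse that aspirin string back into a molecule a program can query.

    from rdkit import Chem
    from rdkit.Chem import Descriptors

    aspirin = Chem.MolFromSmiles("O=C(C)Oc1ccccc1C(=O)O")
    print(round(Descriptors.MolWt(aspirin), 2))  # molecular weight: ~180.16 g/mol
    print(aspirin.GetNumAtoms())                 # heavy (non-hydrogen) atoms: 13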

During her research, Lizzy came across another method that could be used: the knowledge graph (KG), a semantic network that shows the relationship between two or more entities. This, as she explains, had been used to find unknown DDIs in the past with promising results, which is why she decided to follow this path. “To build the KG, I first had to create an inventory of pertinent data, which I got from reliable sources like the DrugBank [a database of known drug interactions], the Bio2RDF [a database of the studies of living organisms], and the ATC Classification [a drug classification system managed by the WHO].

“Then I mapped the relationships between the different points in the data, showing the system what was and wasn’t connected. Finally, I tested the KG’s validity to confirm the system would work in a real-life scenario.”
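In miniature, a knowledge graph is just a collection of labelled (subject, relationship, object) triples. The sketch below uses the networkx library and a few made-up-but-typical facts to show the idea; it is not Lizzy’s actual pipeline.

    import networkx as nx

    kg = nx.MultiDiGraph()
    triples = [
        ("aspirin", "interacts_with", "warfarin"),  # DrugBank-style interaction fact
        ("aspirin", "has_atc_class", "N02BA"),      # ATC classification fact
        ("warfarin", "targets", "VKORC1"),          # Bio2RDF-style biological fact
    ]
    for subject, relation, obj in triples:
        kg.add_edge(subject, obj, label=relation)

    print(list(kg.successors("aspirin")))  # ['warfarin', 'N02BA']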

With this phase completed successfully, Lizzy moved on to the prediction phase, for which she connected the KG with a deep neural network. This is a stacked system with each layer observing and analysing data similarly to how the human brain does.

“Its first task was to predict known DDIs so I could make sure it worked. This achieved an F1 score of 91.94%. This is fantastic and means we’re on the right track to creating a system that could offer a cheaper, less labour-intensive, and potentially more accurate alternative to help predict DDIs and, maybe, even suggest safer drug combinations in the future,” Lizzy concludes. Although the work on this system isn’t complete yet, it’s already extremely promising. So, who knows? In the future, it may end up saving someone’s life.
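The article does not specify the network’s architecture, so as a stand-in, here is a small stacked (multi-layer) classifier evaluated with the same F1 metric Lizzy reports – the harmonic mean of precision and recall – on randomly generated data in place of real KG embeddings.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 16))                  # pretend drug-pair embeddings
    y = (X[:, 0] + X[:, 1] > 0).astype(int)         # pretend interaction labels

    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000).fit(X[:150], y[:150])
    print(f1_score(y[150:], net.predict(X[150:])))  # F1 on the held-out pairs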



Can Twitter help break the news? PhD in AI student NICHOLAS MAMO has spent years working on an Artificially Intelligent system that can alert journalists to news stories breaking on social media. Here he explains how it works.

The consumption of news has come a long way since the early days of the newspaper. Yet the most significant revolution has been the public’s growing role in its creation and dissemination. Rather than seeing this as a potential threat to traditional news channels, however, Nicholas Mamo has used this as an opportunity to help newsrooms and journalists.

Mr Nicholas Mamo.

“Social media sometimes breaks the news earlier than newsrooms do,” says Nicholas, who is in the last year of his PhD. “This is because people on the ground or those in the know can post about occurrences well before journalists are tipped off.”

Undoubtedly, Nicholas is referring to events like Michael Jackson’s death or the 2008 earthquake that devastated the Sichuan province in China. Both of these broke on Twitter hours before making it onto mainstream media, proving the power people hold in the generation of news. “This got me thinking about the possibility of extracting information from social media platforms in a way that could alert journalists,” he continues. “So, six years ago, when I started my BSc dissertation, I began experimenting with a system that can collect information from social media about events like elections, terrorist attacks, natural disasters, and sporting events.”

“Machines don’t instinctively know what is and isn’t important”

Nicholas focused on Twitter and applied two Artificially Intelligent (AI) methods to collect tweets (posts on the platform): information retrieval to collect knowledge and data mining to find patterns. “The system can now look at past, related events and learn generic keywords that it deems important for the subject. Then, the user can input determining keywords for the particular event in question,” he explains.

To test the system out, Nicholas brought another passion of his into the fold: football. Using one football match at a time as the ‘event’, the system first learnt generic keywords based on the topic, such as ‘goal’, ‘yellow card’, ‘red card’, and ‘foul’. Nicholas then added specific keywords, like the names of the clubs facing each other, to instruct the machine on which event it should follow. “The system then collected related tweets to the match at hand and compiled them into a timeline that presented all the information in bite-sized chunks. This included who had scored, if any red or yellow cards had been drawn, and if any extra time had been allocated.”

This timeline looks similar to those used on local news sites to report important court cases, for example. But using tweets to compile it was an entirely different ballgame. With its 330 million active monthly users posting more than 500 million daily tweets, Twitter can be a minefield for an AI system to navigate. “It’s important to remember that machines don’t instinctively know what is and isn’t important,” he explains. “Something like a yellow card in football won’t be tweeted about as often as a goal, so the system may ignore it. Nevertheless, a sports journalist may still want to know that information, so I updated the system accordingly.”

As Nicholas explains, such systems typically need over 100,000 tweets to build a decent timeline, a considerable number to expect for news that hasn’t yet been reported. Nevertheless, thanks to the amalgamation of generic and specific keywords, Nicholas’s system requires just 8,000.

“My siblings are both journalists, so I’ve been interested in the news for as long as I can remember,” Nicholas tells us. “I have to admit that I’m very proud to see this system, which I’ve dedicated so much of my time to, actually detecting news before it even breaks on mainstream media.”

While there is still a while before journalists can get their hands on this system, Nicholas is determined to see it through. Moreover, he’s mulling over whether the public could use it to follow updates about specific topics like archaeological discoveries, scientific breakthroughs, or celebrity gossip. Either way, this project shows that our contribution to the creation and distribution of news is set to increase in the future. Although we do wonder what Elon Musk’s acquisition of Twitter will mean for all this.

The research work disclosed in this article is funded by the Tertiary Education Scholarships Scheme (TESS).
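A heavily simplified sketch of the idea described above: generic keywords learnt from past events are combined with user-supplied specific keywords to filter the stream, and matching posts are bucketed into a minute-by-minute timeline. The data and keyword lists are invented; the real system’s learning and filtering are far more sophisticated.

    from collections import defaultdict

    GENERIC = {"goal", "yellow card", "red card", "foul"}  # learnt from past matches
    SPECIFIC = {"valletta", "birkirkara"}                  # supplied by the user

    def build_timeline(tweets, bucket_s=60):
        timeline = defaultdict(list)
        for ts, text in tweets:                # ts = seconds since kick-off
            lowered = text.lower()
            if any(k in lowered for k in GENERIC) and any(k in lowered for k in SPECIFIC):
                timeline[ts // bucket_s * bucket_s].append(text)
        return dict(sorted(timeline.items()))

    tweets = [(5, "GOAL! Valletta lead 1-0"), (62, "Yellow card for the Birkirkara keeper")]
    print(build_timeline(tweets))  # {0: [...], 60: [...]}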



Exploring climate change

By combining virtual and augmented realities, Master’s in Computer Information Systems graduate RYAN HAMILTON has created an educational app that brings the impacts of climate change to life.

Climate change is a topic that comes up no matter what we’re discussing, be it food or fashion, travel or health. It affects us all, which means it needs to be given priority if we want the future to be brighter than our past. Ryan Hamilton understands this all too well, which is why he’s dedicated his Master’s thesis to aiding educators in teaching it and students in visualising its outcomes.

“EcoXR is a tablet application that combines two of the fastest-growing technologies in the world, namely virtual reality [VR] and augmented reality [AR],” Ryan explains. “The idea behind the app, which can be used on a simple tablet, is to enhance the learnability of climate change through a highly-immersive experience.” The app comes with two distinct sections. One of them uses VR and sees users enter a virtual, modern home where they have a series of tasks to perform. These include turning off taps, switching off lights, and adjusting the temperature on an air-conditioning unit.


The other section uses AR to explore three virtual ecosystems and how climate change is reshaping them. These are namely the Arctic ecosystem, where melting ice caps are resulting in the decline of glacial environments; a Field and Forests ecosystem, where water bodies are drying up, endangering food sources for wild animals and leaving scarce opportunities for productive farming; and a Sea Coast ecosystem, where sea levels are rising, wreaking havoc on coastal communities. “These two realities are intertwined in the app, with the future state of each ecosystem changing based on the user’s efforts at ‘fixing’ critical environmental issues in their virtual home.” But the app goes a step further. It explains the process behind what is happening through short and easy-to-understand texts compiled from information published in reputable scientific journals and via statistics released by NASA and the European Commission, among others.
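The article doesn’t describe the app’s internal logic, but one simple way such intertwined realities could be modelled is a score accumulated from completed household tasks that selects each ecosystem’s future state. The weights and thresholds below are invented for illustration.

    TASKS = {"turn_off_taps": 2, "switch_off_lights": 1, "adjust_ac": 3}  # invented weights

    def ecosystem_state(completed_tasks):
        score = sum(TASKS[t] for t in completed_tasks)
        if score >= 5:
            return "recovering"   # e.g. ice caps persist, water bodies refill
        return "declining" if score >= 2 else "critical"

    print(ecosystem_state({"turn_off_taps", "adjust_ac"}))  # recovering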



Mr Ryan Hamilton.

“The ultimate aim of EcoXR is to aid youths and adults in understanding how each action, no matter how small, can impact our environment,” Ryan asserts.

“Each action, no matter how small, can impact our environment”

To create such an app, Ryan needed to merge various disciplines. On the one hand, there was the technological side of the process. This required him to be knowledgeable in both VR and AR. But he also needed to look into how tablet sensors work and how three-dimensional elements could be assembled to showcase ‘living’ ecosystems. On the other, there was the pedagogical element (the method of teaching), which meant Ryan had to ensure that the app was imparting information in a way that students could learn from it.

“Understanding how climate change is taught was an obvious first step. To ensure I was on the right track, I contacted the Institute for Climate Change and Sustainable Development and a foreign start-up that focuses on climate change education to find out what’s already on the market and what gaps EcoXR could fill.

“Then I looked at the pedagogy of climate change and how the app performed that role. When the first version of the app was completed, I asked educators who teach climate change as part of their syllabus to test it.”

These educators were also invited to fill out a system usability scale (SUS) questionnaire before participating in one-on-one interviews where they voiced their likes, dislikes, and proposed amendments. This led to suggestions for future developments, including the importance and effectiveness of having consistent and brief informational areas.

“I have been working on this application since August 2020, and I’m happy most educators loved the idea and thought it would be helpful in classrooms or educational museums.

“I hope to keep working on the app or find someone else who will take over. From the educators’ suggestions, we’ve already pinpointed some of the work that needs to take place next, including better handling of the tablet’s working memory [RAM] to ensure a smoother experience for learners.”

We stand behind this call, especially since such a project shows how combining technologies can aid in teaching such a subject. After all, what better way is there to raise awareness about a topic like climate change than through the education of the upcoming generation?
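For reference, the SUS questionnaire the educators completed reduces to a simple, standard formula: ten statements answered on a 1–5 scale, with odd-numbered items scored as (response − 1) and even-numbered items as (5 − response), and the total multiplied by 2.5 to give a score out of 100. A minimal sketch:

    def sus_score(responses):
        # Standard System Usability Scale scoring for 10 answers on a 1-5 scale.
        assert len(responses) == 10
        total = sum(r - 1 if i % 2 == 0 else 5 - r   # index 0,2,... = odd-numbered items
                    for i, r in enumerate(responses))
        return total * 2.5                           # scaled to 0-100

    print(sus_score([5, 2, 4, 1, 5, 2, 5, 1, 4, 2]))  # 87.5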



2021 Awards: an overview

On Friday, 23rd July 2021, the Faculty of ICT handed out 14 accolades to students who had excelled in their studies.

Eight were part of the Dean’s List, an award recognising students who achieve academic excellence during their undergraduate degrees. To feature on this list, students must obtain a final average of 80 or above, demonstrating exceptionally high achievement across all study units in their Faculty-approved course. They must also have no cases of failed or re-sit study units during their final year and no reports of misconduct during the whole period of their studies.

Three other prizes were awarded to students whose undergraduate final-year projects were considered exceptional. Referred to as the FYP Awards, the winners were chosen by their respective Heads of Department. Finally, there were three Best ICT Projects With Social Impact Awards. These accolades were awarded to Daniel Attard for his driving behaviour monitor; Lukan Cassar for his paper on the Discovery of Anomalies and Teleconnection Patterns in Meteorological Climatological Data; and Francesca Chircop for her model developed to improve low-dose CT-scan images.

The event, which took place during the opening ceremony of the Faculty’s 2021 Exhibition, was hosted by the Faculty’s Dean, Prof. Inġ. Carl James Debono, and the Faculty’s Deputy Dean, Dr Conrad Attard. Joining them in congratulating the awardees were the Honourable Justyne Caruana, then-Minister for Education; the University’s Rector, Prof. Alfred J. Vella; and the President of the Chamber of Engineers, Inġ. Malcolm Zammit. Ms Randy Cassar, a then-third-year student reading for a B.Sc. in IT (Hons.) Software Development, also delivered a speech on the night. We congratulate all 2021 winners and look forward to this year’s awards, which will take place in July 2022.


Dean’s List Awards

The Dean’s List Awards were presented by Prof. Inġ. Carl James Debono, Dean of the Faculty of ICT at the University of Malta (left).

Mr Domenico Agius.

Mr Daniel Attard.

Mr Matthew Jonathan Axisa.

Mr Chris Frendo.

Mr Nathan Grech.

Mr Jacques Vella Critien.

The Best FYP Awards

The Awards were presented by Prof. Alfred J. Vella, Rector of the University of Malta (left), and Prof. Inġ. Carl James Debono, Dean of the Faculty of ICT at the University of Malta (middle).

First Prize presented to Mr Domenico Agius.

Second Prize presented to Mr Jonathan Axisa.

Third Prize presented to Mr Ethan Briffa.

Best ICT Projects with a Social Impact Awards

The Awards were presented by Mr Carm Cauchi, Chief Administrator at the eSkills Malta Foundation (left), and Prof. Inġ. Carl James Debono, Dean of the Faculty of ICT at the University of Malta (middle).

Mr Daniel Attard.

Mr Lukan Cassar.

Ms Francesca Chircop.



The Hon. Justyne Caruana, then-Minister for Education, together with (L-R) Prof. Alfred. J. Vella, Rector of the University of Malta, Prof. Inġ. Carl James Debono, Dean of the Faculty of ICT, Dr Conrad Attard, Deputy Dean of the Faculty of ICT, and several students from the Faculty.

(L-R) Dr Conrad Attard, Deputy Dean of the Faculty of ICT, Prof. Inġ. Carl James Debono, Dean of the Faculty of ICT, Mr Carm Cauchi, Chief Administrator at the eSkills Malta Foundation, and Prof. Alfred. J. Vella, Rector of the University of Malta.

The Malta Engineering Excellence Awards

Ms Francesca Chircop with (L-R) Inġ. Malcolm Zammit, President of the Chamber of Engineers, Inġ. David Scicluna Giusti, Activities Secretary at the Chamber of Engineers, and Prof. Inġ. Carl James Debono, Dean of the Faculty of ICT at the University of Malta.

IEEE Awards

Professor Edward Gatt, the Chair of the IEEE Malta Section, presenting the IEEE Award to Mr Ryan Camilleri.



Paving your way to the working world of tech

The PwC Graduate Programme for STEM students is a journey that focuses on your development during your studies at University and prepares you for a career at PwC.

What to expect? A mix of digital upskilling, practical work experience, dissertation support, networking with the best in the industry and much more ...

Interested?

Apply today

www.pwc.com/mt/studentregistration

Data Analytics

Software Development

Cyber Security

IT Audit

Artificial Intelligence

Follow us on:

© 2021 PricewaterhouseCoopers. PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details.


SHAPE THE WORLD OF CYBERSECURITY WITH AN AWARD-WINNING TEAM

Founded in 2007 in Germany, Hornetsecurity has quickly grown to become a global expert in email cloud security, compliance and backup. From one office in Hanover to 12 offices around the world, including Malta (previously Altaro), Hornetsecurity continues to swiftly increase its footprint in the email security and data backup industry. We strive to deliver the best possible solutions to our partners and customers, ensuring nobody will have to worry about their data ever again.


We continue to build a strong community of top-notch professionals who are great at what they do but also love doing it together with us.

We are proud of what we do and how we do it!

JOIN THE SWARM Check out our careers page

#HORNETROCKS

Home Office

Fitness Allowance

Pizza Meetings

Employee Exchange

www.hornetsecurity.com


Faculty of ICT Staff Awards 2021

For the first time, the Faculty of ICT presented its outstanding Faculty and Staff Awards. These awards are intended to recognise individuals who show an extraordinary commitment to the Faculty and to student achievement and wellbeing.

The award with the citation “In recognition for the dedication to the Faculty and students” went to Prof. John Abela.

The award with the citation “In recognition for administrative excellence in supporting the Faculty” went to Ms Michelle Agius.



Members of Staff

FACULTY OF ICT

DEPARTMENT OF COMMUNICATIONS AND COMPUTER ENGINEERING

PROFESSORS
Professor Inġ. Carl J. Debono, B.Eng.(Hons.), Ph.D.(Pavia), M.I.E.E.E., M.I.E.E. (Dean of Faculty)
Professor Inġ. Adrian Muscat, B.Eng.(Hons.), M.Sc.(Brad.), Ph.D.(Lond.), M.I.E.E.E.
Professor Inġ. Saviour Zammit, B.Elec.Eng.(Hons.), M.Sc.(Aston), Ph.D.(Aston), M.I.E.E.E. (Pro-Rector for Research and Innovation, resigned September 2021)

ASSOCIATE PROFESSORS
Professor Johann A. Briffa, B.Eng.(Hons)(Melit.), M.Phil.(Melit.), Ph.D.(Oakland) (Head of Department)
Professor Inġ. Victor Buttigieg, B.Elec.Eng.(Hons.), M.Sc.(Manc.), Ph.D.(Manc.), M.I.E.E.E.
Professor Inġ. Reuben A. Farrugia, B.Eng.(Hons.), Ph.D., M.I.E.E.E. (on special leave from 18 Feb 2022)

SENIOR LECTURERS
Dr Inġ. Trevor Spiteri, B.Eng.(Hons.), M.Sc., Ph.D.(Bris.), M.I.E.E.E.
Dr Inġ. Gianluca Valentino, B.Sc.(Hons.)(Melit.), Ph.D.(Melit.), M.I.E.E.E.

ASSISTANT LECTURER
Inġ. Etienne-Victor Depasquale, B.Elec.Eng.(Hons.), M.Sc.(Eng.), M.I.E.E.E.

AFFILIATE PROFESSOR
Dr Hector Fenech, B.Sc.(Eng.)(Hons.), M.E.E.(P.I.I.), Ph.D.(Bradford), Fellow A.I.A.A., F.I.E.E.E., F.I.E.T., Eur. Eng.

AFFILIATE ASSOCIATE PROFESSOR
Dr Norman Poh, Ph.D.(EPFL), IEEE CBP, FHEA

VISITING ASSISTANT LECTURERS
Inġ. Brian E. Cauchi, B.Sc.IT (Hons.), M.Sc.(ICT), M.Ent.
Inġ. Antoine Sciberras, B.Eng.(Hons.)(Melit.), PG.Dip.Eng.Mangt.(Brunel), M.Ent.(Melit.)

RESEARCH SUPPORT OFFICERS
Dr Mang Chen (Research Support Officer III)
Dr Asma Fejjari (Research Support Officer III)
Dr Christian Galea, Ph.D.(Melit.), M.Sc.(Melit.), B.Sc.(Hons.) ICT (CCE), M.I.E.E.E. (Research Support Officer III)
Dr Fabian Micallef (Research Support Officer III)
Mr Matthew Aquilina (Research Support Officer II)
Mr Roland Pfeiffer (Research Support Officer II)
Mr Neil Borg (Research Support Officer I)
Mr Leander Grech, B.Sc.(Hons) (Research Support Officer I)
Dr Thomas Pugnat (Research Support Officer)

ADMINISTRATIVE & TECHNICAL STAFF
Mr Mark Anthony Xuereb (Administrator I)
Mr Albert Sacco (Senior Laboratory Officer)
Inġ. Maria Abela-Scicluna, B.Eng.(Hons.)(Melit.), M.Sc. ICT (Melit.) (Systems Engineer)
Mr Jeanluc Mangion, B.Eng.(Hons.)(Melit.) (Systems Engineer)



RESEARCH AREAS

Computer Networks and Telecommunications
- Error Correction Codes
- Multimedia Communications
- Multi-view video coding and transmission
- Video Coding
- Internet of Things
- 5G/6G Networks
- Green Telecommunications
- Network Softwarization
- Satellite Communications
- Quantum Key Distribution

Signal Processing and Machine Learning
- Computer Vision
- Image Processing
- Light Field Image Processing
- Volumetric Image Segmentation
- Medical Image Processing and Coding
- Earth Observation Image Understanding
- Vision and Language tasks in Robotics
- Visual Relation Detection
- Visual Question Answering
- Self Supervised Learning
- Federated Learning

Computer Systems Engineering
- Data Acquisition and Control Systems for Particle Accelerators and Detectors
- Implementation on Massively Parallel Systems (e.g. GPUs)
- Reconfigurable Hardware
- Implementation of Machine Learning algorithms at the edge
- Distributed Ledger Technology

DEPARTMENT OF COMPUTER SCIENCE

PROFESSORS
Professor Gordon J. Pace, B.Sc., M.Sc.(Oxon.), D.Phil.(Oxon.)
Professor Adrian Francalanza, B.Sc.I.T.(Hons.), M.Sc., D.Phil.(Sussex)

ASSOCIATE PROFESSORS
Professor Kevin Vella, B.Sc., Ph.D.(Kent)
Professor Joshua Ellul, B.Sc.I.T.(Hons.), M.Sc.(Kent), Ph.D.(Soton)

SENIOR LECTURERS
Dr Mark Micallef, B.Sc.I.T.(Hons.), Ph.D.(Melit.), M.B.A.(Melit.) (Head of Department)
Dr Mark J. Vella, B.Sc.I.T.(Hons.), M.Sc., Ph.D.(Strath.)
Dr Christian Colombo, B.Sc.I.T.(Hons.), M.Sc., Ph.D.(Melit.)
Dr Sandro Spina, B.Sc.I.T.(Hons), M.Sc.(Melit), Ph.D.(Warw.)
Dr Keith Bugeja, B.A.(Hons), M.IT, Ph.D.(Warw.)

LECTURERS
Dr Neville Grech, B.Sc.(Hons), M.Sc.(S’ton), Ph.D.(S’ton)

AFFILIATE LECTURER
Dr Alessio Magro, B.Sc. IT (Hons)(Melit.), Ph.D.(Melit)

RESEARCH SUPPORT OFFICERS
Caroline Caruana, B.Sc.(Melit.), M.Sc.(Melit.) (Research Support Officer I)
Mark Charles Magro, B.Sc.(Melit.), M.Sc.(Melit.) (Research Support Officer II)
Adrian De Barro, B.Sc.ICT(Hons)(Melit.), M.Sc.(Melit.) (Research Support Officer II)
Kevin Napoli, B.Sc.ICT(Hons)(Melit.), M.Sc.(Melit.) (Research Support Officer II)
Jennifer Bellizzi, B.Sc.ICT(Hons)(Melit.), M.Sc.(Birmingham) (Research Support Officer II)
Robert Abela, B.Sc.(Hons), M.Sc.(Melit.) (Research Support Officer II)

ADMINISTRATIVE STAFF
Ms Gianuaria Crugliano, P.G.Dip.Trans.&Interp.(Melit.) (Administrator II)




RESEARCH AREAS
- Concurrency
- Computer Graphics
- Compilers
- Blockchain
- Distributed Systems and Distributed Ledger Technologies
- Model Checking and Hardware/Software Verification
- Operating Systems
- Program Analysis
- Semantics of Programming Languages
- High Performance Computing and Grid Computing
- Runtime Verification
- Software Development Process Improvement and Agile Processes
- Software Engineering
- Software Testing
- Security

DEPARTMENT OF MICROELECTRONICS AND NANOELECTRONICS

PROFESSORS
Professor Inġ. Joseph Micallef, B.Sc.(Eng.)(Hons.), M.Sc.(Sur.), Ph.D.(Sur.), M.I.E.E.E.
Professor Ivan Grech, B.Eng.(Hons.), M.Sc., Ph.D.(Sur.), M.I.E.E.E.
Professor Inġ. Edward Gatt, B.Eng.(Hons.), M.Phil., Ph.D.(Sur.), M.I.E.E.E.

SENIOR LECTURERS
Dr Inġ. Owen Casha, B.Eng.(Hons.)(Melit.), Ph.D.(Melit.), M.I.E.E.E. (Head of Department)
Dr Inġ. Nicholas Sammut, B.Eng.(Hons.)(Melit.), M.Ent.(Melit.), Ph.D.(Melit.), M.I.E.E.E.

ADMINISTRATIVE & TECHNICAL STAFF
Ms Alice Camilleri (Administrator I)
Inġ. Francarl Galea, B.Eng.(Hons.), M.Sc.(Eng.) (Senior Systems Engineer)

SCIENTIFIC OFFICER
Dr Inġ. Russell Farrugia, B.Eng.(Hons)(Melit.), M.Sc.(Melit.) (Research Support Officer II)

RESEARCH AREAS
- Analogue and Mixed Mode ASIC Design
- Radio Frequency Integrated Circuits
- Embedded Systems
- Biotechnology Chips
- Micro-Electro-Mechanical Systems (MEMS)
- Quantum Nanostructures
- System-in-Package (SiP)
- System-on-Chip (SoC)
- Accelerator Technology
- Microfluidics
- Internet-of-Things (IoT)

DEPARTMENT OF ARTIFICIAL INTELLIGENCE

PROFESSORS
Professor Alexiei Dingli, B.Sc.I.T.(Hons.)(Melit.), Ph.D.(Sheffield), M.B.A.(Grenoble)
Professor Matthew Montebello, B.Ed.(Hons)(Melit.), M.Sc.(Melit.), M.A.(Ulster), Ph.D.(Cardiff), Ed.D.(Sheff.), SMIEEE

SENIOR LECTURERS
Dr Charlie Abela, B.Sc.I.T.(Hons)(Melit.), M.Sc.(Comp.Sci.)(Melit.), Ph.D.(Melit.) (Head of Department)
Dr Joel Azzopardi, B.Sc.(Hons.)(Melit.), Ph.D.(Melit.)
Dr Claudia Borg, B.Sc.I.T.(Hons.)(Melit), M.Sc.(Melit.), Ph.D.(Melit.)
Dr Vanessa Camilleri, B.Ed.(Hons.)(Melit.), M.IT(Melit.), Ph.D.(Cov)

AFFILIATE SENIOR LECTURERS
Dr Andrea De Marco, B.Sc.(Hons)(Melit.), M.Sc.(Melit.), Ph.D.(U.E.A.)
Dr Jean Paul Ebejer, B.Sc.(Hons)(Melit.), M.Sc.(Imperial), D.Phil.(Oxon.)
Mr Michael Rosner, M.A.(Oxon.), Dip.Comp.Sci.(Cantab.)



LECTURERS
Dr Josef Bajada, B.Sc.I.T.(Hons)(Melit.), M.Sc.(Melit.), M.B.A.(Henley), Ph.D.(King`s)
Dr Ingrid Galea, B.Eng.(Hons)(Melit.), M.Sc.(Imperial), D.I.C., Ph.D.(Nott.), M.B.A.(Lond.)
Dr Kristian Guillaumier, B.Sc.I.T.(Hons.)(Melit.), M.Sc.(Melit.), Ph.D.(Melit.)
Dr Dylan Seychell, B.Sc.I.T.(Hons.)(Melit.), M.Sc.(Melit.), Ph.D.(Melit.), SMIEEE

RESEARCH SUPPORT OFFICERS
Mr Kurt Abela, B.Sc. IT (Hons.)(Melit.) (Research Support Officer I)
Ms Sarah Agius, B.A.(Hons.)(Melit.), M.A.(Leic.) (Research Support Officer II)
Mr Enrico Aquilina, B.Sc. IT (Hons.)(Melit.), M.Sc. AI (Melit.) (Research Support Officer II)
Mr Andrew Emanuel Attard Biancardi, B.Sc. IT (Hons.)(Melit.) (Research Support Officer I)
Mr Stephen Bezzina, B.Ed.(Hons.)(Melit.), M.Sc. Digital Education (Edinburgh) (Research Support Officer II)
Mr Luca Bondin, B.Sc. IT (Hons)(Melit.), M.Sc. AI (Melit.) (Research Support Officer II)
Mr Mark Bugeja, B.Sc.(Hons.) Creative Computing (Lond.), M.Sc. AI (Melit.) (Research Support Officer II)
Mr Gabriel Calleja, B.Eng(Hons)(Melit.), M.Sc.(Melit.) (Research Support Officer II)
Ms Martina Cremona, B.A.(Hons.)(Melit.), M.A.(Melit.) (Research Support Officer II)
Ms Dorianne Micallef, B.A.(Hons.)(Melit.), M.A.(Card.) (Research Support Officer II)
Mr Kurt Micallef, B.Sc. IT (Hons.)(Melit.), M.Sc.(Glas.) (Research Support Officer II)

ADMINISTRATIVE STAFF
Mr Elton Mamo (Administration Specialist)
Ms Nadia Parnis (Administrator II)

RESEARCH AREAS - Ongoing Research

Title: Smart Athlete Analytics through real-time tracking
Area: AI & IoT
Coordinator: Prof Matthew Montebello

Title: LearnML
Task: Creation of a resource kit and guide for teachers to teach concepts of AI to young people
Coordinator: Institute of Digital Games

Title: UPSKILLS
Task: Creation of a collection of resources for higher education courses relating to linguistics and language students
Coordinator: Institute of Linguistics

Title: Autonomous Diagnostic System (ADS)
Task: Investigating the use of deep learning methods and graph-based approaches to detect anomalies
Coordinator: Dr Charlie Abela (in collaboration with Corel Malta Ltd)

Title: AI4Manufacturing
Task: Investigate the application of Machine Learning techniques in areas related to external/internal failure analysis and predictive/prescriptive maintenance
Coordinator: Dr Charlie Abela (in collaboration with STMicroelectronics (Malta) Ltd)

Title: MDIA for Maltese Text Processing
Task: The creation of computational tools to process the Maltese Language
Coordinator: Dr Claudia Borg

Title: MDIA for Maltese Speech Processing
Task: The creation of a Maltese Spoken Corpus to facilitate Maltese Speech Recognition
Coordinator: Dr Claudia Borg

Title: NLTP - National Language Technology Platform
Task: Maltese Automatic Translation aimed at Public Administration
Coordinator: Dr Claudia Borg

Title: LT-Bridge
Task: Integrating Malta into European Research and Innovation efforts for AI-based language technologies
Coordinator: Dr Claudia Borg

Title: ELE - European Language Equality
Task: Assessing the use of language technologies in Malta and establishing a national research agenda for Maltese Language Technologies
Coordinator: Dr Claudia Borg


RESEARCH AREAS - Completed research Title: EnetCollect – Crowdsourcing for Language Learning Area: AI, Language Learning Coordinator: Dr Claudia Borg Title: Augmenting Art Area: Augmented Reality Task: Creating AR for meaningful artistic representation Title: Smart Manufacturing Area: Big Data Technologies and Machine Learning Title: Analytics of patient flow in a healthcare ecosystem Area: Blockchain and Machine Learning Title: Real-time face analysis in the wild Area: Computer vision Title: RIVAL; Research in Vision and Language Group Area: Computer Vision/NLP Title: Learning Analytics, Ambient Intelligent Classrooms, Learner Profiling Area: AI in Education Coordinator: Prof Matthew Montebello

Title: Medical image analysis and Braininspired computer vision Area: Intelligent Image Processing Title: Notarypedia Area: Knowledge Graphs and Linked Open Data Coordinator: Dr Charlie Abela and Dr Joel Azzopardi Title: Language Technology for Intelligent Document Archive Management Area: Linked and open data Title: Maltese Language Resource Server (MLRS) Area: Natural Language Processing Task: Research and creation of language processing tools for Maltese Coordinator: Dr Claudia Borg Title: Language in the Human-Machine Era Area: Natural Language Processing

Title: Smart animal breeding with advanced machine learning techniques Area: Predictive analysis, automatic determination of important features

Title: Morpheus Area: Virtual Reality Coordinator: Task: Personalising a VR game experience for young cancer patients Title: Walking in Small Shoes: Living Autism Area: Virtual Reality Task: Recreating a first-hand immersive experience in autism Title: eCrisis Task: Creation of framework and resources for inclusive education through playful and game-based learning Title: cSDGs Task: Creation of digital resource pack for educators to teach about sustainable development goals, through dance, storytelling and games Coordinator: Esplora Science Centre Title: GBL4ESL Task: Creation of digital resources for educators using a Game Based Learning Toolkit

An updated list of concrete areas in which we have expertise to share/offer:
- AI, Machine Learning, Adaptive Hypertext and Personalisation
- Pattern Recognition and Image Processing
- Web Science, Big Data, Information Retrieval & Extraction, IoT
- Enterprise Knowledge Graphs
- Agent Technology and Ambient Intelligence
- Drone Intelligence
- Natural Language Processing / Human Language Technology
- Automatic Speech Recognition and Text-to-Speech
- Document Clustering and Scientific Data Handling and Analysis
- Intelligent Interfaces, Mobile Technologies and Game AI
- Optimization Algorithms
- AI Planning and Scheduling
- Constraint Reasoning
- Reinforcement Learning
- AI in Medical Imaging Applications (MRI, MEG, EEG)
- Gait Analysis
- Machine Learning in Physics
- Mixed Realities



DEPARTMENT OF COMPUTER INFORMATION SYSTEMS

ASSOCIATE PROFESSORS
Professor Ernest Cachia, M.Sc.(Kiev), Ph.D.(Sheff.) (Head of Department)
Professor John Abela, B.Sc.(Hons.), M.Sc., Ph.D.(New Brunswick), I.E.E.E., A.C.M.

SENIOR LECTURERS
Dr Conrad Attard, B.Sc.(Bus.&Comp.), M.Sc., Ph.D.(Sheffield) (Deputy Dean)
Dr Lalit Garg, B.Eng.(Barkt), PG Dip. I.T.(IIITM), Ph.D.(Ulster)
Dr Colin Layfield, B.Sc.(Calgary), M.Sc.(Calgary), Ph.D.(Leeds)
Dr Peter A. Xuereb, B.Sc.(Eng.)(Hons.)(Imp.Lond.), A.C.G.I., M.Phil.(Cantab.), Ph.D.(Cantab.)
Dr Christopher Porter, B.Sc.(Bus.&Comp.), M.Sc., Ph.D.(UCL)
Dr Joseph Vella, B.Sc., Ph.D.(Sheffield)

LECTURERS
Dr Michel Camilleri, B.Sc., M.Sc., Dip.Math.&Comp., Ph.D.(Melit.)
Dr Clyde Meli, B.Sc., M.Phil., Ph.D.(Melit)

VISITING ASSISTANT LECTURERS
Inġ. Saviour Baldacchino, B.Elec.Eng.(Hons.), M.Ent., D.Mgt.
Mr Norman Cutajar, M.Sc. Systems Engineering

ASSISTANT LECTURERS
Mr Joseph Bonello, B.Sc.(Hons)IT(Melit.), M.ICT(Melit.)

SENIOR ASSOCIATE ACADEMIC
Mr Anthony Spiteri Staines, B.Sc., M.Sc., A.I.M.I.S., M.B.C.S.

AFFILIATE SENIOR RESEARCHER
Dr Vitezslav Nezval, M.Sc.(V.U.T.Brno), Ph.D.(V.A.Brno)

ADMINISTRATIVE STAFF
Ms Shirley Borg (Administration Specialist)
Ms Lilian Ali (Administrator I)

RESEARCH AREAS

Software Engineering
- Computational complexity and optimisation
- Integrated risk reduction of information-based infrastructure systems
- Model extraction (informal descriptions to formal representations)
- Automation of formal programming syntax generation
- Automation of project process estimation
- High-level description language design
- Distributed computing systems and architectures
- Requirements engineering methods, management and automation
- System development including real-time scheduling, stochastic modelling, and Petri-nets
- Software testing, information anxiety and ergonomics

Data Science and Database Technology
- Data integration and consolidation for data warehousing and cloud services
- Database technology, data sharing issues and scalability performance
- Processing of streaming data
- Data analysis and pre-processing
- Predictive modelling
- Data warehousing and data mining: design, integration, and performance
- Big data and analytics
- Search and optimization
- Business intelligence
- Data modelling including spatial-temporal modelling
- Distributed database systems
- Missing data analysis
- Information retrieval

Human-Computer Interaction
- Human-Computer Interaction (HCI)
- Understanding the User Experience (UX) through physiological and cognitive metrics
- Human-to-instrumentation interaction in the aviation industry
- User modelling in software engineering processes
- Human-factors and ergonomics
- Accessibility, universal design and accessible user agents
- Advancing assistive technologies (multi-modal interaction)
- Affordances and learned behaviour
- The lived experience of information consumers
- Information architecture

Bioinformatics, Biomedical Computing and Digital Health
- Gene regulation ensemble effort for the knowledge commons
- Automation of gene curation; gene ontology adaptation
- Classification and effective application of curation tools
- Pervasive electronic monitoring in healthcare
- Health and social care modelling
- Missing data in healthcare records
- Neuroimaging
- Metabolomics
- Technology for an ageing population
- Education, technology and cognitive disabilities (e.g. augmented reality)
- Assistive technologies in the context of the elderly and individuals with sensory and motor impairments in institutional environments
- Quality of life, independence and security – investigating the use of robotic vehicles, spoken dialogue systems, indoor positioning systems, smart wearables, mobile technology, data-driven systems, machine learning algorithms, optimisation and spatial analytic techniques

Applied Machine Learning, Computational Mathematics and Statistics
- Applicative genetic algorithms and genetic programming
- Latent semantic analysis and natural language processing
- Heuristics and metaheuristics
- Stochastic modelling & simulation
- Semantic keyword-based search on structured data sources
- Application of AI and machine learning to business and industry
- Application of AI techniques for operational research, forecasting and the science of management
- Application of AI techniques to detect anomalies in the European Electricity Grid
- Knowledge discovery
- Image Processing (deconvolution)
- Image super-resolution using deep learning techniques
- Optimization of manufacturing production lines using AI techniques
- Square Kilometre Array (SKA) Tile Processing Module development
- Spam detection using linear genetic programming and evolutionary computation
- Scheduling/combinatorial optimisation
- Traffic analysis and sustainable transportation
- Automotive cyber-security

Fintech and DLT
- Automatic Stock Trading
- Distributed Ledger Technologies

FACULTY OFFICE
Ms Nathalie Cauchi, Dip.Youth&Comm.Stud.(Melit.), H.Dip.(Admin.&Mangt.)(Melit.), M.B.A.(A.R.U., UK) (Manager II)
Ms Michelle Agius, H.Dip.(Admin.&Mangt.)(Melit.) (Administrator II)
Mr Rene’ Barun, B.A.(Hons.) Philosophy (Melit) (Administrator II)
Ms Therese Caruana (Administrator II)
Ms Samantha Pace (Administrator I)
Ms Luisa Castorina, B.A.(Melit.), M.Trans.(Melit.) (Administrator I)

SUPPORT STAFF
Mr Patrick Catania, A.I.M.I.S. (Senior IT Officer I)
Mr Paul Bartolo (Senior Beadle)
Ms Melanie Gatt (Beadle)
Mr Raymond Vella (Technical Officer II)



Change your job into a career!

The Faculty of Information & Communication Technology (ICT) offers a range of specialised courses that enable students to study in advanced areas of expertise, and improve their strategic skills to achieve career progression.

Get to know more about our courses: um.edu.mt/ict

Our courses

Master of Science (by Research)

Master of Science (Taught and Research) in the following areas:
- Artificial Intelligence
- Human Language Science and Technology
- Computer Information Systems
- Microelectronics and Microsystems
- Data Science
- Signal Processing and Machine Learning
- Digital Health
- Telecommunications

Connect with your peers and develop your future with IEEE Malta Section

JOIN NOW IEEE.ORG

Autumn 2022 – Be part of this event. Reach out to us!
IEEE MALTA WEBSITE: HTTP://WWW.IEEEMALTA.ORG/
/IEEEMALTA



YOU

SOFTWARE DEVELOPER
Designs, codes, tests and documents programs and scripts
Contributes to planning and design of ICT applications according to client needs
Identifies and resolves issues with applications
Can be assigned to several business areas within MITA

IT ANALYST
Identifies, analyzes, and investigates Cybersecurity threats
Provides security operations support
Implements and manages several security solutions
Works with advanced enterprise-level security tools and technologies to support a comprehensive cybersecurity program

TECHNICAL SERVICES OFFICER
Administers and supports ICT Systems to meet service requirements
Investigates, diagnoses, and solves system related issues
Monitors issues from start to resolution
Can be assigned to areas such as Service Call Centre, Network Operations Centre, Email Services and Workstation Management

For more information visit mita.gov.mt/careers

