RGU School of Computing 2020 Degree Show


School of Computing

2020

DEGREE SHOW





Contents

BSc (Hons) Computer Science 4
BSc (Hons) Computing Application Software Development 92
BSc (Hons) Computing Graphics and Animation 146
BSc (Hons) Digital Media 154
BSc (Hons) Computer Network Management and Design 192

www.rgu.ac.uk/computing
01224 262700
CSDM-Enquiries@rgu.ac.uk
ComputingRGU





BSc (Hons) Computer Science



STUDENT BIOGRAPHY

Oliver Aarnikoivu
Course: BSc (Hons) Computer Science
Detecting Emotion from Text using Deep Learning

Currently, the vast majority of research on sentiment analysis has focused on classifying text as either "positive" or "negative". If we can move from a binary classification task to analysing and detecting distinct emotions, this could lead to advances in various fields. However, it is difficult to define "emotion" owing to the complexity of human behaviour. Emotion can be expressed in many different ways, such as facial expressions, gestures, speech and text, and even through less obvious indicators such as heart rate, skin clamminess, temperature and respiration velocity. Nevertheless, since 1979 an illustration provided by the psychologist Robert Plutchik (Plutchik 1979) has been widely used to demonstrate how different emotions can blend into one another, creating new ones. These emotions are joy, trust, fear, surprise, sadness, disgust, anger and anticipation. If we agree that emotions can be categorised into these distinct labels, it raises the question of whether it is possible to convey these emotions through text.



Detecting Emotion from Text using Deep Learning
Oliver Aarnikoivu & Eyad Elyan

Introduction

Currently, the vast majority of research on sentiment analysis has focused on classifying text as either "positive" or "negative". If we can move from a binary classification task to analysing and detecting distinct emotions, this could lead to advances in various fields. However, it is difficult to define "emotion" owing to the complexity of human behaviour. Emotion can be expressed in many different ways, such as facial expressions, gestures, speech and text, and even through less obvious indicators such as heart rate, skin clamminess, temperature and respiration velocity. Nevertheless, since 1979 an illustration provided by the psychologist Robert Plutchik (Plutchik 1979) has been widely used to demonstrate how different emotions can blend into one another, creating new ones. These emotions are joy, trust, fear, surprise, sadness, disgust, anger and anticipation. If we agree that emotions can be categorised into these distinct labels, it raises the question of whether it is possible to convey these emotions through text.

Figure 1: Plutchik's Wheel of Emotions (Plutchik 1979)

Project Aim

The aim of this project is to assess the ability of different deep learning models to classify a text as having one or more emotions, across the eight (Plutchik 1979) categories plus Optimism, Pessimism and Love. The model should be able to generalise adequately to unseen data.

Methods

This project uses the SemEval Task 1: Affect in Tweets E-c dataset, which consists of 6838 training examples, 886 validation examples and 3259 testing examples. The experiment is tested using both a Text CNN (Convolutional Neural Network) and an Attention LSTM (Long Short-Term Memory network), proposed by (Kim 2014) and (Zhou et al. 2016) respectively. Due to the limited amount of training data, we make use of transfer learning: the embedding layer of both models is initialised using pre-trained GloVe (Global Vectors for Word Representation) vectors and BERT (Bidirectional Encoder Representations from Transformers) embeddings generated from a pre-trained BERT transformer model. While the two chosen model architectures are considerably different, both were selected for their ability to identify significant words within a sentence regardless of position.
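Initialising an embedding layer from pre-trained vectors, as described above, can be sketched in a few lines. This is a toy illustration: the vocabulary and vectors here are made up, standing in for real GloVe or BERT embeddings.

```python
import numpy as np

# Toy pre-trained vectors standing in for GloVe/BERT embeddings
# (hypothetical values; real GloVe vectors are 50-300 dimensional).
pretrained = {
    "happy": np.array([0.9, 0.1, 0.4]),
    "sad":   np.array([0.1, 0.8, 0.3]),
}
vocab = {"<pad>": 0, "happy": 1, "sad": 2, "unknownword": 3}
dim = 3

# Build the embedding matrix: known words copy their pre-trained
# vector, unknown words fall back to small random values.
rng = np.random.default_rng(0)
emb = np.zeros((len(vocab), dim))
for word, idx in vocab.items():
    emb[idx] = pretrained.get(word, rng.normal(scale=0.1, size=dim))
emb[vocab["<pad>"]] = 0.0  # padding stays at zero

# The model's embedding layer is then just a row lookup:
sentence = [vocab["happy"], vocab["sad"]]
vectors = emb[sentence]  # shape (2, 3)
```

In a framework such as Keras or PyTorch, this matrix would be passed as the initial weights of the embedding layer, which can then be frozen or fine-tuned during training.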

This suggests that the model does an adequate job given the label imbalance of the dataset.

Figure 2: Attention LSTM Model Architecture (Zhou et al. 2016)

Figure 4: Example of attention visualisation for emotional classification

Figure 3: Text CNN Model Architecture Kim (2014)

Figures and Results

The figure above displays the words that the attention model considers most significant with regard to its predictions. The colour intensity and word size correspond to the weight given to each word. We can see that the model successfully "places attention" on words which correlate to the predicted emotions.
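The weighting behind such a visualisation can be sketched simply: the attention scores are softmax-normalised so each word receives a weight between 0 and 1, and that weight drives the display intensity. The scores below are hypothetical values for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical attention scores assigned to each token in a tweet.
words  = ["i", "feel", "absolutely", "furious", "today"]
scores = np.array([0.1, 0.5, 1.2, 3.0, 0.2])
weights = softmax(scores)  # sums to 1; "furious" dominates

# Scale each word's display intensity by its attention weight,
# as in the poster's visualisation.
for word, w in zip(words, weights):
    print(f"{word:<12} {'#' * int(w * 40)}")
```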

Conclusion

Table 1: Performance comparison of Attention LSTM and Text CNN.

Table 2: Attention LSTM vs. Text CNN on Plutchik Categories by F1 Score.

Based on our results, it is evident that the Attention LSTM performs better for emotion detection. The Attention LSTM using BERT embeddings outperformed both Text CNN models as well as the Attention LSTM using GloVe embeddings. Moreover, for both the Attention LSTM and the Text CNN, the model using embeddings produced by the pre-trained BERT model produced better results. This suggests that the contextualised embeddings produced by BERT may be superior to the non-context-dependent vectors produced by GloVe.

This project compared an Attention-based bidirectional LSTM to a CNN, using transfer learning such that the embedding layer is initialised with pre-trained GloVe and BERT embeddings. The results achieved by the Attention LSTM model using BERT embeddings proved comparable to the current top 10 official SemEval Task 1: E-c (multi-label emotion classification) competition results. The results displayed in Table 2 indicate that both chosen models struggle to generalise to categories with only a few training examples, whereas categories with a sufficient amount of training data perform well. This suggests that the labels with the worst class imbalance would benefit from a larger dataset.

Acknowledgments

I would like to give a special thank you to my honours supervisor Dr. Eyad Elyan, whose support and guidance throughout this project has been invaluable.

References

Table 3: SemEval 2018: Task 1 E-c (multi-label emotion class.) English leaderboard, snippet of the top 10 results.


Furthermore, as shown in Table 3, in terms of the Micro F1 score the results achieved by the Attention LSTM model using BERT embeddings are comparable to the top 10 official SemEval Task 1 (multi-label emotion classification) competition results.
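Since the comparison above pools performance across imbalanced labels, it is worth seeing how micro F1 (pooled over all labels) differs from macro F1 (averaging per-label scores) for multi-label data. A minimal sketch with toy predictions (hypothetical values, not the project's results):

```python
import numpy as np

def f1_scores(y_true, y_pred):
    """Per-label, macro and micro F1 for multi-label predictions."""
    tp = ((y_true == 1) & (y_pred == 1)).sum(axis=0)
    fp = ((y_true == 0) & (y_pred == 1)).sum(axis=0)
    fn = ((y_true == 1) & (y_pred == 0)).sum(axis=0)
    per_label = 2 * tp / np.maximum(2 * tp + fp + fn, 1)
    macro = per_label.mean()                  # every label counts equally
    denom = 2 * tp.sum() + fp.sum() + fn.sum()
    micro = 2 * tp.sum() / max(denom, 1)      # pooled: frequent labels dominate
    return per_label, macro, micro

# Toy ground truth / predictions: 4 tweets, 2 emotion labels.
y_true = np.array([[1, 0], [1, 1], [0, 0], [1, 0]])
y_pred = np.array([[1, 0], [1, 0], [0, 1], [0, 0]])
per_label, macro, micro = f1_scores(y_true, y_pred)
```

The gap between macro and micro on the same predictions is exactly the effect the table illustrates: a rare label with poor F1 drags macro down far more than micro.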

Kim, Y. (2014), Convolutional neural networks for sentence classification, in 'Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)', Association for Computational Linguistics, Doha, Qatar, pp. 1746–1751. URL: https://www.aclweb.org/anthology/D14-1181

Plutchik, R. (1979), Emotions: A general psychoevolutionary theory.

Zhou, P., Shi, W., Tian, J., Qi, Z., Li, B., Hao, H. & Xu, B. (2016), Attention-based bidirectional long short-term memory networks for relation classification, in 'ACL'.



STUDENT BIOGRAPHY

Petar Bonchev
Course: BSc (Hons) Computer Science
Self-Balancing Robot

This project focuses on building a single-wheeled self-balancing robot with a humanoid body form printed using a 3D printer. Importantly, the design goal was for the robot to resemble the Robert Gordon University robot, also referred to as 'Glitch'. The robot uses specific hardware, including motors that move accordingly depending on whether it is in a balanced position or a falling state. Maintaining balance was implemented first with a PID controller and then with Reinforcement Learning. A software simulation was also created using the same approaches, simulating how the robot behaves and balances in a 3D world.



Self-Balancing Robot

Autonomous Self-Balancing Humanoid Robot made from scratch using Reinforcement Learning and a PID Controller
Student: Petar Bonchev, BSc (Hons) Computer Science
Supervisor: Kit Hui

Introduction

This project focuses on building a single-wheeled self-balancing robot with a humanoid body form printed using a 3D printer. Importantly, the design goal was for the robot to resemble the Robert Gordon University robot, also referred to as 'Glitch'. The robot uses specific hardware, including motors that move accordingly depending on whether it is in a balanced position or a falling state. Maintaining balance was implemented first with a PID controller and then with Reinforcement Learning. A software simulation was also created using the same approaches, simulating how the robot behaves and balances in a 3D world.

Project Aim

The purpose of the project was to create a robot from scratch using a 3D printer, with a body similar to a human's, able to maintain balance by itself using a PID controller and Reinforcement Learning.

Methods

The hardware part of the project used: a Raspberry Pi 4, an MPU6050 gyroscope (which also includes an accelerometer), an MG90S servo (modified to rotate through 360 degrees), jumper cables, six AA 2900mAh batteries, a battery holder and a 5V UBEC switching regulator. The body of the robot was designed using the FreeCAD software. In terms of software, the hardware side of the project used Python and various hardware libraries, while the 3D simulation used C#.

Figures and Results

The image below shows the first prototype of the robot. Initially only the lower part of the body was printed, since the middle and upper parts were mainly decorative. The images at the very top are an example of how the robot tried to maintain balance using a PID controller when pushed. The MPU6050 sends regular readings to the Raspberry Pi, which applies a filter to remove the noise and recognise the main patterns; based on this data, an error value is passed to the PID controller, which returns a value telling the motor how long it has to move. As seen in the images, many cables were attached to the Pi, which stopped it falling sideways; later the cables were removed and the robot was controlled over wireless SSH. Unfortunately, there were major issues with the design itself: the motors were too fragile (made of plastic), and it was later discovered that their response rate was too slow, causing major problems with balance. Reinforcement Learning was not fully implemented, as the motors were not suitable. The software simulation was executed using the PID controller and partly using Reinforcement Learning, as time ran low.
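The control loop described above (filtered tilt angle in, motor command out) can be sketched as a minimal PID controller. The gains below are hypothetical placeholders, not the values tuned on the robot.

```python
class PID:
    """Minimal PID controller of the kind used to keep the robot upright.
    The error is the target tilt (0 degrees) minus the filtered angle
    reported by the MPU6050."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # The output tells the motor how hard / how long to drive.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hypothetical gains and a single control step at 100 Hz.
pid = PID(kp=1.2, ki=0.05, kd=0.4)
angle = 5.0  # degrees of tilt from the (filtered) gyro reading
command = pid.update(error=0.0 - angle, dt=0.01)
```

A negative command here means "drive toward the direction of the fall", which is how a wheeled balancer recovers; the loop runs continuously at a fixed rate.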

Conclusion

After multiple attempts, design changes, electronics changes, motor changes and software adjustments, the robot did not manage to keep its balance by itself due to the slow responsiveness of the motors. Even though the length of the legs was increased several times, up to 30cm, which made a substantial difference, the motor was still incapable of reacting fast enough to balance the robot.

Acknowledgments

Special acknowledgements to my supervisor Mr Hui, who guided me through the whole process of building the robot using FreeCAD. He also helped me choose the appropriate hardware for the project, gave me guidance on electronics and explained the main concepts that enabled me to complete the project. Thanks also to the School of Computing for financially supporting the purchase of the hardware.

References

BHAWMICK, B. K., December 2017. Design and implementation of a self-balancing robot. [online]. ResearchGate. Available from: https://www.researchgate.net/publication/323258475_DESIGN_AND_IMPLEMENTATION_OF_A_SELF-BALANCING_ROBOT [Accessed April 2020].

CAULLEY, D., June 2019. [online]. Cornell University. Available from: https://people.ece.cornell.edu/land/courses/ece4760/FinalProjects/f2015/dc686_nn233_hz263/final_project_webpage_v2/dc686_nn233_hz263/index.html [Accessed April 2020].

MILLER, P., 2017. [online]. University of Southern Queensland. Available from: https://core.ac.uk/download/pdf/11039073.pdf [Accessed April 2020].



STUDENT BIOGRAPHY

Matthew Buchan
Course: BSc (Hons) Computer Science
Impact of Game Mechanics on Raising Awareness of Cyber Security

Games were a casual form of entertainment for many years, but with more recent innovation in technology they have become a daily part of most people's lives. The gaming industry generated £104.87 billion in 2018, an increase of 10.9% on the previous year, and is projected to reach £139.91 billion by 2021. More than 2.5 billion people play games worldwide, 33% of the world population. From these statistics it can be determined that gaming is a popular, ever-growing method of entertainment used by billions worldwide; but instead of being made solely for entertainment, could it be an effective method of training and educating a user to increase awareness of cyber security? The benefits of proper cyber security training programmes are extensive: less exposure to security risks, the security and integrity of user data, lower insurance costs, restored public trust and many more positive outcomes. My project seeks to be part of the solution in training people who are unaware of the risks involved with cyber-attacks. The goal is for users to become cyber aware and cyber secure by learning the correct procedures to ensure security in their work and out-of-work activities.



Impact of game mechanics on raising awareness of cyber security

Matthew Buchan [1], Hatem Ahriz [2]
School of Computing Science, Robert Gordon University, United Kingdom

Introduction

• Games were a casual form of entertainment for many years, but with more recent innovation in technology they have become a daily part of most people's lives.
• The gaming industry generated £104.87 billion in 2018, an increase of 10.9% on the previous year, and is projected to reach £139.91 billion by 2021. More than 2.5 billion people play games worldwide, 33% of the world population [1].
• From these statistics it can be determined that gaming is a popular, ever-growing method of entertainment used by billions worldwide; but instead of being made solely for entertainment, could it be an effective method of training and educating a user to increase awareness of cyber security?
• The benefits of proper cyber security training programmes are extensive: less exposure to security risks, the security and integrity of user data, lower insurance costs, restored public trust and many more positive outcomes [2].
• My project seeks to be part of the solution in training people who are unaware of the risks involved with cyber-attacks.
• The goal is for users to become cyber aware and cyber secure by learning the correct procedures to ensure security in their work and out-of-work activities.

source: https://www.itu.int/en/ITU-D/Cybersecurity/Pages/global-cybersecurity-index.aspx

Project Aim

• To create an interactive web app that provides training scenarios regarding cyber security awareness to a user, with the implementation of game mechanics.

Methods

• Creation of the web application using the MERN (MongoDB, Express, React, NodeJS) development stack.
• Utilisation of VS Code for local development.
• Trello used to track development.
• Web app version control by use of GitHub.
• Heroku used to host the final version.
• A forums section allows for discussion on cyber security related topics, to help new users acclimate faster to the games.
• Hackman is one of the cyber games and is similar to hangman: you are given a key cyber security word to guess. This should help recollection for the quiz, as the keywords are used in the quiz. The user is awarded points based on their guesses.
• The quiz is the last cyber game, in which a user answers a set of questions within a timeframe, with some lifelines given. Use of the Hackman game should make this quiz easier. Points are rewarded by the number of correct answers.
• A leaderboard displays a user's combined game points to drive a competitive nature; this should encourage users to become number one and learn more.

Results

• A control group of 10 users was polled on their ability to improve upon their first score attempt on the quiz by playing the Hackman game multiple times.
• 70% of the respondents found that guessing the keywords in the Hackman game increased their score on the quiz.
• The control group was also polled on whether they had used the forums section to ask questions about the quiz and help themselves answer the quiz's questions.
• 100% of the respondents did not make use of the forums section.
• Lastly, the control group was polled on whether the leaderboard drove competition between each other.
• All respondents replied that it did indeed drive them towards besting their opponents by playing and learning more to gain more points.

Conclusion

• As was aforementioned, the aim of the project was to create an interactive application that makes use of game mechanics to provide a user with training on scenarios regarding cyber security awareness.
• A user should therefore be able to deliberate upon an awareness scenario, such as determining whether a website is fraudulent or not, and engage interactively in a positive manner.
• Results show an increase in awareness after use of the application, demonstrating that game-based learning of cyber security raises the awareness of a user.

Acknowledgments

This poster was made possible by the efforts of my supervisor Hatem Ahriz and his work to guide me through my dissertation.

References

• Dobrilova, Teodora. How Much Is the Gaming Industry Worth?, https://techjury.net/stats-about/gaming-industry-worth/ [Accessed 04 November 2019].
• Archer, J., 2018. Cyber attacks are the biggest risk, companies say. Technology Intelligence. [online]. Available from: https://www.telegraph.co.uk/technology/2018/11/12/cyber-attacks-biggest-risk-companies-say/ [Accessed 23/09/2019].

Matthew Buchan
m.buchan6@rgu.ac.uk
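The combined-points leaderboard mechanic described in this poster (per-game points summed and ranked to drive competition) can be sketched language-neutrally in Python; the app itself used the MERN stack, and the users and scores below are hypothetical.

```python
# Hypothetical users with points from each game in the app.
scores = {
    "alice": {"hackman": 120, "quiz": 300},
    "bob":   {"hackman": 200, "quiz": 150},
    "carol": {"hackman": 90,  "quiz": 380},
}

# Rank users by their combined game points, highest first.
leaderboard = sorted(
    ((sum(games.values()), user) for user, games in scores.items()),
    reverse=True,
)

for rank, (points, user) in enumerate(leaderboard, start=1):
    print(f"{rank}. {user}: {points} pts")
```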



STUDENT BIOGRAPHY

Rafael Castillo
Course: BSc (Hons) Computer Science
Can machine learning algorithms for question answering be used to help with depression problems?

The world is in a golden digital age where artificial intelligence is evolving the world faster than ever. Digital assistants and chatbots which can emulate human abilities, trained by machine learning algorithms, are on the rise. This technology can help with one of the biggest problems humanity faces, one whose scale has always outstripped the availability of help, leaving people unattended and without the knowledge or care to cope: depression. In recent years machine learning has played a big role in health care institutions; natural language processing and machine learning techniques make it possible to automate the process of helping people cope with depression by creating an environment where people can ask questions about their personal problems to a computer program, without the fear or shame of dealing with real people. These sophisticated algorithms can also document and evolve with user input, finding patterns that help make the algorithm as useful as possible in treating depression, so that the best answers and trends can be surfaced by the community, for the community.



Can machine learning algorithms for question answering be used to help with depression problems?
Rafael Castillo & Nirmalie Wiratunga

Introduction

The world is in a golden digital age where artificial intelligence is evolving the world faster than ever. Digital assistants and chatbots which can emulate human abilities, trained by machine learning algorithms, are on the rise. This technology can help with one of the biggest problems humanity faces, one whose scale has always outstripped the availability of help, leaving people unattended and without the knowledge or care to cope: depression. In recent years machine learning has played a big role in health care institutions; natural language processing and machine learning techniques make it possible to automate the process of helping people cope with depression by creating an environment where people can ask questions about their personal problems to a computer program, without the fear or shame of dealing with real people. These sophisticated algorithms can also document and evolve with user input, finding patterns that help make the algorithm as useful as possible in treating depression, so that the best answers and trends can be surfaced by the community, for the community.

Project Aim

Research machine learning algorithms that can be used to create a question-answering bot which is able to comprehend human language, pick up on questions about a specific topic such as depression, and deliver an answer that fully addresses the question asked.

Methods

The method we found for creating an algorithm that can be trained to answer depression queries is called Bidirectional Encoder Representations from Transformers, or BERT. BERT is a technology introduced by Google; I chose it because it has been pre-trained on over 3.4 billion words, is able to recognise 104 languages, and allows for the best contextual representation of a word in a sentence, because it reads the stream of words both left-to-right and right-to-left, hence the name "Bidirectional". Google posted a picture showing how this technique improves understanding of context: the results for a search query change before and after BERT was introduced, because it is able to understand the meaning of the word "for".

This pre-trained algorithm solves the problem of training a neural network from scratch, as there is no need to find millions of lines of text about depression to train it; instead, the algorithm has an extra layer that can be fine-tuned. It was trained using a masked language model, which masks words in a sentence and tries to predict them; it also uses next-sentence prediction, trying to label whether sentence A belongs with sentence B. This structure makes it easy to fine-tune BERT for question answering, as the question is sentence A and the paragraph containing the answer can be sentence B. These two techniques are also known as self-supervised learning: every time the algorithm is wrong, it adjusts its weights, achieving state-of-the-art results.

Figures and Results

I have tested BERT using the SQuAD model, a reading comprehension dataset that has been used to train BERT to read and understand a question and look for the answer in a paragraph of text; the results obtained so far are surprisingly accurate. In the example below we have fed BERT a paragraph about depression and asked a few questions whose answers can be found in the paragraph. It is worth noticing that BERT can only answer the question as it appears in the paragraph, and is unable to paraphrase or to add or subtract any information. What piques my interest is how well the algorithm deals with context in the questions: for example, in the third question the algorithm understands that the word "them" in the question means the students in the paragraph, and when asked "where are they from" it also recognises that we mean the students, answering "RGU". The last example is a yes-or-no question, and here the algorithm meets its limits: as good as it is at understanding questions, it is unable to answer a simple "yes". The expected accuracy of this algorithm is similar to the current benchmark of 87.43, surpassing even human performance and achieving number 1 state-of-the-art results.
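A minimal sketch of how an extractive QA model such as BERT fine-tuned on SQuAD selects its answer: one output head scores every token as a possible answer start, another as a possible end, and the best valid (start ≤ end) pair wins. The tokens and logits below are hypothetical stand-ins for the real model's outputs.

```python
import numpy as np

# Toy paragraph tokens and hypothetical start/end logits from the model.
tokens = ["the", "students", "are", "from", "rgu"]
start_logits = np.array([0.1, 0.3, 0.2, 0.5, 3.1])
end_logits   = np.array([0.2, 0.1, 0.3, 0.4, 3.4])

# Pick the span (i, j) with the highest combined score, requiring i <= j.
best, span = -np.inf, (0, 0)
for i in range(len(tokens)):
    for j in range(i, len(tokens)):
        if start_logits[i] + end_logits[j] > best:
            best = start_logits[i] + end_logits[j]
            span = (i, j)

answer = " ".join(tokens[span[0]:span[1] + 1])
```

This span-selection step is also why the model can only quote the paragraph verbatim, and why a yes-or-no question defeats it: "yes" is not a span of the given text.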

Conclusion

My research has taken me from simple ANNs to different ML models such as BERT, a powerful tool that can be used not just for depression but in any other field where questions need to be asked and an answer can be extracted from a text. This technology could also be used in conjunction with other ML models to build a chatbot, giving the answer the human touch BERT is missing.

Acknowledgments

I would like to thank my tutor Nirmalie Wiratunga for pointing me in the right direction so I could acquaint myself with BERT. Special thanks to the School of Computing staff, who always believed in me, supported me, and made me feel part of a family during my hardest times.

References

Chang, M.-W. and Devlin, J. (2018). Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing. Google AI Blog. https://ai.googleblog.com/2018/11/opensourcing-bert-state-of-art-pre.html

McCormick, C. (2019). BERT Research - Ep. 3 - Fine Tuning - p.1. https://www.youtube.com/watch?v=x66kkDnbzi4&t=792s

13


STUDENT BIOGRAPHY

Matthew Donald
Course: BSc (Hons) Computer Science
The Gamification of Cybersecurity

It is estimated that by 2021 cybercrime will have caused $6 trillion worth of damage, costing more than all the natural disasters in a year (Steve Morgan, 2017). Cybercrime can affect businesses in many different ways, including damage to the reputation of the business that has been attacked. Furthermore, damage can be done to the company's intellectual property, as ideas and strategies could be stolen by the attacker. Gamification is the use of game-like elements in a non-gaming environment. It is an effective method of teaching, as it gives the user control, allowing them to complete a game at their own pace. Furthermore, the user gets a sense of achievement from completing tasks within the app.

14


The Gamification of Cybersecurity

In relation to cybersecurity awareness for employees in an organisation. Matthew Donald

Introduction

It is estimated that by 2021 cybercrime will have caused $6 trillion worth of damage, costing more than all the natural disasters in a year (Steve Morgan, 2017). Cybercrime can affect businesses in many different ways, including damage to the reputation of the business that has been attacked. Furthermore, damage can be done to the company's intellectual property, as ideas and strategies could be stolen by the attacker. Gamification is the use of game-like elements in a non-gaming environment. It is an effective method of teaching, as it gives the user control, allowing them to complete a game at their own pace. Furthermore, the user gets a sense of achievement from completing tasks within the app.

Project Aim

The general aim of the project is to produce a mobile application that targets employees in businesses. The application should help users improve their knowledge of common cybersecurity scenarios that may affect them in the workplace, and inform them of how attacks work. This is done through scenarios the user plays through and quizzes to test their knowledge.

Methods

Figures and Results

Conclusion

From the results analysed so far, it can be seen that most users feel they have improved their knowledge of cybersecurity by using the app, which should lead to better workplace security practice. The results shown in the graph above were produced using the System Usability Scale (SUS), a 10-item questionnaire that lets users evaluate the usability of the application and gives a score out of 100, with the average score being 68. The graphs show some questions sampled from the SUS, where odd-numbered questions are in a positive tone and even-numbered questions are in a negative tone. The table above shows how the SUS score is calculated for each individual response, with the average score taken across all responses.
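The SUS calculation described above follows the standard formula: odd-numbered (positive-tone) items contribute their response minus 1, even-numbered (negative-tone) items contribute 5 minus their response, and the sum is multiplied by 2.5. A minimal sketch (illustrative, not the project's code):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 responses.
    Odd-numbered items (positive tone) contribute (response - 1);
    even-numbered items (negative tone) contribute (5 - response);
    the sum is scaled by 2.5 to give a score out of 100."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who strongly agrees with every positive item (5)
# and strongly disagrees with every negative item (1) scores 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A neutral respondent (all 3s) scores exactly 50, which is why 68 rather than 50 is taken as the benchmark average.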

Furthermore, the results from the SUS questionnaire show that the application should be relatively easy for anyone to pick up and start using, meaning it would be a good choice for an organisation to use as a tool to educate employees.

Another tool used to assess the functionality of the app was CogTool. CogTool measures the cognitive load of an application and can be used to assess how long tasks would take within it. The table below shows some example tasks within the app, such as navigating the index page, and roughly how long each would take to complete.

Moreover, the application is designed to be easily expandable in such a way that other scenarios and quizzes could be added with little disruption to the way the app works right now.

Acknowledgments I would like to thank Dr. Hatem Ahriz for supervising this project, whose weekly meetings were very helpful in providing direction for the project.

References

Cybercrime Damages $6 Trillion by 2021 (2017). Available at: https://cybersecurityventures.com/hackerpocalypse-cybercrimereport-2016/

Wireframes were drawn up detailing the various screens of the application. From these wireframes, a workflow was produced detailing the pathways users take to navigate the app. The front and back end were then implemented using Android Studio as the development environment.

CogTool-IBM (2018). Available at: https://researcher.watson.ibm.com /researcher/view_group.php?id=2 238

Computer Science

15


STUDENT BIOGRAPHY

Marcus Douglas
Course: BSc (Hons) Computer Science
Investigating the Influence of Soundtrack on Player Experience in Video Games

An often somewhat underappreciated feature of video games today is the quality of the audio and soundtrack implemented into the experience. Soundtrack is a feature that has come to be expected, to the point where its value is perhaps overlooked. With much of the gaming industry's development focus directed toward graphical innovations, this project looks to place a spotlight on the importance of soundtrack in video games. A video game will be designed for this project, taking inspiration from Hideo Kojima's most recent release, Death Stranding. This game received many negative reviews due to the tedious nature of its gameplay. For those who did enjoy the game, however, it seems that the soundtrack was a rather large part of the enjoyment, with many in fact wishing the game's soundtrack were available to listen to again through an in-game playlist feature. This project will therefore consider the ways in which soundtrack is presented in games.

16


Investigating the Influence of Soundtrack on Player Experience in Video Games
Student: Marcus Douglas
Supervisor: Carrie Morris

Introduction

An often somewhat underappreciated feature of video games today is the quality of the audio and soundtrack implemented into the experience. Soundtrack is a feature that has come to be expected, to the point where its value is perhaps overlooked. With much of the gaming industry's development focus directed toward graphical innovations, this project looks to place a spotlight on the importance of soundtrack in video games. A video game will be designed for this project, taking inspiration from Hideo Kojima's most recent release, Death Stranding. This game received many negative reviews due to the tedious nature of its gameplay. For those who did enjoy the game, however, it seems that the soundtrack was a rather large part of the enjoyment, with many in fact wishing the game's soundtrack were available to listen to again through an in-game playlist feature. This project will therefore consider the ways in which soundtrack is presented in games.

Project Aim

This project seeks to gain insight into the influence of soundtrack implementation on the player experience and the level to which it affects player engagement and enjoyment. To do this, a video game will be created with varying soundtrack implementations which will be tested by groups of participants for analysis.

Design Methods

The video game was developed entirely within the Unity game engine. All assets such as textures, character models, and other environment objects were imported from the Unity Asset Store. Unity had all the appropriate built-in tools to allow for implementation and mixing of audio.

Unity Implementation

Conclusion



All the imported assets were chosen with care to ensure thematic consistency with Death Stranding, from which the design of this game takes inspiration. Extra terrain sculpting tools were also imported from the Asset Store to ease the creation of a large, dramatic landscape for the player to explore. The player can manoeuvre through the game world with a humanoid character in a third-person perspective. Some simple cube-shaped objects were created to serve as collectable sub and main objectives for the player to complete. Three different builds of the game were created: one with soundtrack scripted in response to in-game events, one with soundtrack implemented as a playlist feature, and one with no soundtrack. Where the soundtrack was implemented, the songs were selected from Death Stranding's soundtrack, maintaining the thematic consistency. To help gain a better understanding of how the user played the game, timers were scripted to run in the background, with the results shown when the main objective was completed.

Testing


To test the video game and achieve the project aim, three groups of participants were required to play different versions of the game and fill out a questionnaire. While testing is still underway, the generally anticipated result is that participants who play versions of the game with soundtrack implementation will perhaps gather more of the optional objectives. Furthermore, it is expected that the questionnaire responses will show that participants find versions with soundtrack more enjoyable, and that having the soundtrack play at scripted points is preferable to having it presented as a playlist.

Many challenges have arisen throughout the course of this project, with many adaptations having been necessary. The main challenge has been to build and test a video game of the scale that this project would ideally require. With more time and expertise, a more refined video game might have been created which did not focus quite so obviously on soundtrack. Were there more time, it might have been preferable to have participants revisit the video game and see how their engagement dropped or increased over multiple playthroughs. To conclude, there is far more complexity to video games, and to how we enjoy them, than simply a good soundtrack. Video games are art, and art is subjective.

Future Work There are many avenues for future work in this area. Further investigation could be made into what happens to player engagement when for example the soundtrack does not match the context of what is happening on screen. With more time and a video game demo at the ready, participants could be exposed to the demo multiple times and their engagement over each exposure compared. The experimental design choice could even be changed to repeated measures. This would mean participants would experience all versions of the game and their engagement could be analysed across versions; for example, would a participant who played with no soundtrack initially be as willing (if not more) to explore the game more when music is playing?

17


STUDENT BIOGRAPHY

Ogochukwu Emele
Course: BSc (Hons) Computer Science
The use of poseNet in monitoring the progress of a knee rehabilitation patient

Following a knee injury or surgery, knee rehabilitation therapy is an essential step to recover normal joint function for daily activities. Physical rehabilitation can take up to several weeks or even months before full range of motion and joint flexibility are regained, and so patients are usually encouraged to continue exercises at home until they fully recover. A specialist monitors the progress through routine visits and may adapt the exercises. Literature reveals that patients may lose motivation to exercise on their own, especially if no active monitoring is in place. This project explores how poseNet (a library of human poses) can be exploited to aid patients' monitoring of their progress in a knee rehab program.

18


The use of poseNet in monitoring the progress of a knee rehabilitation patient
Ogochukwu Emele* & Prof. Nirmalie Wiratunga

Introduction

Following a knee injury or surgery, knee rehabilitation therapy is an essential step to recover normal joint function for daily activities. Physical rehabilitation can take up to several weeks or even months before full range of motion and joint flexibility are regained [1], and so patients are usually encouraged to continue exercises at home until they fully recover. A specialist monitors the progress through routine visits and may adapt the exercises [2]. Literature reveals that patients may lose motivation to exercise on their own, especially if no active monitoring is in place. This project explores how poseNet (a library of human poses) can be exploited to aid patients' monitoring of their progress in a knee rehab program.

Project Aim The project integrates the poseNet human pose models into a mobile app for monitoring the progress of knee rehabilitation patients.

Methods

1. Develop a mobile app
2. Use an IDE (e.g. Android Studio)
3. Use the poseNet TensorFlow Lite model [3]
4. Track human poses
5. Extract the angle formed by the intersection of the Hip-Knee and Knee-Ankle lines
6. Store the angles and plot a graph to monitor progress
7. Conduct a few experiments to evaluate the system
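Step 5 can be sketched as follows: treating the hip, knee and ankle keypoints as 2-D coordinates, the knee angle is the angle between the knee-to-hip and knee-to-ankle vectors. This is a minimal illustration, not the project's code; function and variable names are my own:

```python
import math

def knee_angle(hip, knee, ankle):
    """Angle (in degrees) at the knee, formed by the hip-knee and
    knee-ankle segments of a pose skeleton. Keypoints are (x, y)
    pairs, as returned by a pose estimator such as poseNet."""
    v1 = (hip[0] - knee[0], hip[1] - knee[1])     # knee -> hip vector
    v2 = (ankle[0] - knee[0], ankle[1] - knee[1]) # knee -> ankle vector
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# A fully extended leg: hip, knee and ankle collinear → 180 degrees.
print(round(knee_angle((0, 0), (0, 1), (0, 2))))  # → 180
```

Storing this angle per frame (step 6) gives the time series from which the progress graph is plotted.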


Figures and Results

Conclusion

1. poseNet has been successfully integrated into the mobile app
2. Knee angles were successfully calculated from human poses
3. Experiments were conducted to evaluate the mobile app
4. Results showed that the app aided users in monitoring progress

This project has achieved the key objectives and requirements such as integrating poseNet into mobile app, facilitating the extraction of the knee angle from a human pose, and plotting a graph of the angles for monitoring progress. For future work, admin features could be added to the app, and more rigorous evaluation involving real patients could be considered.

Acknowledgments Thanks to the TensorFlow team for the poseNet libraries. I am also grateful to my supervisor Prof. Nirmalie Wiratunga for her support and guidance during this project.

References

1. Preece, S. J. et al. (2009). Activity Identification Using Body-Mounted Sensors: a Review of Classification Techniques. Phys. Meas. 30, R1-R33.
2. Clark, N. C. (2015). The role of physiotherapy in rehabilitation of soft tissue injuries of the knee. Orthopaedics and Trauma, Vol 29, Issue 1, 48-56.
3. Mao, E. & Prity, T. (2019). Track human poses in real-time on Android with TensorFlow Lite. https://blog.tensorflow.org/2019/08/track-human-poses-in-real-time-on-android-tensorflow-lite.html

*Corresponding author Email: o.c.emele@rgu.ac.uk

19


STUDENT BIOGRAPHY

Kirsty Forrest
Course: BSc (Hons) Computer Science
An Investigation into Deployment Tools to Determine the Futuristic Requirements for Business Standards

The chosen project investigates deployment methods within a business environment. Deployment methods have been around since the beginning of machines, starting very basic with simple installations: software was individually set up on each machine. This has progressed to creating .iso files on a bootable device or using deployment servers. Throughout the process, I identified a flaw in the system when creating a machine with core images and key applications: most deployment methods do not offer a low-bandwidth option, and more importantly none cover the case where no technical expertise is available. A non-technical person could not deploy a crucial machine to ensure the business continuity required. Including this aspect could make deployments easier in a short space of time or in a remote location. The futuristic element of the project is determined by reviewing how regularly the tools change, which evidences how they would improve and produce cost-saving features.

20


An Investigation into deployment tools to determine the futuristic requirements for business standards Student name: Kirsty Marjory Forrest

Introduction

The chosen project investigates deployment methods within a business environment. Deployment methods have been around since the beginning of machines, starting very basic with simple installations: software was individually set up on each machine. This has progressed to creating .iso files on a bootable device or using deployment servers. Throughout the process, I identified a flaw in the system when creating a machine with core images and key applications: most deployment methods do not offer a low-bandwidth option, and more importantly none cover the case where no technical expertise is available. A non-technical person could not deploy a crucial machine to ensure the business continuity required. Including this aspect could make deployments easier in a short space of time or in a remote location. The futuristic element of the project is determined by reviewing how regularly the tools change, which evidences how they would improve and produce cost-saving features.

Project Aim The purpose of this project was to investigate deployment tools and determine if the requirements they provide are adequate for the future of businesses standards. The tools were to be critically evaluated on their effectiveness. The project investigates alternative methods of deployments including a Raspberry Pi.

Methods

At the start of the project, I began investigating Microsoft tools such as SCCM and Intune, as these are the most popular. They are particularly prevalent in medium-to-large corporate networks, and both can be used side by side to complement each other.

Supervisor: Andrei Petrovski

Microsoft has dominated the market since the beginning of the use of sophisticated tools. Their main tool is System Center Configuration Manager (SCCM), a complex tool that provides full administrative components, integrated with Active Directory. Intune was integrated with SCCM in 2010 to produce a mobile management tool; with more and more personal devices being used in place of company devices, Intune provides this service. Workspace ONE is a VMware product that uses an online portal to access the software. This requires no physical machine to be built, and therefore no deployment method.

The Raspberry Pi has become a centralised focus for experimental projects, but it is capable of more: the Raspberry Pi 4 has the same capacity as an older laptop. In terms of implementing the deployment methods, I created a Raspbian Linux server which manages the administration of ISO files. A PXE server is used to distribute the ISO images to the 'client' machines. Python was used to create the screen displayed on the 'server' device. The key focuses of the process were usability, portability and a simple solution.
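A PXE server of the kind described can be configured in several ways; one common option on Raspbian is dnsmasq, which can act as both the DHCP and TFTP server that PXE booting needs. The fragment below is a sketch only: the interface name, address range and boot-loader filename are illustrative assumptions, not taken from the project.

```ini
# /etc/dnsmasq.conf -- minimal PXE boot sketch (illustrative values)
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-boot=pxelinux.0        # boot loader offered to PXE clients
enable-tftp                 # serve files over TFTP
tftp-root=/srv/tftp         # directory holding pxelinux.0 and the images
```

A client machine set to network boot would then obtain an address, fetch the boot loader over TFTP, and load the selected image.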

Conclusion

The Raspberry Pi fulfils a gap in the market: an emergency deployment device, for use on offshore rigs or in remote locations such as military bases. A non-technical user can deploy a device without any prior knowledge except a login. If Active Directory were configured in line with the 'server', no preconfigured login would be required, as the user would be using the network configuration information. The Raspberry Pi 4 was a good choice for a test network but, being new, it was difficult to add non-Raspberry Pi devices.

Acknowledgments

Figure 2: Network Diagram

The result of the project has been successful as the functional requirements have been met. Deployments are possible and can be customised.

Future Work The project has plenty of room for development and taking this project further. The Raspberry Pi could be taken a step further into a fully-fledged server. This could be adapted into a Windows 2020 Server to ensure security measures are met. This would have been attempted if Microsoft trials lasted longer than 60 days. A business environment would be either paying for AWS Server or a physical server within their datacentre.

Figure 1: SCCM and Intune

Figure 3: Login Screen of Raspberry Pi Screen

Due to limited hardware, the device was not tested with physical laptops. This would be a great extension to the testing of the device.

I would like to acknowledge Andrei Petrovski for all his hard work throughout this project, who guided me through the process. Providing critical views, ideas and feedback. Without your combined knowledge I would not have been successful in bringing this project to a positive close.

References

JOY-iT 3.5" Touchscreen for Raspberry Pi. Elektor, 2020. [online]. Available from: https://www.elektor.com/joy-it-3-5-touchscreen-for-raspberry-pi [Accessed 13 Apr 2020].

Comparison between Microsoft Intune and System Center Configuration Manager (SCCM) Capabilities. theCloudXperts, 2020. [online]. Available from: http://www.thecloudxperts.co.uk/comparison-between-microsoft-intune-and-system-center-configuration-manager-sccm-capabilities [Accessed 13 Apr 2020].

21


STUDENT BIOGRAPHY

Hugh Fraser
Course: BSc (Hons) Computer Science
DiabetEase: a mobile application as a tool for diabetics to self-manage their disease

Diabetes is undoubtedly an alarming and often misunderstood disease, along with the life-changing impacts it brings to sufferers. It has also been said that as long as the disease continues to exist, there will be a need to effectively manage it (Yach, 2003). While diabetes has been around for an extensive amount of time, and technology has progressed, it was found that many self-management applications fall short of what can be accomplished in the 21st century. Research found that, despite clear outlines of the categories of diabetes self-management, and a vast number of applications existing on both the Apple App Store and Google Play, the majority of applications do not advance beyond the bare minimum of functions, or require a subscription to do so.

22


DiabetEase: a mobile application as a tool for diabetics to self-manage their disease

Hugh Fraser

Introduction

Supervisor: David Lonie

Implementation

Conclusion

Diabetes is undoubtedly an alarming and often misunderstood disease, along with the life-changing impacts it brings to sufferers. It has also been said that as long as the disease continues to exist, there will be a need to effectively manage it (Yach, 2003).

This project has been both challenging and revealing at the same time. Numerous issues have arisen that have made me question the level of comfort and confidence I have while coding in Java, but ultimately the main obstacle encountered was time management.

While diabetes has been around for an extensive amount of time, and technology has progressed, it was found that many self-management applications fell short of what can be accomplished in the 21st century.

A fully functional self management tool is a possibility, but the true scale of the project in terms of activities, additional Java classes and XML files was not properly considered before implementation began, severely hindering the project. As a result, features such as the graph, pictured below courtesy of Trust Onyekwere of Medium.com, could not be implemented fully.

Research found that, despite clear outlines of the categories of diabetes self-management, and a vast number of applications existing on both the Apple App Store and Google Play, the majority of applications do not advance beyond the bare minimum of functions, or require a subscription to do so.

Project Aim

The project aim was to consolidate findings from existing applications in order to develop a new, and hopefully more comprehensive, freeware tool for diabetics. The selection of features implemented was based upon the categories outlined in the work of El-Gayar et al.:


As with any other Android Studio application, the project is based upon several activities. Each feature is given its own main activity, with the intent to show entries through the use of a RecyclerView component. An entry is added through a separate view where the user fills in values for given fields, which are taken and transferred to the Room database. In order to edit or delete a previous entry, the user can tap their selected item in the RecyclerView and be taken to an edit screen; from here they are also given the option to delete the entry.

These values are used in combination with the GraphView library to create a line-graph timeline of entries. While each separate entry type is shown alone on its relevant activity, on the main dashboard all the lines are shown on the same graph at the same time. Each time an entry is updated or deleted, the graphs are refreshed to reflect this.

While not yet implemented, the education system would be a series of CardViews containing videos, text, diagrams and photographs for the user to scroll through at their leisure. Additionally, the tag system was considered as a potential future feature of the database, so that users could mark entries with a specific colour to remind themselves of additional information, such as whether a day of particular entries was highly active or stressful.
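The add/update/delete flow described above, with the graphs refreshed after every change, can be sketched language-neutrally. The Python below only illustrates the structure (the app itself uses Java with Room and GraphView); all names are hypothetical:

```python
class EntryStore:
    """In-memory stand-in for the Room database: every CRUD operation
    notifies a listener so the graph can redraw after each change."""
    def __init__(self, on_change):
        self.entries = {}           # entry id -> (timestamp, value)
        self.next_id = 1
        self.on_change = on_change  # e.g. the graph-refresh callback

    def add(self, timestamp, value):
        entry_id = self.next_id
        self.next_id += 1
        self.entries[entry_id] = (timestamp, value)
        self.on_change()
        return entry_id

    def update(self, entry_id, timestamp, value):
        self.entries[entry_id] = (timestamp, value)
        self.on_change()

    def delete(self, entry_id):
        del self.entries[entry_id]
        self.on_change()

redraws = []
store = EntryStore(on_change=lambda: redraws.append(len(store.entries)))
eid = store.add("2020-04-01", 5.6)    # a blood-glucose entry, mmol/L
store.update(eid, "2020-04-01", 6.1)
store.delete(eid)
# the "graph" was redrawn once after each of the three operations
```

Room achieves the same effect declaratively: observers of a query are re-run whenever the underlying table changes.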

Methods

Future Work

Android Studio was used to develop the application DiabetEase in the Java programming language.

Undoubtedly, the application is by no means complete. While many of the base requirements can be finished after further time on implementation, there are many additional features which can be added. These range from connection to a cloud-based storage system to further development on how data entries are displayed, such as a tab system to separate entries by month, by week, etc.

Alongside this, the GraphView library created by Jonas Gehring was utilised to create a visual graph for data elements.

Acknowledgments Many thanks to David Lonie for the immense support and guidance throughout the project timeframe. Thanks also to John Isaacs for additional support given at key points during the project. Final thanks to Jonas Gehring for creating the GraphView library, and to Lynne and Ross whose struggles with Type 1 I have watched over the years inspired me to try and create a tool to help you overcome some of the many hurdles you have faced.

References

Yach, D., 2003. Introduction. In: Diabetes Atlas. s.l.:s.n., pp. 9-10.

El-Gayar, O., Timsina, P., Nawar, N. & Eid, W., 2013. Mobile Applications for Diabetes Self-Management: Status and Potential. Journal of Diabetes Science and Technology, 7(1), pp. 247-262.


23


STUDENT BIOGRAPHY

Fatima Ghanduri
Course: BSc (Hons) Computer Science
Determine a Robust Machine Learning approach for Credit Risk Analysis

In the current financial climate, borrowing methods such as mortgages and credit support are widely used. These algorithms will help lending organisations determine credit risk using the most efficient method. This project will benefit banking institutions by providing a visualisation that distinguishes how credit risk is affected by the occupation of the borrowing party. Machine learning methods provide a better fit for the non-linear relationships between the explanatory variables and default risk, and using these techniques to predict credit risk has been found to greatly improve accuracy.

24


Computer Science (Hons)

Determine a Robust Machine Learning approach for Credit Risk Analysis Fatima Ghanduri

Introduction

In the current financial climate, borrowing methods such as mortgages and credit support are widely used. These algorithms will help lending organisations determine credit risk using the most efficient method. This project will benefit banking institutions by providing a visualisation that distinguishes how credit risk is affected by the occupation of the borrowing party. Machine learning methods provide a better fit for the non-linear relationships between the explanatory variables and default risk, and using these techniques to predict credit risk has been found to greatly improve accuracy.

Project Aim

To find a robust machine learning algorithm for credit risk. Main techniques:
- Pass the processed entities through the machine and deep learning models, and use the resulting data for visualisation to determine the efficiency and viability of these models.
- These visualisation patterns are taken to be the results of this project.

Methodology

Supervisor: Nirmalie Wiratunga

Evaluation and Results

- Four model types were evaluated based on a Receiver Operating Characteristic (ROC) curve and a Precision-Recall Curve (PRC).
- A ROC curve is used in the interpretation of the risk factor as well as the efficiency and overall quality of the model.
- The PRC is a curve between precision and recall for various threshold values.
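As background to the AUC figures reported for each model: the ROC AUC equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal sketch of that computation, with made-up labels and scores (illustrative only, not the project's code):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive example receives a
    higher score than a random negative one (ties count half); this is
    equivalent to the area under the ROC curve."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
# One positive (0.4) is outranked by one negative (0.7),
# so 8 of the 9 positive-negative pairs are ordered correctly.
print(roc_auc(labels, scores))
```

An AUC of 0.5 corresponds to random ranking, which is why values such as 0.53 indicate little discriminative power while 0.71 is markedly better.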

1. Random Forest Model
The logistic linear AUC calculated for the RFM is 0.69, suggesting that no discrimination is in the model and thus the predicted credit risk is viable. However, this result is not strong enough, meaning that the predicted values are mostly false positives. Only a very small part of the predictions are true positives, making the model not very accurate.

4. Deep Learning
The final method used was a neural network consisting of three equal layers. The logistic linear AUC is 0.53, making the model discriminatory to an extent. Although the predictive accuracy is 81%, the PR curve has a smaller true positive value, so the results do not fall mostly in the zero class, giving roughly a one-in-two chance that the predicted result is correct. Thus, this is the least viable model to use.

2. Gradient Boosting
The gradient boosting logistic linear AUC value is 0.68, making this a model with no discrimination; however, the accuracy is lower than the RFM's. A smaller share of the predicted data falls in the true positive area, and a large majority is either false negative or false positive. In addition, the PR curve represents the reduction of accuracy over a recall period, where it starts to drop steeply. This model could perform better if more data were available to train it further.

3. Elastic Net
The best predictive model is the Elastic Net model. The logistic linear AUC value is 0.71, making the model non-discriminatory, with a larger percentage of the predictions in the true positive rate. Its PR curve is also much less steep, showing that the recalled data has a higher precision for a large portion of the predicted values. However, with a larger training data set the model could become more accurate as more training is implemented.

Conclusion

This project has successfully established how machine learning can be used to further efficiency in banking. Of the four models used, the Elastic Net proved the best in terms of the evaluation of the AUC and PR curves. The deep learning model was initially predicted to be the best model with no discrimination, due to its many layers, but the graphs above make it apparent that the model has more values in the false negative than in the false positive. A larger training set and different input layers may make a large difference in the logistic linear value.

The data collected for this research consisted of large clusters, and pre-processing needed to be implemented:
- Lexical analysis was used to create tokens of jobs using Levenshtein comparison.
- Similar industrial jobs were then placed in buckets using Locality Sensitive Hashing (LSH).
- To achieve LSH, shingling was used to convert the text to character k-grams.
- The Jaccard index was then used as a coefficient to find the similarity between sample sets from a database.
- These industry buckets were processed individually through the models.
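The shingling and Jaccard steps listed above can be sketched as follows (a minimal illustration with made-up job titles, not the project's code):

```python
def shingles(text, k=3):
    """Character k-grams of a job title, the representation hashed by LSH."""
    text = text.lower()
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    """Jaccard index |A & B| / |A | B|: the set-overlap coefficient that
    LSH approximates when bucketing similar job titles together."""
    return len(a & b) / len(a | b)

s1 = shingles("petroleum engineer")
s2 = shingles("petroleum engineers")
s3 = shingles("nurse")
# near-identical titles score close to 1; unrelated titles near 0
print(jaccard(s1, s2), jaccard(s1, s3))
```

LSH avoids comparing every pair of titles directly: titles whose shingle sets hash to the same bucket are likely to have a high Jaccard index, so only those candidates need checking.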

Acknowledgments I would like to express my deep gratitude to Professor Nirmalie Wiratunga, my research supervisor, for her patient guidance, enthusiastic encouragement and useful critiques of this research work. I would also like to thank my parents for their constant support through-out my four years of university. Lastly, a very sincere appreciation to Lee Robbie, Mark Scott-Kiddie and Calum McGuire for their encouragement throughout my final year.

References

AI in Banking Report: How Robots are Automating Finance. (n.d.). Retrieved April 17, 2020, from https://amp.businessinsider.com/8-5-2018-ai-in-banking-and-payments-report-2018-8

Tel: 07473931385 Linked-in: linkedin.com/in/fatima-ghanduri Email: fghanduri@protonmail.com

25


STUDENT BIOGRAPHY

Chloe Greenwood Course: BSc (Hons) Computer Science An e-Learning System Designed to Aid Users in Creating Alexa Skills Amazon Alexa and other voice assistants are becoming more frequently used, and interest in creating Alexa “skills” is becoming more common; however, many who are looking for resources to create said skills may struggle to find the information they’re looking for. As of January 2019, Amazon had sold over one hundred million Alexa devices (Bohn 2019). Amazon Alexa is a well-known voice assistant; with the Echo thriving in private spaces such as the home, it has the ability to accept many different tasks in multi-user interactions (Purington, Taft, Sannon, Bazarova & Taylor 2017).

26


27


STUDENT BIOGRAPHY

Scott Guy Course: BSc (Hons) Computer Science Using artificial intelligence to develop a vehicle convoy system Artificial intelligence is a huge part of the world today, with a huge amount of effort and resources going into self-driving vehicles. However, there are many issues with modern-day self-driving vehicles, the main one relating to the decision making of an AI and its lack of common sense: “What self-driving cars lack is the ability to know what is going on in someone’s head” (Joann Muller, 2019). One way to give an AI-controlled vehicle common sense is to allow a human to make the decisions related to the rules of the road and how the vehicles should move. This can be done by using a convoying system with the leading vehicle controlled by a human and the following vehicles moving based on the leader's speed and direction.

28


Using artificial intelligence to develop a vehicle convoy system Scott Guy & supervisor Dr Kit-ying Hui

Introduction Artificial intelligence is a huge part of the world today, with a huge amount of effort and resources going into self-driving vehicles. However, there are many issues with modern-day self-driving vehicles, the main one relating to the decision making of an AI and its lack of common sense. “What self-driving cars lack is the ability to know what is going on in someone's head” (Joann Muller, 2019).

One way to give an AI-controlled vehicle common sense is to allow a human to make the decisions related to the rules of the road and how the vehicles should move. This can be done by using a convoying system with the leading vehicle controlled by a human and the following vehicles moving based on the leader's speed and direction.

Project Aim The aim of this project is to create a reliable vehicle convoying system using artificial intelligence. This system could reduce the number of drivers on the road and in turn reduce the number of lives lost in vehicle accidents. It would also save many companies money when transporting multiple vehicles from one location to another.

Methods

Figures and Results

(Figure 2, Speed/Distance Sensor example)

Two different simulations have been developed, both using the same distance sensor but two different steering sensors. The distance sensor takes the centre of the leading vehicle and the following vehicle and calculates the difference between the two points as an input for the Q-learning, which produces the appropriate output. As can be seen in the figure above, with the ideal distance set at 1500 and the current distance at 1688, the AI is going “Forward” to try to correct this. The steering sensors work by calculating the position of the leading vehicle relative to the following vehicle. Ideally the leading vehicle needs to be in the middle of the following vehicle's field of view.

(Figure 3, Simulation 1, Angle calculation steering sensor)

The first steering sensor calculates the angle left or right of the centre of the following vehicle by using the centre coordinates of the leading vehicle and the direction the following vehicle is facing. That angle is then put through Q-learning which produces the output action.

The steering AI in both simulations gives out the correct actions to ensure that the leading vehicle is in the middle of the following vehicle's field of view, which is the main goal of the steering AI. The “wide sensor steering” (Figure 4), however, needs some processing time, which causes the performance of the system to suffer and run a lot slower than the “angle calculation steering” (Figure 3).

Acknowledgments Special thanks to my supervisor Dr. Kitying Hui for giving me support throughout this project in the form of pointing me in the right direction and suggesting ideas that helped me develop this project to this point. Another thanks to my peers that sat in on my meeting who have also helped me understand some concepts and techniques.

(Figure 1, Q-Learning Diagram)

For this project, two separate Q-learning networks were used to control the speed and the direction of the following vehicles. Three different sensors were used, responsible for producing the input information for the Q-learning network, which then produces the output action that the following vehicle will perform.
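A tabular Q-learning loop of the kind described can be sketched as follows. This is a minimal illustration, not the simulation's code: the state discretisation, action names, reward and hyperparameters are all assumptions, with only the ideal distance of 1500 and the example reading of 1688 taken from the text.

```python
import random
from collections import defaultdict

ACTIONS = ["Forward", "Maintain", "Brake"]   # assumed action set
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1        # learning rate, discount, exploration rate

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def discretise(distance, ideal=1500, band=100):
    """Reduce a raw distance-sensor reading to a small state space."""
    if distance < ideal - band:
        return "too_close"
    if distance > ideal + band:
        return "too_far"
    return "ok"

def choose_action(state):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One training step: the sensor reads 1688 against an ideal of 1500
# (the "too far" situation described above), and "Forward" closes the
# gap, so that action receives a positive reward.
state = discretise(1688)
update(state, "Forward", reward=1.0, next_state=discretise(1550))
```

Repeating the choose/observe/update step over many simulation frames drives the Q-values for each sensor state towards the action that best restores the ideal following distance.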

Conclusion

In conclusion, this project has fulfilled the basic requirements set out at the start: a vehicle fully controlled by the Q-learning network is able to follow a leading vehicle controlled by a human. The Q-learning network manages to detect when the following vehicle should speed up or slow down (Figure 2); however, maintaining the exact distance between vehicles is difficult for the simulation because of the small delay in the sensor detecting the leading vehicle's movement, leaving the follower unable to catch up to the ideal distance without the lead vehicle coming to a stop. The initial goal of controlling the exact distance between the two vehicles was not achieved due to the delay between the sensor detecting a change and the corresponding action needed to correct it.

(Figure 4, Simulation 2, Wide area steering sensor)

The second sensor scans the area in front of the following vehicle and separates it into three sections: left of the lead vehicle, right of it, and where the lead vehicle is. Since the lead vehicle needs to be in the middle of the field of view, the sensor compares the size of the green and blue sections to make sure the red section is in the middle. This information is then inserted into the Q-learning network.

References Joann Muller, 2019, Self-driving cars need to be better mind readers [online], available from: https://www.axios.com/self-driving-cars-humanbehavior-social-cues-34c50fbe-d802-48f1-a7cb75a3e2b40fb2.html

BSc (Hons) Computer Science

29


STUDENT BIOGRAPHY

Dominik Gyecsek Course: BSc (Hons) Computer Science The Open Data Map - Location-based Open Data Made Accessible for Everyone There is a large amount of location-based open data posted by councils and other governmental bodies in the United Kingdom and many other countries; however, it is mostly hard or almost impossible for ordinary people to view, and even if they are able to preview it, it might take a long time to process this information and find not just the closest location, but the most suitable one as well.

30


The Open Data Map - Location-based Open Data Made Accessible for Everyone Dominik Gyecsek & Dr David Corsar

(Figure: example map marker labels, including recycles bottles, recycles tins, recycles batteries, accessible access, free, car collision, significant delay, open 24/7, almost empty, 240 spaces, soccer field, 4 forecast index, play area, picnic tables seating)

Methods The Ionic framework is used to achieve native feel and appearance on both iOS and Android. Capacitor is used to access native functionalities of the devices. React.js is used to build reusable UI components. Node.js is used on the server (AWS EC2 Ubuntu instance) and PostgreSQL is used as a DBMS (AWS RDS).

Project Aim The primary aim of this project is to create a global (extendable to the whole world, but primarily focused on the United Kingdom) open data collection for geospatial datasets. Given the big differences between the ways different countries or even cities store and update these kinds of data (e.g. different file formats, data formatting and contained details), it is essential to handle the majority of the cases and to make it as easy and fast as possible for organisations to post datasets on the platform and keep them updated either internally or externally. Other than just coordinates and addresses, these datasets can contain large amounts of other data such as features (e.g. facilities or amenities), phone numbers, email addresses, websites and many other dataset-specific fields which vary greatly across datasets and cities, and at the moment there is not a single solution to easily explore these visually on a map or programmatically via a publicly accessible API. The secondary aim of this project is to allow everyone with a regular account to interact with these datasets and the locations they contain (e.g. report as inaccurate, rate, comment, share or like), request new data and even create new location-based open data under the CC0 licence, which would then be accessible to everyone else via the website, application and public API.

Figures and Results The original plan was to conduct two surveys, one targeted at councils and the other at users. Some councils have expressed interest but have not yet tested the platform or filled out the survey; the user survey, however, showed promising results. The survey was filled out by 24 people and, among other things, included 5 scenario-based tasks to find specific locations by category and feature on the Open Data Map and on another source deemed appropriate by the user. The chart below shows that 76% of the locations were found more easily on the Open Data Map; however, 10% of locations were not found either on the Open Data Map or from any other source, which means there is still room for improvement. Furthermore, 92% of the people agreed that the application or website is easy to use in general.

74 of the initial 80 requirements have been completed; most of the uncompleted ones were deemed not needed after further consideration, and two were not completed due to time constraints. Some of the completed ones include:
- Simple and powerful public API with advanced queries and map-based location search;
- CSV, JSON, XML, XLSX, KML, KMZ, Shape and GeoJSON file support;
- Editing datasets internally or periodically checking for updates from an external source;
- Rule-based categories, features and markers;
- Individual location creation, commenting, rating, reporting and sharing by all users;
- Insightful statistics for cities, countries, users, organisations, datasets and dataset content;
- Geospatial dataset discovery from CKAN;
- 22 added datasets, such as live car park data, roadworks, parks and pollution forecasts in the UK, by 10 authors with more than 20,000 locations.

Conclusion Looking at the survey results, it is apparent that the project has the potential to make location-based open data more accessible. However, the survey questions targeted data that had been added beforehand, and searching for data that had not been added would not have yielded the presented results; the fact that only 3 of the 90 contacted councils seemed interested in testing goes to show that this is not enough to make a change in this form, but all users can contribute by adding locations. Dataset creation can also be enabled for regular users with some changes to third-party service reliance.

Acknowledgments Special thanks to Dr David Corsar for supervising this project.

BSc (Hons) Computer Science

31


STUDENT BIOGRAPHY

Piotr Hass Course: BSc (Hons) Computer Science Analysis of Crypto Mining Malware: Using Artificial Intelligence to Recognise Crypto Malware Behaviours In the past years, cryptocurrencies have gained a lot of recognition and mining those digital currencies became just as popular. Mining is a big part of any crypto network: cryptocurrency miners provide their computing power as a bookkeeping service for different coins in order to be rewarded those coins for their service (Cointelegraph, 2019). With the rise of the popularity of cryptocurrencies and their value, there is also a rise of hackers that want to take advantage of regular users, using their machines to mine cryptocurrencies or steal their assets. This research will look into crypto malware that can affect anyone’s computer. The malware behaviour will be analysed using techniques such as static or dynamic analysis in order to see what makes it different from other types of malware.

32


Analysis of Crypto Mining Malware – Using Artificial Intelligence to Recognise Crypto Malware Behaviours. Piotr Hass & Hatem Ahriz

Introduction

Figures and Results

Conclusion

In the past years, cryptocurrencies have gained a lot of recognition and mining those digital currencies became just as popular. Mining is a big part of any crypto network. Cryptocurrency miners provide their computing power as a bookkeeping service for different coins in order to be rewarded those coins for their service (Cointelegraph, 2019).

In total, 100 malware samples were used to build the dataset. The malware types used for creating the dataset can be seen in Figure 1; 15 of the samples were crypto miners.

In conclusion, this project and its results have shown that crypto mining malware has different behaviours compared to other types of malware, and it was successful in creating a machine learning algorithm to distinguish this type of malware based on its behaviour.

With the rise of the popularity of cryptocurrencies and their value, there is also a rise of hackers that want to take advantage of regular users, using their machines to mine cryptocurrencies or steal their assets. This research will look into crypto-malware that can affect anyone's computer. The malware behaviour will be analysed using techniques such as static or dynamic analysis in order to see what makes it different from other types of malware.

Project Aim The aim of this project is to analyse crypto malware using dynamic and static analysis. Using A.I. and the behavioural data collected, the crypto mining malware will be compared to other malware in order to see what makes it different, and to create an A.I. model that recognises the behaviours of this type of malware.

Fig1. Malware Types.

Fig 2. Miner Types.

In Figure 2, we can see one of the results observed for crypto-malware: the types of miners that the malware deploys to mine cryptocurrencies. The miners observed were mainly Monero miners, but some Bitcoin and Ethereum miners were also found. The malware samples were put through machine learning algorithms to learn about their behaviours and to create an A.I. model that recognises crypto-malware. The models can be seen below in Figure 3. Fig 3. Machine Learning Algorithms.


Methods The two main techniques used for this project are static and dynamic analysis. Static analysis is used to disassemble the malware and view the source code in order to learn about it without executing it, while dynamic analysis is used to run the malware and learn about its behaviour and effects on the machine and system (Zimba 2018). This project uses a virtual sandbox to simplify the process of analysing the malware. The Cuckoo sandbox uses static and dynamic analysis to produce a report on the behaviours of the malware tested, and the results are then gathered in a dataset. Once all the malware is gathered, machine learning is used to learn the behaviours of the crypto mining malware in order to build a model that recognises this type of malware based on its behaviour.

Using the Cuckoo sandbox, I was able to analyse and extract the data from each piece of malware and create a 100-instance dataset on the behaviours of crypto and other malware types. This was done to compare the behaviours of crypto-malware to other malware in order to find out how differently it behaves. Using Python 3 and Orange 3, I created machine learning models in order to apply various artificial intelligence algorithms to the data. The results indicate that the most successful machine learning algorithm for distinguishing crypto mining malware behaviour is the artificial neural network, which was the most precise with a score of 100%. Next came the Naïve Bayes algorithms, the k-nearest-neighbour algorithm, random forest trees and decision trees. All of the algorithms were very successful in recognising crypto-malware, with scores not going lower than 0.99; this is most likely because crypto-malware has many behaviours that were not found in any other type of malware, such as deploying a coin miner on a user's personal computer.
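The comparison described above was built in Python 3 and Orange 3; a rough scikit-learn equivalent can be sketched as follows. The feature matrix here is synthetic (the real features came from Cuckoo behaviour reports), so the scores it produces are illustrative only, not the project's results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# 100 samples with 10 behaviour features; label 1 = crypto miner (15 samples),
# mirroring the dataset sizes mentioned in the text.
X = rng.normal(size=(100, 10))
y = np.zeros(100, dtype=int)
y[:15] = 1
X[:15, 0] += 5.0  # pretend miners show one strongly distinctive behaviour

# The same family of algorithms compared in the project.
models = {
    "neural_net": MLPClassifier(max_iter=2000, random_state=0),
    "naive_bayes": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=3),
    "random_forest": RandomForestClassifier(random_state=0),
}

# 5-fold cross-validated accuracy for each model.
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
```

Because the synthetic miner class is well separated on one feature, all four classifiers score highly, loosely echoing the near-perfect separation the project reports for its real behaviour features.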

Acknowledgments
Fig 4. Machine Learning Results.

I want to thank my supervisor Hatem Ahriz for the guidance, patience, encouragement and advice that he has provided me through the time of this project. I have been tremendously lucky to have a supervisor who promptly responded to my every question and cared about my work.

References In Figure 4, we can see the precision results of the machine learning algorithms: the neural network was the most precise algorithm in recognising crypto-mining malware, followed by the Naïve Bayes algorithms, and then the clustering and classifying algorithms such as k-nearest-neighbour, random forest trees and decision trees.

Cointelegraph, 2019. What is Cryptocurrency. Guide for Beginners. [online] Available from: https://cointelegraph.com/bitcoin-for-beginners/what-are-cryptocurrencies [Accessed 27 April 2020]
ZIMBA, A. et al., 2018. Crypto Mining Attacks in Information Systems: An Emerging Threat to Cyber Security. Journal of Computer Information Systems, pp. 1-12.

BSc (Hons) Computer Science

33


STUDENT BIOGRAPHY

Jonathan Hutson Course: BSc (Hons) Computer Science Asset management system with dual access levels for company employees and customers In many organisations, the physical assets an organisation possesses are the foundation for its ongoing success. The effective management of these assets is what is referred to as Asset Management and is essential to the organisation’s overall success (Frolov, Ma, Sun, & Bandara, 2010). Classification of asset management can often be separated into six “Whats”; therefore, it is proposed that a successfully implemented system should be able to answer these six questions (Canada, 2001): What do you own? What is it worth? What is the deferred maintenance? What is its condition? What is the remaining service life? What do you fix first?

34


Asset management system with dual access levels for company employees and customers

Jonathan Hutson & John Isaacs

Introduction In many organisations, the physical assets an organisation possesses are the foundation for its ongoing success. The effective management of these assets is what is referred to as Asset Management and is essential to the organisation’s overall success (Frolov, Ma, Sun, & Bandara, 2010). Classification of asset management can often be separated into six “Whats”; therefore, it is proposed that a successfully implemented system should be able to answer these six questions (Canada, 2001): What do you own? What is it worth? What is the deferred maintenance? What is its condition? What is the remaining service life? What do you fix first?

Figures and Results

The solution shown in the figures above showcases the minimalistic design of the front end, allowing users to seamlessly navigate through what could end up being quite a substantial database. All filtering options are available, including text search functionality to refine the results and only show entries that include the data in the user's search parameter. In the example below we can see how a user might group all the assets by who has checked them out, allowing members of the organisation to see at a glance what assets the client has and exactly what location they have them in.

The project successfully demonstrates the functional advantages associated with different control levels for asset information. Engineers at the organisation are able to update information on the fly through the use of the QR code scanner, and clients are able to access information about the assets in their possession.

Acknowledgments I would like to thank in particular my supervisor John Isaacs for being on hand whenever I needed assistance and helping give the project some direction. I would also like to thank Len Robertson of Norco Energy without whom this project wouldn’t be possible as he provided me with all the information needed to successfully implement this for Norco.

Project Aim The aim of this project was to allow not only select members of the organisation to view their assets, but also to provide a login facility for clients where they can check what assets they're in possession of and flag any issues with the asset's owner.

Methods As the purpose of the application is to allow both the organisation and its clients to log in to the system, each user is supplied with a username and password by an admin, and the level of control they have depends on their role in either organisation.

The project was developed using an Amazon Web Services backend and the React.js development library. Every asset is stored in a DynamoDB database and displayed on the webpage using MaterialTable, a React data table component.

Conclusion

References
1) Frolov, Ma, Sun & Bandara, 2010 – Identifying core functions of Asset Management
2) Canada, 2001 – Innovations in Urban Infrastructure

35


STUDENT BIOGRAPHY

Dominykas Laukys Course: BSc (Hons) Computer Science Investigating Graphs and their Ability to Confirm Commercial Business Information Today it is possible to see that social media has had a large and positive effect on businesses, to the extent that in the United States alone, 82% of businesses were using social media for marketing as of January 2020 (Digital 2020), and as more social media websites are created, businesses will move to advertise on all of the available platforms. This increases the issue of maintaining business information which, if found incorrect, can be potentially devastating to a business, especially a local one, as up to 73% of consumers could lose trust in a business if the information listed is found to be incorrect (SearchEngineWatch 2014). It is therefore important that business listings online contain the correct information for consumers to contact the business.

36


Investigating Graphs and their ability to confirm commercial business information. Dominykas Laukys & Carlos Moreno Garcia

Introduction Today it is possible to see that social media has had a large and positive effect on businesses, to the extent that in the United States alone, 82% of businesses were using social media for marketing as of January 2020 (Digital 2020), and as more social media websites are created, businesses will move to advertise on all of the available platforms. This increases the issue of maintaining business information which, if found incorrect, can be potentially devastating to a business, especially a local one, as up to 73% of consumers could lose trust in a business if the information listed is found to be incorrect (SearchEngineWatch 2014). It is therefore important that business listings online contain the correct information for consumers to contact the business.

Project Aim The general aim of this project is to investigate whether it is possible to confirm business details found online and ensure that they can be verified against the correct information, while using graphs to represent the information and help identify false business details.

Methods

Figures and Results

Conclusion

Figure (Fully Processed Data Sample)

Figure: Initial Dataset

The solution seen in the figure below was achieved from the initial data shown in the figure above. It was achieved with a combined use of Python and Neo4j, and highlights a potential solution to the problem of identifying false information. The initial figures above show that it is possible to confirm the business information and identify the false information just by viewing the initial data gathered; however, this data is poorly represented, and with a larger database this approach could introduce human error, reducing its effectiveness. The combination of Python and Neo4j was somewhat successful for finding the accuracy: string analysis within Python helped predict the accuracy of the email for a particular business, although it was only able to predict the domain accuracy of the email and would struggle with analysis of the local part (the name before the @). The inclusion of Neo4j helps the user to visualise the contact information as well as all the potential contact information that has been gathered.
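The domain-accuracy check described above can be sketched with Python's standard difflib. The helper name, scoring scheme and example business here are illustrative assumptions, not the project's actual code.

```python
from difflib import SequenceMatcher

def domain_accuracy(email, business_name, known_domains=None):
    """Score how plausibly an email's domain belongs to a business by
    comparing the domain against the business name and, optionally,
    against domains already confirmed for that business."""
    # take the part after the last "@", then the first label of the domain
    domain = email.rsplit("@", 1)[-1].split(".")[0].lower()
    name = business_name.lower().replace(" ", "")
    candidates = [name] + [d.split(".")[0].lower() for d in (known_domains or [])]
    # best similarity ratio (0.0 to 1.0) across all candidates
    return max(SequenceMatcher(None, domain, c).ratio() for c in candidates)

# A matching domain scores high, an unrelated one scores lower.
good = domain_accuracy("info@polarisimage.com", "Polaris Image")
bad = domain_accuracy("info@randomhost.net", "Polaris Image")
```

As the text notes, a check like this says nothing about the local part of the address; scoring "info" versus "sales" versus a person's name would need a separate naming-scheme analysis.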

The challenges involved with this project have demonstrated the issues of creating a solution to detect incorrect business information. The solution could be improved by adding more advanced string analysis to predict more accurately names that fall outside the naming scheme this solution uses; however, this was not required for the purposes of this project. This project successfully demonstrates that it is possible to confirm online business information, and shows how graphs can be used to represent and help identify false business details.

Acknowledgments This project was undertaken for Polaris Image.

I would like to extend my gratitude to Ross Bennett of Polaris Image for allowing me to undertake this project. I would also like to extend my gratitude to Carlos Moreno Garcia for his guidance and support throughout the project.

References Figure: Workflow of the solution

The solution was implemented using a crawler (a bot that is used to collect information online) which collected a CSV (Comma-Separated Values) file of different business listings and their contact information. This was then processed using Python and put into a Neo4j graph database to help visualise the results.

Figure (Singular node of the email)

Digital (2020). [Online] Available at: https://wearesocial.com/us/digital-2020-us
SearchEngineWatch (2014). [Online] Available at: https://www.searchenginewatch.com/2014/04/08/73-lose-trust-in-brands-due-to-inaccurate-local-business-listings-survey/

37


STUDENT BIOGRAPHY

Cosmin Mare Course: BSc (Hons) Computer Science Teaching Companion We live in a society driven by technology, where most individuals have access to an unlimited amount of activities and information all throughout the day, made possible, in most cases, by the smartphone in their possession. This is also the case in most classrooms, and, unfortunately, it brings with it negative aspects such as: distractions and interruptions, cheating, disconnection from face-to-face activities and even cyberbullying. The way of life introduced by smartphones and the access to the internet in general has also affected our attention span. According to research, the average attention span in humans went from 12 seconds, in 2000, to 8.25 seconds, in 2015, which makes simply banning smartphones in academic institutions not ideal in tackling this problem.

38


Teaching Companion Connecting Teachers and Students through Technology Cosmin Mare & Mark Zarb

Introduction We live in a society driven by technology, where most individuals have access to an unlimited amount of activities and information all throughout the day, made possible, in most cases, by the smartphone in their possession. This is also the case in most classrooms, and, unfortunately, it brings with it negative aspects such as: distractions and interruptions, cheating, disconnection from face-to-face activities and even cyberbullying. The way of life introduced by smartphones and the access to the internet in general has also affected our attention span. According to research, the average attention span in humans went from 12 seconds, in 2000, to 8.25 seconds, in 2015, which makes simply banning smartphones in academic institutions not ideal in tackling this problem.

Future Features

Features

Real Time Polls – The interactive polling feature of the Teaching Companion will allow teachers to quickly get answers from the whole body of their class, without having to count raised hands or having to compromise by only hearing the answers of a limited handful of students. Polls are also the lightest way of quickly re-engaging the students’ attention to the lesson, as they encourage everyone to respond through the predefined possible answers and appeal to curiosity.

•  PowerPoint Presentations integration
•  Teaching Analytics
•  Specialized and Custom Add-ons and Tools
•  Improved GUI and UX
•  Accessibility support

All these features can be easily implemented thanks to a well designed foundation, allowing broad expandability and scalability.

Conclusion

Image from dice.com

Image from wyzowl.com

Project Aim And here is where the Teaching Companion intervenes, with the goal of turning smartphones in classrooms from portals of distraction to means of teacher-student interaction and classroom experience interactivity. The project also aims to tackle the social pressure students feel when giving input or requiring further assistance, which may stop them from doing it in the first place.

Methods The method of reaching this project’s goals is to develop a web application that fundamentally allows teachers to create and manage teaching companionship sessions to which the students present in the classroom can connect and use to interact. A good example would be the ability to ask questions and endorse questions asked by their colleagues. The question can then be shown as anonymous to the other students, removing social pressure, and the number of endorsements can give the teacher insight into whether a misunderstanding happened on the student’s or the teacher’s part.

Interactive Questions – Asking questions can be quite anxiety-provoking for students, especially in large classrooms. It can also inconvenience the teacher if they are interrupted in the middle of explaining a concept or an idea. The teacher also cannot know whether the answer to a question asked by a single student would benefit the whole class or not. This feature tackles all these challenges, allowing students to ask questions anytime, even anonymously, enabling other students to endorse questions that they would like answered as well, and making it easier for the teacher to prioritise popular questions without having to be interrupted when someone does ask a question. In future versions this feature could also allow comments on questions, encouraging discussions on the matter, or other students answering their colleagues.

In conclusion, the Teaching Companion aims to combat the negative effects of technology in the classroom by harnessing it and using it to power an enhanced learning experience, facilitating teacher-student interaction, incorporating digital platforms into lessons and supplementing them with online, on-demand resources. Materialising in the form of a web app, and designed with maximum accessibility in mind, it is a powerful but intuitive tool that can be introduced in almost any classroom in the world providing the ability of easily meeting custom needs where required and evolving to always keep up with the continuously changing standards of the industry and society in general.

References

•  oxfordlearning.com/should-cell-phones-be-allowed-classrooms/
•  wyzowl.com/human-attention-span/
•  angular.io/start
•  docs.microsoft.com/en-us/azure/service-fabric/service-fabric-overview
•  docs.microsoft.com/en-gb/aspnet/core/tutorials/signalr
•  auth0.com/docs

39


STUDENT BIOGRAPHY

Ben Martin Course: BSc (Hons) Computer Science Improving Data Quality and Completeness within a Dataset This project was originally designed to improve data quality and completeness with data from a relational database. Due to complications that made this impossible to carry out, the project was adapted to improving data quality and completeness within a dataset. The programming language used is Python, with Jupyter notebooks, and Anaconda was installed to integrate the different libraries needed to fulfil the requirements. The main aspects shown are using the k-Nearest Neighbour machine learning algorithm to improve the dataset, along with using Python pandas to organise and manipulate the data using data frames.

40


Improving Data Quality and Completeness within a Dataset Ben Martin & Stewart Massie

Introduction

Figures and Results

This project was originally designed to improve data quality and completeness with data from a relational database. Due to complications that made this impossible to carry out, the project was adapted to improving data quality and completeness within a dataset. The graph shows the effect that an increasing k value has on accuracy: the more strain is placed on the kNN algorithm, the more varied the results. Overall the approach is successful and shows that, over time, kNN settles to a stable, reliable accuracy.

The programming language used is Python, with Jupyter notebooks, and Anaconda was installed to integrate the different libraries needed to fulfil the requirements. The main aspects shown are using the k-Nearest Neighbour machine learning algorithm to improve the dataset, along with using Python pandas to organise and manipulate the data using data frames.

Conclusion

Project Aim The aim of this project is to improve the data quality and completeness of a dataset. This is to be done using machine learning tools such as k-Nearest Neighbour, while also applying other artificial intelligence techniques to help improve the quality and completeness of the data. The programming language used is Python.

Methods The main method used in this project is kNN, a supervised machine learning algorithm that can help improve the data quality and completeness of a variety of (classification) datasets. Standardisation of the variables has been carried out, along with using the Scikit-learn library, a Python library that handles classification data (among other kinds) and provides the kNN algorithm. This algorithm deals with improving the data quality. For the data completeness, an implementation searches through the data to find missing values. Ideally this would be fully integrated into the AI, but that aspect could not be completed, so it has been implemented manually for the results. The imports used are: pandas, for data manipulation (specifically data frames in this project); matplotlib, to create plots and graphs; and numpy, providing mathematical computation on arrays and matrices. As stated before, the Scikit-learn library is also in use.
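A minimal sketch of the kNN classification step described above, using Scikit-learn with standardised variables. The function name, dataset shape and split are assumptions for illustration, not the project's actual code.

```python
# Sketch: standardise features, fit kNN, return held-out accuracy.
# All names here are illustrative, not taken from the project.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

def knn_accuracy(X, y, k):
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    scaler = StandardScaler().fit(X_train)      # standardise the variables
    model = KNeighborsClassifier(n_neighbors=k)
    model.fit(scaler.transform(X_train), y_train)
    return model.score(scaler.transform(X_test), y_test)
```

Running this for a range of k values yields the accuracy-versus-k curves the poster's graphs describe.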

The two graphs shown above show the effect that the k value has on the overall accuracy of the predictions used to improve data quality. Graph 1 shows the error rates compared against different values of k, produced using the elbow method. The error rate decreases as the value of k decreases; there is an optimum result, but it is not guaranteed. What is guaranteed is that the error rate flattens off and does not move much after a certain point, hence the bar graph shows the accuracies for k values of 1 and 17. The bigger the dataset, the higher the value of k would have to be for the kNN algorithm to make effective and efficient predictions. The program can successfully identify missing values and will replace them with one of a few options that the user can select from: the average value of the column, removing the row, backfilling the data, or the default of entering 0. Only NaN values are replaced.
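The missing-value options listed above map naturally onto pandas operations. The function and strategy names below are illustrative, not the project's own:

```python
# Sketch of the user-selectable missing-value strategies described above:
# column mean, dropping the row, backfilling, or a default of 0 for NaN.
import pandas as pd

def fill_missing(df, strategy="mean"):
    if strategy == "mean":
        # Replace NaN with the average value of each numeric column
        return df.fillna(df.mean(numeric_only=True))
    if strategy == "drop":
        return df.dropna()            # remove rows containing NaN
    if strategy == "backfill":
        return df.bfill()             # copy the next valid value backwards
    return df.fillna(0)               # default: enter 0 for NaN values
```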

At its current state, the project does not meet all of the requirements initially set out, due to personal coding issues, but the requirements it does meet are as follows: identifying missing values and giving the user the option to change them to a suitable equivalent, and using the k-Nearest Neighbour machine learning algorithm to improve the quality of the data predictions. Different packages and libraries were successfully used to produce the results. Overall, the project still needs some work before the final submission, as some requirements are not yet fully working or filled out. Proper testing will need to be carried out to ensure that all of the requirements are fulfilled.

Acknowledgments A big thank you to all the university staff present within the school for creating a fun environment to carry out such a project. I would also like to thank all of my friends and family for their support especially with the situation the world is currently in.

References

All graphs and figures shown came from testing within the Python Jupyter notebooks.


41


STUDENT BIOGRAPHY

Matthew McArthur Course: BSc (Hons) Computer Science Encouraging Cyber Security Best Practices Through Reward Programs There are apps out there which encourage security best practices to be learned (e.g. Cyber Smart) or which reward users for performing certain tasks (e.g. Sweatcoin), but this app should combine the two, giving users an opportunity to perform cyber security related tasks to earn and learn as well as implementing best practices on their device. Some of the reasons that cyber security awareness programs fail are listed as users not understanding what security awareness really is, a lack of engaging, appropriate materials, and unreasonable expectations (Bada, Sasse & Nurse, 2015). This project will aim to combat these factors. “The idea that serious games could be used to change behaviour and help individuals role-play scenarios which would make them more security-aware is proposed and, it seems like serious games might be a good way to educate users” (Hendrix, Al-Sherbaz & Bloom, 2016).

42


Encouraging Cyber Security Best Practices Through Reward Programs

Rewarding Users for Playing Serious Games and Completing Cyber Security Related Actions Matthew McArthur

Introduction There are apps out there which encourage security best practices to be learned (e.g. Cyber Smart) or reward users for performing certain tasks (e.g. Sweatcoin), but this app should combine the two, giving the users an opportunity to perform cyber security-related tasks to earn and learn as well as implementing best practices on their device. Some of the reasons that cyber security awareness programs fail are listed as users not understanding what security awareness really is, lack of engaging, appropriate materials and unreasonable expectations (Bada, Sasse & Nurse, 2015). This project will aim to combat these factors. “The idea that serious games could be used to change behaviour and help individuals role-play scenarios which would make them more security-aware is proposed and, […], it seems like serious games might be a good way to educate users” (Hendrix, Al-Sherbaz & Bloom, 2016).

Project Aim The main aim of this project is to create a mobile application which can be used regularly by people to increase the security of their mobile phones and better their knowledge and awareness of mobile security best practices. There is a second aim to incentivise this and reward the users for the actions they take to secure their mobile phone.

Methods

Figures and Results

Figure 2. The password game implemented on the app.

Conclusion

Figure 3. The security requirements Implemented on the app.

Above (figure 2) is a screenshot of one of the games implemented in the application. Its aim is to encourage strong password use, guiding users through what they need to do to create a strong password and rewarding them when they do. The app does not store what they type here, to ensure that users’ passwords cannot be leaked. The purpose of the second game (not pictured) is to have users examine phishing emails, learning how to identify them (by checking email addresses and URL links) before getting feedback on how they did. The security requirements that the app searches for (figure 3) include features and settings that users sometimes have not implemented, or not realised they can implement, on their phones. By rewarding users for changing these, hopefully they will be encouraged to better secure their phones.
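A check in the spirit of the password game described above might look like the following. The app's actual password rules are not given on the poster, so the criteria below are assumptions for illustration, and the sketch is in Python rather than the app's Android code.

```python
# Illustrative password-strength feedback: return the rules a candidate
# password still fails (criteria are assumptions, not the app's rules).
import re

def password_feedback(password):
    checks = {
        "at least 12 characters": len(password) >= 12,
        "contains a digit": bool(re.search(r"\d", password)),
        "contains an upper-case letter": bool(re.search(r"[A-Z]", password)),
        "contains a symbol": bool(re.search(r"[^A-Za-z0-9]", password)),
    }
    return [rule for rule, passed in checks.items() if not passed]
```

As in the game, nothing is stored: the function only reports which rules remain unmet.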


A literature review was performed which identified a gap in the market for a security-based rewards app. Android Studio was then used to create a mobile application for Android. The application was linked to a database implemented on Firebase.


Figure 4. The number of reports per year to the NCSC attributed to the user (CybSafe, 2019)

Acknowledgments Special thanks to my supervisor Hatem Ahriz, whose guidance especially in the early stages of this project ensured that it got to where it did. In addition, I would like to thank my wife Claire for helping me in every moment of this year, from the support and advice offered in the hardest moments to the joy we will feel after the final submission.

References HENDRIX, M., AL-SHERBAZ, A. and BLOOM, V., 2016. Game Based Cyber Security Training: are Serious Games suitable for cyber security training? International Journal of Serious Games, 3(1), pp. 53-61.


Figure 1. The software used to implement the app. Firebase (left) and Android Studio (right).

In the next two weeks research will be done into whether the app can encourage users to be more secure. It is expected that people will initially secure their phone to a higher level than before, but that due to a lack of pull back to the app, users probably would not return to it regularly or over a period of time. That would mean implementing features that draw users back regularly, such as daily rewards or regularly updating the list of security requirements. As the stats above show, the average retention rate of mobile apps is low; this needs to be considered and combatted if possible.

Figure 5. The number of reports to the NCSC of each main type of attack (CybSafe, 2019)

As seen in the graphs above, the number of reports to the NCSC by businesses in Britain of cyber attacks which could be attributed to end users grew by over 50% from 2017 to 2019 (figure 4). This took the form of over 1000 reports of phishing attacks (figure 5). Clearly, if users can be trained to correctly navigate these scenarios, the number of attacks and breaches can be reduced.

BADA, M., SASSE, A. and NURSE, J.R.C., 2015. Cyber Security Awareness Campaigns: Why do they fail to change behaviour? In: International Conference on Cyber Security for Sustainable Society, pp. 118-131. CYBSAFE, 2019. [online]. Human Error to Blame for 9 in 10 UK Cyber Breaches In 2019. Available from: https://www.cybsafe.com/press-releases/human-error-to-blame-for-9-in-10-uk-cyber-data-breaches-in-2019/ [Accessed 27-04-2020]

Contact Information Email: matthewmac1@hotmail.com

BSc. Hons Computer Science

43


STUDENT BIOGRAPHY

Calum McGuire Course: BSc (Hons) Computer Science Using Bluetooth Low Energy to Create a Mobile Application Tracker Daily, it is estimated that each person spends 10 minutes of their time looking for lost items. Averaging out to 2.5 days in a year, this time could be used more valuably. Essential items like keys, wallets, glasses or paperwork that are misplaced must be located, but this task can be arduous and time-consuming. Technology has been developed to try and help this predicament, with applications with this sole purpose littering the Android Google Play Store and Apple's App Store. The most prominent of these apps (Tile, Trackr, Chipolo) have been inaccurate and unreliable. This has negatively impacted the demand for item tracker applications, as they are seen to be unworthy of the sometimes hefty price tag that is attached. “Ping” aims to solve this problem by using a simplistic, easy to follow design to provide the user with an accurate and reliable locator in the form of an application that will save them considerable time.

44


Using Bluetooth Low Energy to create a mobile application tracker. Student: Calum McGuire

Introduction

Figures and Results

Daily, it is estimated that each person spends 10 minutes of their time looking for lost items. Averaging out to 2.5 days in a year, this time could be used more valuably. Essential items like keys, wallets, glasses or paperwork that are misplaced must be located, but this task can be arduous and time-consuming. Technology has been developed to try and help this predicament, with applications with this sole purpose littering the Android Google Play Store and Apple's App Store. The most prominent of these apps (Tile, Trackr, Chipolo) have been inaccurate and unreliable. This has negatively impacted the demand for item tracker applications, as they are seen to be unworthy of the sometimes hefty price tag that is attached. “Ping” aims to solve this problem by using a simplistic, easy to follow design to provide the user with an accurate and reliable locator in the form of an application that will save them considerable time.

While the application does perform correctly, a considerable hindrance to its functionality is calculating the distance using the RSSI. The RSSI can be affected if the signal passes through any surface, thus obtaining a skewed reading. It is, therefore, better to use the calculated distance as an estimate and not an accurate reading.

Methods The mobile application was created using the Android Studio IDE. A reading is gained from a Bluetooth Low Energy tracker, with the mobile device taking the Universally Unique Identifier (UUID) and Received Signal Strength Indicator (RSSI). From the RSSI, the distance is calculated using the following calculation.
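The poster's own formula is not reproduced here. As an illustration only, a commonly used log-distance path-loss model for converting an RSSI reading to an approximate distance looks like the following; `measured_power` (the expected RSSI at 1 metre) and the environmental factor `n` are assumed constants, not values taken from the project.

```python
# Log-distance path-loss model (illustrative, not the project's formula).
# measured_power: RSSI expected at 1 m; n: environmental attenuation factor.
def estimate_distance(rssi, measured_power=-69, n=2.0):
    """Rough distance in metres from an RSSI reading (an estimate only)."""
    return 10 ** ((measured_power - rssi) / (10 * n))
```

Consistent with the conclusion above, such a value should be treated as an estimate: obstacles between the devices skew the RSSI and hence the computed distance.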

The device will also gather the longitude and latitude coordinates and place a marker on Google Maps, using the Google Maps API, indicating where the device was last spotted.

On initial start-up the application will start searching for all available devices. A list of devices will appear which all can be paired with.

Conclusion

Project Aim The project aims to produce a mobile application item tracker that will use a Bluetooth Low Energy tracker to send relevant data to the user’s mobile phone and present it in a readable layout. The application will calculate the distance between the mobile and the tracker, and when the user is within a certain perimeter will use GPS to find the current longitude and latitude to place a marker on a map.

Supervisor: David Corsar

Once paired to the tracker, the RSSI is used to calculate the distance between the two devices, updated every second. If the device gets farther away from the tracker, this is visualised by a box lighting up red; closer together, by lighting up green. If the device is within 0.5 metres, it lights up gold to indicate that the user is close to, or has found, the tracker.

Ping would be an incredibly useful application to have readily available on mobile devices, as it would save people valuable time in their day-to-day lives. While the project does work, it would potentially be a good idea to look into a better medium for tracking everyday objects, as Bluetooth Low Energy has proven to be unreliable for intricate detail.

Acknowledgments I want to thank my honours supervisor, David Corsar of Robert Gordon University, for his continued guidance throughout my final year. I would also like to thank Lee Robbie, Mark Scott-Kiddie, Fatima El-Ghanduri, Craig Cargill and Charlie Marsh for their support which I would not have been able to do without. Finally, I would like to thank my family for their continual motivation for the duration of my University life.

References https://www.computersciencezone.org/virtual-lost-found/

Data is taken from each device, including its name, UUID and the longitude and latitude of its last known location. This is stored inside an SQLite database.
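The device store described above might be sketched as follows. The table and column names are illustrative, and Python's `sqlite3` module stands in here for Android's SQLite API.

```python
# Illustrative SQLite store for tracked devices: name, UUID and the
# longitude/latitude of the last known location (schema is hypothetical).
import sqlite3

def save_device(conn, name, uuid, lat, lon):
    conn.execute(
        """CREATE TABLE IF NOT EXISTS devices
           (uuid TEXT PRIMARY KEY, name TEXT, lat REAL, lon REAL)"""
    )
    # Upsert so a repeated sighting updates the last known location
    conn.execute(
        "INSERT INTO devices VALUES (?, ?, ?, ?) "
        "ON CONFLICT(uuid) DO UPDATE SET lat=excluded.lat, lon=excluded.lon",
        (uuid, name, lat, lon),
    )
    conn.commit()
```

Keying on the UUID means each tracker has one row whose coordinates are overwritten at every sighting, matching the "last known location" behaviour described.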

BSc (Hons) Computer Science 45


STUDENT BIOGRAPHY

Paul McGuire Course: BSc (Hons) Computer Science Education and Crime in Scotland: Finding the Link There is a noted link between crime and education (Lochner & Moretti, 2004). Crime has been shown to be increasing, with some crimes at the highest level on record, while educational attainment levels have been declining over the last decade (National Statistics publication for Scotland, 2019). This research explores the link between crime and education in the Aberdeen council area and Scotland. Key research questions were: What are the correlations that exist between education and crime data in Scotland? How can contemporary data science techniques draw insight from the existing data? The following open source data published by the Scottish government was used: educational attainment by council; student destination by council; and crime statistics by council. Using data analytics can give new depth to our understanding of the ways in which educational attainment and crime levels are linked.

46


Education and Crime in Scotland: Finding the Link Paul J McGuire Supervisor: David Lonie

Introduction

Figures and Results

There is a noted link between crime and education (Lochner & Moretti, 2004). Crime has been shown to be increasing, with some crimes at the highest level on record, while educational attainment levels have been declining over the last decade (National Statistics publication for Scotland, 2019). This research explores the link between crime and education in the Aberdeen council area and Scotland. Key research questions were: ●  What are the correlations that exist between education and crime data in Scotland? ●  How can contemporary data science techniques draw insight from the existing data? The following open source data published by the Scottish government was used: ●  Educational Attainment by Council ●  Student destination by Council ●  Crime statistics by Council. Using data analytics can give new depth to our understanding of the ways in which educational attainment and crime levels are linked.

Project Aim

●  Using statistical programming packages to examine the relationship between crime and education and report the results ●  Compare the statistical programming languages R and Python in performing this analysis


Diagram 1: Correlation matrix of educational attainment, crime rate and positive destinations

Diagram 4: Random Forest tree

Diagram 2: Highers and Advanced Highers

Conclusion

The use of data science techniques can draw new insights from existing data: students in Aberdeen are less likely to move on to higher education after school than students in Aberdeenshire. The evidence also shows that students in rural areas are more likely to achieve at least one SCQF level 6 qualification; however, they do not follow the trend of moving on to higher education after school. Areas with lower educational attainment were more likely to experience a higher average level of crime. The results of the research have helped provide further evidence of the link between education and crime.

Diagram 3: Crime rate

Amongst the interesting findings were that: •  The majority of councils in the ‘top ten for student Higher certificate attainment’ (SCQF level 6) are those with the smallest populations. •  Students are less likely to achieve a Higher certificate (SCQF level 6) in cities with a higher crime rate. •  Students who study within the councils with the 5 lowest crime rates are more likely to achieve a Higher certificate (SCQF level 6). •  Students in Aberdeen are less likely to achieve a Higher certificate or be accepted onto a university course compared to students in Aberdeenshire. •  Random Forest achieved an accuracy of 62.86%. The technology comparison found: •  Python is easier for a novice data analyst and is easily accessible for those familiar with programming. •  R is more specialised and requires a deeper understanding of statistics.

Acknowledgments

A special thanks to David Lonie for the data analytics support and tutelage throughout this project.

References

Methods

Processing and analysis of the data was conducted using Python and R. Anaconda's Jupyter Notebook and the Spyder IDE were used for Python, and the RStudio IDE for R. The project was broken down into several phases: ●  Data cleaning (pandas and mice) ●  Data exploration (pandas and DataExplorer) ●  Data summarisation (NumPy and dplyr) ●  K-means clustering (scikit-learn and R stats) ●  Linear regression (scikit-learn and R) ●  Random Forest (scikit-learn)
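The clustering and Random Forest phases listed above can be sketched with scikit-learn on synthetic data. The function name and data are illustrative; the project's real council-level data is not reproduced here.

```python
# Sketch of the K-means clustering and Random Forest phases (scikit-learn),
# with the held-out classification accuracy the poster reports for the forest.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def cluster_and_classify(X, y, n_clusters=3):
    # Unsupervised phase: group observations (e.g. council areas) into clusters
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    # Supervised phase: fit a Random Forest and score it on held-out data
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    forest = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    return clusters, forest.score(X_test, y_test)
```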

Lochner, L. & Moretti, E., 2004. The Effect of Education on Crime: Evidence from Prison Inmates, Arrests, and Self-Reports. American Economic Review, pp. 155-189.

Diagram 4: Clusters of educational attainment and crime, labelled by council area

Thorburn, M., 2018. Progressive education parallels? A critical comparison of early 20th century America and early 21st century Scotland. International Journal of Educational Research, Volume 89, pp. 103-109.

Attainment and leavers’ destinations: “attainment data” & “student data”. https://www2.gov.scot/Topics/Statistics/Browse/School-Education/leavedestla

Police Scotland recorded and detected crimes: “crime rate local area”. https://www2.gov.scot/Topics/Statistics/Browse/Crime-Justice/Datasets/RecCrime/RC201718tab

47


STUDENT BIOGRAPHY

Andrew McInnes Course: BSc (Hons) Computer Science The creation of an optimal Mathematics web application, aimed at dyslexic first-year pupils, using researched teaching and presentation techniques. Every year in August, pupils from a variety of primary schools begin at a secondary school. Many of these pupils have previously been taught at different feeder primary schools, these schools could have a variety of teaching paces and therefore pupils can be at different ability levels in key subjects such as Mathematics. Secondary school teachers are then tasked with bringing all the new intake of pupils up to a similar level to be able to continue their education. Pupils, themselves, face a harder challenge of coming to a new environment, with new classroom structure, teachers and expectations. The stress of this transition can be worsened still if a pupil suffers from a learning difficulty. The use of a dynamic web-based application, which targets pupils who have just started their first year of secondary school and takes appropriate steps in its design to support learning difficulties, most notably, dyslexia, could provide an effective tool to push on and monitor pupils’ abilities to be able to achieve the teacher’s task while equally also providing adequate support to the pupils.

48


The creation of an optimal Mathematics web application, aimed at dyslexic first-year pupils, using researched teaching and presentation techniques.

Andrew McInnes, Supervised by David Lonie

Introduction

Every year in August, pupils from a variety of primary schools begin at a secondary school. Many of these pupils have previously been taught at different feeder primary schools, these schools could have a variety of teaching paces and therefore pupils can be at different ability levels in key subjects such as Mathematics. Secondary school teachers are then tasked with bringing all the new intake of pupils up to a similar level to be able to continue their education. Pupils, themselves, face a harder challenge of coming to a new environment, with new classroom structure, teachers and expectations. The stress of this transition can be worsened still if a pupil suffers from a learning difficulty. The use of a dynamic web-based application, which targets pupils who have just started their first year of secondary school and takes appropriate steps in its design to support learning difficulties, most notably, dyslexia, could provide an effective tool to push on and monitor pupils’ abilities to be able to achieve the teacher’s task while equally also providing adequate support to the pupils.

Project Aim

Methods

After considering researched techniques, it became clear which components would be critical in the development of this optimal teaching application. The interface had to be highly configurable to ensure that it could cater to the specific needs of each individual. If, despite the previously mentioned measures to create a clear interface, the user is still confused by the content, then a speech synthesizer could be used to read to the user. As well as making the consumption of the teaching material as smooth as possible, the application must use gamification in an effort to engage pupils and keep their interest in the subject high. The use of simple yet exciting games, posing smaller problems for the user to solve, has been found to allow a different approach to teaching whilst ensuring pupils have an enjoyable experience, reducing the anxiety they feel towards the subject (Faghihi et al, 2017).

Figures and Results

The completed application, Mathemassist, consists of a Home page, an About page, a Chapter Overview for pupils which leads to each chapter, a Preferences page, and a Pupil Overview for teachers which leads to each individual pupil’s performance. The About page provides detailed instructions on the functionality and uses of the application. This gives better understanding and familiarity not only to the pupils but also to the teachers, which makes use of this technology in the classroom environment more likely and in turn allows them to see the benefit in such usage (Mundy et al, 2012, Vannatta & Nancy, 2014, Ruggiero & Mong, 2015).

Once a teacher has signed up, they can distribute their unique code to their class members to use while signing up, which places those pupils in the teacher’s class and allows the teacher to keep track of their progress. For pupils, the Preferences screen gives the option of changing the interface using a combination of components to best suit their needs; these include font size, font family, the spacing between letters and words, font colour and the background colour. Once set by the user, the values are inserted into the back-end database and fetched whenever the user enters a teaching screen, allowing them to dynamically change the CSS code.
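The preference-driven styling described above could be sketched as follows. The preference keys and defaults are assumptions, and the real application presumably builds its CSS in the web front end rather than in Python; this only illustrates turning stored values into a style rule.

```python
# Illustrative: build a CSS rule from stored user preferences
# (keys and defaults are hypothetical, not the app's schema).
def prefs_to_css(prefs):
    return (
        ".content {"
        f" font-size: {prefs.get('font_size', 16)}px;"
        f" font-family: {prefs.get('font_family', 'Arial')};"
        f" letter-spacing: {prefs.get('letter_spacing', 1)}px;"
        f" color: {prefs.get('font_colour', '#000000')};"
        f" background-color: {prefs.get('background_colour', '#ffffff')};"
        " }"
    )
```

Any preference the user has not set falls back to a default, so a partially filled record still produces a complete rule.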

The aim of this project is to present educational content via a computer application which can spur on pupils’ interest and motivation in Mathematics. The computer application must help to maximise teaching effectiveness and improve the confidence of pupils, especially dyslexic individuals, whilst taking steps to ensure that the consumption of the content does not cause confusion. By implementing multiple researched components to an accessible web application, the hope is to provide a useful and complete learning tool.

Conclusion

Mathematics can be the source of anxiety for many pupils, but with an application that makes the user comfortable with its interface and that can be an enjoyable alternative to classroom teachings; pupils’ confidence with the subject can be greatly increased. As technology use in a classroom environment becomes more popular, computer applications such as this one add an extra tool to a teacher’s arsenal to provide a complete and captivating educational experience for pupils.

Acknowledgments

Special thanks to my supervisor David Lonie for his guidance throughout my project, providing me with advice and helping me evolve this project into the finished product. In addition to this, I wish to extend my gratitude to Simon Fogiel and the Maths department at Robert Gordon’s College for supplying the curriculum.

References

Preferences Screen

Technologies Used: Drag and Drop game

Each chapter has a simple game based on the relevant material. Each game was implemented to specifically accommodate the nature of the chapter it represented; with the use of three unique game templates being adapted throughout the application. In addition to these engineered games, every chapter had a speech synthesizer to assist the user in their learning. The content being read out needed to be adapted slightly from the chapter to improve its delivery to the user.

- Faghihi et al. (2017), ‘How to Apply Gamification Techniques to Design a Gaming Environment for Algebra Concepts’ - Mundy et al. (2012), ‘Teacher’s Perceptions of Technology Use in the Schools’ - Vannatta, R.A. & Nancy, F (2014), ‘Teacher Dispositions as Predictors of Classroom Technology Use’ - Ruggiero, D & Mong, C.J. (2015), ‘The Teacher Technology Integration Experience: Practice and Reflection in the Classroom’

Quiz Game

Picture Game

Computer Science

49


STUDENT BIOGRAPHY

Daniel Mercik Course: BSc (Hons) Computer Science Solving Travel and Logistic Problems Using Mobile Applications and Open Data Open Data is presented as a process that leads to an increase in economic activity and the enhancement of the social and political wellbeing of citizens. The reality is more complicated, as the process is hindered by barriers to entry that exist as part of the current open data climate. A lack of knowledge about open data, and the lack of purpose for already existing data on government websites, is a multifaceted problem that requires many adjustments in both public culture and knowledge of open data, and a drive from the software development community to alleviate these barriers and allow open data to reach its full potential.

50


Solving Travel and Logistic Problems Using Mobile Applications And Open Data Daniel Mercik, Stewart Massie

Introduction

Figures and Results

Conclusions and Evaluation

The project’s initial literature review showed that there are obstacles in using Open Data that prevent it from proving beneficial in social and economic areas. As an attempt to ameliorate this, an Android app was developed that could potentially prove useful in planning trips, as it displays all the information on planned roadworks that is published by Traffic Scotland (example in the app screenshots below). This information includes the name of the stretch of road being worked on, as well as the planned dates and the extent of travel disruption expected.
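Reading a roadworks feed of the kind described above might look like the following. The RSS structure shown is illustrative, and here the feed is parsed from a string rather than fetched from Traffic Scotland's servers; the real app consumes the feed in its Android code.

```python
# Illustrative parser for a planned-roadworks RSS feed: extract the title
# and description of each <item> (feed layout is an assumption).
import xml.etree.ElementTree as ET

def parse_roadworks(rss_text):
    root = ET.fromstring(rss_text)
    return [
        {"title": item.findtext("title"),
         "description": item.findtext("description")}
        for item in root.iter("item")
    ]
```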

The mobile application successfully demonstrates the potential of open data being used to create functioning apps that help solve real problems like that of travel planning; the usability of this app can be called into question, however, as no direct evaluation involving potential users was carried out. The questionnaire directly shows that the public supports the policy of open data whilst remaining critical of its implementation in the UK.

Open Data is presented as a process that leads to an increase in economic activity and the enhancement of the social and political well-being of citizens. The reality is more complicated, as the process is hindered by barriers to entry that exist as part of the current open data climate. A lack of knowledge about open data, and the lack of purpose for already existing data on government websites, is a multifaceted problem that requires many adjustments in both public culture and knowledge of open data, and a drive from the software development community to alleviate these barriers and allow open data to reach its full potential.

Project Aim The initial aim of this project was to create a software solution that uses Open Data to assist with short- to long-term planning of travel and logistics. Due to a change of circumstances, the project took on a secondary aim of investigating public perception of Open Data.

Methods An application was developed for Android OS that uses Open Data provided by Traffic Scotland in RSS format. Alongside this, a questionnaire was distributed to gauge the public perception of Open Data. The original plan involved getting people to use a prototype of the app, followed by a questionnaire tailored to gathering feedback on the app.
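The app consumes this feed on Android; purely as an illustration of the parsing step (the sample items and field layout below are invented, not the actual contents of Traffic Scotland's feed), an RSS 2.0 document can be reduced to title/description pairs like so:

```python
import xml.etree.ElementTree as ET

# A minimal sample in the shape of an RSS 2.0 roadworks feed -- illustrative
# only; the real Traffic Scotland feed has its own items and field layout.
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Planned Roadworks</title>
    <item>
      <title>A90 Dundee to Aberdeen</title>
      <description>Lane closure, delays likely. 01/06/2020 - 14/06/2020</description>
    </item>
    <item>
      <title>M8 Junction 15</title>
      <description>Overnight resurfacing. 03/06/2020 - 05/06/2020</description>
    </item>
  </channel>
</rss>"""

def parse_roadworks(rss_text):
    """Extract (title, description) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"), item.findtext("description"))
            for item in root.iter("item")]

works = parse_roadworks(SAMPLE_RSS)
for title, desc in works:
    print(title, "-", desc)
```

Each pair maps directly onto one entry in the app's roadworks list.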

Unfortunately, the planned evaluation of the app proved impossible due to the global pandemic. In order to gather related feedback on the larger subject of Open Data instead, an online questionnaire was distributed, which was answered by 20 respondents. The results of the questionnaire convey the image of Open Data as an unfulfilled ideal in the eyes of the public. All respondents aware of Open Data (10 out of 20) believe that the UK government has not done enough to advertise its Open Data scheme (figure above, 1 – not enough, 5 – enough), with 9 of those 10 believing that open data could be used to improve their local council’s decision making and make it a more open and transparent process. Of the 10 respondents unaware of open data, 9 believed that governments should publish the data they gather, further supporting the concept of open data.

For the project to be fully successful in enhancing the open data process, the mobile application should ideally be further improved with feedback mechanisms that would allow users to notify the data provider of issues. Also, an evaluation of the application’s usability should be carried out.

Acknowledgments Special thanks to the project supervisor Stewart Massie for guidance, and to my friends and family for answering my questionnaire and helping with its distribution. I would also like to thank the government institutions that publish data under the Open Data programme for their contribution to a more transparent and open government.

References Questionnaire response summary available upon request.

COMPUTER SCIENCE

51


STUDENT BIOGRAPHY

Stuart Miller Course: BSc (Hons) Computer Science Using Gamification to Engage Users in Community Reporting Previous attempts at gathering information from members of a community have struggled with engagement, particularly from certain demographics. It is not uncommon to see an overrepresentation of respondents who are retired, likely because they have fewer responsibilities (Renn, et al., 1993). As such, any information gathering system should aim to be as engaging as possible and intuitive to the user. One way that this might be achieved is through the use of Gamification, the addition of game-like elements into non-game activities (Deterding, et al., 2011). This includes the use of game design elements such as scores and leaderboards. When applied, gamified systems are used to reward the user for their efforts in the hope that the user will be enticed to continue. They can also be used to inspire competition, using the user’s innate desire to compete. This project explores the use of gamification elements in an information gathering application.

<#> 52


Using Gamification to Engage Users in Community Reporting

Stuart Miller – s.miller5@rgu.ac.uk - Dr Carlos Moreno-Garcia - c.moreno-garcia@rgu.ac.uk

Introduction

Figures

Figure: “Which system motivated you most?” (survey results)

Previous attempts at gathering information from members of a community have struggled with engagement, particularly from certain demographics. It is not uncommon to see an overrepresentation of respondents who are retired, likely because they have fewer responsibilities (Renn, et al., 1993). As such, any information gathering system should aim to be as engaging as possible and intuitive to the user. One way that this might be achieved is through the use of Gamification, the addition of game-like elements into non-game activities (Deterding, et al., 2011). This includes the use of game design elements such as scores and leaderboards. When applied, gamified systems are used to reward the user for their efforts in the hope that the user will be enticed to continue. They can also be used to inspire competition, using the user’s innate desire to compete. This project explores the use of gamification elements in an information gathering application.

The aim of this project is to find a solution to increase community engagement whilst increasing the efficiency and accuracy of the information provided. This will be demonstrated with an Android application, “LocalEyes”, designed to collate community defect reports, e.g. potholes, faulty streetlights, or vandalism. Different gamification strategies will be employed with the goal of determining which is most effective. The application will give a very efficient way for local councils to collect information. The inclusion of GPS data will provide an advantage over traditional systems. Data submitted through the application could be used by local councils to inform their decisions on repairs.

The application’s main screen shows all of the report issues in the area surrounding the user.

A survey was distributed amongst the user testing group. The following graph shows the gamification method that each user found most effective.

To explore this, an Android application was developed to compare the effectiveness of competing gamification strategies. The app was developed using Google’s Firebase as a backend to handle the storage and documentation of user contributions. This project compares the use of scores and leaderboards with a badge-based system. The score system rewards users based on the quality of their contributions, whereas badges are awarded for consistent usage of the app.
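In the real app, Firebase stores the contributions; purely as an illustrative in-memory sketch (the user names and point values below are invented), the score-and-leaderboard logic amounts to:

```python
from collections import defaultdict

class Leaderboard:
    """Minimal score/leaderboard logic: points per contribution, ranked list."""
    def __init__(self):
        self.scores = defaultdict(int)

    def record_contribution(self, user, quality_points):
        # The score rewards the quality of a report, as in the LocalEyes design.
        self.scores[user] += quality_points

    def ranking(self):
        # Highest score first; ties broken alphabetically for stable output.
        return sorted(self.scores.items(), key=lambda kv: (-kv[1], kv[0]))

board = Leaderboard()
board.record_contribution("alice", 10)   # e.g. a pothole report with a photo
board.record_contribution("bob", 5)
board.record_contribution("alice", 5)
print(board.ranking())  # alice ranks first with 15 points
```

The badge system would layer on top of this, awarding badges when a user's contribution count crosses a threshold.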

Figure: survey of the user testing group (5 responses) — Leaderboard/Score 80%, Badges 20%.

The leaderboard ranks users by their score, with more contributions leading to a higher score. Some level of moderation would be necessary in this section to stop spam.

The badges section shows the name and graphic for each badge. The user is informed how to earn each badge so that they can actively pursue it.

Conclusion

The user feedback suggests that leaderboards were the most effective gamification element employed by the application. This suggests the importance of competition in getting users to continue engaging with the application. Users reported a strong desire to score higher than others on the leaderboard, leading to this result. The badges system may have been more effective with both a larger number of more varied badges and a longer testing period to allow users more time to actively pursue unlocking them. This system could also have inspired competition if users were able to see the badges earned by their friends. The project leaves a great deal of room for future iteration and improvement; for example, the users who were surveyed all expressed that they would be more interested in using the application if the council were to provide small monetary rewards or gift cards to those who contribute.

Acknowledgements

I’d like to thank my supervisor Carlos Moreno-Garcia for his great patience whilst keeping this project moving forward. Also thanks to Marianthi Leon and Caroline Hood for providing their own insights and view points. Thanks to Paul Joy for his support with this poster. And finally thanks goes to my Family for putting up with me in the later stages of this project.

References Renn, O. et al., 1993. Public participation in decision making: A three-step procedure. Policy Sciences, 26(3), pp. 189–214. Deterding, S., Dixon, D., Khaled, R. & Nacke, L., 2011. From Game Design Elements to Gamefulness: Defining “Gamification”. ACM.

53


STUDENT BIOGRAPHY

Jehanzeb Mobarik Course: BSc (Hons) Computer Science Automatic Classification of Pneumonia Using Deep Learning Chest X-rays are one of the most popular medical imaging techniques used to look for abnormalities within a patient (WHO, 2001). One of these abnormalities is pneumonia-infected lungs, which appear on the X-ray as obscure white spots. Radiologists diagnose chest X-rays; however, over the past few years there has been a drop in the number of radiologists, leading to increasing backlogs for the National Health Service (NHS). With the advent of public datasets and compute power, a real-time diagnosis tool for X-rays showing either healthy or pneumonia-infected lungs is needed today more than ever.

<#> 54


Automatic Classification of Pneumonia Using Deep Learning Jehanzeb Mobarik & Dr Eyad Elyan

Introduction Chest X-rays are one of the most popular medical imaging techniques used to look for abnormalities within a patient (WHO, 2001). One of these abnormalities is pneumonia-infected lungs, which appear on the X-ray as obscure white spots. Radiologists diagnose chest X-rays; however, over the past few years there has been a drop in the number of radiologists, leading to increasing backlogs for the National Health Service (NHS). With the advent of public datasets and compute power, a real-time diagnosis tool for X-rays showing either healthy or pneumonia-infected lungs is needed today more than ever.

Project Aim This project aims to implement a pneumonia-classification pipeline using various deep learning techniques. It also aims to apply techniques such as data augmentation to deal with class imbalance, and transfer learning to help with the challenges that come with a limited dataset.

Figures and Results

Conclusion

Figure 2: Confusion matrix before (left) and after (right) data augmentation

Figure 3: Example of data augmentation

By running several experiments and fine-tuning hyper-parameters, our initial model of 3 convolutional layers and 3 fully connected layers achieved an accuracy of 77% on the test set. However, the model suffered from low specificity and precision, as the dataset was imbalanced with normal X-rays under-represented. From the confusion matrix in figure 2, it can be observed that the model is highly biased towards the positive class, resulting in a high false positive count. To combat this under-representation, data augmentation is used to increase the number of normal X-rays via affine transformations. Figure 3 shows a case of augmentation where a single patient X-ray can be rotated by various degrees to create new samples. Our model was then trained on the original and augmented images, which resulted in accuracy increasing to 90%, with both precision and specificity increasing. Figure 2 shows a confusion matrix (right) of the model evaluated on the test set, showing fewer false positives and an increase in false negatives.
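The real augmentation applies arbitrary small-angle affine rotations with an image library; as a much-simplified stand-in, a pure-Python 90-degree rotation of a nested-list image shows the basic idea of generating several new samples from one original:

```python
def rotate90(image):
    """Rotate a row-major 2D image 90 degrees clockwise."""
    return [list(row) for row in zip(*image[::-1])]

def augment(image, copies=3):
    """Generate successively rotated copies of one image, as in basic
    rotation-based data augmentation (simplified to 90-degree steps)."""
    out, current = [], image
    for _ in range(copies):
        current = rotate90(current)
        out.append(current)
    return out

tiny = [[1, 2],
        [3, 4]]
for variant in augment(tiny):
    print(variant)
```

Each rotated copy is added to the under-represented class so the model sees a more balanced training set.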

Methods

In this project, we provided an overview of the deep learning techniques applied on medical images to diagnose pneumonia in chest X-rays. Different CNN models were trained and tested on chest X-rays where it was observed that novel methods such as transfer learning via the VGG16 model performed worse compared to a much simpler model with only 3 convolutional layers. We also demonstrated that techniques such as data augmentation to help with class imbalance greatly helped in reducing the number of false positives. Ultimately this project proved there is a place where deep learning can be utilised in healthcare to automate diagnosis. In future work we hope to build a generalised classification pipeline on more than one pathology.

Acknowledgments I would like to thank my supervisor, Dr Eyad Elyan, for his supervision during this project. Without his guidance and encouragement this project would not have been possible. I would also like to thank my family for their support during this project.

References

Figure 1: Methods of deep learning

The dataset contained various sizes of images due to the different devices used for scanning, which proved challenging since all images need to be scaled to the same size in order to classify with a Convolutional Neural Network (CNN). Different methods of using CNNs are illustrated in figure 1. The CNN algorithm is one of the well-known deep learning techniques used to extract features automatically without human assistance (Krizhevsky et al., 2012).

Figure 4: VGG16 Architecture

Table 1: Summary of Algorithm Performance

A common approach to overcoming limited dataset size and long training time is to utilize transfer learning. This approach allowed us to use a pretrained VGG16 model, shown in figure 4, whose convolutional layers are trained to extract low- to high-level features of an image. We adapted the VGG16 model to work on X-ray images by freezing the convolutional layers and training the fully connected layers, which allows us to take advantage of VGG16’s feature extraction without training from scratch (Simonyan and Zisserman, 2014).

We evaluated the performance of each of the algorithms on accuracy and F1-score, which is the harmonic mean of precision and recall. Three different algorithms were compared, where it was noted that transfer learning produced sub-optimal scores compared to a smaller network. The table above shows a CNN without data augmentation producing a higher accuracy and F1-score than the transfer learning approach. Finally, the same CNN model with data augmentation outperformed all other algorithms on accuracy and F1-score.
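All of the quoted metrics follow directly from confusion-matrix counts; a minimal sketch (the counts below are invented for illustration, not the project's actual results):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, specificity and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, specificity, f1

# Invented counts for illustration only.
acc, prec, rec, spec, f1 = metrics(tp=80, fp=20, tn=60, fn=10)
print(f"accuracy={acc:.2f} precision={prec:.2f} specificity={spec:.2f} F1={f1:.2f}")
```

A high false-positive count, as in the pre-augmentation model, drags down precision and specificity even when accuracy looks respectable.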

Simonyan, K. and Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097–1105). WHO, 2001. Standardization of interpretation of chest radiographs for the diagnosis of pneumonia in children.

More Information Email: j.mobarik@rgu.ac.uk

Computer Science 55


STUDENT BIOGRAPHY

Joe Murray Course: BSc (Hons) Computer Science A Decentralized Sports Betting Application on the Ethereum Blockchain Blockchains are a family of technologies which typically share three aspects: a historical ledger of transactions, conducted in a proprietary currency, based upon shared consensus of decentralized network member nodes. Ethereum is a blockchain which can run computer operations across its member nodes; in essence, it provides a shared computer. Software can be stored on the blockchain as a decentralized application, known as a smart contract. A smart contract can be interacted with by blockchain users via transactions, which are essentially calls to the methods of the application. Smart contracts and the transactions thereupon are fully transparent, meaning their code can be viewed in full by the user. This provides a degree of honesty and integrity – the user is fully aware of the precise implications of a transaction, and no unexpected outcomes can occur. This is a desirable advantage of smart contracts over traditional applications for many consumer purposes.

<#> 56


A Decentralized Sports Betting Application on the Ethereum Blockchain Discovering and discussing some of the unique design concerns and constraints of decentralized Ethereum applications Joe Murray – Dr Roger McDermott

Introduction Blockchains are a family of technologies which typically share three aspects: a historical ledger of transactions, conducted in a proprietary currency, based upon shared consensus of decentralized network member nodes. Ethereum is a blockchain which can run computer operations across its member nodes; in essence, it provides a shared computer. Software can be stored on the blockchain as a decentralized application, known as a smart contract. A smart contract can be interacted with by blockchain users via transactions, which are essentially calls to the methods of the application. Smart contracts and the transactions thereupon are fully transparent, meaning their code can be viewed in full by the user. This provides a degree of honesty and integrity – the user is fully aware of the precise implications of a transaction, and no unexpected outcomes can occur. This is a desirable advantage of smart contracts over traditional applications for many consumer purposes.

Project Aim

Figures and Results

An application was successfully created which compiles and runs on the simulated Ethereum node chain network. The application constructs contests with data retrieved from an online API. Users can place bets on an outcome of these contests, supplying amounts of Ether currency as a wager. The application pays out winnings depending on the outcome of the contest.

The project demonstrates that it is perfectly possible to develop functional decentralized blockchain applications. There are, however, various factors which are not found in traditional application development. These factors are unique to blockchain/Ethereum smart contract development. As blockchain continues to grow in popularity and market share, it is recommended that developers familiarise themselves with the unique constraints found in the field of blockchain development. This may further facilitate the adoption of decentralised applications in the wider information technology space, with all the potential advantages they bring.

Acknowledgments

The aim of the project is to develop and deploy a decentralized application using Ethereum blockchain technology.

The author would like to thank Dr Roger McDermott, whose limitless patience withstood every assault.

In doing so we will discover and highlight some of the particular design constraints which are unique to blockchain/Ethereum development.

Methods The project was developed using a proprietary programming language Solidity, in the Remix IDE. Solidity is a C-like language created for Ethereum dapp development. Remix is a browser-based IDE which compiles Solidity code. It permits the testing of dapps using a simulated testing blockchain. It also provides for the use of many in-built development plugins.

Conclusion

A number of unique considerations were discovered, such as:
- Transactions made by users to the smart contract require payment in the form of the currency gas. The more complex the operations, the more expensive they are, so it is vital to keep code as simple as possible.
- Strings in Solidity are not a class as in Java, so typical string methods such as concatenation and comparison do not exist. Workarounds or the use of third-party utilities are required.
- Due to the nature of blockchains, decentralized applications are required to be deterministic, i.e. to reliably produce the same results regardless of context. This renders calls to web APIs impossible without the use of a third-party oracle utility.

The author also wishes to extend his gratitude to the lecturers and staff of the School of Computing Science and Digital Media at Robert Gordon University for their aid on innumerable occasions over the course of the last five years. Finally, to my lovely assistant Stella, who has seen me through many tough times.

References - LEE,W., 2019. Beginning Ethereum Smart Contracts, New York, NY: Apress. - DANNEN, C., 2017. Introducing Ethereum and Solidity, New York, NY: Apress - MOHANTY, D., 2018. Ethereum for Architects and Developers, New York, NY: Apress

BSc Computer Science

57


STUDENT BIOGRAPHY

Callum Norris Course: BSc (Hons) Computer Science Analysing the concerns of students transitioning into higher education Transition can be defined as the internal process which occurs when students move to an unfamiliar situation while adjusting to higher education. One scholar refers to educational transition as fluid, and as a journey which starts and ends and starts again¹, implying that it is a dynamic and ever-changing concept. The importance of transition to students and academic institutions cannot be understated, and there is no doubt that transition offers considerable challenges to all parties involved²; the mismatch in expectations and lack of preparation may mean that many prospective students find the transition from secondary education to higher education difficult.

<#> 58


Analysing the concerns of students transitioning into higher education Student: Callum Norris | Supervisor: Dr. Mark Zarb

Introduction

Figures and Results

Conclusion

Transition can be defined as the internal process which occurs when students move to an unfamiliar situation while adjusting to higher education. One scholar refers to educational transition as fluid, and as a journey which starts and ends and starts again¹, implying that it is a dynamic and ever-changing concept. The importance of transition to students and academic institutions cannot be understated, and there is no doubt that transition offers considerable challenges to all parties involved²; the mismatch in expectations and lack of preparation may mean that many prospective students find the transition from secondary education to higher education difficult.

Through the use of RStudio I have been able to analyse two datasets, one providing qualitative data and the other quantitative data, and ascertain what the key areas of concern are for prospective students, as shown in figures 1–4. With this new information I have created a website (figure 5) that addresses these key concerns and offers advice to students regarding them, in an attempt to mitigate these concerns and encourage them regarding transition.

In conclusion, this project has discovered what exactly the primary concerns of students transitioning into higher education are, and has created a resource in order to alleviate these concerns through information provided in the form of a website, to smooth their transition into higher education.

Project Aim I intend to research the primary concerns held by prospective computing students about transitioning into university education and propose solutions that would help mitigate these perceived roadblocks and allow prospective university students to enjoy a smoother transition.

Further Work

fig. 1

fig. 3

fig. 4

Acknowledgments I would like to thank, in particular, my honours project supervisor, Dr Mark Zarb for his weekly meetings which provided great direction and wisdom relating to my project and also for his provision of the two datasets I have been analysing throughout. I would also like to thank my girlfriend, Marion, for her continued support of my studies throughout these trying times that we are living in.

Tools

References

The RStudio software package has been used alongside the datasets provided to me, which I have manipulated to allow for efficient analysis. I have normalised the data through renaming, as well as the removal or substitution of null values, and enumerated the quantitative dataset so that it can be operated on mathematically.
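The analysis itself was done in RStudio; as an analogous sketch only, the same cleaning steps (renaming, null handling, enumeration) might look like this in Python, with invented survey rows and column names:

```python
# Invented column names and response scale, purely to illustrate the
# renaming / null-removal / enumeration steps described above.
RENAMES = {"Q1_resp": "concern_level"}
SCALE = {"not at all": 1, "somewhat": 2, "very": 3}  # enumerate answers

raw = [
    {"Q1_resp": "very"},
    {"Q1_resp": None},        # null value to be removed
    {"Q1_resp": "somewhat"},
]

def clean(rows):
    out = []
    for row in rows:
        renamed = {RENAMES.get(k, k): v for k, v in row.items()}
        if renamed["concern_level"] is None:
            continue  # drop nulls (substitution could be done here instead)
        renamed["concern_level"] = SCALE[renamed["concern_level"]]
        out.append(renamed)
    return out

print(clean(raw))  # [{'concern_level': 3}, {'concern_level': 2}]
```

Once enumerated, the responses can be averaged, ranked and plotted like any other numeric data.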

fig. 2

I believe that further work could be done through the surveying of more prospective students in order to obtain a larger dataset to analyse and address, thus improving the depth of knowledge. A more advanced statistics library could also be used to create more vivid and attractive figures to widen the appeal to students.

1 - ROSS, S. and GRAY, J., 2005. Transitions and re-engagement through second chance education. Australian Educational Researcher, 32(3), pp. 103–140. 2 - BRIGGS, A.R.J., CLARK, J. and HALL, I., 2012. Building bridges: Understanding student transition to university. Quality in Higher Education, 18(1), pp. 3–21.

fig. 5

BSc (Hons) Computer Science

59


STUDENT BIOGRAPHY

Zander Orvis Course: BSc (Hons) Computer Science Utilising Neural Networks to Create a Self-Driving Car Self-driving cars have been a subject of fascination for decades and have recently seen a massive boost in interest (Anderson, et al., 2014). However, there are lofty expectations set for self-driving cars, both from the public and from investors, leading to rapid, aggressive testing that has caused several fatalities (Lyon, 2019). Currently, self-driving vehicles utilise neural networks for their control systems, and understanding how these work will highlight why the technology is still limited, and why self-driving cars are still not widespread.

<#> 60


Utilising Neural Networks to Create a Self-Driving Car Alexander Orvis & Kit-ying Hui

Introduction

Self-driving cars have been a subject of fascination for decades and have recently seen a massive boost in interest (Anderson, et al., 2014). However, there are lofty expectations set for self-driving cars, both from the public and from investors, leading to rapid, aggressive testing that has caused several fatalities (Lyon, 2019). Currently, self-driving vehicles utilise neural networks for their control systems, and understanding how these work will highlight why the technology is still limited, and why self-driving cars are still not widespread.

Project Aim The aim of the project is to create a self-driving car that utilises neural networks to control its movements. The implementation of this network and the car should meet the minimum specifications laid out in the requirements.

Figures and Results

Conclusion

A Problem with Uncertainty


Track 1

Each of the neural networks is of similar design, but with varying inputs and hidden nodes. The angle inputs give the angle of the path in degrees, as separate left and right inputs that are never active together, so that the car knows which direction the path turns.


Track 2

Track 3


A sample of the training data used for the navigation network; the line separates the recorded inputs from the recorded outputs. The 4 input values and 4 output values match the order they are shown, top to bottom, on the diagram.

The two designs were implemented to reasonable success, with both cars utilising a total of 11 sensor readings, which proved to be adequate for navigating the three unseen testing tracks. Although the cars showed good performance, they were still imperfect and would make mistakes. In order to test their performance, each car made timed laps around the track whilst the mistakes they made were recorded. Certain mistakes, like not stopping at a red light, incurred a time penalty, but time was stopped if a car got stuck and restarted when it was moving again.

Methods

Car and Obstacles The hybrid network car on the training track, a traffic light and a road node are in the foreground, whilst several obstacles are in the background

For the scope of this project, many features typically seen on real cars were simplified, such as not having a camera for the car to see thus not needing any complex image processing. With this in mind, two designs were created, one using a single network, and one that used a hybrid of two networks that each handle different tasks. Both designs were trained with supervised learning methods that required the collection of training data by manually driving the car. Implementation was achieved by simulating a car in the Unity 3D Game Engine.
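The exact network sizes and trained weights are not reproduced here; as a sketch only, with invented weights, a forward pass of a small fully connected network matching the 4-input/4-output shape of the recorded training data could look like:

```python
import math

def forward(inputs, w_hidden, w_out):
    """One forward pass of a tiny fully connected net with sigmoid activations."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    hidden = [sigmoid(sum(i * w for i, w in zip(inputs, ws))) for ws in w_hidden]
    return [sigmoid(sum(h * w for h, w in zip(hidden, ws))) for ws in w_out]

# Invented weights: 4 sensor inputs -> 3 hidden nodes -> 4 control outputs,
# mirroring the 4-in/4-out shape of the training data sample above.
w_hidden = [[0.5, -0.2, 0.1, 0.3]] * 3
w_out = [[0.4, 0.4, 0.4]] * 4
sensors = [0.9, 0.1, 0.0, 0.5]   # e.g. path angle and distances (illustrative)
controls = forward(sensors, w_hidden, w_out)
print([round(c, 3) for c in controls])
```

Supervised training then adjusts the weights so that, for each recorded sensor reading, the outputs move towards the driver's recorded controls.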

Visualisation of External Sensors Red is the obstacle sensors, detecting distance to the obstacle, Yellow is the node path sensor, detecting the angle and distance to the next path node, and Green is the traffic light sensor, detecting the distance to either a red or green traffic light

Looking at the results, both cars showed comparable performance in time, although there is a clear difference in the number of mistakes made. The single network car made far more mistakes than the hybrid, which also explains why the times are so similar, as the hybrid was far more cautious driving around obstacles, slowing its time, especially on test track 3, which contained many obstacles. However, ideally the cars would be safer and make fewer mistakes, rather than be faster. The single network struggled to perform tasks such as stopping at the red light, which can be attributed to its more complex and difficult-to-train network, compared to the specialised hybrid networks.

In a situation like this, the car gets confused and will end up crashing, unless specifically trained to choose a single direction, which can cause problems

The hybrid design showed clear advantages: • Better driving performance • Far faster to train • Separate networks allowed for the training of a specific function However, the limiting factor with the hybrid was the ‘transition’ between the two networks, which could be improved. Although the final result worked, it had some crucial flaws that limited its performance, the biggest of which is that it cannot deal with uncertain or unseen situations. Additional training can help with this, but this is a problem faced by even the most advanced self-driving cars. Further work would include adding more obstacles, such as pedestrians, creating proper roads with boundaries the car must stay within, and improving the functionality of some sensors.

Acknowledgments I’d like to thank Kit-ying Hui for the guidance he has given throughout the project, especially for inspiring the idea of the hybrid design.

References Anderson, J. M. et al., 2014. Chapter Four: Brief History and Current State of Autonomous Vehicles. In: Autonomous Vehicle Technology: A Guide for Policymakers . s.l.:RAND Corporation, pp. 55-74.

Training Track

Lyon, P., 2019. Why The Rush? Self-Driving Cars Still Have Long Way To Go Before Safe Integration. [Online] Available at: https://www.forbes.com/ [Accessed 4 November 2019].

Computer Science

61


STUDENT BIOGRAPHY

Nyameye Otoo Course: BSc (Hons) Computer Science Can “Smart” Clothing Provide a Novel Method of Addressing Environmental Issues? This project seeks to explore awareness of Particulate Matter (particles under 2.5ug/m3 and particles under 10ug/m3, called PM2.5 and PM10 respectively). PM can be extremely damaging to the environment as well as to the human body, but isn’t often discussed. The project aims to supplement the current strategies used to track and store PM data, as well as introduce the concept of Smart Clothing, seeking to merge these concepts; that is, measuring, persisting and retrieving data in a dynamic manner, as currently most solutions use static sensors. Clothing has the potential to be an incredible data collection tool as per the working definition of Smart Clothing: “for example, collect data and either transfer it wirelessly and automatically to an external computing unit or process the data itself […] without any user interfacing” (McCann and Bryson, 2009, p5).

<#> 62


Can "Smart" Clothing Provide a Novel Method of Addressing Environmental Issues? Exploring whether wearable technology can be used to meaningfully affect particulate matter (PM) air quality tracking, storage and reporting for improved public health.

Nyameye Otoo & John Issacs

Introduction This project seeks to explore awareness of Particulate Matter (particles under 2.5ug/m3 and particles under 10ug/m3, called PM2.5 and PM10 respectively). PM can be extremely damaging to the environment as well as to the human body, but isn't often discussed.

Figures and Results Fig 3: PI Holster and Battery (inside jacket)

It aims to supplement current strategies used to track and store PM data, as well as introduce the concept of Smart Clothing, seeking to merge these concepts: measuring, persisting and retrieving data in a dynamic manner, as most current solutions use static sensors. Clothing has the potential to be an incredible data collection tool as per the working definition of Smart Clothing: "for example, collect data and either transfer it wirelessly and automatically to an external computing unit or process the data itself […] without any user interfacing" (McCann and Bryson, 2009, p5).

Fig 2: Embedded Denim

A fully functional smart jacket prototype was created. The prototype tethers to a mobile phone as its source of internet connectivity. With this rig (Fig 2, 3), the user is free to walk around with the jacket, and viewing the Fresh-Wair website on a phone showed continuously updating results on refresh. One issue in this section was that if connectivity is broken, the value is saved to a buffer; however, it appears this buffer did not always update the values properly. New data, however, was still sent.

Project Aim The aim of this project is to create a functional prototype of an "embedded device" powered piece of clothing. That is, to create a small, discreet hardware kit that seamlessly blends in with the clothing and can collect particulate matter (PM) data. Additionally, this data is to be stored in an online database, with an interactive means for a user to view this geo-located air quality information.

Fig 4: Mobile Site

The website (https://freshwair.hopto.org:8443/scarf) also functions mostly as planned, allowing users to select a time range and query data in a 1 km radius of the origin click. In addition, after enabling HTTPS, iPhone, Android and PC users can all use their respective navigation features by clicking "MY LOCATION", and the map will automatically centre and zoom to their location, highlighting the area and retrieving data for the selected time range. A user can click on any area of the map, and the dialog will show relevant data (markers show where data has been properly recorded). This dialog shows the concentrations, as well as the Air Quality Index (AQI). Issues I faced, however, were that my original PI Zero W failed and my sensors arrived late (due to the pandemic), meaning I did not manage to implement some features; however, all requirements were met. Also due to the pandemic, I was unable to retrieve a large variation of PM data as I could not travel. Finally, observing Fig. 4 shows the maximum AQI in my household was 78, a moderate level, colour coded for the user.
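The 1 km radius lookup described above can be sketched with a standard haversine great-circle distance check; the function names, reading format and threshold below are illustrative, not the project's actual code:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def within_radius(origin, readings, radius_km=1.0):
    """Keep only the PM readings recorded within radius_km of the clicked origin."""
    lat0, lon0 = origin
    return [r for r in readings
            if haversine_km(lat0, lon0, r["lat"], r["lon"]) <= radius_km]
```

On the server this filter would run over the readings returned for the map click before the dialog is populated.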

The finalised aim of this will be a technology empowered jacket.

Method(s)

The Amazon DynamoDB datastore functioned as planned; however, it could certainly be made far more efficient by implementing a datetime sort key, which I was not able to complete successfully.
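With such a sort key in place, a time-range lookup becomes a single DynamoDB Query rather than a full Scan. The sketch below only builds the low-level query parameters (the table and attribute names are hypothetical, and no AWS client is included):

```python
def time_range_query(device_id, start_iso, end_iso, table="FreshWairReadings"):
    """Build DynamoDB Query parameters for one device over an ISO-8601 time range.

    Assumes a table keyed by `device_id` (partition key) and `ts` (sort key,
    an ISO-8601 string, so lexicographic order matches chronological order).
    """
    return {
        "TableName": table,
        "KeyConditionExpression": "device_id = :d AND ts BETWEEN :start AND :end",
        "ExpressionAttributeValues": {
            ":d": {"S": device_id},
            ":start": {"S": start_iso},
            ":end": {"S": end_iso},
        },
    }
```

These parameters could then be passed to boto3's low-level `client.query(**params)`.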

This project exists in three main, distinct but interconnected sections:

Fig 5: Worn Embedded Denim

Conclusion

Whilst the prototype is not the most aesthetically pleasing, it shows that clothing can indeed provide a meaningful contribution to air quality monitoring.

- The jacket: powered by a Raspberry PI in a custom holster inside the jacket. Connected to this PI is an Inovafit SDS011 sensor, which measures PM concentration, as well as a U-Blox 7 Series GPS, which provides location data, plus a WiFi module (not needed on the Zero W). This setup sends data to the final piece, the Amazon DynamoDB database, via a cloud-hosted website/web service.
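The SDS011 reports PM2.5 and PM10 over a serial link in 10-byte frames; a minimal decoder for one frame is sketched below (the serial-port plumbing is omitted, and values are in µg/m³):

```python
def parse_sds011_frame(frame: bytes):
    """Decode one 10-byte SDS011 data frame into (pm2.5, pm10) in ug/m3.

    Frame layout: AA C0 | PM2.5 lo, hi | PM10 lo, hi | device ID x2 | checksum | AB
    """
    if len(frame) != 10 or frame[0] != 0xAA or frame[1] != 0xC0 or frame[9] != 0xAB:
        raise ValueError("not an SDS011 data frame")
    if sum(frame[2:8]) % 256 != frame[8]:
        raise ValueError("checksum mismatch")
    pm25 = (frame[2] | frame[3] << 8) / 10.0
    pm10 = (frame[4] | frame[5] << 8) / 10.0
    return pm25, pm10
```

In the jacket this would run in a loop, pairing each reading with the latest GPS fix (e.g. parsed with pynmea2) before upload.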

However, there was limited test data, and testing took place rurally. It would certainly be worthwhile to further test air quality in more congested cities and in areas where the more vulnerable (children and the elderly) reside. In addition, the user-friendliness of the initial configuration could be further developed.

Acknowledgements

- The Website / Web Service: provides users a way to view measured PM data and the associated air quality health rating for a 1 km area on an online, mobile-friendly map. In addition, there is an API which receives data from the jacket and persists it to the online database for display.

- The Datastore: data is persisted in a modern cloud database known as Amazon DynamoDB.

Fig 1: Implementation Workflow


Another area to develop further would be making the hardware (e.g. PI Zero, chip GPS) both smaller and more discreet, and its presence in clothing.

I'd like to firstly thank John Isaacs and the CSDM School Office for their support, not only through information but also for provision of necessary hardware for this project. I would also like to thank Alistair Kevan for provision of a Raspberry PI at short notice after mine died! I would also like to acknowledge the following core technologies which have made this project possible: LeafletJS, AngularJS, jQuery, Amazon DynamoDB and EC2, pynmea2 - and the plethora of academic and enthusiast technical information relating to Air Quality and Particulate Matter measurement.

References

MCCANN, J. and BRYSON, D., 2009. Smart Clothes and Wearable Technology.

LADEN, F. et al., 2000. Association of Fine Particulate Matter from Different Sources with Daily Mortality in Six U.S. Cities. Environmental Health Perspectives.

AIR QUALITY EXPERT GROUP, 2015. Fine Particulate Matter (PM2.5) in the United Kingdom.

BSc (Hons) Computer Science

63


STUDENT BIOGRAPHY

Jakub Parker Course: BSc (Hons) Computer Science Bots and their influence on the quality of user experience provided using internet-based services Bots have many definitions and facets. In general terms, however, a bot is a piece of software or a script that is used to execute a specific, usually repetitive procedure (Lebeuf, Storey and Zagalsky 2017). Many technologies that society takes for granted nowadays would not have been achievable without the development of bots. The popularity of bots has been on the rise ever since the IRC days, but chatbots specifically have seen an explosion of popularity, with many companies using them internally and on social media platforms to deliver a massive range of functionality in an easy-to-access and, if executed well, highly branded way (Pasquarelli and Wohl 2017). “If we choose the right path… bots might be the best thing to happen to marketing yet.” (HubSpot 2018)

64


Bots and their influence on the quality of user experience provided using internet-based services. Jakub Parker & Supervisor Dr John Isaacs

Introduction

Bots have many definitions and facets. In general terms, however, a bot is a piece of software or a script that is used to execute a specific, usually repetitive procedure (Lebeuf, Storey and Zagalsky 2017). Many technologies that society takes for granted nowadays would not have been achievable without the development of bots. The popularity of bots has been on the rise ever since the IRC days, but chatbots specifically have seen an explosion of popularity, with many companies using them internally and on social media platforms to deliver a massive range of functionality in an easy-to-access and, if executed well, highly branded way (Pasquarelli and Wohl 2017). “If we choose the right path… bots might be the best thing to happen to marketing yet.” (HubSpot 2018).

Figures and Results

The chatbot analyses each query or question and, as long as it is not outside the current knowledgebase scope, it will instantly respond with the response it deems most relevant. Queries outside the knowledgebase will be filtered out to reduce the amount of clutter that the language understanding model has to interact with.

Vague questions might result in multiple answers being returned by the chatbot, as the scoring system is not yet perfect, however, with more user testing and extra utterances available for the model, the scoring will improve over time.
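The filter-and-score behaviour described above can be sketched as a simple dispatch over a LUIS-style prediction; the threshold, intent names and response table here are illustrative, not the project's actual configuration:

```python
CONFIDENCE_THRESHOLD = 0.6  # hypothetical cut-off for "inside the knowledgebase"

# Hypothetical pre-written responses keyed by intent name.
RESPONSES = {
    "AskOpeningHours": "The school office is open 9am-5pm, Monday to Friday.",
    "ScheduleMeeting": "Sure - which lecturer would you like to meet, and when?",
}

FALLBACK = "Sorry, I don't know about that - try the askme.rgu portal."

def route(prediction: dict) -> str:
    """Pick a canned response from an {intent: score} prediction, or fall back.

    Mirrors the poster's behaviour: low-scoring or unknown intents are
    filtered out rather than answered.
    """
    if not prediction:
        return FALLBACK
    intent, score = max(prediction.items(), key=lambda kv: kv[1])
    if score < CONFIDENCE_THRESHOLD or intent not in RESPONSES:
        return FALLBACK
    return RESPONSES[intent]
```

A vague question would surface as several intents with similar middling scores, which is where extra utterances improve the ranking over time.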

Conclusion

In conclusion, I believe that the project is a success: it resolves the issue that it was initially set out to fix and covers most of the main functionality that I had planned for it. Both intent prediction and data extraction using LUIS exceeded my expectations and made it possible to expand the chatbot to other request types in the future. I am convinced that with more QnA entries from the askme.rgu portal and user testing, the model will get progressively better at extracting important data from user queries and at determining the right responses for different query types. Once the bot is hosted on social media platforms like Messenger or Slack, I am positive that it will take some burden off the lecturers and staff dealing with repeated queries.

Project Aim

The aim of the project is to use the capabilities of chatbots and machine learning to make an interactive FAQ section with infrastructure in place to expand the chatbot’s functionality in the future. The functionality should allow lecturers and staff to save time by not having to answer the same repeated queries and questions a multitude of times.

Methods

For the project, I have used ML-based LUIS API developed by Microsoft to extract both the intent and important pieces of information from a query provided by the user, which the chatbot then uses to handle the query using its knowledgebase. The bot currently responds to requests for information and can be used to schedule a meeting.


Acknowledgments

Special thanks to Dr John Isaacs for supervising the project, giving pointers and assisting with the direction the project should aim towards. Also, thanks to Shona Lilly for providing me with the FAQ knowledgebase, which reduced the amount of copy-pasting needed from the askme.rgu.ac.uk portal.

In terms of future work, I believe testing the chatbot with students is essential, as it will allow the model to improve by learning from students’ mannerisms. Another ideal feature would be to automate adding new utterances from the askme.rgu portal to the application’s knowledgebase, which would keep the knowledgebase up to date without the need for manual additions to both systems. Another exciting aspect of the project is potentially adding an AI-based personality to the bot, which is available using the QnA Maker framework, also made by Microsoft; but given the potentially expanding functionality of this chatbot, I have decided to stick with Bot Framework and limit the responses to pre-written ones for the time being, due to the complicated nature of AI-based response generation.

References

LEBEUF, C., STOREY, M.A. and ZAGALSKY, A., 2017. Software Bots. IEEE Software, 35(1), pp. 18–23.

HUBSPOT, 2018. Battle of the Bots. [online]. Available from: https://www.hubspot.com/stories/chatbotmarketing-future

PASQUARELLI, A. and WOHL, J., 2017. Why marketers are betting on bots. [online]. Available from: https://adage.com/article/digital/marketersbetting-bots/309767

BSc Computer Science

65


STUDENT BIOGRAPHY

Lee Robbie Course: BSc (Hons) Computer Science Increasing Usability of Flight Trackers using Augmented Reality With the app stores possessing millions of apps, 2.5 million on the Google Play Store and 1.8 million on Apple’s App Store (Statista, 2019), there is an oversupply of applications. The attention of users is limited, resulting in users wanting quick and streamlined applications. Flight tracking applications have been proven to use an abundance of irrelevant features, aiming for quantity rather than quality and simplicity to compete with other applications. This has had a negative impact on the applications’ usability, putting off users who would otherwise use this kind of application due to their complexity. The decrease in usability also extends to the use of AR, where applications have overcomplicated this already unfamiliar functionality, again discouraging users from trying the feature. Flight Detector aims to solve this problem with a minimalised set of features that provides the primary functional user experience which flight tracking applications are designed to deliver. The application will incorporate an easy-to-follow user interface, and will be created as a web-based app rather than the existing native-based equivalents.

66


Increasing Usability of Flight Trackers using Augmented Reality Student: Lee Robbie

Introduction

With the app stores possessing millions of apps, 2.5 million on the Google Play Store and 1.8 million on Apple’s App Store (Statista, 2019), there is an oversupply of applications. The attention of users is limited, resulting in users wanting quick and streamlined applications. Flight tracking applications have been proven to use an abundance of irrelevant features, aiming for quantity rather than quality and simplicity to compete with other applications. This has had a negative impact on the applications’ usability, putting off users who would otherwise use this kind of application due to their complexity. The decrease in usability also extends to the use of AR, where applications have overcomplicated this already unfamiliar functionality, again discouraging users from trying the feature. Flight Detector aims to solve this problem with a minimalised set of features that provides the primary functional user experience which flight tracking applications are designed to deliver. The application will incorporate an easy-to-follow user interface, and will be created as a web-based app rather than the existing native-based equivalents.

Project Aims The project aims to produce a mobile web-based flight tracking application which will take live data from an API presenting the data in a readable format based on user input. The application will allow for a selected flight to be tracked visually and displayed in a live environment using AR.

Methods The mobile application was created using the Atom IDE, with flight data taken from The OpenSky Network's API. The API supplies live data from its coverage around the world, presenting aircraft with ADS-B and MLAT transponders. The interactive AR was created using the open-source framework AR.js. The design process throughout the production of the application was exposed to user feedback, ensuring user opinions were a central part of all aspects of the project. Final usability metrics will be used to scrutinise the application before user testing. (Fig. 2: AR.js Location-Based Example)
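OpenSky's REST endpoint returns each aircraft as a flat "state vector" array. A sketch of pulling out the fields a tracker needs is below; the index positions follow OpenSky's documented state vector layout, the fetch itself is omitted, and the function name is illustrative:

```python
# Field positions in an OpenSky "state vector" (per their /api/states/all docs).
ICAO24, CALLSIGN, LON, LAT, BARO_ALT, VELOCITY, TRACK = 0, 1, 5, 6, 7, 9, 10

def summarise_state(state):
    """Reduce one raw OpenSky state vector to the fields a flight tracker displays."""
    return {
        "icao24": state[ICAO24],
        "callsign": (state[CALLSIGN] or "").strip(),  # callsign is space-padded
        "position": (state[LAT], state[LON]),         # may be None if unknown
        "altitude_m": state[BARO_ALT],
        "velocity_ms": state[VELOCITY],
        "heading_deg": state[TRACK],
    }
```

The resulting dictionary is what a web app would hand to the map layer or the AR.js overlay for a selected flight.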

Flight Detector

Supervisor: Dr John Isaacs

Figures and Results The design process of the application resulted in 8 separate variations of the home page. Following the most preferred design, colour schemes were created and applied to the chosen design. The designs were put to users through a survey, and the most preferred design was taken forward to include the different colour schemes.

Fig. 5: Survey Results From Home Design Preference

Fig. 3: Chosen Home Page Design

Users again chose a preferred colour scheme for the home page design. The resultant home page is shown in Figure 3. The home page survey results are shown in Figure 5, while the results of the colour scheme survey are shown in Figure 6. Each survey offered 4 choices, and both produced a clear majority preference. The same process was followed for all features within the app, such as the search tab, flight data display and AR page. The AR page chosen is shown in Figure 4, adopting the colour scheme “Electric and Crisp”.

Fig. 4: Chosen AR Page Design

Fig. 2: The OpenSky Network Logo

Acknowledgements I would like to thank Dr John Isaacs, Head of the School of Computing Science and Digital Media, for his dedication in providing guidance throughout the year. I would also like to thank Mark Scott-Kiddie, Fatima El-Ghanduri and Calum McGuire for their support and motivation throughout the years at university. Finally, of course, I would like to thank my family for their continued support throughout my time at university, especially the support received this year above all.

Fig. 6: Survey Results From Colour Scheme Preference

Conclusion The project aimed to create a usable application for all user demographics. The application received thorough user feedback throughout, to ensure its design would appeal to the broader audience of users. Only small-scale user testing has been conducted since the application was implemented; although limited, it has shown the application to be successful in achieving a usable interface with usable features. Further testing will be completed to gather a more substantial proportion of user feedback and determine whether the project has been entirely successful.

References Statista. (2019). Number of apps in leading app stores 2019. Statista. https://www.statista.com/statistics/276623/number-of-apps-available-in-leading-app-stores/

BSc (Hons) Computer Science 67


STUDENT BIOGRAPHY

Quentin Robertson Course: BSc (Hons) Computer Science An Educational Game on Sustainable Fishing in the North Sea It is no big secret that the human race has a sordid history of driving many species of animals to extinction. The Passenger Pigeon, the Western Black Rhinoceros and the Dodo bird are some of the most infamous cases of this. However, it could be that soon, due to continuous overfishing, we may find ourselves merely remembering some of the most popular fish, such as Cod and Haddock. The IUCN red list of threatened species is commonly used to gauge the risk of extinction of various species of animal; Cod and Haddock are currently both considered vulnerable (The International Union for Conservation of Nature, 2020). Video games have quickly become one of the most popular media forms used by the public. The idea of using video games as a tool for education is in no way new, be it games made specifically for education, or games which cover real world topics and just happen to be very informative.

68


An Educational Game On Sustainable Fishing In The North Sea. Quentin Robertson, Ines Arana

Introduction It is no big secret that the human race has a sordid history of driving many species of animals to extinction. The Passenger Pigeon, the Western Black Rhinoceros and the Dodo bird are some of the most infamous cases of this. However, it could be that soon, due to continuous overfishing, we may find ourselves merely remembering some of the most popular fish, such as Cod and Haddock. The IUCN red list of threatened species is commonly used to gauge the risk of extinction of various species of animal; Cod and Haddock are currently both considered vulnerable (The International Union for Conservation of Nature, 2020). Video games have quickly become one of the most popular media forms used by the public. The idea of using video games as a tool for education is in no way new, be it games made specifically for education, or games which cover real world topics and just happen to be very informative.

Project Aim

The aim of this project was to develop a video game which would have players experience a basic simulation of the fishing industry in the North Sea. The goal of the project was to create a game which would educate players on North Sea fishing and the importance of sustainable fishing.

Implementation and Results

Above are screenshots of the final product of this project. As seen, there are a number of differences between the original design and the final product; however, the majority of functionality remains unchanged from the original idea. Players take on the role of a fishing vessel, which has a starting storage, quota and max fuel. Players must earn money by fishing for the fictional species of fish ‘Coddack’, which, as the name suggests, is inspired by Cod and Haddock. With each boat upgrade, the player can fish more each month, increasing the chances they can overfish Coddack to extinction. There is a conservation status, based on the IUCN list, which represents the Coddack population; the lower the rating, the harder it is to repopulate in the next spawning season. The game also features a ‘Help & Info’ section in the menu to inform players of some key gameplay information, as well as background information on aspects that inspired the game.

Conclusion

As seen in the results section, and the chart above, users found the game very successful at illustrating the risks of overfishing. This is ideal, as it was one of the main goals of the project. We also note that users overwhelmingly answered the knowledge test questions correctly, showing that they did retain a degree of information after the experience. As stated, there were suggestions for changes to the game, and even though the project as a whole is a success in terms of educating users on the desired topics, there is always room for improvement. With future work it would be ideal to focus on adding new features to the game, such as having multiple players take on vessels and aim for the best profit individually, while trying to sustain the Coddack population as a group.
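The spawning-season mechanic the game is built around (a lower conservation rating means slower repopulation) can be illustrated with a toy model; every threshold and multiplier below is hypothetical, not taken from the actual game:

```python
# Hypothetical spawning multipliers per conservation status (IUCN-inspired).
SPAWN_RATE = {
    "Least Concern": 1.30,
    "Vulnerable": 1.15,
    "Endangered": 1.05,
    "Critically Endangered": 1.01,
}

def status_for(population, capacity):
    """Map the Coddack population to a conservation status (thresholds invented)."""
    ratio = population / capacity
    if ratio > 0.6:
        return "Least Concern"
    if ratio > 0.3:
        return "Vulnerable"
    if ratio > 0.1:
        return "Endangered"
    return "Critically Endangered"

def spawning_season(population, capacity):
    """One season of growth: depleted stocks recover more slowly."""
    rate = SPAWN_RATE[status_for(population, capacity)]
    return min(capacity, int(population * rate))
```

The feedback loop this creates, where heavy fishing pushes the status down and growth nearly stalls, is exactly the overfishing lesson the game tries to teach.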

Methods

Above is the mock-up design for the game. It was decided that the best platform to develop such a video game would be the Unity Engine. The game would be a strategy type, similar to titles such as Plague Inc., and would have the players take on the role of a fishing vessel working in the North Sea. For data representation, an SQLite database is set up which the game can communicate with during play.

Testing was carried out by having people watch a demonstration of the game (this would mainly have been users playing the game themselves, if not for the current lockdown) and answer a series of questions to test their knowledge and give feedback on the experience. The results showed that most users did not, in the majority of cases, possess much prior knowledge about the North Sea fishing industry, the fish that reside there, or the extent of overfishing. All users said they learned from the experience; to test their learned knowledge, they were asked three questions regarding the living habits of North Sea fish, and all participants answered correctly, bar one response to one question. Users were also asked to give feedback on the experience, rating the help sections and overall UI at around 8/10 on average. The entertainment value and likelihood to play again (shown above) were slightly lower, but still positive, at 7/10 on average. Most importantly, all participants believed the game illustrated the effects of overfishing extremely well, with only one response below 8/10 for effectiveness. In written feedback, users suggested additions to the game such as more in-game sound, slight UI tweaks, further information sections and the inclusion of a voiced tutorial in game.

Acknowledgments

First of all, I would like to offer my special thanks to my supervisor Ines Arana for her continuous support throughout the development of this project, without her feedback keeping me on track, this project would have been impossible. I would also like to thank Dr John Isaacs for his unwavering support in all aspects of university life throughout the years, and always being there to listen whenever problems had arisen.

References

1) The International Union for Conservation of Nature, 2020. The IUCN Red List of Threatened Species. [Online] Available at: https://www.iucnredlist.org/ [Accessed 20 April 2020].

More Information Email: q.c.robertson@rgu.ac.uk

Computer Science

69


STUDENT BIOGRAPHY

Lewis Ross Course: BSc (Hons) Computer Science Introductory Programming in JavaScript using a Media Computation Approach Programming languages are something many students find difficult to get to grips with (Mitchell, Purchase, and Hammer, 2009). One of the main issues is that they struggle to visualise what it is they are doing (Higgins, C. et al. 2019). The media computation approach was pioneered by Mark Guzdial back in 2003 to combat this issue of students not engaging due to the lack of opportunity to be creative (Guzdial, 2003). The method was to have students learn by manipulating media data by doing things such as changing the background colour of an image, creating sounds and changing videos (coweb.cc.gatech.edu, n.d.). It was first introduced at Georgia Tech in America and the results saw big improvements in tests and in getting more female students to enrol in Computing Science (Guzdial, 2013). This method has worked well with Java and Python (Guzdial, 2013) and now appears the time to introduce it to JavaScript. JavaScript is still one of the most popular languages (Stack Overflow, 2019), so it is a skill students will require in order to make themselves more employable. The aim of the project is to provide a learning platform for novice programmers to learn JavaScript using the media computation approach. Can students learn just as well from the manipulation of media as they do with more traditional methods?
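To give a flavour of the media computation approach (sketched here in Python, though the project itself targets JavaScript), a typical introductory exercise has students loop over an image's pixels; the pixel representation below is a simplified, hypothetical one:

```python
def to_grayscale(pixels):
    """Media-computation-style exercise: average each (r, g, b) pixel.

    `pixels` is a list of (r, g, b) tuples, as a novice might obtain from an
    image library; the loop is the learning target, not the image I/O.
    """
    result = []
    for r, g, b in pixels:
        level = (r + g + b) // 3   # simple average, as in intro exercises
        result.append((level, level, level))
    return result
```

The appeal of the approach is that the same loop-and-update pattern taught abstractly in traditional courses here produces an immediately visible result.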

70


71


STUDENT BIOGRAPHY

Aleksandrs Rudenoks Course: BSc (Hons) Computer Science Analysis and Exploitation of Vulnerabilities in Unmanned Aerial Vehicles The presence of UAVs in our daily lives continually grows each day. While the majority of UAVs are intended for a peaceful use, they can also be used with offensive intentions or as a disruption tool. This project intends to provide a solution for the problems that unauthorised drone flights can present (PA Media, 2019).

72


Analysis and Exploitation of Vulnerabilities in Unmanned Aerial Vehicles

Computer Science

Aleksandrs Rudenoks & John Isaacs

Introduction

The presence of UAVs in our daily lives continually grows each day. While the majority of UAVs are intended for peaceful use, they can also be used with offensive intentions or as a disruption tool. This project intends to provide a solution for the problems that unauthorised drone flights can present (PA Media, 2019).

Project Aim

Results

The Parrot AR drone and the majority of modern drones utilise the IEEE 802.11x standard for wireless communications. The vulnerability analysis performed revealed that this standard fails to provide a secure communication channel between the drone and its operator. The most critical weakness of the wireless communication turned out to be the de-authentication attack, a Denial-of-Service type of attack. This weakness was utilised as the core attack method for the developed anti-UAV system.

The general aim of this project is to identify potential vulnerabilities in UAVs, determine the most critical vulnerabilities for the operation of UAV and develop an anti-UAV system that exploits found vulnerabilities. The primary goal of this system is to prevent an unauthorised use of UAVs in restricted areas. In addition, the system must be developed with several constraints in mind: it must be legal, easy-to-use and portable.

Research Methods After thorough research, the vulnerability analysis was narrowed down to the assessment of wireless networks as communication channels. For this task, a wireless adapter was configured to operate in monitor mode in order to scan all traffic received on wireless channels. The wireless network auditing tool Aircrack-ng was selected to display traffic and perform various attacks, including de-authentication, “evil twin” and replay attacks.
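For illustration, the de-authentication step in the Aircrack-ng suite boils down to a single aireplay-ng invocation. The helper below only assembles that command line; the MAC addresses and interface name are placeholders, and such attacks are lawful only against equipment you are authorised to test:

```python
import re

MAC_RE = re.compile(r"^([0-9A-Fa-f]{2}:){5}[0-9A-Fa-f]{2}$")

def build_deauth_cmd(ap_mac, client_mac, iface="wlan0mon", count=10):
    """Assemble an aireplay-ng de-authentication command.

    Sends `count` de-auth bursts to `client_mac` (the drone's operator link),
    spoofed from the access point `ap_mac`, via a monitor-mode interface.
    """
    for mac in (ap_mac, client_mac):
        if not MAC_RE.match(mac):
            raise ValueError(f"invalid MAC address: {mac}")
    return ["aireplay-ng", "--deauth", str(count),
            "-a", ap_mac, "-c", client_mac, iface]
```

The returned list could then be handed to `subprocess.run()` on a Kali system with Aircrack-ng installed, which is roughly how a wrapper application drives the suite.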

During the testing stage, Raspberry Pi 3b+ encountered some issues operating in the monitor mode. Occasionally, the wireless adapter stops receiving any packets and gets trapped in the endless reconfiguration loop that can only be terminated by device reboot. The same adapter worked flawlessly with the HP laptop running the same version of Kali Linux. It was also noted that the Raspberry Pi has a greater execution delay due to the lower processing speeds.

Conclusion

The provided WiFi router and Parrot AR drone were set up and analysed for potential vulnerabilities, followed by a successful implementation of the previously mentioned attacks.

Implementation The anti-UAV system was developed as a Python application optimised for the Kali Linux distribution. Alternatively, the application can run on any Linux distribution with the Aircrack-ng module and Python 3.7 or higher pre-installed. The finished application was called “Drone Striker” and is designed to support devices with various screen sizes. The following screenshot illustrates the main page of the anti-UAV system:

According to the test results, this project has proved that some UAVs contain a number of vulnerabilities that can be exploited, with some vulnerabilities proven to be crucial for UAV operation. While most wireless networks are prone to the de-authentication and MITM attacks, there were some attempts to develop a solution for wireless attacks prevention, for example “Management Frame Protection” by Cisco (CISCO, 2008). De facto, those attack prevention mechanisms are rarely implemented because of the hardware restrictions. The developed application works as expected and meets all “Must Have” requirements. While the target hosting device was supposed to be a Raspberry Pi, the tests have proved it to be less reliable than a laptop.

References CISCO (2008) Infrastructure Management Frame Protection (MFP) with WLC and LAP Configuration Example - Cisco. Available at: https://www.cisco.com/c/en/us/support/docs/wireless-mobility/wlansecurity/82196-mfp.html#prereq (Accessed: 27 April 2020). PA Media (2019) Activists to fly drones at Heathrow in attempt to ground flights | UK news | The Guardian. Available at: https://www.theguardian.com/uk-news/2019/aug/29/heathrowactivists-fly-drones-attempt-ground-flights (Accessed: 27 April 2020).

Acknowledgements

I would like to express my very great appreciation to Dr John Isaacs for his valuable and constructive suggestions during the development of this project, without his guidance this project would not have been possible. In addition, I wish to thank the IT Support Team at Robert Gordon University for their assistance in prompt hardware provision.

Computer Science

73


STUDENT BIOGRAPHY

Mark Scott-Kiddie Course: BSc (Hons) Computer Science Hardware Password Management Through a Web Interface On average internet users have to remember up to 90 passwords. It is important that people protect themselves with a good password strategy, which consists of the strength and storage of passwords. Password Manager Types: • Cloud-based - Requires internet • Hardware-based - Can be finicky • Software-based - Not Portable This project aims to implement a hardware password management solution accessible via a web interface, while avoiding some of the perceived downfalls that are present in other implementations.

74


Hardware Password Management Through a Web Interface

PassMan

Mark Scott-Kiddie

Figures & Results

Introduction On average internet users have to remember up to 90 passwords. It is important that people protect themselves with a good password strategy, which consists of the strength and storage of passwords. Password Manager Types: • Cloud-based - Requires internet • Hardware-based - Can be finicky • Software-based - Not Portable

Supervisor: Hatem Ahriz

Performance

Compatibility

Overall the Pi Zero performed surprisingly well; however, the hardware is still the main bottleneck. While running on the Pi, the application was noticeably slower to load, though this was expected. Overall the operations completed successfully, albeit at a slightly slower speed.

A USB connection to the host is all that is required for the Pi Zero to function. The Pi runs a local webserver, making it compatible with anything that can access a web browser, meaning it could connect to a tablet or smartphone.*


The temperatures stayed stable while running the project on the Pi.


[Chart: % of internet users using a specific password strategy — memorize, physically take note, digitally take note, save to browser, use password manager, other]

Project Aims This project aims to implement a hardware password management solution accessible via a web interface, while avoiding some of the perceived downfalls that are present in other implementations.


Portability

Usability: designed using the Angular Material library, a design system and set of principles developed by Google, meaning very little effort needs to be put into the design of the site.

The Pi is an incredibly small package, easy to include in an everyday carry, though still bigger than the average USB stick.

Security

The size of devices like the Pi Zero can only decrease in the future

Passman encrypts passwords with 128-bit AES on the client side, meaning that no encryption keys are stored on either device. The tech used is resistant to XSS attacks, and the hardware only interfaces with the USB host, stopping remote attacks.

[Chart: AES key complexity — possible key combinations against key size]
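The growth shown in the chart is simple exponentiation: the key space doubles with every added bit. A quick illustrative sketch (not part of PassMan's own JavaScript/MEAN codebase):

```python
def key_space(bits: int) -> int:
    """Number of possible keys for a key of the given bit length."""
    return 2 ** bits

# Key sizes from the chart; 128-bit keys give roughly 3.4e38 combinations,
# which is why brute-forcing AES-128 is considered infeasible.
sizes = {bits: key_space(bits) for bits in (32, 56, 64, 128)}
```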

Methods PassMan will be developed using a modern technology stack, MEAN, while also utilising a wide variety of available libraries. A Raspberry Pi Zero will be used to host this application as physical size is important.

Figure.2 Main Page of PassMan


Conclusion PassMan fulfils a very niche window in the market that targets the downfalls of other implementations and uses them as its main strengths. Though PassMan could be considered a successful and complete project, it is something that could be continued in the future to further polish out issues, especially when faster and more up to date hardware becomes available.

Acknowledgements I would like to thank my honours supervisor Hatem Ahriz, CSDM, Robert Gordon University for his guidance and insight throughout the year. I also want to thank Calum McGuire, Fatima El-Ghanduri and Lee Robbie for motivating me throughout my time at university. Finally I want to thank my family for constantly motivating me and ensuring I have everything I need to succeed in university.

References
https://www.raspberrypi.org/products/raspberry-pi-zero/
https://www.ibm.com/cloud/learn/mean-stack-explained
https://www.kryptall.com/index.php/2015-09-24-06-28-54/how-safe-is-safe-is-aes-encryptionsafe

Figure.1 Raspberry Pi Zero Ethernet gadget courtesy of @aallan from Medium.com

* Testing hasn’t been conducted on a mobile device/tablet.



STUDENT BIOGRAPHY

Sunny Shek Course: BSc (Hons) Computer Science Visualisation and Exploration of Open Data Air pollution has been an ongoing problem around the world, driven by the burning of fossil fuels like coal and oil. Burning these fossil fuels creates carbon dioxide (CO2), a gas which can affect us as humans. To investigate this issue the project considered an air quality dataset recorded hourly every day from 2013 to 2017 at several monitoring stations in Beijing, China. By applying data science techniques to this dataset, the data could be visualised to show the future rates of any air quality variable it contains. The reason for picking this dataset is that China has been facing the issue of air pollution from the smoke of its factories, and coal has been its main source of energy, accounting for more than 70% of the country's total energy consumption. This project aims to make people aware that air pollution is an issue that needs to be solved to keep the air around us clean.



Visualisation and Prediction of Air Quality in Beijing Sunny Shek & David Lonie

Introduction Air pollution has been an ongoing problem around the world, driven by the burning of fossil fuels like coal and oil. Burning these fossil fuels creates carbon dioxide (CO2), a gas which can affect us as humans. To investigate this issue the project considered an air quality dataset recorded hourly every day from 2013 to 2017 at several monitoring stations in Beijing, China. By applying data science techniques to this dataset, the data could be visualised to show the future rates of any air quality variable it contains. The reason for picking this dataset is that China has been facing the issue of air pollution from the smoke of its factories, and coal has been its main source of energy, accounting for more than 70% of the country's total energy consumption. This project aims to make people aware that air pollution is an issue that needs to be solved to keep the air around us clean.

Figures and Results Figure 1 shows the correlations between different variables for one of the monitoring stations. A dark blue dot means that the relationship between the two variables is strong, while a lighter shade suggests a weaker relationship. Conversely, a dark red dot means that there is a strong negative correlation between the two variables. Overall, all the variables that relate to the air have a strong relationship with each other, while temperature, pressure and dew point (DEWP) have a negative correlation.

Project Aim The main aim of this project was to use a machine learning algorithm called ARIMA (Auto Regressive Integrated Moving Average) to predict the future air quality from the dataset, and to output those results as graphs using data visualisation.

Methods A standard data science procedure was used to implement the project: load the dataset, explore the data, clean the data, fit the data into a model and predict the model's future values. The project was implemented in R, with RStudio as the IDE of choice. A package called forecast was used to create the ARIMA model in R.


Figure 2 (left). Shows the original time series over 52-week periods for each year. Figure 3 (right). Shows the ACF and PACF of the time series. Overall, figure 2 shows roughly weekly spikes throughout 2015 and 2016, with a drop from the end of 2016 to the start of 2017. The ACF (bottom left) in figure 3 is high at the start and drops gradually; around lag 60 it goes negative, which suggests that the autoregressive (AR) part of the model is useful. The PACF (bottom right) in figure 3 cuts off dramatically at the beginning, which suggests that the moving average (MA) part will not be important in the model.
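For illustration, the sample ACF plotted in figure 3 can be computed directly from the series. The project itself used R's forecast package, so this pure-Python sketch is only indicative of the calculation:

```python
def acf(series, max_lag):
    """Sample autocorrelation of a time series for lags 0..max_lag."""
    n = len(series)
    mean = sum(series) / n
    # Lag-0 autocovariance (the series variance), used for normalisation.
    c0 = sum((x - mean) ** 2 for x in series) / n
    result = []
    for k in range(max_lag + 1):
        # Autocovariance at lag k, divided by c0 to give a correlation in [-1, 1].
        ck = sum((series[t] - mean) * (series[t + k] - mean)
                 for t in range(n - k)) / n
        result.append(ck / c0)
    return result
```

A gradually decaying ACF, as seen in figure 3, is the classic signature of an AR component.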

Figure 4 shows the components of the decomposed data: seasonal indicates that there is a seasonal pattern, trend shows the overall movement in the time series, and the remainder is what is left over after the other two components are subtracted from the data. Figure 5 shows the residuals, ACF and PACF after fitting the ARIMA using the auto.arima function, which automatically finds the best fit for a time series. The ACF suggests that a model of the residuals is not needed, as the values are within the confidence bounds.

Conclusion

Figure 6. The circled point (left) on the map is the monitoring station whose data was used to produce the results for this project. It is located around the Olympic park in Beijing; the map was taken from the air quality index website. Overall, the results showed a seasonal pattern in which specific weeks or months would spike while others would be lower than normal, demonstrating the data visualisation techniques used. Future work for this project is to model and display more of the variables in the data. Other monitoring stations from the dataset could also be used as a comparison, as the set contains 11 more stations of data beyond the one used in this project.

Acknowledgments The author of this project would like to thank David Lonie for being a great supervisor to work with throughout the project, and Song Xi Chen for providing the dataset that was used. Lastly the author would like to thank Rob Hyndman for the creation of the forecast package, which made this project possible.

Figure 4 (left). Shows the components of the decomposed data: seasonal, trend and remainder. Figure 5 (right). Shows the residuals, ACF and PACF after modelling the ARIMA.

References

Chen, S.X., 2019. Beijing Multi-Site Air-Quality Data Data Set. Available from: https://archive.ics.uci.edu/ml/datasets/Beijing+Multi-Site+Air-Quality+Data [Accessed 30 January 2020].
Chan, C.K. and Yao, X., 2008. Air pollution in mega cities in China.



STUDENT BIOGRAPHY

Marc Smith Course: BSc (Hons) Computer Science Creating a Portfolio Optimisation Tool using Common Economic Theories When investing in any kind of asset it is important to consider the return that the asset is expected to provide after a period of time. As well as this, the risk that the asset carries with it should also be considered. Investing in a single asset immediately implies a greater level of risk, as if that company, currency or property were to fail, the entire investment will also fail. It is therefore common among investors to select multiple assets that vary across class and sector. This process of investing in various assets is known as diversification. The rational investor seeks investments that maximize a return whilst minimizing the risk/volatility. Many theories and algorithms have been put forward over the years that try to calculate these properties. These algorithms aim to help potential investors by maximizing the returns on their investments and avoiding unnecessary risk (Omisore, Yusuf and Christopher 2012).



CREATING A PORTFOLIO OPTIMIZATION TOOL USING COMMON ECONOMIC THEORIES Marc Smith

Abstract When investing in any kind of asset it is important to consider the return that the asset is expected to provide after a period of time. As well as this, the risk that the asset carries with it should also be considered. Investing in a single asset immediately implies a greater level of risk, as if that company, currency or property were to fail, the entire investment will also fail. It is therefore common among investors to select multiple assets that vary across class and sector. This process of investing in various assets is known as diversification. The rational investor seeks investments that maximize a return whilst minimizing the risk/volatility. Many theories and algorithms have been put forward over the years that try to calculate these properties. These algorithms aim to help potential investors by maximizing the returns on their investments and avoid unnecessary risk (Omisore, Yusuf, Christopher 2012).

Testing

Figure 2. The application's form page where the user can enter their portfolio criteria. The backend combines historical price data with the user's criteria to create a portfolio that it deems optimal. These results are then sent to the front-end for the user to view.



Figure 1. A graph that shows the difference that diversification in a portfolio can have on an investments return (Levine 2018).

Project Aim The aim of this project was to apply these economic theories of portfolio optimization in a free-to-use tool that allows its users to create and optimize portfolios while providing them with key information about said portfolios. Furthermore, portfolios generated by the tool can be tested to measure the application's effectiveness in producing optimal portfolios.

Methods The tool built to optimize portfolios takes the form of a web application, which allows its free-to-use platform to reach a wider audience. The front-end accepts a range of criteria regarding the user's desired portfolio, including the stocks they wish to invest in, the algorithm used to optimize the portfolio, and the portfolio's minimum expected rate of return.

▪ Optimization algorithms
▪ Highest and lowest Sharpe ratios
▪ Industry diversification levels
▪ Minimum adjusted return rates

Results

Figure 3. The application’s summary screen, which displays to the user details about the generated portfolio.

This data provides the user with information about their portfolio, such as:

Testing any tool that tries to predict future financial performance requires backtesting. This involves splitting historical pricing data into training and testing sets, which allows the application to generate a portfolio based on stock data over a period of time and then to calculate the return that the portfolio would have achieved (QuantInsti 2019). This technique was used to create a variety of testing strategies comparing performance across:

The testing strategies produced mixed results overall. However, the strategy that compared portfolio industry diversification levels produced results that aligned with predictions. The undiversified portfolios performed worse over a 5 year period than their diversified counterparts.

▪ Asset weight allocation
▪ Expected return
▪ Dividend payment history
▪ Sector allocation

The application uses the Flask framework to handle the requests and responses from the client. The back-end relies on the Yahoo! Finance API to retrieve the necessary stock information needed to generate a portfolio. The Pandas library is used to structure data and perform the operations that are required to optimize the portfolio. Finally, the matplotlib library is used to plot the different graphs that summarize the portfolio. One of the portfolio summary graphs created by the application is a scatter plot of all possible portfolios, plotting their expected return against their volatility. This graph is commonly referred to as the “efficient frontier” because it outlines the set of optimal portfolios that do not carry unnecessary risk (Ganti, 2020). An example of an efficient frontier plot created by the application can be seen below.

Figure 4. A graph to compare the average annual and 5 year return of diversified and undiversified portfolios using both Modern Portfolio Theory and Post Modern Portfolio Theory.
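The expected return and volatility that underpin the efficient frontier follow the standard Modern Portfolio Theory formulas. An illustrative pure-Python sketch of these quantities (not the application's actual Pandas code):

```python
import math

def portfolio_stats(weights, mean_returns, cov):
    """Expected return and volatility of a weighted portfolio.
    cov is the asset covariance matrix as nested lists."""
    exp_ret = sum(w * r for w, r in zip(weights, mean_returns))
    # Portfolio variance is the quadratic form w^T C w.
    var = sum(weights[i] * cov[i][j] * weights[j]
              for i in range(len(weights)) for j in range(len(weights)))
    return exp_ret, math.sqrt(var)

def sharpe_ratio(exp_ret, volatility, risk_free=0.0):
    """Excess return per unit of risk."""
    return (exp_ret - risk_free) / volatility
```

Every candidate portfolio in the scatter plot is one (volatility, expected return) pair produced this way; the efficient frontier is the upper-left envelope of those points.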

Conclusion

The implemented program successfully provides a solution that assists users in the selection of appropriate investments, and the tool was successfully tested using a wide range of testing strategies. The tool in its current form is limited by the API used to obtain historical stock data: most notably, the maximum portfolio asset size is restricted, as is certain portfolio summary information. These restrictions could easily be lifted by adopting a premium membership for the financial API, which removes said limitations.

References

Omisore, I., Yusuf, M. and Christopher, N. (2012). The modern portfolio theory as an investment decision tool. University of Abuja, Abuja, Nigeria.
Levine, J., 2018. The Big Benefit Of Diversification No One Talks About. [online] Forbes. Available at: <https://www.forbes.com/sites/jeffreylevine/2018/07/31/the-big-benefit-of-diversification-no-one-talks-about/#28191a6743cf>
Ganti, A., 2020. Efficient Frontier Definition. [online] Investopedia. Available at: <https://www.investopedia.com/terms/e/efficientfrontier.asp>
Gupta, A. and Tahsildar, S., 2019. What Is Backtesting A Trading Strategy?. [online] QuantInsti. Available at: <https://blog.quantinsti.com/backtesting/>



STUDENT BIOGRAPHY

Calum Stewart Course: BSc (Hons) Computer Science Autonomous Vehicle Platoon Forming Autonomous vehicle and driverless car technologies have been making huge technological advancements in recent years. We are getting closer to living in a world where cars driving themselves is the everyday norm. Many features of autonomy are already available in cars today, such as adaptive cruise control and assisted steering on motorways. A related technology that has not been explored greatly as of now is platoon forming in a highway driving environment. This has the potential to impact the safety and efficiency of driving greatly. Platoons are up to 20% more fuel efficient than normal driving due to decreased wind drag, and offer increased safety due to inter-vehicle communication.



Autonomous Vehicle Platoon Forming

Simulated Autonomous Platoon Formations in a Realistic Highway Environment Calum Stewart – Supervisor: John McCall

Introduction Autonomous vehicle and driverless car technologies have been making huge technological advancements in recent years. We are getting closer to living in a world where cars driving themselves is the everyday norm. Many features of autonomy are already available in cars today, such as adaptive cruise control and assisted steering on motorways. A related technology that has not been explored greatly as of now is platoon forming in a highway driving environment. This has the potential to impact the safety and efficiency of driving greatly. Platoons are up to 20% more fuel efficient than normal driving due to decreased wind drag, and offer increased safety due to inter-vehicle communication.

Project Aim

To create a motorway driving environment simulation and populate it with realistic traffic, then use this to develop an algorithm which allows vehicles to form platoons together autonomously. Measurements will be taken to prove the successful formation of platoons.

Methods

SUMO will be used to simulate the driving environment. Glasgow highways will be imported using OSM maps. Vehicles will be programmed using the SUMO TraCI interface; existing tools and plugins will be utilised to achieve the goal of forming autonomous platoons.

Figures and Results

A realistic driving environment was successfully modelled using SUMO. The M5 highway going through Glasgow was imported into SUMO using Osmosis to crop the desired area of the map and delete all roads except highways, and NETConvert to convert the OSM map file into a SUMO network. The map was not very accurate, so a lot of touch-ups had to be done. The roads were populated with random traffic using SUMO's randomTrips.py, which spawned random cars at the fringe edges and gave them random destinations. Trip routing is handled within SUMO: it plots a trip based on starting point and destination coordinates, and random vehicle behaviour is handled with randomTrips and SUMO. Simpla is a tool for SUMO which aids in the creation of platoons. While developing and testing, a simple stretch of road with 2 lanes was set up to show a platoon forming. Simpla assigns cars in the platoon different classes which exhibit different behaviours. A platoon successfully formed with most of the cars in the simulation, with a platoon leader in front; the vehicles did not all stay in the same lane, however. Further development will allow this to be ported into the motorway driving environment and force platoons to stay in the same lane.
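Simpla's platoon logic is configuration-driven and more involved than can be shown here, but the kind of rule that decides when a following vehicle joins a platoon can be sketched as a simple gap-and-speed check (the thresholds below are illustrative assumptions, not SUMO's defaults):

```python
def should_join_platoon(gap_m: float, speed_diff_ms: float,
                        max_gap: float = 50.0,
                        max_speed_diff: float = 3.0) -> bool:
    """A follower joins when it is close enough behind the platoon tail
    and travelling at a similar speed to it."""
    return gap_m <= max_gap and abs(speed_diff_ms) <= max_speed_diff
```

In Simpla the equivalent behaviour comes from its catch-up and platoon-follower vehicle classes, applied per simulation step via TraCI.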

Conclusion

Creation of a simulated driving environment on a realistic motorway was successful. Platoon formation was successful on a straight road with one platoon, although the cars did not all stay in the same lane. Further work is needed to fix some of the small bugs the platoon has and to port this to a motorway driving environment so multiple platoons can form individually.

Acknowledgments Advice given by John McCall has been a great help in achieving my goals set out in this project.

References

Daily, M., Medasani, S., Behringer, R., & Trivedi, M. (2017). Self-Driving Cars. Computer, 50(12), 18–23. https://doi.org/10.1109/MC.2017.4451204
Liang, K. Y., Mårtensson, J., & Johansson, K. H. (2016). Heavy-Duty Vehicle Platoon Formation for Fuel Efficiency. IEEE Transactions on Intelligent Transportation Systems, 17(4), 1051–1061. https://doi.org/10.1109/TITS.2015.2492243
Krajzewicz, D., Erdmann, J., Behrisch, M., & Bieker, L. (2012). Recent Development and Applications of SUMO (Simulation of Urban MObility). International Journal On Advances in Systems and Measurements, 5(3), 128–138. Retrieved from http://elib.dlr.de/80483/



STUDENT BIOGRAPHY

Andrew Swan Course: BSc (Hons) Computer Science Implementing IPv6 Devices on an IPv4 Network Given the explosive growth of IoT devices and the recent move from the old Internet Protocol (IP) version 4 to the new IPv6 as the anticipated primary internet routing protocol, there is an anticipated period of transition. During this time, there will need to be methods of translating between the legacy IPv4 and the new IPv6. Various Transition Mechanisms (TMs) were proposed in the early days of IPv6, but none have cemented themselves as standard, and the transition to the new protocol has taken longer than expected: many networks world-wide are still using the old IPv4 for their routing. This presents a landscape of new IPv6-based devices being used on old IPv4-based networks. To combat this, there will need to be a way to transfer IPv6 packets to these new devices over a legacy IPv4 network.



Implementing IPv6 Devices on an IPv4 Network By Andrew Swan Supervisor: Andrei Petrovski

Introduction Given the explosive growth of IoT devices and the recent move from the old Internet Protocol (IP) version 4 to the new IPv6 as the anticipated primary internet routing protocol, there is an anticipated period of transition. During this time, there will need to be methods of translating between the legacy IPv4 and the new IPv6. Various Transition Mechanisms (TMs) were proposed in the early days of IPv6, but none have cemented themselves as standard, and the transition to the new protocol has taken longer than expected: many networks world-wide are still using the old IPv4 for their routing. This presents a landscape of new IPv6-based devices being used on old IPv4-based networks. To combat this, there will need to be a way to transfer IPv6 packets to these new devices over a legacy IPv4 network.

Project Aim The aim of this project is IPv4-to-IPv6 translation. This means, practically, the ability to communicate with and access an IPv6-only device on an IPv4-only network. The main aims will be first, to scan and find these devices; second, to access information on a selected device via network communication; third, to securely communicate with the device.

Figures and Results

Conclusion

Figure 4: IPv4 Address Space Run-Down


Figure 2: Network Topology

The project uses Java SE 9 and the Java library pcap4j[1] to scan a network from the application layer, look for IPv6-capable devices and craft and send an IPv6 packet to such a device over an IPv4 network. As seen above in the demo, the client of the application is responsible for scanning and picking out IPv6 devices (the cells shaded red), and sends the IPv6 address and open TCP port number to the server of the application. When the server has this information, it uses pcap4j to craft an ICMP packet and wrap it in an IPv6 wrapper. This packet is sent over the network and is received by the device. Pcap4j uses WinPcap (or Npcap, from the Nmap project) to capture packets, but is unique in that it can also craft packets, making it ideal for translation.
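The packet crafting itself is done in Java with pcap4j, but as a rough language-neutral illustration, an ICMPv6 echo header and its RFC 1071 Internet checksum can be sketched in Python. Note that a real ICMPv6 checksum must also cover an IPv6 pseudo-header, which is omitted here for brevity:

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF

def icmpv6_echo(ident: int, seq: int, payload: bytes) -> bytes:
    """ICMPv6 Echo Request (type 128, code 0). The checksum here is computed
    over the message only; a real implementation includes the pseudo-header."""
    header = struct.pack("!BBHHH", 128, 0, 0, ident, seq)  # checksum field = 0
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 128, 0, csum, ident, seq) + payload
```

A useful property of this checksum is that recomputing it over a message with the checksum inserted yields zero, which is how receivers validate packets.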

Translation between these two protocols proved more difficult than might be expected: given their names, one would assume some design continuity between the two, but they are very different. The use of both in tandem is already leading to conflicts and confusion [2], and it is clear from this, and the quickly depleting address space [3], that the full switch is needed. The client-server architecture of the project has meant a division of functionality: the client handles the IPv6 device identification and the server handles the packets/sockets (as seen in Figure 3). The server (to become a web server in the future) will be able to send ICMP packets to these IPv6 devices using the tunnelling method.

Methods

This will be accomplished with a Java client-server application that can scan an IPv4-only network and find IPv6-only devices, wrap IPv6 packets in an IPv4 wrapper and transfer them over the network to be received by the device: be this an IoT device or otherwise.

Figure 1: IPv6 Tunnelling (image: Cisco, https://www.cisco.com/c/en/us/support/docs/ip/ip-version-6/25156-ipv6tunnel.html)

Figure 3: Finding an IPv6 Device and Opening a Socket

The next step was to secure the communication between the devices using TLS/SSL and password authentication. The device would be given an ID and would be stored on the server alongside an account with a password that would be set as the only account that could communicate with this device.

Acknowledgments

The project was completed with a third-party Java library, pcap4j, by Kaito Yamada: https://github.com/kaitoy

References

1. Kaito Yamada, https://github.com/kaitoy/pcap4j
2. Geoff Huston, 25th Mar 2020, https://www.potaroo.net/tools/ipv4/index.html
3. Jeff Doyle, 5th Jun 2009, https://www.networkworld.com/article/2235990/the-dual-stack-dilemma.html



STUDENT BIOGRAPHY

Satya Tamalampudi Course: BSc (Hons) Computer Science Video anomaly detection using machine learning and deep learning techniques Anomaly detection is the process of finding unusual patterns in data that do not conform to expected behaviour. In recent years, video anomaly detection has received major attention in computer vision. Video surveillance systems have become very popular due to heightened security concerns and low hardware costs. Moreover, in most circumstances, it is necessary for humans to analyse the videos, which is inefficient in terms of accuracy and cost. Generally, anomalous events rarely occur when compared to normal events. As a result of this, together with the large number of such videos produced daily, there is a great need for a real-time automated system that detects and locates anomalous behaviour.



Video anomaly detection using machine learning and deep learning techniques. Satya Tamalampudi (Supervisor: Pam Johnston)

Introduction Anomaly detection is the process of finding unusual patterns in data that do not conform to expected behaviour. In recent years, video anomaly detection has received major attention in computer vision. Video surveillance systems have become very popular due to heightened security concerns and low hardware costs. Moreover, in most circumstances, it is necessary for humans to analyse the videos, which is inefficient in terms of accuracy and cost. Generally, anomalous events rarely occur when compared to normal events. As a result of this, together with the large number of such videos produced daily, there is a great need for a real-time automated system that detects and locates anomalous behaviour.

Project Aim The aim of this project is to identify anomalies in video using machine learning. This project also aims to analyse deep learning technique called transfer learning to solve the problem.

Figures and Results

The figure above shows one input frame and its associated dense optical flow frame. In this frame, we can visualise the angle (direction of the pixel) by hue and the distance (magnitude of the pixel) by the value of the HSV colour representation. To visualise the anomalous datapoints, histograms were plotted. The frame on the right contains an anomaly (a cyclist passing by) and has higher magnitude values, whereas the frame on the left has no anomalies and hence low magnitude values. The machine learning algorithm was trained on the magnitude values to find the best model. The main challenge of this method is perspective error; to overcome this error, YOLO is implemented.

We provided an overview of machine learning and deep learning methods applied to the anomaly detection task. The optical flow method worked, achieving a 0.72 area-under-curve score with logistic regression. When the dataset was tested with YOLO, it was also able to correctly detect most anomalous behaviour with greater accuracy than optical flow (giving a 95% probability score for a truck). Our project demonstrated the ability of the deep learning model to achieve good results.

Acknowledgments

Methods

Optical flow is a method used to find the apparent motion patterns between frames of a sequence. Dense optical flow (Farneback, 2003) is a type of optical flow method which gives the flow vectors of the entire frame. The magnitude and angle are then calculated from the flow vectors. The output of the dense optical flow method is a colour-coded video; an image frame from that output video is shown in the next figure.
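The magnitude and angle are obtained from each flow vector by a cartesian-to-polar conversion. A minimal sketch of that step for a single vector (illustrative; the project applies OpenCV's equivalent to whole frames at once):

```python
import math

def flow_to_polar(dx: float, dy: float):
    """Convert one optical-flow vector to (magnitude, angle in degrees).
    In the HSV visualisation the angle maps to hue and the magnitude to value."""
    magnitude = math.hypot(dx, dy)
    angle = math.degrees(math.atan2(dy, dx)) % 360.0
    return magnitude, angle
```

Histogramming these magnitudes over a frame gives the feature the classifier is trained on: frames containing fast-moving anomalies (e.g. a cyclist) produce heavier tails of large magnitudes.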

Conclusion

Three different algorithms were compared: logistic regression, support vector machine and naive Bayes. By running several experiments and fine-tuning the models' hyperparameters, we were able to achieve an AUC of 0.72 on the test set. The model that obtained this performance was logistic regression; the curves for SVM and naive Bayes show that they struggle to achieve good performance when compared to LR.

YOLO (You Only Look Once) is a deep learning based object detection model. It is a single convolutional network which simultaneously predicts multiple bounding boxes and class probabilities. It is very fast, processing 45 frames per second, so it is useful for real-time implementation. We implemented YOLO with the OpenCV module for this dataset. It suits the problem because it also generalises well across objects.
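Detectors like YOLO score and de-duplicate overlapping box predictions using intersection-over-union (IoU). A minimal sketch of that measure (illustrative, not YOLO's actual implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

Non-maximum suppression keeps the highest-scoring box and discards others whose IoU with it exceeds a threshold, leaving one detection per object.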

I would like to express my sincere gratitude to my supervisor Pam Johnston. Without her guidance and encouragement this project would not have been possible. I also want to thank my family, who have always been there for me with all the love and support that I needed during this project.

References Farneback (2003). Two-frame motion estimation based on polynomial expansion. Scandinavian conference on Image analysis. Springer. Redmon (2016). You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on CVPR. IEEE. Mahadevan (2010). Anomaly Detection in Crowded Scenes. Proceedings of IEEE Conference on CVPR. IEEE



STUDENT BIOGRAPHY

Brad Thomson Course: BSc (Hons) Computer Science Mental Health and Activities Correlation Mobile Diary Application Stress is a part of everyday life, and with mental illness becoming less taboo, people are nowadays more willing to seek help and admit that they are not doing well in terms of mental health. For this project, a mobile application was developed that allows the user to record a mental health rating as well as the action or activity that caused that particular rating. This application allows the user to keep track of how they are feeling and how they respond to certain activities, and to understand correlations in their daily life that they may not have the objectivity to notice on their own.



Mental Health and Activities Correlation Mobile Diary Application:

A look into the correlation between a users mental health and the activities they participate in through the medium of a mobile diary application Author: Brad Thomson

Supervisor: Roger McDermott

Introduction Stress is a part of everyday life, and with mental illness becoming less taboo, people are nowadays more willing to seek help and admit that they are not doing well in terms of mental health. For this project, a mobile application was developed that allows the user to record a mental health rating as well as the action or activity that caused that particular rating. This application allows the user to keep track of how they are feeling and how they respond to certain activities. This project will allow users to understand correlations in their daily business that they may not have the objectivity to notice on their own. The main topics of this project are the different methods of daily data collection and the analysis of unique user data, with equally unique data visualisation.

Project Aim The aim of this project is to give users a secure, easy and intuitive way to record and visualise their mood ratings. The user should be able to enter data at any time of any day. The user will be able to see a data visualisation of their journey from when they started using the application to any date in the future. The user will also be able to see correlations between mood ratings and the activities they participate in.

Figures and Results


Figure 1 - Early Concept Design


Methods Firstly, research was carried out on similar applications that were available, to determine whether there were any discernible gaps in the market that could be met by the application's functionality, as well as to research user interfaces and navigational structures. Concept designs, sketches and storyboards were created to give an idea of the look and feel of the application to come. The application was built using the NativeScript framework with Angular, for ease of cross-platform development between iOS and Android devices, using the Visual Studio Code IDE. A simple local backend database stores the user's login details as well as their daily diary entries. The user can fill out the diary entry form and send their entries to the backend database, from which they can then be displayed and analysed within other sections of the application.

Figure 2 – Project Sign-in screen

Figure 3 – Project Home screen

Figure 1 (left) is an early-stage concept design from when the project was still in its infancy. It depicts each screen and the contents it may contain, as well as its navigational paths and the wider array of components to be implemented. Figure 2 (left) is the final project's sign-in screen, which adds an extra layer of protection for users who wish to secure their private information. Figure 3 (left) is the final project's home screen, showing the main navigational options the user has. Figure 4 (left, below) is the final project's diary entries screen; this list shows all the diary entries the user has entered throughout their use of the application and, given more development time, would allow the user to edit existing entries. Figure 5 (left, below) is the final project's statistics page, which allows the user to see visualised data of their entries.

Conclusion

The aims of this project were almost all achieved, with one exception. The user can securely sign into the application using their unique login details, record various mood diary entries, see all entries that have been entered and see a visualisation of the data within the statistics portion of the application. The feature of finding correlations between a user's mood and activities was not implemented but was heavily researched and will be discussed in detail in my report.

Figure 4 – Project Diary entries screen

Figure 5 – Project Statistics screen

References

Wallach, Helene S. "Changes in Attitudes Towards Mental Illness Following Exposure."
Bonnardel, Nathalie, et al. "The Impact of Colour on Website Appeal and Users' Cognitive Processes."
Clark, Lee Anna, and David Watson. "Mood and the Mundane: Relations between Daily Life Events and Self-Reported Mood."

87


STUDENT BIOGRAPHY

Jack Webster Course: BSc (Hons) Computer Science Can Drone Technologies be used to Complete a Wi-Fi Survey? When I was on my placement year with a prominent oil company, we faced a simple problem. To support deployments of new infrastructure technology, it was critical that there was a good level of coverage across the whole site, including the large storage yard. The procedure employed was to simply carry a laptop, running a piece of proprietary Cisco software, as you walked the whole site. You would then stop at regular intervals and manually log the current wireless metrics in Excel. This was a very time-consuming process which could be subject to human error due to the manual logging. It was common that the data would need to be cleaned afterwards before it could be used. Accuracy of the data was always a concern, with positioning data being pulled manually from the Google Maps satellite view. This project was born from the common and cost-effective availability of simple technologies which could be combined to automate and improve on this existing process.

88


Can drone technologies be used to complete a Wi-Fi survey? Student: Jack Webster Supervisor: John Isaacs

Introduction

Figures and Results

When I was on my placement year with a prominent oil company, we faced a simple problem. To support deployments of new infrastructure technology, it was critical that there was a good level of coverage across the whole site, including the large storage yard. The procedure employed was to simply carry a laptop, running a piece of proprietary Cisco software, as you walked the whole site. You would then stop at regular intervals and manually log the current wireless metrics in Excel. This was a very time-consuming process which could be subject to human error due to the manual logging. It was common that the data would need to be cleaned afterwards before it could be used. Accuracy of the data was always a concern, with positioning data being pulled manually from the Google Maps satellite view. This project was born from the common and cost-effective availability of simple technologies which could be combined to automate and improve on this existing process.

Project Aim

The aim was to investigate whether it was possible to use a drone, along with a payload, to complete a wireless survey similar to the manual process described above. The drone would need to navigate itself around the area, capturing the same metrics the human operator did manually.

Methods The project combined a Wi-Fi-controlled drone as the base platform with a Raspberry Pi 3 Model B+, through which commands could be given to the drone via the Pi's network interface. With the addition of a USB GPS receiver, the Pi was able to determine its current position and calculate which direction it needed to travel. The final piece was the magnetometer included on the Pi Sense HAT; from this the Pi knew which way it was facing, so it could orientate the drone appropriately. The combination of these four devices allowed for navigation to given waypoints defined within a set area. On arrival, the Pi captured the survey data automatically and logged it to a file before continuing to the next point.
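The poster does not include the navigation code, but the GPS-plus-magnetometer step it describes can be sketched as follows. This is a minimal illustration using the standard forward-azimuth formula; the function names are hypothetical, not taken from the project.

```python
import math

def bearing_to_waypoint(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from north, from the current GPS fix
    (lat1, lon1) to the target waypoint (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def turn_needed(bearing, heading):
    """Signed turn in degrees (-180..180), given the magnetometer heading,
    needed to face the waypoint."""
    return (bearing - heading + 180) % 360 - 180
```

With a noisy GPS fix, as described in the conclusion, the bearing returned here fluctuates, which is consistent with the meandering flight path observed.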

Conclusion

From the project I was able to recreate the procedure for a Wi-Fi survey, following the same process as I did on industrial placement. The GPS device proved to be both an asset and a burden. Although it quickly attained a position fix for the program, the accuracy of the position was at times very unstable. This resulted in fluctuating bearings being calculated and the drone meandering towards the target. Further cleaning of the GPS data would be required for a final product, to smooth out the flight path and make surveys more efficient. You can see this inaccuracy in the form of a speed component being reported for a stationary receiver, caused by the position jumping.

The Sense HAT magnetometer required repeated calibration after the slightest change in the environment. The top figure shows the default range calibration from the device using the RTEllipsoidFit package. Compared to the lower figure, which shows the range after calibration, the range of values is significantly different. I also believe the calibration was inverted by default, which could explain my navigation difficulties in the demo. Further investigation will be completed before final submission.

In this case the project has produced a drone which, when provided with a list of waypoints I choose, can navigate itself to these positions and capture data automatically. By removing the human element, the chances of errors in the captured data have been removed, and the drone is able to travel faster than a human on foot. There is still much work to be done. The drone needs two parameter files provided to it, and at present they need to be created manually; a user interface could be developed for a production environment to remove this weakness. Although the project produces data, this needs to be cleaned before being usable by APIs such as Leaflet. A solution could be developed to include this functionality.

Acknowledgments I would like to thank John Isaacs for all his hard work and assistance with this project. Thank you for all your feedback over the last few months and pointers when I hit a wall.

Even with the difficulties in the execution of the Python program, the code does calculate the information necessary for navigation. With the substitution of Nodecopter, the drone is theoretically capable of self-navigation and completing the Wi-Fi survey.

References

Raspberrypi.org. 2020. [online] Available at: <https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/> [Accessed 29 March 2020].
Raspberrypi.org. 2020. [online] Available at: <https://www.raspberrypi.org/products/sense-hat/> [Accessed 29 March 2020].
Parrot Store Official. 2020. Panel EN - Edito Parrot AR.Drone 2.0 Elite Edition. [online] Available at: <https://www.parrot.com/uk/drones/parrot-ardrone-20-elite-edition> [Accessed 23 April 2020].

BSc (Hons) Computer Science

89


STUDENT BIOGRAPHY

Amy Whyte Course: BSc (Hons) Computer Science An Investigation into the Use of Machine Learning Techniques on Renewable Energy Data The project undertaken was an investigation into the ways in which artificial intelligence is benefitting the battle against climate change. This investigation through literature review showed that there are areas where renewable energy methods could be improved to reduce the reliance on traditional fossil fuels. From a CSV file with data including the average wind speed and dates and times, exploration and preprocessing can be used to view trends in the data. Following that, machine learning methods can be applied to test the accuracy in predicting wind speed levels.

90


An Investigation into the Use of Machine Learning Techniques on Renewable Energy Data Amy Jessica Whyte & Supervisor Dr Eyad Elyan

Introduction The project undertaken was an investigation into the ways in which artificial intelligence is benefitting the battle against climate change. This investigation through literature review showed that there are areas where renewable energy methods could be improved to reduce the reliance on traditional fossil fuels. From a CSV file with data including the average wind speed and dates and times, exploration and pre-processing can be used to view trends in the data. Following that, machine learning methods can be applied to test the accuracy in predicting wind speed levels.

Data Pre-Processing and Exploration To begin using the dataset, any null or irrelevant rows or columns are removed. The useful columns, "Average Wind Speed", "Time Stamp", "Standard Deviation" and "Average Direction", are named to give better context to the data held within them. Another column is created to categorise the wind speeds, giving a good idea of the usual wind speed. Throughout the exploration of the data, different functions are used to determine the lowest and the highest wind speed within the dataset. The functions also allow the creation of graphs to visualise the data and see if there are any obvious trends.
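The cleaning and categorising steps above can be sketched in pandas. The column names match those described in the text, but the data values and speed bins below are invented for illustration, not taken from the Nebraska dataset.

```python
import pandas as pd

# Toy stand-in for the wind dataset; values are illustrative only.
df = pd.DataFrame({
    "Time Stamp": ["2019-01-01 00:00", "2019-01-01 00:10", "2019-01-01 00:20"],
    "Average Wind Speed": [3.2, None, 14.7],
    "Standard Deviation": [0.4, 0.5, 1.1],
    "Average Direction": [180, 190, 200],
})

# Remove rows with missing readings, as the text describes.
df = df.dropna(subset=["Average Wind Speed"])

# Add a category column by binning the wind speed (bin edges assumed).
df["Speed Category"] = pd.cut(
    df["Average Wind Speed"],
    bins=[0, 5, 10, 20],
    labels=["low", "medium", "high"],
)
```

The resulting "Speed Category" column is what the classifiers later predict.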

With the ANN, the accuracy continuously increases as the number of epochs increases. However, continuously increasing the number of epochs brings the risk of "overfitting" the model to the training set, meaning that whilst it may work very well on the dataset on which it has been trained, it may not work well on any new data.

Objectives The dataset being used contains data which was recorded in Nebraska by a twenty-metre anemometer and has a record of ten-minute average wind speed. A variety of machine learning techniques will be researched and applied to the wind speed dataset in order to determine which of the techniques gives the highest level of accuracy in predicting the average wind speed. Discovering this could aid the prediction of renewable wind energy levels that are generated and pulled into the power grid. Being able to make more accurate predictions of the wind speed would allow better control of the fossil fuels used to fill any gaps in the renewable energy supply. This could reduce the level of fossil fuel-based energy being wasted through unnecessary generation and thus reduce the greenhouse gas emissions produced. From the literature review, it is already seen that renewable energy, including wind, has a very inconsistent nature which makes it difficult to analyse and base predictions on. This investigation will confirm whether accurate wind speed predictions can be made.

Results and Conclusion

The graph for the kNN algorithm shows that the highest result is when the k value is equal to one, after which the accuracy slowly decreases. The kNN algorithm's highest accuracy was only 6%, making it a very unsuccessful experiment.

Methods The k-Nearest-Neighbour (kNN) method bases its prediction on k’s closest training instances and rather than learning a particular model, it simply remembers the training instances. The three parameters within the kNN algorithm are k (the size of the neighbourhood), the distance metric and the prediction function. The k value is highly dependent on the data and must be tuned to find the most optimal value. The Artificial Neural Network machine learning model is based upon neural networks found inside the brain. This method mimics the learning process done by the human brain. ANN has hyperparameters such as the learning rate, epochs and batch size. This method will be explored with different hyperparameter values to find the most optimal.
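The k-tuning described above can be sketched with scikit-learn. This is a minimal illustration, not the project's code: the synthetic dataset stands in for the binned wind-speed data, and the range of k values tried is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the categorised wind-speed data.
X, y = make_classification(n_samples=400, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Try a range of neighbourhood sizes and record the test accuracy of each.
scores = {}
for k in range(1, 16):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    scores[k] = knn.score(X_te, y_te)

best_k = max(scores, key=scores.get)
```

Plotting `scores` against k produces the accuracy-versus-k curve discussed in the results.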

The graph shows that the accuracy peaks when the batch size is between 250 and 500. This value lies between one and the size of the training set, which allows for mini-batch gradient descent.

91


92


BSc (Hons) Computing Application Software Development

93


STUDENT BIOGRAPHY

Yasseen Ahmanache Course: BSc (Hons) Computing Application Software Development A web app that uses machine learning to identify phishing websites Phishing is a type of social engineering attack used to steal users' confidential data, typically login credentials and credit card numbers (Whittaker et al., n.d.). Phishers tend to employ both social engineering and technical skills to trick users into giving them their personal information. In January 2019, the number of phishing websites detected reached 48,663, and it rose to 81,122 by March (Anti-Phishing Working Group, 2019). Losses of between $60 million and $3 billion are expected every year. It is a trend that does not seem to be slowing down; although existing methods are proven to work, it is essential to provide help that is easy to use for those who are most vulnerable to these attacks.

94


A web app that uses machine learning to identify phishing websites Yasseen Ahmanache

Introduction

Figures and Results

Phishing is a type of social engineering attack used to steal users' confidential data, typically login credentials and credit card numbers (Whittaker et al., n.d.). Phishers tend to employ both social engineering and technical skills to trick users into giving them their personal information. In January 2019, the number of phishing websites detected reached 48,663, and it rose to 81,122 by March (Anti-Phishing Working Group, 2019). Losses of between $60 million and $3 billion are expected every year. It is a trend that does not seem to be slowing down; although existing methods are proven to work, it is essential to provide help that is easy to use for those who are most vulnerable to these attacks.

Project Aim

Since the URL is one of the most common ways of distinguishing a phishing website from a genuine site, this project aims to create a model using the best-suited classifier, easily accessible through a web app, that advises users on the risks and prevents them from falling victim to phishing sites.

Methods

Figure 2

With the dataset I was using, I had to cut the columns that did not describe aspects of a URL. The remaining columns were then selected based on their relationships, using a heat map. As the repercussions of phishing can severely affect a user, it is crucial to ensure that the predictions are as accurate as possible and to provide the right algorithm for users. To do this, I tested five different algorithms: Logistic Regression, Naïve Bayes, KNN, SVM and Random Forest. As figure 2 shows, all the models performed remarkably well besides SVM. All the models returned an accuracy greater than 87% and achieved high true positive rates and low false positive rates. Even with k-fold cross-validation, all the models displayed a high rate of accuracy when tested on new data.
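The five-algorithm comparison described above can be sketched with scikit-learn cross-validation. This is an illustrative sketch only: the synthetic feature matrix stands in for the real URL features, and the model settings are defaults rather than the project's tuned parameters.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the eleven-feature URL matrix.
X, y = make_classification(n_samples=500, n_features=11, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
}

# Mean 5-fold cross-validated accuracy for each candidate classifier.
results = {name: cross_val_score(m, X, y, cv=5).mean()
           for name, m in models.items()}
```

Comparing the entries of `results` is the evaluation step that led to choosing the random forest.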

Figure 1


In the end, I decided to go with the random forest model. The model classifies each URL and also returns the likelihood of it being a phishing site. The web app also provides a platform for users to create accounts and keep track of different sites. The application has been designed with room to grow: other algorithms could be added quickly, and aspects of the web pages besides the URL could be used to detect phishing. The groundwork of the project opens up the possibility of creating a Chrome extension that would alert users once they access any site that may not be secure. With extra time and dedication, I believe this is something that could have an impact in tackling one of the biggest cybercrimes today.

Acknowledgments

Figure 3

One of the main requirements set out for the project was to pick the model best suited to it. Achieving this involved testing various algorithms and evaluating them based on factors such as accuracy and time. To make the model accessible to users, I used the Flask web development framework. Figure 1 shows how this works: it retrieves input from the website, sends it to the model and then returns the result.
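The Flask pipeline just described can be sketched as a minimal route. The endpoint name, the feature extractor and the fixed placeholder score below are all hypothetical stand-ins; in the real app, the extracted feature array would be passed to the trained random forest instead.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def extract_features(url):
    # Hypothetical stand-ins for the URL features used by the model.
    return [len(url), url.count("."), url.count("-"), int("https" in url)]

@app.route("/predict", methods=["POST"])
def predict():
    url = request.get_json()["url"]
    features = extract_features(url)
    # In the real app: score = model.predict_proba([features])[0][1]
    # A fixed placeholder stands in for the trained random forest here.
    score = 0.5
    return jsonify({"url": url, "phishing_probability": score})
```

The front end posts the URL as JSON and displays the returned probability to the user.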

Conclusion


Figure 4

Figure 5

To make this accessible for all users, I needed to create a web app that allows the user to paste a URL and see the probability of it being a phishing website. The website takes the URL and breaks it down into eleven different values, assigns them to an array and sends it to the model via Flask before returning a result.

Special thanks go to Dr Hatem Ahriz, who supervised me through the development of this project. Hatem was very easy to keep in touch with and was always available to help, especially if I encountered any problems. I would also like to thank everyone involved in the CSDM course at RGU, whose contribution should not go unnoticed; I have been taught valuable skills that helped me throughout the entirety of this project.

References

Whittaker, C., Ryner, B., & Nazif, M. (n.d.). Large-Scale Automatic Classification of Phishing Pages.
Gupta, S., & Kumaraguru, P. (2014). Emerging phishing trends and effectiveness of the anti-phishing landing page. eCrime Researchers Summit, eCrime, 2014-January, 36–47. https://doi.org/10.1109/ECRIME.2014.6963163

Computing (Application Software Development)

95


STUDENT BIOGRAPHY

Keiran Bain Course: BSc (Hons) Computing Application Software Development Neural Networks as a Video Game Level Testing Tool This project idea originated from an interest in gaming and a curiosity to find out more about the uses of AI within that field. After watching several online videos, I wanted to explore the uses and limitations of using AI to test game levels and ultimately shorten the development and testing process. If the AI could be generalised and made capable of tackling levels similar to its training levels, it could be an extremely useful tool, allowing video game developers to test large numbers of levels in a short time frame.

96


Neural Networks as a Video Game Level Testing Tool Student Name: Kieran Bain | Supervisor Name: Carrie Morris

Introduction

Aims

This project idea originated from an interest in gaming and a curiosity to find out more about the uses of AI within that field. After watching several online videos, I wanted to explore the uses and limitations of using AI to test game levels and ultimately shorten the development and testing process. If the AI could be generalised and made capable of tackling levels similar to its training levels, it could be an extremely useful tool, allowing video game developers to test large numbers of levels in a short time frame.

The aim of this project is to establish if artificial intelligence is capable of being used as a tool to automatically test game levels and confirm if they can actually be completed.

Methods The AI algorithm used is called NeuroEvolution of Augmenting Topologies (NEAT). This algorithm is a form of neural network which starts with no topology, building it from the ground up following the theory of natural selection and evolution (Stanley & Miikkulainen, 2002). This differs from other neural networks, which start with a pre-built network and tweak values to achieve the required performance. The algorithm randomly mutates its population; if a mutation is positive, the genome will perform better and is allowed to breed and create children like it. If the mutation is bad and harms performance, the genome is killed and not allowed to breed. An example of the structure of a genome can be seen below. In order to train the AI, a random level generator was used. This could produce a near-infinite number of levels to use as both training and testing data; an example level with the AI running through it can be seen below, as well as the components that are used to turn a flat level into a challenging and unique one.
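The select-and-mutate loop NEAT follows can be illustrated with a deliberately simplified evolutionary sketch. This is not NEAT itself (it does not evolve topologies); the fitness function and parameters below are toy assumptions standing in for "distance travelled through the level".

```python
import random

random.seed(0)

def fitness(genome):
    # Toy stand-in for level progress: closeness to a hidden target genome.
    target = [0.3, -0.7, 0.5]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def evolve(pop_size=20, generations=30):
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    history = []
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        history.append(fitness(population[0]))
        survivors = population[: pop_size // 2]   # cull the weakest genomes
        # Breed mutated children from randomly chosen survivors.
        children = [[g + random.gauss(0, 0.1) for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness), history

best, history = evolve()
```

Because the fittest genome always survives to the next generation, the best fitness in `history` never decreases, mirroring how NEAT's champions persist across generations.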

Figures And Results Following AI training, two different sets of results emerged due to the way the AI was tested. Initially a single genome was evaluated on how successful it was in level completion, but upon further testing it became apparent that the entire population of genomes in training worked as a team to progress. This suggested each genome specialised in slightly different tasks, comparable to how humans tend to specialise in one task and how a society of specialised workers is needed to thrive. This resulted in two different tests: one where only a single genome from the population was tested, and another where the entire population was tested. Overwhelmingly, when loading the entire population, the AI's performance increased significantly, as can be seen in the graph below. This fits with the idea of a society of specialist workers, as it allows the entire population to tackle the problem, as opposed to one AI which may only specialise in a specific type of level. Two different training approaches were also undertaken. The first was to train the AI on many shorter, randomly generated levels, as shown in the level above. The other training method involved using a single long level instead of many short ones. It was hoped that having a long level would force one genome to become the fittest and outperform the rest; however, this was never achieved. Long levels require the AI to navigate a lot of terrain before reaching the part where it is having difficulty, in order to test new mutations. This meant the extended training times required were not achievable within the project duration. Even when training with longer levels, it appears a population outperformed a single genome.

Conclusion

Acknowledgments

In conclusion, I believe this project was a success, but in a different manner than expected. Although a single genome was never able to perform exceptionally well, using a population proved extremely effective, with a 99% completion ratio. I believe that with enough development time this algorithm could be used effectively to test video game levels and save development time. An example would be the game "Super Mario Maker", in which users submit levels for others to play. The AI could be used to ensure any uploaded level can be completed without the author having to dedicate the time to test it. In the future I would like to tweak some parameters and the training methodology to achieve better single-genome performance, and to see if the AI could generalise to more advanced user-created levels.

I would like to thank my supervisor Carrie Morris, who helped me through each step of this project with regular meetings to provide advice and feedback. This support was very much appreciated.

References Stanley, K. O., & Miikkulainen, R. (2002). Evolving neural networks through augmenting topologies. Evolutionary Computation, 10(2), 99–127. https://doi.org/10.1162/106365602320169811

Application Software Development 97


STUDENT BIOGRAPHY

Gatien Bouyssou Course: BSc (Hons) Computing Application Software Development A new MOBA game including an agent monitored by a Case-Based Reasoning system using Reinforcement Learning The number of MOBA games has been booming over the past 11 years, with the creation of games like Dota 2, Smite, Heroes of the Storm and, most famously, League of Legends. With 100 million players monthly, it is one of the most played games nowadays. Its events are huge, with a cash prize over 5 million dollars, and its world final in 2018 was watched by 96 million viewers. Recently, an AI has arisen in the MOBA world: OpenAI Five. On 13 April 2019, OpenAI Five won 2-0 against the best worldwide team of Dota 2. Using deep learning with reinforcement learning, it self-improved by playing against itself over and over again. Since it has a high win rate (99.4%), players could use it to improve or to prepare for a competition (Christopher Berner et al., 2019).

98


A new MOBA game including an agent monitored by a Case-Based Reasoning system using Reinforcement Learning Gatien Bouyssou & Nirmalie Wiratunga

Introduction The number of MOBA games has been booming over the past 11 years, with the creation of games like Dota 2, Smite, Heroes of the Storm and, most famously, League of Legends. With 100 million players monthly, it is one of the most played games nowadays. Its events are huge, with a cash prize over 5 million dollars, and its world final in 2018 was watched by 96 million viewers. Recently, an AI has arisen in the MOBA world: OpenAI Five. On 13 April 2019, OpenAI Five won 2-0 against the best worldwide team of Dota 2. Using deep learning with reinforcement learning, it self-improved by playing against itself over and over again. Since it has a high win rate (99.4%), players could use it to improve or to prepare for a competition (Christopher Berner et al., 2019).

Project Aim We want to create a MOBA game where the player fights against an AI that is learning using Reinforcement Learning. However, instead of using deep learning, we want to use a Case-Based Reasoning (CBR) system. A CBR system is good at giving intuitive explanations of its decisions; that way, our agent should be better at teaching players how to behave in a given situation.

Methods

Figures and Results

Conclusion

This game was made to be simple. The aim is for the player to be able to play some quick games; that way, the user could play during the loading screen of their favourite MOBA game. Since it is a small game, it is essential to make it as clear and simple as possible. Inspired by the most famous MOBA games, we decided to have the general information at the top (such as the time and the status of each player), the health bar and mana bar above each player, and the spells with their corresponding hotkeys and cooldowns at the bottom centre of the screen.

During this year, I have been able to create a MOBA game that players enjoy. The game for now contains a simple agent monitored by a set of fuzzy rules, and players seem to find this agent difficult enough, though some are still able to win within a few games. Next, we need to add the agent monitored by the CBR system. This agent should win at least 60% of its games against the simple agent; according to our predictions, this should be possible after 900 games. The new agent would constitute a new level of difficulty and should please our community.

In order to get some feedback from the players, we decided to create an agent based on very simple rules (if your health is under 20%, then retreat, and so on). This agent is used as a reference to test the quality of the agent monitored by the CBR system.

Win rate of the players

Number of games needed before winning a game

Acknowledgments I would like to thank in particular Nirmalie Wiratunga, my supervisor, who has helped me throughout this year. She gave me relevant advice that helped me make crucial decisions, and I learnt a lot thanks to her experience. I would also like to thank Ikechukwu Nkisi-Orji, who gave his time to help me find the right tools to build my CBR system.

Agent difficulty according to the players

The agent will be monitored by a Case-Based Reasoning system using Reinforcement Learning to improve over the games. The retrieval phase, along with the execution of the SARSA algorithm to update the Q-value, should not take too long, in order to keep the game running smoothly without any lag.
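The SARSA update mentioned above moves Q(s, a) towards r + γ·Q(s', a'). A minimal sketch of that single step follows; the state and action names and the learning-rate values are hypothetical, not taken from the game.

```python
def sarsa_update(q, state, action, reward, next_state, next_action,
                 alpha=0.1, gamma=0.9):
    """One SARSA step: nudge Q(s, a) towards r + gamma * Q(s', a')."""
    td_target = reward + gamma * q.get((next_state, next_action), 0.0)
    td_error = td_target - q.get((state, action), 0.0)
    q[(state, action)] = q.get((state, action), 0.0) + alpha * td_error
    return q

# Hypothetical game situation: retreating at low health earned a reward.
q = {}
q = sarsa_update(q, "low_health", "retreat", 1.0, "safe", "heal")
```

Because the update touches only one dictionary entry per step, it is cheap enough to run inside the game loop without causing lag.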

Rate given to the game by the players

On the left, we can see that half of the players have a very low win rate. This may be because they have not played the game much. The second graph shows that the agent is far from invincible: players do not need more than 5 games to win one, meaning it does not take a lot of time to learn how to beat the agent. However, according to the third graph, it seems that players have found the level of the agent sufficient; almost all the values are between 2 and 4 out of 5. The graphic design of our game is not that good; however, thanks to good game design, a simple user interface and a good agent, we have been able to make users have fun with our game.

References

Daily peak concurrent players. Available at: https://www.riftherald.com/2019/9/17/20870382/league-of-legends-player-numbers-active-peak-concurrent
Prize pool for the Worlds 2018. Available at: https://www.riftherald.com/2018/12/11/18136237/riot-2018-league-of-legends-world-finals-viewers-prize-pool
Dota 2 with Large Scale Deep Reinforcement Learning (OpenAI Five), Christopher Berner et al., 2019. Available at: https://arxiv.org/pdf/1912.06680.pdf

Computer Science Github : https://github.com/GatienBouyssou/Game_Honours_Project Email: gatien.bouyssou@gmail.com

99


STUDENT BIOGRAPHY

Kirsty Douglas Course: BSc (Hons) Computing Application Software Development Which Method of Handling Missing Values Results in the Greatest Accuracy of the Model? Every year breast cancer causes around 11,400 deaths in the UK. This is about 31 deaths a day, making it the 4th most common cause of cancer death in the UK. The breast cancer dataset I chose to use had only 699 entries, 16 of which had missing data. For a large enough dataset, 16 missing entries may not seem like much, but here they make up 3% of the entries. While investigating the data, I found that 14 of these entries with missing values have a benign classification and 2 have a malignant classification. As this dataset only has 699 entries, every entry is important for training the model to make correct predictions, and therefore every entry matters.

100


Which Method of Handling Missing Values Results in the Greatest Accuracy of the Model?

Kirsty Douglas & Carlos Moreno-Garcia

Introduction

Every year breast cancer causes around 11,400 deaths in the UK, about 31 deaths a day, and it is the 4th most common cause of cancer death in the UK. The breast cancer dataset I chose to use has only 699 entries, 16 of which have missing data. For a large enough dataset, 16 missing entries may not seem like much, but here they represent around 2.3% of the entries. While investigating the data I found that 14 of these missing values have a benign classification and 2 have a malignant classification. Because this dataset has only 699 entries, every entry is important for training the model to make correct predictions, and therefore every entry matters.

Project Aim

The aim of my project is to analyse the best pre-processing and classification algorithms to handle missing data in the medical domain, using as a case study a dataset for breast cancer prediction. The best algorithm will have a high accuracy when predicting if a tumour is malignant or benign.

Methods

Using Python, analysis was performed on different methods for handling missing data: imputing missing values with the median, removing the rows which contain missing data, and removing the column which contains missing data. Removing the column is possible because exploration showed that only one column, Bare Nuclei, contained missing data (16 missing values). After the missing data had been handled for each method, three models were created: Logistic Regression, Decision Tree and Random Forest. Finally, we took the accuracy of each model for each method and found which combination of method and model gave the best accuracy.
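The three handling strategies can be sketched with pandas. This is a minimal illustration on a small synthetic frame; the column and variable names are hypothetical, not the project's actual code:

```python
import numpy as np
import pandas as pd

# Toy dataset: one feature column ("bare_nuclei") contains missing values.
df = pd.DataFrame({
    "clump_thickness": [5, 3, 8, 1, 6, 4],
    "bare_nuclei":     [1.0, np.nan, 10.0, np.nan, 3.0, 7.0],
    "class":           [0, 0, 1, 0, 1, 1],
})

# Method 1: impute missing values with the column median.
median_df = df.fillna({"bare_nuclei": df["bare_nuclei"].median()})

# Method 2: remove rows containing missing data.
row_df = df.dropna(axis=0)

# Method 3: remove the column containing missing data.
col_df = df.drop(columns=["bare_nuclei"])

print(median_df.shape, row_df.shape, col_df.shape)  # (6, 3) (4, 3) (6, 2)
```

Each cleaned frame would then be split into train and test sets and fed to the three classifiers, giving the nine method/model combinations whose accuracies are compared below.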

Figures and Results

Figure 1

The graph in Figure 1 shows the accuracy of the three models and methods used on the dataset. The median method of handling missing data had an accuracy between 95% and 96% depending on the model used; the models with the highest accuracy for this method were Logistic Regression and Random Forest, both at 96%. The method of removing the column containing missing values had an accuracy between 93% and 95% depending on the model used; again Logistic Regression and Random Forest performed best, both at 95%. While investigating the data it was revealed that the Bare Nuclei column influences the class column by 82%, as can be seen in the heatmap in Figure 2. This is the column which affects the class the most, so removing it because it has missing values may not be the best solution: it would be more beneficial to keep this column and use another method to handle the missing values. The method of removing rows containing missing values had an accuracy between 94% and 99% depending on the model used; Logistic Regression and Random Forest were again best, both with an accuracy of 99%.

The table in Figure 3 is the output from the classification report run on each method and model:

Method   Model                 Class  Precision  Recall  f1-score
Median   Logistic Regression   0      0.96       0.96    0.96
Median   Logistic Regression   1      0.93       0.93    0.93
Median   Decision Tree         0      0.95       0.95    0.95
Median   Decision Tree         1      0.90       0.90    0.90
Median   Random Forest         0      0.96       0.96    0.96
Median   Random Forest         1      0.93       0.93    0.93
Column   Logistic Regression   0      0.96       0.96    0.96
Column   Logistic Regression   1      0.93       0.93    0.93
Column   Decision Tree         0      0.95       0.95    0.95
Column   Decision Tree         1      0.90       0.90    0.90
Column   Random Forest         0      0.96       0.96    0.96
Column   Random Forest         1      0.93       0.93    0.93
Row      Logistic Regression   0      0.98       1.00    0.99
Row      Logistic Regression   1      1.00       0.97    0.98
Row      Decision Tree         0      0.94       0.97    0.96
Row      Decision Tree         1      0.95       0.89    0.92
Row      Random Forest         0      0.99       0.99    0.99
Row      Random Forest         1      0.98       0.98    0.98

From this table we can see that the combination with the highest precision was the row method with the Logistic Regression model, meaning it is the best method and model in terms of not labelling something positive when it is negative. The row method with Logistic Regression also has the highest recall score, meaning it is best at finding all of the positive instances. The row method has the highest f1-score, achieved by both Random Forest and Logistic Regression, meaning these models were the best in terms of the weighted mean of precision and recall.

Conclusion

From the analysis of the three methods of handling missing values, it is clear that the best method was removing the rows which contained missing data: it scored the best in precision, recall and f1-score. Although removing the rows containing missing values was the best method for the dataset we used, it might not always be, as it means losing some data from the dataset. This could be unsuitable for datasets with missing data scattered throughout, where deleting all rows containing missing data could mean removing a lot of rows. The best model to use with this dataset was Logistic Regression, as it scored best in precision and recall; Logistic Regression and Random Forest scored the same in f1-score. Although in this case all three methods would be valid ways to handle missing data, they would not be suitable for all datasets. The dataset I used had only one column containing missing data, so handling the missing data by removing that column meant losing only one column; for datasets with multiple columns containing missing data this method would not be viable, as it would mean losing information which could have a big influence on the dataset.

Acknowledgments

Special thanks to Carlos Moreno-Garcia for supervising this project.

References
1. Breast cancer statistics | Cancer Research UK. (n.d.). Retrieved April 25, 2020, from https://www.cancerresearchuk.org/health-professional/cancer-statistics/statistics-by-cancer-type/breast-cancer#heading-Two
2. Understanding the Classification report through sklearn | Muthukrishnan. (n.d.). Retrieved April 25, 2020, from https://muthu.co/understanding-the-classification-report-in-sklearn/

101


STUDENT BIOGRAPHY

Nathan Falconer Course: BSc (Hons) Computing Application Software Development Simulating a Fishing Business as a Means to Educate the Importance of Moderation Video games have provided a medium for showcasing theories and problem solving that is able to reach younger audiences more easily than traditional studying. This presents an opportunity to teach about moderation as well as business management. With that in mind, the premise of the application is based around managing a resource that could be abused yet could also be infinite in its usage. This project builds on this train of thought to simulate the lifecycle of a hypothetical species of fish living in a body of water. The goal of the 'player' in this application is to maintain their ecosystem while also making a profit.

102


Simulating A Fishing Business

As a means to educate the importance of moderation

Nathan Falconer, Ines Arana

Introduction

Video games have provided a medium for showcasing theories and problem solving that is able to reach younger audiences more easily than traditional studying. This presents an opportunity to teach about moderation as well as business management. With that in mind, the premise of the application is based around managing a resource that could be abused yet could also be infinite in its usage. This project builds on this train of thought to simulate the lifecycle of a hypothetical species of fish living in a body of water. The goal of the 'player' in this application is to maintain their ecosystem while also making a profit.

Project Aim

The aim of the project is to develop an application/game capable of simulating a fishing business, where the player runs said business by fishing a body of water which, if mismanaged, results in the player losing the game.

Development

The development of this project was done in Cocos Creator and Visual Studio Code. The UI and interconnectivity were assembled in Cocos Creator, providing quick and easy access to many features that would have been more difficult to assemble otherwise. The coding for the project was done in JavaScript, as Cocos Creator uses JavaScript or TypeScript for its functionality. All of the image assets were drawn using GIMP. Originally the coding of the project was to be done in Python and Cocos2d; this was changed to Cocos Creator due to more up-to-date tutorials being available for the latter. The change in platform occurred before any actual coding had taken place, so no time was lost in the transition. The dataset used to predefine the data is based on hypothetical values.

Implemented Features

The first figure shows the main UI for the player; the game map features 17 zones with which the player can interact. The player is able to select the level of fishing they would like to do within each zone. Currently only two levels, moderate and heavy, are in place, with the eventual goal of a slider to choose a percentage between light and heavy fishing for the current zone.

The main piece of the application is the data stored about each age of fish. This is the cornerstone of the application, as nearly all of the calculations draw from it; each of these values can be altered to fine-tune the application's logic as needed. These figures are used as reference boundaries within the application, meaning that changing them in the node where they currently exist updates the entire application.

The spawn fish function uses a formula to calculate the number of fish being spawned into the system: it gathers each zone's fish values by age and calculates its spawning mass as the number of fish of that age multiplied by their weight and maturity, which is read from the dataset above. The natural fish deaths function uses similar logic, multiplying the current number of fish by the predefined death rate read from the dataset and also subtracting a flat death rate, to update each zone's current number of live fish. This method simulates effects such as cannibalism and other outside effects that the player cannot control.
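The spawning and natural-death rules described above can be sketched as follows. This is an illustrative Python sketch with hypothetical per-age rates; the real project implements this logic in JavaScript inside Cocos Creator:

```python
# Per-age parameters (hypothetical values): weight, maturity, death_rate.
AGE_DATA = {
    1: {"weight": 0.5, "maturity": 0.0, "death_rate": 0.25},
    2: {"weight": 1.5, "maturity": 0.5, "death_rate": 0.5},
    3: {"weight": 3.0, "maturity": 1.0, "death_rate": 0.125},
}
FLAT_DEATHS = 2  # flat per-age loss simulating cannibalism and similar effects

def spawning_mass(zone_fish):
    """Total spawning mass of a zone: count * weight * maturity per age group."""
    return sum(count * AGE_DATA[age]["weight"] * AGE_DATA[age]["maturity"]
               for age, count in zone_fish.items())

def apply_natural_deaths(zone_fish):
    """Reduce each age group by its death rate, then subtract the flat death count."""
    return {age: max(0, int(count * (1 - AGE_DATA[age]["death_rate"])) - FLAT_DEATHS)
            for age, count in zone_fish.items()}

zone = {1: 100, 2: 40, 3: 10}  # fish counts per age in one zone
print(spawning_mass(zone))         # 100*0.5*0.0 + 40*1.5*0.5 + 10*3.0*1.0 = 60.0
print(apply_natural_deaths(zone))  # {1: 73, 2: 18, 3: 6}
```

Because all of the calculations draw from `AGE_DATA`, tuning game balance is a matter of editing that one table, which mirrors the single-node design described above.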

Conclusion

While the project itself is not finished, it has laid the groundwork for future development and has shown that the concept is feasible. The project in its current state is able to: •  Spawn a number of new fish based on the existing number of fish, in accordance with their weight and maturity level. •  Let fish die of old age at the end of a year's cycle based on values defined in a dataset, which can be altered to adjust game balance, while younger fish get older to replace the ones that died. The distribute fish method works in theory, but without the full working application it cannot be fully tested yet. Too much of the time spent on this project went into learning the specifics of the development platform; combined with a lack of useful online help, much of the early stages were spent finding the right resources for the problem being addressed. Some fine-tuning of the predefined dataset would also be required, as the flat death rate applies very heavily to some age groups and barely touches others; a large amount of user testing would be required to make the data more reasonable.

Acknowledgments I would like to thank my supervisor Ines Arana for assisting me in this project and for helping guide areas of focus to make more efficient use of development time.

References
Mitchell, A., & Savill-Smith, C. (2004). The Use of Computer and Video Games for Learning.
Plass, J. L., Homer, B. D., & Kinzer, C. K. (2015). Foundations of Game-Based Learning.
Squire, K. (2013). Video Games and Learning: Teaching and Participatory Culture in the Digital Age. Rebecca Pleasant.

103


STUDENT BIOGRAPHY

Connor Forsyth Course: BSc (Hons) Computing Application Software Development Test-driven Teaching with Advanced Web Technologies When teaching any form of programming to a large class, it can be very difficult and time consuming to help every single individual with any issues that they may have. Learning to program for the first time can be a daunting experience, with studies in the past highlighting the programming fundamentals students tend to struggle with the most (Lahtinen, Ala-Mutka and Jarvinen, 2005; Mhashi and Alakeel, 2013).

104


105


STUDENT BIOGRAPHY

Kevin Kelbie Course: BSc (Hons) Computing Application Software Development An implementation of the Statechain Protocol with applications to Bitcoin The Statechain Protocol is a second-layer technology that runs on top of cryptocurrencies and provides a novel way of sending coins in an off-chain manner thereby improving privacy and scalability (Somsen, 2019).

106


An implementation of the Statechain Protocol with applications to Bitcoin

Sending keys; not coins

Introduction

The Statechain Protocol is a second-layer technology that runs on top of cryptocurrencies and provides a novel way of sending coins in an off-chain manner, thereby improving privacy and scalability (Somsen, 2019).

Project Aim

The project aimed to implement the Statechain Protocol on the Bitcoin Network. We planned to implement a Statechain server which is introspectable using an easy-to-use web interface, as well as expose an API for clients to verify that the Statechain server is honest.

Motivation

The motivation for implementing Statechains is that broadcasting transactions on the Bitcoin network can often be quite expensive, especially in times of high-density throughput when the network is congested, since Bitcoin can only handle between 3.3 and 7 transactions per second (Croman et al., 2016). Statechains would help reduce this by allowing individuals to send update transactions between each other off-chain, thereby reducing on-chain transactions.

Methods

The approach was to leverage as much of the existing cryptocurrency infrastructure as possible to avoid reinventing the wheel. The server was partially implemented in JavaScript because that is what we were more familiar with, but we decided to put the rest of our energy into implementing it in Rust because of the performance gains to be had there. After careful consideration, we decided to use PostgreSQL, a relational database, over RocksDB, a key-value store, because we found the flexibility of PostgreSQL more useful than any performance benefits RocksDB would bring. Rather than using HTTP for communication across the board, we used TCP connections for peer-to-peer communication between the servers to reduce the amount of bandwidth required, because we avoid sending HTTP headers and can maintain a connection throughout the request. One of the HTTP endpoints we used was for retrieving data out of the Statechain explorer, which was done over a GraphQL endpoint; this reduces the bandwidth required by allowing the user to specify, in a declarative manner, exactly what data they need in the request.

Figures and Results

[Figure 1: a simplified diagram of how the client interacts with the Statechain server, showing the INIT and TRANSFER messages with the owner public keys, owner signatures, SHA256 pre-images, the server public key, and the SIGN and VERIFY steps.]

Figure 1 shows how the client interacts with the Statechain server. It shows the two main functions the Statechain server supports; the server is not limited to these, but they are the functions required for transferring UTXOs from one user public key to another. Blind signatures are omitted from this diagram because we did not find an adequate solution for them.

Conclusion

We were able to implement the Statechain server, but due to the limitations of the current cryptocurrency infrastructure it was not possible to implement the entire protocol. If and when the Eltoo proposal is implemented on Bitcoin, we will be able to implement the full Statechain protocol, but until that happens there is no way to force the Bitcoin network to accept the updated Statechain transactions (Decker, 2018). Implementing blind Schnorr signatures proved to be more difficult than initially anticipated because we could not find any libraries that implemented them, due to the rogue-key attack; fortunately, the core of the project was not reliant on this. To be a viable technology in the future, we would have to implement a way of transferring smaller denominations of the currency, because Statechain coins cannot be divided into smaller amounts of change. The project was more complicated than initially anticipated, making it difficult to deliver on every single requirement we set out, so we put most of the time into making sure the server was robust rather than adding lots of non-critical features.

Acknowledgments I am grateful for all the help that my supervisor, Mark Zarb offered to me over the course of the project. Ruben Somsen, the author of the Statechain white paper answered a great many of my questions privately which was crucial to my understanding of the protocol.

[Figure 2: client-to-server communication over HTTP and server-to-server communication over TCP.]

Figure 2 shows how we implemented the peer-to-peer protocol: HTTP for client-to-server communication and TCP for server-to-server communication. When the client makes requests to the server, multiple round trips are not required, hence the choice of HTTP. TCP is used for server-to-server communication because four round trips are required to sign the signatures, due to the nature of how MuSig works (Decker, 2018).

References

Decker, C., 2018. eltoo: A Simplified Update Mechanism for Lightning and Off-Chain Contracts. Blockstream. URL https://blockstream.com/2018/04/30/en-eltoo-next-lightning/ (accessed 10.9.19).
Somsen, R., 2019. Statechains: Off-chain Transfer of UTXO Ownership.
Croman, K., Decker, C., Eyal, I., Gencer, A.E., Juels, A., Kosba, A., Miller, A., Saxena, P., Shi, E., Gün Sirer, E., Song, D., Wattenhofer, R., 2016. On Scaling Decentralized Blockchains (A Position Paper), in: Clark, J., Meiklejohn, S., Ryan, P.Y.A., Wallach, D., Brenner, M., Rohloff, K. (Eds.), Financial Cryptography and Data Security. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 106–125. https://doi.org/10.1007/978-3-662-53357-4_8

statechain.info

107


STUDENT BIOGRAPHY

Sebastian Kowalczyk Course: BSc (Hons) Computing Application Software Development Web-based pain diary application for people with chronic pain According to The British Pain Society, more than two fifths of the UK population suffer from chronic pain: around 28 million adults live with pain that lasts more than three months. Unfortunately, this condition affects not only the patient but also those around them (family and friends), causing a deterioration in the patient's quality of life. Often patients are forced to give up physical activity, hobbies, and in some cases even work. Taking the above into account, I decided to create an online pain diary that will help people suffering from chronic pain by enabling them to save and view relevant data whenever necessary.

108


Web-based pain diary application for people with chronic pain Sebastian Kowalczyk & Ines Arana

Introduction

Figures and Results

According to The British Pain Society, more than two fifths of the UK population suffer from chronic pain: around 28 million adults live with pain that lasts more than three months. Unfortunately, this condition affects not only the patient but also those around them (family and friends), causing a deterioration in the patient's quality of life. Often patients are forced to give up physical activity, hobbies, and in some cases even work. Taking the above into account, I decided to create an online pain diary that will help people suffering from chronic pain by enabling them to save and view relevant data whenever necessary.

Project Aim The aim of this project is to develop a web-based pain diary that will allow individuals to keep track of many longterm pains known as chronic pain simultaneously. It will be available to at least 95% of devices capable of running a web browser.

Methods

The front end of the application was developed using Bootstrap 4 and custom-built solutions using HTML5, CSS3 and JavaScript ES6. The backend is powered by a NodeJS server, which in combination with various frameworks and packages such as bcrypt, MongoDB via Mongoose, PassportJS, Nodemailer, express-validator and others creates a fully functional system that is able to recognise different users, provide relevant data and send e-mails, all in a secure way.

The combination of technologies, frameworks and packages mentioned earlier allowed me to create a secure and fully functioning system that has a lot to offer. The application includes a login system that uses sessions and cookies, as well as email verification during critical operations such as registration or password recovery. The main functionality of the application centres on entries, which can be created, viewed, edited or deleted. These four basic operations, combined with other functions, provide a system that many people can use to track their chronic pain. In addition, the system includes two features that are useful when retrieving information: one allows users to search within a given period of time, and the other allows users to find all entries that contain a given keyword or part of it. All this functionality is packaged in a simple, minimalist user interface that is consistent and constantly keeps the user informed about what is currently happening.
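The two search features can be sketched as small pure functions. This is an illustrative Python sketch over made-up in-memory entries; the real application implements these queries in NodeJS against MongoDB:

```python
from datetime import date

# Hypothetical in-memory entries; the real application stores these in MongoDB.
entries = [
    {"date": date(2020, 1, 5),  "text": "Dull ache in lower back"},
    {"date": date(2020, 2, 14), "text": "Sharp knee pain after walking"},
    {"date": date(2020, 3, 2),  "text": "Back pain much improved"},
]

def search_by_period(entries, start, end):
    """Return entries whose date falls within the inclusive range [start, end]."""
    return [e for e in entries if start <= e["date"] <= end]

def search_by_keyword(entries, keyword):
    """Return entries containing the keyword (or part of a word), case-insensitively."""
    needle = keyword.lower()
    return [e for e in entries if needle in e["text"].lower()]

print(len(search_by_period(entries, date(2020, 1, 1), date(2020, 2, 28))))  # 2
print([e["date"] for e in search_by_keyword(entries, "back")])  # the first and third entries match
```

Substring matching on the lower-cased text is what makes "part of a keyword" searches work, as described above.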

Conclusion

This project was a challenge for me because I had many ideas about what I wanted to implement and a rough idea of how I was going to implement my ideas. Due to the nature of the project I had to introduce a 'learn on the go' approach, which helped me overcome many problems. There is still room for improvement, for example data validation is not fully supported and no additional features have been added. Future work will include: refactoring the code, implementing additional features such as a more advanced search system (filters), additional graphs that will provide more explanation of what is currently displayed, and an improved graphical user interface.

Acknowledgments I would like to thank my supervisor Ines Arana, whose help, guidance and commitment was extremely important to me throughout this project. Her enthusiasm, her different point of view, her interest in the project she showed, proved invaluable. I would like everyone to have a supervisor like mine who has always listened, understood and showed me the right direction.

References
Caniuse.com. 2020. "Can I Use" Usage Table. [online] Available at: <https://caniuse.com/usage-table> [Accessed 28 April 2020].
Britishpainsociety.org. 2016. The Silent Epidemic - Chronic Pain In The UK | News | British Pain Society. [online] Available at: <https://www.britishpainsociety.org/mediacentre/news/the-silent-epidemic-chronic-pain-in-the-uk/> [Accessed 28 April 2020].

Computing - Application Software Development 109


STUDENT BIOGRAPHY

Leon Law Course: BSc (Hons) Computing Application Software Development Internet of Things Chatbot I have used Google's Dialogflow (Dialogflow, 2019) to create a chatbot that can interact with and control IoT devices. I have connected my chatbot to a smart bulb and also to a heat sensor attached to a Raspberry Pi. The chatbot is able to turn the smart bulb on and off and adjust its brightness. It can also read the temperature and humidity from the heat sensor, along with the time of the reading and which sensor it is. The chatbot could be configured to work with a wide variety of IoT devices and provides a platform for users to interact with all their IoT devices in one place.

110


111


STUDENT BIOGRAPHY

Thomas Lowe Course: BSc (Hons) Computing Application Software Development Stock Portfolio Optimization Tool Using Artificial Intelligence This project focuses on the highly competitive stock market and tries to find a way to improve upon already existing methods of portfolio optimisation. In particular it focuses on the buying and selling of "shares". These shares are initially sold at a set price when they are released onto the market. However, once they are sold again on a secondary market, they are not sold at a set value and the prices fluctuate for a variety of reasons (Chen, Roll & Ross, 2006). Being able to predict these fluctuations could increase profits or minimise losses.

112


Stock Portfolio Optimization Tool Using Artificial Intelligence

Can AI be used to develop an ‘edge’ in stock trading? Thomas A. Lowe

Introduction

Figures and Results

This project focuses on the highly competitive stock market and tries to find a way to improve upon already existing methods of portfolio optimisation. In particular it focuses on the buying and selling of “shares”. These shares are initially sold at a set price when they are released onto the market. However, once they are sold again on a secondary market, they are not sold at a set value and the prices fluctuate depending on a variety of reasons (Chen, Roll, & Ross, 2006). Being able to predict these fluctuations could increase profits or minimise losses.

With this project, the results can be difficult to interpret due to the technical nature of the topic. The results are achieved by testing historic stock data and their price movements; the tool can be used to make recommendations but should not be relied upon for guaranteed results.

Project Aim

The aim of this project is to research the current stock market and find out various methods for gaining an edge while trading. This research is then used to derive a method for solving the problem of stock portfolio optimisation, and to implement artificial intelligence to achieve this.

Figure 3b. Comparing the actual daily change of the stock price against the values generated with the kNN model

Figure 1b. Stocks from 1a prices over time

Figure 1c. Stocks from 1a volatility over time

The graph in Figure 1b shows four separate stock tickers and how their stock prices change each day, whereas Figure 1c shows the volatility of each individual stock. This shows where there are spikes in the daily returns, either a large increase or a large decrease. These spikes represent how volatile a market is, and being able to automate or predict these spikes could result in greater returns for investors.

While the results for this project are not as definitive as in other applications of machine learning, this was expected: in this specific field it is acceptable for accurate predictions to occur less often. Given additional time and resources this could easily be developed further by adding more models to allow for more informed investments.

Acknowledgments

Methods

Figure 2a. The daily close price of the Microsoft (MSFT) stock over time

Figure 1a. Efficient Frontier based off 4 stocks

Various methods are used in this project. Markowitz mean-variance analysis is used to generate a portfolio (Figure 1a): "One of the main advantages of the mean–variance criterion, is that it has a simple and clear interpretation in terms of individual portfolio choice and utility optimization." (Markowitz, 1987). The program was written in Python due to the extensive number of packages/libraries available; using these reduces the time taken to build the project, as they do not have to be written from scratch. The stocks for this demonstration were gathered using YahooFinancials over the period 01/01/2010 to 01/01/2020. LSTM and kNN techniques were both applied to the data, and the results were compared and contrasted.
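The mean-variance step can be sketched by sampling random portfolios and computing each one's expected return and volatility; the efficient frontier in Figure 1a is the upper-left boundary of the resulting cloud. This is an illustrative Python sketch with synthetic returns, not the project's code, which fetched real prices via YahooFinancials:

```python
import numpy as np

# Hypothetical daily returns for 4 stocks over 250 trading days.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(250, 4))

mean_returns = returns.mean(axis=0)        # expected daily return per stock
cov_matrix = np.cov(returns, rowvar=False) # 4x4 covariance of daily returns

def portfolio_stats(weights):
    """Annualised expected return and volatility for a given weight vector."""
    ret = float(weights @ mean_returns) * 252
    vol = float(np.sqrt(weights @ cov_matrix @ weights)) * np.sqrt(252)
    return ret, vol

# Sample many random portfolios whose weights sum to 1; plotting the
# (volatility, return) pairs traces out the cloud bounded by the frontier.
weights = rng.dirichlet(np.ones(4), size=5000)
stats = np.array([portfolio_stats(w) for w in weights])
best = stats[:, 0].argmax()
print("highest-return portfolio:", weights[best].round(3), stats[best].round(4))
```

In the project itself a library-based optimiser would replace the brute-force sampling, but the return/volatility trade-off it searches is the same.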

Conclusion

I would like to give a big thanks to my personal supervisor/tutor Dr. David Lonie, as well as Dr. John Isaacs, for all the help and support they gave me during my studies. I would not have been able to finish university had it not been for their support and guidance. I would also like to thank my parents for supporting me throughout my studies, and for giving me a place to live so I could focus my energy on my studies instead of worrying about bills and other expenses.

References

Figure 2b. Plot of the actual values of MSFT compared to the predictions made by the LSTM model

Figure 3a. Plot of the actual values of MSFT compared to the predictions made by the kNN model

The graph in Figure 3a is the result of running the kNN model on the MSFT (Microsoft) stock. In order to evaluate the success of this model, the achieved results were plotted alongside the actual values of the stock over time. The accuracy was calculated using the accuracy_score function from the sklearn package, which returns the fraction of correctly classified samples. The result came out at 0.55 on the test data (with this metric, a perfect model would score 1), which would not be considered a good result for most models; but due to the truly volatile nature of stock prices, this is a viable result, where any improvement could be beneficial to a trader. At the end of this model the Sharpe ratio is calculated (the Sharpe ratio is a commonly used measure of return/risk performance (Spurgin, 2001)). For this particular stock, over this testing period, the Sharpe ratio came out at -7.06. This negative result means that the expected return is less than the risk-free return.

Chen, N.-F., Roll, R., & Ross, S. A. (2006). Economic forces and the stock market. Journal of Empirical Finance, 13(2), 129–144. https://doi.org/10.1016/j.jempfin.2005.09.001
Markowitz, H. M. (1987). Mean-Variance Analysis in Portfolio Choice and Capital Markets.
Spurgin, R. B. (2001). How to Game Your Sharpe Ratio. The Journal of Alternative Investments, 4(3), 38–46. https://doi.org/10.3905/jai.2001.319019

More Information Email: t.a.lowe@outlook.com

Application Software Development

113


STUDENT BIOGRAPHY

Aoife McNeill Course: BSc (Hons) Computing Application Software Development Development of a software application designed to help people with food allergens/food intolerances Within the EU, food allergies and intolerances make up a staggering 41% of disability-adjusted life years among all diagnosed diseases (Spassova et al., 2015). In the UK alone, it is estimated that 2 million people have some form of food allergy (Nunes et al., 2018). This is not a small health problem; however, food product information varies between countries/regions (such as the US and EU), which is problematic for anyone with a food sensitivity and can take up a large chunk of time while out buying groceries, and misprinted labels, small hard-to-read text, or the lack thereof can lead to fatalities (Grifantini, 2016).

114


Development of a software application designed to help people with food allergens/food intolerances

Aoife McNeill & Dr. Mark Zarb

Introduction Within the EU, food allergies and intolerances make up a staggering 41% of disability-adjusted life years among all diagnosed diseases (Spassova et al., 2015). In the UK alone, an estimated 2 million people have some form of food allergy (Nunes et al., 2018). This is not a small health problem; however, food product information varies between countries and regions (such as the US and EU), which is problematic for anyone with a food sensitivity and can take up a large chunk of time while out buying groceries, and a misprinted label, small hard-to-read text, or the lack of a label altogether can lead to fatalities (Grifantini, 2016).

Project Aim

To create a functional web-based smartphone application that helps users to have confidence when they’re out doing their food shopping by making the information readily available and readable.

Methods After testing web-based mobile development frameworks such as Apache Cordova and jQuery, I decided to use React Native for my project based on its current popularity. I had not previously touched React, let alone React Native, which is where the real challenge lay. Using React Native, React, Redux, JSON, the Android smartphone platform and npm (Node) to connect the application and the smartphone, the application scans the product barcode, sends a request with the barcode details to the online database, and returns the result to the application, where it is compared against the user's inputted allergens or intolerances; the user is then alerted on whether or not they can have that product.
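The comparison step can be sketched as follows. This is a minimal Python sketch of the logic rather than the actual React Native code; the response shape and field name (`allergens_tags`) mirror the Open Food Facts API but are hard-coded here as an assumption, so no network call or barcode scan is involved:

```python
# Hypothetical offline sample shaped like an Open Food Facts response
# (a real lookup would fetch the product JSON for the scanned barcode
# from https://world.openfoodfacts.org/).
sample_response = {
    "product": {
        "product_name": "Tomato Ketchup",
        "allergens_tags": ["en:celery", "en:mustard"],
    }
}

def check_product(response, user_allergens):
    """Compare the product's allergen tags against the user's stored list.
    Substring matching means 'mustard' would also flag 'mustard powder'."""
    tags = response["product"]["allergens_tags"]
    hits = [a for a in user_allergens
            if any(a.lower() in tag.lower() for tag in tags)]
    if hits:
        return f"NOT SAFE: contains {', '.join(hits)}"
    return "Safe to eat: no listed allergens matched"

print(check_product(sample_response, ["mustard", "peanut"]))
# NOT SAFE: contains mustard
```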

Figures and Results

The diagram below shows the process that occurs behind the app to show how the results are achieved.

User adds their allergy or intolerance to the list, which is stored in an array for later use

User scans food product barcode

Barcode information is sent to the online database

Conclusion This project has been successful in comparing product information against a user's allergens and presenting clearly whether or not the user can have the product in question. However, it is quite limited at the moment: since Open Food Facts is a community effort, not all products have been added to the database since its creation in 2012.

Future Work

Once the product related to the barcode is found, it is returned as a JSON string. This is then compared against the list mentioned earlier to see if there is a match

The JSON string is then displayed with the results on whether or not it is safe to eat via an alert

The figures below show what happens when a user-inputted allergen is found within the product (figure 1), showcasing that it is not safe to eat, and what happens when the allergen is not found (figure 2), showcasing that it is safe to eat. Since the database was originally created in French, not all products have been translated to English (figure 3).

Going forward, this application could be extended to translate between languages, add products that aren't in the database, and be less strict in its comparison (e.g. "onion" matching "onion powder").

Acknowledgments Special thanks to Dr. Mark Zarb for supervising this project and to Open Food Facts (https:// world.openfoodfacts.org/) for providing free use of their database.

References
Spassova, L. et al. (2015). What’s in My Dish? – Extracting Food Information Through Mobile Crowdsourcing.
Nunes, F. et al. (2018). Exploring Challenges and Opportunities to Support People with Food Allergies to Find Safe Places for Eating Out.
Grifantini, K. (2016). Knowing What You Eat: Researchers Are Looking for Ways to Help People Cope with Food Allergies.

Figure 1, Heinz Tomato Ketchup, match

Figure 2, Doritos Tangy Cheese, no match

Figure 3, Tilda Chili & Lime Basmati Rice, no translation

Gigandet, S. (2012). Open Food Facts. [online] Openfoodfacts.org. Available at: https://world.openfoodfacts.org/.

115


STUDENT BIOGRAPHY

Patrick McPherson Course: BSc (Hons) Computing Application Software Development Transferring parameters and files through PythonSSL from one machine to another for use with a program. Not everyone is a software developer familiar with Python; some people do not even know it exists, so it cannot be assumed they have it installed on their PCs. If a company uses open-source software that requires Python and OpenCV, and that software is just a set of scripts rather than an executable, the average person will not know how to set it up. My project addresses this by writing a program that can send variables, or at least a file, to another machine such as a server that has the required libraries and/or version of Python, run what the user would like, and return feedback in some way. For example, a company that analyses drawings and needs to detect shapes could upload a file to the server, choose the code to run, and get feedback depending on the chosen code.

116


Transferring parameters and files through PythonSSL from one machine to another for use with a program. Patrick McPherson & Carlos Moreno-Garcia

Introduction Not everyone is a software developer familiar with Python; some people do not even know it exists, so it cannot be assumed they have it installed on their PCs. If a company uses open-source software that requires Python and OpenCV, and that software is just a set of scripts rather than an executable, the average person will not know how to set it up. My project addresses this by writing a program that can send variables, or at least a file, to another machine such as a server that has the required libraries and/or version of Python, run what the user would like, and return feedback in some way. For example, a company that analyses drawings and needs to detect shapes could upload a file to the server, choose the code to run, and get feedback depending on the chosen code.

Figures and Results

In conclusion, the project was not a success overall due to my own performance, but I am happy that I got to a point where a user could interact with the system from any device, enter a string, have a separate Python script run with that string as input, and see the output back on the website.


The site has user accounts, as user access levels were implemented for security so that only authorised persons could run their own code on the server. The base access level is level one and just allows users to select which function they would like to run. The page above is what a logged-in user sees when using the app; it is where they enter the data to be, in this case, printed, or to have the length of the string checked.

Project Aim

Acknowledgments

The project is to create a file transfer program in Python to send files from one PC to another that has an application that will use these files. The test data will be engineering drawings, which will be transferred from one PC to another through the system or application I will create.

Methods The project evolved into a web-based app using Flask. This was a simple way to provide a cross-platform GUI that a user could use from a phone, a MacBook or a desktop. The app is hosted on a server along with the libraries a user may need, such as OpenCV in the example I used while creating the project.
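The core server-side step, taking a user-supplied string and running a separate Python script with it as input, can be sketched with the standard library. The function name and the two example scripts here are illustrative, not the project's own code:

```python
import subprocess
import sys

def run_user_function(user_input: str, function: str) -> str:
    """Run a separate Python script with the user's string as input and
    capture its output, as the Flask app does server-side."""
    # Illustrative stand-ins for the selectable server-side functions:
    # echo the string, or report its length.
    scripts = {
        "print":  "import sys; print(sys.argv[1])",
        "length": "import sys; print(len(sys.argv[1]))",
    }
    result = subprocess.run(
        [sys.executable, "-c", scripts[function], user_input],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(run_user_function("hello", "length"))  # 5
```

In the real app the returned string would be rendered back onto the Flask page for the user to see.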

Conclusion

The login page (above) and registration page (below)

I would like to thank my supervisor Carlos for assisting me the best he could in spite of my lack of effort towards the end and lack of focus on the project. I am aware that the outcome of the project is a result of my own inaction but if it wasn’t for Carlos I wouldn’t have been able to get anywhere with the project at all. Through his guidance I was able to realise how I could accomplish the goals of the project.

References File Transfer Protocol (FTP): Why This Old Protocol Still Matters - WhoIsHostingThis.com (no date). Available at: https://www.whoishostingthis.com/resources/ftp/

Computing: Application Software Development 117


STUDENT BIOGRAPHY

Brice Meunier Course: BSc (Hons) Computing Application Software Development Legal Representative Monitoring Android Devices More than 5 billion people around the world use mobile phones, and almost 90% of the Earth is under mobile coverage (M. Kumar & Rathi, 2013). The need to monitor a phone has grown for parents and employers over the years. Even though there are more than 200 different Android permissions, it is not possible to do everything on a smartphone. For example, it is not possible to add an application to the list of system applications, which would make it invisible to the user or impossible to remove (Android Developers, 2018a). It is also not possible to access other third-party applications like WhatsApp or Messenger. To do such things, it would be necessary to modify the Android system.

118


Legal representative monitoring Android devices An Android spyware installed on the child’s smartphone send data to a server. Parents can access a website displaying these data. Student: Brice MEUNIER Supervisor: Dr. David CORSAR

Introduction More than 5 billion people around the world use mobile phones, and almost 90% of the Earth is under mobile coverage (M. Kumar & Rathi, 2013). The need to monitor a phone has grown for parents and employers over the years. Even though there are more than 200 different Android permissions, it is not possible to do everything on a smartphone. For example, it is not possible to add an application to the list of system applications, which would make it invisible to the user or impossible to remove (Android Developers, 2018a). It is also not possible to access other third-party applications like WhatsApp or Messenger. To do such things, it would be necessary to modify the Android system.

Figures and Results

Conclusion

Figure 2 – Steps to use the software on a mobile view

We can see that, first, the user needs to create an account on the website, then copy the private key from the profile page and paste it on the Android app (no screen here). The final screen is an example of what you will get after a few days of location monitoring.

Figure 4 – App Usage page on desktop view

This project is working monitoring software that can handle real-life scenarios. The main limitation is the Android application: without rooting Android, it is not possible to hide an application from the user, which means the child could turn off the monitoring if they found the application. Successes of this project include the ability to retrieve SMS, call logs, position, app usage and contacts from the phone. One failure is that media such as photos or videos cannot be retrieved at the moment.

Project Aim

The aim of this project is to develop a three-part software suite monitoring an Android device. It requires an Android application, an online server and a website. The main objective is to build something as easy as possible to install and to make it open source. People not very comfortable with technology could use a free online server already set up, while more advanced users could run their own server or use another provider.

Future work

This software being open source, it is open to additions and modifications. Photos, videos or recordings could be fetched by the app in the future. A push-notification system could also be implemented using a script on the server, as every notification is stored in a specific table in the database.

Methods

Acknowledgments

I would like to thank my tutor David Corsar and my flatmate Gatien for the help and feedback given throughout the development of this project.

Figure 1 – Representation of the three part of the project

The Android app is developed in Java, using a Service and a ContentObserver to fetch data in the background. The web server is an Amazon EC2 instance running Amazon Linux 2. PHP and MySQL are used to handle the data sent by the app. The website, hosted on the same EC2 instance, is developed in PHP, JavaScript, jQuery and Materialize CSS.

Figure 3 – SMS page with an explanation of how the website works

As you can see, the basic features are available at the moment. Notifications are generated independently on the server side. Both the contact list on the left and the SMS messages on the right are responsive in mobile and tablet views, and each has an independent scroller. As you might guess, this SMS page is largely inspired by the design of Facebook Messenger.

References

1) Android Developers. (2018a). Permissions overview | Android Developers. Retrieved October 30, 2019, from https://tinyurl.com/y5lsszh4 2) Kumar, M., & Rathi, S. (2013). Services in Android can Share Your Personal Information in Background. Retrieved from http://arxiv.org/abs/1308.4486

BSC HONS COMPUTING (APPLICATION SOFTWARE DEV)

119


STUDENT BIOGRAPHY

Seweryn Musial Course: BSc (Hons) Computing Application Software Development Developing an online video conference app for businesses Video conferencing software is an important part of every business, from holding meetings online to meeting potential overseas investors. “96% of business decision makers believe video conferencing remove the distance barriers and improves the productivity between teams” (San Jose, 2013). That was said back in 2013; with technology improving and constantly in demand, people want easy-to-use software that meets their needs. Companies that want to expand their businesses have two options: send their workers to the client’s site, which costs time and money, or hold virtual meetings online.

120


Developing an online video conference app for businesses. Seweryn Musial, Kyle Martin

Introduction

Conclusion

Video conferencing software is an important part of every business, from holding meetings online to meeting potential overseas investors. “96% of business decision makers believe video conferencing remove the distance barriers and improves the productivity between teams” (San Jose, 2013). That was said back in 2013; with technology improving and constantly in demand, people want easy-to-use software that meets their needs. Companies that want to expand their businesses have two options: send their workers to the client's site, which costs time and money, or hold virtual meetings online.

I went into this project with many ideas and features that I wanted to implement, but I quickly realised that creating a video chatting application was not as easy as I had thought. I learned a great deal about building a website from this project, from choosing the correct APIs to making the website look nice. Finding suitable APIs is easy enough, but making them all work with each other is another matter. I am glad I chose this project as my final work because, although I did not manage to complete everything I set out to do, I was able to learn more about the world of website creation.

Project Aim The aim of the project was to develop a free web-based application that will allow businesses to easily conduct real time meetings, that will be available for different users on a range of devices like laptops, mobile phones and tablets.

Methods The application was developed using HTML, JavaScript, CSS and Amazon AWS, a PaaS used to host the website. Node.js was used to create the server, which runs on AWS. CSS and HTML define the website's look, while JavaScript provides its functionality. WebRTC was used alongside Pusher to create the link between the two users so that they can interact with each other: Pusher is an API that handles the virtual room the users join and allows them to interact in real time.

Testing/Results

The testing of the application was done in two ways. The first was a functionality test, which looked at each function that was meant to be implemented and documented whether or not it was successfully implemented.

The second test conducted was a set of test cases. Each test case walks through a specific task, such as connecting to a call, step by step, recording the action name and the input the user must provide, such as typing in numbers or clicking buttons.

In future development of this application, I would like to add the missing functionality I originally planned: a speech-to-text option that would let users take automatic notes; online storage and file transfer between users; a real-time chat messenger for users without a microphone; and the ability for users to share their screens. I would also like the application to support up to 6 users and multiple virtual rooms so that several meetings can be held at once. Another feature would be letting users set their own names and have custom links so that they can invite people directly.

Acknowledgments I would like to give huge thanks to my supervisor Kyle Martin, he was able to give me massive amounts of guidance and ideas in our meetings which were very important for the scope of my project. He was able to help me find missing details when he looked over my code and a huge help in general.

References Video Conferencing Expected to be Preferred Business Communications Tool in 2016 According to New Survey on Global Video Conferencing Trends and Etiquette.

121


STUDENT BIOGRAPHY

Kristof Nagy Course: BSc (Hons) Computing Application Software Development Can computer games teach about climate change effectively? Climate change has received increased attention over the last decade, as warming could reach 2°C. This would cause the sea level to rise by 56 cm, an 80% chance of an ice-free Arctic summer, 37% of heatwaves every 5 years, 4-month droughts, 4 tropical cyclones annually, and more. These are massive global changes that humans either need to adapt to or develop methods to avoid.

122


​Can computer games teach about climate change effectively? Kristof Nagy supervised by Tiffany Young

Introduction

Design, Implementation

Climate change has received increased attention over the last decade, as warming could reach 2°C (Figure 1)1. This would cause the sea level to rise by 56 cm, an 80% chance of an ice-free Arctic summer, 37% of heatwaves every 5 years, 4-month droughts, 4 tropical cyclones annually, and more2. These are massive global changes that humans either need to adapt to or develop methods to avoid.

For teaching purposes, there are several tasks that can be done. These are indicated with interactable objects such as newspapers (see Figure 2).

The player can also change their car from petrol/diesel to an electric car before going home to complete one of the first level's tasks. At home there are dynamic lights: when they are switched off, the player has to navigate a dark room, or they can switch lights off strategically so that there is still an easy way back to the city (Figure 6).

Figure 6 | The player’s home

Future Plans

Figure 2 | Interactable objects for missions

After interacting with one of the objects, the player sees different facts for different objects. Above, electricity and littering can be seen; after closing them, a dialogue bubble appears, explaining the fact further in common-sense terms. See the examples in Figures 3 & 4.

Figure 5 | Second level

Figure 1 | Temperature changes each year, approximately (2009 data)1

Project Aim Making people learn while playing a video game is an effective method, as it enhances learning motivation3. This project uses a video game to teach and inform the player of facts about climate change and possible actions they can take to reduce their own environmental footprint, such as reducing electrical waste.

Figure 3 | Mission starters with quick facts.

Acknowledgments

Method The project uses a role-playing game approach, meaning the player interacts with objects as if they were in the world themselves. This way the player can feel that their actions matter. They can complete tasks, each with an explanation and reasoning for why to do it, while exploring the different levels.


Following the same approach as the first level, the second level looks much the same but with a different environment and, again, two missions. Figure 5 shows the layout of the level, set in a forest, where the player can interact with notes on the trees or lying on the ground. Similar ideas apply to level 3. I also plan to add a survey to draw a conclusion on how successful the project aim is.

Figure 4 | Dialogue bubbles further explain the facts read in the papers.

As in Figure 4, a UI element in the top left helps the player remember which tasks they need to complete. It disappears when no task has been selected yet or when all missions have been completed.

I am very thankful to my supervisor Tiffany Young for guiding me through the entire project; without her the project would not be what it is now. I also appreciate the hard work of my friend Gergő Juhász, who created awesome music for the game.

References

1. MALTE, M. et al., 2020. Greenhouse-gas emission targets for limiting global warming to 2°C.

2. PEARCE, R., 2020. Interactive: The Impacts of Climate Change at 1.5C, 2C and Beyond.

The player can go to the next level by reaching the right side of the map and pressing the interact button, displayed like the “Read” label in Figure 4.

3. SUNG, H. and HWANG, G., 2012. A collaborative game-based learning approach to improving students’ learning performance in science courses.

123


STUDENT BIOGRAPHY

Craig Pirie Course: BSc (Hons) Computing Application Software Development Applying Computer Vision Techniques To Identify Corrosion In Underwater Structures Corrosion is a naturally occurring phenomenon that causes the deterioration of a metal due to exposure to certain environmental factors, which, if left untreated, can become a major safety and cost concern. The National Association of Corrosion Engineers (NACE) conducted a two-year study, ending in 2016, which estimates the annual global cost of corrosion in society at US$2.5 trillion (The Global Cost and Impact of Corrosion, 2020). In the Oil & Gas sector, it is the job of the Inspection Engineer to analyse the integrity of pipelines, valves, infrastructure and more. As most of these domains reside underwater, this usually involves the aid of an ROV or underwater drone fitted with a camera to feed footage of the infrastructure back to the engineer. This footage can then be monitored to analyse the impact of corrosion on the metalwork, so that the engineer can advise the action needed to treat and correct the damage. With Artificial Intelligence gaining ever more trust and popularity in society, there is a push for the inspection process within the energy sector to be assisted by this new technology in order to cut the costs of the inspection process. Automating the underwater inspection process is, however, rather difficult due to the qualities of the subsea world. Capturing images below the surface is a difficult and expensive process requiring specialist equipment, making data a scarce commodity. In addition, light behaves differently underwater, which distorts the quality and makes it difficult to detect objects in images. This hurdle is why it is hypothesised that appropriately correcting the image quality before attempting to automate the inspection process is vital.

124


Applying Computer Vision Techniques To Identify Corrosion In Underwater Structures Craig Pirie (Dr Carlos Moreno-Garcia)

Introduction

Methods

Corrosion is a naturally occurring phenomenon that causes the deterioration of a metal due to exposure to certain environmental factors, which, if left untreated, can become a major safety and cost concern. The National Association of Corrosion Engineers (NACE) conducted a two-year study, ending in 2016, which estimates the annual global cost of corrosion in society at US$2.5 trillion (The Global Cost and Impact of Corrosion, 2020). In the Oil & Gas sector, it is the job of the Inspection Engineer to analyse the integrity of pipelines, valves, infrastructure and more. As most of these domains reside underwater, this usually involves the aid of an ROV or underwater drone fitted with a camera to feed footage of the infrastructure back to the engineer. This footage can then be monitored to analyse the impact of corrosion on the metalwork, so that the engineer can advise the action needed to treat and correct the damage. With Artificial Intelligence gaining ever more trust and popularity in society, there is a push for the inspection process within the energy sector to be assisted by this new technology in order to cut the costs of the inspection process. Automating the underwater inspection process is, however, rather difficult due to the qualities of the subsea world. Capturing images below the surface is a difficult and expensive process requiring specialist equipment, making data a scarce commodity. In addition, light behaves differently underwater, which distorts the quality and makes it difficult to detect objects in images. This hurdle is why it is hypothesised that appropriately correcting the image quality before attempting to automate the inspection process is vital.

Sample image data was gathered using a web scraper and then processed with appropriate labelling tools for the various computer vision methods. Three image pre-processing methods were compared: Retinex, Gray World and Contrast Limited Adaptive Histogram Equalization (CLAHE). Three distinct computer vision methods – image classification, object recognition and instance segmentation – were then compared, built from four different underlying architectures: CNN, Faster RCNN, Mask RCNN and YOLO. The workflow is outlined in the figure below.
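As an illustration of the pre-processing stage, the Gray World correction named above can be sketched in a few lines of numpy. This is a minimal sketch of the standard algorithm, not the project's own implementation:

```python
import numpy as np

def gray_world(image: np.ndarray) -> np.ndarray:
    """Gray World colour correction: scale each channel so its mean
    matches the overall mean, countering the blue cast of underwater
    images."""
    img = image.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means      # per-channel gains
    corrected = img * gain
    return np.clip(corrected, 0, 255).astype(np.uint8)

# A synthetic blue-tinted patch: the blue channel dominates before correction.
tinted = np.zeros((4, 4, 3), np.uint8)
tinted[..., 0], tinted[..., 1], tinted[..., 2] = 60, 80, 180  # R, G, B
balanced = gray_world(tinted)
```

As the results below suggest, such global gain adjustments can over-correct; this sketch reproduces the mechanism, including why a blue-dominated scene gains red after balancing.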

Project Aim The main aim of the project is to analyse and compare state-of-the-art computer vision and image pre-processing techniques in order to provide a system to assist the Inspection Engineer in the corrosion identification process.

Figures & Results

YOLO

CNN

Faster RCNN

Mask RCNN

Above are sample results from each of the trained models on an image from the underwater environment, with varying degrees of success. The CNN fails to recognise the existence of corrosion in the image. YOLO begins to recognise that there is rust present on the prominent pipe in the foreground, but its predictions are quite unstable. Faster RCNN correctly identifies the corrosion in the foreground but ignores that in the background. Mask RCNN successfully acknowledges multiple occurrences of corrosion across the image, although it falsely highlights some parts of the image as rust.

Conclusion

Original

Retinex

Gray World

CLAHE

The above images display the output from each of the chosen image pre-processing techniques compared to the original. Gray World showed some promise in mitigating the effects of the underwater environment, but introduced undesirable effects of its own: it appears to over-compensate for the domination of blue pixels by introducing too many red pixels, giving the image a red cast. Retinex produces the most desirable outcome of the three; with this technique we see a stark reduction in the blue tint and start to see more object definition and detail in the image. CLAHE is shown to have an adverse effect on the image: by smoothing it, it loses definition across the sample. After comparing the three techniques, Retinex showed the most promise and was therefore chosen for the final study.


CNN: 92% | Faster RCNN: 49% | YOLO: 6% | Mask RCNN: 56%

The work done in this project validates the use of image recognition techniques in the corrosion inspection process. Pre-processing using the named techniques was found to be unnecessary and in fact detrimental to corrosion-detection performance. Although the project has taken steps towards proving this concept works underwater, more work still needs to be done with larger underwater datasets to further explore the outcomes.

Acknowledgments I would like to thank my supervisor, Dr Carlos Moreno-Garcia, for his expert advice and mentorship throughout the entire project, and thank you to the Foxshyre Analytics team for their financial support and input into the project.

References 1) Inspectioneering.com. 2020. The Global Cost And Impact Of Corrosion. [online] Available at: <https://inspectioneering.com/news/2016-03-08/5202/nace-study-estimates-global-cost-of-corrosion-at-25-trillion-ann> [Accessed 24 April 2020].

Computing Science: Application Software Development

125


STUDENT BIOGRAPHY

Craig Robertson Course: BSc (Hons) Computing Application Software Development Microsoft Kinect V2 Vision System to Identify Unusual or Extraordinary People in Queues or Crowds Using a depth camera system like that of the Microsoft Kinect v2 can provide a very detailed understanding of the 3D world. This can be very beneficial in many different areas of research, such as 3D digital human modelling (Wijckmans et al., 2013), size estimation of livestock (Marinello et al., 2015) and people detection and tracking (Liciotti et al., 2017), to name just a few. This project will focus on accurately measuring human body dimensions remotely with no human interference. This research can have many uses, such as warning individuals about height and width restrictions, identifying whether there are too many people of large size in a small area, and even calculating group weight for weight-sensitive domains.

126


Microsoft Kinect V2 Vision System to Identify Unusual or Extraordinary People in Queues or Crowds Craig Robertson, Mark Bartlett

Introduction

Using a depth camera system like that of the Microsoft Kinect v2 can provide a very detailed understanding of the 3D world. This can be very beneficial in a lot of different areas of research such as 3D digital human modelling (Wijckmans et al., 2013), Size estimation of Livestock (Marinello et al., 2015) and People Detection and Tracking (Liciotti et al., 2017) to name just a few. This project will focus on accurately measuring human body dimensions remotely with no human interference. This research can have a lot of uses such as warning individuals about height and width restrictions, identifying if there are too many people with large size in a small area and even calculating group weight for weightsensitive domains.

Project Aim This project aims to produce a software system using the Microsoft Kinect v2 Depth Sensor. The system will be able to identify extraordinary or unusual people that walk in front of the sensor. The system should be able to display this information to a user with an intuitive interface.

Methods

Figures and Results

Conclusion

Figure 1 User Interface of Kinect V2 System

The software system created, as seen in Figure 1, can detect an individual walking in front of the sensor and calculate their height and width in real time. The system is capable of evaluating up to 6 people at once; for each individual, their unique body-index ID and calculated body measurements are displayed above their head after every successful new frame of data. The segmentation method used to generate the height and width calculations is not a perfect solution, as it struggles with clearly identifying where the feet end and with detecting the top of the head due to hair. The results, however, turned out to be sufficiently accurate, with the height estimations having an average inaccuracy of 0.026 metres and the width estimations an average of 0.076 metres.

Person    Real Height (m)   Calc Height (m)   Height Acc. (m)   Real Width (m)   Calc Width (m)   Width Acc. (m)
1         1.95              1.92              0.03              1.87             1.80             0.07
2         1.80              1.78              0.02              1.78             1.62             0.16
3         1.63              1.60              0.03              1.48             1.48             0.00
Average                                       0.0266...                                          0.0766...

Conclusion

The software vision system using the Kinect v2 has been able to accurately identify unusual or extraordinary people and display this information on the interface. The criteria for unusual or extraordinary people are defined using the interface, where the user can specify the maximum and minimum heights and widths that they deem unusual. Due to the limitations of the hardware used for development, calculations could only be performed on three concurrent individuals, but theoretically the system will work for the maximum of six. The software could still be improved in terms of reliability, height and width estimation, and efficiency. Future work would include weight estimation, size estimation of various body parts, and a tracking/counting system that would be able to identify how many people entered and exited a room. This would have benefits for visitor safety and crowd control.

Acknowledgments

I would like to thank my supervisor, Mark Bartlett, whose supervision and guidance throughout the whole year, during weekly meetings and emails, made it possible for me to advance through my degree and create a final honours project to be proud of.

References

Wijckmans, J. et al. (2013) 'Parametric Modeling of the Human Body Using Kinect Measurements'. doi: 10.15221/13.117.
Marinello, F. et al. (2015) Application of Kinect-Sensor for three-dimensional body measurements of cows. Available at: https://www.researchgate.net/publication/288493476_Application_of_Kinect-Sensor_for_three-dimensional_body_measurements_of_cows (Accessed: 2 November 2019).
Liciotti, D. et al. (2017) 'People Detection and Tracking from an RGB-D Camera in Top-View Configuration: Review of Challenges and Applications'.


Methods

One frame from the Kinect v2 Sensor produces 217,008 bytes of depth data that can be utilised. Using this data, the individual can be separated from their environment through segmentation. This enables the system to pinpoint any data point on a person and extract X, Y, Z coordinates. From this, it is possible to calculate the distance between any two points in 3D space. This is the method used to calculate body measurements such as height and width.

Because the Kinect system runs in real time and constantly updates individuals' heights and widths, the results were recorded once the subject was stationary and the values had levelled off, fluctuating within about ±0.05 metres.
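The measurement step described above — segmenting a person from the depth data, extracting (X, Y, Z) coordinates, and computing the distance between two points in 3D space — reduces to the Euclidean distance formula. A minimal sketch in Python (the sample head and feet coordinates are invented for illustration; the project itself used the Kinect SDK, not this code):

```python
import math

def distance_3d(p1, p2):
    """Euclidean distance between two (x, y, z) points, in metres."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Hypothetical segmented points in camera space (metres):
# top of the head and bottom of the feet of one tracked person.
head = (0.10, 0.95, 2.40)
feet = (0.12, -0.97, 2.40)

height = distance_3d(head, feet)
print(round(height, 3))  # → 1.92
```

The same function applies to width: take the leftmost and rightmost segmented points on the body instead of head and feet.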

More Information Email: 1805675@rgu.ac.uk GitHub: https://github.com/1805675CraigRobertson/KinectHonoursProject

BSc. (Hons) Computing (Application Software Development) 127


STUDENT BIOGRAPHY

Timothe Rouze Course: BSc (Hons) Computing Application Software Development Chronic Pain Diary: Help coping with chronic pain conditions with new technologies, an alternative to paper diaries Chronic pain is a major problem in our society. It is one of the main causes of disability in working-age individuals. Chronic pain is also the costliest chronic condition to treat in the US, ahead of heart disease or cancer (Disorbio et al., 2006). Its annual economic cost in the United States is at least $560–635 billion (Gatchel et al., 2014). The goal of this project was to see if it was possible to offer a better way to manage chronic pain by using new technologies. It appeared that there were not many existing qualitative applications for pain management that use actual medical knowledge (Lalloo et al., 2015). This situation implies that people suffering from chronic pain have to use paper diaries to keep track of their pain episodes and to be able to study them with their practitioners. Several studies have been made on potentially better pain management with an electronic diary and on the best way to present such a diary (Duggan et al., 2015; Adams et al., 2017). The second goal was to provide an environment that could give users insights on their pain episodes in order to help them manage this pain better. The project takes inspiration from existing literature and is based on scientific resources for the recording of pain, such as the Brief Pain Inventory (Cleeland and Ryan, 1994). It is believed that a better tool for managing pain could help people to handle this pain better. Also, giving people insight about their pain can be very positive in helping them achieve better management and perhaps a better understanding of how to avoid or reduce this pain.

128


Chronic Pain Diary: Help coping with chronic pain conditions with new technologies, an alternative to paper diaries Timothé Rouzé & Roger McDermott

Introduction

Chronic pain is a major problem in our society. It is one of the main causes of disability in working-age individuals. Chronic pain is also the costliest chronic condition to treat in the US, ahead of heart disease or cancer (Disorbio et al., 2006). Its annual economic cost in the United States is at least $560–635 billion (Gatchel et al., 2014). The goal of this project was to see if it was possible to offer a better way to manage chronic pain by using new technologies. It appeared that there were not many existing qualitative applications for pain management that use actual medical knowledge (Lalloo et al., 2015). This situation implies that people suffering from chronic pain have to use paper diaries to keep track of their pain episodes and to be able to study them with their practitioners. Several studies have been made on potentially better pain management with an electronic diary and on the best way to present such a diary (Duggan et al., 2015; Adams et al., 2017). The second goal was to provide an environment that could give users insights on their pain episodes in order to help them manage this pain better. The project takes inspiration from existing literature and is based on scientific resources for the recording of pain, such as the Brief Pain Inventory (Cleeland and Ryan, 1994). It is believed that a better tool for managing pain could help people to handle this pain better. Also, giving people insight about their pain can be very positive in helping them achieve better management and perhaps a better understanding of how to avoid or reduce this pain.

Project Aim

The aim of the project is to create a mobile application that will allow people subject to chronic pain to save their pain episodes in records. With these records, statistics will be created to give insights on the user's pain. These statistics can also be useful for general practitioners following the users, to help create adapted diagnoses.

Methods

The application was developed using the Android application framework. The questionnaires and designs used for the pain entries were chosen from existing scientific literature in order for the app to fit health care requirements. The slider for the pain level is inspired by the SAFE slider and the SuperVAS (Adams et al., 2017).

To create the graphics based on user data, the AnyChart library has been used, as it allows for fast graphic creation that can be used for visualisation. Graphic visualisation of data is a big part of using the gathered data to help users better understand their condition.

To store the user data in a reliable and secure way, it was decided to use Firebase as an authentication and database provider. Firebase allowed for fast implementation and a strong database. Firebase also offers a range of ready-to-use user management tools, such as automatic emails for password resetting, that can be very useful for such an application.

To allow users to log their pain episodes in the app, a page inspired by the Brief Pain Inventory (BPI) was created, so that the information gathered corresponds to medical standards. At first the BPI was transcribed directly into the app, but it appeared that this was not ideal for the user experience, so it was decided to reduce it in order to keep only the most important information. The rest of the questionnaire has been kept inside the application code, to be used as soon as it is needed.
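As a rough illustration of the kind of statistics the recorded episodes enable, the sketch below summarises a few hypothetical diary entries (the field names and values are invented for illustration, not the app's actual schema, and the real app stores its data in Firebase rather than in memory):

```python
from statistics import mean

# Hypothetical pain-diary entries, loosely modelled on the reduced
# Brief Pain Inventory fields (worst / least / average pain, 0-10 scale).
entries = [
    {"date": "2020-04-01", "worst": 7, "least": 3, "average": 5},
    {"date": "2020-04-02", "worst": 6, "least": 2, "average": 4},
    {"date": "2020-04-03", "worst": 8, "least": 4, "average": 6},
]

def summarise(entries):
    """Simple insights of the kind the app could chart for the user."""
    return {
        "mean_average_pain": mean(e["average"] for e in entries),
        "worst_episode": max(e["worst"] for e in entries),
    }

print(summarise(entries))
```

Aggregates like these are what a charting library such as AnyChart would then plot over time for the user or their practitioner.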

Conclusion

The application still needs some improvement following the user testing phase that just ended. With the feedback from users and some more time, it could become a very powerful tool to manage pain. However, it has shown encouraging results, as users were happy to have a way of managing their pain with their smartphones and generally found what they were expecting. With more development time it would be possible to add several functionalities that would make it really powerful. Finally, future work would include seeking the advice of a health professional to review the general functioning and adapt the app to fit health standards better.

Acknowledgments


For this project, I would like to thank my supervisor Roger McDermott, who helped me a lot through the entire process of this project, giving me insight on the way to go and answering my questions. He also helped me to keep going around the end of the year during the situation we have been in. I would also like to thank all the people who tested the app and helped me make it better in every aspect. Finally, I would like to thank RGU for giving its students access to a lot of facilities that make working on projects a lot easier.

References

ADAMS, P., MURNANE, E.L., ELFENBEIN, M., WETHINGTON, E., GAY, G., 2017. Supporting the SelfManagement of Chronic Pain Conditions with Tailored Momentary SelfAssessments. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI ’17. ACM, New York, NY, USA, 1065–1077. CLEELAND, C.S., RYAN, K.M., 1994. Pain assessment: Global use of the Brief Pain Inventory. Ann. Acad. Med. Singap., 23, 129–138. DISORBIO, J.M., BRUNS, D., BAROLAT, G., 2006. Assessment and Treatment of Chronic Pain, 10 DUGGAN, G.B., KEOGH, E., MOUNTAIN, G.A., MCCULLAGH, P., LEAKE, J., ECCLESTON, C., 2015. Qualitative evaluation of the SMART2 self-management system for people in chronic pain. Disabil. Rehabil. Assist. Technol., 10, 53–60. GATCHEL, R.J., MCGEARY, D.D., MCGEARY, C.A., LIPPE, B., 2014. Interdisciplinary chronic pain management: Past, present, and future. Am. Psychol., 69, 119–130. LALLOO, C., JIBB, L., RIVERA, J., AGARWAL, A., STINSON, J., 2015. “There’s a Pain App for That”: Review of Patienttargeted Smartphone Applications for Pain Management. Clin. J. Pain, 31, 557–563.


School of Computing Science BSc. (Hons) Computing (Application Software Development)


129


STUDENT BIOGRAPHY

Iain Scott Course: BSc (Hons) Computing Application Software Development CRAZY TALKING HEADS - The Personal Chatbot The household of the future may include an artificial companion or personal bot. The application created for this project was based on the premise that familiarity and shared experiences make entities (or other people) more relatable. If people can recognise common characteristics, traits and experiences in other people; why not a bot? A bot with the same cultural reference points and perceived shared experiences would be as relatable as their human counterparts; or would they?

130


CRAZY TALKING HEADS The Personal Chatbot

Student: Iain Scott. Supervisor: Professor Nirmalie Wiratunga.

Introduction

Project Aim

The aim of this project is to develop a chatbot that can interact with humans on a personal level using natural language, and to demonstrate a capacity to invoke emotional responses, even if in a limited way.

A Design for Life

A successful social chatbot must have the ability to accept and support user preferences, personalise responses through data capture, provide emotional support, and offer a natural human experience through verbal and visual communication. By making the bot customisable and 'individualistic', the system should be able to interact in a more profound manner. The application development focused on three main areas:
● The user controls (Control Panel Options).
● Personal responses (through data capture) using natural language and user options.
● A natural human-computer interaction interface (Avatar).
The ambition was to develop and integrate these core areas within a single application or platform that would sufficiently deliver upon the stated aims and objectives of the project and deliver a truly personal chatbot. The chosen development platform was Java NetBeans.

Implementation

Java Methods CrazyTalkingHeads.Class{

Figure: System architecture — the NetBeans Java platform running on Windows OS, with a User Experience Layer (speech-to-text input via the Sphinx 4 speech recognition software, text-to-speech output, and CrazyTalk 8 avatar animation output), a Response Layer, and a Data Layer.

The household of the future may include an artificial companion or personal bot. The application created for this project was based on the premise that familiarity and shared experiences make entities (or other people) more relatable. If people can recognise common characteristics, traits and experiences in other people, why not a bot? A bot with the same cultural reference points and perceived shared experiences would be as relatable as their human counterparts; or would they?

Figure 1: Basic Chatbot Design — response generation produces multiple response candidates; response selection picks the selected response, which drives task execution; response candidates are persisted to files and an SQL database.

In order to implement the project, a chatbot prototype was developed that has the following functional requirements.
● A viable speech recognition capability: a method of detecting and recognising speech inputs and responding with verbal outputs.
● An expressive, realistic avatar representation: an animated avatar capable of expressing emotional responses and communicating through body language.
● The ability to perform basic tasks: these include answering questions based on known information from data storage and running Windows applications.
● A unique customisable persona: a method of customising how the bot interacts with the user through settings and file options.
● A recognisable personality through the ability to personalise responses: a method of stipulating individual responses through setting files and set-up options.
● The ability to store personal data through data persistence: a method of implementing a database that collates personal information through natural language interactions.
● The ability to retrieve personal data for personalised responses: a method to access personal data stored in a database, through natural language interactions, that can be used to form personalised responses.
● The ability to provide emotional support: a method of responding to certain inputs that might be described as 'having applied emotional weight' in an appropriate manner.
● Limited conversational ability: a method of stipulating and randomising relevant responses to inputs that enable a two-way dialog.
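The "stipulating and randomising relevant responses" requirement above can be sketched as keyword matching followed by a random choice among the stipulated replies (a Python illustration of the idea, not the project's Java code; the keyword table and replies are invented):

```python
import random

# Toy response tables; the real bot reads its responses from
# user-editable script files rather than hard-coded lists.
GREETING_INPUTS = {"hello", "hi", "hey"}
GREETING_REPLIES = ["Hello!", "Hi there.", "Good to see you."]

def make_decision(user_input, rng=random):
    """Match keywords in the input, then pick one of several
    stipulated replies at random to enable a two-way dialog."""
    words = set(user_input.lower().split())
    if words & GREETING_INPUTS:
        return rng.choice(GREETING_REPLIES)
    # Final catch clause for unrecognised inputs.
    return "I'm not sure how to respond to that yet."
```

Randomising among several candidate replies is what keeps repeated greetings from always producing the identical response.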

Figure 2: Application Female Avatar

public void makeDecision(){ this method triggers the responses and actions through string pattern matching. If (this condition is met) { do this }. else if (this condition is met) { do this } else { do this.. final catch clause } }
// methods invoked by main() method.
public void controlPanel(){ creates the panel and controls. Invokes panel actions through action listeners. }
public void femaleAvatar(){ creates the avatar frame and controls avatar actions through action listeners. }
public void maleAvatar(){ creates the avatar frame and controls avatar actions through action listeners. }
// methods invoked by makeDecision()
public void writeToFile(){ this method appends a line of text to a text file. The filename of the file to be modified, and the line of text to be added, are passed to the method as strings. }
public void sqlQuery1Param(){ this method takes an sql query string and returns the data retrieved for any given single table column. }
public void sqlQuery2Param(){ this method takes an sql query string and returns the data retrieved for any given two table columns. }
public void sqlUpdate(){ this method takes an sql update string and performs updates on the sql database. }
public void familyDbOps(){ this method matches keywords with string inputs and updates the database with new information. It will also generate responses based on database information. }
public void daysToBirthday(){ this method takes a date of birth and calculates a person's age and how many days there are until their next birthday. }
public static void playSound(){ this method plays sounds that the text-to-speech feature cannot produce. }

public void readFileOut(){ this method reads text files. The filename of the file to be read is passed to the method as a string. This is required for text-to-voice functionality (reading user-specified responses from file), and to generate the control panel display content. When reading out to the display panels, if the file has more than seven lines, it will only read out the last seven lines, as displays can only show a maximum of seven lines. }
public void speakString(){ this method speaks the strings and synchronises the avatar animations. It does this by calculating the length of time it takes to speak the number of words in each response, including punctuation. }
public void updateSettingsFile(){ this method updates the settings file every time a setting is changed. }
public void openScriptFile(){ this method opens the specified text file (passed as a string to the method) for editing. Allows users to add their own responses to the system. }
public void createSettingFile(){ this method creates a settings file at runtime if one does not already exist and generates default settings. }
public void createGreetingFile(){ this method creates a greetings file at runtime if one does not already exist and generates default greetings. }
public void createSignoffFile(){ this method creates a sign-off file at runtime if one does not already exist and generates default sign-offs. }
public void createInsultFile(){ this method creates an insults file at runtime if one does not already exist and generates default attitude or insult responses. }
public static void main() { at runtime…
run Control Panel Frame
run Female Avatar Frame
run Male Avatar Frame
(although both avatar frames are invoked at runtime, only one avatar frame is visible. The other will be hidden. Controlled by Settings file). }
}

Figure 3: Java class methods and descriptions
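As a concrete illustration of one method from Figure 3, daysToBirthday — take a date of birth, return the person's age and the days until their next birthday — might look like this in outline (a Python sketch of the logic, not the project's Java implementation; leap-day birthdays are ignored for brevity):

```python
from datetime import date

def days_to_birthday(dob, today):
    """Return (age in years, days until next birthday)."""
    # Age: year difference, minus one if this year's birthday
    # has not happened yet.
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    # Next birthday: this year's occurrence, or next year's if past.
    # (dob.replace would raise for 29 Feb in a non-leap year.)
    next_bd = dob.replace(year=today.year)
    if next_bd < today:
        next_bd = dob.replace(year=today.year + 1)
    return age, (next_bd - today).days

print(days_to_birthday(date(1990, 6, 15), date(2020, 4, 30)))  # → (29, 46)
```

The bot would then feed the returned numbers into a spoken response string.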

Conclusion

Although the large scope of the project affected the overall development of the application (with some elements being underdeveloped), without knowing what could reasonably be achieved, all outcomes were attempted and implemented. However, there was simply not enough time to develop all areas to their fullest extent. With a more effective speech recogniser, better text-to-speech functionality, a more naturally animated avatar, and an A.I. that could better understand the subtleties and context of human language, I might have gotten closer to the project's overall ambition. However, the project has produced a functioning bot that answers the specification, if not necessarily its ambition. The ambition of the project was perhaps always going to be out of reach, but the aims and objectives were implemented in full, and I therefore consider the project a success.

131


STUDENT BIOGRAPHY

Liam Seddon Course: BSc (Hons) Computing Application Software Development Price Comparison Website For this project I will be researching the following areas in detail for the shopping service: why this shopping service is needed, specifically in Aberdeen. The areas being researched in depth will be price comparison, other price comparison apps, and web scrapers. After conducting this research, it will be used to help better understand price comparison and to implement my research into a working program, which will allow users to buy products from a variety of supermarkets and do a price comparison between the stores to get each product cheaper. This shopping service will also allow users to add these products into one basket regardless of which supermarket each product is from, so that they get the cheapest price for each product they want.

132


Price Comparison Website Liam Seddon & Kyle Martin

Introduction

For this project I will be researching the following areas in detail for the shopping service: why this shopping service is needed, specifically in Aberdeen. The areas being researched in depth will be price comparison, other price comparison apps, and web scrapers. After conducting this research, it will be used to help better understand price comparison and to implement my research into a working program, which will allow users to buy products from a variety of supermarkets and do a price comparison between the stores to get each product cheaper. This shopping service will also allow users to add these products into one basket regardless of which supermarket each product is from, so that they get the cheapest price for each product they want.
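The core comparison described above — for each product, pick the store with the lowest price, then combine the winners into one basket — is essentially a per-product minimum. A minimal sketch (the store names and prices are invented for illustration; the real site would source them from scraped supermarket data):

```python
# Hypothetical scraped price data: product -> store -> price (GBP).
prices = {
    "milk":  {"StoreA": 0.95, "StoreB": 0.89, "StoreC": 1.05},
    "bread": {"StoreA": 1.10, "StoreB": 1.20, "StoreC": 0.99},
    "eggs":  {"StoreA": 1.75, "StoreB": 1.60, "StoreC": 1.80},
}

def cheapest_basket(prices):
    """For each product, pick the store offering the lowest price."""
    basket = {}
    for product, by_store in prices.items():
        store = min(by_store, key=by_store.get)
        basket[product] = (store, by_store[store])
    return basket

basket = cheapest_basket(prices)
total = sum(price for _, price in basket.values())
print(basket, round(total, 2))
```

The resulting basket mixes stores freely, which is exactly the "one basket regardless of supermarket" behaviour the service aims for.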

Evaluation

Conclusion

Throughout my project, I have tried making a web app that I wanted to deploy to Heroku. I started by doing some front end development as a starting point on my website; I did this through HTML, CSS and JavaScript. I then was going to do my back end development using Node.js and Express to help power the server and connect my database to the website. I was working on trying to get the server to display the web pages; this took up a lot of time, as there were a lot of bugs in the code for my server. Looking back, I would have developed all the front end first before trying to deploy to Heroku, and I would have done more research on Node.js and Express, as my poor knowledge of both really let me down during the development of my project. In the end I have a static HTML site with some images and some functionality; again, this was down to not leaving myself enough time due to fixing my server on Heroku to display the web pages.

Next time, I would consider working more on front end development before moving on to the back end and bug fixing. As I have said, due to bugs I did not leave myself enough time to finish the front end development; the front end so far has some functionality: buttons allow users to move between pages, and a login area works to allow users access to the site. In conclusion, there is still a lot to do for this website to be fully functional and work the way it is meant to.

Project Aim

Acknowledgments

The aim of this project is to help users of the app save money on their shopping. The app will help by working out the cheapest price for each product across different stores and displaying the cheapest one.

Methods

The website was created using HTML, CSS and JavaScript (Rouse, 2019). PHP was used to help allow the website to go live via Heroku. I was trying to use MySQL, Express and Node to connect a back end, but in the end this did not work.

I would like to thanks Dr Kyle Martin for his help throughout the project lifecycle, he has helped a great deal and has been a great supervisor.

In future, I would like to have had the knowledge to implement Node.js and Express to help with the back end development of the website. I would like to convert this website into a web app with all the functionality and requirements I had set out at the start. The functionality would allow users to do a lot more interaction with the site; for example, they could add products to their shopping basket, and when the mouse hovers over it, a drop-down menu displays the products the user has added. I would also improve my CSS on the front end, as it admittedly does not look nice or have much colour, and add better routing of web pages for a better flow through the website, plus a database to store and post data from users and also to display the Excel file the price comparison data is stored in.

References
Margaret Rouse (2019). Web Application (Web App). Available: https://searchsoftwarequality.techtarget.com/definition/Web-application-Web-app. Last accessed 30/4/2020.
Node.JS (2020). Logo Downloads. Available: https://nodejs.org/en/about/resources/. Last accessed 30/4/2020.
Patrick Collinson (2020). Panic buying on wane as online shopping takes over, says bank. Available: https://www.theguardian.com/business/2020/mar/30/coronavirus-bank-finds-end-to-panic-buying-while-online-shopping-takes-over. Last accessed 30/4/2020.

Computer Application Software Development

133


STUDENT BIOGRAPHY

Grant Sheils Course: BSc (Hons) Computing Application Software Development Klink: A visual programming language game to teach the basics of coding In the past few years, there has been a rising need for software developers around the world; in the United States alone, the number of employed developers went from 800,000 in 2004 (Geer, 2006) to over 4 million in 2016 (Daxx Team, 2019). However, coding and computer science in general can be a hard field to get involved with, especially for younger students, as there are many schools that fail to provide any computer science education. A tried and proven way of helping beginners start with coding is introducing them to a Visual Programming Language (VPL); this allows them to understand the basic fundamentals of coding while still keeping the process simplified. These VPLs have been used in conjunction with school education programs with great results (Grout and Houlden, 2014), increasing the child's knowledge base and interest in computer science. Then there is also the research behind the use of general software or video games within the field of children's education, showing that not only do students have fun playing these games, but their interest in the actual subject field itself increases. It seems, then, that a strong combination to help introduce coding to younger audiences would be video games built around visual programming languages. And that is exactly what this project hopes to achieve.

134


Klink: A visual programming language game to teach the basics of coding Grant Shiels & Dr Mark Zarb

Introduction In the past few years, there has been a rising need for software developers around the world; in the United States alone, the number of employed developers went from 800,000 in 2004 (Geer, 2006) to over 4 million in 2016 (Daxx Team, 2019). However, coding and computer science in general can be a hard field to get involved with, especially for younger students, as many schools fail to provide any computer science education. A tried and proven way of helping beginners start with coding is introducing them to a Visual Programming Language (VPL), which allows them to understand the basic fundamentals of coding while keeping the process simplified. These VPLs have been used in conjunction with school education programmes with great results (Grout and Houlden, 2014), increasing children's knowledge base and interest in computer science. There is also research behind the use of software and video games in children's education, showing that not only do students have fun playing these games, but their interest in the subject field itself increases. A strong combination for introducing coding to younger audiences, then, would be video games built around visual programming languages. That is exactly what this project hopes to achieve.

Project Aim The main goal of this project is to create visual programming language software that will encourage people, specifically younger students, to take an interest in coding. The software should either be a video game or implement video game-like mechanics in order to improve accessibility and increase the user's engagement with the application.

Methods

Before starting, I spent time researching the different methods that could be used to achieve my goal. I first had to decide which engine the game should be made in. There were two main contenders: Unity and Godot. In the end, I went with Godot, as it had excellent digital resources available and I personally found it much easier to understand as a first-time game developer. Since I was using Godot, I decided to use its built-in language, GDScript. During development I had to create multiple sprites and objects that needed visual assets; to create these I used GIMP, as I have had experience with this software for many years.

Gameplay The basic gameplay idea is that the user is faced with different levels; within each level, a maze-like path can be seen with the player character inside. It is the job of the user to navigate the player character from one end of the maze to the other. However, they won't be able to use the classic movement keys to move the character; this is where the visual programming language aspect comes into play.

The user has a selection of buttons that, when pressed, add a code block to a command that the user can see and edit. This command is made up of different tasks that the player character will carry out when the user presses the run button. These tasks do things such as move and change direction, with text boxes for the user to control certain aspects, like the distance the character will move. Once the user has built their command, they press the run button and watch to see if they have managed to guide the character to the goal area; if they are successful, the game moves on to the next level. If they don't reach the goal, they are transported back to the beginning of the path, where they can try again.

Conclusion

In conclusion, I feel the fundamental requirements for the project were reached: a game was created that allows users to build commands using blocks of code, a visual programming language. At this stage, full testing has yet to be completed, but the feedback received so far has been positive. Based on what testers have said, users seem to enjoy the gameplay experience and find it accessible as an entry into programming. This project has also given me insight into working with game engines and how game development differs from other forms of development I'm familiar with. I hope the skills I've gained during this project will help me in future opportunities.

Further Work

There is potential to add a larger variety of code blocks, allowing more complex commands to be built and in turn letting users gain more knowledge of the fundamentals of coding. Implementation of online features could also enable users to create and share custom levels for other users to complete.
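The run-button behaviour described in the Gameplay section amounts to interpreting a queue of code blocks. A minimal Python stand-in is sketched below; the actual game is written in GDScript inside Godot, and the block names and grid model here are assumptions for illustration:

```python
# Illustrative stand-in for the block interpreter; the real game
# implements this in GDScript. Block names ("move", "turn") are assumed.

DIRECTIONS = ["north", "east", "south", "west"]          # clockwise order
STEPS = {"north": (0, 1), "east": (1, 0), "south": (0, -1), "west": (-1, 0)}

def run_command(blocks, start=(0, 0), facing="north"):
    """Execute the user's command (a queue of blocks) and return the
    final grid position and facing direction."""
    x, y = start
    for block, arg in blocks:
        if block == "move":                               # forward `arg` tiles
            dx, dy = STEPS[facing]
            x, y = x + dx * arg, y + dy * arg
        elif block == "turn":                             # arg is "left"/"right"
            i = DIRECTIONS.index(facing)
            i = (i + 1) % 4 if arg == "right" else (i - 1) % 4
            facing = DIRECTIONS[i]
    return (x, y), facing

def reaches_goal(blocks, goal, **kwargs):
    """True if running the command ends on the goal tile."""
    pos, _ = run_command(blocks, **kwargs)
    return pos == goal
```

A command like move 2, turn right, move 3 then either reaches the goal tile or sends the character back to the start, exactly as described above.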

Acknowledgments

I would like to use this section of the poster to thank my supervisor, Dr Mark Zarb, who provided great support across both semesters and held weekly meetings that I found incredibly helpful for providing information and insight into the projects of other members of my class. I would also like to thank both NESCol and RGU for giving me the opportunity to study.

References

Geer, D., 2006. Software developer profession expanding. IEEE Softw. 23, 112–115. https://doi.org/10.1109/MS.2006.56 Daxx Team, 2019. Software Developer Statistics 2019: How Many Software Engineers Are in the US and in the World? [WWW Document]. Daxx Softw. Dev. Teams. URL https://www.daxx.com/blog/development-trends/number-software-developers-world Grout, V., Houlden, N., 2014. Taking Computer Science and Programming into Schools: The Glyndŵr/BCS Turing Project. Procedia - Soc. Behav. Sci., 4th World Conference on Learning Teaching and Educational Leadership (WCLTA-2013) 141, 680–685. https://doi.org/10.1016/j.sbspro.2014.05.119

135


STUDENT BIOGRAPHY

Alex Thomson Course: BSc (Hons) Computing Application Software Development Migrate and enhance a Python-based image recognition software used to digitise engineering drawings from the Oil & Gas industry Due to the increase in mobile devices used to run and demonstrate software solutions, industries demand the development of mobile applications and websites able to run and test tools. The oil and gas industry uses a variety of different diagrams, one of these being sensor-equipment diagrams; as observed by Moreno-García, the process of interpreting these is not straightforward even for human experts (Moreno-García 2018). This project focuses on migrating a system which digitises these sensor-equipment diagrams using Python-based optical character recognition from a Windows-based platform to an Android device, while still running on Python. The application should ultimately be easy to interact with, have a simple user interface and display the information in a clear and concise manner. The application may also offload some of the computation, if a web connection is available, to take some of the strain off the processor of the mobile device.

136


Migrate and enhance a Python-based image recognition software used to digitise engineering drawings from the Oil & Gas industry Alex Thomson & Dr Carlos Moreno-García

Introduction

Due to the increase in mobile devices used to run and demonstrate software solutions, industries demand the development of mobile applications and websites able to run and test tools. The oil and gas industry uses a variety of different diagrams, one of these being sensor-equipment diagrams; as observed by Moreno-García, the process of interpreting these is not straightforward even for human experts (Moreno-García 2018). This project focuses on migrating a system which digitises these sensor-equipment diagrams using Python-based optical character recognition from a Windows-based platform to an Android device, while still running on Python. The application should ultimately be easy to interact with, have a simple user interface and display the information in a clear and concise manner. The application may also offload some of the computation, if a web connection is available, to take some of the strain off the processor of the mobile device.

Project Aim The aim of the project was to migrate and enhance a Python-based image recognition software used to digitise engineering drawings from the oil and gas industry.

Methods

Figure 1: Kivy Framework

Experiments and Results

Figure 2: Diagram Example

Upon launching the migrated application, the user first selects a file (Figure 3a), picking the sensor diagram image shown in Figure 2. The application then moves to the next screen, where the user selects which areas they would like the application to process and clicks save (see Figure 3b).

Figure 3a: Image Selection

Figure 3b: Area Selection

The application has to use Pytesseract to read the text from the sensors, and unfortunately this cannot currently be done with Python on mobile, as there is no recipe for Python-for-android. As an alternative, I created a server using Flask to offload the Tesseract work to an AWS instance. Annotation was the main problem of the migration: sloth isn't supported in Python-for-android and there are no alternatives, so I had to create a labelling tool of my own for the application to progress.
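A minimal sketch of such an offload endpoint is shown below, assuming Flask is installed on the server. The route name, response shape and the `OCR_FUNC` hook are illustrative, not taken from the actual project; Pytesseract is imported lazily so it only needs to exist on the server:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def run_tesseract(image_bytes):
    # Imported lazily: Pytesseract and the Tesseract binary only need
    # to be installed on the server, never on the mobile client.
    import io
    import pytesseract
    from PIL import Image
    return pytesseract.image_to_string(Image.open(io.BytesIO(image_bytes)))

@app.route("/ocr", methods=["POST"])
def ocr():
    # The mobile app POSTs the raw image bytes and receives the text back.
    ocr_func = app.config.get("OCR_FUNC", run_tesseract)
    return jsonify({"text": ocr_func(request.get_data())})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The `OCR_FUNC` config hook is only there so the endpoint can be exercised in environments without a Tesseract install.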

Figure 4: Rendered Diagram

Conclusion

Figure 6: Offload Flow Diagram

The existing application was successfully migrated to mobile using the Kivy framework, with a couple of problems faced along the way. Pytesseract could not be built using the Python-for-android package, so an alternative had to be found: offloading the work to a Flask server which takes in the image and returns the text. Other problems included having to create an annotation tool, and architecture issues which meant running the 32-bit version of OpenCV. Overall, this shows that Python on mobile is completely possible and that most common packages are supported.

Acknowledgments Special thanks to Dr Carlos Moreno-García for supervising the project and for creating the original application to migrate. I would also like to thank DNV GL for providing the image dataset which was used in the original application.

The existing Python application was migrated using the Kivy framework, a collection of projects which allows the execution of Python code on mobile devices (Bc & Chrastina, n.d.). An Android device was used to run the application for testing, as this was more efficient than virtualisation.

Once the areas have been selected in Figure 3b, after a couple of minutes of processing the user is moved to the next screen, where the application shows the rendered diagram with the connectivity displayed for each sensor and equipment where applicable.

References

Bc, M. and Ondřej Chrastina (n.d.). Cross-platform development of smartphone application with the Kivy framework. [online] Available at: https://is.muni.cz/th/430596/fi_m/dp.pdf [Accessed 29 Oct. 2019].

Moreno-García, C. (2018). Digital interpretation of sensor-equipment diagrams. [online] Available at: http://ceur-ws.org/Vol-2151/Paper_s2.pdf.

137


STUDENT BIOGRAPHY

Andrew Trail Course: BSc (Hons) Computing Application Software Development Drone Delivery Scheduling Drones are an up-and-coming method of delivering items to customers. They have not been extensively used as of yet, due to being a relatively new technology. There are also some hardware limitations of the drones. These limitations include maximum flight distance, weight capacity, adverse weather conditions, and legal issues (Dorling et al., 2017). However, as drone technology advances they will be able to fly further and carry more weight. The demand from customers is high, with many being happy to pay extra for same-day delivery (McKinsey & Company, 2016). This combination of demand from customers and advances in drone technology will lead to an increase in their use. It is safe to imagine a situation in the near future where drones are able to carry multiple items to multiple customers in a single flight.

138


Drone Delivery Scheduling

Clustering and Path Finding to produce a schedule Andrew Trail & Dr. Kit-ying Hui

Introduction

Drones are an up-and-coming method of delivering items to customers. They have not been extensively used as of yet, due to being a relatively new technology. There are also some hardware limitations of the drones. These limitations include maximum flight distance, weight capacity, adverse weather conditions, and legal issues (Dorling et al., 2017). However, as drone technology advances they will be able to fly further and carry more weight. The demand from customers is high, with many being happy to pay extra for same-day delivery (McKinsey & Company, 2016). This combination of demand from customers and advances in drone technology will lead to an increase in their use. It is safe to imagine a situation in the near future where drones are able to carry multiple items to multiple customers in a single flight.

Project Aim The aim of this project is to build a software solution to create an efficient delivery schedule. The schedule should be optimised to reduce the total amount of time taken for each customer to receive their item.

Testing

Figure 2: Customers grouped into 5 clusters by location

Figure 3: Customers grouped into 8 clusters by location

A sample set of data was input for testing purposes: 33 mock customers were added, scattered across Aberdeen. Each marker on the maps in Figures 2 and 3 represents a customer. The clustering algorithm k-means was used to group customers by location. The resulting groups for different numbers of clusters are shown in Figures 2 and 3. From this stage, the two path-finding algorithms can be applied to find a route between each customer. If one of the routes is too long to be completed given the input parameters of the drones, the group is split in two using k-means again. The process repeats until all groups have a manageable route, or are determined to be impossible. The path-finding algorithms have built-in functions to improve the results they produce. The distance between legs of the route is measured and fed into an algorithm that gives the time required for a drone to complete that leg. This algorithm takes into account the wind speed and direction to produce an accurate result.
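The split-until-feasible loop described above can be sketched as follows. This is an illustrative pure-Python two-way split, not the project's actual implementation; `max_route_length` is a made-up feasibility check standing in for the drone's real flight limits:

```python
import math

def two_means(points, iterations=10):
    """Split a list of (x, y) points into two clusters by location."""
    # Deterministic seeding: two roughly furthest-apart points.
    a = max(points, key=lambda p: math.dist(p, points[0]))
    b = max(points, key=lambda p: math.dist(p, a))
    centroids = [a, b]
    for _ in range(iterations):
        clusters = [[], []]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min((math.dist(p, c), k) for k, c in enumerate(centroids))[1]
            clusters[i].append(p)
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            for c in clusters if c
        ]
    return [c for c in clusters if c]

def feasible_groups(points, max_route_length):
    """Keep splitting a group in two until each group's naive route fits."""
    route_len = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    if route_len <= max_route_length or len(points) <= 1:
        return [points]
    halves = two_means(points)
    if len(halves) < 2:
        return halves  # cannot split further
    groups = []
    for half in halves:
        groups.extend(feasible_groups(half, max_route_length))
    return groups
```

The real system would run the path-finding step on each group before the feasibility test, rather than using the naive point order as here.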

Conclusion

- The project demonstrates that artificial intelligence can be a useful tool in delivery scheduling. The solution created is able to produce an efficient schedule quickly and easily. It also allows comparison of different path-finding algorithms.
- The algorithms are potentially not as optimal as possible, as many factors haven't been considered due to the time limitations of this project. Factors such as the amount of time a customer has been waiting could be considered, so that customers who have waited a long time are given priority in the queue.
- Additionally, the software could be expanded to provide more of a ‘real-time’ environment, showing the current location of drones and sorting new orders into new groups to be completed once drones have finished the routes they are on.
- Weight of items is not currently taken into account. In a real system this would be a key consideration, so it would need to be added.
- Future work would need to be carried out to implement these features before the system is ready for commercial use.

Methods

Figure 1: A set of customers split into groups with routes generated

Two techniques are deployed to create a schedule:
- Customers are split into groups based on how close they are to one another geographically.
- A path-finding algorithm is used to find a route for each drone to follow.

Two path-finding algorithms were implemented to allow comparison of their suitability for the task: a Genetic Algorithm (GA) and Greedy Best First (GBF).

Figure 6: Comparison of length of routes produced by GA and GBF

Figure 4: Routes for 5 clusters using the Genetic Algorithm

Figure 5: Routes for 5 clusters using Greedy Best First algorithm

Evaluation Figure 6 shows that across all routes, GA outperformed GBF. Route 3 shows similar results for the two, with GBF producing a route 3.76% longer than GA's. However, in route 4, GBF produced a massively worse route, 51.89% slower than the route produced by GA. Interestingly, there are occasions where the Genetic Algorithm produces a longer route by distance but takes less time to complete. This is because the Genetic Algorithm is able to try many different combinations, allowing it to produce a route where the drone flies with a tailwind for as long as possible.
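The Greedy Best First strategy, and the distinction between distance and wind-adjusted time that the evaluation relies on, can be sketched as a nearest-by-time walk. This is an illustrative version with a deliberately simplified wind model (ground speed = airspeed plus the along-track wind component, assuming airspeed always exceeds any headwind), not the project's actual code:

```python
import math

def leg_time(p, q, airspeed, wind):
    """Time to fly p -> q; a tailwind speeds the leg up, a headwind slows it."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return 0.0
    tailwind = (wind[0] * dx + wind[1] * dy) / dist  # wind along the leg
    return dist / (airspeed + tailwind)

def greedy_route(depot, customers, airspeed=10.0, wind=(0.0, 0.0)):
    """Greedy best-first: always fly to the quickest-to-reach customer next."""
    route, here = [], depot
    remaining = list(customers)
    while remaining:
        nxt = min(remaining, key=lambda c: leg_time(here, c, airspeed, wind))
        route.append(nxt)
        remaining.remove(nxt)
        here = nxt
    return route
```

Because `leg_time` depends on wind direction, two routes of equal distance can take different times, which is exactly why GA's tailwind-seeking routes can beat shorter-by-distance GBF routes.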

References

1 - Dorling, K., Heinrichs, J., Messier, G. and Magierowski, S. (2017). Vehicle Routing Problems for Drone Delivery. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(1), pp.70-71.
2 - McKinsey & Company (2016). Parcel delivery. The future of last mile. [online] p.9. Available at: https://mck.co/34XcEXR

139


STUDENT BIOGRAPHY

Nauris Valaks Course: BSc (Hons) Computing Application Software Development Interactive Math Tutorial System Educational tools have been around for years, with businesses adopting e-learning tools in the 2000s (eFront Blog, 2019), and they will be part of our future for years to come. Maths can be taught using online tools that provide materials and tasks supplied by lecturers, which students can use to take a deeper dive into what has been covered in lectures or labs. Choosing the correct platform to develop these tools is important, to take advantage of the hardware users have, whether that is a website or a mobile application. Taking assessments from the comfort of home is an obstacle that needs to be overcome, whether by providing a fair system where cheating is not a possibility, or a pre-assessment system designed to give students the opportunity to test their skills before the real assessment.

140


Interactive Math Tutorial System Nauris Valaks & Supervisor David Lonie

Introduction Educational tools have been around for years, with businesses adopting e-learning tools in the 2000s (eFront Blog, 2019), and they will be part of our future for years to come. Maths can be taught using online tools that provide materials and tasks supplied by lecturers, which students can use to take a deeper dive into what has been covered in lectures or labs. Choosing the correct platform to develop these tools is important, to take advantage of the hardware users have, whether that is a website or a mobile application. Taking assessments from the comfort of home is an obstacle that needs to be overcome, whether by providing a fair system where cheating is not a possibility, or a pre-assessment system designed to give students the opportunity to test their skills before the real assessment.

Figures and Results

The results above are taken from a survey with 25 respondents. It can be seen that people use online materials for revision, which aligns with the research set out at the beginning. Even though the majority answered "sometimes", that gap can be filled by investigating further why they only use online materials sometimes. Future work could then implement changes that would give a wider audience a reason to use online tools more often.

These results show that even a glimpse of an application still under development generates interest in trying it out to see if it would help with revision. Following these findings, the next step would be to give users the current version of the application to test, to find any bugs and gather suggestions, which could then be implemented into the next version. The main future goal would be to create a web application, allowing students access anywhere and from any device, while keeping this application for offline use in case there is no access to the internet.

Methods

Visual Studio was used because it is free, has tutorials to explore and well-structured documentation. A Windows application was developed using WinForms as the designated framework, with a SQL database used to store user data and quiz questions.

Acknowledgments I would like to thank my supervisor David Lonie for his guidance and advice throughout the project.

The aim of this project was to develop an application which would allow students to revise maths problems and apply the skills they have learned in a quiz. Users of this application should be able to log in with their own details, and access revision materials and an assessment tool in the shape of a quiz. All of these intentions have been implemented, with changes to how many types of materials are available for students.

The image above shows the final quiz screen and how a question is displayed; clicking any of the four buttons proceeds to the next question. It uses the RGU colour scheme at the top, where the navigation sits in other parts of the application.

Project Aim The aim of this project was to develop functioning tutorial software that would provide university students with maths problems to solve in a quiz-type system, with a feature that provides randomised questions every time, for reusability. The aim was successfully achieved, with minor changes and some aspects still to be implemented.
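The randomised question selection described above amounts to drawing a fresh sample from a stored question pool on each attempt. A minimal sketch follows, in Python for brevity rather than the project's actual WinForms/C# code; the question format and names are illustrative, and in the real system the pool would come from the SQL database:

```python
import random

def build_quiz(question_pool, size, rng=random):
    """Draw `size` distinct questions from the pool in a random order,
    so each attempt at the quiz is different."""
    if size > len(question_pool):
        raise ValueError("not enough questions in the pool")
    return rng.sample(question_pool, size)

# Illustrative pool; in the real system these rows come from SQL.
POOL = [
    {"q": "3 + 4 * 2", "answer": 11},
    {"q": "(3 + 4) * 2", "answer": 14},
    {"q": "2 ** 5", "answer": 32},
    {"q": "17 % 5", "answer": 2},
]

quiz = build_quiz(POOL, 3)
```

Passing a seeded `random.Random` as `rng` makes a quiz reproducible, which is handy for testing.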

Conclusion

Choosing a quiz as the way of assessing users' abilities was a great choice, as it can be seen from the above chart that people find quizzes helpful in preparing for assessments. This can lead to future additions and improvements to the quiz and how it functions. Finding ways to implement multiple revision types, such as multiple choice and text input, in one place could provide a way for assessments to be completed in the application.

References eFront Blog. (2019). A brief history of elearning (infographic) - eFront Blog. [online] Available at: https://www.efrontlearning.com/blog/2013/08/a-brief-history-of-elearning-infographic.html [Accessed 28 Oct. 2019].

141


STUDENT BIOGRAPHY

Isaac Ward Course: BSc (Hons) Computing Application Software Development Mental Health Issues Among Students Studying In Higher Education For this project, a mobile app was developed to help students document their feelings and direct them to different means of support for any mental health issues they may have. As students deal with high levels of stress due to the pressure of attaining good grades, this project is important because it will help students visualise how they feel, allowing them to keep track of their emotions and create a positive headspace during periods of increased stress. The main topics covered include how students deal with mental health issues in higher education and the different support materials available to them. There are many websites available to support students; however, they are not being utilised as much as they could be.

142


Mental Health Issues Among Students Studying In Higher Education Isaac Ward & Tiffany Young

Introduction

For this project, a mobile app was developed to help students document their feelings and direct them to different means of support for any mental health issues they may have. As students deal with high levels of stress due to the pressure of attaining good grades, this project is important because it will help students visualise how they feel, allowing them to keep track of their emotions and create a positive headspace during periods of increased stress. The main topics covered include how students deal with mental health issues in higher education and the different support materials available to them. There are many websites available to support students; however, they are not being utilised as much as they could be.

Project Aim

The aim of the project is to develop a product that will direct students to different help points where they can learn where to go to get help with any mental health issues they suspect, as well as users being able to track their feelings for personal use or to use the data to help them communicate their feelings between themselves and a professional. This is important as it means when users who are dealing with mental health issues get the support they need, they will already be equipped with a tool that will allow doctors to see how they have been feeling over the recorded period of time.

Methods

Firstly, research was carried out to see how effective previous methods have been in tackling the issue of student mental health. Further research went into the design process for the application, including investigating different types of question structure and user interface layout, to help provide a suitable atmosphere for users. Storyboards were created to visualise potential user pathways before creating the application.

After the research phase, implementation began using Android Studio to create a mobile application which would allow students to keep track of their feelings each day.

Figures and Results Figure 1 below shows what the user sees when first logging into the application and going through the data collection process. This includes answering a few questions based on their mood, sleep and activities throughout the day. Once the data has been collected, users can navigate to different parts of the application, where they can find more information on where to go if they require external help from a professional. Figure 2 gives a visual representation of how the data is presented back to the user on their chosen day. By using the date picker shown in Figure 2, users can choose any day of the week and see their previous answers displayed back to them. The application has also been developed so that the user can go back to previous dates and see their results for the chosen day.
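The day-keyed storage described here, where answers are saved per date and retrieved through a date picker, could be sketched like this. All names are illustrative; the real app is an Android application that persists to device storage rather than a Python in-memory dict:

```python
from datetime import date

class MoodLog:
    """Stores one set of daily answers per calendar day."""

    def __init__(self):
        self._entries = {}  # date -> answers dict

    def record(self, day, mood, sleep_hours, activities):
        # Re-recording a day overwrites that day's answers.
        self._entries[day] = {
            "mood": mood,
            "sleep_hours": sleep_hours,
            "activities": list(activities),
        }

    def for_day(self, day):
        """Return the saved answers for `day`, or None if nothing was logged."""
        return self._entries.get(day)

    def logged_days(self):
        """Days with data; these are the days worth highlighting on a calendar."""
        return sorted(self._entries)
```

A `logged_days`-style query is what would drive the calendar-highlighting improvement mentioned later on this poster.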

Conclusion

The purpose of this project has been fulfilled, as the application successfully allows users to self-track their mood on a day-to-day basis and see their data visually displayed back to them. This allows the app to be used as a means of communication between users and a professional who is trying to help them. The application also fills the need of educating students on where to go if they require further help, as well as giving users information on why it is important to keep a record of their mood and sleeping patterns and to create a positive mindset.

Acknowledgments

Figure 1

Figure 2

The application has a support page where users are provided with a variety of sources on helping people who suffer from mental health issues such as depression, as well as other difficulties users may be having, such as insomnia. On the information page, users can choose which category they would like to view information on. This includes the topics based on the questions the user answered earlier in the application. The information page displays a brief summary of why it is important to keep track of your mood, as well as tips on how to stay positive. A key challenge in this project was to make sure the application made it clear that it is not a tool to be used for self-diagnosis. A future improvement could be to automatically highlight the days on the calendar where the user has data available to view. This would help users find what they are looking for quickly, by reducing the time it takes to view old information.

The functionality of the application was thoroughly tested by myself so all of the features could be tested correctly. The app was also tested by family members who had no prior knowledge of how the application worked. Inspiration for this project came from initial research on how students in higher education suffer from excessive amounts of stress and don’t know what material is available to help them.

References Higher Education Statistics Agency, 2019. What are HE students' progression rates and qualifications. Higher Education Statistics Agency, 2019. Non-continuation summary: UK Performance Indicators 2017/18

143


STUDENT BIOGRAPHY

Caelan Weaver Course: BSc (Hons) Computing Application Software Development Teaching introductory level JavaScript in a 3-D virtual space whilst implementing the media computation approach. This project is a gamified 3-dimensional virtual environment used to teach students, and those with a relevant interest, the introductory-level basics and coding principles of JavaScript, with the implementation of the media computation approach. The application created combines the full immersion experience you get with virtual reality with the educational aspect. Research found that there was a lot of evidence supporting visual learning and virtual reality gaming, but not a lot regarding a combination of the two. Unfortunately, due to complications with COVID-19, core functionality has suffered, causing alterations to the project as well as the overall deliverable.

144


Teaching introductory level JavaScript in a 3-D virtual space whilst implementing the media computation approach.

Author: Caelan Weaver & Supervisor: Roger McDermott

Introduction This project is a gamified 3-dimensional virtual environment used to teach students, and those with a relevant interest, the introductory-level basics and coding principles of JavaScript, with the implementation of the media computation approach. The application created combines the full immersion experience you get with virtual reality with the educational aspect. Research found that there was a lot of evidence supporting visual learning and virtual reality gaming, but not a lot regarding a combination of the two. Unfortunately, due to complications with COVID-19, core functionality has suffered, causing alterations to the project as well as the overall deliverable.

Figures and Results

Unity Events


Project Aim

For the initial design of the level shown above, images found online from other existing classroom scenes, developed as games using 3D objects and rendering, were used as reference. Some inspiration for the classroom aesthetic was taken from the smaller centre image shown above; a similar window design has been incorporated into my project, with slight alterations to the shape and size. My plan was to adapt Guzdial's existing software to work within the Unity 3D engine and gamify it. So far, there is a fully functional and developed coding block puzzle using the VRTK, a Virtual Reality Tool Kit which can be integrated into a Unity 3D project by downloading it from the asset store. The reason I am using Unity is that the virtual reality aspect of the project can be implemented and developed with it. Not only was Unity a familiar IDE to work with, it is also free and very accessible.

Methods

The game is a simple problem-solving puzzle that teaches the user about variables and functions. A question is presented at the top of the whiteboard. The user has two coding blocks, each with a different function on it. Placing a coding block onto the whiteboard applies the function displayed on the block and adds it to the code on the whiteboard. This code is then verified as either right or wrong: the whiteboard displays either "You Are Correct!" or "You Are Wrong!" and plays a positive or negative sound accordingly. Below, you will find a script which handles the UnityEvent system.
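The verification step described above can be sketched as follows. This is a hypothetical Python stand-in for the project's actual Unity/GDScript logic, with made-up block and question names; in the game itself, this check is wired through UnityEvents and collider triggers rather than code like this:

```python
# Hypothetical stand-in for the whiteboard verification logic.

QUESTION = {
    "prompt": "Make the function return x doubled",
    "expected": lambda x: x * 2,
}

BLOCKS = {
    "double": lambda x: x * 2,   # the correct coding block
    "square": lambda x: x * x,   # the distractor block
}

def verify_block(block_name, question, samples=(0, 1, 2, 5)):
    """Apply the placed block and compare it with the expected function."""
    placed = BLOCKS[block_name]
    correct = all(placed(x) == question["expected"](x) for x in samples)
    # In the game this would also swap the whiteboard's Mesh Renderer
    # and play the positive or negative sound.
    return "You Are Correct!" if correct else "You Are Wrong!"
```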


Main Code

Create a project that demonstrates the effect of immersion within a 3-Dimensional Virtual Environment, by allowing users to interact with 3D objects in the scene, visually witness changes within the virtual world and receive auditory feedback from multiple directions, creating auditory illusion and manipulation.

Unity 3D was my development tool as it supports virtual reality game development and provides the necessary libraries and assets from the asset store, such as Oculus VR and OpenVR. Given the circumstances, obtaining the necessary hardware was not possible; however, VRTK helped to emulate a virtual world and aided in producing the current deliverable to date.

As you can see above, there is a snippet from the inspector view within Unity. This window essentially provides the developer with easy trigger event management based on either block colliding with the whiteboard. In this case, the system changes the text displayed and the current Mesh Renderer of the whiteboard to the appropriate display based on the block that collided with it, and plays the 'Correct Sound' whilst also setting the pan stereo to 1, meaning the audio source is positioned to the right of the player within the game.

Conclusion Due to the complications that have arisen, testing, implementation and evaluation have been difficult to carry out effectively, which has had some negative impact on the project. Ideally, evaluation methods would have been put in place to measure success, and testing would have been carried out by students and other volunteers using a multitude of different testing methods, such as white-box and black-box testing. Overall, the project aim has been achieved in most respects, as the demonstration video confirms the project's use of media computation within the virtual world through interactable objects.

References

•  Guzdial, M., 2020. Media Computation Teachers Website. coweb.cc.gatech.edu. Available at: http://coweb.cc.gatech.edu/mediaComp-teach [Accessed 28 April 2020]

145


146


BSc (Hons) Computing Graphics and Animation

147


STUDENT BIOGRAPHY

Ross MacTaggart Course: BSc (Hons) Computing Graphics and Animation How lighting in 3D animation affects the viewer's perception of the environment and assists with storytelling Lighting is a key part of media, both in the real world and the digital one. It has many uses, including setting the tone of a scene, revealing the personality of a character, directing the viewer's focus and many more. Using lighting well can make a massive difference to almost any type of media within almost any genre. This project will focus on how light can be used to affect the viewer's perception of the environment and how it can assist with storytelling.

148


How lighting in 3D animation affects the viewer's perception of the environment and assists with storytelling Name: Ross MacTaggart Supervisor: Pam Johnston

Introduction

Modelling & Texturing

Results

Lighting is a key part of media both in the real world and the digital one. It has many uses including setting the tone of a scene, revealing the personality of a character, directing the viewer’s focus and many more. Using lighting well can make a massive difference on almost any type of media within almost any genre. This project will focus on how light can be used to affect the viewer’s perception of the environment and how it can assist with storytelling.

The creation of the models began with a simple primitive shape. After this many different modelling techniques and modifiers including symmetry, chamfer, noise and lots of polygon manipulation were used to build them into the characters that were desired. Attempts were made to keep them fairly low poly (Pre-turbosmooth) so that it would be easier to work with them during the animation process. Once the two characters were created, each one was UV unwrapped so that they could be textured. The UV maps were exported into Adobe Photoshop and then the textures were painted on top. Fully rendered versions of each character and the texture maps can be seen here.

To evaluate the animations, a survey was sent out to 40 participants. This survey asked the participants several questions about each animation with the goal of finding out which animation was more atmospheric and why this was the case. The results showed that almost every participant thought that the Red version of the animation was more atmospheric. The majority of the participants thought that the main change that affected the atmosphere of the scene was the flashing lights rather than the change of colour from white to red. The survey also asked the participants to enter 3 words that they associated with each animation. Word clouds made of these responses can be seen here.

Figure 3 – Textured Character Models and texture maps

Project Aim

The aim of this project is to create two animations that are perfectly identical other than the lighting that is used and find out what kind of differences the lighting makes. The focus of the animations will be the horror genre. One will use what can be considered “normal” lighting with consistent white light whilst the other will use lighting that attempts to be scarier. The animations will then be evaluated using a survey to see how big of a difference the lighting makes to the atmosphere of the scene.

Design The implementation of the project began by creating storyboards for the animation. After this, the next step was to sketch out designs for the characters that would feature in the animation. After settling on appropriate designs, they were developed into modelling sheets that could be taken into Autodesk 3DS Max to use as references when creating the models. These modelling sheets are shown below. Figure 1 – Space Suit Modelling Sheet

Figure 7 – White Animation Word Cloud

Animation Before animating the scene, the character models were prepared for animation. This involved rigging them with custom biped skeletons and then skinning them so that they could be posed without morphing in unwanted ways. The majority of the animation was done using motion capture: the university's facilities were used to capture many different movements, which were then retargeted to work within 3DS Max and with the character models' biped skeletons using Autodesk MotionBuilder.

Figure 8 – Red Animation Word Cloud

Figure 4– Motion capture studio and MotionBuilder

Conclusion

After fitting together the different movements and correcting any issues, small bits of keyframe animation were done between the clips to get them all to flow as seamlessly as possible. After setting up all of the animations for each character, the scene’s environment was built up and the two sets of light were set up. The first set was normal constant white light whilst the second had flashing red lights. Below you can see some frames of each animation being compared as well as a QR code that will take you to a YouTube video showing both the animations alongside each other. Figure 5 – Comparison of frames of the animations

Figure 6 – QR Code link to animations

The project successfully shows just how big of a difference lighting can make in 3D animation. Future work on the project could involve creating more animations to test out more types of lighting and investigate the differences that they make. Throughout this project I personally both learned new skills and improved existing ones. I also learnt just how long the creation of a 3D animation can take, especially when using lower-end machines.

Acknowledgements Figure 2 – Monster Modelling Sheet

I would like to thank my supervisor Pam Johnston for her guidance and support throughout this project, without her this project would not have been possible. I would also like to thank Jamie McDonald for teaching me many of the skills that I used throughout this project over the years that I have studied at RGU.

Computing (Graphics and Animation) 149


STUDENT BIOGRAPHY

Charlie Marsh Course: BSc (Hons) Digital Media Comparison of Stereoscopic Technology with Conventional Animation When stereoscopic 3D (S3D) was featured in James Cameron's 2009 Sci-Fi film, 'Avatar', the technology was perceived to be a new way of experiencing films in theatres. However, since then, the popularity of the technology has dropped considerably (figure 1), and the conventional form of watching films in 2D, though at varying resolutions, has dominated in its place. Along with this, virtual reality technology is providing users with a whole new sense of immersion, one that will vastly improve upon the seemingly outdated S3D. The goal of this project is to find out if stereoscopic 3D still provides a better and more practical form of immersion than its 2D counterpart.

150


Comparison of Stereoscopic technology with conventional animation Supervisor: Yang Jiang

Name: Charlie Marsh

Introduction When stereoscopic 3D (S3D) was featured in James Cameron's 2009 Sci-Fi film, 'Avatar', the technology was perceived to be a new way of experiencing films in theatres. However, since then, the popularity of the technology has dropped considerably (figure 1), and the conventional form of watching films in 2D, though at varying resolutions, has dominated in its place. Along with this, virtual reality technology is providing users with a whole new sense of immersion, one that will vastly improve upon the seemingly outdated S3D. The goal of this project is to find out if stereoscopic 3D still provides a better and more practical form of immersion than its 2D counterpart.

Project Aim The aim of this project is to test stereoscopic technology against a standard 2D production and evaluate how the technology affects the viewer, exploring how stereoscopic film is designed to make the experience feel more immersive than a flat image. To do this, a 3D animation will be created and rendered in two different ways: one with standard cameras, and the other using an S3D setup.

Methods

Figure 2

In order to create the animation, a storyboard will be written, and from this, various 3D landscapes will be created so that virtual models and assets can be animated within them to create a short story. Once the two productions are complete, a survey will be handed out to a group of people to answer about the use of stereoscopic technology compared to the standard render.

Implementation

Results

Stereoscopy is essential to providing a production with photo-realism: [Figure 9 – survey responses on a scale from Strongly Agree to Strongly Disagree]

Figure 3

Figure 4

Figure 5

Figure 6

The animation followed a brief series of events involving a dinosaur. The landscapes were created from plane meshes with displacement modifiers, whilst grass and trees were appended from another project folder. The high-poly tree model used after the opening shot was acquired from an external source due to time constraints (Figure 7). The dinosaur was created from a box mesh, with several extrusions and extra vertices added on. The mesh was then sculpted, painted and rigged, before being appended into a scene project file for animation. Once the scenes were complete, one copy was rendered with standard settings, and the other with stereoscopic cameras applied. Along with this, other settings were also added to the final production, such as motion blur, in order to make the animation seem more fluid and to simulate moving objects blurring in human vision (figure 8).

Conclusion Overall, the project produced a mixed outcome. Whilst the participants in the survey did not state that there was anything wrong with the current usage of stereoscopic technology in productions, the majority agreed that the technology is likely to bring on cases of motion sickness and nausea, which, unless this is a desired effect of the production, could be considered a point against the technology. However, other questions showed that the users overall agreed on its capability to create an immersive experience, potentially implying that as the technology evolves, a development will be made that will make people feel more engaged with a storyline, without the discomfort currently presented.

Project Statement Whilst the conclusion projected some mixed and ambiguous results, I felt this project was an ideal test of my current capabilities in 3D production and animation. Creating the dinosaur taught me a lot about the 3D modelling process. The animation stage also gave me an insight into how best to go about future 3D projects. One aspect that I would redo in future work would be to spend more time on the production itself. Since this project was mainly about the camera work, I feel future projects would benefit from more thoroughly planned storylines and assets. With this in mind, however, this project has demonstrated that performing the evaluation earlier on would be greatly beneficial, given the current circumstances.

The evaluation consisted of a survey containing 20 questions related to the usage of the stereoscopic technology. Due to the current circumstances, only my three flatmates were able to participate. The results proved mixed in some areas. For example, the viewers did not have a view on the future of 3D, though the majority agreed that the technology was essential to providing immersion.

Acknowledgments

I would like to thank my honours supervisor, Yang Jiang, lecturer at the Robert Gordon University, for her regular guidance throughout my final year. I would also like to thank Calum McGuire, Craig Cargill and Tricia Wagg for participating in my evaluation process, as well as Scott Robertson for assisting with renders, all providing support throughout my honours project. Finally, I would like to thank my family for supporting me through my time at University.

References
Figure 1: James Garbett, 7th September 2019, 'With a decline in 3D box office revenues, is 3D dead?': https://www.entertainmentfocus.com/film-section/filmnews/with-a-decline-in-3d-boxoffice-revenues-is-3d-dead/
Figure 7: 3D Developer Tools, 11th February 2019, 'How to create a tree in Blender 2.8': https://www.youtube.com/watch?v=aPhY0ZntULM
Figure 8: PixFlow, 'Motion Blur Effect: The Good Bad What Why & How': https://pixflow.net/blog/what-ismotion-blur-effect-in-after-effects/

Computing Graphics and Animation

151


STUDENT BIOGRAPHY

Aaron Michie Course: BSc (Hons) Computing Graphics and Animation VR: an Exploration into Immersive Storytelling Immersive technology is most commonly associated with the realms of science fiction. However, recent advances within the technology find it making more and more of a presence within modern pop culture: from 360-degree videos used for journalism, to fully immersive VR experiences built to let users experience the wonders of faraway places, like the rainforests and the Arctic, from the safety of their own homes. With VR headsets becoming more accessible in price and computer requirements, more and more people are turning to VR immersion as the next thing in entertainment. This new-found popularity creates a new possibility for the development of new immersive storytelling experiences to take advantage of this growing market.

152


153


154


BSc (Hons) Digital Media

155


STUDENT BIOGRAPHY

Ross Allan Course: BSc (Hons) Digital Media Accessibility in Gaming: Developing a 3D Video Game and Implementing Auditory and Visual Accessibility Features Using the Unity Game Engine The purpose of this project was to investigate the problems that people with accessibility issues face within 3D video games, as well as solutions to these problems and answers to many relevant questions revolving around these issues. A game was designed which highlighted exactly how these issues could be tackled and set a good example for how to apply certain methodologies, specifically relating to auditory and visually impaired players. Early market research showed that high numbers of people with accessibility issues play video games. Wing Chin (2015): "over 92% of people play games despite difficulties from impairments." The project's motivation stems from the underrepresentation of people with accessibility issues in gaming and the understanding that everyone should have the freedom to express themselves in video games as they do in other forms of entertainment.

156


Accessibility in Gaming

Developing a 3D Video Game and Implementing Auditory and Visual Accessibility Features Using the Unity Game Engine Ross Allan & David Corsar

Introduction The purpose of this project was to investigate the problems that people with accessibility issues face within 3D video games, as well as solutions to these problems and answers to many relevant questions revolving around these issues. A game was designed which highlighted exactly how these issues could be tackled and set a good example for how to apply certain methodologies, specifically relating to auditory and visually impaired players. Early market research showed that high numbers of people with accessibility issues play video games. Wing Chin (2015): "over 92% of people play games despite difficulties from impairments." The project's motivation stems from the underrepresentation of people with accessibility issues in gaming and the understanding that everyone should have the freedom to express themselves in video games as they do in other forms of entertainment.

Project Aim

This project intended to research the accessibility issues faced within video games: what they are, whether they are recognised, and how they can be tackled, as well as to develop a 3D first-person shooter game which highlights appropriate methods to do so, with a focus on auditory and visual accessibility issues.

Methods

Figures and Results

Five main accessible features were successfully added to the game. Three of these were for auditory accessible players, including subtitling, audio sliders and directional audio cues. The remaining two were for visually accessible players: a fully scalable user interface and a brightness slider. Smaller features such as enhanced effects when shot by enemies and some post-processing were also added to aid visual accessibility. These features were added to enhance the playability of the game for people with auditory and visual accessibility issues. Additional features such as multiple colour-blind palettes and a possible voice-over for the enemies could have been included to enhance accessibility further.

Accessibility features in menu

Figure 2. Screenshot from AccessFPS (2020)

Scaled UI

Figure 3. Screenshot from AccessFPS (2020)

Enhanced Visual Effects

Directional Audio Cues

Fifteen participants were split into groups of five for user testing. One group played with visual impairments through either removing glasses (short-sighted) or making use of external equipment such as sunglasses or other vision distorting goggles. Another group played under auditory impaired conditions using no volume or low volume replicating deaf or hard of hearing impairments. The final group tested the game with no auditory or visual impairments and played under normal conditions. Groups were then asked to complete a survey in relation to how they played.

Figure 1. Powered by Unity (2020)

The game was developed using Unity 2018.4.15 in combination with an FPS template and scripts written in C#. Upon loading the FPS template, immediate accessibility issues were present: there was no way to scale the user interface, no audio or brightness sliders, and very little depth to the game overall. Solutions quickly became apparent for these issues, and implementing them through Rapid Application Development proved to be a great way to do this, showcasing a new version of the game each week.

Conclusion

Figure 7. Powered by Unity (2020)

My game was found to have a very successful accessibility implementation, scoring 77% on the System Usability Scale (SUS), an industry-standard scale widely used for over 30 years which has featured in over 1300 publications (MeasuringU 2011). The overwhelming majority of results were positive and showed that the game was still fun whilst catering for those with auditory and visual accessibility issues. Future work for this project may include adding more accessible elements, such as a colour-blind palette or voice-over work for the enemies, to further help those with visual and auditory issues. Further testing could include posting my game to accessibility communities for further feedback.
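For context, a SUS score is derived from ten 1–5 Likert responses: each odd-numbered (positively worded) item contributes its response minus 1, each even-numbered (negatively worded) item contributes 5 minus its response, and the sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch of that calculation (the responses below are illustrative, not the study's data):

```javascript
// Compute a System Usability Scale (SUS) score from ten 1-5 responses.
// Odd items (index 0, 2, ...) are positively worded: contribute response - 1.
// Even items are negatively worded: contribute 5 - response.
function susScore(responses) {
  if (responses.length !== 10) throw new Error("SUS needs 10 responses");
  const total = responses.reduce((sum, r, i) =>
    sum + (i % 2 === 0 ? r - 1 : 5 - r), 0);
  return total * 2.5; // scale the 0-40 raw range to 0-100
}

// Example: a fairly positive respondent.
console.log(susScore([4, 2, 4, 2, 4, 2, 4, 2, 4, 2])); // 75
```

A mean score across all participants is then compared against the commonly cited SUS average of 68.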

Acknowledgments

Thank you to all the friends and family who participated in and helped to facilitate the user testing for my game. A special thank you to David Corsar, the supervisor for this entire project, whose guidance and assistance was second to none, as well as the module coordinator Dr John Isaacs and course leader Dr Mark Zarb for all of their hard work and support, keeping the entire course and honours project afloat amidst a global pandemic. A final thank you to Dr Michael Heron, who taught and inspired me to develop video games through his Games Development course in Year 3.

References

Figure 4. Google Form – VISUAL IMPAIRMENT (2020)

Figure 5. Google Form – AUDITORY IMPAIRMENT (2020)

1. Wing Chin (2015). Around 92% of people with impairments play games despite difficulties. Available at: https://www.game-accessibility.com/documentation/around-92-of-people-with-impairments-play-games-despite-difficulties/
2. MeasuringU (2011). Measuring Usability With The System Usability Scale. Available at: https://measuringu.com/sus/

Figure 6. Google Form – NO IMPAIRMENT (2020)

157


STUDENT BIOGRAPHY

Niall Butler Course: BSc (Hons) Digital Media An investigation into current hybrid modelling techniques and the impact within animation A very recent and upcoming trend in the creative/entertainment industry is ‘hybrid modelling’. Hybrid modelling within animation allows creators to produce animation where the difference between 3D and 2D is blurred, therefore creating new diverse content by making use of both 3D models and 2D scenes, or vice versa. This project looks to showcase techniques and methods that can allow for use of both 3D and 2D in one animation, helping create unique art styles and add to the variety of content offered within this industry.

158


[Figure: pie chart of survey responses – "What style of animation do you think this short utilises?" (52 responses); options included 2D, Both and Unsure]

References – Into the Spider-Verse (2018) & Ori and the Blind Forest (2015)

159


STUDENT BIOGRAPHY

Gregor Davidson Course: BSc (Hons) Digital Media A Comparison of Accessibility Options in Video Game Menus. Video games have become very popular in today's society, and often developers have not considered accessibility enough in the development of their games. This project took four selected video game titles, Fortnite, FIFA 20, Minecraft and Apex Legends, to compare and analyse the accessibility options they offered, in order to create a menu that would be accessible for as many users as possible. Through research of the literature and initial research into these games, a checklist of accessibility options was designed to compare what options some games offered that others didn't. After this, user testing was completed to discover which games used their accessibility options in the best way for the users.

160


A Comparison of Accessibility Options in Video Game Menus. Gregor Davidson Supervisor: Carrie Morris

Introduction Video games have become very popular in today's society, and often developers have not considered accessibility enough in the development of their games. This project took four selected video game titles, Fortnite, FIFA 20, Minecraft and Apex Legends, to compare and analyse the accessibility options they offered, in order to create a menu that would be accessible for as many users as possible. Through research of the literature and initial research into these games, a checklist of accessibility options was designed to compare what options some games offered that others didn't. After this, user testing was completed to discover which games used their accessibility options in the best way for the users.
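The cross-game comparison described above can be sketched as a simple tally over per-game option lists. The option names below come from the project's checklist, but the Yes/No values are a partial, approximate transcription of two of the checklists, so treat them as illustrative rather than as the study's data:

```javascript
// Tally how many of the compared games support each accessibility option.
// Values are an approximate transcription of the Fortnite and Apex Legends
// checklists on this poster; only a subset of options is shown.
const checklists = {
  "Fortnite": {
    "Text to Speech": false, "Colour Blindness Options": true,
    "Subtitles": true, "Re-size Subtitles": true, "Remap Buttons": true
  },
  "Apex Legends": {
    "Text to Speech": true, "Colour Blindness Options": true,
    "Subtitles": true, "Re-size Subtitles": true, "Remap Buttons": true
  }
};

// Count, per option, how many games support it.
function tally(lists) {
  const counts = {};
  for (const game of Object.keys(lists)) {
    for (const [option, supported] of Object.entries(lists[game])) {
      counts[option] = (counts[option] || 0) + (supported ? 1 : 0);
    }
  }
  return counts;
}

const counts = tally(checklists);
console.log(counts["Text to Speech"]);           // 1
console.log(counts["Colour Blindness Options"]); // 2
```

Options supported by every game are candidates for a baseline accessible menu; options supported by few games highlight where the industry falls short.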

Project Aim The main aim of the project was to discover what makes a video game accessible and to create a menu to help make games more accessible in the future. The menu created after the user testing stage would be built in Adobe Xd.

Methods

One of the methods used was a checklist of accessibility options, covering: Magnifier, Virtual Keyboard, Font-Size, Hi-Contrast, Enlarged Pointer, Text to Speech, Colour Blindness Options, Enlarger, Subtitles, Re-size Subtitles and Remap Buttons.

Over the four checklists that were completed, there ended up being a total of only 6 different accessibility options present across the four games. Each title used the accessibility options differently, so it was important that user testing was completed in order to decide which was the best way to use them. The challenge faced with user testing was that users with accessibility issues were needed. In the end the blind user, deaf user and motor-impaired user had to be imitated by using a blindfold, no audio and sellotaping three fingers together.

Figures

Fortnite
Accessibility Option: Yes or No
Magnifier: No
Virtual Keyboard: No
Font-Size: No
Hi-Contrast: Yes
Enlarged Pointer: No
Text to Speech: No
Colour Blindness Options: Yes
Enlarger: No
Subtitles: Yes
Re-size Subtitles: Yes
Remap Buttons: Yes

Apex Legends
Accessibility Option: Yes or No
Magnifier: No
Virtual Keyboard: No
Font-Size: No
Hi-Contrast: No
Enlarged Pointer: No
Text to Speech: Yes
Colour Blindness Options: Yes
Enlarger: No
Subtitles: Yes
Re-size Subtitles: Yes
Remap Buttons: Yes

Results

After the user testing was complete, the menu was designed based on how the users ranked each of the options and on their comments in the feedback section. The users felt that overall Fortnite had the best subtitles and subtitle size, as the subtitles were the clearest to read and they showed how they would look in the menus. The users also found that Fortnite had the best colour-blind settings, as it offered a test for users who may not know they are colour blind and showed how the colours would look in the game. Based on these results, the menu was designed to try to be accessible to as many users as possible.

Conclusion

The conclusion of this project was that accessibility options have different impacts when they are presented in different ways. Overall, the menu designed should be usable by many people based on the results. However, due to limitations it will not be usable by everyone, as there wasn't a user with cognitive impairments available to test with. Finally, there is not enough being done by developers to tackle accessibility issues in the video game industry as a whole.

Acknowledgments

Thank you to Michael Heron, who was very helpful throughout the process of completing the Literature Review. Thank you to Carrie Morris, who took over as my supervisor and has done a fantastic job.

References Heron, M., 2012. Inaccessible through oversight: the need for inclusive game design. The Computer Games Journal, 1(1), pp.29-38. Bierre, K., Chetwynd, J., Ellis, B., Hinn, D.M., Ludi, S. and Westin, T., 2005, July. Game not over: Accessibility issues in video games. In Proc. of the 3rd International Conference on Universal Access in HumanComputer Interaction (pp. 22-27).

Digital Media

161


STUDENT BIOGRAPHY

Jamie Dempster Course: BSc (Hons) Digital Media Report Racism in Football. Can you stop racism within football? My project is based on racism in football and how we can try to remove it from the game that we all know and love. The project takes the form of a website, built using several different computing languages. When I was planning out the project I had to think carefully about what I wanted it to achieve. After some consideration I decided the project would provide a dedicated service for football fans to report any form of racism: they would fill out a form to report the racism that they have seen or heard.

162


Report Racism in Football Can you stop racism within football? Jamie Dempster Supervisor: Carrie Morris

Introduction & Project Aim My project is based on racism in football and how we can try to remove it from the game that we all know and love. The project takes the form of a website, built using several different computing languages. When I was planning out the project I had to think carefully about what I wanted it to achieve. After some consideration I decided the project would provide a dedicated service for football fans to report any form of racism: they would fill out a form to report the racism that they have seen or heard.

Research Before I started the implementation process of this project, I carried out some research. I already knew that there was an issue with racism in football, so I only carried out a small amount of research on the topic itself. The main focus of my research was to find out whether people would actually use my project to report racism if they came across it at football events. I sent out a questionnaire with a series of questions for people to fill in, ranging from "Would you feel comfortable using the website to report racism?" to "What do you think is the best way to tackle racism?".

Software Used:

Implementation

Conclusion

[Figures: Adobe Dreamweaver and Adobe Photoshop logos]

The first part of the implementation phase was getting high-quality images to use in the project. They had to be royalty-free images, so I used a website that I use every time I am doing projects like this: Unsplash. After I had got all the images that I wanted to use, I opened up Dreamweaver, which was the piece of software that I used to build this project. I started off by creating the homepage for the website. I have used four different computer languages in this project: HTML, CSS, JavaScript and PHP. When I was working on the report page, it was very important for the form to include a function that allowed the user to fill in the report anonymously. The reason that was so important was so the user felt safe knowing that no one would know that it was them who filled out the report. Once I had made all of the pages, the next step was to make the website mobile responsive. Ensuring that it is mobile responsive is crucial so that everything on the page is displayed properly on smaller devices.
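The anonymous-reporting behaviour described above can be illustrated with a small client-side sketch; the field names are hypothetical and this is not the site's actual code:

```javascript
// Strip identifying fields from a report before submission when the
// user ticks the "report anonymously" option. Field names are hypothetical.
function prepareReport(form) {
  const report = {
    match: form.match,
    description: form.description,
    anonymous: Boolean(form.anonymous)
  };
  // Only attach the reporter's details if they chose to be identified.
  if (!report.anonymous) {
    report.name = form.name;
    report.email = form.email;
  }
  return report;
}

const sample = prepareReport({
  match: "Home match, Saturday",
  description: "Racist chanting from section B",
  name: "Jane Doe",
  email: "jane@example.com",
  anonymous: true
});
console.log(sample.name === undefined); // true: no identity attached
```

On the live site the same guarantee would also need to be enforced server-side (in PHP), since client-side checks alone can be bypassed.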

Looking back at the project I have worked on, I feel that it was a success, as I have done what I set out to do by making it easy for football supporters to report any racism that they have come across when attending football matches in the country. The big plus is the fact that when you report the racism you can hide your identity, so that no one will know that it was you who made the report.

What’s Next? With this project now complete, I feel that the next step is to get it out there and start getting people to promote it, so that more and more people get to know about it and start to use it; hopefully this will increase the number of people reporting racism that they hear at football matches. With people reporting it, hopefully it will start to reduce the number of victims of racial abuse in football. Then, hopefully, down the line it could turn into an organisation and start to get funding to keep the service going and make improvements where necessary.

References h " p s : / / commons.wikimedia.org/wiki/ File:Adobe_Dreamweaver_CC_ic on.svg h"ps://unsplash.com/ h"ps://en.wikipedia.org/wiki/ Adobe_Photoshop#/media/ File:Adobe_Photoshop_CC_icon. svg

163


STUDENT BIOGRAPHY

Anastasia Di Modugno Course: BSc (Hons) Digital Media A Comparison of Accessibility Options in Video Game Menus. Usually, when motion capture is mentioned, the first ideas that come to mind relate to the videogame and movie industry; however, this technology is used in a variety of fields, from physiotherapy to music, the military and even psychology. Motion capture is a technology in constant evolution that has allowed us not only to give life to unbelievable characters, but also to analyse our body from a completely new perspective. It is a new, modern and accessible technology, but could it also become a medium to preserve and teach about cultures and traditions?

164


165


STUDENT BIOGRAPHY

Bruce Dickson Course: BSc (Hons) Digital Media Why isn’t Football Considered the most Physically and Mentally Demanding Sport? Football is one of the most widely watched sports worldwide, yet it is still not deemed a physically or mentally demanding sport. There are statistics and research to prove otherwise, and yet people still turn a blind eye to the problems this brings to the athlete. This project shows and visualises the problems football involves, from injury rehabilitation to the mental fatigue that occurs when taking part in the sport. My artefact is an informative video that presents statistics on football injuries and mental health problems, such as suicide rates and depression levels worldwide. The artefact provides a visual representation of my project and poster, allowing my key points to get across to the target audience more easily.

166


Why isn’t football considered the most physically and mentally demanding sport? Bruce Dickson, Supervised by Tiffany Young

Introduction Football is one of the most widely watched sports worldwide, yet it is still not deemed a physically or mentally demanding sport. There are statistics and research to prove otherwise, and yet people still turn a blind eye to the problems this brings to the athlete. This project shows and visualises the problems football involves, from injury rehabilitation to the mental fatigue that occurs when taking part in the sport. My artefact is an informative video that presents statistics on football injuries and mental health problems, such as suicide rates and depression levels worldwide. The artefact provides a visual representation of my project and poster, allowing my key points to get across to the target audience more easily.

Project Aim The purpose of the project is to reflect on the physical and mental aspects of football and to give an insight into the problems of the sport, so that it can be deemed a more physically and mentally challenging sport. In summary, the project shows the audience the facts and statistics in a formal yet professional way so that they can gain a better understanding of the sport and its darker side. This was accomplished through surveys of footballers with mental and physical problems, a video demonstration of the key topics of my literature review, and this poster, which presents the statistics in a visual format. The poster allows the audience to gain more knowledge and, hopefully, to change their opinion of, or adjust their feelings towards, the sport.

Methods The implementation task was to create an artefact explaining the project in a visual and professional manner. The studies and research for this project were visualised in video format. There were many studies proving that football is a physically and mentally demanding sport, so the best way to visualise these findings was a video through which the audience could see the problems, since video is an effective way of teaching such a big subject, especially a topic that is new to many people.

Figures and Results

Conclusion


Does football lower blood pressure? METHOD: A Norwegian preparticipation cardiac screening of male professional football players enrolled 493 white European, 47 matched controls, 49 black and 53 players of another ethnicity. BP was measured as a mean of two measurements. Height and weight were self-reported, and body surface area (BSA) was calculated. The echocardiographic parameters were indexed to BSA. Heart rates (HRs) by electrocardiography and pulse pressure (PP) were considered as surrogates for sympathetic activity. Arterial compliance was calculated as stroke volume (BSA)/PP. RESULTS: The players’ mean age was 25 years (18-38) and mean BP 122/69 ± 11/8 mmHg. There were no significant differences in prevalence of hypertension between all players, 39 (7%), and controls, four (9%), or between white, 32 (7%), and black, five (10%), players. There was a significant positive linear relationship between BP and left ventricle mass (BSA), left atrium volume (BSA), stroke volume (BSA), HR and PP, and a negative relationship to arterial compliance (BSA). CONCLUSION: Although the prevalence of high BP in professional football players was low, our data indicate a novel association between elevated BP and reduced arterial compliance, increased left ventricle mass and left atrium volume even in young athletes. This emphasizes closer focus on BP measurements and standardized follow-up after preparticipation screening of athletes.
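The two derived quantities in the study above can be sketched as a short calculation. Note that the abstract says BSA was calculated from self-reported height and weight but does not name the formula; the Du Bois formula used here is a common clinical choice and is an assumption on my part. The compliance definition (indexed stroke volume divided by pulse pressure) is taken directly from the abstract.

```javascript
// Body surface area via the Du Bois & Du Bois formula (an assumption --
// the study does not state which BSA formula it used). Height in cm,
// weight in kg, result in m^2.
function bodySurfaceArea(heightCm, weightKg) {
  return 0.007184 * Math.pow(heightCm, 0.725) * Math.pow(weightKg, 0.425);
}

// Arterial compliance as defined in the abstract: stroke volume indexed
// to BSA, divided by pulse pressure (ml/m^2 per mmHg).
function arterialCompliance(strokeVolumeMl, bsaM2, pulsePressureMmHg) {
  return (strokeVolumeMl / bsaM2) / pulsePressureMmHg;
}
```

For example, a 180 cm, 75 kg player has a BSA of roughly 1.94 m²; with the study's mean BP of 122/69 mmHg the pulse pressure is 53 mmHg, which would be the denominator in the compliance calculation.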

The conclusion of the project is that football is one of the most physically and mentally demanding sports there is. Sports such as boxing have visually obvious injuries, but there is scientific evidence that football is just as mentally and physically demanding on the body as boxing. This poster takes these scientific studies and displays them in a way that is easier to understand, to teach those who are not as clued up on why football is not regarded as such a demanding sport. This is backed up with evidence from scientific papers and studies using different techniques: recording blood pressure, body fat, mental fatigue levels and so on. Football is constantly increasing in popularity around the world, especially women’s football, and there is evidence to support this. The research covered football as a whole, from the origins of the sport to its problems, of which there are many, such as gender inequality, racism and hooliganism.

Acknowledgments I would like to thank everyone who took an interest and showed support for my project— your encouragement has helped me to tackle what has been a challenging but interesting topic.

References SMITH, M., COUTTS, A., MERLINI, M., DEPREZ, D., LENOIR, M. and MARCORA, S., 2016. Mental Fatigue Impairs Soccer-Specific Physical and Technical Performance. Medicine & Science in Sports & Exercise, 48(2), pp.267-276.

figures

167


STUDENT BIOGRAPHY

Fraser Dow Course: BSc (Hons) Digital Media Developing Immersive VR Experiences that Evoke Emotive Player Responses The aim of this project is to explore how immersion in video games can be developed further through the use of Virtual Reality, and its importance in evoking an emotional response from players. It delves into what immersion is and how to achieve this state, as well as how to maintain it. VR offers users a simulated experience that can either be a reflection of the real world or something entirely different. This technology is one of the biggest current trends in gaming technology, illustrating the relevancy of this project, which investigates how the technology has progressed over the years in order to reach the amazing advancements that we have today.

168


169


STUDENT BIOGRAPHY

Calum Grant Course: BSc (Hons) Digital Media Using 3D Modelling to Raise Awareness of the Impacts Rising Sea Levels Have on Coastal Settlements Climate change is an ever-growing issue that has been around for over a century now. However, in the last 30 years the rate at which climate change is evolving has increased drastically. It is becoming more important for us as humans to do our part to slow the process down or attempt to bring it to a halt. The doubt and speculation surrounding climate change create a divide amongst people and a lack of empathy towards the problem. Many techniques are used to communicate the facts and raise awareness, such as posters, adverts and documentaries, but these are all common. An alternative way of showcasing the problem could change the way it is perceived.

170


Using 3D Modelling to Raise Awareness of the Impacts Rising Sea Levels Have on Coastal Settlements Supervisor: Carrie Morris Student: Calum Grant Introduction Climate change is an ever-growing issue that has been around for over a century now. However, in the last 30 years the rate at which climate change is evolving has increased drastically. It is becoming more important for us as humans to do our part to slow the process down or attempt to bring it to a halt. The doubt and speculation surrounding climate change create a divide amongst people and a lack of empathy towards the problem. Many techniques are used to communicate the facts and raise awareness, such as posters, adverts and documentaries, but these are all common. An alternative way of showcasing the problem could change the way it is perceived.

Project Aim The aim of this project is to produce a realistic 3D simulation representing the changes and effects of rising sea levels on coastal settlements. The model will have appropriate sounds and subtitled facts, to hopefully leave a lasting impact on the viewer’s opinion of the importance and severity of climate change.

Methods

The initial idea was to produce the whole project solely in Autodesk 3ds Max; however, after a trial period of attempting to create realistic water, many problems arose. Due to these issues the project was altered to include Unity. The models were created in 3ds Max and imported into Unity for use alongside the Crest water plug-in.

Implementation/Evaluation

As mentioned, the models were created and textured in 3ds Max; once they were all complete they were packed and imported into Unity. The Crest water plug-in produced high-quality, realistic water to use in the animation. The raw shots were captured and then combined in Adobe Premiere Pro, where sounds and captions were added. This produced the final product: an educational 3D animation. The renders shown were taken in 3ds Max to display the quality of the models; this quality was lost when used in Unity, as the textures did not show up. The water greatly strengthens the appearance of the animation; it would take far longer to replicate fluid movements of that quality in 3ds Max, and it would also require a very high standard of equipment. It was essential to keep the video short and concise to maintain user focus and attention. To evaluate the project, a questionnaire was produced and sent to 20 people to assess the outcome. Due to the situation involving COVID-19, the range of demographics reachable was limited; the vast majority of the participants were young adults. Despite the narrow range of people, the feedback would still be useful.

A sample question from the questionnaire is displayed above. Most participants believed that 3D modelling/simulation generally has more of an impact on them than other methods of communicating information. Part of this could be down to the oft-cited claim that 65% of the population are visual learners, so something that is visually appealing whilst also being educational has a higher probability of its viewers and listeners retaining the information.

DIGITAL MEDIA

Issues/Challenges A few issues and challenges presented themselves as the project went on. The first, already mentioned, was that the facilities available could not handle certain techniques within 3ds Max (e.g. fluid reactor); attempting these would either slow the programme down considerably or crash it altogether. This issue was resolved by using Unity to run the animation, at the cost of high render and texture quality. Another issue that affected the outcome of the project was the global pandemic that struck at the time of production and evaluation. Facilities were limited and the equipment available was below the standard of that at the university campus. This caused the production time to increase and resulted in a lower-quality final product.

Conclusion

The outcome of the project was a little different to the intended one. The majority of people found this method of raising awareness more effective than methods such as posters and adverts (infographics); however, documentaries were still favoured. It would appear the effects of 3D modelling on raising awareness are somewhat effective, and a better quality of model could change the results.

Future Work There is plenty of opportunity for future work in this area. Certainly, with a higher calibre of expertise and facilities, a top-quality product could be created, which would perhaps have a bigger effect on people. 3D modelling could prove very useful in future work on climate change, replicating the problems and issues it brings to our world. Further research and study into the facts could also provide higher quality.

171


STUDENT BIOGRAPHY

Rohan Hutcheon Course: BSc (Hons) Digital Media Potential Benefits of Animation versus Static Images in Communicating Visual Information Visual information is critical to how humans see and learn about the world around them. This project researched visual information in the forms of animation and static images and how each is conveyed: understanding how animation and static images are used to display visual information, investigating the positives and negatives of both forms of media, and building an understanding of what information is best conveyed through animation and what is best conveyed through static images.

172


173


STUDENT BIOGRAPHY

Guenole Lorho Course: BSc (Hons) Digital Media Research on the Cognitive Process behind Edit Blindness and the Match-Action Cut in Film EDITING: Editing (or cutting) is the craft of stitching moving pictures (i.e. shots) together in order to create a cohesive narrative. A cut (or edit) is an abrupt, but usually trivial, film transition from one sequence to another. CONTINUITY EDITING: Editing for continuity is editing with the intent of making the editing invisible to the viewer. Continuity facilitates the viewing experience, guides viewers through the narrative and optimises the information given to them. MATCH ACTION CUT: The match action cut is the easiest cut to miss, with participants missing them 32.4% of the time even when their only task was to detect cuts (Smith & Henderson 2008). A match action cut is an editing technique achieved by cutting between two shots of the same subject during “the greatest point of action”, with the motion being seen before and after the cut, across two shots (Smith 2006). There also appears to be a bias towards overlapping the action (average time 125 ms) in order to perceive the cut as smooth (Shimamura et al. 2014). EDIT BLINDNESS: Edit blindness is the phenomenon of not noticing that a cut has happened. This phenomenon seems to emanate from the limitations of human cognition and the minuscule amount of information a person can process at any given time. MATCH ACTION CUT AND EDIT BLINDNESS: In the context of the match action cut, Smith (2006) suggests that during the cut the viewer is focused on the motion itself and the expectation of completing the motion, and will not detect the cut if those expectations are met. If enough cues (such as motion and sound) are uninterrupted across the cut, it has a higher probability of being missed.

174


175


STUDENT BIOGRAPHY

Hamish MacRitchie Course: BSc (Hons) Digital Media How Visual Effects can be Used to Increase a User’s Immersion in a Sci-fi Trailer This project is an investigation into how visual effects (VFX) impact a viewer’s immersion in a sci-fi-themed trailer. Key aims of the project were: to research current visual effect techniques and technologies and how they are implemented in current films; to identify, from research, what types of visual effects could be implemented in this project; and to learn how to create and manipulate VFX so that they would improve the overall quality and user experience of the final production. Key techniques included 3D animation, motion tracking, fire/smoke simulations, spark particles and digital compositing.

176


The Crashing Ship: How visual effects can be used to increase a user’s immersion in a sci-fi trailer

By Hamish MacRitchie, supervised by Jay Lytwynenko

Abstract/Aims This project is an investigation into how visual effects (VFX) impact a viewer’s immersion in a sci-fi-themed trailer. Key aims of the project were: to research current visual effect techniques and technologies and how they are implemented in current films; to identify, from research, what types of visual effects could be implemented in this project; and to learn how to create and manipulate VFX so that they would improve the overall quality and user experience of the final production. Key techniques included 3D animation, motion tracking, fire/smoke simulations, spark particles and digital compositing.

Design

The overall story and direction of the short trailer was established during the start of the design phase. Multiple ideas based on a sci-fi theme were considered; the final idea, a ship crashing from space down to Earth, was chosen as the direction of the trailer because of the range of VFX that could realistically be incorporated into it. Once the overall direction of the trailer was finalised, a script and storyboards were created and locations were scouted, all processes included in standard pre-production phases in film making.


Implementation & Testing The majority of the implementation for this project took place during the post-production phase. Many of the key techniques had to be learned before implementing them in a final scene, so separate tests of different effects were created to understand how they worked: fire and smoke simulations using the Eevee renderer, spark effects using a particle system, and motion tracking using the tracking suite included in Blender. The techniques learned in these single-case scenarios were later used when implementing all elements in a single scene.

The first technique worked on was 3D animation. The animation included was very simple and was done using the auto-keying tool: the ship was moved to its start and end positions in the timeline and key frames were added, with additional frames added in between where the ship needed to rock and shake to show how the damage was affecting it.

One of the most challenging techniques to perform was the motion tracking, which was required to simulate the live camera movement in a 3D space. This requires creating multiple tracking points that use contrast data in a scene to interpret the camera movement. Many of the problems faced were due to the shots being very barren and lacking contrasting detail. The lack of detail couldn't be changed, but the original footage was flat in colour and this could be improved using colour correction; basic colour correction was performed on the footage and the tracking was attempted again with a much better result.

The fire and smoke effect was created using the quick effect tool in Blender's object settings, which allowed for the quick creation and simulation of a fire and smoke effect. The main challenge with this effect was the overall quality: the effect was attached to the ship, which animated over a long distance in the scene, meaning the domain for the effect had to be large too, causing a reduction in the quality of the effect.

The spark effect was created using a particle system in Blender. A particle modifier was added to the object to be used as the emitter, and the settings were then tweaked to make the particles react like sparks; the key settings were a short and random lifetime, high velocity, a high number of particles and random emission. An ico sphere was used for the particle object, and a material that fades out depending on the age of the particle was added, created using Blender's node editor.

The final stage of the implementation was to composite all elements together. Each scene was rendered using Eevee and Cycles for different elements: Eevee was used to render the effects whilst Cycles was used to render the ship and shadows. The sequences were rendered as PNGs and imported into Adobe After Effects, where each element was layered appropriately in the timeline, usually in the order background footage, ship and shadows, then effects.
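The spark behaviour described above is configured through Blender's particle settings rather than coded by hand, but the underlying logic (short randomised lifetimes, high initial velocity, and a material that fades with particle age) can be sketched as follows. This is an illustrative model only, not Blender's implementation, and the numeric ranges are invented for the example.

```javascript
// Illustrative sketch (not Blender's actual code) of the spark behaviour:
// each particle gets a short, randomised lifetime and a high velocity.
function spawnSpark(rand = Math.random) {
  return {
    age: 0,
    lifetime: 0.2 + rand() * 0.3,   // short and random, in seconds
    velocity: 8 + rand() * 4,       // high initial speed
  };
}

// Alpha fades from fully opaque to invisible over the particle's lifetime,
// mimicking the age-driven material built in Blender's node editor.
function sparkAlpha(spark) {
  const t = Math.min(spark.age / spark.lifetime, 1);
  return 1 - t;
}
```

A newly spawned spark is fully opaque; as its age approaches its lifetime, the alpha reaches zero and the spark disappears, which is what gives the trail its flickering, dying-ember look.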

With Ship Model

With Effects


In order to test the overall effectiveness of the trailer, it was sent out to a number of participants who were asked to watch it and answer a series of questions evaluating the effectiveness of key elements, including the animation of the ship, the fire/smoke effect, the spark effect and the overall compositing of digital elements. Overall, 60% of 20 participants responded positively to the trailer, but its effectiveness could have been improved based on the results; elements of the effects, such as their location and emission, could have been improved based on feedback.

Example of storyboard

During the design stage the software to be used was considered, and a combination of Blender and Adobe After Effects was chosen. Blender is free to use and is a very diverse and comprehensive 3D package, allowing 3D modelling, animation, VFX and compositing to be done in the one piece of software. However, Adobe After Effects was chosen for the compositing due to the tools it had available to fix potential problems that might arise during rendering. Premiere was also used to edit the individual clips together and to add sound work into the sequence.

Original Footage

With Ship Model


Conclusion

Feedback from users highlighted some key areas for improvement and focus in future iterations. These include the detail of the animation, the emission and location of both the fire and spark effects, and the volume of sparks being emitted. 60% of 20 participants agreed the trailer was immersive overall, which means that the effects were somewhat successful in their goal; however, it could be greatly improved to increase the number of people who felt immersed.

Blender Workspace

Digital Media

(Design,Production,Development) Hamish MacRitchie(1706216)

With Effects


Future Work

Overall this project was challenging but very fulfilling to conduct. Many of the techniques used in this project can be used in a variety of different and creative ways. This project has served to increase the understanding of how VFX are created and how they can be better implemented by focusing on certain areas of them. This will in turn improve the quality of VFX used in future projects.

References
3D model of space ship sourced from: https://www.turbosquid.com (public domain license).

177


STUDENT BIOGRAPHY

Serena-Niamh McGurk Course: BSc (Hons) Digital Media How Staging and Sound Techniques Enhance the Illusion of Smell and Touch in Media As stated by RNIB (2019), only 20% of most major broadcasts are available with audio description, meaning 80% of programmes are not fully accessible to visually impaired users. Audio description is important to visually impaired audiences because they cannot use sight to take in information: they miss out on expressions, body language, colours and movement, all of which are vital in creating the sensory process within our brains that comprehends situations and experiences in society. For example, an actress with her arms crossed, not making eye contact, could be expressing anger. According to the NHS (2018), ‘almost 2 million people are living with sight loss’ right now, and RNIB (2018) research predicts that by 2050 the number of people with partial sight and blindness in the UK will double. This showcases the already large, and potentially larger, need for more visual aids to be in place, so that visually impaired audiences do not miss out on vital sensory information, understanding, and being part of society.

178


179


STUDENT BIOGRAPHY

Andrew Ogilvie Course: BSc (Hons) Digital Media Virtually Sport: Improving the Perception of eSports with Traditional Sport Audiences “The bottom line is how lazy is our society becoming where we make sitting at a computer and twiddling our thumbs while we eat chips all day a sport?” (Gandara, 2013). Electronic sports can include video games; however, they can also involve competing digitally through the use of exercise equipment, such as digital spin classes, but the gaming side has always impacted the reputation of the industry as a whole. Long before the term eSports was even conceived, video games spent decades being vilified by the media (Williams, 2003) and branded as dangerous or frivolous. Although the perception of the industry has improved over the years, the stigma still lingers and threatens the growth of the professional video gaming side of the industry.

180


Virtually Sport: Improving the Perception of eSports with Traditional Sport Audiences Andrew Ogilvie & Ines Arana

Introduction

Results

Conclusion

The video was then tested on 16 individuals, 11 of whom were over 45 and thus fit the targeted demographic; they were asked their opinion of eSports before and after seeing the advertisement.

In conclusion, the main reason that eSports is surrounded in stigma is due to lack of knowledge. There is also evidence to suggest that advertising to the traditional sport demographic using already established methods is effective even when not discussing traditional sport topics.

“The bottom line is how lazy is our society becoming where we make sitting at a computer and twiddling our thumbs while we eat chips all day a sport?” (Gandara, 2013). Electronic sports can include video games; however, they can also involve competing digitally through the use of exercise equipment, such as digital spin classes, but the gaming side has always impacted the reputation of the industry as a whole. Long before the term eSports was even conceived, video games spent decades being vilified by the media (Williams, 2003) and branded as dangerous or frivolous. Although the perception of the industry has improved over the years, the stigma still lingers and threatens the growth of the professional video gaming side of the industry.

Project Aim The aim of the project is to educate the two demographics, older generations who are not very aware of video games and traditional sports fans, about the achievements of eSport and its similarities to traditional sports. The aim would be to appeal to these demographics in their form of media and alleviate the stigma surrounding the eSport industry.

As seen in the chart above, over half of the participants consider eSport a sport, meaning the advertisement provoked a positive response. As seen in the graph below, over 80% consider it a sport in some form, even if not a fully fledged traditional one. This means that, while potentially not considering it fully a sport, the demographic is willing to accept it as one in some form.

The video itself did successfully disperse the stigma for several individuals, as seen in comparison to the chart above, and while viewers were not, overall, interested in involving themselves with the industry, they did seem to have a higher level of acceptance of it after the advertisement. It was also found that using 3D animation was impactful in the video to some degree; however, the time needed to create the assets may not be effective in relation to the impact it provided.

Future Work

Methods The method was to create a video advertisement that emulated traditional sports advertisements. This would appeal to the target demographics and help further the comparison between industries.

In terms of the actual video, the individuals were asked to rate the advertisement on a scale of 1 to 10. As seen below, the overall rating received was 7.1, which means the video was received reasonably well by the audience.

The video incorporated 3D animation, created in 3ds Max and rendered with Arnold, as the form of media is used in both sports graphics and is the form of media eSports is played in. Using After Effects, the animation was used alongside real life footage to create the advertisement, narrated by a voice over.

When asked specifically, only a slight majority thought that the 3D enhanced the advertisement; however, the majority did not find that the 3D animation hindered or impacted the video negatively, with many remarking that it helped break up the pace of the video.


In the future, hyper-realistic renders, or no animation at all, could be tested within the advertisement to see whether it was the specific animation that was considered a hindrance. A larger testing group would also provide better results, and could have incorporated several different demographics, further increasing the accuracy of the targeted ones.

Acknowledgments

Special thanks to Dr Jean-Claude Golovine who, while unable to see the whole project through, was crucial in setting up the foundations and Dr Ines Arana who was vital in seeing it through to the end, despite not being in her field of expertise.

References

1.  Gandara, L. 2013. Video games aren’t sports. [online]. Available from: https://www.talonmarks.com/opinion/2013/12/12/videogames-arent-sports/ [Accessed 8 November 2019]. 2.  Williams, D. 2003. The Video Game Lightning Rod. 3.  Syracuse Staff. 2019. With Viewership and Revenue Booming, Esports Set to Compete with Traditional Sports. [online]. Accessed from: https://onlinebusiness.syr.edu/blog/esportstocompete-with-traditional-sports/#viewers [Accessed 27 October 2019].

BSc (Hons) Digital Media 181


STUDENT BIOGRAPHY

Hams Samy Course: BSc (Hons) Digital Media Comparison of 3D Lighting Techniques to Investigate the Psychological Effect of Mood Mood and emotion are different in human psychology. Moods last longer than emotions and essentially have a purpose, but are temporally remote. Lighting can affect human mood in various environments and conditions, such as work environments, the film industry and 3D animation scenes. The colour of light and the lighting style (high-key or low-key) affect the viewer’s perception of a scene. Various scenes were developed in this research, along with a questionnaire survey, to gain more understanding of human psychology and light. This work may contribute to the knowledge of how lighting techniques can affect human mood.

182


Comparison Of 3D Lighting Techniques To Investigate The Psychological Effect Of Mood Hams Samy & Jamie McDonald

Introduction Mood and emotion are different in human psychology. Moods last longer than emotions and essentially have a purpose, but are temporally remote. Lighting can affect human mood in various environments and conditions, such as work environments, the film industry and 3D animation scenes. The colour of light and the lighting style (high-key or low-key) affect the viewer’s perception of a scene. Various scenes were developed in this research, along with a questionnaire survey, to gain more understanding of human psychology and light. This work may contribute to the knowledge of how lighting techniques can affect human mood.

Project Aim Investigate and measure the effect of lighting on human mood using various 3D lighting techniques. 3D models will be created, a survey will be conducted and the results will be statistically analysed. A conclusion will be made accordingly.

Methods

Figures and Results

[Figure: Scenes A, B and C]

Conclusion

Nearly all participants agreed that lighting has a major effect on scene selection: 97% of the participants found that lighting affected their choice. Scene C (neutral) was selected by the majority of participants as their preferred scene, chosen as their potential happy scene, and was also identified as the scene that may have had an overall effect on their mood. They did not choose the intended happy scene (Scene B) with high-key lighting. This finding is consistent with the literature review, as pointed out by Rikard Kuller in 2006: 'workers' mood was at its lowest when the lighting was experienced as much too dark. The mood then improved and reached its highest level when the lighting was experienced as just right, but when it became too bright the mood declined again'. It is important to mention that there were no physical monitors on the participants while answering the survey, therefore there is no proof of changes in their mood.

Lighting affects the human mood in both positive and negative ways. High-key lighting can convey a feeling of happiness; however, when it passes a certain point of exposure it can have a negative effect on the human mood. In conclusion, people seem to prefer neutral lighting, while some prefer dim lighting. This will require further research.

Acknowledgments ▪ This project would not be possible without the continuous support from my supervisor guiding me along the way. ▪ I would like to thank the university for its resources including facilities, library etc. ▪ I have gained a lot of knowledge during this project (in particular literature review) and was heavily inspired by people that have carried out related work in the past as mentioned below. Thank you.

Design and model three scenes including different lighting techniques to convey the appropriate mood. Create the appropriate questionnaire, conduct a survey and analyse the results. Conduct a statistical analysis in order to identify and measure the effect of lighting on human mood.

References

▪ EKKEKAKI, P., 2012. Affect, Mood, and Emotion
▪ HUME, D., 2012. Organisational behaviour. Emotions and moods
▪ PHILLIPS, A., 2011. The basics of the art of lighting, part 1: Simple principles of and techniques for creating artful lighting
▪ RUSSELL, J., 2003. Core Affect and the Psychological Construction of Emotion
▪ https://docs.arnoldrenderer.com

Digital Media

183


STUDENT BIOGRAPHY

Marta Sbrana Course: BSc (Hons) Digital Media Investigation and Comparison of Current 2D and 2.5D Animation Techniques The focus of this project was to investigate the techniques necessary to produce a traditional animation and the ones utilised to create 2.5D animation, in order to produce the same brief scene with both techniques.

184


185


STUDENT BIOGRAPHY

Erin Watson Course: BSc (Hons) Digital Media Suitable Housing using Recycled Materials to Help Reduce the Effects of Global Warming This project focuses on providing a 3D visualisation of a sustainable home that is made using recycled materials. In the literature review, it was found that building sustainable housing, or making existing housing more sustainable, was one way to reduce the amount of carbon that is released into the atmosphere. Efforts towards sustainable housing have been made since the 1970s, and there are many different examples of sustainable housing using recycled materials. Many examples of these types of homes are found in Findhorn, Scotland, made using old recycled whisky barrels, alongside straw bale houses. The University of Brighton also made a house known as the "Waste House", which was made using 90% recycled materials. However, the literature review found that Earthships fulfil all of the criteria for sustainable housing: their design calls for the use of recycled and natural materials, and they are capable of being off the grid, therefore having minimal reliance on public utilities and fossil fuels. Because of this it was decided to produce a model that represents an Earthship.

186


187


STUDENT BIOGRAPHY

Ross Yuill Course: BSc (Hons) Digital Media Educational 3D Animation Surrounding the Subject of Climate Change and the use of Portable Water Turbines This project focuses on the impact of climate change and how activism on the issue is on the rise, with educational videos and content helping to inform individuals on how they can reduce their carbon footprint through hydropower.

188


189


STUDENT BIOGRAPHY

Leonardo Zoia Course: BSc (Hons) Digital Media Comparative Analysis of Post Production Techniques and Genre in Filmmaking Genre was at first considered a canonical staple of Filmmaking (1910s-1950s), then challenged as an unrealistic construct (1960s-1970s). A somewhat universal consensus was reached (1980s-today), deeming it a useful communicative tool in the study and analysis of movies, though still not an axiomatic truth. Film Genres are "sets" of movies that can be grouped together according to their general themes, archetypical characters, iconic visual style or setting. Post Production, the phase in filmmaking in which the raw footage and audio are turned into the final product, is fundamental in defining the Genre of a movie, adding many of its defining features, especially when it comes to its visuals and Sound Design. This project was aimed at creating a "Genre-Neutral" short film, showcasing an interaction between two characters. This short was then edited into different iterations of the same scene, with the goal of creating 3 videos, all using the same shots and lines, yet each belonging distinctly to one specific Film Genre. The audience would experience 3 different films just by virtue of Post Production.

190


Comparative Analysis of Post Production Techniques and Genre in Filmmaking
Leonardo V. Zoia & Jay Lytwynenko

Introduction & Project Aim

Genre was at first considered a canonical staple of Filmmaking (1910s-1950s), then challenged as an unrealistic construct (1960s-1970s). A somewhat universal consensus was reached (1980s-today), deeming it a useful communicative tool in the study and analysis of movies, though still not an axiomatic truth. Film Genres are "sets" of movies that can be grouped together according to their general themes, archetypical characters, iconic visual style or setting. Post Production, the phase in filmmaking in which the raw footage and audio are turned into the final product, is fundamental in defining the Genre of a movie, adding many of its defining features, especially when it comes to its visuals and Sound Design. This project was aimed at creating a "Genre-Neutral" short film, showcasing an interaction between two characters. This short was then edited into different iterations of the same scene, with the goal of creating 3 videos, all using the same shots and lines, yet each belonging distinctly to one specific Film Genre. The audience would experience 3 different films just by virtue of Post Production.

Implementation & Testing

The footage was recorded with a Sony X160 camera fitted on a Steadicam harness or tripod, depending on the shoot. Voice-over was recorded in an RGU studio with a Rode VideoMic shotgun microphone. The voice-over audio was reworked in Adobe Audition, its volume raised and the white noise eliminated. The sound effects were recovered from online royalty-free libraries. Some of the soundtrack was composed in MuseScore and exported as MP3 files. The videos were edited in Adobe Premiere Pro and Adobe After Effects; the footage was cut into 3 iterations of the same scene, which were made into a single video. The video was then further post-produced into 3 separate short films, each characterised according to the style of a specific Genre: Horror, Science Fiction and Noir. The Genres were selected for their defined, and defining, aesthetics, typical usage of colour and iconic sound design.

Design

The script was created with minimal character interaction, using short and neutral sentences that would not allow viewers to get to know the characters too well and that would not hint at the two's relationship. For the same reason the characters' faces were left out of some of the film's iterations, so that their facial expressions would not suggest any specific intent. Other iterations featured the characters' faces, to investigate whether viewers would note that. The two main characters were both male: this was done to avoid directly suggesting a romantic interest between the characters, while leaving the option open. The storyboard was made in Adobe Photoshop.

The Questionnaire was created in Google Forms. It is structured to allow viewers to compare the videos and comment on them. The questionnaire was written to be the same for all iterations of the video, but three versions of it were created, each containing one of the three stylised films. This was done so that the users' responses could be compared to note a potential trend in the video's interpretation according to its Post Production. After a brief explanation of the project, the survey shows users the first video, the unedited version of the film, and asks a few questions about the film itself, its style and its Genre. Afterwards, the second video, the edited version of the film, was shown to the subjects, who were then asked similar questions to the first half of the survey, allowing them to comment on the edited footage and audio and enquiring whether their opinion of the film had changed. Each questionnaire was submitted to 15 people, for a total of 45 test subjects, who were chosen to fit the study's demographics (18-30 years old).

Conclusions

Results showed a tendency of answers in the second half of the questionnaires, the one regarding Genres in the post-produced videos, to agree that the videos were more stylistically defined and the editing made the scene clearer. The identification of the Genre was generally more imprecise (this may also be due to the fact that some of the testers were not too familiar with Film Studies), but it still showed a trend towards the expected "feelings" and the right Genres.

Future Work

Changes could be made to the script to optimise the ambiguity upon which the testing was based. The project could also be expanded upon by creating a larger series of iterations of the film, introducing more differentiated Genres (possibly some ostensibly comedic ones). The audio design could benefit from the usage of more varied or Genre-specific sounds and soundtrack. Some experimentation with Foley sounds would most likely benefit the scope of the study.

Acknowledgments I would like to thank Jay, my supervisor, for the support and insight which she passionately provided throughout the Project and all of my Academic life. I also thank all students who volunteered in helping to bring the project to life and all who took part in this study.

References

School of Computing Science and Digital Media

Grant, B., 2007. Film genre: from iconography to ideology. In: W. Press, ed. Film genre: from iconography to ideology Vol. 33. London: Wallflower Press

191


192


BSc (Hons) Computer Network Management and Design

193


STUDENT BIOGRAPHY

Lewis Anderson Course: BSc (Hons) Computer Network Management and Design Performance Analysis of OpenVPN and Current Industrial Trends VPNs have traditionally provided a secure tunnel to a remote location, allowing remote workers to establish a private connection to their company headquarters. More recently, VPNs have seen an increase in popularity with personal users who wish to privatise their own connections or remove geographical restrictions. But with many overheads, can a VPN connection provide users with adequate bandwidth to complete their tasks? Coupled with the rise of cloud computing, it may also be viable to run a VPN server in the cloud, with OpenVPN now offering a cloud-based VPN service too. This led to the question: can protocols like OpenVPN benefit from being run over high-speed infrastructure, such as Amazon Web Services (AWS)?

194


Performance Analysis of OpenVPN and Current Industrial Trends
A thesis by Lewis Anderson, supervised by Chris McDermott

INTRODUCTION

VPNs have traditionally provided a secure tunnel to a remote location, allowing remote workers to establish a private connection to their company headquarters. More recently, VPNs have seen an increase in popularity with personal users who wish to privatise their own connections or remove geographical restrictions. But with many overheads, can a VPN connection provide users with adequate bandwidth to complete their tasks? Coupled with the rise of cloud computing, it may also be viable to run a VPN server in the cloud, with OpenVPN now offering a cloud-based VPN service too. This led to the question: can protocols like OpenVPN benefit from being run over high-speed infrastructure, such as Amazon Web Services (AWS)?

PROJECT AIMS

The first goal of this thesis was to determine whether running OpenVPN over high-speed cloud infrastructure could have an impact on performance. This involved network tests being run to determine the bandwidth over different links. Additionally, it was aimed to find out whether the industry favours a particular VPN protocol, and whether poor VPN performance would sway the industry's choice of protocol. This was done by distributing a survey aimed at IT professionals.

METHODS

Four tests were run, each over a 24-hour period: testing with and without OpenVPN over a classic internet connection, and the same tests over AWS infrastructure. Three virtual machines were set up in AWS, with 1 CPU core and 1 GB of RAM, running Windows Server 2019. It was hypothesised that the hardware limitations might impact performance. Tests lasted 30 seconds and were run every six hours for a day; the reported figure is the average speed over the 24-hour period.

TEST RESULTS

The graph showed the average network bandwidth for all four tests. The "classic internet" tests saw a small, expected decrease in performance when OpenVPN was enabled. However, the AWS bandwidth was affected considerably by OpenVPN.

SURVEY RESULTS

The survey was aimed at IT professionals, to determine the way the VPN industry is currently leaning, and offered insight into whether poor performance would be a deciding factor in choice of VPN protocol.

• 46% of participants use OpenVPN on a regular basis.
• 25% of participants who use OpenVPN said they experience poor performance with it.
• 66% of participants who use OpenVPN agree that poor performance would influence their choice of protocol.
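The repeated 30-second throughput runs described under METHODS can be approximated in code. The following is a minimal, hypothetical sketch (not the thesis's actual test harness, which measured real internet and AWS links): it times a bulk TCP transfer over the loopback interface and reports the achieved bandwidth in Mbit/s.

```python
import socket
import threading
import time

PAYLOAD = b"\x00" * (1 << 20)  # 1 MiB chunk
CHUNKS = 64                    # 64 MiB transferred per run

def serve(sock: socket.socket) -> None:
    """Accept one client and stream the payload to it."""
    conn, _ = sock.accept()
    with conn:
        for _ in range(CHUNKS):
            conn.sendall(PAYLOAD)

def measure_throughput() -> float:
    """Return the measured throughput in Mbit/s over a loopback TCP link."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))   # OS picks a free port
    server.listen(1)
    port = server.getsockname()[1]
    threading.Thread(target=serve, args=(server,), daemon=True).start()

    client = socket.create_connection(("127.0.0.1", port))
    received = 0
    start = time.perf_counter()
    with client:
        while received < CHUNKS * len(PAYLOAD):
            data = client.recv(1 << 16)
            if not data:
                break
            received += len(data)
    elapsed = time.perf_counter() - start
    server.close()
    return (received * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    print(f"{measure_throughput():.1f} Mbit/s")
```

Running the same measurement with the tunnel up and down, and averaging runs across the day, gives the comparison figures the thesis reports.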

CONCLUSION

Overall, the results have shown that IT professionals are using IPsec and OpenVPN almost equally, with IPsec slightly ahead. This could be down to performance-related issues with OpenVPN, which this thesis also concludes, or it could simply be because IPsec is a long-established protocol, natively available on a variety of different hardware and software. When a VPN tunnel is running, there will always be some overhead. The results from the internet tests show a 5.6% bandwidth drop with OpenVPN. The tests conducted over AWS were surprising, showing a 43% decrease in bandwidth with OpenVPN running. This could suggest that cloud infrastructure is not a good platform for tunneling traffic across. Alternatively, it could simply be the hardware limitations faced in this experiment.
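The percentage drops quoted above follow from a simple calculation; a small sketch (the baseline values below are illustrative, chosen to reproduce the reported 5.6% and 43% figures):

```python
def bandwidth_drop(baseline_mbps: float, vpn_mbps: float) -> float:
    """Percentage of baseline bandwidth lost when the tunnel is active."""
    return (baseline_mbps - vpn_mbps) / baseline_mbps * 100

# A 5.6% drop corresponds to e.g. 100 Mbit/s falling to 94.4 Mbit/s,
# and a 43% drop to 100 Mbit/s falling to 57 Mbit/s (hypothetical values).
print(round(bandwidth_drop(100.0, 94.4), 1))  # → 5.6
print(round(bandwidth_drop(100.0, 57.0), 1))  # → 43.0
```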

THANKS TO

Special thanks to Chris, my supervisor. This project would not have been possible, or nearly as exciting and interesting, without his guidance! Another big thanks to all my lecturers and the university itself for providing me with the opportunity to complete this degree.

REFERENCES

Coonjah et al. (2015). TCP vs UDP tunneling using OpenVPN.
Coonjah et al. (2018). Investigating the TCP Meltdown problem in OpenVPN.
Donenfeld, J. (2017). WireGuard whitepaper.
Kotuliak et al. (2011). Performance comparison of IPSec and TLS based VPNs.

BSc (HONS) COMPUTER NETWORK MANAGEMENT & DESIGN

195


STUDENT BIOGRAPHY

Cameron Birnie Course: BSc (Hons) Computer Network Management and Design Automation of Network Device Configuration, Management and Monitoring with a Graphical Interface using Python Network automation as defined by Cisco is "the process of automating the configuring, managing, testing, deploying, and operating of physical and virtual devices within a network." [1]. Automation provides three main benefits to an organisation: reduced OPEX, a reduction in human errors, and a framework for implementing agile development or services. These benefits have led to increased adoption of automation in recent years, to the point that the use of automation within networks has become prevalent, with Juniper's SoNAR stating that 96% of businesses have implemented automation in some form [2]. One of the more popular methods of implementing network automation is the use of network management tools, a type of software that assists in the management and monitoring of a network.

196


Automation of Network Device Configuration, Management and Monitoring with a Graphical Interface using Python Cameron Birnie & Christopher McDermott

Introduction Network automation as defined by Cisco is "the process of automating the configuring, managing, testing, deploying, and operating of physical and virtual devices within a network." [1]. Automation provides three main benefits to an organisation: reduced OPEX, a reduction in human errors, and a framework for implementing agile development or services. These benefits have led to increased adoption of automation in recent years, to the point that the use of automation within networks has become prevalent, with Juniper's SoNAR stating that 96% of businesses have implemented automation in some form [2]. One of the more popular methods of implementing network automation is the use of network management tools, a type of software that assists in the management and monitoring of a network.

Project Aim Design and implement a user-friendly network management tool using a GUI, with a focus on providing usability, modularity and multi-vendor support, in order to facilitate rapid configuration, deployment, management and monitoring of networks in comparison to traditional methodologies.

Figures and Results


The tool was to be assessed by evaluating the three attributes of usability as defined by ISO standard 9241-11:2018 [4]: Efficiency, Effectiveness and Satisfaction. The proposed testing involved having five users perform a set of configurations on multiple devices, both with the tool and manually, to assess efficiency and effectiveness. Satisfaction would then be examined using a System Usability Scale (SUS) to accurately gauge user satisfaction after using the tool. Due to the current pandemic it became unfeasible to perform the original testing; the amended plan involves using myself as the sole subject. Satisfaction will also no longer be assessed, because it is an opinion-based evaluation and using myself would potentially provide inaccurate results due to inherent bias towards the tool.

Overall I feel the project successfully meets its aim by creating an easy-to-use network management tool. Areas for improvement in the future might include remote storage or the ability to configure multiple devices simultaneously. Further testing will be performed on the tool prior to the final hand-in, and it is expected that the results will conclude that the tool successfully provides an easy-to-use method of configuring devices in a more efficient manner than traditional methodologies. The tool has also been developed in a manner that will allow it to be easily modified and extended in the future to provide additional functionality.

Acknowledgments

Methods

In order to ensure that the GUI was well designed, user friendly and intuitive, the design followed Jakob Nielsen's 10 Usability Heuristics for User Interface Design [3]. These principles provide a set of high-level design statements to ensure that user interfaces are designed in a manner that provides a high level of usability while avoiding common design pitfalls such as cluttered screens or unnecessary complexity. The tool itself was created using Python, and modules such as Kivy, Netmiko, Netifaces, Winreg, Os, Re and Sys have been leveraged to provide additional functionality.
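Since the tool is built on Netmiko, its device-configuration path can be sketched roughly as below. This is a hedged, hypothetical illustration, not the project's actual code: the helper names and device details are assumptions, and only Netmiko's `ConnectHandler`/`send_config_set` usage reflects the library named in the poster.

```python
def build_interface_config(name: str, ip: str, mask: str) -> list[str]:
    """Return an IOS-style configuration set for a single interface."""
    return [
        f"interface {name}",
        f"ip address {ip} {mask}",
        "no shutdown",
    ]

def push_config(device: dict, commands: list[str]) -> str:
    """Open an SSH session with Netmiko and apply the configuration set."""
    # Imported here so the pure helper above works even without netmiko installed.
    from netmiko import ConnectHandler

    conn = ConnectHandler(**device)
    try:
        return conn.send_config_set(commands)
    finally:
        conn.disconnect()

# Example usage (placeholder address from the documentation range, not a real device):
# device = {"device_type": "cisco_ios", "host": "192.0.2.1",
#           "username": "admin", "password": "example"}
# print(push_config(device, build_interface_config(
#     "GigabitEthernet0/1", "192.0.2.10", "255.255.255.0")))
```

A GUI front end such as Kivy would collect the interface parameters from the user and hand them to functions of this shape.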

Conclusion

I would like to express my thanks to my supervisor Christopher McDermott for providing guidance throughout the project to ensure a finished product was produced.

Pilot testing has been carried out on the tool to provide an initial assessment of its suitability and highlight areas for improvement prior to the final usability testing. Overall the tool performed as expected and all functions were executed successfully, although a number of issues were highlighted that, if left unchanged, will hinder the effectiveness of the tool. There were a number of instances where crashes occurred during operation of the tool due to unexpected user input; as such, improvements to the tool's error handling will be carried out. The method of storing credentials as plain text within the source code was deemed a major security risk, so a login screen will instead be implemented to allow the user to enter credentials at each login.

References

1. What Is Network Automation?, 2020. [online]. Available from: https://www.cisco.com/c/en/us/solutions/automation/network-automation.html [Accessed 6 May 2020].
2. JUNIPER ENGNET, 2020. 2019 State of Network Automation Report. [online]. Juniper. Available from: https://www.juniper.net/assets/us/en/local/pdf/ebooks/7400113en.pdf [Accessed 6 May 2020].
3. Enhancing the explanatory power of usability heuristics, n.d. Boston, Massachusetts, USA: Association for Computing Machinery. pp. 152–158. Available from: https://dl.acm.org/doi/10.1145/191666.191729 [Accessed 6 May 2020].
4. ISO 9241-11:2018, 2020. [online]. Available from: https://www.iso.org/standard/63500.html [Accessed 6 May 2020].

197


STUDENT BIOGRAPHY

Gift Chilera Course: BSc (Hons) Computer Network Management and Design VMware ESXI vs Proxmox VE vs Microsoft Hyper-V Virtualisation has become an important factor in the world of IT today: there are over 36,000 "companies that use VMware vSphere" (enlyft.com) and over 41,000 "companies that use Microsoft Hyper-V server" (enlyft.com). VMware, who developed vSphere (www.vmware.com), and Microsoft, who developed Hyper-V (docs.microsoft.com), are popular platforms in virtualisation, with some of their virtualisation software being ranked highly based on reviews (trustradius.com) and ease of use (g2.com).

198


VMware ESXI vs Proxmox VE vs Microsoft Hyper-V
Gift Chilera & Ian Harris

Introduction

Virtualisation has become an important factor in the world of IT today: there are over 36,000 "companies that use VMware vSphere" (enlyft.com) and over 41,000 "companies that use Microsoft Hyper-V server" (enlyft.com). VMware, who developed vSphere (www.vmware.com), and Microsoft, who developed Hyper-V (docs.microsoft.com), are popular platforms in virtualisation, with some of their virtualisation software being ranked highly based on reviews (trustradius.com) and ease of use (g2.com).

Project Aim

The aim of this project was to find out which one of three type 1 hypervisors would be best suited to the user based on their needs. The hypervisors that were compared were VMware ESXI, Microsoft Hyper-V, and Proxmox VE. The latest free version of each hypervisor at the time was used for the testing in this project.

Methods

The testing was done on a virtual machine with a Windows 10 guest operating system which had performance benchmark tools installed. Each hypervisor was tested in four different areas: CPU performance, memory performance, disk performance, and LAN performance.

Testing Environment

The hardware used for the testing was an HP Compaq Elite 8300 SFF x64-based system. ESXI was installed and tested first, followed by Proxmox and then Hyper-V. The hypervisor was connected to the network and the virtual machine got an internet connection from the same network as the hypervisor.

Host machine: CPU: Intel Core i5-3470 3.20 GHz; Memory: 2x Hynix/Hyundai, 4F80D198, 4GB; Hard disk: Seagate ST1000DM003-1SB102, Z9A5WE4T, 1TB; Portable SSD: Seagate, NAA40EC5, 1TB.

The VM on ESXI and Proxmox was given two CPUs with one core, 2GB RAM, a 32GB hard disk for the installation, and an additional 16GB hard disk. However, when the VM on Hyper-V was given two CPUs, Hyper-V did something different and gave the VM one CPU with two cores. The VM was initially given one CPU with Hyper-V, however one of the benchmark tools was sending an error message whenever the benchmark tests were about to start running. To avoid this error the VM was given two CPUs with Hyper-V, meaning that it had one CPU with two cores. This would influence the performance benchmark results for Hyper-V.

Performance Benchmark Tools

The performance benchmark tools that were used were PerformanceTest (passmark.com), NovaBench (novabench.com), GFX Memory Speed Benchmark (techspot.com), and LAN Speed Test (totusoft.com). Bandicam (bandicam.com) is a recording tool that was used to record the activity on CPU-Z during the benchmark tests.

Figures and Results

CPU Performance Tests

Ark.intel.com (2020) shows that the clock speed/stock speed of the Intel Core i5-3470 CPU that was used is 3.2 GHz, which is equivalent to 3200 MHz. The turbo speed of the CPU is 3.6 GHz, which is equivalent to 3600 MHz. According to intel.co.uk (2020), Turbo Boost will increase the core speed to be higher than the stock speed of the CPU. Intel also state that for Turbo Boost to work the CPU "must be working in the power, temperature, and specification limits of the thermal design power (TDP)". This will lead to improved "performance of both single and multithreaded applications."

The test results show that ESXI had the highest clock speed, 3432.35 MHz, which is not only above the CPU's stock clock speed but is also the closest speed to the turbo speed. From what Intel state this should be a safe clock speed, as it is below 3600 MHz and it is also operating within the temperature, thermal design power, and power limits. This makes ESXI the best of the three hypervisors when it comes to CPU performance. Hyper-V came in second: its highest clock speed was 3243 MHz, which is just above the stock speed, meaning that it is a safe clock speed. Hyper-V was running on a single CPU with two cores; according to techterms.com (2020), systems with two CPUs are considerably faster than systems with one CPU, however barely "twice as fast", which is a surprise as the VM on Proxmox was using two CPUs with one core but had a lower core speed than Hyper-V. Proxmox's highest clock speed was 3193 MHz, which is below this CPU's 3200 MHz stock speed, meaning that it underperformed in this area.

LAN Tests

Totusoft.com (2020) says that when LAN Speed Test tests the LAN speed it will create a file in memory and send it in both directions. Out of the highest read/write speeds, ESXI had the fastest read (download) speed, which was 4404.58 Mbps, and the lowest write (upload) speed, which was 106.29 Mbps. The Windows 10 virtual machine was either using an E1000e network adapter, which is an emulation of the Intel Gigabit NIC 82574L (ark.intel.com), or the VMXNET 3, which is a virtual network adapter. Vmware.com (2020) shows that these are the two NIC cards compatible with a Windows 10 VM. Geek-university.com (2020) shows that the NIC card speeds and duplex of a VM on ESXI can be configured. It also shows that the speeds can be set to auto-negotiate, which means that the NIC card speed will be set to the fastest speed possible without a speed limit. This would be one of the reasons why ESXI had a high read (download) speed.

Hyper-V had the second fastest read (download) speed, which was 1102.09 Mbps, and the second highest write (upload) speed, which was 200.13 Mbps. Hyper-V was using an Intel 82579LM Gigabit Ethernet adapter (ark.intel.com), which is another emulated NIC card. Proxmox had the lowest read (download) speed, which was 670.24 Mbps, and the highest write (upload) speed, 220.17 Mbps. This could mean that Proxmox's network adapter had similar data transfer rates to the network adapters that ESXI and Hyper-V were using.

Memory Tests

The memory test results show that Hyper-V would be the best option if looking for fast memory speeds overall, as it had the fastest write speed, which was 9.39 GB/s, and the second fastest read speed in RAM, which was 9.39 GB/s. Hyper-V also had the fastest RAM speed from the NovaBench test results, which was 17554 MB/s. This would be suitable for users who want efficiency on their hypervisors, as faster RAM will improve CPU performance as well. Transcend-info.com shows that the RAM that was being used on the physical hardware was DDR3 1333, as the highest speed came from Proxmox's read speed tests, which was 10.61 GB/s, and the transfer rate of DDR3 1333 is 10.6 GB/s.

Storage Tests

The hard disk that was used to install and store the hypervisors was a Seagate Barracuda 1TB 7200 RPM SATA 3 hard disk. According to Seagate.com (2020) the hard disk's average data rate read/write is 156 MB/s, the max sustained data rate OD read is 210 MB/s, and the highest SATA transfer rate for this hard disk is 6 GB/s. Kb.sandisk.com (2020) also states that SATA 3's transfer rate is 6 GB/s and the highest bandwidth for SATA 3 is 600 MB/s. ESXI's results were unusual, as they were way above the hard disk's average read and write speeds and also higher than SATA 3's bandwidth. Hyper-V's read speed was close to the average read speed, but the write speed was unusual as it was way above the average speeds and the bandwidth. Proxmox might have had the lowest speeds, however its results were normal compared to ESXI and Hyper-V.

Reliability

When it comes to reliability, the demonstrations that were performed show that, on the hardware and testing environment that was used, Hyper-V is the most reliable when it comes to live migration between two storage devices, as all of the demonstrations which required the VM to be transferred to another storage device while powered on were a success after being tested. For Proxmox only one feature worked, and it was a live backup. For ESXI an attempt to transfer the VM while powered on was not successful, and there was no live backup feature either.

Conclusion

While some of the results from this project are debatable, the knowledge and insight gained from them leads to a far more intriguing discussion. The plan for this project was always to conduct the tests in the fairest way possible by using the same hardware, and that was how it was carried out. However, there were some other factors which could have been taken into consideration, and they had an impact on the experiment test results. The factors which could be investigated for future work are the CPU settings, storage settings, and network settings. As previously stated, some of the experiment results might be debatable; however, ultimately this project has been able to prove that not all type 1 hypervisor platforms are the same. Factors which help prove this are some of the setups and configurations in this project and the features and tools that each manufacturer provides to the user.

References

GFX Memory Speed Benchmark 1.1.12.26, 2020. [online]. TechSpot. Available from: https://www.techspot.com/downloads/6767-gfx-memory-speed-benchmark.html [Accessed 22 April 2020].
2020. [online]. Available from: https://www.dell.com/support/article/en-uk/sln179266/how-random-access-memory-ram-affects-performance?lang=en [Accessed 24 April 2020].
Backup and Restore - Proxmox VE, 2020. [online]. Available from: https://pve.proxmox.com/wiki/Backup_and_Restore [Accessed 22 April 2020].
COMPANY, B., 2020. Bandicam - Recording Software for screen, game and webcam capture. [online]. Bandicam.com. Available from: https://www.bandicam.com/ [Accessed 24 April 2020].
CPU-Z | Softwares | CPUID, 2020. [online]. Available from: https://www.cpuid.com/softwares/cpu-z.html [Accessed 22 April 2020].
Download PassMark PerformanceTest - PC Benchmark Software, 2020. [online]. Available from: https://www.passmark.com/products/performancetest/index.php [Accessed 22 April 2020].
Dual Processor Definition, 2020. [online]. Available from: https://techterms.com/definition/dual_processor [Accessed 24 April 2020].
HOPE, C., 2020. IDE vs. SCSI. [online]. Computerhope.com. Available from: https://www.computerhope.com/issues/ch001240.htm [Accessed 24 April 2020].
Microsoft Hyper-V Server commands 12.75% market share in Virtualization Platforms, 2020. [online]. Available from: https://enlyft.com/tech/products/microsoft-hyper-v-server [Accessed 29 April 2020].
Emulation or virtualization: What's the difference? - Direct2Dell, 2020. [online]. Available from: https://blog.dell.com/en-us/emulation-or-virtualization-what-s-the-difference/ [Accessed 29 April 2020].

199


STUDENT BIOGRAPHY

Peter Dobo Course: BSc (Hons) Computer Network Management and Design Comparison of Solutions for Meltdown and Spectre Attacks and Their Impact of Performance Microcode-related attacks have been well known in the science community for about 20 years, but at that time cybersecurity was not a major concern; it was a common agreement amongst scientists that the development of technology was more important. However, along with technology, the people of our society developed as well. The vulnerability can leak information through the cache memory, which is then transmitted through a covert channel. The issue is hardware related, therefore new hardware technologies would be the best way to fix these security issues. For the existing hardware around the globe, a software workaround was released in the form of microcode patches; however, they impact the performance of the CPU and the storage drive speed, because they throttle the aforementioned exploitable microprocesses.

<#> 200


Comparison of Solutions for Meltdown and Spectre Attacks and Their Impact of Performance

Peter Dobo

Introduction

Figures and Results

Microcode-related attacks have been well known in the science community for about 20 years, but at that time cybersecurity was not a major concern; it was a common agreement amongst scientists that the development of technology was more important. However, along with technology, the people of our society developed as well. The vulnerability can leak information through the cache memory, which is then transmitted through a covert channel. The issue is hardware related, therefore new hardware technologies would be the best way to fix these security issues. For the existing hardware around the globe, a software workaround was released in the form of microcode patches; however, they impact the performance of the CPU and the storage drive speed, because they throttle the aforementioned exploitable microprocesses.

The implemented design is put to a stress test to measure the CPUs of the virtual machines installed on it, more specifically CPU processing power (MHz). The project also measures storage device performance, including sequential (1MB) read and write speed on multiple queues and threads, as well as random (4KB) read and write speed on multiple queues and threads. Storage device performance is not directly affected by CPU performance; however, the microcode fixes affect the interaction speed between user and kernel space in order to mitigate the vulnerability, and therefore data access speed is affected. The tests provide multiple results as we run them on the different virtual machines, gathering statistics to measure the difference as the project implements more up-to-date technology into the design. With multiple performance testing tools, the project measures the performance of the CPU and the storage drive of each guest operating system. The first scenario is where the hypervisor is an older, unprotected version that runs an unprotected guest operating system on it. The second scenario has a protected hypervisor that runs a protected and an unprotected guest operating system.
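The SEQ1M-style figures reported later come from a dedicated disk benchmark. Purely as an illustration of what a sequential throughput test measures, a much-simplified single-queue, single-thread version can be sketched with the Python standard library (file and block sizes here are arbitrary, and this is not the tool the project used):

```python
import os
import tempfile
import time

def sequential_throughput(size_mb: int = 4, block_kb: int = 1024):
    """Write, then read, size_mb MB in block_kb-KB blocks and return
    (write_MBps, read_MBps): a rough single-queue, single-thread
    stand-in for a SEQ1M Q1T1 style sequential test."""
    block = os.urandom(block_kb * 1024)
    count = (size_mb * 1024) // block_kb
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(count):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # include the flush-to-disk cost
        write_s = time.perf_counter() - start
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block_kb * 1024):
                pass
        read_s = time.perf_counter() - start
    finally:
        os.remove(path)
    return size_mb / write_s, size_mb / read_s

write_mbps, read_mbps = sequential_throughput()
print(f"write {write_mbps:.1f} MB/s, read {read_mbps:.1f} MB/s")
```

Real benchmark suites additionally vary queue depth and thread count (the Q8T1/Q1T1 variants), which this sketch does not attempt.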

Project Aim

In a virtual infrastructure, the effect of these patches is amplified, since they are stacked on each other. The aim of this project is to measure the performance impact of the mitigation patches on today's up-to-date operating systems compared to an unprotected design.

Methods

The project demonstrates the effect of these patches by creating two basic virtual infrastructure designs with a hypervisor and a guest operating system. The reason is to measure the impact, step-by-step as we stack the mitigation patches on top of each other, creating three results.

[Chart: CPU performance (MHz) compared across the 6.0/8.1/old, 6.7/8.1/old and 6.7/10/new configurations.]

[Chart: Sequential storage drive speed (MB/s): SEQ1M Q8T1 (read), SEQ1M Q1T1 (read), SEQ1M Q8T1 (write) and SEQ1M Q1T1 (write), compared across the 6.0/8.1/old, 6.7/8.1/old and 6.7/10/new configurations.]

Conclusion

Previous tests of the same nature were executed after the initial releases of the mitigation patches. At that time the impact of those fixes was much more significant on the CPU; depending on the hardware, it could reach even a 15% hit on processing power. In virtual infrastructures, this effect was amplified even more due to each additional layer of mitigation. In 2019, software developers stated that those performance issues would be fixed and would not affect the CPU as significantly as before, although for better efficiency a new technology is required. According to these test results, those expectations have been met when it comes to the performance impact on processing power.

Acknowledgments

The above figures show the hardware specifications of the operating system the tests were run on during the testing phase, as well as the benchmark test used to measure the baseline performance of the design. The below figures are the compared CPU and storage drive test results. As the statistics show, there is a steady drop in performance with each implementation of microcode patches; however, storage drive performance is more affected than the CPU, with an overall loss of around 16% in read and write speed.
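The roughly 16% figure is a simple relative drop between the unpatched and patched measurements; a worked example with made-up numbers (not the project's raw data):

```python
def percentage_loss(before: float, after: float) -> float:
    """Relative performance loss, in percent, between an unpatched
    (before) and a fully patched (after) measurement."""
    return (before - after) / before * 100.0

# Illustrative values only: a drive falling from 120 MB/s to 100.8 MB/s
print(f"{percentage_loss(120.0, 100.8):.1f}% loss")  # 16.0% loss
```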

I would like to express my special thanks and gratitude to my lecturer and supervisor Ian Harris for his guidance and support in completing this project. I would also like to extend my gratitude to the office of the School of Computing Science and Digital Media for providing support in these unusual circumstances. I would like to thank Robert Gordon University for providing me with the required facilities to achieve this project.

References

Meltdown and Spectre, 2020. [ONLINE] Available at: https://meltdownattack.com/ [Accessed 19 February 2020].

Virtualization Howto, 2020. VMware Performance Impact of Meltdown and Spectre Patches. [ONLINE] Available at: https://www.virtualizationhowto.com/2018/01/vmware-performance-impact-of-meltdown-and-spectre-patches/ [Accessed 19 February 2020].

CMND4105

201


STUDENT BIOGRAPHY

Daniel Garbulski Course: BSc (Hons) Computer Network Management and Design Deployment of Software Defined Network The project aims to successfully deploy a software defined network, and highlight its benefits, advantages and disadvantages in comparison to well-known legacy networking. One of the key aspects of the experiment analyses the difficulty in transition from traditional, mostly hardware based networking to software defined networking. It attempts to answer questions in regard to the qualifications required from an individual to get into the field of SDN, and how difficult such transition might be.

<#> 202


203

DEPLOYMENT OF SOFTWARE DEFINED NETWORK

Daniel Garbulski, Robert Gordon University

Introduction

The project aims to successfully deploy a software defined network and highlight its benefits, advantages and disadvantages in comparison to well-known legacy networking. One of the key aspects of the experiment analyses the difficulty of the transition from traditional, mostly hardware-based networking to software defined networking. It attempts to answer questions regarding the qualifications required from an individual to get into the field of SDN, and how difficult such a transition might be.

Methods

In this specific project, the software defined network has been deployed on Linux Ubuntu 20.04 LTS. OpenDaylight has been installed as the SDN controller of choice, which operates the network. In the first iterations of the project, it was planned to use physical devices, the Zodiac FX, to emulate the behaviour of real SDN switches; however, due to limitations caused by the worldwide pandemic, multiple issues arose that could not be easily resolved. Hence the decision to use mininet: it allowed for implementing a virtual SDN network with virtual switches which emulated the behaviour of real devices.

Early development showed that the newest version of OpenDaylight, "Magnesium", does not support the graphical interface that would help to understand and manage the networking configurations. Due to this unfortunate problem, an older version of OpenDaylight, "Carbon", was installed to allow this feature to operate.

Implementation

Virtual switches and virtual clients have been deployed and connected with the previously installed remote controller; all devices have connectivity, and the entire installation process and configuration has been performed from the command line interface of Ubuntu.

Multiple versions of SDN controllers have been installed and configured in order to test different functionality and compatibility. The OpenDaylight "Carbon" version of the controller has been successfully deployed; however, it is a complex and difficult controller to manage, and there are definitely more options aimed towards less experienced users who seek to learn about SDN, such as Floodlight, Ryu or ONOS.

Results

As per [Figure 1], OpenDaylight used to give users an option to generate the network's topology and display it in the form of a graph. Since the feature has been discontinued, it is not available anymore; even in earlier versions of OpenDaylight controllers the feature no longer works, and it is officially unsupported. However, there are other SDN controllers, such as ONOS, that fully support a graphical interface and can give a very similar view of the SDN network.

Fig. 1: OpenDaylight User Interface (DLUX) - Topology [1]

Fig. 2: OpenDaylight User Interface (DLUX) - Nodes [1]

SDN controllers can be monitored through the further implementation of additional network monitoring software such as SolarWinds, which allows for much better data collection about the network and its data flow. SolarWinds offers much more than monitoring a network; it can also be used for easier network management upon proper configuration. This feature, however, has not been implemented, and it is something that can be done in further development with access to proper equipment.

Comparison

Software Defined Networking allows for an innovative approach towards networking in general, and adds a variety of advantages, including enhanced configuration of the network, improved performance, much cheaper deployment, and automation. The downsides, however, are quite significant. The qualifications required to be successful in SDN deployment are much higher: Software Defined Networking requires very good technical knowledge, a good understanding of computer networking, decent programming skills, and adaptability to an ever-changing industry. Security challenges can be another difficult subject; legacy networking has many more possibilities for implementing strong security measures, whereas software defined networks can be much more difficult to secure, since most of the operations are performed through software.

Deployment of a software defined network has been a big challenge due to many issues involving software incompatibility, discontinued support for specific features, and multiple configuration problems that required constant troubleshooting. Furthermore, most of the issues were completely unexpected, and the troubleshooting consumed significant amounts of time. The difficulties in deployment show that it requires a lot of experience and knowledge in both the technical areas of networking and software-based solutions, where programming capabilities are extremely useful and absolutely necessary to be successful and effective when dealing with an SDN environment.

Without a doubt, Software Defined Networking has its place due to benefits that cannot go unseen. Thanks to the virtualisation of many networking devices, software defined solutions can save money, increase performance, and allow for a high level of automation.

Acknowledgements

Special thanks to my supervisor Dr. Omar Al Kadri for all the support and guidance throughout the project during these difficult times caused by the COVID-19 pandemic.

References

[1] OpenDaylight Project. OpenDaylight User Interface (DLUX). 2016. URL: https://docs.opendaylight.org/en/stable-carbon/getting-started-guide/common-features/dlux.html.
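The control-plane/data-plane split behind OpenFlow-based SDN deployments like the one described above can be illustrated with a toy match-action flow table of the kind a controller such as OpenDaylight installs into switches. Fields, addresses and actions here are simplified stand-ins, not the project's actual configuration:

```python
# Illustrative match-action flow table: the controller programs entries,
# the switch only matches and forwards.
flow_table = [
    {"match": {"dst": "10.0.0.2"}, "action": "output:2", "priority": 10},
    {"match": {"dst": "10.0.0.3"}, "action": "output:3", "priority": 10},
    {"match": {}, "action": "controller", "priority": 0},  # table-miss entry
]

def forward(packet: dict) -> str:
    """Pick the highest-priority entry whose match fields all agree with
    the packet; an unmatched flow is punted to the controller."""
    for entry in sorted(flow_table, key=lambda e: -e["priority"]):
        if all(packet.get(k) == v for k, v in entry["match"].items()):
            return entry["action"]
    return "drop"

print(forward({"dst": "10.0.0.2"}))  # output:2
print(forward({"dst": "10.0.0.9"}))  # controller (unknown flow)
```

This is the essential difference from legacy networking: forwarding decisions live in a programmable table rather than in per-device configuration.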


STUDENT BIOGRAPHY

Robbie Gittins Course: BSc (Hons) Computer Network Management and Design Connected IT system using an all-in-one VDI solution – Can Virtualisation be useful outside of the corporate network? Virtualisation has, for a number of years, become more and more integrated into the corporate IT solutions available today; however, is it only of benefit on a corporate scale? This project explored whether smaller-scale applications would be of benefit to the end user. The project proposed that in situations where IT resourcing is limited, such as a conference event, a virtualised all-in-one solution can be used. There may be an unmet need for a portable and reusable piece of equipment to provide multimedia access in non-conventional venues. Such a system should need no more management than plugging it into a wall socket and then handing out laptops/Chromebooks.

<#> 204


Connected IT system using an all-in-one VDI solution – Can Virtualisation be useful outside of the corporate network? Robbie Gittins

Introduction Virtualisation has, for a number of years, become more and more integrated into the corporate IT solutions available today; however, is it only of benefit on a corporate scale? This project explored whether smaller-scale applications would be of benefit to the end user. The project proposed that in situations where IT resourcing is limited, such as a conference event, a virtualised all-in-one solution can be used. There may be an unmet need for a portable and reusable piece of equipment to provide multimedia access in non-conventional venues. Such a system should need no more management than plugging it into a wall socket and then handing out laptops/Chromebooks.

Project Aim

The aim of the project was to provide a virtualised desktop environment which could be accessed using provided laptops or Chromebooks. The virtualised environments would be preloaded with common productivity programs such as the Office suite and Adobe products. It would also ideally provide an internet connection for access to email or cloud resources.

Methods


The performance of a stand-alone machine running Windows 10 on an i5-2520M with a physical 320GB HDD and 8GB of RAM, and a virtual machine running Windows 10 on 4x 2GHz Xeon E5504 cores with a 90GB physical HDD and 8GB RAM, were compared. This was carried out using UserBenchmark (UserBenchmark, 2020), which is one of the largest repositories of user-generated benchmarking in the world. The stand-alone machine provided by the university for the project is typical of the device a user at a conference might have. Doing performance benchmarks on the different machines allows comparison of relative performance. The stand-alone machine, while meeting Windows 10's minimum requirements, is an aging machine. On paper these specs should perform rather evenly; however, virtualisation has an inherent performance overhead of between 1-30% depending on the complexity of the task (Li, et al., 2017). Sequential execution of common productivity applications was measured to assess real-world performance.
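The sequential launch measurement that AppTimer-style tools perform reduces to repeatedly timing an application start. A standard-library sketch of that harness follows; the workload is a stand-in delay rather than a real application, so the numbers are illustrative only:

```python
import time
from typing import Callable, List

def time_launches(launch: Callable[[], None], runs: int = 5) -> List[float]:
    """Time `launch` repeatedly and return per-run wall-clock seconds.
    The first run is cold; later runs benefit from warmed caches, which
    is why largest/shortest times are both worth reporting."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        launch()
        samples.append(time.perf_counter() - start)
    return samples

# Stand-in workload instead of launching Word/Outlook/IE:
times = time_launches(lambda: time.sleep(0.01), runs=3)
print(f"largest/shortest: {max(times):.4f}/{min(times):.4f}")
```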

Figures and Results

[Chart 1: CPU single-core integer speed, CPU multi-core integer speed, disk read (MB/s) and disk write (MB/s), virtualised vs non-virtualised. CPU performance is measured using 400.perlbench and then compared against other user results; the scores shown are performances against averages.]

[Chart 2: Memory read and write speed (GB/s), virtualised vs non-virtualised.]

As the results show, the physical machine had many advantages when it came to performance. The one exception was that VMware's ESXi hypervisor storage controller had a much greater performance margin than its stand-alone counterpart. This is probably due to the hard drive beneath the hypervisor being in a RAID 0 configuration. It could also be an aging drive in the stand-alone machine.

Conclusion

The project demonstrates that a virtual machine of approximately comparable specs will outperform its stand-alone equivalent. However, with an enhanced spec, e.g. a 64-core AMD Threadripper and SSDs, it is possible to envisage provisioning enough resources for everyone who might want access at a remote venue. The bottleneck would then become wireless provisioning, which can be achieved with a high-density AP. The project faced a few challenges, both hardware limitations and ethical limitations. The original design called for a 5G internet connection for the virtualised desktops; however, 5G is not implemented in Aberdeen. Even if it were available, there would have been an ethical issue with using the 5G network without asking the mobile service providers for permission beforehand. The original design was also to have the system set up as a full VDI solution, with individual sessions deploying as a user logged into the ProjectDomain. This, however, was not possible due to the hardware limitations of the older Dell R710 used, nor was it possible to get an evaluation license for the VMware product needed to deploy the setup in such a way.

[Chart 3: Time to open Microsoft Word, the Outlook email client and Internet Explorer (seconds), virtualised vs non-virtualised.]

Based on charts 1 and 2 it is unclear which system would perform better in a real-life scenario. Running benchmarks can only show part of the story, however: in the real-world tests (chart 3), where common productivity applications (PassMark Software, 2020) were loaded sequentially, the virtual machine always outperformed the stand-alone one. This is surprising, as the file system cache should allow the stand-alone machine to load an application faster after the initial load is complete. We can see from the table below that this is not the case. The virtual machine performs better in real-life tests, but it is unclear why.

Table 1: Largest/shortest load times (seconds)

                   Internet Explorer   Word            Outlook
Virtualised        0.1864/0.1032       0.5334/0.2645   0.4157/0.3111
Non-virtualised    4.4043/0.0767       0.7632/0.4517   9.3876/0.5922

Acknowledgments

I would like to thank Ian Harris for his supervision on this project, and for helping spark my interest in virtual systems.

Future work

In my future work the goal will be to finish the project to the original specification in order to demonstrate the benefits of a portable virtualised solution. It would be interesting to understand whether the unexpected finding is due to a non-sterile test machine, which was provided by the university; this meant a large performance overhead already existed on the laptop.

References

Li, Z., Kihl, M., Lu, Q. & Andersson, J. A., 2017. Performance Overhead Comparison between Hypervisor and Container Based Virtualization. Taipei, Taiwan, IEEE.

PassMark Software, 2020. AppTimer. [Online] Available at: https://www.passmark.com/products/apptimer/ [Accessed 25 04 2020].

UserBenchmark, 2020. UserBenchmark. [Online] Available at: https://www.userbenchmark.com [Accessed 25 04 2020].

205


STUDENT BIOGRAPHY

Chris Headrick Course: BSc (Hons) Computer Network Management and Design Mass Configuration of Network Devices in a Lab Environment using Python Network automation adoption continues to grow year on year, with Python being a popular choice of tool. In a computing lab environment, network devices are configured, reset and reloaded multiple times throughout the day as students and staff utilise them for labs and projects. Without a fixed IP address and method of access, it is impossible to take advantage of many popular Python libraries that allow for mass configuration options.

<#> 206


Mass Configuration of Network Devices in a Lab Environment using Python Chris Headrick and Christopher McDermott

Introduction

Implementation

[Chart: IOS upgrade time comparison (minutes), Octopush vs manual configuration of 15 devices.]

Network automation adoption continues to grow year on year, with Python being a popular choice of tool. In a computing lab environment, network devices are configured, reset and reloaded multiple times throughout the day as students and staff utilise them for labs and projects. Without a fixed IP address and method of access, it is impossible to take advantage of many popular Python libraries that allow for mass configuration options.


Methods

Figure 1 - NM-32A asynchronous network module

To connect to the devices, a console server was created using a Cisco 2620xm Router with an attached NM32A (see Figure 1) asynchronous network module that uses 68-pin OCTAL Cables and provides out of band connectivity to the console (or auxiliary) ports on up to 32 devices at one time. Each line is allocated a corresponding name that is used for access (D1 – D32). SSH is also configured on the console server to allow the program access to the device.
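As an illustration of how a program can address those lines, the mapping from device names D1-D32 to the console server's async lines can be sketched as below. The reverse-telnet port numbering (TCP 2000 plus the line number) is the usual Cisco convention, assumed here rather than stated on the poster:

```python
# Map each device name (D1-D32) to its async line and the conventional
# reverse-telnet TCP port on the console server (assumed 2000 + line).
LINES = {f"D{n}": 2000 + n for n in range(1, 33)}

def console_port(device: str) -> int:
    """Return the console server TCP port assumed for a given device."""
    return LINES[device]

print(console_port("D1"), console_port("D32"))  # 2001 2032
```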


Figure 2 – From CLI to GUI

The program started as a simple script designed to run on Linux that allowed the user to navigate through menus using the CLI. During the development process it became apparent that a small application that ran on Windows would be far more practical and easy to use. The program has two modes of operation: OctalLine and SSH. The OctalLine process works like so: first, a Device Check is performed by entering the number of devices connected to the console server. The program accesses each device, looks for either "Switch" or "Router" in the name, and provides a "PASS" message. If the device has a different name, it is possible that it has a saved startup configuration. An optional checkbox is provided that will delete the startup configuration and reload such devices. I have tried to account for every error. Where possible, the script will alert the user and automatically perform corrective action. If the script is unable to perform such an action, the user is given a list of problem devices and the issue, so that they can fix it manually. Once the user is satisfied that all the devices are ready, they choose to configure each device with SSH or push pre-made configurations stored in .txt files.
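The Device Check decision described above boils down to string logic on the device prompt. A minimal sketch of that check follows; the real program reads the prompt over Netmiko, and the helper and prompts below are illustrative:

```python
def check_device(prompt: str) -> str:
    """Classify a console prompt the way the Device Check does:
    factory-default Cisco devices identify themselves as 'Switch' or
    'Router'; anything else may carry a saved startup configuration."""
    hostname = prompt.strip().rstrip(">#")
    if hostname in ("Switch", "Router"):
        return "PASS"
    return f"CHECK: unexpected hostname '{hostname}'"

print(check_device("Switch>"))   # factory default, PASS
print(check_device("R1-Core#"))  # likely has a saved configuration
```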

The program utilises the following libraries to achieve its aims:

Netmiko: used to SSH to the console server and perform the task selected by the user over the octal lines. Nornir: once SSH is configured, Nornir is used to perform configuration tasks and upgrades on the devices. It does this concurrently, thus greatly reducing the time needed to perform tasks. Gooey: used to turn (almost) any Python command line program into a GUI application.
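Nornir's concurrent execution is where the speed-up comes from, and the effect can be illustrated with the standard library's thread pool. The per-device task below is a stand-in delay, not a real Netmiko session:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def configure(device: str) -> str:
    """Stand-in for a per-device configuration push: here just a fixed
    delay; the real program drives SSH sessions instead."""
    time.sleep(0.05)
    return f"{device}: done"

devices = [f"D{n}" for n in range(1, 16)]  # 15 lab devices

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=15) as pool:
    results = list(pool.map(configure, devices))
concurrent_time = time.perf_counter() - start

# All 15 devices finish in roughly the time of one, not the sum of all.
print(f"{len(results)} devices in {concurrent_time:.2f}s")
```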


Project Aim

The ultimate goal of the project is to create the fastest and most convenient way to connect to as many lab devices as possible to perform configuration changes and upgrades.

Results

A traditional method of performing IOS upgrades is to manually connect to each device and download the image from a TFTP server. To test how much faster the script could perform the process, a test network of 15 switches was created. Using the traditional method, it took on average two minutes per device to download the image, change the boot settings and reload the device. By comparison, the script was able to transfer the image, configure the devices and perform a reload in just under 5 minutes, making it 83.3% faster.
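The 83.3% figure follows directly from the timings quoted above:

```python
manual_per_device_min = 2   # average manual upgrade time per switch
devices = 15
script_total_min = 5        # measured script time for all 15 devices

manual_total_min = manual_per_device_min * devices  # 30 minutes in total
saving = (manual_total_min - script_total_min) / manual_total_min * 100
print(f"{saving:.1f}% faster")  # 83.3% faster
```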

Conclusion

This project has successfully demonstrated the time automation can save on tasks like mass configuration and upgrades. The inspiration for this project came from a discussion with a fellow student, who along with two others had spent weeks upgrading devices at the university the previous year. From a business perspective, the money saved on labour by reducing a task from weeks to perhaps as little as a day makes a strong argument in favour of automation.

Future Work Figure 3 – Error Handling

With SSH configured, it is now possible to take advantage of Nornir’s parallelization and configure multiple devices at once. The main feature of the SSH component is performing an IOS upgrade on multiple devices. This is done by transferring an IOS image, setting the boot variable on the device and reloading it.

There are many features that could be added to this app, such as the ability to pull configurations from devices and compare them to a template. I'd also like to attempt to bypass password-protected devices by using a list of common passwords used on network devices in labs. Lastly, I intend on packaging the script as an .exe. Still time!

Computer Network Management and Design 207


STUDENT BIOGRAPHY

Philip Joman Course: BSc (Hons) Computer Network Management and Design Analysing and Testing Security Mechanisms on IoT Home Automation Tools From the day computers were commercialised to ordinary people for their day-to-day lives, they have helped us in so many ways, such as increasing our productivity, as many of the features that come with computers are easy to use and do the job for us; e.g. e-mail packages allow us to write, edit and send data with a simple click. While computer systems and their features were being updated and sold on the market every year, there is a huge area that needed to be updated and managed very carefully: the security aspect. This is because, as the years passed, the development of computers rapidly increased; today, as a result, we have everything from computers to mobile phones and IoT devices, such as smart thermostats and smart watches, that allow us to check sensitive details such as bank details, e-mails, etc. Therefore it is very important for us to re-analyse and test the security of the systems that we use in our everyday lives, which will be demonstrated in this project.

<#> 208


Analysing and Testing security mechanism on IoT home automation tools Student Name: Philip Joman Supervisor Name: Christopher McDermott

Introduction From the day computers were commercialised to ordinary people for their day-to-day lives, they have helped us in so many ways, such as increasing our productivity, as many of the features that come with computers are easy to use and do the job for us; e.g. e-mail packages allow us to write, edit and send data with a simple click. While computer systems and their features were being updated and sold on the market every year, there is a huge area that needed to be updated and managed very carefully: the security aspect. This is because, as the years passed, the development of computers rapidly increased; today, as a result, we have everything from computers to mobile phones and IoT devices, such as smart thermostats and smart watches, that allow us to check sensitive details such as bank details, e-mails, etc. Therefore it is very important for us to re-analyse and test the security of the systems that we use in our everyday lives, which will be demonstrated in this project.

Project Aim

The standard aim of this project is to test the security of a given Raspberry Pi-based Sense HAT that uploads data to an online cloud platform. In order to test the security, multiple attacks are conducted against the Raspberry Pi, such as a man-in-the-middle attack using ARP spoofing and a denial of service attack, to see if it is possible to penetrate and block the system.

Testing and Results

The testing has three sections: scanning and reconnaissance, monitoring and filtering information, and stopping the service.

1. The first task before the attacks is a Zenmap scan; with that, the attacker quickly gains access to information about the targeted devices, including the operating system, open ports, etc.

2. A man-in-the-middle attack was performed using ARP spoofing, which intercepted and monitored all the traffic sent and received by the Pi and, as a result, gained vital information.

3. The third attack performed was a denial of service, using the hping3 tool to run a flood attack against the Pi to stop it from sending data to Ubidots.

Once the testing is completed, it clearly shows that it is possible to perform a cyberattack against any device that is on the same network; these types of attack can easily be carried out in public areas and local area networks with fewer security restrictions (Man-in-the-Middle Attacks and How To Avoid Them, 2020).
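The tests described above map onto standard tool invocations. Under the assumption of illustrative lab addresses (no address is given on the poster), the Python below only assembles the command lines and deliberately never executes them:

```python
# Hypothetical lab addresses for illustration only.
target_pi = "192.168.1.50"
gateway = "192.168.1.1"
interface = "eth0"

# 1. Reconnaissance: Zenmap is a front end for nmap (-O OS detection,
#    -sV service/version detection).
recon = ["nmap", "-O", "-sV", target_pi]

# 2. Man-in-the-middle: arpspoof poisons the target's ARP cache so its
#    traffic flows through the attacker.
mitm = ["arpspoof", "-i", interface, "-t", target_pi, gateway]

# 3. Denial of service: hping3 SYN flood (-S) against a port (-p).
dos = ["hping3", "--flood", "-S", "-p", "80", target_pi]

for cmd in (recon, mitm, dos):
    print(" ".join(cmd))  # assembled, not run
```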

Acknowledgments

Methods

For this project, a Raspberry Pi with a Sense HAT is configured with a wireless internet connection. Later, an account is created on Ubidots, which allows us to visualise all three parameters (temperature, position and humidity) in different ways; a Python script is then used to connect the device to Ubidots and transfer data from the Pi to it. Following this, several scans and attacks are conducted against the Pi using a Kali Linux machine.
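A minimal sketch of the kind of request such a Python script sends is shown below, assuming the usual Ubidots v1.6 REST endpoint; the token and device label are placeholders, and the request is built but deliberately not sent:

```python
import json
from urllib.request import Request

# Placeholder credentials and device label (hypothetical values).
TOKEN = "BBFF-xxxxxxxx"
DEVICE = "raspberry-pi"
URL = f"https://industrial.api.ubidots.com/api/v1.6/devices/{DEVICE}"

# Sense HAT readings would be gathered here; static values for the sketch.
payload = {"temperature": 21.4, "humidity": 48.2, "position": 1}

req = Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
```

Sending it would be one `urllib.request.urlopen(req)` call on the Pi, looped at the desired sampling interval.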

Conclusion

Some of the attacks proposed initially failed due to technical limitations, the student's limited knowledge, and the current situation. However, the test result images above clearly show that all the scanning and attacks that were carried out against the Pi were successful.

Attack outcomes:
ARP spoofing (MITM): Yes
DDoS: Yes
Replay attack: No
SSL strip: No

I would like to thank Chris, who has supervised me throughout this project; without his help I would not have been able to complete it successfully. During this project he has given me a lot of support and help, in terms of giving me kits and showing me platforms that I could work with. I would also like to thank Ian Harris for all the additional help and guidance he has given me during this project, including giving me instructions on how to perform certain attacks, along with providing clear answers to any queries I had about my project.

References Netsparker.com. 2020. Man-In-The-Middle Attacks And How To Avoid Them. [online] Available at: <https://www.netsparker.com/blog/websecurity/man-in-the-middle-attack-how-avoid/> [Accessed 29 April 2020].

Computer Network Management and Design 209


STUDENT BIOGRAPHY

Peter Kiss Course: BSc (Hons) Computer Network Management and Design The role of the Firewall and the benefits of using DMZ with pfSense for Network Security in Small Business Network One of the most important and effective tools for protection against network attacks is a firewall. A firewall is a security system that is located at the connection point of computer networks and provides protection by monitoring and filtering all network traffic passing through it, among many other benefits. A DMZ is an interface located between a trusted network segment and an untrusted segment (the Internet), thus providing physical isolation between the two networks. With the design of the DMZ (demilitarized zone) and optimal firewall settings, the network can be protected extremely cost-effectively against external threats.

<#> 210


The role of the Firewall and the benefits of using DMZ with pfSense for Network Security in Small Business Network Peter Kiss & Andrei Petrovski

2020

Introduction

SMB Network Environment

Comparison

One of the most important and effective tools for protection against network attacks is a firewall. A firewall is a security system located at the connection point of computer networks that provides protection by monitoring and filtering all traffic passing through it, among many other benefits. A DMZ is an interface located between a trusted network segment and an untrusted segment (the Internet), thus providing physical isolation between the two networks. With the design of the DMZ (demilitarized zone) and optimal firewall settings, the network can be protected against external threats extremely cost-effectively.

A medium-sized network topology was developed to demonstrate the efficient operation and characteristics of the firewall and the DMZ segment. The network was designed using Cisco and Netgear devices. Internal network connectivity is provided by a Cisco 3560 Layer 3 switch, two additional Cisco Catalyst 2950 Layer 2 switches and a Catalyst 2960 Layer 2 switch. Mobile access is provided by Cisco WAP 371 access points. (The topology shows one AP for demonstration purposes.)

Brief feature and pricing comparison between Cisco ASA and pfSense

pfSense:
• Excellent load balancer and HA.
• Firewall rules are easy to configure.
• Many additional packages and features can be installed on the fly.
• Configuring VPN setups is easy and rock solid for Linux, Mac and Windows.
• It is open source and free.
• It has centralized configuration and documentation. Enterprise-level.

Cisco ASA:
• Provides an enterprise-level firewall.
• ASA is suitable for every organization from mid to high range. Small customers can buy the Cisco product, but support cost will be the challenge.
• It is fully compatible with other key security technologies.
• The pricing of devices and licensing of ASA is higher than others.

Project Aim

The general aim of the project is to demonstrate the indispensability of a firewall application when designing a network today, providing an effective solution for optimal network security with the services of an open-source firewall application, and the creation of a protected network segment, from the home network to the enterprise level. It demonstrates the importance and role of the De-Militarized Zone, which provides extra security to the organization's or individual's local area network.

Methods

pfSense as firewall

pfSense is an open-source firewall/router operating system based on FreeBSD. For the firewall, the hardware interface is provided by a desktop mini-PC with an i5 CPU, 8GB RAM, a 120GB SSD and three Ethernet network cards, which provide the connection to the three segmented networks: WAN, LAN, and DMZ.

pfSense is very versatile and flexible, with several additional plug-ins available for best performance. In the SMB topology, pfSense acts as a firewall to apply rules and other security settings, and uses rules to control inbound and outbound network traffic. It uses the positive control model to control access to network resources.

Firewall rules on pfSense

LAN: Allow TCP/UDP 53 (DNS) from LAN subnet to LAN address - TCP 80 (HTTP) from LAN subnet to any - TCP 443 (HTTPS) from LAN subnet to any - TCP 21, 25, 110, 143, UDP 3389.
DMZ: Allow TCP 80 (HTTP) from DMZ subnet to any - TCP 443 (HTTPS) from DMZ subnet to any - UDP 123 (NTP) from DMZ subnet to any.

DeMilitarized Zone

The DMZ is the interface that separates the service network segment, which is considered reliable, from both the trusted internal network segment and the untrusted external network. Its advantage is that it restricts unknown Internet traffic to, at most, the servers in the DMZ, preventing that traffic from entering the internal network. To further increase network security, DMZ traffic is audited, and it is possible to integrate an IDS (Intrusion Detection System) and a name server (DNS) in the DMZ.
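The positive control model described above reduces to a default-deny rule matcher: traffic is allowed only if an explicit rule matches. A minimal Python sketch of that logic (an illustration only, not pfSense's implementation), using a subset of the LAN/DMZ rules from the poster:

```python
# Sketch of default-deny (positive control) firewall matching; not pfSense code.
# A packet is allowed only if it matches an explicit rule.

RULES = [
    # (source zone, protocol, destination port)
    ("LAN", "TCP", 80),    # HTTP from LAN subnet to any
    ("LAN", "TCP", 443),   # HTTPS from LAN subnet to any
    ("DMZ", "TCP", 80),    # HTTP from DMZ subnet to any
    ("DMZ", "UDP", 123),   # NTP from DMZ subnet to any
]

def allowed(zone, proto, dport):
    """Positive control model: anything not explicitly allowed is denied."""
    return (zone, proto, dport) in RULES

print(allowed("LAN", "TCP", 443))   # True
print(allowed("DMZ", "TCP", 3389))  # False: no matching rule, default deny
```

A real firewall also matches on source/destination address and direction, but the decision structure is the same: walk the rule list, and fall through to deny.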

Conclusion

All the enterprise-class services provided by pfSense are reliable, powerful and rock solid in both stability and security, with a lot of benefits. pfSense is great for companies with medium budgets and advanced networking needs. It is very versatile and can be installed on any x86 platform with as little as 16GB of disk and 1GB of RAM. This makes it great for applications such as hosted environments where you only have VMs. It also supports many kinds of hardware.

References

1. O'Reilly Online Learning, 2020. Network Security With pfSense. [online] Available at: <https://learning.oreilly.com/library/view/network-security-with/9781789532975/> [Accessed 26 April 2020].
2. Docs.netgate.com, 2020. Configuration And WebGUI — Basic Firewall Configuration Example | pfSense Documentation. [online] Available at: <https://docs.netgate.com/pfsense/en/latest/config/example-basic-configuration.html> [Accessed 26 April 2020].

Computer Network Management and Design 211


STUDENT BIOGRAPHY

Zarko Krsmanovic Course: BSc (Hons) Computer Network Management and Design Physical Security of Computer Systems and BYOD Security Risk Assessment Allowing employees to independently choose the type and model of the device with which to carry out their business responsibilities, while being able to use it as a personal device outside the workplace, will undoubtedly increase the productivity of the company. However, enabling BYOD increases the level of security risks, and one of the biggest problems is data leaks. An enterprise will have little or no control over corporate data, because corporate data are now stored and accessed by employees' personal mobile devices (Olalere, Abdullah, Mahmod and Abdullah, 2015). In the event of theft or loss of a BYOD device, companies may reduce the risk of unauthorised access to company data by using one of the Mobile Device Management (MDM) solutions.

212


Physical Security of Computer Systems and BYOD Security Risk Assessment

Student: Zarko Krsmanovic Supervisor: Harsha Kalutarage

Introduction

Allowing employees to independently choose the type and model of the device with which to carry out their business responsibilities, while being able to use it as a personal device outside the workplace, will undoubtedly increase the productivity of the company. However, enabling BYOD increases the level of security risks, and one of the biggest problems is data leaks. An enterprise will have little or no control over corporate data, because corporate data are now stored and accessed by employees' personal mobile devices (Olalere, Abdullah, Mahmod and Abdullah, 2015). In the event of theft or loss of a BYOD device, companies may reduce the risk of unauthorized access to company data by using one of the Mobile Device Management (MDM) solutions.

Figures and Results

Project Aim

This project aims to demonstrate the importance of using Mobile Device Management (MDM) to protect corporate information systems, and to show an effective way to protect data in cases of theft or loss of BYOD devices. The aim of this project is not to compare the MDM software solutions of different companies, but to present generally the MDM capabilities for preventing unauthorized access to mobile devices and preventing data leaks. Also, the selection of ManageEngine's MDM solution is in no way funded by the company ManageEngine; it has been used because it allows up to 25 enrolled devices for free (ManageEngine, 2020).

Fig4: Remote access to wipe data

Fig2: Mobile MDM application

Fig7: MDM services

Fig3: Lost device Info screen

Using the MDM software solution, the administrator has an insight into the operation of portable devices. The company may allow employees to use personal devices, and may also assign devices owned by the company to its employees. In either case, the IT administrator will first enrol the device. Enrolment can be done by sending an invitation to an email address (Fig1) or instructing the user to self-enrol using the MDM application (Fig2). After successful enrolment using credentials, employees will have access to company resources. MDM software enables the management and control of various services that affect the operation and safety of mobile devices; some of the available services are content management, security management, profile management, etc. (Fig.7). Indeed, one of the essential services offered by MDM is the prevention and mitigation of data leakage. Because mobile devices used for day-to-day work within the company are brought outside the company after business hours, there is a high possibility of accidental loss or theft of the device.

Fig5: Remotely wiping data

Conclusion

Through this project, the importance of using MDM solutions in certain aspects of data security and information systems could be seen. However, the question of user privacy arises, since the company can at any time monitor the location as well as the activities of users, even when the devices are used for private purposes. Essentially, it is necessary to find the right balance between security risks and the privacy of the users of BYOD devices.


Methods

Mobile device management (MDM) is the process of managing users' mobile devices by defining different security policies. After enrolment, MDM communicates with mobile devices such as laptops, tablets, and mobile phones by sending policies previously specified by the administrator. Different types of policies can be defined for each type of platform or device, and most MDM solutions can manage Windows, iOS, Android, or Chrome environments. MDM can set minimum system requirements for mobile devices to access the corporate network: the OS must be updated, rooted or jailbroken devices are not allowed on the corporate network, suspicious applications are blacklisted, storage encryption is mandatory, complex passwords and VPNs are required, and remote access is available in case of loss or theft of a device.
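The minimum-requirement checks listed above can be thought of as a single compliance gate evaluated before a device is allowed onto the corporate network. A Python sketch of the idea; the device fields are invented for illustration and are not ManageEngine's schema:

```python
# Hypothetical MDM compliance gate; field names are invented for illustration.

def is_compliant(device):
    """Allow corporate network access only if the policy's minimums are met."""
    return (device["os_updated"]
            and not device["rooted_or_jailbroken"]
            and device["storage_encrypted"]
            and device["passcode_set"])

phone = {"os_updated": True, "rooted_or_jailbroken": False,
         "storage_encrypted": True, "passcode_set": True}
rooted = dict(phone, rooted_or_jailbroken=True)

print(is_compliant(phone))   # True
print(is_compliant(rooted))  # False: rooted devices are blocked
```

A real MDM evaluates many more attributes per platform, but every check feeds the same allow/deny decision.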


Fig1: Enrolment Invitation

Acknowledgments

Special thanks to Dr Harsha Kalutarage for giving me significant project advice, as well as for transferring knowledge regarding the security of computer networks and systems. I would also like to thank the Aberdeenshire Council ICT Team, who unselfishly supported the implementation of my project and gave me the server and network equipment to use.

To prevent an unauthorized person from accessing company data from a missing device, the user must report the loss of the device to the company as soon as possible. The administrator can then remotely locate the device (Fig.4) and place a message on the device screen stating that the device has been lost, with visible company contact information (Fig.3). In case the device is stolen, the administrator can remotely initiate the deletion of the data belonging to the company, or even wipe the entire device to prevent unauthorized access (Fig.5). Using the information in the MDM software, the administrator has an insight into the system of each device (Fig.6). It is also possible to ban the use of rooted Android or jailbroken Apple devices, as well as the use of specific software, to reduce the vulnerability of both the device and the information system itself.

Fig6: Enrolment summary

References

Olalere, M., Abdullah, M., Mahmod, R. and Abdullah, A., 2015. A Review Of Bring Your Own Device On Security Issues. [online] Journals.sagepub.com. Available at: <https://journals.sagepub.com/doi/pdf/10.1177/2158244015580372> [Accessed 17 April 2020].
ManageEngine, 2020. Mobile Device Management | MDM Cloud Software - ManageEngine Mobile Device Manager Plus. [online] Manageengine.com. Available at: <https://www.manageengine.com/mobile-device-management/?MEblog> [Accessed 13 April 2020].

213


STUDENT BIOGRAPHY

Andy Menzies Course: BSc (Hons) Computer Network Management and Design Is there a real time benefit to the latency of LI-FI networks as to Wi-Fi/Bluetooth networks? The purpose of this project is to find out if there is a benefit to using Light Fidelity (LI-FI) technology for sending data, rather than the normal day-to-day methods of data transmission such as Wireless Fidelity (WI-FI), Bluetooth, and even a legacy technology such as infrared transmitters. LI-FI was developed by Dr Harald Haas as a solution to the now limited range of frequencies that can be used for WI-FI broadcasting, due to the purchasing of frequencies for various reasons; Dr Haas first demonstrated how LI-FI works in a TED Talk in 2011. There is now an influx of study into LI-FI for use in automotive safety features such as collision avoidance systems and the development of self-driving cars.

214


Is there a real time benefit to the latency of LI-FI networks as to Wi-Fi/Bluetooth networks?

Andrew Menzies & Ian Harris

Introduction

The purpose of this project is to find out if there is a benefit to using Light Fidelity (LI-FI) technology for sending data, rather than the normal day-to-day methods of data transmission such as Wireless Fidelity (WI-FI), Bluetooth, and even a legacy technology such as infrared transmitters. LI-FI was developed by Dr Harald Haas as a solution to the now limited range of frequencies that can be used for WI-FI broadcasting, due to the purchasing of frequencies for various reasons. Dr Haas first demonstrated how LI-FI works in a TED Talk in 2011: https://www.ted.com/talks/harald_haas_forget_wi_fi_meet_the_new_li_fi_internet?utm_campaign=tedspread&utm_medium=referral&utm_source=tedcomshare

Project Aim

There is now an influx of study into LI-FI for use in automotive safety features such as collision avoidance systems and the development of self-driving cars.

Methods

Figures and Results


At the time of this print, the project was not running as expected: it was possible to send the binary pulses from the Arduino, but at this moment in time the receiver is unable to receive the binary pulses and convert them back into a readable format. Some of the testing that was carried out showed different results, and I was able to find a way to print out the lumen reading of an LED bulb. I know it is possible to achieve the functionality I was intending, as I have seen many videos and similar projects, but due to unforeseen circumstances I did not find the materials that may have helped in one way or another. I will endeavour to find a way to carry out the experiment and have it work. As I could not run the test, I was unable to verify my initial thought that Li-Fi is a faster transfer medium than Wi-Fi and Bluetooth. "I didn't fail the test, I just found 100 ways to do it wrong." - Benjamin Franklin. As the quote states, I have found some of the ways not to do it, but there is a way to do it.

The method used in this project was designing a test platform to prove that it is possible to use a pair of Arduinos: one to project the Li-Fi signal and one to receive the input and present the output in a serial terminal.
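The transmit/receive design described above can be illustrated conceptually: the sender serialises each byte into on/off light pulses, and the receiver thresholds brightness readings back into bits. A Python sketch of the idea (the project itself targets Arduino; the 50-lumen threshold is an arbitrary illustrative value):

```python
# Conceptual on/off keying as used in Li-Fi; not the project's Arduino sketch.

def transmit(text):
    """Each character becomes 8 light pulses: 1 = LED on, 0 = LED off."""
    return [int(bit) for ch in text for bit in format(ord(ch), "08b")]

def receive(lumens, threshold=50):
    """Threshold raw brightness readings into bits, then rebuild the bytes."""
    bits = "".join("1" if level > threshold else "0" for level in lumens)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

pulses = transmit("Hi")
# Pretend the photodiode reads ~100 lumens for "on" and ~0 for "off".
readings = [100 if p else 0 for p in pulses]
print(receive(readings))  # Hi
```

A real link additionally needs clock synchronisation (agreeing when each bit starts and ends), which is exactly where the receiver side of this project ran into trouble.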

Conclusion

The conclusion I have come to in this report is that it should be possible to carry out the physical testing, although I was unable to do so; previous testing in reports and journals shows it is possible. I still feel this is an up-and-coming technology that may take the strain off the bandwidth of wireless networks.

Acknowledgments

There are several people I would like to acknowledge for their help with my report. Firstly, my wife Mandy and my children Lukas, Josh and Zoe, who have put up with me hogging the computers so much. My supervisor Ian Harris, who helped guide me on my way, and Dr John Isaacs and Adam Lyons, who helped me bash out some scripting issues, as I have no idea how scripting works. Lastly, the CNMD computing department for inspiring me to keep going: even if it doesn't work the first time, it doesn't mean it won't.

References


Anon., 2012. Bluetooth technology. [online] Available at: https://www.epo.org/learning-events/european-inventor/finalists/2012/haartsen.html [Accessed 16 February 2020].
GUNJAN4542, 2018. Gunjan4542/Light_Fidelity-Lifi. [online] Available at: https://github.com/Gunjan4542/Light_Fidelity-Lifi [Accessed 20 February 2020].
Haas, H. et al., n.d. s.l.: s.n.
jpiat, 2017. jpiat/arduino. [online] Available at: https://github.com/jpiat/arduino [Accessed 18 February 2020].

215


STUDENT BIOGRAPHY

Jonathan Merchant Course: BSc (Hons) Computer Network Management and Design An academic study and comparison of the call setup delay and performance of SIP and SCCP in a VoIP environment Voice over IP (VoIP) makes use of call signalling protocols for the initiation and teardown of calls. Two of these signalling protocols are the Skinny Call Control Protocol (SCCP) and the Session Initiation Protocol (SIP). A key, measurable indicator of the performance of these two protocols is the time delay taken for a call to be set up. This issue has been explored in a testing environment, under different loads of circuit usage, to determine which protocol can perform call setup more efficiently. The results of these tests are then compared, evaluating the performance of the protocols against each other, looking for the lowest average call setup time.

216


An academic study and comparison of the call setup delay and performance of SIP and SCCP in a VoIP environment By Jonathan Ian Merchant

Supervisor: Mhd Omar Al Kadri

Introduction

Methods

Voice over IP (VoIP) makes use of call signalling protocols for the initiation and teardown of calls. Two of these signalling protocols are the Skinny Call Control Protocol (SCCP) and the Session Initiation Protocol (SIP). A key, measurable indicator on the performance of these two protocols is the time delay taken for a call to be setup. This issue has been explored in a testing environment, under different loads of circuit usage, to determine which protocol can perform call setup more efficiently. The results of these tests are then compared, evaluating the performance of the protocols against each other, looking for the lowest average call setup time.

The experiment consisted of making multiple test calls, whilst putting the call management server under different loads of UDP traffic (% based on the maximum circuit usage). To get an average result, ten calls were made at four different levels of circuit usage (0%, 50%, 75%, 100%). Wireshark was used to capture these results in .pcap files, which were then analysed to determine the time between the initial signalling protocol transmission and call setup. To calculate the setup delay, the following formulae were used for their respective protocols: for SCCP, the initial CallStateMessage packet time was subtracted from the StartMediaTransmissionAck packet time. For SIP, the initial INVITE SDP packet was subtracted from the ACK packet.
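The two formulae above are both a subtraction of one packet timestamp from another in the capture. Assuming the .pcap has already been parsed into (timestamp, message) records, the calculation can be sketched as:

```python
# Sketch of the call setup delay calculation described above.
# Packets are (timestamp_seconds, message_name) tuples from a parsed capture.

def setup_delay(packets, start_msg, end_msg):
    """Delay = time of the first end_msg minus time of the first start_msg."""
    start = next(t for t, msg in packets if msg == start_msg)
    end = next(t for t, msg in packets if msg == end_msg)
    return end - start

# Illustrative SIP exchange; timestamps are made up for the example.
sip_capture = [
    (10.000, "INVITE SDP"),
    (10.020, "100 Trying"),
    (10.150, "200 OK"),
    (10.170, "ACK"),
]

# SIP: ACK time minus initial INVITE SDP time.
print(round(setup_delay(sip_capture, "INVITE SDP", "ACK"), 3))  # 0.17
```

For SCCP the same function applies, with "CallStateMessage" as the start message and "StartMediaTransmissionAck" as the end message.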

Project Aim

The goal of this project was to compare and contrast the SIP and SCCP protocols, to determine if the time delay for call setup was lower for one particular protocol than the other. The aims of this project are as follows:
•  To develop and implement a stable test environment, in which both the SCCP and SIP protocols can be tested with limited human interaction.
•  To introduce a method of performing the tests, with the call manager under different levels of load.
•  To capture the results of calls made with both signalling protocols, and use them to measure the call setup delay.
•  To identify which of the two signalling protocols has the lowest average call setup delay, indicating a greater level of efficiency.
The success of this project will be determined by the artifact's ability to compare and contrast the setup delay speeds between the two protocols.

Results

Assessing the results of the project, it was found that the average call setup delay is approximately 17ms lower when SIP is being used and the call manager is under minimal load, indicating a greater level of efficiency. As the load on the call server was increased, it was observed that the results became more erratic, with some call setup delays spiking abnormally high. Investigation found these spikes, however, to be related to abnormally high ARP requests from the test environment.

Conclusion

(Cisco, 2020)

(Montazerolghaem et al., 2016).

Figures

I found the project as a whole to be successful, as the goal of identifying a difference in call delay was achieved. For future work, it would be of benefit to repeat these experiments using a call generation tool to produce a bulk number of concurrent calls, to identify any further variance.

Acknowledgments

I would firstly like to thank my supervisor, Omar, for all of his support and insight throughout the lifecycle of this project, especially when challenges were faced. I would also like to thank all of my lecturers over the past four years for enabling me to reach this point. Finally, I would like to thank Arrowdawn Ltd for providing me with an amazing work placement and invaluable experience.

References

These figures detail the measured call setup delay of both the SIP and SCCP setups under the different call manager load conditions. A figure has also been included to highlight the overall average of each protocol's call setup delay under these parameters.

Cisco, 2020. Cisco ATA 186 and Cisco ATA 188 Analog Telephone Adaptor Administrator's Guide (SCCP) (2.15.ms, Rev B0) - SCCP Call Flows [Cisco ATA 180 Series Analog Telephone Adaptors]. [online] Available at: https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cata/186_188/2_15_ms/english/administration/guide/sccp/sccp/sccpaaph.html [Accessed 10 Feb. 2020].
Montazerolghaem, A., Hosseini Seno, S., Yaghmaee, M. and Tashtarian, F., 2016. Overload mitigation mechanism for VoIP networks: a transport layer approach based on resource management. Transactions on Emerging Telecommunications Technologies, 27.

217


STUDENT BIOGRAPHY

Adam Nayler Course: BSc (Hons) Computer Network Management and Design Comparing the Use of Intrusion Detection and Intrusion Prevention Systems within Small-scale Networks As outlined by Stokdyk (2018), 58 percent of business owners with up to 299 employees have been victims of cyber-attacks. With such a large number of small businesses falling prey to cyber criminals, it is paramount that the correct security is in place to prevent attacks and protect data and users. Even a single breach is devastating, with the average cost of one in 2019 being $3.92 million (Sobers 2019). With this knowledge, companies need to know what technology to implement to defend against these threats. This project aimed to explore whether a high-cost IPS system is necessary for a small business, judged against instead running an IDS system alongside the defence strategies implemented by default in Windows 10. However, the results may show that, as in many situations, these alone are not enough, and instead a defence-in-depth approach is required to provide an adequate, consistent solution to detection and prevention.

218


Comparing the Use of Intrusion Detection and Intrusion Prevention Systems within Small-scale Networks. Adam Nayler & Andrei Petrovski

Introduction

As outlined by Stokdyk (2018), 58 percent of business owners with up to 299 employees have been victims of cyber-attacks. With such a large number of small businesses falling prey to cyber criminals, it is paramount that the correct security is in place to prevent attacks and protect data and users. Even a single breach is devastating, with the average cost of one in 2019 being $3.92 million (Sobers 2019). With this knowledge, companies need to know what technology to implement to defend against these threats. This project aimed to explore whether a high-cost IPS system is necessary for a small business, judged against instead running an IDS system alongside the defence strategies implemented by default in Windows 10. However, the results may show that, as in many situations, these alone are not enough, and instead a defence-in-depth approach is required to provide an adequate, consistent solution to detection and prevention.

Project Aim

The aim of this project was to compare the detection and prevention abilities of each system and determine their effectiveness. This would help determine whether one system is a better investment than the other for a small-scale network. The results should outline whether either is able to fully protect an enterprise network on its own, or if a defence-in-depth strategy is still required for smaller networks.

Methods

This project made use of two separate but identical GNS3 networks implementing VMs running within VMware. Each network contains three routers and switches to accommodate three separate subnets. Each subnet contained three host devices running Windows 10 for testing. Subnet one also contained the system-specific devices. A Kali Linux VM was included to represent an attacker. The base layout for these networks is shown below.

The IDS system makes use of Wazuh running on a Debian 10 server. It monitors specified folders for new files and sends those new files to VirusTotal for analysis. The IPS system uses SolarWinds Security Event Manager to carry out pre-configured rules when the test systems send alerts about malware detection.
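The Wazuh-to-VirusTotal flow above amounts to hashing each newly seen file and looking the hash up against malware databases. A minimal Python sketch of that idea, with a toy in-memory stand-in for the threat database (the real system submits hashes to VirusTotal, which requires an API key and is omitted here):

```python
# Sketch of the detection flow: hash new files, then look the hash up
# against a threat database (the real system queries VirusTotal).
import hashlib

def sha256_of(data: bytes) -> str:
    """Hash-based lookups are keyed by the file's content, not its name."""
    return hashlib.sha256(data).hexdigest()

# Toy stand-in for a database of known-bad file hashes.
KNOWN_BAD = {sha256_of(b"malicious payload")}

def scan(data: bytes) -> str:
    return "malicious" if sha256_of(data) in KNOWN_BAD else "clean"

print(scan(b"malicious payload"))  # malicious
print(scan(b"quarterly report"))   # clean
```

This also explains the obfuscated test samples used later in the project: repacking a file changes its hash, so detection then depends on the engines' deeper analysis rather than a simple hash match.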

Figures and Results

Each end device within each network was tested against four iterations of four different types of malware, for a total of sixteen tests. The goal was to measure the detection and prevention ability of each system. The results of the first round of testing found that the IDS system was able to detect the malware 100% of the time. This was most likely due to its use of VirusTotal, which analyses possibly malicious files against over 70 different database engines to check whether a file is a threat.

Figure: number of database engine detections for each of the sixteen test samples (Eda2, worm, RAT and infected installer variants), ranging from 26 to 52 detecting engines per sample.

However, prevention for the IDS system was only 81%, as it was dependent on Windows Defender, having no IPS capabilities of its own.

Figure: Windows Defender detection for the IDS setup - detected malware 81%, undetected malware 19%.

The IPS system, on the other hand, turned out to be dependent on third-party software for detection, and therefore its rates of detection and prevention were both lower than those of the IDS system. Its detection rate was only 68.75%, and it could only successfully carry out IPS actions when a malware event was detected by the system through third-party software such as Windows Defender. It did, however, carry out all actions successfully whenever an event was successfully received.

Figure: Windows Defender detection for the IPS setup - detected malware 69%, undetected malware 31%.

Conclusion

Although the results were not as expected, I believe this project has been carried out successfully and effectively. I expected the results to show that the IDS had better detection rates but that the IPS would have greater prevention capability. However, due to how it functions, the IPS's detection and prevention rates both ended up being less than those of the IDS. It is difficult to know whether this would still have been the case had I been able to replicate the systems physically, but due to COVID-19 this was impossible. Given a larger time frame, I would have preferred to carry out every test on multiple IDS and IPS solutions, so the results were not dependent on a single type or brand. Overall, I am disappointed that unforeseen circumstances prevented some objectives of this project from being achieved. On the other hand, I am very happy with the work that was carried out for those still achievable.

Acknowledgments

I would like to thank all of the teaching staff at Robert Gordon University for their knowledge and support during my studies; specifically, the support and advice given throughout my honours project by Andrei Petrovski and Ian Harris. An additional thank you goes out to my fiancée, Jodie. She has been consistently supportive and patient with me throughout the entire project.

References

SOBERS, R., 2020. 110 Must-Know Cybersecurity Statistics for 2020. [online] Available from: https://www.varonis.com/blog/cybersecurity-statistics/ [Accessed 14th April 2020].
STOKDYK, D., 2018. What is Cyber Security and Why is it Important? [online] Available from: https://www.snhu.edu/about-us/newsroom/2018/05/what-is-cyber-security [Accessed 14th April 2020].

219


STUDENT BIOGRAPHY

Vadims Nevidomskis Course: BSc (Hons) Computer Network Management and Design Evaluation of the networking security tools that detect and respond to threats Today, data protection is crucial to the world's security. The stable and well-coordinated work of Small and Medium Enterprise networks is extremely important. In order to maintain the security of equipment and employees, the company needs to conduct security monitoring. There are thousands of types of malicious software, ransomware, viruses and spyware out there, each one using different techniques and vulnerabilities, exposing sensitive personal data to hackers. Network security monitoring tools will help to detect anomalous activity within the network. Monitoring ensures that the IT administrators are informed in a timely manner about the events taking place in the system, as well as giving them the opportunity to make effective, informed, on-time decisions on how to react to these events. A reliable and efficient network security monitoring system will allow problems to be prevented before they can occur. The use of the security system could help to secure and improve the operation of the network infrastructure and, as a result, protect data efficiently.

220


Evaluation of the networking security tools that detect and respond to threats. Vadims Nevidomskis

Introduction

Today, data protection is crucial to the world's security. The stable and well-coordinated work of Small and Medium Enterprise networks is extremely important. In order to maintain the security of equipment and employees, the company needs to conduct security monitoring. There are thousands of types of malicious software, ransomware, viruses and spyware out there, each one using different techniques and vulnerabilities, exposing sensitive personal data to hackers. Network security monitoring tools will help to detect anomalous activity within the network. Monitoring ensures that the IT administrators are informed in a timely manner about the events taking place in the system, as well as giving them the opportunity to make effective, informed, on-time decisions on how to react to these events. A reliable and efficient network security monitoring system will allow problems to be prevented before they can occur. The use of the security system could help to secure and improve the operation of the network infrastructure and, as a result, protect data efficiently.

Figures and Results

Fig 2 shows SolarWinds Rules.

Fig 3 shows SNORT Rules.

Fig 4 shows SURICATA Rules.

A single threat, such as a Distributed Denial of Service (DDoS) attack or running malware.exe on the victim's host, was simulated, as well as these multiple threats at the same time. The intrusion detection system and intrusion prevention system tools then analysed the network traffic for malicious behaviour and other signs that a threat had entered the network infrastructure.

Project Aim The aim of this project is to evaluate open-source and commercial network security systems that detect and respond to the most common attacks. It outlines security systems such as Intrusion Detection and Prevention Systems (IDPS) and the benefits of security for Small and Medium Enterprise networks.

Methods To respond to the most common threats, system rules and response actions were configured for each tool, as the rules play a key role in detecting threats in the network infrastructure. Rule examples are shown in Figs 2, 3 and 4. The most common threats, such as a DDoS attack and malware.exe, were simulated, and the security systems SNORT, Suricata and SolarWinds were tested against those threats using the simple network topology shown in Fig 1.
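By way of illustration (this is a generic sketch, not one of the project's actual rules from Figs 2-4), a minimal Snort rule that raises an alert whenever an ICMP echo request, such as the simulated ping, reaches the protected network could look like this:

```
# Alert on any ICMP echo request entering the protected network
# (message text and SID are illustrative; local rules use SIDs >= 1000000)
alert icmp any any -> $HOME_NET any (msg:"ICMP ping detected"; itype:8; sid:1000001; rev:1;)
```

Suricata accepts the same rule syntax, which is one reason the two tools are often compared directly.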

Fig 1 shows the simple network topology.

Conclusion As a result, the security systems successfully identified the most common threats. SNORT is a free open-source tool but hard to configure; additionally, SNORT offers rule subscriptions at extra cost. It can integrate with other software such as Splunk and the pfSense firewall. Suricata is similar to SNORT, but it lacks proper documentation, is very hard to configure, and maintenance is very tricky. SolarWinds is an enterprise-level, wide-ranging event and intrusion detection software: easy to set up, with clear documentation, but very expensive. Protecting data should not fall only on the shoulders of IT administrators; end users should also be involved, through continuous security education alongside a range of tools to protect data. Recommendations depend entirely on budget and the business owners; however, one good solution would be SNORT along with integrated software. The cost of a breach is fundamentally more financially damaging than paying for network security that prevents breaches, and regardless of what it involves, the cost of recovering from a security breach is rarely calculated into the IT budget. When implementing commercial or open-source software to protect your data, it is definitely better to be safe than sorry.

Fig 5 shows a comparison chart of real and false alerts.

Once the security tools detected a threat, an alert was raised. Depending on the mode of the tool, it would then take action to detect or prevent the threat. All security systems showed effective results in detecting known threats; however, approximately 10% of all alerts generated were false alerts, as shown in Fig 5. Additionally, there is an option to set up email notifications, or to add components that allow automated alert notifications to be configured, so that IT administrators are informed in a timely manner about alerts and events taking place in the network infrastructure. Each security tool has its advantages, disadvantages and limitations (for example in installation and maintenance), so the selection of security approaches should be thoughtful. Many open problems and future challenges remain, such as wireless network security.

Acknowledgments

This project was supported by a free trial of Security Event Manager from SolarWinds. The contributions of the SNORT and Suricata technical support and guidance forums, and their communities, who contribute installation guides and helped with errors that occurred during software installation, are gratefully acknowledged.

References

Snort.org, 2020. Snort Setup Guides for Emerging Threats Prevention. [online] Available at: https://www.snort.org/documents [Accessed 9 April 2020].
Suricata, 2020. Suricata. [online] Available at: https://suricata-ids.org [Accessed 9 April 2020].
Solarwinds.com, 2020. Security Event Manager - View Event Logs Remotely | SolarWinds. [online] Available at: https://www.solarwinds.com/security-event-manager [Accessed 9 April 2020].

Computer Network Management and Design

221


STUDENT BIOGRAPHY

Josh Parkinson Course: BSc (Hons) Computer Network Management and Design Environmentally-conscious Computing – An Embedded Systems-Based Approach. In 2008, the Climate Group published a report predicting that in 2020, 57% of ICT's global carbon footprint will be from PCs and peripherals. [1] With this high figure, and a general shift in global mentality towards greater care for the environment, it has never been more important to look into ways of making computing and network design more eco-friendly, especially in such an impactful sector. When it comes to embedded systems, few studies investigate their use in the way that this project has, with many looking into their viability as clusters for high-performance computing at low cost. With this in mind, this study takes a different approach, and with different priorities, looking to reduce that predicted 57% emissions figure by offering a low-power alternative to home or office desktop systems, while maintaining scalability and cost-effectiveness.

222


Environmentally-conscious Computing – An Embedded Systems-Based Approach.

Student – Joshua Parkinson Supervisor – Chris McDermott – c.d.mcdermott@rgu.ac.uk

Introduction In 2008, the Climate Group published a report predicting that in 2020, 57% of ICT's global carbon footprint will be from PCs and peripherals.[1] With this high figure, and a general shift in global mentality towards greater care for the environment, it has never been more important to look into ways of making computing and network design more eco-friendly, especially in such an impactful sector. When it comes to embedded systems, few studies investigate their use in the way that this project has, with many looking into their viability as clusters for high-performance computing at low cost. With this in mind, this study takes a different approach, and with different priorities, looking to reduce that predicted 57% emissions figure (Figure 1: Plutchik-style figure references do not apply here) by offering a low-power alternative to home or office desktop systems, while maintaining scalability and cost-effectiveness.

Project Aim The overall aim of this project is to evaluate the viability of the use of embedded systems in a classroom or workplace environment, reducing overall power consumption while maintaining acceptable levels of performance. This is to be assessed either by measuring power consumption and performance in real time, or by using an indicative approach, looking at maximum consumption when compared with potential desktop equivalents.

Method The project makes use of a networked group of Raspberry Pi devices, running a thin-client operating system called WTWare (available at wtware.com). These clients connect to a laptop running Windows Server 2016, via Remote Desktop sessions. The server hosts an offline domain, with user accounts configured for remote login. The image below shows the physical setup.

Results and Expectations The outcome of this design is a scalable system that is most limited by the resources available on the server itself; even on the ageing 2015 Intel i7, 16GB RAM laptop used in this experiment, the expectation is that it would be good enough to support a small office team, perhaps an accounting department of 10 staff, using word-processing, spreadsheet-based tasks and/or accounting software. A more high-spec, custom-built server would certainly be able to handle larger groups and more demanding tasks, as well as be configured for further purposes such as file storage and email, and would be suitable for classroom or larger office environments. Benchmarking options specifically for thin clients appear to be rather limited, with their focus mainly being on network performance. As such, real-world testing with multiple sessions and processes is more appropriate, to estimate an acceptable number of clients handled at any one time. Comparisons with the low-end thin-client options Dell Wyse 3040 and HP t530 show comparable power consumption to the Raspberry Pi 4 at idle[2], according to Energy Star[3, 4], though at much higher prices than the Pi, making a compelling argument for thin-client computing on a budget using embedded systems.
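Following the indicative approach described above, the potential saving can be sketched with a back-of-the-envelope calculation. All wattage and usage figures below are illustrative assumptions for the sketch, not measurements from this project:

```python
def annual_saving_kwh(desktop_watts, thin_client_watts, clients,
                      hours_per_day=8, days_per_year=250):
    """Estimate yearly energy saved by swapping desktops for thin clients.

    Server-side overhead is deliberately ignored; every wattage figure
    passed in is an assumption to be replaced with real measurements.
    """
    saved_watts = desktop_watts - thin_client_watts
    return saved_watts * hours_per_day * days_per_year * clients / 1000

# Hypothetical office of 10 staff: assumed 60 W desktops vs 3 W Pi clients
print(annual_saving_kwh(60, 3, 10))  # 1140.0 kWh per year
```

Even with generous allowance for the shared server's consumption, a saving of this order illustrates why the embedded approach is worth evaluating.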

Conclusion Initial indications with the use of a rudimentary setup are promising, suggesting that it would be perfectly possible to create a network of thin clients using Raspberry Pi embedded systems. While benchmarking individual systems doesn't appear to be as easy as hoped, extrapolation based on server performance seems to be a suitable alternative. In any case, there is a strong argument to be made for the use of Raspberry Pi systems in conjunction with WTWare to create a functional educational or workplace environment centred around reducing ICT's impact on the environment.

Future Work This work should be undertaken in future using more up-to-date hardware, to further prove its viability in a real-world scenario and with greater comparison to the systems it aims to replace. More detailed benchmarking aimed at embedded systems is needed, though not readily available.

References [1] WEBB, M., 2008. SMART 2020: Enabling the low carbon economy in the information age. [2] Online - https://raspi.tv/2019/how-much-power-does-the-pi4b-use-power-measurements [3] Online - https://www.energystar.gov/productfinder/product/certified-computers/details/2323364 [4] Online - https://www.energystar.gov/productfinder/product/certified-computers/details/2325117

223


STUDENT BIOGRAPHY

Adam Perry Course: BSc (Hons) Computer Network Management and Design Demonstrating why Cyber Security should be a bigger part of the Scottish Secondary School Curriculum. Secondary school education is vitally important in the development of young people. Technology and the current generation are tightly entwined, with each relying on the other. The lack of cyber security knowledge and education can have lasting detrimental effects on the current young generation. A study by IEEE found that there was a "tension between teenagers' self-perception of invincibility and their online vulnerability" (IEEE, 2019). This is mainly due to the lack of education and adult support. There are small groups who want to expand their knowledge themselves and pursue a course or career in the tech sector. The notion that they are "invincible" online gives little regard to online safety; they don't recognise the 'cyber footprint' they leave behind, with a trail of information behind it. This lack of knowledge can lead to small numbers posting illegal content on social media, e.g. sexualised imagery of themselves or 'friends'. This online footprint can also prove detrimental to the person when they get older, as employers can check what they have said and done over the years.

224


Demonstrating why Cyber-Security should be a bigger part of Scottish Secondary School Curriculum. Adam Perry & Ian Harris

Introduction Secondary school education is vitally important in the development of young people. Technology and the current generation are tightly entwined, with each relying on the other. The lack of cyber security knowledge and education can have lasting detrimental effects on the current young generation. A study by IEEE found that there was "a tension between teenagers' self-perception of invincibility and their online vulnerability" (IEEE, 2019). This is mainly due to the lack of education and adult support. There are small groups who want to expand their knowledge themselves and pursue a course or career in the tech sector. The notion that they are "invincible" online gives little regard to online safety; they don't recognise the 'cyber footprint' they leave behind, with a trail of information behind it. This lack of knowledge can lead to small numbers posting illegal content on social media, e.g. sexualised imagery of themselves or 'friends'. This online footprint can also prove detrimental to the person when they get older, as employers can check what they have said and done over the years.

Example of Lesson

The Caesar Cipher is essentially shifting the letters of the alphabet by a number, either left or right. For example, in a cipher with a left shift of 3, the letter A becomes the letter X. This can be adjusted any way to decode and encode words. It is an easy introduction to encryption and can easily engage pupils of all ages. A cipher wheel can be created for easy encryption and decryption of phrases and words.
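Both lesson activities, the Caesar shift above and the Affine cipher discussed elsewhere on this page, can be sketched in a few lines of Python. The function names are my own; the parameters mirror the lesson's examples (a left shift of 3, and multiply by 3 then add 4):

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar(text, shift):
    """Shift each uppercase letter by `shift` places (negative = left shift)."""
    return "".join(ALPHABET[(ALPHABET.index(c) + shift) % 26] for c in text)

def affine(text, a, b):
    """Affine cipher: encode each letter x (A=0, B=1, ...) as (a*x + b) mod 26."""
    return "".join(ALPHABET[(a * ALPHABET.index(c) + b) % 26] for c in text)

# A left shift of 3 turns A into X, as in the lesson example
print(caesar("A", -3))        # X
# Multiply by 3 and add 4, as in the second cipher example
print(affine("HELLO", 3, 4))  # ZQLLU
```

Decryption is the same idea in reverse: shift the other way for Caesar, or apply the modular inverse of `a` for the Affine cipher, which is a natural extension task for stronger pupils.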

Project Aim

The aim of this project is to provide insight into just how underfunded and desperate the secondary education system in Scotland really is. It will also provide easy activities to help improve cyber security knowledge, which teachers and students can use.

Methods I will be using different ciphers to help explain encryption and why it is important in modern-day cyber security. The lesson will feature a brief history of Alan Turing and the Enigma code, demonstrating how he built a machine and cracked the German encryption in WW2; then there will be activities of increasing difficulty on the Caesar/shift and Affine ciphers.

Conclusion

The project demonstrates that technology is a large part of the younger generation's identity, growing up in an ever-evolving technological world. These resources can help create a solid cyber security foundation from which they can expand their knowledge securely and safely. However, this can only be done with support from the government; over a decade of education cuts has hit the sector hard. The Scottish Government has, however, pledged to improve the education sector over the coming years, injecting over "£15 million of funding for services and staff…"

The Affine cipher is similar to the Caesar cipher; however, each letter is assigned a numerical value (A=0, B=1, etc.). To get the cipher, you can apply any mathematical operation to the number: add, subtract, multiply and so on. This cipher can get complicated fast, but with careful planning and slow explanations you can easily build up to a complex cipher. You can also tie functions into this task.

Acknowledgments

I would like to thank my partner, Dillon; without his constant encouragement and pressure, this project would not have been done. My best friend Melissa has been my rock throughout, allowing me to bounce ideas off her and helping me come up with resources. I would also like to thank the BECS department at Fraserburgh Academy, namely Elaine Bryson and Roselyn Souter, for allowing me to come and work beside them, showing me the reality behind teaching and offering me support and feedback during the six months I was there.

Fig: A cipher with a +3 shift. Fig: A cipher that multiplies by 3 and adds 4.

References

IEEE, 2019. Bringing Cyber to School. https://www.cybok.org/media/downloads/IEEE_SP_Bringing_Cyber_to_School_-_Oct_19.pdf
Scottish Government, 2018. A Cyber Resilience Strategy for Scotland. https://www.gov.scot/binaries/content/documents/govscot/publications/strategy-plan/2018/03/learning-skillsaction-plan-cyber-resilience-201820/documents/00532325-pdf/00532325pdf/govscot%3Adocument/00532325.pdf?forceDownload=true
Crypto Corner, 2019. Cryptography Worksheet — The Caesar Shift. https://crypto.interactive-maths.com/caesar-shift-cipher.html

Computer Network Management and Design 225


STUDENT BIOGRAPHY

Pawel Plachecki Course: BSc (Hons) Computer Network Management and Design The impact of IPsec VPN algorithms within the Cisco environment. A VPN allows you to create a private network connection on a public network, making you almost untraceable. According to an article written by Chris Hoffman, VPNs nowadays are mostly used to access sites not available in one's region (Hoffman, 2019). A prime example would be using a VPN to access the American version of Netflix to view the shows it currently has. VPNs are known to be secure, but does the choice of algorithm influence the time in which an attack can be detected? And if so, is the time difference significant enough to be considered a threat? How much of an impact does the choice of operating system within the router have on safety? Those are some of the questions this project will try to answer.

226


The impact of IPsec Vpn algorithms within the cisco environment. Thesis by: Pawel Plachecki

Supervised by: Chris McDermott

Introduction A VPN allows you to create a private network connection on a public network, making you almost untraceable. According to an article written by Chris Hoffman, VPNs nowadays are mostly used to access sites not available in one's region (Hoffman, 2019). A prime example would be using a VPN to access the American version of Netflix to view the shows it currently has. VPNs are known to be secure, but does the choice of algorithm influence the time in which an attack can be detected? And if so, is the time difference significant enough to be considered a threat? How much of an impact does the choice of operating system within the router have on safety? Those are some of the questions this project will try to answer.

Project Aim The aim of this project is to check whether the implementation of different algorithms has an impact on the security of a VPN; in other words, is one hash and encryption algorithm combination significantly better than another? The project compares detection times across two VPN configurations: an AES-256 (secure) configuration and a DES/MD5 (less secure) alternative.
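For reference, the two algorithm combinations under test correspond to Cisco IOS transform sets along these lines. The transform-set names here are illustrative, not taken from the project's running configuration:

```
! Stronger combination: AES-256 encryption with SHA HMAC authentication
crypto ipsec transform-set SECURE-SET esp-aes 256 esp-sha-hmac
! Weaker combination: DES encryption with MD5 HMAC authentication
crypto ipsec transform-set WEAK-SET esp-des ah-md5-hmac
```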

Experiment Setup

Fig 1: Network Topology

The setup for the experiment consists of three virtual machines: Kali Linux as the attacker, Ubuntu 19.0 running the Snort intrusion detection system, and Ubuntu 18.0 as the victim machine. The network is configured with default routes, and an IPsec site-to-site VPN is established with two different combinations of algorithms. Two different router operating systems are tested.

Figures and Results

Scenario 1: c2801-advsecurityk9 with esp-aes 256 esp-sha-hmac

Attacks        Kali (time)  Snort (time)  Difference  Adjustment  Actual difference
Ping           13:18:42     13:18:30      -00:00:12   -2 seconds  00:02:12
Nmap scan      15:10:00     15:10:49      00:00:49                00:00:49
SQL injection  13:08:01     13:11:31      00:03:30                00:03:30

Scenario 1, Part 2: c2801-advsecurityk9 with esp-des ah-md5-hmac

Attacks        Kali (time)  Snort (time)  Difference  Adjustment  Actual difference
Ping           18:44:35     18:44:19      -00:00:16   -2 seconds  00:02:16
Nmap scan      18:51:00     18:51:55      00:00:55                00:00:55
SQL injection  19:17:44     19:17:44      00:00:00                00:00:00

As the results of Scenario 1 show, the ping and the Nmap scan are barely affected by the algorithm used: in both parts the ping takes just over 2 seconds to be recognised by Snort, while the Nmap scan remains just under 1 second in both cases. The biggest difference appears in the SQL injection test. With the more secure encryption and hash algorithms, that attack takes 00:03:30 to be picked up by the IDS, whereas with the less secure esp-des ah-md5-hmac configuration detection is instant. This shows that for the SQL injection attack specifically, the algorithms used do matter, as 3 seconds is quite a significant time difference. Even so, it would not be worth a company reconfiguring its VPN simply to gain an extra 3 seconds.

Scenario 2: c2801-advipservicesk9 with esp-aes 256 esp-sha-hmac

Attacks        Kali (time)  Snort (time)  Difference  Adjustment  Actual difference
Ping           00:33:58     00:34:08      00:00:10                00:00:10
Nmap scan      00:31:00     00:31:29      00:00:29                00:00:29
SQL injection  00:23:58     00:24:01      00:00:03                00:00:03

Scenario 2, Part 2: c2801-advipservicesk9 with esp-des ah-md5-hmac

Attacks        Kali (time)  Snort (time)  Difference  Adjustment  Actual difference
Ping           11:56:53     11:56:52      -00:00:01               -00:00:01
Nmap scan      11:52:00     11:52:56      00:00:56                00:00:56
SQL injection  11:45:40     11:45:39      -00:00:01               -00:00:01

Comparing the results of Scenario 2 with those of Scenario 1, we begin to notice the impact of changing the OS from the security-dedicated image to one that merely supports security features. This could be because the security version has additional filters that introduce a delay, or temporarily block the attack for a few seconds before it manages to get through. On the less secure OS, all attacks are identified in under one second.

Comparison between both scenarios and parts

Attacks        Scenario 1 (aes 256)  Scenario 1 (md5 & des)  Scenario 2 (aes 256)  Scenario 2 (md5 & des)
Ping           00:02:12              00:02:16                00:00:10              -00:00:01
Nmap scan      00:00:49              00:00:55                00:00:29              00:00:56
SQL injection  00:03:30              00:00:00                00:00:03              -00:00:01

Conclusion

For the ping and Nmap attacks, the algorithm implemented has close to no impact at all. The operating system seems to influence detection more: on the less secure version the attacks are detected almost immediately, whereas the security-dedicated version tends to hold the attack back for a couple of seconds before it can be detected. By contrast, the SQL injection attack is hugely affected by the combination of the security OS and the more enhanced security algorithms. The results obtained in the experiment suggest that both the algorithm choice and the operating system can have an impact on attack detection time. However, the differences in times are not significant enough to be considered by a business or a school: it is not worth changing a VPN configuration and implementing a new OS on your routers to gain an extra 3 seconds of protection. A much better option would be to implement a good intrusion prevention and intrusion detection system.
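The "Difference" values reported in the results are simply the gap between the attack timestamp recorded on Kali and the alert timestamp recorded by Snort. A small helper (the function name is my own) reproduces the calculation:

```python
from datetime import datetime, timedelta

def detection_delay(kali_time: str, snort_time: str) -> timedelta:
    """Return Snort's alert time minus the attack launch time (HH:MM:SS)."""
    fmt = "%H:%M:%S"
    return datetime.strptime(snort_time, fmt) - datetime.strptime(kali_time, fmt)

# SQL injection in Scenario 1: launched 13:08:01, alerted 13:11:31
print(detection_delay("13:08:01", "13:11:31"))  # 0:03:30
```

Negative results (Snort timestamp earlier than Kali's) indicate clock skew between the machines, which is what the "Adjustment" column in the tables corrects for.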

References

Loy, M., 2015. Cisco Next Generation Encryption and Postquantum Cryptography. [Blog] Cisco Blogs. Available at: <https://blogs.cisco.com/security/cisco-next-generation-encryption-and-postquantum-cryptography> [Accessed 9 May 2020].
Hoffman, C., 2019. What Is A VPN, And Why Would I Need One? [online] How-To Geek. Available at: <https://www.howtogeek.com/133680/htg-explains-what-is-a-vpn/> [Accessed 13 May 2020].

227


STUDENT BIOGRAPHY

Paul Stuart Course: BSc (Hons) Computer Network Management and Design Implementation and configuration of VoIP over VSAT From Alexander Graham Bell in the 1800s to the present day, phones have come a long way – to the point you can make a call to the other side of the world in seconds. With the added implementation of VoIP, this process is made even more seamless and accessible to all. Building a voice solution on top of your existing network may add more traffic, but there are multiple techniques and protocols which can be used to help voice perform as well as possible. Voice is a crucial part of any business and operation, and is essential to keep tasks running as smoothly and effectively as possible. With the developments of VSAT and LTE technologies, calls can also be made from and to pretty much anywhere, on or offshore. In this project I evaluate the options for a network with a voice solution in the North Sea.

228


Implementation and configuration of VOIP over VSAT Analysis of methods of deployment of VOIP Paul Stuart & Omar Al Kadri

Introduction

From Alexander Graham Bell in the 1800s to the present day, phones have come a long way – to the point you can make a call to the other side of the world in seconds.

With the added implementation of VoIP, this process is made even more seamless and accessible to all. Building a voice solution on top of your existing network may add more traffic, but there are multiple techniques and protocols which can be used to help voice perform as well as possible. Voice is a crucial part of any business and operation, and is essential to keep tasks running as smoothly and effectively as possible. With the developments of VSAT and LTE technologies, calls can also be made from and to pretty much anywhere, on or offshore. In this project I evaluate the options for a network with a voice solution in the North Sea.

Project Aim

The aim of the project is to determine the best form of communications for utilising voice in a network in remote locations where the only links available are LTE or VSAT, such as offshore and remote inland locations like the North Sea.

Methods

The methods used here incorporate a softphone on a VM with shoreside terrestrial latency, and a softphone on another VM with variable latency and jitter: one configured with LTE latency and jitter, and one with VSAT latency and jitter. Both are tested to see which offers the best performance.

Figures and Results

Using the tools in VMware, it is possible to tune each VM to have limited bandwidth and increased latency. LTE has much lower latency but is more susceptible to jitter, which is detrimental to voice services. There are also many more external factors which affect LTE services, especially in the summer months, such as ducting.

LTE is also known to cause increased packet loss, especially under demand, even when QoS is put in place to reduce the impact of this utilisation. LTE is also limited in where it can be used, especially offshore: only certain areas (such as the North Sea) have LTE coverage. VSAT is available pretty much anywhere depending on elevation, except far northern and southern areas, as the satellites being used orbit the equator. However, more and more low-earth-orbit satellites are being launched to serve specific areas, increasing the availability of VSAT. VSAT is also susceptible to issues depending on the band used: Ku-band antennas are more susceptible to rain and weather fade, whereas C-band is more susceptible to interference in certain areas.

Conclusion

The conclusion I have drawn from this is that whereas VSAT has increased latency, it is much more stable than LTE and allows for smoother phone calls, as constraints such as jitter and packet loss are kept to a minimum.

VSAT may also be more susceptible to issues in rainier parts of the world, so the choice between VSAT and LTE will depend on coverage in that area. For the North Sea both options are available, but given issues such as ducting, which causes reduced signal and packet loss, VSAT is the much more stable option. Although there is occasional heavy rain, this can be mitigated by using C-band, which only suffers in extremely heavy rain.

Acknowledgments

I would like to give thanks to the team over at Speedcast for their continued support in learning VSAT and LTE technologies. They have also been instrumental to my continued learning and in expanding my knowledge of networking, whether terrestrial issues or LAN issues onboard vessels.

I would also like to thank Omar Al Kadri for his continued support and advice throughout the progress of my project.

References

Voipinsights.com. Voice Over IP History. [online] Available at: http://www.voipinsights.com/voip_history.html [Accessed 18 Feb. 2020].
Offshore Technology, 2015. Bringing offshore into 4G connectivity. [online] Available at: https://www.offshore-technology.com/features/featurebringing-offshore-into-4gconnectivity-4496524/ [Accessed 20 Feb. 2020].
Tampnet.com, n.d. Tampnet • Coverage. [online] Available at: http://www.tampnet.com/coverage/ [Accessed 20 Feb. 2020].
Getvoip.com, 2015. Acceptable Jitter & Latency for VoIP: Everything You Need to Know | GetVoIP. [online] Available at: https://getvoip.com/blog/2018/12/20/acceptable-jitterlatency/ [Accessed 20 Feb. 2020].

229


STUDENT BIOGRAPHY

Tibor Varga Course: BSc (Hons) Computer Network Management and Design Low-Cost Real-time Wireless Intrusion Detection System The usage of wireless networks in today's society has increased tenfold compared to even a decade ago (Perspectives, E. and Report, C., 2018). While this is a welcome change, it has also given a wealth of opportunity for malicious individuals to hack into people's mobile devices in areas where public Wi-Fi is available. The availability and low cost of access points and devices that can create hotspots have made it particularly easy to attach oneself to a corporate or public Wi-Fi network. Therefore, the monitoring and detection of rogue equipment in an enclosed space like a Starbucks coffee bar, or in a corporate office, is essential in order to ensure the safe use of these easily accessible networks. This project will focus on the detection of these rogue devices using carefully placed and concealed sensors that connect to a central server, which then passively analyses incoming packets from all devices in real time, providing information such as hidden BSSID, IP range, signal strength, noise levels, time of detection, brand, and other useful details which can then be used to physically locate and remove the offending device.

230


Low-Cost Real-time Wireless Intrusion Detection System Tibor Varga & Omar Al Kadri

Introduction The usage of wireless networks in today's society has increased tenfold compared to even a decade ago (Perspectives, E. and Report, C., 2018). While this is a welcome change, it has also given a wealth of opportunity for malicious individuals to hack into people's mobile devices in areas where public Wi-Fi is available. The availability and low cost of access points and devices that can create hotspots have made it particularly easy to attach oneself to a corporate or public Wi-Fi network. Therefore, the monitoring and detection of rogue equipment in an enclosed space like a Starbucks coffee bar, or in a corporate office, is essential in order to ensure the safe use of these easily accessible networks. This project will focus on the detection of these rogue devices using carefully placed and concealed sensors that connect to a central server, which then passively analyses incoming packets from all devices in real time, providing information such as hidden BSSID, IP range, signal strength, noise levels, time of detection, brand, and other useful details which can then be used to physically locate and remove the offending device.

Figures and Results

After various tests, I can determine which networks are new and not part of the initial networks by filtering the known BSSIDs out of the list; this gives an even shorter and clearer list. However, the positioning can't be determined by using Kismet alone: Kismet will only show the details from the drone that has recorded the strongest signal power out of all the sensors. I used Kali Linux as the operating system for both drone and server, as it comes with the Airodump-ng packet capture tool, which is used to show the power levels recorded by every sensor at the same time. The testing was done on a Raspberry Pi Model 2.

Project Aim The aim of this project is to produce a system that is capable of detecting and listing all wireless signals in real time within the basic coverage area of the protected wireless network, while remaining cost-efficient and inconspicuous. The system should also provide enough information to allow the authorities to find the offending device(s).

Methods

The packet sniffer Kismet is used to analyse transmitted packets coming from detected devices. This is a non-intrusive, passive way of analysing traffic that enables the server to monitor every connected device's transmit power and noise levels, BSSIDs, packet and data rates, as well as the devices that are connected to each hotspot. Triangulation is used to determine the position of the offending device; this can be done by having at least four sensors in the drone/server topology (Kwok and Lau, 2007).
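One common way to turn each sensor's received signal strength into a rough distance for the triangulation step is the log-distance path-loss model. The sketch below is a minimal illustration; the 1 m reference power and path-loss exponent are assumed values that would need calibrating for each environment and adapter, not figures from this project:

```python
def rssi_to_distance(rssi_dbm, ref_power_at_1m=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model: d = 10 ** ((P_1m - RSSI) / (10 * n)).

    ref_power_at_1m (dBm measured 1 m from the transmitter) and the
    path-loss exponent n are environment-specific assumptions here.
    """
    return 10 ** ((ref_power_at_1m - rssi_dbm) / (10 * path_loss_exp))

# A reading equal to the 1 m reference power implies roughly 1 m distance
print(rssi_to_distance(-40.0))  # 1.0
# Weaker signals map to larger distances, feeding the triangulation step
print(round(rssi_to_distance(-60.0), 1))  # 10.0
```

With one distance estimate per drone, the intersection of the resulting circles (four or more sensors, per Kwok and Lau) gives the approximate position of the rogue device.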

Conclusion

This project has shown that it is possible to create a WIDS (Wireless Intrusion Detection System) for a fraction of what a complete Cisco WIDS suite would cost, and with that alone a wireless network's security can be improved. However, the system could be even better with a stronger wireless adapter and a newer version of Kismet than the 2016-07-R1 release I used. The most up-to-date version is compatible with newer models of Raspberry Pi and also provides additional functions such as "kismetdb statistics" (Kismet, 2020), added in the 2019-04 version, which offers a way to log and store the statistics captured by the monitor-mode cards. With the current setup I could see a relationship between how far away the rogue AP is and how strong a signal each sensor records, as the figure above shows, but the results are not very accurate. With more time for development and research, I am confident this system could be improved immensely, and with over 628 million public Wi-Fi hotspots expected by 2023, this kind of research is definitely needed (Perspectives, E. and Report, C., 2018).

Once logged into each drone (at least four are required (Kwok and Lau, 2007)), triangulation can be used to localise the desired device. Localising devices with different antenna powers is challenging, as it is harder to establish a consistent frame of reference. In my experiment I used a very weak wireless adapter (18 dBm maximum output power), and after my testing I would recommend something much stronger for more reliable and realistic results.

Acknowledgments

Special thanks to Ian Harris and Omar Al Kadri for answering any questions related to my experiment and offering guidance whenever needed, especially in these uncertain times with the COVID-19 pandemic. I would also like to thank the System Support Team for providing the equipment, which spared me from buying my own and saved time as well.

References

Kismet. 2020. Kismetdb Statistics. [online] Available at: <https://www.kismetwireless.net/docs/readme/kismetdb_statistics/> [Accessed 12 May 2020].

Perspectives, E. and Report, C., 2018. Cisco Annual Internet Report (2018–2023) White Paper. [online] Cisco. Available at: <https://www.cisco.com/c/en/us/solutions/collateral/executive-perspectives/annual-internet-report/white-paper-c11-741490.html> [Accessed 12 May 2020].

Kwok, Y. and Lau, V., 2007. Wireless Internet and Mobile Computing. pp. 538-539.

231


STUDENT BIOGRAPHY

Dawid Wilczynski Course: BSc (Hons) Computer Network Management and Design Cloud Computing Platforms Comparison Computing clouds are multiple data centres built from computer storage resources connected over a network; what makes them extraordinary is that all resources are virtualised into a single pool that is organised automatically. Cloud computing is growing strongly and will continue to grow, especially as it keeps providing more job opportunities within the computing industry.

232


Cloud Computing Platforms Comparison Dawid Wilczynski & Harsha Kalutarage

Introduction

Computing clouds are multiple data centres built from computer storage resources connected over a network; what makes them extraordinary is that all resources are virtualised into a single pool that is organised automatically. Cloud computing is growing strongly and will continue to grow, especially as it keeps providing more job opportunities within the computing industry.

Project Aim

The aim of this project is to help customers choose the cloud that is most suitable for their needs, by pointing out some of the aspects businesses should consider when looking for a provider to move their infrastructure to the cloud.

Methods

Two virtual machines were created on cloud environments, and remote access to each virtual machine was tested with Windows Remote Desktop and Putty. Component testing was also performed with software such as CrystalDiskMark and SpeedSmart.

Figures and Results

[Figure: Bandwidth speed test — download and upload speeds (Mb/s) for Provider 1 and Provider 2, measured with SpeedSmart; recorded values ranged from roughly 360 to 800 Mb/s.]

The datagram above was produced using the SpeedSmart test and shows that cloud computing companies can provide high-speed internet access, and that upload speed is not always slower than download speed.

[Figure: CrystalDiskMark benchmark results — read speeds (Mb/s) for SSD and HDD drives on Provider 1 and Provider 2 across the 4KiB Q1T1, 4KiB Q32T1, 4KiB Q8T8 and Seq Q32T1 tests.]

[Figure: CrystalDiskMark benchmark results — write speeds (Mb/s) for SSD and HDD drives on Provider 1 and Provider 2 across the same tests.]

The datagrams above were produced using CrystalDiskMark, free software that measures read and write performance. The figures show that, within virtualisation, there may not be much difference between using an SSD drive and an HDD drive.

Conclusion

The project successfully demonstrates how computer components can be tested within the cloud environment and presents the most common ways of connecting to a virtual machine using Remote Desktop and Putty. The findings also show advantages of using virtual machines, such as simple maintenance and access, where multiple different OS environments can exist at the same time, remotely connected to each other.

Acknowledgments

I would like to express my sincere appreciation, particularly to my supervisor Harsha Kalutarage, for help and support throughout this project, and to Ian Harris, whose advice and guidance helped me find the purpose and direction for this project. Additionally, I am extremely grateful to my fiancée for her support and understanding during this difficult time.

References

Chiark.greenend.org.uk. 2020. Putty: A Free SSH And Telnet Client. [online] Available at: <https://www.chiark.greenend.org.uk/~sgtatham/putty/> [Accessed 28 April 2020].

Crystal Dew World [en]. 2020. Crystaldiskmark. [online] Available at: <https://crystalmark.info/en/software/crystaldiskmark/> [Accessed 28 April 2020].

Speedsmart.net. 2020. Speedsmart - HTML5 Internet Speed Test. [online] Available at: <https://support.microsoft.com/en-gb/help/4028379/windows-10-how-to-use-remote-desktop> [Accessed 28 April 2020].

Support.microsoft.com. 2020. Windows 10: How To Use Remote Desktop. [online] Available at: <https://support.microsoft.com/en-gb/help/4028379/windows-10-how-to-use-remote-desktop> [Accessed 28 April 2020].
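A benchmark comparison like the CrystalDiskMark one can be summarised programmatically by computing the relative advantage of one drive type over the other for each test. The numbers in this sketch are placeholder figures, not the project's measured results.

```python
# Sketch: summarising CrystalDiskMark-style results to compare SSD vs HDD
# instances. All figures below are illustrative placeholders.

benchmarks = {
    # test name: (ssd_mb_s, hdd_mb_s)
    "Seq Q32T1 read": (104.2, 66.2),
    "4KiB Q8T8 read": (26.1, 25.9),
    "4KiB Q1T1 read": (13.5, 12.9),
}

def speedup(ssd, hdd):
    """Percentage advantage of the SSD instance over the HDD instance."""
    return (ssd - hdd) / hdd * 100

report = {name: round(speedup(s, h), 1) for name, (s, h) in benchmarks.items()}
```

A summary like this makes the project's finding easy to check at a glance: large sequential reads may still favour the SSD, while the small random-access tests show almost no difference under virtualisation.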

Computer Network Management and Design

233

