School of Computing
2020 SHOWCASE
Contents
BSc (Hons) Computer Science
BSc (Hons) Computing Application Software Development
BSc (Hons) Digital Media
BSc (Hons) Computer Network Management and Design
www.rgu.ac.uk/computing · 01224 262700 · CSDM-Enquiries@rgu.ac.uk · ComputingRGU
School of Computing
Introduction
We are shaping the future with industry-sought skills in computer science, digital media, computer networks, cyber security and intelligent information systems. Our courses are innovative, practical and designed to meet current and future industry needs.

The computing and digital media industries have always been rapidly changing environments. In recent years, the rise of artificial intelligence, data analytics and processing technologies has only increased the rate of change. These established growth areas have introduced new development practices and approaches that require graduates to be resilient and adaptable to disruptive changes in the industry, qualities which are emphasised in our range of courses.

At the School of Computing, we invest in the student experience, allocating a personal tutor to help support and guide our students throughout all years of study. Our open-door policy provides a real sense of community in our school, where we invest time in our students. We support students to form their own ventures and develop innovative skills to expand their horizons beyond the taught curriculum. Recent students have launched their own businesses, undertaken commercial projects in parallel with their studies, published research papers at international conferences, and won national competitions.

Our school has an excellent reputation for its innovative and practical approaches to teaching and learning. For generations it has produced qualified professionals across a broad spectrum of technical careers, with graduates in high-profile, international roles.

Watch our School film and see the School of Computing in action: www.rgu.ac.uk/compfilm

Do you want to learn more about a student's project or have any questions? Get in touch with us at SoC: Enquiries@rgu.ac.uk

Interested in studying a Computing course at RGU? Be part of the movement that helps society move forward and drives innovation across all industries. Check out our courses: www.rgu.ac.uk/computing
Head of School
Welcome
Dear Reader,

Welcome to the School of Computing showcase! Like everything else this year, we have had to rearrange our usual on-campus show and move online.

Our degree show is always a hotly anticipated part of the academic year. For our students, it enables them to show off the work they undertook for their honours project. Each student's honours project is a unique piece of work that they have researched, designed, developed and documented over the final year of their course. Each project represents the culmination of the student's degree, and our degree show acts as an arena where they can demonstrate their pride in what they have created. For our industry representatives, it provides an opportunity to meet and discuss ideas with our students and to see the kind of work the school does at an undergraduate level.

The projects detailed in this brochure highlight the range of work in which the school is involved. Topics covered by our undergraduate courses include Computer Science, Software Engineering, Multimedia Development, Graphics, Animation, Cybersecurity, and Networking.

This year is a very special year for us: as of 1 August 2020, we have changed the name of the school to reflect the diversity of what we do. Computing is at the core of all our activities, and our new name, the School of Computing, encompasses the breadth of research and teaching that we do.

We are incredibly proud of our students and the work they have created this year. We do hope you enjoy the brochure, and please get in contact if you would like more detail about any of the projects or the school.

We look forward to welcoming you on campus soon.

Regards,
John Isaacs
Head of School
School of Computing
BSc (Hons) Computer Science
STUDENT BIOGRAPHY
Oliver Aarnikoivu
Course: BSc (Hons) Computer Science
Detecting Emotion from Text using Deep Learning

Currently, the vast majority of research on sentiment analysis has focused on classifying text as either "positive" or "negative". Evidently, if we can move from a binary classification task to analysing and detecting distinct emotions, this could lead to advancements in various fields. However, it is difficult to gain an understanding of how we define "emotion" due to the complexity of human behaviour. Emotion can be expressed in many different ways, such as facial expressions, gestures, speech and text, and even through less obvious indicators such as heart rate, skin clamminess, temperature and respiration velocity. Nevertheless, since 1979 an illustration provided by the psychologist Robert Plutchik (Plutchik 1979) has been widely used to demonstrate how different emotions can blend into one another, creating new ones. These emotions are joy, trust, fear, surprise, sadness, disgust, anger and anticipation. If we can agree that emotions can be categorised into these distinct labels, it raises the question of whether it is possible to convey these emotions through text.
Detecting emotion from text using Deep Learning Oliver Aarnikoivu & Eyad Elyan
Introduction

Currently, the vast majority of research on sentiment analysis has focused on classifying text as either "positive" or "negative". Evidently, if we can move from a binary classification task to analysing and detecting distinct emotions, this could lead to advancements in various fields. However, it is difficult to gain an understanding of how we define "emotion" due to the complexity of human behaviour. Emotion can be expressed in many different ways, such as facial expressions, gestures, speech and text, and even through less obvious indicators such as heart rate, skin clamminess, temperature and respiration velocity. Nevertheless, since 1979 an illustration provided by the psychologist Robert Plutchik has been widely used to demonstrate how different emotions can blend into one another, creating new ones (Figure 1: Plutchik's Wheel of Emotions (Plutchik 1979)). These emotions are joy, trust, fear, surprise, sadness, disgust, anger and anticipation. If we can agree that emotions can be categorised into these distinct labels, it raises the question of whether it is possible to convey these emotions through text.
Project Aim
The aim of this project is to assess the ability of different deep learning models to classify a text as having one or more emotions, for the eight (Plutchik 1979) categories plus optimism, pessimism and love. The model should be able to generalise adequately to unseen data.
Methods

This project uses the SemEval Task 1: Affect in Tweets E-c dataset, which consists of 6838 training examples, 886 validation examples and 3259 testing examples. The experiment is tested using both a Text CNN (Convolutional Neural Network) and an Attention LSTM (Long Short-Term Memory network), proposed by (Kim 2014) and (Zhou et al. 2016) respectively. Due to the limited amount of training data, we make use of transfer learning, such that the embedding layer of both models is initialised using both pre-trained GloVe (Global Vectors for Word Representation) vectors and BERT (Bidirectional Encoder Representations from Transformers) embeddings generated from a pre-trained BERT transformer model. While the two chosen model architectures are considerably different, they were both selected for their ability to identify words within a sentence regardless of position.
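No code accompanies the poster; purely as an illustrative aside, a minimal PyTorch sketch of the GloVe initialisation step described above might look as follows (the file path, toy vocabulary and dimensions are placeholders, not the project's actual code):

```python
import numpy as np
import torch
import torch.nn as nn

def glove_matrix(path, vocab, dim=100):
    """Build an embedding matrix for `vocab` from a GloVe text file;
    words missing from GloVe keep a small random initialisation."""
    matrix = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab:
                matrix[vocab[parts[0]]] = np.asarray(parts[1:], dtype="float32")
    return torch.from_numpy(matrix)

vocab = {"<pad>": 0, "joy": 1, "dread": 2}                 # toy vocabulary
weights = glove_matrix("glove.6B.100d.txt", vocab)          # illustrative path
embedding = nn.Embedding.from_pretrained(weights, freeze=False, padding_idx=0)
```

Both the Text CNN and the Attention LSTM would then consume the output of such a shared embedding layer.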
Figures and Results

Figure 2: Attention LSTM model architecture (Zhou et al. 2016)
Figure 3: Text CNN model architecture (Kim 2014)
Figure 4: Example of attention visualisation for emotion classification

Figure 4 displays the words that the attention model considers most significant with regard to its predictions. The colour intensity and word size correspond to the weight given to each word. We can see that the model is successfully able to "place attention" on words that correlate with the predicted emotions. This suggests that the model does an adequate job, taking into account the label imbalance of the dataset.
Table 1: Performance comparison of Attention LSTM and Text CNN.
Table 2: Attention LSTM vs. Text CNN on Plutchik categories by F1 score.
Table 3: SemEval 2018 Task 1: E-c (multi-label emotion classification) English leaderboard, snippet of the top 10 results.

Based on our results, it is evident that the Attention LSTM performs better for emotion detection. The Attention LSTM using BERT embeddings outperformed both Text CNN models as well as the Attention LSTM using GloVe embeddings. Moreover, for both the Attention LSTM and the Text CNN, the model using the embeddings produced by the pre-trained BERT model gave better results. This suggests that the contextualised embeddings produced by BERT may be superior to the non-context-dependent vectors produced by GloVe. Furthermore, as shown in Table 3, in terms of the Micro F1 score, the results achieved by the Attention LSTM model using BERT embeddings are comparable to the top 10 official SemEval Task 1: E-c competition results.

Conclusion

This project compared an Attention-based bidirectional LSTM to a Text CNN using transfer learning, such that the embedding layer is initialised with pre-trained GloVe and BERT embeddings. The results achieved by the Attention LSTM model using BERT embeddings proved comparable to the current top 10 official SemEval Task 1: E-c (multi-label emotion classification) competition results. The results displayed in Table 2 indicate that both models have a difficult time generalising to categories with only a few training examples, whereas categories with a sufficient amount of training data perform well. This suggests that the labels with worse class imbalance would benefit from a larger dataset.
Acknowledgments
I would like to give a special thank you to my honours supervisor Dr. Eyad Elyan, whose support and guidance throughout this project has been invaluable.
References
Kim, Y. (2014), Convolutional neural networks for sentence classification, in 'Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)', Association for Computational Linguistics, Doha, Qatar, pp. 1746-1751. URL: https://www.aclweb.org/anthology/D14-1181
Plutchik, R. (1979), Emotions: A general psychoevolutionary theory.
Zhou, P., Shi, W., Tian, J., Qi, Z., Li, B., Hao, H. & Xu, B. (2016), Attention-based bidirectional long short-term memory networks for relation classification, in 'ACL'.
STUDENT BIOGRAPHY
Marcus Douglas
Course: BSc (Hons) Computer Science
Investigating the Influence of Soundtrack on Player Experience in Video Games

An often somewhat underappreciated feature of video games today is the quality of the audio and soundtrack implemented into the experience. Soundtrack is a feature that has come to be expected, to the point where its value is perhaps overlooked. With much of the gaming industry's development focus directed toward graphical innovations, this project looks to place a spotlight on the importance of soundtrack in video games. A video game will be designed for this project, taking inspiration from Hideo Kojima's most recent release, Death Stranding. This game received many negative reviews due to the tedious nature of its gameplay; for those who did enjoy the game, however, it seems the soundtrack was a rather large part of the enjoyment, with many in fact wishing the game's soundtrack were available to listen to again through an in-game playlist feature. This project will therefore consider the ways in which soundtrack is presented in games.
Investigating the Influence of Soundtrack on Player Experience in Video Games
Student: Marcus Douglas. Supervisor: Carrie Morris
Introduction

An often somewhat underappreciated feature of video games today is the quality of the audio and soundtrack implemented into the experience. Soundtrack is a feature that has come to be expected, to the point where its value is perhaps overlooked. With much of the gaming industry's development focus directed toward graphical innovations, this project looks to place a spotlight on the importance of soundtrack in video games. A video game will be designed for this project, taking inspiration from Hideo Kojima's most recent release, Death Stranding. This game received many negative reviews due to the tedious nature of its gameplay; for those who did enjoy the game, however, it seems the soundtrack was a rather large part of the enjoyment, with many in fact wishing the game's soundtrack were available to listen to again through an in-game playlist feature. This project will therefore consider the ways in which soundtrack is presented in games.
Project Aim

This project seeks to gain insight into the influence of soundtrack implementation on the player experience and the level to which it affects player engagement and enjoyment. To do this, a video game will be created with varying soundtrack implementations, which will be tested by groups of participants for analysis.
Design Methods

The video game was developed entirely within the Unity game engine. All assets, such as textures, character models and other environment objects, were imported from the Unity Asset Store. Unity had all the appropriate built-in tools to allow for the implementation and mixing of audio.
Unity Implementation
Conclusion
All the imported assets were chosen with care to ensure thematic consistency with Death Stranding, from which the design of this game takes inspiration. Extra terrain-sculpting tools were also imported from the Asset Store to ease the creation of a large, dramatic landscape for the player to explore. The player can manoeuvre through the game world with a humanoid character in the third-person perspective. Some simple cube-shaped objects were created to serve as collectable sub- and main objectives for the player to complete. Three different builds of the game were created: one with soundtrack scripted in response to in-game events, one with soundtrack implemented as a playlist feature, and one with no soundtrack. Where the soundtrack was implemented, the songs were selected from Death Stranding's soundtrack, maintaining the thematic consistency. To help gain a better understanding of how the user played the game, timers were scripted to run in the background, with the results shown when the main objective was completed.
Testing
To test the video game and achieve the project aim, three groups of participants were required to play a different version of the game each and fill out a questionnaire. While testing is still underway, the generally anticipated result is that participants who play versions of the game with a soundtrack will gather more of the optional objectives. Furthermore, the questionnaire responses are expected to show that participants find versions with a soundtrack more enjoyable, and that a soundtrack implemented to play at scripted points is preferable to one presented as a playlist.
Many challenges have arisen throughout the course of this project, and many adaptations have been necessary. The main challenge has been to build and test a video game of the scale that this project would ideally require. With more time and expertise, a more refined video game may have been created which did not focus quite so obviously on soundtrack. Were there more time, it may also have been more ideal to have participants revisit the video game to see how their engagement dropped or increased over multiple playthroughs. To conclude, video games have far more complexity to them, and to how we enjoy them, than simply a good soundtrack. Video games are art, and art is subjective.
Future Work There are many avenues for future work in this area. Further investigation could be made into what happens to player engagement when for example the soundtrack does not match the context of what is happening on screen. With more time and a video game demo at the ready, participants could be exposed to the demo multiple times and their engagement over each exposure compared. The experimental design choice could even be changed to repeated measures. This would mean participants would experience all versions of the game and their engagement could be analysed across versions; for example, would a participant who played with no soundtrack initially be as willing (if not more) to explore the game more when music is playing?
STUDENT BIOGRAPHY
Fatima Ghanduri
Course: BSc (Hons) Computer Science
Determine a Robust Machine Learning approach for Credit Risk Analysis

In the current financial climate, borrowing methods like mortgages and credit support are widely used. Machine learning algorithms can help lending organisations find the credit risk of a borrower using the most efficient method. This project will benefit banking institutions by providing a visualisation that distinguishes how credit risk is affected by the occupation of the borrowing party. Machine learning methods provide a better fit for the non-linear relationships between the explanatory variables and default risk, and these techniques have been found to greatly improve the accuracy of credit-risk prediction.
Determine a Robust Machine Learning approach for Credit Risk Analysis
Fatima Ghanduri & Nirmalie Wiratunga
Introduction
In the current financial climate, borrowing methods like mortgages and credit support are widely used. Machine learning algorithms can help lending organisations find the credit risk of a borrower using the most efficient method. This project will benefit banking institutions by providing a visualisation that distinguishes how credit risk is affected by the occupation of the borrowing party. Machine learning methods provide a better fit for the non-linear relationships between the explanatory variables and default risk, and these techniques have been found to greatly improve the accuracy of credit-risk prediction.
Project Aim
Finding a robust machine learning algorithm for credit risk. Main techniques:
- Processed entities are passed through the machine and deep learning models, and the resulting data is visualised to determine the efficiency and viability of these models.
- These visualisation patterns form the results of this project.
Methodology

The data collected for this research consisted of large clusters, and pre-processing needed to be implemented before modelling:
- Lexical analysis was used to create tokens of jobs using Levenshtein comparison.
- Similar industrial jobs were then placed in buckets using Locality-Sensitive Hashing (LSH).
- To achieve LSH, shingling was used to convert the text to character k-grams.
- The Jaccard index was then implemented as a coefficient to find the similarity between sample sets from a database (see the sketch below).
- These industry buckets were processed through the models separately.
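As a rough illustration of the shingling and Jaccard steps named above, here is a minimal Python sketch; the value of k and the example job titles are assumptions, not taken from the project's data:

```python
def shingles(text, k=3):
    """Character k-grams ('shingles') of a job title."""
    text = text.lower()
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def jaccard(a, b):
    """Jaccard index |A n B| / |A u B|: 1.0 for identical shingle sets, 0.0 for disjoint."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb)

# Titles from the same industry share many shingles, so they score higher
# and land in the same LSH bucket.
print(jaccard("drilling engineer", "petroleum engineer"))
print(jaccard("drilling engineer", "retail assistant"))
```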
Evaluation and Results

Four model types were evaluated based on a Receiver Operating Characteristic (ROC) curve and a Precision-Recall Curve (PRC). The ROC is used in interpreting the risk factor as well as the efficiency and overall quality of the model; the PRC plots precision against recall at various thresholds.
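For illustration, curves of this kind can be computed with scikit-learn roughly as follows; the synthetic data here merely stands in for the credit dataset, and all parameter values are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_curve, roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for the credit data (default = minority class).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]             # predicted probability of default

print("AUC:", roc_auc_score(y_te, scores))           # area under the ROC curve
fpr, tpr, _ = roc_curve(y_te, scores)                # points for the ROC plot
prec, rec, _ = precision_recall_curve(y_te, scores)  # points for the PR plot
```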
1. Random Forest Model
The logistic linear AUC calculated for the RFM is 0.69, suggesting that there is no discrimination in the model and thus that the predicted credit risk is viable. However, this value is not high enough, meaning that the predicted values are mostly false positives; only a very small part of the predictions are true positives, making the model not very accurate.

2. Gradient Boosting
The Gradient Boosting logistic linear AUC value is 0.68, again giving a model with no discrimination, although the accuracy is lower than the RFM. A smaller share of the predictions fall in the true positive area, and a large majority are either false negatives or false positives. In addition, the PR curve shows the accuracy reducing steeply over the recall range. This model could be better represented if more data were available to train it.

3. Elastic Net
The best predictive model is the Elastic Net. Its logistic linear AUC value is 0.71, making the model non-discriminatory, with a larger percentage of the predictions being true positives. Its PR curve is also much less steep, showing that the recalled data has higher precision for a large portion of the predicted values. However, with a larger training set the model could become more accurate as more training is implemented.

4. Deep Learning
The final method was a neural network consisting of three equal layers. Its logistic linear AUC is 0.53, making the model discriminatory to an extent. Although the predictive accuracy is 81%, the PR curve has a smaller true positive value, giving roughly an even chance that a predicted result is correct. This makes it the least viable model to use.
Conclusion
This project has successfully established how machine learning can be used to further efficiency in banking. Of the four models evaluated, the Elastic Net proved best in terms of the AUC and PR curves. The deep learning model was initially predicted to be the best model with no discrimination, due to its many layers, but the graphs above show that its values fall more often in the false negative than in the false positive. A larger training set and different input layers could make a large difference to the logistic linear value.
Acknowledgments

I would like to express my deep gratitude to Professor Nirmalie Wiratunga, my research supervisor, for her patient guidance, enthusiastic encouragement and useful critiques of this research work. I would also like to thank my parents for their constant support throughout my four years of university. Lastly, a very sincere appreciation to Lee Robbie, Mark Scott-Kiddie and Calum McGuire for their encouragement throughout my final year.
References

AI in Banking Report: How Robots are Automating Finance. (n.d.). Retrieved April 17, 2020, from https://amp.businessinsider.com/8-5-2018-ai-in-banking-and-payments-report-2018-8
Tel: 07473931385 · LinkedIn: linkedin.com/in/fatima-ghanduri · Email: fghanduri@protonmail.com
STUDENT BIOGRAPHY
Jehanzeb Mobarik
Course: BSc (Hons) Computer Science
Automatic Classification of Pneumonia Using Deep Learning

Chest X-rays are one of the most popular medical imaging techniques used to look for abnormalities within a patient (WHO, 2001). One of these abnormalities is pneumonia-infected lungs, which appear on the X-ray as obscure white spots. Radiologists diagnose chest X-rays; however, over the past few years there has been a drop in the number of radiologists, leading to increasing backlogs for the National Health Service (NHS). With the advent of public datasets and compute power, a real-time tool for diagnosing X-rays as showing either healthy or pneumonia-infected lungs is needed today more than ever.
Automatic Classification of Pneumonia Using Deep Learning Jehanzeb Mobarik & Dr Eyad Elyan
Introduction

Chest X-rays are one of the most popular medical imaging techniques used to look for abnormalities within a patient (WHO, 2001). One of these abnormalities is pneumonia-infected lungs, which appear on the X-ray as obscure white spots. Radiologists diagnose chest X-rays; however, over the past few years there has been a drop in the number of radiologists, leading to increasing backlogs for the National Health Service (NHS). With the advent of public datasets and compute power, a real-time tool for diagnosing X-rays as showing either healthy or pneumonia-infected lungs is needed today more than ever.
Project Aim

This project aims to implement a pneumonia-classification pipeline using various deep learning techniques. It also aims to apply techniques such as data augmentation, to deal with class imbalance, and transfer learning, to help with the challenges that come with a limited dataset.
Figures and Results

Figure 2: Confusion matrix before (left) and after (right) data augmentation
Figure 3: Example of data augmentation

By running several experiments and fine-tuning hyper-parameters, our initial model of 3 convolutional layers and 3 fully connected layers achieved an accuracy of 77% on the test set. However, the model suffered from low specificity and precision, as the dataset was imbalanced, with normal X-rays under-represented. From the confusion matrix in Figure 2, it can be observed that the model is highly biased towards the positive class, resulting in a high false positive count. To combat this under-representation, data augmentation is used to increase the number of normal X-rays via affine transformations. Figure 3 shows a case of augmentation where a single patient X-ray is rotated by various degrees to create new samples. Our model was then trained on the original and augmented images, which increased accuracy to 90%, with both precision and specificity improving. The confusion matrix on the right of Figure 2 shows the model evaluated on the test set producing fewer false positives and more false negatives.
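The poster does not name the library used for augmentation; as one hedged example, affine transformations of the kind described could be produced with torchvision along these lines (the file path and parameter values are illustrative):

```python
import torchvision.transforms as T
from PIL import Image

# Small rotations and shifts create plausible new 'normal' X-ray samples,
# boosting the under-represented class before training.
augment = T.Compose([
    T.RandomRotation(degrees=10),
    T.RandomAffine(degrees=0, translate=(0.05, 0.05)),
    T.Resize((224, 224)),
    T.ToTensor(),
])

xray = Image.open("normal_xray.png").convert("L")    # illustrative path
variants = [augment(xray) for _ in range(5)]          # five augmented copies
```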
Conclusion

In this project, we provided an overview of deep learning techniques applied to medical images to diagnose pneumonia in chest X-rays. Different CNN models were trained and tested on chest X-rays, where it was observed that novel methods such as transfer learning via the VGG16 model performed worse than a much simpler model with only 3 convolutional layers. We also demonstrated that techniques such as data augmentation to address class imbalance greatly helped in reducing the number of false positives. Ultimately this project showed there is a place for deep learning in healthcare to automate diagnosis. In future work we hope to build a generalised classification pipeline covering more than one pathology.
Acknowledgments

I would like to thank my supervisor, Dr Eyad Elyan, for his supervision during this project. Without his guidance and encouragement this project would not have been possible. I would also like to thank my family for their support during this project.
Methods

Figure 1: Methods of deep learning

The dataset contained various sizes of images, due to the different devices used for scanning, which proved challenging, since all images need to be scaled to the same size in order to classify them with a Convolutional Neural Network (CNN). Different methods of using CNNs are illustrated in Figure 1. The CNN algorithm is one of the well-known deep learning techniques used to extract features automatically, without human assistance (Krizhevsky et al., 2012).
Figure 4: VGG16 Architecture
Table 1: Summary of Algorithm Performance
A common approach to overcoming limited dataset size and long training time is to utilise transfer learning. This approach allowed us to use a pre-trained VGG16 model, shown in Figure 4, whose convolutional layers are trained to extract low- to high-level features of an image. We adapted the VGG16 model to work on X-ray images by freezing the convolutional layers and training the fully connected layers, which lets us take advantage of VGG16's feature extraction without training from scratch (Simonyan et al., 2015).
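The framework is not specified on the poster; a minimal Keras sketch of this freeze-and-retrain approach, under the assumption of 224x224 RGB inputs, might look like:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

# Pre-trained convolutional base: its ImageNet features are reused as-is.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # freeze the convolutional layers

# Only this small classifier head is trained on the X-ray data.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),   # pneumonia vs normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```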
We evaluated the performance of each algorithm on accuracy and F1-score, which is the harmonic mean of precision and recall. Three different algorithms were compared, where it was noted that transfer learning produced sub-optimal scores compared to a smaller network. Table 1 shows a CNN without data augmentation producing higher accuracy and F1-score than the transfer learning approach. Finally, the same CNN model with data augmentation outperformed all other algorithms on accuracy and F1-score.
References

Simonyan, K. and Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Krizhevsky, A., Sutskever, I. and Hinton, G.E., 2012. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (pp. 1097-1105).
WHO, 2001. Standardization of interpretation of chest radiographs for the diagnosis of pneumonia in children.
More Information Email: j.mobarik@rgu.ac.uk
STUDENT BIOGRAPHY
Zander Orvis
Course: BSc (Hons) Computer Science
Utilising Neural Networks to Create a Self-Driving Car

Self-driving cars have been a subject of fascination for decades and have recently seen a massive boost in interest (Anderson, et al., 2014). However, there are lofty expectations set for self-driving cars, both from the public and from investors, leading to rapid, aggressive testing that has caused several fatalities (Lyon, 2019). Currently, self-driving vehicles utilise neural networks for their control systems, and understanding how these work highlights why the technology is still limited and why self-driving cars are still not widespread.
Utilising Neural Networks to Create a Self-Driving Car Alexander Orvis & Kit-ying Hui
Introduction
Self-driving cars have been a subject of fascination for decades and have recently seen a massive boost in interest (Anderson, et al., 2014). However, there are lofty expectations set for self-driving cars, both from the public and from investors, leading to rapid, aggressive testing that has caused several fatalities (Lyon, 2019). Currently, self-driving vehicles utilise neural networks for their control systems, and understanding how these work highlights why the technology is still limited and why self-driving cars are still not widespread.
Project Aim

The aim of the project is to create a self-driving car that utilises neural networks to control its movements. The implementation of this network and the car should meet the minimum specifications laid out in the requirements.
Figures and Results

Tracks 1-3: timed laps on the three unseen testing tracks.

Each of the neural networks is of similar design, but they have varying inputs and hidden nodes. The Angles input gives the angle of the path in degrees, as either left or right but never both, so that the car knows which direction the path turns.

A sample of the training data used for the navigation network separates the recorded inputs from the recorded outputs; the 4 input values and 4 output values match the order they are shown, top to bottom, on the diagram.
The two designs were implemented with reasonable success, with both cars utilising a total of 11 sensor readings, which proved adequate for navigating the three unseen testing tracks. Although the cars showed good performance, they were still imperfect and would make mistakes. To test their performance, each car made timed laps around each track while the mistakes they made were recorded. Certain mistakes, like not stopping at a red light, incurred a time penalty; the timer was paused if a car got stuck and restarted once it was moving again.
Methods
Car and Obstacles: the hybrid-network car on the training track; a traffic light and a road node are in the foreground, while several obstacles are in the background.
For the scope of this project, many features typically seen on real cars were simplified, such as not having a camera for the car to see, thus avoiding any complex image processing. With this in mind, two designs were created: one using a single network, and one using a hybrid of two networks that each handle different tasks. Both designs were trained with supervised learning methods that required the collection of training data by manually driving the car. Implementation was achieved by simulating a car in the Unity 3D game engine.
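The project itself was built inside Unity, so the following is only a language-neutral illustration, in PyTorch, of the supervised set-up described: a small network mapping the 11 sensor readings to control outputs, trained on manually recorded driving data. The layer sizes and the choice of four outputs are assumptions:

```python
import torch
import torch.nn as nn

# 11 sensor readings in; 4 control signals out (e.g. steer L/R, accelerate, brake).
model = nn.Sequential(
    nn.Linear(11, 16),
    nn.Tanh(),
    nn.Linear(16, 4),
    nn.Sigmoid(),
)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

sensors = torch.rand(32, 11)   # a batch of recorded sensor readings (stand-in data)
controls = torch.rand(32, 4)   # the human driver's recorded outputs (stand-in data)

optimizer.zero_grad()
loss = loss_fn(model(sensors), controls)   # imitate the human driver
loss.backward()
optimizer.step()
```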
Visualisation of External Sensors: red shows the obstacle sensors, detecting distance to an obstacle; yellow is the node path sensor, detecting the angle and distance to the next path node; green is the traffic light sensor, detecting the distance to either a red or green traffic light.
Looking at the results, both cars showed comparable lap times, although there is a clear difference in the number of mistakes made. The single-network car made far more mistakes than the hybrid, which also explains why the times are so similar: the hybrid was far more cautious driving around obstacles, slowing its time, especially on test track 3, which contained many obstacles. Preferably, though, the cars should be safer and make fewer mistakes rather than be faster. The single network struggled to perform tasks such as stopping at the red light, which can be attributed to its more complex and difficult-to-train network compared to the specialised hybrid networks.
A Problem with Uncertainty

In a situation like this, the car gets confused and will end up crashing unless specifically trained to choose a single direction, which can cause problems.

Conclusion

The hybrid design showed clear advantages:
• Better driving performance
• Far faster to train
• Separate networks allowed for the training of a specific function
However, the limiting factor with the hybrid was the 'transition' between the two networks, which could be improved. Although the final result worked, it had some crucial flaws that limited its performance, the biggest being that it cannot deal with uncertain or unseen situations. Additional training can help with this, but it is a problem faced by even the most advanced self-driving cars. Further work would include adding more obstacles, such as pedestrians, creating proper roads with boundaries the car must stay within, and improving the functionality of some sensors.
Acknowledgments

I'd like to thank Kit-ying Hui for the guidance he has given throughout the project, especially for inspiring the idea of the hybrid design.
Training Track

References

Anderson, J. M. et al., 2014. Chapter Four: Brief History and Current State of Autonomous Vehicles. In: Autonomous Vehicle Technology: A Guide for Policymakers. s.l.: RAND Corporation, pp. 55-74.
Lyon, P., 2019. Why The Rush? Self-Driving Cars Still Have Long Way To Go Before Safe Integration. [Online] Available at: https://www.forbes.com/ [Accessed 4 November 2019].
STUDENT BIOGRAPHY
Lee Robbie
Course: BSc (Hons) Computer Science
Increasing Usability of Flight Trackers using Augmented Reality

With the app stores possessing millions of apps, 2.5 million on the Google Play Store and 1.8 million on Apple's App Store (Statista, 2019), there is an oversupply of applications. The attention of users is limited, resulting in users wanting quick and streamlined applications. Flight tracking applications have been shown to include an abundance of irrelevant features, aiming for quantity rather than quality and simplicity in order to compete with other applications. This has had a negative impact on these applications' usability, putting off users who would otherwise use this kind of application because of their complexity. The decrease in usability also extends to the use of AR, where applications have overcomplicated this already unfamiliar functionality, again discouraging users from trying the feature. Flight Detector aims to solve this problem with a minimalised set of features that provide the primary functional user experience which flight tracking applications are designed to deliver. The application will incorporate an easy-to-follow user interface, and will be created as a web-based app rather than the existing native-based equivalent.
Increasing Usability of Flight Trackers using Augmented Reality
Student: Lee Robbie
Supervisor: Dr John Isaacs

Introduction

With the app stores possessing millions of apps, 2.5 million on the Google Play Store and 1.8 million on Apple's App Store (Statista, 2019), there is an oversupply of applications. The attention of users is limited, resulting in users wanting quick and streamlined applications. Flight tracking applications have been shown to include an abundance of irrelevant features, aiming for quantity rather than quality and simplicity in order to compete with other applications. This has had a negative impact on these applications' usability, putting off users who would otherwise use this kind of application because of their complexity. The decrease in usability also extends to the use of AR, where applications have overcomplicated this already unfamiliar functionality, again discouraging users from trying the feature. Flight Detector aims to solve this problem with a minimalised set of features that provide the primary functional user experience which flight tracking applications are designed to deliver. The application will incorporate an easy-to-follow user interface, and will be created as a web-based app rather than the existing native-based equivalent.
Project Aims

The project aims to produce a mobile, web-based flight tracking application which takes live data from an API and presents it in a readable format based on user input. The application will allow a selected flight to be tracked visually and displayed in a live environment using AR.
Methods

The mobile application was created using the Atom IDE, with flight data supplied by an API from The OpenSky Network. The API provides live data from its coverage around the world, presenting aircraft with ADS-B and MLAT transponders. The interactive AR was created using the open-source framework AR.js. The design process throughout the production of the application was exposed to user feedback, ensuring user opinions were a central part of all aspects of the project. Final usability metrics will be used to scrutinise the application before user testing.

Fig. 2: AR.js Location-Based Example
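The app itself is JavaScript, but The OpenSky Network's public REST API can be illustrated just as well in Python. A minimal sketch of fetching live state vectors follows; the bounding box, roughly covering the Aberdeen area, is illustrative:

```python
import requests

params = {"lamin": 56.9, "lomin": -3.1, "lamax": 57.6, "lomax": -1.5}
resp = requests.get("https://opensky-network.org/api/states/all", params=params)
resp.raise_for_status()

# Each state vector is a list: index 0 is icao24, 1 is callsign,
# 5 is longitude and 6 is latitude.
for state in resp.json().get("states") or []:
    icao24, callsign = state[0], (state[1] or "").strip()
    print(f"{icao24} {callsign or '<no callsign>'} at ({state[6]}, {state[5]})")
```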
Figures and Results

The design process resulted in 8 separate variations of the home page, which were put to users through a survey. Colour schemes were then created and applied to the most preferred design, which was taken forward with the different colour schemes.
Fig. 5: Survey Results From Home Design Preference
Fig. 3: Chosen Home Page Design
Users again chose a preferred colour scheme for the home page design. The resulting home page is shown in Figure 3. The home page survey results are shown in Figure 5, while the colour scheme survey results are shown in Figure 6. Both surveys offered 4 choices, and both returned a clear majority. The same process was followed for all features within the app, such as the search tab, flight data display and AR page. The chosen AR page, shown in Figure 4, adopts the colour scheme "Electric and Crisp".
Fig. 4: Chosen AR Page Design
Fig. 2: The OpenSky Network Logo
Acknowledgements

I would like to thank Dr John Isaacs, Head of the School of Computing Science and Digital Media, for his dedication in providing guidance throughout the year. I would also like to thank Mark Scott-Kiddie, Fatima El-Ghanduri and Calum McGuire for their support and motivation throughout my years at university. Finally, of course, I would like to thank my family for their continued support throughout my time at university, especially the support received this year above all.
Fig. 6: Survey Results From Colour Scheme Preference
Conclusion

The project aimed to create a usable application for all user demographics. The application received thorough user feedback throughout, to ensure its design would appeal to a broad audience of users. Only small-scale user testing has been conducted since the application was implemented, but it has shown the application to be successful in achieving a usable interface with usable features. Further testing will be completed to gather a more substantial proportion of user feedback and determine whether the project has been entirely successful.
References Statista. (2019). Number of apps in leading app stores 2019. Statista. https://www.statista.com/statistics/276623/number-of-apps-available-in-leading-app-stores/
BSc (Hons) Computing Application Software Development
STUDENT BIOGRAPHY
Kevin Kelbie Course: BSc (Hons) Computing Application Software Development An implementation of the Statechain Protocol with applications to Bitcoin The Statechain Protocol is a second-layer technology that runs on top of cryptocurrencies and provides a novel way of sending coins in an off-chain manner thereby improving privacy and scalability (Somsen, 2019).
An implementation of the Statechain Protocol with applications to Bitcoin
Sending keys; not coins

Introduction

The Statechain Protocol is a second-layer technology that runs on top of cryptocurrencies and provides a novel way of sending coins in an off-chain manner, thereby improving privacy and scalability (Somsen, 2019).
Project Aim

The project aimed to implement the Statechain Protocol on the Bitcoin Network. We planned to implement a Statechain server that is introspectable through an easy-to-use web interface, as well as to expose an API for clients to verify that the Statechain server is honest.
Motivation

The motivation for implementing Statechains is that broadcasting transactions on the Bitcoin network can often be quite expensive, especially in times of high-density throughput, because the network becomes congested: Bitcoin can only handle between 3.3 and 7 transactions per second (Croman et al., 2016). Statechains would help reduce this by allowing individuals to send update transactions between each other off-chain, thereby reducing on-chain transactions.

Methods

The method used to implement the protocol was to leverage as much of the existing cryptocurrency infrastructure as possible, to avoid reinventing the wheel. The server was partially implemented in JavaScript, because that is what we were most familiar with, but we decided to put the rest of our energy into implementing it in Rust because of the performance gains to be had there.
One of the HTTP endpoints we exposed was for retrieving data from the Statechain explorer; this was done over a GraphQL endpoint, which reduces the amount of bandwidth required by allowing the user to specify, in a declarative manner, exactly what data they need in the request. After careful consideration, we also decided to use PostgreSQL, a relational database, over RocksDB, a key-value store, because we found the flexibility of PostgreSQL more useful than any performance benefit RocksDB would have offered. Rather than using HTTP for communication across the board, we decided to use TCP connections for peer-to-peer communication between the servers, to reduce the amount of bandwidth required: we avoid sending HTTP headers and can maintain a connection throughout the request.
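To illustrate the bandwidth point: a GraphQL client asks for exactly the fields it needs in one declarative query. The sketch below is hypothetical; the endpoint path, query fields and schema are assumptions, not the server's actual API:

```python
import requests

# Hypothetical query: only the requested fields are returned,
# which is the bandwidth saving described above.
query = """
{
  statechain(id: "abc123") {
    owners { publicKey }
    transactions { txid }
  }
}
"""
resp = requests.post("https://statechain.info/graphql",   # path is an assumption
                     json={"query": query})
print(resp.json())
```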
Figures and Results

Figure 1: Simplified message flow between owners and the Statechain server: INIT and TRANSFER messages carrying each owner's public key and SHA256 pre-image, with the server's SIGN and VERIFY functions producing and checking the signed messages.

In Figure 1 we have created a simplified diagram to show how the client interacts with the Statechain server. It shows the two main functions that the Statechain server supports; the server is not limited to these functionalities, but they are the ones required for transferring UTXOs from one user's public key to another. We have omitted the blind signatures from this diagram because we did not find an adequate solution.

Figure 2: Client-to-server communication uses HTTP; server-to-server communication uses TCP.

In Figure 2 we show how we have implemented the peer-to-peer protocol: our solution was to use HTTP for client-to-server communication and TCP for server-to-server communication. When the client makes requests to the server, no multiple round trips are required, hence the choice of HTTP. However, TCP is used for server-to-server communication because four round trips are required to sign the signatures, due to the nature of how MuSig works (Decker, 2018).
Conclusion

We were able to implement the Statechain server, but due to the limitations of the current cryptocurrency infrastructure it was not possible to implement the entire protocol. If and when the Eltoo proposal is implemented on Bitcoin, we will be able to implement the full Statechain protocol; until that happens, there is no way to force the Bitcoin network to accept the updated Statechain transactions (Decker, 2018). Implementing blind Schnorr signatures proved more difficult than initially anticipated because we could not find any libraries that implemented them, due to the rogue-key attack; fortunately, the core of the project was not reliant on this. To be a viable technology in the future, we would have to implement a way of transferring smaller denominations of the currency, because Statechain coins cannot be divided into smaller amounts of change. The project was more complicated than initially anticipated, making it difficult to deliver on every requirement we set out, so we put most of our time into making sure the server was robust rather than adding lots of features that were not critical.
Acknowledgments

I am grateful for all the help that my supervisor, Mark Zarb, offered me over the course of the project. Ruben Somsen, the author of the Statechain white paper, answered a great many of my questions privately, which was crucial to my understanding of the protocol.
References
Decker, C., 2018. eltoo: A Simplified Update Mechanism for Lightning and Off-Chain Contracts. Blockstream. URL: https://blockstream.com/2018/04/30/en-eltoo-next-lightning/ (accessed 10.9.19).
Somsen, R., 2019. Statechains: Off-chain Transfer of UTXO Ownership.
Croman, K., Decker, C., Eyal, I., Gencer, A.E., Juels, A., Kosba, A., Miller, A., Saxena, P., Shi, E., Gün Sirer, E., Song, D. & Wattenhofer, R., 2016. On Scaling Decentralized Blockchains (A Position Paper). In: Clark, J., Meiklejohn, S., Ryan, P.Y.A., Wallach, D., Brenner, M. & Rohloff, K. (Eds.), Financial Cryptography and Data Security. Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 106-125. https://doi.org/10.1007/978-3-662-53357-4_8
statechain.info
STUDENT BIOGRAPHY
Craig Pirie
Course: BSc (Hons) Computing Application Software Development
Applying Computer Vision Techniques To Identify Corrosion In Underwater Structures

Corrosion is a naturally occurring phenomenon that causes the deterioration of a metal due to exposure to certain environmental factors, and which, if left untreated, can become a major safety and cost concern. The National Association of Corrosion Engineers (NACE) conducted a two-year study, ending in 2016, which estimates the annual global cost of corrosion to society at US$2.5 trillion (The Global Cost and Impact of Corrosion, 2020). In the Oil & Gas sector, it is the job of the Inspection Engineer to analyse the integrity of pipelines, valves, infrastructure and more. As most of these domains reside underwater, this usually involves the aid of an ROV or underwater drone fitted with a camera to feed footage of the infrastructure back to the engineer. This footage can then be monitored to analyse the impact of corrosion on the metalwork, so that the engineer can advise the action needed to treat and correct the effects of the damage. With Artificial Intelligence gaining ever more trust and popularity in society, there is a push for the inspection process within the energy sector to be assisted by this new technology in order to cut the costs of the inspection process. Automating the underwater inspection process is, however, rather difficult due to the qualities of the subsea world. Capturing images below the surface is a difficult and expensive process requiring specialist equipment, making data a scarce commodity. In addition, light behaves differently underwater, which distorts image quality and makes it difficult to detect objects in images. Such hurdles are why it is hypothesised to be vital to appropriately correct the image quality before attempting to automate the inspection process.
Applying Computer Vision Techniques To Identify Corrosion In Underwater Structures Craig Pirie (Dr Carlos Moreno-Garcia)
Introduction

Corrosion is a naturally occurring phenomenon that causes the deterioration of a metal due to exposure to certain environmental factors, and which, if left untreated, can become a major safety and cost concern. The National Association of Corrosion Engineers (NACE) conducted a two-year study, ending in 2016, which estimates the annual global cost of corrosion to society at US$2.5 trillion (The Global Cost and Impact of Corrosion, 2020). In the Oil & Gas sector, it is the job of the Inspection Engineer to analyse the integrity of pipelines, valves, infrastructure and more. As most of these domains reside underwater, this usually involves the aid of an ROV or underwater drone fitted with a camera to feed footage of the infrastructure back to the engineer. This footage can then be monitored to analyse the impact of corrosion on the metalwork, so that the engineer can advise the action needed to treat and correct the effects of the damage. With Artificial Intelligence gaining ever more trust and popularity in society, there is a push for the inspection process within the energy sector to be assisted by this new technology in order to cut the costs of the inspection process. Automating the underwater inspection process is, however, rather difficult due to the qualities of the subsea world. Capturing images below the surface is a difficult and expensive process requiring specialist equipment, making data a scarce commodity. In addition, light behaves differently underwater, which distorts image quality and makes it difficult to detect objects in images. Such hurdles are why it is hypothesised to be vital to appropriately correct the image quality before attempting to automate the inspection process.

Methods

Sample image data was gathered using a web scraper and was then processed using appropriate labelling tools for the various computer vision methods. Three different image pre-processing methods were compared: Retinex, Gray World and Contrast Limited Adaptive Histogram Equalization (CLAHE). Three distinct computer vision methods were then compared: image classification, object recognition and instance segmentation, built from four different underlying architectures (CNN, Faster RCNN, Mask RCNN and YOLO). The workflow is outlined in the figure below.
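As a hedged illustration of two of the pre-processing techniques named above, here is a minimal OpenCV sketch; the input path is a placeholder and the parameter values are common defaults, not the project's tuned settings:

```python
import cv2
import numpy as np

def clahe(img_bgr, clip=2.0, tiles=(8, 8)):
    """Contrast Limited Adaptive Histogram Equalization on the lightness channel."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

def gray_world(img_bgr):
    """Gray World: scale each colour channel so its mean matches the global mean,
    countering the blue cast of underwater footage."""
    img = img_bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    img *= means.mean() / means
    return np.clip(img, 0, 255).astype(np.uint8)

frame = cv2.imread("subsea_frame.png")   # placeholder path
balanced = gray_world(frame)
equalised = clahe(frame)
```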
Project Aim The main aim of the project is to analyse and compare state-of-the-art computer vision and image pre-processing techniques in order to provide a system to assist the Inspection Engineer in the corrosion identification process.
Figures & Results
Sample outputs: CNN, YOLO, Faster RCNN and Mask RCNN applied to the same underwater image.

Above are sample results from each of the trained models on an image from the underwater environment, with varying degrees of success. The CNN fails to recognise the existence of corrosion in the image. YOLO begins to recognise that there is rust present on the prominent pipe in the foreground, but its predictions are quite unstable. Faster RCNN correctly identifies the presence of corrosion in the foreground but ignores the presence in the background. Mask RCNN successfully acknowledges multiple occurrences of corrosion across the image, but falsely highlights part of the image as instances of rust.
Original / Retinex / Gray World / CLAHE: output of each pre-processing technique compared with the original frame.

The images above display the output of each of the chosen image pre-processing techniques compared to the original. Gray World showed some promise in its ability to mitigate the factors of the underwater environment, but introduced some undesirable effects of its own: it appears to over-compensate for the domination of blue pixels by introducing too many red pixels, giving the image a red cast. Retinex appears to produce the most desirable outcome of the three techniques; with it we see a stark reduction in the blue tint of the image and begin to see more object definition and detail. CLAHE is shown to have an adverse effect on the image: by smoothing the image, it loses definition across the sample. After comparing the three techniques, Retinex was judged the most promising, and so was chosen for use in the final study.
Model performance comparison:

Model          Score
CNN            92%
Faster RCNN    49%
YOLO           6%
Mask RCNN      56%
Conclusion

The work done in this project validates the use of image recognition techniques in the corrosion inspection process. Pre-processing using the named techniques was found to be unnecessary, and in fact detrimental to the performance of corrosion detection. Although the project has taken steps towards proving this concept works underwater, more work still needs to be done with larger underwater datasets to further explore the outcomes.
Acknowledgments

I would like to thank my supervisor, Dr Carlos Moreno-Garcia, for his expert advice and mentorship throughout the entire project. Thank you also to the Foxshyre Analytics team for their financial support and for their input into the project.
References

Inspectioneering.com, 2020. The Global Cost And Impact Of Corrosion. [online] Available at: <https://inspectioneering.com/news/2016-03-08/5202/nace-study-estimates-global-cost-of-corrosion-at-25-trillion-ann> [Accessed 24 April 2020].
STUDENT BIOGRAPHY
Grant Sheils
Course: BSc (Hons) Computing Application Software Development
Klink: A visual programming language game to teach the basics of coding

In the past few years, there has been a rising need for software developers around the world; in the United States alone the number of employed developers went from 800,000 in 2004 (Geer, 2006) to over 4 million in 2016 (Daxx Team, 2019). However, coding and computer science in general can be a hard field to get involved with, especially for younger students, as many schools fail to provide any computer science education. A tried and proven way of helping beginners start with coding is introducing them to a Visual Programming Language (VPL), which allows them to understand the basic fundamentals of coding while keeping the process simplified. VPLs have been used in conjunction with school education programs with great results (Grout and Houlden, 2014), increasing children's knowledge base and interest in computer science. There is also research behind the use of software and video games in children's education, showing that not only do students have fun playing these games, but their interest in the subject itself increases. A strong combination for introducing coding to younger audiences, then, would be a video game built around a visual programming language, and that is exactly what this project hopes to achieve.
Klink: A visual programming language game to teach the basics of coding Grant Shiels & Dr Mark Zarb
Introduction

In the past few years, there has been a rising need for software developers around the world; in the United States alone the number of employed developers went from 800,000 in 2004 (Geer, 2006) to over 4 million in 2016 (Daxx Team, 2019). However, coding and computer science in general can be a hard field to get involved with, especially for younger students, as many schools fail to provide any computer science education. A tried and proven way of helping beginners start with coding is introducing them to a Visual Programming Language (VPL), which allows them to understand the basic fundamentals of coding while keeping the process simplified. VPLs have been used in conjunction with school education programs with great results (Grout and Houlden, 2014), increasing children's knowledge base and interest in computer science. There is also research behind the use of software and video games in children's education, showing that not only do students have fun playing these games, but their interest in the subject itself increases. A strong combination for introducing coding to younger audiences, then, would be a video game built around a visual programming language, and that is exactly what this project hopes to achieve.
Project Aim
The main goal of this project is to create a visual programming language application that will encourage people, specifically younger students, to take an interest in coding. The software should either be a video game or implement video game-like mechanics in order to be accessible and increase the user's engagement with the application.
Methods
Before starting, I spent time researching the different methods that could be used to achieve my goal. I first had to decide which engine the game should be made in. There were two main contenders: Unity and Godot. In the end I went with Godot, as it had excellent digital resources available and I personally found it much easier to understand as a first-time game developer. Since I was using Godot, I decided to use its built-in language, GDScript. During development I had to create multiple sprites and objects that needed visual assets; for these I used GIMP, as I have had experience with this software for many years.
Gameplay
The basic gameplay idea is that the user is faced with different levels; within each level, a maze-like path can be seen with the player character inside. It is then the job of the user to navigate the player character from one end of the maze to the other. However, they won't be able to use the classic move keys to move the character: this is where the visual programming language aspect comes into play. The user has a selection of buttons that, when pressed, add a code block to a command the user can see and edit. The command is made up of different tasks that the player character carries out when the user presses the run button, such as moving and changing direction, with text boxes for the user to control certain aspects, like the distance the character will move. Once the user has built their command, they press run and watch to see if they've managed to guide the character to the goal area; if they are successful, the game moves on to the next level, and if not, they are transported back to the beginning of the path to try again. A rough sketch of this block-to-command idea follows.
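The poster itself contains no source code; purely as an illustration, here is a minimal Python sketch of the command-queue logic described above (the game is actually written in GDScript, and every name below is hypothetical):

GOAL = (3, 2)   # target cell in a simple grid maze
START = (0, 0)

def run(blocks):
    # Execute a list of (action, value) code blocks from the start cell.
    x, y = START
    dx, dy = 1, 0  # initial facing: along +x
    for action, value in blocks:
        if action == "move":          # move 'value' cells in the facing direction
            x, y = x + dx * value, y + dy * value
        elif action == "turn_left":   # rotate facing 90 degrees anticlockwise
            dx, dy = -dy, dx
        elif action == "turn_right":  # rotate facing 90 degrees clockwise
            dx, dy = dy, -dx
    return (x, y)

# The player assembles blocks with buttons, then presses Run:
command = [("move", 3), ("turn_left", None), ("move", 2)]
print("Level complete!" if run(command) == GOAL else "Back to the start - try again.")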
Conclusion
In conclusion, I feel the fundamental requirements of the project were met: a game was created that allows users to build commands from blocks of code, a visual programming language. At this stage full testing has yet to be completed, but the feedback received so far has been positive: testers seem to enjoy the gameplay experience and find it accessible as an entry into programming. This project has also given me insight into working with game engines and how game development differs from other forms of development I'm familiar with, and I hope the skills I've gained will help me in future opportunities.
Further Work
There is potential to add a larger variety of code blocks, allowing more complex commands to be built and, in turn, letting users gain more knowledge of the fundamentals of coding. Implementing online features could also make it possible to create and share custom levels for other users to complete.
Acknowledgments
I would like to use this section of the poster to thank my supervisor, Dr Mark Zarb, who provided great support across both semesters and held weekly meetings that I found incredibly helpful for gaining information and insight into the projects of other members of my class. I would also like to thank both NESCol and RGU for giving me the opportunity to study.
References
Geer, D., 2006. Software developer profession expanding. IEEE Softw. 23, 112–115. https://doi.org/10.1109/MS.2006.56
Daxx Team, 2019. Software Developer Statistics 2019: How Many Software Engineers Are in the US and in the World? [WWW Document]. Daxx Softw. Dev. Teams. URL https://www.daxx.com/blog/development-trends/number-software-developers-world
Grout, V., Houlden, N., 2014. Taking Computer Science and Programming into Schools: The Glyndŵr/BCS Turing Project. Procedia - Soc. Behav. Sci., 4th World Conference on Learning Teaching and Educational Leadership (WCLTA-2013) 141, 680–685. https://doi.org/10.1016/j.sbspro.2014.05.119
27
28
BSc (Hons) Digital Media
29
STUDENT BIOGRAPHY
Hamish MacRitchie
Course: BSc (Hons) Digital Media
How the use of visual effects can be used to increase a user's immersion in a sci-fi trailer
This project is an investigation into how visual effects (VFX) impact a viewer's immersion in a sci-fi themed trailer. Key aims of the project were: to research current visual effect techniques and technologies and how they are implemented in current films; to identify, from that research, what types of visual effects could be implemented in this project; and to learn how to create and manipulate VFX so that they would improve the overall quality and user experience of the final production. Key techniques included 3D animation, motion tracking, fire/smoke simulations, spark particles and digital compositing.
30
The Crashing Ship: How the use of visual effects can be used to increase a user's immersion in a sci-fi trailer
By Hamish MacRitchie, supervised by Jay Lytwynenko
Abstract/Aims
This project is an investigation into how visual effects (VFX) impact a viewer's immersion in a sci-fi themed trailer. Key aims of the project were: to research current visual effect techniques and technologies and how they are implemented in current films; to identify, from that research, what types of visual effects could be implemented in this project; and to learn how to create and manipulate VFX so that they would improve the overall quality and user experience of the final production. Key techniques included 3D animation, motion tracking, fire/smoke simulations, spark particles and digital compositing.
Design
The overall story and direction of the short trailer was established at the start of the design phase. Multiple ideas based on a sci-fi theme were considered; the final idea of a ship crashing from space down to Earth was chosen as the direction of the trailer, due to the range of VFX that could realistically be incorporated into it. Once the overall direction was finalised, a script and storyboards were created and locations were scouted, all processes included in the standard pre-production phase of film making.
During this stage the software used to create the project was also considered, and a combination of Blender and Adobe After Effects was chosen. Blender is free to use and is a very diverse and comprehensive 3D package, allowing 3D modelling, animation, VFX and compositing to be done in one application. However, Adobe After Effects was chosen for compositing because of the tools it offers to fix potential problems arising during rendering. Premiere was also used to edit the individual clips together and to add sound to the sequence.
Implementation & Testing
The majority of the implementation took place during the post-production phase. Most of the key techniques had to be learned before being used in a final scene, so separate tests were created for each effect to understand how it worked: fire and smoke simulations using the Eevee renderer, spark effects using a particle system, and motion tracking using the tracking suite included in Blender. (Test stills: Spark Effect Test, Fire Effect Test, 3D Tracking Test.) The techniques learned in these single-case scenarios were later used when combining all elements into a single scene.
The first technique worked on was 3D animation. The animation was very simple and was done using the auto-keying tool: the ship was moved to its start and end positions in the timeline and key frames were added, with additional frames in between where the ship needed to rock and shake to show how the damage was affecting it.
One of the most challenging techniques was motion tracking, required to simulate the live camera movement in 3D space. It involves creating multiple tracking points that use contrast data in a scene to interpret the camera movement. Many problems arose because the shots were very barren and lacked contrasting detail. The lack of detail couldn't be changed, but the original footage was flat in colour, and this could be improved: basic colour correction was performed on the footage and the tracking was attempted again with a much better result.
The fire and smoke effect was created using the quick effect tool in Blender's object settings, which allows quick creation and simulation of fire and smoke. The main challenge with this effect was its overall quality: the effect was attached to the ship, which animated a long distance through the scene, so the effect's domain had to be large too, causing a reduction in quality.
The spark effect was created using a particle system in Blender. A particle modifier was added to the object used as the emitter, and the settings were tweaked to make the particles behave like sparks; the key settings were a short, randomised lifetime, high velocity, a high particle count and random emission. An ico sphere was used as the particle object, with a material that fades out with the particle's age, built in Blender's node editor.
The final stage was to composite all elements together. Each scene was rendered with Eevee and Cycles for different elements: Eevee for the effects, Cycles for the ship and shadows. The sequences were rendered as PNGs and imported into Adobe After Effects as image sequences, with each element layered appropriately in the timeline, usually in the order background footage, ship and shadows, then effects.
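The UI steps above can also be expressed through Blender's Python API; purely as a hedged illustration (the object names are hypothetical, and the project built its effects through the interface rather than scripts), a spark-style particle system along the lines described could be set up like this:

import bpy

emitter = bpy.data.objects["SparkEmitter"]   # object that emits the sparks
spark = bpy.data.objects["SparkSphere"]      # ico sphere used as each particle

mod = emitter.modifiers.new("Sparks", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings

settings.count = 2000            # high number of particles
settings.lifetime = 15           # short lifetime, in frames
settings.lifetime_random = 0.8   # randomise each particle's lifetime
settings.normal_factor = 8.0     # high emission velocity along the normals
settings.render_type = 'OBJECT'  # render every particle as the sphere
settings.instance_object = spark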
In order to test the overall effectiveness of the trailer, it was sent out to a number of participants who were asked to watch it and answer a series of questions evaluating the effectiveness of key elements, including the animation of the ship, the fire/smoke effect, the spark effect and the overall compositing of digital elements. Overall, viewers responded positively to the trailer, but the results showed that its effectiveness could have been improved.
(Trailer stills: Original Footage; With Ship Model; With Effects.)
Conclusion
Overall, this project was challenging but very fulfilling to conduct. Many of the techniques used can be applied in a variety of different and creative ways, and the project has served to increase understanding of how VFX are created and how they can be better implemented by focusing on specific areas of them, which will in turn improve the quality of VFX in future projects. 60% of the 20 participants agreed the trailer was immersive overall, which means the effects were somewhat successful in their goal; however, it could be greatly improved to increase the number of people who felt immersed.
Future Work
Feedback from users highlighted some key areas for improvement and focus in future iterations. These include the detail of the animation, the emission and location of both the fire and spark effects, and the volume of sparks being emitted.
References
3D model of space ship. Sourced from: https://www.turbosquid.com. Under public domain license.
31
32
BSc (Hons) Computer Network Management and Design
33
STUDENT BIOGRAPHY
Lewis Anderson
Course: BSc (Hons) Computer Network Management and Design
Performance Analysis of OpenVPN and Current Industrial Trends
VPNs have traditionally provided a secure tunnel to a remote location, allowing remote workers to establish a private connection to their company headquarters. More recently, VPNs have seen an increase in popularity with personal users who wish to privatise their own connections or remove geographical restrictions. But with so many overheads, can a VPN connection provide users with adequate bandwidth to complete their tasks? Coupled with the rise of cloud computing, it may also be viable to run a VPN server in the cloud, with OpenVPN now offering a cloud-based VPN service too. This led to the question: can protocols like OpenVPN benefit from being run over high-speed infrastructure, such as Amazon Web Services (AWS)?
34
Performance Analysis of OpenVPN and Current Industrial Trends
A thesis by Lewis Anderson, supervised by Chris McDermott
INTRODUCTION
VPNs have traditionally provided a secure tunnel to a remote location, allowing remote workers to establish a private connection to their company headquarters. More recently, VPNs have seen an increase in popularity with personal users who wish to privatise their own connections or remove geographical restrictions. But with so many overheads, can a VPN connection provide users with adequate bandwidth to complete their tasks? Coupled with the rise of cloud computing, it may also be viable to run a VPN server in the cloud, with OpenVPN now offering a cloud-based VPN service too. This led to the question: can protocols like OpenVPN benefit from being run over high-speed infrastructure, such as Amazon Web Services (AWS)?
PROJECT AIMS
The first goal of this thesis was to determine whether running OpenVPN over high-speed cloud infrastructure has an impact on performance; network tests were run to determine the bandwidth over different links. Additionally, the project aimed to find out whether the industry favours a particular VPN protocol, and whether poor VPN performance would sway the industry's choice of protocol. This was done by distributing a survey aimed at IT professionals.
METHODS
Four tests were run, each over a 24-hour period: with and without OpenVPN over a classic internet connection, and the same pair of tests over AWS infrastructure. Three virtual machines were set up in AWS, each with 1 CPU core and 1 GB of RAM, running Windows Server 2019; it was hypothesised that these hardware limitations might impact performance. Individual tests lasted 30 seconds and were run every six hours, with results averaged over the 24-hour period, as sketched below.
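The thesis does not name its measurement tool; purely as a hedged illustration of the schedule just described, an iperf3-style client could be driven from Python like this (the server address is a placeholder):

import json
import subprocess
import time

SERVER = "vpn-endpoint.example.com"   # hypothetical test server
results = []

for _ in range(4):                    # four tests across the 24-hour period
    out = subprocess.run(
        ["iperf3", "-c", SERVER, "-t", "30", "-J"],  # 30 s run, JSON output
        capture_output=True, text=True, check=True,
    ).stdout
    bps = json.loads(out)["end"]["sum_received"]["bits_per_second"]
    results.append(bps / 1e6)         # store Mbit/s
    time.sleep(6 * 60 * 60)           # wait six hours before the next run

print(f"Average bandwidth: {sum(results) / len(results):.1f} Mbit/s")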
TEST RESULTS
(Graph: average network bandwidth for all four tests.) The "classic internet" tests saw a small, expected decrease in performance when OpenVPN was enabled. However, the AWS bandwidth was affected considerably by OpenVPN.
SURVEY RESULTS
The survey was aimed at IT professionals, to determine the way the VPN industry is currently leaning. The findings below offer insight into whether poor performance would be a deciding factor in the choice of VPN protocol.
• 46% of participants use OpenVPN on a regular basis.
• 25% of participants who use OpenVPN said they experience poor performance with it.
• 66% of participants who use OpenVPN agree that poor performance would influence their choice of protocol.
CONCLUSION
Overall, the results show that IT professionals use IPsec and OpenVPN almost equally, with IPsec slightly ahead. This could be down to performance-related issues with OpenVPN, which this thesis also concludes, or simply because IPsec is a long-established protocol, natively available on a variety of hardware and software. When a VPN tunnel is running there will always be some overhead: the internet tests showed a 5.6% bandwidth drop with OpenVPN. The tests conducted over AWS were incredibly surprising, showing a 43% decrease in bandwidth with OpenVPN running. This could suggest that cloud infrastructure is not a good platform to tunnel traffic across; alternatively, it could simply reflect the hardware limitations faced in this experiment.
THANKS TO
Special thanks to Chris, my supervisor; this project would not have been possible, or nearly as exciting and interesting, without his guidance! Another big thanks to all my lecturers and to the university itself for providing me with the opportunity to complete this degree.
REFERENCES
Coonjah et al., 2015. TCP vs UDP tunneling using OpenVPN.
Coonjah et al., 2018. Investigating the TCP Meltdown problem in OpenVPN.
Donenfeld, J., 2017. WireGuard whitepaper.
Kotuliak et al., 2011. Performance comparison of IPSec and TLS based VPNs.
BSc (HONS) COMPUTER NETWORK MANAGEMENT & DESIGN
35
STUDENT BIOGRAPHY
Cameron Birnie
Course: BSc (Hons) Computer Network Management and Design
Automation of Network Device Configuration, Management and Monitoring with a Graphical Interface using Python
Network automation, as defined by Cisco, is "the process of automating the configuring, managing, testing, deploying, and operating of physical and virtual devices within a network" [1]. Automation provides three main benefits to an organisation: reduced OPEX, a reduction in human errors, and a framework for implementing agile development and services. These benefits have led to increased adoption of automation in recent years, to the point that its use within networks has become prevalent, with Juniper's SoNAR stating that 96% of businesses have implemented automation in some form [2]. One of the more popular ways of implementing network automation is through network management tools: software that assists in the management and monitoring of a network.
36
Automation of Network Device Configuration, Management and Monitoring with a Graphical Interface using Python Cameron Birnie & Christopher McDermott
Introduction
Network automation, as defined by Cisco, is "the process of automating the configuring, managing, testing, deploying, and operating of physical and virtual devices within a network" [1]. Automation provides three main benefits to an organisation: reduced OPEX, a reduction in human errors, and a framework for implementing agile development and services. These benefits have led to increased adoption of automation in recent years, to the point that its use within networks has become prevalent, with Juniper's SoNAR stating that 96% of businesses have implemented automation in some form [2]. One of the more popular ways of implementing network automation is through network management tools: software that assists in the management and monitoring of a network.
Project Aim
Design and implement a user-friendly network management tool with a GUI, focusing on usability, modularity and multi-vendor support, in order to facilitate rapid configuration, deployment, management and monitoring of networks compared with traditional methodologies.
Methods
In order to ensure that the GUI was well designed, user-friendly and intuitive, the design followed Jakob Nielsen's 10 Usability Heuristics for User Interface Design [3]. These principles provide a set of high-level design statements that help user interfaces achieve a high level of usability while avoiding common design pitfalls such as cluttered screens or unnecessary complexity. The tool itself was created in Python, leveraging modules such as Kivy, Netmiko, Netifaces, Winreg, Os, Re and Sys to provide additional functionality.
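As a hedged sketch only (the device details are hypothetical, and this is not the project's source code), the kind of Netmiko call such a tool wraps looks like this:

from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.10",     # documentation-range example address
    "username": "admin",
    "password": "secret",     # the pilot testing below shows why credentials
}                             # should not be hard-coded like this

config = [
    "interface GigabitEthernet0/1",
    "description Configured by the management tool",
    "no shutdown",
]

with ConnectHandler(**device) as conn:
    output = conn.send_config_set(config)  # enter config mode and apply lines
    print(output)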
Figures and Results
The tool was to be assessed by evaluating the three attributes of usability defined by ISO standard 9241-11:2018 [4]: efficiency, effectiveness and satisfaction. The proposed testing involved having five users perform a set of configurations on multiple devices, both with the tool and manually, to assess efficiency and effectiveness; satisfaction would then be examined using a System Usability Scale (SUS) to accurately gauge user satisfaction after using the tool. Due to the current pandemic it became unfeasible to perform the original testing; the amended plan uses myself as the sole subject, and satisfaction will no longer be assessed, since it is an opinion-based evaluation and using myself would potentially produce inaccurate results due to inherent bias towards the tool.
Pilot testing has been carried out to provide an initial assessment of the tool's suitability and to highlight areas for improvement prior to the final usability testing. Overall the tool performed as expected and all functions executed successfully, although a number of issues were highlighted that, if left unaddressed, would hinder its effectiveness. There were several instances where crashes occurred during operation due to unexpected user input, so improvements to the tool's error handling will be carried out. The method of storing credentials as plain text within the source code was also deemed a major security risk, and a login screen will instead be implemented to allow the user to enter credentials at each login.
Conclusion
Overall I feel the project successfully meets its aim of creating an easy-to-use network management tool. Areas for future improvement might include remote storage or the ability to configure multiple devices simultaneously. Further testing will be performed prior to the final hand-in, and it is expected that the results will show the tool provides an easy-to-use and more efficient method of configuring devices than traditional methodologies. The tool has also been developed in a manner that will allow it to be easily modified and extended in the future to provide additional functionality.
Acknowledgments
I would like to express my thanks to my supervisor, Christopher McDermott, for providing guidance throughout the project to ensure a finished product was produced.
References
1). What Is Network Automation?, 2020. [online]. Available from: https://www.cisco.com/c/en/us/solutions/automation/network-automation.html [Accessed 6 May 2020].
2). JUNIPER ENGNET, 2020. 2019 State of Network Automation Report. [online]. Juniper. Available from: https://www.juniper.net/assets/us/en/local/pdf/ebooks/7400113en.pdf [Accessed 6 May 2020].
3). Enhancing the explanatory power of usability heuristics, n.d. Boston, Massachusetts, USA: Association for Computing Machinery. pp. 152–158. Available from: https://dl.acm.org/doi/10.1145/191666.191729 [Accessed 6 May 2020].
4). ISO 9241-11:2018, 2020. [online]. Available from: https://www.iso.org/standard/63500.html [Accessed 6 May 2020].
37
STUDENT BIOGRAPHY
Gift Chilera
Course: BSc (Hons) Computer Network Management and Design
VMware ESXi vs Proxmox VE vs Microsoft Hyper-V
Virtualisation has become an important factor in the world of IT today: there are over 36,000 companies that use VMware vSphere (enlyft.com) and over 41,000 companies that use Microsoft Hyper-V Server (enlyft.com). VMware, who developed vSphere (www.vmware.com), and Microsoft, who developed Hyper-V (docs.microsoft.com), are popular platforms in virtualisation, with some of their virtualisation software ranked highly based on reviews (trustradius.com) and ease of use (g2.com).
38
VMware ESXi vs Proxmox VE vs Microsoft Hyper-V
Gift Chilera, Ian Harris
Introduction
Virtualisation has become an important factor in the world of IT today: there are over 36,000 companies that use VMware vSphere (enlyft.com) and over 41,000 companies that use Microsoft Hyper-V Server (enlyft.com). VMware, who developed vSphere (www.vmware.com), and Microsoft, who developed Hyper-V (docs.microsoft.com), are popular platforms in virtualisation, with some of their virtualisation software ranked highly based on reviews (trustradius.com) and ease of use (g2.com).
Project Aim
The aim of this project was to find out which of three type 1 hypervisors would be best suited to the user based on their needs. The hypervisors compared were VMware ESXi, Microsoft Hyper-V and Proxmox VE. The latest free version of each hypervisor at the time was used for the testing in this project.
Methods
The testing was done on a virtual machine with a Windows 10 guest operating system which had performance benchmark tools installed. Each hypervisor was tested in four areas: CPU performance, memory performance, disk performance and LAN performance.
Testing Environment
The hardware used for the testing was an HP Compaq Elite 8300 SFF x64-based system. Host machine: CPU: Intel Core i5-3470 3.20 GHz; Memory: 2x Hynix/Hyundai 4F80D198 4GB; Hard disk: Seagate ST1000DM003-1SB102, Z9A5WE4T, 1TB; Portable SSD: Seagate, NAA40EC5, 1TB. ESXi was installed and tested first, followed by Proxmox and then Hyper-V. Each hypervisor was connected to the network, and the virtual machine got its internet connection from the same network as the hypervisor. The VM on ESXi and Proxmox was given two CPUs with one core, 2GB of RAM, a 32GB hard disk for the installation, and an additional 16GB hard disk.
Performance Benchmark Tools
The performance benchmark tools used were PerformanceTest (passmark.com), NovaBench (novabench.com), GFX Memory Speed Benchmark (techspot.com) and LAN Speed Test (totusoft.com). Bandicam (bandicam.com) is a recording tool that was used to record the activity in CPU-Z during the benchmark tests.
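Purely as a hedged illustration of what such tools measure (this is not one of the tools used in the project), a crude sequential disk write/read test can be written in a few lines of Python; note that operating-system caching is one reason naive benchmarks can report figures above the physical bandwidth, much like some of the unusual results discussed below:

import os
import time

PATH = "testfile.bin"          # hypothetical scratch file on the disk under test
SIZE_MB = 256
CHUNK = b"\0" * (1024 * 1024)  # one 1 MiB block

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())       # force the data to disk so the timing is honest
write_mbps = SIZE_MB / (time.perf_counter() - start)

start = time.perf_counter()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):  # read the file back in 1 MiB chunks
        pass
read_mbps = SIZE_MB / (time.perf_counter() - start)  # cache may inflate this

os.remove(PATH)
print(f"write: {write_mbps:.0f} MB/s  read: {read_mbps:.0f} MB/s")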
Figures and Results
CPU Performance Tests
Ark.intel.com (2020) shows that the stock clock speed of the Intel Core i5-3470 CPU used is 3.2 GHz (3200 MHz), and its turbo speed is 3.6 GHz (3600 MHz). According to intel.co.uk (2020), Turbo Boost will increase the core speed beyond the stock speed of the CPU; Intel also state that for Turbo Boost to work the CPU "must be working in the power, temperature, and specification limits of the thermal design power (TDP)", leading to improved "performance of both single and multithreaded applications". The test results show that ESXi had the highest clock speed, 3432.35 MHz, which is not only above the stock speed but also the closest of the three to the turbo speed; from what Intel state this should be a safe clock speed, as it is below 3600 MHz and within the temperature, thermal design power and power limits. This makes ESXi the best of the three hypervisors for CPU performance. Hyper-V's highest clock speed was 3243 MHz, just above the stock speed and therefore also safe. Proxmox's highest clock speed was 3193 MHz, below the CPU's 3200 MHz stock speed, meaning it underperformed in this area. Hyper-V came second while running a single CPU with two cores; according to techterms.com (2020), systems with two CPUs are considerably faster than systems with one CPU, though barely "twice as fast", which is a surprise, as the VM on Proxmox was using two CPUs with one core each but had a lower core speed than Hyper-V.
However, when the VM was given two CPUs, Hyper-V did something different and gave the VM one CPU with two cores instead. The VM was initially given one CPU in Hyper-V, but one of the benchmark tools displayed an error message whenever the benchmark tests were about to start; to avoid this error the VM was given two CPUs, meaning it ran one CPU with two cores. This will have influenced the performance benchmark results for Hyper-V.
Memory Tests
The memory test results show that Hyper-V would be the best option for fast memory speeds overall, as it had the fastest write speed in RAM (9.39 GB/s) and the second fastest read speed. Hyper-V also had the fastest RAM speed in the NovaBench test results, 17554 MB/s. This would suit users who want efficiency from their hypervisors, as faster RAM will improve CPU performance as well. Transcend-info.com shows that the RAM in the physical hardware was DDR3 1333: the highest speed recorded came from Proxmox's read tests (10.61 GB/s), and the transfer rate of DDR3 1333 is 10.6 GB/s.
Storage Tests
The hard disk used to install and store the hypervisors was a Seagate Barracuda 1TB 7200 RPM SATA 3 disk. According to seagate.com (2020) the disk's average data rate (read/write) is 156 MB/s, its max sustained data rate (OD read) is 210 MB/s, and the highest SATA transfer rate is 6 Gb/s; kb.sandisk.com (2020) likewise states that SATA 3's transfer rate is 6 Gb/s, with a maximum bandwidth of 600 MB/s. ESXi's results were unusual, as they were well above the disk's average read and write speeds and higher than SATA 3's bandwidth. Hyper-V's read speed was close to the average read speed, but its write speed was also unusual, being well above the average speeds and the bandwidth. Proxmox may have had the lowest speeds, but its results were normal compared with those of ESXi and Hyper-V.
LAN Tests
Totusoft.com (2020) says that when LAN Speed Test measures the LAN speed it creates a file in memory and sends it in both directions. Of the highest read/write speeds, ESXi had the fastest read (download) speed, 4404.58 Mbps, and the lowest write (upload) speed, 106.29 Mbps. The Windows 10 virtual machine was using either an E1000e network adapter, an emulation of the Intel Gigabit NIC 82574L (ark.intel.com), or the VMXNET 3, a virtual network adapter; vmware.com (2020) shows these are the two NICs compatible with a Windows 10 VM. Geek-university.com (2020) shows that the NIC speed and duplex of a VM on ESXi can be configured, and that the speed can be set to auto-negotiate, which selects the fastest speed possible without a limit; this would be one reason why ESXi had a high read (download) speed. Hyper-V had the second fastest read (download) speed, 1102.09 Mbps, and the second highest write (upload) speed, 200.13 Mbps, using an Intel 82579LM Gigabit Ethernet adapter (ark.intel.com), another emulated NIC. Proxmox had the lowest read (download) speed, 670.24 Mbps, and the highest write (upload) speed, 220.17 Mbps. This could mean that Proxmox's network adapter had data transfer rates similar to the adapters that ESXi and Hyper-V were using.
Reliability
When it comes to reliability, the demonstrations performed show that, on the hardware and testing environment used, Hyper-V is the most reliable for live migration between two storage devices, as all of the demonstrations requiring the VM to be transferred to another storage device while powered on succeeded. For Proxmox only one such feature worked, the live backup. For ESXi, an attempt to transfer the VM while powered on was not successful, and there was no live backup feature either.
Conclusion
While some of the results from this project are debatable, the knowledge and insight gained from them lead to a far more intriguing discussion. The plan was always to conduct the tests in the fairest way possible by using the same hardware, and that is how it was carried out; however, there were other factors which could have been taken into consideration, and these had an impact on the experimental results. The factors which could be investigated in future work are the CPU settings, storage settings and network settings. As previously stated, some of the results might be debatable, but ultimately this project has been able to show that not all type 1 hypervisor platforms are the same; evidence for this includes some of the setups and configurations in this project and the features and tools that each manufacturer provides.
References
GFX Memory Speed Benchmark 1.1.12.26, 2020. [online]. TechSpot. Available from: https://www.techspot.com/downloads/6767-gfx-memoryspeed-benchmark.html [Accessed 22 April 2020].
2020. [online]. Available from: https://www.dell.com/support/article/en-uk/sln179266/how-random-access-memory-ram-affects-performance?lang=en [Accessed 24 April 2020].
Backup and Restore - Proxmox VE, 2020. [online]. Available from: https://pve.proxmox.com/wiki/Backup_and_Restore [Accessed 22 April 2020].
BANDICAM COMPANY, 2020. Bandicam - Recording Software for screen, game and webcam capture. [online]. Available from: https://www.bandicam.com/ [Accessed 24 April 2020].
CPU-Z | Softwares | CPUID, 2020. [online]. Available from: https://www.cpuid.com/softwares/cpu-z.html [Accessed 22 April 2020].
Download PassMark PerformanceTest - PC Benchmark Software, 2020. [online]. Available from: https://www.passmark.com/products/performancetest/index.php [Accessed 22 April 2020].
Dual Processor Definition, 2020. [online]. Available from: https://techterms.com/definition/dual_processor [Accessed 24 April 2020].
HOPE, C., 2020. IDE vs. SCSI. [online]. Computerhope.com. Available from: https://www.computerhope.com/issues/ch001240.htm [Accessed 24 April 2020].
Microsoft Hyper-V Server commands 12.75% market share in Virtualization Platforms, 2020. [online]. Available from: https://enlyft.com/tech/products/microsoft-hyper-v-server [Accessed 29 April 2020].
Emulation or virtualization: What's the difference? - Direct2Dell, 2020. [online]. Available from: https://blog.dell.com/en-us/emulation-or-virtualization-what-s-the-difference/ [Accessed 29 April 2020].
39
STUDENT BIOGRAPHY
Chris Headrick
Course: BSc (Hons) Computer Network Management and Design
Mass Configuration of Network Devices in a Lab Environment using Python
Network automation adoption continues to grow year on year, with Python a popular choice of tool. In a computing lab environment, network devices are configured, reset and reloaded multiple times throughout the day as students and staff use them for labs and projects. Without a fixed IP address and method of access, it is impossible to take advantage of many popular Python libraries that allow for mass configuration.
206
Mass Configuration of Network Devices in a Lab Environment using Python Chris Headrick and Christopher McDermott
Introduction
Network automation adoption continues to grow year on year, with Python a popular choice of tool. In a computing lab environment, network devices are configured, reset and reloaded multiple times throughout the day as students and staff use them for labs and projects. Without a fixed IP address and method of access, it is impossible to take advantage of many popular Python libraries that allow for mass configuration.
Project Aim
The ultimate goal of the project is to create the fastest and most convenient way to connect to as many lab devices as possible to perform configuration changes and upgrades.
Methods
Figure 1 - NM-32A asynchronous network module
To connect to the devices, a console server was created using a Cisco 2620XM router with an attached NM-32A asynchronous network module (see Figure 1). This uses 68-pin OCTAL cables and provides out-of-band connectivity to the console (or auxiliary) ports of up to 32 devices at one time. Each line is allocated a corresponding name that is used for access (D1–D32). SSH is also configured on the console server to allow the program access to the devices.
Implementation
Figure 2 – From CLI to GUI
The program started as a simple script, designed to run on Linux, that let the user navigate through menus on the CLI. During development it became apparent that a small application running on Windows would be far more practical and easy to use. The program has two modes of operation: OctalLine and SSH. The OctalLine process works like so: first, a device check is performed by entering the number of devices connected to the console server. The program accesses each device, looks for either "Switch" or "Router" in the name, and reports a "PASS". If a device has a different name, it may have a saved startup-configuration; an optional checkbox will delete the startup-configuration on such devices and reload them. I have tried to account for every error: where possible, the script alerts the user and automatically performs corrective action (Figure 3 – Error Handling), and where it cannot, the user is given a list of the problem devices and issues so that they can fix them manually. Once the user is satisfied that all the devices are ready, they choose to configure each device with SSH or push pre-made configurations stored in .txt files.
The program utilises the following libraries to achieve its aims:
Netmiko: used to SSH to the console server and perform the task selected by the user over the octal lines.
Nornir: once SSH is configured, Nornir is used to perform configuration tasks and upgrades on the devices. It does this concurrently, greatly reducing the time needed to perform tasks.
Gooey: used to turn (almost) any Python command line program into a GUI application.
With SSH configured, it is possible to take advantage of Nornir's parallelisation and configure multiple devices at once. The main feature of the SSH component is performing an IOS upgrade on multiple devices: transferring an IOS image, setting the boot variable on the device and reloading it. A sketch of this style of Nornir task follows.
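As a hedged illustration only (the inventory file and commands are hypothetical, not taken from the project), this is the general shape of pushing one configuration to many lab devices in parallel with Nornir and its Netmiko plugin:

from nornir import InitNornir
from nornir_netmiko.tasks import netmiko_send_config
from nornir_utils.plugins.functions import print_result

nr = InitNornir(config_file="config.yaml")   # inventory of lab devices

result = nr.run(
    task=netmiko_send_config,
    config_commands=[                        # pushed concurrently to all hosts
        "ip ssh version 2",
        "line vty 0 4",
        " transport input ssh",
    ],
)
print_result(result)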
Results
A traditional method of performing IOS upgrades is to connect to each device manually and download the image from a TFTP server. To test how much faster the script could perform the process, a test network of 15 switches was created. Using the traditional method, it took on average two minutes per device to download the image, change the boot settings and reload. By comparison, the script was able to transfer the image, configure the devices and perform the reloads in just under 5 minutes, making it 83.3% faster. (Chart: IOS Upgrade Time Comparison — Octopush vs manual, 15 devices.)
Conclusion
This project has successfully demonstrated the time automation can save on tasks like mass configuration and upgrades. The inspiration for this project came from a discussion with a fellow student who, along with two others, had spent weeks upgrading devices at the university the previous year. From a business perspective, the money saved on labour by reducing a task from weeks to perhaps as little as a day makes a strong argument in favour of automation.
Future Work
There are many features that could be added to this app, such as the ability to pull configurations from devices and compare them to a template. I'd also like to attempt to bypass password-protected devices using a list of common passwords found on lab network devices. Lastly, I intend to package the script as an .exe. Still time!
Computer Network Management and Design 207
STUDENT BIOGRAPHY
Adam Nayler
Course: BSc (Hons) Computer Network Management and Design
Comparing the Use of Intrusion Detection and Intrusion Prevention Systems within Small-scale Networks
As outlined by Stokdyk (2018), 58 percent of owners of businesses with up to 299 employees have been victims of cyber-attacks. With such a large number of small businesses falling prey to cyber criminals, it is paramount that the correct security is in place to prevent attacks and protect data and users; even a single breach is devastating, with the average cost of one in 2019 being $3.92 million (Sobers, 2019). Companies therefore need to know which technology to implement to defend against these threats. This project aimed to explore whether a high-cost IPS system is necessary for a small business, judged against running an IDS system alongside the defence strategies implemented by default in Windows 10. However, the results may show that, as in many situations, these alone are not enough, and that a defence-in-depth approach is instead required to provide an adequate, consistent solution to detection and prevention.
218
Comparing the Use of Intrusion Detection and Intrusion Prevention Systems within Small-scale Networks. Adam Nayler & Andrei Petrovski
Introduction
As outlined by Stokdyk (2018), 58 percent of owners of businesses with up to 299 employees have been victims of cyber-attacks. With such a large number of small businesses falling prey to cyber criminals, it is paramount that the correct security is in place to prevent attacks and protect data and users; even a single breach is devastating, with the average cost of one in 2019 being $3.92 million (Sobers, 2019). Companies therefore need to know which technology to implement to defend against these threats. This project aimed to explore whether a high-cost IPS system is necessary for a small business, judged against running an IDS system alongside the defence strategies implemented by default in Windows 10. However, the results may show that, as in many situations, these alone are not enough, and that a defence-in-depth approach is instead required to provide an adequate, consistent solution to detection and prevention.
Project Aim
The aim of this project was to compare the detection and prevention abilities of each system and determine their effectiveness, helping to establish whether one system is a better investment than the other for a small-scale network. The results should show whether either can fully protect an enterprise network on its own or whether a defence-in-depth strategy is still required for smaller networks.
Methods
This project made use of two separate but identical GNS3 networks built from VMs running within VMware. Each network contains three routers and switches to accommodate three separate subnets, with each subnet containing three host devices running Windows 10 for testing. Subnet one also contained the system-specific devices, and a Kali Linux VM was included to represent an attacker. (Figure: base layout for these networks.)
The IDS system uses Wazuh running on a Debian 10 server: it monitors specified folders for new files and sends those new files to VirusTotal for analysis. The IPS system uses SolarWinds Security Event Manager to carry out pre-configured rules when the test systems send alerts about malware detections. A sketch of the IDS idea follows.
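Purely as a hedged illustration of what the Wazuh + VirusTotal integration automates (the API key and file path are placeholders, and this is not the project's configuration), hashing a newly created file and querying VirusTotal looks like this:

import hashlib
import requests

API_KEY = "YOUR-VT-API-KEY"   # placeholder credential
PATH = "new_file.exe"         # file spotted in a monitored folder

sha256 = hashlib.sha256(open(PATH, "rb").read()).hexdigest()
resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{sha256}",
    headers={"x-apikey": API_KEY},
)
if resp.status_code == 200:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"{stats['malicious']} engines flagged the file as malicious")
else:
    print("Hash unknown to VirusTotal")   # 404 for never-before-seen files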
Figures and Results
Each end device in each network was tested against four iterations of four different types of malware, sixteen tests in total, with the goal of measuring the detection and prevention ability of each system.
The first round of testing, against the IDS system, found that it detected the malware 100% of the time, most likely due to its use of VirusTotal, which analyses possibly malicious files against over 70 different engines to check whether they are threats. (Chart: number of VirusTotal engines detecting each sample — Eda2, Worm, RAT and Notepad++-installer variants, each plain and in xor, xor_context and zutto_dekiru obfuscations; detections ranged from 26 to 52 engines.)
However, prevention for the IDS system was only 81%, as it was dependent on Windows Defender, having no IPS capabilities of its own.
(Pie chart: Windows Defender detection — 81% of malware detected, 19% undetected.)
The IPS system, on the other hand, turned out to be dependent on third-party software for detection, and therefore both its detection and prevention rates were lower than the IDS system's. Its detection rate was only 68.75%, and it could only carry out its IPS capabilities when a malware event was reported to it by third-party software such as Windows Defender. It did, however, carry out all actions successfully whenever an event was received.
(Pie chart: IPS system detection rate — 69% of malware detected, 31% undetected.)
Conclusion
Although the results were not as expected, I believe this project has been carried out successfully and effectively. I expected the results to show that the IDS had better detection rates but that the IPS would have greater prevention capability; however, due to how the IPS functions, its detection and prevention rates both ended up below those of the IDS. It is difficult to know whether this would still have been the case had I been able to replicate the systems physically, but due to COVID-19 this was impossible. Given a larger time frame I would have preferred to run every test against multiple IDS and IPS solutions so that the results were not dependent on a single type or brand. Overall I am disappointed that unforeseen circumstances prevented some objectives of this project from being achieved; on the other hand, I am very happy with the work carried out on those that remained achievable.
Acknowledgments
I would like to thank all of the teaching staff at Robert Gordon University for their knowledge and support during my studies, and specifically Andrei Petrovski and Ian Harris for their support and advice throughout the honours project. An additional thank you goes to my fiancée, Jodie, who has been consistently supportive and patient with me throughout the entire project.
References
SOBERS, R., 2020. 110 Must-Know Cybersecurity Statistics for 2020. [online]. Available from: https://www.varonis.com/blog/cybersecurity-statistics/ [Accessed 14 April 2020].
STOKDYK, D., 2018. What is Cyber Security and Why is it Important? [online]. Available from: https://www.snhu.edu/about-us/newsroom/2018/05/what-is-cyber-security [Accessed 14 April 2020].
219