A note from the editors
‘Technology’ is a word used freely and widely nowadays; it interacts with us everywhere we go and in everything we do, shaping our futures in this world and defining the past. Thus, it seems natural for technology to dominate the headlines of weekly news articles and to be a part of any topical conversation. You, the reader, absorbing this off a screen, and I, the writer, pressing keys on a keyboard, all of which can be decomposed to a few thousand 1s and 0s, are a prime example. And I didn’t even need to look past this document.
Download magazine seeks to address niche areas of computer science and technology, exploring them and sharing them with the NLCS and wider community. In this edition, we explore the specific interactions and influences of technology and humanity (and vice versa), from book reviews to reports on art. We hope there is something for everyone here.
Your editors, Reva, Ilakiyaa
Millie vs the Machines by Kiera O’Brien contains an amazing insight into what life could look like in 2099 with advanced robots. The protagonist is a thirteen-year-old girl called Millie, in many ways just an ordinary teenager with everyday problems. She is involved in a car accident, can only remember bits and pieces of it, and is left with memories that haunt her. She goes to a school called Oaktree, a safe haven given its name by the governing body, “The Company”. The school relies on machines to survive. They use retina chips, which are phones that appear on your eyes, and no one can tell when you are looking at them. Index chips located in your finger get you into buildings, like our identity cards here at school, but index chips can also track you wherever you go. There are machines called units. These machines have human-like bodies and do the mundane tasks that humans do not want to do. Millie has a twin brother; she feels anger towards him but cannot remember why.
This book follows the story of mysterious disappearances at her school and shows us both the benefits of this high-tech machinery and the dangers it can cause. The book is full of twists, and at the end there is a horrific turn that changes Millie’s life.
I would recommend this book to readers who enjoy futuristic books and dystopian fiction. It is not very joyful; it is dark at times and even gruesome in parts. The most interesting aspect of this story was that it brought to light how humans have an extreme sense of empathy that can sometimes put them and others in danger, and how robots are only useful up to a certain point and in their most basic form. There are two groups in this book: the first wants to give units rights so they do not have to live serving humans, while the other believes that freeing units would be dangerous because no one knows the extent of their capabilities. Both groups are dangerous.
I believe that the moment technology gets that advanced is when we must start to evaluate whether machines are just metal or another form of life. This also begs the question whether humans should even explore that far in their pursuit of a more advanced technology.
It is always a human’s choice, because we decide what we create, so this is all and only in our hands. Engineers have to work together and believe in the same process so that nobody can create something that is too dangerous. Laws need to be strict.
If someone tried to create a robot that could recreate human emotions and eventually pose as a human, this would seem clever and fascinating at first. But, as humanity has seen, it is our strong emotions that lead us to disagreements and war, and what would stop robots from experiencing the same anger? This form of artificial intelligence is made from metal, not fragile flesh and blood, so the consequences could be catastrophic.
Kiera O’Brien has also written a sequel, which is just as fast paced and delves even deeper into the world of machines.
Here are some additional books that explore this fascinating topic.
I, Robot – Isaac Asimov
Cinder – Marissa Meyer
Mila 2.0 – Debra Driza
Freak of Nature – Julia Crane
Wires and Nerve – Marissa Meyer
Beta – Rachel Cohn
‘Can’t Help Myself’ is a piece of art that was displayed in the Guggenheim Museum in 2016. The installation used an industrial robotic arm of the kind often found in factories. This robotic arm was fitted with visual recognition sensors and software that controlled its 32 different movements.
The robot lived in a square acrylic box where it carried out the only job it had: to survive. It was programmed to try to contain the hydraulic fluid that was constantly leaking out and that it needed to keep itself running... if too much escaped, it would die, so it desperately pulled the fluid back to fight for another day. The sensors detected when the liquid had spread a certain distance from the robot, triggering the robotic arm to reach out and pull the liquid back with its shovel-like attachment. The saddest part is that they gave the robot the ability to do 'happy dances' for spectators while the spill was contained. When the project was first launched, it danced around, spending most of its time interacting with the crowd, since it could quickly pull back the small spillage. Years later... it looked worn down and hopeless... as the spill grew over time, the amount of leaked fluid became unmanageable, and there was no longer time to dance, only enough time to try to keep itself alive. It lived out its last days in a never-ending cycle of sustaining life while simultaneously bleeding out...
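For the technically curious, the behaviour described above boils down to a simple sense-and-react loop. The sketch below is not the artists’ actual software, which has never been published; the threshold, function names and timing are purely illustrative assumptions.

```python
import random
import time

SAFE_RADIUS = 1.0  # metres - an illustrative threshold for how far the fluid may spread

def fluid_distance() -> float:
    """Stand-in for the visual recognition sensors estimating how far the fluid has crept."""
    return random.uniform(0.0, 2.0)

def sweep_fluid_back() -> None:
    """Stand-in for one of the arm's 32 movements: shovelling the fluid back towards itself."""
    print("Reaching out and pulling the fluid back in...")

def happy_dance() -> None:
    """Stand-in for the dances performed for spectators while the spill is contained."""
    print("Spill contained - dancing for the crowd.")

while True:
    if fluid_distance() > SAFE_RADIUS:
        sweep_fluid_back()   # survival always takes priority
    else:
        happy_dance()        # only dance while the spill is under control
    time.sleep(1)
```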
There is a very human-like, relatable aspect to the robot. People feel the sadness and hopelessness it projects as it moves about, at first seeming happy enough to do its job but over time growing slower, as if the work has become tedious and exhausting. This work of art tells a story of desperately struggling to keep yourself together just so you can live another moment to struggle again. The fluid represents how we drain ourselves both mentally and physically for money just to sustain life, how the system is set up for us to fail on purpose, essentially enslaving us and stealing the best years of our lives. How this robs us of our happiness, passion and inner peace. How we are slowly drowning in more responsibilities, with less free time to enjoy ourselves as the years go by. We connect with the robot because we often feel like robots, imprisoned by society, and are often treated as such.
The arm slowly came to a halt and died in 2019, but with a twist: the bot actually runs off electricity, not hydraulics, so it was working its entire life towards something it didn't even need, tricked by the system it was brought into. The robot was programmed to live out this fate, and no matter how hard it tried there was no escaping it; spectators watched as it worked itself to death. The piece, created by Sun Yuan & Peng Yu, is named 'Can't Help Myself'.
Among all the organs, the heart has always been a bit of an anomaly, and it was left mostly untouched by surgeons until the late 19th century. Operating on the heart came with a mountain of difficulties: practical ones, such as its inaccessible position and the question of how to operate on it whilst it was beating, and ones rooted in belief, such as the common misconception that injury to the heart would cause instant death.
However, over the past few centuries, these misconceptions, including the belief that stopping the heart would result in instantaneous death, were overturned, alongside the evolution of heart surgery and of the heart-lung machine.
The heart-lung machine is a device frequently used in open heart surgery which temporarily takes over the job of the heart and lungs. It does this by drawing blood from the patient’s body, removing the carbon dioxide and adding in fresh oxygen, and then pumping the oxygenated blood back into the body. It therefore requires two parts: an artificial lung to oxygenate the blood, and a pump to propel the blood through the machine and around the patient’s body.
The first heart-lung machine was designed by John Heysham Gibbon Jr. in 1934. The blood would enter a rotating vertical cylinder, where it was spread into a thin film and exposed to oxygen. It would then be collected and pumped back into the body. Together with his wife, Mary Gibbon, he conducted many experiments on cats, which proceeded as follows. They would first anaesthetise the cat and connect it to an artificial respirator. Then they would reveal the heart by opening the thorax and inject the cat with heparin to prevent blood clots from forming. Tubes would also be inserted to carry blood to and from the heart-lung machine. Then, once the setup was ready, they would simulate an obstruction in the pulmonary artery, turn on the heart-lung machine and observe. After many such experiments, they had their first success in 1935, marking an important point in the history of the heart-lung machine.
Shortly after, the Second World War broke out, in which Gibbon served as a surgeon, putting a temporary pause on his research, which he took up again after his return. In 1945, he began working with Thomas Watson Sr., the then president of IBM, to create an improved version of the original heart-lung machine. The new machine, delivered in 1946, was larger and more sophisticated, reducing the risk of haemolysis and preventing air bubbles from entering the circulation.
However, there were still issues with the design, in particular with the size of the machine. The key to solving this came from the observation that if an obstruction was placed in the bloodstream to produce turbulence, the rate of oxygenation would increase. Gibbon replaced the revolving cylinder with six stainless-steel mesh screens, suspended in parallel. The blood would trickle down them into an oxygen-rich atmosphere, where it was oxygenated before being collected at the bottom. This greatly increased the surface area and efficiency of the artificial lung, whilst keeping it to a scale that would allow it to fit in an operating theatre.
On 6 May 1953, Gibbon performed the world’s first successful cardiopulmonary bypass surgery on patient Cecelia Bavolek, using his heart-lung machine. Bavolek was born with an atrial septal defect and was operated on when she was 18. In an operation that took over five hours, the heart-lung machine stood in for her heart and lungs for twenty-six minutes.
However, when two subsequent operations both resulted in the death of the patient, Gibbon decided to step away, abandoning both the machine and heart surgery altogether. This was not the end of his machine, however, as he agreed to share the design with the Mayo Clinic, which improved the machine and managed to lower the mortality rate to 10% within just a few years.
Perhaps somewhat ironically, in his old age Gibbon suffered from heart trouble, and he passed away in 1973 due to a heart attack. However, the legacy he left behind is great: not only has his heart-lung machine saved countless lives, but his mentoring of other physicians and his standard textbook on chest surgery also helped to impart his knowledge to the next generation.
As we are currently discovering, nurses working in the NHS are under massive stress and are on strike around the country. Is there a way for technology to relieve some of the work they must do, giving them more time and freeing up more money for the NHS? As the NHS is already under enormous amounts of pressure due to underfunding and Covid, the expected growth in the number of people who need care homes will force changes in the way the NHS is run. Technology can facilitate this without compromising on the quality of care provided, a foundational belief for treatment in the NHS.
Motion sensors and personal assistants, such as Amazon Alexa or Google Home, are becoming increasingly popular in social care to monitor and assist those who may need additional support, such as the elderly and those with disabilities. Companies are creating systems which use motion sensors to enable fall detection and alert caregivers if there is an emergency. This aims to reduce the frequency of required home visits from nurses, saving valuable time and resources. Personal assistants can help people set reminders for medication, make phone calls and control other devices in their homes. Individuals who struggle with mobility and memory can be helped immensely by this type of technology. Motion sensors and personal assistants can also track patients’ wellbeing, for example by determining if someone is not present or moving in a home, or by tracking sleeping patterns or any other changes in behaviour which could be indicative of a health issue.
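To make this concrete, here is a minimal sketch of how an inactivity alert of this kind might be built. It is hypothetical: the six-hour threshold, the sensor reading and the alert function are placeholders for illustration, not taken from any real product.

```python
import random
import time
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(hours=6)  # illustrative: how long without movement before alerting
CHECK_INTERVAL = 60                    # seconds between sensor polls

def read_motion_sensor() -> bool:
    """Placeholder for a reading from a real motion sensor or smart-home hub."""
    return random.random() < 0.3

def alert_caregiver(message: str) -> None:
    """Placeholder for a real alert channel such as an SMS or app notification."""
    print(f"[ALERT {datetime.now():%H:%M}] {message}")

def monitor() -> None:
    last_movement = datetime.now()
    while True:
        if read_motion_sensor():
            last_movement = datetime.now()
        elif datetime.now() - last_movement > INACTIVITY_LIMIT:
            alert_caregiver("No movement detected for over six hours - please check in.")
            last_movement = datetime.now()  # reset so one incident does not alert repeatedly
        time.sleep(CHECK_INTERVAL)

if __name__ == "__main__":
    monitor()
```

A real deployment would, of course, also have to handle consent, privacy and false alarms, which is exactly the trade-off discussed next.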
Most people can see the obvious issue with this approach: it is a complete invasion of your privacy. However, studies show that vulnerable people still support the idea, as it allows them to be more independent whilst getting the care they need, and trials are currently running in the UK for this to be implemented on a larger scale. In addition, there is often a high initial cost of installing enough motion sensors and personal assistants for it to be beneficial and reduce the need for frequent nurse visits; however, this upfront cost is ultimately more cost-effective because it allows nurses to reach more patients.
In early 2023, the NHS is planning to move to a more user-friendly way of transferring data, enabling easier access to patient records and more informed choices about treatment options (such as giving the length of wait for treatment). Currently, doctors regularly use over 10 different systems for patient care and treatment, delaying the sharing of test results. This change will make it easier to diagnose patients and proceed with the needed treatment.
In conclusion, there are multiple ways the NHS is attempting to revolutionise social care using technology, including the use of motion sensors and personal assistants. However, they will have to navigate privacy and security in the process, considering the needs of each individual separately and determining the best way to continue.
By now you may have heard of OpenAI’s Chat GPT Chat Bot, currently the hottest technology on the American market. If you’ve tried it already, it’s no wonder Microsoft have planned an additional ten-billion-dollar investment into the groundbreaking tech. But what exactly is it? And should we be worried about science fiction catching up to us too fast, or is it just another technology toy?
Short for ‘Generative Pre-Trained Transformer’, Chat GPT is essentially a program, or ‘AI assistant’, that has studied as much as the internet can provide and can give answers to almost any written request (with the exception of opinionated pieces, whereby it gives a response similar to the photo on the right). The newest version has been trained on BILLIONS of text samples from before 2021, and is now able to contextualise this data, meaning it can alter the facts and information it gives based on your input.
The more frightening thing about Chat GPT is that, more often than not… it tends to be extremely realistic, passing the famous ‘Turing Test’ (a test devised by the computer scientist Alan Turing to judge how convincingly a machine can respond to human questions), and achieving numerous accolades such as qualifying for a medical degree and passing the University of Pennsylvania’s law test. Anything and everything you could possibly think of can be answered by Chat GPT (with a few obvious exceptions, such as opinions and questions like ‘What am I wearing today?’). To do this, the Chat Bot draws on a model of roughly 175 billion parameters to answer questions as accurately and usefully as possible.
More interestingly, on top of academic queries, Chat GPT can also… give advice! Of course, it is important to remember that this is still just a machine, and the ‘advice’ and support is based on swaths of self-help text. However, if you do just want someone or something to listen to you as you vent, this can be a surprisingly therapeutic solution:
On top of this, there are numerous other practical applications of OpenAI’s newest tech: the Chat Bot can also help to write code (though this is, surprisingly, one of its weaker areas, as the code it produces is often buggy and untested), offer recipes and even write letter templates! (Though of course this is something an NLCS girl would surely never encounter.)
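For anyone curious how developers plug the same model into their own programs, here is a minimal sketch that asks it to write a small piece of code through OpenAI’s Python library. The model name, prompt and key handling are illustrative assumptions rather than anything described in this article.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # replace with your own key from the OpenAI website

# Ask the model to write a small piece of code, just as you would in the chat window.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a short Python function that checks whether a word is a palindrome."}
    ],
)

# The reply comes back as text - it still needs a human to read and test it before use.
print(response["choices"][0]["message"]["content"])
```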
Overall, this tech has innumerable applications that can aid us in our daily lives, both academically and in helping to better us as human beings. However, who’s to say where this will stop? When will it become the norm that AI is coded and developed by other AI? Is it okay that we are already able to rely on technology to comfort us and provide emotional support – something that is inherently human?
Only time will tell.
ChatGPT is a state-of-the-art language model developed by OpenAI, which has been gaining a lot of attention in the artificial intelligence (AI) community. The model's ability to understand and generate human language with a high degree of accuracy has made it a popular choice for a wide range of natural language processing (NLP) tasks. In this article, we will take a closer look at ChatGPT, its capabilities, and why it has become so popular.
OpenAI conducts research in a wide range of areas related to AI, including deep learning, computer vision, natural language processing, and reinforcement learning. OpenAI also develops and releases open-source software and pre-trained models, such as GPT-3, that can be used by researchers, developers, and companies to build advanced AI applications.
One of OpenAI's main focuses is on developing safe and beneficial AI, which includes researching and developing methods to ensure that AI systems are transparent, robust, and aligned with human values. The organization also works on projects related to AI governance and policy, and actively engages with policymakers and industry leaders to promote the responsible use of AI.
ChatGPT is based on the transformer architecture, which is a neural network architecture that was introduced in 2017. The transformer architecture allows ChatGPT to process and understand large amounts of text data, making it more accurate and versatile in its language understanding.
One of the main reasons for ChatGPT's popularity is its ability to generate human-like text, which makes it ideal for tasks such as content creation, text summarization, and language translation. The model can also answer questions, provide research on a topic, and assist with proofreading, grammar checking, and other language-related tasks.
Another reason for ChatGPT's popularity is its ability to generate code snippets and assist with programming-related tasks such as code completion, debugging, and explaining code concepts. This makes it a valuable tool for software developers. ChatGPT is open-source, which means that researchers, developers, and companies can use it to build advanced AI applications. Also, OpenAI is constantly working to improve ChatGPT's capabilities, which means that its accuracy and versatility will continue to increase.
There are several areas where my future development could focus on:
• Improving my ability to understand and generate more nuanced and complex language, such as idiomatic expressions and sarcasm.
• Enhancing my ability to understand and respond to context, which will make my responses more accurate and relevant.
• Developing a deeper understanding of different cultures and languages, which will make me more versatile in different regions.
• Enhancing my ability to handle more complex tasks, such as writing articles, composing speeches, or creating poetry.
• Incorporating new technologies such as reinforcement learning to improve my learning capabilities.
In conclusion, ChatGPT is a state-of-the-art language model that has become incredibly popular due to its ability to understand and generate human language with a high degree of accuracy. Its versatility and ability to handle a wide range of natural language tasks make it a valuable tool for a variety of industries. Furthermore, the fact that it is open-source and constantly being improved makes it a promising technology for the future, with the ultimate goal of making it easier for people to interact with technology and allowing for the development of more advanced AI applications.
The article you have just read was, in fact, written by ChatGPT. Reflecting on this, how do you feel? Could you tell? Tell us your thoughts!
As many of you might already know, AI-generated art is becoming increasingly popular and is starting to earn its place among respected art in all styles. But can we really call this ‘art’? What does art mean, and is its purpose primarily for the viewers or the creators? I will let you answer some of these questions yourself, but before we do that it is crucial to consider the context behind this controversial advancement.
How do they work?
AI generators are trained on huge datasets scraped from the internet, so anything they create is influenced by artwork that has been documented digitally. The program stores attributes associated with specific artists and styles, such as the ‘roundness’ or ‘redness’ of an apple. These attributes are then weighted and linked to each other, such that when given a prompt, the AI can retrieve the relative weights of the keywords in that prompt and seek to create something from these starting points. It is also interesting to note that each generation with most AI generators involves a random seed, so in its own way, you could never prompt the same piece of art twice.
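To make the ‘random seed’ point concrete, here is a minimal sketch using the open-source diffusers library with a publicly available Stable Diffusion model. The prompt, model name and seed are illustrative assumptions; changing the seed produces a different image from the very same prompt.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image model (the weights are downloaded on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "an oil painting of a round, red apple on a wooden table"

# Fixing the seed makes the result reproducible; a different seed gives a different image.
generator = torch.Generator("cuda").manual_seed(42)
image = pipe(prompt, generator=generator).images[0]
image.save("apple.png")
```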
A salient issue with this is copyright – many believe that artists should have the agency to decide whether an AI is allowed to ‘learn’ from their art, or perhaps imitate it and reuse their original concepts. However, you could still argue that every artist is influenced by others, and an AI only does this in a more computerised way.
Will they replace artists?
In short: no, I don’t believe so. I think it is important to consider their effect on society and to look at previous integrations of innovative technology into the modern world. For example, although a piece of AI-generated art won a painting competition in Colorado last year against handmade pieces, its origin was obvious to the trained eye and it brought something different to the table. Looking back to the invention of the camera, this certainly didn’t render portrait artists obsolete, but rather assisted their work. Now, artists can use AI-generated images as prompts or inspiration for their pieces, easily looking at things from a different perspective. I believe they can also be useful in areas where we haven’t seen visual prompts before, such as alongside newspaper articles or to aid the understanding of abstract concepts.
Where do the dangers lie?
There are certainly dangers to be addressed with the use of these generators. Firstly, if not regulated correctly, passing artwork off as created by a human when it wasn’t (and vice versa) could be very dangerous, especially on social media platforms, where people are likely to glance at these images for a few seconds and not question their origins. So, in the future, I think it will be very important that we are careful to differentiate where different forms of media originate and how we view them.
Sources:
https://www.wired.com/story/picture-limitless-creativity-ai-image-generators/
https://www.smithsonianmag.com/smart-news/artificial-intelligence-art-wins-colorado-state-fair-180980703/
Robotics in surgery was hypothesized as far back as 1967, and robots began to be used in the late 1980s with Robodoc, an orthopaedic image-guided system used in prosthetic hip replacement. The most well-known robotic system in surgery, the da Vinci surgical system, was created in the year 2000. It is a system used by a surgeon to perform robot-assisted minimally invasive surgery. During a procedure with the da Vinci system, the robotic arms are placed strategically by the surgeon and robotic team. The surgeon sits at a special console near the patient and is in 100% control of the robot. A 3D camera and very small surgical instruments are placed inside the patient through tiny incisions. The da Vinci system translates the surgeon’s hand movements at the console in real time. This system has been used in over 8.5 million procedures worldwide. In comparison to traditional techniques, using this system allows for a much higher level of precision, as the surgeons have greater dexterity, vision and control. This also reduces patients’ time in hospital and their risk of infection.
This system, which has been used for over 20 years now, only assists the surgeon; it is not autonomous. However, researchers at Johns Hopkins University have developed the Smart Tissue Autonomous Robot (STAR), a self-guiding surgical robot. It was able to perform laparoscopic surgery (a surgical procedure that allows the surgeon to access the abdomen and pelvis through tiny incisions) on the soft tissue of a pig without the guidance of a human. Pigs were chosen for prototype development and demonstrations as their skin is genetically closest to human skin. STAR was able to reconnect two ends of an intestine, and did so with better results than humans performing the same procedure. This procedure requires a high level of repetitive motion and precision, and STAR was able to execute it well. High accuracy and consistency are needed when a surgeon connects two ends of an intestine, as even the slightest hand tremor can result in a leak and infection. The researchers first created a model in 2016 that repaired a pig’s intestines accurately but required a large incision
and a lot of guidance from humans. Soft tissue surgery is particularly difficult as it can be very unpredictable, but STAR is able to adjust the surgical plan in real time, just like a human surgeon. STAR is the first robotic system to plan, adapt and execute a surgical plan in soft tissue with minimal human intervention. It uses a structured-light-based three-dimensional endoscope and a machine-learning-based tracking algorithm. It is still undergoing pre-clinical studies and there is still much to develop. The team is working on reducing the size of the endoscope and improving the robot’s fail-safe operation, so that the operating surgeon can make adjustments if needed. They plan to perform the first in-human studies within the next five years.
In this way, huge advancements have been made in the surgical field. In general, there are many positives to robotic surgery. It is almost always a minimally invasive procedure, meaning that the surgery involves smaller incisions, which reduces the likelihood of pain or injury to surrounding ligaments or tissues and also keeps blood loss low. The three-dimensional view gives surgeons a better view of blood vessels, so they can ensure that the patient’s blood levels remain stable. Smaller incisions also shorten the overall procedure time and leave minimal scarring. There are fewer complications with robotic surgery, as the minimally invasive approach significantly reduces the risk of infection. There are many benefits to robotic surgery, and it has a 95% success rate overall. However, there are ethical issues surrounding robotic surgery, as there is potential for mechanical failure or malfunction. Robotic surgery is also often criticized for not providing surgeons with enough haptic feedback, which is feedback on the feeling of touch. This is important so that the surgeon can adjust the force of the instruments during the surgery. Robotic surgery has already made huge advancements in the past 50 years and will continue to do so, perhaps even leading to entirely autonomous robotic surgeries on humans.