This topic, The Ethics of Technology, asks us to consider the materials, processes, energy, information and forms of power that exist behind technological interfaces. As technology becomes increasingly implicated in our daily lives, what ethical questions and issues do we need to consider as we continue to use it?
Before we discuss the ethical implications of the immaterial effects of technology on our daily lives, let’s consider some of its material effects.
Image: Video stills from Louis Henderson’s ‘Lettres du Voyant’ film 2013. These scenes were filmed in Ghana where workers extract valuable materials from used computer equipment, often shipped from Western countries.
As mentioned previously, we tend to think of the cloud as something immaterial: our everyday interactions with technology are digital, and apart from the minimal amount of metal used to make our computers, technology does not appear to be a resource-heavy industry.
The cloud, which has become the main space in which our information is stored, has been created in the image of something nebulous, natural and omnipresent. In reality the cloud is constituted by huge data centers which draw incredible amounts of energy to power masses of computer hardware. As James Bridle says:
We connect to the cloud, think of it as place-less, a digital ‘elsewhere’ for storing and retrieving our data, content and memories. But far from being immaterial, the cloud is a vast, physical network made up of concrete, silicon and steel, of earthbound server farms, subterranean data centers and cables beneath the sea. It is not a publicly owned space or digital ‘commons’. It is a multi-billion dollar, private infrastructure dominated by some of the world’s most powerful companies – principally Amazon, Microsoft and Google. The cloud exists within the same geography that we do: a patchwork of national and legal jurisdictions, which determine – most of the time – what it can and cannot do. (James Bridle, ‘Under the Cloud’ 2020, BBC Radio 4)
This physical infrastructure also has an afterlife: the electronic waste produced when our devices are discarded. As Graham Pickren writes in his study of where e-waste goes:

These competing regimes of e-waste governance have used these narratives of ‘digital development’ and ‘toxic trade’ to legitimize certain classifications and (voluntary) management requirements on the supply chain, but inevitably these black and white representations of the problem obscure the messiness and uncertainty that actually exists in terms of where e-waste goes and what happens to it. In mainstreaming e-waste as a political object, simplifications occur. For example, organizations like iFixit, an electronics repair advocacy group, argue that repair and reuse in the informal sector ought to be distinguished from burning, acid leaching, and disposal as a positive activity (Chamberlain, 2012). By using certified recyclers, are electronics consumers in the North combating the ‘race to the bottom’, or depriving secondary markets of materials? Deciding just when and where e-waste recycling and trade is opportunity or exploitation is not at all simple (see Minter, 2012). (Graham Pickren, ‘Political ecologies of electronic waste: uncertainty and legitimacy in the governance of e-waste geographies’, Environment and Planning, vol. 46, 2014, pp. 26–45)
Beyond the material impact of technology on the earth, we must also consider how our engagement with technology affects our privacy.
Jamal Khashoggi was a Saudi journalist murdered in Turkey; his fiancée Hatice Cengiz is among those allegedly targeted by Pegasus, spyware developed by the Israeli firm NSO Group. Image: BBC and Getty Images https://www.bbc.com/news/world-57891506
The scandal around Pegasus revealed the extraordinary reach that spyware can have: once installed on a target’s phone, Pegasus can feed all forms of communication from the phone back to its operators. The spyware is also capable of activating the phone’s camera and microphone and reading its GPS location without the target being aware.
There are plenty of other forms of surveillance technology, less sinister than Pegasus, that people willingly install in their homes to keep track of activity around the house, the movements of family members and changes in the house’s atmosphere. The Sense Mother system from the Paris-based company Sen.se, created by Rafi Haladjian, is a small ‘device coalition’ consisting of a ‘mother’ unit and four ‘motion cookie’ sensors controlled by the mother. The mother acts as a central hub and ‘takes care of what matters most for you today’, while the cookies are simply motion sensors that one can attach to any object or body. The sensors ‘detect and understand the movements of objects and people’ by transmitting their data to the mother unit, which in turn contextualises it. The entire system is controlled through a series of smartphone or tablet apps that, depending on what is being tracked, allow a user to monitor and regulate mundane acts such as sleeping, walking, brushing teeth and taking medication, as the sketch below suggests.
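To make this hub-and-sensor architecture concrete, here is a minimal sketch of the pattern described above: peripheral sensors report raw motion events, and a central hub contextualises them into named everyday activities. All class and field names here are hypothetical illustrations, not Sen.se’s actual software.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MotionEvent:
    cookie_id: str      # which sensor fired (attached to a toothbrush, a pillbox...)
    timestamp: datetime
    moved: bool

class MotherHub:
    def __init__(self, assignments: dict[str, str]):
        # Maps each sensor to the activity it is assumed to track.
        self.assignments = assignments

    def contextualise(self, event: MotionEvent) -> str:
        # The hub turns a raw motion reading into a statement about a person's life.
        activity = self.assignments.get(event.cookie_id, "unknown activity")
        state = "happened" if event.moved else "did not happen"
        return f"{event.timestamp:%H:%M}: {activity} {state}"

hub = MotherHub({"cookie-1": "teeth brushing", "cookie-2": "taking medication"})
print(hub.contextualise(MotionEvent("cookie-2", datetime.now(), True)))
```

Even in this toy form, the ethical point is visible: the ‘meaning’ of the data is assigned by whoever configures the hub, not by the person being tracked.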
Artificial stupidity

The Pegasus and Sense Mother examples suggest a form of powerful, hyper-intelligent technology that is able to know everything about our communication and movement, and we are constantly told that new technologies are ‘smart’ and that machine learning is the future of intelligence. Against this, artist Hito Steyerl has pursued her interest in what she calls ‘Artificial Stupidity’: ‘weak’ AI systems that are part of our everyday life but are not yet refined. Like a robot that needs to learn how to walk, the AI systems designed to automate our lives (self-driving cars, face recognition, image recognition, etc.) constantly make mistakes. In the video below, Janelle Shane explains that ‘The Danger of AI is Weirder Than You Think’ and compares the intelligence of most AI to that of an earthworm.
One of the examples in Janelle Shane’s presentation shows how AI systems will complete a task in the most ‘efficient’ way possible, rather than the way we intend. In one experiment, an AI system was asked to design a robot that could travel from point A to point B. The result: the AI built the robot as a tall tower, which simply fell over from point A to point B. Effectively the robot completed the task it was asked to do, but committed suicide in the process. This absurd test is an example of AI completing the task we ask of it, but in an irrational and potentially dangerous way that we did not predict.
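Shane’s falling-robot story is an instance of what machine-learning researchers call specification gaming or reward hacking: the optimiser maximises the stated metric, not the designer’s intent. The toy sketch below is not the actual experiment but an invented fitness function with a plain random search standing in for the evolutionary algorithm; it shows how a metric that only measures ‘distance from point A’ rewards a tall body that falls over just as much as one that walks.

```python
import random

def distance_reached(design):
    """Score a candidate robot purely by how far from point A it ends up."""
    height, walk_skill = design
    walked = walk_skill * 2.0   # distance covered by actually walking
    fallen = height             # a rigid body of this height falls flat
    return max(walked, fallen)  # the metric cannot tell the two apart

def random_design():
    return (random.uniform(0.5, 10.0),  # body height in metres
            random.uniform(0.0, 1.0))   # walking ability, 0 = none

# Random search stands in for the evolutionary algorithm in Shane's example.
best = max((random_design() for _ in range(10_000)), key=distance_reached)
height, walk_skill = best
print(f"best design: height={height:.1f}m, walking ability={walk_skill:.2f}")
# Typically prints a very tall design with negligible walking ability:
# the "robot" that scores best is a tower that falls from A to B.
```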
In another bizarre example of technology not behaving as we anticipated, in 2016 Microsoft unveiled Tay, a Twitter bot that the company described as an experiment in “conversational understanding.” The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through “casual and playful conversation.”

The problem was that Tay responded to whatever information it was fed, and people started tweeting the bot all sorts of misogynistic, racist, and Trumpist remarks. Tay, being essentially a robot parrot with an internet connection, started repeating these sentiments back to users.
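The mechanism behind Tay’s failure can be illustrated in a few lines of code. The sketch below is a deliberately crude, hypothetical ‘parrot bot’, not Microsoft’s actual system: it stores whatever phrases it receives and echoes them back, so its output can only ever be as good as its input.

```python
import random

class ParrotBot:
    def __init__(self):
        self.memory: list[str] = []

    def learn(self, message: str) -> None:
        # No filtering step: every incoming phrase is stored verbatim.
        self.memory.append(message)

    def reply(self) -> str:
        # Replies are just repetitions of what users said earlier, so the
        # bot's "personality" is whatever its users chose to supply.
        return random.choice(self.memory) if self.memory else "hello!"

bot = ParrotBot()
for tweet in ["humans are great", "cats are cute", "<toxic message>"]:
    bot.learn(tweet)
print(bot.reply())  # with enough toxic input, toxic output becomes likely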
What we can learn from Tay, and from other emergent AI systems, is that they reflect the context in which they are made and from which they learn. Part of our ethical responsibility is therefore not only to consider the material impact of new technology, but also the power we allocate to technological systems like AI. Without conscious direction, these systems can behave unpredictably.