The Oxford Scientist: Breakthrough (#7)


Breakthrough

[Illustrations by Aleisha Durmaz, Cass Baumberg, and Ishbel Jamieson]

“Alexa, why are we so obsessed with AI?”

Artificial intelligence (AI) seems to have impacted all aspects of our lives, from diagnosing our diseases to giving us Spotify song suggestions. And with the boom in the capabilities of AI has come a boom in our discussion of it. A better phrase to use, though, is machine learning: feeding a machine data and getting it to learn from this information in order to make future decisions. For example, you can train an algorithm to diagnose brain tumours by feeding it lots of patients’ brain scan images; it can then use this information to spot early signs of tumours in future scans. It’s incredibly powerful stuff.

However, AI has some big problems. For one, it is only as good as the information it is fed. And as this information comes from an imperfect world with many social biases, the outputs of machine learning algorithms are prone to bias too. For example, a facial recognition algorithm fed images of white males will be very good at recognising the faces of white males but won’t be much good for anyone else. In theory, this is relatively easy to address by ensuring the use of large and diverse data sets. But the issue goes much deeper: any information that we take from society is going to be biased to some degree.

A notorious example of this was the ‘Correctional Offender Management Profiling for Alternative Sanctions’ (COMPAS) algorithm, used to make the sentencing of criminals in the US more efficient by predicting how likely they were to reoffend. The algorithm was racist: it predicted that black people were more likely to reoffend than white people because it was fed data from a racist criminal justice system. This is a much tougher problem to address, but COMPAS did serve as an important wake-up call in the field. It demonstrated how vital it is that algorithms undergo rigorous checks before they are released for use in the world.
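To make the “only as good as the data it is fed” point concrete, here is a minimal, hypothetical sketch in Python. Everything in it is invented for illustration (the two groups, the proportions, and the scikit-learn model are my own choices, nothing to do with COMPAS or any real facial recognition system): a classifier trained on data dominated by one group performs well for that group and little better than chance for the other.

```python
# Sketch: a model trained on skewed data inherits that skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, group):
    """Two groups whose labels depend on *different* features."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0) if group == "A" else (X[:, 1] > 0)
    return X, y.astype(int)

# Training set: 95% group A, only 5% group B -- like a face data set
# dominated by one demographic.
XA, yA = make_group(1900, "A")
XB, yB = make_group(100, "B")
model = LogisticRegression().fit(np.vstack([XA, XB]),
                                 np.hstack([yA, yB]))

# Balanced test sets: accurate for A, close to a coin flip for B.
for group in ("A", "B"):
    Xt, yt = make_group(1000, group)
    print(group, round(model.score(Xt, yt), 2))
# Typical output: A ~0.97, B ~0.55
```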
A whole other issue with AI is the question of how we programme it to make moral decisions. The most widely discussed example of this is driverless cars. Imagine a trolley problem scenario where the car is going to hit either a grandma or a baby and has to decide which. Who should the car save? What if there were an added option of destroying the vehicle and killing the passengers inside instead? In order to programme cars to make these decisions, we need to come up with a moral code to programme them with. The best way to do this would seem to be to gather as many people’s opinions on dilemmas such as these as possible and generate a consensus view. A survey called the ‘Moral Machine’, designed by researchers at MIT in 2014, asked millions of people across the world what they would do in various trolley-problem-like scenarios in an attempt to do exactly this. Unhelpfully, but predictably, people’s opinions varied across different countries. For example, participants from China and Japan were more likely to save the grandma, whereas in France and the UK people tended to save the baby.
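How might millions of individual answers be turned into a “consensus view”? The crudest approach is a per-country majority vote, sketched below with invented numbers (the real Moral Machine analysis was far more sophisticated than this):

```python
# Hypothetical sketch: tallying survey answers into a per-country
# consensus. The responses below are made up for illustration,
# not real Moral Machine data.
from collections import Counter

responses = [
    ("Japan", "grandma"), ("Japan", "grandma"), ("Japan", "baby"),
    ("UK", "baby"), ("UK", "baby"), ("UK", "grandma"),
]

by_country = {}
for country, choice in responses:
    by_country.setdefault(country, Counter())[choice] += 1

for country, votes in by_country.items():
    winner, count = votes.most_common(1)[0]
    share = count / sum(votes.values())
    print(f"{country}: save the {winner} ({share:.0%} of respondents)")
# e.g. Japan: save the grandma (67% of respondents)
```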
This only creates more questions. Should the moral code for driverless cars therefore vary from country to country? And what if someone was visiting the UK from Japan? Do they then have to adhere to the so-called ‘UK moral code’?

The Moral Machine was only the start of this discussion. There are many ethical questions surrounding AI, and it seems as though as soon as you answer one, another arises. But we are working towards finding answers. Plans are in place to open a new institute for ethics in AI here at Oxford University, to promote ongoing debate of AI ethics and to work towards developing an ethical framework within which the technology could operate. A key priority for the institute is to involve people from a wide range of disciplines in the conversation. The future of AI is about so much more than the algorithms and the computer science. We need to talk about the ethics and the policy, and about how AI reflects and affects society. We definitely can’t just leave it to scientists.

Shakira Mahadeva is a Biochemistry undergraduate at The Queen’s College.
