
From Nietzsche to Nissan

the Oxford Scientist

dividuals are complicated and there needs to be much more research at lots of levels aimed at understanding how to limit excess weight gain as individuals age and reduce BMI in the overweight”.


Despite the stark campaign slogan, the study also failed to prove a positive causal relationship between high BMI and cancer. These criticisms were highlighted in an open letter from nutritionist Laura Thomas to the chief executive of CRUK, which called for the reconsideration of the campaign to ‘prioritise wellbeing over weight’. The letter was supported by fellows from both the University of Cambridge and King’s College London.

With such controversies surrounding collaboration, how can it continue to aid scientific progression in an unbiased manner? Maybe the answer lies in transparency, with a focus on making sure a level of honesty is maintained throughout a project’s development, especially when research may be influential in setting public policy.

Megan Boreham is a Biomedical Sciences undergraduate at St Hilda’s College.

The ethics of self-driving cars

Rapid technological development has made self-driving cars a reality. This advancement raises questions about how these cars should make ethical decisions in place of human drivers. While technology can replace, and will undoubtedly supersede, humans in actual driving ability, driving a car involves moral decisions. These choices would have to be programmed: for instance, whether to collide with another vehicle or swerve towards pedestrians in a crash. Such a circumstance would likely be exceedingly rare, but this and similar situations still need to be considered as we move towards a future without a human in both the literal and metaphorical driving seat.

A study conducted by the Massachusetts Institute of Technology (MIT) was one of the first large-scale investigations of public attitudes towards the ethical quandaries posed by self-driving cars. In the “Moral Machine” experiment, researchers created a game based on the famous “trolley problem”. Participants had to choose the most favourable result from a series of rather gloomy outcomes in a hypothetical car crash. The announcement of the experiment was a viral hit and led to millions of people from over 200 countries contributing. This made the experiment one of the largest studies ever conducted on moral preferences in populations.

The “Moral Machine” experiment investigated nine different criteria, including whether a self-driving car should prioritise passengers over pedestrians, people over pets, and youth over the elderly. Results were gathered by asking users questions such as: should the car continue forwards and hit a child, or swerve and hit an old lady? Here, people had to consider both the ages of the potential victims and the moral implications of changing the car’s trajectory.

After four years, key results of the study were published in the journal Nature, focusing on differences in moral views between countries. Countries in close geographical proximity, and those with a similar culture and economy, were likely to have closely aligned views. Indeed, three dominant clusters of “moral alignment” were seen: the West, the East, and the South.

One prominent example of this geographical divide was the spread of countries that were more likely to favour saving the young over the old. France was the country most skewed towards sparing youth, with many European countries such as Sweden and Germany, as well as the USA and Canada, sitting above the global average preference for sparing younger lives. On the other hand, Japan, China, and Taiwan had a greater preference for saving older lives. A similar split was seen in different countries’ propensities towards saving the maximum number of people. More “individualistic” cultures (typically seen in Western countries) were more likely to prefer saving as many lives as possible.

Interestingly, when looking at people's preference for saving pedestrians over passengers whilst ignoring other factors such as age, these clusters seemed to break down. Japanese and Greek people had, on average, a much stronger tendency to avoid hurting pedestrians at the cost of the car's passengers than the Chinese and French participants did. These findings could have major implications for manufacturers of self-driving cars.

The research also suggested that people in less economically developed countries were more tolerant of pedestrians crossing improperly (“jaywalking”). Economic differences such as these were perhaps the most notable, with participants in countries with higher levels of economic inequality showing greater gaps in their treatment of individuals based on their socio-economic status. This led Edmond Awad, a researcher involved in the study, to point out that policy should not necessarily just reflect public opinion: “It seems concerning that people found it okay to a significant degree to spare higher status over lower status. It's important to say, ‘Hey, we could quantify that’ instead of saying, ‘Oh, maybe we should use that.’”

Critics of the study have pointed out that the hypothetical situations posed often seemed contrived. Having a binary choice to make in a car crash, as was the case in the experiment, is exceedingly unlikely, and indeed the number of crashes happening at all would be expected to drop with increased use of self-driving cars. Still, regardless of the direct usefulness of the data, it seems clear that the “Moral Machine” experiment has kick-started a dialogue surrounding ethics which is necessary not only with regard to self-driving cars but within the field of artificial intelligence in general. When technology meets philosophy and public policy, answers are unlikely to be obvious, even though our lives and consciences will depend on them.

Asher Winter is a Chemistry undergraduate at St Hugh’s College.
