HUMAN INTELLIGENCE
SO WHO’S DRIVING?
By Hunt Lyman
The trolley problem has been a staple of philosophy since the 1960s.
Imagine a trolley speeding out of control, about to hit five innocent people. You stand next to a lever that, if pulled, will divert the trolley to another track where it will kill one person. Do you pull the lever?
Some argue it’s right to sacrifice one to save five, while others believe no one should act in a way that results in murder. My computer science graduate students consider a modern twist: how would this dilemma shape your programming of self-driving car software?
This ethical dilemma has become newly relevant due to technological advances. I tend toward a utilitarian framework: spare five by sacrificing one. But I’m far less willing to do so if that means programming my Tesla to save five lives by driving me into a concrete barrier.
Using self-driving software makes me acutely aware that I am allowing the car to make decisions—choices, really—for which I am still ultimately responsible. Autonomous driving is convenient, but it should not give drivers the illusion that they are free from accountability.
Self-driving vehicles illustrate how emerging technologies create situations that traditional ethical frameworks and legal standards struggle to keep pace with.
Examples from the past two decades abound: How do people’s internal standards of civility change when they communicate anonymously online? What should schools do when students create deepfake nudes of classmates and post them online, even though the act happens off school grounds and violates no specific rule? Is paying a ransomware demand to protect essential city services morally acceptable? Should we allow underage children to violate an unenforced policy and sign up for Instagram when all their friends are doing it? Are social media companies responsible for the misinformation and offensive speech posted on their platforms?
And how should educational institutions respond to Artificial Intelligence companies offering to write papers, conduct research, or solve problems for students, free of charge?
Most people try to deal with these questions by relating them to more familiar, analogous ethical situations: Would you make that comment if everyone you know could read it? Would you pay a terrorist to free a hostage? Would you ask a friend to write that paper for you?
These comparisons can be helpful, but they miss the deeper issue. Technology is expanding into areas that were once uniquely human. In thousands of years, we have not developed a universally accepted moral framework for living virtuously and treating others well.
Now, we are forced to make far-reaching moral decisions involving technology that affect how we spread information, choose leaders, determine truth, conduct warfare, and make other critical choices. Meanwhile, technology is developing on a very different timeline from the one governing laws, leaving us, at best, in loosely charted territory.
I’m not proposing we abandon traditional ethics. Quite the opposite. We need to think even more deeply about the fundamental questions of how to live well and treat others fairly—questions that have engaged thinkers from Socrates and Plato to the American founders and beyond.
Our world may be changing, but the core moral questions are not. We can never relinquish our ethical responsibilities to machines that, while miraculous in many ways, lack the ability to understand the complexities of human morality.
Hunt Lyman is the academic dean at The Hill School in Middleburg.