Sam Harris on Mindfulness in this (Dis)Connected Age

By Dylan Rodgers & Patricia Miller

Sam Harris, author of numerous bestselling books and host of the Waking Up podcast, is no stranger to deep or difficult topics. In fact, he built an advertiser-free podcasting model in order to discuss ideas openly, without fear of losing sponsors. Often focusing his energy on exploring consciousness, ethics, and artificial intelligence, he recently ventured into a new type of conversation with the launch of the Waking Up Course app, a series of guided meditations and lessons focused on mindfulness.

Appropriately, our conversation about technology-enhanced dialogue, ethics, and artificial intelligence has an undercurrent of mindfulness.

Innovation & Tech Today: You’ve built a career on having deep and difficult conversations, whether that be via your books or your podcast. Social media is another way to have a public conversation, but while it’s useful for getting your message out, it’s often a driver of hyperbole and outrage. What do you think a healthier social media would look like?

Sam Harris: Well, I think in virtually every case anonymity is a bad idea. I understand the need for it in certain cases, like with whistleblowers or dissidents whose lives would be threatened if it were known who they were, but generally speaking, I think anonymity is almost entirely a toxic influence on our public conversation. So the fact that on Twitter and YouTube you really don’t know who anyone is, I think, largely accounts for how vile the comments can be.

And I’ve noticed, for instance, if you select in Twitter that you just want to hear from people who have verified email addresses, that cuts down immensely on the craziness that I see coming back at me. So that’s one very easy lever to pull, and if all the platforms did it, I think it would improve the conversation immensely. Beyond that, we have a psychological problem with how people engage these media, and it is somewhat analogous to road rage, which is this paradoxical fact about the human mind.

In the case of road rage, if you put someone in a car and just have them interact with other people in cars, they are often plunged into states of mind and into patterns of reactivity that would never be available to them if they were walking around on the street.

The level of outrage, the kinds of things they’ll say and even do while in the imaginary safety of their car, becomes its own form of mental illness. And there’s something about being behind the keyboard on social media that selects for a similar level of overreaction, where people lose sight of the fact that they’re dealing with other human beings who are actually going to read the products of their typing. So it allows the inner maniac to come out in a way that simply wouldn’t come out in conversation with other people, certainly not in face-to-face conversations.

Harris frequently hosts live podcasts, book clubs, and tours which attract thousands of avid fans, often resulting in sold-out venues. His new app Waking Up utilizes guided meditation and a series of lessons to instill mindfulness habits into the everyday lives of its users.

So we have to learn to notice what this deeply unnatural circumstance is pulling out of us, and I think we need to remind ourselves that there’s a person on the other side of the thing on which we’re typing, and they’re about to read it when we hit send. Making that more vivid does change people’s behavior.

I&T Today: In other words, we need to be mindful of others. How do you think mindfulness connects to ethics?

SH: Well, traditionally in a Buddhist context, the connection is very direct. Empirically, I think it goes in both directions, which is to say that living ethically is viewed as a real support for one’s meditation practice. If you’re spending your days lying, cheating, stealing, and killing, you don’t have the kind of life that allows for equanimity or any kind of profound focus.

And so you just need to simplify your life and your relationships, and ethics is what does that. Treating people well naturally causes them to treat you well, and your practice in that context can draw further energy from your desire to just be a better person in relationships.

What’s the whole point of learning to meditate and be happier? Well, it’s not merely this selfish pursuit; it is a way of dealing with all of the limitations on your well-being that you discover in relationship to others. It’s an antidote to your pettiness and enviousness and just mediocrity in relationships, even in relationships with the people you claim to love.

So the desire to be a better person is intrinsically an ethical and pro-social one, but then there’s also the fact that the kinds of insights one has in meditation do feed back into one’s behavior in the world, in that you become more sensitive to your actual motives in situations and you can become less committed to motives that are antisocial.

You can erode your attachment to yourself through meditation. All of the conventional selfish motives that would get expressed in the world – your grandiosity, your egocentricity, your arrogance, your grasping at pleasure, and your fear of what’s unpleasant – all of that stuff can get relaxed as well, and that becomes a very durable basis for improving one’s ethics.

I&T Today: Speaking of mindfulness, humans often don’t take a mindful approach to disruptive technologies. We tend to release a technology, realize the issues it’s created, and then figure out how to clean up the mess later. A great example of that, I think, is how speed limits were invented 30 years after cars became popular. Similarly, AI is a huge leap forward, but it can spread worldwide instantly. Is it too powerful of a tool to popularize now and regulate later?

SH: Well, as it gets more and more powerful, I think the regulation has to be in step with the growth of its power. Superhuman intelligence can’t be regulated after the fact. We have to get the cadence of our conversation about the possible downsides of what we’re doing to more closely track the curve of innovation; “after the fact” just doesn’t work when your technology is more powerful than you are.

So at some point we have to get that right. We don’t yet have superhuman AI or even human AI, but what we do have are these narrow AI projects that get regulated or reacted to long after they’re launched, and that’s at some point going to have to flip.

I&T Today: But since AI is in its infancy, as you said, what do you think the AIs that are available today could do if they go unchecked?

SH: Well, I’m not worried about what’s available today spontaneously becoming more powerful than we understand or anticipated. I think the people who are doing the work will one day bring us general intelligence, and they’re almost certainly aware that they’re doing this work. It won’t be a matter of some narrow AI suddenly becoming general in its capacity…

I think what has to happen is that the people, the groups that are doing the work that could conceivably bring us general intelligence, they need to have some clear landmarks along their development pathway that would cause them to stop and reflect on where all this is going.

Like what capacity could a computer demonstrate in the lab at Google that would force, and should force, Google to call a meeting with everyone else who is doing this work, all their competitors, and in a moment of transparency say, “Okay, this is what we just achieved. We need to think about the implications and what can go wrong if we flip this switch.”

And I don’t know, frankly, if that’s top of mind for anyone to do at this point. I think what we more likely have is an arms race where people are just trying to advance the technology as quickly as possible before anyone else advances it more quickly than they do, and that’s obviously not incentivizing any kind of truly cautious path forward.

People at this point just don’t see any real need to worry about caution because we seem so far away from any of these potentially scary breakthroughs, but we really have no idea how long it will take us to make breakthroughs that fundamentally change the game.

I&T Today: People often conjure up doomsday scenarios when thinking about AI, but it’s already doing some good. What are some of the most important challenges you think we should be using AI to solve?

SH: Well, I think there are some isolated cases which have been much talked about and celebrated, and I think we really do want AI to solve these problems for us. Self-driving cars are perhaps the most obvious example, and that’s just because people are bad at driving cars.

We’ve been bad ever since we started driving cars, and it looks like we’re going to be bad forever. The moment robots are better than we are, we should be letting the robots drive, and that’s just because it should be intolerable to us that, year after year in the United States, tens of thousands of people are dying despite our best efforts not to kill one another while driving.

There are many cases of that sort of thing. Look at the prevalence of medical errors: people get the wrong drugs in hospitals because of doctor or nursing errors. Any way we can use automation or AI to prevent the predictable errors of inattentive apes like ourselves, that’s just something we should do, and we shouldn’t be sentimental about replacing that human labor with automation, because real lives hang in the balance. Something like a hundred thousand people die every year from hospital errors. Any way in which AI could solve for that is something we want. ■

For the full interview, visit innotechtoday.com/samharris-interview.