Interview: David Chalmers on Consciousness
By Rishi Chhapolia
David Chalmers is a Professor of Philosophy at NYU. As a philosopher of mind, his research focuses on understanding human consciousness. His work has appeared in The New York Times. This conversation focuses on Chalmers’ ideas from his book The Conscious Mind and his paper, “The Virtual and the Real.” The interview has been transcribed and edited for brevity and clarity by Rishi Chhapolia.
Gadfly: How close are we to understanding why consciousness exists? Would an answer help us understand the ever-elusive “meaning” of human life?
David Chalmers: This is one of the great unanswered questions at the moment: why does consciousness exist at all? That’s maybe the very core of the “hard problem of consciousness.” Can we explain why consciousness exists fully in terms of physical processes? There are certainly correlations between brain processes and consciousness, and we’re making a lot of progress at narrowing down which processes in the brain are more closely correlated with consciousness, but we still have a long way to go. All that, though, is really at the level of correlations, and there’s really nothing here that would explain why consciousness exists wholly in terms of the brain.
My favored approach is to assume that consciousness exists, and then come up with principles that connect it to everything else in the world. It may be that the best kind of theory of consciousness will presuppose that consciousness exists, and give us the laws that govern it in the same way that our theories of space and time presuppose that space and time exist. That’s at least where things stand right now. But, of course, it’s still very early days, and you’ve got to stay updated on what’s coming next.
On the connection to “meaning,” I’m inclined to think that consciousness is somehow what gives our life meaning. If we were not conscious at all, there’s a sense in which our lives would lack meaning. As conscious beings, we experience things as ‘good’ or ‘bad’, or broadly as ‘meaningful’. Mainly I’ve been thinking about a related issue, which is the connection between consciousness and value. I’m inclined to think that a system has to be conscious in order for anything to have value for it. Only conscious beings, for example, figure in moral calculations. It’s a slightly different question from meaning, but of course, value and meaning are very closely connected. And I’m sympathetic to the view that consciousness is the ground of meaning. Part of the project of leading a meaningful life will involve having the right kind of conscious life.
It seems that human evolution has significantly shaped how we perceive, and interact with, the world. How do you think human evolution has affected the development of consciousness? If there exists an objective external reality outside of the mind, is it possible for us to have become more accurate at perceiving it over time?
Consciousness has gotten far more sophisticated over time. There is an interesting question about whether there was any kind of consciousness present at the beginning of the evolutionary process. There are certainly views that there was some element of consciousness all along. The mainstream view is that consciousness started in a very simple form, and then gradually evolved from feeling to perceiving to thinking.
There’s a big evolutionary advantage to getting the structure of the external world right. We should expect that consciousness gradually becomes more accurate in those structural respects over time. Which is not to say that there can’t be quite a few, at least local, illusions and things that we misrepresent and get wrong. You would certainly think that the ability to accurately represent the external world is evolutionarily useful. Perception has very sophisticated models of the external world. And on the face of it, these models would be more useful if they’re accurate. But some people have argued against that — most recently, the psychologist Donald Hoffman wrote in The Case Against Reality that evolution doesn’t care about truth or an accurate representation of the world. We could be just as evolutionarily world; there is no reason for the external
world to be anything like the way we perceive it to be. I think that argument doesn’t work but it’s a very interesting one.
In the past you’ve argued in favor of the possibility of an artificial intelligence that is equal to human intelligence. In The Conscious Mind, you wrote that “there is a class of computations such that the implementation of any computation in that class is sufficient for the existence of conscious experience.” What is the threshold that a set of computations must reach to be classified as “consciousness”?
That’s one of the big unknowns. We don’t know how widespread consciousness is. I have some small element of sympathy with panpsychism, the view that consciousness is everywhere. In which case, it may be that even very trivial computations, like bits flipping, may have some degree of consciousness. That said, I’m very far from confident this view is true.
On a much more mainstream view, there’s going to be some degree of complex computation at which consciousness kicks in. I’m inclined to think that it’s going to be a level far simpler than, say, the level of complexity of the human brain. There is good reason to think that many animals much simpler than humans have some degree of consciousness. I’d like to think that mice are conscious, flies are conscious, worms are conscious. But worms are pretty simple. C. elegans has only around 300 neurons; if worms can be conscious, some kind of simple elements with the right wiring could be conscious too. But that said, figuring out what that threshold is lies at the core of developing a good theory of consciousness, which we don’t have yet.
Suppose we have an AI that someone claims is “conscious.” How could we know that this AI really possesses consciousness and isn’t simply providing a simulation of it?
How do I know that other humans are conscious? How do I know that you are conscious? You’re acting like you’re conscious, and you’re talking about consciousness; that helps. You’re similar to me in various ways, and I know I’m conscious, so I assume you’re conscious, but I certainly can’t prove it. With AI, we have to go on roughly similar things: behavior, similarity to us, and so on. If we have an AI that behaves in a very humanlike way, walks and talks like us, and has relevantly similar internal processes, then I’d at least start to strongly make the assumption of consciousness. What would be especially convincing is if the AI starts talking about consciousness itself. It says, “I have these conscious processes. It’s very weird and mysterious, but it feels like something from the inside.” Of course, none of this is proof.
As a practical matter, if we end up with AI systems which are humanlike in all these respects, at some point people are going to extend to them the assumption that they’re conscious, as we do with other people.
At what point should we start treating an AI equivalently to a human being?
An AI may have some degree of moral status well before it gets to the human level of moral status. Most of us make a distinction between a kind of “full” moral status that humans have, and a “partial” moral status that many non-human animals have. People think that you shouldn’t make chickens, for example, suffer unnecessarily. But very few people think that animals should have the kinds of rights that humans have. If it comes to saving a human life or saving a chicken’s life, most people will save the human. AIs may first get to the point where they have moral status akin to that of a chicken, and only later get to the point where they have rights akin to those of a human.
Exactly what the threshold is, however, is extremely hard to know. Getting to the level of being a deciding, feeling, language-using creature that is aware of its own existence, those are all elements of what’s involved in humans’ “full” moral status as rational beings. So once we have an AI that has those things at a human level, we can say it’s close enough.
Do you think that virtual reality can allow us to experience what it’s like to be other people or beings? For example, according to Thomas Nagel’s famous example, what it’s like to be a bat?
Some people have called virtual reality an “empathy machine,” partly because, at least to some extent, it can put you in other people’s shoes. So maybe I could be put in the shoes of an explorer or somebody from another culture. Or at least be getting the sensory input that they receive, and occupy their perspective. And that would give me some sense of what it’s like to be them. But of course, there’s much more to what it’s like to be a person or being than just this pure sensory perspective. There’s the way you interpret it, how you feel about it, how you think about it, all of which is the result of years and decades of being embedded in a culture, society and place. I don’t think we can expect virtual reality to really give you that, at least not immediately, and not easily.
Regarding whether VR could tell me what it’s like to be a bat: it could do some initial things towards that experience, like flying around and hanging upside down. But when it comes to things like the sonar signals that the bat sends out: for the bat, is it more like hearing, does it hear the beeps? Is it more like vision, a constructed image? Or something in between? I suspect that it’s something in between. Merely putting on a VR headset and hearing these chirps, we probably won’t experience that the way the bat does, and that’s because the bat brain is probably very different from the human brain. Now, humans do have the ability to echolocate, which could be used to help understand the bat’s experience of echolocation. But when the differences between individuals or species arise from differences in the brain, that is going to be much harder to replicate through virtual reality equipment alone.
That said, the brain is very plastic. People have found that you can put people in new environments, and different parts of the brain will adapt to processing things they didn’t process before. For example, blind people can become better at echolocation. Maybe this could happen if we get exposed enough to another perspective. But bats are so different from us that I’m doubtful of VR’s capability to give us much insight into what it’s like to be one.
You have claimed in the past that, in the long term, our experiences in virtual reality can be as valuable to us as our experiences in physical reality. Since an individual can choose between living in a virtual and non-virtual reality, do you think there is one we should spend more time in? Is there a normative way of thinking about this?
I argue that a sufficiently sophisticated virtual world, like Neal Stephenson’s “metaverse,” could be a perfectly valuable place to spend one’s time and, in principle, is not a second-class reality. You could live a perfectly meaningful life there. In any individual case, there are going to be pros and cons of doing so. For example, your existing friends and family are mostly in the non-virtual reality. If so, that will be a reason to favor the non-virtual reality. If they all go into a virtual reality and say, “Come join me in the metaverse,” that might be a reason to live in the virtual reality.
It may be that virtual reality enables you to do things that are simply impossible in non-virtual reality, whether it’s enhanced sensory capacities, the ability to fly, or exploring new worlds, and you may really value those things. I don’t think there’s a simple way to choose which world is best. Every individual has different preferences and values. Choosing to spend most of my life in VR could be a perfectly reasonable choice.
There may be some downsides in the short term: you will lack all sorts of capacities that are currently present in non-virtual reality. You can’t experience the bodily senses, food, drink, or sex in VR. But who’s to say? In 50 years, that might be different. Even once we have much more sensorily satisfying VR, though, people might have a preference for natural reality. But just as many of us choose to spend our lives in cities such as New York, which are not terribly “natural,” it’s perfectly reasonable to choose to spend your life in VR.
There has been a global cultural discourse about AI overtaking human intelligence, with systems that can learn things exponentially faster than a human being. Can we use these developments in AI to start deciphering the “hard problem of consciousness”?
Once we’ve got machines which are smarter than we are, they’ll be better than us at doing philosophy, so we’ll hand off the problem to them, and they’ll solve our philosophical problems for us. The “hard problem” might be too hard for human brains, but let’s put an what they come up with.
I do take the possibility of superintelligence very seriously. I wrote an article about 10 years ago on the idea of the singularity, or intelligence explosion. The idea is that once we have AIs that are more capable than the human mind, they will also be more capable at designing machines, so they will design machines better than we can, smarter than the ones we design, and hence machines smarter than themselves. And if you repeat that process, you get a rapid spiral towards superintelligence. I suspect that that will happen. At the end of a process like that, we might have AI systems that stand to Einstein as Einstein stands to a mouse. It’s unimaginable what that might lead to.
I’d imagine that, certainly when it comes lems, AIs will have left regular humans in the dust. They’ll regard our problems as trivial and have a whole new set of philosophical problems to contend with. cial future of philosophy ends up being continuous with current philosophical problems, like the one of consciousness we’ve thought about. One way all this could go is if these super intelligent systems are somehow themselves successors of human systems, and not separate AIs. Maybe we actually upgrade the human brain along the way with AIstyle technology and we end up being super intelligent creatures ourselves, then there might be the hope of more continuity between humans and AIs in many domains, including these scien be that our successors will look back at us and say that we were just messing around in the dark with problems of consciousness, the same way we look at naive humans 10,000 years ago making got it started.