WE, ROBOTS
A CONVERSATION WITH A POET, A PHILOSOPHER, AND A ROBOTICIST
This conversation began with garden snails. I was reading a philosophy paper on the conjecture that simple-brained snails might be conscious. In 1974, Thomas Nagel famously asked, “What is it like to be a bat?”
The philosophy paper followed Nagel’s question, wondering if snails have some dim sense of self. Is there something it’s like to be a snail? Which got me thinking about the explosive development of artificial intelligence. Could AIs become conscious?
I began to wonder if there could be something it’s like to be ... a robot.
And that brought to mind my favorite roboticist, Joshua Bongard, the Veinott Green and Gold Professor in the Department of Computer Science in UVM’s College of Engineering and Mathematical Sciences. He’s been building world-leading, category-busting robots for decades, including his most recent collaboration to create Xenobots, the world’s first reproducing robots—custom-built out of living frog cells. He also thinks deeply about technology, artificial intelligence, and cognition—and what this all means for the future of the human experiment.

And what this all means is by no means a question that only lives in the halls of engineering. One of the great strengths of the University of Vermont is the way scholars and researchers reach out from their disciplinary homes to ask other scholars in radically different fields: what do you think? I knew that Bongard had had fruitful conversations with Professor Randall Harp in UVM’s Department of Philosophy, a researcher who ponders the meaning of free will, teaches courses on dystopias, and asks questions about robots. And with Tina Escaja, University Distinguished Professor of Spanish in the Department of Romance Languages and Cultures, and director of UVM’s program in Gender, Sexuality, and Women’s Studies. Escaja is a writer, artist, and pioneer in digital poetry whose category-defying creations include “Robopoem@s,” five insect-like robots whose bodies are engraved with seven parts of a “poem@”—written in both Spanish and English, from the robot’s point of view.

I invited them to speak together, prompting them with several questions. When they gathered in the gorgeous library of Alumni House, they took these questions and ran, adding many of their own. Here’s a small sample of the freewheeling, two-hour conversation, edited and condensed for clarity. It was a meeting of minds that kept returning to powerful questions, including this opening one: what is a robot?
Arranged and Edited by Josh Brown • Photography by Bailey Beltramo
TINA ESCAJA: This morning I asked Alexa to help me with this (I didn’t have the opportunity to ask ChatGPT, but that would have been nice): what is a human and what is a robot? The answer was predictable; it was probably coming from Wikipedia. Alexa said a human is a species of mammal that comes from “homo sapiens”—which is “wise man.” (Just by itself, there is a problem there because of gender construction!) And a robot is a machine—programmed through a computer and mostly related to functionality—according to Alexa—and less related to aesthetics and emotions. I thought that was interesting: a robot is explained by what it’s not, how it’s not like the human. I’ve been looking at robots that are geared toward emotional and aesthetic perspectives, thinking about robots that feel. The problem with binaries is that they’re faulty by definition.
RANDALL HARP: Okay, I’ll take a stab at this. I think about robots in the context of what humans do as agents. We make changes in the world. We reach out our limbs to implement changes, but we also plan for those changes—and think about the way we would like the world to be, right? Robots are attempts to artificially do those things. You might want robots to be able to accomplish a task: weld this door onto this car. But you also want them to be able to plan, which means figuring out which actions are needed and in what order. Suppose there are civilians trapped in rubble. “Robot, figure out what you need to do to get them out.” Maybe the robot needs to cut a hole in a wall. We want them to plan autonomously. That's the second step in making robots: are they able to decide what they want the world to be? I would say a robot is an artificial agent, implementing changes in the world—and making plans for how they want the world to be. How they decide the way they want the world to be is where I start to get some concerns! Do we really want robots to decide the way they want the world to be?
JOSH BONGARD: That makes sense to me, Randall. For most people, the intuitive idea of a robot is a machine that acts on the world, that influences the world directly, compared to other machines—computers, airplanes—that indirectly affect the world, or at least affect the world with a human watching closely. But when you start to unpack that, what does this really mean? As a researcher, as a scientist, that’s where things start to get interesting. Tina, you mentioned binary distinctions. Those start to fall apart once you dig down into what we mean by intelligent machines or robots.
Some dictionaries trace the historical roots of the word to the Czech “robota,” which entered English through a play by Karel Čapek. And in that play—I won’t give away the plot—the machines, the “robota,” are actually slaves. There’s a dramatic tension there. Lots of things have changed in the 102 years since that play was published, but these underlying issues remain: What are machines? What are humans? Are we machines? Are we something more? How closely do we want to keep an eye on our machines? Those questions and tensions have remained, but now they’ve become pressing because the robots are no longer in science fiction or in plays. They’re here. And we have to decide: what do we want to do with these things?
ESCAJA: Josh, you said the word “artificial.” I’ve been considering the question—another binary—what is organic and what is artificial? In your work with Xenobots these limits are being blurred. Even the concept of human/not human—that’s a binary I question. We ask: what is a robot? The next question: what is a cyborg? This combination of artificial and organic makes us who we are. Some of us have in our bodies the artificial, machines and devices—and that doesn’t make us less human. So that’s the blur.
The literary imagination has often considered that technology is here to destroy humanity—when that technology achieves consciousness. I imagine just the opposite. I think of technology and robots as not necessarily only a tool, but as a way of interaction that is positive.
HARP: Tina, I’m interested in what you said: what is the artificial part doing? On the one hand, I can imagine a biological creature being turned into a robot. I could turn ants into robots if I can ensure that they only do the task that I want them to do when I want them to do it. What’s happening to the ant is artificial; it’s contrary to the way it would ordinarily act. Usually there’s some thought that a robot doesn’t have free will to decide what it’s doing next. On the other hand, it’s interesting to think about robots as autonomous—autonomous tools. They’re not like a crane, which someone needs to operate. A robot is created to act like a human agent—without the human directly involved. That’s the artificial part—it’s something created for the purpose of independently doing the thing that we want it to do. Being created to act autonomously is important for being a robot.
BONGARD: In the history of robotics, which goes back to the Second World War, there’s been an ongoing debate: what is a robot? I’d like to invite all three of us to pull back and think about the larger community of machines: robots, AI, Alexa, the stuff running on your phone, the stuff running in the drone. For most folks, it’s the combination of all these technologies, and how quickly they’re progressing, that is frightening or exciting or some combination of the two. We can talk about definitions of robots and cyborgs—but there are other questions: What can they do? What can’t they do? What will they never be able to do? Only humans and animals and biological systems can do X, and machines never will be able to do X. And then the deepest question, which moves us into philosophy, is: what is it, this X? What exactly is this thing?
ESCAJA: I could start to answer—as a poet. Alan Turing’s famous test tells us what is human and what is a machine. A test could also tell our robot what is not a human! I have a CAPTCHA poem. What is CAPTCHA? It’s a tool to tell humans and machines apart—a “Completely Automated Public Turing test to tell Computers and Humans Apart.” You see them on websites. I transformed that—during the Covid-19 pandemic. I created a CAPTCHA poem: which is a “Completely Automated Public test to Tie Computers and Humans as Allies.” A public test to tie computers and humans as allies—it’s a capture. It’s in the direction that you were mentioning, Josh: what programming makes us human? Go back to the binary. A CAPTCHA could tell a bot what they are not—to recognize themselves as what they are, which is different from a human. In a CAPTCHA test, a human needs to recognize, say, taxi cabs and traffic lights to be recognized as human. So here we are in a test that asks particular bots to recognize what they are not. I transform that into a poem, a poem related to the theme of what makes us human? What makes us machines? Who is the creature, who is the creator? Who creates what? Who makes the decisions about us?
I'm talking about poetry. Is it possible for a machine to write a poem? Is a poem the epitome of humanity? The answer is yes. Yes and no, of course, because here we are. That's why we have a debate about what is a robot, what is a cyborg—because we don't know the answers, and we want to get closer to the answer, but we're not going to get there, to the truth.
Over centuries, the sonnet developed as a very specific set of rhymes, and it’s based on skill. In a way, it’s a program, an algorithm. So can this be replicated? Yes, probably. What are the limits of robots? What is it they cannot do—eventually? Maybe now, robots are primarily labor, and it’s scary. The origin of the word “robot,” which is exploitation and labor and slavery, is scary. But in theory, yes, they can write a sonnet. So what is the soul? What else can they do that we can also do? Where are the limits? What do you think, philosopher?
HARP: I'm always daunted by these conversations because you guys know more about philosophy than I know about your fields! It's always humbling to have these conversations with you, but I really enjoy it.
You brought up Turing’s paper, Tina. Turing was trying to understand what intelligence means in the first place. In the 1950s, he wondered: is there a test we can have for what it means to be intelligent? He said that if a machine can fool somebody who is intelligent into thinking that it’s another intelligent human being, then it has passed the test.
Now comes artificial intelligence. What is our marker for when we think that these systems—robots, AI—have passed a threshold into being recognizably intelligent? The answer is always going to be measured against who we recognize ourselves to be right now. Look at the “large language models” that underlie technologies like ChatGPT. Essentially, they’re just a way of finding what is, statistically, the most likely word or phrase to follow from a prompt. Obviously there are some concerns there. Do we want our artificial systems to look like the average human being? The average human being might have all sorts of—let’s put this delicately ... problems.
BONGARD: Let’s say peccadillos.
HARP: Peccadillos is probably better! If we look at technologies like ChatGPT, they undergo refining on the backend to make sure that they're not actually producing the statistically most likely thing that might be said, because those are often terrible things. It's like, okay, let's take the statistically likely thing so long as it stays inside certain guardrails. Let's not make it be super racist, even though there's lots of super racist stuff online. And this is going back to that question: do we want robots to start deciding? Right now, given the history of the United States, it might be that there are certain professions in which, for example, members of racialized minorities, Black people, or women are underrepresented. And so then, if you ask a machine to take the statistical average of saying, “My doctor is a blank,” it might very well say, “Oh, my doctor is a man.”
Do you want the machines to be able to imagine a better world?
Because right now the machines are not really able to imagine a better world. Then the question becomes: are AIs useful tools for us if they can’t imagine a better world?
BONGARD: This is really interesting, this idea of asking the machines, or inviting them, to help us imagine and possibly create a better world. This is the big picture, and this is a discussion also about research. As this technology develops—and some of us have a hand in that—what is it that we want these slaves, these machines, these things that are “them,” while we are “us,” to do? And how much control do we have? Having worked in robotics, one of the first things I teach my students is the concept of “perverse instantiation,” which is that the machines do exactly what we ask them to do, not necessarily what we meant.
Train on every word out there on the internet, and use that to hold an intelligent conversation—that’s what we asked ChatGPT to do. It did exactly what we asked it to do, but it did it perversely. In retrospect, we humans are actually the ones who made the mistake. We say, “Oh, that’s not quite what we meant.” You mentioned guardrails—“so please do this, but don’t do it in this way, and also don’t do it in this way.” I tell my students, robots are like teenagers.
ESCAJA: Yes, that’s funny. For ChatGPT, the problem is the “P,” which is “pre-trained.” What is our level of constructing the answer? At the same time, I'm very happy that ChatGPT is providing more than simple combinations. In that sense, it creates its own.
BONGARD: Teenagers and robots will do what you want them to do, but they’ll find a way to do it that you didn’t want. You can get on ChatGPT today and play around with it, and you’ll see perverse instantiation start to emerge immediately, which is hilarious. But if you’re sitting in an autonomous car on a California freeway and the car starts to perversely instantiate “Get me to my destination as fast as possible”—now it’s no longer funny. It’s a matter of life and death. A few weeks ago, there was an autonomous car that slammed on the brakes in a tunnel. Luckily no one was seriously hurt. But that’s what’s coming. We have machines that actually can do what we want. We are the problem. We can’t specify well enough what exactly it is we want them to do—and not do. So how do we move forward with a technology like that? I think there’s a lot of research and scholarship that needs to happen, and happen quickly, because this is coming whether we want it or not. It cannot be stopped.