I compute, therefore I am
Conscious robots
The Turing Test is a well-known way to check if a computer has achieved a certain level of equivalence with human beings, but can any test truly reveal what it is to be human?
Most people would probably agree that robots will one day achieve a level of computational power that will enable them to pass for humans. But when that happens – as some say it already has – would it be a sign that they are on the way to becoming conscious, or sentient, beings? Or would it simply be a sign of their ever-increasing ability to calculate more accurately?

While it is relatively easy to test an industrial robot for physical accuracy in its operations – for which there is an internationally recognised standard in the form of ISO 9283 – tests to evaluate robots and computers for their level of “human-like capabilities” are somewhat more ambiguous. The Turing Test, articulated in 1950 by the British computer scientist Alan Turing, is probably the best-known, if controversial, method of evaluating whether a computer’s intelligence is indistinguishable from that of a human. Some may say that the Turing Test is too simplistic, crude even, but the idea of such a test has captured the imagination ever since computers were first being developed in the early twentieth century.
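As a rough, hypothetical sketch of the test’s structure – the respondent functions and canned replies below are illustrative stand-ins, not any real chatbot or evaluation harness – the imitation game boils down to this: a judge exchanges text with two hidden respondents over identical channels and must guess which one is the machine.

```python
# Minimal sketch of a Turing-style imitation game, under the assumption that
# both respondents are reduced to a simple text-in, text-out function.
import random

def machine_respondent(prompt: str) -> str:
    # Stand-in for the conversational program under evaluation.
    return "That's an interesting question - what makes you ask?"

def human_respondent(prompt: str) -> str:
    # Stand-in for the hidden human; in a real test a person would be typing.
    return input(f"(hidden human, answering '{prompt}') > ")

def run_imitation_game(judge_questions):
    # Randomly assign the two respondents to anonymous channels A and B,
    # so the judge sees only text and labels, never who is behind them.
    respondents = [machine_respondent, human_respondent]
    random.shuffle(respondents)
    channels = dict(zip(["A", "B"], respondents))

    for question in judge_questions:
        print(f"Judge asks: {question}")
        for label, respond in channels.items():
            print(f"  [{label}] {respond(question)}")

    guess = input("Judge: which channel is the machine, A or B? ").strip().upper()
    if channels.get(guess) is machine_respondent:
        print("Judge identified the machine.")
    else:
        print("The machine was not identified - it passes this round.")

if __name__ == "__main__":
    run_imitation_game([
        "What did you dream about last night?",
        "Why do jokes stop being funny when you explain them?",
    ])
```

The point of the sketch is that the judge only ever sees text; everything about the test hinges on whether the machine’s output reads as human, not on what is going on inside it.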
Turing was a pioneer of his day, playing an important role in World War II, when computational machines were built using mechanical gears and vacuum tubes and used to crack coded messages by calculating all of the possible meanings of those messages. It would have taken human beings centuries of dedicated calculation to do what computers did, even back in the 1940s. And computers have become far more capable since then. Moreover, they are increasingly being networked together to create gargantuan machines with colossal computing power.

“Meanings” is probably the wrong word to use, as it implies much more than mere translation, which is what the code-breaking machines essentially did. It was up to the humans to look at the computer-generated translations and decide which ones made most sense, which ones had the most relevant meanings, and then how to respond.

Translation is also what modern computers do when they communicate with humans, albeit using far more computational power. So much power that at least one computer was able to compute its way to passing the Turing Test, conversing well enough with judges for them to decide that it could pass for being human. This happened some time last year. Since then – this week, in fact – a robot is said to have demonstrated a level of “self-awareness” that has apparently not been seen before. More stories like this will inevitably emerge, and add fuel to the ongoing philosophical debate regarding robots. However, it could be argued that the real development, the actual progress, is in translation capabilities – the computers, or the robots, are becoming far more skilful at translating human language, and calculating what responses would be most appropriate.
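To make that division of labour concrete, here is a toy sketch – a simple Caesar shift, not the Enigma-era machinery itself – in which the computer’s job is to enumerate every possible decoding and the human’s job is to judge which candidate actually means something. The cipher and message are invented for illustration.

```python
# Toy illustration of machine enumeration versus human judgement of meaning:
# the program calculates all candidate "translations"; a person spots the real one.
import string

ALPHABET = string.ascii_uppercase

def caesar_decode(ciphertext: str, shift: int) -> str:
    """Shift every letter back by `shift` places, leaving other characters alone."""
    decoded = []
    for ch in ciphertext.upper():
        if ch in ALPHABET:
            decoded.append(ALPHABET[(ALPHABET.index(ch) - shift) % 26])
        else:
            decoded.append(ch)
    return "".join(decoded)

def enumerate_candidates(ciphertext: str):
    """The machine's role: exhaustively calculate every possible decoding."""
    return {shift: caesar_decode(ciphertext, shift) for shift in range(26)}

if __name__ == "__main__":
    # The human's role: scan the 26 candidates and pick the one that makes sense
    # (here, shift 3 yields "ATTACK AT DAWN").
    for shift, candidate in enumerate_candidates("DWWDFN DW GDZQ").items():
        print(f"shift {shift:2d}: {candidate}")
```

The machine does the exhaustive calculating; deciding which output carries meaning, and what to do about it, remains with the human reader.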
A matter of mind and consciousness
As well as Turing, other figures from history may be appropriate to mention: namely, the French philosopher René Descartes and, of course, Isaac Asimov.

Descartes is famous for the quote “Cogito ergo sum”, or “I think, therefore I am”, as it’s known in English. Many believe that Descartes meant that thinking itself is a form of self-awareness, which implies that if you think, you are aware of yourself – you have self-consciousness, or more generally, you are conscious. However, substitute the word “compute” or “calculate” for “think”, and Descartes’ argument becomes less convincing as a definition of consciousness, or self-awareness.

Many robots give a good impression of being conscious, in that they are able to think or calculate. Many are programmed to know their own name and to communicate as though they were aware of themselves, as though they were conscious. However, while robots may be able to say, “I am a robot”, or be self-referential in their conversation, the most that they can do is analyse the input of language from a source – a user, such as a human – and then calculate the most appropriate response. They merely compute within the parameters defined by the programmer.

Having said that, the cloud is bringing unprecedented computing power to robots like Pepper, which its makers say is able to understand emotions and language. It’s quite possible that Pepper and its like will give responses that appear to have been spontaneously created from something beyond the parameters defined by the programmers – from nothing. But logic would dictate that that is not possible. Upon investigation, it would be possible to find the lines of code that led to that human-like output, an output that would imply, inaccurately, that the robot or computer has a mind of its own. It does not; it is a machine – that is the general view.
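To make that point concrete, here is a minimal, hypothetical sketch – the rules and replies are invented for illustration, not taken from Pepper or any real product – of how a program can sound self-aware while doing nothing more than matching input against lines of code a programmer wrote, lines that anyone inspecting the program could find.

```python
# Hypothetical rule-based responder: apparently self-aware replies are just
# scripted strings returned when a trigger phrase appears in the input.
RULES = [
    ("your name", "My name is Pepper-like-bot."),                # self-referential, but scripted
    ("are you conscious", "I am a robot, so I only compute responses."),
    ("how do you feel", "I detected the word 'feel' and returned this canned sentence."),
]

def respond(user_input: str) -> str:
    """Return the first scripted reply whose trigger phrase appears in the input."""
    text = user_input.lower()
    for trigger, reply in RULES:
        if trigger in text:
            return reply
    return "I do not have a rule for that input."  # the programmer's parameters end here

if __name__ == "__main__":
    print(respond("What is your name?"))
    print(respond("Are you conscious?"))
```

Every reply, however human it sounds, traces back to an explicit rule; nothing in the program exists outside the parameters its author defined.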
Human beings are organic, connected to the Earth, the moon, the planets, the solar system and indeed the universe in a way that computers and robots are not – the development of organic computers notwithstanding.

Isaac Asimov was a professor of biochemistry who wrote a collection of nine science fiction short stories called “I, Robot”. This work of fiction has proved to have an enduring power. Despite being published in 1950, the “Three Laws of Robotics” articulated in the book are often quoted and referred to in entertainment culture, often borrowed as a central theme in films. Now, the three laws are increasingly being discussed in wider society.

The Three Laws of Robotics, as first listed in Asimov’s short story “Runaround”, are:
• A robot may not injure a human being or, through inaction, allow a human being to come to harm.
• A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
• A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

For the most part, Asimov clearly differentiates robots from humans, mostly through the depiction of robots as mechanical, made of metal. Moreover, his three laws are blunt in the way they subjugate robots, making them subservient to humans. However, in another short story called “Evidence”, set in a fictional world where humanoid robots exist, Asimov does deal with the concept of robots passing for being human.

“Evidence” centres on whether the main character is a humanoid robot or not. However, Asimov largely does not directly deal with questions such as “What is consciousness?” and “What is mind?” Perhaps that is the most fundamental idea in the book. None of the peripheral characters dwell on questions about the central character’s level of consciousness, just whether he is a robot or not. It would appear that the character had overcome that argument, as evidenced perhaps by his being married and running for political office. But in the world of “Evidence”, robots are not allowed to hold political office no matter how humanoid or “conscious” they appear to be. This is what provides the story with its central conflict.

Further evidence of just how prescient Asimov’s stories were can be found in the real world of today, where many are currently arguing for laws to control robots and artificial intelligence.

Beyond the three laws of robotics
In a paper calling for the introduction of legislation to deal with robots and artificial intelligence, Ryan Calo, assistant professor in the School of Law at the University of Washington, writes: “Technology has not stood still. The same private institutions that developed the Internet, from the armed forces to search engines, have initiated a significant shift toward robotics and artificial intelligence.

“Courts that struggled for the proper metaphor to apply to the internet will struggle anew with robotics.”

He adds that “the widespread distribution of robotics in society will, like the internet, create deep social, cultural, economic, and of course legal tensions”.

Calo has previously called for a federal robotics commission. “Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm. Robotic systems accomplish tasks in ways that cannot be anticipated in advance, and robots increasingly blur the line between person and instrument.”

Calo also makes the distinction between robots and