Human and Robot: A common future
This is another part of the robot series. This time we will try to get into the ethical side of robotics. It is very speculative and there are plenty of ‘ifs’ in it. We are talking concepts here rather than technology, so fasten your seat belts, this is going to be different. If we all agree that the Singularity can happen, and that it could happen somewhere in the 2050s, we need to look at its characteristics: robots exceed human intelligence to a point where we cannot even fathom it and where we cannot predict anything (due to our limited intelligence compared to theirs).
If you remember the movie ‘I, Robot’ you may remember the ‘three laws of robotics’. I have copied these off Wikipedia and here they are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These laws were first published by Asimov in 1942 in his short story ‘Runaround’. Look up Asimov and you will find plenty of debate around them. Now let us turn to the laws. They should be rather clear, but there are inconsistencies. What is ‘harm’? Is it physical harm? Is it also harm to lock up a human being until the human gives you the password to the bank account (or the nuclear release code)? What if a robot starts to distinguish between a ‘human’ and ‘humanity’? Now we have a situation where killing one person is good for humanity (stopping a nuclear release). So what is ‘harm’?
Robots can self-improve and can build next-generation robots that exceed their own limitations. Robots are aware of themselves and their environment. Robots can set goals for their own existence and improve them over time. It becomes very dramatic now. A robot may ask itself: ‘Why am I here? What is my role in the universe?’ That is no different from what humanity asks itself, is it?
If an entity is half robot and half human (like RoboCop), must it then obey orders? If the answer depends on whether the brain is electronics or tissue, well, an electronic brain can be grown as tissue, and then what? So what is a human now? It does become horribly complex, and it can get worse. You can figure this out yourself.
Self-protection is also good. Law 3 should cover this, but what if there is a need to go into a nuclear reactor to switch something off? No human can do it, and if a robot does it, it will succeed but melt in the process. If the robot obeys the order (Law 2), it is in conflict with Law 3. Even my (human) brain will short-circuit on an order like this.
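To see how the priority ordering is supposed to work, here is a tiny sketch (plain Python, not any real robot software, with made-up flags) of the three laws checked in order for the reactor scenario:

```python
# Minimal sketch of Asimov's three laws as a strict priority ordering.
# An "action" here is just a dict of invented flags for illustration.
def permitted(action):
    # Law 1: never harm a human, or allow harm through inaction.
    if action["harms_human"]:
        return False
    # Law 2: obey human orders (Law 1 has already been checked above).
    if action["ordered_by_human"]:
        return True   # Law 2 outranks Law 3, even if the robot is destroyed.
    # Law 3: otherwise, protect your own existence.
    return not action["destroys_self"]

# The reactor scenario: an ordered action that destroys the robot but harms no human.
reactor_shutdown = {"harms_human": False, "ordered_by_human": True, "destroys_self": True}
print(permitted(reactor_shutdown))  # True: Law 2 wins, the robot goes in and melts
```

On paper the strict ordering resolves the conflict – Law 2 simply outranks Law 3 – which is exactly why the order feels so uncomfortable.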
Now we have the situation where a robot is designed to be self-aware and to establish goals according to a higher meaning of its life. Then it looks at its creator and has to obey ‘commands’ from an imperfect entity. I can just see the placards coming our way now. In essence: the robot may ask itself why it should obey laws from a human when the human does not adhere to laws itself. It is the same with us, after all. Why obey orders from a flawed entity? ‘Do what I say, not what I do’ is the command from leaders across the board.
The genie paradox: You get what you ask for, not what you want
If we look at the good ol’ movie Terminator, we may remember that Skynet was invented to protect humanity. Humanity is a concept, whereas a human is a concrete entity. Skynet determined that the best way of looking out for humanity was to kill all the humans off; hence it started a nuclear war. Poorly defined goals will cause problems, and we have the paradox of the genie in the lamp: you get what you ask for, not what you want. What should be simple is not simple – well, it never is. Here is another solution: we will somehow code a robot to always be ‘nice’. Always compassionate and ethical. Never able to harm anything. A real goodie robot. But this is a contradiction, because the humans who will code this do not adhere to such sentiments at all. They might drink and drive, cheat on their taxes or far worse. And what is a robot to make of that?
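To make the genie paradox concrete, here is a tiny sketch (plain Python, with invented numbers and an invented ‘objective’ – nothing to do with any real system) of a planner that optimises exactly what it was asked for:

```python
# Toy illustration of the genie paradox: the goal is written down as
# "minimise human suffering", and the planner picks whichever plan
# scores best on that literal objective. All numbers are made up.
plans = {
    "do nothing":         {"suffering": 100, "humans_alive": 8_000_000_000},
    "cure diseases":      {"suffering": 40,  "humans_alive": 8_000_000_000},
    "eliminate humanity": {"suffering": 0,   "humans_alive": 0},
}

def objective(outcome):
    # What we asked for: less suffering.
    # Not what we wanted: nothing here says the humans have to stay alive.
    return outcome["suffering"]

best_plan = min(plans, key=lambda name: objective(plans[name]))
print(best_plan)  # "eliminate humanity" – you get what you ask for, not what you want
```

Nothing in the objective says humanity has to survive, so the ‘best’ plan is exactly Skynet’s.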
So, coding a robot to be good is not really possible either.
Additional laws have been suggested, like these:
- A robot must establish its identity as a robot in all cases.
- A robot must know it is a robot.
Obviously, if a robot and a human cannot be distinguished from each other, it would be nice to know who you are talking to. But for what purpose? To bully the poor robot? Or is it actually racism?
And a robot must know it is a robot. That smacks of inferiority. So a robot must know it is not human, as though being human is the ultimate goal. This is racism. Anybody watched the film ‘District 9’? Yes, exactly. But it gets worse. What is the legal status of a robot? If it is self-aware – meaning it can think and reflect on life – we cannot treat the robot as a slave, as an entity with no rights, no feelings and so on. Robots have rights, don’t they? Now we need laws governing how we interact with robots and laws protecting them. Robots cannot have owners then. That is slavery. Can we marry a robot?
But it gets much worse.
We all know we cannot travel to the stars to settle a new Earth. But we could send a robot family. After all, we power up the reactor and off they go. But who says they would like to? And what are they going to do when they find a new Earth? Settle it? As robots or humans? It is impossible to guess now. But it gets much, much worse.
This entire discussion is, after all, about how to define humanity and our existence. Why are we here? Can we define life? Can we ‘upload’ ourselves (look it up on Wikipedia) and become immortal? Is a robot immortal? Why? Is DNA really a computer programme?
It gets very complex. And that is why it is so fun writing these things. My friends: go and think!