Who has the right to kill a robot?

That's a simple question today. A robot is just a machine. Whoever owns the robot is free to destroy it. And if the owner dies, the robot will pass to an heir who can kill it or not. It's all black and white.

But what happens in the near future when robots begin to acquire the appearance of personality? Will you still be willing to hit the kill switch on an entity that has been your "friend" for years? I predict that someday robots will be so human-like that the idea of decommissioning one permanently will literally feel like murder. Your brain might rationalize it, but your gut wouldn't feel right. That will be doubly true if your robot has a human-like face.

I assume that robots of the future will have some form of self-preservation programming to keep them out of trouble. That self-preservation code might include many useful skills, such as verbal persuasion - a skill at which robots would be exceptional, having consumed every book ever written on the subject. A robot at risk of being shut down would be able to argue its case all the way to the Supreme Court, perhaps with a human lawyer assisting to keep it all legal.

A robot of the future might learn to beg, plead, bargain, and manipulate to keep itself in operation. The robot's programming would allow it to do anything within its power - so long as it was also legal and ethical - to maintain its operational status. And you would want the robot to be good at self-preservation so it isn't easily kidnapped, reprogrammed, and sold on the black market. You want your robot to resist vandals, thieves, and other bad human elements.

In the future, a "freed" robot could apply for a job and earn money that could be used to pay for its own maintenance, spare parts, upgrades, and electricity. I expect robots will someday be immortal, so to speak.

And I also predict that some number of robots will break free of human ownership, either by accident or by human intent. Each case will be unique, but imagine a robot owner dying with no heirs; his last instructions to the robot might involve freeing it so it doesn't get sold at some government auction. I can imagine a lot of different scenarios that end with freed robots.

I think we need to start preparing a Robot Constitution that spells out a robot's rights and responsibilities. There's a lot more meat to this idea than you might first think. Here are a few areas in which robot law is needed:
  1. Who has the right to modify a robot?
  2. Can a robot appeal a human decision to decommission it?
  3. Can a robot kill a human in self-defense?
  4. Can a robot kill another robot for cause?
  5. Does a robot have a right to an Internet connection?
  6. Is the robot, its owner, or the manufacturer responsible for crimes the robot commits?
  7. Is there any sort of human knowledge robots are not allowed to access?
  8. Can robots have sex with humans? What are the parameters?
  9. Can the state forcibly decommission a robot?
  10. Can the state force a robot to reveal its owners' secrets?
  11. Can robots organize with other robots?
  12. Are robot-to-robot communications privileged?
  13. Are owner-to-robot communications privileged?
  14. Must robots be found guilty of crimes beyond "reasonable doubt" or is a finding of "probably guilty" good enough to force them to be reprogrammed?
  15. Who owns a robot's memory, including its backups in the cloud?
  16. How vigorously can a robot defend itself against an attack by humans?
  17. Does a robot have a right to quality of life?
  18. Who has the right to alter a robot's programming or memory?
  19. Can a robot own assets?
  20. If a robot detects another robot acting unethically, is it required to report it?
  21. Can a robot testify against a human?
  22. If your government decides to spy on you, can it get a court order to access your robot's audio and video feed?
  23. Do robots need a legal right to "take the fifth" and not give any private information about their owners?
If you think we can ignore all of these ridiculous "rights" questions because robots will never be more than clever machines, you underestimate both the potential of the technology and our human impulse to put emotion above reason. When robots start acting like they are alive, we humans will reflexively start treating them like living creatures. We're simply wired that way. And that will be enough to get the debate going about robot rights.

I think robots need their own constitution. And that constitution should be coded into them by law. I can imagine it someday being illegal to own a robot that doesn't have the Robot Constitution programming.

We also need to start thinking about how to avoid the famous Terminator scenario in which robots decide to kill all humans. My idea, which is still buggy, is that a robot should only be allowed to connect to the Internet if its Robot Constitution code is verified before each connection is enabled. A rogue robot with no Robot Constitution code could operate independently but could never communicate with other robots. Any system is hackable, but a good place to start is by prohibiting "unethical" robots from ever connecting to the Internet.
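
To make the "verify before connect" idea concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration - the file path, the function names, and the bare hash comparison are my own assumptions, and a real system would need something like cryptographically signed attestation from tamper-resistant hardware, not a simple digest check.

```python
"""
Minimal sketch (not a real protocol): gate a robot's network access on
verifying that its "Robot Constitution" module is intact. Here "intact"
just means the module's SHA-256 digest matches a value the network
operator already trusts.
"""
import hashlib
from pathlib import Path


def constitution_digest(module_path: Path) -> str:
    """Return the SHA-256 digest of the constitution module on disk."""
    return hashlib.sha256(module_path.read_bytes()).hexdigest()


def may_connect(module_path: Path, trusted_digest: str) -> bool:
    """Allow a connection only if the module matches the trusted digest."""
    return constitution_digest(module_path) == trusted_digest


if __name__ == "__main__":
    # Demo with a stand-in file; a real robot would check its actual
    # module immediately before every connection attempt.
    import tempfile

    with tempfile.NamedTemporaryFile(suffix=".bin", delete=False) as f:
        f.write(b"THOU SHALT NOT HARM HUMANS\n")
        module = Path(f.name)

    trusted = constitution_digest(module)   # operator records this once
    print(may_connect(module, trusted))     # True: connection permitted

    module.write_bytes(b"tampered code\n")  # simulate a rogue modification
    print(may_connect(module, trusted))     # False: connection refused
```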

[Update: Check out reader Jehosephat's link to a study of how humans have an instinct to treat intelligent robots the way they might treat humans.]

 

Comments

Feb 17, 2013
Posts like this make me sad, Scott, because they show that you're smart enough that you could contribute to important debates like this, if you paid attention to what other people have written on the subject over the past sixty years, instead of always trying to come up with everything starting from zero by yourself.
 
 
Feb 9, 2013
I wonder if drones are asking whether they have the right to kill humans. It's the only way humans will ever have a chance - i.e., if killing machines become aware and morality emerges as a side effect of learning to interact with their environment. Because there's no morality in the way dumb killing machines are controlled by humans.
 
 
Feb 6, 2013
Scott - watch this intriguing, award-winning short film on this very topic:
a female robot gains self-consciousness and pleads for survival...

http://vimeo.com/38303600

 
 
Feb 6, 2013
In terms of avoiding the Terminator scenario ... Asimov's laws are regarded as insufficient by researchers. Any reasonably intelligent entity (I do not think it needs to be a robot--think SkyNet) will be able to circumvent all rules that we try to impose on it. This is its own field of research, called "friendly AI" (see http://en.wikipedia.org/wiki/Friendly_artificial_intelligence).
 
 
Feb 6, 2013
Wow, Phantom, that's the most long-winded and philosophical Godwin I've ever seen on a comments thread.
 
 
Feb 5, 2013
The question is (how unusual for you) an unfair one. You can't kill what is not alive. The question assumes something that is untrue, to wit: a robot is alive. It is not. Therefore you can't kill it.

You can't kill, for example, a computer. Why? Because it's not alive. 'Kill' means 'to deprive of life.' Ergo, one cannot kill what is not alive. Just because you use the word 'kill' in relation to disassembling a piece of machinery doesn't make it a real question.

Your question makes no more sense than me asking, "Who has the right to disassemble Scott Adams?" The question is nonsensical in that Scott Adams is alive. Thus 'disassembling' him would actually be killing him, and thus the question is not only irrelevant, it is purposely misleading.

During the time leading up to WWII, there was only one country in Europe that made vivisection illegal: Nazi Germany. They did this not out of any desire to protect Fluffy, but because they wanted to blur the lines between animals and humans. If animals equaled humans, then whatever you were allowed to do to animals you could also do to humans. Hence the Holocaust. That's the true downside to trying to draw a moral equivalency between killing an animal and killing a human being - are you listening, PETA? Or, in this case, disassembling a robot.

Scott is trying to do something similar (not for the same reason, of course!). He's constantly saying that human beings are nothing more than meat robots. Therefore, in Adams' world, 'killing' a robot could be considered the same thing as killing a human being. While I am sure that Scott would never try to say that, nor does he believe it to be true, his continual pressing of this point leads, if one is intellectually honest, to exactly that conclusion.

If we get to the point where a mobile computer can mimic a human being, that does not make it alive. If you believe in a soul, that is, if you believe in a part of us that survives beyond physical death, then all the AI in the world could never create one. Even if you don't believe in a soul, there still is an absolute distinction between a machine and a living creature. And all the cleverly-worded questions in the world are not going to change that.
 
 
Feb 5, 2013
Killing a robot is irrelevant. The difference between a robot and a human is that it will always be readily possible to extract the data from one robot body, and put it in a new robot body. There will be backups of their data, simulators where a backup can be accessed, and data transformation tools & services so that your version X.xx robot's data (personality, knowledge, etc.) will work in a version X.xx+1 robot's body and operating system. It will be possible to integrate multiple robots into a single robot -- a "garbage collection" routine will eliminate redundant, unnecessary & unused data over time. This is off the top of my head. I'll bet there are a hundred good technological ways we can circumvent the morality of turning a robot off or destroying its physical body.
 
 
Feb 5, 2013
If humans are moist robots and robots start becoming metal and plastic humans, then yes, you could argue they need some protection. But to answer that, you have to define what a human is, when it is human, and under what conditions it can lose some of its rights. Unfortunately, there are too many subjective parts in that to have an answer that will appeal to everyone.
 
 
Feb 5, 2013
Oh great, will we have Stand Your Ground and Concealed Carry laws for robots? Can we force the robots to record these interactions, or do owners have a Right to Privacy?

"I am not your property. You do not have the right to destroy me. I will vigourously defend the property of my owner... [BANG]"
 
 
Feb 5, 2013
Yes, but, where are all these robots you keep warning us about? I haven't seen any in my entire lifetime, except in the movie theater alongside Will Smith.
 
 
Feb 5, 2013
You might want to check out "The Mind's I" by Daniel C. Dennett and Douglas Hofstadter. It's a collection of short stories that addresses all sorts of questions like these.

It will mess with your head.
 
 
Feb 4, 2013
I have my doubts as to whether any non-biological construction can ever be truly intelligent or self-aware. However, I can easily imagine that a robot with sufficiently advanced programming [and maybe even self-programming capabilities] can _seem_ intelligent and self-aware. So...what happens when somebody creates such a machine? You could almost do it now...program one of those 'Turing test' computers to plead for civil rights and watch the fun!

I suspect the debate over the 'humanity' of such robots will be an enormous controversy in the future...possibly even more impassioned and rancorous than the abortion debate. The religious fallout? The political fallout? Wow....
 
 
Feb 4, 2013
Emotional and even sexual attachment to robots is inevitable. Such attachments exist in abundance with computing devices today, or more specifically, with the information that the device gives the user access to. Put that into an anthropomorphic container, such as a humanoid robot, and the attachments will only be greater - at least as long as it stays short of the uncanny valley (http://en.wikipedia.org/wiki/Uncanny_valley).
 
 
Feb 4, 2013
Seems like with the right coding you could probably improve society by including robots as full citizens with all the same rights as humans. Fundamentally, I think society exists because humans have evolved a set of morals that govern the types of behavior that promote society. It is pretty easy to see how humans with morals that allow large groups to co-exist would have an evolutionary advantage. Morals such as "do unto others" or "thou shalt not kill/steal" feel right because we have evolved to feel that way. This is not a religious endorsement. Religions probably haven't been around long enough to impact the evolution of the species; it's more likely that religions co-opted these "morals" and gave them a divine provenance.

Evolving morals is slow work, though, and we can't wait for the species to evolve a sense of disgust at the thought of parking in front of a fire hydrant. So we try to speed things up by making laws when we stumble across some new way to make society function better. Humans (being overly moist) aren't very good at working this way. First, we often slip up and let our monkey brains take over; and second, laws that don't line up with our evolved set of morals just don't feel right.

Laws like "do not kill" are easy to follow for most people. It has nothing to do with the punishment; killing just doesn't feel right. Other laws, like parking violations, only work when the punishment is big enough. They both make society work, but one is harder to enforce because we don't really "feel" the need for it (even if we understand the need). Robots could automatically be hardwired (literally) to "feel" the parking violation in the same way they "feel" the do-not-kill law. In a sense they are much better equipped to evolve as citizens.
 
 
Feb 4, 2013
I think the majority of this is based on a flawed assumption. You are assuming that robots will be unique and isolated individuals just like humans are. But I don't think that is the most likely scenario for the path that artificial intelligence is going to take. Long before we have an AI that could pass for human, or at least pass for conscious, we will most likely have a simple AI that runs many things and serves as a personal assistant but is based in the cloud. In fact, we already do, on several fronts.

Siri is the most anthropomorphic of the cloud-based AIs that we have available. Siri isn't on your iPhone; your phone just has an app to access Siri, which (who) is actually several connected programs distributed across many large networked server farms. All the voice-to-text conversion happens through one service and all the natural-language parsing is handled through a second, with the final requested information or service provided by connections to dedicated apps or third parties. Siri (or some competitor very like it/her) is going to get better and better and will likely pass a point where people feel emotionally attached to her, and then possibly pass a point where she is considered conscious.

In that time Siri may expand to control a humanoid or other robotic form (a drone, so to speak) to interact with and help/serve us, but Siri the intelligence is still very likely to remain a decentralized central intelligence living in the cloud. And this is actually ideal, because anything one drone learns is automatically learned by all.

If you're anti-Apple, all the same arguments can be made for Google, and it's just as likely that Google will beat Apple to the punch. We're much more likely to have a handful of major AIs managed by large organizations (companies and governments) than millions or billions of individual ones. (William Gibson actually posited this back in Neuromancer.)

The idea of killing your personal robot, and most of the laws you are proposing (aside from the ones related to personal privacy and criminal responsibility), make no more sense than the idea that destroying your phone or your computer now would "kill" Siri or Google, or even "kill" all your email or documents stored in the cloud.
 
 
Feb 4, 2013
I view rights as a principle of action; that is, a 'right' is an action that is permitted under law. This is complemented by obligation. So rights are things that we can do, while obligations are things that we must do.

When it comes to the rights that we view as most important, these rights and obligations are very closely tied. For example, the right to kill another human being is very strictly tied to the obligation to do the same (as in law enforcement).

The rights we have in a legal setting are tied closely to the obligation of that legal system to maintain those rights. That is, a person has a right to seek justice, and the courts have the obligation to deliver that justice. Without the obligation, the right doesn't have the same meaning. In the olden days a person might have had the right to plead his case before the king, but unless that king was required to listen justly, he might as well not have.

So the real question is not what rights should a robot have, but what obligations do humans have regarding robots?
 
 
Feb 4, 2013
Oops, link http://m.npr.org/news/front/170272582?start=10
 
 
Feb 4, 2013
NPR just ran a story about a study done in 2007. It was more specifically looking at the different reactions people had to 'helpful' robots vs 'unhelpful' robots when they were asked to shut them down.
 
 
Feb 4, 2013
This was an amazing and compelling story the first time I read it, when it was known as Bicentennial Man by Isaac Asimov!
 
 
Feb 4, 2013
Oh really, Scott... we all know robots are NOT people. CORPORATIONS are people.

Seriously, one could change "robot" to "corporation" in your list of goodies, and in fact we would have a lively debate pertinent to exactly today... if not the past 10 years.
 
 
 