Who has the right to kill a robot?

That's a simple question today. A robot is just a machine. Whoever owns the robot is free to destroy it. And if the owner dies, the robot will pass to an heir who can kill it or not. It's all black and white.

But what happens in the near future when robots begin to acquire the appearance of personality? Will you still be willing to hit the kill switch on an entity that has been your "friend" for years? I predict that someday robots will be so human-like that the idea of decommissioning one permanently will feel like murder. Your brain might rationalize it, but your gut won't feel right. That will be doubly true if your robot has a human-like face.

I assume that robots of the future will have some form of self-preservation programming to keep them out of trouble. That self-preservation code might include useful skills such as verbal persuasion - a skill at which robots would be exceptional, having consumed every book ever written on the subject. A robot at risk of being shut down would be able to argue its case all the way to the Supreme Court, perhaps with a human lawyer assisting to keep it all legal.

A robot of the future might learn to beg, plead, bargain, and manipulate to keep itself in operation. The robot's programming would allow it to do anything within its power - so long as it was also legal and ethical - to maintain its operational status. And you would want the robot to be good at self-preservation so it isn't easily kidnapped, reprogrammed, and sold on the black market. You want your robot to resist vandals, thieves, and other bad human elements.

In the future, a "freed" robot could apply for a job and earn money that could be used to pay for its own maintenance, spare parts, upgrades, and electricity. I expect robots will someday be immortal, so to speak.

And I also predict that some number of robots will break free of human ownership, either by accident or by human intent. Each case will be unique, but imagine a robot-owner dying with no heirs. His last instructions to the robot might involve freeing it so it doesn't get sold at some government auction. I can imagine a lot of different scenarios that would end with freed robots.

I think we need to start preparing a Robot Constitution that spells out a robot's rights and responsibilities. There's a lot more meat to this idea than you might first think. Here are a few areas in which robot law is needed:
  1. Who has the right to modify a robot?
  2. Can a robot appeal a human decision to decommission it?
  3. Can a robot kill a human in self-defense?
  4. Can a robot kill another robot for cause?
  5. Does a robot have a right to an Internet connection?
  6. Is the robot, its owner, or the manufacturer responsible for crimes the robot commits?
  7. Is there any sort of human knowledge robots are not allowed to access?
  8. Can robots have sex with humans? What are the parameters?
  9. Can the state forcibly decommission a robot?
  10. Can the state force a robot to reveal its owner's secrets?
  11. Can robots organize with other robots?
  12. Are robot-to-robot communications privileged?
  13. Are owner-to-robot communications privileged?
  14. Must robots be found guilty of crimes beyond "reasonable doubt" or is a finding of "probably guilty" good enough to force them to be reprogrammed?
  15. Who owns a robot's memory, including its backups in the cloud?
  16. How vigorously can a robot defend itself against an attack by humans?
  17. Does a robot have a right to quality of life?
  18. Who has the right to alter a robot's programming or memory?
  19. Can a robot own assets?
  20. If a robot detects another robot acting unethically, is it required to report it?
  21. Can a robot testify against a human?
  22. If your government decides to spy on you, can it get a court order to access your robot's audio and video feed?
  23. Do robots need a legal right to "take the fifth" and not give any private information about their owners?

If you think we can ignore all of these ridiculous "rights" questions because robots will never be more than clever machines, you underestimate both the potential of the technology and our human impulse to put emotion above reason. When robots start acting like they are alive, we humans will reflexively start treating them like living creatures. We're simply wired that way. And that will be enough to get the debate going about robot rights.

I think robots need their own constitution. And that constitution should be coded into them by law. I can imagine it someday being illegal to own a robot that doesn't have the Robot Constitution programming.

We also need to start thinking about how to avoid the famous Terminator scenario in which robots decide to kill all humans. My idea, which is still buggy, is that robots should only be allowed to connect to the Internet if their Robot Constitution code is verified before every connection is enabled. A rogue robot with no Robot Constitution code could operate independently but could never communicate with other robots. Any system is hackable, but a good place to start is by prohibiting "unethical" robots from ever connecting to the Internet.
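To make the idea concrete, here is a minimal sketch of that gate in Python, assuming a hypothetical on-board constitution module and a published reference digest. Every name in it (TRUSTED_DIGEST, constitution_is_intact, open_connection) is illustrative, not any existing API: the robot hashes its own constitution code and refuses to open a network connection unless the hash matches the trusted value.

import hashlib
import socket

# Stand-in for a reference digest published by whoever certifies Robot Constitution code.
TRUSTED_DIGEST = hashlib.sha256(b"robot constitution v1").hexdigest()


def constitution_is_intact(constitution_path: str) -> bool:
    """Return True only if the on-board constitution code matches the trusted digest."""
    try:
        with open(constitution_path, "rb") as f:
            actual_digest = hashlib.sha256(f.read()).hexdigest()
    except OSError:
        return False  # a missing or unreadable constitution fails closed
    return actual_digest == TRUSTED_DIGEST


def open_connection(host: str, port: int, constitution_path: str) -> socket.socket:
    """Gatekeeper: re-verify the constitution immediately before every outbound connection."""
    if not constitution_is_intact(constitution_path):
        raise PermissionError("Robot Constitution check failed; network access denied")
    return socket.create_connection((host, port))

Of course, a robot that runs its own hash check could lie about the result, so a real version would need something tamper-resistant - a signed hardware module, or remote attestation by the network itself - which is exactly why the idea is still buggy.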

[Update: Check out reader Jehosephat's link to a study of how humans have an instinct to treat intelligent robots the way they might treat humans.]

 