Asimov's Laws of Robotics - Useful or Not?

Gizmodo has a funny article discussing Asimov's Three Laws of Robotics that is partly tongue-in-cheek but raises some interesting questions.

We can tell that Asimov's laws were silly and ineffective because his books hinged on one or more of the Three Laws being misused or loopholed into uselessness. In a way, his robot novels relied on playing a trick on the reader: the reader has to believe that the Three Laws are correct and complete, otherwise there is no "Aha!" when the plot unfolds as it must. He did this so successfully that even now many people believe the Three Laws are sound, even though the entire point of the Three Laws was that they never worked!

Golf clap to Asimov.

The bigger issue here is that we have no plausible way to program robots to never do the wrong thing. We can't even get our own morality right, so how could we possibly instill it into a robot? The Ten Commandments, for example, could be called humanity's analogue to the Three Laws. Aside from the fact that we often break those rules for the wrong reasons, we sometimes decide to break them for the right reasons.

"Thou shalt not kill" is at the top of the list, which is all well and good.  We might even want to put it at the top of the robots' list, too.  However, we often decide that it is correct to kill another person.  There are many circumstances under which one person is allowed to kill another.  Informally, you are allowed to kill someone who breaks into your home and attacks you.  Formally, you are allowed to kill people if you are in the military, as long as you stick to killing the right people (i.e. the enemy).  At the societal level, we kill prisoners who have committed offenses grave enough to warrant death, although the topic of capital punishment has always been (and will always be) the subject of great debate even as we practice it.

How can you possibly encapsulate this complicated set of rules and contexts in a few lines of programming? "Thou shalt not kill, unless your own life is in danger, or a government entity has given you orders, or the law has decided that someone should be put to death"? It wouldn't be possible to close every loophole that could arise; we can tell this already, because we ourselves use loopholes to contravene any and all of these rules.
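
To make this concrete, here is a minimal sketch of what "thou shalt not kill, unless..." looks like as code. It is purely illustrative (Python, with every flag name invented for this example), not a real safety system: each branch is a hand-coded exception, and each exception is a loophole waiting to be stretched.

    from dataclasses import dataclass

    @dataclass
    class Situation:
        # Every exception we grant ourselves becomes another flag to model.
        target_is_attacking_me: bool = False
        under_lawful_military_orders: bool = False
        target_lawfully_condemned: bool = False

    def killing_permitted(s: Situation) -> bool:
        """Naive encoding of "thou shalt not kill" plus the exceptions we already allow ourselves."""
        if s.target_is_attacking_me:
            return True   # self-defense; but who judges whether the attack was real?
        if s.under_lawful_military_orders:
            return True   # warfare; "lawful orders" and "the right people" hide enormous complexity
        if s.target_lawfully_condemned:
            return True   # capital punishment, itself endlessly debated
        return False      # the default prohibition

    # The loopholes live in the inputs, not the rule: whoever describes the
    # situation sets the flags, so mislabel the context and the rule happily permits.
    print(killing_permitted(Situation(under_lawful_military_orders=True)))  # True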

The sad truth is that we have no effective way to prevent a Terminator future. If artificial intelligence is inevitable - as many people believe it is - then we may well be screwed. Even if you designed an AI to adhere to the rules, and locked it down so that it couldn't simply ignore them (the way we often do), Asimov amply demonstrated that the rules can still be broken.

It may be true that any possible robotics rules can ALWAYS be broken.  In which case, Bender has a proposition for you!