OK, fine. Don’t listen to me.
But maybe you should listen to Patrick Lin, author of a report for the Office of Naval Research that concludes that our militarized robot fighters could rise up against us at any time:
The report, the first serious work of its kind on military robot ethics, envisages a fast-approaching era where robots are smart enough to make battlefield decisions that are at present the preserve of humans. Eventually, it notes, robots could come to display significant cognitive advantages over Homo sapiens.
“There is a common misconception that robots will do only what we have programmed them to do,” Patrick Lin, the chief compiler of the report, said. “Unfortunately, such a belief is sorely outdated, harking back to a time when . . . programs could be written and understood by a single person.” The reality, Dr Lin said, was that modern programs included millions of lines of code and were written by teams of programmers, none of whom knew the entire program: accordingly, no individual could accurately predict how the various portions of large programs would interact without extensive testing in the field – an option that may either be unavailable or deliberately sidestepped by the designers of fighting robots.
The answer, Lin says, is to “teach” the army robots right from wrong through an artificial intelligence learning process. Also, robots should each be programmed to adhere to a strict code of conduct that prevents them from killing the wrong humans.
I think a better idea would be to give every human a keyword that would cause all robots to self-destruct. Like if you said “Sproing!” and the robot’s head and arms and legs popped out and bounced around like they were on Slinkys. Oh man, that would be hilarious. Wait, what were we talking about again?