A while ago I wrote a story about self-replicating, autonomous robots that would clean up debris in our immediate portion of the solar system and then propagate out into the galaxy, returning information and perhaps informing us of our first alien encounter via laser-targeted, light-speed communications. A very futuristic concept and a cool story.
In the course of my interview with John D. Mathews, professor of electrical engineering, I asked him if there were any concerns about these robots — which would also, by necessity, learn — suddenly turning against their makers. I was specifically thinking of the replicators on the TV show Stargate.
Mathews’s response was that some form of Isaac Asimov’s “Three Laws of Robotics” would, of course, direct the robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
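The three laws form a strict priority ordering: each law yields to the ones before it. Purely as an illustration (none of this code is from the article, and every field name here is invented), that ordering can be sketched as a sequence of checks evaluated in precedence order:

```python
# Toy sketch of the Three Laws as an ordered rule check.
# The `action` dict uses hypothetical boolean flags describing an
# action's predicted outcomes; the names are illustrative only.

def permitted(action):
    """Return True if a hypothetical robot may take `action`."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("allows_human_harm"):
        return False
    # Second Law: obey human orders, subordinate to the First Law.
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, subordinate to the first two laws
    # (self-destruction is allowed only if a human order requires it).
    if action.get("destroys_self") and not action.get("ordered"):
        return False
    return True

print(permitted({}))                                        # True
print(permitted({"harms_human": True}))                     # False
print(permitted({"destroys_self": True, "ordered": True}))  # True
```

The point of the sketch is the order of the checks: a higher-numbered law never overrides a lower-numbered one, which is exactly the structure Asimov's wording builds in with its "except where such orders would conflict" clauses.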
I pressed a little harder, but Mathews thought that potential problems could be worked out. I don’t doubt him. What struck me is that, when it comes to robots with any greater intellect than an industrial, repetitive, one-armed pick-and-place machine, we automatically turn to fiction, science fiction in particular, for our sense of the possibilities and potential hazards.
Running through my mind was HAL of 2001: A Space Odyssey fame going insane and, as he was shut down, singing “Daisy” slower and slower. What came to mind were both the Replicators and the Cylons, which began as obvious machines and eventually took the form of their makers and tried to kill them. Or the robotic house in Demon Seed trying to replicate itself via forced impregnation of the house’s female inhabitant. And even Project 79 in Martin Caidin’s The God Machine, which became sentient and tried to take over the world. I remembered all the stories of robot rage, death and destruction.
Mathews, on the other hand, pulled out the Three Laws: a way for robots to be beneficial to humanity while protecting themselves, and the basis for a great many stories about good robots, or at least robots no worse than most humans.
Perhaps it is a case of the glass being half full or half empty; I’m not sure. Today, besides industrial robots, the most contact most of us have with a robotic device is a Roomba, and it certainly is not sentient. Surgeons perform robotic surgery, but that is usually a misnomer; what they are actually doing is teleoperating very small tools. The military also has robot drones, but as far as I know, they too are teleoperated. None of our robots are sentient, yet.
But an IBM computer managed to beat two of the best Jeopardy! champions last year, and beat them soundly. Certainly the machine could not move on its own, and its process of answering was not true artificial intelligence but rather the possession of an enormous amount of available data and the speed to access it. Still, it is a first step.
Will we someday explore other stars side by side with robotic companions, helpers, equals? Will we be able to trust them any more than we would trust a human crewmember? Would it really be all that different?