If you long for the sort of space-age future once envisioned on The Jetsons, here’s one to watch: a robot-drone combo that could soon make garbage pick-up automatic.
ROAR, which stands for Robot-based Autonomous Refuse handling, is a collaboration among the Volvo Group, a Swedish waste-recycling company, engineering students at two Swedish universities, and Penn State.
“Within Volvo Group we foresee a future with more automation,” says Per-Lage Götvall, Volvo Patent Coordination Manager. “This project provides a way to stretch the imagination and test new concepts to shape transport solutions for tomorrow.” The concept the ROAR team came up with is shown in this video.
S. Shyam Sundar is Distinguished Professor and founding director of the Media Effects Research Laboratory in Penn State’s College of Communications. His research investigates the social and psychological impacts of human interaction with websites and social media.
More recently, Sundar has turned his attention to the emerging complexities of the human-robotic relationship. He and his graduate students are exploring questions about what people really want from robots, and what they fear the most about them. When it comes to cozying up to robots in our homes and lives, what makes us comfortable? And what gives us the creeps? Tune in and find out. Please email series producer Melissa Beattie-Moss at firstname.lastname@example.org with ideas, comments and questions.
A while ago I wrote a story about self-replicating, autonomous robots that would clean up debris in our immediate portion of the solar system and then propagate out into the galaxy, returning information and perhaps informing us of our first alien encounter via laser-targeted, light-speed communications. A very futuristic concept and a cool story.
In the course of my interview with John D. Mathews, professor of electrical engineering, I asked him if there were any concerns about these robots — which would also, by necessity, learn — suddenly turning against their makers. I was specifically thinking of the replicators on the TV show Stargate.
Mathews’ response was that some form of Isaac Asimov’s “Three Laws of Robotics” would of course direct the robots:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
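The Three Laws are, at heart, a strict priority ordering: each law applies only when it doesn’t conflict with the laws above it. As a purely illustrative toy (the names and flags here are invented, not anything from Asimov or the researchers quoted), that ordering can be sketched as a short prioritized check:

```python
# Toy sketch of the Three Laws as a strictly prioritized rule check.
# All names and fields are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False          # First Law: direct harm
    allows_human_harm: bool = False    # First Law: harm through inaction
    disobeys_order: bool = False       # Second Law
    endangers_self: bool = False       # Third Law
    required_by_higher_law: bool = False  # overrides the Third Law

def permitted(a: Action) -> bool:
    """Return True if the action passes the Three Laws, checked in priority order."""
    # First Law: never harm a human, by action or by inaction.
    if a.harms_human or a.allows_human_harm:
        return False
    # Second Law: obey human orders (any order conflicting with the
    # First Law was already rejected above).
    if a.disobeys_order:
        return False
    # Third Law: avoid self-destruction, unless a higher law requires it.
    if a.endangers_self and not a.required_by_higher_law:
        return False
    return True
```

For example, `permitted(Action(endangers_self=True))` is False, but the same risky action becomes permissible when `required_by_higher_law=True`, capturing how the Third Law yields to the first two.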
I pressed a little harder, but Mathews thought that potential problems could be worked out. I don’t doubt him. What struck me is that, when it comes to robots with any greater intellect than an industrial, repetitive, one-armed, pick-and-place machine, we automatically turn to fiction, science fiction, for references to possibilities and potential hazards.
Running through my mind was HAL of 2001: A Space Odyssey fame going insane, being shut down, and singing “Daisy” slower and slower. What came to mind were both the Replicators and the Cylons, which began as obvious machines and eventually took the form of their makers and tried to kill them. Or the robotic house in Demon Seed trying to replicate itself via forced impregnation of the house’s female inhabitant. And even Project 79 in Martin Caidin’s The God Machine, which became sentient and tried to take over the world. I remembered all the stories of robot rage, death and destruction.
Mathews, on the other hand, pulled out the Three Laws: a way for robots to be beneficial to humanity while protecting themselves, and the basis for a great many stories about good robots, or at least robots no worse than most humans.
Perhaps it is a case of the glass half full or half empty. I’m not sure. Today, besides industrial robots, the most contact most of us have with a robotic device is a Roomba, and Roombas are certainly not sentient. Surgeons do robotic surgery, but that is usually a misnomer: what they are actually doing is teleoperating very small tools. The military also has robot drones, but as far as I know, they too are teleoperated. None of our robots are sentient, yet.
But an IBM computer managed to beat two of the best Jeopardy! champs last year, and beat them soundly. Certainly the machine could not move on its own, and its process of answering rested not on actual artificial intelligence but on the possession of an enormous amount of available data and the speed to access it. But it is a first step.
Will we someday explore other stars side-by-side with robotic companions, helpers, equals? Will we be able to trust them any more than we would trust a human crew member? Would it really be all that different?