While researching the current state of “roboethics” (a lame term that marginalizes “AI ethics”, the more relevant superset of roboethics), I found a bunch of March 2007 references to a South Korean project to draft a Robot Ethics Charter. These references promised the charter would be released in April 2007 and subsequently adopted by the government. However, I can’t find it anywhere. Anyone have a clue about where it went? One article summarized the effort as follows:
The prospect of intelligent robots serving the general public brings up an unprecedented question of how robots and humans should be expected to treat each other. South Korea’s Ministry of Commerce, Industry and Energy has decided that a written code of ethics is in order.
Starting last November, a team of five members, including a science-fiction writer, have been drafting a Robot Ethics Charter to address and prevent “robot abuse of humans and human abuse of robots.” Some of the sensitive subject areas covered in the charter include human addiction to robots, humans treating robots like a spouse, and prohibiting robots from ever hurting a human.
Critics of the charter say that the charter is premature and may not have a practical application once robots are really an integral part of society. Says Mark Tilden, the designer of the toy RoboSapien, “From experience, the problem is that giving robots morals is like teaching an ant to yodel. We’re not there yet, and as many of Asimov’s stories show, the conundrums robots and humans would face would result in more tragedy than utility.”
“Asimov” refers to science-fiction author Isaac Asimov, who created a robot code of ethics for his robot stories. His Three Laws were: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey human orders unless those orders conflict with the First Law; and (3) a robot must protect its own existence unless that protection conflicts with the first two Laws. These apparently served as inspiration for the South Korean Robot Ethics Charter.
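The Laws form a strict priority ordering, which can be sketched as a toy predicate. This is purely illustrative (all names are hypothetical, not from any real system), and the sketch itself hints at the problem: fuzzy concepts like “harm” or “through inaction” have to be flattened into crisp booleans before the precedence logic can even apply.

```python
def permitted(harms_human: bool, ordered: bool, endangers_self: bool) -> bool:
    """Naive precedence check for a candidate robot action.

    Purely illustrative: the hard part (deciding whether an action
    'harms a human' at all) is assumed away as a boolean input.
    """
    if harms_human:
        # First Law outranks everything: never permitted.
        return False
    if ordered:
        # Second Law: an order overrides self-preservation,
        # since the First Law objection was already ruled out.
        return True
    # Third Law applies only when no higher-priority concern is in play.
    return not endangers_self
```

For example, an order to perform a self-endangering task is permitted (Second Law outranks Third), while an order to harm a human is not (First Law outranks Second).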
Meanwhile, South Korea’s Ministry of Information and Communication plans to have a robot in every household by 2020. “Personally, I wish to accomplish that objective by 2010,” said Oh Sang Rok, head of the ministry’s project.
Personally, I think Asimov’s Three Laws are a terrible inspiration for any roboethics code. The Laws were created as a plot device: each time they broke down, a story came out of it. Unfortunately, they’ve been taken seriously as a possible solution to the problem of human-unfriendly robots and AI for many decades now. Asimov himself said, “There was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the 61 words of the Three Laws.”
Back in summer 2004, the Singularity Institute launched a website project, “Three Laws Unsafe”, a critique of Asimov’s Laws riding on the publicity of the “I, Robot” movie. Check out the articles section, which includes a submission of mine.
But yeah, anyone know where that Robot Ethics Charter is, or the names of anyone who was working on it? We need to get our magnifying glasses out and scrutinize that shit.