I’m Quoted on Friendly AI in the United Church Observer

This magazine circulates to 60,000 Canadian Christians. The topic of the article is friendly AI, and several people have already said they think it is one of the best mainstream media articles on the topic, because it doesn’t take a simplistic angle and actually probes the technical issues.

Here’s the bit with me in it:

Nevertheless, technologists are busy fleshing out the idea of “friendly AI” in order to safeguard humanity. The theory goes like this: if AI computer code is steeped in pacifist values from the very beginning, super-intelligence won’t rewrite itself into a destroyer of humans. “We need to specify every bit of code, at least until the AI starts writing its own code,” says Michael Anissimov, media director for the Singularity Institute for Artificial Intelligence, a San Francisco think-tank dedicated to the advancement of beneficial technology. “This way, it’ll have a moral goal system more similar to Gandhi than Hitler, for instance.”

Many people who talk naively about AI and superintelligence act as if a superintelligence will certainly do X or Y no matter what the initial conditions (there are all sorts of intuitive camps, of course; “they’ll just leave us alone and go into space” is a popular sentiment), implying that trying to set those initial conditions doesn’t matter.

Would you rather have an AI with initial motivations closer to Gandhi’s or Hitler’s? If you have any preference at all, then you’ve just demonstrated concern for the Friendly AI problem. It’s remarkable that I actually have a hard time, on a daily basis, arguing that an AI with more in common with Gandhi would be better to build first than one with more in common with Hitler, but it’s true.

Some people say, “But whatever initial programming it has will be gone after many cycles of self-improvement.” Not necessarily, because the AI will be making its own programming changes: it, not outside forces, will dictate its goal structure. That makes it more like a being creating itself than like an evolution-built being whose goal system is filled with strange attractors that flip back and forth depending on immediate context (in other words, humans).
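To make that stability intuition concrete, here’s a minimal toy sketch in Python. It’s entirely illustrative: a trivial hill-climber, not a real AGI architecture, and all the names in it are made up for this example. The agent repeatedly rewrites one of its own parameters, but every candidate rewrite is judged by its current goal, so goal-eroding changes never get adopted and the initial goal persists through thousands of “self-modifications”:

```python
import random

# Toy illustration only: a "self-improving" agent as a hill-climber.
# TARGET stands in for the goal content fixed by the initial programming.
TARGET = 42.0

def goal_score(params):
    # The unchanging yardstick the agent inherited at startup.
    return -abs(params["output"] - TARGET)

def propose_rewrite(params):
    # A candidate modification to the agent's own "code" (one parameter).
    return {"output": params["output"] + random.uniform(-5.0, 5.0)}

def self_improve(params, steps=10000):
    for _ in range(steps):
        candidate = propose_rewrite(params)
        # The *current* goal vets every rewrite, so changes that would
        # degrade the goal are rejected rather than accumulating.
        if goal_score(candidate) >= goal_score(params):
            params = candidate
    return params

print(self_improve({"output": 0.0}))  # ends up near 42, the initial goal
```

A real seed AI would also evaluate rewrites to its goal representation itself, which is where the hard part of Friendly AI lives; the sketch only shows why “the initial programming will be gone” doesn’t follow automatically.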

Setting the initial conditions for AI properly is probably the most important task humanity faces, because AGI seems more likely than human intelligence enhancement to reach superintelligence first, despite the latter’s better science-fiction-movie potential and greater possibilities for personal and tribal identification. John Smart presents a few good reasons why this is likely in his Limits to Biology essay.

Comments

  1. Christopher Carr

    As I can’t readily recall an instance of initial motivations being important to the ultimate motivational disposition of a recursively self-improved AI, initial motivations are likely not important.

    ;-)

    • Panda

      The reason initial motivations matter is that they direct how so-called “ultimate motivation[s]” will evolve.

      For instance, imagine the following initial motivation: do nothing except run a subroutine displaying the Windows logo. How do you get from that to a new ultimate motivation, much less a destructive one?

      Although the example is simplistic, the point is general. For ultimate motivations to arise, they must develop from somewhere. Unless you think that ultimate motivations develop randomly, initial motivations are the causal origin of all AI behavior, including the behavior of developing new motivations. You might be right if you could show that all AI behavior is inevitably random, but I doubt you can show that.

    • Panda, he was being sarcastic… (see the smiley face?) Your sarcasm-cluelessness reminds me of myself, though.

      I think people imagine motivations coming along inherently with agenthood; it’s not about randomness.

  2. Dave

    Congrats on the citation, Michael, and thanks for the spirit of open inquiry re: the Christian magazine.

    I also thought it was a terrific article – the last line was especially profound.

  3. Al Zindiq

    Mr. Anissimov, you say “It’s remarkable that I actually have a hard time, on a daily basis, arguing that an AI with more in common with Gandhi would be better to build first than one with more in common with Hitler, but it’s true.”

    Why so challenging? I ask you what Stalin himself once asked: how many tank battalions did Gandhi build?

    You censored my comment from yesterday, the Ten Indian Commandments…

    If somebody had written something bad about Lenin or Saddam, you would have posted it immediately, yes? It seems you are a closet religious fanatic.

