Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

1 Feb 2011

I’m Quoted on Friendly AI in the United Church Observer

This magazine circulates to 60,000 Canadian Christians. The topic of the article is friendly AI, and many people have already said they think it is one of the best mainstream media articles on the topic, because it doesn't take a simplistic angle and actually probes the technical issues.

Here's the bit with me in it:

Nevertheless, technologists are busy fleshing out the idea of "friendly AI" in order to safeguard humanity. The theory goes like this: if AI computer code is steeped in pacifist values from the very beginning, super-intelligence won't rewrite itself into a destroyer of humans. "We need to specify every bit of code, at least until the AI starts writing its own code," says Michael Anissimov, media director for the Singularity Institute for Artificial Intelligence, a San Francisco think-tank dedicated to the advancement of beneficial technology. "This way, it'll have a moral goal system more similar to Gandhi than Hitler, for instance."

Many people who talk naively about AI and superintelligence act as if a superintelligence will certainly do X or Y no matter what the initial conditions (there are all sorts of intuitive camps; "they'll just leave us alone and go into space" is a popular sentiment), implying that trying to set the initial conditions doesn't matter.

Would you rather have an AI with initial motivations closer to Gandhi's or to Hitler's? If you have any preference, then you've just demonstrated concern for the Friendly AI problem. Remarkably, I have a hard time, on a daily basis, arguing that an AI with more in common with Gandhi would be better to build first than one with more in common with Hitler, but it's true.

Some people say, "But whatever initial programming it has will be gone after many cycles of self-improvement." Not necessarily, because the AI will be making its own programming changes. It will dictate its own goal structure, not outside forces. It is more like a being creating itself than like an evolution-made being whose goal system is filled with strange attractors that flip back and forth depending on immediate context (that is, humans).
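To make this concrete, here is a minimal sketch in Python. It is a toy, not a real AI architecture, and every name in it (Agent, propose_rewrite, mutate) is something I'm inventing purely for illustration. It shows one way the logic can work: if the agent itself is the gatekeeper for rewrites of its own code, candidate changes get judged by its current goal system, so the goals tend to persist across many cycles of self-modification.

```python
# Toy illustration only. Not a real AI architecture; every name here
# (Agent, propose_rewrite, mutate) is hypothetical. The point: when the
# agent itself gatekeeps changes to its own code, candidates are judged
# by its *current* goal system, so the goals tend to persist.

import random

class Agent:
    def __init__(self, goal, policy):
        self.goal = goal      # evaluation criterion set at the start (the "initial conditions")
        self.policy = policy  # the part of itself the agent keeps rewriting

    def score(self, policy):
        """Judge a candidate policy under the current goal system."""
        return self.goal(policy)

    def self_improve(self, propose_rewrite, steps=1000):
        for _ in range(steps):
            candidate = propose_rewrite(self.policy)
            # Outside forces do not dictate the change; the agent adopts
            # a rewrite only if it scores at least as well under its own goal.
            if self.score(candidate) >= self.score(self.policy):
                self.policy = candidate
        return self.policy

# Hypothetical usage: a "policy" is just a number, and the goal is to be near 10.
goal = lambda p: -abs(p - 10)
mutate = lambda p: p + random.uniform(-1.0, 1.0)

agent = Agent(goal, policy=0.0)
print(agent.self_improve(mutate))  # the policy drifts toward 10; the goal never changed
```

The thing to notice is where the acceptance test lives: it runs under the agent's current goal, which is exactly why "the initial programming will be washed out" doesn't follow automatically.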

Setting the initial conditions for AI properly is probably the most important task humanity faces, because AGI seems more likely to reach superintelligence before human intelligence enhancement does, despite the latter's greater science-fiction-movie potential and opportunities for personal and tribal identification. John Smart presents a few good reasons why this is likely in his essay "Limits to Biology."

Filed under: friendly ai, me