Interviewed by The Rational Future

Here’s a writeup.

Embedded below is an interview conducted by Adam A. Ford of The Rational Future. Topics covered include:

- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an “intelligence explosion”?
- Is acceleration of technological trends required for a Singularity? (Moore’s Law and hardware trajectories, AI research progressing faster?)
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (e.g., the emergence of markets, the evolution of eyes)
- Does AI need to be conscious or have human-like “intentionality” in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?

Read More

Why We Need Friendly AI

An article I often point people to is “Why We Need Friendly AI”, an older (2004) article by Eliezer Yudkowsky on the challenge of Friendly AI:

There are certain important things that evolution created. We don’t know that evolution reliably creates these things, but we know that it happened at least once. A sense of fun, the love of beauty, taking joy in helping others, the ability to be swayed by moral argument, the wish to be better people. Call these things humaneness, the parts of ourselves that we treasure – our ideals, our inclinations to alleviate suffering. If human is what we are, then humane is what we wish we were. Tribalism and hatred, prejudice and revenge, these things are also part of human nature. They are not humane, but they are human. They are a part of me; not by my choice, but by evolution’s design, and the heritage of three and a half billion years of lethal combat. Nature, bloody in tooth and claw, inscribed each base of my DNA. That is the tragedy of the …

Read More

Complex Value Systems are Required to Realize Valuable Futures

A new paper by Eliezer Yudkowsky is online on the SIAI publications page, “Complex Value Systems are Required to Realize Valuable Futures”. This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.

Abstract: A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome”, despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI …

Read More

Singularity Institute Announces Research Associates Program

From SIAI blog:

The Singularity Institute is proud to announce the expansion of our research efforts with our new Research Associates program!

Research associates are chosen for their excellent thinking ability and their passion for our core mission. Research associates are not salaried staff, but we encourage their Friendly AI-related research outputs by, for example, covering their travel costs for conferences at which they present academic work relevant to our mission.

Our first three research associates are:

Daniel Dewey, an AI researcher, holds a B.S. in computer science from Carnegie Mellon University. He is presenting his paper ‘Learning What to Value’ at the AGI-11 conference this August.

Vladimir Nesov, a decision theory researcher, holds an M.S. in applied mathematics and physics from the Moscow Institute of Physics and Technology. He helped Wei Dai develop updateless decision theory, in pursuit of one of the Singularity Institute’s core research goals: developing a ‘reflective decision theory.’

Peter de Blanc, an AI …

Read More

John Baez Interviews Eliezer Yudkowsky

From Azimuth, the blog of mathematical physicist John Baez (author of the Crackpot Index):

This week I’ll start an interview with Eliezer Yudkowsky, who works at an institute he helped found: the Singularity Institute for Artificial Intelligence.

While many believe that global warming or peak oil are the biggest dangers facing humanity, Yudkowsky is more concerned about risks inherent in the accelerating development of technology. There are different scenarios one can imagine, but a bunch tend to get lumped under the general heading of a technological singularity. Instead of trying to explain this idea in all its variations, let me rapidly sketch its history and point you to some reading material. Then, on with the interview!

Read More

Does the Universe Contain a Mysterious Force Pulling Entities Towards Malevolence?

One of my favorite books about the mind is the classic How the Mind Works by Steven Pinker. The theme of the first chapter, which sets the stage for the whole book, is Artificial Intelligence and why it is so hard to build. The reason, in the words of Minsky, is that “easy things are hard”. The everyday thought processes we take for granted are extremely complex.

Unfortunately, benevolence is extremely complex too, so building a Friendly AI will take a great deal of work. I see this imperative as far more important than other transhumanist goals like curing aging, because if we solve Friendly AI we get everything else we want, whereas if we don’t, we suffer the consequences of human-indifferent AI running amok with the biosphere. If such an AI had access to powerful technology, such as molecular nanotechnology, it could rapidly build its own infrastructure and displace us without much of a fight. It would be disappointing to spend billions of dollars …

Read More

Some Singularity, Superintelligence, and Friendly AI-Related Links

This is a good list of links to bring readers up to speed on some of the issues often discussed on this blog.

Nick Bostrom: Ethical Issues in Advanced Artificial Intelligence http://www.nickbostrom.com/ethics/ai.html

Nick Bostrom: How Long Before Superintelligence? http://www.nickbostrom.com/superintelligence.html

Yudkowsky: Why might rapid self-improvement in human-equivalent AI be likely? Part 3 of Levels of Organization in General Intelligence: Seed AI http://intelligence.org/upload/LOGI/seedAI.html

Anissimov: Relative Advantages of AI, Computer Programs, and the Human Brain http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Yudkowsky: Creating Friendly AI: “Beyond anthropomorphism” http://intelligence.org/ourresearch/publications/CFAI/anthro.html

Yudkowsky: “Why We Need Friendly AI” (short) http://www.preventingskynet.com/why-we-need-friendly-ai/

Yudkowsky: “Knowability of FAI” (long) http://acceleratingfuture.com/wiki/Knowability_Of_FAI

Yudkowsky: A Galilean Dialogue on Friendliness (long) http://sl4.org/wiki/DialogueOnFriendliness

Omohundro: The Basic AI Drives http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ (paper) http://selfawaresystems.com/2009/02/18/agi-08-talk-the-basic-ai-drives/ (video)

Links on Friendly AI http://www.acceleratingfuture.com/michael/blog/2006/09/consolidation-of-links-on-friendly-ai/

Anissimov: Yes, the Singularity is the Biggest Threat to Humanity http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/

Abstract of a talk I’m giving soon http://www.acceleratingfuture.com/michael/blog/2011/01/my-upcoming-talk-in-texas-anthropomorphism-and-moral-realism-in-advanced-artificial-intelligence/

Most recent SIAI publications: http://www.acceleratingfuture.com/michael/blog/2010/12/new-singularity-institute-publications-in-2010/

More posts from this blog http://www.acceleratingfuture.com/michael/blog/2010/06/the-world-the-singularity-creates-could-destroy-all-value/ http://www.acceleratingfuture.com/michael/blog/2010/06/reducing-long-term-catastrophic-artificial-intelligence-risk/

Read More

I’m Quoted on Friendly AI in the United Church Observer

This magazine circulates to 60,000 Canadian Christians. The topic of the article is Friendly AI, and many people have already said they think it is one of the best mainstream media articles on the topic because it doesn’t take a simplistic angle and actually probes the technical issues.

Here’s the bit with me in it:

Nevertheless, technologists are busy fleshing out the idea of “friendly AI” in order to safeguard humanity. The theory goes like this: if AI computer code is steeped in pacifist values from the very beginning, super-intelligence won’t rewrite itself into a destroyer of humans. “We need to specify every bit of code, at least until the AI starts writing its own code,” says Michael Anissimov, media director for the Singularity Institute for Artificial Intelligence, a San Francisco think-tank dedicated to the advancement of beneficial technology. “This way, it’ll have a moral goal system more similar to Gandhi than Hitler, for instance.”

Many people who naively talk about AI and superintelligence act like superintelligence will certainly do X or Y (of course there are all …

Read More

My Upcoming Talk in Texas: Anthropomorphism and Moral Realism in Advanced Artificial Intelligence

I was recently informed that my abstract was accepted for presentation at the Society for Philosophy and Technology conference in Denton, TX, this upcoming May 26–29. You may have heard of their journal, Techné. Register now for the exciting chance to see me onstage, talking AI and philosophy. If you would volunteer to film the talk, that would make me even more excited, and it would be valuable to our most noble cause.

Here’s the abstract:

Anthropomorphism and Moral Realism in Advanced Artificial Intelligence
Michael Anissimov, Singularity Institute for Artificial Intelligence

Humanity has attributed human-like qualities to simple automatons since the time of the Greeks. This highlights our tendency to anthropomorphize (Yudkowsky 2008). Today, many computer users anthropomorphize software programs. Human psychology is extremely complex, and most of the simplest everyday tasks have yet to be replicated by a computer or robot (Pinker 1997). As robotics and Artificial Intelligence (AI) become a larger and more important part of civilization, we have to ensure that robots are capable of making complex, unsupervised decisions in ways …

Read More

Phil Bowermaster on the Singularity

Over at The Speculist, Phil Bowermaster understands the points I made in “Yes, the Singularity is the Biggest Threat to Humanity”. That post, by the way, was recently linked by Instapundit, who unfortunately probably doesn’t get the point I’m trying to make. Anyway, Phil said:

Greater than human intelligences might wipe us out in pursuit of their own goals as casually as we add chlorine to a swimming pool, and with as little regard as we have for the billions of resulting deaths. Both the Terminator scenario, wherein they hate us and fight a prolonged war with us, and the Matrix scenario, wherein they keep us around essentially as cattle, are a bit too optimistic. It’s highly unlikely that they would have any use for us or that we could resist such a force even for a brief period of time — just as we have no need for the bacteria in the swimming pool and they wouldn’t have much of a shot against our chlorine assault.

“How would the superintelligence be able to wipe …

Read More

Tallinn-Evans Challenge Grant Successful

As many of you probably know, I’m media director for the Singularity Institute, so I like to cross-post important posts from the SIAI blog here. Our challenge grant was a success: we raised $250,000. I am extremely grateful to everyone who donated. Without SIAI, humanity would be kind of screwed, because very few others take the challenge of Friendly AI seriously at all. The general consensus view on the question is “Asimov’s Laws, right?” No, not Asimov’s Laws. Many AI researchers still aren’t clear on the fact that Asimov’s Laws were a plot device.

Anyway, here’s the announcement:

Thanks to the effort of our donors, the Tallinn-Evans Singularity Challenge has been met! All $125,000 contributed will be matched dollar for dollar by Jaan Tallinn and Edwin Evans, raising a total of $250,000 to fund the Singularity Institute’s operations in 2011. On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Keep watching this blog …

Read More