Response to Charles Stross’ “Three arguments against the Singularity”

Stross:

super-intelligent AI is unlikely because, if you pursue Vernor’s program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it’s unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we’re likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

“Human-equivalent AI is unlikely” is a ridiculous comment. Human-level AI is extremely likely by 2060, if not sooner. (I’ll explain why in the next post.) Stross might not understand that the term “human-equivalent AI” always means AI of …

Read More

Steve Wozniak a Singularitarian?

Wozniak:

Apple co-founder Steve Wozniak has seen so many stunning technological advances that he believes a day will come when computers and humans become virtually equal, but with machines having a slight advantage in intelligence.

Speaking at a business summit held at the Gold Coast on Friday, the one-time co-equal of Steve Jobs at Apple Computer told his Australian audience that the world is nearing the point at which computer brains will equal the cerebral prowess of humans.

When that time comes, Wozniak said, humans will generally withdraw into a life of being pampered by a system almost perfected by machines, which will serve their whims and effectively reduce average men and women to human pets.

Widely regarded as one of the innovators of personal computing with his works on putting together the initial hardware offerings of Apple, Wozniak declared to his audience that “we’re already creating the superior beings, I think we lost the battle to the machines long ago.”

I always think of this guy when I go by Woz Way in San Jose.

Read More

Hard Takeoff Sources

Definition of “hard takeoff” (noun) from Transhumanist Wiki:

The Singularity scenario in which a mind makes the transition from prehuman or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours (Yudkowsky 2001). The high likelihood of a hard takeoff once a roughly human-equivalent AI is created has been argued by the Singularity Institute in Yudkowsky 2003.

Hard takeoff sources and references, which include hard science fiction novels, academic papers, and a few short articles and interviews:

Blood Music (1985) by Greg Bear
A Fire Upon the Deep (1992) by Vernor Vinge
“The Coming Technological Singularity” (1993) by Vernor Vinge
The Metamorphosis of Prime Intellect (1994) by Roger Williams
“Staring into the Singularity” (1996) by Eliezer Yudkowsky
Creating Friendly AI (2001) by Eliezer Yudkowsky
“Wiki Interview with Eliezer” (2002) by Anand
“Impact of the Singularity” (2002) by Eliezer Yudkowsky
“Levels of Organization in General Intelligence” (2002) by Eliezer Yudkowsky
“Ethical Issues in Advanced Artificial Intelligence” by Nick …

Read More

John Baez Interviews Eliezer Yudkowsky

From Azimuth, blog of mathematical physicist John Baez (author of the Crackpot Index):

This week I’ll start an interview with Eliezer Yudkowsky, who works at an institute he helped found: the Singularity Institute of Artificial Intelligence.

While many believe that global warming or peak oil are the biggest dangers facing humanity, Yudkowsky is more concerned about risks inherent in the accelerating development of technology. There are different scenarios one can imagine, but a bunch tend to get lumped under the general heading of a technological singularity. Instead of trying to explain this idea in all its variations, let me rapidly sketch its history and point you to some reading material. Then, on with the interview!

Continue.

Read More

Michael Vassar Speaks to Yale Students on the Singularity

Coverage from Yale Daily News:

Twenty to 60 years from now, the advent of computers with above-human intelligence could transform civilization as we know it, according to Michael Vassar, president of the Singularity Institute for Artificial Intelligence. In a talk with around 35 students and faculty members in William L. Harkness Hall on Sunday, Vassar expounded the vision that his institute, featured in a Feb. 10 article in TIME Magazine, is working to make a reality. Known as the “singularity,” this futuristic scenario posits that artificial intelligence will surpass human intelligence within the next half-century. Once super-intelligent computers exist, they could generate even more intelligent and sophisticated machines, to the extent that humans would lose all control over the future, Vassar said.

“For the most important event in the history of events, it really should get a fair amount of buzz,” he said.

Vassar compared human and chimpanzee intelligence to argue that small changes in a system can represent large leaps in mental capacity. Just as a human is a small evolutionary step from other primates, a …

Read More

Does the Universe Contain a Mysterious Force Pulling Entities Towards Malevolence?

One of my favorite books about the mind is the classic How the Mind Works by Steven Pinker. The theme of the first chapter, which sets the stage for the whole book, is Artificial Intelligence and why it is so hard to build. The reason is that, in the words of Minsky, “easy things are hard”. The everyday thought processes we take for granted are extremely complex.

Unfortunately, benevolence is extremely complex too, so to build a friendly AI, we have a lot of work to do. I see this imperative as much more important than other transhumanist goals like curing aging: if we solve friendly AI, we get everything else we want, but if we don’t, we have to suffer the consequences of human-indifferent AI running amok with the biosphere. If such AI had access to powerful technology, such as molecular nanotechnology, it could rapidly build its own infrastructure and displace us without much of a fight. It would be disappointing to spend billions of dollars …

Read More

Anna Salamon at UKH+: Survival in the Margins of the Singularity?

Anna Salamon is a Research Fellow at the Singularity Institute for Artificial Intelligence. Her work centers on analytical modeling of artificial intelligence risks, probabilistic forecasting, and strategies for human survival. Previously, she conducted machine learning research at NASA Ames, and applied mathematics research at the Rohwer Phage Metagenomics lab.

This talk considers the following question. Suppose powerful artificial intelligences are at some point created. In such a world, would humanity be able to survive by accident, in margins the super-intelligences haven’t bothered with, as rats and bacteria survive today?

Many have argued that we could, suggesting variously that humans could survive as pets, in wilderness preserves or zoos, or as consequences of the super-intelligences’ desire to preserve a legacy legal system. Even in scenarios in which humanity as such doesn’t survive, Vernor Vinge, for example, suggests that human-like entities may serve as components within larger super-intelligences, and others suggest that some of the qualities we value, such as playfulness, empathy, or love, will automatically persist in whatever intelligences arise.

This talk will argue that all these scenarios are unlikely. …

Read More

TIME Article on Ray Kurzweil, Singularity Summit, Singularity Institute

Here’s the cover. Front-page article.

By Lev Grossman, 2045: The Year Man Becomes Immortal:

The Singularity isn’t just an idea. It attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There’s room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won’t happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you’re walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your …

Read More

Converging Technologies Report Gives 2085 as Median Date for Human-Equivalent AI

From the NSF-backed study Converging Technologies in Society: Managing Nano-Info-Cogno-Bio Innovations (2005), on page 344:

2070: (48) Scientists will be able to understand and describe human intentions, beliefs, desires, feelings and motives in terms of well-defined computational processes. (5.1)

2085: (50) The computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain. (5.6)

These are the median estimates from the 26 participants in the study, mostly scientists.

Only 74 years away! WWII was 66 years ago, for reference. In the scheme of history, that is nothing.
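To make the arithmetic behind that comparison explicit, here is a minimal sketch; the 2011 reference year is an assumption based on when this post was written, not something stated in the report.

```python
# Minimal sketch of the year-gap arithmetic above.
# REFERENCE_YEAR is an assumption (this post appears to date from 2011).
REFERENCE_YEAR = 2011

MEDIAN_AI_ESTIMATE = 2085  # report's median date for human-brain-equivalent machines
WW2_END = 1945             # end of World War II

years_until_ai = MEDIAN_AI_ESTIMATE - REFERENCE_YEAR  # 74 years away
years_since_ww2 = REFERENCE_YEAR - WW2_END            # 66 years ago

print(f"Years until the median AI estimate: {years_until_ai}")
print(f"Years since the end of WWII: {years_since_ww2}")
```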

Of course, the queried sample is non-representative of smart people everywhere.

Read More

Some Singularity, Superintelligence, and Friendly AI-Related Links

This is a good list of links to bring readers up to speed on some of the issues often discussed on this blog.

Nick Bostrom: Ethical Issues in Advanced Artificial Intelligence http://www.nickbostrom.com/ethics/ai.html

Nick Bostrom: How Long Before Superintelligence? http://www.nickbostrom.com/superintelligence.html

Yudkowsky: Why is rapid self-improvement in human-equivalent AI possible and likely? Part 3 of Levels of Organization in General Intelligence: Seed AI http://intelligence.org/upload/LOGI/seedAI.html

Anissimov: Relative Advantages of AI, Computer Programs, and the Human Brain http://www.acceleratingfuture.com/articles/relativeadvantages.htm

Yudkowsky: Creating Friendly AI: “Beyond anthropomorphism” http://intelligence.org/ourresearch/publications/CFAI/anthro.html

Yudkowsky: “Why We Need Friendly AI” (short) http://www.preventingskynet.com/why-we-need-friendly-ai/

Yudkowsky: “Knowability of FAI” (long) http://acceleratingfuture.com/wiki/Knowability_Of_FAI

Yudkowsky: A Galilean Dialogue on Friendliness (long) http://sl4.org/wiki/DialogueOnFriendliness

Stephen Omohundro: The Basic AI Drives http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/ http://selfawaresystems.com/2009/02/18/agi-08-talk-the-basic-ai-drives/ (video)

Links on Friendly AI http://www.acceleratingfuture.com/michael/blog/2006/09/consolidation-of-links-on-friendly-ai/

Anissimov: Yes, the Singularity is the Biggest Threat to Humanity http://www.acceleratingfuture.com/michael/blog/2011/01/yes-the-singularity-is-the-biggest-threat-to-humanity/

Abstract of a talk I’m giving soon http://www.acceleratingfuture.com/michael/blog/2011/01/my-upcoming-talk-in-texas-anthropomorphism-and-moral-realism-in-advanced-artificial-intelligence/

Most recent SIAI publications: http://www.acceleratingfuture.com/michael/blog/2010/12/new-singularity-institute-publications-in-2010/

More posts from this blog http://www.acceleratingfuture.com/michael/blog/2010/06/the-world-the-singularity-creates-could-destroy-all-value/ http://www.acceleratingfuture.com/michael/blog/2010/06/reducing-long-term-catastrophic-artificial-intelligence-risk/

Read More

Tallinn-Evans Challenge Grant Successful

As many of you probably know, I’m media director for the Singularity Institute, so I like to cross-post important posts from the SIAI blog here. Our challenge grant was a success — we raised $250,000. I am extremely grateful to everyone who donated. Without SIAI, humanity would be kind of screwed, because very few others take the challenge of Friendly AI seriously — at all. The general consensus view on the question is “Asimov’s Laws, right?” No, not Asimov’s Laws. Many AI researchers still aren’t clear on the fact that Asimov’s Laws were a plot device.

Anyway, here’s the announcement:

Thanks to the effort of our donors, the Tallinn-Evans Singularity Challenge has been met! All $125,000 contributed will be matched dollar for dollar by Jaan Tallinn and Edwin Evans, raising a total of $250,000 to fund the Singularity Institute’s operations in 2011. On behalf of our staff, volunteers, and entire community, I want to personally thank everyone who donated. Keep watching this blog …

Read More

Yes, The Singularity is the Biggest Threat to Humanity

Some folks, like Aaron Saenz of Singularity Hub, were surprised that the NPR piece framed the Singularity as “the biggest threat to humanity”, but that’s exactly what the Singularity is. The Singularity is both the greatest threat and the greatest opportunity for our civilization, all wrapped into one crucial event. This shouldn’t be surprising — after all, intelligence is the most powerful force in the universe that we know of; obviously, the creation of a higher form of intelligence/power would represent a tremendous threat/opportunity to the lesser intelligences that come before it, whose survival depends on the whims of the greater intelligence/power. The same thing happened with humans and the “lesser” hominids that we eliminated on the way to becoming the #1 species on the planet.

Why is the Singularity potentially a threat? Not because robots will “decide humanity is standing in their way”, per se, as Aaron writes, but because robots that don’t explicitly value humanity as a whole will eventually eliminate us by pursuing instrumental goals not conducive to our survival. No explicit …

Read More