Steve Wozniak a Singularitarian?


Apple co-founder Steve Wozniak has seen so many stunning technological advances that he believes a day will come when computers and humans become virtually equal, with machines holding a slight edge in intelligence.

Speaking at a business summit held on the Gold Coast on Friday, Steve Jobs’s former partner at Apple Computer told his Australian audience that the world is nearing the point where computer brains will equal the cerebral prowess of humans.

When that time comes, Wozniak said, humans will generally withdraw into a pampered life within a system nearly perfected by machines that serve their whims, effectively reducing the average man and woman to human pets.

Widely regarded as one of the innovators of personal computing for his work on putting together Apple’s initial hardware offerings, Wozniak declared to his audience that “we’re already creating the superior beings, I think we lost the battle to the machines long ago.”

I always think of this guy when I go by Woz Way …

Read More

Hard Takeoff Sources

Definition of “hard takeoff” (noun) from Transhumanist Wiki:

The Singularity scenario in which a mind makes the transition from prehuman or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours (Yudkowsky 2001). The high likelihood of a hard takeoff once a roughly human-equivalent AI is created has been argued by the Singularity Institute in Yudkowsky 2003.

Hard takeoff sources and references, including hard science fiction novels, academic papers, and a few short articles and interviews:

Blood Music (1985) by Greg Bear

A Fire Upon the Deep (1992) by Vernor Vinge

“The Coming Technological Singularity” (1993) by Vernor Vinge

The Metamorphosis of Prime Intellect (1994) by Roger Williams

“Staring into the Singularity” (1996) by Eliezer Yudkowsky

Creating Friendly AI (2001) by Eliezer Yudkowsky

“Wiki Interview with Eliezer” (2002) by Anand

Read More

John Baez Interviews Eliezer Yudkowsky

From Azimuth, blog of mathematical physicist John Baez (author of the Crackpot Index):

This week I’ll start an interview with Eliezer Yudkowsky, who works at an institute he helped found: the Singularity Institute for Artificial Intelligence.

While many believe that global warming or peak oil are the biggest dangers facing humanity, Yudkowsky is more concerned about risks inherent in the accelerating development of technology. There are different scenarios one can imagine, but a bunch tend to get lumped under the general heading of a technological singularity. Instead of trying to explain this idea in all its variations, let me rapidly sketch its history and point you to some reading material. Then, on with the interview!


Read More

Michael Vassar Speaks to Yale Students on the Singularity

Coverage from Yale Daily News:

Twenty to 60 years from now, the advent of computers with above-human intelligence could transform civilization as we know it, according to Michael Vassar, president of the Singularity Institute for Artificial Intelligence. In a talk with around 35 students and faculty members in William L. Harkness Hall on Sunday, Vassar expounded the vision that his institute, featured in a Feb. 10 article in TIME Magazine, is working to make a reality. Known as the “singularity,” this futuristic scenario posits that artificial intelligence will surpass human intelligence within the next half-century. Once super-intelligent computers exist, they could generate even more intelligent and sophisticated machines, to the extent that humans would lose all control over the future, Vassar said.

“For the most important event in the history of events, it really should get a fair amount of buzz,” he said.

Vassar compared human and chimpanzee intelligence to argue that small changes in a system can represent large leaps in mental capacity. Just as a human is a small …

Read More

Does the Universe Contain a Mysterious Force Pulling Entities Towards Malevolence?

One of my favorite books about the mind is the classic How the Mind Works by Steven Pinker. The theme of the first chapter, which sets the stage for the whole book, is Artificial Intelligence and why it is so hard to build. The reason, in Minsky’s words, is that “easy things are hard”: the everyday thought processes we take for granted are extremely complex.

Unfortunately, benevolence is extremely complex too, so to build a Friendly AI, we have a lot of work to do. I see this imperative as much more important than other transhumanist goals like curing aging, because if we solve Friendly AI, then we get everything else we want, but if we don’t, we have to suffer the consequences of human-indifferent AI running amok with the biosphere. If such AI had access to powerful technology, such as molecular nanotechnology, it could rapidly build its …

Read More

Anna Salamon at UKH+: Survival in the Margins of the Singularity?

Anna Salamon is a Research Fellow at the Singularity Institute for Artificial Intelligence. Her work centers on analytical modeling of artificial intelligence risks, probabilistic forecasting, and strategies for human survival. Previously, she conducted machine learning research at NASA Ames, and applied mathematics research at the Rohwer Phage Metagenomics lab.

This talk considers the following question. Suppose powerful artificial intelligences are at some point created. In such a world, would humanity be able to survive by accident, in margins the super-intelligences haven’t bothered with, as rats and bacteria survive today?

Many have argued that we could, variously suggesting that humans might survive as pets, in wilderness preserves or zoos, or as a consequence of the super-intelligences’ desire to preserve a legacy legal system. Even in scenarios in which humanity as such doesn’t survive, Vernor Vinge, for example, suggests that human-like entities may serve as components within larger super-intelligences, and others suggest that some of the qualities we value, such as playfulness, empathy, or love, will automatically persist in whatever intelligences arise.

This talk will argue that all these scenarios are unlikely. …

Read More

TIME Article on Ray Kurzweil, Singularity Summit, Singularity Institute

Here’s the cover. Front-page article.

By Lev Grossman, 2045: The Year Man Becomes Immortal:

The Singularity isn’t just an idea; it attracts people, and those people feel a bond with one another. Together they form a movement, a subculture; Kurzweil calls it a community. Once you decide to take the Singularity seriously, you will find that you have become part of a small but intense and globally distributed hive of like-minded thinkers known as Singularitarians.

Not all of them are Kurzweilians, not by a long chalk. There’s room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won’t happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you’re walking around living your life and watching TV as if the artificial-intelligence revolution were not about to …

Read More

Converging Technologies Report Gives 2085 as Median Date for Human-Equivalent AI

From the NSF-backed study Converging Technologies in Society: Managing Nano-Info-Cogno-Bio Innovations (2005), on page 344:

Item 48 (median: 2070): Scientists will be able to understand and describe human intentions, beliefs, desires, feelings and motives in terms of well-defined computational processes. (5.1)

Item 50 (median: 2085): The computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain. (5.6)

These are the median estimates from the study’s 26 participants, mostly scientists.

Only 74 years away! WWII was 66 years ago, for reference. In the scheme of history, that is nothing.

Of course, the queried sample is non-representative of smart people everywhere.

Read More

Some Singularity, Superintelligence, and Friendly AI-Related Links

This is a good list of links to bring readers up to speed on some of the issues often discussed on this blog.

Nick Bostrom: Ethical Issues in Advanced Artificial Intelligence

Nick Bostrom: How Long Before Superintelligence?

Yudkowsky: Why is rapid self-improvement in human-equivalent AI possibly likely? Part 3 of Levels of Organization in General Intelligence: Seed AI

Anissimov: Relative Advantages of AI, Computer Programs, and the Human Brain

Yudkowsky: Creating Friendly AI: “Beyond anthropomorphism”

Yudkowsky: “Why We Need Friendly AI” (short)

Yudkowsky: “Knowability of FAI” (long)

Yudkowsky: A Galilean Dialogue on Friendliness (long)

Stephen Omohundro — Basic AI Drives (video)

Links on Friendly AI

Anissimov: Yes, the Singularity is the Biggest Threat to Humanity

Abstract …

Read More

Tallinn-Evans Challenge Grant Successful

As many of you probably know, I’m media director for the Singularity Institute, so I like to cross-post important posts from the SIAI blog here. Our challenge grant was a success — we raised $250,000. I am extremely grateful to everyone who donated. Without SIAI, humanity would be kind of screwed, because very few others take the challenge of Friendly AI seriously at all. The general consensus view on the question is “Asimov’s laws, right?” No, not Asimov’s laws. Many AI researchers still aren’t clear on the fact that Asimov’s laws were a plot device.

Anyway, here’s the announcement:

Thanks to the effort of our donors, the Tallinn-Evans Singularity Challenge has been met! All $125,000 contributed will be matched dollar for …

Read More

Yes, The Singularity is the Biggest Threat to Humanity

Some folks, like Aaron Saenz of Singularity Hub, were surprised that the NPR piece framed the Singularity as “the biggest threat to humanity”, but that’s exactly what the Singularity is. The Singularity is both the greatest threat and the greatest opportunity for our civilization, wrapped into one crucial event. This shouldn’t be surprising: intelligence is the most powerful force in the universe that we know of, so the creation of a higher form of intelligence/power would represent a tremendous threat/opportunity to the lesser intelligences that came before it, whose survival depends on the whims of the greater intelligence/power. The same thing happened with humans and the “lesser” hominids that we eliminated on the way to becoming the #1 species on the planet.

Why is the Singularity potentially a threat? Not because robots will “decide humanity is standing in their way”, per se, as Aaron writes, but because robots that don’t explicitly value humanity as a whole will eventually eliminate us by …

Read More

Michael Nielsen: What Should a Reasonable Person Believe About the Singularity?

Here’s the post. Basically, it takes a common Bayesian principle and applies it to the Singularity: extreme probabilities are not justified unless one has a very good understanding of the situation, so putting the probability of a Singularity very low or very high implies an understanding that people simply don’t have. Here’s the concluding paragraph:

These are interesting probability ranges. In particular, the 0.2 percent lower bound is striking. At that level, it’s true that the Singularity is pretty darned unlikely. But it’s still edging into the realm of a serious possibility. And to get this kind of probability estimate requires a person to hold quite an extreme set of positions, a range of positions that, in my opinion, while reasonable, requires considerable effort to defend. A less extreme person would end up with a probability estimate of a few percent or more. Given the remarkable nature of the Singularity, that’s quite high. In my opinion, the main reason the Singularity has attracted some people’s scorn …
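The style of reasoning behind a lower bound like that can be sketched numerically. The figures below are purely illustrative assumptions of mine, not Nielsen’s actual numbers: even a skeptic who assigns low probabilities to each step of the argument ends up, when the steps are chained together, with a combined estimate in the fraction-of-a-percent range rather than something astronomically small.

```python
def combined_probability(step_probabilities):
    """Multiply per-step probabilities, treating the steps as independent.

    This is the simplest way to chain "X is possible, and then Y happens,
    and then Z happens" style estimates into one overall figure.
    """
    p = 1.0
    for step in step_probabilities:
        p *= step
    return p

# Hypothetical skeptical estimates (my own placeholders) for three steps:
# human-level AI is possible in principle, it is actually built within the
# relevant timeframe, and it then leads to a Singularity-type transition.
skeptic = [0.2, 0.1, 0.1]
print(f"{combined_probability(skeptic):.3f}")  # prints 0.002, i.e. 0.2%
```

The point the post makes is that getting much below a figure like this requires defending an extreme position on every step at once, which demands more understanding than anyone plausibly has.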

Read More