Superintelligent Will

New paper on superintelligence by Nick Bostrom:

This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.

Read More

Complex Value Systems are Required to Realize Valuable Futures

A new paper by Eliezer Yudkowsky is online on the SIAI publications page, “Complex Value Systems are Required to Realize Valuable Futures”. This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.

Abstract: A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome”, despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI …

Read More

The Final Weapon

It’s not really “fair”, but history generally consists of people getting better and better weapons, and whoever has the best weapons and the best armies makes the rules. The historical examples of this phenomenon are practically unlimited. The reason America is respected and feared today is its military capabilities, particularly nuclear weapons. Complain if you want, but this is reality.

I am excited by the possibility that the 200,000-year arms race will finally be brought to an end by a singleton. It had to end sometime. For me personally, it will be a relief, if we survive. While many people can happily enjoy their lives on a daily basis, focusing on their own tiny spheres, I and others are cursed with concerns about the overall trajectory of humanity and human conflict. My relationship with Murphy’s law is so close that I would hardly be surprised to hear the detonation of nuclear weapons in the distance, practically anytime or anywhere.

Nuclear weapons, of course, are toys in comparison to the products of MNT, or worse yet, true …

Read More

The Illusion of Control in an Intelligence Amplification Singularity

From what I understand, we’re currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats which could wipe out all of humanity rather than “just” billions. The Singularity is the most extreme power discontinuity in history. A probable “winner takes all” effect means that after a hard takeoff (quick bootstrapping to superintelligence), humanity could be at the mercy of an unpleasant dictator or human-indifferent optimization process for eternity.

The question of “human or robot” is one that comes up frequently in transhumanist discussions, with most of the SingInst crowd advocating a robot, and a great many others advocating, implicitly or explicitly, a human being. Scenarios in which human beings spark the Singularity come in two flavors: 1) intelligence amplification (IA) bootstrap and 2) whole brain emulation.

Naturally, humans tend to gravitate towards humans sparking the Singularity. The reasons why are obvious. A big one is that people tend to fantasize that they personally, or perhaps their close friends, will be the people …

Read More

Responding to Alex Knapp at Forbes

From Mr. Knapp’s recent post:

If Stross’ objections turn out to be a problem in AI development, the “workaround” is to create generally intelligent AI that doesn’t depend on primate embodiment or adaptations. Couldn’t the above argument also be used to argue that Deep Blue could never play human-level chess, or that Watson could never do human-level Jeopardy?

But Anissimov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think are simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking are quite different. For example, it’s clear that apes, octopuses, dolphins and even parrots are, to certain degrees, quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different from that of humans. And I don’t mean on a different level — I mean actually different. On this point, I’d highly …

Read More

Response to Charles Stross’ “Three arguments against the Singularity”

Stross:

super-intelligent AI is unlikely because, if you pursue Vernor’s program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it’s unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we’re likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

“Human-equivalent AI is unlikely” is a ridiculous comment. Human-level AI is extremely likely by 2060, if it happens at all. (I’ll explain why in the next post.) Stross might not understand that the term “human-equivalent AI” always means AI of …

Read More

Steve Wozniak a Singularitarian?

Wozniak:

Apple co-founder Steve Wozniak has seen so many stunning technological advances that he believes a day will come when computers and humans become virtually equal, but with machines having a slight advantage in intelligence.

Speaking at a business summit held on the Gold Coast on Friday, the once co-equal of Steve Jobs at Apple Computer told his Australian audience that the world is nearing the likelihood that computer brains will equal the cerebral prowess of humans.

When that time comes, Wozniak said that humans will generally withdraw into a life where they will be pampered into a system almost perfected by machines, serving their whims and effectively reducing the average men and women into human pets.

Widely regarded as one of the innovators of personal computing with his works on putting together the initial hardware offerings of Apple, Wozniak declared to his audience that “we’re already creating the superior beings, I think we lost the battle to the machines long ago.”

I always think of this guy when I go by Woz Way in San Jose.

Read More

Hard Takeoff Sources

Definition of “hard takeoff” (noun) from Transhumanist Wiki:

The Singularity scenario in which a mind makes the transition from prehuman or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours (Yudkowsky 2001). The high likelihood of a hard takeoff once a roughly human-equivalent AI is created has been argued by the Singularity Institute in Yudkowsky 2003.

Hard takeoff sources and references, including hard science fiction novels, academic papers, and a few short articles and interviews:

Blood Music (1985) by Greg Bear
A Fire Upon the Deep (1992) by Vernor Vinge
“The Coming Technological Singularity” (1993) by Vernor Vinge
The Metamorphosis of Prime Intellect (1994) by Roger Williams
“Staring into the Singularity” (1996) by Eliezer Yudkowsky
Creating Friendly AI (2001) by Eliezer Yudkowsky
“Wiki Interview with Eliezer” (2002) by Anand
“Impact of the Singularity” (2002) by Eliezer Yudkowsky
“Levels of Organization in General Intelligence” (2002) by Eliezer Yudkowsky
“Ethical Issues in Advanced Artificial Intelligence” by Nick …

Read More

Interview at H+ Magazine: “Mitigating the Risks of Artificial Superintelligence”

A little while back I did an interview with Ben Goertzel on existential risk and superintelligence; it’s been posted here.

This was a fun interview because the discussion got somewhat complicated, and I abandoned the idea of making it understandable to people who don’t put effort into understanding it.

Read More

John Baez Interviews Eliezer Yudkowsky

From Azimuth, the blog of mathematical physicist John Baez (author of the Crackpot Index):

This week I’ll start an interview with Eliezer Yudkowsky, who works at an institute he helped found: the Singularity Institute for Artificial Intelligence.

While many believe that global warming or peak oil are the biggest dangers facing humanity, Yudkowsky is more concerned about risks inherent in the accelerating development of technology. There are different scenarios one can imagine, but a bunch tend to get lumped under the general heading of a technological singularity. Instead of trying to explain this idea in all its variations, let me rapidly sketch its history and point you to some reading material. Then, on with the interview!

Continue.

Read More

The Navy Wants a Swarm of Semi-Autonomous Breeding Robots With Built-In 3-D Printers

Popular Science and Wired are reporting. Here is the proposal solicitation.

This is a fun headline, but we’re still far from useful functionality in this direction. 3D printers can barely even print circuit boards, aside from a few exotic prototypes of trivial complexity at hilariously low resolution. More impressive than the progress so far in the DIY community are Xerox’s silver printed circuits. Various conductive inks have been developed before, and nothing came of them in terms of commercialization. Development at Xerox started in late 2009; it’s been over a year now and there’s still no news.

In terms of strength, the products of 3D printers are weak and can easily be pulled apart with your bare hands. If you want a strong product you still have to go to the machine shop or foundry.

It’s an interesting proposal solicitation, but it is worth remembering that military commanders have been making breathless requests for futuristic technologies since time immemorial. There will be no “semi-autonomous breeding robots with built-in 3D printers” of practical battlefield …

Read More

Converging Technologies Report Gives 2085 as Median Date for Human-Equivalent AI

From the NSF-backed study Converging Technologies in Society: Managing Nano-Info-Cogno-Bio Innovations (2005), on page 344:

2070 48. Scientists will be able to understand and describe human intentions, beliefs, desires, feelings and motives in terms of well-defined computational processes. (5.1)

2085 50. The computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain. (5.6)

These are the median estimates from the 26 participants in the study, mostly scientists.

Only 74 years away! WWII was 66 years ago, for reference. In the scheme of history, that is nothing.
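As a quick sanity check of those figures, here is a minimal arithmetic sketch in Python; it assumes the post was written in 2011, which is what the “74 years” and “66 years” numbers imply (2085 comes from the study, 1945 is the end of WWII).

    # Minimal sanity check of the figures above.
    # Assumption: this post was written in 2011, as implied by "74 years away"
    # and "WWII was 66 years ago".
    POST_YEAR = 2011       # assumed year of writing (not stated explicitly)
    FORECAST_YEAR = 2085   # study's median date for human-equivalent AI
    WWII_END = 1945        # end of World War II

    years_until_forecast = FORECAST_YEAR - POST_YEAR  # 74
    years_since_wwii = POST_YEAR - WWII_END           # 66

    print(f"Years until the 2085 forecast: {years_until_forecast}")
    print(f"Years since WWII ended: {years_since_wwii}")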

Of course, the queried sample is non-representative of smart people everywhere.

Read More