Interviewed by The Rational Future

Here’s a writeup.

Embedded below is an interview conducted by Adam A. Ford at The Rational Future. Topics covered included:

- What is the Singularity?
- Is there a substantial chance we will significantly enhance human intelligence by 2050?
- Is there a substantial chance we will create human-level AI before 2050?
- If human-level AI is created, is there a good chance vastly superhuman AI will follow via an “intelligence explosion”?
- Is acceleration of technological trends required for a Singularity? (Moore’s Law and hardware trajectories? AI research progressing faster?)
- What convergent outcomes in the future do you think will increase the likelihood of a Singularity? (e.g., the emergence of markets, the evolution of eyes?)
- Does AI need to be conscious or have human-like “intentionality” in order to achieve a Singularity?
- What are the potential benefits and risks of the Singularity?


Superintelligent Will

A new paper on superintelligence by Nick Bostrom, “The Superintelligent Will”:

This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so. In combination, the two theses help us understand the possible range of behavior of superintelligent agents, and they point to some potential dangers in building such an agent.


More Nonsense Reporting Overblowing IBM’s Accomplishments

Last month in New York I had the pleasure of talking personally with the creator of Watson, Dr. David Ferrucci. I found him amiable, and his answers to my questions about Watson were very direct and informative. So, I have nothing against IBM in general. I love IBM’s computers; several of my past desktops and laptops have been IBMs, and the first modern computer I had was an IBM Aptiva.

However, there is a persistent stream of articles reporting the claim that IBM has “completely simulate(d)” “the brain of a mouse (512 processors), rat (2,048) and cat (24,576)”, a claim revived in force this last weekend. It is entirely false. IBM has not simulated the brain of a mouse, rat, or cat. Experiments have only recently been pursued to simulate even the 302-neuron nervous system of the roundworm C. elegans, for which a complete wiring diagram exists. What IBM has actually produced are “mouse-SIZED” neural simulations, “rat-SIZED” neural simulations, and “cat-SIZED” neural simulations, given certain assumptions about the computational power of mammalian brains. The arrangements between neurons being simulated bear …
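For scale, here is a back-of-the-envelope sketch of what “mouse-SIZED” actually implies; the neuron counts are rough published estimates, and the per-processor arithmetic is my own illustration, not IBM’s methodology:

```python
# Back-of-the-envelope: what an "X-SIZED" simulation means in raw
# neuron counts. Neuron figures are rough published estimates; the
# per-processor arithmetic is illustrative, not IBM's methodology.

BRAIN_NEURONS = {
    "C. elegans": 302,     # the only animal with a complete wiring diagram
    "mouse": 71_000_000,   # approximate whole-brain neuron counts
    "rat": 200_000_000,
    "cat": 760_000_000,
}

for animal, processors in [("mouse", 512), ("rat", 2_048), ("cat", 24_576)]:
    neurons = BRAIN_NEURONS[animal]
    print(f"{animal}-SIZED: {neurons:,} neurons over {processors:,} "
          f"processors = {neurons / processors:,.0f} neurons per processor")

# Matching a neuron COUNT says nothing about reproducing the specific
# wiring of a real brain, which is exactly the distinction drawn above.
```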


Complex Value Systems are Required to Realize Valuable Futures

A new paper by Eliezer Yudkowsky is online on the SIAI publications page, “Complex Value Systems are Required to Realize Valuable Futures”. This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.

Abstract: A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome”, despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI …


Replying to Alex Knapp, July 2nd

Does Knapp know anything about the way existing AI works? It’s not based around trying to copy humans, but often around improving this abstract mathematical quality called inference.

I think you missed my point. My point is not that AI has to emulate how the brain works, but rather that before you can design a generalized artificial intelligence, you have to have at least a rough idea of what you mean by that. Right now, the mechanics of general intelligence in humans are, actually, mostly unknown.

What have become interesting areas of study in the past two decades are two fascinating strands of neuroscience. The first is that animal brains and intelligence are much more capable and more complicated than we thought even in the 80s.

The second is that humans, on a macro level, think very differently from animals, even the smartest problem-solving animals. We haven’t begun to scratch the surface.

Based on the cognitive science reading I’ve done up to this point, this is false. Every year, scientists discover cognitive abilities in animals that were …


The Illusion of Control in an Intelligence Amplification Singularity

From what I understand, we’re currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats which could wipe out all of humanity rather than “just” billions. The Singularity is the most extreme power discontinuity in history. A probable “winner takes all” effect means that after a hard takeoff (quick bootstrapping to superintelligence), humanity could be at the mercy of an unpleasant dictator or human-indifferent optimization process for eternity.

The question of “human or robot” is one that comes up frequently in transhumanist discussions, with most of the SingInst crowd advocating a robot, and a great many others advocating, implicitly or explicitly, a human being. Human beings sparking the Singularity come in two flavors: 1) an intelligence amplification (IA) bootstrap, and 2) whole brain emulation.

Naturally, humans tend to gravitate towards humans sparking the Singularity. The reasons why are obvious. A big one is that people tend to fantasize that they personally, or perhaps their close friends, will be the people …


Two Approaches to AGI/AI

There are two general approaches to AGI/AI that I’d like to draw attention to: not “neat” and “scruffy”, the standard division, but “brain-inspired” and “not brain-inspired”.

Accomplishments of not-brain-inspired AI:

- Wolfram Alpha (in my opinion the most interesting AI today)
- spam filters
- DARPA Grand Challenge victory (Stanley)
- UAVs that fly themselves
- clever game AI
- AI that scans credit card records for fraud
- the voice recognition AI that we all talk to on the phone
- intelligence gathering AI
- Watson and derivatives
- Deep Blue
- optical character recognition (OCR)
- linguistic analysis AI
- Google Translate
- Google Search
- text mining AI
- OpenCog
- AI-based computer aided design
- the software that serves up user-specific Internet ads
- pretty much everything

Accomplishments of brain-inspired AI:

- Cortexia, a bio-inspired visual search engine
- Numenta (no product yet)
- Neural networks, which have proven highly limited
- ???? (tell me below and I’ll add them)

One place where brain-inspired AI always shows up is in science fiction. In the real world, AI has very little to do with copying neurobiology, and everything to do with abstract mathematics …
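To make the “abstract mathematics” point concrete, here is a minimal sketch of one representative not-brain-inspired technique from the list above, a naive Bayes spam filter; the toy corpus and labels are invented for illustration:

```python
# Minimal naive Bayes spam filter: pure counting and probability,
# no neurobiology anywhere. The toy corpus is invented.
import math
from collections import Counter

TRAIN = [
    ("win cash prize now", 1),         # 1 = spam
    ("free prize claim now", 1),
    ("meeting agenda for monday", 0),  # 0 = ham
    ("lunch on monday", 0),
]

def fit(examples):
    """Count word frequencies per class; return counts, class totals, vocab."""
    counts = {0: Counter(), 1: Counter()}
    labels = Counter()
    for text, y in examples:
        labels[y] += 1
        counts[y].update(text.split())
    vocab = {w for c in counts.values() for w in c}
    return counts, labels, vocab

def spam_log_odds(text, counts, labels, vocab):
    """Log P(spam | text) - log P(ham | text), with add-one smoothing."""
    def logp(y):
        total = sum(counts[y].values())
        lp = math.log(labels[y] / sum(labels.values()))   # class prior
        for w in text.split():
            if w in vocab:                                # skip unseen words
                lp += math.log((counts[y][w] + 1) / (total + len(vocab)))
        return lp
    return logp(1) - logp(0)

model = fit(TRAIN)
print(spam_log_odds("claim your free cash prize", *model) > 0)  # True: spam
print(spam_log_odds("agenda for lunch meeting", *model) > 0)    # False: ham
```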


Response to Charles Stross’ “Three arguments against the Singularity”

Stross:

super-intelligent AI is unlikely because, if you pursue Vernor’s program, you get there incrementally by way of human-equivalent AI, and human-equivalent AI is unlikely. The reason it’s unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing. We may want machines that can recognize and respond to our motivations and needs, but we’re likely to leave out the annoying bits, like needing to sleep for roughly 30% of the time, being lazy or emotionally unstable, and having motivations of its own.

“Human-equivalent AI is unlikely” is a ridiculous comment. If human-level AI is ever created, it is extremely likely to be created by 2060. (I’ll explain why in the next post.) Stross might not understand that the term “human-equivalent AI” always means AI of …


Hard Takeoff Sources

Definition of “hard takeoff” (noun) from Transhumanist Wiki:

The Singularity scenario in which a mind makes the transition from prehuman or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours (Yudkowsky 2001). The high likelihood of a hard takeoff once a roughly human-equivalent AI is created has been argued by the Singularity Institute in Yudkowsky 2003.
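The “days or hours” intuition is essentially a geometric series: if each round of self-improvement both raises capability and shortens the next round, total takeoff time stays bounded even as capability diverges. A toy sketch, with parameters invented purely for illustration:

```python
# Toy model of a "hard takeoff": each self-improvement cycle multiplies
# capability and shrinks the duration of the next cycle. Parameters are
# invented for illustration; this is an intuition pump, not a forecast.

capability = 1.0
day = 0.0
cycle_time = 30.0   # days taken by the first improvement cycle
speedup = 0.7       # each cycle takes 70% as long as the last
gain = 1.5          # capability multiplier per cycle

for cycle in range(1, 16):
    day += cycle_time
    capability *= gain
    cycle_time *= speedup
    print(f"cycle {cycle:2d}: day {day:6.1f}, capability {capability:8.1f}")

# The cycle times form a geometric series summing to 30 / (1 - 0.7) = 100
# days, so capability grows without bound inside a fixed time window.
# That bounded window is the formal core of the "days or hours" claim.
```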

Hard takeoff sources and references, which include hard science fiction novels, academic papers, and a few short articles and interviews:

- Blood Music (1985) by Greg Bear
- A Fire Upon the Deep (1992) by Vernor Vinge
- “The Coming Technological Singularity” (1993) by Vernor Vinge
- The Metamorphosis of Prime Intellect (1994) by Roger Williams
- “Staring into the Singularity” (1996) by Eliezer Yudkowsky
- Creating Friendly AI (2001) by Eliezer Yudkowsky
- “Wiki Interview with Eliezer” (2002) by Anand
- “Impact of the Singularity” (2002) by Eliezer Yudkowsky
- “Levels of Organization in General Intelligence” (2002) by Eliezer Yudkowsky
- “Ethical Issues in Advanced Artificial Intelligence” by Nick …


Wolfram on Alpha and Watson

Stephen Wolfram has a good blog post up describing how Alpha and Watson work and the difference between them. He also explains why he considers Alpha ultimately better: it is more open-ended, and it works by logic and computation rather than corpus-matching. Honestly, I was more impressed by the release of Alpha than by the victory of Watson, though of course both are cool.

In some ways Watson is not much more sophisticated than Google’s translation approach, which is also corpus-based. I especially love the excited comments in the mainstream media that Watson represents confidence as probabilities. This is not exactly something new. In any case, Wolfram writes:

There are typically two general kinds of corporate data: structured (often numerical, and, in the future, increasingly acquired automatically) and unstructured (often textual or image-based). The IBM Jeopardy approach has to do with answering questions from unstructured textual data — with such potential applications as mining medical documents or patents, or doing ediscovery in litigation. It’s only rather recently that even search engine methods have become widely used for these kinds of …
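On the confidence-as-probabilities point: converting raw evidence scores for candidate answers into a normalized confidence distribution is standard machinery, not an IBM invention. A minimal sketch, with invented candidates and scores rather than anything from IBM’s actual pipeline:

```python
# Generic "confidence as probabilities": map raw evidence scores for
# candidate answers to a normalized distribution via softmax. The
# candidates and scores below are invented for illustration.
import math

def softmax(scores):
    """Turn arbitrary real-valued scores into probabilities summing to 1."""
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}  # numerically stable
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

# Hypothetical candidate answers with aggregated evidence scores.
candidates = {"Toronto": 0.3, "Chicago": 2.1, "Springfield": 1.4}

for answer, p in sorted(softmax(candidates).items(), key=lambda kv: -kv[1]):
    print(f"{answer}: {p:.2f}")  # highest-scoring candidate gets top confidence
```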


Converging Technologies Report Gives 2085 as Median Date for Human-Equivalent AI

From the NSF-backed study Converging Technologies in Society: Managing Nano-Info-Cogno-Bio Innovations (2005), on page 344:

Item 48 (median year: 2070): “Scientists will be able to understand and describe human intentions, beliefs, desires, feelings and motives in terms of well-defined computational processes.” (5.1)

Item 50 (median year: 2085): “The computing power and scientific knowledge will exist to build machines that are functionally equivalent to the human brain.” (5.6)

These are the median estimates from the 26 participants in the study, mostly scientists.

Only 74 years away! WWII was 66 years ago, for reference. In the scheme of history, that is nothing.
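For concreteness, a panel median is just the middle of the sorted forecasts; the 26 individual values below are invented, and only their median (2085) matches the report:

```python
# Median of a hypothetical 26-person forecast panel. The individual
# forecasts are invented; only the median (2085) matches the report.
import statistics

forecasts = [2040, 2050, 2055, 2060, 2065, 2070, 2070, 2075, 2075,
             2080, 2080, 2082, 2085, 2085, 2090, 2090, 2095, 2100,
             2100, 2110, 2120, 2130, 2150, 2150, 2200, 2300]
assert len(forecasts) == 26

median = statistics.median(forecasts)
print(median)              # 2085.0
print(int(median) - 2011)  # 74 years away, as of this 2011 post
```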

Of course, the queried sample is non-representative of smart people everywhere.


Josh Tenenbaum Video Again: Bayesian Models of Human Inductive Learning

I posted this only a month ago, but here’s the link to the video again. People sometimes say there’s been no progress in AI, but the kinds of results obtained by Tenenbaum are amazing, and they open up a whole approach to AI, one that treats everyday inductive reasoning as approximate Bayesian inference over structured representations and requires only minimal inspiration from the human brain.

Abstract:

In everyday learning and reasoning, people routinely draw successful generalizations from very limited evidence. Even young children can infer the meanings of words, hidden properties of objects, or the existence of causal relations from just one or a few relevant observations — far outstripping the capabilities of conventional learning machines. How do they do it? And how can we bring machines closer to these human-like learning abilities? I will argue that people’s everyday inductive leaps can be understood as approximations to Bayesian computations operating over structured representations of the world, what cognitive scientists have called “intuitive theories” or “schemas”. For each of several everyday learning tasks, I will consider how appropriate knowledge representations are structured …
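To give a feel for how a few examples can be so decisive, here is a minimal sketch of Bayesian concept learning in the style of Tenenbaum’s “number game”: hypotheses are structured sets of numbers, the prior is uniform, and the “size principle” likelihood rewards small hypotheses that contain all the examples. The hypothesis space here is my own toy choice, not taken from the talk:

```python
# Minimal "number game"-style Bayesian concept learning: given a few
# positive examples of an unknown number concept, score hypotheses by
# the size principle. Toy hypothesis space; uniform prior.
from fractions import Fraction

UNIVERSE = range(1, 101)

HYPOTHESES = {
    "even":            {n for n in UNIVERSE if n % 2 == 0},
    "odd":             {n for n in UNIVERSE if n % 2 == 1},
    "squares":         {n * n for n in range(1, 11)},
    "powers of 2":     {2 ** k for k in range(1, 7)},
    "multiples of 10": {n for n in UNIVERSE if n % 10 == 0},
}

def posterior(examples):
    """Posterior over hypotheses, given positive examples of the concept.

    Size principle: if examples are drawn uniformly from the true
    concept h, then P(examples | h) = (1 / |h|) ** n when h contains
    them all (else 0), so small consistent hypotheses win as n grows.
    """
    n = len(examples)
    scores = {
        name: Fraction(1, len(h)) ** n if all(x in h for x in examples)
              else Fraction(0)
        for name, h in HYPOTHESES.items()
    }
    total = sum(scores.values())
    return {name: float(s / total) for name, s in scores.items()}

print(posterior([16]))        # "even", "squares", "powers of 2" all plausible
print(posterior([16, 8, 2]))  # three examples: "powers of 2" dominates
```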
