Introducing the Singularity: Three Major Schools of Thought

Eliezer Yudkowsky is one of the world’s foremost researchers on Friendly AI and recursive self-improvement. He created the Friendly AI approach to AGI, which emphasizes the structure of an ethical optimization process and its supergoal, in contrast to the common trend of seeking the right fixed enumeration of ethical rules for a moral agent to follow. At the 2007 Singularity Summit, he introduced the three schools of thought currently associated with the word “Singularity,” laying out their core arguments and bolder conjectures and noting where they support or contradict one another.

The following transcript of Eliezer Yudkowsky’s Singularity Summit presentation “Introducing the Singularity: Three Major Schools of Thought” has not been approved by the author. An audio version of the talk is available at the Singularity Institute website.

Introducing the Singularity: Three Major Schools of Thought

[Slide 2]

I am going to deliver a quick introduction to the Singularity and three major schools of thought that have popped up. Back when the Singularity Institute was first starting up, the word ‘Singularity’ got used a lot less often than it does now. It means a different sort of thing today than when the Singularity Institute got started. There are three major schools of thought that have become associated with the word. One that you have all heard of already, I’m sure, is Ray Kurzweil’s accelerating change. There is also Vernor Vinge’s event horizon, and I.J. Good’s intelligence explosion. We’ll start off with accelerating change.

[Slide 4]

Stripped down to its core essentials, the accelerating change thesis is that human intuitions about the future are linear, but technological change feeds on itself and therefore accelerates. People expect about as much change in the future as they have seen in the past, if not less. But technological progress feeds on itself. The more we learn, the more we learn. So the future will contain more technological change than people expect. There is also a bolder version of accelerating change, which says that technological change is smoothly exponential, so we can predict the date when new technologies will arrive. These are the manifold variations of Moore’s Law: the speed of the fastest supercomputers, transistors per square centimeter, operations per second per thousand dollars, all doubling every year, or every two years, or every 18 months. Here we see a graph with a fully generic version of Moore’s Law, which shows “techno juju” increasing exponentially over time. As you can see, the amount of techno juju we have is going up by a factor of a thousand every fifteen years. If we extrapolate this trend into the future, what do we get? That’s right, “Big Juju!” As you can see from this graph, we cross the threshold of Big Juju in 2031, on April 27th, between 4:00 and 4:30 in the morning.
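
The arithmetic behind “a factor of a thousand every fifteen years” is just the compounding of an 18-month doubling time, and the bold thesis amounts to solving that equation for a date. Here is a minimal sketch of that calculation; the doubling time and the millionfold threshold below are illustrative assumptions, not figures from the talk.

```python
import math

DOUBLING_TIME_YEARS = 1.5  # the classic "18 months" figure (an assumption, not a measurement)

def growth_factor(years: float) -> float:
    """Multiplicative growth in 'techno juju' over the given span, assuming smooth doubling."""
    return 2.0 ** (years / DOUBLING_TIME_YEARS)

def years_until(ratio: float) -> float:
    """Years until capability reaches `ratio` times today's level, under the same assumption."""
    return DOUBLING_TIME_YEARS * math.log2(ratio)

print(growth_factor(15))       # ~1024: "a factor of a thousand every fifteen years"
print(years_until(1_000_000))  # ~30 years to a millionfold increase
```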

[Slide 5]

Now, even the bold claim is not actually that bold, of course. The real argument goes something like this: if you look back at the rise of the internet from the perspective of the man on the street, the internet blew up out of nowhere. There is a sudden, huge spike in the number of internet users on a linear graph. On a logarithmic graph, the increase looks much more steady. So, accelerationists would say there is no use in acting all surprised by your business model blowing up – you had plenty of warning. The core thesis of accelerationism is that huge changes are coming, larger than you would expect from linear thinking. And the bold thesis is that you can actually time the breakthroughs. Criticisms of the bold thesis don’t necessarily hit the core thesis. Computing progress could be only roughly exponential, too bumpy to predict exactly, but roughly exponential progress still means we are going to get hit with huge changes somewhere down the line. Any positive second derivative implies future changes larger than past changes. So criticizing Moore’s Law is not enough of an argument against accelerating change.
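
To make the linear-versus-logarithmic point concrete, here is a small illustration with hypothetical figures (not real internet-usage data): the same exponential series looks like a sudden explosion on a linear axis and like steady, predictable growth on a log axis.

```python
import math

# Hypothetical user counts doubling every two years for twenty years.
users = [10_000 * 2 ** (year / 2) for year in range(21)]

for year, u in enumerate(users):
    linear_bar = "#" * round(40 * u / users[-1])                        # linear view: flat, flat... spike
    log_bar = "#" * round(40 * math.log10(u) / math.log10(users[-1]))   # log view: steady climb
    print(f"year {year:2d}  {linear_bar:<40}  {log_bar}")
```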

And now for something completely different: the event horizon, which is what Vernor Vinge originally named the Singularity back in the 1970s. Sometime in the future, technology will advance to the point of creating minds that are smarter than human, through brain-computer interfaces, purely biological neuro-hackery, or the construction of a true artificial intelligence. Vernor Vinge was a professor of mathematics who also wrote science fiction, and he realized that he was having trouble writing stories set in a future past the point where technology created characters smarter than he was. At that point, his crystal ball cracked in the center. This is why Vinge called it the Singularity, after the center of a black hole, where 1970s models of the laws of physics broke down. Note that it’s the model of the future that breaks down, not necessarily the future itself. If I am ignorant about a phenomenon, that is a fact about my own state of mind, not a fact about the phenomenon itself. Something happens, you just don’t know what it is.

[Slide 6]

Stripped to its bare essentials, the core thesis of the event horizon is that smarter-than-human minds imply a weirder future than flying cars and amazing gadgets with lots of blinking lights. Imagine, if you like, that future technology finally produces the personal jet pack that lets you fly all around the city. Well, birds flew before humans did, but they didn’t take over the world. The rise of the human species did not occur through flapping our arms. In our skulls, we each carry three pounds of slimy wet gray stuff corrugated like crumpled paper. The brain does not look anywhere near as impressive as it is. It doesn’t look big, or dangerous, or even beautiful. But a skyscraper, a sword, a crown, a gun, all these pop out of the brain like a jack from a jack-in-the-box. A space shuttle is an impressive trick. A nuclear weapon is an impressive trick. But not as impressive as the master trick. The brain trick. The trick that does all other tricks at the same time. Political strategizing happens in the brain, not the kidneys. You won’t find many famous politicians or military generals who are monkeys. Intelligence, you might say, is the foundation of human power. It’s the strength that fuels all our other arts.

[Slides 10 and 11]

In everyday life, people think about the scale of intelligence as if it ran from village idiot to Einstein. But that is a narrow range within humans, who are themselves the smartest creatures on the planet. If you can take an IQ test designed for humans, you have already established yourself as a member of the cognitive elite no matter what you score, because a mouse would just eat the IQ test. So, when I talk about intelligence, I’m talking on the trans-species scale: the scale that starts with a rock, zero intelligence, and runs from there to flatworms, insects, lizards, mice, chimpanzees, and on to humans. At the core, Vinge’s event horizon is about intelligence. Improving the brain is very serious business. It tampers with the roots of the technology tree; it goes back to the cause of all technology. And that makes the future a lot stranger than strapping on a jet pack. If you want to know the true shape of the future, don’t be distracted by amazing gadgets with lots of blinking lights. Look to the cognitive technologies, the technologies that impact upon the mind.

[Slide 13]

The bolder thesis of the event horizon, the stronger claim, is that to predict anything a transhuman mind would do, we would have to be at least that smart ourselves. If this is true, the future becomes absolutely unpredictable and our models break down entirely. The event horizon thesis tends to argue against the bold thesis of accelerating change: we can’t predict the future precisely via smooth exponential graphs. But the core thesis of accelerationism is just that future changes will be greater than past changes, because technological change feeds upon itself. And that, the event horizon thesis definitely supports. So, the event horizon supports the core thesis of accelerationism, but argues against the bold thesis. And this is why it’s important to disentangle all these concepts. Another disentanglement: the event horizon does not require accelerating change, and especially not bold accelerationism. We could reach the threshold level of techno juju needed to create transhuman intelligence by following the previous historical line, shown here, as the bold thesis of accelerationism implies. Or we could reach the threshold by following a different, rougher line, one that proceeds faster or slower than history would lead us to expect. We could even reach the threshold following some totally weird trajectory that dips down and comes back. All that matters is that you eventually get enough technology to cross the threshold and create transhuman intelligence.

[Slide 14]

The third school of Singularity thought is the intelligence explosion, which goes back to the 1960s and was invented by the famous Bayesian mathematician I.J. Good, and also pre-invented in the 1930s by the science fiction editor John Campbell. Mind has always been the source of technology. All the changes that occurred over the past 10,000 years were produced by constant human brains. Ten thousand years ago, our ancestors had a prefrontal cortex, a visual cortex, a limbic system – the same brain architecture we have today. But now we are talking about using technology to improve intelligence. And that closes the loop. Suppose we have humans with brain-computer interfaces that augment their intelligence. What might they do with their augmented intelligence? Play the stock market? Cure cancer? One good bet is that they would use their augmented minds to design the next generation of brain-computer interfaces. The smarter you are, the more intelligence you have at your disposal to make yourself even smarter. Minds using technology to improve minds is a positive feedback cycle, and this, stripped down to its bare essentials, is the core thesis of the intelligence explosion. Intelligence enhancement is a tipping point, like a triangle balanced on one corner. Once it tilts over even a little, gravity pulls it down the rest of the way.

[Slide 15]

The most extreme version of this thesis is an artificial intelligence improving its own source code. If you try to do intelligence enhancement by genetic engineering, then it takes 18 years for the kids to grow up and help engineer the next generation. It’s when we start talking about artificial intelligence that we start to see how large the intelligence explosion might be. Even if you consider only the hardware of the human brain, as opposed to the software, you can see plenty of room for improvement. Human neurons spike an average of 20 times per second. And the fastest recorded neurons in biology spike 1000 times per second, which is still less than a millionth of what a modern computer chip does. Similarly, neural axons transmit signals at less than 150 meters per second. One meter per second is more usual. And that’s less than a millionth the speed of light. So it should be physically possible to have a brain that thinks at one million times the speed a human does without even shrinking it or cooling it. At that rate, you could do one year’s worth of thinking every 31 physical seconds.
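
The numbers in this paragraph can be checked with back-of-the-envelope arithmetic; the chip clock speed below is my own rough 2007-era figure, not one given in the talk.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600        # about 31.6 million seconds

fastest_neuron_hz = 1_000                    # fastest recorded biological spiking rate
cpu_clock_hz = 2e9                           # a roughly 2 GHz chip (assumed, circa 2007)
print(fastest_neuron_hz / cpu_clock_hz)      # ~5e-7: under a millionth of the chip's rate

axon_speed_m_per_s = 150                     # fast myelinated axons; 1 m/s is more typical
speed_of_light_m_per_s = 299_792_458
print(axon_speed_m_per_s / speed_of_light_m_per_s)  # ~5e-7: under a millionth of lightspeed

speedup = 1_000_000                          # the hypothetical millionfold-faster mind
print(SECONDS_PER_YEAR / speedup)            # ~31.6 wall-clock seconds per subjective year
```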

[Slide 16]

So, I should emphasize that this in particular is more of a thought experiment than a prediction. The main reason for discussing it is to illustrate that the human mind is not an upper bound. Just as a skyscraper is orders of magnitude taller than a human, and a jet plane travels orders of magnitude faster than a human, you could have minds that think orders of magnitude faster or have orders of magnitude more computing power. There is nothing in the laws of physics against it. Okay, so, one widespread criticism is that we should not worry about any of this because AI has failed to make progress over the last few decades. Yes, I hear this a lot. Very amusing. Artificial intelligence has been dumber than a village idiot for quite some time now, so it seems clear that AI has failed to make progress. But we shouldn’t use the human scale of intelligence to judge AIs. It appears to me that AI has come quite a long way, that we have been creeping up the scale, though slowly. But to a human, it all falls off the bottom of the human scale and just looks dumber than a village idiot. Plus, of course, as soon as Rodney Brooks does something impressive, it’s not AI anymore.

[Slide 17]

So, strictly steady progress in artificial intelligence, seen from a human perspective, might look something like this. And that is not even taking recursive self-improvement, the intelligence explosion thesis, into account. There is no threshold in that diagram where the human programmers stop improving the AI from the outside and the AI starts improving itself from the inside. If an AI is thinking a thousand times as fast as its human programmers, shouldn’t it improve itself faster than humans tinkering from outside? So maybe what we ought to see is something like this. And that may seem a bit silly, but if you were to look at a graph of, say, how much complexity there was on Earth, the graph would look a lot like that, starting with the arrival of human intelligence. Or think of it as an economic graph: what does the global economy look like once human intelligence comes along? It probably looks remarkably like that. This sort of thing is not unprecedented, it’s just very impressive.

So the bold claim of the intelligence explosion is that this feedback cycle of minds making technology to improve intelligence takes off once it gets started. This sort of thing is the argument for why it would not be a good idea to wait until after we have human-level AI before we start thinking about the implications of the technology, and in particular about transhuman AI. So if we put the bold claims of the intelligence explosion on a Moore’s Law graph, it might look something like this. Note that this graph contradicts both strong accelerationism, because change is not accelerating at a smooth pace, and the strong event horizon, because we are making a prediction about what happens after the Singularity. One often hears, ‘Well, there are physical limits to computation, so this can’t continue forever.’ Well, according to our current models of physics, there are physical limits, but they’re way the heck off the top of this graph. It’s way above the ceiling even.

So, another important point about this graph of the intelligence explosion is: what does that blue line represent? In the intelligence explosion, the key threshold is criticality of recursive self-improvement. It’s not enough to have an AI that improves itself a little. It has to be able to improve itself enough to significantly increase its ability to make further self-improvements, which sounds to me like a software issue, not a hardware issue. So there is a question: can you predict that threshold using Moore’s Law at all? Geordie Rose of D-Wave Systems recently was kind enough to provide us with a startling illustration of software progress versus hardware progress. Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, an Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years.
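
The underlying comparison is simply runtime = operations required divided by operations per second. In the sketch below, the operation counts are back-solved from the ten-year and three-year figures quoted above, and the hardware speeds are rough orders of magnitude, so treat this as an illustration of the point rather than a reconstruction of Geordie Rose’s actual calculation.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def runtime_years(operations: float, ops_per_second: float) -> float:
    """Wall-clock years to finish a fixed amount of computational work."""
    return operations / ops_per_second / SECONDS_PER_YEAR

BLUE_GENE_OPS_PER_S = 3e14   # rough sustained rate of 2007's Blue Gene/L (assumed order of magnitude)
APPLE_II_OPS_PER_S = 1e6     # rough rate of a 1977 Apple II (assumed order of magnitude)

WORK_1977_ALGORITHM = 1e23   # ops to factor a 75-digit number, old method (back-solved from "ten years")
WORK_2007_ALGORITHM = 1e14   # ops with a modern method (back-solved from "three years")

print(runtime_years(WORK_1977_ALGORITHM, BLUE_GENE_OPS_PER_S))  # ~10 years: new hardware, old algorithm
print(runtime_years(WORK_2007_ALGORITHM, APPLE_II_OPS_PER_S))   # ~3 years: old hardware, new algorithm
```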

In artificial intelligence, this sort of thing is harder to calculate and graph. AI breakthroughs usually let you do things that previously would have been outright impossible, because you just had no clue how to do them. But I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970s theory and a Blue Gene. Each conceptual breakthrough in AI drops the computing power necessary to achieve AI. At some point you get enough computing power to cross the current threshold, or you get one last theoretical breakthrough that crosses the current threshold of computing power, and that perhaps is when you get true AI. Or you might say that brute force, more computing power, lets you get away with a less clever design. But if you don’t know what you’re doing, if you fundamentally just have no clue how to build a mind, then it doesn’t matter how much computing power you have. Every 18 months, the minimum IQ to destroy the world drops by one point.

[Slide 18]

So, from the perspective of the intelligence explosion school, the critical threshold may have nothing to do with human equivalence per se, because humans don’t rewrite their own source code. You could get the intelligence explosion as the result of a theory breakthrough in self-modification and reflectivity, thinking about thought. And other things could fall out of that, if the AI was smart enough to add them to itself. So, to sum up, the three schools’ core theses are as follows. Accelerating change: intuitive futurism is linear, but technology change accelerates. Event horizon: transhuman minds imply a weirder future than flying cars and gadgetry. Intelligence explosion: minds making technology to improve minds is a positive feedback cycle. So the three schools of thought are logically distinct, but can support or contradict each other’s core or bold claims. The core theses all support each other. They don’t necessarily imply each other, or logically require each other, but they support each other. And I fear that is why the event horizon, the intelligence explosion, and accelerating change are often mashed together into Singularity paste.

[Slide 19]

These three schools did not always exist, and there may be room for a fourth. The three schools all have substantive theses, interesting claims; you can distinguish their premises from their conclusions. A new school should make equally interesting claims: here is the premise, and here is what follows from it that makes the premise interesting. If you give the Singularity a new definition, as I’m sure many people will do at this summit, I would ask that you please, for the love of cute kittens, tell us exactly what you mean by the word. This has been Eliezer Yudkowsky for the Singularity Institute for Artificial Intelligence.
