Artificial Intelligence and Society

[Photo: Ben Goertzel, Eliezer Yudkowsky, and Melanie Swan at the Artificial Intelligence and Society event]

In everyday life, we underrate the importance of intelligence because our social environment consists only of other humans, yet it is our real trump card as a species and the foundation for everything else we do. The Bayesian statistician I. J. Good proposed that an “intelligence explosion,” brought about by an artificial intelligence improving the design of its own intelligence, could be expected to reshape the universe more than all human actions up to this point. In January 2008, Eliezer Yudkowsky examined this argument in an informal presentation of his talk “The Human Importance of the Intelligence Explosion” at the Artificial Intelligence and Society event hosted by Santa Clara University and the Singularity Institute for Artificial Intelligence.

The following transcript of Eliezer Yudkowsky’s presentation “The Human Importance of the Intelligence Explosion” has not been approved by the speaker. An MP3 audio recording is also available.


[Slide 1]

This is “The Human Importance of the Intelligence Explosion.” The term “intelligence explosion” was invented by I. J. Good, a fairly famous mathematician. The core idea goes something like this: suppose you could invent brain-computer interfaces that substantially augment human intelligence. What might these augmented humans do with their newfound intelligence? Medical research? Play the stock market? One fairly good guess is that they would turn their intelligence toward designing the next generation of brain-computer interfaces. Then, being even smarter, the next generation could invent the third generation of brain-computer interfaces. Lather, rinse, repeat.

The notion here is that intelligence is the source of all technology. If you can use technology to improve intelligence, you aren’t simply developing one interesting new kind of application–you are actually closing a cycle and creating a positive feedback loop. The purest case of this idiom would be an artificial intelligence with complete access to its own source code, able to rewrite it very quickly. This is what I. J. Good was originally talking about when he said “intelligence explosion,” though the notion generalizes to anything you can use to create more intelligence than you currently have.
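To make the feedback loop concrete, here is a minimal toy sketch–my own illustration, not anything from the talk–in which each generation’s improvement is proportional to how smart the current generation already is. The gain_rate parameter is invented for the example:

```python
# Toy model of the intelligence-improvement feedback loop (illustrative only).
# Assumption: each generation's upgrade is proportional to how smart the
# current designers already are; gain_rate is a made-up parameter.

def run_generations(intelligence=1.0, gain_rate=0.5, generations=5):
    history = [intelligence]
    for _ in range(generations):
        intelligence += gain_rate * intelligence  # smarter designers, bigger upgrade
        history.append(intelligence)
    return history

print(run_generations())  # [1.0, 1.5, 2.25, 3.375, 5.0625, 7.59375]
```

Under these assumptions the curve is exponential; whether real self-improvement compounds like this, saturates, or fizzles is precisely the open question.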

I would like to emphasize that the notion of an intelligence explosion is actually distinct from a lot of the theses that get tossed around in this area. It does not imply, nor does it require, that more change occurred between 1970 and 2000 than between 1940 and 1970. That could well be the case; it is just logically distinct and not required by the thesis, and that strengthens it, because the fewer assumptions you need, the stronger the overall thesis is.

Similarly, the notion of the intelligence explosion does not require or imply that technological progress follows a smooth or predictable curve. The first AI to improve itself could be created in an age when technological progress is slowing down, speeding up, or remaining steady. It’s a threshold. The qualitative prediction of getting there eventually just requires that you don’t actually go backwards. I emphasize this because people tend to mean a lot of different things when they say the word “singularity,” and if we are not careful to keep track of the distinctions between all these various concepts, we will end up quite confused.

Another word you may have noticed over here is “intelligence.” When people hear the word “intelligence” they usually think of what we might call “book smarts”–calculus, chess, good recall of facts–and contrast that with being socially skillful or persuasive. It does take more than chess to succeed in the human world, but these other factors are also cognitive. Being able to manipulate other people is a talent that resides in the brain, not in the kidneys. I think we tend to systematically underestimate the actual importance of intelligence because in everyday life everyone we deal with is a human, as opposed to a mouse, or a lizard, or a rock. We act like our tiny little corner of mindspace is the whole universe. We think of the scale of intelligence as if it ran from a village idiot up to Einstein, rather than from an amoeba up to humans. On the scale of interspecies differences in intelligence–if you are comparing yourself to a mouse–the distance between a village idiot and Einstein fits onto a small dot. You might be able to tell the difference between a village idiot and an Einstein, but a chimpanzee would have a bit of trouble administering the IQ test.

[Slide 2]

The rise of human intelligence in its modern form happened between 35,000 and 150,000 years ago, depending on whom you ask. Our reaching this particular level of intelligence had an enormous impact on the Earth. The land sprouted skyscrapers, footsteps appeared on the Moon… okay, technically that’s not a change to Earth, but you get the picture. If you look around this very room, most of the objects you see–your clothes, the floor, the desks–are byproducts of human intelligence. We left them behind us like smoke puffs coming out of an engine. People say things like “intelligence is no match for a gun,” as if guns had grown on trees. People say things like “intelligence doesn’t count for as much in life as money,” as if mice used money. You might say that intelligence is our real trump card as a species. It’s the foundation for everything else we do. Fire, language, nuclear weapons, skyscrapers, spaceships, money, science–none of them grew on trees; they grew in human minds.

All jokes aside, you won’t find many great novelists, great military generals, or politicians who are lizards. In everyday life you’re just looking at the differences between yourself and the other people around you, because anyone really different from you in intelligence–a cat, say–isn’t in the game you’re playing at the moment. Advertisers, of course, want you to believe that the word “futuristic” means lots of flash, glitter, metal, chrome, and blinking lights. They want you to believe that “futuristic” means an expensive gadget, because that’s what they want to sell you.

Imagine, if you like, that technology produces amazing artificial red blood cells, built with nanotechnology, that let you hold your breath for four hours. This is an actual application that has been proposed. It would be pretty respectable if you could pull it off, because it means you could have a heart attack and walk to the doctor’s office instead of keeling over on the spot. But it doesn’t really change the rules of the game. It’s something neat you could do within the rules of the game, but it doesn’t change the game itself.

[Slide 3]

Humanity did not rise to prominence on Earth by holding its breath longer than other species. The future technologies that really matter, I would argue, are the ones that impact the mind. The mind is the root of the technology tree. When you do something to the mind, you are picking up the tree and shaking it by its roots. Everything else is just touching a leaf. So one of these things is not like the others. One of these things does not belong, and it’s the artificial intelligence entry. The rest are things you do within the game–leaves on the technology tree–except for artificial intelligence, which is completely different, even though it is often mentioned in the same breath as the others.

Intelligence is the most powerful force in the known universe. We see its effects every day. It’s also the most confusing question left to modern-day science. You can ask “How does intelligence work?” and get ten different answers from ten different scientists. You cannot really do that with a question like “How does fire work?”–most scientists will give you the same answer. And there is a great deal that we do know about the mind; we have a whole library of things we know about it. A friend of mine once said, “Anyone who claims that the mind is a complete mystery should be slapped upside the head with the MIT Encyclopedia of the Cognitive Sciences, all 1,046 pages of it.”

We know an enormous amount about the mind. In fact, we know so much about the mind that we could spend a whole human lifetime studying it and still not have read all the papers. But we also know there are things we don’t yet understand about the mind, because we have not created a human-level AI. There are things we don’t understand, but let us remember the words of the physicist Edwin Thompson Jaynes, who observed that if we are ignorant about a phenomenon, that is a fact about our own state of mind, not a fact about the phenomenon itself. Confusion exists in our minds, not in reality. A blank spot on your map does not correspond to a blank territory. I sometimes summarize this by saying there are mysterious questions but no mysterious answers, because mysteriousness is a property of questions, not a property of answers.

Do not say that intelligence is a mysterious phenomenon. It’s just mysterious to us for the moment, because we don’t understand it yet–the same way that two hundred years ago we didn’t understand how it could possibly be that my muscles move according to my will. Life used to be a big mystery. Look at some of what was said about life historically–things like “It is infinitely beyond the reach of science.” That was Lord Kelvin. It sounds quite a lot like what people are saying now about the mind, but you’ve got to have a sense of historical perspective. Everything used to seem this mysterious, from the dawn of time right up until the moment science solved it. Everything starts off as a mystery.

Back to the intelligence explosion. I originally gave the example of human beings augmented with brain-computer interfaces. But if you strap a computer onto the brain, there is still a problem. The problem is the brain. You have a bottleneck in the system: a part that is presumably doing something pretty important, or you would not have bothered strapping the computer onto it in the first place. But it runs at 100 hertz, you have no read-write access, you cannot add new neurons to it, you can’t back it up, and the code is a gigantic mess of spaghetti with all the comments missing. Basically, evolution did not make the human brain end-user-modifiable. It’s encrypted, you might say–locked up behind some of the most effective digital rights management software ever created.

The analogy I sometimes use is that trying to use brain-computer interfaces to create smarter-than-human intelligence may work about as well as trying to strap jet engines onto a bird. I’m not saying it could never be done; I’m just saying that we might need a pure AI, a smarter-than-human AI, just to handle the job of untangling that gigantic mess of spaghetti code and upgrading humans. It might be easier, metaphorically speaking, to build a 747 first, so that the 747 can amplify itself and get even larger, and then have that, metaphorically speaking, upgrade the bird.

It is not easy to build a Boeing 747 from scratch–it took us a long time to get to that point–but is it any easier to start with a bird, modify the design to create a 747-sized bird that actually flies as fast as a 747, then migrate an actual living bird to the new design without killing the bird or making it very unhappy? I’m not saying it could never ever be done. With sufficiently advanced nanotechnology, you could do that… but first we build the 747, historically speaking.

For similar reasons I don’t really think we should merge with our machines. Let’s say I want to make toast using my hands. Trying to fuse with the toaster is probably not the most efficient way to modify my biology so I can toast bread with my hands. The same probably goes for anything you want to do with your mind, which is actually a lot more complicated than your hand. Merging with an AI would be the equivalent here of merging with the toaster.

That is why recursive self-improvement and the intelligence explosion are usually discussed in the context of artificial intelligence, and in particular in the context of an artificial intelligence rewriting its own source code.

An AI can have total read and write access to its own state. It can absorb more hardware. Say you run into a problem and you are not quite bright enough to solve it: you can’t just go out and purchase ten times as much brain. Whereas with any AI that shows commercial potential–unless it was built on the most expensive supercomputer on the planet up to that point–you can probably go out and buy a hundred times as much hardware. Understandable code is a big one: the AI gets our documentation. Modular design is another: if part of your brain stops working, you can’t just swap it out.

And there is a clean internal environment, which means that when we program a computer we actually have some idea of what the low-level consequences will be. We don’t always fully understand the consequences of what we program–after all, if we knew the exact output of a program in advance, we wouldn’t need the program in the first place. Nonetheless, if you start making changes to the brain, you cannot say, even in principle, exactly what will happen next.

Then, of course, there is the old stereotypical point that an AI can be really, really fast. Light speed is around a million times faster than the speed at which your neurons transmit signals, even at your neurons’ top speed. A synaptic spike–sort of like one little blip of information moving through your brain–dissipates more than a million times the minimum heat for a one-bit operation at 300 kelvin, although transistors do even worse. In fact, heat dissipation is now the main advantage that biology has over electronics.
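As a rough check on that thermodynamic claim: the minimum heat for a one-bit operation is the Landauer limit, kT ln 2. The sketch below computes it at 300 kelvin; the synaptic-spike energy used for comparison is a ballpark figure I am assuming (published estimates vary), not a number from the talk:

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, joules per kelvin
T = 300.0                   # room temperature, kelvin

# Landauer limit: minimum heat dissipated to erase one bit at temperature T.
landauer = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer:.2e} J")              # ~2.87e-21 J

# Assumed ballpark energy dissipated by one synaptic spike, ~1e-14 J.
spike_energy = 1e-14
print(f"Spike / Landauer ratio: {spike_energy / landauer:.1e}")  # ~3.5e6
```

Under that assumed spike energy, the ratio comes out at a few million, consistent with the “more than a million times” figure in the talk.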

And, of course, clock speed. Any neural algorithm you postulate–any computation the brain actually performs–has to be done in no more than one hundred serial steps: one hundred steps, one after the other. A lot of the time it’s more like ten, because at their top speed your neurons fire maybe two hundred times per second, and usually it’s more like twenty times per second. A CPU, by contrast, performs serial operations on the order of a billion per second.
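The hundred-steps figure is just arithmetic on firing rates. Here is that arithmetic spelled out, assuming a perceptual task that completes in about half a second (my ballpark, not the talk’s):

```python
# Serial depth available to the brain vs. a CPU, back of the envelope.
task_seconds = 0.5          # assumed duration of a fast perceptual task
top_firing_hz = 200         # neurons' top firing rate (from the talk)
typical_firing_hz = 20      # more typical firing rate (from the talk)

print(task_seconds * top_firing_hz)      # 100.0 serial steps at best
print(task_seconds * typical_firing_hz)  # 10.0 serial steps typically

cpu_hz = 2e9                # a circa-2008 CPU clock, roughly 2 GHz
print(f"CPU serial-step advantage: {cpu_hz / top_firing_hz:.0e}")  # ~1e7
```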

So it looks to be permitted by the laws of physics to build a brain at least one million times as fast as the human brain, and that is without shrinking the brain, lowering the operating temperature, quantum computing, reversible computing, and all sorts of other neat little tricks that we could in principle apply to go a lot faster than a mere million-to-one speed-up. With a little nanotechnology, we could also get observe-and-act loops–sensors coming in, motor actions going out–sped up on the order of a million as well. If you could speed everything up by a factor of a million, then what we think of as 31 seconds of time would be sufficient to do a year’s worth of thinking, and even a year’s worth of observing and acting on a molecular scale.
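The 31-seconds figure is straightforward to verify:

```python
# A subjective year at a million-fold speedup, in wall-clock seconds.
speedup = 1e6
seconds_per_year = 365.25 * 24 * 3600   # ~3.16e7 seconds

print(seconds_per_year / speedup)       # ~31.6 seconds per subjective year
```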

Let’s say there is a smart AI that thinks a million times as fast as a human, but does not yet have tools that work on its own timescale–it has the mind, but not the sensors and tools. What is the fastest path from our current technology to molecular nanotechnology? This is an interesting sort of question, because I don’t actually have thousands of years, at one year per 31 seconds, to think about the problem. I can try to think of a creative method you could use to bootstrap yourself to nanotechnology from our current technology if you thought fast enough. Of course, I don’t know what the real answer is, because I cannot actually think that fast.

[Slide 4]

The fastest path I can think of from our current technology to molecular nanotechnology requires around ten thousand years of fast-time–probably around a week in what we think of as our time. The first thing you have to do is crack the protein folding problem, so you can look at a DNA strand and figure out how the corresponding chain of amino acids will fold up into a chemical shape. If you can crack the protein folding problem, you can design DNA sequences that make proteins that perform arbitrary molecular operations–for example, building faster nanotechnology in the diamondoid regime, where everything is built out of carbon atoms. That is what we think of as mature nanotechnology, though we may be able to do better. There are already online providers that will accept your specification of a DNA string, synthesize the DNA, assemble the corresponding chain of amino acids, and ship the protein back to you by FedEx, with a 72-hour turnaround time.
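For scale, the same million-fold conversion turns those ten thousand subjective years into a few days of wall-clock time, which is where the “around a week” loose estimate comes from:

```python
# Ten thousand subjective years at a million-fold speedup, in wall-clock days.
speedup = 1e6
seconds_per_year = 365.25 * 24 * 3600

wall_clock_days = 10_000 * seconds_per_year / speedup / 86_400
print(f"{wall_clock_days:.1f} days")    # ~3.7 days, i.e. under a week
```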

Today we cannot do all that much with that service, because we haven’t cracked the protein folding problem yet. But if you had, you would use those proteins to build the next nanodevice you need, which builds the next nanodevice after that, and so on. Then, when you get sufficiently high-quality nanotechnology out of this chain, you have an unlimited supply of spare parts around you–what we call “atoms”–which you can use to build a copy of yourself in a hundred seconds or so. Two copies of you build four copies, four copies build eight, and the moral of the story is that we are separated from rather extreme technologies more by our own stupidity than by any fundamental physical barriers. If we can break down the stupidity barrier, there isn’t much else standing in our way.
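As a footnote to that replication story: the doubling arithmetic compounds very quickly. A sketch, assuming one doubling per hundred-second build cycle:

```python
# Exponential self-replication: the population doubles every build cycle.
cycle_seconds = 100          # assumed time for one replicator to copy itself

copies, elapsed = 1, 0
while copies < 10**9:        # run until a billion replicators exist
    copies *= 2
    elapsed += cycle_seconds

print(copies, elapsed / 60)  # 1073741824 copies after 50.0 minutes
```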

