How Long Before Superintelligence?

“How long before superintelligence” is a paper by Nick Bostrom. Here is the abstract:

“This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.”
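
For a sense of the scale involved, the sort of back-of-the-envelope estimate this literature relies on fits in a few lines of Python (the figures below are common order-of-magnitude assumptions, not numbers taken from the paper):

    # Rough brain-capacity estimate of the kind the paper surveys.
    # Both figures are order-of-magnitude assumptions, not measurements.
    synapses = 1e14        # ~10^14 synapses in a human brain
    signals_per_sec = 100  # ~100 Hz effective signalling rate per synapse
    ops_per_second = synapses * signals_per_sec
    print(f"{ops_per_second:.0e} synaptic ops/sec")  # ~1e+16

On assumptions like these, human-equivalent hardware means something on the order of 10^16 operations per second, which is why the hardware half of the argument comes down to extrapolating processor trends.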

The paper has also been updated with several postscripts, including one from 2008, which says:

“I should clarify what I meant when in the abstract I said I would “outline the case for believing that we will have superhuman artificial intelligence within the first third of the next [i.e. this] century”. I chose the word “case” deliberately: In particular, by outlining “the case for”, I did not mean to deny that one could also outline a case against. In fact, I would all-things-considered assign less than a 50% probability to superintelligence being developed by 2033. I do think there is great uncertainty about whether and when it might happen, and that one should take seriously the possibility that it might happen by then, because of the kinds of consideration outlined in this paper.

There seems to be somewhat more interest now in artificial general intelligence (AGI) research than there was a few years ago. However, it appears that as yet no major breakthrough has occurred.”

Recently on Nanodot, Foresight Institute President J. Storrs Hall said:

“I would guess, and this is blatantly a speculation, albeit a fairly well informed one, that the “secret trick” of AI will fall in the next decade. That means that the 20s will see robots not just as good as humans at specific, well-defined tasks, but able to learn new tasks the way humans do.”

Have we not learned anything? The very idea that there is any discrete “secret trick” is reminiscent of the physics envy that pervades thinking on AI. Fortunately, such beliefs delay work on AGI in general, leaving more time for Friendliness Theory to be developed.

Comments

  1. MZ

    That paper is funny.

    “In about the year 2007 we will have reached the physical limit of present silicon technology.”

    This is why I argued years ago on WTA-TALK that transhumanists are too optimistic. History consistently proves the vast majority of predictions to be too optimistic. People argue, “nobody predicted the Internet would happen by the 1990s.” Sure, because you can’t predict something you can’t imagine. If you had told somebody in the 1950s that a global communications infrastructure would exist and millions of people would be connected by tele-type machines, and then asked them when they thought it would happen, they would have said 1970. So failing to predict the Internet is not a win for technological acceleration, but another failure of prediction.

  2. @MZ:

    What exactly is your point? You quote:

    “In about the year 2007 we will have reached the physical limit of present silicon technology.”

    You then use this as evidence of overly optimistic predictions in transhumanism, when in fact Moore’s Law is still ticking over nicely and will probably continue to do so for another 8 years or so, according to the man himself:

    http://www.engadget.com/2007/09/19/gordon-moore-predicts-end-to-moores-law-in-10-years/

    You’re citing evidence contrary to your argument.

  3. http://singularityhub.com/2008/11/02/singularity-summit-2008-reviewed/

    Rattner, CTO of Intel, first focused on the technical pathways by which Intel and the microprocessor industry will be able to continue to uphold Moore’s law of doubling CPU capacity every 18 months for the foreseeable future. Rattner pointed out that in one sense we have already reached the limit of Moore’s original law [in 2007], because we can no longer appreciably increase the speed or shrink the size of yesterday’s CMOS silicon gate.

    [So Bostrom could be viewed as exactly accurate]

    Luckily, Intel and others have been able to circumvent the limits of silicon by innovating new types of transistor gates, such as the HiK-MG gate that recently allowed us to move to 32 nm microprocessors during the last year. Rattner argued that the industry has a pipeline of transistor innovations, such as HiK-MG, tri-gate FinFET, and III-V, that will allow us to continue “non-pure silicon CMOS” Moore’s law for at least another 10 years.
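
    As a back-of-the-envelope illustration of what “doubling every 18 months” compounds to, here is a minimal sketch in Python (the baseline of 1.0 is a normalized illustrative figure, not Intel data):

        # Moore's-law compounding: capacity doubles every 18 months.
        # The baseline of 1.0 is a normalized illustrative figure.
        def capacity(year, base_year=2008, doubling_months=18):
            """Relative capacity after (year - base_year) years."""
            months = (year - base_year) * 12
            return 2 ** (months / doubling_months)

        for year in (2008, 2013, 2018):
            print(year, round(capacity(year), 1))
        # 2008 1.0, 2013 10.1, 2018 101.6 -- so ten more years of
        # Moore's law buys roughly a hundredfold gain in capacity.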

  4. Rattner on algorithmic advances versus hardware speed.
    http://www.theinquirer.net/inquirer/news/393/1049393/intel-s-rattner-says-the-machines-will-get-us-in-the-end

    D-Wave’s Geordie Rose on the algorithms-vs-hardware throwdown:
    http://dwave.wordpress.com/2007/08/27/algorithms-vs-hardware-the-throwdown/

    I would add sensors and other technologies that make it easier to create very useful algorithms:

    http://www.newscientist.com/gallery/dn16585-amazing-mirrors

    LIDAR is very important for working robot cars (self-driving cars):
    http://en.wikipedia.org/wiki/LIDAR

    So it’s a full-court press of technologies and methods.

  5. “secret trick” of AI will fall in the next decade

    I would argue that this is the case, also.

    It’s a “dirty hack” and it will be exploited soon.

  6. Get in line — Marvin Minsky has accused me of “physics envy” to my face. I’m unimpressed by the implied argument. Lots of fields floundered before a key principle was found, and then took off: biology, with evolution; economics, with price theory (supply and demand curves); aviation, where it took a couple of bicycle mechanics to realize that you had to be able to balance a plane or it would crash.

  7. Hi Josh, thanks for visiting and congrats on your new role at Foresight.

    Regarding whether AI has a discrete secret or not, we’ll find out soon enough.

  8. Hi Michael,

    Have you read Bostrom’s “Whole Brain Emulation Roadmap”? I did (though I only understood parts of it). It’s fascinating.

    My intuition is that we’ll get AGI (FAI, hopefully) from a mostly de novo design, but knowing more about plan B (WBE) certainly puts things in perspective.

  9. Speaking of the “secret trick”, I’m not hearing much these days about what Jeff Hawkins is doing. It seemed promising based on his book On Intelligence and a talk at MIT (http://mitworld.mit.edu/video/316), but few of the other AI researchers I try to follow even mention his theories.

    Can anyone comment on this?

  10. I don’t like Jeff Hawkins’ approach at all. I think he is already out of the race.

    But it’s only my POV.

  11. Vladimir Golovin

    Thomas, what do you find wrong with Hawkins’ approach? (That’s not to say that I find his approach right — I just want to know the specific reasons why you dislike it.)

  12. Yes, I understand that.

    I don’t like his idea that brains do something other than compute. They don’t, IMO.

    This is the gap between us.

  13. But it doesn’t really matter, Vladimir. One of those guys will be right soon, I guess, and will deliver the machine.

  14. Hey MGR,

    Nah I haven’t read the entire WBE roadmap. I should check it out.

    Hawkins’ plan is very unoriginal, uncreative, and unlikely to work. Like 98% of other people working towards AGI, he has physics envy, which means he looks for one organizing principle (in his case, Hierarchical Temporal Memory) and, instead of calling it necessary but not sufficient, makes the very strong claim that implementing an AI based on this principle alone would be enough to duplicate intelligence.

    His book is filled with flaky anecdotes that illustrate obvious truisms, such as that most patterns can only be observed over time rather than in an instant, or that the brain works in a predictive way. Hawkins rediscovers 10 or 20 basic principles of neuroscience and AI that practically everyone already agrees on, then says they’d be sufficient for human-level AI. One of his only original ideas is that the hierarchical design on which the visual cortex is based is likely reproduced throughout the cortex as a result of evolutionary conservation of complexity. Maybe so, but even if so, this would only get you 0.01% of the way further towards a solution, yet Hawkins acts like it is a huge deal.

    On some list somewhere, Ben Goertzel made a rebuttal similar to the above that I agreed with. Too bad mailing lists are black holes of information, where recovering a specific message years later is difficult if not impossible.

    The only way that Hawkins’ idea has gotten any traction is with Seth Godin-reading, TechCrunch-loving entrepreneur types who want to read a book about AI, but only one by someone “respectable”, like one of their own who has already succeeded in making bucket-loads of cash. The complexity of the text and the sophistication of the ideas in the book are similar to what one might find in a book titled “Neuroscience: a Simple Intro”. The slightly condescending way in which Hawkins presents simple ideas that I was exposed to while learning about neuroscience at age 14 is almost enough to make me want to recycle my copy of On Intelligence right away, but I have to keep it because people often ask me what I think about his theories.

    Reading Hawkins’ book is one of those experiences that makes me think for a moment that there could be an anthropic shadow around AI research — universes where there is quality thinking about AI are those universes that eventually end up empty, and we’re more likely to find ourselves in a universe with more people, so we find ourselves in a universe where people keep failing at AI. Maybe entirely false, but that’s what I often think of when I read On Intelligence.

    TL;DR: Hawkins’ theories are too simple to command any respect in the AGI community.

    Thomas, it does matter, and your explanation for why Hawkins is wrong is too brief.

  15. That’s what I suspected, but I haven’t really dug into my MITECS copy yet (I want to read Molecular Biology of the Cell (Alberts, 5th edition, 2008) first, but it’s slow going), so from my vantage point it’s hard to differentiate truly new ideas from re-heated leftovers labelled as new.

    Thanks for the heads up, Michael.

  16. Michael GR – Hawkins is working with a few others on a startup called Numenta.
    I quote from their homepage:
    “Numenta’s first implementation of Hierarchical Temporal Memory, or HTM technology is a software platform called NuPIC, the Numenta Platform for Intelligent Computing, which is available to developers under a free research license. Numenta also is developing a Vision Toolkit and a Prediction Toolkit that will simplify the task of creating HTM networks for specific problems. Interested partners and developers should download NuPIC for experimentation and register for the Numenta Newsletter to learn about future releases of the Toolkits as well as other developments in the HTM world.”

    The basic idea is that they’ve created a freeware prototype to get some of Hawkins’s ideas off the ground, so that people can download it, learn it, use it, and develop it in the process. It’s actually pretty cool to check out the Numenta forum. They speak a different language than I do, but one thing’s clear – people are really using it.
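
    To give a feel for what the “temporal memory” part is driving at, here’s a deliberately toy sketch in Python of the learn-sequences-then-predict idea (my own illustration of the concept, not Numenta’s actual NuPIC API; real HTM uses sparse distributed representations and a hierarchy of such learners):

        # Toy first-order sequence memory: learns which pattern tends to
        # follow which, then predicts the most frequent successor.
        # Illustrative only -- not Numenta's NuPIC API.
        from collections import defaultdict, Counter

        class ToySequenceMemory:
            def __init__(self):
                self.transitions = defaultdict(Counter)
                self.previous = None

            def observe(self, pattern):
                if self.previous is not None:
                    self.transitions[self.previous][pattern] += 1
                self.previous = pattern

            def predict_next(self):
                successors = self.transitions.get(self.previous)
                return successors.most_common(1)[0][0] if successors else None

        mem = ToySequenceMemory()
        for p in "ABCABCAB":
            mem.observe(p)
        print(mem.predict_next())  # 'C' -- the learned successor of 'B'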

    I agree with you, Michael, that this book is frustrating and condescending to anyone who knows about current neuroscience… but then again, it IS a self-proclaimed layperson’s book. It’s pop science. For anyone with a firm knowledge of the area covered by any bestselling science book, it’s always frustrating to comb through all the hand-holding prose and self-evident (to us) truisms. If you thought this was bad, try “Your Brain on Music” by Daniel J. Levitin. He has some fascinating research and examples in there, but before getting to them, he basically feels the need to convince the reader (appealing to the 18th-century demographic) that brains are the source of emotion and thought. Oy… That said, it’s definitely true that Hawkins is stuck on one very unidimensional idea, and he doesn’t even mention other approaches (the most needed in this book, I think, is embodied intelligence, or at least some reference to the rest of the body), but he does many good things here.
    1. Popular science books are too rare. I don’t need to preach to the choir about public feelings of intimidation by all things science.
    2. Ditto and even more so for the AI/Superintelligence/Singularity movement. It’s good to have popular books on the subject that lots of people read, and the movement needs more people to talk to an uninitiated audience in a straightforward, not-futuristic-insane-guy kind of way.
    3. He puts his money where his mouth is: he has his own neuroscience lab, he’s one of the biggest single contributors to handheld electronics, and he’s doing the Numenta thing, for starters.
    He may not be the biggest thinker, but he pulls his own weight and then some for this movement. And, frankly, there are too many big thinkers in the superintelligence world, and not enough Jeff Hawkinses holding things down in their respective niches. It’s all about specialization.

  17. It may be that Jeff Hawkins preaches one thing and does another. Or maybe he will learn as he goes.

    That would be very good for him, his project, and maybe for all of us.

    I now give him about a 1% chance of coming up with something useful in the next 10 years. Which is a lot.

  18. “It may be that Jeff Hawkins preaches one thing and does another.”

    It may be? What is that supposed to mean? Are we to assume that we are being lied to about the nature of his current Numenta project, for instance, as a vessel for his research on applying cortical algorithms? He doesn’t claim to be Jesus Almighty – he never said he would single-handedly design a thinking, feeling computer. He’s concentrating on helping to advance the field of computationally modeling cortical structure. That’s it. And that is exactly what he’s doing, as we speak: a new startup, which people are using right now, based largely on his own research from his own lab, which he bought. What does a guy have to do? A 1% chance? To come up with SOMETHING useful in the next 10 years? How is he preaching one thing and doing another? What grounds do you have for that claim, other than a strange personal grudge? By the way, you’ve never said what it is about his approach that you don’t like. What’s wrong with cortical algorithms? Or rather, what surefire approaches to AI do we have that are good enough to make it useless to experiment with approaches LIKE algorithmic cortical modeling?

  19. Well, I hope he already knows that everything brains do is computing. It is difficult now to retract the claims in his book that something other than computing is going on in brains. So yes, in that sense he is lying, and I don’t care very much whether it is a deliberate lie or just a mistake – it’s all the same info noise.

    Yes, it’s my personal grudge, my wild guess, that he has a 1% chance of success. Sure, what else could it be?
