“How Long Before Superintelligence?” is a paper by Nick Bostrom. Here is the abstract:
“This paper outlines the case for believing that we will have superhuman artificial intelligence within the first third of the next century. It looks at different estimates of the processing power of the human brain; how long it will take until computer hardware achieves a similar performance; ways of creating the software through bottom-up approaches like the one used by biological brains; how difficult it will be for neuroscience to figure out enough about how brains work to make this approach work; and how fast we can expect superintelligence to be developed once there is human-level artificial intelligence.”
The paper is also updated with several postscripts, including one from 2008, which says:
“I should clarify what I meant when in the abstract I said I would “outline the case for believing that we will have superhuman artificial intelligence within the first third of the next [i.e. this] century”. I chose the word “case” deliberately: In particular, by outlining “the case for”, I did not mean to deny that one could also outline a case against. In fact, I would all-things-considered assign less than a 50% probability to superintelligence being developed by 2033. I do think there is great uncertainty about whether and when it might happen, and that one should take seriously the possibility that it might happen by then, because of the kinds of considerations outlined in this paper.
There seems to be somewhat more interest now in artificial general intelligence (AGI) research than there was a few years ago. However, it appears that as yet no major breakthrough has occurred.”
Recently on Nanodot, Foresight Institute President J. Storrs Hall said:
“I would guess, and this is blatantly a speculation, albeit a fairly well informed one, that the “secret trick” of AI will fall in the next decade. That means that the 20s will see robots not just as good as humans at specific, well-defined tasks, but able to learn new tasks the way humans do.”
Have we not learned anything? The very idea that there is some discrete “secret trick” to AI is reminiscent of the physics envy that pervades thinking on the subject. Fortunately, such beliefs delay work on AGI in general, leaving more time for Friendliness Theory to be developed.