Forecasting Superintelligence: the Technological Singularity

©2003 by Michael Anissimov


"Singularity forecasting" is an emerging field of study attempting to predict when the technological creation of smarter-than-human intelligence (the "Singularity", as coined by Vernor Vinge) might occur. Greater-than-human intelligence might be accomplished through the physical enhancement of human brains (less likely), the creation of artificial general intelligence (more likely), or possibly some subtle mix of the two. The scientific community's current understanding of intelligence suggests that this should eventually become technologically possible. "Intelligence" is a function that certain physical systems perform, and like any system that performs useful functions, it will eventually be reverse-engineered by human researchers. Prominent futurist Ray Kurzweil told the US Congress of Representatives that smarter-than-human intelligence is likely to arrive within twenty to thirty years, at his testimony at a hearing on the societal implications of nanotechnology. Kurzweil believes that once nanotechnology matures to a certain degree, it will permit us to scan the human brain at the fine-grained molecular level, uncovering the cognitive regularities responsible for our intelligence. At that point, Kurzweil claims, it will be possible to either enhance human brains cybernetically, or completely transfer ("upload") human brains onto computers - probably nanocomputers - and run them at accelerated clock rates, resulting quickly in transhuman intelligence. (A community of ultrafast thinkers living within a virtual environment could conceivably recapitulate millenia of technological progress in seconds, depending on the degree of speedup involved.) The greatest advantages of transhuman intelligence will not necessarily come from speed advantages, but from genuinely more accurate and elaborate thinking processes. We have no cause to believe that humans represent a theoretical upper bound on intelligence.

The barriers between our current level of technology and the Singularity appear to lie in technical and scientific fields (as opposed to opaquely philosophical ones) where incremental or substantial progress occurs daily. Enhancing human intelligence or creating artificial intelligence will be a matter of combining the appropriate hardware (either biological or nonbiological) with suitable software. Like the wheel and the steam engine, the "invention" of transhuman intelligence seems likely to be highly convergent - liable to happen given a wide range of future scenarios - so the question is not "will it happen?" but "when will it happen?" and "what can be done to improve the chances of a pleasant outcome?" (Although a sufficiently large planetary disaster could conceivably destroy our civilization before the creation of transhuman intelligence, if our path of technological development continues as it has, its eventual creation seems very likely.) Like any technological development, the Singularity must be approached carefully and cautiously - and the fact that this development could go on to create manifold technological developments of its own makes safety all the more urgent. If we develop an artificial general intelligence that is smarter than a human being, it will only be a matter of time before it uses its superior intelligence to hop off its immobile substrate and into the real world. This could result in massive threats or huge benefits, depending on the motivations of this transhuman intelligence, which will at least partially derive from the initial programming it receives from its human creators.

The "Singularity" happens when transhuman intelligence is created; not when the rate of technological development accelerates rapidly, not when human "collective intelligence" reaches some critical threshold, not when low-level AIs become integrated with our society. Although the space between human and transhuman intelligence is almost certainly continuous (rather than discrete), it may be worthwhile to specify several potentially salient levels of transhumanity:

  • Intelligences barely as "smart" as humans or slightly less so, but possessing hardware advantages (ultrafast thinking, perfect recall, better pattern recognition, etc.) that signify de facto transhuman intelligence.

  • Smarter than any human that has ever lived, plus substantial hardware advantages that signify powerful transhuman intelligence.

  • Far smarter than any human that has ever lived, plus substantial hardware advantages that signify de facto superintelligence.

  • Nick Bostrom, a philosophy professor at Oxford University, defines a "superintelligence" as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills". The question of "how much smarter?" is usually left open, but here, when the word "superintelligence" is used, we will mean substantially smarter than all human geniuses, with orders of magnitude more computing power, in addition to profound "mental software" advantages. Although it may be offensive to some, "smarter than humans as we are smarter than cats" would be another suitable definition, insofar as the analogy is useful. Superintelligence would be the quick consequence of any recursively self-improving transhuman intelligence - that is, any transhuman intelligence capable of improving or adding to the very hardware it is running on. If the transhuman intelligence in question finds a way to accelerate its own thinking speed by upgrading its hardware (a near-certainty), then superintelligence would probably be a fairly rapid consequence of "mere" transhuman or even roughly human-equivalent intelligence. One day, it wins its first game of chess against a human. The next day, it discovers a cure for cancer. The next day, it accomplishes something we can't even imagine. Intelligence, when turned inwards upon itself for the purpose of improving its underlying hardware, will give rise to qualitatively different solutions than collectives of human beings incrementally chipping away at outstanding problems.
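
    To see why recursive self-improvement could compress the timescale so sharply, consider a toy model (a minimal sketch with assumed, illustrative parameters - not a prediction from this article) in which each hardware upgrade doubles capability and halves the time to the next upgrade:

        # Toy model of recursive self-improvement (all parameters assumed):
        # each upgrade doubles capability, and a more capable system needs
        # half as long to engineer its next upgrade.
        capability = 1.0            # roughly human-equivalent starting point
        months_to_next = 12.0       # assumed time to the first upgrade
        elapsed = 0.0

        for generation in range(1, 11):
            elapsed += months_to_next
            capability *= 2.0       # assumed gain per hardware upgrade
            months_to_next /= 2.0   # smarter system designs the next one faster
            print(f"generation {generation:2d}: {capability:6.0f}x baseline "
                  f"after {elapsed:.1f} months")

    Under these assumptions the total time converges toward two years even as capability grows without bound; the point is not the particular numbers but the shape of the curve.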

    "Transhuman" would refer to intelligences only "slightly" smarter than humans, perhaps on the same scale as humans are smarter than chimps. However, even this comparison may prove fruitless, as the first transhuman intelligences will likely have profound hardware advantages as natural consequences of their construction. For example, it makes little sense to call an accelerated software duplicate of a human mind "merely human" - the difference in processing speeds could be in the billions or trillions. (Present-day silicon chips are already tens of millions of times faster than biological neurons.) Cognition on a more flexible substrate than biological neurocircuitry would also open up huge opportunities for deep self-analysis, eidetic memory, self-revision, rapid learning, high-level pattern recognition, impromptu specialization of cognitive resources for difficult problems, module overclocking, and much more.

    Although the first transhuman intelligences might possibly be pharmacologically or cybernetically enhanced human beings, artificial general intelligence currently seems like the more probable contender. The human brain is tied to legacy software and hardware; most of the brain evolved in the absence of our unique human intelligence and behavioral flexibility, resulting in compatibility problems. Any company or government that wants to enhance a human brain will need to work within these overwhelmingly complex considerations and constraints. There are also massive ethical barriers that will hinder the gathering of data, the testing of methods, and the raising of research funds for the foreseeable future. Artificial intelligence is subject to none of these limitations, being implemented entirely on a reprogrammable, cheap, fast, and ethically unobtrusive substrate. Intermediate successes in AI are also more likely to be economically rewarding than intermediate successes in human brain enhancement. For one thing, enhanced humans probably wouldn't be able to make perfect copies of themselves or integrate with surplus computing power. With this in mind, here are some central points to consider in the field of Singularity forecasting:

  • If we assume that artificial general intelligence will reach transhumanity before augmented human beings do, then truly recursive self-improvement will begin when sufficiently sophisticated software is combined with sufficient processing power on a suitable computing substrate. The difficulty of the software problem will decrease drastically as processing power and cognitive science data continue to increase exponentially.

  • Nanocomputing (which might arrive as soon as 2007 or as late as 2020) will multiply available computing power by several orders of magnitude, and would drastically reduce the difficulty of AI's software problem if it became available to AI researchers. After first-generation nanocomputers, we should expect the quick development of more advanced types of nanocomputer, allowing even greater computing power (several more orders of magnitude). This is partially because nanotechnology is powerfully self-applicable.

  • Basic-to-intermediate medical nanotechnology (which might arrive between a few months and several years after nanocomputing) would multiply the resolution of our human brain scans by several orders of magnitude, greatly decreasing the difficulty of the software problem.

  • There is evidence that algorithm design for artificial general intelligence can be heavily nonbiological; i.e., engineered from the essential principles of general intelligence rather than from biological inspiration. If so, a viable AI design might be orders of magnitude simpler than the human brain's design. This has been argued at length by Moravec, Yudkowsky, and others.

  • Intermediate general AI designs need not be impressive or newsworthy; there is no explicit evidence that the optimal waypoints between where we are now and general AI must be visibly impressive or solve common human problems. General AI may not appear flashy (or dangerous) until it enters recursive self-improvement (at which point it quickly will be both). This raises the unfortunate possibility that we might be caught off guard by general AI.

  • General AI technology rests upon the intersection of several exponentially advancing technologies: computing, brain scanning, data analysis, and nanotechnology. These exponential trends are mutually synergistic; the effect they have upon one another will be multiplicative rather than additive (see the sketch following this list).

  • Artificial Intelligence need not be conscious, possess humanlike aesthetics, emotions, and intuition, or be popularly accepted in order to pose a threat (or benefit). An Artificial Intelligence need not be eloquent in human speech or even fully sane in order to improve itself recursively; even a dim-witted AI might be able to solve the technical problems of nanotechnology and begin creating new hardware for itself, resulting in significant improvements to overall intelligence, which could in turn be applied to devising new methods of intelligence enhancement. An AI stupid enough to take 1,000 subjective years to make a competent decision could rapidly become a threat if integrated with hardware allowing sufficient cognitive acceleration - at a ten-million-fold speedup, 1,000 subjective years pass in under an hour of real time. Once higher intelligence is reached, it could of course be applied to the creation of advanced technology and sophisticated plans for acquiring autonomy.

  • Considering the massive threats and opportunities inherent in Singularity technologies, it is probably prudent to take the conservative position and assume that general AI will be here sooner rather than later. That way we can be better prepared for its arrival.
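
    As a rough illustration of the multiplicative-versus-additive point above (a minimal sketch; the doubling times are arbitrary assumptions, not forecasts), compare what the four trends yield advancing in isolation versus compounding one another:

        # Additive vs. multiplicative combination of exponential trends.
        # Doubling times (in years) are assumed for illustration only.
        TRENDS = {
            "computing": 1.5,
            "brain scanning": 2.0,
            "data analysis": 2.5,
            "nanotechnology": 3.0,
        }

        def factor(doubling_time, years):
            # Improvement factor after `years` at the given doubling time.
            return 2 ** (years / doubling_time)

        for years in (5, 10, 20):
            factors = [factor(dt, years) for dt in TRENDS.values()]
            additive = sum(factors)         # trends advancing independently
            multiplicative = 1.0
            for f in factors:
                multiplicative *= f         # trends feeding one another
            print(f"{years:2d} years: additive ~{additive:,.0f}x, "
                  f"multiplicative ~{multiplicative:,.0f}x")
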
  • Singularity forecasting quotes:

    "The neuroscience community has advanced our collective knowledge of brain function to the point where it is now possible to build accurate and meaningful computational models of major brain pathways. I have focused on the auditory pathway, aided by direct collaboration with the world's leading auditory neuroscientists. It is now possible to visualize the responses of large ensembles of neurons to complex real-world sounds such as speech, music, and sounds moving through space, for the first time giving us the opportunity to see the computations we are effortlessly performing at a subconscious level. With care, it is possible to verify that our models agree with biological function -- once the principles of operation are known, it is in fact possible to build engineered systems that outperform the human system in quantifiable ways. [...] The next two decades promise an exciting period of advances in our understanding of the nature of human intelligence, and the development of increasingly intelligent assistants and prosthetics that enrich human life in ways we can now only imagine."

    - Lloyd Watts, computational neuroscientist, 2002 World Congress on Computational Intelligence, Plenary Session


    "As the computational power to emulate the human brain becomes available - we're not there yet, but we will be there within a couple of decades - projects already under way to scan the human brain will be accelerated, with a view both to understand the human brain in general, as well as providing a detailed description of the contents and design of specific brains. By the third decade of the twenty-first century, we will be in a position to create highly detailed and complete maps of all relevant features of all neurons, neural connections and synapses in the human brain, all of the neural details that play a role in the behavior and functionality of the brain, and to recreate these designs in suitably advanced neural computers."

    - Ray Kurzweil, futurist and author


    "As I discuss in Engines of Creation, if you can build genuine AI, there are reasons to believe that you can build things like neurons that are a million times faster. That leads to the conclusion that you can make systems that think a million times faster than a person. With AI, these systems could do engineering design. Combining this with the capability of a system to build something that is better than it, you have the possibility for a very abrupt transition. This situation may be more difficult to deal with even than nanotechnology, but it is much more difficult to think about it constructively at this point. Thus, it hasn't been the focus of things that I discuss, although I periodically point to it and say: 'That's important too.'"

    - K. Eric Drexler, nanotechnology pioneer

    "Phase 4: Complete the brain. This involves scaling up the computing resource by the final order of magnitude. Timescale: 15-20 years. These "plans" could easily turn out to be very cautious; all that is required is a major breakthrough in understanding neural encoding and appropriate abstractions and the whole lot could fall into place in half the time I suggest here."

    - Steve Furber, computational neuroscientist


    "Computers have come from nowhere 50 years ago and are rapidly catching up in capability with the human brain, which hasn't improved in performance for hundreds of thousands of years. We can expect man machine equivalence by about 2015, perhaps even woman machine equivalence by 2016. But after this, the computers will continue to get smarter."

    - Ian Pearson, futurist


    "It may seem rash to expect fully intelligent machines in a few decades, when the computers have barely matched insect mentality in a half-century of development. Indeed, for that reason, many long-time artificial intelligence researchers scoff at the suggestion, and offer a few centuries as a more believable period. But there are very good reasons why things will go much faster in the next fifty years than they have in the last fifty."

    - Hans Moravec, futurist and roboticist


    "Dramatic increases in collective human-machine intelligence are possible within 25 years. It is also possible that within the next 25 years single individuals acting alone might use advances in science and technology (S&T) to create and use weapons of mass destruction (WMD).

    Most people do not appreciate how fast science and technology will change over the next 25 years. The synergies and confluence of nanotechnology, biotechnology, information technology, and cognitive science (NBIC) are a particularly important new merger of science and engineering supported by both government and venture capitalists. NBIC tools will dramatically increase individual and group performance and the support systems of civilization."

    - 2003 State of the Future, Executive Summary, American Council for the United Nations University


    "It is suggested that there will be an intermediate stage, before "pure" nanoelectronics, in which nanometer-scale quantum-effect devices will be introduced as subcomponents embedded in microelectronic chips. Design studies show that this should greatly increase the density and flexibility of conventional digital logic. Fabrication work toward this "hybrid" approach is ongoing in the research community. If it continues to be successful, it could accelerate the arrival of commercially useful quantum-effect, nanoelectronics. Some experts believe this could make a form of nanoelectronics available for applications as early as the year 2005."

    - Daniel Mumzhiu, Michael Montemerlo, and James Ellenbogen, of the MITRE Nanosystems Group


