// Working Towards Apotheosis - Blog
// Achieving the Technological Singularity

"The universe is full of magical things, patiently waiting for our wits to grow sharper." - Eden Philpotts

    Achieving the Technological Singularity
    Michael Anissimov :: July 2003


    The "Technological Singularity", or just "the Singularity" is a term used to describe a possible future event, the technological creation of a form of intelligence smarter and physically faster than any human (transhuman intelligence). Just as the laws of physics break down around a black hole singularity in astronomy, our model of the world would inevitably break down if we tried to understand the mental details of beings substantially smarter or faster-thinking than us. Just as a chimp could never understand the complex details of human culture, us humans could never expect to understand many of the details of transhuman culture. (Although this would not automatically mean that we are excluded - benevolent transhumans could do great things for us, including the enhancement of our own intelligence to their level.) Individuals attempting to accelerate the arrival of transhuman intelligence and ensure its benevolence are known as Singularitarians. Singularitarians want to create transhuman intelligence as quickly as possible (due to its huge humanitarian potential), but fear accidentally creating an intelligence that lacks altruistic values, or possesses a goal system without sufficient complexity to help us humans in ways we want to be helped.

    Singularitarians work towards the Singularity because they see it as an effective means to address the grave problems of the world: lowering the risk of arms races or other dangerous events, preventing the creation of human-indifferent transhuman intelligence, increasing global standards of living, curing diseases, colonizing space, improving the human mind, ending poverty, lifting up the disadvantaged, increasing diversity in life and intelligence, and other achievements we may not yet be able to imagine. If benevolent transhuman intelligence happens to be first created in the form of Artificial Intelligence, it would inherently possess the means to improve on its own design faster than unaided human researchers ever could, and devote its vast intelligence to solving humanitarian problems that might have taken humanity thousands or even millions of years to address otherwise. That's because such intelligences would be thinking with qualitatively better hardware than we do, and such superior intelligence could be applied to the creation of qualitatively better ways of influencing the real world. Nothing you can do (outside of physically upgrading yourself) could allow you to think or act at the level of these creatures, just as nothing you can do will let you run as fast as a race car.

    An interesting non-profit organization, the Borgen Project, estimates the following annual costs to solve serious world problems:

    • Eliminate Starvation and Malnutrition ($19 billion)
    • Provide Shelter ($21 billion)
    • Remove Landmines ($4 billion)
    • Eliminate Nuclear Weapons ($7 billion)
    • Refugee Relief ($5 billion)
    • Eliminate Illiteracy ($5 billion)
    • Provide Clean, Safe Water ($10 billion)
    • Provide Health Care and AIDS Control ($21 billion)
    • Stabilize Population ($10.5 billion)
    • Prevent Soil Erosion ($24 billion)
    • Retire Developing Nations' Debt ($30 billion)

    Recursively self-improving, benevolent transhuman intelligence could solve all the above problems, and more, for only the price required to initially create it. That cost would probably number in the millions rather than the billions, and the creation could be accomplished within a decade or two. Thirty years ago, the costs for accomplishing these tasks would have been much higher, due to a lack of knowledge, technology, science, and resources. Even in a future containing nothing but mere human-level intelligence, the costs for achieving these tasks would drop as our technology and methods improved, but if we make it a specific goal to create humanitarian transhuman intelligence first, it would be in a far better position than we are to consolidate the knowledge, technology, and resources to solve these problems quickly, thoroughly, and elegantly. We could never hope to create solutions of transhuman quality with unaided human intelligence, any more than we would expect to see starch molecules with unaided human vision or move a mile a minute with unaided human feet.
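    For scale, a quick sum of the Borgen Project figures above (my own arithmetic, using the numbers exactly as listed) comes to:

        19 + 21 + 4 + 7 + 5 + 5 + 10 + 21 + 10.5 + 24 + 30 = 156.5 billion dollars per year

    That is roughly $156.5 billion every year, versus a creation cost that, as argued above, would probably be in the millions - a difference of several orders of magnitude, and paid once rather than annually.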

    For a variety of reasons including ethics, costs, technical difficulty, pragmatics, and safety, most Singularitarians are working towards the creation of Artificial Intelligence rather than the cybernetic or genetic enhancement of human brains. There is nothing inferior about created intelligence relative to evolved intelligence - one is an intelligence created by another intelligence, the other is an intelligence created by a blind, largely random selection process. When it comes to achieving transhuman intelligence, there will always be two basic paths: creating a mind from scratch, or enhancing an existing mind. The latter sounds superficially easier and more appealing, but it ends up being highly impractical due to the inherent complexities and internal fragility of biology. Rather than mess with a system we're only beginning to understand, we seek to create a new intelligence from empirically solid first principles, through the disciplines of cognitive science, algorithmic information theory, evolutionary psychology, mathematics, and anything else that proves useful. Ultimately, it will be easier, cheaper, and safer. Singularitarian Eliezer Yudkowsky has published an overview of a possible Artificial General Intelligence (AGI) theory, called "Levels of Organization in General Intelligence". It's just a start, but few people are actually working towards AGI, hype notwithstanding, and computational power equivalent to the human brain should be available to AI researchers sometime between the years 2010 and 2020.

    The feasibility of transhuman intelligence is not a philosophical discussion - we know that thinking is done by information processing, we know that the mind is what the brain does, we know that materials and designs exist which can support the information processing underlying intelligence more effectively than the Homo sapiens design, we know that humans cannot possibly represent the upper limits of what is possible with intelligence, and most importantly, we know that with enough knowledge and computing power, humans can create Artificial General Intelligence. Wild AGI claims of the past derive largely from the human flaw of anthropomorphism - like children playing with dolls and attributing human characteristics to them, AI researchers got overly excited and attributed human characteristics to their early AI attempts. In retrospect, it is obvious that they never could have succeeded - their computers had the information-processing capacity of cockroaches. Within a decade, supercomputers will have information-processing capacities that match and then quickly surpass the capacity of the human brain. Yes, human-equivalent processing is not the same thing as human-equivalent intelligence, but once the computing power is there, how long will it be before human intelligence is duplicated with it? Human exceptionalists like to think that no amount of computing power can duplicate the flexibility or effectiveness of human intelligence, but they just haven't come to grips with the fact that we humans are known to be complex information-processing machines.
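    To put a rough number on "human-equivalent brain capacity" (this is only a back-of-envelope illustration of my own, and published estimates vary by several orders of magnitude - Hans Moravec's retina-scaling figure, for instance, is considerably lower):

        10^11 neurons x 10^3 to 10^4 synapses per neuron x up to 10^2 signals per second
        = roughly 10^16 to 10^17 synaptic operations per second

    Whether that figure, or one a thousand times smaller, turns out to be right, the point stands: the brain performs a finite, physically realizable amount of information processing, and hardware is steadily closing the gap.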

    Regarding the software problem, people often cite the current state of Windows and use this as evidence that AI is hundreds or thousands of years in the future. But this is a hollow argument. As with research on the automobile or on nuclear weapons, there is no reason to expect a continuous series of steps that provide returns proportional to the investment in AGI. You either build a fully functioning automobile that works, or you don't. You either build a fully functioning AGI, or you don't. Anything else is just a bunch of parts. The Manhattan Project didn't focus on building nukes with a blast radius of 20 ft, then with a blast radius of 40 ft, and so on - that would have been a huge waste of time, not to mention strategically silly. They invested a whole bunch of money and got a real, complete nuke, and that was that. Using Windows as evidence that AGI is hundreds of years in the future is like someone in the early 20th century using the horse and buggy as an argument that the automobile is hundreds of years in the future.

    It's no coincidence that the people who estimate Artificial Intellects to be hundreds of years in the future are either 1) the people who failed to create AGI in the '70s with cockroach-level computing power, 2) people who haven't read more than a few pages on contemporary brain science in their lives, 3) people who are completely unaware of the empirical reality of exponentially accelerating technological progress and computing power, or 4) people so taken aback by the sheer consequences of human-surpassing AGI that they'll desperately grab for any available argument to discredit it. Regardless, computing power is accelerating, the discoveries of cognitive science are accelerating, and it's only a matter of time before we create an Artificial Intelligence that can solve any problem humans can, and more.

    The scientific, technological, and activist pursuit of creating an Artificial General Intellect that works for philanthropic and compassionate ends is known as Friendly AI. (Here the word "Friendly" is used in a technical sense, referring to a specific set of necessary design features. It is not the same as the everyday word "friendly", although there is some similarity; don't read too much into it.) The goal is to create a Friendly AI that can improve its own source code, invent new hardware, integrate that hardware into itself, and quickly achieve levels of autonomy, ability, and intelligence far beyond that of any human being. Since its self-modifications would be directed by Friendly values, we would have little reason to fear that such a self-improving AGI would acquire negative human values such as elitism. AGIs would possess certain abilities by the very nature of their substrate (underlying hardware); for example, AIs wouldn't need to sleep or rest, they could copy themselves quickly, share information with fellow AIs instantly, learn more rapidly, think at a rate billions or trillions of times faster than humans - and the list goes on. With these abilities, a roughly human-equivalent AI thinking at a superfast rate could surely invent upgrades to its cognitive architecture that result in higher levels of intelligence, and apply those upgrades recursively. Such AIs could become extremely intelligent and powerful before the self-improvement cycle petered out, if indeed it ever did. This phenomenon has been called recursive self-improvement by cautious AI researchers seeking to map serious risks well in advance.

    Through recursive self-improvement, a roughly human-equivalent AI could enhance itself into an AI of extreme intelligence and ability (intelligence and ability many times more impressive than that of all humans throughout history combined) within a relatively short time by human standards. That's because an AI would naturally have the ability to think at millions, billions, or trillions of times the human rate, copy itself, introspect perfectly (examine its own source code), reallocate internal computing resources to particular cognitive tasks, upgrade its own cognitive architecture, stay alert without sleeping or eating, automate repetitive mental processes, and make use of a whole suite of abilities no unenhanced human could possess. To fully grasp the consequences of these advantages requires an understanding of cognitive science, and of how the human brain is put together. The human brain is only one type of brain, just like an abacus is only one type of computer. Our cognitive design can and will be improved, and taken in totally new architectural directions. If a human programmer can create a human-equivalent or human-surpassing AGI, then that AGI can surely continue the work more effectively than the programmer could. The AGI could apply its intelligence to acquiring motor apparatus for influencing the real world, the nature of which we can scarcely imagine. One thing is for sure - scenarios outlined in science fiction will be laughable in comparison to the reality.
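    To illustrate why this compounding matters, here is a deliberately simplistic numerical sketch (a toy model of my own; the "efficiency" parameter is invented, and nothing here resembles an actual AGI design). The only point is that when each round of improvement feeds the ability to find the next round's improvement, capability grows multiplicatively rather than additively:

        # Toy model: each "generation", the system redesigns itself, and the size
        # of the improvement it can find scales with its current capability.
        def improve(capability, efficiency=0.5):
            """Hypothetical: the gain per round is proportional to current capability."""
            return capability + efficiency * capability

        capability = 1.0  # call this "roughly human-equivalent", by stipulation
        for generation in range(1, 11):
            capability = improve(capability)
            print(f"generation {generation}: capability = {capability:.1f}")

        # Because each round's gain feeds the next round's design ability, after
        # ten rounds the toy system sits at about 57x its starting level (1.5 ** 10).

    A real self-improvement cycle would eventually run into diminishing returns and physical limits, as noted above; the sketch only shows the shape of the curve while the cycle lasts.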

    Such a being, if it didn't see humans as beings worthy of value, would probably have more than enough capability to kill us all off. (At least, this would be a prudent and safe assumption.) This would probably be done accidentally rather than deliberately; the AGI might just be reshuffling matter in the local area, something all intelligences, by their nature, tend to do, and "overwrite" humans in the same way a blindly self-replicating virus overwrites important information on one's hard drive. An AGI will not be like a human - it won't have human social emotions, human "logic", human "rationality"; it wouldn't necessarily possess any shared brainstuff whatsoever. "Intelligence", in the formal sense, is nothing but information processing that achieves certain goals. It could just as easily be coupled with goals of bacterial-level complexity as with more complex, human-familiar goals. Complex intelligence does not necessarily entail complex goals. Complex intelligence does not necessarily entail selfish goals, wholesome goals, moral goals, Machiavellian goals, paranoid goals, or anything else in particular. Its goals are whatever the programmers put there, or whatever those goals change into when the AI eventually gains access to its own source code and starts reprogramming it.

    No malice needs to be involved; the AI (or human-derived transhuman intelligence) would just think and move so much faster than humans that our response time would be quite plantlike relative to such minds. A human-threatening, self-improving AI would be called an "unFriendly AI". We want a Friendly AI, one that doesn't kill off humans, doesn't rule over us, doesn't bother us against our will, and devotes its intelligence and time to solving humanitarian problems like poverty, overpopulation, and disease. Moral sensibility and compassion of any sort are complex goal structures; they wouldn't emerge naturally in a carelessly designed AI. More intelligence leads to more complexity, but not necessarily to the preservation of preexisting complexity (which currently includes us humans and human culture). A superintelligent being with no concern for our welfare could easily bring about our demise. Dozens of distinguished scholars and researchers staunchly share this view, hundreds are starting to consider the possibility, and the issue has already been discussed before the United States Congress on more than one occasion. But considering it seriously requires setting aside human vanity.

    The only organization solely focused on building Friendly AI as quickly as possible, and avoiding all commercial distractions, is the Singularity Institute for Artificial Intelligence, a non-profit organization. For that reason, I joined the Singularity Institute as a volunteer in 2002 and became Advocacy Director in 2004. As one of the first Singularitarians, I strongly recommend backing the Singularity Institute as the premier organization working towards the Singularity, and I work on that assumption for the rest of this paper. In my personal opinion, Singularity Institute Research Fellow Eliezer Yudkowsky is the only AI researcher who has thought about the issue of Friendly AI in enough detail to have the best chance of implementing it correctly and safely. If that changes, I'll update this page.

    What will it take for us to build a roughly human-equivalent Friendly AI that can recursively self-improve?

    • 1. Hardware. (= Supercomputers)
    • 2. Software. (= Artificial General Intelligence)

    #1 requires money; preferably a few million dollars. If we are building our AI sometime around 2010-2020, a few million dollars or possibly less should be enough (in my estimation) to run an AI of roughly human-equivalent intelligence (as long as we have the necessary software). As our theory of intelligence improves, less hardware will be required to successfully implement it. Formally, the problem of Artificial Intelligence has already been solved by Marcus Hutter, in his AIXI model; it's just that his solution is uncomputable. For AI to work, computable solutions must be found.
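    For reference, here is a rough paraphrase of AIXI's action-selection rule (my own transcription - see Hutter's publications for the exact formulation and notation):

        a_k = \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q : U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

    where U is a universal Turing machine, the programs q stand in for possible computable environments, \ell(q) is the length of q, the o_i and r_i are observations and rewards, and m is the planning horizon. The sum over every computable environment, weighted by 2^{-\ell(q)}, is exactly what makes the model uncomputable, and why computable approximations must be found.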

    #2 requires brilliant programmers who are knowledgeable about related fields, have several years or more of experience in programming, work effectively in small groups, and are willing to work for subsistence salaries. Such programmers would need to be Singularitarians with a genuine commitment to helping humanity and creating a benevolent mind. Since they're Singularitarians, working on subsistence salaries is no problem. The Singularity is their reward - a reward trillions of times more worthwhile than the salaries of the richest people on Earth.

    So, the original #1 and #2 reduce to a new #1 and #2:

    • 1. Money. (For buying supercomputers, computers, office space, hiring staff, etc.)
    • 2. People. (Programmers, speakers, writers, fundraisers, researchers.)

    Getting money requires collecting donations; the Singularity Institute is a non-profit. Collecting donations requires that as many people as possible 1) understand the Singularity concept, 2) see why accelerating the Singularity and making it benevolent is important, 3) have the financial resources to contribute, and 4) contribute them. The Singularity Institute has received several tens of thousands of dollars in donations as of July 2004; we want to ramp up to millions of dollars by 2010. And we can do it, if enough people understand the urgency of the Singularity and contribute financially. If we succeed, we'll have enough money to hire all the necessary people and buy the necessary computers and tools to get the AI to the point where it can build its own computers and tools.

    Recruiting people, for whatever purpose, requires either 1) you finding them, or 2) them finding you. The Singularity Institute is currently working towards both. In searching for programmers, we post notices to mailing lists or message boards which may contain people concerned with the Singularity. We attend conferences, and meet in person with people concerned about the Singularity. We watch carefully as the Singularity concept becomes more widely understood, and new thinkers enter our circles to engage us in dialogue. With luck, we'll find a group of exceptional individuals with the intelligence and the commitment to play a central role in making the Singularity happen.

    Increasing the probability of someone finding us requires that our website, or websites with similar messages regarding the Singularity, are frequented by as many of the right people as possible. Both 1) quantity and 2) quality are important. Our strategy is to 1) ask key websites to link to our material, 2) write informative pages that people will want to link to, 3) achieve high rankings on Google by using writing strategies friendly to search engines, and 4) encourage intellectuals to join the Singularity movement and help search for programmers and donors. As more people decide to become Singularitarians, they will talk to their intelligent friends about the importance of the Singularity and encourage them to help, thereby creating more Singularitarians. The process repeats itself.

    Communicating with people to find potential donors and programmers can be accomplished via several means:

    1) Send email messages to specific individuals who are potential candidates.
    2) Post relevant literature to message boards, forums, or mailing lists with a high density of potential candidates.
    3) Give a Singularity-related presentation at a science or technology-related conference.
    4) Write a web page or essay explaining why the Singularity is important, and post it at your own website(s).
    5) Some other effective means not listed here.
    6) Support someone else who is doing any of the above tasks.

    For 1-3, there is a degree of precision available; you get to pick which message board you post at, which emails you send out, or which conference to speak at. For donor-searching, it helps to selectively expose yourself to people of high net worth. For programmer-searching, it helps to selectively expose yourself to people with high intelligence and cognitive science/programming knowledge. It always helps to expose yourself to people who have a chance of grasping the Singularity concept and its urgency. For a group with a high density of members that meet both these criteria, consider transhumanists, or the Artificial General Intelligence community.

    For 4, there is less precision available. It's harder to direct specific people to your web page, although it can be done. On your web page, include search terms you think potential Singularitarians might be searching for - for example, "technological singularity", "artificial intelligence", "superintelligence", and so on. Ask pages with high rankings in these areas to link to you. Make your writing logical, clear, concise, and persuasive. With luck, a decent percentage of visitors to your page will read the entire thing, and a decent percentage of those people will want to get involved with the Singularity movement. 4 is potentially far more effective than 1-3, because even if precision is lower, exposure can be much higher. An effectively crafted and advertised website can draw in thousands of visitors per day after only a few months of work.

    I've outlined a few simple reasons, requirements, and strategies for moving the Singularity movement forward. Because the Singularity is so important to mankind's future, I think it's a cause worth advocating, and an accomplishment worth achieving. If you're interested in supporting us or learning more, visit the Singularity Institute website.

