Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

5Jan/10 Criticisms of the Singularity

Yesterday, GOOD posted the seventh and penultimate installment of the series on the Singularity that Roko and I have been writing, "Criticisms of the Singularity". (My previous contribution to the series, "The Benefits of a Successful Singularity", was promoted to the front page of Digg.) For your convenience, the complete article is reproduced below.

Part seven in a GOOD miniseries on the singularity by Michael Anissimov and Roko Mijic. New posts every Monday from November 16 to January 23.

As was previously discussed in our series, the "singularity" means the creation of smarter-than-human intelligence, or "superintelligence", an intelligence significantly more capable than the human mind. Possible methods for its creation include brain-computer interfaces and pure artificial intelligence, among others. Various scientists, futurists, and mathematicians who write about the singularity, such as Ray Kurzweil, Nick Bostrom, and Vernor Vinge, consider such an event plausible sometime between about 2025 and 2050. Among those who consider the singularity plausible, it is widely agreed that the event could profoundly alter the world, our civilization, and even our bodies and minds, through the technologies that a superintelligence could create and deploy.

Because the singularity is such a new and speculative idea, and the subject of little academic study, people take practically every imaginable position on it. Some, unfamiliar with the idea and shocked by it, dismiss it outright or simply react with confusion. Others, such as philosopher Max More, dismiss some of its central propositions after more careful study. A substantial number embrace it openly and without many qualifications, such as futurist Ray Kurzweil, who seems to expect a positive outcome with very high probability. My organization, the Singularity Institute, and related thinkers such as philosopher Nick Bostrom, see a positive outcome as possible, but only with very careful work toward ensuring that superintelligences retain human-friendly motivations as they grow in intelligence and power.

Criticisms of the singularity generally fall into two camps: feasibility critiques and desirability critiques. The most common feasibility critiques are what I call the Imago Dei objection and the Microsoft Windows objection. Imago Dei is Latin for "Image of God," referring to the doctrine that humans are created in God's image. If humans are really created in the image of God, then we must be sacred beings, and the idea of artificially creating a superior being becomes dubious-sounding. If such a superior being were possible, then wouldn't God have created us that way to begin with? Unfortunately for this view, science, experimental psychology, and common sense have revealed that humans possess many intellectual shortcomings, and that some people have more of these shortcomings than others. Human intelligence isn't perfect as it is; long-term improvements may become possible with new technologies.

The Microsoft Windows objection often surfaces when the topic of superintelligent artificial intelligence is brought up, and goes something like this: "How can you expect superintelligent robots this century when programmers can't even create a decent operating system?" The simple answer is that too many cooks spoil the broth: operating systems are built by huge numbers of programmers working without any coherent theory they can all agree on. In other fields, such as optics, aerospace, and physics, scientists and engineers cooperate effectively on multi-million-dollar projects because empirically supported theories constrain many of the final product's parameters. Artificial intelligence can reach the human level and beyond once it has such an organizing theory. At present, no such theory exists, though there are pieces that may fit into the puzzle.

Lastly, there are desirability critiques, and I am sympathetic to many of these. If we humans build a more intelligent species, might it replace us? It certainly could, and evolutionary and human history strongly support this possibility. Yet eventually creating superintelligence seems hard to avoid: people want to be smarter, and to have smarter machines that do more of our work for us. Instead of trying to stave off the singularity forever, I think we ought to study it carefully and make purposeful moves in the right direction. If the first superintelligent beings can be constructed so that they retain their empathy for humanity, and wish to preserve that empathy in any future iterations of themselves, we could benefit massively. Poverty, and even disease and aging, could become things of the past. There is no cosmic force that compels more powerful beings to look down upon weaker beings; rather, that attitude comes from being animals built by natural selection. Under natural selection it is often evolutionarily advantageous to selectively oppress weaker beings, though some humans, such as vegans, have demonstrated that genuine altruism and compassion are possible.

In contrast to Darwinian beings, a superintelligence could be engineered for empathy from the ground up. A singularity originating with enhanced human intelligences could select the most compassionate and selfless subjects for radical enhancement first. An advanced artificial intelligence could be built with a deep, stable sense of empathy, and even without an observer-centered goal system. It would have no special desire to discard its empathy, because it would lack the evolutionary programming that causes that desire to surface in the first place. The better you understand evolution and natural selection, the less reason you have to expect Darwinian dynamics to apply to engineered superintelligence.

We should certainly hope that benevolent or human-friendly superintelligence is possible, or human extinction could be the result. Just look at what we're already doing to the animal kingdom. Yet, by thinking about the issues in advance, we may figure out how to tip the odds in our favor. Human-posthuman synergy and cooperation could become possible.

Michael Anissimov is a futurist and evangelist for friendly artificial intelligence. He writes a Technorati Top 100 Science blog, Accelerating Future. Michael currently serves as Media Director for the Singularity Institute for Artificial Intelligence (SIAI) and is a co-organizer of the annual Singularity Summit.

Comments (8) Trackbacks (0)
  1. Speaking of Good, I just flipped open my newly received issue today and was delighted to see a page by Roko and Michael.

    “If such a superior being could be possible, then wouldn’t God have created us that way to begin with?”

    Most Christian literalists would say no because of this, and we’re not supposed to question it because we can’t comprehend it. I wonder why they don’t feel ripped off.

  2. My main criticism of “the singularity” is that I think its name is stupid.

  3. Be careful assigning human attitudes toward weaker beings to natural selection. Contrary to evolutionary psychology, much research points against the modularity of higher-order cognitive processes. While we don’t have conclusive evidence yet, flexibility of thought strikes me as consistent with observed behavior and possibly even advantageous for reproduction. What humans do varies greatly depending on history and circumstances. Oppression comes out of specific cultural and social contexts.

    I also question using vegans as proof of genuine altruism and compassion. There are definite benefits and few downsides to eschewing animal products among certain social circles. Like any cultural phenomenon, a whole host of interacting factors motivate veganism.

  4. Accepted theories are initial conjectures that have been validated – or rather not been falsified (yet) – in a fairly well defined scope and context. Validation has a number of prerequisites. There must be a shared ontology, i.e. a clear definition and logical structure of what the stuff is about. It also requires that there be agreed-upon metrics so that predictions can be assessed independently and eventually lead to empirical confidence. Note, however, that even in “hard” sciences and engineering such as “optics, aerospace, and physics”, where the best theoretical foundation is indeed available, the likelihood of success rapidly decreases when the context is broadened beyond the original assumptions. Just as an example, the Concorde was a superb technical achievement as a “closed” system but failed rather miserably when it was brought into the reality of the “open” world, with its multitude of other conflicting facets such as the environmental, energy, and social aspects of travel, all elements that are highly anthropomorphic. This is where complicatedness morphs into complexity, and where reliable theories lose their grip.

    Centuries of deep introspection by great minds have not yet produced anything that resembles a theoretical framework for intelligence, with agreed-upon ontologies and metrics. Certainly its remarkable adaptive and learning capabilities indicate that it is not a closed construct. The difficulty in achieving it as an artificial form has little to do with “too many cooks ruining a dish”. Rather, it is an epistemological quagmire as to what dish is intended and as to how to ascertain its quality.

    A very interesting read in any case!

