Another Nick Bostrom Quote

“One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.”

– Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence”

Comments

  1. One criterion of super-intelligence might be a super-ability to predict. But whereas a high-IQ Asperger may predict the behaviour of mindless systems extremely well, he will often be baffled by the actions of mediocre-IQ “mind-readers”. This cognitive failure highlights how our existing mind-blind IQ tests lack ecological validity. Any naturalistic conception of intelligence will need to include measures of the cognitive capacity that drove the evolution of distinctively human intelligence and makes us the most successful species on the planet, i.e. our unique “mind-reading” capacity.

    Rather unfairly, James Gardner once described autism as “a flaky barrier between consciousness and subconsciousness” and claimed that victims of autistic spectrum disorder have “a whole lot of extra RAM and crappy ROM”. Now I disagree, not least because I predict that it will be “autistic”, systematizing intelligence that ultimately abolishes suffering and systematically delivers a high quality of life for all sentient beings – not empathizing with orphaned kittens. However, if one conceives of intelligence as a simple, unidimensional problem-solving capacity measured by IQ tests (hence “g”), then yes, a recursively self-improving singleton AI with an IQ of 300, 400, 500… might take over the world in a hard Singularity later this century: a Solipsistic SuperIntelligence – or at least a Super-Asperger. By contrast, full-spectrum SuperIntelligence will, I reckon, be inescapably social – as Nick Bostrom’s original definition of “superintelligence” makes clear. Full-spectrum superintelligence entails an unimaginably richer capacity for perspective-taking and empathetic understanding of other intentional systems.

    If one is a cognitivist about ethics, then we may expect ethical superintelligence too. Moral nihilists, naturally, will disagree.

  2. Dave

    Can something understand joy or pain if it has never experienced them itself?

  3. No, IMO. Perhaps see
    http://plato.stanford.edu/entries/qualia-knowledge/

    One sometimes meets the argument that sooner or later digital computers must become conscious, since they will cross an (unspecified) threshold of complexity and go on to achieve human-level cognitive competence (and more) in different domains. But consciousness – as distinct from reflective self-consciousness – seems to have very little to do with intelligence. Thus the most primitive and phylogenetically ancient types of qualia – e.g. raw agony or terror – are typically also the most intense, whereas the most “intelligent” forms of cognitive phenomenology – e.g. those involved in language production or solving equations – are so subtle as to be barely accessible to introspection.

  4. Dave

    Extremely interesting. Thanks very much for the insight!

  5. John Hunt

    Bostrom is correct, but there are two cautions to consider.

    First, it may well be that Friendly AI (FAI) work will advance AI generally, including Unfriendly AI (UAI). If FAI is much more difficult than UAI, then the sooner-or-later argument might be irrelevant, since whether sooner or later, UAI will always be achieved before FAI.

    Second, the belief that FAI is the solution may well be causing AI leaders to neglect building a movement to impede AI research applicable to UAI (so as to buy time for the more difficult development of FAI).

  6. MC

    If one believes that uploading humans and improving them is less risky than de novo AI, then developing nanotech first would be better.

    One could have this belief because the prospects of developing FAI look dim.

  7. @ David Pearce: It will be the systematizers who come up with the technology that eliminates suffering, but it will be the empathizers who create the demand and set the goal for said technology.
