Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

27 Jun 2007

Enlightenment is Complex, Boring, and Expensive

To gain the best possible perspective on any given situation or problem, one must be familiar with as many different views on it as are available, along with the background statistics and related past cases. E. T. Jaynes writes in Probability Theory: The Logic of Science:

You and I form simultaneous judgments not only as to whether it is plausible, but also whether it is desirable, whether it is important, whether it is useful, whether it is interesting, whether it is amusing, whether it is morally right, etc. If we assume that each of these judgments might be represented by a number, then a fully adequate description of a human state of mind would be represented by a vector in a space of a rather large number of dimensions.

Not all propositions require this. For example, the proposition, "the refractive index of water is less than 1.3" generates no emotions; consequently the state of mind which it produces has very few coordinates. On the other hand, the proposition, "Your mother-in-law just wrecked your new car" generates a state of mind with many coordinates. A moment's introspection will show that, quite generally, the situations of everyday life are those involving many coordinates. It is just for this reason, we suggest, that the most familiar examples of mental activity are often the most difficult to reproduce by a model.

We might speculate further. Perhaps we have here the reason why science and mathematics are the most successful of human activities; they deal with propositions which produce the simplest of all mental states. Such states would be the ones least perturbed by a given amount of imperfection in the human mind.
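Jaynes' vector picture is easy to make concrete. Below is a minimal Python sketch, with the coordinate axes and all numbers invented purely for illustration, showing how the dry proposition engages almost no coordinates while the everyday one engages many:

    # Toy rendering of Jaynes' "state of mind as a vector" idea.
    # Axes and values are illustrative assumptions, not from Jaynes' text.
    JUDGMENT_AXES = ["plausible", "desirable", "important", "useful",
                     "interesting", "amusing", "morally_right"]

    def mental_state(**judgments):
        """Return a judgment vector, with unengaged axes defaulting to 0.0."""
        return {axis: judgments.get(axis, 0.0) for axis in JUDGMENT_AXES}

    # A dry technical proposition engages almost no coordinates...
    refractive_index = mental_state(plausible=0.9)

    # ...while an everyday proposition engages many at once.
    wrecked_car = mental_state(plausible=0.3, desirable=-0.9, important=0.8,
                               interesting=0.6, amusing=-0.4, morally_right=-0.7)

    def engaged(state):
        return sum(1 for v in state.values() if v != 0.0)

    print(engaged(refractive_index), engaged(wrecked_car))  # 1 vs. 6 active coordinates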

One might argue that the simplest of all mental states also tend to be the most boring. I know that not everyone agrees on this, and the human brain provides a baseline dopamine dividend for any problem-solving activity, no matter how unidimensional. But almost everyone would agree that the best expositions of science or math are peppered with intriguing analyses and comments, à la Feynman.

Even if you don't believe that science and mathematics are boring, reading the literature necessary to thoroughly understand a situation or problem can be very time-consuming and challenging if one has not had prior exposure to the field. It is expensive because of the opportunity cost in economically productive work, and complex because of the innate complexity of math and science.

For more on the idea that science is hard, see The Onion.

Filed under: philosophy 4 Comments
22 Jun 2007

Aubrey de Grey on the Singularity

Filed under: singularity 14 Comments
22 Jun 2007

The Simple-as-Possible Universe Hypothesis

It seems that math may be "unreasonably effective" for understanding the universe, as Wigner put it. Complex phenomena, simple rules.

The universe may be simpler than it looks; it may in fact contain almost no information. Tegmark and other physicists argue that the universe is isomorphic to a mathematical structure, and that we are incrementally uncovering its information content. In this view, our mathematics is a mathematical structure approximating another mathematical structure, rather than a mathematical structure approximating a physical structure.

So the universe could be a simple mathematical structure with self-similarity on all scales, like a fractal. In the abstract to an aforelinked paper, Tegmark writes, "In this paper, it is suggested that most of this information is merely apparent, as seen from our subjective viewpoints, and that the algorithmic information content of the universe as a whole is close to zero." So the universe's mathematical simplicity can be reconciled with its apparent complexity from our point of view.
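As a toy illustration of how a tiny rule can generate apparent complexity (my analogy, not Tegmark's model), consider a one-dimensional cellular automaton in Python: the whole rule fits in a handful of bits, yet the output looks statistically complex.

    # Rule 30, a cellular automaton specified by a handful of bits, whose
    # output nevertheless looks complex: low algorithmic information content.
    def rule30_step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
                for i in range(n)]

    cells = [0] * 31 + [1] + [0] * 31   # start from a single "on" cell
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)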

Many physicists believe all possible universes exist. According to the teleological-sounding but theoretically elegant anthropic principle, only those universes that permit conscious observers to exist are observable. If our universe is indeed quite simple, it surely cannot be too simple, otherwise it would lack conscious observers to experience it. It would make much more sense if it were as simple as possible while still being complex enough to harbor consciousness.

I reached this idea on my own some time ago, and it seems that a few others have also discovered it independently. A search for "simplest possible universe" brings up a mailing list post by Fred Chen, a page on anthropics without an author indicated, and a book, Theory of Nothing, by Russell K. Standish, an associate professor with the math department at the University of New South Wales. German AI researcher Jürgen Schmidhuber also addresses the issue here.

This idea seems to raise two further questions. First, there must exist some absolute criteria for the development of self-aware consciousness, and these criteria have evidently been satisfied in this universe - but what are they? With a sample size of one, it's hard to tell. Second, is there an underlying mechanism, with its own internal complexity, that generates universes? And if all types of universe are realized an infinite number of times, why is it any more likely for a given sentient being to be born into a simple universe?

Filed under: anthropics 47 Comments
22 Jun 2007

How Can I Contribute to the Transhumanist Movement?

Many people may wonder how they can contribute to the loose coalition of people and organizations that is the transhumanist movement. Let me make a few suggestions.

1. Order and read transhumanist books, like Engines of Creation and The Singularity is Near. If you try to "get by" in transhumanist discussions having read nothing but magazine articles and news items, it will eventually become evident that your knowledge is relatively shallow and you aren't contributing as much as you could be. The more everyone is familiar with the standard literature, the sooner enclaves of people can move on to discussing more advanced topics.

2. Join transhumanist organizations. The World Transhumanist Association, Immortality Institute, and Lifeboat Foundation all offer basic membership for yearly fees of $50 to $100. Organizations with more members have more leverage. If enough people chip in, even a regular staff becomes affordable, lending the group a greater edge. If you've ever considered joining any of these organizations, ask yourself: why not join right now?

3. Network with other transhumanists: join Transhumanists.org. On average, transhumanists tend to be intelligent, well-educated, friendly people, and our per capita intellectual output is much higher than that of most modern movements. Poke around the community and you'll find that each individual brings a unique perspective: there are transhumanist writers, economists, programmers, physicists, artists, musicians, biologists, and many more. Reach out to them and it'll be worth your while.

4. On the same note, attend transhumanist conferences! Transvision 2007 is happening next month in Chicago, for example, so I'll see you there if you're going. Attendees will include William Shatner, Arianna Huffington, Peter Diamandis, Aubrey de Grey, Ray Kurzweil, and others. Alcor also puts on transhumanist-oriented conferences, usually in Scottsdale, AZ, that I hear are good.

5. Start a transhumanist or futurist blog. One of the reasons Web 2.0 business blogs have such high Google and Technorati ratings is that so many of them exist and mutually link to one another. Communities of similar size but with somewhat less inclination to blog - say, environmentalists in general - are comparatively missing out on the Internet's massive traffic. Why let that happen to us? Barry Mahfood, Tom McCabe, and the Singularity Institute have all recently started transhumanist-oriented blogs; make the next one yours!

Filed under: transhumanism 33 Comments
21 Jun 2007

Examining the Feasibility of Molecular Machine Systems

(Image by John Burch of Lizard Fire Studios.)

Some nanotechnologists, such as Eric Drexler, believe molecular nanotechnology (MNT) and nanofactories are physically feasible. Others, such as George Whitesides, are skeptical. The UK's "nano champion", Richard Jones, lists six challenges for molecular nanotechnology in a blog post from two years ago:

1. Stability of nanoclusters and surface reconstruction. Surfaces have a tendency to "reconstruct" - seek out stable equilibria in ways not necessarily predicted by molecular dynamics simulations.

2. Thermal noise, Brownian motion and tolerance. Atoms on the nanoscale may be too wobbly to build complex machines out of. Drexler addressed this, but not in thorough detail.

3. Friction and energy dissipation. Surface area becomes much larger as machinery scales down, and high functional densities will give rise to high power densities in molecular machine systems. The friction and heat may be so intense that molecular machine systems cannot be reliably constructed.

4. Design for a motor. Richard is skeptical that the electrostatic motor described in Drexler's Nanosystems would actually work. The design needs to be fleshed out in more detail and supported by experimental testing.

5. The eutactic environment and the feed-through problem. For MNT systems to work, they would need to operate in an ultra-high vacuum. But, in interacting with the outside world, they'd be exposed to a very atomically messy environment. Valves and pumps would need to approach 100% efficiency to exclude foreign molecules.

6. Implementation path. How do we get there from here? If "soft" nanotechnology is all that works, how do we transition from there to hard?

These are all valid arguments, but some are more interesting than others. Ranked roughly in order of declining importance, in my own opinion: 3, 1, 2, 4, 5, and 6.

For 3 I definitely recommend reading the full text as written by Dr. Jones. He anticipates that a major issue for MNT machines will be energy leakage from the driving modes of a smaller machine into the larger vibrational modes of the structure it is embedded within. Jones writes, "MNT systems will have very large internal areas, and as they are envisaged as operating at very high power densities; thus even rather low values of friction may in practise compromise the operations of the devices by generating high levels of local heating which in turn will make any chemical stability issues (see challenge 1) much more serious." To address this, power densities can simply be kept lower than the theoretical maximum - scaling laws would still allow the construction of MNT systems with much higher throughput and product customization performance than conventional factories.
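To put rough numbers on that derating argument, here is a back-of-envelope Python sketch; the sizes and the derating factor are my own illustrative assumptions, not figures from Jones or Drexler:

    # Back-of-envelope scaling sketch. Sizes and derating are assumed.
    macro_size = 1.0     # m, characteristic scale of a conventional machine
    nano_size = 1e-7     # m, assumed characteristic scale of an MNT component

    # At constant operating speed, a device's cycle frequency scales as 1/L,
    # so mass-specific throughput scales as 1/L as well.
    naive_gain = macro_size / nano_size          # ~1e7x per unit mass

    # Friction and heating also worsen as machinery shrinks, so suppose we
    # derate operating speed by a factor of 1000 to keep power density sane.
    derating = 1e3
    print(f"naive mass-specific throughput gain: {naive_gain:.0e}")
    print(f"after 1000x derating:                {naive_gain / derating:.0e}")

Even after a thousandfold slowdown, the sketch leaves a roughly ten-thousandfold throughput advantage per unit mass, which is the intuition behind keeping power densities below the theoretical maximum.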

1 has to do with the surface stability of nanostructures. Part of the argument is that more careful quantum chemistry techniques should be used where mere molecular dynamics simulations are being used today. I don't know much about the details of this issue, so I won't comment. More research is definitely needed.

2 is the thermal noise, Brownian motion, etc. As Jones mentions in his blog post, Drexler laid out a framework in Nanosystems for calculating the impact of thermal noise, which was used to estimate positional uncertainty at the tip of a molecular positioner. The uncertainty was found to be less than an atomic diameter, which is promising, but Jones would like to see simulations of more complex structures in which both the positioners and their foundations are subjected to thermal noise and Brownian motion. Making serious progress here will likely require hundreds or thousands of molecular-engineering hobbyists using programs like Nanorex's NanoEngineer-1 to try out a wide range of possible designs and see what works. For a look at the work of someone already playing with a beta release of NanoEngineer, see the Machine Phase blog. The projects there give a visual sense of the challenges in designing molecular machine components.
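For intuition about the kind of estimate involved, here is a back-of-envelope equipartition calculation in the spirit of the Nanosystems analysis; the stiffness figure is an assumed, illustrative value:

    # Equipartition estimate of thermal positional uncertainty: modeling the
    # positioner tip as a spring of stiffness k_s gives an RMS displacement
    # of sqrt(kB*T / k_s). The stiffness here is an assumed value.
    import math

    kB = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0           # room temperature, K
    k_s = 10.0          # assumed positioner stiffness, N/m

    sigma = math.sqrt(kB * T / k_s)   # RMS tip displacement, m
    atomic_diameter = 2e-10           # ~0.2 nm, a typical atomic diameter

    print(f"RMS thermal displacement: {sigma * 1e9:.3f} nm")                 # ~0.020 nm
    print(f"fraction of an atomic diameter: {sigma / atomic_diameter:.2f}")  # ~0.10

Under these assumptions the thermal wobble is about a tenth of an atomic diameter, consistent with the "less than an atomic diameter" result quoted above.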

4 is the design for a motor. Drexler's electrostatic motor needs to be thoroughly simulated with quantum chemistry techniques and, eventually, experimentally tested. However, even if it doesn't work out, some other power source will surely be devised. Even MNT motors using ATP as an energy currency would probably achieve power densities far superior to today's best manufacturing machinery, so I don't think the availability of a nanoscale electrostatic motor is a showstopping issue.

5, the concern with maintaining the molecular integrity (ultra-high vacuum) of the MNT workspace, seems like one of the weakest challenges to me. If filters only work at 99% efficiency, say, then they can simply be daisy-chained until the desired purity is achieved. Because these are nanoscale filters, they'd take up little space in comparison to the functional machinery. Also, there is no need for MNT machinery to interact directly with the chaotic external environment. Nanobots would be pretty poor at locomotion anyway; what we want is molecular manufacturing systems that build microbots stable in a variety of external environments. The most commonly discussed MNT applications - nanoscale implants, utility fog, diamondoid products, etc. - do not depend on autonomous nanoscale assemblers operating in messy surroundings. The delicate molecular machinery can be kept safe in an ultra-high vacuum, shielded by multiple layers of containment and filtering systems, and airlock-type systems can be used to extrude the product without letting dust inside the factory.
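The daisy-chaining argument is simple arithmetic: if each stage independently passes 1% of contaminants, leakage falls off geometrically with the number of stages.

    # Daisy-chained filters: each stage passes an assumed 1% of contaminants,
    # so leakage after n independent stages is 0.01 ** n.
    stage_efficiency = 0.99
    for stages in range(1, 6):
        leakage = (1 - stage_efficiency) ** stages
        print(f"{stages} stage(s): leakage fraction = {leakage:.0e}")
    # Five stages already give a leakage fraction of 1e-10.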

6 is the least bothersome of all. If the potentially huge payoff of developing molecular machine systems becomes obvious to more people, then we will be able to afford to try a very large number of different implementation paths. Technologies for viewing and manipulating the nanoscale are growing ever more accurate and inexpensive, so the right tools will be there; we just have to embed ourselves in the engineering challenges and see what works. Granted, a major holdup on one implementation route could delay progress for a decade or two, but much longer than that seems implausible - the number of possible routes is so large that one of them will very likely work.

A more recent list of challenges can be found on the Nanofactory Collaboration site.

Filed under: nanotechnology 8 Comments
20 Jun 2007

Brown’s Human Universals

Anthropologist Donald E. Brown's landmark book Human Universals catalogs over 200 behavioral and cognitive features suspected to be common to all human beings. The list is very instructive for thinking about this species we happen to have been born into, and about how it might differ from future species we engineer or otherwise create. Here are a few of the more interesting ones:

  • tabooed foods
  • childhood fear of loud noises
  • husband older than wife on average
  • anthropomorphization
  • reciprocal exchanges (of labor, goods, or services)
  • dreams, interpretation of
  • statuses on other than sex, age, or kinship bases
  • onomatopoeia
  • magic to win love
  • language, prestige from proficient use of

See the full list here. Human cognitive biases may also be universal. Also related is the search for a list of inductive biases.

Filed under: intelligence 28 Comments
18 Jun 2007

The Longest Word in the English Language

The following, the name of the coat protein of a certain strain of Tobacco Mosaic Virus, is the longest word used in the English language in a serious context, i.e., published for reasons other than the length of the word itself:

    acetylseryltyrosylserylisoleucylthreonylserylprolylserylglutaminyl-
    phenylalanylvalylphenylalanylleucylserylserylvalyltryptophylalanyl-
    aspartylprolylisoleucylglutamylleucylleucylasparaginylvalylcysteinyl-
    threonylserylserylleucylglycylasparaginylglutaminylphenylalanyl-
    glutaminylthreonylglutaminylglutaminylalanylarginylthreonylthreonyl-
    glutaminylvalylglutaminylglutaminylphenylalanylserylglutaminylvalyl-
    tryptophyllysylprolylphenylalanylprolylglutaminylserylthreonylvalyl-
    arginylphenylalanylprolylglycylaspartylvalyltyrosyllysylvalyltyrosyl-
    arginyltyrosylasparaginylalanylvalylleucylaspartylprolylleucylisoleucyl-
    threonylalanylleucylleucylglycylthreonylphenylalanylaspartylthreonyl-
    arginylasparaginylarginylisoleucylisoleucylglutamylvalylglutamyl-
    asparaginylglutaminylglutaminylserylprolylthreonylthreonylalanylglutamyl-
    threonylleucylaspartylalanylthreonylarginylarginylvalylaspartylaspartyl-
    alanylthreonylvalylalanylisoleucylarginylserylalanylasparaginylisoleucyl-
    asparaginylleucylvalylasparaginylglutamylleucylvalylarginylglycyl-
    threonylglycylleucyltyrosylasparaginylglutaminylasparaginylthreonyl-
    phenylalanylglutamylserylmethionylserylglycylleucylvalyltryptophyl-
    threonylserylalanylprolylalanylserine

The Wikipedia entry is here. The word contains 1185 letters. A much longer word is the full chemical name for titin, the longest known protein, weighing in at 189,819 letters. Thanks to our wonderful computer technology, this word could probably be stored on a hard drive the size of a microbe.
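As a sanity check on that claim, here is a quick calculation with an assumed areal density (exact figures vary by drive generation):

    # Sanity check with assumed figures; areal density is illustrative.
    import math

    letters = 189_819              # letters in titin's full chemical name
    bits = letters * 8             # stored as plain ASCII: ~1.5 Mbit

    areal_density = 100e9          # assumed bits per square inch of platter
    area_in2 = bits / areal_density
    area_um2 = area_in2 * 6.4516e8 # 1 in^2 = 6.4516e8 square microns

    print(f"platter area needed: ~{area_um2:.0f} um^2 "
          f"(~{math.sqrt(area_um2):.0f} um on a side)")
    # ~100 um on a side: larger than a bacterium, but comparable to a big
    # protozoan, so "hard drive the size of a microbe" is roughly right.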

I look forward to a day when superintelligent agents will toss words like these back and forth in microseconds, comprehending their full significance and cross-referencing them effortlessly. I'm excited about this not merely for the sake of grandiosity or hubris, but in anticipation of the new ideas that would become accessible through engaging in discourse on the superhuman level.

It's interesting that humans usually find long words humorous. We like to laugh off things we don't understand very well.

Filed under: intelligence 53 Comments
12 Jun 2007

James Miller’s Cryonics Agreement

James D. Miller, an Accelerating Future reader and associate professor of economics at Smith College, just came up with a really interesting hypothetical economic agreement about cryonics, reproduced here for your convenience:

"Some people are planning to have their head frozen just after they die. These believers in cryonics think that freezing the head preserves brain patterns. They also believe that there is a reasonably high chance that someday humanity will have the technology to restore life to those who have undergone cryonic head freezing.

If the price of cryonics becomes low enough then a cryonics believer and unbeliever should try to come to the following three part agreement:

(1) The believer will immediately pay the unbeliever some amount of money.

(2) The believer will pay for the unbeliever to undergo cryonic freezing shortly after death.

(3) If the unbeliever is ever brought back to life he will owe a huge debt to the believer. It is hard to know what will be valued in the far future. But if brought back to life the unbeliever promises to try his best to spend at least 50% of his time and resources improving the life of the believer.

This agreement will always make the unbeliever better off, and given his beliefs it may well improve the expected future welfare of the cryonics believer."
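To see why, here is a quick expected-value sketch of the agreement; all dollar figures and probabilities are invented for illustration:

    # Expected-value sketch of Miller's agreement. Numbers are assumptions.
    upfront_payment = 10_000    # paid by the believer to the unbeliever now
    freezing_cost = 30_000      # believer also pays for the suspension
    revival_value = 10_000_000  # assumed value to the believer of the debt

    def believer_ev(p_revival):
        return p_revival * revival_value - upfront_payment - freezing_cost

    for p in (0.0, 0.001, 0.01, 0.1):
        print(f"p = {p}: believer EV = ${believer_ev(p):,.0f}")
    # The unbeliever pockets the payment and, by his own lights, will never
    # owe the debt, so he is better off for any probability he deems ~0.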

In an unrelated item, I've joined the SIAI blog team and made my first post here.

Filed under: transhumanism 35 Comments
10 Jun 2007

AI-Related Poll at IEET

There is an AI-related poll at the site for the Institute for Ethics and Emerging Technologies, the transhumanist think tank manned by our friends George Dvorsky, Anne Corwin, James Hughes et al. The question is:

Is building a "friendly" super-AI a way to protect against a hostile super-AI?

The choices:

1. Yes, nothing else could stop it
2. Maybe, but super-AIs are unlikely
3. Guaranteed friendly AI is impossible
4. Let's prevent super-AIs of any type

Go vote yourself.

Update: Here are the final poll results.

It looks like we broke it. After a link to this poll was posted here, the number of votes for #1 skyrocketed from about 15 to 95, and #3 grew from about 10 to 44. Since nothing in reality is guaranteed, #3 is trivially true... and since a permanent nano-dictatorship enforcing technological stasis could also stop unFriendly AI, #1 is not quite true... but I admit I voted #1 anyway. The linkage prompted a special message from IEET Executive Director James Hughes:

"After IEET contributor Michael Anissimov put a shout-out to the Singularitarians to answer our poll, we got a vigorous response, most of whom endorsed the SIAI idea that only a super-powerful AI programmed with core friendliness towards humanity can keep us safe from hostile and indifferent super-powerful AIs, which might decide that we were a nuisance, or not even recognize that we exist."

Filed under: AI, risks 53 Comments
8 Jun 2007

Intelligence Augmentation vs. Artificial Intelligence

To some, it seems "obvious" that significant human intelligence augmentation (IA) will come before human-level AI. To others, it's the reverse that's obvious. I don't think either is obvious, but I believe there is a strong likelihood that AI will come first.

In the IA camp, one of the arguments goes: [Brain + Computer] will always be more intelligent than [Computer] alone. But this is untrue, because the I/O channels between brain and computer make all the difference, and with today's technology these channels are quite limited. Even if we had million-electrode brain-computer interfaces, it would still be a cybernetics problem to decide which outputs to plug into which inputs, and what changes might need to be made to the central executive to handle the new cognitive architecture without information overload or psychosis. Reprogramming the executive center of the human brain would require advanced neurosurgery and extensive knowledge of the brain - knowledge that could take decades of research and advanced experimental techniques to uncover.

Other cons for IA, in my view:

  • Experimentation on the human brain is likely to be made illegal globally
  • The design-and-test cycle is on the order of weeks or months
  • Lack of human volunteers willing to die for the cause of IA research
  • Someone left out the line notes for the brain's code
  • Experimenting on the deep brain is difficult because neocortex is in the way
  • All that medical hardware is really expensive
  • The human brain was not designed to be upgraded
  • Gene therapies not likely to give enough improvement for takeoff speed

A remark on that last one, the issue of takeoff speed. It's not enough to create an Einstein with IA. You have to create an Einstein who can go immediately to work on new intelligence augmentation techniques, and actually come up with something useful in a reasonable amount of time, before AI is developed. It seems more likely to me that an intelligence-enhanced human would just go into the business of creating AI. Smarter-than-human intelligence cannot just be a really smart human being - it has to be something qualitatively off the scale. Manipulating the genes associated with genius, as James Miller suggested, would likely produce "only" human geniuses at first. You'd need another level of theory and genetic engineering to get something genuinely smarter-than-human in a human-like package. I'm not saying it couldn't be done, but the whole process could drag on for a number of years.

Benefits of IA:

  • Evolution has already done a lot of work for us
  • Some might think a human seed is more predictable
  • Sparks human-centric patriotism in ways AI doesn't

On to the cons of AI:

  • Present-day computers might not be fast enough to implement AI
  • You have to build everything from scratch yourself
  • Everyone is working on narrow AI, but AGI is unpopular
  • Requires strong theory of general intelligence, difficulty unknown
  • Stigma of excessive past claims

And the benefits of AI:

  • Design-and-test cycle can be very rapid
  • All aspects of the AI are read/write friendly
  • Line notes are included with the code
  • Cognitive features can be optimized for self-improvement
  • Computational power can be expanded as funds allow
  • Virtual worlds are available as a flexible training zone
  • Hardware can be used to "overclock" beneficial functions
  • Probabilistically realistic, flexible learning can be implemented
  • Nascent AIs can share information with each other rapidly
  • Much larger regions of the mind configuration space can be tested
  • AIs can be copied indefinitely, allowing for commercial spin-offs
  • Substantial advances in AI, but not IA, have already been achieved
  • The hardware itself is inherently cheaper
  • Little to no legal concerns

Comment away. Whether IA or AI reaches smarter-than-human intelligence first matters a great deal, as the step into this new domain could spark a runaway self-improvement process - what I. J. Good called an "intelligence explosion". This is normally what we think of when we hear the word superintelligence.

Filed under: AI, intelligence 107 Comments
6 Jun 2007

“The Rapids of Progress”, by Mitchell Howe

From our earliest days as an intelligent species, it has always been more difficult to create than to destroy. From fire to fission, forces of great constructive potential have invariably been used as weapons against innocent people with tragic results. The cumulative losses to individuals, nations - and indeed, the whole human family - can never be fully understood.

Despite a pervasive - and in many ways false - sense of security that came with the end of the Cold War, we are far from being past the threat of technologically facilitated global ruin. The rise of trans-national terrorism may not on the surface seem nearly as dangerous as a full-scale atomic conflict. But the bold acts of hatred performed by those who place no value on their own lives remind us daily of the fact that, among billions, there will always be a few who would destroy civilization itself if they had the capacity to do so.

The day is approaching when this awful power will be all too abundant. Technological progress, that relentless engine that has refined our tools of creation and destruction, is not slowing down. It is accelerating. Technologies that seemed like fantasy a few years ago are now discussed as old news and common knowledge. Scientists recently built a lethal polio virus from scratch by assembling a custom strand of DNA. Nanotechnology - the engineering of materials and machines at the molecular level - is already churning out fibers and coatings incorporated into commercial products, and concept components for devices smaller than human cells are created daily.

Times of tremendous potential are upon us. Where we had only decades ago acquired the ability to observe the most fundamental processes of nature, we are now becoming masters of them. The most intractable diseases and disabilities cannot long stand against the perfect scrutiny and manipulation of genetic engineers. The endless drought of economic scarcity that lingers in so much of the world has no chance of resisting the impending flood of material prosperity unleashed by self-reproducing nanofactories that produce goods of unprecedented quality at negligible cost.

But this flood of prosperity cannot help but flow with a dangerous swiftness equal to the technological progress which propels it. Even ignoring the usual sources of murderous discontent, this radical shift in the quantity and quality of life will probably be sufficient to cause dangerous political upheaval. And, as has always happened with knowledge, the arts of genetic engineering and nanotechnology will inevitably see perversions into killing applications. But this time the danger will be far greater than the threat of nuclear catastrophe - an event entirely survivable by many who might nevertheless wish they hadn't. A custom-designed plague might be virulent enough to kill everyone, and a swarm of self-replicating nanomachines could swallow the biosphere whole.

Calls to relinquish technologies that could lead to such ends are unrealistic, as these are inextricably linked to positive applications - which greatly outnumber the negative ones. And any attempt to suppress technological progress through means of legislation and enforcement will only mean that when these technologies do inevitably mature, they will be in the hands of those who operate outside the law. Government bodies and committees certainly deserve respect for their ability to mediate disputes and create safety guidelines, but these have never proven capable of ensuring that a given technology is never once used for destructive purposes. And with advanced genetic engineering and nanotechnology, one single misuse may be all it takes to write the epitaph for the human race. We simply cannot rely on traditional organizations and regulations to guide us safely through these turbulent rapids of progress. The current is too swift and the hazards too numerous. And we know from history that somewhere, somehow, there is always a mistake. A human mistake.

Human mistakes are inevitable for the obvious reason that we are in possession of mere human intelligence. We also carry in our genes a myriad of irrational tendencies that do not serve us well, having been so far removed from the ancestral environments where they were useful. We often cherish our primitive instincts and delight in our child-like awe at mysteries "beyond human comprehension," but these are fertile ground for the kinds of critical failures that could send civilization crashing into the lodestones of oblivion. Our need, then, is for faculties beyond human reasoning, and for minds free of evolutionary liabilities. Whether collectively or in the minds of a select few, we need greater-than-human intelligence to skillfully shoot the rapids of progress and chart the seas of universal prosperity.

Fortunately, the means to achieve greater-than-human intelligence (a milestone called the Singularity by many futurists) are found within the very currents of technology pushing us to this critical juncture. Genetic engineering is one possible answer, but given the relatively long time it takes for a human baby to mature into an adult, this approach would probably not be timely enough even if there were no ethical questions to consider. Augmenting human intelligence by connecting brains directly to powerful computers is another option, but this may not do anything to reduce the likelihood of rash, biological mistakes being made, and may actually amplify their damage. At present, the only conceivable way to promptly give rise to greater-than-human intelligence free of the most significant human failings is through the creation of Artificial General Intelligence (AGI). (The "General" is sometimes added by researchers to distinguish it from the narrowly specialized programs that are often claimed, for marketing reasons, to possess Artificial Intelligence (AI).) Computer technology has matured to the point where most AI researchers feel that an AGI could exist on today's equipment, given the right design.

But of course the "right" design has not yet been developed, and will not be without determined effort. Any sufficiently intelligent AGI will be able to assist in the design of its own successors, making subsequent leaps in intelligence easier. But the initial design must do more than "merely" think in ways that match or exceed human capability. It must empathize with and care about the problems of its human comrades - a trait called "Friendliness" by some researchers. An AGI lacking this compassion would be as dangerous as any other technological nightmare. And since computer technology is so rapidly increasing in power and decreasing in cost, a time will come when rogue nations or sociopaths could create an unsafe AI - with consequences as potentially catastrophic as the misuse of nanotechnology or genetic engineering. Unless, that is, we already have greater intelligence on our side helping to discover ways to prevent such disasters.

The fate of humanity thus hinges on this question: When will we create greater-than-human intelligence that cares about our problems? There is every reason to act now. Without greater intelligence we are doomed to make human mistakes regarding forces so powerful that there may be no second chances. But, with the assistance of Friendly AI, we will have an extraordinary new capacity to not only safeguard our continued existence, but to meet every other challenge we currently face - or may face.

There is no greater or more responsible use for discretionary resources today than the advancement of this effort. Whether it be a few pennies, a few million dollars, or years of volunteer service, investments in Friendly AI will go further to improve the human condition than donations to any other charity or research project. After all, there are few causes that would not benefit from an infusion of Friendly superintelligence. But, more importantly, if we do not safely navigate the rapids of progress we will not be around to worry about disease, poverty or global warming. This is one swift ride that we are all along for, whether we like it or not, and it is up to each of us to help make sure the human family can survive the journey and come out on top.

(Read the latest short, Singularity-relevant story by Mitch at SIAI's new blog.)

4 Jun 2007

Response to Cory Doctorow on the Singularity

Cory Doctorow is an editor of what was, for a long time, the most popular blog on the Internet, Boing Boing. (It's now #2, after Engadget.) He is also a science fiction author known for copyright activism on behalf of the Electronic Frontier Foundation. In the Spring 2003 issue of Whole Earth Magazine, he published an article, "The Rapture of the Geeks", that ripped into advocates of the Singularity and intelligence enhancement, such as myself. I will respond to the central accusations.

First, a couple definitions. The Singularity is the technological creation of smarter-than-human intelligence. We can further specify good Singularities, where this smarter intelligence is on humanity's side, and bad Singularities, where it isn't. So-called Singularitarians are individuals who advocate intelligence enhancement for global benefit. Rather than tackling the really hard problems - poverty, war, hatred, poor infrastructure, mental and physical illness - at our present level of intelligence, Singularitarians advise pursuing intelligence enhancement and then applying qualitatively smarter intelligence to these age-old problems. We also foresee a recursive self-improvement process resulting from smarter-than-human intelligence, where the first superintelligence is much better than humans at coming up with new intelligence enhancement techniques, and applies them iteratively, further magnifying the initial gains. To a Singularitarian, intelligence enhancement that improves benevolence as well as brainpower is the best possible investment in humanity's future.
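As a purely illustrative toy model of that recursive process (arbitrary constants, not a prediction):

    # Toy model of recursive self-improvement: each generation's gain grows
    # with current intelligence. Constants are arbitrary assumptions.
    intelligence = 1.0   # 1.0 = baseline human-equivalent
    for generation in range(1, 9):
        intelligence *= 1.0 + 0.1 * intelligence  # smarter minds improve faster
        print(f"generation {generation}: {intelligence:.2f}")
    # Gains compound faster than exponentially - the intuition behind
    # I. J. Good's "intelligence explosion".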

Cory Doctorow: "The Vingean Singularity is at the center of a classic mystical belief system: to believe in The Singularity is to believe in the transcendence of human flesh and the ascension to a higher state—a belief that, in turn, depends on several highly dubious articles of faith."

Here, Doctorow associates the Vingean Singularity with transcending the flesh. While it is true that many advocates of the Singularity believe in the possibility of mind uploading, cyborgization, etc., none of these things is necessary to make intelligence enhancement a highly desirable prospect. Even if life and intelligence were somehow permanently affixed to proteinaceous water envelopes forever, it would still be prudent to pursue intelligence enhancement for its own sake.

Humans share 98% of our genetic material with chimps, but the step from chimps to humans produced creatures that could walk on the Moon, exploit the power of the atom, and build skyscrapers. If a similar jump in intelligence could produce similarly discontinuous results, wouldn't it be fascinating to take that step? And if that step is theoretically possible and will happen one day anyway, wouldn't it be responsible of us to help guide it, so that the first superintelligences are at least given human-friendly initial conditions rather than human-unfriendly ones? If the Singularity were sparked by human intelligence enhancement, would you rather the first augmentee be more like Fred Rogers or Ted Bundy?

There are multiple reasons why the Singularity is not a mystical belief system, but the most obvious is that it is experimentally testable. If we cannot build smarter-than-human AIs despite our best efforts and human-equivalent computing capacity, if no brain-computer interface or genetic enhancement project gives rise to improved intelligence, then it will be proven that smarter-than-human intelligence is forbidden by the laws of the universe.

But if a reliable intelligence enhancement procedure is developed and can be applied to anyone for a low cost, would that not be an "ascension to a higher state" of just the type that Doctorow is belittling? It would be an ascension to a "higher state" in the real world, based on making deliberate neural modifications to let people think faster, more creatively, more empathically, with expanded working memory and capacity for complexity. This can be done by taking control of our own brains at the physical level, rather than the more superficial route (but so far the best we've had) of traditional learning.

Doctorow: "First off, Singularians ask you to believe that a model of a brain in a computer, properly executed, will become conscious—will, in fact, have a consciousness continuous with that of the person whose brain was scanned. While it’s true that consciousness depends on the brain— judicious experimentation with a bone saw and scalpel can readily demonstrate this—it’s an enormous leap to conclude that consciousness’s seat is in the brain."

Where else could it be? Doctorow is contradicting decades of work in brain science by suggesting that consciousness may reside somewhere external to the brain. Does consciousness reside in the stomach? The heart perhaps? Or is it floating by us at all times, on the supernatural plane? I'm not sure what he is suggesting, but it sounds pseudoscientific.

Regardless, Singularitarians are not asking anyone to believe that a model of a brain in a computer is continuous with its real-life counterpart, or that uploading is possible. It just so happens that many do believe it, but this is a common accompanying belief rather than the central component of Singularity advocacy, which is intelligence enhancement.

But if the subject is brought up, why not respond: if consciousness disappears when a brain is implemented on a computer, then it should be possible to observe consciousness disappearing in partly-computerized brains. For instance, people with hippocampal implants would be less conscious than ordinary human beings. Somehow I doubt this. Even if computers as we know them turn out not to be able to simulate conscious beings, who says we are limited to traditional computers based on the serial von Neumann architecture implemented in silicon? We could try parallel computers, biological computers, ultrafast neuron-equivalents, carbon computers... whatever works. The point is not to have a philosophical shouting match, but to dismiss carbon chauvinism - the idea that all life or intelligence must depend on traditional biological building blocks. For a humorous angle on this, see the short story "They're Made Out of Meat" by Terry Bisson.

Doctorow: "Then there’s the further presumption that consciousness exists at an atomic or even molecular level: that an atom-by-atom copy of the brain, properly modeled in a Turing Machine, will have all the data necessary to awaken into consciousness. As more and more subatomic particles are catalogued, particles whose properties range from counterintuitive to goddamned spooky, it seems equally probable that nano-disassemblers’ pincers will be far too clumsy to ever extract the important information contained in a brain. It’s cargo-cultism: the airstrips bring the airplanes, so if we lay down airstrips, the planes will come back. Put all a brain’s atoms into a brain-shaped pile, a mind will come back."

How come embryogenesis keeps putting the brain's atoms in a brain-shaped pile a third of a million times every day, and the mind keeps coming back? If it is necessary to duplicate pregnancy rituals in order to create a conscious being in a non-carbon substrate, then it will be an inconvenience we'll have to bear. Again though, the issue of whether or not a human brain can be uploaded is irrelevant to the primary issue of whether or not intelligence enhancement (the Singularity) is worth pursuing.

Doctorow: "The Singularity depends on hypothetical technological events — nanotech, brain scanning, consciousness in the brain, sufficient granularity in the scan—but none is more wishful than the belief that the correct model will be lucked into."

It's interesting that Doctorow set out to write an article on the Singularity and ended up writing an article on mind uploading. This, along with his use of the term "Singularian" when he means "Singularitarian", suggests that he didn't research the article very well, but probably wrote it as a reaction to something he read that conflated the Singularity with mind uploading. A Google search for "Singularian" yields only a few results, all instances where people coined the word erroneously. "Singularian" is sort of like "irregardless": a made-up, etymologically incorrect word that spreads on a limited scale through repetition.

Doctorow: "After The Singularity, we’ll be immortal. All goods will be nonscarce. Entropy will be tamed. We will have complete mastery over our selves and our environment — we’ll be ascended masters. The best part is, we’ll get there using Moore’s Law. Write code, get smart, advance the cause and soon, you, too, will be immortal."

This seems like an attack on Ray Kurzweil's particular views, but it is a straw man because by definition, we cannot know precisely how things will go after the Singularity. The standard view is simple: if we aggressively enhance our own intelligence, the benefits could be large. Because most intelligence enhancement advocates are also transhumanists, the ideas of molecular manufacturing and radical life extension co-occur with discussions of the Singularity, but they are not the same thing. It's hard to tell if Doctorow picked this up while onstage at the Singularity Summit at Stanford, but I sure hope so.

Doctorow: "Your mystical belief: that everything will just transform on its own, for the infinitely better, because, well, because that’d rock."

From the beginning of an organized movement in support of the Singularity (around 2000), there has been an emphasis on personal responsibility and direct activism. So, as far as I know, there are zero thinkers on the Singularity who consider it totally inevitable. Support for routes to smarter intelligence is a proactive thing, one that primarily manifests itself in Artificial Intelligence projects.