Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

27Feb/10

Valid Transhumanist Criticism?

Lately, I've been seeing something interesting -- valid criticism of the transhumanist project. The concern is decently articulated by the people who are being paid to attack me and other transhumanists, over at The New Atlantis Futurisms blog, funded by the Ethics and Public Policy Center, "dedicated to applying the Judeo-Christian moral tradition to critical issues of public policy". To quote Charles T. Rubin's "What is the Good of Transhumanism?":

While some will use enforcement costs and lack of complete success at enforcing restraint as an argument for removing it altogether, that is an argument that can be judged on its particular merits -- even when the risks of enforcement failures are extremely great. The fact that nuclear non-proliferation efforts have not been entirely successful has not yet created a powerful constituency for putting plans for nuclear weapons on the Web, and allowing free sale of the necessary materials. In the event, transhumanists, like "Bioluddites," want to make distinctions between legitimate and illegitimate uses of "applied reason," even if as we will see they want to minimize the number of such distinctions because, as we will note later, they see diversity as a good. Of course, those who want to restrict some technological developments likewise look to some notion of the good. This disagreement about goods is the important one, untouched by "Bioluddite" name-calling. The mom-and-apple-pie defense of reason, science and technology one finds in transhumanism is rhetorically useful, within the framework of modern societies which have already bought into this way of looking at the world, to lend a sense of familiarity and necessity to arguments that are designed eventually to lead in very unfamiliar directions. But it is secondary to ideas of what these enterprises are good for, to which we now turn, and ultimately to questions about the foundation on which transhumanist ideas of the good are built.

Yes, "diversity" can be good. But transhumanists have a problem. Diversity is so darn huge, and contains far, far more of what would broadly be considered "hideous" than anything beautiful.

I approach the idea of "diversity" from an information-theoretic perspective. In such a perspective, "diversity" can be achieved by randomly rearranging molecules to achieve a new, unique, "diverse" state. In this view, if absolute freedom to self-modify became possible in a society with sophisticated molecular nanotechnology, then eventually a very large and exotic collective of wireheaded and partially wireheaded beings could emerge. It could be ugly, not beautiful. For a "real-world" example, look at how everyone had great expectations for Second Life, which then "degenerated" into a haven of porn and nightclubs. While it's debatable whether a world of porn and nightclubs is a bad thing, it's obviously not what many in society would want, and I think that an optimal transhumanist future should be appealing to all, not just a few.
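
To see why information-theoretic "diversity" is mostly worthless, here is a toy calculation (my own illustration, with made-up parameters, not anything from the original post): if a configuration counts as "valued" only when it satisfies even a modest number of independent constraints, the valued states form an exponentially tiny sliver of the total state space.

```python
import random

N_BITS = 64          # bits per toy "configuration" (arbitrary choice)
N_CONSTRAINTS = 12   # independent constraints a "valued" state must satisfy
TRIALS = 1_000_000

def is_valued(state: int) -> bool:
    """Stand-in predicate: the lowest N_CONSTRAINTS bits must all be set."""
    mask = (1 << N_CONSTRAINTS) - 1
    return (state & mask) == mask

hits = sum(is_valued(random.getrandbits(N_BITS)) for _ in range(TRIALS))
# Expected fraction is 2**-12, about 0.00024; every additional independent
# constraint halves it, so valued states vanish exponentially fast.
print(f"valued fraction: {hits / TRIALS:.2e} (theory: {2.0**-N_CONSTRAINTS:.2e})")
```

Almost everything a random rearrangement produces fails at least one constraint, which is the sense in which maximal "diversity" is overwhelmingly hideous rather than beautiful.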

Simplistic libertarian transhumanism simply argues, "anything is possible, and everything should be". Pursued to its logical conclusion, that means I should be allowed to manufacture a trillion cyborg nematodes filled with botulinum toxin and just chill with them. After all, it's my own choice; what right do you have to infringe upon it? The problem is that this cluster of nematodes would become a weapon of mass destruction if launched into stratospheric air currents for worldwide distribution and programmed to fall in clusters on major cities, where they would inject their toxins into targets they navigate to via thermal sensing. My unlimited "freedom" could become your unlimited doom, overnight. The same applies to people in space with the ability to anonymously cloak asteroids and accelerate them toward ground targets. Any substantial magnification of human capability raises the same "civil rights" issues.

Many transhumanist writings advocate simplistic libertarian transhumanism. I won't bother to list any by name, but they're all around.

A regular commenter here, Sulfur, recently articulated his objection to transhumanism, responding to my recent statement "The latter makes sense, the former doesn't," with regards to solving the flaws of the Homo sapiens default chassis:

The fundamental problem with that sentence is that transhumanists see the human body as a problem to solve, and they are quick to judge what is needed and what is not. If it were for them to decide, we would already have made terrible mistakes in augmenting our bodies ("Hell, we don't need so many genes! Let's get rid of them!" hype-like attitude). Transhumanism uses imperfect tools to perfect the human. That can easily lead to disaster. Besides, the most important issue is not whether small changes correcting some flaws are desirable, needed, or wanted, but rather to what extent we can change a human without committing suicide in an ambitious yet funny way, thanks to augmentations that would radically change our minds, creating a new quality.

It's true -- we do see the human body as a problem to solve. After all, the human body can't even withstand 5 psi of overpressure without our eardrums rupturing, or stop rifle bullets without severe tissue damage, which I consider unacceptable. Moving more in a mainstream direction, many transhumanists (a small group of fewer than 5,000 people with mainstream intellectual influence far beyond their numbers) agree that solving aging is a major priority. After all, Darwinian evolution did not have our best interests in mind when it designed us. As far as I am concerned, the question of whether the human body is a problem to be solved has an obvious answer: it is. The question is not whether we need to solve it, but how.

The "how" question is where things can get sticky. Most of human existence is not so crime-free and kosher as life in the United States or Western Europe. Business as usual in many places in the world, including the country of my grandparents, Russia, is deeply defined by organized crime, physical intimidation, and other primate antics. The many wealthy, comfortable transhumanists living in San Francisco, Los Angeles, Austin, Florida, Boston, New York, London, and similar places tend to forget this. The truth is that most of the world is dominated by the radically evil. Increasing our technological capabilities will only magnify that evil many times over.

The answer to this problem lies not in letting every being do whatever they want, which would lead to chaos. There must be regulations and restrictions on enhancement, to coax it along socially beneficial lines. This is not the same as advocating socialist politics in the human world. You can be a radical libertarian when it comes to human societies, but advocate "stringent" top-level regulation for a transhumanist world. The reason is that the space of possibilities opened up by unlimited self-modification of brains and bodies is absolutely huge. Most of these configurations lack value by any possible definition, even definitions adopted specifically as contrarian positions to try to refute my hypothesis. This space is much larger than we can imagine, and larger than many naive transhumanists choose to imagine. This is especially relevant when it comes to matters of mind, not just the body. Evolution crafted our minds over millions of years to be sane. More than 999,999 out of every 1,000,000 possible modifications to the human mind would be more likely to lead to insanity than to improved intelligence or happiness. Transhumanists who don't understand this need to study the human mind and looming technological possibilities more closely. The human mind is precisely configured, the space of choices is not, and ignorant spontaneous choices will lead to insane outcomes.
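
To make the "999,999 out of 1,000,000" intuition concrete, here is a toy simulation (my own construction with arbitrary parameters, not a model from the post): treat the mind as a point near the optimum of a high-dimensional fitness landscape and apply random modifications. In high dimensions, virtually every random step moves away from the optimum.

```python
import math
import random

DIM = 1000      # number of finely tuned "parameters" in the mind (arbitrary)
STEP = 0.1      # size of a random modification (arbitrary)
TRIALS = 10_000

def random_unit_vector(dim):
    """Uniformly random direction in dim-dimensional space."""
    v = [random.gauss(0.0, 1.0) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# The "sane, tuned" configuration sits close to the optimum at the origin.
start = [0.01] * DIM
d0 = math.sqrt(sum(x * x for x in start))

improved = 0
for _ in range(TRIALS):
    direction = random_unit_vector(DIM)
    moved = [s + STEP * u for s, u in zip(start, direction)]
    if math.sqrt(sum(x * x for x in moved)) < d0:
        improved += 1

# For these parameters the true improvement probability is on the order of
# 1e-7, so the printed fraction will almost certainly be 0.0000.
print(f"fraction of random modifications that helped: {improved / TRIALS:.4f}")
```

The qualitative point survives any reasonable parameter choice: the more precisely tuned the system and the larger the modification, the more lopsided the odds against improvement become.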

The problem with transhumanism is that it has become, in some quarters, merely a proxy for the idea of Progress. Progress is all well and good. The problem is that the idea isn't indefinitely extensible. The human world is a small floating platform in a sea of darkness -- a design space that we haven't even begun to understand. In most directions lie Monsters, not happiness. Progress within the human regime is one thing, but the posthuman regime is something else entirely. Imagine having First Contact with a quadrillion different alien species simultaneously. That is what we are looking at, with an uncontrolled hard takeoff Singularity. Just one First Contact would be the most significant event in human history, but transhumanists are talking about that times a billion, or a trillion, all at once.

In the comments, Sulfur referenced the "transhumanist mindset which says that upward change is a dogma". But there is a portion of transhumanists who resist that dogma. Take Nick Bostrom's "The Future of Human Evolution" paper, very popular among SIAI staff. I believe that Bostrom's 2004 publication of this paper was a ground-breaking moment for transhumanism, defining a schism that has been ongoing ever since. The schism is between those who see transhumanism as an unqualified good and those who see humanity's self-enhancement as a challenging project that demands close attention and care. Here's the abstract:

Evolutionary development is sometimes thought of as exhibiting an inexorable trend towards higher, more complex, and normatively worthwhile forms of life. This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about. We then consider how such catastrophic outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated policy to control human evolution by modifying the fitness function of future intelligent life forms.

I am strongly magnetized to the Singularity Institute, Future of Humanity Institute, and Lifeboat Foundation, because I see these three organizations as the cautious side of transhumanism, exemplified by the concerns aired in the above paper. Many other iterations of transhumanism seem to be awkward fusions between SL2 transhumanism and the boilerplate leftist or rightist politics of the Baby Boomer generation. Though even our new President is attempting to engage in post-Boomer politics, the USA Boomer Politics War is so huge that it sucks in practically everything else. It's pathetic when transhumanists aren't intellectually strong enough to transcend it. Really, it is a generational war.

As somewhat of a side note, people misunderstand the SIAI position with respect to this question. SIAI seeks not to impose a superintelligent regime on the world, but rather asks, "given that we believe a hard takeoff is likely, what the heck can we do to preserve Human Value, or structures at least continuous with human value?" The question is not easy, and people often misinterpret the probability assessment of a fast transition as a desire for a fast transition. I would desire nothing more than a slow transition. I just don't think that the transition from Homo sapiens to recursive self-improvement will be very slow. Still, even if it's fast, value can probably be retained, if we allocate significant resources and attention to specifically doing so.

I believe that there can be a self-enhancement path that everyone can agree on as beneficial. I think there is enough room in the universe to hold diverse values, but not exponentially diverse in the information-theoretic sense. I doubt that intelligent species throughout the multiverse retain their legacy forms as they spread across the cosmos. Inventing and mastering the technologies of self-modification is not optional for intelligent civilizations -- it's a must. The question is what we use them for, and whether we let society degenerate into a mess of a million shattered fragments in the process.

25Feb/10

Last Chance to Contribute to 2010 Singularity Research Challenge!

Cross-posted from SIAI blog:

Thanks to generous contributions by our donors, we are only $11,840 away from fulfilling our $100,000 goal for the 2010 Singularity Research Challenge. For every dollar you contribute to SIAI, another dollar is contributed by our matching donors, who have pledged to match all contributions made before February 28th up to $100,000. That means that this Sunday is your final chance to donate for maximum impact.

Funds from the challenge campaign will be used to support all SIAI activities: our core staff, the Singularity Summit, the Visiting Fellows program, and more. Donors can earmark their funds for specific grant proposals, many of which are targeted towards academic paper-writing, or just contribute to our general fund. The grants system makes it easier to bring new researchers into the fold on a part-time basis, widening the pool of thinkers producing quality work on Artificial Intelligence risks and other topics relevant to SIAI's interests. It also provides transparency so our donor community can directly evaluate the impact of their contributions.

Human-level and smarter Artificial Intelligence will likely have huge impacts on humanity, but only a tiny number of researchers are working to understand how to ensure those impacts are good ones. The role of the Singularity Institute is to fill that void, bringing scholarship and science to bear on challenging questions. Instead of just letting the chips fall where they may, help the Singularity Institute increase the probability of a positive Singularity by contributing financially to our research effort. We depend completely on donors like you for all funding.

2010 marks the 10th year since SIAI's founding. With your help, SIAI will still exist in 2015, 2020, 2025... however long it takes to get to a positive Singularity. Thank you for your support!

Filed under: SIAI, singularity
24Feb/10

New Book Examines the Flawed Human Body

From the Genetic Archaeology blog:

Humanity's physical design flaws have long been apparent - we have a blind spot in our vision, for instance, and insufficient room for wisdom teeth - but do the imperfections extend to the genetic level?

In his new book, Inside the Human Genome, John Avise examines why - from the perspectives of biochemistry and molecular genetics - flaws exist in the biological world. He explores the many deficiencies of human DNA while recapping recent findings about the human genome.

Distinguished Professor of ecology & evolutionary biology at UC Irvine, Avise also makes the case that overwhelming scientific evidence of genomic defects provides a compelling counterargument to intelligent design.

Here, Avise discusses human imperfection, the importance of understanding our flaws, and why he believes theologians should embrace evolutionary science.

Our brains and bodies are both full of flaws. According to the pre-transhumanist worldview, the plan is just to sit around for the rest of eternity with these flaws, even as we colonize the Galaxy. According to the transhumanist worldview, the plan is to analyze these flaws, debate whether they are flaws or not, and consider fixing them if it seems practical and desirable. The latter makes sense, the former doesn't.

The New Scientist CultureLab blog has more info on the book.

Filed under: biology, science
22Feb/10

Richard Dawkins on the Singularity

In this video, Dawkins acknowledges the possibilities of hard takeoff and open-ended recursive self-improvement in Artificial Intelligence.

22Feb/10

Diamond Trees (Tropostats): A Molecular Manufacturing Based System for Compositional Atmospheric Homeostasis

Robert Freitas has a new idea for a product that could be built using molecular manufacturing -- diamond trees designed to sequester carbon dioxide. The concept is fleshed out in technical detail in a paper now available at the Institute for Molecular Manufacturing website. Let's bring up that abstract!

The future technology of molecular manufacturing will enable long-term sequestration of atmospheric carbon in solid diamond products, along with sequestration of lesser masses of numerous air pollutants, yielding pristine air worldwide ~30 years after implementation. A global population of 143 x 10^9 20-kg “diamond trees” or tropostats, generating 28.6 TW of thermally non-polluting solar power and covering ~0.1% of the planetary surface, can create and actively maintain compositional atmospheric homeostasis as a key step toward achieving comprehensive human control of Earth’s climate.
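
As a quick sanity check on those figures (my own back-of-the-envelope arithmetic, not taken from the paper), the implied per-tree power and footprint work out to physically plausible numbers:

```python
N_TREES = 143e9            # tropostat population from the abstract
POWER_W = 28.6e12          # 28.6 TW of total solar power
EARTH_SURFACE_M2 = 5.1e14  # Earth's total surface area
COVERAGE = 0.001           # ~0.1% of the surface, per the abstract

watts_per_tree = POWER_W / N_TREES
area_per_tree = COVERAGE * EARTH_SURFACE_M2 / N_TREES

print(f"power per tree:     {watts_per_tree:.0f} W")    # ~200 W
print(f"footprint per tree: {area_per_tree:.2f} m^2")   # ~3.57 m^2
print(f"implied solar flux: {watts_per_tree / area_per_tree:.0f} W/m^2")
# ~56 W/m^2 captured, a modest fraction of the ~200 W/m^2 average solar
# flux reaching the surface, so the abstract's numbers hang together.
```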

On the topic of MNT, I also wonder what it will take for the skeptics to become convinced that the technology is plausible. Positional atomic placement has already been demonstrated, including at room temperature. Will complex rotating 3D nanosystems convince them? I doubt those are far off.

20Feb/10

Assorted Links for 2/20/2010

Scientific research indicates human athletic performance has peaked
Corporations, agencies infiltrated by botnet
Dolphin cognitive abilities raise ethical questions, says Emory neuroscientist
Synchronized flying robots could paint pictures in the sky (w/ Video)
Millimeter-scale, energy-harvesting sensor system developed
Nanodiamonds Produce 'Game Changing Event' for MRI Imaging Sensitivity
The Onion: U.S. Economy Grinds To Halt As Nation Realizes Money Just A Symbolic, Mutually Shared Illusion
CNN: Bill Gates and the "Nuclear Renaissance"
Telegraph: Christians to debate impact of high-profile atheist scientists
Michael Graham Richard: Science is the Only News

Filed under: random
19Feb/10

Artificial Flight Article on BoingBoing

Cory Doctorow linked the Aaron Diaz article yesterday, which is good for exposure. Doctorow said:

Dresden Codak's "Artificial Flight and Other Myths (a reasoned examination of A.F. by top birds)" is a superb, spot-on critique of artificial intelligence skeptics (like, ahem, me), comparing our arguments against the emergence of "real AI" to the arguments a bird might make against "real" artificial flight. I love being made to re-examine my own convictions while laughing my ass off.

The problem with the online hipster culture that Doctorow embodies is that its attention span is so unbelievably short that short humorous pieces like this are the only way to get its members to pay attention, ever. The idea of reading papers is absolutely foreign to this huge subculture, which powers Digg, Reddit, and practically every other social news site on the Internet. It is the mainstream media (MSM) of the Internet.

You know the motto of Improbable Research, "research that makes people laugh and then think"? I always think of this motto when I look at the mainstream Internet public, but with a different spin on it. Their motto should be, "make us laugh or we refuse to think".

Fortunately, BoingBoing also linked Futurismic for the news, and the Futurismic article prominently mentions me, so people can think about anthropomorphism in AI in more depth. Thanks, Paul Raven!

Filed under: AI
18Feb/10

God is an Alien Scientist

Via M[C]S. Apparently the Vatican is OK with belief in ETs. But is it OK if we believe God was an ET and we can summon his powers with the right radio signals? Try asking your local pastor that, and his head will explode.

Filed under: images
18Feb/10

Kevin Warwick: Terminator Scenario “Realistic”, Singularity Likely in “Not Too Distant Future”

Kevin Warwick, though obviously a Singularitarian, adopts the same adversarial stance toward AI as other human chauvinists, such as James Hughes. I paraphrase it as: "If there's an entity around that's smarter and more powerful than me, then I'm going to equate that with me being subservient and freak the fuck out!"

My suggestion: calm down. Let's do what we can to develop AIs that are nice people. There is no way we are going to outrace AI in the long run, so we have to pursue this path, whether we like it or not. We are not going to eliminate all computers in the world, or keep power in the hands of humans forever. The question is not "will the most powerful and capable entities in the world eventually be AIs?" (the answer is yes), but rather "what the heck can we do to ensure our continued survival and prosperity once these entities inevitably become more capable than us?"

Sooner or later, positive experiences with AI programs or robots will cause these AI adversaries to understand that AIs could potentially become people too: worthy of our trust and love. The longer they keep up their adversarial attitude, the more time is wasted ignoring the challenge of engineering Friendly AIs. The year is 2010 and the clock is ticking.

17Feb/10

Aaron Diaz: “Artificial Flight and Other Myths (a reasoned examination of A.F. by top birds)”

Aaron Diaz, author of the webcomic Dresden Codak (one of the most scientifically and philosophically literate webcomics on the internet) and "Enough is Enough: a Thinking Ape's Critique of Trans-Simianism", a hilarious defense of transhumanism, has now written "Artificial Flight and Other Myths (a reasoned examination of A.F. by top birds)", which pokes fun at those who think that Artificial Intelligence will require replicating every aspect of the human brain. Here is the opening:

Artificial Flight and Other Myths
a reasoned examination of A.F. by top birds

Over the past sixty years, our most impressive developments have undoubtedly been within the industry of automation, and many of our fellow birds believe the next inevitable step will involve significant advancements in the field of Artificial Flight. While residing currently in the realm of science fiction, true powered, artificial flying mechanisms may be a reality within fifty years. Or so the futurists would have us believe. Despite the current media buzz surrounding the prospect of A.F., a critical examination of even the most basic facts can dismiss the notion of true artificial flight as not much more than fantasy.

We can start with a loose definition of flight. While no two bird scientists or philosophers can agree on the specifics, there is still a common, intuitive understanding of what true flight is: powered, feathered locomotion through the air through the use of flapping wings. While other flight-like phenomena exist in nature (via bats and insects), no bird with even a reasonable education would consider these creatures true fliers, as they lack one or more key elements. And, while some birds are unfortunately born handicapped (penguins, ostriches, etc.), they still possess the (albeit undeveloped) gene for flight, and it is indeed flight that defines the modern bird.

This is flight in the natural world, the product of millions of years of evolution, and not a phenomenon easily replicated. Current A.F. is limited to unpowered gliding; a technical marvel, but nowhere near the sophistication of a bird. Gliding simplifies our lives, and no bird (including myself) would discourage advancing this field, but it is a far cry from synthesizing the millions of cells within the wing alone to achieve Strong A.F. Strong A.F., as it is defined by researchers, is any artificial flier that is capable of passing the Tern Test (developed by A.F. pioneer Alan Tern), which involves convincing an average bird that the artificial flier is in fact a flying bird.

Continue here.

Filed under: AI
16Feb/10

Revisiting ‘Beyond Anthropomorphism’

My understanding of the concept of anthropomorphism really "clicked" when I first read "Beyond anthropomorphism", part of Creating Friendly AI, an early (2000) Singularity Institute document. I strongly recommend it for those who are interested in better understanding the concept of non-anthropomorphic artificial intelligence. Here is the opening:

If you punch a human in the nose, he or she will punch back. If the human doesn't punch back, it's an admirable act of self-restraint, something worthy of note.

Imagine, for a moment, that you walk up and punch an AI in the nose. Does the AI punch back? Perhaps and perhaps not, but punching back will not be instinctive. A sufficiently young AI might stand there and think: "Hm. Someone's fist just bumped into my nose." In a punched human, blood races, adrenaline pumps, the hands form fists, the stance changes, all without conscious attention. For a young AI, focus of attention shifts in response to an unexpected negative event - and that's all.

As the AI thinks about the fist that bumped into vis nose, it may occur to the AI that this experience may be a repeatable event rather than a one-time event, and since a punch is a negative event, it may be worth thinking about how to prevent future punches, or soften the negativity. An infant AI - one that hasn't learned about social concepts yet - will probably think something like: "Hm. A fist just hit my nose. I'd better not stand here next time."

The more I study nature and biology, the more I see that anthropomorphism gets in the way of understanding animals as well. Certain birds, cats, dogs, and even rodents are intelligent, but thinking of their intelligence merely as inferior to humans is not the whole story. Different forms of intelligence have to be understood on their own terms -- not through starting with an archetype of human intelligence and making incremental modifications to that archetype. That sort of thinking can lead to anchoring.

Filed under: friendly ai
15Feb/10

The Power of Self-Replication

How can a small group of people have a big impact on the world? Develop a machine or service that is self-replicating or self-amplifying.

In a mundane way, artifacts such as iPhones and even shovels engage in human-catalyzed self-replication. People see them, then want them, then offer their money for them (or build them themselves, in a few cases), which provides the economic juice necessary to increase production and maintain the infrastructure necessary for that self-replication, like the Apple Store.

Self-replication can be relatively easy as long as the substrate is designed to contain components not much less complex than the finished product. For instance, the self-replicating robot built at Cornell self-replicates not from scratch, but rather from a set of pre-engineered blocks not much simpler than the robot itself. Using a hierarchy of such self-replicators, where each step is relatively simple but results in the creation of more complex components used in the next stage of self-replication, could provide a bootstrappable pathway to self-replicating infrastructures. Such a scheme also makes recycling easier -- if a large machine falls apart, perhaps only some of its components need be discarded, and the rest can be reused.
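
Here is a minimal sketch of that staged bootstrapping idea (my own toy model; the stage names and growth rates are invented purely for illustration): each stage replicates exponentially from a single seed, then supplies the parts for the next, more complex stage, so no single step has to make a huge leap in complexity.

```python
def bootstrap(stages, doublings_per_stage=10):
    """Each stage doubles itself repeatedly, then seeds the next stage."""
    population = 1  # one hand-built seed replicator at the bottom of the hierarchy
    for name in stages:
        population *= 2 ** doublings_per_stage  # exponential growth within a stage
        print(f"{name}: {population:,} units ready to build the next stage")
    return population

bootstrap(["simple blocks", "block assemblers", "component factories",
           "full factories"])
```

Even with only ten doublings per stage, four stages turn one hand-built seed into roughly a trillion end-stage units, which is why hierarchical self-replication is the natural bootstrapping path.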

At the root of a substantial number of transhumanists' wild visions appears to be confidence that self-replicating factories will ultimately be produced. Otherwise, it is hard to imagine how society would acquire the necessary wealth to implement changes of the type that transhumanists discuss. In fact, it appears to me that modern transhumanism evolved in large part out of enthusiasm for the idea of molecular nanotechnology in the mid-1990s. The ongoing philosophical connection of transhumanism to other Enlightenment movements is more of a post hoc project designed to make transhumanism palatable and comprehensible to larger groups.

At its core, I believe that transhumanism's greatest accomplishment is identifying self-replicating and self-amplifying processes as humanity's greatest opportunity and hazard of the 21st century -- technology with the potential to allow us to transcend our material, physiological, and psychological limitations or, if handled poorly, cause a reprise of the Permian-Triassic extinction. You don't have to be a transhumanist to appreciate this insight; you only need to be convinced that self-replicating machines are technically plausible at some point in the near- or mid-term future. Indeed, a substantial minority of tech-oriented people seem open to the possibility, judging by a reader poll from a 2005 CNN article on RepRap.

Even more exciting to me than self-replication is the power of self-amplification. I define self-amplification as a growing optimization process that extends its own infrastructure in diverse ways rather than through simple self-replication, where "infrastructure" is defined as both core structures and the peripheral structures that support them. Humanity is an interesting edge case here, at the boundary of what I would consider the transition from self-replication to self-amplification. We are able to create diverse artifacts, but our ability to inject diversity into our own bodies and minds through self-transformation or directed evolution is extremely limited.

There is an opportunity here for the development of a mathematical model that quantifies the information and structural content produced by a given self-replicating or self-amplifying entity. Humans like to think that we exhibit nearly infinite variety in the creation of artifacts, but this is untrue. We mostly create artifacts that we have cultural and evolutionary predispositions to create. If we realized how constrained our information-producing tendencies are, it would help us become a more mature species through better self-reflection.