Lately, I've been seeing something interesting -- valid criticism of the transhumanist project. The concern is decently articulated by the people who are being paid to attack me and other transhumanists, over at The New Atlantis Futurisms blog, funded by the Ethics and Public Policy Center, "dedicated to applying the Judeo-Christian moral tradition to critical issues of public policy". To quote Charles T. Rubin's "What is the Good of Transhumanism?":
While some will use enforcement costs and lack of complete success at enforcing restraint as an argument for removing it altogether, that is an argument that can be judged on its particular merits -- even when the risks of enforcement failures are extremely great. The fact that nuclear non-proliferation efforts have not been entirely successful has not yet created a powerful constituency for putting plans for nuclear weapons on the Web, and allowing free sale of the necessary materials. In the event, transhumanists, like "Bioluddites," want to make distinctions between legitimate and illegitimate uses of "applied reason," even if as we will see they want to minimize the number of such distinctions because, as we will note later, they see diversity as a good. Of course, those who want to restrict some technological developments likewise look to some notion of the good. This disagreement about goods is the important one, untouched by "Bioluddite" name-calling. The mom-and-apple-pie defense of reason, science and technology one finds in transhumanism is rhetorically useful, within the framework of modern societies which have already bought into this way of looking at the world, to lend a sense of familiarity and necessity to arguments that are designed eventually to lead in very unfamiliar directions. But it is secondary to ideas of what these enterprises are good for, to which we now turn, and ultimately to questions about the foundation on which transhumanist ideas of the good are built.
Yes, "diversity" can be good. But transhumanists have a problem. Diversity is so darn huge, and contains far far more of what would broadly be considered "hideous" than anything beautiful.
I approach the idea of "diversity" from an information-theoretic perspective. From this perspective, "diversity" can be achieved by randomly rearranging molecules to reach a new, unique, "diverse" state. In this view, if absolute freedom to self-modify became possible in a society with sophisticated molecular nanotechnology, then eventually a very large and exotic collective of wireheaded and partially wireheaded beings could emerge. It could be ugly, not beautiful. For a "real-world" example, look at how everyone had great expectations for Second Life, and then it "degenerated" into a haven of porn and nightclubs. While it's debatable whether a world of porn and nightclubs is a bad thing, it's obviously not what many in society would want, and I think that an optimal transhumanist future should be appealing to all, not just a few.
Simplistic libertarian transhumanism argues, "anything is possible, and everything should be". Pursued to its logical conclusion, that means that I should be allowed to manufacture a trillion cyborg nematodes filled with botulism toxin and just chill with them. After all, it's my own choice; what right do you have to infringe upon it? The problem is that this cluster of nematodes would become a weapon of mass destruction if launched into stratospheric air currents for worldwide distribution and programmed to fall in clusters on major cities, where they would navigate to targets by thermal sensing and inject their toxins. My unlimited "freedom" could become your unlimited doom, overnight. The same applies to people in space with the ability to anonymously cloak and accelerate asteroids towards ground targets. Any substantial magnification in human capability raises the same "civil rights" issues.
Many transhumanist writings advocate simplistic libertarian transhumanism. I won't bother to list any by name, but they're all around.
A regular commenter here, Sulfur, recently articulated his objection to transhumanism, responding to my recent statement "The latter makes sense, the former doesn't," with regard to solving the flaws of the Homo sapiens default chassis:
The fundamental problem with that sentence is that transhumanists see the human body as a problem to solve, and they are quick to judge what is needed and what is not. If it were up to them to decide, we would already have made terrible mistakes in augmenting our bodies ("Hell, we don't need so many genes! Let's get rid of them!" hype-like attitude). Transhumanism uses imperfect tools to perfect the human. That can easily lead to disaster. Besides, the most important issue is not whether small changes correcting some flaws are desirable, needed, or wanted, but rather to what extent we can change the human without committing suicide in an ambitious yet funny way, thanks to augmentations which would radically change our minds, creating a new quality.
It's true -- we do see the human body as a problem to solve. After all, the human body can't even withstand 5 psi of overpressure without our eardrums rupturing, or intercept rifle bullets without severe tissue damage, which I consider unacceptable. Moving in a more mainstream direction, many transhumanists (a small group of fewer than 5,000 people with mainstream intellectual influence far beyond their numbers) agree that solving aging is a major priority. After all, Darwinian evolution did not have our best interests in mind when it designed us. As far as I am concerned, the answer to the question of whether the human body is a problem to be solved is obvious: it is. The question is not whether we need to solve it, but how.
The "how" question is where things can get sticky. Most of human existence is not so crime-free and kosher as life in the United States or Western Europe. Business as usual in many places in the world, including the country of my grandparents, Russia, is deeply defined by organized crime, physical intimidation, and other primate antics. The many wealthy, comfortable transhumanists living in San Francisco, Los Angeles, Austin, Florida, Boston, New York, London, and similar places tend to forget this. The truth is that most of the world is dominated by the radically evil. Increasing our technological capabilities will only magnify that evil many times over.
The answer to this problem lies not in letting every being do whatever it wants, which would lead to chaos. There must be regulations and restrictions on enhancement, to coax it along socially beneficial lines. This is not the same as advocating socialist politics in the human world. You can be a radical libertarian when it comes to human societies, but advocate "stringent" top-level regulation for a transhumanist world. The reason is that the space of possibilities opened up by unlimited self-modification of brains and bodies is absolutely huge. Most of these configurations lack value by any possible definition, even definitions adopted specifically as contrarian positions to try to refute my hypothesis. This space is much larger than we can imagine, and larger than many naive transhumanists choose to imagine. This is especially relevant when it comes to matters of mind, not just the body. Evolution crafted our minds over millions of years to be sane. More than 999,999 out of every 1,000,000 possible modifications to the human mind would be more likely to lead to insanity than to improved intelligence or happiness. Transhumanists who don't understand this need to study the human mind and looming technological possibilities more closely. The human mind is precisely configured, the space of choice is not, and ignorant spontaneous choices will lead to insane outcomes.
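The intuition behind that ratio can be made concrete with a toy model -- my own illustration, with made-up numbers, not a claim about real neuroscience. Treat a "well-tuned" mind as a point sitting near an optimum in a high-dimensional configuration space, and count how often a blind random modification moves it closer to the optimum rather than further away:

```python
import random

random.seed(0)

# Toy model: a "mind" is a point in a high-dimensional configuration
# space, and "sanity" is proximity to a precisely tuned optimum.
# A random modification is a random step in that space; we count how
# often such a step actually improves the configuration.

DIM = 1000              # dimensions in the configuration space (arbitrary)
tuned = [0.01] * DIM    # a configuration already close to the optimum at 0

def fitness(cfg):
    # Higher is better: negative squared distance from the optimum.
    return -sum(x * x for x in cfg)

trials = 1000
improvements = 0
for _ in range(trials):
    step = [random.gauss(0, 0.05) for _ in range(DIM)]
    mutated = [a + b for a, b in zip(tuned, step)]
    if fitness(mutated) > fitness(tuned):
        improvements += 1

print(f"{improvements} of {trials} random modifications improved fitness")
```

The point of the sketch is geometric: when the current configuration is already close to the optimum, almost every random direction in a high-dimensional space points away from it, so blind modification overwhelmingly degrades rather than improves. The specific dimensions and step sizes here are assumptions chosen only to make that geometry visible.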
The problem with transhumanism is that it has become, in some quarters, merely a proxy for the idea of Progress. Progress is all well and good. The problem is that the idea isn't indefinitely extensible. The human world is a small floating platform in a sea of darkness -- a design space that we haven't even begun to understand. In most directions lie Monsters, not happiness. Progress within the human regime is one thing, but the posthuman regime is something else entirely. Imagine having First Contact with a quadrillion different alien species simultaneously. That is what we are looking at with an uncontrolled hard takeoff Singularity. Just one First Contact would be the most significant event in human history, but transhumanists are talking about a quadrillion of them, all at once.
In the comments, Sulfur referenced the "transhumanist mindset which says that upward change is a dogma". But there is a portion of transhumanists who resist that dogma. Take Nick Bostrom's "The Future of Human Evolution" paper, very popular among SIAI staff. I believe that Bostrom's 2004 publication of this paper was a ground-breaking moment for transhumanism, defining a schism that has been ongoing ever since. The schism is between those who see transhumanism as unqualifiedly good and those who see humanity's self-enhancement as a challenging project that demands close attention and care. Here's the abstract:
Evolutionary development is sometimes thought of as exhibiting an inexorable trend towards higher, more complex, and normatively worthwhile forms of life. This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about. We then consider how such catastrophic outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated policy to control human evolution by modifying the fitness function of future intelligent life forms.
I am strongly magnetized to the Singularity Institute, Future of Humanity Institute, and Lifeboat Foundation, because I see these three organizations as the cautious side of transhumanism, exemplified by the concerns aired in the above paper. Many other iterations of transhumanism seem to be awkward fusions between SL2 transhumanism and the boilerplate leftist or rightist politics of the Baby Boomer generation. Though even our new President is attempting to engage in post-Boomer politics, the USA Boomer Politics War is so huge that it sucks in practically everything else. It's pathetic when transhumanists aren't intellectually strong enough to transcend that. Really, it is a generational war.
As somewhat of a side note, people misunderstand the SIAI position with respect to this question. SIAI seeks not to impose a superintelligent regime on the world, but rather asks, "given that we believe a hard takeoff is likely, what the heck can we do to preserve Human Value, or structures at least continuous with human value?" The question is not easy, and people often misinterpret the probability assessment of a fast transition as a desire for a fast transition. I would desire nothing more than a slow transition. I just don't think that the transition from Homo sapiens to recursive self-improvement will be very slow. Still, even if it's fast, value can probably be retained, if we allocate significant resources and attention to specifically doing so.
I believe that there can be a self-enhancement path that everyone can agree on as beneficial. I think there is enough room in the universe to hold diverse values, but not exponentially diverse in the information-theoretic sense. I doubt that intelligent species throughout the multiverse retain their legacy forms as they spread across the cosmos. Inventing and mastering the technologies of self-modification is not optional for intelligent civilizations -- it's a must. The question is what we use them for, and whether we let society degenerate into a mess of a million shattered fragments in the process.