Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

5 September 2012

Comprehensive Copying Not Required for Uploading

Recently, there was some confusion on the part of biologist P.Z. Myers regarding the Whole Brain Emulation Roadmap report by Anders Sandberg and Nick Bostrom at the Future of Humanity Institute.

The confusion arose when Prof. Myers made incorrect assumptions about the 130-page roadmap from reading a 2-page blog post by Chris Hallquist. Hallquist wrote:

The version of the uploading idea: take a preserved dead brain, slice it into very thin slices, scan the slices, and build a computer simulation of the entire brain.

If this process manages to give you a sufficiently accurate simulation

Prof. Myers objected vociferously, writing, "It won’t. It can’t.", and then launching into a reasonable attack against the notion of scanning a living human brain at nanoscale resolution with current fixation technology. The confusion is that Prof. Myers is criticizing a highly specific idea, the notion of exhaustively simulating every axon and dendrite in a live brain, as if that were the only proposal or even the central proposal put forward by Sandberg and Bostrom. In fact, on page 13 of the report, the authors present a table that includes 11 progressively more detailed "levels of emulation", ranging from simulating the brain using high-level representational "computational modules" to simulating the quantum behavior of individual molecules. In his post, Myers writes as if the 5th level of detail, simulating all axons and dendrites, is the only path to whole brain emulation (WBE) proposed in the report (it isn't), and also as if the authors are proposing that WBE of the human brain is possible with present-day fixation techniques (they aren't).

In fact, the report presents Whole Brain Emulation as a technological goal with a wide range of possible routes to its achievement. The narrow method that Myers criticizes is only one approach among many, and not one that I would think is particularly likely to work. In the comments section, Myers concurs that another approach to WBE could work perfectly well:

This whole slice-and-scan proposal is all about recreating the physical components of the brain in a virtual space, without bothering to understand how those components work. We’re telling you that approach requires an awfully fine-grained simulation.

An alternative would be to, for instance, break down the brain into components, figure out what the inputs and outputs to, say, the nucleus accumbens are, and then model how that tissue processes it all (that approach is being taken with models of portions of the hippocampus). That approach doesn’t require a detailed knowledge of what every molecule in the tissue is doing.

But the method described here is a brute force dismantling and reconstruction of every cell in the brain. That requires details of every molecule.

But, the report does not mandate that a "brute force dismantling and reconstruction of every cell in the brain" is the only way forward for uploading. This makes it look as if Myers did not read the report, even though he claims, "I read the paper".

Slicing and scanning a brain will likely be necessary, but by no means sufficient, to create a high-detail Whole Brain Emulation. After all, it is difficult to imagine how the salient features of a brain could be captured without scanning it in some way.

What Myers seems to be objecting to is a kind of dogmatic reductionism, a "brain in, emulation out" direct scanning approach that is not actually being advocated by the authors of the report. The report is non-dogmatic, stating that a two-phase approach to WBE is required, where "The first phase consists of developing the basic capabilities and settling key research questions that determine the feasibility, required level of detail and optimal techniques. This phase mainly involves partial scans, simulations and integration of the research modalities." In this first phase, there is ample room for figuring out what the tissue actually does. Then, that data can be used to simplify the scanning and representation process. The required level of understanding vs. blind scan-and-simulate is up for debate, but few would claim that our current neuroscientific understanding suffices.

Describing the difficulties of comprehensive scanning, Myers writes:

And that’s another thing: what the heck is going to be recorded? You need to measure the epigenetic state of every nucleus, the distribution of highly specific, low copy number molecules in every dendritic spine, the state of molecules in flux along transport pathways, and the precise concentration of all ions in every single compartment. Does anyone have a fixation method that preserves the chemical state of the tissue?

Measuring the epigenetic state of every nucleus is not likely to be required to create convincing, useful, and self-aware Whole Brain Emulations. No neuroscientist familiar with the idea has ever claimed this. The report does not claim this, either. Myers seems to be inferring this claim himself through his interpretation of Hallquist's brusque 2-sentence summary of the 130-page report. Hallquist's sentences need not be interpreted this way -- "slicing and scanning" the brain could be done simply to map neural network patterns rather than to capture the epigenetic state of every nucleus.

Next, Myers objects to the idea that brain emulations could operate at faster-than-human speeds. He responds to a passage in "Intelligence Explosion: Evidence and Import", another paper cited in the Hallquist post which claims, "Axons carry spike signals at 75 meters per second or less (Kandel et al. 2000). That speed is a fixed consequence of our physiology. In contrast, software minds could be ported to faster hardware, and could therefore process information more rapidly." To this, Myers says:

You’re just going to increase the speed of the computations — how are you going to do that without disrupting the interactions between all of the subunits? You’ve assumed you’ve got this gigantic database of every cell and synapse in the brain, and you’re going to just tweak the clock speed… how? You’ve got varying length constants in different axons, different kinds of processing, different kinds of synaptic outputs and receptor responses, and you’re just going to wave your hand and say, “Make them go faster!” Jebus. As if timing and hysteresis and fatigue and timing-based potentiation don’t play any role in brain function; as if sensory processing wasn’t dependent on timing. We’ve got cells that respond to phase differences in the activity of inputs, and oh, yeah, we just have a dial that we’ll turn up to 11 to make it go faster.

On a first read, this objection almost sounds as if Prof. Myers does not understand that software can be run faster by running it on a faster computer. Reading his post carefully, that does not seem to be what he actually means, but since the connotation is there, the point is worth addressing directly.

Software is a series of electrical signals passing through logic gates; it is agnostic to the processing speed of the underlying computer. The software is a pattern, and the pattern is there whether the clock speed of the processor is 2 kHz or 2 GHz. When and if software is ported from a 2 kHz computer to a 2 GHz computer, it does not stand up and object to this "tweaking the clock speed". No "waving of hands" is required. The software may well be unable to detect that the substrate has changed, and even if it can detect the change, the change will have no impact on its functioning unless the programmers specifically write code that makes it react.

Changing the speed at which software runs is routine. If the hardware can support the higher speed, pressing a button is all it takes to speed the software up. This is a simple point.
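To make this concrete, here is a toy sketch (mine, not anything from the report): the same deterministic program is run once on a "slow" host and once on a "fast" host, where an artificial delay stands in for slower hardware. The computed result is bit-identical; only the wall-clock time differs.

```python
import time

def step(state):
    # One deterministic update of a toy "program" (a linear congruential map).
    return (1103515245 * state + 12345) % (2 ** 31)

def run(initial_state, n_steps, delay_per_step=0.0):
    # delay_per_step stands in for slower hardware; the program itself is unchanged.
    state = initial_state
    start = time.time()
    for _ in range(n_steps):
        state = step(state)
        if delay_per_step:
            time.sleep(delay_per_step)
    return state, time.time() - start

slow_result, slow_secs = run(42, 1000, delay_per_step=0.001)  # the "2 kHz" host
fast_result, fast_secs = run(42, 1000)                        # the "2 GHz" host
assert slow_result == fast_result  # identical computation, different wall-clock time
print(f"slow: {slow_secs:.3f}s   fast: {fast_secs:.3f}s")
```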

The crux of Myers' objection seems to actually be about the interaction of the simulation with the environment. This objection makes much more sense. In the comments, Carl Shulman responds to Myers' objection:

This seems to assume, contrary to the authors, running a brain model at increased speeds while connected to real-time inputs. For a brain model connected to inputs from a virtual environment, the model and the environment can be sped up by the same factor: running the exact same programs (brain model and environment) on a faster (serial speed) computer gets the same results faster. While real-time interaction with the outside would not be practicable at such speedup, the accelerated models could still exchange text, audio, and video files (and view them at high speed-up) with slower minds.

Here, there seems to be a simple misunderstanding on Myers' part, where he is assuming that Whole Brain Emulations would have to be directly connected to real-world environments rather than virtual environments. The report (and years of informal discussion on WBE among scientists) more or less assumes that interaction with the virtual environment would be the primary stage in which the WBE would operate, with sensory information from an (optional) real-world body layered onto the VR environment as an addendum. As the report describes, "The environment simulator maintains a model of the surrounding environment, responding to actions from the body model and sending back simulated sensory information. This is also the most convenient point of interaction with the outside world. External information can be projected into the environment model, virtual objects with real world affordances can be used to trigger suitable interaction etc."

It is unlikely that an arbitrary WBE would be running at a speed that lines it up precisely with the roughly 200 Hz firing rate of human neurons, the rate at which we think. More realistically, the emulation is likely to be much slower or much faster than the characteristic human rate, which occupies a tiny sliver in a wide expanse of possible mind-speeds. It would be far more reasonable -- and just easier -- to run the WBE in a virtual environment with a speed suited to its thinking speed. Otherwise, the WBE would perceive the world around it running at either a glacial or a hyper-accelerated pace, and would have a difficult time making sense of either.

Since the speed of the environment can be smoothly scaled with the speed of the WBE, the problems that Myers cites with respect to "turn[ing] it up to 11" can be duly avoided. If the mind is turned up to 11, which is perfectly possible given adequate computational resources, then the virtual environment can be turned up to 11 as well. After all, the computational resources required to simulate a detailed virtual environment would pale in comparison to those required to simulate the mind itself. Thus, the mind can be turned up to 11, 12, 13, 14, or far beyond with the push of a button, to whatever level the computing hardware can support. Given the historic progress of computing hardware, this may well eventually be thousands or even millions of times the human rate of thinking. Considering minds that think and innovate a million times faster than us might be somewhat intimidating, but there it is, a direct result of the many intriguing and counterintuitive consequences of physicalism.
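Here is a hedged sketch of Shulman's point (my own toy code; the class names BrainModel and Environment are illustrative placeholders, not anything from the roadmap): if the mind and its virtual world are stepped together in simulated time, the interaction between them is fixed by the simulated dynamics, and the hardware only determines how much wall-clock time each simulated second costs.

```python
# Toy illustration: the coupling between a simulated mind and its simulated
# environment depends only on simulated time, never on wall-clock speed.

class Environment:
    def __init__(self):
        self.t = 0.0        # simulated seconds
        self.signal = 0.0
    def step(self, dt, motor_output):
        self.t += dt
        self.signal = 0.5 * motor_output + (self.t % 1.0)  # some toy dynamics
        return self.signal                                  # "sensory input" for the mind

class BrainModel:
    def __init__(self):
        self.state = 0.0
    def step(self, dt, sensory_input):
        self.state += dt * (sensory_input - self.state)     # leaky integrator toward input
        return self.state                                   # "motor output" for the world

def run(sim_seconds, dt=0.001):
    env, brain = Environment(), BrainModel()
    sensory = 0.0
    for _ in range(int(sim_seconds / dt)):
        motor = brain.step(dt, sensory)   # mind and world advance together,
        sensory = env.step(dt, motor)     # indexed by simulated time, not wall-clock time
    return brain.state

# Whether this loop takes a millisecond or a month of wall-clock time is a
# property of the hardware alone; the interaction between mind and environment
# is identical either way, so "turning it up to 11" turns both up together.
print(run(sim_seconds=10.0))
```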

12 February 2011

Confirmed: Key Activities by “Anonymous” Masterminded by Small Groups of Decision-Makers

In a recent post I made on "Anonymous", commenter "mightygoose" said:

i would agree with matt, having delved into various IRC channels and metaphorically walked among anonymous,i would say that they are fully aware that they have no head, no leadership, and while you can lambast their efforts as temporary nuisance, couldnt the same be said for any form of protest (UK students for example) and the effective running of government.

I responded:

They are dependent on tools and infrastructure provided by a small, elite group. If it weren't for this infrastructure, 99% of them wouldn't even have a clue about how to even launch a DDoS attack.

A week ago in the Financial Times:

However, a senior US member of Anonymous, using the online nickname Owen and evidently living in New York, appears to be one of those targeted in recent legal investigations, according to online communications uncovered by a private security researcher.

A co-founder of Anonymous, who uses the nickname Q after the character in James Bond, has been seeking replacements for Owen and others who have had to curtail activities, said researcher Aaron Barr, head of security services firm HBGary Federal.

Mr Barr said Q and other key figures lived in California and that the hierarchy was fairly clear, with other senior members in the UK, Germany, Netherlands, Italy and Australia.

Of a few hundred participants in operations, only about 30 are steadily active, with 10 people who "are the most senior and co-ordinate and manage most of the decisions", Mr Barr told the Financial Times. That team works together in private internet relay chat sessions, through e-mail and in Facebook groups. Mr Barr said he had collected information on the core leaders, including many of their real names, and that they could be arrested if law enforcement had the same data.

Many other investigators have also been monitoring the public internet chats of Anonymous, and agree that a few seasoned veterans of the group appear to be steering much of its actions.

Yes... just like I already said in December. There may be many participants in Anonymous that would like to believe that they have no leadership, no head, but the fact is that any sustained and effective effort of any kind requires leadership.

It's funny how some people like to portray Anonymous as some all-wise decentralized collective, but like I said, if /b/ were shut down, they would all scatter like a bunch of ants. Anonymous has the weakness that it isn't unified by any coherent philosophy. This is not any kind of intellectual group. In contrast, groups like Transhumanism, Bayesianism, and Atheism are bound together by central figures, ideas, texts, and physical meetings.

27 January 2011

Happiness Set Point and Existential Risk

Talking to Phil, Stephen, and PJ on FastForward Radio last night, I made a point that I make often in person but I don't think I've ever said on my blog.

The point is a reaction to accusations of doomsaying. People say, "you're so negative, contemplating catastrophic scenarios and apocalypse!" My response is that rather than being indicative of me being pessimistic or depressed, it is actually evidence that I am a happy person. Because I have a high happiness set point, I am enabled to consider negative scenarios without suffering personal depression or momentary sadness. I am immune from the reactive flinching away that most people have when they consider nuclear war or robots destroying all humans. Well, not entirely immune, but certainly more immune than most, and acclimation is part of it.

Because of my high happiness set point, there are greater volumes of idea space that I can comfortably navigate. Try it. Can you consider nuclear war in an entirely objective way, thinking about scientific facts and evidence, rather than fixating on the emotional human impact? For me and some of my friends, nuclear war can be brought up in casual conversation, without gloominess, simply because it's interesting to work through the probabilities involved. We can be sad and humanistic/emotional about it too, but we have the option to be analytical as well. Others don't have a choice. Having more choices is good in this situation.

People with an average or low happiness set-point are unfortunately handicapped. They can't think about negative possibilities without feeling sad. Thus, that portion of the memetic state space is blocked off to them. Poor schmucks.

Ironically, their inability to rationally confront existential risks increases the probability that we will all experience a disaster. Unfortunate, because their actions will cause others to suffer.

A corollary of this effect is that when existential risks are brought up at all, it tends to be in a humorous context, because most people are too fragile to consider it in a non-humorous context.

15 December 2010

Singularity Summit 2010 Videos: Michael Vassar on The Darwinian Method

Michael Vassar at Singularity Summit 2010 -- The Darwinian Method from Singularity Institute on Vimeo.

4 November 2010

Katja Grace Honors Thesis Now Available

See the summary here, download it at the little box towards the lower right. Title: "Anthropic Reasoning in the Great Filter".

A major part of this effort is asking the questions, "what are different possible reference classes for anthropics/Doomsday Argument and what do they imply?", and "can we agree on updating our probabilities for being close to the Great Filter (whatever is responsible for the Fermi Paradox) if we aren't absolutely certain what reference class we're in?"

Read this first.

My current position is that it is extremely unlikely for life to develop to our stage, because we live in a simple universe where even the evolution of consciousness is a miracle. But if it had never happened, we would never be around to observe it, so we happen to find ourselves in a universe where it did happen -- but just barely. Because there are many more simple universes (without life) than complex ones with it (assuming whatever process generates universes in the multiverse produces more simple universes than complex ones), we should expect to find ourselves in one of the most abundant universes (we're typical, after all). We just happen to find ourselves in a universe that is common enough to be simple, but complex (and consciousness-biased) enough that at least one conscious species evolved in it.
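Here is a toy version of that weighting (my own illustration with made-up numbers, not something from the thesis): weight each class of universe by its abundance times its chance of producing observers, then normalize. The posterior concentrates on the simplest class that still manages to produce observers at all.

```python
# Toy anthropic weighting (illustrative numbers only):
# weight of a class given that observers exist is proportional to
# abundance(class) * P(observers | class).

classes = {
    # class label: (relative abundance in the multiverse, chance of producing observers)
    "very simple":   (1e8, 0.0),
    "barely enough": (1e4, 1e-3),
    "complex":       (1e0, 0.5),
    "very complex":  (1e-4, 0.99),
}

weights = {c: abundance * p_obs for c, (abundance, p_obs) in classes.items()}
total = sum(weights.values())
for c, w in weights.items():
    print(f"{c:>13}: {w / total:.3f}")

# Most of the posterior lands on the simplest class that still produces
# observers -- "common enough to be simple, complex enough that at least one
# conscious species evolved."
```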

Is the universe really as simple as all that? Max Tegmark has published a paper arguing that the universe may in fact contain close to zero information.

2 November 2010

“Liberal Eugenics” — An Awkward Term

I just ran into "liberal eugenics" on Wikipedia:

Liberal eugenics is an ideology which advocates the use of reproductive and genetic technologies where the choice of the goals of enhancing human characteristics and capacities is left to the individual preferences of consumers, rather than the ideological priorities of a government authority.

The term "liberal eugenics" does not necessarily indicate that its proponents are social liberals in the modern sense or that they are non-classist and non-racist. Rather, the term is used to refer to any ideology of eugenics which is inspired by an underlying liberal theory but also to differentiate it from the authoritarian or totalitarian eugenic programs of the first half of the 20th century, which were associated with coercive methods to decrease the frequency of certain human hereditary traits passed on to the next generation. The most controversial aspect of those programs was the use of "negative" eugenics laws which allowed government agencies to sterilize individuals alleged to have undesirable genes.

Historically, eugenics is often broken into the categories of positive (encouraging reproduction in the designated "fit") and negative (discouraging reproduction in the designated "unfit"). Many positive eugenic programs were advocated and pursued during the early 20th century, but the negative programs were responsible for the compulsory sterilization of hundreds of thousands of persons in many countries, and were contained in much of the rhetoric of Nazi eugenic policies of racial hygiene and genocide.

Liberal eugenics is conceived to be mostly "positive", relying more on reprogenetics than on selective breeding charts to achieve its aims. It seeks to both minimize congenital disorder and enhance capacity, traditional eugenic goals. It is intended to be under the control of the parents exercising their procreative liberty while guided by the principle of procreative beneficence, though the substantial governmental and corporate infrastructure required for reprogenetics may limit or steer their actual choices.

Because of its reliance on new reprogenetic technologies, liberal eugenics is often referred to as "new eugenics", "neo-eugenics" or "techno-eugenics". However, these terms may be misleading since current or future collectivist, authoritarian, and totalitarian eugenic programs do or could also rely on these new biotechnologies.

Eugenicist Major General Frederick Osborn laid the intellectual groundwork for liberal eugenics as early as the 1930s when he was the director of the Carnegie Institution for Science. Osborn argued that the public would never accept eugenics under militarized directives; rather, time must be allowed for "eugenic consciousness" to develop in the population. Accordingly, eugenic consciousness did not have to be aggressively and intentionally micro-manufactured; instead, it would develop as an emergent property as capitalist economy increased in complexity.

Osborn argued that all that was needed was to simply wait until a specific set of social structures (a consumer economy and the nuclear family) developed to a point of dominance within capitalist culture. Once these structures matured, people would act eugenically without a second thought. Eugenic activity, instead of being an immediately identifiable, repugnant activity, would become one of the invisible taken-for-granted activities of everyday life (much like getting a vaccination).

It seems like "liberal eugenics" is so much on its way to becoming an invisible taken-for-granted part of life that even giving it a specific name is unnecessary. "My right to have children without genetic disorders" is a name for the genetic screening we have today. Though some may disagree on the definition of a disorder, many disorders are unambiguous.

5 September 2010

Assorted Links September 6th, 2010

Robin Hanson on Who Should Exist? and Ways to Pay to Exist.

IEEE Spectrum has an interview with Ratan Kumar Sinha, who designed India's new thorium reactor.

The popular website Big Think has a couple of transhumanist writers, Parag and Ayesha Khanna. Their latest article, "Can Hollywood Redesign Humanity?", continues the H+/Hollywood connection previously promoted by Jason Silva and others. "Documentaries Ponder the Future" is another one of their articles.

13 May 2010

Gary Marcus at Singularity Summit 2009: The Fallibility and Improvability of the Human Mind

Gary Marcus at Singularity Summit 2009 -- The Fallibility and Improvability of the Human Mind from Singularity Institute on Vimeo.

Gary Marcus is Professor of Psychology at New York University, director of the NYU Center for Child Language, and author of The Birth of the Mind and Kluge.

27 February 2010

Valid Transhumanist Criticism?

Lately, I've been seeing something interesting -- valid criticism of the transhumanist project. The concern is decently articulated by the people who are being paid to attack me and other transhumanists, over at The New Atlantis Futurisms blog, funded by the Ethics and Public Policy Center, "dedicated to applying the Judeo-Christian moral tradition to critical issues of public policy". To quote Charles T. Rubin's "What is the Good of Transhumanism?":

While some will use enforcement costs and lack of complete success at enforcing restraint as an argument for removing it altogether, that is an argument that can be judged on its particular merits -- even when the risks of enforcement failures are extremely great. The fact that nuclear non-proliferation efforts have not been entirely successful has not yet created a powerful constituency for putting plans for nuclear weapons on the Web, and allowing free sale of the necessary materials. In the event, transhumanists, like "Bioluddites," want to make distinctions between legitimate and illegitimate uses of "applied reason," even if as we will see they want to minimize the number of such distinctions because, as we will note later, they see diversity as a good. Of course, those who want to restrict some technological developments likewise look to some notion of the good. This disagreement about goods is the important one, untouched by "Bioluddite" name-calling. The mom-and-apple-pie defense of reason, science and technology one finds in transhumanism is rhetorically useful, within the framework of modern societies which have already bought into this way of looking at the world, to lend a sense of familiarity and necessity to arguments that are designed eventually to lead in very unfamiliar directions. But it is secondary to ideas of what these enterprises are good for, to which we now turn, and ultimately to questions about the foundation on which transhumanist ideas of the good are built.

Yes, "diversity" can be good. But transhumanists have a problem. Diversity is so darn huge, and contains far far more of what would broadly be considered "hideous" than anything beautiful.

I approach the idea of "diversity" from an information-theoretic perspective. In such a perspective, "diversity" can be achieved by randomly rearranging molecules to reach a new, unique, "diverse" state. In this view, if absolute freedom to self-modify became possible in a society with sophisticated molecular nanotechnology, then eventually a very large and exotic collective of wireheaded and partially wireheaded beings could emerge. It could be ugly, not beautiful. For a "real-world" example, look at how everyone had great expectations for Second Life, and then it "degenerated" into a haven of porn and nightclubs. While it's debatable whether a world of porn and nightclubs is a bad thing, it's obviously not what many in society would want, and I think that an optimal transhumanist future should be appealing to all, not just a few.
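To put a rough number on that intuition, here is a toy combinatorial sketch (mine, with arbitrary parameters): count what fraction of all configurations satisfies even a loose structural constraint as the configuration space grows.

```python
from math import comb

def constrained_fraction(n_bits, max_ones):
    # Fraction of all n-bit configurations with at most max_ones bits set,
    # standing in for "configurations meeting even a loose structural constraint."
    good = sum(comb(n_bits, k) for k in range(max_ones + 1))
    return good / 2 ** n_bits

for n in (10, 50, 100, 500):
    print(n, constrained_fraction(n, n // 10))
# The constrained share collapses toward zero as the space grows: sampling
# "diversity" uniformly at random almost never lands on the structured part.
```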

Simplistic libertarian transhumanism simply argues, "anything is possible, and everything should be". Pursued to its logical conclusion, that means that I should be allowed to manufacture a trillion cyborg nematodes filled with botulinum toxin and just chill with them. After all, it's my own choice; what right do you have to infringe upon it? The problem is that this cluster of nematodes would become a weapon of mass destruction if launched into stratospheric air currents for worldwide distribution and programmed to fall in clusters on major cities, where they would inject their toxins into targets they navigate to via thermal sensing. My unlimited "freedom" could become your unlimited doom, overnight. The same applies to people in space with the ability to anonymously cloak and accelerate asteroids toward ground targets. Any substantial magnification in human capability raises the same "civil rights" issues.

Many transhumanist writings advocate simplistic libertarian transhumanism. I won't bother to list any by name, but they're all around.

A regular commenter here, Sulfur, recently articulated his objection to transhumanism, responding to my recent statement "The latter makes sense, the former doesn't," with regards to solving the flaws of the Homo sapiens default chassis:

The fundamental problem with that sentence is that transhumanists see human body as a problem to solve and they are quick to judge what is needed and what is not. If that would be for them to decide, we already would have done terrible mistakes in augmenting our bodies ("Hell, we don't need so many genes! let's get rid of them!" hype-like attitude). Transhumanism uses imperfect tools to perfect human. That can easily lead to disaster. Besides, the most important issue is not weather small changes correcting some flaws are desirable, needed or wanted, but rather to what extend we can change human and not to commit suicide in ambitious yet funny way thanks to augmentation which would radically change our minds, creating new quality.

It's true -- we do see the human body as a problem to solve. After all, the human body can't even withstand 5 psi of overpressure without our eardrums rupturing, or stop rifle bullets without severe tissue damage, which I consider unacceptable. Moving more in a mainstream direction, many transhumanists (a small group of fewer than 5,000 people with mainstream intellectual influence far beyond their numbers) agree that solving aging is a major priority. After all, Darwinian evolution did not have our best interests in mind when it designed us. As far as I am concerned, the question of whether the human body is a problem to be solved has an obvious answer: it is. The question is not whether we need to solve it, but how.

The "how" question is where things can get sticky. Most of human existence is not so crime-free and kosher as life in the United States or Western Europe. Business as usual in many places in the world, including the country of my grandparents, Russia, is deeply defined by organized crime, physical intimidation, and other primate antics. The many wealthy, comfortable transhumanists living in San Francisco, Los Angeles, Austin, Florida, Boston, New York, London, and similar places tend to forget this. The truth is that most of the world is dominated by the radically evil. Increasing our technological capabilities will only magnify that evil many times over.

The answer to this problem lies not in letting every being do whatever it wants, which would lead to chaos. There must be regulations and restrictions on enhancement, to coax it along socially beneficial lines. This is not the same as advocating socialist politics in the human world. You can be a radical libertarian when it comes to human societies, but advocate "stringent" top-level regulation for a transhumanist world. The reason is that the space of possibilities opened up by unlimited self-modification of brains and bodies is absolutely huge. Most of these configurations lack value by any possible definition, even definitions adopted specifically as contrarian positions to try to refute my hypothesis. This space is much larger than we can imagine, and larger than many naive transhumanists choose to imagine. This is especially relevant when it comes to matters of mind, not just the body. Evolution crafted our minds over millions of years to be sane. More than 999,999 out of every 1,000,000 possible modifications to the human mind would be more likely to lead to insanity than to improved intelligence or happiness. Transhumanists who don't understand this need to study the human mind and looming technological possibilities more closely. The human mind is precisely configured, the space of possible modifications is not, and ignorant spontaneous choices will lead to insane outcomes.
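A toy numerical version of that claim (my own sketch, with arbitrary parameters rather than anything measured about brains): tune a system, perturb its parameters at random, and see what fraction of perturbations leaves it within tolerance as the number of dimensions grows.

```python
import random

def performance(params, target):
    # Toy stand-in for a mind's "sanity": how close its parameters sit to the tuned values.
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def surviving_fraction(dim, trials=2000, noise=0.5, tolerance=-0.5):
    target = [random.uniform(-1, 1) for _ in range(dim)]
    ok = 0
    for _ in range(trials):
        mutated = [t + random.gauss(0, noise) for t in target]  # a random "enhancement"
        if performance(mutated, target) >= tolerance:
            ok += 1
    return ok / trials

for dim in (1, 10, 100, 1000):
    print(dim, surviving_fraction(dim))
# With these (arbitrary) settings, most random pokes at a 1-dimensional system
# stay within tolerance, while essentially none do once the system has hundreds
# of precisely configured dimensions.
```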

The problem with transhumanism is that it has become, in some quarters, merely a proxy for the idea of Progress. Progress is all well and good. The problem is that the idea isn't indefinitely extensible. The human world is a small floating platform in a sea of darkness -- a design space that we haven't even begun to understand. In most directions lie Monsters, not happiness. Progress within the human regime is one thing, but the posthuman regime is something else entirely. Imagine having First Contact with a quadrillion different alien species simultaneously. That is what we are looking at, with an uncontrolled hard takeoff Singularity. Just one First Contact would be the most significant event in human history, but transhumanists are talking about that times a billion, or a trillion, all at once.

In the comments, Sulfur referenced the "transhumanist mindset which says that upward change is a dogma". But there is a portion of transhumanists who resist that dogma. Take Nick Bostrom's "The Future of Human Evolution" paper, very popular among SIAI staff. I believe that Bostrom's 2004 publication of this paper was a ground-breaking moment for transhumanism, definitive of a schism that has been ongoing since. The schism is between those who see transhumanism as unqualifiedly good and those who see humanity's self-enhancement as a challenging project that demands close attention and care. Here's the abstract:

Evolutionary development is sometimes thought of as exhibiting an inexorable trend towards higher, more complex, and normatively worthwhile forms of life. This paper explores some dystopian scenarios where freewheeling evolutionary developments, while continuing to produce complex and intelligent forms of organization, lead to the gradual elimination of all forms of being that we care about. We then consider how such catastrophic outcomes could be avoided and argue that under certain conditions the only possible remedy would be a globally coordinated policy to control human evolution by modifying the fitness function of future intelligent life forms.

I am strongly magnetized to the Singularity Institute, Future of Humanity Institute, and Lifeboat Foundation, because I see these three organizations as the cautious side of transhumanism, exemplified by the concerns aired in the above paper. Many other iterations of transhumanism seem to be awkward fusions between SL2 transhumanism and the boilerplate leftist or rightist politics of the Baby Boomer generation. Though even our new President is attempting to engage in post-Boomer politics, the USA Boomer Politics War is so huge that it sucks in practically everything else. It's pathetic when transhumanists can't be intellectually strong enough to transcend that. Really, it is a generational war.

As somewhat of a side note, people misunderstand the SIAI position with respect to this question. SIAI seeks not to impose a superintelligent regime on the world, but rather asks, "given that we believe a hard takeoff is likely, what the heck can we do to preserve Human Value, or structures at least continuous with human value?" The question is not easy, and people often misinterpret the probability assessment of a fast transition as a desire for a fast transition. I would desire nothing more than a slow transition. I just don't think that the transition from Homo sapiens to recursive self-improvement will be very slow. Still, even if it's fast, value can probably be retained, if we allocate significant resources and attention to specifically doing so.

I believe that there can be a self-enhancement path that everyone can agree on as beneficial. I think there is enough room in the universe to hold diverse values, but not exponentially diverse in the information-theoretic sense. I doubt that intelligent species throughout the multiverse retain their legacy forms as they spread across the cosmos. Inventing and mastering the technologies of self-modification is not optional for intelligent civilizations -- it's a must. The question is what we use them for, and whether we let society degenerate into a mess of a million shattered fragments in the process.

15 February 2010

The Power of Self-Replication

How can a small group of people have a big impact on the world? Develop a machine or service that is self-replicating or self-amplifying.

In a mundane way, artifacts such as iPhones and even shovels engage in human-catalyzed self-replication. People see them, then want them, then offer their money for them (or, in a few cases, build them themselves), which provides the economic juice needed to increase production and maintain the infrastructure necessary for that self-replication, like the Apple Store.

Self-replication can be relatively easy as long as the substrate is designed to contain components not much less complex than the finished product. For instance, the self-replicating robot built at Cornell does not self-replicate from scratch, but rather from a set of pre-engineered blocks not much simpler than the robot itself. Using a hierarchy of such self-replicators, where each step is relatively simple but results in the creation of more complex components used in the next stage of self-replication, could provide a bootstrappable pathway to self-replicating infrastructures. Such a scheme also makes recycling easier -- if a large machine falls apart, perhaps only some of its components need be discarded, and the rest can be reused.
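Here is a minimal sketch of that bootstrapping arithmetic (my own toy model, not the Cornell design): each stage spends part of its effort copying itself and part building the next, more complex stage. Every stage still grows geometrically once it is seeded from below.

```python
def bootstrap(stages=3, cycles=20, seed=10.0, copy_fraction=0.5):
    # counts[i] = machines at stage i; stage 0 is seeded externally.
    # Each cycle, a machine spends copy_fraction of its output copying itself
    # and the rest building machines for the next stage up.
    counts = [seed] + [0.0] * stages
    for _ in range(cycles):
        new = counts[:]
        for i, n in enumerate(counts):
            new[i] += copy_fraction * n
            if i + 1 < len(counts):
                new[i + 1] += (1 - copy_fraction) * n
        counts = new
    return [round(c) for c in counts]

print(bootstrap())
# Every stage grows geometrically once seeded by the stage below it, which is
# the sense in which a hierarchy of simple replicators can bootstrap its way
# up to complex machines.
```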

At the root of a substantial number of transhumanists' wild visions appears to be confidence that self-replicating factories will ultimately be produced. Otherwise, it is hard to imagine how society would acquire the necessary wealth to implement changes of the type that transhumanists discuss. In fact, it appears to me that modern transhumanism evolved in large part out of enthusiasm for the idea of molecular nanotechnology in the mid-1990s. The ongoing philosophical connection of transhumanism to other Enlightenment movements is more of a post hoc project designed to make transhumanism palatable and comprehensible to larger groups.

At its core, I believe that transhumanism's greatest accomplishment is identifying self-replicating and self-amplifying processes as humanity's greatest opportunity and hazard of the 21st century -- technology with the potential to allow us to transcend our material, physiological, and psychological limitations or, if handled poorly, cause a reprise of the Permian-Triassic extinction. You don't have to be a transhumanist to appreciate this insight; you only need to be convinced that self-replicating machines are technically plausible at some point in the near or mid-term future. Indeed, a substantial minority of tech-oriented people seem open to the possibility. Here is a poll from a 2005 CNN article on RepRap:

Even more exciting to me than self-replication is the power of self-amplification. I define self-amplification as a growing optimization process that extends its own infrastructure in a diverse way, rather than through simple self-replication, where "infrastructure" is defined as both core structures and the peripheral structures that support them. Humanity is an interesting edge case here, at the boundary of what I would consider the transition from self-replication to self-amplification. We are able to create diverse artifacts, but our ability to inject diversity into our own bodies and minds through self-transformation or directed evolution is extremely limited.

There is an opportunity here for the development of a mathematical model that quantifies the information and structural content produced by a given self-replicating or self-amplifying entity. Humans like to think that we exhibit nearly infinite variety in the creation of artifacts, but this is untrue. We mostly create artifacts that we have cultural and evolutionary predispositions to create. If we realized how constrained our information-producing tendencies are, it would help us become a more mature species through better self-reflection.
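As a first, hedged stab at such a model (my sketch, not a developed proposal): treat the artifacts an entity produces as draws from a distribution and compare the Shannon entropy it actually realizes against the entropy of the space it could in principle explore.

```python
import math
from collections import Counter

def entropy_bits(items):
    # Shannon entropy of the empirical distribution over produced artifact types.
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical production log of some replicating/amplifying entity:
observed = ["hut"] * 50 + ["pot"] * 30 + ["spear"] * 15 + ["flute"] * 5
possible_types = 1_000_000   # assumed size of the artifact space it could in principle reach

print(f"realized: {entropy_bits(observed):.2f} bits of a possible "
      f"{math.log2(possible_types):.2f} bits")
# A producer that looks endlessly inventive up close may still occupy a thin
# slice of its design space; the gap between the two numbers is one crude
# measure of how constrained its output really is.
```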

5 January 2010

Good.is: Criticisms of the Singularity

Yesterday, Good posted the seventh and second-to-last installment of Roko's and my series on the Singularity, "Criticisms of the Singularity". (My previous contribution to the series, "The Benefits of a Successful Singularity", was promoted to the front page of Digg.) For your benefit, the complete article is reproduced here.

Part seven in a GOOD miniseries on the singularity by Michael Anissimov and Roko Mijic. New posts every Monday from November 16 to January 23.

As was previously discussed in our series, the "singularity" means the creation of smarter-than-human intelligence, or "superintelligence," a type of intelligence substantially more intelligent than humans. Possible methods for its creation include brain-computer interfaces and pure artificial intelligence, among others. Various scientists, futurists, and mathematicians who write about the singularity, such as Ray Kurzweil, Nick Bostrom, and Vernor Vinge, consider such an event plausible sometime between about 2025 and 2050. Among those who consider the singularity plausible, it is widely agreed that the event could alter the world, our civilization, and even our bodies and minds profoundly, through the technologies that superintelligence could create and deploy.

Because the singularity is such a new and speculative idea, and the subject of little academic study, there are people who take practically every imaginable position with respect to it. Some, unfamiliar with the idea and shocked by it, dismiss it outright or simply react with confusion. Others, such as philosopher Max More, dismiss some of the central propositions after more careful study. A substantial number embrace it openly and without too many qualifications, such as futurist Ray Kurzweil, who seems to expect a positive outcome with very high probability. My organization, the Singularity Institute, and related thinkers such as philosopher Nick Bostrom, see a positive outcome as possible, but not without very careful work towards ensuring that superintelligences retain human-friendly motivations as they grow in intelligence and power.

Criticisms of the singularity generally fall into two camps: feasibility critiques and desirability critiques. The most common feasibility critiques are what I call the Imago Dei objection and the Microsoft Windows objection. Imago Dei refers to Image of God, which is the doctrine that humans are created in God's image. If humans are really created in the image of God, then we must be sacred beings, and the idea of artificially creating a superior being becomes dubious-sounding. If such a superior being could be possible, then wouldn't God have created us that way to begin with? Unfortunately for this view, science, experimental psychology, and common sense have revealed that humans possess many intellectual shortcomings, and that some people have more of these shortcomings than others. Human intelligence isn't perfect as it is; long-term improvements may become possible with new technologies.

The Microsoft Windows objection often surfaces when the topic of superintelligent artificial intelligence is brought up and goes something like this: "How can you be expecting superintelligent robots in this century when programmers can't even create a decent operating system?" The simple answer is that too many cooks ruin a dish, and operating systems are plagued by a huge number of programmers without any coherent theory that they can really agree on. In other fields, such as optics, aerospace, and physics, scientists and engineers cooperate effectively on multi-million dollar projects because there are empirically supported theories that restrict many of the final product parameters. Artificial intelligence can reach the human level and beyond if it one day has such an organizing theory. At the present time, no such theory exists, though there are pieces that may fit into the puzzle.

Lastly, there are desirability critiques. I am very sympathetic to many of these. If we humans build a more intelligent species, might it replace us? It certainly could, and evolutionary and human history strongly support this possibility. Eventually creating superintelligence seems hard to avoid, though. People want to be smarter, and to have smarter machines that do more work for us. Instead of trying to stave off the singularity forever, I think we ought to study it carefully and make purposeful moves in the right direction. If the first superintelligent beings can be constructed such that they retain their empathy for humanity, and wish to preserve that empathy in any future iterations of themselves, we could benefit massively. Poverty, and even disease and aging, could become things of the past. There is no cosmic force that compels more powerful beings to look down upon weaker beings -- rather, this is an emotion that comes from being animals built by natural selection. In most contexts shaped by natural selection it is evolutionarily advantageous to selectively oppress weaker beings, though some humans, such as vegans, have demonstrated that genuine altruism and compassion are possible.

In contrast to Darwinian beings, superintelligence could be engineered for empathy from the ground up. A singularity originating with enhanced human intelligences could select the most compassionate and selfless subjects for radical enhancement first. An advanced artificial intelligence could be built with a deep, stable sense of empathy and even lacking an observer-centered goal system. It would have no special desire to discard its empathy because it would lack the evolutionary programming that causes that desire to surface to begin with. The better you understand evolution and natural selection, the less likely you think it is for Darwinian dynamics to apply to superintelligence.

We should certainly hope that benevolent or human-friendly superintelligence is possible, or human extinction could be the result. Just look at what we're already doing to the animal kingdom. Yet, by thinking about the issues in advance, we may figure out how to tip the odds in our favor. Human-posthuman synergy and cooperation could become possible.

Michael Anissimov is a futurist and evangelist for friendly artificial intelligence. He writes a Technorati Top 100 Science blog, Accelerating Future. Michael currently serves as Media Director for the Singularity Institute for Artificial Intelligence (SIAI) and is a co-organizer of the annual Singularity Summit.

17 December 2009

Complexity Metric Blog on Jaron Lanier vs. Eliezer Yudkowsky

Here is the commentary. Most of all, I enjoy reviews and comments by outsiders with no contact with our current community. Here are a few quotes and my comments:

It is video conference phone call split screen debate between this Yudkowsky guy who is the head scientist at the Singularity Institute, and Lanier who has been the genius hippy in red dread locks since his early pioneering work with Virtual Reality and artificial vision systems.

Before you click the link, let me frame the debate.

These two guys represent the two extremes of a subtle range of viewpoints on evolution, AI, and human consciousness.

An interesting and subtle range that deserves more popular and academic attention and will get it sooner or later because we are building technologies that produce divisive responses to the relevant philosophical issues.

Jaron's main criticism of the hard AI camp in this debate is that their strong attachment to finding a way past death and their apriori beliefe in the posibility of resonably building self evolving intelegence together become so rhetorically invasive that they can no longer do objective investigation or engineering... that their beliefs and desires make them "religious".

Well, Jaron would probably prefer if we didn't do any objective investigation or engineering, but that's not true. Remember, as cybernetic totalists, we are totally devoted to our goal. Totally awesome!

From my perspective, Jaron is a nothing more than a (very bright) priest who can't stop doing science in the basement, and Yudkoswsky is nothing less than a scientist that can't help wanting to build a God.

Hah! A superintelligence would be like a god. I can vaguely understand why people who don't regard MNT as plausible would disagree with this, but I never understand why those who do believe that MNT is plausible would.

The fireworks in the video begin at 11:00! I actually agree with many of Jaron's points in the abstract. I disagree with him when he says that we cannot represent some physical systems in totality or simulate them precisely.
