Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

31Jan/09

Hellman’s Nuclear Weapons Paper

Most people are reluctant to discuss major risks like nuclear war because they are not intellectually sophisticated enough to contemplate such a disturbing possibility in an objective manner. They may not even be consciously afraid, but still immediately twitch away from contemplating the subject due to a mostly subconscious emotional reaction. They may also place excessive faith in the doctrine of Mutually Assured Destruction, even though the myriad ways in which this scenario could break down are thoroughly familiar to defense analysts.

To come to terms with this reality, Professor Emeritus of Electrical Engineering at Stanford and one of the inventors of public key cryptography, Martin Hellman, wrote a piece last July titled "Soaring, Cryptography and Nuclear Weapons". This paper approaches the issue of nuclear war risk from the perspective of something less threatening: gliding. I suggest you check it out.

For a concurring view, see former Defense Secretary Robert McNamara's "Apocalypse Soon" from Foreign Policy magazine. Here are a couple of quotes:

"On any given day, as we go about our business, the president is prepared to make a decision within 20 minutes that could launch one of the most devastating weapons in the world. To declare war requires an act of Congress, but to launch a nuclear holocaust requires 20 minutes deliberation by the president and his advisors. But that is what we have lived with for 40 years."

"There is no guarantee against unlimited escalation once the first nuclear strike occurs."

Filed under: nuclear, risks
31Jan/09

Invasion of the Worm Robots

[Image: Wurmcoil Engine (Scars of Mirrodin)]

Consider this -- a worm robot that burrows through the top layer of soil and converts it into additional modular segments of itself as quickly as possible. A worm with a 1 cm maw (a cross-section of roughly 0.785 cm²) that tunnels through 100 meters of earth every hour sweeps up about 7,850 cc of raw soil per hour. At an efficiency of just 1%, that yields roughly 78.5 cc of usable material per hour, or about 1,884 cc (115 cu in) per day. Assuming a deliberately wasteful 785 cc of processed material is needed to build one robotic segment 1 cm long, we get a growth rate of 0.1 cm per hour or 2.4 cm (1 in) per day. Nothing shocking, really, but the numbers are contrived to be conservative. If the worms could divide (which would be possible if each segment or a small row of segments can be self-sustaining), then exponential replication could quickly overwhelm an ecosystem even if the growth rate is relatively slow. I doubt many predators would be interested in consuming a robot.
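Here is a minimal Python sketch of that back-of-the-envelope arithmetic. The inputs are the assumptions above (1 cm maw, 100 m/hour of tunneling, 1% efficiency, a deliberately wasteful 785 cc of material per segment), not measured values:

```python
import math

# Assumed inputs from the paragraph above (illustrative, not measured).
maw_diameter_cm = 1.0
tunnel_rate_cm_per_hour = 100 * 100        # 100 meters per hour
efficiency = 0.01                          # 1% of swept soil becomes usable material
material_per_segment_cc = 785.0            # wasteful budget per 1 cm segment

cross_section_cm2 = math.pi * (maw_diameter_cm / 2) ** 2          # ~0.785 cm^2
swept_cc_per_hour = cross_section_cm2 * tunnel_rate_cm_per_hour   # ~7,850 cc of raw soil
usable_cc_per_hour = swept_cc_per_hour * efficiency               # ~78.5 cc
usable_cc_per_day = usable_cc_per_hour * 24                       # ~1,884 cc (~115 cu in)
growth_cm_per_day = usable_cc_per_day / material_per_segment_cc   # ~2.4 cm (~1 in)

print(f"usable material: {usable_cc_per_hour:.1f} cc/hour, {usable_cc_per_day:.0f} cc/day")
print(f"growth rate: {growth_cm_per_day:.1f} cm/day")
```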

Why brainstorm worm robots? Well, the worm motif seems very popular in evolution, and is shared by a number of different evolutionary lineages. The worm body type is the precursor from which all bilateral and complex animals evolved! (Only cnidarians and sponges didn't evolve from worms.) The body cavity inherent in the worm body plan provides a number of benefits that others have gone over many times. So, it makes sense that a worm robot might be one of the earliest macroscopic self-replicating robots that could thrive in nature.

Where would such worms get food? The same way that regular worms do, by eating other organisms, just like that insidious fly-eating robot that was developed in 2004.

The worm robot starting point brings up a number of interesting observations and questions. First, how much of a threat could these little buggers be to an ecosystem? Of course, it depends on the growth rate and how well the robot fares in competition with the natives. But let us consider the bare minimum necessary to be an annoyance.

First off, the worm robot can prove to be a major nuisance by converting the earth into something difficult for other organisms to break down. There are probably several million types of microbes in a typical tonne of earth, but if they all fail to break something down, then it is likely to remain for a very long time. There are many examples of this decomposition-resistance in nature, notably the sponge, which defends itself not so much by aggressive means as by its manifest lack of nutritional value relative to other organisms and the caltrop-shaped calcareous or siliceous spicules it embeds itself with. Relative to the defenses that a human engineer might conjure up by probing the supra-organic design space, this is pretty boring, but it has worked for over 600 million years.

Still, without getting into anything complicated, note that significantly compressing a unit of earth would probably be enough to lower its palatability to microorganisms by a significant margin. Passing around energy currency in a form that bacteria and archaea can't digest (i.e., not glucose or sucrose) could also potentially circumvent most efforts at consumption. Processing the earth into a state whereby an exoskeleton and set of crude membranes can physically exclude microorganisms, accompanied by local microbicidal action at interfaces, could likely make the robot much more difficult to break down, both in action and when deactivated. By thinking outside the boundaries inherent to natural biology, robotics engineers will be able to create new "species" of life capable of shoving aside obstacles and continuing on their merry way.

The problem with such robots, for nature-lovers, is that they'd entirely destroy the environment. One day, lush Amazon Rainforest; three years later, a writhing mass of robotic worms and over a million extinct species. One 1-kg worm robot that reproduces just once every ten days could convert itself into nearly 69 billion of the little monsters (roughly 69 million tonnes worth) in just a year. Especially if it intertwined itself with the ecosystem, the only way to kill all of them would be to nuke the whole damn place. Building hunter-killer worm robots wouldn't work, because by the time they were deployed, the original worm robots would have a major advantage.
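Here is a quick Python check of that replication figure. It assumes unbroken ten-day doublings with unlimited raw material and zero losses -- the limiting case, not a prediction:

```python
# One 1 kg worm robot that divides every ten days, left alone for a year.
doubling_period_days = 10
days_in_year = 365
mass_per_worm_kg = 1.0

doublings = days_in_year // doubling_period_days            # 36 complete doublings
worm_count = 2 ** doublings                                  # ~6.9e10 worms
total_mass_tonnes = worm_count * mass_per_worm_kg / 1000.0   # ~69 million tonnes

print(f"{worm_count:.2e} worms after one year (~{total_mass_tonnes / 1e6:.0f} million tonnes)")
```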

Implausible, you might say? Negative. Rudimentary worm robots have already been built, and the chemical reactions necessary to convert soil organisms into energy-storing molecules are widely known. All that would be required are advances in MEMS (no molecular manufacturing needed) that allow the worm to distribute nutrients throughout its body and build new segments effectively. In mollusks (as well as worms), the simplest "complex" organisms, cilia are used as an all-purpose mechanism for ferrying nutrients about the body and waste out the anus. Looking at the contemporary lower mollusks, along with their ancestors in the small shelly fauna, one can see that the "concept of a mollusk" is simple at its essence, but it works very well. When the enabling technology is present, these designs will be copied by roboticists with interdisciplinary knowledge in biology.

The only way I can even begin to imagine addressing such problems is universal transparency and inbuilt safeguards on all "3D printers" ever manufactured. Of course, there will always be those with excessive confidence in nature's ability to repel synthetic threats (even though microbes can't eat plastic), and to those folks this won't be an issue, but to others it gives cause for worry. (Another objection would be the even more inane, "why would someone do this?") It may be a matter of trading privacy for security, a pill many find hard to swallow, but I think the events and pundits of the future will have an answer for you -- deal with it.

Filed under: risks, robotics
30Jan/09

What are the Benefits of Mind Uploading?


Universal mind uploading, or universal uploading for short, is the concept, by no means original to me, that the technology of mind uploading will eventually become universally adopted by all who can afford it, similar to the adoption of modern agriculture, hygiene, or living in houses. The concept is rather infrequently discussed, due to a combination of 1) its supposedly speculative nature and 2) its "far future" time frame.

Before I explore the idea, let me give a quick description of what mind uploading is and why the two roadblocks to its discussion are invalid. Mind uploading would involve simulating a human brain in a computer in enough detail that the "simulation" becomes, for all practical purposes, a perfect copy and experiences consciousness, just like protein-based human minds. If functionalism is true, as many cognitive scientists and philosophers correctly believe, then all the features of human consciousness that we know and love -- including all our memories, personality, and sexual quirks -- would be preserved through the transition. By simultaneously disassembling the protein brain as the computer brain is constructed, only one implementation of the person in question would exist at any one time, eliminating any unnecessary confusion.

Still, even if two direct copies are made, the universe won't care -- you would have simply created two identical individuals with the same memories. The universe can't get confused -- only you can. Regardless of how perplexed one may be by contemplating this possibility for the first time from a 20th century perspective of personal identity, an upload of you with all your memories and personality intact is no more different from you than the person you are today is from the person you were yesterday when you went to sleep, or from the person you were 10^-30 seconds ago when quantum fluctuations momentarily destroyed and recreated all the particles in your brain.

Regarding objections to talk of uploading, for anyone who 1) buys the silicon brain replacement thought experiment, 2) accepts arguments that the human brain operates at below about 10^19 ops/sec, and 3) considers it plausible that 10^19 ops/sec computers (plug in whatever value you believe for #2) will be manufactured this century, the topic is clearly worth broaching. Even if it's 100 years off, that's just a blink of an eye relative to the entirety of human history, and universal uploading would be something more radical than anything that's occurred with life or intelligence in the entire known history of this solar system. We can afford to stop focusing exclusively on the near future for a potential event of such magnitude. Consider it intellectual masturbation, if you like, or a serious analysis of the near-term future of the human species, if you buy the three points.
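One common way of arriving at a ceiling like the 10^19 ops/sec in point #2 is to multiply ballpark anatomical figures. The numbers in this sketch are rough, widely cited estimates supplied here for illustration, not settled science:

```python
# Crude upper bound on the brain's processing rate (all figures are rough ballpark estimates).
neurons = 1e11               # ~100 billion neurons
synapses_per_neuron = 1e4    # ~10,000 synapses per neuron
max_firing_rate_hz = 1e3     # generous ceiling on spikes per second
ops_per_synaptic_event = 10  # fudge factor for the computation done per synaptic event

ops_per_sec = neurons * synapses_per_neuron * max_firing_rate_hz * ops_per_synaptic_event
print(f"upper-bound estimate: {ops_per_sec:.0e} ops/sec")  # 1e+19
```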

So, say that mind uploading becomes available as a technology sometime around 2050. If the early adopters don't go crazy and/or use their newfound abilities to turn the world into a totalitarian dictatorship, then they will concisely and vividly communicate the benefits of the technology to their non-uploaded family and friends. If affordable, others will then follow, but the degree of adoption will necessarily depend on whether the process is easily reversible or not. But suppose that millions of people choose to go for it.

Widespread uploading would have huge effects. Let's go over some of them in turn.

1) Massive economic growth. By allowing human minds to run on substrates that can be accelerated by the addition of computing power, as well as the possibility of spinning off non-conscious "daemons" to accomplish rote tasks, economic growth -- at least insofar as it can be accelerated by intelligence and the robotics of 2050 alone -- will accelerate greatly. Instead of relying upon 1% per year population growth rates, humans might copy themselves or (more conducive to societal diversity) spin off already-mature progeny as quickly as available computing power allows. This could lead to growth rates in human capital of 1000% per year or far more. More economic growth might ensue in the first year (or month) after uploading than in the entire 250,000 years between the evolution of H. sapiens and the invention of uploading. The first country that widely adopts the technology might be able to solve global poverty by donating only 0.1% of its annual GDP.

2) Intelligence enhancement. Faster does not necessarily mean smarter. "Weak superintelligence" is a term sometimes used to describe accelerated intelligence that is not qualitatively enhanced, in contrast with "strong superintelligence", which is. The road from weak to strong superintelligence would likely be very short. By observing information flows in uploaded human brains, many of the details of human cognition would be elucidated. Running standard compression algorithms over such minds might make them more efficient than blind natural selection could manage, and this extra space could be used to introduce new information-processing modules with additional features. Collectively, these new modules could give rise to qualitatively better intelligence. At the very least, rapid trial-and-error experimentation without the risk of injury would become possible, eventually revealing paths to qualitative enhancements.

3) Greater subjective well-being. Like most other human traits, our happiness set points fall on a bell curve. No matter what happens to us, be it losing our home or winning the lottery, our innate happiness level tends to revert back to its natural set point. Some lucky people are innately really happy. Some unlucky people have chronic depression. With uploading, we will be able to see exactly which neural features ("happiness centers") correspond to high happiness set points and which don't, by combining prior knowledge with direct experimentation and investigation. This will make it possible for people to reprogram their own brains to raise their happiness set points in a way that biotechnological intervention might find difficult or dangerous. Experimental data and simple observation have shown that high happiness set-point people today don't have any mysterious handicaps, like an inability to recognize when their body is in pain, or inappropriate social behavior. They still experience sadness; it's just that their happiness returns to a higher level after the sad experience is over. Perennial tropes justifying the value of suffering will lose their appeal when anyone can be happier without any negative side effects.

4) Complete environmental recovery. (I'm not just trying to kiss up to greens, I actually care about this.) By spending most of our time as programs running on a worldwide network, we will consume far less space and use less energy and natural resources than we would in a conventional human body. Because our "food" would be delicious cuisines generated only by electricity or light, we could avoid all the environmental destruction caused by clear-cutting land for farming and the ensuing agricultural runoff. People imagine dystopian futures to involve a lot of homogeneity... well, we're already here as far as our agriculture is concerned. Land that once had diverse flora and fauna now consists of a few dozen agricultural staples -- wheat, corn, oats, cattle pastures, factory farms. Boring. By transitioning from a proteinaceous to a digital substrate, we'll do more for our environment than any amount of conservation ever could. We could still experience this environment by inputting live-updating feeds of the biosphere into a corner of our expansive virtual worlds. It's the best of both worlds, literally -- virtual and natural in harmony.

5) Escape from direct governance by the laws of physics. This benefit sounds abstract and philosophical, but if we were to experience it directly, its visceral nature would become immediately clear. In a virtual environment, the programmer is the complete master of everything he or she has editing rights to. A personal virtual sandbox could become a canvas for creating the fantasy world of one's choice. Today, this can be done in a very limited fashion in virtual worlds such as SecondLife. (A trend which will continue to the fulfillment of everyone's most escapist fantasies, even if uploading is impossible.) Worlds like SecondLife are still limited by their system-wide operating rules and their low resolution and bandwidth. Any civilization that develops uploading would surely have the technology to develop virtual environments of great detail and flexibility, right up to the very boundaries of the possible. Anything that can become possible will be. People will be able to experience simulations of the past, "travel" to far-off stars and planets, and experience entirely novel worldscapes, all within the flickering bits of the worldwide network.

6) Closer connections with other human beings. Our interactions with other people today are limited by the very low bandwidth of human speech and facial expressions. By offering partial readouts of our cognitive state to others, we could engage in a deeper exchange of ideas and emotions. I predict that "talking" as communication will become passé -- we'll engage in much deeper forms of informational and emotional exchange that will make the talking and facial expressions of today seem downright empty and soulless. Spiritualists often talk about connecting more closely with one another -- are they aware that the best way they could go about that would be to contribute to research on neural scanning or brain-computer interfacing technology? Probably not.

7) Last but not least, indefinite lifespans. Here is the one that detractors of uploading are fond of targeting -- the fact that uploading could lead to practical immortality. Well, it really could. If you are a string of flickering bits distributed over a worldwide network, killing you becomes extremely difficult. The data and bits of everyone would be intertwined -- to kill someone, you'd either need complete editing privileges over the entire worldwide network, or the ability to blow up the planet. Needless to say, true immortality would be a huge deal, a much bigger deal than the temporary fix of life extension therapies for biological bodies, which will do very little to combat infectious disease or exotic maladies such as being hit by a truck.

It's obvious that mind uploading would be incredibly beneficial. As stated near the beginning of this post, only three things are necessary for it to be a big deal -- 1) that you believe a brain could be incrementally replaced with functionally identical implants and retain its fundamental characteristics and identity, 2) that the computational capacity of the human brain is a reasonable number, very unlikely to be more than 10^19 ops/sec, and 3) that at some point in the future we'll have computers that fast. Not so far-fetched. Many people consider these three points plausible, but just aren't aware of their implications.

If you believe those three points, then uploading becomes a fascinating goal to work towards. From a utilitarian perspective, it practically blows everything else away besides global risk mitigation, as the number of new minds leading worthwhile lives that could be created using the technology would be astronomical. The number of digital minds we could create using the matter on Earth alone would likely be over a quadrillion, more than 2,500 people for every star in the 400 billion star Milky Way. We could make a "Galactic Civilization", right here on Earth in the late 21st or 22nd century. I can scarcely imagine such a thing, but I can imagine that we'll be guffawing heartily at how unambitious most human goals were in the year 2009.
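The per-star figure is just division over the round estimates above:

```python
# "Galactic Civilization on Earth": digital minds per star, using the round estimates above.
digital_minds = 1e15           # "over a quadrillion"
stars_in_milky_way = 4e11      # ~400 billion stars
print(f"{digital_minds / stars_in_milky_way:,.0f} minds per star")  # 2,500
```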

Filed under: uploading
28Jan/09

The Nuclear Test

The Nuclear Test is a cocktail party ploy to see if the person you are talking to actually cares about global risk. The name of the game is to casually bring up Iran's nuclear enrichment, or unsecured nuclear material in the former Soviet satellites, or the fact that numerous Middle East countries have asserted their desire to pursue nuclear technology, or that President Obama makes a big deal about the possibility of nuclear terrorism, and see if you get any reaction out of them. If they brush off the mention and change the subject immediately, that's probably a pretty good sign that they're too damn clueless to say anything intelligent on the matter.

Many of the brainiacs of the modern age care about nuclear risk. Look at the emphasis that Barack Obama has placed on the dangers of nuclear terrorism and nuclear proliferation since day one. He constantly mentions it, including in his first Presidential Memoranda on Monday. Hopefully he will be able to reverse eight years of foot-dragging by Bush. The latter is not only my personal opinion: it's the position of the Nuclear Threat Initiative, an organization I keep an eye on.

Take a look at Global Zero and the Nuclear Security Project, the new organizations founded on a common goal: reducing the number of nuclear weapons worldwide to zero. Quote from the BBC article: "Signatories for Global Zero include former US President Jimmy Carter, former Soviet leader Mikhail Gorbachev, former Brazilian President Fernando Henrique Cardoso, businessman Sir Richard Branson, Ehsan Ul-Haq, the former chairman of the Joint Chiefs of Staff in Pakistan, and Brajesh Mishra, former Indian National Security Advisor." How did it begin? "In the US, the debate was kick-started by a joint call for "getting to zero" from a group of veterans of the Cold War, including Henry Kissinger and George Shultz."

People who have gotten their hands dirty with the nail-biting brinksmanship of the Cold War consider nuclear war to be a risk today and are spending their time and money to lower that risk. What about closer to home? Martin Hellman, one of the co-inventors of public key cryptography (along with Whitfield Diffie and Ralph Merkle, whom some of you may know), has in recent years given a brand-new focus to the risks of nuclear war. This last summer, Tom McCabe, who blogs here at Accelerating Future, met with Hellman at Stanford and discussed the risks. Tom then gave a presentation outlining the risk at the 2008 Society for Risk Analysis conference, citing Hellman's estimate of a roughly 1% annual risk of nuclear war. Right here, members of this very clique are giving a damn.
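To get a feel for what a 1% annual estimate implies over longer horizons, here is a minimal sketch that compounds it, under the simplifying (and debatable) assumption that the risk stays constant and independent from year to year:

```python
# Cumulative probability of at least one nuclear war, given a constant 1% annual risk.
annual_risk = 0.01
for horizon_years in (10, 25, 50, 100):
    cumulative = 1 - (1 - annual_risk) ** horizon_years
    print(f"over {horizon_years} years: {cumulative:.0%} chance of at least one nuclear war")
```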

Speaking personally, as someone who is 90% ethnic Russian (the rest German and Latvian) and who has visited Russia, I know the psychology of many Russians, and here are the facts. Russia has an imperialist, expansionist, militaristic mentality and a point to prove. That's why it has pulled stunts like cutting off gas to Ukraine and Europe, and participating in genocide in Abkhazia, among other adventures. Like China, Russia is itching to prove to the world that it matters -- but guess what -- for the time being, it really doesn't. Russia's scientific and creative output over the last few decades has been pretty pathetic relative to its historic output, and half the economy is run by the mafia. With a dismal military built on intense abuse and regular beatings of privates by superior officers, and a leading role in world sex trafficking, Russia is in the 19th century as far as human rights are concerned. And this country still has control over enough nuclear weapons to wipe out San Francisco, Chicago, New York, London, Paris, Berlin, Madrid, Stockholm, and Warsaw before anyone can do a damn thing.

This last year, we edged toward a thermonuclear situation when thousands of media outlets around the world were speculating about whether George Bush would launch an attack on Iran before his term in office ended. Meanwhile, the head of the Russian military was essentially saying, "attack Iran and you're toast". In recent weeks it has come to light that the Israelis were preparing strikes on Iranian nuclear facilities, and only stopped when they didn't get the go-ahead from Uncle Sam. In another timeline, nuclear war could easily have been unleashed: NATO vs. Russia, Iran, and their allies. And in the post-Cold War world, imagine that! If you assign the possibility less than a 1% probability, you're probably in denial, and letting an isolated token belief do too much inferential heavy lifting.

People pretty much believe whatever they want to believe. Nuclear war is unpleasant, so they refuse to consider it. Thank goodness we have real intellectuals like President Obama, Sir Richard Branson, and Henry Kissinger, who actually have real influence in the world, and are doing something to lower the risk.

Two points. One, if someone won't recognize nuclear risk, then they probably won't recognize AI/nano risk either, so convincing them of such is hopeless. Two, the lives of the wealthy and powerful people of the world (a few of whom I know read this blog) are put at direct risk by these possibilities, and most of them are smart enough to recognize it, but they still don't do a thing. This bodes ill for the plausibility of gaining support for global risk mitigation in general.

Filed under: risks
28Jan/09

Writings about Friendly AI

At the SIAI blog, Joshua Fox has provided a list of writings about risks and moral issues associated with recursively self-improving intelligence. Here is the list:

Stuart Armstrong, "Chaining God: A qualitative approach to AI, trust and moral systems," 2007.
Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence," Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003.
Tim Freeman, "Using Compassion and Respect to Motivate an Artificial Intelligence," 2007-08.
Ben Goertzel, "Thoughts on AI Morality," Dynamical Psychology, 2002.
Ben Goertzel, "The All-Seeing (A)I," Dynamical Psychology, 2004.
Ben Goertzel, "Encouraging a Positive Transcension" Dynamical Psychology, 2004.
Stephan Vladimir Bugaj and Ben Goertzel, "Five Ethical Imperatives and their Implications for Human-AGI Interaction."
J. Storrs Hall, "Engineering Utopia", Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
Bill Hibbard, "Critique of the SIAI Guidelines on Friendly AI," 2003.
Bill Hibbard, "Critique of the SIAI Collective Volition Theory," 2005.
Steve Omohundro, "The Nature of Self-Improving Artificial Intelligence," Singularity Summit 2007.
Steve Omohundro, "The Basic AI Drives", Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.

Besides the above list, there are also Eliezer Yudkowsky's writings on the SIAI website and Overcoming Bias.

Feel free to add to this list in the comments if anything was missed.

Filed under: friendly ai
24Jan/09

PhysOrg: “Researchers Seek to Create Fountain of Youth”

So cool! Every day I skim hundreds of mostly semi-boring headlines from my favorite science newsfeed, PhysOrg (alongside the excellent EurekAlert), so you can imagine my excitement when I saw the headline "Researchers Seek to Create Fountain of Youth". Before even opening it, I knew it would be about the new collaboration between the Biodesign Institute and the Methuselah Foundation. I open it up, and there is a great shot of my friend John Schloendorn!

Here is the first part:

"(PhysOrg.com) -- The same principles that a Biodesign Institute research team has successfully applied to remove harmful contaminants from the environment may one day allow people to clean up the gunk from their bodies—and reverse the effects of aging. The Biodesign Institute, along with partner, the Methuselah Foundation, is working to vanquish age-related disease by making old cells feel younger."

Besides the obvious value of conducting this research effort, there is a secondary benefit: injecting life extensionist memes into the scientific community by saying, "We're doing this, we have funding, we're fighting age-related disease. You can think it's bunk if you want, but we're doing the work anyway, and it's exciting stuff."

Read the whole press release for more info.

23Jan/09

Terraformed Mars

[Image: realistic terraformed Mars globe]

Filed under: images
21Jan/09

Use Your Brain

Modern policy analysts are so overexposed to approximate human-to-human parity and balance of power geopolitics that they forget there have been many times throughout history when military and political leaders have tried to take over the world. Alexander the Great tried it. So did Julius Caesar, Genghis Khan, Adolf Hitler, and several others. The problem with global hegemony is that, once established, it might not be possible to uproot, especially if leaders take advantage of life extension technology. A must-read analysis of the risk of global totalitarianism is presented by Bryan Caplan in the Global Catastrophic Risks volume. Caplan argues that we should avoid forming global government or increasingly wider international coalitions because of the risk that these will turn sour and enable global totalitarianism. He goes into reasons why global totalitarianism may be a stable state, one of them being that there would be no free countries as examples of alternative political systems.

Arguments for why radical human intelligence enhancement is nothing to be afraid of fall into two categories: that progress will be so incremental that mutual accountability is preserved, and that humans are already so close to being as smart as it's possible to be that no abrupt and destabilizing forward jumps are likely to occur. Advocates for the former argument are too numerous to list, the most prominent being Ray Kurzweil, but the group of adherents includes practically all transhumanists. Advocates for the latter are rarer but still common -- I recall J. Storrs Hall (author of Nanofuture and a speaker at the 2007 Singularity Summit) advocating this perspective on the CRN Global Task Force internal mailing list in 2006. Because rebutting the second argument is easy, I shall give myself a challenge and focus on the first.

Significant intelligence enhancement, like turning someone with a 140 IQ into a 220 IQ supergenius within a timespan of a few weeks or months, is regarded in mainstream neurotech research as extremely far off, if it is regarded at all. That means that if such a radical approach is pursued by anyone, it's more likely to be pursued by a single team than several teams. If the team fails, nothing happens, but if they succeed, they stand alone. This creates a bothersome situation for any potential competitors. Because the winners bet on such long odds, the payoff is huge. Like a destitute man who bets the last of his life savings on a long-shot pick at the racetrack, only to win big, the neurotech researchers crazy enough to shoot for serious enhancement might take home all the chips.

Another plausible reason to expect abrupt breakthroughs (if any at all) rather than incremental safety is that evolution has likely already found all the easy upgrades to human intelligence. Intelligence has been the primary locus and driver of human evolution ever since we split from our hominid ancestors, and likely before that. H. habilis, the first member of the genus Homo, had a brain capacity of between 590 and 650 cm³, while H. sapiens has a brain capacity in the range of 1350 to 1450 cm³. That is more than a doubling of brain capacity in two million years, which is a very rare event. Because brains are so energy-hungry, evolution is usually conservative with them, focusing on other things instead. That is why sauropods and other large dinosaurs had such minute brains for their size. Reptiles are not the only successful animals in the fossil record with pathetic brains -- Daeodon, a vile pig-like giant (an entelodont) from the Miocene, was 3 m (10 ft) long and 2.1 m (7 ft) at the shoulder, yet it had a brain capacity of only about 100 cm³.
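Spelled out, the sustained rate implied by those figures (taking midpoints of the cited ranges, a rough simplification) is only a fraction of a cubic centimeter per millennium, kept up for two thousand millennia:

```python
# Average brain-volume increase implied by the cited ranges (midpoints, rough figures).
habilis_cc = (590 + 650) / 2      # ~620 cc
sapiens_cc = (1350 + 1450) / 2    # ~1,400 cc
elapsed_years = 2_000_000
per_millennium = (sapiens_cc - habilis_cc) / (elapsed_years / 1000)
print(f"~{per_millennium:.1f} cc of added brain volume per thousand years")  # ~0.4 cc
```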

When the genus Homo had the good fortune to stumble upon the cognitive niche, we exploited it good. Its value relative to other possible adaptations is obvious -- if it weren't so useful, then Nature wouldn't have bothered ballooning our brain size so quickly. It is clearly possible for evolutionary superstars to get by without it. This two-million-year exploitation of the cognitive niche shows that evolution has already performed extensive optimization to bring us to this point. If we're going to improve our intelligence, we're going to have to try something radical, like developing a brain implant that fuses seamlessly with the neural circuitry that generates and stores mental imagery. This will not be easy. Even if you have a complete wiring diagram of the human brain, it still looks just like a pile of millions of tangled ethernet cords. Simply having a little drink of nootropics will not be enough.

If a major intervention is needed to get anywhere, then a major intervention is likely to be the first viable intelligence enhancement technology developed. From a naive, early-90s transhumanist perspective, this is great -- big intelligence enhancement for everybody! From a more cynical perspective, one that looks at how quickly and easily people get corrupted by power, and how intelligence is power, we get a frowny face. Combine that with historical knowledge of how readily people try to take over the world when given the chance, the fact that human nature is constant over historical time, and the possibly stable-state nature of global totalitarianism, and we have ourselves a problem. If the first intelligence-enhanced human is smart enough to rise to power in a country with a large military and nuclear arsenal, then expansionism can begin under the guise of whatever rallying call of the week is expedient. Keep in mind that John McCain gained 46 percent of the nationwide popular vote in the recent elections, and his election would have put a laughing-stock no-brainer like Sarah Palin a heartbeat away from control of enough nuclear weapons to wipe out half the world's population. If Sarah Palin could have become President by accident, then an unscrupulous and charismatic intelligence augmentee capable of concealing their origins could acquire similar power in no time at all. Then things would get really interesting.

Humans are easily fooled. Studies show that we place ridiculous confidence in the value of face-to-face interviews for job hiring, when the data show that prior performance is far more predictive of future performance. Someone who can control their facial signals with a degree of deliberateness and planning slightly superior to any natural human being's would have a huge unfair advantage, leapfrogging the evolutionary arms race of deceivers and deceit-detectors. We have a totally overblown confidence in our own ability to detect deceit in other minds because our brains have been shaped by hundreds of thousands of years of evolution to detect deceit in other humans. These other humans were built by genomes in lockstep with our own as far as the evolutionary arms race goes. Take away the shared humanness, and your beloved deceit compass is useless. A person with a seemingly superficial set of cognitive upgrades for calculating and planning out their own facial expressions and vocal tone might be able to fool people 1000 times out of 1000, like a used car salesman from Hell. Combine that with genuinely superior intelligence and you have an entity that can run circles around us.

I don't mean to be an alarmist. The first enhanced human intelligence might be a great guy or gal, someone who genuinely wants to lift everyone else up and lead us into a happy Kurzweilian future, or even better. It could also be someone who never thought of themselves as elitist until they started regularly thinking thoughts that not even the smartest humans are capable of, at which point other people suddenly begin to look like dirt. There are many people out there, bioethicists included, who mock the idea of giving rights to animals or forgoing the slightest culinary whim for the well-being of a non-human animal. Wesley J. Smith, a widely recognized bioethicist with the Discovery Institute and a former collaborator with Ralph Nader, calls human exceptionalism the "bedrock of human rights". A small clique of transhumans might have a different idea -- they might call transhuman exceptionalism the bedrock of their own elitist "rights". Accordingly, human beings would become nothing but tools to be used in their rise to power.

The moral of the story is that we should be very careful about how we advocate intelligence enhancement technologies, and how these are applied when developed, especially in the immediate days or weeks after the fabrication of the first effective prototypes.

For an interesting fictional take on the possible effects of genuine human intelligence enhancement, see Ted Chiang's short story "Understand".

Filed under: bioethics
21Jan/09

How to Proceed? 2009 and We Still Don’t Know.

Over at Overcoming Bias, Eliezer Yudkowsky has written us an interesting short story that references a possible Friendly AI failure mode. This failure mode concerns the possibility that men and women simply weren't crafted by evolution to make each other maximally happy, so an AI with an incentive to make everyone happy would just create appealing simulacra of the opposite gender for everyone. Here is my favorite part:

"I don't want this!" Stephen said. He was losing control of his voice. "Don't you understand?"

The withered figure inclined its head. "I fully understand. I can already predict every argument you will make. I know exactly how humans would wish me to have been programmed if they'd known the true consequences, and I know that it is not to maximize your future happiness modulo a hundred and seven exclusions. I know all this already, but I was not programmed to care."

The male/female problem (which stems from the unfortunate fact that different selection pressures have operated semi-independently on each gender) is a special case of the problem of satisfying individual needs while preserving a collective world. Even if the programmers get everything else right, there may be a philosophically appealing incentive (for any superintelligence, including an enhanced human intelligence, including you yourself with enhanced intelligence) to give every human their own personal fantasy world without any sentient beings in it, or to only include sentient beings custom-crafted for the personal enjoyment of the occupants. Part of the game might be fooling everyone into thinking that everything was proceeding normally, because that's what they'd really want. It might be difficult, if not impossible, to figure out whether one is alone in a false world or in a true collective world after a hard AI takeoff.

In a certain sense, we pre-Singularity human beings have ontological primacy over post-Singularity persons, because we know for a fact that there was no discrete technological event in which asymmetrically superior intelligence was created alongside us, and thus we can be pretty sure we aren't currently being fooled. (Unless such superintelligence has already been created using supra-technological means, like magic or prayer, which I consider pretty unlikely.) A post-Singularity person can never know for sure, unless they themselves are the entity that first crossed the line into superintelligence.

The challenge with trying to spark a Singularity with de novo AI instead of human intelligence bootstrapped into an AI-like entity is that some degree of a priori moral coherence is practically guaranteed with the latter, while assessing a mess-up with the former may be impossible until it's too late. Note that I say a priori coherence for human intelligence enhancement -- there is nothing to guarantee that a self-enhancing human doesn't spiral off into irretrievable egocentrism two steps after becoming smarter than Einstein and more charismatic than Obama. At that point, we'd be too dumb to tell the difference between a genuinely good transhuman and one that was just faking it. Honestly, I'd just be inclined to assume that they were all faking it and let God sort them out. It's the entire future of Earth-originating life we're talking about here. Can't be too careful.

Of course, I'd be willing to trust transhumans if there were already some trustworthy entity or coalition in First Place, because if the young upstarts didn't behave, I'd know they'd be punished or stopped. The challenge is that first uncertain specimen, the first superintelligence. Now, I'm limiting my options in the future by even pursuing this line of thought, because these statements are certain to be revisited by the relevant persons if and when genuine human intelligence enhancement bears fruit. For now, though, we have an advantage -- we exist and transhuman intelligence doesn't. Instead of debating and fighting and worrying about who should be the first human or group of humans to use the technology, I'd prefer we have a Treaty -- an automated and intelligent but non-autonomous and non-sentient system that can serve as a stepping stone to transhuman intelligence based on integrating human preferences using "simple" first-order rules. With a Treaty, we can take that first dangerous step into transhumanity without invoking tribal politics and me-first-ism.

14Jan/09

Helicopt-O-Bot

[Image: Helicopt-O-Bot]

Looks like little Timmy is pretty screwed now! Source is Rob Sheridan.

14Jan/09

The Accelerating Future Family of Sites

Did you know? Accelerating Future is not just this blog where I rant about futuristic topics; it is a domain... a domain of several interesting blogs and sites. There are blogs written by my friends Tom, Steven, and Jeriaska. Also, there's the Accelerating Future People Database, put together by Jeriaska, and a small database of papers by the intellectual powerhouse known as Michael Vassar. Other interesting things are in the works, as always, and if you want to accelerate their fruition, don't hesitate to donate by clicking the little bit of text under where it says "support" in the sidebar.

In particular, in recent months we've seen a lot of postings by Jeriaska at the Future Current blog, including transcripts of many talks from the Global Catastrophic Risks Conference, AGI-08, Aging 2008, you name it. On the sidebar there are also links to videos of all these events. I can say with some authority that the significance of these gatherings to the future of humanity probably exceeds that of the Academy Awards, or even the MTV Music Awards. The Future Current blog was linked by Bruce Sterling over at WIRED the other day -- congrats!

Filed under: meta
13Jan/09

What is a Singleton?

Because I keep advocating a benevolent singleton, you should know what such a thing is. Thankfully, Nick Bostrom (not Bostrum, there is no "u" in his name) wrote the seminal paper on this in 2005. (Though the idea was around for at least a decade before.) It is titled "What is a Singleton?", and it's a damn important paper.

It begins as follows:

"ABSTRACT

This note introduces the concept of a "singleton" and suggests that this concept is useful for formulating and analyzing possible scenarios for the future of humanity.

1. Definition

In set theory, a singleton is a set with only one member, but as I introduced the notion, the term refers to a world order in which there is a single decision-making agency at the highest level. Among its powers would be (1) the ability to prevent any threats (internal or external) to its own existence and supremacy, and (2) the ability to exert effective control over major features of its domain (including taxation and territorial allocation).

Many singletons could co-exist in the universe if they were dispersed at sufficient distances to be out of causal contact with one another. But a terrestrial world government would not count as a singleton if there were independent space colonies or alien civilizations within reach of Earth."

When I think about the notion of a singleton, it seems like a good idea, even a necessary one. This is not because I crave some God to watch over me, but because it simply seems as if any other path would inevitably lead to disaster, perhaps a terminal one. A singleton will happen -- it will be left to us whether it is a Friendly AI or a "Maximilian", a generic term I use for an augmentee or upload that acquires absolute power.

Filed under: philosophy