Accelerating Future -- Transhumanism, AI, nanotech, the Singularity, and extinction risk.

27Nov/07

Death Race

Getting anything done in this world, almost anything at all, is very expensive. Either you have a venture that makes more money than it consumes (a business), a volunteer effort (often staffed by people lacking motivation, time, resources, and/or experience), a university (funded by tuition and endowments), or that elusive beast, the non-profit, funded by individual and corporate giving.

Usually, the free market does a pretty good job of motivating people to do stuff that other people want. For instance, I can analyze some emerging technology like holographic displays for a company interested in the field, and get some money out of the deal. Maybe that sort of research can get a little boring, but they want it, so they pay me, right?

But the free market economy sometimes fails, or leads to suboptimal outcomes. It satisfies our vices just as much as our virtues. The market for alcohol, tobacco, prostitution, and gambling is enormous. Addiction to these vices costs our society billions of dollars a year in treatment and lost potential. The actuaries can put a number on it, but when you get down to it, these losses are unquantifiable. Unfortunately, governmental nanny attempts at setting things right often blow up in our faces. Hence the libertarians who advocate a completely hands-off approach. The whole situation is a morass.

Just as the free market sometimes encourages the production and purchase of substances and lifestyles that don't represent the best of human activity, it can also inflate the price of essential products or services whenever there is an economic incentive to do so. People in South America (and elsewhere) are suffering because brand-name drugs are being sold at sky-high prices, and governments there are pushing to revoke patents and allow the production of generic versions of these drugs. Tragedies of the Commons occur on a regular basis, to a greater or lesser degree -- the emission of greenhouse gases, for instance. Some of these suboptimal configurations are being curtailed by legislative action, but as mentioned above, the nanny approach often fails.

The Tragedy of the Commons that concerns me most is that of international security. Each nation has an incentive to promote international security only insofar as it protects that nation and, to a lesser degree, its allies. Even more so, each nation has an incentive to develop powerful weapons and large armies to increase its bargaining power in international politics. Even though it isn't widely publicized, we are in the largest arms race in the history of humanity right now, in 2007. Rudy Giuliani has suggested doubling the size of the United States military, and if he gets elected, he could actually take steps toward achieving that. Over $1 trillion is spent on militaries worldwide, half of it in the United States. Even if your job is sweeping a floor, part of your salary goes to building killing machines.
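
To put that last claim in rough perspective, here's a back-of-the-envelope calculation. The federal receipts figure is an assumed round number for the era, not an official statistic:

```python
# Back-of-the-envelope: what fraction of a US federal tax dollar goes to
# the military? All figures are rough, assumed 2007-era round numbers.
world_military_spending = 1.0e12   # ~$1 trillion worldwide (from the post)
us_military_spending    = 0.5e12   # roughly half of it, per the post
us_federal_receipts     = 2.5e12   # assumed total federal tax receipts

share = us_military_spending / us_federal_receipts
print(f"~{share:.0%} of each federal tax dollar goes to the military")
# -> ~20% of each federal tax dollar goes to the military
```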

Contemplating how many resources should be diverted to the military is a Catch-22. On the one hand, a nation should have a large and strong military so that it is taken seriously in bargaining for international treaties and unspoken rules that promote peace. In a very real sense, power can be used for peace, and to deny it is naive. On the other hand, a military buildup forces other countries to follow suit, leading to an arms race from which there is no escape. It's like Lewis Carroll's Red Queen -- we have to keep running just to stay in the same place.

As we inch forward into the 21st century, I and many others foresee new weapons of mass destruction becoming available, weapons that can annihilate every living thing on this planet in weeks or even days. The everyday person has little interest in or concern about such eventualities, but the cognoscenti (or should I say Illuminati?) of academia, business, and government are not idiots, and they see what's coming. If you want to learn more, read "Some Limits to Global Ecophagy" by Robert Freitas.

We are rushing towards an arms race of much greater magnitude than the Cold War. Rather than dumb nukes which are delivered via huge, conspicuous ICBMs, we will have massive clouds of robotic drones which creep, crawl, bounce, slither, and fly into every nook and cranny of the enemy's hardware or even bodies, capable of dismantling them from within. We are already seeing the beginning of it today. Look at the success of the Predator UAV and minesweeping drones. In a couple decades, soldiers will be more like mages, directing and controlling lethal swarm intelligence, than riflemen, using a souped-up slingshot to direct hunks of metal in a straight line.

The danger begins when we hand over more responsibilities and control to the swarm intelligence. Humans are dangerous and sadistic enough, but there is reason to believe that true artificial intelligence will have trouble sympathizing with humans or grasping our moral norms without large amounts of special (expensive and difficult) programming. Moral acumen is not a side dish that comes free of charge with the main meal of intelligence. Our distinct morality was produced by millions of years of evolution in social groups, and contains numerous complex elements, some poorly understood: reading the facial cues of others, modeling others' circumstances and states of mind, projecting others' current and future intentions contingent on different hypothetical actions, vague philosophical questions like "what does a typical human really want?", and so on. For military robotics, these complexities might seem beside the point: won't the robots all be directed by intelligent human operators? But assuming that humans will stay on top forever (intellectually or physically) is a recipe for disaster.

The free market economy is the dominant force in the world. The world's governments, the wealthiest existing entities, want better weapons to give themselves weight in international relations. Without international arms reduction treaties, which can be quite difficult to enact, the arms race persists indefinitely. Even if I have a button that can destroy the opposing country with a single press, what if they destroy that facility before I have a chance to push it? So I must build multiple such facilities, ad infinitum. The only way to avoid slamming into the wall is to put on the brakes ahead of time.
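
This feedback loop is essentially Lewis F. Richardson's classic arms-race model, in which each side's spending grows in proportion to the other's stockpile. A toy version, with the coefficients invented purely for illustration:

```python
# Toy Richardson arms-race model: each nation's arms level grows in
# proportion to its rival's, minus a "fatigue" (cost) term.
# All coefficients are invented for illustration only.
def simulate(steps=50, dt=0.1):
    x, y = 1.0, 1.0                        # initial arms levels of X and Y
    react, fatigue, grievance = 0.9, 0.5, 0.1
    for _ in range(steps):
        dx = react * y - fatigue * x + grievance
        dy = react * x - fatigue * y + grievance
        x, y = x + dt * dx, y + dt * dy
    return x, y

print(simulate())   # reaction outweighs fatigue, so both sides grow without bound
```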

We need more international cooperation, more global unity, but most of all, technologies that are inherently protective rather than aggressive -- so that even if humans remain all too human, things still turn out alright in the end. Friendly artificial intelligence would be the most powerful tool in this category. If we could create AI that is verifiably friendly, not just in some abstract technical way but in ways that are clear as day, even to a child, then we've hit the technological jackpot. The Friendly AI could then invent and apply additional protective technologies for our benefit. We would have little to fear from such an AI morphing out of control, because, hey, it cares about preserving its own integrity even more than we do or ever could.

The question is, how to build a Friendly AI? Besides the huge challenge of developing an artificial intelligence to begin with, there is the additional challenge of "what is friendly supposed to mean?" At this point, I often defer to Nick Bostrom's Maxipok principle: rather than arguing forever on the specifics, I suggest that we attempt to maximize the probability of an OK outcome for everyone. A model that has been suggested before for Friendly AI is that of a prototypical altruist, with certain changes, like the absence of a self-centered goal system. If such an AI were programmed correctly, it would not consider "self" as a moral entity worthy of special treatment, and would truly be concerned with the good of all.
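
To see how maxipok differs from naive expected-value reasoning, here's a minimal sketch; the policies, probabilities, and payoffs are invented purely for illustration:

```python
# Maxipok as a decision rule: choose the policy that maximizes the
# probability of an "OK" outcome, not the highest expected payoff.
# Probabilities and payoffs are invented for illustration.
policies = {                       # policy: [(probability, payoff), ...]
    "reckless": [(0.80, 1000.0), (0.20, -100.0)],
    "cautious": [(0.99,   10.0), (0.01, -100.0)],
}

def expected_value(outcomes):
    return sum(p * v for p, v in outcomes)

def p_ok(outcomes, threshold=0.0):           # P(payoff is non-negative)
    return sum(p for p, v in outcomes if v >= threshold)

print(max(policies, key=lambda k: expected_value(policies[k])))  # -> reckless
print(max(policies, key=lambda k: p_ok(policies[k])))            # -> cautious
```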

Several other models have been proposed for creating Friendly AI, but delving into them is a contentious and philosophically complex project. Before getting distracted by the difficulties of creating such AI, it's worth acknowledging in the abstract whether such AI is worth developing. I definitely believe so. Humans are inherently self-interested -- wouldn't it be nice if all that selfishness could be diluted with agents who are truly altruistic and care about the human race? Then we could possibly avoid shooting ourselves in the foot as this Dangerous Century proceeds. And our ancestors and future selves will thank us for it.

Filed under: risks 138 Comments
12Nov/07

AGI from AI

Cross-posted from SIAI blog.

Will AGI emerge from a preexisting narrow AI field, incrementally improving?

In my opinion, the answer is likely no, but people working in narrow AI like to tell me that their work will eventually give rise to the Friendly AI I want to see.

Should the idea of AGI emerging from narrow AI be dismissed outright? Probably not. Let's say AGI does indeed emerge from AI. If so, what are possible routes?

Can you think of any others? Different paths have different advantages from both an FAI and an AGI perspective. Some of these "narrow" applications, such as Novamente's, are in fact built on an AGI-oriented architecture. Could the first AGI blindside us, by superficially appearing like narrow AI?

Filed under: AI 13 Comments
10Nov/07

Full-Body Haptic Suits

Full-body haptic feedback suits. They're coming -- the question is, what will we use them for? The best paper on the topic appears to be this one. Interesting fact -- the human body's pressure sensors responsible for deep pressure touch, Pacinian corpuscles, can't even tell the difference between pressure and suction, so one can simulate touch simply by using tiny vacuums. Alternatively, miniature actuators could push downwards to create the sensation conventionally. Another, somewhat more advanced way to implement full-body haptics would just be to jack into the brain directly, though I think it could be difficult to justify brain surgery for recreational purposes in the near-term future.
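
To make the vacuum trick concrete, here's a minimal sketch of a suit driver exploiting the corpuscles' indifference to the sign of the skin deformation. The actuator classes are hypothetical stand-ins for real hardware:

```python
# Sketch of a haptic driver: since Pacinian corpuscles can't distinguish
# pressure from suction, only the magnitude of deformation matters, so
# either actuator type can render the same target signal.
class VacuumCell:
    def apply(self, magnitude):       # suction strength, 0.0..1.0
        print(f"vacuum duty cycle {magnitude:.2f}")

class PushActuator:
    def apply(self, magnitude):       # downward push, 0.0..1.0
        print(f"actuator force {magnitude:.2f}")

def render_touch(cell, target_deformation):
    """Drive the cell from a signed target; the sign is irrelevant to the skin."""
    cell.apply(min(abs(target_deformation), 1.0))

render_touch(VacuumCell(), -0.4)      # suction feels the same...
render_touch(PushActuator(), 0.4)     # ...as an equal push
```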

The point of full-body haptic feedback is that it allows virtual communication of touch. I push you in an online game, and you actually feel like you get pushed. To help with the immersion, it would likely be accompanied by convincing VR goggles. We can expect VR goggles before full-body haptic suits because the former is technologically easier. I predict that convincing haptic suits will arrive by 2020.
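
Under the hood, that push would presumably travel as a small network message before being rendered on the recipient's suit. A minimal sketch, with an invented event format:

```python
# Hypothetical wire format for a networked touch event: a push in the game
# becomes a small message rendered as pressure on the recipient's suit.
import json
from dataclasses import dataclass, asdict

@dataclass
class TouchEvent:
    region: str          # body region, e.g. "left_shoulder"
    pressure: float      # normalized 0.0..1.0
    duration_ms: int

def send_push(transport, region, pressure, duration_ms=150):
    event = TouchEvent(region, pressure, duration_ms)
    transport(json.dumps(asdict(event)).encode())   # ship it to the peer

send_push(print, "left_shoulder", 0.6)   # print stands in for a real socket
```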

One of the first applications for such suits will of course be sex -- cybersex with both other humans and NPCs. Will the "haptic resolution" around the groin area be enough? What about moisture? I think that engineers, in their infinite ingenuity, will figure these things out. More realistic cybersex with NPCs will mean that people will be less inclined to get boyfriends and girlfriends IRL. How much less inclined? Don't know, but it's worth thinking about.

Because the sensors would only stimulate the very surface, and not go deep, women might have trouble enjoying virtual sex. Some sort of teledildonics add-on would be necessary, possibly with lube secretions. I'll leave the details to your imagination.

Masseuses might get more business, although the lack of deep stimulation could make it hard to give a convincing massage. Acupressure would be another app. Of course, virtual sports would become much more exciting, although you'd have to stay within a bounded area so you don't accidentally run into a wall.

Perhaps the most interesting application in sports would be virtual fighting -- protected from any serious harm by a haptic suit that serves as the intermediary, novices in martial arts could fight experts without requiring a trip to the hospital afterwards. There could even be "difficulty settings", giving experts a harder whack from a weaker punch. With the right calibration, a little girl could defeat her own father or another adult male. Fights could also include "magic": force projections based on hand gestures or similar.
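
The difficulty-settings idea boils down to scaling the force each fighter feels before the suit renders it. A minimal sketch, with the handicap factors and safety cap invented for illustration:

```python
# "Difficulty settings" for virtual sparring: scale the force a fighter
# feels by a handicap factor, capped so nobody actually gets hurt.
# The factors and safety cap below are invented for illustration.
SAFETY_CAP = 0.8   # max normalized force the suit will ever render

def felt_force(incoming_force, handicap):
    """handicap > 1 makes blows land harder, < 1 softens them."""
    return min(incoming_force * handicap, SAFETY_CAP)

print(felt_force(0.3, 2.5))  # expert feels a weak punch as a hard whack: 0.75
print(felt_force(0.9, 0.5))  # novice feels a hard punch as a light tap: 0.45
```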

The suit would also enable touch sensations impossible in real world environments. For instance, waves of touch going from head to toe. Hood attachments would let the user feel sensations like wind on the face, although some might feel squirmy wearing a full-body suit that covers their head. The more advanced versions, which would probably require molecular manufacturing to make, could even generate heat or cold. It's likely that VR world designers will come up with fascinating new touch sensations we can't imagine here in 2007.

Full-body haptic suits could revolutionize military training, as well as many trades. Want to learn how to be a blacksmith? A virtual agent could likely teach you. The availability of such suits would revolutionize education and entertainment in general, allowing people to "experience" digging for fossils, flying an F-22 (minus g forces), fighting with swords, visiting an "exhibition of touch sensations", etc.

More mundanely than full-body haptic feedback, "haptic datagloves" would also provide a way of telling the computer where your body is in space, allowing that positioning to be reflected in the virtual environment. This could do wonders for making users feel like they're actually "being there" in the VR environment.

Haptic suits could also improve communication for the deaf. In real life, the person could be using sign language, while in the sim an artificial voice provides the appearance of speech -- allowing them to communicate with listeners who don't understand sign language at all.
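
The whole scheme is essentially a pipeline: glove tracking feeding a sign-language recognizer, whose output drives a speech synthesizer in the sim. A sketch, with stand-ins for both the recognizer and the synthesizer:

```python
# Hypothetical pipeline: dataglove tracking -> sign recognition -> synthetic
# voice in the sim, so listeners who don't know sign language hear speech.
def recognize_sign(hand_positions):         # stand-in for a real recognizer
    return "hello"                          # pretend the gesture meant "hello"

def synthesize_speech(text):                # stand-in for a real TTS engine
    print(f'avatar says: "{text}"')

def relay(hand_positions):
    synthesize_speech(recognize_sign(hand_positions))

relay([(0.1, 0.5, 0.2)])                    # one frame of glove coordinates
```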

Can you think of any other possible applications for this technology?

Filed under: futurism 56 Comments
10Nov/07

Yellowstone Caldera Rising

The Yellowstone caldera has moved upwards nine inches over the last three years, a record rate since geologists first began taking measurements in the 1920s. This is the result of a Los Angeles-sized blob of magma that recently rose into the chamber only six miles below the surface. The Yellowstone caldera is an ancient supervolcano. The last time it erupted, 642,000 years ago, it ejected 1,000 cubic kilometers of rock and ash into the air. If that happened in today's world, it would kill millions and cover most of the United States in a layer of ash at least a centimeter thick. The lighter ash would rise into the atmosphere, initiating a volcanic winter and ruining crops worldwide.
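
A quick sanity check on that ash figure: spreading 1,000 cubic kilometers of ejecta evenly over the contiguous United States (roughly 8 million square kilometers) gives an average depth of around a dozen centimeters, so "at least a centimeter" over most of the country is plausible even with very uneven deposition:

```python
# Sanity check: 1,000 km^3 of ejecta spread evenly over the contiguous US.
# Real ashfall would be far thicker near the caldera and thinner far away.
ejecta_km3 = 1_000
us_area_km2 = 8.0e6            # contiguous US land area, roughly

depth_km = ejecta_km3 / us_area_km2
print(f"average depth: {depth_km * 1e5:.0f} cm")   # ~12 cm on average
```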

Calderas rise and fall worldwide all the time without erupting. But the activity in Yellowstone is still concerning. Like a reckless teenager in a sports car, our civilization treats the possibility of its own demise as a complete joke. Yet the right sort of event could knock us flat. Instead of waiting for a disaster to happen, we should prepare in advance to minimize its probability.

I would like to see scientists do a study on the feasibility of using nuclear weapons to initiate a supervolcano eruption. If it looks feasible, park security in Yellowstone should be increased.

Filed under: risks 139 Comments
5Nov/07

Foresight Vision Weekend 2007 Review

This last weekend I attended the Foresight Vision Weekend 2007, which was in the innovative unconference format. Basically, an unconference means that people break into about seven groups for any given hour-long time slot, each group led by a person enterprising enough to write down a topic and tape it to a big paper grid on the wall. We did this in the afternoons. The mornings were more like a conventional conference, with a single star speaker and everyone in one room. Changing the Foresight Vision Weekend into the unconference format was a great idea by Brad Templeton.

This was a special Vision Weekend, and not just due to the unconference format. Unlike those before it, this Vision Weekend was open to everyone for only $90, rather than being closed to all but the Foresight Senior Associates. It was held at Yahoo headquarters, thanks to Chip Morningstar, a Foresight Associate and Yahoo employee. The venue was pretty cool, with a nice view of the empty marshlands that dominate the southern part of the San Francisco Bay.

It was very difficult to choose which sessions to attend. There were many ad hoc sessions on a diversity of topics, nanotechnology being in the minority rather than the majority. Basically, I would have considered it a futurist/transhumanist conference. I've never seen so many cryonics bracelets in one place in my life. Topics included AI, cryonics, the metaverse, rationality, space travel, biotech, science in general, and many more. The demographic was the usual mix between the hip young people, the Silicon Valley programming veterans, the nerdy scientists, and the occasional suit here and there.

Because the groups were relatively small (10-20 people per session), there was more interesting conversation from the audience, in some cases dominating the whole thing. This was both a blessing and a curse: in one session, a know-it-all essentially hijacked the session from the organizer and segued into a ten-minute summary of what "open source" means for the one gal in the group who wasn't familiar with the term. This reminds me of what sometimes happens at normal conferences, when smart alecks use the Q&A session not to ask a question, but to soliloquize at length about their pet topic, boring most everyone to tears in the process. But I only saw this happen once here, and the intimacy gained from the unconference format was, on the whole, a good thing.

At the conference, I was thinking about the contrast between the Singularity Summit I recently went to (900 people, 12 speakers) and the CRN conference (30 people, 12 speakers). I thought that the Singularity Summit was good from a getting-the-attention-of-the-public point of view, and also for giving newbies a taste of the ideas, but I got a lot more out of the CRN conference, already being a "veteran" of futurist conferences.

For the Foresight Unconference, I was pleased to have so many sessions to choose from, though in practice only 2-3 out of the 7 sessions per time slot grabbed my interest. One session I tried to attend, "the Extremely Extreme Future of Nanotech in Architecture", flopped entirely because no one showed up! Out of the 2-3 sessions I was interested in, usually one or two were already being filmed, so I decided to pass on them, knowing I'd see the footage later. Some of the presentations were repeats of Singularity Summit or CRN talks, so I passed on those also. Eric Drexler was conspicuously absent from this conference.

As for standout presentations, Melanie Swan brought a group up to speed on the latest "stuff that matters" in Second Life, including a huge project to educate the public about (non-Drexlerian) nanotechnology, emerging stock markets, and efforts to link objects in Second Life with database info from "the outside". A live model of the weather was mentioned.

Eric Boyd led a discussion about the "sustainable transhumanist lifestyle", which had some really stellar people participating. The general gist of the discussion was, "how should we plan our lives if we can potentially live forever?". I also asked "ignoring our aspirations to extend our lifespans, what other lifestyle choices characterize transhumanists?" Self-modification through technology in the here and now was discussed.

A disappointing session on "when will MNT become reality?" included an audience participant who seriously offered Terence McKenna's 2012 Timewave Zero theory as evidence that the first assembler will be built in 2012. This is idiotic. The facilitator of the discussion said "that's a valid point", which knocked my socks off. We were asked to raise our hands if we thought that the Feynman Grand Prize would not be won by 2012, and I was one of the only people in the room who did. I couldn't believe that either. I really doubt the prize will be won in five years, sorry. At the Lifeboat Foundation, our general consensus is that MNT is likely to be developed between 2015 and 2025, which makes sense to me. I would actually lean towards the mid-to-late portion of that range.

At the last time slot on Sunday, I gave a talk on Technological Armageddon. I was totally unprepared, but it ended up okay. I pointed out that I don't think there are any technologies powerful enough to threaten the entirety of mankind today, but they are coming soon. I said that if humans are the only intelligent species in our corner of space, the difference between a Milky Way full of happy people and a dead Milky Way could depend on the decisions we make in the next few decades. I talked about a hierarchy of risks, from bio to nano to AI/robotics, and how risks from the later categories are progressively harder to discuss openly because they are predicated on more futuristic-sounding versions of these technologies. I did a quick survey and found that, of the 10 people who voted, six are most concerned with bio risks, two with nano risks, and two with AI risks. The talk was filmed, so I'm sure we'll see it online in the not-too-distant future.

For a humorous look at the Foresight unconference, see the Lifeboat Foundation blog. And if you feel so inclined, don't forget to donate! Thanks to everyone who donated to support our Navy meeting which is taking place tomorrow. The call for donations made it to Instapundit and one of the blogs on Wired. We raised enough money to fly in everyone who could make it, thanks to you!