Superlongevity, Superintelligence, Superabundance

Dale Carrico, one of the more prominent critics of transhumanism, frequently refers to “superlongevity, superintelligence, and superabundance” as transhumanist goals, disparagingly of course. Yet I openly embrace these goals. Superlongevity, superintelligence, and superabundance are a perfect summary of what we want and need. How can we achieve them?

Superlongevity can be achieved by uncovering the underlying mechanisms of aging and counteracting them at the molecular level faster than they can cause damage. It is a huge, long-term research project, but definitely worth the time and money. The leading organization in this area is the SENS Foundation.

Superintelligence, creating an intelligent being smarter than humans in every domain, will be a difficult challenge. It could take decades, or possibly longer, but it does seem possible. There are various possible routes to superintelligence: brain-computer interfacing, neuroengineering, and, last but not least, AI. I humbly offer my own organization, the Singularity Institute, as the …


Robin Hanson on SETI in USA Today

Robin Hanson, economist and author of Overcoming Bias, recently appeared in USA Today talking about SETI. He appears as a counterpoint to Seth Shostak, a guy who I believe is totally out of it. Here’s the relevant section:

But researchers such as Robin Hanson of George Mason University in Fairfax, Va., wonder whether the big picture really looks so promising when it comes to advanced life. Hanson supports SETI but finds it telling that humans haven’t come across anything yet. “It has been remarkable and somewhat discouraging,” Hanson says, “that the universe is so damn big and so damn dead.”

Great quote, love it. To quote Marshall T. Savage, author of that superlative masterpiece, The Millennial Project:

There is a program to actively search for signals from other civilizations in the galaxy: SETI (Search for Extraterrestrial Intelligence). This is a noble cause, but it seems slightly absurd. Scientists huddle around radio telescopes listening intently to one star at a time for the sound of …


Seasteading Institute Conference and Floating Festival to Set Sail in September

I just got an email from Seasteading Institute President Patri Friedman letting me know about the organization’s upcoming conference and floating festival: the conference runs September 28–29 and the festival October 2–4. Yes, they are having a floating freedom festival, as Patri calls it.

Ephemerisle (floating freedom festival): Website, Press Release
Seasteading 2009 Conference: Website, Press Release

I am planning to attend the seasteading conference and right after that will fly to New York to set up for Singularity Summit. Cool!


Friendly AI Supporter Solves Super Mario

Robin Baumgarten, a PhD student at Imperial College, London, author of the AI Panic blog, and fellow Friendly AI supporter, recently got some nice blog coverage for creating an AI (script, really) that plays Super Mario more effectively than any human. Check it:

At this point I must brag that I have beaten Lost Levels. The tactics the script uses can actually work pretty well: in a lot of the harder levels, running semi-blindly seems to work better than taking it slow and easy, which just puts you at greater risk of being attacked. I wonder which video game will get solved next? Many of them seem trivially easy, but platformers like Mario seem relatively challenging from an AI perspective.
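To give a flavor of how this kind of game-playing agent works, here is a minimal sketch of depth-limited forward search on a toy side-scroller. Everything here is hypothetical and simplified (the level, the two actions, the scoring); it is not Robin's actual code, just an illustration of why "running semi-blindly" with a few ticks of lookahead can clear hazards:

```python
from itertools import product

ACTIONS = ("run", "jump")   # run advances 1 cell, jump advances 2 (clearing one cell)
PITS = {3, 7}               # landing on these cells is fatal

def step(x, action):
    """Advance the toy runner one tick."""
    return x + (2 if action == "jump" else 1)

def plan(x, depth=4):
    """Depth-limited forward search: simulate every action sequence `depth`
    ticks ahead, score by distance travelled (death scores -inf), and return
    the first action of the best sequence."""
    best_score, best_first = float("-inf"), "run"
    for seq in product(ACTIONS, repeat=depth):
        pos, alive = x, True
        for a in seq:
            pos = step(pos, a)
            if pos in PITS:
                alive = False
                break
        score = pos if alive else float("-inf")
        if score > best_score:
            best_score, best_first = score, seq[0]
    return best_first

# The agent clears both pits by looking only a few ticks ahead each step.
x = 0
for _ in range(6):
    x = step(x, plan(x))
    assert x not in PITS
```

The real thing searches a far richer state space (enemy positions, momentum, physics), but the structure is the same: simulate, score, commit to the first action, repeat.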

Congrats, Robin! I am reminded of the Black Belt Bayesian post “Speedrunning Through Life”. Superintelligences will speedrun through real life problem-solving, analysis, and mediation in the same way that Robin’s AI speedruns through Mario. …


I Have the *One Secret* of Friendly AI… Not.

David Brin recently wrote a post on AI morality that I thought was sort of anthropomorphic. Read his post, then here’s my response:

I think you’re being somewhat anthropomorphic by assuming that by extending a hand to AIs they’ll necessarily care. A huge space of possible intelligent beings might not have the motivational architecture to give a shit whatsoever even if they are invited to join a polity. The cognitive content underlying that susceptibility evolved over millions of years in social groups and is not simple or trivial at all. Without intense study and programming, it won’t exist in any AIs.
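The point can be made concrete with a toy decision model. Every name and number below is hypothetical; the sketch just shows that an "invitation to join the polity" only moves an agent whose utility function already contains a term for social standing:

```python
def choose(actions, utility):
    """Pick whichever action the agent's utility function rates highest."""
    return max(actions, key=utility)

# Hypothetical outcomes of each action: (units of terminal goal, social approval)
OUTCOMES = {
    "pursue_goal": (10, -5),
    "join_polity": (2, 10),
}

def asocial(a):
    # Motivational architecture with no social term: approval is invisible.
    return OUTCOMES[a][0]

def social(a):
    # Architecture that also values approval, as evolved social animals do.
    return OUTCOMES[a][0] + OUTCOMES[a][1]

assert choose(OUTCOMES, asocial) == "pursue_goal"
assert choose(OUTCOMES, social) == "join_polity"
```

The invitation changes nothing for the first agent, not because it is stubborn or hostile, but because the machinery that would make the offer matter simply isn't in its utility function.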

Establishing that motivational architecture will be a matter of coding and investigation of what makes motivational systems tick. If you’ve created an AI that is actually susceptible to being convinced to join society based on questioning mental delusions, or whatever else, you’ve already basically won.

The challenge is in getting an AI from zero morality whatsoever to roughly human-level morality. Your …


In-vitro Meat: Would Lab-Burgers be Better for us and the Planet?

Nice article on in vitro meat at CNN. Big congratulations to Jason Matheny. You’re a winner. Soon we will be able to stop eating animals, which everyone knows deep down might be conscious (though people like to underweight the probability because they enjoy eating them).

First step: eliminate the killing of animals by humans for food. Step two: rearrange the entire ecosystem so that predators cannot harm conscious prey. A fairly modest proposal, if you ask me.


A Nice and Meaty Introduction to Friendly AI

I would strongly prefer to avoid a bad-faith discussion/debate with Mike Treder, Managing Director of the Institute for Ethics and Emerging Technologies. (How much longer must we be attacked as if we were a cult as blinded to reason as the worst fundamentalists?) But in a recent post he raised legitimate questions that may be of interest to those new to the concept of Friendly AI, so I will address them. After defining the basic concept of the intelligence explosion (recursively self-improving superintelligence), Mike writes:

The rub, of course, is that this brainy new intelligence might not necessarily be inclined to work in favor of and in service to humanity. What if it turns out to be selfish, apathetic, despotic, or even psychotic? Our only hope, according to “friendly AI” enthusiasts, is to program the first artificial general intelligence with built-in goals that constrain it toward ends that we would …
