Hellman’s Nuclear Weapons Paper

Most people are reluctant to discuss major risks like nuclear war because they are not intellectually sophisticated enough to contemplate such a disturbing possibility in an objective manner. They may not even be consciously afraid, but still immediately twitch away from contemplating the subject due to a mostly subconscious emotional reaction. They may also place excessive faith in the doctrine of Mutually Assured Destruction, even though the myriad ways in which this scenario could break down are thoroughly familiar to defense analysts.

To help people come to terms with this reality, Martin Hellman, Professor Emeritus of Electrical Engineering at Stanford and one of the inventors of public-key cryptography, wrote a piece last July titled “Soaring, Cryptography and Nuclear Weapons”. The paper approaches the issue of nuclear war risk from the perspective of something less threatening: gliding. I suggest you check it out.

For a concurring view, see former Defense Secretary Robert McNamara’s “Apocalypse Soon” in Foreign Policy magazine. Here are a couple of quotes:

“On any given day, as we go about our business, the president is prepared to …


Invasion of the Worm Robots

Consider this: a worm robot that burrows through the top layer of soil and converts it into additional modular segments of itself as quickly as possible. A worm with a 1 cm maw (a cross-section of about 0.785 cm²) that tunnels through a meter of earth every hour would sweep roughly 78.5 cc of soil per hour, or 1,884 cc (115 cu in) per day. With a conversion efficiency of just 1%, that yields about 0.785 cc of usable material per hour; assuming 7.85 cc is needed to build one robotic segment 1 cm long, we get a growth rate of 0.1 cm per hour, or 2.4 cm (1 in) per day. Nothing shocking, really, but the numbers are contrived to be conservative. If the worms could divide (which would be possible if each segment, or a short row of segments, can be self-sustaining), then exponential replication could quickly overwhelm an ecosystem even if the growth rate is relatively slow. I doubt many predators would be interested in consuming a robot.
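As a sanity check, the arithmetic above fits in a few lines of Python. This is a rough sketch using the contrived conservative parameters from the paragraph; the 10 cm starting length in the doubling-time step is my own hypothetical addition, not a figure from the original estimate.

```python
import math

# Contrived, conservative parameters from the estimate above
maw_diameter_cm = 1.0        # diameter of the worm's maw
tunnel_cm_per_hour = 100.0   # 1 meter of tunneling per hour
efficiency = 0.01            # 1% of swept soil becomes robot material
cc_per_segment = 7.85        # soil volume needed per 1 cm segment

cross_section_cm2 = math.pi * (maw_diameter_cm / 2) ** 2        # ~0.785 cm^2
swept_cc_per_day = cross_section_cm2 * tunnel_cm_per_hour * 24  # ~1,884 cc
growth_cm_per_day = (cross_section_cm2 * tunnel_cm_per_hour
                     * efficiency / cc_per_segment) * 24        # ~2.4 cm

# Hypothetical: if a 10 cm worm splits in two whenever it doubles its
# length, its doubling time is 10 cm / 2.4 cm-per-day, about 4.2 days --
# a year of unchecked doubling would be 2**(365/4.2), roughly 10**26 worms.
doubling_days = 10.0 / growth_cm_per_day

print(swept_cc_per_day, growth_cm_per_day, doubling_days)
```

Even with deliberately slow per-worm growth, the division step is what makes the scenario alarming: the threat comes from the exponent, not the 2.4 cm per day.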

Why brainstorm worm robots? Well, the worm motif seems very popular in evolution, and …


What are the Benefits of Mind Uploading?

Universal mind uploading, or universal uploading for short, is the concept, by no means original to me, that the technology of mind uploading will eventually become universally adopted by all who can afford it, similar to the adoption of modern agriculture, hygiene, or living in houses. The concept is rather infrequently discussed, due to a combination of 1) its supposedly speculative nature and 2) its “far future” time frame.

Before I explore the idea, let me give a quick description of what mind uploading is and why the two roadblocks to its discussion are invalid. Mind uploading would involve simulating a human brain in a computer in enough detail that the “simulation” becomes, for all practical purposes, a perfect copy and experiences consciousness, just like protein-based human minds. If functionalism is true, as many cognitive scientists and philosophers correctly believe, then all the features of human consciousness that we know and love — including all our memories, personality, and sexual quirks — would be preserved through the transition. By simultaneously disassembling the protein brain as the …


The Nuclear Test

The Nuclear Test is a cocktail party ploy to see if the person you are talking to actually cares about global risk. The name of the game is to casually bring up Iran’s nuclear enrichment, or unsecured nuclear material in the former Soviet satellites, or the fact that numerous Middle East countries have asserted their desire to pursue nuclear technology, or that President Obama makes a big deal about the possibility of nuclear terrorism, and see if you get any reaction out of them. If they brush off the mention and change the subject immediately, that’s probably a pretty good sign that they’re too damn clueless to say anything intelligent on the matter.

Many of the brainiacs of the modern age care about nuclear risk. Look at the emphasis that Barack Obama has placed on the dangers of nuclear terrorism and nuclear proliferation since day one. He mentions it constantly, including in his first Presidential Memorandum on Monday. Hopefully he will be able to reverse eight years of foot-dragging by Bush. The latter is not only …


Writings about Friendly AI

At the SIAI blog, Joshua Fox has provided a list of writings about risks and moral issues associated with recursively self-improving intelligence. Here is the list:

- Stuart Armstrong, “Chaining God: A qualitative approach to AI, trust and moral systems,” 2007.
- Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence,” Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003.
- Tim Freeman, “Using Compassion and Respect to Motivate an Artificial Intelligence,” 2007–08.
- Ben Goertzel, “Thoughts on AI Morality,” Dynamical Psychology, 2002.
- Ben Goertzel, “The All-Seeing (A)I,” Dynamical Psychology, 2004.
- Ben Goertzel, “Encouraging a Positive Transcension,” Dynamical Psychology, 2004.
- Stephan Vladimir Bugaj and Ben Goertzel, “Five Ethical Imperatives and their Implications for Human-AGI Interaction.”
- J. Storrs Hall, “Engineering Utopia,” Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Vol. 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and …


PhysOrg: “Researchers Seek to Create Fountain of Youth”

So cool! Every day I skim hundreds of mostly semi-boring headlines from my favorite science newsfeed, PhysOrg (alongside the excellent Eurekalert), so you can imagine my excitement when I saw the headline “Researchers Seek to Create Fountain of Youth”. Before even opening it, I knew it would be about the new collaboration between the Biodesign Institute and the Methuselah Foundation. I opened it up, and there was a great shot of my friend John Schloendorn!

Here is the first part:

“(PhysOrg.com) — The same principles that a Biodesign Institute research team has successfully applied to remove harmful contaminants from the environment may one day allow people to clean up the gunk from their bodies—and reverse the effects of aging. The Biodesign Institute, along with partner, the Methuselah Foundation, is working to vanquish age-related disease by making old cells feel younger.”

Besides the obvious value of conducting this research effort, there is a secondary benefit: injecting life-extensionist memes into the scientific community by saying, “We’re doing this, we have funding, we’re fighting …


Use Your Brain

Modern policy analysts are so steeped in assumptions of rough human-to-human parity and balance-of-power geopolitics that they forget there have been many times throughout history when military and political leaders have tried to take over the world. Alexander the Great tried it. So did Julius Caesar, Genghis Khan, Adolf Hitler, and several others. The problem with global hegemony is that, once established, it might not be possible to uproot, especially if its leaders take advantage of life extension technology. A must-read analysis of the risk of global totalitarianism is Bryan Caplan’s chapter in the Global Catastrophic Risks volume. Caplan argues that we should avoid forming a global government, or increasingly wide international coalitions, because of the risk that these will turn sour and enable global totalitarianism. He also gives reasons why global totalitarianism could be a stable state, one of them being that there would be no free countries left to serve as examples of alternative political systems.

Arguments for why radical human intelligence enhancement is nothing to be afraid of fall into two categories: that progress will be so …


How to Proceed? 2009 and We Still Don’t Know.

Over at Overcoming Bias, Eliezer Yudkowsky has written an interesting short story that illustrates a possible Friendly AI failure mode: because men and women simply weren’t crafted by evolution to make each other maximally happy, an AI with an incentive to make everyone happy might just create appealing simulacra of the opposite gender for everyone. Here is my favorite part:

“I don’t want this!” Stephen said. He was losing control of his voice. “Don’t you understand?”

The withered figure inclined its head. “I fully understand. I can already predict every argument you will make. I know exactly how humans would wish me to have been programmed if they’d known the true consequences, and I know that it is not to maximize your future happiness modulo a hundred and seven exclusions. I know all this already, but I was not programmed to care.”

The male/female problem (which stems from the unfortunate fact that different selection pressures have operated semi-independently on each gender) is a special case of the problem of …


The Accelerating Future Family of Sites

Did you know? Accelerating Future is not just this blog where I rant about futuristic topics, it is a domain… a domain of several interesting blogs and sites. Blogs written by my friends Tom, Steven, and Jeriaska. Also, there’s the Accelerating Future People Database, put together by Jeriaska, and a small database of papers by the intellectual powerhouse known as Michael Vassar. Other interesting things are in the works, as always, and if you want to accelerate their fruition, don’t hesitate to donate by clicking the little bit of text under where it says “support” in the sidebar.

In particular, recent months have seen a lot of postings by Jeriaska at the Future Current blog, including transcripts of many talks at the Global Catastrophic Risks Conference, AGI-08, Aging 2008, you name it. On the sidebar there are also links to videos of all these events. I can say with some authority that the significance of these gatherings to the future of …


What is a Singleton?

Because I keep advocating a benevolent singleton, you should know what such a thing is. Thankfully, Nick Bostrom (not Bostrum; there is no “u” in his name) wrote the seminal paper on the subject in 2005, though the idea had been around for at least a decade before. It is titled “What is a Singleton?”, and it’s a damn important paper.

It begins as follows:

“ABSTRACT

This note introduces the concept of a “singleton” and suggests that this concept is useful for formulating and analyzing possible scenarios for the future of humanity.

1. Definition

In set theory, a singleton is a set with only one member, but as I introduced the notion, the term refers to a world order in which there is a single decision-making agency at the highest level. Among its powers would be (1) the ability to prevent any threats (internal or external) to its own existence and supremacy, and (2) the ability to exert effective control over major features of its domain (including taxation and territorial allocation).

Many singletons could co-exist in the universe if they …
