Accelerating Future

Transhumanism, AI, nanotech, the Singularity, and extinction risk.

19Dec/06

Transhumanist Sects

Transhumanism, like any large movement, consists of multiple currents, and many individuals identify with more than one. A short overview of a few, written a couple of years ago by Nick Bostrom, can be found here. In this post, I will present my own classification scheme, covering descriptions and names that Dr. Bostrom didn't include in the Transhumanist FAQ. The sects are listed in rough order of popularity, but please don't take the ordering too seriously - it's loosely based on the number of Google search results for each term.

Transhumanism is unique in its sheer diversity. That's why it never makes sense to label us a religion or a unified conspiracy - besides being mostly irreligious, transhumanists can barely agree on anything long enough to cooperate toward it. That's also why the #1 version of transhumanism is...

1) Salon transhumanism. This is the huge group that dabbles on the fringes of transhumanism, making small donations to a few organizations here, commenting on blogs or mailing lists there, and exploring for the first time issues that other transhumanists are already tired of. The most impressive aspect of this noncommittal category is its sheer size - it includes folks like Bill Gates, congressman Brad Sherman, and the literally millions of people who have read Kurzweil, Garreau, Brin, Egan, et al. Many in this category may not explicitly call themselves transhumanists, but they sure act like it, openly advocating extended lifespans, intelligence enhancement, and space colonization, even if their familiarity comes mainly through fiction. A huge task for other transhumanists is to get salon transhumanists more closely involved.

2) Immortalists. One of the most powerful strands in transhumanism, especially in recent years, but dating all the way back to Robert Ettinger or before, are the immortalists. Immortalists are focused on living forever. In some abstract sense, they're not fundamentally different from the billions of people who want to live forever by going to Heaven - except that they have an actual plan to do it here on Earth. Immortalists are doing really well financially - the Methuselah Mouse Prize bank account just passed $4 million, which, in the immortal words of Aubrey de Grey, is "quite a lot, really". The Immortality Institute, which I co-founded back in 2002, is one of the most active transhumanist forums on the internet, and if you type "immortality" into Google, it's right after the Wikipedia page. The immortalists have it all - bloggers, television appearances, a large community of devoted donors, and a productive nucleus of aging researchers engaged in innovative research to beat the crap out of aging. When many people hear the word "transhumanist", they think of immortalists. Which makes sense, because practically all transhumanists are immortalists. The #1 immortalist blog on the interwebs is Fight Aging.

3) The World Transhumanist Association. Ah, the WTA. Even though he is no longer Executive Director, many associate the WTA with the transhumanly-active Dr. James Hughes, who built it up from nothing after it was founded by Nick Bostrom and David Pearce in 1998. The WTA has almost 4,000 members worldwide, with dozens of chapters located in places like Toronto, Seattle, London, San Francisco, New York, Chicago, DC/Boston, Israel, Moscow, Buenos Aires, and Helsinki. You might call it a worldwide transhumanist conspiracy. The WTA has no official headquarters, though a glance at the global map shows a much-higher-than-usual density of WTA transhumanists in California and New England. As far as I can tell, there are no transhumanists in Wyoming. The WTA is not so much a sect of transhumanists as an umbrella organization for all of them. Many transhumanists, however, are much more active in their sects than in the WTA as a whole. Here is a survey of members from 2005.

4) Extropians. The extropians have been around a long time - since the late 80s, when T.O. Morrow coined the term “extropy”, meaning “the extent of a system's intelligence, information, order, vitality, and capacity for improvement”. The extropians have seven principles: Perpetual Progress, Self-Transformation, Practical Optimism, Intelligent Technology, Open Society, Self-Direction, and Rational Thinking. There used to be five: Boundless Expansion, Self-Transformation, Dynamic Optimism, Intelligent Technology, and Spontaneous Order, which could be summed up in the spiffy acronym, "BEST DO IT SO!" Extropianism reached its zenith in the mid-90s with the WIRED article, "Meet the Extropians", but it maintains an active mailing list to this day. The Extropy Institute, the primary organizational instantiation of the extropians, shut down earlier this year, but there are plans to turn the site into a "library of transhumanism and the future". Extropians have a reputation for being libertarian in their politics, though there are extropians of all political stripes. Classic extropians include Max More, Natasha Vita-More, and Robert Bradbury, all of whom have contributed much to the transhumanist dialogue over the last decade or longer.

5) Singularitarianism. What the hell? My own favorite flavor of transhumanism is all the way down here, at #5. The first thing I have to say about singularitarianism is that all of its syllables are entirely necessary, and if it's really so hard for you to pronounce or spell, you should consider revisiting English 101. (If you've never heard the word aloud before, this song might help you remember it.) Singularitarianism was first conceived in 1996 by child prodigy Eliezer Yudkowsky, who was 16 at the time. Singularitarianism centers around the idea of superintelligence and its incredible potential. The idea is that if, either through human intelligence enhancement or artificial intelligence, we were to create a mind significantly smarter than all human geniuses, it could eventually reach a point where it could continue to improve its own intelligence unaided, leading to a feedback loop of cognitive enhancement (a toy model of this loop follows the list below). This could quickly lead to something way, way more powerful and smarter than the human race, which, if it cared about us, could do us a lot of good. Conversely, if a superintelligence didn't explicitly care for us, its natural activity could lead to our destruction. The proposed solution to this problem is Friendly AI - a seed AI that cares about us, and that only makes modifications to itself in ways that preserve that quality indefinitely. Singularitarianism is possibly the most controversial branch of transhumanism, and is represented by the Singularity Institute.

6) Democratic transhumanists. This left-leaning, democracy-boosting segment of transhumanism has been popularized in Dr. James Hughes' recent book, Citizen Cyborg, and his online essay, Democratic Transhumanism 2.0. Democratic transhumanism puts a lot of effort into placing transhumanism within the wider political context of today: in addition to the economic and social dimensions of political orientation, James points to another, biopolitics, whose gamut ranges from Luddite (anti-enhancement) to transhumanist (pro-enhancement). The interesting insight here is that this dimension is entirely orthogonal to the others - where someone falls on the traditional 2D political compass is not indicative of where they fall on the biopolitical continuum. The quintessential blog of democratic transhumanists is Cyborg Democracy.

7) Academic transhumanism. Transhumanism - at school! Academics like Nick Bostrom and Robin Hanson are brilliant and well-regarded enough to write about transhumanism without getting quickly ejected from their respective universities. They are academic transhumanists, who write about academic transhumanist things. This branch of transhumanism is powerful, because 1) it tends to be more precise and well-researched than the vast majority of transhumanist discourse, 2) the tone allows it to be easily integrated with other academic topics, such as economics, heuristics and biases, cognitive science, ethics, and the like, 3) students have a greater tendency to respect it, 4) other academics might take it seriously, 5) it has the potential to discover new and powerful ideas that lie at the end of long and deep roads of thought. In Bostrom's Transhumanist FAQ, this current is known as "theoretical transhumanism", though I think "academic transhumanism" is more self-explanatory.

8) Transhumanist arts and culture. Natasha Vita-More formally kicked off this segment of transhumanism in 1982 with her Transhumanist Arts Statement. There is a website devoted to TA&C, and there are dozens of transhumanist-oriented artists, including Stelarc, Anders Sandberg, and Gina Miller. There are a few transhumanist bands. The ones I am aware of - Eidölon, Cyanotic, and Yluko - are all metal/industrial. Mr. Bungle also has a couple of songs about nanotechnology and transhumanism. You should check them out if you enjoy chaotic noise.

9) Non-transhumanist transhumanists. There is a segment of transhumanists, of unknown size, that feels uncomfortable with the connotations of the transhumanist label, or considers it divisive, but still holds many of the beliefs and values of transhumanists. Examples would be our friends Jamais Cascio and Dale Carrico. These individuals participate in the transhumanist/futurist milieu but just don't like to use the T-word to describe themselves. They tend to prefer less provocative labels, such as "technoprogressive".
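As promised under #5, here is a toy model of the recursive self-improvement feedback loop. It is purely illustrative - the 10%-per-cycle improvement rate and the "intelligence units" are arbitrary assumptions of mine, not anything from Yudkowsky's writings - but it shows how self-improvement that scales with current capability compounds on itself:

```python
# Toy model of recursive self-improvement. Illustrative only: the
# improvement rate is an arbitrary assumption, not a real estimate.
intelligence = 1.0       # arbitrary units; 1.0 = human-genius baseline
improvement_rate = 0.10  # fraction of current capability gained per cycle

for cycle in range(1, 51):
    # Each cycle's gain is proportional to current intelligence:
    # smarter systems design better improvements.
    intelligence += improvement_rate * intelligence
    if cycle % 10 == 0:
        print(f"cycle {cycle:2d}: {intelligence:7.1f}x baseline")
```

Even with a fixed rate, the growth is exponential (over 100x baseline after 50 cycles); if the rate itself rose with intelligence, the curve would steepen further still, which is the takeoff scenario singularitarians actually worry about.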

Filed under: transhumanism 59 Comments
11Dec/06

Transhumanism in The Economist

In both the online and print edition of the latest Economist, we have this delightfully positive little article, Towards Immortality. An excerpt:

There is no greater goal for transhumanism than the conquest of death. Some of the most controversial advocates of technological improvements to humans, including Ray Kurzweil, an American inventor and author, and Aubrey de Grey, a gerontologist and chairman of the Methuselah Foundation, argue optimistically that immortality may become achievable for people who are alive today. But even without the yet-to-be-invented technologies that they say will make this possible, there are good reasons why we can hope to live a lot longer.

The article goes on to discuss the proven anti-aging power of caloric restriction and resveratrol. It mentions something interesting I wasn't aware of: there's work underway to develop drugs that activate the sirtuin genes that kick in during caloric restriction, while skipping the whole not-eating thing.

In caloric restriction, you eat less, so your body goes into starvation mode, where it tries harder to live longer at the expense of other things, like libido. The idea is that your body thinks that a famine is happening, and it puts a higher priority on living through the famine to reproduce another day rather than reproducing immediately.

Because sirtuin genes, when activated, can decrease sex drive and increase androgynous characteristics in both sexes, sirtuin-targeting drugs wouldn't be for everybody. However, this line of work could lead to a drug that provably extends human lifespan by 30-50%.

Say, for example, that you're a 25-year-old white American female. Here in 2006, your life expectancy is just over 80. So you might say that you can expect to live until sometime around 2061.

But not so fast. Average human lifespan increased by more than 1/4 of a year per year throughout the 20th century, and it shows no signs of slowing down - the trend has continued every year of the 21st century as well. When I was younger I was told that it couldn't keep going forever, since the maximum theoretically possible human lifespan was supposedly in the neighborhood of 120, the lifespan of the longest-lived person at the time. But then I realized that the so-called theoretical limit was probably just derived from that record, not from any underlying biology. From the article:

Back in 1928, an American demographer, Louis Dublin, calculated that the upper limit on average life expectancy would be 64.8 years, a daring figure at the time, with American life expectancy then just 57 years. But now his figure looks timid, given that life expectancy for women in Okinawa, Japan, has passed 85.3 years, 20 years more than Dublin claimed possible. Also looking timid are the scientists who later predicted that life expectancy would never pass 78 years (in 1952), 79 years (1980) and 82.5 years (1984).

Thanks to better nutrition and health care, we can safely assume that life expectancy will continue to increase by at least 1/4 of a year per year for many decades to come. Advances like sirtuin-targeting drugs, or the work done by our friends at the Methuselah Foundation, could present "game-changing" life-extension strategies that outperform the mere 1/4-year-per-year trend. Even without them, our 25-year-old female would get about 11 years of extra life from the ongoing trend, putting her life expectancy at 91; qualitatively new, ambitious anti-aging therapies could add another 30-50% on top of that, opening up the 100+ age range and putting her estimated year of death at around 2070 or 2080 rather than 2060 (see the sketch below). And what about the possibility of entirely new therapies that blow the lid off a finite lifespan altogether? Actively supporting these initiatives is what transhumanism is all about.
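To make that arithmetic explicit, here's a back-of-the-envelope sketch. The TREND_END year is my own assumption - one way to cash out "many decades to come" that reproduces the 11-year figure - and applying the 30-50% therapy boost to her remaining years is just one way to read that claim:

```python
# Back-of-the-envelope life expectancy arithmetic (my reconstruction
# of the reasoning above, not a demographic model).
BIRTH_YEAR = 1981        # our 25-year-old female in 2006
BASE_EXPECTANCY = 80.0   # life expectancy today, expressed as an age
TREND = 0.25             # years of life expectancy gained per calendar year
TREND_END = 2050         # assumption: the trend holds "for many decades"

trend_bonus = (TREND_END - 2006) * TREND    # 44 years * 0.25 = 11 years
expectancy = BASE_EXPECTANCY + trend_bonus  # age 91

print(f"Trend alone: age {expectancy:.0f}, dies ~{BIRTH_YEAR + expectancy:.0f}")

# Speculative sirtuin-style therapies adding 30-50% to her remaining
# years would push her well past the 100+ mark.
for boost in (0.30, 0.50):
    total_age = 25 + (expectancy - 25) * (1 + boost)
    print(f"+{boost:.0%} therapy: age {total_age:.0f}, "
          f"dies ~{BIRTH_YEAR + total_age:.0f}")
```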

Filed under: transhumanism 17 Comments
6Dec/06

Space Colonization and Existential Risk

If all goes well, NASA could have a permanent moon base by 2020. This is hopeful, because it's a step towards putting our eggs in more than one basket. At the Lifeboat Foundation, there is a general consensus that setting up autonomous colonies outside of Earth's atmospheric envelope is an urgent priority - even more urgent than traditional lofty goals like curing cancer. If the 250+ members of our Scientific Advisory Board are any indication, quite a few people are on the same page about this.

For a colony to qualify as a true "Lifeboat", it requires enough people to provide a bare minimum of genetic, racial, and skillset diversity - 200 individuals, preferably 2,000. Men, women, and children would all need to operate in harmony, with maximum safety and minimum conflict. To be truly autonomous, a Lifeboat would need years' worth of supplies - computers, medical equipment, robotics, food, water, recycling systems - and, in the longer run, industrial facilities that can process raw materials into useful products. To avoid the need for constant resupply from Earth, a space or lunar colony would need very efficient recycling processes, and would eventually have to start growing its own food.

Al Globus, a prominent advocate of space colonization who works at the NASA Ames Research Center, argues convincingly that we should build space colonies in orbit rather than on the Moon or Mars. He cites rapid resupply, continuous solar energy, better communication with Earth, and the availability of 1g artificial pseudogravity as reasons to choose orbit. In particular, children who grow up in the 1/6g or 1/3g environments of the Moon or Mars would lack the musculature necessary to function in 1g environments, making it extremely difficult or impossible for them to ever visit the Earth, which could be a problem.
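As an aside on that last point, the 1g of pseudogravity in an orbital colony comes from rotation: centripetal acceleration is a = ω²r, so the required spin rate follows directly from the structure's radius. A quick sketch (the example radii are mine, not figures from Globus):

```python
# Spin rate needed for 1g of centripetal pseudogravity: a = omega^2 * r.
import math

TARGET_G = 9.81  # m/s^2, Earth-surface gravity

for radius_m in (100, 250, 500, 1000):
    omega = math.sqrt(TARGET_G / radius_m)  # angular velocity, rad/s
    rpm = omega * 60 / (2 * math.pi)        # revolutions per minute
    print(f"radius {radius_m:4d} m -> {rpm:5.2f} rpm")
```

Larger radii allow slower spin, which matters because spin rates much above a couple of rpm are commonly cited as causing motion sickness - one reason orbital colony designs tend to be big.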

By arguing that we ought to be setting up colonies on the Moon or Mars rather than in orbit, NASA is wasting taxpayers' time and money.

In my view, the first priority of space colonization should be to create a viable backup of the human race. Stephen Hawking, a relative newcomer to the field of risk analysis, has proposed traveling all the way to other star systems, which, as grandiose and visionary as it sounds, barely confers any risk avoidance benefit above and beyond a space station situated at a Lagrange point.

Can we get the risk avoidance benefits of space simply by setting up colonies deep underground, or on remote Pacific islands? Unfortunately, probably not. The main reason is that, without full autonomy, and with such tempting access to the surface and the rest of the world, a subterranean Lifeboat is not likely to be fully secure - with people and goods constantly going back and forth, the point of setting up the colony in the first place is defeated. Also, the majority of Earth's biomass lies deep underground - if a destructive self-replicator were developed that thrives anaerobically, it could affect underground colonies profoundly.

And who really wants to live a mile underground? With advanced VR, it could become more palatable, but it's hard to imagine 2,000 men, women, and children excited about spending their lives in a hole in the ground - much less a hole that you aren't allowed to leave and that has to be sealed practically airtight. The prospect is right out of a dystopian Philip K. Dick novel. I, for one, would pass.

Burt Rutan, developer of SpaceShipOne, the first private vessel to reach space, made the following predictions in late 2004:

* Within 5 years 3,000 tourists will have been to space.
* Within 15 years sub-orbital tourism will be affordable, and 50,000 people will have flown.
* Within 15 years the first, expensive orbital tourist flights will have happened.
* Within 25 years orbital tourism will be affordable.

Al Globus states that if these estimates are roughly correct, he'd expect to see the first orbital colony built within 50 years, around 2055. Of course, various factors could slow this down or speed it up. Continuing private investment in space, popular support for organizations like the Lifeboat Foundation, and advances in molecular nanotechnology could bring the dream of space colonization radically closer - with a lot of hard work and exponential progress in science and technology, perhaps we could "break vacuum" on the first space colony in the early 2030s or even before. It would do humanity a great service, and might even be regarded as our greatest accomplishment this side of the Singularity.

Speaking of the Singularity, artificial superintelligence is possibly the only conceivable risk that would put even space colonies in danger of destruction. An AI with great robotics capabilities and optimization power, but lacking a goal system that assigns special status to sentient beings, could easily start remaking the Earth in its own image, without even being conscious that it was inadvertently taking numerous lives. For example, an AI designed to optimize a factory for making more widgets might realize that it could make the most widgets by converting all the matter in the solar system into widget factories. Because advanced AI, once developed, could quickly become capable of thinking and acting millions of times faster than us meat puppets, by the time we realized what was going on and called a meeting, our atoms would already have been duly rearranged to maximize our god-given widget-making potential. Widgets 1, Humanity 0.

Because of the 21st-century superthreats of AI and artificial life, we will not be able to rest truly easy even when an autonomous space colony is orbiting above us. From the perspective of a recursively self-enhancing widget-making AI, that colony would look like just another tasty little matter nugget, perfectly suited for integration into the newest cutting-edge widget factory. Without inbuilt cognitive delimiters that assign diminishing marginal utility to the construction of each new widget factory, this poorly programmed UnFriendly AI (UFAI) would be just as excited about the twenty-trillionth widget factory as the first.
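The "cognitive delimiter" point can be made concrete with a toy utility comparison. In this sketch (entirely my own illustration, not a real AI goal system), a linear utility function values the twenty-trillionth widget factory exactly as much as the first, while a logarithmic one assigns it essentially nothing:

```python
# Toy comparison: linear vs. diminishing marginal utility over the
# number of widget factories built. Illustrative only.
import math

def linear_utility(n):
    return float(n)       # every factory is worth the same

def diminishing_utility(n):
    return math.log1p(n)  # each additional factory is worth less

for n in (1, 1000, 10**6, 2 * 10**13):  # up to twenty trillion factories
    marginal_lin = linear_utility(n) - linear_utility(n - 1)
    marginal_dim = diminishing_utility(n) - diminishing_utility(n - 1)
    print(f"factory #{n:>15,}: linear marginal = {marginal_lin:.1f}, "
          f"diminishing marginal = {marginal_dim:.2e}")
```

Of course, bolting a logarithm onto a utility function is nowhere near a solution to Friendliness; the point is only that unbounded linear goals never stop demanding more matter.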

For this reason, it is just as important, if not more so, to invest humanity's resources in Friendly AI - cognitive systems that recognize us as sentient beings and respect our volition, even given complete read/write access to their own source code. Whether or not such a goal is possible leaves room for endless dualistic debate, but if the qualities of kindness and compassion really correspond to certain cognitive structures, and are not magical, inexplicable auras given to us by God, then it's only a matter of time before we understand them in sufficient detail to create software systems that display these qualities.

This is not just a blue-sky transhumanist idea. Since Turing, computer scientists and the lay public alike have been fascinated by the idea of humans getting along with superhuman machines. The idea has been taken seriously in the United States Congress, where Ray Kurzweil testified that he expects superhuman artificial intelligence within the next thirty years. Congressman Brad Sherman (D-CA) agrees that the issue deserves much more attention, as do mainstream risk analysts like Fred C. Iklé and superstar VC Peter Thiel. For many who have given the issue serious examination, it's not a matter of if it needs to be dealt with, but how.

As educated First Worlders who happen not to be starving, it's our responsibility to start preparing solutions now, not later. This is not something that should only be attended to by dedicated scientists. Like the movement to stop global warming, it needs support from the media, teachers, the government, and the public. Here are a few things that you, specifically, can do to help:

1. Get informed. The complexity and newness of these issues can be overwhelming to anyone without prior exposure to the subject. Thankfully, those who write about these topics are some of the clearest communicators in the scientific community, and they fully appreciate the importance of raising awareness on a global level. I recommend the writings of Christopher Phoenix, Nick Bostrom, and Eliezer Yudkowsky. Of course, you need only keep an eye on news headlines to see that prominent intellectuals like Stephen Hawking and Sir Martin Rees are concerned about existential risk.

2. Get engaged. Thanks to the proliferation of weblogs and personal websites, news and analysis are becoming a massively distributed, grassroots phenomenon. There are hundreds of blogs that focus on the future - both the benefits and the risks it might pose. By my standards, many of them paint a naively rosy picture of the coming century, but there are still many worth reading, and of course the vast majority of them encourage comments. Some of my favorite futurist blogs include Mike Treder's CRN blog, Phil Bowermaster and Stephen Gordon's Speculist, and Brian Wang's Advanced Nanotechnology. Of course, there's nothing stopping you from starting your own blog and blowing all these guys away. If you can chew gum and walk at the same time, you can probably start a blog.

3. Get serious. Accomplishing anything big requires serious people, serious time, and serious money. We're not actually going to accomplish anything if we sit around chatting about how great it would be if someone else worked to fight existential risk. The few organizations engaged in activity of value are composed of people who are putting their financial security and professional reputations at risk by working full-time for ventures that depend on the foresight and regular contributions of their supporters. The least non-specialists can do is join these organizations and adopt a pattern of charitable donations. Supporters can also offer contacts - friends, or friends of friends, who can help us get our message out into the media, or who can offer expertise useful for fleshing out and implementing mitigation strategies. Have a friend who is interested in global risk mitigation but has a few questions to ask? Refer them to us. And if you live in or around the Bay Area, I invite you to join me in San Francisco for lunch anytime.

Supporters of organizations working to fight existential risk have a big, hairy, audacious goal - a world where the threat of human extinction has been lowered to zero. So despite all the doomsaying and apocalyptic warnings, we're actually far more optimistic than the average guy on the street, who accepts global risk as a fact of life, something to be ignored lest it give us a bad day. I'd love to be alive in 2050, or 2100, and say, "hey, we did it, the threat is over." Like the lowly horseshoe crab, which has been going strong for 400 million years now, humanity - or whatever we choose to become - deserves to live long and prosper.

Filed under: risks 61 Comments
5Dec/06

Review of Accelerando, by Charles Stross

Transhumanist and Creative Commons CTO Mike Linksvayer has written something you don't see too often: a negative review of Charles Stross's Accelerando. In the comments, he muses that "one person's dense content is another person's thicket of cliches". Well put. Here is the review:

I expected to enjoy Accelerando by Charles Stross and have a really hard time finishing Down and Out in the Magic Kingdom by Cory Doctorow. The former includes cool stuff like mind uploading, space colonization, and singularity. The latter is set in an incredibly challenging environment (in terms of holding my interest)–a theme park. I experienced the reverse.

Manfred Macx, an open source entrepreneur of the future (very approximately), has a kid with his IRS agent luddite wife. They and their descendants carry their family squabbles across the universe and singularity. As this incredibly non-interesting story unfolds, Accelerando takes every opportunity to reference dot com bubble, transhumanist, and obscure political cliches and inside jokes, without any real depth.

Accelerando was originally written as ten stories, many of which won awards, and several of which I can imagine being enjoyable as shorts. The book is way too long.

If you (can put up with lots of crap) enjoy science fiction, you’ll probably like Accelerando. Everyone else, skim the Accelerando Technical Companion to pick up any missing memes.

Peter McCluskey, economist, Bayesian, and member of our local transhumanist junta, is even more critical:

Accelerando is an entertaining collection of loosely related anecdotes spanning a time that covers both the near future and the post-singularity world. Stross seems to be more interested in showing off how many geeky pieces of knowledge he has and how many witty one-liners he can produce than he is in producing a great plot or a big new vision. I expect that people who aren’t hackers or extropians will sometimes be confused by some of his more obscure references (e.g. when he assumes you know how a third-party compiler defeats the Thompson hack).

He sometimes tries too hard to show off his knowledge, such as when he says “solving the calculation problem” causes “screams from the Chicago School” - this seems to show he confuses the Chicago School with the Austrian School. He says that in the farther parts of the solar system:

Most people huddle close to the hub, for comfort and warmth and low latency: posthumans are gregarious.

But most of what I know about the physics of computation suggests that warmth is a problem they will be trying to minimize.

The early parts of the book try to impress the reader with future shock, but toward the end the effects of technological change seem to have less and less effect on how the characters live. That is hard to reconcile with the kind of exponential change that Stross seems to believe in.

He has many tidbits about innovative economic and legal institutions. But it’s often hard to understand how realistic they are, because I got some inconsistent impressions about basic things such as whether Manfred used money.

His answer to the Fermi paradox is unconvincing. It is easy to imagine that the smartest beings will want to stick close to the most popular locations. But that leaves plenty of other moderately intelligent beings (the lobsters?) with little attachment to this solar system, whose failure to colonize the galaxy he doesn’t explain.

Numerous things annoyed me about Accelerando. Near the end, Stross felt compelled to use a relatively recent phenomenon - litigation for the purpose of burying someone - as the framework for a space battle above the atmosphere of Jupiter. Like, they're terrified because barristers are coming to fire their tort guns at them. Seriously. In virtual reality situations, his characters always choose the most boring and uninnovative environments to dwell in, like a 20s-style cocktail lounge. It reminds me of Star Trek, which frequently fell back on contemporary culture to hold the viewer's interest. In an effort to connect with readers who don't really care about the future, but in effect want it to mirror the culture, style, and psychology of the past, Stross seems to avoid anything genuinely new, preferring a daytime soap opera set against the backdrop of molecular nanotechnology and uploads.

The third part, "Singularity", opens with the P. T. Barnum quote, "There's a sucker born every minute." It doesn't have much to do with the chapter, aside from possibly-maybe referencing the presence of a starwisp called Field Circus, but hey, it looks edgy, so why not toss it in?

Much of cyberpunk fiction, like Accelerando, seems as staid and old-fashioned now as it did when it first started showing up in the early 1980s. "High tech and low-life", as Wikipedia puts it, unfortunately gives readers a crappy stereotype of the future to anticipate - and, through self-fulfilling prophecy, to actually desire. It's as if we can't allow ourselves to imagine that technology will advance without our giving up something else entirely, like honesty in government or cleanliness in the streets. That is why, in cyberpunk novels, the government is always corrupt and the streets are always dirty. In Stross's future vision of a solar upload empire, AI-run pyramid schemes and adbots are viewed as the primary existential risk.

Accelerando is not classic cyberpunk, but it maintains the cyberpunk ethos throughout. The way it tries to bring together all the members of the Macx family at the end is a convoluted disaster. Filled with badass-wannabe characters (I've met people at Haight and Ashbury with more edge) punctuating their faux-hip dialogues with awkwardly-placed swear words, Accelerando is a science fiction story that's only interesting if you've had literally zero prior exposure to transhumanist ideas. For transhumanists, the themes and concepts are mostly old news. Accelerando is an exponential rush into a cliched future, accompanied by a boredom level that approaches infinity at the asymptote.

Update: Anders Sandberg references an article critical of the economics in Accelerando.

Filed under: futurism 91 Comments
5Dec/06

New Book: Military Nanotechnology

A new book on nanotech security is out by the German physicist Dr. Jürgen Altmann. It looks like an important contribution to a field that is terribly lacking in serious analysis. But I wonder: can his analysis really be "comprehensive" when many of the applications of nanotech haven't even been dreamed up yet? Anyway, here's the description from Amazon:

This book is the first systematic and comprehensive presentation of the potential military applications of nanotechnology (NT). After a thorough introduction and overview of nanotechnology and its history, it presents the actual military NT R&D in the USA and gives a systematic description of the potential military applications of NT that may include in 10-20 years extremely small computers, miniature sensors, lighter and stronger materials in vehicles and weapons, autonomous systems of many sizes and implants in soldiers' bodies. These potential applications are assessed from a viewpoint of international security, considering the new criteria of dangers for arms control and the international law of warfare, dangers for stability through potential new arms races and proliferation, and dangers for humans and society.

Although some applications (e.g. sensors for biological-warfare agents) could contribute to better protection against terrorist attacks or to better verification of compliance with arms-control treaties, several potential uses, like metal-free firearms, small missiles or implants and other body manipulation raise strong concerns. For preventive limitation of these potentially dangerous applications of NT, specific approaches are proposed that balance positive civilian uses and take into account verification of compliance.

This book will be of much interest to students of strategic studies, peace studies, conflict resolution and international security, as well as specialists in the fields of military technology and chemical-biological weapons.

Here are a few of the specific policy recommendations:

To contain these risks, preventive limits are recommended in seven areas. They do not focus on NT as such, but include NT applications in a broader, mission-oriented approach. Distributed sensors below several cm size should be banned. Metal-free small arms and munitions should not be developed, the Treaty on Conventional Armed Forces should be kept and updated as new weapons systems would arrive. A moratorium of ten years for non-medical body manipulation should be agreed upon. Armed autonomous systems should optimally be banned, with limits on unarmed ones; if the former is not achievable, at least for the decision on weapon release a human should remain in the loop. Mobile systems below 0.2-0.5 m size should be banned in general, with very few exceptions. A general ban on space weapons should be concluded, with exceptions for non-weapons uses of small satellites. The Chemical and Biological Weapons Conventions should be upheld and strengthened.

10-20 years? Oh no! Better start planning. Too bad 99% of security analysts won't take this book seriously until nanofactories are already on shelves, because it just sounds too far out, too much progress in too little time. As anyone who has been in futurism for a while knows, whether or not something sounds far out usually isn't predictive of when it will actually be developed. Sometimes it's harder than it sounds, sometimes it's easier than it sounds, but rarely is it exactly as hard as it sounds at first glance. That would just be too much of a coincidence.