Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

31Aug/10

WSJ: Gains in Bioscience Cause Terror Fears

From The Wall Street Journal:

Rapid advances in bioscience are raising alarms among terrorism experts that amateur scientists will soon be able to gin up deadly pathogens for nefarious uses.

Fears of bioterror have been on the rise since the Sept. 11, 2001, attacks, stoking tens of billions of dollars of government spending on defenses, and the White House and Congress continue to push for new measures.

But the fear of a mass-casualty terrorist attack using bioweapons has always been tempered by a single fact: Of the scores of plots uncovered during the past decade, none have featured biological weapons. Indeed, many experts doubt terrorists even have the technical capability to acquire and weaponize deadly bugs.

The new fear, though, is that scientific advances that enable amateur scientists to carry out once-exotic experiments, such as DNA cloning, could be put to criminal use. Many well-known figures are sounding the alarm over the revolution in biological science, which amounts to a proliferation of know-how—if not the actual pathogens.

Another bit later in the article:

All the government attention comes despite the absence of known terrorist plots involving biological weapons. According to U.S. counterterrorism officials, al Qaeda last actively tried to work with bioweapons--specifically anthrax--before the 2001 invasion that uprooted its leadership from Afghanistan.

This is great. It's best to pay attention to obvious risks (this one, nuclear terrorism, the vulnerability of the power grid to solar storms, major earthquakes, and so on) before they happen, not after. Oftentimes, adequate preparation requires little marginal effort.

Filed under: biology, risks
29Aug/10

Geomagnetic Solar Storms and EMP

I wish to qualify my statement in the previous post, where I wrote, "I currently think that EMP attack is the second greatest risk we face, right behind a genetically engineered superplague."

What I should really say is that any electromagnetic event that wreaks havoc on electronics is the second greatest risk, and that includes geomagnetic storms as well as EMP. I don't want the particularly vivid risk of EMP attack to distract attention from the fundamental point: the most critical nodes in our power grids simply need to be better protected.

EMP attack is controversial, and the experts are divided. Scientists can agree, however, that a solar maximum is on the way for 2013, and it could rival the Carrington Event of 1859 in its intensity.

The Space Review has an article that argues that EMP attack is unlikely while geomagnetic storms are the real threat.

Filed under: risks
29Aug/10

Welcome to 1850: The Risk of EMP Attack

I am concerned about the PR side of how the EMP attack risk has been communicated over the last couple of years. Awareness of the risk has spread much faster on the right than in any other portion of the political spectrum, which is already making it unfashionable among the educated left.

Given the year (2010), I currently think that EMP attack is the second greatest risk we face, right behind a genetically engineered superplague. A small EMP-optimized nuke launched from a container ship in the Gulf of Mexico could take out the power grid of the entire continental United States. The same could be done elsewhere, such as Europe or Japan.

The facts are available from the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack. No one cares except the Fox News crowd. It wasn't like this only a few years ago: EMP attack was primarily a topic limited to analysts and sci-fi TV show writers. Obama seems concerned about nukes in general (which presumably includes the EMP risk that emanates from them), but not many on the left share his concern. People are too busy worrying about global warming. The aging Henry Kissinger is not a good spokesman for the nuclear security movement.

If an EMP attack came, cars and trucks would just stop. Factories, controlled by computers, would stop. Molten steel on assembly lines would cool and solidify in place as heating elements failed. The vast majority of tractors, combines, and other heavy machinery would become useless. Transformers and other electrical components, large and small, would be fried. The largest transformers have to be ordered from China, generally with a year of lead time.

An effective EMP attack on the US would cause tens of trillions of dollars of damage. Cities would run out of food in a few days. The US grain stockpile holds only about a million bushels of wheat, the only common grain with enough nutrients to sustain someone on an all-grain diet. A bushel is only 60 pounds, and a person needs about a pound of wheat a day to avoid hunger pangs, ideally two pounds when doing manual labor. Sixty million man-days of food is not a lot. With a US population of 300 million, the stockpile is enough for everyone to eat a fifth of a pound of wheat, and then it's gone.
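As a sanity check, here is that arithmetic spelled out in a quick Python sketch; the inputs are just the round numbers quoted above:

    # Back-of-the-envelope check of the grain stockpile figures above.
    bushels = 1_000_000           # US wheat stockpile, in bushels
    lbs_per_bushel = 60           # pounds of wheat per bushel
    lbs_per_person_per_day = 1    # subsistence ration (2 lbs for manual labor)
    population = 300_000_000      # approximate US population in 2010

    total_lbs = bushels * lbs_per_bushel            # 60 million pounds
    man_days = total_lbs // lbs_per_person_per_day  # 60 million man-days
    lbs_per_capita = total_lbs / population         # 0.2 lbs per person

    print(f"{man_days:,} man-days of food")                       # 60,000,000
    print(f"{lbs_per_capita:.1f} lbs of wheat per person, once")  # 0.2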

The long-term prognosis will depend on how hard it is to import crucial replacement electronics for trucks and tractors. If security collapses within a few weeks of an EMP attack, foreign companies may be reluctant to do business here.

For a few tens of billions of dollars, we (the US) could shield our most important infrastructure from EMP attack. Our power grid is so naked and unprotected right now, we are practically asking to be nuked.

Filed under: risks
31Jul/10

58% of Americans Expect World War and Nuclear Terrorism by 2050

Here are the results from Pew Research. Thanks to James Hughes on the ieet-x list for the link.

Filed under: risks
28Jun/10

Patrick Lin in London Times: “The Reality of Robocops”

Patrick Lin is spreading the valuable message of roboethics:

They have everything the modern policeman could need - apart from a code of ethics. Without that, a Pentagon adviser fears, the world could be entering an era where automatons pose a serious threat to humanity.

The robots need to be hack-proof to prevent perpetrators from turning them into criminals, and a code of ethical conduct must be agreed while the technology is nascent.

The article mentions that there are currently over 7 million robots in operation, about half of them cleaning floors.

Filed under: risks, robotics
14Jun/10

Reducing Long-Term Catastrophic Artificial Intelligence Risk

Check out this new essay from the Singularity Institute: "Reducing long-term catastrophic AI risk". Here's the intro:

In 1965, the eminent statistician I. J. Good proposed that artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements: AIs would be smart enough to make themselves smarter, and, having made themselves smarter, would spot still further opportunities for improvement, leaving human abilities far behind. Good called this process an "intelligence explosion," while later authors have used the terms "technological singularity" or simply "the Singularity".

The Singularity Institute aims to reduce the risk of a catastrophe, should such an event eventually occur. Our activities include research, education, and conferences. In this document, we provide a whirlwind introduction to the case for taking AI risks seriously, and suggest some strategies to reduce those risks.
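To make Good's feedback loop concrete, here is a minimal numerical cartoon (my own illustration, not SIAI's model): let intelligence I grow at a rate that itself depends on I, dI/dt = c*I^k. For k at or below 1 you get ordinary exponential or slower growth; for k above 1 the trajectory blows up in finite time, a crude picture of an "explosion".

    # Toy sketch of Good's feedback loop: capability growth whose rate
    # depends on current capability. All parameters are illustrative.
    def trajectory(c=0.1, k=1.5, i0=1.0, dt=0.01, steps=2000):
        i, t, points = i0, 0.0, []
        for _ in range(steps):
            i += c * i**k * dt          # Euler step of dI/dt = c * I**k
            t += dt
            points.append((t, i))
            if i > 1e9:                 # stop once growth is effectively vertical
                break
        return points

    for t, i in trajectory()[::200]:    # print every 200th point
        print(f"t = {t:5.2f}   I = {i:>14,.1f}")

With these made-up parameters, I barely moves for a long stretch and then shoots upward near t = 20, which is the qualitative point of the quote: slow, then sudden.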

Pay attention and do something now, or be eliminated by human-indifferent AGI later. Why is human-indifferent AGI plausible, or even likely, within the next few decades?

1. What we consider "normal" or "common sense" morality is actually extremely complex.

2. The default morality for AIs will be much simpler than #1. Look at most existing AI/robotics goal systems: they are only as complex as they need to be to get their narrow jobs done, because that is easier to program and remains very effective until the AI reaches human-surpassing intelligence.

3. A superintelligent, super-powerful, self-replicating AI with simplistic supergoals would eventually eliminate humanity through simple indifference, the way humanity has driven many thousands of species extinct through indifference.

Over the course of restructuring the local neighborhood to achieve its goals (such as maximizing some floating-point variable that once represented its bank account), such an AI would let the complex, fragile structures known as humans fall by the wayside.

The motivation will not derive from misanthropy, but from basic AI drives, such as the drive to preserve its utility function and defend it from modification. These drives will appear "naturally" in all AIs unless explicitly counteracted, and this should be experimentally verifiable in the near future as progress continues toward domain-general reasoning systems. Even AIs with simple game-playing goals, given sufficiently detailed models of the world in which the games are played (most AIs lack such models entirely), will spontaneously expand into strategies like deceiving or confusing their opponents, perhaps surprising their programmers. Progress in this area is likely to start off incremental and eventually speed up, just as completing a puzzle gets easier the closer you are to the end.

Even a "near miss", such as an AI programmed to "make humans happy", could lead to unpleasant circumstances for us for the rest of eternity. An AI might get locked into some simplistic notion of human happiness, perhaps because its programmers underestimated the speed at which a seed AGI could start self-improving, and didn't place enough importance on giving the AGI complex and humane supergoals which remain consistent under reflection and self-modification. The worst possible futures may be ones in which a Singularity AI keeps us alive indefinitely under conditions where our existence is valued but our freedom is not.

Filed under: AI, risks, SIAI, singularity
31May/10

Weapon Energy Over Time

I believe I found this graph on J. Storrs Hall's website.

Filed under: risks
28May/10

Hungry Cannibals and Soft Apocalypses

Robin Hanson recently posted about The Road and cannibals, which is great, because I think about this stuff all the time, and it's good not to be alone.

The Road is a movie/book about a man and his son traveling south to reach the coast of the Gulf of Mexico in a post-apocalyptic world where the Sun is blocked out by huge dust clouds and there is no plant or animal life left beyond a few refugees and murderous cannibals. I thought the book was OK because it gave a sneak preview of what daily life could be like if the United States were hit by a massive EMP attack. (The human conflict and desperate lack of food part, not the blocking out the Sun part.)

Prof. Hanson remarks that some reviewers called the movie "realistic" when it absolutely is not. First, the story takes place more than seven years after the apocalypse, yet on a couple of occasions the characters stumble on stored food supplies, which doesn't make sense to Hanson. Second, he points out that traveling in such a world would be totally suicidal. Third, the pair never tries to ally with others to boost their strength; they run across neutral people throughout the story but never team up with them. Fourth, if the apocalypse really destroyed the biosphere and most food sources, Hanson considers it unrealistic that people living primarily on cannibalism could last the seven-plus years it takes for the child character to grow up. According to his calculations, you'd have to eat about a person every 47 days to get adequate nutrition.
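For what it's worth, the 47-day figure is easy to sanity-check by inverting it; the 2,000 kcal/day requirement below is my assumption, not a number from Hanson's post:

    # What does "a person every 47 days" imply about usable calories?
    kcal_per_day = 2000          # assumed adult daily requirement
    days_per_body = 47           # Hanson's figure
    implied_kcal = kcal_per_day * days_per_body
    print(f"Implied usable calories per body: {implied_kcal:,}")  # 94,000

About 94,000 usable calories per body is at least the right order of magnitude for the edible calorie content of a human, so the figure is internally plausible.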

Shockingly, many of Hanson's commenters don't agree with his points.

A particular comment concerned me a bit, about another post-apocalyptic book that is popular right now, One Second After:

I recently finished the book One Second After which took place in a small town after a nuclear bomb releasing electromagnetism is set off in the United States. They somewhat resorted to cannibalism in the book, at one point choosing to use all stray dogs as the next food source, and moving on to humans who had died. In this case, I found the book to be pretty realistic and very well thought out.

I find this comment problematic because the book isn't realistic. As far as I can tell, the only contemporary author that gets the basics of a post-apocalypse or economic disaster scenario right is James Wesley Rawles. For all I know, he may be the only storyteller that ever even tries to get it right, because the other popular ones -- On the Beach, Mad Max, Terminator IV, Lucifer's Hammer, The Matrix, and all your other post-apocalyptic favorites -- are just terribly unrealistic. The common thread in all of them is that life after the so-called apocalypse is unrealistically easy. This even includes the non-Hollywood tales that are ostensibly trying to be grittier and more realistic, like The Road. (The movie is mostly a faithful rendition of the book.)

If you're looking for post-apocalyptic fiction, the only book that made any sense to me was Patriots by James Wesley Rawles. Perhaps because Rawles is a genuine survivalist, he puts real thought into what a post-collapse society would actually be like, while many other authors approach it from a more detached position. Thankfully, Patriots is extremely popular, and it is doing a great deal to sow the seeds of resilience, so that at least 50% of the population might survive an EMP attack. To quote John Robb, "Localize production. Virtualize everything else."

I'd like to write a full review of One Second After, but it will take me a second.

Filed under: risks
15May/10

Dangers of Molecular Nanotechnology, Again

Over at IEET, Jamais Cascio and Mike Treder essentially argue that the future will be slow and boring, or rather will seem slow and boring, because people will get used to advances as quickly as they occur. I heartily disagree. There are at least three probable events that could make the future seem traumatic, broken, out-of-control, and not slow by anyone's standards: 1) a Third World War or an atmospheric EMP detonation event, 2) an MNT revolution with accompanying arms races, and 3) superintelligence. In response to Jamais' post, I commented:

I disagree. I don't think that Jamais understands how abrupt an MNT revolution could be once the first nanofactory is built, or how abrupt a hard takeoff could be once a human-equivalent artificial intelligence is created.

Read Nanosystems, then "Design of a Primitive Nanofactory", and look where nanotechnology is today.

For AI, you can do simple math that shows once an AI can earn enough money to pay for its own upkeep and then some, it would quickly gain the ability to take over most of the world economy.
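As a toy version of that "simple math" (entirely my own illustration; every number is an assumption): an AI whose revenue exceeds its upkeep can reinvest the surplus in more hardware and more copies of itself, so its capital compounds.

    # Toy compounding model: surplus income is reinvested each year.
    capital = 1e6              # assumed starting capital, dollars
    upkeep = 5e5               # assumed annual running cost, dollars
    revenue_per_dollar = 1.0   # assumed annual revenue per dollar of capital
    world_gdp = 6e13           # rough 2010 gross world product, dollars

    years = 0
    while capital < world_gdp:
        surplus = capital * revenue_per_dollar - upkeep  # positive here by assumption
        capital += surplus                               # reinvest all profit
        years += 1
    print(f"Reaches world-economy scale in ~{years} years")

The specific numbers don't matter; the point is that any reliably positive return compounds, and under these made-up parameters the toy agent reaches world-economy scale in under three decades.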

Have Giulio or Jamais read "Design of a Primitive Nanofactory" or Nanosystems?

Knowledge of where we are today in nanotechnology, plus Nanosystems, plus "Design of a Primitive Nanofactory", equals scary.

Where we are today: basic molecular assembly lines
The most important breakthrough: a reprogrammable universal assembler
Shortly thereafter: a basic nanofactory
Shortly thereafter: every nation with nanofactory technology magnifies its manufacturing potential by a factor of hundreds or more.

Chris Phoenix gets it. Jurgen Altmann gets it. Mark Gubrud gets it. Thomas Vandermolen gets it. Eric Drexler seems to have gotten it a long time ago. Michio Kaku, Annalee Newitz, and many others have called molecular nanotechnology "the next Industrial Revolution".

When will others get it? Here's a quote from the CRN page on the dangers of molecular nanotechnology:

Molecular manufacturing raises the possibility of horrifically effective weapons. As an example, the smallest insect is about 200 microns; this creates a plausible size estimate for a nanotech-built antipersonnel weapon capable of seeking and injecting toxin into unprotected humans. The human lethal dose of botulism toxin is about 100 nanograms, or about 1/100 the volume of the weapon. As many as 50 billion toxin-carrying devices--theoretically enough to kill every human on earth--could be packed into a single suitcase. Guns of all sizes would be far more powerful, and their bullets could be self-guided. Aerospace hardware would be far lighter and higher performance; built with minimal or no metal, it would be much harder to spot on radar. Embedded computers would allow remote activation of any weapon, and more compact power handling would allow greatly improved robotics. These ideas barely scratch the surface of what's possible.
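The quoted numbers roughly check out on the back of an envelope. In the sketch below, modeling the device as a 200-micron cube and taking the toxin density as about 1 g/cm^3 are my assumptions:

    # Order-of-magnitude check of the CRN figures quoted above.
    device_side_mm = 0.2                       # 200-micron device
    device_vol_mm3 = device_side_mm ** 3       # 0.008 mm^3 per device

    toxin_vol_mm3 = device_vol_mm3 / 100       # payload = 1/100 of device volume
    toxin_mass_ng = toxin_vol_mm3 * 1e6        # 1 mm^3 ~ 1e6 ng at 1 g/cm^3
    print(f"Payload: ~{toxin_mass_ng:.0f} ng") # ~80 ng vs. the 100 ng lethal dose

    devices = 50e9
    total_liters = devices * device_vol_mm3 / 1e6     # 1 L = 1e6 mm^3
    print(f"50 billion devices: ~{total_liters:,.0f} L")  # a few hundred liters

The payload lands near the 100-nanogram lethal dose, and the total volume comes out at a few hundred liters -- more like several suitcases than one, but the same order of magnitude.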

Will weapons like these in the hands of every backwater terrorist and militia lead to a future that is "slow" or "boring"? They could lead to a future where numerous major cities become essentially uninhabitable.

Here's a potentially illuminating quote:

"Revolutions are cruel precisely because they move too fast for those whom they strike."
Jacob Bronowski

16Apr/10

Dispelling Stupid Myths About Nuclear War

In response to discussion in the comments section on my recent post on nuclear war, Dave said:

Really, I mean, honestly, no one is surviving a nuclear war.

This is absolute nonsense. To quote the very first paragraph of Nuclear War Survival Skills, a civil defense manual based on in-depth research at the Oak Ridge National Laboratory:

An all-out nuclear war between Russia and the United States would be the worst catastrophe in history, a tragedy so huge it is difficult to comprehend. Even so, it would be far from the end of human life on earth. The dangers from nuclear weapons have been distorted and exaggerated, for varied reasons. These exaggerations have become demoralizing myths, believed by millions of Americans.

Here's another good quote:

Only a very small fraction of Hiroshima and Nagasaki citizens who survived radiation doses, some of which were nearly fatal, have suffered serious delayed effects. The reader should realize that to do essential work after a massive nuclear attack, many survivors must be willing to receive much larger radiation doses than are normally permissible. Otherwise, too many workers would stay inside shelter too much of the time, and work that would be vital to national recovery could not be done. For example, if the great majority of truckers were so fearful of receiving even non-incapacitating radiation doses that they would refuse to transport food, additional millions would die from starvation alone.

The whole first chapter of the book is filled with refutations of popular myths about nuclear war. When you know the science, these myths seem extremely stupid. Yet millions of people believe them.

Here is one possible fallout distribution pattern, from FEMA:

Notice that the fallout would drift to the east, because the prevailing winds come from the west. That spells good news for people out west. Notice also that wide swaths of the map, perhaps 95% of the area of the western United States, would simply be free of fallout.

Continents are big, big places. We probably do not yet have weapons that can threaten life across their entire areas. (We may get them soon, though.)

For more information on nuclear war, Notre Dame has an Open Courseware page with lectures from Professor Grant Matthews.

Filed under: nuclear, risks
15Apr/10

Interviews with Academics in Robot Ethics

Over at the Moral Machines blog, Colin Allen lists three recent interviews by Gerhard Dabringer on the topic of robot ethics. One of the interviews is with Jurgen Altmann, whom I admire greatly for his academic work on preventive arms control. His book Military Nanotechnology is my favorite book on molecular nanotechnology policy, and I hope that its recommendations will be adopted. A small preview is online, but you'll have to shell out $128 if you want a hard copy. Anyway, here are the interviews:

George Bekey: Professor Emeritus of Computer Science, Electrical Engineering and Biomedical Engineering at the University of Southern California and Adjunct Professor of Biomedical Engineering and Special Consultant to the Dean of the College of Engineering at the California Polytechnic State University. He is well known for his book Autonomous Robots (2005) and is Co-author of the study "Autonomous Military Robotics: Risk, Ethics and Design" (2008).

Jurgen Altmann: Technische Universität Dortmund, a founding member of the International Committee for Robot Arms Control. Since 2003 he has been a deputy speaker of the Committee on Physics and Disarmament of the Deutsche Physikalische Gesellschaft (DPG, the society of physicists in Germany), and he currently directs the project "Unmanned Armed Systems - Trends, Dangers and Preventive Arms Control" at the Chair of Experimentelle Physik III at Technische Universität Dortmund.

John Sullins: Assistant Professor of Philosophy at Sonoma State University. His specializations are philosophy of technology, philosophical issues of artificial intelligence/robotics, cognitive science, philosophy of science, engineering ethics, and computer ethics.

Filed under: risks, robotics
2Feb/10

Risk From Engineered Microorganisms, Strategies for Evolutionary Dominance

From yesterday's list of links, I particularly want to call attention to the rotifer link. The press release is interesting because it shows how animals can survive even when they are exact genetic copies of one another: instead of outcompeting parasites through mutation, they run away from them by going into cryptobiosis. I predict that by 2030 a form of asexual multicellular synthetic life will be created that can defend against parasites through aggressive defense, say silica spines, so that running away isn't even necessary. These organisms will just sit around and reproduce. The primary method of getting rid of them at first will be desiccation, but this will eventually prove useless as they disperse too widely to target.

What many humans don't realize is that we are surrounded by quintillions of organisms with very little genetic diversity that dominate us in terms of biomass and persistence. They are the status quo; we are the aberration. These are organisms that have survived every mass extinction. Culprits include the tardigrades (which can survive outer space), nematodes (absolutely ubiquitous; it is estimated there are between 10^18 (one quintillion) and 10^21 (one sextillion) nematodes worldwide, and they are crawling all over you right now), chaetognaths (considered useful models of basal bilaterians; there are a lot of them in the oceans, really a lot), and so on.

The only reason that these organisms aren't ripping us all to shreds right now is that there have been no synthetic biologists to push them out of evolutionary minima and give them more sensible strategies for total domination. Sorry to be alarmist, but I studied evolutionary biology for a couple of years and that is my opinion. Evolution is terribly poor at traversing local minima to reach a global optimum, and that is really the only saving grace for fragile macroscale multicellular agglomerations like ourselves. Interesting, low-energy-cost evolutionary innovations are rarely combined, because they require several working parts to come together which are maladaptive individually but adaptive in cooperation.

The reason rotifers are interesting is that their lack of genetic diversity makes them a good model for self-replicating machines. The ability to switch into a dormant, armored state (cryptobiosis) seems characteristic of a variety of small organisms, and we can expect this ability to be exploited to the fullest by human-engineered microscale replicators. Distributing many of these replicators across a wide area would eventually create a "viral load" scenario analogous to the one faced by aging humans: so many diverse invaders build up in the body that the workload faced by the immune system in combating nascent infections eventually becomes prohibitive and the system breaks down.

Some scientists have laughed at the idea that human-engineered organisms could dominate microbes that have evolved for billions of years, but I find this dismissal ridiculous. Human-engineered artifacts have already outperformed everything created by evolution in terms of energy density, speed, mass, acceleration, local dominance, and so on. The key point is that evolution is radically dumb (though it has many trials available) and humans are very smart. Let's discuss some of the ways to engineer microorganisms that could not be defeated by the legacy biota.

1. Broad-spectrum biocides: natural organisms use a variety of biocides, but observe that humans have created thousands of highly effective synthetic antibiotics and biocides that evolution never discovered even after four billion years of experimentation.

2. Phage-immune bacteria, for instance bacteria that use genetic programs incompatible with malicious code injection by phages. Phages are the main bacteria-curtailing force on the planet and we depend on them for our survival.

3. Bacteria specifically engineered for immunity to broad-spectrum antibiotics, which produce and secrete those antibiotics as a biofilm. There is even the possibility of release-and-shield, where microbes release the biocide and shield themselves from it long enough for their competitors to be defeated, at which point the shield can be lifted.

4. Sucking them in: microorganisms could coat themselves in a gel shield that absorbs and dissolves nutrients, phages, and rival microbes alike. For instance, the extracellular matrix of animal tissues is much stronger than the slime layer used by bacteria. Cooperative colonial bacteria could scale their extracellular shields to how well-established a colonial region is, devoting stronger shields to the colonial center and weaker shields to the exploratory fringes.

5. Incubation-then-release: many evolutionary minima involve colonial organisms that are evolutionarily strong as large colonies but weak as small ones. By sterilizing a large area, filling it with nutrients, and allowing a founder population to develop (a "mega petri dish"), an important evolutionary minimum could be hopped.

6. Quorum computing: evolution has developed a variety of means for microbes to communicate with one another on a crude level, namely quorum sensing. One of the interesting evolutionary innovations of the last billion years was multicellular organisms that can survive against many uncooperative microbes. By creating microbial superorganisms that effectively cooperate and compute using biocomputation, it may be possible to beat multicellular life at its own game, creating "organisms" miles across that cooperate to defeat all rivals. This is definitely not a near-term risk, but it could be a risk within the lifetimes of many alive today, absent a singleton that guards us at a low level.

7. The last point in particular opens up a very large space for experimentation. A colony that knows how to differentiate its perimeter members from its interior members can activate all sorts of interesting genes in the perimeter members to make life miserable for the organisms next to them; bacteria already do this in a rudimentary way with quorum sensing. As long as a suitable barrier can be erected, the production of a variety of poisons is possible and safe for the majority of the colony. (A toy sketch of this perimeter/interior logic appears below.)
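Here is the toy sketch promised above: each colony member decides whether it is on the perimeter by checking whether any of its neighbors is missing, a crude stand-in for quorum sensing, and only perimeter members switch on a costly toxin gene. The grid and the rule are purely my inventions for illustration.

    # Toy model of perimeter/interior differentiation in a colony.
    colony = {(x, y) for x in range(5) for y in range(5)}   # 5x5 block of cells

    def on_perimeter(cell, colony):
        x, y = cell
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return any(n not in colony for n in neighbors)      # any exposed side

    for cell in sorted(colony):
        state = "toxin ON" if on_perimeter(cell, colony) else "toxin off"
        print(cell, state)    # 16 perimeter cells ON, 9 interior cells off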

Even natural selection in hospitals is enough to create killer bacteria immune to many antibiotics. What about bacteria specifically engineered by smart humans for reproduction and survival?

Filed under: biology, risks