On Monday I proposed that a demonstration of extremely destructive self-replicating technology may be necessary before the world takes its risks seriously and starts devoting the attention and money needed to develop comprehensive safeguards. I made the proposal very hesitantly, framing it as perhaps the only way to get global attention on this crucially important issue. Even so, some said I had “lost it”, and that such talk “polarises opinions on Transhumanism”.
1) Prevention of global catastrophic risks is only tangentially related to transhumanism. When I was thinking about the proposal, I had nothing about transhumanism in mind. My blog tagline is “Transhumanism, AI, nanotechnology, and extinction risk” because these are separate topics: interrelated, perhaps, but separate. Sometimes I talk about just transhumanism; sometimes just nanotechnology; sometimes the relationship between the two. On Monday I was talking about just extinction risk, and what can be done to lower it.
2) Extinction risk prevention is more important than transhumanism. If we don’t survive the 21st century, the future will lack not only cyborgs, life extension, space stations, and all that other exciting stuff, but also such mundane things as smiles, good food, jobs, picnics, writing, and just about everything else uniquely associated with the human species. Based on all the conversations and reading I’ve done, I consider humanity’s risk of wiping itself out in the next few decades to be substantial, and therefore consider it worthwhile to develop outside-the-box solutions to reduce that risk. Transhumanism and biological modification are among the fun activities you get to pursue if your species survives.
That absolutely includes proposals to test bioweapons in controlled environments to determine their destructive capacity.
Take a look at the situation. Two of the greatest living scientists, Stephen Hawking and Martin Rees, have spoken out about the extreme danger of human extinction, and both the public and academia have failed to listen. If people won’t listen to them, who else is there? Paris Hilton? Wilford Brimley? Our effort to educate the public about the dangers of advanced technology is failing, and we need solutions fast.
No one could deny the destructive power of the atomic bomb after the Trinity test. But people have difficulty imagining newer technologies with even greater destructive potential. Discussion of the regulation and control of post-nuclear weapons of mass destruction should be front and center: it should be the first topic raised at Presidential debates, and every leading university should have an institute devoted to studying self-replication in biological and nonbiological systems, to understand how difficult such systems are to build and how to contain them in the event of accidental release.
Instead, the possibility of out-of-control self-replicators is widely seen as a joke, a science-fictional plot device. I disagree: the possibility is real, so real that it could threaten human life by the year 2015 or earlier. Attention must be focused on the issue immediately; I am merely brainstorming ways to do that.
To temper my suggestion, I’ve realized the demonstration could be done in a far more limited fashion than originally envisioned. As John Hunt remarked in the comments thread for Monday’s post, the test could be carried out in a Biosafety Level 4 containment facility, or feature something more mundane than wiping out all the life on an island, for instance “having little robots cracking all of the eggs on a small island and using their juices for fuel”.
Unfortunately, the point here is to make people afraid, and we have every reason to be rationally afraid. Being rationally afraid doesn’t mean panicking; it means taking global catastrophic risk seriously. If people aren’t afraid, then they disagree with us either on technical grounds (if so, we welcome these arguments) or on emotional grounds (thinking about human extinction is too unpleasant). Representatives of the latter group are very common, but I have no sympathy for them: if ignoring the prospect of human extinction is necessary for you to feel pleasant on a daily basis, then ignore it, but tolerate the brainstorming of your more pessimistic peers on how to lower the risk.