What is remarkable is that some, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, seem to argue that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster. As I write this, Mumbai is under attack by unidentified terrorists who are sacrificing their lives to kill, what, 125 people? I can envision a scenario in 2020 or 2025 that is far more destructive, resulting in the deaths of not hundreds, but millions or even billions of people. There are toxins with an LD50 of one nanogram per kilogram of body weight. A casualty count exceeding World War II could theoretically be achieved with just a single kilogram of toxin and several tonnes of delivery mechanisms. We know that complex robotics can exist on the microscopic scale (microwhip scorpions, parasitic wasps, fairyflies and the like); once we can scan and construct at the atomic level, merely copying these designs, without any original engineering, becomes possible. Enclosing every human being in an active membrane may be the only imaginable defense against this threat. Offense will be easier than defense, because offense needs to succeed only once, even after a million failures.
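The casualty arithmetic above can be checked on the back of an envelope. A minimal sketch, where the body mass and the World War II death toll are illustrative assumptions, not sourced figures:

```python
# Back-of-envelope check: how many theoretical lethal doses are in
# 1 kg of a toxin with an LD50 of 1 ng per kg of body weight?
# All specific numbers here are assumptions for illustration.

LD50_NG_PER_KG = 1.0      # assumed lethal dose: 1 nanogram per kg body mass
BODY_MASS_KG = 70.0       # assumed average adult body mass
TOXIN_KG = 1.0            # one kilogram of toxin

lethal_dose_ng = LD50_NG_PER_KG * BODY_MASS_KG   # 70 ng per person
toxin_ng = TOXIN_KG * 1e12                       # 1 kg = 10**12 ng
theoretical_doses = toxin_ng / lethal_dose_ng    # ~1.4e10 doses

WW2_DEATHS = 7e7          # roughly 70 million, a commonly cited estimate

print(f"theoretical lethal doses in 1 kg: {theoretical_doses:.2e}")
print(f"exceeds WWII death toll: {theoretical_doses > WW2_DEATHS}")
```

Even under these idealized assumptions (perfect delivery, no degradation), one kilogram yields on the order of ten billion lethal doses, which is why the delivery mechanism, not the toxin, dominates the mass budget.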
Instead of just saying "we're screwed", the clear course of action seems to be to contribute to the construction of a benevolent singleton. Given current resources, this should be possible in a few decades or less. Those who think that things will fall into place under the current political and economic order are simply fooling themselves, and putting their lives at risk.
By "benevolent singleton", I mean either "an IA'd (intelligence-amplified) fundamentally considerate and kind human whose intelligence is raised above H. sapiens to the degree that H. sapiens is above H. heidelbergensis, and after that point, whatever happens, happens", or "a self-improving Friendly AGI". Nothing so immensely, unimaginably complicated. If the latter seems hundreds of years away in your estimation, then perhaps the former is not quite so far.