Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.


Challenge of Self-Replication Reprise

Because I ran into it in a random Google search and still like it, I'm reposting some content from a post I made exactly five months ago, "The Challenge of Self-Replication":

What is remarkable is that some, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, seem to argue that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster. Currently Mumbai is under attack by unidentified terrorists; they are sacrificing their lives to kill, what, 125 people? I can envision a scenario in 2020 or 2025 that is far more destructive, resulting in the deaths of not hundreds, but millions or even billions of people. There are toxins with an LD50 of one nanogram per kilogram of body weight. A casualty count exceeding World War II could theoretically be achieved with just a single kilogram of toxin and several tonnes of delivery mechanisms. We know that complex robotics can exist on the microscopic scale: microwhip scorpions, parasitic wasps, fairyflies, and the like. Merely copying these designs without any intelligent thought will become possible once we can scan and construct at the atomic level. Enclosing every human being in an active membrane may be the only imaginable solution to this challenge. Offense will be easier than defense, because offense needs only to succeed once, even after a million failures.
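The single-kilogram claim can be sanity-checked with simple arithmetic. A minimal sketch, assuming an average body mass of 70 kg, perfect delivery of every dose, and a rough WWII death toll of 70 million (all three are my assumptions, not figures from the original post):

```python
# Back-of-the-envelope check: one kilogram of a toxin with an LD50 of
# 1 ng/kg yields far more potential lethal doses than the WWII death toll.
# Body mass, perfect delivery, and the WWII figure are assumptions.

LD50_NG_PER_KG = 1.0      # lethal dose: 1 nanogram per kg of body mass
BODY_MASS_KG = 70.0       # assumed average adult body mass
TOXIN_KG = 1.0            # total toxin available
WWII_DEATHS = 70e6        # rough total WWII death toll (assumed)

NG_PER_KG = 1e12                                 # 1 kg = 10^12 ng
lethal_dose_ng = LD50_NG_PER_KG * BODY_MASS_KG   # 70 ng kills one person
potential_doses = TOXIN_KG * NG_PER_KG / lethal_dose_ng

print(f"Lethal doses in 1 kg: {potential_doses:.2e}")                   # ~1.43e+10
print(f"Multiple of WWII deaths: {potential_doses / WWII_DEATHS:.0f}")  # ~200x
```

Under these assumptions, one kilogram holds roughly fourteen billion lethal doses, about two hundred times the WWII toll; the bottleneck is entirely in the "several tonnes of delivery mechanisms", not in the toxin itself.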


Instead of just saying, “we’re screwed”, the clear course of action seems to be to contribute to the construction of a benevolent singleton. Given current resources, this should be possible in a few decades or less. Those who think that things will fall into place with the current political and economic order are simply fooling themselves, and putting their lives at risk.

By "benevolent singleton", I mean either "an IA'd (intelligence-amplified), fundamentally considerate and kind human whose intelligence is actually improved above H. sapiens to the degree that H. sapiens is above H. heidelbergensis, and after that point, whatever happens, happens", or "a self-improving Friendly AGI". Nothing so immensely, unimaginably complicated. If the latter seems hundreds of years away in your estimation, then perhaps the former is not quite as far.

Filed under: futurism
Comments (1)
  1. So the next question is whether to throw everything we have behind either AI or IA, in the hope of reaching the Singularity and creating a singleton that can manage and protect the world and humanity. So which do we get behind? AI or IA?
