“I must say that this is the greatest factor — the way in which the expedition is equipped — the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order — luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time — this is called bad luck.”
– from The South Pole, by Roald Amundsen
Space stations or lunar settlements won’t help mankind avoid numerous types of extinction risks. This is because 1) any colony would remain near-completely dependent on Earth unless it were very large and in possession of advanced nanotechnology, and 2) the greatest danger, superintelligence, could easily reach its long arm into space and crush any human colony if it wanted to.
This is not a challenge we can run away from. We have to stay here and fix it. Space will not swoop down and save the day.
Regarding self-replicating threats, it’s likely that a deep underground self-sufficient bunker would be nearly equivalent in protective value to a space station, not to mention thousands of times cheaper. Earth offers air, water, organic and inorganic building materials, radiation shielding, proximity to other humans, and many other amenities. Even if you completely nuked the face of the planet, it would still remain the most habitable neighborhood in the solar system, hands down. This might have something to do with the fact that we descend from a lineage that has lived here and adapted to the environment for billions of years.
When dealing with extinction risks, we have to be practical, not indulge fanciful visions of expensive space stations or lunar bases. That’s reality.
Continuing with the practical viewpoint, we have to get off our high horse and realize that another species could come along that will easily kick our asses. This species will not come from the skies but from our labs. Ignoring this threat is nothing more than anthropocentric conceit. All the nukes and guns and electromagnetic pulses in the world won’t save us from something that’s fundamentally smarter than we are. The new species will simply anticipate everything we could come up with to fight against it and plan far in advance to counteract those measures. By the time we realize we’re under attack, it will be way too late. No non-brain-damaged human would lose a battle of wits with a Homo erectus, and no Neo sapiens or Colossus will lose a battle of wits with humans.
Accepting the threat of superintelligence involves 1) understanding that human intelligence is finite, understandable, and ultimately engineerable, just like the body (surprise!), and 2) recognizing that humans are not local instantiations of some Turing-complete Godhead that intelligent species lapse into the second they’re smart enough to take over their own planet, but are actually close to the dumbest a species can be and still establish a civilization. Incremental evolutionary processes don’t provide huge intelligence boosts, so Homo sapiens is just a minor tweak on what came before us, a minor tweak just good enough to launch us into the civilizational feedback loop of local dominance. A major tweak would put us into an entirely new realm, but most thinkers seem to assume that such a major tweak will just result in more entities essentially the same as us, but with bigger, bald heads, a propensity to speak in calm, authoritative language, and shiny silver/purple clothing. But this is just another monkey in a suit, not a new being.
Asserting with idle confidence that superintelligence won’t be here for centuries, or ever, is just another repeat of anthropocentric conceit. This is over-worshipping intelligence, just as heavier-than-air flight was once over-worshipped (“they’re trying to be like angels”), life was over-worshipped (“humans will never be able to create life in a lab”), the Sun was over-worshipped (“mankind will never be able to harness the power that illuminates the Sun”), the division between the heavens and Earth was over-worshipped (“we’ll never fly to the Moon”), and so on. We pretend that mysteriousness is a property of the territory rather than the map, in a (sometimes subconscious) effort to protect the last segments of the natural world from being understood scientifically. Why do you think Star Wars was so popular, even among scientists? The mysterious “Force” trumped the most advanced technology in the Galaxy. In real life, technology wins, not the make-believe psychic force. Luke gets hit by a heat-seeking missile before he’s even near the Death Star. He goes boom.
But yes, let’s keep developing cybernetics, synthetic life, space travel, biotechnologies, and advanced robotics. We humans will always be on top, and when we create superintelligence, the magic of market forces and man-machine interfacing will ensure that it embodies our values. No need to panic, be alarmist, apocalyptic, or deluded. Everything will be just fine.