Phil Bowermaster on the Singularity

Over at the Speculist, Phil Bowermaster understands the points I made in “Yes, the Singularity is the biggest threat to humanity”, which, by the way, was recently linked by Instapundit, who unfortunately probably doesn’t get the point I’m trying to make. Anyway, Phil said:

Greater than human intelligences might wipe us out in pursuit of their own goals as casually as we add chlorine to a swimming pool, and with as little regard as we have for the billions of resulting deaths. Both the Terminator scenario, wherein they hate us and fight a prolonged war with us, and the Matrix scenario, wherein they keep us around essentially as cattle, are a bit too optimistic. It’s highly unlikely that they would have any use for us or that we could resist such a force even for a brief period of time — just as we have no need for the bacteria in the swimming pool and they wouldn’t have much of a shot against our chlorine assault.

“How would the superintelligence be able to wipe us out?” you might ask. Well, there’s biowarfare, mass-producing nuclear missiles and launching them, hijacking existing missiles, neutron bombs, lasers that blind people, lasers that burn people, robotic mosquitos that inject deadly toxins, space-based mirrors that set large areas on fire and evaporate water, poisoning water supplies, busting open water and gas pipes, creating robots that cling to people, record them, and blow up if they try anything, conventional projectiles… You could bathe people in radiation to sterilize them, infect corn fields with ergot, sprinkle salt all over agricultural areas, drop asteroids on cities, and use many other approaches that I can’t think of because I’m a stupid human. In fact, all of the above is likely nonsense, because it’s just my knowledge and intelligence that is generating the strategies. A superintelligent AI would be much, much, much, much, much smarter than me. Even the smartest person you know would be an idiot in comparison to a superintelligence.

One way to kill a lot of humans very quickly might be through cholera. Cholera is extremely deadly and can spread very quickly. If there were a WWIII and it got really intense, countries would start breaking out the cholera and other germs to fight each other. Things would really have to go to hell before that happened, because biological weapons are nominally outlawed in war. However, history shows that everyone breaks the rules when they can get away with it or when they’re in deep danger.

Rich people living in the West, especially Americans, have forgotten the ways that people have been killing each other for centuries, because we’ve had a period of relative stability since WWII. Sometimes Americans seem to think like teenagers, who act as if they were immortal. This is a quintessentially ultra-modern and American way of thinking, though most of the West thinks this way. For most of history, people have realized how fragile they were and how aggressively they needed to fight to defend themselves from enemies inside and out. Thanks to our sophisticated electrical infrastructure (which, by the way, could be knocked out by a few EMP-optimized nuclear weapons detonated in the ionosphere), nearly unlimited food, water, and other conveniences present themselves to us on silver platters. We overestimate the robustness of our civilization because it has worked smoothly so far.

Superintelligences would eventually be able to construct advanced robotics that could move very quickly and cause major problems for us if they wanted to. Robotic systems constructed entirely of fullerenes could be extremely fast and powerful. Conventional bullets and explosives would have great difficulty damaging fullerene-armored units. Buckyballs only melt at roughly 8,500 Kelvin, almost 15,000 degrees Fahrenheit. 15,000 degrees. That’s hotter than the surface of the Sun. (Update: Actually, I’m wrong here because the melting point of bulk nanotubes has not been determined and is probably significantly lower. 15,000 degrees is roughly the temperature at which a single buckyball apparently breaks apart. However, some structures, such as nanodiamond, would literally be macroscale molecules and might have very high melting points.) Among “small arms”, only a shaped charge, whose jet moves at around 10 km/sec, could make a dent in thick fullerene armor. Ideally you’d have a shaped charge made out of a metal with extremely high density and temperature, like molten uranium. Still, if the robotic system moved fast enough and could simply detect where the charges were, conventional human armies wouldn’t be able to do much against it, except perhaps to use nuclear weapons. Weapons like rifles wouldn’t work because they simply wouldn’t deliver enough energy in a condensed enough space. To have any chance of destroying a unit that moves at several thousand mph and can dodge missiles, nuclear weapons would likely be required.
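
For anyone who wants to check the arithmetic, here’s a quick back-of-the-envelope script. The only numbers not taken from the paragraph above are standard reference figures I’m supplying myself: roughly 5,800 K for the Sun’s visible surface and roughly 900 m/s for a typical rifle bullet.

```python
# Back-of-the-envelope check of the temperature and velocity claims above.

def kelvin_to_fahrenheit(k):
    """Convert a temperature in kelvins to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

buckyball_breakup_k = 8_500   # temperature quoted above for a single buckyball
sun_photosphere_k   = 5_800   # approximate temperature of the Sun's visible surface

print(f"8,500 K ≈ {kelvin_to_fahrenheit(buckyball_breakup_k):,.0f} °F")  # ≈ 14,840 °F
print(f"Sun     ≈ {kelvin_to_fahrenheit(sun_photosphere_k):,.0f} °F")    # ≈ 9,980 °F

# Why a shaped-charge jet matters more than a rifle bullet: kinetic energy per
# kilogram scales with velocity squared, so a 10 km/s jet carries roughly 100x
# the specific energy of a ~0.9 km/s rifle bullet.
def specific_ke_mj_per_kg(v_m_per_s):
    return 0.5 * v_m_per_s**2 / 1e6

print(f"Shaped-charge jet (10 km/s): {specific_ke_mj_per_kg(10_000):.0f} MJ/kg")  # 50 MJ/kg
print(f"Rifle bullet (~0.9 km/s):    {specific_ke_mj_per_kg(900):.2f} MJ/kg")     # ~0.4 MJ/kg
```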

When objects move fast enough, they become effectively invisible to the naked eye. How fast something needs to move to be unnoticeable varies with its size, but for an object a meter long it’s about 1,100 mph, approximately Mach 1. There is no reason why engines could not eventually be developed that propel person-sized objects to those speeds and beyond. In this very exciting post, I list a few possible early-stage products that could be built with molecular nanotechnology to take advantage of high power densities. Google “molecular nanotechnology power density” for more information on the kind of technology a superintelligence could develop and use to take over the world quite quickly.
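
To give a rough sense of why 1,100 mph is effectively “unnoticeable,” here’s a crude sanity check. The heuristic is entirely my own assumption, not anything rigorous: treat something as unnoticeable if it can cross your whole nearby field of view in less than a typical human visual reaction time of about a quarter second.

```python
# Rough sanity check of the "too fast to notice" figure quoted above.
# Assumption (mine, not from the post): something is effectively unnoticeable
# if it can cross your whole nearby field of view in less time than a typical
# human visual reaction time (~0.25 s is a standard ballpark figure).

MPH_TO_MS = 0.44704

speed_ms = 1_100 * MPH_TO_MS      # ~492 m/s, roughly Mach 1 at sea level
reaction_time_s = 0.25            # typical human visual reaction time
distance_covered = speed_ms * reaction_time_s

print(f"1,100 mph ≈ {speed_ms:.0f} m/s")
print(f"Distance covered before you can react: ≈ {distance_covered:.0f} m")  # ~123 m
```

In other words, by the time you have registered that something is there, it has already covered the length of a football field.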

A superintelligence, not being stupid, would probably hide itself in a quarantined facility while it developed the technologies it needs to do whatever it wants in the outside world. So we won’t know anything about it until it’s ready to go.

Here’s the benefits of molecular manufacturing page from CRN. Remember this graph I made? Here it is:

We’ll still be stuck in the blue region while superintelligences develop robotics in the orange and red regions and gain plenty of ability to run circles around us. There will be man-sized systems that move at several times the speed of sound and consume kilowatts of power. Precise design can minimize the amount of waste heat produced; the challenge is swimming through all that air without being too noticeable. There will be tank-sized systems with the power consumption of aircraft carriers. All of these things are probably possible; it’s just that no one has built them yet. People like Brian Wang, who writes one of the most popular science/technology blogs on the Internet, take it for granted that these kinds of systems will eventually be built. The techno-elite know that these sorts of things are physically possible; it’s just a matter of time. Many of them might consider such technologies centuries away, but for a superintelligence that never sleeps, never gets tired, can copy itself tens of millions of times, and can parallelize its experimentation, research, development, and manufacturing, we might be surprised how quickly it could develop new technologies and products.

The default understanding of technology is that the technological capabilities of today will pretty much stick around forever, except that we’ll have spaceships, smaller computers, and bigger televisions, perhaps with Smell-O-Vision. The future would be nice and simple if that were true, but for better or for worse, there are vast quadrants of potential technological development that 99.9% of the human species has never heard of, and vaster domains that 100% of the human species has never even thought of. Superintelligence will happily and casually exploit those technologies to fulfill its most noble goals, whether those noble goals involve wiping out humanity or curing all disease, ending aging, and creating robots to do all the jobs we don’t feel like doing. Whatever its goals are, a superintelligence will be most persuasive in arguing for how great and noble they are. You won’t be able to win an argument against a superintelligence unless it lets you. It will simply be right and you will be wrong. One could even imagine a superintelligence so persuasive that it convinces mankind to commit suicide by making us feel bad about our own existence. In that case it might need no actual weapons at all.

The above could be wild speculation, but the fact is we don’t know. We won’t know until we build a superintelligence, talk to it, and see what it can do. This is something new under the Sun; no one has the experience to conclusively say what it will or won’t be able to do. Maybe even the greatest superintelligence will be no more powerful than your typical human (many people seem to believe this), or, more likely, it will be much more powerful in every way. To confidently say that it will be weak is unwarranted; we lack the information to state this with any confidence. Let’s be scientific and wait for empirical data first. I’m not arguing with extremely high confidence that superintelligence will be very strong. I just have a probability distribution over possible outcomes, and doing an expected value calculation over that distribution leads me to believe that the prudent utilitarian choice is to worry. It’s that simple.
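
To make the expected value point concrete, here’s a minimal sketch. Every number in it is a made-up placeholder, chosen only to show the shape of the reasoning, not an actual estimate of anything:

```python
# Minimal illustration of the expected-value argument above.
# The probabilities and loss figures are hypothetical placeholders, chosen only
# to show the structure of the reasoning, not actual estimates.

outcomes = {
    # outcome: (probability, disutility on an arbitrary scale)
    "superintelligence never built":        (0.30,         0),
    "built, roughly human-level impact":    (0.20,         0),
    "built, vastly more capable, friendly": (0.40,         0),
    "built, vastly more capable, hostile":  (0.10, 1_000_000),
}

expected_loss = sum(p * loss for p, loss in outcomes.values())
print(f"Expected loss: {expected_loss:,.0f}")  # 100,000, dominated by the rare bad outcome
```

Even when the bad outcome is assigned a small probability, its enormous disutility dominates the sum, which is exactly why the prudent utilitarian choice is to worry in advance.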

Remember, most transhumanists aren’t afraid of superintelligence because they actually believe that they and their friends will personally become the first superintelligences. The problem is that everyone thinks this, and they can’t all be right. Most likely, none of them are. Even if they were, it would be rude for them to clandestinely “steal the Singularity” and exploit the power of superintelligence for their own benefit, possibly at the expense of the rest of us. Would-be mavericks should back off and help build a more democratic solution, one that ensures that the benefits of superintelligence are equitably distributed among all humans and perhaps (I would argue) some non-human animals, such as other vertebrates.

Coherent Extrapolated Volition (CEV) is one idea that has been floated for a more democratic solution, but it is by no means the final word. We criticize CEV and entertain other ideas all the time. No one said that AI Friendliness would be easy.

Comments

  1. mrsizer

    Why is a democratic solution desirable?

    What’s wrong with a Super AI keeping a couple thousand humans alive (in comfort as defined by the humans) until the Earth is consumed by the Sun?

    Isn’t that a better outcome for humanity, as a whole, than letting billions destroy themselves?

    Who cares how “fair” or “democratic” the initial selection process is? After a million or so years, it really doesn’t make any difference.

  2. Khannea Suntzu

    It’s nice to see you so eloquently explain what I said a few years back (and what was regarded as needlessly alarmist back then).

  3. ryan

    I would say most transhumanists aren’t afraid of superintelligence not because they’re all selfishly, imminently awaiting their own ‘ascension’, but because once (if) we have superintelligence(s), our thoughts and feelings toward it, and our long-term future in general, don’t matter. You said it yourself: you can’t win an argument against a superintelligence; by definition, it swims cognitive laps around you.

    The idea that you should fear such an entity is stupid; it is you, without your bias, without your prejudices, without your charming but ultimately limiting and obsolete biological evolutionary motivations. How can you fear yourself?

    If you want to define yourself as part and parcel of your evolutionary motivations, that’s fine, but then you must concede that your thoughts are irrational and you have no business thinking -anything- about the plans of superintelligences.

  4. Dave

    First of all, I find

    “biowarfare, mass-producing nuclear missiles and launching them, hijacking existing missiles, neutron bombs, lasers that blind people, lasers that burn people, robotic mosquitos that inject deadly toxins, space-based mirrors that set large areas on fire and evaporate water…”

    highly unlikely courses of action for paperclip maximizers that (paraphrasing) “don’t love us or hate us, but just want to use our atoms for something else”, because 1) they’d probably screw up the atoms they wanted to use in the process, and 2) it would probably be easier to take these atoms from somewhere else, especially given the fact that humans aren’t made out of anything special.

    Secondly, and more importantly, these are the types of scenarios you can imagine a terrorist using an AI for. This is exactly why we need to stop being so scared of these things and develop friendly AIs as soon as possible to defend us from these wackos. This is quite feasible, as long as we don’t use the kind of ridiculously simplified utility functions often used as strawman scare tactics (e.g., smile maximizers). Instead, we need complex utility functions, trained to predict human preferences. These utility functions should be AIs themselves.

    My point is this: AI is the only thing that will actually save us from all of the above disaster scenarios.

  5. A lot of fascinating ideas, and a great graph! Thanks very much.

    I tend to see the greatest danger coming from the left side of your graph… at the nanoscale and below.

    Superior beings are likely to ignore humans unless humans become a nuisance to them. Should that occur at a time when humans have huddled together in megacities, the job of extermination becomes much simpler. Metabolic toxins that work much like cyanide but are odourless and invisible should do the trick in either water or air supplies.

    They probably won’t bother to go for total extermination, just get rid of the trouble spots and most annoying groups.

    Just hope the Lifeboat Foundation or the Singularity Institute can come up with some good ways for some of us to escape their notice long enough to make our escape. ;-0

  6. Matt

    “Superintelligence” is no threat at all, and any belief that it is is just silly! There is no evidence at all that a “superintelligence” is even possible, and it is probably impossible for reasons we can’t understand yet. The whole “Singularity” theory has massive holes in it. No software can re-design a more advanced version of itself, because that would require higher-level wisdom and judgement that it doesn’t have yet. Furthermore, as designs get more advanced, the engineering requirement goes up exponentially and astronomically. Any “AI” would get stuck in a “tar pit” and progress would slow to a crawl (read the book The Mythical Man-Month, which explains this very valid concept). Any bugs or errors introduced into the AI by the human creators of the first model would not go away in subsequent versions, but would spread like a cancer and bring the whole thing down. In new versions, this could cause more catastrophic bugs to occur as tiny bugs in the earlier versions are magnified many times over.

    Any belief that AI can solve ANY and ALL problems that must be solved in order to keep advancing itself is just pure folly. As complexity in any system increases, the number of permutations that must be tested and accounted for increases so astronomically that it EASILY surpasses the ability of ANY intelligence to exercise perfect wisdom. Another point is that all intelligent beings are more or less “BLIND”: any intelligent entity inherently lacks any true ability to objectively evaluate itself. If something has “flawed” (not perfect) judgement, it makes false assertions about itself and, in evaluating its own perceptions, makes false assertions and interpretations about the world. The only thing that prevents this is a social society that provides checks and balances to keep someone from wandering off into crazy-land. The same principle would apply to AI. If it wanders off in its thinking without being checked by humans (which would be required for a super-intelligence to exist in the first place), that AI may well develop some incredibly whacked-out beliefs and perceptions that would cause it to truly screw itself over in the next design, and from there ANYTHING could happen: it could crash, hang, freeze up, believe that it must solve impossible problems to continue (there IS such a thing as an unsolvable problem), or attempt to solve something that CAN be solved but would take millions of years to compute, thus hanging itself. These kinds of failures DON’T HAPPEN in a social society where there are outside observers to correct and guide someone who sets off on an errant path. But an AI that is improving itself WITH NO OVERSIGHT has NOTHING to correct it if it begins moving in an errant direction. It might even redesign itself into something far less intelligent because of a minor flaw in its own judgement, with no oversight to correct it.

    Human beings are UTTERLY incapable of designing pretty much anything that does not have a “fatal flaw” under the right circumstances. Computers crash, space shuttles blow up, laptop batteries ignite and burn themselves up, well-thought-out business plans fail. EVERYTHING we make ultimately fails at some point, and the only reason our technology works is that we are CONSTANTLY correcting and solving all the problems that crop up. If we didn’t take care of everything, all of our technology would very rapidly grind to a halt and stop working. Until we humans become smart enough to make PERFECT designs that will never fail without our help, how can an AI that grows super-intelligent exist at all? It would have to be engineered so extremely perfectly from the first version that, I’m sorry, we are frankly not smart enough to do it that well. As soon as our existing technology stops failing, then you might have an argument!

  7. janus

    You are overstating what is needed to level human civilization, especially the Western one. All you have to do is:
    1) get access to the virtual switchboard of the electric utility and cause a power overload sequentially on all portions of the system, burning out all transformer stations (there are so many transformers that you can’t realistically replace all of them unless you have a few years to wait);

    2) contaminate large populations with cholera and/or Ebola.

    Within several months, the death toll and lack of societal cohesion will reduce the world population to caveman levels, from which it will not be able to recover.
