It seems obvious that Singularity Institute-supporting transhumanists and other transhumanists speak completely different languages when it comes to AI. Supporters of SIAI actually fear what AI can do; other transhumanists apparently don't. It's as if SL3 transhumanists view smarter-than-human AI with advanced manufacturing as some kind of toy, whereas we take it seriously. I thought a recent post by Marcelo Rinesi at the IEET website, "The Care and Feeding of Your AI Overlord", would provide a good illustration of the difference:
It’s 2010 — our 2010 — and an artificial intelligence is one of the most powerful entities on Earth. It manages trillions of dollars in resources, governments shape their policies according to its reactions, and, while some people revere it as literally incapable of error and others despise it as a catastrophic tyrant, everybody is keenly aware of its existence and power.
I’m talking, of course, of the financial markets.
The opening paragraph was not metaphorical. Financial markets might not match pop culture expectations of what an AI should look like — there are no red unblinking eyes, nor mechanically enunciated discourses about the obsolescence of organic life — and they might not be self-aware (although that would make an interesting premise for an SF story), but they are the largest, most complex, and most powerful (in both the computer science and political senses of the word) resource allocation system known to history, and inarguably a first-order actor in contemporary civilization.
If you are worried about the impact of future vast and powerful non-human intelligences, this might give you some ease: we are still here. Societies connected in useful ways to "The Market" (an imprecise and excessively anthropomorphic construct) or subsections thereof are generally wealthier and happier than those that aren't. Adam Smith's model of massively distributed economic calculations based on individual self-interest has more often than not surpassed competing models of centralized resource allocation in effectiveness.
This post is mind-blowing to me because I consider it fundamentally un-transhumanist. It essentially says, "don't worry about future non-human intelligences, because they won't be any more powerful than present-day aggregations of humans".
Isn’t the fundamental idea of transhumanism that augmented intelligences and beings can be qualitatively different and more powerful than humans and human aggregations? If not, what’s the point?
If a so-called transhumanist thinks that all future non-human intelligences will basically be the same as what we've seen so far, then why do they even bother to call themselves "transhumanists"? I don't understand.
Recursively self-improving artificial intelligence that surpasses human intelligence seems likely to lead to an intelligence explosion, not more of the same. An intelligence explosion would be an event unlike anything that has ever happened before on Earth — intelligence building more intelligence. Intelligence in some form has existed for at least 550 million years, but it has never been able to directly enhance itself or construct copies rapidly from raw materials. Artificial Intelligence will. Therefore, we ought to ensure that AI has humans in mind, or we will be exterminated when its power inevitably surges.
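To make the difference between "more of the same" and an intelligence explosion concrete, here is a minimal toy sketch of my own. It is not from Rinesi's post or from any SIAI publication, and every constant in it is invented purely for illustration. It contrasts a market-like process, whose growth rate stays fixed, with a self-improving process, whose growth rate scales with its own current capability:

```python
# Toy contrast between ordinary compounding growth (a market-like
# process) and recursive self-improvement, where the growth rate
# itself scales with current capability.
# All constants are invented for illustration only.

def market_step(level, rate=0.03):
    """One step of ordinary compounding: the growth rate is fixed."""
    return level * (1 + rate)

def rsi_step(level, efficiency=0.1):
    """One step of recursive self-improvement: the more capable the
    system already is, the larger the improvement it makes to itself."""
    return level * (1 + efficiency * level)

market = ai = 1.0
for step in range(1, 21):
    market = market_step(market)
    ai = rsi_step(ai)
    if step % 5 == 0:
        print(f"step {step:2d}: market {market:7.3f}   ai {ai:.3e}")
```

Under these made-up assumptions, both processes start from the same baseline; after twenty steps the fixed-rate process has not quite doubled, while the self-referential one has run away past 10^100. The specific numbers mean nothing, but the shape of the curves is the whole point: a system whose improvements feed back into its own ability to improve does not behave like an aggregate of fixed-ability humans.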
If there are any other transhumanists who agree that future superintelligences will be directly comparable to present-day financial markets, please step forward. I’d love to see a plausible argument for that one.