Yes, it’s come to that point. The word “Singularity” has been losing meaning for a while now, and whatever semblance of a unified or coherent definition there ever was has long since faded over the horizon. Rather than any single idea, “Singularity” has become a signifier for a general cluster of ideas, some interrelated; some, blatantly not. These ideas include: exponential growth, transhuman intelligence, mind uploading, singletons, the popularity of the Internet, the feasibility of life extension, some developmentally predetermined “next step in human evolution”, the feasibility of strong AI, the feasibility of advanced nanotechnology, some odd spiritual-esque transcension, and whether human development is primarily dictated by technological or social forces. Quite frankly, it’s a mess.
Anytime someone gets up in front of an audience and starts talking about the “Singularity” without carefully defining exactly what they mean and don’t mean, each audience member will think of an entirely different set of concepts, draw their own opinions from that unique set, and interpret everything they hear afterward in light of those opinions, which may not even be based on the same premises as those of the person sitting next to them. For an audience of 50 people, you could very well have 50 unique idea sets that each listener personally thinks represents the Singularity. For topics this challenging and sometimes confusing, clarity and specificity are necessities, so we might as well discard the overused word “Singularity” and talk about what we actually mean using more specific terms. It helps keep things distinct from one another.
Even more confusing is that there are technologies, and then there are plausible or possible consequences of those technologies – two very distinct things. Both lines of inquiry can cause heated argument, even when everything is perfectly delineated! But the delineation is still important, so that after the argument is over, you actually know what you were arguing about. Below, I’m going to slice up various concepts associated with the term “Singularity” into ideas that can actually be examined individually:
1) Exponential growth: it sure looks to me like technological progress is accelerating, and on many objective metrics it is, but others may disagree. But guess what: whether or not progress is accelerating is largely irrelevant to the feasibility of mind uploading, cryonics, or superintelligence. It may influence timeframes, but not feasibility in the abstract sense. When acceleration skeptics say, “technological progress is not accelerating; therefore, all this other transhumanist stuff is impossible”, they’re missing the point: if a given technology is feasible, it is likely to be invented eventually unless globally suppressed, and the question of when is entirely separate. In principle, transhuman intelligence could be created during a time of accelerating progress, constant progress, or even stagnation. This was mentioned at the last Singularity Summit.
2) Radical life extension: again, radical life extension (people living to 100, 200, 300, and beyond) seems very plausible to me, and I believe we will experience it ourselves in our lifetimes, unless an existential disaster occurs. A Berkeley demographer found that the maximum lifespan of human beings is increasing at an accelerating rate. However, life extension has very little, if anything, to do with the Singularity, other than that the Singularity is sometimes associated with technological progress, and technological progress may result in radically extended lifespans. This is somewhat like how house mice are associated with raccoons because both live in areas dense with human populations.
3) Mind uploading: in his “Rapture of the Geeks” article, which I’m not even going to link, Cory Doctorow made the mistake of thinking that the “Singularity” is all about the feasibility of mind uploading and that Singularity activists’ primary goal is to upload everyone into a computer simulation. This is confusion caused by not looking hard enough – you’re busy, you have to go protest copyright law or whatever, you have a meeting to get to, blah blah blah, so you read a few web pages that give you a totally skewed view of what you’re trying to criticize and come to the conclusion that “Singularity” = mind uploading. You hope to get away with it because you realize this is cutting-edge stuff and most people don’t know the difference between an uploaded superintelligence and a de novo superintelligence, for instance, so you just go for it. Bad idea. Mind uploading and the Singularity (my definition: transhuman intelligence) are totally different things. Transhuman intelligence might lead to uploading, but they’re not equivalent.
4) Feasibility of strong AI: this is rightly closely associated with the Singularity, but it’s still not the same thing. You can be a strong AI refusenik and still advocate intelligence enhancement. You can want to die at age 80, believe that progress is not accelerating, think that pro-mind-uploading people are crazy, and still advocate “the Singularity”, because the Singularity is supposed to mean intelligence enhancement: that’s it! The feasibility of strong AI is more closely related to the Singularity than the topics above, because there is a large group of Singularity activists (aka Singularitarians, spell it right) trying to build strong AI… but if you’re anti-strong-AI and think that makes you anti-Singularity, you should think again, and recognize that the Singularity and strong AI are not the same thing. You can have a Singularity with enhanced human intelligence and no AI involved at all. It’s just that many Singularity activists think that AI is the easiest way to achieve intelligence enhancement – the Singularity. We could change our minds with significant persuasion – we chose AI because it looks like the easiest and safest path, not because we have some special AI fetish. It’s a means to an end, and that’s all.
5) Transhuman intelligence: what “the Singularity” was always supposed to mean, but which has been radically, radically diluted as of late. Complicating matters is that many people have different views of what transhuman intelligence is supposed to be, so even if we shave the term down to just this, there is still confusion. Let me put it this way: transhuman intelligence is not a specific thing, it’s a space of possible things, encompassing human intelligence enhancement through drugs, gene therapy, brain-computer interfacing, brain-brain interfacing, and maybe other techniques we haven’t even considered. It also encompasses AI, but not present-day human networking or the Internet – these are simply new ways of arranging human-level intelligence. (Legos can’t be made into brick-and-mortar buildings, no matter how you configure them.) To me, transhuman intelligence is completely inevitable in the long run – it will be developed; the questions are how, by whom, and when.
So, five different things. Distinct, but frequently conflated. If you want to critique or support something, focus on that specific thing: don’t confuse yourself and others by smearing them all together! And if you’re planning on attending the next Singularity Summit in San Francisco and aren’t already thoroughly familiar with the ideas surrounding the Singularity, I suggest you sit near me so I can translate, because I doubt most of the speakers will have a very coherent or well-defined view of the Singularity either. Stewart Brand, for instance, says, “The Singularity is a frightening prospect for humanity. I assume that we will somehow dodge or finesse it in reality” – but what does he actually mean? It’s incredibly difficult to tell. I’m not picking on Brand specifically here, just repeating the original point of this post: that for every 50 people, you may very well have 50 completely different conceptions of what the Singularity is.