Note: This is just a casual response; it isn’t meant to be anything incredibly formal. It’s not on behalf of all transhumanism, or anything like that. It’s just a long but casual response to Lanier’s paper.
Well, it has been seven and a half years since Jaron Lanier’s “One Half of a Manifesto”, but I thought, why not respond to it right this very second? Better late than never. This response is for Mr. Lanier and anyone else who is interested.
First, the introduction:
Jaron Lanier, a pioneer in virtual reality, musician, and currently the lead scientist for the National Tele-Immersion Initiative, worries about the future of human culture more than the gadgets. In his “Half a Manifesto” he takes on those he terms the “cybernetic totalists”, who “seem to not have been educated in the tradition of scientific skepticism. I understand why they are intoxicated. There is a compelling simple logic behind their thinking and elegance in thought is infectious.”
I label myself a “so-called cybernetic totalist” in some of the responses that follow because I meet the criteria for the term as used by Mr. Lanier, although I object to its rhetorical implications.
In my response below I will argue that scientific skepticism has been duly applied to our claims. Since many arguments will never be settled until the realities we discuss are actually demonstrated, that’s where a lot of the current focus is.
We are not intoxicated, only responding in a rational way to what we see to be the facts. If we disagree, it is on certain propositions which require experimental verification or refutation.
If we are intoxicated by anything, it is the tremendous amount of humanitarian value that recursively self-improving AI has to offer. As Nick Bostrom says in “Ethical Issues in Advanced Artificial Intelligence”:
It is hard to think of any problem that a superintelligence could not either solve or at least help us solve. Disease, poverty, environmental destruction, unnecessary suffering of all kinds: these are things that a superintelligence equipped with advanced nanotechnology would be capable of eliminating. Additionally, a superintelligence could give us indefinite lifespan, either by stopping and reversing the aging process through the use of nanomedicine, or by offering us the option to upload ourselves. A superintelligence could also create opportunities for us to vastly increase our own intellectual and emotional capabilities, and it could assist us in creating a highly appealing experiential world in which we could live lives devoted to joyful game-playing, relating to each other, experiencing, personal growth, and to living closer to our ideals.
Basically, creating a human-friendly superintelligence would be utterly excellent. It would be even more important than medical research for curing cancer, because a superintelligence could accomplish that, and more. (Of course, relative difficulties come into play here.) It could even help us defend against the creation of unfriendly superintelligence. And what could be more awesome than that?
There is a real chance that evolutionary psychology, artificial intelligence, Moore’s Law fetishizing, and the rest of the package, will catch on in a big way, as big as Freud or Marx did in their times. Or bigger, since these ideas might end up essentially built into the software that runs our society and our lives.
We can throw out the Moore’s law fetishizing, which Ray Kurzweil has been accused of (unjustly, in my view, but that’s another argument). Honestly, even if the availability of better computers grinds to a halt tomorrow, we’re still interested in creating superintelligence. I agree there is some “new philosophical package”, which we adopted a few years back, but as a representative of said philosophy, I claim we can do without obsession over Moore’s law.
Now back to Mr. Lanier:
If that happens, the ideology of cybernetic totalist intellectuals will be amplified from novelty into a force that could cause suffering for millions of people.
Oh, I hope not. We want to amplify our novelty into something that makes life better for seven billion people. You are invited to come along and help us on this, as well.
The greatest crime of Marxism wasn’t simply that much of what it claimed was false, but that it claimed to be the sole and utterly complete path to understanding life and reality.
Yeah, over 100 million people died that way, including some of my family. But the “cybernetic totalism” you worry about does not claim to be the “sole and utterly complete path to understanding life and reality”, so I’m afraid that’s a bit of a straw man. Sorry to be pithy here, I’ll address the main accusations below.
Cybernetic eschatology shares with some of history’s worst ideologies a doctrine of historical predestination.
Will address this below…
There is nothing more gray, stultifying, or dreary than a life lived inside the confines of a theory.
What about the theory of gravity? Hahaha. Seriously though, I don’t think you can point to certain people and say they’re “living life inside the confines of a theory”. I don’t think the distinction is psychologically meaningful. Everyone lives life inside the confines of their conception of reality; the question is just how much you agree with that conception. If you disagree, you say slightly odd things like the sentence quoted above, which sort of dehumanizes people and makes them seem like they’re being programmed by a cult leader. But humans are humans, and the “ideology” of yesterday can become the common sense of tomorrow. It all really depends on your perspective.
I subscribe to many of the tenets of what you call “cybernetic totalism” and I assure you my life is multi-colored, intellectually enriching, and non-dreary. Of course, some of my acquaintances might argue otherwise.
Let us hope that the cybernetic totalists learn humility before their day in the sun arrives.
Yes, I’m looking forward to learning more about that. If I sound pithy or condescending, just figure I’m doing it to amuse myself in writing this long response, and don’t really mean anything by it.
Now the body of the essay begins:
For the last twenty years, I have found myself on the inside of a revolution, but on the outside of its resplendent dogma. Now that the revolution has not only hit the mainstream, but bludgeoned it into submission by taking over the economy, it’s probably time for me to cry out my dissent more loudly than I have before.
Is a small subset of transhumanists in control of the economy already? I think you are exaggerating the power of so-called “cybernetic totalists” greatly here.
And so I’ll here share my thoughts with the respondents of edge.org, many of whom are, as much as anyone, responsible for this revolution, one which champions the ascent of cybernetic technology as culture.
Edge.org is a great group, I hope they have been reading the paper with interest over the last seven years. I am also waiting next to the telephone for my invitation to join them. Real soon now.
The dogma I object to is composed of a set of interlocking beliefs and doesn’t have a generally accepted overarching name as yet, though I sometimes call it “cybernetic totalism”. It has the potential to transform human experience more powerfully than any prior ideology, religion, or political system ever has, partly because it can be so pleasing to the mind, at least initially, but mostly because it gets a free ride on the overwhelmingly powerful technologies that happen to be created by people who are, to a large degree, true believers.
It isn’t really a dogma. The “interlocking beliefs” include an interest in evolutionary psychology, heuristics and biases, statistical inference, and the challenge of human-friendly AI. It’s unfair to call us true believers, because we’re not. From the Wikipedia entry for The True Believer:
Part of Hoffer’s thesis is that movements are interchangeable and that fanatics will often flip from one movement to another.
Singularitarians, like the author of these words, aren’t fanatics or True Believers, and we tend to regard our goals as non-interchangeable. We are focused on ensuring that general AI is safe for humans, and that its actions are widely seen as beneficial. The goal is largely technical. Subcultures sometimes form around groups of people who work together, and a loosely connected subculture does pursue safe AI in a reasonably unified manner, but we aren’t fanatics. A fanatic is “one who can’t change his mind and won’t change the subject”, but I, and others in this camp, are open-minded and willing to discuss any number of subjects. Surely, I would bore quite a few people if I continuously went on about the likely cognitive differences between human-equivalent AIs and human beings. Still, I think that the way humanity addresses the AI challenge is a matter of life and death.
I could go on about the differences between AI advocates and True Believers, but I already posted Steven’s “Rapture of the Nerds, Not” the other day.
I would very much like to keep a catalog of those calling Singularitarians “True Believers”, however. Please, if there’s anyone else in the audience who thinks we are, please step forward. So far on my list, there’s Dale Carrico, James Hughes, Greg Egan, and John Smart. I’m here to kick ass, but especially to take names.
It’s exhausting to know so many people who think you’re a True Believer, but it helps to have confidence in the face of social ridicule. All the Disney movies I watched as a kid taught me to believe in myself. At least I have that.
Edge readers might be surprised by my use of the word “cybernetic”. I find the word problematic, so I’d like to explain why I chose it. I searched for a term that united the diverse ideas I was exploring, and also connected current thinking and culture with earlier generations of thinkers who touched on similar topics. The original usage of “cybernetic”, as by Norbert Wiener, was certainly not restricted to digital computers. It was originally meant to suggest a metaphor between marine navigation and a feedback device that governs a mechanical system, such as a thermostat. Wiener certainly recognized and humanely explored the extraordinary reach of this metaphor, one of the most powerful ever expressed.
“Singularitarian” might make more sense as a descriptor, as that seems to encompass most of the people you’re worried about. Yet even that word encompasses at least two different definitions. Ray Kurzweil gave one definition in his book, “A Singularitarian is someone who understands the Singularity and has reflected on its meaning for his or her own life”, but unfortunately this must be discarded, as 1) it’s too inclusive, 2) Kurzweil defines the Singularity as a list of dozens of bullet points at the beginning of his book, thereby making it incredibly diffuse and confusing, and 3) the term was already adequately defined in 2000, with The Singularitarian Principles, which Kurzweil already knew about and apparently tried to steamroll over. In Principles, a Singularitarian is defined as “someone who believes that technologically creating a greater-than-human intelligence is desirable, and who works to that end”.
I hope no one will think I’m equating Cybernetics and what I’m calling Cybernetic Totalism. The distance between recognizing a great metaphor and treating it as the only metaphor is the same as the distance between humble science and dogmatic religion.
No, we get it… sort of a harsh warning you threw in randomly here, but alright.
Here is a partial roster of the component beliefs of cybernetic totalism:
1) That cybernetic patterns of information provide the ultimate and best way to understand reality.
Not “cybernetic patterns” per se, but information patterns, yes. People are patterns of information in matter. Although we may never know everything about physical reality perfectly, we can assume that what makes humans important is our information patterns, rather than our élan vital, or what have you.
2) That people are no more than cybernetic patterns.
To me, this is like saying that people are “no more” than atoms and forces between them. Big deal. My information pattern is currently engaging in reading and responding as I write this, yes. I’m quite comfortable with it. Vital signs are normal.
Most cognitive scientists seem comfortable with the idea of the mind as a series of functional patterns. This is called functionalism.
3) That subjective experience either doesn’t exist, or is unimportant because it is some sort of ambient or peripheral effect.
Well, clearly it does, but subjective experience is quite the doozy of a problem. Even Kurzweil admits that AIs on silicon might not be conscious. I’ve read “Facing Up to the Problem of Consciousness” and although I suspect that Chalmers is spinning in circles, I leave open the possibility that he’s right. Really, it doesn’t matter either way to the other issues we address here. If non-biological matter can’t hold consciousness, some of us (not necessarily me) could still “upload” to extremely dense and gigantic meat computers, integrated with silicon inputs or whatever. Let me quote a response from “Comments on Vinge’s Singularity” on Robin Hanson’s website:
“Can we really program human-equivalent AIs?” Yes. The objections fail to consider this: We can cheat. First and foremost, seed AIs don’t have to be human-equivalent. An acorn is not a tree. Second, we’re allowed to steal code from DNA, observe developing brains… even build AIs out of human neurons if there’s a fundamental Penrosian gap. Third, if unmodified humans don’t rise to the challenge, that doesn’t rule out transhumans or neurosurgically produced specialists.
Coding an AI isn’t an ideological argument. If a method works, we’ll use it.
Yeah, we’re all hell-bent on doing it, and stuff. Lucky for us, superintelligence is way off the radar of most bioconservatives. Some Singularitarians may be paranoid that will change, but one potential solution is to just act more crazy if they start to suspect. We certainly have the capability. (That was a joke.)
So, whether subjectivity is ontologically primitive or not, we want to build AIs and enhance human brains with technology. If we build functionally identical silicon brains and they aren’t conscious, then we just have to find out what we need to do to imbue them with consciousness. Maybe we can fool the universe into giving the automaton consciousness by pretending to give birth to it in a hospital, I don’t know.
4) That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all creativity and culture.
“What Darwin described in biology”? I’m not sure what this means… please rephrase. Creativity and culture come from many places and can be described in many ways. For instance, I believe that most of the creativity and culture that went into The Matrix was stolen directly from prior sources, including Ghost in the Shell and Dragonball Z. Today, much of the creativity and culture of Internet memes originates with 4chan… am I getting warmer?
5) That qualitative as well as quantitative aspects of information systems will be accelerated by Moore’s Law.
No, that takes work. But computers as powerful as the human brain would be tremendously useful. According to most estimates of the computing capacity of the human brain, we’re practically there now. Anyway, if someone implied that quantitative improvements in computing speed translate directly into qualitative improvements in AI software, they were wrong. You’re probably thinking of Kurzweil, but in his latest book, he qualifies this much more than he did back in 2000, when you wrote this essay. Although maybe he does still imply it (deliberately or otherwise) a little more often than he should.
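The quantitative half of that claim can at least be made precise. Here is a rough back-of-the-envelope sketch; the brain-capacity figures are commonly cited estimates (roughly 10^14 to 10^16 operations per second, per Moravec and Kurzweil respectively), and the starting capacity and doubling time are assumptions I’ve picked for illustration, not measured facts:

```python
# Toy calculation: given steady exponential growth in hardware, how many
# years until a machine matches common estimates of the brain's raw
# processing capacity? All figures are rough, contested estimates.
import math

def years_until(target_ops, current_ops, doubling_years=1.5):
    """Years of doubling-time growth needed for current_ops to reach target_ops."""
    if current_ops >= target_ops:
        return 0.0
    doublings = math.log2(target_ops / current_ops)
    return doublings * doubling_years

# Estimates of the brain's capacity span orders of magnitude:
# ~1e14 ops/sec (Moravec) up to ~1e16 ops/sec (Kurzweil).
for target in (1e14, 1e16):
    print(f"{target:.0e}: ~{years_until(target, current_ops=1e12):.1f} years")
```

Note what the sketch does not say: reaching the raw operation count tells you nothing about the qualitative problem of writing the software, which is exactly the point of the response above.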
And finally, the most dramatic:
6) That biology and physics will merge with computer science (becoming biotechnology and nanotechnology), resulting in life and the physical universe becoming mercurial; achieving the supposed nature of computer software. Furthermore, all of this will happen very soon! Since computers are improving so quickly, they will overwhelm all the other cybernetic processes, like people, and will fundamentally change the nature of what’s going on in the familiar neighborhood of Earth at some moment when a new “criticality” is achieved- maybe in about the year 2020. To be a human after that moment will be either impossible or something very different than we now can know.
I agree with half of this. Biology and physics will merge with computer science, and much of our surroundings will indeed become mercurial. For instance, with enough advanced utility fog, we could probably build an entire city and knock it down overnight. And other fun things, like processing the asteroids into a vast expanse of space colonies.
Will this happen soon? Not necessarily. Even if we created a god-like superintelligent AI equipped with advanced nanotechnology and capable of rearranging the face of the planet like a 10-year old playing SimCity, it wouldn’t necessarily use its powers to do that. For instance, if that superintelligent AI were constructed from a recursively self-improving seed that cared about human beings, it could retain those qualities into its maturity. Therefore, being concerned about the welfare of humans, it would refrain from blowing our minds with its super-cool forbidden knowledge and abilities.
I do believe a new criticality will be achieved. This is I.J. Good’s “intelligence explosion”, defined as follows:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
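Good’s recursion can be sketched as a toy model. This is purely illustrative: the feedback coefficient k is a free assumption, not a measured quantity, and nothing here argues that real AI development follows this curve.

```python
# Toy model of I.J. Good's "intelligence explosion": each generation of
# machine designs its successor, and design ability scales with the
# designer's own intelligence. The coefficient k is an arbitrary assumption.

def explosion(initial=1.0, k=0.1, generations=10):
    """Return intelligence levels when each jump is proportional to
    the current level times itself (smarter designers make bigger jumps)."""
    levels = [initial]
    for _ in range(generations):
        current = levels[-1]
        levels.append(current + k * current * current)
    return levels

levels = explosion()
# The increments themselves grow each generation; replace the update with
# current + k * current and you get ordinary exponential growth instead.
```

The qualitative point is the one Good makes: once improvement ability feeds back into itself, growth outpaces any fixed-rate process, which is why the first such machine would be “the last invention that man need ever make.”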
Human-equivalent AIs would have a number of potential advantages, even compared to the smartest human beings or the human race as a whole. The critical tool they would need is a technological means of rapidly building infrastructure for themselves. Molecular nanotechnology looks like it fits the bill, but if it turns out to be impossible, an AI might need to resort to microtechnology, fab labs, synthetic biology, or something else we haven’t thought of.
If an AI develops a construction technology far more advanced than anything we have today, and uses it to modify its environment, then sure, it could do a lot of good or evil. According to Steve Omohundro, even an AI with harmless goals could engage in harmful behaviors. I see it as best to assume that a human-equivalent AI would be capable of rapidly bootstrapping itself to superintelligence. Many of the best arguments are here, but I.J. Good was talking about this over 40 years ago already.
I’d like a Singularity where I’m still allowed to be human afterwards. I addressed this topic a couple days ago.
During the last twenty years a stream of books has gradually informed the larger public about the belief structure of the inner circle of Digerati, starting softly, for instance with Godel, Escher, Bach, and growing more harsh with recent entries such as The Age of Spiritual Machines by Ray Kurzweil.
I don’t think there is any actual continuity between these. Kurzweil is a transhumanist, while Hofstadter is not. In the seven years since you wrote this, there have been barely any books that are as “harsh” about the Singularity as Kurzweil’s. So is the threat smaller than you anticipated?
Recently, public attention has finally been drawn to #6, the astonishing belief in an eschatological cataclysm in our lifetimes, brought about when computers become the ultra-intelligent masters of physical matter and life. So far as I can tell, a large number of my friends and colleagues believe in some version of this imminent doom.
Hm. That large number must be pretty silent. Anyway, the risk of doom from AI is serious, and you should pay attention to it.
I am quite curious who, among the eminent thinkers who largely accept some version of the first five points, are also comfortable with the sixth idea, the eschatology.
Many, I hope. It’s not an eschatology, it’s a natural consequence of what happens when you put recursively self-improving AI with accelerated thinking in the middle of a human society. I know it sounds weird, but it merely stems from the fact that humans aren’t the theoretically smartest, fastest and most capable intelligences that can exist. When we build intelligence on a new substrate, it will have the ability to soar right past us. Unfortunate in the eyes of some, but true. If we handle it responsibly, it won’t be so bad.
In general, I find that technologists, rather than natural scientists, have tended to be vocal about the possibility of a near-term criticality. I have no idea, however, what figures like Richard Dawkins or Daniel Dennett make of it. Somehow I can’t imagine these elegant theorists speculating about whether nanorobots might take over the planet in twenty years. It seems beneath their dignity.
It’s absolutely dignified. If you aren’t convinced by the arguments for a near-term criticality, it just means we disagree on the evidence, not that we’re somehow undignified. How about this idea — we’re both dignified, we just have different positions?
And yet, the eschatologies of Kurzweil, Moravec, and Drexler follow directly and, it would seem, inevitably, from an understanding of the world that has been most sharply articulated by none other than Dawkins and Dennett.
But, who else but you sees this connection? What about the millions of people who love Dawkins and Dennett, yet don’t give a hoot about the others? Honestly, when looking back at the story of my life, that is indeed the philosophical path I took (with Drexler first, actually), but it’s a more unusual one than most. My question is, why haven’t the millions of Dawkins fans joined our AI effort yet?
Do Dawkins, Dennett, and others in their camp see some flaw in logic that insulates their thinking from the eschatological implications?
Maybe they just haven’t really thought about it. Not everyone has the time to read Nanosystems.
The primary candidate for such a flaw as I see it is that cyber-armageddonists have confused ideal computers with real computers, which behave differently. My position on this point can be evaluated separately from my admittedly provocative positions on the first five points, and I hope it will be.
Human intelligence seems to run alright on a non-ideal computer, and AI will too. Anyway, you’re right, this is a separate point.
Why this is only “one half of a manifesto”: I hope that readers will not think that I’ve sunk into some sort of glum rejection of digital technology. In fact, I’m more delighted than ever to be working in computer science and I find that it’s rather easy to adopt a humanistic framework for designing digital tools. There is a lovely global flowering of computer culture already in place, arising for the most part independently of the technological elites, which implicitly rejects the ideas I am attacking here. A full manifesto would attempt to describe and promote this positive culture.
It is because I am humanistic that I care about preserving human society and culture in the face of oncoming superintelligence. A massive wave is coming, we want to divert its path so it is channeled into helping humanity, not harming it. If you see no wave, then why do you bother to write this whole manifesto against it?
I will now examine the five beliefs that must precede acceptance of the new eschatology, and then consider the eschatology itself.
Here we go:
Cybernetic Totalist Belief #1: That cybernetic patterns of information provide the ultimate and best way to understand reality.
Yeah, I agree. Many cognitive scientists do, too. It’s called materialism. If you disagree, you won’t find too much sympathy, except perhaps with the New Age crowd.
(From here on out, I skip over some of the chunks of exposition on each “Cybernetic Totalist Belief”. Read the actual essay for the whole story.)
Belief #2: That people are no more than cybernetic patterns
I responded to this above.
Every cybernetic totalist fantasy relies on artificial intelligence. It might not immediately be apparent why such fantasies are essential to those who have them. If computers are to become smart enough to design their own successors, initiating a process that will lead to God-like omniscience after a number of ever swifter passages from one generation of computers to the next, someone is going to have to write the software that gets the process going, and humans have given absolutely no evidence of being able to write such software.
We have evidence of creating progressively more intelligent AI programs. Plenty of extremely intelligent and powerful people consider AI to be feasible, and the list gets longer every day. They just held the First Conference on Artificial General Intelligence in Memphis. If someone ran into the next AGI conference and shouted, “Stop everything! AI is a cybernetic totalist fantasy!”, I think the attendees would rightly laugh.
Belief #3: That subjective experience either doesn’t exist, or is unimportant because it is some sort of ambient or peripheral effect.
As argued above, I don’t think this makes much difference.
Belief #4: That what Darwin described in biology, or something like it, is in fact also the singular, superior description of all possible creativity and culture.
I’m going to skip this one. Mr. Lanier goes on to say that Kevin Kelly and Robert Wright write “dramatic renditions” of “Darwinian eschatology”. I doubt much sympathy will be found here.
Belief #5: That qualitative as well as quantitative aspects of information systems will be accelerated by Moore’s Law.
As mentioned before, I agree that this is a problem. Still, even if we throw it out, my Singularitarian belief system is left intact.
Belief #6, the coming cybernetic cataclysm.
Yes, take it seriously. We want to ensure that it’s not a cataclysm, and in fact a smooth transition. We can do it if we put our brains together.
On to the conclusion:
I share the belief of my cybernetic totalist colleagues that there will be huge and sudden changes in the near future brought about by technology.
This is interesting, because most of our critics don’t. Among those who do believe that there will be huge and sudden changes, our position is considered quite reasonable.
The difference is that I believe that whatever happens will be the responsibility of individual people who do specific things. I think that treating technology as if it were autonomous is the ultimate self-fulfilling prophecy. There is no difference between machine autonomy and the abdication of human responsibility.
Oh, I completely agree with you here. Humans should be responsible for the AIs they create. There is a difference between machine autonomy and abdication of human responsibility: if we hold people responsible for their machines, that difference can be reinforced. Of course, if their machine kills us all, then enforcement may be difficult.
Let’s take the “nanobots take over” scenario. It seems to me that the most likely scenarios involve either:
a) Super-nanobots everywhere that run old software- linux, say. This might be interesting. Good video games will be available, anyway.
b) Super-nanobots that evolve as fast as natural nanobots- so don’t do much for millions of years.
c) Super-nanobots that do new things soon, but are dependent on humans.
In all these cases humans will be in control, for better or for worse.
I choose d) Super-nanobots constantly being updated, directed, and routed by intelligent agency. This agency could be a combination of artificial and natural intelligence, or in some narrow contexts, one or the other. For instance, if I ask the super-nanobots to fetch me a cup of hot tea, I should hope that my wish not be intercepted and rewritten as saying I would like a cup of tea poured into my lap. That would hurt.
So, therefore, I’ll worry about the future of human culture more than I’ll worry about the gadgets. And what worries me about the “Young Turk” cultural temperament seen in cybernetic totalists is that they seem to not have been educated in the tradition of scientific skepticism. I understand why they are intoxicated. There IS a compelling simple logic behind their thinking and elegance in thought is infectious.
Ho, ho, ho. And presumably Dawkins, Wright, and Dennett are completely untrained in scientific skepticism as well?
As for the “Young Turk” temperament, I am unaware of the connotations of that term. But at this point I would like to alienate myself further by inserting an RPG reference: yes, the Turks of FF7 were quite slick, and I did like their temperament.
Anyway, that’s my response. The wrap up? I would appreciate it if Lanier, and others, would all get together and help us create a self-improving artificial intelligence to spark that critical point we were talking about. As soon as we successfully make it past that trial, we can all relax.
The clock is ticking, you know, and we don’t have all day.