Trying to Muse Rationally About the Singularity Scenario

[Photo: summit_27.jpg]

Douglas Hofstadter is College Professor of Cognitive Science and Computer Science, and Adjunct Professor of History and Philosophy of Science, Philosophy, Comparative Literature, and Psychology at Indiana University, where he directs the Center for Research on Concepts and Cognition. His books include the Pulitzer Prize-winning Gödel, Escher, Bach: An Eternal Golden Braid, Metamagical Themas, The Mind's I (with Daniel Dennett), Fluid Concepts and Creative Analogies, Le Ton Beau de Marot, and a verse translation of Pushkin's Eugene Onegin. At the Singularity Summit at Stanford he stated his belief that the Singularity scenario, "even if it seems wild, raises a gigantic, swirling cloud of profound and vital questions about humanity and the powerful technologies it is producing. Given this mysterious and rapidly approaching cloud, there can be no doubt that the time has come for the scientific and technological community to seriously try to figure out what is on humanity's collective horizon."

The following transcript of Douglas Hofstadter’s Singularity Summit at Stanford presentation has not been approved by the speaker.  Video and audio are also available.

Trying to Muse Rationally about the Singularity Scenario

As has been mentioned, this symposium is to a certain extent the successor to an event that was organized by me in the year 2000. It took place actually on April 1st of 2000. Everyone has given it the wrong name—it was not called the "Spiritual Machines Symposium," but the "Spiritual Robots Symposium," because I wanted to splice together the names of two books. One was Ray's The Age of Spiritual Machines and the other was Hans Moravec's book called Robot. I took one word from each of them and made that symposium's name.

Now, why did I do that?  It was in the early part of 1999 that two books were reviewed in the New York Times Book Review together by a philosopher named Colin McGinn. They were Ray’s The Age of Spiritual Machines and Hans Moravec’s Robot.  I read this review with some degree of skepticism and perplexity, because I had never thought about the issues that were being raised by McGinn—but ultimately by the two books.  I immediately went out and bought the two books and read them.  I found them very disorienting and worrisome, kind of scary.

Now, why would I find these things scary? Well, that's confusing, even for me. In 1993, long before all these things came to my attention, I had written an article for a newspaper called "Who will be 'We' in 2093?" I added a hundred years and asked, "Is it possible that the human species, as it evolves, transforms so radically that it becomes very different from what it is like today, and perhaps merges with or becomes subsumed in computers or robots of some fashion?" With my question—"Who will be We?"—I thought about whether it could be possible that ultimately our progeny could be things like robots that had inherited our culture but had carried it much further.

Now, I published this article in response to an event that I had taken part in, in Holland, that year. It was an event that had to do with computer intelligence. Joe Weizenbaum, famous for his book Computer Power and Human Reason, which came out in the 1970s and was kind of a diatribe against the use of artificial intelligence in anything that would encroach upon human values, was one of the speakers at this event. Dan Dennett, my philosopher colleague, was there, and we were speaking in some sense in favor of artificial intelligence. Weizenbaum got up and delivered a shoe-thumping diatribe against AI, against its possibility, and for humanity, but "humanity" in its most ordinary sense—just who we are today. The entire audience rose to its feet at the end and gave him a huge standing ovation. He was extremely opposed to anything like computer intelligence, and I was really kind of shocked. It struck me as so parochial and such a narrow vision of what humanity could be.

When I originally published my article, I called it "Who will be 'We' in 2093?" I then had another opportunity to publish it, and I don't remember where, but I changed the number. I changed it to 2493. I moved it away by 400 years—500 years into the future. I think I was a little bit scared by my own predictions, or by my own imagery. Perhaps it was because, as a father of children, I was starting to think, "My goodness, I wonder if my children, or at least my grandchildren, might be completely rendered superfluous by these kinds of changes that I am talking about." I didn't like that idea, so I pushed it into the future. I'm not sure if it was these emotional pressures or the intellectual ideas, but I pushed it off.

Then in 1999, Ray's book and Hans's book came out, and I was struck by a combination of many things—I would say fear, confusion... uncertainty, I would say, most of all. I organized a panel at Indiana University consisting of people from many different disciplines to talk about this. I have to say that I was very shocked by the panel, because it was as if I had never even asked them to read these books. Most of the panel members did not talk about any of the issues that I had raised, and I felt very dissatisfied. A few months later, when I came out to Stanford on a sabbatical for a year, the people in Symbolic Systems particularly pushed me to try to do it again, so I did organize another panel. I got a very distinguished set of people, and I will talk about that a little bit later. My goal was, in some sense, to think rationally about the scenarios that had been suggested by these books.

The title of my talk today is "Trying to Muse Rationally About the Singularity Scenario." Ray complained to me a little bit, very gently but reasonably I think, in an email a few days ago, saying that it suggested that I was the only one thinking rationally and that everyone else was thinking irrationally. That was an unintended interpretation of my title. I can easily see how it can be heard that way, but I did not mean it that way. I have no claim to be able to think more rationally than anyone else on this panel. In fact, I am very impressed by the rationality of Ray and of many other people, but I think that rationality exists in each of us in a slightly different form. I think that ultimately, in order to think about the likelihood of the kind of predictions that we are encountering, we need to have many people discussing it, which is what we have today. No one has a monopoly on rationality. In any case, I apologize for that kind of implication.

I have found very few people who are skeptical. There are some people who are skeptical in the sense that they don't like it, but very few people who are skeptical in an articulate fashion. Let me tell you about my panel in 2000 very briefly. I invited both Ray and Hans to participate, and I was very pleased that they came. I invited Kevin Kelly, author of Out of Control and one of the editors at Wired. I invited Ralph Merkle (a well-known nanotechnology person), Bill Joy from Sun Microsystems, Frank Drake (head of the SETI Institute), John Holland (the inventor of genetic algorithms), and John Koza (the inventor of genetic programming). I was the moderator, so there were nine of us. I thought it was a very promising lineup, but unfortunately it did not quite work the way I had hoped.

Bill Joy commandeered at least half of the time by talking about the "low-IQ" end-of-humanity scenario—that we are smothered by self-reproducing techno-dust. It was a valid issue, understand. It was by no means an invalid issue to bring up, but it was not what I had intended. A lot of the time was spent on issues that were not my intention. What I had intended to talk about was the "high-IQ" end-of-humanity scenario. You may not agree that it is the end of humanity; you may say that humanity is transformed. During that time there were only two people who voiced any skepticism at all about the "high-IQ" end of the scenario. Those were, to my surprise, both of the people involved with genetic programming and genetic algorithms, John Holland and John Koza. They were very skeptical of the use of genetic algorithms or genetic programming to evolve high levels of intelligence.

I am not going to go into their skepticism. That is not the main purpose of my talk. My talk is really more of a plea to have an articulate discussion about these things. In Ray's new book, which I highly recommend, he has done a great deal of impressive research of all sorts. I read with great interest the sections about nanotechnology—which I knew very little about—and the things about nanotubes, which I knew something about—not as devices for computing, just some of the chemistry, because they are fascinating objects. I did not know much about them as potential substrates for computation. That was very interesting to read about. Of course, as you have seen, Ray is a past master at displaying exponential curves as straight lines by taking logarithms. The book is just filled with those kinds of things, and it is very interesting to contemplate. It also has some very interesting responses to critics. Of course, close to my heart is the response to John Searle, which I think he does a very good job at.

Nonetheless, I think there is some handwaving in the book. I think Ray would admit that he does not know what is going to happen in ten, twenty, thirty, forty years. He has a belief in the law of accelerating returns. He calls it a law, and he wants it to be analogous to a mathematical law or to the laws of thermodynamics. The difference is that in physics you use statistical mechanics and you derive the laws of thermodynamics. The laws of thermodynamics are rigorously derivable from statistics, whereas Ray's "law" is not so much a law as it is a tendency or a trend. He has certainly marshaled a great deal of evidence to suggest that it will continue. I am not saying it is wrong, you understand. I am just saying it does not have the same status as a scientific law.

It is a tendency, an extrapolation, but it is not a rigorous scientific law in any sense. It is a very interesting set of observations, with one paradigm supplanting another. Just as one winds down, the other one starts up—very interesting and provocative observations. Ray does not talk too much about the details of how the entire world will be simulated. He did not actually talk about uploading ourselves into cyberspace, but effectively that is one of the themes of the book. The idea is that after a while we will all exist as software entities inside computing hardware—basically, that is our destiny. If that is the case, he did not talk about how we are going to also model or simulate the entire world. It is one thing to simulate a certain degree of complexity, and it is another thing to simulate many, many orders of magnitude more. I understand the argument of adding a linear number to the exponent and getting larger and larger exponents. Again, there is some handwaving going on at that level.
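To make the arithmetic behind "adding a linear number to the exponent" explicit, here is a minimal back-of-the-envelope sketch; the doubling time below is an illustrative assumption, not a figure from the talk or the book:

If capacity grows as $C(t) = C_0 \cdot 2^{t/\tau}$ with doubling time $\tau$, then demanding $10^k$ times more capacity (say, to simulate the world around the brains as well as the brains themselves) pushes the crossover date back by only

$\Delta t = \tau \log_2(10^k) = k\,\tau \log_2 10 \approx 3.3\,k\,\tau.$

With an assumed doubling time of a year and a half, each extra factor of ten costs roughly five years. The handwaving lies in assuming that $\tau$ stays fixed across all of those additional orders of magnitude.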

There are also blurs about what it means for humans to survive in cyberspace. What is a human being at that point? What demarcates one individual from another? Ray talks about the merging and un-merging of human beings inside cyberspace, but it is not clear what a human being would be in such an environment. There are some claims in the book that I would cast some doubt on—for example, that computers today can compose pieces like Bach. I have spent a great deal of time looking into computer composition, and the most impressive I have seen is Dave Cope's Emmy. Suffice it to say that I am not convinced that Emmy writes anything like Bach. I won't go into that, but I think it is an exaggerated claim.

Ray suggests that computers understand natural language, at least if you take his cartoon seriously. “Understanding natural language” is on the floor; it’s not on the wall. He suggests that soon, thanks to things like Doug Lenat‘s Cyc project, computers may have common sense. That seems to me to be pie in the sky, but that’s a personal interpretation.

Ray suggests that maybe our brains have only 10^8 bytes worth of information in their structure because of the genome. Well, maybe, but what if it is really more like 10^8 button pushes on a jukebox, where when you push the button on the jukebox you get a record out, and the record contains an unbelievable amount of information? You have to recall that inside neurons there are many thousands of proteins, but IBM’s supercomputer Blue Gene, the most powerful in existence, is supposed to simulate the folding of one protein. If a protein is that complicated, and if it turns out that intelligence does depend upon that—and I am not saying that it does, mind you—then we have a much longer way to go.  Now, I know we can always say “add a few more years” to get those extra exponents, but it is not clear that we can always rely on the law of accelerating returns.  It may be.  I’m not here to cast doubt on that.  It’s just not clear.

I think that I would like to show some of my reactions.  When I opened the symposium six years ago, I used two cartoons.  They were meant to show that I was basically a supporter of many of the ideas that Ray and Hans were proposing.  I was not a gung-ho believer, but I felt that some of these standard objections were baseless.  I want to show some of those cartoons and then a few others that I made specifically for this event.

[Slide: hofsadter_04.jpg]

Here we have two stones talking to each other some billions of years ago. One of them is saying, "Sentient life? Don't make me laugh! Self-reproduction is impossible! Nothing can pack a description of itself inside itself! Infinite regress! Anyway, mere matter like us rocks can't feel anything! No soul, no élan vital, no consciousness... Life!? The idea is self-contradictory—don't you worry!" Then, on the right side we have a thunderstorm, lightning strikes the pond, some form of self-organization takes place, and molecules appear. "Ribo ribo" you can interpret as "ribosome" or "ribonucleic acid," which is RNA, or deoxyribonucleic acid, which is DNA—however you want to interpret it.

This is basically intended to show that perhaps the most miraculous transition of all has already taken place—that is, from inanimate matter to life.  That took place!  That is a true miracle, but that took place.  A lot of people don’t think about that when they think about these kinds of things.

[Slide: hofsadter_05.jpg]

Ray talks eloquently in his book about the jump from one-cell life to multi-cell life—that's another wonderful transition, but let's talk about this one. Two fish: "Sentient life on land? Don't make me laugh! You couldn't even breathe! The idea is self-contradictory—don't you worry!" Then we have our amphibian—"Ribbit, ribbit." We do have transitions of the substrate of life. Some people might think that it is impossible for life to change substrate, but that seems an absurd position; we have seen it change substrate to some extent.

[Slide: hofsadter_06.jpg]

Here is John Searle.   “Sentient life in Silicon?  Don’t make me laugh!  Wrong stuff!  Wrong causal powers!  Chinese room!  Just syntax, no semantics!  Blah, blah, blah… Life in silicon?  The idea is self-contradictory—don’t you worry!”  And of course we have our computer saying, “Robot, robot.”

[Slide: hofsadter_07.jpg]

Now, our next one is about Ray's idea of the law of accelerating returns. Here is where I have a little bit of skepticism. I'm going to poke a little bit of fun at Ray, but you have to understand it is with great respect. It's a little fun, but it's not meant as ridicule. "Hit a wall? Don't make me laugh! Quantum computing, DNA computing, artificial evolution, nanotechnology! Pico! Femto! 10^20! 10^30! 10^40! 10^50! 10^100000000! Recursion! Faster! Smaller! Cheaper! Smarter! Double exponential today, triple tomorrow! Nothing can stop exponential growth! Limitations?! The idea is self-contradictory—don't you worry!"

There is a lot of that kind of idea.  I have exaggerated it; I’ve caricatured it—but a lot of those ideas exist in the book.  I think one has to put some degree of skepticism into the thing.  Of course, here is something that Ray mentioned—the idea of standard exponential explosion in rabbits.  Here, we have something that could put a wall into the exponential explosion of rabbits.

[Slide: hofsadter_08.jpg]

Ray did not talk about this too much, but he talked about it a little bit. He talked about the end of disease, but he also talked about the end of mortality. Again, this is one of his themes—that we will become immortal. The subtitle of his second most recent book is "Live Long Enough to Live Forever." There is a whole current of thought running through what Ray is talking about: the survival of some essence of who we are, forever.

“Live forever?  Don’t make me laugh!”  These are two people in wheelchairs who are not so much looking forward to the idea.  “We are programmed to die!  Everything eventually wears out!  Planned obsolescence!  The only thing certain is death and taxes!   Immortality?  The idea is self-contradictory—don’t you worry.”  “Ray bet.  Ray bet.”  Ray is betting on immortality.  He seriously is.  He details this in his books.  I don’t know if he would bet lots of money, but he’s betting his life on it instead.

[Slide: hofsadter_09.jpg]

Finally, once we are all up in cyber-heaven, how long can we stay there?  “It will crash?  Don’t make me laugh!  We figured it all out!  We have mathematically proven it cannot crash!  Software verification!  The idea is self-contradictory!  Don’t you worry—nothing can go wrong, go wrong, go wrong…”  “Reboot, reboot.”

Enough of my cartoons. They raise the issues. They are not necessarily expressing my own opinions. They are trying to raise the issues, to provoke, and to get people to think about this. Now, I am certainly a believer that life can exist in many substrates. I read a very interesting science fiction book called Dragon's Egg by Robert L. Forward, a physicist who postulated the idea, based on an idea of Frank Drake's, that life could come to exist in the crust of a neutron star. It is a very interesting speculation—whether it is reasonable or not, nobody knows. I don't see any reason in principle that such extreme forms of life could not come to exist. Nor am I an opponent of life in silicon, or life in carbon nanotubes. All of these things seem potentially plausible. I am not somebody who throws cold water on these ideas. I do, to some extent, not believe in the timeframes that Ray and Hans have put forth, despite their very interesting exponential curves, or straight lines in log space. They are interesting.

What I am really concerned about, after having read Ray's most recent book and been very impressed with many of the arguments in it, is the question: how realistic is this? I have asked a number of friends, highly informed intellectual people from different disciplines, and I have heard reactions over the following range: "The ideas are nutty—not worth the time of day." "The ideas are very, very scary." "I don't know. I just don't know." "They are reasonable, or they are probable." ...But none of these people have read the book.

This is very interesting to me. What I find strange about this is that I get the feeling the scientific world is not taking any of this seriously. In other words, I do not see serious discussions of this among physicists when they get together. I see them all sort of basically pooh-pooh it. I see most scientists have a sort of skeptical attitude. I've read the books, and I am still skeptical, but I am less skeptical. Why am I skeptical? Well, partly just because I am a little bit conservative. Maybe there is an emotional component in it. I am also skeptical because I think the ideas in the books unfortunately are somewhat marred by being blurred with too much science fiction.

I will just mention this very briefly—I don't want to get into it in any detail—but in Hans's book there is a huge amount of stuff about time travel. There are five to ten pages about computers that will be based upon time travel, and the idea that when computers get sufficiently complex—and by "computer" he means something that is way ahead of what we have now; he calls them "Minds" with a capital "M"—inside these "Minds," all of the universe's history will be recreated uncountably many times, including our own lives. These kinds of scenarios, which are so fantastic, are blended in seamlessly with the rest of the book. To me that renders the whole book somewhat contaminated. How confident can I be in the sanity of this person?

In Ray's case, there are some similar things. It is hard, though you may disagree with me, to take seriously the idea that he is going to be immortal. It is hard to take seriously the passages in his prior book about utility foglets: little molecules that will self-assemble into anything, from the Taj Mahal to a mountain range, in a split second. It is a science fiction scenario at the most extreme level, but it is right there in the middle of the book. Ray talks about engineering down below the level of quarks, and he also talks about civilizations that would have commandeered the entire galaxy to do their information processing or their communications. To me these things also seem wild beyond any degree of speculation that I am willing to accept.

Again, I feel like my level of ability to believe in it is seriously damaged by these kinds of admixtures.  Then there are some nitpicky things.  Today Ray used the word genome very accurately—but in the book, every time he refers to the genome he refers to it as “the genetic code.”  That is a very elementary mistake.  That is like confusing an electron with a nucleus.  The genetic code is a very specific four by four by four table that maps codons onto amino acids.  Ray talks about the genetic code of an individual over and over again in the book.  That undermines his credibility to me.

He talks about the “knee” of an exponential curve.  Well, I’m sure he knows there is no such thing.  An exponential curve is a very smooth curve—there is no special place in an exponential curve.  All you need to do is take the logarithm and you see it is a straight line.  There is no such thing as the “knee” of an exponential curve.  These kinds of things to some extent reduce my level of belief, but they do not, by any means, make me stop believing that some of these ideas are possible.
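To spell this out in symbols (a minimal sketch in generic notation, not anything taken from the book):

An exponential $f(t) = a\,e^{kt}$ becomes $\ln f(t) = \ln a + k\,t$ after taking logarithms: a straight line with slope $k$ and no distinguished point. Equivalently, $f(t+c) = e^{kc}\,f(t)$ for every shift $c$, so the curve looks the same everywhere up to vertical rescaling, and any apparent "knee" is an artifact of the axis ranges chosen for the plot.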

I guess what I think is the following. When I read Ray's book, I see a large number of things that I think are partially true. "Partially true" means they're blurry. I cannot put my finger on where they are wrong—there's a lot of handwaving. When you multiply the partial truths of a bunch of things together, you get down to a very small number. Maybe there is a probability of one in a thousand that what Ray is talking about will actually take place. Now, again, I'm just handwaving. When I listen to Ray, I feel like I am listening to one side of a divorce, and I am terribly convinced that this person is right and that the other side could not possibly be right.

I would like to hear serious scientists taking these ideas seriously and giving a serious skeptical response—not that I necessarily want it to be the other side.  I want the debate to be very seriously taken.  I think this is all to Ray’s credit.  He has raised some terribly important issues.  These issues suggest that we are about to be transformed in unbelievably radical ways.  The subtitle of Damien Broderick‘s book The Spike is “Accelerating into an unimaginable future,” and I think that is what is being suggested. I think we really have to take these ideas seriously, so that is my plea. Thank you.

