Hofstadter, Jones, and Kurzweil on the Singularity

In case you haven’t seen it before, there’s an interview from 2007 with Hofstadter at the American Scientist that goes over what he’s been up to. The interesting part from the point of view of this Singularitarian is where he talks about the Singularity and Kurzweil:

There’s a popular idea currently that technology may be converging on some kind of culmination–some people refer to it as a singularity. It’s not clear what form it might take, but some have suggested an explosion of artificial intelligence. Do you have any thoughts about that?

Oh, yeah, I’ve organized several symposia about it; I’ve written a long article about it; I’ve participated in a couple of events with Ray Kurzweil, Hans Moravec and many of these singularitarians, as they refer to themselves. I have wallowed in this mud very much. However, if you’re asking for a clear judgment, I think it’s very murky.

The reason I have injected myself into that world, unsavory though I find it in many ways, is that I think that it’s a very confusing thing that they’re suggesting. If you read Ray Kurzweil’s books and Hans Moravec’s, what I find is that it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.

Ray Kurzweil says 2029 is the year that a computer will pass the Turing test [converse well enough to pass as human], and he has a big bet on it for $1,000 with [Lotus Software founder Mitch Kapor], who says it won’t pass. Kurzweil is committed to this viewpoint, but that’s only the beginning. He says within 10 or 15 years after that, a thousand dollars will buy you computational power that will be equivalent to all of humanity. What does it mean to talk about $1,000 when humanity has been superseded and the whole idea of humans is already down the drain?

This is an interesting reaction, one that probably describes the views of hundreds of thousands of silent people. Notice Hofstadter’s visceral emotional aversion to Kurzweil’s ideas — “mud”, “unsavory”, and so on. This contrasts with the views of Dr. Richard Jones, who is far more subtle and interesting:

The difficulty, then, is not that there is no science underlying the claims Kurzweil makes, nor that this science isn’t very exciting on its own terms. It’s that this science can’t sustain the sweeping claims and (especially) the fast timescales that Kurzweil insists on.

When I criticize Kurzweil, this is the line along which I prefer to tread, rather than the visceral-aversion theme, which is only to be expected.

But Hofstadter comes up with an interesting point, which is “What does it mean to talk about $1,000 when humanity has been superseded and the whole idea of humans is already down the drain?” This invokes a line of reasoning by Kurzweil that we’ve seen on several occasions, namely that life and the world would change after the Singularity, but not that much. Other data points:

1) Kurzweil claims (in his book) that Moore’s law and his other exponential trends will continue to progress at a predictable rate, continuous with their progress in a human-only society, even when human-level AI and neural-enhancement nanobots are introduced around 2029. These will presumably not accelerate technology by a discontinuous multiplier, because that would throw off the smooth exponential curve. (A toy sketch of this reasoning follows the list.)

2) Kurzweil seems to place an awful lot of emphasis on the role of sexuality and throwing off the tyranny of gender in the post-Singularity world. Yet, if we discard most of the trappings of our biology, might we not choose to associate activities other than sex with extreme pleasure, by directly reprogramming our brains? And wouldn’t a sex change be quite pedestrian in a world where we have complete morphological freedom to transform ourselves into practically anything we want? Isn’t this the sort of thing we’d try out in the first week, then move on to completely new and barely imaginable realms shortly afterward? This angle is somewhat improved in The Singularity is Near over The Age of Spiritual Machines.

3) Kurzweil de-emphasizes Vinge’s Event Horizon, preferring to cast the post-AGI future in ways that we pre-Singularity folks can comprehend, referencing specific, humanly imaginable technologies and encouraging his readers to associate the Singularity concept with these technologies. Singularity University seemingly reinforces this aim.

4) AGI appears in 2029, but “the Singularity” doesn’t happen until 2045. What happens in between those years? Some corny, pre-Singularity fiction like Accelerando? I have a sneaking suspicion that this vision — AGI exists, but fits in smoothly with preexisting society just like human newborns or modern-day computers — is what much of the public thinks people like me mean when we say “Singularity”. In reality, what I think of is more like the Romantic past touched up with barely-behind-the-scenes advanced technology that feels more organic than anything else. (At least initially, because I think that is what humans want, and will deliberately choose once we have the technology.)
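
To make point 1 concrete, here is a minimal toy sketch in Python of the shape of the argument: if AGI in 2029 multiplied the rate of progress by any discontinuous factor, the compute-per-dollar curve would visibly break away from the smooth exponential that Kurzweil extrapolates. Every constant below (the doubling time, the base year, the 3x post-AGI speedup) is hypothetical and chosen purely for illustration; none of these are Kurzweil’s actual figures.

    # Toy model: a smooth doubling trend in compute-per-dollar vs. the same
    # trend with a hypothetical discontinuous acceleration after an AGI year.
    # All constants are made up for illustration only.
    def compute_per_dollar(year, base_year=2007, base=1.0, doubling_time=1.5,
                           agi_year=None, agi_multiplier=1.0):
        """Relative compute-per-dollar at a given year.

        If agi_year is set, the doubling time shrinks by agi_multiplier
        after that year, i.e. progress discontinuously accelerates.
        """
        if agi_year is None or year <= agi_year:
            return base * 2 ** ((year - base_year) / doubling_time)
        # Smooth growth up to agi_year, then a faster exponential afterward.
        at_agi = base * 2 ** ((agi_year - base_year) / doubling_time)
        return at_agi * 2 ** ((year - agi_year) / (doubling_time / agi_multiplier))

    for year in (2029, 2037, 2045):
        smooth = compute_per_dollar(year)
        kinked = compute_per_dollar(year, agi_year=2029, agi_multiplier=3.0)
        print(f"{year}: smooth = {smooth:.3g}, with post-AGI speedup = {kinked:.3g}")

On any such model the two curves agree up to 2029 and then diverge sharply, which is exactly the kink that Kurzweil’s smooth charts rule out.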

For me, the Singularity concept really clicked when I heard it described in terms of genuine cognitive improvement, with extensive dwelling on what that means, rather than in terms of futuristic technologies we can already easily foresee and that would be developed anyway in a human-only world, no superintelligence required. One line from “Staring into the Singularity” goes like this:

Smartness is that quality which makes it impossible to write a story about a character smarter than you are. You can write about super-fast thinkers, eidetic memories, lightning calculators; a character who learns a dozen languages in a week, who can read a textbook in an hour, or who can invent all kinds of wonderful stuff – as long as you don’t have to produce the actual invention. But you can’t write a character with a higher level of emotional maturity, a character who can spot the obvious solution you missed, a character who knows (and can tell the reader) the Meaning Of Life, a character with superhuman self-awareness. Not unless you can do these things yourself.

This is what makes the Singularity intellectually interesting to me. Genuinely enhanced intelligence and awareness. Not technology for technology’s sake.

As I’ve said before, I think that the survival or defeat of humanity in the 21st century depends almost exclusively on how we handle the first self-improving superintelligence. Therefore, while I have substantial interest in new and exciting technologies, I tend to divide them strongly into two categories: technologies that may have an impact on creating superintelligence and those that don’t. Technologies that do are extremely important. Those that don’t are interesting to think about, but more of an intellectual exercise than anything else. However, the scope of technologies involved in the possible creation of superintelligence is substantially wider than it appears at first glance. This is partially because there are several distinct types of superintelligence that might be produced first. Overall, this increases the probability that at least one type of superintelligence will be invented in the next 20-50 years. Once one is created, it will soon lead to all the others.

Comments

  1. I temporarily stopped working on my book last spring to take up other important projects (like the SIAI-funded summer research project led by Anna Salamon, which extended well into the fall), but have recently resumed working on it thanks to new-found financial stability. It is very easy to get distracted by projects with more immediate returns.

  2. It’s the experience that I look forward to: the sheer aesthetic and erotic pleasure of arming my present self with the capacity to enjoy unimaginable and unspeakable delights. After the singularity, blasphemy will be made real.

  3. Shawn

    Regarding your Kurzweil points…

    2) Could there be any doubt, when we consider the pain and tumult caused in most human lives by the urges and instincts surrounding multiplication, that many will amplify the pleasure but rid themselves of the imperative as soon as the means appear?

    4) Tolkien shared intimations of something profound with his Elvish culture. His picture of that society speaks to a deep longing in some very smart people. I would not be surprised to see our material culture — our non-virtual set dressing, so to speak — move toward a sort of gracile Medievalism. (Of course, I can’t disentangle the production design of the LOTR films from my own imagination at this point!) But I would expect no monoculture. As we become ensmartened I’m sure we will speciate culturally.

  4. Sorry if this is a faux pas: Shawn (and anyone else), check out the site I have dedicated to your comment on Tolkien. I was wondering if anyone else out there had made a similar supposition to yours.

    Michael, I’ll have to post a link to the “Romantic Past” article you wrote up a while ago at my blog. I share a similar suspicion (and bias) regarding the AGI to Singularity in-between time.

  5. Richard, “blasphemy will be made real”, that’s a funny statement; I wonder what you had in mind.

    Shawn, yes!

    Gregory, yes, I’ve been reading your blog lately and my post from 2006 is a rare example of some type of connection between Tolkien-like thoughts and transhumanism.

  6. Anderson

    “It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle–they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.”
    -Alfred North Whitehead

    “4) AGI appears in 2029, but “the Singularity” doesn’t happen until 2045. What happens in between those years?”
    -M.A.

    This hypothetical “pause” during a largely unpredictable time period seems fuzzy. haha


