In case you haven't seen it before, there's a 2007 interview with Hofstadter in American Scientist that goes over what he's been up to. The interesting part, from the point of view of this Singularitarian, is where he talks about the Singularity and Kurzweil:
There's a popular idea currently that technology may be converging on some kind of culmination--some people refer to it as a singularity. It's not clear what form it might take, but some have suggested an explosion of artificial intelligence. Do you have any thoughts about that?
Oh, yeah, I've organized several symposia about it; I've written a long article about it; I've participated in a couple of events with Ray Kurzweil, Hans Moravec and many of these singularitarians, as they refer to themselves. I have wallowed in this mud very much. However, if you're asking for a clear judgment, I think it's very murky.
The reason I have injected myself into that world, unsavory though I find it in many ways, is that I think that it's a very confusing thing that they're suggesting. If you read Ray Kurzweil's books and Hans Moravec's, what I find is that it's a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It's as if you took a lot of very good food and some dog excrement and blended it all up so that you can't possibly figure out what's good or bad. It's an intimate mixture of rubbish and good ideas, and it's very hard to disentangle the two, because these are smart people; they're not stupid.
Ray Kurzweil says 2029 is the year that a computer will pass the Turing test [converse well enough to pass as human], and he has a big bet on it for $1,000 with [Lotus Software founder Mitch Kapor], who says it won't pass. Kurzweil is committed to this viewpoint, but that's only the beginning. He says within 10 or 15 years after that, a thousand dollars will buy you computational power that will be equivalent to all of humanity. What does it mean to talk about $1,000 when humanity has been superseded and the whole idea of humans is already down the drain?
This is an interesting reaction, and like many reactions, it probably represents the views of hundreds of thousands of silent people. Notice how Hofstadter has a visceral emotional aversion to Kurzweil's ideas -- "mud", "unsavory", etc. This is something of a contrast to views like those of Dr. Richard Jones, who is far more subtle and interesting:
The difficulty, then, is not that there is no science underlying the claims Kurzweil makes, nor that this science isn't very exciting on its own terms. It's that this science can't sustain the sweeping claims and (especially) the fast timescales that Kurzweil insists on.
If I'm criticizing Kurzweil, this is more along the lines where I prefer to tread, rather than the visceral aversion theme, which is obviously to be expected.
But Hofstadter comes up with an interesting point, which is "What does it mean to talk about $1,000 when humanity has been superseded and the whole idea of humans is already down the drain?" This invokes a line of reasoning by Kurzweil that we've seen on several occasions, namely that life and the world would change after the Singularity, but not that much. Other data points:
1) Kurzweil claims that Moore's law and his other exponential trends will continue to progress at a predictable rate, continuous with their progress in a human-only society, even when human-level AI and neural-enhancement nanobots are introduced around 2029. (According to his book.) These will presumably not accelerate technology by a discontinuous multiplier, because that would throw off the smooth exponential curve.
2) Kurzweil seems to place an awful lot of emphasis on the role of sexuality and throwing off the tyranny of gender in the post-Singularity world. Yet, if we discard most of the trappings of our biology, might we not choose to associate activities other than sex with extreme pleasure? (By directly reprogramming our brains.) And wouldn't a sex change be quite pedestrian in a world where we have complete morphological freedom to transform ourselves into practically anything we want? Isn't this the sort of thing we'd try out in the first week, then move on to completely new and barely imaginable realms shortly afterward? This angle is somewhat improved in The Singularity is Near over The Age of Spiritual Machines.
3) Kurzweil's de-emphasis of the Event Horizon of Vinge, instead preferring to cast the post-AGI future in ways that us pre-Singularity folks can comprehend, referencing specific humanly-imaginable technologies and encouraging his readers to associate the Singularity concept with these technologies. This aim is seemingly reinforced by Singularity University.
4) AGI appears in 2029, but "the Singularity" doesn't happen until 2045. What happens in between those years? Some corny, pre-Singularity fiction like Accelerando? I have a sneaking suspicion that this vision -- AGI exists, but fits in smoothly with preexisting society just like human newborns or modern-day computers -- is what much of the public thinks that people like myself are talking about when I say "Singularity". In reality, what I think of is more like the Romantic past touched up with barely-behind-the-scenes advanced technology that feels more organic than anything else. (At least initially, because I think that is what humans want, and will deliberately choose once we have the technology.)
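The smooth-exponential assumption in point 1 can be made concrete with a toy extrapolation. This is an illustrative sketch only, not Kurzweil's actual figures: the base year, base value, and doubling time below are hypothetical placeholders, chosen just to show what it means for the curve to stay smooth through 2029.

```python
# Illustrative sketch (NOT Kurzweil's actual numbers): a pure exponential
# in compute-per-dollar, with a hypothetical 2-year doubling time.
def compute_per_dollar(year, base_year=2009, base_value=1.0, doubling_years=2.0):
    """Relative compute per dollar, assuming uninterrupted exponential growth."""
    return base_value * 2 ** ((year - base_year) / doubling_years)

# The point at issue: the model assigns the same fixed multiplier to the
# 2029-2045 interval whether or not human-level AI arrives in 2029 to feed
# back into the rate of progress -- the curve has no discontinuity in it.
ratio = compute_per_dollar(2045) / compute_per_dollar(2029)
print(ratio)  # 2^(16/2) = 256.0
```

The critique in point 1 is precisely that `doubling_years` is held constant across the arrival of AGI, when one might expect self-improving intelligence to shorten it discontinuously.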
For me, the Singularity concept really clicked when I heard it described in terms of genuine cognitive improvement, with extensive dwelling on what that means -- not in terms of futuristic technologies we can already easily foresee and that would be developed anyway in a human-only world, no superintelligence required. One line from "Staring into the Singularity" goes like this:
Smartness is that quality which makes it impossible to write a story about a character smarter than you are. You can write about super-fast thinkers, eidetic memories, lightning calculators; a character who learns a dozen languages in a week, who can read a textbook in an hour, or who can invent all kinds of wonderful stuff - as long as you don't have to produce the actual invention. But you can't write a character with a higher level of emotional maturity, a character who can spot the obvious solution you missed, a character who knows (and can tell the reader) the Meaning Of Life, a character with superhuman self-awareness. Not unless you can do these things yourself.
This is what makes the Singularity intellectually interesting to me. Genuinely enhanced intelligence and awareness. Not technology for technology's sake.
As I've said before, I think that the survival or defeat of humanity in the 21st century depends almost exclusively on how we handle the first self-improving superintelligence. Therefore, while I have substantial interest in new and exciting technologies, I tend to divide them strongly into two categories: technologies that may have an impact on creating superintelligence and those that don't. Technologies that do are extremely important. Those that don't are interesting to think about, but more of an intellectual exercise than anything else. However, the scope of technologies involved in the possible creation of superintelligence is substantially wider than it appears at first glance. This is partially because there are several distinct types of superintelligence that may be produced first. Overall, this increases the probability that some type of superintelligence will be invented in the next 20-50 years. When one is created, it will soon lead to all the others.