Here is a blog post. At the top is the classic Toothpaste for Dinner comic about the Singularity. A funny excerpt:
"I've recently found a third topic to exclude from dinner conversations, alongside politics and religion. The singularity. While I'm rarely one to dichotomise people, in this case I've found you're either excited by the idea, or you do your best to stifle a smirk and offer me another slice of roast beef."
Having a propensity to discuss the Singularity at dinner myself, I'm quite familiar with this phenomenon. When people around me eat meat, I'm reminded of how superintelligences will eat us for dinner if we aren't careful.
Here is the radio show.
Here is another quote from the blog post:
For my money, I think it's far too easy to get lost in the assumption that the trick to speeding up innovation lies in smarter minds. Progress is inhibited more by social concepts such as ethics, resource allocation and effective communication. Sure, a few bright boffins might help in the search for academic solutions, but if a super intelligent computer were to seek permission to dissect a living foetus in its search for more information, I hesitate to think it would get the public tick of approval.
Yes, innovation didn't speed up whatsoever when Homo habilis evolved into Homo erectus and then into Homo sapiens; clearly it had only to do with ethics, resource allocation, and effective communication. Wait a second, where do those things come from? Oh, intelligence. (A certain level of intelligence is a necessary prerequisite for ethical action. It's true that some intelligences choose not to act ethically, which is why intelligent agents create overarching game-theoretic structures that encourage ethical choices and punish defectors, like modern law.)
It is likely that high-detail simulations can be used for extensive experimentation (scientists already use them and hope to one day stop using animal models in favor of computational ones). Surely an AI could become very intelligent and effective without violating ethical rules (though it could choose to, and we might be hard-pressed to stop it if we didn't give the AI ethical motivations to start with).
To those who say "intelligence doesn't matter": it's important to distinguish interspecies intelligence differentials from intraspecies intelligence differentials. Intelligence matters less only when the differential is intraspecies. When you're talking about gaps as large as those between different species, it starts to matter a lot. The vast majority of humans implicitly assume that humans are the end of the road of qualitative intelligence improvement, right near the top of the Great Chain of Being, just below God and the angels. I am honestly astonished how many people believe this even when they should know it is facile anthropocentrism.
Taking the simplest view, we should assume that humans sit somewhere in the middle of the qualitative intelligence spectrum, not at the top or the bottom. If anything, we're near the bottom: we were designed by natural selection, which has many limitations, rather than by intelligent design, which is potentially unlimited in possibilities. Because this is the simplest view, the burden of proof for more complex views (e.g., that humans are at the top of the Great Chain of Being) is on their advocates, not on those who put human intelligence in a non-special place in mindspace. That is the essence of the self-sampling assumption: assume we are typical observers, not particularly special members of the set of all observers.