So there was a rumor, possibly false (although the linked page builds on a mention from a scientist who actually attended the event), that a Dr. Pianka wanted to kill 90% of humanity with Ebola. His statement is here. The gist of it is that he wouldn't actively pursue humanity's destruction, but wouldn't mind if Ebola did the job. Still, the point is that there could be evil supergeniuses among us. (Read his statement for more detail.)
This issue reminds me of an acronym that Philippe Van Nedervelde used in his talk on existential risks and the Lifeboat Foundation at Transvision 2007 -- SIMAD -- Single Individual, Massively Destructive. He also pointed to the Unabomber, and showed a picture of him from his days as a math professor, looking just like a typical academic. There is a risk from radical, out-of-control nutcases like Al Qaeda, yes, but such people tend to have trouble infiltrating truly relevant organizations or acquiring the complex knowledge needed to do real damage.
In the case of AI and synthetic biology, the biggest risks will come from smart people who have a grudge against society, and even from those with noble motives but insufficient caution or professional ethics. After all, if it were possible for humanity to destroy itself, it would have done so a long time ago... right? Wrong. Selection effects ensure that we will always find ourselves in a civilization that hasn't previously destroyed itself -- so our survival to date is no evidence that we're safe.
In the comments section of a blog I was reading yesterday, someone had this to say:
Much of the problem faced by those trying to tell us about existential risks is the fact that we've been bitten too hard and too long by wolf-criers for the past six years. As a result, ANYONE who talks about dangers is likely to get the cold shoulder, regardless of whether 1) they are sincere as opposed to jockeying for power, or 2) the risk they're talking about is actually real or not.
This does seem true, and admonitions about global warming may be partially to blame, as well as terrorist fearmongering (some of which may, in fact, be well-founded). Anthropogenic global warming is real, yes, but I don't think it's an existential risk, especially not in the next few decades. Constant bombardment with warnings about climate change and terrorism is desensitizing the populace to warnings of existential risk. I'm not saying such warnings are a bad thing, just pointing out that they're desensitizing us. The fact that the most severe risks come from technologies just barely beginning to roll off the assembly lines -- advanced AI and robotics, and synthetic biology -- doesn't help matters either.
But, as always, you, the reader, can refuse to be a part of the problem. You can take existential risk seriously, and refuse to write off those who discuss these dangers, like Martin Rees and Stephen Hawking, as "doomsayers". For most of the past 10,000 years, catastrophic technological risk has been impossible. Even global thermonuclear war would be more likely to kill off 10% or 20% of the population than 99% or 100%. And if you care about the long-term future of humankind as a whole, there's a hell of a lot of difference between killing a billion people and killing everyone.
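To put a rough number on that difference, here's a back-of-the-envelope calculation; every figure in it is an assumption picked purely for illustration, not anything established above:

% Illustrative assumptions only: ~10^10 lives per century, and a
% million-year future at roughly the present population scale.
\[
  V \;\approx\; \underbrace{10^{10}\ \tfrac{\text{lives}}{\text{century}}}_{\text{assumed}}
      \times \underbrace{10^{4}\ \text{centuries}}_{\text{1 million years}}
  \;=\; 10^{14}\ \text{potential future lives.}
\]
\[
  \frac{\text{loss from extinction}}{\text{loss from killing }10^{9}}
  \;\approx\; \frac{10^{14}}{10^{9}} \;=\; 10^{5}.
\]

On these (admittedly arbitrary) assumptions, extinction is on the order of a hundred thousand times worse than a catastrophe that kills a billion people, because it forfeits the entire future rather than a slice of the present.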