George Dvorsky gets it right about aliens. Public intellectuals like Stephen Hawking seem to be at an odd point in intellectual evolution: smart enough to think about the possibility of ETs, but not far enough along to realize that if ETs wanted to kill us, they would already be here.
Dvorsky’s five reasons:
1. If aliens wanted to find us they would have done so already
2. If ETIs wanted to destroy us they would have done so by now
3. If aliens wanted our solar system’s resources, they would have taken them by now
4. Human civilization has absolutely nothing to offer a post-Singularity intelligence
5. Extrapolating biological tendencies to a post-Singularity intelligence is asinine
The only one of these I might question is #5. In “The Basic AI Drives”, Steve Omohundro argues that artificial intelligences will naturally want to 1) self-improve, 2) be rational, 3) preserve their utility functions, 4) prevent counterfeit utility, 5) be self-protective, and 6) acquire resources and use them efficiently. I would argue that any agent needs these features to some degree to perpetuate itself in a hostile universe where even the weather is a formidable foe. (Unless you are in interstellar space, where the weather is relatively calm.) Therefore, I would not hesitate to extend the biological tendencies listed above to a large category of possible agents, including post-Singularity intelligences.