Anna Salamon is a Research Fellow at the Singularity Institute for Artificial Intelligence. Her work centers on analytical modeling of artificial intelligence risks, probabilistic forecasting, and strategies for human survival. Previously, she conducted machine learning research at NASA Ames and applied mathematics research at the Rohwer Phage Metagenomics lab.
This talk considers the following question. Suppose powerful artificial intelligences are at some point created. In such a world, would humanity be able to survive by accident, in margins the super-intelligences haven’t bothered with, as rats and bacteria survive today?
Many have argued that we could, proposing variously that humans might survive as pets, in wilderness preserves or zoos, or because the super-intelligences choose to preserve a legacy legal system. Even in scenarios in which humanity as such doesn’t survive, Vernor Vinge, for example, suggests that human-like entities may serve as components within larger super-intelligences, and others suggest that some of the qualities we value, such as playfulness, empathy, or love, will automatically persist in whatever intelligences arise.
This talk will argue that all these scenarios are unlikely. Intelligence allows the re-engineering of increasing portions of the world, with increasing choice, persistence, and reliability. In a world in which super-intelligences are free to choose, historical legacies will persist only if the super-intelligences prefer those legacies to everything else they can imagine.
This lecture was recorded on 29th January 2011 at the UKH+ meeting. For information on further meetings, please see: