Douglas Hofstadter is College Professor of Cognitive Science and Computer Science, and Adjunct Professor of History and Philosophy of Science, Philosophy, Comparative Literature, and Psychology at Indiana University, where he directs the Center for Research on Concepts and Cognition. His books include the Pulitzer Prize-winning Gödel, Escher, Bach: An Eternal Golden Braid, Metamagical Themas, The Mind’s I (with Daniel Dennett), Fluid Concepts and Creative Analogies, Le Ton Beau de Marot, and a verse translation of Pushkin’s Eugene Onegin. At the Singularity Summit at Stanford he stated his belief that the Singularity scenario, “even if it seems wild, raises a gigantic, swirling cloud of profound and vital questions about humanity and the powerful technologies it is producing. Given this mysterious and rapidly approaching cloud, there can be no doubt that the time has come for the scientific and technological community to seriously try to figure out what is on humanity’s collective horizon.”
Environmentalist Bill McKibben, a visiting scholar in environmental studies at Middlebury College, is the author of The End of Nature, the first book for a general audience about global warming. Published in 1989, it is now available in 20 languages. His most recent book, Enough, critiques human genetic engineering, nanotechnology, and other rapidly advancing technologies. At the Singularity Summit at Stanford, he argued that most of us in the West need to decide that we already live long enough. “In societies where most of us need storage lockers more than we need nanotech miracle boxes, we need to declare that we have enough stuff. Enough intelligence. Enough capability. Enough.”
John Smart and Eliezer Yudkowsky at the 2006 Singularity Summit at Stanford
John Smart is a developmental systems theorist who studies accelerating change, computational autonomy, and the singularity. He is President of the Acceleration Studies Foundation, a nonprofit community for research, education, consulting, and selected advocacy of communities and technologies of accelerating change. He also co-produces the Accelerating Change Conference, a meeting of 350 change-leaders and students at Stanford University, and edits ASF’s free newsletter, Accelerating Times, read by future-oriented thinkers around the world. He is a member of the Association of Professional Futurists, the FBI Futures Working Group, and the editorial advisory board of Technological Forecasting and Social Change.
In 2006, he presented the talk “Systems Theories of Accelerating Change” at the Singularity Summit at Stanford. There he looked at accelerating change from universal, biological, human cultural, and technological perspectives, and introduced a few well-known and unorthodox ideas in acceleration mechanics.
Historical progress isn’t inevitable – the pendulum of history doesn’t have a regular period. Sometimes you get a 500-year Dark Age. Science fiction novelist, blogger and technology activist Cory Doctorow argued at the 2006 Singularity Summit at Stanford that whether a Singularity or Dark Age comes next is not preordained, but depends on our conscious effort.
Nature demonstrates that productive nanosystems can work cleanly and inexpensively, converting common materials into billions of tons per year of intricate, atomically precise structures. Progress in molecular and nanoscale technologies has laid the groundwork for engineering simple productive nanosystems. These will enable the development of more intricate and complex productive systems, creating a feedback loop that drives accelerating change. At the 2006 Singularity Summit at Stanford, K. Eric Drexler spoke on how advanced productive nanosystems will deliver unprecedented productivity.
Will artificial intelligence bring about a technological singularity in a soft takeoff? Ray Kurzweil at the 2006 Singularity Summit at Stanford gave an overview of smooth double-exponential progressions that he believes could lead to such an outcome. While his projections are considered radical by some observers, that is often because they think linearly and leave out the historically accurate exponential perspective.
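The gap between linear and double-exponential thinking can be made concrete with a short sketch. The functions and parameter values below are purely illustrative assumptions, not Kurzweil’s actual model or data: a linear projection adds a fixed increment per year, while a double-exponential one lets the growth rate itself grow.

```python
# Illustrative sketch only; the parameters are hypothetical, not drawn from
# Kurzweil's actual projections.

def linear(t, start=1.0, slope=1.0):
    """Linear extrapolation: capability grows by a fixed amount per year."""
    return start + slope * t

def double_exponential(t, start=1.0, base=2.0, rate_growth=1.05):
    """Double-exponential growth: the exponent itself grows exponentially,
    i.e. capability ~ base ** (t * rate_growth ** t)."""
    return start * base ** (t * rate_growth ** t)

# Compare the two projections over a few horizons.
for years in (5, 10, 20):
    print(years, linear(years), double_exponential(years))
```

Over short horizons the two curves look similar, which is why linear intuition feels adequate; over a couple of decades the double-exponential curve dwarfs the linear one by many orders of magnitude.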
With advanced nanotechnology and machine intelligence on the horizon, we face a future of vast change in our physical world and the world of the mind. But we need not abandon efforts to steer this future toward one which will work for both humans and the biosphere. Christine Peterson identified certain ground conditions needed for such a success in the context of powerful technologies in her 2006 Singularity Summit at Stanford presentation entitled “Bringing Humanity & the Biosphere Through the Singularity.”
We know that highly intelligent people can make terrible decisions. The question therefore arises: Will our emotional, social, psychological, ethical intelligence and self-awareness keep up with our cognitive abilities? Max More offered his thoughts by outlining the goals of the proactionary principle at the 2006 Singularity Summit at Stanford.
Rodney Brooks is Director of the MIT Computer Science and Artificial Intelligence Laboratory, Panasonic Professor of Robotics at MIT, and CTO of iRobot Corp (Nasdaq: IRBT). His 2007 Singularity Summit keynote speech, entitled “The Singularity: A Period Not An Event,” argued that the singularity will encompass a period in which a collection of technologies are invented, developed, and deployed in fits and starts, driven not by the imperative of the singularity itself but by the normal economic and sociological pressures of human affairs. While a Hollywood treatment of the singularity would depict a world just like today’s, plus the singularity as a singular event, in reality the world will be changing continuously due to rapid growth in technologies both related and unrelated to the singularity itself.
Eliezer Yudkowsky has two papers forthcoming in the edited volume Global Catastrophic Risks (Oxford, 2007), “Cognitive Biases Potentially Affecting Judgment of Global Risks” and “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” At the 2007 Singularity Summit, he described how shaping a very powerful and general AI poses a different challenge, of greater moral and ethical depth, than programming a special-purpose domain-specific AI. The danger of trying to impose our own values, eternally unchanged, upon the future can be seen through the thought experiment of imagining the ancient Greeks trying to do the same. Human civilizations over centuries, and individual human beings over their own lifespans, directionally change their moral values.
Ray Kurzweil is an inventor, entrepreneur, author, and futurist. Called “the restless genius” by the Wall Street Journal and “the ultimate thinking machine” by Forbes, he was inducted into the National Inventors Hall of Fame in 2002. He helped organize the Singularity Summit at Stanford University in 2006 and gave the keynote presentation addressing some of the central issues explored in his book The Singularity Is Near. At the 2007 Singularity Summit, he attended virtually, giving a brief talk before answering questions from the audience on how technologists are currently uncovering how the brain performs intelligence.