Within the next few decades, and perhaps sooner, a new type of manufacturing will be made possible by molecular nanotechnology (MNT). Considering its enormous potential for profound social, environmental, economic, and military impacts, MNT has received insufficient attention in ethical and policy discussions. Mike Treder, co-founder and Executive Director of the Center for Responsible Nanotechnology, presented on the global risks posed by molecular nanotechnology and the potential for resilience at the November Global Catastrophic Risks conference.
Mike LaTorra and J. Hughes led a group discussion at the Convergence 08 unconference titled “Digital Serfs and Cyborg Buddhas.” Digital (or data) serfdom currently exists — and is growing — among high-tech workers. In a future of mind-uploading, the situation could worsen into a dystopian horror. The bright alternative to this vision of servitude in dark digital mills is life as an enhanced, empowered, free individual, the Cyborg Buddha, who enjoys technological abundance and the leisure to savor it in contemplative bliss.
Rich computer simulations or quantitative models can enable an agent to realistically predict real-world behavior with a precision and performance that are difficult to emulate in logical formalisms. Unfortunately, such simulations lack the deductive flexibility of techniques such as formal logics and so do not find natural application in the deductive machinery of commonsense or general-purpose reasoning systems.
This dilemma can, however, be resolved via a hybrid architecture that combines tableaux-based reasoning with a framework for generic simulation based on the concept of ‘molecular’ models. In a presentation delivered at the AGI-08 conference on his paper with Mary-Anne Williams, Benjamin Johnston argues that this combination exploits the complementary strengths of logic and simulation, allowing an agent to build and reason with automatically constructed simulations in a problem-sensitive manner.
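The general idea can be illustrated with a toy sketch. This is not Johnston and Williams’ actual architecture — the rule base, predicates, and fallback simulation below are invented for illustration — but it shows the division of labor the blurb describes: answer a query symbolically when the theory covers it, and ground it in a forward simulation when it does not.

```python
# Hypothetical sketch, NOT the actual Johnston & Williams system: a reasoner
# that consults a symbolic rule base first and falls back to a crude
# numerical simulation for queries outside its logical theory.

def simulate_ball_drop(height_m, dt=0.001):
    """Forward-simulate free fall; return seconds to reach the ground."""
    g, t, y, v = 9.81, 0.0, height_m, 0.0
    while y > 0:
        v += g * dt   # Euler integration of velocity
        y -= v * dt   # and of position
        t += dt
    return t

# Tiny symbolic "theory": facts the agent can answer by lookup/deduction.
RULES = {("fragile", "glass"): True, ("fragile", "anvil"): False}

def query(predicate, *args):
    # First, try the symbolic rule base.
    if (predicate, *args) in RULES:
        return RULES[(predicate, *args)]
    # Otherwise, ground the query in a simulation of the relevant physics.
    if predicate == "falls_faster_than":
        height_m, seconds = args
        return simulate_ball_drop(height_m) < seconds
    raise ValueError(f"query not answerable: {predicate}")

print(query("fragile", "glass"))             # answered from the rule base
print(query("falls_faster_than", 5.0, 2.0))  # answered by simulation
```

The point of the hybrid is visible even at this scale: the rule base gives instant, composable answers for what it knows, while the simulation answers quantitative questions that would be awkward to axiomatize.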
The Technology Roadmap for Productive Nanosystems charts a path from current nanotechnology capabilities to advanced, molecularly precise systems. Christine Peterson, co-founder of the Foresight Nanotech Institute, spoke in January to a Stanford graduate class in technology forecasting on the organization’s efforts to lay out a step-by-step course of development for molecular nanotechnology. After outlining near- and mid-term projections for nanoscale technologies, she introduced the future objective of establishing open source physical security, a means to broadly protect both privacy and safety in a society empowered with sophisticated surveillance technologies.
At AGI-08: The First Conference on Artificial General Intelligence, Andrew Shilliday of the Rensselaer A.I. and Reasoning Lab reported on the lab’s efforts to enable artificial agents to reason about the beliefs of others, work that has yielded game characters able to predict the behavior of human players.
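A minimal false-belief example conveys what such reasoning involves. The scenario and names below are invented, not taken from the RAIR Lab’s system: the agent predicts the player’s move from a model of the player’s beliefs, which may differ from the true world state.

```python
# Toy sketch (not the RAIR Lab's actual system): a game agent predicts a
# player's action from the player's *believed* world state rather than
# from the true state the agent itself can see.

world = {"key_location": "chest"}            # ground truth: key was moved
player_beliefs = {"key_location": "drawer"}  # player never saw the move

def predict_player_move(beliefs):
    """Reason from the modeled beliefs, not from the agent's own knowledge."""
    return "search_" + beliefs["key_location"]

# The agent correctly predicts the player will search the wrong place.
print(predict_player_move(player_beliefs))  # search_drawer
```

The essential move, even in this toy, is that prediction is computed over a separate belief store rather than over the agent’s own model of the world.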
One might imagine that AI systems with harmless goals will demonstrate harmless behavior. A paper by Self-Aware Systems founder and president Steve Omohundro, submitted to AGI-08, the conference on artificial general intelligence, shows instead that intelligent systems must be carefully designed to prevent them from behaving in harmful ways. His presentation on the basic AI drives, delivered at the post-conference workshop, identifies a number of “drives” that will appear in sufficiently advanced AI systems of any design.
One approach in pursuit of general intelligent agents has been to concentrate on the underlying cognitive architecture, of which Soar is a prime example. In the past, Soar has relied on a minimal number of architectural modules together with purely symbolic representations of knowledge. At the AGI-08 conference John Laird, Tishman Professor of Engineering at the University of Michigan, presented the cognitive architecture approach to general intelligence and the traditional, symbolic Soar architecture. This overview was followed by a description of major additions to Soar, including non-symbolic representations, new learning mechanisms, and long-term memories.
Bill Hibbard, Emeritus Senior Scientist of the Space Science and Engineering Center at the University of Wisconsin-Madison, contributed a paper to the AGI-08 post-conference workshop arguing that machines significantly more intelligent than humans will require changes in legal and economic systems in order to preserve human values. An open source design for artificial intelligence could help this process by discouraging corruption, by enabling many minds to search for errors, and by encouraging political cooperation. In his presentation, he encouraged the Singularity Institute for Artificial Intelligence not to remain silent on the subject of politics but to take a proactive stance as a political organization.
Tad Hogg is a researcher in Hewlett-Packard’s Social Computing Laboratory, focused on harvesting the collective intelligence of groups of people to optimize the interaction between users and information. At the 2008 Global Catastrophic Risks conference in Mountain View, he presented on the prospects for distributed surveillance with MEMS and nanoscale sensors.
To be practically useful, the measurement of aging rate (by monitoring the decline of a global index of functional capacity, expressed as a rate function) must be relatively easy and inexpensive. A measured aging rate should enable empirical testing of purported anti-aging interventions in relatively short-term human clinical trials. This is the primary objective of the Kronos Longitudinal Aging Study. Chris Heward, President of Kronos Science Laboratory, presented on strategies aimed at intervening in the aging process at the 2007 Foresight Vision Weekend, including a brief overview of the study and its intentions.
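One simple way to turn repeated measurements of such an index into a rate is an ordinary least-squares slope over visit dates. The sketch below is an illustration under that assumption, not the Kronos study’s actual methodology; the index values are fabricated for the example.

```python
# Hypothetical illustration (not the Kronos study's methodology): estimate an
# individual's aging rate as the least-squares slope of a composite
# functional-capacity index measured at successive clinic visits.

def aging_rate(years, index_values):
    """OLS slope of index vs. time, in index units per year."""
    n = len(years)
    mean_t = sum(years) / n
    mean_x = sum(index_values) / n
    cov = sum((t - mean_t) * (x - mean_x)
              for t, x in zip(years, index_values))
    var = sum((t - mean_t) ** 2 for t in years)
    return cov / var

# Fabricated data: index starts at 100 and loses roughly 1.5 units per year.
visits = [0, 1, 2, 3, 4]
index = [100.0, 98.4, 97.1, 95.5, 94.0]
print(aging_rate(visits, index))  # about -1.49 index units per year
```

The sign and magnitude of the slope are what a short-term trial would compare before and after an intervention: a less negative slope would indicate a slowed decline.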
The 2007 Foresight Vision Weekend offered fifteen intense hours of mind-blowing creativity. From the biggest picture of tomorrow’s web, to the tiniest picture of nanotechnology, the emphasis was on how to steer rapid change for the benefit of civilization, instead of being run over by it. Brad Templeton introduced the second day of the unconference organized by the Foresight Nanotechnology Institute, touching on the tremendous opportunities that attend living in the midst of a historic revolution in converging technologies.
Steve Omohundro is president of Self-Aware Systems, a Silicon Valley think tank aimed at bringing human values to emerging technologies. His talk “AI and the Future of Human Morality,” delivered at the Silicon Valley World Transhumanist Association Meetup, examines the origins of human morality and its future development to cope with advances in artificial intelligence.
The presentation begins with a discussion of the dangers of philosophies that put ideas ahead of people. It then surveys Kohlberg’s six stages of human moral development, evidence of recent advances in human morality, the theory underlying co-opetition, recent progress in understanding the sexual and social origins of altruism, and the five human moral emotions and their relationship to political systems. The discussion then considers the likely behavior of advanced AI systems, showing that they will want to understand and improve themselves, will have drives toward self-preservation and resource acquisition, and will be vigilant in avoiding corruption and addiction. The presentation ends with a description of the three primary challenges that humanity faces in guiding future technology toward human-positive ends.