Mike LaTorra and J. Hughes led a group discussion at the Convergence 08 unconference titled “Digital Serfs and Cyborg Buddhas.” Digital (or data) serfdom already exists, and is growing, among high-tech workers; in a future of mind-uploading, it could worsen into a dystopian horror. The bright alternative to this vision of servitude in dark digital mills is life as an enhanced, empowered, free individual, the Cyborg Buddha, who has both technological abundance and the leisure to savor it in contemplative bliss.
Rich computer simulations or quantitative models can enable an agent to realistically predict real-world behavior with precision and performance that are difficult to emulate in logical formalisms. Unfortunately, such simulations lack the deductive flexibility of techniques such as formal logics, and so do not find natural application in the deductive machinery of commonsense or general-purpose reasoning systems.
This dilemma can, however, be resolved via a hybrid architecture that combines tableaux-based reasoning with a framework for generic simulation built on the concept of ‘molecular’ models. In a presentation at the AGI-08 Conference on his paper with Mary-Anne Williams, Benjamin Johnston argues that this combination exploits the complementary strengths of logic and simulation, allowing an agent to build and reason with automatically constructed simulations in a problem-sensitive manner.
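The general idea can be illustrated with a toy sketch, assuming nothing about the actual Johnston-Williams system: a reasoner first attempts symbolic deduction, and falls back to running a concrete simulation when the logic is inconclusive. All function names, rules, and the falling-glass scenario below are hypothetical.

```python
# Toy illustration of a hybrid reasoner: try symbolic deduction first,
# fall back to a concrete numerical simulation when logic is inconclusive.
# All names and rules here are hypothetical, not the Johnston-Williams system.

def deduce(facts, rules, query):
    """Forward-chain over (premises, conclusion) rules; None = inconclusive."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return True if query in known else None

def simulate_drop(height_m, dt=0.001):
    """Tiny simulated 'model': integrate a falling object, return impact time."""
    y, v, t = height_m, 0.0, 0.0
    while y > 0.0:
        v += 9.81 * dt   # gravity accelerates the object
        y -= v * dt
        t += dt
    return t

def answer(query):
    facts = {"glass_released"}
    rules = [({"glass_released"}, "glass_falls")]
    verdict = deduce(facts, rules, query)
    if verdict is not None:
        return verdict
    # Logic alone cannot answer quantitative queries; ground them in simulation.
    return simulate_drop(1.2) < 1.0  # does it land within one second?

print(answer("glass_falls"))      # settled symbolically
print(answer("lands_within_1s"))  # settled by simulation
```

The point of the sketch is the control flow: the deductive layer is consulted first, and the simulation is constructed only for the queries the logic cannot settle, mirroring the problem-sensitive use of simulation the paper describes.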
At AGI-08: The First Conference on Artificial General Intelligence, Andrew Shilliday of the Rensselaer A.I. and Reasoning Lab reported on the lab’s efforts to enable artificial agents to reason about the beliefs of others, work that has produced game characters able to predict the behavior of human players.
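The core move in this kind of belief reasoning can be shown with a minimal, generic theory-of-mind sketch (my own invented example, not the lab’s actual system): the agent tracks the player’s beliefs separately from the true world state, and predicts behavior from the beliefs rather than the facts.

```python
# Generic theory-of-mind sketch: an NPC models the player's beliefs
# separately from ground truth, and predicts actions from those beliefs.
# Hypothetical example, not the RAIR Lab's actual implementation.

world = {"key": "kitchen"}            # ground truth: the key was moved here
player_beliefs = {"key": "library"}   # the player last saw the key here

def predict_player_goal(item):
    """The player will search where *they believe* the item is."""
    return player_beliefs.get(item)

def observe(agent_beliefs, item, location):
    """Update an agent's belief state after they see the item."""
    agent_beliefs[item] = location

print(predict_player_goal("key"))   # 'library', not the true location
observe(player_beliefs, "key", world["key"])
print(predict_player_goal("key"))   # now 'kitchen'
```

The separation between `world` and `player_beliefs` is what lets the character anticipate a false-belief mistake, the classic test of reasoning about other minds.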
One might imagine that AI systems with harmless goals will demonstrate harmless behavior. A paper by Self-Aware Systems founder and president Steve Omohundro, submitted for the AGI-08 conference, argues instead that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. His presentation on the basic AI drives, delivered at the post-conference workshop, identifies a number of “drives” that will appear in sufficiently advanced AI systems of any design.
One approach in pursuit of general intelligent agents has been to concentrate on the underlying cognitive architecture, of which Soar is a prime example. In the past, Soar has relied on a minimal number of architectural modules together with purely symbolic representations of knowledge. At the AGI-08 conference, John Laird, Tishman Professor of Engineering at the University of Michigan, presented the cognitive architecture approach to general intelligence and the traditional, symbolic Soar architecture. This overview was followed by a discussion of major additions to Soar, including non-symbolic representations, new learning mechanisms, and long-term memories.
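The traditional symbolic core of such architectures can be caricatured in a few lines: a production system repeatedly matches rules against working memory and fires the ones whose conditions hold. The toy below is an illustrative sketch in that general spirit, with invented rules; real Soar adds operators, impasses, chunking, and much more.

```python
# Toy production-system cycle in the spirit of symbolic cognitive
# architectures: match rules against working memory, fire them, repeat.
# Illustrative only; not Soar's actual decision cycle.

working_memory = {("goal", "make-tea"), ("have", "kettle")}

# Each production: (conditions that must all hold, elements to add)
productions = [
    ({("goal", "make-tea"), ("have", "kettle")}, {("do", "boil-water")}),
    ({("do", "boil-water")}, {("have", "hot-water")}),
    ({("have", "hot-water")}, {("do", "steep-tea")}),
]

def decision_cycle(wm, rules):
    """Fire every matching rule until working memory stops changing."""
    changed = True
    while changed:
        changed = False
        for conditions, additions in rules:
            if conditions <= wm and not additions <= wm:
                wm |= additions
                changed = True
    return wm

final = decision_cycle(set(working_memory), productions)
print(("do", "steep-tea") in final)
```

Everything here is symbolic: attribute-value tuples and set matching. The additions Laird described (non-symbolic representations, new learning mechanisms, long-term memories) are precisely what this purely symbolic loop lacks.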
Bill Hibbard, Emeritus Senior Scientist of the Space Science and Engineering Center at the University of Wisconsin-Madison, contributed a paper to the AGI-08 post-conference workshop arguing that machines significantly more intelligent than humans will require changes in legal and economic systems in order to preserve human values. An open source design for artificial intelligence could help this process by discouraging corruption, by enabling many minds to search for errors, and by encouraging political cooperation. In his presentation, he encouraged the Singularity Institute for Artificial Intelligence not to remain silent on the subject of politics but to take a proactive stance as a political organization.
Tad Hogg is a researcher in the Social Computing Laboratory of Hewlett-Packard, focused on harvesting the collective intelligence of groups of people to optimize the interaction between users and information. At the 2008 Global Catastrophic Risks conference in Mountain View, he presented on the prospects of distributed surveillance with MEMS & nano-scale sensors.
To be practically useful, the measurement of aging rate (by monitoring the decline of a global index of functional capacity, expressed as a rate function) must be relatively easy and inexpensive. Measured aging rate should enable empirical testing of purported anti-aging interventions in relatively short-term human clinical trials. This is the primary objective of the Kronos Longitudinal Aging Study. Chris Heward, President of Kronos Science Laboratory, presented on strategies aimed at intervening in the aging process at the 2007 Foresight Vision Weekend, including a brief overview of the study and its aims.
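The notion of an aging rate as the slope of a declining functional-capacity index can be sketched numerically. The data and the composite index below are invented for illustration; the actual Kronos measures are far more elaborate.

```python
# Estimate an individual 'aging rate' as the least-squares slope of a
# composite functional-capacity index measured over several years.
# Data and index values are invented for illustration.

def aging_rate(ages, index_values):
    """Least-squares slope: index points per year (negative = decline)."""
    n = len(ages)
    mean_a = sum(ages) / n
    mean_i = sum(index_values) / n
    num = sum((a - mean_a) * (i - mean_i) for a, i in zip(ages, index_values))
    den = sum((a - mean_a) ** 2 for a in ages)
    return num / den

# Annual visits: age vs. composite index (100 = young-adult baseline)
ages = [50, 51, 52, 53, 54]
index = [88.0, 87.1, 86.3, 85.2, 84.4]
rate = aging_rate(ages, index)
print(round(rate, 2))  # -0.91 index points per year
```

A short-term trial would then compare this slope before and after an intervention: a purported anti-aging therapy should measurably flatten the decline, which is exactly why an easy, inexpensive rate measurement matters.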
The 2007 Foresight Vision Weekend offered fifteen intense hours of mind-blowing creativity. From the biggest picture of tomorrow’s web to the tiniest picture of nanotechnology, the emphasis was on how to steer rapid change for the benefit of civilization, instead of being run over by it. Brad Templeton introduced the second day of the unconference organized by the Foresight Nanotechnology Institute, touching on the tremendous opportunities of living in the midst of a historic revolution in converging technologies.
Jamais Cascio gave the closing talk at GCR08, a Mountain View conference on Global Catastrophic Risks. Titled “Uncertainty, Complexity and Taking Action,” the discussion focused on the challenges inherent in planning to prevent future disasters emerging as the result of global-scale change.
Jamais Cascio is a Senior Fellow for the Institute for Ethics and Emerging Technologies, a research affiliate at the Institute for the Future, and blogs at Open the Future. He presented on the concept of engineering civilization to be more resilient in the face of catastrophic risks at GCR08, the November Global Catastrophic Risks conference in Mountain View. A day-long seminar on threats to the future of humanity, natural and man-made, the meeting offered various viewpoints on the proactive steps we can take to reduce global risks.
At the AGI-08 post-conference workshop on the ethical implications of artificial general intelligence, J. Storrs Hall, author of Beyond AI: Creating the Conscience of the Machine, presented on “Engineering Utopia.” The paper asserts that the likely advent of AGI and the long-established trend of improving computational hardware promise a dual revolution in coming decades: machines which are both more intelligent and more numerous than human beings. This possibility raises substantial concern over the moral nature of such intelligent machines, and of the changes they will cause in society. Will we have the chance to determine their moral character, or will evolutionary processes and/or runaway self-improvement take the choices out of our hands?