At the AGI-09 post-conference workshop's Roadmap Panel, Itamar Arel of the University of Tennessee announced the founding of a wiki at agi-roadmap.org to support the creation of an AGI Roadmap. Drawing on several previous, related technology-roadmapping projects as examples, J. Storrs Hall described the Foresight Institute's work on the Technology Roadmap for Productive Nanosystems, and Ben Goertzel discussed his participation in writing the Metaverse Roadmap.
When cheap, advanced sensors give rise to ubiquitous monitoring technology, there will be the potential for "sousveillance," the reciprocal, bottom-up monitoring discussed by David Brin in The Transparent Society and by others, to become universal. One could envision a future in which everyone monitors the activities of everyone else. At the AGI-09 post-conference workshop, Ben Goertzel presented a paper, co-authored with Stephan Bugaj, on scenarios for a future that combines advanced artificial intelligence with sousveillance technologies.
Ben Goertzel and J. Storrs Hall at the AGI-09 post-conference workshop
Following in the footsteps of AGI-08, the Future of AI workshop was held in conjunction with AGI-09. This year's workshop, held Monday, March 9th, 2009, at the main conference venue of the Crowne Plaza National Airport in Arlington, Virginia, featured a slate of invited talks as well as contributed papers and posters. The event was hosted by J. Storrs Hall, president of the Foresight Institute, and introduced the topic of the economics of advanced AI.
Rich computer simulations or quantitative models can enable an agent to realistically predict real-world behavior with precision and performance that are difficult to emulate in logical formalisms. Unfortunately, such simulations lack the deductive flexibility of techniques such as formal logics, and so do not find natural application in the deductive machinery of commonsense or general-purpose reasoning systems.
This dilemma can, however, be resolved via a hybrid architecture that combines tableaux-based reasoning with a framework for generic simulation based on the concept of 'molecular' models. In a presentation at the AGI-08 conference on his paper with Mary-Anne Williams, Benjamin Johnston argues that this combination exploits the complementary strengths of logic and simulation, allowing an agent to build and reason with automatically constructed simulations in a problem-sensitive manner.
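The division of labor described above can be sketched in a few lines of code. This is purely an illustration of the general idea of routing physical sub-questions to a simulator while keeping deduction symbolic; the predicates, rules, and toy physics below are invented for this sketch and are not Johnston and Williams' actual architecture, which uses tableaux-based reasoning over 'molecular' models.

```python
# Illustrative hybrid reasoner: symbolic rules handle deduction, while
# queries about physical outcomes are answered by running a numeric
# simulation. All predicates and rules here are hypothetical examples.

def simulate_drop(height_m, dt=0.01):
    """Toy physics simulation: seconds for an object to fall height_m metres."""
    g, y, v, t = 9.81, height_m, 0.0, 0.0
    while y > 0:
        v += g * dt   # simple Euler integration of free fall
        y -= v * dt
        t += dt
    return t

RULES = {
    # symbolic rule: fragile objects break if dropped from above 1 metre
    "breaks": lambda obj, h: obj.get("fragile") and h > 1.0,
}

def query(predicate, obj, height):
    """Answer a query, routing physical sub-questions to the simulator."""
    if predicate == "falls_longer_than_1s":
        return simulate_drop(height) > 1.0    # simulation answers this
    if predicate in RULES:
        return RULES[predicate](obj, height)  # logic answers this
    raise ValueError(f"unknown predicate {predicate}")
```

The point of the hybrid design is that neither component could comfortably answer the other's queries: encoding falling-body dynamics as logical axioms is awkward, while deriving "fragile things break when dropped" from a physics engine alone is equally so.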
At AGI-08: The First Conference on Artificial General Intelligence, Andrew Shilliday of the Rensselaer A.I. and Reasoning Lab reported on the lab's efforts to enable artificial agents to reason about the beliefs of others, resulting in game characters that can predict the behavior of human players.
One might imagine that AI systems with harmless goals will demonstrate harmless behavior. A paper by Self-Aware Systems founder and president Steve Omohundro, submitted for the AGI-08 conference on artificial general intelligence, shows instead that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. In his presentation on the basic AI drives, delivered at the post-conference workshop, Omohundro identifies a number of "drives" that will appear in sufficiently advanced AI systems of any design.
One approach in pursuit of general intelligent agents has been to concentrate on the underlying cognitive architecture, of which Soar is a prime example. In the past, Soar has relied on a minimal number of architectural modules together with purely symbolic representations of knowledge. At the AGI-08 conference, John Laird, Tishman Professor of Engineering at the University of Michigan, presented the cognitive architecture approach to general intelligence and the traditional, symbolic Soar architecture. This overview was followed by a discussion of major additions to Soar, including non-symbolic representations, new learning mechanisms, and long-term memories.
Bill Hibbard, Emeritus Senior Scientist of the Space Science and Engineering Center at the University of Wisconsin-Madison, contributed a paper for the AGI-08 post-conference workshop arguing that machines significantly more intelligent than humans will require changes in legal and economic systems in order to preserve human values. An open-source design for artificial intelligence could help this process by discouraging corruption, by enabling many minds to search for errors, and by encouraging political cooperation. In his presentation, he encouraged the Singularity Institute for Artificial Intelligence not to remain silent on the subject of politics and to take a proactive stance as a political organization.
At the AGI-08 post-conference workshop on the ethical implications of artificial general intelligence, J. Storrs Hall, author of Beyond AI: Creating the Conscience of the Machine, presented on “Engineering Utopia.” The paper asserts that the likely advent of AGI and the long-established trend of improving computational hardware promise a dual revolution in coming decades: machines which are both more intelligent and more numerous than human beings. This possibility raises substantial concern over the moral nature of such intelligent machines, and of the changes they will cause in society. Will we have the chance to determine their moral character, or will evolutionary processes and/or runaway self-improvement take the choices out of our hands?
Professor Hugo de Garis has been given a grant by Xiamen University in Fujian Province, China, to build an artificial brain consisting of 10,000 to 15,000 neural-net circuit modules evolved on an accelerator board running 50 times faster than a PC. He is scheduled to head a conference session on the subject of artificial brains in May at AGI-09, the second conference on artificial general intelligence, after which he will be teaching at the first AGI Summer School in Xiamen, China, in June.
Photo by brewbrooks
At the AGI-08 post-conference workshop, Ben Goertzel presented a paper, co-authored with Stephan Vladimir Bugaj, on a theory of stages of ethical development as applied to artificial intelligence systems. Incorporating prior related theories by Kohlberg and Gilligan, as well as Piaget's theory of cognitive development, the theory is applied to the ethical development of integrative artificial general intelligence systems containing components that carry out simulation and uncertain inference. The key hypothesis is that effective integration of these components is central to the AGI system's ascent up the ethical-stage hierarchy.
At AGI-08: The First Conference on Artificial General Intelligence, Novamente LLC CSO Ben Goertzel presented a paper by Cassio Pennachin et al. on a teaching methodology called Imitative-Reinforcement-Corrective (IRC) learning, proposed as a general approach to teaching embodied, non-linguistic AGI systems. IRC is a framework for automatically learning a procedure that generates a desired type of behavior: a set of exemplars of the target behavior type is used for fitness estimation, reinforcement signals from a human teacher are used for fitness evaluation, and the execution of candidate procedures may be modified by the teacher via corrections delivered in real time.
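The three ingredients of IRC learning can be sketched as a simple evolutionary loop. This is a minimal illustration of the framework's structure only; the function names, the fitness weighting, and the search strategy below are assumptions for this sketch, not the actual Novamente implementation.

```python
import random

# Minimal sketch of an Imitative-Reinforcement-Corrective (IRC) learning
# loop. similarity() supplies the imitative fitness estimate against
# exemplars, teacher_reward() supplies the reinforcement signal, and
# teacher_correct() models real-time corrections to candidate procedures.

def irc_learn(exemplars, similarity, teacher_reward, teacher_correct,
              propose, generations=50, population=20):
    """Search for a procedure whose behavior matches the exemplars.

    similarity(behavior, exemplar) -> float in [0, 1]   (fitness estimation)
    teacher_reward(behavior)       -> float in [0, 1]   (fitness evaluation)
    teacher_correct(procedure)     -> possibly modified procedure
    propose(parent_or_None)        -> new candidate procedure
    """
    best, best_fit = None, -1.0
    pool = [propose(None) for _ in range(population)]
    for _ in range(generations):
        scored = []
        for proc in pool:
            proc = teacher_correct(proc)   # corrective: teacher may intervene
            behavior = proc()              # execute the candidate procedure
            # Imitative component: estimate fitness against the exemplars.
            est = max(similarity(behavior, ex) for ex in exemplars)
            # Reinforcement component: the teacher's reward signal.
            fit = 0.5 * est + 0.5 * teacher_reward(behavior)
            scored.append((fit, proc))
            if fit > best_fit:
                best, best_fit = proc, fit
        # Keep the top half as parents and refill the pool by mutation.
        scored.sort(key=lambda t: t[0], reverse=True)
        parents = [p for _, p in scored[: population // 2]]
        pool = [propose(random.choice(parents)) for _ in range(population)]
    return best, best_fit
```

The even weighting of the imitative and reinforcement terms is an arbitrary choice for this sketch; the framework as described leaves how the two signals are combined open.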