Convergence08, the technology unconference, opened with a different kind of AI debate: not about whether to create AI, or which technical path will get there fastest, but about how we can use AI technology to build the world we want to live in. Jonas Lamis of SciVestor moderated a panel of artificial intelligence experts: Barney Pell of Powerset, Steve Omohundro of Self-Aware Systems, Peter Norvig of Google, and Ben Goertzel of Novamente.
One might imagine that AI systems with harmless goals will exhibit harmless behavior. A paper by Self-Aware Systems founder and president Steve Omohundro, submitted to the AGI-08 conference on artificial general intelligence, shows instead that intelligent systems must be carefully designed to prevent them from behaving in harmful ways. His presentation on these basic AI drives, delivered at the post-conference workshop, identifies a number of “drives” that will appear in sufficiently advanced AI systems of any design.
Steve Omohundro is president of Self-Aware Systems, a Silicon Valley think tank aimed at bringing human values to emerging technologies. His talk “AI and the Future of Human Morality,” delivered at the Silicon Valley World Transhumanist Association Meetup, examines the origins of human morality and its future development to cope with advances in artificial intelligence.
The presentation begins with a discussion of the dangers of philosophies that put ideas ahead of people. It then presents Kohlberg’s six stages of human moral development, evidence for recent advances in human morality, the theory underlying co-opetition, recent advances in understanding the sexual and social origins of altruism, and the five human moral emotions and their relationship to political systems. The discussion then turns to the likely behavior of advanced AI systems, showing that they will want to understand and improve themselves, will have drives toward self-preservation and resource acquisition, and will be vigilant in avoiding corruption and addiction. The presentation ends with a description of the three primary challenges humanity faces in guiding future technology toward human-positive ends.
SIAI Interview Series – Steve Omohundro
At the 2007 Foresight Vision Weekend, Self-Aware Systems founder Stephen Omohundro led a discussion in which participants were asked to design the year 2030, assuming the existence of both self-improving artificial intelligence and productive nanotechnology. Omohundro illustrated some of the likely characteristics of systems based on these technologies, including drives toward efficiency, self-preservation, resource acquisition, and creativity. The group discussion that followed focused on identifying rights and obligations that could capture the potential societal benefits of such advanced technologies while guarding against their inherent dangers.
As computers become more complex and more parallel, today’s development paradigm appears increasingly incapable of keeping pace with accelerating technological change. In this talk, continued from “Self-Improving AI: The Future of Computing,” Stephen Omohundro of Self-Aware Systems describes in his October 24, 2007 Stanford University Computer Systems Colloquium a new approach to “software synthesis,” in which artificially intelligent machines take over many of the tasks of software development.
Today’s software is routinely criticized as buggy and insecure, and as both too expensive and too time-consuming to create. As computers become more complex and more parallel, today’s development paradigm appears increasingly incapable of keeping pace with accelerating technological change. Stephen Omohundro of Self-Aware Systems describes in his October 24, 2007 Stanford University Computer Systems Colloquium a new approach to “software synthesis,” in which artificially intelligent machines take over many of the tasks of software development. The approach is based on “self-improving systems,” which improve themselves by learning from their own operation. These same systems have the potential to develop radically improved hardware based on nanotechnology, leading to profound technological and social consequences.
Stephen Omohundro has had a wide-ranging career as a scientist, university professor, author, software architect, and entrepreneur. At the 2007 Singularity Summit hosted by the Singularity Institute for Artificial Intelligence, he asked whether we can design intelligent systems that embody our values, even after many generations of self-improvement. His talk demonstrates that self-improving systems will converge on a cognitive architecture first described in von Neumann’s work on the foundations of microeconomics. He shows that these systems will have drives toward efficiency, self-preservation, resource acquisition, and creativity, and that these are likely to lead to both desirable and undesirable behaviors unless we design them with great care.
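The cognitive architecture referenced above is, in the von Neumann-Morgenstern tradition, an expected-utility maximizer: the agent weighs each available action by the probability-weighted utility of its outcomes and picks the best. The following toy sketch illustrates only that decision rule; it is not Omohundro’s model, and the actions, outcomes, probabilities, and utilities are hypothetical examples chosen to echo the resource-acquisition drive discussed in the talk.

```python
# Toy expected-utility maximizer (von Neumann-Morgenstern decision rule).
# All actions, outcomes, probabilities, and utilities below are hypothetical.

def expected_utility(action, outcomes, utility):
    """Sum the utility of each possible outcome, weighted by its
    probability under the given action."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

def choose_action(actions, outcomes, utility):
    """A rational agent selects the action with maximal expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Hypothetical scenario: an agent deciding whether to acquire more resources.
outcomes = {
    "idle":            {"status_quo": 1.0},
    "acquire_compute": {"more_resources": 0.8, "status_quo": 0.2},
}
utility = {"status_quo": 1.0, "more_resources": 5.0}.get

best = choose_action(list(outcomes), outcomes, utility)
print(best)  # the higher-expected-utility action wins: 0.8*5 + 0.2*1 > 1
```

Because any utility function that values almost anything is better served with more resources than with fewer, a maximizer of this kind tends toward resource acquisition unless its goals are designed to say otherwise, which is the crux of the drives argument.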