Michael Vassar was recently appointed as President of the Singularity Institute for Artificial Intelligence (SIAI), an organization devoted to advocacy and research for safe advanced AI. On a recent visit to the Bay Area from New York, Michael sat down with me in my San Francisco apartment to talk about the Singularity and the future of SIAI.
Accelerating Future: What is the Singularity Institute for?
Michael Vassar: Sooner or later, if humanity survives long enough, someone will create human-level artificial intelligence. After that, the future will depend on exactly what kind of AI was created, with what exact long-term goals. The Singularity Institute’s aim is to ensure that the first artificial intelligences powerful enough to matter will steer the future in good directions and not bad ones. Put more technically, the Singularity Institute exists to promote the development of a precise and rigorous mathematical theory of goal systems — a theory well enough founded that we can make something smarter and more powerful than we are while still knowing it will create good outcomes. This requires extending current theoretical computer science to include rigorous models of reflectivity, and extending current cognitive science to include rigorous models of what outcomes humans consider “good”.
AF: Who are the primary employees of SIAI and what do they do?
Vassar: The main employees are Eliezer Yudkowsky, our founding AGI Research Fellow, Anna Salamon and Steve Rayhawk (two more recently recruited AGI researchers), myself, and our administrator Alicia Isaac.
The AGI researchers do mathematical work on AGI and on AGI Friendliness, look at potential recruits, and do side projects such as running www.lesswrong.com and building software for singularity timeline modeling and for singularity educational outreach. I network with donors and potential donors while explaining our organization and organizing the Singularity Summit. Along with some volunteers and affiliates, I am also putting together an essay contest to elicit creative ideas and analysis regarding the technological opportunities and dangers humankind faces. I think this is important, as there may be any number of potential global catastrophic risks which are understood only in tiny communities and which deserve wider attention.
AF: Can you tell us the story of how you first found out about SIAI’s mission, and why you think it matters?
Vassar: Well, there are really two questions here. The first is how I concluded that SIAI could have an impact. The second is how I concluded that its goal was important.
Regarding the first question, the major influence was my progressive discovery of the inadequacy of the deliberative and decision-making organs of modern society. I saw fundamentalism. I saw the War on Drugs. I saw failure to adequately secure nuclear materials in Eastern Europe and failure to build adequate levees in the world’s richest nation. Eventually I integrated all of these facts into my world-view rather than leaving them as dangling exceptions to an unchallenged assumption that the collective behavior of the world around me was basically sane. It became clear that even if a technological singularity this century was pretty likely, I should still expect that by default no one with any serious power would react rationally to the possibility until much too late. Warren Buffett, Sam Nunn, and Ted Turner get kudos for being an exception with their Nuclear Threat Initiative, but they are the exception that proves the rule. Yes, the powers that be really could be collectively stupid enough to hear about the singularity, acknowledge it in the occasional speech, but generally ignore it and allow it to happen in whatever manner is the easy default, even if that default is human extinction. They collectively mess up much easier issues all the time.
Speaking with a very wide variety of people, I also discovered that a singularity in the 21st century didn’t actually violate common sense about the 21st century. This is because there is no common sense regarding the future, just clichés taken from science fiction. If you ask most smart, educated people to describe the world in 20, 50, or 100 years, you will get basically the same random mix of sci-fi clichés. Their answer will really address only a question about a generalized fictional “future”, without regard for chronological distance. If you ask them about their own lives 20 or 50 years from now, you don’t elicit any sci-fi schemas. They answer as if in 50 years they will be living in the current year.
My other reason for thinking that SIAI can matter is that there seems to me to be precedent. My read of scientific history, especially of medical history, strongly suggests that small groups of thinkers operating outside of the mainstream but in contact with it really can reach correct conclusions that clash with the mainstream. When they do, they can either fail, like Ignaz Semmelweis, or succeed, like Florence Nightingale, in winning over the mainstream decades earlier than would otherwise occur. Success seems to depend significantly on not believing that in order to be rational one must pretend that everyone else is rational.
Regarding the second question, honestly… it’s obvious. The first story about robots was one about them destroying humanity, and, Asimov excepted, so were most of the others. The first written story we know of, Gilgamesh, was about man’s quest for immortality. The second, the story of Eden, was as well. The very act of writing stories at all is in a sense a more easily achievable attempt at a sort of limited immortality. I think that human survival and flourishing in the abstract matter because I’m human, and to be human implies having preferences from which it can be inferred that human survival and flourishing matter.
AF: When will the Singularity happen?
Vassar: Hopefully, as soon as it can happen safely. More probably, before then, in which case not merely humans but humanity itself will perish. I think we almost definitely have a couple more decades. If I could choose, I’d say millennia and hope conventional life extension works out well enough to save most existing lives, but sadly I can’t. Thirty or forty years would be enough time for humankind to get its act together if a few hundred capable people made a serious effort starting today. A few dozen capable people already have.
AF: Why can’t we just take for granted that the Singularity will go well, due to a gradual merger of humans and machines?
Vassar: I’m not very confident that even our development so far can fairly be said to have gone well from the perspective of past humans. We may be satisfied with what we are, but much of what they valued, the thrill of violent triumph over their enemies, for instance, or in most cases even the thrill of hunting, no longer appeals to us. Will love, excitement, curiosity, and the other things we value be likewise lost? Must we accept that? Humans may in time merge with machines, but that leaves a great deal unsaid. Will they merge as cells merge into a body, a concerted organizational whole in which each part retains the complexity of its ancestors and then some, or in some other manner? Cows and chickens frequently merge with contemporary humans, and while this benefits their genes, it’s not very satisfactory for them.
Today, humans are better than computers at many things, and computers are better than us at others. In such a situation man and machine are complements, and cooperation is mutually beneficial. Computers have not yet filled our ecological niche or economic role. Once they can do everything that humans can do, they will, by default, fill our role. This need not be disastrous. The automobile has not led to the extinction of the horse. It has filled the horse’s economic role but not its companionate role. Humans care about horses for their own sake, protect them, feed them, maintain their health, and join their hooves with metal to build upon their natural propensities. If computers value humans for our own sakes, our future can be a far greater improvement over our past than the life of a well-cared-for domestic horse is over dry, fly-bitten savannas filled with implacable predators. If they don’t… well, you could merge the CPU of your 286 with contemporary machines, but why would you ever bother to?
AF: What is your favorite technology invented in the last decade and why?
Vassar: Hmm. I haven’t used many technologies invented in the last decade, though all of the high-tech stuff I use today is a lot better than the versions that existed a decade ago. Of the technologies that became widespread in the last decade, Google search obviously wins. Among those that became ubiquitous, it’s the cell phone. Cellular telephony was the most rapidly transformative innovation ever once it hit the mainstream, but the S-curve of its adoption meant that it was possible to see it coming decades in advance. In the next decade, I expect e-paper and RFID to be big, especially the former, but today I use the latter only occasionally and the former almost never. There are always new medical and energy technologies in the works, but we don’t notice the former unless we are sick, or the latter at all. I’m fairly hopeful about regenerative medicine in the next decade based on work over the last decade, and I’d be very surprised if the recent downward trend in heart disease didn’t continue. There are a couple of promising approaches to cancer and Alzheimer’s that could create similar trends starting in the next decade, but it will probably take a lot longer before they become ubiquitous.
AF: When will SIAI start its AI project?
Vassar: AGI is basic science, not R&D. SIAI already has, as mentioned above, three researchers working full time on math and philosophy problems relevant to AGI. In the summers we train undergraduate students in some of what we and others have learned, and by Fall 2010, if not earlier, we hope to be funding graduate students to work on AGI research projects. The intention is to fund research along lines that will contribute to analytically comprehensible, and thus potentially safe, AGI. Fortunately, such research is also more mathematically elegant and intellectually engaging than much that goes on in AI, so we believe that we will have an advantage in attracting the best graduate students to such work once it is funded. Continuing our educational theme with a larger audience, the blog Overcoming Bias and its successor Less Wrong are largely an attempt to teach humans the epistemology that Eliezer Yudkowsky developed for AI in the process of working out a generalized understanding of intelligence.
AF: What do you think of the idea of using online virtual worlds as a place to raise and develop AIs?
Vassar: If we create AIs with human-like cognitive architectures, they will definitely need human-like sensory environments containing virtual bodies complex enough to promote cognitive development. This isn’t a very scientifically novel idea, but no one has really made either avatars or virtual worlds with anything close to the required complexity, and it’s a very big job. Fortunately, since it’s not dependent on any exotic mathematical insights, a large community of volunteers can contribute to our work on this. Even if AIs don’t end up using human-like cognitive architectures, such worlds may be useful both to them and to us.
AF: What do you think is the relative difficulty of substantially enhancing human intelligence via brain-computer interfaces or biotech approaches vs. creating AGI?
Vassar: Biotech approaches to increasing human intelligence seem to be a sure thing in a sense that AGI is not, but the time-frame and expense of such an approach mean that it probably remains decades away. The world community is also likely to be much more sensitive to ethical issues raised by biotech than to even much more serious ethical issues if the relevant technology is computational. For instance, a lab that tested whether drugs, genetic modifications, or infusions of stem cells could be used to increase the intelligence of chimpanzees to human levels would be subject to severe ethical criticism, but a project that used evolutionary algorithms to try to evolve a human-level AI from a chimpanzee-level AI would be much less criticized, even if the latter involved creating and killing billions of simulated organisms (killing billions to produce a tiny change being what natural selection does).
Given SIAI’s limited resources relative to national scientific research institutions, we intend to leave biotech and neurotech approaches to others, except possibly by filling the role of an occasional outside critic of ethically troubling research. However, when we build software to model likely technological dynamics, and thus to better predict the time to the singularity, feedback from biotech and neurotech will be treated as a major reason to expect technological acceleration, especially when one looks beyond the next couple of decades.
AF: What plans do you have for SIAI over the next couple years?
Vassar: For the last few years SIAI has been heavily focused on developing rationality training materials and an online rationalist community. In the next few months we should see whether the community can survive on its own. Some of the materials will be made into a book, which will hopefully contribute to the training of young thinkers. Developing better employee recruitment and training techniques will be a related focus of effort, one more directly applicable to our goal of creating Friendly AI.
I want to institute a general change in SIAI’s direction over the next year. It is my intention to bring to the forefront a number of technology-related ethical concerns that go beyond SIAI’s traditional focus on unfriendly AI. Another important change is that I expect to use our community to launch a large number of small- to mid-sized science projects, build futurism and catastrophic-risk analysis tools, and collaborate more closely with academia.
Other near term efforts will relate to expanding awareness of the Singularity, existential risk and rationality on the East Coast and in Europe, and increasing the scale of the annual Singularity Summit.
AF: Why should someone regard SIAI as a serious contender in AGI?
Vassar: The single biggest reason is that so few people are even working towards AGI. Of those who are, most are cranks of one sort or another. Among the remainder, there is a noticeable but gradual ongoing shift in the direction of provability, mathematical rigor, transparency, clear designer epistemology and the like, for instance in the work of Marcus Hutter and Shane Legg. To the extent that SIAI research and education efforts contribute to rigorous assurance of safety in the first powerful AGIs, that is a victory as great as the creation of AGI by our own researchers.
A secondary reason is that we can do better than academia at making effective use of extremely intelligent nonconformists, the category of person from which almost all really radical innovations emerge. The average level of ability among our researchers may not be higher than that among professors at the best research universities, but their focus is. Within academia, junior faculty must divide their time between teaching, bureaucratic committee work or grant writing, and ‘safe’ research that has a high probability of contributing to a tenure case. Focus on a high-risk, pathbreaking research agenda must often wait until after tenure, but psychological research (see, e.g., Richard Nisbett’s recent book) indicates that fluid intelligence, the ability to solve novel problems and acquire new skills, peaks in the late twenties and declines thereafter.