Singularity Institute Overview for Journalists
Up-to-date as of 4/27/2009
This is a concise overview of the history and activities of the Singularity
Institute for Artificial Intelligence since its founding in June 2000.
First, what is the Singularity Institute? From Wikipedia:
The Singularity Institute for Artificial Intelligence (SIAI) is a non-profit
organization founded in 2000 to develop safe artificial intelligence software,
and to raise awareness of both the dangers and potential benefits it believes
AI presents. The organization advocates ideas initially put forth by I. J.
Good and Vernor Vinge regarding an "intelligence explosion" or Singularity
predicted to follow the creation of sufficiently advanced AI, which, in its
view, necessitate solutions to problems involving AI goal systems to ensure
powerful AIs are not dangerous if or when they are created. SIAI espouses
the Friendly AI model created by its co-founder Eliezer Yudkowsky as a potential
solution to such problems.
Inventor and futures studies author Ray Kurzweil serves as one of the organization's
directors. SIAI maintains an advisory board whose members include Oxford philosopher
Nick Bostrom, biomedical gerontologist Aubrey de Grey, PayPal co-founder Peter
Thiel, and Foresight Nanotech Institute co-founder Christine Peterson. The
SIAI is tax exempt under Section 501(c)(3) of the United States Internal Revenue
Code, and has a Canadian branch, SIAI-CA, formed in 2004 and recognized as
a Charitable Organization by the Canada Revenue Agency.
What does "the Singularity" mean?
- The original meaning, given by Vernor
Vinge in 1993, was "the technological creation of greater than human
intelligence". Vinge wrote, "What are the consequences of this event?
When greater-than-human intelligence drives progress, that progress will be
much more rapid. In fact, there seems no reason why progress itself would
not involve the creation of still more intelligent entities -- on a still-shorter
time scale."
- Other meanings have emerged since then, usually associated with accelerating
technological change, but these are logically independent of the original
concept, and not the kind of "Singularity" that the Singularity
Institute is primarily focused on.
- Greater than human intelligence and accelerating technological change are
often conflated in reporting on the Singularity, causing confusion.
- Oxford philosophy professor and Director of the Future of Humanity Institute
Nick Bostrom has contributed important and widely influential arguments
building on Vinge's original idea, in papers such as "How
Long Before Superintelligence?", where he defines superintelligence
as "an intellect that is much smarter than the best human brains in practically
every field, including scientific creativity, general wisdom and social skills",
and outlines an argument that superintelligence could be developed before
roughly 2035, and "Ethical
Issues in Advanced Artificial Intelligence", where he writes "Since
the superintelligence may become unstoppably powerful because of its intellectual
superiority and the technologies it could develop, it is crucial that it be
provided with human-friendly motivations." Developing a mathematically
rigorous and transparent description of "human-friendly motivations"
is one of the primary goals of the Singularity Institute.
Overview of the Singularity Institute for Artificial Intelligence
- Founded in June 2000 by AI researcher Eliezer Yudkowsky and Internet entrepreneurs
Brian and Sabine Atkins.
- The Wikipedia article is here.
- The Singularity Institute is a non-profit organization focused on developing
a theoretical framework for confirming the long-term safety of artificial
intelligence designs -- a research project it calls "Friendly
AI". A major element of this involves developing a reflective decision
theory, building on decades of prior work on decision theory in mathematics.
- Several employees do AI research for SIAI full time, and a larger group of
graduate students works on the problem in the San Francisco Bay Area during
the summers with assistance from Research Fellow Eliezer Yudkowsky.
- Many consider "real
AI", the type pursued by the Singularity Institute, to be centuries
away, but the history of science shows that many high impact technologies
are considered centuries away even by those closest to them mere years before
their development. For instance, Wilbur Wright said in 1901 that "man
will not fly for fifty years" and Ernest Rutherford said in 1933 that
"anyone who looked for a source of power in the transformation of the
atoms was talking moonshine".
- Several notable futurists have voiced support for the Friendly AI research
agenda, including futurist Ray Kurzweil, philosopher Nick Bostrom, and medical
life extension advocate Aubrey de Grey.
Overview of SIAI Publications
- In 2001, SIAI published the book-length Creating
Friendly AI, the first technical analysis of motivationally stable
goal systems. Summary: "How to create sustainable benevolence in Artificial
Intelligences capable of open-ended self-enhancement." Received coverage
in Wired News.
- In 2001, SIAI also published SIAI
Guidelines on Friendly AI, which outlines guidelines for the production
of human-benefiting, non-human-harming actions in Artificial Intelligence
systems that have advanced to the point of making real-world plans in pursuit
of their goals.
- In 2002, SIAI Research Fellow Eliezer Yudkowsky released "Levels
of Organization in General Intelligence", a preprint of a book chapter
on the evolutionary psychology of general intelligence, which subsequently
appeared in the edited volume Artificial
General Intelligence (eds. Ben Goertzel & Cassio Pennachin, published
by Springer) in 2007. Later in 2002, the introductory pieces "What
is the Singularity?" and "Why
Work Towards the Singularity?" were also published.
- In 2003, Eliezer Yudkowsky released "An
Intuitive Explanation of Bayesian Reasoning", an essay on Bayesian
probability theory, rationality, inductive inference, and philosophy of science
that has been widely linked by computer science departments in the US and
Europe, with over 1,000 inbound links. (A brief statement of Bayes' theorem
follows this list for reference.)
- In 2005, Eliezer Yudkowsky released "A
Technical Explanation of Technical Explanation", an essay on Bayesian
probability theory, rationality, inductive inference, and philosophy of science.
- In 2007, Eliezer Yudkowsky released "Artificial
Intelligence as a Positive and Negative Factor in Global Risk" and "Cognitive
Biases Potentially Affecting Judgment of Global Risks", essays which appeared
in the edited volume Global
Catastrophic Risks (Oxford University Press, USA, 2008).
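As background for the Bayesian-reasoning essays above, here is a minimal
statement of Bayes' theorem with a worked example; both are illustrative
additions for this overview, not quotations from the essays. In LaTeX
notation, with H a hypothesis and E the observed evidence:

P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

For example, if 1% of patients have a condition, a test detects 80% of true
cases, and it returns false positives for 9.6% of healthy patients, then the
probability of the condition given a positive result is
(0.8 x 0.01) / (0.8 x 0.01 + 0.096 x 0.99), or roughly 7.8% -- far lower than
most readers intuitively expect.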
Overview of the Singularity Summit (organized by SIAI)
- The Singularity Summit is an annual event put on by the Singularity Institute
at which influential figures from business, science, and technology
discuss the Singularity and issues relating to technologies commonly associated
with it, such as nanotechnology and robotics.
- The 2006 Summit was held at Stanford,
the 2007 Summit in San Francisco, the 2008 Summit
in San Jose, and the 2009 Summit will be held in New York.
- All Singularity Summit audio and many transcripts are available online at
the websites linked in the previous bullet point. Notable presenters include
Peter Norvig, Google's Director of Search Quality; Steve Jurvetson, Managing
Partner of VC firm Draper Fisher Jurvetson; Rodney Brooks, a robotics pioneer
from MIT; writer John Horgan, a skeptic of the Singularity; and environmentalist
Bill McKibben, another skeptic.
- See "Will
machines outsmart man?" at Guardian.co.uk for coverage of the 2008
Singularity Summit, "Coming
to Grips with Intelligent Machines" on CNET News for coverage of
the 2007 Summit, and "Smarter
than thou?" for coverage of the 2006 Summit. Full coverage list of
2008 is here, 2007
is here, and 2006
is here. The 2007 Summit received
front-page coverage in the San Francisco Chronicle.
Overview of SIAI's Team
Michael Vassar is SIAI's President, and provides overall leadership of the
SIAI as it develops its research capabilities and its role as a forum for
discussion of the challenges and potential of artificial general intelligence.
He is also responsible for the organization of the Singularity
Summit. Previously, he was a Founder and Chief Strategist at SirGroovy.com,
an online music licensing firm. Prior to that, he held positions with Aon,
the Peace Corps, and the National
Institute of Standards and Technology. Michael has been writing and speaking
on topics related to the safe development of disruptive technologies for a
number of years: his papers include the Lifeboat Foundation analysis
of the risks of advanced molecular manufacturing, co-authored with Robert
Freitas, and "Cornucopia", authored for the Center for Responsible
Nanotechnology Task Force. He holds an M.B.A. from Drexel University and
a B.S. in biochemistry
from Penn State.
Ray Kurzweil is a Director of SIAI. CEO of Kurzweil
Technologies, he has been described as "the restless genius" by the Wall
Street Journal, and "the ultimate thinking machine" by Forbes. Inc. Magazine
ranked him #8 among entrepreneurs in the United States, calling him the "rightful
heir to Thomas Edison," and PBS included him as one of the 16 "revolutionaries
who made America," along with other inventors of the past two centuries. As
one of the leading inventors of our time, Ray has worked in such areas as
music synthesis, speech and character recognition, reading technology, virtual
reality, and cybernetic art. He was the principal developer of the first omni-font
optical character recognition, the first print-to-speech reading machine for
the blind, the first CCD flat-bed scanner, the first text-to-speech synthesizer,
the first music synthesizer capable of recreating the grand piano and other
orchestral instruments, and the first commercially marketed large-vocabulary
speech recognition. All of these pioneering technologies continue today as
market leaders. His website, KurzweilAI.net,
has over one million readers. Among his many honors, he is the recipient of
the $500,000 MIT-Lemelson Prize, the world's largest for innovation. In 1999,
he received the National Medal of Technology, the nation's highest honor in
technology, from President Clinton. In 2002, he was inducted into the National
Inventors Hall of Fame, established by the US Patent Office. Ray has also
received twelve honorary Doctorates and honors from three U.S. presidents.
His books include The
Age of Intelligent Machines, The
Age of Spiritual Machines, and Fantastic
Voyage: Live Long Enough to Live Forever. Three of his books have
been national best sellers. His latest best-selling book, published by Viking
Press, is The Singularity is Near: When
Humans Transcend Biology.
Ben Goertzel, Ph.D., is SIAI Director of Research, responsible for overseeing the direction of the Institute's research division. He has over 70 publications, concentrating on cognitive science and AI, including Chaotic Logic, Creating Internet Intelligence, Artificial General Intelligence (edited with Cassio Pennachin), and The Hidden Pattern. He is chief science officer and acting CEO of Novamente, a software company aimed at creating applications in the area of natural language question-answering. He also oversees Biomind, an AI and bioinformatics firm that licenses software for bioinformatics data analysis to the NIH's National Institute for Allergies and Infectious Diseases and CDC. Previously, he was founder and CTO of Webmind, a 120+ employee thinking-machine company. He has a Ph.D. in mathematics from Temple University, and has held several university positions in mathematics, computer science, and psychology, in the US, New Zealand, and Australia.
Eliezer Yudkowsky, an SIAI Research Fellow and co-founder,
is the foremost researcher on Friendly AI and recursive self-improvement.
He created the Friendly AI approach to AGI, which emphasizes the importance
of the structure of an ethical optimization process and its supergoal, in
contrast to the common trend of seeking the right fixed enumeration of ethical
rules a moral agent should follow. In 2001, he published the first technical
analysis of motivationally stable goal systems, with his book-length Creating
Friendly AI: The Analysis and Design of Benevolent Goal Architectures. In
2002, he wrote "Levels of Organization in General Intelligence,"
a paper on the evolutionary psychology of human general intelligence, published
in the edited volume Artificial General Intelligence (Springer, 2007). He
has two papers in the edited volume Global Catastrophic Risks
(Oxford, 2008): "Cognitive Biases Potentially Affecting Judgment of Global
Risks" and "AI as a Positive and Negative Factor in Global Risk."
History of the Singularity Institute for Artificial Intelligence
This summary of the history of the Institute is taken directly from Wikipedia,
with a couple updates for accuracy:
- At first, SIAI operated primarily over the Internet, receiving financial
contributions from sympathetic transhumanists and futurists. On July 23, 2001,
SIAI launched the open source Flare
Programming Language Project, described as an "annotative programming
language" with features
inspired by Python, Java, C++, Eiffel, Common Lisp, Scheme, Perl, Haskell,
and others. The specifications were designed with the complex challenges of
seed AI in mind. However, the effort was abandoned less than a year later.
- In 2002, SIAI published on its website Levels
of Organization in General Intelligence, a preprint of a book chapter
published in a 2007 compilation of general AI theories, entitled
"Artificial General Intelligence" (Ben Goertzel and Cassio Pennachin,
eds.) Later that year, SIAI released their two main introductory pieces, "What
is the Singularity" and "Why
Work Toward the Singularity".
- In 2003, the Singularity Institute appeared at the Foresight Senior Associates
Gathering, where co-founder Eliezer Yudkowsky presented a talk titled "Foundations
of Order". They also made an appearance at the Transvision 2003 conference
at Yale University with a talk by SIAI volunteer Michael Anissimov.
- In 2004, SIAI released AsimovLaws.com,
a website that examined AI morality in the context of the "I, Robot"
movie starring Will Smith, released just two days later. From July to October,
SIAI ran a Fellowship Challenge Grant that raised $35,000 over the course
of three months. Early the next year, the Singularity Institute relocated
from Atlanta, Georgia to Silicon Valley.
- In February 2006, the Singularity Institute completed a $200,000 Singularity
Challenge fundraising drive, in which every donation up to $100,000 was matched
by Clarium Capital President, Paypal co-founder and SIAI Advisor Peter Thiel.
The stated uses of the funds included hiring additional full-time staff,
creating an additional full-time research fellow position, and organizing
the Singularity Summit at Stanford in May 2006.
- The Singularity Institute co-sponsored the Singularity
Summit at Stanford with the Symbolic Systems Program at Stanford, the
Center for Study of Language and Information, KurzweilAI.net, and Peter Thiel,
who moderated the event. The summit took place on 13 May 2006 at Stanford
University with 1,300 in attendance. The keynote speaker was Ray Kurzweil, followed
by eleven other speakers: Nick Bostrom, Cory Doctorow, K. Eric Drexler, Douglas
Hofstadter, Steve Jurvetson, Bill McKibben, Max More, Christine Peterson,
John Smart, Sebastian Thrun, and Eliezer Yudkowsky.
- The 2007 Singularity
Summit took place on September 8-September 9, 2007, at the Palace of Fine
Arts Theatre, San Francisco. Speakers included Rodney Brooks, Eliezer Yudkowsky,
Barney Pell, Wendell Wallach, Sam Adams, Jamais Cascio, Steven Omohundro,
Peter Voss, Neil Jacobstein, Ben Goertzel, Paul Saffo, Peter Norvig, J. Storrs
Hall, Peter Thiel, Charles L. Harper Jr., Steve Jurvetson, Christine Peterson,
James Hughes, and Ray Kurzweil. This Summit was covered on the front page
of the San Francisco Chronicle.
- A third Singularity Summit
took place on October 25, 2008, at the Montgomery Theater in San Jose. Speakers
included Vernor Vinge, Bob Pisani, Nova Spivack, Esther Dyson, James Miller,
Justin Rattner, Eric Baum, Dharmendra Modha, Ben Goertzel, Marshall Brain,
Cynthia Breazeal, Pete Estep, Neil Gershenfeld, Peter Diamandis, and Ray Kurzweil.
- The Singularity Institute is sponsoring the Open Cognition Framework, or
OpenCog, which according to OpenCog.org
is intended to provide "research scientists and software developers with
a common platform to build and share artificial intelligence programs."