Accelerating Future Transhumanism, AI, nanotech, the Singularity, and extinction risk.

20Jul/1015

Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips

I recently interviewed Eliezer Yudkowsky for the reboot of h+ magazine, which is scaling down from being a magazine into a community blog of sorts.

The interview is a good primer for what the Singularity Institute is about and the basic rationales behind some of our research choices, like focusing on decision theory. This is a good interview to read especially for those not entirely familiar with the research of the Singularity Institute. It can also be used to promote the Singularity Summit, so please share the link!

Here are the questions I asked Eliezer:

1. Hi Eliezer. What do you do at the Singularity Institute?
2. What are you going to talk about this time at Singularity Summit?
3. Some people consider "rationality" to be an uptight and boring intellectual quality to have, indicative of a lack of spontaneity, for instance. Does your definition of "rationality" match the common definition, or is it something else? Why should we bother to be rational?
4. In your recent work over the last few years, you've chosen to focus on decision theory, which seems to be a substantially different approach than much of the Artificial Intelligence mainstream, which seems to be more interested in machine learning, expert systems, neural nets, Bayes nets, and the like. Why decision theory?
5. What do you mean by Friendly AI?
6. What makes you think it would be possible to program an AI that can self-modify and would still retain its original desires? Why would we even want such an AI?
7. How does your rationality writing relate to your Artificial Intelligence work?
8. The Singularity Institute turned ten years old in June. Has the organization grown in the way you envisioned it would since its founding? Are you happy with where the Institute is today?

28Jun/1012

Singularity Hub Posts About the Summit 2010

Singularity Hub, one of the best websites on the Internet for tech news (along with Next Big Future and KurzweilAI news) has posted a reminder on the upcoming Singularity Summit in San Francisco, and a promise that they will provide excellent coverage.

Register before July 1st, before the price goes up another $100! We also have a special block of discounted rooms at the Hyatt available -- $130/night instead of the usual $200.

Sorry the Summit is $485 and will be $585 and then $685. We fly all the speakers out and cover all their expenses, there are twenty speakers, do the math. Profits from the Summit go to the Singularity Institute for our year-round operations and Visiting Fellows program, which provides us with a community of writers, speakers, and researchers to continue our Singularity effort until it is successful.

If you want to organize a cheaper annual event related to the Singularity, feel free to do so. We hold a workshop after the event for academics, so we get to tack on another event to maximize value and productivity for those who investigate the Singularity as part of their profession. I'm sure there will be plenty of informal "workshops" on the Saturday and Sunday after the talks in local bars and restaurants, in any case.

Remember -- the Singularity is the most important issue facing humanity right now. If we don't do what we can to ensure that it goes well for humanity, no one else will. We have a limited amount of time until the technological barriers between us and the Singularity collapse, and then intervention will be difficult if not impossible.

Filed under: SIAI, singularity 12 Comments
14Jun/1018

Reducing Long-Term Catastrophic Artificial Intelligence Risk

Check out this new essay from the Singularity Institute: "Reducing long-term catastrophic AI risk". Here's the intro:

In 1965, the eminent statistician I. J. Good proposed that artificial intelligence beyond some threshold level would snowball, creating a cascade of self-improvements: AIs would be smart enough to make themselves smarter, and, having made themselves smarter, would spot still further opportunities for improvement, leaving human abilities far behind. Good called this process an "intelligence explosion," while later authors have used the terms "technological singularity" or simply "the Singularity".

The Singularity Institute aims to reduce the risk of a catastrophe, should such an event eventually occur. Our activities include research, education, and conferences. In this document, we provide a whirlwind introduction to the case for taking AI risks seriously, and suggest some strategies to reduce those risks.

Pay attention and do something now, or be eliminated by human-indifferent AGI later. Why is human-indifferent AGI plausible or even likely within the next few decades? Because 1) what we consider "normal" or "common sense" morality is actually extremely complex, 2) the default morality for AIs will be much simpler than #1 (look at most existing AI/robotics goal systems -- they're only as complex as they need to be to get their narrow jobs done), simply because it will be easier to program and very effective until the AI reaches human-surpassing intelligence, 3) a superintelligent, super-powerful, self-replicating AI with simplistic supergoals will eventually eliminate humanity through simple indifference, the way that humanity has made many thousands of species extinct through indifference. Over the course of restructuring the local neighborhood to achieve its goals (such as maximizing some floating point variable that represents the bank account it once aimed to maximize), the complex, fragile structures known as humans will fall by the wayside.

The motivation will not derive from misanthropy, but basic AI drives such as the drive to preserve its utility function and defend that utility function from modification. These drives will appear "naturally" in all AIs unless explicitly counteracted. In fact, this should be experimentally verified in the near future with continuing progress towards domain-general reasoning systems. Even AIs with simple game-playing goals, given sufficiently detailed models of the world in which the games are played (most AIs lack such models entirely), will start to spontaneously expand into strategies like deceiving or confusing their opponent, perhaps surprising their programmers. Progress in this area is likely to start off incremental and eventually speed up, just like completing a puzzle gets easier the closer you are towards the end.

Even a "near miss", such as an AI programmed to "make humans happy", could lead to unpleasant circumstances for us for the rest of eternity. An AI might get locked into some simplistic notion of human happiness, perhaps because its programmers underestimated the speed at which a seed AGI could start self-improving, and didn't place enough importance on giving the AGI complex and humane supergoals which remain consistent under reflection and self-modification. The worst possible futures may be ones in which a Singularity AI keeps us alive indefinitely under conditions where our existence is valued but our freedom is not.

Filed under: AI, risks, SIAI, singularity 18 Comments
19May/101

Apply for the 2010 SIAI Visiting Fellows Program

Now is your last chance to apply for a Summer 2010 Visiting Fellowship at the Singularity Institute. For a concise summary of what SIAI is about, read this new short introduction.

Filed under: SIAI, singularity 1 Comment
25Feb/101

Last Chance to Contribute to 2010 Singularity Research Challenge!

Cross-posted from SIAI blog:

Thanks to generous contributions by our donors, we are only $11,840 away from fulfilling our $100,000 goal for the 2010 Singularity Research Challenge. For every dollar you contribute to SIAI, another dollar is contributed by our matching donors, who have pledged to match all contributions made before February 28th up to $100,000. That means that this Sunday is your final chance to donate for maximum impact.

Funds from the challenge campaign will be used to support all SIAI activities: our core staff, the Singularity Summit, the Visiting Fellows program, and more. Donors can earmark their funds for specific grant proposals, many of which are targeted towards academic paper-writing, or just contribute to our general fund. The grants system makes it easier to bring new researchers into the fold on a part-time basis, widening the pool of thinkers producing quality work on Artificial Intelligence risks and other topics relevant to SIAI's interests. It also provides transparency so our donor community can directly evaluate the impact of their contributions.

Human-level and smarter Artificial Intelligence will likely have huge impacts on humanity, but only a tiny number of researchers are working to understand how to ensure those impacts are good ones. The role of the Singularity Institute is to fill that void, bringing scholarship and science to bear on challenging questions. Instead of just letting the chips fall where they may, help the Singularity Institute increase the probability of a positive Singularity by contributing financially to our research effort. We depend completely on donors like you for all funding.

2010 marks the 10th year since SIAI's founding. With your help, SIAI will still exist in 2015, 2020, 2025... however long it takes to get to a positive Singularity. Thank you for your support!

Filed under: SIAI, singularity 1 Comment
26Jan/1029

Singularity Institute Featured in January Issue of GQ

If you haven't picked up this month's GQ magazine, do it soon. There is a feature on the Singularity Summit and Singularity Institute. (I also hear there is a piece by Carl Zimmer on the Singularity in Playboy but I haven't picked it up yet.) Seeing community names like Rick Schwall (an SIAI donor and supporter) in a national magazine sure is a trip. According to the National Magazine Awards, circulation is somewhere between 500,000 and 1,000,000 and is up in recent years.

Here is the Singularity portion (I removed the magazine cover due to copyright concerns and complaints from the comments section):

Really freaky, mmhmmm! Freaky like our ancestral past or Pandora freaky, I hope.

H/t to Gus K. for pointing out the article earlier this month.

Filed under: SIAI, singularity 29 Comments
9Jan/1026

Bob Mottram Objections to 2010 Singularity Research Challenge and Response

Bob Mottram isn't impressed by the Singularity Institute's grant proposals for our $100,000 Singularity Research Challenge:

It's kind of sad how SIAI seems to have become obsessed with "AI risks" and human extinction. Perhaps they always were from the beginning, but it's just my perception of them that was at fault. There's certainly a place for some group, existing independently from academia, who actively promote AI related R&D in a direction which has positive value to society and addresses problems which are highly relevant. This applies especially to the work which is less glamorous, more ambitious stuff which requires an expenditure of effort on a longer time scale than a typical PhD thesis or DARPA/X-prize contest.

The list of grant proposals for the Singularity Research Challenge seems incredibly disappointing, and focused on spurious notions of risk which, in my opinion, would have no beneficial impact on AI even if it were to be funded in its entirety.

To clarify what is happening, what Bob Mottram considers "spurious notions of risk", we consider "deadly serious notions of risk", so this is the main source of disagreement. Here was my response:

Our Uncertain Future project is pioneering probabilistic futurism in AI and WBE studies, and has received thumbs-up from several academics including Bela Nagy, who manages the Santa Fe Institute Performance Curve Database.

A hard takeoff from a human-indifferent AI is not a fallacious risk. It is quite real. Because human moral values are complex, creating a machine that does what we would consider "nice" or "common sense" is much more difficult than creating a machine with human-level intelligence but insufficiently complex and specific values. See the Fun Theory sequence on Less Wrong, for instance.

SIAI believes that AGI is an extremely difficult endeavor and deserves far more theory-level work than programming in the dark or working towards narrow AI tasks that drain away our attention at the expense of the Singularity itself.

Basically, if you consider an intelligence explosion plausible, SIAI's activities make sense, and if you don't, they don't. It's not a matter of marketing, just disagreement on which tasks are the most important for humanity to face right now. We consider clarifying decision theory and creating a reflective decision theory to be a major priority, for instance, and spend time on that accordingly.

To clarify further, in 2009 SIAI grew large enough to break into several loose divisions. This is excellent, because the Singularity Institute is one of the most important organizations on the planet and is one of the only barriers standing between humanity and extinction from unFriendly AI. However, it makes the task of explaining what we do all the more complicated. It so happens that I am paid to explain it, but sometimes I get discouraged because I discuss the organization constantly on this blog, occasionally several posts per day, and there is still a great amount of confusion about what our organization does and believes. Perhaps I ought to run an SIAI Video Q&A in the vein of Eliezer's recent Less Wrong Q&A.

What happened in the last year is that Anna Salamon and Steve Rayhawk joined us and created the Visiting Fellows program, under Anna's leadership. (Anna, Steve, and myself were only recently added to our staff page.) This entity is only peripherally related to SIAI's central AI project, which was more or less put on hold for two years while Eliezer Yudkowsky wrote the Less Wrong sequences. As our 2009 accomplishments document states, Eliezer worked with Marcello Herreshoff (his profile can be found here) on Friendly AI over the Summer.

So, think of SIAI as having three branches -- 1. Administrative/PR, which consists of President Michael Vassar, Media Director Michael Anissimov (aka me), and Chief Compliance Officer Amy Willey, 2. AGI research that constitutes serious progress towards seed AI, which makes up years of Eliezer's past work, Eliezer's future work after he finishes putting together his rationality book, Marcello Herreshoff's intermittent work, and contributions from Peter de Blanc, Nick Hay, and others (since 2006), including Anna Salamon and Steve Rayhawk, and 3. the Visiting Fellows Program, which including Visiting Fellows and various volunteers.

The goal of the Visiting Fellows Program is to put together extremely smart people concerned about reducing existential risk and have them pursue academic projects that make the best use of their respective strengths. Branch #3 also serves as a filtering mechanism for 2. The thing is, starting a true AGI project would be very expensive, not so much in money but in terms of intelligence, philosophy, computer science, and math knowledge required. Consolidating the necessary personnel will not be easy.

Why are we concerned about "AI risks" and human extinction? Well, this is why, among other reasons. SIAI is not about pursuing intermediate AI commercial benefits -- our organization only exists to pursue the Singularity and minimize AI risk. Writing illustrating this point has been produced in substantial quantities since our founding in 2000. SIAI is mostly a bunch of utilitarians.

Would readers be interested in a Lulu book putting together a lot of information about the Singularity Institute in one place? Only about 0.5% of my blog readers ever comment, so I feel like I'm talking to a vast sea of silent lurkers all the time. Seriously, it's weird.

In general, our approach turns off people like Bob Mottram, but inspires praise from people like Alan Darwst. In particular, Mr. Darwst writes:

Among the utilitarians I've met over the years, a sizable fraction have come to the conclusion that the optimal destination for utilitarian funding is organizations that research speculative futuristic scenarios and the philosophical / scientific / methodological questions that such research requires. In particular, many of these utilitarians have named the Singularity Institute for Artificial Intelligence (SIAI) as a good example of such an organization, so I'll focus on it here, but the discussion can apply more broadly.

The way the Singularity goes is a matter of life and death for humanity. Unfriendly AI programmed to value anything besides a very specific set of Homo sapiens-characteristic values will probably overlook our material preservation. From the perspective of most possible minds, humans are just another arrangement of atoms. We don't have any inherent moral value. "Moral value" is an "imaginary" thing that only exists among the tiny space of minds-in-general with explicit moral philosophies.

If we had the ability to build AGI today, our planet would not last the year, because we haven't solved Friendly AI. If we could build a seed AI now, we wouldn't know how to specify its goals in a way that doesn't eliminate us all completely. We are clueless. We can't create a utility function that is consistent under reflection and preserves individual humans when a tremendous amount of optimization pressure is applied to fulfilling it. We need a mathematical model of value that leaves us alive even when the unimaginable power of superintelligence is channeled into it. I think that Coherent Extrapolated Volition is a good enough solution that it would work, but it needs to be specified in much more detail. That's exactly what one of our grant proposals is about.

These grant proposals deserve funding now. We are about to walk into a minefield and we don't even have a map. We need to throw everything at the problem -- people, money, attention, everything.

Filed under: SIAI 26 Comments
5Jan/104

Me on the Radio — KUSP in Santa Cruz

On Sunday, January 3rd, I did an interview on KUSP (Central Coast Public Radio) in Santa Cruz, California, a National Public Radio affiliate. I talked to Rick Kleffel for an hour about the Singularity, the Singularity Institute, what we do, anthropomorphism, Friendly AI, and the like. It was for his "Talk of the Bay" radio program. Here is the audio archive.

Filed under: SIAI, singularity 4 Comments
5Jan/101

Support for 2010 Singularity Research Challenge

Over the past week and a half that the Singularity Research Challenge has been launched, we've received some nice support, including a post by Razib Khan at the Gene Expression blog and an explicit donation recommendation by Alan Darwst, author of Utilitarian-Essays.com and a well-regarded figure in the online utilitarian community. Here is a post by Alan on Felicia, the utilitarian community, that goes into why research charities like SIAI offer a very high return on philanthropic investment.

Don't it just let be Alan and Razib -- you too can make a blog post about the Singularity Research Challenge, right at this very moment!

Filed under: SIAI, singularity 1 Comment
2Jan/101

Black Belt Bayesian on Reasons to Prevent Existential Risk

In the context of our 2010 Singularity Research Challenge, Steven over at Black Belt Bayesian has a collection of "reasons to invest in reducing existential risk that you might not have considered before".

Filed under: risks, SIAI, singularity 1 Comment
1Jan/1026

10 Years of “Singularitarian Principles”: Analysis

Today is January 1st, 2010, the 10th anniversary of the online publishing of "The Singularitarian Principles" by Eliezer Yudkowsky. This document is a handy set of common sense advice for anyone who considers the possible creation of superintelligence a big deal in utilitarian terms (or otherwise). The work is divided into four "definitional principles", which form of the central definition of the term "Singularitarian" (as it was defined at the time), and "descriptive principles", which "aren't strictly necessary to the definition, but which form de facto parts of the Singularitarian meme."

The definitional principles are:

1. Singularity
2. Activism
3. Ultratechnology
4. Globalism

The descriptive principles are:

1. Apotheosis
2. Solidarity
3. Intelligence
4. Independence
5. Nonsuppression

The "Singularity" principle refers to believing in "some fundamental change in the rules" in the future. Looking back on this from the vantage point of 2010, I think the term "Singularity" as defined here ("defined many different ways") is far too vague to be useful. Yudkowsky probably should have anticipated how diluted the meaning of the word would become and the need to define it more narrowly for it to be useful. Other than that, I think the rest of the document essentially makes sense, and even after ten years it is not obsolete by any means.

The activism principle says, "A Singularitarian is someone who believes that technologically creating a greater-than-human intelligence is desirable, and who works to that end." Aside from #1, this is the most important principle in my eyes. Kurzweil's definition of a Singularitarian is someone who "understands the Singularity" and has reflected on its consequences for their own life, but to my eyes this is excessively broad and meaningless. Am I an environmentalist just because I understand the environment and have reflected on its consequences for my life? No. Movements are defined by activism and change, not silent reflection alone. Kurzweil is making the definition of a Singularitarian vague so he can sell it to a wider audience, but I doubt that this definition is useful for actually inspiring humanity to have a positive impact on the Singularity, which it must do or perish.

The principle of "ultratechnology" is that "the "Singularity" is a natural, non-mystical, technologically triggered event", and "What distinguishes the Singularitarians is that we want to bring about a natural event, working through ultratechnologies such as AI or nanotech, without relying on mystical means or morally valent effects." What this basically says is that we are scientific materialists who refuse to believe in woo. Pretty basic. The few challengers that try to connect us with mysticism over the past decade have largely failed, in my opinion. A more useful avenue of approach has been to claim that the human brain is extremely complex, it must be copied exactly to produce intelligence, and Singularitarians are overconfident of the timescales on which this can be achieved. To help clarify the situation, we created the Uncertain Future modeling application. This application will allow us to make our assumptions explicit in debates, and force us to think about these assumptions carefully.

Globalism means that we want the benefits of a Singularity to extend to everybody. As far as I can tell, most Singularitarians of the Singularitarian Principles variety take this for granted. The benefits of the Singularity would be so large that it would take everyone to fully enjoy them. Programming an AI to benefit some small group rather than humanity in general would probably be a technical hassle in the long run anyway (though I could be wrong about that), and likely terribly unstable, not to mention idiotic and evil. Speaking for the Singularity Institute, we currently have strongly utilitarian and globalist bent. I worry more about our activism than our globalism.

For the descriptive principles, visit the original page, which I strongly recommend if you care to learn more about the Singularity and Singularitarians.

Filed under: SIAI, singularity 26 Comments
29Dec/0910

2010 Singularity Research Challenge: Donate Now!

As I mentioned in my last post, the Singularity Institute (SIAI) has launched a 2010 Singularity Research Challenge to raise funds for Singularity research. Our organization is worth giving money to because the Singularity is a matter of life and death for our entire species, and we may only have a few decades remaining to deal with it. Our group is the most dedicated to maximizing the probability of a positive outcome, and has the intelligence and skill to produce detailed ideas and attract major media attention. We achieve a huge amount with our money. Nearly everyone at the Singularity Institute, including myself, takes a salary significantly lower than our market value given our education and experience, because we personally care about this issue a whole lot.

We have a network of several dozen young academics, mostly aged 20-30, who are devoted to performing research and writing papers on the topic of the Singularity if given the proper support and infrastructure. (For a snapshot of 2009's Visiting Fellows, along with names and bios, see this page.) Most of these people have degrees or are working on degrees from schools like Stanford, Harvard, Carnegie Mellon, and Yale. For instance, SIAI volunteer Tom McCabe is at Yale and sometimes-SIAI Research Associate Marcello Herreshoff is at Stanford. Many of these youngsters are astonishing geniuses, performing extremely well at both formal and informal tests of intelligence, and it is clear that their life goal consists of having a positive impact on the Singularity.

What we lack is sufficient funding to sponsor all the research we want to sponsor. Visit the website of the Challenge and scroll down to the grant proposals to see some of our ideas for work in 2010. This list is part of an effort at SIAI being spearheaded by Research Fellow Anna Salamon to increase transparency and accountability as we expand. Anna is a competent, energetic manager who leads much of SIAI's research wing, which is located in Santa Clara, CA. It was under her leadership that the SIAI Visiting Fellows Program was founded, and the grants proposal thing was her idea, which myself, SIAI Visiting Fellows, and volunteers duly fleshed out.

Other than room and board for researchers, SIAI doesn't have much overhead. There are some administrators, such as President Michael Vassar and our Chief Compliance Officer Amy Willey, and one PR nerd, yours truly. Our goal is to convert dollars into existential risk mitigation more effectively than any other group. Of course, your evaluation of this will vary depending on how much of a risk you believe the Singularity could pose to humanity. Among those who believe that the Singularity could be a risk and it ought to be studied and dealt with, the Singularity Institute often gets high marks. When we receive constructive criticism from supporters, we keep it in mind in everything we do, and frequently question ourselves.

One unfortunate element that holds us back at this point is that the majority of our supporters are in their twenties or early thirties and therefore have limited personal wealth. However, some of SIAI's dedicated supporters have already secured jobs in the tech sector with decent earning potential. As the Singularity Institute's support base grows and matures, we will certainly possess greater resources, but we don't want to wait. We want to fund as much Singularity research as we can, right now. Those considering donating larger amounts, say $1,000 or more, should feel free to get in touch with Anna Salamon (her email address is on the challenge page) and talk about what sort of research you would consider worthwhile to fund.

Discussion about the Singularity, and the Singularity Institute itself, are here to stay. We will still be around in 2020, 2030, 2040 -- however long it takes to get to a positive Singularity. We are in it for the long haul. The question is, are you with us? If a robotic or AI advance you see in the news in the next decade impresses you with the speed of mankind's progress towards real artificial intelligence, and has you concerned about the implications, you may regret not donating to our organization now, in 2009 or 2010. If time travel were possible, then the Singularity Institute is the kind of organization that people would be sent back in time to help. By sheer dumb luck, however, you happen to be one of the few people who has heard of the Singularity Institute this early in the game. Wouldn't it be silly not to take advantage of that fact?

Filed under: SIAI 10 Comments