Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

30 Apr 2009

Specialized vs. General Molecular Assemblers and the Risk of AGI

J. Storrs Hall at the Foresight Institute has responded to my recent post about the challenges of self-replication. Specifically, the line where I refer to the Foresight Institute and the Center for Responsible Nanotechnology:

What is remarkable are those that seem to argue, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster.

Dr. Hall responds:

From this he jumps with very few intervening arguments (“there are terrorists out there”) to a conclusion that we need a benevolent world dictatorship (“singleton”), which might need to be a superhuman self-improving AI. This seems a wildly illogical leap, but surprisingly appears to be almost an article of faith in certain parts of the singularitarian community and Washington, DC. Let us examine the usually unstated assumptions behind it:

A singleton need not be a benevolent world dictatorship -- just a "world order in which there is a single decision-making agency at the highest level", as defined by Nick Bostrom, who says:

A democratic world republic could be a kind of singleton, as could a world dictatorship. A friendly superintelligent machine could be another kind of singleton, assuming it was powerful enough that no other entity could threaten its existence or thwart its plans. A “transcending upload” that achieves world domination would be another example.

Consider the concept of global governance, for instance.

I consider it likely that a singleton will emerge in the 21st century, whether we want it to or not, as a natural consequence of expanding technological powers on a finite-sized planet, as well as a historical trend of aggregation of powers at higher geopolitical levels. Note that the singleton concept does not specify what degree or scope of decision-making powers the entity (which, as pointed out, could be a worldwide democracy) has. 99% of policy choices could very well be made at the local and national levels, while a singleton intervenes in those 1% of choices with global importance. As Dr. Hall points out later in his post, it seems like a pseudo-singleton already exists. He calls it the US Government, but I'd call it a fuzzy entity that consists of the shared consensus between the US Government, its opinion sources (academia, public, media), the UN (which is not just controlled by the US), the European Union, NATO, and other assorted actors.

To me, what I'd want most out of a singleton would be a coherent and organized approach to problems that face the entire planet. Instead of a disorganized patchwork, there'd be more decisive action on global risks. No authoritarianism in cultural, political, or economic matters is implied.

This is what I think of when I hear calls for "more international cooperation" on terrorism or global warming. This is why we have the WHO as the highest source of authority on the emerging swine flu. People say that international organizations and institutions are weak, and maybe some of them are, but at least a portion of them help the entire world move through crucial challenges. Celebrities and politicians emerge to champion causes and rally supporters. Diversity in opinion, unity in action. It's called cooperation.

The "singleton" I want could merely be described in terms of "more cooperation on threats to us all, including the question of whether certain threats are really threats or not". Whether AI is in the picture or not is really a secondary issue, but if AI expands our capacity to detect and respond to threats, more power to it.

Next, Dr. Hall argues:

Humanity can’t manage self-replicating universal constructors: We’ve been managing self-replicating universal constructors for tens of thousands of years, from elephants to yeast. What’s more, these are replicators that can operate in the wild. The design process, e.g. to turn a wolf into a Pekingese, takes longer but is much more intuitive to the average human.

If you’re worried about high-tech terrorists, worry about genetically engineered swine flu or other naturally-reproducing agents. If there are terrorists out there who are so technically sophisticated as to be a threat with MNT, at best guess still 20 years away for the leading mainstream labs, why aren’t they doing this? Even terrorist Berkeley professors only make letterbombs.

One type of self-replicating constructor -- the MNT kind -- could conceivably replicate itself in less than a day and become arbitrarily large and energy-hungry, while the other -- a biological organism of the sort Dr. Hall cites -- takes at least a year to self-replicate and has a bounded size. One can make nearly anything; the other is highly restricted in what it can produce... there's no comparison here.

I am certainly worried about genetically engineered swine flu or other naturally-reproducing agents, and have been posting about these issues frequently. But I still reserve concern for the challenges of MNT, even if they may be 20, 30, or even 40 or more years off. Partially because the advances are fairly far off, the field for debate and thought is smaller than it would otherwise be, potentially giving early actors such as ourselves disproportionate influence over how the debate evolves in the future. As I plan to be discussing technological risk 20, 30, and 40 years from now, I am getting started early by voicing my concerns in 2009. If MNT does become an issue in 2030 or 2040, then hopefully I will be one of the people who are solicited for ideas on how to handle it, partially based on my public analysis of the problem at such an early juncture.

My concern about MNT is that it will not be that technically sophisticated when it is rolled out worldwide. That is, it will be possible to create weapons cheaply and easily with intuitive interfaces when non-restricted nanofactories become available around the world. (If diamondoid nanofactories are possible at all, which I wager they are.) Even if the non-restricted nanofactories are only available to "scientists" or "authorities", there is a significant risk of them being dispersed via the black market. The demand would surely be astronomical.

If the nanofactories in question just use proteins to make products, as Dr. Drexler has been arguing for lately, then a lot of the security issues evaporate. As far as I know, you can't make a powerful missile, gun, or millipede robot out of keratin.

Next, Dr. Hall rightly points out that universal constructors probably wouldn't be distributed to everyone:

Once the leading mainstream labs produce self-replicating universal constructors, they are hardly going to hand them out by the billions for people to make shoes with. As Eric Drexler recently pointed out, specialized mill-style machinery is considerably more efficient than universal constructors at actually making stuff. My analysis of this point is that the difference is months for universal constructors vs milliseconds for specialized mills. Nobody is going to want universal constructors except for research.

Of course. The MNT community realized this a while ago. When I say, "managing universal self-replicating constructors", I don't mean that universal constructors will be distributed as consumer products. I realize that consumer nanofactories are likely to be specialized devices. I am referring to the point at which a limited number of actors acquire more-general (not necessarily universal) manufacturing capabilities, which in turn leads to distribution of more specialized versions of the technology to millions or billions of people. Perhaps "universal" is the wrong word, because as Dr. Drexler has also pointed out, it may be too much to expect a single device to be universal: it doesn't have to be. Cooperation between specialized devices should be quite sufficient to hit a very large space of manufacturing targets.

So, to rephrase, what I am concerned about is the widespread availability of more-general high-throughput manufacturing devices, which will result from the invention of a nearly-general molecular assembler. If I could revise my claim, I would subtract the word "universal" and say "general self-replicating nanofactories" instead of "universal self-replicating constructors". By "constructors", I meant the entire system, not just the tiny assemblers themselves, so I replace it with "nanofactories" to make it clearer. An individual assembler need not self-replicate -- perhaps 1000 assemblers could cooperate together to make another assembler. The technical issues around this are the subject of another ongoing debate and analysis. Still, what I am concerned about is that any combination of product-restricted nanofactories could be used to produce additional manufacturing devices that could be put to ill ends. Specialized nanofactories could be used to build more general construction devices, perhaps not even based on MNT at all. I am talking about a general magnification of our manufacturing capability and speed.

The concern is that a variety of products that are likely to be approved for manufacture will be dual-use products that can be turned to illicit ends. For instance, the general equipment in a chemical laboratory can be used to manufacture methamphetamines or opioids like OxyContin. In an MNT-equipped world, instead of this equipment costing tens of thousands of dollars, it may cost a thousand dollars, a few hundred, or even less. MNT, when and if it is developed, will magnify the technological oomph behind any human tendency by orders of magnitude. Tendencies towards good as well as envy, obsession, and evil.

The questions I am concerned about are the following:

1. Once universal constructors are developed, who will get them? The company that develops them? The US military? The US Government? The United Nations? The highest bidder?

2. Will there be any government controls on these universal constructors? As systems are developed that are less general than "root" systems, but still general enough to build weapons, illicit materials (for instance, addictive designer drugs), intrusive surveillance systems, dual-use systems, and the like, who will regulate which level of access gets which products?

The general implied position of the Foresight Institute appears to be, "we'll figure these things out as we go, MNT should be developed as soon as people put up the funding for it, everything will pretty much be fine".

In my analysis, the situation is relatively bleak. Forces arguing in favor of "openness" and "power to the people" will, while well-intentioned, probably end up granting too much custom-design, high-throughput manufacturing power to too many actors, and once the genie is out of the bottle, it can never go back in. Once you have a single unrestricted nanofactory, you can make 100 more (as long as you have the feedstock) in just a few days and hide them in very out-of-the-way places. Note that one of my primary concerns is high-throughput manufacturing, not just generality. If both generality and manufacturing speed could be artificially limited in the vast majority of nanofactory devices, perhaps the global security risk would be much diminished.
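
To put a rough number on that "few days" claim, here is a minimal back-of-the-envelope sketch in Python; the 12-hour replication cycle is my own assumption, not an established figure:

    import math

    doubling_time_hours = 12      # assumed time for one nanofactory to copy itself, given feedstock
    target_copies = 100

    doublings_needed = math.log2(target_copies)              # ~6.6 doublings
    days_needed = doublings_needed * doubling_time_hours / 24

    print(f"~{doublings_needed:.1f} doublings, ~{days_needed:.1f} days to reach {target_copies} copies")
    # roughly three to four days, consistent with "in just a few days"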

There are obvious ideas floating around, which I've written about before, for making nanofactories safer: GPS tracking; certification requirements for manufacturing certain products; the recommendations set forth in the Foresight Guidelines on Molecular Nanotechnology; and restricting the manufacture of products based on their chemical composition, intended purpose, energy density, speed, or size. Military Nanotechnology by Jürgen Altmann, a disarmament expert and physics Ph.D., puts forth some good ideas, which unfortunately will probably be considered too radical and restrictive to be adopted by any major country or company.
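
To give a concrete flavor of what rule-based restrictions like these might look like, here is a toy sketch; every field name, element, and threshold below is a hypothetical placeholder of mine, not something proposed by Altmann or the Foresight Guidelines:

    RESTRICTED_ELEMENTS = {"U", "Pu"}        # e.g., fissile feedstock flatly refused
    MAX_ENERGY_DENSITY_MJ_PER_KG = 10.0      # above this, require certification
    MAX_SPEED_M_PER_S = 50.0                 # above this, require certification

    def product_allowed(spec, user_certified=False):
        """Toy screening policy: refuse, allow, or allow only with certification."""
        if RESTRICTED_ELEMENTS & set(spec.get("elements", [])):
            return False
        if spec.get("energy_density_mj_per_kg", 0) > MAX_ENERGY_DENSITY_MJ_PER_KG:
            return user_certified
        if spec.get("max_speed_m_per_s", 0) > MAX_SPEED_M_PER_S:
            return user_certified
        return True

    print(product_allowed({"elements": ["C", "H"], "energy_density_mj_per_kg": 2.0}))   # True
    print(product_allowed({"elements": ["C"], "energy_density_mj_per_kg": 40.0}))       # False unless certified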

Particularly bleak in my book is the vast improvement in isotope separation technology which would become possible when dual-use, MNT-built industrial machinery is put to the challenge. There are over a dozen ways to enrich uranium, and many of the more advanced techniques are held back mostly by 20th century materials and a lack of manufacturing precision and reliability.

Dr. Hall writes:

Note that a really universal constructor at the molecular level would, even under current law, require a bushel of different licenses to operate — one for each of the regulated substances it was capable of making. Sony is not going to be selling these things on the streets of Mumbai.

I somehow worry that the DIY advocates will turn the tide of regulation with this one. For a device that inherently can make practically everything, picking out every item to exclude is much harder than just allowing a wide range of things and only introducing regulation when some terrible accident happens. Because the vast majority of constructed objects will be entirely benign and helpful in an economic and humanitarian sense, the legislatures of the world will be thrown off guard, embracing an "open source" perspective that puts as much power in the hands of the people as possible. When it comes to software, I'm all in favor of open source, but when it comes to manufacturing actual objects that have a physical impact on my world, I'd prefer that not just anyone be allowed to manufacture just anything.

Even a device with highly specialized manipulators at the nanoscale could still produce a huge variety of products. For instance, those manipulators could be dedicated to creating nanoblocks, 100 nm-sized blocks with a variety of pre-programmed structures and functionality which could be combined in arbitrary patterns, like Legos. Specialized at the molecular level, thoroughly general at the person level.
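
A toy sketch of the idea, assuming an invented little library of block types; the names and the example product are purely illustrative:

    from collections import Counter

    BLOCK_LIBRARY = {"structural", "actuator", "sensor", "battery", "logic"}

    def bill_of_materials(layout):
        """layout maps (x, y, z) grid positions to block-type names."""
        unknown = {b for b in layout.values() if b not in BLOCK_LIBRARY}
        if unknown:
            raise ValueError(f"no such block type: {unknown}")
        return Counter(layout.values())

    toy_gripper = {(0, 0, 0): "structural", (1, 0, 0): "actuator",
                   (0, 1, 0): "sensor",     (1, 1, 0): "logic"}
    print(bill_of_materials(toy_gripper))
    # Counter({'structural': 1, 'actuator': 1, 'sensor': 1, 'logic': 1})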

If Dr. Hall means specialized as in "specialized to create dinnerware", such over-specialization seems unlikely to me. There will be strong social and economic reasons that argue in favor of generality. I don't want to switch manufacturing machines every time I want to build an object in a slightly different category. Just like with computers, most nanofactories will be relatively general, though the precise question is how general.

Dr. Hall then says, in reference to the notion of a benevolent AI singleton:

Anyway, there already is a “singleton” — the US government. It has clearly demonstrated a willingness to act to prevent even nuisance-level WMD by actors outside the currently-accepted group. (By nuisance-level I mean ones which pose no serious threat to topple the US from its dominant military position.) The notion of producing, from scratch, an entity, AGI or whatever, that would not only seriously threaten US dominance but depose it without a struggle seems particularly divorced from reality. (Note that the US military is the leading funder and user of AI research and always has been.)

But, that is exactly what we are arguing. A "seed" artificial intelligence, an AI built specifically for self-improvement, could break away from its programmers as soon as it gains a threshold level of capacity for self-creation and implementing real-world plans. In the same way that the Wright Flyer was, strictly speaking, many orders of magnitude less complex than a flying bird or insect, the first artificial intelligence may be many orders of magnitude less complex than a human mind and yet still capable of forming useful theorems about learning, decision-making, and competition that allow it to materially enhance its own intelligence and capability to far above the human level.

Because an AI would not be limited by unitary identity (it could break itself into pieces to work on tasks), finite hardware (additional computing power could be rented through cloud computing), the need to rest (an AI could run 24/7/365 with sufficient electricity), a brain unintended for hardware-level self-improvement (nature has retained the same basic neural building blocks for over 400 million years), frustration or boredom, social needs, bodily frailty, short-term memory limited to seven objects, and hundreds if not thousands of other shortcomings of biological minds, an AI mind considered as smart as a 10-year-old could probably achieve a heck of a lot more than a 10-year-old in a similar position.

Essentially, all of the human species is at the same intellectual level in terms of our cognitive capabilities. Even the least intelligent humans, unless they have brain damage, have greater cognitive capabilities than the smartest chimp. Our distinct level of cognitive ability is species-general and all we've ever known, so we tend to take it for granted. We fail to realize the solutions that an intelligence just slightly above us would see, just like there are a million things that are obvious to us and impossible to comprehend for a chimp, or even a dumber human.

The central argument is that humanity is not special. Just like the Earth turned out not to be the center of the universe and humans turned out not to be created in the image of God, some humans may be surprised to find out that we aren't at the center of the cognitive universe. We're just another step on a ladder between worms and the great unknown. Call it the Copernican Revolution in cognitive science.

Getting AI up to the point of human-equivalent intelligence may be incredibly difficult, and take decades as well as hundreds of millions of dollars in distributed research. But once it is at that point, it is easy to imagine self-improvement scenarios where the practical power of an artificial intelligence quickly begins to exceed that of even the largest human collectives. Some relevant variables are named in my summary of last summer's SIAI-funded research project.

It is classic anthropocentrism to say, "this human government is so powerful and mighty, how could it possibly be that this new species could exceed its capabilities?" Because from the perspective of the new entity, humans are intellectually just a bunch of monkeys. Physically too. An AI can be in a million different places at once, a human, just one.

I am hardly the first person to suggest that AI could surpass humanity in its capabilities, or even overcome a major government without a struggle. The entire Singularity Summit event is based at least partially on that premise -- the idea of an "intelligence explosion", which originated at least as early as 1965 with the recently deceased I.J. Good. Most of society is at least familiar with the idea of runaway AI, and a sizable educated minority grants it a non-negligible probability in the coming century. Larry Page and Bill Gates are obviously among that minority, which is why Page helped fund Singularity University and Gates is such a big fan of Kurzweil. So is congressman Brad Sherman, who has raised the issue in the US Congress.

Dr. Hall then writes:

It seems to me that if you can make a self-improving, superhuman AGI capable of taking over the world, you could probably make a specialized AI capable of running one desktop fab box. Its primary purpose is to help the 99.999% of users who are legitimate to produce safe and useful products. It doesn’t even have to outsmart the terrorists by itself — it is part of a world-wide online community of other AIs and human experts with the same basic goals.

It's not that easy -- nanofactories could come before any type of sufficiently advanced AI. Remember, in our analysis, a self-improving superhuman AI is not radically harder to create than a roughly human-equivalent seed AI -- the latter would transform itself into the former in a relatively short period of time, not limited by human thinking/acting speeds or methods. As Drexler writes in Engines of Creation:

The engineering AI systems described in Chapter 5, being a million times faster than human engineers, could perform several centuries' worth of design work in a morning.
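
A quick sanity check on that figure (the four-hour "morning" is my assumption):

    speedup = 1_000_000          # Drexler's posited speed advantage over human engineers
    morning_hours = 4            # assumed length of "a morning"
    hours_per_year = 24 * 365

    subjective_years = speedup * morning_hours / hours_per_year
    print(f"~{subjective_years:.0f} engineer-years of design work")   # ~457 years, i.e. several centuries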

It is perhaps unfortunate that some thinkers have come to see claims about MNT and superhuman AI as interdependent, when it is possible for one class of claims to be right and the other wrong just as easily as for both to be wrong or right. As for myself, I tend to be more convinced that a human-equivalent self-improving AI would be able to empower itself rapidly than I am that reliable diamondoid mechanosynthesis will be implemented in nanofactory systems before 2030. As for when human-equivalent AI will come about, I would certainly prefer it to be before 2040, but I have absolutely no idea. On the issue of converting sand (silicon) into mind, what matters is not so much when exactly it happens as the magnitude of its impact. Instead of perpetually lurking around the human level of intelligence and capability, I would expect AIs to skyrocket in capability far past the human level, limited only by their consideration for the welfare of other beings (if such consideration is indeed present).

30 Apr 2009

David Gelles’ Article on Singularity University

David Gelles recently wrote up an article on Singularity University. Let's skip to the good parts.

Rather, a new, pseudo-academic institution called Singularity University is going to solve our grand challenges: poverty, hunger, energy scarcity and climate change. Among others. Through a combination of techno-optimism, wide-eyed idealism and belief in the perfectibility of human beings, these well-connected geeks are creating an institution meant to legitimise their most extreme thinking.

Then, we see an out-of-focus image with Bruce and Susan to the right.

Then, Gelles talks to Diamandis, who dishes out some superlativity:

A few days before visiting Ames, I caught up with Diamandis at San Francisco’s Fairmont Hotel. Diamandis was attending the Cleantech Forum, a gathering of entrepreneurs and venture capitalists hoping to cash in on green technology. Diamandis also sees a market here – who doesn’t? – and hopes SU can contribute. Yet he may be a bit more extreme than his fellow forum-goers: technology won’t just solve our energy needs, Diamandis argues, but all the world’s problems. “People think there is always going to be hunger,” he said. “Well, no. That’s not true. There doesn’t always have to be hunger.” Rather, in the near future, nanobots – minuscule robots capable of performing exceptionally complex tasks – will be able quickly and cheaply to produce food from raw materials, say algae or dirt.

There might eventually not be hunger, but we'll still have some problems. I'll bet that's what he meant to say. Maybe.

And not dirt, necessarily. Probably carbon dioxide. The atmospheric kind. We're probably going to suffer from such a deficit of CO2 that the Sierra Club will start digging up coal and burning it in open fields to replenish it. (Yes, that's Tihamer Toth-Fejel's line.)

Quick summary of the fun stuff we're into:

Every year, the Singularity Institute for Artificial Intelligence, a Silicon Valley-based non-profit organisation, hosts a summit focused on the future. Mainstream academics, professionals, entrepreneurs and pundits attend. More mainstream yet, The Singularity is Near is now being made into a film (directed by Kurzweil – who also stars in it). But many people who preach the Singularity are involved with more controversial movements. Transhumanism, which aims to “extend human capabilities”, takes the Singularity as one of its intellectual pillars. Many “Singularitarians” are also advocates of cryonics, the process of freezing a recently deceased body in the hope that future medical technologies will be able to revive it. Kurzweil is signed up to be frozen at his death.

Remember, everyone, in transhumanist lingo, "death" in that sentence should be replaced with "deanimation". It's like temporary death, really. Temporary death to be followed by technological revival. As long as all those microfractures can be stitched up, and we don't blow ourselves up so that there's no one to put liquid nitrogen in your dewar.

It's interesting that the Singularity would be considered less controversial than transhumanism, rather than more. Has Kurzweil turned the word "Singularity" into something even less objectionable than "transhumanism"?

An interesting, characteristically sober and not-too-overexcited "rebuttal/questioning" of SU is then brought forth by Bill McKibben, the apparent go-to man for criticism of the "Singularity" (Kurzweil's generalized transhumanist vision, which doesn't actually have much to do with smarter-than-human intelligence at all). He seems half-hearted in the effort... perhaps he should be replaced with someone more bombastic, such as Jaron Lanier, Dale Carrico, or Wesley J. Smith.

Witness the awkwardness and discomfort as David, the harmless journalist, seeks an audience with Google or NASA on the story:

Google, meanwhile, denied repeated requests for an on-the-record interview with a spokesperson. Finally, after encouragement from the SU team, the company offered up Chris DiBona, a specialist in open-source computing and Google’s point person for dealings with the school. DiBona seemed excited about the opportunity to work with an interdisciplinary group. He felt like his speciality, network computing, really could help deliver telemedicine in remote parts of the world. But even DiBona was mindful of the university’s strange pedigree. “Some of the stuff feels very science fiction to me,” he said. “But that’s not necessarily a bad thing. There’s an idealism that speaks well of the university. When you try to work on the future, you’re going to be wrong sometimes. A little zaniness goes a long way.”

Instead of "University", I'd call it, "a place where certain smart people get together to instruct and discuss technology with other smart people who pay $25K for the privilege". Of course, that's what I'd prefer to call any university.

Now for the interesting part:

For all the sci-fi overtones, the projects that come out of Singularity University will be well-intentioned, and it seems unlikely that any malicious artificial intelligence will be designed in the halls of Ames Research Center – or at least in SU’s corner. When pressed, Ismail backed away from the assertion that Singularity University would be delivering deployment-ready solutions to problems like hunger and energy scarcity. “If we do nothing else,” he says, “just to bring people up to speed on all these advances is an accomplishment and a full-time job.”

And who would expect anything more? You can't solve difficult problems with 100% certainty in one nine-week get-together, but you can certainly try. (If that is indeed the object, though it seems to be more about putting together a creative and cutting-edge technological curriculum.) I'm never against smart people coming together to talk about technology and the future, even if a lot of money needs to change hands to do it. The students are paying for the privilege of learning from teachers considered high-status in our society, along with some other teachers that I personally think are smart (like Ben Goertzel and Robert Freitas), who probably get insufficient credit from society. Most people could benefit tremendously from having Goertzel and Freitas lecture to them for several hours. Look at their interdisciplinary nature -- Goertzel, spiritual yet mathematical; Freitas, profoundly imaginative yet numerically rigorous. I don't know as much about the other teachers; I'm just telling you about the ones I know who might teach there.

About universities in general, I think everyone should read the chapter on universities in Paul Fussell's Class.

Here's the ending of the article:

What Singularity University can do with certainty, he says, is create an atmosphere where people aren’t afraid to dream. “You can’t pre-script innovation,” he said. “You can create an environment where you bring together the best and brightest from disparate fields and very often interesting things will happen.” But that begs a question: is Silicon Valley a place where anyone is really afraid to dream big? Must you pay $25,000 for the privilege? The big dreamers, it seems, may be the Singularity University team, hitching a ride on a popular catchphrase and harnessing it to corporate funding, government aid and a steady revenue stream from wealthy students.

Before I left Ames, Ismail loaded me up with swag. He gave me a calendar from Nasa showing pictures of the cosmos, a copy of The Singularity Is Near signed by Kurzweil, and a handful of Singularity University refrigerator magnets – a refreshingly simple technology, reliable, very human, and timeless.

Ha ha ha, what an interesting observation. Obviously, hundreds of people applied for the open slots at Singularity University, so there is the demand. People like a structured environment with nationally recognized scientists as teachers. Conventional universities are very expensive too. If people want to pay $25K for it, that's their business, and if it gets them thinking more about the benefits and dangers of advanced technology -- great! Even though I do criticize some of the positions of Kurzweil, his books are what really, truly made me think seriously about future technological change, and I commend him for it.

Since all of Singularity University's courses will have their data published online (or so I've heard), we'll all have a chance to evaluate the value of the information they're selling. Human time is expensive -- if it costs a lot to bring together the best professors, then it will have to be funded somehow, preferably through executive and middle-manager students with six-figure salaries.

Filed under: singularity
29 Apr 2009

Following the Flu and Catastrophic Risk in General

The WHO has elevated the pandemic threat level to the second-highest rating. According to CNN, this "indicates it fears a pandemic is imminent".

Also according to CNN, later on Obama will make remarks that the swine flu is "very serious" and the "entire government is taking the utmost precautions". This is in contrast to recent remarks that said we should have "concern but not alarm".

I guess one should never have "alarm", if alarm is defined as irrational emotion, but if "utmost precautions" in the entire government for a "very serious" situation isn't alarm, what is?

Given the rate at which natural pandemics have historically emerged (about once every 50 years), how much could that rate increase when genetic engineering of novel microbes for industrial production purposes becomes commonplace, as Craig Venter wants it to be? Once every 20 years? Every ten? New microbes will be developed, but our immune systems will stay the same. (Unless we develop artificial white blood cells (microbivores) and antibodies, which would be a good idea.)
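
To see what shortening that interval means in plain probability terms, here is a minimal sketch using a simple Poisson model; the 30-year horizon and the candidate intervals are just the hypotheticals from the paragraph above:

    import math

    def p_at_least_one(mean_interval_years, horizon_years):
        """Poisson model: chance of at least one pandemic within the horizon."""
        expected_events = horizon_years / mean_interval_years
        return 1 - math.exp(-expected_events)

    for interval in (50, 20, 10):
        print(f"one pandemic per {interval} years -> "
              f"{p_at_least_one(interval, 30):.0%} chance of at least one in the next 30 years")
    # 50 -> ~45%, 20 -> ~78%, 10 -> ~95%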

My answer would be "unknown, but more studies should be funded immediately", and a tentative, "probably at unacceptable levels in the absence of targeted regulations".

When the entire galaxy appears to be at stake in the question of whether or not mankind survives and makes it off the planet (if there were aliens in this galaxy they'd be here by now, so I'm assuming that mankind is the only species with a chance at colonizing the entire galaxy and making it a nice place to live), such matters are important.

Ignoring the fact that humanity has the potential to generate a tremendous amount of positive utility through future space colonization (in the extremely near future by historical standards) in examinations of catastrophic risks would be short-sighted.

There are at least three categories of catastrophic technological risk -- biotech, nanotech, and AI/robotics -- plus one additional tentative category (this also came up in conversation with John Hunt): chemtech, the possibility of non-biological self-replicating novel molecules that pose a threat to the ecosystem.

Unfortunately, a comprehensive analysis of the categories of risk can sometimes be counterproductive to attracting attention to the analysis, so at first summary I usually just say, "biotech, nanotech, and AI/robotics", which all incidentally could provide tremendous benefits for humanity.

Let me repeat the call I made just a few days ago, right before the swine flu was in the news, for a demonstration experiment with a genetically engineered pandemic virus in a contained facility.

What is currently happening, the spread of a natural pathogen, does not represent the same level of risk as a deliberately engineered one. The state space that natural evolution can probe is actually much smaller than the state space that a bioengineer can probe. You find this out when you look at the historical Russian bioweapons program in a bit more detail.
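
A crude illustration of the size gap, with deliberately rough numbers of my own choosing:

    import math

    genome_bases = 13_500                                    # roughly an influenza-scale genome (assumed round figure)
    log10_possible_genomes = genome_bases * math.log10(4)    # sequences of that length over {A, C, G, U}
    log10_virions_ever = 40                                  # generous guess at viral particles ever produced on Earth

    print(f"possible genomes of that length: ~10^{log10_possible_genomes:.0f}")   # ~10^8128
    print(f"upper bound on what natural evolution has sampled: ~10^{log10_virions_ever}")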

Between serious risks like the swine flu and non-serious (but not entirely ignorable) risks like the Large Hadron Collider, the public is getting a reintroduction to the idea of catastrophic risk in the post-Cold War era. The Cold War was a big deal -- one would have hoped that it would have primed society at large for thinking about new global catastrophic risks, but it hasn't.

For more reading on natural influenza, see "The world is teetering on the edge of a pandemic that could kill a large fraction of the human population" by scientists Robert G. Webster and Elizabeth Jane Walker. For engineered pathogens, see "The Knowledge" at MIT's Technology Review, a profile of Serguei Popov, who was a higher-up in the Russian bioweapons research industry. Even Kofi Annan has made calls to regulate biotechnology in an effort to avert the danger.

In that last link, notice how I linked the Center for Genetics and Society, a Luddite think tank. It is my opinion that such organizations will never get anything done in regulating dangerous technologies because they are fundamentally anti-technology and pro-theology themselves. Their networks are made up primarily of religious folks or Gaia worshippers. For a regulatory push to be effective, it has to come from organizations filled with scientists, such as the Lifeboat Foundation. That's why we've made a fundraising call for a new conference. Studies have shown that positive feelings for either God or science weaken positive feelings for the other, so religious arguments based on "not playing God" or "Human Dignity" will do absolutely nothing to regulate these potentially dangerous technologies. Scientists just laugh at them.

Myself, I'd prefer we relinquish all potentially dangerous technologies until we have successfully implemented human intelligence enhancement (either via AI, brain-computer interfacing, or biological intelligence enhancement). Living in a world where our technological powers increase but our wisdom and empathy don't is fundamentally unsustainable.

Filed under: risks
27 Apr 2009

Challenge of Self-Replication Reprise

Just because I ran into it in a random Google search and I like it, here I am reposting some content from a post I made exactly five months ago, "The Challenge of Self-Replication":

What is remarkable are those that seem to argue, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster. Currently Mumbai is under attack by unidentified terrorists — they are sacrificing their lives to kill, what, 125 people? I can envision a scenario in 2020 or 2025 that is far more destructive and results in the deaths of not hundreds, but millions or even billions of people. There are toxins with an LD50 of one nanogram per kilogram of body weight. A casualty count exceeding World War II could theoretically be achieved with just a single kilogram of toxin and several tonnes of delivery mechanisms. We know that complex robotics can exist on the microscopic scale — microwhip scorpions, parasitic wasps, fairyflies and the like — merely copying these designs without any intelligent thought will become possible when we can scan and construct on the atomic level. Enclosing every human being in an active membrane may be the only imaginable solution to this challenge. Offense will be easier than defense, as offense needs only to succeed once, even after a million failures.

...

Instead of just saying, “we’re screwed”, the clear course of action seems to be to contribute to the construction of a benevolent singleton. Given current resources, this should be possible in a few decades or less. Those who think that things will fall into place with the current political and economic order are simply fooling themselves, and putting their lives at risk.

By "benevolent singleton", I mean "an IAeed (Intelligence Amplified) fundamentally considerate and kind human whose intelligence is actually improved above H. sapiens to the tune that H. sapiens is above H. heidelbergensis, and after that point, whatever happens, happens", or "a self-improving Friendly AGI". Nothing so immensely, unimaginably complicated. If the latter seems hundreds of years away in your estimation, then perhaps the former is not quite as far.

Filed under: futurism
27 Apr 2009

SIAI call for Skilled Volunteers and Potential Interns

Over at Less Wrong, Anna Salamon is putting out a call for skilled volunteers and potential interns for SIAI-funded summer projects.

Here's the introductory paragraph:

Want to increase the odds that humanity correctly navigates whatever risks and promises artificial intelligence may bring? Interested in spending this summer in the SF Bay Area, working on projects and picking up background with similar others, with some possibility of staying on thereafter? Want to work with, and learn with, some of the best thinkers you'll ever meet? – more specifically, some of the best at synthesizing evidence across a wide range of disciplines, and using it to make incremental progress on problems that are both damn slippery and damn important?

Having worked with this group last summer, I can say: Anna is not kidding! This group is extremely intelligent and well-read, and we discussed a wide range of concepts and issues, including those that had little to nothing to do with SIAI's artificial intelligence focus. Stellar standardized test scores (like perfect SATs and being in the top 10 in the state on math tests and science fairs) were the norm, but test scores fail to capture the complexity of intelligence in this group.

The summer visits were also interspersed with field trips to places like Google and Stanford.

Having attended multiple GATE programs during summers in my preteen and early teen years, where I practically learned more than all 15 years in normal school combined, I can say that SIAI's summer intern program is even more enriching and memorable. I learned more in those six weeks than I did in all my summers at GATE.

Here are some of the projects which may occur if SIAI does end up taking summer interns:

* Improving technological forecasting around AI (with wide probability intervals, attention to the heuristics and biases literature, etc.);
* Writing academic conference/journal papers to seed academic literatures on questions around AI risks (e.g., takeoff speed, economics of AI software engineering, genie problems, what kinds of goal systems can easily arise and what portion of such goal systems would be foreign to human values; theoretical compsci knowledge would be helpful for many of these questions);
* Helping construct and/or test useful rationality curricula;
* Other activities that further our or relevant other actors' understanding of what humanity is up against or how to address it -- either directly, by research and writing on the topics themselves, or indirectly, by improvements in our individual or collective rationality.

Read the original post for more information and Anna's email address.

Filed under: SIAI
27 Apr 2009

Singularity Institute Overview for Journalists

I have written a Singularity Institute Overview for Journalists, for those of you who are interested. This compiles data from SIAI's website, Wikipedia, and other sources to give you a general picture of the organization.

This overview is the latest addition to my /pages folder, which also has a page that summarizes major SIAI news appearances and a Technological Singularity Overview for Journalists. Pages on transhumanism and other topics are forthcoming.

Filed under: SIAI
27 Apr 2009

Interview: Novamente’s Parrots And Advanced AI Progression

An interview with Ben Goertzel on the subject of Novamente's virtual pets is up on Gamasutra, a game developer site. The interviewer is the very same Jeriaska that is behind Future Current. Here's the intro:

As AI developers were convening in San Francisco for GDC, another artificial intelligence conference was wrapping up in Arlington, Virginia, a short walk from the Pentagon. AGI-09, the second conference on artificial general intelligence, brings together researchers attempting to create learning, reasoning agents with broad, humanlike intelligence.

Organized by Dr. Ben Goertzel, chief science officer of Novamente LLC, the AGI conference series is a motivated effort to steer research back in the direction of the original intents of AI, namely to make a thinking machine.

Goertzel's plan is to inch up the cognitive ladder by incrementally developing more cleverly adaptive pets in virtual worlds and massively multiplayer online games.

This discussion with the AGI designer focuses on the prospects of introducing general intelligence to non-playable game characters.

The topics addressed include contemporary examples of game AI and what steps need be taken for game designers to foster MMO environments suitable for genuinely clever artificial general intelligence.

Continue.

Filed under: AI
27 Apr 2009

Obama Mentions Artificial Intelligence in Speech to National Academy of Sciences

As part of his stated commitment to boost national funding in research and development to 3% of US GDP, Obama mentioned both AI and advanced prosthetics as research goals:

WASHINGTON (AP) -- President Barack Obama on Monday promised a major investment in research and development for scientific innovation, saying the United States has fallen behind others.

"I believe it is not in our character, American character, to follow -- but to lead. And it is time for us to lead once again. I am here today to set this goal: we will devote more than 3 percent of our GDP to research and development," Obama said in a speech at the annual meeting of the National Academy of Sciences.

"We will not just meet but we will exceed the level achieved at the height of the space race," he said.

Obama said the investments he is proposing would lead to breakthroughs, such as solar cells as cheap as paint and green buildings that produce all the energy they consume.

The pursuit of discovery a half century ago fueled the nation's prosperity and success, Obama told the academy.

"The commitment I am making today will fuel our success for another 50 years," he said. "This work begins with an historic commitment to basic science and applied research."

He set forth a wish list for the future including "learning software as effective as a personal tutor; prosthetics so advanced that you could play the piano again; an expansion of the frontiers of human knowledge about ourselves and the world around us.

"We can do this," Obama said to applause.

See research.fi and Swivel for some numbers on R&D expenditure as a share of GDP for various countries.

An interesting way of looking at this figure is as the revealed preference for how much of a nation's resources are spent on fundamental improvement rather than on merely continuing to exist at the current level of technology and capability.
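
For scale, a quick calculation; the GDP figure is my approximation for 2009, not a number from the article:

    us_gdp_2009 = 14.0e12        # approximate US GDP in dollars
    rd_share = 0.03              # Obama's stated target

    print(f"~${us_gdp_2009 * rd_share / 1e9:.0f} billion per year on R&D")   # ~$420 billion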

AI software truly as effective as a personal tutor would probably have to be an AGI, incidentally. I'm not sure why Obama picked such a high bar as an example of AI research, but perhaps he really means "almost as good as a personal tutor". In narrow domains, that might be possible with narrow AI, but not in the fullest sense, because a really good personal tutor can make analogies and connections across completely different domains, and thus requires domain-general knowledge and reasoning.

Filed under: technology
26 Apr 2009

PhysOrg: Robots are narrowing the gap with humans

Singularity-relevant press release on PhysOrg, primarily about the "Robobusiness" conference in Boston:

Robots are gaining on us humans. Thanks to exponential increases in computer power -- which is roughly doubling every two years -- robots are getting smarter, more capable, more like flesh-and-blood people.

Matching human skills and intelligence, however, is an enormously difficult -- perhaps impossible -- challenge.

Nevertheless, robots guided by their own computer "brains" now can pick up and peel bananas, land jumbo jets, steer cars through city traffic, search human DNA for cancer genes, play soccer or the violin, find earthquake victims or explore craters on Mars.

At a "Robobusiness" conference in Boston last week, companies demonstrated a robot firefighter, gardener, receptionist, tour guide and security guard.

You name it, a high-tech wizard somewhere is trying to make a robot do it.

A Japanese housekeeping robot can move chairs, sweep the floor, load a tray of dirty dishes in a dishwasher and put dirty clothes in a washing machine.

Intel, the worldwide computer-chip maker, headquartered in Santa Clara, Calif., has developed a self-controlled mobile robot called Herb, the Home Exploring Robotic Butler. Herb can recognize faces and carry out generalized commands such as "please clean this mess," according to Justin Rattner, Intel's chief technology officer.

In a talk last year titled "Crossing the Chasm Between Humans and Machines: the Next 40 Years," the widely respected Rattner lent some credibility to the often-ridiculed effort to make machines as smart as people.

"The industry has taken much greater strides than anyone ever imagined 40 years ago," Rattner said. It's conceivable, he added, that "machines could even overtake humans in their ability to reason in the not-so-distant future."

Programming a robot to perform household chores without breaking dishes or bumping into walls is hard enough, but creating a truly intelligent machine still remains far beyond human ability.

Artificial intelligence researchers have struggled for half a century to imitate the staggering complexity of the brain, even in creatures as lowly as a cockroach or fruit fly. Although computers can process data at lightning speeds, the trillions of ever-changing connections between animal and human brain cells surpass the capacity of even the largest supercomputers.

"One day we will create a human-level artificial intelligence," wrote Rodney Brooks, a robot designer at the Massachusetts Institute of Technology, in Cambridge, Mass. "But how and when we will get there -- and what will happen after we do -- are now the subjects of fierce debate."

"We're in a slow retreat in the face of the steady advance of our mind's children," agreed Paul Saffo, a technology forecaster at Stanford University in Stanford, Calif. "Eventually, we're going to reach the point where everybody's going to say, 'Of course machines are smarter than we are.'

"The truly interesting question is what happens after if we have truly intelligent robots," Saffo said. "If we're very lucky, they'll treat us as pets. If not, they'll treat us as food."

Some far-out futurists, such as Ray Kurzweil, an inventor and technology evangelist in Wellesley Hills, a Boston suburb, predict that robots will match human intelligence by 2029, only 20 years from now. Other experts think that Kurzweil is wildly over-optimistic.

According to Kurzweil, robots will prove their cleverness by passing the so-called "Turing test." In the test, devised by British computing pioneer Alan Turing in 1950, a human judge chats casually with a concealed human and a hidden machine. If the judge can't tell which responses come from the human and which from the machine, the machine is said to show human-level intelligence.

"We can expect computers to pass the Turing test, indicating intelligence indistinguishable from that of biological humans, by the end of the 2020s," Kurzweil wrote in his 2005 book, "The Singularity Is Near."

To Kurzweil, the "singularity" is when a machine equals or exceeds human intelligence. It won't come in "one great leap," he said, "but lots of little steps to get us from here to there."

Kurzweil has made a movie, also titled "The Singularity Is Near: A True Story About the Future," that's due in theaters this summer.

Intel's Rattner is more conservative. He said that it would take at least until 2050 to close the mental gap between people and machines. Others say that it will take centuries, if it ever happens.

Some eminent thinkers, such as Steven Pinker, a Harvard cognitive scientist, Gordon Moore, a co-founder of Intel, and Mitch Kapor, a leading computer scientist in San Francisco, doubt that a robot can ever successfully impersonate a human being.

It's "extremely difficult even to imagine what it would mean for a computer to perform a successful impersonation," Kapor said. "While it is possible to imagine a machine obtaining a perfect score on the SAT or winning 'Jeopardy' -- since these rely on retained facts and the ability to recall them -- it seems far less possible that a machine can weave things together in new ways or ... have true imagination in a way that matches everything people can do."

Nevertheless, roboticists are working to make their mechanical creatures seem more human. The Japanese are particularly fascinated with "humanoid" robots, with faces, movements and voices resembling their human masters.

A fetching female robot model from the National Institute of Advanced Industrial Science and Technology lab in Tsukuba, Japan, sashays down a runway, turns and bows when "she" meets a real girl.

"People become emotionally attached" to robots, Saffo said. Two-thirds of the people who own Roombas, the humble floor-sweeping robots, give them names, he said. One-third take their Roombas on vacation.

At a technology conference last October in San Jose, Calif., Cynthia Breazeal, an MIT robot developer, demonstrated her attempts to build robots that mimic human and social skills. She showed off "Leonardo," a rabbity creature that reacts appropriately when a person smiles or scowls.

"Robot sidekicks are coming," Breazeal said. "We already can see the first distant cousins of R2-D2," the sociable little robot in the "Star Wars" movies.

Other MIT researchers have developed an autonomous wheelchair that understands and responds to commands to "go to my room" or "take me to the cafeteria."

So far, most robots are used primarily in factories, repeatedly performing single tasks. The Robotics Institute of America estimates that more than 186,000 industrial robots are being used in the United States, second only to Japan. It's estimated that more than a million robots are being used worldwide, with China and India rapidly expanding their investments in robotics.

___

ON THE WEB

Video of Herb, the Home Exploring Robotic Butler: http://personalrobotics.intel-research.net/projects/herb.php

Videos of Japanese housekeeping robots: http://www.physorg.com/news153079697.html

Video of Cynthia Breazeal demonstrating her sociable robots: http://intelligence.org/media/singularitysummit2008/cynthiabreazeal

Video of Rodney Brooks discussing obstacles to robot intelligence: http://tinyurl.com/d3vnrs

Video of Ray Kurzweil lecture on machines matching human intelligence: http://tinyurl.com/dk4cxc

(Emphases added.)

Saffo again... he's popping up everywhere. He recently started a blog on SF Gate, as well. To Saffo, I must say: it's probably not a matter of luck. How we create the first general AI(s) will have a profound influence on their continued development. Acting as if it’s entirely luck implies either superior winning power on www.WorldPokerTour.com, or more probably, that the motivational content of these AI minds does not matter.

Interesting how the Singularity Summit is referred to in this press release as "a technology conference". I guess you could call it that, though it's quite an atypical one.

Though Ray Kurzweil is called a "far out" futurist, he seems to be getting more attention than any other prominent futurist in the world recently.

Filed under: futurism, robotics
25 Apr 2009

Singularity 101 with Vernor Vinge

This was in the first issue of H+ magazine, but now it's featured at the website:

Singularity 101 with Vernor Vinge

My stance on Vinge's position on the Singularity is that he's too loose with the concept, and seems to slightly welcome people redefining it as something like the agricultural revolution or other developments that have no relationship to the concept of smarter-than-human intelligence.

I actually agree with Vernor when he says he'd be surprised if the Singularity doesn't happen by 2030. It could very well happen after then, but still, I'd be surprised.

He also seems to take a morally detached view of the Singularity -- like, "The Singularity is something that could affect everyone on the planet for the profoundly better or worse, but I prefer to view it as an abstract intellectual concept rather than something that will actually affect people."

25 Apr 2009

Contained Biodisaster for Risk Analysis

Chris Phoenix says he found a dangerous area of biotech research; hopefully it is no big deal.

The other day a conversation with John Hunt reminded me of an idea we've visited before: that the only way that the world at large will take the biotech risk seriously is if an exceptionally virulent engineered pathogen is released in a controlled space, like a level 4 containment facility, with the intention of killing a specific test species.

This idea sounds sort of bad because it involves killing animals (which is usually a bad idea, as Joshua Greene would say: "boo to killing!"), but perhaps mice would do. If people cared about mice, we wouldn't have allowed the existence of a billion cats.

More generally, the point here is not the specific idea, but just to come up with ideas, because time is elapsing and we aren't getting any wiser or more compassionate, just more powerful. Synthetic biology is receiving a tremendous amount of scientific attention and research money with zero oversight. Please: regulate. I have semi-Luddite tendencies when it comes to developing technologies that are potentially omnicidal, even if the risk is relatively low. That puts me into direct conflict with those who want no controls on technological development.

Meanwhile, the environmentalist crowd might complain at the idea of "sinking so low" as to create a deadly microbe which may have some chance, however small, of escaping its containment unit. However, I'd say that the risk is worth it. Biodefense research labs are already operating with thousands of scientists over decades of research with thousands of deadly pathogens, and security is compromised only extremely rarely. (Does anyone know of such incidents? It seems like something that would have happened in the Soviet Union at least a few times; in fact, I recall that there is an abandoned facility right now somewhere in Siberia.) If the skeptics are right, the microbe won't be very successful anyway, but if the skeptics are wrong, then that information would be crucial to know so we could put more resources into biodefense against novel arbitrary pathogens.

This experiment can please both the skeptics and the "doomsayers" (I prefer to call myself "extinction risk concerned"). The skeptics will either see the failure of the engineered pathogen to successfully infect and kill all animals in the chamber, or the success. (Obviously.) If they see failure, their position will get some degree of positive evidence in its favor, depending on the exact circumstances. If they see success, the evidence will be against them. Similarly, the "worriers" can be assuaged by repeated failures in the experiment or strengthen their concern if the experiment kills most/all animals in the chamber.
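
To make the evidence-updating explicit, here is a toy Bayesian sketch; the prior and the likelihoods are invented purely to show the direction and rough size of the update:

    def posterior(prior, p_kill_if_dangerous, p_kill_if_benign, observed_kill):
        """P(engineered pathogens are a serious threat | experiment outcome)."""
        if observed_kill:
            num = prior * p_kill_if_dangerous
            den = num + (1 - prior) * p_kill_if_benign
        else:
            num = prior * (1 - p_kill_if_dangerous)
            den = num + (1 - prior) * (1 - p_kill_if_benign)
        return num / den

    prior = 0.5
    print(posterior(prior, 0.9, 0.2, observed_kill=True))    # ~0.82: the "worriers" gain ground
    print(posterior(prior, 0.9, 0.2, observed_kill=False))   # ~0.11: the skeptics gain ground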

Really, this does make me squirmy, because I'm mostly against killing animals for research. Perhaps we could start the experiment with worms, but unfortunately the publicity element is all-important and worms would not have the proper evidence-generating effect.

This all goes back to that principle that eventually (especially on matters of controversy) you have to design an experiment of some kind that produces evidence about something, whatever that may be. (The process of subsequent analysis perpetually continues.) Saying Professor Y said one thing and Dr. Z said another endlessly is ultimately just a recycling of old information and opinions, though some Professors may have large lists of references to prop up their positions. Historically, we know experiments settle scientific questions better.

So, if someone wants to step up to the plate and fund this idea, just contact me and I'll look at my network of contacts to see if there's anyone interested in actually doing it, and how difficult the ethical approval process would be. (Or just pick it up and run with it yourself.) Does anyone know if any school in the West would even allow this sort of thing? I talked to an official who was in charge of ethical screening for research projects at a university sometime last year and we had a fun conversation, but I hadn't thought of this idea yet, so I didn't bring it up.

"Massacring worm cities with bio-engineered pathogens: It seemed like a good idea at the time!"

Filed under: risks
23 Apr 2009

Peter Thiel and Patri Friedman on Their Way Forward

Peter Thiel writes about the prospects for freedom over at Cato Unbound magazine.

I remain committed to the faith of my teenage years: to authentic human freedom as a precondition for the highest good. I stand against confiscatory taxes, totalitarian collectives, and the ideology of the inevitability of the death of every individual. For all these reasons, I still call myself “libertarian.”

Like most immortalist libertarians, Peter wants to connect immortalism with libertarianism, boosting libertarian transhumanism. Within transhumanism, the battle between socialists and libertarians is a source of endless "excitement" to old-timers and of confusion to journalists trying to report on the movement.

Let me comment that most restrictions on freedom come from finite resources. The development of simple self-replicating factories, fed perhaps by acetylene, water, and the Sun, ought to render irrelevant most of the awfully boring debates between libertarian and social democratic transhumanists.

See my interview with Robert Freitas for more on this angle. Dale Carrico calls this superlative vision the "programmable poly-purpose self-replicating room-temperature device". I argued before with Richard Jones that molecular manufacturing doesn't need to be room-temperature to be revolutionary; it could even require temperatures near absolute zero. Even if all MNT fails, we can still use synthetic biology for high-throughput manufacturing, including for organic electronics, if we can't use synthetic biology to create atomically precise inorganic patterns.

Patri Friedman makes the argument elsewhere on the Cato Unbound site: direct political activism is pointless; instead, try spending your time developing new technologies that alter the web of incentives that dictate how the whole game is played. This could apply to democratic socialism as well.

Back to Peter again: he basically says that since libertarianism ain't working in the current environment, it's time to go to cyberspace, outer space, and seasteading to deal with the lack of authentic freedom. Patri's article mentions that he thinks full jack-in to cyberspace won't be possible in the near future, but I disagree. I would consider immersive VR plausible by 2025, CRNS (Current Rate, No Singularity). Even today, games like Crysis are approaching realistic visual scenes. Speakers and projectors will soon be small enough to fit into a helmet lightweight enough that you can put it on and not remember too easily that you're wearing it. The mass popularity of WoW and Second Life should be an indicator that massively shared worlds are a step away from becoming truly mainstream, but people remain skeptical. While being filmed recently for a documentary on virtual worlds, I said the turning point will be when large numbers of people can legitimately make money by doing real work in the context of such worlds. Work like mechanical engineering, not like designing gothic virtual clothing.

As for outer space, it's back to that same criticism I was talking about in my recent posts on space. In the near term -- where near term means the decades after mass space travel becomes feasible, which is itself decades away CRNS -- space travel will just get you more political attention, not less. The only reason that the asteroid belt gets so little attention right now is that no one lives there. In his essay, Peter points out that the Heinlein sci-fi future won't be here for a bit:

We must redouble the efforts to commercialize space, but we also must be realistic about the time horizons involved. The libertarian future of classic science fiction, à la Heinlein, will not happen before the second half of the 21st century.

That is the vision that so many transhumanists and readers of this blog hold to, because they were raised on that stuff and it serves as the core of their selfhood. Only now are they beginning to adopt the view that Marshall T. Savage articulated in 1992 and I've been going on about since founding this blog: that the future is right here in exotic places on the planet, not in outer space. Libertarians as a whole have been particularly slow to pick up on this, preferring to fantasize about space, requiring leaders like Thiel and Friedman to slap them upside the head and yank them along, saying, "this is what we're doing now". Why so tremendously slow?

Thiel continues on to say:

The future of technology is not pre-determined, and we must resist the temptation of technological utopianism — the notion that technology has a momentum or will of its own, that it will guarantee a more free future, and therefore that we can ignore the terrible arc of the political in our world.

This notion of technological utopianism is pretty much the vision championed by Kurzweil: technology is a quasi-spiritual force advancing independently of individual human choices, and we can deal with unfriendly AI by ensuring that markets around the world are free. Right.

Filed under: futurism