Specialized vs. General Molecular Assemblers and the Risk of AGI

J. Storrs Hall at the Foresight Institute has responded to my recent post about the challenges of self-replication. Specifically, the line where I refer to the Foresight Institute and the Center for Responsible Nanotechnology:

What is remarkable are those that seem to argue, like Ray Kurzweil, the Foresight Institute, and the Center for Responsible Nanotechnology, that humanity is inherently capable of managing universal self-replicating constructors without a near-certain likelihood of disaster.

Dr. Hall responds:

From this he jumps with very few intervening arguments (“there are terrorists out there”) to a conclusion that we need a benevolent world dictatorship (“singleton”), which might need to be a superhuman self-improving AI. This seems a wildly illogical leap, but surprisingly appears to be almost an article of faith in certain parts of the singularitarian community and Washington, DC. Let us examine the usually unstated assumptions behind it:

A singleton need not be a benevolent world dictatorship — just a “world order in which there is a single decision-making agency at the highest level”, as defined by Nick Bostrom, who says:

A democratic world republic could be a kind of singleton, as could a world dictatorship. A friendly superintelligent machine could be another kind of singleton, assuming it was powerful enough that no other entity could threaten its existence or thwart its plans. A “transcending upload” that achieves world domination would be another example.

Consider the concept of global governance, for instance.

I consider it likely that a singleton will emerge in the 21st century, whether we want it to or not, as a natural consequence of expanding technological powers on a finite planet, as well as the historical trend of power aggregating at higher geopolitical levels. Note that the singleton concept does not specify what degree or scope of decision-making powers the entity (which, as pointed out, could be a worldwide democracy) has. 99% of policy choices could very well be made at the local and national levels, while a singleton intervenes only in the 1% of choices with global importance. As Dr. Hall points out later in his post, it seems that a pseudo-singleton already exists. He calls it the US Government, but I’d call it a fuzzy entity consisting of the shared consensus between the US Government, its opinion sources (academia, public, media), the UN (which is not controlled by the US alone), the European Union, NATO, and other assorted actors.

To me, what I’d want most out of a singleton would be a coherent and organized approach to problems that face the entire planet. Instead of a disorganized patchwork, there’d be more decisive action on global risks. No authoritarianism in cultural, political, or economic matters is implied.

This is what I think of when I hear calls for “more international cooperation” on terrorism or global warming. This is why we have the WHO as the highest source of authority on the emerging swine flu. People say that international organizations and institutions are weak, and maybe some of them are, but at least a portion of them help the entire world move through crucial challenges. Celebrities and politicians emerge to champion causes and rally supporters. Diversity in opinion, unity in action. It’s called cooperation.

The “singleton” I want could be described simply as “more cooperation on threats to us all, including the question of whether certain threats are really threats or not”. Whether AI is in the picture or not is really a secondary issue, but if AI expands our capacity to detect and respond to threats, more power to it.

Next, Dr. Hall argues:

Humanity can’t manage self-replicating universal constructors: We’ve been managing self-replicating universal constructors for tens of thousands of years, from elephants to yeast. What’s more, these are replicators that can operate in the wild. The design process, e.g. to turn a wolf into a Pekingese, takes longer but is much more intuitive to the average human.

If you’re worried about high-tech terrorists, worry about genetically engineered swine flu or other naturally-reproducing agents. If there are terrorists out there who are so technically sophisticated as to be a threat with MNT, at best guess still 20 years away for the leading mainstream labs, why aren’t they doing this? Even terrorist Berkeley professors only make letterbombs.

One type of self-replicating constructor could conceivably replicate itself in less than a day and grow arbitrarily large and energy-hungry; the other takes at least a year to self-replicate and has a bounded size. One can make nearly anything; the other is highly restricted in what it can produce. There is no comparison here.
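
To put rough numbers on the contrast (using the one-day and one-year replication times from the paragraph above; in reality, feedstock and energy would cap the fast replicator long before such totals):

```latex
% Starting from one unit, N(t) = 2^{t/\tau}, where \tau is the replication period.
\[
N(1\ \text{yr}) =
\begin{cases}
2^{365} \approx 10^{110}, & \tau = 1\ \text{day (fast constructor)} \\[2pt]
2^{1} = 2, & \tau = 1\ \text{yr (large organism)}
\end{cases}
\]
```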

I am certainly worried about genetically engineered swine flu or other naturally-reproducing agents, and have been posting about these issues frequently. But I still reserve concern for the challenges of MNT, even if they may be 20, 30, or even 40 or more years off. Partially because the advances are fairly far off, the field for debate and thought is smaller than it would otherwise be, potentially giving early actors such as ourselves disproportionate influence over how the debate evolves in the future. As I plan to be discussing technological risk 20, 30, and 40 years from now, I am getting started early by voicing my concerns in 2009. If MNT does become an issue in 2030 or 2040, then hopefully I will be one of the people that is solicited for ideas on how to handle it, partially based on my public analysis of the problem at such an early juncture.

My concern about MNT is that it will not require much technical sophistication from its users once it is rolled out worldwide. That is, it will be possible to create weapons cheaply and easily with intuitive interfaces when non-restricted nanofactories become available around the world (if diamondoid nanofactories are possible at all, which I wager they are). Even if the non-restricted nanofactories are only available to “scientists” or “authorities”, there is a significant risk of them being dispersed via the black market. The demand would surely be astronomical.

If the nanofactories in question just use proteins to make products, as Dr. Drexler has lately been advocating, then a lot of the security issues evaporate. As far as I know, you can’t make a powerful missile, gun, or millipede robot out of keratin.

Next, Dr. Hall rightly points out that universal constructors probably wouldn’t be distributed to everyone:

Once the leading mainstream labs produce self-replicating universal constructors, they are hardly going to hand them out by the billions for people to make shoes with. As Eric Drexler recently pointed out, specialized mill-style machinery is considerably more efficient than universal constructors at actually making stuff. My analysis of this point is that the difference is months for universal constructors vs milliseconds for specialized mills. Nobody is going to want universal constructors except for research.

Of course. The MNT community realized this a while ago. When I say “managing universal self-replicating constructors”, I don’t mean that universal constructors will be distributed as consumer products. I realize that consumer nanofactories are likely to be specialized devices. I am referring to the point at which a limited number of actors acquire more-general (not necessarily universal) manufacturing capabilities, which in turn leads to distribution of more specialized versions of the technology to millions or billions of people. Perhaps “universal” is the wrong word because, as Dr. Drexler has also pointed out, it may be too much to expect any single device to be universal, and it doesn’t have to be. Cooperation between specialized devices should be quite sufficient to hit a very large space of manufacturing targets.

So, to rephrase: what I am concerned about is the widespread availability of more-general high-throughput manufacturing devices, which will result from the invention of a nearly-general molecular assembler. If I could revise my claim, I would subtract the word “universal” and say “general self-replicating nanofactories” instead of “universal self-replicating constructors”. By “constructors”, I meant the entire system, not just the tiny assemblers themselves, so I replace it with “nanofactories” to make this clearer. An individual assembler need not self-replicate — perhaps 1000 assemblers could cooperate to make another assembler. The technical issues here are the subject of ongoing debate and analysis. Still, my concern is that any combination of product-restricted nanofactories could be used to produce additional manufacturing devices that could be put to ill ends. Specialized nanofactories could be used to build more general construction devices, perhaps not even based on MNT at all. I am talking about a general magnification of our manufacturing capability and speed.

The concern is that many of the products likely to be approved for manufacture will be dual-use products that can be turned to illicit ends. For instance, the general equipment in a chemical laboratory can be used to manufacture methamphetamines or opioids like OxyContin. In an MNT-equipped world, instead of this equipment costing tens of thousands of dollars, it may cost a thousand dollars, a few hundred, or even less. MNT, when and if it is developed, will magnify the technological oomph behind any human tendency by orders of magnitude: tendencies toward good as well as toward envy, obsession, and evil.

The questions I am concerned about are the following:

1. Once universal constructors are developed, who will get them? The company that develops them? The US military? The US Government? The United Nations? The highest bidder?

2. Will there be any government controls on these universal constructors? As systems are developed that are less general than “root” systems, but still general enough to build weapons, illicit materials (for instance, addictive designer drugs), intrusive surveillance systems, dual-use systems, and the like, who will regulate which level of access gets which products?

The general implied position of the Foresight Institute appears to be, “we’ll figure these things out as we go, MNT should be developed as soon as people put up the funding for it, everything will pretty much be fine”.

In my analysis, the situation is relatively bleak. Forces arguing in favor of “openness” and “power to the people” will, while well-intentioned, probably end up granting too much custom-design, high-throughput manufacturing power to too many actors, and once the genie is out of the bottle, it can never go back in. Once you have a single unrestricted nanofactory, you can make 100 more (as long as you have the feedstock) in just a few days and hide them in very out-of-the-way places. Note that one of my primary concerns is high-throughput manufacturing, not just generality. If both generality and manufacturing speed could be artificially limited in the vast majority of nanofactory devices, perhaps the global security risk would be much diminished.
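
The “100 more in just a few days” figure follows directly from doubling. A minimal sketch, assuming a 12-hour replication cycle and unlimited feedstock (both my assumptions for illustration, not figures from the MNT literature):

```python
# Illustrative only: exponential growth of unrestricted nanofactories.
# Assumes each factory copies itself once per cycle and feedstock is unlimited.
REPLICATION_CYCLE_HOURS = 12  # assumed cycle time, not a literature value

def hours_to_reach(target_count, cycle_hours=REPLICATION_CYCLE_HOURS):
    """Hours for one factory to become at least target_count factories,
    doubling once per cycle (1 -> 2 -> 4 -> 8 -> ...)."""
    count, hours = 1, 0.0
    while count < target_count:
        count *= 2
        hours += cycle_hours
    return hours

h = hours_to_reach(100)
print(f"1 -> 100+ factories in {h:.0f} hours (~{h / 24:.1f} days)")
# With a 12-hour cycle: 7 doublings -> 128 factories in ~3.5 days.
```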

There are obvious ideas floating around, which I’ve written about before, for making nanofactories safer: GPS tracking, the need for certification to manufacture certain products, the recommendations set forth in the Foresight Guidelines on Molecular Nanotechnology, and restricting the manufacture of products based on their chemical composition, intended purpose, energy density, speed, or size. Military Nanotechnology by Jürgen Altmann, a disarmament expert with a Ph.D. in physics, puts forth some good ideas, which unfortunately will probably be considered too radical and restrictive to be adopted by any major country or company.
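
To make the last idea on that list concrete, here is a minimal sketch of what rule-based product screening might look like in nanofactory firmware. Every field name and threshold below is a placeholder I invented for illustration; none of it is drawn from the Foresight Guidelines or from Altmann’s proposals:

```python
# Hypothetical product-screening rules for a restricted nanofactory.
# All thresholds and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class ProductSpec:
    elements: set                     # chemical composition of the product
    energy_density_mj_per_kg: float   # stored energy per kilogram
    max_speed_m_per_s: float          # top speed of any moving part

def screen(spec):
    """Return 'deny', 'certify' (licensed operators only), or 'allow'."""
    if spec.elements & {"U", "Pu"}:              # composition rule
        return "deny"
    if spec.energy_density_mj_per_kg > 10.0:     # energy-density rule
        return "certify"
    if spec.max_speed_m_per_s > 50.0:            # speed rule
        return "certify"
    return "allow"

print(screen(ProductSpec({"C", "H", "O"}, 0.5, 1.0)))  # -> allow
```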

Particularly bleak in my book is the vast improvement in isotope separation technology that would become possible when dual-use, MNT-built industrial machinery is put to the challenge. There are over a dozen ways to enrich uranium, and many of the more advanced techniques are held back mostly by 20th-century materials and a lack of manufacturing precision and reliability.

Dr. Hall writes:

Note that a really universal constructor at the molecular level would, even under current law, require a bushel of different licenses to operate — one for each of the regulated substances it was capable of making. Sony is not going to be selling these things on the streets of Mumbai.

I worry that the DIY advocates will turn the tide of regulation with this one. For a device that can inherently make practically everything, picking out every item to exclude is much harder than simply allowing a wide range of things and introducing regulation only after some terrible accident happens. Because the vast majority of constructed objects will be entirely benign and helpful in an economic and humanitarian sense, the legislatures of the world will be thrown off guard, embracing an “open source” perspective that puts as much power in the hands of the people as possible. When it comes to software, I’m all in favor of open source, but when it comes to manufacturing actual objects that have a physical impact on my world, I’d prefer that not just anyone be allowed to manufacture just anything.

Even a device with highly specialized manipulators at the nanoscale could still produce a huge variety of products. For instance, those manipulators could be dedicated to creating nanoblocks: 100-nm blocks with a variety of pre-programmed structures and functions that could be combined in arbitrary patterns, like Legos. Specialized at the molecular level, thoroughly general at the person level.
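
A toy calculation shows why a small vocabulary of specialized blocks is still “thoroughly general at the person level”. Both numbers below are invented for illustration:

```python
import math

BLOCK_TYPES = 50   # distinct pre-programmed nanoblock designs (invented)
SITES = 1000       # block positions in even a small product (invented)

# Distinct arrangements = BLOCK_TYPES ** SITES; print its order of magnitude.
digits = SITES * math.log10(BLOCK_TYPES)
print(f"~10^{digits:.0f} possible arrangements from {BLOCK_TYPES} block types")
# -> ~10^1699: specialized manipulators, an astronomically general design space.
```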

If Dr. Hall means specialized as in “specialized to create dinnerware”, such over-specialization seems unlikely to me. There will be strong social and economic pressures in favor of generality. I don’t want to switch manufacturing machines every time I want to build an object in a slightly different category. Just as with computers, most nanofactories will be relatively general; the precise question is how general.

Dr. Hall then says, in reference to the notion of a benevolent AI singleton:

Anyway, there already is a “singleton” — the US government. It has clearly demonstrated a willingness to act to prevent even nuisance-level WMD by actors outside the currently-accepted group. (By nuisance-level I mean ones which pose no serious threat to topple the US from its dominant military position.) The notion of producing, from scratch, an entity, AGI or whatever, that would not only seriously threaten US dominance but depose it without a struggle seems particularly divorced from reality. (Note that the US military is the leading funder and user of AI research and always has been.)

But that is exactly what we are arguing. A “seed” artificial intelligence, an AI built specifically for self-improvement, could break away from its programmers as soon as it gains a threshold level of capacity for self-modification and for implementing real-world plans. In the same way that the Wright Flyer was, strictly speaking, many orders of magnitude less complex than a flying bird or insect, the first artificial intelligence may be many orders of magnitude less complex than a human mind and yet still capable of formulating useful theorems about learning, decision-making, and competition that allow it to materially enhance its own intelligence and capability to far above the human level.

Because an AI would not be limited by unitary identity (it could break itself into pieces to work on tasks), finite hardware (additional computing power could be rented through cloud computing), the need to rest (an AI could run 24/7/365 given sufficient electricity), a brain unintended for hardware-level self-improvement (nature has retained the same basic neural building blocks for over 400 million years), frustration or boredom, social needs, bodily frailty, short-term memory limited to roughly seven items, and hundreds if not thousands of other shortcomings of biological minds, an AI mind considered as smart as a 10-year-old could probably achieve a heck of a lot more than a 10-year-old in a similar position.

Essentially, the entire human species is at the same intellectual level in terms of cognitive capabilities. Even the least intelligent humans, unless they have brain damage, have greater cognitive capabilities than the smartest chimp. Our distinct level of cognitive ability is species-general and all we’ve ever known, so we tend to take it for granted. We fail to realize the solutions that an intelligence even slightly above ours would see, just as there are a million things that are obvious to us yet impossible for a chimp, or even a less intelligent human, to comprehend.

The central argument is that humanity is not special. Just as the Earth turned out not to be the center of the universe and humans turned out not to be created in the image of God, some humans may be surprised to find out that we aren’t at the center of the cognitive universe. We’re just another step on a ladder between worms and the great unknown. Call it the Copernican Revolution in cognitive science.

Getting AI up to the point of human-equivalent intelligence may be incredibly difficult, and take decades as well as hundreds of millions of dollars in distributed research. But once it is at that point, it is easy to imagine self-improvement scenarios where the practical power of an artificial intelligence quickly begins to exceed that of even the largest human collectives. Some relevant variables are named in my summary of last summer’s SIAI-funded research project.

It is classic anthropocentrism to say, “this human government is so powerful and mighty, how could it possibly be that this new species could exceed its capabilities?” Because, from the perspective of the new entity, humans are intellectually just a bunch of monkeys. Physically too: an AI can be in a million different places at once; a human, just one.

I am hardly the first person to suggest that AI could surpass humanity in its capabilities, or even overcome a major government without a struggle. The entire Singularity Summit event is based at least partially on that premise — the idea of an “intelligence explosion”, which originated at least as early as 1965 with the recently deceased I. J. Good. Most of society is at least familiar with the idea of runaway AI, and a sizable educated minority grants it a non-negligible probability in the coming century. Larry Page and Bill Gates are evidently among that minority, which is why Page helped fund Singularity University and Gates is such a big fan of Kurzweil. So is Congressman Brad Sherman, who has raised the issue in the US Congress.

Dr. Hall then writes:

It seems to me that if you can make a self-improving, superhuman AGI capable of taking over the world, you could probably make a specialized AI capable of running one desktop fab box. Its primary purpose is to help the 99.999% of users who are legitimate to produce safe and useful products. It doesn’t even have to outsmart the terrorists by itself — it is part of a world-wide online community of other AIs and human experts with the same basic goals.

It’s not that easy — nanofactories could come before any type of sufficiently advanced AI. Remember, in our analysis, a self-improving superhuman AI is not radically harder to create than a roughly human-equivalent seed AI — the latter would transform itself into the former in a relatively short period of time, not limited by human thinking/acting speeds or methods. As Drexler writes in Engines of Creation:

The engineering AI systems described in Chapter 5, being a million times faster than human engineers, could perform several centuries’ worth of design work in a morning.
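
That claim is straightforward to sanity-check. Assuming a “morning” of roughly four hours (my assumption, purely for the arithmetic), a million-fold speedup gives:

```latex
\[
4\ \text{h} \times 10^{6} = 4 \times 10^{6}\ \text{h}, \qquad
\frac{4 \times 10^{6}\ \text{h}}{8766\ \text{h/yr}} \approx 456\ \text{yr}
\]
```

That works out to several centuries, as stated.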

It is perhaps unfortunate that some thinkers have come to see claims about MNT and superhuman AI as interdependent, when it is possible for one class of claims to be right and the other wrong just as easily as for both to be wrong or right. As for myself, I tend to be more convinced that a human-equivalent self-improving AI would be able to empower itself rapidly than I am that reliable diamondoid mechanosynthesis will be implemented in nanofactory systems before 2030. As for when human-equivalent AI will come about, I would certainly prefer it to be before 2040, but I have absolutely no idea. On the issue of converting sand (silicon) into mind, what matters is not so much exactly when it happens as the magnitude of its impact. Instead of perpetually lurking around the human level of intelligence and capability, I would expect AIs to skyrocket in capability far past the human level, limited only by their consideration for the welfare of other beings (if such consideration is indeed present).

Comments

  1. “If MNT does become an issue in 2030 or 2040, then hopefully I will be one of the people that is solicited for ideas on how to handle it, partially based on my public analysis of the problem at such an early juncture.”

    – well said.

  2. Blade O'grass

    I am a giant thunder monster! And I have taken it upon myself to train the ants in my backyard with sugar and treats over by the back fence, and yucky bad-tasting stuff next to the house… I think me and the ants will be able to get along really well from now on!

    *one month later*

    I am a giant thunder monster! I have increased the ant population ten-fold! And I can no longer go into my backyard!

    So much for being Mr. Nice Guy. I am now thinking maybe I should kill ‘em all off with bug spray and fire!

    – –

    Just sayin’

  3. kurt9

    Forget about centralized authority. We don’t want it. I don’t want it.

    We’re into these technologies BECAUSE we want to get free from any kind of monopoly authoritarianism. We view the open source DIY approach as the best means for creating a decentralized libertarian society (it sure as fuck isn’t going to come about through the political system – we have to make it happen from outside the system).

    The whole purpose of these technologies (at least for me) is to get free of the system, Stewart Brand style. You know, the only long-term solution to the limits imposed by a finite planet is to get out into space (space colonization on a mass scale – both solar system and interstellar). Besides, these technologies will make space colonization a lot cheaper and easier to do than it would have been in O’Neill’s day.

    Death to all bureaucracy and centralized institutions.

  4. RQ

    “A friendly superintelligent machine could be another kind of singleton, assuming it was powerful enough that no other entity could threaten its existence or thwart its plans.”

    Not fault tolerant.

    By definition, no one else is capable of recovering from its errors.

  5. Presidents, Politicians & Bankers Forever

    ‘We own and rule Planet Earth also known as Planet Presidents-Politicians-&-Bankers, previously known as Planet Kings-&-Bankers and before that as Planet Just-Us-Kings.’

    They’ll own and rule the AI or IA when it comes. Just get used to it.

  6. Presidents, Politicians & Bankers Never Again

    Has anyone noticed that all that politics and ruling business is mind-bogglingly dumb, boring, and useless, and results in big piles of money somewhere and piles of bodies somewhere else?

    It’s a farce no matter where you look – North Korea, Iran, Chavezuela, Cubastro, or the US. Oh and the EU. And perhaps Japan too. Not sure. Perhaps the Nipponese got it right.

    Freedom of speech is another fat joke. You’re free to express your opinion. Especially as a journalist. Even in Russia. At least once.

  7. kurt9

    Good point. The purpose of DIY open source development of these technologies is to allow those of us who want it to get free of any kind of political coercion or having to be a part of some “bigger” whole. The word “progressive” always involves some kind of political control which, in turn, is a form of coercion. This is unacceptable.

  8. Ryan

    Getting free from bureaucracy and central institutions is fine as long as everyone wants to. I think we know that isn’t the case. Some people want very much to impose themselves on others, and won’t care that those others would prefer to just be left alone.

  9. michael vassar

    Pretty sure that Drexler thinks you CAN make missiles, guns, bombs, etc. out of proteins, or at least out of pyrite. Given his track record you should probably give him good odds on this.

  10. Hervé Musseau

    I guess it depends on what MNT can achieve.
    If it results in bulky, expensive, complex machines that require hard-to-obtain feedstock, then it will be the domain of governments and/or companies.
    If it remains with governments, then it probably means that it is a weapon. Not a very pleasant thought.
    If it is the domain of companies, then it can do a lot of things maybe faster and cheaper, and maybe things that weren’t possible with prior technologies. But it isn’t a game changer, either.
    If however it is cheap and easy, then it will become distributed throughout the world. There will be no stopping it, any more than stopping the internet. Hopefully then it will be open source. But it also means that it will come with dangers – more danger than music swapping.

  11. M

    The current scarcity paradigm will not change with the advent of MNT. Those who benefit from the system as it exists today, such as oil and energy interests and banking/finance, will not relinquish their influence over the population. People will still need money. They will be expected to pay full price for products made by their commercial nanofactories. At the same time, social strain will increase as more and more people lose their jobs to AI/automation (see Marshall Brain’s Robotic Nation). The concentration of wealth/power from these factors will be extraordinary. The sheeple will not be allowed to stop working (or begging) and live in leisure. People will start revolting against authorities when they see they’re living in a world of enforced, artificial scarcity. Lawmakers in turn will clamp down on personal freedoms, probably citing a fictional war on nanoterrorists as the necessity. False flags, anyone?

    Likewise, human enhancement and radical life extension technologies will not be made available to the masses. Governments will cite ethical considerations while politicians and other elites partake themselves behind closed doors.

    A strong AI, which removes humans from the decision-making process, is the only way the current paradigm of control and exploitation is going to stop. There’s always going to be people who feel a need to control the lives of other human beings. Artilects, as Hugo de Garis describes them (see “Building Gods” on Google Video), are our best hope against what may become a neo-feudalist global plantation where a technocratic elite dominates everyone else. I’ll take my chances with an autonomous AI, essentially an alien lifeform with unknown motivations, rather than politicians and other interests. We know what they’re capable of. They’ve done it in the past and they’ll do it again.

  12. Jeremiah Wood

    I partly agree with M’s post. However, you mention artilects, but neglect to mention cyborgs.

    In my opinion, cyborgs seem more likely. Will they become the “technocratic elite” you speak of? Perhaps. Or they may not, being augmented with such superior intellectual and cognitive faculties. Maybe they’ll just leave the un-enhanced to their own devices on Earth and seek out their own spaces in the universe.

    I also take exception to your statement: “There’s always going to be people who feel a need to control the lives of other human beings.” Quite a strong assertion. Somehow, I doubt that you know the future so well.

Trackbacks for this post

  1. Nanodot: Nanotechnology News and Discussion » Blog Archive » Replicating nanofactories redux
