Foresight Institute President J. Storrs Hall gave a response to my concern about what might be called a “hard nano take-off”. I said:
The first nanofactories will be both impressive (in their exponential qualities and complete automation of manufacturing) and unimpressive (their chemical inflexibility, possible cooling requirements, electricity consumption, limited initial design space, etc.) I predict they will be revolutionary enough that the first model may also be one of the most widely distributed. Unless there are serious restrictions on nanofactory self-replication, a near-exponential flood of nanofactories and nanoproducts will follow, flowing from the first system to cross that adoption tipping point.
I realize that I left out an important qualifier in this paragraph: I meant to say that the first commercial nanofactories will be impressive enough that they are widely distributed. These may be preceded by hundreds of experimental iterations, many of them microscopic. At first my qualification may sound somewhat tautological, but it’s not. Of course the first commercial version of a technology will be more widely distributed than the experimental versions, but by “widely distributed” I mean “very widely”, as in “iPod-level adoption rates”, if laws and corporate incentives permit. This would be highly uncharacteristic for any new technology — the first commercial automobiles, telephones, and computers were not very widely adopted for some time. In computer terms, this would be like your non-technical boss, house cleaner, grandmother, and New Age psychotherapist all adopting the Apple II shortly after it was released in 1977.
Part of the challenge here may be differing definitions. The common definition of “nanofactory” is a desktop, user-friendly system capable of building macroscale products using positional placement of individual atoms. Dr. Hall appears to be using the term to describe “any nanomachine that makes another nanomachine”, but having read the writings of the Center for Responsible Nanotechnology (CRN) for about five years, and having seen them use the term “nanofactory” thousands of times to refer to advanced desktop systems rather than nanomachines-building-nanomachines in general, I am specifically referring to the former.
In his post, Dr. Hall tells us that we’ll see gradual improvements over decades, and that the first nanofactories will be like ENIAC — “Very, very few people will have the skill to get anything useful out of it at all.” He says, “Early nanofactories will be cranky and experimental, expensive, require expensive inputs, be able to produce only very limited products, and be very lucky to replicate themselves before they break down.”
I disagree. If nanofactories work at all, they will be very powerful. A nanofactory would be a very complicated, “huge” thing. The Center for Responsible Nanotechnology compares the complexity of a molecular assembler to that of a Space Shuttle. I think the analogy would be apt for a nanofactory as well. We are talking about a miniature factory with more moving parts and individual computers than a typical $100 million modern factory today. Difficulties with the basic technology will manifest themselves in the pre-nanofactory stage, working with individual assemblers or small ensembles of assemblers. If you’ve made it all the way to nanofactory-level molecular nanotechnology (MNT), you’ve already jumped the primary technological hurdles.
A nanofactory would be a desktop system containing an enormous number of assemblers. If an assembler fits into a cube 200 nm on a side, just 1,000 cm³ (1/1,000 m³) is enough room for roughly 10¹⁷ assemblers. To get from one assembler to 10¹⁷, modular self-replication has to execute with a very high degree of reliability and consistency; otherwise the whole process falls apart. Think of an assembly line where one product jams, a hundred other products jam behind it, the drive motor burns out, and the whole thing is so tiny and unwieldy that attempting to repair it directly would be like conducting brain surgery in Mickey Mouse gloves.
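The assembler-count arithmetic above is easy to check; a short Python sketch, using only the figures given in the text (a 200 nm cube per assembler, 1,000 cm³ of working volume):

```python
# Rough check of the assembler-count estimate.
assembler_side_m = 200e-9                    # 200 nm cube per assembler
assembler_volume_m3 = assembler_side_m ** 3  # 8e-21 m^3 each

factory_volume_m3 = 1e-3                     # 1,000 cm^3 = 1/1,000 m^3

assemblers = factory_volume_m3 / assembler_volume_m3
print(f"{assemblers:.2e}")                   # 1.25e+17, i.e. roughly 10^17
```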
If you were a human at the nanoscale and an assembler module were a brick, a “mini-nanofactory” just a tenth of a millimeter tall would already be 500 bricks high, like a building 30 m (100 ft) tall. Any fundamental challenges with replication reliability will have been resolved long before reaching even this stage. There are simply too many moving parts for micromanagement to be possible — either the “code-level” operations are automated or they haven’t been established yet.
A desktop nanofactory would be like a 20-mile tall cathedral, using the brick analogy. If you can build one, it’s a good bet that your bricklaying strategy is pretty sound. You can’t even repair problems if they appear during the construction process or immediately after — if there is one problem, it is bound to be replicated countless times throughout the architecture. Unless the error rate is less than one in a trillion, you are going to have millions or billions of the same error throughout the nanofactory. Redundancy and automated error-checking/deactivation of corrupted modules may help to a certain extent here, but only so much.
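The brick analogy and the duplicated-error estimate scale roughly as follows; a minimal Python sketch, assuming a 10 cm desktop height and a ~6.5 cm human-scale brick course (neither figure is stated in the text), with illustrative per-module error rates:

```python
# Scaling the brick analogy: one assembler module = one 200 nm "brick".
brick_m = 200e-9                     # 200 nm module, as above

# A mini-nanofactory a tenth of a millimeter tall:
mini_bricks = 0.1e-3 / brick_m       # 500 bricks high

# A desktop nanofactory; 10 cm height assumed here for illustration:
desktop_bricks = 0.10 / brick_m      # 500,000 bricks high

# At a human scale of ~6.5 cm per brick course (assumed), that stack is:
human_scale_km = desktop_bricks * 0.065 / 1000
print(f"{human_scale_km:.1f} km")    # ~32.5 km, roughly 20 miles

# Duplicated-error estimate: expected flawed copies among ~10^17 modules
# at a few illustrative per-module error rates.
modules = 1e17
for error_rate in (1e-11, 1e-8):
    print(f"error rate {error_rate:g}: ~{modules * error_rate:.0e} flawed modules")
```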
So far I’ve made just one point — 1) replication reliability. Next, we can look at expense.
95% of the investment costs in building a nanofactory will go into building nanoscale machines, including an assembler, making them work reliably, putting them into a cooperative, redundant architecture that works without letting molecular contamination in or letting it escape its internal confines, and so on. These are all low-level problems. If they aren’t all pretty much solved, you are going to get precisely nowhere. The cost of building the first nanofactory will be immense. But if you have a basic nanoscale modular architecture that can reliably build itself up from the micron level to the centimeter level, then it’s not going to matter whether you are building 100 centimeter-scale units or a million. The salient scaling issues are at the nanoscale and microscale. By the time you’re at the macroscale, the system has to be completely automated, and hence likely inexpensive.
The number one expense in any product comes from human input, attention, and craftsmanship on a per-unit basis — the less you need, the cheaper it is. Desktop nanofactories would need to be almost completely automated, or they wouldn’t exist in the first place. You cannot micromanage every one of 10¹⁷ fabrication events and expect to leave the workbench any time in this geologic eon. The vast majority of direct human control, trial-and-error, and experimentation will likely take place in the first few thousand or million instances in which scientists use assemblers to build additional assemblers — not in the first billion or trillion or quadrillion (those will be automatic by necessity). Some scaling problems could appear at the microscale rather than the nanoscale, but these will be trivial in comparison to the initial nanoscale challenges that must be overcome (we already have experience working with moving parts at the microscale, but very little at the nanoscale). The macroscale is our territory — we are familiar with it. The nanoscale is not.
Dr. Hall mentioned that early nanofactories will use expensive inputs. If he is talking about laboratory prototypes that you need a microscope to see, then maybe so. We won’t have the advantage of precision manufacturing to purify feedstock. But there is little indication that feedstock will be inherently expensive. In an interview I conducted with Robert Freitas, he listed acetylene as a good diamondoid mechanosynthesis (DMS) feedstock molecule from an efficiency perspective, and propane as another. Both are dirt cheap. Even if DMS doesn’t work out, the feedstock will have to consist of simple molecules, because they’ll be easiest to split up into atomic components for deposition. None of these are expensive. The theoretical cost would come from processing the feedstock to extremely high purities. Again, this hurdle will either be crossed at the prototype stage, or not at all. The earliest nanofactories will be extremely limited in their chemical palette — probably to only one or two elements. To work at all, they must be extremely effective at handling these elements, and will probably be useless for handling anything else.
So, I addressed 1) replication reliability, 2) general expense, and 3) cost of feedstock, generally based on the argument that these obstacles will be overcome at the micro-nanofactory stage as a prerequisite for having a desktop-sized nanofactory at all.
Maybe diamondoid mechanosynthesis nanofactories won’t be developed until strong AI comes along and tells us how to do it. Or maybe they are physically impossible, and we will need other approaches, like proteins and organic chemistry, to do molecular manufacturing. Only time will tell. But if DMS-based nanofactories are developed and work at all, they will be powerful and cheap.
More on this from others: