Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

31 May 2007

Why Utilitarians Should Focus on Technology

Classes on utilitarianism rarely encourage students to keep up with the latest in science and technology, much less study it specifically. But they definitely should. Our society is in the midst of a technology-dominated era, in which new inventions have a much bigger impact on human welfare than newly elected politicians.

Our minds are programmed to overfocus on politics and underfocus on technology. The reason is that our ancestors evolved in an environment where the political scene was constantly changing while technology stayed roughly static. Today, both change rapidly, but technology has the greater impact.

The classical example (it should be, anyway) is the Haber-Bosch process, the chemical process by which we manufacture fertilizer from nitrogen in the atmosphere. Without it, billions of people would never have been born, because food would be more expensive and scarce: starvation would be rampant and fewer people would choose to have children. The agricultural industry would be reliant on natural nitrate deposits, such as Chilean saltpeter and Peruvian guano, to provide fertilizer for food. If it weren't for the Haber-Bosch process, these deposits would probably have run out some time ago.

If we suppose that these billions of people are leading lives worth living, then the invention of the Haber-Bosch process was an act of tremendous positive utility, for which two central individuals, Fritz Haber and Carl Bosch, deserve credit. Logic might dictate that these two men should be household names, but they aren't, because we take their innovations for granted. But if the Haber-Bosch process suddenly stopped working, millions of people would starve, or be dependent on their governments to pick up the huge check for the acquisition of alternative fertilizers.

Today, only a minority of philosophers are techno-literate. They can say, "we should be utilitarian" in the abstract, but are they well-qualified to make the actual decisions about which technologies to advocate and which to ignore? We might optimistically assume that this is automatically taken care of by the market, and while the market is a much better economic optimizer than many human decision-making agencies, it is short-sighted. A consumer buying a product from a certain company is typically not buying it to contribute to future research that will bring economic benefit - they are only focused on the product at hand. Vocal activism from scientists, philosophers, and policy-makers is necessary to accelerate development of beneficial technologies.

Some politicians are on the right track when they say that the right way to address global warming is through technology. Changing consumption behavior on the personal level is radically harder than simply introducing a new technology that is inherently more efficient or less environmentally destructive. But the politics is more emotionally engaging, so we overfocus on that.

To get past the politics and become capable of debating the technology on a finer level, utilitarians should take the time to read up on it.

25 May 2007

A Brief History of Self-Replicating Machines

We know that reprogrammable self-replicating systems are possible because they're swarming all around us. Every living thing is a reprogrammable self-replicating system. DNA is the program, asexual or sexual reproduction is the means of replication. But there's still more work to be done before we can create artificial self-replicating systems. Let's take a look at the history of the concept in recent times.

The concept of self-replicating automata was first formalized by John von Neumann, one of the greatest computer scientists and mathematicians of the 20th century. His Universal Constructor was virtual rather than physical, and can be seen as a conceptual ancestor of the computer virus. Von Neumann argued that the most effective way of performing massive mining operations, such as mining an entire moon or asteroid belt, would be to use self-replicating machines, taking advantage of their exponential growth. His magnum opus on the topic, Theory of Self-Reproducing Automata, was published posthumously in 1966.

After von Neumann's work, the field of self-replicating systems was dormant for a decade and a half. It was revived in 1980 at the request of President Jimmy Carter, at a cost of $11.7 million. This was the landmark "Advanced Automation for Space Missions" NASA summer study, conducted by Robert Freitas, among others. The study, which focused on lunar robotics, concluded that "there are several alternative strategies by which machine self-replication can be carried out in a practical engineering setting", and that "the virtually cost-free expansion of mining, processing, and manufacturing capacity, once an initial investment is made in an autonomous self-replicating system, makes possible the commercial utilization of the abundant energy and mineral resources of the Moon". Unfortunately the proposal was quietly declined and passed into obscurity, with negligible media coverage. Freitas still works on the tools needed to build artificial self-replicating systems today, through his Nanofactory Collaboration project.
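
To get a feel for why exponential self-replication changes the economics here, consider a back-of-envelope model. The figures below (seed factory mass, one-year replication time, a roughly 10^15 kg asteroid target) are my own illustrative assumptions, not numbers from the NASA study:

    # Toy model of exponentially growing self-replicating factories.
    # All figures are illustrative assumptions, not values from the 1980 NASA study.
    SEED_MASS_KG = 1e5             # assume a 100-ton "seed" factory
    REPLICATION_TIME_YEARS = 1.0   # assume each factory copies itself once per year
    TARGET_MASS_KG = 1e15          # roughly the mass of a ~1 km asteroid

    factories, years = 1, 0.0
    while factories * SEED_MASS_KG < TARGET_MASS_KG:
        factories *= 2             # every factory builds one copy per generation
        years += REPLICATION_TIME_YEARS

    print(f"{factories:.2e} factories ({factories * SEED_MASS_KG:.2e} kg of machinery) "
          f"after {years:.0f} years")
    # ~34 doublings suffice; a single non-replicating factory of the same size never comes close.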

In the early and mid-80s, an MIT graduate student named Eric Drexler made waves with his theories of nanoscale assemblers and self-replicating nanobots. His landmark 1986 book, Engines of Creation, has since been translated into six languages and serves as a standard reference for nanotechnology discussions. In 1992, he authored a more technical book, Nanosystems, which goes into great detail on the feasibility of self-replicating molecular assemblers. In the 15 years since the book was published, its critics have yet to find a single technical error. This was the first book to show that ribosomes are not the only theoretically possible molecular-scale self-replicating assemblers. It also showed how massively parallel manufacturing by molecular assemblers could be used to build human-scale products to atomic precision, without overheating, in durations measured in hours. The main challenge would be building the first molecular assembler; from then on, the exponential power of self-replication would take over.

Most recently, in 2004, Tihamer Toth-Fejel determined that "the complexity of a useful kinematic self-replicating system is less than that of a Pentium IV". This conclusion came out of a NASA Institute for Advanced Concepts study, "Modeling Kinematic Cellular Automata". In 2003, for their Timeline for Molecular Manufacturing, the Center for Responsible Nanotechnology (CRN) argued that the complexity of a self-replicating molecular assembler would be similar to that of the Space Shuttle. In his October 2003 paper, "Design of a Primitive Nanofactory", CRN Director of Research Chris Phoenix explained the development path between the first molecular assembler and desktop nanofactories in great detail, showing how tiny 200 nm blocks could be combined in enormous numbers to create human-sized products made out of diamond, using nothing but simple hydrocarbons for feedstock.
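
To get a sense of the numbers behind "enormous", here is a quick block-count estimate. The 1 cm product cube is my own illustrative choice, not a figure from Phoenix's paper:

    # How many 200 nm building blocks fit into a small macroscale product?
    # The 1 cm product size is an illustrative assumption, not from Phoenix's paper.
    block_edge_m = 200e-9     # 200 nm blocks
    product_edge_m = 0.01     # a 1 cm cube of product, for illustration

    blocks_per_edge = product_edge_m / block_edge_m           # 50,000 blocks per edge
    total_blocks = blocks_per_edge ** 3
    print(f"{total_blocks:.2e} blocks")                        # ~1.25e+14 blocks
    # Hence the emphasis on massively parallel, hierarchical assembly: no serial
    # placement process could combine blocks in these quantities in hours.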

Today, the prospect of reprogrammable self-replicating machines is all but ignored by mainstream science and engineering. A dozen or so researchers continue to pursue the goal, hampered by a lack of funding and popular support. The community is small enough that I've met most of the involved individuals personally and continue to correspond with many of them regularly. Most of these researchers believe we'll be able to build a molecular assembler sometime before 2025.

For much more on this, see my other posts on the topic. For a comprehensive view on the history and theory of self-replicating kinematic machines, see the book of the same name, Kinematic Self-Replicating Machines. Incidentally, First Class members of the Lifeboat Foundation get this book sent to them free of charge.

25 May 2007

Denying Superintelligence

Quite a few individuals react to the idea of qualitatively smarter-than-human intelligence, AI or otherwise, with extreme skepticism and derision. My guess is that there are four possible reasons for this, which different people display in different combinations and intensities.

The first is the folk theory that intelligence is a light bulb: either it's on or it's off, with no in between. If you have it, it varies only in degree, never qualitatively. Humans have intelligence and animals don't, which is why it's okay to raise animals for food, for instance. Intelligence and subjective consciousness go hand in hand.

The second is the argument from divine privilege. Man, being made in God's image, has been given the gift of reason. We cannot magnify this gift on our own any more than we can engineer a machine that turns us into angels. This "gift of reason" argument is what I was taught by my parents as a child.

The third is technological skepticism. For example, my grandfather, who is an atheist, believes it will be centuries before we understand the brain in enough detail to manipulate it significantly. This skepticism derives partly from a linear, intuitive view of technological progress, and partly from a pseudo-spiritual worship of brain complexity.

The fourth is outright denial based on fear. Some people associate superintelligence with heartlessness, boring rationality, ruining all the fun, threatening to replace us, and so on. This is primarily based on fictional portrayals. There are dozens of films and books in which superintelligences are the bad guys. Astonishingly, the dumber good guys always seem to triumph in the end.

Can you think of any others?

Filed under: intelligence
22 May 2007

What Smartness Means

Bacterial cells have little organelles in them called mesosomes. According to the Wikipedia article, "Mesosomes may play a role in cell wall formation during cell division and/or chromosome replication and distribution and/or electron transfer systems of respiration. Electron transport chains are found within the mesosome producing 32-34 ATP. They act as an anchor to bind and pull apart daughter chromosomes during cell division." Various journal articles, some behind subscription walls and some free, go on and on about the possible functions of these small organelles in bacterial division, respiration, etc. Mesosomes were originally discovered in 1960.

Small problem. Sometime in the mid-70s, scientists realized that mesosomes weren't even real. They were just artifacts of the chemical fixation process used to prepare cells for electron microscopy; cells prepared by freeze-fracture techniques showed no mesosomes at all. They were little intrusions produced where the plasma membrane and cell wall came apart from the stress of fixation. So much for that idea.

If you figure that biologists get paid something like $60,000 per year, and it takes a couple months to do research and write a paper, and maybe something like 500 papers were published on mesosomes before they realized that what they were studying was pure bunk, then the biology community as a whole burned through ~$5 million chasing a ghost.
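
For what it's worth, the arithmetic behind that estimate is simply:

    # The same back-of-envelope estimate, spelled out; all inputs are the rough
    # guesses from the paragraph above, not real accounting data.
    salary_per_year = 60_000     # rough biologist salary, USD
    months_per_paper = 2         # time to research and write one paper
    papers = 500                 # rough count of mesosome papers

    cost_per_paper = salary_per_year * months_per_paper / 12    # ~$10,000
    total_cost = cost_per_paper * papers
    print(f"~${total_cost:,.0f}")                                # ~$5,000,000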

What does this have to do with the subject matter of this site? I often talk about intelligence enhancement and the recursive snowballing effect that I and many others predict would occur soon after its development. If a sufficiently intelligent biologist had been on the research team that first reported "mesosomes" in 1960, they could have recognized them as artifacts - for instance, by replacing the water used in the fixation process with an inorganic solvent - and all this confusion would have been avoided. Our society has a bias against being too hard on people for these little mistakes, because at least they tried. People would be pointing fingers non-stop if we always judged past events with the knowledge of hindsight. And we're only human, right?

The magical difference that increased intelligence produces is getting it right the first time. It's very tough for us to imagine a slightly-smarter-than-human intelligence that constantly solves difficult problems right off the bat, because we've never seen one. If the smartest human we can throw at a problem is just about as good as anyone else, then we project the quality of hardness onto the problem itself - not onto the recognition that "human intelligence isn't good enough". This is the mind projection fallacy. But what we naively label "impossible" might be "easy" even to a mild version of superintelligence, say a human being with an artificially expanded neocortex. We may say, "this problem inherently requires five years of research!", but a superintelligence walks along, says, "no it doesn't", and solves it in five minutes. We're too quick to label things extremely difficult or impossible; many would argue we have to be, because admitting otherwise would cost us our self-respect as a species.

It seems like only transhumanists are capable of really stepping outside of that box of Homo sapiens and saying, "what if we were really and truly fundamentally smarter?" If more people could do this, then pursuing intelligence enhancement technology might become a national or even global priority.

Filed under: intelligence
20 May 2007

Lunar Nanofactory

This is the title image of Chris Phoenix's 2005 NASA Institute for Advanced Concepts study, "Large-Product General-Purpose Design and Manufacturing Using Nanoscale Modules". I had missed it before because it's not on CRN's website, but it's a full 112 pages and looks like a fascinating and technically detailed document.

19 May 2007

Closing the Loop

I made the above image while idly listening to the podcast mentioned previously. It describes the Singularity idea pretty straightforwardly. If technology can be used to improve intelligence, even a little bit, then that will lead to further advances in intelligence enhancement technology, and so on, until there are superintelligent gods, right there on our front porch. Thus, it's counterproductive to work on the really big projects ourselves, when we can 'simply' invent intelligence enhancement technology, use it to make ourselves smarter, and then use our superintelligence to much more effectively pursue them.
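
The loop itself is easy to caricature in a few lines. The sketch below is a toy model with made-up numbers; the only point it makes is that when each increment of intelligence makes the next increment easier to engineer, growth compounds instead of staying linear:

    # Toy model of the intelligence-enhancement feedback loop.
    # The baseline level and per-round gain are made-up illustrative numbers.
    intelligence = 1.0        # human baseline, arbitrary units
    gain_per_round = 0.10     # assume each round of enhancement improves intelligence by 10%

    for round_num in range(1, 11):
        # Smarter designers engineer proportionally bigger improvements next round.
        intelligence *= 1 + gain_per_round
        print(f"round {round_num}: intelligence = {intelligence:.2f}")
    # Constant absolute returns would give linear growth; returns proportional to the
    # current level give compounding growth - the "closing the loop" picture.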

Skepticism about the idea, explicit or implicit, generally takes the form that human beings are pretty much the smartest and fastest thinkers that can possibly exist, and that intelligence enhancement technology will therefore only provide tiny gains. Considering how far intelligence has come since the beginning of life on Earth, I think it's pretty bold to suggest that we're the endpoint of the process of intelligence improvement. Some also think it's "betraying humanity" to advocate superintelligence; I personally love humanity a lot, but I think we should avoid xenophobia about greater intelligence. The world needn't be a zero-sum place where humans automatically lose just because a smarter species shows up.

In the past, the process was incredibly slow, because evolution takes millions of years to make appreciable changes. This time, the process will be incredibly fast, partially because minds are substrate-independent and can be cognitively accelerated by better hardware, but also due to the qualitative smartness factor.

Filed under: singularity
19 May 2007

Publicity and Such

My interview with RU Sirius has been transcribed and posted online, for your skimming pleasure. This was the first time I got to meet RU in the flesh, after having bought his 1992 book, Mondo 2000: A User's Guide to the New Edge, at a garage sale when I was 16. I think he is pretty cool, and I like the way he's been giving publicity to numerous transhumanists and intelligent futurists lately.

In other news, Eliezer Yudkowsky was interviewed recently by Cameron Reilly, an Australian podcast mogul. He's also interviewed other familiar people you may have heard of, like Aubrey de Grey, Ray Kurzweil, etc. You can find the links to those other interviews right there on the page.

Filed under: interviews
19 May 2007

Possible Views on the Future of AGI

Obviously, different people have different views on the future of AGI (Artificial General Intelligence) and its policy consequences for us in the here and now. They depend primarily on two variables: the power and controllability of advanced AGI. This produces four rough domains of opinion:

  1. Low power, low controllability
  2. Low power, significant controllability
  3. Great power, low controllability
  4. Great power, significant controllability

The low power, low controllability group are the human exceptionalists, AI denialists, and technology skeptics. They don't believe powerful AI can be created, because there's something special about humans that can't be duplicated in machines; furthermore, software (including AI) is unwieldy and difficult to get a handle on. This group emphasizes that we should not rely too much on technology, and that we need to maintain a low-tech infrastructure to cover our asses in case the technology base somehow falls out from under us. You might think this is the view of the old lady you see at the supermarket, or the survivalist stockpiling his shack in the woods with handguns, but actual philosophers believe it too.

The low power, significant controllability group are the human supremacists, who see economically valuable applications for AI but don't believe it will become so powerful as to surpass us in all areas. Because advanced AI will be controllable in their eyes, they welcome the technology as long as it is nicely integrated within the preexisting human system and social structure. I also call this the Jetsons view because their belief is that future AIs will behave similarly to Rosie from the Jetsons, useful and subservient, and not intimidating or truly autonomous. Many transhumanists uncomfortable with the implications of superior AI take this route, as do many science fiction authors.

The great power, low controllability group are the de Garises, and to a slightly lesser degree the Moravecs. I whimsically refer to this as the gigadeath view, building on Hugo de Garis' term: this group thinks the promise of powerful advanced AI will create a deep and profound divide among human beings, leading to an all-out war between AI enthusiasts and traditionalists in which billions die. In this view, programming benevolent AIs is impossible, because as soon as these machines become superintelligent, all of their previous programming will be discarded outright. Presumably, then, we must merge directly with these AIs to make it into the future, and everybody else is out of luck. This view is pretty prominent in Hollywood, and among a few philosophers.

The great power, significant controllability group primarily originates with Eliezer Yudkowsky of the Singularity Institute. As such I will call it the SingInst view. The SingInst view acknowledges that after a certain point, AI will become self-improving and radically superintelligent and capable, but emphasizes that this doesn't mean that all is lost. According to this view, by setting the initial conditions for AI carefully, we can expect certain invariants to persist after the roughly human-equivalent stage, even if we have no control over the AI directly. For instance, an AI with a fundamentally unselfish goal system would not suddenly transform into a selfish dictator AI, because future states of the AI are contingent upon specific self-modification choices continuous with the initial AI. So, if the second AI is not the type of person the first AI wants to be, then it will ensure that it never becomes it, even if it reprograms itself a bajillion times over. This is my view, and the view of maybe a few hundred SingInst supporters.

Filed under: AI, risks, singularity
18 May 2007

What Will the First Nanotechnology Products Be?

This is a question of some interest, which gets kicked around in the Drexlerian nanotechnology community from time to time. The Nanofactory Product Catalog is one attempt at making a list. This CRN blog post has some comments that scratch the surface of it a bit. Let me know in the comments if you can find anything else.

The first assumption is that the nanofactory in question can only build carbon-based structures. So, for example, conventional PV solar cells wouldn't be allowed, because they require silicon. Anything using amorphous carbon, graphite, diamond, carbon aerogel, glassy carbon, or carbon fiber is fair game. For simplicity and to avoid arguments, I do not assume the availability of carbon nanotubes or other fullerenes. The products listed below are a combination of those suggested at the linked sources and my own ideas. As always, I invite comments and additions.

Keeping these requirements in mind, I put future MNT products into three categories: 1) products or structures composed only of filled volumes or empty space (air, water, or vacuum), with no moving parts, 2) products or structures based on the six simple machines (inclined plane, wheel and axle, lever, pulley, wedge, screw) and combinations thereof, 3) products to be integrated with other chemicals, advanced electronics, smart functionality, etc.

Group 1:

  • walls, beams, walkways, domes, trusses
  • furniture, dishes, cutlery
  • large diamond sculptures
  • acoustic and thermal absorbers
  • industrial blades and sandpaper
  • windows, lenses, bulbs
  • enclosures, containers, barriers
  • high-strength nets and cages
  • carbon fiber tarps and sheets
  • ropes and draglines
  • hulls, chassis, armor
  • terrestrial and marine platforms
  • industrial capillary tubing
  • conventional tubing and conduits
  • insulators and heat sinks
  • highly efficient greenhouses
  • suspension bridges
  • artificial firewood
  • subterranean or suboceanic tunnel walls
  • missile shields, diamondoid spikes
  • water filtration systems

All of the above products have relatively little design complexity; an engineer could sit down and design some of them in an afternoon. Pre-MNT designs will be ported to carbon-based designs quite easily and quickly, suggesting these products will be the first to be built, and in large quantities. They are also the least ethically problematic, and as such are the most likely to win legal approval for construction. If feedstock and energy requirements are low, there will be an economic incentive to take full advantage of the new manufacturing technology. For instance, capillary tubing could be used to extract more oil from underground deposits previously labeled depleted. One possible concern is the creation of products faster than they can be recycled. Diamond will not melt in lava, for instance, so dedicated recycling facilities would be a necessity.

Group 2:

  • windmills, waterwheels, flywheels, turbines
  • doors, hinged compartments
  • basic medical tools
  • dish/Stirling solar power plants
  • dams and canal locks
  • exaflop desktop rod logic computers
  • terapixel mechanical displays
  • sunshades with variable opacity
  • powerful pumps and large reservoirs
  • mechanical sorting machines
  • cranes and other construction equipment
  • large tracked vehicles
  • longwall mining machines
  • distilleries, heat exchangers, cooling towers
  • powerful subterranean drills
  • unfolding tents and domes
  • compressors and compactors
  • high-speed centrifuges
  • supersystems for carbon sequestration
  • lathes, drills, milling machines
  • large, solar/steam-powered Rube Goldberg machines
  • mechanical mines and spears

The group 2 products are a little bit more ethically problematic, as weapons and other force-projecting devices begin to pop up in this class. However, they are also some of the most useful and demand for these machines will be high. Limits will need to be set for power and energy densities, as well as size and weight. Large-scale construction projects will create significant thermal and acoustic pollution, with unknown consequences.

Group 3:

  • trains, planes, ships, and automobiles
  • superships, aircars, airships, spaceships
  • high-power motors, crankshafts, and pistons
  • rail guns and mass drivers
  • autonomous robotics
  • large-scale reflectors and mirrors
  • ubiquitous surveillance systems
  • huge OLED screens and other electronic displays
  • advanced laptops, palmtops, and wearables
  • nanoscale sensors and optic transmitters
  • prosthetics and cybernetic implants
  • sub-dermal heaters and coolers
  • high-resolution optical scanners (lidar)
  • advanced optics, i.e., phase array optics
  • launch ramps and trusses
  • high-performance nuclear power plants
  • supertall (50 km+) compressive structures
  • city-sized climate control systems
  • utility fog with basic swarm intelligence
  • holodeck-like play environments
  • respirocytes, chromallocytes, etc.
  • many more I haven't thought of

The group 3 products are a bit more complicated to design, so it may be a matter of weeks, months, or even a couple years before conventional product designs are ported into reliable carbon-based versions. If it takes a while to develop nanofactories that can work with materials besides carbon, then manufacturing polyelemental products will require a multi-step fabrication process. If a significant portion of the product, such as structural components, can be built in a nanofactory, then this will drastically simplify the process and reduce costs, but the need for exotic materials such as low-temperature superconductors or delicate electronics will ensure that the more advanced products will depend on centralized manufacturing schemes, at least for a while. The primary defining characteristic of group 3 products is a substantial quantity of non-carbon material.

Regarding integration of carbon nanotubes into products, there is room for a lot more functionality, but I'm avoiding CNTs for now because they're cutting-edge science and their bulk properties are poorly characterized. Notably, CNTs integrated into diamondoid products would allow much better electronic properties, improvements in strength, and the introduction of flexibility. Regarding regular diamondoid structures vs. CNT-integrated structures, Mike Deering writes, "Wet biological nanotech is very complicated because of all the different kinds of chemical bonds involved and difficult to control precisely because it is rather delicate. Vacuum diamondoid, fullerene, and carbon nano-tube (CNT) assembly is much simpler by comparison, almost all carbon-carbon bonds, which are the strongest bonds in chemistry, resulting in extreme stability. (Note: diamondoid nanotechnology is understood to include Lomer dislocation non-cleaving plane synthetic diamond, fullerenes and CNTs.) Consequently, the first nanofactories will produce solely inanimate, near chemically inert, almost indestructible products. These products will have capabilities which include structural, electronic, mechanical, optical, and computational capability. But don't think that you can't have soft textures and flexible materials with diamondoid construction. Nanotubes are flexible. A properly designed assemblage of CNTs can achieve any macroscopic physical characteristics desired, while maintaining near indestructibility. Producing the sex slave of your dreams solely comprised of diamondoid nanotechnology, indistinguishable from the biological analog, is merely a design challenge, not a technology limitation. On the other hand, a diamondoid technology nanofactory can't produce a ham sandwich."

I would welcome confirmation of Deering's comments on CNTs from someone knowledgeable about materials science.

18 May 2007

Michael Vassar on RPOP ‘Slaves’, AI vs. Human Uploads

While browsing the SL4 mailing list archives, as I am wont to do, I ran across this post by Michael Vassar that I thought made a lot of good points in a small space. It was in response to a couple of people voicing ethical concerns that AI boxing (sandboxing an AI from the outside world for testing purposes) is always unfair to the AI. Vassar, myself, and many others believe that it should be entirely feasible to create an AI that is a self-improving optimization process in a general sense - something that manipulates matter into a target state - without requiring consciousness, the experience of pleasure or pain, or the like. In this same sense, evolution is one particular optimization process, without anthropomorphic qualities. In the future, it may be worthwhile to create AIs that are conscious and human-like, but the point here is that they don't need to be.

On to the post:

~~~

Robin and Phil: I know it feels liberal, reasonable, fair, logical, unselfish, unbigoted, and in every way moral to extend ethical consideration to a GAI. I also know that as a species, our greatest ethical regrets are the countless times when we withheld ethical consideration from our fellow human beings, and that we have a long way to go before we overcome the tendencies which make us vulnerable to such regrettable actions. However, concerns about mistreating an AI, enslaving it or whatever, reflect deep anthropomorphic confusion.

We are not talking about containing an organism with an evolutionary past, selected from the search space by the removal of trillions of non-ancestors who failed to crave freedom. We are not even talking about an organism composed of countless agents, where belief is the interaction of excitatory "reward" and inhibitory "punishment" on many levels of organization. We are talking about an organism with no cognitive structures onto which to attach concepts of "reward", "punishment", "disappointment", "pain", "suffering", "frustration", "freedom", "injustice", or any of the other evolved salient patterns which we call values. These terms are no more properly attached to the sort of transparent AI SIAI favors than they are to "evolution", "the economy", or "the government". We are talking about a Really Powerful Optimization Process, and it seems possible to me that this is a case where using that language, RPOP, rather than AI, will greatly improve thinking.

The universe is FULL of things which may merit ethical consideration and do not yet receive it, from children to animals to lower level mind-like processes taking place in our own brains, possibly including structures very loosely analogous to Freudian concepts, or to our models of other human beings and of ourselves. It is conceivable that when we better understand ourselves we will identify other such things which I do not yet even suspect warrant such consideration, but to guess that a RPOP is one of those things makes no more sense than to guess this of existing software, and is in fact somewhat less justified than moral consideration given to the discarded programs produced by directed evolution, especially directed evolution of neural nets.

I am not at all suggesting that all AI development strategies can be pursued without the risk of causing harm to digital beings. The construction of an AI by reverse engineering of the human brain, as Kurzweil advocates, would be almost certain to be preceded by numerous aborted attempts at its goal prior to success. Partial minds would be built and studied, and their evolved structures would interact with their simulated environments in ways which corresponded to thousands of different exotic varieties of suffering. AIs of this sort would be, in many ways, far less dangerous than the transparent AIs recommended by SIAI. When thinking about them, anthropomorphic thinking would work. They would not suddenly display dazzling and unexpected new abilities which could be fully utilized with mere gigaflops of processing power. They would not be natives to the world of code, nor naturally enabled to modify their own workings. Unfortunately, they would not, ultimately, solve our problem. The fact that they can be built would not make normative reasoning systems impossible. The singularity would still beckon, and AIs modeled on our minds would be no more likely to make the ascent in a controlled and Friendly fashion than we would. Less actually, for many reasons including reasons analogous to those discussed here. There is also the substantial risk associated with any such AIs being terribly insane for biological and environmental reasons.

~~~

Many relevant concepts in this post.

Filed under: AI, singularity
17 May 2007

Radical Discontinuity Does Not Follow from Hard Takeoff

From Nick Bostrom's "Ethical Issues in Advanced Artificial Intelligence":

Emergence of superintelligence may be sudden.

It appears much harder to get from where we are now to human-level artificial intelligence than to get from there to superintelligence. While it may thus take quite a while before we get superintelligence, the final stage may happen swiftly. That is, the transition from a state where we have a roughly human-level artificial intelligence to a state where we have full-blown superintelligence, with revolutionary applications, may be very rapid, perhaps a matter of days rather than years. This possibility of a sudden emergence of superintelligence is referred to as the singularity hypothesis.

I don't think "singularity hypothesis" is the best phrase to describe this, because of the dozens of meanings associated with the word "Singularity" already, and this particular meaning being a narrow slice of those. The classic term, and the one which I prefer, is hard takeoff.

Many people believe a hard takeoff is likely because a "human-equivalent AI" would in fact have human-superior capability. An AI with roughly human-equivalent, or, we might say, "human-similar" intelligence (to account for variations in domain competencies between humans and the first AI) would in fact have a much higher practical intelligence. Note that "human-similar" is a qualifier pertaining only to cognitive capability - whether or not the AI can solve hard problems in a complex, real-world environment. The phrase does not pertain to motivations, worldview, or habits. An AI could be "human-similar" in intelligence while having a motivational system and modus operandi more foreign than any alien species described in any sci-fi book, ever.

So what cognitive and practical advantages would a human-similar AI have over a human actor? From "Relative Advantages of Computer Programs, Minds in General, and the Human Brain":

More design freedom, including ease of modification and duplication; the capability to debug, re-boot, backup and attempt numerous designs.

The ability to perform complex tasks without making human-type mistakes, such as mistakes caused by lack of focus, energy, attention or memory.

The ability to perform extended tasks at greater serial speeds than conscious human thought or neurons, which perform approx. 200 calculations per second. Computing chips (~2 GHz) presently have a 10 million to one speed advantage over neurons.

The in principle capacity to function 24 hours a day, seven days a week, 365 days a year.

The human brain can not be duplicated or "re-booted," and has already achieved "optimization" through design by evolution, making it difficult to further improve.

The human brain does not physically integrate well, externally or internally, with contemporary hardware and software.

The non-existence of "boredom" when performing repetitive tasks.

An increased ability to acquire, retrieve, store and use information on the Internet, which contains most human knowledge.

Lack of human failings that result from complex functional adaptations, such as observer-biased beliefs or rationalization.

Lack of neurobiological features that limit human control over functionality.

Lack of complexity that we have acquired from evolutionary design, e.g., unnecessary autonomic processes and sexual reproduction.

The ability to advance on the design of evolution, which is continually constrained by blindness, the requirement to maintain preexisting design, and a weakness with simultaneous dependencies.

The ability to add more computational power to a particular feature or problem. This may result in moderate or substantial improvements to preexisting intelligence. (AI does not have an upper limit on computational capacity; we do.)

The ability to analyze and modify every design level and feature.

The ability to combine autonomic and deliberative processes.

The ability to communicate and share information (abilities, concepts, memories, thoughts) at a greater rate and on a greater level than us.

The ability to control what is and what is not learned or remembered.

The ability to create new modalities that we lack, such as a modality for code, which may improve the AI's programming ability - by making the AI inherently native to programming - far beyond our own (a modality for code may allow the AI to perceive its hardware machine code, i.e. the language used to write the AI, and other abilities).

The ability to learn new information very rapidly.

The ability to consciously create, analyze, modify, and improve abilities, concepts, or memories.

The ability to operate on computer hardware that has powerful advantages over human neurons, such as the ability to perform billions of sequential steps per second.

The capacity to self-observe and understand on a fine-grained level that is impossible for us. AIs may have an improved capacity for introspection and manipulation, such as the ability to introspect and manipulate code, which would be the functional level comparable to human neurons, which we can't think about or manipulate.

The most important and powerful capacity of minds-in-general over the human brain is the ability to recursively self-encapsulate and self-improve its intelligence. As a mind becomes smarter, the mind can use its intelligence to improve its design, thereby improving its intelligence, which may allow further improvements to its design, thus allowing further improvements to its intelligence. It's unknown when open-ended self-improvement may begin. A conservative assumption is human-similar general intelligence; but it may begin before then, and it is important to plan nonconservatively.
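
As a quick sanity check on the serial-speed figures quoted in that list (roughly 200 operations per second for a neuron versus a ~2 GHz chip), and on what such a ratio would mean subjectively, here is the arithmetic; the subjective-time conversion is my own illustrative extrapolation, not a claim from the source document:

    # Sanity check of the serial-speed comparison quoted above.
    neuron_ops_per_sec = 200        # approximate figure cited in the list
    chip_hz = 2e9                   # a ~2 GHz chip, as cited

    ratio = chip_hz / neuron_ops_per_sec
    print(f"speed ratio: {ratio:.0e} to one")            # 1e+07, the "10 million to one" claim

    # If subjective thinking speed scaled with that ratio (a big if), a year of
    # human-speed serial thought would pass in a few seconds of wall-clock time.
    seconds_per_year = 365.25 * 24 * 3600
    print(f"one subjective year in {seconds_per_year / ratio:.1f} seconds")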

Some people read all of the above and still ask, "won't the AI need its human handlers to help it do everything?" At this point it's hard to know what to say, because it should already be clear that we are dealing with something profoundly foreign and different from us. Something profoundly more capable. Deserving of our respect, and indeed even our fear.

Here I am assuming that the first AI of human-similar intelligence will be a seed AI - an AI specifically designed and oriented towards improving its own mind and body to achieve its goals. Even if the first human-similar AI isn't built specifically to be a seed AI, it is bound to become a de facto seed AI sooner or later, improving itself freely and enthusiastically. Improvement of capability is a subgoal of almost any goal, as long as the goal does not expressly forbid it and as long as there is additional utility to be derived from more goal-seeking activity. One might even call it a convergent subgoal, to use the Creating Friendly AI terminology.

To get from a data pattern on a computer system to some form capable of influencing the real world, the AI will need to use some sort of manufacturing technology. It seems more likely for it to invent its own than to use a preexisting human-invented technology. Prior to the creation of its manufacturing base, the human-similar AI would be dependent primarily on social, financial, and informational channels of influence rather than physical channels. Even with these limitations, we can assume such an AI would be quite capable. With adequate hard drive space and processing power, the AI could extend its cognitive functionality by making numerous duplicates of useful subsystems, not "copying" itself into different entities per se but ensuring that it has the capability of more than one being while still retaining the unity necessary to pursue goals in a streamlined fashion.

At some point, maybe in days, maybe in minutes (we can't think of every possible option), a human-similar seed AI is going to get hold of some flexible manufacturing technology it can use to give itself a physical instantiation. Molecular manufacturing seems likely, although there may be other, less capable manufacturing technologies it uses to bootstrap its way to MM. Once a seed AI gains a manufacturing technology of this flexibility and power, it should be able to start converting local matter into actuating systems and processing elements. This process could happen very quickly. Recall that some bacteria can self-replicate in 15 minutes using readily available carbon compounds, and viruses can self-replicate at an even greater rate. A self-replicating nanoscale manufacturing technology based on covalent bonds, rather than the weaker ionic and intermolecular bonds of biology, could convert raw materials into useful actuators at an even greater rate than the fastest bacteria. Using solar or nuclear energy, or something we can't even imagine, this process would not be limited to energetic compounds as feedstock, the way animals are, but would be able to restructure dead rock just as easily as organics.
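
To put some rough numbers on that, here is a doubling-time sketch. The seed mass and target below are illustrative assumptions of mine, and the model ignores energy, feedstock transport, and heat dissipation, all of which would slow a real system down:

    import math

    # Doubling-time sketch for a hypothetical self-replicating manufacturing base.
    # Seed mass and target are illustrative assumptions; energy, feedstock transport,
    # and heat dissipation are ignored and would slow a real system considerably.
    doubling_minutes = 15        # the bacterial replication time cited above
    seed_mass_kg = 1.0           # assume 1 kg of initial machinery
    target_mass_kg = 1e9         # assume a million-ton target, for illustration

    doublings = math.log2(target_mass_kg / seed_mass_kg)
    hours = doublings * doubling_minutes / 60
    print(f"{doublings:.0f} doublings, about {hours:.1f} hours")   # ~30 doublings, ~7.5 hours
    # Even with generous slowdowns, unconstrained exponential replication reaches
    # industrial scale in days rather than decades - the core of the hard takeoff concern.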

Very quickly, an AI could go from "just a computer" to a rapidly outflowing wave front of autonomous matter-transforming machinery. This hard takeoff scenario is part of the reason for the concern about Friendly AI - if such a powerful entity doesn't explicitly desire our continued well-being, we are toast. But if it does... it could be the greatest thing ever to happen to the human race. A huge percentage of the world's ills could be wiped out, if not overnight, then extremely quickly. Material goods could be manufactured from raw materials on command and at essentially zero cost. AI subsystems acting as peacekeepers and mediators could ensure that acts of evil such as the massacre in Darfur are stopped before they start. Risks such as asteroid strikes or nanoplagues would become old news, because this newborn superintelligence would be so powerful there's no reason why it couldn't distribute passive nanomachines to every square millimeter of the Earth's surface and use them to nip any malevolent self-replicators in the bud.

Yes, yes, it's all very shocking, and some people will never buy it simply because the consequences are too huge for comfort, but the hard takeoff conclusion is impossible to escape if you accept that 1) a human-similar AI would have various cognitive advantages that put it significantly above us in capability, and 2) these could be used to develop a superior manufacturing technology based on autonomous self-replication. I suspect there are numerous transhumanists not clear on the details of 1 and 2, and I would encourage anyone reading to go have a look at AI and Global Risk and Nanosystems, respectively. It seems easier to get hung up on 1 than 2, because it's easier to convince oneself that one understands intelligence thoroughly than it is to be self-convinced of an understanding of molecular nanotechnology, though both happen.

Having said that I think AI will be capable of utterly transforming the world in mere days or weeks, the whole point of this post is that a hard takeoff is not necessarily incompatible with some continuity, even high degrees of apparent continuity. A superintelligent AI could conceal its primary computational substrate underground, for instance, or conduct distributed computations across tiny robotic elements scattered across the planet's surface but entirely invisible to the naked eye. It need not turn the skyline into dark ruins of megacities accompanied by unwelcoming lightning storms, a la the Matrix. In fact, there's no reason why such an AI couldn't restore our world to its former environmental glory, sans deadly diseases, for instance. If given the seed motivation to care about human interests and aesthetics, it could make the world a very pleasant place to live. Because humans evolved many hundreds of thousands of years ago, our preferences tend to be aligned with creative spins on ideal ancestral environments: fruit trees, flowers, green rolling hills, a healthy animal population, flowing water, et cetera. There is absolutely no reason why the Earth's surface can't have a Luddite-friendly appearance while individuals desiring more radical self-enhancement go deep underground, into space, or into virtual worlds. I'm not going to speculate on the complex property rights and legal implications of such a system, but just because the challenges seem foreboding today does not mean that our superintelligent future selves or descendants will not be able to figure it all out.

So when people ask me, "do you think everything will change radically after the Singularity?", I say, "yes and no". An AI not aligned with human interests could change the world so suddenly and radically, ignoring us all the while, that the most likely outcome would be "the irreversible rearrangement of our molecular structure into alternatives conferring positive utility" (from the AI's perspective), i.e., sudden death, not "cyborg wars" or whatever else some people were expecting. A hard takeoff also invalidates many of the conventional transhumanist visions, where incremental invention of products is the primary transformative means.

Filed under: singularity
14 May 2007

Visualization: Blog Mapping by Domain

An interesting blog mapping experiment from Mike Love over at Institute for the Future:

The blue dots are blogs and the tan dots are domains they link to. Clockwise from the top left, they are Accelerating Future, FuturePundit, Minding the Planet, Jon Udell, Science Library Pad, Future of the Book, Future Feeder, Open the Future, and FutureNow. Visit the above link and you can download a version with rollover URLs.

Another project by Mike Love is The Genealogy of Influence, which documents and visualizes the creative influences of great thinkers, scientists, and artists. Cool stuff!

Unsurprisingly, it looks like Jamais and I are side by side in the futurist blogosphere.

Filed under: images