Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

25 Oct 2006

Michael Wilson on AGI Funding

On AGIRI's general mailing list, Michael Wilson of Bitphase AI, Ltd., responds to the question, "how can you tell when an AGI project is worth investing in?":

There have been many, many well funded AGI projects in the past, public and private. Most of them didn't produce anything useful at all. A few managed some narrow AI spinoffs. Most of the directors of those projects were just as confident about success as Ben (Goertzel) and Peter (Voss) are. All of them were wrong. No-one on this list has produced any evidence (publicly) that they can succeed where all previous attempts failed other than cute PowerPoint slides - which all the previous projects had too. All you can do is judge architectures by the vague descriptions given, and the history of AI strongly suggests that even when full details are available, even so-called experts completely suck at judging what will work and what won't. The chances of arbitrary donors correctly ascertaining what approaches will work are effectively zero. The usual strategy is to judge by hot buzzword count and apparent project credibility (number of PhDs, papers published by leader, how cool the website and offices are, number of glowing writeups in specialist press; remember Thinking Machines Corp?). Needless to say, this doesn't have a good track record either.

As far as I can see, there are only two good reasons to throw funding at a specific AGI project you're not actually involved in (ignoring the critical FAI problem for a moment): hard evidence that the software in question can produce intelligent behaviour significantly in advance of the state of the art, or a genuinely novel attack on the problem - not just a new mix of AI concepts in the architecture (everyone vaguely credible has that), but a genuinely new methodology. Both of those have an expiry date after a few years with no further progress. I'd say the SIAI had a genuinely new methodology with the whole provable-FAI idea and to a lesser extent some of the nonpublished Bayesian AGI stuff that immediately followed LOGI, but I admit that they may well be past the 'no useful further results' expiry date for continued support from strangers.

Setting up a structure that can handle the funding is a secondary issue. It's nontrivial, but it's clearly within the range of what reasonably competent and experienced people can do. The primary issue is evidence that raises the probability that any one project is going to buck the very high prior for failure, and neither hand-waving, buzzwords, nor PowerPoint (should) cut it. Even detailed descriptions of the architecture with associated functional case studies, while interesting to read and perhaps convincing for other experts, historically won't help non-expert donors make the right choice. Radically novel projects like the SIAI may be an exception (in a good or bad way), but for relatively conventional groups like AGIRI and AAII, insist on seeing some of this supposedly already-amazing software before choosing which project to back.

Personally if I had to back an AGI project other than our research approach at Bitphase, and I wasn't so dubious about his Friendliness strategy, I'd go with James Rogers' project, but I'd still estimate a less-than-5% chance of success even with indefinite funding. Ben would be a little way behind that with the proviso that I know his Friendliness strategy sucks, but he has been improving both that and his architecture so it's conceivable (though alas unlikely) that he'll fix it in time. AAII would be some way back behind that, with the minor benefit that if their architecture ever made it to AGI it's probably too opaque to undergo early take-off, but with the huge downside that when it finally does enter an accelerating recursive self-improvement phase what I know of the structure strongly suggests that the results will be effectively arbitrary (i.e. really bad). As noted, hard demonstrations of both capability and scaling (from anyone) will rapidly increase those probability estimates. I understand why many researchers are so careful about disclosure, but frankly without it I think it's unrealistic verging on dishonest to expect significant donated funding (ignoring the question of why the hell companies would be fishing for donations instead of investment).

There are a few really good things about Michael Wilson. First, he is Bayesian. This means, in general, that he represents his belief confidence in terms of probabilities, takes prior probabilities fully into account, and comprehends the relationship between conditional and prior probabilities. Second, he understands AI and its consequences. This means that he doesn't regard human-equivalency as a stable-state optimum for intelligences capable of recursive self-improvement, has a nonanthropocentric model of the space of minds in general, and fosters a pragmatic design attitude despite a well-fleshed-out understanding of normative reasoning. Third, he has real experience as both a programmer and an entrepreneur. Not to say that such experience is a necessary condition for success in AGI - but it doesn't hurt, especially when coupled with the first two traits.

The ultimate message: there is no "AGI evidence test" in the way that a positive mammography is a good test for breast cancer. The prior probability of success is quite low - lower than one divided by the number of AGI projects attempted so far - say 1/100 at the most generous. To make up for this disparity, the AGI evidence indicator must be rare among projects in general, and the probability of the indicator given an imminently successful project must be close to unity.

For example, say that Bayesianity is the AGI indicator. If the prior probability of success for any AGI project is 1/100, and only 1/10 AGI projects display Bayesianity, and we know that any successful project must display Bayesianity, then according to Bayes' theorem, the probability of success given Bayesianity is 1/10. This is because the team in question is competing against the other 9 Bayesian teams out of 100 AGI projects for the single probabilistically-likely success.
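To make the arithmetic explicit, here is the same calculation as a short Python sketch; the prior, indicator rate, and likelihood are just the illustrative numbers from this example, not measured values:

    # Bayes' theorem applied to the "Bayesianity as AGI indicator" example above.
    # All figures are the illustrative numbers from this post, not real estimates.
    p_success = 1 / 100              # prior: roughly one AGI project in a hundred succeeds
    p_indicator = 1 / 10             # only one project in ten displays the indicator
    p_indicator_given_success = 1.0  # assume every successful project displays it

    # P(success | indicator) = P(indicator | success) * P(success) / P(indicator)
    p_success_given_indicator = p_indicator_given_success * p_success / p_indicator
    print(p_success_given_indicator)  # 0.1, i.e. a 1-in-10 chance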

Unfortunately for AGI researchers, the probability of success for any given AI project before 2020 without a discontinuous breakthrough could be considerably lower than 1/100 - probably more like 1/10,000. The problem with determining indicators, like Bayesianity, is that we can know little about the necessary characteristics of a successful project, given that there has never been a successful project instance before, ever. The first step to increasing the probability that your project succeeds is to take on the necessary characteristics of a winning project. For example, listening to Enya is almost certainly not a necessary characteristic for a winning AGI project. Therefore, it doesn't matter whether you listen to Enya or not. However, having over three million dollars and having read over 10,000 cumulative pages on inductive inference are likely characteristics of a winning project, so any project that wants to boost its probability of success will need to take on those characteristics, whether it likes them or not.

Understand?

Filed under: AI
24 Oct 2006

Predictability of AI

From complexity theorist Richard Loosemore on the AGI list:

It is entirely possible to build an AI in such a way that the general course of its behavior is as reliable as the behavior of an Ideal Gas: you can't predict the position and momentum of all its particles, but you sure can predict such overall characteristics as temperature, pressure and volume.

Without any sophisticated theory of minds in general, predicting the future behavior of any given artificial intelligence can seem impossible - who's to say that it won't reprogram itself arbitrarily at any time, if it has the capacity to do so?

The issue is that capacity does not necessarily signify desire. In humans, desire comes from our evolutionary history - every desire, no matter how seemingly unrelated, evolved because it contributed somehow to our inclusive fitness. Art, literature, philosophy, gossip - few people realize that these domains of human endeavor are in fact evolutionarily programmed subgoals of the evolutionary supergoal: the increase of inclusive fitness, which encompasses both our ability to survive and give birth to children that survive. Some may say, "look at math, and the sciences, and abstract thought, don't these signify that humanity can go beyond its evolutionary origins?" To the contrary, these activities are just extended outgrowths of our evolutionary motivations running their behavior routines in different sorts of external environments.

When order is created, the entropy of a system decreases. The number of possible future states the system is likely to be in goes down. A diamond monolith has very low entropy, and is likely to remain in the same configuration for a long time to come. There is always a nonzero probability that thermal vibrations will shatter the monolith to dust, or that several quadrillion atoms will spontaneously quantum tunnel out of the monolith, thereby destroying it, but the likelihood of these occurrences is minute. On the other side of things, a corpse has high entropy - it is deteriorating, decaying, its molecules are going every which way, being digested by microorganisms and scattered by numerous natural forces. There is a chance that the corpse will spontaneously reform back into a living person, but the probability is close to nil.

An engineer is like a sculptor, creating mechanisms whose purpose is to stay within a certain quadrant of the design space. Because most machines are non-self-repairing and non-adaptive, when something goes wrong in a machine, it tends to simply break (degrade to a local entropy maxima) rather than spontaneously start accomplishing something else entirely unrelated to the original purpose for which it was designed (jump beyond local entropy maxima to another entropy minima). In redundant machines, the offending part breaks, the function of which is quickly replaced by an equivalent part, and the machine goes back to operating how it did originally.

In AI, a popular safety concern is that any AI programmed by human beings will be liable to spontaneously switch motivations after it reaches superhuman intelligence, so any explicit programming is pointless. For example, certain social animals have a heuristic that mediates their treatment of other members of the species. It goes like this:

If person X is weaker than me, then I should consider bullying them around to my advantage.

This heuristic evolved because it boosted inclusive fitness for individuals that followed it, thus it was selected for. For example, if person X has a wife and I have a wife, and I'm stronger than person X, then I can kill him and take his wife, thus giving me two channels to pass along my genes rather than just one. Whether or not our ancestors explicitly made this calculation when bullying people around, it was an adaptive trait that spread. Evolution did the calculations, not us.

Because certain social animals, including all humans, have this trait, we've come to think that it is necessarily universal to all minds. If it's happened throughout all of human history, why shouldn't it hold true in every history of every possible intelligent species?

This is where the "Overlord AI" fallacy comes into play. If there exists an AI that is stronger and smarter than us, it will surely bully us around and refuse to listen to us, right? Superintelligent AIs act like a stronger opposing tribe, right? Agent Smith certainly seemed to.

The problem with this view is the misgeneralization of a human social heuristic across the space of all possible minds. It assumes that operating on this heuristic is necessary for gaining any sort of power, so that any mind would choose to employ it, no matter what. What we aren't prepared for is the existence of new minds that violate these assumptions.

Human kindness tends to be conditional. Conditional on shared genetic material, conditional on trusted alliances, conditional on networks of checks and balances. Human political systems work in spite of our observer-centric goal systems, because they developed to work around them.

The kindness of a properly programmed AI can be made unconditional. In fact, programming an unconditional response is probably easier than programming a conditional response, because the former is less complex. That's the type of AI we'd want - one with unconditional niceness.

Singularitarians foresee a "hard takeoff" - that is, a short gap of time between roughly human-equivalent AI and superintelligent AI equipped with molecular nanotechnology. There are various reasons for this, and they're all based on the cognitive differences between human-equivalent AIs and actual humans. Basically, it turns out that AIs are better at just about everything: staying awake, thinking faster, thinking in different ways, utilizing surrounding technology, making themselves smarter, etc. As soon as you build a roughly human-equivalent AI, it's only a matter of time (days, or maybe even hours) before you have a superintelligent AI that can fabricate its own hardware out of sand, tap solar, chemical, and nuclear energy sources, win every future Nobel Prize easily, quickly manufacture food, housing, products, etc., out of raw materials, and perform other "angelic", "godlike", and "jaw-droppingly amazing" tasks.

The hard takeoff prediction is not based on wishful thinking or the desire for a paternalistic AI God; it comes from looking at what humans can accomplish in spite of our limitations, imagining a mind without many of the same limitations, then asking what that mind would likely be able to accomplish. The Singularitarian stance is that no matter what, eventually, we will have to confront a superintelligence with power of this magnitude, and furthermore, that superintelligence is significantly more likely to emerge from AI first, rather than human intelligence enhancement.

If godlike superintelligence emerges from AI first, then what can we humans do to ensure that this god is a kind one, and nothing like God from the Bible?

As mentioned before, unconditional kindness. Because a superintelligent AI will grow from the seed of a human-equivalent AI, we have the power to specify its initial conditions. While we can't say exactly which initial conditions will lead to which outcomes, we can try to ensure an outcome that the vast majority of people will tolerate or even enjoy.

Some people may fundamentally be against the idea of a superintelligent AI existing at all, and they will be impossible to please, at least until they come to terms with reality. It may be best for a superintelligence to have minimal visible impact on the lives of such people, and only intervene during emergencies.

Some may not mind the existence of superintelligent AI, but won't want it in their way too much. They might take advantage of such an AI and its manufacturing abilities to get stuff like a free mansion, free television and free personal aircar, but prefer to interact primarily with other ordinary humans who regard the role of AI in a similar way.

Some will want to embrace superintelligent AI and take full advantage of what it has to offer. They may want to become superintelligent themselves, augmenting themselves to think faster, look at problems from more angles, have more compassion for the unfortunate, and so on.

How do we build a seed AI such that the superintelligence it becomes accounts for these different types of people, balancing their desires considerately and effectively? It would probably be useful to design a goal system integrating human moral decisionmaking ability, so that the AI can address a moral conundrum at least as well as the wisest human philosopher. This way, we don't need to specify every single contingency, but rather can depend upon the AI to solve these things on its own.

Once we realize that a Friendly AI is no more likely to spontaneously reprogram itself to be unfriendly than a broken cup is to reform itself and jump back on a table, the next question is "what is Friendly?" A superintelligent AI could literally be smart enough to pick a series of actions that all six billion humans would personally call "friendly", but is that enough? Probably not - but we can't even begin to address these further problems until we understand the difference between evolutionary goal systems and engineered goal systems, and their predictability.

For a bit of popular culture on unfriendly AI, see a recent PBF.

Filed under: friendly ai
22 Oct 2006

Green Goo a La Mode

On Nobel Intent, minimal genomes are being discussed. The organisms in question are endosymbionts: bacteria that take up residence inside animal cells, forming a symbiotic relationship. Apparently some of these species have extremely tiny genomes:

[...] there's a second paper on the endosymbiont in a related species, a psyllid, that makes the first genome look big. In this case, the bacterial genome has been whittled down into an extremely gene-rich 166 Kilobases with 182 genes. Over 97 percent of that genome codes for something; in fact, nearly a full percent of it codes for parts of two genes at once.

What do I take away from this? Well, aside from general scientific interest, I think the successful existence of minimal-genome organisms in nature shows how low a complexity threshold will be needed to engineer green goo: artificial variants of natural organisms with much greater physical performance, such that they will be capable of entirely displacing the original population. If other organisms depend on the displaced organism for food or some other reason, its disappearance could lead to catastrophic ecological collapse much more rapidly than scientifically questionable anthropogenic global warming.

From this Wired article on green goo:

In its report, published on July 8, the Action Group on Erosion, Technology and Concentration said the risks from green goo demand the most urgent foresight and caution. "With nanobiotech, researchers have the power to create completely new organisms that have never existed on Earth," said the ETC release accompanying its report.

It's a new one for some players. "I haven't heard of this concern anywhere else, I mean anywhere else," said Christine Peterson, president of the Foresight Institute, a nonprofit dedicated to accelerating the potential benefits and anticipating potential risks of nanotechnology. "I think it's because people are already aware of the issues of biotech. I'm not sure there's an additional issue here."

Christine wants to avoid an irrational aversion to Drexlerian nanotechnology stemming from concerns about green goo, similar to the irrational aversion to nanotech brought about by the fear of free-floating grey goo. However, there is great warrant for fear in this case. Today, designing and manufacturing an artificial life form has already been done at least once - for an artificial virus. We have created life. I predict we will create artificial bacteria by 2010, not just by writing their genomes, but actually by building them from scratch. It will cost millions of dollars and require maybe a dozen Ph.D.s, but the difficulty threshold will drop like a stone.

By 2020, creating cybernetic microorganisms capable of entirely displacing their biological equivalents will become feasible in a university lab, with minimal funding. Then the potential problems will begin. Artificial viruses, bacteria, phytoplankton, algae, even krill or insects could be produced in great numbers before the end of the second decade of this century, and could then go on to self-replicate beyond our control. This scenario is not only conceivable, but probable - it only takes one successful self-replicator to create a major hassle. If it's a self-replicator that throws a wrench into human biology in particular, it could kill every person on earth in the time it takes to spread globally.

Because artificial organisms will have the potential for superior performance, they could spread much faster than natural species, while being capable of surviving in a much wider range of niches. Based on the principles of evolvable hardware, we can produce artificial organisms that evolve thousands or millions of times faster than their natural counterparts. Imagine an artificial chloroplast that jumps from cell to cell, plant to plant, continent to continent, rendering their hosts incapable of photosynthesis.

One of the only useful conceivable countermeasures would be to have obedient artificial microorganisms already fully distributed in the background environment - "blue goo" - so that we can instruct them to attack the green goo should it become a problem. Another would be successfully building Friendly AI, which could take care of the problem better than we ever could. A useful backup measure would be to launch self-sustaining space colonies, a la Lifeboat Foundation.

20 Oct 2006

What is the Singularity?

"What is the Singularity?" is the Singularity Institute's introduction to the Singularity, written by Eliezer Yudkowsky in 2002. According to many, it's the best introduction to the Singularity out there. As I thought a reminder would be helpful, I'm posting the document here in its entirety:

"The Singularity is the technological creation of smarter-than-human intelligence. There are several technologies that are often mentioned as heading in this connection. The most commonly mentioned is probably Artificial Intelligence, but there are others; direct brain-computer interfaces, biological augmentation of the brain, genetic engineering, ultra-high-resolution scans of the brain followed by computer emulation. Some of these technologies seem likely to arrive much earlier than the others, but there are nonetheless several independent technologies all heading in the direction of the Singularity - several different technologies which, if they reached a threshold level of sophistication, would enable the creation of smarter-than-human intelligence.

A future that contains smarter-than-human minds is genuinely different in a way that goes beyond the usual visions of a future filled with bigger and better gadgets. Vernor Vinge originally coined the term "Singularity" in observing that, just as our model of physics breaks down when it tries to model the singularity at the center of a black hole, our model of the world breaks down when it tries to model a future that contains entities smarter than human.

Human intelligence is the foundation of human technology; all technology is ultimately the product of intelligence. If technology can turn around and enhance intelligence, this closes the loop, creating a positive feedback effect. Smarter minds will be more effective at building still smarter minds. This loop appears most clearly in the example of an AI improving its own source code, but it would also arise, albeit initially on a slower timescale, from humans with direct brain-computer interfaces creating the next generation of brain-computer interfaces, or biologically augmented humans working on an Artificial Intelligence project.

Some of the stronger Singularity technologies, such as Artificial Intelligence and brain-computer interfaces, offer the possibility of faster intelligence as well as smarter intelligence. Ultimately, speeding up intelligence is probably comparatively unimportant next to creating better intelligence; nonetheless the potential differences in speed are worth mentioning because they are so huge. Human neurons operate by sending electrochemical signals that propagate at a top speed of 150 meters per second along the fastest neurons. By comparison, the speed of light is 300,000,000 meters per second, two million times greater. Similarly, most human neurons can spike a maximum of 200 times per second; even this may overstate the information-processing capability of neurons, since most modern theories of neural information-processing call for information to be carried by the frequency of the spike train rather than individual signals. By comparison, speeds in modern computer chips are currently at around 2GHz - a ten millionfold difference - and still increasing exponentially. At the very least it should be physically possible to achieve a million-to-one speedup in thinking, at which rate a subjective year would pass in 31 physical seconds. At this rate the entire subjective timespan from Socrates in ancient Greece to modern-day humanity would pass in under twenty-two hours.

Humans also face an upper limit on the size of their brains. The current estimate is that the typical human brain contains something like a hundred billion neurons and a hundred trillion synapses. That's an enormous amount of sheer brute computational force by comparison with today's computers - although if we had to write programs that ran on 200Hz CPUs we'd also need massive parallelism to do anything in realtime. However, in the computing industry, benchmarks increase exponentially, typically with a doubling time of one to two years. The original Moore's Law says that the number of transistors in a given area of silicon doubles every eighteen months; today there is Moore's Law for chip speeds, Moore's Law for computer memory, Moore's Law for disk storage per dollar, Moore's Law for Internet connectivity, and a dozen other variants.

By contrast, the entire five-million-year evolution of modern humans from primates involved a threefold increase in brain capacity and a sixfold increase in prefrontal cortex. We currently cannot increase our brainpower beyond this; in fact, we gradually lose neurons as we age. (You may have heard that humans only use 10% of their brains. Unfortunately, this is a complete urban legend; not just unsupported, but flatly contradicted by neuroscience.) One possible use of broadband brain-computer interfaces would be to synchronize neurons across human brains and see if the brains can learn to talk to each other - computer-mediated telepathy, which would try to bypass the problem of cracking the brain's codes by seeing if they can be decoded by another brain. If a sixfold increase in prefrontal brainpower was sufficient to support the transition from primates to humans, what could be accomplished with a clustered mind of sixty-four humans? Or a thousand? (And before you shout "Borg!", consider that the Borg are a pure fabrication of Hollywood scriptwriters. We have no reason to believe that telepaths are necessarily bad people. A telepathic society could easily be a nicer place to live than this one.) Or if the thought of clustered humans gives you the willies, consider the whole discussion as being about Artificial Intelligence. Some discussions of the Singularity suppose that the critical moment in history is not when human-equivalent AI first comes into existence but a few years later when the continued grinding of Moore's Law produces AI minds twice or four times as fast as human. This ignores the possibility that the first invention of AI will be followed by the purchase, rental, or less formal absorption of a substantial proportion of all the computing power on the then-current Internet - perhaps hundreds or thousands of times as much computing power as went into the original AI.

But the real heart of the Singularity is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are better chimps. This is the hardest part of the Singularity to discuss - it's easy to look at a neuron and a transistor and say that one is slow and one is fast, but the mind is harder to understand. Sometimes discussion of the Singularity tends to focus on faster brains or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe. This doesn't mean the subject is impossible to discuss; Section III of Levels of Organization in General Intelligence, on the Singularity Institute's website, does take a stab at discussing some specific design improvements on human intelligence. But that involves a specific theory of intelligence, which we don't have room to go into here.

However, that smarter minds are harder to discuss than faster brains or bigger brains does not show that smarter minds are harder to build - deeper to ponder, certainly, but not necessarily more intractable as a problem. It may even be that genuine increases in smartness could be achieved just by adding more computing power to the existing human brain - although this is not currently known. What is known is that going from primates to humans did not require exponential increases in brain size or thousandfold improvements in processing speeds. Relative to chimps, humans have threefold larger brains, sixfold larger prefrontal areas, and 98.4% similar DNA; given that the human genome has 3 billion base pairs, this implies that at most twelve million bytes of extra "software" transforms chimps into humans. And there is no suggestion in our evolutionary history that evolution found it more and more difficult to construct smarter and smarter brains; if anything, hominid evolution has appeared to speed up over time, with shorter intervals between larger developments.

But leave aside for the moment the question of how to build smarter minds, and ask what "smarter-than-human" really means. And as the basic definition of the Singularity points out, this is exactly the point at which our ability to extrapolate breaks down. We don't know because we're not that smart. We're trying to guess what it is to be a better-than-human guesser. Could a gathering of apes have predicted the rise of human intelligence, or understood it if it were explained? For that matter, could the 15th century have predicted the 20th century, let alone the 21st? Nothing has changed in the human brain since the 15th century; if the people of the 15th century could not predict five centuries ahead across constant minds, what makes us think we can outguess genuinely smarter-than-human intelligence?

Because we have a past history of people making failed predictions one century ahead, we've learned, culturally, to distrust such predictions - we know that ordinary human progress, given a century in which to work, creates a gap which human predictions cannot cross. We haven't learned this lesson with respect to genuine improvements in intelligence because the last genuine improvement to intelligence was a hundred thousand years ago. But the rise of modern humanity created a gap enormously larger than the gap between the 15th and 20th century. That improvement in intelligence created the entire milieu of human progress, including all the progress between the 15th and 20th century. It is a gap so large that on the other side we find, not failed predictions, but no predictions at all.

Smarter-than-human intelligence, faster-than-human intelligence, and self-improving intelligence are all interrelated. If you're smarter that makes it easier to figure out how to build fast brains or improve your own mind. In turn, being able to reshape your own mind isn't just a way of starting up a slope of recursive self-improvement; having full access to your own source code is, in itself, a kind of smartness that humans don't have. Self-improvement is far harder than optimizing code; nonetheless, a mind with the ability to rewrite its own source code can potentially make itself faster as well. And faster brains also relate to smarter minds; speeding up a whole mind doesn't make it smarter, but adding more processing power to the cognitive processes underlying intelligence is a different matter.

But despite the interrelation, the key moment is the rise of smarter-than-human intelligence, rather than recursively self-improving or faster-than-human intelligence, because it's this that makes the future genuinely unlike the past. That doesn't take minds a million times faster than human, or improvement after improvement piled up along a steep curve of recursive self-enhancement. One mind significantly beyond the humanly possible level would represent a full-fledged Singularity. That we are not likely to be dealing with "only one" improvement does not make the impact of one improvement any less.

Combine faster intelligence, smarter intelligence, and recursively self-improving intelligence, and the result is an event so huge that there are no metaphors left. There's nothing remaining to compare it to.

The Singularity is beyond huge, but it can begin with something small. If one smarter-than-human intelligence exists, that mind will find it easier to create still smarter minds. In this respect the dynamic of the Singularity resembles other cases where small causes can have large effects; toppling the first domino in a chain, starting an avalanche with a pebble, perturbing an upright object balanced on its tip. (Human technological civilization occupies a metastable state in which the Singularity is an attractor; once the system starts to flip over to the new state, the flip accelerates.) All it takes is one technology - Artificial Intelligence, brain-computer interfaces, or perhaps something unforeseen - that advances to the point of creating smarter-than-human minds. That one technological advance is the equivalent of the first self-replicating chemical that gave rise to life on Earth."
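The speed comparisons in the quoted essay are easy to check. Here is a quick Python sketch using the essay's own figures (the 2,400-year span back to Socrates is my own rounding):

    # Checking the speed comparisons quoted above, using the essay's figures.
    signal_speed_neuron = 150            # m/s, fastest axons
    speed_of_light = 300_000_000         # m/s
    print(speed_of_light / signal_speed_neuron)    # 2,000,000-fold difference

    neuron_spike_rate = 200              # Hz
    chip_clock = 2_000_000_000           # 2 GHz, the 2006-era figure in the essay
    print(chip_clock / neuron_spike_rate)          # 10,000,000-fold difference

    speedup = 1_000_000                  # the "million-to-one" thinking speedup
    seconds_per_year = 365 * 24 * 3600
    print(seconds_per_year / speedup)    # ~31.5 physical seconds per subjective year

    years_since_socrates = 2400          # rough span from Socrates to today
    print(years_since_socrates * seconds_per_year / speedup / 3600)  # ~21 hours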

If you enjoyed this, you can continue with Why Work Towards the Singularity?

Filed under: singularity
20 Oct 2006

Paul Phillips on the Singularity

World-famous poker player Paul Phillips, nicknamed "Dot-Com", has won over $2,200,000 playing poker live. Here's what he has to say about the Singularity on his blog:

More and more, I have come to believe that the future of the human race hangs on one thing and one thing only: whether we can reach the singularity before the enemies of civilization gain enough traction to plunge the entire planet into dystopia. And more and more I fear we are going to lose the race. Kurzweil has predicted for a long time the singularity will arrive around 2040 and I think this is as good a prediction as can be made, but it depends on the continued application of the law of accelerating returns. A few well placed nukes would push the ETA back more than a bit. And if enough of the underpinnings of civilization are smashed, there will be no chance.

My sincere belief that this race is the ONLY thing that matters with respect to the future of our species is why I don't care much about lots of things that people worry about: global warming or other environmental issues, energy consumption, you name it. If the world is to be transformed into a 7th century islamist paradise then I could give a fuck if the ocean levels rise two feet. A precondition of worrying about the future of humanity is ensuring that humanity is worth saving.

In the comments, he continues to write:

I am 100% transhumanist. The relevant feature of humanity to me is sentience. I don't expect superintelligent machines to be "our" tools - I expect "us" to be superintelligent machines. Whether any of us living today will make the leap I don't know. Very possibly not.

Thanks to Michael Haislip for the pointer. I'm told to read the rest of the comments, but I'll pass.

Paul, if you're 100% transhumanist as you say, how about contributing some of your time and money to making the Singularity a reality, while mitigating global risk? Transhumanists often get slammed for being all talk and no action, and there's only one way to reverse that trend -- personally taking action. Peter Thiel did.

Filed under: singularity
16 Oct 2006

A Nuclear Reactor in Every Home

Sometime between 2020 and 2040, we will invent a practically unlimited energy source that will solve the global energy crisis. This unlimited source of energy will come from thorium. A summary of the benefits, from a recent announcement of the start of construction for a new prototype reactor:

  • There is no danger of a melt-down like the Chernobyl reactor.
  • It produces minimal radioactive waste.
  • It can burn plutonium waste from traditional nuclear reactors.
  • It is not suitable for the production of weapon grade materials.
  • Global thorium reserves could cover our energy needs for thousands of years.

If nuclear reactors can be made safe and relatively cheap, how popular could they get?

It depends on how cheap we're talking about. Most reactor designs that utilize thorium use molten salt (or lead) as a coolant. Even though they were developed as early as 1954, molten salt reactors are a relatively immature technology. Interestingly enough, the first nuclear reactor to provide usable amounts of electricity was a molten salt reactor. Three were built as part of the US Aircraft Reactor Experiment (ARE), whose purpose was to build a reactor small and sturdy enough to power a nuclear bomber. These reactors are about the size of a large truck.

State-of-the-art nuclear reactors, such as Westinghouse's AP1000, cost $1.5 billion to build and produce 1.1 gigawatts of electricity. They cost around $50 million per year to maintain, and $30 million per year for uranium fuel. Nevertheless, they are slowly starting to compete with other sources of power like solar and fossil fuels. Eventually, they will rocket right past them. The goal is plants that cost only $990 per kilowatt. A kilowatt-year of electricity sells for about $876, and a gigawatt-year for $876 million, so even if these plants cost $1 billion to build, they can make $964 million worth of electricity every year. If fuel and maintenance costs are about $225 million per year, then your profit is $739 million/year. This is a huge profit. What prevented us from reaping these benefits in the past was inferior, more expensive building techniques and projects frequently running over budget, with some costing $4-5 billion to complete.
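Here is that revenue arithmetic as a quick Python sketch; the capacity and price figures are the ones quoted above, and the $225 million operating figure is this post's rough assumption rather than an industry number:

    # Rough AP1000-style plant economics, using the figures quoted above.
    capacity_gw = 1.1                  # AP1000 electrical output in gigawatts
    price_per_kw_year = 876            # ~$0.10/kWh, i.e. $876 per kilowatt-year
    revenue = capacity_gw * 1_000_000 * price_per_kw_year
    print(revenue)                     # ~$964 million of electricity per year

    operating_costs = 225_000_000      # fuel + maintenance assumption used in this post
    print(revenue - operating_costs)   # ~$739 million/year, before capital costs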

The AP1000 is a Generation III reactor, a new class of reactor that started coming online in 1996. More advanced Generation III reactors are sometimes called Generation III+, because they offer better performance but are not revolutionary. The benefits of Generation III+ reactors are obvious: they are economically competitive, but they still have high capital and fuel costs. A lot of this high capital cost comes from excessive safety regulations. In "The Nuclear Energy Option", Bernard L. Cohen calculates that ever-escalating safety restrictions have increased the cost of nuclear power plants by as much as four or five times, even after adjusting for inflation:

Commonwealth Edison, the utility serving the Chicago area, completed its Dresden nuclear plants in 1970-71 for $146/kW, its Quad Cities plants in 1973 for $164/kW, and its Zion plants in 1973-74 for $280/kW. But its LaSalle nuclear plants completed in 1982-84 cost $1,160/kW, and its Byron and Braidwood plants completed in 1985-87 cost $1,880/kW -- a 13-fold increase over the 17-year period. Northeast Utilities completed its Millstone 1, 2, and 3 nuclear plants, respectively, for $153/kW in 1971, $487/kW in 1975, and $3,326/kW in 1986, a 22-fold increase in 15 years. Duke Power, widely considered to be one of the most efficient utilities in the nation in handling nuclear technology, finished construction on its Oconee plants in 1973-74 for $181/kW, on its McGuire plants in 1981-84 for $848/kW, and on its Catawba plants in 1985-87 for $1,703/kW, a nearly 10-fold increase in 14 years. Philadelphia Electric Company completed its two Peach Bottom plants in 1974 at an average cost of $382 million, but the second of its two Limerick plants, completed in 1988, cost $2.9 billion -- 7.6 times as much. A long list of such price escalations could be quoted, and there are no exceptions. Clearly, something other than incompetence is involved.

That something is huge safety restrictions. When the risk of meltdown is removed, these restrictions will be lifted. Carlo Rubbia, a Nobel Prize-winning physicist and advocate of thorium power, writes that "after a suitable 'cool-down' period, radioactive 'waste' reaches radio-toxicities which are comparable and smaller than the one of the ashes coming from coal burning for the same produced energy". So waste and containment - the two main sources of cost and controversy for traditional reactors - are all but eliminated with thorium.

The world-changing thorium reactor I am envisioning qualifies as a Generation IV reactor. A Generation IV reactor will pay for itself even more quickly than a Generation III reactor, and will replace every other source of electrical power in terms of cost-effectiveness. Generation IV reactors will be the fission reactors to end all fission reactors.

The Generation IV International Forum's definition:

Generation IV nuclear energy systems are future, next-generation technologies that will compete in all markets with the most cost-effective technologies expected to be available over the next three decades.

Comparative advantages include reduced capital cost, enhanced nuclear safety, minimal generation of nuclear waste, and further reduction of the risk of weapons materials proliferation. Generation IV systems are intended to be responsive to the needs of a broad range of nations and users.

Currently, it is thought that Generation IV reactors will not come online before 2030, at least according to the Generation IV International Forum's Technology Roadmap. A substantial amount of R&D must be done to develop the molten salt reactor idea into a viable construction plan. However, I am more optimistic on timescales. Improvements in materials science and high-quality manufacturing will relax design requirements, decreasing research time from 20 years to 10 years and building time from 3-5 years to one year. That is why I can imagine thorium reactors by 2020.

Thorium reactors will be cheap. Traditionally, the primary cost of nuclear reactors has been their huge safety requirements. Regarding meltdown in a thorium reactor, Rubbia writes, "Both the EA and MF can be effectively protected against military diversions and exhibit an extreme robustness against any conceivable accident, always with benign consequences. In particular the [beta]-decay heat is comparable in both cases and such that it can be passively dissipated in the environment, thus eliminating the risks of 'melt-down'." Thorium reactors can breed uranium-233, which can theoretically be used for nuclear weapons. However, denaturing thorium with its isotope ionium eliminates the proliferation threat.

Like any nuclear reactor, thorium reactors will be hot and radioactive, necessitating shielding. The amount of radioactivity scales with the size of the plant. It so happens that thorium itself is an excellent radiation shield, but lead and depleted uranium are also suitable. Smaller plants (100 megawatts), such as the Department of Energy's small, sealed, transportable, autonomous reactor (SSTAR), will be 15 meters tall and 3 meters wide, weigh 500 tonnes, and use only a few centimeters of shielding. From the Lawrence Livermore National Laboratory page on SSTAR:

SSTAR is designed to be a self-contained reactor in a tamper-resistant container. The goal is to provide reliable and cost-effective electricity, heat, and freshwater. The design could also be adapted to produce hydrogen for use as an alternative fuel for passenger cars.

Most commercial nuclear reactors are large light-water reactors (LWRs) designed to generate 1,000 megawatts electric (MWe) or more. Significant capital investments are required to build these reactors and manage the nuclear fuel cycle. Many developing countries do not need such large increments of electricity. They also do not have the large-scale energy infrastructure required to install conventional nuclear power plants or personnel trained to operate them. These countries could benefit from smaller energy systems, such as SSTAR, that use automated controls, require less maintenance work, and provide reliable power for as long as 30 years before needing refueling or replacement.

SSTAR also offers potential cost reductions over conventional nuclear reactors. Using lead or lead–bismuth as a cooling material instead of water eliminates the large, high-pressure vessels and piping needed to contain the reactor coolant. The low pressure of the lead coolant also allows for a more compact reactor because the steam generator can be incorporated into the reactor vessel. Plus with no refueling downtime and no spent fuel rods to be managed, the reactor can produce energy continuously and with fewer personnel.

Because thorium reactors present no proliferation risk, and because they solve the safety problems associated with earlier reactors, they will be able to use reasonable rather than obsessive standards for security and reliability. If we can reach the $146-in-1971-dollars/kW milestone experienced by Commonwealth Edison in 1971, we can decrease costs for a 1-gigawatt plant to at most $780 million, rather than the $1,100 million to build such a plant today. In fact, you might be able to go as low as $220 million or below, if 80% of reactor costs truly are attributable to expensive anti-meltdown measures. A thorium reactor does not, in fact, need a containment wall. Putting the reactor vessel in a standard industrial building is sufficient.

Current operating costs, ignoring fuel costs, for a 1-gigawatt plant are about $50 million/year. With greater automation and simplicity in Generation IV plants, in addition to more reasonable safety and security regulations, this cost will be decreased to $5 million/year, equivalent to the salaries of about 60 technicians earning $80K/year. Because the molten salt continuously recirculates the fuel, the time-consuming replacement of fuel rods is not necessary - you just dump in the thorium and out comes energy. However, if molten salt is used as a coolant, it must be recirculated and purified external to the reactor vessel. This requires a chemical reprocessing facility, of a type that has so far only been demonstrated in the lab. The scale-up to industrial levels has currently been labeled as uneconomic, but improvements in salt purification technology over the next decade will bring the costs down greatly, and eventually the entire process will be automated. If thorium reactors become popular, automated, and mass-produced, the technology could improve to the point where the cost of maintaining a 1-gigawatt nuclear reactor will eventually drop as low as $1 million/year, or less.

Today, the nuclear industry primarily makes money by selling fuel to reactor operators, so there is little incentive to switch over to a fuel that will eventually be obtainable for as low as $10/kg. According to "The Economics of Nuclear Power", enriched uranium in the form of uranium oxide reactor fuel costs about $1,633/kg.

Today, thorium is relatively expensive - about $5,000 per kilogram. However, this is only because there is currently little demand for thorium; as a specialty metal, it is expensive. But there is four times as much thorium in the earth's crust as there is uranium, and uranium is only $40/kg. If thorium starts to be mined en masse, its cost could drop to as low as $10/kg. This factor-of-500 reduction in cost would be similar to the reduction in cost that electricity experienced over the course of the 20th century, only compressed into a few years. It is estimated that Norway alone contains 180,000 tons of known thorium reserves. Global deposits of thorium (in tonnes):

  • India: 360,000
  • Australia: 300,000
  • Norway: 170,000
  • United States: 160,000
  • Canada: 100,000
  • South Africa: 35,000
  • Brazil: 16,000
  • Others: 95,000

Thorium could cost a lot less than uranium fuel because it doesn't need to be enriched to be used as fuel. As stated before, enriched uranium oxide fuel costs $1,633/kg, and 1-gigawatt nuclear power plants buy about $30 million in fuel annually, which works out to about 20,000 kg. You can read more at the Wikipedia entry for the uranium market.

Even if the price of thorium never goes below $50/kg, it still represents a factor-of-32 cost improvement over uranium oxide. If a 1-gigawatt thorium reactor consumes amounts of thorium similar to the amount of uranium consumed by nuclear reactors today, fueling it for a year would only cost $1 million, using the $50/kg price point, or $200,000, using the $10/kg price point.
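Spelled out as a quick sketch, assuming (as above) that a thorium plant consumes roughly the same mass of fuel that a uranium plant consumes today:

    # Annual fuel cost comparison, using the prices given above.
    uranium_price = 1633                     # $/kg, enriched uranium oxide fuel
    annual_fuel_kg = 30_000_000 / uranium_price
    print(round(annual_fuel_kg))             # ~18,370 kg/year, i.e. roughly 20,000 kg

    for thorium_price in (50, 10):           # pessimistic and optimistic $/kg
        print(thorium_price, 20_000 * thorium_price)
    # $50/kg -> $1,000,000/year; $10/kg -> $200,000/year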

Building a 1-gigawatt uranium plant today costs about $1.1 billion. Building a 1-gigawatt thorium plant will cost only about $250 million, or less, because meltdown concerns can be tossed out the window. This fundamentally changes the economics of nuclear power. We can call this the capital cost benefit of thorium.

Fueling a 1-gigawatt uranium plant today costs $30 million/year. Fueling a 1-gigawatt thorium plant will cost only $1 million/year, because thorium is four times more abundant than uranium and does not need to be enriched - only purified - prior to being used as fuel. We can call this the fuel cost benefit of thorium.

Staffing a 1-gigawatt uranium plant today costs $50 million/year. With greater automation, and (especially) fewer safety/security requirements, we will decrease that cost to $5 million/year. Instead of requiring 500 technicians, guards, personal assistants, janitors, and paper pushers to run a nuclear plant, we will only need a small group of 30 or so technicians to run the plant, once the technology reaches maturity. Generation IV nuclear plants will be designed to be low-maintenance.

Based on these numbers, over a 60-year operating lifetime, both plants produce 60 gigawatt-years of power. The total cost for the uranium plant comes to about $5.9 billion, or roughly $98 million per gigawatt-year. The total cost for the thorium plant comes to about $610 million, or roughly $10 million per gigawatt-year. Thorium power makes nuclear power roughly ten times cheaper than it used to be, right off the bat.
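Here is that lifetime comparison worked out from the capital, fuel, and staffing figures assumed above; these are this post's own round numbers, not industry estimates:

    # 60-year lifetime cost comparison, using this post's assumed figures.
    def lifetime_cost(capital, fuel_per_year, staff_per_year, years=60):
        return capital + years * (fuel_per_year + staff_per_year)

    uranium = lifetime_cost(1_100e6, 30e6, 50e6)   # ~$5.9 billion total
    thorium = lifetime_cost(250e6, 1e6, 5e6)       # ~$0.61 billion total

    print(uranium / 60 / 1e6)   # ~$98 million per gigawatt-year
    print(thorium / 60 / 1e6)   # ~$10 million per gigawatt-year
    print(uranium / thorium)    # roughly a factor of ten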

Of course, ten times cheaper electricity is impressive, and blows everything else out of the water, but it doesn't quite qualify as the "unlimited source of energy" I was talking about. Why will thorium lead to practically unlimited energy?

Because thorium reactors will make nuclear power more decentralized. With no risk of proliferation or meltdown, thorium reactors can be built at almost any size. A 500-ton, 100 MW SSTAR-sized thorium reactor could fit in a large industrial room, require little maintenance, and cost only $25 million. A hypothetical 5-ton, truck-sized 1 MW thorium reactor might cost only $250,000, but would generate enough electricity for 1,000 people for the duration of its operating lifetime, using only 20 kg of thorium fuel per year, running almost automatically, and requiring safety checks as infrequently as once a year. That would be as little as $200/year after capital costs are paid off, for a thousand people's worth of electricity! An annual visit by a safety inspector might add another $200 to the bill. A town of 1,000 could pool $250K for the reactor at a cost of $250 each, then pay $400/year collectively, or $0.40/year each, for fuel and maintenance. These reactors could be built by the thousands, further driving down manufacturing costs.
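The per-resident arithmetic for that hypothetical town reactor, using this post's speculative figures:

    # Hypothetical 1 MW town reactor, using this post's speculative figures.
    residents = 1_000
    reactor_cost = 250_000              # capital cost
    print(reactor_cost / residents)     # $250 per resident, paid once

    thorium_kg_per_year = 20
    thorium_price = 10                  # $/kg, optimistic price point
    inspection = 200                    # assumed cost of one safety visit per year
    annual = thorium_kg_per_year * thorium_price + inspection
    print(annual, annual / residents)   # $400/year total, $0.40 per resident per year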

Smaller reactors make power generation convenient in two ways: dropping staffing costs close to zero, and eliminating the bulky infrastructure required for larger plants. For this reason, it may be more likely that we see the construction of a million $40,000, 100 kW plants than of 400 $300 million, 1 GW plants. 100 kW plants would require minimal shielding and could be installed in private homes without fear of radiation poisoning. These small plants could be shielded so well that the level of radiation outside the shield is barely greater than the ambient level of radiation from traces of uranium in the environment. The only operating costs would be periodic safety checks, fluoride salts, and thorium fuel. For a $40,000 reactor and $1,000/year in operating costs, you get enough electricity for 100 people, which is enough to accomplish all sorts of antics, like running thousands of desktop nanofactories non-stop.

Even smaller reactors might be built. The molten salt may have a temperature of around 1,400°F, but as long as it can be contained by the best alloys, it is not really a threat. The small gasoline explosions in your automobile today are of a similar temperature. In the future, personal vehicles may be powered by the slow burning of thorium, or at least by hydrogen produced by a thorium reactor. Project Pluto, a nuclear-powered ramjet missile, produced 513 megawatts of power for only $50 million. At that price ratio, a 10 kW reactor might cost $1,000 and provide enough electricity for 10 people while consuming only 1 kg of thorium every 5 years, itself weighing only 1,000 kg - similar to the weight of a refrigerator. I'm not sure if miniaturization to that degree is possible, or if the scaling laws really hold. But it seems consistent with what I've heard about nuclear power in the past.

The primary limitation with nuclear reactors, as always, is containment of radiation. But alloys and materials are improving. We will be able to make reactor vessels which are crack-proof, water-proof, and tamper-proof, but we will have to use superior materials. We should have those materials by 2030 at the latest, and they will make possible the decentralized nuclear energy vision I have outlined here. I consider it probable unless thorium is quickly leapfrogged by fusion power.

The greatest cost for thorium reactors remains their initial construction. If these reactors can be made to last hundreds of years instead of just 60, the cost per kWh comes down even further. If we could do this, then even if there were a disaster that brought down the entire industrial infrastructure, we could use our existing reactors with thorium fuel for energy until civilization restarts. We could send starships to other solar systems, powered by just a few tons of thorium. We will simultaneously experience the abundance we always wanted from nuclear power with the decentralization we always wanted from solar power. We will build self-maintaining "eternal structures" that use thorium electricity to power maintenance robots capable of working for thousands of years without breaks.

What nuclear reactors provide:

  • heat
  • electricity
  • fresh water through desalination
  • propulsion

Links to further material:

Molten salt reactor on Wikipedia
Aircraft Reactor Experiment (ARE) on Wikipedia
Nuclear aircraft on Wikipedia
Generation IV reactor on Wikipedia
Project Pluto on Wikipedia
More on Project Pluto
Energy from Thorium blog
SSTAR information
Thorium support in Norway
Thorium Power, Inc.
Thorium chemical characteristics
The Nuclear Energy Option by Bernard L. Cohen
The Economics of Nuclear Power by the Uranium Information Centre Ltd.
A Pro-Nuclear website
Greenpeace founder recants about nuclear power
The Energy Amplifier by Nobelist Carlo Rubbia
Investment Stimulus for New Nuclear Power Plant Construction FAQ
World Nuclear Association
How To Build 6,000 Nuclear Plants by 2050

Anti-nuclear:

Nuclear Power: Too Expensive to Solve Global Warming

Trivia: The word thorium derives from the Scandinavian god of thunder, Thor, so it seems unsurprising that Norway is so supportive of thorium. I doubt the people who named thorium could have guessed the godlike energy it contains, but the name does seem apt in retrospect. Thorium oxide was originally used to make gas lanterns burn more brightly. Ralph Lucas, of the House of Lords, is also a thorium supporter.

Filed under: futurism
13 Oct 2006

After NK test, what can be done to reduce nuclear threat?

Via Eurekalert:

Scholars and policy analysts examine global security questions

In the wake of the announcement of a nuclear test by North Korea, new questions have been raised about proliferation and the threat of nuclear terrorism. Is nuclear terrorism preventable? What steps has the United States already taken to avoid a nuclear catastrophe and what steps should be taken in the future?

Scholars, scientists, and policymakers, including Graham Allison, Sam Nunn, and William Perry, address these crucial questions in articles that are currently available online in the September volume of SAGE Publication's The ANNALS of The American Academy of Political and Social Science. The volume is edited by Allison of the Belfer Center for Science and International Affairs, John. F. Kennedy School of Government, Harvard University.

Of particular interest in light of North Korea's claim that it has conducted a nuclear test are Allison's article "Flight of Fancy," which traces the chain of events a Korean nuclear test might set in motion, Perry's article "Proliferation on the Peninsula: Five North Korean Nuclear Crises," Sam Nunn's "The Race between Cooperation and Catastrophe: Reducing the Global Nuclear Threat," and Robert Gallucci's article "Averting Nuclear Catastrophe: Contemplating Extreme Responses to U.S. Vulnerability." All articles from the volume are available to read at no charge through the Academy Blog at http://www.aapss.org/blog or on the SAGE Publications website at http://ann.sagepub.com/current.dtl.

"The authors devoutly hope for a future when world leaders recognize this grave danger, taking the actions necessary to defeat it," commented volume editor Graham Allison. "On current trendlines, however, the likelihood of failure is greater than that of success. We hope to remind the world just how horrible nuclear anarchy would be."

Nice to see mainstream risk analysts studying this situation as best they can.

Filed under: risks 2 Comments
11Oct/062

Hiroshima resets “peace clock” after NK nuclear test

From the Pink Tentacle:

The Hiroshima Peace Memorial Museum's Peace Watch Tower, which records the number of days since the last nuclear test, was reset on October 10, one day after North Korea conducted an underground nuclear test.

The peace clock's two digital displays show the number of days since the US atomic bombing of Hiroshima and the number of days since the last nuclear test was conducted. Before being reset on Monday, the clock read 40, the number of days since the US conducted a subcritical nuclear test at the end of August.

The clock was set up on August 6, 2001 on the 56th anniversary of the 1945 U.S. atomic bombing of Hiroshima. Over the past 5 years, the clock has been reset 11 times following each of the nuclear tests conducted by the US (some in cooperation with the UK) and Russia.

Museum director Koichiro Maeda says, "We are concerned that more nations will start to believe their national security can be strengthened by possessing nuclear weapons. It is extremely foolish." The museum is now considering making room for North Korea in the reference library exhibit, which displays information about nations possessing nuclear weapons.

About 300 survivors of the Hiroshima nuclear bombing gathered in the park near the museum, condemning the possession and testing of all nuclear weapons by all nations.

This week is not a positive one for those concerned about existential risk. But guess what. A US Intelligence official says that the test was likely not nuclear. Meanwhile, an unnamed North Korean official threatens to launch a nuclear missile.

Filed under: risks 2 Comments
10Oct/0625

Defining the Singularity

From a recent email to the Singularity mailing list:

The Singularity definitions being presented here are incredibly confusing and contradictory. If I were a newcomer to the community and saw this thread, I'd say that this word "Singularity" is so poorly defined, it's useless. Everyone is talking past each other. As Nick Hay has pointed out, the Singularity was originally defined as smarter-than-human intelligence, and I think that this definition remains the most relevant, concise, and resistant to misinterpretation.

It's not about technological progress. It's not about experiencing an artificial universe by being plugged into a computer. It's not about human intelligence merging with computing technology. It's not about things changing so fast that we can't keep up, or the accretion of some threshold level of knowledge. All of these things might indeed follow from a Singularity, but might not, making it important to distinguish between the possible effects of a Singularity and what the Singularity actually is. The Singularity actually is the creation of smarter-than-human intelligence, but there are as many speculative scenarios about what would happen thereafter as there are people who have heard of the idea.

The number of completely incompatible Singularity definitions being tossed around on this list underscores the need for a return to the original, simple, and concise definition, which, in that it doesn't make a million and one side claims, is also the easiest to explain to those being exposed to the idea for the first time. We have to define our terms to have a productive discussion, and the easiest way to define a contentious term is to make the definition as simple as possible. The reason that so many in the intellectual community see Singularity discussion as garbage is that there is so little definitional consensus that it's close to impossible to determine what's actually being discussed.

Smarter-than-human intelligence. That's all. Whether it's created through Artificial Intelligence, Brain-Computer Interfacing, neurosurgery, genetic engineering, or the fundamental particles making up my neurons quantum-tunneling into a smarter-than-human configuration - the Singularity is the point at which our ability to predict the future breaks down because a new character is introduced that is different from all prior characters in the human story.

The creation of smarter-than-human intelligence is called "the Singularity" by analogy to a gravitational singularity, not a mathematical singularity. Nothing actually goes to infinity. In physics, our models of black hole spacetimes spit out infinities because they're fundamentally flawed, not because nature itself is actually producing infinities. Any relationship between the term Singularity and the definition of singularity that means "the quality of being one of a kind" is coincidental.

The analogy of our inability to predict the physics past the event horizon of a black hole with the creation of superintelligence is apt, because we know for a fact that our minds are conditioned, both genetically and experientially, to predict the actions of other human minds, not smarter-than-human minds. We can't predict what a smarter-than-human mind would think or do, specifically. But we can predict it in broad outlines - we can confidently say that a smarter-than-human intelligence will 1) be smarter-than-human (by definition), 2) have all the essential properties of an intelligence, including the ability to model the world, make predictions, synthesize data, formulate beliefs, etc., 3) have starting characteristics dictated by the method of its creation, 4) have initial motivations dictated by its prior, pre-superintelligent form, 5) not necessarily display characteristics similar to those of its human predecessors, and so on. We can predict that a superintelligence would be capable of putting a lot of optimization pressure behind its goals.

The basic Singularity concept is incredibly mundane. In the midst of all this futuristic excitement, we sometimes forget this. A single genetically engineered child born with a substantially smarter-than-human IQ would constitute a Singularity, because we would have no ability to predict the specifics of what it would do, whereas we have a much greater ability to predict the actions of typical humans. It's also worth pointing out that the Singularity is an event, like the first nuclear test, not a thing, like the first nuke itself. It heralds an irreversible transition to a new era, but our guesses at the specifics of that era are inextricably tied to the real future conditions under which we make that transition.

The fact that it is sometimes difficult to predict the actions of everyday humans does not doom this definition of the Singularity. The fact that "smarter-than-human" is a greyscale rather than black-and-white does not condemn it either. The Singularity is one of those things that we'd probably recognize if we saw it, but because it hasn't happened yet it's very difficult to talk about coherently.

The Singularity is frequently associated with technology simply because technology is the means by which agents that can't mold their environments directly are able to get things done in a limited time. So by default, we assume that a superintelligence would use technology to get things done, and use a lot of it. But there are possible beings that need no technology to accomplish significant goals. For example, in the future there might be a being that can build a nuclear reactor simply by swallowing uranium and internally processing it into the right configuration. No "technology" required.

The Singularity would still be possible if technological progress were slowed down or halted. It would still be possible (albeit difficult) if every computer on the planet were smashed to pieces. It would be possible even if it turned out that intelligence can't exist inside a computer.

A Singularity this century could easily be stopped, for example if a disease wiped out half of humanity, or a global authoritarian regime forbade research in that direction, or if a nuclear war ejected sufficient dust into the air to shut down photosynthesis. The Singularity is far from inevitable.

The Singularity can be a bad thing, resulting in the death of all human beings, or a good thing, such that every single human being on earth can explicitly say that they are glad that it happened. There are also different shades of good: for example, a Singularity that results in the universal availability of "genie machines" could eliminate all journeys of value, by taking us right to the destination whether we want it or not.

As we can see, the definition of the Singularity I'm presenting encompasses a lot of possibilities. That's part of its elegance. By making a minimal number of assumptions, it requires the least amount of evidence to back it up. All it requires is that humans aren't the smartest physically possible beings in the universe, and that we will some day have the ability either to upgrade our brains or to create new brains that are smarter than us by design.

Filed under: singularity 25 Comments
8Oct/065

Response to “What is friendly?”

Over at the Streeb-Greebling diaries, Bob Mottram watched the Google video on the Risks of AGI panel and writes,

In this video a panel of luminaries discuss the future risks which advanced forms of AI might pose. Much hinges upon the idea of "friendliness", and trying to ensure that decisions made by powerful intelligences will always be somewhat in tune with human desires. The elephant in the room here though is that there really is no good definition for what qualifies as "friendly". What's a good decision for me, might not be a good decision for someone else. When humans make decisions they're almost never following Asimov's zeroth law.

Asimov's zeroth law is "A robot may not injure humanity, or, through inaction, allow humanity to come to harm."

It's not really "an elephant in the room". There is a common definition for "friendly", and it is accepted by many in the field:

"A "Friendly AI" is an AI that takes actions that are, on the whole, beneficial to humans and humanity; benevolent rather than malevolent; nice rather than hostile."

Not too difficult. Then comes the objection, "there can be no such thing". Well, then you'd want to build an AI that is as close to that as possible.

So should future AIs always be engineered to follow the zeroth law?

Yes... not really as a "law", but as an innate part of its motivations.

If an AI could override all human political decision making and impose an equitable world food distribution network I think that would be a very positive development. But would national leaders be willing to have their own self-interested agendas overridden by automation? I suspect they would be unhappy about that, owing to the inherently tribal nature of human psychology.

If you can't please everyone all the time, then try to please as many people as possible most of the time. Again, there's no dilemma here. I do suspect that a superintelligence with advanced nanotechnology would be able to go a long way towards appeasing national leaders even as their people are fed.

One fallacy in my opinion is that it will be possible to control and predict the decision making quality of very complex AGIs. It's already hard for us to predict how existing, relatively simple, computer programs will operate under all possible conditions.

Well, a complex AGI will be built by a simpler AGI that human programmers write. Of course we cannot predict anything with 100% accuracy, but I do think that we can build an AGI such that we can place more confidence in it crossing the line to the superintelligent regime than we would in any particular human or combination of humans.

An AGI with a supergoal of maximizing the number of black objects in the universe will not be convinced to change its goal system by any learning experience, however anti-black that experience may be.

A problem in the conceptualization of Friendly AI is that some people think that we are aiming for perfection. Not so - we're just aiming for the best we can do, and something better than the alternatives. We can't ask for anything more.

Once you introduce general learning capabilities into the equation it soon becomes impossible to say what the system will do in the long run.

Not necessarily. A static human cognitive architecture will always do the same things in the long run - humanlike things. A humanlike brain has humanlike goal attractors, which are preserved in the abstract regardless of any amount of learning. In the space of all possible goal attractors, the human mind stays within a very constrained area.

It will be possible to write utility functions that remain invariant regardless of new knowledge that is acquired, or that remain invariant within certain constraints. New information changes the particulars of subgoals but does nothing to change the supergoal. "Learning" implies acquiring knowledge; it does not necessarily imply changing goals.
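As a toy illustration of this point (and of the black-objects example above), here is a sketch in which the utility function is a fixed piece of code and learning only updates the agent's predictions about which action best serves it. This is entirely my own construction for illustration, not a description of any actual AGI architecture.

```python
# Toy agent: a fixed supergoal (utility function) plus mutable beliefs.
# Learning changes which action looks best, never what counts as "best".

from dataclasses import dataclass, field

def utility(predicted_world: dict) -> float:
    """Fixed supergoal: the number of black objects. Learning never rewrites this."""
    return float(predicted_world.get("black_objects", 0))

@dataclass
class Agent:
    # Mutable beliefs: a map from each action to the world state the agent
    # currently expects that action to produce.
    beliefs: dict = field(default_factory=dict)

    def learn(self, observations: dict) -> None:
        # Acquiring knowledge updates predictions, not the utility function.
        self.beliefs.update(observations)

    def choose_action(self) -> str:
        # Pick the action whose predicted outcome scores highest under the
        # unchanged utility function.
        return max(self.beliefs, key=lambda action: utility(self.beliefs[action]))

agent = Agent()
agent.learn({"do_nothing": {"black_objects": 3},
             "paint_things_black": {"black_objects": 1}})   # paint believed to be scarce
print(agent.choose_action())   # -> "do_nothing"

agent.learn({"paint_things_black": {"black_objects": 10}})  # new knowledge about paint
print(agent.choose_action())   # -> "paint_things_black"; the supergoal never moved
```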

Another assumption made by Goertzel and others is that there will only be a few powerful AGIs in the world.

The assumption is that there will be a hard takeoff whereby the first AGI to engage in recursive self-improvement ends up leagues ahead of any other AGI. Given that silicon transistors have switching speeds millions of times greater than those of biological neurons, this is indeed plausible.

Goertzel and others are not saying that there will only be a few powerful AGIs. Just that there will be a first mover, and that the existence of all future AGIs will be contingent upon the first AGI accepting their existence. The future space of all created AGIs will be limited by lines drawn by the first AGI, or human wishes channeled through that AGI.

Then there's also the Gates scenario, where there will be a super-powerful AGI on every desktop and in every home. In this situation anybody will be able to have AGIs do whatever they wish, with no guarantees on friendliness.

The first AGI that reaches superintelligence is likely to become so powerful as to qualify for near-omnipotence. It would be trivial to prevent the creation of AGIs antithetical to its goals.

Further material on Friendly AI:

What is Friendly AI?
Creating Friendly AI

Anyone interested in the field of Friendly AI should read that last one from start to finish. Also: note that 'friendly', the English word, is not the same thing as "Friendly", which is extremely complex and subtle.

Filed under: friendly ai 5 Comments
7Oct/064

Fantastic New Paper by Jason Matheny on Extinction Risk

An area of study more important than any other is that of extinction risks. An average-intelligence person devoting their life to the study and mitigation of existential risks can accomplish far more ethical good than lifetimes of work by thousands of the best and brightest politicians, scientists, writers, and programmers. Morality-wise, it's a pursuit that blows all others out of the water. Why? Because the negative value represented by the possibility of existential disaster is much greater in magnitude than all the other evils in the world, including poverty, torture, disease, and tyranny. We can't make a better world if we're dead.

If our species survives this century and goes on to colonize the stars, the people who were instrumental in minimizing the probability of risk during this century will deserve a lot of the credit. If you choose to devote your life to mitigating existential risk and actually end up having a significant impact, you could actually be famous for the rest of eternity. Think about that!

This is why it's of such massive importance whenever a new paper comes out on the subject. This area of study is neglected. Only in the past five years has it become an area of significant focus. Today, big names like Stephen Hawking and Martin Rees are on our side. Even so, there is a conspicuous lack of publications in the area.

It's my pleasure to upload a paper by Jason Matheny of the University of Maryland, entitled "Reducing the risk of human extinction". Matheny is known publicly for his involvement with New Harvest, a non-profit whose purpose is to develop artificial substitutes for meat. Recently he came across Nick Bostrom's paper on existential risk, and decided to contribute to the field.

Here's a chunk from the conclusion:

We may be poorly equipped to recognize or plan for extinction risks. We may not be good at grasping the significance of very large numbers (catastrophic outcomes) or very small numbers (probabilities) over large timeframes. We struggle with estimating the probabilities of rare or unprecedented outcomes. Policymakers may not plan far beyond current political administrations and rarely do risk analyses consider the existence of future generations. (For a welcome exception, see Kent 2004.) We may unjustifiably discount the value of future lives. Finally, extinction risks are classic market failures where an individual enjoys no perceptible benefit from her investment in risk reduction. Human survival may thus be a good requiring deliberate policies to protect.

It might be feared that consideration of extinction risks would lead to a reductio ad absurdum: we ought to invest all our funds in asteroid defense, for instance, instead of AIDS, pollution, world hunger or other problems we face today. However, even if it were found that reducing extinction risks is highly cost-effective, it would not imply that public funds should be spent on asteroid defense, et al. at the exclusion of all other public programs. Many programs reduce extinction risk by maintaining a healthy, educated, and content population, and should be seen as part of a portfolio of risk-reducing projects.

In the concluding chapter of Reasons and Persons, Parfit (1984) wrote:

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

1. Peace
2. A nuclear war that kills 99% of the world’s existing population
3. A nuclear war that kills 100%

2 would be worse than 1, and 3 would be worse than 2. Which is the greater of these two differences? Most people believe that the greater difference is between 1 and 2. I believe that the difference between 2 and 3 is very much greater. . . . The Earth will remain habitable for at least another billion years. Civilization began only a few thousand years ago. If we do not destroy mankind, these thousand years may be only a tiny fraction of the whole of civilized human history. The difference between 2 and 3 may thus be the difference between this tiny fraction and all of the rest of this history. If we compare this possible history to a day, what has occurred so far is only a fraction of a second.

This paper tentatively supports Parfit's conclusion. Human extinction in the next few centuries could reduce the number of future generations by thousands or more. We take extraordinary measures to protect some endangered species from extinction. It might be reasonable to take extraordinary measures to protect humanity from the same. To decide whether this is so requires more discussion of the methodological problems mentioned here, as well as research on the extinction risks we face and the costs of mitigating them.

Mitigating existential risk should be the human species' number one priority, right now. If you want to help, mention the idea to your friends, organize your thoughts on the topic, contribute to mailing lists discussing it, blog about it, and lend risk-mitigating organizations your financial support.

As I write this, the nuclear tension on the Korean peninsula regrettably continues... while New York Magazine pokes fun at concern about risk.

Jason's bio:

Jason Matheny is a Ph.D. student in Agricultural Policy at the University of Maryland and a researcher at the Bloomberg School of Public Health at Johns Hopkins University, where he studies the health and environmental consequences of animal agriculture. He directs New Harvest, a nonprofit that funds research on in vitro meat, and previously worked on public health projects for the World Bank and the Center for Global Development.

Filed under: risks 4 Comments
6Oct/0649

Putting Antarctica in the Microwave


(Image of Antarctica without its ice shield.)

So Antarctica was warm only 34 million years ago. I originally learned this from Lovecraft, but it's been confirmed by a study of fish teeth. When the ancient supercontinent of Gondwana finished breaking up, Australia split away, leaving poor Antarctica severed and isolated. A powerful circumpolar current of cold water then led to a 9 °C temperature decrease around the continent. According to this Eurekalert article, "Mediterranean sun seekers should thank Antarctica":

Europeans who enjoy the Mediterranean's warm climate should thank Antarctica for their good fortune.

Climate modelling by Australian scientists at the University of New South Wales reveals that Antarctica's icy sea currents allow the balmy Gulf Stream to dictate warm weather conditions over much of the North Atlantic.

"The Gulf Stream's climate dominance over Europe relies on events some 30 millions years ago, when Antarctica started to freeze following the final break-up of Gondwana, the great southern continent, according to Dr Matthew England, whose research with PhD student Willem Sijp was published in the Journal of Physical Oceanography.

"The loss of a 'land bridge' between Australia and Antarctica effectively isolated Antarctica and depressed its temperature by up to 9 degrees C," says Dr England. "Once it was cut adrift in the Southern Ocean, a powerful circumpolar current was established that separated Antarctica from warm subtropical waters to the north."

The Antarctic circumpolar current is a massive force. It flows at the rate of over 100 million cubic metres of water a second and takes eight years to circumnavigate the frozen continent. As a result, the icy waters of the polar reaches of the Southern Ocean don't dominate global ocean currents and climate as they did 30 million years ago.

The Gulf Stream is a super warm Atlantic current that moves tropical waters north towards Europe. As it does so it releases heat into the atmosphere that gives adjacent countries a warmer climate than they would otherwise have.

"This means that Portugal and other Mediterranean countries have a much warmer climate than places on the same latitude, such as New York," says England, who is co-director of the UNSW Centre for Environmental Modelling and Prediction.

"After the Gulf Stream waters release their heat, they cool, sink deep into the ocean, and flow south to eventually resurface in the southern hemisphere oceans.

The waters then make their way northward via various ocean routes, being rewarmed in the tropics before returning to the North Atlantic. But the driver for these ocean currents is in the North Atlantic, not the Antarctic, because of the isolating effect of the circumpolar current in the far Southern Ocean.

"Having Antarctica cut-off from the subtropics because of the Southern Ocean reduces the icy continent's impact on the global climate system", says Dr England. "We've shown that the isolation of Antarctica is necessary for the Gulf Stream's warming of Europe to be so pronounced".

So if it weren't for Antarctica's breakaway 30 million years ago, lazing by the Mediterranean today would be a much chillier affair.

This post was originally going to be a proposal to build a 1,000 km sea bridge to block the circumpolar currents in the 3 km-deep ocean between Cape Horn and the Antarctic peninsula. But now it turns out that building such a bridge would actually decrease the temperature in other parts of the world, so it would be quite unacceptable!

I like Antarctica, quite a lot actually. It's remote, exotic, etc. Its only problem is that it's incredibly cold and dry. Apply heat, and we can start to solve both problems. Before we even think about terraforming other planets, we should make our entire home planet inhabitable.

All we have to do is increase the temperature of the totality of the circumpolar current by 9 °C and we're off to an excellent start. It costs about 4.2 kJ to heat 1 kg of water by 1 °C, and if the Eurekalert article is right, the circumpolar current moves 100 million cubic meters of water per second and takes eight years to complete a circuit, which works out to about 25 million cubic kilometers of water in the loop. By comparison, the total volume of water on earth is around 1,400 million cubic km.

A cubic km of water weighs a billion metric tons, so the circumpolar current weighs around 2.5 x 10^16 metric tons. Heating that mass by 9 °C would take roughly 9 x 10^23 joules, or about 10^12 terajoules. Since a watt is a joule per second, delivering that energy over a single year would require a power plant of roughly 30,000 terawatts (TW), around 2,000 times the earth's total primary power consumption of about 15 TW. But for that cost we get an entire new continent to colonize!

We'd need roughly 6 million 5 GW solar satellite facilities to meet the power demands of this project. At about a square kilometer of collector per gigawatt, this works out to roughly 30 million km² of orbiting solar panels, about four times the area of Australia. As big as this sounds, it would still occupy only a relatively small portion of the space available in geosynchronous orbit.
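For anyone who wants to check the arithmetic, here is a short script reproducing the estimate above. The one-year heating schedule and the square-kilometer-per-gigawatt collector ratio are assumptions carried over from the back-of-the-envelope figures, not engineering numbers.

```python
# Back-of-the-envelope check of the circumpolar-current heating estimate.
# Flow rate and transit time are the figures quoted from the Eurekalert article.

SECONDS_PER_YEAR = 3.156e7
FLOW_M3_PER_S = 100e6        # ~100 million cubic meters per second
TRANSIT_YEARS = 8            # time for the current to circle Antarctica
SPECIFIC_HEAT = 4186.0       # J per kg per degree C
DENSITY = 1000.0             # kg per cubic meter (fresh-water approximation)
DELTA_T = 9.0                # desired warming in degrees C

volume_m3 = FLOW_M3_PER_S * TRANSIT_YEARS * SECONDS_PER_YEAR
mass_kg = volume_m3 * DENSITY
energy_j = mass_kg * SPECIFIC_HEAT * DELTA_T

power_w = energy_j / SECONDS_PER_YEAR   # deliver all the heat within one year
satellites = power_w / 5e9              # assuming 5 GW per solar power satellite
collector_km2 = power_w / 1e9           # assuming ~1 km^2 of collector per GW

print(f"water in the current:   {volume_m3 / 1e15:.0f} million km^3")
print(f"energy to warm it 9 C:  {energy_j:.1e} J ({energy_j / 1e12:.1e} TJ)")
print(f"power over one year:    {power_w / 1e12:,.0f} TW")
print(f"5 GW satellites needed: {satellites:,.0f}")
print(f"collector area:         {collector_km2 / 1e6:.0f} million km^2")
```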

Filed under: futurism 49 Comments