Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

30Nov/06

“Every time they summate, we have dynamics”

A couple of videos demonstrating cymatic phenomena.

The last part of the final video is the most unusual of all.

Filed under: physics 8 Comments
29Nov/06

5-minute Molecular Nanotech Intro

Mike Treder at the Center for Responsible Nanotechnology reminds us that a 5-minute introduction to the concepts behind molecular manufacturing is available online. Here's how it begins:

"Molecular manufacturing refers to a revolutionary near-future manufacturing technology. Whereas today's manufacturing uses large and imprecise machines, molecular manufacturing will use molecular machines to build engineered molecular products. The performance, value, and scope of this technology will be revolutionary and disruptive.

The root idea of molecular manufacturing is that molecule-scale fabricators can output their own mass of product in a few minutes. Built from precisely positioned and strongly bonded molecules, the products will be precise and strong. Computer control will enable a wide range of products, including more manufacturing systems. Doubling the number of fabricators every hour would scale a single fabricator into a kilogram-scale personal nanofactory (PN) in a few days. The fully automated PN would contain arrays of fabricators and equipment to join their output into large-scale, integrated, heterogeneous, complete products..."


Read the rest.
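The "few days" figure in the excerpt follows from simple doubling arithmetic. Here is a rough check in Python; the mass of a single molecular fabricator is not given in the excerpt, so the ~10^-18 kg figure below is purely an illustrative assumption.

```python
import math

# Rough check of "a single fabricator scales to a kilogram-scale nanofactory in a few days",
# assuming one doubling per hour (per the excerpt) and an assumed fabricator mass of ~1e-18 kg.
fabricator_mass_kg = 1e-18   # assumption, for illustration only
target_mass_kg = 1.0

doublings = math.log2(target_mass_kg / fabricator_mass_kg)   # ~59.8 doublings
days = doublings / 24.0                                       # one doubling per hour
print(f"{doublings:.0f} doublings, about {days:.1f} days")    # ~60 doublings, ~2.5 days
```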

29Nov/06

Meetings on Thorium

Brian Wang forwarded this along:

Thorium Power Co. is talking to India about its thorium energy technology, and the company and DBI are putting on a forum in Washington, DC, on Nov 30 to inform the DC media and others about thorium.

Clean Nuclear Energy: Thorium 2006

DBI, a California-based aerospace company involved in the research and development of thorium-fueled reactors, will host a forum on Thursday, November 30, from 10:00 a.m. - 3:00 p.m. at the National Press Club in Washington, DC, on thorium as an abundant source of clean energy to meet the world's growing energy needs.

The forum will address the role of thorium in three key areas: the environmental benefits of thorium; the safety and national security aspects of thorium; and the economic benefits and commercial applications of thorium. A detailed agenda and list of speakers can be found below.

WHO:
DBI, a California-based company established in 1965 and involved in the research and development of thorium-fueled reactors, joined by Thorium Power, Ltd., of Virginia

WHAT:
Forum on thorium as an alternative source of clean nuclear energy

WHERE:
National Press Club
529 14th Street, N.W.
Holeman Lounge (13th Floor)

Filed under: technology 1 Comment
29Nov/06

Solar/Kinetic Weapons in the News

From IOL Technology in NZ:

Reports in the US suggest that ideas either on the drawing board, or else already in development, include killer satellites that could destroy an enemy's satellites, a Common Aero Vehicle (CAV) that could swoop with hypersonic speed up to 3000 miles to attack a target, Hyper-Velocity Rod Bundles which would fire tungsten bars weighing 100kg from a permanently orbiting platform - and even a space-based laser that uses mirrors to direct the sun's rays against ground targets.

I talked about rods earlier... also, I'm starting to get worried about trends in the direction of solar weapons, i.e., weapons that use the sun's power to incinerate things. These have a lot of potential, and could prove far stronger than nuclear weapons. Solar weapons are one of three classes of superweapon that should be banned forever - nuclear (ICBMs), solar (beams), and kinetic (meteors), in ascending order of severity. Following is a (rough) excerpt from a work in progress that reviews twelve major existential threats in detail, "Catastrophic Technological Risk":

Solar weapons are a serious concern both because of the tendency to underestimate what the world's superpowers will be capable of in this area over the next 20-50 years, and because of the general feel-good sensation associated with solar power, which makes it seem utterly harmless.

For this risk, the biggest worry is the threat of an arms race between two or more powerful countries. Bigger, faster weapons and shields precipitate still bigger and faster weapons and shields, and so on. The only natural endpoints of such a world-endangering endeavor would be victory for a single country, probably through a system capable of instantaneously immobilizing an entire enemy nation, or an explosive, all-out war, one that might include nuclear weapons but would be post-nuclear in its scope and severity.

Solar weapons are especially attractive to militaries because they would trump nuclear weapons. Intercontinental ballistic missiles (ICBMs) are physical objects that must propel themselves to their destination -- directed energy moves at the speed of light and requires sophisticated active shielding to defend against. At the highest intensities, shielding may be impossible, even in principle. This gives attackers a first-strike advantage on the solar battlefield.

One might say that solar energy and directed energy are not the same thing, and that what I'm actually talking about is directed energy, not "solar weapons" per se. But there is a reason I am talking about solar weapons specifically. Fossil fuels could not possibly produce the output necessary to power the weapons foreseen here: it would simply be too expensive. Nuclear is definitely a possibility, but it's easier to talk about "solar weapons" than "nuclear-powered electromagnetic weapons", so just consider the latter included in this category of risk.

In the past, directed energy has been plagued by various development problems. Most weapons only work when skies are relatively clear, though newer models attempt to be more flexible. Weapons of the future will circumvent the limitations of the past. The most powerful solar weapons will push aside air molecules before sending the primary energy arc down the channel, and this will all happen within fractions of a second, even when the target is dozens of km away. Infrared, auditory, electrical, laser, and particle-based superweapons are all conceivable. 'Artificial lightning' will be available to the military commanders of the future. This is not speculative – hundreds of millions of dollars went towards directed energy research in 2005, and dozens of projects are either on the drawing board or already in development.

Directed energy weapons will be mounted on ships, planes, jeeps, helicopters, even individual soldiers. The weapons of concern here are in the TW range and are likely to be ship-mounted, though next-generation power plants may offer MW/g power densities, allowing such superweapons to be miniaturized. A terawatt-level electric discharge would be equivalent to hundreds of lightning strikes, though the intensity of a lightning strike is not necessary to destroy most targets.

To imagine the long-term potential of solar weapons, consider concentrating the sunlight falling on a substantial area (100 km2) onto a 0.5 km2 region, making the intensity of sunlight there roughly fifty times greater (a 200:1 area ratio at 25% efficiency). Or, if it is night, projecting energy fifty times greater than if the sun suddenly rose. Because air is practically transparent to thermal radiation, there would be nowhere to run, except underground or underwater. People on the street would be boiled alive in the intense heat. All moisture in the air and on the ground would quickly be converted to steam, some of it with explosive force.
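A quick check of the concentration arithmetic above, in Python. The 1 kW/m2 ground-level insolation figure is an assumption used only for illustration; the area ratio and efficiency are taken from the excerpt.

```python
# Quick check of the 100 km^2 -> 0.5 km^2 concentration example above.
collection_area_km2 = 100.0
target_area_km2 = 0.5
efficiency = 0.25            # end-to-end collection/redirection efficiency, per the excerpt
insolation_w_m2 = 1000.0     # assumed ground-level solar intensity, for illustration

gain = (collection_area_km2 / target_area_km2) * efficiency   # ~50x normal sunlight
intensity_w_m2 = gain * insolation_w_m2                        # ~50,000 W/m^2 on the target
print(f"Intensity gain: {gain:.0f}x, roughly {intensity_w_m2 / 1000:.0f} kW/m^2")
```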

Welcome to the Future!

Filed under: risks 33 Comments
28Nov/06

Reversible Molecular Computing

Found on John Baez's weekly finds in mathematical physics:

K. Eric Drexler writes:

Dear John,

John Baez wrote:

> [...] with a perfectly tuned dynamics, an analogue system
> can act perfectly digital, since each macrostate gets
> mapped perfectly into another one with each click of
> the clock. But with imperfect dynamics, dissipation
> is needed to squeeze each macrostate down enough so it
> can get mapped into the next - and the dissipation
> makes the dynamics irreversible, so we have to pay a
> thermodynamic cost.

Logically reversible computation can, in fact, be kept on track without expending energy and without accurately tuned dynamics. A logically reversible computation can be embodied in a constraint system resembling a puzzle with sliding, interlocking pieces, in which all configurations accessible from a given input state correspond to admissible states of the computation along an oriented path to the output configuration. The computation is kept on track by the contact forces that constrain the motion of the sliding pieces. The computational state is then like a ball rolling along a deep trough; an error would correspond to the ball jumping out of the trough, but the energy barrier can be made high enough to make the error rate negligible. Bounded sideways motion (that is, motion in computationally irrelevant degrees of freedom) is acceptable and inevitable.

Keeping a computation of this sort on track clearly requires no energy expenditure, but moving the computational state in a preferred direction (forward!) is another matter. This requires a driving force, and in physically realistic systems, this force will be resisted by a "friction" caused by imperfections in dynamics that couple motion along the progress coordinate to motion in other, computationally irrelevant degrees of freedom. In a broad class of physically realistic systems, this friction scales like viscous drag: the magnitude of the mean force is proportional to speed, hence energy dissipation per distance travelled (equivalently, dissipation per logic operation) approaches zero as the speed approaches zero.

Thus, the thermodynamic cost of keeping a classical computation free of errors can be zero, and the thermodynamic cost per operation of a logically reversible computation can approach zero. Only Landauer's ln(2)kT cost of bit erasure is unavoidable, and the number of bits erased is a measure of how far a computation deviates from logical reversibility. These results are well-known from the literature, and are important in understanding what can be done with atomically-precise systems.

With best wishes,

Eric

For an introduction to Drexler's plans for atomically-precise reversible computers, see:

28) K. Eric Drexler, Nanosystems: Molecular Machinery, Manufacturing, and Computation, John Wiley and Sons, New York, 1992.

The issue of heat dissipation in such devices is also studied here:

29) Ralph C. Merkle, Two types of mechanical reversible logic, Nanotechnology 4 (1993), 114-131. Also available at http://www.zyvex.com/nanotech/mechano.html

I need to think about this stuff more!

The upshot is that by running our minds on reversible molecular computers, we could in principle keep computing indefinitely while expending arbitrarily little energy per operation.
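Here is a toy numerical illustration of the two costs Drexler distinguishes: friction-like dissipation that scales with speed (and so can be made arbitrarily small by slowing down) and Landauer's kT ln 2 floor for each bit actually erased. The drag coefficient below is an arbitrary illustrative value, not a figure from Nanosystems.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

def energy_per_op(speed, drag_coefficient=1e-21, bits_erased=0):
    """Toy model: friction-like dissipation proportional to speed, plus
    Landauer's kT*ln(2) for each bit actually erased.
    drag_coefficient is an arbitrary illustrative value, not from Nanosystems."""
    return drag_coefficient * speed + bits_erased * k_B * T * math.log(2)

for v in (1.0, 0.1, 0.01):
    print(v, energy_per_op(v))                       # friction term -> 0 as speed -> 0
print("Landauer floor per erased bit (J):", k_B * T * math.log(2))   # ~2.9e-21 J at 300 K
```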

Anthropics alert: if this is so, why weren't we born in the future era of infinite free computation and lifespan?

Answer: we still know practically nothing about the physical delineations of our reference class, so we can't say what the probability distribution of our likelihood of birth looks like with much accuracy.

Read more on reversibility from Robin Hanson and anthropics from Milan M. Cirkovic.

27Nov/06

Fact or Fiction?

Seen earlier this month on an obscure message board:

Ten months from now, on September 11, 2007, a thermonuclear device will be detonated over Ground Zero, the memorial ground of the 911 event. The bomb will be detonated by mobile phone, at the end of the 911 memorial service. It will have a yield of approximately 5 kilotons. Within 24 hours, a mutated strain of the ebola virus developed by the former Soviet Union as a biological weapon will be dumped into the water supply. The combination of nuclear fallout and the virus is expected to cripple all emergency response systems in place, killing at least 500,000.

The nuclear device, a relic from the former Soviet Union, is currently in Jordan. It is expected to be transported to the US inside a shipping container sometime in the next three months. A sample of the biological weapon has already been smuggled inside the US and is currently being cultured to produce the massive required quantities.

Better deploy that new radiation detection technology for our ports, fast!

Filed under: risks 24 Comments
27Nov/06

Prediction: Artificial Skyfish by 2025

In cryptozoology, a skyfish, or "rod", is a supposed atmospheric entity that travels too fast to be seen by the unaided eye. A relatively new addition to the cryptozoological laundry list, rods were 'discovered' in the early 1990s but debunked by 2003 at the latest. It turns out that they are just videographic artifacts, produced by the motion blur of a conventional insect being filmed at 60 fps.

How fast does something need to travel to move too fast to be seen? Of course this depends on its size and distance. According to this analysis of human vision, Air Force pilots were able to identify an image of a plane flashed in front of them for only 1/250th of a second. This is around the limit of human vision. If the flash lasted only 1/500th of a second, they would almost certainly not even notice it.

Imagine a rod that moves 500 times its own length in a second. Even with a super-expensive 500 fps video camera, the skyfish would occupy any given spot for no more than a single frame, about 1/500th of a second, making it thoroughly invisible to the naked eye. For a rod 10 cm in length, that works out to 50 m/s, or about 110 mph. For a larger rod 50 cm in length, it would need to travel at around 250 m/s, or more like 550 mph. Indeed, the Peregrine falcon, which reaches speeds of up to about 240 mph during steep dives, almost travels too fast to be seen without high-fps photography.
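The speed figures follow directly from the 500-body-lengths-per-second assumption:

```python
# Speeds implied by "500 body lengths per second" for the two rod sizes above.
def rod_speed_m_s(length_m, lengths_per_second=500):
    return length_m * lengths_per_second

for length_m in (0.10, 0.50):
    speed = rod_speed_m_s(length_m)
    print(f"{length_m * 100:.0f} cm rod: {speed:.0f} m/s, about {speed * 2.237:.0f} mph")
# 10 cm -> 50 m/s (~112 mph); 50 cm -> 250 m/s (~559 mph)
```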

Clearly, if a flying creature were discovered that traveled at 550 mph, it would be a boon to modern science. It is hypothesized that these creatures are made even harder to observe by their partial or total transparency. It has even been asserted (by UFO nuts) that they may be composed of an entirely undiscovered phase of matter, or possess the ability to pass in and out of an alternative dimension.

Unfortunately for insufficiently skeptical cryptozoologists, the skyfish has been thoroughly debunked. But might it one day be possible to engineer a small unmanned aerial vehicle (UAV) that moves too fast to be seen by the naked eye?

The surveillance potential of an artificial skyfish is undeniable. Imagine tens of thousands of these beasts flitting about in the mountains of Afghanistan, taking snapshots of people and facilities with greater resolution and at more angles than spy satellites or conventional UAVs could possibly muster. Traditional spying techniques would become all but obsolete.

The supposed method of locomotion for the fabled skyfish is undulating fins, similar to those used by the cuttlefish. According to this paper on cuttlefish locomotion, around 30% of the mass/volume of a cuttlefish is devoted to locomotion, whereas images of the skyfish suggest that more like 50% of its mass/volume consists of its undulating 'fins' - a fin-to-body ratio of 1:1, an improvement on the apparent 1:2 ratio found in cuttlefish. But a factor-of-2 improvement would hardly be enough to make up for the 1:800 density difference between air and water. Cuttlefish also use jetting, like squids, to propel themselves along, whereas an artificial skyfish would probably not have that convenience.

A man-made rod would qualify as a µAV (micro air vehicle). µAVs today are about 15 cm long and fly at 10 to 20 m/s (22 to 45 mph). [Diagram: µAVs compared with other flight vehicles.]

Because the Reynolds number for tiny flyers is so low, they cannot rely on lift to move along, but must instead flap, undulate, or use propellers operating at 10 Hz+ to stay aloft. Power systems with high energy densities are essential. For this reason, leaders in the field of µAV research, the Entomopter Project in particular, use chemical energy sources, which offer superior energy density to electrical power storage. The amount of energy you can store in a drop of gasoline is much greater than what could be stored in a battery of similar size. A milliliter of gasoline contains 32 kilojoules, which is about 9 watt-hours condensed into an amount of fuel that weighs less than one gram.
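The watt-hour figure is a unit conversion from the 32 kJ quoted above; the battery comparison below uses a ballpark specific energy for 2006-era lithium-ion cells, which is an assumption added for illustration.

```python
# Energy in 1 mL of gasoline versus a same-mass battery (rough, 2006-era figures).
gasoline_kj_per_ml = 32.0
gasoline_g_per_ml = 0.75     # approximate gasoline density
liion_kj_per_g = 0.6         # assumed ballpark specific energy for Li-ion cells

watt_hours = gasoline_kj_per_ml * 1000.0 / 3600.0            # ~8.9 Wh per mL
battery_kj_same_mass = gasoline_g_per_ml * liion_kj_per_g    # ~0.45 kJ in the same mass of battery
print(f"1 mL gasoline = {watt_hours:.1f} Wh, "
      f"~{gasoline_kj_per_ml / battery_kj_same_mass:.0f}x the energy of a same-mass battery")
```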

The Entomopter Project uses something called Reciprocating Chemical Muscle (RCM), "motivated chiefly by the basic necessity for very high rate of energy release from compact energy sources". [Graph: required power versus forward velocity for various craft masses, for their particular µAV design.]

A key observation made in the paper is that with each doubling of mass, nearly eight times as much power is required to propel the craft to a given speed. The upshot is that with each halving of mass, only about one-eighth the power is needed to achieve the same velocity. In the entomopter testbed experiments, 1 ml of fuel delivered 13 watts for a 100 g craft, allowing it to fly for about 30 seconds. The paper argues that if the mass of the µAV were halved, it would have been able to fly for about 3 minutes on the same amount of fuel, underscoring the importance of weight to effective micro-flight.
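That observation corresponds to power scaling roughly with the cube of mass at a fixed speed. A small sketch under that assumption, using the testbed figures above; the cube-law exponent is inferred from the "double the mass, nearly eight times the power" statement, not taken from the paper directly.

```python
# "Double the mass -> nearly 8x the power" corresponds to P ~ m^3 at a fixed speed.
def scaled_power_w(base_power_w, base_mass_g, new_mass_g, exponent=3.0):
    return base_power_w * (new_mass_g / base_mass_g) ** exponent

base_power_w, base_mass_g = 13.0, 100.0   # entomopter testbed figures quoted above
fuel_j = base_power_w * 30.0              # 1 mL of fuel lasted ~30 s at 13 W, so ~390 J

half_mass_power = scaled_power_w(base_power_w, base_mass_g, 50.0)    # ~1.6 W for a 50 g craft
print(half_mass_power, fuel_j / half_mass_power / 60.0)              # ~4 min on the same fuel
# (the paper quotes ~3 minutes, a bit below this naive scaling, presumably due to fixed overheads)

print(scaled_power_w(base_power_w, base_mass_g, 10.0))   # a 10 g craft at the same speed: ~0.013 W
```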

The entomopter's actuator system was milled from steel. In the not-too-distant future, we will have actuators built from superior materials, such as carbon nanotubes, which achieve 60 times the strength of steel with 1/5th the density. Already, highly purified nanotubes are available for $500/gram, and more sophisticated synthesis techniques are dropping costs toward dollars per gram.

If a µAV today weighs about 100 g and flies at 10 to 20 m/s using 13W, surely we can imagine a µAV in 2025, built primarily from carbon nanotubes, that weighs approximately 10 g and travels 100 to 200 m/s (220 - 440 mph), too fast to be seen with the unaided eye. These artificial skyfish could be deployed by aircraft by the hundreds, or launched out of shoulder-mounted launchers and controlled using laptops. To keep the skyfish from being captured by the enemy, they would need to be programmed to automatically return to base at appropriate intervals for refueling or recovery. Using simple robotic joints and locks, skyfish that run out of fuel could be instructed to break themselves into hundreds of tiny pieces, too small to identify on the ground without sophisticated equipment. Metamaterials could be used to actually make the skyfish totally transparent.

The list of potential applications for artificial skyfish is quite large, if you use your imagination. They could be designed to kamikaze into enemy planes or missiles, rendering them impotent. They could deposit tiny sensors into enemy buildings, recording the conversations inside and transmitting them to the skyfish, which forwards the recordings to intelligence. Working in cooperation, skyfish could even carry kg-sized payloads to their destinations, or embed themselves in enemy vehicles, awaiting activation. More advanced versions of the future could weigh even less, traveling longer distances without refueling. We may not see an artificial skyfish project proposed in the next ten years, but when you see the headline from DARPA announcing cutting-edge research into high-velocity, stealth µAVs, remember that you first heard of the concept from some guy named Anissimov.

Filed under: futurism 8 Comments
23Nov/06

Overcoming Bias Blog

Great new blog from the Future of Humanity Institute on overcoming cognitive bias. It's a group blog whose authors include at least four professional philosophers and many prominent transhumanists, including Robin Hanson, Nick Bostrom, Hal Finney, Peter McCluskey, and Eliezer Yudkowsky.

Also: great article in the LA Times about econo-bloggers.

Filed under: philosophy 94 Comments
18Nov/06

NewScientist: “Brilliant Minds Forecast the Next 50 Years”

Most of the respondents only discussed advances they hope will happen in their own scientific field.

But wait, you can't talk about the future, because it hasn't happened yet, right? Looking any further than five years into the future must mean you are childish, and have your head in the clouds.

Well, the point of this section in New Scientist is to say that this attitude is wrong, and that futurism deserves a place in scientific discourse. We make predictions and continuously refine them according to new evidence. For example, the possibility of wireless energy transfer changes the picture for the next 20 or so years, and beyond. There's nothing wrong with futurism, and many futurist predictions of the past have been borne out - though more have failed than succeeded.

Remember, Singularitarians such as myself foresee a prediction horizon in the future - a horizon caused by the arrival of superintelligence. One respondent, Steven Pinker, unsurprisingly, brought up his uncertainty about the future in his response:

I absolutely refuse even to pretend to guess about how I might speculate about what, hypothetically, could be the biggest breakthrough of the next 50 years. This is an invitation to look foolish, as with the predictions of domed cities and nuclear-powered vacuum cleaners that were made 50 years ago.

I will stick my neck out about the next five to ten years, however. I think we'll see a confirmation of the fundamental hypothesis of evolutionary psychology - that many aspects of human cognition and emotion are evolutionary adaptations - from various new techniques for assessing signs of selection in genomic variation within and between species. The recent discoveries of selective pressures on genes for the normal versions of genes for microcephaly, for a speech and language disorder, and for development of the auditory system will be, I suspect, the harbinger of a large number of naturally selected genes with effects on the mind.

You got it, Pinker! I respect his general attitude, but I must break with Pinker's reluctance to predict the biggest breakthrough. To me, it seems obvious - the biggest breakthrough will be the one that itself begets more breakthroughs. That is, a breakthrough in brainpower, whether it be Artificial Intelligence, or Intelligence Augmentation. We transhumanist folks call this the Singularity. Some refuse to predict when they think it will happen, but for those who buy into the singularity hypothesis, the general consensus tends to be sometime between 2020 and 2040.

And how about the Singularity? Did any respondents bring it up? Terry Sejnowski, the computational neurobiologist, did...

How far will we get in 50 years? By then we will have machines that pass the Turing test. However, this is a weak test that does not get at the harder problem, which is to understand how the brain creates consciousness. To crack this we must first understand unconscious processing, which does most of the heavy lifting for us. I suspect that when we start to make progress with this the problem of consciousness will, like the Cheshire cat, disappear, leaving only a smile in the air.

I get the feeling that any AI complicated enough to actually pass the Turing Test would probably be conscious, both in the information-theoretic sense of being made of self-watching cognitive loops and in the phenomenological sense of having that "I see red" special sauce of subjective experience. We humans like to think that consciousness is super-special, and that nothing that operates on "mere" computations can ever be granted it without great difficulty, but unfortunately much of this sentiment derives from pseudoscientific dualism that should have died out 60 years ago.

But it was Eric Horvitz who spoke most explicitly about AI:

Within 50 years, lives will be significantly enhanced by automated reasoning systems that people will perceive as "intelligent". Although many of these systems will be deployed behind the scenes, others will be in the foreground, serving in an elegant, often collaborative manner to help people do their jobs, to learn and teach, to reflect and remember, to plan and decide, and to create. Translation and interpretation systems will catalyse unprecedented understanding and cooperation between people. At death, people will often leave behind rich computational artefacts that include memories, reflections and life histories, accessible for all time.

Robotic scientists will serve as companions in discovery by formulating theories and pursuing their confirmation. By mid-century, advances attributed to automated scientists will include several world-changing breakthroughs.

When Horvitz uses the word "companions" to describe future AIs, he is either purposefully toning himself down or (more likely) he actually believes it. Is it appropriate to call a mind that runs on logic elements operating at 10,000,000 times the serial speed of neurons a "companion"? In the moral sense, hopefully; in the practical sense, not at all.

The language Horvitz uses to describe future robotic scientists is anthropocentric - he clearly implies rough equivalency of ability between post-Turing AIs and the smartest human scientists. But the point that Moravec, Kurzweil, Yudkowsky, and many others have been making for upwards of a decade is that once AI reaches human-equivalence, it necessarily soars past it. Anyone who paints the picture otherwise is either 1) unintentionally deceiving the public about the consequences of AI, or 2) displaying a quaint naivete about the underlying hardware differences between biological and nonbiological cognitive systems. It's the old vision, one that has been around for decades...

Even though Horvitz talks about AI systems in the background, he still implies that individual AIs will do the cognitive lifting of roughly a single human being. And would you really expect the AI of 2050 to be instantiated in the robotics technology of the late 20th century? I mean, you can see the bolts on that robot.

The notion that we will invent AI, and then AI will reason on par with us indefinitely, is based on the assumption that human intelligence is all there is, and there's nothing beyond it. This attitude strikes me as like that of a person in a small rural village who absolutely refuses to acknowledge the existence of any outside world.

Anyway...

Francis Collins talked anti-aging, though of course not SENS, because that is verboten:

Fifty years from now, if I avoid crashing my motorcycle in the interim, I will be 106. If the advances that I envision from the genome revolution are achieved in that time span, millions of my comrades in the baby boom generation will have joined Generation C to become healthy centenarians enjoying active lives.

What a cheery vision! However, it is unambitious. Collins is probably aware of Aubrey de Grey's work and arguments, but like most who do aging research, would prefer to ignore it.

Mr. Collins has little reason to be supportive of the possibility of indefinite lifespans; after all, he became a born-again Christian after observing the faith of his critically ill patients and reading a book by C.S. Lewis. This, my friends, is intellectual failure.

Richard Miller said things similar to Collins:

Turning on the same protective systems in people should, by 2056, be creating the first class of centenarians who are as vigorous and productive as today's run-of-the-mill sexagenarians.

Gregory Chaitin, the information theorist, displayed some nascent transhumanism:

I hope that by 2056 weird astronomical observations will lead to radical new fundamental physics. I expect people will be tampering with the human genome, which should be fun. In my own field, I hope the current desiccated, formal approach has died out and people are more adventurous and creative.

John Halpern, assistant professor of psychiatry at Harvard Medical School, had something very interesting and non-boring to say:

In the coming months I will give psychotherapy assisted by MDMA (ecstasy) to dying cancer patients to see if their anxiety, pain and other end-of-life issues improve. I would like to test whether LSD or psilocybin can relieve debilitating cluster headache, and whether peyote offers Native Americans a treatment for drug and alcohol abuse. Within 10 years, enough positive results could establish that there are special benefits from "psychedelics". This may lead to a new field of medicine in which spirituality is kindled to help us accept our mortality without fear, and where those with addiction problems, anxiety or cluster headache discover a path to genuine healing. Capable of inducing the deeply mystical, these substances may prove to be a source for compassion and hope so desperately needed in these perilous times.

Perhaps psychedelics aren't really as crude or useless as our Puritanical culture would assert.

Meanwhile, Bill Joy, who was so concerned about existential risk just a little while ago, blows this opportunity by talking about energy:

work in the area of green technology for energy and resources. The most significant breakthrough would be to have an inexhaustible source of safe, green energy that is substantially cheaper than any existing energy source.

Ideally such a source would be safe, in that it couldn't be made into weapons, nor would it make hazardous or toxic waste or CO2. It seems to me that this is most likely to come from a deep new understanding of a physical effect at the nanoscale (or smaller) that allows safe and simple access to fusion -- or another completely unexpected energy technology

It seems to me, Bill, that we already have one - thorium. It's just a matter of building the reactors. In any case, spreading the word about existential risk is more important than green energy. Can't have green energy if your planet is on fire, or the cellular machinery of every member of the human species is being hijacked by malignant nanomachines, now can you?

Focusing on the existential risk aspect is especially important because it's a less fun job. Bill would rather talk about green energy than technological risks, because green energy makes us smile and technological risks don't. This species-universal tendency to ignore the risks makes it all the more important for rationalists to counterbalance it.

Filed under: futurism 21 Comments
17Nov/06

Hank Conn on the Singularity Issue

From an ImmInst thread:

(1) Matter, from atoms, to molecules, to molecular components, to cells, to trees, to animals, humans and the human brain (i.e. hardware), when combined with other matter in specific ways, from the strong force within an atom, to atomic and molecular bonding, chemical signaling within an organism, or electrical signaling in the human brain (i.e. software), produces changes to the environment around it (i.e. executes an algorithm).

(2) The algorithm that the human brain and body execute is the algorithm of the human mind. The human mind that you know of as yourself is exactly one instance of one particular implementation in hardware and software out of the infinitely large set of all possible implementations of one specific algorithm out of the infinitely large set of all possible goal-seeking, domain independent, common sense, generally intelligent algorithms (i.e. mind designs- not all mind designs have a psychology anything like that of humans- see this for a more in-depth explanation). This specific algorithm and implementation have been evolved through many generations of natural selection that ultimately led to the combination of your genes being "instantiated" (so to speak): growing into the specific, operational instance of the specific implementation of the specific algorithm within mind design space that is you.

(3) Suppose some instance of a mind has direct access to some means of both improving and expanding both the hardware and software capability of its particular implementation. For example, an intelligence implemented on silicon computing technology of today or advanced nanotechnological computing technology of the years ahead could purchase more memory or processing power, thus improving/expanding the hardware upon which it is implemented, thus supplying more resources for the algorithm to use, thus increasing the relative capability of the algorithm compared to other instances of intelligent algorithms within mind design space. It could furthermore (1) optimize the central software base upon which the algorithm of its intelligence runs, and (2) add functionality for domain independent cognitive tools and abilities (e.g. data mining, belief calculation, inference, reasoning, etc) as well as domain dependent cognitive tools and abilities (e.g. calculator, web browser, C compiler).

(4) Suppose an instance of a mind has direct access to some means of both improving and expanding both the hardware and software capability of its particular implementation. Suppose also that the goal system of this mind elicits a strong goal that directs its behavior to aggressively take advantage of these means. Given each increase in capability of the mind’s implementation, it could (1) increase the speed at which its hardware is upgraded and expanded, (2) More quickly, cleverly, and elegantly optimize its existing software base to maximize capability, (3) Develop better cognitive tools and functions more quickly and in more quantity, and (4) Optimize its implementation on successively lower levels by researching and developing better, smaller, more advanced hardware. This would create a positive feedback loop- the more capable its implementation, the more capable it is in improving its implementation.

(5) We know that this positive feedback loop (call it, for understandable reasons, "recursive self-improvement", and call the event in which such a mind achieves super-human intelligence, for reasons explained elsewhere, the "Singularity") will first occur either in one or more humans or in one or more AIs. We can distinguish between outcomes that are Friendly to humans and outcomes that are Unfriendly to humans. Outcomes including the annihilation of humanity, the descent of humanity into some horrific hellish scenario, or increases in overall pain, suffering, or death relative to current levels (or however we would really want to define "bad outcomes", if we knew the actual consequences of defining them that way) would obviously be Unfriendly. Outcomes that decreased overall pain, suffering, and death, or gave humanity a truly optimal utopia and nearly (and, depending on the laws of physics, possibly) infinite lifespans in which to enjoy it (or however we would really want to define "good outcomes", if we knew the actual consequences of defining them that way), would obviously be Friendly.

(6) There is currently no way to know what kind of outcome to the Singularity any given human undergoing recursive self-improvement (RSI) would produce. The sheer amount of knowledge and ability gained would present the intelligence with power and awareness literally beyond the wildest dreams and experiences of any lesser intelligence. (Note that while we do have evidence of people's goals and behavior changing for the worse as they gain large amounts of power, this does not provide a technical, measurable test for bounding the Friendly or Unfriendly outcomes of a given human mind in RSI.) Also, because the algorithm and implementation of the human mind arose from evolution by natural selection, the design process in no way took into account whether bounds on the outcomes of the mind through RSI would be provable or even tractable, essentially making the problem far more complex for human minds than for other possible mind designs.

(7) However, AI has a particular advantage in that an AI could be an instance of a mind design intelligently chosen from the space of all possible mind designs, with an intelligently defined goal system. If such an AI were designed specifically with mathematical provability of the stability of its goal system through RSI in mind, it could be built to reliably maintain certain causal or probabilistic bounds on Friendly and Unfriendly outcomes of the Singularity, and the Singularity could be initiated in such a way that we are measurably confident of how Friendly the outcome of the Singularity will be under this design.

(8) Unfortunately, proving bounds on the Friendliness of the outcome of an AI's goal system through recursive self-improvement is (1) by definition harder, and (2) likely extremely harder, than simply designing an AI with a working goal system. This, in combination with the intractability of calculating bounds on the outcome of a human (and the vast majority of AIs) in RSI, is what can be known as the "Friendliness problem" in relation to the Singularity, which is an extremely serious and imminent existential risk.

~~~~

Hank Conn is a computer science student at the University of Georgia.
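Points (3) through (5) of Conn's argument hinge on a positive feedback loop between capability and the rate of capability improvement. A minimal toy model of that loop is sketched below; the growth coefficient and step count are arbitrary illustrative values, not predictions about any real system.

```python
# Toy model of the capability feedback loop in points (3)-(5): the more capable the
# implementation, the faster it improves itself. Purely illustrative; improvement_rate
# and steps are arbitrary, not predictions.
def rsi_trajectory(initial_capability=1.0, improvement_rate=0.1, steps=50):
    capability = initial_capability
    history = [capability]
    for _ in range(steps):
        capability += improvement_rate * capability   # gain proportional to current capability
        history.append(capability)
    return history

traj = rsi_trajectory()
print(traj[0], round(traj[25], 1), round(traj[50], 1))   # 1.0, ~10.8, ~117.4: exponential growth
```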

Filed under: friendly ai 37 Comments
17Nov/06

Tech Items

The world's largest mobile machine is 300 meters long and weighs 45,500 tons. It is used for pit mining.

Kalpana One is a superior space colony design. It avoids all the shortcomings of past designs, while maximizing the ratio of habitable area to hull mass. It is a cylinder and receives natural lighting 24/7 through its endcaps.

We are using gene doping to create supermice in a new way. This same technology can be used to enhance brain size and neuron density in humans.

Soon we will be able to tell exactly how we differed from Neanderthals.

A breakthrough in the theory underlying self-healing robots.

The wireless energy transfer everybody's been talking about.

Launch rings may one day be used to send raw materials up into space.

Filed under: technology 8 Comments
16Nov/06

Questioning the Origin of Priors

This evening I read a recent paper by Robin Hanson, entitled Uncommon Priors Require Origin Disputes. It is quite fascinating and has far-reaching implications for everyday reasoning, in addition to artificial intelligence and decision theory. The paper is only six pages, so you might as well go take a look if the topic interests you.

A prior probability is the probability assigned to an event, condition, or belief before new evidence is taken into account. Priors are always subjective, just like everything else, but are often commonly accepted knowledge. For example, according to Wikipedia, there are about 1,000 winners of the California State Lottery every year, out of a total California population of approximately 33 million. So the prior probability of a California resident winning the lottery in an average year, ignoring any other information, is about 0.003%.
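The 0.003% figure is just the ratio of winners to residents:

```python
# The lottery prior quoted above, ignoring all other information.
winners_per_year = 1000
california_population = 33_000_000

prior = winners_per_year / california_population
print(f"{prior:.6f}, i.e. about {prior * 100:.3f}%")   # ~0.00003, about 0.003%
```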

But not every prior is common knowledge. For scenarios where the prior is not so clear-cut, such as complex political, economic, and social situations, priors may differ between individuals. For example, what is the prior probability that the Zune will overtake the iPod?

In cases like these, it pays to examine your particular priors and ask why the priors you hold are better than anyone else's. Some of you may have thought about this before if you've been exposed to Bayesian statistics, but it wasn't until Robin Hanson's paper that there was a theoretical framework for making calculations about the probabilistic origins of the priors themselves.

Abstract:

In standard belief models, priors are always common knowledge. This prevents such models from representing agents' probabilistic beliefs about the origins of their priors. By embedding standard models in a larger standard model, however, pre-priors can describe such beliefs. When an agent's prior and pre-prior are mutually consistent, he must believe that his prior would only have been different in situations where relevant event chances were different, but that variations in other agents' priors are otherwise completely unrelated to which events are how likely. Due to this, Bayesians who agree enough about the origins of their priors must have the same priors.

Shortly into the paper, he continues, describing the concept of a pre-prior:

Just as beliefs in a standard model depend on ordinary priors, beliefs in the larger model depend on pre-priors. We do not require that these pre-priors be common; pre-priors can vary. But to keep priors and pre-priors as consistent as possible with each other, we impose a pre-rationality condition. This condition in essence requires that each agent's ordinary prior be obtained by updating his pre-prior on the fact that nature assigned the agents certain particular priors.

This pre-rationality condition has strong implications regarding the rationality of uncommon priors. Consider, for example, two astronomers who disagree about whether the universe is open (and infinite) or closed (and finite). Assume that they are both aware of the same relevant cosmological data, and that they try to be Bayesians, and therefore want to attribute their difference of opinion to differing priors about the size of the universe.

This paper shows that neither astronomer can believe that, regardless of the size of the universe, nature was equally likely to have switched their priors. Each astronomer must instead believe that his prior would only have favored a smaller universe in situations where a smaller universe was actually more likely. Furthermore, he must believe that the other astronomer’s prior would not track the actual size of the universe in this way; other priors can only track universe size indirectly, by tracking his prior. Thus each person must believe that prior origination processes make his prior more correlated with reality than others’ priors.

As a result, these astronomers cannot believe that their differing priors arose due to the expression of differing genes inherited from their parents in the usual way. After all, the usual rules of genetic inheritance treat the two astronomers symmetrically, and do not produce individual genetic variations that are correlated with the size of the universe. This paper thereby shows that agents who agree enough about the origins of their priors must have the same prior. This is a new argument for common priors, and one that depends only on consistency relations between the beliefs of a single agent.

Another example concerns a pair of siblings - one more optimistic, one more pessimistic. These innate qualities derive from the genetic lottery, and one random genetic result has no predisposition to track the truth more accurately than another, so these peculiarities must be debiased out of the reasoning process in order to obtain better priors.

Whenever the process behind the origin of the prior has no special tendency to track reality when compared to alternative origins, it must be treated on equal ground with these alternatives. This can make judgements less concrete or specific than we may have desired, but it is essential for reasoning accurately. As the principle of maximum entropy states, probability distributions must maximize entropy while remaining consistent with the given information, that is, they can never postulate additional information beyond that which is strictly justified.

So if a particular cultural climate in India causes people to assign a prior probability of 20% to an event, while a different cultural climate in the United States causes the prior to be set at 30%, then either there are aspects of one cultural system that give it an advantage, that is, aspects that better correlate with the truth, or the cultural difference is an incidental artifact supervening on the process that generates priors, and must be averaged out to satisfy the principle of maximum entropy.
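A minimal sketch of that "averaging out" idea, using the 20%/30% figures from the paragraph above. The equal weights are an assumption for the symmetric case, where neither prior-generating process has any claim to track the truth better than the other; any evidence of such an advantage would justify unequal weights instead.

```python
# Averaging out a prior-origin difference that has no special claim to track the truth.
# Equal weights are an assumption for the symmetric case described above.
def debiased_prior(priors, weights=None):
    weights = weights or [1.0 / len(priors)] * len(priors)
    return sum(p * w for p, w in zip(priors, weights))

india_prior, us_prior = 0.20, 0.30
print(debiased_prior([india_prior, us_prior]))   # 0.25
```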

From the beginning of the discussion section on page 5:

These constraints on beliefs about the origins of priors are strong and highly asymmetric. Each agent must believe that his prior would "track truth" in the sense that his prior would only assign a higher probability to an event in situations where that event actually was more likely. Furthermore, he must believe that other agent's priors would only track truth to the extent that their priors covaried with his prior; he believes any additional variation in the priors of others must be completely unrelated to other events of interest.

In contrast, standard scientific beliefs about the origins of individual human variations do not offer much support for the belief that some people's initial belief tendencies track truth much better than other people's tendencies.

It gets worse. Even if you were somehow able to hold a species-average genetic attitude, you'd have to justify the truth-tracking superiority of your prior-originating processes relative to the prior-originating processes of aliens, genetically engineered human beings, artificial intelligences, or any of the multitudes of other intelligent species that could have been created under different evolutionary timelines.

Mindspace-averaged agents may be capable of satisfying Hanson's pre-rationality condition, but no human is capable of doing so completely. However, the overall quality of reasoning can be improved by questioning the processes underlying your priors, comparing them to potential alternatives, and asking whether you would use different numbers if you were a different person.

How, for example, might wealth influence our priors?

Filed under: meta 4 Comments