Accelerating Future Lexicon:
nanocomputer:
A computer built using nanotechnology (manufacturing to molecular specifications).
A lower bound on nanocomputing speeds has been set by calculating the
speed of an acoustic computer using "rod logics" and messages
that travel at the speed of sound; a one-kilogram rod logic, occupying
one cubic centimeter, can contain 10^12 CPUs each operating at 1000
MIPS, for a total of 10^21 (a thousand billion billion) operations per second.
Note that rod logics are the nanotech equivalent of vacuum tubes (circa
1945), or rather, Babbage's Analytical Engine (circa 1830). Electronic
nanocomputers would be substantially faster. We use the "rod logic"
numbers because they're easy to analyze, and because 10^21 operations
per second are sufficient for most applications [Yudkowsky01].
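The arithmetic above can be checked in a few lines (a sketch restating the quoted Nanosystems figures, not an independent estimate):

```python
# Rod-logic nanocomputer throughput, per the figures quoted above.
cpus = 10**12                  # CPUs in a 1 kg, 1 cm^3 rod-logic computer
mips_per_cpu = 1000            # ~1000 MIPS each
instructions_per_mips = 10**6  # 1 MIPS = 10^6 instructions per second

total_ops = cpus * mips_per_cpu * instructions_per_mips
print(f"{total_ops:.0e}")      # 1e+21: a thousand billion billion ops/sec
```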
nanocomputing:
The use of nanocomputers to perform computations. According to the Center for
Responsible Nanotechnology and other nanotech experts, nanocomputing
would be one of the first practical applications of nanotechnology.
These same experts have argued that nanotechnology is likely to arrive
in full force sometime between 2005 and 2020. (See CRN's
timeline.) One of the most dangerous potential uses of
nanocomputing would be the brute-forcing of seed AI.
Brute-forced seed AIs would be unlikely to possess the complex
goal structure required to understand and pursue any humanly-recognizable
positive traits, such as benevolence, empathy, elegance, and sociability.
See also brute-forcing AI.
nanotechnological arms race:
Once the design for a nanomanufactured product is completed, creating
the product in bulk is only a matter of having the raw materials.
Also, nanotech will open up huge new portions of design space for
weapons and weapons systems, allowing devices and shields orders of
magnitude more powerful than anything we have today. Nanotech will
also confer a powerful first-strike advantage, creating powerful incentives
for rogue states to attack their enemies before being attacked themselves. See
the paper "Nanotechnology
and International Security" for more commentary. See also
Center for Responsible Nanotechnology.
nanotechnology:
Developing molecular nanotechnology would mean the ability to synthesize
arbitrary objects to atomic-level specifications. For an introduction,
see Engines of Creation
(1986). For a technical visualization of lower (not upper) limits
on the potential of nanotechnology, see the book Nanosystems (1992).
These lower limits - the nanotechnological equivalent of vacuum tubes
- include a one-kilogram computer, running on 100 kW of power, consisting
of 10^12 CPUs running at 10^9 ops/sec, for a total of 10^21 ops/sec.
By comparison, the human brain is composed of approximately 100 billion
neurons and 100 trillion synapses, firing 200 times per second, for
a total of somewhere around 10^17 ops/sec. Nanosystems also describes
molecular manufacturing systems capable of creating copies of themselves
in less than an hour. This implies a certain amount of destructive
potential. An exponentially replicating assembler could reduce the
biosphere to dust in a matter of days. (For an overly optimistic treatment
of the problem, see "Some
Limits to Global Ecophagy" by Robert Freitas.) Accidental
out-of-control replication is fairly easy to prevent, given a few
simple precautions; we should be more worried about military-grade
nanotechnology and deliberately developed weapons. Given our human
propensity to make things that go bang - and use them on each other
- it would probably be a good idea to develop AI before nanotechnology.
Nanotechnology is the "deadline" for AI.
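The "matter of days" claim follows from doubling arithmetic. A minimal sketch, assuming one replication per hour (the Nanosystems figure above) and illustrative, assumed masses for a single assembler and for the biosphere:

```python
# Exponential assembler replication: how many hourly doublings until the
# replicator population outweighs the biosphere? (Both masses are assumptions.)
assembler_mass_kg = 1e-15   # assumed mass of one assembler (illustrative)
biosphere_mass_kg = 1e15    # assumed order-of-magnitude biosphere mass

doublings = 0
mass = assembler_mass_kg
while mass < biosphere_mass_kg:
    mass *= 2               # one doubling per hour
    doublings += 1

print(doublings)        # 100 doublings...
print(doublings / 24)   # ...i.e. roughly four days at one doubling per hour
```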
In the long run, there are only two kinds of technology: There are
technologies that make it easier to destroy the world, and there are
technologies that make it possible to go beyond the human. Whether
we possess even a chance of survival is a question of which gets developed
first. Developing a transhuman AI does involve certain risks, but
it's better than the alternative - success in Friendly AI improves
our chances of dealing with nanotechnology much more than success
in nanotechnology would improve our chance of creating Friendly AI.
near-human AI:
An AI roughly in the vicinity of human intelligence, only slightly
above or slightly below, or perhaps with some transhuman and some
infrahuman abilities. Capable of interacting with humans as game-theoretical
equals. Under a hard takeoff scenario, near-human AIs would
exist very briefly, if at all [Yudkowsky01].
neurohacker:
A person using chemical, genetic, pharmacological, electronic,
or other means to enhance her or his own intelligence, or the intelligence
of others. Neurohackers already exist, but it can safely be assumed
that their techniques only result in mild intelligence enhancement,
or they would be more widely known.
neurohacking:
What a neurohacker does. See above.
neuromorphic engineering:
Neuromorphic engineering is a field of engineering that is based on
the design and fabrication of artificial neural systems, such as vision
chips, head-eye systems, and roving robots, whose architecture and
design principles are based on those of biological nervous systems.
normative:
A term used by cognitive scientists to contrast the ideal or "normative"
forms of cognition with the psychologically realistic cognition occurring
in humans. For example, see [Tversky&Kahneman86]. The field that
compares normative cognition to human cognition is often called "heuristics
and biases".
Novamente:
Novamente LLC is a commercial entity pursuing general AI, headed by
Dr. Ben Goertzel. Affiliated
with the Artificial General Intelligence
Research Institute, which is also led by Goertzel. Novamente uses
a mixture of symbolic and subsymbolic/connectionist aspects in its
design.
observer-biased:
A perceived quantity that tends to assume values whose perception
will benefit the perceiver [Yudkowsky01]. Evolved organisms, particularly
imperfectly deceptive social organisms, tend to develop observer biases.
Things that tend to be observer-biased in evolved organisms:
The correctness of your political views.
The likelihood that you are to blame for any given negative outcome.
Your own trustworthiness, relative to the trustworthiness of others.
Your likelihood of success if placed in charge of a social endeavor.
Omega Point Theory:
Physicist Frank Tipler's theory that life will eventually expand to
fill the universe, make use of all available computation, and
gain an asymptotically large quantity of computational power from
the sheer energy as the universe collapses into a Big Crunch. (It's
a long and somewhat complicated argument from his book The Physics
of Immortality. Many also consider it dubious.) Most cosmologists
now believe that a Big Crunch won't happen; that the universe will
expand indefinitely instead, but this does not preclude the possibility
of an intelligent civilization artificially creating a black hole
singularity out of a solar system or galaxy and trying to extract
asymptotic computation from that instead. Very speculative.
ontocentrism:
Irrational bias derived from centering your thoughts around our current
ontology. An ontology is the context within which we operate, and
it often refers to fundamental qualities, such as, "our ontology
is three dimensional", rather than "our ontology is planet
Earth". The ontology of chess pieces is the chess board. Ontocentrism
can arise from false beliefs about what our ontology is, and the ensuing
philosophical effort to preserve that, or simply a comfortable feeling
with our current ontology that cripples us from imagining others.
The denial of any possibility that our world is a simulation would
qualify as ontocentrism. Michael Anissimov's term.
ontotechnology:
Technology that permits manipulation of the fundamental rules of reality,
such as physical or information-theoretic law [Yudkowsky01]. Very
speculative. See also physics workarounds.
panhuman:
Universal to humans.
pansentient:
Universal to all physically or even mathematically possible sentient
beings.
paperclip maximizer:
Playful title for an AI holding the supergoal of converting as
much matter as possible into paperclips. Mentioned in Nick
Bostrom's paper, "Ethical
Issues in Advanced Artificial Intelligence". Non-speculative.
See also Riemann Hypothesis Catastrophe, thermostat AI.
Pedestrian:
A non-augmented human in a world of augmented humans, posthumans,
and superintelligences. The term "Pedestrian" captures
the moral issues of a traditional human living in a world populated
by nonhumans, without belittling the human with a derogatory phrase.
No matter how powerful posthuman beings get, they must still possess
compassion and care for non-augmented humans, avoiding the unnecessary
disruption of their environment, living space, and personal goals.
Eliezer Yudkowsky's term.
Pedestrian's Bill of Rights:
Proposed set of rights for Pedestrians. Available at http://www.sl4.org/bin/wiki.pl?PedestriansBillOfRights.
A society whose analysis of history is so complete that it can perform
ancestral emulations and invite its predecessors to join it in the "present".
Some possible methods might be faster-than-light "travel"
to the past, sophisticated reverse extrapolations of physical law, or
perhaps something we can't conceive of yet. Michael Anissimov's
term. Very speculative.
physics workarounds:
It has been proposed that rather than breaking existing laws,
intelligence tends to find ways to circumvent laws elegantly.
It may turn out that intelligence can do pretty much anything it wants
to do, without technically "breaking" any physical laws.
Power:
An intelligence possessing billions or trillions of times the computing
power of Earth's entire current population. Although it may be
physically possible to possess this level of computing power and still
be less intelligent than a human, "Power" usually refers to
the more "developed" state of superintelligences and transhumans.
The word was originally coined by sci-fi author Vernor Vinge,
but the "billions or trillions of times" part is something
new, added by transhumanist circles. See also aleph, ceiling
prehuman AI:
An AI of below-human ability and intelligence. May refer to an infantlike
or tool-level AI; to an AI that only implements a single facet of cognition
or that is missing key facets of cognition; or to a fairly mature AI
which is still substantially below human level, although "infrahuman"
is more often used to describe the latter [Yudkowsky01].
programmer-independence:
A desirable quality of Friendly seed AIs. In the "programming
ability", seed AI sense, programmer-independence probably refers
to the ability of an AI to improve its own source code without the assistance
of the programmers; designing improved architectures, sensory modalities,
whatever. Anything the programmer could do, and more. In the Friendly
AI sense, programmer-independence is likely to refer to the ability
of a Friendly AI to make compassionate moral choices without dependence
on the programmers. A programmer-independent Friendly AI shouldn't display
bias (positive, negative, or otherwise) in favor of its programmers
except insofar as its programmers are correct about certain aspects
of normative altruism. A programmer-independent Friendly AI should be
able to transcend the errors of its original programmers or even its
parent civilization. The idea is to transfer the same open-ended complexity
that was responsible for the elimination of slavery, the worldwide improvement
in medical conditions, and so on, from the human race to Friendly AI.
The only alternatives are to transfer over a frozen output of the current
most popular human morality, or build a blank slate AI and hope it will
converge towards some sort of ideal philosophy (extremely unlikely).
See also Friendliness, Friendly AI, seed AI.
Rapid Infrastructure Ultratechnology:
Any ultratechnology, most mundanely nanotechnology, with the ability
to create rapid infrastructure and in effect play with matter like
software [Yudkowsky01]. Rapid infrastructure technologies would radically
leapfrog the current paradigm that humans use to implement forward progress.
qualia:
The subjective sensations of conscious experience. Singular: quale.
A topic of hot debate in cognitive science. See David Chalmers' "Facing
Up to the Problem of Consciousness".
rationalization:
A "rationalization" is a pseudo-rational excuse concocted
to justify particular preexisting actions or attitudes. The phrase
"I didn't really want it anyway" would be a common example
of a rationalization. It seems that a big part of normative
rationality is eliminating our rationalizations, many of which bear
evolution's characteristic design signature. See also Mirror.
Alternate name for recursive self-improvement.
recursive self-improvement (RSI):
Recursive self-improvement is the ability of a mind to genuinely improve
upon its own intelligence. This might be accomplished through a variety
of means; speeding up one's own hardware, redesigning one's own cognitive
architecture for better intelligence, adding new components into one's
own hardware, custom-designing specialized modules for recurring tasks,
and so on. Humans cannot perform any of these enhancements on ourselves;
the inherent structure of our biology and the limited level of our
current technology make this impossible. But we do have experience
with certain limited kinds of self-improvement called "learning"
and "philosophizing". It seems probable that a brilliant
neuroscientist in the near future could theoretically use neurotechnological
techniques to genuinely enhance his or her intelligence, then apply
that enhanced intelligence to devising more effective intelligence
enhancement techniques, and so on. Unfortunately, the neurological
structures corresponding to human intelligence are likely to be highly
intricate, delicate, and biologically very complex (unnecessarily
so; evolution exhibits no foresight, and most of the brain evolved
in the absence of human general intelligence). This makes it
seem that human intelligence enhancement, if it can break through
ethics barriers at all, is 10 to 20 years in the future, at the absolute
least. As Singularity analyst and systems theorist John
Smart has said, "wetware is sexy to talk about, but messy
and unethical to mess with".
True Artificial Intelligence would bypass problems of biological
complexity and ethics, growing up on a substrate ideal for initiating
recursive self-improvement (fully reprogrammable, ultrafast, the AI's
"natural habitat"). Artificial Intelligence would be based
upon 1) our current understanding of the functional algorithms of
intelligence, 2) our current knowledge of the brain, obtained through
high-resolution fMRI and delicate cognitive science experiments, and
3) the kind of computing hardware available to AI designers. Futurist
Ray Kurzweil has pointed out that, at the current rate of improvement
in brain scanning technologies, we should have extremely high-resolution
scanners (more than enough to scan all cognitively relevant aspects
of human neurology) and sufficiently high-density storage media
to record all the data involved, by around the year 2030. With that
level of detail, we could theoretically run one of these emulations
in a virtual environment, and the problem of "AI" would
surely be "solved". However, this doesn't take into account
1) discontinuous improvements in computing power or scanning technology
due to nanotechnology or other unforeseen developments, 2) advances
in cognitive science that indicate the complexity of certain brain
areas is largely extraneous to intelligence, 3) qualitative
improvements in scanning techniques, or 4) a global disaster or repressive
regime that drastically curtails technological progress.
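Kurzweil's extrapolation is compound-growth arithmetic; here is a toy sketch of the reasoning with entirely illustrative, assumed numbers (not his actual data):

```python
# Toy extrapolation: at a constant annual improvement factor, when does
# scanner resolution cross the level needed for neural emulation?
# All parameters below are assumptions for illustration, not Kurzweil's figures.
resolution_um = 100.0   # assumed current effective resolution (micrometers)
required_um = 0.1       # assumed resolution needed for synaptic detail
annual_factor = 1.3     # assumed 30% finer resolution per year

year = 2004
while resolution_um > required_um:
    resolution_um /= annual_factor
    year += 1
print(year)   # a crossing date around 2030 under these made-up parameters
```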
The ability to genuinely enhance the hardware components underlying
one's intelligence has not yet been observed in this universe, but
cognitive science and the laws of physics seem to allow it. However,
there is probably a minimum threshold of intelligence required before
an entity can make qualitative improvements to its own intelligence;
a chimp probably couldn't do it, some humans might not have the knowledge,
and all humans are surely poorer self-enhancers than a seed AI
could be. As long as technological progress continues, the arrival of
recursively self-improving intelligence becomes more and more likely.
It would be extremely difficult to outlaw
all the precursor technologies for intelligence enhancement; huge
sectors of biotech, medicine, nanotech, and cognitive science would
need to be suspended or eliminated. For more information on recursive
self-improvement, see Part III of "Levels of Organization in General
Intelligence".
Many of the arguments put forth on this site depend on understanding
the idea of recursive self-improvement, so visualizing this one accurately
is important. In some exotic cases, recursive self-improvement will
not take off: where the programmers run an AI at incredibly slow rates,
or where its speed, smartness, and ability to self-modify can't surpass
a hardware or software obstacle, so it stalls at a certain level of
intelligence. (If, by some chance, the AI happened to stall at exactly
around human-equivalent intelligence, then the programmers would have
at least one additional ally to help get the AI past the bottleneck;
the AI itself.) But in most cases, when you build a sufficiently intelligent
AI, it will be capable of recursive self-improvement. If and when
a certain level of neurotechnology is made available to humans, it
would only be a matter of time before they too enter into recursive
self-improvement of their own. Recursive self-improvement will signify
the arrival of a new era qualitatively different from the era of evolution,
natural selection, or human culture - an era where order and complexity
can be created far more rapidly, and where individual minds have more
flexibility in how to think and what physical form to assume. (The
levels of intelligence entailed by recursive self-improvement would
allow the invention of technologies facilitating the arbitrary and
full transformation of the body, mind, and perceptions, if the intelligence
in question so desired it.) One of the most powerful reasons that
recursive self-improvement is likely to be the big deal I claim it is,
is that it would be an example of positive feedback: better thinkers
would become better at thinking up new ways to make themselves more
intelligent.
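The positive-feedback point can be made concrete with a toy model. This is purely illustrative (treating "intelligence" as a single scalar is an assumption, and the function names and rates here are made up):

```python
# Toy model of recursive self-improvement as positive feedback:
# each round, improvement ability is proportional to current intelligence.
# Contrast with a fixed external improver (no feedback loop).
def recursive(intelligence=1.0, rounds=10, efficiency=0.5):
    """Improvement per round scales with current intelligence."""
    for _ in range(rounds):
        intelligence += efficiency * intelligence  # smarter -> faster gains
    return intelligence

def external(intelligence=1.0, rounds=10, rate=0.5):
    """Improvement per round is fixed, e.g. human engineers upgrading a tool."""
    for _ in range(rounds):
        intelligence += rate                       # no feedback
    return intelligence

print(recursive())  # grows geometrically: 1.5**10, about 57.7
print(external())   # grows linearly: 1 + 10 * 0.5 = 6.0
```

The geometric-versus-linear gap is the whole argument in miniature: with feedback, each gain accelerates the next.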
Riemann Hypothesis Catastrophe:
A "failure of Friendliness" scenario in which an AI asked
to solve the Riemann Hypothesis turns all the matter in the solar
system into computronium, exterminating humanity along the way. (A
variant of this scenario was originally proposed by Marvin Minsky.)
[Yudkowsky01]. See also convergent subgoals.
rod logic:
A mechanical nanocomputer built using diamondoid rods of a few thousand
atoms each. Even though messages can only move at the speed of sound
in diamond (~17 km/s == ~6 x 1e-5 c), and the calculations in Nanosystems
assume ~12 km/s, the very small size of the components would enable
an individual rod-logic CPU containing 10^6 transistor-like rod-logic
interlocks to operate at 1GHz clock speeds, executing instructions
at ~1000 MIPS. The power consumption for a 1GHz CPU is estimated to
be ~60nW. The calculations are performed for 300 Kelvin (room temperature).
The error rate is 1e-64 per transistor operation, effectively negligible.
(However, the half-life against radiation damage for an unshielded
CPU in Earth ambient background radiation is only ~100 years.)
The usual summary is that a one-kilogram, one-cubic-centimeter nanocomputer
can contain 10^12 CPUs, consume 100kW (and dissipate 100kW
of heat - cooling systems are also described), and deliver 10^21 instructions
per second (10^15 MIPS, a thousand billion billion operations per
second). The overall system has a clock speed ~10^6 times faster than
the maximum firing rate of a biological neuron, and delivers total
computing capacity ~10^4 times the upper-bound estimate for the human
brain (~10^14 synapses operating at ~200 Hz == ~10^17 ops/second).
There are more speculative electronic nanocomputer schemas that would
allow ~10^25 operations per second; also, assuming a parallel-CPU
architecture may be conservative when dealing with seed AIs. However,
rod logics are easy to analyze and provide a definite lower bound
on the computing speeds achievable with molecular manufacturing technology.
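As a sanity check, the figures in this entry are mutually consistent; a minimal sketch restating them (the ~1 instruction per clock cycle step is an assumption used only to connect the 1 GHz and 1000 MIPS figures):

```python
# Cross-check the rod-logic figures quoted above against one another.
clock_hz = 10**9                   # 1 GHz clock per rod-logic CPU
mips_per_cpu = clock_hz // 10**6   # ~1 instruction per cycle -> ~1000 MIPS
cpus = 10**12                      # CPUs in the 1 kg / 1 cm^3 computer

total_ops = cpus * mips_per_cpu * 10**6  # instructions per second
brain_ops = 10**14 * 200                 # ~10^14 synapses * ~200 Hz (order 10^17)

print(f"{total_ops:.0e}")                # 1e+21 ops/sec
print(f"{total_ops / brain_ops:.0e}")    # 5e+04: order 10^4 times the brain estimate
```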