Accelerating Future Lexicon:
This lexicon contains words, organizations, persons, and phrases pertaining
to the subject matter of Accelerating Future. Other interesting glossaries
include Anders Sandberg's Transhuman
Terminology, the Nanotech
Now glossary, and the Incipient
Posthuman glossary. Additional informative glossaries, which this
borrows heavily from (with permission), are the General
Intelligence and Seed AI glossary and the Creating
Friendly AI glossary. The Accelerating Future
lexicon is also available in Wiki
form, accessible for editing and comments. Please feel free to discuss
this lexicon on our forum.
The Accelerating Future lexicon includes entries on a variety of topics,
including AGI (Artificial General Intelligence), various organizations,
AI researchers, cognitive science, evolution, Singularitarianism, transhumanist
material, terms related to the nascent discipline of AI morality (sometimes
called "Friendliness"), abstract characteristics of goal systems,
speculative terms and ideas to be used in thought experiments and toy
models, and papers commonly circulated throughout Singularity analysis
and advocacy communities. It is recommended that readers acquire some
familiarity with these topics before plunging in - KurzweilAI.net,
our articles section, and the Singularity
Institute are all good places to start.
This lexicon includes some very futuristic and speculative entries side-by-side
with seriously relevant and down-to-earth entries, so ambiguous entries
are labelled accordingly. "Very speculative" indicates
that the idea may not even be possible under the laws
of physics as we know them. "Somewhat speculative"
indicates that the idea is probably not achievable through human agency
alone (but probably achievable with the assistance of beings that exceed
human capacity). "Non-speculative" means the idea is
almost certainly achievable through human agency, and is likely to
occur within the first third of the coming century. The lexicon contains
more than 250 entries so far. Keywords with their own entries are italicized.
Have an enjoyable read, and feel free to submit any comments, additions,
or corrections to Michael Anissimov.
24 Definitions of Friendly AI
Accelerating Change Conference (ACC)
Adaptive Artificial Intelligence, Inc
AGI mailing list
AI Advantage, the
AI Box Experiment
An Intuitive Explanation of Bayesian Reasoning
Artificial General Intelligence (AGI)
Artificial General Intelligence Research Institute (AGIRI)
Artificial Intelligence (AI)
Brain-Computer Interfacing (BCI)
Broderick, Dr. Damien
24 Definitions of Friendly AI:
Singularity Institute for Artificial Intelligence document providing
multiple definitions for their "Friendly AI" notion.
See also Creating Friendly AI, Friendly AI.
A2I2:
See Adaptive Artificial Intelligence, Inc.
Accelerating change:
The frequently heard argument that overall change (and sometimes,
"progress") in human civilization - or even the universe
itself - has been accelerating over time. For example, if we say that
the amount of change that took place between 1990 and 2000 seems roughly
equivalent to the amount that occurred between 1960 and 1990, we might
extrapolate that and argue the same amount of change is likely to
occur again, and then again, on ever-shortening timescales. Accelerating
change is a controversial thesis that appears in a variety of forms,
but is steadily gaining popularity and academic credibility. Notable
thinkers on accelerating change include John Smart (SingularityWatch.com)
and Ray Kurzweil (KurzweilAI.net).
See also Institute for Accelerating Change; Moore's Law; Singularity,
cognitive interpretation of; Singularity, sociotechnological interpretation
of.
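To make the arithmetic of this extrapolation concrete, here is a toy
Python sketch (not a forecast; the ten-year doubling time is an invented
parameter): if cumulative change grows exponentially, each successive
equal increment of change arrives in a shorter calendar interval.

    import math

    DOUBLING_TIME = 10.0  # years per doubling of cumulative change; made-up number

    def year_of(level):
        # Year at which cumulative change first reaches `level`,
        # assuming change(t) = 2 ** (t / DOUBLING_TIME), so change(0) = 1.
        return DOUBLING_TIME * math.log2(level)

    prev = year_of(1)
    for level in range(2, 8):
        t = year_of(level)
        print(f"change unit {level - 1} -> {level}: {t - prev:.2f} years")
        prev = t

The printed intervals shrink (10.00, 5.85, 4.15, ... years), which is the
pattern the accelerating-change argument extrapolates from history.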
Accelerating Change Conference (ACC):
Institute for Accelerating Change's (IAC) yearly conference.
"ACC03", IAC's first conference, took place at Stanford
University, and featured attendees such as Eric Drexler, William
Calvin, Robert Wright, and Ben Goertzel. ACC04 is scheduled
to take place at Stanford again. Website: http://www.accelerating.org.
Adaptive:
Used to describe behaviors in organisms that increase inclusive
fitness. To avoid confusion, it's best to use this word within
the context of biological evolution only, selecting different terms
to describe cultural behavior in human systems or technological progress.
The word "adaptive" has been so overused in the past half-century
that it has acquired a very fuzzy and uncertain meaning. See also
evolution, evolutionary biology, evolutionary psychology.
Adaptive Artificial Intelligence, Inc:
An Artificial General Intelligence
project led by Peter Voss, located in Los Angeles. As of March
2003, A2I2 has seven full-time members on its programming team. A2I2
is currently aiming for "dog-level intelligence".
Adversarial attitude:
In the context of benevolent AI design, the complex of beliefs and
expectations which causes us to expect that AIs will behave anthropomorphically
and hostilely. The adversarial attitude predisposes us to fear self-improvements
by the AI rather than surfing the tide; the adversarial attitude causes
us to fear the ability of the AI to comprehend and modify Friendliness,
rather than regarding comprehension and improvement as the keys to
self-improving Friendliness. The adversarial attitude predisposes
us to expect failures that are human-likely rather than those that
are actually most probable or worrisome. The adversarial attitude
is the opposite of unity of will and does not permit us to
cooperate with the AI. It leads us to try and enslave or cripple the
AI; to build a tool, or a servant, rather than a Friendly but
independent Martin Luther King. It leads us to try and reinforce patchwork
safeguards with meta-safeguards instead of building coherent goal
systems [Yudkowsky01]. Most AI researchers of the past few decades
have been adversarial, which is a great obstacle to the construction
of intelligent systems that are benevolent because they want to
be, rather than because of some arbitrary programmer rules. (Which
would be destined to break down above a threshold level of intelligence
or when the AI gains access to its own source code, which is bound
to happen given enough time.) See also Asimov Laws, Friendliness,
Friendly AI, Friendship architecture.
Adversarial Swamp:
In the context of Friendly AI design, the swamp of AI inhibitions
and meta-inhibitions, and programmer fears and anxieties, resulting
from the adversarial attitude or the use of Asimov Laws. To
enforce even a single feature that is not consistent with the rest
of the AI - to use, as a feature of Friendliness, something
that you perceive as an error on the AI's part - you'd need not only
the error/feature, but inhibitions to protect the error, and meta-inhibitions
to protect the inhibitions, and more inhibitions to cut the AI short
every time ve tries for a new avenue of philosophical sophistication.
Stupid and simple cognitive processes suddenly become preferable,
since every trace of complexity - of ability - becomes a danger to
be feared... this is the Adversarial Swamp, which inevitably drags
down all who set foot in it; once you try to enforce even a single
feature, the whole of the AI becomes a threat, and the programmers
are forever doomed to swim against the current [Yudkowsky01].
AGI:
See Artificial General Intelligence.
AGI mailing list:
Mailing list focused on the subject of Artificial General Intelligence,
moderated by AGIRI Director Dr. Ben Goertzel. Website: http://www.agiri.org/agilist.htm.
AGIRI:
See Artificial General Intelligence Research Institute.
AI:
See Artificial Intelligence, Autonomous Intelligence.
AI Advantage, the:
Advantages that the first generally intelligent AI would be very
likely to possess, by virtue of its inherent substrate and design.
For some interesting summaries, see LOGI
part 3 or Computer
Programs, Minds-in-General, and the Human Brain. See also
Singularity, impact of.
AI Box Experiment:
Experiment to gain insight into whether a "sandboxed" AI
of human-similar or human-surpassing intelligence would be able to
talk its way out of its restrictions, using a text-only channel. Current
indications seem to point to "yes". Webpage: http://yudkowsky.net/essays/aibox.html.
The moral of the results is that it is a horrible idea to build an
AI of human-similar intelligence without ensuring that the AI is genuinely
benevolent first. No matter how confident you are that you will be
able to keep the AI under your control, it will outsmart you and circumvent
its restrictions. See also Friendliness architecture, Friendly
AI.
AIXI:
Theory of universal Artificial Intelligence based on sequential
decision theory and algorithmic probability, by AI researcher Dr.
Marcus Hutter. The theory argues mathematically that true superintelligence
would be possible, given an infinite amount of computing power, although
the resultant mind would be a psychological hedonist.
See also Hutter, Dr. Marcus.
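For the mathematically inclined, the core of AIXI is an expectimax
expression over all computable environments, weighted by an algorithmic
(Solomonoff-style) prior. A compressed paraphrase in LaTeX notation
follows; see Hutter's papers for the exact formulation:

    a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
           (r_k + \cdots + r_m)
           \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}

where U is a universal Turing machine, q ranges over candidate environment
programs of length \ell(q), and a, o, and r denote actions, observations,
and rewards.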
AIXItl:
An attempt at scaling down AIXI, in an effort to yield
a computable Artificial Intelligence. Analyzed in the paper "Towards
a Universal Theory of Artificial Intelligence based on Algorithmic
Probability and Sequential Decisions". See also Hutter, Dr. Marcus.
Aleph:
A point or state where infinite information is stored and processed.
It is currently not known whether the laws of physics allow this, but
several credible physicists believe they might. If infinite levels
of computation are indeed possible, then subjective experience in this
universe may be eternal. (An infinite amount of computing power would
allow us to implement an infinite quantity of virtual realities and
their inhabitants, assuming that conscious minds can be created through
raw computation, which is the current consensus of practically anyone
who studies the brain scientifically. This thesis is called causal
functionalism.) Mitch Porter's term. Very speculative.
See also Omega Point, Alpha-Line Computing, Shock
Level Four.
Algernization:
Deliberate creation of Algernons via neurosurgery, cybernetic implants,
or via other means. Non-speculative. See Algernon.
Algernon:
Any human who, via artificial or natural means, has some type of mental
enhancement that carries a price. Algernons may possess surplus neurons
or neuron-equivalents in certain brain areas while suffering deficits
in others. Eliezer Yudkowsky's term, based on the novel "Flowers
for Algernon" by Daniel Keyes. Non-speculative. See also
Algernization, Brain-Computer Interfacing, neurosurgery.
Algorithmic complexity:
An attempt at quantifying the objective amount of complexity in a
given structure or algorithm, using algorithmic complexity theory.
Sometimes used to estimate the difficulty of software or engineering
projects. See Wikipedia's
entry. See also Kolmogorov complexity.
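Kolmogorov complexity itself is uncomputable, but the compressed size of
a string is a crude, computable upper-bound proxy that conveys the idea.
A minimal Python sketch (zlib is an arbitrary stand-in for a real
compressor):

    import os
    import zlib

    def complexity_proxy(data: bytes) -> int:
        # Length of the zlib-compressed data: a crude upper-bound stand-in
        # for the (uncomputable) algorithmic complexity of `data`.
        return len(zlib.compress(data, 9))

    regular = b"ab" * 5000        # 10,000 bytes of obvious structure
    noise = os.urandom(10000)     # 10,000 bytes of (pseudo)randomness

    print(complexity_proxy(regular))  # small: the structure compresses away
    print(complexity_proxy(noise))    # near 10,000: nothing to exploit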
Alpha-Line Computing:
Alpha-Line Computing is a repeated sequence of Alpha-Point Computing
instances; universes continuously splitting off from their parent
universes, perhaps tweaking the laws of physics to maximize available
computation in the next universe in the sequence. Eliezer Yudkowsky's
term. Very speculative. See also Shock Level Four.
Alpha-Point Computing:
A computing method whereby a new universe is split off from the
home universe, and the infinite (or near-infinitely) hot and dense
Big Bang of this new universe is utilized to perform computations.
Articles have appeared in Popular Science and other mainstream publications
suggesting that it may one day be possible to create a universe in
a lab or a futuristic lab-equivalent. See this post from the SL4 mailing
list. Very speculative. See also aleph, Alpha-Line
Computing, Omega Point Theory, Shock Level Four.
Altruism:
True altruism means helping others without desire for personal
gain in any form. Altruism requires that there is "no want for
material, physical, spiritual, or egoistic gain" (http://barbaria.com/god/philosophy/zen/glossary.htm).
Altruism is a moral philosophy whereby all sentients, and their respective
moralities, are equally valued. Altruism is a complex type of behavior
that requires the ability to model what other minds want, and the
intelligence to fulfill those wants in ways these minds want them
to be fulfilled. It would be ideal if the first Artificial Intelligence
were entirely altruistic, which is what the field of AI Friendliness
aims for. But since the minimum requirements for self-reinforcing
altruistic behavior are not likely to be computationally simple, implementing
altruism in AIs (or cognitively modified human beings) is not likely
to be an easy task. If we do not succeed in this task, then we will
face the risk of confronting smarter-than-human intelligences with
goals that amount to indifference or hostility towards our welfare.
See also Friendly AI.
Altruist:
A person (AI, cyborg, or human, it doesn't matter) who practices altruism.
An Intuitive Explanation of Bayesian Reasoning:
Webpage by Eliezer Yudkowsky, a tutorial on Bayesian reasoning
- a class of reasoning that adheres to the axioms of probability theory.
Located at http://yudkowsky.net/bayes/bayes.html.
See also Bayes' Theorem, Bayesian.
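As a worked example, the mammography problem featured in the essay (1%
prior probability of cancer, an 80% true-positive rate, and a 9.6%
false-positive rate, as the essay states them) can be computed in a few
lines of Python; the helper function here is ours, not the essay's:

    def posterior(prior, p_e_given_h, p_e_given_not_h):
        # Bayes' Theorem: P(H|E) = P(E|H) P(H) / P(E).
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    # 1% prevalence, 80% true-positive rate, 9.6% false-positive rate:
    print(posterior(0.01, 0.80, 0.096))  # ~0.078: a positive test lifts
                                         # the 1% prior to only about 7.8%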
Anissimov, Michael:
Advocacy Director for the Singularity Institute for Artificial
Intelligence, co-director of the Immortality Institute,
chair of the Bay Area Transhumanists Association, and primary writer
for Accelerating Future.
See also immortalism, Singularitarianism.
Anthropic Principle:
Why does Earth seem so precisely tuned for life? Because if it weren't,
we wouldn't be here to ask that question to begin with. The Anthropic
Principle refers to the automatic selection effects taking place
all the time based on who we are, what we are doing, and
where we happen to be. This effect has come to the attention of an
international group of top philosophers and even physicists. Here's
the canonical website: http://www.anthropic-principle.com.
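A toy simulation can make the selection effect concrete. Everything in
this Python sketch is invented for illustration (the "tuning" scores,
the observer threshold); the point is only that conditioning on the
existence of observers skews what observers see:

    import random

    random.seed(0)
    planets = [random.random() for _ in range(100000)]  # "tuning" score in [0, 1)
    observed = [t for t in planets if t > 0.99]         # observers arise only here

    print(sum(planets) / len(planets))    # ~0.50: the typical planet is untuned
    print(sum(observed) / len(observed))  # ~0.995: every planet with observers
                                          # looks "precisely tuned" to them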
Anthropics:
The field of study applying the Anthropic Principle and awareness
of observer selection effects. Non-speculative.
Anthropocentrism:
The narrow-minded habit of interpreting the world exclusively in terms
of human values, tendencies, and experiences. Originally a mental
shortcut for survival in the brutal social systems of our ancestry,
this way of thinking becomes more obsolete by the day. Anthropocentrism
strikes hardest when we try to wrap our minds around complex, hypothetical
objects or entities with nonhuman qualities and characteristics. If
we mess up the creation of transhuman intelligence, anthropocentric
thinking will likely be to blame. See also anthropomorphic, minds-in-general.
Literally "human-shaped". Anthropomorphic thinking mistakenly
attributes properties to minds-in-general that are specific
to naturally evolved minds, imperfectly deceptive social organisms,
or human minds [Yudkowsky01]. See also anthropocentrism.
Apotheosis:
From Eliezer Yudkowsky's webpage:
"The Singularity holds out the possibility of winning the Grand
Prize, the true Utopia, the best-of-all-possible-worlds - not just
freedom from pain and stress or a sterile round of endless physical
pleasures, but the prospect of endless growth for every human being
- growth in mind, in intelligence, in strength of personality; life
without bound, without end; experiencing everything we've dreamed
of experiencing, becoming everything we've ever dreamed of being;
not for a billion years, or ten-to-the-billionth years, but forever...
or perhaps embarking together on some still greater adventure of which
we cannot even conceive. That's the Apotheosis." A successful
Singularity (creation of an intelligence smarter than us with the
ability to recursively self-improve its own hardware) might not be
capable of ushering in an "apotheosis", but this depends
heavily on how we define it. The possibility of Apotheosis also hinges
on the difficulty of implementing a peaceful equilibrium state
across the whole (or a significant portion of) civilization. Even
if apotheosis is unachievable, better medicine, social structures,
and a roof over everyone's head would still be nice, and the creation
of transhuman intelligence could still provide a much faster route
to these luxuries than traditional human-instituted progress. Somewhat
speculative. See also Friendly Singularity.
Artificial General Intelligence (AGI):
Artificial Intelligence capable of learning and solving complex problems
in a variety of domains, without programmer supervision, presumably
of human-similar intelligence but likely displaying different
patterns of domain competency than human beings. A true AGI may be
intelligent and conscious, but unable to pass the Turing Test, if
the knowledge and cognitive prerequisites for communication with humans
are absent. For example, a young AGI might be better at writing computer
code than painting masterworks, although it could write the code
underlying the ability to paint masterworks fairly rapidly if
the cognitive prerequisites for recursive self-improvement
were available. Often abbreviated as 'AGI' or 'GAI'. Both the AGI
mailing list and the SL4 mailing list routinely discuss
the prospect of Artificial General Intelligence. Non-speculative.
See also A2I2, Artificial Intelligence, Autonomous Intelligence,
Artificial General Intelligence Research Institute, general intelligence;
Goertzel, Ben; Singularity, Singularity Institute for Artificial Intelligence;
Voss, Peter; Yudkowsky, Eliezer.
Artificial General Intelligence Research Institute (AGIRI):
Artificial General Intelligence Research Institute, an organization
engaged in developing Artificial General Intelligence (as opposed
to narrow-domain Artificial Intelligence). Their commercial affiliate
is Novamente LLC, which currently develops tool-level AI for application
in bioinformatics. AGIRI is headed by Dr. Ben Goertzel, an Artificial
Intelligence researcher and entrepreneur. Website: http://www.agiri.org.
Artificial Intelligence (AI):
A full, complete intelligence built deliberately by another intelligence,
rather than through the unfolding of a genome created by natural selection.
The word "Artificial Intelligence" is often used to mean
"software programs that are competent in narrow domains",
but this is essentially just a marketing scheme. A true Artificial
Intelligence would need to be exactly what the name indicates: intelligent.
An Artificial Intelligence need not be a "computer" or a
"robot" any more than a human is a cell or a gene. Regarding
the issue of AI morality: an AI might want to help humans regardless
of how they treat it, hurt humans regardless of how they treat it,
treat humans based on how they treat it, or treat everyone based on
moral rules which don't make distinctions between "humans"
and "AIs", but ground in totally different criteria which
only an expert could predict in advance (let alone engineer). All
of these factors depend on the AI's initial design and how the AI
chooses to modify its goal structure (if it has that ability). Artificial
Intelligences are sometimes referred to using feminine pronouns or
gender-neutral pronouns such as "ve", "ver",
or "vis". Non-speculative. See also AI Advantage,
design-contingent philosophy, Friendliness, Singularity, recursive
Asimov Laws:
Famous science fiction writer Isaac Asimov's "Three Laws of Robotics",
a plot device invented in the 1940s. They are:
1. A robot may not injure a human being or, through inaction, allow a
human being to come to harm.
2. A robot must obey the orders given it by human beings except where
such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection
does not conflict with the First or Second Laws.
Amazingly, there are still people who take these laws seriously and
suppose they would work as a viable strategy for AI creation. Unfortunately,
Asimov Laws are far too simplistic, open to varying interpretations,
anthropocentrically biased, and adversarial to qualify as a serious
engineering strategy for AI morality. Asimov Laws are also predicated
on the false assumptions that 1) AIs cannot become independent moral
agents, let alone possess kinder-than-human morality, 2) AIs will always
be mechanical, in appearance and in thought, 3) creating a human-friendly
AI is a lot like coercing a human into being human-friendly. Asimov
Laws also present themselves as semantic primitives - because
they are to humans - thereby neglecting the vast underlying complexity
that would be required to even approximate these laws in a real AI.
(This problem can be summed up in the phrase "how do you create
a benevolent morality out of 1s and 0s?" You can, but it's not
as straightforward as the Asimov Laws ethic would suggest.) The modern-day
field of AI Friendliness is an attempt to go beyond Asimov Laws and
similar ideas, creating workable strategies for safe AI goal systems.
See the AsimovLaws.com project.
See also design-contingent philosophy, Friendliness, Friendship architecture.
Astronomical waste:
The opportunity cost of postponing space colonization and other forms
of positive technological development which could be used to create
structures with real value, such as people living happy lives and interesting
experiences. Every second, we could be colonizing the galaxy, allowing
the existence of trillions or more new, happy beings, but we aren't.
First elucidated by Dr. Nick Bostrom in his paper "Astronomical
Waste: the Opportunity Cost of Delayed Technological Development",
located here: http://www.nickbostrom.com/astronomical/waste.html.
Autoneurosurgery:
Self-guided neurosurgery, for the purpose of genuine intelligence
enhancement. The benefits of performing neurosurgery on oneself
rather than with an assistant would increase rapidly after a certain
level of intelligence were attained. Through these means, and
if no unpassable bottlenecks were encountered, the autoneurosurgeon
could theoretically bootstrap himself or herself to superintelligence.
However, most Singularity analysts consider the arrival of seed
AI to be more likely in the near future, to say nothing of the ethical
issues surrounding extreme neurosurgery. Non-speculative.
See also neuromorphic engineering, Singularity.
Autonomous Intelligence:
Alternate title for "Artificial Intelligence". The phrase
"Autonomous Intelligence" loosely reflects the hardware extensibility,
substrate independence, superhuman behavioral flexibility,
and self-translucency aspects of genuine Artificial Intelligence.
"Autonomous" also ditches the negative and misleading connotations
of the word "artificial".
Autopotence:
The capability to view 100% of one's internal mental processes, and
make arbitrary revisions. Sometimes referred to in the context of
"strong self-improvement" or "recursive self-improvement".
Humans possess conscious awareness of only a small minority of the
complexity of our internal mental processes, and the capability
to deliberately revise an even smaller minority. Dr.
Nick Bostrom's term. See also autoneurosurgery, self-translucent
intelligence, seed AI.
Bayesian:
A thinker or process that uses Bayes' Theorem, an equation commonly
used as a normative standard of reasoning, to guide inference and
decision-making. See also Bayes' Theorem.
Bayes' Theorem:
Probability theorem, with applications in engineering, philosophy
of science, decision theory, statistics, everyday situations, and
more. Bayesians sometimes argue that Bayesian inference supersedes
Karl Popper's falsificationism as the best paradigm for deriving
truth in science. Bayes' theorem defines a unique optimum for inference.
Sometimes called "Bayes' rule" or the "Bayesian Probability
Theorem". See also An Intuitive Explanation of Bayesian Reasoning.
BCI:
See Brain-Computer Interfacing.
Famous "hard" (physically realistic) science fiction writer.
Author of more than 30 books in the genres of science fiction and
fantasy. Most recently, Bear has been writing novels about human evolution.
Some of his most famous or recent works include "Darwin's Radio",
"Blood Music", and "Slant". Website: http://www.gregbear.com.
Blank slate:
A hypothetical mind with the ability to think and observe, but lacking
explicit high-level goals, accumulated experience, or hardware support
for any skills beyond an absolute minimum. Conventional social science
wisdom tends to suggest that humans are born as blank slates, absorbing
whatever culture writes upon them, but this has been thoroughly disproven
by the field of evolutionary psychology. When thinkers imagine
"blank slate AIs", there is a tendency to imagine some kind
of totally "clean" mind with no biases, "uninfluenced"
by human programmers. There is no such thing; the selective absorption
and abstraction of sensory input, accompanied by targeted output,
is wrapped up within the very essence of intelligence. For better
or for worse, it is not even theoretically possible to construct a
mind with goals "uninfluenced by human programmers". The
best we can work toward is an AI constructed by programmers that act
as representatives for humanity as a whole, who condemn the idea of
creating an AI biased towards any particular human or group of humans.
A human-equivalent AI would require a sizable suite of goals and
predispositions to function at all, among the most basic including
things like "remember salient regularities in the external world",
"parse incoming information into auditory, visual, and tactile
streams", and so on. This automatically renders the AI a non-blank-slate,
and if no top-level goals are introduced into the AI's design by programmers,
a de facto "top-level" goal will emerge as the aggregate
result of low-level goals responsible for the operation of general
cognition. (A difficult but very important concept.) The result would
almost certainly not be an AI that just sits in one place obediently
and thinks about whatever humans tell it to. "Obedient passivity"
is itself a complex high-level phenomenon that exists in certain
humans only because the brainware responsible for our predispositions
towards it is coded into our massive Homo sapiens genome. The
truth is that we really don't know what kind of top-level goals
would emerge from an overlapping of low-level cognitive minigoals,
but it would likely be a goal system totally foreign to the idea of
respecting human welfare. We can cross our fingers and hope that our
preferred philosophy will spontaneously emerge within the AI, but
this is extremely unlikely. The space of goals, philosophies, tendencies,
and predispositions that humans recognize as "altruism",
"kindness", "empathy", or even "tolerability",
constitutes a very small portion of the total space of (mathematically)
possible goal systems.
Since a true "blank slate AI" is a physical impossibility,
when people say "blank slate", this usually ends up meaning
one of two things; either "the simplest functioning AI possible"
or "a normative mind that acts and thinks like a mellow and unbiased
human being". Blank slate AIs may turn out to be the easiest
AIs to build, which is probably a bad thing, because blank slates,
by definition, would lack hardware support for complex motivations
like compassion or empathy. An AI that behaves like a mellow, sane,
and unbiased human being would hardly be a blank slate, and would
require a wealth of design features dedicated to implementing these
behaviors - which is what the "Friendly
AI" project is trying to do. An AI that behaved like a sane,
mellow human being, and stayed that way, probably wouldn't be too
bad (although expecting an AI to behave just like that would be anthropomorphic).
See also design-contingent philosophy, Friendship architecture.
Bostrom, Dr. Nick:
Formerly a Yale philosopher, now an Oxford philosopher. Co-founder
of the World Transhumanist Association. Polymath; writer on
various topics, including superintelligence, the Singularity, anthropics,
Artificial Intelligence, life extension, cognitive science, probability
theory, and existential risks. Notable papers include "How
Long Until Superintelligence?", "Existential Risks:
Human Extinction Scenarios and Related Hazards", and "Ethical
Issues in Advanced Artificial Intelligence". Dr. Bostrom
is one of the first doctorate-holding academics to comment openly on Singularitarian
ideas, along with Dr. Ben Goertzel. Personal website at http://www.nickbostrom.com.
Bradbury, Robert:
Molecular biologist with knowledge of computer science, applied mathematics,
nanotechnology, and other areas. Worked on anti-aging research throughout
the 1990s. Well-known transhumanist. Read more at his home
page or his KurzweilAI.net bio.
Brain-Computer Interfacing (BCI):
The current name of the developing technological field concerned with
the direct readout and manipulation of biological neurons. Sometimes
extended to refer to more indirect linkages, such as those that attempt
to decode electroencephalograms and so on [Yudkowsky01]. If
persons successfully enhanced their intelligence using BCI, then applied
that advanced intelligence to further intelligence enhancement techniques,
then a Singularity and recursive self-improvement could follow.
Broderick, Dr. Damien:
Australian "hard science fiction" writer. Body of high-quality
work stretching back to the 1960s. Dr. Broderick also published the
first non-fiction book devoted solely to the Singularity concept,
"The Spike", available at a store near you. Other well-known
works of Dr. Broderick's include "The Judas Mandala", "Ascension",
and "The Last Mortal Generation". Member of the SL4 mailing
list. Homepage: http://www.panterraweb.com/the_spike.htm.
Brute-force AI:
An Artificial Intelligence development tactic where the sheer processing
power of a host computer replaces requirements for code elegance and
efficiency. Brute force approaches shuffle through as many algorithm
permutations as possible in an effort to generate one intelligent
enough to participate in its own programming and improvement (seed
AI). Brute-forcing of seed AIs is a serious existential risk
(i.e., it could quickly lead to the demise of humanity if not
handled properly). Since brute-force AI development tactics
are fundamentally undirected, programmers will have no control over
which top-level goals emerge in the AI's mind, and the goal(s) the
AI possesses when it becomes capable of resisting modifications to
the goal system (at the threshold of recursive self-improvement)
may end up being the goal the AI has forever (because spontaneously
emergent supergoals tend to include the subclause "resist modifications
to supergoal content". Just take a look at biological evolution.
If it weren't for cosmic rays causing mutations in DNA, then the first
being to evolve would have spread exact copies of itself to every
viable habitat, or died trying.)
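A drastically scaled-down Python sketch of the problem (invented for
illustration; real brute-force proposals search vastly richer program
spaces): candidates are retained solely for hitting a behavioral target,
and nothing in the loop constrains how a surviving candidate works
internally.

    import random

    random.seed(0)
    TARGET = [(x, 2 * x + 1) for x in range(10)]  # desired input/output behavior

    def random_program():
        # A candidate "program": a random linear function. Real brute-force
        # proposals search enormously richer program spaces than this.
        a, b = random.uniform(-5, 5), random.uniform(-5, 5)
        return lambda x, a=a, b=b: a * x + b

    def score(prog):
        # Fitness is behavioral match ONLY; nothing here inspects how
        # (or why) a candidate produces its outputs.
        return -sum((prog(x) - y) ** 2 for x, y in TARGET)

    best = max((random_program() for _ in range(200000)), key=score)
    print([round(best(x), 2) for x in range(3)])  # close to [1, 3, 5]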
Monomaniacal seed AIs with fixed goals would lack the cognitive prerequisites
for empathy and compassion, resulting in worldviews that place humans
on the same level as minerals and other raw materials - we might simply
be viewed as unusually large spires of protein jutting out from the
landscape. It seems very likely there is a bare minimum of cognitive
complexity required for a mind to possess empathy and consideration
for the wishes and opinions of other sentients, what we might call
altruism or even "social common sense". Some neurologically
damaged human beings (psychopaths) lack that cognitive complexity,
as would brute-forced AIs. The possibility of brute-forced AIs makes
it essential that Friendly AI be created before nanocomputing
or quantum computing becomes available. See also Friendship architecture,
human indifference, UnFriendly Singularity.