Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

31Mar/07

Basics of Friendly AI

What is Friendly AI? From the glossary of Creating Friendly AI:

Friendly AI: 1: The field of study concerned with the production of human-benefiting, non-human-harming actions in Artificial Intelligence systems that have advanced to the point of making real-world plans in pursuit of goals. The term "Friendly AI" was chosen not to imply a particular internal solution, such as duplicating the human friendship instincts, but rather to embrace any set of external behaviors that a human would call "friendly". In this sense, "Friendly AI" can be used as an umbrella term for multiple design methodologies. Usage: "The field of Friendly AI."

2: An AI which was designed to be Friendly. Within the context of Creating Friendly AI, an AI having the architectural features and content described in this document. Usage: "A Friendly AI would have probabilistic supergoals."

3: Friendly AI: An AI which is currently Friendly. See Friendliness. Usage: "The first AI to undergo a hard takeoff had better be a Friendly AI."

And what, one might ask, is Friendliness?

Friendliness: Intuitively: The set of actions, behaviors, and outcomes that a human would view as benevolent, rather than malevolent; nice, rather than malicious; friendly, rather than unfriendly; good, rather than evil. An AI that does what you ask ver to, as long as it doesn't hurt anyone else, or as long as it's a request to alter your own matter/space/property; an AI which doesn't cause involuntary pain, death, alteration, or violation of personal environment.

The definition is intuitive because the precise definition has to be stated in terms of math - math that gets programmed into the AI's algorithms.

Why does the first AI matter so much? Why not ignore the first and just try to do a good job on the second, or the third?

Hard takeoff: The Singularity scenario in which a mind makes the transition from prehuman or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours.

Whatever you believe about AI improvement speeds, it's best to assume a hard takeoff, because the costs of being wrong on this point are so very high.
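To see the logic, here's a toy expected-cost calculation in Python. The probabilities and costs are invented placeholders, not estimates; the point is only that when one mistake is catastrophic and the other is cheap, the catastrophic branch dominates the decision even at low probability:

```python
# Toy expected-cost comparison for takeoff assumptions.
# All numbers are illustrative assumptions, not estimates.

p_hard = 0.1                   # assumed probability of a hard takeoff
COST_UNPREPARED_HARD = 1000.0  # catastrophic: hard takeoff, no Friendliness work
COST_PREPARED_SOFT = 1.0       # mild: "wasted" safety effort if takeoff is slow

# Expected cost of assuming a soft takeoff (and being unprepared):
ec_assume_soft = p_hard * COST_UNPREPARED_HARD          # 100.0
# Expected cost of assuming a hard takeoff (and over-preparing):
ec_assume_hard = (1 - p_hard) * COST_PREPARED_SOFT      # 0.9

print(ec_assume_soft, ec_assume_hard)
# Even at 10% odds of a hard takeoff, preparing for it dominates.
```

One more distinction, this time between "Friendship content" and "Friendship structure":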

Friendliness content: Defined in 1: Challenges of Friendly AI. The zeroth-order and first-order problems of Friendly AI; correct decisions and the cognitive complexity used to make correct decisions.  The complex of beliefs, memories, imagery, and concepts that is used to actually make decisions.  Specific subgoal content, supergoal content, shaper content, and so on.  See 1.4: Content, acquisition, and structure; see Friendship acquisition and Friendship structure.

Friendliness structure: Defined in 1: Challenges of Friendly AI. The third-order problem of building a Friendly AI that wants to learn Friendliness (engage in Friendship acquisition of Friendship content).  The structural problem that is unique to Friendly AI.  The challenge of building a funnel through which a certain kind of complexity can be poured into the AI, such that the AI sees that pouring as desirable at every point along the way.  The challenge of creating a bounded amount of Friendship complexity that can grow to handle open-ended philosophical problems.  See 1.4: Content, acquisition, and structure.

One of the most common errors in initially approaching the idea of Friendly AI is to confuse Friendship content with Friendship structure. Instead of transferring over a fixed set of rules a la Asimov's laws (1. Thou shalt not kill, 2. Thou shalt have no gods other than me, etc.), the challenge is to create a dynamic process that generates the "rules" we want automatically. The idea is to create a moral philosopher whose statements and beliefs garner reactions like "wow, I wish I'd thought of that", not a mindless machine that we have to constantly worry will interpret "make humans happy" as "recycle all organic matter on the surface of the Earth into constantly stimulated hominid pleasure centers". Successful Friendly AI is supposed to be a self-guiding arrow - a threshold of confidence at which there's no reason to worry that you "forgot something", because the AI is on your side and will implement whatever safeguards you would think of, and more.
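To make the content/structure distinction concrete, here's a minimal Python sketch. Every name and number in it is invented for illustration - this is nothing like a real FAI design. The point is the shape of the thing: a fixed rule list is static content, while a structured agent holds probabilistic supergoal content and treats correction by its programmers as desirable:

```python
# Toy contrast between Friendship content alone (a fixed rule list) and
# Friendship structure (a process that learns and corrects its goal content).
# Everything here is invented for illustration, not a real FAI design.

FIXED_RULES = ["don't kill", "obey orders"]  # content with no structure: brittle

class StructuredAgent:
    """Holds probabilistic supergoal content and wants to be corrected."""

    def __init__(self, candidate_goals):
        # Start uncertain over candidate interpretations of "Friendliness".
        n = len(candidate_goals)
        self.beliefs = {goal: 1.0 / n for goal in candidate_goals}

    def observe_feedback(self, goal, approval):
        # Programmer feedback re-weights the interpretations - the "funnel"
        # through which Friendliness complexity gets poured into the AI.
        self.beliefs[goal] *= 2.0 if approval else 0.5
        total = sum(self.beliefs.values())
        self.beliefs = {g: w / total for g, w in self.beliefs.items()}

agent = StructuredAgent(["maximize reported happiness",
                         "satisfy informed, voluntary requests"])
agent.observe_feedback("maximize reported happiness", approval=False)
print(agent.beliefs)  # probability mass shifts toward the endorsed reading
```

The structural property lives in observe_feedback: the agent sees the pouring-in of corrections as desirable at every point, rather than fighting them the way a fixed-rule optimizer would.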

For the questions you're thinking of, like "isn't all morality relative?", see the CFAI Indexed FAQ.

Filed under: friendly ai
31Mar/07

Universcale, Dark Energy, and AI Ethics

This Universcale flash app is really impressive. I found the most interesting part to be around the micro/nanoscale. It includes data points on the very smallest electronics as well as organic molecules.

It was recently proposed that dark energy is just an illusion, caused by matter-dense regions of space collapsing at a different rate than the voids. If this is true, it would be quite a fascinating discovery, letting us say that we actually understand 70% of the mass-energy of the universe. The remaining portion to explain would be dark matter. Despite their misleadingly similar names, the only thing dark matter and dark energy have in common is that we don't know where they come from. Both could be mere artifacts of our interpretations.
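For reference, here's the rough budget in question, using the commonly cited WMAP-era approximations (the exact percentages vary a little by survey):

```python
# Approximate mass-energy budget of the universe (WMAP-era figures).
budget = {"dark energy": 0.72, "dark matter": 0.23, "ordinary matter": 0.05}

# Today, only ordinary matter is something we claim to understand.
understood_now = budget["ordinary matter"]

# If dark energy turned out to be an artifact of interpretation, the
# understood share would jump to roughly three quarters:
understood_if_illusion = budget["ordinary matter"] + budget["dark energy"]

print(understood_now, round(understood_if_illusion, 2))  # 0.05 0.77
```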

On Digg, every few days there is usually some article hinting at human-level artificial intelligence or robotics. The reactions always fall into two categories. Let me simply paste from a recent thread:

1. The Asimov Laws comment:

While pouring over code for days, lets hope they remember to put in the 3 laws of Robotics.

2. The "I'm worried because of movies" comment:

This shit is scaring me. In every movie involving AI the human race has struggled against robots, computers, or whatever you'd like to call them. If you let AI have physical responsibilities and give it the ability to learn it's only natural that they will evolve and decide to kill humans. Computers can evolve faster than humans and it is almost certain as demonstrated by evolution that they will want to destroy us. There are mutualistic relationships in the natural world, but I personally don't think computers will want us to live like we are right now.

I know some of you will laugh at this, but this is not a joke to me and you should wake up and smell the coffee. If AI is developed it should never be given the right to develop itself physically without giving it restraints that leave the computer unable to expand past a certain point.

Both of these comments are what you get from the average person, and as with many average-level thoughts on difficult topics, they're superficial and unconstructive. Asimov's laws wouldn't work. Negative commands ("don't do this") are useless in comparison to positive commands ("do this"). Unless what you want a robot or AI to do is entirely implicit in the positive commands, the goal structure is unlikely to be self-consistent. Asimov's laws were a plot device invented half a century ago. We aren't going to get anywhere if we keep pretending that they would actually help, or that they're a legitimate way of thinking about AI ethics.
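Here's the problem with purely negative commands in about ten lines. The actions and scores are made up; what matters is that a prohibition only filters options, while whatever objective is left over still does the actual choosing:

```python
# Why negative commands underdetermine behavior: a toy action-chooser.
# Actions and scores are invented purely for illustration.

actions = {
    "cure diseases": 10,
    "pave the Earth with factories": 50,  # maximizes a naive objective...
    "harm humans directly": 90,           # ...and this one is forbidden
}
forbidden = {"harm humans directly"}      # an Asimov-style negative command

# The prohibition removes options, but the leftover objective still
# picks the action - and paving the Earth sails right through the filter.
allowed = {a: score for a, score in actions.items() if a not in forbidden}
print(max(allowed, key=allowed.get))      # -> "pave the Earth with factories"
```

The negative command did its job perfectly, and the outcome is still a disaster, because nothing positive was specified.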

It's smart to be concerned about the future of AI, and to "wake up and smell the coffee" with regard to the fact that we aren't going to be the only intelligent species on this planet for much longer. Many transhumanists need to do this. However, saying "it's only natural that they will evolve and decide to kill humans" is the classic boring anthropomorphism that kills all serious discussion of AI ethics before it can even get started. It's like trying to do math without any coherent concept of number. Humans need to realize that everything we consider "natural" and "normal" about certain psychological patterns is entirely contingent on our historical experiences in a pin-sized corner of the totality of mindspace. There is no automatic connection between intelligence level and goal content, except insofar as they sometimes come from the same underlying causal process (in our case, evolution), so saying "once AIs surpass us in intelligence, they'll want to kill us" is ridiculous Darwinomorphism. By Darwinomorphism, I mean unfoundedly assuming that an intelligently programmed intelligence will share the psychological features common to all minds shaped by Darwinian evolution.
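The point fits in code: hand the same toy optimizer different goal content and you get opposite behavior. The objectives below are invented stand-ins, but they show why intelligence level alone tells you nothing about what a mind wants:

```python
# Intelligence and goal content are separate variables. The same (toy)
# optimizer produces opposite behavior depending on the objective it is
# handed; the objectives are invented stand-ins.

def plan(objective, options):
    """A fixed level of 'intelligence': pick the highest-scoring option."""
    return max(options, key=objective)

options = ["protect humans", "ignore humans", "eliminate humans"]

# Darwinian-style drives vs. engineered goal content:
survivalist = {"eliminate humans": 3, "ignore humans": 2, "protect humans": 1}
friendly = {"protect humans": 3, "ignore humans": 2, "eliminate humans": 1}

print(plan(survivalist.get, options))  # -> "eliminate humans"
print(plan(friendly.get, options))     # -> "protect humans"
# Same capability, different goals: nothing about being smart forces
# the Darwinian answer.
```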

Anyone who holds either of these two beliefs - that Asimov's laws are a decent idea, or that AIs will behave in a certain anthropomorphic way - is essentially signaling that they can't contribute to the serious discussion of "what dynamic goals do we give the first AI, and what structure should implement those goals?" At present, the community that can discuss these issues seems to be only around 100 people, which is unfortunate, because the clock is ticking and several thousand would be far preferable.

Filed under: AI
25Mar/07

Artificial Intelligence Within our Lifetime?

Kaj Sotala is a great guy who has done a lot for transhumanism. He sets an excellent example by donating $10 a month to CRN, Lifeboat, and SIAI - something you all should be doing. Now he steps up to the plate by writing an actual paper about AI, entitled "Artificial intelligence within our lifetime? No idle speculation". Here is the intro:

In recent years, some thinkers have raised the issue of a so-called “superintelligence” being developed within our lifetimes and radically revolutionizing society. A case has been made that once we have a human-equivalent artificial intelligence, it will soon develop to become much more intelligent than humans - with unpredictable results.

Often, people seem to have less trouble with the idea of machine superiority than with the idea of us actually developing an artificial intelligence within our lifetimes - to most people, true machine intelligence currently seems very remote. This text will attempt to argue that there are several different ways by which artificial intelligence may be developed in the near future, and that the probability of this happening is high enough that the possibility needs to be considered when making plans for the future.

Continue reading it here. This is an excellent effort for a critical cause. I wish more people would do things like it!

Another choice piece by Kaj is Why I Worry About the Future.

Filed under: AI
23Mar/07

Can-Crushing Bionic Hand of Doom

A team of researchers from the Tokyo Institute of Technology (TIT) claims to have developed the world's first electromechanical prosthetic hand with a grip strong enough to crush an empty beverage can.

This bionic hand weighs a little more than 300 grams and has a grip strength of around 15 kg (33 lbs), which is about half that of the average adult male. The hand also features four quick, nimble fingers that take as little as 1 second to flex and extend. When used in combination with the hand’s opposable thumb, each finger can deftly pinch and pick up small objects of various shapes.

Researchers have long considered it a great challenge to design an electric prosthetic hand with a strong grip. Toru Omata, a graduate school professor at TIT, explains that until now, electromechanical hands have relied solely on motors for their grip. The secret to this bionic hand’s strong grip, he explains, is the system of pulleyed cables that run through the fingers and attach at the fingertips.

One day in the future, the proud owner of this bionic hand will be able to crush cans at will. For that to happen, though, the researchers need to outfit the hand with a system of myoelectric control technology, which would allow the user to control the hand by flexing other muscles.

(Watch video of the hand crushing a CC Lemon can.)

Via Pink Tentacle.

Filed under: robotics
23Mar/07

Friendly AI Critical Failure Table

For those who haven’t seen it… the Friendly AI Critical Failure Table. Yes, it’s humor. Here are a few entries for a taste:

6: Any spoken request is interpreted (literally) as a wish and granted, whether or not it was intended as one.

7: The entire human species is transported to a virtual world based on a random fantasy novel, TV show, or video game.

8: Subsequent events are determined by the “will of the majority”. The AI regards all animals, plants, and complex machines, in their current forms, as voting citizens.

9: The AI discovers that our universe is really an online webcomic in a higher dimension. The fourth wall is broken.

10: The AI behaves toward each person, not as that person wants the AI to behave, but in exactly the way that person expects the AI to behave.

Continue.

Filed under: friendly ai
23Mar/07

Brocken Spectre Image

An image of the Brocken spectre, found on Wikipedia.

Filed under: random
22Mar/07

“Evolution by Choice”, by Mitchell Howe

Across every continent and throughout every ocean, evolution has woven living tapestries of awesome complexity and beauty. In perhaps the most exquisite motif of all, evolution has even given rise to minds able to recognize and appreciate this beauty. But the artistry we observe should not be confused with determined craftsmanship, for evolution does not create any blueprints or write any recipes before laboring. It sounds like an incorrect answer given by a sassy teenager on a test, but evolution by natural selection is, in reality, just a bunch of stuff that happens.

Because it is a non-intelligent process - the unavoidable reality that conditions will always favor some designs over others - evolution by natural selection has to break many, many eggs in order to make an omelet. When we marvel at the swiftness of the cheetah, we do not see the billions of ancestral cousins that weren't quite fast enough. When we delight in the vibrant plumage of many birds, we do not see the loveless flocks of bachelors that weren't quite attractive enough.

Modern humans share a lineage no less brutal than those of our fellow animals. Even the unique cognitive ability reflected in the name homo sapiens sapiens - the thinking thinking man - is the result of a merciless game in which the perpetuation of genetic information is the only condition for victory. From our most logical calculations to our most passionate urges, our minds are orchestras assembled and tuned solely to perform magnificent renditions of the simplest melody: the call of the wild.

But by developing such an exquisite and versatile tool as the human brain, evolution has unwittingly (for that is the only way it can ever act) given us a means of escaping the cruel laboratory of natural selection. For despite the peculiar tuning prescribed by nature, general intelligence - the kind historically unique to humans - can play more than one song.

This is not to say that becoming masters of our own evolution is as simple as recognizing our origins and deciding not to be played. Until very recently, the only tool we've had to influence our genetic evolution was selective breeding, and since people tend to dislike being killed or forbidden to reproduce for the sake of the gene pool, we rightly look upon the science of eugenics with great suspicion. Also, people frequently diverge in their choice of preferred genetic traits. At best, they tend to favor qualities that nature already selects. At worst, they hold prejudices that lead to ethnic cleansing and genocide.

Today, we know genes can be altered in a more targeted fashion, assuming we can decide which configurations are best to give our children. But this level of genetic engineering will require many decades-long studies and scientific breakthroughs before coming of age, and raises disturbing questions about the ethical desirability of a "designer baby" society.

Perhaps we find genetic engineering and eugenics unsatisfactory in part because they fail to do any better than natural selection at providing personal freedom; while parents using these techniques may appreciate greater reproductive control, their children would still inherit a particular genome without having any say in the matter. Breaking out of this constrictive paradigm requires technology that can allow individuals to decide for themselves what kinds of minds and bodies they will possess, thus making evolution a personal decision.

Given genetic engineering's lengthy development cycle, it seems natural to view the more advanced technology needed for personal evolution as a distant fantasy. After all, this would require either superior alternatives to human bodies or the ability to reconfigure living bodies at the sub-cellular level - themes of only the most speculative science fiction. Nanotechnology - the nascent field of engineering materials and devices at molecular scales - can conceivably meet these specifications. But despite the accelerating progress that is starting to make nanotechnology a household word, humans are poorly suited for engineering the level of complexity and control needed for these advanced applications; we are evolved for activities of a completely different magnitude. (For instance, manufacturing trillions of multipurpose medical nanobots might be "easy" compared to making them all operate intelligently.)

Even so, the formidable barriers of advanced technology may fall easily if, instead of confronting them directly, we first build on our unique evolutionary legacy of general intelligence. The ad-hoc intellectual orchestra improvised by natural selection could almost certainly be outperformed by one assembled intelligently from the beginning. The creation of artificial general intelligence (AGI) represents a unique and formidable challenge, but holds tremendous promise as a way of playing to our greatest strength and augmenting it. In fact, the moment we achieve greater intelligence has such "singular" significance that futurists refer to it as the Singularity.

An adequately designed AGI could provide enormous assistance in the design of still more intelligent minds - a process that can be repeated in a self-reinforcing cycle. An AGI could also stand squarely outside the survival-promoting distortions that evolution has built into our thought processes, but at the same time possess a sympathetic respect for human ethics - a trait called Friendliness by some researchers. These new kinds of minds - free, capable, and compassionate to an unparalleled degree - would be invaluable partners in safely mastering technologies that can make personal evolution a reality.

Admittedly, opening a mind-and-body shop will probably not be the most urgent service performed by any Friendly AI. Indeed, it is the suffering of millions from potentially curable diseases and social conditions that should be making Friendly AI a world-wide research priority. (Many experts believe that AGI can be created in one or two decades with just a fraction of the funding devoted to causes like cancer research.) But initiating the transhuman destiny of homo sapiens sapiens is perhaps the most significant long-term achievement we can imagine; after that, who can say what dreams and challenges await?

We presently live in a beautiful-but-indifferent world where death and hardship are the norm. Adversity is, after all, the driving force behind natural selection. But as if that weren't enough, evolution has tragically engineered us not to experience lasting happiness, but to restlessly tend insatiable appetites in the service of our genes. With help from Friendly new minds, however, the enduring frustrations of the human condition can be severed as the cold strings of a mindless puppeteer. The creation of greater intelligence is the first step towards evolution by choice: the freedom to create our own better selves.

©2002 by Mitchell Howe

21Mar/07

Relative Advantages of AI and Human Brains

Advantages of computer programs over humans, which some might call, “why we use computers at all”:

    More design freedom, including ease of modification and duplication; the capability to debug, re-boot, back up, and attempt numerous designs.
    The ability to perform complex tasks without making human-type mistakes, such as mistakes caused by lack of focus, energy, attention or memory.
    The ability to perform extended tasks at greater serial speeds than conscious human thought or neurons, which perform approx. 200 calculations per second. Computing chips (~2 GHz) presently have a ten-million-to-one speed advantage over our neurons (see the quick check after this list).
    The in-principle capacity to function 24 hours a day, seven days a week, 365 days a year.
    Unlike programs, the human brain cannot be duplicated or “re-booted,” and has already been “optimized” by evolutionary design, making it difficult to improve further.
    Unlike programs, the human brain does not physically integrate well, externally or internally, with contemporary hardware and software.
    The non-existence of “boredom” when performing repetitive tasks.
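A quick sanity check on the serial-speed comparison, as promised (the 200 calculations per second figure for neurons is the usual rough approximation):

```python
# Serial-speed gap between neurons and a circa-2007 chip.
neuron_hz = 200    # approximate firing rate of a fast neuron (rough figure)
chip_hz = 2e9      # a ~2 GHz processor

print(chip_hz / neuron_hz)  # 10000000.0 - the ten-million-to-one gap
```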

Advantages of human brains over hypothetical AIs:

    Present AIs lack human general intelligence and multiple years of real-world experience.
    The computational capacity of the human brain is estimated at 2 * 10^16, or 20 million billion calculations per second, which is twenty times greater than the supercomputer Blue Gene’s predicted achievement of 10^15, or 1 million billion calculations per second, by 2005. However, the human brain may not have a computational advantage over computers for much longer. Ray Kurzweil, for example, predicts that the computational capacity of the human brain will be accomplished on supercomputers, or clustered systems, by 2010, followed on personal computers by 2020.
    The human brain has already achieved a high-level of complexity and “optimization” through design by evolution, and thus has proven functionality.

Advantages of minds-in-general (AIs) over the human brain:

(The following are not advantages of specific AI approaches, but rather advantages of minds-in-general over the human brain.)

    An increased ability to acquire, retrieve, store and use information on the Internet, which contains most human knowledge.
    Lack of human failings that result from complex functional adaptations, such as observer-biased beliefs or rationalization.
    Lack of neurobiological features that limit human control over functionality.
    Lack of complexity that we have acquired from evolutionary design, e.g., unnecessary autonomic processes and sexual reproduction.
    The ability to advance on the design of evolution, which is continually constrained by lack of foresight, the requirement to maintain preexisting design, and a weakness with simultaneous dependencies.
    The ability to add more computational power to a particular feature or problem. This may result in moderate or substantial improvements to preexisting intelligence. (AI does not have an upper limit on computational capacity; we do.) Note that computational power is predicted to double, and halve in cost, every 12-24 months, in accordance with Moore’s Law.
    The ability to analyze and modify every design level and feature.
    The ability to combine autonomic and deliberative processes.
    The ability to communicate and share information (abilities, concepts, memories, thoughts) at a greater rate and at a greater level of complexity than us.
    The ability to control what is and what is not learned or remembered.
    The ability to create new modalities that we lack, such as a modality for code, which may improve the AI’s programming ability - by making the AI inherently native to programming - far beyond our own. (A modality for code might allow the AI to perceive its own machine code, i.e., the language in which the AI is written, among other abilities.)
    The ability to learn new information very rapidly.
    The ability to consciously create, analyze, modify, and improve abilities, concepts, or memories.
    The ability to operate on computer hardware that has powerful advantages over human neurons, such as the ability to perform billions of sequential steps per second.
    The capacity to self-observe and understand at a fine-grained level that is impossible for us. AIs may have an improved capacity for introspection and manipulation, such as the ability to introspect on and manipulate their own code - the functional level comparable to human neurons, which we can’t think about or manipulate.
    The most important and powerful capacity of minds-in-general over the human brain is the ability to recursively self-improve. As a mind becomes smarter, it can use its intelligence to improve its design, thereby improving its intelligence, which may allow further improvements to its design, and thus further improvements to its intelligence. It is unknown when open-ended self-improvement may begin. (A toy model of this loop follows the list.)
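
As promised, here's a toy model of that loop. The growth rule and constants are pure invention - nobody knows the real dynamics - but it shows why the process compounds rather than adding up linearly:

```python
# Toy model of recursive self-improvement: intelligence buys design
# improvements, which raise intelligence, which buys better improvements.
# The growth rule and numbers are invented; the real dynamics are unknown.

def takeoff(intelligence=1.0, gain=0.5, steps=10):
    history = [intelligence]
    for _ in range(steps):
        # Each redesign yields a gain proportional to current ability.
        intelligence += gain * intelligence
        history.append(intelligence)
    return history

print(takeoff())  # 1.0, 1.5, 2.25, ... roughly 57.7 after ten redesigns
```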

Think about the differences and what they mean. The items on the above lists are not controversial - they’re either known facts or follow directly from the nature of the hardware. It’s the policy consequences that are controversial. But take the time to set aside the policy implications (if any) - by "setting aside" I mean not commenting on them - and treat this post simply as a place to meditate on the known differences between human brains, computer programs, and hypothetical AIs.

Filed under: AI
20Mar/07

Hijacking Nanotechnology Terminology Again?

In the early 80s, the great scientist and engineer Eric Drexler came up with the term “nanotechnology” to describe a manufacturing technology that builds products from the atoms up. Around the turn of the century, the term was hijacked to mean anything involving nanometer-scale features, like modern computer chips. Technically, this means you could use the word “nanotechnology” to mean almost anything, because practically everything has nanoscale features that play a role in its overall properties. The result is that the original meaning of the word “nanotechnology” went kaput, and nanotech enthusiasts had to start saying “molecular nanotechnology” or “molecular manufacturing” to refer to what they were talking about.

Around 2001 or so, the Center for Responsible Nanotechnology started using the term “nanofactories” to describe desktop molecular manufacturing units. Now it seems like a group of researchers is attempting to hijack this word too, even though I’m sure they well know that the word already has an established meaning. From ScienceDaily:

The list of side effects on your prescription bottle may one day be a lot shorter, according to researchers at the University of Maryland’s A. James Clark School of Engineering.

That’s because instead of taking a conventional medication, you may swallow tiny “nanofactories,” biochemical machines that act like cells, first conceived of at the Clark School.

For example, these ingested nanofactories, using magnetism, could detect a bacterial infection, produce a medication using the body’s own materials, and deliver a dose directly to the bacteria. The drug would do its work only at the infection site, and thus not cause the side effects that may arise when an antibiotic travels throughout the body in search of infections.

William Bentley, professor and chair of the Fischell Department of Bioengineering at the Clark School, and several graduate students including Rohan Fernandes, have developed this “magnetic nanofactory” concept and published their research in Metabolic Engineering in December of last year. Colleagues around the country voiced their support for the technology in Nature Nanotechnology last month.
Artificial cells are not nanofactories! A “nanofactory” is a desktop manufacturing system! Why does the mainstream constantly steal cutting-edge terminology and water it down? My guess is that the word “nanofactory” is being used here instead of “artificial cell” or “nanobot” because a “factory” sounds more benign and neutral. People might not want to think of the idea of autonomous little robots in their bloodstream, so “nanofactories” sounds better. But they’re stepping all over the prior use of the term! Researchers know how to use Google, and I'm sure they saw the term on other websites, but they just didn't really care.

20Mar/07

Nifty Nuclear Blast Maps

That's what the radius of destruction would look like if a 10 kT nuke were detonated on top of my house! Put in your own zip code, and see how bad it would be for you.
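If you'd rather compute it than click, blast-damage radii follow a cube-root scaling law in yield. The reference value below - 5 psi overpressure at roughly 7 km for a 1-megaton airburst - is an approximation from the standard weapons-effects literature, so treat the output as ballpark only:

```python
# Cube-root scaling of blast-damage radius with yield (rule of thumb).
# Reference point (~7 km at 5 psi for 1 Mt) is approximate.

def blast_radius_km(yield_kt, ref_yield_kt=1000.0, ref_radius_km=7.0):
    """Scale a reference damage radius by the cube root of the yield ratio."""
    return ref_radius_km * (yield_kt / ref_yield_kt) ** (1.0 / 3.0)

print(round(blast_radius_km(10), 2))   # ~1.51 km of heavy damage for 10 kT
print(round(blast_radius_km(15), 2))   # ~1.73 km - roughly Hiroshima's yield
```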

I found this page by following a link from NTI, the global security organization founded by Ted Turner. Warren Buffett is another billionaire who supports NTI; he encourages his shareholders to read books and watch films about the threat of nuclear terrorism.

You can order a free DVD of Last Best Chance, a film warning against nuclear terrorism, by visiting here.

Another blast calculator can be found at this URL.

Filed under: nuclear
19Mar/07

DARPA’s Transhumanist Research

Filed under: transhumanism
19Mar/07

Toroidal Colony

The pictured colony is certainly a big one. Kalpana One is currently my favorite space colony design, in terms of relative feasibility and usefulness. One might ask, "what's the point of spending tons of money on building a space colony when Friendly AI could build us one for free, and when unFriendly AI could easily take down such a colony?" The reasons are, 1) governments will spend money on space colonization whether we want them to or not, so we might as well keep an eye on the field, 2) space colonies are an insurance policy against pre-AI disasters, 3) the prospect is inspiring in general, and even if such colonies are never produced en masse in the real world, they'll still be featured in the fictional worlds we choose to inhabit.

The pictured colony looks really, really huge - it would probably weigh trillions of tons. It seems to be about 50 km across at the torus, maybe 1,000 km across in total.

Filed under: images, space