What follows is possibly the first reference to AI/robotic recursive self-improvement in fiction, from all the way back in 1935. Quote from Technovelgy:
In this story of a future Earth, humanity had all of its needs met by a device - an intelligent machine.
"You have forgotten your history, and you have forgotten the history of the Machine, humans..."
"On the planet Dwranl, of the star you know as Sirius, a great race lived, and they were not too unlike you humans. ...they attained their goal of the machine that could think. And because it could think, they made several and put them to work, largely on scientific problems, and one of the obvious problems was how to make a better machine which could think.
The machines had logic, and they could think constantly, and because of their construction never forgot anything they thought it well to remember. So the machine which had been set the task of making a better machine advanced slowly, and as it improved itself, it advanced more and more rapidly. The Machine which came to Earth is that machine."
From The Machine, by John W. Campbell.
Published by Astounding Science Fiction in 1935.
Looks like the Singularity idea is not so new after all.
In the field of AI, the supergoal is to create an information processing
system that does something truly significant. (Whether this something is
good, bad, of financial worth to a few, of world-ending importance to many,
etc., depends upon who is doing the programming and how successful they are
at it.) The seemingly essential subgoal that defines AI research is to
create a system that can both learn and improve itself in a self-reinforcing
manner to eventually meet the end objective of significant action. Some
minimal yet critical combination of software elegance and hardware
capability is required to get to this point.
Discussion often lingers on the questions of how near to the capacity of the
human brain such a system would need to be in order to meet this goal, or
even what degree of human-brain fidelity might be required. I believe such
questions are largely meaningless because they lose sight of the only
supergoal - that such a system sustainably learn and improve, leading to
eventual significant action.
Consider this in light of the debate about whether a person with an IQ of 50
can ever hope to achieve the results of someone with an IQ of 100. Remember
that within the wide range of IQ scores held by capable adults, there are
many with high IQs who have failed to contribute anything insightful or
even useful, just as there are many with lower IQs who have come up with
world-changing ideas and become leaders in business. (While far from
scientific, an issue of TIME from early this year had fun with this idea.)
The ability to solve simple problems and make logical conclusions from given
data, as measured by IQ scores, does not directly correlate to the AI
supergoal of doing something truly significant. Somebody may know how to
design a better mousetrap yet never do anything with this knowledge. We
would hope that an AI would not likewise 'fizzle' (unless its better
mousetrap design was a grey goo that would wipe out all mammalian life).
I believe that a large part of the surprisingly common discord between IQ
scores and societal significance can be explained by my simple theory of
'Effective Sagacity'. It begins with the idea that there are various levels
of thought experienced in the human mind, and that only the time spent at
the highest level contributes to genuinely productive intelligence. I
prefer to identify just two levels of thought with the disclaimer that there
is no hard line between them. I like to call them Fidget and Sage.
Fidget is the level of thought that involves making numerous small, trivial
decisions and enacting any routine physical actions these decisions require.
Many activities, once learned, become Fidgetized. Card shuffling and
dealing. Assembly line tasks. Simple arithmetic. Brushing your teeth.
You know that they are Fidgetized because you can think about something else
entirely while doing them. But you don't always think about something else
because Fidget is often capable of bringing the Sage mind along behind it in
lock-step. (I'll talk more about the interplay between these two in a
second.) Fidget cannot intentionally change your life, but it is very useful
and reliable.
Sage is the level of thought that involves conscious consideration and
complex decision-making. It is the level you are at when you not only hear
what your professor is saying, but also think about it, relate it to your
model of the universe, and implement it accordingly - *learning*. Sage is
responsible for pondering the deeper questions of life, sustaining
meaningful conversation, and making conclusions about your identity. It was
hopefully the level you were at if/when you decided on a career, spouse,
etc. Sage is not all-powerful, though. For starters, it has very low
endurance when most actively engaged, like someone who can walk for miles
but can barely run a lap around the track. It is also easily distracted by
inconsequential tasks, like a dog happily entertained for hours by a simple
game of catch. In fact, given the choice between running a lap and
repeatedly grabbing a stick in its mouth, Sage will usually bring you the
stick.
Because of the complementary talents of Fidget and Sage, they have a very
friendly relationship. People are often most satisfied when both are
simultaneously occupied at a low-to-middle stress level. Solitaire on the
computer is mostly a thoughtless exercise of mouse clicks under Fidget
control, with occasional input from Sage when an actual strategic decision
needs to be made. Neither mind is working terribly hard but both are
occupied and satisfied - a condition of well-being some researchers have
called "flow". Fidget is just as happy to spend hours throwing a stick as
Sage is to chase it and bring it back -- the seductive addiction of video
games and jigsaw puzzles is explained.
The poor endurance of Sage, and its desire to rest at an optimal
lower-stress activity level, also shed light on many kinds of
procrastination, since the thing you put off doing is often some special
case that requires a higher Sage activity level. "I can't study anymore for
my final. I must go for a swim and work on my tan." "I can't finish
writing about levels of thought right now. I must play Diablo II for a
couple of hours."
(Five hours later)
There are times though, when one level of thought operates almost
independently from the other. If you have ever been putting staples in
hundreds of documents when you realized that you had run out of staples a dozen
slams of the stapler ago, you know what I am talking about. The fully
Fidgetized task did not require the attention of Sage, who found something
else to do and failed to notice and report the absence of staples. It is
either called "daydreaming" or "spacing out", depending on whether Sage was
meandering through the park or asleep on the bench when it was discovered.
Driving is an activity that unfortunately lends itself to inappropriate
Fidgetization. While first learning to drive, few can really think about
much else besides driving, but over time the procedures become more routine.
Many, many traffic accidents have occurred because people allowed Sage to
leave driving completely up to Fidget, who does not react promptly when
something unexpected occurs. Perhaps Sage was talking to his stock broker
on the cell phone, or perhaps just carrying on an imaginary conversation
with an ex-lover who would be oh-so jealous about seeing him with so-and so
behind the truck that just stopped suddenly in front of -WHAM!-. (I mean,
honestly, there are few excusable reasons to rear-end someone.)
Sage can also be deliberately put out to pasture, and this is frequently
done when Fidget is busy and can't play. Many drivers and workers in
repetitive jobs either consciously or unconsciously silence Sage by
listening to music - an activity that for many gets Sage absently swaying to
the beat. (This is not always the case when listening to music, but a use to
which it is frequently put.)
Even if Fidget is not busy, Sage can be intentionally suppressed. For some,
like angst-ridden teenagers, conversations with Sage may be so disturbing
that loud music is the best way to drown them out. For others, chatting
with Sage may simply be dull and unsatisfying. Alcohol and marijuana are
known Sage-suppressants. TV offers many levels of basic thought occupation
catered mostly to minds ranging from the "moronic" to the "typical
American" - which is why many noticeably intelligent people have just one or
two favorite shows and renounce the rest as a worthless morass of glandular
stimulation.
So what do I mean by "Effective Sagacity"? Well, by now it should be
obvious that humans, on average, spend very little time with Sage hard at
work. Sage is usually engaged in trivial games with Fidget, deliberately
distracted while Fidget is busy, or intentionally suppressed because of
boring or uncomfortable mental dialogue. It may even be that Sage, when
allowed to slack off so much, becomes even more out of shape and incapable
of running laps. (I reluctantly make this conclusion knowing that I give
ammunition to those who deride my generation and subsequent ones as having no
attention span thanks to today's ubiquitous entertainment technology.) The
problem is, high-level Sage-thought is the only kind that fosters true
learning, creativity, experimentation, etc. Therefore, even the most
high-IQ human may never produce anything new or useful to society if she is
unable or unwilling to regularly put her lanky-but-lazy Sage through its
paces. The low-IQ underdog may climb to the top of his field because his
awkward-but-fit Sage is continually running marathons. The formula is as
follows:
**The amount previously invested and currently spent in highest-level
thought combine to form one's "Effective Sagacity." In the end, this is the
*only* measurement of mental capacity an AI researcher ought to be
concerned with.**
Note that I did not say that Effective Sagacity was the proportion of high
Sage thought to other thought, nor did I say that it was the average height
of one's thoughts. Only highest-level 'Sage' thoughts count. Only thoughts
already completed (which by definition have enriched the mind) or currently
undertaken count. This means that a mind too unsophisticated to think any
deep thoughts will automatically be disqualified from having a high
Effective Sagacity. It also means that a high IQ -- the mere potential to
think really big thoughts -- is meaningless.
When we talk about AI, it must be said that a self-improving seed
intelligence has the potential to have an Effective Sagacity score
completely off the charts compared to humans. This is fine. If, due to
faster-than-neuron circuitry and clever software, the AI thinks through the
equivalent of 1,000 human years of high Sage thought in just two weeks, the
scale is not broken - just embarrassing to humans. It may also be that this
same AI is thinking thoughts of far higher Sage than humans are capable of.
This is more of a stretch for the Effective Sagacity scale, but if such is
demonstrably the case, then the machine is already a superintelligence that
is probably doing something very significant. Hope it's friendly.
An AI researcher, then, should also take heart in the knowledge that most of
the human mind's activity may not need to be replicated in order to create a
machine that thinks high Sage thoughts. Others have already stated well the
reality of the human mind's origins and its preoccupation with biological
drives. These same forces undoubtedly worked in some way that I do not
fully understand to create the range of generally low-endurance Sage most of
us rely upon to learn and create. An artificial intelligence would not only
be free of the bio-burdens of survival, but also of the human limitations on
sustained high-level thought. It may not be necessary to come even close to
matching human neural capacity in silicon, not only because so much of the
brain's body-minded tasks need not be wired for, but because the primary
thought tasks that are programmed will be consistently carried out. If a
software engineer spends just 30 minutes a day actually entering code, she
is probably not spending the other 7.5 hours thinking about that code, but
rather some 2.5 hours thinking about the code, 2 hours thinking about food,
sex, or social status, and 2 hours "spaced out" or otherwise incapacitated
by Sage lazily chasing down or soaking up trivial thoughts of some kind or
other. An AI should be able to tweak this balance strongly in favor of the
on-target thought.
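The arithmetic of that hypothetical day can be made explicit (the hour figures are the essay's rough estimates; the labels and the leftover hour are my own bookkeeping):

```python
# Rough breakdown of the hypothetical engineer's 8-hour day, in hours.
human_day = {
    "entering code": 0.5,
    "thinking about the code": 2.5,
    "food, sex, or social status": 2.0,
    "spaced out / trivial thoughts": 2.0,  # remainder left vague ("some")
}
on_target = human_day["entering code"] + human_day["thinking about the code"]
workday = 8.0
print(f"on-target fraction: {on_target / workday:.0%}")  # about 38%
```

An AI that spent even 90% of its cycles on-target would better that fraction by more than a factor of two, before any raw-speed advantage is counted.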
It is possible that this conclusion is wrong; it could be that there is some
fundamental limitation inherent in the brain's level of computational capacity
that makes it possible to learn effectively for short periods of time but
impossible to do it for weeks on end - but I doubt it. It could also be
that an AI would have its own crippling correlates to human Fidget
activities - exhaustive memory or data-stream management, perhaps. These
Fidget distractions could easily demand so much attention that little
capacity is left for Sage thought. (This metaphor may very crudely apply to
Ben Goertzel's early incarnation of Webmind.) More efficient coding and
more powerful hardware seem very likely to overcome this potential
bottleneck soon, however.
All these happy conclusions seem to support the view of a hard, fast AI
takeoff sooner rather than later. I'm all too happy to stand by that, but
the Effective Sagacity view suggests an additional hurdle for a growing
seed AI - the limits of human knowledge obtained thus far. A highly
Sagacious AI would be very adept at learning new material, at internalizing
input to create a more accurate model of the universe, and using this model
to produce insightful output. The problem potentially arises after the
young AI has devoured all available texts and treatises on computer science
along with all examples of program code - and perhaps managed to make only
modest improvement on its own design. Further progress could be very slow
without additional instructional materials. Fortunately, the truly
Sagacious AI could also effectively find its way out of this cul-de-sac of
human thought. It could do so the same way outstanding scientists do today:
by identifying the limits of current understanding and coming up with the
right questions to ask in order to expand those limits. The AI could either
come up with great experiments to advance human knowledge, or, more
efficiently in the software field, create and perform experiments on its
own. Even if the AI is -merely- capable of directing humans in bold new
experiments, it has already done something truly significant. This would
also increase the likelihood that it would continue to be capable of
improvement and further ultimate significance.
The Effective Sagacity view suggests that the goal of AI is simpler than it
is often made out to be. Not only does AI not require replication of the
human brain, it should not prove as susceptible to subtle weaknesses that
sap the capacity of even the most brilliant humans to sustain high-level
thought. It would be naive, however, to suggest that creating an AI is a
simple task. Coding and wiring for a truly significant new intelligence
demands both daring creativity and enviable perseverance. It will require
thinkers of the highest Sagacity.
Below is a list containing various power values and the quantity of water they can boil. For the water, the initial temperature is approximately that of the ocean's surface, 20 °C.
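The list itself isn't reproduced here, but the underlying calculation is simple sensible-heat arithmetic. A minimal sketch, ignoring heat losses and the latent heat of vaporization (i.e. heating the water from 20 °C to the 100 °C boiling point only):

```python
# Mass of water a sustained power can bring from 20 C to the boiling point
# each second. Ignores losses and the energy needed to actually vaporize it.
SPECIFIC_HEAT = 4186.0     # J/(kg*K), liquid water
DELTA_T = 100.0 - 20.0     # heating from ocean-surface temperature

def kg_to_boiling_per_second(power_watts):
    return power_watts / (SPECIFIC_HEAT * DELTA_T)

for p in (1e3, 1e6, 1e9):  # a kettle, a small plant, a large power station
    print(f"{p:.0e} W -> {kg_to_boiling_per_second(p):.3g} kg/s")
```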
Above are my images (and a few by others) of the Transvision 2007 conference, held last week in Chicago. Regrettably, I left my camera charger at home so I only caught the first half of the conference. George Dvorsky posted his photos here. There is also a video of Ray Kurzweil's acceptance speech for the H.G. Wells award, given each year to an outstanding transhumanist. Previous winners include Aubrey de Grey, Ramez Naam, and Charlie Stross.
Visit new blogs on the Accelerating Future domain:
Black Belt Bayesian
Life, the Universe, and Everything
The authors are Steven and Tom. Feel free to welcome them in the comments. Here are my favorite recent posts from Black Belt Bayesian and Life, the Universe, and Everything:
- Transhumanist Buzzword Bingo (I really love this)
- Star Trek as Bad Futurism
- Speedrunning Through Life
- Optimization processes
- Reduction to QED
- Friendly AI Must Be Designed
Both these guys are extremely bright and committed to transhumanism. I'm pleased to have them as a part of Accelerating Future's growing family of bloggers.
This poll was on CNN a while ago. I don't remember the article associated with it.
Interesting results. I discussed a similar CNN poll here last June.
My answers are "Yes, but within limits", and somewhere in between #1 and #2 for the second question. I welcome some limits, such as limits on the rate that entities are allowed to reproduce. For more on why this is necessary, see The Future of Human Evolution by philosopher Nick Bostrom.
When significant transhumanist technologies become available and start percolating throughout the population, some may opt to live in human-only societies. Should certain segments of humanity be allowed to ban all augmented persons and enhancement prosthetics from their countries?
Yes, it's come to that point. The word "Singularity" has been losing meaning for a while now, but whatever semblance of a unified or coherent definition there ever used to be, it has long faded away over the horizon. Rather than any single idea, Singularity has become a signifier used to refer to a general cluster of ideas, some interrelated; some, blatantly not. These ideas include: exponential growth, transhuman intelligence, mind uploading, singletons, popularity of the Internet, feasibility of life extension, some developmentally predetermined "next step in human evolution", feasibility of strong AI, feasibility of advanced nanotechnology, some odd spiritual-esque transcension, and whether or not human development is primarily dictated by technological or social forces. Quite frankly, it's a mess.
Anytime someone gets up in front of an audience and starts trying to talk about the "Singularity" without carefully defining exactly what they mean and don't mean, each audience member will be thinking of an entirely different set of concepts, draw their own opinions from that unique set, and interpret further things they hear in light of that particular opinion, which may not even be based on the same premises as the person sitting next to them. For an audience of 50 people, you could very well have 50 unique idea sets that each listener personally thinks represents the Singularity. For such challenging and sometimes confusing topics, clarity and specificity are a necessity, so we might as well discard the overused "Singularity" word, and talk about what we actually mean using more specific terms. It helps keep things distinct from one another.
Even more confusing is that there are technologies, and then there are plausible or possible consequences from the technologies - two things which are very distinct. Both lines of inquiry can cause heated argument, even when everything is perfectly delineated! But the delineation is still important, so after the argument is over, you actually know what you were arguing about. Below, I'm going to slice up various concepts associated with the term "Singularity" into ideas that can actually be examined individually:
1) Exponential growth: it sure looks like technological progress is accelerating to me, and on many objective metrics, it is, but maybe some others disagree. But guess what: whether or not progress is accelerating is largely irrelevant to the feasibility of mind uploading, cryonics, or superintelligence. It may influence timeframes, but not feasibility in the abstract sense. When acceleration skeptics say: "technological progress is not accelerating, therefore, all this other transhumanist stuff is impossible" - they're kinda missing the point - if a given technology is feasible, it is likely to be invented eventually unless globally suppressed, but the question of when is entirely separate. In principle, transhuman intelligence could be created during a time of accelerating progress, or constant progress, or even stagnation. This was mentioned at the last Singularity Summit.
2) Radical life extension: again, radical life extension (people living to 100, 200, 300, and beyond) seems very plausible to me, and I believe that we are going to be experiencing this ourselves in our lifetimes, unless an existential disaster occurs. A Berkeley demographer found that maximum lifespan of human beings is increasing at an accelerating rate. However, life extension has very little, if anything, to do with the Singularity, other than that the Singularity is sometimes associated with technological progress and that technological progress may result in radically extended lifespans. This is like how house mice are somewhat associated with raccoons because both live in areas dense with human populations.
3) Mind uploading: in his "Rapture of the Geeks" article, which I'm not even going to link, Cory Doctorow made the mistake of thinking that the "Singularity" was all about the feasibility of mind uploading and Singularity activists' primary goal is to upload everyone into a computer simulation. This is confusion caused by not looking hard enough - you're busy, you have to go protest copyright law or whatever, have to go to a meeting, blah blah blah, so you just read a few web pages that give you a totally skewed view of what you're trying to criticize, and come to the conclusion that "Singularity" = mind uploading. You hope to get away with it because you realize this is cutting edge stuff and most people don't know the difference between an uploaded superintelligence or a de novo superintelligence, for instance, so you just go for it. Bad idea. Mind uploading and the Singularity (my definition: transhuman intelligence) are totally different things. Transhuman intelligence might lead to uploading, but they're not equivalent.
4) Feasibility of strong AI: this is rightly closely associated with the Singularity, but it's still not the same thing. You can be a refusenik of strong AI and still advocate intelligence enhancement. You can want to die at age 80, believe that progress is not accelerating, and that pro-mind uploading people are crazy, and still advocate "the Singularity", because the Singularity is supposed to mean intelligence enhancement: that's it! Feasibility of strong AI is more closely related to the Singularity than the above topics, because there is a large group of Singularity activists (aka Singularitarians, spell it right), trying to build strong AI... but, if you're anti-strong-AI and think that means you're anti-Singularity, you should think again, and recognize that the Singularity and strong AI are not the same thing. You can have a Singularity with enhanced human intelligence, no AI involved at all. It's just that many Singularity activists think that AI is the easiest way to achieve intelligence enhancement - the Singularity. We could change our mind with significant persuasion - we chose AI because it looks like the easiest and safest path, not because we have some special AI-fetish. It's a means to an end, and that's all.
5) Transhuman intelligence: what "the Singularity" has always supposed to mean, but has gotten radically, radically diluted as of late. Complicating matters is that many people have different views of what transhuman intelligence is supposed to be, so even if we shave it down to just this, there is still confusion. Let me put it this way: transhuman intelligence is not a specific thing, it's a space of possible things, encompassing human intelligence enhancement through drugs, gene therapy, brain-computer interfacing, brain-brain interfacing, and maybe other techniques we haven't even considered. It also encompasses AI, but not present-day human networking or the Internet - these are simply new ways of arranging human-level intelligence. (Legos can't be made into brick-and-mortar buildings, no matter how you configure them.) To me, transhuman intelligence is completely inevitable in the long run - it will be developed, the question is how, who, and when.
So, five different things. Unrelated, but frequently conflated. If you want to critique or support something, focus on that specific thing: don't confuse yourself and others by smearing them all together! And if you're planning on attending the next Singularity Summit in San Francisco, and aren't already thoroughly familiar with the ideas surrounding the Singularity, I suggest you sit near me, so I can translate, because I doubt most of the speakers will have a very coherent or well-defined view of the Singularity either. Stewart Brand, for instance, says, "The Singularity is a frightening prospect for humanity. I assume that we will somehow dodge or finesse it in reality" - but what does he actually mean? It's so incredibly difficult to tell. I'm not picking on Brand specifically here, just repeating my original point in this post: that for every 50 people, you may very well have 50 completely different conceptions of what the Singularity is.
Transhumanists advocate the improvement of human capacities through advanced technology. Not just technology as in gadgets you get from Best Buy, but technology in the grander sense of strategies for eliminating disease, providing cheap but high-quality products to the world's poorest, improving quality of life and social interconnectedness, and so on. Technology we don't notice because it's blended in with the fabric of the world, but would immediately take note of its absence if it became unavailable. (Ever tried to travel to another country on foot?) Technology needn't be expensive - indeed, if a technology is truly effective it will pay for itself many times over.
Transhumanists tend to take a longer-than-average view of technological progress, looking not just five or ten years into the future but twenty years, thirty years, and beyond. We realize that the longer you look forward, the more uncertain the predictions get, but one thing is quite certain: if a technology is physically possible and obviously useful, human (or transhuman!) ingenuity will see to it that it gets built eventually. As we gain ever greater control over the atomic structure of matter, our technological goals become increasingly ambitious, and their payoffs more and more generous. Sometimes new technologies even make us happier in a long-lasting way: the Internet would be a prime example. In the following list I take a look at what I consider the top ten transhumanist technologies.
10. Cryonics. (Not cryogenics, that's something else.)
Cryonics is the high-fidelity preservation of the human body, and particularly the brain, after what we would call death, in anticipation of possible future revival. Cryonics is an important transhumanist technology not only because it is already available today, but because the technology is relatively mature - we can reliably stop cells from decaying. In vitrification, the brain is not frozen in the conventional manner but perfused with a cryoprotectant (antifreeze) mixture, which prevents the formation of ice crystals, causing the water to solidify smoothly, like glass. Maintenance of a cryo-patient is not difficult - it requires no electricity, but merely the replenishment of liquid nitrogen about every three weeks. As cryonics becomes more popular, this process could become automated and extremely reliable. Further improvements in dewar technology will continue to increase safety and reduce costs. The Cryonics Institute in Michigan, for example, has operated since 1976 without a single mishap.
Financed by the interest of the payout of a life insurance policy (which for people under 40 may cost as little as $100 a year to own), patients can be securely cryopreserved for as long as the cryonics company stays afloat and the dewar stays in one piece. Eventual revival does not require the technology to become available tomorrow, or next year... as long as the liquid nitrogen is kept replenished, you can stay on ice for as long as it takes. For an existence proof of cryonic revival, there are frogs that can freeze solid and revive later, though reviving a human from freezing would likely require molecular nanotechnology (MNT). When we will be able to revive a cryo-patient will be strongly related to when we develop sophisticated MNT. Once we do develop MNT, the prospect of successful revival is extremely likely - it would involve slowly melting the ice and rebooting the metabolism by kickstarting the appropriate chemical reactions within cells.
9. Virtual reality.
The above image may look like a photo, but it's actually a screenshot from the game Crysis, a first-person shooter which will be released later this year. Look at screenshots from the game and you'll see that computer graphics are already beginning to approach photorealism. Sometime in the 2020s, reality simulations will become so high-resolution and immersive that they'll be indistinguishable from the real thing. Simulations will become the preferred environments for work and play. Pretty soon the main obstacle to truly immersive VR will not be the visuals but the haptics - our sense of touch. To fool our senses into believing haptic technologies are conveying the real thing, the "frame rate" needs to be significantly higher than for visual technologies, a few hundred updates per second rather than a few dozen - which is why development could take another decade or two. But many millions of dollars are currently going into efforts to develop advanced VR.
Clearly, World of Warcraft's eight million subscribers and SecondLife's five million subscribers are onto something. At least 1% of all broadband Internet users play in virtual worlds, and this number is increasing rapidly. These worlds typically outclass the real world in terms of customizability, but still have yet to catch up in terms of sensory richness or social fulfillment. But it's only a matter of time. In the mid-to-late 2020s, I expect full-body, high quality haptic VR suits to be affordable to the average person in developed countries, obtained either from your local WalMart or perhaps printed right out of a desktop nanofactory after payment of a fee. For more on this, here is one scientific paper, "Towards full-body haptic feedback".
8. Gene therapy/RNA interference.
Gene therapy replaces bad genes with good genes, and RNA interference can selectively knock out gene expression. Together, they give us an unprecedented ability to manipulate our own genetic code. By knocking out genes that code for certain metabolic proteins, scientists have been able to make mice that stay slim no matter how much junk food they eat. Lou Gehrig's disease has been cured in mice, and it could only be a few years before we develop a therapy that can cure it for humans too. Aubrey de Grey's SENS (Strategies for Engineered Negligible Senescence) research program contains various prescriptions for the use of gene therapy. Within a couple decades or so, progress in anti-aging therapies will improve to the point where we are gaining more than an extra year of lifespan per year, reaching so-called "longevity escape velocity" eventually culminating in indefinite lifespans.
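The "escape velocity" claim is quantitative, and a toy simulation makes it concrete (the starting life expectancy and the gain rates below are invented for illustration, not taken from SENS):

```python
# Toy "longevity escape velocity" model: each calendar year, one year of
# remaining life expectancy is used up, while therapy adds `gain` years.
# If gain > 1, projected death recedes faster than time advances.

def years_survived(remaining, gain, horizon=200):
    """Return years until death, or None if still alive at the horizon."""
    for year in range(horizon):
        if remaining <= 0:
            return year
        remaining += gain - 1.0
    return None

assert years_survived(30, 0.5) is not None  # below escape velocity: finite
assert years_survived(30, 1.2) is None      # above: effectively indefinite
```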
Like many transhumanist technologies, gene therapy is really exciting because it's just beginning. No scientist has yet performed gene therapy on germline cells (sexual cells in the gonads) due to the ethical controversy of producing genetic changes which are heritable, but, as with many of these things, it's only a matter of time. Regulations in any given country will only be capable of slowing the overall progress of the field by a few years at most. The money will go where the research is permitted. In its mature form, gene therapy and genetic engineering will become extremely cheap and powerful, letting humans live comfortably in a wider range of environments and gain immunity to most, if not all diseases. Supercomputers of the future, with thousands or millions of times the crunch power of today's best, will let us simulate the changes in extreme detail before we attempt them with actual human beings. This will make ill side effects quite unlikely for the typical case, much to the dismay of the authors of "genetic engineering turned daddy into a bloodthirsty zombie!" trash novels and films.
7. Space colonization.
Space colonies will become necessary to house the many billions of individuals that will be born in the future as our population continues to expand at a lazy exponential. In his book The Millennial Project, Marshall T. Savage estimates that the Asteroid Belt could hold 7,500 trillion people, if thoroughly reshaped into O'Neill colonies. At a population growth rate typical of developed countries, 1% per annum (doubling roughly every 72 years), it would take us about 1,440 years to fill that space. Siphoning light gases off Jupiter and Saturn and fusing them into heavier elements for the construction of further colonies seems plausible in the longer term as well.
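The arithmetic behind those figures can be sanity-checked in a few lines. This is a back-of-the-envelope sketch; the starting population of 7.5 billion is my own assumption, not a number from Savage's book:

```python
import math

current_pop = 7.5e9      # assumed present-day population (~7.5 billion)
belt_capacity = 7.5e15   # Savage's 7,500 trillion estimate for the Asteroid Belt
doubling_time = 72       # years per doubling at ~1% annual growth (rule of 72)

# How many doublings to get from today's population to full capacity?
doublings_needed = math.log2(belt_capacity / current_pop)
years_to_fill = doublings_needed * doubling_time

print(round(doublings_needed, 1))  # ~19.9 doublings
print(round(years_to_fill))        # ~1435 years, close to the 1,440 quoted
```

Rounding up to an even 20 doublings gives exactly the 1,440 years cited above.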
Why expand into space? For many, the answers are blatantly obvious, but the easiest is that the alternatives are limiting the human freedom to reproduce, or mass murder, both of which are morally unacceptable. Population growth is not inherently antithetical to a love of the environment - in fact, by expanding outwards into the cosmos in all directions, we'll be able to seed every star system with every species of plant and animal imaginable. The genetic diversity of the embryonic home planet will seem tiny by comparison.
Space colonization is closely related to transhumanism through their mutual association with futurist philosophy, but also more directly, because embracing transhumanism will be necessary to colonize space. Human beings aren't designed to live in space. Our physiological issues with it are manifold, from deteriorating muscle mass to uncontrollable flatulence. On the surface of Venus we would melt; on the surface of Mars we'd freeze. The only reasonable solution is to upgrade our bodies. Not terraform the cosmos, but cosmosform ourselves.
6. Cyborgization.
Can you spot the cyborg in this picture? You're looking right at him! It's Michael Chorost, a man who was born almost deaf but can now hear, thanks to a cochlear implant. Most of the cyborgs in fiction fit certain stereotypes - Übermensch wannabes, cyborg assassins, and supercops. But cyborgs already walk among us, and they look just like normal people. This trend will continue in the future. Many of the cyborg upgrades that will become available in the 2020s and 2030s, such as hearing and vision enhancement, metabolic enhancement, artificial bones, muscles, and organs, and even brain-computer interfaces, will be invisible to the casual observer, implanted beneath the skin. Cybernetic features on the surface, such as dermal enhancements or technological actuators like retractable wings, will be carefully camouflaged. No one will want to shock the rest of society by looking like the tin man in public.
The process of cyborgization has already been happening for centuries if not millennia, since the advent of clothing and piercings. For many generations, but especially in the last couple of decades, our technological gadgets have been getting smaller, more functional, and more closely integrated with our natural activity. Recently, Microsoft announced Microsoft Surface, a mouseless, keyboardless form of desktop computing that takes input from finger tracing and hand gestures. The sophistication of biotechnology and the availability of better materials and precision manufacturing will let us make systems so small and effective that even everyday people will elect to implant them. These cybernetic systems will greatly improve our everyday experience, from letting us hear a wider range of ambient sounds, to viewing millions of stars rather than just a few thousand, to making us more resistant to accidents. They will improve the overall economy by enabling us to do more work in less time for better pay. In the long term, enhanced humans may get a bigger portion of the economic pie than un-augmented humans, but the pie itself will become so much larger that even the poorest humans of tomorrow will be better off than the wealthiest of today.
Here's a good cyborg blog I found while doing research for this article, along with the Power Jacket, a 4-pound jacket that enhances strength and is used by people recovering from paralysis. For more, see the cybernetics category of my del.icio.us links, or my top ten list of cybernetic enhancements.
5. Autonomous self-replicating robotics.
Why do manual labor when the robots can do it for you? Self-replication might be considered the Holy Grail of robotics. A landmark NASA study, "Advanced Automation for Space Missions", found that robotic self-replication is just a matter of engineering, and that no fundamental theoretical breakthroughs are needed. The study proposed sending a 100-ton package to the Moon, with a self-replication time of 1 year, and letting it self-replicate until the desired level of development is attained. The design - which was fleshed out in great detail - was based on electric carts running on rails within the factory, "paving machines" that direct sunlight to melt lunar regolith, robotic strip miners for obtaining raw materials, and a solar cell "canopy" for powering it all. After 10 years, over 100,000 tons of lunar factory could be produced autonomously. The factory's functions could then be hijacked for the benefit of human colonists - used to produce housing and products, and to provide large quantities of solar power.
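The study's growth figure follows directly from annual doubling. A quick sketch, assuming clean doubling each year (real deployment schedules would be messier):

```python
seed_mass_tons = 100   # the initial package landed on the Moon
replication_time = 1   # years per doubling
years = 10

# Each replication cycle doubles the installed factory mass.
final_mass = seed_mass_tons * 2 ** (years // replication_time)
print(final_mass)  # 102400 tons after 10 years, i.e. "over 100,000 tons"
```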
If similar self-replicating systems could be constructed on Earth, there would be little limit to the material plenty they could provide. Self-replicating factories could turn the vast empty badlands of Australia into lush gardens by pumping water from the oceans, self-replicating factories in the high Arctic could melt snow and create gigantic transparent domes suitable for habitation, and submersible automata in the seas could dredge sand from abiotic regions of the ocean floor and process it into gigantic platforms for human colonization. By opening up such vast new regions of the Earth's surface, talk of overpopulation and crowding would fall by the wayside for quite a few decades, with people realizing how much space there actually was all along. And once things really do get too crowded here on Earth, we can move to the Moon, Mars, and the asteroid belt, using the power of self-replicating robotics to create rotating space colonies suitable for housing trillions of people.
Self-replicating factories could reduce the costs of material goods close to that of food - the primary expenses would consist of raw materials, energy, and whatever small quantity of human oversight is necessary to keep an eye on the overall structure of things. By utilizing special, man-made "nutrients" for top-level functions (rare or exotic molecules such as custom-synthesized proteins) and the broadcast architecture - whereby derivative factories must receive affirmations from a central parent factory to continue self-replicating - such factories could be made safe by design. With such abundance, humanity might actually shift from having a zero-sum perspective on the world to a positive-sum perspective. With medical tools and basic goods in ample supply, no one in the world would need to suffer from poverty or curable disease. The nature of human work would shift from manual drudgery and mind-numbing routine to more creative and personally fulfilling endeavors, like art, music, math, science, literature, and exploration.
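The broadcast architecture can be sketched as a simple protocol: a derivative factory only keeps replicating while its authorization from the central parent is fresh, so cutting the broadcast halts everything. The class names and token scheme below are illustrative inventions of mine, not details from the NASA study:

```python
class Factory:
    """Toy model of the broadcast architecture: replication requires a
    recent affirmation from the central parent, so stopping the broadcast
    shuts down all derivative factories by design."""

    def __init__(self, parent=None):
        self.parent = parent
        self.authorized_until = -1  # no authorization received yet

    def receive_affirmation(self, now, ttl=3):
        # The parent's broadcast authorizes replication for `ttl` time steps.
        self.authorized_until = now + ttl

    def try_replicate(self, now):
        if now > self.authorized_until:
            return None  # authorization lapsed: fail safe, build nothing
        return Factory(parent=self)

parent = Factory()
child = Factory(parent=parent)
child.receive_affirmation(now=0)
print(child.try_replicate(now=1) is not None)   # True: within the window
print(child.try_replicate(now=10) is not None)  # False: broadcast stopped
```

The key design choice is that the default state is "halt": a factory that hears nothing does nothing, rather than continuing on its last instructions.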
4. Molecular manufacturing.
If self-replication is the Holy Grail of robotics, then molecular nanotechnology (MNT) is the Holy Grail of manufacturing. Molecular nanotechnology would use massive arrays of nanometer-scale actuators (produced initially through self-replication) to manufacture macroscale products with atomic precision. This concept is known as the nanofactory. In practical terms, the creation of nanofactories would mean that practically everything could be made out of diamond, motors would become so powerful that a cubic centimeter would provide enough torque to propel a car, medical nanodevices could heal wounds and repair organs without the need for surgery, and air-suspended nanodevices ("utility fog") could be configured to simulate practically any desired object on demand. On the downside, it could become easy to manufacture mite-sized robots with a payload of poison sufficient to kill thousands, or a laptop-sized device capable of separating U-235 from U-238 in a worrisomely simple and rapid fashion, or self-replicating synthetic algae capable of clogging up our oceans with grey goo. Enabling widespread use of the positive applications while cleanly and completely suppressing the nasty applications is a first-order challenge. Incidentally, you can make a difference right now by donating to the Lifeboat Foundation or the Center for Responsible Nanotechnology, two of the very few organizations focusing on this area.
To some, molecular nanotechnology sounds like science fiction, and based on the grandiose applications I discussed in the previous paragraph, you can't blame them. But many of the prerequisites of molecular manufacturing have already been demonstrated - "molecular surgery" has been used to snip off and replace individual hydrogen atoms, various functional nanoscale devices have been built, scanning tunneling microscopy has been used to mechanically manipulate individual atoms, and so on. The challenge is to create a nanoscale manipulator arm capable of placing individual atoms with angstrom-level precision, avoiding undesired reactions, and serving as a universal constructor that can build a copy of itself. There are numerous technical challenges still outstanding, but when these are overcome, manufacturing will be granted the power that nature has had for hundreds of millions of years - the ability to fabricate large objects with molecular precision. The numerous potential applications of the technology to human enhancement are obvious; with molecular manufacturing, we could orchestrate elegant improvements to every single body component, achieving all of the upgrades described on my top ten list, and many more.
3. Megascale engineering.
Most people are familiar with megascale engineering because it is seen throughout fiction - the Death Star, for instance. Typically, megascale engineering refers to building structures at least 1,000 km in length in one dimension, such as a space elevator, Globus Cassus, or Dyson sphere. With the self-replicating robotics described above, the production of such large structures could be done largely by autonomous drones, with intelligent agents only managing the highest top-level functions and architecture. Considering that mankind's long-term future is in space, and that space right now is pretty devoid of any structure useful or habitable to humans, we have a lot of work to do, and if you can make the projects megascale, why not?
Like some of the other items on this list, megascale engineering is only indirectly transhumanist - but is still very relevant to the long-term future of intelligent life. Megascale engineering goes hand-in-hand with the grandiose transhumanist vision: intelligent beings spreading across the cosmos, and eventually shaping the very structure of the universe itself. The fact that these vast expanses of colonizable space are currently neglected imposes on us a vast opportunity cost - if we hurried up a bit and colonized them, we could give rise to tremendous numbers of people leading worthwhile lives. What experiences would they have, and what stories would they tell? We'll never find out, unless we make it happen.
2. Mind uploading.
Mind uploading, sometimes referred to as nonbiological intelligence, centers around the controversial proposition that cognitive processing can be implemented on substrates other than our current neurons. Considering decades of successful results in neurophysiology, and the recent construction of the world's first brain prosthesis - an artificial copy of the hippocampus - this seems very likely. It appears that our minds are defined more by the information pattern they embody than the particular hardware they are implemented on. Numerous philosophers of mind have long recognized this, but acceptance among the wider public has been a long time in coming: people don't want to think that they're "just" data structures being implemented as computational automata on biological neurons. But it is hard to think of it any other way: once we dismiss the possibility of an immaterial soul, we must acknowledge the mind as a material pattern implemented in physical configurations, and if other substances aside from our current neurons can meet the requirements for these configurations, then there is no reason why intelligence and consciousness could not exist on another substrate. For a humorous look at this complex philosophical argument, see "They're Made Out of Meat" by Terry Bisson.
If our brains really don't have to be made out of meat, then we can transfer them to other substrates. By incrementally replacing each neuron with a synthetic neuron-equivalent, the whole process could go down painlessly and seamlessly. The transfer could be as slow or as fast as we want: from the information-processing perspective of the brain itself, nothing ever changes. Light still comes in through the eye's lens, hits the retina, is transformed into nerve impulses which travel down the optic nerve, receives further processing in the visual cortex at the back of the brain, the highlights of which are sent to the prefrontal cortex for integration with information from the other senses. The brain can't tell if it's made out of traditional meat, or accelerated biological neurons, or entirely nonbiological neuron-equivalents: the computation is the same. Sometimes this notion is also referred to as an application of the Church-Turing thesis.
If entirely synthetic brains are possible, then there's nothing stopping such persons from inhabiting computer networks - not indirectly, sitting in chairs as we currently do, but directly, engaging in computer worlds as a sentient program of tremendous complexity. With molecular manufacturing on hand, reversing the process would be as simple as printing out a hundred or so kilograms of flesh and bone again, complete with memories from the networked experience. This is probably among the transhumanist visions that most reliably elicits the "yuck!" reaction, but if functionalism is true, then virtual experience will be indistinguishable from physical experience. Not only that, but even more enjoyable, due to the manifold degrees of freedom which would become newly accessible. In a virtual world, there are no laws of physics except those we choose.
1. Artificial General Intelligence.
As argued in the previous section, functionalism seems likely. If so, then strong AI is possible: thinking, feeling, imagining, creating, communicating, thoughtful synthetic intelligences with conscious experiences. Whether serial computing is sufficient or parallel computing is necessary, both are within technological reach, and present-day computing speeds are fast approaching the estimated computing power of the human brain. In fact, according to many estimates, the fastest present-day supercomputer, Blue Gene/P, has already exceeded it. Blue Gene/P operates continuously at speeds of over a petaflop - a million billion operations per second. For strong AI skeptics, no computer - even one operating at trillions of trillions of trillions of operations per second - is sufficient to implement true intelligence, but to functionalists like myself, such a meat-centric perspective is unjustified.
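Whether a petaflop machine has "exceeded" the brain depends entirely on which published estimate of the brain's processing rate you adopt. A quick comparison - the bracketing figures below are commonly cited order-of-magnitude guesses, chosen by me for illustration, not settled science:

```python
petaflop = 1e15  # Blue Gene/P: a million billion (10^15) operations per second

# Order-of-magnitude estimates of brain-equivalent computation (disputed):
brain_estimates = {
    "low (~10^14 ops/s)":  1e14,
    "mid (~10^16 ops/s)":  1e16,
    "high (~10^18 ops/s)": 1e18,
}

for label, ops in brain_estimates.items():
    status = "exceeded" if petaflop >= ops else "not yet reached"
    print(f"{label}: {status}")
```

On the low estimates, a petaflop machine is already past the brain; on the higher ones, we are still a few hardware generations away - which is exactly why "according to many estimates" is the honest phrasing.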
Distinct from artificial intelligence in general, which has come to refer to any sophisticated software program, artificial general intelligence refers to AIs that display open-ended learning and similar competency levels to human beings. A handful of researchers are working diligently towards artificial general intelligence, informed by the mathematics of inference and probability theory: Jürgen Schmidhuber, whose "main scientific ambition has been to build an optimal scientist, then retire"; Marcus Hutter, author of the landmark book Universal Artificial Intelligence; Ben Goertzel, who recently presented his AI design in a talk to Google; and Eliezer Yudkowsky, who is developing a reflective decision theory from first principles. Whether or not others believe in the feasibility of general AI, these individuals will keep working, and one will eventually succeed.
The way the world would be impacted by the arrival of general AI is too extreme to discuss in much detail here. If raw materials such as sand can be converted into computer chips and then into intelligent minds, eventually the majority of material in the solar system could be made intelligent and conscious. The result would be a "noetic Renaissance": the expansion of intelligence and experience beyond our wildest dreams. Conversely, if not given empathic values, artificial intelligence could lead to the doom of all. It's up to us to set the initial conditions appropriately: if not, we might not be around to regret it.
If you enjoyed this post, please subscribe to my feed for updates on related topics.
Kaj Sotala, a fellow supporter of both the Lifeboat Foundation and Singularity Institute, has published a new article, "Why care about artificial intelligence?" to follow up on his "Artificial intelligence within our lifetime?" article, which I covered in March.
The main thrust of the article is that AIs could potentially be much, much more powerful than human beings, and therefore we have an important stake in how their motivational systems are constructed. The main talking points are:
- Artificial intelligences can do everything humans can
- Limitations of the human mental architecture
- Limitations of the human hardware
- Comparative human/AI evolution and initial resources
- Considerations and implications of superhuman AI
- Controlling AI: Enabling factors
- Controlling AI: Limiting factors
- Immense risks, immense benefits
- Summary and implications
Also recently published by Kaj on his site are the works, "Transhumanism: Happiness, Equality, Choice", "Ethics of forced choice and future selves", and "In defense of transhuman development". The papers are only a couple of pages each, suitable for a quick and informative read.