There's an ongoing uploading debate in the comments with Aleksei Riikonen, Mark Gubrud, Giulio Prisco, myself, and others. The topic of uploading is the gift that keeps on giving -- the dead horse that can sustain an unlimited beating.
There is a new open letter on brain preservation -- sign the petition! Also, there will be workshops on uploading after the Singularity Summit 2010 this August in San Francisco. A big congrats to Randal Koene, Ken Hayworth, Suzanne Gildert, Anders Sandberg, and everyone else taking the initiative to move forward on this.
One last thing: ghost hunting equipment. Harness the power of ghosts, take over the world.
As always, there's been some nice activity over at anti-transhumanism central, The New Atlantis Futurisms blog. The most recent is a post titled "Why Transhumanism Won't Work", as provocatively named as my recent post "Transhumanism Has Already Won". The post, a guest contribution by Mark Gubrud, is more of a screed against mind uploading than against transhumanism in general, though Gubrud claims that "transhumanism itself is uploading writ large." Basically, Gubrud calls attention to a talk against mind uploading that a philosophy professor will give at the upcoming H+ conference at Harvard. The essence of the argument is that advocates of mind uploading are dualists, because they speak of a "pattern" that is really a "soul" postulated to be transferable across substrates. (It's ironic that Gubrud argues against the soul in a guest post on a site funded by "Washington, D.C.'s premier institute dedicated to applying the Judeo-Christian moral tradition to critical issues of public policy." It shows that some modern Christians are willing to be pragmatic about the messages they put out there.)
I made a very short comment in response to the post:
Why don't we lose our identity/become different people when the constituent proteins making up our cells are continuously renewed? Is the soul transferred from the old proteins to the new proteins? How is that scientific?
This elicited a lengthy response from Gubrud, beginning with this dramatic condemnation of my seemingly innocuous question:
You ask a very good question, one of the key questions which lead people into the nihilistic wilderness in which a noxious weed like transhumanism can flourish.
The #1 question on my mind when I read this is, "Is Gubrud a Christian, or does his hatred of transhumanism stem from something else?"
Gubrud goes on to argue that identity is just a concept, a point I concur with, and then says:
Transhumanists look outside the human community for a source of meaning and moral order. They believe in a great story of Evolution, Intelligence, and Destiny. It's kind of a throwback to a pre-Copernican worldview.
Humanists understand that this is a random universe, to which we bring our own meanings. Humanists treasure humanity and nature, while regarding technology as the tool we use to protect these primary values, rather than a primary value in itself.
This really is a debate about human values and the future of humanity. There still ain't nobody else here.
This is definitely false with respect to the community around the Singularity Institute, at least. We understand that human values are a lone candle in an otherwise morality-indifferent galaxy. The only difference is that we see an opportunity to move beyond strict adherence to the fitness-maximizing goals that natural selection gave us, and toward secondary pursuits such as mathematics, literature, and art. If the latter are truly more important than the former, then given the opportunity to modify our own minds and bodies, we will choose minds and bodies that nurture the latter while the former falls out of style.
To sum up, I have to say -- folks, critiques of mind uploading and transhumanism are not the same thing. You can blend them together like some sort of philosophical porridge, but for clarity's sake they ought to be handled individually. I think part of the problem is that transhumanism is so compelling that attacking it head-on is a huge challenge, so critics prefer to target its more radical ideas, such as mind uploading.
Mind uploading is indeed a radical idea, and I can sympathize with some of Gubrud's arguments about continuity, but critics have to realize that the "mind uploading as dualism" argument is over a decade old and has already been refuted many times. It is refuted by positing mind transfers so incremental that it is quite impossible to say that the original person has been lost. The transfer can be made arbitrarily incremental, and there will still be people who say that the original person is lost, and keep saying that forever, but it seems quite likely that society will eventually adopt the technology anyway. We've already incrementally uploaded so much data about ourselves into an "exoself" of computer files and Internet sites.
There is still a great amount of disagreement in the community over whether uploads or AGI are more likely to occur first. I would like to solicit probability distributions and statements from interested parties on the relative likelihood of either route reaching human-equivalent intelligence first. Please send your probability distribution values, including mean and standard deviation, to michaelanissimov at g mail dot com, coupled with a well-written, well-edited statement of 400 or more words, and a short official bio and link to your CV (if applicable). Please include at least 3-5 citations in your argument, and please make it as academic as possible. The final results will be published on a web page at AcceleratingFuture.com, and may make it into a journal if the results are good enough.
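For anyone unsure how to produce the requested numbers, here is a minimal sketch: assign subjective probabilities to arrival decades for one route (say, uploading), then compute the mean and standard deviation of that distribution. All of the numbers below are invented placeholders for illustration, not a forecast.

```python
# Hypothetical subjective distribution over arrival decades (placeholder values).
decades = [2030, 2040, 2050, 2060, 2070, 2080]   # midpoint years
probs   = [0.05, 0.15, 0.30, 0.25, 0.15, 0.10]   # must sum to 1

# Mean and standard deviation of the discrete distribution.
mean = sum(y * p for y, p in zip(decades, probs))
variance = sum(p * (y - mean) ** 2 for y, p in zip(decades, probs))
std_dev = variance ** 0.5

print(f"mean arrival year: {mean:.1f}, std dev: {std_dev:.1f} years")
```

With these placeholder numbers the mean lands mid-century with a spread of about a decade; your own submission would of course use your own distribution.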
The operating definition of intelligence to be used during this exercise is that given by Shane Legg in his PhD thesis Machine Superintelligence:
Intelligence measures an agent's ability to achieve goals in a wide range of environments.
Good luck! By the power of Aumann, may our probability estimates converge closer to the truth as a result of this exercise.
The academic philosophers who are reading -- you know who you are -- if you only make one comment on Accelerating Future all year, make it a contribution to this exercise. Thank you for your time.
Universal mind uploading, or universal uploading for short, is the concept, by no means original to me, that the technology of mind uploading will eventually become universally adopted by all who can afford it, similar to the adoption of modern agriculture, hygiene, or living in houses. The concept is rather infrequently discussed, due to a combination of 1) its supposedly speculative nature and 2) its "far future" time frame.
Before I explore the idea, let me give a quick description of what mind uploading is and why the two roadblocks to its discussion are invalid. Mind uploading would involve simulating a human brain in a computer in enough detail that the "simulation" becomes, for all practical purposes, a perfect copy and experiences consciousness, just like protein-based human minds. If functionalism is true, as many cognitive scientists and philosophers correctly believe, then all the features of human consciousness that we know and love -- including all our memories, personality, and sexual quirks -- would be preserved through the transition. By simultaneously disassembling the protein brain as the computer brain is constructed, only one implementation of the person in question would exist at any one time, eliminating any unnecessary confusion.
Still, even if two direct copies are made, the universe won't care -- you would have simply created two identical individuals with the same memories. The universe can't get confused -- only you can. Regardless of how perplexed one may be by contemplating this possibility for the first time from a 20th century perspective of personal identity, an upload of you with all your memories and personality intact is no more different from you than the person you are today is from the person you were yesterday when you went to sleep, or from the person you were 10^-30 seconds ago when quantum fluctuations momentarily destroyed and recreated all the particles in your brain.
Regarding objections to talk of uploading: for anyone who 1) buys the silicon brain replacement thought experiment, 2) accepts arguments that the human brain operates at below about 10^19 ops/sec, and 3) considers it plausible that 10^19 ops/sec computers (plug in whatever value you believe for #2) will be manufactured this century, the topic is clearly worth broaching. Even if it's 100 years off, that's just a blink of an eye relative to the entirety of human history, and universal uploading would be something more radical than anything that's occurred with life or intelligence in the entire known history of this solar system. We can afford to stop focusing exclusively on the near future for a potential event of such magnitude. Consider it intellectual masturbation, if you like, or a serious analysis of the near-term future of the human species, if you buy the three points.
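Point #3 can be checked with back-of-the-envelope arithmetic. The sketch below assumes roughly 10^15 ops/sec for a top supercomputer circa 2009 and a doubling of capacity every two years, a Moore's-law-style extrapolation; both inputs are assumptions, not established figures.

```python
import math

# How long until computers reach the upper-bound brain estimate from the text?
current_ops = 1e15        # ops/sec, assumed starting point (top supercomputer, ~2009)
target_ops = 1e19         # upper-bound estimate of the brain's ops/sec
doubling_time = 2.0       # years per doubling, assumed

doublings = math.log2(target_ops / current_ops)   # number of doublings needed
years = doublings * doubling_time

print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

Under these assumptions the target arrives in under three decades; even if the doubling time stretches considerably, the milestone still lands comfortably within this century, which is all point #3 requires.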
So, say that mind uploading becomes available as a technology sometime around 2050. If the early adopters don't go crazy and/or use their newfound abilities to turn the world into a totalitarian dictatorship, then they will concisely and vividly communicate the benefits of the technology to their non-uploaded family and friends. If affordable, others will then follow, though the degree of adoption will necessarily depend on whether the process is easily reversible. But suppose that millions of people choose to go for it.
Widespread uploading would have huge effects. Let's go over some of them in turn.
1) Massive economic growth. By allowing human minds to run on substrates that can be accelerated by the addition of computing power, as well as the possibility of spinning off non-conscious "daemons" to accomplish rote tasks, economic growth -- at least insofar as it can be accelerated by intelligence and the robotics of 2050 alone -- will accelerate greatly. Instead of relying upon 1% per year population growth rates, humans might copy themselves or (more conducive to societal diversity) spin off already-mature progeny as quickly as available computing power allows. This could lead to growth rates in human capital of 1000% per year or far more. More economic growth might ensue in the first year (or month) after uploading than in the entire 250,000 years between the evolution of H. sapiens and the invention of uploading. The first country that widely adopts the technology might be able to solve global poverty by donating only 0.1% of its annual GDP.
2) Intelligence enhancement. Faster does not necessarily mean smarter. "Weak superintelligence" is a term sometimes used to describe accelerated intelligence that is not qualitatively enhanced, in contrast with "strong superintelligence", which is. The road from weak to strong superintelligence would likely be very short. By observing information flows in uploaded human brains, many of the details of human cognition would be elucidated. Running standard compression algorithms over such minds might make them more efficient than blind natural selection could manage, and this extra space could be used to introduce new information-processing modules with additional features. Collectively, these new modules could give rise to qualitatively better intelligence. At the very least, rapid trial-and-error experimentation without the risk of injury would become possible, eventually revealing paths to qualitative enhancements.
3) Greater subjective well-being. Like most other human traits, our happiness set points fall on a bell curve. No matter what happens to us, be it losing our home or winning the lottery, there is a tendency for our innate happiness level to revert back to our natural set point. Some lucky people are innately really happy. Some unlucky people have chronic depression. With uploading, we will be able to see exactly which neural features ("happiness centers") correspond to high happiness set points and which don't, by combining prior knowledge with direct experimentation and investigation. This will make it possible for people to reprogram their own brains to raise their happiness set points in a way that biotechnological intervention might find difficult or dangerous. Experimental data and simple observation have shown that people with high happiness set points today don't have any mysterious handicaps, such as an inability to recognize when their body is in pain, or inappropriate social behavior. They still experience sadness; it's just that their happiness returns to a higher level after the sad experience is over. Perennial tropes justifying the value of suffering will lose their appeal when anyone can be happier without any negative side effects.
4) Complete environmental recovery. (I'm not just trying to kiss up to greens, I actually care about this.) By spending most of our time as programs running on a worldwide network, we will take up far less space and use less energy and natural resources than we would in conventional human bodies. Because our "food" would be delicious virtual cuisines generated using only electricity or light, we could avoid all the environmental destruction caused by clear-cutting land for farming and the ensuing agricultural runoff. People imagine dystopian futures to involve a lot of homogeneity... well, we're already there as far as our agriculture is concerned. Land that once had diverse flora and fauna now consists of a few dozen agricultural staples -- wheat, corn, oats, cattle pastures, factory farms. Boring. By transitioning from a proteinaceous to a digital substrate, we'll do more for our environment than any amount of conservation ever could. We could still experience this environment by inputting live-updating feeds of the biosphere into a corner of our expansive virtual worlds. It's the best of both worlds, literally -- virtual and natural in harmony.
5) Escape from direct governance by the laws of physics. Though this benefit sounds abstract or philosophical, if we were to experience it directly, its visceral nature would become immediately clear. In a virtual environment, the programmer is the complete master of everything he or she has editing rights to. A personal virtual sandbox could become one's canvas for creating the fantasy world of their choice. Today, this can be done in a very limited fashion in virtual worlds such as SecondLife, a trend that will continue toward the fulfillment of everyone's most escapist fantasies even if uploading proves impossible. Worlds like SecondLife are still limited by their system-wide operating rules and their low resolution and bandwidth. Any civilization that develops uploading would surely have the technology to develop virtual environments of great detail and flexibility, right up to the very boundaries of the possible. Anything that can become possible will be. People will be able to experience simulations of the past, "travel" to far-off stars and planets, and experience entirely novel worldscapes, all within the flickering bits of the worldwide network.
6) Closer connections with other human beings. Our interactions with other people today are limited by the very low bandwidth of human speech and facial expressions. By offering partial readouts of our cognitive state to others, we could engage in a deeper exchange of ideas and emotions. I predict that "talking" as communication will become passé -- we'll engage in much deeper forms of informational and emotional exchange that will make the talking and facial expressions of today seem downright empty and soulless. Spiritualists often talk a lot about connecting closer to one another -- are they aware that the best way they can go about that would be to contribute to researching neural scanning or brain-computer interfacing technology? Probably not.
7) Last but not least, indefinite lifespans. Here is the one that detractors of uploading are fond of targeting -- the fact that uploading could lead to practical immortality. Well, it really could. If you were a string of flickering bits distributed over a worldwide network, killing you would become extremely difficult. The data and bits of everyone would be intertwined -- to kill someone, you'd either need complete editing privileges over the entire worldwide network, or the ability to blow up the planet. Needless to say, true immortality would be a huge deal, a much bigger deal than the temporary fix of life extension therapies for biological bodies, which will do very little to combat infectious disease or exotic maladies such as being hit by a truck.
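The growth figures in point 1 above can be sanity-checked with a toy compounding calculation. The rates are the post's hypotheticals (1% per year for biological populations, 1000% per year, i.e. 11x annually, for uploads), not forecasts.

```python
# Toy comparison of compounding at the two growth rates mentioned in point 1.
def grow(initial, rate, years):
    """Compound `initial` at annual `rate` (0.01 = 1%) for `years` years."""
    return initial * (1 + rate) ** years

biological = grow(1.0, 0.01, 10)    # 1%/yr for a decade: barely 1.1x
uploaded = grow(1.0, 10.0, 10)      # 1000%/yr for a decade: 11^10, ~2.6e10x

print(f"1%/yr over 10 years: {biological:.2f}x")
print(f"1000%/yr over 10 years: {uploaded:.2e}x")
```

The point of the exercise is just the staggering gap between the two regimes: a decade of 1% growth yields about a 10% increase, while a decade of 1000% growth yields roughly a twenty-six-billion-fold increase, which is why the post claims more growth could occur in the first year after uploading than in all prior history.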
It's obvious that mind uploading would be incredibly beneficial. As stated near the beginning of this post, only three things are necessary for it to be a big deal -- 1) that you believe a brain could be incrementally replaced with functionally identical implants and retain its fundamental characteristics and identity, 2) that the computational capacity of the human brain is a reasonable number, very unlikely to be more than 10^19 ops/sec, and 3) that at some point in the future we'll have computers that fast. Not so far-fetched. Many people consider these three points plausible, but just aren't aware of their implications.
If you believe those three points, then uploading becomes a fascinating goal to work towards. From a utilitarian perspective, it practically blows everything else away besides global risk mitigation, as the number of new minds leading worthwhile lives that could be created using the technology would be astronomical. The number of digital minds we could create using the matter on Earth alone would likely be over a quadrillion, more than 2,500 people for every star in the 400 billion star Milky Way. We could make a "Galactic Civilization", right here on Earth in the late 21st or 22nd century. I can scarcely imagine such a thing, but I can imagine that we'll be guffawing heartily at how unambitious most human goals were in the year 2009.
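The closing arithmetic is easy to verify using the post's own figures (a quadrillion minds, 400 billion stars):

```python
# Checking "more than 2,500 people for every star in the Milky Way".
minds = 1e15                 # "over a quadrillion" digital minds on Earth
stars = 400e9                # 400 billion stars in the Milky Way, per the text
per_star = minds / stars

print(f"{per_star:.0f} minds per star")
```

A quadrillion divided among 400 billion stars comes out to exactly 2,500 per star, so "over a quadrillion" indeed means "more than 2,500 people for every star".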