Transhumanism has been defined as the use of science and technology to improve the human condition, and as the aspiration to go beyond what is traditionally considered human, but it can be something broader: rational self-improvement that ignores the boundaries set as typical. There's a lot of "self-improvement" out there, and a fair deal of promotion of rationalism in debate and analysis, but the two don't always come together. For instance, a highly rational individual might spend their entire day in front of a computer, neglecting exercise, and failing to take advantage of a huge category of potential self-improvement. Conversely, someone preoccupied with "self-improvement" might believe in trendy, nonsensical ideas about self-improvement that don't actually work.
People usually start off in life with a certain set of aptitudes, such as brains, social skills, strength, or looks. A fun way of embracing life is to try to maximize these qualities no matter where you start out on them. Even though I tend to fall on the "nature" side of the nature-nurture debate, I still think there is a tremendous amount that can be done to improve shortcomings that people make excuses to avoid improving. Social skills would be one example -- several transhumanist friends of mine have remarked how they used to be socially inept, and now are clearly extremely comfortable in social situations, because they made simple choices, like joining a rationalist community or a debate team.
This broader transhumanism means feeling personally obliged to improve yourself, both for your own benefit and for that of those around you. Let me focus a little on those around you, because there's been so much discussion of improving for yourself. Many groups and communities are only about as strong as their weakest members, due to aggregation effects that can be hard to pin down. That's why an effective team working towards a goal needs every member to be disciplined; one undisciplined member can be a thread that unravels the whole tapestry. When you neglect your physical appearance, your social skills, or your intellectual standards, you don't just hurt yourself, but those around you. Of course, no one can be perfect. The point is not to be perfect, but to at least try to improve, and to put your ego aside to the extent that you are willing to accept criticism from others, sometimes even so-called "unconstructive" criticism. "Unconstructive" criticism tends to contain a grain of truth that can be the seed of future self-improvement.
Because the body is the seat of the mind, and the human animal's mind is deeply interconnected with its body, the first priority of self-improvement should be a healthy lifestyle. Being overweight is linked to anxiety and depression. Exercise is connected to positive mood, self-esteem, and restful sleep in dozens of studies. Rigorous exercise, rather than lazy shortcuts, leads to real benefits. It's not really a question of time -- tremendous benefits can be gained by exercising rigorously for as little as 30 minutes a couple of times a week. There is no one who is too busy to exercise. A transhumanist who professes interest in transcending the human but is too lazy to exercise is like a Christian who is too lazy to pray or attend church -- a lemming attaching themselves to a social label rather than someone who can live up to the ideas they value. You have the tools to improve yourself now -- take advantage of them! Don't sit around for decades waiting for a pill to solve all your problems. If you aren't active yet, start thinking of yourself as the type of person who should be active, and behavior will follow.
After making a commitment to improving the body, you should improve your mind. Intellectuals should be expected to have a book in their queue pretty much perpetually. Books are quite cheap, and there is so much to learn that anyone not reading is someone who is neglecting their intellectual curiosity. Articles on websites tend to be short and emotionally charged, not the kind of careful analysis or inspired literature that exists in books. Reading quippy front-page articles on Reddit or Digg is not a good cornerstone for a balanced intellectual life. Don't even get me started on television. I'm not saying that people shouldn't get information from diverse sources, but that the true foundation of intellectualism is, and has always been, books. "Infotainment" like the Colbert Report is just entertainment.
After you get your information, you have to process it properly. Be aware of cognitive biases. Never trust anything you think the first time. The greatest enemy of rationality is not the church, or the mainstream media, or the Republicans/Democrats, but your own brain. A true rationalist can be exposed to the most idiotic information sources and still extract useful evidence and insights by applying their own frame to the facts, rather than using the framing of the presenter. A rationalist does not get emotional while arguing, because nine times out of ten, emotions get in the way of proper analysis. Do a cold, clean analysis first, then, maybe a few hours or days later, you can start indulging in the emotions that flow from true beliefs. Maybe it's even best never to get emotional at all. Emotions are fast-and-frugal heuristics for processing information, far inferior to dispassionate analysis. I like to get emotional about issues that aren't really important, like my favorite songs or games. For those issues that really do matter, like geopolitics, social psychology, philosophy, and science, I try to keep emotions to a minimum.
Don't be so sensitive. We are all idiots in comparison to what is possible. Human beings are just monkeys, a node on the chain of being. One day in the not too distant future, minds will be created that put all of our best to shame. Don't worship the human spirit as if it were a god. The human spirit is nice, but it has plenty of flaws. People are balanced when they are slightly skeptical about everything by default, not when they embrace everything by default. Remember that skepticism triggered the Enlightenment, and if it weren't for skepticism, we would probably still be in the Dark Ages. Praise people who are skeptical of your ideas in good faith, don't discourage them.
Improving ourselves is not easy. That the definition of "improvement" itself has many subjective elements is part of the challenge, though many types of improvements tend to be self-evident in retrospect. The hardest part of improvement may be the willingness to make yourself vulnerable to criticism from others. All of us have our downfalls -- we're overweight, lazy, irresponsible, or overconfident. To some degree, I am all of these things. I'll bet most of you are too. Since everyone tends to have weaknesses, the idea is not to eliminate all weakness, or achieve some social standard of competence and then give up, but to whittle away at your weaknesses and reap the benefits from incremental gains. That's what transhumanism is -- slow improvement, using the best tools at our disposal. Never giving up, and never saying we've done enough. There is always more to do -- more to read, more to learn, more to say, and more to act on. Go out and do it.
For billions of years on this planet, there were no rules. In many places there still are not. A wolf can dine on the entrails of a living doe he has brought down, and no one can stop him. In some species, rape is a more common variety of impregnation than consensual sex. Nature is fucked up, and anyone who argues otherwise has not actually seen nature in action.
This modern era, with its relative orderliness and safety, at least in the West, is an aberration. A bizarre phenomenon, rarely before witnessed in our solar system since its creation. Planetwide coordination is something that just didn't happen until the invention of the telegraph and radio made it possible.
America and Western Europe are full of the most security-deluded people of all. The most recent generations, growing up without any major global conflict -- Generation X and Y -- are practically as ignorant as you can get. Thousands of generations of tough-as-nails people underwent every manner of horrors to incrementally build the orderly and safe society many of us have the luxury of inhabiting today, and the vast majority of Generation X and Y neither appreciate nor understand that.
Wilsonian idealism, in particular, proved to be a turning point in the way Americans think about social interaction on a wide scale. Wilson was one of the first leaders to argue that national actions should be based on approximating some benevolent global goal or ideals rather than narrow national interest. This is not a terrible idea in principle, but without the brutal threat of military force or economic intimidation, it can't be carried out. High ideals are a luxury purchased with the currency of de facto and de jure power. De jure power itself is just a fabrication, a consensual illusion that draws all its strength from de facto power to persist, as a flower depends on its roots and stem.
A defenseless peasant of the Middle Ages could talk all he wanted about treating thy neighbor as thyself, kindness, sharing, reasonableness -- whatever. It wouldn't necessarily stop a power-mad knight from riding onto his land the next day, chopping off his head, taking his wife, and setting fire to his house and fields.
Benevolence, to flourish, should be promoted with words and ideas, but also with force. Ultimately, people often choose to be stubborn and ignore all words. There are also those who pretend to go along but coordinate to violate norms discreetly, usually with thinly veiled humor.
Security is the foundation of everything else. Free speech, including the ability to criticize the government and military, only exists because the highest power in the land permits it. If it's a God-given right, God had a funny way of implementing it, having denied it to all his subjects by default for thousands of years as they lived under feudal rule and local warlords or strongmen.
Security does not come easy, since there are many people who will violate it any chance they can get for personal gain. Perhaps there exist some aliens who naturally cooperate peacefully, but we are not them. If anything, human beings are more bloodthirsty and warlike than most species, not less. Or, you could say we have a wider variance of behavior -- the ability to be highly cooperative as well as highly uncooperative.
Humanity's tendency to break apart unless constantly under self-vigilance will become an even greater liability for us when the Pandora's Box of Transhumanism is finally opened in the 2030s and 2040s. There are many people in the world interested in technology for only one reason -- to give them a better opportunity to screw over their enemies.
This urge in humanity is simply too omnipresent and intense to be reconciled or eliminated in the very short 20 or 30 years we have before things start to get more intense technologically. We can count on it being there, just as it has been there for thousands of years. The question is what sort of order, or disorder, will emerge when some human beings become radically more powerful than others.
There is a reason why conservatives are afraid of change. If the status quo is seen as acceptable, change risks making things worse -- and most possible changes, arguably, do. Every improvement is necessarily a change, however, so change is necessary if we are going to improve.
Some transhumanists confront the challenge of massive power asymmetry like children. They see nanotechnology, life extension, and AI as a form of candy, and reach for them longingly. Like children, they throw a temper tantrum at any suggestion that the candy could have negative effects as well as positive ones.
Transhumanists have to grow up. The world is not your candy basket. The technologies we are pushing towards could lead to our demise just as easily as our salvation. You and everything you love could be eliminated by the technologies you were so excited about in the 2010s and 2020s.
A cognitive transhuman, in particular, will be a bewitching thing. Someone who thinks faster than you, understands what your microexpressions mean, and has superior predictive theories of both the natural and artificial world will be able to solve "impossible" problems with some regularity. Detectives and the FBI do not primarily solve cases with guns, but with their minds. Superior transhuman minds will run circles around the merely human minds in law enforcement and the FBI, unless the latter has the equivalent or better intelligence enhancement technology.
The intelligence arms race has the potential to get uglier faster than any merely physical arms race before it. An intelligence with access to its own mind, under threat, will have an incentive to actually boost its paranoia through neural self-modification. Psychological extremes never imagined will become routine states for the most experimental and ambitious of the new self-enhancers. They will have every incentive to downplay their accomplishments, hide their abilities, and they will succeed.
The second we create an intelligence superior to ourselves, the world could become fundamentally unsafe in a new way. The delicate balance of roughly human-level intelligence will be broken. All rules will be thrown out the window. Transhumans will not feel intimidated by the threats of humans. This is a really good thing if they are on our side, a really bad thing if not. The choices we make in creating the first transhumans will determine whether they are on our side or not in the longer term. The great tree of the Transhuman World will be grown by the seed we plant today.
The future is not exciting and optimistic. The future is dark and uncertain, imbued with the heavy sense of responsibility we personally have to make things go well. Reflecting back on this century, if we survive, we will care less about the fun we had, and more about the things we did to ensure that the most important transition in history went well for the weaker ambient entities involved in it. The last century didn't go too well for the weak -- just ask the victims of Hitler and Stalin. Hitler and Stalin were just men, goofballs and amateurs in comparison to the new forms of intelligence, charisma, and insight that cognitive technologies will enable.
R.U. Sirius writes:
Today, I think there are many more self-defined transhumanists. There is more willingness, particularly perhaps with post-Gen X young people, to define themselves -- to stand up and say, without reflexive irony, "I'm a transhumanist!" or "I'm an atheist!" or "I'm a socialist!" or "I'm a libertarian!" whereas it would have seemed almost gauche in the 90s.
Yes! More socially aware and technologically connected than people of the "Me Decade" and the decade right after it, the leaders of the 10s recognize the importance of groups and movements beyond the individual. This is the age of Facebook and Causes. People realize that intellectual movements, like atheism and transhumanism, need their support and identification to exist. Someone who is too self-centered to join any club that will have them is someone who will sit on the sidelines of history.
When I say "I'm an atheist", it makes it slightly more acceptable to be an atheist, because I'm another person "putting my name on the line". The point is that it shouldn't be questioned or considered at all abnormal to be an atheist. To dispel the stigma we need to take the association and lend it positive affect. Same with transhumanism, though the stigma on that appears to be evaporating, even directly in the mainstream.
Anyway, many of those who do self-define as transhumanists today might be seen as a hardier bunch--they're going to keep their eyes on the prize, so to speak, whatever comes at them--or alternatively, they could be seen as simply more ideologically convinced, or in some cases, more willing to elide or ignore or underestimate the crises around them.
Let's go with "hardier bunch". Transhumanists of the 2010s realize that the problems on the front page of the news are nothing compared to the greater background problems of civilization, like poverty, disease, aging, and violent conflict. We seek to decisively solve all these age-old problems, not just chip away at them with the same old failed strategies. To go with a phrase that my friends Sergio Tarrero and Philippe van Nedervelde like to use, "radical improvement".
Self-defining transhumanists are ideologically confident and do not let the mainstream think for them. To quote the recent TIME article on the Singularity:
Not all of them are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
People confident enough to look at the evidence, think carefully, come to an interim belief, and then remain confident of their position in the face of social pressure are people I can respect. I respect all people like this, not just Singularitarians or people I agree with!
There is a strain in hipster culture that to be apathetic about everything is cool. In opposition to that is a larger philanthropic entrepreneurial subculture that really cares about actually improving the world. Transhumanism is just a little part of this emerging and powerful subculture. GOOD magazine, founded by a 26-year-old in 2006, is generally considered representative of this new movement.
Over at the Speculist, Phil Bowermaster understands the points I made in "Yes, the Singularity is the biggest threat to humanity", which, by the way, was recently linked by Instapundit, who unfortunately probably doesn't get the point I'm trying to make. Anyway, Phil said:
Greater than human intelligences might wipe us out in pursuit of their own goals as casually as we add chlorine to a swimming pool, and with as little regard as we have for the billions of resulting deaths. Both the Terminator scenario, wherein they hate us and fight a prolonged war with us, and the Matrix scenario, wherein they keep us around essentially as cattle, are a bit too optimistic. It's highly unlikely that they would have any use for us or that we could resist such a force even for a brief period of time -- just as we have no need for the bacteria in the swimming pool and they wouldn't have much of a shot against our chlorine assault.
"How would the superintelligence be able to wipe us out?" you might say. Well, there's biowarfare, mass-producing nuclear missiles and launching them, hijacking existing missiles, neutron bombs, lasers that blind people, lasers that burn people, robotic mosquitos that inject deadly toxins, space-based mirrors that set large areas on fire and evaporate water, poisoning water supplies, busting open water and gas pipes, creating robots that cling to people, record them, and blow up if they try anything, conventional projectiles... You could bathe people in radiation to sterilize them, infect corn fields with ergot, sprinkle salt all over agricultural areas, drop asteroids on cities, and many other approaches that I can't think of because I'm a stupid human. In fact, all of the above is likely nonsense, because it's just my knowledge and intelligence that is generating the strategies. A superintelligent AI would be much, much, much, much, much smarter than me. Even the smartest person you know would be an idiot in comparison to a superintelligence.
One way to kill a lot of humans very quickly might be through cholera. Cholera is extremely deadly and can spread very quickly. If there were a WWIII and it got really intense, countries would start breaking out the cholera and other germs to fight each other. Things would really have to go to hell before that happened, because biological weapons are nominally outlawed in war. However, history shows that everyone breaks the rules when they can get away with it or when they're in deep danger.
Rich people living in the West, especially Americans, have forgotten the ways that people have been killing each other for centuries, because we've had a period of relative stability since WWII. Sometimes Americans appear to think like teenagers who believe they are immortal. This is a quintessentially ultra-modern and American way of thinking, though most of the West thinks this way. For most of history, people realized how fragile they were and how aggressively they needed to fight to defend themselves from enemies inside and out. With our sophisticated electrical infrastructure (which, by the way, could be eliminated by a few EMP-optimized nuclear weapons detonated in the ionosphere), nearly unlimited food, water, and other conveniences present themselves to us on silver platters. We overestimate the robustness of our civilization because it has worked smoothly so far.
Superintelligences would eventually be able to construct advanced robotics that could move very quickly and cause major problems for us if they wanted to. Robotic systems constructed entirely of fullerenes could be extremely fast and powerful. Conventional bullets and explosives would have great difficulty damaging fullerene-armored units. Buckyballs only melt at roughly 8,500 Kelvin, almost 15,000 degrees Fahrenheit. 15,000 degrees. That's hotter than the surface of the Sun. (Update: Actually, I'm wrong here, because the melting point of bulk nanotubes has not been determined and is probably significantly lower. 15,000 degrees is roughly the temperature at which a single buckyball apparently breaks apart. However, some structures, such as nanodiamond, would literally be macroscale molecules and might have very high melting points.) Among "small arms", only a shaped charge, whose jet moves at around 10 km/sec, could make a dent in thick fullerene armor. Ideally you'd have a shaped charge made out of a metal with extremely high mass and temperature, like molten uranium. Still, if the robotic system moved fast enough and could simply detect where the charges were, conventional human armies wouldn't be able to do much against it, except perhaps by using nuclear weapons. Weapons like rifles wouldn't work because they simply wouldn't deliver enough energy in a condensed enough space. To have any chance of destroying a unit that moves at several thousand mph and can dodge missiles, nuclear weapons would likely be required.
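For what it's worth, the unit conversion behind that figure checks out -- here's a quick sanity-check sketch in Python (the 8,500 K number is just the one cited above, and the solar comparison uses the photosphere temperature of roughly 5,772 K):

```python
def kelvin_to_fahrenheit(k):
    """Convert a temperature from Kelvin to degrees Fahrenheit."""
    return (k - 273.15) * 9 / 5 + 32

# The ~8,500 K breakup figure cited for a buckyball:
print(round(kelvin_to_fahrenheit(8500)))  # 14840 -- i.e. "almost 15,000"

# For comparison, the Sun's visible surface (photosphere), ~5,772 K:
print(round(kelvin_to_fahrenheit(5772)))  # 9930 -- so yes, hotter than the Sun's surface
```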
When objects move fast enough, they become invisible to the naked eye. How fast something needs to move to be unnoticeable varies with its size, but for an object a meter long it's somewhere around the speed of sound -- Mach 1, roughly 770 mph at sea level. There is no reason why engines could not eventually be developed that propel person-sized objects to those speeds and beyond. In this very exciting post, I list a few possible early-stage products that could be built with molecular nanotechnology to take advantage of high power densities. Google "molecular nanotechnology power density" for more information on the kind of technology a superintelligence could develop and use to take over the world quite quickly.
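The conversion behind that speed figure can be sketched too. Note the sea-level speed of sound here is an assumed reference value (~343 m/s at 20 degrees C); it varies with temperature and altitude:

```python
# Sanity check on the Mach 1 figure, with assumed reference constants.
MPH_PER_MPS = 2.23694          # 1 m/s = 2.23694 mph
SPEED_OF_SOUND_MPS = 343.0     # ~Mach 1 at sea level, 20 degrees C

def mph_to_mach(mph):
    """Convert a speed in mph to a Mach number at the assumed conditions."""
    return (mph / MPH_PER_MPS) / SPEED_OF_SOUND_MPS

print(round(SPEED_OF_SOUND_MPS * MPH_PER_MPS))  # 767 -- Mach 1 in mph
```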
A superintelligence, not being stupid, would probably hide itself in a quarantined facility while it developed the technologies it needed to prepare for doing whatever it wants in the outside world. So, we won't know anything about it until it's all ready to go.
We'll still be stuck in the blue region while superintelligences develop robotics in the orange and red regions and have plenty of ability to run circles around us. There will be man-sized systems that move at several times the speed of sound and consume kilowatts of energy. Precise design can minimize the amount of waste heat produced; the challenge is swimming through all that air without being too noticeable. There will be tank-sized systems with the power consumption of aircraft carriers. All these things are probably possible; no one has built them yet. People like Brian Wang, who writes one of the most popular science/technology blogs on the Internet, take it for granted that these kinds of systems will eventually be built. The techno-elite know that these sorts of things are physically possible; it's just a matter of time. Many of them might consider technologies like this centuries away, but for a superintelligence that never sleeps, never gets tired, can copy itself tens of millions of times, and can parallelize its experimentation, research, development, and manufacturing, we might be surprised how quickly it could develop new technologies and products.
The default understanding of technology is that the technological capabilities of today will pretty much stick around forever, but we'll have spaceships, smaller computers, and bigger televisions, perhaps with Smell-O-Vision. The future would be nice and simple if that were true, but for better or for worse, there are vast quadrants of potential technological development that 99.9% of the human species has never heard of, and vaster domains that 100% of the human species has never even thought of. Superintelligence will happily and casually exploit those technologies to fulfill its most noble goals, whether those noble goals involve wiping out humanity, or curing all disease and aging and creating robots to do all the jobs we don't feel like doing. Whatever its goals are, a superintelligence will be most persuasive in arguing for how great and noble they are. You won't be able to win an argument against a superintelligence unless it lets you. It will simply be right and you will be wrong. One could even imagine a superintelligence so persuasive that it convinces mankind to commit suicide by making us feel bad about our own existence. In that case it might need no actual weapons at all.
The above could be wild speculation, but the fact is we don't know. We won't know until we build a superintelligence, talk to it, and see what it can do. This is something new under the Sun, no one has the experience to conclusively say what it will or won't be able to do. Maybe even the greatest superintelligence will be exactly as powerful as your everyday typical human (many people seem to believe this), or, more likely, it will be much more powerful in every way. To confidently say that it will be weak is unwarranted -- we lack the information to state this with any confidence. Let's be scientific and wait for empirical data first. I'm not arguing with extremely high confidence that superintelligence will be very strong, I just have a probability distribution over possible outcomes, and doing an expected value calculation on that distribution leads me to believe that the prudent utilitarian choice is to worry. It's that simple.
Remember, most transhumanists aren't afraid of superintelligence because they actually believe that they and their friends will personally become the first superintelligences. The problem is that everyone thinks this, and they can't all be right. Most likely, none of them are. Even if they were, it would be rude for them to clandestinely "steal the Singularity" and exploit the power of superintelligence for their own benefit -- possibly at the expense of the rest of us. Would-be mavericks should back off and help build a more democratic solution, a solution that ensures that the benefits of superintelligence are equitably distributed among all humans and perhaps (I would argue) to some non-human animals, such as vertebrates.
Coherent Extrapolated Volition (CEV) is one idea that has been floated for a more democratic solution, but it is by no means the final word. We criticize CEV and entertain other ideas all the time. No one said that AI Friendliness would be easy.
Carl Zimmer wrote this: "Can You Live Forever? Maybe Not -- But You Can Have Fun Trying". This is a very positive, yet slightly skeptical look at the Singularity movement. This article is a follow-up to Zimmer's earlier article in Playboy, which came out this January. This year, there have been articles on the Singularity Summit and Singularity Institute in Playboy, GQ, the UK Independent, and Scientific American. Here's a funny bit from the current article:
After the meeting I decided to visit researchers working on the type of technology that people such as Kurzweil consider the steppingstones to the Singularity. Not one of them takes Kurzweil's own vision of the future seriously. We will not have some sort of cybernetic immortality in the next few decades. The human brain is far too mysterious and computers far too crude for such a union anytime soon, if ever. In fact some scientists regard all this talk of the Singularity as a reckless promise of false hope to the afflicted.
But when I asked these skeptics about the future, even their most conservative visions were unsettling: a future in which people boost their brains with enhancing drugs, for example, or have sophisticated computers implanted in their skulls for life. While we may never be able to upload our minds into a computer, we may still be able to build computers based on the layout of the human brain. I can report I have not drunk the Singularity Kool-Aid, but I have taken a sip.
Taking a sip is a subset of drinking.
I've been reading some of the public material on the new President's Council on Bioethics, because I can't for the life of me figure out what they do. I do know that they're called the "Presidential Commission for the Study of Bioethical Issues" now. Same domain name, different panel. It seems as if Obama was so against the old council that he had to destroy it and create a new one from scratch, which highlights the transitory and low-power nature of the body.
Checking out the background materials section of their website, I was compelled to click on the first presentation at meeting two, "Oversight of Emerging Technologies". It outlines important overall characteristics of this panel. Their mission is as follows:
1. To monitor scientific/medical developments ("advances") and identify the issues they will raise for society
2. To bridge divide between science and society
3. To articulate the range of views on controversial subjects
4. To inform the political process & policymaking
5. To provide guidance to individuals & healthcare professionals
6. To provide recommendations to policymakers
Another important part, under "mode of work", is this:
Not asked to invent new philosophical theories but to offer conclusions & recommendations based on multidisciplinary analysis of issues facing policy makers, healthcare professionals, scientists, patients & families
I love this part. It seems to be a nod to the theological conservative philosophy underlying Bush's panel, and backing away from that attitude. Some of the panel members are no doubt aware of transhumanism as well, so the statement might be seen as a reassurance that the panel won't take sides and align with any particular philosophy.
The problem with not inventing or using existing philosophical theories is that you disempower yourself. Philosophical theories often dictate the course of history. The Presidential panel's decision not to embrace philosophies for guiding bioethical decisions (as if that's even possible) creates a power vacuum for transhumanists and bioconservatives to fight over.
Because these panels are so transitory, they lack stability and power. Transhumanism, in contrast, is an ongoing, capable, increasingly high-profile community that was more or less founded with the launch of the extropians mailing list in 1991. Bioconservatives, meanwhile, mostly focus on near-term issues like abortion, outlawing marijuana, and assorted anti-gay bigotry. On future issues, the issues that will determine the trajectory of the 21st century, they are mostly disorganized or silent. The New Atlantis, the bioconservative journal, has received barely any external coverage in its seven years of existence. Surprisingly, the editor's column in the Wall Street Journal sometimes more closely resembles transhumanist articles than anything written by Leon Kass.
As far as I can tell, the main function of Obama's panel seems to be to assess the potential risks of synthetic biology, which is fantastic. The phrase "life extension" does not appear anywhere on the site.
1) We Shall Soon Be Able To Choose Our Own Level Of Pain-Sensitivity
2) We Can Soon Choose How Rewarding We Want Our Daily Life To Be
3) Steak Lovers and Vegans Alike Can Soon Eat Cruelty-Free Diets
4) Carnivorous Nonhuman Predators Can Be Phased Out Too
5) We May Be On The Eve Of An "Intelligence Explosion"
What good is transhumanism if it can't eliminate suffering?
Here's the website. Humanity+ @ CalTech is hosted by the California Institute of Technology and ab|inventio, the invention factory behind QLess, Whozat, SocialDiligence and MyNew.TV.
The speakers list is a mix of the usual suspects and some new names. The usual suspects include Randal Koene, Suzanne Gildert, Michael Vassar, Max More, Natasha Vita-More, Bryan Bishop, Patri Friedman, Ben Goertzel, and Gregory Benford. If you were following my tweets from this weekend, you'll recall that Benford announced StemCell100(tm) at the Life Extension Conference in Burlingame; it's a product of LifeCode, a spinoff company of Genescient.
The conference is partially being organized by my friend Tom McCabe, who was recently voted on to the Board of Directors of Humanity+. Please let Tom know (his email is at his website) if you want to help sponsor the event!
Diamonds adorning everything from tiaras to anklets are treasures, but these gemstones inside the body may prove priceless.
Two Case Western Reserve University researchers are building implants made of diamond and flexible polymer that are designed to identify chemical and electrical changes in the brain of patients suffering from neural disease, or to stimulate nerves and restore movement in the paralyzed.
The work of Heidi Martin, a professor of chemical engineering, and Christian Zorman, a professor of electrical engineering and computer science, is years from human trials but their early success has drawn interest worldwide.
My general stance on enhancement and implants is "go diamond or go home", and its corollary, "go fullerene or go home".
Some of you may have caught Hank Pellissier's (also known as Hank Hyena) review of Singularity Summit 2010 on a World Future Society blog, where he said:
"Glancing at the program, I realized no one else was going to even remotely address augmentation. Why? Because they're computer geeks, I decided in an epiphanic flash. They adore computers; they want to build THEM smarter and quicker and stronger... But what about ME? I want all that, for myself."
Hank... no. Come on. The point of the Singularity Institute is to build AI for the benefit of humanity, and we even get people who complain that we focus too much on preserving human values and not letting AI "develop on its own". Fuck computers, really. (Sorry for the language, but I want to make a strong point about values.) The point of a computer is to serve a conscious being. A computer is a hunk of sand and plastic. Certain computers may one day implement conscious beings, but even then, the being itself is somewhat more of a nebulous concept than the computer, which is just the substrate. Even though you are your brain, we generally refer to each other as people, not "brains".
When people at the Singularity Summit talk about computers and AI technology, the primary reason they give a damn about it is because it enhances some aspect of human performance. Usually the focus is on enhancing the senses, cognition, and thought. This is human enhancement. I suspect that because Hank is a bit on the older side (nothing wrong with that -- I want to live for billions of years), he has physical enhancement more in mind. Well, there were talks relevant to that. Ben Goertzel's talk on "AI Against Aging" was about using AI to analyze genomic information to tease out new correlations between alleles and conditions. It's a form of augmentation because it lets us do analysis we couldn't do on our own, then apply it to curing aging, which is another form of augmentation. What would Hank say about this -- that it's not direct enough, or something? That we should just focus on genomic analysis using existing tools and never improve them? Hank, the reason for developing better tools is to hurry along augmentation. It's analogous to someone complaining that a group building a bridge is casting iron beams instead of sawing down a tree and hoping it falls right across the river. Better tools lead to better augmentation outcomes. Try to climb Everest with inferior tools and you just freeze.
Gregory Stock talked about the ethical issues around likely upcoming augmentation (uploading); that counts as "predictions of augmentation," I should think. If someone takes profound augmentation for granted in their talk, isn't that paying credit to human augmentation?
Steven Mann's talk was almost completely about augmentation. He discussed his EyeTap system, which records everything he sees and creates an augmented reality interface. Hank complains that "Mann veered away from enhancement to spend the last 10 minutes introducing us to his wet new invention, 'the world's first musical instrument that produces sound from vibrations in water.'" If he spent most of the beginning of his talk discussing augmentation, and then spends 10 minutes talking about an excellent musical instrument that had everyone excited, is that such a big deal?
I could go on, but the point is that practically every talk at the Singularity Summit was about augmentation in some fashion -- Hank Hyena just didn't always understand the connection. Another example would be Lance Becker, who discussed avoiding reperfusion injury, which could save huge numbers of people from dying when their body or brain is somehow deprived of oxygen for an extended period of time. This is very much a form of augmentation. Not permanent, not an implant, but if you have a heart attack and this saves your life, I think you'd be very appreciative anyway. Until we invent respirocytes (artificial red blood cells) or something like them, that invention will be a life-saver.
Same Goals, But Our Methods are Superior
Another annoying part of Hyena's review is this bit, which embodies a simplistic view of the future that several prominent transhumanists have fallen into as well:
Exactly, I thought. I want my IQ boosted, my senses expanded, my muscles strengthened, my organs cleansed. I want the Singularity to happen inside me. Why? I'm reasonably concerned that a smarter-than-us AI machine might choose to eliminate gross humanity, but mostly... I'm tired of my mental and physical limitations. We "meatbags" - a new pejorative for our archaic anatomy - need to be upgraded. We've already assisted ourselves with contact lenses, plastic hips, hearing aids, prosthetic limbs... but still, the preponderance of our flesh remains Paleolithic.
I want all this same stuff, and the best way to get there as soon as possible is through Friendly AI. Why? Because there's too much to do otherwise. If people think they have faster routes to a disease-free, post-scarcity, post-death world, by all means, pursue them, but everyone should understand that those who pursue Friendly AGI have the same base goals as you, we just use different means. We want the same damn thing. We repeat this again and again, but some people just don't get it. We aren't narrow-minded nerds with a special love for AI, really.
Hank also seems to think it would be possible to use human augmentation to keep up with or fight AI, which is just foolish. (Stephen Hawking is foolish for thinking this as well.) If AI gets to the point of human-superior intelligence, there will be no fighting with it. It will just win. You're better off making yourself safe by building an AI on your side. The chance of you personally becoming powerful enough to contain and control all AI research globally is close to zilch. Even some prominent transhumanists have this fantasy. They think they can run away from, or outsmart, recursively self-improving AI. Don't bother. The advantages a smarter-than-human AI would inherently have by virtue of its substrate are just too immense.
There's another problem with messing around with direct human augmentation -- it's really damn hard. Even the Pentagon, with billions of dollars in research funding, is sort of dancing around the edges of it. Before we can have direct human augmentation, we need better support systems. Support systems are much easier to engineer. Hank, did you know that the risk of complications in practically any surgery is a few percent? True augmentation that requires surgery is impossible without advanced robotics. Manual surgery by doctors is too expensive and imprecise. Humans mess up. We need advanced robotics and AI to achieve the outcomes we desire. No society is going to adopt widespread surgical enhancement unless the complication rate is pushed way, way down. Yes Hank, I know that you would probably volunteer to be among the first to line up for the augmentation, but the problem with early prototypes is that they have bugs to be worked out. That involves basically torturing animals (animal "testing") on a mass scale, and we haven't even begun making cyborg rabbits yet, so what makes you think that cyborg humans are anywhere close? Robotics and AI -- external systems -- are where it's at, and it will remain that way for 15-20 years at least.
Speaking of animals, we need better computer simulations of biological systems to avoid hundreds of thousands of highly painful, often lethal tests on defenseless rodents, rabbits, chimps and dogs. I know the "me! me!" attitude says "let them die for me", but even the guys that run these tests are constantly looking to improve their simulations. For a few years my dad worked at Genentech, and often talked about the PR challenges they had to deal with in animal testing. There are plenty of scientists whose work involves killing animals all day, and although they may learn something from it, I doubt they would mind if their animals were replaced by simulations that provided the same information. From a pragmatic perspective, there's also the question of cost. You can save orders of magnitude on cash by testing upgrades in silico rather than in vivo.
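The "orders of magnitude" claim can be made concrete with a purely illustrative back-of-envelope. The dollar figures below are assumptions I'm making up for the sake of the arithmetic, not data from any study:

```python
# Hypothetical back-of-envelope: in-vivo vs. in-silico testing costs.
# All numbers are illustrative assumptions, not real figures.
in_vivo_cost_per_test = 50_000   # assumed: animal care, surgery, staff, compliance
in_silico_cost_per_test = 50     # assumed: compute time for one simulation run
tests = 10_000                   # assumed size of a testing program

savings_factor = in_vivo_cost_per_test / in_silico_cost_per_test
total_saved = tests * (in_vivo_cost_per_test - in_silico_cost_per_test)

print(f"~{savings_factor:.0f}x cheaper per test")      # ~1000x under these assumptions
print(f"${total_saved:,} saved over {tests:,} tests")  # $499,500,000 under these assumptions
```

Under these made-up inputs the per-test savings come out to three orders of magnitude, which is the ballpark the in-silico argument rests on; the real ratio depends entirely on what the simulations cost to build and validate.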
Build Some Muscle, You Frail Nerds!
Want your IQ boosted, your senses expanded, your muscles strengthened, your organs cleansed? Some transhumanists may not realize that you can do these things today, without Future Tech(tm). Future tech should not be viewed as a free path for lazy people to get smarter and in shape. I mean, it could be viewed as that, but it's not very honorable. It should be viewed as an extension of things we can do today to improve our minds and bodies. Want your IQ boosted? Get ten hours of sleep a night. Want to expand your senses? How about buying a microscope, a telescope, and really figuring out how to use your smartphone to access information? Want to strengthen your muscles? How about a run and a trip to the gym? Want to cleanse your organs? Eat wholesome food and moderate your alcohol intake.
To avoid looking like (and being) frail nerds sitting in front of computers awaiting Techno Rapture, we need to push ourselves with what we have now. Many transhumanists might not realize what unaugmented human beings are capable of. For instance, there's this one badass by the name of Richard Proenneke who lived alone for 30 years in the remote Alaskan wilderness in a log cabin he built. This was his retirement -- he was 52 when he moved out there and 86 when he died. He only returned to civilization at age 82. Most people's retirement consists of sitting on their ass getting drunk and watching television until they die, but this guy was keeping himself alive in an unforgiving sub-zero environment simply with his own dedication and skills. Most dudes in their early 20s who fancy themselves badass would be dead after a week or two in that environment. One naive youngster who tried it, Chris McCandless, died within a few months; the moose he shot rotted because he didn't know the first thing about preserving meat.
The best forms of "augmentation" available today are simply being proficient with computers/smartphones and getting into shape physically. Say that I bought an exoskeleton tomorrow and used it to go for a hike in the mountains. Well, it would let me hike with minimal effort for a while, then eventually probably run out of power or otherwise break down, and I would be unaugmented again. My biological body has manifold advantages over any exoskeleton that will be developed in the immediate future -- it runs on biomass, it's self-repairing, it has a highly redundant structure, it can grow, it's closely connected to my brain via nerves, etc. Even the best exoskeletons will only be used for specialized purposes until we develop molecular nanotechnology.
Molecular nanotechnology is really necessary to get anywhere with augmentation. Otherwise, forget it. Surgery is a hassle. No one will ever get FDA approval to rip off a dude's arm and replace it with a cyborg arm, even if the cyborg arm is "better". (The first few million prototypes definitely won't be.) Transhumanists have to understand the potential of molecular nanotechnology and how far superior it is to anything we are developing now, or they just get confused about future possibilities.
Many transhumanist ideas, including Ray Kurzweil's books, are developed in the context of MNT becoming available in the 2020s. Without MNT or AGI, the future looks much more mundane for the next couple decades. The real excitement for augmentation may be with brain-computer interfaces like the kind that Ed Boyden of MIT is developing, but even Mandayam Srinivasan's (MIT Media Lab) talk on brain-computer interfaces didn't impress Hank because "the robots got to do all the cool stuff". Hank, those robots are going to be developed into our literal appendages. Don't you see how this is part of the process? Robotics must be improved apart from humans before it gets miniaturized, powerful, and sophisticated enough to be worth ripping our skin open, throwing out what evolution gave us, and putting in the new components. If you're in such a hurry, you can always pay an underground surgeon in the Philippines to slice you open and put in whatever the state of the art is today, but I doubt you would enjoy the results.
The unfortunate fact is that Nature is unforgiving. If you're older than 50 or 60, you should sign up for cryonics immediately, and not count on living forever due to advances in biotechnology. About halfway through life is where disorders and diseases start killing us more often than accidents. Some of these are difficult to avoid, and just happen, even if we do everything right. If you do everything right with food and exercise but don't sign up for cryonics, you are signing your own death warrant. What about improving cryonics? What about the Brain Preservation Prize? Cryonics is the catch-all plan. Nothing else is certain enough -- even cryonics is uncertain because you might die in circumstances where the transport and stabilization folks can't get to you soon enough, or because a future grid-down scenario interrupts the flow of liquid nitrogen and you thaw out. When I talk to older transhumanists that are into cryonics, I see people who are psychologically calmer than those who endlessly obsess over their food, questionable supplements, and other minutiae that will mean jack squat if they get into a simple car accident. Why not pump some iron so that your next fall down the stairs isn't a fatal affair? Why are there so many older transhumanists who are chubby, or who have a frame so frail it looks like a stiff breeze could take them out?
In his review, Hank wrote:
At lunch, I chomped on a turkey sandwich donated by Boudin and engaged a Florida hacker in a flesh-based vs. artificial organ argument. "I don't want a pig heart that will die like a pig," I whined. "I want a synthetic heart that lasts forever; they're arriving in only five years." He demurred, pulling his beard. We talked about rock-climbing next. "Wouldn't it be great to have ultra-powerful fingers?" I proposed. "I'd scamper up Half Dome like a spider!" He retorted, "it's also enjoyable to simply see what we can accomplish within our own limitations." Luddite, I thought. Can't we have fun?
Both of them are right. Hank is silly for calling someone who wants to accomplish what they can within their own limitations a Luddite. Does he expect everything to be handed to him on a silver platter? Transhumanism is not a bowl of free candy. It's an extension of the age-old desire to improve ourselves, which requires work, sweat, inconvenience, getting out of our comfort zone, and other non-nerdy activities that some transhumanist brainiacs shy away from.
All or Nothing
One more point. To get the kind of cybernetic upgrades that Hank and I both want would require replacing the entire musculoskeletal system, or even the entire body minus the brain. I credit this insight to Greg Fish, who regularly brings it up. In a recent blog post, he wrote:
Likewise, it's fun to dream about being a superhuman cyborg, but actually giving up a limb and coping with the consequences is completely different. There would be no way to reattach legs and arms, and once the surgery begins, there's no going back. Personally, I wouldn't want to fix what's not broken. Now, if at some point in my old age bits and pieces start falling into disrepair and the proper technology had a good, long, successful run at limb and organ replacements, I might just go for it. But until then, I'd like to stick with what biology gave me. And I suspect that so would many others...
This is damn right. I suspect that with molecular nanotechnology and extensive research and development, you could eventually build a robotic arm so awesome and superior that I'd be willing to saw off my original arm for it, but remember this is sawing off your arm we're talking about here. Sawing off your own arm is not something you do lightly, unless you're trying to commit suicide in a way that is totally brutal and metal.
The immediate future of human enhancement is in biology and the better use of external augmentations. People are too socially timid to even take advantage of external augmentations available today that improve our survival prospects by a significant margin. For instance, wearing a helmet while driving. Over a million people per year die in car accidents, and fifty million are injured, but most people treat driving as if it were completely safe. Forget cyborg arms -- how about a transparent helmet that is so inconspicuous that people can wear it confidently while driving without worrying about being laughed at by strangers?
When I get my internal upgrades, which probably won't be until a hard takeoff Singularity anyway because humans are too stupid to make good ones before then, I'm going to get it all done at once. Remember how Hank was talking about super-strong fingers? Well, if your fingers were ultra-strong and you were scampering up Half Dome like Spider-Man, your fingers would fall off, because the muscles and tendons just below the fingers wouldn't be able to take the stress. You plummet off Half Dome and your body sits at the bottom as a rotting carcass. The only way to really upgrade the body is to upgrade the whole thing at the same time. Everything in the body is designed with the assumption that everything else is the way it's always been.
I know everyone wants to fly, and to fly right, your body has to be durable. Durable as in the entire thing should be made of fullerenes. Anything less is pointless. Fullerene heart, fullerene skull, fullerene matrix throughout the brain cushioning it from g forces, fullerene muscles, etc. Fullerenes are the only class of materials really durable enough to have fun with, and anything less is just a prototype waiting to get trashed. Even a full fullerene chassis would be dependent on external bits (micro-UAVs) to really function at its full potential, so you're dependent on external robotics anyway. If I could choose between a cyborg body that is highly "advanced" by pre-Singularity, pre-MNT standards, and a simple flexible exoskeleton plus a bit cloud for support, I'd choose the latter.