Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

8Oct/11

The Mundanity of Physical Enhancement

Although physical enhancement is what most people associate with transhumanism, it's not particularly interesting. A man with tentacles and wings who can fly and breathe underwater is still just some dude. Humans are primitive beings, with conspicuously primitive minds -- we just recently evolved from un-intelligent apes that used the same stone tools for millions of years.

Everything truly exciting about the transhumanist project lies in the mental realm. Only through opening up and intervening in the brain can we really change ourselves and the way the world works. Anything else is just the surface.

What approaches can we take to cognitive enhancement?

First, take brain surgery. It is extremely unlikely that cognitive enhancement will be conducted through conventional brain surgery as practiced today. These procedures are inherently risky and are performed only when medically necessary, when the expected benefits outweigh the huge cost, substantial risk, and long recovery time involved.

More subtle than brain surgery is optogenetics, regarded by some as the scientific breakthrough of the last decade. Optogenetics allows researchers to control the precise activation of neurons through the introduction of light-sensitive genes to animal brain tissue.

Optogenetics is unlikely to be applied to humans before 2030-2040, for two reasons. The first is that it involves the introduction of foreign genes into human brain tissue, and gene therapy is in its infancy -- treatments derived from gene therapy are extremely rare and highly experimental. People have been killed by gene therapy gone awry. When gene therapy research moves in the direction of human enhancement, a massive backlash seems plausible. It may be banned entirely for enhancement purposes.

At the very least, the short-lived nature of gene therapy's effects and problems with viral vectors ensure that gene therapy will stay experimental until entirely new vectors are developed. Chromallocytes are the ideal gene delivery vector, but those are quite far off. Is there something between current vectors and chromallocytes that produces safe, predictable gene therapy results? That is a great big question mark. What is needed is not one or two breakthroughs, but a long series of many breakthroughs. I challenge readers to find anyone in biotech who would bet that gene therapy will be made safe, predictable, and approved for use in humans within 10 years, 20 years, or 30. Developing new basic capabilities in biotech is a long, drawn-out process.

The second reason optogenetics will not bear fruit for cognitive enhancement before 2030-2040 is that it requires slicing off part of the scalp and mounting fiber optics directly on the skull. This is all well and good for animals, which we torment with abandon, but it seems unlikely to be popular among the Homo sapiens crowd. Mature regenerative medicine would be necessary to heal tissue damage from this procedure.

According to Ray Kurzweil's scenario, "nanobots" will be developed during the late 2020s which will be injected into the human body by the trillions, where they can link up with neurons and augment the brain from the inside.

However, given the near complete lack of progress towards molecular nanotechnology since Eric Drexler wrote Engines of Creation in 1986, I find this hard to believe. Nanobots require nanofactories, nanofactories require assemblers, and assemblers would be highly complex aggregates of millions of molecules that themselves would need to be manufactured to atomic precision. Today, all objects manufactured to molecular precision have negligible complexity. The imaging tools that exist today -- and for the foreseeable future -- are far too imprecise to allow for troubleshooting molecular systems of non-negligible size and complexity that refuse to behave as intended. The more precise the imaging method, the more energy is delivered to the molecular structure, and the more likely it is to be blown into a million little pieces.

It is difficult to overstate how far we are from developing autonomous nanobots with the ability to perform complex tasks in a living human body. There is no reason to expect a smooth path from today's autonomous MEMS (micro-electro-mechanical systems) to the "nanobots" of futurist anticipation. Autonomous MEMS are early in their infancy. Assemblers are probably a necessary prerequisite to miniature robotics with the power to enhance human cognition. No one has designed anything close to an assembler, and if progress continues as it has for the last 25 years, it will be many decades before one is developed.

So, those are three technologies that I have argued will not be applied to cognitive enhancement in the foreseeable future -- brain surgery, optogenetics, and nanobots.

9Sep/11

What Does it Mean to be a Transhumanist?

To me, transhumanism is a temporary movement -- transitional. Its role is to help individuals and society transition to living in a world where some portion of society technologically transforms their minds and bodies on both incremental and fundamental levels. This might range from getting a Google-connected neural implant to uploading one's consciousness into a virtual world. We transhumanists consider (cautious!) developments along these lines to be a good thing, and feel that the most pressing objections and concerns have been adequately addressed, including:

- What are the reasons to expect all these changes?
- Won't these developments take thousands or millions of years?
- What if it doesn't work?
- Won't it be boring to live forever in a perfect world?
- Will new technologies only benefit the rich and powerful?
- Aren't these future technologies very risky? Could they even cause our extinction?
- If these technologies are so dangerous, should they be banned?
- Shouldn't we concentrate on current problems...
- Will extended life worsen overpopulation problems?
- Will posthumans or superintelligent machines pose a threat to humans who aren't augmented?
- Isn't this tampering with nature?
- Isn't death part of the natural order of things?

The key is to see the philosophy of "Transhumanism" as just a temporary crutch, a tool for humanity to safely make the leap to transhumanity. Transhumanism is really only simplified humanism. Eventually, transhumanists hope to see a world where a wide variety of physical and cognitive modifications are available to everyone at reasonable cost, and where their use is responsibly regulated, with freedom broadly prevailing over authoritarianism and control. When and if we arrive at that world in one piece, everyone will become a de facto transhumanist, just as today most people are de facto "industrialists" (they benefit from and contribute to modern industrial society) and de facto "computerists".

It is also possible to imagine someone who doesn't anticipate taking advantage of transhumanist technologies being in favor of "transhumanism" nonetheless -- that is, insofar as transhumanists competently and openly discuss the potential upsides and downsides of certain ambitious technological pathways, such as extreme life extension and artificial intelligence, and make progress towards beneficial futures. Since widespread cognitive and physical enhancement will soon affect everyone, including the unmodified, everyone has an obvious stake in the trajectory of enhancement technologies even if they do not personally use them.

Transhumanism can also be viewed as a discussion primarily among those who anticipate taking advantage of enhancement technologies before most others. As such, transhumanism forms a beacon that alerts the rest of society to likely changes and informs society about the kind of people who are most interested in human enhancement. Since certain "transhumanist" technologies, particularly intelligence enhancement, may prove to have decisive power over the course of history in the centuries ahead, it is important to examine the groups pursuing it and their motives.

For instance, DARPA is a hotbed of enhancement research. So, the role of the transhumanist is to alert society to that fact, ask them if they care, and if so, what they think about it. Is it a good thing that the development of human enhancement is being spearheaded by the United States military?

A transhumanist elicits opinions and perspectives on human enhancement from a variety of commentators who might not otherwise spontaneously offer their opinions. This includes critics of enhancement such as The New Atlantis, representing the "Judeo-Christian moral tradition".

Another purpose of the transhumanist is to be a concentrated source of facts and opinions on the concrete details of proposed enhancements, with facts and opinions clearly distinguished from each other. In theory, if the long-term dangers of a particular new technology or enhancement therapy plausibly exceed the benefits, transhumanists are responsible for discouraging the development of those technologies, instead developing alternative technologies that maximize benefits while minimizing risks. It would be easier for transhumanists to divert funding away from dangerous technologies than for, say, bio-conservatives, because researchers under the influence of the extended transhumanist memeplex are the ones developing the crucial technologies and bio-conservatives are not.

A transhumanist is not just a blind technological cheerleader, enraptured by the supposed inevitability of a cornucopian future. A transhumanist should acknowledge the hazy and uncertain nature of the future, accepting beliefs only to the degree that the evidence merits, guided not by ideology but by flexible thinking, always welcoming criticism and views contrary to standard orthodoxies.

20Jul/11

The Last Post Was an Experiment

+1 for everyone who saw through my lie.

I thought it would be interesting to say stuff not aligned with what I believe to see the reaction.

The original prompt is that I was sort of wondering why no one was contributing to our Humanity+ matching challenge grant.

Maybe because many futurist-oriented people don't think transhumanism is very important.

They're wrong. Without a movement, the techno-savvy and existential risk mitigators are just a bunch of unconnected chumps, or in isolated little cells of 4-5 people. With a movement, hundreds or even thousands of people can provide many thousands of dollars worth of mutual value in "consulting" and work cooperation to one another on a regular basis, which gives us the power to spread our ideas and stand up to competing movements, like Born Again bioconservatism, which would have us all die by age 110.

I believe the "Groucho Marxes" -- those who "won't join any club that will have them" -- are sidelining themselves from history. Organized transhumanism is very important.

I thought quoting Margaret Somerville would pretty much give it away, but apparently not.

To me, cybernetics etc. are just a tiny skin on the peach that is the Singularity and the post-Singularity world. To my mind, SL4 transhumanism is pretty damn cool and important. I've written hundreds of thousands of words for why I think so, but there must be something I'm missing.

To quote Peter Thiel, those not looking closely at the Singularity and the potentially discontinuous impacts of AI are "living in a fantasy world".

17Jul/11

Why “Transhumanism” is Unnecessary

Who needs "transhumanism"? Millions of dollars are going into fields such as brain-computer interfacing, robotics, AI, and regenerative medicine without the influence of "transhumanists". Wouldn't transhumanism be better off if we relinquished the odd name and just marketed ourselves as "normal"?

Wild transhumanist ideas such as cryonics, molecular nanotechnology, hard takeoff, Jupiter Brains, and the like, distract our audience from the incremental transhumanist advances occurring on an everyday basis in labs at universities around the world. Brain implants exist, gene sequencing exists, regenerative medicine exists -- why is this any different than normal science and medicine?

Motivations such as the desire to raise one's father from the dead are clearly examples of theological thinking. Instead of embracing theology, we need to face the nitty gritty of the world here and now, with all of its blemishes and problems.

Instead of working towards blue-sky, neo-apocalyptic discontinuous advances, we need to preserve democracy by promoting incremental advances to ensure that every citizen has a voice in every important societal change, and the ability to democratically reject those changes if desired.

To ensure that there is not a gap between the enhanced and the unenhanced, we should let true people -- Homo sapiens -- vote on whether certain technological enhancements are allowed. Anything else would be irresponsible.

As Margaret Somerville recently wrote in the Vancouver Sun:

Another distinction that might help to distinguish ethical technoscience interventions from unethical ones is whether the intervention affects the intrinsic being or essence of a person -- for instance, their sense of self or consciousness -- or is external to that. The former, I propose, are always unethical, while the latter may not be.

The intrinsic essence and being of a person is not something to be taken for granted -- it has been shaped carefully by millions of years of evolution. If we start picking arbitrary variables and trying to optimize them, the consequences could be very unpredictable. Our lust for pleasure and power could quickly lead us to a dark road of narcissistic self-enhancement and disenfranchisement of the majority of humanity.

Filed under: transhumanism 93 Comments
7Jul/11

Dale Carrico Classics

Just in case there are new readers, I want to refer them to the writings of Dale Carrico, probably the best transhumanist critic thus far. He's a lecturer at Berkeley. (Maybe The New Atlantis should try hiring him, though I sort of doubt they'd get along.) I especially enjoy this post responding to my "Transhumanism Has Already Won" post:

The Robot Cultists Have Won?

When did that happen?

In something of a surprise move, Singularitarian Transhumanist Robot Cultist Michael Anissimov has declared victory. Apparently, the superlative futurologists have "won." The Robot Cult, it would seem, has prevailed over the ends of the earth.

Usually, when palpable losers declare victory in this manner, the declaration is followed by an exit, either graceful or grumbling, from the stage. But I suspect we will not be so lucky when it comes to Anissimov and his fellow victorious would-be techno-transcendentalizers.

Neither can we expect them "to take their toys and go home," as is usual in such scenes. After all, none of their toys -- none of their shiny robot bodies, none of their sentient devices, none of their immortality pills, none of their immersive holodecks, none of their desktop nanofactories, none of their utility fogs, none of their comic book body or brain enhancement packages, none of their kindly or vengeful superintelligent postbiological Robot Gods -- none of them exist now for them to go home with any more than they ever did, they exist only as they always have done, as wish-fulfillment fancies in their own minds.

You can read the whole thing at Dale's blog.

Filed under: transhumanism 37 Comments
15Jun/11

“How to Pitch Articles” Now on H+ Magazine Website

My article on how to pitch articles to H+ magazine has been slightly improved and is now posted on H+ magazine.

Topics to inspire you:

  • How can the transhumanist philosophy be applied to daily life?
  • Quantified Self topics
  • Is change actually accelerating? If so, what is the evidence?
  • What technologies pose major risks and why?
  • What are the next steps for robotics and AI?
  • What is happening in genomics?
  • What is the future of energy?
  • Is culture getting friendlier to the future?
  • What will the year 2020 be like?
  • What will the year 2030 be like?
  • What will the year 2050 be like?
  • What will the year 2100 be like?
  • Book reviews (Robopocalypse)
  • Movie reviews (Limitless)
  • Conference/event reviews
  • Cool new businesses and initiatives in the transhumanist space
  • Philosophical issues
  • Other cultural commentary
  • Space, space stations, spaceships, satellites, planetary colonization
  • Topics similar to content in Scientific American and Popular Mechanics

Send your pitch ideas to editor@hplusmagazine.com. I look forward to seeing your ideas!

Filed under: meta, transhumanism 8 Comments
11Jun/11

How to Pitch Articles to H+ Magazine

I'm the new Managing Editor at H+ magazine, which in practical terms means I need to come up with five good articles a week to publish. The magazine gets a lot of traffic so it's a good place to share information with other transhumanists.

1. Come up with an idea or coverage of a company/product/news story worth covering. Ideally you have had personal experience with the company/product/news story and are uniquely suited to write about it. If not, you should be ready to quote someone who has.

2. Send the pitch to editor@hplusmagazine.com. That goes into my inbox. Include links to samples of your other writing. (If you want to write articles for H+ magazine but haven't written serious blog posts yet, you might want to try that first.)

3. If you get the go-ahead, investigate the story, get a quote from an expert in the area you're writing about. Take notes. The article should primarily be reporting, not speculation or personal opinion. Editorials are welcome but harder to write than straightforward informative articles. If you do want to insert a little speculation, save it for the end.

4. Write the article. Between 500 and 1000 words is ideal. The less experienced you are at writing, the shorter and more concise it should be. Follow Singularity writing advice. Omit needless words. Remember the Most Important Writing Rule. Most likely, what you write will be boring not because you're stupid, but because you aren't bending over backwards far enough to please the audience. Make each sentence matter.

5. Use the inverted pyramid structure that is common to all news and magazine articles. The five Ws come first: who, what, where, when, and why -- and sometimes how. Then come the most important details of your story. Why should we care? That should be answered within two or three sentences of the beginning. Why is reading this article worth the reader's precious time? Why should I read this article in my free time instead of going hiking, visiting the beach, or reading something better-written? If your idea isn't good enough to occupy the reader's time, don't bother.

That's it! Follow these simple guidelines, and your article will be accepted and you will become famous overnight. Within the transhumanist community, anyway. :)

10Jun/11

Humanity+ Summer Fundraiser

Humanity+, which used to be known by the more descriptive (but less concise and media-friendly) name of the World Transhumanist Association, is running a fundraiser this summer:

Thanks to a generous matching grant by the Life Extension Foundation and other major donors, if we raise $15,000 independently, we will secure a total of $30,000 in funding for Humanity+ this summer, enabling the organization to shift into a higher gear. Any gift you make to Humanity+ will be matched dollar-for-dollar until July 31st.

Donate today!

http://www.humanityplus.org/match

Filed under: transhumanism 1 Comment
9Jun/11

Thanks for Adding Yourself to the Map

Thanks to everyone who is participating in the transhumanist collaborative map project. After just six days, we have almost 100 pins on the map and over 20,000 views. I see that many people in the Bay Area and New York are being shy and not adding themselves...

Be sure to pass the link around to your friends who are transhumanists, so we can build a better picture of the movement worldwide. This is a unique and foresightful group! We should learn a little more about one another.

Filed under: transhumanism 3 Comments
8Jun/11

Steve Wozniak a Singularitarian?

Wozniak:

Apple co-founder Steve Wozniak has seen so many stunning technological advances that he believes a day will come when computers and humans become virtually equal but with machines having a slight advantage on intelligence.

Speaking at a business summit held at the Gold Coast on Friday, the once co-equal of Steve Jobs in Apple Computers told his Australian audience that the world is nearing the likelihood that computer brains will equal the cerebral prowess of humans.

When that time comes, Wozniak said that humans will generally withdraw into a life where they will be pampered into a system almost perfected by machines, serving their whims and effectively reducing the average men and women into human pets.

Widely regarded as one of the innovators of personal computing with his works on putting together the initial hardware offerings of Apple, Wozniak declared to his audience that "we're already creating the superior beings, I think we lost the battle to the machines long ago."

I always think of this guy when I go by Woz Way in San Jose.

So, if artificial intelligence can become smarter than humans, shouldn't we be concerned about maximizing the probability of a positive outcome, instead of just saying that AI will definitely do X and that there's nothing we can do about it, or engaging in some juvenile fantasy that we humans can directly control all AIs forever? (We can indirectly "control" AI by setting its initial conditions favorably; that is all we can do. The alternative is to ignore the initial conditions.)

3Jun/11

Collaborative Map of Transhumanists Worldwide

Updating this map is a little tricky: you have to be invited as a collaborator by someone who already is one. If you know someone already on the map, you can ask them for an invite; otherwise, fill in your email address in the form below. You can then invite anyone else to collaborate -- you just need their email address. I promise I won't sell it to spammers; this list is only for adding people to the map.




View Transhumanists Worldwide in a larger map

24May/11

How Can I Incorporate Transhumanism Into My Daily Life?

Transhumanism has been defined as the use of science and technology to improve the human condition, and the aspiration to go beyond what is traditionally defined as human, but it can be something broader: rational self-improvement that ignores the boundaries set as typical. There's a lot of "self-improvement" out there, and a fair amount of promoting rationalism in debate and analysis, but these don't always come together. For instance, a highly rational individual might spend their entire day in front of a computer, neglecting exercise and failing to take advantage of a huge category of potential self-improvement. Conversely, someone preoccupied with "self-improvement" might believe in trendy, nonsensical ideas about self-improvement that don't actually work.

People usually start off in life with a certain set of aptitudes, such as brains, social skills, strength, or looks. A fun way of embracing life is to try to maximize these qualities no matter where you start out. Even though I tend to fall on the "nature" side of the nature-nurture debate, I still think there is a tremendous amount that can be done to improve shortcomings that people make excuses to avoid improving. Social skills are one example -- several transhumanist friends of mine have remarked on how they used to be socially inept and are now clearly extremely comfortable in social situations, because they made simple choices, like joining a rationalist community or a debate team.

This broader transhumanism means feeling personally obliged to improve yourself, both for your own benefit and for those around you. Let me focus a little bit on those around you, because there's been so much discussion on improving for yourself. Many groups and communities are only as strong as the average of their weakest members, due to aggregation effects that can be hard to explain. That's why an effective team working towards a goal needs to have every member be disciplined; one undisciplined member can be a thread that unravels the whole tapestry. When you neglect your physical appearance, your social skills, or your intellectual standards, you don't just hurt yourself, but those around you. Of course, no one can be perfect. The point is not to be perfect, but to at least try to improve, and put your ego aside to the extent that you are willing to accept criticism from others, sometimes even so-called "unconstructive" criticism. "Unconstructive" criticism tends to contain a grain of truth that can be the seed for future self-improvement.

Because the body is the seat of the mind, and the human animal's mind is deeply interconnected with its body, the first priority of self-improvement should be a healthy lifestyle. Being overweight is linked to anxiety and depression. Exercise is connected to positive mood, self-esteem, and restful sleep in dozens of studies. Rigorous exercise, rather than lazy shortcuts, leads to real benefits. It's not really a question of time -- tremendous benefits can be gained by exercising rigorously for as little as 30 minutes a couple of times a week. No one is too busy to exercise. A transhumanist who professes interest in transcending the human but is too lazy to exercise is like a Christian who is too lazy to pray or attend church -- a lemming attaching themselves to a social label rather than someone who can live up to the ideas they value. You have the tools to improve yourself now -- take advantage of them! Don't sit around for decades waiting for a pill to solve all your problems. If you aren't active yet, start thinking of yourself as the type of person who should be active, and behavior will follow.

After making a commitment to improving the body, you should improve your mind. Intellectuals should be expected to have a book in their queue pretty much perpetually. Books are quite cheap, and there is so much to learn that anyone not reading is someone who is neglecting their intellectual curiosity. Articles on websites tend to be short and emotionally charged, not the kind of careful analysis or inspired literature that exists in books. Reading quippy front-page articles on Reddit or Digg is not a good cornerstone for a balanced intellectual life. Don't even get me started on television. I'm not saying that people shouldn't get information from diverse sources, but that the true foundation of intellectualism is, and has always been, books. "Infotainment" like the Colbert Report is just entertainment.

After you get your information, you have to process it properly. Be aware of cognitive biases. Never trust anything you think the first time. The greatest enemy of rationality is not the church, or the mainstream media, or the Republicans/Democrats, but your own brain. A true rationalist can be exposed to the most idiotic information sources and still extract useful evidence and insights by applying their own frame to the facts, rather than using the framing of the presenter. A rationalist does not get emotional while arguing, because nine times out of ten, emotions get in the way of proper analysis. Do a cold, clean analysis first, then, maybe a few hours or days later, you can start indulging in the emotions that flow from true beliefs. Maybe it's even best never to get emotional at all. Emotions are fast-and-frugal heuristics for processing information, far inferior to dispassionate analysis. I like to get emotional about issues that aren't really important, like my favorite songs or games. For those issues that really do matter, like geopolitics, social psychology, philosophy, and science, I try to keep emotions to a minimum.

Don't be so sensitive. We are all idiots in comparison to what is possible. Human beings are just monkeys, a node on the chain of being. One day in the not too distant future, minds will be created that put all of our best to shame. Don't worship the human spirit as if it were a god. The human spirit is nice, but it has plenty of flaws. People are balanced when they are slightly skeptical about everything by default, not when they embrace everything by default. Remember that skepticism triggered the Enlightenment, and if it weren't for skepticism, we would probably still be in the Dark Ages. Praise people who are skeptical of your ideas in good faith, don't discourage them.

Improving ourselves is not easy. That the definition of "improvement" itself has many subjective elements is part of the challenge, though many types of improvements tend to be self-evident in retrospect. The hardest part of improvement may be the willingness to make yourself vulnerable to criticism from others. All of us have our downfalls -- we're overweight, lazy, irresponsible, or overconfident. To some degree, I am all of these things. I'll bet most of you are too. Since everyone tends to have weaknesses, the idea is not to eliminate all weakness, or achieve some social standard of competence and then give up, but to whittle away at your weaknesses and reap the benefits from incremental gains. That's what transhumanism is -- slow improvement, using the best tools at our disposal. Never giving up, and never saying we've done enough. There is always more to do -- more to read, more to learn, more to say, and more to act on. Go out and do it.

Filed under: transhumanism 48 Comments