From some druid site:
Yet die we must. As Sherwin Nuland points out in his book, How We Die, we must die for the sake of our species; if somehow we contrived to live forever, we would quickly overwhelm our environment’s carrying capacity and all perish like lemmings. “Must,” in biological terms, thus carries not only its ordinary meaning of inevitability, but also a sense of appropriateness. Our need for death is personified in Herne the Hunter, sometimes called Cernunnos by the Celts. He is the god of culling, who takes away life for the sake of balance and health in the world.
Note the uncanny resemblance to an Onion article written a year earlier.
Yahzi Coyote has a saying: “All that is necessary to defeat a theologian is to repeat his arguments back to him, changing the word God to any other word.” Likewise, all that is necessary to defeat a bioethicist is to imagine his eyes glowing red and his voice deepening whenever he mentions death, decay, suffering, and necessity.
Update: Oops, I had to remove the video because it messed up the layout. Until I’ve figured out how to make this work, please visit:
Bela Lugosi “Atomic Supermen” Speech in Bride of the Monster
An article on transhumanism appeared in a major Dutch newspaper last November. Here’s a readable machine-translated version. The author, Cees Dekker, is well-known as a legitimate scientist, but notorious as an Intelligent Design advocate. Having IDers disagree with you is like hitting the public relations jackpot. Unfortunately, Grooviness does not permit me to encourage the fallacy of reversed stupidity (or rather, reversed folly).
In Brave New World poverty and war are absent, and yet this world is a dystopia, because the most fundamental thing is missing: humanity, family, belief in God, courage, creativity, art, science – all of that has disappeared.
The question is: do we want a Brave New World?
I will now have to go and ponder this fascinating line of argument.
Sometimes people make an argument along the lines of: “Science has discovered that human intelligence is intrinsically embodied, and therefore 1) you can’t just program an artificial intelligence in a box without a body, and 2) you can’t upload your mind because then it would no longer be embodied”. Usually they insert the word “profound” somewhere.
Let’s start by dealing with 1). First off, just because human intelligence is embodied, do we know that embodiment is a requirement for intelligence? Our bodies and their interaction with their environment happened to be around for evolution to work with in programming our minds, so just the fact that it used them isn’t very informative. To show that embodiment is necessary, you’d have to show that other approaches fail.
Second, I could see embodiment meaning a few different things, and none of them seem very threatening.
- “To make an intelligent mind you need to give it an actual physical body in the actual physical world.” This seems clearly false. If a virtual body in a virtual world has the same structure, it should allow the same intelligence, because intelligence is a structural property. (Something that’s wet in a virtual world with the same structure as the real world is not also wet in the real world, but something that’s intelligent in a virtual world with the same structure as the real world is also intelligent in the real world.)
- “To make an intelligent mind you need to give it something with body-structure in something with world-structure.” I doubt it. (Note, though, that I know nothing.) Anyway, this is compatible with AI-in-a-virtual-world-in-a-box.
- “To make an intelligent mind you need to give it some of the mental features that embodied creatures have, like sensory and motor modalities.” Maybe. Here there’s no conflict with bodiless-AI-in-a-box at all.
Next, the argument from embodiment against uploading. That one sounds confused to me too. If you uploaded my mind somewhere without connecting it to something with a structure much like my body in a sensible 3D world, then my mental life would be so much changed that you could indeed doubt whether I’d still be me. But if you uploaded my mind so that it stayed connected to something with a structure much like my body in a sensible 3D world, the only thing that’s changed — other than the details of the surroundings, which change every time I walk into a different room — is that things that used to be real are now virtual. (I need a better word for “real” here — I think virtual worlds are perfectly real in the philosophical sense.) This is not a difference that leaves me less embodied in any way that affects my psychology. A thing that used to be true of me — “I am being implemented in base-level reality” — is then no longer true of me, but this in itself causes no philosophical problems. Facts about me — where I am, what I perceive, what atoms I’m made of — change all the time. It’s only when my psychology is rewritten or my memories are changed that I need worry about being hurled into an existential crisis.
There’s a simpler way to make the points I made in an earlier post on the concept of “transcendence”: “transcend” is a transitive verb. You have to transcend something specific. You can’t just attain “transcendence”. That’s like attaining “victory”. Victory over what?
Moreover, you can only transcend certain kinds of things. You can transcend limits, but not, e.g., places. With the right technology, you might transcend physical aging, or the need to sleep. But (even apart from technological difficulty) transcending the physical world doesn’t make sense. After you walk from Denmark to Germany, have you transcended Denmark? Probably someone using the word like this means something more specific that’s worth spelling out.
The easiest thing is to just not use the word, loaded as it is with religious baggage.
Neven Sesardic, a philosopher, critiques the movie Gattaca:
Imagine that you are on an intercontinental flight and that immediately after take-off the pilot makes the following announcement: ‘Dear passengers, I hope you will join me in celebrating a wonderful achievement of one of our navigators. His name is Vincent. Vincent’s childhood dream was to become an airplane navigator, but unfortunately he was declared unfit for the job because of his serious heart condition. True, he does occasionally have symptoms of heart disease like shortness of breath and chest pain, yet he is certainly not the kind of person to be deterred from pursuing his dream so easily. Being quite convinced that he is up to the task and that everything would be fine, Vincent decided to falsify his medical records. And indeed, with the clean bill of health readily forged and attached to his application, he smoothly managed to get the plum job and is very proud to take care of your safety today. Can we please get some applause for Vincent’s accomplishment and perseverance in the face of adversity? And, by the way, keep your seat belts tightly fastened during the entire flight.’
I somehow doubt that in such a situation you would clap enthusiastically, or that you would vote for Vincent as the airline employee of the month. I bet that, on the contrary, you would be outraged that he used deception and irresponsibly put other people’s lives at risk in order to achieve his selfish goal. But why then do we react so differently when we are confronted with that other Vincent, the main character in the movie Gattaca (1997), who basically does the same thing? Why do we admire him? I will try to show that this is all the work of silver screen magic.
(via Online Papers in Philosophy)
Edge’s 2008 question asks what people have changed their minds about. As usual many responses are interesting.
Sam Harris’s and Martin Rees’s entries are transhuman-themed. Nick Bostrom’s and Aubrey de Grey’s entries are not.
Consider four possible expectations about where technology will take us:
- Technology will lead to extremely good outcomes (technophile)
- Technology will lead to extremely bad outcomes (technophobe)
- Technology will lead to outcomes that are on the whole neutral (technonormal?)
- Technology will lead to extreme outcomes, either good or bad (technovolatile?)
People tend to assume transhumanists are type 1, when many are in fact type 4. This is one of transhumanism’s major PR problems. On a one-dimensional scale of like/dislike, type 4 won’t even register.
(And yes, strictly speaking, 4 is just a mix of 1 and 2, and the other possible mixes should be options also.)
It would explain a lot if something like this were going on:
- In fiction, when characters with transhuman attributes appear, the focus is on the coolness, scariness, or plot function of their powers. Rather than people with meaningful inner lives, they tend to be just walking lists of impressive stats.
- People form the category of “superhuman” as something belonging in the domain of cheesy fiction, or cheesy elements of non-cheesy fiction.
- They conclude transhumanists want a world where everyone is smarter, stronger, and longer-lived, but at the cost of turning into a cartoon character lacking human-like complexity.
Maybe this isn’t quite true. Some stories explore how real human beings might deal with unusual powers. But in the context of transhumanism, with its emphasis on technology usable by anyone rather than a few favored individuals, these aren’t the first that come to mind. Brave New World is.
Consider the space of all possible human natures. Somewhere in this vast space is human nature as it is. Somewhere else is human nature as it ought to be according to our deepest values. It is not a design criterion of evolution that these two should coincide. How, then, is bioconservatism — by which I mean the idea that, according to human values, actually existing human nature is better than any alternative — anything other than extremely improbable?