Hanson: Philosophy Kills

Robin Hanson found a skeptical audience in Bryan Caplan when he explained his position on cryonics. (“The more I furrowed my brow, the more earnestly he spoke.”) Caplan said:

What disturbed me was when I realized how low he set his threshold for [cryonics] success. Robin didn’t care about biological survival. He didn’t need his brain implanted in a cloned body. He just wanted his neurons preserved well enough to “upload himself” into a computer. To my mind, it was ridiculously easy to prove that “uploading yourself” isn’t life extension. “An upload is merely a simulation. It wouldn’t be you,” I remarked. …

“Suppose we uploaded you while you were still alive. Are you saying that if someone blew your biological head off with a shotgun, you’d still be alive?!” Robin didn’t even blink: “I’d say that I just got smaller.” … I’d like to think that Robin’s an outlier among cryonics advocates, but in my experience, he’s perfectly typical. Fascination with technology crowds out not just philosophy of mind, but common sense.

Hanson responded with …

Read More

Greg Fish: Against Causal Functionalism

Greg Fish, a science writer with a popular blog who contributes to places like Business Week and Discovery News, has lately been advancing a Searleian criticism of causal functionalism. For instance, here and here. Here is an excerpt from the latter:

A Computer Brain is Still Just Code

In the future, if we model an entire brain in real time on the level of every neuron, every signal, and every burst of the neurotransmitter, we’ll just end up with a very complex visualization controlled by a complex set of routines and subroutines.

These models could help neurosurgeons by mimicking what would happen during novel brain surgery, or provide ideas for neuroscientists, but they’re not going to become alive or self aware since as far as a computer is concerned, they live as millions of lines of code based on a multitude of formulas and rules. The real chemistry that makes our brains work will be locked in our heads, far away from the circuitry trying to reproduce its results.

Now, if we built a new generation …

Read More

Vague Complexity, Precise Complexity

The word “complexity” is a confusing one. There are two types of complexity — the vague layman’s term, which seems to mean something like a great chain of being (“the more like us humans it is, the more complex it must be”), and the precise mathematical term, Kolmogorov complexity, which measures the computational resources needed to specify an object (roughly, the length of the shortest program that produces it). If you are familiar with the latter concept, that’s what you start to think of whenever someone says “complexity”, and people using the layman’s sense of the term start sounding vague and/or confused. People working in AI tend to mean Kolmogorov complexity when they say “complexity”, so if you hang around with people like that for long enough, it gets ingrained into you.
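For reference, here is the standard formal definition, stated loosely: fix a universal Turing machine U, and define

K_U(x) = \min \{\, |p| : U(p) = x \,\}

that is, the length of the shortest program p that makes U output x. Changing the choice of U shifts K_U by at most an additive constant (the invariance theorem), which is why the measure is well-defined up to that constant.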

Since Kolmogorov complexity has a nice mathematical definition, it’s very precise. It turns out that lots of not-so-cool things are really complex, like the structure of bread mold, chaotic fluid eddies, or Hadamard’s billiards. The definition of Kolmogorov complexity is agnostic towards what kind of complexity you mean. A …

Read More

Toby Ord on BBC for Giving What We Can

A friend and associate of mine, Oxford philosopher Toby Ord, has gained some major coverage on the BBC website. Congratulations, Toby! Toby has pledged 10% of his annual salary, plus any yearly earnings above £20,000, to charities fighting poverty in the developing world. He projects that will amount to about £1M over the course of his career, which he has calculated could save 500,000 years of healthy life.
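Taking those figures at face value, the implied ratio is simple arithmetic: £1,000,000 ÷ 500,000 life-years ≈ £2 per year of healthy life.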

Toby is participating in what I glibly call “utility war” — a worldwide war not for money or power, but to achieve the greatest good for the greatest number (positive utility). This could be the war to end all wars. A war we can be pleased to fight.

For more information, see Giving What We Can.

Read More

The Dream-Computer Interface

Several notes on dreams and the dream-computer interface idea.

Dr. J. Allan Hobson, a leading dream researcher at Harvard, is publicizing his hypothesis about dreaming with a press release, “Dreams may have an important physiological function”. According to Dr. Hobson, the function of dreams is physiological — a sort of “mental practice” for the waking state. Hobson said that “dreams represent a parallel consciousness state that is running continuously, but which is normally suppressed while the person is awake”.

If his hypothesis is correct, it has an important implication for rationality. In the mind sciences, people tend to drastically overweight the significance of the ghost or soul — represented by one’s conscious experience and “free will” in various theories. If progress in cognitive science has taught us anything, it is that this ghost is both an illusion and far less significant than we, in our vanity, think it is. Hobson’s theory is that dreaming has a physiological function, an unflattering reflection on pet theories that assign dreaming …

Read More

Analysis of Massimo Pigliucci’s Critique of David Chalmers’ Talk on the Singularity

To follow up on the previous post: I think the critique by Massimo Pigliucci (a philosopher at the City University of New York) of David Chalmers’ Singularity talk does have some good points, but I found his ad hominem arguments so repulsive that it was difficult to bring myself to read past the beginning. I would have the same reaction to a pro-Singularity piece with the same level of introductory ad hominem. (Recall that when I was going after Jacob Albert and Maxwell Barbakow for their ignorant article on the Singularity Summit, I focused on their admission of not understanding any of the talks, and used that as a negative indicator of their intelligence and knowledge; I did not insult their haircuts.) If anything, put the ad hominem arguments at the end, so that they don’t bias people before they’ve read the real objections.

Pigliucci is convinced that Chalmers is a dualist, which is not exactly true — he is a monist with respect to consciousness rather than spacetime and matter. I used to be on …

Read More

The Connection Between Stimuli and Pleasure/Pain is Arbitrary, an Objective Fact that Has Relatively Little to Do with One’s Personal Tech Habits

My thoughts on sex after the Singularity were picked up by a blogger on CNET, Chris Matyszczyk, so I thought I’d react a little bit. He writes:

Indeed, Retrevo’s findings are so disturbing that I wonder whether the roboticists are right to suggest that sex should be a matter of adjusting one’s own chemistry rather than attempting to consort with another human. To wit, in the words of blogger Michael Anissimov, one of the “leading thinkers in the radical tech community” who were invited to pontificate in the lustrous pages of H Plus magazine: “The connection between certain activities and the sensation of pleasure lies entirely in our cognitive architecture, which we will eventually manipulate at will.”

I am haunted by the drastic prognostications by the salivators over The Singularity about the future of sex. Indeed, some words of Anissimov are rattling around my head like those of a particularly angry former lover. Speaking of this beautiful future, he said: “I could make any experience in the world highly pleasurable or highly displeasurable. I could make …

Read More

Future Shock Levels as Point Estimates

My friend and associate Peter de Blanc recently put up an interesting post on how the point-estimate nature of popular futurist predictions reflects a fundamentally non-probabilistic way of thinking about the future and possible future technologies. We tend to think in black-and-white, yes-or-no terms rather than probabilities, because that is easier for us to handle. For instance, most people don’t represent the likelihood of catastrophic climate change as a probability — they tend to think in terms of “it will happen” or “it won’t”. I find myself falling into this way of thinking constantly, and have to exert deliberate effort to preserve a probabilistic frame of mind.
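A toy illustration of the gap (the setup and numbers here are mine, not from Peter’s post):

```python
# Toy comparison: a yes/no point estimate versus a probabilistic belief,
# fed into the same simple decision. All numbers are hypothetical and only
# illustrate the difference in reasoning style.
p_catastrophe = 0.15         # subjective probability of the bad outcome
loss_if_it_happens = 1000    # disutility if it happens (arbitrary units)
cost_of_precaution = 100     # cost of acting now to prevent it

# Point-estimate thinking rounds the belief to yes/no before deciding.
point_estimate_loss = loss_if_it_happens if p_catastrophe >= 0.5 else 0   # -> 0

# Probabilistic thinking carries the uncertainty through to the decision.
expected_loss = p_catastrophe * loss_if_it_happens                        # -> 150.0

# The point-estimate thinker skips the precaution (0 < 100); the
# probabilistic thinker takes it (150 > 100).
print(point_estimate_loss, expected_loss, cost_of_precaution)
```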

Read More

Does the Universe in Fact Contain Almost No Information?

Another dude I met at the Summit who I liked was Singularitarian Max Tegmark. He was a lot taller than I imagined. My favorite paper of his has always been “Does the Universe in Fact Contain Almost No Information?”, which fits in with a theory I came up with independently (and which I’m pretty sure has been postulated elsewhere): that we probably live in the simplest possible universe that can contain conscious entities. Another interesting paper of his, from 2007, is “Shut up and calculate”, which explores Max’s concept of a “Level IV” universe that contains every mathematically possible structure.

Read More

Risks with Low Probabilities and High Stakes

One of the people I met at the Summit who I got along with was Toby Ord. Toby is the mind behind Giving What We Can. I’ve looked at his website and papers before, but now I’m back for more. You can read along by checking out “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes”, a paper by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg. Here is a random quote:

Flawed arguments are not rare. One way to estimate the frequency of major flaws in academic papers is to look at the proportion which are formally retracted after publication. While some retractions are due to misconduct, most are due to unintentional errors. Using the MEDLINE database, (Cokol, Iossifov et al. 2007) found a raw retraction rate of 6.3 ⋅ 10⁻⁵, but used a statistical model to estimate that the retraction rate would actually be between 0.001 and 0.01 if all journals received the same level of scrutiny as those in the top tier. This would suggest that P(¬A) > …
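The broader point, as I understand it: no matter how small a probability a safety argument assigns to disaster, your all-things-considered estimate cannot fall much below the chance that the argument itself is flawed, times the probability of disaster given a flawed argument. A minimal sketch with made-up numbers (the variable names and figures are mine, not the paper’s):

```python
# Sketch of the "flawed argument" bound, with purely illustrative numbers.
# A = "the safety argument is sound"; X = "the disaster happens".
p_argument_flawed = 0.001      # P(not A): lower end of the flaw-rate estimate quoted above
p_disaster_if_sound = 1e-12    # P(X | A): what the safety argument itself concludes
p_disaster_if_flawed = 1e-6    # P(X | not A): hypothetical prior if the argument is wrong

p_disaster = ((1 - p_argument_flawed) * p_disaster_if_sound
              + p_argument_flawed * p_disaster_if_flawed)
print(p_disaster)  # ~1e-9, dominated by the flawed-argument term, not by the 1e-12
```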

Read More

Phil Goetz: Exterminating Life is Rational

Phil Goetz has a nice post up at Less Wrong arguing that, eventually, we will inevitably eliminate ourselves as a species unless one of the following things happens:

* We can outrun the danger: We can spread life to other planets, and to other solar systems, and to other galaxies, faster than we can spread destruction.
* Technology will not continue to develop, but will stabilize in a state in which all defensive technologies provide absolute, 100%, fail-safe protection against all offensive technologies.
* People will stop having conflicts.
* Rational agents incorporate the benefits to others into their utility functions.
* Rational agents with long lifespans will protect the future for themselves.
* Utility functions will change so that it is no longer rational for decision-makers to take tiny chances of destroying life for any amount of utility gains.
* Independent agents will cease to exist, or to be free (the Singleton scenario).

He looks at each of these possibilities one by one.
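The arithmetic behind the worry is stark even with tiny per-decision risks; here is a toy calculation with numbers of my own choosing, not Goetz’s:

```python
# Toy calculation: if each risky technological decision carries even a tiny,
# independent chance of wiping out life, cumulative survival odds still decay
# toward zero as the number of decisions grows. Numbers are purely illustrative.
per_decision_risk = 1e-4   # hypothetical chance of existential catastrophe per decision

for n in (10**3, 10**4, 10**5):
    survival = (1 - per_decision_risk) ** n
    print(f"after {n:>7,} decisions: P(still here) ~ {survival:.4f}")

# after   1,000 decisions: P(still here) ~ 0.9048
# after  10,000 decisions: P(still here) ~ 0.3679
# after 100,000 decisions: P(still here) ~ 0.0000
```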

Read More

Bad News for Conservatives

From EurekAlert: “Easily grossed out? You might be a conservative!” Excerpt:

Liberals and conservatives disagree about whether disgust has a valid place in making moral judgments, Pizarro noted. Conservatives have argued that there is inherent wisdom in repugnance; that feeling disgusted about something — gay sex between consenting adults, for example — is cause enough to judge it wrong or immoral, even lacking a concrete reason. Liberals tend to disagree, and are more likely to base judgments on whether an action or a thing causes actual harm.

Actual harm — what a wild concept!

Just to save this post from turning into a political flame war, let me point out that there are certainly good things about some forms of “conservatism”, such as capitalism, but when it comes to moral reasoning among some conservatives, I am frequently disturbed.

Read More