Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.

15Nov/09

Toby Ord on BBC for Giving What We Can

A friend and associate of mine, Oxford philosopher Toby Ord, has gained some major coverage on the BBC website. Congratulations, Toby! Toby has pledged 10% of his annual salary, plus any yearly earnings above £20,000, to charities fighting poverty in the developing world. He projects that will amount to about £1M over the course of his career, which he has calculated could save 500,000 years of healthy life.

Toby is participating in what I glibly call "utility war" -- a worldwide war not for money or power, but to achieve the greatest good for the greatest number (positive utility). This could be the war to end all wars. A war we can be pleased to fight.

For more information, see Giving What We Can.

Filed under: ethics, philosophy No Comments
4Nov/09

Can You Give a Drone a Conscience?

The Times has an article up on roboethics. In an ideal world, government authorities would recognize the Friendliness problem in advance and proactively develop Friendly AI with competent researchers, though I currently estimate the chance of that happening in the next 20 years at less than 10%.

Filed under: ethics, robotics 38 Comments
13Jul/09

New Research from Joshua Greene: ‘Neuroimaging suggests that truthfulness requires no act of will for honest people’

Joshua Greene, author of one of the most important papers for understanding the need for Friendly AI, brings us new research:

CAMBRIDGE, Mass. -- A new study of the cognitive processes involved with honesty suggests that truthfulness depends more on absence of temptation than active resistance to temptation.

Using neuroimaging, psychologists looked at the brain activity of people given the chance to gain money dishonestly by lying and found that honest people showed no additional neural activity when telling the truth, implying that extra cognitive processes were not necessary to choose honesty. However, those individuals who behaved dishonestly, even when telling the truth, showed additional activity in brain regions that involve control and attention.

The study is published in Proceedings of the National Academy of Sciences, and was led by Joshua Greene, assistant professor of psychology in the Faculty of Arts and Sciences at Harvard University, along with Joe Paxton, a graduate student in psychology.

"Being honest is not so much a matter of exercising willpower as it is being disposed to behave honestly in a more effortless kind of way," says Greene. "This may not be true for all situations, but it seems to be true for at least this situation."

Read more at Eurekalert.

Filed under: ethics No Comments
20Jun/09

Convo at Sentient Developments on Hedonistic Imperative

Some heavy-hitting philosophers, like David Pearce and Mark Walker, have gotten involved in a discussion in the comments thread of my guest post at Sentient Developments. They are trying to convince Athena Andreadis that having more control over our emotions is a good thing and that we can eliminate pain without becoming drooling zombies. Note that hyperthymic (very happy) people are not only completely functional but also tend to be more creative than people in the middle of the happiness bell curve.

In the thread, people point out the value of pain -- it's evolutionarily useful, and so on. However, I consider it quite likely that we'll find workarounds for all the obstacles that stand in the way of removing it. The vast majority of pain is useless. The minority of pain that is useful could be replaced by automatic "warning signals" that pop up when we would otherwise be feeling useful pain, or by connecting the cause of pain directly to pain-avoidance instincts without the intermediary of conscious pain.

People find that last part really hard to grasp. How could you jerk your hand away from a hot stove in a fraction of a second without feeling pain? The fact of the matter is that, in the end, pretty much any stimulus can be arbitrarily connected to any reaction in a physical system, as long as you have the necessary access and a well-specified description of the stimulus and reaction. In the long term, we'll be able to program ourselves to laugh insanely at the sound of drops of water, or recoil in fear from beavers. There are no magical connections between stimuli, conscious feelings, and instinctual reactions. Evolution built them all from scratch. As our ability to reengineer the human brain increases, we'll gain the ability to reprogram literally anything we want. I think that people will eventually choose to make practically every stimulus result in some shade of happiness -- the question is how to program these "gradients of bliss".
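To put that claim in software terms, here is a toy Python sketch -- purely illustrative, with no pretense of modeling actual neural wiring, and every name in it (STIMULUS_MAP, trigger, the stimulus and reaction labels) is made up for the example. It treats stimulus-reaction bindings as an editable lookup table: the hot-stove withdrawal fires without any "pain" entry, and any binding can be swapped for another.

```python
# Toy illustration (not neuroscience): stimulus-to-reaction bindings as an
# arbitrary, editable mapping. Nothing ties a given stimulus to a given
# reaction except the table itself.

STIMULUS_MAP = {
    # Withdrawal plus a warning signal, with no "feel_pain" entry at all.
    "hot_stove_contact": ["withdraw_hand", "raise_warning_signal"],
    "sound_of_water_drops": ["laugh"],
    "sight_of_beaver": ["recoil_in_fear"],
}

def trigger(stimulus: str) -> list[str]:
    """Return the reactions currently bound to a stimulus."""
    return STIMULUS_MAP.get(stimulus, [])

# Rebinding is just a table edit; neither the stimulus nor the reaction changes.
STIMULUS_MAP["sight_of_beaver"] = ["mild_happiness"]

if __name__ == "__main__":
    print(trigger("hot_stove_contact"))  # ['withdraw_hand', 'raise_warning_signal']
    print(trigger("sight_of_beaver"))    # ['mild_happiness']
```

The point of the toy is only that the mapping lives in a rewritable structure; which bindings are actually wise to rewrite is the real question.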

Nothing in the world is inherently happiness-causing or pain-causing. It's all based on the neural circuitry doing the perceiving. Thinking otherwise is falling prey to the Mind Projection Fallacy, an error we seem programmed to make, but once we realize it's wrong, we ought to drop it forever.

Filed under: ethics 33 Comments