Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

24Mar/09

Are Emotions Necessary for General Intelligence?

From where I'm standing philosophically, the answer is "obviously not, our particular emotions are contingent aspects of human intelligence which exist for specific evolutionary reasons". In trying to find more evidence for this opinion, I found this Wikipedia article on alexithymia:

Alexithymia, literally "without words for emotions" is a term coined by psychotherapist Peter Sifneos in 1973 to describe a state of deficiency in understanding, processing, or describing emotions.

The formal definition is here:

1. difficulty identifying feelings and distinguishing between feelings and the bodily sensations of emotional arousal
2. difficulty describing feelings to other people
3. constricted imaginal processes, as evidenced by a paucity of fantasies
4. a stimulus-bound, externally oriented cognitive style.

Continuing down the page, it looks like some pop-psychology BS is being deployed to define this category, but could it still be valid? If someone really has "no emotions", it seems like they'd need some genetic disorder that somehow suspended functions in parts of the limbic system without killing them entirely. It seems possible to me that alexithymics just have lower emotional arousal than others, rather than a genuine absence of all types of emotion -- but who knows?

Some initially pop-psych-sounding categories do end up seeming legitimate. Take psychopaths, for instance, who truly do have different neural activation patterns than normal people when exposed to things like violence.

Is anyone else familiar with alexithymia or any empirical research on it?

Filed under: brain, philosophy 62 Comments
24Mar/09

NYT: The Conficker Worm: April Fool’s Joke or Unthinkable Disaster?

Interesting article at the NYT about the Conficker worm:

Given the sophisticated nature of the worm, the question remains: What is the purpose of Conficker, which could possibly become the world's most powerful parallel computer on April 1? That is when the worm will generate 50,000 domain names and systematically try to communicate with each one. The authors then only need to register one of the domain names in order to take control of the millions of zombie computers that have been created.
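To make the mechanism concrete, here is a minimal sketch of a date-seeded domain generation algorithm of the general kind described above. The hashing scheme, character mapping, and TLD are invented for illustration; this is not Conficker's actual algorithm.

```python
# Illustrative sketch of a date-seeded domain generation algorithm (DGA).
# NOTE: the seed format, alphabet, and TLD here are invented for illustration;
# this is NOT Conficker's actual algorithm.
import hashlib
from datetime import date

def generate_domains(day: date, count: int = 50000) -> list[str]:
    """Derive a deterministic list of pseudo-random domain names from the date."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # Map hex digits to letters to form a plausible-looking hostname.
        name = "".join(chr(ord('a') + int(c, 16) % 26) for c in digest[:10])
        domains.append(name + ".com")
    return domains

# Every infected machine computes the same list for April 1 and polls each name;
# the botnet's authors need to register only one of them to issue commands.
domains = generate_domains(date(2009, 4, 1))
print(domains[:5])
```

The key property is that every infected machine can compute the same list independently, so no fixed rendezvous server has to exist in advance for defenders to shut down.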

One fictional example of a massive cyber-disaster I've heard of is Pluto's Kiss, from the anime .hack, though I haven't actually seen the show.

Massive Botnet: coming to a computer near you April 1, 2009.

Filed under: technology No Comments
24Mar/09

PGD-IVF Would Lead to Designer Babies

George Dvorsky at Sentient Developments points us to an op-ed at New Scientist titled "Fears over 'designer' babies leave children suffering". The author writes:

Such fears are misplaced: IVF-PGD is little use for creating designer babies. You cannot select for traits the parents don't have, and the scope for choosing specific traits is very limited. What IVF-PGD is good for is ensuring children do not end up with disastrous genetic disorders.

I, along with dozens of prominent scientists in the field, disagree -- IVF-PGD would be useful for creating designer babies. Would would would. To bolster his position, the author links to another New Scientist article (one that he probably edited, being the magazine's biology features editor), which seems to contradict him:

Part of the problem is that only one or two cells are available for screening. Until recently this greatly restricted the tests that could be done. However, new ways of amplifying DNA are making it possible to do hundreds of tests. That means clinics will be able to screen for a much wider range of harmful mutations - and for desirable variants too.

Only one paragraph that I can find appears to support the op-ed author's idea that IVF-PGD couldn't be used for designer babies:

How much further can selection go? What of that object of tabloid hysteria, the "designer baby"? Will we one day be able to ask for a tall, musical, blue-eyed boy or a dark-haired girl? Even if regulatory authorities allow us to use PGD to select desirable gene variants, there are major snags. For starters, IVF typically generates fewer than 10 embryos per cycle. This means parental choice will be very limited. "I don't think anyone in their right mind would ever go through IVF to select the hair colour of their offspring," says Yuri Verlinsky of the Reproductive Genetics Institute, Chicago, one of the pioneers of PGD.

This Verlinsky quote is really confusing. Elsewhere, Verlinsky has been quoted as saying that PGD-IVF could lead to a "disease-free society" (a sloppy way of saying a "genetic-disease-free society"), yet here he claims that people won't use it to choose the hair color of their offspring. The quote doesn't make it clear whether he's stating an opinion or describing a technical limitation. Also, the author of that (non-op-ed) piece seriously breaks journalistic neutrality by calling designer babies an "object of tabloid hysteria" when many prominent scientists in IVF take the idea seriously. It's as if the contributors at New Scientist are on misguided vigilante missions to make emerging technologies sound more palatable to the mainstream.

In any case, the limitation on the number of blastocysts can be circumvented using multi-generational in vitro embryo selection, which Verlinsky should have already considered; if he hasn't, he has tunnel vision. So he either 1) is scientifically uncreative in his own field, or 2) knows that more advanced PGD-IVF could be used for designer babies and just wants to keep that quiet so the public accepts the technology incrementally, like a crab boiled in water that heats only slowly.
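To illustrate why the small-batch objection is weaker than it sounds, here is a toy order-statistics simulation of iterated selection on a normally distributed trait score. The batch size, number of generations, and noise model are arbitrary assumptions made for illustration only; they are not a model of real PGD-IVF outcomes.

```python
# Toy model of iterated (multi-generational) selection on a polygenic trait
# score. All numbers (batch size, generations, spread) are arbitrary
# illustrative assumptions, not a claim about real PGD-IVF.
import random

def select_best(parent_mean: float, batch: int = 8, sd: float = 1.0) -> float:
    """Generate a batch of scores around the parental mean, keep the top one."""
    scores = [random.gauss(parent_mean, sd) for _ in range(batch)]
    return max(scores)

random.seed(0)
mean = 0.0  # trait score in standard-deviation units
for generation in range(1, 6):
    mean = select_best(mean)
    print(f"generation {generation}: best score = {mean:+.2f} SD")
# Each round keeps the best of a small batch, so the expected gain compounds
# across generations instead of being capped by what one cycle can offer.
```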

In general, I think the op-ed is a shoddy example of memetic engineering -- the author is trying to distract attention away from the designer baby controversy to help promote PGD-IVF for eliminating genetic diseases. Good motive, but somewhat dishonest, because I doubt that even the author believes that PGD-IVF would be useless for designer babies.

Speaking of "designer babies", I hate the term. As James Hughes said in WIRED, "the term "designer babies" is an insult to parents, because it basically says parents don't have their kids' best interests at heart". How about just "PGD-IVF babies", a non-catchy term, because it shouldn't become catchy and be used to discriminate against children born using the tech or parents who decide to use it? This would be in the same vein of Aubrey calling his project "Strategies for Engineered Negligible Senescence" to make it deliberately difficult to misunderstand. The project of SIAI could become, "the Engineering of a Human-Values Reflective Optimizing Process".

Also, we must remember that New Scientist lacks credibility. Instead of reading New Scientist, how about PhysOrg and Eurekalert? Or Next Big Future?

Either way, the whole issue matters not, because designer babies are largely irrelevant and will be eclipsed by things like strongly self-improving superintelligence and molecular manufacturing. See "Evolution by Choice" by Mitchell Howe.

Filed under: biology 75 Comments
23Mar/09

Slashdot is Dead

Every so often, I run into someone who thinks that Slashdot still matters. Just FYI, Digg overtook Slashdot over three years ago, and for Slashdot, it's been downhill from there. Look at the current picture on Alexa:

From the perspective of sites like Digg, Slashdot barely exists. It's so bad that in January, this very website practically surpassed the traffic of Slashdot, at least according to Alexa. And you can't get much more pathetic than that.

(Update: a commenter points out what's already been pointed out many times, namely that Alexa isn't perfect, then tries to defend Slashdot on that basis, which is like a flashback to 2006. But the alternate tracking service he cites, Compete, still shows Digg as more than 40 times more popular than Slashdot, which proves my point that Slashdot is dead, and has been for a long time.)

Filed under: technology 19 Comments
23Mar/09

H+ Magazine Website is Online

The people at H+ made a cool website to go along with their magazine. Check it out:

http://hplusmagazine.com

The editor, R.U. Sirius, has promised daily updates. Looking in the articles section, I see a summary of the recent AGI-09 conference by Ben Goertzel, a person whose sheer textual output probably exceeds that of 100 typical transhumanists combined.

There is a community section:

http://hpluscommunity.com

R.U. Sirius has a blog with one amusing post on Flash magazines. I can identify with all the points he makes.

Love the opening line. The slightly jokey/condescending tone throughout the article indicates something... what could it be? Personality.

Filed under: transhumanism 8 Comments
23Mar/09

The Terrible, Horrible, No Good, Very Bad Truth About Morality

I just finished Joshua Greene's paper, "The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It", and loved it. It's the best paper I've read since Omohundro's "Basic AI Drives". If people just read and understood both of those papers, far more of them would probably drop everything they're doing and do something about unfriendly AI. I hope that SIAI can build some sort of relationship with Greene, or get him to say a few words on how moral revisionism (in the sense of moving past moral realism) applies to AI morality.

Though the first half of Greene's essay was the most intellectually serious and useful, the latter half was more amusing and interesting. I want to comment on it, but I don't want to ruin it by letting the one part I mention stand in for the whole essay, so I'm just going to post the quote he concludes with:

"If only it were so simple! If only there were evil people somewhere
insidiously committing evil deeds, and it were necessary only to separate
them from the rest of us and destroy them. But the line dividing good and
evil cuts through the heart of every human being. And who is willing to
destroy a piece of his own heart?"

-- Aleksandr Solzhenitsyn

Filed under: philosophy 41 Comments
18Mar/09

Science Desktop

Filed under: images, science 68 Comments
17Mar/09

Study gives more proof that intelligence is largely inherited

From PhysOrg:

They say a picture tells a thousand stories, but can it also tell how smart you are? Actually, say UCLA researchers, it can.

In a study published in the Journal of Neuroscience Feb. 18, UCLA neurology professor Paul Thompson and colleagues used a new type of brain-imaging scanner to show that intelligence is strongly influenced by the quality of the brain's axons, or wiring that sends signals throughout the brain. The faster the signaling, the faster the brain processes information. And since the integrity of the brain's wiring is influenced by genes, the genes we inherit play a far greater role in intelligence than was previously thought.

Genes appear to influence intelligence by determining how well nerve axons are encased in myelin — the fatty sheath of "insulation" that coats our axons and allows for fast signaling bursts in our brains. The thicker the myelin, the faster the nerve impulses.

Thompson and his colleagues scanned the brains of 23 sets of identical twins and 23 sets of fraternal twins. Since identical twins share the same genes while fraternal twins share about half their genes, the researchers were able to compare each group to show that myelin integrity was determined genetically in many parts of the brain that are key for intelligence. These include the parietal lobes, which are responsible for spatial reasoning, visual processing and logic, and the corpus callosum, which pulls together information from both sides of the body.

Continue.
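For readers wondering how twin designs yield heritability estimates in the first place, here is a minimal sketch using Falconer's classic formula, h² ≈ 2(r_MZ − r_DZ). The correlation values below are invented placeholders; the study itself used more sophisticated modeling of the imaging data than this.

```python
# Minimal sketch of Falconer's formula for estimating heritability from twin
# correlations: h^2 ~ 2 * (r_MZ - r_DZ). The correlations below are invented
# placeholders, not figures from the UCLA study.

def falconer_heritability(r_identical: float, r_fraternal: float) -> float:
    """Identical twins share ~100% of their genes, fraternal twins ~50%, so
    doubling the difference in trait correlation gives a rough heritability."""
    h_squared = 2 * (r_identical - r_fraternal)
    return max(0.0, min(1.0, h_squared))  # clamp to the [0, 1] range

# Hypothetical co-twin correlations of a myelin-integrity measure:
print(falconer_heritability(r_identical=0.85, r_fraternal=0.50))  # -> 0.70
```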

The press release mentions the possibility of enhancing human intelligence:

And could this someday lead to a therapy that could make us smarter, enhancing our intelligence?

"It's a long way off but within the realm of the possible," Thompson said.

I guess it's becoming more acceptable for mainstream scientists to talk about human intelligence enhancement. I wonder whether the increasing profile of transhumanism has anything to do with that, or whether it's driven by more of these ideas appearing in fiction, or something else.

Filed under: intelligence 8 Comments
16Mar/09

The Future is Now: Scientists Build Anti-Mosquito Laser

I've wanted this to be developed for a long time. From PhysOrg:

(PhysOrg.com) -- In an effort to prevent the spread of malaria, scientists have built a laser that shoots and kills mosquitoes. Malaria, which is caused by a parasite and transmitted by mosquitoes, kills about 1 million people every year.

The anti-mosquito laser was originally introduced by astrophysicist Lowell Wood in the early 1980s, but the idea never took off. More recently, former Microsoft executive Nathan Myhrvold revived the laser idea when Bill Gates asked him to explore new ways of combating malaria.

Now, astrophysicist Jordin Kare from the Lawrence Livermore National Laboratory, Wood, Myhrvold, and other experts have developed a handheld laser that can locate individual mosquitoes and kill them one by one. The developers hope that the technology might be used to create a laser barrier around a house or village that could kill or blind the insects. Alternatively, flying drones equipped with anti-mosquito lasers could track the insects with radar and then sweep the sky with the laser.

The researchers are tuning the strength of the laser so that it kills mosquitoes without harming other insects or, especially, people. The system can even distinguish between males and females by the frequency of their wing movements, which may be important since only females spread the parasite.
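Distinguishing males from females by wingbeat frequency is essentially a signal-processing problem. Here is a rough sketch of how such a classifier might work on a short sample of the optical or acoustic signal; the frequency bands are placeholder assumptions, not the system's actual values.

```python
# Rough sketch of classifying a mosquito by wingbeat frequency from a short
# signal sample (e.g. reflected light intensity). The frequency bands below
# are placeholder assumptions for illustration, not the real system's values.
import numpy as np

def dominant_frequency(signal: np.ndarray, sample_rate: float) -> float:
    """Return the strongest non-DC frequency component via an FFT."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    spectrum[0] = 0.0  # ignore the DC component
    return float(freqs[np.argmax(spectrum)])

def classify(signal: np.ndarray, sample_rate: float = 8000.0) -> str:
    f = dominant_frequency(signal, sample_rate)
    if 350.0 <= f < 550.0:   # hypothetical female wingbeat band
        return "female (target)"
    if 550.0 <= f < 800.0:   # hypothetical male wingbeat band
        return "male (ignore)"
    return "not a mosquito"

# Synthetic test: a 450 Hz "wingbeat" tone sampled for 50 ms.
t = np.arange(0, 0.05, 1.0 / 8000.0)
print(classify(np.sin(2 * np.pi * 450.0 * t)))  # -> female (target)
```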

Question: would it really be so bad if we made mosquitoes go extinct? Or tsetse flies, an organism that sounds like it came straight out of Hell?

Filed under: technology 13 Comments
16Mar/09

Generally Negative Review of Life Extensionists on Time.com

Any coverage is good coverage, right? The answer is no. Negative coverage of life extension can spread stereotypes that cause funding sources to become more squeamish about contributing to groups like the Methuselah Foundation.

On Time.com, a feature "10 Ideas Changing the World Right Now" includes "Amortality" as one of its ideas, saying that "The defining characteristic of amortality is to live in the same way, at the same pitch, doing and consuming much the same things, from late teens right up until death." The journalist also writes, "They're a highly sexed bunch", and includes Nicolas Sarkozy, Madonna, and Mark Zuckerberg among their number.

You might read the article and say it's not too negative, but it is negative. The general feel I got was that life extensionists are irresponsible and deluded about their age. This is the same reaction that we're used to hearing, but these arguments are even more dangerous because they can appeal to moderates and liberals, unlike the arguments of Leon Kass for instance, which appeal only to religious conservatives.

The impetus to extend life need not be based on denialism and youth fetishism (it can be, especially among the millions of people who buy "anti-aging" creams, but this is less common in the more mature life-extension advocacy organizations grounded in humanitarian goals). It can be based on transhumanism as simplified humanism:

"As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you're done. No special cases. You don't have to ask anyone's age."

15Mar/09

Will Any Sufficiently Intelligent AI Designer See the Necessity of Friendliness Theory?

As I'm working my way through the Greene dissertation (currently on page 163; thanks to Roko for originally pointing it out to me via his blog), I feel myself getting more optimistic about one thing: that any sufficiently intelligent human will figure out that morality doesn't generate itself automatically in the absence of very sophisticated and specific cognitive hardware. This is wonderful, because it lessens the probability of unFriendly AI, that perennial Singularitarian bugaboo.

In the essay, Greene points to three things: psychopaths, Phineas Gage, and similar patients with damage to their ventromedial frontal lobes, which are apparently indispensable to generating moral evaluations and feelings. They all display significant intelligence but lack the ability to have strong feelings about right and wrong, or to distinguish between moral rules and conventional rules. This strongly suggests that intelligence is independent of morality. These findings were buttressed by fMRI studies Greene did on criminal psychopaths.

Anyone who philosophizes enough will figure out that morality is in the human mind, not out in the world. Anyone who does fMRI brain scans on psychopaths and normal people, or is aware of such work, will realize the same thing. I am also hopeful that anyone who produces a sufficiently well-developed AI design will realize the same thing. In the best-case scenario, unFriendly AI simply doesn't happen because people realize what they're doing before they finish the task. Unfortunately, this is wildly overoptimistic, because merely recognizing the need for a complex and subtle morality is not enough to build one.

Continuing on my optimistic tangent, I am hopeful that anyone building human-equivalent artificial intelligence will at least make a token effort to instill it with some sort of top-level morality. There is little danger of this being Asimov's laws, because Asimov's laws are not descriptive enough to actually set an AI or robot in motion; any attempt to build a goal system based on them will have to introduce a great deal more complexity to produce anything useful. If this morality is then applied to test cases (and one hopes it would be, if the AI in question has any significant responsibility), it will quickly become apparent that morality is not a free lunch and that the AI fails miserably on many moral tests.

If pre-human-equivalent AI does significant damage merely by accident (an issue I am exploring at length by reading Moral Machines by Wendell Wallach and Colin Allen), then a hands-off approach to AI morality will get a bad reputation, and there will be a strong profit motive to produce genuinely Friendly AI. Another obvious pitfall here is that the programmers craft a morality sufficient for a non-self-improving human-level AI but which fails catastrophically and fatally (to us) when the AI improves itself. But at least it would fail for that reason rather than because the programmers made an even dumber error.

Basically, this post is an exercise in optimism -- I spend much time on this blog wringing my hands so vigorously that one might think the skin will be sloughed right off. But it's Sunday, so here I am saying, "what if... [optimistic scenario]?"

Going even further back than Greene, who published his dissertation only in 2002, we have Hume, who said, "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger", showing he was completely aware of the reason/morality distinction; in fact, he is the premier historical philosopher in favor of this view. It seems to be in the interest of those concerned about unFriendly AI to spread the moral philosophy of Hume as much as possible. Ditto with moral sense theory, a non-cognitivist (non-realist) theory that grounds morality in complex sentiments and emotions. The seeds of the idea that Friendliness theory (the challenging study of how to create a Friendly AI) is necessary are planted right there.

12Mar/09

Technological Singularity/Superintelligence/Friendly AI Concerns

To make everything as open and obvious as possible, I created a small boxes-connected-by-arrows chart to explain my beliefs on what the Singularity is about and what mankind should do about it:

We can call these nodes 1 through 8, reading left-to-right from the top.

What observations can we make right away? Well, it's interesting how all the ideas at the top are relatively non-mainstream, non-widespread, and controversial, and none of them are interdependent. You can have a hard takeoff without superintelligence, for instance, and seed AI without any of the other boxes. You can argue for or against any one of these boxes as a profession (if you're a tenured philosopher), or just as a hobby.

Say we annihilate the box that says seed AI is likely before 2030. That partially ameliorates my concern/worry, but not really, because then I still have to worry about self-bootstrapping BCI-augmented humans and/or uploads.

However, there is one box that does contribute a lot to the concern/worry, and that's the far right one, Box 4. In my original vision, there was no box 4, and there was no worry. I believed that any sufficiently intelligent agent would become friendly, discovering the "objective truth about morality". That's the present position of Peter Voss, unless I'm mistaken.

After what seemed like forever, the big picture in Box 4 was presented on Overcoming Bias in late January, but the pieces of this view have been floating around for over a decade. I forget how I picked it up originally, but I know that reading How the Mind Works by Steven Pinker helped. A particularly good presentation is given by Joshua Greene, director of the Moral Cognition Lab at Harvard, in his doctoral dissertation, "The Terrible, Horrible, No Good, Very Bad Truth About Morality and What To Do About It".

Where do the other boxes come from? Every box has dozens or hundreds of references I've absorbed over a decade of reading, but I can point out the salient ones for people in a hurry. Box 1 is sketched by the brain emulation arguments given in Ray Kurzweil's The Singularity is Near, and a shorter version is given by Nick Bostrom in "How long until superintelligence?" This one doesn't hugely matter -- even if seed AI comes in 500 years, the impact is so enduring and absolute it's worth putting attention towards now. Even if it comes about in 70 years (2080), that's still roughly within my expected lifetime, taking into account life extension based on historical progress.

Box 2 comes mainly from the AI Advantage, which I originally encountered in Creating Friendly AI and which was subsequently reinforced by arguments in Levels of Organization in General Intelligence and dozens of other sources. Since last summer, other SIAI volunteers/interns/employees and I have been building more detailed, flexible, academia-friendly models of the situation here, which I overviewed in a post late last month. These models accommodate both slow-as-mud takeoffs and near-instantaneous takeoffs depending on which parameters you set. Lots of interesting debate and thought will center around this box in the coming decade.
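As a toy illustration of how one framework can accommodate both outcomes, consider a simple returns-on-cognitive-reinvestment recursion; the functional form and constants below are my own placeholder assumptions, not the models described above.

```python
# Toy recursion for "intelligence reinvested in improving intelligence":
#   I(t+1) = I(t) + k * I(t)**alpha
# The functional form and constants are placeholder assumptions for
# illustration, not the actual models described in the post.

def steps_to_threshold(alpha: float, k: float = 0.01,
                       threshold: float = 1000.0, max_steps: int = 100000) -> int:
    """Count iterations until capability (starting at 1.0) exceeds the threshold."""
    capability, steps = 1.0, 0
    while capability < threshold and steps < max_steps:
        capability += k * capability ** alpha
        steps += 1
    return steps

for alpha in (0.5, 1.0, 1.5):
    print(f"alpha={alpha}: {steps_to_threshold(alpha)} steps to reach 1000x baseline")
# Sublinear returns (alpha < 1) give a slow-as-mud crawl, linear returns give a
# steady exponential, and superlinear returns approach a finite-time blowup:
# the qualitative outcome hinges on the parameters, not on the framework itself.
```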

Box 3 is another fun and oddly controversial one. You'd think that humanity being dethroned from its central place in the cosmos about ten times since the beginning of the Enlightenment would be enough for us to acknowledge that qualitative superintelligence is plausible, but apparently it isn't. Pundits like J. Storrs Hall and others are able to look at humans, then look at animals, and say that a similar intelligence gulf couldn't exist between us and another hypothetical being. I'll surely be forced to argue the points in this box for years to come, but like Box 4, and unlike Boxes 1 and 2, I see this as a losing battle for the opposition, and therefore somewhat less interesting. I consider the incredulity around the plausibility of superintelligence to be a temporary and fragile thing -- the right fictional exposition, whether in book or movie form, will destroy this anthropocentric conceit. "Understand" by Ted Chiang is a nice try, as is the much shorter and funnier "X17" by Eliezer Yudkowsky.

Blah, down to my most hated box, the accursed Box 5. Eliminating Box 5 by pursuing a solution is my present interim goal in life. Unfortunately, I have met people for whom all four boxes on top are present but don't lead to Box 5. A number of reasons can be put forth for this, the most prominent being a focus exclusively on one's own life and close friends, with no concern about all of humanity being snuffed out as long as it's quick. Or perhaps a lack of emotional valence -- if you have a long-time commitment to political activism, then worrying about self-improving AI is boring because it doesn't invoke evpsych-derived obsessions with political intrigue. So you avoid following your beliefs to their logical conclusions because the conclusions are too disturbing or have actionable implications that contradict your prior plans.

Box 5 is more open and easily observed while Boxes 1-4 are not immediately obvious, leading some intrepid Internets Psychologists to write in their own made-up ideas for the upper boxes. James Hughes has published a paper on his, and made a big deal about it at the 2007 Singularity Summit, which, by the way, received front-page coverage in my city newspaper (the Chronicle). Basically, because he doesn't understand one or more of the top boxes, he can't imagine how Box 5 could derive from Boxes 1-4, so he makes up his own boxes that seem to make more sense as sources of concern, like Millennialism run amok. I'm not sure how to feel about this. Sort of bored, really, because it discourages debate about Boxes 1-4.

Box 6 happens when the worry and concern temporarily abate and you actually think about what to do. All sorts of fun ideas can emerge from there, and many of them have never been published. My brain contains a large catalog. Make up your own. When I see people spontaneously generating plans here, I know that they finally understand my point. More and more such plans (usually in the form of solutions to Friendly AI) have been popping up over the last decade, some by Ph.D.s like Tim Freeman and Matt Mahoney, and some by cranks, like Arthur T. Murray, who plan to have the Singularity all wrapped up in time for 2012. Discussions with other Singularitarians originally made me realize that the space of possible actions is quite huge, if you only pause to think about it. Some cranks have inevitably proposed answers like "destroy everything". Others have proposed government regulation, which is silly because no legislature will consider superintelligence plausible prior to its creation.

Box 7 is the current main plan, embodied by the Singularity Institute and all the support behind it. I like the general idea, but it must be emphasized that it is entirely incomplete and needs more work immediately. My support for the plan may be withdrawn at any time, depending on how it evolves. Other contributors to the SL4 and AGI lists have come up with specific implementation plans for this, but the most interesting ideas (in my view) come from the 30-odd-person, mathematics-oriented community of SIAI interns and volunteers. This community includes names you may remember from the peanut gallery on Overcoming Bias, including Michael Vassar, Marcello Herreshoff, Anna Salamon, and many others. People like Matt Mahoney, Peter Voss, Bill Hibbard, J. Andrew Rogers, Pei Wang, Jürgen Schmidhuber, Marcus Hutter, Steve Omohundro, Moshe Looks, Richard Loosemore, Tim Freeman, and a small set of others (who can be found lurking on Ben Goertzel's AGI list) offer interesting counterpoints here. Ray Kurzweil's solution, "the free markets will do it!", demonstrates that he lost Box 4 somewhere along the way. Maybe he will find it eventually.

Box 8 involves coming up with some way to enhance human intelligence as a stepping stone to the long-term fix of Friendly AI. Originally I just dismissed this idea out of hand, based on the biological complexity of human minds, the preexisting optimization conducted by evolutionary processes, the difficulty of securing funding and government approval, the difficulty of noninvasive testing procedures, etc. Today, I'm still extremely skeptical, but I have become vaguely less so due to smart folks presenting me with decent arguments. I have the feeling that lots of people are holding back some of their ideas on this because they don't want to be seen discussing them in public, and because they think the ideas might be valuable. I warn them that without a public discussion of possible engineering approaches, the community will languish due to insufficient exchange of ideas. You may think that you and your six smart friends are a sufficient group to discuss it with, but believe me, your little group is not the only one thinking about it in a serious, writing-up-tentative-blueprints way. There are at least 20 more where that came from, and all you little cells staying quiet is just delaying progress by years.