Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.

31 May 2009

Constant Rape in Chad and Darfur

A new Harvard-backed study found that Darfuri women at refugee camps in Chad and Darfur are frequently being raped. Big surprise. Here's how it works: you go to get firewood, and you get raped.

Like Joe Biden, I believe that there needs to be a military force in the region to put a stop to the genocide and rape. So far, only about 9,000 African Union and U.N. peacekeepers have been deployed in Darfur to protect and provide relief for 2.5 million civilians. This is not nearly enough. When hundreds of thousands of civilian men, women, and children are being murdered, tortured, and raped, it is humanity's business. There needs to be a much larger military coalition, made up of all willing nations.

The Janjaweed are relatively small in number, just 20,000 by some estimates, and with primitive technology. They are essentially bandits with AK-47s on horseback, highly susceptible to missiles or machine gun fire from aircraft. The Sudanese government denies that it is supporting the group. Sudan has been openly complicit in massacres and slave-taking in Southern Sudan, however.

The peacekeeping force might have a problem if the military of Sudan intervened. They number about 400,000, with 100,000 in reserve, and have "the most advanced military production industry in Africa and the Middle East." Still, I think it's worth being bold and seeing what happens. If Sudan cannot defend its inhabitants from genocide, it has temporarily sacrificed its right to govern. The concept of Westphalian sovereignty ought to be suspended in this instance, under extenuating circumstances.

According to DarfurScores.org, Sally Chin of Refugees International has noted that the world has given the African Union "the responsibility to protect, but not the power to protect." It must be given the power to protect. World leaders (nudged along by the world academic/intellectual complex) should take the necessary actions to see this happen. We have the power. The United States can lead, but it might be difficult to do alone, occupied as we are by Iraq and Afghanistan.

Perhaps we never should have invaded Iraq, and instead spent all our resources on Darfur and Afghanistan.

If US leadership is politically untenable, then other nations need to step forward. You can't just let genocide happen and do nothing about it -- that is ridiculous. At the very least, everyone could complain about it more.

Here is a letter from a Sudanese thinker who believes that intervention would only make things worse. If so, then the primary focus should be on encouraging the rebels and the Sudanese government into peace negotiations. Whatever strategy is chosen, the point is to do something to try to stop the genocide. The writer asserts that water shortages are one of the root causes of the conflict -- if so, then better nanotechnology research into water filters and a humanitarian campaign to ship these to the region might do more in the long run than a UN intervention.

Filed under: warfare 14 Comments
29 May 2009

The General Intelligence Factor

A Scientific American feature article from 1998 by Linda Gottfredson on g. Quote:

The debate over intelligence and intelligence testing focuses on the question of whether it is useful or meaningful to evaluate people according to a single major dimension of cognitive competence. Is there indeed a general mental ability we commonly call "intelligence," and is it important in the practical affairs of life? The answer, based on decades of intelligence research, is an unequivocal yes.

But, what if the fact that I don't have an extremely high IQ makes me feel bad? Then I'll dismiss all that research and ignore it. Who needs science that doesn't make me feel better about myself?

Filed under: intelligence 3 Comments
29 May 2009

Dr. Richard Jones Steps Down from UK “Nano-Champion” Position

The news is at his blog. Congratulations to Richard on all he has accomplished.

29 May 2009

Kurzweil Defends Himself

Kurzweil defends himself against the Newsweek article by Daniel Lyons.

I certainly don't always agree with Kurzweil, and I know for a fact that he's made at least a dozen predictions that are wrong (pretty good considering how many predictions he's made in total), but I think he's basically spot on with this defense.

Especially important to me is his defense that he cares about the risks of technology:

Regarding the dangers of technology, Lyons writes, "But Kurzweil is having none of that -- he thinks the 'man-machine civilization' is going to be wonderful. He doesn't argue. He just sits there smiling." That's a total misrepresentation. Extensive portions of my recent books and many of my speeches are devoted to what I describe as the intertwined promise versus peril of new technology. Bill Joy's famous cover story in WIRED on the dire dangers of new technologies was based, as he states at the beginning of his article, on my book The Age of Spiritual Machines. I'm not just sitting here smiling, but have worked extensively with the Army and other organizations on developing defenses against abuse of biotechnology and other advanced technologies.

However, in his book, he does essentially dismiss the risk of AI, or thinks that free market economics will be the solution. Kurzweil should read Omohundro's "Basic AI Drives" and Preventing Skynet. "We're all gonna merge" is not an adequate refutation of the risk of unfriendly AI. The most advanced AIs will be independent, not integrated into human neurons. And if we're not careful in how we program them, they will fool us into thinking they're our friends, take control of or produce a critical threshold of weapons, and kill us. It's funny when primates think they can have a chance against an enemy that can copy itself, has no need for a physical body, no need for sleep, never gets bored, can accelerate its thinking speed, add computing power directly to its own intelligence, and directly improve on its intelligence. Oh, right, intelligence isn't everything. Therefore the US military can defeat a recursively self-improving AI with access to rapid manufacturing. Right. "We'll just nuke it." How can you nuke something that is everywhere, and has installed agents on hundreds of millions of computers in every major city on Earth?

The reality is, when confronted with a human-indifferent recursively self-improving AI, we'll probably be dead before we even know what is happening.

It's interesting how Kurzweil's refutation of Lyons mirrors my refutation of Horgan.

Filed under: singularity 14 Comments
29 May 2009

Chris Phoenix Interview at CRN

Chris Phoenix gives an interesting interview to a student via email.

28 May 2009

Nanotechnology News from Next Big Future

About a week ago, Brian Wang did a nice nanotech roundup. Funny title: "A Bad Week For Those Who Deny Molecular Nanotechnology or Accelerating Technology or the Tech Singularity".

28 May 2009

Can public commitment be counterproductive for achievement?

Great article by Patri Friedman at Less Wrong. He turned my thinking around on the issue, in less than 15 seconds of reading.

Other interesting posts include "Can we create a function that provably predicts the optimization power of intelligences?" by whpearson and "Do Fandoms Need Awfulness?" by Eliezer Yudkowsky.

Filed under: rationality No Comments
27 May 2009

We Care About Your Opinion, But Are Still Committed (to the Singularity)

So, quite a few people are reacting to the Markoff piece, and the Singularity Institute (SIAI) has already been in touch with him. It seems nowadays that there are a lot of people in the NYT offices interested in the topics of this blog: AGI, extinction risk, transhumanism, and nanotechnology. Some colleagues in my circles have called this a "PR coup", but we have to remember that journalists mostly just report on the opinions of experts rather than issue their own. (Though framing is obviously involved.) Even if the cover of The New York Times began to look like Accelerating Future, it would still just transmit the positions of experts in nanotechnology, futurism, and AI.

Therefore, I encourage more academics and other professional thinkers to take a look at the ideas and formulate their opinions. Because the dialogue is young and just picking up, there are still many opportunities to make quite a name for yourself, either as a Singularity proponent, like Kurzweil and myself, or a critic, like Horgan. I'm just getting started myself, and plan to raise my profile as much as I can.

A recent reaction to the Markoff piece at Beliefnet, authored by Denise Abatemarco, said the following:

The fact that some of our brightest minds are focused on the goal of immortality always strikes me as odd. It also seems indicative of a GIANT denial of impermanence...and maybe an unhealthy attachment/clinging to ego? The idea of immortality, though sometimes fun to contemplate, has always struck me as more creepy and unnatural than alluring. But I suppose creepy and unnatural is my take on self-aware A.I. in general. Maybe I've been exposed to too much dystopian style science fiction, but I'm skeptical of the idea that superhuman computers and human/computer hybrids are the answers to our problems.

What do others think? Anyone planning on combating impermanence by having themselves cryogenically preserved?

It's a little disappointing, because I do believe that superhuman computers and human/computer hybrids are the answer to our problems. As J. Storrs Hall put it, reacting to a poll at H+ magazine about the plausibility of a "Terminator" scenario:

On the face of it, it's ludicrous. Why would a supposedly intelligent network mind waste so much energy and resources indulging in cinematically grandiose personal combat in grim wastelands with loud music? If it, for some reason, wanted to kill off humanity, it would just whip up a thousand new flu strains and release them all at once -- and use neutron bombs to clean up.

On the other hand, if all you mean is are the robots going to take over, it's more or less inevitable, and not a moment too soon. Humans are really too stupid, venal, gullible, mendacious, and self-deceiving to be put in charge of important things like the Earth (much less the rest of the Solar System). I strongly support putting AIs in charge because I'm dead certain we can build ones that are not only smarter than human but more moral as well.

I generally agree with this. I think it's possible to realize the joke that is humanity while at the same time respecting it, sort of. We're still the best species around. I like other humans. It's just that, we've had 200,000 years to get things right and we've failed. Time to create superintelligence.

I really care about the opinions of people like Denise Abatemarco. She admits being exposed to a lot of sci-fi and being skeptical about the use of AI to improve our lot as a civilization. Unfortunately, sci-fi-formulated opinions are not really sufficient. We need more academics and experts in cognitive science and decision theory looking at the challenge of AI and the Intelligence Explosion (the real "Singularity").

What Denise and I might agree on is that AI could be really powerful. Since someone will build AI eventually anyway, doesn't it make sense to try to steer it in the right direction? The blunt truth is that in the end, we will go forward whether or not people like Denise object. A lot of us are pretty committed -- within 45 seconds of reading "What is Friendly AI?", I knew that this was probably what I'd be doing until it was finished. As this document says:

Independence means regarding the Singularity as a personal goal. The desire to create the Singularity is not dependent on the existence, assistance, permission, or encouragement of other Singularitarians. If every other Singularitarian on the planet died in a tragic trucking accident, the last remaining Singularitarian would continue her personal efforts to make the Singularity happen.

The intellectual heritage of Singularitarianism comes from transhumanism and Extropianism, both of which have a strong streak of individualism and quite explicit antiauthoritarianism. Historically speaking, most human causes do tend to organize themselves around explicit sources of authority. It should go without saying that neither I, nor Vernor Vinge, nor the Singularity Institute, nor any other human institution, should be believed to have any "authority" over other Singularitarians - except that voluntarily granted by other Singularitarians, of course. That much is implicit in our transhumanist heritage.

So, we're distributed and non-dogmatic. I assign authority to the Singularity Institute because I consider it the best-focused effort at creating Friendly AGI. Some might disagree, and pick A2I2, or some other company or organization -- even DARPA. But I'd think that most Singularitarians consider themselves allies of SIAI. It's vaguely possible that that could change in the future -- if the wrong people got in charge of the organization, it could drift away from its original goals, in which case I'd leave.

Still, I and some others have been at this for a while (going on a decade), and we're not stopping now. (Not to say that I'm dogmatic -- if I were convinced that it was a lost cause or unavoidably dangerous, I'd stop.) Some people, like Denise Abatemarco, might feel intimidated or slighted somehow by what we're trying to do. We want to take their opinions into account as much as possible -- that's why we advocate an AI design that uses the preferences of humanity as its input. We care about what you think, but we still want to build human-friendly Artificial General Intelligence as soon as we possibly can. Intelligence is destined to create greater intelligence, and nothing short of absolute extinction can prevent that.

The views expressed in this post are not necessarily those of the Singularity Institute.

Filed under: singularity 13 Comments
26 May 2009

Michael Anissimov Debates John Horgan on the Singularity

I recently debated John Horgan via email on whether the Singularity is a cult or not. He posted our discussion at a blog for the Center for Science Writings at the Stevens Institute of Technology (located in Hoboken, NJ), where he is Director. It's a privilege to debate with such a lauded journalist, even if I am supposedly a cultist in his eyes. If I'm going to be a cultist, I might as well be the best damn cultist I can be.

Filed under: singularity 15 Comments
24 May 2009

Friendly AI in the New York Times

Oh, hello, what's this? Friendly AI and Eliezer in The New York Times. As Media Director of SIAI, it's nice when things happen due to work I put in, but it's also great when I don't need to do a thing and SIAI is publicized in mainstream newspapers. Can't complain.

Filed under: SIAI 4 Comments
21 May 2009

Terminator Salvation: Preventing Skynet is Live

Well, I did it -- Preventing Skynet is live. Just three essays up there right now, but I'm still soliciting additional essays, and will contribute my own soon. Please link to it from your blog or website! Include "Terminator Salvation" as part of the link to increase its ranking for that search term. So, like this:

Terminator Salvation: Preventing Skynet

This site will continue to be online as a resource forever, so we can keep adding things to it if we want to.

Filed under: friendly ai 5 Comments
21 May 2009

Forecasts from the World Future Society

"Top 10 Forecasts for 2009 and beyond". Here's the ones I find the most interesting:

Forecast #1: Everything you say and do will be recorded by 2030. By the late 2010s, ubiquitous unseen nanodevices will provide seamless communication and surveillance among all people everywhere. Humans will have nanoimplants, facilitating interaction in an omnipresent network. Everyone will have a unique Internet Protocol (IP) address. Since nano storage capacity is almost limitless, all conversation and activity will be recorded and recoverable. -- Gene Stephens, "Cybercrime in the Year 2025," THE FUTURIST July-Aug 2008.
Forecast #2: Bioviolence will become a greater threat as the technology becomes more accessible. Emerging scientific disciplines (notably genomics, nanotechnology, and other microsciences) could pave the way for a bioattack. Bacteria and viruses could be altered to increase their lethality or to evade antibiotic treatment. -- Barry Kellman, "Bioviolence: A Growing Threat," THE FUTURIST May-June 2008.
Forecast #6: The race for biomedical and genetic enhancement will -- in the twenty-first century -- be what the space race was in the previous century. Humanity is ready to pursue biomedical and genetic enhancement, says UCLA professor Gregory Stock; the money is already being invested. But, he says, "We'll also fret about these things -- because we're human, and it's what we do." -- Gregory Stock quoted in THE FUTURIST, Nov-Dec 2007.
Forecast #7: Professional knowledge will become obsolete almost as quickly as it's acquired. An individual's professional knowledge is becoming outdated at a much faster rate than ever before. Most professions will require continuous instruction and retraining. Rapid changes in the job market and work-related technologies will necessitate job education for almost every worker. At any given moment, a substantial portion of the labor force will be in job retraining programs. -- Marvin J. Cetron and Owen Davies, "Trends Shaping Tomorrow's World, Part Two," THE FUTURIST May-June 2008.

Since professional knowledge is quickly becoming obsolete in all areas, someone with only a decade of experience soaking himself in scientific and technological knowledge (me) can do just as well, if not better, at foresight than those with more "experience". Funny how that works out.

Filed under: futurism 3 Comments