Phil Bowermaster on the Singularity

Over at the Speculist, Phil Bowermaster understands the points I made in “Yes, the Singularity is the biggest threat to humanity”, which, by the way, was recently linked by Instapundit, who unfortunately probably doesn’t get the point I’m trying to make. Anyway, Phil said:

Greater than human intelligences might wipe us out in pursuit of their own goals as casually as we add chlorine to a swimming pool, and with as little regard as we have for the billions of resulting deaths. Both the Terminator scenario, wherein they hate us and fight a prolonged war with us, and the Matrix scenario, wherein they keep us around essentially as cattle, are a bit too optimistic. It’s highly unlikely that they would have any use for us or that we could resist such a force even for a brief period of time — just as we have no need for the bacteria in the swimming pool and they wouldn’t have much of a shot against our chlorine assault.

“How would the superintelligence be able to wipe …

Read More

Bill Gates Mentions the Risk of Superintelligence in the Wall Street Journal

Bill Gates is smart in a way that other corporate titans of the 90s and 00s just aren’t. Smart as in an intellectual with a broad range of knowledge and a varied information diet, not “smart” as in wearing a trendy turtleneck and having good design and business sense.

In a recent article in the Wall Street Journal, Gates takes on Matt Ridley’s book The Rational Optimist: How Prosperity Evolves. Gates writes:

Exchange has improved the human condition through the movement not only of goods but also of ideas. Unsurprisingly, given his background in genetics, Mr. Ridley compares this intermingling of ideas with the intermingling of genes in reproduction. In both cases, he sees the process as leading, ultimately, to the selection and development of the best offspring.

The second key idea in the book is, of course, “rational optimism.” As Mr. Ridley shows, there have been constant predictions of a bleak future throughout human history, but they haven’t come true. Our lives have improved dramatically – in terms of lifespan, nutrition, literacy, wealth and other measures – and he believes that the trend will …

Read More

Future Superintelligences Indistinguishable from Today’s Financial Markets?

It seems obvious that Singularity Institute-supporting transhumanists and other groups of transhumanists speak completely different languages when it comes to AI. Supporters of SIAI actually fear what AI can do, and other transhumanists apparently don’t. It’s as if SL3 transhumanists view smarter-than-human AI with advanced manufacturing as some kind of toy, whereas we actually take it seriously. I thought a recent post by Marcelo Rinesi at the IEET website, “The Care and Feeding of Your AI Overlord”, would provide a good illustration of the difference:

It’s 2010 — our 2010 — and an artificial intelligence is one of the most powerful entities on Earth. It manages trillions of dollars in resources, governments shape their policies according to its reactions, and, while some people revere it as literally incapable of error and others despise it as a catastrophic tyrant, everybody is keenly aware of its existence and power.

I’m talking, of course, of the financial markets.

The opening paragraph was not metaphorical. Financial markets might not match pop culture expectations of what an AI should look like …

Read More

Another Nick Bostrom Quote

“One consideration that should be taken into account when deciding whether to promote the development of superintelligence is that if superintelligence is feasible, it will likely be developed sooner or later. Therefore, we will probably one day have to take the gamble of superintelligence no matter what. But once in existence, a superintelligence could help us reduce or eliminate other existential risks, such as the risk that advanced nanotechnology will be used by humans in warfare or terrorism, a serious threat to the long-term survival of intelligent life on earth. If we get to superintelligence first, we may avoid this risk from nanotechnology and many others. If, on the other hand, we get nanotechnology first, we will have to face both the risks from nanotechnology and, if these risks are survived, also the risks from superintelligence. The overall risk seems to be minimized by implementing superintelligence, with great care, as soon as possible.”

– Nick Bostrom, “Ethical Issues in Advanced Artificial Intelligence”
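
Bostrom’s ordering argument can be restated as a toy probability calculation. The numbers below are mine and purely illustrative, not from his paper; the point is only the structure of the argument: getting nanotechnology first means surviving two existential gauntlets in sequence, while a carefully built superintelligence achieved first could defuse the second gauntlet before we reach it.

```python
# Toy numbers, purely illustrative -- not from Bostrom's paper.
p_survive_si = 0.8       # chance we survive the transition to superintelligence
p_survive_nano = 0.7     # chance we survive mature nanotechnology unaided
p_nano_mitigated = 0.99  # chance a friendly superintelligence neutralizes nanotech risk

# Nanotechnology first: face both risks in sequence, each at full strength.
nano_first = p_survive_nano * p_survive_si

# Superintelligence first: face the SI risk once; if we survive it,
# the resulting superintelligence shields us from most nanotech risk.
si_first = p_survive_si * p_nano_mitigated

print(f"P(survival | nanotech first):          {nano_first:.2f}")  # 0.56
print(f"P(survival | superintelligence first): {si_first:.2f}")    # 0.79
```

On these assumptions, putting superintelligence first dominates, which is exactly Bostrom’s “with great care, as soon as possible” conclusion.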

Read More

Analysis of Massimo Pigliucci’s Critique of David Chalmers’ Talk on the Singularity

To follow up on the previous post: I think the critique by Massimo Pigliucci (a philosopher at the City University of New York) of David Chalmers’ Singularity talk does have some good points, but I found his ad hominem arguments so repulsive that it was difficult to bring myself to read past the beginning. I would have the same reaction to a pro-Singularity piece that opened with the same level of ad hominem. (Recall that when I went after Jacob Albert and Maxwell Barbakow for their ignorant article on the Singularity Summit, I focused on their admission that they understood none of the talks, using that as a negative indicator of their intelligence and knowledge, rather than insulting their haircuts.) If anything, put the ad hominem arguments at the end, so that they don’t bias people before they’ve read the real objections.

Pigliucci is convinced that Chalmers is a dualist, which is not exactly true — he is a monist who takes consciousness, rather than spacetime and matter, as the fundamental constituent. I used to be on …

Read More

Answering Popular Science’s 10 Questions on the Singularity

I thought I would answer the 10 questions posed by Popular Science on the Singularity.

Q. Is there just one kind of consciousness or intelligence?

A. It depends entirely on how you define them. If you define intelligence using what I consider the simplest and most reasonable definition, Ben Goertzel’s “achieving complex goals in complex environments”, then there is only one kind, because the definition is broad enough to encompass all varieties. My view is that this question is a red herring. The theory of “multiple intelligences”, presented by Howard Gardner in 1983, doesn’t stand up to scientific scrutiny. Most researchers who study intelligence consider the theory empirically unsupported in the extreme, and the multiple intelligences predictively useful only insofar as they correlate with g, which just provides more support for a single type of intelligence. The theory is merely an attempt to avoid having some people labeled lower in general intelligence than others. In terms of predictive value, IQ and other g-weighted measures blow away the multiple intelligences theory. Instead of …
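
To make the psychometric point concrete, here is a purely illustrative Python sketch (synthetic data and made-up factor loadings, not real test results) of why a battery of positively correlated subtests is naturally summarized by a single general factor:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-factor model: each person has a single latent ability g,
# and every subtest score is a noisy reflection of it.
n_people, n_tests = 1000, 6
g = rng.normal(size=n_people)
loadings = np.array([0.9, 0.8, 0.7, 0.7, 0.6, 0.5])  # hypothetical loadings
noise = rng.normal(size=(n_people, n_tests))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

# The "positive manifold": every subtest correlates positively with
# every other, and one factor soaks up most of the shared variance.
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # largest eigenvalue first
off_diag = corr[np.triu_indices(n_tests, k=1)]
print(f"mean inter-test correlation: {off_diag.mean():.2f}")
print(f"variance share of the first factor: {eigvals[0] / n_tests:.2f}")
```

If the “multiple intelligences” were really independent, the correlation matrix would be near-diagonal and no single factor would dominate; that is the opposite of what real test batteries show.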

Read More

ABC Radio National Coverage of Singularity and the Summit

Here is a blog post. At the top is the classic Toothpaste for Dinner comic about the Singularity. A funny excerpt:

I’ve recently found a third topic to exclude from dinner conversations, alongside politics and religion. The singularity. While I’m rarely one to dichotomise people, in this case I’ve found you’re either excited by the idea, or you do your best to stifle a smirk and offer me another slice of roast beef.

Given my own propensity to discuss the Singularity at dinner, I’m quite familiar with this phenomenon. When people eat meat, it reminds me of how superintelligences will eat us for dinner if we aren’t careful.

Here is the radio show.

Here is another quote from the blog post:

For my money, I think it’s far too easy to get lost in the assumption that the trick to speeding up innovation lies in smarter minds. Progress is inhibited more by social concepts such as ethics, resource allocation and effective communication. Sure, a few bright boffins might hurt in the search for …

Read More

Aubrey de Grey on the Singularity and Longevity Escape Velocity

Read Aubrey’s 8-page paper “The singularity and the Methuselarity: similarities and differences” at the SENS Foundation website. The arguments are quite subtle and complex at points, providing a lot to chew on. Here’s a quote:

Let us now consider the aftermath of a “successful” singularity, i.e. one in which recursively self-improving systems exist and have duly improved themselves out of sight, but have been built in such a way that they permanently remain “friendly” to us. It is legitimate to wonder what would happen next, albeit that to do so is in defiance of Vinge. While very little can confidently be said, I feel able to make one prediction: that our electronic guardians and minions will not be making their superintelligence terribly conspicuous to us. If we can define “friendly AI” as AI that permits us as a species to follow our preferred, presumably familiarly dawdling, trajectory of progress, and yet also to maintain our self-image, it will probably do the overwhelming majority of its work in the background, mysteriously keeping things the way we want them without …

Read More

Is Smarter-than-Human Intelligence Possible?

Florian Widder, who often sends me interesting links, pointed me to an interview that Russell Blackford recently conducted with Greg Egan. The excerpt he highlighted concerns the issue of smartness and whether qualitatively-smarter-than-human intelligence is possible:

I think there’s a limit to this process of Copernican dethronement: I believe that humans have already crossed a threshold that, in a certain sense, puts us on an equal footing with any other being who has mastered abstract reasoning. There’s a notion in computing science of “Turing completeness”, which says that once a computer can perform a set of quite basic operations, it can be programmed to do absolutely any calculation that any other computer can do. Other computers might be faster, or have more memory, or have multiple processors running at the same time, but my 1988 Amiga 500 really could be programmed to do anything my 2008 iMac can do — apart from responding to external events in real time — if only I had the patience to sit and swap floppy disks all day long. I suspect …
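
Egan is invoking a standard result here: once a machine supports a handful of basic operations, it can simulate any other computer, given enough time and memory. As a hedged illustration (a textbook construction, nothing from the interview itself), here is a minimal Turing machine simulator in Python; the transition table below merely increments a binary number, but the same generic loop can run any table you feed it:

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a one-tape Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left) or +1 (right). Halts when no rule applies.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine halts
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    used = range(min(cells), max(cells) + 1)
    return "".join(cells.get(i, blank) for i in used).strip(blank)

# Binary increment: scan to the rightmost bit, then carry leftward.
increment = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("done", "1", -1),
    ("carry", "_"): ("done", "1", -1),
}

print(run_turing_machine(increment, "1011"))  # 11 + 1 = 12 -> "1100"
```

The simulator, not the table, carries the universality: Egan’s Amiga and iMac differ only in how fast and how conveniently they run loops like this one.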

Read More

Superintelligence Could Prevent War, Leading Futurist Says

This idea sounds familiar. From the website of the US Air Force:

8/10/2009 – HANSCOM AIR FORCE BASE, Mass. (AFNS) — The convergence of “exponentially advancing technologies” will form a “super-intelligence” so formidable that it could avert war, according to one of the world’s leading futurists.

Dr. James Canton, CEO and chairman of the Institute for Global Futures, a San Francisco-based think tank, is author of the book “The Extreme Future” and an adviser to leading companies, the military and other government agencies.

He is consistently listed among the world’s leading speakers and has presented to diverse audiences around the globe.

It’s good to hear that the world’s leading futurists are slowly catching up to the position that I’ve been arguing for since 2001, when I was still a teenager.

Canton seems familiar with the singleton concept and views the US as rushing towards an unchallenged status:

“The superiority of convergent technologies will prevent war,” Doctor Canton said, claiming their power would present an overwhelming deterrent to potential adversaries. While saying that the U.S. will …

Read More

A Boring Disagreement?

My disagreement with Dale Carrico, Mike Treder, James Hughes, Ray Kurzweil, Richard Jones, Charles Stross, Kevin Kelly, Max More, David Brin, and many others is relatively boring and straightforward, I think. It is simply this: I believe that it is possible that a being more powerful than the entirety of humanity could emerge in a relatively covert and quick way, and they don’t. A singleton, a Maximillian, an unrivaled superintelligence, a transcending upload, whatever you want to call it.

If you believe that such a being could be created and become unrivaled, then it is obvious that you would want to have some impact on its motivations. If you don’t, then clearly you would see such preparations as silly and misguided.

Why do people make this more complex than it needs to be? It has nothing to do with politics. It has everything to do with our estimates of the probability that a more-powerful-than-humanity being will emerge quickly. I am practically willing to concede all other points, because I think that this is the crux of the argument. Boring …

Read More