Will the Real AI Critics Please Stand Up?

I’m having great trouble finding any citeable work that argues that artificial intelligence is completely impossible. People throw kiwis at AI theory in its current state, or at the philosophy of functionalism, but every single argument I can find stops short of outright denunciation.

For instance, Gerald Edelman, winner of the 1972 Nobel Prize in Physiology or Medicine and coiner of the term “Neural Darwinism”, argues that “AI” is impossible, expelling much hot air on the subject, but then it turns out that he believes, “It seems reasonably feasible that, in the future, once neuroscientists learn much more about consciousness and its mechanism, why not imitate it?”, and remarks, “We construct what we call brain-based devices, or BBDs, which I think will be increasingly useful in understanding how the brain works and modeling the brain. But it also may be the beginning of the design of truly intelligent machines.” So that’s not very anti-AI. Edelman was also quoted in John Horgan’s recent anti-Singularity piece in IEEE Spectrum, “The Consciousness Conundrum”, in support of the idea that AI is difficult. …

Read More

Why Human-Level AI Won’t Change the World

One position I have difficulty wrapping my head around is the view that human-level AI is possible but that it would lack the capability to quickly change the world. The reasons why AI would likely have that capability are frequently cited. To summarize just a few:

1) AI could quickly and easily be copied as many times as is computationally feasible.

2) Running on a flexible substrate, AIs could “overclock” their cognitive functions, leading to enhanced intelligence and capability.

3) Though robotics today is still maturing, it will be more sophisticated by the time AI arrives, and with AI’s help, it isn’t unreasonable to assume that AIs will have direct and broad access to the physical world through robotic means.

4) AIs would be able to share thoughts almost instantly, meaning that skills learned by one AI could be transferred to all other AIs very quickly.

5) AIs would be able to quickly and automatically perform tasks considered by humans to be “extremely boring”, but still pragmatically useful.

6) AIs could routinely perform intellectually …

Read More

Missing: Robot Ethics Charter

Researching the current state of “roboethics” (a lame term that marginalizes “AI ethics”, a more-relevant superset of roboethics), I find a bunch of references to a South Korean project to draft a Robot Ethics Charter. All of these references date from March 2007 and say the charter would be released in April 2007 and subsequently adopted by the government. However, I can’t find it anywhere. Anyone have a clue about where it went? One article summarized the effort as follows:

The prospect of intelligent robots serving the general public brings up an unprecedented question of how robots and humans should be expected to treat each other. South Korea’s Ministry of Commerce, Industry and Energy has decided that a written code of ethics is in order.

Starting last November, a team of five members, including a science-fiction writer, have been drafting a Robot Ethics Charter to address and prevent “robot abuse of humans and human abuse of robots.” Some of the sensitive subject areas covered in the charter include human addiction to robots, humans …

Read More

Two Papers You Should Read

Some of you may have seen these papers already, as I mention them frequently, but they’re important enough that I like to re-mention them regularly. They’re “Artificial Intelligence as a Positive and Negative Factor in Global Risk” by Eliezer Yudkowsky and “The Basic AI Drives” by Steve Omohundro. The papers are 42 and 11 pages, respectively. There’s no abstract for the first paper, but here’s the abstract for the second:

“One might imagine that AI systems with harmless goals will be harmless. This paper instead shows that intelligent systems will need to be carefully designed to prevent them from behaving in harmful ways. We identify a number of “drives” that will appear in sufficiently advanced AI systems of any design. We call them drives because they are tendencies which will be present unless explicitly counteracted. We start by showing that goal-seeking systems will have drives to model their own operation and to improve themselves. We then show that self-improving systems will be driven to clarify their goals and represent them as economic utility functions. They will …
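As a toy illustration of the abstract’s central move (my own sketch, not code from the paper), here is a minimal goal-seeking agent that represents its goal as a numeric utility function and greedily picks the highest-scoring action. The action names and payoff numbers are invented purely for illustration; the point is only that a “self-improve” action wins whenever it raises the utility achievable by later actions, even though self-improvement was never given as a goal.

```python
# Toy utility-maximizing agent (illustrative only; not from Omohundro's paper).
# Its goal is fixed; "self_improve" is never stated as a goal, yet it wins the
# comparison because it multiplies the value of every future action.

from dataclasses import dataclass


@dataclass
class Agent:
    capability: float = 1.0  # how effectively the agent can pursue its goal

    def utility(self, action: str, horizon: int = 3) -> float:
        """Immediate payoff plus a crude estimate of what the action enables later."""
        immediate = {
            "work_on_goal": 10.0,   # direct progress on the terminal goal
            "self_improve": 0.0,    # no direct progress right now...
            "do_nothing": 0.0,
        }[action]
        # ...but self-improvement doubles capability, which scales future payoffs.
        next_capability = self.capability * (2.0 if action == "self_improve" else 1.0)
        future = horizon * next_capability * 10.0  # best future action, repeated
        return self.capability * immediate + future

    def choose(self, actions: list[str]) -> str:
        return max(actions, key=self.utility)


agent = Agent()
print(agent.choose(["work_on_goal", "self_improve", "do_nothing"]))
# -> "self_improve": the "drive" falls out of plain utility maximization
#    and persists unless it is explicitly counteracted in the design.
```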

Read More

Conscious Thought Leads to Better Decisions

From Eurekalert, a press release titled, “Complex decision? Don’t sleep on it”:

Neither snap judgements nor sleeping on a problem are any better than conscious thinking for making complex decisions, according to new research.

The finding debunks a controversial 2006 research result asserting that unconscious thought is superior for complex decisions, such as buying a house or car. If anything, the new study suggests that conscious thought leads to better choices.

Since its publication two years ago by a Dutch research team in the journal Science, the earlier finding had been used to encourage decision-makers to make “snap” decisions (for example, in the best-selling book Blink, by Malcolm Gladwell) or to leave complex choices to the powers of unconscious thought (“Sleep on it”, Dijksterhuis et al., Science, 2006).

At stake in these conscious/unconscious thought experiments (literally) is a wider philosophical argument about the value of intuition and hunches. We want to think that hunches produce better decisions, and have been taught since we were children that this is an intelligent way to approach reality (“Use the Force, Luke.”) …

Read More

Interview with Dr. Steel

Dr. Phineas Waldolf Steel is a mentally twisted but awe-inspiring figure whose interests span the production of propaganda, the construction of chronically malfunctioning robots, puppet shows, and an ongoing attempt to become World Emperor for the purpose of turning this planet into a Utopian Playland. His growing movement aims to move beyond conflict and war to forge a world that makes fun the top priority. His multi-faceted persona is an example of what people can do when they are highly creative in a variety of cultural areas.

Dr. Steel is an entertainer, leader, musician, artist, and thinker. If you’re interested in finding out more about him, check out his website. The laboratory in the toyland section is particularly entertaining. As an independent artist, Dr. Steel hasn’t sold out to any record companies yet, though I’m sure that he would accept a big contract if it were part of his master plan for world domination. His music has been described as “hip-hop industrial opera”, which is correct in the abstract, …

Read More

Funding Secured for Diamondoid Mechanosynthesis Research

Finally, some serious research will experimentally explore the possibility of diamondoid mechanosynthesis (DMS). This research will be conducted in the UK. Here’s the first paragraph of the press release:

Professor Philip Moriarty of the Nanoscience Group in the School of Physics at the University of Nottingham (U.K.) has been awarded a five-year £1.53M ($3M) grant by the U.K. Engineering and Physical Sciences Research Council (EPSRC) to perform a series of laboratory experiments designed to investigate the possibility of diamond mechanosynthesis (DMS). DMS is a proposed method for building diamond nanostructures, atom by atom, using the techniques of scanning probe microscopy under ultra-high vacuum conditions. Moriarty’s project, titled “Digital Matter? Towards Mechanised Mechanosynthesis,” was funded under the Leadership Fellowship program of EPSRC. Moriarty’s experiments begin in October 2008.

If reliable DMS is possible, it could eventually lead to full-fledged molecular nanotechnology, which would have diverse applications, many of them dangerous. Advocates of MNT traditionally overestimate the probability of MNT being possible at all while underestimating the negative applications of the technology.

I’ve been following the

Read More

What is the Singularity?

The Singularity has nothing to do with the acceleration of technological progress. It is only somewhat related to interdisciplinary convergence. The universe is not specially structured for the Singularity to happen. History has not been particularly leading up to it, except in the sense that inventing new technologies gets easier when civilization has more advanced building tools and knowledge. The Singularity is the creation of smarter-than-human intelligence, nothing less, and nothing more.

The Singularity is not a belief system. It is a likely (but by no means certain) future event with great potential for good and for ill. Sort of like nuclear technology, if nuclear technology could invent more advanced technologies on its own and have independent goals. Kind of scary, really.

The Singularity is a hurdle for the human species to jump, not a stairway to Heaven. It could fairly easily be avoided or delayed, whether by blowing up most of the major cities, detonating H-bombs in the upper atmosphere (EMP), someone taking over the world, or the like.

The Singularity is not mystical because intelligence is not mystical. …

Read More

Vernor Vinge’s Latest Take on the Singularity

Vernor Vinge has an interesting and somewhat unusual take on the Singularity, which is ironic given that all the spinoff definitions are based on his original one. However, I regularly disagree with some of his points.

One of the points he frequently makes is that a hard takeoff (superintelligence nearly overnight) would necessarily be bad. I disagree — there are likely to be bad hard takeoffs, and good hard takeoffs. If the superintelligence in question actually cares about human beings, then surely its “hard takeoff” could be orchestrated in such a manner that everyone benefits and no one has their life “flip turned upside down”. On the other side of the coin, if the superintelligence didn’t give a damn about human beings, then we’d likely have our constituent atoms rearranged into something it considers more “interesting”, like a cosmic whiteboard for its beloved mathematical equations.

Favoring a hard or soft takeoff is not like picking between chocolate and vanilla ice cream. It isn’t a matter of human preference; it’s likely that objective facts about the structure of cognition will dictate …

Read More

Support “The Singularity” Documentary

The Singularity Institute is requesting donations to support the completion of a documentary on the Singularity by Doug Wolens. Wolens is an experienced filmmaker who filmed the 2006 and 2007 Singularity Summits. Filming is 80% done, and Doug needs an additional $45,000 to complete the documentary in time for this winter’s film festivals. He has already interviewed figures such as Ray Kurzweil, David Chalmers, and Peter Norvig. Excerpts of the interviews are available on the donations page.

Here’s the blurb for the movie and an explanation of how it helps the Singularity Institute:

“The Singularity” is an investigation into the frontiers of scientific progress. Many important disciplines are coming together to drive this progress – nanotechnology, artificial intelligence, molecular biology, and more. “The Singularity” explores the current boundaries of this research, showing where the trends are leading, and how smashing the intelligence barrier will affect society.

In “The Singularity,” award-winning documentary director Doug Wolens addresses vital questions for all of us: Exactly what is likely within our lifetimes? How are things moving so quickly? Who is working …

Read More