Objections to SIAI/AGI/FAI

From Kaj Sotala:

“Over the past week, Tom McCabe and I have been collecting all sorts of objections that have been raised against the concepts of AGI, the Singularity, Friendliness, and anything else relating to SIAI’s work. We’ve managed to get a bunch of them together, so it seemed like the next stage would be to publicly ask people for any objections we may have missed.

The objections we’ve gathered so far are listed below. If you know of any objection related to these topics that you’ve seriously considered, or have heard people bring up, please mention it if it’s not in this list, no matter how silly it might seem to you now. (If you’re not sure whether the objection falls under the ones already covered, send it anyway, just to be sure.) You can send your objections to the list or to me directly. Thank you in advance to everybody who replies.”

The list so far:

SIAI

General objections

  • The government would never let private citizens build an AGI, out of fear/security concerns.
  • The government/Google/etc. will start their own project and beat us to AI anyway.
  • SIAI will just putz around and never actually finish the project, like all the other wild-eyed dreamers.
  • SIAI is just another crazy “doomsday cult” making fantastic claims about the end of the world.
  • Eventually, SIAI will catch the government’s attention and set off a military AI arms race.

AI & The Singularity

Consciousness

  • Computation isn’t sufficient for consciousness.
  • A computer can never really understand the world the way humans can. (Searle’s Chinese Room)
  • Human consciousness requires quantum computing, and so no conventional computer could match the human brain.
  • Human consciousness requires holonomic properties.
  • A brain isn’t enough for an intelligent mind – you also need a body/emotions/society.
  • As a purely subjective experience, consciousness cannot be studied in a reductionist/outside way, nor can its presence be verified. (in more detail)
  • A computer, even if it could think, wouldn’t have human intuition and so would be much less capable in many situations.

Desirability / getting there

  • There’s no reason for anybody to want to build a superhuman AI.
  • A Singularity through uploading/BCI would be more feasible/desirable.
  • Life would have no meaning in a universe with AI/advanced nanotech (see Bill McKibben).
  • A real AI would turn out just like (insert scenario from sci-fi book or movie).
  • Technology has given us nuclear bombs/industrial slums/etc.; the future should involve less technology, not more.
  • We might live in a computer simulation and it might be too computationally expensive for our simulators to simulate our world post-Singularity.
  • AI is too long-term a project; we should focus on short-term goals like curing cancer.
  • Unraveling the mystery of intelligence would demean the value of human uniqueness.
  • If this were as good as it sounds, someone else would already be working on it.

Implementation/(semi)technical

  • We are nowhere near building an AI.
  • Computers can only do what they’re programmed to do. (Heading 6.6. in Turing’s classic paper)
  • The human brain is not digital but analog; therefore ordinary computers cannot simulate it.
  • Gödel’s Theorem shows that no computer, or mathematical system, can match human reasoning.
  • It’s impossible to make something more intelligent/complex than yourself.
  • Creating an AI, even if it’s possible in theory, is far too complex for human programmers.
  • AI is impossible: you can’t program it to be prepared for every eventuality. (Heading 6.8. in Turing’s classic paper, SIAI blog comment: general intelligence impossible)
  • We still don’t have the technological/scientific prerequisites for building AGI; if we want to build it, we should develop these instead of funding AGI directly.
  • There’s no way to know whether AGI theory works without actually building an AGI.
  • Any true intelligence will require a biological substrate.

Intelligence isn’t everything

  • An AI still wouldn’t have the resources of humanity.
  • Bacteria and insects are more numerous than humans.
  • Superminds won’t be solving The Meaning Of Life or breaking the laws of physics.
  • Just because you can think a million times faster doesn’t mean you can do experiments a million times faster; super AI will not invent super nanotech three hours after it awakens.
  • Machines will never be placed in positions of power.

On an Intelligence Explosion

  • There are limits to everything. You can’t get infinite growth.
  • A smarter being is also more complex, and thus cannot necessarily improve itself any faster than the previous stage — no exponential spiral.
  • Computation takes power. Fast super AI will probably draw red-hot power for questionable benefit. (Also, so far fast serial computation takes far more power than slow parallel computation (brains).)
  • Giant computers and super AI can be obedient tools as easily as they can be free-willed rogues, so there’s no reason to think humans + loyal AI will be upstaged by rogues. The bigger the complex intelligence, the less it matters that one part of the complex intelligence is a slow meat-brain.
  • Biology gives us no reason to believe in hard transitions or steep levels of intelligence. Computer science does, but puts the Singularity as having happened back when language was developed.
  • Strong Drexlerian nanotech seems to be bunk in the minds of most chemists, and there’s no reason to think AIs have any trump advantage with regard to it.
  • There is a fundamental limit on intelligence, somewhere close to or only slightly above the human level. (Strong AI Footnotes)

On Intelligence

  • You can’t build a superintelligent machine when we can’t even define what intelligence means.
  • Intelligence is not linear.
  • There is no such thing as a human-equivalent AI.

Religious objections

  • True, conscious AI is against the will of God/Yahweh/Jehovah, etc.
  • Creating new minds is playing God.
  • Computers wouldn’t have souls.

Terminology

Validity of predictions

  • AI has supposedly been around the corner for 20 years now.
  • Extrapolation of graphs doesn’t prove anything. It doesn’t show that we’ll have AI in the future.
  • AI is just something out of a sci-fi movie; it has never actually existed.
  • Big changes always seem to be predicted to happen during the lifetimes of the people predicting them.
  • Kurzweil’s graphs for predicting AI are unrealistic.
  • The Singularity is the Rapture of religious texts, dressed in different clothes to appeal to self-proclaimed atheists.
  • Moore’s Law is slowing down.
  • Progress on much simpler AI systems (chess programs, self-driving cars) has been notoriously slow in the past.
  • There could be a war/resource exhaustion/other crisis putting off the Singularity for a long time. (See Tim O’Reilly’s first comment in the comments section)

Friendliness

Activism

  • It’s too early to start thinking about Friendly AI.
  • Development towards AI will be gradual. Methods will pop up to deal with it.
  • Friendliness is trivially achieved. People evolved from selfish self-replicators; AIs will “evolve” from programs which exist solely to fulfill our wishes. Without evolution building them, AIs will automatically be Friendly.
  • Trying to build Friendly AI is pointless, as a Singularity is by definition beyond human understanding and control.
  • Unfriendly AI is much easier than Friendly AI, so we are going to be destroyed regardless.
  • Other technologies, such as nanotechnology and bioengineering, are much easier than FAI and they have no “Friendly” equivalent that could prevent them from being used to destroy humanity.
  • Any true AI would have a drastic impact on human society, including a large number of unpredictable, unintended, probably really bad consequences.
  • We can’t start making AIs Friendly until we have AIs around to look at and experiment with. (Goertzel’s objection)
  • Talking about possible dangers would make people much less willing to fund needed AI research.
  • Any work done on FAI will be hijacked and used to build hostile AI.

Alternatives to Friendliness

  • Couldn’t AIs be built as pure advisors, so they wouldn’t do anything themselves?
  • A human upload would naturally be more Friendly than any AI.
  • Trying to create a theory which absolutely guarantees Friendly AI is too unrealistic/ambitious a goal; it’s a better idea to attempt to create a theory of “probably Friendly AI”.
  • We should work on building a transparent society where no illicit AI development can be carried out.

Desirability

  • A post-Singularity mankind won’t be anything like the humanity we know, regardless of whether it’s a positive or negative Singularity – therefore it’s irrelevant whether we get a positive or negative Singularity.
  • It’s unethical to build AIs as willing slaves. (an example of this objection)
  • You can’t suffer if you’re dead; therefore AIs wiping out humanity isn’t a bad thing.
  • Humanity should be in charge of its own destiny, not machines.
  • A perfectly Friendly AI would do everything for us, making life boring and not worth living.
  • The solution to the problems that humanity faces cannot involve more technology, especially such a dangerous technology as AGI, as technology itself is part of the problem.
  • No problems that could possibly be solved through AGI/MNT/the Singularity are worth the extreme existential risk incurred through developing the relevant technology/triggering the relevant event.
  • A human-Friendly AI would ignore the desires of other sentients, such as uploads/robots/aliens/animals.

Feasibility of the concept

  • Ethics are subjective, not objective; therefore no truly Friendly AI can be built.
  • The idea of a hostile AI is anthropomorphic.
  • “Friendliness” is too vaguely defined.
  • Mainstream researchers don’t consider Friendliness an issue.
  • Human morals/ethics contradict each other, even within individuals.
  • Most humans are rotten bastards and so basing an FAI morality off of human morality is a bad idea anyway.
  • The best way to make us happy would be to constantly stimulate our pleasure centers, turning us into nothing but experiencers of constant orgasms.


Implementation

  • An AI forced to be friendly couldn’t evolve and grow.
  • Shane Legg proved that we can’t predict the behavior of intelligences smarter than us.
  • A superintelligence could rewrite itself to remove human tampering. Therefore we cannot build Friendly AI.
  • A super-intelligent AI would have no reason to care about us.
  • What if the AI misinterprets its goals?
  • You can’t simulate a person’s development without creating a copy of that person.
  • It’s impossible to know a person’s subjective desires and feelings from outside.
  • A machine could never understand human morality/emotions.
  • AIs would take advantage of their power and create a dictatorship.
  • An AI without self-preservation built in would find no reason to continue existing.
  • A superintelligent AI would reason that it’s best for humanity to destroy itself.
  • The main defining characteristic of complex systems, such as minds, is that no mathematical verification of properties such as “Friendliness” is possible.
  • Any future AI would undergo natural selection, and would eventually become hostile to humanity to better pursue reproductive fitness.
  • FAI needs to be done as an open-source effort, so other people can see that the project isn’t being hijacked to make some guy Dictator of the Universe.
  • If an FAI does what we would want if we were less selfish, won’t it kill us all in the process of extracting resources to colonize space as quickly as possible to prevent astronomical waste?
  • It’s absurd to have a collective volition approach that is sensitive to the number of people who support something.


Social issues

  • Humans wouldn’t accept being ruled by machines.
  • An AI would just end up being a tool of whichever group built it/controls it.
  • Power-hungry organizations are going to race to AI technology and use it to dominate before there’s time to create truly Friendly AI.
  • An FAI would only help the rich, the First World, uploads, or some other privileged class of elites.
  • We need AI too urgently to let our research efforts be derailed by guaranteed Friendliness.
  • Developing AI now would set off an arms race to military AI. We should wait for integration and democratization to spread.

Contact me or Kaj if you have any new ones.

Society Is Not A Moral Entity

A new drug-war propaganda poster has been showing up recently. The caption goes: “Anonymous hero protects the city from drug crime, receives cash reward.” Wouldn’t it be nice to both protect the innocent and receive a nice pile of cash? But who’s actually helped by the drug war? A quick look at some of the participants:

- The drug user gets thrown in prison, has his assets confiscated, loses his job and will probably have trouble finding a new one. Obviously, it didn’t help him.

- The informant, along with everyone else in the city, is saddled with a higher tax rate to pay for the “rewards”, the prisons, the drug dogs, the trial, the public defender, etc. No help here.

- The police department has to spend time and effort on drug cases, instead of investigating rapes, murders, etc. Still no help.

- The innocent victims are further victimized by both individual criminals and cartels flush with drug money. No help there, either.

The poster, of course, can’t name who will be ‘protected’; saying ‘the drug user is protected’ or ‘the taxpayer is protected’ would sound silly. So the next best thing is to move up one level of abstraction, and say that the drug wars are protecting ‘society’. Because of evolutionary psychology and the human tendency towards anthropomorphism, society as a whole is harder to think about than any one specific person. Telling a story about how someone’s guts were dissolved will get you a much, much stronger reaction than going on about “cholera” or “ethnic cleansing”.

Chaos Is The Enemy

After many years of searching, we have finally found the Great Enemy of the Free Peoples of Middle-earth. It isn’t Sauron. It isn’t Osama bin Laden. It isn’t George Bush, anti-environmentalists, mad scientists, grad students or other such agents of doom.

It’s chaos. Randomness. Entropy.

Because of evolutionary psychology, we tend to see meaning in everything, even in things we know are random. Pareidolia, the technical term for seeing faces or hearing voices in random noise, shows up everywhere. Before we had Science, we used to attribute rain and drought to the sinister motivations of the “rain god”. The apparent irrationality of love was caused by the “love god”. The results of wars and battles were predetermined by the “war god”. Farming was ruled over by the “fruit and grain god”. If something went wrong, and you couldn’t find a human scapegoat, there was always a deity around to take responsibility.

Even after the invention of Science, we still find ways to blame everything on the actions of some malevolent ‘overseer’. If a President dies in office, it’s because of a secret Indian curse. If the economy tanks, it’s because of a global banking conspiracy. If your car breaks down, it was planned by the manufacturer. If someone dies…

The vast majority of the time, if someone dies, it’s not because of murder. It’s because of chaos: some random error in DNA replication caused cancer, or some clump of fatty acid caused a heart attack. Chaos has killed more people than every genocide in history put together. Even during WWII, the single largest mass-killing event in human history, more people died of “natural causes” (chaos) than were killed by government armies. The same principle applies on a smaller scale; most of the daily annoyances we live with are caused by chaos, not deliberate malice. There just isn’t enough optimization power available to force the world into states that we like.
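As a rough sanity check on that WWII comparison, here is a back-of-envelope sketch. The population, death-rate, and casualty figures below are round assumptions chosen for illustration, not sourced statistics:

```python
# Back-of-envelope check: "natural" deaths vs. war deaths during WWII.
# All figures are rough assumptions for illustration, not sourced statistics.

world_population = 2.3e9   # assumed world population, early 1940s
crude_death_rate = 0.020   # assumed ~20 deaths per 1,000 people per year
war_years = 6              # 1939-1945

natural_deaths = world_population * crude_death_rate * war_years
war_deaths = 70e6          # assumed upper-range estimate of total WWII deaths

print(f"Deaths from all ordinary causes, 1939-1945: ~{natural_deaths / 1e6:.0f} million")
print(f"Deaths attributed to the war:               ~{war_deaths / 1e6:.0f} million")
# ~276 million vs. ~70 million: even during the worst war in history,
# entropy ("natural causes") out-killed deliberate violence severalfold.
```

Even with generous error bars on those assumed figures, ordinary mortality comes out well ahead of deliberate violence.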

Now that we’ve identified the Enemy, we can start fighting back. If you’re reading this, you’ve already fought back in a small way, by keeping your computer running against the degenerative pressure of entropy. You can fight back in the conventional way, by plugging away at your job, or by doing good and helping other people. But historically, these techniques have only been moderately successful, even when applied with tremendous effort over hundreds of years. So we, the transhumanists, are developing new technologies to help us overcome natural barriers. There are very few problems today which can’t be solved through the combination of technology and altruism: death, poverty, nuclear war, human stupidity, and just about everything else are all solvable problems. We’re taking back this planet. Join the Revolution today.

What Next?

A review of some of our accomplishments during the past several years:

- Transhumanist organizations, as a group, have raised millions of dollars over the past ten years. This is a huge amount of money relative to the size of the movement: Amnesty International has more than two million members worldwide, and they had an operating budget of just $43 million last year.

- A huge amount of material on transhumanism has been written and published (primarily online). Anyone and their dog can now download thousands of pages of transhumanist-oriented essays, most of which are reasonably well-written, although large parts may be inaccurate or obsolete.

- Transhumanists now have a social network: the Accelerating Future people database has more than ninety entries, and more than four hundred people have agreed to have their names listed on the Lifeboat Foundation’s Scientific Advisory Board. Hundreds of people subscribe to the SL4, AGIRI and WTA mailing lists.

- We have gotten mainstream, mostly positive press coverage, primarily due to the 2007 Transvision conference and Singularity Summit. The Summit made the front page of the San Francisco Chronicle, a major newspaper, which actually took the time to cover it rather than dismissing us as kooks.

The question is, then: what next? We can certainly keep doing more of the above: more money, information, social connections and press coverage would obviously be good. But none of these are going to help the future directly. How do we translate the momentum we’ve been building up into increasing humanity’s potential? Any suggestions?