The Negation Test

Consider this paragraph:

“Many sublimations concerning constructivist capitalism exist. In a sense, Sartre’s model of cultural postmaterialist theory suggests that the task of the reader is social comment, but only if reality is interchangeable with language; otherwise, Marx’s model of neosemioticist Marxism is one of “conceptual theory”, and hence a legal fiction.”

Now consider this paragraph:

“Many sublimations concerning constructivist capitalism don’t formally exist. In a sense, Sartre’s model of cultural postmaterialist theory suggests that the task of the reader isn’t social comment, but only if reality isn’t interchangeable with language; otherwise, Marx’s model of neosemioticist Marxism isn’t one of “conceptual theory”, and hence, is not a legal fiction.”

The first paragraph was taken from the Postmodernist Generator at http://www.elsewhere.org/pomo/. The second paragraph, according to the rules of English grammar, should say the opposite of whatever the first paragraph said. However, the two still sound very similar- I doubt that most people could tell which one was the ‘original’ and which one was the negation.

Totally meaningless text may sound nice, but it doesn’t help you concentrate your probability mass or make testable predictions. Hence, the negation of totally meaningless text should be more totally meaningless text; if you can’t tell the difference between a paper and its negation, it must be meaningless to you (whether because of personal ignorance or because of actual meaninglessness). Producing a negation is far easier than producing a parody, and it should be equally effective. Downloading a postmodernist philosophy paper, negating all the statements, and then attempting to publish the result is protected under fair use, if anyone here wants to try it.
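
For anyone who wants to try it mechanically, here is a minimal sketch of the kind of naive negation that produced the second paragraph above. The word list and the regular expression are my own simplifications (a serious attempt would need real sentence parsing), so treat it as an illustration rather than a finished tool:

```python
import re

# Toy negator: flip a handful of copular/auxiliary verbs, roughly the way
# the second paragraph above was derived from the first. The word list is
# an illustrative simplification; it lowercases its replacements and
# handles none of the harder grammar.
NEGATIONS = {
    "is": "isn't",
    "are": "aren't",
    "was": "wasn't",
    "were": "weren't",
    "exists": "doesn't exist",
    "exist": "don't exist",
}

def negate(text: str) -> str:
    pattern = r"\b(" + "|".join(NEGATIONS) + r")\b"
    return re.sub(pattern, lambda m: NEGATIONS[m.group(0).lower()], text,
                  flags=re.IGNORECASE)

print(negate("Many sublimations concerning constructivist capitalism exist."))
# -> Many sublimations concerning constructivist capitalism don't exist.
```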

Don’t Panic

In the immortal words of Douglas Adams: “Don’t Panic!” For the uninitiated, this quote comes from The Hitchhiker’s Guide to the Galaxy, a series of science-fiction/comedy novels. It’s usually good advice anyway, but it becomes mandatory when dealing with ultratechnology; panic from future shock is a real, serious risk. Cognitive malfunctioning and irrationality increase tremendously during panic and other stressful conditions. Panicking in the face of transhumanist technologies and existential risks is nothing more than jumping out of the frying pan and into the fire.

General Mathematical Engine

Classical intelligence tests, such as IQ tests, are geared toward humans and won’t tell us a great deal about the capabilities of AGI prototype programs. Tests designed for animals are equally limited, for similar reasons. Domain-specific tests, such as chess, tend to favor narrow AIs over AGIs; as of March 2008, to my knowledge, there is no decent system in existence for testing AIs on general intelligence.

Therefore, I propose the creation of a series of tests to determine the capabilities of a general mathematical engine, as a prototype for more advanced kinds of AGI. Such an engine, depending on how advanced it is, should be able to prove theorems, make useful conjectures, and learn new information across a wide variety of mathematical fields. To name a simple example, such an engine could be tested on its Rubik’s Cube-solving capability at various levels of difficulty (no prior support for groups and combinatorics, minimal support, extensive support, etc.). Additional suggestions are welcome.
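
To make the proposal slightly more concrete, here is a rough sketch of what a scoring harness for such tests might look like. Every name in it (MathEngine, Task, score, the Rubik’s Cube task list) is hypothetical, since no such engine or benchmark currently exists:

```python
from dataclasses import dataclass, field
from typing import Protocol


class MathEngine(Protocol):
    """Hypothetical interface for a general mathematical engine under test."""

    def attempt(self, task: str, background: list[str]) -> bool:
        """Try to solve the task given the listed background theory; report success."""
        ...


@dataclass
class Task:
    description: str
    background: list[str] = field(default_factory=list)  # prior support given to the engine
    weight: float = 1.0                                   # harder settings count for more


# The same problem posed at several difficulty levels, as suggested above for
# Rubik's Cube: no prior support for groups and combinatorics, minimal
# support, extensive support.
RUBIKS_TASKS = [
    Task("Solve a scrambled 3x3x3 Rubik's Cube", weight=3.0),
    Task("Solve a scrambled 3x3x3 Rubik's Cube",
         background=["basic group theory"], weight=2.0),
    Task("Solve a scrambled 3x3x3 Rubik's Cube",
         background=["basic group theory", "combinatorics", "standard move tables"],
         weight=1.0),
]


def score(engine: MathEngine, tasks: list[Task]) -> float:
    """Weighted fraction of the tasks that the engine manages to solve."""
    total = sum(t.weight for t in tasks)
    solved = sum(t.weight for t in tasks
                 if engine.attempt(t.description, t.background))
    return solved / total
```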

SL4 Wiki Online

The SL4 server has been down several times in the past few months, for reasons not known to me or the public. The SL4 Wiki, hosted at sl4.org/wiki, has disallowed robots as an anti-spam measure and so is not indexed by Google. To stop this information from simply disappearing into the void, I’ve set up a cached copy of the Wiki at acceleratingfuture.com/tom/sl4wiki. Warning: This is a raw dump and formatting problems are to be expected. All material is copyrighted by the original authors.

Intelligent-Sounding Questions

Consider the history of the ancient philosophical question, “If a tree falls in a forest, and nobody’s around to hear it, does it make a sound?”

I have no idea why or how someone first thought up this question. People ask each other silly questions all the time, and I don’t think very much effort has gone into discovering how people invent them.

However, note that most of the silly questions people ask have either quietly gone away or been printed in children’s books to quiet the children’s curiosity. This type of question, along with many other errors in rationality, seems to attract people. It gets asked over and over again, from generation unto generation, without any obvious, conclusive results.

The answer to most questions is either obvious or obviously discoverable; some easy examples are “Does 2 + 2 = 4?” and “Is there a tiger behind the bush?”. This question, however, creates a category error in the human linguistic system by forcibly prying apart the concepts of “sound” and “mental experience of sound”. Few people will independently discover that a miscategorization error has occurred; at first, the question just seems confusing. And so people start coming up with incorrect explanations; they confuse a debate about the definition of the word “sound” with a debate about some external fact (most questions are about external facts, so this happens by default); they start dividing into “yes” and “no” tribes; and so on.

At this point, the viral meme-spreading process begins. An ordinary question (“Is the sky green?”) refers to concepts we are already familiar with and interrelates them in standard ways. A nonsensical question either refers to nonexistent concepts (“Are rynithers a type of plawistre?”) or uses existing concepts in ways that are obviously incorrect (“Is up circular?”). Our minds can deal with these kinds of questions fairly effectively. However, notice the form of a question asked by the tribal chief/teacher/professor/boss: something like “Does electromagnetism affect objects with no net charge?” Even at large inferential distances, the audience will probably pick up on some of the concepts. Most laymen have heard of “electromagnetism” before, and they have a vague idea of what a “charge” is. But they lack the underlying complexity, the stuff beneath the token “electromagnetism”, needed to give a correct answer.

From the inside, this sounds pretty much like the makes-a-sound question: familiar concepts (“tree”, “falling”, “sound”) are mixed together in ways which aren’t obviously nonsense, but don’t have a clearly defined answer. The brain assumes that it must lack the necessary “underlying knowledge” to get past the confusion, and goes on a quest to discover the nonexistent “knowledge”. At the same time, the question conveys an impression of intelligence, and so the new convert tells it to all of his friends and co-workers in an attempt to sound smarter. Many moons ago, this exact question even appeared in a cartoon I saw, as some sort of attempt to get kids to “think critically” or whatever the buzzword was.

How People Think

This picture is an obvious hoax, but if you look back on WWII’s history, you’ll find that many people really did think this way. Human psychology hasn’t changed a lot since 1943. Where are the modern-day equivalents of WWII-era Nazis and Communists?

Well, they’re still with us- it’s quite easy to find people who seriously advocate committing crimes against humanity as a part of government policy. This kind of thing is not that uncommon- remember the infamous Star of David armbands? Obviously, a great many of these remarks aren’t meant to be taken seriously, but mass murder doesn’t require widespread fanaticism. The population simply has to sit back, not do anything, and watch their tax dollars hard at work.

There are millions of counterexamples to the commonly-held idea that most people are usually moral and rational, but this is the most obvious case study I could find. If something as blatantly evil as crimes against humanity doesn’t raise a blip on many people’s radar, what chance does accidental global extinction have? Relying on popular support, or on the support of politicians (who are selected from the populace by ability and willingness to rise to power, not morals), simply isn’t going to work.

Prediction Market Interest

Present-day prediction markets, so far as I am aware, have no interest-paying mechanism or inflation adjustment. With inflation rates rising, this severely limits the potential of long-term prediction contracts, as most investors don’t want to lock up a large chunk of cash in exchange for the possibility of a 2:1 or 3:1 payoff. The typical long-term return on mutual fund investments and the like is 10%. Over a five-year contract, which is not extreme by any standard, this means that an investor who bets $100 up front will automatically lose about 38% of their winnings relative to other investments, even assuming they win at all. Over a ten-year contract, this figure rises to about 61%. I suspect that Intrade and the other market operators know this, and that this is where they make the bulk of their money; the interest on the upfront cash easily exceeds the 1% or 2% investment fee, even for a short-term, six-month contract.
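
For concreteness, here is the arithmetic behind the 38% and 61% figures, assuming the 10% annual return quoted above:

```python
# Opportunity cost of locking money in a prediction contract instead of
# earning the assumed 10% annual return elsewhere: 1 - 1 / 1.10**years.
ANNUAL_RETURN = 0.10

def opportunity_loss(years: int) -> float:
    """Fraction of potential winnings lost relative to a conventional investment."""
    return 1 - 1 / (1 + ANNUAL_RETURN) ** years

print(f"5-year contract:  {opportunity_loss(5):.0%} lost")   # ~38%
print(f"10-year contract: {opportunity_loss(10):.0%} lost")  # ~61%
```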