Security Breach: ViddyHo Virus

Attention everyone: I have fallen for the ViddyHo virus; please do not open any strange links you receive from me. If you have already opened a link and entered your account information, please change your password and security question immediately, and notify anyone you sent a spam message to.

On a lighter note, I will be presenting a paper at the AGI-09 Workshop in Arlington, VA on March 9th.

Why There Is No Housing Bubble

From “Why There Is No Housing Bubble” by Jim Jubak, June 2005:

“It’s just that, for all the teeth-gnashing and pundit-moralizing, we really don’t have a housing bubble that’s anywhere near bursting. Current 10-year interest rates are just too low. And I certainly don’t see interest rates rising enough in the next year or so to burst a bubble, either.”

As of now (February 2009), this guy is still writing for MSN, as “The Web’s No. 1 Investing Columnist” and “Senior Markets Editor”. Why do people still listen to these talking heads regardless of how many times they’re proven wrong?

Textbooks Are Insanely Cheap

No, that’s not a joke; textbooks are really cheap, if you’re buying them for personal use and not for a class. The high price of new textbooks is almost entirely caused by demand from college students, which makes one wonder how many of those students actually value education, as opposed to the piece of paper given at graduation. E.g., it costs $92 to buy a used copy of Stewart’s Calculus (6th edition), which is currently used for college classes. But the price of the fifth edition is now $5, which means the book is barely worth the paper it’s printed on: people aren’t forced to buy it anymore, so there’s a huge glut on the market.

Expert Bribery

Bribing an expert is, in general, not illegal.

This seems to be an important principle, so it will be stated twice for emphasis: Bribing an expert is, in general, not illegal. It’s illegal to bribe public servants and union representatives, and under some circumstances it’s illegal to bribe a corporate employee (without the employer’s consent). But there’s no law against giving doctors, lawyers, scientists, engineers, etc. large sums of money in exchange for their support of you, your company or your product.

The public in general seems to place a nontrivial amount of confidence in experts of various sorts: expert witnesses at trials, expert professionals on commercials, expert guests on TV shows. But it is not only commonplace but standard procedure for all of these people to have their pockets lined in exchange for supporting a certain point of view. Even on an international scale, the only thing preventing ExxonMobil from simply writing checks to all the world’s climatologists is the large fraction of scientists who really do value ethics over money, and the obvious negative PR that would result.

Sales as Wireheading

Wireheading is the practice of inserting wires into an animal’s brain to bliss out the pleasure centers; the same effect can be achieved with numerous drugs, such as cocaine and various opiates. More generally, wireheading is altering an optimization process’s utility function in order to increase the measured utility of the world, by substituting the quantity of utility itself for the original goals as the optimization target. The end result of wireheading, if applied to all of human civilization, is the universe being deleted and replaced with higher and higher floating-point numbers representing happiness or utility or what have you.

Salesmanship, although neither illegal nor generally thought unethical, should be considered a form of wireheading, as “value” is delivered to the consumer (and therefore the company, and the economy) by altering the consumer’s utility function through sales pitches. The change in the definition of “value” is much more subtle than an AI wiring our brains to greatly desire paperclips, but I see no reason why the former should be regarded as “improving quality of life” if the latter isn’t. It seems reasonably plausible that a superintelligence could turn the world into paperclips through extraordinarily effective sales pitches alone; I can easily imagine a sales pitch which would result in me signing over all of my material assets in exchange for something with zero market value.

Newcomb’s Paradox

For simplicity, I will avoid introducing Newcomb’s paradox, or any of the various philosophical issues surrounding it. I will also shamelessly sidestep the issue of a perfect predictor; a perfect predictor of Turing machines in general seems to require a halting oracle, and the paradox should still work if you just use an ordinary human who has really good psychological knowledge and so can predict accurately 90% of the time.

The heart of Newcomb’s paradox is what Scott Aaronson calls first-order rationality: a case where utility is attached to beliefs directly, rather than only to the actions which flow from those beliefs. As an extremely simple example of first-order rationality, if you have a letter in a sealed envelope which you strongly believe to be both accurate and surprising, and someone points a gun at you and tells you they’ll shoot you if you open it, you probably shouldn’t, even though you’ll predictably end up with less accurate beliefs. It seems that you can’t get a human to genuinely disbelieve something they already know to be true without introducing other… issues, which leads to a great deal of confusion; but any human with reasonably unimpaired cognition should have little trouble avoiding the bullet in the previous scenario.

If a human is deciding whether to put $1M in the box, the obvious thing to do is to try to influence that human in some way: shine a light in their eyes, inject them with morphine, weave wonderful tales about what you would do with the $1M, and so on. But, as if by magic, none of these work, and the only thing the human considers is the predicted result of your cognitive algorithm for how many boxes to take; the only way to influence the future is through its dependence on your currently held ideas. Which currently held ideas would be best?

It seems that a reasonably good solution is to evaluate the problem using a meta-algorithm: evaluate the potential cognitive algorithms available and see which one produces the best result. A mind with a one-box algorithm will predictably receive $1M, while a mind with a two-box algorithm will predictably not receive $1M. The direct consequences, the ones which depend on external actions directly, are $1K in favor of two-boxing. But the indirect consequences, which depend directly on which algorithm you use, are $1M in favor of one-boxing, far outweighing the $1K even with some uncertainty added.
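To make the arithmetic explicit, here is a minimal sketch (in Python, with hypothetical names) of the meta-algorithm's comparison, assuming the 90%-accurate predictor mentioned above and the usual payoffs of $1M in the opaque box and $1K in the transparent one:

```python
# Expected payoff of each candidate algorithm, assuming a predictor that is
# right 90% of the time and the standard Newcomb payoffs (hypothetical numbers).

ACCURACY = 0.9                 # probability the predictor correctly guesses your algorithm
BIG, SMALL = 1_000_000, 1_000  # opaque box if filled, transparent box

def expected_payoff(one_box: bool) -> float:
    """Average payoff of running a one-box or a two-box algorithm."""
    # If the predictor is right about you, the opaque box is filled exactly
    # when you are a one-boxer; if the predictor is wrong, the opposite holds.
    payoff_if_right = (BIG if one_box else 0) + (0 if one_box else SMALL)
    payoff_if_wrong = (0 if one_box else BIG) + (0 if one_box else SMALL)
    return ACCURACY * payoff_if_right + (1 - ACCURACY) * payoff_if_wrong

print("one-box expected payoff:", expected_payoff(True))   # 900000.0
print("two-box expected payoff:", expected_payoff(False))  # about 101000
```

Even with the predictor downgraded from perfect to 90% accurate, the one-box algorithm comes out far ahead.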

A meta-algorithm should be able to beat any paradox which directly rewards arbitrary cognitive content, e.g., Kavka’s toxin puzzle. The other problem with that puzzle is that it would be difficult for a human (although not an AI) to implement an algorithm which actually results in them drinking the toxin, rather than a pseudo-algorithm under which they “intend” to drink it (for various definitions of intent) but actually won’t. This is easily fixable in principle, e.g., by rigging a time bomb to the toxin which will explode and kill you if you don’t consume it. The ideal meta-algorithm is fully self-consistent over time, selecting an algorithm which prefers X and then actually doing X, so it should be able to handle even a perfect predictor by avoiding deliberate deception.

First-order rationality is also applicable to game theory, e.g., the Prisoner’s Dilemma, or even the True Prisoner’s Dilemma. Assuming that the two players know something about each other, selecting an algorithm which cooperates always has the direct effect of losing points, but it may also have the indirect effect of gaining points by increasing the probability that the other player will cooperate. Since the goodness comes from the indirect effects, which are still real but dependent on the other player’s algorithm, I dispute Eliezer’s assertion that one can always find a way to cooperate: if the other player is simply a rock which will fall off a shelf and land on the DEFECT button, it would be criminal stupidity not to “defect” as well.
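As a rough illustration of that trade-off, here is a toy sketch (Python, with assumed payoffs and an assumed correlation parameter, not anything specified in the original arguments) of when the indirect effects outweigh the direct ones:

```python
# Toy model: with probability q the other player's reasoning mirrors yours
# (they choose whatever your algorithm chooses); otherwise they behave like
# the rock on the DEFECT button. Standard payoff ordering T > R > P > S.

T, R, P, S = 5, 3, 1, 0   # temptation, mutual reward, mutual punishment, sucker's payoff
                          # (T never appears below: in this model nobody cooperates against a defector)

def expected_score(cooperate: bool, q: float) -> float:
    mirrored = R if cooperate else P   # the other player copies your move
    rock = S if cooperate else P       # the other player defects regardless
    return q * mirrored + (1 - q) * rock

for q in (0.0, 0.2, 0.5, 0.9):
    print(f"q={q}: cooperate={expected_score(True, q):.1f}, "
          f"defect={expected_score(False, q):.1f}")
# With q = 0 (the falling rock), defecting is strictly better; cooperating only
# wins once the correlation exceeds P / R (above about 0.33 with these payoffs).
```

The point is simply that cooperation pays only to the extent that your choice of algorithm actually influences the other player's; against a rock, q is zero and the indirect effects vanish.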

Supposed To

On the general principle that our actions shouldn’t be caused by mysterious unnamed forces, I propose that we deliberately eliminate the words “supposed to” from daily conversation. Consider the sentence:

“We’re supposed to meet tomorrow at noon.”

It’s not at all clear who desires us to meet at noon: Bill? Steve? The government? God? This sort of sentence actually makes the mind projection fallacy worse, by imbuing objects with a supposed-to-ness, rather than considering how agents are imposing their desires on objects. If I think we should all meet at 11, it adds an extra cost for me to argue with a “supposed to”, as the other party in the argument is removed from explicit consideration.

“We should meet tomorrow at noon.”

This is a clearer sentence; the implication is that I, specifically, want us to meet tomorrow at noon.

“Mike wants us to meet tomorrow at noon.”

This is clearer still, as the agent doing the supposing is named explicitly. If you don’t like it, you can start thinking about Mike’s psychology, and how to convince him otherwise.