Always 40 Years Away

An earlier post pointing to a Nick Bostrom paper led to a discussion of the observation (true or false) that the predicted date of a technological singularity has been receding by one year per year, so that it’s always the same number of years away. Rolf Nelson wrote:

[T]his is a common objection that I’ve never understood. What hypothesis is this alleged pattern Bayesian evidence for, and what hypothesis is this alleged pattern Bayesian evidence against? If there’s a hard takeoff in 2040, I guarantee you that someone in 2039 will publish a prediction that AGI is still 40 years away.

Also, some amount of “prediction creep” is rational to the extent a phenomenon resembles a one-shot poisson process. Suppose my personal life expectancy today is that I will die, on average, at age 78.58236. If, tomorrow, I find that I haven’t died yet, I should slightly increase it to age 78.58237. I *expect* this expectation value to increase slightly on every day of my life, except for the day I die, during which the expectation value decreases drastically.

Further discussion brought up the exponential distribution and its property of “memorylessness”, meaning that the distribution conditional on failure from time 0 up to time t looks like an exact copy of the original distribution shifted t to the right. (If you have a Poisson process, i.e. one where events occur independently at a constant rate over time, then the probability distribution for the time until the first event is an exponential distribution, so this is the same thing Rolf said.)
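
To spell out the memorylessness property: if the waiting time T is exponential with rate λ (mean 1/λ), then

$$P(T > s + t \mid T > s) = \frac{e^{-\lambda (s+t)}}{e^{-\lambda s}} = e^{-\lambda t} = P(T > t),$$

so having already waited s years without the event tells you nothing: the remaining wait has exactly the original distribution, and its mean stays at 1/λ (40 years in this case, if λ = 1/40 per year).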

The question now is, if according to experts (chosen in a certain defined way) AI has always been 40 years away, what does this prove? I don’t have a full answer, but I do have some comments.

  • In a distribution that’s like the exponential (and with a 40-year mean) but thicker-middled and thinner-tailed (in some sense that I’d have to think about how to quantify), upon observing 40 years of failure, the conditional distribution will be concentrated in its head and will have a mean less than 40 more years away. This is the sort of distribution you get if, e.g., the amount of work that remains to be done is a known quantity and a random amount of work gets done i.i.d. in each time period (see the simulation sketch after this list).
  • If this is true it makes the experts look OK; on reflection they made no mistake in predicting a mean of 40 years.
  • In a distribution that’s like the exponential (and with a 40-year mean) but thinner-middled and thicker-tailed, upon observing 40 years of failure, the conditional distribution will be concentrated in its tail and will have a mean more than 40 more years away. This is the sort of distribution you get if, e.g., you mix a couple of different exponential distributions with means spread around 40 years to represent the experts’ ignorance of what mean they really should have predicted. The increase in the predicted time until the event would represent their learning that, apparently, an exponential distribution with a mean higher than 40 years is a better model.
  • If this is true it makes the experts look bad; they underestimated the difficulty of the problem.
  • In reality, again if the observation is true, some combination of the above two effects is probably in play, with the effects on the mean canceling each other out. (Intuitively I’d say that if they cancel out, you need “more” of the second effect than the first effect.)
  • But if neither effect is going on and we’re seeing a simple exponential distribution in action, what is that Bayesian evidence for? Well, according to this distribution, if it started 40 years ago then with probability 1-1/e AI should have happened already. According to the alternative hypothesis that 1) experts are like tape recorders programmed to say 40 years whatever the facts are and 2) there’s actually a negligible chance of it happening soon, AI should have happened with probability 0. The evidence favors the latter hypothesis over the former hypothesis with a Bayes factor of e. (Of course, you have to consider priors.) So I’m not sure that pointing at memorylessness gets you far; instead of “why haven’t predicted times decreased?”, you get the question “why hasn’t it happened yet?”.
  • Of course, everything depends on how you pick what expert predictions to listen to. Just because by one method you can get suggestive evidence that the experts have been over-”optimistic”, it doesn’t follow that a more sensible method would yield more “pessimistic” predictions for the future.
  • Just extrapolating to “AI will always be ’40 years away’” is all kinds of naive — I think everyone here can agree on this.
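
As a sanity check on the bullets above, here is a minimal simulation sketch. The particular stand-ins are my own arbitrary choices, not anything from the discussion: a gamma distribution for the “known amount of work, i.i.d. progress” story and a 20/60-year mixture of exponentials for the “unknown mean” story, both with a 40-year mean.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Pure exponential with a 40-year mean: memoryless.
exp_times = rng.exponential(scale=40.0, size=N)

# "Known amount of work, i.i.d. progress each period": completion time is a
# sum of many waits, roughly a gamma with the same 40-year mean but a thinner
# tail (shape 10, scale 4 is an arbitrary illustrative choice).
gamma_times = rng.gamma(shape=10.0, scale=4.0, size=N)

# Mixture of exponentials with means spread around 40 (here 20 and 60),
# standing in for ignorance about which exponential is the right model.
mix_means = rng.choice([20.0, 60.0], size=N)
mix_times = rng.exponential(scale=mix_means)

# The "should have happened already" probability from the Bayes-factor bullet.
print(f"P(happened within 40 years | exponential) = {(exp_times <= 40).mean():.2f}"
      "  (about 1 - 1/e = 0.63)")

for name, t in [("exponential", exp_times),
                ("gamma (fixed work)", gamma_times),
                ("mixture of exponentials", mix_times)]:
    remaining = t[t > 40.0] - 40.0   # condition on 40 years of failure
    print(f"{name:25s} mean remaining wait: {remaining.mean():5.1f} years")
```

The exponential’s conditional mean stays at 40 more years, the gamma’s drops well below 40, and the mixture’s climbs above 40, matching the cases described above.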

The Calculus of Doubt

From the conclusion of “Disbelief as the Dual of Belief” by John Norton:

The natural presumption is that degrees of belief are primary and that degrees of disbelief and their associated logic are parasitic upon them. Nothing within the logic of belief and disbelief supports this presumption. We have seen that the duality of belief and disbelief supports a complete isomorphism of the two. For every axiom, property, proof or theorem of the one, there is an analog in the other. Therefore, for any reason, argument or principle we may find within the logic of belief as supporting its primacy, there will be an exactly matching dual reason, argument or principle within the logic of disbelief supporting the latter. If the primacy of belief over disbelief is more than an accident of our history, the reason must be sought outside the logics of belief and disbelief.

I wonder if thinking in terms of disbeliefs (and disprobabilities?) could reverse some common cognitive biases.

Why the Brights Make Me Uneasy

Pablo Stafforini wrote:

It seems to me that something truth-seekers can do is make a conscious effort to minimize the number of propositions whose truth-values their identities are built around. Don’t think of yourself as a theist, an atheist or an agnostic; don’t think of yourself as a liberal, a conservative, a moderate, a socialist, or a libertarian; don’t think of yourself as a deontologist, a consequentialist, or a virtue ethicist. Whatever views you have on these and other issues, they should only be views that represent how things are, rather than views that constitute who you are.

When a finger points at the lack of a God, the fool makes up an identity group for the finger.

What Relativity Doesn’t Teach Us

In this PDF article, John D. Norton investigates a long list of philosophical morals that have been drawn from relativity theory and finds most of them wanting — not true, not new, or based on either something more or something less than general relativity.

That could mean relativity isn’t as weird (to us) as we thought, or it could mean the old physics was weirder than we thought. Probably a bit of both.

Contrary to a recent commenter on Overcoming Bias (I forget who, maybe Eliezer), I think philosophers of physics tend to do better philosophy of physics than physicists, for an unmysterious reason: it’s their specialty.

Sun Fine-Tuned?

New Scientist reports that Charles Lineweaver and others looked at 11 properties of the Sun and calculated that their combined “typicalness” relative to nearby stars is actually above average, concluding that the Sun has no properties fine-tuned for life. If so, there goes yet another potential part of an explanation for the Fermi Paradox.

The paper turns out to be on arXiv, and there are some of the usual annoying interpretation-of-statistics issues.

The Sun is heavier than 95% of other stars, and it’s been suggested this has an anthropic explanation; but the authors argue that because the joint chi-square statistic over all 11 (independent) properties comes out below average, the high mass is apparently just a result of chance.

If you ask me, that’s pure Bayesphemy.

The paper itself states:

Mass is probably the single most important characteristic of a star. For a main sequence star, mass determines luminosity, effective temperature, main sequence life-time and the dimensions, UV insolation and temporal stability of the circumstellar habitable zone (Kasting et al. 1993).

So what’s happened here is that they’ve combined the data on the Sun’s atypically high mass with data (and attendant randomness) on ten other, less relevant properties. I don’t want to think about the math right now, but intuitively it seems that if you add enough properties that don’t do anything to one property that does do something, then in a joint chi-square test of the kind the authors used you always have a decent shot at “showing” there’s no combined effect, regardless of how strong the one real effect is.
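
Here’s a quick sketch of that intuition. The setup is hypothetical, not the authors’ actual data or test: one property (“mass”) pinned at the 95th percentile of the comparison stars, plus ten filler properties that carry no signal at all.

```python
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(0)
n_trials = 100_000

# One genuinely atypical property: heavier than ~95% of comparison stars.
z_mass = norm.ppf(0.95)                    # about 1.645

# Ten irrelevant properties drawn from the same distribution as the
# comparison stars, i.e. carrying no signal whatsoever.
z_filler = rng.standard_normal((n_trials, 10))

# Joint chi-square statistic over all 11 properties, compared against the
# median statistic of a perfectly run-of-the-mill star.
joint_stat = z_mass**2 + (z_filler**2).sum(axis=1)
median_stat = chi2.ppf(0.5, df=11)
looks_typical = joint_stat < median_stat

print(f"mass alone, one-sided p-value: {norm.sf(z_mass):.2f}")
print(f"fraction of trials where the joint statistic looks more typical than "
      f"the median star: {looks_typical.mean():.2f}")
```

In this toy setup the 11-property statistic comes out below the median roughly a third of the time, even though one of the properties sits at the 95th percentile on its own.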

Besides, if you’re interested only in the effect of mass, how can knowing all the other properties (which are, again, independent of mass) tell you anything relevant? There’s just no information in them. I guess there could be if, for some reason, your probabilities for the proposition “unusual X is required for life” were positively correlated across different X. And if you were relying on someone’s authority when they said “high mass is important”, that authority would be undermined by the evidence that the other properties are unusually typical, since it suggests they may be cherry-picking properties.

Another thing I don’t immediately see them addressing is whether some properties may have been fine-tuned to be not too far from the typical range. Maybe that’s theoretically implausible in all 11 cases.

I’m not too sure of my thinking here; expect sneaky edits.

Regardless of Bayes gripes, the paper is interesting and informative. Although Lineweaver seems to be on the wrong side of the ET debate, I recommend his other stuff.

Assuming Is Not Believing

Suppose I’m participating in a game show. I know that the host will spin a big wheel of misfortune with numbers 1-100 on it, and if it lands on 100, he will open a hatch in the ceiling over my head and dangerously heavy rocks will fall out. (This is a Japanese game show I guess.) For $1 he lets me rent a helmet for the duration of the show, if I so choose.

Do I rent the helmet? Yes. Do I believe that rocks will fall? No. Do I assume that rocks will fall? Yes, but if that doesn’t mean I believe it, then what does it mean? It means that my actions are much more similar (maybe identical) to the actions I’d take if I believed rocks would definitely fall, than to the actions I’d take if I believed rocks would definitely not fall.

So assuming and believing (at least as I’d use the words) are two quite different things. It’s true that the more you believe P the more you should assume P, but it’s also true that the more your actions matter given P, the more you should assume P. All of this could be put into math.
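
For instance, here is a minimal expected-loss version of the helmet decision, with an invented dollar figure for the harm:

```python
# Toy expected-loss calculation for the game show (illustrative numbers only).
p_rocks = 1 / 100            # the wheel lands on 100
harm_without_helmet = 500    # invented dollar-equivalent harm if unprotected
helmet_cost = 1              # assume the helmet fully prevents the harm

expected_loss_without = p_rocks * harm_without_helmet   # $5.00
expected_loss_with = helmet_cost                         # $1.00

print(f"expected loss without helmet: ${expected_loss_without:.2f}")
print(f"expected loss with helmet:    ${expected_loss_with:.2f}")
print(f"rent the helmet? {expected_loss_with < expected_loss_without}")
```

The belief that rocks will fall stays at 1%, but the decision tracks the product of that probability and how much the outcome matters, which is the sense in which I “assume” rocks will fall.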

Hopefully nothing shocking here, but I’ve seen it confuse people.

With some stretching you can see the assumptions made by mathematicians in the same way. When you assume, with the intent to disprove it, that there is a largest prime number, you don’t believe there is a largest prime number, but you do act like you believe it. If you believed it you’d try to figure out the consequences too. It’s been argued that scientists disagree among themselves more than Aumann’s agreement theorem condones as rational, and it’s been pointed out that if they didn’t, they wouldn’t be as motivated to explore their own new theories; if so, you could say that the problem is that humans aren’t good enough at disbelieving-but-assuming.
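
To spell out the largest-prime example: the mathematician adopts the assumption only to trace out its consequences. Suppose, for contradiction, that there are finitely many primes; multiply them all together and add 1:

$$N = p_1 p_2 \cdots p_n + 1.$$

N leaves remainder 1 when divided by each prime on the list, so any prime factor of N is a prime missing from the list, contradicting the assumption that the list was complete. For the length of the argument you act just as someone who believed in a largest prime would act, while believing all along that there isn’t one.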

Embodiment Shmembodiment

Sometimes people make an argument along the lines of: “Science has discovered that human intelligence is intrinsically embodied, and therefore 1) you can’t just program an artificial intelligence in a box without a body, and 2) you can’t upload your mind because then it would no longer be embodied”. Usually they insert the word “profound” somewhere.

Let’s start by dealing with 1). First off, just because human intelligence is embodied, do we know that embodiment is a requirement for intelligence? Our bodies and their interaction with their environment happened to be around for evolution to work with in programming our minds, so just the fact that it used them isn’t very informative. To show that embodiment is necessary, you’d have to show that other approaches fail.

Second, I could see embodiment meaning a few different things, and none of them seem very threatening.

  1. “To make an intelligent mind you need to give it an actual physical body in the actual physical world.” This seems clearly false. If a virtual body in a virtual world has the same structure, it should allow the same intelligence, because intelligence is a structural property. (Something that’s wet in a virtual world with the same structure as the real world is not also wet in the real world, but something that’s intelligent in a virtual world with the same structure as the real world is also intelligent in the real world.)
  2. “To make an intelligent mind you need to give it something with body-structure in something with world-structure.” I doubt it. (Note, though, that I know nothing.) Anyway, this is compatible with AI-in-a-virtual-world-in-a-box.
  3. “To make an intelligent mind you need to give it some of the mental features that embodied creatures have, like sensory and motor modalities.” Maybe. Here there’s no conflict with bodiless-AI-in-a-box at all.

Next, the argument from embodiment against uploading. That one sounds confused to me too. If you uploaded my mind somewhere without connecting it to something with a structure much like my body in a sensible 3D world, then my mental life would be so much changed that you could indeed doubt whether I’d still be me. But if you uploaded my mind so that it stayed connected to something with a structure much like my body in a sensible 3D world, the only thing that’s changed — other than the details of the surroundings, which change every time I walk into a different room — is that things that used to be real are now virtual. (I need a better word for “real” here — I think virtual worlds are perfectly real in the philosophical sense.) This is not a difference that leaves me less embodied in any way that affects my psychology. A thing that used to be true of me — “I am being implemented in base-level reality” — is then no longer true of me, but this in itself causes no philosophical problems. Facts about me — where I am, what I perceive, what atoms I’m made of — change all the time. It’s only when my psychology is rewritten or my memories are changed that I need worry about being hurled into an existential crisis.