Technology: Four Possible Stances

  1. Technology will lead to extremely good outcomes (technophile)
  2. Technology will lead to extremely bad outcomes (technophobe)
  3. Technology will lead to outcomes that are on the whole neutral (technonormal?)
  4. Technology will lead to extreme outcomes, either good or bad (technovolatile?)

People tend to assume transhumanists are type 1, when many are in fact type 4. This is one of transhumanism’s major PR problems. On a one-dimensional scale of like/dislike, type 4 won’t even register.

(And yes, strictly speaking, 4 is just a mix of 1 and 2, and the other possible mixes should be options also.)

Unfalsifiable Ideas versus Unfalsifiable People

If no possible outcome counts as evidence against a hypothesis, then no possible outcome can count as evidence for it. This is easy to prove (a short sketch follows the list below), and it’s part of why people are wary of unfalsifiable hypotheses. But falsifiability can be lacking in two different ways:

  • Sometimes, a hypothesis is unfalsifiable because there genuinely is no evidence that could possibly distinguish it from the alternatives. Perhaps the hypothesis is meaningless. Or perhaps it’s not meaningless, but it just happens we can’t get any empirical evidence and we’re stuck evaluating the hypothesis on a priori grounds. Interpretations of quantum mechanics are a good example.
  • Sometimes, a hypothesis is unfalsifiable because its proponents cheat and count all possible evidence as neutral or (at least in some cases) favorable. Here you can apply the proof I mentioned above to show they’re being irrational. But now you’re really dealing, not with unfalsifiable hypotheses, but with unfalsifiable people. The hypotheses themselves have no falsifiability problems at all; usually, they’re not true, and this is reflected in the evidence being against them. The existence of people who die without ever having heard of Christianity is evidence against Christianity even if Christians don’t admit that it is.
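
Here, for concreteness, is one way to write out the short proof mentioned above, with H the hypothesis and E any possible observation:

    P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \lnot E)\,P(\lnot E)

P(H) is a weighted average of P(H | E) and P(H | ¬E), so if observing E would raise your probability for H, then failing to observe E would have to lower it. A hypothesis that nothing could count against is therefore also a hypothesis that nothing could count for.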

The first type of unfalsifiability gets undeserved hate because it’s associated with the second type. If I propose an interpretation of quantum mechanics that makes the same predictions as all other interpretations, that doesn’t mean I’m cheating, like you could argue Marxists or Freudians or Christians do. It just means I’ve made a claim in the domain of philosophy.

Cartoon Bias

It would explain a lot if something like this were going on:

  • In fiction, when characters with transhuman attributes appear, the focus is on the coolness, scariness, or plot function of their powers. Rather than people with meaningful inner lives, they tend to be just walking lists of impressive stats.

    [Image caption: Black Belt Bayesian Blog does not endorse this candidate for WTA president.]

  • People form the category of “superhuman” as something belonging in the domain of cheesy fiction, or cheesy elements of non-cheesy fiction.
  • They conclude transhumanists want a world where everyone is smarter, stronger, and longer-lived, but at the cost of turning into a cartoon character lacking human-like complexity.

Maybe this isn’t quite true. Some stories explore how real human beings might deal with unusual powers. But in the context of transhumanism, with its emphasis on technology usable by anyone rather than by a few favored individuals, those aren’t the stories that first come to mind. Brave New World is.

Is the Hole Natural?

You may have heard the news that astronomers discovered a billion light year hole in space. It’s tempting to blame an expanding alien civilization, but I think the hole is natural. Here are some bad and good reasons why:

  • In the picture, the hole still seems to have some galaxies in it, and I would expect a posthuman civilization to eat all galaxies in its reach. But from what I can tell, the science doesn’t exclude the possibility of a completely empty gap.
  • In the picture, the hole seems potato-shaped. If the aliens expanded at the same speed in all directions, the hole would be spherical — or, depending on how you look at it, pear-shaped, because we see closer galaxies later in time. But again, from what I can tell, we don’t know much about the hole’s exact shape. And there could be explanations for the wave having a different speed in different directions, like differences in density.
  • We see the gap as it was 6-10 billion years ago. That’s early in the history of the universe. Is it too early for civilizations to have formed? According to Charles Lineweaver, “68% of earths in the Universe are between 3.3 and 9.3 Gyr old while 95% are between 0.6 and 10.5 Gyr old”. So it doesn’t seem completely out of the question.
  • Why didn’t the colonization wave make the hole even bigger? Keep in mind that the farther out we look, the more into the past we see. If light from there is just arriving, probes couldn’t have arrived yet, and neither could evidence of further colonization. So in theory, the hole could be a lot bigger now than we see it. If expansion happens at almost the speed of light, the edge will reach us in only millions of years (a rough calculation follows this list). But although this is possible, it’s a major coincidence. If colonization waves are really that fast, then the probability of catching one in action is tiny. And if expansion is much slower, explaining how they already colonized a billion light year blob becomes tough.
  • The prior probability that colonization waves emerge rarely enough to solve the Fermi paradox but often enough for us to encounter one is quite small.
  • Finally, from how I understand the observations, we know two things about the hole: there are few radio-wave-emitting galaxies there, and the cosmic microwave background from that direction is slightly colder, as you would expect if its photons had passed through a huge region with less mass than average. I have no idea whether Matrioshka brains or whatever it is these guys build would emit a lot of radio waves. I do think removing all mass from a billion light year blob would require magic physics.
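
To see roughly where the “millions of years” figure above comes from, here is a crude flat-space estimate that ignores cosmological expansion entirely; the distance and speed are illustrative assumptions, not measurements. If the near edge of the wave was a light-travel distance d away when the light we now see left it, and the wave expands toward us at speed v, the edge arrives behind its own light by

    \Delta t \;=\; \frac{d}{v} - \frac{d}{c} \;\approx\; \frac{d}{c}\left(1 - \frac{v}{c}\right),
    \qquad d \approx 8 \times 10^{9}\ \text{ly},\ v = 0.999\,c
    \;\Longrightarrow\; \Delta t \approx 8 \times 10^{6}\ \text{years}.

A front moving at nearly light speed lags only slightly behind the news of its own existence, which is what makes catching one in action such a coincidence.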

As you may have noticed, a lot of “I don’t know”s remain. I would be grateful if someone with more expertise could clarify.

Wouldn’t it be ironic, by the way, if the first proof of alien life were an immense radio silence?

Elsewhere

Gordon McCabe helpfully collects some freely available online chapters of several philosophy of physics anthologies.

Curi has some thoughts about life in a posthuman world (not all of which I would endorse).

Robin Hanson at Overcoming Bias and Richard Chappell at Philosophy, Etc criticize some knee-jerk Popperians. (You know the type. “If I can’t immediately think of an unambiguous test, then it’s not science and it must be religion, and it’s meaningless, and false too! But hey, if religion is what you get off on, go ahead! Philosophy? There’s no such thing!”)

Memory Sports

Since the early nineties, people have been competing at feats of memory in events like the World Memory Championships. The top contenders seem to be Britons and Germans of either sex. Record holder Ben Pridmore once memorized a random pack of cards in 26 seconds, 27 random packs of cards in an hour, and 3915 random bits in half an hour.

This, to me, is at least as impressive as running 100 meters in 10 seconds. Unfortunately, in today’s world, with its paper and its search engines, it doesn’t seem much more useful, either. I would hesitate to call mnemonics a meaningful form of intelligence augmentation. Still, it’s always fun to see brains do something you thought they couldn’t.

If you’re curious about the techniques involved, Mentat Wiki has a good overview.

Underinvesting in Knowledge

Bjørn Lomborg, in an interview about his new book Cool It, says:

It’s about research and development and I specifically propose we invest 0.05 percent of GDP in research and development in non-carbon-emitting energy technologies — this could be wind, solar, you name it. There are many different opportunities. The idea is, it’s 10 times cheaper than Kyoto. It’s likely going to be maybe 100 times cheaper than the follow-up to Kyoto, which is going to be negotiated in my home town, Copenhagen in 2009. And yet it’s a 10-fold increase in the research and development that we commit right now to these issues. So it is one that is doable, it is politically feasible and it is smart. In the long term, it will likely do much more good than Kyoto or son of Kyoto will ever do — and it will actually have the effect in the long term to halt global warming.

Is Lomborg right? Does society underinvest that badly in research and development? Knowledge is a classic public good, so markets will tend to underinvest in it; but you would also expect greater investment in it by governments and charities to have a natural base of supporters.

Utility

A lot of people get confused over different uses of the word “utility”. Here’s my understanding.

Decision-theoretic agents have preferences over states of the world. Given some consistency assumptions, you can assign a number to each state such that the agent always prefers the state with the highest number. This number is called “utility”, and the agent a “utility maximizer”. If an agent is uncertain about the consequences of different actions, you can show that under reasonable assumptions the agent behaves as if it assigns utilities to individual outcomes and prefers the action that maximizes the expected value of these utilities. This is called the “Expected Utility Theorem”, and an agent to which it applies is an “expected utility maximizer”.
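
To make the formalism concrete, here is a minimal Python sketch of an expected utility maximizer. Everything in it (the actions, outcomes, probabilities, and utility numbers) is invented purely for illustration:

    # Hypothetical utilities over outcomes; the numbers are arbitrary.
    utility = {"sunny_picnic": 10, "rainy_picnic": -5, "stay_home": 1}

    # P(outcome | action), also invented for the example.
    outcome_probs = {
        "go_on_picnic": {"sunny_picnic": 0.7, "rainy_picnic": 0.3},
        "stay_in": {"stay_home": 1.0},
    }

    def expected_utility(action):
        """Probability-weighted average of the utilities of the action's possible outcomes."""
        return sum(p * utility[outcome]
                   for outcome, p in outcome_probs[action].items())

    # The expected utility maximizer simply picks the action with the highest expected utility.
    best_action = max(outcome_probs, key=expected_utility)
    print(best_action, expected_utility(best_action))  # go_on_picnic 5.5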

This formalism is used both in economics and in ethics.

Economics (of the neoclassical kind) models consumers and other economic actors as such utility maximizers. These models are imperfect, and there’s a lot of controversy over their usefulness which this post will not comment on. What’s clear is that utility, in the economic context, is not happiness. Happiness is probably related to utility in some way, but they’re not the same thing. Utility is not something you can experience. It’s just a mathematical construct used to describe the optimization structure in your behavior. Each agent’s utility function describes only that agent’s own choices, and it’s only defined up to rescaling and shifting anyway, so there’s no meaningful way to add utilities across different agents, either.

Consequentialist ethics says an act is right if its consequences are good. Moral behavior here amounts to being a utility maximizer. What’s “utility”? It’s whatever a moral agent is supposed to strive toward. Bentham’s original utilitarianism said utility was pleasure minus pain; nowadays any consequentialist theory tends to be called “utilitarian” if it says you should maximize some measure of welfare, summed over all individuals. Summing utilities is a necessary part of these theories, and the way to do it will either be obvious (like if utility is the number of bananas you eat), or will have to be specified by the theory (like if utility is the degree to which you achieve your goals). Take note: not all utility maximizers are utilitarians.

There’s no necessary connection between these two kinds of utility other than that they use the same math. It’s possible to make up a utilitarian theory where ethical utility is the sum of everyone’s economic utility (calibrated somehow), but this is just one of many possibilities. Anyone trying to reason about one kind of utility through the other is on shaky ground.

Often it’s tempting to interpret economic theory as assuming people strive after happiness, or people who achieve their goals are happier than people who don’t, or people achieving their goals is a good thing, or people achieving their goals is the only good thing, or rational people are egoists. Some economists say these things. Economics doesn’t.

It’s not quite true that economics says nothing about utility in the ethicist’s sense. There’s a branch of economics called “welfare economics” that does. For example, the economist John Harsanyi proved that if you’re behind a veil of ignorance, you must accept utilitarianism, which says the best policy is the one that maximizes the sum of everyone’s individual utility. But such a proof makes some explicit assumptions. One of these is the Pareto principle: if doing something makes nobody worse off in terms of preferences and at least one person better off, it should be done. That sounds reasonable, but it does assume that nothing matters other than preference fulfillment. This is not an obvious ethical truth.

With this much opportunity for confusion, when people mention “utility”, it’s a good idea to make sure you (and they) know exactly how they’re using the word.

[Image caption: Jeremy Bentham's HEAD!]

First Aid for P-Values

The strength of an experimental result is commonly stated in the form of a p-value. For example, if the p-value is 0.01, that means that if there were no effect, the probability of getting a result at least as extreme as the one actually obtained would be 0.01. This is a useful number, but it’s easily misinterpreted. What you really want to do is compare the probability of getting the exact result you got under the null hypothesis to the probability of getting that result under the alternative hypothesis. The ratio of these two is called a Bayes factor. Only this information will allow you to rationally update your degree of belief in the null hypothesis, and the p-value alone doesn’t give it to you.
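
In symbols, writing H0 for the null hypothesis, H1 for the alternative, and D for the observed data, the Bayes factor and the way it updates your odds are:

    \mathrm{BF} \;=\; \frac{P(D \mid H_0)}{P(D \mid H_1)},
    \qquad
    \frac{P(H_0 \mid D)}{P(H_1 \mid D)} \;=\; \mathrm{BF} \times \frac{P(H_0)}{P(H_1)}

This is just Bayes’s theorem written in odds form; a p-value by itself fixes neither of the two likelihoods.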

I don’t know how widely known this is, but there’s a trick you can use to translate p-values into Bayes factors. Assume the prior distribution for the effect size is symmetric about the null value and peaked there. Then the following formula gives you a minimum Bayes factor:

    -e · p · ln(p)

For example, if your p-value is 0.05, your minimum Bayes factor is about 0.4. That means the odds for the null hypothesis (that is to say, P(null true)/P(null false)) are multiplied by at least 0.4. A null hypothesis that starts out as 75% probable still ends up, at minimum, about 55% probable. So a p-value of 0.05 isn’t nearly as bad as it sounds.
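
As a sanity check, here is a small Python sketch of that arithmetic. The 0.75 prior is just the example’s, not a recommendation:

    from math import e, log

    def min_bayes_factor(p):
        """Minimum Bayes factor -e * p * ln(p); the bound applies for p < 1/e,
        under the symmetric-and-peaked-at-the-null prior assumption above."""
        return -e * p * log(p)

    def min_posterior_prob_null(prior_prob_null, p_value):
        """Lower bound on P(null | data) implied by the minimum Bayes factor."""
        prior_odds = prior_prob_null / (1 - prior_prob_null)
        posterior_odds = prior_odds * min_bayes_factor(p_value)
        return posterior_odds / (1 + posterior_odds)

    print(min_bayes_factor(0.05))               # about 0.41
    print(min_posterior_prob_null(0.75, 0.05))  # about 0.55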

This article has a handy little table listing some other possible values. It also gives a weaker formula for a minimum Bayes factor that does not make any assumptions about the prior. This pdf article explains more about this minimum and about Bayes factors in general.

If you see a p-value quoted for a result you doubt (cough parapsychology cough), and if you know the basic technical concepts, it can be quite useful to have the formula or table at hand. Coincidental results are more common than you might think.

Disappointment

Steven Landsburg, in his book The Armchair Economist, writes:

[I]f you chose this book randomly off the shelf, it would be as likely to exceed your expectations as to fall short of them. But you didn’t choose it randomly off the shelf. Rational consumer that you are, you chose it because it was one of the few available books that you expected to be among the very best. Unfortunately, that makes it one of the few available books whose quality you are most likely to have overestimated. Under the circumstances, to read it is to court disappointment.

Sounds reasonable, doesn’t it? You underestimate some books, you overestimate some others, and if a book seems to you to be the best, then it’s more likely that you overestimated it than that you underestimated it. Landsburg goes on to apply the same reasoning to potential marriage partners before entering into the chapter’s main topic, the “winner’s curse”, which means that if you bid higher than anyone else in an auction, you probably overestimated the item’s value.

It’s certainly possible for a rational thinker to be systematically disappointed in a subset of possible outcomes. If you don’t know whether it’s going to be cloudy, you can expect the weather to disappoint you in case of cloudiness. Likewise, in the case of the winner’s curse, you will be either in the situation where your bid is higher than all other bids, or in the situation where it’s not. In the former case, you probably overestimated the value; in the latter case, you probably underestimated it. You just don’t know in advance which situation you’re in.

But there’s something wrong with the books (and marriage) example. Just before you start reading a book, you already know what subset of outcomes you’re in — you know you’re reading the book that seemed like the best buy, and you should already have taken this into account somehow. A rational thinker can never expect, unconditionally, to be disappointed. If you already know it’s going to be cloudy, you can expect the weather to disappoint you in case of rain, but you can’t expect the weather to disappoint you on the whole. So the reasoning I quoted must be flawed.

I suspect the problem is with orthodox methods of statistical inference.

An “unbiased estimator” of book quality is one that, for each possible actual quality, doesn’t come out too high or too low on average. More precisely, the estimator’s expectation value conditional on each possible actual value is equal to that actual value. Maybe the perceived interestingness of the cover and title is such an estimator.

If you take one of these estimators and use it as your expectation for the book’s quality, then yes, you are likely to be disappointed. High estimates will often be high partly because of random error rather than genuine quality; this is the effect called “regression to the mean”. But you don’t want an “unbiased” estimator. You want an estimator that’s “biased” in the direction of your prior knowledge. The right way to get such an estimator is to start with a prior probability distribution over book quality, use Bayes’s theorem together with data like how interesting the cover looks to turn it into a posterior probability distribution, and then take the expected value. With this estimator, you can no longer expect to be disappointed. You will be disappointed some of the time, but not systematically; the expected value of your disappointment is zero.
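
To make this concrete, here is a small simulation sketch. The model is entirely made up (true quality and estimation noise are both standard normal), but it shows the pattern: pick whichever book looks best, and the raw “unbiased” estimate is systematically disappointing, while the posterior-mean estimate (which in this model just halves the signal) is not:

    import numpy as np

    rng = np.random.default_rng(0)
    n_trials, n_books = 100_000, 20

    # Hypothetical model: true quality ~ N(0, 1); the "cover impression"
    # is the true quality plus independent N(0, 1) noise.
    quality = rng.normal(size=(n_trials, n_books))
    signal = quality + rng.normal(size=(n_trials, n_books))

    rows = np.arange(n_trials)
    best = signal.argmax(axis=1)                # buy the book that looks best

    naive_estimate = signal[rows, best]         # take the impression at face value
    bayes_estimate = 0.5 * signal[rows, best]   # posterior mean: shrink halfway toward the prior mean
    actual = quality[rows, best]

    print("naive disappointment:", np.mean(naive_estimate - actual))  # clearly positive
    print("bayes disappointment:", np.mean(bayes_estimate - actual))  # close to zero

The factor of one half is just the normal-normal posterior mean, the prior variance divided by the sum of prior and noise variances; with a different prior or noise model the shrinkage would differ, but the expected disappointment of the posterior mean would still be zero.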

Consistent with this reasoning, I didn’t find Landsburg’s book disappointing at all. But spurn Bayes, and you will find yourself regularly disappointed as a consumer, in your love life, and everywhere else. Don’t say you weren’t warned.