Predictable Mistakes

Some argue that radical life extension would be a bad thing because, not fearing death, immortals would choose to delay any major accomplishments. This is a common class of argument, and there’s something very strange about it.

It’s paradoxical for most people to believe that most people will predictably make some specific mistake — not just that they’ll behave selfishly in a collective action problem, but that they’ll fail badly and foreseeably at pursuing their own interests. To make this sort of argument, you not only need to believe you’re in a special class of people who happen to have some insight. You need to believe that your insight will keep failing to change people’s behavior, again and again — even though you expect it to convince the listener that there’s a problem.

If you’re given thousands upon thousands of years to make a simple point, in a world of increasing knowledge and perhaps increasing intelligence, then, if you were really right, why would you expect to keep failing? Sure, people make dumb mistakes with their lives today. But they don’t have hundreds of years of life experience, they’ve grown up in a human-unfriendly and rapidly changing world, and mostly, they simply haven’t had the chance yet to think very deeply about things and hear all relevant viewpoints. Our civilization is very young. Give it time.

(You could maybe repair the “people need fear of death” argument by saying that regardless of what people’s conscious beliefs say they ought to be doing, it won’t work if they don’t feel the motivation in their bones. You could also argue that there is a collective action problem. But that’s just this particular example. The point I’m making is more general.)

It’s a Historical Novel… in Space!

I’ve never watched the series Firefly, but its Wikipedia entry introduces it as follows:

According to Whedon, nothing has changed in the future: there are more people with more advanced technology, but they still have the same problems politically, morally, and ethically.

This is the most concise trampling on the ethos of science fiction that I’ve seen. Folks, for better or worse, technology changes the rules.

Kehlog Albran put it better: “I have seen the future and it is just like the present, only longer”.

Mirror Matter

It’s unsafe to place a lot of confident physical bounds on what advanced civilizations will be able to do. Here’s an example that may turn into the start of a series.

Mirror matter is made of particles that are in a sense normal matter’s mirror image. Some physicists think it exists, perhaps even in large enough quantities to play a role in astronomy. Mirror matter would be invisible, but would behave like normal matter gravitationally, and might still interact weakly with normal matter. Robert Foot, a physicist who’s studied mirror matter, has written a book on it, part of which is on the web.

Note well: mirror matter is not antimatter is not negative mass is not matter-in-parallel-worlds.

If mirror matter exists in significant amounts, the possibilities get crazy. It’s been suggested that the Tunguska event was the impact of a body made of mirror matter, some of which is still in the ground and could be recovered by an expedition. Exoplanets could be made of mirror matter. There could be mirror stars. There could even be mirror planets in our own solar system. Some have speculated that perhaps Pluto or some moons of Jupiter are made of mirror matter with a small crust of normal matter, so that if you went there and dug down, they’d seem hollow, but you could mine the stuff.

Why would you want to collect mirror matter? There are probably a lot of different applications. Perhaps the most interesting is free energy. The mirror world, not having a mirror sun, is very cold. From any temperature difference, you can extract work at the cost of letting heat move from hot to cold. We couldn’t keep cooling the real world and heating up the mirror world forever, but we could do so for a long, long time.
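
For a rough sense of the numbers, here is a toy Carnot-limit calculation. Both temperatures are purely illustrative assumptions (ordinary matter near room temperature, a mirror-sector reservoir a few kelvin above absolute zero), not claims about what the mirror sector would actually be like.

```python
# Toy Carnot-limit calculation for dumping heat into a cold "mirror" reservoir.
# Both temperatures are illustrative assumptions, not predictions.

def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of the transferred heat that can be turned into work."""
    return 1.0 - t_cold_k / t_hot_k

T_HOT = 300.0   # ordinary matter near room temperature, in kelvin (assumed)
T_COLD = 3.0    # a cold mirror-sector reservoir, in kelvin (purely hypothetical)

eta = carnot_efficiency(T_HOT, T_COLD)
print(f"At best, {eta:.1%} of the heat flowing across could become useful work.")
```

As the mirror side warms up, the efficiency falls toward zero, which is one way of restating why we couldn't keep this going forever.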

If this isn’t classic mad science, I don’t know what is.

The verdict: P < .1. (I’m going purely by authority — it’s clearly legit science, but also clearly a minority view.)

But if true, incredibly cool.

Chess for the Warcraft Generation

Just for fun, from the Chess Variants Page, here’s Fantasy Grand Chess, with six possible armies.

Some other variants also look intriguing.

And people say immortals will get bored! Really, Chess is a genre, not a game. I wonder whether serious players of standard chess haven’t long reached the point of diminishing returns in fun. Probably one of those lock-in things.

Gotta Catch ‘em All

The intellectual world is made of Good Insights. If you’re a truth-seeker, you want every Good Insight you can get your hands on. Where do you find these things? Very intelligent people often generate them. But most or all people, even very intelligent people, have a set of Biases. Every possible Bias blocks some possible Good Insights.

Your job as a truth-seeker, then, is to follow a set of thinkers containing, for each Bias, at least one thinker who is free from that Bias. It’s a matter of collecting the set. A thinker whose Biases are many, with one very common Bias not among them, is a rarer, more valuable beast than a thinker whose Biases are few, but include all those that are very common. Diversity sometimes beats quality. The Presentable Pundit may be far more likely to be right than the Mad Theorist around the corner, and yet less worth reading.
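
As a sketch of what "collecting the set" might look like, here is a minimal greedy version of the model. The thinkers, the Biases, and the greedy rule are all made-up illustrations, not a serious procedure.

```python
# A minimal sketch of the "collect the set" model. The thinkers, the Biases,
# and the greedy rule are all made-up illustrations, not a serious procedure.

THINKERS = {  # thinker -> the set of Biases that thinker suffers from
    "Presentable Pundit": {"conventional-mindedness"},
    "Mad Theorist": {"overconfidence", "confirmation bias", "grandiosity"},
    "Careful Academic": {"conventional-mindedness", "status-quo bias"},
}
ALL_BIASES = set().union(*THINKERS.values())

def collect_the_set(thinkers):
    """Greedily pick thinkers until every Bias has someone free of it."""
    reading_list, uncovered = [], set(ALL_BIASES)
    while uncovered:
        # Choose the thinker free of the largest number of still-uncovered Biases.
        best = max(thinkers, key=lambda t: len(uncovered - thinkers[t]))
        newly_covered = uncovered - thinkers[best]
        if not newly_covered:  # no remaining thinker escapes the leftover Biases
            break
        reading_list.append(best)
        uncovered -= newly_covered
    return reading_list, uncovered

chosen, leftover = collect_the_set(THINKERS)
print("Follow:", chosen, "| Biases nobody here escapes:", leftover)
```

In this toy data the Mad Theorist makes the reading list despite carrying the most Biases, because he is the only one free of the very common one.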

At least, that is what this model implies. Is it a useful model?

Yin without Yang?

“There is no Yin without Yang”, some people say, where Yin is something like life or light or happiness, and Yang is its opposite, like death or darkness or suffering. This could have a number of different meanings.

  • “Once you’ve defined Yin, you’ve defined Yang”. For example, once you have the concept of life, you have the concept of death, meaning the lack or ending of life. This is true, but it doesn’t seem to have any practical implications. Just because something is thinkable or talkable-about, doesn’t mean it has to exist somewhere.
  • “If no instances of Yang ever occur, no instances of Yin can ever occur”. This is true for some values of Yin, but not others. If Yin means “unusual strength”, there can be no Yin without Yang, because not everyone can always be unusually strong. If Yin is life or light or happiness, then the claim is irrelevant whether it’s true or not. After all, instances of death, darkness, and suffering have already occurred.
  • “If no instances of Yang occur at a particular time, no instances of Yin can occur at that time”. This is a slightly stronger variation on the previous claim, one that would be relevant. Perhaps, even if Yang had existed in the past, there could be no Yin in a currently Yangless universe. Height seems to work like this, unless you want to say the whole world can go up and down. If nowhere is low, then nowhere is high. Life does not work like this. Being alive is not a relative thing — it does not mean you’re “more alive than average”. I would say the same about light and happiness. Sure, not everywhere can be especially light, and not everyone can be especially happy. But the interesting senses of “light” and “happiness” are absolute. If it’s light enough that I can see things, that doesn’t change if other places are also light. If I’m happy, that doesn’t change if other people are also happy.
  • “To be motivated, any mind needs to experience both pain and pleasure at times.” Minds are rather confusing things, but I don’t think so. I imagine that a human you kept happy and pain-free would still be motivated to do a lot of things. In posthumans, especially, I don’t see why mild pleasure and great pleasure couldn’t play all the important functional roles that pain and pleasure do in humans. (A mind that couldn’t experience pain wouldn’t magically start finding mild pleasure subjectively painful. Why should it be impossible to build a mind whose experiences at all times disposed it to prefer having those experiences to having no experiences at all?) More alien minds could probably function without either.
  • “There exists a mechanism that creates Yang when there is too much Yin”. Maybe some people would say there is karma or a cosmic balance that has actual physical effects. These people are wrong, though. If there are barriers to a universal abolition of Yang, these will not have to do with karma. Perhaps they’ll have to do with physics or sociology. If we’re at the world’s maximum carrying capacity, one death will have to balance each birth. If we have a finite amount of fuel, then if we use some fuel to make light in one place, we have to keep some other places dark. But that’s not because of a general principle that “there is no Yin without Yang”, it’s because that’s how physics happens to work.
  • “Yin without Yang is possible, but then it doesn’t really count.” This is the position someone like Leon Kass would take about death — sure, you could have life without death, but it would lose its value. I tend not to find claims like this convincing. I think there’s a major bias that produces them: in our human-unfriendly world, accepting some of the bad things in life has always been instrumentally useful, and we may be habitually carrying over the association into imagined futures where these things are no longer necessary. I also note that even if your Yin is cheapened by a lack of Yang, the cheapened Yin may still be preferable to adding Yang. Historical atrocities gave some people the opportunity to show genuine moral heroism, and maybe such heroism has intrinsic value — but that doesn’t mean the atrocities were worth it.

I don’t claim to have refuted claims of this last kind. Maybe I’ll try in a future post.

Do Simulations Matter?

The Simulation Argument, formulated by Nick Bostrom, aims to show that you’re probably inside a simulated world. It assumes that enough civilizations like ours go on to spawn posthuman descendants that create many such worlds, and that enough of those worlds are like Earth. In all of spacetime, simulated versions of our civilization then outnumber originals. (Note that the Simulation Argument is not the same thing as the conclusion that you’re in a simulation.)

I think the argument and assumptions could hold up. If so, what does that mean we should do? Robin Hanson has made some suggestions. It seems to me, though, that (to a first approximation) the possibility of being in a simulation should make no difference to the behavior of a non-egoist agent. Here’s a quick informal argument.

Imagine you’re in a huge tree. It’s foggy, so you can’t tell whether you’re at the trunk or at one of a subset of the tree’s (sub-)branches. There are many branches and only one trunk, so you can assume you’re probably at a branch. You feel a strange urge to apply chemicals to the wood, and have two choices. One chemical, BranchKiller, is deadly to branches but not the main trunk; the other chemical, TrunkKiller, is deadly to the main trunk but not the branches. Assume you like the tree and want to save as much of it as possible.

In this situation, you should clearly apply BranchKiller, not TrunkKiller. Since you’re probably at a branch, it’s true that BranchKiller is more likely to harm the tree. But if you are at the trunk, the effects will spread to all branches. If you had some clones, one at each trunk or branch where you think you might find yourself, then using TrunkKiller would always kill the entire tree, and using BranchKiller would always kill only part of the tree.
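
To make the comparison concrete, here is a toy quantification of the story. The particular numbers (a hundred branches, a uniform chance of being at any node, one unit of value per node) are illustrative assumptions, not part of the argument.

```python
# A toy quantification of the tree story. The numbers (a hundred branches, a
# uniform chance of being at any node, one unit of value per node) are
# illustrative assumptions, not part of the argument.

N_BRANCHES = 100
NODES = N_BRANCHES + 1        # all the branches plus the single trunk
p_trunk = 1 / NODES           # chance that you are the one at the trunk

# Single decider: expected amount of tree left standing after you act.
ev_branchkiller = p_trunk * NODES + (1 - p_trunk) * (NODES - 1)  # at worst, one branch lost
ev_trunkkiller = p_trunk * 0 + (1 - p_trunk) * NODES             # a small chance of losing it all

# All copies at once: one copy of you at every node, all following the same policy.
all_branchkiller_survivors = 1   # every branch dies, but the trunk stands
all_trunkkiller_survivors = 0    # the trunk dies and takes every branch with it

print(f"single decider -> BranchKiller {ev_branchkiller:.2f} vs TrunkKiller {ev_trunkkiller:.2f}")
print(f"all copies     -> BranchKiller {all_branchkiller_survivors} vs TrunkKiller {all_trunkkiller_survivors}")
```

On the naive expected count the two chemicals come out nearly even; the gap only becomes stark once you notice that every copy of you decides alike, and that in the analogy the trunk is the level that decides what happens to all the branches.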

Now imagine the tree is the universe, the trunk is base-level reality, and the branches are simulated worlds. It’s possible that people in simulated worlds could do things to affect their fate that wouldn’t work in base-level reality. But for every decision made in a simulated world, base-level reality trumps that decision. We in base-level reality get to decide (if only very indirectly) what universes, if any, are created, and whether they can be influenced post-creation. Making sacrifices in base-level reality for gain in simulated worlds is like killing the tree’s trunk to save its branches. Unless we can very reliably help simulated worlds at very little cost in base-level reality, it seems to me we can just ignore the simulation issue entirely.

Update: see the comments for some corrections and clarifications.

Rapture of the Nerds, Not

The idea of a technological singularity is sometimes derided as the Rapture of the Nerds, a phrase invented by SF writer Ken MacLeod [update: this isn't true, see his comment] and popularized by SF writers Charlie Stross and Cory Doctorow. I can take a joke, even a boring old joke that implies I’m a robot cultist, but it irks me when jokes become a substitute for thinking. There’s always someone in discussions on the topic who uses the comparison to fringe Christian beliefs about the End Days as if it’s some sort of argument, a reason why all the developments postulated by those who do take the singularity seriously will fail to materialize.

Although the parallel — like any appeal to authority, positive or negative — might work as a heuristic, a hint to look harder for more direct and technical criticisms, it of course fails as such a criticism itself. When computing atom trajectories in a supercomputer, or in nanotechnological devices, Mother Nature doesn’t check the consequences against a List of Ridiculous Beliefs, rejecting any outcomes too similar to those expected by the uncool and stupid.

Now, it could be that if there’s a close similarity between the singularity and the rapture, this points at some sort of psychological flaw shared by believers in both, a seductive but irrational attractor of the human mind that sucks people in, with those raised religiously dressing it up in terms of God, and us technologically-oriented atheists imagining a human-made God-substitute. But that image of a shared psychological flaw is itself so seductive that it has distorted people’s view of what the singularity is about into a kind of geek-bible-wielding strawman — singularitarian ideas are assumed to parallel fundamentalist Christian ideas even where they don’t, just because the comparison is apparently so much fun. “Oh, look at those silly nerds, aping the awful fundies without even knowing it!” In this post, I will list some (but not all) ways in which the singularity and rapture resemble each other less than some people think.

First, though, it’s worth listing some ways in which the singularity and the rapture do resemble each other. Both deal with something beyond human rules emerging into the world, something so powerful that, if it wanted to, it could make human effort from that point on irrelevant. Some predictions about the singularity have included ideas that this power would suddenly help us “transcend” human life and our human bodies, with uploading, in the critic’s mind, paralleling God snatching true believers up to Heaven. And with such an event looming on the horizon, it’s only to be expected that both groups would take the possibility very seriously, in some cases even making it a central concern in their lives.

Now, some differences:

  • Rationalism: Whatever you want to call it — critical thinking, debiasing, epistemic hygiene — I don’t know of any movement that emphasizes this nearly to the extent that the singularity movement does. For example, a few of the posters at Overcoming Bias (a group blog that I highly recommend) are involved somehow in the singularity movement, and several others buy into some sort of transhumanist worldview. I think this convergence is a good sign. Here’s an article by Eliezer Yudkowsky (pronounced “Frankensteen”) that talks about how to avoid biased views of the future. Whatever you might call the rapture believers, paragons of rationality they’re not.
  • Naturalism: Though this should be obvious, all developments associated with a technological singularity would take place within the ordinary physical processes that scientific folks have come to call home. Kooks like Terence McKenna, who connect the singularity to Mayan prophecies, are laughed at by serious singularitarians. Transhuman intelligence, through its ability to explore unexpected corners in solution space, may seem magical, but we all realize no actual magic is involved. I’m not one who would disqualify non-naturalistic claims outright; it’s just that, in my opinion, the evidence so far strongly favors a naturalist world. Still, it’s a difference worth noting.
  • Uncertainty: Contrary to what you might think, most singularity activists don’t think a singularity is in any way an unavoidable consequence of technological progress; the collapse of human civilization, unfortunately, could avert it quite well. Nor are they anywhere close to absolute certainty that the singularity is going to happen, or happen soon, the way the rapture people have faith in their rapture. It’s possible, after all, that we’ve underestimated the difficulties in making progress in AI, brain scanning, and the like. Experience from past attempts at futurism, as well as psychological research, tells us that it’s easy to be overconfident. But one thing they do agree on is that the singularity is worth influencing, and that some kinds are worth striving toward. In terms of expected value, even a 10% chance of such a world-changing event should cause many of us to refocus our efforts. A high-profile exception on this point may be Ray Kurzweil, whose overly precise predictions based on such an unreliable methodology as curve-extrapolating should earn him at least a little mockery (though there is also a lot to like in his writings).
  • Human-caused: Rapture believers wait for an external God to save them, independent of human effort. Programming a transhuman AI, on the other hand, is something we humans can do. The singularitarian worldview is sometimes claimed to encourage an attitude of passive waiting. I think that’s an unfair accusation; it actually encourages going out and solving problems, just through a different approach.
  • Nature contingent on human action: Again, contrary to what you might think, singularity activists don’t blindly expect a singularity to be positive. Intelligence sometimes helps humans understand how to bring out the best in themselves, but this will not generalize to AI. Unless thorough precautions are taken from the beginning, a superintelligent AI is likely to be indifferent toward us — not benevolent or cruel, just uncaring, except to the extent that our continued existence might get in the way of whatever it’s trying to achieve. That means it’s crucial for the first successful AI project to work under such precautions. And that is also to say: a project under such precautions should be the first.
  • No in-group perks: In the Singularity-as-Rapture-for-Nerds analogies that I’ve seen, it’s claimed that the Nerds expect only themselves to benefit, with the rest Left Behind in their misery, in the same way that only Christian true believers are supposed to benefit from the rapture. This seems like a clear example of fitting reality to your analogy when it should be the other way around. I haven’t seen any predictions by singularity advocates that restrict the effects to some elite group of techno-savvy westerners, nor have I seen anyone advocate that this should happen. The singularity is supposed to benefit humanity, not some particular group, and singularity activists understand this. If we succeeded at building an AI that cared enough to leave us alive in the first place, the AI would almost certainly be enough of a humanitarian to help all of us, and with maybe thousands of years of progress in a short time, it would have the resources to make this easy. Scenarios where only the rich and 31337 classes benefit from technological progress seem like a possible danger (though, if the cost of new technologies falls quickly enough, only a temporary one), but these are always pre-singularity scenarios.
  • No religious trappings like rituals, worship, or holy writings. I’d expand on this further if this post were about comparisons to religion in general, but it’s specifically about rapturism.
  • No revenge: One of the dynamics fueling religious rapture beliefs is the expectation of unbelievers being deliciously proved wrong when it happens, after which horrible things will happen to them. As far as I know, no one in the singularity movement deals in anything like these revenge fantasies. This is a good thing.
  • No anthropomorphism: The Christian God is in a way just a big authoritarian alpha monkey, but a superintelligent AI is not expected to think or behave anything like a human. Perhaps it would manifest more like a new set of laws of nature than like a human leader. It might not even be conscious. It would certainly not be a source of arbitrary moral authority.
  • The difference that actually matters, of course, is that a belief in the rapture is not justified by the evidence, and a qualified belief in the singularity, defined as disruptive changes caused by a recursively-improving superhuman AI, is. I have found a truly marvelous proof of this, but alas, it falls beyond the scope of this post.

It’s also interesting to think about what would happen if we applied “Rapture of the Nerds” reasoning more widely. Can we ignore nuclear warfare because it’s the Armageddon of the Nerds? Can we ignore climate change because it’s the Tribulation of the Nerds? Can we ignore modern medicine because it’s the Jesus healing miracle of the Nerds? It’s been very common throughout history for technology to give us capabilities that were once dreamt of only in wishful religious ideologies: consider flight or artificial limbs. Why couldn’t it happen for increased intelligence and all the many things that would flow from it?

It would be tragic if, by thinking of some subjects as inherently religious, we let the religious impose their terms on our understanding of the world.

Anthropic Reasoning

Observational selection effects are biases created when different hypotheses make different predictions about the existence of observers. For example, you could argue that just from the fact that our solar system has an Earthlike planet in it, we can’t conclude that solar systems with Earthlike planets are typical; if observers evolve only on such planets, then that is what they’ll observe no matter what a typical system looks like.
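
Here is a toy simulation of that selection effect, under the simplifying assumptions that each system has one planet and that observers evolve only on Earthlike ones.

```python
# A toy simulation of the selection effect. The setup (one planet per system,
# observers evolving only on Earthlike planets) is an illustrative assumption.
import random

def share_of_observers_seeing_earthlike(frac_earthlike, n_systems=100_000):
    """Among systems that produced an observer, how many look Earthlike to them?"""
    observers = see_earthlike = 0
    for _ in range(n_systems):
        earthlike = random.random() < frac_earthlike
        if earthlike:            # observers only ever show up here
            observers += 1
            see_earthlike += 1
    return see_earthlike / observers if observers else float("nan")

for f in (0.5, 0.01, 0.0001):
    share = share_of_observers_seeing_earthlike(f)
    print(f"Earthlike systems: {f:8.2%} -> observers who see one: {share:.0%}")
```

Every observer reports an Earthlike system whether such systems are common or vanishingly rare, so in this toy model the bare observation that our own system has one tells us nothing about how typical it is.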

Some people have taken to calling the study of how to deal with these effects “anthropics”. There’s a lot of subtle philosophy that goes into this, and I have yet to find any one account that I think gets it all right. I may do a post with actual content later, but here are some (mutually inconsistent) works I found particularly enlightening. If you combined some of the ideas here you might end up with something good: