Writing the Aumann game posts got me thinking about games designed to promote rationality. Nick Bostrom and commenters discussed the topic on Overcoming Bias. Tom McCabe had a post about rational debating. In this post I’ll brainstorm about two ideas I’ve had for computer/video games.
First, Bayes Bayes Revolution.
The basic tool here is a probability slider that you move left and right with two keys or a mouse. It runs from 0 to 1, but probably not on a linear scale. For convenience, there should be more distance between 0.01 and 0.02 than between 0.50 and 0.51. And maybe the probability you've set should also show up on a view-only linear display, so the nonlinear scale doesn't warp your intuitions the same way. A two-color pie diagram might work.
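One way to get that nonlinear scale, assuming a log-odds mapping (my choice; the post doesn't specify one): equal slider distance corresponds to equal odds ratios, so the slider naturally spends more of its length near 0 and 1.

```python
import math

def slider_to_prob(x, scale=4.0):
    """Map a slider position x in [-1, 1] to a probability.

    Uses a log-odds (logistic) scale, so the regions near 0 and 1 get
    more room than the middle: x = +/-1 maps to odds of e^(+/-scale).
    """
    return 1.0 / (1.0 + math.exp(-scale * x))

def prob_to_slider(p, scale=4.0):
    """Inverse mapping, e.g. for drawing the slider at a given probability."""
    return math.log(p / (1.0 - p)) / scale
```

With `scale=4.0`, the gap between 0.01 and 0.02 takes up far more slider travel than the gap between 0.50 and 0.51, which is exactly the convenience property described above.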
The game starts out posing you a question, maybe whether a hidden animal is red or green. You have some initial guess for the probability, either from having played the game before (50-50 if you haven’t), or because the game told you.
Then, one by one, at a rate depending on difficulty level, the game gives you pieces of evidence. You might see a blue ball flying across the screen, and you might know (again, from experience or because you’re told) that red animals throw blue balls three times as often as green animals. If your initial probability for a red animal was, say, 0.2, then if you crank the Reverend you find:
p(red animal | blue ball) = p(red animal) * p(blue ball | red animal) / p(blue ball) = 0.2 * 3q / (0.2 * 3q + 0.8 * q) = 3/7, where q is the probability that a green animal throws a blue ball (it conveniently cancels out).
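In odds form the same update is a one-liner; a minimal sketch (the function name is mine):

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior after seeing evidence that is likelihood_ratio times
    as likely under the hypothesis as under its negation."""
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

# Prior 0.2 on a red animal; blue balls are 3x as likely from red animals:
print(bayes_update(0.2, 3))  # 3/7, about 0.4286
```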
The game scores you based on how close you put the slider to this correct answer. Then when the next piece of evidence comes it scores you based on how close you put the slider to the correct answer after taking into account both pieces of evidence. And so on. From what I understand about existing games of this genre, they rate you “OK” or “excellent” or whatever each time according to some formula. (You’re being scored here on following the right procedure rather than placing a high probability on the right outcome; that would be cleaner in a way, but would introduce a luck factor.)
Starting from there, you can make the game as simple or complicated as you want. It can be spartan and abstract or full of bells and whistles, using simple independent drawings from an urn or bombarding the player with baroque causal diagrams.
In the end it would be like following a sort of dance routine. But the Bayes Dance is a very special kind of dance, because if you do it right, you never have any idea where you’re going. In fact, to those who can’t hear the Music of the Evidence, the Bayes Dance looks exactly like a random walk. Just like “if we knew what we were doing, it wouldn’t be research”, one can say that “if we knew where we were going, it wouldn’t be Bayes”. Take a Bayesian to a regular dance lesson, and he will say, “if I already know I’m going to have to step to the left, then that must be a better place for my foot, so why not put it there to start with?” Or he will say, “my leg has to stay here on average, so if you’re that sure it’s going to the left, then in the unlikely case that it does go to the right, it will have to fly off, all the way out of the window”.
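The "my leg has to stay here on average" line is the conservation of expected evidence: before you see the evidence, your expected posterior equals your prior. A quick numerical check for the red-animal example (the value q = 0.3 is an arbitrary choice for illustration):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a binary hypothesis."""
    num = prior * p_e_given_h
    return num / (num + (1.0 - prior) * p_e_given_not_h)

prior = 0.2                            # p(red animal)
p_blue_red, p_blue_green = 0.9, 0.3    # red throws blue balls 3x as often
p_blue = prior * p_blue_red + (1 - prior) * p_blue_green

post_if_blue = posterior(prior, p_blue_red, p_blue_green)
post_if_not = posterior(prior, 1 - p_blue_red, 1 - p_blue_green)

# The posterior steps left or right, but its expectation is the prior:
avg = p_blue * post_if_blue + (1 - p_blue) * post_if_not
# avg equals 0.2 (up to floating point)
```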
So in Bayes Bayes Revolution, as indeed in life, there is no predictable routine you can practice. All you can do is align your gut feelings with the math so they can work with any input.
I seem to have turned BBR into a rationality lecture more than an actual game idea. I guess if you really hated rationality lectures, you wouldn’t come here. Aumania is a different game idea, one that greatly resembles the Aumann game we’ve played on this blog before, and unlike BBR I could see it being quite enjoyable.
To stay with the metaphor from earlier a little longer: Aumann’s agreement theorem says that if two Bayes dancers have gone out of synch because they heard different music, then when they watch each other after the music stops (or perhaps between notes if they’re quick), they will do a sequence of steps, each reacting to the last, that ends with them gradually converging back into synch and standing still.
Unlike BBR, Aumania is necessarily multi-player. The point of the game is to react smartly to the opinions of others.
Here’s how it works. Each player has an individual probability slider, like in BBR. Maybe they’re arranged in a circle on the screen. The game presents the players with randomly generated claims, perhaps like the ones we used for the Aumann game. Players have maybe ten seconds to move their slider. The key is that they should take special note of how all the other sliders are being set. When time is up, the answer is revealed and everyone gets scored.
Your score is the logarithm of the probability you put on the correct answer. That way, entering your true subjective probability maximizes your expected score. Score, defined this way, is never positive (zero only if you put probability 1 on the right answer); the game could either accept this, transform the scores back into percentages, give players a fixed amount of free points for every question, or whatever.
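A sketch of this scoring rule, with a check that honest reporting maximizes expected score (the natural log and the specific numbers are my choices):

```python
import math

def log_score(reported_p, claim_true):
    """Log of the probability the player assigned to the actual outcome."""
    return math.log(reported_p if claim_true else 1.0 - reported_p)

def expected_score(reported_p, true_p):
    """Expected score for reporting reported_p when the claim is
    true with probability true_p."""
    return (true_p * log_score(reported_p, True)
            + (1.0 - true_p) * log_score(reported_p, False))

# Shading your report in either direction loses points in expectation:
honest = expected_score(0.7, 0.7)
assert honest > expected_score(0.6, 0.7)
assert honest > expected_score(0.8, 0.7)
```

Incidentally, `log_score(p, True)` and `log_score(p, False)` for the current slider setting are exactly the two trade-off bars described below.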
Score could also map to hit point damage. Learning to be rational so you can help humankind find and act on true beliefs is nice and all, but learning to be rational because otherwise Mario gets eaten just before killing that level 6 boss, that’s motivation.
For every setting of the probability slider there is a corresponding score if the claim turns out true and another if it turns out false. These could be displayed as two bars, to show the trade-offs you’re making.
Aumania is partly about knowing your trivia, but not as much as you might think. Players will do better in groups where a lot of knowledge is dispersed among the members than in groups with little. But the world’s most ignorant person can score the same as the world’s most knowledgeable person, just by always copying that person’s slider setting. If he’s also the world’s most stubborn person, that’s when he has a problem.
I’m not sure whether Aumania would work best as a cooperative game (players maximize the group’s total score), an independent game (players maximize their own score), or a competitive game (players minimize the rank order of their score compared to the other players). In a cooperative or competitive game, there might be incentives for deception, which means Aumann’s theorem won’t work. Unpredictable deadlines might help.
Again, you can add as many complications as you want. Random pieces of evidence, as in BBR, would be one possibility. Other means of communicating evidence, such as chat, would be another.
If we can teach the next generation to update beliefs as smoothly as this, our species will be utterly unstoppable.