For those of you who actually came here because of the title, this is self-recommending.
If you knew everyone’s opinion on everything, you could extract lots of useful information from the correlations. So maybe there should be a Web 2.0 thing that lets users answer a lot of controversial questions and maybe displays the answers in a profile.
Then you could let people query it based on user-defined criteria. (“Among people over 50 who believe in string theory, who’s considered the favorite to be Cthulhu’s running mate in 2012?”)
You could also try out many algorithms to figure out which one best turned opinion data into the right answers to objectively scorable questions. (“What will the temperature be in a year?”) Then you could apply that algorithm to answer all other questions.
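A toy sketch of that selection step (all data, users, and rule names here are hypothetical, not any real system): score a few candidate opinion-aggregation rules on questions with known answers, then reuse the winner on an open question.

```python
# Toy sketch of "pick the aggregation algorithm that best answers
# objectively scorable questions". All data here is made up.
from statistics import mean, median

# Five users' estimates for objectively scorable questions,
# paired with the eventual true answers.
scored = {
    "temp_in_a_year": ([14.0, 15.5, 15.0, 21.0, 15.3], 15.2),
    "rainfall_mm":    ([600, 650, 640, 900, 645], 648),
}
# An open question with no objective scoring available.
open_estimates = [0.2, 0.4, 0.35, 0.9, 0.5]

# Candidate rules for turning many opinions into one answer.
rules = {
    "mean": mean,
    "median": median,
    "trimmed": lambda xs: mean(sorted(xs)[1:-1]),  # drop one outlier per side
}

def loss(rule):
    """Total absolute error of a rule on the objectively scored questions."""
    return sum(abs(rule(estimates) - truth)
               for estimates, truth in scored.values())

best_name = min(rules, key=lambda name: loss(rules[name]))
consensus = rules[best_name](open_estimates)  # apply winner to open question
```

The biased-subset worry from above shows up immediately: a rule that merely resists outliers wins on these numeric questions, which says nothing about whether it generalizes to questions scored differently.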
Potential problems abound. For example, objectively scorable questions are a biased subset of all questions. Methods used to extract the most reliable answers to them may not generalize. Also, there would be “strategic voting”-type issues.
If these problems could somehow be solved or contained, the result would arguably be the most authoritative source on Earth, and a new argument for majoritarianism. (I picture it coming with the sound of a booming voice saying, “You dare disagree with Authoritron?” That way it will reduce irrational overconfidence in one’s personal opinions. Social psychology, etc.)
(This is a similar idea but with the intent of fixing inconsistencies in an individual set of opinions rather than using other people as authorities.)
Sometimes people set up strawman arguments because they’re easier to knock down than the real thing. But that’s a beginner mistake.
What I’ve seen far more often in reasonable people (myself included) is that they come to a discussion with some point in mind that they want to make, because they think it’s underappreciated or because they had an “aha” experience thinking of it, and then they interpret others as saying the thing countered by that point, when in fact those others are saying something subtly different.
If all you have is a +5 holy pitchfork “Scarecrowbane” that you obtained through perilous questing, every stranger looks strawy.
(With posts like this I am probably just being a captain unto the obvious, but I think it’s good to keep naming and shaming these tendencies. Overcoming Bias and logical fallacy lists do well already, but how about a TV Tropes for argument patterns?)
An argument from authority is an indirect argument; it tells you there are probably direct arguments that convinced the authority.
An argument by analogy is an indirect argument; it tells you there are probably direct arguments for the thing you’re considering that are like the direct arguments supporting the analogous thing.
An argument with ambiguous words and sentence constructions in it is an indirect argument; it tells you that if you picked the right exact meanings and stuck to them all the way through the argument, you would probably find yourself with a direct argument.
In all these cases, “logical information” is being left off the table. Some of the same considerations apply here that apply when ordinary facts are withheld.
If you know the arguer is arguing in good faith, indirect arguments are merely a noisy (and “causally distant”) signal.
But to the extent that the arguer could be arguing in bad faith — as an advocate rather than a truth-seeker — the fact that the direct arguments weren’t specified is itself evidence they don’t work, just like when facts aren’t given, that’s evidence they don’t go the arguer’s way.
So I would say that, because indirect arguments create space for advocates to operate in, discussions between truth-seekers can be “loose” and still be informative, whereas discussions between advocates should be “tighter” if they are to have much value.
Any argument by analogy can be transformed into an argument not by analogy. If you’re not sure whether you believe A, but you already believe B which someone says is analogous to A, then the procedure is as follows:
- Figure out all the reasons why you believe B, including the ones you haven’t yet verbalized.
- Throw out all the reasons that don’t also apply to A.
- (While you’re at it, throw out all the reasons that are wrong and adjust your belief in B.)
- List all the reasons that do apply to A, deleting all references to B.
An argument by analogy is not so much an argument as a (sometimes credible) promise of one; it is a mark of homework left undone.
Aumann’s agreement theorem, which I’ve mentioned here a few times before and which has informed many discussions on Overcoming Bias, states:
If two (or more) Bayesian agents have the same priors and their probability estimates for some proposition are common knowledge, then those estimates must be equal, even if the agents have different information.
Concretely, as agents trade probability estimates back and forth, conditioning each time on the other’s estimates, probabilities will tend to the same number.
This is a surprising result that has often been summarized as saying “rational agents cannot agree to disagree”. I think there are some problems with applying the theorem this way in practice that haven’t been highlighted enough.
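The back-and-forth described concretely above can be simulated directly. In this sketch (the state space, event, and partitions are arbitrary toy choices), two agents share a uniform prior but learn different things; they alternately announce posteriors, each announcement shrinks the common-knowledge possibility set, and the estimates converge.

```python
# Toy run of the estimate-trading dialogue behind Aumann's theorem.
# The state space, event, and information partitions are arbitrary choices.
from fractions import Fraction

STATES = set(range(1, 10))               # common uniform prior over 9 states
EVENT = {3, 4}                           # the proposition both agents estimate
P1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]   # agent 1 learns which cell holds
P2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]   # agent 2 learns which cell holds

def cell(partition, state):
    return next(c for c in partition if state in c)

def posterior(possible):
    """P(EVENT | possible states), under the uniform prior."""
    return Fraction(len(EVENT & possible), len(possible))

def dialogue(true_state, max_rounds=20):
    m = set(STATES)                      # common-knowledge possibility set
    announced = []
    for _ in range(max_rounds):
        for partition in (P1, P2):
            q = posterior(cell(partition, true_state) & m)
            announced.append(q)
            # Announcing q reveals that the true state lies in a cell whose
            # m-restricted posterior equals q; everyone updates on that.
            m &= set().union(*(c for c in partition
                               if c & m and posterior(c & m) == q))
        if announced[-1] == announced[-2]:   # posteriors agree: done
            break
    return announced
```

Note that the convergence relies on honest announcements and a genuinely shared prior; drop either assumption, as the practical worries below do, and the guarantee disappears.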
In The Wrong Tail, I discussed a moral reason to promote true beliefs. There’s also an ethical reason to promote true beliefs. Manipulate people (including yourself) in pursuit of your goals, and others will manipulate people (including themselves) in pursuit of theirs.
By the way: there is a game-theoretical reason to speak the literal truth even in those cases where it does not promote maximally accurate beliefs. Doing otherwise creates room for those who truly are dishonest, or who overestimate their ability to “lie the truth”, to operate in. In general, there’s a game-theoretical reason not to do things that are usually wrong, even when they happen to be right, so long as it’s hard for others to check when they’re right.
Of course, “there’s a reason for X” by no means implies “we should do X”.
We can see the world as a great web of facts. Suppose we cut out one node, and let something grow back by doing probabilistic inference from surrounding nodes.
Truth is the thing that’s there. Truthiness is the thing that grows back.
Often, we pay too much attention to truth and too little attention to truthiness.
A random person finds an alien device beside the road. With equal probability, pressing the button kills ten thousand people, saves ten thousand lives, or does nothing. Our random person presses the button. Depending on what happens, history will remember him as a great villain, a great hero, or just a random person. Only that last opinion is correct.
According to Feeling Rational, emotions and rationality are compatible — what matters is that you deal with evidence in the right way, not how you feel while doing so; and since you’re human, with certain goals/desires/values, it’s only proper to react emotionally to your beliefs about the world, whether they’re true or false.
This is true and important. But it shouldn’t cause us to discount what is also true and important: the affect heuristic is a great source of bias, emotional arousal seems to make people reason shallowly and unreliably more often than it makes them reason deeply and reliably, and when emotions flare up, it’s hard for disagreements to stay fun and informative.
I might go so far as to say that the golden rule of online discourse is, “if you’re not in a neutral or reflective mood, shut up”.
(PS: “Rationality” in these discussions means something like: “whatever mental habits tend to lead to true beliefs”. In some other contexts it means: “relying on calm verbal reasoning”. Beware confusion.)
Groovies think that, when selecting beliefs, we should aim for accuracy and nothing but accuracy; that nothing but properly processed evidence will get us there; and that living up to this principle, far from being a natural tendency of humans and their organizations, takes a constant, conscious struggle, with its own lore and its own pitfalls.
Why “Groovy”? Because our information, as it flows in, defines a groove through the spacetime of beliefs. It is this groove that we strive to get down with. As it were.
Groovies believe that a human mind, or a community made up of human minds, is a giant monster that feeds on truth and chokes on confusion. Perhaps if you’re really clever and really lucky, you can make it choke in exactly the right way. But mostly you aren’t.
We believe that promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires, because they’re Nazis or whatever. Specifically, don’t do it to yourself.
(Sadly, we may have to count everyday social interactions as a partial exception.)
One of our mottoes is “if we believe absurdities we shall commit atrocities”. Another is “life is demanding without understanding”. False beliefs hurt both the believer and others.
To start from Robin Hanson’s metaphor of beliefs as clothes,
- A belief is not a uniform. It is not for making you the same as everyone else.
- A belief is not a gothic dress. It is not for making you different from everyone else.
- A belief is not a clown suit. It is not for entertaining your audience.
- A belief is not a crown. It is not for asserting your authority.
- A belief is not a slipper. It is not for giving you comfort.
- A belief is not a Che t-shirt. It is not for showing allegiance to a cause.
- A belief is not a high-heeled shoe. It is not for making you sexy.
- A belief is not an asbestos suit. It is not for keeping you from getting flamed, or fired.
- So what is a belief? A belief is a mirror suit, for reflecting the world.
I would very much like to see Grooviness become a widespread norm:
“If Europe continues down its path of disbelieving in God, its civilization will fade away because of the lack of babies.”
“Dude, that’s not Groovy.”
“That’s exactly the kind of reasoning that leads to rape.”
“Dude, that’s not Groovy.”
“If ze Cherman people are to attain kreatness, ve must maintain an unshakable confidence in ze superiority of our race.”
“Dude, zat is not Kroovy.”
Yes, Groovy ideals are already rather widely shared, at least on a lip service level. But in discussions among even intelligent people, you can easily find reasoning that, to us Groovies, is haram. We all have a little overworked demon in our heads whose job it is to decide what to believe (you may say you “contain multitudes”, but why aren’t they applying Aumann’s agreement theorem?). And the good news you can tell that demon is that it needn’t worry about what’s “fair” or “fruitful” or “inspiring”. That’s not its job. Deciding what’s true is worry enough.