More Anthropic Activity?

The Large Hadron Collider operation was delayed again. Again!

The joke is that the delays keep happening because the LHC would kill us all if it worked, and that it’s anthropically likely that we’d be born into a universe with a high population, one where human extinction keeps not happening for “mysterious” reasons. (To clarify, I lean towards thinking this is false.)

Can anyone say something about why they think the Doomsday Argument is false? I’ve read some rebuttals but found them unconvincing. I understand that all this stuff is fuzzy, but I still haven’t been talked out of it.

Comments

  1. gus k.

    If the Everett many-worlds interpretation is correct, and if the LHC triggers a vacuum collapse, then we will never observe the LHC to work. Each time it is turned on it destroys the universe, except in the tiny fraction of histories where it fails to successfully turn on. Since we only observe those histories where we continue to exist, from our perspective we observe a series of more and more improbable occurrences that prevent it from turning on. First a fuse breaks, then a power outage, then the operator has a heart attack, then there’s an earthquake, etc.
    This is the quantum suicide experiment writ large. Instead of a single person playing Russian roulette, we play Russian roulette with the whole universe.
    I think MWI is the most probable interpretation, but there are cosmic rays more powerful than the LHC, so this is a joke. But if hundreds of attempts improbably failed, then this would be the most rational explanation.
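    The selection effect described above can be sketched as a toy Monte Carlo. A minimal sketch, assuming made-up numbers (the failure probability and attempt count are illustrative, not anything about the real LHC): branches where the machine turns on lose their observers, so every surviving observer has, by construction, witnessed an improbable run of failures.

```python
import random

random.seed(1)

P_FAIL = 0.1          # per-attempt chance of a mundane failure (illustrative)
ATTEMPTS = 5          # consecutive start-up attempts
BRANCHES = 1_000_000  # simulated Everett branches

# A branch keeps its observers only if every attempt fails to turn on.
surviving = sum(
    all(random.random() < P_FAIL for _ in range(ATTEMPTS))
    for _ in range(BRANCHES)
)

# Unconditionally, five failures in a row is a 1-in-100,000 event,
# but every observer left standing has seen exactly that run.
print("surviving fraction:", surviving / BRANCHES)
```

    The surviving fraction is tiny, yet it is 100% of the observers who remain to report anything.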

  2. gus k.

    Richard Gott’s Doomsday Argument goes like this: you should assume that you, as an individual, are average (Copernican humility). If humanity survives for thousands of years, then quintillions of humans will exist throughout the galaxies, and you would be among the very first humans ever to exist. If we were to pick a random human out of the quintillions destined to exist, the chance of picking one who lived in the 21st century or earlier is one in a billion. Are you really that special?
    On the other hand, if humanity ends in the 21st century, when the population is 10 billion, out of 100 billion total humans in all of history, the chance that a random person would live in the 21st century is 10%; by the 22nd century, 50%. Therefore, if we don’t violate the Copernican principle, humanity will end soon.
    The argument is wrong because it confuses probability with indexicality.
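    The round numbers in the argument above can be checked directly. This is just the commenter’s own arithmetic, taking their figures (10 billion alive in the 21st century, 100 billion humans ever born by then, roughly 1e20 if we spread through the galaxies) as givens:

```python
born_by_21st = 100e9    # all humans ever born through the 21st century
alive_in_21st = 10e9    # humans living during the 21st century

# If humanity ends in the 21st century, the total is 100 billion:
print(alive_in_21st / 100e9)   # 0.1, i.e. 10%

# If humanity spreads through the galaxies, the total is ~1e20:
print(born_by_21st / 1e20)     # 1e-09, "one in a billion"
```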

  3. gus k.

    If you buy a lottery ticket with one-in-a-million odds, the probability that tomorrow you will be a winner is one in a million. It is not logical to assign probabilities to indexicality. What is the probability that you are Michael Anissimov? Why are you not Gus K.? Why are you human and not a mouse? These are meaningless questions. There are numerous humans in the world, and you just happen to be the one named Michael Anissimov. That is part of the brute indexicality of your existence. It is meaningless to say that you have a 1 in 6 billion probability of being who you are. It is equally meaningless to assign probabilities to us being the earliest humans, or the last generation of humans. We are humans who happen to live in the 21st century. That is part of the brute indexicality of who we are.

  4. mjgeddes

    It’s nonsense alright, but as you say, very hard to pinpoint exactly why.

    Look at it this way: if you guys are right about AI and the Singularity, there’s something very, very peculiar about everyone at the SIAI, and especially Yudkowsky.

    Now imagine that, in all the QM branches where the folks at SIAI actually succeeded in initiating a good Singularity, they had bought national lottery tickets and won, and that this is how they gained enough money for AI research. That is, only in the QM branches where the researchers bought national lottery tickets and won did civilization survive, because only those researchers had enough money for research.

    Does that mean that everyone at SIAI should rush out and start buying national lottery tickets? Is there something anthropically peculiar about SIAI folks that would give you all a much greater chance of winning the lottery?

    (Remember: in all QM branches where you, Michael, win the US national lottery, SIAI wins enough donations for AI research.)

  5. Thuris

    Yes. It’s one thing to argue that intelligent life on some other planets is highly likely, based on the assumption that starting conditions obtaining on early Earth obtain on many other planets. It’s another thing to say that our failure thus far to find evidence of intelligent life elsewhere means that we would have to be the first, and that the odds for any one species being first — even first in the galaxy — are vanishingly low, and therefore we should adopt great skepticism towards the idea of not being alone in the universe.

    But somebody has to fill that first spot, and for whatever species does so, it is not a matter of how likely they are to be there, but just a matter of fact. (Not suggesting that this is settled fact, just using an example.)

    How likely is it to be the tallest person in the world? Depends whether you’re the tallest person in the world. If not, it’s pretty unlikely.

  6. See my paper, “Past Longevity as Evidence for the Future”, in the January 2009 issue of Philosophy of Science:
    http://www.journals.uchicago.edu/doi/abs/10.1086/599273

    The paper argues that the Leslie Doomsday Argument conflates future longevity and total longevity. For example, DA’s Bayesian formalism is stated in terms of total longevity, but plugs in prior probabilities for future longevity.

    Some of the paper is discussed on my blog here:
    http://ronpisaturo.com/blog/2009/06/30/my-refutation-of-the-doomsday-argument/
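    To make the alleged conflation concrete, here is a toy version of the DA’s Bayesian formalism. A minimal sketch, hedged as illustration only (the flat prior, the cutoff, and the birth rank are assumptions of this sketch, not figures from the paper): under self-sampling, the likelihood of birth rank n given total longevity N is 1/N, so the update is over *total* longevity, which is the slot into which, the paper argues, priors for *future* longevity get wrongly plugged.

```python
# Toy Doomsday-Argument update over the TOTAL number of humans N.
# Flat prior and cutoff are illustrative assumptions only.
N_MAX = 10_000   # cutoff on total humans (arbitrary units)
n_rank = 60      # our observed birth rank, in the same units

prior = {N: 1.0 / N_MAX for N in range(1, N_MAX + 1)}
# Self-sampling: P(rank = n | total = N) = 1/N for n <= N, else 0.
likelihood = {N: (1.0 / N if n_rank <= N else 0.0) for N in prior}

unnorm = {N: prior[N] * likelihood[N] for N in prior}
Z = sum(unnorm.values())
posterior = {N: p / Z for N, p in unnorm.items()}

# The posterior piles up at small N: the "doom soon" shift.
small = sum(p for N, p in posterior.items() if N <= 10 * n_rank)
print(f"posterior P(N <= {10 * n_rank}) = {small:.2f}")
```

    With these assumed numbers, roughly 45% of the posterior mass lands below ten times our birth rank, despite the flat prior.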

  7. kr

    The short answer: it makes no sense to calculate the probability of an event after it has happened.

    I have an excellent quote from Richard Feynman illustrating this very principle:

    http://xph.us/2009/07/23/probability.html

  8. mjgeddes

    To me the anthropic arguments send out a big red flashing siren: ‘BLACK SWAN WARNING!’ Here’s something that’s not really understood, involving intricate aspects of probability theory and multiverse theory.

    You SIAI folks should worry that a huge black swan is lurking just waiting to pop out like a fuckn’ jack-in-the-box, and the solution to anthropic puzzles could demolish the current understanding of Bayes, the multiverse and AI theory.

    I read Eliezer on ‘Less Wrong’ talking about ‘platonic people space’, which caused me to shake my head sadly. There aren’t any people in ‘platonic space’, and therein, I think, lies the big problem with anthropic arguments: a possible confusion of levels of abstraction, one which assumes that there are people in a platonic space somewhere just waiting to pop out. And multiverse theory could hinge on this same problem.

    Like I’ve suggested on ‘Overcoming Bias’, what if reductionism is false and reality is actually stratified into different levels of abstraction? See the Bohm interpretation of QM, for instance:

    http://en.wikipedia.org/wiki/Bohm_interpretation

    Beware the black swan of anthropics!

  9. mitchell porter

    “Can anyone say something about why they think the Doomsday Argument is false?”

    I can’t, because I don’t think it’s false.

    What perplexes me, from the anthropic perspective, is why I should be a conscious being at all, when unconscious ones are so much more numerous. (The argument that “if I wasn’t conscious, I wouldn’t be asking the question” does not answer the question.)

  10. To the (silly) Anthropic Principle, I reply simply:

    “Is it really so surprising that we find ourselves in a universe that allows life, on a planet that is suitable for life?

    For, in universes that don’t allow life, and on planets not suitable for life… we would not have selves to find.”

    It’s really that simple. Anyone who doesn’t get it is just wasting my time, frankly. Even if god does exist, this anthropic nonsense is a terrible argument for its existence, and I think I’ve made that clear.

    http://zenithnoesis.wordpress.com/2010/05/16/anthropic-nonsense/


Trackbacks for this post

  1. The LHC incidents and quantum immortality « La Singularidad Desnuda
