The Other Side of the Immortality Coin

There are two sides to living as long as possible: developing the technologies to cure aging, such as SENS, and preventing human extinction risk, which threatens everybody. Unfortunately, in the life extensionist community and the world at large, the balance of attention and support is lopsided in favor of the first side of the coin, while the second is largely ignored. I see people meticulously focused on caloric restriction and SENS, yet apparently unaware of human extinction risks. There’s the global warming movement, sure, but scarcely any effort to address the bio, nano, and AI risks.

It’s easy to understand why. Life extension therapies are a positive and happy thing, whereas existential risk is a negative and discouraging thing. The affect heuristic causes us to shy away from the negative and focus only on projects with positive affect: life extension. Egocentric biases magnify the effect, because it’s easier to imagine oneself aging and dying than getting wiped out along with billions of others by a planetary plague, for instance. Attributional biases work against both sides of the immortality coin: because there’s no visible bad guy to fight, people aren’t as juiced up as they would be about, say, protesting a human being like Bush.

Another element working against the risk side of the coin is the assignment of credit: a research team may be the first to significantly extend human life, in which case the team and all their supporters get bragging rights. Prevention of existential risks is hazier, consisting of networks of safeguards that each contribute a little bit towards lowering the probability of disaster. Existential risk prevention isn’t likely to work the way it does in the movies, where the hero punches out the mad scientist right before he presses the red button that says “Planet Destroyer”; instead it will come from a cooperative network of individuals working to increase safety in the diverse areas that risks could emerge from: biotech, nanotech, and AI.

Present-day immortalists and transhumanists simply don’t care enough about existential risk. Many of them are at the same stage of ideological progression with respect to existential risk as most of humanity is with respect to the specter of death: accepting, in denial, dismissive. There are few things less pleasant to contemplate than humanity destroying itself, but it must be done anyhow, because if we slip and fall, there’s no getting up.

The greatest challenge is that the likelihood of disaster per year must be decreased to very low levels — less than 0.001% or something — because otherwise the aggregate probability computed over a series of years approaches 1 in the limit. There are many risks that even distributing ourselves throughout space would do nothing to combat — rogue, space-going AI; replicators that eat asteroids and live off sunlight; agents that pursue reproduction to the exclusion of value structures such as conscious experiences. Space colonization is not our silver bullet, despite what some might think. Relying overmuch on space colonization to combat existential risk may give us a false sense of security.
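
To make the arithmetic concrete, here is a minimal sketch; the per-year figures are illustrative assumptions, not estimates:

```python
# Chance of at least one disaster over n years, given a fixed per-year probability p:
#   1 - (1 - p)**n, which creeps toward 1 as n grows.

def risk_over_years(p, n):
    return 1 - (1 - p) ** n

print(risk_over_years(0.01, 100))      # 1% per year over a century       -> ~0.63
print(risk_over_years(0.001, 1000))    # 0.1% per year over a millennium  -> ~0.63
print(risk_over_years(0.00001, 1000))  # 0.001% per year over a millennium -> ~0.01
```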

Yesterday it hit the national news that synthetic life is on its way within 3 to 10 years. To anyone following the field, this comes as zero surprise, but there are many thinkers out there who might not have seen it coming. The Lifeboat Foundation, which saw this well in advance, set up the A-Prize as an effort to bring development of artificial life out into the open, where it should be, and the A-Prize currently has a grand total of three donors: myself, Sergio Tarrero, and one anonymous donor. This is probably a result of insufficient publicity, though.

Genetically engineered viruses are a risk today. Synthetic life will be a risk in 3-10 years. AI could be a risk in 10 years, or it could be a risk now — we have no idea. The fastest supercomputers are already approximating the computing power of the human brain, but since an airplane is way less complex than a bird, we should assume that less-than-human computing power is sufficient for AI. Nanotechnological replicators, a distinct category of replicator that blurs into synthetic life at the extremes, could be a risk in 5-15 years — again, we don’t know. Better to assume they’re coming sooner, and be safe rather than sorry.

Humanity has lived essentially without existential risks (apart from the tiny probability of asteroid impact) since Homo sapiens evolved over 100,000 years ago, and we’re about to be hit full-force by these new risks within the next 3-15 years. On that timescale, the interval between now and then is practically nothing. Ideally, we’d have 100 or 500 years of advance notice to prepare for these risks, not 3-15. But since 3-15 is all we have, we’d better use it.

If humanity continues to survive, the technologies for radical life extension are sure to be developed, on the strength of economic incentives alone. The efforts of Aubrey de Grey and others may hurry it along, saving a few million lives in the process, and that’s great. But if we develop SENS only to destroy ourselves a few years later, it’s worse than useless. It’s better to overinvest in existential risk reduction, encourage cryonics for those whose bodies can’t last until aging is defeated, and address aging once we have a handle on existential risk, which we quite obviously don’t. Remember: there will always be more people paying attention to radical life extension than to existential risk, so the former won’t lose much if you shift your focus to the latter. As fellow blogger Steven says, “You have only a small fraction of the world’s eggs; putting them all in the best available basket will help, not harm, the global egg spreading effort.”

For more on why I think fighting existential risk should be central for any life extensionist, see Immortalist Utilitarianism, written in 2004.

Comments

  1. I’ll overlook the information I have on AGW, and for this conversation stipulate that global warming is predominantly of ‘man-made’ origin.

    It is not an existential risk. Global warming simply cannot result in the destruction of our species; not even of our technological base. Our cultures and current civilizations? Sure — if one heeds the most extreme alarmist claims, such as the idea that the polar ice-caps will melt completely by 2050 & cause the earth’s oceans to rise by 10 meters. But to call that an ‘existential risk’ is pure misinformation.

    Of course, this only reinforces your point: that there seems to be not one single existential risk which society at large recognizes and seeks to combat.

    I worry about the ‘socialist’ overtones of many such attempts, though; much like with AGW (which, again, I’m stipulating as fact here), many would seek to hijack that concern for political agendas.

    Perhaps it’s better that only select intellectuals are dealing with it, for now?

  2. Hi Michael,

    How would quantum immortality affect the amount of effort we should rationally devote to defending against existential risks?

    On another topic, it seems that some existential risks can be protected against with the same technologies that are involved in increasing longevity. For example, a backed-up upload would not be susceptible to biological weapons.

    It also seems hard to defend against the risks before we even know what the technologies will look like. We might spend a lot of effort pontificating about a possible risk scenario, while a tech breakthrough catches us unawares with a completely different scenario.

    I think it makes sense to spend a % of effort on risk avoidance, but not to completely postpone beneficial research directions such as life extension.

  3. I’m thinking global warming is as good a literary device as any to create a known allegory for existential risk – quit the quibbling.

    Great writing again, Michael; you’re really beginning to outdo yourself on a regular basis.

  4. JM Inc.

    I especially agree with you here. Alas, we often hear of Dr. Bostrom and Dr. Rees attempting quite eloquently to turn our heads in favour of the consideration of existential risks, and yet I see so many of my transhumanist compatriots almost completely ignoring the problems, as though they were some sort of intellectual curiosity. Paying lip service is not nearly good enough to get the ball rolling (or the pen tipping, as it were). In fact, with dismay, I must admit that to all appearances the people who seem primarily concerned with existential risks are misanthropes like Moravec and De Garis who do not seem to mind the risks much, and bioconservatives who are doing everything in their power to publicise their effects, but who ultimately seem to care less than the misanthropes, simply because they view it primarily as a political angle. Personally, I think that bioconservative spin-tactics on the subject are, to borrow your phrase here, “worse than useless,” because of the nature of the balm they offer to fix it – they like to pose what I call the “Easy-Way-Out” projection, which we might charitably describe as the ‘medical malpractice scenario.’ Remember that old joke about the man who heads into his doctor’s office and says, “Doctor, you know, it hurts my shoulder when I twist it like this,” and demonstrates with a yelp, only to be told by his doctor, “Well, don’t twist it like that any more.” — well, that is, in short, the bioconservative answer to existential risk. I have always felt that it was the duty of every transhumanist to care about existential risk much more than any bioconservative does, quite simply because if we have our way (technologically speaking), the problem is amplified and compounded.

    Unfortunately, I see far too much starry-eyed futurism and far too little risk analysis. I find myself so concerned with the issue that I waver on the point of becoming a bioconservative myself, checked primarily by the fact that I believe that doing so would have an overall negative effect on my own ability to make a positive contribution to the problem. Surely a bioconservative solution would seem appropriate, excepting that all such solutions seem (at least to my eyes) to be short-sighted flights of fantasy, not addressing the very real underlying problems with technological risk, but merely proposing to numb and dumb the symptoms.

    We seem to be trapped between a rock and a hard place – caught, as it were, between humanity’s illogical past, dominated by evolutionary psycho-adaptations, and its uncertain but rapidly approaching future. I find Eliezer Yudkowsky’s ‘tipping-point’ metaphor particularly apt: a given intelligent species is consigned (as is any species, from varying evolutionary standpoints) eventually to slide from a high-energy region of stasis into one of a few lower-energy regions, namely extinction or some futuristic region of continued evolution. It is, after all, this same mechanism which drives evolution as a whole – die by entropy or evolve through natural selection. The only problem with us is that, now that we are smart enough to realise that we ought to get out and push, we are still not logical enough to know which direction we would rather go in.

  5. Miron,

    Even if quantum immortality were true, it would still be worth preventing risks, because 1) much of the negativity of death derives from the anguish felt by others in the same Everett branch as your own, 2) certain risks (like AI) could have effects so total that all possible paths lead to human extinction. In any case, practically no one acts as if QI is true, otherwise they wouldn’t mind being shot in the head. Until we know more, QI is just an intellectual curiosity.

    There is only a small overlap between risk prevention tech and longevity tech. Another example would be using SENS-like strategies to heal the brain and boost intelligence. However, this is mostly a red herring as the vast majority of suggested risk defenses have little to do with life extension.

    We know a fair amount about what some of the main risks would look like. Most of them involve either rogue replicators (bio, nano) or rogue intelligences (AI). In-depth studies have been done on all of these risks, and more are needed, but certainly a risk could pop up that we didn’t foresee. If so, that element of uncertainty would suggest that we devote even more time towards analyzing the risk landscape, in an attempt to reduce that uncertainty, rather than less.

    As I argue in the post, life extension research will never be completely postponed because the majority of people will always choose to focus on it anyway, no matter how severe the existential risk forecast.

    Thanks for your comment.

  6. Michael Wrote:

    If so, that element of uncertainty would suggest that we devote even more time towards analyzing the risk landscape, in an attempt to reduce that uncertainty, rather than less.

    I am struck by the idea that this seems an invitation to diminishing returns. I would suggest that energy should be invested into diminishing unknown risks — it can be done — rather than attempting to discern and address all risks. No matter how well you prepare, there’s always room for a Black Swan. The more rigid your spending patterns, the less you keep in reserve for the unknown. The optimal method of guaranteed risk aversion would be, treating all relevant hypotheses as verified, to spread into multiple membranes with variant Hawking Arrows. Short of that — spreading to multiple planets in the near term and to multiple star systems in the long term. Not too many; only a local cluster — anything more than that and temporal divergence would start to turn our own ‘offspring’ into a new source of existential risk. Come to think of it — that might have some bearing on the Von Neumann probe element of the Fermi Paradox: mutations inevitably occur over iterations. Any culture that can survive to make viable Von Neumann probes wouldn’t want to create a scenario of self-annihilation that approaches unity in the way Von Neumann probes do.

  7. Definitely read the Black Swan. We definitely cannot foresee certain risks, and we can probably foresee less and less as we accelerate. If that’s so, spending too much effort on risk prediction will take away from progress in beneficial technologies. There’s a balance, and I’m not sure exactly where it is.

    Michael, good points on quantum immortality. Note that complete obliteration is fine under QI, because it prunes that entire branch with minimal anguish. Under QI the goal is to minimize anguish-moments. You would want to minimize the risk of personally dying, since you don’t want your loved ones to accumulate anguish-moments in the branches where you don’t exist. You’d also not want a catastrophe that wipes out a percentage of people, because the survivors would experience lots of anguish.

    Another thing to ponder is that perhaps the Cuban missile crisis pruned 99% of our branches at that point, and the Cold War in general also pruned a high percentage.

    Yet another point is that long-lived humans (or humans with such prospects ahead of them) are more likely to be careful because they have more to lose. Cryonics is still very high risk.

  8. Darth Vasya

    IConrad: “It is not an existential risk. Global warming simply cannot result in the destruction of our species; not even of our technological base.”

    Unfortunately, it can. The thing is, the greenhouse effect has several positive feedback loops.

    Technically speaking, the main greenhouse gas is water (yep). It absorbs IR radiation and accumulates its energy. The IR absorption spectrum of water has a narrow ‘window’ of transparency, which presently allows much of this energy to pass through, but coincidentally, this window can be blocked by the presence of CO2, which is not transparent at these wavelengths.

    Now, the positive feedback: the amount of water in the atmosphere is directly related to the average temperature of the planet. The higher the temperature, the more water vapor, and hence the higher the temperature, and so on. Such a loop can make the temperature of the planet’s surface higher than the boiling point of water, which clearly is unfavorable for any water-based life.

    Presently, no such mechanism exists for the level of CO2. All of net CO2 emission into the atmosphere is due to humans burning fossil fuel. But if we raise the planet’s temperature high enough, other sources of CO2 can start to matter, namely the carbides on the ocean floor. Once they start decomposing, the CO2 emission will become self-sustaining, and the picture of an Earth without liquid water becomes complete.

    I’m not saying we’re doomed. There are many possible solutions to this problem, from extensive use of biofuels and solar power to simply moving the planet farther from the Sun, but those are not today’s technologies.

    At present, global warming is a threat to our existence, and something has been and will be done about it. But it’s one of the ‘nice’ threats, since we learned about it well in advance. That’s not the case with other, more rapid ‘sui-genocide’ scenarios or unpredictable disasters like asteroid impacts.

  9. Darth Vasya wrote:

    Unfortunately, it can. The thing is, the greenhouse effect has several positive feedback loops.

    … Forgive the language that follows: This is a fucking canard. Nobody knows anything about the actual feedback loops, what limits they have, and what regulatory cycles there are.

    Seriously: if those feedback loops were really so potent, the earth would have boiled to a crisp millions of years ago. CO2 levels have been higher than they are today; Earth’s temperatures have been higher than they are today — these are at different times, mind you. They always cycled back.

    Positive feedback loops are something you only hear about from hysterics like Al “Lying is okay if it creates a sense of urgency (see the answer to Question Six)” Gore.

    All of net CO2 emission into the atmosphere is due to humans burning fossil fuel.

    Whereas human sources of CO2 amount to just 3% of natural emissions, human sources produce one and a half times as much methane as all natural sources.

    I’m not saying we’re doomed. There are many possible solutions to this problem, from extensive use of biofuels and solar power to simply moving the planet farther from the Sun, but those are not today’s technologies.

    And I am saying that you need to learn how to account for your own cognitive biases when it comes to the accumulation of information.

    For example: consider the potential climate-regulating impact of precipitation.

    Michael; I apologize for making this direct. Misinformation annoys the hell out of me.

  10. “Unfortunately, it can. The thing is, the greenhouse effect has several positive feedback loops.”

    From past temperature data (http://www.globalwarmingart.com/images/8/8f/Ice_Age_Temperature_Rev.png), we know that the Earth has warmed 6-8 C in less than a thousand years, not just once, but dozens of times in the past. We’re still here.

    “the carbides on the ocean floor. Once they start decomposing,”

    The oil companies would have a field day. Could you imagine what it would be like, not even having to drill for natural gas because it just bubbles to the surface? Even if we can’t collect it for whatever reason, we still have this advanced technology known as a “match” that allows us to rapidly and cheaply convert methane into CO2 and water vapor.

  11. Okay, Tom and I are in complete agreement. That reduces the oppositional viewpoint’s probability of being correct to a unitary 0.00%… rofl

  12. anonymous

    In quantum immortality, all possible worlds exist so there is no need to do anything at all.

  13. I certainly agree with your point — that we need a lot more intelligent, focused attention to the challenge of minimizing the probability that the human species goes extinct in the coming centuries. I’ve been doing what I can (see my web page), but I wish a lot more people were putting 1+1 together and trying to do something real. By rights, I should be putting a larger share of my personal energy into high-level intelligent systems and new kinds of quantum stuff — since that’s where I have some special advantages to work with — but if our world sinks into oblivion because no one paid attention to simple arithmetic, all of that good stuff would end up as castles in the sand.

    But… in my view, after worrying about this for a long time… I really don’t think CO2 is number one on the list of what is MOST likely to kill us all. (It is high on the list, but more because of the wars it may induce than because of a direct threat of extinction of humans; even a 50% cut in world GNP is a far cry from zeroing it out.) Boring as they may be… plain old nuclear weapons are still out there, and are on course to expand in numbers and in distribution to a frightening degree.

    The future scenarios here remind me of what I once told a friend about Iraq: “It’s a coin toss. Heads we lose, tails Al Qaeda wins; our only hope is that the coin lands right on its side.” If the economic pressure for nuclear proliferation grows as much as we should expect from present trends, we face two extreme dangers. One is that the first use of a nuclear bomb by an angry group of some kind leads to retaliations (“they did it first”) and a kind of nuclear “kembi”, enough to push the earth environment over the edge for humans. The other is that global horror and legitimate fear after that first use will be so strong that governments are overturned worldwide, and we enter a regime where technology is so hated that future progress is severely stifled — enough to lead to long-term stagnation so severe that we are unable to address the challenges of the future, and sink slowly towards extinction over a few thousand years. (Re the latter: it would NOT be an equilibrium system.)

    Perhaps the most urgent issue, then, is how to develop new energy technology that can compete both with coal and with fission+enrichment soon enough, on a scale large enough to make it possible to change BOTH the proliferation AND the CO2 trends. (By the way, folks who really care about CO2 should be sounding the alarm about shale oil, which Exxon has been touting as ITS answer to the peaking of conventional oil production worldwide. About ten times the total CO2 per BTU as with coal, last I heard.)

    There are really only two serious hopes for such a large enough “third source.” There is the effort to make large-scale solar “farms” and batteries cheap enough to compete. There is energy from space (see my web page, or the NSS web page if you want another corroboration). Neither will be easy, and both entail risks. Because they are risky, the rational path is to support both as hard as we can, in order to minimize the risks to the human species. Yet for all the rhetoric about CO2… the real action to capture these hopes is pitifully small. There are so many people out there who love to talk about things, without moving a little finger towards the real action that we need!!!

    But — the sad fact is that we ALSO need to worry a lot about cultural reconciliation around the world, and the unabated population crisis (which has essentially been covered up by folks who just don’t like to think about inconvenient truths and like to get money from donors with ideological agendas).

    Best of luck to us all….

  14. The problem is groupthink. If the media chooses to look at a problem, then and only then does it become a problem, for the public. Existential risks do not take very much of the news hour, or news day.

    While climate change may present significant risks to civilisation, CO2 is not the central problem of climate change. Thus all the time, money, and energy spent on CO2 is essentially wasted, if the object is to cope with climate change.

    For all the talk on campuses about teaching critical thinking, in the end what is generally taught is groupthink. As long as that is the case, what universities are doing is administering academic lobotomies to already perpetually adolescent minds.

  15. MCP2012

    Well-said, Al Fin.

  16. Al, you’re slightly changing the subject by going into global warming. Also note many of the cognitive biases I named are panhuman: they afflict practically everyone, including those people you would call “the best of humanity”. Although I agree that groupthink and today’s academic system are bad, I think you have a tendency to twist the conversation into being about these things a little too often.

    Affective biases are panhuman… we can blame them on our evolutionary history, not the way things are run now. It’s easy to over-assign causal power to the things that annoy us in the here and now over things far in the past that we can’t change. Because you’re annoyed with the academic system today (justly so), you tend to blame it for most human failings. But humans can fail perfectly fine on their own, even under optimal circumstances. That’s why we’re striving for the transhuman.

  17. MCP2012

    As Michael A. has pointed out (dare I say touted?), thorium may turn out to be both economically and ecologically viable in the near future…

    And, yes, Darth Vasya is correct as far as it goes that water vapor in *some* forms can have greenhouse effects. But, as I’ve pointed out at least twice before (now several months ago…), water vapor in the form of clouds and/or fog banks can actually be systematically engineered (generated) so as to raise the Earth’s albedo (solar-energy reflectivity) and counteract any (supposedly deemed) problematic **temperature** increase(s). When the option of modifying the planet’s albedo slightly (’cause it wouldn’t take all that terribly much to make a non-negligible difference in surface & near-surface temperature) is factored in, one comes away with a much broader perspective both on “climate change” **&** what (if anything) we need to do about it…

    As for existential risks: I’m more than ever convinced that we need to press ahead with Friendly AGI development. One scenario—which partakes of *Colossus*, *Cyberdyne* and Bob Prehoda’s “IAN”—is that governments, corporations, and societies generally (these things, of course, NOT being all the same) gradually, over the next decade or two, turn over more & more key decision-making capabilities to A(G)I cybernated systems. The most important—indeed, crucial—thing in such a scenario is for proto-friendliness (i.e., proper normative thinking) to be incorporated into such systems on an ongoing basis. We shall see…

  18. One thing is certainly clear: without significant AGI and/or significant IA, the probability of getting a sufficient percentage of the population concerned about existential risk to do something effective about it approaches zero.

  19. A million, or even just a thousand, intelligent people today could do a huge amount to prevent existential risk. But of course, in the long run, intelligence enhancement would be incredibly helpful.

  20. Technical point: Michael wrote: “The greatest challenge is that the likelihood of disaster per year must be decreased to very low levels — less than 0.001% or something — because otherwise the aggregate probability computed over a series of years will reach 1”

    That’s actually not possible. The total probability of disaster will never reach (or exceed) 1. For example, if the probability of disaster in any given year is 10%, then the probability of disaster after n years is

    1 – (0.9)^n

    which tends to 1 as n tends to infinity, but never actually reaches 1.

    The situation is the same for any fixed probability of disaster, no matter how low. I would prefer the total probability of disaster not to tend to 1; for that, we have to keep reducing existential risk each year, so that the yearly risks form a convergent series and the cumulative probability stays below some limit less than 1.
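
    A minimal numerical sketch of this point; the yearly probabilities are illustrative assumptions, not estimates:

    ```python
    # Cumulative probability of disaster, given a sequence of per-year probabilities:
    #   P(disaster by year n) = 1 - (1 - p_1)(1 - p_2)...(1 - p_n)

    def cumulative_risk(yearly_probs):
        survival = 1.0
        for p in yearly_probs:
            survival *= 1.0 - p
        return 1.0 - survival

    # Constant 10% yearly risk: the total tends to 1 (without ever reaching it).
    print(cumulative_risk([0.10] * 50))    # ~0.995
    print(cumulative_risk([0.10] * 500))   # indistinguishable from 1.0 in floating point

    # Yearly risk that shrinks fast enough (here halving from 10% each year):
    # the total converges to a limit well below 1, no matter how many years pass.
    print(cumulative_risk([0.10 * 0.5 ** i for i in range(500)]))  # ~0.19
    ```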

  21. Alexei Turchin

    I have written a big study on existential risks under the name ‘The Structure of global catastrophe’, where I present results that I have never seen in any of the literature.

    http://www.proza.ru/texts/2007/08/10-217.html

    Unfortunately, it is in Russian. But you have a Russian last name; maybe you know Russian?

    I’ve found a sponsor to publish my book, and it will be published next winter.

    I have also translated lifeboat.com into Russian; it will be online soon.

    I hope to translate my book into English and distribute it for free.

    Thank you for your work on global risks!

  22. Alexei Turchin

    So, I find that I didn’t say anything useful in my last post, except arrogant self-advertising :(

    I also think that space colonies are not good backups for civilisation, because we can’t organize effective control over them. If any country on Earth started working on dangerous nanobots, we could send missiles or something within an hour. But if something went wrong on Pluto, we would learn about it only after 6 hours, and it would take days or months to send our space fleet there. Of course, it would be too late. So we have to keep any self-replicating and AI stuff on Earth. Maybe this is an answer to the Fermi paradox?

    Many times in history the metropole has gone to war with its own former colony, and the former colony often wins (e.g. the UK and the US).

    But maybe we should keep a large space fleet near Pluto, so that it could immediately stop an uprising of nanobots on Pluto?

    But what if the uprising occurs not on Pluto, but on the fleet itself?

    The same problem applies to an asteroid shield.

    Will we live in a safer world if 50 rockets, each carrying a 1-gigaton bomb, wait for an asteroid in near-Earth orbit?

    Or are the chances higher that they will be used to attack Earth on some occasion?

  23. Jay Lewis

    SENS should take priority over existential risk mitigation, according to a rational expected-value optimization:

    Unpleasant outcome scale:
    1 = Discomfort, e.g. skipping a meal
    2 = In pain, e.g. a broken leg
    3 = Physically dead, e.g. a fatal heart attack with the head carefully frozen soon after.
    4 = Informationally dead, e.g. terminal brain cancer followed by cremation.

    Outcome if I die of an age-related cause within the next few decades:
    3 if I’m lucky, 4 otherwise, so let’s score it 3.5.

    Outcome if the world is destroyed by grey goo or other tech gone bad: almost certainly 4.

    Probability of age-related death over the next 200 years if we do nothing: 100%

    Probability of end-of-world over the next 200 years due to bad tech: more than 0%, but less than 100%. So let’s call it 50%.

    So the expected utility of successfully preventing aging would be 3.5 * 100% = 3.5.

    The expected utility of preventing an end-of-world scenario would be 4 * 50% = 2. (This arithmetic is sketched below.)

    If I had to focus all my energy on improving the odds of one item, it would be longevity. Once aging is conquered, all that energy can be switched to making anti-grey goo spray or whatever.

    Of course my model neglects a more realistic scenario where the odds of accidental or intentional biotech or nanotech disasters go up as we advance these technologies to cure aging.

    But if the good and careful guys avoid these areas, not only do we miss out on the benefits, but the evil or sloppy guys will eventually develop them anyway.
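
    A minimal sketch of that comparison, using the illustrative numbers above (the 50% figure is the guess made here, not an estimate):

    ```python
    # Badness scale from above: 3.5 for age-related death (between outcomes 3 and 4),
    # 4 for an end-of-world scenario (informational death for everyone).
    BADNESS_AGING = 3.5
    BADNESS_XRISK = 4.0

    P_AGING_DEATH = 1.0   # assumed certain over 200 years if nothing is done
    P_XRISK = 0.5         # the 50% guess used above

    value_of_preventing_aging = BADNESS_AGING * P_AGING_DEATH  # 3.5
    value_of_preventing_xrisk = BADNESS_XRISK * P_XRISK        # 2.0
    print(value_of_preventing_aging, value_of_preventing_xrisk)

    # The ranking hinges on the guessed probability: any P_XRISK above
    # 3.5 / 4.0 = 0.875 flips the conclusion in favor of risk mitigation.
    ```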
