Hard Takeoff Sources

Definition of “hard takeoff” (noun) from Transhumanist Wiki:

The Singularity scenario in which a mind makes the transition from prehuman or human-equivalent intelligence to strong transhumanity or superintelligence over the course of days or hours (Yudkowsky 2001). The high likelihood of a hard takeoff once a roughly human-equivalent AI is created has been argued by the Singularity Institute in Yudkowsky 2003.

Hard takeoff sources and references, including hard science fiction novels, academic papers, and a few short articles and interviews:

Blood Music (1985) by Greg Bear
A Fire Upon the Deep (1992) by Vernor Vinge
“The Coming Technological Singularity” (1993) by Vernor Vinge
The Metamorphosis of Prime Intellect (1994) by Roger Williams
“Staring into the Singularity” (1996) by Eliezer Yudkowsky
Creating Friendly AI (2001) by Eliezer Yudkowsky
“Wiki Interview with Eliezer” (2002) by Anand
“Impact of the Singularity” (2002) by Eliezer Yudkowsky
“Levels of Organization in General Intelligence” (2002) by Eliezer Yudkowsky
“Ethical Issues in Advanced Artificial Intelligence” (2003) by Nick Bostrom
“Relative Advantages of Computer Programs, Minds-in-General, and the Human Brain” (2003) by Michael Anissimov and Anand
“Can We Avoid a Hard Takeoff?” (2005) by Vernor Vinge
“Radical Discontinuity Does Not Follow from Hard Takeoff” (2007) by Michael Anissimov
“Recursive Self-Improvement” (2008) by Eliezer Yudkowsky
“Artificial Intelligence as a Positive and Negative Factor in Global Risk” (2008) by Eliezer Yudkowsky
“The Hanson-Yudkowsky AI Foom Debate” (2008) on Less Wrong wiki
“Brain Emulation and Hard Takeoff” (2008) by Carl Shulman
“Arms Control and Intelligence Explosions” (2009) by Carl Shulman
“Hard Takeoff” (2009) on Less Wrong wiki
“When Software Goes Mental: Why Artificial Minds Mean Fast Endogenous Growth” (2009)
“Thinking About Thinkism” (2009) by Michael Anissimov
“Technological Singularity/Superintelligence/Friendly AI Concerns” (2009) by Michael Anissimov
“The Hard Takeoff Hypothesis” (2010), an abstract by Ben Goertzel
Economic Implications of Software Minds (2010) by S. Kaas, S. Rayhawk, A. Salamon and P. Salamon

Critiques

“The Age of Virtuous Machines” (2007) by J. Storrs Hall
“Thinkism” (2008) by Kevin Kelly
“The Hanson-Yudkowsky AI Foom Debate” (2008) on Less Wrong wiki
“How far can an AI jump?” (2009) by Katja Grace
“Is The City-ularity Near?” (2010) by Robin Hanson
“SIA says AI is no big threat” (2010) by Katja Grace

By putting the critiques in a separate category I don’t mean to say they aren’t important; I’m just doing that for easy reference. I’m sure I missed some pages or articles here, so if you have any more, please post them in the comments.

Comments

  1. AH

    Great list Michael!

    Under criticism, I’d also cite some posts by Anders Sandberg and comments to this blog by Tim Tyler about “cloud intelligence” or “group intelligence”, which would seem to imply a slower takeoff.

  2. Grouping critique links together at the bottom is a time-honoured method of separating pro and con when one is primarily featuring the “pro” grouping of links.

    Very useful entry. Anything from Doug Hofstadter in response to Ray Kurzweil would be welcome.

  3. Another critique I think is important, based on the Great Filter with or without anthropic reasoning: http://meteuphoric.wordpress.com/2010/11/11/sia-says-ai-is-no-big%C2%A0threat/

    • Thank you Katja, I’ve added that to the list. That is certainly one of the most interesting hard takeoff critiques.

    • Choi

      Michael, what are the reasons you don’t believe that Katja’s Great Filter argument is strong evidence against the hard takeoff hypothesis?

    • Hard takeoff can still happen if the UFAI/FAI just converts the planet into computronium and doesn’t expand outwards from this solar system.

      As far as I can tell, the Doomsday Argument is correct. I am impressed that Katja was able to show that DA applies even when you take the SIA into account.
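      For readers unfamiliar with it, the standard Doomsday Argument calculation referenced here runs roughly as follows; this is a minimal sketch, not from the comment itself, and the birth-rank figure is the commonly quoted rough estimate:

          # Classic Doomsday Argument sketch: treat your birth rank as a uniform
          # random draw from all humans who will ever live; with confidence c you
          # are not in the first (1 - c) fraction, which bounds the total count.
          birth_rank = 1e11   # ~100 billion humans born so far (rough, commonly cited)
          confidence = 0.95

          upper_bound_total_humans = birth_rank / (1 - confidence)
          print(f"{upper_bound_total_humans:.1e}")  # ~2.0e+12 with 95% confidence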

    • Choi

      “Hard takeoff can still happen if the UFAI/FAI just converts the planet into computronium and doesn’t expand outwards from this solar system.”

      My question is why wouldn’t the AI try to colonize planets outside the solar system? Do you have something like a Fermi calculation, like,

      (the number of colonizing AIs we would observe) = (number of AI-bearing planets in other solar systems) * (probability of hard takeoff) * (probability that the AI would expand from its own solar system) * [etc.]? Please tell me, are there any important factors or terms that I am missing here?
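      For illustration, that product can be written out as a short calculation; every numeric value below is a made-up placeholder rather than an estimate, and the extra detectability term is just one example of a factor that might be added:

          # Drake-style estimate of how many colonizing AIs we would expect to observe.
          # All factor values are placeholder assumptions for illustration only.
          n_ai_bearing_planets = 1e6               # planets in our past light cone that produce AI (assumed)
          p_hard_takeoff = 0.5                     # probability a created AI undergoes hard takeoff (assumed)
          p_expands_beyond_home_system = 0.5       # probability it colonizes outward (assumed)
          p_expansion_detectable_from_earth = 0.9  # probability we could see such an expansion (assumed)

          expected_visible_colonizers = (
              n_ai_bearing_planets
              * p_hard_takeoff
              * p_expands_beyond_home_system
              * p_expansion_detectable_from_earth
          )
          print(expected_visible_colonizers)  # if this is large and we observe zero, some factor must be tiny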

  4. plop

    Sweet list! Printing them out for reading during sitting sessions, particularly useful when it’s hard for things to take off.

  5. iwillforgetthis

    Love it. How long until a semantic Google or Watson 3.0 can put one of these lists together in an instant?

  6. nominalien

    From this it seems the concept of hard takeoff is the brainchild of a handful of individuals, developed since the 1990s, chiefly by one man. Where are the rest? Is the world of hardtakeoffists really this small?

    • Yes it is. It’s not Eliezer’s fault for doing so much work on such an important topic, though, but everyone else’s fault for not also doing more work on it.

  7. nominalien

    This is “merely” the ultimate goal of humanity, of any intelligent life in the universe; the apex of our achievement-potential.

    In fact it is the apex of the achievement-potential of the whole universe until artificial general intelligence opens up a world of new possibilities, just as natural intelligence has up to its creation, and splits the history of the universe into one more era:

    The First Era: pre-Big Bang
    The Second Era: post-Big Bang, aka pre-AGI
    The Third and Final Era: post-AGI

    And just a handful of people work on these things?!

  8. PariahDrake

    The thing I find interesting is the subjective perception of take off based on proximity.

    For example, let’s say that in actuality a “soft” or “firm” take off occurs in 2045.

    This take off, from our perspective today, is probably what I would call “firm”, but that’s not important; what is important is that we wouldn’t call it a “hard” take off.

    Now, fast forward 34 years into the future. We haven’t quite achieved the criteria for take off yet, but we are about to, in one year, and our ability to predict this has risen incrementally along the way.

    So, from the perspective of the people alive one year from take off, is it “hard” or “soft” (or “firm” or whatever)?

    Additionally, I find “hard” take off scenarios absolutely impossible to predict, but also increasingly likely as we progress towards either a soft or firm take off.

    What I mean by that is that as the processing power of commercially available computers rises (exponentially or not), the probability that someone, anywhere in the world, in their basement or garage, or in a lab (private or government), can create the SAI that leads to a hard take off increases each year.

    Even if those we would normally associate with having the resources to create this SAI decide, consciously, not to do it (perhaps they all agree that it’s just another arms race, maybe they ban it – whatever), at the same time, with the computers that are available to the average consumer, someone somewhere is going to create an SAI – if it’s possible.

    Consequently, SAI (if it’s possible) is inevitable, and no amount of regulation can stop it for long.

    As soon as the hardware is easily available, some hobbyist somewhere is going to chance upon the right code to unleash it. This probability increases with the number of people interested in such a thing, and as far as I’ve seen, more and more people are becoming interested in doing just this.

    Strange.

    • PariahDrake

      If we also factor in the increasing frequency of open source AI projects, the probability of a chance “hard take off” seems to loom even larger.

      The illusion of control is a thin veneer.
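      As a toy illustration of the “someone, somewhere” argument above: if many independent groups each have even a tiny yearly chance of success, the chance that at least one succeeds grows quickly with the number of groups. All numbers below are invented assumptions, not estimates:

          # Toy model: probability that at least one of n independent groups,
          # each with per-year success probability p, builds an SAI this year.
          def p_at_least_one_success(p_per_group: float, n_groups: int) -> float:
              return 1.0 - (1.0 - p_per_group) ** n_groups

          for n in (10, 100, 1_000, 10_000):
              print(n, round(p_at_least_one_success(1e-4, n), 4))
          # roughly: 0.001, 0.01, 0.095, 0.632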

  9. binary big bang

    “some hobbyist somewhere is going to chance upon the right code to unleash it.”

    While one can’t chance upon something of such complexity, the point is otherwise valid. Given cheap enough massive computational resources, achieving an AGI is only a question of application of intelligence and lots and lots of code. May take man-decades to code. Probably requires an entirely new language to handle the parallelism. Gotta wonder why the big software companies aren’t working on this.

    • “Gotta wonder why the big software companies aren’t working on this.” They must be. I bet Google is, under the guise of its search engine. And what about the government of the US?

    • PariahDrake

      I’m sorry, I didn’t mean to imply that the hobbyist in this example would discover SAI through “chance”, more that the “chances” of SAI being discovered by someone, anywhere in the world, increase with time as hardware power (and overall general knowledge of AI) increase everywhere in the world.

      I guess what I’m getting at is that the distribution of computing power and software knowledge sort of makes SAI inevitable and beyond collective human control.

  10. Panda

    “The Metamorphosis of Prime Intellect” is available online at http://www.kuro5hin.org/prime-intellect/mopiidx.html

    Note: I have nothing to do with the author or the website. Also, as a warning, the story’s themes are often very violent.

  11. binary big bang

    Has anyone actually presented a workable AGI plan that only needs to be implemented, i.e. coded?
    If yes, why hasn’t someone funded it yet?
    Why aren’t we getting news of increasingly complete AGI projects left and right? Where are all the projects lurking? How are they doing? Surely those who predict hard takeoff for, say, 2035, must be cognizant of the current developmental stage on which to base their projections. How are we presently doing?

    How many years to proto-AGI that can do neater tricks than any code is supposed to be able to, somewhat like Watson?

    • “Has anyone actually presented a workable AGI plan that only needs to be implemented, i.e. coded?”

      ROFLMAO
      (this is the only appropriate comment I can think of, try to figure out why)

      • binary big bang

        You’re a pessimist, laughing at the low (to non-existent?) success so far?

        What I’m asking is are there even bits and pieces of code that actually do AGI-ish things, that could actually find themselves in a real AGI some day? Evidence of some concrete & real, undeniable progress? Not just theories floating around, but something like “This is how this (subsystem) MUST be done, this piece of the puzzle IS finished.”

  12. binary big bang

    I know there are uncertainties involved in engineering that may even force one to scrap seemingly correct plans and workable code at the last steps, necessitating complete rearchitecting, but engineering also tends to produce knowledge of edges of parameter spaces within which some systems apparently MUST function or they can’t function at all. Has someone presented such knowledge?

  13. binary big bang

    If not, hardtakeoffists, or takeoffists of any kind, seem to be a curiously confident bunch. It’s something akin to believing that we’ll do space travel soon, before even rudimentary flight is shown to be possible.

    Sure, we do have an existence proof in our own minds, just as we had proof in birds for flight, but it took centuries (millennia?) to crack it, and it turned out to be relatively trivial. Why would the most complex thing in the universe be cracked in mere decades, especially if (if that is the case) we don’t have anything solid so far?

    I’m trying to find the source of this optimism, and if it exists, the justification people have for it.

    • Huh?
      What are you actually asking for?
      Justification of pessimism or justification of optimism?

      You said: Evidence of some concrete & real, undeniable progress? Not just theories floating around

      Not only has there not been ANY “concrete & real, undeniable progress” since Nov 28, 1983: “If AI has made little obvious progress it may be because we are too busy trying to produce useful systems before we know how they should work.”

      But there aren’t any kind of “theories floating around” either; in spite of so many pretenses, NOBODY KNOWS “how does a human brain produce behavior/inference X, and how do we implement that so as to preserve maximal man-machine compatibility” (quote from the above link).

      So indeed, Ed is right: “belief in strong AI [IS] just so much “religious” mumbo-jumbo. So far, it has all been speculation.”

      If “once a roughly human-equivalent AI is created there is a high likelihood of a hard takeoff”, could someone explain why, given that we ourselves currently have human-level intelligence (and some of us of the brightest kind), WE DO NOT ACTUALLY SEE ANY KIND OF “TAKE OFF”?

      • PariahDrake

        Simple: substrate dependence.

        The idea that an approximately ‘human level AI’ would lead to a take off is that its substrate would allow for a millionfold increase in the speed of self-organization and self-improvement.

        The human brain does improve itself – but slowly, because it’s limited by the constraints of biology.

        A roughly human level AI instantiated in silicon (or something even better – optoelectronics) would improve along the same evolutionary lines, but millions of times FASTER.
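        A rough back-of-envelope version of this speed claim, for illustration only: the figures below are coarse order-of-magnitude estimates commonly cited in this debate, not measurements, and the sketch only restates the claim being argued over, not its truth:

            # Coarse serial-speed comparison between neurons and transistors.
            # Both figures are rough, commonly cited order-of-magnitude assumptions.
            neuron_firing_rate_hz = 200          # upper-end cortical spike rate (~200 Hz)
            transistor_switching_rate_hz = 2e9   # a ~2 GHz clock

            speedup = transistor_switching_rate_hz / neuron_firing_rate_hz
            print(f"{speedup:.0e}")  # 1e+07, i.e. a roughly ten-millionfold serial speed gap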

      • @PariahDrake

        “because its substrate would allow for a millionfold increase in the speed of self-organization and self-improvement.”

        This is irrelevant to my point; the question is WHY we do not already see which improvements could be made.
        The “human level AI” will be just as clueless as we are.
        Furthermore, how do you know that the artificial substrate will show a “millionfold increase in the speed”?
        Clock speed isn’t the be-all and end-all of performance, and there is no guarantee that we could ever understand the parallelism of brain operations: human engineers need abstraction layers to understand their designs and communicate them to each other.

  14. Ed

    At the risk of offending many people…to me, belief in the Singularity seems to be just as much an act of “religious” faith as belief in the Rapture. Given the track record of (total lack of) success in creating strong AI so far, what evidence does anyone have that strong AI will happen in our lifetimes? Or ever happen? Is belief in strong AI just so much “religious” mumbo-jumbo? So far, it has all been speculation.

    • Ed

      To clarify — I would have more confidence in strong AI in my lifetime (I’m 47 years old) if there were some successes…but what has happened so far is just incremental progress on specific problems.

      My gut feeling is that strong AI is possible, and so is artificial consciousness, and mind uploading, and AI’s far more intelligent than humans — but not without either many, many decades more incremental progress, or some conceptual breakthrough that does not seem to be in evidence.

      Will technology keep advancing long enough for this to happen? Looks to me like the march of scientific and technical progress is not accelerating, but slowing down.

      Maybe Western civilization will stagnate before too long (I’d love to see evidence that this will not happen…but most signs point to slowing of progress), and some future human civilization will develop the conceptual tools for strong AI.

    • Panda

      The “rapture” school of critics often decries a “belief [in the] Singularity”, without explaining what is meant by the word “belief”. This lack of explanation probably arises because the comparison is illogical.

      Religion says “X is true because I say so” (e.g. the holy text tautologically defines truth). The Singularity, by contrast, is a prediction based upon empirical evidence and reasoned extrapolations. But!, you protest, it is possible to be irrational about the probability we give to our extrapolations. After all, it is scientifically possible that gravity will cease to function tomorrow, but we do not have a high confidence in such an event.

      This critique conflates complaining about the probability assigned and complaining about whether we can assign a probability at all. If the Singularity is a religion, then all probabilities are illogical. If it is not, then we can talk reasonably about it, while still debating probabilities.

      To me, there’s nothing religious about the Singularity itself. The problem is that the probability of the singularity is very hard to measure, and some people have irrational predictions. After all, I feel that those who assign it a high probability have failed to make a compelling case. But this is not a criticism of the hard takeoff dialogue itself, so much as quibbling over what probability to assign it. And, if we believe that a hard-take-off is possible, then perhaps we would be irrational to not think about it!

      Closing your eyes to a possibility can be an act of faith itself…

    • @Panda

      The Singularity, by contrast, is a prediction based upon empirical evidence and reasoned extrapolations.

      Sorry, but NO, the supposed “evidence and reasoned extrapolations” do not hold water; they are just wishful thinking fueled by religious motives.
      It’s in the underlying drives that Singularitarianism is religious, not in the “logical” considerations; in the “logical” considerations it’s just balderdash! :-D

    • Panda

      K-B-

      Depending on our version, the Singularity has a few relatively simple requirements. Let’s try one version. First, humans must be able to create a human-level intelligence in an artificial substrate. Second, that substrate, offering more processing power and memory, would allow the intelligence to quickly surpass human intelligence.

      Singularitarians have not proven that human-like intelligence can be reproduced mechanically (see the “Chinese Room” argument). Singularitarians have not proven that an implementation of AGI on a faster/better substrate would necessarily use the full potential of that substrate (the “minesweeper on a supercomputer” argument). As a result, the Singularity concept cannot command certainty.

      However, there is nothing religious in such theories. Religion is based upon a tautology. It speaks to its own authority, not to philosophical or scientific extrapolations. There are NO extrapolations in religion. “God says X is true, therefore X is true.” There is no way to reason that God is wrong to a believer.

      The Singularity, by contrast, is either true or not, and, while many of its theories remain impossible to test, one day they will be falsifiable. Much of modern science was once the subject of philosophical inquiry, before theories became falsifiable. By forming theories about the world, we consider how to direct our science and actions. There is nothing illogical about trying to predict science; it is simply almost impossible to do.

      Now, you say that the concept of the Singularity derives from “just wishful thinking fueled by religious motives”. Let’s break this into two parts. First, there is the “religious motives” prong. This goes to the motivation for the idea. It suggests that the religious motive of those who articulate the Singularity taints the idea itself. This first argument of yours is, frankly, quite irrational. The motive for an idea is not the test for its validity. A religious motive has led to the building of great cathedrals and universities, the furtive freeing and education of slaves, and other goods in society. A religious motive is hardly relevant to the validity of an action resulting from the motive. Were a religious man to dispute the Singularity, we would not inquire as to the religious motive for his disputing the idea but whether his arguments were sound.

      Second, there is the “wishful thinking” prong. This is even worse and need not detain us long. It is a mere conclusory label. I might call physics “wishful thinking” and “theology” all I like. Doing so will not make it so. If the best you have is tautological definitions that the Singularity is a religion, then you will excuse me if I think you are not very convincing.

      As I said above, I will agree that there are those who take the philosophical ideas of the Singularity and embrace them as a certainty, often on scant proof. This is often illogical. However, that does not impeach the general philosophy itself.

    • @Panda

      By religious motives I mean the eschatology, which is typically a religious concern and distorts the judgment of Singularitarians either way: Big Bad AI vs. Conquest of the Galaxies. Both ideas are CRAP; the Singularitarians cannot be trusted.

      Wishful thinking is only the result of these insane beliefs, as you demonstrated yourself by showing that there is NO basis for certainty in any of the prerequisites for a Singularity; my own stance is that this is not even plausible.

      So, besides saying my arguments are weak, what is YOUR position about the “general philosophy” of the Singularity?

    • Panda

      There’s nothing wrong with an eschatology; the “heat death of the universe” is an example of a rather legitimate one. And recall that the Singularity, at least as explained by Mr. Kurzweil, should have no eschatology. He says that it is impossible to predict events after the Singularity.

      I know that Mr. Kurzweil has opined something about awakening the universe (what I assume you mean by “conquest of the galaxies”), but even he has to admit that that’s speculation, derived from an irresistible human impulse to speculate more than from logic.

      The Hard Takeoff theory is more credible. Its proponents argue that, although we cannot predict events occurring after the Singularity, we can still shape them by how we implement the Singularity. And since implementation may take the form of a superintelligent AI, we would be wise to get the AI++ project right the first time. Since a superintelligence might be able to prevent any competing superintelligences from forming, the first may well be the last shot we’ll ever get. This philosophy is riddled with uncertain assumptions, but it’s hardly religious. The assumptions are debatable, and people debate them here all the time.

      At any rate, you think that the Singularity is not even plausible. I do not think the Singularity is very compelling either. However, I think we are facing a transhumanist future. Technology has outpaced biology in many ways, from planes to cell phones. I think we will eventually come to an era when technology tackles essential parts of the human identity. Which and how soon, I do not know. But if you read the news, you will see that a lot is happening in theoretical research. If we develop artificial eyes that are superior to normal eyes, I do not doubt that some people will tear theirs out for the artificial ones.

      I also fear that transhuman technologies will not be optional. Just as performance-enhancing drugs with unhealthy side effects plague sports, imagine white-collar mind-enhancing drugs. And if such mind-enhancing drugs shorten life or curtail quality of life, does that matter if everyone feels compelled to use them to “keep up”? I think, ultimately, very few humans will have a choice about whether to be transhuman–not if they want to remain competitive in the marketplace. I recall, in undergrad, seeing students popping nootropics. They turned into zombies, able to focus for hours on studying. I shuddered at the thought of doing the same, particularly when we still know so little of how the brain works and what these drugs do to it. But already kids are taking pills to compete for a higher GPA.

      Ultimately, I do not think that speculation founded upon arguable and debatable predictions is the same as religion. Religion, as I’ve tried to point out, is inherently tautological. There is no room for debate at all.

      It is wise to be highly skeptical of many transhumanist ideas, particularly the Singularity. But all too often, I find people close their minds too quickly, because they fear being led into weird imaginings. This is a healthy initial response. Transhumanism is weird, sometimes full of wish-fulfillment, and other unpleasant things.

      But, in my humble opinion, it’s also worth debating.

  15. PariahDrake

    If I were to worry myself about hard take off scenarios, I think I’d worry far more about a possible “Lawnmower Man” scenario than a “Skynet” or “HAL 9000” scenario.

    People with disabilities are right now being implanted with BCIs, and soon will be fully connected to the internet. It’s only a matter of time before those implants also provide enhancements above and beyond natural ability.

  16. Mathias

    Human beings have mentally constructed “the world” and are therefore capable of manipulating it almost to their own wishes.

    Videogames are a hint of where the human race is now going…

    Which is into the next phase in human experience:

    1. From World
    2. To Dream Worlds (possibly created by AIs)

    But here’s what boggles my mind:

    Will we just go from one dreamworld to another, like when we switch tv-channels?

    What’s there more than a dream, besides other dreams?

    What if we could go beyond the dream-machine experience?

  17. Thanks for putting the list together! I reckon the first general AI to be recognized as such will be something like a search engine with which you can have a conversation.

    • “to be recognized as such”

      Hmmmm… Not really. I bet this could be done right now given the appropriate budget by plugging IBM Watson into Google or Bing, and it would NOT be a significant advance, only yet another PR stunt.
      The well-known rule “If it walks like a duck and quacks like a duck, it’s a duck” does NOT apply (unfortunately).

    • Panda

      It depends on what you mean by “have a conversation”.

  18. compeng

    The fact remains that tech advances; the arguments against its future capabilities don’t. Tech will eventually steamroll over any and all arguments if it’s physically possible to implement the technology. And even if it isn’t, we may be able to create it in a virtual world so real that it’s the same as if we had created it in the real world.

    Transhumanists are people who are convinced that it’s not good to be left in the past, so they advance their thoughts to encompass more than the present and even the next decade. Granted, many advance them at a rate much higher than tech actually advances, but that’s your chosen point of view. You can be a conservative transhumanist. Kurzweil takes a closer look at the near-term future and often succeeds, but he also speculates about the mid-term to long-term future. Problem is, people tend to pay more attention to the latter because it sells, even though it can’t be validated as easily or at all, which supposedly “discredits” him as a futuroloony instead of a futurologist.

  21. sten

    Is there somewhere a list of the “ultratechnologies” that a hard takeoff is going to make possible, i.e. what a post-hard-takeoff world will contain and how it will differ from ours?

  22. Hi, I would like to talk about some aspects of AI danger. Is there a forum for that?

    I think that we can build FAI, but there remains the danger that an uncontrolled AI would spread through the internet to our computers via software vulnerabilities. I’d like to discuss that on a forum or here.

  23. Advanced “takeoff” methods would create changes in life or could improve our lives. But there are disadvantages also: more technology also makes for more disasters.
