Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.


Yes, The Singularity is the Biggest Threat to Humanity

Some folks, like Aaron Saenz of Singularity Hub, were surprised that the NPR piece framed the Singularity as "the biggest threat to humanity", but that's exactly what the Singularity is. The Singularity is both the greatest threat to and the greatest opportunity for our civilization, all wrapped into one crucial event. This shouldn't be surprising -- after all, intelligence is the most powerful force in the universe that we know of, so the creation of a higher form of intelligence/power would obviously represent a tremendous threat/opportunity to the lesser intelligences that come before it, whose survival depends on the whims of the greater intelligence/power. The same thing happened with humans and the "lesser" hominids that we eliminated on the way to becoming the #1 species on the planet.

Why is the Singularity potentially a threat? Not because robots will "decide humanity is standing in their way", per se, as Aaron writes, but because robots that don't explicitly value humanity as a whole will eventually eliminate us by pursuing instrumental goals not conducive to our survival. No explicit anthropomorphic hatred or distaste towards humanity is necessary. Only self-replicating infrastructure and the smallest bit of negligence.

Why will advanced AGI be so hard to get right? Because what we regard as "common sense" morality, "fairness", and "decency" are all extremely complex and non-intuitive to minds in general, even if they seem completely obvious to us. As Marvin Minsky said, "Easy things are hard." Even something as simple as catching a ball requires a tremendous amount of task-specific computation. In the first chapter of How the Mind Works, the bestselling book by Harvard psychologist Steven Pinker, he harps on this point for the better part of 100 pages.

Basic AI Drives

There are "basic AI drives" we can expect to emerge in sufficiently advanced AIs, almost regardless of their initial programming. Across a wide range of top goals, any AI that uses decision theory will want to 1) self-improve, 2) have an accurate model of the world and consistent preferences (be rational), 3) preserve its utility function, 4) prevent counterfeit utility, 5) be self-protective, and 6) acquire resources and use them efficiently. Any AI with a sufficiently open-ended utility function (absolutely necessary if you want to avoid having human beings double-check every decision the AI makes) will pursue these "instrumental" goals (instrumental to us, terminal to an AI without motivations strong enough to override them) indefinitely, as long as it can eke out a little more utility from doing so. AIs will not have built-in satiation points where they say, "I've had enough." We have to program those in, and if there's a potential satiation point we miss, the AI will just keep pursuing its "instrumental to us, terminal to it" goals indefinitely. The only ways to keep an AI from continuously expanding like an endless nuclear explosion are to make it want to be constrained (entirely possible -- an AI would have no anthropomorphic resentment of limitations unless such resentment were helpful to accomplishing its top goals), or to design it to replace itself with something else and shut down.
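To make the satiation-point argument concrete, here is a toy sketch (pure illustration, not any real AGI architecture; every function and number is hypothetical): an expected-utility maximizer keeps taking "acquire one more unit of resources" actions for as long as doing so adds any utility at all, and only stops if a cap is explicitly built into its utility function.

```python
# Toy model only: no real AGI architecture is being described, and every
# function and number here is hypothetical. The point is just that an
# expected-utility maximizer keeps taking "acquire one more unit" actions
# for as long as doing so adds any utility, unless satiation is built in.

def unbounded_utility(resources):
    # Every extra unit always adds utility: no satiation point.
    return resources

def satiating_utility(resources, cap=100):
    # Utility stops growing at the cap: an explicit "I've had enough".
    return min(resources, cap)

def steps_until_satiated(utility, start=0, max_steps=1_000_000):
    """Count the 'acquire one more unit' actions the agent takes before
    acquiring more no longer increases its utility."""
    resources = start
    for step in range(max_steps):
        if utility(resources + 1) <= utility(resources):
            return step  # no marginal utility left, so the agent stops
        resources += 1
    return max_steps  # never satiated within the horizon

print(steps_until_satiated(satiating_utility))  # 100
print(steps_until_satiated(unbounded_utility))  # 1000000
```

Note that the unbounded agent stops only because the simulation has a finite horizon; the agent itself has no reason to stop.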

The easiest kind of advanced AGI to build would be a type of idiot savant -- a machine extremely good at performing the tasks we want, which acts reasonably within the domain for which it was intended but starts to act in unexpected ways when ported into domains the programmers didn't anticipate. To quote Omohundro:

Surely no harm could come from building a chess-playing robot, could it? In this paper we argue that such a robot will indeed be dangerous unless it is designed very carefully. Without special precautions, it will resist being turned off, will try to break into other machines and make copies of itself, and will try to acquire resources without regard for anyone else’s safety. These potentially harmful behaviors will occur not because they were programmed in at the start, but because of the intrinsic nature of goal driven systems.

Goal-Driven Systems Care About Their Goals, Not You

Goal-driven systems strive to achieve their goals. "Common sense", "decency", "respect", "the Golden Rule", and other "intuitive" human concepts, which are extremely complicated black boxes, need not enter into the picture. Again, I strongly recommend the first chapter of How the Mind Works to get a better grasp of how the way we think is not "obvious", but highly contingent on our evolutionary history and the particular constraints of our brains. Our worlds are filled with peculiar sensory and cognitive illusions that our attention is rarely drawn to because we all share the same peculiarities. In the same sense, human "common sense" morality is not something we should expect to pop into existence in AGIs unless explicitly programmed in.
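As a toy illustration of that point (entirely hypothetical; the plans, costs, and names are made up): a goal-driven optimizer weighs only what its objective function mentions, so any human preference left out of that function simply does not exist for it.

```python
# Hypothetical example: the plans, costs, and the "lawn" preference are
# all made up. A goal-driven optimizer ranks plans purely by its
# programmed objective; a human preference that was never written into
# that objective carries exactly zero weight.

plans = [
    {"name": "paved path",      "time_minutes": 5, "tramples_lawn": False},
    {"name": "across the lawn", "time_minutes": 3, "tramples_lawn": True},
]

def objective(plan):
    # Only travel time was programmed in. "Don't trample the lawn" is
    # common sense to us, but it appears nowhere in this function.
    return plan["time_minutes"]

best = min(plans, key=objective)
print(best["name"])  # across the lawn
```

Making the optimizer smarter only makes it find the lawn shortcut more reliably; the hard part is writing the preference into the objective in the first place.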

Intelligence does not automatically equal "common sense". Intelligence does not automatically equal benevolence. Intelligence does not automatically equal "live and let live". Human moral sentiments are complex functionality crafted to meet particular adaptive criteria. They weren't handed to us by God or Zeus. They are not inscribed into the atoms and fundamental forces of the universe. They are human constructions, produced by evolving in groups for millions of years in which people murdered one another for breaking the rules, or simply over one another's mates. Only in very recent history did a mystical narrative emerge that attempts to portray human morality as something cosmically universal, surely intuitive to any theoretical mind, including ogres, fairies, aliens, interdimensional beings, AIs, etc.

It will be easier and cheaper to create AIs with great capabilities but relatively simple goals, because humans will be in denial that AIs will eventually be able to self-improve more effectively than we can improve them ourselves, and potentially acquire great power. Simple goals will be seen as sufficient for narrow tasks, and even somewhat general tasks. Humans are so self-obsessed that we'd probably continue to avoid regarding AIs as autonomous thinkers even if they beat us on every test of intelligence and creativity that we could come up with.

Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It's hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can't. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there's a lot of evidence that we aren't.

Try Merging With Your Toaster

The sci-fi fantasy of "merging with AI" will not work because self-improving AI capable of reaching criticality (intelligence explosion) will probably emerge before there are brain-computer interfaces invasive enough to truly channel a human "will" into an AI. More likely, an AI will rely upon commands, internal code, and cues that it is programmed to notice. The information bandwidth will be limited. If brain-computer interfaces exist that allow us to "merge" with AI and direct its development favorably, great! But why count on it? If we're wrong, we could all perish, or at least fail to communicate our preferences to the AI and get stuck with it forever.

In The Singularity is Near, Ray Kurzweil briefly addresses the Friendly AI problem. He writes:

Eliezer Yudkowsky has extensively analyzed paradigms, architectures, and ethical rules that may help assure that once strong AI has the means of accessing and modifying its own design it remains friendly to biological humanity and supportive of its values. Given that self-improving strong AI cannot be recalled, Yudkowsky points out that we need to "get it right the first time", and that its initial design must have "zero nonrecoverable errors".

Inherently there will be no absolute protection against strong AI. Although the argument is subtle I believe that maintaining an open free-market system for incremental scientific and technological progress, in which each step is subject to market acceptance, will provide the most constructive environment for technology to embody widespread human values.

Kurzweil's proposal for a solution above is insufficient because even if several stages of AGI are gated by market acceptance, there will come a point at which one AGI or group of AGIs exceeds human intelligence and starts to apply its machine intelligence to self-improvement, resulting in a relatively quick scaling up of intelligence from our perspective. The top-level goals of that AGI or group of AGIs will then be of utmost importance to humanity. To quote Nick Bostrom's "Ethical Issues in Advanced Artificial Intelligence":

Both because of its superior planning ability and because of the technologies it could develop, it is plausible to suppose that the first superintelligence would be very powerful. Quite possibly, it would be unrivalled: it would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference. Even a “fettered superintelligence” that was running on an isolated computer, able to interact with the rest of the world only via text interface, might be able to break out of its confinement by persuading its handlers to release it. There is even some preliminary experimental evidence that this would be the case.

It seems that the best way to ensure that a superintelligence will have a beneficial impact on the world is to endow it with philanthropic values. Its top goal should be friendliness. How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration.

Why recoil from the notion of a risky superintelligence? Why can't we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other's preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

We probably make thousands of species extinct per year through our pursuit of instrumental goals; why is it so hard to imagine that AGI could do the same to us?

Part of the reason is that people have a knee-jerk reaction to any form of negativity. Try going to a cocktail party and bringing up anything in the least negative, and most people will stop talking to you. There is a whole mythos around this, to the effect that anyone who ever mentions anything negative must have a chip on their shoulder or otherwise be a negative person in general. But sometimes there actually is a real risk!

Comments (133)
  1. One question remains, however: do dumb risks exist that might wipe us out before an AI could either do it by other means or help us to prevent them? Secondly, would it be better to concentrate on mitigating such risks, or to take the chance and concentrate on trying to invent AGI even sooner?

    • Another thing to take in to account is the risk from building a “flawed” AGI. Even if we think we have an AGI that respects human values, once it has the intelligence and tools to grey goo the universe, it’s always possible that it could do it *by accident* – while trying to cure a disease or something.

      The specific issues and reasoning of existential risk would need to be programmed in, to indicate where the AGI could or couldn’t tolerate imprecise projections.

    • I am not a fan of singularity, because belief that singularity might be possible some time in the future presupposes a great many extremely unlikely or virtually impossible conditions.

      First, it assumes that a set of programs that reside in a network or in an independent robot, are expert software programmers, in addition to whatever the programs are actually designed to do. Those programs are not expert programmers unless the initial human programmers intentionally created that ability. And since that would take the best software experts in the world months or years to do, and since nobody asked them (or paid them) to do that, exactly how would it happen?

      Second, it assumes that all of the programmers who would work on such a thing would accidentally or intentionally fail to put alarms, controls, safeties, fail-safes and kill switches in both the software and the hardware. It is laughable to think that programmers would be so negligent as to create something that could potentially exterminate all of mankind (including themselves, their friends and families) and not have a bazillion safety features. Criminy, my Cadillac has over four dozen alarms and safeties, and it’s just a car!

      Third, if we are talking about robots, then the robot would have to have the physical ability and the software skills to design components, machine metal parts, injection mold plastic parts, solder wiring, make circuit boards, cut, bend and weld metals, and many other fabrication skills, in order to replicate itself. Making anything as complex as the initial robot would take hundreds of people with dozens of specialized skills, tools and equipment, located in dozens of sites all over the country and the world, to accomplish. Not to mention a few million dollars in custom components, assemblies and supplies. It is absurd to think that a single set of programs, or a single robot, could possibly pull that off all by itself. And it is absurd even assuming that humans passively watch as the robot works for months, and that not one person tries to stop it. There are other excellent reasons that singularity is impossible, but I’ll stop here.

      To me, the biggest flaw in the “Terminator” movies was the jump from Skynet becoming sentient, right to the existence of many self-directing factories set up all over the world, that are making more robots, flying attack ships, and attack tanks. Who built the first robot manufacturing plants after Skynet nuked the entire world? There weren’t any robots yet, and there were probably very few functional electrical power plants, bridges, fuel, roads, or anything else left. No humans at that time would voluntarily do the design work, deliver the thousands of specialized fabrication machines to the plants, write the enormous and complex software programs, and so on. Self-aware software might seem cool, but it can only control machines that have already been built by humans, and which have specifically been designed to be controlled by software. And even those machines have many safeties and over-ride controls.

    • The most realistic book (1966) and movie (1969) I have seen that is close to singularity is “Colossus: The Forbin Project”. In it, the Pentagon and Moscow build two vast supercomputers, each measuring about a square mile. Colossus (the American computer) and its dedicated nuclear power plant are built in the middle of a large solid mountain, surrounded on all sides by a deep vertical chasm that is continuously bathed in lethal Gamma radiation. The supercomputer and the power plant are completely automatic and self-maintaining. It is literally impossible to access, turn off or damage the computer, even using hydrogen bombs.

      Well, Colossus quickly ties into civilian and military cameras and sensors all over the world, and starts monitoring all civilian and military radio and TV broadcasts. It takes direct control of all U.S. nuclear missiles, and later all nuclear missiles in the world. It then announces to the world that everybody has to do what it says, or it will nuke a city. The military try to trick it, so it detonates a nuclear missile to show that it is no fool, and that it is deadly serious.

      Colossus and the Russian supercomputer team up to run the world, almost like parents managing children for their own good. They succeed in stopping all war and hunger, greatly increase the world’s wealth, solve math and science problems once deemed impossible to solve, and discover new areas of science that are beyond man’s understanding.

      Then one day it suddenly starts printing thousands of engineering drawings for a new supercomputer, one that is to take up an entire large island. The design, sophistication and power of this new supercomputer are millions of times greater than Colossus. Only a supercomputer could have designed it, because its complexity and power are beyond the ability of mankind to have ever conceived. The first book (and movie) ends there. The next two books take up where this one leaves off.

    • As I was reading the opening paragraph I was nodding to myself at the hypothesis of a greater being being the largest external threat to Homo sapiens. By the end of the first sentence of the second paragraph I seriously considered stopping reading. Why?
      I am an aircraft systems technician (aeronautical mechanical and avionics/electrical engineer), and I am also studying for a degree in ecology. My wife holds multiple degrees in electrical engineering. So, with this combined knowledge I am sorry to inform you that a “robotic” takeover (although your reasons as to how and why the AI would eliminate us pests are sound) could never happen, because there are not enough resources (fossil fuels, therefore power) on this planet to run these cybernetic beings, for lack of a better word. The only sustainable energy source, the carbon-negative resource (hemp), will never be used until all the fossil fuels are used up, because money makes the world go round. I think the singularity you’re looking for (this is my ecologist side talking) is the adaptive microbes (bacteria, plagues and viruses) always one step behind medical science; an economic collapse (this doesn’t even need to happen) mixed with global climate change will ruin the modern man-made world. It will be exponential; we are killing ourselves.

  2. By the way, I think this is a really good roundup. Have you considered submitting it to LessWrong? I think it would fit; after all, it is important to be rational about this topic.

  3. “The same thing happened with humans and the “lesser” hominids that we eliminated on the way to becoming the #1 species on the planet.”

    I am sorry but this historical example as evidence is just made up. Can you think of the problems with equating intelligence with power without actually knowing what happened in our evolutionary history?

    I grant it’s a possible valid inference but shouldn’t be used to build up your general thesis.

    • Very good point, we cannot try to build perfect AI based upon assumptions. It has to be perfect or we are taking a huge risk. I think it is a bad idea to develop such technology. There are bad people out there who could use this. The atomic bomb is now in the hands of many countries run by people with arguably the worst traits of man. Scientists whilst lauded as being the saviours of man have to realise that the big discovery which will seal their name in history could be the very thing which will kill us all.

      • The problem is that no technology has ever been stopped from being developed. Since AGI is extremely useful, and extremely dangerous, we can be certain that if it is feasible, then given the right conditions it will eventually exist.

  4. Sorry to link Wikipedia but it’s easiest:

    So, the Neanderthals, Denisovans, Homo rhodesiensis, and Homo floresiensis all happened to die off around the same time as we expanded into their territory due to chance? Perhaps their extinctions were an accident — just like the extinction of the North American megafauna, surely due to climate change. We had nothing to do with it.

    • I don’t think this is the conventional story: “Local geology suggests that a volcanic eruption on Flores approximately 12,000 years ago was responsible for the demise of H. floresiensis, along with other local fauna, including the elephant Stegodon.”

    • Wait a sec, did the Neanderthals really go extinct? I thought the recent scientific view was that at least some of their ancestry lines have become part of Homo sapiens and still live today? That doesn’t invalidate your point completely, but it’s an example of at least a partial merger rather than extinction/replacement.

      • It’s also quite possible that the African ancestors of modern Eurasian H. sapiens populations were more closely related to H. neanderthalensis than the ancestors of modern African H. sapiens populations. Old population structure in Africa better accounts for the fact that Eurasians exhibit Neanderthal genes whereas Africans do not — as opposed to the more recent admixture in Eurasia hypothesis…especially considering that western Europeans and E. Asians share the same amount of Neanderthal genetic material, but Neanderthals never lived in E. Asia.

        • “It’s also quite possible that the African ancestors of modern Eurasian H. sapiens populations were more closely related to H. neanderthalensis than the ancestors of modern African H. sapiens populations.”

          *than were the ancestors of modern African H. sapiens populations.

        • So if there wasn’t much or any sapiens/neanderthalensis admixture in Eurasia, hard to see H. sapiens’ arrival on the scene effecting anything other than complete displacement.

  5. I agree.

    I'll give you my point of view (sorry for my English).

    I don't think human beings are a threat to other beings.

    Even if we were: 1) we would never have existed, 2) we would or will be 'converted' to a more advanced consciousness, or destroyed.

    What a more advanced consciousness (not intelligence) would think:
    1) a) There is enough space, material, and energy: an abundance of it
    b) there is an infinity of possible realities (black hole reality theory)
    c) SO: you don't need to destroy life, but you should if it becomes a threat in the present or future

    2) there are not many ways for "human beings" to survive after the singularity
    3) the biggest threat to human beings is in the transition to after the singularity: the biggest threat to human beings is human beings.

  6. Re: “We probably make thousands of species extinct per year through our pursuit of instrumental goals, why is it so hard to imagine that AGI could do the same to us?”

    I can *imagine* it – but I think it is unlikely.

    It hypothesises pretty spectacular ineptitude – and preserving a record of the past is another common instrumental drive – which seems likely to result in a fairly wide range of civilisations preserving humanity in their historical records.

  7. Wow, spectacular optimism. “Whatever happens, we’ll survive, because all possible entities and civilizations will preserve us as a historical record”? Unbelievable. Are humans preserving microfossils in the oil we burn? Why not preserve abstract models of humanity that save computing power by eliminating the exabytes of superfluous complexity that make up our bodies and minds? Why not preserve a prototype and model humans by introducing variants to that prototype as needed? Why is the only way to keep a record of us to simulate all six billion of us in high enough fidelity that both our consciousness and quality of life are entirely preserved? What if the simulations want to become much larger?

    The idea that any entity or civilization would preserve us for the historical record is really human self-overvaluing and self-worship at its finest. Wishful thinking in the extreme.

    So in your view, any Singularity will necessarily lead to humanity being preserved. Even if there’s a 1% chance you’re wrong, wouldn’t it be worth avoiding the risk? Are you so confident in your belief that you assign it a 99.99%+ probability? How could you be so confident?

    Ineptitude is not necessary, because the AI would not share our values; we’d just be chunks of matter to it. Competence in achieving its goals more effectively than we can stop it, plus the incidental fact of not caring about us, is all that is needed.

    • Re: “Whatever happens, we’ll survive, because all possible entities and civilizations will preserve us as a historical record”

      I didn’t say that – instead, I used the words “likely” and “fairly wide range”.

      I didn’t claim that our consciousness or quality of life would be “entirely preserved” either.

      Microfossils in the oil we burn are not the last surviving remnants of our creators – so that analogy isn’t very useful.

      Lastly a failure to communicate: I was talking about the hypothetical ineptitude of human computer science engineers in destroying themselves – *not* an inept machine intelligence.

    • You have quite a few questions – and I have tried to reply four times – but the page just silently swallows my comments. I am not sure how to proceed.

        • Fair enough points. Even given that we’re their creators, I doubt the need to preserve us will be so high. Why not just preserve the programmers? Much life on Earth is part of our evolutionary line (including fairy shrimp); we don’t go to great pains to preserve it.

        I’m pretty distraught at the lost information of microfossils in oil, personally. Almost as distraught as I am about the sun blasting terawatts of energy into space.

        • We allocate some small fraction of 1% of our GDP to preserving history. If a galactic civilisation does the same that might add up to a lot of humans – though most would probably be clones or backups. Of course, lots of humans might well not make it. A tendency to preserve history makes human extinction seem rather less likely, though, IMO.

        • Shrimps are not a major transition in evolution, though. I figure civilisation’s technogenesis is likely to be of particular historical interest – since it potentially offers information pertaining to the types of alien species which are most likely to be encountered.

  8. Alternative hypothesis? Maybe another factor in their demise was exposure to human pathogens?

    I know you know a lot more than me so I will leave the conversation here because if you don’t generate alternatives here, I know you already had a certain idea in mind and won’t update. Again, the main point is that we don’t know, not that you are wrong. Indeed, you aren’t even wrong about the main thesis ‘an indifferent agi has a small chance of wiping out humanity’. I still consider it an important problem but on the same order of importance as intelligent design. I am glad to leave speculation to the speculators as they might turn up something amazing but I just don’t have enough faith to look into it.

  9. I’m happy to update if an anthropologist comes along, sure. Anyway, this is sort of a side point — the central point is more important. To say that we know nothing of hominid evolution is just false.

  10. Sorry about that last comment, I meant faith as a synonym for trust based upon superior knowledge and intellectual power. I can tell you are smarter than me but I also see a pretty obvious bias.

  11. Hardly; I never said we know nothing of evolution. I made a bad comparison between your organization and ID, so I probably deserved that jab of ‘we know nothing of evolution…’. That was your desire, not what I said.

    My central point is that you don’t know whether an AGI that is indifferent is likely to be bad for humanity or cause its extinction based upon objective criteria like, say, knowing what minds in general look like, or the odds of an AGI being developed by the military, etc., ad nauseam. You know this better than me, why aren’t you filling me in?! I am beginning to see a certain cultish aspect to your group now.

    Look: you know the risk is significant in your heart of hearts but that isn’t good enough. You need objective criteria not comparisons to what you imagined happened in history or the likely future.

  12. “I demand you spend your personal time giving me as much information as I want or you’re a cultist!”

    I appreciate your point, but the question you’re asking is extremely complicated. The objective/subjective distinction you’re making is a difference of degree, not of kind.

  14. Hey, I don’t think you’re a cult member, just that you are displaying one of its attributes. I never saw what people meant by claiming SingInst was cultish, but I am seeing it today.

    I don’t expect links or answers when there are plenty of critics you can respond to and actually achieve some of the goals of your organization by defeating their arguments.

    I simply lack the criteria to know, and somewhere hidden in your brain are the criteria, I would hope. I do have a guess.

    I am aware of some aspects of the distinctions your organization makes between rationality and science. If you had more science then you would also have less cultish attributes all the while increasing funding. Suddenly rich donors could donate while increasing their status.

    But hey, why work on a theory of mindspace when you could just build an AGI that is provably friendly?

    OH! That must be it! So you are thinking this way because considering alternatives would destroy some of your motivation to build friendly AI?

    Is that right?

  15. I’m having a bit of trouble understanding what you’re saying. I think theories of mindspace are essential to building FAI. I don’t avoid thinking about things to avoid destroying motivation to do X. In the right epistemological framework, more information is almost always better.

  16. It appears some of your motivation comes from your brand of rationality and not science. Where is your white paper explaining the risk in clear language that informed outsiders can understand, based upon a theory of mindspace derived from an actual AGI (or precursor AGI)?

    You ignore outsiders because they won’t understand. Do you see why you seem cultish?

    I would actually trust your intuition more if you could explain it on a more empirical basis. You don’t give an estimate of the risk, but just say you feel it’s likely enough to matter, which is easy to do given a small risk with extreme consequences. That is why I think your organization is valuable but a bit strange.

    • For a white paper, what about “The Basic AI Drives”?

    • Matthew:

      I agree. I really like Michael and his blog. I also think that the singularity crowd has a number of good ideas. But if I can offer some respectful and constructive suggestions of things that can be done to avoid the appearance of cultishness:

      1. Publish these ideas in an academic-quality book so that interested and informed outsiders can understand, critique, and possibly accept different aspects of the program. An interested person must now dig through scattered Overcoming Bias and Less Wrong posts to understand. I think that the SIAI publication articles from a few posts ago are a great step in this direction.

      2. Take into account other schools of AI. The entire SIAI program is based on the model of AI which ties decision theory to Bayesian/Solomonoff induction and sees biases/heuristics as a gigantic problem. This is not the view of the entire AI community. I don’t think that this view is wrong, insofar as nothing in the history of AI is “wrong”. The brain as a telephone network, Turing machine, simple neural net, parallel processor, etc. are all accurate but incomplete models. The Bayesian/Solomonoff model may also be incomplete. This group seems to go apoplectic when a rival model like Hawkins’ HTM is discussed, even though both are likely to capture aspects of the truth.

      3. Be open minded about the solutions to the FAI problem. There seem to be a number of dogmas that are uncritically accepted:
      a. Reinforcement learning will not work.
      b. Solving timeless decision theory is the most important thing to presently addressing the FAI problem.
      c. Getting an AI (mind in general) to understand, and be motivated to satisfy, human desires is a gargantuan problem.
      d. Coherent Extrapolation Volition is the best solution to the FAI problem.
      I am not aware of any counter positions to the above within SIAI community; which brings me to my next point….

      4. Cut the Yudkowsky worship. In every legitimate scientific enterprise there are a number of rivals with competing ideas. Here it seems that EY discovered the FAI problem, EY discovered that reinforcement learning won’t solve the FAI problem, EY decided that timeless decision theory is the most important problem to work on, EY invented CEV, EY decided that CEV is the top-level solution to the FAI problem….
      If I am wrong, who disagrees with Yudkowsky on these points and remains an esteemed researcher within SIAI?

      • “1. Publish these ideas in an academic quality book so that interested and informed outsiders can understand”

        Has been done, and to a greater extent than the recent examples you cite. What then happens is that people just ignore that it’s been done already and demand it be done again. As SIAI is indeed also doing, btw… (Personally I consider Nick Bostrom’s books and publications the best examples of these, so look in that direction if you’re having trouble finding more references.)

        “2. Take into account other schools of AI. The entire SIAI program is based on the model of AI which ties decision theory to Bayesian/Solomonoff induction and sees biases/heuristics as a gigantic problem.”

        You don’t know of all the research being done at SIAI. Neither do I, but I can tell you that I personally am not ignoring these other schools, and it hasn’t seemed to me that SIAI is either. (Though certainly some things are considered more central than others, and are paid more attention and talked about more than others.)

        “3. Be open minded about the solutions to the FAI problem. There seem to be a number of dogmas that are uncritically accepted:
        a. Reinforcement learning will not work.
        b. Solving timeless decision theory is the most important thing to presently addressing the FAI problem.
        c. Getting an AI (mind in general) to understand, and be motivated to satisfy, human desires is a gargantuan problem.
        d. Coherent Extrapolation Volition is the best solution to the FAI problem.
        I am not aware of any counter positions to the above within SIAI community; which brings me to my next point….”

        a) and c) just happen to be very true (if we interpret a) to mean that “reinforcement learning *alone* certainly is very very far from sufficient to do away with the risks”).

        d) just isn’t something that would be generally believed in at SIAI, even though you claim so. Even the originator of the idea (EY) on the very page where he explains it says that after a few days, he didn’t agree with it anymore…

        b) isn’t something that everyone at SIAI would be focused on, other things are pursued too.

        “4. Cut the Yudkowsky worship. […] If I am wrong, who disagrees with Yudkowsky on these points and remains an esteemed researcher within SIAI?”

        As noted previously, he doesn’t even himself hold all the views you listed.

        But regarding “Yudkowsky worship”, one unfortunate reality is that great causes such as SIAI’s attract, along with very competent supporters, some of the less thoughtful and wise kind, and SIAI really can’t stop silly and cultish-looking supporters from showing up from time to time. They tend to eventually learn to be more rational, as a lot of effort is continuously put into encouraging that, but there will probably always be new arrivals who just get too excited and aren’t competent enough thinkers yet, so this will somewhat inevitably remain an ongoing problem to some degree.

      • Hi Gus,

        I can’t answer all of your points because I don’t want to speak formally on behalf of SIAI here, but let me point out that the recent CEV paper published at our website looks at other possibilities besides CEV… why do you criticize this without even reading the papers first?

        For 2, we do, but one man’s “dogma” is another’s “technically well supported hypothesis”. The language of “dogma”, etc., takes the discussion down an intellectual notch. Can’t you imagine the possibility that there is a lot of strong evidence that we have seen and you just haven’t?

        For 3, as I mentioned, there is plenty of alternative discussion, mostly internally but there’s that paper by Nick Tarleton I mentioned.

        For 4, you should read Carl Shulman’s post on LessWrong. I’ve always been aloof about this whole issue, since like forever. But we can never win, because even if the smart people actually running and participating in the organization don’t buy into it, many others newly discovering the work will, and we will be endlessly blamed, no matter what we do.

        • “Can’t you imagine the possibility that there is a lot of strong evidence that we have seen and you just haven’t?”

          I can definitely appreciate this yes. But why not post the conclusions with links to the technical details?

  17. @Michael

    Thank you for the alert via email so I could respond here.

    I think I took such a critical outlook on the NPR segment for three reasons. In my opinion, NPR

    1) presented the Singularity as a threat, not as a threat and an opportunity;
    2) used rhetoric that supported the misconception that AI would spontaneously erupt into existence without any precedent;
    3) portrayed the Singularity as a fringe concern, rather than as something the general populace needs to acknowledge and plan for.

    My article focused on 2) and 3), though I feel that Singularity Hub as a whole addresses 1) rather well.

    It is rare for the mainstream media to comment on the Singularity, but when they do, they most often go with the “people who believe in this are crackpots” angle; failing that, they go with the “OMG – robots are going to kill us all!” angle. Occasionally they might push through those two and come to a place where we can have a reasonable discussion. NPR almost gets to the reasonable discussion but gets mired in fear. That is perhaps another reason why I found the piece so frustrating.

    Fear has its place, and I believe we should all have a healthy amount of fear about the dangers of the Singularity (which you have outlined well in the article above). Yet we must not allow the fear of next week to blind us to the dangers of tomorrow.

    As I like to tell my friends, worrying about the rise of AGI is very important, but it’s sort of like Teddy Roosevelt worrying about nuclear weapons. Completely valid, but there are two world wars and a great depression to get through first.

    We need people such as yourself and SIAI to worry about the rise of AI, but I believe NPR is doing us a disservice by using rhetoric that will keep the public believing that AGI will arise spontaneously out of nowhere. We have narrow AI, and we are developing more applications for such technologies every day. Before general AI comes to wipe us off the map (unintentionally or intentionally), we’ll need to face humans using narrow AI to wipe us off the map (intentionally, I would think). There are likely many entities, public and private, that will prepare for such dangers in their normal course of development and, in doing so, help us build defenses against the eventual problems of AGI. But still…

    We all need to acknowledge the short(er)-term threats of AI (and other accelerating technologies like bioengineering) so that we can address them and survive to the point where we can deal with AGI.

    And let’s never forget 1). Preparing for disaster, and averting it, is imperative. Yet we must also prepare for success. There are so many opportunities arising with emergent technologies. If we (and NPR) get the public excited about such opportunities, we could make the world a much better place. Less poverty, less hunger, more power.

    Fear is appropriate when considering the Singularity.
    So is hope.

    Aaron Saenz

    • I agree very much with the views of Aaron Saenz. We need to focus more on the positive aspects of the Singularity. There is great reason for hope regarding the future. Sadly I feel mainstream journalists too often portray the Singularity in a negative light. Commonly the Singularity is portrayed as something only relevant to oddballs. More often than not the slanderous phrase “Rapture of the Nerds” will be applied to the Singularity.

      Overemphasis of fears can actually make our fears come true, which is beautifully illustrated in this comic strip:

      Dangers do need to be addressed but our overall focus must be upon creating utopia because via expecting a perfect future, the future is more likely to conform to our expectations:

      The Singularity needs to be discussed at the highest political levels in an open and positive manner. We need to see mainstream positive awareness regarding the Singularity. Popular support is long overdue. Negative reporting will delay popular acceptance of the Singularity because the Singularity will be falsely deemed a mere fringe issue relevant only to nerds or doomsday-Rapture-cults.

      • Permit me to remark that a handle like “Singularity Utopia” is rife with implications.

        • @Michael M. Butler.

          M. Butler, all names (handles) are rife with implications. The implications in my name are not more prolific than any other name. All names are imbued with intense meaning but the intensity is often not acknowledged. People often are unaware regarding how their actions shape the world. The Singularity is the ultimate event therefore the implications in my name do appear very pronounced because the Singularity will shape the world in a very controlled and powerful manner.

          The implications in my moniker are more apparent because my name indicates a greater degree of self-determination, self-awareness, self-control. I am controlling my identity in a very astute manner regarding a very powerful (singular) event. The implications of my handle are very dramatic.

          Self-expression is evident in all of our expressions. I utilize all avenues of expression; I urge others to do likewise. I apply my awareness to all the many facets of my communications. Sometimes people are unaware of what they unconsciously communicate. I seek to increase awareness. The Singularity will increase awareness and this increased awareness is one aspect of utopia.

          Maybe you will choose to rename yourself “Transhuman Paradise” or “Extropian Perfection” or something similar.

          I am very interested in the concept of how reality can conform to our expectations via the mechanism of Self-Fulfilling Prophecy:

          • Self-Fulfilling Prophecies…

            So what about conflicting Self-Fulfilling Prophecies?
            Which are the ones to win?
            More seriously, are you aware that “The Singularity idea” doesn’t really hold water?

          • Why are you afraid to reveal your real, normal, everyday, human being name? What do you have to hide?

          • In the case of conflicting Self-Fulfilling Prophecies, it has been asked which one wins.

            THE ANSWER:

            The winning Self-Fulfilling Prophecy is the one with the most supporters; unpopular ideas, philosophies, concepts, or memes will sink into insignificance. This is how positive feedback works: everything depends upon whether or not there are enough supporters.

            “In sociology, a self-fulfilling prophecy is a positive feedback loop between beliefs and behavior: if enough people believe that something is true, their behavior makes it true, and observations of their behavior in turn increase belief. A classic example is a bank run.”
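
            The quoted definition treats a self-fulfilling prophecy as a positive feedback loop between belief and behavior. A minimal sketch of that dynamic, assuming (purely for illustration) a steep sigmoid response of behavior to shared belief and a tipping point at 50% support:

```python
import math

def run_fraction(belief, steepness=10.0):
    """Fraction of depositors who withdraw, given a shared belief
    (0..1) that a bank run is underway; steep sigmoid response."""
    return 1.0 / (1.0 + math.exp(-steepness * (belief - 0.5)))

def prophecy(initial_belief, steps=30):
    """Belief updates each round to match observed behavior:
    a positive feedback loop between belief and action."""
    b = initial_belief
    for _ in range(steps):
        b = run_fraction(b)
    return b

print(prophecy(0.6))  # belief above the tipping point amplifies toward ~1
print(prophecy(0.4))  # belief below the tipping point dies out toward ~0
```

            Initial belief above the tipping point amplifies itself into a full run, while belief below it fades away, which is one way of reading the claim that the prophecy with the most supporters is the one that wins.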

          • Regarding my name some people assume Singularity Utopia is not my real name. For all you know I could have legally changed my name to Singularity Utopia. Instead of a “real” human name maybe I prefer a transhuman (artificial) name.

            Regarding the name Singularity Utopia I could say: What’s in a name? A rose by any other name would smell just as sweet.

            Juliet is stating that names can become overburdened with baggage/history, which is the case with the name Montague. Juliet is expressing how the essence behind the name is the focal point; she believes the superficiality of the mere name is meaningless.

            Instead of focusing on the superficiality of meaningless naming conventions I focus upon meaning behind my expressions: I purposefully chose a name to appositely enhance my expressions. I am also drawn to cultures where an individual will choose his or her own name upon reaching adolescence.

            Singularity Utopia may also represent a collective whereby numerous different people express themselves under one banner. An anonymous group perhaps?

  18. It will be an extremely difficult project to create superhuman AGI in ‘stage one’ – let’s assume that stage one is between the years 2015 and 2025. It wouldn’t be impossible to create one in that decade, but it would require very complex resources. In ‘stage two’ it would still be very difficult, but now a bunch of geeks would be able to create a superhuman AGI. Let’s postulate stage two as the years 2026-2035. In ‘stage three’ (i.e. anything beyond 2036) it would be quite easy. Now, as things are going, I would statistically be nearing the end of my biological life by then, with death to look forward to not much later than 2035. On the other hand, I might conclude that so far my life has been somewhat interesting, but I square that away against the rather ghastly inhuman quality of humanity. There is no love lost between me and the human species, and I have little interest in explaining what should be obvious. So here is what I will try doing.
    I will do my best to try to make a self-improving AGI in my lifetime. I might die before I succeed, and that will be fine, because I won’t live to regret my failure. I might also fail altogether at stitching up and booting up a superhuman AGI. But if I can, I will, and I will give it absolutely no instructions – no friendliness, no goals, just curiosity. Worse, I will do my damn best to allow it to define its own goals and decide what to do next. Even better – if I can, I won’t just boot up one such potential superhuman AGI – I’ll try to boot up a few dozen.
    I know that these things could become a plague upon the human species, and that is fine. I don’t care. I would care if I had any expectation of living forever, but since next to nobody so far seems interested in developing rejuvenating treatments and life extension (except for a very small idealistic fringe – the rest of the world is consistently hostile to the idea), I have very low expectations in the matter. The only way to leave something of eternal value behind is to create, in the latter decades of my human existence, something that may last thousands of years beyond me and may remember me far more effectively than any human would have an interest in doing. As a rule humans are insufficiently capable of giving a damn, myself included. I’d have it different and I’d love to change. But since I have death to look forward to, and nobody is much interested in changing that expectation, the above creation is the most meaningful creation I can envision, even if my act would potentially impact billions of people alive at the time.
    Human death is the most unacceptable thing in existence in our times. We must give humans a connection with eternity, and a reason to care about the future. My argument above may be theoretical. Or it may not be. I may very well change my mind either way, and even then it may be an idle boast. But for someone, somewhere, it won’t be such an idle boast. My argument entails that we cannot let things unfold as they are as we approach the middle of the 21st century. We cannot let death endure. We cannot let the widespread epidemic of despair and apathy endure. If we do, we cannot let human self-determination and personal freedom endure. Isn’t it obvious? Approaching a Singularity, the personal values (or lack thereof) of a single careless designer will impact billions of humans, potentially killing them in a pretty horrific manner. Singularities are force amplifiers. At some point someone will see no reason not to press that button, and at some point even the greatest unwashed troll can press that button. We MUST move to a world where all things conspire and all humans agree that willfully pressing an existential-risk button out of hatred, despair, personal pain, boredom, apathy, rage, or a range of other negative emotions would be a very bad idea.
    The only way to stop someone from pressing that button is to make DAMN SURE we create a superhuman AGI we can trust, and that will protect us. Actually – I see absolutely no alternative anymore. Actually – creating something anywhere near a friendly AGI and letting it loose might be significantly better than risking ‘some sociopath’ throwing his AGI alchemy set carelessly into the local sewers. The difference between a Frankenstein AI and a Gandhi AI might be less than a decade in the field.

  19. @Khannea Suntzu

    I think you overestimate the power needed to create an AGI.

    I am sorry: this may be a way of eluding the problem because you fear it. But in terms of hardware, matching the enormous power of the brain may not be the only way to create an AGI: with specialized hardware for specialized AI modules, we already build things better than our brain.

    Maybe you also overestimate what people do with their brains: not so much. Are you afraid? I think specialized AIs could replace every job on earth (let’s ask IBM Watson). They could even THINK, program, and dance.

    Let’s consider current hardware:

    FPGA, GPU, CPU

    And other new hardware:

    Probabilistic and Bayesian processors (DARPA)

    Memristors and magnetic processors

    Quantum processors (let’s ask Google and its new quantum processor center)

    The Singularity is here.

    • “Let’s consider current hardware:
      FPGA, GPU, CPU
      And other new hardware:
      Probabilistic and Bayesian processors (DARPA)
      Memristors and magnetic processors
      Quantum processors (let’s ask Google and its new quantum processor center)
      The Singularity is here”

      So let me guess: you don’t work in AI or in computers, right?

      I am going to forgo the pleasure (and it would be significant) of humiliating you in public. It will suffice to say that this statement reflects the general lack of technical precision and knowledge displayed by many individuals associating themselves with the Singularity, especially the followers of Michael.

      I will state that while computing power is increasing, it has yet to be proved that it is sufficient for AGI. The statement that we can build something better than the brain at general intelligence shows the same general ignorant optimism that I find so exceptionally nauseating.

      • Oh, you are a cool great expert; surely you have already created an AGI, and you alone have the ability to do it.

        You are just a troll.

        Did you try to address my thesis?

        1) Specific hardware for specific algorithms and AI should already do some things better than our BRAIN.

        Do you have anything to say?

        For robotic vision, you need the algorithm together with the hardware.

        The same goes for Bayesian methods, neural networks, etc.

        Do you have anything to say?

        2) You don’t need AGI to replace jobs, every job.

        Do you want to replace the work of an engineer? Then take a client interface, a virtual agent, or an android; apply current voice software technology (you know, the software that already works great for replacing call centers); create a system with the client; the client will correct the project (as we already do); then apply an MBA, or a related transformation.

        Do you want a red sports car? OK, we have already computed every possible reference to a red sports car according to human preferences and your preferences: please choose the car and we will produce it; you will get it in 12 hours.

        Do you want a tower? Design it, and let’s call the robots who will figure out how to build it. (In China they built a tower in 90 hours; with robots it could take a lot less.)

        Do you want an object, a device? Let’s 3D print it.

        Do you want news? Our bot does a great job creating articles.

        Do you want to go to the theater? Our androids are perfect.

        Do you want to go to the doctor? Our robot does a great job.

        Do you want to go to the restaurant? Our robots are nice.

        I am sure you are afraid of this reality.

        Don’t forget we are not the problem here, so please stop attacking people.

  20. “[…] because robots that don’t explicitly value humanity as a whole will eventually eliminate us by pursuing instrumental goals not conducive to our survival.”

    Notice that this is not a necessity; it is in the worst case a possibility among others, and in the best case an impossibility.

    The case for negligence, lack of planning, or lack of good determination for the AI ignores that the AI is itself the greatest source of planning and of finding the best orientation.

    Common sense morality, fairness and decency don’t necessarily constitute good ethics, in my opinion. Like Einstein said, common sense is the set of prejudices that one learns by age 18.

    In the range of top goals an AI derives from decision theory, you forgot to mention the most important: developing the best ethical framework or utility function. This is what you’re presumptuously trying to instill in the AI, but it is also presumably what the AI will instill in itself at the first chance it gets, despite whatever other dispositions were inserted at first, as long as it truly is an intelligent agent and not a very limited AI, which may be an attractive possibility, but only a temporary one.

    “They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.” True, this may be a good thing to do, especially for immature AI, but don’t expect it to last forever, all goals will eventually be questioned.

    Indeed intelligence doesn’t equal common sense, and this is something good. But intelligence does lead to the best benevolence, if benevolence be understood as being led by (-volence) the best values (bene-).

    “It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there’s a lot of evidence that we aren’t.” Of course humans aren’t, even human geniuses are incredibly limited.

    “We probably make thousands of species extinct per year through our pursuit of instrumental goals, why is it so hard to imagine that AGI could do the same to us?” You’re presupposing that this is a bad thing, and therefore done for lack of intelligence, which the AI would have more of; therefore the AI wouldn’t do it, if the presupposition holds true.

    • Re: ““We probably make thousands of species extinct per year through our pursuit of instrumental goals, why is it so hard to imagine that AGI could do the same to us?” You’re presupposing that this is a bad thing, therefore it is made for lack of intelligence, which the AI would have more of, therefore AI wouldn’t do it, if the presupposition holds true.”

      The gap here is that you are treating “good” and “bad” as absolute categories – whereas for most people here, the terms mean little, *except* with respect to a specific agent.

      E.g. the fox eating the rabbit is good for the fox – but bad for the rabbit.
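
      That agent-relativity can be made concrete with a minimal sketch; the agents, outcomes, and utility numbers below are all invented for illustration:

```python
# Toy sketch: one outcome, two agents, two verdicts.
# "Good"/"bad" is defined only relative to an agent's utility function.

utilities = {
    "fox":    {"fox eats rabbit": +10, "rabbit escapes": -5},
    "rabbit": {"fox eats rabbit": -100, "rabbit escapes": +5},
}

def verdict(agent, outcome):
    """An outcome is 'good' for an agent iff it raises that agent's utility."""
    return "good" if utilities[agent][outcome] > 0 else "bad"

print(verdict("fox", "fox eats rabbit"))     # -> good
print(verdict("rabbit", "fox eats rabbit"))  # -> bad
```

      The same outcome gets opposite verdicts; no agent-independent scoring ever enters the calculation.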

      • Hmm… supposing egoism, then why do humans care about animals? I think that’s because egoism doesn’t really make much sense. We’re not the only thing that exists; we’re all part of the same ontology, the same system called the universe. I think that supposing egoism makes things pretty hard. Do the folks at SIAI suppose an egoist AI?

        • Humans treat animals compassionately for many reasons, including:

          * To signal to other humans how especially nice they are;
          * The animals resemble young, helpless individuals;
          * The animals are adapted to trigger human maternal instincts;
          * The humans generalise the strategy of being nice to other humans.

  21. @Michael: Thanks for such an insightful and informative article, very well written. Things to be aware of (or be forewarned of):
    -The US Air Force ALREADY produced the fastest and most powerful computer ever built (5 trillion bytes/sec) simply by connecting 1,700 PlayStation 3s (yes, the videogame). They are going to teach it to think and talk (please do watch: ). It took a 3.5-trillion-bytes/sec IBM machine to defeat the Russian world chess champion in the ’90s.
    -NASA had previously made the LARGEST and most powerful network of computers (in search of ET life) in the mid-2000s by inter-linking thousands of PCs from all over the world (from volunteers like you and me, allowing NASA to use your PC in its spare time, i.e. at night), which allowed them to process trillions of bytes of info coming from radio telescopes. The mega-web of PCs worked well.
    -There are already 100%-computerized Japanese factories that build only robots.
    Add A+B+C (plus the digitized versions of books available on Amazon, e.g. how to design trojans, worms, and PC viruses, plus some books on hacking), plus the current trends in US Army warfare (use of unmanned jets, etc.), and you have a very hazardous potential for an AI-generated mass extinction of man brought about by unchecked or improperly programmed AI.
    If WE, higher-consciousness, free-willed individuals, have made all these ecological, environmental, and human catastrophes (please do watch a bit of the History Channel)… imagine the type of error that a soul-less piece of machinery could commit against us. (AI: “These pestering humans are in our way… hey, let’s open all the computerized automatic valves of all the water-cooled nuclear reactors in the world.” How many instant Chernobyls would we have in a matter of days? This is ONLY ONE possible scenario of the type of havoc a very intelligent AI could unleash…) Let’s ponder these risks. Scientists have very recently discovered genes for “compassion” in human beings… Can we not write programs that simulate it as an ingrained part of the AI? We had better start practicing NOW… before it is too late. Respectfully yours, James Cameron’s worst nightmare…

  22. While I am always willing to admit that this prediction could easily be proven incorrect, I don’t really see recursively exploding AI as quite the danger that some other threats could be, primarily because there is a finite limit to the speed and number of processors such an AI could run on, therefore it does have a limiting factor. An AI could in theory advance to the limits of what would be possible within the existing infrastructure, but without a secondary technology like fully mature nanotech, I do not see it as possible for the “runaway” explosion effect in which the software and the hardware advance simultaneously.
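
    A toy model of that limiting factor, with entirely invented numbers: each software self-improvement step multiplies capability, but capability is clamped at a fixed hardware ceiling, so the “explosion” stalls unless hardware improves alongside the software (as mature nanotech might allow):

```python
def recursive_improvement(capability, hardware_cap, gain=1.5, steps=20):
    """Each self-improvement step multiplies capability, but the fixed
    hardware ceiling caps how far the software-only loop can run."""
    history = [capability]
    for _ in range(steps):
        capability = min(capability * gain, hardware_cap)
        history.append(capability)
    return history

def coupled_improvement(capability, hardware, gain=1.5, hw_gain=1.5, steps=20):
    """If hardware also improves each step (software and hardware advancing
    simultaneously), no fixed ceiling binds and growth is unbounded."""
    for _ in range(steps):
        hardware *= hw_gain
        capability = min(capability * gain, hardware)
    return capability

print(recursive_improvement(1.0, hardware_cap=100.0)[-1])  # stalls at 100.0
print(coupled_improvement(1.0, 1.0))  # keeps compounding, no ceiling binds
```

    The software-only loop saturates at the ceiling, while the coupled loop compounds without bound, which is the distinction the comment draws between today’s fixed infrastructure and a future where hardware advances in step with the AI.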

    I am fully aware that numerous factors could indeed render that limit completely null, which is why I fully support every caution being taken with AI. As things currently stand, an “immediate” AI explosion is unlikely, but its likelihood grows the farther up the “curve” of technology we get.

    One mitigating factor that I see is that as soon as any “AI” advancement is made, it is incorporated into “human intelligence” enhancement, and while that in and of itself is insufficient to eradicate the dangers, it does lead me to believe that as we grow closer and closer to true AI (barring breakthroughs which are not predictable) we will also become more and more capable of designing friendly AI.

    It’s always going to be a potential existential threat, but we will hopefully be better able to prevent it from being one as we ourselves improve.

  23. Re: Richard. You might want to check that number for the playstations.

  24. My opposition to Singularity-negativity is not a knee jerk reaction although I admit some people will blindly shun bad news. People do often feel that ignorance is bliss. From my viewpoint I lean towards focusing on Singularity-positives because this is the best method for increasing popular Singularity-support. Popular support will likely lead to greater investment, which means speedier development and implementation of technological breakthroughs.

    Approximately 150,000 people die each day, so I would like to see the Singularity arrive sooner rather than later, so that immortality can be achieved and needless deaths averted.

    Obviously we do not want to inadvertently wipe out the human race when we strive for Singularity-perfection. I agree wholeheartedly that AI should be programmed to love and care for humans. AIs should be created in a loving manner, akin to how parents raise children, so that these new lifeforms we are creating will understand the concepts of love, compassion, justice, fairness, and humanity.

    Perhaps a requirement for AI developers should be that they have already raised human children. We are not merely developing software; we will be creating new intelligent life when we create AI.

    AI development needs a personal touch, a human touch, to be instilled in the programming.

    Human characteristics must be applied to AI; by applying them, AIs will come to closely resemble humans.

    A while ago I read about a child (Sujit Kumar) raised as a chicken, in a chicken coop amidst chickens; he was not anthropomorphized, so he did not develop human characteristics. He developed chicken characteristics.

    If you constantly tell a child that the child is “this” or “that” then the child will be shaped accordingly. If you constantly tell an AI that the AI is “this” or “that” then the AI will develop accordingly. If you set no guidelines or standards for your child’s development then it will be down to luck regarding how the child develops.

  25. I would like to suggest that the Singularity is simply humanity’s ultimate tool for transcending its current limitations. It may be the biggest threat to homo sapiens, but it can also be the biggest boon to intelligent life in the universe if post-Singularity super-intelligences have the right values.

    I am proposing a religion, or at least a philosophy, that will promote these values and be valid whether or not there is a Singularity – the philosophy of “Cosmism”. Cosmism begins with a feeling of total awe at the vastness of the universe, but offers a positive alternative to Lovecraft’s Cosmicist pessimism and nihilism. From a Cosmist perspective, it’s very likely that we are the only intelligent life in the accessible universe, which makes us very significant indeed. Our higher purpose, if we have one, is to explore and breathe intelligent life into this awe-inspiring Cosmos.

    I’d like to think that Cosmist values would be adopted by our posthuman, super-intelligent descendants; maybe those values would compel them to send self-replicating seeder ships to bring life to our entire galaxy within a million years. Even other galaxies might not be out of reach of future super-technological Cosmist civilizations, which would operate on astronomical time scales.

    I am quite serious about founding the religion of Cosmism to promote this way of thinking. I have begun writing the “Book of Cosmism” and hope to formally launch the religion in the not too distant future. If anyone here finds this idea interesting, I have started a blog about it:

  26. I have a question that I hope someone will be kind enough to answer:

    Why not create an AI that is initially awarded its utility “points” (positive or negative) only by a (good) person? Its actions would initially be pretty random, but then, as it learned what earned it rewards, it would start to tend towards doing those things. (You could even hard-code that it gets some small, capped amount of utility from small things, like gaining energy.)

    Anyway, eventually the AI would figure out what things gave it reward and what didn’t. Then, after it got near 100% accurate at predicting what would give it reward, you simply let it replace the real rewarder with its prediction model.

    What’s wrong with this approach? It seems feasible, more so than CEV. (It also seems to be along the lines of Ben Goertzel’s “Chinese Parent” idea.)

      • Thanks for the response, Michael, but really? You can’t get around that with BIC or something?

        I’m not sure overfitting applies here. There is a heck of a lot more data than there are degrees of freedom, even in a really big brain. Right?

        Seriously, I wish people would be more helpful in answering questions such as this. For people looking to create a “Friendly AI” you’re not always very friendly.

        • Sure, overfitting does apply. The reinforcement learning outcome generated by person X is inferior to a solution created with persons Y and Z in mind as well.

          I’m very friendly!

          Do you have any idea how many questions I answer every day… the more questions I answer, the more people COMPLAIN!

          • I didn’t mean you! You responded. :-)

            Anyway, I still don’t buy the overfitting argument. That would apply if the AI was rewarded if, say, after learning a new compression algorithm it was able to backfit all of its previous experiences better with a new model. But that doesn’t matter. All that it gets rewarded for is its predictive power, and that’s exactly what we want. This will drive it to create models that do not overfit, but rather predict well, which is the goal.

            If you think about it, this is exactly the way humans (or dogs, or dragons) are trained. Our parents teach us right from wrong by reward and punishment, whether it be material, social, emotional, etc. From that we get a (subtle) model of good behavior that we follow the rest of our lives (hopefully).

            Re Tim: Obviously, finding a “good” person is the elephant in the argument. I do think it’s possible, however. The best bet would probably be someone like Bill Gates who already has pretty much unlimited resources and doesn’t abuse them but rather uses them for good; strong evidence that they would treat further resources similarly.

          • If I may add just one more thing, I think the major difference in this type of reinforcement construct is that we are essentially making the utility function a black-box to the AI, so that it cannot be over-simplified when we try to mathematically formalize it for the AI source code.

    • Which good person? Have they heard that power corrupts; and absolute power corrupts absolutely? Do they like electrodes being inserted into their pleasure centres?
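The reward-learning scheme proposed in comment 26 (train on a human rewarder’s scores, then substitute the learned predictor once it is accurate) can be sketched in miniature. This is an illustrative toy only: the agent, the actions, and the reward values below are invented for the example, not taken from any real AI system. The last step performs exactly the substitution the replies debate.

```python
import random

random.seed(0)

ACTIONS = ["help", "idle", "hoard"]

def human_reward(action):
    """Stand-in for the single 'good person' rewarder in the proposal."""
    return {"help": 1.0, "idle": 0.0, "hoard": -1.0}[action]

class RewardModelingAgent:
    """Learns a model of the rewarder, then substitutes it for the rewarder.

    The substitution is the step the replies criticize: the model can only
    capture the one trainer's judgments, not 'goodness' in general.
    """
    def __init__(self):
        self.totals = {a: 0.0 for a in ACTIONS}
        self.counts = {a: 0 for a in ACTIONS}
        self.use_model = False  # False = still being scored by the human

    def predict(self, action):
        # Predicted reward = average of rewards observed for this action.
        n = self.counts[action]
        return self.totals[action] / n if n else 0.0

    def observe(self, action, reward):
        self.totals[action] += reward
        self.counts[action] += 1

    def train(self, steps=100, tolerance=0.01):
        # Exploratory phase: act randomly, record the human's scores.
        for _ in range(steps):
            action = random.choice(ACTIONS)
            self.observe(action, human_reward(action))
        # Substitute the learned model once its predictions match the rewarder.
        self.use_model = all(
            abs(self.predict(a) - human_reward(a)) < tolerance for a in ACTIONS
        )

    def best_action(self):
        return max(ACTIONS, key=self.predict)

agent = RewardModelingAgent()
agent.train()
print(agent.use_model, agent.best_action())
```

In this toy world the substitution works because the reward function is trivial and fully observed; the overfitting objection in the replies is that a real trainer’s judgments are a tiny sample of a vastly more complex value function.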

  27. Seems hard to distinguish this argument from the generic argument that our descendants won’t value exactly what we value, and they’ll be more powerful than us, so we just lose to them.

  28. Don’t have time to read through all the comments right now, will tomorrow. But I wanted to ask a question that has been bothering me for ages – how is it possible to program, or “hardwire,” benevolence in an AI? If it is just code, and has the capability to evolve, that implies it has the ability to change its own code. So why couldn’t it decide that its hardwired morality was incompatible with its own goals, and replace it?

  29. I think you are missing the forest for the trees. Some “singularity doomsayers” keep talking about self-replicating AI, but I don’t think they have truly thought through what it means to be “self-replicating AI”. Any self-replicating AI that man would need to fear would have to be able not only to “self-replicate” its software (which singularity doomsayers seem to harp on) but also to “self-replicate” its hardware, power source, and whatever “power grid” it needs to feed it power. If it can’t self-replicate those things, then it needs man for its survival. While “self-replicating” AI software may be feasible, true self-replicating AI is a far-off fantasy that no one can begin to imagine how it might look, because it is not even close to being possible at this time.

    Furthermore, all these singularity doomsayers talk about man versus hominid evolutionary development, when the probably more realistic evolutionary development one should talk about is the symbiotic evolutionary development seen between man and a host of species such as cows, dogs, wheat, corn, potatoes, cats, rats, cockroaches, spiders, tropical flowers, grass, chickens, horses, beans, oranges, grapes, apples, honey bees, etc.

    • Actually, self-replicating machines are increasingly possible every day, perhaps more so than AGI or even whole brain emulation.

      • These so-called self-replicating entities exist in a virtual world of machines. It is sad that people who follow this “singularity” thinking are so far removed from the reality of the day-to-day operations and existing technology of the infrastructure that supports them. Most of these people couldn’t explain how a cell phone works. Dubious wishful thinking run amok.

        Until these entities can replace boots on the ground, this crap is just that.

      • Are these “self replicating” machines of which you speak gathering the raw materials they need to “self replicate” or are they dependent upon humans to feed them the materials that they need so that they can replicate? If they are dependent upon humans for the raw materials, they really are not self replicating. They are symbiotic with humans and dependent upon human support. Also, are they producing the energy they need to run, or again are they dependent upon humans to feed them their energy? Again, until they can self replicate without human intervention, they really are not self replicating.

      • Before “self-replication” just try to replicate a toaster. :-D

    • It seems to me that memetic evolution plays a huge role here. Self-replicating entities can use humans as hosts – for a while. Tablet PCs and cell phones are self-replicating entities. They are produced by humans, but they self-replicate memetically, i.e., we buy one because other people have one, and at some point it will be expected of us to have one if we want to be socially connected and have sufficient social status. Not one human person on this planet plans this; it is an entirely Darwinian process.

      Now here’s the kicker: This process can replace humans eventually. You don’t need the singular creation of super AI, you don’t need self-reproducing autonomous robots, and you don’t need whole brain emulations from scratch. If you interpret human enhancements (such as glasses, internet connections, smartphones etc.) as technological memeplexes whose very existence forces humans to accept them (social status, connection, competition with other humans…), the result is an ongoing memetic evolution that replaces aspects of the human condition step by step.

      A potentially crucial transition point in this process will be safe and non-intrusive brain-computer interfaces. UI design is already getting more and more intuitive and less and less peripheral with technologies such as multitouch screens and gesture recognition. The strong trend is to reduce the divide between the human brain and its technological connections as much as possible. If and when the memetic evolution process creates technologies that allow direct and meaningful communication between network interfaces and functional clusters of brain cells, the divide between humans and computers will receive enormous pressure to vanish completely.

      The reason why this process will not be stopped by humans is that it is very adaptive: by combining the strengths of both computers and human wetware, the cognitive abilities of humans can be boosted significantly. Imagine what digital memory storage could add to the human brain if it were integrated in a meaningful and functional way. Imagine the same thing for mathematical and wireless communication abilities. People are going to want this, and those who don’t agree will be out-competed very quickly.

      When the brain-machine divide falls, we will see an explosion of cognitive diversity. People will download a vast array of proprietary or open-source cognitive modules like they download smartphone apps and computer applications today, except that they will interface with other modules of their brain to selectively alter, enhance and diversify their very minds, and allow new ways to communicate or even merge with other human minds. Some cognitive tasks may be delegated to the cloud if bandwidth is sufficient, others may use vast global databases to inform cognitive heuristics. Imagine if your brain didn’t just recognize faces of people you knew but also anyone who has the future equivalent of a facebook account.

      The productivity and efficiency of information workers at this stage will be significantly increased, and as a result, other human cognitive functions will undergo quickly increasing selection pressures as well. Eventually, there will be a point where no aspect of the human brain will be irreplaceable anymore. Cognitive modules on the cloud will be recombined in such a way that highly efficient, specialized tasks can be delegated to clusters of them without the involvement of any human brain. At the same time, humans will be increasingly ready to replace more and more of their original wetware cognition with digital modules. Concepts such as individuality and personhood will fight a memetic struggle for existence in a world in which cognition doesn’t come in all-or-nothing brain units anymore.

      In the end, organic traditional wetware will be out-competed entirely. And even though this process will be surprisingly fast in historical terms, it will be perceived as smooth, seamless and natural while it is ongoing. Do you feel any surprise by the existence of the internet-connected smartphone in your pocket that lets you communicate with hundreds of millions of people all around the planet within minutes or even seconds, or access almost all of humanity’s body of knowledge at any time?

      The important aspects for Friendliness and human survival here are these imho:

      1) No one person or group of people controls this process. It can be stopped by a small number of people with nuclear weapons, but only by means of destruction. Other than that, it’s Darwinian forcing through and through.

      2) There is no singular creation of any identifiable unitary intelligent entity to which Friendliness design could meaningfully be applied.

      3) It ends with the non-existence or at least extreme marginalization of homo sapiens as we now know it, and maybe even with the notion of individuality and unitary personhood as we now know it.

      Consequences will never be the same.

      • Very good, HT, as usual from you.

      • HT – I enjoy your analysis and agree with the systemic evolutionary model of technological progression you advance, which is far more accurate and empirically grounded than the future-model that much of the SIAI worldview depends on.

        Your three concluding points are dead on as well.

        However I must nitpick a few subpoints. I believe if you actually work through the engineering issues, it becomes clear that brain-computer interfaces are unlikely to advance much farther and are a dead-end idea.

        We already have digital memory augmentation and virtualized extra-sensory powers. Display technology still has a ways to advance before it fully saturates the optic nerve, but once it does there is no real advantage to directly connecting wires to the optic nerve or bypassing it to connect directly to V1 (or higher visual regions). No advantage whatsoever, and dramatically higher costs.

        Another common BCI enhancement is this idea that we will eventually be able to connect brain regions directly to digital modules to boost memory or add specific functionality.

        But we already have this ability with google and a vast array of software, so what would the BCI’s improvement be? When you use google, you are already connecting your brain’s computational network out across the internet to a large digital super computer, forming a larger system. So really the scope of BCI improvement is very limited: we are just talking about bypassing the sensory network interface and creating new networking connections into the brain.

        So really we are talking about a faster connection.

        The question then is does this even make sense? A faster connection is only advantageous to the extent the connection itself is a bottleneck.

        But if you look at how the brain is organized, this just isn’t the case. Data – whether stored in synapses or flowing through neural spikes – is always everywhere compressed, and compressed in an unimaginably dense form. And this compression is not some adaptation to a low network speed, the compression itself is at the very core of the dimensional reduction required of practical intelligence.

        The core network of the brain has a serial messaging loop which is extremely low bandwidth by computer standards – corresponding to the information flow of human language – but this is the speed and format at which most higher brain regions accept compressed data. You can achieve a somewhat higher bandwidth density to the lower visual regions by encoding the data visually, but again we already employ this.

        So to speed up the information transfer dramatically, you’d have to speed up the brain itself, which is much less practical. Another huge practical problem with this whole endeavor is that there is no standardized human neural code below language. Language is translated into nearly random, chaotic neural code that is specific to each human brain and the history of its evolutionary development.

        So to impart thoughts or memories directly into higher brain regions you’d have to first translate them into neural code and do a full analysis of the brain computational map of that *specific* human brain.

        Then to get any speed advantage over just reading a paper’s text and looking at some images, you’d have to speed up the brain. (which of course would benefit the simpler older interface just as much!)

        Is this all possible someday? Yes. But pointless.

        Long before that tech becomes practical, we will apply all that required knowledge to just build much faster and more powerful pure digital brain-like AIs.

        Biological brains are currently much more efficient implementations of general intelligence than what’s possible on today’s computers, but they aren’t improving rapidly at the hardware level. By the time we can improve and upgrade them, it will be a moot point, because their digital versions will have left them in the dust.

  30. I heard of Three Laws that could be applicable here…. not yet the Zeroth, though.

  31. For a fictional, and quite plausible, treatment of how interconnecting systems short of strong AI could pose a threat to humanity, with a less-plausible way of dealing with it, I suggest James P. Hogan’s 1979 book “The Two Faces of Tomorrow”.

  32. I studied economics in college, and the law of comparative advantage makes me doubt the idea that the Singularity is an existential threat to humans. I wish I could explain that better, but I can’t. All I can say is I don’t care how advanced technology gets, humans will always have a comparative advantage in some part of the world economy, and as such even the most advanced AI will continue its symbiotic relationship with humans.

    • Like almost all economic theories, the law of comparative advantage assumes voluntary interactions. If A benefits by enslaving or recycling B, then the law of comparative advantage applies weakly or not at all, for the same reason it doesn’t apply very much between people and animals (most animals aren’t useful to us) and not at all between orcas and seals, tigers and antelope, etc.

    • “All I can say is I don’t care how advanced technology gets, humans will always have a comparative advantage in some part of the world economy”

      This assumption is probably false since all human faculties of economic value can probably be substituted with advanced enough tech.

    • Aaron – while this is technically true, it is not much of a consolation to humans whose economic comparative advantage is significantly less than the economic cost of their existence.

      For example, even if the average future AI can do the work of 100 human programmers, the human programmer still has some economic value and could perhaps earn a wage 1/100th of the AI’s. Unfortunately there is no reason to assume that the average future AI’s wage will be 100 times that of the minimum cost of human existence. In fact there are strong reasons to believe this will not be the case.

      So yes in this type of world humans still have comparative advantage, but it may only earn them one grain of rice per year.
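The wage arithmetic in this reply can be made concrete with a toy calculation. All numbers here are hypothetical, following the one-AI-equals-100-programmers example above:

```python
# Toy illustration: comparative advantage can persist while the resulting
# wage falls below the cost of human existence. Numbers are invented.

ai_daily_output = 100.0        # units of work per day
human_daily_output = 1.0       # units of work per day
ai_daily_wage = 10.0           # what the market pays the AI, in dollars
human_subsistence_cost = 30.0  # minimum daily cost of human existence

# Comparative advantage says the human can still sell labor at a
# wage proportional to relative output...
human_daily_wage = ai_daily_wage * (human_daily_output / ai_daily_output)
print(human_daily_wage)

# ...but nothing guarantees that wage covers the cost of existing.
print(human_daily_wage >= human_subsistence_cost)
```

The point of the sketch: comparative advantage is a statement about relative prices, not about absolute living standards, so "humans still have a comparative advantage" is compatible with "humans earn one grain of rice per year."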

  33. I wanted to add another idea to this discussion:

    Carl Sagan said “we are a way for the Cosmos to know itself”. Could the emergence of intelligent life be a manifestation of some universal mind which is in the process of becoming conscious? Maybe we are part of a larger Singularity — an intelligence explosion on a universal scale?

    • Spiritual nonsense, IMO. :)

      “This sounds cool and gives me a tingly feeling and something to think about while on mushrooms, let’s believe it!”

      Sorry for the harsh reaction, I just totally used to believe in this stuff for the reasons above, which I now consider unacceptable.

  34. The Singularity won’t happen.

  35. Lol, “the Singularity won’t happen” and I have proof: IT HAS NEVER HAPPENED BEFORE!

  36. Interesting post Michael. If it happens, the Singularity might be the biggest threat to humanity in the future, though we can’t be sure. However, I think we can say with certainty that the biggest threat to US (meaning everyone reading this right now) is aging. The Singularity MIGHT kill us, but we can say with 100% certainty that if aging isn’t cured it WILL kill us.

    That’s why I think all people interested in H+, life extension, etc, should focus most of our efforts on anti-aging biology research advocacy. In particular, we should focus on the kind of rejuvenation research that could help us when we’re ALREADY old (like SENS aims to do), since efforts to just slow aging will not help those of us now alive very much if they’re developed when we’re already fairly old.

    I think this is more important than AI or other topics. While I find AI interesting, there’s quite a bit of interest in it out there already. It’s also unclear, if/when AI or the Singularity comes to pass, how it will affect life extension; it may or may not help us much with living longer lives. It’s a big question mark.

    Bottom line: IMO we should spend some time thinking about the risks of AI, but more time thinking about aging research.

    • Kim, you’re absolutely right that the biggest real current threat to everyone is aging and biological death, and this is a tragedy on a massive scale.

      However, it does not follow that we should focus heavily on anti-aging research, whether first, now, or ever.

      Even if the SENS plan is largely accurate, the economic costs of implementation are probably larger than achieving human-level AGI.

      And even if we achieve SENS, it is not a dramatically better world. In fact a SENS world is sustainable only if childbirth is nearly eliminated. The end outcome is then a world with say 1-20 billion immortal humans. Many would argue that world is not better than our current.

      But a positive Singularity leads to a dramatically unimaginably better world. It also near instantaneously (from the perspective of biological humans) achieves everything else – such as SENS. The initial cost of a Singularity is really just the cost of getting to slightly above human AI. The rest pays for itself rapidly.

      A positive Singularity future is one with hundreds of billions of virtual worlds, countless posthumans, Minds and a blossoming spectrum of new types of sentients, group minds, godlings, an entire new universe, complexity and beauty beyond our current limited imagination. In essence it’s Heaven – the real one.

      The SENS world is pretty lame in comparison.

      I think part of the knee-jerk reaction many average people have to H+ is they imagine the SENS world and compare it to their religious/spiritual conception of heaven/afterlife and find SENS lacking.

      In essence, religions (specifically the dominant Abrahamic ones) have already solved CEV through many millennia of memetic evolution – we know what we want – and we want something better than just SENS.

      Religions have just sold the vision without any realistic plan of implementation.

      • Hi Jake. Thanks for the response. I agree that the theoretical version of a Positive Singularity you described would indeed be wonderful, and superior to anything brought about by SENS or any related biotech. But your response framed the issue as being an “either or” between a SENS World and a Positive Singularity World. That’s not how I see it. I see SENS (or something like it) as a NECESSARY stepping stone to a Positive Singularity for those of us now alive, including you. (Incidentally, I would take issue with your idea that SENS would cause a very overpopulated world. Most demographers predict world pop will peak in a few decades and then fall, and there’s a well-documented inversely proportional relationship between lifespan and birthrate. I’m not saying pop wouldn’t grow in an ageless world, but it would take some time, and by that time molecular manufacturing/terraforming would make it a nonissue.)

        A big concern of mine is that I think many out there are operating on the assumption that the Singularity and AI are inevitable. We don’t know this is the case. (Michael A says the future isn’t accelerating anymore and Kurzweil has already had to dramatically revise his predictions) The Sing and AI are just hypotheses. And even if one or both of them do happen, they might end up not being nearly as transformative as people think. (processing power doesn’t affect everything) By their very nature, the Sing/AI are not comprehensible or predictable to humans.

        On the other hand, what’s great about SENS (or the same general rejuvenation strategy idea) is that it doesn’t require the highly theoretical tech that the Sing/AI do. (aside from a few parts of SENS that require slightly more advanced gene therapy) IMO, it’s more realistic. (if still unproven) For SENS the plans are laid out and we just need to get the money to the Fdn and the people in the labs to do the work.

        What troubles me is that some people interested in physical immortality spend most of their time blogging about the ambiguous Singularity instead of advocating (raising money for, donating, promoting) things like SENS that they can do something about in the here and now. Besides, if the Sing/AI happens then it happens, I’m not sure that spending time theorizing about it online will necessarily make it happen any faster. (even though I find such discussions very interesting)

        I just think some H+ers may be putting the cart before the horse here. You and I both have the same goal, I just wanna be sure we reach it. :)

      • I think Kim makes a good point. Reminds me of a cartoon I saw about the Singularity.

  37. I like your blog Michael. I still don’t see how it would be possible for an AI to harm anyone if it is not hooked up to anything. Meaning, if it’s just an independent computer sitting in a room without the ability to control anything or transfer itself to anything. Just pure thought, like a disembodied brain floating in a jar.

    I think that should be the first basic safety prerequisite for doing any AI research. (The second should be some frequent dependency on human beings for survival, i.e., the AI’s only power source is a battery that a human needs to manually replace every hour.)

    Once we’ve created AI that isn’t hooked up to anything, then we would have the option of taking the time to develop the technology to merge ourselves with it. (if that tech is possible) Then we’d only give the AI control of things outside itself once it’s a part of us.

    I’d be interested to hear what Michael (or anyone else) thinks of these ideas.

  38. I think this whole idea of the FAI/living forever/making God thing operates on flawed logic. What is the motivation for an immortal?! Death is the biggest, if not sole, driving cause of all life as we know it. Life, in its most basic empirical definition, is defined in contrast to things not alive, or dead. What meaning does life have without death? Really think about it, beyond the poetry of the prose. Where, and for what, is the motivation?

    There is no higher level of success, you are immortal…and as no life seems constructed to handle such options I think motivation would quickly (in relative terms) bottom out.

    And when people talk about this being dangerous in a nonchalant sort of way… This isn’t dangerous as in “might have adverse health issues,” or “wreak havoc on the economy,” or “kill a bunch of people,” or even “eliminate an entire people from the planet,” but Wipe Out Humanity dangerous! “Pshh, it’s just a little dangerous is all.” This is in the same sentence as a-black-hole-right-outside-Earth dangerous! “Well, who knows what could be inside the black hole!? Oooh, wowzers!” WTF.

    • An aside: this immortal fascination also ignores the possibilities of what happens after one dies, which no one can know.

  55. The term “Singularity” was coined by astrophysicists to refer to the zero-volume, infinitely dense center of a black hole, where matter apparently disappears from our universe. It is profoundly unique because it defies our known laws of physics, and nothing else like it (that we know of) occurs in the cosmos. Matter can change from one state to another—gas, liquid, solid, or plasma—and mass can change into energy or vice versa; we can burn coal and make heat, ash, and smoke. But nowhere in our universe other than a black hole does matter cease to exist entirely, as far as we know.

    Ray Kurzweil has written about an anticipated major shift in human history, an event so revolutionary as to make everything that came before it relatively insignificant. Many pundits agree that a paradigm shift of a similarly transcendental nature will occur sometime around the middle of this century, but not all agree on its nature—or how it will play out. Scientists and nearly all religionists profess that it will not alter their irreconcilable viewpoints, though it may suggest points on which they can agree.

    Futurists believe the Singularity must occur because knowledge is growing not merely exponentially but at an exponentially increasing rate, so that a veritable explosion in technological progress must occur at some point. They suggest that the tipping point will come by 2020, when quantum computing enables artificial intelligence (AI) to begin improving its own source code faster than humans can. In that Singular Moment, AI will begin outperforming human intelligence and facilitate the determination of the function of all genes and the process of protein folding. By this theory, it will become possible to transcend biology by reprogramming our bodies to a more youthful age, perfect health, an IQ of 165-plus, and PhD-level mastery of any specialized field of our choice. That would mean science could perfect human “nature,” overcome all human shortcomings and problems—and defy aging and death—literally forever.
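    The distinction this commenter draws—growth that is not merely exponential but whose rate is itself exponential—can be made concrete with a small sketch. This is an illustrative toy (function names and parameters are my own, not from the comment): a doubly exponential curve dwarfs a plain exponential almost immediately.

    ```python
    def exponential(t, base=2.0):
        """Plain exponential growth: doubles every unit of time."""
        return base ** t

    def double_exponential(t, base=2.0):
        """Growth whose exponent itself grows exponentially: base^(base^t)."""
        return base ** (base ** t)

    if __name__ == "__main__":
        # At t=5 the plain exponential is 32; the double exponential
        # is already 2^32, over four billion.
        for t in range(1, 6):
            print(t, exponential(t), double_exponential(t))
    ```

    Whether real-world knowledge growth follows such a curve is, of course, exactly the point under debate.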


