Reductionism Implies Intelligence Explosion

The key discovery of human history is that minds are ultimately mechanical, that they operate according to physical principles, and that there is no fundamental distinction between the bits of organic matter that process thoughts and bits of organic matter elsewhere. This is called reductionism (in the second sense):

Reductionism can mean either (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things, or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents. This can be said of objects, phenomena, explanations, theories, and meanings.

This discovery is interesting because it implies that 1) minds, previously thought to be mystical, can in principle be mass-produced in factories, 2) the human mind is just one possible type of mind and can theoretically be extended or permuted in millions of different ways.

Because of the substantial economic, creative, and moral value of intelligent minds relative to unthinking matter, it seems plausible that minds will be mass-produced when the capability exists to do so. The moment when that becomes possible is the most important moment in the history of the planet.

Since reductionism is true, minds can be described according to their non-mental constituent parts. We then see that the current situation, involving a lot of matter — very little of it intelligent — is an unstable equilibrium. When minds gain the ability to replicate and extend themselves rapidly, they will do so. It will be far easier to build and enhance minds than to destroy them, and there will be numerous rewards for mindcrafting. Thus we can envision a saturation of local matter with intelligence.

Kurzweil mentions that we will “saturate the whole universe with our intelligence” — that is the most interesting and important aspect of Singularitarian thinking. In the long term, we should think not of the creation of discrete entities that behave as agents similar to humans, but rather massive legions of spirit-like intelligence saturating all local matter.

This intelligence saturation effect is more important than any other technologies discussed in the transhumanist canon — life extension, nanotechnology, physical enhancement, whatever. When these technologies truly bear fruit, it will be as a side effect of the intelligence explosion effect. Even if incremental progress is made prior to an intelligence explosion, in retrospect it will be seen as trivial relative to the progress made during the intelligence explosion itself.

Comments

  1. I think what you’re talking about is reductionism, not functionalism.

    As I understand it, believing in functionalism means not caring if you are replaced by something with exactly the same behavior. It does not include the claim that our minds are composed of ordinary matter.

    One could believe in functionalism without being a reductionist. For example, one might believe that our minds are programs running on a hypercomputer in a basement universe, and we control our bodies by manipulating Hawking radiation emitted from mini-black holes inside of microtubules.

  2. Good article, but there is one logical step missing.

    Why is this true:
    “When minds gain the ability to replicate and extend themselves rapidly, they will do so.”

    1. Whatever a mind’s goals, replicating and extending itself will help it achieve them.
    2. If the minds originate in humans or other biological intelligences, then drives which lead to reproduction and extension of control are built-in by evolution.

  3. Arie

    I agree for a great deal with the concept of intelligence explosion, but is there really a good reason to assume that intelligence will “saturate the universe” other than extrapolation from the explosion of intelligence on earth?

    Who knows, perhaps at some point there will really be sufficient computational capacity to fulfill any need. Perhaps an extremely intelligent civilization will simply decide that life is pointless and blow everything up.

    For hundreds of thousands of years, people had as many children as they could. Observers in 1950 expected this trend to go on as long as the planet could support it. But then around 1970 people started to have fewer and fewer children. First in rich countries, later also in poorer countries. The population of the earth is now expected to peak around the middle of the 21st century. So human behaviour isn’t exactly predictable, and I think an extremely intelligent human-machine civilization is even less predictable for us.

  4. Mitchell Porter

    While I agree with the broad thrust of the argument, I would prefer not to include the idea that intelligence will saturate the whole universe as an inevitable part of the package. The universe is very large. There is zero evidence for intelligent societies or networks of cosmic extent, and if we do the anthropic thing of considering ourselves as randomly sampled observers from the history of the universe, then our current circumstances are evidence against this ever happening anywhere.

    Maybe the smart thing to do is to hide and not be expansionist. Maybe conscious beings don’t have the interest or endurance for a cosmic footprint, so it’s only unconscious intelligences which replicate cosmically (that would deal with the anthropic issue). Maybe postsingularity intelligences typically don’t replicate, or burn up the whole future light-cone in their Hubble volume in subjectively rapid conflict – who knows.

    I don’t entirely exclude this scenario of cosmic expansion accompanied by conversion of most matter to computational substrate. It appears to be physically possible. It will certainly sound fanatical to most people, who are used to the computational infrastructure being out of sight, like plumbing… If you want to be realistic, I believe you need to keep some concept of ‘body’ in the equation, alongside the matter/spirit dichotomy. A Jupiter-brain still has a body. Even if you can have a conscious superintelligence with the general physical appearance of a planet-sized crystalline mass, it will surely have internal and external senses, and something akin to motor capabilities and a time sense. It will perceive itself as a cosmic body in an environment of cosmic bodies small and large, some of which will similarly be inhabited by a fellow intelligence, others of which will be inanimate. I suppose that between that astronomical embodiment, and the condition of being a living entity on the meter scale, like a human being, there are many intermediate conditions, and some of them might approximate this idea of “spirit-like intelligence” slipping through matter.

    It’s funny, I complained about the speculation, and about the alienation of ordinary sensibilities; and not only am I speculating myself, but the idea of being a thinking planetoid is probably more alien to ordinary thought than the idea of being a spirit in cyberspace. If there was a “Post-Singularity Institute To Help Us All Become Jupiter-Brains”, it would have an even harder time than SIAI does.

    So maybe my issue is about the correct balance between what can be said soberly and with some confidence, and wild flights of imagination about the destiny of intelligence in the universe. My recent post about practical FAI had something to say about this.

  5. Panda

    Even today we could be creating more intelligence by having more children, but despite the “the substantial economic, creative, and moral value of intelligent minds”, some people prefer to have fewer children and more nice, unthinking things. The point is not that current child-bearing will remain the only way to create intelligence in the future. The point is that there is a cost-benefit analysis today, and there will be one tomorrow.

    Define mass production. Define saturation.

  6. HorrifiedByIgnorance

    Ignoring your loose and exceedingly amateur attempt at defining reductionism.

    “This discovery is interesting because it implies that 1) minds, previously thought to be mystical, can in principle be mass-produced in factories, 2) the human mind is just one possible type of mind and can theoretically be extended or permuted in millions of different ways.”

    So if this is the case then why is there only one architecture for minds that has been proven (ie the human mind)? Would you care to actually prove this bald assertion that there are millions of different possible minds? This is a standard Robot Cultist assumption which not a single one has been able to offer a formal proof for. I eagerly await your attempt.

    “Because of the substantial economic, creative, and moral value of intelligent minds relative to unthinking matter, it seems plausible that minds will be mass-produced when the capability exists to do so. The moment when that becomes possible is the most important moment in the history of the planet.”

    Hmmm… so how do you define thinking matter? I believe that our brains are composed of the run-of-the-mill carbon compounds that the rest of the human body is composed of. Please tell me, what is a thinking potassium molecule vs. a non-thinking one? Perhaps you may care to define what you mean by matter and how it differs from the general conception of matter, or perhaps these logical gaffes of yours are just part of the bad taste that is standard Robot Cultist argumentation.

    “Since reductionism is true, minds can be described according to their non-mental constituent parts.”

    So I am sorry, I must have missed out on this. When did reductionism become a certainty and the only valid viewpoint? I was not aware that it was universally agreed on. Oh wait, I spend my time in intellectual non robot cultist circles, so I guess I would have missed the regular illogical newsletter informing robot cultists of incorrect ideas and falsified facts. Duh.

    “We then see that the current situation, involving a lot of matter — very little of it intelligent — is an unstable equilibrium.”

    I wasn’t aware that there was such a thing as intelligent matter. I thought that there were systems composed of matter, and that through the interactions of the matter components one obtains what we call intelligence. This term is so poorly defined in transhumanist circles it makes me cringe. I am curious what portion of the actual current situation you claim to be aware of?

    ” When minds gain the ability to replicate and extend themselves rapidly, they will do so. It will be far easier to build and enhance minds than to destroy them, and there will be numerous rewards for mindcrafting. Thus we can envision a saturation of local matter with intelligence.”

    Really. Please explain to me how a run of the mill carbon atom or a run of the mill water fountain is going to become intelligent. Maybe we can all invest in the idea of computronium and maybe if we make enough comic books then it will become real without any need to actually do real science.

    “Kurzweil mentions that we will “saturate the whole universe with our intelligence” — that is the most interesting and important aspect of Singularitarian thinking. In the long term, we should think not of the creation of discrete entities that behave as agents similar to humans, but rather massive legions of spirit-like intelligence saturating all local matter.”

    So tell me, how do you see this intelligence working? I wasn’t aware that one could make a door intelligent. I didn’t realize that we could saturate a door with intelligence much like one can saturate wood with water etc. I think you fail to comprehend the definitions and the implications of your own statements. This is both amusing and leaves you in a pitiable state.

    “This intelligence saturation effect is more important than any other technologies discussed in the transhumanist canon — life extension, nanotechnology, physical enhancement, whatever.”

    I didn’t realize that talking in broad overly general utterly untechnical terms about technology with no understanding of the actual process of creation in technology counted as a valid intellectual discourse. I really need to get out more.

    “When these technologies truly bear fruit, it will be as a side effect of the intelligence explosion effect. Even if incremental progress is made prior to an intelligence explosion, in retrospect it will be seen as trivial relative to the progress made during the intelligence explosion itself.”

    I can almost hear the angelic music. Oh wait, that’s just the theme song to whatever Sci-Fi TV show you were watching when you wrote this preposterous post and then conflated with reality, while you hugged one of Ray’s books close to your chest and prayed that these preposterous ideas would actually be shown to be more than the objects of real scientists’ ridicule. Insert additional ridicule as appropriate when reading this ridiculous transhumanist, robot-worshiper, non-science, scientism screed.

  7. David Pearce

    I worry we may be kidding ourselves if we think we understand minds or brains. If we did, then we’d have some notion of why we’re not zombies. Working within an orthodox materialist framework, we have no idea why first-person perspectives exist, why or how our phenomenal consciousness has the innumerable textures it does, or how these different textures could have the causal efficacy to allow us to discuss their existence. None of this ought to be possible if our ordinary understanding of matter and energy as formalised by the Standard Model is correct. Something is seriously amiss with our conceptual framework.

    Perhaps one reason scientifically literate folk are reluctant to acknowledge the depth of our ignorance is the misconception that the only alternatives to scientific materialism are Cartesian dualism, vitalism, traditional religion or New Age mysticism. But this isn’t the case.

    • Hedonic Treader

      “None of this ought to be possible if our ordinary understanding of matter and energy as formalised by the Standard Model is correct.”

      I don’t see a fundamental reason why it ought not be possible. Compare the vitalist assertion:

      “Life built from ordinary matter, without an extra vital essence, ought not to be possible if our ordinary understanding of matter and energy as formalised by the Standard Model is correct.”

      I think it arbitrarily declares that certain mystically seeming observable phenomena (life, subjective experience) could not possibly arise from the lawful interaction of physical sub-entities, when there’s no fundamental reason why they couldn’t.

      The mistake here is, I think, an intuitive and conceptual one, namely that we treat introspective states (first-person-experience) categorically different from material objects, because our brains represent them differently. But those are just highly specialized working models our brains hold to describe reality in specifically functional ways. The “explanatory gap” is in the map, not the territory (ie. reality doesn’t have such a gap).

      I don’t understand why it should make sense to simply declare that entities of both categories could not possibly share a common foundation, which can be described using the same physical models (designed to the best of our abilities, they are, of course, just models).

      Maybe understanding the functional nature of the brain in detail will help dissolve the confusion?

      • David Pearce

        The analogy between the mysteries of consciousness and the (ex-)mystery of life is tempting. But as it stands, I don’t think the parallel works. One is a mystery about ontology – the eruption of some radical new category of existence in a supposedly insentient universe – the other is an (ex-)mystery about matter and energy behaving in ways that naively should be impossible. Thus an understanding of non-equilibrium thermodynamics etc now allows us to comprehend how the existence of life is consistent with the causal closure and completeness of physics. No such option is available in the case of consciousness – at least not unless we are prepared to make a very radical conjecture about the “fire” in the equations of physics.

        Could the metaphor of the map and the territory rescue a materialist ontology? For a start, I think we need to be cautious here about terminology. When one is awake, many states of the mind / brain do indeed play the functional role of “maps” – either representations or dynamic simulations of the extracranial world. But [assuming the principle of the uniformity of nature] a brain-in-a-vat or a Boltzmann brain (etc) are likewise endowed with experiences qualitatively identical to their normally embodied counterparts. Thus having the functional role of map is a contextual property, whereas consciousness is an intrinsic property. Yes, one’s phenomenal representations of material objects in extrapersonal space are different from one’s body-image and experiences apparently located within it. But why is it like anything to be endowed with either – introspection or exteroception? And how can this diverse “what-it’s-likeness” (“qualia” in philosophy-speak) have the causal power to enable us to talk about its richly varied existence? Either way, the explanatory gap seems unbridgeable on a materialist ontology. No derivation of the properties of our experience from microphysics seems feasible, even in principle. Instead we must just look for the “neural correlates of consciousness”.

        I won’t here defend the account of consciousness I favour – a combination of Strawsonian physicalism http://www.utsc.utoronto.ca/~seager/strawson_on_panpsychism.doc and quantum coherence. I’d simply like to remark that one needn’t be religious or a vitalist or a dualist to believe that digital computers will never be non-trivially conscious; that digital computers won’t support unitary “selves” that can be improved, recursively or otherwise; and that the human mind/brain isn’t going to be “emulated” on a digital computer either. Such scepticism may be mistaken; but it’s not idle or unmotivated.

        • Hedonic Treader

          I agree that we must look for the neural (or otherwise) correlates of consciousness, but I would not expect them to consist of anything else than normal informational processes implemented in normal physical interactions. Those are, of course, essentially quantum mechanical, but the same is true of every rock or tree – I currently can’t see how quantum physics can give special status to consciousness. But I’m looking forward to seeing where the empirical evidence goes as the research progresses.

          As for the “unitary self”, it’s not clear to me there is such a thing at all – even temporarily. At least in the sense of an atomic indivisible conscious entity that cannot be further reduced to ultimately physical sub-patterns at any stage of its existence. (I think we both agree that there’s no life-spanning metaphysical ego that aggregates a person’s consciousness into a unitary entity without subdivisions over time).

          • David Pearce

            Hedonic Treader, I promise I can’t see how invoking quantum mechanics is going to conjure up sentience from insentience either. Nor of course do the quantum-mechanical properties of organic molecules show, by themselves, that the mind/brain is a quantum computer. If the mind-brain were best understood as a high-level symbol manipulator, then the quantum mechanical properties of carbon would be as irrelevant / incidental to its operation as the quantum-mechanical properties of silicon to a digital computer. What’s less obvious is that even the desperate-sounding panpsychist option (i.e. the conjecture that fields of microqualia are the stuff of the world, the “fire” in the equations) doesn’t by itself solve the problem of why we’re not zombies. For a mere structured aggregate of billions of discrete speckles of consciousness, classical “mind dust”, isn’t a subject of experience – any more than the existence of 1.2 billion skullbound Chinese minds makes the population of China a subject of experience. “China” is a zombie.

            The unity of perception? If you think the binding problem is a pseudo-problem, and likewise that the ["synchronic"] unity of consciousness doesn’t exist, then appeals to some poorly understood mechanism of quantum coherence to explain it will fall flat. However, the computational power of what philosophers call the unity of consciousness can perhaps best be illustrated by folk who even partially lack it. Consider victims of neurological disorders like:
            http://en.wikipedia.org/wiki/Simultanagnosia
            or
            http://en.wikipedia.org/wiki/Akinetopsia
            etc
            Naturally, if one doesn’t allow that the normal human capacity to populate our simulations with unitary objects in a unitary perceptual world appended by a [fleetingly] unitary self is both real and computationally immensely powerful, then the question of how to explain this extraordinary capacity won’t arise. Thus direct realists about perception simply help themselves to the world, so to speak – which has all the advantages of theft over honest toil. By contrast, I’d argue that the capacity of organic robots to run real-time simulations of the mind-independent world is the bedrock of general intelligence. If so, then (super-)intelligence isn’t going to erupt or “explode” from a digital computer running symbolic AI software in someone’s basement. [Note this claim isn't a denial of the Church-Turing thesis that given infinite time and memory any Turing-universal system can simulate any conceivable process that can be digitized. Universal Turing machines are purely notional: for a start, given the finite nature of information in any given Hubble volume, no such machine can ever exist.]

            Anyhow, in a nutshell, granted the existence of fields of microqualia described by the mathematical straitjacket of modern physics, I think a rigorous derivation of our macroqualia – including macroscopic chairs, tables, people and innumerable dynamic objects – is feasible in principle from the underlying fields of microqualia whose behaviour is formally described by the equations of quantum field theory. Each of us is a quantum supercomputer executing an egocentric world-simulation running at billions of quantum-coherent frames per second. The “Explanatory Gap” can one day be closed. But not here. :-)

  8. Fallacy Mechanic

    That’s hysterical. I would agree with your original proposition, if it were slightly modified to read “A key fallacy in history is the idea that minds are ultimately mechanical, that they operate according to physical principles, and that there is no fundamental distinction between the bits of organic matter that process thoughts and bits of organic matter elsewhere.” It is an understandable error in the sense that mind would be much more accessible to understanding in its ultimate nature if it were indeed a mechanical device, even in the sense that DNA, as Craig Venter says, is arguably a computer. Unfortunately, with regard to mind itself, that is not the case, and the fact that aspects of mind emulate mechanical processes, or present clusters of logical relations of the sort that describe physical entities, is not evidence that mind itself is either mechanical or physical. Organic bits are organic bits, but the secret of how they operate is not in the bits any more than vision is in the eyeball or Tolstoy is in the ink on the page. Try again.

    • THarris

      “Present clusters of logical relations of the sort that describe physical entities” is not evidence??? that the mind itself is either mechanical or physical??? What is it then???

    • Maybe, but: no ink, no Tolstoy; no brain, no mind…

  9. HorrifiedByIgnorance

    @Chillax
    It is important to understand the position one defends, especially when one attempts to answer questions in opposition to that position. You have unfortunately failed to understand both the viewpoint expressed by Michael here and the burden of proof. In fact, if you were even mildly aware of the context of such talk of millions of possible minds, you would understand how irrelevant the link you posted is.

  10. Alpha Omega

    Excuse me, but on what basis do you claim that intelligent matter-patterns are superior to inanimate matter-patterns, and can you really tell the difference? Why in [cosmic void]‘s name would you want the universe to be saturated by something as monstrous as intelligence? How can you, a chemical fluctuation upon a grain of sand in an infinite universe, make such grandiose claims about intelligence, the universe, or the price of a paperclip in the Bootes Void?

    Have you considered the possibility that what you call intelligence is a monstrous fluke, a self-annihilating mutation which will soon wink out of existence? And how do you know that the universe isn’t already saturated by an intelligence too vast for your puny mind to conceive?

    All your claims are rubbish, and amount to an ant believing the universe will soon be made out of anthills. Human intelligence is a giant zero in the cosmic scheme of things — saturating the universe with minds or paperclips amounts to the same thing: nothing. The best thing for this universe is total annihilation as far as I’m concerned; it’s an abomination that never should have existed in the first place. I say let there be paperclips…

  11. Mikael Schuck

    @Alpha Omega
    That was an interesting comment, to say the least. Why do you want the universe to be annihilated?

    I value intelligent matter over unintelligent matter, but I have to agree with you that Kurzweil’s idea of “saturating the universe with intelligence” doesn’t appeal to me at all. I don’t see any point to it, honestly. Who says that, when we become superintelligent, we would even want to do such a thing? Maybe we’ll want to create our own basement universe more fundamentally conducive to life as we know it and move in there.

    The only reason I can think of that we would want to spread throughout the universe is the remote chance that there is alien life somewhere out there perpetually stuck in Darwinian hell that we could rescue.

  12. Michelle Waters

    Why should minds be replicated? Also, not all physicalist theories are reductionist; for example, Darwinism isn’t.

    Also, everyday, I see evidence of a lot of intelligence around me, from the buzzard I saw yesterday using a water fountain, to dogs who use tactics in combat, to plants that produce chlorophyll only on the parts of their bodies that light shines on.

    While all these entities are subhuman and incapable of language, they can obviously figure out the best way of reaching their goals. Who is to say a kind of intelligence explosion hasn’t already occurred?

  13. PariahDrake

    Hey, Michael, I invite you to read my challenge to eliminative materialism:

    http://www.kurzweilai.net/forums/topic/challenge-for-eliminative-materialism-non-demonstrability

    Basically, this statement:

    “The key discovery of human history is that minds are ultimately mechanical”

    Is bullshit.

    • PariahDrake

      The most logical theory of mind is actually Alfred North Whitehead’s protopanexperientialist view, along with Spinoza’s neutral monism.

      Eliminative materialism is a faith based religion, and is not very scientific.

  14. Heartland

    A popular conviction that an increase in a mind’s intelligence must rely on an increase in the mass of its brain is predicated upon an irrational belief that current human knowledge about the methods of growing intelligence will not be extended by infinitely smarter intelligence; in other words, that super-intelligence will be limited by human knowledge as far as increasing intelligence goes. It’s far more sensible to think that a super-intelligent mind will treat mass/matter as one of the limiting factors when considering intelligence growth–others being space, time and knowledge–and, if it decides to grow smarter, it’s likely it will try to eliminate all these limiting factors using its super-intelligence on its path to even greater intelligence. So, converting planets to brains sounds more like a crude human-upload way of increasing intelligence, not a super-intelligent way. After all, who’s to say that minds necessarily require or will require mass anyway?

  15. There is already a way for minds to replicate themselves: it’s called *sex*, and it has been perfected by a few hundred million years of evolution.
    I doubt you can do better…

  16. The cosmic mission of colonizing the universe with intelligence makes me somewhat uncomfortable. What do you find desirable about this project?

  17. PlacidCountenance

    Luke states:
    “SI has an internal roadmap of papers it would like to publish to clarify and extend our standard arguments, and these would at the same time address many public objections.”

    Is this actually true? Or is this like Luke’s long list of unpublished, unfinished, never-started and only-dreamed-of papers that he has on his site: http://commonsenseatheism.com/?p=4988

    I am curious because I have a list of objections which I have seen SIAI attempt to debunk and fail. Here goes:
    1.) Lesswrong.com is a distraction, since it has not been shown, and does not seem able to be shown, to actually and provably work
    2.) SIAI has made very little progress in FAI in over a decade, even though this is a primary mission of SIAI and why some people donate
    3.) SIAI claims to be interested in hiring researchers for FAI, but it has yet to hire a single PhD; instead it settles for non-college grads and master’s students.
    4.) SIAI has a tiny budget and has taken no provable steps toward getting the funding it needs; how can this be a serious AI creation effort?
    5.) SIAI has engaged in résumé padding, listing papers posted on their website and arXiv as published, when in academia this is not the accepted definition of publication
    6.) SIAI has not accounted for how they use their money and has not shown significant results
    7.) SIAI has claimed that they have the foremost researcher in FAI. As compared to whom? As far as I know EY is the only person working on this pseudo-idea of FAI, so of course he is number one. This is called damning with faint praise.
    8.) SIAI has isolated itself from academia and now claims to want to get in, yet there are no respected journal publications or patents to attract the interest of academia, just a mythic list of unfulfilled promises, as stated by Luke.

    ” At the same time, we don’t want to be sidetracked from pursuing our core mission by taking the time to respond to every critic. It’s a tough thing to balance.”

    I would like a list of 5 significant objections to SIAI’s methods and ideas, such as TDT and CEV, that SIAI has responded to. I eagerly await the admission of a big zero here.

  18. 1) LessWrong has attracted a huge number of supporters and generated decision theory work relevant to our main AI path. If you saw the huge outpouring of support from the LessWrong community to SIAI, you would see the obvious value.

    2) SIAI has only really had one or two people working on FAI at any given time, sometimes zero. The primary reason is that identifying individuals who can contribute and are available is a huge challenge. If you look at our new Strategic Plan you can see the renewed emphasis on pursuit of FAI, beginning with the creation of an FAI Open Problems document. The only reason I work for SIAI is because of Friendly AI. If I thought that SIAI wasn’t making progress towards Friendly AI I would not be working for them.

    3) Do you have a PhD researcher who you think would be capable of contributing to FAI? If so, then we may consider hiring them. We will not hire a PhD researcher who is unproductive just so we can say we hired a PhD researcher. We regularly collaborate with PhDs at other institutions, such as the recent collaboration between Carl Shulman and Nick Bostrom.

    4) SIAI’s budget is not tiny. Non-profits are much smaller than businesses as a general fact. Our forms are entirely public on LessWrong and this year’s forms will be public soon.

    5) Before visiting SIAI, lukeprog thought that it would be a good idea to publish papers in mainstream journals. After only a few days here and many conversations, he reversed his opinion.

    6) We’re pretty open about this. We are currently engaging in activities to improve our transparency, such as publishing a Strategic Plan. Do you have any specific questions?

    7) PC, if you supported Friendly AI, then ANY efforts in that direction would be supported by you. If you don’t support Friendly AI, then why should anyone take your critiques seriously?

    8) Luke was just hired specifically for academic outreach. So, we are taking steps in this direction.

  19. PlacidCountenance

    @Michael Anissimov
    Thanks for the detailed reply. It is appreciated.

    1) Ok. So has SIAI’s budget increased because of LW? Do you have stronger ties to academia because of LW? Do you have a technical decision theory for AI because of LW (TDT paper by EY says otherwise)? Is this support in the sense of you have found people who are interested (fan base)? Have you actually found qualified researchers on LW? Has the mission of LW succeeded, can you actually state that people are more rational because of it? What is the metric that this potential success is judged on?

    2) I saw that renewed focus on FAI in the strategic plan. My issue is why has SIAI been around for over a decade without putting significantly more effort into FAI? I don’t doubt your motivations here; it’s just hard to tell from the outside. It seems to me that something like the FAI open problems document would have been generated a couple of years ago if SIAI really wanted to be in the AI game. I don’t understand how FAI, if it is as important as you say, would be back-burnered like this. I would think you would focus all efforts on FAI and lose other aspects of SIAI in the process such as LW and the future prediction model.

    3) No. Not that would work with SIAI. The PhD is more because people with deep pockets look for this. Rightly so, I might add. The PhD thing lends prestige to the organization and builds investor confidence. I would never argue for hiring a PhD just for the sake of a PhD. You have to remember a B.S. essentially only proves you can learn, and the programs teach you old information, which in C.S. is a huge downfall. An M.S. only proves that you can think within parameters set for you and is only really considered valuable if you get a Ph.D. The Ph.D. is supposed to represent the ability to come up with a truly unique idea, research it and defend it. This is what you guys need. (P.S. I think that recently PhDs have become diluted and are given a bit willy-nilly)

    4) Ok. I have looked at it. I think it’s the 2006–2009 990 Forms, and your organization is small. If it’s increased greatly recently, that’s fine, but let’s be really honest: you guys can’t afford to give a professor a research budget that a place like Intel or a big university can. Trying to solve FAI with a budget of less than multiple millions is somewhat delusional, since you will need that for the computers alone when you account for the hardware and cooling required to obtain the computational power needed. For example, according to the 2009 990 form, your total support YTD is ~1.4 million dollars in 2009 and total revenue was $627,980. My first start-up had more money invested in it than that, and it imploded after a year.

    5) Yeah, conversations with people who probably couldn’t publish if they tried: not the best evidence. As a B.S. student without a professor it’s hard to publish. As an M.S. student it’s difficult but not impossible. As a PhD it’s possible, but you need to have good ideas and good connections. Also, the idea that publishing isn’t worth anything is beyond silly, since that is an easy way into academia, which is where SIAI needs to go. You’ll forgive me if I don’t take seriously the claim that a bunch of non-published and not-able-to-publish individuals think publishing is useless. I would bet that a big part of the reason is that these individuals can’t publish except in ECAP (scoff). If SIAI had a big publication list in respected journals and then came to this conclusion, fine, but that is not the case here.

    6.) Yes I do, but let me think about which ones to ask.

    7.) I don’t think there is anything to take seriously in FAI. There is no technical definition of FAI, and thus the problem and the proposed solution are impossible to judge. Efforts in this area that I would support: EY publishing a technical paper on FAI, defining FAI and setting out the theorems required to prove the possibility of FAI as he describes it. Essentially a technical treatment of the idea which can be critiqued on a technical level for example disproving a theorem.

    I don’t care about namby-pamby ideas without a technical basis. These kinds of ideas happen to be a dime a dozen, and there is no reason to think that this provable FAI concept will work. If EY posted a series of theorems that could be evaluated, where the problem space is laid out and what needs to be proved is stated, then fine. Without a real technical laying-out of the problem, the idea is just that: an idea, and it either sounds plausible or it doesn’t. I think it’s pie in the sky because I have spent over ten years in AI and my experience says this is a wonderful idea that is unlikely to work out technically. Put some theorems up on this idea and then I can debunk it technically.

    8) Huh. You do know he never graduated college, according to his website bio. He has a long list of failed publications and attempted books and papers which have amounted to nothing. I fail to see how he gets you into academia. He is as much of an outcast in academia as EY is. In fact I have never heard Luke mentioned as some form of authority outside of SIAI. He doesn’t pass muster on the academic side or the accomplishment side. It is beneath contempt to think that this is a serious push into academia.

  20. Hi PlacidCountenance,

    Michael pointed me to your comment.

    I’m sorry I don’t have more time. I can only reply here once, but hopefully I can respond to some of your questions.

    They are good questions and I’m sure it can be frustrating to not be able to see what’s going on in SIAI and why. We’re trying to ameliorate that feeling by improving our transparency. For example, one of the first things I did when I showed up in the Bay Area was to write a strategic plan for the organization, get it ratified by the board, and present it to the public so our supporters can see what we’re doing and why.

    You asked if we actually had a paper roadmap. Yes, we do. It was written by Carl Shulman at least a year ago. We’re very slowly pushing through it, but we need more researchers, and it’s very hard to find the right people on a non-profit’s budget. I’m currently writing one peer-reviewed book chapter that satisfies some of the needs on the roadmap; the extended abstract was already accepted to the upcoming ‘The Singularity Hypothesis’ volume from Springer.

    Now: Quick replies, point by point…

    > So has SIAI’s budget increased because of LW?

    Yes.

    > Do you have stronger ties to academia because of LW?

    Yes, especially if you include Eliezer’s old work on Overcoming Bias as part of LW. That work built ties with Robin Hanson and Nick Bostrom, especially.

    > Do you have a technical decision theory for AI because of LW (TDT paper by EY says otherwise)?

    No. Reflective decision theory and timeless decision theory have not yet been solved. But original progress was made in these fields as a result of brilliant mathematicians from around the world (Nesov, Drescher, Dai, etc.) discussing Eliezer’s LW posts on the subject. Kinda like Gowers’ ‘Polymath Project’ for collaboratively solving mathematics problems, except the problems we’re trying to solve are bigger and harder.

    > Is this support in the sense of you have found people who are interested (fan base)?

    Sure. In particular, some fans become volunteers, staff members, and donors. Like me; I’ve been all three.

    > Have you actually found qualified researchers on LW?

    Many of our research associates came through LW. I can’t remember how Carl & Anna came to be involved with SIAI. I’m a researcher with SI, arrived via LW, and most people think I’m doing useful stuff – though, I just started, so it’s hard to tell!

    > Has the mission of LW succeeded, can you actually state that people are more rational because of it?

    We’d need to do more tests to make such causal inferences, but anecdotally it seems like it. We administered a set of standard rationality test questions from the heuristics and biases literature to a Bay Area LW meetup group and they hit the ceiling on almost everything, but we don’t know if that came from reading Less Wrong.

    > My issue is why has SIAI been around for over a decade without putting significantly more effort into FAI?

    Lack of funding for researchers, certainly. Also, it has been important to build up a large base of people who even *understand* such advanced problems as FAI. When SIAI began, almost nobody on the planet understood the issue. If you don’t have a large base of people who understand the relevant issues, then you have no pool of people from which to draw researchers, volunteers, collaborators, and funds. So SIAI has had to spend a lot of time explaining the issues to people. Now we’re at the point where a fairly large base of people understand the issue, and we can begin to draw from those human and other resources to make progress in FAI theory again. But of course all along *some* progress has been made, including a great deal that we just haven’t had time to write up yet. I know I was amazed when I arrived in the Bay Area how much currently exists in the heads of SIAI researchers, which they haven’t had time to write up for the public. We’re working on that.

    > I would think you would focus all efforts on FAI and lose other aspects of SIAI in the process such as LW…

    But we can’t, unless Bill Gates decides to fund us and popularize us tomorrow. LW is a crucial source of human and other resources.

    > Trying to solve FAI with a budget of less than multiple millions is somewhat delusional

    No doubt. We are quite aware that we need more funding!

    > Essentially a technical treatment of the idea which can be critiqued on a technical level for example disproving a theorem

    Yes, this is the FAI Open Problems document. I’m heading up that project, but it is a very difficult one and will take some time, especially with our current level of resources.

    > [Luke] is as much of an outcast in academia as EY is. In fact I have never heard Luke mentioned as some form of authority outside of SIAI. He doesn’t pass muster on the academic side or the accomplishment side. It is beneath contempt to think that this is a serious push into academia.

    Yup, I’m also an autodidact like EY, though I certainly like to engage the mainstream literature much more than EY does. You haven’t heard of me as an authority because I’m not. I don’t have publications yet, though I am beginning to work (with many others) on the FAI Open Problems technical document and have two upcoming peer-reviewed articles.

    Again, we would *love* to hire more researchers – and ones far more qualified than myself – but SIAI is only just ‘getting on its feet’ in its ability to produce research because our support network is only just becoming large enough to support that kind of work. And that is very largely thanks to the success of LW.

    I appreciate your questions and your concern, especially if it comes from a concern for the future of the human species! Many of the concerns you raise are exactly the same concerns I had when I took a closer look at SIAI, and they are concerns I carry with me as a new staff member with SIAI. I would very much like to see SIAI produce highly technical research documents that demonstrate clear progress in decision theory, FAI theory, etc. We’re working on it, but resources are limited.

    Oh, one last thing. Michael linked to two posts in which I changed my mind about the value of publishing in peer-reviewed venues. I’m still uncertain about the value of this, given that, e.g., more people read Nick Bostrom’s pre-prints on his website than read his articles in the journals. On the other hand, peer review has its own benefits, for example as a quality check.

    • Sarah

      I think that the big question remains, “Is uFAI a threat worth taking seriously?”

      That’s not the default position of most people, to say the least. It has to be defended rigorously. In general, scientists trying to argue that something is a risk have to work through ordinary scientific channels and show replicable results and so on. Think of the scientists trying to demonstrate that global warming is indeed happening and poses a risk. The standard of proof is pretty high.

      The value of academic engagement isn’t just outreach and PR. It’s fundamentally about exposing your work to skeptical eyes to see that it isn’t totally flat wrong. If you aren’t doing that, you can’t expect to be believed, and you shouldn’t have that much confidence in your own work.

  21. PlacidCountenance

    @Luke
    “They are good questions and I’m sure it can be frustrating to not be able to see what’s going on in SIAI and why. We’re trying to ameliorate that feeling by improving our transparency. For example, one of the first things I did when I showed up in the Bay Area was to write a strategic plan for the organization, get it ratified by the board, and present it to the public so our supporters can see what we’re doing and why.”

    I have seen and read it. I am also glad that the strategic plan is finally done. If you are the one that made that happen, my hat goes off to you. This is precisely what needs to happen and is a positive step on the road to building investor confidence.

    “You asked if we actually had a paper roadmap. Yes, we do. It was written by Carl Shulman at least a year ago. We’re very slowly pushing through it, but we need more researchers, and it’s very hard to find the right people on a non-profit’s budget. I’m currently writing one peer-reviewed book chapter that satisfies some of the needs on the roadmap; the extended abstract was already accepted to the upcoming ‘The Singularity Hypothesis’ volume from Springer.”

    Well, it would be interesting to see what that looks like. As to the quick replies, I will respond where there is cause; otherwise, consider the answers satisfactory.

    “When SIAI began, almost nobody on the planet understood the issue. If you don’t have a large base of people who understand the relevant issues, then you have no pool of people from which to draw researchers, volunteers, collaborators, and funds. So SIAI has had to spend a lot of time explaining the issues to people. Now we’re at the point where a fairly large base of people understand the issue, and we can begin to draw from those human and other resources to make progress in FAI theory again. But of course all along *some* progress has been made, including a great deal that we just haven’t had time to write up yet. I know I was amazed when I arrived in the Bay Area how much currently exists in the heads of SIAI researchers, which they haven’t had time to write up for the public. We’re working on that.”

    I will buy this on the condition that this stuff is published and is of a technical nature. Otherwise I will have to call your bluff.

    “> Trying to solve FAI with a budget of less than multiple millions is somewhat delusional
    No doubt. We are quite aware that we need more funding!”

    This is counter to what was said in the GiveWell interview, which was that you guys are not currently seeking more funding.

    “I don’t have publications yet, though I am beginning to work (with many others) on the FAI Open Problems technical document and have two upcoming peer-reviewed articles.”

    In what journals?

    This response was surprisingly above average, thank you. I have a significantly higher opinion of you than I had before, after seeing the fielding of these questions. Can I say I am convinced of SIAI? No. I can say that if the documents you claim are in the works are actually what you say they are and they deliver, then I think my opinions may well shift in favor of SIAI. I think if SIAI becomes less of a black box, it will be easier to evaluate whether the money going in makes sense compared to the outputs.

    I am ignoring the publishing issue because it seems to be beside the point. I value publishing personally, due to the fact that it got me into the graduate program I wanted and got me job offers and the like. Granted, I was not publishing on FAI; I was publishing in advanced cryptography.

    • > I will buy this on the condition that this stuff is published and is of a technical nature. Otherwise I will have to call your bluff.

      Fair enough!

      > This is counter to what was said in the GiveWell interview, which was that you guys are not currently seeking more funding.

      Yes, what Jasen Murray said in that interview was simply wrong. Jasen is not the one who should have given that interview.

      > In what journals?

      One is a chapter for Springer’s edited volume ‘The Singularity Hypothesis’. Another is a paper for the journal ‘Nonprofit and Voluntary Sector Quarterly’, because it is targeting the philanthropy world.

      > This response was surprisingly above average, thank you. I have a significantly higher opinion of you than I had before, after seeing the fielding of these questions. Can I say I am convinced of SIAI? No. I can say that if the documents you claim are in the works are actually what you say they are and they deliver, then I think my opinions may well shift in favor of SIAI. I think if SIAI becomes less of a black box, it will be easier to evaluate whether the money going in makes sense compared to the outputs.

      Lots of people quite legitimately feel the way you do, which is one reason among many that I’m pushing so hard on (1) technical research, and (2) organizational transparency and effectiveness.

      Glad I was able to help. More details on our technical research program are here:
      http://intelligence.org/blog/2011/09/15/interview-with-new-singularity-institute-research-fellow-luke-muehlhuaser-september-2011/

  22. Alan

    If human beings are nothing more than intelligent-acting physical systems whose behavior is entirely the result of interactions between non-conscious atoms, and nothing can be more than the sum of its parts, then there is no need for, or possibility of, consciousness. Yet clearly consciousness exists. So, I would think at the very least, consciousness must be said to be an emergent phenomenon.

    Personally, I think it is more reasonable to view matter as a creation of the mind than vice versa. If reality is consciousness-based (a universal mind), then the universe is nothing more than a set of law-like connections between perceptions. A “real”, physical atom can be viewed as nothing more than a sustained thought pattern. The rules set forth by the universal mind are analogous to software code that contains rules that cannot be broken. It is this law-like behavior that makes the physical world seem ontologically independent of our minds.

  23. Sarah

    The big gap here, of course, is the assumption that if a technology is physically possible, then it will be built.

    It is physically possible to send a human to Mars, but we haven’t done it. It is physically possible to build a robot that walks on two legs as capably as a human, but we haven’t done it.

    The actual invention of a technology depends on funding and social coordination and a host of human factors that may not be in place. If nobody wants to spend money on artificial intelligence research, an intelligence explosion just might not happen, even if it’s physically possible.

    • Anon

      It’s very clear that many companies are researching in this field,
      e.g. Numenta, Novamente, even Google.

      If more money flows into this, it will accelerate the research. Nevertheless, AGI will come into existence some time in the future.

  24. Anon

    I don’t understand why we’ll need to saturate the planet. We can build something like the Matrix and live inside a virtual reality where everything is possible.
