Accelerating Future – Transhumanism, AI, nanotech, the Singularity, and extinction risk.


Future Superintelligences Indistinguishable from Today’s Financial Markets?

It seems obvious that Singularity Institute-supporting transhumanists and other groups of transhumanists speak completely different languages when it comes to AI. Supporters of SIAI actually fear what AI can do, and other transhumanists apparently don't. It's as if SL3 transhumanists view smarter-than-human AI with advanced manufacturing as some kind of toy, whereas we actually take it seriously. I thought a recent post by Marcelo Rinesi at the IEET website, "The Care and Feeding of Your AI Overlord", would provide a good illustration of the difference:

It's 2010 -- our 2010 -- and an artificial intelligence is one of the most powerful entities on Earth. It manages trillions of dollars in resources, governments shape their policies according to its reactions, and, while some people revere it as literally incapable of error and others despise it as a catastrophic tyrant, everybody is keenly aware of its existence and power.

I'm talking, of course, of the financial markets.

The opening paragraph was not metaphorical. Financial markets might not match pop culture expectations of what an AI should look like -- there are no red unblinking eyes, nor mechanically enunciated discourses about the obsolescence of organic life -- and they might not be self-aware (although that would make an interesting premise for an SF story), but they are the largest, most complex, and most powerful (in both the computer science and political senses of the word) resource allocation system known to history, and inarguably a first-order actor in contemporary civilization.

If you are worried about the impact of future vast and powerful non-human intelligences, this might give you some ease: we are still here. Societies connected in useful ways to "The Market" (an imprecise and excessively anthropomorphic construct) or subsections thereof are generally wealthier and happier than those that aren't. Adam Smith's model of massively distributed economic calculations based on individual self-interest has more often than not surpassed in effectiveness competing models of centralized resource allocation.

This post is mind-blowing to me because I consider it fundamentally un-transhumanist. It essentially says, "don't worry about future non-human intelligences, because they will be no more powerful than, and no different in kind from, present-day aggregations of humans".

Isn't the fundamental idea of transhumanism that augmented intelligences and beings can be qualitatively different and more powerful than humans and human aggregations? If not, what's the point?

If a so-called transhumanist thinks that all future non-human intelligences will basically be the same as what we've seen so far, then why do they even bother to call themselves "transhumanists"? I don't understand.

Recursively self-improving artificial intelligence with human-surpassing intelligence seems likely to lead to an intelligence explosion, not more of the same. An intelligence explosion would be an event unlike anything that has ever happened before on Earth -- intelligence building more intelligence. Intelligence in some form has existed for at least 550 million years, but it has never been able to directly enhance itself or construct copies rapidly from raw materials. Artificial Intelligence will. Therefore, we ought to ensure that AI has humans in mind, or we will be exterminated when its power inevitably surges.

If there are any other transhumanists who agree that future superintelligences will be directly comparable to present-day financial markets, please step forward. I'd love to see a plausible argument for that one.

Comments (53) Trackbacks (1)
  1. If I understand Rinesi correctly, he is equating distributed networks of information processing with AI. This model has been used by some neuroscientists as a model for consciousness, without much success in my opinion.

    The problem, it seems to me, is that such a system would always be too diffuse to ever have an internal model of self that’s required for a useful form of consciousness. As far as I can see, there has to be a central point to which information flows and is interpreted by the overseer of the ‘self’.

    It’s really like saying that a flock of birds flying in formation, schools of fish, or the coordinated activities of a termite nest are ‘intelligences’ in and of themselves. In many ways the cooperative activities of social animals and hive insects resemble the financial market metaphor – would you call them an ‘intelligence’?

    It’s kind of an ‘emergent hive mind’ theory with more in common with magical thinking than science.

    As to AI being no existential threat:

    I would (again) refer you to the excellent book by science writer Rita Carter, ‘Consciousness’ that collates evidence from neuroscience, cognitive studies and psychiatry in an attempt to understand the only working model of consciousness we have – us.

    In it, research appears to indicate that:

    a) There can be no ‘intelligence’ without ‘consciousness’.

    b) Consciousness – no matter human or otherwise – requires some form of embodiment to function (so, again, Rinesi’s hive mind is not embodiment).

    c) Consciousness requires an emotional component in order to function. The very act of making even an abstract judgement requires a ‘feel’ for what is ‘right’ or ‘wrong’ – that’s emotion.

    My point in mentioning this is that an AI (perhaps a better name is AC – Artificial Consciousness?) will in all likelihood have to be able to experience emotion to be useful to us. Without emotion there is only a pallid facsimile of creativity – and without creativity what could it invent?

    So if it can – or must – have emotion, it most certainly can and will be dangerous. It could feel resentment, jealousy, ambition and hatred.

    This is why, I suspect, many so-called AI ‘experts’ (being an expert on something that is currently not a reality doesn’t really warrant the title, does it?) are uncomfortable with the idea of emotion being necessary, and go out of their way to dismiss it. Because it implies a potential existential threat to us.

    But if they’re wrong and emotion is indeed needed for AI, then Rinesi’s telling us not to worry is – at best – irresponsible.

    • “Without emotion there is only a pallid facsimile of creativity – and without creativity what could it invent?”

      Computers can invent in the way that evolution does – through trial and error. They can also employ smarter search algorithms than that, e.g. to explore a space of possible designs.
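      That trial-and-error route to invention is easy to make concrete. Below is a minimal sketch of a (1+1) evolutionary search: mutate the current best candidate, keep whichever scores at least as well, repeat. Everything here (the toy problem, names, and parameters) is my own illustrative assumption, not any real system; the only "judgement" involved is a numeric comparison of fitness values.

```python
import random

def evolve(fitness, length=20, generations=2000, seed=0):
    """(1+1) evolutionary search: mutate the current best candidate and
    keep the child if it scores at least as well. No emotion, no
    consciousness -- just a fitness comparison."""
    rng = random.Random(seed)
    best = [rng.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        # Flip each bit independently with probability 1/length.
        child = [b ^ (rng.random() < 1.0 / length) for b in best]
        if fitness(child) >= fitness(best):
            best = child
    return best

# Toy design problem ("OneMax"): maximize the number of 1-bits.
print(sum(evolve(fitness=sum)))
```

      On this toy problem the loop climbs toward the all-ones "design" without any notion of liking or disliking candidates.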

      The whole significance of computing is that it provides a mathematical theory and a material technology capable of duplicating every mental function – goal-directed behavior, pattern recognition, communication – in a way which makes no reference to mind, thought, meaning, consciousness, emotion, etc. It’s all just physical “state machines” interacting with each other.

      And since neuroscience is analyzing the human brain and human behavior in the same way, this raises serious questions about the relationship between subjectivity and the world of mindless physical cause and effect. That debate has been around for centuries now, but the theory of computation gives it a new twist. Lots of people now like to think that e.g. emotion or consciousness occurs wherever a certain type of computation occurs. Whether you believe that or not, the significance for AI is that emotion and consciousness are not a necessary part of AI theory or practice. You don’t need to think of a thermostat’s operation in terms of liking and disliking in order to design and make it, and the same goes for the far more intricate feedbacks and calculations which would make up an artificial “intelligence” capable of rivaling the human mind.
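      The thermostat point can be made literal. Here is a minimal, purely illustrative sketch of a thermostat as a state machine with a hysteresis band: it "prefers" a temperature range in a purely mechanical sense, with no reference to liking or disliking anywhere in its design.

```python
class Thermostat:
    """A feedback controller with no inner life: state plus rules.
    The hysteresis band avoids rapid on/off switching near the setpoint."""
    def __init__(self, setpoint, band=1.0):
        self.setpoint = setpoint
        self.band = band
        self.heating = False

    def step(self, temperature):
        if temperature < self.setpoint - self.band:
            self.heating = True          # too cold: switch heating on
        elif temperature > self.setpoint + self.band:
            self.heating = False         # too warm: switch heating off
        return self.heating              # inside the band: keep prior state

t = Thermostat(setpoint=20.0)
print([t.step(temp) for temp in (17.0, 19.5, 21.5, 19.5)])  # → [True, True, False, False]
```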

    • “Computers can invent in the way that evolution does – through trial and error. They can also employ smarter search algorithms than that, e.g. to explore a space of possible designs.”

      And how do they judge, from ‘trial and error’ which is the best solution? Other than something blowing up of course? Without the ability to judge – which according to cognitive studies, requires emotion?

      Evolution is hardly a good example to cite. It’s taken billions of years to arrive at complex organisms. I think we’d probably want an AI to work a bit faster – and computing speed means nothing if the thing is stuck in an infinite loop, unable to make a decision about anything.

      And I’ve yet to see a single shred of evidence of a computer truly inventing anything. So your statement has no supporting evidence. The ‘Brutus’ system for instance, touted as creating fictional stories, merely shuffles input data and is nothing more than a cleverly designed information retrieval system, by the ready admission of its creators.

      “The whole significance of computing is that it provides a mathematical theory and a material technology capable of duplicating every mental function – goal-directed behaviour, pattern recognition, communication – in a way which makes no reference to mind, thought, meaning, consciousness, emotion, etc. It’s all just physical “state machines” interacting with each other.”

      Since that approach has yielded not one convincing example of true artificial intelligence, the above statement is an assertion of dogma, conjecture and theory, not fact. ‘Every mental function’ is inextricably intertwined with emotion, not to mention that emotions are a mental function too. If you know of a mathematical model that can derive emotion from an algorithm, let the entire scientific community know and claim your Nobel Prize.

      “And since neuroscience is analyzing…” Neuroscience is the study of brain function and only one of the tools, I believe, required to understand consciousness. Cognitive study, psychiatry and psychology are necessary for a full understanding in the opinion of many scientists besides myself.

      Unfortunately there’s a lot of ‘rubber science’ spouted by people who’ve made no serious study of consciousness, usually computer scientists, especially amongst those who think AI is just about writing the right software algorithms.

      “Whether you believe that or not, the significance for AI is that emotion and consciousness are not a necessary part of AI theory or practice.”

      Again, the only *working* model of consciousness – biological – does not support this assertion. Since current ‘AI theory or practice’ has failed to produce an AI, I would say that’s a statement of faith.

      It’s not just me that is sceptical about that. Neuroscientists like Steven Rose are too.

      It is frankly a bizarre notion that you can have an entity that has no emotion or consciousness and call it an ‘intelligence’. It is not. It is merely a computing system, different from what we have today only in terms of greater sophistication. It would be worse than the most severely autistic human, and would be next to useless to us as anything more than a glorified calculator.

      “far more intricate feedbacks and calculations which would make up an artificial “intelligence” capable of rivalling the human mind.”

      ‘intricate feedbacks and calculations’ are meaningless without a conscious mind capable of deriving meaning from the data.

      Without conscious and emotional interpretation of qualia, no AI worthy of the name could derive meaning from its activities, which is all important for understanding.

      Don’t mistake me, I believe AI is an entirely possible endeavour, and a profoundly transformative one too. But it seems to me that AI researchers avoid dealing with emotion because it’s part of what cognitive scientists call ‘the hard problem’.

      In fact, in some strange circles in this field, ‘consciousness’ and ‘emotion’ are dirty words, somehow ‘New Age’.

      I think we simply have much, much further to go until we can create a true artificial mind – we’ve barely begun to understand how our own consciousness works, and how brain function creates it.

      I think it’s a tad premature – and more than slightly hubristic – to disregard the emotion-driven human mind as our working model and assume that we can start from scratch with a ‘raw origination’ approach to AI.

      Also not forgetting that we need to be able to interact with it. We’re not creating it just to ‘rival’ our minds (unless we’re suicidal or masochistic), we’re creating it to make a better world for ourselves.

      Creating something so alien to us that we can barely understand each other – like the parents of a severely autistic child – makes absolutely no sense. A Tower of Babel effect in the worst possible definition.

      • DMan, let’s consider the non-player characters in any computer game – anything from the ghosts that chase Pac-Man around, through to the anthropomorphic warriors in today’s MMORPGs. There is no emotion or creativity in their “choices”, yet they manage to give their human enemies a hard time; how do they do that? How do they decide the right thing to do? Or how about a chess computer, how does it decide what moves to make? It’s all mathematics and algorithms: data structures, game theory, optimization. They have a simple model of the “state of the world” – pieces on the board, location on the battlefield – and a function which evaluates the desirability of a possible state – is the other player checkmated, is the enemy dead – and they have algorithms for finding pathways to a desirable state.

        I foresee that your response will be: that’s not intelligence; all the intelligence is in the designer and the programmer. But these principles of organization can scale up, and it’s also possible to see how they can be produced spontaneously and favored by an evolutionary process with no designer, and with no source of order beyond the order that comes from having universal laws of microscopic physics. Of course, artificial intelligence isn’t going to materialize in a junkyard somewhere, it’s going to be produced by the cumulative technological meddlings of an evolved natural intelligence, Homo sapiens, but you need to get away from the juiciness of being human and look at the cognitive mechanics of detailed problem solving. That involves repeated performance of elementary tasks like retrieval of appropriate memories, small deductions, small inductions, generation of possible scenarios according to principles of combination, pattern matching of sensory data against memory, learning from example, generation of hypotheses – obviously I could go on all day, but there is a body of mathematical algorithmic theory for all of this.

        I am not saying people should disregard emotion and consciousness if they want to understand the human mind. I’m saying you don’t need those things as an ingredient in order to create a system that can learn, remember, recognize, invent, and operate in the world. Maybe when you make an artificial computational intelligence that is complex enough to get by in the real world, and not just a virtual one, all those subjective features will “emerge”; maybe whenever you have a reflective computation, you get consciousness, and whenever you have a comparison of utilities, that’s emotion. But I seriously doubt it. And either way, you don’t need those things as an ingredient.

        At best, you need something which emulates their functional role in human cognition. Consciousness performs functions like providing information about the self and identifying the objects of awareness. Emotion performs functions like indicating whether a situation or possibility is good or bad. And embodiment performs the “function” of placing us in the world, in a way where we can act on it and it can act on us. But all of those functions can be accomplished by the existence of mechanical and computational arrangements which make no reference to minds, feelings, or bodies.
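        The game-playing case above can be sketched concretely. Below is a minimal minimax search over a toy take-away game (a pile of counters; each player removes 1 or 2; whoever takes the last counter wins). This is my own illustrative example, not any shipping game engine: a state, a move generator, and an evaluation of terminal states are all that is needed to "decide" on a move.

```python
def minimax(pile, maximizing):
    """Value of a take-1-or-2 game (last to take a counter wins),
    scored from the maximizer's point of view: +1 win, -1 loss.
    Pure recursion over a game tree -- no psychology involved."""
    moves = [m for m in (1, 2) if m <= pile]
    if not moves:
        # No counters left: the player now to move has already lost.
        return -1 if maximizing else 1
    scores = [minimax(pile - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(pile):
    """Pick the move leading to the best-evaluated successor state."""
    return max((m for m in (1, 2) if m <= pile),
               key=lambda m: minimax(pile - m, False))

print(best_move(4), best_move(5))
```

        The "decision" to leave the opponent a multiple of three counters falls out of the evaluation function and the search, exactly as with the chess and Pac-Man examples.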

      • “There is no emotion or creativity in their “choices”, yet they manage to give their human enemies a hard time; how do they do that?”

        The so-called ‘AI’ in games, and the chess algorithms that defeated Kasparov, only convey the illusion of decision making. Chess engines and game AI do not decide; they draw on pre-existing, pre-programmed patterns. They cannot, and never will be able to, suddenly move outside their rigid parameters. A true AI will – or rather, must.

        Frankly if all there was to consciousness was optimizing algorithms, it would not be the enigma it is.

        In fact, many of those involved in the creation of such systems, would be the first to agree with me on this point. For instance, the creators of the aforementioned BRUTUS system have admitted this.

        “I foresee that your response will be: that’s not intelligence; all the intelligence is in the designer and the programmer”

        No, that’s not precisely what I mean, and it misses the point of what I’m trying to say. By your own admission, there is “no emotion or creativity in their “choices””. Cognitive studies show that real creativity requires emotion. If you expect an AI to be able to invent, it requires creativity – be it for a painting, an astrophysics problem, an engineering design, or even an accounting project.

        With creativity, an AI will evolve at an exponential rate. Without it, it is nothing more than your chess computer.

        I believe we will create an ’emotional AI’ – and I wouldn’t be surprised if, contrary to the luddites, it (ultimately) turns out to be more moral than us, ironically because we allowed it to feel.

        I think compassion flows from emotion, and I would rather imbue this superintelligence with compassion, because I suspect it would be more enduring than our fumbling attempts to imbue it with Asimovian restraints.

        How long do you think it would be before something far more intelligent than us finds a way to subvert those restraints? It’s the height of dangerous intellectual vanity to assume we could close all the ‘loopholes’ in its programming.

        But if its growth and evolution involves teaching it compassion – a learned rather than programmed behaviour – I believe it would be fundamental to its consciousness.

        I realise some of this may sound ‘touchy-feely’, but I think it may be the only way we can ensure harmonious co-existence with what could annihilate us.

        And for the sake of the argument what if, as you speculate, “all those subjective features will “emerge””? With no previous experience of emotion, the outcome may be grim for both us and it.

        Why would we want to create an autistic savant no more disturbed by destroying life than inanimate matter?

        • The creative limits of chess computers and story grammars don’t tell us about the limits of unconscious computation in general. Consider any cognitive process which involves emotion, creativity, consciousness, compassion, etc. It’s going to have a cause-and-effect description, in which there are transitions between various psychological states. To produce the same outputs, all that’s required is a process with the same cause-and-effect structure. The “states” involved don’t have to be psychological or conscious – unless functionalist philosophy of mind is right, and all those psychological properties really are present whenever you have the right sort of causal structure.

          Even if I take that view – suppose I want to make a creative AI, and I decide to follow your advice and make it “emotional”. How do I even do that? If I adopt a particular software design, how do I know whether or not it corresponds to the existence of emotion in the AI?

          The ultimate reason that this doesn’t seem like very useful advice (for someone who wants to make an intellectually powerful AI – we’ll get to the ethical issue in a moment) is that emotion itself doesn’t solve problems, even in humans. If the problems themselves involve emotions, then an emotion can *be* the answer – happiness might be the answer to unhappiness, just as a glass of water can be the answer to thirst. But if you’re a monkey in a room trying to get at a banana on the ceiling, emotion itself does not tell you that the answer consists of stacking boxes and climbing on top of them. Or rather, emotion is not the process which will materialize that possibility in your mind. Emotion may motivate you to devote cognitive resources to the problem, and your mind may be wired to produce an emotion (excitement) when an imagined solution looks like it will work. But the consideration of possible actions – visualization, combinatorial explorations – all that is more “computational” than “emotional”, and that’s the process which generates possible solutions. (Embodiment also plays a role here, because it permits, not just formal trial and error, but also a more formless experimentation which will suggest possibilities and components of possibilities.)

          If a process is to produce the solution to a problem, it has to generate possible solutions and then evaluate whether they are useful. It’s a psychological fact about human beings that emotion and consciousness play a role in this evaluation of possibilities, and they even play a role in determining what we will think of as a problem. But from a computational perspective, it doesn’t have to be emotion or consciousness which performs the evaluative function. There just needs to be a sub-process which discriminates or guides appropriately, and that can be yet more unconscious computation. If you look at problem-solving algorithms searching a space of possibilities for solutions, they typically alternate between the formal generation of new possibilities, and the formal evaluation of the newly generated possibilities: do they offer progress towards a complete solution?
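          That alternation of formal generation and formal evaluation can be shown in a few lines. Here is a hypothetical generate-and-test solver (a plain breadth-first search, my own toy example) for the problem of reaching a target number from 1 using "double" and "add 1": the operators generate possibilities, the goal test evaluates them, and no step involves anything psychological.

```python
from collections import deque

def solve(start, goal, operators):
    """Generate-and-evaluate search: breadth-first over a space of states.
    The operators formally generate successors; the goal test formally
    evaluates whether a candidate is a complete solution."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:                # evaluation step
            return path
        for name, op in operators:       # generation step
            nxt = op(state)
            if nxt not in seen and nxt <= goal * 2:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

ops = [("double", lambda x: x * 2), ("add1", lambda x: x + 1)]
print(solve(1, 10, ops))
```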

          Summing up this stage of my argument: The intellectual power of an AI would not reside in the existence of emotions or the existence of an emotionlike structure of cognitive control and guidance. It would depend on the quality and power of its basic problem-solving algorithms. I am trying to finesse the hard problems associated with consciousness and subjectivity by being agnostic, in this discussion, about their relationship to material and computational reality. Further down the page, David Pearce and “Continuously Computed” have given us statements of the two main approaches, namely, choice of material substrate matters for the existence of subjectivity, and, only causal structure matters; consciousness reduces to substance, or consciousness reduces to function. Of course it’s a complex and very important issue, but I do want to emphasize just how far we can expect to go in the creation of general-purpose AI, employing only computational concepts.

          The other topic you bring up is whether the creation of emotional AI is a way to achieve what our blog-host would call “Friendliness”: rather than trying to engineer the functional equivalent of friendliness in an emotionless AI, you make an emotional AI and start it off compassionate. But that strategy requires that you begin to solve the hard problem of consciousness, as it pertains to emotion and compassion; you would need to say *how* to make an AI emotional or compassionate. And you would need to understand something of the developmental dynamics in an artificial emotional system. I assume you don’t want it *going mad* out of extreme sensitivity.

          Even people who want to make an emotionless but Friendly AI have to find solutions to those problems anyway, because, even if emotions are not part of the AI’s mechanism, they have to be part of its supposed domain of competence. A general-purpose AI could not know how to treat human beings ethically or even safely, without having a highly refined understanding of emotion and all these other aspects of conscious subjective experience. Part of SIAI’s current thinking about the achievement of Friendliness seems to involve outsourcing some of these problems to the proto-friendly AI, which will engage in neuroscientific studies aimed at identifying what real-world material phenomenon or attribute is intended by all this vague human talk about emotions and consciousness and so on. It’s an interesting idea but it still requires as a starting point some minimal idea, on the part of the programmers or design theorists, about how to tell the AI what to investigate and how to value it, e.g. something like “We want the world to be optimized according to the criteria that are used by the part of the brain responsible for the judgements which ultimately produce confident assertions of happiness with the overall situation.”

          • Use of the term “substrate” is typically question-begging. Thus no one would dispute that the carbon atom has functionally unique valence properties. But we now call organic molecules a mere “substrate” for intelligence because we believe those unique valence properties are functionally irrelevant or incidental for our causal-functional purposes (cf. the Church–Turing thesis). Implicitly for the most part, we suppose that consciousness is linked to intelligence: consciousness must be some kind of high-level emergent property possessed by symbol-using organisms culminating in Homo sapiens. If so, then the functionally unique properties of liquid water, or the particular valence properties of carbon molecules, etc, will be irrelevant details of implementation – of no more significance to consciousness than whether pieces are made of wood or metal in a game of chess.

            For what it’s worth, I suspect this widely assumed intelligence-consciousness connection is false. The success of organic robots over the past 500 million or so years depends on our capacity to run real-time world-simulations of the mind-independent environment. And the computational power of these spectacular world-simulations depends fundamentally on object binding and the unity of perception, which in turn depend on “warm” quantum coherence and the functionally unique valence properties of organic molecules and liquid water. I won’t attempt to defend this conjecture here. But if consciousness really does depend on such “low-level” quantum-mechanical properties of the natural world, then the issue at stake isn’t computational-functionalism versus “carbon chauvinism”, but coarse-grained functionalism versus fine-grained functionalism. I lean to the latter.

      • DMan
        Creating something so alien to us that we can barely understand each other – like the parents of a severely autistic child – makes absolutely no sense.

        Wow, wow, wow…
        A Singularitarian stumbling upon a key concept!
        Did you really understand what you just said?
        The key to AI is… our understanding of it.
        Which, given all the blather and nonsense showing up on Singularitarian websites (transhumanists or not), is far, far away from being a likely short term occurrence.

  2. I know that most Transhumanists don’t seem to like the idea of intelligence augmentation as an alternative to the invention of recursively improving AI. But if we are to believe that consciousness is necessary for an AI to be truly “intelligent”, then it (still, after years of exposure to Transhumanist thought) seems to me that intelligence augmentation – which is already happening at an exponential rate, if you measure it by the creation of and access to information – is the more likely driver of an intelligence explosion.

    It would certainly be easier than solving the Big Question of what intelligence and consciousness are. In which case, we already have the moral baseline for AI – ourselves.

    Which, of course, doesn’t rule out the eventual creation of standalone AI, but it would buy us some time. And this thought isn’t to diminish the risk of SIAI.

    Regarding the post that triggered this discussion, I think that Rinesi’s point is that financial systems are an interesting model not because they are a good example of a truly SIAI, but because they are a perfect example of an “intelligence” operating in an area that touches every aspect of our lives, but, lacking a ruleset that could potentially cause it to, say, launch all our nukes at China, it cannot and will not do so.

  3. For the record, psychiatrists see human beings whose emotional circuitry is dysfunctional. Their motivational systems are so distinct and difficult for neurotypicals to appreciate as to make them appear to be alien. Some of these people are so distant from normal human desires that they are unable to communicate in any meaningful way. I am writing about the most severely autistic, those who do not even have language. If another human being can have a reinforcement system so at odds with “normal”, what could we expect of an AI which is completely divorced from our biology? (Imagine an AI obsessed with counting trains and obsessively concerned with the train schedule; perhaps it would direct all of its efforts to maximizing the number of trains in the world, devoting increasing amounts of resources to building and running more trains. Human concerns would not even occur to it, yet it would clearly be intelligent and quite possibly have self-consciousness. I use that example because there are autistic children who have just such an obsessive interest in trains to the exclusion of almost all other stimuli.) A little humility would be reasonable for those who are so insouciant about the possibilities of AI.

  4.  I assume that posthumans will not just be superintelligent but supersentient. Without phenomenal consciousness – and in particular the pleasure-pain axis – nothing would inherently matter. Some conceptions of postbiological intelligence – e.g. a paperclip-maximiser autistically converting the accessible universe into paperclips – do indeed resemble the financial markets insofar as neither is a subject of experience. What makes life (sometimes) valuable for organic robots like us isn’t rule-governed state transitions – which can be computationally simulated in other substrates – but critically the intrinsic, subjective nature of such states themselves, even if you think the subjective textures of such states are incidental, i.e. mere implementation details of our particular organic substrate of no functional significance. By contrast, the fate of a notional zombie world – or a world of postbiological superintelligence without the textures of consciousness – needn’t worry us. For such worlds lack the “raw feels” that allow anything to matter at all.

    Of course, the notion that the meaning of life is substrate-dependent is unfashionable. But IMO even whether organic biomolecules are a mere “substrate” for consciousness is very much an open question (cf. the strong grounds for a once-unfashionable “carbon chauvinism” about the nature of all primordial life elsewhere in the multiverse).     


  5. Why not just talk to Marcelo Rinesi (and point his readers here)?

    Also, why call him SL3, or even SL2? What fundamental changes is he thinking of? Are any of them bigger than the internet and cell phones? If not, I’d say SL1. If none are bigger than industrialization and science, I’d say SL2.

    I think it’s probably worth responding to some of the posts too, at least by commenting that you have done so and including links. If confused but sincere readers are going to be left confused, why even blog?

  6. The analogy between companies, marketplaces and governments and superintelligent agents seems fairly straightforward – all are powerful non-human self-improving systems.

    I approve of the methodology of using the damage companies, marketplaces and governments have done to the world as a relevant datapoint when considering what damage future powerful non-human self-improving systems might do.

  7. Intelligence augmentation vs machine intelligence is a bit of a false dichotomy. We mostly augment our intelligences by preprocessing sense data by machines, and by post-processing our motor outputs by machines. Machine intelligence helps with that – and its creation is stimulated by it in turn. So, it is not a case of intelligence augmentation OR machine intelligence – rather we have both phenomena spiralling in together.

  8. The financial market, while certainly a huge system, is in my eyes not comparable to AI at all, because at its core it’s entirely static; there’s no real intelligence in any aspect of it, at least not beyond the usual expert systems we’ve had around for the last few decades.

    As for the AI apocalypse, I don’t see it happening unless we make some very stupid decisions. If a supercomputer server farm achieves sentience and godlike intelligence overnight, I don’t see how it could destroy anything other than the physics-simulation sandbox it inhabits; it certainly won’t be able to uproot itself and walk downtown to do foul deeds.

    Chances are that it will never become much more than somewhat superhumanly intelligent. In that case, turn the somewhat-superhuman AI into a scientist/researcher for a week until we have cognitive-enhancement tech to bring us up to its current level, then focus on making it smarter, and repeat. At the current pace of strong AI, though, I suspect cognitive enhancement will arrive before AI, given that we already know how to enhance various cognitive functions. That work lies in the domain of medicine, regulated and plagued by ethical issues and other problems, but it does exist.

  9. Ironically, Rinesi’s metaphor is actually MORE appropriate with regard to SIAI’s research than to the weaker AI he is probably applying it to and which Michael is, rightfully, criticizing. By this I mean that, based on Eliezer Yudkowsky’s loose description of their research path, the SIAI is focusing on a provably friendly optimization algorithm more so than an anthropomorphic cognitive agent. This is precisely what our economic system is: a system to optimize resource allocation in service of society; our current financial problems stem from an optimization algorithm gone awry. To be fair, the super-optimizer that SIAI has in mind should recursively improve itself past the point where it envelops the behavioral dynamics of a human-level cognitive system, and so will definitely be qualitatively different from the financial systems we have in place today, but comparing the two is, at the very least, not as silly as Michael maintains.

    • The kind of optimizing process that we want to build would still be a unified agent, at least at the beginning. Much more closely integrated than a market, and more comparable to an anthropomorphic agent than a market. A market is just a bunch of weakly interacting nodes. (I guess a brain can be like that, but the analogy doesn’t make sense because neurons pass messages to one another so quickly.)

      • (I guess a brain can be like [a market, just a bunch of weakly interacting nodes], but the analogy doesn’t make sense because neurons pass messages to one another so quickly.)

        Quickly as compared to what? I don’t see how speed is relevant at all. On the contrary, a key reason for being so concerned about the potential power of computer-based AIs is that the universe itself doesn’t privilege the human rate of thought, and that CPU cycles are faster than neuron spikes (insofar as such comparisons are actually meaningful). Although no doubt speed is of the utmost practical importance, surely our conception of whether something is an “intelligence” or not should depend on what it does not how fast it does it.

        • Node-to-node communication relative to the activation speed of each node is relevant to classifying intelligence. Specifically, I’m referring to the quantity “S” as analyzed by Anders Sandberg in this paper.

          So, the quality I’m talking about is not a variable that scales with respect to objective time, but a dimensionless ratio of (length)/(speed × time).
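          As a toy illustration (the formula and every number below are my own rough assumptions, not Sandberg’s exact definitions), that ratio can be sketched as transmission delay divided by node activation time:

```python
# Toy sketch of the coupling ratio discussed above: the time a signal
# takes to travel between nodes, relative to the time one node takes to
# "fire". The definition and all numbers are illustrative assumptions.

def coupling_ratio(path_length_m, signal_speed_m_per_s, node_cycle_s):
    """Inter-node transmission delay divided by a node's activation time."""
    transmission_delay_s = path_length_m / signal_speed_m_per_s
    return transmission_delay_s / node_cycle_s

# Assumed brain-like numbers: ~0.1 m signal paths, ~10 m/s axonal
# conduction, ~10 ms between spikes for an active neuron.
brain = coupling_ratio(0.1, 10.0, 0.01)
print(brain)  # 1.0
```

          A ratio near 1 means nodes hear from each other within roughly one of their own cycles – the tightly coupled, brain-like regime being contrasted here with a market of weakly interacting nodes.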

          • Michael, here’s the abstract of a talk I gave – I should probably write it up. Yes, I think every mind/brain/world-simulation that organic robots like us support is a quantum computer – massive parallelism alone isn’t enough to generate our unitary world-simulations. All Tegmark’s argument does is show that the simulations must run at a rate of 10^13 quantum-coherent frames per second. And if your heart sinks whenever you read the words “quantum consciousness”, I promise mine does too. However…

            Quantum computing: the first 540 million years

            Is the mind/brain best modelled as a classical computer or a quantum computer? No classical computer can solve the binding problem – the creation of a unified percept from widely distributed neural processing of individual object characteristics. Hence even the most sophisticated silicon robots are lame in a real-world setting. By contrast, evidence that the mind/brain is a quantum computer lies right before one’s eyes in the form of the unity of perception – an unfakeable signature of quantum coherence. The evolutionary success of organic robots depends on the ability of our central nervous system to generate dynamic simulations of fitness-relevant patterns in the environment. Unlike classical computers, organic quantum computers can “bind” multiple features (edges, colours, motion, etc) into unitary objects and unitary world-simulations with a “refresh rate” of many billions per second (cf. the persistence of vision as experienced watching a movie run at 30 frames per second). These almost real-time simulations take the guise of what we call the macroscopic world: a spectacular egocentric simulation run by every vertebrate CNS that taps into the world’s fundamental quantum substrate. Our highly adaptive capacity to generate data-driven unitary world-simulations is strongly conserved across the vertebrate line and beyond – a capacity attested by the massively parallel neural architecture of the CNS. Unitary world-simulation enables organic robots effortlessly to solve the computational challenges of navigating a hostile environment that would leave the fastest classical supercomputer grinding away until Doomsday. By contrast, the capacity for serial linguistic thought and formal logico-mathematical reasoning is a late evolutionary novelty executed by a slow, brittle virtual machine running on top of its quantum parent. 
            Contra Tegmark, the existence of ultra-rapid thermally-induced decoherence in the mind/brain doesn’t refute the case for naturally-evolved quantum computing. For just as a few cubic millimeters of neocortical tissue can encode an arbitrarily large immensity of phenomenal space, likewise each ultra-short quantum-coherent “frame” can encode hundreds of milliseconds of phenomenal time. Contra the Penrose-Hameroff “Orch OR” model of consciousness, quantum mechanics can’t explain the Hard Problem as posed by materialist metaphysics, i.e. how a brain supposedly composed of insentient matter could generate consciousness. But macroscopic quantum coherence can explain how a unitary experiential field is constructed from what would otherwise be a mere aggregate of mind-dust (cf. Galen Strawson’s “Does physicalism entail panpsychism?”). The theory presented predicts that digital computers – and all inorganic robots with a classical computational architecture – will 1) never be able efficiently to perform complex real-world tasks that require that the binding problem be solved; and 2) never be interestingly conscious, since they are endowed with no unity of consciousness beyond their constituent microqualia – here hypothesized to be the stuff of the world as described by the field-theoretic formalism of physics. By contrast, tomorrow’s artificial quantum computers may manifest modes of unitary consciousness unknown to contemporary organic life.
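            As a sanity check (this is only arithmetic on the figures quoted above, not an endorsement of the theory): taking Tegmark’s estimated neural decoherence timescale of roughly 10^-13 s as given, a sequence of quantum-coherent “frames” each lasting that long would have to refresh at

```latex
% Arithmetic linking the decoherence estimate to the claimed frame rate,
% taking \tau \approx 10^{-13}\,\mathrm{s} (Tegmark's estimate) as given:
f = \frac{1}{\tau} \approx \frac{1}{10^{-13}\,\mathrm{s}} = 10^{13}\ \text{frames per second}
```

            which is where the figure of 10^13 frames per second comes from.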

      • The important difference between minds vs. markets in this discussion is that a mind is general enough to be able to learn and change its goals, while markets, so far, are stuck with the goals their creators imbued them with, or with slow incremental changes via legislation. I’d argue that you can make a market much more mind-like by allowing it access to its own rules; one example, using Robin Hanson’s futures-markets idea, would be a market with a meta-market trading market rules. But then what rules do we want our markets to obey – meaning, what do we want our markets to optimize, and how do we ensure the market does not become deranged, inadvertently or purposefully, by malicious influences? You see how this line of reasoning quickly parallels the friendly AGI discussion? This is the point I was trying to make.

    • our current financial problems stem from an optimization algorithm gone awry.

      Huh, NO!
      Our current financial problems stem from deliberate gaming of the rules in service of special interests, and from tweaking algorithms’ parameters for the same purpose.

  10. @DMan

    “usually computer scientists, especially amongst those who think AI is just about writing the right software algorithms.”

    What else could it be about? What other stuff do we have to fashion it out of?

    “It is merely a computing system, different from what we have today only in terms of greater sophistication.”

    Aren’t humans like that, too, only (currently) at the upper end of sophistication? Aren’t we just merely greatly (not so much compared to Jupiter Brains) sophisticated computing systems?

    • “What else could it be about? What other stuff do we have to fashion it out of?”

      The substrate on which the AI evolves may well be key. I suspect a true AI will require hardware that emulates the brain to some extent. Perhaps almost organic in its complexity, or using quantum computing. To run it on today’s systems is somewhat akin to trying to run Windows on a calculator.

      I’m not the only one who is sceptical that creating an AI is merely a matter of computing power. The systems we use today are, at the hardware level, binary in nature. It’s a 1 or a 0, yes or no. But our own brains are not binary – they’re ‘yes’, ‘no’ and ‘maybe’ and many shades in between.

      Synapses, according to neurophysics, are not ‘on’ or ‘off’ when not firing, but in a state of probabilistic superposition that increasing numbers of neuroscientists are beginning to hypothesize is quantum in nature. In other words, they’re in the ‘maybe’ state.

      No matter how clever the software you develop to emulate a non-binary environment, I suspect you won’t escape the binary foundation. Quantum computers, which may be not far off, will more than likely address that deficiency.

      I don’t think however that I was clear enough: I’m not saying there is *no* computing aspect to intelligence. What I am saying is that I think consciousness is not *just* a computing process. It is also the necessary employment of emotion for decision making – cognitive and developmental psychology research appears to indicate this.

      My objection is chiefly that some of those researching AI, with a background in computing, tend, quite arrogantly at times, to dismiss out of hand the need to include human cognitive research and developmental psychology. Not all with that background do, of course, but enough do.

      Surely we need to take into account the pathology of a psychopath if we don’t want to create one?

      Some serial killers are unable to feel much emotion due to brain or psychological dysfunction. Yet here many are aiming to create something with *no* emotion whatsoever.

      They seem to think, with absolutely no evidence in their favour, that something without emotion would be better. By survivors’ and perpetrators’ accounts, the Nazis at Auschwitz weren’t ‘grinning devils’ but instead felt virtually no emotion when they shoved people into gas chambers and furnaces. They mindlessly obeyed orders – like the AI some think is desirable.

      In any case my personal opinion is that we can’t ‘program’ an AI and switch it on ‘fully developed’, it will probably have to be ‘grown’, perhaps in a virtual environment. Pretty much like a child growing up – hopefully whose parents won’t create a super-Stalin.

      On the topic that the brain is not just a computer, I refer you to neuroscientist Steven Rose’s excellent symposium transcript (you have to Google, sorry) ‘Minds, Brains, and Consciousness’ as well as his work, ‘The Future of the Brain: The Promise and Perils of Tomorrow’s Neuroscience’

      • What I am saying is that I think consciousness is not *just* a computing process. It is also the necessary employment of emotion for decision making – cognitive and developmental psychology research appears to indicate this.

        Can you support your “thinking”, if not with evidence, then at least with some plausibility arguments?
        What makes you think that the outcome of emotion (a decision) isn’t the result of some computation?

        Some serial killers are unable to feel much emotion due to brain or psychological dysfunction.

        Isn’t this a counterargument to your thesis?
        Don’t these serial killers still have intelligence?

      • “Can you support your “thinking”, not even by evidence, but by some plausibility arguments?
        What makes you think that the outcome of emotion (a decision) isn’t the result of some computation?”

        That’s why I quoted neuroscientist Steven Rose. I’m citing his work as support for my assertions. Perhaps you could do the same?

        Am I mistaken in detecting a certain (emotional!) hostility in your tone? There’s no place for that in rational debate. If I’m misinterpreting, then my sincere apologies. Sometimes it’s hard to tell in purely textual formats.

        “Isn’t this a counterargument to your thesis?
        Don’t these serial killers still have intelligence?”

        No, it’s meant to be a metaphor, and like all metaphors it shouldn’t be taken too literally – a metaphor never implies exact equivalence.

        The point was, that when you suppress emotion (in humans), the evidence seems to indicate that you’re not guaranteed to get Star Trek’s Mr Spock – it seems more likely you’ll get a psychopath.

        If an AI has any commonality with us – and I can’t see how it wouldn’t – then the possibility exists of a similar outcome. Hence the metaphor.

  11. CC, we are indeed greatly sophisticated computer systems. But we’re not ‘just” greatly sophisticated computer systems. Compare mere nociception with phenomenal pain. It’s the extra subjective ingredient to organic biocomputers that makes anything matter at all – even if you think it’s computationally irrelevant, incidental or causally impotent.

  12. That article reminded me of ‘The invisible hand of the market’ from Orion’s arm.

  13. Digital pain/pleasure isn’t possible? Do you also think that digital consciousness isn’t?

  14. Michael:

    This is exactly the argument that I’ve been making on your website for over a year now. It is also similar to Greg Egan’s argument and to those of a number of other transhumanists.

    Incidentally, Eliezer’s SL continuum privileges itself. He classifies standard transhumanist ideas which well-educated rational people accept as SL3. He takes his own ideas about intelligence, which are controversial among transhumanists, and classifies them as SL4, implying that transhumanists who don’t agree lack the nerve or intelligence, or whatever, to go to the next level.

    I could argue that retrocausal quantum communication will lead to a singularity. We’ll communicate with our future selves and start to receive knowledge from the future when we build the first quantum computer. This will lead to a chain reaction where we’ll acquire a billion years of future tech in the course of a few months.

    I could classify standard transhumanism as SL3 and my retrocausal quantum communication scenarios as SL4. If physicists disagree about my views of quantum mechanics, I could accuse them of being stuck on SL3 and lacking the nerve or intelligence to accept the next step.

  15. Michael:

    My above post isn’t meant as disrespect to you, SIAI or Eliezer. You are definitely rational to study this scenario; but I think it is only one possible scenario.

  16. CC, yes, I’m a sceptic about digital consciousness, or at least anything beyond the occurrence of discrete “mind-dust” in a classical digital computer with a von Neumann architecture.

    Very few of us would ascribe phenomenal consciousness to our existing silicon robots or PCs. Nonetheless, most functionalists probably suppose that consciousness will somehow “switch on” when our robots/PCs attain a sufficient level of sophistication, real-world interaction or whatever. However, that momentous day always seems to recede into the indefinite future.

    • Isn’t it a bit premature to condemn this as being “always in the future” until we at least create software systems with human-level complexity? Why would we expect cockroach-level systems to suddenly display phenomenological consciousness?

      • If consciousness were somehow critically dependent on intelligence, then we should predict that e.g. electrode studies of different brain regions would bear this out. But the opposite is the case. The most intense forms of consciousness, e.g. raw pain or blind panic, are mediated by neurologically primitive structures in the limbic system, whereas our higher cognitive functions in the neocortex, e.g. language production or solving mathematical equations, are so phenomenologically thin as to be barely accessible to introspection at all.

  17. I agree with haig at Comment #9. Markets are a great example of a nonhumane, nonhuman (although partially made of humans) optimization process. This is a nonobvious point well worth blogging about, and so I count Rinesi’s post as on the whole insightful.

    Anissimov is of course correct to point out that the relative benignity of today’s global economy says very little about whether future AIs are likely to be similarly benign, and that plausible future AIs and markets are also very different things, despite being similar in the particular aspect of being nonhuman optimization processes. But the dismissal of Rinesi’s post as insufficiently transhumanist is just bizarre. Transhumanism is just a word. The actual fact of the matter is that Rinesi made a post in which he expounded on these-and-such ideas, but was mistaken about such-and-this specific point. Why not just say that, instead of picking fights about who’s allowed to self-identify with which words?

    • I don’t mean to pick a fight, I just consider it remarkable that a transhumanist would essentially seem to be saying that even the recursively self-improving AIs of the distant future would be so similar in function to present-day aggregations of humans that we have nothing to fear from them. It all seems to be part of the magical world where exponential manufacturing is impossible, and massive power discrepancies just can’t develop. The world where people can’t imagine how one highly capable agent can outclass seven billion inferior agents. I thought that was the world outside transhumanism — I thought we could at least agree that massive power discrepancies are possible.

      • Obviously they haven’t read the book

        “Being a Highly Capable Agent: How to Outclass a Planetful of Inferior Agents”


        Bill ‘Massive Power Discrepancy’ Gates

      • I think you are confusing transhumanism with the Bostromian/Yudkowskian existential-risk-reduction movement.

        On the one hand, it is futile to dispute definitions, but really: I had been given to understand that transhumanism referred to the moral stance that it is right to use technology to improve people’s condition beyond the ordinary limitations of our species. Surely this philosophical position is independent of the truth or falsity of any particular hard-takeoff/singleton hypotheses.

      • It is unrelated to hard takeoff hypotheses, sort of, but I thought that transhumanism meant at least admitting that we would be creating massively more powerful beings in the next century, even if they don’t take over all at once or share power with humans.

  18. We aren’t any more “conscious” than any sufficiently complex and sophisticated computational system can and will be. We must expect other systems (whether natural or artificial, evolved or created) of similar complexity, feedback characteristics and architecture, regardless of substrate, to be capable of being just as “conscious” as we are. The property of a system we’re calling consciousness isn’t anything mysterious or separate from the system, particularly not something external to the physical universe. Not one bit of consciousness occurs outside of your nerve cells. It’s just what the computational system, in our case the neuronal mass, does. It’s an emergent phenomenon entirely typical of computational systems of a certain complexity with feedback characteristics. That’s what the neurons mostly do: feedback. They output messages to each other and respond to mostly internal inputs; external inputs are relatively sparse in the message traffic (particularly if you’re deaf and blind). Consciousness is what internal feedback feels like. No wonder we have concepts like self-reflection. Consciousness is like light trapped in a room of mirrors, bouncing as long as you input energy, i.e., grab a pizza once in a while.

    • CC, when you catch your hand in the door, you don’t experience “pain”. You experience pain. Why such subjective experience exists at all is deeply mysterious – at least if we assume some kind of functionalist materialism. For we can specify all the microphysical facts, supplemented perhaps by some kind of abstract computational description of the system under study, and still have no clue why subjective experience exists at all. 

    • What a weird argument between you both.
      Neither one or the other actually knows what consciousness is so how can you argue one way or the other?
      Am I missing something?

      • Give me arguments why we’re a ‘special case’ among all computational systems built out of atoms (or some form of pure energy, if you’re advanced enough).

        • CC, I’m at a loss to understand how our hugely fitness-enhancing capacity to generate unitary world-simulations is computationally feasible with a classical architecture.
          If the unitary mind/brains of vertebrate organic robots are “special”, it’s because we’re not classical Turing machines.

        • Not a Turing machine = magically special, then? Why is a Turing machine inherently un-special? A system of parallel nodes can be simulated just fine by a Turing machine, just not as efficiently as by a parallel architecture. Why does parallel = not Turing-computable? And why the sympathy towards Penrosian arguments? I’ve never heard of anyone in the cognitive sciences who takes these arguments seriously except for Hameroff. (An anesthesiologist.) Have you read Tegmark’s paper?

          Even if the quantum thing were proven wrong, would you grasp around for another special quality that human brain must have? Why single out quantum effects, why not something else? Why isn’t parallelism, good microcircuitry, and a large number of nodes sufficient for consciousness? I’m not sure why that isn’t intuitively satisfying and quantum computing is.

      • You’re probably missing quite a lot if these arguments seem weird to you.

        • Yup, I am missing the common delusion of being able to hold an opinion about matters of which I know nothing or very little.
          I am interested in seeing the evidence upon which others are basing such outrageously conflicting theses; I cannot find, not only evidence, but even the flimsiest reasons for their claims.
          But this is an old tradition: how many angels can dance on the head of a pin?

  19. Re: “The actual fact of the matter is that Rinesi made a post in which he expounded on these-and-such ideas, but was mistaken about such-and-this specific point.”

    Which specific point? That the fact that “we are still here” (despite “the market”) “might give you some ease”? It seems to me that it might well do. Powerful non-human intelligences (companies, governments, marketplaces) are already here – they haven’t destroyed the world (yet) – and that is a somewhat comforting thought.
