Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.


My Upcoming Talk in Texas: Anthropomorphism and Moral Realism in Advanced Artificial Intelligence

I was recently informed that my abstract was accepted for presentation at the Society for Philosophy and Technology conference in Denton, TX, this upcoming May 26-29. You may have heard of their journal, Techné. Register now for the exciting chance to see me onstage, talking AI and philosophy. If you would volunteer to film me, that would make me even more excited, and it would be valuable to our most noble cause.

Here's the abstract:

Anthropomorphism and Moral Realism in Advanced Artificial Intelligence
Michael Anissimov
Singularity Institute for Artificial Intelligence

Humanity has attributed human-like qualities to simple automatons since the time of the Greeks. This highlights our tendency to anthropomorphize (Yudkowsky 2008). Today, many computer users anthropomorphize software programs. Human psychology is extremely complex, and most of the simplest everyday tasks have yet to be replicated by a computer or robot (Pinker 1997). As robotics and Artificial Intelligence (AI) become a larger and more important part of civilization, we have to ensure that robots are capable of making complex, unsupervised decisions in ways we would broadly consider beneficial or common-sensical. Moral realism, the idea that moral statements can be true or false, may cause developers in AI and robotics to underestimate the effort required to meet this goal. Moral realism is a false but widely held belief (Greene 2002). A common notion in discussions of advanced AI is that once an AI acquires sufficient intelligence, it will inherently know how to do the right thing morally. This assumption may derail attempts to develop human-friendly goal systems in AI by making such efforts seem unnecessary.

Although rogue AI is a staple of science fiction, many scientists and AI researchers take the risk seriously (Bostrom 2002; Rees 2003; Kurzweil 2005; Bostrom 2006; Omohundro 2008; Yudkowsky 2008). Arguments have been made that superintelligent AI -- an intellect much smarter than the best human brains in practically every field -- could be created as early as the 2030s (Bostrom 1998; Kurzweil 2005). Superintelligent AI could copy itself, potentially accelerate its thinking and action speeds to superhuman levels, and rapidly self-modify to increase its own intelligence and power further (Good 1965; Yudkowsky 2008). A strong argument can be made that superintelligent machines will eventually become a dominant force on Earth. An "intelligence explosion" could result from communities or individual artificial intelligences rapidly self-improving and acquiring resources.

Most AI rebellion in fiction is highly anthropomorphic -- AIs feeling resentment towards their creators. More realistically, advanced AIs might pursue resources as instrumental objectives in pursuit of a wide range of possible goals, so effectively that humans could be deprived of space or matter we need to live (Omohundro 2008). In this manner, human extinction could come about through the indifference of more powerful beings rather than outright malevolence. A central question is, "how can we design a self-improving AI that remains friendly to humans even if it eventually becomes superintelligent and gains access to its own source code?" This challenge is addressed in a variety of works over the last decade (Yudkowsky 2001; Bostrom 2003; Hall 2007; Wallach 2008) but is still very much an open problem.

A technically detailed answer to the question, "how can we create a human-friendly superintelligence?" is an interdisciplinary task, bringing together philosophy, cognitive science, and computer science. Building a background requires analyzing human motivational structure, including human-universal behaviors (Brown 1991), and uncovering the hidden complexity of human desires and motivations (Pinker 1997), rather than viewing Homo sapiens as a blank slate onto which culture is imprinted (Pinker 2003). Building artificial intelligences by copying human motivational structures may be undesirable, because human motivations, combined with the capabilities of superintelligence and open-ended self-modification, could be dangerous. Such AIs might "wirehead" themselves by stimulating their own pleasure centers at the expense of constructive or beneficent activities in the external world. Experimental evidence of the consequences of direct stimulation of the human pleasure center is very limited, but we have anecdotal evidence in the form of drug addiction.

Since artificial intelligence will eventually exceed human capabilities, it is crucial that the challenge of creating a stable human-friendly motivational structure in AI is solved before the technology reaches a threshold level of sophistication. Even if advanced AI is not created for hundreds of years, many fruitful philosophical questions are raised by the possibility (Chalmers 2010).


Bostrom, N. (1998). "How Long Before Superintelligence?". International Journal of Futures Studies, 2.

Bostrom, N. (2002). "Existential Risks: Analyzing Human Extinction Scenarios". Journal of Evolution and Technology, 9(1).

Bostrom, N. (2003). "Ethical Issues in Advanced Artificial Intelligence". Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence.

Bostrom, N. (2006). "How long before superintelligence?". Linguistic and Philosophical Investigations 5 (1): 11–30.

Brown, D. (1991). Human Universals. McGraw Hill.

Chalmers, D. (2010). "The Singularity: A Philosophical Analysis". Journal of Consciousness Studies, 17(9-10): 7-65.

Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine", Advances in Computers, vol 6, Franz L. Alt and Morris Rubinoff, eds, pp 31-88, Academic Press.

Greene, J. (2002). The Terrible, Horrible, No Good, Very Bad Truth about Morality and What to Do About it. Doctoral Dissertation for the Department of Philosophy, Princeton University, June 2002.

Hall, J.S. (2007). Beyond AI: Creating the Conscience of the Machine. Amherst: Prometheus Books.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking.

Omohundro, S. (2008). "The Basic AI Drives". Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, edited by P. Wang, B. Goertzel, and S. Franklin, February 2008, IOS Press.

Pinker, S. (1997). How the Mind Works. Penguin Books.

Pinker, S. (2003). The Blank Slate: the Modern Denial of Human Nature. Penguin Books.

Rees, M. (2003). Our Final Hour: A Scientist's Warning: How Terror, Error, and Environmental Disaster Threaten Humankind's Future in This Century - On Earth and Beyond. Basic Books.

Wallach, W. & Allen, C. (2008). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press.

Yudkowsky, E. (2001). Creating Friendly AI. Publication of the Singularity Institute for Artificial Intelligence.

Yudkowsky, E. (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk". In N. Bostrom and M. Cirkovic (Eds.), Global Catastrophic Risks (pp. 308-343). Oxford University Press.

Comments (63)
  1. Michael, could you clarify under what conditions it is permissible to “anthropomorphize”? In the heyday of behaviourism, for instance, attributing subjective states to members of other species was regarded as unscientific anthropomorphism. Even today, one reads studies of experimental procedures used to induce “depression-like” behaviour or “distress vocalizations” in captive nonhuman animals – rather than procedures that cause depression and phenomenal pain. A few researchers are indeed guilty of (probably) misplaced anthropomorphism – e.g. the attribution of religion to doubtless highly sentient sperm whales. But such scientists are in a very small minority.

    How about artificial intelligent systems? Clearly, it’s not anthropomorphic to attribute to a computer the capacity to play chess. This is because chess-playing is normally defined purely behaviourally – though it’s still worth asking in what sense Deep Blue understands it’s playing chess; Deep Blue lacks any phenomenology of cognition. Conversely, if future artificial intelligence is endowed with a pleasure-pain axis rather than mere formal utility functions, then I would regard a degree of anthropomorphism as appropriate – even if the nominal IQ score of the sentient AGI is off-the-scale. If, on the other hand, postbiological life is not sentient but merely sapient, i.e. governed purely by formal utility functions, then attribution of any human-like states could be misleading.

    Moral realism raises issues distinct from anthropomorphism, though they are of course connected. Whether or not one is a moral realist, I still think it’s worth stressing that the pleasure-pain axis is the engine of value in a naturalistic sense – i.e. it creates states that are subjectively valuable and subjectively disvaluable, regardless of their propositional content, if any. I know the example of unrestrained paper-clip tiling is sometimes given as an expression of an AGI value system that would be incomprehensibly alien to us. But if such behaviour were displayed by a human being today, then we’d actually diagnose the mindset behind the behaviour quite well. Compulsive paperclip tiling would be a manifestation of severe autism, not least an inability to distinguish the important from the trivial – one of the hallmarks of intelligence. I agree that our successors may have value-systems incomprehensible to human minds. But if so, I doubt this incomprehensibility will stem from the functional equivalent of paperclip tiling.
    Zombies are not going to inherit the Earth. Instead, posthuman values may be incomprehensible to humans because posthumans may occupy state-spaces of experience wholly alien to us. [Imagine if echo-locating bats had evolved with opposable thumbs and a recursive syntax. Heaven knows what the lifeworlds of superintelligent post-chiropterans might be like.]

    • Hi David, I agree with your first two paragraphs.

      We still deeply disagree and you seem confused to me when you say this:

      Compulsive paperclip tiling would be a manifestation of severe autism, not least an inability to distinguish the important from the trivial – one of the hallmarks of intelligence.

      There is no such thing as objectively important or trivial, that’s part of my point. To say otherwise is to engage in Jaynes’ Mind Projection Fallacy. To some superintelligences, paperclips are the most objectively important thing in the world, and to them, they are right and you are wrong. Paperclips are not inherently more trivial than anything else.

      I agree that our successors may have value-systems incomprehensible to human minds. But if so, I doubt this incomprehensibility will stem from the functional equivalent of paperclip tiling.
      Zombies are not going to inherit the Earth.

      Why not, because YOU place a lot of value on moral complexity and have trouble coming to terms with the idea of a superintelligence with tremendous cognitive resources put towards relatively simplistic ends?

      You have to describe the process by which a superintelligence with a paperclip-maximizing supergoal, programmed to defend its supergoal at all costs, would change it to something more sophisticated. There is no reason why it would. “It just doesn’t make intuitive sense to me that a superintelligence like that could exist” seems to be the most common argument, and it’s not persuasive.

      Clearly, a species that required paperclips to survive and reproduce for hundreds of thousands of generations would evolve to value paperclips more than all the other stuff you apparently believe to be the objective propositional content of objective morality, i.e., “beauty” and “complexity”.

      Psychopaths exist. Highly intelligent people with zero empathy exist. They aren’t wrong to have zero empathy. There’s nothing “wrong” with eliminating everything on this planet and leaving it as a charred wasteland. There’s nothing wrong with bombarding the surface of the Earth with asteroids until nothing is left. It could actually be quite fun from the perspective of the superintelligence doing it. Funner than putting up with 7 billion humans, perhaps.

      The space of “dark” goal systems is so much wider than the space of “empathic”/”good” goal systems. Generate goal systems from every possible sequence of bits under a certain character limit and you’ll see what I mean. That’s why an entirely random change to the human brain is more likely to lead to a corpse, invalid, or a psychopath than an angel.

      There are entirely selfish and even evil people out there that are absolutely brilliant. Even if there weren’t, a very powerful theoretical argument could be made for the plausibility of their existence. There is no reason why arguments about pain-pleasure axes, or anything else would be persuasive to them. They simply don’t care. You could sit and talk to them about morality all day, and they’ll up and stab you in the back if it was convenient to them. This happens because the universe is not our friend, and not especially biased in favor of “goodness”. “Goodness” is a figment of our imagination that must be aggressively upheld to continue existing.

      • Michael, thanks for clarifying.

        Is there any (objective) fact of the matter about what is important and trivial? Is it even possible to formulate a conception of intelligence without it? What distinguishes a genius like Gödel from an idiot savant? When I defend the objectivity of value, I’m not arguing for the importance of Picassos over paperclips – i.e. one “projection” of feelings onto the mind-independent world rather than another. I agree with you here. There simply isn’t any fact of the matter. Rather I’m arguing for the existence of a world whose ontology includes first-person facts, some of which are inherently normative, e.g. I-am-in-agony – rather than the existence of an intrinsically valueless zombie world. Agony, by its very nature, is important to the subject – and likewise important to anyone (e.g. a mirror-touch synaesthete) who adequately represents another subject’s pain. Thus there aren’t any mirror-touch synaesthete sociopaths. By contrast, a notional zombie world lacks first-person facts – or anything that would make one state of affairs inherently matter more or less than any other. Nothing in such a world is inherently important or inherently trivial or has any normative properties. By the same token, someone today with congenital analgesia or pain asymbolia may ask sceptically what is the “normative force” of the agony of which pain victims (and philosophers) speak. Unfortunately, it’s hard to convey the nature of phenomenal agony to the pain-naive. “Pain” is a primitive term – like “redness”, which someone congenitally colour-blind cannot grasp. But plunge one’s hand in scalding hot water and the normative force of extreme pain is all too vivid: I-ought-not-to-be-in-this-state. What is the explanation? Nobody knows.

        Now the value nihilist may say here: so what? Yes, your agony is important to you. It’s not important to me. It needn’t be important to an AGI either.

        But this “asymmetry of epistemic access” simply reflects our ignorance – or ignorance on the part of the supposed AGI – not the absence of any fact of the matter. Ignorance doesn’t make phenomenally important states like agony any less inherently significant – or any less of an objective fact about the world than the rest mass of an electron. An artificial robot or super-AGI governed by formal utility functions (rather than reinforcement learning and the pleasure-pain axis) may know nothing of phenomenally important states. It may know nothing of non-normative phenomenal colours or sounds either. But surely such ignorance doesn’t throw their objective reality or significance into question?

      • “The space of “dark” goal systems is so much wider than the space of “empathic”/”good” goal systems.”

        Maybe from an entirely human perspective. If you step back from anthropocentric bias, however, you see the tautology: Whatever the AI deems “good” in its utility function is “good” (from the AI’s perspective). In other words, there are no “dark” goal systems. Goals are always “good”.

        Assume a paperclip maximizer that “wins”, i.e. it happens to create a system of copies of itself that then form a singleton and wipe out all other sentient life. Then, paperclip utilitronium. Is that a bad outcome? Of course not. It is epic goodness from the perspective of the remaining sentients, which are all paperclip maximizers.

        In my view, the real problem is that real-world Darwinian dynamics don’t lead to such a “win” in a sustainable way – they lead to ecologies of mal-aligned utility functions and conflicts between sentients (predator – prey, parasite – host etc). Any outcome that breaks this paradigm without killing off all sentient life has the hypothetical potential to be a “good” outcome (albeit improbable and likely unsustainable).

      • I agree with everything Hedonic Treader said. We could have the same understanding; it’s just a semantic difference. HT, I’m trying to explain the issue in a more colloquial fashion rather than using academic precision, my apologies. If this were an academic workshop then I would make very different word choices.

        David, I’m trying to think about what you’ve said… phenomenally significant states may only be of interest to a very small portion of beings. Maybe there are other classes of phenomenally significant states that view all our pain, suffering, or joys as trivial and not a big deal one way or the other. There could be a complex hierarchy of phenomenal experiences.

        An artificial robot or super-AGI governed by formal utility functions (rather than reinforcement learning and the pleasure-pain axis) may know nothing of phenomenally important states. It may know nothing of non-normative phenomenal colours or sounds either. But surely such ignorance doesn’t throw their objective reality or significance into question?

        It doesn’t, they certainly exist, but I’m just trying to emphasize how little an AGI or superintelligence might care. Also, as I mentioned above, the states we experience could just be a speck in a huge space. Maybe our phenomenal states have the same intuitive significance to superintelligences as the phenomenal states of a flatworm.

        • Does an understanding of triviality and importance transcend the pleasure-pain axis? Or is this concept-space – “(un)interesting”, “(in)significant”, “(no) big deal” etc – embedded entirely within it?
          Michael, I agree with you that humans occupy just a small speck in the huge phenomenal state-space of all possible minds. What I’m asking is whether this vast phenomenal state-space can have significance to any subject – including a hypothetical AGI – except insofar as it’s also penetrated by hedonic tone, or is instrumentally relevant to phenomenal states that do have hedonic tone.

          Here are two possibilities.
          If a hypothetical AGI is insentient, governed purely by formal utility functions, then to suppose that the AGI will find anything at all (in)significant or (un)important would be anthropomorphic projection on our part. The AGI might behave in ways systematically interpretable as finding e.g. paperclips supremely important. But the AGI doesn’t really care – any more than a chess program “cares” about beating us at chess.

          But that’s not what you’re suggesting. As I understand it, you’re supposing that a full-spectrum AGI will be sentient – i.e. a subject of experience. So what might it mean to speak of the AGI’s “complex hierarchy [of importance?] of phenomenal experiences”? What criterion of importance might the AGI use that isn’t parasitic, directly or indirectly, on the pleasure-pain axis?

          Today a stocking fetishist and a paperclip fetishist can find stockings and paperclips (respectively) supremely interesting. Naively, they may suppose that stockings (paperclips etc) are objectively the most valuable thing in the world. But let’s assume they are intelligent. They’ll know that there isn’t a mind-independent fact of the matter that makes stockings, or paperclips, or any other insentient, “stuff” intrinsically valuable. Presumably too they’ll understand the molecular mechanisms by which our representations of inanimate objects are attributed value.

          So could a sentient, superintelligent AGI really turn out to be a glorified paperclip fetishist? Granted, the super-AGI might pursue a supergoal like a classical utilitarian ethic whose implementation details strike most humans today as counterintuitive and repugnant – for example the conversion of the accessible universe into utilitronium. But the super-AGI is not – how can I put it – stupid. The super-AGI knows there’s nothing special about paperclips. Or maybe I’m missing something?!

        • David, two relevant questions come to my mind here.

          “They’ll know that there isn’t a mind-independent fact of the matter that makes stockings, or paperclips, or any other insentient, “stuff” intrinsically valuable.”

          Do you think there is a mind-independent fact of the matter that makes activation patterns in human-like pleasure centers intrinsically valuable?

          Do you think that all practically realizable artificial agents with super-human instrumental efficiency in pursuing formally defined strategic goals will discover this fact of the matter and adjust their goal system accordingly – no matter how they were designed by their (human) designers?

          • Hedonic Treader, if we were zombies, then it wouldn’t be the case that some activation patterns are intrinsically colourful, others are intrinsically funny, and others are intrinsically (dis)valuable. But of course we’re not zombies.
            These are objective facts about the world – mind-dependent yes, in the trivial sense that they are undergone by minds, but real properties of occupants of the space-time [or Hilbert space etc] coordinates in question.

            Let’s assume that we can identify the neural correlates of consciousness [let’s say, for instance, that any interesting form of consciousness depends on macroscopic quantum coherence]. If so, then a classical digital computer can be programmed to behave differently towards sentient and nonsentient agents in virtue of these formal properties. But it won’t understand the nature of sentience. Only by instantiating such states oneself can a system understand phenomenal colour, agony, disgust, humour etc.

            If a “recursively self-improving” artificial silicon (etc) robot with a classical architecture were to assemble and physically self-incorporate the modules needed for sentience, then it could discover the nature of pleasure, agony and other powerfully normative states etc. Presumably the robot would then alter its behaviour. Pain and pleasure have a powerful “reprogramming” capacity that can completely override the goals of the original programmers / educators.

          • “Only by instantiating such states oneself can a system understand phenomenal colour, agony, disgust, humour etc.”

            But a system can surely have representational tokens of these mental states precise enough to predict human(-like) behavior – without sharing their underlying experiential valuation – which is good enough for instrumental rationality when dealing with humans. You don’t need to feel the badness of pain in order to accurately predict that aversive behavior results from inflicting it; you don’t need to subjectively share the joy of a joke in order to perfect your skill of making people laugh etc.

            This in fact proves the point that it is entirely possible to create highly effective strategic agents who can excel in the social, political, economic, militaristic etc. fields without empathizing in the slightest with the (other?) sentients they interact with. (Empathizing here in the sense of a well-aligned utility function.) And these systems can out-compete humans to such a degree that they become extremely serious threats very quickly. Why then should we trust that they discover the ethics of universal compassion by mere reasoning? If such systems are endowed with goals and values that do not empathize with the suffering of others, there is no reason to trust that they will start doing so by themselves against their originally programmed value-space.

            Maybe in practice, their values will themselves be prone to evolutionary processes (e.g. memetic, or due to specialization of replicable cognitive AI modules in the cloud etc.). In this case, we would expect certain types of cooperation and symbiotic co-evolution with other agents to evolve, but not universal compassion unless the process is so integrative that it restructures all sentient life on the planet into a singleton (with questionable consequences for long-term stability).

          • [gremlins: Hedonic Treader, apologies, this is supposed to be a response to your reply of Feb 8 below. I guess the software doesn’t like deeply nested replies.]

            Hedonic Treader, you are more optimistic (or pessimistic?) than me about the prospect of super-AGI – at least for the foreseeable future. Michael above was assuming the likelihood of sentient artificial superintelligence. He was speculating how the entire pleasure-pain axis as organic robots like us understand it might not be significant to the super-AGI compared to other regions in the state-space of all possible phenomenal minds. By contrast, you are considering the possibility that hypothetical post-biological super-AGI may be non-sentient. Actually, I consider your scenario overwhelmingly more likely with any classical computer architecture. I just wonder how “super” this nominal super-intelligence will be. To get anywhere close to human-level general intelligence, IMO an artificial super-robot will need, among other things, to:

            1) run multimodal world-simulations comparable to ours – capable of non-sentient analogues of object binding and the unity of perception. Conceived as purely classical information processors, neurons are pitifully slow. Can even massive classical parallelism and clever algorithms generate unitary world-simulations as computationally powerful as ours in real time? As you know, I reckon what passes as the mind-independent macroscopic world is a dynamic simulation generated and run by a quantum supercomputer – the vertebrate mind/brain. Naturally, not everyone would accept the conceptual framework presupposed here. Thus sometimes one reads of “using the world as its own representation” etc. For lots of reasons, I think such perceptual direct realism is hopeless.

            2) like us, the hypothetical post-biological AGI will need to “mind-read” agents within these data-driven world-simulations – i.e. simulate other perspectives, and run these mind-simulations within its functional macroscopic world-simulation. And all generated within a few dozen milliseconds, on the basis of patterns of impulses from the optic nerve etc or its artificial equivalents.

            Perhaps the Church-Turing thesis will be invoked here. Surely even a classical AGI must be able to emulate humans and more – whether we are quantum minds or not? But this may be a red herring. The fact that notionally a classical Turing machine could e.g. factorize 1000 digit numbers doesn’t show that it is really computationally equivalent to a quantum computer. Such a calculation would take a notional classical Turing machine longer than the age of the universe to execute. Of course, the mind/brain doesn’t factorize 1000 digit numbers. But I suspect generating unitary macroscopic world-simulations like ours may turn out to be at least as computationally demanding.

  2. A minor nitpick:

    “Moral realism, the idea that moral statements can be true or false…”

    This metaethical position is actually termed cognitivism, which becomes moral realism only if you add the ingredient that “true or false” means “corresponds to physical reality, or not”. On the other hand, if “true” means “follows from certain axioms and rules of inference”, this is not moral realism any more, although it still is cognitivism.

    Anyway, I’m pretty sure you’re very familiar with all this, since Yudkowsky’s meta-ethics LW sequence is talking about the latter kind of cognitivism-that-isn’t-moral-realism, but I just thought I’d mention this so you could add somewhere this little remark to avoid possible misunderstandings…

  3. Great abstract and very well written. I would love to read the entire paper.

  4. Don’t you get tired of referencing Eli and Nick all the time? Seems like you could move beyond them and bring yourself into a wider scholarly environment.

  5. I try my best but not that many people write on superintelligence. Also, we’re all going to die if the Friendly AI effort doesn’t get a lot of support, like, now, so citing Nick and Eliezer all the time is a pain I just have to endure.

    It’s worth noting that half the citations are people I’ve never cited in academic work before: Greene, Pinker, Chalmers, Wallach, Rees.

    Also, no one but a tiny group actually reads or understands the Eliezer and Nick papers I keep linking. Maybe when more people read the papers, I will stop having a reason to constantly cite them, but they remain highly important until then.

    Anyway, by referencing Greene, Pinker, Brown, Chalmers, Wallach, and Rees, I am in a wider scholarly environment.

    Nick and Eliezer may be passe to some people, but they’re very new and fresh to those who haven’t heard of either before in their entire lives, which will be most of the audience here.

    • An expletive statement like we are “going to die” reduces the credibility of the concept. Better to suggest that “extinction risk” (not existential risk) is possible if humanity does not take a focused and decisive approach to strong AI. It would be more appealing to society to engage the issues of Strong AI as a problem-solving project. I want to see you succeed in your work because I value you. I just want to make sure that you are careful about alarmism, because no matter how much you may believe that we will die if we do not develop what you see as necessary, someone else may have a more attractive and verbally charismatic approach.

      My suggestion for inclusion in your essay/paper is Andy Clark, an intriguing thinker who comes from a different philosophical viewpoint but complements the transhumanist perspective quite nicely.

      – Clark, Andy (1997) Being There: Putting Brain, Body, and World Together Again.
      – Hibbard, Bill (2002) Super-Intelligent Machines
      – Schneider, Susan (ed) (2009) Science Fiction and Philosophy: From Time Travel to Superintelligence
      – Block, Ned (1995) An Invitation to Cognitive Science, Thinking
      – Abdoullaev, Azamat (1999) Artificial Superintelligence
      – Legg, Shane (June 2008). Machine Super Intelligence

      I think that what is vastly needed in regards to issues of Superintelligences is design theory. Design theory is developed to address specific problems, whether currently present or potentially present, with the sole aim of solving them through critical assessment of the conditions which both compose the problem and construct the solutions. Buckminster Fuller was an ace at this. But the times are quite different now, of course, so design theory has had to advance its own methodology to produce broader sweeps of conditions and solutions.

      All my best,

  6. The space of “dark” goal systems is so much wider than the space of “empathic”/”good” goal systems. Generate goal systems from every possible sequence of bits under a certain character limit and you’ll see what I mean. That’s why an entirely random change to the human brain is more likely to lead to a corpse, invalid, or a psychopath than an angel.

    You know, before I read this paragraph I was 100% in agreement with the idea, but once I started trying to actually quantify that statement it stopped being obviously true to me. Could you please quantify it to some degree?

    In any event, I know prudence demands we assume it’s true, but it still intrigues me.
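    One toy way to start quantifying this intuition (an editorial sketch; the sorted list standing in for a “working” system is an assumption, not something from the thread): apply a random change to an ordered structure and measure how often the order survives.

```python
import random

def random_swap(seq, rng):
    """Return a copy of seq with two randomly chosen positions swapped."""
    s = list(seq)
    i, j = rng.randrange(len(s)), rng.randrange(len(s))
    s[i], s[j] = s[j], s[i]
    return s

rng = random.Random(0)
working = list(range(20))  # a "working", i.e. sorted, structure
trials = 10_000
survived = sum(random_swap(working, rng) == working for _ in range(trials))
print(f"order survived {survived} of {trials} random changes")
# With 20 distinct elements, only the i == j swap is harmless,
# so roughly 1 in 20 random changes leaves the order intact.
```

    The harmless fraction shrinks as the structure grows; real goal systems are of course far more structured than a sorted list, so this only gestures at the direction of the claim.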

  7. What we need is a small toy system of a simple AI wandering around in a cellular automaton space.

    Any AI with a goal system at the top (one that can often be defined implicitly even if it isn’t explicitly called that) will have some content in it, stored as bits. If you were to randomly flip the bits around, complex structures would be destroyed, the very structures that define goal content.

    Take the above paragraph and replace ten of its letters with random other letters. It would still make sense. Do the same with 30, and it could still make sense, thanks to humans’ amazing ability to fill in the blanks given context. But, replace 60 or 70 characters and you might start to get nonsense. Go ahead and try it if you have the time or a program, it would be fun to see when it becomes unrecognizable.

    Most random configurations of words and letters are complete nonsense. You can run statistical analyses on corpora of all types, and there are programs that infer the language from the average length of words, the number of possible letters, and their relative frequencies. Most possible configurations are non-languages. They lack complex structure and syntax.

    Within a tiny space of probabilities are language-like data structures. Even within “language-like” data structures, the vast majority of possible configurations make absolutely no sense. You could have a paragraph with word lengths and letter frequency similar to the English language but composed completely of non-existent words, like a Scrabble game thrown on the board directly from the bag.
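    A rough sketch of that statistical point (illustrative only; the frequency table and the distance measure are my assumptions, not a description of any particular language-inference program): score a string by how far its letter frequencies sit from typical English ones.

```python
import string
from collections import Counter

# Approximate relative frequencies of the nine most common English letters
# (rounded values; the table itself is an assumption for illustration).
ENGLISH_FREQ = {'e': 0.127, 't': 0.091, 'a': 0.082, 'o': 0.075, 'i': 0.070,
                'n': 0.067, 's': 0.063, 'h': 0.061, 'r': 0.060}

def english_distance(text):
    """Sum of absolute gaps between the text's letter frequencies and
    typical English frequencies (lower means more English-like)."""
    letters = [c for c in text.lower() if c in string.ascii_lowercase]
    total = len(letters) or 1
    counts = Counter(letters)  # Counter returns 0 for absent letters
    return sum(abs(counts[ch] / total - freq) for ch, freq in ENGLISH_FREQ.items())

sentence = "most random configurations of words and letters are complete nonsense"
print(english_distance(sentence))    # small-ish: the letter mix resembles English
print(english_distance("zq" * 30))   # about 0.70: none of the common letters appear
```

    Uniform random strings typically score between these two extremes; a real language-inference program would also use word lengths and higher-order statistics, as noted above.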

    My point is getting the right morality from a huge space of possible code structures is difficult.

    I really wish I had discovered this first, but unfortunately Eliezer Yudkowsky did, so I’m forced to keep referencing his work like a broken record. Thankfully, other authors have published a few things relevant to the question, like Greene’s PhD thesis and Pinker’s book. Yet they don’t tackle it in the interdisciplinary fashion demanded of someone explicitly trying to solve the Friendly AI problem, rather than just dispelling stupid myths about morality or cognition (which is what Greene and Pinker do).

    Some of this argument is given in this talk, I think:

    For other stuff that’s relevant, this:

    The above is not as fleshed out an answer as I would like to give. I’m working on a longer work that will include something on this, but sadly I have a lot of day-to-day duties that get in the way of my longer-term writing projects, so it’s coming along slowly. If someone else reading this were to know the gist of the above argument, perhaps they could add more detail and citations.

    • Any AI with a goal system at the top (one that can often be defined implicitly even if it isn’t explicitly called that) will have some content in it, stored as bits. If you were to randomly flip the bits around, complex structures would be destroyed, the very structures that define goal content.

      Well, my point is that if the goal is truly random, it’s possible that the vast majority of goals would be entirely benign (or would result in no action whatsoever). But I’m not sure how you could count them to show that this is the case.

      Let’s take, for example, an AI that sends asteroids towards Earth. The size, frequency, velocity, and impact site of the asteroids could vary tremendously. So an AI that sends small asteroids to a desert once a month would be a huge boon to humans, since it would amount to free, easily mined mineral resources, without any damage to humans or infrastructure. But even for this highly constrained example, I find it hard to come up with a way to show that there are more destructive minds than there are benign or helpful ones.

      There are also paperclip-maximizing type AIs that could still be managed. An AI that went around converting forests to ketchup could be tolerable if the resulting ketchup were then composted and turned back into forests, and the conversion rate were slow enough that the effect on the biosphere wasn’t too extreme. But again, the enormous continuum of possibilities makes it hard for me to quantify.

      Am I approaching this the wrong way?

  8. “But, replace 60 or 70 characters and you might start to get nonsense. Go ahead and try it if you have the time or a program, it would be fun to see when it becomes unrecognizable. ”

    Try this. (WARNING: I am legally retarded in twelve states.)

    from random import randint
    from copy import copy

    text = list('Insert your paragraph here.')

    alphabet = 'abcdefghijklmnopqrstuvwxyz'

    for i in range(10, 110, 10):
        new_text = copy(text)
        for _ in range(i):
            new_text[randint(0, len(new_text)-1)] = alphabet[randint(0, len(alphabet)-1)]
        print('with %s changes: ' % i)
        print(''.join(new_text), '\n')

    Use if you don’t have Python installed.

    From now on, I’m going to tell people that I’m a freelance contractor who writes software at the behest of the Singularity Institute.


  11. “Humanity has attributed human-like qualities to simple automatons since the time of the Greeks.”

    Regarding AGI, the point is that the artificial being will NOT be an automaton. We are contemplating artificial beings endowed with self-awareness, consciousness, and intelligence. It would be unethical if we interacted with such artificial beings in a non-human, inhuman, inhumane manner. Anthropomorphism must be applied to AGIs.

    Anthropomorphism is a mistake if we are considering the wind, Sun, Moon, or computers but when we are considering intelligent artificial beings then anthropomorphism must apply. What does it mean to be human? If, in the future, a human becomes completely digital via uploading into cyberspace, would such a human cease to be human? Will anthropology apply to humans that transform? If a human becomes a program on a computer will anthropology cease to apply? Being human is more than merely being flesh and bones. Humans who upload into cyberspace will continue to be human despite the loss of their biological body; despite existing on a computer foundation the digital human would be human.

    AGIs may not look like humans (and neither will transhumans), but they will possess the fundamental defining characteristic of humanity: they will possess intelligence.

    Perhaps you are aware of the following news report regarding ‘chicken boy’, a boy raised in non-anthropomorphic conditions who therefore exhibited chicken-like behavior such as roosting?

  12. Regarding AGI, the point is that the artificial being will NOT be an automaton. We are contemplating artificial beings endowed with self-awareness, consciousness, and intelligence.

    What makes you think that “self-awareness, consciousness, intelligence” are not just emergent properties of a very large automaton?
    If you disagree, then you don’t believe in uploading, because any computer system IS a very large automaton.

  13. You’re not going to like my response. That said, the idea for the creation of the “…hive mind…” and the “…Global Brain…” emanates from the imaginations of CARNAL HUMAN beings led by the spirit that controls the corrupt will of humanity. Those who think they can create their own future, who think they can create an anthropomorphic representation of the ‘beast’ within, will eventually realize that it won’t end well, and I suspect after reading some of these comments that many of them already realize it! Pandora’s box is open and there’s no way to close it! It won’t end well, period!
