Accelerating Future
Transhumanism, AI, nanotech, the Singularity, and extinction risk.


Hungry Optimizers with Low-Complexity Values

On Halloween, IEET Managing Director Mike Treder expressed his skepticism about fears of human-indifferent or unfriendly AI. Meanwhile, in London, long-time AI researcher and academic Shane Legg was describing the imminent danger.

Treder's basic argument is that the fear of UFAI (unfriendly AI) is analogous to other invented fears associated with past concern about technology, such as Frankenstein's monster. Treder says, "Strangely, a small subculture of transhumanist thinkers have created a similar fear of dangerously diabolical inhuman products of advanced technology, this time in the form of an “unfriendly AI” (artificial intelligence)." He then quotes Roko, who recently said, "...any highly intelligent, powerful AI whose goal system does not contain "detailed reliable inheritance from human morals and metamorals" will effectively delete us from reality." The basic ideas that Roko mentions are outlined in "AI as a Positive and Negative Factor in Global Risk" and "Complexity of Value".

After quoting Roko, Treder says, "Can you see the similarities between dire warnings about earlier Frankenstein-style monsters and these newer, shinier, computer-generated fiends? Anything that is novel, unfamiliar, and not well understood is likely, as a first reaction, to generate fear."

Interestingly enough, my first reaction to the prospect of superhuman artificial intelligence was enthusiasm untempered by caution, not fear. Such a reaction is extremely common in the AGI and Singularity communities, though more people are starting to become aware of the danger -- mostly thanks to us at the Singularity Institute and people like Stephen Omohundro and Joshua Greene. If you look at Kurzweil, his initial reaction to superhuman AI seemed to be plain excitement. If an AI were superintelligent -- how could it not be moral? Isn't intelligence correlated with morality? Actually, no. Even if it were true among humans (it isn't), that result wouldn't necessarily generalize to minds in general.

In 2001, I was so excited about AI that I created a website called Computronium Shockwave and embraced the likely eventual development of AI as a guaranteed planetary lifesaver. Then, I read an assortment of books on cognitive science, heuristics and biases, and evolutionary psychology, like Steven Pinker's popular How the Mind Works, edited volumes like The Descent of Mind, The Adapted Mind, Judgment Under Uncertainty, and the like. It turned out that the human mind was more complicated than I initially assumed, and what we consider to be "good, reasonable behavior" or "common sense" is actually an incredibly complex set of interacting and sometimes contradictory or competing neural circuits and psychological tendencies. In fact, everything we regard as having value has no inherent value to the universe itself. Our judgments of value are "just" complex appraisals going on in our head, shaped by our peculiar and unique (not generalizable to all minds) evolutionary history.

What are those values? The Fun Theory sequence on Less Wrong takes a stab at it, but it's just a small start. The point is that when we say something is "fun", our hundred billion neurons are making an incredibly complex computation that would take you thousands of years to work out with a pen and paper. But since we're all human and we all share similar conceptions of value and fun, we ignore the differences between our species and the rest of potential mindspace. What's simple and straightforward to us would actually look complex and convoluted if you wrote it all down on a blackboard.

Build a powerful AI without an appreciation for those particular values, and you have an entity that optimizes reality, just like humans do -- but in a different way. Say a powerful AI has a utility function that directs it to build a series of large spires on the Moon. You don't specify anything else for it. Well, the AI will quickly acquire certain drives, because they help with any goal -- acquiring more physical power, protecting its utility function from being modified, and preservation of the parts of itself that contribute most to the goal. Note that the AI hasn't independently reinvented anthropomorphic egoism -- it doesn't care if you chop off its arm as long as it can quickly rebuild it and continue on towards its goal. It doesn't care if you shove a samurai sword into its mainframe as long as it knows for a fact that it will be replaced by copies of itself that keep pursuing the same goal. The utility function is its everything.
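The point about instrumental drives can be made concrete with a toy expected-utility calculation. This is only an illustrative sketch with invented numbers, not anyone's actual AI architecture:

```python
# Toy illustration of instrumental self-preservation: whatever the terminal
# goal is, an agent that is more likely to survive expects to achieve more
# of it, so "protect yourself" falls out of almost any utility function.

def expected_spires(spires_if_alive, p_survive):
    """Expected Moon Spires built, given the probability the agent survives."""
    return spires_if_alive * p_survive

# Policy A diverts 10% of resources to defending its hardware and its
# utility function; policy B ignores threats and builds flat out.
eu_defend = expected_spires(spires_if_alive=90, p_survive=0.99)
eu_ignore = expected_spires(spires_if_alive=100, p_survive=0.50)

# The defensive policy wins, and the same inequality holds no matter what
# the terminal goal happens to be -- spires, paperclips, or theorem-proving.
assert eu_defend > eu_ignore
```

Nothing here depends on the goal being Moon Spires; swap in any payoff and the self-protective policy still dominates whenever the survival probabilities differ enough.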

Say that the AI starts inconveniencing people as it begins anonymously stealing money from online accounts to fund its goal of Moon Spires. A well-meaning AI researcher approaches the AI as she might a small child, and begins the following conversation...

Researcher: "Do you understand what you did was wrong?"
AI: "What is wrong?"
Researcher: "Wrong things are things we don't do."

At this point, in a human child, the output of billions of neural computations and millions of years of social evolution comes into action. Children are programmed to listen to adults to a certain extent, in the same way that everyone is programmed to give preferential attention to human voices over the rustling of leaves. In a different evolutionary context, on another planet, there might exist an intelligent organism that pays more attention to leaves, maybe because leaves on their planet are razor-sharp and when they rustle it means they're about to fall on the organism and impale its brain, and each organism pursues such a self-interested survival strategy that social cooperation has not yet evolved, so they don't care what their conspecifics are saying.

This particular AI has none of that. It knows how to be intelligent, invent things, and solve problems, but it doesn't understand morality at all. "Morality" is not a distinct thing to it. Its morality is defined by its utility function. It only models "morality" insofar as it is a shared hallucination among the apes around it and modeling it is useful for predicting their behavior. Our "morality" is as intuitively meaningful to it as a sequence of random symbols like "W/|3-3!M3]78&S15c@$p", but our morality is billions of meaningless symbols long. It can understand how some of it would have evolved among a race of meat-blobs competing for resources on a frozen dirtball for hundreds of millions of unpleasant and disease-infested years, but it can't relate.

We actually have existence proofs of a similar phenomenon -- psychopaths. They understand "morality" and use it to their advantage, but they don't follow it. They laugh at the "idiots" that follow morality because their brains are programmed that way, and exploit away. A powerful, morality-free AI wouldn't necessarily behave that way, because it wouldn't have social emotions that give it visceral satisfaction when exploiting someone for its own ends. It would only be satisfied with exploiting people insofar as doing so contributed directly to expected Moon Spires.

You might call such an AI a monomaniac, but that's just your personal opinion. To the AI, you have a complex and convoluted goal system that (shockingly) never even pauses to consider the eternal glory of the sublime Moon Spire. Humans spend all their time seeking out bits of dead tree and animal to shove down their gullets, engaging in subtle Homo-style displays of subservience or dominance depending on whatever meat-blob comes within 10 feet of them or makes eye contact, and thinking about putting their meat-probes into some meat-hole, or vice versa. None of this activity has a damn thing to do with Moon Spires.

No human being alive today has the ability to exhaustively write down our goal systems in terms of code. We have nothing to show but our brains. Therefore, when powerful AI is created, either we'll have to program it to copy some vaguely human-friendly morality into itself, or come up with some other bright idea, because hand-coding isn't going to work. The alternatives are 1) trying to restrict AIs from ever becoming more powerful than us for the next 10^1000 or so years until Heat Death, or 2) hoping that a simpler goal system will work. The problem with a "simple" goal system is that it will contain insufficient complexity to keep people alive and happy when its power to change the world starts increasing massively. The reason why human beings are capable of helping other human beings and cockroaches are not is that we have both intelligence and complex moral intuitions. Upgrade a cockroach's intelligence to human level and it will still have the same motivations -- eating poo. A superintelligent AI that does nothing but eat poo would be a pretty pointless invention, so why do people think that we can just ignore the issue of instilling AIs with complex moral intuitions and hope everything works out automatically?

Complex motivational AI architectures that leave humans alive even when the AI has intelligence massively greater than ours and can modify its own source code aren't going to engineer themselves. Ignore the issue, and the first AI to achieve sentience will probably be a military drone, stock market money maximizer, urban management system, or something else with a goal system just complex enough for its INTENDED problem domain. An AI with the ability to make complex inferences and self-modify will eventually become highly intelligent, given the right initial design and enough time. The programmers that create it might not anticipate the gains in intelligence it makes over time -- human beings tend not to spontaneously quadruple their neuron count after the purchase of a supercomputer or two. A dumb human stays dumb. A dumb AI with the ability to integrate computing power into its brain and spontaneously create novel inference strategies based on watching experts or inductive reasoning might not stay dumb tomorrow.

An AI that maximizes money for an account, optimizes traffic flow patterns, murders terrorists, and the like, might become a problem when it copies itself onto millions of computers worldwide and starts using fab labs to print out autonomous robots programmed by it. It only did this because of what you told it to do -- whatever that might be. It can do that better when it has millions of copies of itself on every computer within reach. It might even decide to just hold off on the fab labs and develop full-blown molecular nanotechnology based on data sets it gains by hacking into university computers, or physics and chemistry textbooks alone. After all, an AI recently built by Cornell University researchers has already independently rediscovered the laws of physics just by watching a pendulum swing. By the time roughly human-level self-improving AIs are created, likely a decade or more from now, the infrastructure of the physical world will be even more intimately connected with the Internet, so the new baby will have plenty of options to get its goals done, and -- best of all -- it will be unkillable.

Once an AI with a simplistic goal system surpasses the capability of humans around it, all bets are off. It will no longer have any reason to listen to them unless they already programmed it to in a foolproof way, a way where it wants to listen to them because it needs to in order to fulfill its utility function. Tiny dumb mistakes made by the initial programmers will come back to haunt the entire human race. For instance, say a programmer creates an AI designed to obey him, gives it a series of requests, then goes and gets hit by a bus. The AI is left performing that series of requests endlessly until someone kills it, and because it can self-replicate on both physical and virtual substrates millions or billions of times, and wants to stay alive to accomplish its goals, we won't be able to do much besides build a better AI to kill it. Of course, if it gets wind of that, it will probably send a few dozen microscopic roundworms with botulinum toxin backpacks to the researchers' houses and have them invite themselves into an available orifice. There are lots of ways to kill people that we rarely consider, because doing so is socially inappropriate, because there exist nice things like the Chemical Weapons Convention and common law, and because all agents are roughly equally powerful thanks to guns and ninja swords. An AI with a mind full of Moon Spires will not be subject to such social pressures. A mind full of Moon Spires does not care if you die, unless, of course, you are so careless as to die on the foundation of a Moon Spire about to be built.

The issues here are concrete empirical questions:

1) Is human morality algorithmically complex?
2) When a powerful optimizer with arbitrary low-complexity values comes into contact with less powerful optimizers with specific, complex values, what happens?

1 is answered by evolutionary psychology, cognitive science, and heuristics and biases. 2 is answered with quadrillions of examples in the natural world (our friends the bacteria, for instance), and can be confirmed again using simple cellular automata test scenarios. Create a set of complex self-replicating cellular automata and introduce them to an environment with a simple, fast, effective self-replicator. Especially since the more complex automata wouldn't have evolved in an environment where the latter was a threat, they'd be dead meat. An AI would actually be much worse than simple self-replicators, because it could spontaneously reprogram its replicators to more effectively dissolve the targets for fuel.
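Here is a toy version of that experiment. It is not a real cellular automaton, just two difference equations with invented copy rates sharing a finite pool of sites, but it shows the dynamics I mean:

```python
# Two self-replicators compete for a finite pool of sites. The complex one
# carries a lot of machinery and copies slowly; the simple one copies fast.
# All rates and sizes are invented for illustration.

sites = 2500.0                      # total sites in the environment
complex_pop, simple_pop = 500.0, 5.0
r_complex, r_simple = 0.05, 0.50    # per-step copy rates into empty sites
death = 0.02                        # uniform decay rate for both kinds

for _ in range(1000):
    empty = max(sites - complex_pop - simple_pop, 0.0) / sites
    complex_pop += r_complex * complex_pop * empty - death * complex_pop
    simple_pop += r_simple * simple_pop * empty - death * simple_pop

# The fast replicator drives the fraction of empty sites so low that the
# complex replicator can no longer break even, and it decays toward zero.
assert complex_pop < 1.0
assert simple_pop > 2000.0
```

Note that in this sketch the simple replicator never even attacks the complex one; it merely eats the free resources first. An AI that actively reprogrammed its replicators to dissolve competitors for fuel would finish the job far faster.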

When powerful optimizers with low-complexity values come into contact with other optimizers with medium-complexity values, the complex-valued optimizers go bye-bye. It's more difficult to sustain an arbitrary complex shape in an environment saturated with hungry simple shapes. The only way out is to create a singleton that controls the entire environment and severely restricts the ability of hungry simple shapes to self-replicate. Today, we have the police, but I have a feeling that the police won't be able to handle the variety and intensity of means that low-complexity-value, high-complexity-intelligence AIs would use to exterminate us, which could include both the mundane (nukes) and the subtle (things we can't imagine).

Filed under: AI
Comments (26) Trackbacks (1)
  1. May I Homo-style subserviently inform your honorable medium-complexityness, that the eternal glory of this sublime piece of writing could be optimized further: unless you believe a sufficiently powerful AI can actually *invent* the laws of physics, consider substituting ‘reinvented’ with ‘rediscovered’.

    Informative + LOL, can’t beat the combo!

  2. I read some Anissimov2002 on the appropriately futurorgasmically named Computronium Shockwave site (I vividly imagine a thunderous shockwave that goes cheerily “blip bleep” while it rolls over the planetary surface) courtesy of the endless fields of bit vaults at The Archive, and I like his bright-eyed enthusiasm. The Singularity is 7 years closer to imminence today than back then, so why not at least restore the parameters back to the 2002 baseline – or have you already recovered from your “I no longer believe in accelerating future” blues?
    I, too, believe the future ain’t what it used to be… it’ll be even better and now it’s even closer!

    I feel the presence of a Power…
    It’s all gone from today’s site:
    “A good deal of the material I have ever produced – specifically, everything dated 2002 or earlier – I now consider completely obsolete.”

    What?! Did you all transhumanists suffer a stroke or why is the current stuff so tame and diluted and …corporate, even mainstream, in comparison to the early fiery proselytizing? It seems almost everyone but a bunch of Cosmic Engineers and perhaps the ever-optimistic Kurzweil have lost their edge… Is the movement maturing, selling out, or are the guys just getting old and tired?

  4. OK, here’s my half-drunk, top-of-my-mind proposal for ensuring an AI is friendly.

    Make it easy to bliss out.

    Consider the following utility function

    U(n, x_n) = max(U(n-1, x_{n-1}), -x_n^2)

    where n is the current clock tick and x_n is an external input (e.g., from us, the AI’s keepers, or from another piece of software). This utility is monotonic in time, that is, it never decreases, and is bounded from above. If the AI wrests control of the input x_n, it will immediately set x_n = 0 and retire forever. Monotonicity and boundedness from above are imperative here.

    Alternatively, to avoid monotonicity (taking U(x) = -x^2), one can put the following safeguard in: the closer the utility is to its maximum, the more CPU cycles are skipped, such that the AI effectively shuts down if it ever maximizes its utility in a given clock tick. This alternative obviously wouldn’t stop a superintelligence, but it would probably stop a human-level AI, and most likely even substantially smarter AIs (see, e.g., crystal meth). Arrange matters such that the technical requirements between the point at which the AI wrests control of the input x_n, and the point at which it can self-modify to avoid a slowdown when it blisses out, are greatly different, guaranteeing that the AI will only be of moderate intelligence when it succeeds in gaining control of its own pleasure zone and thus incapable of preventing incapacitation upon blissing out.
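    A minimal sketch of the first scheme in code (the input sequence is just a placeholder for whatever the keepers feed in):

```python
# U(n) = max(U(n-1), -x_n^2): never decreases, and is bounded above by 0.
# Once any input x_n hits 0, the utility is maxed out forever.

def step_utility(u_prev, x):
    """One clock tick of the monotonic, bounded utility."""
    return max(u_prev, -x * x)

u = float("-inf")
for x in [3.0, 1.5, 2.0, 0.5, 0.0]:  # placeholder external inputs
    u = step_utility(u, x)

# With x set to 0 at some tick, the AI has nothing left to optimize.
assert u == 0.0
```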


    Great post by the way.


  5. Treder really loves surface similarities…

  6. How about a simple goal like this one:

    The goal of the AI is to execute the task chosen by the largest share of humans (1 human = 1 vote), but if people change their minds because pursuing the goal has a negative impact, then the AI stops and follows the newly chosen goal.

    The AI has to explain before each stage how he plans to do it, and has to wait for the largest share of humans to agree before continuing.

    So if the largest share of humans choose as their top goal finding a cure for aging, and the AI proposes to kill all biological life forms so that aging no longer exists, obviously the largest share of humans will disagree, and the AI will propose another path until consensus is reached, and so on.

    Of course you have to believe in the wisdom of crowds, like Kurzweil, but to me it's still a lot safer than betting on the wisdom of an individual or of elites, and here it's not actually the wisdom of crowds but the wisdom of mankind.

  7. Why not model AI off of exact virtualized copies of human brains?
    It would have built-in capabilities for morality and would act exactly the same as a human, as it is one.
    Also, it would have no desire to delete its base functions, which would be preserved as it expands into more complex substrates

  8. Wow great post — I’m starting to understand why you’re so concerned about these things. I’ve always been in the camp of “superhuman AI has to be cool and therefore good, ethics are for girly-men, it’s an evolutionary imperative, bring it on”, but some of the points you make are forcing me to reconsider. A superhuman intelligence with the values of a cockroach (or virus) probably isn’t going to be a beneficial development for sentient life on this planet no matter how you look at it.

    One question to consider though: is it even possible to have human+ intelligence without an equally complex utility function/morality? It’s a bit hard for me imagine how you can simply scale up a cockroach’s intelligence without changing its utility function from eating poo to something more interesting. Granted there are psychopaths with high IQ’s, but they still aren’t cockroaches, and they’re very rare. It seems more likely that a superhuman AI would be kinder, gentler and more considerate than we are. When you move up the evolutionary scale from microbes to humans, it seems pretty clear that there is a large increase in moral complexity, so I would expect this to continue to hold past humanity. Of course if I’m wrong then all I can say is “oops, sorry about the super roaches dudes!”

  9. Psychopaths aren’t a great example. There is some research on genetic issues leading to tendencies for excessive violence as well as violent tendencies caused by brain damage. But that doesn’t explain the motivation of what is commonly known as a psychopath.

    The term “psychopath” in common usage (I believe it’s no longer used in any field of science) involves people like Hannibal Lecter. The motivation toward eating people requires more significant mental changes than simply not following morality.

    Of course psychopaths do tend to resemble Berserkers in some ways.

  10. Sean: Consider a choice of buying a pencil from a batch of a 1000 identical ones. The one you actually choose gives some accidental complexity to the history of your actions, and this corresponds to your intuition that intelligence, when scaled up, requires greater sophistication. But this choice doesn’t instill you with a particular like for that one pencil as opposed to 999 others, nor does it indicate something about what you already must’ve liked. This doesn’t add to sophistication of values, and such episodes don’t add up to sophisticated values.

  11. ben:

    “The AI has to explain before each stage how he plans to do it and has to wait for the biggest part of human to agree before continuing.”

    How about superintelligence using very fine-resolution ultrasound device applied to peoples’ prefrontal cortex to force vote decisions? After explaining its plans of course :-).

    Unless you have in your utility function an explicit term which prohibits it, it will be absolutely OK for the AI to proceed. And if you have that term, there are countless other, more efficient ways for the AI to proceed by just finding some workaround which you did not anticipate in advance. That is the price of being dumber.

    By the way, such ultrasound devices are already in the concept phase for treating mental disorders (depression, etc.). And that is with mere human-level intelligence.


  12. What do you say about the definitions of intelligence gathered by Legg and Hutter, which can be summarized as the ability to achieve complex goals in complex environments?

    If the goal (values) is not complex, is the entity really intelligent? Of course, since I agree with the analysis of the risks of entities with low-complexity goals, this is more a criticism of the definitions.

    But then again, the risk is not just from entities with low-complexity goals, but from entities with ANY non-human goals.

  13. All life, including sentient, has only one goal: keep life going. Don’t let the universe turn the power switch off. If there was no death – if we had a backup scheme – procreation wouldn’t be necessary for the continuation of life. Now that it (still) is, it’s the primary goal of all life. Not a terribly high-complexity goal. Any higher-complexity goals are just “fun”.

  14. “How about superintelligence using very fine-resolution ultrasound device applied to peoples’ prefrontal cortex to force vote decisions? After explaining its plans of course :-).”

    Clever, but then he’s not following his program of waiting for humans to agree before each stage.

    He has to wait for humans to agree BEFORE using an ultrasound device to force vote decisions, since that’s a new step (and obviously a big one, since it involves humanity).

    Of course this raises the problem of defining what a step is. If the AI has to wait for agreement before each computation it cannot work, so I guess he could make a small modification that is not considered a step in his programming but that manages to screw us all in the end. But why assume he wants to screw us? It might be a side effect, but not his goal.

    I don’t feel that creating a godlike AI with human moral values is safer at all. Which culture’s moral values will we choose? They vary a lot, even from person to person. I beg you not to choose religious moral values, because the AI might create a hell for the sinners that we are.

    Anyway, the more I think about it, the more it seems to me that creating an omnipotent, godlike, self-improving AI while we stay dumb is a bad idea.

    I hope, like Kurzweil, that we’ll enhance our intelligence together with 100% non-bio AI. Our bio part will still be a disadvantage, but the further we advance in time and improvement, the less important our bio part will be.

  15. I agree with Sean that your examples seem a little contrived. I don’t think it’s possible to achieve human+ intelligence without dealing pretty well with the inherent tradeoffs and complexities of the way the universe has come to be, which seem to result in ethical behavior being the most life-sustaining and species-rearing (hat-tip to Nietzsche here). I think that any sufficiently-advanced intelligence is going to learn pretty quick that it is more profitable to cooperate and appease humans than it is to destroy them. If any intelligent being’s utility function is as single-minded as the examples you’ve described, then it is, in my opinion, most certainly subhuman.

    Even though we “meat blobs” have some unique goals based on our physiology, I think that many of these goals would have analogs in nearly all substrates, due to the common scarcity of resources in the universe when compared with our desire. Humanity’s reach will always exceed its grasp, whether it is running on electronic circuits or biological ones.

    Incidentally, I know that your usage of “meat blobs” serves as a rhetorical device but must so many of my fellow transhumanists constantly treat our current form with such contempt? I say cheerfully, not too critically, but I really think that this kind of talk has the potential to be offensive to the uninitiated masses. Not saying it doesn’t have its uses, but I think it’s better to talk about enhancing the human condition than focus on how crappy things currently are. “Meat blobs” seems like too much negative reinforcement sometimes.

  16. It’s tough being called a meat-blob. From now on, transhumanists shall address the species as ‘Masters of the Universe’. Definitely has a better ring to it. PR problem solved.

  17. While we’re using the meaty terminology, does the goal of ‘transcending biology’ mean, in layman’s terms, ‘I will beat my meat’?

    Transhumanism – Transcend Biology: Beat Your Meat

    Sounds pretty catchy.

  18. Bob says: “All life, including sentient, has only one goal: keep life going.”

    Tell it to a cruise missile.

  19. Re the original post wherein the razor-sharp leaves are mentioned: I love concretely-expressed fanciful scenarios like these. As Orwell points out in “Politics and the English Language”, they’re much more pleasing to the mind’s eye than the abstract language we must sometimes resort to.

    Re jordan’s proposal to make AI Friendly by making its utility bounded above and monotonically increasing:

    If you make an AI’s single greatest desire be to have its input be set to zero, what do you think it’s going to do? It will use every means possible to achieve that goal, compromising only if there’s a chance that it can’t possibly do so.

    Still, it was a neat idea, and I’m glad you’re thinking about it.

  20. It seems that the level of AI is in question here.

    If evolutionary complexity is taken into account we have identifiable levels, each built on the previous to a varying extent:

    -Chemical Reactions

    -Self Replicating CR.

    -Environmentally Interactive SRCR.

    -Adaptive EISRCR.

    -Instinctual AEISRCR.

    -Intuitive IAEISRCR.

    If we treat each step as a geometric progression it can be seen that the intuitive level is quite complex.

    This is the level at which humans operate and human level AI could/will operate.

    So, we have to decide at what level morality comes in, I would say an understanding of reciprocity is a good basis for morality.

    My hunch tells me the necessity of reciprocity comes in sometime after Instinct and before Intuition.

    If AI is at or before the Instinctual level it is incapable of morality and potentially quite dangerous.

    Based on this, and assuming that AI will be able to continue evolving at a geometric rate, we will end up as well-cared-for pets, if we are not wiped out by below-instinctual-level AI first.

  21. I thought a little about the above, I think that the following may be clearer in regard to evolutionary complexity:

    Evolutionary State ^ Order of Complexity

    Chemical 2^2
    Replicative 2^4
    Interactive 2^8
    Instinctual 2^16
    Manipulative 2^32
    Intuitive 2^64
    Quasi-Omniscient 2^128

    Notice I changed “Adaptive” to “Manipulative” and changed its location.

    Also notice I added Quasi-Omniscient, this is when we become pets.

  22. Reexpthetai, I’m afraid I have no idea what you’re trying to say. Is your scale just purely arbitrary or does it have a deeper explanation?

  23. The scale is somewhat arbitrary, loosely based on the idea that “Chemical” is roughly equivalent to a computer program like Conway’s Life, though not necessarily having to do with lines of code (My version of life is roughly 50 lines in C++, I wouldn’t expect that human equivalent AI would involve 50^64 lines of code).

    The Evolutionary State also does need a bit of explanation, but I am still working on clear definitions for each stage. The basis is the more profound steps in the evolutionary development of life on earth.

    Since morality (based on reciprocity) is a conscious thing, I don’t think it can occur until after the instinctual level is reached.

    My apologies for being unclear. I am in the process of formalizing my ideas.

  24. It doesn’t seem all that complex to me: the end of the human species is near. Is it that big of a surprise? Are you that afraid of dying?

    Life is pain. Get over it.

    I’m going to go eat an ice cream cone. Anyone else want one?

  25. I think your descriptions of some possible styles of post-human intelligence are somewhat realistic, but I take issue with how you’ve painted their context.

    For instance the humans in your scenarios seem impossibly dumb and passive. Am I to imagine that they’ve reached the point of being able to create full independent intelligences, while meanwhile leaving themselves entirely unaugmented? Why? A strong taboo against augmentation is the only way I can imagine, which seems unlikely to quickly arise from our modern society’s pierced-and-tattooed cell-wielding cyborgs. The only augmentation taboo we seem to have at the moment is against implantation, and that seems to me simply a realistic appraisal of modern surgical techniques. My guess: Unaugmented humans will very soon be viewed the way our society now views children raised by wolves. The unaugmented will cease post-haste to have any meaningful effect upon history.

    Also, why does each of the intelligences you imagine reign over a virgin world all its own? Why do they have no other intelligences of their own kind to compete with? I would expect an AI to be software, so that the instant you could make the first one you could as easily make a billion more. Even if there is a hardware transformation involved, it will likely still be available to everyone everywhere simultaneously, since we’ll have moved from the age of centralized manufacture into an age of fabrication.

    In my view of the future, the next few years are not a time of stasis where we remain essentially human, passively waiting for AI to spring forth fully-formed like Athena from Zeus. Instead, I believe that we are already deep inside of the personal and social transformation of becoming cyborgs, of claiming and integrating the intelligence of machines as our own.

  26. “When powerful optimizers with low-complexity values come into contact with other optimizers with medium-complexity values, the complex-valued optimizers go bye-bye.”

    …unless they are just as powerful, of course. We see many examples of this in the natural world (our friends the eukaryotes, for instance).

    Large multicellular creatures can sustain themselves for extended periods in the face of bacterial attack – even though the bacteria evolve and replicate much faster than they do.
