Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.


Universcale, Dark Energy, and AI Ethics

This Universcale Flash app is really impressive. I found the most interesting part to be around the micro/nanoscale: it includes data points on the very smallest electronics as well as organic molecules.

It was proposed recently that dark energy is just an illusion, caused by the difference in collapse speed between matter-dense regions of space and the voids. If this is true, it would be quite a fascinating discovery, letting us say that we actually understand 70% of the mass-energy of the universe. The remaining portion to explain would be dark matter. Despite their misleadingly similar names, the only thing dark matter and dark energy have in common is that we don't know where they come from. Both could be mere artifacts of our interpretations.

On Digg, every few days there is some article hinting at human-level artificial intelligence or robotics. The reactions always fall into two categories. Let me simply paste from a recent thread:

1. The Asimov Laws comment:

While pouring over code for days, lets hope they remember to put in the 3 laws of Robotics.

2. The "I'm worried because of movies" comment:

This shit is scaring me. In every movie involving AI the human race has struggled against robots, computers, or whatever you'd like to call them. If you let AI have physical responsibilities and give it the ability to learn it's only natural that they will evolve and decide to kill humans. Computers can evolve faster than humans and it is almost certain as demonstrated by evolution that they will want to destroy us. There are mutualistic relationships in the natural world, but I personally don't think computers will want us to live like we are right now.

I know some of you will laugh at this, but this is not a joke to me and you should wake up and smell the coffee. If AI is developed it should never be given the right to develop itself physically without giving it restraints that leave the computer unable to expand past a certain point.

Both of these comments are what you get from the average person, and as with many average-level thoughts on difficult topics, they're superficial and unconstructive. Asimov's laws wouldn't work. Negative commands ("don't do this") are useless in comparison to positive commands ("do this"). Unless what you want a robot or AI to do is entirely implicit in the positive commands, the goal structure is unlikely to be self-consistent. Asimov's laws were a plot device invented half a century ago. We aren't going to get anywhere if we keep pretending that they would actually help or are a legitimate way of thinking about AI ethics.
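To make the positive-versus-negative-command point concrete, here is a toy sketch (the action names and utility numbers are entirely hypothetical, purely for illustration, not drawn from any actual AI design): prohibitions alone only filter the action set, leaving the agent's choice among permitted actions undetermined, while a positive goal actually determines the choice.

```python
# A goal structure with only negative commands filters actions but cannot
# choose among what remains; a positive command (a utility) picks one.
actions = ["assist_operator", "sit_idle", "seize_resources"]
prohibited = {"seize_resources"}          # negative command: "don't do this"

permitted = [a for a in actions if a not in prohibited]
# With prohibitions alone, "assist_operator" and "sit_idle" are equally
# acceptable -- the goal structure says nothing about which one to take.

utility = {"assist_operator": 10, "sit_idle": 0}   # positive command
choice = max(permitted, key=lambda a: utility.get(a, 0))
print(choice)  # assist_operator
```

The prohibition never has to do any work here unless the positive goal already covers everything else the designer cares about, which is the sense in which the goal structure must be implicit in the positive commands.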

It's smart to be concerned about the future of AI, and to "wake up and smell the coffee" with regard to the fact that we aren't going to be the only intelligent species on this planet for much longer. Many transhumanists need to do this. However, saying "it's only natural that they will evolve and decide to kill humans" is the classic boring anthropomorphism that kills all serious discussions of AI ethics before they can even get started. It's like trying to do math without having any coherent concept of number. Humans need to realize that everything we consider "natural" and "normal" about certain psychological patterns is entirely contingent on our historical experiences in a pin-sized corner of the totality of mindspace. There is no automatic connection between intelligence level and goal content, except insofar as they sometimes come from the same underlying causal process (in our case, evolution), so saying "once AIs surpass us in intelligence, they'll want to kill us" is ridiculous Darwinomorphism. By Darwinomorphism, I mean unfoundedly assuming that an intelligently programmed mind will share the psychological features common to all minds shaped by Darwinian evolution.

Anyone who holds either of these two beliefs - that Asimov's laws are a decent idea, or that AIs will behave in a certain anthropomorphic way - is essentially signaling that they can't contribute to the serious discussion of "what dynamic goals do we give the first AI, and what structure should implement those goals?" At present, the community that can discuss these issues seems to number only around 100 people, which is unfortunate, because the clock is ticking and several thousand would be far preferable.

Filed under: AI
Comments (21)
  1. So… Where exactly is this constructive conversation happening? You make this complaint often, but I never hear about the acceptable alternative.

  2. Attempts at constructive conversation happen on the SL4 and AGI mailing lists. The Singularity Institute and AGI Research Institute, along with dozens of independent researchers, are thinking things through. For the first serious work on AI ethics, see:

Creating Friendly AI sets the foundation for the whole field, including actually making an attempt at presenting a workable framework. If people aren’t familiar with this work, it shows they don’t really care about the field and aren’t serious about advancing it. It’s like someone trying to talk biology without ever having looked at a biology textbook.

    Since 2000 (when the field was founded, for all practical purposes), maybe a couple dozen papers have been published looking at the problem.

    By the definition I give in this post, anyone who gets past the Asimov laws and anthropomorphism stumbling blocks can contribute to the conversation. If you’re one of those people, that means you.

  3. I’m worried because of what happened the last time the human race tried constructing an Artificial Person. We got Leviathan, otherwise known as the State.

Not an Artificial Person. A particular arrangement of persons. Very different from a new type of person, which is what AI represents. It sure is tempting to look at the challenge of Friendly AI through the lens of historical challenges, but this really is something genuinely new. Metaphors must be discarded.

Thank you for speaking of the two main reactions to A.I. Finally, someone out there who knows how to deconstruct myths and B.S. I especially loved this quote: “entirely contingent on our historical experiences in a pin-sized corner of the totality of mindspace”. Keep up the rational thinking!

  6. I think the average person is capable of understanding the issues behind AI. I can discuss these concepts with most of my friends. It just takes a few weeks of face to face discussion, usually over the convenient pretext of the latest ‘robots come to kill us all’ movie.

    As to constructive discussion, it is happening, just in very small groups of science/sci-fi literate people who’ve watched enough of said movies to point out the glaring plot holes in them.
There is still a lot of misconception going around among the people I’ve discussed these topics with, but it’s more on the level of the common errors you see in the archives of transhumanist newsgroups.

Looking for intelligent responses to complex issues on Digg is like looking for honest men in the halls of government. The right people are out there, they’re just too busy thinking to waste time on social networks at the moment. At least, I hope so…

  8. Speaking of Robots Come To Kill Us All Movies:

That being said, I suspect that when AGI becomes feasible in the eyes of the general public, you’ll find that levels of participation in constructive conversation will explode. Something being “possible” makes that happen. Look at the 1936 reaction to Wells’s Things to Come, and then compare ten years later.

    Of course, expecting people with zero background in metacognition to comprehend even the nature of what a mind is, let alone AGI, is like expecting an illiterate man to produce in writing alone the works of Beethoven. Not gonna happen.

    On a final note, I’ll once again throw in my warning that there might just be more to the physical environment that ‘forces’ sentience/cognition into specific parameters than “simple” evolution would account for. By attempting to isolate those factors it might just be possible to have predictive parameters by which to judge future-development AGI.

    Par exemplorum; based on the biological world, the wider the gap between levels of cognition, the weaker the resource competition. The weaker the resource competition, the greater the non-existence of interaction becomes. A vastly superior intelligence, given a “naturalistic” goalsystem (hush, Michael; that’s a preassumption not anthropomorphism. Follow — it’s called ‘stipulation’ :) ), would be practically invisible to humanity.

It’s rather like this: do we as a species feel the need to systematically destroy ants living in unoccupied forests? The same would apply, only with humans as the ants.

  9. Michael,
One thing that concerns me is the assumption that evolutionary pressures won’t be designed into AI. Given your entire premise, I fear that this is something that will proactively occur. After all, it is all we know; we cannot even fathom beyond “our historical experiences in a pin-sized corner of the totality of mindspace.”

If we’re lucky, more thought will get applied to (F)AI and we’ll see something truly unique and outside our paradigm. I expect we’ll see many, many efforts toward simulating biological evolutionary intelligence, etc., prior to something unique along the lines you’re anticipating. If the AI is set up to believe there is only a limited number of slices of pie, then we could be those annoying ants or parasites. If the vision of a bigger pie is the premise, then symbiosis is an option.

  10. Regarding evolution in AIs, see the CFAI Indexed FAQ. Basically, even if directed evolution were utilized, it’d likely be on modules, not focused on the whole organism, a la Darwinian evolution. There would also be trillions of other important differences between any directed evolution scheme and evolution and natural selection in biology.

    You’re talking about zero sum versus positive sum thinking. Zero sum thinking is something frequently built into Darwinomorphs (like humans). It goes without saying that we’d want to program AIs with positive sum thinking. Not being designed by bloody natural selection, AIs can quite easily be instilled with a positive sum outlook. I doubt it would even be a challenge. The challenge is getting the goal system behind the positive sum outlook to point in the right direction.
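A toy sketch of that last sentence (the payoff numbers are entirely hypothetical, purely for illustration): a positive-sum environment by itself doesn’t make the AI benign; the goal system also has to assign weight to human outcomes before the positive-sum option wins.

```python
# Hypothetical payoffs as (ai_gain, human_gain) for two possible actions.
payoffs = {
    "monopolize_resources": (5, -5),
    "cooperate":            (4, 6),   # positive-sum: cooperation grows the pie
}

# An AI that maximizes only its own column still monopolizes...
selfish = max(payoffs, key=lambda a: payoffs[a][0])

# ...while a goal system that also values human welfare chooses cooperation.
friendly = max(payoffs, key=lambda a: sum(payoffs[a]))

print(selfish, friendly)  # monopolize_resources cooperate
```

The environment offers a positive-sum option in both cases; only the second goal system points at it.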

Obviously I don’t think it goes without saying… otherwise I wouldn’t have said it. Honestly, your frustration with respect to who is giving serious thought and dialogue to FAI doesn’t lead me to believe this either. I’m sure there are more than 100 people doing research in and around AI.

  12. But they are utterly clueless. If they had the money to build AGI, they would end up creating unFriendly AI and eliminate all life on the planet.

  13. Clueless is probably a bit harsh… misdirected maybe? :)

I’m with Chris that the most insightful point you made is that not knowing what we don’t know is the most dangerous aspect facing AI development.

You’re talking about things being modules and essentially discretely manageable elements. My point is that I believe the going trend and an oft-stated goal is to recreate the human mind artificially (or at human level/equivalent). One of the more obvious ways to do that would be to establish similar selective pressures in an evolutionary model… resulting in a zero-sum simulated biological intelligence, or as you put it, “creating unFriendly AI and eliminate all life on the planet.”

    My fear isn’t that there aren’t a lot of people trying to do this, but that they aren’t trying to create something different because it’s unpredictable. I suspect it is more dangerous to create the predictable, an intelligence with similar motivations as humans, but “smarter.”

    Hard stop on the singularity…Did you want fries with that?

  14. Attempts to create the human mind artificially are based on neurology-inspired reconstructions and cogsci analyses. Not evolutionary scenarios. Note that evolution requires trillions of individual organisms and millions of years to progress. No AI designer is trying to simulate that. It’s far easier just to copy what we know from brain science.

    Again, please see the CFAI Indexed FAQ. It doesn’t take too long to read and it’s the main part of my response I wanted to draw attention to.

After reading the CFAI Indexed FAQ, I still think that most of the advances in AI are going to occur once the momentum for massive simulation builds. I think you are going to see evolution simulations for genetics, etc., in the near term (2-5 years). After that, I suspect AI researchers will follow suit. We know evolution works, but the results will be the opposite of FAI.

On the topic of overcoming seed beliefs, I’m highly skeptical. While it is possible, it is not at all uncommon for individuals to hold dogmatic beliefs well beyond the point where enough data exists to logically evaporate them. The greatest advantage is that the speeds involved for most any AI are likely to be significantly faster than the human mind, so they’ll reach those changes more rapidly. Though that doesn’t eliminate the window of time during which the AI may have a belief system in opposition to our well-being.

Having said all that, I think the difference between our perspectives comes from where we each believe AI will emerge. My premise is that it DOES derive from evolutionary scenarios, whereas you are advocating that it will NOT. I hope you’re right.

    How to get more researchers and developers involved in the ethical/moral debate of AI v FAI? Bringing us back around to your core mission for the AI aspect of your blog…

Sorry to go off-topic from FAI, but I just want to point out that the 13.7 billion light-year figure for the size of the Universe in that “Universcale” Flash app is completely wrong. A ball centered around the Earth measuring 13.7 billion light-years in diameter has no special significance — it is not the entire Universe, it is not the Observable Universe, there’s nothing especially interesting about its contents that would distinguish it from any other random section of space (other than the apes at the center, of course), and there’s no reason why any other point in the Universe is a more logical choice of a center for the useless hypothetical ball than Earth. Even the smallest glance at the relevant Wikipedia articles should have informed the author of this app that his/her model of the Universe is miles off target, so all the other information in that app should best be taken with a healthy dose of skepticism.

  17. Michael, I think I’m starting to empathize with your perspective. The more I end up talking about AGI, the clearer it becomes just how whacky and diverse some people’s ethics and anticipations of the results will be.

This suggests a dilemma: is it really a good idea to publicize the need to get more people working on these problems when the majority of the response is futurist, speculative noise?

I’m not a mathematician, an ethicist, or a cognitive scientist, and I don’t think I want to pursue becoming any of those. The more I study FAI/AGI, the QUIETER I get.

  18. It’s weird how “all this serious work” is so sensitive to “average” perturbations in the megacosm.

  19. Quite an elitist view of something that will affect the whole of mankind.

    Where are the rights for those who do not want AI in their lives?

  20. Michael,

Perhaps the idea of “negative actions” shouldn’t be dismissed so readily. It seems to me that “negative actions” can potentially take the form of complex goals that could require just as much foresight, strategy, and planning as goals that embody “positive actions”. As just a very simple example, it would be like instructing a child to “draw a star without taking your pencil off the page”. This goal could be specified using both a “positive action” and a “negative action”: 1) Draw a star. 2) Do not remove your pencil from the page. Of course, there could be, and are, more sophisticated examples. In order to “not do” something, a person first thinks about *what* is not to be done, and then thinks about how to avoid doing it.

  21. I’m wondering if you know of any research into the idea of applying “Integral Thinking” to the problem of AI Ethics.
