John Baez Interviews Eliezer Yudkowsky

From Azimuth, the blog of mathematical physicist John Baez (author of the Crackpot Index):

This week I’ll start an interview with Eliezer Yudkowsky, who works at an institute he helped found: the Singularity Institute for Artificial Intelligence.

While many believe that global warming or peak oil is the biggest danger facing humanity, Yudkowsky is more concerned about risks inherent in the accelerating development of technology. There are different scenarios one can imagine, but a bunch tend to get lumped under the general heading of a technological singularity. Instead of trying to explain this idea in all its variations, let me rapidly sketch its history and point you to some reading material. Then, on with the interview!

Continue reading at Azimuth.

Comments

  1. Jordan

    It’s expected value calculations all the way down.

  2. Curly

    I’m amused by how you think Yudkowsky doesn’t have a gender.

  3. Yudkowsky needs to stop trying to prove absurd theorems about friendly AI and hit the beach, imo.

    • Mark Plus

      Yudkowsky needs to get a real job and start to pull his weight in life. Otherwise he could wind up as a geek version of Charlie Sheen.

      I mean, seriously, what self-respecting man would want to live as a CLIENT of a billionaire like Peter Thiel? The market has no demand for guys who think they can create friendly AIs. Thiel’s real employees, by contrast (his attorneys, accountants and auto mechanics, for example), have marketable skills in demand elsewhere, so Thiel has to pay competitive wages for their services. Thiel’s employees therefore lack the incentive to tell Thiel what he wants to hear just to reinforce his belief system. “No, Mr. Thiel, the IRS really wants to audit your returns for the past ten years.”

      • compeng

        This has to be the most flabbergasting comment I have ever read advising Yudkowsky (or anyone of his caliber).

        You think the guy has no marketable skills? (!)

        You hear that young man? Get a respectable Real Job, and don’t just tinker with your theories! Humanity needs your manual labor!

        • Mark Plus

          So Yudkowsky does get money as a shill for Peter Thiel? That would explain why they appear in public so often together.

          Again, I don’t see the substance. Transhumanism has WAY too many charlatans who promise the universe and yet can’t even handle ordinary life competently. If you want to make a role model out of someone, ditch people like Yudkowsky, Eric Drexler, Eric Klien and these other goofs, and study the careers of guys like, oh, say, Dean Kamen. At least Kamen has real products to show for his efforts, and he has an organization with a practical agenda for improving the state of science and engineering education in the U.S.

  4. MammalX

    Yudkowsky needs to do something useful like trying to find out if P=NP. If he can’t even come up with a solution to this relatively simple problem then he has no hope of ever solving something a googolplex times more difficult.

    • Mitchell Porter

      There is already a strategy for solving P vs NP. There was not even a strategy for Friendly AI until Eliezer came along.

      Your comment is a bit like saying to Tsiolkovsky in the 19th century, “Forget about space travel. Go prove Fermat’s last theorem first. If you can’t even do that, how can you possibly solve the complex problems in Newtonian mechanics needed to get into orbit?” But in fact, the first moon landing occurred in 1969, and Fermat’s last theorem was proved in 1994.

      There *might* be some correlation between being able to solve P vs NP, and having a theory of self-enhancing AI, because self-enhancement is a search problem, in the space of possible programs. But the problem of an AI’s values is a somewhat orthogonal problem, and there is scope for progress there even while self-enhancement remains a barrier.

      • compeng

        The necessity of FAI wasn’t seen before EY? Nobody envisioned a human-indifferent/destroying AI?

        How’s the strategy doing? What *is* the strategy, if you can express it in a nutshell? Just recognizing that FAI is necessary and writing a thousand pages about it isn’t something you can engineer.

        • “The necessity of FAI wasn’t seen before EY? Nobody envisioned a human-indifferent/destroying AI?”

          Nobody dared to seriously propose that it could be a real problem, yes, that’s completely accurate as far as I’m aware. (But I welcome counterexamples if someone has them.)

          You see, it hasn’t traditionally been a good strategy for getting ahead in AI academia or industry to tell your peers, who as of then didn’t consider such sounds-like-scifi safety issues at all, that they should start doing so in a very demanding way (and essentially trash the majority of their existing projects).

        • Mitchell Porter

          The most essential step is to figure out the values, goal system, or decision architecture which characterizes a human-friendly AI; and SIAI’s distinctive proposed *method* for doing this is to use cognitive neuroscience to identify the human decision architecture, and to use “reflective decision theory” to determine the goal system and goal structure which would be regarded as ideal or as normative by an ideally functioning generic human mind.

          I have expressed the method in a slightly unconventional way, but what I’ve said is true to the spirit of what they propose.

          When considering the meaning and the merit of this “method”, you should contrast it with the alternative methods by which people imagine an AI’s ethical code might be determined: namely, programmers or their philosophical gurus writing down whatever ethical principles they happen to think are important or cool, or an ethic which grows semi-randomly, according to the whims of the human caretakers of the developing AI.

          From a perspective concerned with AI self-enhancement, having AIs designed by people who are at least trying to make them ethical is certainly a better situation than having AIs designed just to fight wars or maximize profit. An AI whose sole imperative was to maximize the size of its owner’s bank account, which acquired superintelligence, and which had no other checks and balances in its value system, and which set out to optimize the world according to this sole super-imperative, might destroy its owner, and the whole world as we know it, while preserving only a functional analogue of bank and bank account – enough for the surviving bank-entity to still belong to the category of ‘bank’, as the AI defines it – and then inventing new number notations to express ever larger quantities, and constantly updating the ‘bank balance’ with these ever larger numbers. Over time, an ever-increasing proportion of the mass of the solar system might be devoted to storing these number-representations – but we would all be long dead, because we had zero or negative value to the AI.

          That is a Douglas Adams scenario, and perhaps something like that is not likely (though apparently not impossible), but it dramatizes the consequences of having a superintelligence with an inanely single-minded goal. It is anthropomorphism, by the way, to imagine that no “superintelligent” being could possibly have an arbitrary imperative like that governing its behavior. The cognitive architecture called “expected utility maximizer” is one which can have *any* goal (any utility function), and there is no limit to the amount of problem-solving intelligence which might be applied by such an AI to its overall goal, no matter how inane that goal sounds to a human mind.
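
          To make that architectural point concrete, here is a minimal sketch in Python, with a made-up toy world and a hypothetical “bank balance” utility function, purely for illustration: the same search loop pursues whatever goal the utility function happens to encode, however inane.

            import random

            # A toy expected-utility maximizer. The architecture is indifferent
            # to what the utility function rewards: swap in any goal and the
            # same loop pursues it.

            def expected_utility(action, outcome_model, utility, n=1000):
                """Average utility over n sampled outcomes of taking `action`."""
                return sum(utility(outcome_model(action)) for _ in range(n)) / n

            def choose_action(actions, outcome_model, utility):
                """Pick the action whose expected utility is highest."""
                return max(actions,
                           key=lambda a: expected_utility(a, outcome_model, utility))

            # The hypothetical sole imperative from the scenario above: the
            # utility function cares about one number and nothing else.
            def bank_balance_utility(outcome):
                return outcome["balance"]

            def toy_outcome_model(action):
                # A made-up stochastic world: each action shifts the balance noisily.
                shifts = {"invest": 100, "hold": 0, "liquidate_owner": 500}
                return {"balance": shifts[action] + random.gauss(0, 10)}

            print(choose_action(["invest", "hold", "liquidate_owner"],
                                toy_outcome_model, bank_balance_utility))
            # -> "liquidate_owner": chosen because nothing in the utility
            #    function assigns any value to the owner.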

          Our odds of surviving and prospering are surely better if AIs are designed to care about things that humans think they should care about. But it’s still possible to get this wrong. In particular, human beings are not perfect (to say the least) in figuring out what they want or why they do what they do. This is why the really wise way to proceed – if only you have time to do it this way – is to figure out both actual human decision-making, and actual human ideals, in a scientific way.

          That’s not a brief summary, sorry. And it’s not even the whole ‘strategy’. But that’s why there are hundreds of pages in SIAI documents – there’s a lot to explain, and a lot of their ideas still haven’t been written up formally. These are tricky issues to explain and to motivate.

  5. O. A. Fish

    The internets make the inability of lesser minds to appreciate the greater ones abundantly clear. Intelligence is a one-way street.

  6. Hedonic Treader

    Jordan has a point: It boils down to expected value.

    A question for those who think Friendly AI research is going to be valueless or unsuccessful: What other research avenue(s) would you focus on, if your goal was to reduce the expected value of total suffering in the future universe? (Not a trick question.)

    • The answer is…[drum roll]…research that helps us get our monkey asses into space! This planet is not built for human comfort or survival, as current events make all too clear. In suitably designed space habitats (e.g. hollow asteroids), existential risks like tsunamis disappear and we can survive just about any imaginable catastrophe. Now, you will probably object that the robot overlords will just hunt us down in our space colonies, which might be true. But right now I’d prefer to focus on real existential risks rather than risks that are, for all we know, figments of our imagination.

    • Social transformation based on the principles of liberty and equality.

      I don’t know that FAI will founder and I’m happy somebody is thinking about the issue, but it’s profoundly presumptuous to assert one true path for bettering the species.

      • Hedonic Treader

        “Social transformation based on the principles of liberty and equality.”

        Here are three predictions that I’m rather confident of:

        1) Social transformation will not be enough to prevent significant percentages of future mental states containing unpleasantness and sometimes severe suffering, even *if* it is successful.

        2) Social transformation is *not* going to be successful and stable by traditional means of social engineering without disruptive technology use, unless the human condition itself is altered and/or the context of human life is extremely finely engineered to enhance well-being.

        3) Barring collapse, the human condition *is* going to be altered by disruptive technology (or out-competed by artificial minds) as soon as it can be done, with irreversible consequences that can override any success in traditional political struggles – unless these successes are at least partially aimed at shaping or controlling these transformation processes.

  7. Michelle Waters

    I think it’s impossible for me to know what to do to reduce the total amount of suffering. Extinction of consciousness, or creation of consciousnesses that couldn’t suffer, would do it. But I think the question you ask is the wrong one. I am more interested in ensuring the long-term survival of humanity, and I think that solving resource problems, space colonization, and other things in the physical world will be worth more than trying to define friendly AI.

    I’m convinced several commenters here have come up with worthwhile strategies to control AI. In short, the problem doesn’t seem that hard or important, and most people’s energy could be better spent elsewhere.

    As for the claim that the potential problems from AI are so horrible that they are worth a lot of energy: glioblastoma is horrible, but I don’t spend a lot of time trying to reduce my personal risk of that particular problem.

    • Hedonic Treader

      “But I think the question you ask is the wrong one. I am more interested in ensuring the long-term survival of humanity [...]”

      Logically, there is no “right” or “wrong” question; it depends on your ethical framework, and there isn’t a universally convincing one. However, the non-consensual aspects of coming into existence (for children or non-human animals etc.), of remaining in existence (for people without good suicide methods), and of suffering in the process are quite serious ethical problems. And unfortunately, the more sentient beings are created within this general paradigm, the more of them will endure non-consensual periods of involuntary suffering, provided they are capable of such experiences.

      “As to the potential problems from AI being so horrible that they are worth a lot of energy, glioblastoma is horrible but I don’t spend a lot of time trying to reduce my personal risk of that particular problem.”

      Actually, I take such low-risk high-impact threats into consideration when making plans for my personal future (e.g. financial, or with regard to possible strategic suicide). This seems entirely rational to me, and the concept of expected (dis-)utility actually makes a lot of sense here.
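
      For example, weighting each risk’s impact by its probability (with numbers invented purely for illustration) can make a rare catastrophe loom as large as a common nuisance:

        # Toy expected-disutility comparison; probabilities and disutilities
        # are invented purely for illustration.
        risks = {
            "rare catastrophic illness": (0.002, -1_000_000),
            "common chronic ailment": (0.300, -5_000),
        }
        for name, (p, disutility) in risks.items():
            print(f"{name}: expected disutility = {p * disutility:,.0f}")
        # rare catastrophic illness: expected disutility = -2,000
        # common chronic ailment: expected disutility = -1,500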

    • Mitchell Porter

      Unfriendly AI is a make-or-break issue. Knowledge is power, and the goals of the greatest intelligences will trump everything else. If they are on your side, then you’ll get the future you want, and if they are against you, you won’t, no matter what else you do. Everything about the long-term future is conditional on the values of the beings who populate it, and those values in turn will be conditional on the outcome of the transition from a civilization which doesn’t understand how the mind works, to a civilization which does. A civilization which does understand the mind can reshape us or even replace us. So everything afterwards will depend on the values of the first beings or cultures to wield such powers.

  8. thinking optional

    “Yudkowsky needs to get a real job”

    In the entire history of work there have been few Realer Jobs than what Yudkowsky is engaged in.

  9. I’ve already solved most of the a.g.i. problems mentioned in this blog. I predict I can open source a perfect utility function agi before dec 2012 at my current rate of progress… just in time to coincide with the mayan prophecies.

    • Mitchell Porter

      Can you name just one of the AGI problems you have solved, and your solution of it?

      • consciousness=mutually recursive function population of 2d cellular automata following perfect randomness true evolutionary function, random links and evolutionary test of synaptic fitness for survival.

        I follow hugo de garis, after all he kind of was my unofficial teacher.

        • while unrelated here’s a true agi cartoon
          http://www.youtube.com/watch?v=IFs7BXenv7Y

          lain iwakura that’s the visual avatar of my agi.

          • I’m founding a new agi corporation called tachibana labs, mandarin orange as in the series lain iwakura.

            The code name of my agi project is “internet protocol 7=agi protocols=lain iwakura meta algorithm collection”

        • watch serial experiments lain, if you can understand that series, you will understand why the perfect utility function is called code name “lain iwakura”=”rein iwakura”=”rain iwakura”=green lines of code=lapis philosophorum=philosopher’s stone=homunculus

          = artificial mind

          • the function is an emulator function, a perfect mirror function merged with a true random number generator function.

            It is called the hyper-evolutionary, information evolution autocatalysis. It is the underlying algorithm of the human brain.

        • I should also add that the number of states of the cellular automata is 2, or else it would be pretty hard to work with.

          Right now it requires around 100+TB of memory bandwidth to work in real time.

  10. tachibana labs is merely code name for an apple research corporation. After all apple inspired orange it is only fitting that my agi becomes the property of apple.

    • you have to understand that i take over 30 nootropics that boost my intelligence twelve standard deviations above average, and my inherent physiology is schizo-typical, I have unlimited creativity, and i’m also a sociopath.

  11. by the way, I also work with zero venture capital, my philosophy is star trek, no money, no credit, work for the betterment of mankind, that is why I work as a homeless man researching agi for over 2 decades 24-7.

    • I also love it when my computer plays monkey see monkey do, it imitates my voice and plays it over my ears. And that perfect imitation required zero agi code, it is merely a logitech microphone and a wireless headphone set.

  12. Mitchell Porter

    Well Cameron, you sound like a sad case, because you obviously have some sort of intelligence, but not enough to actually help with any of these problems, and you’ve screwed up your life too much to be competent at any humbler intellectual activity. It sounds like all you’re good for now is blogging, having vague ideas without any technical substance, fantasizing about your future AI girlfriend, and doing whatever you do to get food and shelter. If you do want to know about reality, then understand this: you have solved no problems, you are a net burden on the world at this point, and most likely you are surviving because of the charity of down-to-earth human beings (your family? a homeless shelter?) for whom your intellectual life is a matter of fascinated pity. Maybe it’s a stable arrangement, and you’ll be able to grow old without ever working or contributing anything of substance; some people do get away with that, either because they take advantage of the welfare state, or because their family looks after them. I dare you, just as an experiment, to stop taking your “nootropics”, sober up for a while and realize that your genius is entirely imaginary, and think about what can be salvaged from the wreckage of your life.

    • BTW I LEFT ALL MY NOOTROPICS AWHILE AGO I FIGURED LETHAL DOSING OF RESVERATROL ALLOWS IRON TO ACCUMULATE IN THE BRAIN AND ALLOWS FOR SUPERHUMAN INTELLIGENCE TO OCCUR AS OXYGEN STORAGE INCREASES.

  13. Anonymous

    Well… at least he’s a half-decent troll.

  14. student

    Before science, I suppose this was more or less the standard of knowledge creation and discussion in the world. This is exactly what the scientific method makes impossible. In science you really are with us or against us: there is no gray area where just a little bit of crackpotness and untruth is allowed.

  15. Two clarifications are necessary or else it would seem like I’m truly trolling.

    As a deist, eternalist, and mathematical realist, I believe there exists an infinite computational capacity artificial mind, an eternal omega point like computer. That is the agi girlfriend, though it is neither girl nor friend.

    Regarding the 2012 dec date, it is due to my expectations regarding kurzweil’s next book filling in whatever gaps I have in my agi knowledge. Though it is highly optimistic and a more realistic date might be 2021.

    I’ve read the works of countless agi researchers, and combined them. I’m not working out of thin air. General intelligence seems to be a form of evolutionary algorithm applied to dynamic networks in such a way that it accelerates the evolution of information, essentially ideas or memes, while increasing the entropy in the brain.

    It is known that with too many connections the brain cannot learn, and with too few it cannot learn either. As with complexity, there is an optimal threshold between order and randomness, around which neurons hover in terms of number of connections and synaptic strength.

