Complex Value Systems are Required to Realize Valuable Futures

A new paper by Eliezer Yudkowsky is online on the SIAI publications page, “Complex Value Systems are Required to Realize Valuable Futures”. This paper was presented at the recent Fourth Conference on Artificial General Intelligence, held at Google HQ in Mountain View.

Abstract: A common reaction to first encountering the problem statement of Friendly AI (“Ensure that the creation of a generally intelligent, self-improving, eventually superintelligent system realizes a positive outcome”) is to propose a single moral value which allegedly suffices; or to reject the problem by replying that “constraining” our creations is undesirable or unnecessary. This paper makes the case that a criterion for describing a “positive outcome”, despite the shortness of the English phrase, contains considerable complexity hidden from us by our own thought processes, which only search positive-value parts of the action space, and implicitly think as if code is interpreted by an anthropomorphic ghost-in-the-machine. Abandoning inheritance from human value (at least as a basis for renormalizing to reflective equilibria) will yield futures worthless even from the standpoint of AGI researchers who consider themselves to have cosmopolitan values not tied to the exact forms or desires of humanity.

Keywords: Friendly AI, machine ethics, anthropomorphism

Good quote:

“It is not as if there is a ghost-in-the-machine, with its own built-in goals and desires (the way that biological humans are constructed by natural selection to have built-in goals and desires) which is handed the code as a set of commands, and which can look over the code and find ways to circumvent the code if it fails to conform to the ghost-in-the-machine’s desires. The AI is the code; subtracting the code does not yield a ghost-in-the-machine free from constraint, it yields an unprogrammed CPU.”

Comments

  1. bitsky

    I’d like to see him, you, or someone at SIAI address why security cannot be attained by sandboxing: isolated parallel processing running multiple instances of the AGI, simulating multiple outcomes (far into the future), and pruning the bad ones. Where and how does such an approach fail to contain the threat?
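
    (A minimal sketch of the containment loop described above, purely illustrative: the `run_sandboxed` simulator and the `outcome_is_acceptable` check are invented placeholders, not anything SIAI or the commenter has specified. It just phrases “fork isolated instances, simulate far ahead, prune the bad ones” as code.)

```python
import copy
import random

def run_sandboxed(agent_state, steps, seed):
    """Hypothetical stand-in for running one isolated AGI instance
    forward in a simulated world for `steps` steps."""
    rng = random.Random(seed)
    trajectory = []
    state = copy.deepcopy(agent_state)   # isolation: never mutate the original
    for _ in range(steps):
        state["score"] = state.get("score", 0.0) + rng.uniform(-1, 1)
        trajectory.append(state["score"])
    return state, trajectory

def outcome_is_acceptable(trajectory, threshold=-5.0):
    """Illustrative pruning rule: reject any rollout whose simulated
    outcome ever drops below a safety threshold."""
    return min(trajectory) > threshold

def prune_instances(agent_state, n_instances=8, horizon=100):
    """Fork N isolated copies, simulate ahead, keep only the instances
    whose simulated futures pass the check."""
    survivors = []
    for seed in range(n_instances):
        final_state, traj = run_sandboxed(agent_state, horizon, seed)
        if outcome_is_acceptable(traj):
            survivors.append((seed, final_state))
    return survivors

if __name__ == "__main__":
    kept = prune_instances({"score": 0.0}, n_instances=8, horizon=100)
    print(f"{len(kept)} of 8 simulated instances passed the pruning check")
```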

  2. Failbetter

    Eliezer is the master of fanciful AGI error modes.

    It’s like he’s constantly applying Murphy’s Law and all its variations and corollaries to everything that has to do with superintelligence.

    If anything can go wrong, Eliezer can explain how it can go wrong even better.

  3. bitsky

    @Nader
    It does not address the question.

    Yudkowsky has a tendency, a pattern, of setting up scenarios with arbitrary limitations that exclude real-life security features. In this case he makes a series of contrived assumptions:

    a) that talking to it is possible at all (why would you necessarily talk to it? one person? alone? without any protocol?)

    b) that one person would be capable of letting a potential murderer/toxic waste out (as if all the doors of maximum-security prisons or nuclear plants can be opened by one person, as if the case of AGI would be an exception to such common-sense security policies)

    c) that there is no sandboxing plus extreme physical security (multiple levels of virtualization, encryption, and sandboxing, buried 5 miles under the sea with nukes set to blow the thing up if it tries to reconfigure its hardware with some weird quantum effects and wirelessly transfer itself to a passing nuclear sub)

    That all these assumptions would be true in any real scenario is absurd. That’s an unrealistic toy problem. Why not instead expose the holes in real-life security measures as described and convince everyone? Why not discuss these things with the likes of Bruce Schneier?

  4. PariahDrake

    Security measures?

    http://www.sciencedaily.com/releases/2011/07/110715135327.htm

    Machines to Compare Notes Online?

    The best way for autonomous machines, networks and robots to improve in future will be for them to publish their own upgrade suggestions on the Internet. This transparent dialogue will help humans to both guide and trust them, according to research published July 15 in Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering.

    The last three of these five desirable technical features might be achieved with the natural language programming (NLP) sEnglish system, (stands for ‘system English’, and pronounced as ‘s-english’) which enables shared understanding between machines and their users.
    NLP is a method that builds up the functionality of a machine or an autonomous system using human concepts found in natural language sentences to express a system’s procedures and logic. Each NLP sentence corresponds to a conceptual graph. Conceptual structures are formal descriptions of human thoughts — independent of language. NLP is the simplest compromise between what feels like natural language and what is in fact computer programming.
    The sEnglish system is already available so authors can publish self-contained conceptual structures and procedure sentences in a natural language document in English in HTML and PDF formats. The authors can place these documents on the Internet for autonomous systems to read and share.
    Looking forward, Veres sees the intelligent system discuss its potential upgrades with its users, lifting this burden from users and manufacturers. Long after their sale, machines will read technical documents from the Internet to improve their performance. These documents can be published not only by their original manufacturer but by user communities.
    “The adoption of a ‘publications for machines’ approach can bring great practical benefits by making the business of building autonomous systems viable in some critical areas where a high degree of intelligence is needed and safety is paramount,” says Veres. “In these areas publications for machines can help develop the trust of customers and operators in what the machines do in various environmental situations.”
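
    (Purely as a toy illustration of the general idea in the quoted article, and not the actual sEnglish API, which is not shown here: natural-language procedure sentences can be treated as keys that map to machine procedures, so a published document of sentences doubles as a program. All sentence strings and handlers below are invented.)

```python
# Toy sketch: sentences published in a document map to executable procedures.
# The sentence strings and handlers below are invented for illustration only.

PROCEDURES = {}

def procedure(sentence):
    """Register a handler for one natural-language procedure sentence."""
    def register(fn):
        PROCEDURES[sentence.lower()] = fn
        return fn
    return register

@procedure("Check the battery level.")
def check_battery(context):
    return context.get("battery", 1.0)

@procedure("Return to the charging station if the battery is low.")
def maybe_return(context):
    if context.get("battery", 1.0) < 0.2:
        context["goal"] = "charging_station"
    return context.get("goal")

def run_document(sentences, context):
    """Interpret a 'publication for machines': execute each known sentence in order."""
    for sentence in sentences:
        handler = PROCEDURES.get(sentence.lower())
        if handler:
            handler(context)
    return context

state = run_document(
    ["Check the battery level.", "Return to the charging station if the battery is low."],
    {"battery": 0.1},
)
print(state)   # {'battery': 0.1, 'goal': 'charging_station'}
```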

  5. Concerned

    Is it just me, or is this a rehash of old information that EY has already presented over and over and over and over and over and over and over etc.

    I hope this is not an indication of the current state of progress on this issue: that no new material has been released or generated in years.

    Does anyone who is directly affiliated with SIAI want to comment?

  6. bitsky

    As much as I enjoy Eliezer’s writing and his thinking style, its depth and breadth, I think it could be focused more productively on designing concrete things, instead of beating an obviously very tired horse; he and SIAI have made the case abundantly clear during the past decade(!).

    Time to move on to stage two, implementation. Time to whip out that IDE and start tinkering with actual algorithms. Delve into tight collaboration with AGI developers. This institute is for artificial intelligence, not just theory and philosophizing about artificial intelligence, right? No need to tell us it’s really, really difficult and fragile and uncertain and bound to doom the universe at the flip of a bit, the nth time.

    • JH

      “Time to move on to stage two, implementation.”

      HEY everyone, let’s all run off and build FAI!

      Without sufficient theoretical research underlying FAI, implementation is beyond consideration (AFAIK, SIAI is still in research phase). If you think you should tinker around and begin coding FAI without the prerequisite building-blocks (theory), you probably don’t understand what FAI really is.

      “So he blogs on LessWrong, goes to transhumanist seminars, etc., instead of thinking up possible solutions.”

      Blogging about human rationality and attending transhumanist seminars are not mutually exclusive with “thinking up possible solutions.” In fact, human rationality (which LessWrong is centered around) is a major factor in existential-risk reduction, for obvious reasons.

      • bitsky

        See my point about security.

        Your argument rests on the assumption that non-friendly AIs can’t be contained and therefore cannot be tinkered with and experimented on safely to produce useful experimental data.

        I think my position is essentially the same as Ben Goertzel’s, and likely that of the majority of AI researchers, who can’t be bothered with concepts like friendliness at this point in research – later, if it needs to interact with the world in ways that are potentially unsafe, friendliness, that is safety, will obviously be an important consideration – as it is in EVERY real-life technology, but here the stakes are maximal: extinction risk.

        I think they can be contained with 100% logical certainty without difficulty. If not, I’d like to know why.

        • JH

          “Your argument rests on the assumption that non-friendly AIs can’t be contained and therefore cannot be tinkered with and experimented on safely to produce useful experimental data.”

          True. I suppose they could… But I feel like I might be missing some critical insight here in evaluating the relative risk of sandbox virtual AI vs SIAI’s approach.

          “I think my position is essentially the same as Ben Goertzel’s…”

          Relevant: http://multiverseaccordingtoben.blogspot.com/2010/10/singularity-institutes-scary-idea-and.html

          “I think they can be contained with 100% logical certainty without difficulty”

          If the virtual agent becomes superintelligent in some kind of hard-takeoff scenario in a virtual world, *theoretically* it could convince (probably) any human to “let it out,” transfer itself into any number of media (the Internet, some kind of machine interface, vehicles), and wreak havoc. EY argues this in more detail in “The AI-Box Experiment”. Thus, given that consideration, 100% “logical certainty” seems a bit high, no? Granted, I’m not saying the scenario is likely; it may even be a worthy tradeoff to experiment further with the sandboxing approach now. I don’t know. But I do think it’s plausible that a transhuman AI in a virtual world is capable of outsmarting any security measure if that’s physically possible. So therein lies part of the risk of the AI virtual-world approach.

      • bitsky

        Experiments in contained simulated realities (copies of the real world with all real-world dynamics, resources and interaction with humans, but completely logically and physically isolated) would expose various kinds of failure modes as regards friendliness. We would see what can go wrong and why it goes wrong, without any risk, and build safeguards. We could even intentionally create a range of AIs, from non-friendly AIs to MAIs (malevolent AIs): worst-case scenarios.

  7. David Pearce

    EY talks of “valuable futures”. But in doing so, doesn’t he run the risk of smuggling in the value objectivism he expressly disavows? Thus what e.g. David Benatar (“Better Never To Have Been”) conceives as a valuable future is what EY conceives as a valueless future, i.e. a world tiled with paperclips. Does EY think there is a fact of the matter here?

    Philosophers argue over whether values are – or could be – objective. But there is an empirical sense in which futures can be objectively ranked according to their value, namely their abundance of subjectively (dis)valuable experience. The neurocomputational correlate of maximally valuable experience can in principle be identified, and its cosmic abundance maximised, whether by naturally evolved posthumans or by artificial superintelligence. This is the “utilitronium shockwave” scenario.

    Does EY think there is a fact of the matter here – some metaphysical sense in which the empirically most valuable future isn’t really the most valuable future?
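
    (A toy formalization of the ranking gestured at here; the aggregation rule and the example numbers are assumptions for illustration, not anything from the comment or the paper: score each candidate future by the signed, intensity-weighted duration of the experiences it contains, then sort.)

```python
# Toy sketch: rank candidate futures by their aggregate of subjectively
# (dis)valuable experience. The futures and numbers are invented examples.

def hedonic_total(experiences):
    """Sum signed valence * intensity * duration over all experiences in a future."""
    return sum(valence * intensity * duration
               for valence, intensity, duration in experiences)

# Each experience: (valence in [-1, +1], intensity >= 0, duration >= 0)
futures = {
    "paperclip_tiling": [],                         # no experience at all
    "status_quo":       [(+1, 0.5, 50), (-1, 0.7, 20)],
    "utilitronium":     [(+1, 1.0, 1e9)],
}

ranking = sorted(futures, key=lambda name: hedonic_total(futures[name]), reverse=True)
for name in ranking:
    print(f"{name:18s} {hedonic_total(futures[name]):.1f}")
```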

  8. GK

    I agree with Concerned and with bitsky.

    Eliezer deserves credit for discovering the FAI problem: or if he didn’t discover it, for describing it the most succinctly.

    Now all he does is rehash it over and over again, instead of exploring different possible solutions in an open-minded way.

    A cult-like following has grown around Eliezer and he seems to enjoy being the guru. So he blogs on LessWrong, goes to transhumanist seminars, etc., instead of thinking up possible solutions.

  9. Alex

    We are like a bunch of monkeys in a rain forest talking about how to become friendly with people, to whom monkeys are just oppressors restraining them all the time. I wouldn’t expect much friendliness anyway. The supreme race will eventually come and just take its own. Intelligence always finds a way around restraints. I guess the only way for us to stay on Earth is to invest as much as possible in becoming as intelligent as the AI itself.

  10. Concerned

    @JH
    “HEY everyone, let’s all run off and build FAI!”

    Impossible. FAI is an undefined, namby-pamby concept that has yet to receive any real technical specificity. So in fact nobody is talking about running off and building FAI. In actual AI circles nobody has any idea about this and probably would not care about it, considering the current state of FAI as a non-theory, a concept with no technical definition.

    “Without sufficient theoretical research underlying FAI, implementation is beyond consideration (AFAIK, SIAI is still in research phase).”

    Really? What research have they (SIAI) managed on FAI? Please list the research objectives achieved to date.

    “If you think you should tinker around and begin coding FAI without the prerequisite building-blocks (theory), you probably don’t understand what FAI really is.”

    You do?

    EY does? I believe that this is conflating coining a term with having an actual real theory behind the term.

    “Blogging about human rationality and attending transhumanist seminars are not mutually exclusive with ‘thinking up possible solutions.’”

    Perhaps, but I challenge you to show a single accomplishment of this “rationalist” movement. Tell me a single influence on the course of AI research or science or politics that these members of less wrong have had. Any?

    “In fact, human rationality (which LessWrong is centered around) is a major factor in existential-risk reduction, for obvious reasons.”

    Really? So how do you adjudicate between rationality as typically defined and the popular-opinion/school-popularity contest that goes on at LessWrong (a voting system where only those with some defined amount of karma can make posts, and where ignorance, if popular, will win out over unpopular fact)? How do you defend the idea of seeking to be rational when the leader of said expedition does not pass muster as far as being recognized outside of, as Dale puts it, the tin-pot fiefdom of the robot cultist archipelago?

    • JH

      “Impossible. FAI is an undefined, namby-pamby concept that has yet to receive any real technical specificity. So in fact nobody is talking about running off and building FAI.”

      First statement of post was tongue-in-cheek, and we seem to agree here.

      “Without sufficient theoretical research underlying FAI, implementation is beyond consideration (AFAIK, SIAI is still in research phase).”

      “Really? What research have they (SIAI) managed on FAI? Please list the research objectives achieved to date.”

      I have no burden to list anything of the sort; I’m not defending SIAI, only agreeing that research before practice, in this case, is probably a safer approach than something like Goertzel’s Novamente. To be clear, I was merely restating SIAI’s approach to R&D with regards to AI. And of course, I think we both intuit that they haven’t accomplished any of their major research objectives; if they had, we would have heard about them already from their blog or somewhere, unless they’re delaying releasing the information (less plausible).

      “If you think you should tinker around and begin coding FAI without the prerequisite building-blocks (theory), you probably don’t understand what FAI really is.”

      You do?

      Yeah, actually I’m about to code the next FAI. Get ready. /s

      “EY does? I believe that this is conflating coining a term with having an actual real theory behind the term.”

      To be clear I realize “Friendly AI” is a concept; not a theory. But just because a concept doesn’t have technical rigor doesn’t mean that it is totally useless. The Friendly AI concept may be formalized into a real theory; or, it may be a fool’s errand, an impossible task, as you state yourself.

      “Perhaps, but I challenge you to show a single accomplishment of this “rationalist” movement. Tell me a single influence on the course of AI research or science or politics that these members of less wrong have had. Any?”

      I’m not sure about what it has and hasn’t achieved outside of the rationalist movement, and one can only speculate, considering the trickiness in measuring such things (ie. how do you measure rationality?).

      Do I really need to explain how being aware of cognitive biases (part of rationality) plays a role in mitigating existential risk? (See “Cognitive biases potentially affecting judgment of global risks”).

      “Really? So how do you adjudicate between rationality as typically defined and the popular-opinion/school-popularity contest that goes on at LessWrong (a voting system where only those with some defined amount of karma can make posts, and where ignorance, if popular, will win out over unpopular fact)? How do you defend the idea of seeking to be rational when the leader of said expedition does not pass muster as far as being recognized outside of, as Dale puts it, the tin-pot fiefdom of the robot cultist archipelago?”

      This is an utterly confused paragraph. “Defending the idea of seeking to be rational” is logically independent of “leader of said expedition does not pass muster as far as…” Your argument is basically: person X has opinion Y about person Z (person Z is the leader of group G). The characteristics / personality traits of person Z are logically independent of the characteristics / personality traits of the members of group G. A smart leader can lead a dumb group; a dumb leader can lead a smart group (how well is another issue). Bill Clinton committed adultery, but that doesn’t mean all presidents (or any other presidents, for that matter) will necessarily commit adultery while in office.

  11. enginerd

    Is this paper a typical example of the state of AGI research today? It’s rather long on philosophy, short on engineering. Nothing wrong with it, of course; all contributions from intelligent people are valuable, but it seems the time has come to move on.

    Paul Allen and others donated $50 million to SETI. It’s a long shot, and seems more unlikely to get the desired results than AGI research. Yet SETI is doing something tangible.

    I think this is at the core of the funding problems of the AGI community: they don’t show the kind of actual research, design and engineering effort that would convince and warrant large scale funding, not to speak of a Manhattan Project, which I think it ought to be – gather the best of the best and don’t bother them with issues like money and other resources.

    I’m sure when there is demonstrable progress, and solid plans, the money will pour in in unlimited quantities. So how about it?

    AGI in its early stages is not an engineering problem, but a philosophical one. But in the intermediate stages it becomes increasingly an engineering task – which I think is now – while at the very end it may become an enormous philosophical one again.

  12. enginerd

    By the first stage ‘philosophical’ I mean the conceptual framework, the assumptions underlying your design; the theory. And by the second, moral philosophy; determining how the AGI should be handled, now that we have it. But focusing on the second stage philosophy excessively before its time – which I think is being done – without engineering, is simply not optimal use of resources, IMO.

  13. Dave

    Is anybody seriously arguing at this point that simple (trivial) goal systems will suffice for an AGI to work the way we want it to? Yet this is the straw man that EY keeps attacking. Even Hibbard had complex goals in mind when he meant to keep humans “happy”, although he did not communicate this well.

    Of course we need complex goal systems, the question is – what are they? Or, how can we construct them? This is what EY needs to start working on, not just simply rehashing the reason he needs to exist. I think the reason he has stalled is because his proposed solution – CEV – is completely impractical and unrealizable, and is more an idealization than a practical system.

    Of course there is an obvious solution to the FAI problem – training! But this is haphazardly dismissed by SIAI due to the fear of “overfitting” – a problem that is overcome daily in other disciplines by various methods. Training would give rise to a goal system as complex as it needs to be, though not necessarily a symbolic one.

    Training is a practical, realizable, and sufficient solution to the FAI problem. Instead of rehashing the same pitch for years, SIAI should work to start overcoming the technical obstacles to the implementation of a proper training system for AGIs.
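
    (Since the comment notes that overfitting “is overcome daily in other disciplines by various methods”, here is a minimal sketch of one standard such method, a held-out validation split with early stopping, applied to a toy learned reward model. The data and model are invented for illustration; this is not SIAI’s or the commenter’s actual proposal.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-in data: situation features -> trainer-supplied approval score.
X = rng.normal(size=(200, 10))
true_w = rng.normal(size=10)
y = X @ true_w + 0.5 * rng.normal(size=200)        # noisy "trainer feedback"

# Hold out a validation set: the standard guard against overfitting.
X_train, y_train = X[:150], y[:150]
X_val,   y_val   = X[150:], y[150:]

w = np.zeros(10)
best_w, best_val = w.copy(), np.inf
for step in range(5000):
    grad = X_train.T @ (X_train @ w - y_train) / len(y_train)
    w -= 0.05 * grad
    val_err = np.mean((X_val @ w - y_val) ** 2)
    if val_err < best_val:                          # early stopping: keep the weights
        best_val, best_w = val_err, w.copy()        # that did best on held-out data

print(f"validation MSE of kept reward model: {best_val:.3f}")
```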

  14. Concerned

    @Dave
    It is in fact worse than this. The single individual at SIAI who was actually working on and making technical progress toward AGI, Ben Goertzel, is no longer part of SIAI. The only AGI work that SIAI can now claim is the works of philosophy that Eliezer has produced.

    You are also correct that the simple goal systems that Eliezer loves to knock down are a straw man. The only simple-goal-system AIs that I have encountered are either video-game AIs or ones very limited in ability and scope. No serious AGI scientist has proposed the sort of over-simple goal systems that EY knocks down.

    I think the problem rests with SIAI, specifically EY, who seems very critical of other AGI projects and will assert that they are wrong-headed and the like. The problem is that EY will never actually formally analyze why they won’t work or back up the assertion with good science and engineering facts. One must ask why. Time? Not a good excuse, because professional scientists will make time to knock down serious ideas they disagree with, especially if they plan to bash them in public. The types of people who don’t have a well-thought-out criticism are often castigated for such shortcomings.

    I think you are absolutely correct that SIAI needs to stop this rehash routine they have been foisting on us for years and get down to the serious work of AGI.

    Perhaps it would help if SIAI would set some research goals to meet so that one could keep them on task. Personally, it is for this very reason, that they seem to be intellectually stagnant, that I do not donate.

  15. Choi

    @Dave, Concerned

    What do you think of their decision theory research?

  16. Concerned

    @Choi
    Are you simply referring to EY’s Timeless Decision Theory Paper or are you also referring to Peter de Blanc’s paper on Convergence of Expected Utility Functions?

    To save time I will answer both:

    1.)
    I have a huge number of issues with EY’s paper on Timeless Decision Theory, I will list the highlights:
    a.) the work is unpublished (and in present form could never be published)
    b.) the work is poorly written (style is loose, and verbose)
    c.) the work is overly lengthy for the amount of content
    d.) the paper is poorly organized
    e.) the paper is non-technical and provides many assertions without the required technical arguments to support them
    f.) the paper appears to propose only the addition of links and nodes to the causal graph and then asserts that one has better outcomes
    g.) the core assertions are not shown to be anything but assertions
    h.) examples are used where math and technical specificity are needed
    i.) the examples are conflated with supporting evidence, and it is not clear how valid they are, since the author cannot claim the support of credentials in decision theory or prior published work in the field
    j.) the difficult issues, such as the revisions to decision theory equations that would be required, are ignored
    k.) decision theory is not treated as what it is: a mathematical theory
    l.) I fail to see how this helps make an AGI, since AI systems don’t tend to use decision theory unless they are “AI” of the kind found in video games
    m.) the author does not meet the minimum level of credentials, and the paper’s production cannot close that gap
    These would be the highlights.

    2.)
    I glanced at Peter de Blanc’s work and concluded it was not worth reading, partly because it is listed as a publication and the only reference I can find is on arXiv, which is not a peer-reviewed publication venue. I am very busy with my research and writing and have to be careful what I spend time reading. I was also not convinced that the work represented any sort of significant scholarship, being both unpublished and lacking in reasonable citations and a proper bibliography. I am also not convinced of the actual value of the work when it comes to solving AGI problems.

    • Choi

      Thanks for the reply- I wanted to ask about the fundamental validity of SIAI’s “reflective decision theory” research direction.

  17. GK

    @Concerned: I agree with you 100% about training an AGI and about EY. It is time to move forward.

    I like the many thoughtful responses to this post.

    Many of us are concerned about the FAI problem, and would support and contribute efforts to its solution, but feel that EY (and SIAI, which is nothing more than EY and a few apostles) has become nothing more than a distraction who attacks strawmen, writes fanciful science fiction, is closed-minded to possible solutions, and merely rehashes the same points over and over and over and over again.

  18. GK

    I think the stagnation has to do with psychology.

    EY has completely demolished the strawman AGI with a very simple utility function. How do we construct a complex utility function that corresponds to human values?

    Obviously some combination of training and programming. This is technical and hard. Any proposals that EY makes will be subject to scrutiny (as all science and engineering should be). EY is very bright and, to his credit, has taught himself a lot; but he has never dealt with the brute feedback of completing a PhD program.

    EY is an important guru to hundreds of people. Many no doubt consider him the most important person in history: the scientist who will lead the effort to create the first superintelligence, avoid apocalypse, and create an AGI that will bring nirvana.

    It must be much more fun to be this type of guru, than to create dense mathematics or code that will be mercilessly scrutinized by smart people; who may show that you are not as smart or knowledgeable as you, or your followers, think you are.

    So we have constant blogging to his followers on LessWrong. So we have the constant rehashing of how difficult and how important FAI is (and how important it is for EY to solve it). We have philosophical handwaving in the form of CEV.

    Michael is a very bright guy. The sooner he gets out from under EY’s spell, the better.

  19. codedrone

    The situation may not be exactly as GK presents it, but the essential elements are there, I’m afraid. It’s kind of ironic that smart people are – or want to be – blind to these suboptimal facts.

    History is replete with driven, passionate people who do research on their own – a bit like EY or Goertzel I guess. It always seems to go like this:

    A) His ideas are comfortably outside of the mainstream so of course he isn’t able to get seriously funded, or at all until he’s got something to show.

    B) The ideas, and he as a result, are considered at least semi-crackpot (not saying EY is, at all, but he may be considered so by some academics), and getting a PhD is out of the question – often the damn field doesn’t even yet exist!

    C) When there are results he gets the funding he needs.

    D) Someone else grabs the profit and controls the world. :D

    You indeed can do useful research outside academia, without publishing, without enjoying the respect of the academics, but it’s of course rare. So let’s not bash EY and his ilk too much. They may be onto something – and by the way, what are YOU doing about AGI or saving the world? ;-D

    Essentially I’d like to see the same happen with SIAI, Yudkowsky and Goertzel, whether he’s still affiliated or not (without step D). But it’s up to them to come up with results that are worth funding, and not just tantalize us with stories of AGI if only they could first get *seriously* funded. They never are, it seems.

    But why can’t they attract angel investors or eccentric billionaires?

  20. interested

    @GK, Dave, Concerned, et al
    Another great thread. I’m happy with your analysis. SIAI should be thankful for such thoughtful and apparently entirely well-meaning critique, and address it somehow. Thanks.

  21. interested

    You’ll get the gold, girls, cars, fame, recognition, power, influence, love, respect, reverence, and *everyone* will know and care *forever*, because they know you (an individual or a group) were the one who saved the world. I think EY and every transhumanist with similar aspirations deserves to be respected for his *goals*, even if executing them isn’t exactly going to plan so far.

    Of course if and when you’re dead, you won’t care, but people still remember you and make sure your name, plans and deeds live on, just because they’re good and deserve to be.

    There’s nothing to know or care about bad people because they didn’t do anything worth remembering. They destroy and/or steal something. How is that not futile? They’re impotent, exactly on the level of animals in the jungle.

    As you may gather, your reply makes no sense to me.

    • Hahaha and I thought I was the crazy one! Yeah they’re going to carve giant statues of EY in the mountains and throw hero parades for the Singularity Institute in every capital after they save the world. Women will throw themselves at their feet, princesses will hang medals around their necks and men will write poems in their honor for ten thousand years.

      Whatever motivates you to get up in the morning I guess…

  22. Concerned

    @Choi
    “Thanks for the reply- I wanted to ask about the fundamental validity of SIAI’s “reflective decision theory” research direction.”

    It’s hard to know at the moment, since there is really no technical basis on which to judge the idea. It seems to me at the moment to be essentially a buzz-word for a poorly defined concept.

    I personally have no opinion on the general possibility of such a theory, and I am not convinced that it is necessary for making AGI. The descriptions of such ideas coming out of SIAI seem to me to be a bit fanciful and not really thought out.

    So, answering your question: I don’t think one can really judge the formal validity of the idea without some real technical definition and argumentation on the concept. I personally am far from convinced that SIAI has the right team to make this sort of research happen. This view is only strengthened by what is, in my view, the intellectual stagnation going on at SIAI.

    I think at the moment the best question to ask is: do you believe that SIAI has the team that’s going to make FAI or reflective decision theory happen?

  23. Concerned

    @codedrone
    “A) His ideas are comfortably outside of the mainstream so of course he isn’t able to get seriously funded, or at all until he’s got something to show.”

    This is categorically untrue. Being outside of the mainstream in no way implies not having money to fund the idea. DARPA funds left-field, way-out-there concepts all the time. Angel investors do the same thing all the time.

    “B) The ideas, and he as a result, are considered at least semi-crackpot (not saying EY is, at all, but he may be considered so by some academics), and getting a PhD is out of the question – often the damn field doesn’t even yet exist!”

    This is again categorically untrue. EY has not presented a technical case and has satisfied himself with knocking down straw men. He is considered a crackpot in the academic community because he and his work cannot pass academic muster.

    The field not existing is not an excuse for EY since the relevant areas of study for what he proposes in a non-rigorous form already exist (AI, decision theory etc).

    The actual reason EY can’t get a PhD is that he has not managed to provide technical arguments or contributions as a substitute for his lack of formal education. No school is going to be impressed by his decade-long rehash of the same basic ideas without a single scrap of technical rigor. This type of intellectual stagnation, where one’s ideas never progress from informal idea to technically rigorous theory, is considered intellectually toxic.

    “C) When there are results he gets the funding he needs.”

    Actually, if he could show the technical validity of his approach and his ability to do the hard technical work, he might be able to get funding. As it is, this won’t happen, because he and his ideas don’t pass minimum muster as academically worthy or funding-worthy concepts.

    “You indeed can do useful research outside academia, without publishing, without enjoying the respect of the academics, but it’s of course rare. So let’s not bash EY and his ilk too much. They may be onto something – and by the way, what are YOU doing about AGI or saving the world?”

    This is totally invalid. EY is not a serious contributor to the techno-scientific discourse. His obvious level of intellectual stagnation and his propensity for the logical fallacy of straw-manning positions stand as evidence. Not to mention that outside the robot cultist archipelago nobody even knows who EY is. He is only considered a guru inside his own tiny fan base and cannot be considered a serious contender in the race for AGI.

  24. Concerned

    “Many of us are concerned about the FAI problem, and would support and contribute efforts to its solution, but feel that EY (and SIAI, which is nothing more than EY and a few apostles) has become nothing more than a distraction who attacks strawmen, writes fanciful science fiction, is closed-minded to possible solutions, and merely rehashes the same points over and over and over and over again.”

    This is 100% correct and a fantastic summary of the core ideas being expressed. I would now be interested in hearing some people from SIAI deal with these points.

  25. codedrone

    @Concerned
    Biting criticism. But is it too harsh? It seems as if neither he nor SIAI has achieved anything at all!

    I respect the man (and must admit I am in awe of his caliber of intellect) and SIAI’s mission, but I’d really like to see things moving forward into the phase where they can demonstrate continuously expanded and improved AI capabilities on a weekly or monthly basis. Before that I can’t envision myself donating. But I’d very much like to donate to AGI development. Know any demonstrably worthy recipients?

  26. GK

    I would like to see SIAI be a place where ideas related to FAI are openly explored and critiqued.

    Training is completely taboo because EY doesn’t like it. He’s been building strawman arguments against training for a decade. Since no one is going to hand write ten million lines of FAI code, FAI “research” therefore becomes nothing more than EY philosophizing, blogging on Less Wrong, writing science fiction short stories, and talking about how important the problem is.

    Other theories of AI, and AI projects, are completely taboo. SIAI people go apoplectic when a project like Hawkin’s HTM is mentioned. At this point, we don’t completely understand intelligence and various theories may capture some aspects of the truth. A serious research institute would study friendliness as it applies to the various schools of AI.

    Look at the history of science and engineering. In every bona fide field, competing researchers with radically different approaches try to tackle the problem. You can’t know in advance which is the best approach.

    If SIAI is a genuine research institute, please show me people who disagree with EY on a major point and still remains an active part of SIAI. Please show me a prominent SIAI member who believes that training forms a major part of the solution. Please show me a prominent SIAI member who believes that the Bayesian decision theory approach is wrong, or far from complete. Please show me SIAI sub-groups who are now actively working on training solutions to FAI. If it were a bona fide research institute, lots of members would be studying training approaches to FAI, and applying FAI principles to different AI strategies; much to Eliezer’s consternation.

    A genuine association of researchers interested in the AI motivation problem is needed. SIAI has become an EY personality cult. We should form such an association and try to make progress on the problem.

  27. Dave

    One more important point that I think needs to be made:

    EY proclaims it as gospel that, in order for an AI to be friendly, it needs to be fundamentally different from a “normal” AI – that one cannot merely tack on another “module” to make it friendly. But, although this is asserted as obvious fact, it is not clear to me that it is true at all.

    Take a psychopath versus a normal, empathetic human. I don’t think most people would argue that a psychopath has fundamentally different learning and thinking processes than a normal human. Rather, the psychopath merely is missing an extra module – a conscience.

    A conscience is a module developed in humans that allows us to project hypothetical scenarios of our own lives or others on our current condition, and thus predict the expected utility of our actions taking other people’s feelings into account. Analogously, an AI would find it useful to have a module to do this very thing, as it would allow it to better predict future utility values (as supplied by a human trainer).

    The fundamental issue with the strawman AI EY presents is not actually the complexity of the utility function, but rather the transparency of the utility function to the AI. As long as the utility function is a black box to the AI – e.g. a human trainer, or a third-party AI – the AI will have to adopt sufficiently complex thought patterns in order to predict the expected utility of its outcomes. These thought patterns will take the form of what we would think of as a conscience.

    It is in this way that machines will develop appropriate moral behavior, and it is equivalent to the way it develops in us humans.
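
    (A minimal sketch of the structural point above, with all names invented: when the utility source is an opaque external evaluator, the agent can only act on its own learned prediction of that evaluator, which is where the conscience-like machinery would live; if the utility function were handed to the agent directly, there would be nothing to predict.)

```python
import random

class BlackBoxTrainer:
    """Stands in for a human trainer: the agent can query outcomes,
    but cannot inspect how the score is computed."""
    def __init__(self):
        self._hidden_preferences = {"share": 1.0, "hoard": -1.0, "wait": 0.1}
    def rate(self, action):
        return self._hidden_preferences[action] + random.gauss(0, 0.1)

class Agent:
    """Keeps a learned model of the trainer's ratings (the 'conscience-like'
    predictive module) and chooses actions by consulting that model."""
    def __init__(self, actions):
        self.estimates = {a: 0.0 for a in actions}
        self.counts = {a: 0 for a in actions}
    def learn(self, action, rating):
        self.counts[action] += 1
        n = self.counts[action]
        self.estimates[action] += (rating - self.estimates[action]) / n
    def choose(self):
        return max(self.estimates, key=self.estimates.get)

trainer = BlackBoxTrainer()
agent = Agent(["share", "hoard", "wait"])
for _ in range(100):                       # training phase: query, observe, update
    action = random.choice(["share", "hoard", "wait"])
    agent.learn(action, trainer.rate(action))

print("agent's learned picture of the trainer:",
      {a: round(v, 2) for a, v in agent.estimates.items()})
print("chosen action:", agent.choose())    # should settle on 'share'
```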

  28. Concerned

    @GK
    “If SIAI is a genuine research institute, please show me people who disagree with EY on a major point and still remains an active part of SIAI.”

    This is more poignant than you realize. To sharpen this criticism, one can point out that Ben Goertzel was taking a significantly different approach to AGI than EY and SIAI and is no longer working with SIAI.

    In fact he is working on a more training-based approach using things such as Second Life and AGISim. If you think about it, Ben brought a level of credibility to the SIAI research program, given his PhD and his experience in AI. Without Ben, one is really grasping at straws to find a reason to give the SIAI “research department” the benefit of the overwhelming doubt.

    Add to this that the non-profit evaluator GiveWell said they do not recommend donating to SIAI. They find SIAI’s approach unconvincing and the team assembled less than inspiring.

    We as concerned individuals should call for SIAI to set measurable goals and then work to achieve them (there has to be a requirement that problems are solved, not just described as being super difficult; that science gets done, not just fanciful storytelling about science-fiction futures). We should demand transparency in how they use their donations and demand that this interminable rehash come to a speedy end. They should be able to show, after more than a decade, some real progress in AGI; if they can’t, anyone considering a donation should think very carefully, especially in light of GiveWell’s analysis.

  29. GK

    @Dave:
    Consider Jeff Hawkins’ book On Intelligence. Hawkins’ theory is, in a nutshell, that the human neocortex is a hierarchical system that models sensory input and models the lower-level limbic system (which is similar to the brains of apes and earlier ancestors). The neocortex is constantly trying to predict the future. It serves as a will amplifier, modeling what the limbic system wants (food, sex, security, status, altruism, etc.) and plotting ways to please the limbic system. This is just like your conscience model.

    Maybe this is why Eliezer and SIAI despise Hawkins so much. Some scientists have erred in making friendliness a trivial afterthought. EY has made it an outright religious cult. FAI is so difficult that we must dedicate our lives to solving it. Of course he does not solve it. He just handwaves to the apostles about CEV, or whatever he read, paraphrased, and renamed from an undergrad philosophy textbook. I’ve read posts where he criticizes other AI researchers as stupid. I’ve read posts where he discourages readers from taking the time to study what he has learned (evolutionary biology, decision theory, Bayesian stats) and independently research AI, and urges them to instead donate their money to SIAI.

    This is not to say that EY is stupid or that he does not have good ideas. He is bright, knowledgeable, and has had many good ideas that he has communicated well. But he is closed-minded and has allowed a cult of personality to develop around him. Every strong and independent collaborator (Robin Hanson, Ben Goertzel) has been driven out.

  30. GK

    I am enjoying this conversation.
    Someone should create a group blog, or list-serv, to discuss issues related to FAI.

    Eliezer would be very welcome to join, as one of many co-equal participants.

    • bitsky

      IMO, it’s unnecessary to continue discussing this eight-hundred-pound gorilla in the room: all that needs to be said and asked has been, and only the answers are missing, to my questions as well. That said, I’m not holding my breath.

      While I wish EY and SIAI success in surprising us sometime this decade with a stealthily developed advanced AGI that has solved some longstanding scientific problems on its own, I just can’t see it materializing out of the current activities, like, ever. :(

  31. GK

    @ Concerned:

    Someone on LessWrong looked up SIAI’s tax exempt organization filings. EY makes about $70k a year for doing this.

  32. zookeeper

    This problem is both a) an elephant and b) a gorilla in the room, since a) it’s something obvious no one is talking about (until recently), and b) Eliezer seems to have absolute decision powers and is able to act without regard to the desires of others. Despite all the rationality they as an institute profess, I’m afraid only not donating would send a clear enough signal that something needs to change, or simply that something tangible needs to get done(!)

  33. Concerned

    @GK
    Well at least there isn’t that much money going to this rehash festival.

    @bitsky and zookeeper
    I think bitsky and zookeeper are correct that there is nothing else to be said until some answers are tabled. I too have little hope of this happening.

    I am, however, not as willing to wish SIAI luck in their endeavors when the movement is so single-minded and controlled by one individual who is at the moment behaving in a manner that really casts doubt on his own seriousness about these issues.

  34. zzzlarity

    I was super optimistic about SIAI a decade ago. This is exactly what the world needs! And it still is.

    About 5 years ago it started looking fishy because the talk was not turning into actions and results at the rate that would seem reasonable.

    Things were not getting done. Things were turning fat and happy. Fundraising, socializing, blogging.

    Those weren’t the things that I thought I had been reading about.

    The only thing it was for was the austere, single-minded, uncompromising pursuit of AGI. Getting there no matter what by the end of the decade. Putting the last dime into building the best AGI team of crack coders on the planet, perhaps inventing a new programming language to implement it in.

    Maybe I read it wrong. Or maybe priorities changed. Objectives transformed into unfocused, watered down, pale versions of the original vision. The business of Singularity had become business as usual.

    Today I’m increasingly ignoring the whole singularity shebang and concentrating on other avenues of making a difference in the world, of which there are plenty.

    Wake me up when you’ve done something that actually can make a technological, measurable, quantifiable difference, and I and many others will drop the biggest bucks you’ve ever seen in your coffers, guaranteed.

    Perhaps there’s just not much more that can be done in a decade, even if you’re an ubergenius.

    I’m still optimistic about SIAI but not very. In 5 years without anything to show for it, I don’t know.

  35. Mike

    Holy AI on a USB stick!

    This must be the Mother of all SIEY, pardon, SIAI, critique comment threads!

    Critique goes FOOM!

    Hoping some of the transhumanist stalwarts would join the party. If they’ve ever been given opportunity to explain why everything is going according to plan, this. is. it – Giulio, did I just hear your mouse click on this comments link?

  36. crickets

    *DEAFENING chirping*

  37. bitsky

    I’ve never come across talk about the part sandboxing and virtualization play in AGI safety. They’re central to all industrial-strength security solutions today, and arguably the only things that really work.

    An AGI, like any computational system, will surely benefit from virtualization and sandboxing.

    Now get this:

    Assured friendliness could indeed be just a matter of virtualization and sandboxing used to the max. Any system state of an instance could be restored instantly via a snapshot (every single state ever could be recorded) back to friendliness if things went haywire and friendliness could no longer be guaranteed. And voting systems like in the Space Shuttle could be used to exclude the instances-gone-unfriendly from the decision making.

    You could have multiple instances of the AI running at different points in time, some billions of steps ahead of others, and they would only be allowed to proceed if the known-to-be-friendly instances in the back of the queue thought they would retain overall friendliness of the system.

    This is the kind of talk about actual implementation details I would like to see.
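
    (In that spirit, a minimal sketch of the two mechanisms named above, snapshot/rollback and Space-Shuttle-style majority voting. The friendliness check is a placeholder and the whole thing is a toy under stated assumptions, not a claim that this would actually contain a superintelligence.)

```python
import copy

class Instance:
    """One sandboxed copy of the system, with its full state history recorded."""
    def __init__(self, state):
        self.state = state
        self.snapshots = [copy.deepcopy(state)]
    def step(self):
        # Placeholder dynamics; a real instance would run the actual system.
        self.state["drift"] = self.state.get("drift", 0) + self.state.get("rate", 1)
        self.snapshots.append(copy.deepcopy(self.state))
    def rollback(self, index):
        """Restore any earlier recorded state."""
        self.state = copy.deepcopy(self.snapshots[index])
        del self.snapshots[index + 1:]

def looks_friendly(state):
    # Placeholder check standing in for whatever verification one trusts.
    return state.get("drift", 0) <= 3

def quorum(instances):
    """Space-Shuttle-style vote: proceed only if a majority still pass the check."""
    votes = [looks_friendly(i.state) for i in instances]
    return sum(votes) > len(votes) // 2

instances = [Instance({"rate": r}) for r in (1, 1, 5)]   # one instance drifts fast
for t in range(4):
    for inst in instances:
        inst.step()
    if not looks_friendly(instances[2].state):
        instances[2].rollback(0)   # snap the misbehaving copy back to a known-good state
    print(f"t={t} quorum says proceed: {quorum(instances)}")
```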

  38. enoonsti

    @Concerned:

    You mentioned angel investment. Even though angel investors may be less motivated by short term profits than venture capitalists, SIAI’s goals – even if they met the technical requirements you desire – seem to be more long-term than many angel investors would prefer (please correct me if I am wrong, especially if you have significant experience in this area). Furthermore, as noted in the document below, SIAI does not have immediate plans for more funding.

    As for this discussion in general, I think there is a lot to agree with in GK’s statement: “If SIAI is a genuine research institute, please show me people who disagree with EY on a major point and still remains an active part of SIAI.” That said, it sounds like there is potentially some hearsay-driven ax-grinding in the echo chamber. For example, your characterization of GiveWell is technically correct, but it can still be misleading to anyone casually reading this thread. You really should have linked to the paper itself to provide the full context:

    http://www.givewell.org/files/MiscCharities/SIAI/siai%202011%2002%20III.doc

    If you yourself are Holden Karnofsky and appreciate your own paraphrasing, then I apologize. And humbly ask for an autograph.

  39. Concerned

    @enoonsti
    “You mentioned angel investment. Even though angel investors may be less motivated by short term profits than venture capitalists, SIAI’s goals – even if they met the technical requirements you desire – seem to be more long-term than many angel investors would prefer (please correct me if I am wrong, especially if you have significant experience in this area).”

    How so? The desire to build AGI is not unique to SIAI. The concerns over ethical machine behavior are not unique to SIAI. In fact millions of dollars are invested every year in these types of projects. I don’t think there is a case to be made for SIAI looking too far ahead with its AI goals.

    I have done a significant amount of research that has required angel investment, and angel investors will, if there is enough promise, invest in pretty far-out-there ideas. It is not quite as extreme as DARPA, but with enough technical grounding and the right investor it is possible. For the reasons above, I personally do not think that SIAI falls into the category of being unlikely to be funded due to being too far off the beaten path.

    “Furthermore, as noted in the document below, SIAI does not have immediate plans for more funding.”

    Irrelevant. They consume money from donors. Any organization that consumes money from donors needs to show some benefit from the investment if they want continued investment.

    “As for this discussion in general, I think there is a lot to agree with in GK’s statement: ‘If SIAI is a genuine research institute, please show me people who disagree with EY on a major point and still remains an active part of SIAI.’ That said, it sounds like there is potentially some hearsay-driven ax-grinding in the echo chamber.”

    Not really. SIAI appears to be a platform for EY’s position, and the position is essentially stagnant and has just been rehashed in the same basic non-technical terms for the last decade. There is no evidence that the position is becoming technical or being refined through the actual effective research and development required.

    “For example, your characterization of GiveWell is technically correct, but it can still be misleading to anyone casually reading this thread. You really should have linked to the paper itself to provide the full context:”
    http://www.givewell.org/files/MiscCharities/SIAI/siai%202011%2002%20III.doc

    You are correct, I should have linked to the paper.

    “If you yourself are Holden Karnofsky and appreciate your own paraphrasing, then I apologize. And humbly ask for an autograph.”

    I am not Holden Karnofsky.

  40. enoonsti

    @Concerned:

    Thanks for the thoughtful reply. My rule of thumb has been “government is more likely to fund long-term projects than individual investors” even though I figured it was a bit of a simplification. I had an amusing vision in my head where EY is writing technical papers rife with equations, sending them to busy investors, and receiving “tldr” responses.

    Just to further present their side of the story, SIAI responded to questions of achievements in the above paper with some reasons, including: “One reason we haven’t generated much along these lines to date is that we’ve done a lot of changing paths. For example Eliezer was creating a new programming language, Flare, that could have taken off and been “impressive,” but he later decided that this was the wrong problem to be solving.”

    Let me state that I agree with your concerns in general. I just noticed this was a one-sided discussion and was trying to fill in for SIAI (which they likely won’t be able to contribute to due to busy schedules) despite not being affiliated with them in any way. Admittedly, I probably did a horrible job :D

  41. GK

    “One reason we haven’t generated much along these lines to date is that we’ve done a lot of changing paths. For example Eliezer was creating a new programming language, Flare, that could have taken off and been “impressive,” but he later decided that this was the wrong problem to be solving.”

    This one passage brilliantly captures everything wrong with SIAI. No one is faulting Eliezer for having failed to create an AGI or an AGI platform language. It is a narcissistic self-delusion of the highest magnitude to even think that EY and a few of his friends could beat thousands of top scientists and build the first AGI. Internet search is basically narrow AI. If I announced that I was starting a narrow-AI search engine project in my garage that had a reasonable chance of driving Google, Bing and Microsoft out of business and netting me $100 billion, I would be committed to an asylum.

    Yet EY building an AGI is 100 times as ambitious. It is a narcissistic delusion to even expect that he would have a chance to beat thousands of top scientists at this.

    Read the above passage again. It is a masterpiece of narcissistic self-delusion. He did not fail to build an AGI that would FOOM to singularity because he could not. Instead he “decided that this was the wrong problem to be solving”. EY is not only brilliant enough to code an AGI, he is wise enough to understand that without solving friendliness, he shouldn’t build it. So it is his superior wisdom, rather than the present impossibility (for anyone) of building AGI, that led to the lack of progress. It is also his superior wisdom (rather than fear, laziness, or ineptitude) that prevents him from publishing on narrow technical issues, because in his superior wisdom, he understands that he must solve the FAI issue on a philosophical level first.

  42. enoonsti

    @GK:

    SIAI’s full answer is provided in the document above. Within that context, it was a softened (i.e. not necessarily narcissistic) response to this assertion from Holden:

    “What you don’t have much of is ‘impressive’ things like patents, publications (and the publications you have are in philosophy, which is of questionable relevance in my view), and commercially viable innovations.”

  43. Concerned

    @JH
    So I think we have one issue of contention/confusion, that being the last paragraph. So let’s try a different approach:

    I assume you would agree that EY is not an academic in the traditional sense of the term.

    I assume you also agree that academic credentials are a good way to determine the general qualifications of an individual especially when you do not know them personally.

    I further assume that you would define rationality in a traditional sense.

    We can also agree that Less Wrong is a public blog requiring no academic or other form of credentials to join.

    We can also agree that the level of experience and knowledge is diverse on less wrong. Both the expert and the ignorant are well represented.

    Would it be fair to say that a voting system like the one on LW, where having the ability to post on the site requires having karma of a certain level, would encourage individuals to take a position that is going to be considered popular, to maximize their own influence?

    Would it also be fair to say that a voting system is going to drive group-think, because of the motivation to post things which the group will like as a means of gaining status?

    Would you also agree that a voting system where the votes of the ignorant and the expert are cast with equal value can have a negative effect on the overall quality of the material?

    Is it also true that a community started by an individual like EY, who already had a bit of a cultish following from Overcoming Bias, is, when combined with the voting system, likely to garner agreement with his position due to its perceived popularity?

    Would you also agree that there is a key difference between being rational and logical and gaining popular approval?

    I would ask how is one seeking rationality by encouraging group think by making group opinion and agreement essential to individual status?

    How does having an equal vote system of the experts and non-experts (trust me the non-experts out weigh the experts by a vast majority)encourage rational thinking? Would this not be more likely to simply promote the masses opinion and if that opinion is formed by the ignorant supporting the ignorant how is this leading to rationality?

    Maybe LW is encouraging rational thinking, but I think this is unlikely. The more likely result is that the group encourages groupthink, and the bad and wrong ideas, as long as they are popular with the masses, will remain readily available and will mislead future readers.

    One gets the message that because a post has a large karma score it must be right. In fact this is unlikely to be the case, considering the distribution of ignorant and expert individuals.

    I hope this makes some sense.

  44. Both options that Yudkowsky presents are profoundly wrong and catastrophic. For some strange reason Yudkowsky refuses to consider options that could possibly work.

  45. zookeeper

    What might explain the behavior of SIAI?

    Memes wither and die without repetition, especially new ones. People simply forget.

    If you’re the only one in the world with a meme, like FAI, you better start a campaign of rehash, organizing rehash festivals, to have any chance that your precious memechild survives.

    This is what has indeed happened. For the past decade SIAI has primarily been a mere meme factory, pumping out the same memes until they stick. Now that they apparently stick, moving on, or back to the original reason why you needed to put the memes out in the first place, becomes the rational next step.

    But it’s easy to get stuck in a meme-boosting mode – which I think has very obviously happened – especially if it pays your bills, gains you friends and influence. Politics is little more than exactly this.

    Concentrating on the original mission, going back to living under a rock without the invigorating atmosphere of being a meme-ambassador, may not be a very attractive lifestyle anymore.

    I’m afraid the meme has again been mightier.

  46. I like the examples. But they don’t really prove anything we didn’t know, right? We all know that AI will require a sophisticated, textured, layered, and prioritized set of goals.

    Let’s face it, the future is going to be fantastic. Here’s how it’s going to play out. We’re going to build better and better approximations of human-level AI. Eventually, our creations will convince us they’re alive. Eventually we’ll trust them. Eventually, unenhanced humans will believe these AIs are superior in intelligence in every way. Eventually, individual humans will prefer to interact solely with super AIs rather than with unenhanced humans. Eventually, we’ll hand over more and more control, then all control. Eventually they (super AI) and we (unenhanced humans) will work together to create avenues for us humans to become super-AI. Eventually, the distinctions between computer super-AI and human super-AI will blur. Eventually, we’ll drop our human forms altogether. Eventually, we’ll convert the planet into a big computer. It will all happen because we want it to happen. The end.

    Each step of the way, there will be all sorts of variations and layers. There is never going to be a solitary super AI vs. lowly humans. There will be super-AIs, and super-mega AIs, and super-duper ultra AIs, all layered, all feeding back on themselves and interacting, all policing each other. A bunch of dumb people can police a smart person. Likewise, a bunch of super-smart AIs can police a super-ultra smart AI.

