Steve Wozniak a Singularitarian?

Wozniak:

Apple co-founder Steve Wozniak has seen so many stunning technological advances that he believes a day will come when computers and humans become virtually equal, with machines holding a slight advantage in intelligence.

Speaking at a business summit held on the Gold Coast on Friday, the one-time co-equal of Steve Jobs at Apple Computer told his Australian audience that the world is nearing the point where computer brains will equal the cerebral prowess of humans.

When that time comes, Wozniak said, humans will generally withdraw into a life where they are pampered by a system almost perfected by machines, which will serve their whims and effectively reduce the average man and woman to a human pet.

Widely regarded as one of the innovators of personal computing for his work on putting together Apple’s initial hardware offerings, Wozniak declared to his audience that “we’re already creating the superior beings, I think we lost the battle to the machines long ago.”

I always think of this guy when I go by Woz Way in San Jose.

So, if artificial intelligence can become smarter than humans, shouldn’t we be concerned about maximizing the probability of a positive outcome, instead of just saying that AI will definitely do X and that there’s nothing we can do about it, or engaging in some juvenile fantasy that we humans can directly control all AIs forever? (We can indirectly “control” AI by setting its initial conditions favorably; that is all we can do, and the alternative is to ignore the initial conditions.)

Comments

  1. Nicky

    We have a very, very, very long way to go.

    I believe it will be (and is) an extremely smooth transition. AI today is much more sophisticated than it was 50 years ago. It is not going to be sudden or violent. If it’s not going to end well, we won’t realize it until it’s too late…

    • For everyone who thinks this (that we have a “long, long way to go”), could you explicate a bit more? I’m curious about the arguments.

      • csci

        We’ve tried and tried and tried. All the best minds in the world, with unlimited funding, have tried. For decades. And they can’t. So how could just “some guy” or a small team of nobodies do it? There’s just no solution that we’ve found. Not even partial ones. We can’t handle that level of complexity. Our supercomputers are ridiculously underpowered, and even if we had ones 10^6 times faster (and we’re seriously running out of steam just approaching exascale), it wouldn’t help. It’s all about the theory. We’re nowhere near it happening.

        I’m not saying it can’t happen, it just seems, by all credible industry and academia indicators, so far-fetched it’s not even worth thinking about today. Transhumanists are WAY ahead of their time. By decades minimum, probably a century or two. If there’s inside info that we’re on the cusp of cracking general intelligence, please share.

      • csci

        There are lots of optimistic individuals and teams who say (not sure if they really think so) they’ll crack this or that problem if only they get funded. And they’re sure to mention investor-friendly timescales, not to speak of human-lifetime-friendly ones. At most it’s 10 years.

        Isn’t it curious that it’s always about the money? Never a lack of understanding, never a problem they think they couldn’t solve or might be unaware of. There’s unshakeable Confidence in one’s ability to do the Impossible – if only there were money and a few codemonkeys and a maid to take care of the household.

        In any complex research program, the number and difficulty of the problems increase the longer you work on it – and that’s when your research is successful. That’s progress. The guys who still think after 3-5 or 10 years that they’re just as likely to crack it haven’t actually made progress, because they haven’t even recognized the number and depth of the problems that successful research runs into. They seem to think “I’ve just got to solve this and this, and then the solution falls into my lap,” when actually the list of practical and theoretical problems goes on and on, far beyond anything they could even begin to imagine.

  2. There are a couple of answers. First, we humans will be turning into transhumans: more intelligent, with better bodies (or no need for bodies at all), so it won’t be computers pampering us so much as it will just be us being us.

    But even if it were them (computers) vs. us (not computers), AI will grow out of our human values; in a sense, it will start out thinking of itself as human the same way a dog might. And the major causes of human meanness and cruelty (resource deprivation and ignorance) essentially won’t exist for AI. Sure, AI might think it best to convert everything to computronium, but there’s a heck of a lot of matter out there that could be used without having to immediately wipe out humans and all other life. What reason would it have for not bringing us along for the ride? The relative cost of helping us up would be small, I think.

    This is all to say, I can’t imagine a scenario where an AI would actually be motivated to dispense with humanity.

    Sure, there could be reasons I’m incapable of imagining, but I can’t think of them. So it’s hard for me to worry about something I can’t even imagine. Consequently, I’m more or less fatalistically optimistic about the future: it’s gonna be great.

  3. Dave

    Yeah, for an AI it would be much easier to convert non-living matter into computronium (that is, to cooperate with humans rather than convert us) than not to, from both a survival and an energy perspective.

  4. Panda

    Dave–your argument reminds me of the liberal school of international relations championed in the pre-World War I era. These scholars said that it was too costly for economically integrated nations to wage war over resources; hence, no one would fight wars anymore. Unfortunately, they were wrong.

  5. PariahDrake

    There is some probability that some AI hobbyist somewhere in the world is capable of making a self-improving AI in their garage.

    With each passing year, hardware performance gets better, and general knowledge about AI increases (especially due to open source projects).

    We can only control those AI projects that are public (government or university funded) – in plain sight.

    How do you plan to stop everyone who is secretly trying to make their own super-AI? There must be thousands of them now, and I can only imagine that as interest, available computing power, and general knowledge about AI all increase, at some point someone, somewhere is bound to create one.

    I guess we could try and ban it, but does anyone seriously believe that would stop people?

    • JohnHunt

      > With each passing year, hardware performance gets better, and general knowledge about AI increases (especially due to open source projects).

      This is very true. I would suggest a couple of things. First, I want the first seed AI to be produced in a contained environment. I want an off-the-grid lab with an automatic kill switch that trips when the program gets beyond some low level of complexity or function – let’s say lizard level. If we can go from zero to lizard, then it can probably go super-human eventually; zero to lizard is the hard part. But lizards don’t understand that you are pointing a gun at them, so killing a lizard-level AI would be as easy as flipping a switch. Then you have the real evidence that you need in order to shock policy-makers around the world. You let scientists from any country come and see the AI function without explaining how you started it. At that point Friendly AI would get funded at billions of dollars a year. AI publications would be controlled so that the information isn’t spread so freely to individuals. A closed group of international scientists would start developing defenses against accelerating AI. Automatic disconnect switches would be placed on the Internet. And existential risks in general would get more attention and funding.

      > I guess we could try and ban it, but does anyone seriously believe that would stop people?

      No, banning it won’t work. But even a delay is beneficial. And there might be pretty effective ways of countering private development of seed AI. For example, there are a finite number of computer chip makers. Legally require those chip makers to include monitoring circuitry in their chips that checks for signs of accelerating complexity and function. Even smart programs are largely static in their complexity; a seed AI would reveal itself through a pattern of increasing complexity or function. Such a computer would be reported or suspended until it could be confirmed as safe and then released. (A toy software sketch of this kind of monitor follows at the end of this comment.)

      Instead of just saying that it is impossible to stop accelerating AI, let’s try to look for and work towards some solutions.
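
      Below is a minimal Python sketch of the monitor-and-kill-switch idea described above; it is not a real safeguard. The capability metric and the “lizard level” threshold are hypothetical placeholders, and measuring the actual “complexity or function” of a running program is itself an open problem.

      ```python
      # Toy sketch of the contained-lab monitoring idea. Assumptions: some
      # externally supplied proxy metric for the contained program's capability
      # exists; the process is killed if the metric crosses a hard limit or its
      # growth keeps accelerating (a crude stand-in for self-improvement).
      # All names and thresholds are hypothetical.
      import os
      import signal
      import time
      from typing import Callable


      def monitor(pid: int,
                  metric: Callable[[int], float],
                  hard_limit: float,
                  poll_seconds: float = 1.0) -> None:
          """Poll a capability proxy for the contained process; kill it if the
          proxy crosses the hard limit ("lizard level") or keeps accelerating."""
          history: list[float] = []
          while True:
              history.append(metric(pid))
              accelerating = (
                  len(history) >= 3
                  and history[-1] - history[-2] > history[-2] - history[-3]
              )
              if history[-1] >= hard_limit or accelerating:
                  os.kill(pid, signal.SIGKILL)  # the automatic "kill switch"
                  return
              time.sleep(poll_seconds)
      ```

      A hardware version of roughly the same check is what the chip-monitoring suggestion above would amount to: flag or suspend any machine whose measured complexity keeps accelerating.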

  6. Dave

    Panda: Yeah, but if you think about it, they were actually right. The Germans acted irrationally and ended up losing.

  7. PariahDrake: It is hard enough for the USA to monitor nuclear programs around the world. AGI, possibly needing no resources other than smart people and a large cluster, may be very difficult to monitor – at least right now. What would it take to monitor AGI hobbyists, rogue underground AGI attempts, or clandestine corporate attempts at AGI?
    Perhaps a Nanny AGI?

    • PariahDrake

      I get what you’re saying.

      I just have this hunch that encrypting my BCI may be the most practical safety measure against rogue SAIs created by accident or intentionally.

  8. John Pombrio

    I love this kind of talk. The funny thing is that this “evolution of AI” is pervasive even now, going on around us without our noticing. We are so limited in “our” view of the world that we only see changes when the AI is like us.

    Computers are nothing like humans in the way they are wired, built, compute, perceive, or act. We literally have to force them to act as a poor excuse for a human. Left to itself, an AI would have very little in common with people and absolutely no reason to adapt to us, let alone “pamper” us. Common AIs such as search engines, stock pickers, shelf stackers, and car stoppers do things in ways that make no intuitive sense to humans but work well.

    Whenever we imagine the future, we always come back to the village life in which we live, with its gossip, jokes, stories, food, drink, and sex. Budding AIs have none of this in common with us and will go their own way, ignoring us and leaving us behind. We should worry more about indifference than about benevolence or hatred.

  9. JohnHunt

    > or engaging in some juvenile fantasy that we humans can directly control all AIs forever?

    It is certainly a fantasy that we can create an accelerating AI and then control its behavior forever.

    But let me make this analogy. Imagine that in the late 1960s something went terribly wrong and the US and USSR did a full nuclear exchange targeting each other’s populations. Say each country lost two-thirds of its population; nobody wins the war and both lose mightily.

    Wouldn’t you imagine that the entire world, including the combatants, might disavow nuclear technology forever? Couldn’t you imagine everyone moving away from nuclear power, and even international regimes able to inspect and sanction any country secretly working on any type of nuclear program? Since nuclear weapons have a high technological threshold, persistent diligence could keep people from getting anywhere near a nuclear weapon.

    Now, you’re probably thinking that there are differences. But we allow AI research, nuclear development, and not-very-strong inspections precisely because we haven’t had either a nuclear or an AI partial-doomsday scenario happen yet.

    So again, I think that a contained demonstration of an existential threat is perhaps the best way to achieve the political will necessary to control uncontrolled AI development AND buy us some time to develop friendly AI and an off-Earth colony.

  10. MED

    … but there was no nuclear exchange. The Russkies weren’t THAT crazy after all, even though they were strongly ideologically, that is memetically, motivated and driven. Humanity was lucky in that the Russians aren’t exactly what you’d call stupid. Damn, they’ve got some smart people!

    The only component that was missing from their memeplex: the angry daddy in the sky. Add that and you WILL get ANY kind of exchange. Bio, chem, nuke, you name it, it’s on the list of the angrydaddyists. And it’s coming. We will get a demonstration.

    We’ve got many already, but apparently they haven’t been large enough to achieve the political will to target the underlying cause, the memes themselves.

    Going to war against nation states or rogue groups is like fighting the flu by killing or capturing a few who carry it, while the rest of the population keeps on sneezing and spreading the contagion.

    Collectively waking up is probably the hardest thing for humans. Look what the Germans and the Japanese had to go through in World War 2 to come to their senses.

  11. joeb

    Smart as in transhumanists? Russians – (coughnissimov) – are indeed very smart people, but they’ve got some bad cultural habits; I guess you could call them bad memes. (Well, don’t we all?)

    Are there many transhumanists in Russia?
    What countries have the most? (The U.S. probably?)
    What countries have the largest (if tiny) percentage of their population? (Some Nordics, perhaps?)
    How many transhumanists are there estimated to be worldwide?
    (10,000? 1M?)
    Do such statistics exist?

  12. JohnHunt

    MED > Going to war against nation states or rogue groups is like fighting the flu by killing or capturing a few who carry it, while the rest of the population keeps on sneezing and spreading the contagion.

    The technological threshold for nuclear weapons is very high because enriching fissile material is physically very challenging, which is why it can be largely controlled at the nation-state level. Rogue groups cannot produce fissile material themselves, so they would have to steal it from nation-states.

    On the one hand, seed AI may require only an ordinary computer and someone figuring out the right seed code to get an evolving entity selected for slightly greater intelligence. But although a lone-wolf individual may today have a computer powerful enough to achieve worm-level intelligence, figuring out what the initial seed AI code should be is apparently not that easy because, AFAIK, no one has grown a worm-level intelligence from scratch. (A toy sketch of such an evolutionary loop follows at the end of this comment.)

    My point is that we need to be taking steps to see that the day a seed AI with infinite potential arrives is either prevented or delayed long enough that further control mechanisms can be put in place, friendly AI is developed, we have an off-Earth colony (for other risks), etc. That’s why I promote the idea of a contained demonstration.
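
    As a purely illustrative sketch of “an evolving entity selected for slightly greater intelligence,” here is a bare-bones evolutionary loop in Python. The bitstring genome and the count-the-ones fitness function are toy stand-ins chosen only so the code runs; the comment’s point is precisely that nobody knows what representation or fitness criterion would actually select for general intelligence.

    ```python
    # Bare-bones evolutionary loop: mutate a population, keep the fitter half.
    # The "fitness" here (number of 1-bits) is a toy stand-in; finding a fitness
    # function and representation that would actually yield intelligence is the
    # unsolved "seed code" problem referred to above.
    import random


    def evolve(genome_bits: int = 64,
               population: int = 100,
               generations: int = 200,
               mutation_rate: float = 0.01) -> list[int]:
        fitness = sum  # toy stand-in for "slightly greater intelligence"
        pop = [[random.randint(0, 1) for _ in range(genome_bits)]
               for _ in range(population)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[: population // 2]  # selection: keep the fitter half
            children = [
                [bit ^ (random.random() < mutation_rate) for bit in parent]
                for parent in survivors         # mutation: flip bits at random
            ]
            pop = survivors + children
        return max(pop, key=fitness)            # best "entity" found so far


    if __name__ == "__main__":
        best = evolve()
        print(sum(best), "of 64 bits set after 200 generations")
    ```

    Loops like this climb a trivial fitness landscape in seconds on any laptop, which underlines the comment’s point: hardware is not the bottleneck; the seed code and selection criterion are.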

  13. Sean S

    Don’t computers still have the intelligence, creativity, and consciousness level of a doorknob, roughly speaking? I find it odd that everyone is so confident that superhuman AIs are coming soon when we can’t even build insect-level intelligences after 60+ years. I suspect there’s a huge “cheesecake fallacy” at work here, though I can’t quite put my finger on it. My guess is there really is some kind of “secret sauce” or complexity to human consciousness that isn’t remotely similar to any machine in current or theoretical existence. There’s probably something going on at the quantum level, where consciousness and matter interact, that can’t be replicated by a classical computer. Using classical computers to simulate consciousness will probably turn out to be a sophisticated form of Cargo Cultism.

  14. chzero

    Regarding Cargo Cultism, why is it so difficult to think that brains are strictly mechanical systems, like everything else made of matter, and so can be simulated? Why do people exclude the brain even while they readily accept that physics and math can (ultimately) explain everything else observable? This mind-exceptionalism makes no sense to me, and I still haven’t heard an argument for it that doesn’t introduce an unnecessary rule, violating Occam’s Razor.

  15. Michelle Waters

    The trouble with theories of consciousness that assume it’s complex is that it doesn’t seem complex.

    One question I have for AI skeptics is: do you just believe a machine cannot be conscious, or do you believe that no machine could figure out how to achieve its programmed goals as well as a human can figure out how to achieve his or her goals? The latter may be of more practical importance. Even if you don’t believe that humans could be destroyed by self-improving AI, good non-conscious AI might still make many human jobs obsolete.

  16. The human brain is nature’s finest creation; it can’t be duplicated. Whenever we try to stand against nature, big problems occur. We can make ever smarter machines, but they could be used by dirtier human minds to cause disasters.
