Accelerating Future: Transhumanism, AI, nanotech, the Singularity, and extinction risk.


Humans are Inherently Poor Programmers — Advanced AIs Could Possess a Sensory Modality for Codelike Structures

To quote from Part 3 (“Seed AI”) of Levels of Organization in General Intelligence, which I’ve been re-reading:

Even at our best, humans are not very good programmers; programming is not a task commonly encountered in the ancestral environment. A human programmer is metaphorically a blind painter - not just a blind painter, but a painter entirely lacking a visual cortex. We create our programs like an artist drawing one pixel at a time, and our programs are fragile as a consequence. If the AI's human programmers can master the essential design pattern of sensory modalities, they can gift the AI with a sensory modality for codelike structures. Such a modality might perceptually interpret: a simplified interpreted language used to tutor basic concepts; any internal procedural languages used by cognitive processes; the programming language in which the AI's code level is written; and finally the native machine code of the AI's hardware. An AI that takes advantage of a codic modality may not need to wait for human-equivalent general intelligence to beat a human in the specific domain competency of programming. Informally, an AI is native to the world of programming, and a human is not.

A codic modality and associated "codic cortex" could allow an AI with significant general intelligence to make self-improvements to low-, mid-, and maybe even high-level aspects of its own code more effectively than humans could. Even if its general intelligence were low, its programming competency might be high, like an idiot savant. Eventually, it could combine "merely" human-level general intelligence with significantly superhuman coding abilities.

With fast serial thinking speed and immunity to boredom, an AI could apply its "best thinking" in an automatic and rigorous way to all levels of its own design. Kevin Kelly has pointed out that self-improving AIs might be limited by the need to conduct experiments in the outside world at human-characteristic timescales (unlikely), but this criticism would not apply to the internals of an AI, which could be tested extremely quickly if the appropriate levels of computing power could be furnished.
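As a toy illustration of what a codic modality might look like in practice, here is a minimal sketch that uses Python's standard-library ast module as a stand-in for the perceptual layer. The choice of features (node-kind counts and nesting depth) is my own illustrative assumption, not anything proposed in the essay:

```python
import ast

def perceive(source: str) -> dict:
    """Toy 'codic modality': turn flat source text into a structured view.

    Instead of seeing code as a string of characters (the 'blind painter'
    view), we see a tree: counts of node kinds plus maximum nesting depth.
    """
    tree = ast.parse(source)
    kinds: dict = {}
    max_depth = 0

    def walk(node: ast.AST, depth: int) -> None:
        nonlocal max_depth
        max_depth = max(max_depth, depth)
        name = type(node).__name__
        kinds[name] = kinds.get(name, 0) + 1
        for child in ast.iter_child_nodes(node):
            walk(child, depth + 1)

    walk(tree, 0)
    return {"node_kinds": kinds, "max_depth": max_depth}

view = perceive("def f(x):\n    return x * 2\n")
print(view["node_kinds"]["FunctionDef"])  # 1
```

A real codic cortex would presumably operate on far richer structure (data flow, types, execution traces), but even this trivial view is already closer to "seeing" code than reading it character by character.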

Comments (10) Trackbacks (0)
  1. It’s going to need a damned good VCS.

  2. How many lines of code has the SIAI written to create these AIs?

  3. How many levels of abstraction does this need to cross? I would question whether a single ‘codic’ modality is adequate. Beyond this the discussion starts from such a broad set of assumptions that it seems hard to argue or even discuss. Tantalizing, yes ;-)

  4. Human self-awareness, intuition, consciousness, imagination – these are all ‘inward looking’ model spaces. I described it to my neighbour like this –

    ‘Imagine you’d lost your keys. I could then walk into your living room and, in my mind, make a model of the room in a single glance. I couldn’t know where you had lost your keys at first glance, but I would be able to generate a sufficiently detailed model to simulate and flag all the places where keys could in practice accumulate. I could run a mental device that generates a sequence of simulated key-losing events and colors all the spots in the room where the keys would be hidden or most likely to end up. I could couple that with human behavior, based on what I know of yours. If I thought fast enough, I could see the spots where the keys would most likely be in deep red. I’d go over them one by one, and you’d think I was some sort of magician whenever I found them in under a minute.’

    People have little or no clue what minds can do, if you take the implications to the extremes. And that’s just with evolved minds. I have only one ‘mind’s eye’, and I can only use it when I mostly lock down my real visual awareness (don’t use your laptop while driving!). Imagine if I had both, or ten, all meshed into a single psyche, scattered losslessly across several consciousness systems.

    Modelling only goes so far. Add up these thought modalities in an enhanced human brain and very soon the humans left behind see you slip around a corner: you enter modes of function that are beyond baseline understanding at a transcendent level, unknown unknowns. Total weirdness.

    The biggest problem is convincing humans that an AI in a self-engineering runaway will do things that are surreal. You can’t go ‘survivalist’ on them and win like John Connor, because the window between starting self-improvement and transcending all known boundaries is weeks at most.

    This makes it strange that nature got such meager results with human brains. Why has there never been a freak mutation producing a human with a functional 5-kilo brain in all of human history? If we had ever had one, in all those billions, it should have been able to do unprecedented marvels. Yet humans stay firmly locked in a very narrow band of cognitive achievement.

    Then again, would I even recognize a superhuman if I saw her/him do his/her thing?

  5. Kevin Kelly is a Christian, no? Thus, this person is about as useful and intelligent as a dog barking in morse code.

  6. Brad: zero! We are trying to figure out a lot of things mathematically before we start coding. Current models of decision theory are not adequate to build an AI that can operate in the real world, much less improve itself. Just as with a large and delicate building, constructing an AI without a precise blueprint will guarantee failure.

    Standard decision theories are susceptible to being fooled by situations humans never would be (like Pascal’s mugging-type situations, to name one that has been studied). So, we have to come up with decision theories that avoid such potholes while still remaining mathematically consistent and understandable. For recent attempts, see the Less Wrong posts on timeless decision theory, including open problems.
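    The Pascal's mugging failure mode mentioned above can be made concrete with a two-line expected-utility calculation. The numbers below are purely illustrative assumptions: a naive expected-utility maximizer pays up whenever the claimed payoff grows faster than its credence shrinks:

```python
# Illustrative numbers (assumptions, not from the comment): a mugger
# claims that handing over 5 utils of money yields a huge payoff U with
# tiny probability p. A naive expected-utility maximizer computes:
p = 1e-12            # credence that the mugger is telling the truth
U = 1e15             # claimed payoff in utils
cost = 5             # utils lost by paying
ev_pay = p * U - cost
print(ev_pay > 0)    # True: the naive calculation says "pay the mugger"
```

    No matter how small p gets, the mugger can always quote a larger U, which is exactly the pothole a better decision theory needs to avoid.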

  7. The SIAI is the only group I know of that focuses on friendliness rather than a “full speed ahead” approach to AI. The longer the SIAI takes to actually start programming the AI, the more time other, less benign organizations have to advance their own programs. Tim Tyler addresses this trade-off between caution and speed in one of his YouTube videos.

  8. Where do I start… Well, first, humans aren’t good painters either. Even with the necessary sensorimotor equipment, the first stick figures we draw look like something a blind painter might fling onto his canvas. Why that is so seems like an interesting topic in itself…

    The main problem I see here is the analogy of a “codic modality”. What would that have to be? And what would a “codic cortex” have to do?

    What the “codic modality” seems to boil down to is a layer of number crunching before the manipulation of the code, putting it into some kind of specialized representation. The AI then acts upon that representation (intelligently) and runs it back through the cruncher into code.

    So… Is this “codic-modality” basically a compiler you put in front of your AI, or am I missing something important?

    If that’s the case, it suddenly doesn’t sound all that special anymore.
    Apart from the “codic cortex”, which will do… what additional thing?

    Let me phrase it in the painting analogy: You will have a “codic modality”, which will turn your code into a painting. The painter will make the painting aesthetically pleasing for him and then the “codic modality” will turn it back into code again.

    So the tasks to accomplish are to build a compiler which can literally transform code into something analogous to the fine grained complexity of a painting and program an entity that has something akin to the aesthetic sensibility of a painter.

    A bold proposal.
    As always.
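    For what it's worth, the round trip the commenter describes (code into a specialized representation, intelligent manipulation, back into code) can be sketched with Python's standard-library ast module (ast.unparse needs Python 3.9+). The constant-folding pass is a deliberately dumb stand-in assumption for whatever the "painter" would actually do:

```python
import ast

class FoldIdentities(ast.NodeTransformer):
    """Stand-in 'painter': simplify x * 1 and x + 0 in the representation."""

    def visit_BinOp(self, node: ast.BinOp) -> ast.AST:
        self.generic_visit(node)  # simplify children first
        if isinstance(node.right, ast.Constant):
            if isinstance(node.op, ast.Mult) and node.right.value == 1:
                return node.left
            if isinstance(node.op, ast.Add) and node.right.value == 0:
                return node.left
        return node

def round_trip(source: str) -> str:
    tree = ast.parse(source)              # text -> representation (in)
    tree = FoldIdentities().visit(tree)   # act on the representation
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)              # representation -> text (out)

print(round_trip("y = (x + 0) * 1"))  # y = x
```

    On this reading, yes, the modality itself is "just" a compiler front end; the interesting part is whatever intelligence operates on the representation in the middle.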

Leave a comment

No trackbacks yet.