Humans Are Inherently Poor Programmers — Advanced AIs Could Possess a Sensory Modality for Codelike Structures
To quote from Part 3: Seed AI of Levels of Organization in General Intelligence, which I've been re-reading:
Even at our best, humans are not very good programmers; programming is not a task commonly encountered in the ancestral environment. A human programmer is metaphorically a blind painter - not just a blind painter, but a painter entirely lacking a visual cortex. We create our programs like an artist drawing one pixel at a time, and our programs are fragile as a consequence. If the AI's human programmers can master the essential design pattern of sensory modalities, they can gift the AI with a sensory modality for codelike structures. Such a modality might perceptually interpret: a simplified interpreted language used to tutor basic concepts; any internal procedural languages used by cognitive processes; the programming language in which the AI's code level is written; and finally the native machine code of the AI's hardware. An AI that takes advantage of a codic modality may not need to wait for human-equivalent general intelligence to beat a human in the specific domain competency of programming. Informally, an AI is native to the world of programming, and a human is not.
A codic modality and associated "codic cortex" could allow an AI with significant general intelligence to make self-improvements to low-, mid-, and maybe even high-level aspects of its own code more effectively than humans could. Even if its general intelligence were low, its programming competency might be high, like an idiot savant. Eventually, it could combine "merely" human-level general intelligence with significantly superhuman coding abilities.
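The contrast between perceiving code as flat text and perceiving it as structure can be loosely illustrated with Python's built-in ast module, which exposes a program as a tree of syntactic nodes rather than a string of characters. This is only a toy analogy for a codic modality, not an implementation of one; the feature-counting walk below is an illustrative device of my own, assumed for the example.

```python
import ast

# A human reads source code as flat text; a structural view exposes
# the tree of constructs underneath, something closer to what a
# codic modality might perceive directly.
source = """
def square(x):
    return x * x
"""

tree = ast.parse(source)

# Walk the tree and tally the kinds of syntactic "features" present,
# loosely analogous to low-level feature detectors in a sensory cortex.
features = {}
for node in ast.walk(tree):
    name = type(node).__name__
    features[name] = features.get(name, 0) + 1

print(features)
# e.g. the tally includes one FunctionDef, one Return, one BinOp
```

A system operating on this representation can query properties (how many branches, which names are bound where) that a character-level view obscures, which is the sense in which structured perception beats painting one pixel at a time.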
With fast serial thinking speed and immunity to boredom, an AI could apply its "best thinking" in an automatic and rigorous way to all levels of its own design. Kevin Kelly has suggested that self-improving AIs might be limited by the need to conduct experiments in the outside world at human-characteristic timescales (a constraint I find unlikely), but in any case this criticism would not apply to the internals of an AI, which could be tested extremely quickly given sufficient computing power.