Charles L. Harper, Jr. is Senior Vice President of the John Templeton Foundation. He has worked to transform philanthropy by developing innovative entrepreneurial practices in grant making, and has created more than $200 million in grant-based programs ranging widely from the study of forgiveness and reconciliation, to enterprise-based solutions for poverty, and projects in chemistry, neuroscience, evolutionary biology, medicine, and the philosophy of science. At the 2007 Singularity Summit, he spoke on the dilemma of power, which describes how science and technology are seen to create new forms of power rapidly, whereas cultures and civilizations do not so easily create the parallel capacities of benevolent stewardship.
The following transcript of Charles L. Harper, Jr.’s 2007 Singularity Summit presentation “Superintelligence, the Dilemma of Power, and the Transformation of Desire” has not been approved by the author. An audio recording of the talk is available at the Singularity Institute website.
Superintelligence, the Dilemma of Power, and the Transformation of Desire
It’s a pleasure to be here. I was a little surprised to be invited. I thought there might have been a mistake made. I said, “I don’t know anything about AI, but I can do off-the-wall.” And they said, “Well, you’ll fit right in.”
My task is not pedagogical. The expertise is with you. My task is perhaps to be a catalyst in some way and to raise some big questions that I hope will be fruitful for you. I have an agenda to meet some people and get some feedback from you about some projects we might be interested in funding, having to do with computer games. We’ll get to that a little bit later. I look forward to learning from you and hope again simply to raise some useful questions that are “off-the-wall.” I’m not really going to focus on risks. I’m not doing a risk thing, but I will cover things that have to do with risk.
The three big questions I want to raise, the first one is “What does a slug know of Mozart?” The question has to do with whether intelligence as it goes “super” involves discontinuities or not. The second question is about the dilemma of power, and I’ll define that. The question is, the general dilemma of power, is it a big issue or not a big issue? That’s the question. The third question: How much significance is there in the whole future of AI to the issue of the transformation of desire? Kind of a spiritual issue.
The first question: What does a slug know of Mozart? We tend naturally to think of superintelligence as being like John von Neumann, only smarter. That would be the continuity vision of superintelligence. But you could instead have an idea of radical discontinuities. You could have an ontology of intelligence with radical discontinuities, just like the phase diagram of a liquid.
Epistemology for creatures, for us, follows ontology. A slug has a certain biology, you could call that its ontology, and its epistemology follows its slugness. So what a slug knows or perceives and what it cannot know and cannot perceive is simply dependent on what it is biologically. Of course, a slug’s capacity is very limited with respect to ours. Therefore, to a slug all of Mozart’s musical creativity is just a primitive perception of vibrations.
Think of a closer one to us. Biologically we are incredibly close to chimps. We have deep insights from primatological science into human nature. Profoundly important research insights are coming out all the time, and we at Templeton fund a considerable amount of this work. However, there is a fundamentally significant transition whereby we ought to be considered a new phylum. The transition to language and the kind of culture that is not animal culture, that is the kind of cumulative culture that is what this meeting is about. Before this transition and after, what changes is this ontology of language and the accumulation of symbolic thought, culture both analytic and aesthetic.
This is a profound transition. Biologically there is only a small genetic increment, yet we might as well be a new phylum: something new and discontinuous has arisen from a small genetic change. Again, biologically we are not very much different, but what has happened is what you could call a post-Lamarckian transition. Lamarckian biology, the idea that learned traits end up in the genome of the individual, is false. But that is what we do with culture. We have useful knowledge accumulated in libraries, and that expands our adaptive power as individuals and as a species, because we combine our primatological genetics with our cultural accumulation of information.
Take a picture like this of a social experience. Culture is social and is based on a linguistic ontology, the fact that we are homo linguisticus. We are a speaking species. We have culture: learning and the passing on of that learning. To understand Mozart, a slug, of course, is not well matched. A slug does not have this ontology and so can know nothing of Mozart. Now take the same diagram as before and let’s say that gork is speglek because it’s based on orphensic ontology. None of us knows what these words mean, because I just made them up. But if we don’t have speg, we won’t be able to have gork, because gork is based on the orphensic. So if superintelligence involves transitions, they will be like this: we won’t have orphensic ontology, we won’t be able to speg, and gork will mean nothing to us. It does mean nothing to us.
If the slug knows nothing of Mozart, you can ask the question, “Are we like slugs with respect to some higher complexity level?” In our biology, or if you want to make it in silicon, perhaps you can. Could major discontinuities exist that we know nothing about, and, beyond that, that we could know nothing about? We could not even have the possibility of knowing about them. This is not a new idea, but it’s a big one.
You can think of a biological ontology and the associated epistemology as like a mountain. Science gives us prostheses that are amazing, but they are not transcendent in terms of fundamental discontinuity. We tend to think that we are on top of Mount Everest and a slug is far toward the base. Maybe it’s true, but maybe it’s not.
What does a slug know of Mozart? That’s the first question. For those of you in AI, that’s a question you can’t answer, but one you would have to face as a possibility. We do not know about this. Question number two is a little more practical. How serious is the dilemma of power? On the left is Fritz Haber, on the right is Lise Meitner. At the top right is the Haber–Le Rossignol apparatus for the fixation of nitrogen, developed in 1909. What is the dilemma of power? As I define it, it goes like this: the dilemma of power is the fact that science and technology create new forms of power rapidly, whereas cultures and civilizations do not so easily create the parallel capacities of stewardship required to utilize newly created powers for benevolent use and to restrain them from being used to serve malevolent ends.
So, it is an asymmetry of rate. It is easy to make raw new human powers of any sort, whether cellphones, nuclear bombs, or vaccines, good things and bad things, just raw power that comes out of the great Baconian enterprise that is now realized. Francis Bacon’s dream is true, and accelerating in its success. Perhaps; there are some debates about the curves. But that’s the dilemma of power.
My question is, does it matter much for this AI question? It’s a simple question. I won’t answer it. I want to raise the question, “Does the dilemma of power matter?”
So some basic issues, again, that the acceleration of the development of new powers is rapid and relatively easy. If it’s like a black box, you can put in capital, talented people, and time – the wheel cranks and out comes new technological powers, new scientific insights that create new technologies, and so forth. But if you think, is there a comparative box for civilizational capacities of stewardship, you cannot think that there has been a Baconian enterprise whereby we have learned to put in capital, talent and time and create these capacities of stewardship.
We have not created that institutional capacity in our world to do that. Therefore, in a general way this suggests a reasonable prospect for future mega-disruptive mega-disasters. That is just straightforward, general logic. And, on the happier side, it suggests a challenge for innovation: to advance civilizational production of new capacities of appropriate stewardship. But what to do? It suggests the challenge; it does not suggest the answer.
We can look back at history at what some interesting scientists have done in the past to make this seem both realistic and intense. Next year will be the 70th anniversary, one lifetime, of an idea that came to Lise Meitner in some Swedish woods on Christmas Eve in 1938. Just one lifetime ago, she had this idea about fission with her nephew, Otto Frisch. One thing led to the next, and between 1938 and 1944, those new powers, which came from the innovative idea of a scientist, completely transformed the world. They weren’t controlled by scientists, but were immediately taken over by government, for good reason.
Fritz Haber the chemist is very interesting. He also was at the Kaiser Wilhelm Institute in Berlin. He won the Nobel Prize for the fixation of nitrogen. About half the food we eat is due to his discovery, through the creation of fertilizer.
Now, nitrates are also of course used for explosives. I found this statement in a book about Haber’s life fascinating: “The device that Haber and Le Rossignol built, compact enough to fit on a small table, rests today in the Deutsches Museum in Munich. Sitting quietly, separated from visitors by a short barrier, it’s a deceptively modest kernel of a thing, an embryo from which sprouted monsters: machines taller than houses, factories covering hundreds of acres, world wars, and a global flood of grain.” You see good and evil of a profound scale in the same scientific insight, in the life of Haber.
Within a few short years these giant factories were created. This is the first tank car of ammonia leaving the Leuna Works in 1917, and it’s used for war. On the side of the tank some of the workers have written, in German, “Death to the French!” Now, Haber, as a chemist, also pioneered gas warfare in World War I.
Bonus question: anyone want to guess another chemical substance Fritz Haber invented that had a profound impact on world history? Zyklon B. He was Jewish and a great German nationalist. He died in Switzerland during the Great Depression, having fled Germany.
It’s an interesting example of how out of great scientific creativity comes tremendous good and tremendous evil. And tremendous irony in the life of a great scientist. He himself wrote, “The great technological accomplishments that the past fifty years have granted us, when controlled by primitive egoists” (this is the dilemma of power) “are like fire in the hands of small children.”
I want to put a name to this problem. It’s the “loaded AK-47s in a kindergarten” mixing problem. If you have a kindergarten and you put in a lot of loaded AK-47s, you have a mixing problem. So, the question is: we are creating immense new powers that are, to some degree, like AK-47s, and we might be like kindergartners, unavoidably so, precisely because we can’t easily invest in civilizational capacities for ethical stewardship at all levels, whether individual, familial, societal, national, or global. Whatever the level, we have great difficulty with this challenge. That’s the “loaded AK-47s in a kindergarten” mixing problem.
This is a hot issue today on the front pages of newspapers, because this technology that came originally from the fertile mind of Lise Meitner is now being realized in places that are problematic, like Pyongyang and Tehran. I don’t want to get into politics, but this is the concern. Because these technologies exist, even very low-probability events are perhaps not unlikely to happen.
Will a nuclear bomb explode in a large city like New York? The probability may be low, but there are a lot of large cities and there is a lot of time. This is not a new topic, it’s an old topic, but I myself would be a little nervous living in New York. I don’t think my suburb will be targeted.
The restatement of the core issue is that the major trajectory of the output of scientific and technological innovation generates the acceleration of the manufacture of new human powers. Surely, the Singularity vision has much to do with that. Of course, very much is good and benign. To be a technophobe would be a horrible mistake. We want to be technophiles for human benefit. But again, fast growth in power generally necessitates some kind of corresponding growth in stewardship. If that logic can be shown to be not true, that’s rather interesting. But it seems to be general.
Does the dilemma of power matter a lot? I don’t have the answer. I think it probably does, because of this general logic. But it’s a very important question to keep in mind. On the stewardship side, what could be done? Could science and technology do something very positive on that side? That’s a challenge for the future.
Question #3: How important is the transformation of desire? Now, why in the world is Michael Jackson on this slide? I think it’s just a mistake, but let’s interpret it anyway. Because Michael Jackson is wealthy, he had the power to pursue his desires. What you see here is a kind of icon of the ability to pursue desires with power in which the desires themselves are problematic. They represent something like self-hatred. That’s kind of an icon of the problem of the transformation of desire. I don’t want to pick on Michael Jackson, but it’s rather striking to see it in a face.
The desire to eat. On the bottom left is a hagfish. We have a hagfish nature. We are eating creatures. We have evolved to survive, to reproduce, and a part of that is to eat. We have this fundamental drive to put food into an orifice in our bodies. We share that with a hagfish. There is a lovely book by Leon Kass called The Hungry Soul, which talks about the transformation of desire. We can be hungry for justice, we can be hungry for truth, we can be hungry for goodness, we can be hungry for many things: advancement of our career, scientific reputation, but that we have desires is a fundamental part of our nature. We are hungry creatures. We share that with hagfish.
Our culture often tries to take the transformation of desire and accelerate it in directions that we consider to be good. For example, fasting is the direct repression of this fundamental biological drive of hunger. It’s in all religious and spiritual traditions, even in secular ones like health and happiness traditions. But fasting is directly the attempt to transform the desire of hunger into the desire for something we would say is higher. That’s what the transformation of desire is about.
When I was preparing this, I wanted to see what books had been published with the phrase “transformation of desire” in the title. I found two. The one on the right is Christian, the one on the left is Buddhist. The Buddhist case, if you think of what superintelligence is, is very extreme in the Four Noble Truths. You could think of desire as being similar to attachment. The origin of suffering is attachment, or desire. The path to superintelligence has to do with the extinction of desire. This is at the deep core of Buddhist thought. Nirvana is again the extinction of desire. This is a quite radical idea of the transformation of desire, because it is trans-personal, almost an extinguishing of the self.
Let’s look at a humanist version. Not so extreme. In Turkey, you can go to Ephesus and see the Library of Celsus, and you see a humanist vision of balanced ideals. There on plinths you see four statues: Sophia, Arete, Ennoia, and Episteme. Intelligence is sort of in here, not in the modern sense but in the ancient sense, in Ennoia: wisdom, virtue and excellence, thought and intention, and knowledge. You have the idea of the transformation of a person, and of desire, that involves this balanced component. It’s not all about advancing IQ or effective IQ. Again, this combination.
Christian humanism adds to this a similar aspect to Buddhism: the radical transformation of mindset. Here is a famous verse from Philippians: “Finally, brothers, whatever is true, whatever is honorable, whatever is right, whatever is pure, whatever is lovely, whatever is of good repute—if there is any excellence and if anything worthy of praise—dwell, think about, make these things internal to yourself.” Christian humanism also adds the agapic dimension of love, altruistic love. It adds this to the vision of what it is to be intelligent in the broad sense of intelligence.
This becomes a kind of spiritual technology in the West. Here in Dante you have the dialectics of sin and virtue. Humility, kindness, forgiveness, diligence, temperance, and chastity against their opposed vices.
In modern parlance, in addition to IQ is it an intelligent agenda to try to advance wisdom or empathy? Is it intelligent to advance these? It has been fascinating to me in this meeting to see so much interest in this kind of dimension of what would be superintelligent. Now, how in the world could this stuff relate to computational science? Let’s get back to the topic of the conference.
For the Templeton Foundation, I am very interested in this question as it relates to a practical issue. We are supporting a pretty big project at Harvard with a game theorist named Martin Nowak, whose agenda is to develop a formal, realistic mathematical understanding of the logics of cooperation and the ethical dynamics of human social orders. His latest book is Evolutionary Dynamics. He wants to have a general theory of the dynamics of evolutionary processes.
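To give a concrete flavor of what this kind of formal work looks like, here is a minimal sketch, not from the talk itself, of the sort of model studied in evolutionary dynamics: replicator dynamics for Tit-for-Tat versus Always-Defect in a repeated Prisoner’s Dilemma. The payoff values, the continuation probability `w`, and all function names are illustrative choices for the example, not anything Nowak or the speaker specified.

```python
# Illustrative sketch (hypothetical parameters): replicator dynamics for
# a repeated Prisoner's Dilemma, in the general spirit of evolutionary
# game theory. Strategy 0 = Tit-for-Tat (TFT), strategy 1 = Always-Defect.

def payoff_matrix(R=3.0, S=0.0, T=5.0, P=1.0, w=0.9):
    """Expected total payoffs per pairing when each further round is
    played with probability w (expected game length 1/(1-w)).
    Returns A[i][j]: payoff to strategy i against strategy j."""
    n = 1.0 / (1.0 - w)  # expected number of rounds
    return [
        [R * n,           S + P * (n - 1)],  # TFT vs TFT, TFT vs ALLD
        [T + P * (n - 1), P * n],            # ALLD vs TFT, ALLD vs ALLD
    ]

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation for x, the fraction
    of TFT players: dx/dt = x * (fitness_TFT - average fitness)."""
    f_tft = x * A[0][0] + (1 - x) * A[0][1]
    f_alld = x * A[1][0] + (1 - x) * A[1][1]
    avg = x * f_tft + (1 - x) * f_alld
    return min(1.0, max(0.0, x + dt * x * (f_tft - avg)))

def evolve(x0, w, steps=20000):
    """Iterate the dynamics from initial TFT fraction x0."""
    A = payoff_matrix(w=w)
    x = x0
    for _ in range(steps):
        x = replicator_step(x, A)
    return x
```

The qualitative point such models make precise: when interactions are likely to repeat (`w` near 1), a population starting with enough reciprocators is driven toward full cooperation, while one-shot-like interactions (`w` near 0) drive it to defection. Whether this sketch has anything to do with the Harvard project’s actual formalism is, of course, my assumption here.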
My question is: Could the games industry have some synergy with research support for these kinds of fundamental agendas? Could you have games that run off of this kind of deep theoretical work? I presume that most games don’t. I’m wondering if there is some connection between the games industry and the learning of virtues which games can engage us in, and deep research on understanding these things.
Now, what’s the big question? The question is about the evolution of the universe. What’s the space of lifelike things out there? Of course, we don’t know. Superintelligence could unavoidably, naturally solve the ethics problem. Or you could have giant, horrible insects out there. It’s a completely un-dealt-with question. You need a deep, full theory of evolutionary dynamics to understand the structure of the evolution of ethical systems in post-cultural species. No real science has been done on this to my knowledge, but it would be an attempt to do real science on what Olaf Stapledon engaged in his science fiction Star Maker. We’re interested in that.
With games, might it be possible to develop a win-win synergy between these two quite different domains of the games industry and fundamental research on the deep evolutionary logics of what you might call the virtues? I don’t know, but it’s worth a shot. I’ll just mention that the success of modernity follows from a division of labor logic, whether it’s business labor or intellectual labor. There is a disintegration of domains. When we talk about the virtues or talk about spiritual transformation, when we talk about computer science, these are very different domains at a university, and they are separated for good reasons to do with productivity. Therefore, there is often an opposition to these grand integrative scenarios. They just don’t strike people as the right way to do science.
However, in the initial visions, if you look at what Newton was trying to do, or what Bacon was about, they were actually trying to solve the politics problem, the theology problem. They were interested in social harmony, and to some degree Newton discovered gravitational dynamics and the calculus almost incidentally along the way. I just talked to Rob Iliffe, who is editing all of Newton’s unpublished papers, and this is what he thinks. It’s very clear in the case of Francis Bacon that this kind of agenda, an integrative relief of the human estate, involves more than just specifically working on math and science.
This meeting is fascinating. It raises challenges directly for an integrated vision, in relation to the expectation of the creation of massive new transhumanist power. And it engages people I am very keen and eager to meet: people involved in the programming and technological advancement of the games industry who are interested in this task of training the virtues, and not just World of Warcraft-type stuff.
Again, a summary of the questions. Thank you for your attention.