Thank you to all the bright people who participate so enthusiastically in the comments on Accelerating Future. The quality of discussion here is often very high.
In the “future superintelligences indistinguishable from today’s financial markets?” thread, DMan and Mitchell Porter engage in a discussion about emotions and their relevance to AI.
Correction: Mitchell Porter, not Howe. Apologies to Mr. Porter; both he and Mr. Howe leave insightful comments, but I usually find myself reposting the latter's.
If I understand Rinesi correctly, he is equating distributed networks of information processing with AI. Some neuroscientists have used this sort of distributed network as a model for consciousness, without much success in my opinion.
The problem, it seems to me, is that such a system would always be too diffuse to ever have the internal model of self that's required for a useful form of consciousness. As far as I can see, there has to be a central point to which information flows and is interpreted by the overseer of the 'self'.
It's really like saying that a flock of birds flying in formation, a school of fish, or the coordinated activities of a termite nest are 'intelligences' in and of themselves. In many ways the cooperative activities of social animals and hive insects resemble the financial market metaphor – would you call them an 'intelligence'?
It's kind of an 'emergent hive mind' theory with more in common with magical thinking than science.
As to AI being no existential threat:
I would (again) refer you to the excellent book by science writer Rita Carter, 'Consciousness', which collates evidence from neuroscience, cognitive studies and psychiatry in an attempt to understand the only working model of consciousness we have – us.
In it, the research appears to indicate that:
a) There can be no 'intelligence' without 'consciousness'.
b) Consciousness – whether human or otherwise – requires some form of embodiment to function (so, again, Rinesi's hive mind is not embodiment).
c) Consciousness requires an emotional component in order to function. The very act of making even an abstract judgement requires a 'feel' for what is 'right' or 'wrong' – that's emotion.
My point in mentioning this is that an AI (perhaps a better name is AC – Artificial Consciousness?) will in all likelihood have to be able to experience emotion to be useful to us. Without emotion there is only a pallid facsimile of creativity – and without creativity, what could it invent?
So if it can – or must – have emotion, it most certainly can and will be dangerous. It could feel resentment, jealousy, ambition and hatred.
This is why, I suspect, many so-called AI 'experts' (expertise on something that is not yet a reality doesn't really warrant the title, does it?) are uncomfortable with the idea of emotion being necessary, and go out of their way to dismiss it. Because it implies a potential existential threat to us.
But if they're wrong and emotion is indeed needed for AI, then Rinesi's telling us not to worry is – at best – irresponsible.
“Without emotion there is only a pallid facsimile of creativity – and without creativity, what could it invent?”
Computers can invent in the way that evolution does – through trial and error. They can also employ smarter search algorithms than that, e.g. to explore a space of possible designs.
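To make the trial-and-error point concrete, here's a minimal sketch of my own (not Mitchell's; the target string and mutation rate are arbitrary choices for illustration): a (1+1) evolutionary algorithm that "invents" a bitstring matching a hidden target through nothing but blind mutation and selective retention.

    import random

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]  # the "design" to be discovered

    def fitness(candidate):
        # Score a design by how many positions match the target.
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        # Flip each bit independently with small probability.
        return [1 - b if random.random() < rate else b for b in candidate]

    parent = [random.randint(0, 1) for _ in TARGET]
    while fitness(parent) < len(TARGET):
        child = mutate(parent)
        # Blind variation plus selective retention: keep the child
        # only if it scores at least as well as the parent.
        if fitness(child) >= fitness(parent):
            parent = child

    print("evolved design:", parent)

No step in this loop understands the problem, yet it reliably reaches the target; smarter search algorithms just prune and order the same space more cleverly.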
The whole significance of computing is that it provides a mathematical theory and a material technology capable of duplicating every mental function – goal-directed behavior, pattern recognition, communication – in a way which makes no reference to mind, thought, meaning, consciousness, emotion, etc. It's all just physical "state machines" interacting with each other.
And since neuroscience is analyzing the human brain and human behavior in the same way, this raises serious questions about the relationship between subjectivity and the world of mindless physical cause and effect. That debate has been around for centuries now, but the theory of computation gives it a new twist. Lots of people now like to think that e.g. emotion or consciousness occurs wherever a certain type of computation occurs. Whether you believe that or not, the significance for AI is that emotion and consciousness are not a necessary part of AI theory or practice. You don't need to think of a thermostat's operation in terms of liking and disliking in order to design and make it, and the same goes for the far more intricate feedbacks and calculations which would make up an artificial "intelligence" capable of rivaling the human mind.
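Mitchell's thermostat is easy to render literally. A minimal sketch, with a setpoint and dead band invented for illustration; the regulation is pure state-machine cause and effect, with no liking or disliking anywhere:

    class Thermostat:
        def __init__(self, setpoint=20.0, band=0.5):
            self.setpoint = setpoint  # target temperature (arbitrary)
            self.band = band          # dead band to prevent rapid switching
            self.heating = False      # current state: heater on or off

        def step(self, temperature):
            # Compare the reading to the thresholds and switch state.
            if temperature < self.setpoint - self.band:
                self.heating = True
            elif temperature > self.setpoint + self.band:
                self.heating = False
            return self.heating

    t = Thermostat()
    for reading in [18.0, 19.4, 20.6, 21.0, 19.8]:
        print(reading, "->", "heat on" if t.step(reading) else "heat off")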
There's a bit more back-and-forth, and then Mitchell posts the following, which I think is especially on target:
The creative limits of chess computers and story grammars don't tell us about the limits of unconscious computation in general. Consider any cognitive process which involves emotion, creativity, consciousness, compassion, etc. It's going to have a cause-and-effect description, in which there are transitions between various psychological states. To produce the same outputs, all that's required is a process with the same cause-and-effect structure. The "states" involved don't have to be psychological or conscious – unless functionalist philosophy of mind is right, and all those psychological properties really are present whenever you have the right sort of causal structure.
Even if I take that view – suppose I want to make a creative AI, and I decide to follow your advice and make it "emotional". How do I even do that? If I adopt a particular software design, how do I know whether or not it corresponds to the existence of emotion in the AI?
The ultimate reason that this doesn't seem like very useful advice (for someone who wants to make an intellectually powerful AI – we'll get to the ethical issue in a moment) is that emotion itself doesn't solve problems, even in humans. If the problems themselves involve emotions, then an emotion can *be* the answer – happiness might be the answer to unhappiness, just as a glass of water can be the answer to thirst. But if you're a monkey in a room trying to get at a banana on the ceiling, emotion itself does not tell you that the answer consists of stacking boxes and climbing on top of them. Or rather, emotion is not the process which will materialize that possibility in your mind. Emotion may motivate you to devote cognitive resources to the problem, and your mind may be wired to produce an emotion (excitement) when an imagined solution looks like it will work. But the consideration of possible actions – visualization, combinatorial exploration – all of that is more "computational" than "emotional", and that's the process which generates possible solutions. (Embodiment also plays a role here, because it permits not just formal trial and error but also a more formless experimentation which will suggest possibilities and components of possibilities.)
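The monkey-and-banana puzzle is in fact a classic toy planning problem, and blind search solves it. Here's a rough sketch of my own (the state encoding and place names are made up): breadth-first search over (monkey location, box location, on-box, has-banana) states discovers the walk-push-climb-grasp plan with no emotion anywhere in the loop.

    from collections import deque

    PLACES = ("door", "window", "middle")  # the banana hangs at "middle"

    def successors(state):
        # Enumerate the legal actions and the states they lead to.
        monkey, box, on_box, has = state
        if on_box:
            if monkey == "middle" and not has:
                yield "grasp banana", (monkey, box, True, True)
            yield "climb down", (monkey, box, False, has)
        else:
            if monkey == box:
                yield "climb on box", (monkey, box, True, has)
            for p in PLACES:
                if p != monkey:
                    yield f"walk to {p}", (p, box, False, has)
                    if monkey == box:
                        yield f"push box to {p}", (p, p, False, has)

    def plan(start):
        # Standard breadth-first search: generate successors, stop at the goal.
        frontier, seen = deque([(start, [])]), {start}
        while frontier:
            state, actions = frontier.popleft()
            if state[3]:  # goal reached: has_banana is True
                return actions
            for action, nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, actions + [action]))

    print(plan(("door", "window", False, False)))
    # -> ['walk to window', 'push box to middle', 'climb on box', 'grasp banana']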
If a process is to produce the solution to a problem, it has to generate possible solutions and then evaluate whether they are useful. It's a psychological fact about human beings that emotion and consciousness play a role in this evaluation of possibilities, and they even play a role in determining what we will think of as a problem. But from a computational perspective, it doesn't have to be emotion or consciousness which performs the evaluative function. There just needs to be a sub-process which discriminates or guides appropriately, and that can be yet more unconscious computation. If you look at problem-solving algorithms searching a space of possibilities for solutions, they typically alternate between the formal generation of new possibilities and the formal evaluation of the newly generated possibilities – do they offer progress towards a complete solution?
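That alternation is easy to exhibit. A minimal sketch, with a toy objective I've made up for illustration: hill climbing generates nearby candidates and evaluates whether each one offers progress, and neither half of the loop involves anything emotion-like.

    import random

    def score(x):
        # Toy objective with its maximum at x = 3 (arbitrary choice).
        return -(x - 3.0) ** 2

    def hill_climb(start=0.0, step=0.1, iterations=10000):
        current = start
        for _ in range(iterations):
            # Generation: propose a nearby candidate solution.
            candidate = current + random.uniform(-step, step)
            # Evaluation: keep the candidate only if it offers progress.
            if score(candidate) > score(current):
                current = candidate
        return current

    print(hill_climb())  # converges near 3.0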
Summing up this stage of my argument: the intellectual power of an AI would not reside in the existence of emotions or of an emotion-like structure of cognitive control and guidance. It would depend on the quality and power of its basic problem-solving algorithms. I am trying to finesse the hard problems associated with consciousness and subjectivity by being agnostic, in this discussion, about their relationship to material and computational reality. Further down the page, David Pearce and "Continuously Computed" have given us statements of the two main approaches – namely, that the choice of material substrate matters for the existence of subjectivity, and that only causal structure matters; consciousness reduces to substance, or consciousness reduces to function. Of course it's a complex and very important issue, but I do want to emphasize just how far we can expect to go in the creation of general-purpose AI employing only computational concepts.
The other topic you bring up is whether the creation of emotional AI is a way to achieve what our blog-host would call "Friendliness": rather than trying to engineer the functional equivalent of friendliness in an emotionless AI, you make an emotional AI and start it off compassionate. But that strategy requires that you begin to solve the hard problem of consciousness as it pertains to emotion and compassion; you would need to say *how* to make an AI emotional or compassionate. And you would need to understand something of the developmental dynamics in an artificial emotional system. I assume you don't want it *going mad* out of extreme sensitivity.
Even people who want to make an emotionless but Friendly AI have to find solutions to those problems anyway, because even if emotions are not part of the AI's mechanism, they have to be part of its supposed domain of competence. A general-purpose AI could not know how to treat human beings ethically or even safely without having a highly refined understanding of emotion and all these other aspects of conscious subjective experience. Part of SIAI's current thinking about the achievement of Friendliness seems to involve outsourcing some of these problems to the proto-friendly AI, which will engage in neuroscientific studies aimed at identifying what real-world material phenomenon or attribute is intended by all this vague human talk about emotions and consciousness and so on. It's an interesting idea, but it still requires as a starting point some minimal idea, on the part of the programmers or design theorists, about how to tell the AI what to investigate and how to value it – e.g. something like "We want the world to be optimized according to the criteria that are used by the part of the brain responsible for the judgements which ultimately produce confident assertions of happiness with the overall situation."
Does anyone else believe that phenomenological consciousness is necessary for general problem-solving? I’m sort of confused why anyone would think that. This line in particular is confusing to me:
And how do they judge, from 'trial and error', which is the best solution? Other than something blowing up, of course? Without the ability to judge – which, according to cognitive studies, requires emotion?
This statement implies that "judging" is a natural category distinct from trial and error, and that the two are naturally separated, distinct clusters in algorithm-space rather than two points on a continuum. That doesn't make sense: there are obviously a million shades of competence between the most simplistic trial-and-error algorithms and human judgement, so referring to them as two natural categories is confusing. Surely animals have solved numerous "judgement" problems without human-level "emotions".
The thinking seems to be that the universe can bestow a special gift, "consciousness", on certain hallowed beings, who then get magical judgement powers. But phenomenological consciousness doesn't seem particularly related to judgement capabilities, and thinkers like Chalmers never suggest that it does in their papers on consciousness.
Checking out the reviews for Consciousness on Amazon, I see that Carter is a science writer rather than a scientist, and makes basic errors like thinking that an atom becomes positively charged when it gains an electron. Still, I don’t think that being a science writer rather than a scientist should ruin someone’s scientific credibility. Some science writers have a much more thorough interdisciplinary knowledge of science than many scientists.