When cheap, advanced sensors give rise to ubiquitous monitoring technology, there will be the potential for what David Brin (in The Transparent Society) and others have called “sousveillance” to become universal. One could envision a future in which everyone monitors the activities of everyone else. At the AGI-09 post-conference workshop, Ben Goertzel presented a paper co-authored with Stephan Bugaj on various scenarios that could result from a future of advanced artificial intelligence combined with sousveillance technologies.
The following transcript of Ben Goertzel and Stephan Bugaj’s AGI-09 presentation “Is Sousveillance the Best Path to Ethical AGI?” has not been approved by the speaker. Video and audio are also available.
Is Sousveillance the Best Path to Ethical AGI?
I am now going to give a kind of science fictional talk, moving out even further into the hypothetical future. I have previously given talks on issues similar to those addressed in the last two talks. Stephan and I gave a presentation at last year’s AGI conference on stages of ethical development for logic-based AIs. I did not think of anything new to say about that topic for this year’s workshop, so I decided to share some thoughts we have been pursuing about a related topic: the intersection of advanced AGI technology and other possible technological developments.
Often when extrapolating what will happen in the hypothetical future where we create very powerful AIs, we tend to ask “What if everything else stayed the same?” That is a sensible way to investigate things: vary one parameter and see what happens. But there are a lot of other interesting changes that could happen in the future of humanity as well, and it is interesting to look at some of the intersections: what happens when advanced AI is combined with other things that could occur. What we are talking about here is the intersection of AI and what David Brin and other thinkers have called “sousveillance.”
This is a variant of the traditional concept of surveillance. The “Big Brother is watching you” type of future scenario is one where the government or the galactic megacorporation watches everyone and everything you do, whether for your own safety or for their own ends. Sousveillance is a tweak on that idea, where basically everyone is watching everyone else. David Brin, in a book I think is really excellent called The Transparent Society, posits that given the development of surveillance technologies, pretty much the only options we have are that the powers that be watch everyone, or everyone watches everyone.
The argument he puts forth is that privacy is going to be dead, one way or the other. He figures we are better off with everyone watching everyone than just having the government watch all of us. If everyone watches everyone, then you can watch the watchers. There is a bit more safety potentially there. If you go even a little further out into science fiction, you can talk about really strong sousveillance, where you watch other people’s mental states as well. That gets far out with humans. When you are talking about AGI systems it is quite possible that you could demand that any AGI created make its mental states open to inspection by other AGIs.
That raises a lot of interesting questions. Even if you can see into its mind, can you understand what is happening? I am going to talk a bit about what may happen when you put AI together with this sort of universal surveillance that may be part of the future. One of the interesting things that comes up when you think about this in a psychological sense is, even setting AGI aside, what would this do to people? Imagine, hypothetically, as a thought experiment, that you had a brain chip (a cranial jack, like in William Gibson’s novels) that could read what thoughts were in your brain and project them to other people. What effect would that have on us?
What if you came back from a late Friday night out and you wouldn’t let your wife tap into your cranial jack? Is she going to become suspicious? What if you won’t let your boss look at what you are thinking? How much more efficient could a software team be if you could share thoughts of a certain nature? Maybe you would want to filter out certain thoughts and share other ones. You could make an argument that once you had this kind of technology, the nature of the human mind would change dramatically, in particular the construct Thomas Metzinger calls, in his book Being No One, the “phenomenal self.” This is the image we make up of ourselves that guides our interactions. Maybe it will either go away or radically change with this type of technology. Of course, we don’t know.
There is a possibility that you could have a new kind of cognitive agent, which I’ve called a “mindplex.” This is a group of minds, maybe human minds, that are very tightly coupled together. Each has its own individual consciousness, maybe with some kind of collective consciousness at a higher level, something that we cannot have now but that a different technological substrate might enable. What I have just said about humans you could envision all the more readily with intelligent AI systems, because for them the cranial jack essentially already exists: you can read out the internal representation of an AI system to other ones. You get into interesting questions there about sharing cognitive content between AIs of similar versus different designs.
What happens when you put AGI and sousveillance together? The first, most obvious point is that AGIs may be necessary to enable really effective sousveillance. We already see the inklings of this in what is facing the U.S. intelligence community and other intelligence agencies. They are gathering a lot of information about all of us right now. Part of the reason Big Brother is not here yet is that mining all the information that they are gathering is really hard. Look at all the security cameras watching us everywhere, all the information about our credit card transactions and in Google’s databases, which is most likely being piped to other repositories run by the government. Why are they not doing more with that? Part of the reason is that they may not be as evil as some people say. Part of the reason is surely that mining through all that knowledge is really hard.
We all know enough about AI here to know how hard it is to find relevant information in a huge, heterogeneous knowledge store. Speech-to-text does not work that well, search engines do not work that well—we are still stuck on keyword search—image search does not work that well. If all those things really worked, Big Brother might be here already with the existing surveillance technologies. Arguably a powerful general intelligence, or even the right kind of combination of advanced narrow AI technologies, could enable both surveillance and sousveillance.
We can see the start of this phenomenon with things like Google Street View. There are cameras, and you can go to Google Maps and see who is walking on the street in some places. I think they did something to obscure people’s faces, because it seemed to be an invasion of privacy to watch people walking around. You could counter-argue: is it really an invasion of privacy? If you are walking down the street, everyone can see you. How is it different from seeing you through a camera? That is just going to get worse and worse.
Another interesting option, in the cranial jack scenario, is the possibility of plugging into Google, or plugging into Mathematica (which would make theorem proving a lot more fun). If you have an AGI theorem prover, a mathematician who cannot hybridize into a cyborg mind with it will be at a disadvantage. That makes for AGI + human mindplexes.
Then there are all these questions of AGI ethics. What happens when you make smarter and smarter AIs? Could better surveillance technology help with this? You could inspect the AGI’s mind as it gets smarter and smarter, and if it develops the desire to annihilate you, perhaps intervene and protect against it somehow.
When you follow this train of thought a little further, it gets kind of interesting. The most interesting scenario to think about is the case where there is sufficiently advanced sousveillance for a bunch of minds to see what each other are doing, and there is relatively symmetric practical power among the different agents. You come to some interesting conclusions. If you have a situation where one agent is massively more powerful than everyone else, it does not particularly matter whether you can see into its mind or not. If I am much stronger than you, and I have a huge weapon and you don’t, and you can see that I’m coming at you, and can even see in my mind that I’m thinking of clobbering you over the head, if you do not have the power to stop me it does not help you very much. If you have a collection of agents that are roughly equally powerful, though, you get potentially some interesting dynamics, given universal surveillance.
I think that there is a real issue here in terms of conformism versus innovation. You can see a bit of this in the financial markets. If you have a situation where everyone can see what everyone else is doing, and no one is too different from anyone else, it is probably a fairly tractable problem to make sure no one gets too far out of line and does something nasty. If you imagine a community of 10,000 very smart AGI systems that can monitor each other’s cognitive states to some level, and they are operating fairly similarly, they can probably tell if one of them is getting out of line and starting to develop dangerous tendencies.
On the other hand, if one of those guys is thinking according to very different algorithms and is just behaving in a way that is unpredictable relative to the other ones, the other guys are not going to be able to tell what it is doing. I would say sousveillance combined with conformity potentially could provide a measure of safety. This gets back to these evolutionarily stable strategies. If you have a bunch of guys who are fairly similar and they all understand each other, if anyone gets too weird you clobber him. Monoculture survived a really long time before we developed advanced technology.
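To make that intuition concrete, here is a minimal sketch, my own and not something from the talk, of how a population of broadly similar agents might flag a deviant member statistically. The cognitive-state vectors and the flag_outliers function are hypothetical illustrations, assuming each agent’s observable state can be summarized as a feature vector:

```python
import numpy as np

def flag_outliers(states: np.ndarray, threshold: float = 6.0) -> np.ndarray:
    """Flag agents whose 'cognitive state' vector sits far from the population.

    `states` is a hypothetical (agents x features) array; the threshold is in
    robust z-score units. Everything here is illustrative, not a real AGI API.
    """
    center = np.median(states, axis=0)
    # Distance of each agent's state from the population's median state.
    dists = np.linalg.norm(states - center, axis=1)
    # Robust spread of those distances (median absolute deviation).
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = (dists - np.median(dists)) / (1.4826 * mad)
    return np.where(scores > threshold)[0]

rng = np.random.default_rng(0)
population = rng.normal(size=(10_000, 32))  # 10,000 broadly similar agents
population[42] += 8.0                       # one agent "thinking differently"
print(flag_outliers(population))            # -> [42]
```

The catch is exactly the one just described: this works only because the population is roughly homogeneous. An agent running genuinely different algorithms need not show up as an outlier in any fixed feature space, which is why sousveillance plus conformity buys a measure of safety while sousveillance alone does not.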
It is true that stability is not always the optimal state. Innovation generally results from non-conformity, and hyper-conformist societies can stagnate. There is a kind of give and take here between safety and innovation. This gets into the economics of future societies, and the financial markets are an interesting analog. I spent some time doing consulting work with hedge funds and other financial actors. I should say as a preface that I do not believe in the efficient market hypothesis. Inefficiencies pop up in markets, and smart people exploit them; the markets are close to efficient, but not fully efficient. If you buy that, which is the attitude of most traders and quantitative finance people, the next question along that line is how to gain an advantage in an environment where everyone sees the trades that everyone else is making. Here it is important not just to be smart, but to be smart in a weird way… what I call “peculiar cleverness.”
Support vector machines are pretty smart, but there are a lot of people trying to do trading using support vector machines now, so the benefits to be gained from that method are largely priced into the market. Feed-forward neural nets were priced into the market a long time ago, and before that linear regression was priced in. It is not enough to be really clever. In a case where everyone can see what everyone else is doing and copy it, you have to be clever in a weird way, so that people will take a long time to copy you. In a situation where there is so much mutual observability among actors that are roughly equally powerful, what you need to prevail is to be smart in a way where no one else can understand what you are doing for a while.
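As a toy illustration of this “priced in” dynamic, and purely my own sketch rather than anything presented in the talk, imagine a fixed pool of exploitable inefficiency that shrinks as more traders adopt the same method and split what remains:

```python
from math import exp

def per_trader_edge(total_alpha: float, adopters: int, decay: float = 0.5) -> float:
    """Toy model: a pool of exploitable inefficiency (total_alpha) shrinks
    exponentially as more traders adopt the same method (prices adjust),
    and whatever remains is split among the adopters. All parameters are
    invented for illustration."""
    remaining = total_alpha * exp(-decay * (adopters - 1))
    return remaining / adopters

for n in (1, 5, 50):
    print(n, round(per_trader_edge(100.0, n), 3))
# 1 100.0 / 5 2.707 / 50 0.0 -- the method's benefit is competed away
```

Under these assumed dynamics the first adopter captures nearly everything, and by the fiftieth the edge has effectively vanished; being peculiarly clever amounts to keeping yourself the sole adopter for as long as possible.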
That would be even more true in a community of general actors that were doing something besides financial trading. Imagine that all of us in this room were competing at some goal, like AGI research, and we could see what each other were thinking. Let’s say we were competitive and wanted to beat each other to the goal. How would we get there? If we thought in such a convoluted and weird way that even when the other guys could see our thoughts they would not understand what they were seeing, we would have a chance of getting there ahead of anyone else. Whereas if you were really smart but everyone else could see what you were thinking, the odds of beating everyone else would be lower. Peculiar cleverness is a big advantage in the face of sousveillance. On the other hand, non-conformity has to be squashed because it could lead to someone causing damage.
You have a really interesting dichotomy between freedom and security. Of course, we have that all the time, but it becomes much more poignant when you have this type of universal surveillance technology. Similar issues come up when you talk about the notion of mindplexes. That makes things even harder to track because, as I said before, the most interesting situation is when you have actors that are roughly equally powerful. If you have a mindplex where 500 people meld together into a Borg mind to think better, predict the future better and be peculiarly clever better, then what is the individual actor? Is it the individual guy, or is it the Borg mind? It’s hard to know which individuals seek an unfair advantage when you cannot tell who the individuals are. You come to the conclusion that mindplexing, fusing into group minds, would need to be restricted in this kind of future to prevent anyone from becoming more than x times smarter than everybody else.
There are a lot of potential future scenarios that could come out of this. The next few slides run through a few fairly obvious possibilities. I do not claim to know the probability weighting of all these different science fictional possibilities. One simple possibility is that we get to watch each other, everyone gets kind of embarrassed, and we become a puritanical society without much innovation. I don’t think this is too likely, but you can see it as a possibility.
Another possibility is that all the inhibitions and nasty impulses we have get dampened out because everyone can see what everyone else is doing. We realize that we can waste fewer resources by not worrying about it, and we see through each other’s games and ruses. Then we could get a sousveillant utopia. That also probably is not very likely, but it’s interesting.
The panopticon is the scenario where sousveillance fails because large organizations get superior monitoring resources. Basically sousveillance gives way to surveillance, which I am afraid may be a more likely possibility than the previous two. However, I do not place too much weight on my probability estimates.
A mindplex utopia is where individuality basically goes away and it becomes to everyone’s benefit to link into the global brain. This is an interesting scenario, which I also think is not completely unlikely. We think of it as giving up freedom to link into the Borg, because that is how Star Trek portrayed it. Ultimately, though, a neuron may not have its freedom restricted, from its own point of view, by being part of a brain. We may be able to link into some higher-level global mind and still have a sense of individuality, freedom and personal satisfaction.
Then of course we have the Borg collective, well known from science fiction, where the mindplex level, in order to achieve advantage, becomes more and more restrictive of what the individuals can do. I would say we actually don’t know whether, in a competition between mindplexes, the more Borg-like mindplexes or the mindplexes allowing their members more internal freedom would be more efficient. It may depend on a lot of other factors.
Instead of the Borg collective, we could have the “bored collective.” This is sort of like the first scenario but without all the Puritanism. The singularity kind of throws a wrench into this whole thing. As Vernor Vinge says, once something gets sufficiently more intelligent than we are now, we are idiots to think we can predict what is going to happen, whether that intelligence is achieved individually or through the mindplex.
Of course, another scenario that I’m afraid is not that unlikely is that something goes wrong with all these dynamics and we can watch very closely what some other agent is doing while it prepares to annihilate us, and then does so.
In conclusion, sousveillance is an advanced form of collectivism that we cannot understand very well right now. I think it is a form that is fairly likely to happen, just as I think totalitarian surveillance-based society is also fairly likely to happen. I’m not sure which one is more likely. I tend to buy David Brin’s argument that with the advent of monitoring technologies, either sousveillance or surveillance is likely to be the future, unless we bomb ourselves to oblivion first. If it is the sousveillance-based future, there are going to be real issues with preserving the right to innovate and non-conform. This is an issue that we all should be thinking about.
There are huge benefits in terms of possible increased cooperation and harmony between people, and also huge risks to individuality and diversity. Certainly in terms of the safety of advanced AGI systems, sousveillance provides no guarantees, but it does change the dynamics: the scenarios for AGI safety are different from those in which sousveillance does not exist. The dynamics are different and they are very complex. When you spend time thinking about them, you run into an awful lot of uncertainties, which is hardly surprising, considering that we can just barely think through scenarios where we introduce AGIs into current society without any other intervening variables such as sousveillance.