Yes, AI should be open

Scott Alexander: Should AI be Open? 

Or are we worried that AI will be so powerful that someone armed with AI is stronger than the government? Think about this scenario for a moment. If the government notices someone getting, say, a quarter as powerful as it is, it’ll probably take action. So an AI user isn’t likely to overpower the government unless their AI can become powerful enough to defeat the US military too quickly for the government to notice or respond to. But if AIs can do that, we’re back in the intelligence explosion/fast takeoff world where OpenAI’s assumptions break down. If AIs can go from zero to more-powerful-than-the-US-military in a very short amount of time while still remaining well-behaved, then we actually do have to worry about Dr. Evil and we shouldn’t be giving him all our research.


// I’ve been meaning to write a critical take on the OpenAI project. I’m glad Scott Alexander did this first, because it allows me to start by pointing out how completely terrible the public discussion on AI is at the moment. We’re talking about AIs as if they were Super Saiyan warriors with some explicit “power level,” as if such a number would determine the future success of a system. This is, for lack of a better word, a completely bullshit adolescent fantasy. For instance, there’s no question that the US government vastly overpowers ISIS and other terrorist organizations in strength, numbers, and strategy. Those terrorist groups nevertheless represent a persistent threat to global stability despite the radical asymmetry of power – or rather, precisely because of the ways we’ve abused this asymmetry. “Power level” here does not determine the trouble and disruption a system can cause; comparatively “weak” actors can nevertheless leave dramatic marks on history. Or better: corporations have quickly become more powerful than governments in many ways without needing to mount any direct military threat to governmental power. Corporations aren’t omnipotent, but they have influence and power in ways that governments do not. Among other things, they use this power to control the operation of government from the inside. This threatens democracy and the well-being of humanity without posing anything like a nuclear threat. Such threats can be much more insidious precisely because they don’t come from the coercive force of a gun.

This whole absurd discussion is the product of believing, falsely, that “human-level intelligence” is a coherent concept, and that if something demonstrates better-than-human intelligence it somehow represents a threat to us and to our capacity for controlling it. This assumption frames all of Alexander’s discussion in this post, and all of Bostrom’s and Yudkowsky’s, and all of the discussion among the Singularity crowd, and it is endlessly frustrating because it is so hopelessly immature. It is a comic-book-level discussion of intelligence and control, and the hyperbolic worries it generates are little more than red herrings. There are genuine concerns about the future of AI, but superintelligence is not among them.

First of all, super-human intelligence has nothing whatsoever to do with control. I recently watched a documentary on Coxeter (https://goo.gl/3vPXHW) which includes, among other things, an interview with his daughter talking about her controlling mother, his controlling wife. She explains that ceding control of everything else in family life to his wife allowed Coxeter to invest himself fully in his mathematics. On the range of human intelligence, Coxeter veers much closer to “super” than most. But he lived his life subservient to someone with (likely) a far less capable brain, who was nevertheless able to use other forms of influence and power to dominate the superior intelligence.

The Alexander/Bostrom discussion assumes that some neat equation will convert a given level of computing power into its military-power equivalent, and that since the power of AI is growing in one domain it represents a threat in all domains of power. But this simply isn’t how power or intelligence or control works. El Niño is one of the more powerful climate phenomena on the planet, and depending on where you live it can be a destructive or a rejuvenating force. But it’s also (literally) as dumb as rocks, and humans are more or less completely at its mercy. Trying to map out coherent relations of power, intelligence, and control for literally any other natural phenomenon is a complete nonstarter, so why do we think these are useful starting points for talking about AI? Oh yeah, because we think that “superhuman intelligence” is the magic key to infinite power and control. If we drop this assumption, the whole suite of worries collapses under its own weight.

Look again at the quote from Alexander’s piece above. He’s worried that a private actor will develop AI more powerful than the government can contain. So he’s really worried about only one form of control: the government’s capacity to check a private actor. Now imagine a comic-book scenario where Trump wins the presidency and announces a fifth Reich and a Muslim holocaust, and somehow the whole of the US military agrees to enforce his maniacal demands. In such a scenario, you might imagine a thrilling, Independence Day-style action movie where developing superior AI is the only way for humanity to defeat the fascist monster, and where the government’s incapacity to check the AI is a good thing! The situation is absurd, of course; the point is that it can’t be taken for granted that the government is on the side of Dr. Good. If we’re just looking at “power levels” we’re making a huge, oversimplifying mistake.

And THIS is the real problem with the OpenAI initiative. It claims to want to operate in the interest of the public, but whose public? How can we be sure that the board represents the interests of humanity considered in its entirety, and not just the interests of a few panicked, eschatological SV billionaires? The reason we assume the government operates on the side of Dr. Good is that, in theory, the people have democratic control over the operation of government, and so the people’s interests are indirectly represented. But as I mentioned above, this isn’t always obvious; there are plenty of cases where the government seems to represent private interests over the interests of the people, despite its apparent obligation otherwise. It is hard to think of the US military as operating solely in the interests of the American people, and not in the interests of the military-industrial complex it serves more directly.

Corporations have no obligation to operate in the interest of the people. Governments and nonprofits can claim to represent the people, but they need only do so nominally, just enough to avoid generating the hostility that would cost them power. The threat in both cases is a lack of control. Control doesn’t come through superior intelligence or power considered as abstract quantities. Control arises through mechanisms that can be used to real effect. A virus that controls my immune system can take me down despite being far less intelligent and powerful than I am. If you want to control anything, no matter how big, intelligent, or powerful, you need to find the mechanisms that give you leverage over its dynamics.

The government has traditionally been the primary source of democratic control because voting and elections describe the precise mechanisms whereby control is leveraged. Those mechanisms can be corrupted, however, so there’s a constant question about whether our democracy actually gives people meaningful control. This question cannot be taken for granted. You test your control through feedback: you exert control and then check to see whether it does what you expect. Studies have shown that the US does not function as a democracy, and that the people don’t actually have meaningful control over what the government does. This wasn’t a change in the intelligence of the government; it was a change in the mechanisms of control. Maybe you like the US government and think this isn’t such a bad thing. But it demonstrates that living within a system over which we have no control doesn’t require that the system be particularly intelligent. The threat of a lack of meaningful control is not unique to AI; it is an ever-present threat of organization itself.

Of course, the US government is already a superhuman intelligence: it is composed of many organized systems of human and machine intelligence, and as such outstrips the power and control of any individual human. But its power does not rest in its intelligence; it rests in the particular mechanisms that keep the system functioning as it does. Whether we have control of the system depends on the state of those mechanisms. And that’s where our focus must always be.

Nonprofits like OpenAI can also become hostile actors; the fact that it operates as a nonprofit is not itself a safeguard against danger. This was my worry about the project, and this is why Alexander is wrong to think open source poses a greater risk than reward. Open source provides another mechanism for public control of the software. Even if the advisory board is compromised and the government is defanged, open source ensures that the public at least knows what is going on with the system, and provides the opportunity to mount an effective response. Open source doesn’t stop people from writing malicious code, but if the code for every virus were on GitHub we’d have a much easier time tracking them down and mitigating their effects. OpenAI doesn’t stop other actors from developing AIs for whatever potentially malicious ends they want, but it does give us some mechanisms for control over what it can do.

But more importantly, OpenAI serves as a model for open source governance, one that takes for granted the unreliability of government and private business when it comes to protecting the people. I was skeptical that Musk’s intentions here are entirely humanitarian, but the OpenAI project seems to recognize this skepticism as valid, and opening its source code is central to that recognition. If successful, OpenAI might accelerate the development of AI for both good and malicious purposes, but if that development happens in the open we will hopefully have enough time to study the opportunities for control. The long history of open source software suggests that open development leads not just to better, safer code but also to much better public understanding. These are unqualified good things.

Alexander thinks the trade-off is that a malicious “hard takeoff” scenario might catch us unawares, and to avoid that possibility he’s willing to risk the one potential mechanism for control – open source – that the public has at its disposal. The problem is not just that Alexander is willing to trade a freedom for a security. The problem is that open source is ultimately the only form of security the public has. With anything less than open source, the public would be entirely dependent on possibly corrupt third parties to represent our interests. On the other hand, establishing open source as the standard, along with the public’s right to access and information, has the potential to pressure other AI developers to follow suit. A world where the majority of AI is developed in the open is undoubtedly better and more secure than the alternative. The possibility of a malicious actor catching us unawares goes down dramatically in a world where most AI techniques are discussed widely and openly.

Yes, AI should be open. Everything should be open; closed systems beyond meaningful public control pose a greater threat to humanity than any AI being developed today. Keeping AI private also keeps the public in the dark, and those are the ideal circumstances for being caught unawares by malicious agents.

Title image from Nathan Sawaya
