On Chalmers

David Chalmers at Singularity Summit 2009 — Simulation and the Singularity.

First, an uncontroversial assumption: humans are machines. We are machines that create other machines, and as Chalmers points out, all that is necessary for an ‘intelligence explosion’ is that the machines we create have the ability to create still better machines. In the arguments below, let G be this self-amplifying feature, and let M1 be human machines.
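To make the structure of that premise explicit (the notation below is my own gloss, not Chalmers’), treat G as a quantity attributable to each machine and write the premise as a recursive schema:

\[
\text{for each } n,\ M_n \text{ can build an } M_{n+1} \text{ such that } G(M_{n+1}) > G(M_n).
\]

By induction this yields a sequence M1, M2, M3, … of strictly increasing G. Strict increase alone does not guarantee an ‘explosion’, since such a sequence could in principle converge; a genuine explosion also requires that the increments not diminish, an assumption left implicit here.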

The following arguments unpack some further features of the Singularity argument that Chalmers doesn’t explore directly. I think these features, when made explicit and taken together, show Chalmers’ approach to the Singularity to be untenable and his ethical worries to be unfounded.

The Obsolescence Argument:

(O1)  Machine M1 builds machine M2 of greater G than M1.

(O2)  Thus, M2 is capable of creating machine M3 of greater G than M2, leaving M1 “far behind”.

(O3)  Thus, M1 is rendered obsolete.

A machine is rendered obsolete relative to a task if it can no longer meaningfully contribute to that task. Since the task under consideration here is “creating greater intelligence”, and since M2 can perform this task better than M1, M1 no longer has anything to contribute. Thus, M1 is ‘left behind’ in the task of creating greater G. The Obsolescence Argument is at the heart of the ethical worries surrounding the Singularity, and is explicit in Good’s quote. Worries that advanced machines will harm us or take over the world may follow from this conclusion, though they need not. However, obsolescence does seem to follow necessarily from an intelligence explosion, and this on its own may be cause for alarm.

The No Precedence Argument:

(NP1)  M1 was not built by any prior machine M0. In other words, M1 is not itself the result of exploding G.

(NP2)  Thus, when M1 builds M2, this particular act of creation is not leaving anything “far behind”.

(NP3)  Thus, when M2 builds M3 and initiates an ‘intelligence explosion’, this is an unprecedented (that is, a singular) event.

The No Precedence Argument goes hand-in-hand with the evolutionary considerations that motivate Chalmers’ positive suggestions at the end of his lecture. M1 was produced by dumb evolutionary processes. Designing intelligent machines, the defining feature of M1, is supposedly not itself a dumb process, even if M1 uses methods inspired by dumb evolutionary algorithms. Thus, when M1 builds M2, this is a straightforward application of G; call it “linear G”. When M2 creates M3 and produces exploding (or exponential) G, this is an unprecedented event. Since it is unprecedented, we don’t know what to expect from exploding G. This is also cause for alarm.

Both the Obsolescence Argument and the No Precedence Argument are packed into Chalmers’ formulation of the Singularity, and both give reasons to be worried about this event. Chalmers seems to implicitly endorse both arguments, and indeed argues that there are grounds for taking precautions in response to these possible threats. Furthermore, both arguments have the implication that if such an event is possible, it certainly hasn’t happened yet, and likely won’t happen in the near future. We currently play an essential role in the development of technology, and show no signs of becoming obsolete in the design of future technologies. As a result we have not experienced (and have no grounds for expecting) any radical discontinuities in the development of technology. Chalmers is explicit that if the Singularity is possible, it is a distant concern. This is evidence that he is committed to something like the above arguments.

I will argue that the conclusions of both arguments are false. First, I will argue that the ‘intelligence explosion’ predicted by the Singularists is not unprecedented, but is fundamental to the nature and use of technology. From this, I will argue that we are not in danger of becoming obsolete; the introduction of better technology does not leave us behind any more than the introduction of the wheel or the computer left us behind. Instead, it changes who ‘we’ are.

My arguments are largely inspired by Andy Clark’s discussion of technology, which Chalmers knows well but leaves out of his talk. Let’s formulate a Clarkian principle of technology to help us along:

The Interdependence Principle: Human intelligence and the technology it creates are fundamentally interdependent.

There is a superficial sense of interdependence that makes this principle obviously true. Machines need us to build and use them, and we need to use and build machines in order to survive. But Clark would say this mutual dependence is closer to a kind of symbiotic relationship, where who we are is essentially tied to the tools we use. Our very capacity for G is not a feature of our naked brains, but is the result of thousands of years of developing an intimate (Clark’s term) relationship with technology. Our best machines today are not designed by any isolated human brain. They are designed by collectives of brains in cahoots with a variety of technological machines that assist in the design, development, and construction of still better machines. Only these elaborate cooperative enterprises, incorporating both humans and machines, enable our steady technological progress. The computer I am writing on could not have been built without the computers that came before. In other words, O1 is true; not just possibly in the distant future, but actually, in our world today. AI is already a fact about our world, in such a straightforward and familiar way that we hardly recognize it. Here, I’ll introduce you.

If the Interdependence Principle is true, and I believe it is, then (NP1) and (NP3) are worse than simply false; they represent a fundamental conceptual error in thinking about the relationships between humans and their technology. If M1 represents humanity at its current level of technological development, then clearly we ARE the result of the humans and machines of a different technological age. We stand on the shoulders of giants, and some of those giants are robots. And the next generation of machines will exploit and incorporate our own generation; we will be swept along as technology marches forward. In other words, there is no substantive distinction to draw between M1 and M2, or indeed between M1 and M3. These are not unprecedented leaps into an unknown future; they are the signposts of a very familiar pattern of human behavior.

This does not suggest that the future of technological development can be predicted with any accuracy or certainty, and I am not suggesting that we try. One of the benefits of having philosophers speak on the Singularity is our deeply ingrained skepticism about induction and our general distaste for futurism; both induction and futurism run rampant among Singularity enthusiasts. Chalmers does us all a service by staying cool about the future.

If there is no distinction to draw between ourselves and the machines with which we coexist, then we are not threatened by the possibility of obsolescence, for we are necessarily carried along with the intelligence explosion; indeed, we partly constitute it. This does not mean it will be a painless transition. Technological change has always caused enormous suffering; witness Detroit or China. Such examples show that technological change does leave people behind (e.g., the Digital Divide), but this is because technology is part of humanity, and it therefore participates in the same social and political institutions as we do. This is certainly cause for concern, but not of the variety that Chalmers suggests. The very idea of incomprehensible technology, of the sort that the Singularists argue for, rests on a denial of the Interdependence Principle. Endorsing the principle makes it clear that O2 and O3 are likewise false.

Since both arguments violate the Interdependence Principle, both can be rejected. I suggest that if we reject both arguments, then there is nothing left of the Singularity to worry about. Of course, there are still worries about the use of intelligent machines, and it may very well be possible to initiate an intelligence explosion (on my view, we already have!). However, there are not, and will not be, discontinuities leaving us “far behind”, because continuity is part and parcel of our continued use of technology. The curve of technological growth might be exponential, but from our perspective it will always seem gradual, because we are traveling in the same frame of reference.
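A worked illustration of that last claim (mine, not Chalmers’): suppose capability grows exponentially, say f(t) = A e^{kt}. Then over any interval of length \Delta the relative change is

\[
\frac{f(t+\Delta)}{f(t)} = \frac{A e^{k(t+\Delta)}}{A e^{kt}} = e^{k\Delta},
\]

which is independent of t. An observer whose own standards, tools, and expectations scale along with f sees the same proportional change at every moment; no point on the curve is singled out as a discontinuity.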

If correct, the Interdependence Principle not only defeats the Singularity, but also reveals Chalmers’ positive suggestions for dealing with it to be hopelessly naive. It is impossible to build machines that we can isolate from having any effect on the real world. One implication of the Interdependence Principle is that even the most exclusive, proprietary, well-protected machines can have dramatic consequences for humanity, even if they are rarely used (consider the atomic bomb). Whatever thin veneer of safety Chalmers thinks he can derive from the metaphysically suspicious distinction between the real world and the virtual world betrays a deep misunderstanding of the very nature of technological change. Insofar as interest in the Singularity is symptomatic of a deep fear of the unknown technological future, we would be wise not to reinforce the mysticism surrounding machines that Chalmers’ argument represents.

1 Comment

  1. In conversation with a colleague, I was asked about the nature of a discontinuity between humans and machines. He argued that if my claim is just that there is always some causal continuity, then it is trivial; but why can’t there be different kinds of continuity? For instance, what if we encounter alien computing machines? Wouldn’t such machines pose the same kind of risk that AI might pose? My response follows:

    I don’t think I need to appeal to *causal* continuity to make my claim. I’m not entirely sure I agree that causal continuity is trivial, but then again I’m not entirely sure I believe in causation. But that’s neither here nor there. Even the singularists will agree that there will be causal continuity.

    My problem is more general than that. I think there is an incorrigible problem of individuation among mechanical systems: the way you choose to identify individual machines is at best underdetermined by facts about the machine. My rejection of the singularity argument is precisely that it assumes a naive sort of individuation that I don’t think stands up to scrutiny. The very idea of a machine that can stand isolated from others is, I think, a misunderstanding of the nature of technology.

    This doesn’t mean that all the machines we build will play nice with us, or that there won’t be conflicts. We have definitive empirical evidence to the contrary already. Besides, if the worry is just “machines will do bad things to us”, this is true of technology generally and has nothing to do with an intelligence explosion. The worry put forward by the Singularists is explicitly about the intelligence of future machines, and the discontinuity is one of incomprehensibility. They argue that if intelligence explodes, it will outstrip humanity: not only will it free itself from our control, but it will resist any attempt at keeping up with its advance. This is the point of ‘leaving us behind’. Exploding technology is singular because it leapfrogs past our intellectual abilities. This prediction only makes sense on the assumption that we can individuate M2 from M1, and M3 from M2, and I am denying that we can so individuate.

    This is a little more complicated than I wanted to get into in my little blog post, but my view here is basically Davidsonian. Davidson says that there is at most one language. In a sense, I am saying that there is at most one machine.

    Davidson’s point is roughly that we should be able to translate any language into our home language, if we are charitable enough. In other words, it is impossible to come across a language that completely resists interpretation, or for my purposes, we will never come across a language that is incomprehensible to us. If we can’t find a translation, then it simply isn’t a language, but the principle of charity demands that we do a lot of work before we draw this conclusion.

    I want to say the same thing about machines. You might say that I am a holist about technology. There is always a way of understanding the behavior of machines as extensions of our own humanity. This may require enlarging the scope of what we consider human (I think technology always forces us into this position of re-evaluation), and as such we will never encounter any truly incomprehensible machines. This requires something like the principle of charity for judging the behavior of machines, but I think that’s exactly what Turing was after with his principle of Fair Play.

    In other words, your concerns are right on point: what happens if we encounter other machines that seem to have non-human origins? This puts us precisely in the position of radical translation of the sort that the early philosophers of language were worried about, and my response is exactly the same. Either we recognize the other machine as a “translational” variant of our own home purposes (in other words, we interpret the foreign machines as included in ‘humanity’ broadly construed), or, if the machine continues to resist translation, we ultimately decide that there is nothing there to translate, that they aren’t speaking a language at all. But in no case do we resign ourselves to brute incomprehensibility, and so we never encounter the sort of discontinuity assumed by the Singularists. It’s just not in the cards, if we accept a principle of charity. By the way, including non-humans in the scope of humanity isn’t just wide-eyed fantasy; I think this perspective lies behind a lot of environmentalist ethics.

    Now I admit this puts me in a hard position, because lots of my recent work has been on machine autonomy, and I have argued that autonomy is necessary for artificial intelligence. But I am arguing for a sense of autonomy that allows us to describe a machine as a participant, in the same way that a human can be a participant, and this doesn’t require the deep sort of isolated individual that, for instance, Strawson would defend. It’s not a metaphysical claim; it’s a claim that the machine is *practically* independent. This requires some kind of individuation, but that individuation is always understood in the context of the activity that the participants are engaged in. So Deep Blue is playing chess (i.e., is an autonomous participant) when it defeats Kasparov, because the rules of chess are such that there are two distinct players, and one of them is Deep Blue.

    In the same way, we might build a machine that, for whatever reason, is at cross purposes with us and with which we find ourselves in conflict. And in terms of that conflict, there are individual parties engaged (or participating) in that conflict. But such conflict, if anything, demonstrates just how closely linked we are with machines, and how easily they pull us along with them. The very fact of conflict reveals a deep continuity between machines, the existence we all share.

    Putting conflict aside, I also fully admit that it is possible to build machines that are much smarter and faster than we are (on a parochial reading of ‘we’). Such machines are far less likely to be in conflict with us, and far more likely to just not care about us. I really like the following short story by Ted Chiang:

    http://eripsa.org/the-evolution-of-human-science/

    Anyway, I hope that clarifies some things for you; it definitely helps me.
