Part 1: Chomsky’s misreading of Turing
In this interview (at [10:05]), Chomsky reads Turing’s (1950) remark that “I believe the question ‘can machines think’ to be too meaningless to deserve discussion” as a claim about the improbability of AI. He interprets it as if Turing were claiming that the issue of AI and thinking machines is irrelevant or uninteresting.
This is a deliberately misleading interpretation. Turing obviously cares a lot about the issue of thinking machines, as evidenced by, for instance, the letter he sent his friends “in distress”.
+Jay Gordon clarifies Chomsky’s views on Turing as follows:
Chomsky states that Turing states that whether or not machines can think is a question of decision not a question of fact, akin to whether an airplane can fly. Chomsky actually cites Turing verbatim on this issue in his book Powers and Prospects (p 37ff -ed.)
I’m not sure I appreciate the distinction drawn between a question of decision and a question of fact, or the suggestion that Turing treats the question of thinking machines as the former instead of the latter. Turing recognized it as a fact that in his time people refused to accept the proposition that machines can think. But he also recognized that by the turn of the century these prejudices against machines would change, and that people would speak more freely of thinking machines. And getting from the former to the latter state of affairs isn’t a matter of any one decision; Turing thought it was a matter of social change, on par with the reversal of attitudes towards homosexuality, both of which unfortunately came too late for his time. Turing says the question “can machines think” isn’t helpful in this process because it invokes conceptual and prejudicial biases about “thinking” and “machines” that themselves can’t be clearly resolved, largely because they rely on confused metaphysical (or superstitious) beliefs. Turing’s claim of “meaninglessness” reflects the positivist attitude of the scientists and logicians who formed his intellectual community. The question “can machines think” is too meaningless to deserve discussion because it invokes these metaphysical confusions and keeps us mired in pointless debate.
Nevertheless, Turing thought that our attitudes towards these machines would change, and that by the turn of the century we’d simply accept it as a fact that machines could think. And, of course, he’s right: every day we talk to and engage with intelligent machines that play a wide variety of roles in our lives. We routinely deal with them at the level of intentional agents capable of linguistic exchange. We don’t talk to them like people, but of course Turing never expected we would. His Turing test was not designed to make machines more intelligent; it is a test to help us better see the intelligence in our machines. Turing also defends “fair play for machines” and emphasizes their social impact (including our tendency to discriminate against them) in several places outside his 1950 paper.
Concluding that Turing thought AI was “meaningless” is a deliberately misleading interpretation of his claims and views. Not only did Turing believe that machines can think, he believed further that it’s important to advocate on their behalf. Using Turing to argue that AI has not been important to our understanding of our minds and the world is exactly antithetical to Turing’s thesis.
Chomsky has a generally very pessimistic view of AI, and he’s trying to enlist Turing on his side, but it’s simply an abuse of Turing’s view. Chomsky also claims in this interview that the last 40 years of AI research “hasn’t given any insight to speak of into the nature of thought and organization of action”. He says (at [10:50]) that perhaps the only scientific insight gained from AI research like Deep Blue is in advancing computer engineering. He suggests that most AI research is just a cover for making consumer electronics commercially viable. He’s right that Deep Blue and Watson both appeared to be primarily marketing projects instead of disinterested research, but Chomsky is emphatically wrong to suggest that research into artificial intelligence hasn’t changed our concept of thinking. AI and computer science have had a tremendous impact on the entire structure of human knowledge, and have reshaped our thinking about ourselves and our world at many levels and in unaccountably many ways.
More generally, a computer is an idealized intelligent agent capable of following a set of clearly organized rules in regular and systematic ways. Using computers has allowed us to specify in a very clear and precise way the theoretical limits of any information processing system, and has given us an understanding of the more general class of agents of which human beings are only one type. In this way, computers serve as a model of thinking machines, and as a standard by which we conceptualize and evaluate ourselves.
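To make the phrase “following a set of clearly organized rules” a bit more concrete, here is a minimal sketch of a Turing-style machine in Python: a table of rules, a tape, and a head that applies exactly one rule per step. The code and the toy rule table (a binary incrementer) are my own illustration, not anything drawn from Turing’s papers.

```python
# A minimal sketch of a rule-following machine in the spirit of Turing's model.
# The rule table below (a toy binary incrementer) is purely illustrative.

def run_machine(rules, tape, state="start", blank="_", max_steps=1000):
    """Apply one rule per step until the machine reaches the halt state."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]   # the rule fixes everything the machine does
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Rules: (current state, symbol read) -> (symbol to write, head move, next state)
increment_rules = {
    ("start", "0"): ("0", "R", "start"),   # scan right across the number
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),   # past the end; start adding 1 from the right
    ("carry", "1"): ("0", "L", "carry"),   # 1 + 1 = 10: write 0, carry the 1 leftward
    ("carry", "0"): ("1", "L", "halt"),    # absorb the carry and stop
    ("carry", "_"): ("1", "L", "halt"),    # carried past the leftmost digit
}

print(run_machine(increment_rules, "1011"))   # binary 11 becomes "1100", binary 12
```

The point is only that everything the machine does is fixed by the rule table and the current symbol; that austerity is what lets us reason precisely about the limits of any such system.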
Part 2: Why it matters for reading Turing
+Deen Abiola asks for some support for the claim made above: “His Turing test was not designed to make machines more intelligent; it is a test to help us better see the intelligence in our machines.”
First off, I don’t know that anyone has argued that the imitation game was designed to make machines more intelligent. The standard interpretation of the test is that it sets a criterion for intelligence: namely, behavioral indistinguishability from human conversational linguistic performances. On this reading, Turing believes that if a machine meets that standard, then it should be treated as intelligent. So it’s not a method for improving our machines; it is a method for determining whether the description of intelligence is appropriately applied to them.
The standard interpretation of Turing basically stops there, although philosophical and technical questions still linger about the criteria Turing uses in the test. Most of the discussion about the test involves various ways of extending the behavioral criteria into other domains of human competence (with the limit being the so-called Total Turing Test), or more critically whether behavioral tests are sufficient for identifying intelligence at all (see Searle, Dreyfus, etc.), but none of this has much to do with Turing’s own views about AI. They are ways of using his criteria to address the question of thinking machines itself. In any case, we are left with the impression that Turing thinks intelligence is a specific property (or set of properties), and that when machines come to have that property then the term should be applied to them. This is, I think, more or less the interpretation Chomsky has, and it aligns pretty well with the consensus interpretation of the test.
I’m not disagreeing with the interpretation of the test; I’m just saying that Turing’s reasons for offering the test are obviously more complicated than the criterion itself suggests. For one thing, the 1950 paper begins with a discussion of the traditional “imitation game”, a parlor game in which people try to guess the gender of the interlocutors. Turing explicitly uses this game as the inspiration for the Turing test; if we were to apply the standard interpretation to the original imitation game, it would imply that Turing’s views on gender were as follows: a man is anything that acts like a man, and a woman is anything that acts like a woman. I think it’s reasonable to suppose, given Turing’s personal history, that his views on gender identity were a little more complicated than that.
In fact, I’m arguing that Turing’s views here are explicitly constructivist: there are no absolute criteria for gender identity (or intelligence), except what our collective social biases take them to be. If you actually play the gendered imitation game with a group of people, you’ll see that players tend to employ the broadest of gender stereotypes and generalizations in order to spot some potential difference between the players. It’s unlikely that Turing is endorsing the use of these stereotypes as an objective criterion for gender identity. The lesson of the game is precisely to show how the prejudices and stereotypes we have about gender inform the kinds of things we’ll recognize as being of some particular gender type. The gendered imitation game is supposed to be entertaining and a bit sexy because it highlights these otherwise unspoken prejudices about the genders. It’s obviously not meant to be a strict (much less scientific) criterion for judging the gender of an individual.
One feature of the imitation game is particularly important: the interactions are restricted to conversational questions and responses. In the gendered game, the point is to restrict the more obvious signs of gender identity, like visual appearance or vocal tenor, which would reveal too much about identity to make the game any fun. The Turing Test likewise eliminates visual appearance and physical performance, which Turing thinks would unfairly weight a judgment against the machine. Turing’s point is quite clear: if we saw that our interlocutor were a computer, then we’d typically be more skeptical of its intelligence than might be appropriate from its linguistic behavior alone. So the Turing test is designed to filter out these prejudices to give the machines a fair chance.
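To illustrate the structural point, here is a small schematic in Python of the blinded setup: the judge receives nothing but text over the channel, so appearance, voice, and machinery are filtered out before any judgment is made. The setup, the function names, and the canned replies (which loosely echo the summer’s-day exchange in Turing’s 1950 paper) are my own stand-ins, not Turing’s protocol.

```python
# A schematic of the blinded, text-only setup: appearance and machinery never reach the judge.
import random

def human_respondent(question):
    # Canned stand-in; in the real game a live human sits at the other terminal.
    return "I'd keep the summer's day; the comparison flatters me."

def machine_respondent(question):
    # Canned stand-in; in the real game this is the candidate machine.
    return "Count me out of it; wouldn't a winter's day do as well or better?"

def run_round(judge, respondents, question):
    """Hide who is who and pass the judge nothing but the respondents' words."""
    hidden = list(respondents)
    random.shuffle(hidden)                               # the judge cannot see which is which
    transcripts = [reply(question) for reply in hidden]  # text is all that crosses the channel
    return judge(question, transcripts)                  # the verdict rests on the words alone

def text_only_judge(question, transcripts):
    # Placeholder verdict; in the real game a human interrogator weighs the answers.
    return "Both replies read like answers a person might give."

print(run_round(text_only_judge,
                [human_respondent, machine_respondent],
                "Shall I compare thee to a summer's day?"))
```

Whatever verdict the judge reaches can only be reached on the strength of the words themselves, which is exactly the sense in which the game gives the machine a fair chance.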
This notion of “fair play for machines” is really the central motivation for my interpretation (and it is completely absent from the standard interpretation). It first appears in Turing’s work a few years before his 1950 paper, in the context of an argument about the computer’s ability to perform mathematical proofs. You can see the context of the quote here:
“Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words, then, if a machine is expected to be infallible, it cannot also be intelligent.”
Turing here is clearly worried about our biases in judging the behavior of the machine. The traditional Turing test sets out to filter these biases, so that the performance of the machine can be judged directly, without unfairly weighting the judgment against the machine. In the 1950 paper he remarks that the imitation game may itself be weighted against the machine; fairness for the machine is clearly at the forefront of his thinking.
In fact, in the responses to objections in that 1950 paper, Turing’s argument repeatedly takes the form of “how might we apply the same standards of judgment to the machine,” which looks to me clearly like an appeal to fairness. Consider, for instance, the theological objection: God gave a soul to men and not to machines, therefore machines cannot think. Turing’s approach is not to argue that machines really do have souls; instead, his response is to ask “why can’t God give machines souls?” The strategy is as follows: if you think X is an important feature of human intelligence, then Turing will show that the very conditions under which you judge a human to have X can also be used to judge the machine to have X, and therefore we should conclude the machine to be intelligent. In the argument from consciousness, for instance, we consider humans intelligent because they are conscious, but insofar as we make that judgment on the basis of convincing linguistic performances, then we might also make it of machines executing similarly convincing performances. Again, Turing isn’t arguing that consciousness is essential to intelligence; he’s arguing that if we think it is, then we should at least judge the machine by the same criteria we use to judge humans. This is fundamentally a plea for fairness for machines, not a defense of a strict ideological perspective on intelligence.
The upshot of all this is that Turing isn’t arguing that “behavioral indistinguishability” is sufficient for intelligence, in the sense that it is an objective standard for the description. Instead, Turing’s test is meant to provide the conditions under which a person might fairly judge the differences between a human and a machine, without simply falling back on prejudice. Turing argues, on the basis of simple fairness, that if this behavior is sufficient for judging the human intelligent, then it is also grounds for judging the machine to be the same. Turing isn’t interested in defending (or refuting) the claim that humans are intelligent; he takes it for granted that we think they are, and his test is designed to allow us to apply the same judgment fairly to our machines.
Insofar as the received interpretation of Turing has turned him into an ideologue about the nature of intelligence, it has done an injustice to his position on thinking machines. It has also left room for people like Chomsky to misuse Turing’s position to argue against technological optimism about thinking machines, and I find that incredibly unfortunate and worth responding to directly. Turing thought it important to speak on behalf of fairness to machines, and the imitation game is an attempt to describe a setting in which a human might treat a machine fairly. It’s completely appropriate that Turing was the first to recognize some social obligations we might have towards machines, and it’s completely inappropriate to use this argument, as Chomsky does, to suggest that Turing thought the issue of thinking machines was trivially irrelevant.
Part 3: Why it matters for all of us
Hopefully none of the above can be interpreted as a defense of the Singularity theory. I’ve argued against the Singularity theory for its unclear, quasi-mystical thinking about our technological age, and I agree with Chomsky’s assessment that the idea of a singularity is mere science fiction. +Matt Uebel is right to point out that science fiction often leads the way in science, but that’s surely compatible with keeping clear the distinction between serious theoretical inquiry and fantasy, and recognizing that the singularity theories exemplify the latter and not the former.
From history we know that people tend towards mystical thinking in situations of uncertainty and fear. I believe that the rise in popularity of singularity mysticism is symptomatic of our uncertainty with respect to the nature and future of artificial intelligence, and the fear that it has become increasingly important to our lives and yet beyond our control. Singularity theory has become popular in these conditions partly because there is no real alternative theory in the popular discussion for thinking about our technological condition, and insofar as it helps people understand their circumstances at all it is preferable to treating technological change as wanton and chaotic. I believe that the proper response to these circumstances is to provide a serious theoretical framework for thinking about the relationships between ourselves and the intelligent machines with which we share our spaces, but Chomsky’s deflation of the term “artificial intelligence” is antithetical to that project.
I think that Turing’s call for “fair play” provides an important guiding principle in developing an alternative theory of technology in the age of intelligent machines. Turing’s concern for artificially intelligent machines directly addresses their role as social participants in the games we humans play. Although we’ve historically tended to play these games mostly with each other (and perhaps a few domesticated animals), Turing believed that machines would increasingly find roles to play in our games, and he introduces the question of thinking machines to address this context directly.
Turing’s work suggests dealing with machines directly as intelligent agents, participating in social relations with human beings of the same sort in which humans engage one another: through the use of language and the accumulation of social conventions. Turing’s framework leaves open the possibility of treating intelligent machines as equals, but more generally Turing is concerned with judging their performances “fairly”, as we would judge any other agent. I think the science of multi-agent systems (a core development of AI research) provides the right framework for having that discussion, but the results are simpler than the theory suggests. Everyone judges Ken Jennings to be intelligent because of his success at playing Jeopardy. Watson was shown to outperform Ken Jennings at a fair game of Jeopardy. Therefore, we should treat Watson as intelligent, too. Even on the standard interpretation of the Turing test, this is precisely how the situation would be described. Unfortunately, Turing died too soon to see these events occur. But for Turing’s sake it’s worth explicitly recognizing that they have occurred, and indeed within the time frame Turing predicted. Turing’s views have been vindicated by history, albeit without recognition.
By Turing’s standards, we’re already living in a world populated by machines capable of incredibly impressive and nontrivially mind-like performances, each of which plays any number of important and meaningful social roles. These roles have huge consequences for who we are and what we take ourselves to be capable of, both individually and as a collective body. Without these machines, I would be an entirely different person, and we would be an entirely different global population. If any human being were playing the equivalent role these persistent and indefatigable machines play, they would certainly deserve respect and recognition for their contributions, and a salary besides. Machines should be afforded recognition for their contributions as well, by applying the same criteria on grounds of simple fairness. When Chomsky dismisses the results of artificial intelligence, he denies this recognition to the intelligent machines that already exist. He’s denying what Turing saw as a legitimate way to appreciate the contributions of machines to our world: by treating them as intelligent agents and seeing how their behavior compares to other agents. Instead, Chomsky asserts that computers behave nothing like intelligent agents. That claim simply doesn’t cohere with the experience of most of the audience watching this video, and Chomsky’s views appear out of touch and reactionary as a result.
With Chomsky’s misreading of Turing, the audience might come to believe that Turing’s views are the same. But Turing would not agree with Chomsky, and would not endorse the dismissive attitude with which Chomsky treats machines. Turing took an inclusive, participatory attitude towards machine intelligence; even the standard interpretation recognizes that Turing sets the bar for intelligence relatively low, in stark contrast with Chomsky’s strict views. Turing probably wouldn’t find anything interesting within the Singularity literature either, which tends to treat intelligent machines (even so-called “friendly” AI) as a monolithic and incomprehensible other. The singularity literature treats the relation between humans and thinking machines as a fragile, precarious relationship that perpetually borders on oppression and corruption, and begins from the presumption that the behavior of an artificially intelligent machine is opaque and beyond our comprehension or control.
Neither theory is appropriate for the unpresumptuous relationship that Turing describes in the form of a game played by a human and a machine. The most detailed and extended example Turing himself gives of artificial intelligence in the essay is of a human and computer casually discussing poetry, and of the machine contributing insights that productively move the conversation along by advancing an understanding of character and mood in dialogue with an engaged interlocutor. Such a machine would, presumably, be welcome among Turing’s group of friends for their own parlor games.
These are the machines Turing expected to arrive, and whose arrival he prepared our philosophical discussions for. And, I’m arguing, these machines have already populated our world and significantly influenced its dynamics. We’ve been too enamored with our fictions and superstitions to fully recognize their arrival, but the activity of intelligent agents plays a focal role in the organization of action today.
It’s time we all play fair.