My mailbox today contained a small clipping from the letters to the editor section of The New Yorker. It didn’t come with an issue number, or any indication of which article it was responding to; I’ll let you know if I find those references.
Update: The response is to an article entitled “Your Move: How Computer Chess Programs Are Changing the Game” from Dec. 12, 2005. (Thanks, Maschas)
Total lack of emotional involvement in the game may give chess programs a strategic advantage over human players, but it is also precisely what robs them of anything like genuine intelligence. Can we even say that such programs are “playing” the game when they neither know nor care what it means to win or lose, or even just to do something or be thwarted? Real animal intelligence involves the organism responding affectively to its environment. Computer programs literally could not care less, which is why they are mere simulations of intelligence.
Taylor Carman
Associate Professor of Philosophy
Barnard College, Columbia University
New York City
This gives me hope, because this view is still alive and well among even the distinguished academics in our field. It is of course no surprise that Carman is a Heidegger scholar. But let’s attack his arguments here nice and methodically. I’ll start with the easy one first.
1. Machines aren’t really “playing the game” because they don’t know or care what it means to win or lose.
We should hold off on answering the question about ‘playing’ until we know what’s at stake in that question. Carman just assumes that participation requires care and emotional investment (more on that below); I don’t think the case is quite so open and shut. ‘Knowledge’ here is much easier. Of course the machine knows what it means to win: that’s the goal of the program. If it didn’t know how to win, it would have no way of evaluating its moves as getting closer to or farther from that goal. And that sort of evaluation is all the machine does; it seems to be a serious misunderstanding of both the machine’s internal programming and external behavior to say ‘it doesn’t know what it means to win’.
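Just to make that concrete, here is a minimal sketch in Python of what that kind of evaluation amounts to. The game is a toy (single-pile Nim) rather than chess, and nothing here is meant as the internals of any actual chess engine; the point is only that the “goal” of winning is encoded as a score, and choosing a move is just preferring the position that scores better against it.

```python
from functools import lru_cache

# Toy stand-in for a game-playing program: the "goal" (winning) is just a
# number the evaluation assigns to positions, and move selection is nothing
# more than preferring the move whose resulting position scores best.
# The game is single-pile Nim (take 1-3 stones; taking the last stone wins),
# chosen only because it fits in a few lines.

@lru_cache(maxsize=None)
def value(stones):
    """+1 if the player to move can force a win from this position, else -1."""
    if stones == 0:
        return -1  # the previous player took the last stone, so we have lost
    return max(-value(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the move that leads to the best-scoring position for us."""
    legal = [take for take in (1, 2, 3) if take <= stones]
    return max(legal, key=lambda take: -value(stones - take))

if __name__ == "__main__":
    print(best_move(10))  # 2: leaving 8 stones is a lost position for the opponent
```

Whether that kind of score-chasing deserves to be called ‘knowing what it means to win’ is, of course, exactly what is at issue.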
Perhaps Carman will respond, as most do, that the above is merely a metaphor we use to understand the machine’s behavior, but reflects nothing of the machine. The machine doesn’t know how good it feels to win, or how bad it feels to lose; in other words, it lacks emotional involvement. It has no stake in the game. And without that affective dimension, we can’t even understand the machine’s behavior as knowledge, much less as genuinely intelligent. In other words, Carman’s argument rests on emotional involvement as central and necessary for intelligence, and derivatively for cognitive states like knowledge. So let’s turn to whether emotional involvement is necessary for intelligence.
2. Emotional involvement is required for anything like genuine intelligence
Carman is alluding to ‘care’ here as an affective necessary condition of intelligence, which we will attack in a moment, but first we need to disabuse ourselves of the idea that ‘genuine’ intelligence marks a sensible distinction. Intelligence, as understood in cognitive science and artificial intelligence, is merely the ability of a system to construct a plan for achieving some goal, or for solving some problem. A system is more or less intelligent by being more or less capable of realizing that goal given various starting conditions and environmental constraints (including processing speed, time, memory, and efficiency constraints, etc.). An evolutionary psychologist would add here that there is no such thing as ‘general intelligence’, but that intelligence is always domain-specific: a system is more or less intelligent at some particular task, or at realizing some particular goal, and always in some particular (environmental) context. In any case, anything conforming to these general parameters is considered ‘intelligent’, and there is simply no sense in making a distinction between genuine intelligence and ersatz intelligence.
Carman thinks this isn’t the whole story. Real intelligence is not just the solving of some planning task, but necessarily involves some story about the way those problems are solved. In real intelligence, problems are solved affectively: there is some personal investment or emotional attachment to the details of the plan and its ultimate fruition. Affective involvement is the hallmark of ‘real animal intelligence’, and machines clearly don’t have that. But notice the goalposts have shifted, or at least been clarified. We aren’t talking about intelligence in the cog sci sense, but we are talking about specifically ‘animal’ intelligence. No one, to my knowledge, has tried to build a system that plays chess like an animal; they try to build systems that play good chess.
But why should emotional involvement be necessary for intelligence? Science, ideally, is disinterested inquiry; should the mathematician chugging through the details of the Riemann Hypothesis with care only for the formal structure and validity of his arguments be considered less intelligent than the one who is overwhelmed with passion and zeal? Of course, Carman’s argument goes much deeper than that. Carman’s claim is that there is some particular way, unique to humans (or perhaps animals generally), that embodies the whole range of affective qualities that might shape and augment a particular planning strategy: we care. Any two people might come to some task with different interests and concerns and affective dispositions, and therefore approach some problem with different levels of involvement; this qualitative distinction itself is not enough to deprive either of ‘genuine’ intelligence, since both have some investment in the matter, and both care to some extent.
Emotions here are understood as arising within a planning structure, as augmenting or filtering the agent’s relation to the various levels of its plan of action, in addition to its relation to the terms of the constraints and the environment in which the task is carried out. But surely the machine augments information in some way: by encoding the chess board and moves into a language it understands, by embodying that representational system in some architecture, and so on. The machine does, in a certain (but very real) sense, make the game its own, by filtering the game and certain aspects of the context through its hardware. Granted, the machine’s internalization of the game looks radically unlike any filtering relation we are familiar with. But that in itself is not enough to buttress the claim that there is a difference in kind between the human and the machine’s involvement with the game of chess. If two people with perhaps radically disjoint motives and affects can be considered to genuinely play the game, and play it intelligently, then we need another argument to show that the machine’s approach to the game is not a mere qualitative distinction but a radical difference in kind- that is, by saying the machine “plays the game”, we are actually making a category mistake. Carman does not (and cannot) support this claim without begging the question.
Carman would object- the machine doesn’t make the game its own at all! This comes out in his distinction between the machine’s ersatz intelligence and ‘real animal intelligence’, as the latter involves affective responses to an environment. Even granting that the machine filters incoming information in certain ways, Carman’s intuition seems to be that the machine is not interacting or engaged with an environment at all. Thus we see the deep bias and chauvinism against machines revealed. It is only under the assumption that machines do not interact with the world, but only with the pure realm of mathematics or Platonic forms, that this bias begins to take hold. The machine merely calculates, Carman thinks. Thus, it is not an embodied, worldly agent engaged with an environment, and thus does not genuinely filter the world through ‘affective’ hardware, and thus does not embody genuine human intelligence, knowledge, or the capacity for participation in an activity like a game. The machine suffers for lack of a body.
But why would anyone hold the view that machines are not worldly? Well, I have my suspicions; I think this bias against machines traces back at least to the early modern conception of nature as itself a machine, and our contemporary reaction against Cartesianism. I think that this bias seriously misunderstands the (genuine, substantive) role machines play in our life and in this world. It seems clear to me that the machine is engaged in a real game of chess (and not just its abstract form), and with this engagement comes all the attributes of agency: intelligence, knowledge, participation. Of course it is correct that there are substantive structural distinctions between animals and machines, and in arguing that machines are agents (or ‘genuine’ agents) I am not holding that the machine’s understanding of chess mirrors our own. But surely there is overlap (they both concern chess), and in any case a mere difference of understanding is itself insufficient to revoke both care and the very possibility of participation. It is obvious that humans and machines approach chess in distinct ways- that is precisely what makes watching them play together so interesting.
Enowning seems to be the only other blog posting anything about this; and it is a Heidegger-fest of the highest degree. I am going to email this to Carman, to see if I can get some response from him.
I saw this letter in my copy of the New Yorker, and it provoked much the same reaction. While one can attempt to differentiate machine and human intelligence, Carman does a particularly poor job in this case. I sort of wonder why he even bothered sending a one-paragraph answer to such a deep and abiding question.
BTW, the article you’re looking for appeared in The New Yorker on Dec. 12, 2005, and was entitled “Your Move: How Computer Chess Programs Are Changing the Game”. I found it very interesting, especially in that it examined in depth how the current best and brightest chess programs operate (hint: not brute force).
Thanks for the info. OP edited.
I think Carman deviates from the standard argument in Heidegger by referring to emotions–although it probably helped make his letter intelligible and got it published in the New Yorker. Heidegger’s argument is not about whether minds have states called emotions, but a more fundamental ontological argument–roughly about what exists, what that means, and to whom.
I think one basic issue is imagining that there is a game of chess that exists as an abstract platonic ideal, and that machines and humans then understand it. One, it is humans that give the chess game any meaning, and two, machines understand nothing at all.
BTW, Hubert Dreyfus’s books on why computers can’t think address this issue in more detail.
Dear Eripsa,
I’m happy to see that my four little sentences gave you so much to work with!
“Of course the machine knows what it means to win: that’s the goal of the program. If it didn’t know how to win, it would have no way of evaluating its moves as getting closer to or farther from that goal. And that sort of evaluation is all the machine does; it seems to be a serious misunderstanding of both the machine’s internal programming and external behavior to say ‘it doesn’t know what it means to win’.”
Nothing here is “of course,” of course. On your criterion, as John McCarthy once said, the thermostat knows three things: when it’s too warm, when it’s too cold, and when it’s just right. Is that what you mean by knowledge? After all, if it didn’t know when it’s too warm or too cold, or just right, it would have no way of “evaluating” its actions as either appropriate or inappropriate to the situation.
“Perhaps Carman will respond, as most do, that the above is merely a metaphor we use to understand the machine’s behavior, but reflects nothing of the machine.”
Right.
“The machine doesn’t know how good it feels to win, or how bad it feels to lose; in other words, it lacks emotional involvement. It has no stake in the game. And without that affective dimension, we can’t even understand the machine’s behavior as knowledge, much less as genuinely intelligent. In other words, Carman’s argument rests on emotional involvement as central and necessary for intelligence, and derivatively for cognitive states like knowledge. So let’s turn to whether emotional involvement is necessary for intelligence.”
Actually, my argument started one step back: affect is necessary not just for intelligence, but for agency. My claim is that it’s already metaphorical to say that the program or the computer is “doing” anything. (There are nonagential senses of “doing,” of course. Hence the joke: “Waiter, what’s this bug doing in my soup?” Waiter: “Looks like the backstroke.”)
“2. Emotional involvement is required for anything like genuine intelligence
Carman is alluding to ‘care’ here as an affective necessary condition of intelligence, which we will attack in a moment, but first we need to disabuse ourselves of the idea that ‘genuine’ intelligence marks a sensible distinction. Intelligence, as understood in cognitive science and artificial intelligence, is merely the ability of a system to construct a plan for achieving some goal, or for solving some problem. A system is more or less intelligent by being more or less capable of realizing that goal given various starting conditions and environmental constraints (including processing speed, time, memory, and efficiency constraints, etc.). An evolutionary psychologist would add here that there is no such thing as ‘general intelligence’, but that intelligence is always domain-specific: a system is more or less intelligent at some particular task, or at realizing some particular goal, and always in some particular (environmental) context. In any case, anything conforming to these general parameters is considered ‘intelligent’, and there is simply no sense in making a distinction between genuine intelligence and ersatz intelligence.”
I don’t think our concept of intelligence can be operationalized in that way. And if you’re not using that concept, but instead stipulating that anything that passes a certain technical test is to be called “intelligent,” then the conclusion is trivial. I agree, however, that if you can build or program something that has ALL the causal powers of human (or even dog or cat) brains, then of course you will have built something genuinely intelligent. I would say, however, that I think you’re committing the “first-step” fallacy here, thinking that the tiniest glimmer of apparent success must constitute progress toward the goal. On this rather generous criterion, the first ape who ever climbed a tree had thereby taken the first step to the moon.
“Carman thinks this isn’t the whole story. Real intelligence is not just the solving of some planning task, but necessarily involves some story about the way those problems are solved. In real intelligence, problems are solved affectively: there is some personal investment or emotional attachment to the details of the plan and its ultimate fruition. Affective involvement is the hallmark of ‘real animal intelligence’, and machines clearly don’t have that. But notice the goalposts have shifted, or at least been clarified. We aren’t talking about intelligence in the cog sci sense, but we are talking about specifically ‘animal’ intelligence. No one, to my knowledge, has tried to build a system that plays chess like an animal; they try to build systems that play good chess.”
Perhaps we agree. If you’re not interested in taking the animal case as paradigmatic, then of course tracking and duplicating its real features won’t be important to you. But then I think we’re no longer talking about (what people normally mean when they say) “intelligence.” Similarly, we can build a “running” machine that outruns a human being. But if we’re not constrained to build something that runs like a human being, then the achievement is trivial, or rather unilluminating. You build something that goes faster than a person running. Fine, but so what? That has nothing to do with running. So too, computer chess has nothing to do with intelligence.
“But why should emotional involvement be necessary for intelligence? Science, ideally, is disinterested inquiry; should the mathematician chugging through the details of the Riemann Hypothesis with care only for the formal structure and validity of his arguments be considered less intelligent than the one who is overwhelmed with passion and zeal? Of course, Carman’s argument goes much deeper than that. Carman’s claim is that there is some particular way, unique to humans (or perhaps animals generally), that embodies the whole range of affective qualities that might shape and augment a particular planning strategy: we care. Any two people might come to some task with different interests and concerns and affective dispositions, and therefore approach some problem with different levels of involvement; this qualitative distinction itself is not enough to deprive either of ‘genuine’ intelligence, since both have some investment in the matter, and both care to some extent.
Emotions here are understood as arising within a planning structure, as augmenting or filtering the agent’s relation to the various levels of its plan of action, in addition to its relation to the terms of the constraints and the environment in which the task is carried out. But surely the machine augments information in some way: by encoding the chess board and moves into a language it understands …”
This begs the question, of course. But I’ll take it as metaphor, for now.
“… by embodying that representational system in some architecture, and so on. The machine does, in a certain (but very real) sense, make the game its own …”
Well, I think the machine is doing nothing of the kind. The metaphor you inserted earlier now seems to be taking on a life of its own. (See above comments on agency.)
“… by filtering the game and certain aspects of the context through its hardware. Granted, the machine’s internalization of the game looks radically unlike any filtering relation we are familiar with. But that in itself is not enough to buttress the claim that there is a difference in kind between the human and the machine’s involvement with the game of chess. If two people with perhaps radically disjoint motives and affects can be considered to genuinely play the game, and play it intelligently, then we need another argument to show that the machine’s approach to the game is not a mere qualitative …”
(I take it you mean “quantitative”?)
“… distinction but a radical difference in kind- that is, by saying the machine “plays the game”, we are actually making a category mistake. Carman does not (and cannot) support this claim without begging the question.”
I don’t follow. You’ve offered no argument that computers (or programs?) have anything like affects. “Augmenting information in some way,” as you say, doesn’t even come close. Are you claiming that chess programs (or the computers running them?) really do experience emotions?
Nor can I understand why you think, simply because two human players can play with different affective sets, that I can draw no distinction between the human player and the machine. This strikes me as a non sequitur. My argument is that machines lack affect altogether, so that even ascribing agency, and a fortiori intelligence, to them remains metaphorical. Admittedly, my New Yorker letter didn’t make the argument in any detail, but that’s the suggestion. I’m a bit unclear: are you conceding that affect is necessary for intelligence, or are you claiming that computers are, or can be, intelligent without it?
“Carman would object- the machine doesn’t make the game its own at all!”
Right. Even stronger, the machine isn’t “doing” anything, in the agential sense of the word. The computer (or program?) is “playing” chess in the same sense in which my camera “takes pictures.” To say that my camera “takes good pictures” is not to say that it takes better pictures than I do!
“This comes out in his distinction between the machine’s ersatz intelligence and ‘real animal intelligence’, as the latter involves affective responses to an environment. Even granting that the machine filters incoming information in certain ways, Carman’s intuition seems to be that the machine is not interacting or engaged with an environment at all.”
Right, because it’s not acting. No more than the Coke machine, anyway.
“Thus we see the deep bias and chauvinism against machines revealed. It is only under the assumption that machines do not interact with the world, but only with the pure realm of mathematics or Platonic forms, that this bias begins to take hold. The machine merely calculates, Carman thinks.”
Right. That’s why they’re called “computers.” What they do is compute. (Actually, I think John Searle is right that even that is a metaphor. But never mind.)
Do you think computers are doing more than computing? If so, and if that other thing is what makes them intelligent, then they’re not intelligent just in virtue of their computational functions. Which means functionalism is false. In that case, if I could build a computer that lacked those other features, but still carried out all the computations necessary for “winning” at chess, then you would have to admit that it wasn’t really playing intelligently at all. I take it the whole discussion here is about whether formal computation is sufficient for intelligence.
“Thus, it is not an embodied, worldly agent engaged with an environment, and thus does not genuinely filter the world through ‘affective’ hardware, and thus does not embody genuine human intelligence, knowledge, or the capacity for participation in an activity like a game. The machine suffers for lack of a body.”
Right.
“But why would anyone hold the view that machines are not worldly? Well, I have my suspicions; I think this bias against machines traces back at least to the early modern conception of nature as itself a machine, and our contemporary reaction against Cartesianism. I think that this bias seriously misunderstands the (genuine, substantive) role machines play in our life and in this world. It seems clear to me that the machine is engaged in a real game of chess (and not just its abstract form), and with this engagement comes all the attributes of agency: intelligence, knowledge, participation.”
Do you have an argument for this? Just the opposite seems clear to me.
“Of course it is correct that there are substantive structural distinctions between animals and machines, and in arguing that machines are agents (or ‘genuine’ agents) I am not holding that the machine’s understanding of chess mirrors our own. But surely there is overlap (they both concern chess), and in any case a mere difference of understanding is itself insufficient to revoke both care and the very possibility of participation. It is obvious that humans and machines approach chess in distinct ways- that is precisely what makes watching them play together so interesting.”
Here again you seem to be saying that machines really do care. I have to admit, I don’t see how you can seriously believe that, short of trivially redefining what you mean by the word “care.”
Hope that helps,
Taylor
Does that mean it’s on?
Isn’t a Tamagotchi a simulation of an affective state? After all, it expresses a need for hunger and affection, even if these are little more than a simple perceptron which is triggered when a hunger value reaches a certain threshold. Say someone combined a Tamagotchi with a planner that could determine a way to satisfy these needs for hunger and affection. Would that be sufficient for “genuine intelligence”? (As a side note, this is pretty much how the AI in the game Black & White is done.)
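Something like this toy sketch, perhaps. This is only an illustration of the picture described above, not the actual Tamagotchi firmware or the Black & White code; the drive names, growth rates, threshold, and action table are all invented for the example.

```python
# Hypothetical sketch of a threshold-triggered "needs" agent with a trivial
# planner; the names, rates, and actions below are invented for illustration.

RATES = {"hunger": 0.15, "affection": 0.10}       # how fast each drive grows per tick
ACTIONS = {"hunger": "feed", "affection": "pet"}  # which action satisfies which drive

class NeedyAgent:
    def __init__(self, threshold=0.7):
        self.drives = {name: 0.0 for name in RATES}
        self.threshold = threshold

    def tick(self):
        """Each drive creeps upward over time."""
        for name, rate in RATES.items():
            self.drives[name] = min(1.0, self.drives[name] + rate)

    def act(self):
        """Fire only when the most urgent drive crosses the threshold."""
        urgent = max(self.drives, key=self.drives.get)
        if self.drives[urgent] >= self.threshold:
            self.drives[urgent] = 0.0             # the chosen action satisfies the drive
            return ACTIONS[urgent]
        return None

agent = NeedyAgent()
for step in range(12):
    agent.tick()
    action = agent.act()
    if action:
        print(f"step {step}: {action}")
```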
Dare I wade in?
Probably not, and anyway, I think that you should concentrate on TC’s remarks. But here’s one observation. A lot of the dirt that you kick up in this and related posts really just amounts to pissing on fenceposts as a way of marking out privileged semantic territory. You keep asserting that certain behaviors are ‘intelligent’ or constitute evidence of ‘knowledge’ or that they are done in service of ‘obligations.’ And the fight ends up being whether or not you can enforce your definition of those terms, and whether the definitions that others use are too restrictive.
This insistence on terminological purity strikes me as odd since, as near as I can tell, you aren’t a platonist. So, why not just get clear on what TC means by ‘intelligence’, for example, and ask about its practical import, then get clear on what is meant by ‘machine intelligence’ and ask about its practical import, and then ask some nuanced questions about how the two are related? What do you get out of asserting that there’s some master term under which both notions fall?
Isn’t intelligence nothing more than one’s ability to perform a given computational task? Emotive responses would only seem to govern one’s inclination to perform a given task, not the potential for performing it.
I’d be interested to know at what level of complexity Carman thinks an animal form gains this mysterious “agency.” Can a wasp be said to “do” something? A bacterium?
I think zwischenzug is right. The concept of machine “intelligence” that computer scientists use is probably simply a different concept altogether from (plain old actual organic) intelligence. Its relation to intelligence is probably more like the relation between the flight of hot-air balloons and the flight of birds. At a very abstract level, you get the same effect, but by utterly different means. An analogy between bird flight and airplane flight would be misleading, since in that case you have a demonstrable sameness of some aerodynamic principles, such as lift (still no feathers and flapping). I see no such similarity between brains and computers, hence no justification for applying the same concept to the two systems.
In reply to Jason. There seems to be a gradual difference between mechanistic and agential behaviors. Insects fall somewhere in the middle of the spectrum, and I suspect our ambivalence about them (for example, when they give us the creeps) has something to do with the blurriness of our concept of agency. The cockroach seems to run away in fright when the lights go on, but otherwise it looks like a little wind-up toy. The concept can seem to apply all-or-nothing, but I think it’s a mistake to think that reality must be parceled out neatly in that way. So, I don’t think I’m obliged to draw a sharp line, since I doubt there is one. As with many fuzzy distinctions, however, there are perfectly clear cases at either end of the continuum. Nonorganic machines exhibit nothing like real agency, since machine behaviors lack a vast repertoire of agential phenomena: pain, pleasure, hesitation, startle, fear, aggression, agitation, relaxation, and so on. They’re just not even in the ballpark.
Do you think it’s conceivable, given enough knowledge of human biology, to build a machine capable of replicating “humanness”?
“Do you think it’s conceivable, given enough knowledge of human biology, to build a machine capable of replicating “humanness”?”
Conceivable? Sure. Likely? No. The crucial question is whether any old physical stuff could duplicate all the (relevant) causal properties of the organic material that actually constitutes organisms like us. The spectacularly implausible — and utterly unscientific — assumption underlying functionalism (cognitivism, computationalism, AI) is the idea that the actual substance instantiating the behavior shouldn’t make any difference; that computer circuits could in principle do just as well as neurons. That follows directly from computationalism, since the idea is precisely that intelligence is computation, and computation is in principle indifferent with respect to its material instantiation. But the same computations can be instantiated in completely different material substrates having virtually none of the same causal powers. As Searle has said, it’s like expecting to get wet jumping into a pool filled with ping pong ball models of water molecules.
But could we in principle build a physical duplicate of a human organism? Sure. Why not? But don’t hold your breath.
By the way, I might have seemed to be conceding a lot (too much) in my last post when I said that machine and human behaviors achieve “the same effects” in the way birds and balloons do by both flying. But the analogy, hence the concession, applies only to chess, which is a computationally closed domain, and so quite unlike actual situations eliciting intelligent behavior in organisms. In any more open context, the history of AI has been a history of dismal failures. In those cases, there isn’t even anything like the convergence of birds and balloons on the common task of flying.
What I take to be the main issue (or at least a central issue) is that of the human-centeredness of notions like ‘intelligence’, ‘knowledge’, ‘engagement’, ‘interaction’, etc. By that I mean that our notion of intelligence, say, is judged by the standard of what it is for a human to be intelligent. Dogs, cats, etc., are sometimes judged intelligent because they exhibit some of the behaviors that we exhibit when we act intelligently – e.g., they can catch balls, they don’t run into trees, they jump through hoops when we command them to, etc. A typical reason is the appeal to “it’s the best we’ve got” – i.e., we only know how to judge intelligence by our own lights because these are the only lights we’ve got to judge by. This seems patently false. To make the argument analogically: just because most Americans can only judge high culture by shopping at Target, as opposed to Walmart, doesn’t mean that that is all there is to high culture, or even that that is what high culture is. Hopefully, my point is clear. I think that what we need is a Husserlian bracketing off of what is human from intelligence and the other above-enumerated notions in order to derive an ontologically basic notion of intelligence. If computers don’t fit this notion, then we shouldn’t call them intelligent. If they do, then we should. Also, in response to Zwichenzug, I don’t think computer scientists are applying the concept “intelligence” differently than we normally do when we say that something is intelligent. There does seem to be a single “master concept” that has two different targets.
This will probably confuse matters more than anything else.
B-dizzle
Sorry for my delayed response. Thank you very much, everyone, for continuing this conversation, and thanks especially to Taylor Carman for taking his time to have this debate here.
Carman makes a lot of points worth discussing in his response, and seems to mostly agree with my analysis of his argument. My purpose in the original post was to unpack the basic argument given in the original letter, which I found to be the most compact and straightforward telling of the position I’ve come across.
I want to just comment on the foundation we seem to have reached, which motivates the conclusions Carman draws:
“‘The machine merely calculates, Carman thinks.’
Right. That’s why they’re called “computers.” What they do is compute…
Do you think computers are doing more than computing?… I take it the whole discussion here is about whether formal computation is sufficient for intelligence.”
The traditional discussion of artificial intelligence, including Searle, takes the form of exactly this question. But I am concerned that anyone thinks there are such pure computers, who lack any substantive embodied engagement with the world. It seems, on Carman’s view, that the game of chess is carried out in the machine on a purely symbolic, purely mathematical level, with neither effects nor extension in the world. If that’s the case, then the machine is simply not an embodied agent, and in no way can I be understood to interact, much less play, with it.
But how can anything that exists have no effects in this way? How can anything be so disembodied? On some level, you could say that human thought is merely the abstract functional connections between neurons, which itself can (most likely) be understood purely symbolically. But surely Carman would object, and I would well agree: what matters is not the abstractness of the functional architecture, but the fact that this abstract organization is embodied within a system capable of altering its environment, and that system is actually embedded within an environment. This is the real source of animal intelligence, on Carman’s view.
But who ever thought otherwise of the computer? When I play chess against my computer, its computations result in a movement on the chess board, and in response to my own moves. This is surely more than mere formal computing- it is responding to the game. More to the point, I am not interacting with a mere formal system; I am interacting with the black box on my desk, which has its own extension, and embodies that formal structure in a concrete (and far from ideal) way. On Carman’s view, it is just symbols in the void, as far as the ‘computer’ is concerned; and thus, I surely can’t be interacting with it. What a strange and heretofore unencountered dualism- this time, not of mind/body, but of world/computation. We have stripped the machine of agency the back way, by stripping it of any physical presence to begin with.
Can we change Carman’s mind? Once we absolve ourselves of the idea that the computer is really operating outside this world, and embrace the view that machines, in fact, have a this-worldly extension, then we become open to the possibility that, in fact, the machine can interact with the world, and can affect events in the world. With a body, we get at least minimal agency for free. Perhaps this turns out to be nothing more than the agency of the thermostat, but the slope is slippery from there. An agent’s intelligence and knowledge increase as its means and ways of understanding its environment increase- perhaps with a thermostat or chess program at one end, a Roomba or my computer somewhere near the middle, and Google at the far end.
I don’t know if this is satisfactory to Carman, but I find it quite hard to sympathize with the position that machines don’t interact with the world, and attacking this assumption serves as the main motivation for my argument here. Having exposed the ugly dualism at root in this argument, I have to agree with Zwichenzug: perhaps we are working with a family of related notions here, and there is not much sense in insisting on a strict reduction of one to the other. Clearly, if Google is intelligent, it isn’t anything like the human intelligences we are familiar with. I am personally inclined to discuss this in terms of areas of competence, which is something like a behaviorist alternative to ‘intelligence’ that abstracts away from the particular way in which a system realizes that competency. Someone like Carman will always object that this isn’t the complete story, and then we could engage in some meaningful discussion about what is missing and how it is relevant to things like justification and obligation.
But before we could ever begin that discussion, we would first need to accept that these machines are not idealized, pure mathematical engines, but legitimate, fallible agents who can actually perform tasks that have real effects in the world; and that world is, importantly, shared with us. Carman has been clear in his remarks here that he does not accept even this move, and most people are at least sympathetic to this initial intuition. So it is this intuition that must be attacked to even begin to make progress in my project.
Just a quick reply.
Here (and earlier) I think you’re conflating hardware and software. I’m still unclear which you want to ascribe agency and intelligence to: the computer or the program. Of course machines themselves are physical objects causally interacting with the world. I think no one would deny that. The programs they run, by contrast, are in a sense abstract objects, but one needn’t be a Platonist or a dualist to say that. The equator is an abstract object; nothing controversial there.
My view is that computers are physical objects carefully designed in such a way that what they “do” is readily interpretable by us in computational terms, just as cameras are constructed so that the artifacts they generate look to us like visual representations of the world. The interpretation of the product is what’s crucial. Physical states of computer circuitry (and color patches on paper) are not computations (or pictures) just in virtue of their physical properties; they stand in need of our construing them as such. The same is true of a calculator or a thermostat. Or an abacus, for that matter. In fact, at an extreme, just about anything counts. Putting an apple in a bowl with another apple constitutes an adding machine. Is that process computational? Sure. Computation comes cheap.
So, what kind of real “engagement” with the world do computers lack? In a word, bodily-perceptual competence. I think competence is a kind of agency, which goes back to my previous argument that no existing computer (or program) exhibits anything like competence in an environment. Chess programs don’t, since chess itself is not an environment (though, again, you can change the definition of “environment” to suit your argument, if you like, in which case the point is trivial once again).
Consequently, I’m perfectly happy to go along with (roughly) behavioristic criteria of intelligence. Intelligence is intelligent behavior, so show me that and I’ll admit defeat. That is, I won’t dig in my heels and insist that no replication of intelligent behavior, no matter how impressive, can ever be as good as the “real” thing.
But this methodological concession is not to say there’s an actual slippery slope from thermostats to minds. That’s like the slippery slope from climbing trees to traveling to the moon. Not very slippery, after all.
Finally, does anyone seriously think that Google is intelligent? If so, then the bar has been set low indeed, and I think we’re just using words differently.
Those who spend much of their time on the computer end of things are constantly confronted with hardware emulators: I have a program that runs on one set of hardware, and I want to run it on another set of hardware. So on the target hardware, I first create a software simulation of the original hardware. The desired program then runs on the simulated hardware, which in turn runs on the actual target hardware.
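To illustrate that layering with a toy example: the “guest” program below runs on a software simulation of an imaginary machine, which in turn runs on whatever physical hardware executes the script. The three-instruction machine is invented for the example and does not correspond to any real CPU or emulator.

```python
# Toy illustration of the emulator layering described above. The instruction
# set is invented for this example; it does not correspond to any real CPU.

def run(program):
    """Fetch-decode-execute loop for the imaginary machine."""
    acc, pc = 0, 0                      # accumulator and program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":                # put a constant into the accumulator
            acc = arg
        elif op == "ADD":               # add a constant to the accumulator
            acc += arg
        elif op == "PRINT":             # emit the accumulator's value
            print(acc)
        pc += 1
    return acc

# A "guest" program written for the imaginary hardware. It runs on the
# simulated machine, which in turn runs on the actual hardware underneath.
run([("LOAD", 2), ("ADD", 3), ("PRINT", None)])
```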
In terms of slippery slopes, I don’t think it is at all implausible, nor even very distant in the future, that we could have a complete map of human thought as described by a) human output and b) detailed measurements and understanding of the physical processes that result in that output. Once we have such a map, we could *easily* build a computer simulation of said physical processes. Even if the limits of our computer hardware constrained the speed of that simulation, the simulation could still run. Over time, we would have understandably intelligent output coming from a system running on totally different hardware.
Given your (Carman’s) methodological concession in the last post, it seems to me that this would convince you of the machine’s intelligence. And it also seems that at this point, the difference between your point of view and mine amounts to a prediction about the pace and direction of technological advancement.
I don’t see the connection between software simulations on computers and intelligence, but maybe I’m missing something. My point about hardware and software was a rather narrow one about getting clear about exactly what we’re ascribing intelligence to, the machine or the program. Philosophers have had the idea that the mind is to the brain as software is to hardware, in which case (I guess) the person or agent is the program, not the circuitry. We ascribe attitudes to (embodied) persons, after all, not just their brains. I think that’s an important distinction, but it’s probably not so important for the present discussion.
I’m not sure I understand exactly what your “map” or “simulation” would amount to. Would it be a model of thought or a model of the brain? Those represent two very different directions of AI research. I think the former is doomed, for reasons that usually go by the name “frame problem.” What formal symbolic representations arguably cannot capture are the tacit and contextual aspects of embodied experience in an environment that make some information relevant and some not. I think the history of AI offers no grounds for hope in that direction. In fact, I think there are good reasons to suppose that intelligence is not (cannot be) just the formal manipulation of explicit information in symbolic form.
However, if you’re talking about building a brain, then again, in principle, why not? The challenge there is different, but probably equally hopeless for different reasons, namely, the sheer overwhelming physical complexity of the brain.
You said “map of human thought,” though, so I take it you mean a complete inventory of the information (and inferential structure) of human cognition. Pretty tall order. Isn’t this what Douglas Lenat is/was trying to do with CYC? I think most people in the field think that — the sheer massive accumulation of explicit information — is hopeless.
Anyway, building an encyclopedia is a lot easier than, say, building a puppy.
I conflated the two a little bit. Ultimately, I believe we’ll be able to build a simulation of a brain (and a puppy, for that matter). I think it’s far from hopeless. In fact, I think we have the technology to do this with the tools we have right now, although it would take probably a century of careful measurements of the brain. I anticipate rapid advances in brain imaging technology and in computational power that would make this a more immediate reality, but there we get to the predictions of the future again.
(As for building a “mind” before we can build a “brain,” I don’t discount this either. I think at a certain stage in the measurement of the brain, the algorithms that make up the mind will become apparent, and we can dispense with the rest of the brain model, speeding things up.)
“So, what kind of real “engagement” with the world do computers lack? In a word, bodily-perceptual competence. I think competence is a kind of agency, which goes back to my previous argument that no existing computer (or program) exhibits anything like competence in an environment. Chess programs don’t, since chess itself is not an environment (though, again, you can change the definition of “environment” to suit your argument, if you like, in which case the point is trivial once again).”
Would the AI guiding the robot vehicles in the DARPA Grand Challenge (linked below) qualify as intelligent? The machines in question were certainly able to demonstrate competence in an environment.
http://news.com.com/Stanford+wins+2+million+in+robotic+car+race/2100-11394_3-5892115.html?tag=nl
By the way, thanks for engaging in this discussion. Your letter in the New Yorker immediately provoked in me a desire to engage in a debate on the issue of machine intelligence, but I never thought that debate would happen, much less with the person who actually wrote the letter.
No, surely DARPA Challenge kinds of things are far too crude, not to mention far too task-specific, to justify any talk of intelligence. It’s true that what robotics has going for it is the recognition that intelligence has to be manifest in behavior, not just abstract symbol-manipulation. But robotic systems are laughably primitive, even compared to insects. I don’t think we’re even tempted to describe their behaviors with intentional idioms (“see,” “believe,” “intend,” “remember,” “anticipate,” “infer”). But again, the question is how low you set the bar. Is the goal of AI simply to replicate something as primitive as insect behavior? Are insects intelligent? Are they even on the spectrum? Not really. They don’t seem to have any capacity for learning or insight. And yet building robots as clever as insects would be quite an achievement.
And thanks to you (all) for chatting, too. It’s fun. You guys obviously know a lot more about computers than I do. I’m not an expert about any of this, but I’ve had good teachers, so I think I know more or less how the arguments go. As Bert Dreyfus says, the nice thing about AI is that the actual (dismal) performance of the systems eventually keeps researchers honest: you can’t just keep promising and predicting future success without eventually coming clean about the actual lack of progress. One is always entitled to make a leap of faith, of course. Leaps of faith are nice, but you can’t (and shouldn’t try to) justify them. The record has to speak for itself.
Insects are quite capable of learning. For example, a male fruit fly must learn the signal that a female gives if it does not desire to mate. Another example is certain species of wasp that are attracted to a certain tree that gives off wasp-specific pheromones. At first the wasp will attempt to mate with the flowers and thus pollinate them, but it eventually learns that these attempts are not succeeding and begins to avoid the location of the tree. If someone breaks off a branch of the tree and moves it away from the tree, the wasps will again seek to mate with the flowers on the branch in the new location.
Okay, fair enough. “Learning” is a vague word, and I’m not wedded to it. What you’re describing is something like operant conditioning: the organism blindly does what it’s hard-wired to do, the environment doesn’t cooperate, so the organism backs off and (blindly) tries something else. If I understand the broken branch example, it sounds like the wasp tracks the pheromones coming from the tree, which are then associated with the flower, so the wasp then follows the flower (away from the tree). That sounds like Humean association. Pretty mechanical, and a long way from anything like thinking or reasoning, or even (for my money) believing (though, again, people have different intuitions about how far we can stretch the concept of belief from its ordinary use).
There’s a further refinement in learning, which begins to look a bit more like “thinking” (another notoriously vague term), namely when animals hesitate and (seem to) anticipate consequences BEFORE trying out failed behaviors. This is what I had in mind. It’s my understanding that mammals and birds and fish can do this sort of thing. I don’t know for sure, but I doubt insects can. What’s more typical of insects is blind trial-and-error behavior, with no internalized mental work going on prior to the impulsive actions.
Finally, even animal thinking (as distinguished from impulsive trial-and-error behavior) falls short of characteristic human (and perhaps higher mammal …?) intelligence, which involves interacting with other agents and recognizing and responding appropriately to THEIR attitudes. It’s very hard to test for this sort of thing, and some ethologists claim that birds feign injury to deceive predators. I personally doubt they know what they’re doing, but it’s hard to know how to decide that. There’s psychological research that indicates that although children are sensitive to other people’s desires and intentions from early on, they don’t have a clear concept of other people’s beliefs differing from their own until they’re about four years old. More precisely, they’re unable to ascribe false beliefs, either to themselves or to others. The world just seems to them transparently available to everyone in the same way. Research about other animals’ social intelligence is fascinating, but (predictably) enigmatic and controversial.
P. S. I meant I doubt the birds know what they’re doing, not the ethologists (though that may be true, too, for all I know).
By the way, it may well be that the watershed between blind-impulsive (insect) behavior and “thoughtful” (animal) hesitation is precisely where emotion enters into the picture. After all, hesitation seems very likely connected to affect in some way: we (and dogs) stop in our tracks out of fear, for example. This seems intuitively right, too, since insects seem so (creepily) affectless. Their mechanical behavior seems to go hand in hand with a certain unfeeling, cold-blooded style of comportment. This is just speculation, but I suspect our intuitions about intelligence and thought are connected to these two (more or less) observable phenomena: self-imposed inhibition (“stopping to think,” holding back, hesitating) and emotion (fearing, worrying, and so on).
Well, it’s more difficult to ascribe simple instinctive impulsiveness if the reaction to the other animal’s attitude is learned, and some birds seem to do just this. A scrub jay will become “suspicious” of another jay that sees it steal food if it itself has stolen food. Granted, this seems much like your example of the child who doesn’t understand that other people’s beliefs differ from their own, but there is still an element of learning how their actions affect the actions of another bird.
From http://www.nature.com/nature/links/011122/011122-5.html:
“Mental time travel — using knowledge about past events to plan for the future — was thought to be uniquely human. But it has now been shown in scrub jays, which use tactics to prevent their food stores being stolen by conspecifics. Jays that had themselves pilfered another bird’s food in the past re-hid their own food if another bird had observed them storing it. Individuals with no pilfering experience did not move their caches, even if they had buried their food in full view of another bird. Experienced birds are thus aware of the social context of stealing and adjust their behaviour to avoid its consequences.”
Yes, birds are different from insects in this way. (Try to train an insect.) There is apparently SOME quasi-intelligent behavior in birds.
It pays to be skeptical of these kinds of claims, though, since what looks like intelligence can turn out to be something much simpler. So, the feigned injury behavior I mentioned could just be an automatic response triggered by a certain kind of stimulus, like sweating or hair standing on end. One needs more evidence that the behavior was thoughtful. (And I don’t think there is any in this case, but I’m not sure.)
So too, in your example, the thieving bird that becomes “suspicious” may just be making an automatic (unthinking) association between two cases of two-birds-one-pile situations: the first when IT stole, and now this new case. Even this is potentially misleading, though, since those kinds of primitive associations don’t require that one THINKS about the similarity between the two cases; it’s rather that the similarity itself (the input) just generates the coherent pattern of (similar) response(s). It’s easy to overintellectualize the mental operations that in fact underlie and make possible explicit inferences. So, for example, it’s tempting, but hazardous, to ascribe to the bird THOUGHTS, such as “Well, I stole, therefore …” What has happened is that the bird has picked up a new response from the prior situation. This is a kind of learning, but is it intelligent? Has the bird thought about or inferred anything, or has a new behavior simply been solicited by a prior situation?
Do you know about “mirror neurons”? Babies imitate facial expressions long before they have any idea of what their own faces look like, and so whether they look like or unlike someone else’s. How do they do it? Well, it turns out there are very specific neurons associated with such responses. Automatic mirroring happens long before (and without) “I-thoughts.” The baby’s response is not itself intelligent, but being capable of that kind of automatic mirroring response is (probably) a condition for the intelligence that eventually manifests itself later in language competence, “mind-reading,” and so on.
We haven’t even mentioned natural language. That’s a pretty good benchmark for full-blooded human intelligence, and yet it makes these other examples look like tree-climbing compared to moon landing. And we have no idea what linguistic competence requires.