finally

Jon links to Forbes’ special edition on AI. I’ll go through most of these, commenting when appropriate. For instance:

Dumb Like Google

While the switch to “stupid” statistically based computing has given us tools like Google, it came with a steep price, namely, abandoning the cherished notion that computers will one day be like people, the way early AI pioneers wanted them to be. No one querying Google would ever for a minute confuse those interactions with a Q&A session with another person. No matter how much Google engineers fine-tune their algorithms, that will never change. Google is inordinately useful, but it is not remotely intelligent, as we human beings understand that term. And despite decades of trying, no one in AI research has even the remotest idea of how to bridge that gap.

Since AI essayists like to make predictions, here’s mine. No one alive as these words are being written will live to see a computer pass the Turing Test. What’s more, the idea of a humanlike computer will increasingly come to be seen as a kitschy, mid-20th-century idea, like hovercraft and dinner pills on The Jetsons.

This is basically what I’ve been saying for a decade, with a few caveats. First, I don’t think we can make much sense of the ‘unbridgeable gap’ lamented in the first paragraph, as if intelligence were a single-dimensional spectrum with a large black void somewhere near the top. That’s a silly little antiquated picture, and revising the picture makes Gomes’ thesis that much stronger. Intelligence is task-specific; computers, humans, animals, and everything else are good at solving certain kinds of problems, and bad at solving other kinds of problems. Since solving some problems does not necessarily imply success at other problems (even when those problems are closely related), intelligence can’t be understood in single-dimensional terms. Deep Blue can play chess but not checkers; my dog can fetch the paper but not milk from the corner store.

Importantly, the strengths of computational techniques do not translate straightforwardly into solutions to the tasks for which human intelligence evolved. It takes a lot of work to go from the basic logical structures of a computing machine to the complex, robust tasks that humans are capable of. Early computer scientists were famously optimistic (to the point of naïveté) about designing such an implementation, and it is easy to laugh at them from 60 years in the future. But we should remember that, at the time, we had a very dim understanding of how the brain/mind worked. The competing psychological theories included behaviorism and Freud, neither of which was particularly good at explaining the subtleties of the mind, and neither of which would have been that difficult to program into a machine. In fact, it was precisely because of early stumbling blocks in developing artificial intelligence that psychologists went back to the drawing board in order to develop a cognitive theory of the mind that treats mental processes as essentially computational manipulations of mental representations. Such a research paradigm has been incredibly successful both at explaining human minds and at helping to develop newer and more powerful artificially intelligent systems. As far as the research is concerned, there is no unbridgeable gap that has stymied progress; instead, there has been rather steady progress in machine intelligence, with significant advances in every area of “paradigmatically human” intelligence.

But advances in human-like intelligence are but a sub-sub-area of the much more radical advances in cognitive theory and technology, where we have learned to solve problems that are important to humans without replicating anything like the human mind. Google solves problems that humans simply can’t solve, nor would we expect any human to attempt a solution. I appreciate Gomes’ recognition that this doesn’t represent a failure of artificial intelligence so much as it reveals that human-like intelligence isn’t really a goal that needs to be achieved.

This shouldn’t be surprising. Technology advances according to its own unique rhythms and advantages, and the evolutionary pressures on technology look nothing at all like the evolutionary pressures on the early ancestors of humans. It confuses me to no end that we still believe all intelligent systems ought to converge to the same point, conveniently the very point that humans have already ‘achieved’. We don’t need our technology to replicate our own behavior; we want our technology to be useful, and to work with us on problems we find important. What concerns us most is that those problems get solved, by any means available, and that ultimately means we need a variety of techniques, some of which look nothing like human intelligence, to get in on the action.

That said, I’d hesitate to make Gomes’ prediction; such predictions are a constant source of hilarity and embarrassment. More likely, someone will produce a machine that convincingly passes the Turing Test but has very little practical use. Such a machine will be reported by the mainstream press as a novelty amid a good deal of objection and controversy, and quite a bit of nerdy fan-boy fascination (indeed, such machines are highlighted in the press quite regularly). After the novelty wears off, no one will give it much thought, and people will still passionately object to the possibility of machine intelligence despite the evidence. My prediction is that a machine will pass the Turing Test quite convincingly and quite soon[1], and for the most part the public will remain unimpressed.

Meanwhile, the technological breakthroughs that generated such a machine will slip into all sorts of peripheral technologies that we are already familiar with (in your car, your cell phone, your computer, your smart toilet), and they will radically improve and dramatically change our way of life. And we will become even more desensitized to the increasingly intelligent machines we surround ourselves with every day.

[1] In another of the Forbes articles on the Turing Test, Warwick claims that Turing originally “dared to suggest that within 100 years a human simply wouldn’t be able to tell the difference between another human and a machine.”

Well, that’s a little generous. From Turing’s 1950 paper: “Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. I believe further that no useful purpose is served by concealing these beliefs.”

Turing clearly meant to suggest that we would have thinking machines by 2000, not that we would have them within a hundred years. Nevertheless, I believe that Turing’s prediction came true long before he predicted it would. Since at least the Kasparov-Deep Blue match in 1997, general educated opinion (and not just niche philosophers/cognitive scientists) has spoken of thinking machines without contradiction, and I dare you to find any topic that won’t generate SOME response from educated objectors.

Even though his prediction came true, the onus is on US to recognize this fact. And, you know, good luck with that.

8 Comments

  1. I doubt anyone would deny that we have invented machines that can solve tasks. Dryers do that, so does Deep Blue. Is there a qualitative difference between the two? Most people probably have some kind of Cartesian assumption that unless it is conscious it is not thinking. You want to steer clear of that, I think, but you need to do a better job of convincing people why consciousness doesn’t matter (so much). Simply pointing out that they can solve (complicated) tasks isn’t enough – it’s just posturing from different sides. The onus is as much on you as anyone else. It seems to me that the holdout is obvious: many people would retort, “Deep Blue isn’t conscious.” It strikes me that there is a genuine disagreement here, having to do with necessary conditions of thinking.

    One problem that arises in many dissertations goes as follows. A certain term ‘X’ can be defined either thickly or thinly. The philosophically robust thesis requires revisions in accordance with the thick definition; however, since that is hard to defend, many students fall back on the thin sense, which is defensible because uncontroversial. Of course, Deep Blue can think in the sense of solving very complicated tasks. What people want to know is whether we have designed machines that are conscious and *on the basis* of their conscious experience can make decisions, plan, etc. Deep Blue makes plans, just not on the basis of being conscious, one might say. At least, this seems to me the nature of the debate.

    You have shifted the goalposts, but you need to convince others why your game is more fun and interesting to play. I think the objection is that your game is OK, sort of fun, but not as fun as we expected. And that’s what this mostly is: a language game. Not necessarily an objection, because in some moments I think that Wittgenstein was right about philosophy.

  2. Fuck, I always contradict myself. But it doesn’t negate the whole comment.

  3. The contradiction apparently concerned the claim that conceptual analysis is required and that it is not (i.e., a language game). But I retract the claim to contradiction. What I understand by ‘conceptual analysis’ is substitutional analysis. But I don’t know what you think about these issues.

    Anyway, who is your audience? In other words, why are you writing a dissertation on this if the educated public agrees that there are thinking machines in the sense of machines that can solve tasks, e.g., calculators? Direct your arguments against those who disagree. I want to see an explicit statement on the nature of the disagreement, as well as your premise-form response. I want premises. I am now at a loss as to what your contribution is and to whom you are contributing. Machines are participants, fine, i.e., non-conscious ones. Calculators participate in paying my bills; steam engines altered the course of history and in that sense are participants. But if consciousness is a necessary condition of agency, then your participants aren’t agents. What is an agent on your view? How do you draw agency and participation in relation to one another? It’s like you want to avoid all of the interesting questions. How can you avoid them? Remember the dissertation Mark wrote? He argued that animals are agents because they care for one another. Melnick responded, sure, but that’s not agency – or at least the agency I care about. You seem to be in the same position as Mark. Thick vs. thin. I have no doubt that philosophers will robustly confront you in this way, and I want to see a clear argument outlining how you plan to defuse the confrontation. How are you doing more than merely making the thin claim, or why is the thin claim that you are making philosophically substantive?

    Have you considered the question of whether this is an issue of conceptual analysis or instead the politics of language (akin to calling slaves 3/5 of a person)? Are the two really distinct? It seems that you really only want to make the thin claim: to be able to think is to be able to solve (complicated) tasks. Now this redefinition might have ethical implications, as would retracting the previous definition of a person, but then again you don’t want to discuss ethics. I am left wondering what your contribution is because I don’t know who would deny the claim that in the thin sense thinking can be so defined. But the ethical implications will only take effect if the thick sense is dropped. So it seems that your project ought to be to convince people to altogether drop the thick sense (i.e., a politics of language) … but for what end? I fail to see a purpose that coheres with a politics of language. It’s as if one side of your mouth speaks ontology and the other politics. But then instead of merging them, your whole mouth says that you want to do neither. Maybe I am completely off track, but I suspect that many journal referees will think along the same lines.

  4. Thick or thin conceptions of what, exactly? Consciousness? I deny the thick version in a foot-stomping way, and I’m not sure what a ‘thin’ version would amount to.

    Thinking? Part of the point of this post is to say that there is no such thing as ‘thinking’ as a simple property or activity. Rather, what we call thinking is just a cluster of activities that humans perform. More specifically, thinking is having the ability to perform certain kinds of tasks, or solve certain kinds of problems. Many of the tasks we perform overlap with tasks that other animals perform, but some are unique to humans (and perhaps our very close genetic relatives). Productive language use is a classic example.

    Research into artificial intelligence is motivated by analyzing these performances and attempting to reproduce them through computational and mechanical means, and it has continued to encroach on that unique cluster of activities to the point that there is very little left that is unique to humanity. Very few machines, however, have the ability to accomplish a wide range of those tasks. We generally design specialized machines for particular purposes, and we haven’t come very close to a general thinker. The assumption is that, once we build a machine that can perform all (and only?) those tasks humans perform, we will have a machine that passes the Turing test. I have no sympathy with the skeptical view that this is impossible in principle.

    So if the question is, what would make Google (or a calculator, or a dryer) ‘conscious’ or ‘thinking’, that’s a bad question. These machines are expert systems designed for specific purposes; since thinking is a cluster of activities, it’s not the kind of thing you can just attach to these single-purpose machines without turning them into something very different. A thinking calculator isn’t a calculator; it is a thinker that does calculations. More importantly, solving the search problem doesn’t require thinking, and in fact simply CANNOT be solved by attempting to replicate thoughtful human behavior.

    I take it that none of this is especially controversial, or particularly interesting for philosophers. One interesting question is, for a given task, what does it take to solve that task? I take it that this is a question for science to solve, not philosophy. Similarly for the question “what range of activities do humans perform?” Philosophers can help to the extent that they develop the theoretical resources for answering these questions, but once the questions are formulated well enough to motivate experimental research, it’s out of the philosophers’ hands.

    Philosophers, however, think the interesting question is “what is consciousness” or “what is thinking”. I have no truck with these questions insofar as they do not specify a set of explicit, observable behaviors or activities that are manifestations of this underlying feature, and insofar as they require non-empirical answers. The onus is not on me to explain how the above view accords with philosophical daydreams about human uniqueness. The onus is on the daydreaming philosopher.

    This is really the extent of my interest in “Artificial Intelligence” as it has been discussed in the literature, and really I don’t have much more interest in the philosophy of mind except in the deflationary sense described above.

  5. The more important notion for my own work is probably the notion of agency, and here I can give you substantive thick vs thin notions. Let’s be explicit:

    Thick agency: agency that implies some ethical or moral responsibility for an action.

    Thin agency: activity that is independent of the activity of other agents.

    Thick agency is the subject of much moral theory, and action theory more generally. Thin agency may or may not entail a substantive moral/action theory. Nevertheless, I think the notion of independence implicated by the thin sense is both philosophically interesting and underappreciated, especially with regard to technology.

    I am interested in the question: can machines perform activities independent of their human designers and users? If so, what consequences does this have for both a theory of technology and a theory of the human mind?

    As you know from my previous work, I’ve spent some time detailing (with arguments in premise/conclusion form!) the nature of the use relationship and the design relationship. My thesis is precisely that these relationships are not exhaustive. This is a positive claim that is not mere posturing, and does not require attributing either robust mental processes (thinking, consciousness) to machines, nor attributing to them a thick notion of agency. However, although I am working with a thin notion of agency, my claim is not trivial. The majority view is that these relations are exhaustive, and consequently the apparent activity of machines can be explained by direct appeal to the activity of other agents, and hence without attributing that activity to the machine itself. In other words, despite the scientific consensus with respect to artificial intelligence, there is a deeper barrier to accepting genuine machine independence. This is apparent in discussions pertaining to the philosophy of mind, the metaphysics of artifacts, and anthropological and psychological literature dealing with the use and design of tools. I believe that the issue of machine independence is really what is turning the crank in the artificial intelligence debate, and that answering the question of machine independence ought to have serious implications for that debate, but nevertheless the question of independence is unique and more fundamental. I think a strong notion of independence is all that is required for a thin notion of agency. I think a strong notion of independence is philosophically important given the trend of externalizing and relativising the mind. I think the issue of independence is one of those interesting questions that I am keen to tackle head on, even though I slide around other messy philosophical notions.

    Arguing that use and design are not exhaustive is not enough to generate a strong notion of independence, since there may be other dependence relations that hold between minds and machines. I think that participation is one such dependence relation, but it has the interesting feature that the dependence requires treating other participants as independent in some relevant sense. I think this is the backbone of the Turing test, and so my dissertation is specifically designed to treat this notion of ‘independent participation’ as it bears on the philosophical discussion of artificial intelligence.

    Is this all conceptual analysis? I don’t think so, although it certainly involves clarifying and explicating a number of entrenched concepts. I think I am also attempting to open some conceptual space for dissolving the apparent debate over artificial intelligence, and I think that is progress in a sense. However, I’m not impotently suggesting we play my game without stating any advantages over other games. I think the philosophy of technology aspect of my work is more than just suggestive: I am trying to uncover a deep flaw with respect to our thinking about technology, and attempting to correct the flaw in order to better explain our relations with machines. There is nothing relativistic or constructivist about this point.

    Have I justified my existence adequately, Todd?

  6. Of course, no premises, and no answer to my question: who is your audience?

    I was concerned with your definitions of ‘thinking’ and also ‘participation.’ In my view, thick vs. thin definitions have to do with the difference between a descriptive and normative analysis. So, a descriptive analysis of ‘participation’ might read: to be a participant is to have causal efficacy over and above what the designers ever intended. A normative analysis might read: to be a participant is to be capable of self-correction, i.e., learning. But then this requires analysis of the preconditions for self-correction. Descartes’ 4th Meditation argues that one precondition is freedom. But you don’t want to go there … I want you to say how you can ground norms apart from freedom. Kant would never agree. Again, who is your audience?

    You aren’t answering my questions or defining terms. For example, “Rather, what we call thinking is just a cluster of activities that humans perform. More specifically, thinking is having the ability to perform certain kinds of tasks, or solve certain kinds of problems.” Awful definition. What do you mean by ‘is just’? Usually that expression indicates an identity statement, in which case you claim that “a cluster of activities that humans perform” is both necessary and sufficient for thinking. But then in the next sentence you change your game: “more specifically” (huh?) – actually you are saying “more generally” – that thinking is not restricted to human activity but “the ability to perform certain kinds of tasks, etc.” This looseness is bothersome. You want to say X but none of your sentences add up to X. And your “more specific” characterization has obvious counterexamples, e.g., a calculator.

    But then you say – not in your “definition,” but as an addendum – that a “general thinker” can perform a much wider range of tasks than a calculator. OK, so a calculator is a “specific thinker.” What sneaky work is “general thinker” doing?

    “These machines are expert systems designed for specific purposes; since thinking is a cluster of activities, it’s not the kind of thing you can just attach to these single-purpose machines without turning them into something very different.” Cluster of what activities – and where does this come from? A bald and weird assertion. Are you actually telling me that calculators don’t perform a “cluster of activities”? I see, they only do math activities. But humans only do human activities. Now you are playing on the term ‘cluster.’ Not to mention ‘activity.’ See Sartre’s analysis of ‘action.’ Why is he wrong to claim that a necessary condition of action is non-positional consciousness? I need to know what an activity is. Descriptive – normative? What sense are you employing? Sartre clearly distinguishes between the two, and only then proceeds forward. The former is what he calls an event or occurrence or result (descriptive).

    “I take it that none of this is especially controversial, or particularly interesting.” Well, it doesn’t make sense. I’m not necessarily saying you are confused, but only that your writing is confused.

    “but once the questions are formulated well enough.” That hasn’t happened here.

    “Philosophers, however, think the interesting question is “what is consciousness” or “what is thinking”. I have no truck with these questions insofar as they do not specify a set of explicit, observable behaviors or activities that are manifestations of this underlying feature. The onus is not on me to explain how the above view accords with philosophical daydreams about human uniqueness.” What? You are a behaviorist? What is the role of ‘manifestation’? Is this an ontological or epistemological appeal? See Searle’s Mind book. Not to mention Haugeland. This is straightforward begging of the question – a flat-out assertion when I asked for an argument in premise form. Furthermore, neither makes any claim to “human uniqueness.” Other species can do just as well.

  7. “I am interested in the question: can machines perform activities independent of their human designers and users? If so, what consequences does this have for both a theory of technology and a theory of the human mind?” Of course they can. Anyone can design a machine that winds up doing more than they expected – i.e., it has causal efficacy beyond what the designer intended. But Google seems more interesting, as you admit: it learns – or seems to. That’s where the issue lies. The rub is in the learning. Kant distinguished between acting in accordance with law and acting from the conception of law. Only those capable of the latter, on his view, are learners.

    “The majority view is that these relations are exhaustive, and consequently the apparent activity of machines can be explained by direct appeal to the activity of other agents, and hence without attributing that activity to the machine itself. In other words, despite the scientific consensus with respect to artificial intelligence, there is a deeper barrier to accepting genuine machine independence. This is apparent in discussions pertaining to the philosophy of mind, the metaphysics of artifacts, and anthropological and psychological literature dealing with the use and design of tools.” I want some references. Maybe you are right, and philosophers are dumber than I think. Of course machines are independent in the sense in which I defined ‘independence’ in the second sentence of the second paragraph above. If philosophers deny this then they are dumb. The issue has to do with learning and its preconditions. This takes you right to the heart of Kant’s philosophy (other philosophers too): norms and freedom. Philosophers might well deny that Google can learn (normatively defined). You need something to say about this, otherwise you aren’t engaging philosophers. Maybe Kant is wrong; but why?

    Look, I’m not trying to be a jerk. Rather I find your project highly engaging otherwise I wouldn’t bother to comment. You touch on deep stuff, but sometimes I wonder whether you slide by the depth too quickly.

  8. By ‘independence’ I mean (descriptively understood): “Anyone can design a machine that winds up doing more than they expected – i.e., it has causal efficacy beyond what the designer intended.” This is obviously accommodated by your denial of the claim: “the apparent activity of machines can [only] be explained by direct appeal to the activity of other agents, and hence without attributing that activity to the machine itself.” Whether germs or the printing press, they do much more than intended, so they are of course independent of the designer (in that sense).
