Example II: The naming machine

Say we automate astronomy by building telescopes that search the sky in regular patterns and, upon finding a star or other notable object in space, assign that object a name from an officially designated list of names.

On Kripke’s view, a name has a reference in virtue of a causal history of use that can be traced back to an initial ‘baptism’ or imposition of a name. Some person at some time in the past pointed at water and said ‘water’ (or some cognate), and from that point forward the word ‘water’ rigidly designates water in all possible worlds.

Assume for a moment that Kripke is right. Does our automated astronomy bot name the star?

One might think ‘no, the star is named in virtue of the pattern of search employed by the machine, and the list of names, both of which are developed by the scientists and engineers who designed the machine.’

But, as I have been arguing, the designers don’t name anything. It is the machine itself that forms the connection between a name and an object. The designers wouldn’t have known which object the proposed name would attach to, or even whether the name would ever in fact be used. We can complicate the story by making the lists more complex (for instance, different lists for different categories of stars), or by having the machine pick a random starting point within the list. I don’t think either variation helps the situation much.
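To fix ideas, here is a minimal, purely illustrative sketch of the kind of selection logic such a machine might run, assuming per-category name lists and a random starting offset; every name and identifier below is hypothetical, not part of the original example. The point to notice is that the name–object pairing only gets fixed at detection time, by the machine.

```python
import random

# Hypothetical per-category name lists; the names are placeholders.
NAME_LISTS = {
    "main_sequence": ["Aldor", "Beral", "Cassin"],
    "red_giant": ["Dovan", "Erith", "Falen"],
}

class NamingBot:
    """Assigns the next name from a designated list to each detected object."""

    def __init__(self, name_lists):
        self.name_lists = name_lists
        # Variation from the post: start at a random offset within each list,
        # so the designers cannot predict which name attaches to which object.
        self.cursors = {cat: random.randrange(len(names))
                        for cat, names in name_lists.items()}

    def baptize(self, detected_object, category):
        """Pair a newly detected object with the next available name in its category."""
        names = self.name_lists[category]
        i = self.cursors[category]
        self.cursors[category] = (i + 1) % len(names)
        return detected_object, names[i]

# The name-object link is forged here, at detection time, not at design time.
bot = NamingBot(NAME_LISTS)
print(bot.baptize("bright object at RA 14h39m, Dec -60d50m", "main_sequence"))
```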

Of course, the scientists’ ignorance about which object the name is attached to doesn’t itself hurt the Kripkean theory, since ‘water’ means H2O in all possible worlds, even those in which no one knows that ‘water’ is H2O. But the case here is more severe: the scientists not only lack knowledge of which star is named; they don’t even know that a star has been picked out at all. The scientists lack even the initial ostension, the pointing at some bright object and saying “that is _____”. It is the machine that makes the connection.

All this shows, of course, is that the machine could be the relevant causal instigator of a normative story of reference. I don’t know if Kripke talks about who or what can be in the business of naming things, but it seems to me that machines certainly can.

6 Comments

  1. All of the work is done by the social practice. If our social practice says that this machine’s designation accomplishes a baptism, then a baptism is accomplished. Did the designers name the star? Well, it is a little odd to say that they did, acting remotely through the machine. It’s no less odd to say that the machine named the star. Luckily, we don’t need to say either of those things. We can just say that the star is named such and such and stop right there. We could as easily have a social practice by which we selected names by pulling them out of a hat. Did the hat name the star?

    Does the machine play a role in the normative story? Yes, but so what? Suppose that a dentist uses an X-ray machine to discover whether or not you have a cavity. Once he’s done that, we can say that the machine played a normative role, since the x-rays it produced are relevant to what the dentist should do next. Similarly, suppose the dentist didn’t have an x-ray machine and instead searched for cavities using one of those horrid picks. Well, then the pick would also play a role in the normative story, and in a similarly denuded way. This is all trivial.

    Apparently, you have some richer version of the supposed normative role, but I can’t see what it is. You use the phrase “relevant causal instigator” and maybe that’s supposed to do some work. I don’t see it. If the operators of the naming bot aren’t the relevant causal instigators then neither is the dentist as operator of the x-ray machine. But the role played by the x-ray machine is trivial, so I don’t see what’s supposed to be so important about the naming bot.

  2. On this and the last post, I’m not sure what would change if the algorithm which the machine follows were instead followed by a man with a set of rules and a bunch of index cards.

    Let’s also look at the naming of hurricanes. We have a list. We know the next hurricane will be named “Jehosaphat” or something. Would we say that the creators of the list named the hurricane? Maybe not. Perhaps we would say the list itself, or the list plus the instructions, named the hurricane. I’m not saying it’s not the algorithm that is “acting” here. But if it is, it can do so without a computer.

  3. I thought of the hurricane example too. It’s part of what I was getting at in writing the first paragraph of my comment. Maybe I can make the underlying idea a little bit clearer. When would we appeal to the hurricane naming reference list? Just when someone asked how a hurricane came to have the name that it did. We’d then say something like, “see, we have this system where we make a list of names in advance, and then they’re assigned in alphabetical order as storms arise.” There’s nothing in that explanation about who named the hurricane, and, moreover, there’s no need for that sort of information. In fact, if somebody were to ask, “who named this hurricane?”, the best answer would be, “it doesn’t work like that.”

  4. I’m fine saying that the explanation is given in terms of the social practice, full stop. But Kripke’s theory does require some discussion of baptism, and it is only within that theory that this example works. My argument is simply that on Kripke’s theory, the machine itself is doing the naming work.

    But say that the initial baptism doesn’t matter, and all that we are concerned with is that the star is named such and such. My point is merely that the naming machine is importantly involved in setting that practice in motion. My ultimate claim here is that machines contribute to and participate in our social practices.

    Both examples are meant to attack the idea that machines themselves don’t do anything at all. Carman’s whole theory of computers rests on this point, and after I made this point explicit, he agreed with it. On his (and any Heideggerian’s) view, the machine merely computes; this telescope doesn’t actually ‘search the sky’ or ‘name the stars’ except in a loose metaphoric sense. These examples are designed to show that the Heideggerian line here is stupid. So ‘relevant causal instigator’ isn’t meant to do any normative work, but just the simple causal work of getting certain social practices going.

    A good response might be “Who cares what the Heideggerians say?” Well, the fact is that they are the most vocal opponents of the computationalist paradigm in cognitive science, and their view I think ultimately captures what lies at the bottom of our intuitions on these matters. If the computer can’t do anything in the first place, then obviously it can’t play chess or otherwise participate in our social games.

    Z, you seem to be just fine in saying that the machine actually does things, but the examples you appeal to confuse the point. Neither the X-ray machine nor the pick does anything beyond its instrumental value for the dentist. The pick is an almost paradigmatic case of tool use, and the X-ray machine doesn’t expand the dentist’s knowledge, but merely the scope of his observational abilities.

    The machines I am interested in are specifically those that do things beyond mere tool use, that actually contribute to our normative practices. Say the X-ray machine were hooked up to some program that evaluated the scan, made a diagnosis, and suggested the best methods and procedures for treating the problem. Do we still say the machine is just a tool? After all, that is exactly what the dentist is supposed to do… is that dentist just a tool? When the machine fills the same epistemological role as our experts, I see no reason not to grant the machine expert status, and experts are by definition members of our social practice.

    But to even get that far we need to grant that machines do things.

  5. A couple of preliminary comments. (1) I think zw is correct to imply in his first comment that your example cases are glorified intuition-checks. (2) We already are using a list-algorithm for naming objects in space (basically they are assigned a number based on the order of discovery and the year, and some letters to do with their classification if I recall correctly).

    On the assumption that Kripke is right. The astronobot plays a role in the causal story both at the design end and at the output end: its normative description has essentially to do with what the names it produces are for, i.e. use by scientists and others. (Robotic scientists could of course build such an astronobot.) The mechanism is one that produces names because it is set up to do this; here we can imagine discovering an object on Mars that functions as a star-namer without our being able to impute to it a star-naming function.

    We might imagine, however, an astronobot that merely matches sequences of numbers to regions in space where some (notable?) object is detected. Scientists later take these matched pairs and refer to the objects (let’s assume this is possible) by the number-sequences. I take it we would agree that now, at least, the objects have names. I would ask whether there is a difference between the matcherbot and the namerbot. In other words, and in response to the person who takes Kripke’s line, who gets to create rigid designators? Of course this question (much like zw’s second comment) is an inversion of the line of inquiry you’re taking.

    With respect to the hurricane case, I would say the answer is not “It doesn’t work like that” but rather “The agency that’s in charge of such things” or, better, “The officer of the agency who certifies that such-and-such is a tropical storm-cum-hurricane.” The act of naming is what’s important; which name you choose is immaterial, and the list is a distraction in this example. Perhaps this lends credence to the position that the astronobot is “naming” the stars.

    Oh, oops: I think I just suggested some conceptual analysis of a technical term.

  6. First, a quick comment about naming. As far as Kripke goes, I don’t know what he himself would say (he’s crazy), but it doesn’t seem to me that there’s anything about the causal theory of reference generally that requires that the baptism be done in one way rather than the other. What it requires, as I understand it, is that there be a social practice in place, the results of which are baptisms. It seems to me that we have any number of social practices in place which create rigid designators more or less by rote, without the interactions of agents. The hurricane and astronomical object naming systems are just two of them. To insist, as fizh does, that we insert an agent into the description is to make one of the mistakes W warned against in PI.

    Now eripsa — I chose those examples on purpose, because nothing you say seems clearly to distinguish them from your naming bot, and it’s pretty clear (I hope) that the role they play in our normative story isn’t one which differs from the role played by very straightforward tools. Your response adds in the notion that the naming bot creates knowledge in a way that the x-ray machine doesn’t. I’m not really sure I agree, but it’s hard to say because the notion of creating knowledge here is odd. Extending observational acuity doesn’t count, but naming does. Why? Presumably because naming creates information. What would you say about a calculator? A weaving machine that creates rugs according to randomly selected fractal patterns?

    I don’t have any confidence at all that your criterion is going to succeed in weeding out the cases of agent-like machines from machines which aren’t agent-like. And that’s because, at the end of the day, your criterion applies only to that which is externally observable, but the relevant distinction has to do with the internal states (or lack thereof) of the machine.

    Let me pick out something in what you wrote in your response that gets at the underlying point: “When the machine fills the same epistemological role as our experts, I see no reason not to grant the machine expert status, and experts are by definition members of our social practice.”

    I think that it risks begging the question to say that the machine ‘fills the same epistemological role’ since what’s at stake is whether the machine is the sort of thing that could ‘fill a role’ at all.

    This can be avoided if we interpret ‘fill a role’ in a denuded way, where saying that the machine fills a role doesn’t attribute robust agency to the machine, but then we appear to have no principled reason not to include dictionaries, encyclopedias, and tuning forks in the category of things which fill the same epistemological role as our experts. And if such things fit that category, then it seems wrong to license an inference from membership in that category to ‘members of our social practice’. Unless, similarly, ‘members of our social practice’ is interpreted in an extremely denuded way (one that includes, that is, hammers, picks, and so on).
