So in the D&D thread on the Deep Blue article, I was getting a bit liberal with my misanthropic technophile rhetorical flourishes. This particular response makes me chuckle a bit:
Not to attack you or anything, but you get overly dramatic over bizarre stuff. What do you mean by “This comparatively simple inert machine generated genuine panic and emotion in humanity’s best representative; in the face of the machine, we flinched first” exactly? It seems like you’re turning the frustration of one person into a species-wide defeat that we all felt — and on top of it, you really seem to relish it. It seems odd to me that you simultaneously place such great significance upon machines performing the tasks they were built to perform and such great satisfaction in humans “losing.”
After I gave my colloquium on Friday, there was some discussion about how my intuitions concerning machines and technology didn’t align with those of most people at the talk. A certain Mr. Swenson suggested, via an allusion to Jane Goodall, that perhaps I had spent so much time around machines that I had actually started to think like them.
Well, if loving machines is wrong then I don’t wanna be right.
That’s precisely my objection as well. Somehow, you have the intuition that machines/computers are more than just tools, and that is the basis for much of your argumentation. I certainly don’t share that intuition, and it seems to me that there is a mountain of both empirical and intuitive evidence against it.
Certainly some machines and computers operate as tools, at least some of the time. I just think that some machines can sometimes function independently, without the control of a user, and thus cannot simply be understood as tools.
I don’t accept your intuitive evidence, because it is the same ‘evidence’ that makes you believe that you have a mind, and that your mind is somehow ‘special’. At the very least your intuitions here are parochial, and I’m happy to be outside that closed circle. In any case, I’m not sure what empirical data you have in mind that would bear on the quasi-metaphysical distinction I am drawing between tools and non-tools.
But, see, I’m pretty sure I wouldn’t even accept your account of tools (in fact, I don’t think you can give an account; but prove me wrong, Jon!). I think the only possible definition of a tool is the one given in the extended mind literature: a tool is an extension of the mind, employed by a user as an external resource for solving a problem.
Clark puts the point as follows:
“Real embodied intelligence is fundamentally a way of engaging the world… the image here is of two coupled complex systems (the agent and the environment) whose joint activity solves the problem”
A tool, then, is any environmental resource that gets tightly coupled with the agent as it engages the world. Clark wants to extend the agent to include the tools it uses in that engagement, so at least some of the time the naked mind isn’t identical to the agent; instead, it functions as a User, tasked with coordinating its tools and directing them toward the problem at hand.
So my point here is simply that some machines don’t function as tools, because they aren’t an extension of any mind. Even if I grant the claim that Google’s meaningful activity is derivative of our own, it still doesn’t function as an extension of my mind, or of anyone’s mind for that matter. Thus, Google is not a tool. That doesn’t make it a person, that doesn’t make it a mind, and it certainly isn’t conscious, but to dismiss it as a mere tool is to greatly misunderstand what is going on.
I meant to add:
My view also suggests an empirical distinction can be drawn:
Figure out what brains do when they use tools. If the notion of ‘tool use’ is to have any scientific weight whatsoever, there should be some kind of theory to account for the neurological basis of tool use (even if it is spread over a variety of mental modules). Then, find instances in which the brain is ‘cooperating’ (in the loose colloquial sense) with a machine to solve a problem (i.e., the solution to the problem depends on the contributions of the machine), but the brain does not exhibit the neurological signs of genuine tool use.
If such instances exist, then you have an empirical example of a mechanical non-tool. I am simply arguing, in the absence of an established neurological basis for tool use, that such instances not only exist but are quite common.
I feel like I should quote some Wittgenstein at you, but I’m out of practice.
Hmmm, your kung-fu is good. I’m bothered by the necessary inclusion of the extended mind thesis in your theory of tools, though; it immediately bars me from disputing your conclusion that Google is not a tool, since I don’t accept one of your premises. Let me see if I can work out an account of tools that satisfies me. I’ll get back to you.