First, some terms introduced in part 1:
artifact: any product of human construction (including nonfunctional products, like art, waste, atmospheric carbon, etc).
machine: any functional artifact (cars, hammers, bridges, etc)
tool: any functional artifact whose functional character depends on human mental activity
The dual natures view of artifacts insists that all machines are tools: that the categories are both coextensive as a matter of fact and cointensive as a matter of metaphysical or conceptual analysis. I will argue, contra the dual natures view, that some machines are not tools, but instead are participants that deserve treatment other than the purely instrumental. My argument is structured according to the outline below.
1. Machines derive their functional natures from minds (and are therefore tools) in two primary ways: either through their use or their design. Design and use are semi-independent aspects of an artifact’s functional character, and will be treated independently.
2. Concerning use:
2a. Use is the process by which a user puts a tool to some purpose, thereby extending their agency and capacities.
2b. There are no privileged users. It is just tools, all the way down.
2c. Users play a coordinating role among a collection of tools, orienting them towards some goal. In some cases, machines play this coordinating role. In other words, some machines are users.
2d. Although we sometimes treat other users as tools, we also sometimes treat other users as participants in a shared activity. The difference is that in the former case, I take the tool to be an extension of my own agency, whereas in the latter case I recognize a distinct agent with effective capacities in an environment overlapping with my own. I’ll call distinct agents participants.
2e. If some machines are users, and some users are participants, then it is logically possible that some machines are participants. If some machines are participants, then some machines have functional characters that do not derive from their use.
3. Turing’s test is about machine participants
3a. The standard reading of Turing takes the indistinguishability requirement of the Turing test to be central to his conception of “intelligent machines”. This mirroring relationship dominates the classic discussion of artificial intelligence, and it introduces the technological irrelevancy thesis: the claim that artifactual kinds are simply irrelevant to the question of thinking.
3b. However, Turing advocates for treating machines neither as tools nor as human equivalents, but as cooperating agents (in a conversation, for instance) with capacities that are (or may be) distinct from ours. Turing’s plea for what he calls “fair play for machines” asks us to recognize the functional capacities of machines independent of our use of them, and so constitutes a defense of machine participation.
4. Concerning design:
4a. Turing explicitly considers the relevance of design to the question of intelligent machines in his discussion of the Lady Lovelace objection. Lovelace argues that a machine “can do whatever we know how to order it to perform.” Turing paraphrases: “The machine can only do what we tell it to do.” Common parlance: “Computers only do as they’re programmed.”
4b. Lovelace is not skeptical of artificial intelligence in the sense of mirroring; she is skeptical of machine autonomy. Lovelace argues that a machine’s design prevents its performances from being genuine performances of the machine.
4c. Three senses of autonomy are relevant here: the roboticist’s conception (automaticity), the philosopher’s (genuine autonomy), and autopoiesis (self-organization or self-constitution).
4c1. The roboticist’s conception distinguishes between the performance of the machine “online” and its construction “offline”; automaticity is a measure of the degree of automation during online performances.
4c2. Genuine autonomy concerns not just the automation of the performance, but also the extent to which the operations governing the performance were freely selected. It relates to the classic philosophical debates over “free will” and human agency more generally.
4c3. Autopoiesis refers to a system’s ability to generate and maintain its own constitution as a persistent entity, and is a measure of organism independence.
4d. Lovelace doesn’t object to the possibility of autonomous machines in either the second or the third sense, provided that we are clever enough to order them to generate such performances. Instead, her objection targets the roboticist’s distinction between a machine’s online and offline behavior. Her argument suggests that this distinction is insufficient for justifying the technological irrelevancy thesis: artifacts are tools (dependent on human mental activity) no matter what they are doing or when they are doing it.
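To make the roboticist’s online/offline distinction concrete, here is a minimal, hypothetical Python sketch of my own (not drawn from Turing or Lovelace). The “offline” step is the human construction of the controller and the selection of its parameters; the “online” step is its automatic operation, with no operator in the loop.

```python
# Toy illustration (hypothetical example, not from the texts discussed above).
# "Offline": a human designer fixes the controller's governing operations and parameters.
# "Online": the controller then runs automatically, with no operator intervening.

def make_thermostat(setpoint, tolerance):
    """Offline construction: the designer orders the machine's behavior in advance."""
    def controller(temperature):
        # Online performance: each call runs without human input (high automaticity).
        if temperature < setpoint - tolerance:
            return "heat on"
        if temperature > setpoint + tolerance:
            return "heat off"
        return "hold"
    return controller

thermostat = make_thermostat(setpoint=20.0, tolerance=0.5)   # offline
for reading in [18.2, 19.7, 20.1, 21.3]:                     # online
    print(reading, "->", thermostat(reading))
```

On Lovelace’s reading, the fully automatic online loop changes nothing: every branch above was still ordered by its designer offline, which is why she takes the online/offline distinction to be too thin to support the technological irrelevancy thesis.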
5. Defeating machine autonomy skepticism
5a. If this is the right reading of Lovelace, then we should read Turing’s responses to Lovelace as arguments against machine autonomy skepticism thus described. Turing responds to Lovelace by suggesting that we build learning machines. This response looks baffling on most interpretations of Turing and Lovelace, which ascribe to Turing ridiculous views (like “learning is the result of an inexplicable event equivalent to an oracle machine”; see here).
5b. On my interpretation, Turing appeals to a simple fact about learning in the case of human beings. After a successful round of education, a student is able to independently reproduce certain kinds of performances (solving a math problem, for instance) they acquired from an instructor. In this case we judge that the student has learned, and that their performance is their own, even when the instructor can predict their behaviors in advance quite accurately. Turing says:
“An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behavior. This should apply most strongly to the later education of a machine arising from a child machine of well-tried design (or programme). This is in clear contrast with normal procedure when using a machine to do computations: one’s object is then to have a clear mental picture of the state of the machine at each moment in the computation. This object can only be achieved with a struggle. The view that ‘the machine can only do what we know how to order it to do,’ appears strange in face of this.”
5c. Turing’s suggestion is that it is strange to treat a machine that learns as an extension of a mind that really knows little of what is going on in the moment of the performance. The “clear contrast” is with, for instance, the puppeteer who knows exactly which strings are being pulled, and what behavior will result, at each moment in the performance. Turing is arguing that learning machines aren’t puppets in that sense, even if they are significantly designed, provided we grant them the simple form of independence we grant to any learning human being (a form of independence that doesn’t assume any substantive online/offline distinction). And, by appeal to fairness, we ought to apply the same standard to both humans and machines.
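To illustrate the “clear contrast” Turing points to, here is another minimal, hypothetical Python sketch of my own (an illustration under my own assumptions, not Turing’s proposal). The first classifier is explicitly ordered, so its designer has a clear mental picture of every step; the second acquires its behavior through a simple learning rule, so its designer specifies only the training procedure and can predict the resulting performance only “to some extent.”

```python
import random

# (1) Explicitly ordered machine: the designer dictates each step, like the puppeteer
# who knows exactly which strings are pulled at every moment of the performance.
def ordered_classifier(x):
    return 1 if x[0] + x[1] > 1.0 else 0

# (2) Learning machine: the designer orders only the learning procedure (a perceptron
# update rule). The weights that end up governing the online performance emerge from
# the training data and are not dictated in advance.
def train_perceptron(examples, epochs=20, lr=0.1):
    examples = list(examples)          # avoid mutating the caller's data
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for x, target in examples:
            prediction = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - prediction
            w[0] += lr * error * x[0]
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

examples = [((0.2, 0.3), 0), ((0.9, 0.8), 1), ((0.1, 0.7), 0), ((0.8, 0.6), 1)]
w, b = train_perceptron(examples)
print("learned weights:", w, "bias:", b)   # not chosen by the designer in advance
```

The point is not that the learned weights are unknowable, but that the designer’s relation to the online performance is no longer one of ordering each step, and that is the simple form of independence Turing asks us to grant the machine just as we grant it to a human pupil.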
6. In conclusion
6a. Turing’s defense of thinking machines addresses the fact that artifacts are both used and designed by human beings, and presents a framework for thinking about intelligent machines (“fair play”) as a systematic alternative to either instrumental conception (or to combinations of the two).
6b. Turing’s conception is also distinct from the mirroring relation emphasized in the classic debate over artificial intelligence: “It will not be possible to apply exactly the same teaching process to the machine as to a normal child.”
6c. On my interpretation, Turing’s conception of “machine participation” anticipates models of complex multi-agent systems and their dynamics, with “thinking machines” playing participatory roles. I’ll argue that developing a participatory view of artifacts and their functional natures, one that makes no assumptions about mind-dependence, is the proper way to think about artifacts in a naturalist framework.
6d. Finally, I’ll give some examples of participatory machines of various sorts, and discuss the implications that networks of machines have for our autopoietic organizations.
Glossary of terms, grouped by relevance:
artifact: any product of human construction (including noninstrumental products, like art, waste, atmospheric carbon, etc).
machine: any functional artifact (cars, hammers, bridges, etc)
tool: any functional artifact whose functional character depends on human mental activity
automaticity: a measure of the degree of automation during online performances.
genuine autonomy: self-governance. Not just automatic performance, but performance according to self-selected governing operations.
autopoiesis: a system’s ability to generate and maintain its own constitution as a persistent entity; a measure of organism independence.