Consider eHarmony, the online dating service that uses some highly sophisticated statistical methods for matching people up, with the express goal of long-term compatibility.
From The Atlantic, “How Do I Love Thee?”:
“We’re using science in an area most people think of as inherently unscientific,” Gonzaga said. So far, the data are promising: a recent Harris Interactive poll found that between September of 2004 and September of 2005, eHarmony facilitated the marriages of more than 33,000 members—an average of forty-six marriages a day. And a 2004 in-house study of nearly 300 married couples showed that people who met through eHarmony report more marital satisfaction than those who met by other means. The company is now replicating that study in a larger sample.
“We have massive amounts of data!” Warren said. “Twelve thousand new people a day taking a 436-item questionnaire! Ultimately, our dream is to have the biggest group of relationship psychologists in the country. It’s so easy to get people excited about coming here. We’ve got more data than they could collect in a thousand years.”
The strength of eHarmony, and what makes it so popular and apparently successful, is the sheer amount of data they have collected, together with their theoretical models of relationships that can mine that data for compatibility results. They claim to be using science to build relationships (contrast with chemistry.com, which basically uses a souped-up Myers-Briggs test).
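To make this concrete, here is a minimal sketch of what questionnaire-driven matching could look like in principle. To be clear, this is purely hypothetical: eHarmony’s actual model is proprietary, and every variable, weight, and function below is invented for illustration.

```python
# A purely hypothetical sketch of questionnaire-based matching.
# This is NOT eHarmony's actual (proprietary) model; the encoding,
# weights, and scoring rule are all invented for illustration.
import numpy as np

def compatibility(a: np.ndarray, b: np.ndarray, weights: np.ndarray) -> float:
    """Weighted agreement between two members' questionnaire responses
    (each encoded on a 0-1 scale), so 1.0 means perfect agreement."""
    # Penalize disagreement on each item, weighted by how much the
    # (hypothetical) relationship model cares about that item.
    disagreement = weights * np.abs(a - b)
    return 1.0 - disagreement.sum() / weights.sum()

# Two members answering a toy 5-item questionnaire; in the real system
# the vector would have hundreds of items (the post mentions 436).
alice = np.array([0.9, 0.2, 0.7, 0.5, 0.8])
bob   = np.array([0.8, 0.3, 0.6, 0.9, 0.7])
w     = np.array([2.0, 1.0, 1.5, 0.5, 1.0])  # item weights, in principle fit to outcome data

print(round(compatibility(alice, bob, w), 3))  # 0.875
```

The point of the sketch is only this: once the weights are fixed, the match is fully determined by the data, and no human looks at Alice and Bob in particular.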
Question: who is responsible for the resulting pairs suggested by the system?
Consider: the statistical models are the result of a great deal of R&D by some rather prominent academics and experts in this field of psychology.
None of the scientists responsible for building those models (nor, for that matter, any of the programmers and engineers responsible for implementing them) directly influences the resulting suggestion from the statistical analysis.
It seems at least intuitively plausible to say that the machine (the models and so forth implemented in actual computer systems) is the one making the suggestion of compatibility, questions of responsibility aside. That is, it is the machine who produces matches from the data, and nothing but the machine could have produced those matches.
Common practice suggests that the one who makes a suggestion is often blamed (if not held ‘responsible’ in a robust sense) for suggestions that go awry. “Let’s go to the Esquire!” (20 minutes later) “Who suggested we come here?”
I would guess that this common practice flows over into the eHarmony situation. On a blind date going poorly: “they picked a bad match”. Note that ‘they’ here is ambiguous between eHarmony qua company, the scientists modeling the relationship data, and the system that actually produced the match. That the first two possibilities represent different kinds of agents (corporate vs. institutional) suggests it is at least an open possibility that other deviant agents (like the computational systems producing matches) are open to blame.
Taken together, this seems to suggest that it is at least within common and intuitive practice to see the system as open to blame in making false matches.
I take it, though please correct me if I’m wrong, that blame can in most cases be separated from responsibility. Cheney might be the one to blame in the shooting, in the sense that he was the one who pulled the trigger, and yet might not be responsible, in the sense of deserving retribution or warranting any corrective or preventative measures. In other words, I take it that responsibility adds to mere blame at least the idea that steps should be taken to correct the fault (and pick your own theory of justice to fill in the account of ‘correct’ here).
In the case of eHarmony, and all such statistical, information-based models, the results DO get better over time. The more people that sign up for the program and fill out the questionnaire, the more data they have to model, and the more accurate the results. Perhaps there are upper limits to the accuracy of the model, but with increasing information the results tend to approach that limit.
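A toy simulation can illustrate that claim. Assume, for the sake of the sketch, that the model is just estimating some population-level parameter from noisy questionnaire answers; nothing here reflects eHarmony’s actual data or methods.

```python
# Toy illustration of "more members, better model": the error of an
# estimate built from n noisy questionnaire answers shrinks roughly
# like 1/sqrt(n). All numbers are invented for illustration.
import random

random.seed(0)
TRUE_VALUE = 0.62   # hypothetical population-level parameter
NOISE = 0.3         # irreducible spread in individual answers

def estimate(n: int) -> float:
    """Average of n noisy responses to a single questionnaire item."""
    return sum(TRUE_VALUE + random.gauss(0, NOISE) for _ in range(n)) / n

for n in (10, 100, 1000, 10000):
    print(f"n={n:>6}  error={abs(estimate(n) - TRUE_VALUE):.4f}")
# Typical behavior: the error falls as n grows, but with diminishing
# returns; accuracy is ultimately capped by the noise in the answers
# themselves, which is the 'upper limit' gestured at above.
```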
Perhaps this is not true responsibility (cf. Kant on acting in accordance with a duty). But note that these corrective measures increase the reliability, and by extension the trustworthiness, of the system. In other words, whether or not someone or something is responsible for the match, people do tend to depend on these models. Both epistemic and ethical weight is put on the results eHarmony produces. And yet, we seem to lack any good moral agent around to support that weight.
But there are all kinds of situations which arise (for good or ill) and for which we don’t accord praise and blame. So, for example, we wouldn’t think that blame was appropriate if a previously unobserved asteroid fell out of the sky and obliterated NYC. Why? For the simple reason that there isn’t an agent essentially involved in the causal chain.
Further, consider what we would say if someone did want to assign blame. Pat Robertson, to pick a name out of a hat, might claim that The New School is to blame because their fashion forward approach to education brought down the wrath of God. What we’d say to that, I hope, is that Pat Robertson is wrong. That he has made the mistake of attributing agency where there is none.
One of the weird things about your project of expanding the scope of our normative explanations is that it reverses a trend which seems to have been moving in the right direction. You lay out these situations and appeal to our temptation to view them as situations where agency is present, but the thing is, that temptation is the same one which informs all manner of superstitions. In moving past those superstitions, part of what we did was accept the unreliability of our inclination to attribute phenomena to agents.
Say I come across a scene of devastation and ask “Who did this?”, and someone responds “It was an asteroid.” Implicit in my question is a request for the responsible agent, but giving a causal story of how it happened still suffices to fix blame, even if it fixes on something that is not an agent and hence not responsible. At the intuitive level, ‘blame’ attribution seems independent of agent status, since it is ambiguous between responsibility and causal instigator (in the sense of ‘who/what is at fault’). Once we find out that it was an asteroid, we then conclude that there are no agents to hold responsible: it was an accident. Similarly, if I accidentally shoot someone in the face with a shotgun, I can be blamed for my action without being held responsible. It was an accident, so justice has not been disrupted, and thus there are no wrongs to right.
(This probably isn’t a very good reading of the word ‘blame’, and I’m not at all familiar with the ethics literature here, so I don’t know any better word. I just want ‘blame’ to pick out the properties something has when held responsible, apart from its status as a moral agent (and all the complications that entails). So on my reading, we can blame the asteroid or the hurricane for the devastation it causes without holding any agent responsible. That seems to accord with common practice.)
You are right, common practice is awfully superstitious. This is why I am trying to state my project in normative terms, as opposed to, say, in terms of ‘consciousness’ or some such thing that might commit me to panpsychism. When we describe why we think we need normative theories, we usually describe it in agent-neutral terms: we need to know where to place the justificatory or ethical weight in particular situations. But responses to these demands assume that justification rises and falls with the particular kind of self-reflective beings humans are, and that simply doesn’t do justice to the many complications in our practices of justifying our claims to knowledge and behavior.
I’d say that of all the examples I constantly harp on, eHarmony here is most clearly NOT an agent. It doesn’t do anything but crunch the numbers in the way the scientists tell it to. And yet IT is the one who establishes a certain match, and IT is the one who suggests compatibility statistics based on the criteria it is given. Nothing else makes any relationship suggestions for any particular individuals. You might reply “well, it doesn’t even really make ‘relationship suggestions’, because it doesn’t understand what that entails”, but that misses the point. The point is that it is the system itself who instigates the causal chain that ends with you in a happy relationship or not. Things would have been otherwise if the computer had done otherwise, holding everything else constant. Of course that’s true for lots of things, but the humble point I am arguing for is that the machine is actually doing something, much more so than the asteroid flying through the air. And because it is really doing something, it is a relevant aspect in our normative calculations. The machine itself can bear justificatory weight.
So I think you are wrong that this trend of narrowed normative scope is in the right direction. Individualistic, agent-centered moral theories lead to a bias that favors self-reflective, autonomous agents. This seems increasingly anachronistic in a world populated by governments, public and private institutions like science and corporations, groups like unions and fundamentalists, and artificially intelligent systems, all of which play hugely important, and deeply interconnected, roles in our lives.
hmmmm. I don’t know, man, it seems the IT here is crunching the numbers and sorting things into categories. Are you saying that the sorting requires it to leap outside the programming, where it would be “doing something”? Would you use eHarmony, or does the e part scare you a bit? (sorry, I couldn’t resist) Anywho, I think the more interesting argument might be the one you and your Z friend are having about agency. You’ve given it a delightful political spin in that last paragraph of your response.
Ummm, unless we had good reason to believe that it was man-made, wouldn’t we look at a scene of devastation and say, “What did this?” and not “Who did this?”
You need to watch more Dragonball Z Patrick.
Yeah, that’s pretty clearly true.
–snip–
So I think you are wrong that this trend of narrowed normative scope is in the right direction. Individualistic, agent-centered moral theories lead to a bias that favors self-reflective, autonomous agents. This seems increasingly anachronistic in a world populated by governments, public and private institutions like science and corporations, groups like unions and fundamentalists, and artificially intelligent systems, all of which play hugely important, and deeply interconnected, roles in our lives.
–snip–
I don’t dispute that purely agent-centered moral theories are problematic. These issues, though, seem to be best handled by expanding the pool of moral actors in a very particular way. Which is to say, to expand it in a way that includes collectives of agents as moral actors. There’s also (closely related) work to be done in explaining how the reasons of individual agents are dependent on the reasons of other individual agents.
(can you find the fancy new term of art in that paragraph? I knew you could!)
The expansion to collective actors isn’t like the expansion to machines — or if it is, it isn’t to be argued for in the way you’ve argued in the post. The expansion I suggested arises out of an understanding of what those collectives are and how they operate in the causal nexus. That is, we see those collectives as embodying, in some sense, the moral powers of their members. If we perform a similar analysis for machines (or for programs or whatever) then we don’t get the same result. We end up moving our moral judgements up a level, to the agents (or collectives) which created the machine. And this is for the simple reason that agents almost always act remotely, so something like eHarmony doesn’t even begin to look different from standard action.
But to get back to my initial objection. The main part of your reply seems to be that we ought to use normative terminology not to fix responsibility on agents, but rather in a broader way that fixes responsibility without the presupposition that the responsible (what?) X is an agent. Well, I don’t think that’s a particularly useful way of reforming our normative usage, but suppose that we did. What, then, would your argument accomplish? One thing’s for sure: there would no longer be any reason to think that our attribution of blame, praise, or whatever to machine systems gave us any evidence at all that those systems were in any way like us.
I don’t see how the situation is helped any by insisting that collectives can only be constituted by morally relevant agents. I suppose I am advocating that collectives themselves can be taken as a fundamental unit of moral analysis without worrying about deriving power from ‘autonomous individuals’, which at least to my mind has been unsatisfactorily explained. Of course, to look at the way normative weight falls we would need to look at the relevant factors within that collective: its internal dynamics and structures, and the individual actors and events that influence the eventual result. My claim is merely that the machines play a normatively relevant role within group dynamics.
I admit that I am trying to walk a very narrow line. In the OP I tried to be clear that I was not saying eHarmony’s computer system is an agent in any more robust a sense than “it made the match”. I am not saying it is fully responsible, but I am also not saying it is causally or normatively irrelevant. So I get attacked from both sides in this debate: on the one hand you have the Heideggerians arguing that the machines ARE irrelevant because they don’t actually DO anything. I suspect this is the intuition that compels us to ‘move our judgements up’ to the humans that design the machine. On the other hand, there is the agency debate, which may grant the machine causal powers, but because those causal powers don’t mirror our own, exempts the machine from any normative status whatsoever.
First of all, your claim isn’t merely that machines play a normatively relevant role within group dynamics. That claim is trivial, and is true of any machine that plays a role in any process where values are involved. It’s true, for example, that the Hindenburg played a normatively relevant role in the social and political culture of interwar Germany.
I’m not entirely clear what your claim is, but it certainly goes deeper than that. At the very least, you seem to be claiming that when we engage in the behavior of blaming machines we’re doing the same sort of thing that we’re doing when we blame people. And maybe we are, but your way of talking about it in this post gets that at the cost of saying that this is the same thing we’re doing when we blame asteroids. And it seems clear to me that when we blame an asteroid we are doing something different. Moreover, it seems to me that if we think we’re doing the same thing then we’re making the mistake of thinking that asteroids are different sorts of things than they are.
Regarding your point about collectives, I’m unconvinced. To say that a collective is, without further analysis, a fundamental unit of moral analysis gives us no way of distinguishing between collectives and asteroids. The thing about collectives is that I can give an account by which I explain the appropriateness of assigning praise and blame. That is, I can talk about the fact that the activities of those collectives are themselves expressions of the very same moral powers (in Rawls’ sense) that undergird our practice of assigning responsibility to people. So there is a deep parallel between collectives and agents, a parallel which explains the appropriateness of using the same sort of normative language to apply to both.
And the point I tried to make earlier is just that if you were to do an analysis of a machine in which you found those same moral powers expressed, then the natural conclusion would be that human beings are acting, remotely, through the machine. What you need to establish your conclusion is to show that in a given machine activity there is an expression of different moral powers, moral powers which are similar to human moral powers but original to the machine.
Now, for myself, I’m open to the possibility that sufficiently complex machines could instantiate relevantly similar moral powers. My objection here is merely to the strategy of argument which seeks to bootstrap machines into the normative universe by divorcing our normative practices from the understandings which ground them.