A conversation with David Pescovitz

If one of the fundamental problems of the technological world is the explosion of information, then it seems to me that the task of ‘sensemaking’ is a burden that must be taken up by both humans and machines. This is where the real power of human-machine collaboration lies: machines are not just tools to be used by humans, but fellow sensemakers confronting a VUCA world alongside us. Cognitive enhancements, memory and attention drugs, and so on, look like pissing in the wind compared to the overwhelming amounts of information produced by the collective. At best, it seems, these enhancements help the individual mind focus on the information relevant to its own goals and projects, and mask out the unimportant or uninteresting. Perhaps this leads to a certain amount of individual empowerment, but it still leaves mountains of (possibly relevant) data untouched, and therefore doesn’t solve the problem.

Perhaps I am using the term ‘sensemaking’ as more or less synonymous with terms like ‘interpreting’ or ‘understanding’, and maybe this isn’t exactly what you mean. But my idea is that our machines themselves will play a role in helping to determine what is important or interesting. This is why I said that Google is itself a sensemaker: it has the goal of sorting out what is relevant from what is irrelevant. As you rightly point out, Google isn’t terribly good at the task, and the user must exercise their own judgment about how to use the results Google makes available. But Google is already good enough that even the unenhanced individual, with a moderate amount of training, doesn’t have too much trouble making a decent judgment call. The upshot is that with the collaboration of sensemakers like Google, we have rendered that apparently insurmountable flood of information manageable.

You responded that sensemaking isn’t just the raw interpretation of data; it is also doing something with that information. I found this troubling, and somewhat antithetical to the general idea of human-machine collaboration. The implications seemed to be:

1) Machines aren’t doing anything with the information they collect.

– I don’t think you want to admit this (it is exactly what the idea of blogjects/spimes denies), although I admit that most of the examples you gave during the talk were of people doing interesting things with information.

2) Only humans can be sensemakers, because only humans have projects (goals, commitments) that require them to act.

– This is a standard move against artificial intelligence, but I would argue that the very idea of human-machine collaboration implies that machines share our projects. Maybe machines do not have projects of their own (though I think they do), but their actions and behaviors are completely intertwined with our commitments.

3) Human individuals are the theoretical center and driving force of technological advancement.

– This is the most worrying implication, since it suggests that humans on their own are responsible for coping with the technological environment. But if machines are developing us, as you say, then humans aren’t the driving force!

We have, ultimately, a collection of humans and machines working together, each contributing to the overall project of sensemaking.