Peeping Bots vs. Peeping Toms

Why do we care more about violations of privacy by conscious agents?

Most of us know that we have become data production engines, radiating our locations, interests and associations for the benefit of others. A number of us are deeply concerned about that fact. But people seem to get truly outraged only when they learn that humans are actually listening to Alexa recordings or that Facebook employees are scoping out private postings. Why is that? Call it the Peeping Tom effect: we have a visceral reaction to our private lives being observed by living, breathing agents that we lack when the same information is collected by computers. Perhaps this seems too obvious to remark upon, but it deserves serious scrutiny. One hypothesis, which I advance in a forthcoming paper with my colleague Ken Daley, is that we are likely hard-wired, perhaps by evolution, to have alarm bells ring when we think about human agents in our “space,” but that we have no such inborn reactions to the impersonal data collectors we have developed over the past fifty years. The fact that alarm bells ring in one instance and not the other is no reason to ignore the silent threat. There is a good case to be made that the threat of corporate knowledge, even when no human being ever learns the information, is quite a bit more dangerous than the threats we are more inclined to vilify.

Two features of human knowers, as opposed to machine knowers, stand out. Humans are conscious beings, and they have personal opinions, plans and intentions. It’s hard to swallow the idea that corporations or computer networks are themselves conscious, and it’s therefore hard to think of them as having opinions, plans and intentions. I’m inclined to grant that they aren’t conscious (though it’s an interesting thought experiment to imagine that computer networks were, unbeknownst to us, conscious), and for the sake of argument I’ll grant that corporations don’t have opinions, plans or intentions (though we certainly talk as if they do). It’s worth asking what extra threat these features of human knowers might pose.

It’s admittedly unappealing to think of a Facebook nerd becoming engrossed in the saga of my personal life, but what harm does it cause? Assuming he (pardon the assumption, but I can’t imagine it not being a he) doesn’t go rogue, stake me out, or threaten me or my loved ones, why does it matter that he knows that information? From one perspective, assuming he’s enjoying himself, that might even be thought a good thing! If the same information simply sits in a computer, no one is enjoying themselves, and isn’t more enjoyment better than less? Perhaps we think the privacy violation is impermissible, so the enjoyment doesn’t even start to outweigh that harm. But we’re not really asking whether it’s permissible to violate privacy; presumably it’s just as impermissible if my privacy is violated and the illicit information is merely stored in a network. We’re asking which situation is worse: a violation of privacy accompanied by a third person’s enjoyment, or a violation of privacy without it. I share the feeling that the former is worse, but I’d like to have something to say in defense of that feeling. Perhaps it’s the fear that the human will go rogue and the computer can’t. But my feeling doesn’t go away when I imagine the human spending life in prison, nor does it go away when I realize that computers can go rogue as well, causing me all sorts of harm.

There’s lots more to say and think about here. But for now let’s just let the question simmer: Are violations of privacy more harmful if they involve knowledge by conscious agents, and if so, why?