Why We Shouldn’t Be Allowed to Waive Our Privacy Rights

There is little doubt that privacy clauses and terms of service agreements don’t support the moral burden they are meant to carry. All too often they are designed to provide political cover rather than to generate informed consent. Not only does no one read them, but even someone with the attention span and intelligence to follow them would be unlikely to find all the policies hidden in documents several clicks deep. One striking estimate: if the average American actually read all the policies they encountered, they would lose 76 full workdays in the process, and the cost to productivity if all Americans were so conscientious would approach $1 trillion.

There is no arguing it, really: clicking on an AGREE button no more means that you agree with the content of a terms of service agreement than politely nodding your head during a mumbled conversation in a noisy bar means you are agreeing with the opinion you aren’t really hearing.

This is a big problem with the way we are doing things, but there is another, more fundamental issue that few have recognized: our privacy rights aren’t ours to waive.

That sounds paradoxical, but there are other rights we intuitively can’t waive—I cannot waive my right to self-determination by selling myself into bondage, for example, and I can’t waive my right to my body by selling myself to a cannibal for Thanksgiving dinner. It’s not plausible, though, that privacy violations inflict such extreme harms, so those probably aren’t the best places to look for analogues.

A closer analogy to privacy rights is voting rights.  I cannot waive my right to vote.  I can choose not to exercise it, but I cannot waive it.  I cannot exchange my right to vote for internet access or for a cushy job. I certainly can’t transfer my right to you, no matter how much you want to pay me. It’s my right, but that doesn’t mean I can give it up. That’s because my right to vote doesn’t only protect me—it protects my fellow citizens and the institution of democracy we collectively cherish. 

If I have the right to sell my vote, it endangers the entire democratic franchise.  It is likely to make your vote less valuable in comparison to someone else’s—plug in your favorite malevolent billionaire here for a scenario in which electoral outcomes are determined by the mass purchase of voting rights.  We cannot waive our right to vote because that right doesn’t primarily prevent a harm to us as individuals; it prevents a harm to an institution that undergirds the rights of others.

I suggest privacy rights are like voting rights in this respect. While we can suffer individual harm if someone knows our political preferences or gains access to the subtle triggers that sway us for or against a product or a candidate, the more important harm comes with the threat to the valuable institutions we collectively constitute.

If I have the ability to waive my privacy rights, so does everyone else. If we all waive those rights, we permit the collection of data that enables significant control over the electorate as a whole. Given enough information about the thoughts and behaviors of voters, propaganda and advertising can be extremely effective in swaying enough attitudes to change the outcome of an election. Though votes aren’t being bought, the result is similar: each individual vote is now outweighed by the statistically certain outcome of a data-informed campaign of voter manipulation.

If this is right, we’ve largely been looking in the wrong direction both for the harms of privacy rights violations and for the harms involved in our wanton disregard of those rights. In an age where data analytics can discern surprising connections between different elements of human personality and behavior, our data is not our own. By giving up our own data, we are essentially informing on those like us and enabling their manipulation. We shouldn’t do that just because we have an itch to play Clash of Kings.

So where does this leave us? I like to play Clash of Kings as much as the next guy, and frankly, when I think of it in terms of the harms likely to come to me, Clash of Kings can win pretty easily. When I realize that my own visceral reaction to privacy harms really isn’t to the point, I’m a little less cavalier about parting with my data. The truth is, though, that this is a place for governmental regulation, just as it is in the case of voting rights. In today’s political climate I won’t hold my breath, but the way we all think of these issues needs to undergo a shift away from our worries about our own individual private lives. As important as each of us is as an individual, some of the most worrisome harms come from the effect on the groups to which we belong. We need to shift our focus toward the harm these privacy violations cause all of us by enabling the manipulation of the public and the vitiation of our democracy.

Originally appeared in The Hill

Social Media, Democracy & Citizen Responsibility

In today’s climate of justifiable suspicion about the Googles and Facebooks of the world, it’s easy to overlook the responsibilities of the individuals using these platforms. While I’m always happy to point out the problematic nature of the data harvesting and information dissemination that these companies are built upon, I would also suggest that this does nothing to diminish our own social and moral obligation to make significant efforts to inform ourselves, resist contributing to increasing polarization, and do whatever is necessary to escape our cozy echo chambers and information bubbles.

Being a good citizen in a democracy requires more than many of us seem to think and much more than our actions often suggest. Pointing fingers, even when done in the right direction, is nowhere near enough. We need to wade through the oceans of bullshit emanating from partisan talking heads, fake news peddlers, marketing-driven, agenda-suffused cable news stations, and algorithmically curated newsfeeds in order to determine which actions, policies, and candidates best represent our values, as well as the values most conducive to the health of a thriving democratic society.

President Obama, in a recent interview at the Obama Foundation Summit, offered the following: “This idea of purity and you’re never compromised, and you’re always politically woke and all that stuff, you should get over that quickly. The world is messy. There are ambiguities. People who do really good stuff have flaws. People who you are fighting may love their kids and share certain things with you.”

The point, I take it, is not to disregard the mistreatment of marginalized groups but to do something beyond mere posturing and attempting to appear ‘woke’.

Too many of us today seem to believe that it’s enough to call out the many who err or those we may simply disagree with. “Then I can sit and feel pretty good about myself,” said Obama, “because, ‘man, you see how woke I was? I called you out.’ That’s not activism. That’s not bringing about change. If all you’re doing is casting stones, you’re probably not going to get that far. That’s easy to do.”

And while I’m quick to agree that platforms like Twitter and Facebook lend themselves to this practice of racing to be the first to spot and out injustice or ignorant speech, we still need to recognize when we’re being lulled into an ineffectual gotcha game of virtue signaling that, though it may provide fleeting feelings of superiority, produces very little in the way of lasting change or dialogue.

---

The speed with which Facebook and Google have come to play a central role in the everyday life of so many makes it easy to overlook how recent these companies are. Nonetheless their effects are undeniable. As we shared baby photos and searched for information on anything that might spark our curiosity, they’ve been aggregating our offerings and feeding us what they know will keep us coming back.

None of us like to be confronted with the possibility that we’re alone, that our beliefs might be false, or our deeply held values ultimately misguided. So social media curates our experience to provide us with the validation we so often seek. How better to do this than to gift us our own beliefs and values through the words and stories of others? This keeps us clicking and keeps the advertising dollars pouring in for the companies involved. Just like the angry incel savoring the hateful rantings of Donald Trump, we all feel the cozy pull of having our own views echoed back to us.

But, of course, none of this provides anything by way of truth or understanding. And more to the point at issue, none of this is conducive to an open-minded population willing to do the work required to breathe new life into an ailing democracy teetering on the precipice of unbridgeable polarization. While Aristotle, in the first democracy, aptly said (and I’m paraphrasing) that it’s the mark of an educated mind to be able to entertain a thought without accepting it, social media has given us the means of reinforcing our own thoughts without subjecting them to the slightest scrutiny. In fact, one might find these two distinct ideas to be fitting bookends for the nearly 2,500-year run of democracy.

While this characterization of things may be a bit hyperbolic, the existence of problematic echo chambers and curated tunnel vision is quite real. Fox News acolytes dig in their heels while liberals roll their eyes, and each side drifts further away from the possibility of honestly engaging with the views of the other. (*I refuse to equate the so-called ‘extremes’ on the left with those on the right. There’s a clear moral and epistemic difference between an oblivious (or worse) refusal to acknowledge, for example, the current resurgence of xenophobia and white supremacy and the desire for health care for all or a basic income.)

The online social media environment, with its intrusive notifications and conduciveness to mindless scrolling and clicking, falls short of providing an optimal arena for informed evaluation and close examination of information. It’s for this reason that I believe we need to slow our online experience. So many of us ‘grown-ups’ impose limits on our children’s technology usage but do so while staring into the void of facile stumpers and bottomless distraction. Maybe a simple break would do us well. Forget that. A break would do us well. Many of us need to take a goddamn walk…without the phones. Look someone in the eye. It might turn out that the ‘idiot Trump supporter’ or the ‘snowflake Socialist’ is just an ordinary, imperfect human like yourself (*hate-filled racist, nationalist misogynists to the side – there is nothing worth engaging with in such cases).

Moreover, in these days where our every move is harvested and aggregated, and where massive data companies commodify our very lives, it’s crucial that we recognize all of this while avoiding a victim’s mentality. We have an obligation to inform ourselves, evaluate our beliefs, reevaluate when new information arrives, then incorporate or discard where appropriate.

Navigating the world of ideas and agendas has become far more difficult due to social media, the intentions of the all-pervasive corporate giants, the sheer quantity of information, which leads to more skimming than careful consumption, the ever-lurking pull of fatigue-based complacency, and politically motivated fake news, amongst countless other factors. But, one way or another, we need to adapt if we’re going to have any hope of preserving democracy. Otherwise, we’re likely to revert to a power-dominated state of nature in which the only difference is that this time around it was ushered in by technology.

Peeping Bots vs. Peeping Toms

Why do we care more about violations of privacy by conscious agents?

Most of us know that we have become data production engines, radiating our locations, interests and associations for the benefit of others. A number of us are deeply concerned about that fact. But it seems that people really get outraged when they find out that actual humans are listening to Alexa inputs or that Facebook employees are scoping out private postings. Why is that? We can call it the Peeping Tom effect: we have a visceral reaction to our private lives being observed by living, breathing agents that we lack when the same information is collected by computers. Perhaps this seems too obvious to remark upon, but it deserves some serious scrutiny. One hypothesis, which I advance in a forthcoming paper with my colleague Ken Daley, is that we are likely hard-wired–perhaps evolutionarily–to have alarm bells ring when we think about human agents in our “space” but that we have no such inborn reactions to the impersonal data collectors we have developed in the past fifty years. The fact that alarm bells ring in one instance and not another is not a reason to ignore the silent threat. There’s a good case to be made that the threat of corporate knowledge–even if it doesn’t involve knowledge by a human–is quite a bit more dangerous than the threats we are more inclined to vilify.

Two features of human versus machine knowers stand out. Humans are conscious beings, and they have personal opinions, plans and intentions. It’s hard to swallow the idea that corporations or computer networks are themselves conscious, and it’s therefore hard to think of them as having opinions, plans and intentions. I’m inclined to grant the former–though it’s an interesting thought experiment to imagine if computer networks were, unbeknownst to us, conscious–and for the sake of argument I’ll grant that corporations don’t have opinions, plans or intentions (though we certainly talk as if they do). It’s worth asking what extra threat these features of humans might pose.

It’s admittedly unappealing to think of a Facebook nerd becoming engrossed in the saga of my personal life, but what harm does it cause? Assuming he (pardon the assumption, but I can’t imagine it not being a he) doesn’t go rogue and stake me out and threaten me or my loved ones, why does it matter that he knows that information? From one perspective, assuming he’s enjoying himself, that might even be thought to be a good thing! If the same information is simply in a computer, no one is enjoying themselves, and isn’t more enjoyment better than less? Perhaps we think the privacy violation is impermissible and so the enjoyment doesn’t even start to outweigh that harm. But we’re not really talking about whether or not it’s permissible to violate privacy–presumably it’s just as impermissible if my privacy is violated and the illicit information is stored in a network. We’re asking which is the worse situation: a violation of privacy with enjoyment by a third person, or a violation of privacy without. I share the feeling that the former is worse, but I’d like to have something to say in defense of that feeling. Perhaps it’s the fear that the human will go rogue and the computer can’t. But my feeling doesn’t go away when I imagine the human is spending a life in prison, nor does it go away when I realize that computers can go rogue as well, causing me all sorts of harm.

There’s lots more to say and think about here. But for now let’s just let the question simmer: Are violations of privacy more harmful if they involve knowledge by conscious agents, and if so, why?