Why We Shouldn’t Be Allowed to Waive our Privacy Rights

There is little doubt that privacy clauses and terms of service agreements don’t support the moral burden they are meant to carry. All too often they are designed to provide political cover rather than to generate informed consent. Not only does no one read them, but even someone with the attention span and intelligence to follow them would be unlikely to find all the policies hidden in documents several clicks deep. One telling estimate: if the average American actually read all the policies they encountered, they would lose 76 full workdays in the process, and the cost to productivity if all Americans were so conscientious would approach $1 trillion.

There is no arguing it, really: clicking on an AGREE button no more means that you agree with the content of a terms of service agreement than politely nodding your head during a mumbled conversation in a noisy bar means you are agreeing with the opinion you aren’t really hearing.

This is a big problem with the way we are doing things, but there is another, more fundamental issue that few have recognized: our privacy rights aren’t ours to waive.

That sounds paradoxical, but there are other rights we intuitively can’t waive. I cannot waive my right to self-determination by selling myself into bondage, for example, and I can’t waive my right to my body by selling myself to a cannibal for Thanksgiving dinner. It’s not plausible, though, that privacy violations inflict such extreme harms, so those probably aren’t the best places to look for analogues.

A closer analogy to privacy rights is voting rights.  I cannot waive my right to vote.  I can choose not to exercise it, but I cannot waive it.  I cannot exchange my right to vote for internet access or for a cushy job. I certainly can’t transfer my right to you, no matter how much you want to pay me. It’s my right, but that doesn’t mean I can give it up. That’s because my right to vote doesn’t only protect me—it protects my fellow citizens and the institution of democracy we collectively cherish. 

If I had the right to sell my vote, it would endanger the entire democratic franchise. It would likely make your vote less valuable in comparison to someone else’s: plug in your favorite malevolent billionaire here for a scenario in which electoral outcomes are determined by the mass purchase of voting rights. We cannot waive our right to vote because that right doesn’t primarily prevent a harm to us as individuals; it prevents a harm to an institution that undergirds the rights of others.

I suggest privacy rights are like voting rights in this respect. While we can suffer individual harm if someone knows our political preferences or gains access to the subtle triggers that sway us for or against a product or a candidate, the more important harm comes with the threat to the valuable institutions we collectively constitute.

If I have the ability to waive my privacy rights, so does everyone else. If we all waive those rights, we permit the collection of data that enables significant control over the electorate as a whole. Given enough information about the thoughts and behaviors of voters, propaganda and advertising can be extremely effective in swaying enough attitudes to change the outcome of an election. Though votes aren’t being bought, the result is similar: each individual vote is now outweighed by the statistically certain outcome of a data-informed campaign of voter manipulation.

If this is right, we’ve largely been looking in the wrong direction both for the harms of privacy rights violations and for the harms involved in our wanton disregard of those rights. In an age where data analytics can discern surprising connections between different elements of human personality and behavior, our data is not our own. By giving up our own data, we are essentially informing on those like us and enabling their manipulation. We shouldn’t do that just because we have an itch to play Clash of Kings.

So where does this leave us? I like to play Clash of Kings as much as the next guy, and frankly, when I think of it in terms of the harms likely to come to me, Clash of Kings can win pretty easily. When I realize that my own visceral reaction to privacy harms really isn’t to the point, I’m a little less cavalier about parting with my data. The truth is, though, that this is a place for governmental regulation, just as it is in the case of voting rights. In today’s political climate I won’t hold my breath, but the way we all think about these issues needs to shift away from our worries about our own individual private lives. As important as each of us is as an individual, some of the most worrisome harms come from the effects on the groups to which we belong. We need to shift our focus toward the harm these privacy violations cause all of us by enabling the manipulation of the public and the vitiation of our democracy.

Originally appeared in The Hill

Emotional Manipulation, Moving Too Fast and Profiting on the Broken Things

The task of keeping up with tech news has become rather harrowing of late. The avalanche of information keeps stories constantly replacing one another and our attention overloaded. This has become so clearly the case that it’s easy to forget what happened just a few weeks ago. Facebook’s weak stance on political ads quickly became Google’s acquisition of our medical records before both companies announced they would narrowly expand the minimum number of profiles required for targeted ads. In fact, I expect companies like Facebook bake our forgetting into their internal, day-to-day practices.

This hurtling forward, coupled with our inability to keep up with the resulting scandals, has allowed the oft-derided ‘move fast and break things’ motto to be actualized. While one takeaway might be that our attention spans have contracted due to informational overload, it’s certainly not the only possibility. One might suspect that we are incapable of focusing on any particular tech scandal not because our attention spans have shrunk but because of the ever-evolving techno-scandal culture we now inhabit. To recognize the ease with which we forget, one need only revisit a particularly troubling example of ‘breaking things’ from just a handful of years ago.

In 2013, many people were unknowing subjects in a social experiment run by Facebook. Curious whether the influence it had acquired could let it change the moods of its users, the company manipulated their News Feeds: for some users it reduced the amount of negative, depressing content; for others it reduced the positive, uplifting content. The hypothesis was confirmed, and the resulting paper was published in a prestigious peer-reviewed scientific journal (Proceedings of the National Academy of Sciences, June 17, 2014, 111(24), pp. 8788–90). It’s worth having a look at the abstract for the paper:

Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337: a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.[1]

And within the first page:

On Facebook, people frequently express emotions, which are later seen by their friends via Facebook’s “News Feed” product. Because people’s friends frequently produce much more content than one person can view, the News Feed filters posts, stories, and activities undertaken by friends. News Feed is the primary manner by which people see content that friends share. Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging. [italics added][2]

It’s one thing to read this from an academic perspective. It’s an entirely different thing to truly consider the fact that Facebook manipulated the emotions and mental states of hundreds of thousands of people. It’s important to feel the outrage that’s appropriate toward something so outrageous. It’s worth reflecting upon the power that such an ability, and the willingness to use it, implies. And finally, it’s unnerving but necessary to acknowledge that we now live in a world where this power is wielded by numerous profit-driven companies that have come to dominate a significant portion of the global, online distraction economy.

Concerning such questionable activities, I fear we’re no longer shockable. We see that these companies absorb our health and fitness data, track our purchase and click patterns, and buy our driving, employment, arrest and voting records. All the while, another video of a ‘disenfranchised’, petulant white lady raging at the sight of a black child selling water ‘without a license’ goes viral. Because the latter is more visceral it becomes a more likely object of our fleeting anger, and hence a more likely object of our attention.

In light of all this, it’s natural to wonder: what’s the difference between a state-run media outlet that attempts to placate its citizens with inspirational, dangling kittens and a social media company that manipulates the emotions of its users? While one is powerful, immensely profitable and potentially oppressive, the other is unlikely to be run by a barely grown-up billionaire who stumbled upon too much power after launching a website aimed at rating the ‘hotness’ of women on his college campus.

It’s one thing for these companies to harvest and then profit from our data. It’s another thing altogether to experiment on us — without our consent, mind you — while doing so. It’s about time we ask: at what point does free access to their services no longer suffice as compensation for being unwitting subjects in a social experiment? I expect that giving this the consideration it deserves would require us to remember the last scandal long enough to recognize that the experiment is ongoing and that many more ‘things’ have been broken.

[1] Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks.” Proceedings of the National Academy of Sciences, June 17, 2014, 111(24), pp. 8788–90.

[2] Ibid.