Emotional Manipulation, Moving Too Fast and Profiting from the Broken Things

The task of keeping up with tech news has become rather harrowing of late. The avalanche of information keeps stories in constant rotation and our attention overloaded. This is so clearly the case that it’s easy to forget what happened just a few weeks ago. Facebook’s weak stance on political ads quickly gave way to Google’s acquisition of our medical records before both companies announced they would modestly raise the minimum number of profiles required for targeted ads. In fact, I suspect that companies like Facebook bake our forgetting into their internal, day-to-day practices.

This hurtling forward, coupled with our inability to keep up with the resulting scandals, has allowed the oft-derided ‘move fast and break things’ motto to be fully actualized. While one takeaway might be that our attention spans have contracted under informational overload, that’s certainly not the only possibility. One might suspect that we are incapable of focusing on any particular tech scandal not because our attention spans have shrunk but because of the ever-evolving techno-scandal culture we now inhabit. To recognize the ease with which we forget, one need only revisit a particularly troubling example of ‘breaking things’ from just a handful of years ago.

In 2012, many people were unknowing subjects in a social experiment run by Facebook. Curious whether the influence it had acquired could be used to change the moods of its users, Facebook manipulated their News Feeds. For some users it reduced the amount of uplifting, cheerful content they saw; for others it reduced the negative, depressing content. The hypothesis was confirmed, and the resulting paper was published by a prestigious peer-reviewed scientific journal (Proceedings of the National Academy of Sciences, June 17, 2014, Vol. 111(24), pp. 8788–90). It’s worth having a look at the abstract for the paper:

Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337: a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.[1]

And within the first page:

On Facebook, people frequently express emotions, which are later seen by their friends via Facebook’s “News Feed” product. Because people’s friends frequently produce much more content than one person can view, the News Feed filters posts, stories, and activities undertaken by friends. News Feed is the primary manner by which people see content that friends share. Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging. [italics added][2]
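
To make the design concrete, here is a minimal sketch, in code, of the kind of feed manipulation the paper describes: posts classified as emotionally positive (or negative) are silently omitted from a user’s feed with some probability, depending on the experimental condition the user has been assigned to. The function names, the toy word lists and the fixed omission probability are my own illustrative assumptions; the actual study used LIWC word counts and varied the omission rate per user, and Facebook’s real code is of course far more elaborate.

```python
import random

# Illustrative sketch of the experiment's manipulation (not Facebook's actual code):
# posts judged positive or negative are silently dropped from the feed with some
# probability, depending on the user's assigned condition.

POSITIVE_WORDS = {"great", "happy", "love", "wonderful"}   # toy stand-in for LIWC positive terms
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}      # toy stand-in for LIWC negative terms

def classify(post: str) -> str:
    """Crude stand-in for the word-count-based sentiment classification."""
    words = set(post.lower().split())
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def filtered_feed(posts: list[str], condition: str, omit_prob: float = 0.5) -> list[str]:
    """Return the feed a user in the given condition would actually see."""
    target = "positive" if condition == "reduce_positive" else "negative"
    return [p for p in posts
            if not (classify(p) == target and random.random() < omit_prob)]

feed = ["What a wonderful day", "Feeling sad tonight", "Lunch at noon"]
print(filtered_feed(feed, condition="reduce_positive"))
```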

It’s one thing to read this from an academic perspective. It’s an entirely different thing to truly consider the fact that Facebook manipulated the emotions and mental states of nearly 700,000 people. It’s important to feel the outrage appropriate to something so outrageous. It’s worth reflecting upon the power that such an ability, and the willingness to use it, implies. And finally, it’s unnerving but necessary to acknowledge that we now live in a world where this power is wielded by numerous profit-driven companies that have come to dominate a significant portion of the global, online distraction economy.

Concerning such questionable activities, I fear we’re no longer shockable. We see that these companies absorb our health and fitness data, track our purchase and click patterns, and buy our driving, employment, arrest and voting records. All the while, another video of a ‘disenfranchised’, petulant white lady raging at the sight of a black child selling water ‘without a license’ goes viral. Because the latter is more visceral, it becomes a more likely object of our fleeting anger, and hence a more likely object of our attention.

In light of all this, it’s natural to wonder, what’s the difference between a state-run media outlet that attempts to placate its citizens with inspirational, dangling kittens and a social media company that manipulates the emotions of its users? While one is powerful, immensely profitable and potentially oppressive, the other is unlikely to be run by a barely grown-up billionaire who stumbled upon too much power after launching a website aimed at rating the ‘hotness’ of women on his college campus.

It’s one thing for these companies to harvest and then profit from our data. It’s another thing altogether to experiment on us — without our consent, mind you — while doing so. It’s about time we asked: at what point does free access to their services no longer suffice as compensation for being unwitting subjects in a social experiment? I expect that giving this question the consideration it deserves would require us to remember the last scandal long enough to recognize that the experiment is ongoing and that many more ‘things’ have been broken.

[1] Adam D. I. Kramer, Jamie E. Guillory and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks”. Proceedings of the National Academy of Sciences, June 17, 2014, Vol. 111(24), pp. 8788–90.

[2] Ibid.

Pigeonholing and Personalization of the Online Experience

The personalization of our online experience – a result of the algorithmic extraction of our personal data, the subsequent curation of what we see, and the boundaries of our own clicking behavior – threatens to pigeonhole us into increasingly narrow categories determined by online platforms and data brokers. Such pigeonholing will further constrain what we encounter online: with each stage of narrowing, we will click on increasingly limited subsets of what is made available to us. While the amount of information we encounter will, I expect, remain as robust as ever, its content will be constrained by the bubbles to which we’re assigned. One troubling implication is that what we encounter will keep narrowing until the internet’s original promise of ‘opening’ the world produces the opposite result, leaving us more predictable as consumers and more easily persuaded as actors in an increasingly curated spiral of contracting content.

To see how we are already being categorized, consider one of the many pigeonholing practices of Acxiom, one of the world’s most powerful data brokers:

“Acxiom assigns you a 13-digit code and puts you into one of 70 ‘clusters’ depending on your behavior and demographics… People in cluster 38…are most likely to be African American or Hispanic, working parents of teenage kids, and lower middle class and shop at discount stores. Someone in cluster 48 is likely to be Caucasian, high school educated, rural, family oriented, and interested in hunting, fishing, and watching NASCAR.”[1]
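
To see the mechanics behind this kind of segmentation, here is a purely illustrative sketch of how a broker might assign a consumer to the nearest pre-built cluster. The features, centroid values and cluster numbers are invented for the example; Acxiom’s actual models are proprietary and certainly far richer.

```python
from math import dist

# Illustrative only: reduce a consumer to a small numeric feature vector
# (age, household income in $1,000s, discount-store visits per month) and
# assign them to the nearest pre-computed cluster centroid.
CLUSTER_CENTROIDS = {
    38: (41.0, 48.0, 6.0),   # invented centroid for a 'cluster 38'-style segment
    48: (45.0, 62.0, 1.5),   # invented centroid for a 'cluster 48'-style segment
}

def assign_cluster(consumer: tuple[float, float, float]) -> int:
    """Return the ID of the closest cluster centroid (Euclidean distance)."""
    return min(CLUSTER_CENTROIDS, key=lambda cid: dist(consumer, CLUSTER_CENTROIDS[cid]))

profile = (39.0, 45.0, 5.0)      # a hypothetical consumer
print(assign_cluster(profile))   # -> 38
```

Everything downstream, from which ads you see to which offers you’re deemed worthy of, can then key off that single number.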

As companies like these persist in selling our data and the content of our online experiences narrows further, we will continue to be treated as categorizable cogs in an increasingly competitive attention economy.

In fact, Acxiom’s own ‘Consumer Data Products Catalog’ gives us a look at just how they view us:

“Information includes consumers’ interests — derived, the catalog says, “from actual purchases and self-reported surveys” — like “Christian families,” “Dieting/Weight Loss,” “Gaming-Casino,” “Money Seekers” and “Smoking/Tobacco.” Acxiom also sells data about an individual’s race, ethnicity and country of origin. “Our Race model,” the catalog says, “provides information on the major racial category: Caucasians, Hispanics, African-Americans, or Asians.” Competing companies sell similar data.”[2]

It must be admitted that being placed in categories provides us with ads for products we’re more likely to desire, but it’s nonetheless natural to wonder whether these benefits can compete with the costs. As each company learns more about you, it more finely shapes what you see. Beyond limiting the range of information we’re exposed to, this may, as noted by Frischmann and Desai[3], lead to the standardization of the individual. In other words, we have become subjects in a massive social engineering project. If companies can determine what we want with ever-increasing precision, they may ultimately be able to (at least partially) determine our online behavior by precisely tailoring the options they provide. In short, corporate knowledge of individuals may allow corporations to psychologically pigeonhole us in ways that are conducive to their own ends rather than ours. Consider the following from Frischmann and Desai:

“Suppose we’d like to induce a group of people to behave identically. We might personalize the inducements. For example, if we’re hoping to induce people to contribute $100 to a disaster relief fund, we might personalize the messages we send them. The same applies if we’re hoping to nudge people to visit the doctor for an annual check-up, or if I’m hoping to get them to click on an advertisement. Effective personalized ads produce a rather robotic response—clicks. Simply put, personalized stimuli can be an effective way to produce homogenous responses.”[4]  
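
A toy version of the mechanism Frischmann and Desai describe might look like the following. The users, message variants and predicted response rates are all invented; the point is the pattern: different stimuli go in, chosen per person, so that one homogeneous response comes out.

```python
# Toy illustration of personalized inducement (invented data, not any company's system):
# for each user, pick whichever message a predictive model scores as most likely
# to produce the desired action (a click, a donation).

MESSAGES = [
    "Your neighbors have already donated to disaster relief.",
    "Every $100 you give shelters a family for a night.",
    "Donate now and your employer will match your gift.",
]

# Hypothetical per-user predicted probabilities of responding to each message,
# standing in for a model trained on behavioral data.
PREDICTED_RESPONSE = {
    "alice": [0.12, 0.31, 0.08],
    "bob":   [0.40, 0.05, 0.22],
    "carol": [0.09, 0.11, 0.37],
}

def personalize(user: str) -> str:
    """Return the message predicted to maximize this user's chance of responding."""
    scores = PREDICTED_RESPONSE[user]
    return MESSAGES[max(range(len(scores)), key=scores.__getitem__)]

for user in PREDICTED_RESPONSE:
    print(user, "->", personalize(user))
# Three different messages, one intended outcome: everyone donates (or clicks).
```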

A closely related worry involves the emergence of echo chambers and filter bubbles. The personalization and filtering of our online experience can lead to homophily, the forming of strong connections to, and preferences for, people who share our beliefs and attitudes. While this can be psychologically comforting, it can also reinforce confirmation bias and lead to the dismissal of opposing ideas.[5] This phenomenon is problematic on many fronts, one of which involves the erosion of democracy. A vibrant, well-functioning democratic society requires the free, active and willing exchange of diverse ideas. The outright dismissal of opposing ideas yields pernicious polarization that undercuts both the likelihood of these crucial exchanges and the open-mindedness needed to truly consider competing opinions.

One finds oneself in a filter bubble when one is presented with only limited perspectives on the relevant issues.[6] The filtering of our online experience may lead us to mistakenly believe the information we’re receiving is comprehensive while leaving us informationally and epistemically sheltered. Alternative, competing ideas then come to seem not merely foreign but fit targets of scorn and dismissal.

The more any entity knows about you, the more likely it is to be able to persuade you to act in particular ways. This, in fact, is the goal of social engineering, and it would clearly be an attractive scenario for any organization seeking results of any kind. We know that companies – and possibly countries – exploited Facebook’s platform by microtargeting individuals during the 2016 presidential campaign. In addition to Cambridge Analytica, the Russian Internet Research Agency created fake accounts that targeted minority voters, amongst others, nudging them either toward voting for the third-party candidate or toward not voting at all. The more companies know about us, the more directly they can target us (or small, pigeonholed groups of us) in order to affect our beliefs and, therefore, our actions.

Nonetheless, the complete determination of our desires, the orchestrated directing of our attention and the erosion of our democracy are not inevitable. It’s important to recognize that we are not without responsibility or control in this brave new world, even if exercising them requires some significant reflection on, and understanding of, the workings of the online universe.

It would be entirely unreasonable for our IRL (i.e., in-real-life) behavior to be constantly monitored in order to commodify our attention and modify our behavior. While I expect little resistance to this claim, such monitoring is precisely the reality of our online lives. We need to at least consider the possibility that what we see is being limited by organizations seeking to monopolize our attention and direct our behavior. But we also need to realize that our own online behavior is part of what leads to this limited purview. In my very limited wisdom, I would suggest that we seek out opposing viewpoints, alternative news sources and new experiences, and that we attempt to engage with information that transcends what we find comforting and agreeable. Harder still, we need to truly remain open to changing our opinions in the face of new data.


[1] Lori Andrews, I Know Who You Are and I Saw What You Did, p. 35.

[2] https://www.nytimes.com/2012/06/17/technology/acxiom-the-quiet-giant-of-consumer-database-marketing.html

[3] Frischmann, B. and Desai, D. “The Promise and Peril of Personalization”. https://cyberlaw.stanford.edu/blog/2018/11/promise-and-peril-personalization

[4] Ibid.

[5] For further discussion, see C. Thi Nguyen, “Escape the Echo Chamber”, https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-it-is-to-flee-a-cult

[6] See Eli Pariser’s The Filter Bubble.

The Norms that Undermine Online Privacy

When it comes to dominant online platforms like Google, Amazon and Facebook, the idea of ‘privacy norms’ may bring to mind the default settings that one might tweak if one cares at all about privacy. On the other hand, there are additional ‘norms’ that lurk in the background and provide the foundation for the riches these companies have accumulated.

These norms, developed and perpetuated by the corporate behemoths in question, provided the groundwork for their present-day domination by allowing them to harvest massive quantities of our data. These companies have come to occupy such a ubiquitous presence in our lives that it’s easy to overlook the background assumptions that gave rise to the paradigm shift that made the phrase ‘data is the new oil’ so fitting.

While introducing the topic of privacy and big data to my Tech Ethics students this semester, I mentioned Jaron Lanier’s idea that we ought to be paid for our data. If there’s anything problematic about sweatshop workers being paid next to nothing for their roles in producing products that are then sold at exorbitant prices (to mention just one category of worker exploitation), then there’s certainly something to Lanier’s claim. These companies are selling us. We generate the data, data that simply would not exist without us. And while one might claim that the same information could be generated by any other platform user, the fact remains that these companies profit from data we generated. Moreover, if anyone else could generate the same data, then why do they need ours? Because, obviously, more data is better than less.

My students’ reactions ranged from chuckles to furrowed brows, and I don’t blame them. I expect these reactions stem from the simple fact that we don’t (not that we couldn’t or shouldn’t) get paid for generating this data, from the fact that they came of age at a time when these companies have always dominated, and from the fact that we have been socialized to acquiesce to certain default assumptions as though they were handed down from a higher power. No matter how independently plausible Lanier’s idea may be, it’s likely to seem bizarre to most. Once again, there’s a reason for this.

The default assumption is that we have few, if any, rights when it comes to our online data, as though any such rights evaporate the moment we touch our phones or open our laptops. We forfeit these rights (so goes the assumption), even if we do so unknowingly (or, as I would contend, without consenting), every time we click the tiny, psychologically insignificant ‘I agree to the terms of service’ box that lets us get to the next page of the countless sites central to our everyday lives (e.g., paying credit card bills, depositing checks, renting a car, booking a flight, sending money to a friend, ordering food online, paying the mortgage, signing up for newspaper subscriptions, etc.). Because it’s the ‘default’, we accept it without question. Nevertheless, I believe this assumption is outrageous when considered with any care.

If I went to my local Target and, when given the credit card receipt, signed next to an analogous, analog ‘terms of service’, would this really give Target the right to have someone follow and record me for the rest of my life? What if the cashier somehow affixed a tracking device to me that would accomplish the same thing? What’s the difference between tracking my activities with an algorithm and doing so with a camera or a microphone? While the latter is clearly wrong, we somehow accept the former as an unassailable fact of online life.

It’s as though these companies are driven by impersonal algorithms developed for nothing more than extracting data and the profits they bring (*incredulous look implied), all the while treating the originators of such data – flesh and blood human beings – as mere means to these ends. And the scope of these aims seems to have no boundaries. According to a March 2019 article in The Atlantic:

“Amazon has filed a patent for a voice assistant that would recommend cold and flu medicine if it overhears you coughing.”

And…

“The health-care start-up Kinsa drew sharp criticism from privacy experts last year for selling illness data. Kinsa makes a smart thermometer that takes a user’s temperature, then instantly uploads it to a server along with gender and location information.

Kinsa used this data to create real-time maps of where people were getting sick and to refine its flu predictions for the season, with accuracy levels matching those of the Centers for Disease Control and Prevention. But it also sold the information to Clorox, which beefed up its marketing of disinfecting wipes and similar products in zip codes where thermometers reported a spike in fevers.”[1]
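
For a sense of how simple this kind of aggregation can be, here is a minimal sketch that groups thermometer readings by zip code and flags areas where the share of feverish readings crosses a threshold. The schema, the 30 percent cutoff and the sample readings are my own illustrative assumptions, not Kinsa’s actual pipeline.

```python
from collections import defaultdict

FEVER_THRESHOLD_F = 100.4   # standard clinical fever threshold in Fahrenheit
SPIKE_RATE = 0.30           # assumed cutoff: flag a zip code if >30% of readings are fevers

# Invented sample of uploaded readings (temperature plus location, as in the article)
readings = [
    {"zip": "10001", "temp_f": 101.2},
    {"zip": "10001", "temp_f": 98.6},
    {"zip": "10001", "temp_f": 102.0},
    {"zip": "60614", "temp_f": 98.2},
    {"zip": "60614", "temp_f": 99.1},
]

def fever_spikes(readings: list[dict]) -> set[str]:
    """Return zip codes where the fraction of feverish readings exceeds SPIKE_RATE."""
    totals, fevers = defaultdict(int), defaultdict(int)
    for r in readings:
        totals[r["zip"]] += 1
        if r["temp_f"] >= FEVER_THRESHOLD_F:
            fevers[r["zip"]] += 1
    return {z for z in totals if fevers[z] / totals[z] > SPIKE_RATE}

print(fever_spikes(readings))   # -> {'10001'}
```

A marketer buying the output of something like this doesn’t need your name; the zip code is enough to decide where to push the disinfecting wipes.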

When one considers these examples and others, such as iRobot’s reported plan to use its Roomba vacuums to map the layout of users’ homes and sell that data to the Googles and Amazons of the world, it becomes difficult to imagine anything being off limits.

Moreover, I’m writing this within weeks of Google’s announcing its acquisition of Fitbit (maker of one of many WiFi-enabled fitness and movement trackers) and just days after The Wall Street Journal reported that Google, through a partnership with Ascension (a Catholic network of hospitals and clinics across 21 states), had harvested medical records from over 2,500 hospitals and clinics in a venture codenamed ‘Project Nightingale’. The data includes lab results, doctor visits, prescriptions, hospitalization records and diagnoses, as well as patient names and dates of birth. All of this occurred without informing the patients whose records were harvested or the doctors who provided the services. Within hours of the Journal’s story breaking, Google and Ascension made their partnership public.

When condemnation from a handful of senators on both sides of the aisle became public, Google released a statement that included the following: “We believe Google’s work with Ascension adheres to industry-wide regulations (including HIPAA) regarding patient data, and comes with strict guidance on data privacy, security, and usage.” Meanwhile, Ascension stated that “all work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”

It turns out that they just might be right. Under HIPAA (the Health Insurance Portability and Accountability Act of 1996), health care providers can disclose health records to third parties if the goal of doing so is to improve the quality of the health care provided. If that’s correct, then this is a clear case in which what is legal and what ought to be legal part ways. If we care at all about privacy, especially given the sensitive information that will be extractable from us as technology continues to advance, then we need to hold these companies to ethical standards and not just legal benchmarks, or we risk losing far more control over our own lives.

According to a whistleblower claiming to have worked on the project: “Patients haven’t been told how Ascension is using their data and have not consented to their data being transferred to the cloud or being used by Google. At the very least patients should be told and be able to opt in or opt out”.[2]

With that said, Google also announced that “under this arrangement, Ascension’s data cannot be used for any other purpose than for providing these services we’re offering under the agreement, and patient data cannot and will not be combined with any Google consumer data”.

This might sound familiar. Before 2012, when Google announced that it would merge profile information across its many platforms (including Google Search, Gmail, YouTube and Android) without allowing users to opt out, it had said it would not do so. In addition, a recent Wall Street Journal investigation revealed that Google does, in fact, curate its search results, despite stating on its blog that “we do not use human curation to collect or arrange the results on a page.”[3] Such tweaks to its algorithms include changes that favor big businesses like Amazon, Facebook and eBay over smaller ones, as well as “blacklists to remove certain sites or prevent others from surfacing in certain types of results”.[4] Google employs contractors to evaluate its search rankings and, according to some of these contractors, they are informed about ‘the correct ranking of results’. The company then uses the contractors’ evaluations to adjust its algorithms. The overarching point is that Google has often said that it doesn’t or wouldn’t do things that it in fact does or eventually has done. In light of this, one can be forgiven for being skeptical of its claim that “patient data cannot and will not be combined with any Google consumer data”.

It’s worth stressing that, in light of the relatively recent mind-boggling advances in artificial intelligence and extraction algorithms, it may be impossible to conceive of the power of future technologies. As a result, the current importance of our privacy rights over our online data cannot be overstated.


[1] https://www.theatlantic.com/technology/archive/2019/03/flu-google-kinsa-sick-thermometer-smart/584077/

[2] https://www.theguardian.com/technology/2019/nov/12/google-medical-data-project-nightingale-secret-transfer-us-health-information

[3] https://www.blog.google/products/search/how-we-keep-google-search-relevant-and-useful/

[4] https://www.wsj.com/articles/how-google-interferes-with-its-search-algorithms-and-changes-your-results-11573823753?mod=hp_lead_pos7

Social Media, Democracy & Citizen Responsibility

In today’s climate of justifiable suspicion about the Googles and Facebooks of the world, it’s easy to overlook the responsibilities of the individuals using these platforms. While I’m always happy to point out the problematic nature of the data harvesting and information dissemination these companies are built upon, I would also suggest that this does nothing to diminish our own social and moral obligation to make significant efforts to inform ourselves, to resist contributing to increasing polarization and to do whatever is necessary to escape our cozy echo chambers and information bubbles.

Being a good citizen in a democracy requires more than many of us seem to think and much more than our actions often suggest. Pointing fingers, even in the right directions, is nowhere near enough. We need to wade through the oceans of bullshit emanating from partisan talking heads, fake news peddlers, marketing-driven, agenda-suffused cable news stations and algorithmically curated newsfeeds in order to determine which actions, policies and candidates best represent our values, as well as the values most conducive to the health of a democratic society.

President Obama, in a recent interview at the Obama Foundation Summit, offered the following: “This idea of purity and you’re never compromised, and you’re always politically woke and all that stuff, you should get over that quickly. The world is messy. There are ambiguities. People who do really good stuff have flaws. People who you are fighting may love their kids and share certain things with you.”

The point, I take it, is not to disregard the mistreatment of marginalized groups but to do something beyond mere posturing and attempting to appear ‘woke’.

Too many of us today seem to believe that it’s enough to call out the many who err or those we may simply disagree with. “Then I can sit and feel pretty good about myself”, said Obama, “because, ‘man, you see how woke I was? I called you out.’ That’s not activism. That’s not bringing about change. If all you’re doing is casting stones, you’re probably not going to get that far. That’s easy to do”.

And while I’m quick to agree that platforms like Twitter and Facebook lend themselves to this practice of racing to be the first to spot and call out injustice or ignorant speech, we still need to recognize when we’re being lulled into an ineffectual gotcha game of virtue signaling that, though it may provide fleeting feelings of superiority, produces very little in the way of lasting change or dialogue.

—-

The speed with which Facebook and Google have come to play a central role in the everyday lives of so many makes it easy to overlook how young these companies are. Nonetheless, their effects are undeniable. As we’ve shared baby photos and searched for information on anything that might spark our curiosity, they’ve been aggregating our offerings and feeding us what they know will keep us coming back.

None of us likes to be confronted with the possibility that we’re alone, that our beliefs might be false, or that our deeply held values are ultimately misguided. So social media curates our experience to provide the validation we so often seek. How better to do this than to gift us our own beliefs and values through the words and stories of others? This keeps us clicking and keeps the advertising dollars pouring in for the companies involved. Just like the angry incel savoring the hateful rantings of Donald Trump, we all feel the cozy pull of having our own views echoed back to us.

But, of course, none of this provides anything by way of truth or understanding. And more to the point, none of it is conducive to an open-minded population willing to do the work required to breathe new life into an ailing democracy teetering on the precipice of unbridgeable polarization. While Aristotle, in the era of the first democracy, aptly observed (and I’m paraphrasing) that it’s the mark of an educated mind to be able to entertain a thought without accepting it, social media has given us the means of reinforcing our own thoughts without subjecting them to the slightest scrutiny. In fact, one might find these two distinct ideas fitting bookends for democracy’s nearly 2,500-year run.

While this characterization may be a bit hyperbolic, the existence of problematic echo chambers and curated tunnel vision is quite real. Fox News acolytes dig in their heels while liberals roll their eyes, and each side drifts further from the possibility of honestly engaging with the views of the other. (*I refuse to equate the so-called ‘extremes’ on the left with those on the right. There’s a clear moral and epistemic difference between an oblivious (or worse) refusal to acknowledge, for example, the current resurgence of xenophobia and white supremacy, on the one hand, and the desire for health care for all or a basic income, on the other.)

The online social media environment, with its intrusive notifications and its conduciveness to mindless scrolling and clicking, falls short of providing an optimal arena for informed evaluation and close examination of information. It’s for this reason that I believe we need to slow our online experience. So many of us ‘grown-ups’ impose limits on our children’s technology usage while ourselves staring into the void of facile stumpers and bottomless distraction. Maybe a simple break would do us well. Forget that. A break would do us well. Many of us need to take a goddamn walk…without the phones. Look someone in the eye. It might turn out that the ‘idiot Trump supporter’ or the ‘snowflake Socialist’ is just an ordinary, imperfect human like yourself (*hate-filled racist, nationalist misogynists aside – there is nothing worth engaging in such cases).

Moreover, in these days when our every move is harvested and aggregated, and when massive data companies commodify our very lives, it’s crucial that we recognize all of this while avoiding a victim’s mentality. We have an obligation to inform ourselves, evaluate our beliefs, reevaluate when new information arrives, and then incorporate or discard as appropriate.

Navigating the world of ideas and agendas has become far more difficult thanks to social media, the intentions of all-pervasive corporate giants, the sheer quantity of information (which leads to more skimming than careful consumption), the ever-lurking pull of fatigue-based complacency and politically motivated fake news, amongst countless other factors. But, one way or another, we need to adapt if we’re going to have any hope of preserving democracy. Otherwise, we’re likely to revert to a power-dominated state of nature, the only difference being that this time around it was ushered in by technology.

FTC sues Match.com for misleading users

The FTC’s 2019 complaint alleged that Match.com lured non-subscribers into paying with automated ‘someone is interested in you’ notices sent from accounts the company had already flagged as likely fraudulent. Machine learning together with deepfakery will only exacerbate this problem, as companies like Match will be able to use a person’s profile information, ‘likes’ and saved profiles to generate profiles (especially pictures) of the men and women that targeted users are most likely to want to connect with. Despite these not being actual people, the practice will increase sales (since one needs to be a paid member to respond to messages) while providing nothing to users. Once again, technology allows the profit motive to run wild (i.e., to run morally wrong).

Google’s Wrist is Slapped for Violating Child Privacy

Ok, so maybe there’s nothing problematic about harvesting data from adults so that advertisers can microtarget us with ads in order to optimize their chances of persuading us to buy. I doubt it, but regardless…

There’s certainly something problematic about harvesting data on children.

Two issues:

First, when it comes to children, talk about consent is silly.

Second, and more important, the persistent harvesting of a child’s data, together with the ways in which such data can now be algorithmically analyzed and the unknown ways in which it may be used and disseminated in the future, raises the possibility that such a child may become entirely transparent to data brokers, advertisers, the government, police departments and anyone else who might purchase the data.

Of course, one might respond that only wrongdoers have anything to fear from such transparency: perhaps the child will grow up to be the greatest criminal mastermind of their time. But this would have to be true of every such child for that response to get off the ground.