Facebook’s Distorting Lens: The Danger of Social Deference

Recently I got over my revulsion for Facebook and once again activated an account. I did it in part because, though I dislike the platform for some obvious reasons, I feel it’s important to engage with something so monumentally influential. It’s important to know firsthand what the atmosphere is like, what sorts of effects it has on its users, and what changes happen in the environment and what effects those changes seem to have. I’m quite familiar with the way it creates echo chambers and epistemic bubbles, and with the draining effect it tends to have on my psyche, but in my recent interactions I’ve been most upset by what seems to be a lack of autonomy in the social realm. I feel shuffled from post to post without knowing why and without any sense that I can control what and whom I see. It’s all the more distressing that on Facebook my social interactions are being governed by unknown algorithms. I am troubled by what seems to be an integral part of Facebook, something I’ll call social deference.

It’s impossible to live in the modern world without deferring to others about a great many things. We simply don’t have the ability to know firsthand, and from the ground up, all the information we need. The most obvious form of deference is deference about facts. When I accept someone’s word on something, I take on what they say as my own belief. We defer to doctors about the safety of medications and treatments, to engineers about the safety of our planes and bridges, and to news organizations about the events of the day. This sort of thing is both commonplace and necessary: it would be difficult to get out of bed without trusting other people to sort some of our facts for us.

There are, on the other hand, facts about which it seems peculiar to defer. Several years ago, I proposed the following thought experiment. Suppose that Google offered an app called Google Morals. You could enter any question about morality—should I be a vegetarian? Is it permissible to lie to achieve one’s ends? Is abortion permissible?—and Google Morals would give you the answer. Set aside for the moment how the app would work and how it could have access to the moral truths; suppose we had reason to believe it did. Nevertheless, I maintain, there is something peculiar about deferring to Google Morals, something that isn’t peculiar about deferring to Google Maps in order to learn how to get from Springfield to Capital City. There is a way in which one shirks one’s responsibility as a person when one simply takes Google’s word on moral matters.

A good part of the problem with moral deference is that we don’t have access to why Google provides the answers it does. It wouldn’t be a problem if we could “see the work” and understand why Google provides the verdicts it does. In that case it’s likely we wouldn’t simply be deferring: we wouldn’t be accepting Google’s verdict merely because it is Google’s output; we would be altering our beliefs because we understood the reasons why Google said what it said. Understanding why something is true, being able to articulate the ins and outs, is important when it comes to some of our beliefs—namely the moral beliefs that make us who we are.

Ok, so suppose this is right; what does this have to do with Facebook? It strikes me that Facebook encourages a sort of deference that is likely just as problematic as moral deference. Call it social deference.

Suppose that you systematically deferred to others about who was a good friend. Instead of evaluating someone based on their merits, based on how they treated you, you simply asked a friend expert, a “friendspert,” whether someone was a good friend. It’s not just that the friendspert recommends you check someone out because they might be a good friend; it’s that you adopt the belief that the person is your friend based on the friendspert’s advice and organize your life accordingly. This is a sort of social deference—one is allowing one’s social circle to be determined by the say-so of another. In some sense one is shirking one’s duties as a friend and offloading onto others important work that really should be done by each of us—evaluating people based on their perceived merits and demerits and befriending them based on how they treat you. There would be something wrong if someone asked “why are you my friend?” and your answer was “because the friendspert told me to be.” Acting that way depreciates friendship to the point that it’s not clear one really has a friend at all.

The friendspert is an extreme case, and though it’s tempting to say that Facebook, with its friend suggestions, is acting like a friendspert, that’s probably not quite right. There is perhaps a little truth to it, but it almost certainly overstates what is really going on when we “friend” someone on Facebook. It’s not as though clicking that blue button makes the person our friend in any robust sense, and it’s not as though we shut down our independent evaluation of that person and simply defer to Facebook’s algorithm. We form beliefs about the person and make attachments based on what we see in our feed and how we interact with them.

There is, though, a type of social deference involved in Facebook that might be even more insidious. We are deferring in this case to an algorithm that affects how our friends and social circles appear to us. Who we see and which posts we see are determined by a system that is unknown to us. To the degree that we let our attachments be shaped by those algorithms, we are guilty of social deference. We are allowing our connections to other people to be shaped by decisions and frameworks that are not our own. In doing so we are ceding our social autonomy and allowing one of the most essential parts of ourselves—the social part—to be molded by a third party.

Most of us know, at least after adolescence, that we should not judge people simply by what others report about them. Even if those reports are accurate, the intermediary is apt to distort our picture of other people, thereby shaping our judgments about them. It is important, indeed it’s our responsibility, to judge people as much as we can without intermediaries shaping our perception of them. The problem isn’t just that falsehoods and misrepresentations enter the mix. Even supposing they don’t, it is our responsibility to form our interpersonal relationships—especially our friendships—ourselves. Forming and nourishing friendships requires a subtle navigation between revealing too much about oneself and not enough, foregrounding some features and not others. This isn’t dishonest; it’s a recognition that not every fact is relevant to every relationship, and that sometimes the order and emphasis of what one reveals about oneself says as much about oneself as the information revealed. (If I start every conversation by announcing my religion or political affiliation, that fact will tell you as much about me as whatever you learn about my faith or politics.)

When we use Facebook, we are ultimately introducing an intermediary between us and our social world and placing trust in it to provide an accurate picture of that world. In fact, what we get is a distorting lens that highlights some parts of our friends at the cost of others. Importantly, the algorithms that determine which posts we see are not interested in generating or preserving true friendship, nor are they interested in showing us the truth about people. They are interested in what keeps us clicking, and as such they tend to show us the most provocative parts of our social sphere. People’s most outrageous opinions are foregrounded, and the features that matter to true friendship are treated as irrelevant.
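To make the worry concrete, here is a minimal sketch of an engagement-driven ranker. This is not Facebook’s actual code; the post fields, the weights, and the toy “engagement model” are invented assumptions. The point is only structural: a system that sorts purely on predicted engagement will foreground provocation and let ordinary, friendly material sink, since nothing in the objective cares about truth or friendship.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    outrage: float   # how provocative the post is (0..1) -- invented feature
    warmth: float    # how much it reflects ordinary, friendly life (0..1)

def predicted_engagement(post: Post) -> float:
    """Toy engagement model: provocative content reliably draws clicks and
    comments, so it dominates the score; warmth barely registers, and
    truthfulness does not appear in the objective at all."""
    return 0.9 * post.outrage + 0.1 * post.warmth

def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Order the feed by predicted engagement alone."""
    return sorted(posts, key=predicted_engagement, reverse=True)[:limit]

friends_posts = [
    Post("Aunt June", "Furious political rant", outrage=0.95, warmth=0.05),
    Post("Aunt June", "Photos from her garden", outrage=0.02, warmth=0.90),
    Post("Old roommate", "All-caps screed about the news", outrage=0.85, warmth=0.10),
]
for post in rank_feed(friends_posts):
    print(post.author, "-", post.text)
```

Run it and Aunt June appears to be nothing but her angriest self; the garden photos, the part of her a friend would actually care about, come last.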

We needn’t rest with abstractions to see the point. How many of us have seen the political posts of our family members and changed forever how we see them? How many of us have seen the posts of our friends only to resent them for their self-righteousness or for what appears to be their self-obsession? Our perspective on our social world is being shaped by hidden algorithms designed to keep users on the site, not by anything that matters to friendship. This is a kind of social deference, and by engaging in it we are handing over responsibility for our relationships to a source we all know is untrustworthy. The result is a weakening and cheapening of our relationships, but we can’t just blame Facebook. It’s our decision to give a third party the power to distort and mediate our relationships, and to that degree we deserve a large share of the blame for abandoning our responsibilities to our friends and our social sphere.

Emotional Manipulation, Moving Too Fast and Profiting on the Broken Things

The task of keeping up with tech news has become rather harrowing as of late. The avalanche of information keeps stories constantly replacing one another and our attention overloaded. This has become so clearly the case that it’s easy to forget what happened just a few weeks ago. Facebook’s weak stance on political ads quickly became Google’s acquisition of our medical records, before both companies then announced they would narrowly expand the minimum number of profiles required for targeted ads. In fact, I expect companies like Facebook bake our forgetting into their internal, day-to-day practices.

This hurtling forward, coupled with our inability to keep up with the resulting scandals, has allowed the oft-derided ‘move fast and break things’ motto to be fully actualized. While one takeaway might be that our attention spans have contracted due to informational overload, it’s certainly not the only possibility. One might suspect that we are incapable of focusing on any particular tech scandal not because our attention spans have shrunk but because of the ever-evolving techno-scandal culture we now inhabit. To recognize the ease with which we forget, one need only revisit one particularly troubling example of ‘breaking things’ from just a handful of years ago.

In 2012, many people were unknowing subjects in a social experiment run by Facebook. Curious about whether the influence it had acquired could allow it to change the moods of its users, Facebook manipulated their News Feeds. For some users it reduced the amount of positive, uplifting content; for others it reduced the negative, depressing content. The hypothesis was confirmed and the resulting paper was published in a prestigious peer-reviewed scientific journal (The Proceedings of the National Academy of Sciences, June 17, 2014, V. 111(24), p. 8788–90). It’s worth having a look at the abstract for the paper:

Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337: a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.[1]

And within the first page:

On Facebook, people frequently express emotions, which are later seen by their friends via Facebook’s “News Feed” product. Because people’s friends frequently produce much more content than one person can view, the News Feed filters posts, stories, and activities undertaken by friends. News Feed is the primary manner by which people see content that friends share. Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging. [italics added][2]
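To see how simple the mechanics of such a manipulation can be, consider a rough sketch of the kind of filtering the paper describes (the study classified posts by the presence of positive or negative words from the LIWC dictionaries and omitted a percentage of them from view). The word lists, rates, and function names below are illustrative stand-ins, not the experiment’s actual code.

```python
import random

# Tiny stand-in word lists; the study relied on the much larger LIWC dictionaries.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful", "excited"}
NEGATIVE_WORDS = {"sad", "terrible", "hate", "awful", "angry"}

def valence(post: str) -> str:
    """Crudely label a post positive, negative, or neutral based on whether
    it contains at least one emotion word."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    if words & POSITIVE_WORDS:
        return "positive"
    if words & NEGATIVE_WORDS:
        return "negative"
    return "neutral"

def filter_feed(posts: list[str], reduce: str, omission_rate: float = 0.5) -> list[str]:
    """Silently withhold a fraction of posts of the targeted valence --
    the basic manipulation described in the abstract above."""
    return [p for p in posts
            if not (valence(p) == reduce and random.random() < omission_rate)]

feed = ["I love this wonderful day!", "Feeling sad and awful.", "Lunch was fine."]
print(filter_feed(feed, reduce="negative"))
```

That something this crude, applied for a week, measurably shifted what its subjects went on to post is exactly what should give us pause.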

It’s one thing to read this from an academic perspective. It’s an entirely different thing to truly consider the fact that Facebook manipulated the emotions and mental states of hundreds of thousands of people. It’s important to feel the outrage that’s appropriate toward something so outrageous. It’s worth reflecting upon the power that such an ability, and the willingness to use it, implies. And finally, it’s unnerving but necessary to acknowledge that we now live in a world where this power is wielded by numerous profit-driven companies that have come to dominate a significant portion of the global, online distraction economy.

Concerning such questionable activities, I fear we’re no longer shockable. We see that these companies absorb our health and fitness data, track our purchase and click patterns, and buy our driving, employment, arrest and voting records. All the while, another video of a ‘disenfranchised’, petulant white lady raging at the sight of a black child selling water ‘without a license’ goes viral. Because the latter is more visceral, it becomes a more likely object of our fleeting anger, and hence a more likely object of our attention.

In light of all this, it’s natural to wonder, what’s the difference between a state-run media outlet that attempts to placate its citizens with inspirational, dangling kittens and a social media company that manipulates the emotions of its users? While one is powerful, immensely profitable and potentially oppressive, the other is unlikely to be run by a barely grown-up billionaire who stumbled upon too much power after launching a website aimed at rating the ‘hotness’ of women on his college campus.

It’s one thing for these companies to harvest and then profit from our data. It’s another thing altogether to experiment on us — without our consent, mind you — while doing so. It’s about time that we ask: at what point does free access to their services no longer suffice as compensation for being unwitting subjects in a social experiment? I expect that giving this the consideration it deserves would require us to remember the last scandal long enough to recognize that the experiment is ongoing and that many more ‘things’ have been broken.

[1] Adam D. I. Kramer, Jamie E. Guillory and Jeffrey T. Hancock. “Experimental evidence of massive-scale emotional contagion through social networks”. The Proceedings of the National Academy of Sciences, June 17, 2014, V. 111(24), pp. 8788–90.

[2] Ibid.

Pigeonholing and Personalization of the Online Experience

The personalization of our online experience – a result of the algorithmic extraction of our personal data, the subsequent curation of what we see, and the boundaries of our own clicking behavior – threatens to pigeonhole us into increasingly narrow categories determined by online platforms and data brokers. Such pigeonholing will further constrain what we encounter online, as with each stage of narrowing we will continue to click on increasingly limited subsets of what is made available to us. While the amount of information we encounter will, I expect, remain as robust as ever, its content will be constrained by the bubbles to which we’re assigned. One troubling implication is that what we encounter will continue to narrow until the original promise of an internet that ‘opens’ the world produces the opposite result, leaving us more easily predictable consumers and more easily persuaded actors in an increasingly curated spiral of contracting content.
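The feedback loop is easy to simulate. The following is a toy model under invented assumptions (eight topics, a platform that shows only its current top three, a user who clicks only among what is shown); it is not any platform’s real recommender, but it shows how quickly the loop locks in.

```python
import random
random.seed(0)

TOPICS = ["politics", "sports", "science", "cooking", "travel", "music", "art", "tech"]

# The user genuinely enjoys every topic to some degree...
true_interest = {t: random.uniform(0.3, 1.0) for t in TOPICS}
# ...but the platform starts with a flat guess and learns only from clicks.
estimated_interest = {t: 1.0 for t in TOPICS}

def recommend(k: int = 3) -> list[str]:
    """Show only the k topics the platform currently believes we like most."""
    return sorted(TOPICS, key=lambda t: estimated_interest[t], reverse=True)[:k]

ever_shown = set()
for _ in range(50):
    shown = recommend()
    ever_shown.update(shown)
    # The user clicks among what is shown, in proportion to true interest.
    clicked = random.choices(shown, weights=[true_interest[t] for t in shown])[0]
    estimated_interest[clicked] += 1.0  # the click reinforces the estimate
    # Topics that are never shown can never be clicked, so they never recover.

print("Shown after 50 rounds:", recommend())
print("Never shown at all:   ", sorted(set(TOPICS) - ever_shown))
```

Five of the eight topics the user genuinely enjoys simply disappear, not because anyone decided they should, but because the loop of curation and clicking never gives them a chance.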

To see how we are already being categorized, consider one of the many pigeonholing practices of Acxiom, one of the world’s most powerful data brokers:

“Acxiom assigns you a 13-digit code and puts you into one of 70 ‘clusters’ depending on your behavior and demographics… People in cluster 38…are most likely to be African American or Hispanic, working parents of teenage kids, and lower middle class and shop at discount stores. Someone in cluster 48 is likely to be Caucasian, high school educated, rural, family oriented, and interested in hunting, fishing, and watching NASCAR.”[1]

As companies like these persist in selling our data and the content of our online experience narrows further, we will continue to be treated as categorizable cogs in an increasingly competitive attention economy.

In fact, Acxiom’s own ‘Consumer Data Products Catalog’ gives us a look at just how it views us:

“Information includes consumers’ interests — derived, the catalog says, “from actual purchases and self-reported surveys” — like “Christian families,” “Dieting/Weight Loss,” “Gaming-Casino,” “Money Seekers” and “Smoking/Tobacco.” Acxiom also sells data about an individual’s race, ethnicity and country of origin. “Our Race model,” the catalog says, “provides information on the major racial category: Caucasians, Hispanics, African-Americans, or Asians.” Competing companies sell similar data.”[2]

It must be admitted that being placed in categories provides us with ads for products we’re more likely to desire, but it’s nonetheless natural to wonder whether these benefits can compete with the costs. As each company learns more about you, it more finely shapes what you see. Beyond limiting the range of information we’re exposed to, this may, as noted by Frischmann and Desai[3], lead to the standardization of the individual. In other words, we have become subjects in a massive social engineering project. If companies can determine what we want with ever-increasing precision, they may ultimately be able to (at least partially) determine our online behavior by precisely tailoring the options they provide. In short, corporate knowledge of individuals may allow corporations to psychologically pigeonhole us in ways that are conducive to their own ends rather than ours. Consider the following from Frischmann and Desai:

“Suppose we’d like to induce a group of people to behave identically. We might personalize the inducements. For example, if we’re hoping to induce people to contribute $100 to a disaster relief fund, we might personalize the messages we send them. The same applies if we’re hoping to nudge people to visit the doctor for an annual check-up, or if I’m hoping to get them to click on an advertisement. Effective personalized ads produce a rather robotic response—clicks. Simply put, personalized stimuli can be an effective way to produce homogenous responses.”[4]  
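Frischmann and Desai’s point can be made concrete with a toy example. Everything below is invented (the three message framings and each user’s susceptibility to them); the sketch only illustrates the mechanism: pick for each person the stimulus they are most likely to respond to, and you get more uniform behavior than any single message could produce.

```python
import random
random.seed(1)

# Three hypothetical framings of the same ask (say, "donate $100").
FRAMINGS = ["duty", "empathy", "social_proof"]

# Each simulated user responds to each framing with a different probability.
users = [{f: random.uniform(0.05, 0.6) for f in FRAMINGS} for _ in range(10_000)]

def response_rate(choose_message) -> float:
    """Fraction of users who act, given a rule for picking each user's message."""
    acted = sum(random.random() < user[choose_message(user)] for user in users)
    return acted / len(users)

# One-size-fits-all: everyone receives the same framing.
uniform = response_rate(lambda user: "duty")
# Personalized: each user receives whichever framing they are most susceptible to.
personalized = response_rate(lambda user: max(user, key=user.get))

print(f"same message for all: {uniform:.0%} respond")
print(f"personalized message: {personalized:.0%} respond")
```

The personalized rule produces a noticeably larger, more uniform block of people doing exactly the same thing, which is precisely the ‘robotic response’ the passage describes.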

A closely related worry involves the emergence of echo chambers and filter bubbles. The personalization and filtering of our online experience can lead to homophily, or the forming of strong connections to, and preferences for, people who share our beliefs and attitudes. While this can be psychologically comforting, it can also reinforce confirmation bias and lead to the dismissal of opposing ideas.[5] Clearly, this phenomenon is problematic on many fronts, one of which involves the erosion of democracy. A vibrant, well-functioning democratic society requires the free, active and willing exchange of diverse ideas. The outright dismissal of opposing ideas yields pernicious polarization that undercuts both the likelihood of these crucial exchanges and the open-mindedness needed to truly consider competing opinions.

One finds oneself in a filter bubble when one is presented with limited perspectives on any relevant issue(s).[6] The filtering of our online experience may lead us to mistakenly believe the information we’re receiving is comprehensive while leaving us informationally and epistemically sheltered. Alternative, competing ideas are likely to seem not only foreign but also like reasonable targets of scorn and dismissal.

The more any entity knows about you, the more likely it is to be able to persuade you to act in particular ways. This, in fact, is the goal of social engineering. And clearly this would be an attractive scenario for any organization seeking results of any kind. We know that companies – and possibly countries – exploited Facebook’s platform by microtargeting individuals during the 2016 Presidential campaign. In addition to Cambridge Analytica, the Russian Internet Research Agency targeted minorities, amongst others, by creating fake accounts and nudging them either toward voting for a third-party candidate or toward not voting at all. The more companies know about us, the more they can target us (or small, pigeonholed groups of us) directly in order to affect our beliefs and, therefore, our actions.

Nonetheless, the complete determination of our desires, the orchestrated directing of our attention and the erosion of our democracy are not inevitable. It’s important to recognize that we are not without responsibility or control in this brave new world, even if exercising them requires some significant reflection and an understanding of the workings of the online universe.

It would be entirely unreasonable for our IRL (i.e., in real life) behavior to be constantly monitored in order to commodify our attention and modify our behavior. While I expect little resistance to this claim, this is the reality when it comes to our online lives. We need to at least consider the possibility that what we see is being limited by organizations seeking to monopolize our attention and direct our behavior. But we also need to realize that our online behavior is part of what is leading to our limited purview. In my very limited wisdom, I would suggest that we seek out opposing viewpoints, alternative news sources, new experiences and attempt to engage with information that transcends what we find comforting and agreeable. Harder still, we need to truly remain open to changing our opinions in the face of new data.


[1] Lori Andrews, I Know Who You Are and I Saw What You Did, p. 35.

[2] https://www.nytimes.com/2012/06/17/technology/acxiom-the-quiet-giant-of-consumer-database-marketing.html

[3] Frischmann, B. and Desai, D. “The Promise and Peril of Personalization”. https://cyberlaw.stanford.edu/blog/2018/11/promise-and-peril-personalization

[4] Ibid.

[5] For further discussion see C Thi Nguyen, “Escape the Echo Chamber”, https://aeon.co/essays/why-its-as-hard-to-escape-an-echo-chamber-as-it-is-to-flee-a-cult

[6] See Eli Pariser’s The Filter Bubble.

The Norms that Undermine Online Privacy

When it comes to dominant online platforms like Google, Amazon and Facebook, the idea of ‘privacy norms’ may bring to mind the default settings that one might tweak if one cares at all about privacy. On the other hand, there are additional ‘norms’ that lurk in the background and provide the foundation for the riches these companies have accumulated.

These norms, developed and perpetuated by the corporate behemoths in question, provided the groundwork for their present-day domination by allowing them to harvest massive quantities of our data. These companies have become such a ubiquitous presence in our lives that it’s easy to overlook the background assumptions that gave rise to the paradigm shift that made the phrase ‘data is the new oil’ so fitting.

While introducing the topic of privacy and big data to my Tech Ethics students this semester, I mentioned Jaron Lanier’s idea that we ought to be paid for our data. If there’s anything problematic about sweatshop workers being paid next to nothing for their roles in producing products that are then sold at exorbitant prices (to mention just one category of worker exploitation), then there’s certainly something to Lanier’s claim. These companies are selling us. We generate the data, data that simply would not exist without us. And while one might claim that the same information could be generated by any other platform user, the fact is that these companies profit from the fact that we generated it. Moreover, if anyone else could generate the same data, then why do they need ours? Because, obviously, more data is better than less.

My students’ reactions ranged from chuckles to furrowed brows. And I don’t blame them. I expect these reactions result from the simple fact that we don’t (not that we couldn’t or shouldn’t) get paid for generating this data, the fact that they came of age in a time when these companies have always dominated, and the fact that we have been socialized to acquiesce to certain default assumptions as though they were handed down from a higher power. And no matter how independently plausible Lanier’s idea may be, it’s likely to seem bizarre to most. Once again, there’s a reason for this.

The default assumption is that we have little to no rights when it comes to our online data, as though any such rights evaporate when we touch our phones or open our laptops. We forfeit these rights (so goes the assumption), even if we do so unknowingly (or, as I would contend, without consenting), every time we click the tiny, psychologically insignificant ‘I agree to the terms of service’ box that allows us to get to the next page of the uncountably many sites that are central to our everyday lives (e.g., paying credit card bills, uploading checks, renting a car, booking a flight, sending money to a friend, ordering food online, paying our mortgage, signing up for newspaper subscriptions, etc.). Because it’s the ‘default’, we accept it without question. Nevertheless, I believe this assumption is outrageous when considered with any care.

If I went to my local Target and, when given the credit card receipt, signed next to an analogous, analog ‘terms of service’, would this really give Target the right to have someone follow and record me for the rest of my life? What if the cashier somehow affixed a tracking device to me that would accomplish the same thing? What’s the difference between tracking my activities with an algorithm and doing so with a camera or a microphone? While the latter is clearly wrong, we somehow accept the former as an unassailable fact of online life.

It’s as though these companies are driven by impersonal algorithms developed for nothing more than extracting data and the profits they bring (*incredulous look implied), all the while treating the originators of such data – flesh and blood human beings – as mere means to these ends. And the scope of these aims seems to have no boundaries. According to a March 2019 article in The Atlantic:

“Amazon has filed a patent for a voice assistant that would recommend cold and flu medicine if it overhears you coughing.”

And…

“The health-care start-up Kinsa drew sharp criticism from privacy experts last year for selling illness data. Kinsa makes a smart thermometer that takes a user’s temperature, then instantly uploads it to a server along with gender and location information.

Kinsa used this data to create real-time maps of where people were getting sick and to refine its flu predictions for the season, with accuracy levels matching those of the Centers for Disease Control and Prevention. But it also sold the information to Clorox, which beefed up its marketing of disinfecting wipes and similar products in zip codes where thermometers reported a spike in fevers.”[1]
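Notice how little machinery such a pipeline needs. The sketch below is hypothetical (the field names, fever threshold, and flagging rule are mine, not Kinsa’s actual schema), but it captures the shape of the flow: each reading arrives already tied to demographics and a location, and aggregation by zip code falls out almost for free.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str
    temperature_f: float
    gender: str       # uploaded alongside the reading
    zip_code: str     # the reading arrives already tied to a location

def fever_hotspots(readings: list[Reading],
                   fever_f: float = 100.4,
                   min_share: float = 0.15) -> dict[str, float]:
    """Aggregate readings by zip code and flag areas where the share of fevers
    crosses a threshold -- a map useful to public health and, evidently,
    to anyone with disinfecting wipes to market."""
    by_zip = defaultdict(list)
    for r in readings:
        by_zip[r.zip_code].append(r.temperature_f >= fever_f)
    shares = {z: sum(flags) / len(flags) for z, flags in by_zip.items()}
    return {z: share for z, share in shares.items() if share >= min_share}

readings = [Reading("a1", 101.2, "F", "10001"), Reading("b2", 98.6, "M", "10001"),
            Reading("c3", 102.0, "F", "60614")]
print(fever_hotspots(readings))   # {'10001': 0.5, '60614': 1.0}
```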

When one considers these examples and others, such as iRobot’s plan to use its Roomba vacuums to map the layout of users’ homes in order to sell the data to the Googles and Amazons of the world, it becomes difficult to imagine anything being off limits.

Moreover, I’m writing this within weeks of Google’s announcing its acquisition of Fitbit (the maker of one of many WiFi-enabled fitness and movement trackers) and just days after The Wall Street Journal reported that Google, by partnering with Ascension (a Catholic network of hospitals and clinics across 21 states), had harvested medical records from over 2,500 hospitals and clinics in a venture codenamed ‘Project Nightingale’. The data includes lab results, doctor visits, prescriptions, hospitalization records, and diagnoses, as well as patient names and dates of birth. And all of this occurred without informing those whose records were harvested or the doctors who provided the services. Within hours of the WSJ’s story breaking, Google and Ascension made their partnership public.

When condemnation from a handful of Senators on both sides of the aisle became public, Google released a statement including the following: “We believe Google’s work with Ascension adheres to industry-wide regulations (including HIPAA) regarding patient data, and comes with strict guidance on data privacy, security, and usage.” Meanwhile, Ascension stated that “all work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”

It turns out that they just might be right. According to HIPAA (the Health Insurance Portability and Accountability Act of 1996), health care providers can disclose health records to third parties if the goal of doing so is to improve the quality of the health care provided. If this is correct, then it highlights a clear case where what is legal and what ought to be legal part ways. If we care at all about privacy, especially given the sensitive information that will be extractable from us as technology continues to advance, then we need to hold these companies to ethical standards and not just legal benchmarks, or we risk losing far more control over our own lives.

According to a whistleblower claiming to have worked on the project: “Patients haven’t been told how Ascension is using their data and have not consented to their data being transferred to the cloud or being used by Google. At the very least patients should be told and be able to opt in or opt out”.[2]

With that said, Google also announced that “under this arrangement, Ascension’s data cannot be used for any other purpose than for providing these services we’re offering under the agreement, and patient data cannot and will not be combined with any Google consumer data”.

This might sound familiar. Before 2012, when Google announced that it would merge profile information across its many platforms (including Google Search, Gmail, YouTube and the Android OS) without allowing users to opt out, it had said it would not do so. In addition, a recent Wall Street Journal investigation revealed that Google does, in fact, curate its search results, despite stating on its blog that “we do not use human curation to collect or arrange the results on a page.”[3] Such tweaks to its algorithms include those that favor big businesses like Amazon, Facebook and eBay over smaller businesses, as well as “blacklists to remove certain sites or prevent others from surfacing in certain types of results”.[4] Google employs contractors to evaluate its search result rankings and, according to some of these contractors, they are informed about ‘the correct ranking of results’. The company then uses the contractors’ evaluations to adjust its algorithms. The overarching point is that Google has often said that it doesn’t or wouldn’t do things that it in fact does or has eventually done. In light of this, one can be forgiven for being skeptical of its claim that “patient data cannot and will not be combined with any Google consumer data”.
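For a sense of how rater feedback can flow back into rankings, here is a generic sketch of a pairwise learning-to-rank update. It is a textbook technique, not Google’s actual pipeline; the features, numbers, and the ‘big brand’ signal are invented for illustration.

```python
# Each result is represented by a few scoring features; the third one
# (hypothetically) encodes whether the site is a large, established business.
weights = [0.0, 0.0, 0.0]

def score(features: list[float]) -> float:
    return sum(w * f for w, f in zip(weights, features))

def update_from_rater(preferred: list[float], other: list[float], lr: float = 0.1) -> None:
    """Perceptron-style pairwise update: if raters say `preferred` should
    outrank `other` but the current weights disagree, nudge the weights
    until the model agrees with the raters."""
    if score(preferred) <= score(other):
        for i, (p, o) in enumerate(zip(preferred, other)):
            weights[i] += lr * (p - o)

# Invented rater feedback in which the 'correct ranking' puts the big-brand
# result above a small site with otherwise comparable relevance signals.
big_brand = [0.6, 0.2, 1.0]
small_site = [0.7, 0.3, 0.0]
for _ in range(10):
    update_from_rater(big_brand, small_site)

print(weights)  # the weight on the big-brand signal has drifted upward
```

Whether the nudges come from explicit blacklists or from the aggregate drift of thousands of rater judgments, the effect is the same: ‘the correct ranking’ becomes whatever the feedback says it is.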

It’s worth stressing that, in light of the relatively recent mind-boggling advances in artificial intelligence and extraction algorithms, it may be impossible to conceive of the power of future technologies. As a result, the current importance of our privacy rights over our online data cannot be overstated.


[1] https://www.theatlantic.com/technology/archive/2019/03/flu-google-kinsa-sick-thermometer-smart/584077/

[2] https://www.theguardian.com/technology/2019/nov/12/google-medical-data-project-nightingale-secret-transfer-us-health-information

[3] https://www.blog.google/products/search/how-we-keep-google-search-relevant-and-useful/

[4] https://www.wsj.com/articles/how-google-interferes-with-its-search-algorithms-and-changes-your-results-11573823753?mod=hp_lead_pos7

Social Media, Democracy & Citizen Responsibility

In today’s climate of justifiable suspicion about the Googles and Facebooks of the world, it’s easy to overlook the responsibilities of the individuals using these platforms. While I’m always happy to point out the problematic nature of the data harvesting and information dissemination that these companies are built upon, I would also suggest that this does nothing to diminish our own social and moral obligation to make significant efforts to inform ourselves, to resist contributing to increasing polarization, and to do whatever is necessary to escape our cozy echo chambers and information bubbles.

Being a good citizen in a democracy requires more than many of us seem to think and much more than our actions often suggest. Pointing fingers, even when done in the right directions, is nowhere near enough. We need to wade through the oceans of bullshit emanating from partisan talking heads, fake-news peddlers, marketing-driven, agenda-suffused cable-news stations and algorithmically curated newsfeeds in order to determine which actions, policies, and candidates best represent our values, as well as the values most conducive to the health of a thriving democratic society.

President Obama, in a recent interview at the Obama Foundation Summit, offered the following: “This idea of purity and you’re never compromised, and you’re always politically woke and all that stuff, you should get over that quickly. The world is messy. There are ambiguities. People who do really good stuff have flaws. People who you are fighting may love their kids and share certain things with you.”

The point, I take it, is not to disregard the mistreatment of marginalized groups but to do something beyond mere posturing and attempting to appear ‘woke’.

Too many of us today seem to believe that it’s enough to call out the many who err or those we may simply disagree with. “Then I can sit and feel pretty good about myself”, said Obama, “because, ‘man, you see how woke I was? I called you out.’ That’s not activism. That’s not bringing about change. If all you’re doing is casting stones, you’re probably not going to get that far. That’s easy to do”.

And while I’m quick to agree that platforms like Twitter and Facebook lend themselves to this practice of racing to be the first to spot and out injustice or ignorant speech, we still need to recognize when we’re being lulled into an ineffectual gotcha game of virtue signaling that, though it may provide fleeting feelings of superiority, produces very little in the way of lasting change or dialogue.

—-

The speed with which Facebook and Google have come to play a central role in the everyday life of so many makes it easy to overlook how recent these companies are. Nonetheless their effects are undeniable. As we shared baby photos and searched for information on anything that might spark our curiosity, they’ve been aggregating our offerings and feeding us what they know will keep us coming back.

None of us like to be confronted with the possibility that we’re alone, that our beliefs might be false, or our deeply held values ultimately misguided. So social media curates our experience to provide us with the validation we so often seek. How better to do this than to gift us our own beliefs and values through the words and stories of others? This keeps us clicking and keeps the advertising dollars pouring in for the companies involved. Just like the angry incel savoring the hateful rantings of Donald Trump, we all feel the cozy pull of having our own views echoed back to us.

But, of course, none of this provides anything by way of truth or understanding. And more to the point at issue, none of this is conducive to an open-minded population willing to do the work required to breathe new life into an ailing democracy teetering on the precipice of unbridgeable polarization. While Aristotle, living in the first democracy, aptly said (and I’m paraphrasing) that it’s the mark of an educated mind to be able to entertain a thought without accepting it, social media has given us the means of reinforcing our own thoughts without subjecting them to the slightest scrutiny. In fact, one might find these two distinct ideas to be fitting bookends for the nearly 2,500-year run of democracy.

While this characterization of things may be a bit hyperbolic, the existence of problematic echo chambers and curated tunnel vision is quite real. Fox News acolytes dig in their heels while liberals roll their eyes, and each side drifts further away from the possibility of honestly engaging with the views of the other. (*I refuse to equate the so-called ‘extremes’ on the left with those on the right. There’s a clear moral and epistemic difference between an oblivious (or worse) refusal to acknowledge, for example, the current resurgence of xenophobia and white supremacy and the desire for health care for all or basic income).

The online social media environment, with its intrusive notifications and conduciveness to mindless scrolling and clicking, falls short of providing an optimal arena for informed evaluation and close examination of information. It’s for this reason that I believe we need to slow our online experience. So many of us ‘grown-ups’ impose limits on our children’s technology usage but do so while staring into the void of facile stumpers and bottomless distraction. Maybe a simple break would do us well. Forget that. A break would do us well. Many of us need to take a goddamn walk…without the phones. Look someone in the eye. It might turn out that the ‘idiot Trump supporter’ or the ‘snowflake Socialist’ is just an ordinary, imperfect human like yourself (*hate-filled racist, nationalist misogynists to the side – there is nothing worthy of engaging in such cases).

Moreover, in these days when our every move is harvested and aggregated, and when massive data companies commodify our very lives, it’s crucial that we recognize all of this while avoiding a victim’s mentality. We have an obligation to inform ourselves, evaluate our beliefs, reevaluate when new information arrives, and then incorporate or discard where appropriate.

Navigating the world of ideas and agendas has become far more difficult thanks to social media, the intentions of the all-pervasive corporate giants, the sheer quantity of information (which leads to more skimming than careful consumption), the ever-lurking pull of fatigue-based complacency and politically motivated fake news, amongst countless other factors. But, one way or another, we need to adapt if we’re going to have any hope of preserving democracy. Otherwise, we’re likely to revert to a power-dominated state of nature, the only difference being that this time around it was ushered in by technology.

FTC sues Match.com for misleading users

Machine learning together with deepfakery will exacerbate this problem, as companies like Match will be able to use a person’s profile information together with their ‘likes’ and saved profiles to generate profiles (especially pictures) of the men and women that targeted users are most likely to want to connect with. Despite these not being actual people, this will increase sales (as one needs to be a paid member to respond to messages) while providing nothing to users. Once again, technology allows the profit motive to run wild (that is, morally wrong).

Social Media & Consent

Suppose John is in China. He doesn’t understand any Chinese, but he wants a cup of coffee, so he enters the nearest coffee shop and points toward a coffee pot. The barista pours some coffee into a paper cup, hands it to John, then asks (in Chinese), “may I charge you double the price for this cup of coffee?” John, understanding nothing of what’s been asked, nods his head to expedite the process of getting his coffee. Has John consented to being charged double the regular price for his coffee?

We think not, but maybe you disagree. Suppose that rather than asking to overcharge him, the barista asks whether he may violently take possession of John’s heart. It seems clear that John has not consented to that. You might say that John ought not to have nodded his head in agreement, that he acted carelessly, maybe even that he did something wrong, but that’s not the point. His nodding or saying ‘yes’ does not, by itself, entail that he has consented to whatever has been asked.

I would argue that, in the majority of cases, users of social media platforms likewise have not consented to the relevant ‘terms of service’ (TOS). The length (consider not only the relevant text but also the embedded links) and the opaque legalese support the analogy to John in the relevant ways. This calls into question many current practices, including the use of personal data, the extraction of further data never explicitly provided, the monitoring of surfing and purchasing patterns, and the researching of connections between emotional states and behavior, all in order to better generate profits. We think these worries can be eliminated, or at the very least reduced, by making it easier for users to become truly informed regarding the relevant TOS.