Facebook’s Distorting Lens: The Danger of Social Deference

Recently I got over my revulsion for Facebook and once again activated an account.  I did it in part because, though I dislike the platform for obvious reasons, I feel it’s important to engage with something so monumentally influential.  It’s important to know firsthand what the atmosphere is like, what effects it has on its users, and what happens when the environment changes.  I’m quite familiar with the way it creates echo chambers and epistemic bubbles, and with the draining effect it tends to have on my psyche, but in my recent interactions I have been most upset by what seems to be a lack of autonomy in the social realm.  I feel shuffled from post to post without knowing why and without any sense that I can control what and whom I see.  It’s all the more distressing that on Facebook my social interactions are being governed by unknown algorithms.  I am troubled by what seems to be an integral part of Facebook, something I’ll call social deference.

It’s impossible to live in the modern world without deferring to others about a great many things.  We simply don’t have the ability to know firsthand, and from the ground up, everything we need to know.  The most obvious form of deference is deference about facts.  When I accept someone’s word on something, I’m taking on what they say as my belief.  We defer to doctors about the safety of medications and treatments, to engineers about the safety of our planes and bridges, and to news organizations about the events of the day.  This sort of thing is both commonplace and necessary: it would be difficult to get out of bed without trusting other people to sort some of our facts for us.

There are, on the other hand, facts about which it seems peculiar to defer.  Several years ago, I proposed the following thought experiment.  Suppose that Google offered an app called Google Morals.  You could enter any question about morality—should I be a vegetarian? Is it permissible to lie to achieve one’s ends? Is abortion permissible?—and Google Morals would give you the answer.  Set aside for the moment that it is unclear just how the app would work and how it would have access to the moral truths; suppose we had reason to believe that it did.  Nevertheless, I maintain, there is something peculiar about deferring to Google Morals, something that isn’t peculiar about deferring to Google Maps in order to learn how to get from Springfield to Capital City.  There is a way in which one shirks one’s responsibility as a person when one simply takes Google’s word on moral matters.

A good part of the problem with moral deference is that we don’t have access to why Google provides the answers it does.  It wouldn’t be a problem if we could “see the work” and understand why Google reaches the verdicts it does.  In that case it’s likely we wouldn’t simply be deferring: we wouldn’t be accepting Google’s verdict merely because it is Google’s output; we would be altering our beliefs because we understood the reasons why Google said what it said.  Understanding why something is true, being able to articulate the ins and outs, is important when it comes to some of our beliefs—namely the moral beliefs that make us who we are.

Ok, so suppose this is right; what does this have to do with Facebook?  It strikes me that Facebook encourages a sort of deference that is likely just as problematic as moral deference.  Call it social deference.

Suppose that you systematically deferred to others about who was a good friend.  Instead of evaluating someone based on their merits, based on how they treated you, you simply asked a friend expert, a “friendspert,” whether someone was a good friend.  It’s not just that the friendspert recommends you check someone out who might make a good friend; it’s that you adopt the belief that the person is your friend based on the friendspert’s advice and organize your life accordingly.  This is a sort of social deference—one is allowing one’s social circle to be determined by the say-so of another.  In some sense one is shirking one’s duties as a friend and offloading onto others important work that really should be done by each of us—evaluating people based on their perceived merits and demerits and befriending them based on how they treat us.  There would be something wrong if someone asked “Why are you my friend?” and your answer was “Because the friendspert told me to be.”  Acting that way depreciates friendship to the point that it’s not clear one really has a friend at all.

The friendspert is an extreme case, and though it’s tempting to say that Facebook, with its friend suggestions, is acting like a friendspert, that’s probably not quite right.  There is perhaps a little truth to it, but it almost certainly overstates what is really going on when we “friend” someone on Facebook.  It’s not as though the person actually becomes our friend in any robust sense when we click that blue button, and it’s not as though we shut down our independent evaluation of that person and simply defer to Facebook’s algorithm.  We form beliefs about the person and make attachments based on what we see in our feed and how we interact with them.

There is, though, a type of social deference involved in Facebook that might be even more insidious.  We are deferring in this case to an algorithm that affects how our friends and social circles appear to us.  Whom we see, and which of their posts we see, is determined by a system that is unknown to us.  To the degree that we let our attachments be shaped by those algorithms, we are guilty of social deference.  We are allowing our connections to other people to be shaped by decisions and frameworks that are not our own.  In doing so we are ceding our social autonomy, and we’re allowing one of the most essential parts of ourselves—the social part—to be molded by a third party.

Most of us know, at least after adolescence, that we should not judge people simply by what others report about them.  Even if those reports are accurate, the intermediary is apt to distort our picture of other people, thereby shaping our judgments about them.  It is important, indeed it’s our responsibility, to judge people as much as we can without intermediaries shaping our perception of them.  The problem isn’t just that falsehoods and misrepresentations enter the mix.  Even supposing they don’t, it is our responsibility to form our interpersonal relationships—especially our friendships—ourselves.  Forming and nourishing friendships requires a subtle navigation between revealing too much about oneself and not enough, foregrounding some features and not others.  This isn’t dishonest; it’s a recognition that not every fact is relevant to every relationship, and sometimes the order and emphasis of what one reveals about oneself says as much about oneself as the information revealed.  (If I start every conversation by announcing my religion or political affiliation, that fact will tell you as much about me as whatever you learn about my faith or politics.)

When we use Facebook, we are introducing an intermediary between us and our social world and placing trust in it to provide an accurate picture of that world.  In fact, what we get is a distorting lens that highlights some parts of our friends at the expense of others.  Importantly, the algorithm that determines which posts we see is not interested in generating or preserving true friendship, nor is it interested in showing us the truth about people.  It is interested in what keeps us clicking, and as such it tends to show us the most provocative parts of our social sphere.  People’s most outrageous opinions are foregrounded, and the features that are relevant to true friendship recede from view.
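
To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of engagement-driven ranking described above.  Everything in it (the post fields, the weights, the predict_engagement function) is invented for illustration; it is not Facebook’s actual ranking system, only a toy showing how optimizing for clicks naturally pushes the most provocative material to the top of a feed.

```python
# Hypothetical sketch of an engagement-optimized feed ranker. The fields
# and weights are invented for illustration; real feed-ranking systems are
# far more complex and are not public.

from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    outrage_score: float   # how provocative the post is, 0.0 to 1.0
    warmth_score: float    # how much it reflects ordinary, friendly life

def predict_engagement(post: Post) -> float:
    """Toy model: provocative content is predicted to draw far more clicks
    and comments than warm, unremarkable content."""
    return 3.0 * post.outrage_score + 0.5 * post.warmth_score

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by predicted engagement, not by anything that matters
    to friendship."""
    return sorted(posts, key=predict_engagement, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("Aunt Carol", "Furious political rant", outrage_score=0.9, warmth_score=0.1),
        Post("Old friend", "Photos from a quiet hike", outrage_score=0.1, warmth_score=0.9),
        Post("Coworker", "Hot take on the morning news", outrage_score=0.7, warmth_score=0.2),
    ]
    for post in rank_feed(feed):
        print(f"{post.author}: {post.text}")
```

Even in this toy version the quiet hike sinks to the bottom of the feed, which is the point: whatever best predicts clicking, not whatever best reflects a friend, determines what we see.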

We needn’t rest with abstractions to see the point.  How many of us have seen the political posts of our family members and changed forever how we see them?  How many of us have seen the posts of our friends only to resent them for their self-righteousness or for what might appear to be their self-obsession?  Our perspective on our social world is being shaped by hidden algorithms designed to keep users on the site, not by anything that matters to friendship.  This is a kind of social deference, and by engaging in it we are handing over responsibility for our relationships to a source we all know is untrustworthy.  The result is a weakening and cheapening of our relationships, but we can’t just blame Facebook.  It’s our decision to give a third party the power to distort and mediate our relationships, and to that degree we deserve a large share of the blame for abandoning our responsibilities to our friends and our social sphere.

Emotional Manipulation, Moving Too Fast, and Profiting from the Broken Things

The task of keeping up with tech news has become rather harrowing as of late. The avalanche of information keeps stories constantly replacing one another and our attention overloaded. This has become so clearly the case that it’s easy to forget what happened just a few weeks ago. Facebook’s weak stance on political ads quickly gave way to Google’s acquisition of our medical records, before both companies announced they would slightly raise the minimum number of profiles required for targeted ads. In fact, I expect companies like Facebook bake our forgetting into their internal, day-to-day practices.

This hurtling forward, coupled with our inability to keep up with the resulting scandals, has allowed the oft-derided ‘move fast and break things’ motto to be fully realized. While one takeaway might be that our attention spans have contracted due to informational overload, that’s certainly not the only possibility. One might suspect that we are incapable of focusing on any particular tech scandal not because our attention spans have shrunk but because of the ever-evolving techno-scandal culture we now inhabit. To recognize the ease with which we forget, one need only revisit one particularly troubling example of ‘breaking things’ from just a handful of years ago.

In 2012, nearly 700,000 people were unknowing subjects in a social experiment run by Facebook. Curious about whether its influence could be used to change the moods of its users, the company manipulated their News Feeds. For some users it reduced the amount of positive, uplifting content shown; for others it reduced the negative, depressing content. The hypothesis was borne out, and the resulting paper was published in a prestigious peer-reviewed scientific journal (Proceedings of the National Academy of Sciences, June 17, 2014, vol. 111(24), pp. 8788–8790). It’s worth having a look at the abstract for the paper:

Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. Emotional contagion is well established in laboratory experiments, with people transferring positive and negative emotions to others. Data from a large real-world social network, collected over a 20-y period suggests that longer-lasting moods (e.g., depression, happiness) can be transferred through networks [Fowler JH, Christakis NA (2008) BMJ 337: a2338], although the results are controversial. In an experiment with people who use Facebook, we test whether emotional contagion occurs outside of in-person interaction between individuals by reducing the amount of emotional content in the News Feed. When positive expressions were reduced, people produced fewer positive posts and more negative posts; when negative expressions were reduced, the opposite pattern occurred. These results indicate that emotions expressed by others on Facebook influence our own emotions, constituting experimental evidence for massive-scale contagion via social networks. This work also suggests that, in contrast to prevailing assumptions, in-person interaction and nonverbal cues are not strictly necessary for emotional contagion, and that the observation of others’ positive experiences constitutes a positive experience for people.[1]

And within the first page:

On Facebook, people frequently express emotions, which are later seen by their friends via Facebook’s “News Feed” product. Because people’s friends frequently produce much more content than one person can view, the News Feed filters posts, stories, and activities undertaken by friends. News Feed is the primary manner by which people see content that friends share. Which content is shown or omitted in the News Feed is determined via a ranking algorithm that Facebook continually develops and tests in the interest of showing viewers the content they will find most relevant and engaging. One such test is reported in this study: A test of whether posts with emotional content are more engaging. [italics added][2]
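
Read from an engineering perspective, the intervention described in these passages amounts to a filter placed in front of the News Feed: posts classified as emotionally positive (or, in the other condition, negative) had some chance of being withheld from a given user’s feed. The sketch below is only a rough, hypothetical illustration of that kind of manipulation; the word lists, omission rate, and function names are invented here and are not the study’s actual implementation, which the paper describes as classifying posts by counting emotion words with standard text-analysis software.

```python
# Rough, hypothetical illustration of the kind of manipulation described in
# the quoted passages: probabilistically withholding posts that contain
# emotion words. The word lists, omission rate, and names are invented;
# this is not the study's actual code.

import random

POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def contains_any(text: str, words: set[str]) -> bool:
    """Crude check: does the post contain at least one word from the list?"""
    tokens = text.lower().split()
    return any(word in tokens for word in words)

def filter_feed(posts: list[str], condition: str,
                omission_rate: float = 0.5, seed: int = 0) -> list[str]:
    """Return the feed with emotional posts probabilistically withheld.

    condition is either "reduce_positive" or "reduce_negative".
    """
    rng = random.Random(seed)
    target = POSITIVE_WORDS if condition == "reduce_positive" else NEGATIVE_WORDS
    kept = []
    for post in posts:
        if contains_any(post, target) and rng.random() < omission_rate:
            continue  # silently drop this emotional post from the user's feed
        kept.append(post)
    return kept

if __name__ == "__main__":
    feed = [
        "I love this wonderful weather",
        "What a sad, terrible day",
        "Meeting moved to 3pm",
    ]
    print(filter_feed(feed, "reduce_positive"))
```

The unsettling part is how little machinery this requires: a word list, a weighted coin flip, and control over the feed are enough to run an emotion experiment on an unwitting population.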

It’s one thing to read these passages from an academic perspective. It’s an entirely different thing to truly consider the fact that Facebook manipulated the emotions and mental states of hundreds of thousands of people. It’s important to feel the outrage that’s appropriate toward something so outrageous. It’s worth reflecting upon the power that such an ability, and the willingness to use it, implies. And finally, it’s unnerving but necessary to acknowledge that we now live in a world where this power is wielded by numerous profit-driven companies that have come to dominate a significant portion of the global, online distraction economy.

Concerning such questionable activities, I fear we’re no longer shockable. We see that these companies absorb our health and fitness data, track our purchase and click patterns, and buy our driving, employment, arrest and voting records. All the while, another video of a ‘disenfranchised’, petulant white lady raging at the sight of a black child selling water ‘without a license’ goes viral. Because the latter is more visceral, it becomes a more likely object of our fleeting anger, and hence a more likely object of our attention.

In light of all this, it’s natural to wonder, what’s the difference between a state-run media outlet that attempts to placate its citizens with inspirational, dangling kittens and a social media company that manipulates the emotions of its users? While one is powerful, immensely profitable and potentially oppressive, the other is unlikely to be run by a barely grown-up billionaire who stumbled upon too much power after launching a website aimed at rating the ‘hotness’ of women on his college campus.

It’s one thing for these companies to harvest and then profit from our data. It’s another thing altogether to experiment on us — without our consent, mind you — while doing so. It’s about time we asked: at what point does free access to their services no longer suffice as compensation for being unwitting subjects in a social experiment? I expect that giving this the consideration it deserves would require us to remember the last scandal long enough to recognize that the experiment is ongoing and that many more ‘things’ have been broken.

[1] Adam D. I. Kramer, Jamie E. Guillory, and Jeffrey T. Hancock, “Experimental evidence of massive-scale emotional contagion through social networks,” Proceedings of the National Academy of Sciences, June 17, 2014, vol. 111(24), pp. 8788–8790.

[2] Ibid.