The Norms that Undermine Online Privacy

When it comes to dominant online platforms like Google, Amazon, and Facebook, the phrase ‘privacy norms’ may bring to mind the default settings that one might tweak if one cares at all about privacy. But there are additional ‘norms’ that lurk in the background and provide the foundation for the riches these companies have accumulated.

These norms, developed and perpetuated by the corporate behemoths in question, provided the groundwork for their present-day domination by allowing them to harvest massive quantities of our data. These companies have become such a ubiquitous presence in our lives that it’s easy to overlook the background assumptions behind the paradigm shift that made the phrase ‘data is the new oil’ so fitting.

While introducing the topic of privacy and big data to my Tech Ethics students this semester, I mentioned Jaron Lanier’s idea that we ought to be paid for our data. If there’s anything problematic about sweatshop workers being paid next to nothing for their role in producing goods that are then sold at exorbitant prices (to mention just one category of worker exploitation), then there’s certainly something to Lanier’s claim. These companies are selling us. We generate the data, data that simply would not exist without us. And while one might claim that the same information could be generated by any other platform user, the fact is that these companies profit from the fact that we generated it. Moreover, if anyone else can generate the same data, why do they need ours? Because, obviously, more data is better than less.

My students’ reactions ranged from chuckles to furrowed brows. And I don’t blame them. I expect these reactions result from the simple fact that we don’t (not that we couldn’t or shouldn’t) get paid for generating this data, the fact that they came of age in a time when these companies have always dominated, and the fact that we have been socialized to acquiesce to certain default assumptions as though they were handed down from a higher power. No matter how independently plausible Lanier’s idea may be, it’s likely to seem bizarre to most. Once again, there’s a reason for this.

The default assumption is that we have few if any rights when it comes to our online data, as though any such rights evaporate when we touch our phones or open our laptops. We forfeit these rights (so goes the assumption), even if we do so unknowingly (or, as I would contend, without consenting), every time we click the tiny, psychologically insignificant ‘I agree to the terms of service’ box that allows us to get to the next page of the countless sites central to our everyday lives (e.g., paying credit card bills, depositing checks, renting a car, booking a flight, sending money to a friend, ordering food online, paying our mortgage, signing up for newspaper subscriptions, etc.). Because it’s the ‘default’, we accept it without question. Nevertheless, I believe this assumption is outrageous when considered with any care.

If I went to my local Target and, when given the credit card receipt, signed next to an analogous, analog ‘terms of service’, would this really give Target the right to have someone follow and record me for the rest of my life? What if the cashier somehow affixed a tracking device to me that would accomplish the same thing? What’s the difference between tracking my activities with an algorithm and doing so with a camera or a microphone? While the latter is clearly wrong, we somehow accept the former as an unassailable fact of online life.

It’s as though these companies are driven by impersonal algorithms developed for nothing more than extracting data and the profits they bring (*incredulous look implied), all the while treating the originators of such data – flesh and blood human beings – as mere means to these ends. And the scope of these aims seems to have no boundaries. According to a March 2019 article in The Atlantic:

“Amazon has filed a patent for a voice assistant that would recommend cold and flu medicine if it overhears you coughing.”

And…

“The health-care start-up Kinsa drew sharp criticism from privacy experts last year for selling illness data. Kinsa makes a smart thermometer that takes a user’s temperature, then instantly uploads it to a server along with gender and location information.

Kinsa used this data to create real-time maps of where people were getting sick and to refine its flu predictions for the season, with accuracy levels matching those of the Centers for Disease Control and Prevention. But it also sold the information to Clorox, which beefed up its marketing of disinfecting wipes and similar products in zip codes where thermometers reported a spike in fevers.”[1]

When one considers these examples and others, such as iRobot’s plan to use its Roomba vacuums to map the layout of users’ homes in order to sell the data to the Googles and Amazons of the world, it becomes difficult to imagine anything being off limits.

Moreover, I’m writing this within weeks of Google’s announcing its acquisition of Fitbit (maker of one of many WiFi-enabled fitness and movement trackers) and just days after The Wall Street Journal reported that Google, by partnering with Ascension (a Catholic network of hospitals and clinics across 21 states), had harvested medical records from over 2,500 hospitals and clinics in a venture codenamed ‘Project Nightingale’. The data includes lab results, doctors’ visits, prescriptions, hospitalization records, and diagnoses, as well as patient names and dates of birth. And all of this occurred without informing those whose records were harvested or the doctors who provided the services. Within hours of the WSJ’s story breaking, Google and Ascension made their partnership public.

When condemnation from a handful of Senators on both sides of the aisle became public, Google released a statement including the following: “We believe Google’s work with Ascension adheres to industry-wide regulations (including HIPAA) regarding patient data, and comes with strict guidance on data privacy, security, and usage.” Meanwhile, Ascension stated that “all work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”

It turns out that they just might be right. According to HIPAA (the Health Insurance Portability and Accountability Act of 1996), health care providers can disclose health records to third parties if the goal of doing so is to improve the quality of the health care provided. If this is correct, then it highlights a clear case where what is legal and what ought to be legal part ways. If we care at all about privacy, especially given the sensitive information that will be extractable from us as technology inevitably continues to advance, then we need to hold these companies to ethical standards and not just legal benchmarks, or we risk losing far more control over our own lives.

According to a whistleblower claiming to have worked on the project: “Patients haven’t been told how Ascension is using their data and have not consented to their data being transferred to the cloud or being used by Google. At the very least patients should be told and be able to opt in or opt out.”[2]

With that said, Google also announced that “under this arrangement, Ascension’s data cannot be used for any other purpose than for providing these services we’re offering under the agreement, and patient data cannot and will not be combined with any Google consumer data”.

This might sound familiar. Before 2012, when Google announced that it would merge profile information across its many platforms (including Google Search, Gmail, YouTube, and Android) without allowing users to opt out, it had said it would do no such thing. In addition, a recent Wall Street Journal investigation revealed that Google does, in fact, curate its search results, despite stating on its blog that “we do not use human curation to collect or arrange the results on a page.”[3] Such tweaks to its algorithms include those that favor big businesses like Amazon, Facebook, and eBay over smaller businesses, as well as “blacklists to remove certain sites or prevent others from surfacing in certain types of results”.[4] Google employs contractors to evaluate its search rankings and, according to some of these contractors, they are informed about ‘the correct ranking of results’. The company then uses the contractors’ evaluations to adjust its algorithms. The overarching point is that Google has often said that it doesn’t or wouldn’t do things that it in fact does or eventually did. In light of this, one can be forgiven for being skeptical of its claim that “patient data cannot and will not be combined with any Google consumer data”.
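To make that last mechanism concrete, here is a minimal, purely hypothetical sketch of how human rater feedback can be used to retune a ranking algorithm. Nothing in it reflects Google’s actual system, which is proprietary and vastly more complex; the signal names, weights, and the `update_weights` function are all invented for illustration. The point is just the pattern the WSJ describes: raters score results, and those scores nudge the weights that decide what surfaces.

```python
# Hypothetical sketch -- NOT Google's actual ranking system, whose details
# are not public. It illustrates the general feedback loop: human raters
# score results, and those scores are used to retune ranking weights.

from dataclasses import dataclass

@dataclass
class Result:
    relevance: float  # how well the page matches the query text (0..1)
    authority: float  # link/popularity signal (0..1)

def score(r: Result, w_rel: float, w_auth: float) -> float:
    """Rank results by a weighted blend of signals."""
    return w_rel * r.relevance + w_auth * r.authority

def update_weights(w_rel: float, w_auth: float, rated, lr: float = 0.05):
    """Nudge the weights toward agreement with human raters.

    `rated` is a list of (Result, rater_score) pairs, rater_score in [0, 1].
    One pass of least-squares gradient descent on a linear model.
    """
    for r, target in rated:
        error = target - score(r, w_rel, w_auth)
        w_rel += lr * error * r.relevance
        w_auth += lr * error * r.authority
    return w_rel, w_auth

# Suppose raters consistently mark authoritative pages as 'correct' even
# when their text relevance is low; repeated feedback shifts the blend.
w_rel, w_auth = 0.5, 0.5
feedback = [
    (Result(relevance=0.9, authority=0.2), 0.3),  # relevant but rated low
    (Result(relevance=0.4, authority=0.9), 0.9),  # authoritative, rated high
]
for _ in range(200):
    w_rel, w_auth = update_weights(w_rel, w_auth, feedback)

print(round(w_rel, 2), round(w_auth, 2))  # authority now outweighs relevance
```

Notice that nothing in the sketch hand-picks any individual result, yet what surfaces changes all the same; this is why a claim like “we do not use human curation” can be technically defensible while still being misleading.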

It’s worth stressing that, in light of the relatively recent mind-boggling advances in artificial intelligence and extraction algorithms, it may be impossible to conceive of the power of future technologies. As a result, the current importance of our privacy rights over our online data cannot be overstated.


[1] https://www.theatlantic.com/technology/archive/2019/03/flu-google-kinsa-sick-thermometer-smart/584077/

[2] https://www.theguardian.com/technology/2019/nov/12/google-medical-data-project-nightingale-secret-transfer-us-health-information

[3] https://www.blog.google/products/search/how-we-keep-google-search-relevant-and-useful/

[4] https://www.wsj.com/articles/how-google-interferes-with-its-search-algorithms-and-changes-your-results-11573823753?mod=hp_lead_pos7

Five Problems with Google’s Expanding Reach

This morning, within an hour of my first cup of coffee, I heard an ad for Google’s telephone services and read news of Google’s foray into health care data processing and its plans to enter the banking sector. I listened to the news on my Android phone, and, not one to buck a trend, I Googled “Google” to see what other news there was about the tech giant. (Fitbit… Stadia… Quantum Supremacy.) All this before I checked my email, some of which comes to me through Gmail, and looked at my schedule for the day, courtesy of Google Calendar. You get the point: Google is everywhere, and it is on its way to doing just about everything.

Shouldn’t we be concerned about a company that is so all-encompassing? It’s not that I think Google is an evil company or that it is bent on a dystopian project of world domination. Perhaps I’m naïve, but from what I can tell, those who run the various branches of Google and its parent company Alphabet—which is, really, just Google under a different set of, um, letters—are well-intentioned, idealistic people who believe they are part of an unprecedented force for good. They have some good arguments on their side: their products have made so many important things so much easier for so many people, at a cost to the consumer of approximately zero dollars. Google appears to be the leader of the pack in artificial intelligence, which will likely lead to incredible developments in medicine, education, communication, engineering, and, well, everything.

Yet I think we should be concerned, even if we grant that Google has done everything legally and in accordance with privacy regulations, and even though it might be the case that within any particular industry Google doesn’t constitute a monopoly. Here are five reasons for concern:

1.  Too Big to Fail and Too Powerful to Counter

It was supposed to be a lesson from the Great Recession: when companies become too integral to the workings of the economy, the possibility of failure becomes remote, not just in the minds of the company executives but in actuality. If a single company becomes too essential, it is virtually guaranteed to be propped up in case of major failure. Google will not be Lehman Brothers. At the moment, it seems extremely unlikely that it will face that sort of disaster, but if a major series of mistakes threatens Google, the U.S. government will almost certainly step in. This means that Google doesn’t face one of the biggest checks on private corporations—the possibility of failure. The worry here is not so much that this will lead to financial recklessness, though that too is possible. The worry is that it lacks a major check on ethical recklessness. Set aside the fact that its lobbying power is astonishing, and that Google executives or ex-employees wind up having a hand in crafting regulation. If Google violates our trust, there will be little we can do. Consumers will find it difficult to escape its ecosystem, and even if there is a financial toll for ethical problems, there is good reason to believe the company will be protected. Its failure is our failure.

2.  A Single Point of Vulnerability

There is a reason that nature encourages biological diversity, and it’s not just the Habsburg jaw. It’s because a diverse system is much less likely to be wiped out by a single threat. If our food chain, for example, lacked genetic diversity, we would risk starvation from a single blight. (See the Great Famine in Ireland.) If our economic, social, and personal lives are intertwined with a single company, we face a similar threat. No doubt the bigger they are, the more robust their security and the more established their corporate firewalls. (I hope so, anyway.) But given Google’s involvement in every sector of our lives, a major mistake, or a single successful attack, could be utterly disastrous. Maybe this Titanic won’t sink, but will we bet everything on it?

3.  Power over the People

Google might abide by privacy regulations, but the fact is that these regulations are largely crafted with a poor understanding of the value of privacy. The main danger of our information being held by a government or private corporation isn’t the possibility of leaks or hacking; it’s the power it gives others to shape our lives. Google knows this, and intuitively we do too; it’s what enables Google to give us the best search results and deliver excellent products. But that power is inextricably linked with the power to manipulate users, both individually and as a group. This power increases with the scope of Google’s data collection: it grows exponentially, one imagines, with knowledge of health records, for example. It’s not that Google will sell this information to your insurance company, or even that it will become your insurance company (though don’t bet against it), but that it can influence you and your environment in ways you can’t even comprehend in order to achieve its goals. This is made all the easier because, as individuals who believe in our unassailable free will, we believe ourselves beyond such influence, even though hundreds of studies in social psychology and billions of dollars spent on advertising argue otherwise.

4.  Dominance over Norms

We are subtly shaped by the technology we adopt. This occurs in obvious ways, such as the default margins and fonts in our word processor, but it also occurs in more subtle ways, such as which emails make it to a priority inbox and which get relegated for later attention. Do we memorize phone numbers anymore? Carry cameras? Do my students talk to each other during the breaks in class, or are they looking at their phones? We know we are shaped by our devices and technological environment, but shouldn’t we worry that, more and more, our environment is shaped by a single corporation? This is one of the themes brought out in Brett Frischmann and Evan Selinger’s excellent book Re-Engineering Humanity, and though the point is somewhat subtle, it’s extremely important: the ability to shape our technologies comes with the ability to shape our norms, and the shaping of those norms isn’t driven by an abiding concern for our own deepest values. It’s driven at least in part by profit and market share. When a company like Google becomes a Leviathan, we have to ask whether that is too much power for one company to wield.

5.  Artificial Intelligence Supremacy

Though Google may not be a monopoly in any particular sector now, it is set up to be a monopoly in the future, with utter dominance of what might be mankind’s most powerful invention: artificial intelligence. Artificial intelligence thrives on data, and the more domains in which an AI trains, the more powerful it will be. Alphabet and Google aren’t looking to dominate us in StarCraft and ancient Chinese board games. They are aiming to lead the way to general artificial intelligence, and the more domains in which they gain traction, the more dominant they will be in that field. If we thought a telecommunications monopoly made Ma Bell too powerful, we had better open our eyes to the worries that will come with a single company dominating artificial intelligence. It’s not an exaggeration to say that dominance in AI could easily lead to dominance in any field, especially if a singularity-style intelligence ramp-up is a possibility. If there is such a thing as a company having too much power, that would surely be it.

These are just a few of the worries that come to mind as Google expands its reach. I don’t claim that Google should be broken up, or that we should block it from new markets. I’m not certain that the dominance of Google will be a bad thing. But I do think we need to give it some thought and recognize that old models of the dangers of monopoly might not do justice to the rise of the tech giants. Things go badly in surprising ways, and the more centralized power becomes, the more we have to lose in our next surprise.

Social Media, Democracy & Citizen Responsibility

In today’s climate of justifiable suspicion about the Googles and Facebooks of the world, it’s easy to overlook the responsibilities of the individuals using these platforms. While I’m always happy to point out the problematic nature of the data harvesting and information dissemination that these companies are built upon, I would also suggest that this does nothing to diminish our own social and moral obligation to make significant efforts to inform ourselves, resist contributing to increasing polarization, and do whatever is necessary to escape our cozy echo chambers and information bubbles.

Being a good citizen in a democracy requires more than many of us seem to think and much more than our actions often suggest. Pointing fingers, even when done in the right directions, is nowhere near enough. We need to wade through the oceans of bullshit emanating from partisan talking heads, fake-news peddlers, marketing-driven, agenda-suffused cable-news stations, and algorithmically curated newsfeeds in order to determine which actions, policies, and candidates best represent our values, as well as the values most conducive to the health of a thriving democratic society.

President Obama, in a recent interview at the Obama Foundation Summit, offered the following: “This idea of purity and you’re never compromised, and you’re always politically woke and all that stuff, you should get over that quickly. The world is messy. There are ambiguities. People who do really good stuff have flaws. People who you are fighting may love their kids and share certain things with you.”

The point, I take it, is not to disregard the mistreatment of marginalized groups but to do something beyond mere posturing and attempting to appear ‘woke’.

Too many of us today seem to believe that it’s enough to call out the many who err or those we may simply disagree with. “Then I can sit and feel pretty good about myself,” said Obama, “because, ‘man, you see how woke I was? I called you out.’ That’s not activism. That’s not bringing about change. If all you’re doing is casting stones, you’re probably not going to get that far. That’s easy to do.”

And while I’m quick to agree that platforms like Twitter and Facebook lend themselves to this practice of racing to be the first to spot and call out injustice or ignorant speech, we still need to recognize when we’re being lulled into an ineffectual gotcha game of virtue signaling that, though it may provide fleeting feelings of superiority, produces very little in the way of lasting change or dialogue.

---

The speed with which Facebook and Google have come to play a central role in the everyday lives of so many makes it easy to overlook how recent these companies are. Nonetheless, their effects are undeniable. As we shared baby photos and searched for information on anything that might spark our curiosity, they’ve been aggregating our offerings and feeding us what they know will keep us coming back.

None of us likes to be confronted with the possibility that we’re alone, that our beliefs might be false, or that our deeply held values are ultimately misguided. So social media curates our experience to provide us with the validation we so often seek. How better to do this than to gift us our own beliefs and values through the words and stories of others? This keeps us clicking and keeps the advertising dollars pouring in for the companies involved. Just like the angry incel savoring the hateful rantings of Donald Trump, we all feel the cozy pull of having our own views echoed back to us.

But, of course, none of this provides anything by way of truth or understanding. And more to the point at issue, none of this is conducive to an open-minded population willing to do the work required to breathe new life into an ailing democracy teetering on the precipice of unbridgeable polarization. While Aristotle, in the world’s first democracy, aptly said (and I’m paraphrasing) that it’s the mark of an educated mind to be able to entertain a thought without accepting it, social media has given us the means of reinforcing our own thoughts without subjecting them to the slightest scrutiny. In fact, one might find these two distinct ideas to be fitting bookends for the nearly 2,500-year run of democracy.

While this characterization of things may be a bit hyperbolic, the existence of problematic echo chambers and curated tunnel vision is quite real. Fox News acolytes dig in their heels while liberals roll their eyes, and each side drifts further away from the possibility of honestly engaging with the views of the other. (*I refuse to equate the so-called ‘extremes’ on the left with those on the right. There’s a clear moral and epistemic difference between an oblivious (or worse) refusal to acknowledge, for example, the current resurgence of xenophobia and white supremacy, and the desire for health care for all or a basic income.)

The online social media environment, with its intrusive notifications and conduciveness to mindless scrolling and clicking, falls short of providing an optimal arena for informed evaluation and close examination of information. It’s for this reason that I believe we need to slow down our online experience. So many of us ‘grown-ups’ impose limits on our children’s technology usage, but do so while staring into the void of facile stumpers and bottomless distraction. Maybe a simple break would do us well. Forget that. A break would do us well. Many of us need to take a goddamn walk…without the phones. Look someone in the eye. It might turn out that the ‘idiot Trump supporter’ or the ‘snowflake Socialist’ is just an ordinary, imperfect human like yourself (*hate-filled racist, nationalist misogynists aside – there is nothing worth engaging with in such cases).

Moreover, in these days where our every move is harvested and aggregated, and where massive data companies commodify our very lives, it’s crucial that we recognize all of this while avoiding a victim’s mentality. We have an obligation to inform ourselves, evaluate our beliefs, reevaluate when new information arrives, then incorporate or discard where appropriate.

Navigating the world of ideas and agendas has become far more difficult thanks to social media, the intentions of the all-pervasive corporate giants, the sheer quantity of information (which leads to more skimming than careful consumption), the ever-lurking pull of fatigue-based complacency, and politically motivated fake news, among countless other factors. But, one way or another, we need to adapt if we’re going to have any hope of preserving democracy. Otherwise, we’re likely to revert to a power-dominated state of nature in which the only difference is that this time around it was ushered in by technology.

Peeping Bots vs. Peeping Toms

Why do we care more about violations of privacy by conscious agents?

Most of us know that we have become data production engines, radiating our locations, interests, and associations for the benefit of others. A number of us are deeply concerned about that fact. But it seems that people really get outraged when they find out that humans are actually listening to Alexa inputs or that Facebook employees are scoping out private postings. Why is that? We can call it the Peeping Tom effect: we have a visceral reaction to our private lives being observed by living, breathing agents that we lack when the same information is collected by computers. Perhaps this seems too obvious to remark upon, but it deserves some serious scrutiny. One hypothesis, which I advance in a forthcoming paper with my colleague Ken Daley, is that we are likely hard-wired–perhaps evolutionarily–to have alarm bells ring when we think about human agents in our “space”, but that we have no such inborn reactions to the impersonal data collectors we have developed in the past fifty years. The fact that alarm bells ring in one instance and not another is no reason to ignore the silent threat. There’s a good case to be made that the threat of corporate knowledge–even if it doesn’t involve knowledge by a human–is quite a bit more dangerous than the threats we are more inclined to vilify.

Two features of human knowers, as opposed to machine knowers, stand out: humans are conscious beings, and they have personal opinions, plans, and intentions. It’s hard to swallow the idea that corporations or computer networks are themselves conscious, and it’s therefore hard to think of them as having opinions, plans, and intentions. I’m inclined to grant the former (though it’s an interesting thought experiment to imagine that computer networks were, unbeknownst to us, conscious), and for the sake of argument I’ll grant that corporations don’t have opinions, plans, or intentions (though we certainly talk as if they do). It’s worth asking what extra threat these features of humans might pose.

It’s admittedly unappealing to think of a Facebook nerd becoming engrossed in the saga of my personal life, but what harm does it cause? Assuming he (pardon the assumption, but I can’t imagine it not being a he) doesn’t go rogue, stake me out, and threaten me or my loved ones, why does it matter that he knows that information? From one perspective, assuming he’s enjoying himself, that might even be thought a good thing! If the same information is simply sitting in a computer, no one is enjoying themselves, and isn’t more enjoyment better than less? Perhaps we think the privacy violation is impermissible, and so the enjoyment doesn’t even start to outweigh that harm. But we’re not really talking about whether or not it’s permissible to violate privacy–presumably it’s just as impermissible if my privacy is violated and the illicit information is stored in a network. We’re asking which situation is worse: a violation of privacy with enjoyment by a third person, or a violation of privacy without it. I share the feeling that the former is worse, but I’d like to have something to say in defense of that feeling. Perhaps it’s the fear that the human will go rogue and the computer can’t. But my feeling doesn’t go away when I imagine the human is spending his life in prison, nor does it go away when I realize that computers can go rogue as well, causing me all sorts of harm.

There’s lots more to say and think about here. But for now let’s just let the question simmer: Are violations of privacy more harmful if they involve knowledge by conscious agents, and if so, why?