The Norms that Undermine Online Privacy

When it comes to dominant online platforms like Google, Amazon and Facebook, the phrase ‘privacy norms’ may bring to mind the default settings one might tweak if one cares at all about privacy. But there are other ‘norms’ lurking in the background that provide the foundation for the riches these companies have accumulated.

These norms, developed and perpetuated by the corporate behemoths in question, laid the groundwork for their present-day dominance by allowing them to harvest massive quantities of our data. These companies have become such a ubiquitous presence in our lives that it’s easy to overlook the background assumptions behind the paradigm shift that made the phrase ‘data is the new oil’ so fitting.

While introducing the topic of privacy and big data to my Tech Ethics students this semester, I mentioned Jaron Lanier’s idea that we ought to be paid for our data. If there’s anything problematic about sweatshop workers being paid next to nothing for their role in producing goods that are then sold at exorbitant prices (to mention just one category of worker exploitation), then there’s certainly something to Lanier’s claim. These companies are selling us. We generate the data, data that simply would not exist without us. And while one might object that the same information could be generated by any other platform user, these companies profit from the fact that we generated it. Moreover, if anyone else could generate the same data, then why do they need ours? Because, obviously, more data is better than less.

My students’ reactions ranged from chuckles to furrowed brows. And I don’t blame them. I suspect these reactions stem from the simple fact that we don’t (not that we couldn’t or shouldn’t) get paid for generating this data, from the fact that they came of age at a time when these companies have always dominated, and from the fact that we have been socialized to acquiesce to certain default assumptions as though they were handed down from a higher power. No matter how independently plausible Lanier’s idea may be, it’s likely to seem bizarre to most. Once again, there’s a reason for this.

The default assumption is that we have few, if any, rights when it comes to our online data, as though any such rights evaporate when we touch our phones or open our laptops. We forfeit these rights (so goes the assumption), even if we do so unknowingly (or, as I would contend, without consenting), every time we click the tiny, psychologically insignificant ‘I agree to the terms of service’ box that allows us to get to the next page of the countless sites that are central to our everyday lives (e.g., paying credit card bills, depositing checks, renting a car, booking a flight, sending money to a friend, ordering food online, paying our mortgage, signing up for newspaper subscriptions). Because it’s the ‘default’, we accept it without question. Yet this assumption is outrageous when considered with any care.

If I went to my local Target and, when given the credit card receipt, signed next to an analogous, analog ‘terms of service’, would this really give Target the right to have someone follow and record me for the rest of my life? What if the cashier somehow affixed a tracking device to me that would accomplish the same thing? What’s the difference between tracking my activities with an algorithm and doing so with a camera or a microphone? While the latter is clearly wrong, we somehow accept the former as an unassailable fact of online life.

It’s as though these companies are driven by impersonal algorithms developed for nothing more than extracting data and the profits it brings (*incredulous look implied), all the while treating the originators of that data – flesh and blood human beings – as mere means to those ends. And the scope of these aims seems to have no boundaries. According to a March 2019 article in The Atlantic:

“Amazon has filed a patent for a voice assistant that would recommend cold and flu medicine if it overhears you coughing.”

And…

“The health-care start-up Kinsa drew sharp criticism from privacy experts last year for selling illness data. Kinsa makes a smart thermometer that takes a user’s temperature, then instantly uploads it to a server along with gender and location information.

Kinsa used this data to create real-time maps of where people were getting sick and to refine its flu predictions for the season, with accuracy levels matching those of the Centers for Disease Control and Prevention. But it also sold the information to Clorox, which beefed up its marketing of disinfecting wipes and similar products in zip codes where thermometers reported a spike in fevers.”[1]

When one considers these examples and others, such as iRobot’s plan to use its Roomba vacuums to map the layout of users’ homes and sell the data to the Googles and Amazons of the world, it becomes difficult to imagine anything being off limits.

Moreover, I’m writing this within weeks of Google’s announcing its acquisition of Fitbit (maker of one of many wearable fitness and movement trackers) and just days after The Wall Street Journal reported that Google, through a partnership with Ascension (a Catholic network of hospitals and clinics spanning 21 states), had harvested medical records from over 2,500 hospitals and clinics in a venture codenamed ‘Project Nightingale’. The data includes lab results, records of doctor visits, prescriptions, hospitalization records, and diagnoses, as well as patient names and dates of birth. All of this occurred without informing the patients whose records were harvested or the doctors who provided the services. Within hours of the Journal’s story breaking, Google and Ascension made their partnership public.

After a handful of senators on both sides of the aisle publicly condemned the arrangement, Google released a statement that included the following: “We believe Google’s work with Ascension adheres to industry-wide regulations (including HIPAA) regarding patient data, and comes with strict guidance on data privacy, security, and usage.” Meanwhile, Ascension stated that “all work related to Ascension’s engagement with Google is HIPAA compliant and underpinned by a robust data security and protection effort and adherence to Ascension’s strict requirements for data handling.”

It turns out that they just might be right. Under HIPAA (the Health Insurance Portability and Accountability Act of 1996), health care providers can disclose health records to third parties if the goal of doing so is to improve the quality of the health care provided. If this is correct, then it highlights a clear case where what is legal and what ought to be legal part ways. If we care at all about privacy, especially given the sensitive information that will be extractable from us as technology continues to advance, then we need to hold these companies to ethical standards, not just legal benchmarks, or we risk losing far more control over our own lives.

According to a whistleblower who claims to have worked on the project: “Patients haven’t been told how Ascension is using their data and have not consented to their data being transferred to the cloud or being used by Google. At the very least patients should be told and be able to opt in or opt out”.[2]

With that said, Google also announced that “under this arrangement, Ascension’s data cannot be used for any other purpose than for providing these services we’re offering under the agreement, and patient data cannot and will not be combined with any Google consumer data”.

This might sound familiar. Before 2012, when Google announced that it would merge profile information across its many platforms (including Google Search, Gmail, YouTube and Android) without allowing users to opt out, it had said it would not do so. In addition, a recent Wall Street Journal investigation revealed that Google does, in fact, curate its search results, despite stating on its blog that “we do not use human curation to collect or arrange the results on a page.”[3] These interventions include algorithm tweaks that favor big businesses like Amazon, Facebook and eBay over smaller businesses, as well as “blacklists to remove certain sites or prevent others from surfacing in certain types of results”.[4] Google employs contractors to evaluate its search rankings and, according to some of these contractors, they are informed of ‘the correct ranking of results’. The company then uses the contractors’ evaluations to adjust its algorithms. The overarching point is that Google has often said that it doesn’t or wouldn’t do things that it in fact does or eventually did. In light of this, one can be forgiven for being skeptical of its claim that “patient data cannot and will not be combined with any Google consumer data”.

It’s worth stressing that, in light of recent, mind-boggling advances in artificial intelligence and data-extraction algorithms, it may be impossible to conceive of the power of future technologies. For that reason, the current importance of our privacy rights over our online data cannot be overstated.


[1] https://www.theatlantic.com/technology/archive/2019/03/flu-google-kinsa-sick-thermometer-smart/584077/

[2] https://www.theguardian.com/technology/2019/nov/12/google-medical-data-project-nightingale-secret-transfer-us-health-information

[3] https://www.blog.google/products/search/how-we-keep-google-search-relevant-and-useful/

[4] https://www.wsj.com/articles/how-google-interferes-with-its-search-algorithms-and-changes-your-results-11573823753?mod=hp_lead_pos7

Google’s Wrist is Slapped for Violating Child Privacy

OK, so maybe there’s nothing problematic about harvesting data from adults so that advertisers can microtarget us with ads to optimize their chances of persuading us to buy. I doubt it, but regardless…

There’s certainly something problematic about harvesting data on children.

Two issues:

First, when it comes to children, talk about consent is silly.

Second, and more important, the persistent harvesting of a child’s data, together with the ways in which such data can now be algorithmically analyzed and the unknown ways in which it may be used and disseminated in the future, raises the possibility that such a child will become entirely transparent to data brokers, advertisers, the government, police departments, and anyone else who might purchase that data.

Of course, one might respond that the child could grow up to be the greatest criminal mastermind of their time, making such transparency worthwhile, but this would have to be the case for every such child for that response to get off the ground.