Facebook’s Distorting Lens: The Danger of Social Deference

               Recently I got over my revulsion for Facebook and once again activated an account.  I did it in part because, though I dislike the platform for some obvious reasons, I feel it’s important to engage with something that is so monumentally influential.  It’s important to know firsthand just what the atmosphere is like, what sorts of effects it has on its users, and what sorts of changes happen in the environment and what effects they seem to have.  I’m quite familiar with the way in which it creates echo chambers and epistemic bubbles, and with the draining effect it tends to have on my psyche, but in my recent interactions I have been most upset by what seems to be a lack of autonomy in the social realm.  I feel shuffled from post to post without knowing why and without having any sense that I can control what and who I see.  It’s all the more distressing that on Facebook my social interactions are being governed by unknown algorithms.  I am troubled by what seems to be an integral part of Facebook, something I’ll call social deference.

               It’s impossible to live in the modern world without deferring to others about a good deal of things.  We simply don’t have the ability to know firsthand and from the ground up all the information we need to know.  The most obvious form of deference is deference about facts.  When I accept someone’s word on something, I’m taking on what they say as my belief.  We defer to doctors about the safety of medications and treatments, to engineers about the safety of our planes and bridges, and to news organizations about the events of the day.  This sort of thing is both commonplace and necessary: it would be difficult to get out of bed without trusting other people to sort some of our facts for us.

               There are, on the other hand, facts about which it seems peculiar to defer.  Several years ago, I proposed the following thought experiment.  Suppose that Google offered an app called Google Morals.  You could enter in any question about morality—should I be a vegetarian? Is it permissible to lie to achieve one’s ends?  Is abortion permissible? —and Google Morals would give you the answer.  Set aside for the moment that it would be unclear just how the app would work and how it would have access to the moral truths.  Suppose we had reason to believe it did.  Nevertheless, I maintain, there is something peculiar about deferring to Google Morals, something that isn’t peculiar about deferring to Google Maps in order to learn how to get from Springfield to Capital City.  There is a way in which one is shirking one’s responsibility as a person when one simply takes Google’s word when it comes to moral matters.

               A good part of the problem with moral deference is that we don’t have access to why Google provides the answers it does.  It wouldn’t be a problem if we could “see the work” and understand why Google provides the verdicts it does.  In that case it’s likely we wouldn’t simply be deferring—we wouldn’t be accepting Google’s verdict simply because of Google’s output, we would be altering our beliefs because we understood the reasons why Google said what it said.  Understanding why something is true, being able to articulate the ins and outs, is important when it comes to some of our beliefs—namely the moral beliefs that make us who we are.

               Ok, so suppose this is right; what does this have to do with Facebook?  It strikes me that Facebook encourages a sort of deference as well that is likely as problematic as moral deference.  Call it social deference.

               Suppose that you systematically deferred to others about who was a good friend.  Instead of evaluating someone based on their merits, based on how they treated you, you simply asked a friend expert, a “friendspert,” whether someone was a good friend.  It’s not just that the friendspert recommends you check someone out because they might be a good friend, but that you adopt the belief that the person is your friend based on the friendspert’s advice and organize your life accordingly.  This is a sort of social deference—one is allowing one’s social circle to be determined by the say-so of another.  In some sense one is shirking one’s duties as a friend and is offloading important work onto others that really should be done by each of us—evaluating people based on their perceived merits and demerits and befriending them based on how they treat you.  There would be something wrong if someone asked “why are you my friend?” and your answer was “because the friendspert told me to be.”  Acting that way depreciates friendship to the point that it’s not clear one really has a friend at all.

               The friendspert is an extreme case, and though it’s tempting to say that Facebook, with its friend suggestions, is acting like a friendspert, that’s probably not quite right.  There is perhaps a little truth to this, but it almost certainly overestimates what is really going on when we “friend” someone on Facebook.  It’s not as though when we click that blue button the person actually becomes our friend in any robust sense, and it’s not as though we shut down our independent evaluation of that person and just defer to Facebook’s algorithm.  We form beliefs about the person and make attachments based on what we see on our feed or how we interact with them.

               There is, though, a type of social deference involved in Facebook that might be even more insidious.  We are deferring in this case to an algorithm that affects how our friends and social circles appear to us.  Who we see and which posts we see are determined by a system that is unknown to us.  To the degree that we let our attachments be shaped by those algorithms, we are guilty of social deference.  We are allowing our connections to other people to be shaped by decisions and frameworks that are not our own.  In doing so we are ceding our social autonomy and allowing one of the most essential parts of ourselves—the social part—to be molded by a third party.

               Most of us know, at least after adolescence, that we should not judge people simply by what others report about them.  Even if those reports are accurate, the intermediary in this case is apt to distort our picture of other people, thereby shaping our judgments about them.  It is important, indeed it’s our responsibility, to judge people as much as we can without intermediaries shaping our perception of them.  The problem isn’t just that falsehoods and misrepresentations enter the mix.  Even supposing they don’t, it is our responsibility to form our interpersonal relationships—especially our friendships—ourselves.  Forming and nourishing friendships requires a subtle navigation between revealing too much about oneself and not enough, foregrounding some features and not others.  This isn’t dishonest, it’s a recognition that not every fact is relevant to every relationship, and sometimes the order and emphasis of what one reveals about oneself says as much about oneself as the information revealed.  (If I start every conversation announcing my religion or political affiliation, that fact will tell you as much about me as whatever you learn about my faith or politics.)

When we use Facebook, we are ultimately introducing an intermediary between us and our social world and placing trust in it to provide an accurate picture of that world.  In fact, what we get is a distorting lens that highlights some parts of our friends at the cost of others.  Importantly, the algorithm that determines which posts we see is not interested in generating or preserving true friendship, nor is it interested in showing us the truth about people.  It is interested in what keeps us clicking, and as such it tends to show us the most provocative parts of our social sphere.  People’s most outrageous opinions are foregrounded, and the features that are relevant to true friendship are pushed to the background.

               We needn’t rest with abstractions to see the point.  How many of us have seen the political posts of our family members and changed forever how we see them?  How many of us have seen the posts of our friends only to resent them for their self-righteousness or for what might appear to be their self-obsession?  Our perspective on our social world is being shaped by hidden algorithms designed to keep users on the site, not by anything that matters to friendship.  This is a kind of social deference, and by engaging in it we are handing over responsibility for our relationships to a source we all know is untrustworthy.  The result is a weakening and cheapening of our relationships, but we can’t just blame Facebook.  It’s our decision to give a third party the power to distort and mediate our relationships, and to that degree we deserve a large share of the blame for abandoning our responsibilities to our friends and our social sphere.

Why We Shouldn’t Be Allowed to Waive our Privacy Rights

There is little doubt that privacy clauses and terms of service agreements don’t support the moral burden they are meant to carry.  All too often they are designed to provide political cover rather than to generate informed consent.   Not only does no one read them, but even if someone did and had the attention span and intelligence to follow them, it’s doubtful that they would find all the policies hidden in documents several clicks deep. Interesting fact: If the average American actually read all the policies they encountered, they would lose 76 full workdays in the process. The cost to productivity if all Americans were so conscientious would approach $1 trillion.
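To see how an estimate like that is put together, here is a rough back-of-the-envelope sketch in Python.  The inputs are illustrative round numbers of my own choosing, not the figures from the underlying study, and the conclusions scale directly with them.

    # Back-of-the-envelope cost of actually reading privacy policies.
    # All inputs are illustrative assumptions, not figures from any study.
    POLICIES_PER_YEAR = 1500         # assumed policies an average user encounters annually
    MINUTES_PER_POLICY = 25          # assumed time to read (not skim) one policy
    WORKDAY_HOURS = 8
    US_INTERNET_USERS = 200_000_000  # assumed number of U.S. adult internet users
    VALUE_PER_HOUR = 8.0             # assumed dollar value of an hour of attention

    hours_per_person = POLICIES_PER_YEAR * MINUTES_PER_POLICY / 60
    workdays_per_person = hours_per_person / WORKDAY_HOURS
    national_cost = hours_per_person * VALUE_PER_HOUR * US_INTERNET_USERS

    print(f"Hours per person per year: {hours_per_person:,.0f}")        # ~625
    print(f"Workdays per person per year: {workdays_per_person:,.0f}")  # ~78
    print(f"Implied national cost: ${national_cost/1e9:,.0f} billion")  # ~$1,000 billion

With these inputs the sketch lands in the same ballpark as the figures cited above; the point is less the exact numbers than how quickly the time adds up.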

There is no arguing it, really: clicking on an AGREE button no more means that you agree with the content of a terms of service agreement than politely nodding your head during a mumbled conversation in a noisy bar means you are agreeing with the opinion you aren’t really hearing.

              This is a big problem with the way we are doing things, but there is another, more fundamental issue that few have recognized: our privacy rights aren’t ours to waive. 

              That sounds paradoxical, but there are other rights we intuitively can’t waive—I cannot waive my right to self-determination by selling myself into bondage, for example, and I can’t waive my right to my body by selling myself to a cannibal for Thanksgiving Dinner.  It’s not plausible, though, that privacy violations inflict such extreme harms, so those probably aren’t the best places to look for analogues. 

A closer analogy to privacy rights is voting rights.  I cannot waive my right to vote.  I can choose not to exercise it, but I cannot waive it.  I cannot exchange my right to vote for internet access or for a cushy job. I certainly can’t transfer my right to you, no matter how much you want to pay me. It’s my right, but that doesn’t mean I can give it up. That’s because my right to vote doesn’t only protect me—it protects my fellow citizens and the institution of democracy we collectively cherish. 

If I have the right to sell my vote, it endangers the entire democratic franchise.  It is likely to make your vote less valuable in comparison to someone else’s—plug in your favorite malevolent billionaire here for a scenario in which electoral outcomes are determined by the mass purchase of voting rights.  We cannot waive our right to vote because that right doesn’t primarily prevent a harm to us as individuals; it prevents a harm to an institution that undergirds the rights of others.

              I suggest privacy rights are like voting rights in this respect.  While we can suffer individual harm if someone knows our political preferences or gains access to the subtle triggers that sway us for or against a product or a candidate, the more important harm comes with the threat to the valuable institutions we collectively comprise. 

If I have the ability to waive my privacy rights, so does everyone else.  If we all waive those rights, we permit the collection of data that enables significant control over the electorate as a whole.  Given enough information about the thoughts and behaviors of voters, propaganda and advertising can be extremely effective in swaying enough attitudes to change the outcome of an election. Though votes aren’t being bought, the result is similar: each individual vote is now outweighed by the statistically certain outcome of a data-informed campaign of voter manipulation.

              If this is right, we’ve largely been looking in the wrong direction both for the harms of privacy rights violations and for the harms involved in our wanton disregard of those rights.  In an age where data-analytics can discern surprising connections between different elements of human personality and behavior, our data is not our own.  By giving up our own data, we are essentially informing on those like us and enabling their manipulation.  We shouldn’t do that just because we have an itch to play Clash of Kings.

So where does this leave us?  I like to play Clash of Kings as much as the next guy and frankly, when I think of it in terms of the harms likely to come to me, Clash of Kings can win pretty easily.  When I realize that my own visceral reaction to privacy harms really isn’t to the point, I’m a little less cavalier about parting with my data.  The truth is, though, that this is a place for governmental regulation, just as it is in the case of voting rights.  In today’s political climate I won’t hold my breath, but the way we all think of these issues needs to undergo a shift away from our worries about our own individual private lives.  As important as each of us is as an individual, some of the most worrisome harms come from the effect on the groups to which we belong.  We need to shift our focus toward the harm these privacy violations cause all of us by enabling the manipulation of the public and the vitiation of our democracy.

Originally appeared in The Hill

Five Problems with Google’s Expanding Reach

               This morning, within an hour of my first cup of coffee, I heard an ad for Google’s telephone services, and read news of Google’s foray into health care data processing and its plans to enter the banking sector.  I listened to the news on my Android phone, and, not one to buck a trend, I Googled “Google” to see what other news there was about the tech giant.  (Fitbit… Stadia… Quantum Supremacy.)  All this before I checked my email, some of which will come to me through Gmail, and looked at my schedule for the day, courtesy of Google Calendar.  You get the point:  Google is everywhere, and it is on its way to doing just about everything. 

               Shouldn’t we be concerned about a company that is so all encompassing?  It’s not that I think Google is an evil company or that it is bent on a dystopian project of world domination.  Perhaps I’m naïve, but from what I can tell, those who run the various branches  of Google and its parent company Alphabet—which is, really, just Google under a different set of, um, letters—are well intentioned, idealistic people who believe they are part of an unprecedented force for good.  They have some good arguments on their side: their products have made so many important things so much easier for so many people, at a cost to the consumer of approximately zero dollars.  Google appears to be the leader of the pack in artificial intelligence, which will likely lead to incredible developments in medicine, education, communication, engineering, and, well, everything. 

               Yet I think we should be concerned, even if we grant that Google has done everything legally and in accordance with privacy regulations, and even though it might be the case that within any particular industry Google doesn’t constitute a monopoly.  Here are five reasons for concern:

1.   Too Big to Fail and Too Powerful to Counter

It was supposed to be a lesson from the Great Recession: when companies become too integral to the workings of the economy, the possibility of failure becomes remote, not just in the minds of the company executives but in actuality.  If a single company becomes too essential, it can virtually be guaranteed that it will be propped up in case of major failure.  Google will not be Lehman Brothers.  At the moment, it seems extremely unlikely that it will face that sort of disaster, but if a major series of mistakes threatens Google, the U.S. Government will almost certainly step in.  This means that Google doesn’t face one of the biggest checks on private corporations—the possibility of failure.  The worry here is not so much that this will lead to financial recklessness, though that too is possible.  The worry is that it lacks a major check on ethical recklessness.  Set aside the fact that its lobbying power is astonishing, and that Google executives or ex-employees wind up having a hand in crafting regulation.  If Google violates our trust there will be little we can do.  Consumers will find it difficult to escape its ecosystem, and even if there is a financial toll for ethical problems, there is good reason to believe the company will be protected.  Its failure is our failure.

2.  A Single Point of Vulnerability

There is a reason that nature encourages biological diversity, and it’s not just because of the Hapsburg jaw.  It’s because a diverse system is much less likely to be wiped out by a single threat.  If our food chain, for example, lacked genetic diversity, we would risk starvation due to a single blight.  (See the Great Famine of Ireland.)  If our economic, social and personal lives are intertwined with a single company, we face a similar threat.  No doubt the bigger they are, the more robust their security and the more established their corporate firewalls.  (I hope so, anyway.)  But given their involvement in every sector of our lives, a major mistake at Google, or a single successful attack, could be utterly disastrous.  Maybe this Titanic won’t sink, but will we bet everything on it?

3.  Power over the People

Google might abide by privacy regulations, but the fact is that these regulations are largely crafted with a poor understanding of the value of privacy.  The main danger of our information being held by a government or private corporation isn’t the possibility of leaks or hacking, it’s the power it gives others to shape our lives.  Google knows this, and intuitively we do too; it’s what enables Google to give us the best search results and deliver excellent products.  But that power is inextricably linked with the power to manipulate users, both individually and as a group.  This power increases with the scope of Google’s data collection: it grows exponentially, one imagines, with knowledge of health records, for example.  It’s not that Google will sell this information to your insurance company, or even  that it will become your insurance company (though don’t bet against it) but that it can influence you and your environment in ways you can’t even comprehend in order to achieve its goals.  This is made all the easier because as individuals who believe in their unassailable free will, we believe ourselves beyond such influence, even though hundreds of studies in social psychology and billions of dollars spent on advertising argue otherwise.

4.  Dominance over Norms

We are subtly shaped by the technology we adopt.  This occurs in obvious ways, such as the default margins and fonts in our word processing client, but it also occurs in more subtle ways, such as which emails make it to a priority inbox and which get relegated for later attention.  Do we memorize phone numbers anymore?  Carry cameras?  Do my students talk to each other during the breaks in class, or are they looking at their phones?  We know we are shaped by our devices and technological environment, but shouldn’t we worry about the fact that more and more our environment is shaped by a single corporation?  This is one of the themes brought out in Brett Frischmann’s and Evan Selinger’s excellent book Re-engineering Humanity, and though the point is somewhat subtle, it’s extremely important: the ability to shape our technologies comes with the ability to shape our norms, and the shaping of those norms isn’t driven by an abiding concern for our own deepest values.  It’s driven at least in part by profit and market share.  When a company like Google becomes a Leviathan, we have to ask whether that is too much power for one company to wield.

5.  Artificial Intelligence Supremacy

Though Google may not be a monopoly in any particular sector now, it is set up to be a monopoly in the future, with utter dominance over what might be mankind’s most powerful invention: artificial intelligence.  Artificial intelligence thrives on data, and the more domains in which an AI trains, the more powerful it will be.  Alphabet and Google aren’t looking to dominate us in Starcraft and ancient Chinese board games.  They are aiming at leading the way to general artificial intelligence, and the more domains in which they gain traction, the more dominant they will be in that field.  If we thought a telecommunications monopoly made Ma Bell too powerful, we had better open our eyes to the worries that will come with a single company dominating artificial intelligence.  It’s not an exaggeration to say that dominance in AI could easily lead to dominance in any field, especially if a singularity-style intelligence ramp-up is a possibility.  If there is such a thing as a company having too much power, that would surely be it.

These are just a few of the worries that come to mind as Google expands its reach.  I don’t claim that Google should be broken up, or that we should block them from new markets.  I’m not certain that the dominance of Google will be a bad thing.  But I do think we need to give it some thought and recognize that old models of the dangers of the monopoly might not do justice to the rise of the tech giants.  Things go badly in surprising ways, but the more centralized power becomes the more we have to lose in our next surprise.

Peeping Bots vs. Peeping Toms

Why do we care more about violations of privacy by conscious agents?

Most of us know that we have become data production engines, radiating our locations, interests and associations for the benefit of others. A number of us are deeply concerned about that fact. But it seems that people really get outraged when they find out that humans are actually listening to Alexa inputs or that Facebook employees are scoping out private postings. Why is that? We can call it the Peeping Tom effect: we have a visceral reaction to our private lives being observed by living, breathing agents that we lack when the same information is collected by computers. Perhaps this seems too obvious to remark upon, but it deserves some serious scrutiny. One hypothesis, which I advance in a forthcoming paper with my colleague Ken Daley, is that we are likely hard-wired–perhaps evolutionarily–to have alarm bells ring when we think about human agents in our “space” but that we have no such inborn reactions to the impersonal data collectors we have developed in the past fifty years. The fact that alarm bells ring in one instance and not another is not a reason to ignore the silent threat. There’s a good case to be made that the threat of corporate knowledge–even if it doesn’t involve knowledge by a human–is quite a bit more dangerous than the threats we are more inclined to vilify.

Two features of human versus machine knowers stand out. Humans are conscious beings, and they have personal opinions, plans and intentions. It’s hard to swallow the idea that corporations or computer networks are themselves conscious, and it’s therefore hard to think of them as having opinions, plans and intentions. I’m inclined to grant the former–though it’s an interesting thought experiment to imagine that computer networks were, unbeknownst to us, conscious–and for the sake of argument I’ll grant that corporations don’t have opinions, plans or intentions (though we certainly talk as if they do). It’s worth asking what extra threat these features of humans might pose.

It’s admittedly unappealing to think of a Facebook nerd becoming engrossed in the saga of my personal life, but what harm does it cause? Assuming he (pardon the assumption, but I can’t imagine it not being a he) doesn’t go rogue and stake me out and threaten me or my loved ones, why does it matter that he knows that information? From one perspective, assuming he’s enjoying himself, that might even be thought to be a good thing! If the same information is simply in a computer, no one is enjoying themselves, and isn’t more enjoyment better than less? Perhaps we think the privacy violation is impermissible and so the enjoyment doesn’t even start to outweigh that harm. But we’re not really talking about whether or not it’s permissible to violate privacy–presumably it’s just as impermissible if my privacy is violated and the illicit information is stored in a network. We’re asking which is the worse situation–a violation of privacy with enjoyment by a third person or a violation of privacy without it. I share the feeling that the former is worse, but I’d like to have something to say in defense of that feeling. Perhaps it’s the fear that the human will go rogue and the computer can’t. But my feeling doesn’t go away when I imagine the human is spending a life in prison, nor does it go away when I realize that computers can go rogue as well, causing me all sorts of harm.

There’s lots more to say and think about here. But for now let’s just let the question simmer: Are violations of privacy more harmful if they involve knowledge by conscious agents, and if so, why?

Facebook’s Free-Speech Charade

Seeing through Zuckerberg’s Talk at Georgetown University

               Mark Zuckerberg spoke in defense of Facebook this week at Georgetown, and then later in the week faced pushback from Democrats in Congress.  The complaint is that by maintaining a hands-off policy with respect to political misinformation, Facebook is setting up our democracy to be hijacked once again by those who would rather confuse than inform.  His defense is to wrap himself and Facebook in the mantle of liberty and trumpet the virtues of free speech.  It’s natural to view Zuckerberg’s position as self-serving.  He certainly flailed under the questioning of Congresswomen Ocasio-Cortez, Waters and Porter.  (It’s also true that his obvious disorientation under scrutiny gives the lie to the idea that he had anything to do with writing the Georgetown speech.  It resembled nothing so much as a student talking to a professor about a paper he’s just plagiarized.)  But set these things aside.  What about the argument itself?  It’s worth a close look, because as is usually the case, the situation isn’t simple.  A lot of what Zuckerberg says is right, and the fact that it’s him saying it—self-serving though he is—doesn’t make him any less right.  The problem is that it’s difficult to get clear on what he’s really arguing for.  It’s not at all obvious that what he’s right about—namely the value of free speech—supports what he’s really defending—namely, Facebook’s permissive approach to political information and advertising. 

               Here’s the gist of Zuckerberg’s argument.  (I encourage you to read the whole thing here.)

               Free, uncensored speech is a necessity in a democratic society, and Facebook is a platform for that speech.  But, Zuckerberg admits, lines have to be drawn. It’s permissible, even desirable, to restrict speech that “puts people in danger” as well as things like pornography which “would make people uncomfortable using our platforms.”  It’s tough to draw the line. In general, he seems to think, it’s best that Facebook avoid regulating the speech on its platform.  There is a need to protect ourselves from the sort of manipulation Russian hackers perpetrated in 2016, but this is best done by requiring user verification:  they now “require you to provide a government ID and prove your location if you want to run political ads or run a large page.”  While Facebook works to weed out viral hoaxes, they want to avoid trying to restrict misinformation in general, which might include satire or the unintentionally wrong views many of us hold.  In the end, we should be careful, because Zuckerberg doesn’t “think most people want to live in a world where you can only post things that tech companies judge to be 100% true.” Facebook allows speech by political figures, even when it is wrong, and doesn’t fact check political ads. “I don’t think it’s right for a private company to censor politicians or the news in a democracy,” Zuckerberg says.  Again, there are difficult lines to be drawn.  If they were to ban political ads, should they also ban ads on political issues?  Zuckerberg seems to think we face a choice: we could constrain free expression on the internet, as it’s done in China, or we can have an internet that privileges open speech.  Zuckerberg’s position is basically that Facebook has “two responsibilities: to remove content when it could cause real danger as effectively as we can, and to fight to uphold as wide a definition of freedom of expression as possible — and not allow the definition of what is considered dangerous to expand beyond what is absolutely necessary.”

               It’s obviously hard to disagree with the general notion that people should be allowed to speak their mind, and that free speech is important to democracy.  That’s the strategic brilliance of Zuckerberg’s speech.  But what exactly is Zuckerberg arguing for?  What is he arguing against?  This matters.  It’s one thing to say that people should be allowed to say and think what they want. It’s another to say that a company should accept money to promote content that is demonstrably false.  The biggest problem with Zuckerberg’s argument is that he makes statements that are true, all things being equal, when applied to society as a whole, but that don’t obviously apply to the behavior of companies like Facebook.  Facebook’s very business model is predicated upon doing things we would strenuously object to if they governed social discourse as a whole.  Would we want democratic discourse to be governed by proprietary algorithms that bring certain voices to our attention and push others to the background?  Would we want democratic discourse to be engineered to addict us to having that discourse in a particular place, for the benefit of a particular company?  Would we want to have to verify our identity and location in order to speak our mind?  Would we want a government to record and store everything we say, only to turn around and market that information to advertisers?  These are all things that Facebook does, and if it’s ok for Facebook to do them, it’s because Facebook is a private business, not a government or a country.  Zuckerberg and Facebook can’t have it both ways: if they embrace the arguments for open democratic discourse, they need to hold themselves to those standards across the board.  You can’t be laissez faire while rigging the circulation of ideas behind the scenes.

               It’s one of the ironies of Zuckerberg’s speech that he almost makes the case for turning Facebook into a public utility.  His arguments are really only plausible if Facebook is such an essential platform for public discourse that restricting speech on the network would be tantamount to censoring free speech.  But if it is such an essential platform, should it really be governed by a company that isn’t democratically representative, that doesn’t answer to the public or the government, and that is driven by a profit motive?

               I don’t expect Facebook to be turned into a public utility anytime soon, and there would be some obvious drawbacks to doing so.  Given that, can’t Zuckerberg make the argument that it should be as free as it can be within the limitations of being a company with a profit motive?  That is, given Facebook is what it is, shouldn’t it avoid restricting speech?

               The fact is, we do restrict certain speech in certain places, not because it risks physical harm but because it is a danger to our democratic institutions.  You cannot stand by a voting booth and make a stump speech (or even brandish an advertisement) because doing so would threaten to corrupt the political process.  There are numerous rules about political advertising that limit what can be done—they must include disclaimers, for example, indicating whether they are affiliated with a particular campaign. Regulations like this arise in part as a response to new forms of media and the particular threats they pose.  We shouldn’t let our love of free speech blind us to the need to make sure our political processes can flourish, and that there are likely to be unique threats posed by new technologies that require considering new rules.  I don’t pretend to know what all of those are, or what the appropriate steps are, but given the potential impact, does it not make sense to err on the side of protecting our political heritage while we find our footing?

               Despite his repeated insistence that these problems are nuanced, Zuckerberg completely fails to recognize that the solutions can be as well.  No one is suggesting that Facebook police Uncle Joe’s posts to make sure he’s got his facts straight—though it’s certainly not morally wrong or anti-democratic to erect a social network that tried to do so.  (Wikipedia, it’s worth saying, has done pretty well holding user generated content to strict standards.)  And while it’s true that there are some tricky distinctions to be drawn between lying political ads, ads with scientifically inaccurate information and ads about issues with controversial truths, that doesn’t mean those distinctions shouldn’t be drawn.  There’s no clear way to draw a line between a kid who is too immature to use Facebook and one who can handle it, but Facebook manages to draw the line anyway, at the rather young age of 13.

               Zuckerberg is right that Facebook needs to tread carefully here.  It has become too influential in our political system for decisions to be made rashly.  That’s some reason to believe that it has simply become too influential period.  In the end, though, taken as a defense of Facebook, Zuckerberg’s paean to free speech is unconvincing.  Facebook’s policies now as ever are justified more by keeping people on the network than by democratic ideals, and Zuckerberg—whose own shares in the company confer super-voting rights many times greater than the typical share—knows this perfectly well.  

The Danger of Manipulated Images

Faces created by NVIDIA’s AI algorithm

A couple of days ago, probably at the suggestion of an AI, I read Sonia Klug’s Medium article AI is Changing How You See the World. The main argument of the piece is that AI-enhanced images lead us to misrepresent reality, and that this, among other things, leads us to believe in unrealistic ideals, or perhaps even to misconstrue the way the world actually is. The problem has diverse manifestations. On the one hand, you have the presence of deepfakes that might well persuade us that a public figure said or did something she didn’t, and on the other you have filtered and enhanced pictures that lead us to believe our own blotchy and bumpy faces are decidedly subpar. I’m inclined to agree with the main points in the article, but I’m particularly interested in another idea, clearly presented by Katy Cook, author of the forthcoming Ethical Threats and Emotional Unintelligence in the Tech Industry, who argues that “When this basic function [our ability to rely on what we see] is compromised by deepfakes or manipulated images, our ability to agree on basic facts diminishes and our social cohesion suffers as a result.” I think that’s right, and as Cook hints, this sort of thing might just be another step in the fracturing of our body politic, furthering the damage done by siloed news channels and the bubbles caused by Facebook algorithms.

An interesting thought here is that as we lose our ability to rely on common evidence, the more likely we are to retreat to our ideological corners so that we don’t have to adduce evidence at all. (How often do we object to those who agree with us because their reasons for their belief aren’t airtight? Philosophers aside, that is.) We either don’t talk to those who might debate us, or we avoid talking about the topics of disagreement.

In general, it seems likely that as trust in evidence weakens, so too does the drive to seek evidence. “You can’t trust anything” becomes an excuse to stick with whatever beliefs one already has, or to adopt the beliefs that are most convenient. What makes it particularly insidious in the case of images is that we tend to give credence to what we see, but if we lose that, we are apt to lose the last bit of hope we have that anyone can be convinced by the truth. At that point, the peddler of convenient truths wins, likely at the cost of the real world.

Can AI be Conscious?

            We’ve heard about the extraordinary capabilities of specialized artificial intelligence, and we have reason to believe that General AI is on its way, with superintelligent AI hard on its heels.  These machines will surpass humans even in their most impressive feats of intelligence, not merely in playing games like chess and Go, but in generalized reasoning about previously unanticipated problems and domains.  Some incarnations of this AI are likely to be humanoid, on two legs and with expressive faces, but it’s likely that most of it will not be.  An AI doesn’t need limbs to move or eyes to see in order to be a reasoning mind.  Humanoid or otherwise, though, we can ask about these minds: are they conscious?  And perhaps just as importantly, does it matter?

            Questions like this about consciousness are ambiguous, so it is important to be clear about just what we mean.  On the one hand, there is what we might call causal awareness.  This is the sense in which it seems right to say that a system is “aware” or “conscious” of something if it receives and processes information stemming from it.  On a primitive level, there is a sense in which the automatic doors at the supermarket are aware of people approaching.  They sense or detect people and slide open in response.  Something like a robotic vacuum cleaner—a Roomba, for example—is aware of the walls and furniture that it must avoid as it cleans the floors.  In both of these cases there is a sense in which the relevant systems are “aware” of things, but there is no temptation to say that they are actually having feelings of awareness.  There is no subjective quality—there to be enjoyed or not, or consciously entertained—corresponding to the door’s perceiving your approach or the Roomba’s sensing the wall.  The systems’ actions are blindly triggered by elements of the environment.  There is simply an impressive system of cause and effect there, with items in the environment being the cause. 

            When we say that a person is conscious of her environment, we usually have something much more robust in mind.   There is not only a causal process in us, triggered by our environment—there is also a sensation, a feeling that comes with the use of our five senses.  When I see you and raise my hand in greeting, there is an inner awareness—I see the blue shirt you are wearing, and I have an inner sensory state that corresponds to the bluishness of your shirt.  When the soundwaves caused by your speech reach my ear, and those waves are translated into electrical impulses in the brain, there is something missing if I leave out the fact that it actually sounds like something.  Philosophers often say that “there is something it is like” to hear, smell, taste or touch.  What they are getting at is the idea that conscious experience has a subjective aspect, a feeling to it, that seems to be something over and above the mere relay of causal information through a processing system.  Someone is at home, feeling these things.  We can call this subjective consciousness.

            So, back to the original question.  Will a sufficiently sophisticated AI be conscious?  It should be clear that if we are talking about causal awareness the answer is yes.  As long as the system has states that respond to the environment, it can be said to be conscious in this sense.

            But the issue of subjective consciousness isn’t handled so easily.  It might seem that as long as these machines say subjective things and react in a certain way, we have reason to believe they have subjective consciousness.  So, for example, we can imagine that the artificial intelligence is humanoid.  In that case, if it were kicked and grabbed its shin and said “ow!” we would naturally think it has pain.  If it tasted chocolate, assuming we equipped it with sensors in its tongue, it might say “Mmm.  Sweet with just a hint of bitterness.”  That’s the sort of evidence we use to tell whether other humans are conscious, isn’t it?  That’s true, for the most part, but we have to admit that this sort of evidence isn’t anywhere near conclusive.  A rather simple computer, or video game AI, can be programmed to issue subjective responses.  Most computers monitor their own internal temperature so they can turn on the fans or shut down if the processor gets too hot.  It’s implausible that the computer has a sensory experience of heat just because its processor is hot, or just because it detects heat in its processor.  It is hot—just as a piece of metal becomes hot when held over a fire—but it doesn’t feel heat.  Nor would it feel heat if we made it say “Man, I’m hot!  Shut me down!” when it overheats.  Putting the same machine in a humanoid body and wiring it to facial expressions and sweat glands clearly wouldn’t make it feel heat either.  Of course this isn’t likely to be the way an AI is set up, but the general point is that there are many ways we might have something that looks and acts like it is in pain, but is just generating physical reactions that have been programmed in.
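            To make that point concrete, here is a minimal sketch in Python of the sort of system just described: it monitors a (simulated) temperature reading and emits a subjective-sounding complaint when a threshold is crossed.  The sensor function and threshold are stand-ins of my own invention; the point is only that the “complaint” is just a string picked by a conditional.

    import random

    def read_processor_temperature() -> float:
        """Stand-in for a real temperature sensor; returns degrees Celsius."""
        return random.uniform(35.0, 95.0)

    SHUTDOWN_THRESHOLD_C = 85.0

    def check_temperature() -> None:
        temp = read_processor_temperature()
        if temp > SHUTDOWN_THRESHOLD_C:
            # Causal awareness only: a reading crossed a threshold,
            # so a pre-written string is printed.  Nothing here feels hot.
            print(f"Man, I'm hot ({temp:.1f} C)!  Shut me down!")
        else:
            print(f"Temperature nominal: {temp:.1f} C")

    check_temperature()

            Such a program is “aware” of its temperature in the causal sense, but there is no temptation to think there is something it is like for it to overheat.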

            Going through the many debates about the nature of conscious experience is beyond the scope of what we can do, but one thing should be clear:  something can be just like us externally, with any number of apparently subjective reactions, but nonetheless lack subjective consciousness.  In fact, it seems likely that one can know as much as one wants about the ways an artificial intelligence works, but as long as that knowledge is “objective” or from the outside, that won’t tell us whether or not there is consciousness corresponding to its expressions or even its internal processes.

            It’s worth pausing for a second to ask how this is any different from our knowledge about each other.  I don’t feel your pains, after all.  I just go by what you say and how you behave to determine what’s going on subjectively.  So, do I know that you are in pain?  This is the classic problem of other minds.  Each individual only has access to her own subjective consciousness, and in the case of others she only has access to outer behavior.  How, then, can I know that you are feeling pain?

            The most natural answer to this comes in the form of an argument by analogy.  When I am shouting “ow” as a result of being kicked, and am hopping around with a grimace on my face, I’m in pain.  I can establish this connection in my own case, and I can use my own case to produce a general rule:  When someone is showing certain behavior as a result of a certain cause, they are in pain.  I can then use this rule to infer that when you, or someone else, is showing that behavior, you are in pain.

            Unfortunately, this argument from analogy really isn’t very good.  The main problem is that you aren’t justified in believing a general rule connecting behavior with conscious states on the basis of one example of that connection: yourself.  Given that you only know that in your own case these outer behaviors are a result of inner conscious states, how can you thereby conclude that this is the case for everyone?  Consider the following analogy.  You are shopping for cars.  Across the enormous parking lot you see silver cars, black cars, red cars and white ones.  You decide you’d like to check out a white car, and you open the door to find it has a tan interior.  You don’t like tan interiors, understandably enough, so you move on.  The dealer suggests you look at another white car and you say “no thanks, I don’t like tan interiors.”  “Why do you think it has a tan interior?” the dealer asks.  “Well, the previous white car had a tan interior, so I used that connection to form a general rule: all white cars have tan interiors.  That car is white, so it has a tan interior.”  Hopefully it is clear that you have very little reason, almost none at all, for thinking that this car has a tan interior.  You simply can’t form a general rule connecting car body colors with car interior colors based on looking at one car.  But this is precisely the sort of thing we are doing when we conclude, based only on our own case—which is the only case in which we are privy to the “interior” states—that there is a general rule connecting outer behaviors to inner states.

            Given this, it might seem the problem of other minds means that we are in as bad a situation with respect to other human minds as we are to the question of whether or not AIs are conscious.  If that’s the case, it seems we should give AIs the benefit of the doubt and call them conscious.  This suggests the following argument:

  1.  Our evidence for concluding other humans have subjective consciousness is as good as our evidence for concluding AIs have subjective consciousness.
  2. If we have the same evidence for concluding a is F as we do for concluding b is F, we should conclude one only if we conclude the other.
  3. We do, and should, conclude that other humans are conscious.  (To fail to do so would be morally outrageous.)

Therefore, we should conclude that AIs are conscious.

Basically, according to this argument, even though we can recognize that we don’t have enough evidence to conclude that other people have subjective consciousness, we should extend each other the courtesy of that belief.  That courtesy should extend as well to AI, even though we might not know that AI are conscious.

            There’s some reason to doubt this argument.  Among other things, according to this argument it seems our reason for concluding other humans have minds isn’t the evidence for the conclusion, but the deeply troubling moral repercussions of not doing so.  But if that is what is determining whether or not we think AIs are conscious, it’s not obvious that we should accept premise 2.  Premise 2 seems to presuppose that our belief is based on evidence, but the third premise seems to presuppose that our belief is really based on moral reasons.  So there is some tension in the argument.  It could be that we have the same evidence that AIs are conscious as we do that other humans are conscious, but that we have morally compelling reasons to believe humans are conscious while not being similarly morally compelled to conclude the same thing about AI. 

            That’s not the main problem with the argument, however.  The problem is with premise 1.  We are in fact much less justified in believing AIs are conscious than we are in believing that other humans are conscious.  Our justification for believing other humans have conscious states is not merely, or perhaps even primarily, a matter of believing in a general rule connecting behavior to mental states.  Crucial to our inference about other people is the premise that we are the same sorts of creatures with the same sorts of brains, and those brains are responsible for the mental states we have.  This is also why we tend to think that non-human animals, like dogs and pigs, have conscious states even though they behave quite differently from us in many ways: they have brains that are relevantly similar to ours, with parts that are relevantly similar to the parts of our brains that seem to be associated with pain.  Assuming we continue building AI on the basis of computer architecture, AI will not have these commonalities with humans.  They will have more commonalities, in fact, with things we have reason to believe are not conscious—computers.  This fact should undermine our readiness to ascribe conscious states to them on the basis of behavior, since we know that, unlike ours, their behavior is underwritten by systems that do not themselves seem to be of the sort that underwrite conscious experience.

            In light of these considerations, it’s tempting to conclude that we can know that computers will not be conscious.  Consider the following argument:

  1.  Consciousness is a result of the sort of organic brains that humans and many other naturally evolved animals possess.
  2. AI will not have organic brains.

Therefore, AI will not be conscious.

While this argument looks good, it is not.  The problem is that just because our consciousness is a result of organic brains doesn’t mean that there can’t be things that are conscious with different sorts of systems.  It is similar to the following argument:

  1.  Breathing is a result of our having oxygen in our lungs.
  2. Fish do not have lungs.

Therefore, fish don’t breathe.

The problem, of course, is that while we breathe with lungs, there are other ways to breathe.  So we need an argument that there aren’t other ways to be conscious.

            Another argument was suggested by our earlier discussion, however.  Quite plausibly computers are not conscious, and AIs will just be more complicated versions of the same thing.  So perhaps that’s reason enough to think they aren’t conscious.  The argument would be something like,

  1.  Computers (like home PCs) are not conscious.
  2. AI will be based on similar architecture to computers.
  3. If AI use similar architecture to computers it will not be conscious.

Therefore, AI will not be conscious.

This argument can seem persuasive at first, but it probably shouldn’t be accepted.  Computers are probably not conscious, but that is no reason to believe that something like them in certain respects—and very unlike them in others—won’t be.  To put the point in other words, it might not be the type of physical architecture that prevents our home computers from being conscious.  It could be their relative simplicity, or something having to do with how they are programmed, that prevents them from being conscious.  AI will likely be very different in these respects.  It might be different enough to generate the same sort of consciousness that we get from our biological brains.

            So, where does this leave us?  Unfortunately, it might leave us in the position that we simply cannot know whether or not AI are conscious.  Our best reason for thinking other people are conscious doesn’t apply in the case of beings so different from us, but we can’t conclude that those beings aren’t conscious, because we simply don’t know what sorts of systems can possibly give rise to consciousness.  If there are such systems, only they will be in a good position to conclude that others of their ilk are conscious.  That is, the following argument might be sound:

  1. Conscious beings only have justification for believing that other beings of their physical type are conscious.
  2. AI will not be of the same physical type as human beings.

Therefore, human beings cannot know whether AI are conscious.

If this argument is sound, it seems that we might be in the very frustrating position with AI that we cannot know that it is conscious.  Does this matter?  Arguably it does. A lot. 

            We are often mostly concerned with what artificial intelligences can do, not whether they are conscious or have certain conscious feelings.  The Turing Test, the classic benchmark for artificial intelligence, doesn’t even take consciousness into consideration.  (Remember—just because the robot is aware of what you are saying, in the sense that it registers it and generates an output, doesn’t mean that it is conscious in the sense that there is something it is like for it to register what you say or generate an output.)  Nevertheless, there are several reasons we really should care about whether or not robots are conscious. 

            Most obviously, we should care about whether artificial intelligences are conscious for their sake.  To keep things simple, suppose we have a humanoid robot, and it is programmed to be as human as possible.  If you hit it, it will say “ow” and withdraw, begin to distrust you or, depending on its disposition, hit you back.  As we’ve seen, this behavior seems to leave open the question of whether it is actually feeling anything when it is hit.  If it is, though, surely that gives us an important moral reason not to hit it or to cause it pain.  As Peter Singer points out, the mere ability to feel pain would seem to generate certain obligations toward the being that feels it.  The fact that a cat feels pain when it is burned generates an obligation not to burn it.  It may not be the only reason burning a cat is morally wrong, but it is a big reason.  If toy manufacturers sold a stuffed cat that screeched when burned, we might not want to burn it—it might smell, the screech might be horrific, and burning toys to make them scream is generally not something nice people want to do—but it wouldn’t be harming the toy cat.  Put another way, if a child owns a toy cat, and we set it aflame just for fun, we are wronging the child.  We are, plausibly, violating our obligations to the child.  If, however, because of its complicated AI the toy cat is conscious, we are not only wronging the child, we are wronging the cat.  We are violating our obligations to the cat (as well as those we owe to the child as owner of the cat).  The cat is suddenly morally relevant in itself, and not merely as a possession of the child.

            If artificial intelligences are conscious, they appear to be morally relevant in this significant sense—we ought to behave a certain way toward them regardless of who they are owned by or what purposes they serve.  Indeed, if the purposes they serve or if that ownership causes a certain amount of conscious harm, it is prima facie morally impermissible to continue using them in that way.  Long story short, if AI is conscious, then robots have rights that we’ll have to consider.  Consciousness alone doesn’t generate the whole slate of rights that humans have.

            If this is right, and if our earlier consideration about knowledge of AI consciousness is correct, then with AI we are likely to be in a position of not knowing whether they have rights.  Given that artificial intelligences are likely to comprise a big part of our future, this means that we will not know whether major players in our lives have rights.  This is significant, and it is likely to be a point of social tension in the future.  There will likely be those who argue that AI deserve rights and that we cannot use them like tools, and there are likely to be those who think that man-made, nonorganic machines can never be conscious and that they do not have rights.  One can imagine something not unlike the friction between abolitionists and slaveholders arising from this tension, and it probably has a similar potential to tear society apart. 

            We’re a long way from this social situation, and I strongly suspect we’re a long way from this technological situation, but that’s not a reason to put off thinking about it.  As convenient as it is to focus on what AI can do, it is morally imperative that we think about what it can feel.

Google’s Threatening Argument against Blocking Third-Party Cookies

Because Google cares so much about your privacy, they have announced a new standards initiative to increase privacy on the web.  They call it the Privacy Sandbox.  You will be shocked to learn that it is a mixed bag, and has proposals that are deeply problematic for privacy.  There’s a good discussion of some of the pros and cons at EFF.  Here I just want to remark on a very bad, quite self-serving argument that they make in their proposal, which maintains that blocking third-party cookies is bad for privacy.

Third-party cookies are basically files that get put on your computer by a party other than the site you are visiting.  So you visit bighats.com and there is a cookie from, say, Google that allows them to learn what you’re up to there.  This helps people like, um, Google place ads that target your behavior.  Recently, browsers such as Firefox have started blocking third-party cookies by default.  Google thinks this is bad.  Of course it is bad for them, but the surprising part of the argument is that they maintain it is bad for privacy.  They say:

…large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected. We think this subverts user choice and is wrong.

Basically, the argument is that you shouldn’t protect yourself from cookies because then companies will track you by more dastardly means.  While there is an argument here, there is also the hint of a threat.  Surely Google doesn’t mean to say that if you don’t let them track you this way they’ll track you in a way that gives you less choice, so shut up and stop blocking?  Nah. They wouldn’t say that.  Even though they are the world’s biggest user of this sort of information and therefore the biggest threat if things get worse privacy-wise, surely they’re just making an argument, not a threat.

Even if we give them the benefit of the doubt, the fact that this argument has been made by dictators and oppressors throughout history should give us pause.  The form:  You’d better accept Bad because otherwise Worse!  (“Don’t remove the star of David, because then we’ll have to really make sure we know where you are.”)  The obvious response is that we should have the option to protect ourselves both from the Bad and the Worse.  Of course, if the worse really is inevitable without the bad, the argument might be persuasive, but it clearly isn’t inevitable that we will be fingerprinted if we don’t accept third-party cookies.  If a company with Google’s influence decided that’s not how things should go, I doubt things would go that way.  In addition, there are ways one could foil things like fingerprinting.  Not only can one block fingerprinting, but I suspect it’s not difficult to generate masks that present a false fingerprint.  This doesn’t seem to be an insoluble problem.
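For what it’s worth, the mechanics are simple enough to sketch.  Here is a minimal, hypothetical Python illustration of the fingerprinting idea described in Google’s quote: a handful of browser attributes are hashed into a stable identifier, and spoofing even one attribute yields a different identifier, which is why masks of the sort just mentioned would break the link.  The attribute names and values are made up for the example.

    import hashlib

    def fingerprint(attributes: dict) -> str:
        """Hash a set of browser/device attributes into a short, stable identifier."""
        canonical = "|".join(f"{key}={attributes[key]}" for key in sorted(attributes))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

    # Hypothetical attributes a tracking script might collect.
    real_profile = {
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/70.0",
        "screen": "2560x1440",
        "timezone": "America/Chicago",
        "fonts": "Arial,Courier New,Garamond",
    }

    # The same browser with one spoofed attribute.
    masked_profile = dict(real_profile, fonts="Arial,Helvetica,Times New Roman")

    print(fingerprint(real_profile))    # identical on every visit, so linkable across sites
    print(fingerprint(masked_profile))  # different, so the mask severs the link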

Our response to a mass move to more coercive forms of tracking shouldn’t be to volunteer to be tracked, but to raise hell about those more coercive forms, demanding policy protections or simply boycotting websites that employ things like fingerprinting.  The fact that Google makes self-serving arguments like this–arguments that sound suspiciously like threats–should make you think twice about playing in their sandbox.