The Danger of Manipulated Images

Faces created by NVIDIA's AI algorithm

A couple of days ago, probably at the suggestion of an AI, I read Sonia Klug’s Medium article AI is Changing How You See the World. The main argument of the piece is that AI-enhanced images lead us to misrepresent reality, and that this, among other things, leads us to believe in unrealistic ideals, or perhaps even to misconstrue the way the world actually is. The problem has diverse manifestations. On the one hand, you have deepfakes that might well persuade us that a public figure said or did something she didn’t; on the other, you have filtered and enhanced pictures that lead us to believe our own blotchy and bumpy faces are decidedly subpar. I’m inclined to agree with the main points in the article, but I’m particularly interested in another idea, clearly presented by Katy Cook, author of the forthcoming Ethical Threats and Emotional Unintelligence in the Tech Industry, who argues that “When this basic function [our ability to rely on what we see] is compromised by deepfakes or manipulated images, our ability to agree on basic facts diminishes and our social cohesion suffers as a result.” I think that’s right, and as Cook hints, this sort of thing might just be another step in the fracturing of our body politic, furthering the damage done by siloed news channels and the filter bubbles created by Facebook’s algorithms.

An interesting thought here is that the more we lose our ability to rely on common evidence, the more likely we are to retreat to our ideological corners, where we don’t have to adduce evidence at all. (How often do we object to those who agree with us because their reasons for their belief aren’t airtight? Philosophers aside, that is.) We either don’t talk to those who might debate us, or we avoid talking about the topics of disagreement.

In general, it seems likely that as trust in evidence weakens, so too does the drive to seek evidence. “You can’t trust anything” becomes an excuse to stick with whatever beliefs one already has, or to adopt the beliefs that are most convenient. What makes it particularly insidious in the case of images is that we tend to give credence to what we see, but if we lose that, we are apt to lose the last bit of hope we have that anyone can be convinced by the truth. At that point, the peddler of convenient truths wins, likely at the cost of the real world.

FTC sues Match.com for misleading users

Machine learning together with deepfakery will exacerbate this problem, as companies like Match will be able to use a person’s profile information, together with their ‘likes’ and saved profiles, to generate profiles (especially pictures) of the men and women that targeted users are most likely to want to connect with. Even though these generated profiles aren’t actual people, they will increase sales (since one needs to be a paid member to respond to messages) while providing nothing to users. Once again, technology allows the profit motive to run wild (i.e., to do what is morally wrong).
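
To make the mechanism concrete, here is a purely hypothetical sketch (the feature vectors, names, and numbers are all invented): build a preference vector from the profiles a user has liked, then rank candidates, real or generated, by similarity to it. A generative model tuned against a score like this could, in principle, produce exactly the faces a targeted user is most likely to respond to.

```typescript
// Purely hypothetical illustration: score candidate profiles (real or
// synthetic) against a "preference vector" built from profiles a user liked.
// The feature vectors are invented stand-ins for embeddings a real system
// might compute from photos and profile text.

type ProfileFeatures = number[];

function averageVector(vectors: ProfileFeatures[]): ProfileFeatures {
  const dim = vectors[0].length;
  const sum = new Array<number>(dim).fill(0);
  for (const v of vectors) {
    for (let i = 0; i < dim; i++) sum[i] += v[i];
  }
  return sum.map((x) => x / vectors.length);
}

function cosineSimilarity(a: ProfileFeatures, b: ProfileFeatures): number {
  const dot = a.reduce((acc, x, i) => acc + x * b[i], 0);
  const norm = (v: ProfileFeatures) =>
    Math.sqrt(v.reduce((acc, x) => acc + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Invented data: embeddings of profiles the user has liked, plus two candidates.
const likedProfiles: ProfileFeatures[] = [
  [0.9, 0.1, 0.3],
  [0.8, 0.2, 0.4],
];
const candidates: Record<string, ProfileFeatures> = {
  realUser: [0.1, 0.9, 0.2],
  syntheticProfile: [0.85, 0.15, 0.35], // tailored to the preference vector
};

const preference = averageVector(likedProfiles);
for (const [name, features] of Object.entries(candidates)) {
  console.log(name, cosineSimilarity(preference, features).toFixed(3));
}
```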

Social Media & Consent

Suppose John is in China.  While he doesn’t understand any Chinese, he wants a cup of coffee, so he enters the nearest coffee shop and points toward a coffee pot.  The barista pours some coffee into a paper cup, hands it to John, then asks (in Chinese) “may I charge you double the price for this cup of coffee?” John, understanding nothing of what’s been asked, nods his head to expedite the process of getting his coffee.  Has John consented to being charged double the regular price for his coffee?

We think not, but maybe you disagree. Suppose rather than asking to overcharge him, the barista asks if he can violently take possession of John’s heart. It seems clear that John has not consented to that. You might say that John ought not to have nodded his head in agreement, that he acted carelessly, maybe he even did something wrong, but that’s not the point. His nodding or saying ‘yes’ does not, by itself, entail that he has consented to whatever has been asked.

We would argue that, in the majority of cases, users of social media platforms likewise have not consented to the relevant ‘terms of service’ (TOS). The length (consider not only the relevant text but the embedded links) and the opaque legalese support the analogy to John in the relevant ways. This calls into question many current practices, including the use of personal data, the extraction of further data never explicitly provided, the monitoring of surfing and purchasing patterns, and the researching of connections between emotional states and behavior, all in order to better generate profits. We think these worries can be eliminated, or at the very least reduced, by making it easier for users to become truly informed regarding the relevant TOS.

Can AI be Conscious?

            We’ve heard about the extraordinary capabilities of specialized artificial intelligence, and we have reason to believe that General AI is on its way, with superintelligent AI hard on its heels.  These machines will surpass humans even in their most impressive feats of intelligence, not merely in playing games like chess and Go, but in generalized reasoning about previously unanticipated problems and domains.  Some incarnations of this AI are likely to be humanoid, on two legs and with expressive faces, but it’s likely that most of it will not be.  An AI doesn’t need limbs to move or eyes to see in order to be a reasoning mind.  Humanoid or otherwise, though, we can ask about these minds: are they conscious?  And perhaps just as importantly, does it matter?

            Questions like this about consciousness are ambiguous, so it is important to be clear about just what we mean.  On the one hand, there is what we might call causal awareness.  This is the sense in which it seems right to say that a system is “aware” or “conscious” of something if it receives and processes information stemming from it.  On a primitive level, there is a sense in which the automatic doors at the supermarket are aware of people approaching.  They sense or detect people and slide open in response.  Something like a robotic vacuum cleaner—a Roomba, for example—is aware of the walls and furniture that it must avoid as it cleans the floors.  In both of these cases there is a sense in which the relevant systems are “aware” of things, but there is no temptation to say that they are actually having feelings of awareness.  There is no subjective quality—there to be enjoyed or not, or consciously entertained—corresponding to the door’s perceiving your approach or the Roomba’s sensing the wall.  The systems’ actions are blindly triggered by elements of the environment.  There is simply an impressive system of cause and effect there, with items in the environment being the cause.
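
To put the point about causal awareness in concrete terms, here is a minimal sketch (the sensor readings and trigger distance are invented) of the kind of detect-and-respond loop a supermarket door runs.

```typescript
// A minimal, purely illustrative sketch of causal awareness: a door
// controller that "detects" an approaching person and responds. There is
// detection and reaction here, but nothing it is like to be the controller.

type ProximityReading = { distanceMeters: number };

function shouldOpen(reading: ProximityReading, triggerDistance = 1.5): boolean {
  // Pure cause and effect: a number crosses a threshold, an action follows.
  return reading.distanceMeters <= triggerDistance;
}

const readings: ProximityReading[] = [
  { distanceMeters: 4.0 }, // someone far away
  { distanceMeters: 1.2 }, // someone approaching the door
];

for (const r of readings) {
  console.log(shouldOpen(r) ? "door: open" : "door: stay closed");
}
```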

            When we say that a person is conscious of her environment, we usually have something much more robust in mind.   There is not only a causal process in us, triggered by our environment—there is also a sensation, a feeling that comes with the use of our five senses.  When I see you and raise my hand in greeting, there is an inner awareness—I see the blue shirt you are wearing, and I have an inner sensory state that corresponds to the bluishness of your shirt.  When the soundwaves caused by your speech reach my ear, and those waves are translated into electrical impulses in the brain, there is something missing if I leave out the fact that it actually sounds like something.  Philosophers often say that “there is something it is like” to hear, smell, taste or touch.  What they are getting at is the idea that conscious experience has a subjective aspect, a feeling to it, that seems to be something over and above the mere relay of causal information through a processing system.  Someone is at home, feeling these things.  We can call this subjective consciousness.

            So, back to the original question.  Will a sufficiently sophisticated AI be conscious?  It should be clear that if we are talking about causal awareness the answer is yes.  As long as the system has states that respond to the environment, it can be said to be conscious in this sense.

            But the issue of subjective consciousness isn’t handled so easily.  It might seem that as long as these machines say subjective things and react in certain ways, we have reason to believe they have subjective consciousness.  So, for example, we can imagine that the artificial intelligence is humanoid.  In that case, if it were kicked, grabbed its shin, and said “ow!”, we would naturally think it is in pain.  If it tasted chocolate, assuming we made it with sensors in the tongue, it might say “Mmm.  Sweet with just a hint of bitterness.”  That’s the sort of evidence we use to tell whether other humans are conscious, isn’t it?  That’s true, for the most part, but we have to admit that this sort of evidence isn’t anywhere near conclusive.  A rather simple computer, or a video game AI, can be programmed to issue subjective-sounding responses.  Most computers monitor their own internal temperature so they can turn on the fans or shut down if the processor gets too hot.  It’s implausible that the computer has a sensory experience of heat just because its processor is hot, or just because it detects heat in its processor.  It is hot—just as a piece of metal becomes hot when held over a fire—but it doesn’t feel heat.  Nor would it feel heat if we made it say “Man, I’m hot!  Shut me down!” when it overheats.  Putting the same machine in a humanoid body and wiring it to facial expressions and sweat glands clearly wouldn’t make it feel heat either.  Of course this isn’t likely to be the way an AI is set up, but the general point is that there are many ways we might have something that looks and acts as if it is in pain while it is just generating physical reactions that have been programmed in.
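
The overheating case can be made just as concrete. In this toy sketch (again, entirely invented), the “complaint” is nothing more than a string tied to a numeric comparison, which is exactly why such output is no evidence of feeling.

```typescript
// Toy illustration: a "computer" that emits a subjective-sounding complaint
// when its simulated core temperature crosses a threshold. Nothing here
// feels heat; a number is compared and a string is printed.

function readCoreTemperature(): number {
  // Stand-in for a real sensor: a simulated reading in degrees Celsius.
  return 40 + Math.random() * 60;
}

function monitorOnce(shutdownThreshold = 90): void {
  const temp = readCoreTemperature();
  if (temp > shutdownThreshold) {
    // The "complaint" is a hard-coded string attached to a comparison.
    console.log(`Man, I'm hot! Shut me down! (core at ${temp.toFixed(1)}°C)`);
  } else {
    console.log(`All good here. (core at ${temp.toFixed(1)}°C)`);
  }
}

monitorOnce();
```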

            Going through the many debates about the nature of conscious experience is beyond the scope of what we can do, but one thing should be clear:  something can be just like us externally, with any number of apparently subjective reactions, but nonetheless lack subjective consciousness.  In fact, it seems likely that one can know as much as one wants about the ways an artificial intelligence works, but as long as that knowledge is “objective” or from the outside, that won’t tell us whether or not there is consciousness corresponding to its expressions or even its internal processes.

            It’s worth pausing for a second to ask how this is any different to our knowledge about each other.  I don’t feel your pains, after all.  I just go by what you say and how you behave to determine what’s going on subjectively.  So, do I know that you are in pain?  This is the classic problem of other minds.  Each individual only has access to her own subjective consciousness, and in the case of others she only has access to outer behavior.  How, then, can I know that you are feeling pain?

            The most natural answer to this comes in the form of an argument by analogy.  When I am shouting “ow” as a result of being kicked, and am hopping around with a grimace on my face, I’m in pain.  I can establish this connection in my own case, and I can use my own case to produce a general rule:  When someone is showing certain behavior as a result of a certain cause, they are in pain.  I can then use this rule to infer that when you, or someone else, is showing that behavior, you are in pain.

            Unfortunately, this argument from analogy really isn’t very good.  The main problem is that you aren’t justified in believing a general rule connecting behavior with conscious states on the basis of one example of that connection: yourself.  Given that you only know that in your own case these outer behaviors are a result of inner conscious states, how can you thereby conclude that this is the case for everyone?  Consider the following analogy.  You are shopping for cars.  Across the enormous parking lot you see silver cars, black cars, red cars, and white ones.  You decide you’d like to check out a white car, and you open the door to find it has a tan interior.  You don’t like tan interiors, understandably enough, so you move on.  The dealer suggests you look at another white car and you say “no thanks, I don’t like tan interiors.”  “Why do you think it has a tan interior?” the dealer asks.  “Well, the previous white car had a tan interior, so I used that connection to form a general rule: all white cars have tan interiors.  That car is white, so it has a tan interior.”  Hopefully it is clear that you have very little reason, almost none at all, for thinking that this car has a tan interior.  You simply can’t form a general rule connecting car body colors with car interior colors based on looking at one car.  But this is precisely the sort of thing we are doing when we conclude, based only on our own case—which is the only case in which we are privy to the “interior” states—that there is a general rule connecting outer behaviors to inner states.

            Given this, it might seem that the problem of other minds leaves us in as bad a situation with respect to other human minds as we are in with respect to the question of whether or not AIs are conscious.  If that’s the case, it seems we should give AIs the benefit of the doubt and call them conscious.  This suggests the following argument:

  1. Our evidence for concluding other humans have subjective consciousness is as good as our evidence for concluding AIs have subjective consciousness.
  2. If we have the same evidence for concluding a is F as we do for concluding b is F, we should conclude one only if we conclude the other.
  3. We do, and should, conclude that other humans are conscious.  (To fail to do so would be morally outrageous.)

Therefore, we should conclude that AIs are conscious.

Basically, according to this argument, even though we can recognize that we don’t have enough evidence to conclude that other people have subjective consciousness, we should extend each other the courtesy of that belief.  That courtesy should extend as well to AI, even though we might not know that AI are conscious.

            There’s some reason to doubt this argument.  Among other things, according to this argument it seems our reason for concluding other humans have minds isn’t the evidence for the conclusion, but the deeply troubling moral repercussions of not doing so.  But if that is what determines whether or not we think AIs are conscious, it’s not obvious that we should accept premise 2.  Premise 2 seems to presuppose that our belief is based on evidence, but the third premise seems to presuppose that our belief is really based on moral reasons.  So there is some tension in the argument.  It could be that we have the same evidence for believing AI are conscious as we do for believing other humans are conscious, and yet be morally compelled only to conclude that humans are conscious, without being similarly compelled in the case of AI.

            That’s not the main problem with the argument, however.  The problem is with premise 1.  We are in fact much less justified in believing that AI have consciousness than we are in believing that other humans do.  Our justification for believing other humans have conscious states is not merely, or perhaps even primarily, a matter of believing in a general rule connecting behavior to mental states.  Crucial to our inference about other people is the premise that we are the same sorts of creatures with the same sorts of brains, and those brains are responsible for the mental states we have.  This is also why we tend to think that non-human animals, like dogs and pigs, have conscious states even though they behave quite differently from us in many ways: they have brains that are relevantly similar to ours, with parts that are relevantly similar to the parts of our brains that seem to be associated with pain.  Assuming we continue building AI on the basis of computer architecture, AI will not have these commonalities with humans.  They will have more commonalities, in fact, with things we have reason to believe are not conscious—computers.  This fact should undermine our readiness to ascribe conscious states to them on the basis of behavior, since we know that, unlike ours, their behavior is underwritten by systems that do not seem to be of the sort that underwrite conscious experience.

            In light of these considerations, it’s tempting to conclude that we can know that computers will not be conscious.  Consider the following argument:

  1.  Consciousness is a result of the sort of organic brains that humans and many other naturally evolved animals possess.
  2. AI will not have organic brains.

Therefore, AI will not be conscious.

While this argument looks good, it is not.  The problem is that just because our consciousness is a result of organic brains doesn’t mean that there can’t be things that are conscious with different sorts of systems.  It is similar to the following argument:

  1. Breathing is a result of the sort of lungs that humans and many other animals possess.
  2. Fish do not have lungs.

Therefore, fish don’t breathe.

The problem, of course, is that while we breathe with lungs, there are other ways to breathe.  So we need an argument that there aren’t other ways to be conscious.

            Another argument was suggested by our earlier discussion, however.  Quite plausibly computers are not conscious, and AIs will just be more complicated versions of the same thing.  So perhaps that’s reason enough to think they aren’t conscious.  The argument would be something like this:

  1. Computers (like home PCs) are not conscious.
  2. AI will be based on an architecture similar to that of computers.
  3. If AI uses an architecture similar to that of computers, it will not be conscious.

Therefore, AI will not be conscious.

This argument can seem persuasive at first, but it probably shouldn’t be accepted.  Computers are probably not conscious, but that is no reason to believe that something like them in certain respects—and very unlike them in others—won’t be.  To put the point another way, it might not be the type of physical architecture that prevents our home computers from being conscious.  It could be their relative simplicity, or something to do with how they are programmed, that prevents them from being conscious.  AI will likely be very different in these respects.  It might be different enough to generate the same sort of consciousness that we get from our biological brains.

            So, where does this leave us?  Unfortunately, it might leave us in the position that we simply cannot know whether or not AI are conscious.  Our best reason for thinking other people are conscious doesn’t apply in the case of beings so different from us, but we can’t conclude that those beings aren’t conscious, because we simply don’t know what sorts of systems can possibly give rise to consciousness.  If there are such systems, only they will be in a good position to conclude that others of their ilk are conscious.  That is, the following argument might be sound:

  1. Conscious beings only have justification for believing that other beings of their physical type are conscious.
  2. AI will not be of the same physical type as human beings.

Therefore, human beings cannot know whether AI are conscious.

If this argument is sound, it seems that we might be in the very frustrating position of being unable to know whether AI is conscious.  Does this matter?  Arguably it does. A lot.

            We are usually more concerned with what artificial intelligences can do than with whether they are conscious or have certain conscious feelings.  The Turing Test, the classic benchmark for artificial intelligence, doesn’t even take consciousness into consideration.  (Remember—just because the robot is aware of what you are saying in the sense that it registers it and generates an output doesn’t mean that it is conscious in the sense that there is something it is like for it to register what you say or generate an output.)  Nevertheless, there are several reasons we really should care about whether or not robots are conscious.

            Most obviously, we should care about whether artificial intelligences are conscious for their sake.  To keep things simple, suppose we have a humanoid robot, and it is programmed to be as human as possible.  If you hit it, it will say “ow” and withdraw, begin to distrust you or, depending on its disposition, to hit you back.  As we’ve seen, this behavior seems to leave open the question of whether it is actually feeling anything when it is hit.  If it is, though, surely that gives us an important moral reason not to hit it or to cause it pain.  As Peter Singer points out, the mere ability to feel pain would seem to lead us to have certain obligations toward the being that has it.  The fact that a cat feels pain when it is burned generates an obligation not to burn it.  It may not be the only reason burning a cat is morally wrong, but it is a big reason.  If toy manufacturers sold a stuffed cat that screeched when burned, we might not want to burn it—it might smell, the screech might be horrific, and burning toys to make them scream is generally not something nice people want to do—but it wouldn’t be harming the toy cat.  Put another way, if a child owns a toy cat, and we set it aflame just for fun, we are wronging the child.  We are, plausibly, violating our obligations to the child.  If, however, because of its complicated AI the toy cat is conscious, we are not only wronging the child, we are wronging the cat.  We are violating our obligations to the cat (as well as those we owe to the child as owner of the cat).  The cat is suddenly morally relevant in itself, and not merely as a possession of the child.

            If artificial intelligences are conscious, they appear to be morally relevant in this significant sense—we ought to behave a certain way toward them regardless of who owns them or what purposes they serve.  Indeed, if that ownership or those purposes cause a certain amount of conscious harm, it is prima facie morally impermissible to continue using them in that way.  Long story short, if AI is conscious, then robots have rights that we’ll have to consider.  Consciousness alone doesn’t generate the whole slate of rights that humans have, but it is enough to generate some.

            If this is right, and if our earlier considerations about knowledge of AI consciousness are correct, then with AI we are likely to be in a position of not knowing whether they have rights.  Given that artificial intelligences are likely to comprise a big part of our future, this means that we will not know, of major players in our lives, whether they have rights.  This is significant, and it is likely to be a point of social tension in the future.  There will likely be those who argue that AI deserve rights and that we cannot use them like tools, and there are likely to be those who think that manmade, nonorganic machines can never be conscious and so do not have rights.  One can imagine something not unlike the friction between abolitionists and slaveholders arising from this tension, and it has a similar potential to tear society apart.

            We’re a long way from this social situation, and I strongly suspect we’re a long way from this technological situation, but that’s not a reason to put off thinking about it.  As convenient as it is to focus on what AI can do, it is morally imperative that we think about what it can feel.

Google’s Wrist is Slapped for Violating Child Privacy

Ok, so maybe there’s nothing problematic about harvesting data from adults so that advertisers can microtarget us with ads in order to optimize their chances of persuading us to buy. I doubt it, but regardless…

There’s certainly something problematic about harvesting data on children.

Two issues:

First, when it comes to children, talk about consent is silly.

Second, and more important, the persistent harvesting of a child’s data, together with the ways in which such data can now be algorithmically analyzed and the unknown ways in which it may be used and disseminated in the future, suggests that such a child may become entirely transparent to data brokers, advertisers, the government, police departments, and anyone else who might purchase such data.

Of course, the child might grow up to be the greatest criminal mastermind of their time, in which case such transparency might prove useful, but that would have to be true of every such child for this response to get off the ground.

More Fodder for the Hard Problem of Consciousness?

It’s unlikely that many would attribute consciousness to these clusters of cells, but it looks like the question might need to be asked sooner rather than later: can non-embodied, lab-grown brains feel pain? Barring some magic sprinkles in the case of embryo brains, it’s not obvious that there’s any difference between embryo brains and brains grown in labs. “If pre-term babies have some level of moral status, she says, perhaps these mini-brains should too.”

I also can’t help but think about full or partial brain transplants and issues of personal identity…

Google’s Threatening Argument against Blocking Third-Party Cookies

Because Google cares so much about your privacy, they have announced a new standards initiative to increase privacy on the web.  They call it the Privacy Sandbox.  You will be shocked to learn that it is a mixed bag, with proposals that are deeply problematic for privacy.  There’s a good discussion of some of the pros and cons at EFF.  Here I just want to remark on a very bad, quite self-serving argument made in their proposal, which maintains that blocking third-party cookies is bad for privacy.

Third-party cookies are basically files that get put on your computer by a party other than the site you are visiting.  So you visit bighats.com and there is a cookie from, say, Google that allows them to learn what you’re up to there.  This helps people like, um, Google place ads that target your behavior.  Recently, browsers such as Firefox have started blocking third-party cookies by default.  Google thinks this is bad.  Of course it is bad for them, but the surprising part of the argument is that they maintain it is bad for privacy.  They say:

…large scale blocking of cookies undermine people’s privacy by encouraging opaque techniques such as fingerprinting. With fingerprinting, developers have found ways to use tiny bits of information that vary between users, such as what device they have or what fonts they have installed to generate a unique identifier which can then be used to match a user across websites. Unlike cookies, users cannot clear their fingerprint, and therefore cannot control how their information is collected. We think this subverts user choice and is wrong.
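
For concreteness, here is a minimal sketch of the kind of technique described in that passage, assuming it runs in a browser; real fingerprinting scripts combine far more signals (installed fonts, canvas rendering, audio processing, and so on) than this toy version does.

```typescript
// Illustrative browser fingerprint: combine a few attributes that vary
// between users into a single identifier. No cookie is stored, so there is
// nothing for the user to clear.

function simpleHash(input: string): string {
  // Tiny non-cryptographic hash (FNV-1a style), just to turn the attributes
  // into a compact ID for illustration.
  let hash = 2166136261;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return (hash >>> 0).toString(16);
}

function fingerprint(): string {
  const signals = [
    navigator.userAgent,
    navigator.language,
    `${screen.width}x${screen.height}x${screen.colorDepth}`,
    Intl.DateTimeFormat().resolvedOptions().timeZone,
    String(navigator.hardwareConcurrency),
  ];
  return simpleHash(signals.join("|"));
}

// Any site that runs this script can recompute the same identifier and use
// it to match the visitor across websites.
console.log("fingerprint:", fingerprint());
```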

Basically, the argument is that you shouldn’t protect yourself from cookies because then companies will track you by more dastardly means.  While there is an argument here, there is also the hint of a threat.  Surely Google doesn’t mean to say that if you don’t let them track you this way they’ll track you in a way that gives you less choice, so shut up and stop blocking?  Nah. They wouldn’t say that.  Even though they are the world’s biggest user of this sort of information and therefore the biggest threat if things get worse privacy-wise, surely they’re just making an argument, not a threat.

Even if we give them the benefit of the doubt, the fact that this form of argument has been made by dictators and oppressors throughout history should give us pause.  The form: you’d better accept Bad, because otherwise Worse!  (“Don’t remove the Star of David, because then we’ll have to really make sure we know where you are.”)  The obvious response is that we should have the option to protect ourselves both from the Bad and the Worse!  Of course, if the worse really is inevitable without the bad, the argument might be persuasive, but it clearly isn’t inevitable that we will be fingerprinted if we don’t accept third-party cookies.  If a company with Google’s influence decided that’s not how things should go, I doubt things would go that way.  In addition, there are ways one could foil things like fingerprinting.  Not only can one block fingerprinting scripts, but I suspect it’s not difficult to generate masks that present a false fingerprint.  This doesn’t seem to be an insoluble problem.
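
As a rough sketch of that last thought (assuming, again, a browser environment, and with overrides chosen purely for illustration), one can shadow a few of the attributes a fingerprinting script reads so that the identifier it computes changes from visit to visit; some privacy tools take roughly this approach.

```typescript
// Illustrative "false fingerprint" mask: shadow a couple of the properties a
// fingerprinting script would read with randomized or generic values.

function spoofAttribute(target: object, prop: string, value: unknown): void {
  // Define an own property that shadows the built-in getter.
  Object.defineProperty(target, prop, { get: () => value, configurable: true });
}

// Report a randomized core count and a randomly chosen language on each visit.
spoofAttribute(navigator, "hardwareConcurrency", 1 + Math.floor(Math.random() * 8));
spoofAttribute(navigator, "language", ["en-US", "en-GB", "de-DE"][Math.floor(Math.random() * 3)]);

console.log(navigator.hardwareConcurrency, navigator.language);
```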

Our response to a mass move to more coercive forms of tracking shouldn’t be to volunteer to be tracked, but to raise hell about those more coercive forms, demanding policy protections or simply boycotting websites that employ things like fingerprinting.  The fact that Google makes self-serving arguments like this, arguments that sound suspiciously like threats, should make you think twice about playing in their sandbox.

Fake News, from an AI near you

As if the normal sources of flim-flam journalism weren’t enough, OpenAI has now developed an artificial intelligence that can generate news stories that seem as real to most readers as stories from The New York Times. The code hasn’t been fully released, and even if it never is, you can bet there are others developing the same technology. Good times!

https://www.foreignaffairs.com/articles/2019-08-02/not-your-fathers-bots