We’ve heard about the extraordinary capabilities of specialized artificial intelligence, and we have reason to believe that General AI is on its way, with superintelligent AI hard on its heels. These machines will surpass humans even in their most impressive feats of intelligence, not merely in playing games like chess and Go, but in generalized reasoning about previously unanticipated problems and domains. Some incarnations of this AI are likely to be humanoid, on two legs and with expressive faces, but most of it probably will not be. An AI doesn’t need limbs to move or eyes to see in order to be a reasoning mind. Humanoid or otherwise, though, we can ask about these minds: are they conscious? And perhaps just as importantly, does it matter?
Questions
like this about consciousness are ambiguous, so it is important to be clear
about just what we mean. On the one
hand, there is what we might call causal awareness. This is the sense in which it seems right to
say that a system is “aware” or “conscious” of something if it receives and
processes information stemming from it. On
a primitive level, there is a sense in which the automatic doors at the
supermarket are aware of people approaching.
They sense or detect people and slide open in response. Something like a robotic vacuum cleaner—a
Roomba, for example—is aware of the walls and furniture that it must avoid as
it cleans the floors. In both of these
cases there is a sense in which the relevant systems are “aware” of things, but
there is no temptation to say that they are actually having feelings of
awareness. There is no subjective
quality—nothing to be enjoyed or endured, or consciously entertained—corresponding to
the door’s perceiving your approach or the Roomba’s sensing the wall. The systems’ actions are blindly triggered by
elements of the environment. There is
simply an impressive system of cause and effect there, with items in the
environment being the cause.
When
we say that a person is conscious of her environment, we usually have something
much more robust in mind. There is not only a causal process in us,
triggered by our environment—there is also a sensation, a feeling that comes
with the use of our five senses. When I
see you and raise my hand in greeting, there is an inner awareness—I see the
blue shirt you are wearing, and I have an inner sensory state that corresponds
to the bluishness of your shirt. When
the soundwaves caused by your speech reach my ear, and those waves are
translated into electrical impulses in the brain, there is something missing if
I leave out the fact that your speech actually sounds like something to me. Philosophers often say that “there is
something it is like” to hear, smell, taste or touch. What they are getting at is the idea that conscious
experience has a subjective aspect, a feeling to it, that seems to be something
over and above the mere relay of causal information through a processing
system. Someone is at home, feeling
these things. We can call this subjective
consciousness.
So,
back to the original question. Will a
sufficiently sophisticated AI be conscious?
It should be clear that if we are talking about causal awareness the
answer is yes. As long as the system has
states that respond to the environment, it can be said to be conscious in this
sense.
But
the issue of subjective consciousness isn’t handled so easily. It might
seem that as long as these machines say subjective things and react in a
certain way we have reason to believe they have subjective consciousness. So, for example, we can imagine that the
artificial intelligence is humanoid. In
that case, if it were kicked and grabbed its shin and said “ow!”, we would naturally think it is in pain. If it tasted chocolate, assuming we made it with sensors in its tongue, it might say “Mmm. Sweet with just a hint of
bitterness.” That’s the sort of evidence we use to tell whether other humans
are conscious, isn’t it? That’s true,
for the most part, but we have to admit that this sort of evidence isn’t
anywhere near conclusive. A rather
simple computer, or a video game AI, can be programmed to issue subjective-sounding
responses. Most computers monitor their
own internal temperature so they can turn on the fans or shut down if the
processor gets too hot. It’s implausible
that the computer has a sensory experience of heat just because its processor
is hot, or just because it detects heat in its processor. It is hot—just as a piece of metal becomes
hot when held over a fire—but it doesn’t feel heat. Nor would it feel heat if we made it say
“Man, I’m hot! Shut me down!” when it overheats. Putting the same machine in a humanoid body
and wiring it to facial expressions and sweat glands clearly wouldn’t make it
feel heat either. Of course this isn’t
likely to be the way an AI is set up, but the general point is that there are
many ways we might have something that looks and acts like it is in pain while it is just generating physical reactions that have been programmed in.
Going
through the many debates about the nature of conscious experience is beyond the
scope of what we can do, but one thing should be clear: something can be just like us externally,
with any number of apparently subjective reactions, but nonetheless lack
subjective consciousness. In fact, it
seems likely that one can know as much as one wants about the ways an
artificial intelligence works, but as long as that knowledge is “objective” or
from the outside, that won’t tell us whether or not there is consciousness
corresponding to its expressions or even its internal processes.
It’s
worth pausing for a second to ask how this is any different from our knowledge
about each other. I don’t feel your
pains, after all. I just go by what you
say and how you behave to determine what’s going on subjectively. So, do I know that you are in pain? This is the classic problem of other minds. Each individual only has access to her own
subjective consciousness, and in the case of others she only has access to
outer behavior. How, then, can I know
that you are feeling pain?
The
most natural answer to this comes in the form of an argument by analogy. When I am shouting “ow” as a result of being
kicked, and am hopping around with a grimace on my face, I’m in pain. I can establish this connection in my own
case, and I can use my own case to produce a general rule: When someone is showing certain behavior as a
result of a certain cause, they are in pain.
I can then use this rule to infer that when you, or anyone else, are showing that behavior, you are in pain.
Unfortunately,
this argument by analogy really isn’t very good. The main problem is that you aren’t justified
in believing a general rule connecting behavior with conscious states, on the
basis of one example of that connection: yourself. Given that you only know that in your own
case these outer behaviors are a result of inner conscious states, how can you
thereby conclude that this is the case for everyone? Consider the following analogy. You are shopping for cars. Across the enormous parking lot you see
silver cars, black cars, red cars, and white ones.
You decide you’d like to check out a white car, and you open the door to
find it has a tan interior. You don’t
like tan interiors, understandably enough, so you move on. The dealer suggests you look at another white
car and you say “No thanks, I don’t like tan interiors.” “Why do you think it has a tan interior?” the dealer asks. “Well, the previous white
car had a tan interior, so I used that connection to form a general rule: all
white cars have tan interiors. That car
is white, so it has a tan interior.”
Hopefully it is clear that you have very little reason, almost none at
all, for thinking that this car has a tan interior. You simply can’t form a general rule
connecting car body colors with car interior colors based on looking at one
car. But this is precisely the sort of
thing we are doing when we conclude based only on our own case—which is the
only case in which we are privy to the “interior” states—that there is a
general rule connecting outer behaviors to inner states.
Given
this, it might seem the problem of other minds means that we are in as bad a
situation with respect to other human minds as we are with respect to the question of
whether or not AIs are conscious. If
that’s the case, it seems we should give AIs the benefit of the doubt and call
them conscious. This suggests the
following argument:
- Our evidence for concluding other humans have
subjective consciousness is as good as our evidence for concluding AIs have
subjective consciousness.
- If we
have the same evidence for concluding a is F as we do for concluding b is F, we
should conclude one only if we conclude the other.
- We do,
and should, conclude that other humans are conscious. (To fail to do so would be morally
outrageous.)
Therefore,
we should conclude that AIs are conscious.
Basically, according to this
argument, even though we can recognize that we don’t have enough evidence to
conclude that other people have subjective consciousness, we should extend each
other the courtesy of that belief. That
courtesy should extend as well to AI, even though we might not know that AI are
conscious.
There’s
some reason to doubt this argument.
Among other things, according to this argument it seems our reason for
concluding other humans have minds isn’t the evidence for the conclusion, but
the deeply troubling moral repercussions of not doing so. But if that is what is determining whether or
not we think AIs are conscious, it’s not obvious that we should accept premise
2. Premise 2 seems to presuppose that
our belief is based on evidence, but the third premise seems to presuppose that
our belief is really based on moral reasons.
So there is some tension in the argument. It could be that we have the same evidence for believing AI are conscious as we do for believing other humans are conscious, yet have morally compelling reasons to believe humans are conscious without being similarly compelled in the case of AI.
That’s
not the main problem with the argument, however. The problem is with premise 1. We are in fact much less justified in
believing that AI are conscious than we are in believing that other humans are
conscious. Our justification for
believing other humans have conscious states is not merely, or perhaps even
primarily, a matter of believing in a general rule connecting behavior to
mental states. Crucial to our inference
about other people is the premise that we are the same sorts of creatures with
the same sorts of brains, and those brains are responsible for the mental
states we have. This is also why we tend
to think that non-human animals, like dogs and pigs, have conscious states even
though they behave quite differently from us in many ways: they have brains
that are relevantly similar to ours, and have parts of those brains that are
relevantly similar to the parts of our brains that seem to be associated with
pain. Assuming we continue building AI
on the basis of computer architecture, AI will not have these commonalities
with humans. They will have more
commonalities, in fact, with things we have reason to believe are not
conscious—computers. This fact should
undermine our readiness to ascribe conscious states to them on the basis of
behavior, since we know that unlike us their behavior is underwritten by
systems that themselves do not seem to be of the sort that underwrite conscious
experience.
In
light of these considerations, it’s tempting to conclude that we can know that
AI will not be conscious.
Consider the following argument:
- Consciousness is a result of the sort of
organic brains that humans and many other naturally evolved animals possess.
- AI will not have organic brains.
Therefore, AI will not be conscious.
While this argument looks good,
it is not. The problem is that just
because our consciousness is a result of organic brains doesn’t mean that there
can’t be things that are conscious with different sorts of systems. It is similar to the following argument:
- Breathing
is a result of our having lungs that take in oxygen.
- Fish do not have lungs.
Therefore, fish don’t breathe.
The problem, of course, is that
while we breathe with lungs, there are other ways to breathe. So we need an argument that there aren’t
other ways to be conscious.
Another
argument was suggested by our earlier discussion, however. Quite plausibly computers are not conscious,
and AIs will just be more complicated versions of the same thing. So perhaps that’s reason enough to think they
aren’t conscious. The argument would be
something like this:
- Computers (like home PCs) are not conscious.
- AI will
be based on similar architecture to computers.
- If AI uses similar architecture to computers, it will not be conscious.
Therefore,
AI will not be conscious.
This argument can seem
persuasive at first, but it probably shouldn’t be accepted. Computers are probably not conscious, but
that is no reason to believe that something like them in certain respects—and
very unlike them in others—won’t be. To
put the point in other words, it might not be the type of physical architecture
that prevents our home computers from being conscious. It could be their relative simplicity, or something having to do with how they are programmed, that
prevents them from being conscious. AI
will likely be very different in these respects. It might be different enough to generate the
same sort of consciousness that we get from our biological brains.
So,
where does this leave us? Unfortunately,
it might leave us in the position that we simply cannot know whether or not AI
are conscious. Our best reason for
thinking other people are conscious doesn’t apply in the case of beings so
different from us, but we can’t conclude that those beings aren’t conscious
because we simply don’t know what sort of systems can possibly give rise to
consciousness. If there are such systems,
only they will be in a good position to conclude that others of their ilk are
conscious. That is, the following
argument might be sound:
- Conscious
beings only have justification for believing that other beings of their
physical type are conscious.
- AI will
not be of the same physical type as human beings.
Therefore,
human beings cannot know whether AI are conscious.
If this argument is sound, it
seems that we might be in the very frustrating position, with AI, of being unable to know whether it is conscious. Does this
matter? Arguably it does. A lot.
We are
often mostly concerned with what artificial intelligences can do, not whether
they are conscious or have certain conscious feelings. The Turing Test, the classic benchmark for
artificial intelligence, doesn’t even take consciousness into
consideration. (Remember—just because
the robot is aware of what you are saying in the sense that it registers it and
generates an output doesn’t mean that it is conscious in the sense that there
is something it is like for it to register what you say or generate an
output.) Nevertheless, there are several
reasons we really should care about whether or not robots are conscious.
Most
obviously we should care about whether artificial intelligences are conscious
for their own sake. To keep things simple,
suppose we have a humanoid robot, and it is programmed to be as human as
possible. If you hit it, it will say
“ow” and withdraw, begin to distrust you or, depending on its disposition,
hit you back. As we’ve seen, this
behavior seems to leave open the question of whether it is actually feeling
anything when it is hit. If it is,
though, surely that gives us an important moral reason not to hit it or to
cause it pain. As Peter Singer points
out, the mere ability to feel pain would seem to lead us to have certain obligations
toward it. The fact that a cat feels
pain when it is burned generates an obligation not to burn it. It may not be the only reason burning a cat
is morally wrong, but it is a big reason.
If toy manufacturers sold a stuffed cat that screeched when burned, we
might not want to burn it—it might smell, the screech might be horrific, and
burning toys to make them scream is generally not something nice people want to
do—but burning it wouldn’t be harming the toy cat.
Put another way, if a child owns a toy cat, and we set it aflame just
for fun, we are wronging the child. We
are, plausibly, violating our obligations to the child. If, however, because of its complicated AI the
toy cat is conscious, we are not only wronging the child, we are wronging the
cat. We are violating our obligations to
the cat (as well as those we owe to the child as owner of the cat). The cat is suddenly morally relevant in
itself, and not merely as a possession of the child.
If
artificial intelligences are conscious, they appear to be morally relevant in
this significant sense—we ought to behave a certain way toward them regardless
of who they are owned by or what purposes they serve. Indeed, if the purposes they serve, or that ownership itself, cause a certain amount of conscious harm, it is prima facie morally
impermissible to continue using them in that way. Long story short, if AI is conscious, then
robots have rights that we’ll have to consider.
Consciousness alone doesn’t generate the whole slate of rights that humans have, but it generates some.
If
this is right, and if our earlier consideration about knowledge of AI
consciousness is correct, then with AI we are likely to be in a position of not
knowing whether they have rights. Given
that artificial intelligences are likely to comprise a big part of our future,
this means that we will not know whether major players in our lives have rights. This is significant, and it
is likely to be a point of social tension in the future. There will likely be those who argue that AI
deserve rights and that we cannot use them like tools, and there are likely to
be those who think that manmade, nonorganic machines can never be conscious and
that they do not have rights. One can
imagine something not unlike the friction between abolitionists and
slaveholders arising from this tension, and it probably has a similar potential to tear society apart.
We’re
a long way from this social situation, and I strongly suspect we’re a long way
from this technological situation, but that’s not a reason to put off thinking
about it. As convenient as it is to focus
on what AI can do, it is morally imperative that we think about what it can
feel.