As I am investigating the topic of morality (whether I have anything interesting to say about it remains to be seen), I bought “The Moral Landscape: How Science Can Determine Human Values” by Sam Harris. I was
not surprised to see that on the first page, in the Introduction, Harris
writes: “I will argue, however, that questions about values – about meaning,
morality, and life’s larger purpose – are really questions about the well-being
of conscious creatures.” I am glad to have yet another example that humans use
consciousness as a property defining objects of morality.
The notion of “well-being of conscious creatures” is
repeated numerous times throughout the book. On page 32, Harris explains why he
chose consciousness as the basis for morality:
“Let us begin with the fact of consciousness: I think we can
know, through reason alone, that consciousness is the only intelligible domain
of value. What is the alternative? I invite you to try to think of a source of
value that has absolutely nothing to do with the (actual or potential)
experience of conscious beings. Take a moment to think about what this would
entail: whatever this alternative is, it cannot affect the experience of any
creature (in this life or in any other). Put this thing in a box, and what you
have in that box is – it would seem, by definition – the least interesting
thing in the universe.
So how much time should we spend worrying about such a
transcendent source of value? I think the time I will spend typing this
sentence is already too much. All other notions of value will bear some
relationship to the actual or potential experience of conscious beings. So my
claim that consciousness is the basis of human values and morality is not an
arbitrary starting point.”
There are a couple of problems here. First of all, Harris
does not present any constructive argument in favor of using consciousness as
the starting point. He only says that all alternatives he can think of are either
uninteresting or related to consciousness. This is an argument from ignorance,
a logical fallacy, which Harris should be familiar with as an outspoken
atheist. Unfortunately, Harris keeps on using arguments from ignorance in his
book (see also p. 62 and p. 183).
Secondly, let us for a moment consider the world of ants (rather than humans). Ants are social insects with complex rules of interaction. The problem is how to design these rules in order to
maximize ants’ well-being (e.g. “thou shalt not kill another ant from your
nest”). Or, to put it more generally, let us say we have a population of social agents of any kind: they may be simple computer programs implemented in a cellular automaton, or super-intelligent aliens who have no characteristic that we would
recognize as consciousness by any modern definition (they do not have brain
tissue, they do not smile, frown, sleep, cry, or talk). How do we go about
designing optimal interaction rules for their population, i.e. how do we design
their morality? If use of consciousness is necessary, does it mean that we
cannot design morality for creatures that do not have it? It seems that this is
what Harris is thinking: “altruism must be (…) conscious (…) to exclude ants”
(p. 92). Why not ants? If we are to solve the much more complicated problem for humans, would it not be a good idea to start with the much simpler problem for ants? Harris seems to think that we cannot optimize ants’ behavior but that we can optimize human behavior. Why?
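To make the thought experiment concrete, here is a toy sketch in Python. Everything in it is an illustrative assumption of mine (the one-parameter sharing rule, the numeric thresholds, the definition of “well-being” as the fraction of agents above a survival level), not anything from Harris. The point is only that one can pose and solve a “design the interaction rules to maximize well-being” problem for agents that have no consciousness by any definition:

```python
import random

def simulate(share_prob, steps=500, n_agents=50, seed=0):
    """Toy population of mindless agents with one tunable interaction
    rule: an agent holding a surplus shares half of it with a random
    other agent with probability `share_prob`. Returns 'well-being',
    defined here (arbitrarily) as the fraction of agents whose
    resources stay above a survival threshold."""
    rng = random.Random(seed)
    resources = [rng.uniform(0.0, 10.0) for _ in range(n_agents)]
    for _ in range(steps):
        i = rng.randrange(n_agents)
        j = rng.randrange(n_agents)
        resources[i] += rng.uniform(0.0, 1.0)  # agent i gathers resources
        resources[j] -= rng.uniform(0.0, 1.0)  # agent j expends resources
        # The interaction rule under design: surplus sharing.
        if resources[i] > 5.0 and rng.random() < share_prob:
            transfer = (resources[i] - 5.0) / 2.0
            resources[i] -= transfer
            resources[j] += transfer
    return sum(r > 2.0 for r in resources) / n_agents

# Crude "morality design": search over the rule parameter for the
# sharing probability that maximizes population well-being.
best_wellbeing, best_rule = max(
    (simulate(p / 10.0), p / 10.0) for p in range(11)
)
```

Nothing in this loop smiles, sleeps, or experiences anything, yet the optimization problem is perfectly well posed; whatever makes consciousness necessary for morality, it is not needed to define or maximize a population’s well-being.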
We already know that, despite what Harris claims, the choice of consciousness is arbitrary: it is a human intuition shaped by our evolutionary history, and Harris tries (and fails) to rationalize it. And the question that needs to be answered first is: should we follow our intuitions at all? Or, more precisely: why, when, and which intuitions should we follow, and which should we discard?