Saturday, October 17, 2009

Beyond right and wrong

There is a serious problem that moral philosophy faces, and until it is properly confronted the work of ethicists, no matter how interesting and illuminating, is doomed to ultimate failure. This problem goes by the unassuming name of moral realism.

Moral realism is the idea, which most of us hold by default, that there are moral facts that are true in the same way that, say, 2 + 2 = 4 is true. It means that saying "murder is wrong" is capable of being literally true or false (and obviously, it is usually said to be true). Most, though by no means all, moral philosophers are moral realists. They believe that while individuals may be mistaken about moral truths, these truths do exist, waiting to be discovered through philosophical inquiry and glimpsed through our intuitions.

Even though I know it is the default, I am frankly astounded that this is still considered a reasonable position by the bulk of active philosophers, whose collective intellect is admittedly vast. It only takes a moment's thought for anyone who understands evolution to see that moral realism must be an illusion, though disproving it using the methods of philosophy is a challenge.

If we evolved our moral intuitions over time from prosocial instincts that proved useful to our primate ancestors, how and when, exactly, did moral facts become real? It seems there are three possibilities.

1. God did it. Any serious philosopher has already rejected this joke of an answer.

2. Moral facts are features of the universe and always existed. Really? Before there were humans thinking about these things and acting morally, before moral choices were even an option, there were already concepts of right and wrong floating in the ether, waiting for humans to evolve and find them? This is as absurd as god's decree.

3. At some point, humans evolved morality and then these facts became real. This at least admits that humans are the genesis of morality. But it still seems rather silly to suppose that, if human minds create morality, it could be anything other than whatever human minds create. That means moral truths can't be "out there" waiting to be discovered, because they can't exist until they are invented.

As it turns out, neuroscience shows where "moral truths" come from. When people are placed in an fMRI scanner and asked moral questions, they give two kinds of answers. First are intuitive answers about things that are "just plain wrong." These are quick emotional responses or gut feelings; they are remarkably consistent across different people and light up the emotional regions of the brain. Second, for complicated or unfamiliar situations, there are cognitive responses based on thinking through the problem, which light up the general reasoning parts of the brain. What's interesting, however, is that when people's answers go against the typical moral intuition, they arrive at them through cognitive reasoning, not by having different intuitions.

Through what is no doubt an astounding coincidence, the emotional gut-feeling responses magically map onto the rules, rights, and duties that the various systems of deontological (that is, duty-based) ethics require, regardless of the convoluted logical justifications those systems contain. It's almost as if these moral philosophers just decided that their gut reactions are moral facts and invented a justification for them. My tongue is firmly in cheek, of course: it is obvious that this is precisely what they did. Some plainly admit it; others genuinely believe they derived these facts independently.

Equally unsurprising is that when people use their cognitive reasoning to find answers to moral dilemmas, they tend to give consequentialist answers. They override the "rules," look at the consequences of the acts in question, and choose the act with the best outcome. Nearly everyone agrees with consequentialism to an extent, or in certain cases. We all want to do what's best for people; we just restrain that impulse when our intuition tells us otherwise.
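To put that decision rule in the driest possible terms (a crude sketch of my own, not anything taken from the studies mentioned above): if A is the set of acts available and W_i(a) stands for how well person i's life goes under act a, the consequentialist picks

\[ a^{*} = \operatorname*{arg\,max}_{a \in A} \sum_{i} W_i(a) \]

that is, the act with the greatest total welfare. The W_i here are placeholders for "how well each life fares," not quantities anyone actually measures.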

Deontologists have to account for the fact that our intuitions are inconsistent. These intuitions evolved as convenient mental shortcuts to problems faced by our ancestors over millions of years, and they aren't necessarily suited to the situations we find ourselves in today. As a result, deontologists find themselves inventing ever more complicated addenda to their rules, so we get things like "Do not kill, unless a greater good would result from the killing, you do not intend the death of the victim even if it is a foreseeable consequence, and the killing is the result of merely redirecting an already existing threat onto a person previously unthreatened." All this because our ancestors had no indirect ways of killing each other, so we have an evolutionarily useful intuition against "personal" killing, even for a good reason like saving more lives, but no such intuition against "impersonal" killing for the same reason. Our minds know that five deaths are worse than one, but whether it seems right or wrong to trade the one for the five depends on how the deaths happen. An arbitrary accident of evolution is promoted to a moral fact by deontological philosophers.

Saying something is "wrong" is not reporting a fact that it is wrong, no matter how much we feel that it is. Our feeling, even our overwhelming "it just plain is" kind of feeling, is nothing more than an instinct, and the fact that it applies to some situations in the modern world and not to others is arbitrary, except in the sense that we can see well enough the non-arbitrary reason it arose in the first place.

Following Joshua Greene, I think in recognizing this we should do away with "right" and "wrong" in their moral senses. I also agree that we should do away with being for and against things without having reasons beyond gut feelings and prehistoric intuitions. Instead of saying "torture is wrong," we should say "I am opposed to torture because..." and give our reasons. Different people will find different reasons compelling, but only by airing them can we hope for at least some consensus. Saying various things are just plain wrong, when we disagree about those things, gets us nowhere.

It's obvious why philosophers are hesitant to do away with moral realism. Aside from the air of authority moral truths bring, they fear that without real moral truths we would descend into nihilism or moral relativism.

I don't think this is the case. I think that morality is both subjective (as opposed to objective, as the realists have it) and universal (as opposed to relative). That is, while it is true that making a moral judgment is in a sense just giving my opinion, it doesn't follow that I should then accept other people's opinions as equally valid and shrug my shoulders when we disagree. It is my opinion that everyone should agree with me, after all. What antirealist morality leads to is not relativism but a world in which moral disputes are settled through argument and evidence rather than by claiming to have all the answers.

And now comes another part that a great number of moral realists also fear: I tend to further agree with Greene that when you strip away prehistoric gut feelings and start basing your moral judgments on evidence, you are naturally led to utilitarianism, or at least to some form of consequentialism. I have fairly recently stated my opposition to consequentialism, because this is a conclusion I've tried to fight intellectually for some time (literally years, at this point). After all, consequentialism occasionally endorses outcomes that intuition insists are wrong. Nobody wants to be thought a monster. But if we accept, as we must, that there is no real "right" and "wrong," if we accept that intuitively correct outcomes are often arbitrary, all without rejecting our empathy with beings whose lives can fare well or ill for them, we find ourselves simply wanting to make those lives go as well as we can.

That doesn't mean consequentialism is right (because, objectively, nothing is), but it does mean that consequentialism is almost certainly the inevitable remainder of morality once we are freed from our evolutionary baggage. Consequentialism should be seen, then, as a goal, but we needn't beat ourselves up if we fail to bring about the best consequences in absolutely all cases. It isn't "right." It isn't our "duty." We are animals and will often find our instincts guiding our choices. But consequentialism is about making everyone as well off as they can be, and that's surely something we can strive for.
