
Choosing a Kantian Maxim

Explaining anything about Immanuel Kant’s philosophy in a short blog post is a daunting and perhaps foolish task, but I am nothing if not undaunted and foolish.

I’d like here to address a particular problematic aspect of Kant’s ethical philosophy (and don’t let the terminology scare you off — it’s not as difficult as it’s about to sound): How is one supposed to go about applying Kant’s categorical imperative by way of universalizing a personal maxim?

Kant’s categorical imperative is the only pure (he had a thing about purity) moral law he could come up with, and it boils down to this: “Act only on that maxim by which you can at the same time will that it should become a universal law.” A maxim is a personal “ought” statement, like “I ought to save that puppy from that oncoming truck”. A universal law is generated from a maxim by applying it to the entire rational population. E.g., “Every rational person ought to save puppies from oncoming trucks.” And Kant’s categorical imperative asks us to use this process every time we wish to make an ethical choice: Come up with a personal maxim for the situation; universalize that maxim; and see if that universal law is something that should be followed by every rational person in every such situation.

Lying

Let’s go through an example of Kant’s process. Let’s say you’re faced with an instance where lying would be expedient. Here, then, is your personal maxim for the situation:

Maxim: I ought to lie in order to get out of a jam.

And then Kant asks you to universalize it:

Universal Law: Everyone ought to lie in order to get out of a jam.

According to Kant, this universalized version of your personal maxim shows us that your maxim is in fact immoral. Even though your maxim may seem harmless, and is certainly beneficial to you in the short term, extending its reach to the whole of humanity reveals something very bad. A world where everyone lies in every dicey situation is, well, a world in trouble. And, thus, according to Kant, you should never lie. Period. No exceptions.

Lying to Nazis

This position leads to some obvious problems.

Say you’re in 1940 Germany, and you are harboring your Jewish neighbor in your attic in order to protect her from the Nazis, who would like to find and kill her. Now imagine that the Nazis knock on your door and ask you: “Are you hiding any Jews in your attic? We’d like to kill them if you are.” The relevant moral question here, of course, is: what do you do? Perhaps, as Kant thought, lying is a bad thing, but if you tell the truth in this situation, it will lead to your neighbor’s unwarranted death, which certainly seems worse, on the face of it.

Let’s look in a little detail at how Kant might have examined this situation. His logic went something like this:

  • If it’s okay for you to lie, then (according to the universalization of this maxim) it’s okay for everybody to lie.
  • But if everyone lies, then no one will ever believe anything anyone says.
  • And, thus, lies would become completely ineffectual.
  • Therefore, lying is a rationally inconsistent activity — it leads to its own conceptual destruction.

This rational inconsistency is at the heart of Kant’s claim that lying is immoral — he thinks that ethics has to be based on irrefutable logical principles if it is to be anything besides an argument over opinions. A concept that leads to its own self-destruction certainly shows us that there is something inherently wrong with it. And so lying, in virtue of this, is immoral.

Choosing Your Maxim

But let’s look more closely at the procedure of picking your maxim in the lying example.

I should lie in order to help someone.

Is this a good candidate for a personal maxim? Well, no, not really. It’s certainly not generally applicable to moral situations. For instance, one could pretty easily argue that lying in order to help a mad bomber who is about to kill a thousand innocent people is probably not a very ethical thing to do.

I should lie in order to keep someone safe.

No, this has the same problem… what if you’re lying in order to keep the mad bomber from being arrested? This is arguably not a moral thing to do.

I should lie in order to save a life.

We’re getting better, but we still have the same problem lurking. If your lie is to save the life of an evil person, it’s at least arguable that the lie is not the morally right thing to do.

So let’s include something in our maxim to account for the idea that you are lying to protect someone innocent:

I should lie in order to save an innocent person from death at the hands of an evil person.

What happens if we universalize this maxim?

Everyone should always lie in order to save an innocent person from death at the hands of an evil person.

This is not bad, actually, but there’s still the Kantian objection of conceptual self-destruction lurking: If we always lie to evil people who want to kill innocent people, the evil people will start to catch on, and thus the lies will become self-defeating.

In fact, the example of lying is one of the best for Kant’s system — when he applies his system to other sorts of moral cases, it all starts to go to hell. But with lying, he has found a case where there is something internally irrational about the endeavor, when applied universally. Still, I’d like to talk for a moment about a general problem with Kant’s procedure: How, exactly, do you go about choosing your maxim?

The Problem of Specificity

One major problem here is that of specificity of the maxim you choose.

You could make your maxim very general:

I should lie to strangers.

This is just about the most general maxim you could use here, and it certainly isn’t universalizable. Not only would you not want to universalize it (“Everyone should lie to every stranger” would be an odd moral rule!), but it harbors the same problem of lies being self-defeating.

What about if you go to the other extreme, and choose a very specific maxim?

I should lie in order to save the life of the Jewish person hiding in my attic in 1940 Germany from the Nazis who will kill her.

This is about as specific as you can get with your maxim. And actually this is pretty well universalizable, because by universalizing it you don’t lose much specificity — your universalized law is still quite specific and actually probably a good moral rule:

Everyone should lie in order to save the life of the Jewish person hiding in Alec’s attic in 1940 Germany from the Nazis who will kill her.

(You might generalize the universal law here a bit more: Everyone should lie in order to save the life of the Jewish person hiding in his or her own attic in 1940 Germany from the Nazis who will kill that Jewish person. Still, this is arguably easy to accept as a good universal law.)

The issue here is that very specific maxims will be easy to universalize, while very general ones won’t. And this is a problem because very specific maxims will usually be very uninteresting as the basis of moral tenets, while very general ones will usually be the interesting ones.

Imagine that instead of a moral law like “Murder is wrong”, we had a law that said “Murdering Joe Smith on August 24, 1968, because he applied the wrong postage to a letter, is wrong”. Other ethicists would mercilessly laugh us out of the business. Our law may be true, but it is not very interesting.

So the only way to use Kant’s procedure to generate a sound moral rule is by picking a maxim that is so specific that it is morally mundane.

Other Problems With Kant

There are a million and one problems with Kantian ethics (although there are a million and two Kantian ethicists in the philosophical community today). But perhaps the most obvious concern with Kant’s ethics is that it doesn’t account for the ends of one’s actions (in fact, it explicitly refuses to). Most of us are disposed to say that killing a mad bomber in order to save a thousand innocent lives is a moral action, regardless of the fact that it involves killing someone. Kant disagrees, saying we can’t rely on a good outcome (saving a thousand lives) as the basis of our ethics.

He’s got a point. What if you decide to kill the mad bomber, but by a fluke of luck you actually wind up wounding him instead, and he escapes, only to kill ten thousand people the next day? That fluke of luck turns you from a hero into a villain. This idea of moral luck is a fascinating topic on its own, but for our purposes here, it does cast Kant’s hardcore position in a somewhat better light. If good outcomes are dependent on luck, then perhaps a genuinely moral decision shouldn’t depend on its outcome — perhaps a good act is good no matter what the outcome.

Famously, a school of moral philosophy called utilitarianism (or more generally consequentialism) sprang up in direct opposition to this perspective. We’ll talk about some of its pluses and minuses in a future post.