Pascal’s Wager is a famous argument from the 17th-century polymath Blaise Pascal, urging us to bet on God’s existence even if we’re not convinced of it.
In its basic form, it goes something like this: If you believe in God and he doesn’t exist, you incur a small loss (you’re wrong, and you have given up some of your freedom and time worshiping something that isn’t real). If you believe in God and he does exist, you win an infinite reward (eternity in Heaven). If you don’t believe in God and he doesn’t exist, you get a small gain (you’re right, and you get to live life without religious strictures). If you don’t believe in God and he does exist, you are infinitely punished (eternity in Hell). The argument is usually presented with a 2×2 matrix, like this:

                God exists             God doesn’t exist
Belief          infinite reward        small loss
No belief       infinite punishment    small gain
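The argument can also be sketched in expected-value terms. This is just an illustrative sketch, not Pascal’s own notation: the finite payoffs (±1) and the probability are arbitrary stand-ins, with infinity representing the infinite stakes.

```python
import math

def expected_value(p_god_exists, payoff_if_exists, payoff_if_not):
    """Expected payoff of a strategy, given the probability that God exists."""
    return p_god_exists * payoff_if_exists + (1 - p_god_exists) * payoff_if_not

p = 0.001  # any nonzero probability, however tiny, will do

ev_belief = expected_value(p, math.inf, -1)     # infinite reward vs. small loss
ev_no_belief = expected_value(p, -math.inf, 1)  # infinite punishment vs. small gain

print(ev_belief)     # inf  -> belief has infinite expected value
print(ev_no_belief)  # -inf -> non-belief has infinitely negative expected value
```

As long as you assign God’s existence any probability greater than zero, the infinite payoff swamps everything else, which is exactly the force of Pascal’s argument.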
Viewed this way, the Belief column looks more attractive than the No Belief column! And so, Pascal says, you should put your metaphysical chips there. Case closed!
Of course, you might be thinking: Obviously, you can’t simply come to believe something because it seems like a good bet. Pascal has an answer for this, and claims that if you’ll only continue to act as if God exists, sooner or later, you’ll actually come to believe it.
Even if he’s right about this cognitive-behavioral outcome, it turns out that the wager isn’t as simple as Pascal would have it. For one thing, there are thousands of gods one could believe in, each with its own infinitely good Heaven. (Of course there are thousands more one could worship with no heavens at all, or with only finitely good heavens. But we’ll leave those aside for now.)
Now, if all of these possible gods really do offer up an infinitely good Heaven, but only one of them actually exists, the math says you should still bet on one of them. Lord knows (ha) how you’d decide, but Pascal’s wager is still reasonable here.
However, if there are actually an infinite number of gods that could exist and offer infinitely good Heavens, things change. How could there be infinitely many gods like this, you ask? It doesn’t seem unreasonable to me. Take one of the versions of the Christian God. Now say that he prefers people who own one bunny. (I’m saying this is possible, not necessarily likely!) Now take that God and say he prefers people who own two bunnies. Well, this can’t be the same God, because there are contradictory beliefs involved. So there are two possible gods generated this way. Well, keep going: A god that prefers people who own three bunnies; four bunnies; etc., ad infinitum.
I have no idea how the math would work in such a case: An infinite number of gods with infinitely good heavens, and only one of those gods actually exists… How do you choose which to bet on? Any mathematicians out there, please let me know what you think! My mathematical intuition says that choosing one item in an infinite bucket is an infinitesimally small gesture; that is, that it’s actually a zero-probability bet to choose one god out of an infinite number of them.
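My non-mathematician’s worry can at least be illustrated numerically: a blind bet over n equally likely gods has a 1/n chance of being right, which vanishes as n grows — and the limiting expression, 0 × infinity, is exactly the kind of thing standard arithmetic refuses to evaluate. Python’s floating point makes the same refusal explicit:

```python
import math

# For any finite number of gods, the naive expected payoff is still infinite:
for n in [10, 1_000, 1_000_000]:
    p = 1 / n                  # uniform chance of picking the one true god
    print(n, p, p * math.inf)  # each line ends in inf

# But in the infinite limit the probability is 0, and 0 x infinity is undefined:
print(0.0 * math.inf)  # nan ("not a number")
```

That `nan` is, in a small way, the computer agreeing with my intuition: the simple expected-value math breaks down once infinitely many infinite payoffs are on the table.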
In any event, we’re not even done tweaking Pascal’s wager at this point. There are a lot more possible gods we have to consider. There are, of course, thousands (if not infinitely many) “normal” gods that promise an infinitely good Heaven if you believe and behave, and an infinitely bad hell if you don’t believe and don’t behave. But there are also possible bizarro gods in the mix: Gods that promise infinitely good heavens for bad behavior and lack of belief, and infinitely bad hells for those who believe and act well. (Again, I’m not saying this is likely — just that it’s possible. I don’t see why it’s any less likely than the normal god scenario, but that’s a topic for another day.)
But wait! There are also possible nice gods, who reward believers and non-believers alike with an infinitely good heaven! And there are possible mean gods, who punish believers and non-believers with an infinitely bad hell, no matter what.
What are the odds now? Especially if any/all of these categories allow for infinitely many potential gods.
The math is completely beyond me. Which isn’t to say that it’s undoable, of course! But the simplistic math that Pascal’s wager generally relies on is completely inadequate, once you plumb its depths a bit.
The eminent mathematician Leopold Kronecker is reported to have said: “The natural numbers were created by God; everything else is the work of man.” Stephen Hawking titled a recent book of his God Created the Integers, in honor of Kronecker’s slogan. (Let’s not split hairs over the difference between integers and natural numbers. I’ll stick with the integers from here on out.)
This statement, despite its theistic and metaphysical flavor, was meant to be taken foundationally, not metaphysically. Kronecker was speaking about the idea that if we assume the existence of integers axiomatically, definitions of and theorems about other sorts of numbers can be rigorously provided. For instance, if you assume that integers exist, then you can define a rational number as the quotient of two integers.
Speaking of rational numbers, it’s time to brush up on a bit of your old school math. Remember that some rational numbers are finite, when turned into decimal representations. For example, 1/8 is 0.125, a nice, neat, terminating decimal expansion. Some rational numbers will have infinitely many decimal places, but even these will still be well-behaved in one way or another. For example, 2/3 is 0.6666… where the sixes repeat forever. Another well-behaved rational number is 1/7, which is 0.142857142857…, where the ‘142857’ group of digits repeats forever.
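Those two behaviors — terminating and eventually repeating — are exactly what long division produces, and a short routine can tell the cases apart by watching for a repeated remainder. (A sketch of my own; the function and its name aren’t from any standard library.)

```python
def decimal_expansion(num, den):
    """Decimal expansion of num/den (for 0 < num < den), via long division.
    A repeating block of digits is wrapped in parentheses."""
    digits = []
    seen = {}  # remainder -> position in digits where it first appeared
    rem = num
    while rem and rem not in seen:
        seen[rem] = len(digits)
        rem *= 10
        digits.append(str(rem // den))  # next decimal digit
        rem %= den
    if rem == 0:  # the division came out even: a terminating decimal
        return "0." + "".join(digits)
    start = seen[rem]  # a remainder recurred, so the digits cycle from here on
    return "0." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"

print(decimal_expansion(1, 8))  # 0.125
print(decimal_expansion(2, 3))  # 0.(6)
print(decimal_expansion(1, 7))  # 0.(142857)
```

Since there are only finitely many possible remainders when dividing by a given denominator, one of them must eventually repeat (or hit zero) — which is precisely why every rational number is this well-behaved.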
The subject of this essay’s title — irrational numbers — is not so tidy. Irrational numbers have infinitely long decimal expansions, but they don’t behave nicely the way rational numbers do — i.e., their expansions neither terminate nor cycle. Pi is a famous irrational number — it just goes on forever, never repeating, and no one has found any pattern within its endless chain of digits.
I would like here to posit that, contra the metaphysical interpretation of Kronecker and Hawking, irrational numbers — infinite and infinitely messy numbers — underlie (though, as you’ll see, I think even this is too strong a concept) the fabric of the universe, and that the integers are humanity’s unnaturally well-behaved grand creation. In fact, the universe does not contain anything genuinely integral or numerically tidy.
The Number 1
Let’s take the most basic of all integers: The number 1. When you speak about one apple or one table or one person, you are using the number 1 in its most starkly metaphysical role: you are using it to try to perfectly demarcate an object. This is the glory of integers: If we had to talk about 1.1258345257… apples, or pi tables, life would be difficult. And saying we have 1 apple in front of us not only lets us speak more easily about the world, it’s what allows us to talk about the world (and all its objects) at all. “An apple”, “the apple”, “one apple”,… all are ways to say that there is a thing called an apple, and that here’s an example of such a thing in front of us. This apple is perfectly demarcated — it sits completely formed and completely separated from everything else in the universe.
Integers, indeed, are epistemologically fundamental, and this is where they get their epistemological primacy from. Without them, we couldn’t understand much about the world.
But this doesn’t necessarily make them metaphysically fundamental (foundational, basic) in the way Kronecker et al. imply they are.
In fact, perfectly demarcated objects simply don’t exist in the physical world. They, and the integers behind them, are human fictions.
The world is inherently vague — all of its objects are ill-defined and imperfectly demarcated. We’ve blogged about this in the past, but I’ll recap what I mean by this here.
Heaps and Cats are Vague
There are objects that are obviously vague — that is, very few would argue that we can be utterly precise about them. Heaps are like this. Nobody thinks that when we say “a heap”, or “the heap”, or “one heap” we are speaking with much precision. A heap of sand, for instance, is still a heap if we take away (or add) a grain of sand from it. The heap is inherently vague and imperfectly demarcated.
Of course, you might be aware that this leads to an age-old paradox — the sorites paradox. Recapping our premise: A heap of sand is still a heap of sand if you remove one grain of sand from it. Well, if this is the case, then it’s still a heap if you remove another grain of sand from it. And another. And so on. But soon we will be in the position of saying that we still have a heap of sand even after all of the grains of sand have been removed. Paradox.
The problem is that there’s no absolute cutoff point where a heap becomes not a heap. E.g., it’s not like a collection of 500,000 grains of sand is a heap, but 499,999 grains is no longer a heap. If there were such a cutoff, then our initial premise would be wrong — there would be a clear case in which removing one grain of sand turns a heap into a mere collection.
So there’s no genuine integral description of a heap. Perfectly demarcating a heap is impossible. But perhaps that’s because heaps are a vague sort of thing in the first place. What about things that generally aren’t considered vague? What about a cat?
Well, let’s take my cat, Herbie, who, as I type, is staring at me, wondering when I’ll feed him. What if (as is no doubt true) Herbie has a semi-detached hair on him, on the verge of falling to the floor? Is this hair a part of Herbie or not? If there’s a fact of the matter here, then Herbie is in fact a perfectly well-defined, non-vague object.
But could there really be a fact of the matter about this? If there is, and, say, that stray hair is a part of Herbie, then I’d better be damned sure that hair never falls off him, or else he’ll suddenly be a different cat. But this isn’t what cats are like. They’re vague objects, losing and gaining parts constantly. This vagueness is inherent. We need, epistemologically, to speak about “the cat” or “one cat”, because otherwise we wouldn’t operate very well in the world. (Imagine a caveman denying that there was exactly one saber-toothed tiger in front of him, much to his detriment.) But cats (and saber-toothed tigers) don’t have to be perfectly well demarcated in order to tear you to bits — it’s just a convenient short-hand to think this way. (Does it matter if you get smooshed by one boulder and a pebble, or two boulders, or two pebbles, or, as is more rightly the case, 1.03123124… boulders? You’re still getting smooshed. Same thing with 1.000041424553… saber-toothed tigers.)
We all learned in geometry class that the world is divided into objects that are 1-dimensional (straight lines and their ilk), 2-dimensional (flat shapes like triangles and circles), or 3-dimensional (things like spheres and cubes).
Actually, geometry lied to you, or at least your geometry teacher did. The “world” of geometry isn’t real — it’s a mathematical fiction meant to show us what a perfectly tidy realm would be like. But the real world contains none of these sorts of tidy objects. In fact, there is no such thing as an integral dimension at all, and genuine 1-, 2-, and 3-dimensional objects (things that “exist” in such integral dimensions) are a mathematical myth. A 1-dimensional line segment is a human fabrication — an abstraction. Any line segment you can physically create and/or interact with is bumpy, gappy, and wobbly, bringing it into the second dimension. It also has thickness — if, say, it’s drawn on paper, the ink on the page is raised slightly off of the second dimension, bringing it into the third dimension.
What does this mean for the realm of the physical? Well, if the dimensionality of a physical line segment is non-integral, that means its measure is irrational — that is, it is only measurable by irrational numbers, not by integers. (I know I’m making the leap from non-integral to irrational here, but anything truly measurable by a rational number would have to be some sort of incredible anomaly. The Sierpinski triangle, for example — one of the nicest, neatest fractal shapes there is — has an irrational dimension of 1.58496…. If a relatively well-behaved mathematical object has an irrational dimension, what hope is there for the messy real world to be any less messy?)
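Where does a number like 1.58496… come from? For self-similar shapes like the Sierpinski triangle, the dimension falls out of a simple formula: a shape made of N copies of itself, each scaled down by a factor of s, has similarity dimension log(N)/log(s). The Sierpinski triangle is 3 copies at half scale:

```python
import math

def similarity_dimension(copies, scale_factor):
    """Similarity dimension of a self-similar fractal: log(N) / log(s)."""
    return math.log(copies) / math.log(scale_factor)

# Sierpinski triangle: 3 self-similar copies, each scaled down by a factor of 2
print(similarity_dimension(3, 2))  # ~ 1.58496

# Sanity checks: ordinary tidy shapes come out with integer dimensions
print(similarity_dimension(4, 2))  # 2.0 -- a square is 4 half-scale squares
print(similarity_dimension(8, 2))  # 3.0 -- a cube is 8 half-scale cubes
```

Notice that log(3)/log(2) is irrational: the tidy integer dimensions of squares and cubes are the special cases, not the rule.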
Reality is irrational-number based, not integer-based.
Perhaps this will be clearer with a brief discussion of the seemingly straightforward question: What if we try to measure the coastline of England? Well, it turns out there is no straightforward answer, thanks to the real world’s irrational messiness. Whatever answer we get, it turns out, depends on the length of whatever ruler we use.
If our coast-measuring ruler is a mile long, when we lay it along the coast, it will cut through parts of England’s interior wherever the coast is convex, and it will cut through parts of the ocean wherever the coast is concave. If we do this around the entire coast, we will get a very rough, rational measurement that will be wrong (though perhaps useful). We could decrease the size of our ruler in order to get a more precise measurement — our calculation will be very different for a one-inch ruler than for a one-mile ruler. It turns out that it’s more correct to think of things like coastlines as having what’s called in mathematics a “fractal” dimension — a dimension that’s not an integer. And, yes, such dimensions are (essentially always) irrational.
It turns out that coastlines’ dimensions fall somewhere between 1 and 2, depending on the intricacy of the coast in question. We are taught to think of these things abstractly — coastlines are, mathematically, just smooth one-dimensional curves. But reality isn’t so tidy.
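Lewis Fry Richardson’s empirical work (later popularized by Mandelbrot) captures the ruler effect in one relation: the measured length L scales with ruler size r roughly as L = M · r^(1−D), where D is the coastline’s fractal dimension — about 1.25 for the west coast of Britain. A sketch with a made-up scaling constant M shows the measured length growing without bound as the ruler shrinks:

```python
D = 1.25    # roughly Richardson's fractal dimension for Britain's west coast
M = 3000.0  # a made-up scaling constant, purely for illustration

# The smaller the ruler, the longer the coastline comes out
for ruler_miles in [100.0, 10.0, 1.0, 0.1, 0.01]:
    length = M * ruler_miles ** (1 - D)
    print(f"ruler = {ruler_miles:>6} mi  ->  measured coastline = {length:,.0f} mi")
```

For a smooth curve, D = 1 and the exponent vanishes: shrinking the ruler converges on one true length. For a fractal coast, D > 1, and there simply is no “the” length — only a ruler-relative one.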
Abstract is Too Nice
Actually, I don’t think that even infinitely messy irrational numbers genuinely underlie the fabric of reality. The idea that anything mathematical is somehow more ontologically foundational than the actual world is simply giving humanity too much credit (and the world too little). Mathematics is, despite what some philosophers believe, a human endeavor, subject to human foibles and error. The usefulness of mathematics applied to problems in the real world is without a doubt incredible. We can travel to the moon without (too much) fear of exploding in space; we can pinpoint small objects from great distances; we can create artificial cherry flavorings that (hopefully) won’t kill us. But, in the end, to think that mathematics underlies the natural world is an example of human hubris. It’d be better to say that mathematics describes things about the natural world, but even this could grant mathematics too much. Is it genuinely descriptive to say that the coast of England is of dimension 1.18747636658698…? Or is it just pointing out that our knowledge of this fact is limited, because we can’t plumb the depths of this ugly, non-repeating, infinitely long number?
So, really, the title of this post should’ve been “God (or the Big Bang) Created the World; Humanity Tries to Describe it With Irrational Numbers”. But that’s sort of unpoetic.
Sometimes doing the right thing involves a morally bad consequence. For instance, if someone is about to murder your family, and the only thing you can do to stop him is to yourself kill that person, it certainly seems that the right thing to do is to kill the murderer. And yet there is the morally bad consequence of killing someone at play here.
It’d be great for moral philosophers if we could adopt simple moral rules that apply in every situation, like “thou shalt not kill”. But situations like the above make it clear that the world is seldom so kind to those of us who would plumb the depths of ethical reality. So, if you’re looking to create a coherent moral system, you’d better be able to explain why it is that you are justified in killing a murderer who is intent on killing you and your family. Under what circumstances is killing okay?
Perhaps if we view the killing in this situation as a regrettable consequence of doing the right thing… That is, perhaps the moral action of saving your family — even if it results in the killing of someone — is the real action that you are undertaking. And perhaps the killing of the murderer is a tangential, unavoidable, bad moral consequence. In this analysis, we might be able to work things out to the effect that you are not a killer — you are a family-saver whose actions led (regrettably) to an unintended killing.
Aquinas, back in the 13th century, was thinking of a similar situation, and came up with four conditions that he thought must be met for acting morally with a tangential bad moral consequence:
The Nature-of-the-Act Condition. The action itself cannot be morally wrong.
The Means-End Condition. The bad effect must not be the means by which the good effect is achieved.
The Right-Intention Condition. The intention must be the achieving of only the good effect with the bad effect being only an unintended side effect. The bad effect may be foreseen, but not desired.
The Proportionality Condition. The good effect must be at least as morally good as the bad effect is morally bad.
If Aquinas’ analysis is on the money, then you can save your family with a clear moral conscience, despite the fact that you wound up killing someone in order to do it.
Unfortunately, in the case of killing the murderer, we hit a pretty significant problem right off the bat with condition one. The action itself here seems to be one of killing. Isn’t this almost definitionally morally wrong? To get himself out of this fix, Aquinas argues that the actual action undertaken here is saving one’s family, and that the killing is the bad but unintended side effect: “Accordingly, the act of self-defense may have two effects: one, the saving of one’s life; the other, the slaying of the aggressor.” I’m not sure I buy that, but let’s step through the other conditions…
Actually, condition two seems problematic as well. Indeed, the saving of your family’s lives seems to be a direct consequence of you killing the murderer. But Aquinas would argue that actually the bad effect of killing the murderer somehow comes later in the chain of cause-effect than the good effect of saving your family. Honestly, this seems like complete bullshit to me, but let’s keep riding this train to the station and see where we end up.
Condition three seems really to get at the heart of the matter. You don’t first and foremost intend to kill the murderer; you intend to save your family. Perhaps this is really the keystone of moral goodness. If you don’t intend to kill the murderer, then you’re not committing murder yourself. But if killing the murderer is something that has to happen in order for you to save your family, then so be it.
Condition four is also conceivably well-met by our case. Saving your family, ceteris paribus, is arguably at least as morally important in the positive as killing the murderer is in the negative.
Abortion and Euthanasia
The Catholic Church has used Aquinas’ thoughts on double effect to weigh in on two weighty moral issues of our time: abortion and euthanasia.
Many have argued that even if abortion is immoral, it is morally permissible to perform an abortion to save the life of the mother. The Church, contrary to this, has argued that saving the life of the mother in this sort of case would fail to meet both criteria one and two above.
But you can apply this same reasoning to the case of self-defense above. I’ll leave it to the reader to cogitate on this further. (Hint: If saving-your-family is the true and moral act in the first case, then why isn’t saving-the-mother the true and moral act in this case? In both cases, then, the killing would be consequent to the saving.)
The Church meant to draw a distinction between plain abortion and, for instance, performing a hysterectomy on a pregnant woman with uterine cancer. In the case of our cancerous woman (so goes the Church’s logic), the result of the hysterectomy would be an abortion, but the actual intention of the doctors is to save the woman from cancer, not to kill her fetus. This is a nifty bit of face-saving, but, again, isn’t the real intention of the doctors in the abortion case to save the woman’s life? And thus the abortion is secondary to the life-saving, and should be morally acceptable.
There’s a similar Church line taken on euthanasia. A doctor killing a patient with an overdose of morphine is (argues the Church) unacceptable, because it fails conditions one and two again. That is, even if the desired end-result is that of mercy, getting to that end via a morally bad act (killing) is wrong.
However, the Church allowed for doctors overdosing patients on morphine under the circumstance where the intention is to prevent pain. That is, if the act in question is the morally good one of pain prevention, then the unintended consequence of death is morally okay.
We’ll leave it to another day to discuss the absurdity of the presumed immorality of euthanasia, but note again that these two situations are really not that different. No doctor (or no doctor I’ve ever met, anyway) outright intends to kill her patients. They intend to ease suffering, and they know that death is often the ultimate and only suffering-ender that will work in some unfortunate circumstances.
Some will use the doctrine of double effect to justify their intuitions about trolley cases. For instance, in the standard case, a driver of a train with no brakes can either continue down his track and kill five unsuspecting workers, or divert the train down a spur and kill one unsuspecting worker. It turns out that most people believe that killing the one worker is the right thing to do in this situation. And often people will cite utilitarian reasoning here: ‘Well, one life isn’t as valuable as five, so it’s the right thing to kill one if you can save five.’
But if we change the circumstances of our thought experiment, the utilitarian justification loses some weight. Say the only way to save the five workers is to push a heavy object in front of the train. But the only object heavy enough is a fat man who happens to be above the tracks on a bridge. Would it be the right moral thing for you to push the fat man off the bridge and let the train run over him, saving the five lives further down the tracks? Well, it turns out that the general moral intuition here is that it’s actually not the right thing to do. And, if this intuition is correct, utilitarianism fails here. But the doctrine of double effect could be used to explain things! In the first trolley case, you don’t intend to kill the one worker on the spur. And your action isn’t really killing that worker — the action is saving the five workers by steering the train down a different track. The killing of the one worker that results from your action is regrettable, but is not the intended effect of the whole affair. But in the case of the fat man, you have to take direct action against the one person in order to save the five. Your action is directly killing the fat man.
As with the above analyses, I think there’s something actually amiss here. If you put an intermediate step in between your action and the fat man dying, that wouldn’t make it any more or less acceptable. There has got to be another analysis that we can apply.
And, in the spirit of cliffhanger serial short movies from the golden age of Hollywood, I’ll leave you with the promise that we’ll explore this different analysis in a future post…
I’ve been teaching my Intro Philosophy students about supposed proofs of God’s existence, and the problem of evil, and it dawned on me (years later than it should have) that those wanting to reconcile free will with God’s existence have a rather intractable problem with one aspect of God that is generally taken to be inarguable: God is omniscient; that is, God knows everything (or, if you want to be a little more wishy-washy about things: God can know everything — he needn’t necessarily know something until he wants to know it).
If I’m right, theists have two options here: Give up the notion that God is omniscient; give up the notion that we have free will. Neither is a comfortable position for most theists.
What I’m Going to Eat for Lunch
Let’s assume that God, as per most religious beliefs, is omniscient — he knows everything. If this is true, then God knows what I’m about to eat for lunch. If he knows what I’m about to eat for lunch, then there’s a fact of the matter about what I’m going to eat for lunch — that is, if he knows what I’m going to eat for lunch, then he can’t be fooled about it. If God knows I’m going to eat a peanut butter and jelly sandwich for lunch, then I will eat a peanut butter and jelly sandwich for lunch — I can’t suddenly change my mind and eat a veggie burger, because God would’ve seen that one coming from a mile away. That is, if I were going to eat a veggie burger, God, being omniscient, must have known I was going to do so.
Do you see the problem here, for free will? I’d like to be able to say that I can change my mind about my lunch — i.e., that I have a genuine choice in the matter of what I will eat for lunch. I’d like, in other words, to say that I have free will about my lunch choice. (Indeed, the word “choice” presupposes that there is free will involved here.) But if I appear to change my mind, this can’t be a genuine choice in a universe with an omniscient God. No matter how many decisions I appear to make on the subject of my lunch, God knows the end result. And if God knows the end result, then there is no choice in the matter — my lunch has been predetermined somehow.
Even if we take the squishier position that God doesn’t necessarily know what I’m going to eat for lunch — his omniscience is of the variety where he could know about my lunch if he wanted to — we run into the same problem for free will. If God could know what I’m going to eat for lunch, it follows that there is still a fact of the matter about it. If he could know that I’m going to eat peanut butter and jelly, then it is the case that I will eat peanut butter and jelly, and thus I don’t possess genuine free will here.
Determinism: The Home Game
If you still think that an omniscient God would allow for free will, play along with me and see if you get my point…
Me: God is omniscient, right?
You: Yup, that’s what they tell me.
Me: So God knows what you’re going to have for lunch, right?
You: Yes, that follows.
Me: Can you change your mind about what you’re going to have for lunch?
You: It sure seems like I can. When it hits noon, I get unpredictable!
Me: So let’s say I’m tight with God, and I get him to write down your choice of lunch for me in a sealed envelope.
Me: What were you just thinking you’d have for lunch?
You: I was thinking of a huge cheeseburger from Joe’s Diner.
Me: Oh, I heard that they just got cited for making their burgers out of rat parts and feces.
You: Gross! Okay, I’m changing my mind. I’m going to make myself a salad.
Me: [opening God’s envelope] Indeed, that’s just what God wrote down.
You: So it was predetermined the whole time!
Me: Yup. You didn’t really have a choice in the matter.
The metaphysical question at hand is this: Is the semi-detached hair a part of Pinky or not?
Any way you slice it, there’s some vagueness here. The more usual thought in philosophy is that the world is perfectly unvague — the world is utterly precise (the loose hair either does or does not belong to Pinky), everything just is whatever it is, and whatever vagueness humans encounter is simply a matter of human imprecision. Either our knowledge-generating faculties or our language faculties (or both, if there’s a difference), are imperfect, and incapable of discovering/representing the perfection of the world.
But there’s another possibility: The world itself is a vague place, and, even if we had perfect knowledge-generating faculties, we’d still struggle with issues of vagueness, because those issues are embedded in the fabric of nature.
So, let’s agree that there is indeed some vagueness at play, and ask: Is this vagueness actually in the world, or is it in our language/thoughts about an unvague world?
Unvague Cats; Vague Language/Thought
If the vagueness is just in our language, and not in the world, then there is a fact of the matter as to whether or not Pinky has that loose hair as a part of itself. If Pinky does indeed own that hair, then “Pinky” picks out the cat-like mass along with the loose hair.
As Michael Morreau sees it, this actually generates a metaphysical problem:
If vagueness is all a matter of representation, there is no vague cat. There are just the many precise cat candidates that differ around the edges by the odd whisker or hair. Since there is a cat,… and since orthodoxy leaves nothing else for her to be, one of these cat candidates must then be a cat. But if any is a cat, then also the next one must be a cat; so small are the differences between them. So all the cat candidates must be cats. The levelheaded idea that vagueness is a matter of representation seems to entail that wherever there is a cat, there are a thousand and one of them, all prowling about in lockstep or curled up together on the mat. That is absurd. Cats and other ordinary things sometimes come and go one at a time.
If the world is not vague, then both candidates — the cat-like mass with the loose hair and the one without it — are perfectly unvague cat objects, and if one is a cat then there’s every reason to say that both are. In fact there are thousands (billions? trillions?) of cats here, all walking around in one lump. So on the world-is-not-vague side, we have the repercussion of “Pinky” picking out one specific cat out of many taking up mostly the same space: Winky, Glinky, Zinky, Inky, Kinky, etc.
So, let’s try the world-is-vague approach instead. On the world-is-vague side, there’s just one cat, but that cat is itself vague. There’s no metaphysical fact of the matter as to whether or not that loose hair counts as a part of Pinky. But that loose hair doesn’t suddenly create two unvague cats: Pinky and Blinky.
What would be problematic about a vague world like this?
Perhaps the biggest problem would be representational. If Pinky is a vague cat, then we have no chance of ever compiling the perfect representation of him. (The perfect representation would include a representation of that loose hair, if it’s a part of Pinky; and it would not include that hair if it’s not a part of Pinky. But if it’s vaguely attached to Pinky, our representations will fail in one direction or the other.) Those prone to thinking that representations should strive for perfection will be most unhappy with this state of affairs.
A related problem crops up in the philosophy of language. Language philosophers like to think that names (like “Pinky”) pick out unique, unvague objects (like Pinky). But if Pinky is himself vague, then the name “Pinky” can’t unambiguously refer to Pinky. This is particularly problematic for anyone harboring vestiges of a description theory — if that loose hair may or may not belong to Pinky, then we have a problem coming up with a complete description, wherein that hair plays a part (or not).
What would be the payoff for accepting vague cats into our ontologies? The non-proliferation of tightly bound brother cats to Pinky, for one thing. (There is no need, if Pinky is vague, to posit the existence of Blinky, Winky, Glinky, et al, existing in nearly the same space as Pinky.)
It also buys us a platform to talk intelligibly about such metaphysical conundrums as the Sorites paradox. If, similar to cats, heaps are vague, as opposed to just our knowledge of heaps being vague, we can escape some of the problems inherent with talking about heaps changing over time.
We’ll be talking about the Sorites paradox in a future post.
For now, take some comfort in the idea that your knowledge of the world isn’t inherently imperfect. The world itself is inherently imperfect.
Of course, knowing that might make you uncomfortable again. Sorry.
Morreau, Michael. “What Vague Objects Are Like,” Journal of Philosophy 99, 2002.
If you spend any time mucking around in the philosophy of language, you’re going to run headlong into Gottlob Frege at some point. Frege, round about the turn of the 20th century, was a key figure in the emerging fields of logic and the philosophy of mathematics, but he may well be best remembered for his contributions to the theory of meaning.
What is Meaning?
The basic question that any philosophy of language must address is this: What can we say about the meaning of a word (and — what perhaps amounts to the same thing — the meaning of a sentence)?
A first stab at analyzing this is to say that the meaning of a word is just what it points to — what it designates or refers to. For instance, the word (or name, in this case) “Herbie” refers to my cat, Herbie. (Make sure to get your head around the difference between a word and an object referred to by that word. We’ll have a post about this “use/mention” distinction soon. For now, just stay alert to the use of quotation marks to distinguish a word from its associated object.) The word “Herbie” points to the creature that is, at the time of this writing, tapping my leg with his paw, trying to get me to play with him. (I’ll be right back…)
We can apply the same analysis to numbers. The ink-on-paper numeral “7” that you might write down in your checkbook or on a math test refers to the actual number 7, which for the sake of argument we’ll take to be some object out there in the universe somewhere. Similarly, and perhaps easier to comprehend, the words “seven”, “siete”, “sept”, and “sieben” all refer to the number 7 as well (in English, Spanish, French, and German, respectively).
If this is the right picture, it would give us a convenient way to explain how “seven” and “siete” both mean the same thing: It’s because both words refer to the same thing.
Reference Ain’t Enough
If this were all there is to meaning, then “12” and “7 + 5” would mean the same thing, because they both refer to the number 12.
But as Kant famously pointed out (in his analytic/synthetic, a priori/a posteriori distinctions), these two words/phrases might well mean different things.
To see why, let’s look at the following statement: “12 = 12”.
Compare that with this statement: “7 + 5 = 12”.
The first statement doesn’t say much except that a thing is always identical with itself. The second says something significantly new about 12 (that it’s the sum of 7 and 5).
If this is true, then “12” and “7 + 5” do not have the same meaning; and if this is the case, then there has to be more to meaning than the idea of reference. You can see this difference more clearly if you look at these in a different context.
“I know that 12 = 12.” One can know this without knowing anything about addition.
“I know that 7 + 5 = 12.” To know this, one has to know something about addition.
This becomes even clearer with a more complex mathematical fact.
“I know that 812,285,952 = 812,285,952.”
“I know that 24,789 x 32,768 = 812,285,952.”
Anyone can utter the first sentence without any more knowledge than ‘everything is equal to itself’. But to say the second sentence with any sort of certainty, you’d have to have done some complex calculations (or had a calculator do them for you). There’s something about the second statement that makes it meaningful in a way the first is not.
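For the skeptical (or the merely curious), the arithmetic does check out, and a quick sketch in Python makes the asymmetry vivid: the first assertion requires no computation at all, while the second hides a genuine multiplication. (This is just a throwaway illustration, not part of the philosophical argument.)

```python
# The reflexive identity: trivially true, no arithmetic required.
assert 812_285_952 == 812_285_952

# The informative identity: true only once the multiplication is done.
assert 24_789 * 32_768 == 812_285_952

print("Both identities hold, but only the second required computation.")
```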
The Morning Star and The Evening Star
The more usual example philosophers of language use is (happily for most of you) not mathematical.
The ancient Greeks, looking at the dark sky above them, noticed two very bright stars. One came up shortly after the sun went down in the evening, and was brighter than any other star around it; the other came up shortly before the sun came up in the morning and was similarly bright. They named these two stars “The Evening Star” and “The Morning Star”.
Well, maybe you saw this coming, but it turns out that these two stars were actually the same object: Venus. (Of course, not even a star after all, but a brightly reflective planet.) So here’s the referential picture the ancient Greeks had:
A few centuries later, astronomers gave us this picture instead:
Now, if reference is all there is to meaning, then these two sentences would have the same meaning:
“The Morning Star is the Morning Star.”
“The Morning Star is the Evening Star.”
Because, if we consider reference alone, those two sentences both translate to this one sentence:
“Venus is Venus.”
But clearly these sentences have very different meanings — the first sentence is obvious to anyone, even those without any knowledge of astronomy; the second sentence is something that one would only know by virtue of synthesizing some significant piece of astronomical knowledge, namely that “The Morning Star” and “The Evening Star” both refer to the same heavenly body: Venus.
So hopefully you’ll agree that reference can’t be all there is to meaning.
Frege’s idea was that while reference is important to meaning, there is another important dimension to meaning as well, which he called sense. He called the sense of a term the “mode of presentation” of the referent. So while “the Morning Star” and “the Evening Star” both refer to the same thing, they have different senses: the sense of “the Morning Star” is something like “the bright star that rises in the early morning”, while the sense of “the Evening Star” is something like “the bright star that rises in the early evening”. Same reference; different sense.
On this scheme, when we say “the Morning Star is the Evening Star”, we’re comparing senses, not references, and this is why it’s a statement of new knowledge (synthetic, à la Kant) and not just an obvious truth (analytic, à la Kant). “The Morning Star is the Morning Star” compares two terms that have not only the same reference but also the same sense. And that is not semantically interesting.
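Frege’s distinction can even be mocked up as a tiny data structure (a toy model of my own devising, not anything Frege wrote): each term carries both a sense and a reference, and the informativeness of an identity statement tracks whether the senses differ.

```python
# A toy model of Frege's sense/reference distinction.
# Each term is paired with its sense (mode of presentation)
# and its reference (the object picked out).
terms = {
    "the Morning Star": {
        "sense": "the bright star that rises in the early morning",
        "reference": "Venus",
    },
    "the Evening Star": {
        "sense": "the bright star that rises in the early evening",
        "reference": "Venus",
    },
}

# Same reference: "the Morning Star is the Evening Star" is true...
assert (terms["the Morning Star"]["reference"]
        == terms["the Evening Star"]["reference"])

# ...but different senses, which is why that statement is informative
# while "the Morning Star is the Morning Star" is not.
assert (terms["the Morning Star"]["sense"]
        != terms["the Evening Star"]["sense"])
```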
Sense Without Reference
One interesting consequence of Frege’s philosophy of language is that it turns out that not everything with a sense has a reference.
“The novel written by Richard Nixon” has a sense — it presents an idea to us in a clearly understandable way — but has no reference — Nixon never (as far as I know) actually wrote a novel. So in fact the meaning of a sentence might not have to rely at all on reference. “The novel written by Richard Nixon is long and boring” has a meaning even though the subject of the sentence doesn’t exist. We’ll take up this interesting idea in a future post.
Quandary Corner: In this new topic section, Alec and I will each discuss a particular ethical quandary and how we would deal with it, based on whatever ethical theory we take to be applicable at that time or in that situation. Will there be consistency across quandaries? Hahahahahahaha! Good question.
Today’s ethical quandary is an oldie but a goodie: A scientist has created a time machine that can take you back in time to when Adolf Hitler was a baby (let’s say at five months old). You will be sent to his nursery at a time that baby Hitler is alone. The nature of the time machine is such that you are only there for a maximum of two minutes. You cannot bring baby Hitler back to the future with you. You can, however, kill him. Ignoring the question of whether it is possible to change the past or not, should you kill Hitler when he was a baby or not?
My students, following my example, don’t take long before offering up “Hitler and the Nazis” whenever I ask for an example of great moral evil. Indeed, if there were a prohibition on mentioning Hitler, I’m not quite sure what I’d do, pedagogically. I suppose I’d have to resort to fictional super-villains. (Maybe I should anyway. It’s not as depressing to talk about Lex Luthor.)
So, faced with the opportunity to wipe out the source of perhaps the greatest evil perpetrated in the last couple of centuries, what would I do?
My initial thought is: Of course I would kill baby Hitler. But there are some complicating considerations that cross my mind soon after.
For one thing, Hitler was not alone in carrying out the Holocaust, and perhaps it would have happened regardless of his existence. It could well be that social forces were pointing Germany in that direction, and that another figurehead would’ve stepped into Hitler’s place, if Hitler hadn’t been available. If we’ve learned anything from time travel movies it’s that the whole thing is very unpredictable. But let’s put this worry aside for the time being, and address my feeling that the baby Hitler should be killed.
I imagine that this is the popular intuition here. But what is the basis for this intuition? Well, I can see two probable bases: utilitarianism and righteous vengeance.
Utilitarianism is the philosophical stance that ethical decisions should be made strictly on the basis of weighing the possible good and bad outcomes of your actions. Famously, utilitarianism says that, other things being equal, ten lives are more valuable than one life, and so an action that kills one person in order to save ten is ethically justified. (We’ll save the finer points of this theory for another post. Utilitarianism is not, as you might imagine, free from problems.) Well, if you can save ten lives by killing one, just imagine the scenario where you can save millions by killing one. That’s the scenario we’ve been tasked to analyze here. By killing the baby Hitler, we are ostensibly saving millions of lives that will eventually die at his command.
Another thing that makes me pause, when thinking about killing baby Hitler, is that I’m tremendously squeamish about death, and the idea of killing a baby (even an evil one) makes me blanch.
Now, psychological squeamishness, one could argue, has no place in ethics. If something is the right thing to do, and I don’t want to do it, I’m just wrong about that, despite whatever my superego is telling me.
But there is an aspect of this squeamishness that is actually philosophically relevant, and it revolves around the issue of virtue. Most theories of ethics are action-based. Utilitarianism, for instance, is supposed to tell you what to do in any given situation. Virtue ethics (Aristotle is most often credited as its founder) is based more on the idea of developing a good character — the idea is that if you are a generally virtuous person, you’ll generally make the right decisions when faced with moral dilemmas.
Well, it is reasonable to argue that it’s the sign of a virtuous character to be squeamish about killing a baby. And so perhaps killing the baby Hitler is the wrong thing to do, if we are to take virtue ethics seriously. Of course, it’s also the sign of a virtuous character to save millions of lives if you can, and so virtue ethics sends us mixed signals on this one.
The Right Thing To Do
In the end, I think that killing the baby Hitler is the right thing to do. And if you try to justify sparing his life, on the basis of virtuous squeamishness, you’re probably displaying the unvirtuous trait of cowardice.
There are some philosophers who argue that utilitarianism goes wrong in cases of human death, because it’s just wrong (and impossible) to weigh a life. But I say there’s something wrong with a theory of ethics that tells us we can’t weigh one life against millions.
Alec makes some good points in his write-up of the issue. I think that baby Hitler should also be killed, but I am not sure that it is for the same reasons. Let’s see.
What should we do if we have the chance to save a life, and we can do so at little to no peril to our own life? We should save the life. Few people would argue that point. Well, I might argue that point. I am not entirely convinced that life, least of all human life, is always worth saving, and that is not even based on the character or worth of the (human) life. That is a matter for another post though, perhaps. Here, I am content to go with the status quo and agree that life should be preserved.
If you see a person about to cross the street, but he does not see an oncoming car, you should alert the person to the danger, grabbing him if necessary (and, again, if it does not immediately imperil your own life). Should we save life if the only way that is available to us is to end the life of another? Whew. Good question. Do we have time left for this? We do? Dammit.
Hitler was a bad, bad, superbad person. If anyone ever deserved to die, it was Hitler. Perhaps, as Alec suggested above, Hitler was not the sole person responsible for all the evil attributed to his movement, but he was close enough to the sole person. Would most of it still have happened had Hitler never been? While that is an interesting question, I don’t see that it matters much here. Baby Hitler, as a baby (philosopher-speak: qua baby), is not evil, has not committed evil, and does not, in any modern understanding, exhibit evil tendencies or character. We are going to kill baby Hitler for what adult Hitler will bring about.
This is an interesting sense of justice: we are trying to balance a wrong that has yet to happen, but will certainly happen. The ‘balancing’ act, however, is such that it will ensure the evil never occurs. If that is so, then we are punishing a crime that, thanks to our own act, will never be committed, and our act is unjust. If we do not do this act though, then great evil will result, and that seems to make not committing the act unjust as well. There is a true dilemma here, and each horn is going to do whatever you think would make for the least possible cliché here.
This is not a true dilemma of the ordinary sort as we know what will happen if we do not act. We are 100% positive (Mel Gibson’s father aside) about what will occur if Hitler is not killed as a baby. Hence, I suggest that all the time travel does for this scenario is modify our verbs in a way that bothers us. In a justice sense, baby Hitler has to die for what we know adult Hitler will do.
Squeamishness, Utility, and the Right Thing
As for the squeamish factor that Alec notes above, I think I agree, but only as an interesting artifact. Can we trust squeamishness as a guide to morality? No, but neither does Alec think we can. It is a suggestion or a clue at best.
Would I feel squeamish about killing baby Hitler? I would like to say no, because of the greater good I would be serving, but that would be a lie. I would be a little squeamish; he is a baby after all. Were I to look at his little mustache and think for even a moment about the evil commands that would march out, albeit much later, from beneath it, however, I would full-on vomit with squeamishness were I not to kill him.
Lives will certainly be saved by killing baby Hitler and an enormous evil will be excised from the world. Will some other evil fill that void? More than likely, as the world seemingly sucks. Will that substitute evil be more evil than Hitler? Who can say? Baby Hitler should be killed because more lives will be saved, including infant lives, than will be lost by the singular act of killing baby Hitler, and this we can be remarkably certain of given our futuristic knowledge.
And if it turns out we were wrong, surely we would appear to ourselves in the past to stop us before it is too late. Right?
Arguing Over Nothing: A regular feature on the blog where we argue over something of little consequence, as if it were of major consequence. Arguing is philosophy’s raison d’être, and the beauty of an argument is often as much in its form as its content.
Today, we argue about the rough points of personal identity in Star Trek style teleportation cases. Given that the debate is essentially one about personal identity, the argument isn’t really over nothing; but the fact that teleportation is impossible makes the debate one that skirts around the edges of nothing.
Each philosopher is granted 500-750 words to state his/her case, as well as 250-500 words for rebuttal. The winner will be decided by a poll of the readers (or whoever happens to have admin privileges at the appropriate time).
Alec: I Am On Venus
So the standard teleportation scene in sci-fi goes something like this: You step into the teleportation chamber here on Earth, the technician presses a few buttons, a beam sweeps over you, and moments later you materialize in a teleportation chamber on Venus (a lovely vacation destination; bring your sunscreen).
Of course, sci-fi is not science, so it can gloss over the finer points of how this might work. Philosophers (bless them) parading as scientists have given us a couple of options regarding these finer points. (Scientists have stayed away from the issue because of it being “impossible” or some-such. Such negative Nancys.)
Option 1: Each particle of you is converted to energy and actually beamed through space to be reconstituted into matter on Venus.
Option 2: Each particle of you is scanned, and the teleportation chamber on Venus pulls particles from a pile of carbon and constitutes them one by one to match the original you on Earth.
The first option is “cleaner”, in that the you on Venus is pretty incontrovertibly you. It’s all of the same particles, in the exact same configuration, after all. The messy part is in the details of how exactly you could survive being ripped apart into atoms and rebuilt. Imagine your brain being deconstructed, particle by particle. At some point your identity will be in question, as your brain will be half gone.
The second option is more interesting, and seems (to this non-scientist) to be the more likely scenario. Your physical structure is essentially computed (analyzed in the minutest detail), and rebuilt as a perfect replica. Once the replica is created, it will be atom-for-atom identical with you, and so how could it fail to have the exact same memories and thoughts as you? How could it, in other words, fail to be you?
The problem, of course, with this scenario, is that there are now two of you. In the standard case, the original you is supposed to be anesthetized and killed after the scanning/reconstructing process. The new you (with all of your memories, and the exact molecular structure of you), wakes up on Venus, with no concern about the dead original on Earth. What happens (so asks the thought experiment) if the Earth-side anesthetization goes wrong, and the original you wakes up on Earth before being killed? The technician sheepishly says: “Um, sorry, but you have been successfully replicated on Venus, and you weren’t supposed to wake up here on Earth. I’m gonna get fired if I don’t kill you right now.” Would you be okay with this? Clearly not. And this intuition fuels some philosophers to say that the original you is you, and the replicant you is not you.
But what’s good for the goose is good for the gander. What happens if the technician on Earth has an untimely heart attack, and dies before anesthetizing and killing the original you. Now the technician on Venus says to the replicant you: “Um, sorry, but the original you on Earth hasn’t been successfully killed. There can only be one you, and since you’ve only existed for a few seconds, we figure you should be killed now.” Would the replicant you be okay with this? Of course not. The replicant you has the same memories, feelings, and thoughts as you do, and would not want to be killed by a technician, no matter what the circumstances. So the same intuition that causes some philosophers to say that the original you has sacrosanct rights and is thus clearly the one you, lets us argue that the replicant you also has sacrosanct rights and is also clearly you.
What are we to say about this? Well, I think we have to bite the unpleasant bullet that there are actually two yous in this scenario, each with complete human rights and responsibilities. They almost immediately will diverge, and so the “problem” of personal identity of having two of the “same” person is not really a problem after the initial reconstitution on Venus. In fact, as soon as the replicant you has any new experience, he is effectively a different person. Which one is the real you? It’s an unanswerable (and therefore bogus) question, I think.
Well, first of all, congrats to Alec for presupposing a big objection and then biting that bullet clean in two. I still disagree with him, but such is the nature of a friendship based on unquenchable hate.
Because Alec was kind enough to break his argument into two options, I will respond in kind. Option One is just as I would have described it, and though Alec takes the you-ness of the arrivee as incontrovertible, I controvert it just the same.
What are we to mean when we talk about a person and her identity, when questions of preservation or sameness arise? I tried to address the difficulties of this question over the past few weeks, and hopefully no one solution was seen as much better than any other (I strive for objectivity as much as possible). Let’s see what is going on here.
If you are just your parts, then option one results in you arriving at the destination, and Alec is right. Your parts are deconstituted at pod 1, shot through space in some sort of stream (the mechanics here do not matter), and are reconstituted at pod 2. However, I doubt that anybody else, Alec included, thinks of a person as just the collection of her parts. Should a serial killer dismember an individual and place all those body pieces in a bag, is that person in the bag? Surely not — who among us would say that was the person we once knew instead of saying that is what is left of that person? Were we to sharpen a pencil until all that was left was a pile of wood shavings, graphite, and an eraser, no one would point to that collection and label it as a pencil. Instead, we would all correctly say, that once was part of a pencil. I mention both a person and pencil to show that I am not going to argue for some sort of ineffable soul as the missing piece.
I think it has to be the parts and how those parts are put together, how they function, that makes the person or the object what it is. What does this say for the ‘stream’ of parts as they travel from one place to the other? You are, then you are not, then you are again. But what are we to say of you when you are not? Are you dead, only to return at a later time and different place? Are you merely gone? But to where? And how did you get there?
I am going to guess that Alec is against such a conception, once he thinks about it. Perhaps he will just bite the bullet and say that the stream is you, recognizable or not. Okay. What if we put a reflector dish in front of the receiver at pod 2 so that instead of being put together, you are bounced off into deep space, never to be caught in another receiver? Are you still existing? Do you cease to exist only when the potential for reincorporation ceases to exist? That then puts personal identity into terms of potentiality, of what ifs, of what you would be in the right circumstances. I am what I am because of what I am likely to be? But what an odd conception that seems.
As for the second option, I wholeheartedly agree with Alec. The person at the other end, the one who steps from the transporter, is not you, though it looks as much like you as you ever have and has the same thoughts and desires as you ever have had. It is not you though. You are who you are because of what you do (function) and why you do it (mental states) and what makes that possible (physical states). Those three together, in just the way they are, are what give rise to you. The movie clone that steps out of the destination pod can only be identical in two of those three requirements (function and, possibly, mental states). Now I just have to hope that Alec does not press me on what proportions of the three above ingredients are necessary for identity to be preserved.
That’s a great way to think of the first scenario: “You are, then you are not, then you are again. But what are we to say of you when you are not? Are you dead, only to return at a later time and different place? Are you merely gone?” That observation definitely makes dubious my belief that the you on Venus is the you from Earth. I should’ve picked up on this from my own quote: “The messy part is in the details of how exactly you could survive being ripped apart into atoms and rebuilt. Imagine your brain being deconstructed, particle by particle. At some point your identity will be in question, as your brain will be half gone.” At some intermediate point, indeed your identity will in fact be null and void.
So now the question is whether or not a discontinuity in your identity is enough to call that identity into question. Well, if we take consciousness as central to the question of one’s identity, then, no, a discontinuity is not enough to call that identity into question. A dreamless sleep, a period of unconscious drunkenness, going under general anesthesia for an operation: all of these put a big discontinuity in our conscious lives, and we still think we’re the same people after such events. But if you take bodily intactness and continuity as the key element in personal identity, then the discontinuity that your body goes through in the first scenario is indeed a big problem. The atoms all scrambled and shooting through space obviously don’t keep your physico-functional form — we can’t say that your particles flying through space are a person, any more than we can say that our pile of pencil parts is a pencil.
Jim wrote: “I think it has to be the parts and how those parts are put together, how they function, that makes the person or the object what it is.” Yes, indeed. The pile of pencil parts is not a pencil, because being a pencil is more than just the sum of its parts — it’s also the functional configuration of those parts. But if you took the pencil parts and were able to thwart the laws of thermodynamics and put all of those parts back together in the exact same configuration as when it was a pencil, then don’t you have the same pencil again? Well, then if you put those human atoms all back together again on Venus, in the exact configuration they were in before, then don’t you have the same person again, on Venus?
In fact, that’s really what this debate boils down to, metaphysically speaking. If you have a thing, and duplicate every last atom of that thing, in terms of function and construction, then don’t you have two of that exact thing? Jim thinks not; I think so. We’ll debate this more closely in a future post on the possibility of making diamonds out of Cheetos. (Seriously.)
To make the transition to Einstein’s universe, the whole conceptual web whose strands are space, time, matter, force, and so on, had to be shifted and laid down again on nature whole. (Thomas Kuhn, The Structure of Scientific Revolutions)
One problem metaphysicians have been dealing with for, well, forever, is the unfortunately necessary intertwining of metaphysics and epistemology. Metaphysics is the philosophical study of what exists; epistemology is the philosophical study of knowledge. And it’s trivial to point out that the best we can do in detailing what there is that exists is to rely on our best epistemology: We can’t talk about what we know about, without talking about what (and how) we know. If we know about quarks, it’s not simply the case that quarks exist, but that we figured out that they exist. Our catalogue of items in the universe is inherently tied to our knowledge of those items.
Why is this problematic? Well, many metaphysicians are very conscious and conscientious about keeping existence separate from knowledge of existence. Much of the problem can be traced back to the venerable Bishop Berkeley, who posited that everything in the universe is actually mind-dependent for its very existence — it’s not, Berkeley thought, just that the computer screen in front of you is merely hidden from view when you close your eyes, but that this lack of observation actually means the computer screen is not really there when your eyes are closed. Problems with this theory forced Berkeley to say that God observes everything at all times, and so there’s no worry about things blinking in and out of existence with the blink of an eye. God never blinks. But regardless of the absurdity of this centuries-old bit of philosophy, the aftershocks have stayed with us. There’s something very compelling, apparently, about the idea that our minds have metaphysical power — that minds can create some of reality.
The great irony is that the best scientifically-minded philosophers of the 20th Century, while trying to shore up the mind-independence of the external world, actually gave proponents of mind-dependence a strong foothold in the metaphysical debate.
Naturalized epistemology — the brain child of W.V.O. Quine, though it was clearly anticipated hundreds of years earlier by David Hume — takes science to be the paragon of knowledge-farming; the discipline whose results we are most certain about. Naturalism, though, if we accept it, forces us also to acknowledge the following: We can’t make judgements about the world from some point of privileged access outside of science. That is, there is no way to step outside science and see what there is in the world; we don’t get a clearer picture of quarks without science — science itself tells us about quarks, and without science this piece of ontological furniture would not be accessible to us whatsoever. Our metaphysical house, chock full of interesting furniture, wouldn’t merely look somewhat different without science; it would be a bare, dirt-floored cabin with very little of interest in it.
This leads to a very tantalizing point. Science often changes its mind, and in such episodes of change what we take to be our ontology (our catalogue of things that exist) changes as well. For instance, once upon a time science told us that there was a substance called phlogiston that is released from things when they are burned. This substance — a consequence of a good scientific theory that explained several phenomena related to chemistry — was taken by scientists (and the informed public) as existing in the world. If science is our best arbiter of what exists, then, at the time during which science told us that phlogiston existed, there’s a strong sense in which it actually existed. Science, remember, tells us what there is, and there’s no privileged perspective outside of science to figure out our metaphysics. It turned out, however, that the phlogiston theory of chemistry ran into serious problems, and was more or less wholesale replaced by the oxygen theory of Lavoisier. In this new theory, there was no place for phlogiston. At this point, science told us that phlogiston does not exist.
There are (at least) two conclusions that can be drawn from this, each of which I will encapsulate using the Kuhnian metaphor at the top of this entry:
Standard Naturalism: The whole of science forms a conceptual web from whose vantage point we survey the world. There is no spot outside of the web from which to survey the world. We can change science by changing some part of the web — this amounts to changing our ideas about an unchanging world. The world is independent of our ideas about it, even as we discover new ways to look at what exactly is in it. For instance, we were simply wrong about the existence of phlogiston. It never existed.
Kuhnian Mutant Naturalism: A scientific theory is a conceptual web that uniquely lays upon the world giving it its shape. When a new theory is developed, an entirely new web is made. There is still no place outside of the web from which to survey the world, but we can shuck off the entire web in favor of a new one. The world is partly dependent for its existence on our ideas about it — whichever web we throw onto the world actually gives the world its shape. When we change our ideas, we change the world. For instance, phlogiston actually did exist while scientists were working with phlogiston theory. When Lavoisier came up with a new chemical theory, the world actually changed — phlogiston disappeared, and in its place oxygen and other items filled our metaphysical cupboards.
Many have noted that Kuhn’s version of naturalism makes him an anti-realist in the Kantian vein. We won’t get into the thickets of Kantian metaphysics here, but, in short, he believes that our ideas are not merely a pre-condition for theorizing about things, but that theorizing is itself a pre-condition for the very existence of things. Contrary to this, standard naturalism usually goes hand in hand with common-sense and scientific realism, wherein, as Philip Kitcher notes: “Trivially, there are just the entities there are. When we succeed in talking about anything at all, these entities are the things we talk about, even though our ways of talking about them may be radically different.”
One reason Kuhn is led to his odd metaphysics is his implicit description theory of reference. On a description theory, the only way to correctly refer to an entity is to have its unique description in mind; but if a scientific revolution changes the description associated with a key scientific term, then the old description no longer refers. This leads Kuhn to the idea that competing scientific paradigms are incommensurable. It also motivates his metaphysics. If a term once referred and now it does not, all on the basis of our changing descriptions, then by some inferential jump one could think that this correlation was causal; i.e., that our changing descriptive thoughts cause a change in the world.
We’ll examine description theories and the philosophy of language in an upcoming post. Stay tuned…
Explaining anything about Immanuel Kant’s philosophy in a short blog post is a daunting and perhaps foolish task, but I am nothing if not undaunted and foolish.
I’d like here to address a particular problematic aspect of Kant’s ethical philosophy (and don’t let the terminology scare you off — it’s not as difficult as it’s about to sound): How is one supposed to go about applying Kant’s categorical imperative by way of universalizing a personal maxim?
Kant’s categorical imperative is the only pure (he had a thing about purity) moral law he could come up with, and it boils down to this: “Act only on that maxim by which you can at the same time will that it should become a universal law.” A maxim is a personal “ought” statement, like “I ought to save that puppy from that oncoming truck”. A universal law is generated from a maxim by applying it to the entire rational population. E.g., “Every rational person ought to save puppies from oncoming trucks.” And Kant’s categorical imperative asks us to use this process every time we wish to make an ethical choice: Come up with a personal maxim for the situation; universalize that maxim; and see if that universal law is something that should be followed by every rational person in every such situation.
Let’s go through an example of Kant’s process. Let’s say you’re faced with an instance where lying would be expedient. Here, then, is your personal maxim for the situation:
Maxim: I ought to lie in order to get out of a jam.
And then Kant asks you to universalize it:
Universal Law: Everyone ought to lie in order to get out of a jam.
According to Kant, this universalized version of your personal maxim shows us that your maxim is in fact immoral. Even though your maxim may seem harmless, and is certainly beneficial to you in the short term, by extending its reach to the whole of humanity, there arises something very bad. If we look at a world where everyone lies in every dicey situation, well, this is a world that is in trouble. And, thus, according to Kant, you should never lie. Period. No exceptions.
Lying to Nazis
This position leads to some obvious problems.
Say you’re in 1940 Germany, and you are harboring your Jewish neighbor in your attic, in order to protect her from the Nazis, who would like to find and kill her. Now imagine that the Nazis knock on your door and ask you: “Are you hiding any Jews in your attic? We’d like to kill them if you are.” The relevant moral question here, of course, is what do you do? Perhaps, as Kant thought, lying is a bad thing, but if you tell the truth in this situation, it will lead to your neighbor’s unwarranted death, which certainly seems worse, on the face of it.
Let’s look in a little detail at how Kant might have examined this situation. His logic went something like this:
If it’s okay for you to lie, then (according to the universalization of this maxim) it’s okay for everybody to lie.
But if everyone lies, then no one will ever believe anything anyone says.
And, thus, lies would become completely ineffectual.
Therefore, lying is a rationally inconsistent activity — it leads to its own conceptual destruction.
This rational inconsistency is at the heart of Kant’s claim that lying is immoral — he thinks that ethics has to be based on irrefutable, logical principles in order for it to be anything besides an argument over opinions. A concept that leads to its own self destruction certainly shows us that there is something inherently wrong with it. And so lying, in virtue of this, is immoral.
Choosing Your Maxim
But let’s look more closely at the procedure of picking your maxim in the lying example.
I should lie in order to help someone.
Is this a good candidate for a personal maxim? Well, no, not really. It’s certainly not generally applicable to moral situations. For instance, one could pretty easily argue that lying in order to help a mad bomber who is about to kill a thousand innocent people is probably not a very ethical thing to do.
I should lie in order to keep someone safe.
No, this has the same problem… what if you’re lying in order to keep the mad bomber from being arrested? This is arguably not a moral thing to do.
I should lie in order to save a life.
We’re getting better, but we still have the same problem lurking. If your lie is to save the life of an evil person, it’s at least arguable that the lie is not the morally right thing to do.
So let’s include something in our maxim to account for the idea that you are lying to protect someone innocent:
I should lie in order to save an innocent person from death at the hands of an evil person.
What happens if we universalize this maxim?
Everyone should always lie in order to save an innocent person from death at the hands of an evil person.
This is not bad, actually, but there’s still the Kantian objection of conceptual self-destruction lurking: If we always lie to evil people who want to kill innocent people, the evil people will start to catch on, and thus the lies will become self-defeating.
In fact, the example of lying is one of the best for Kant’s system — when he applies his system to other sorts of moral cases, it all starts to go to hell. But with lying, he has found a case where there is something internally irrational about the endeavor, when applied universally. But I’d like for a moment to talk about a general problem with Kant’s procedure. How, exactly, do you go about choosing your maxim?
The Problem of Specificity
One major problem here is that of specificity of the maxim you choose.
You could make your maxim very general:
I should lie to strangers.
This is just about the most general maxim you could use here; and certainly this isn’t universalizable. Not only would you not want to universalize it (everyone should lie to every stranger would be an odd moral rule!), but it harbors the same problem of lies being self-defeating.
What about if you go to the other extreme, and choose a very specific maxim?
I should lie in order to save the life of the Jewish person hiding in my attic in 1940 Germany from the Nazis who will kill her.
This is about as specific as you can get with your maxim. And actually this is pretty well universalizable, because by universalizing it you don’t lose much specificity — your universalized law is still quite specific and actually probably a good moral rule:
Everyone should lie in order to save the life of the Jewish person hiding in Alec’s attic in 1940 Germany from the Nazis who will kill her.
(You might generalize the universal law here a bit more: Everyone should lie in order to save the life of the Jewish person hiding in his or her own attic in 1940 Germany from the Nazis who will kill that Jewish person. Still, this is arguably easy to accept as a good universal law.)
The issue here is that very specific maxims will be easy to universalize, while very general ones won’t. And this is a problem because very specific maxims will usually be very uninteresting as the basis of moral tenets. Very general ones will usually be interesting.
Imagine instead of a moral law like “Murder is wrong”, we had a law that said “Murdering Joe Smith on August 24, 1968, because he applied the wrong postage to a letter, is wrong”. Other ethicists would mercilessly laugh us out of the business. Our law may be true, but is not very interesting.
So it seems the only way to use Kant’s procedure to generate a sound moral rule is to pick a maxim so specific that the resulting rule is morally mundane.
Other Problems With Kant
There are a million and one problems for Kantian ethics (although there are a million and two Kantian ethicists in the philosophical community today). But perhaps the most obvious concern with Kant’s ethics is that it doesn’t account for the ends of one’s actions — in fact, it explicitly refuses to. Most of us are disposed to say that killing a mad bomber in order to save a thousand innocent lives is a moral action, regardless of the fact that it involves killing someone. Kant disagrees, saying we can’t rely on a good outcome (saving a thousand lives) as the basis of our ethics.
He’s got a point. What if you decide to kill the mad bomber, but by a fluke of luck you actually wind up wounding him instead, and he escapes, only to kill ten thousand people the next day? That fluke of luck turns you from a hero into a villain. This idea of moral luck is a fascinating topic on its own, but for our purposes here, it does cast Kant’s hardcore position in a somewhat better light. If good outcomes are dependent on luck, then perhaps a genuinely moral decision shouldn’t depend on its outcome — perhaps a good act is good no matter what the outcome.
Famously, a school of moral philosophy called utilitarianism (or more generally consequentialism) sprang up in direct opposition to this perspective. We’ll talk about some of its pluses and minuses in a future post.