Categories: Metaphysics

Do Numbers Exist?

Depending on your disposition, you might have an immediate gut reaction to this question. My initial reaction (oh so long ago) was: “Of course numbers don’t exist. You can’t pick up the number 3 and throw it through a window.” That is, my intuition was that the only things that exist are the kinds of things that can be physically manipulated, and numbers, by almost every account, just aren’t this kind of thing.

To be clear about our terms, you can pick up numerals — that is, you can pick up concrete instances of numbers, like the plastic number signs at the gas station telling you how much gas costs, or the printed numerals in a book, denoting page numbers. But you don’t, by virtue of tearing out page three of a book and tossing it out a window, throw the number 3 out the window, any more than you throw me out of a window by drawing a picture of me and throwing that out the window.

Numbers, if they exist, are generally what philosophers call abstract objects, and those who maintain that such things exist claim that they exist outside of space and time. If you’re like me, you shake your head at such talk. “Outside of space and time? What does that even mean? Gibberish!” If you are similarly disposed, you might be a nominalist (in case you’re accumulating self-descriptive philosophical terms), and you are part of a long, proud philosophical tradition that thinks that existence is the exclusive domain of the physical.

However, your nominalism begins to run into problems pretty quickly. Never mind numbers. What about things like, say, novels? What exactly is the novel The Catcher in the Rye? It’s not any of its particular instantiations — it’s not the copy on your bookshelf; it’s not the copy on mine. All of the print copies on the planet could be eradicated and the novel could still be said to exist. Is the novel the original manuscript sitting in a safe somewhere? But that could be burned and you could still argue that the novel exists. But if the novel itself is not identified with any of its particular instantiations, then the nominalist is in a bit of a quandary. On this view, the copies of the novel are instantiations of the novel itself, and the novel itself seems to be something abstract — something non-physical.

So the idea of something somehow existing outside space and time is suddenly not as absurd as it may have seemed. What about numbers, then? Of course there are disanalogies between numbers and novels. Novels are invented by humans, while, on most views of the subject, numbers exist whether or not humans ever happened to discover them. But, putting such differences aside for the moment, perhaps the existence of novels as abstract objects gives us some traction to say that numbers exist as abstract objects.

Abstract objects

What other sorts of things could be included in the category of abstract objects? The funny thing is that in many seminal texts on the subject, one has to plumb deep to find mention of what would count as an abstract object. Mathematical objects generally top the list (numbers, points, lines, triangles, etc.), followed by things like chess moves, games in general, pieces of music, and propositions.

How are these things abstract? We generally think of a chess move, for instance, as something that exists by virtue of a concrete chess player actually moving a concrete chess piece in accordance with the rules of the game (which could themselves be considered abstract, but never mind this for the moment). But that seemingly concrete move can be instantiated in so many concrete ways: you could replicate someone else’s game on your own chess board, you could make the move on a hundred different boards all at (nearly) the same time, you could make the move in your head before you make it on the board. All of these concrete possibilities point to the metaphysical problem here: if you believe there is only one move, and it’s concrete, then which move is the one move? And then what are the other moves? Copies of the move? Or instantiations of the same move? If you believe in abstract objects, you have, on some takes, an easier time of it. The move itself is an abstract object, and every physical version of that move is a concrete instantiation of it. That is, none of the concrete, physical moves is actually the move. There is only one move, it is abstract, and any physical move is a copy, like a sculpture of a real person. (You can have a thousand sculptures of a person, but there’s only one person. The sculptures are imitations or instantiations of the person.)

This perspective is (loosely) called platonism, named after Plato’s idea that there are ideal “forms” — perfect archetypes of which objects in the real world are imperfect copies.

Why would these ideal forms not exist in space-time? That is, why would they have to be abstract? Well, objects in space-time (the real world) are all imperfect copies of something. So if an ideal form existed in, say, your living room, then it would be non-ideal by virtue of existing in your living room. To put it perhaps less question-beggingly: if, say, a chess move were instantiated in a thousand ways, how would you pick out the ideal version from which all others were copied? All of the instantiations would have similar properties, and so no one instantiation would stand out as different enough to count as the move, the platonic form of that move. Therefore, it makes sense to posit an abstract version of the move — something perfect, and outside of space-time, from which all the worldly versions are copied.

Thinking about geometric objects is perhaps the clearest way to think about abstract objects. A line segment (a true, geometric line segment) is a perfectly straight, one-dimensional object with a determinate length. There are no such objects in space-time. Every object you could possibly interact with is three-dimensional — no matter how thin a piece of, say, plastic you create, it always has some width and thickness in addition to its length. Nothing in the concrete world, therefore, is a real geometric line segment. We have things that approximate line segments — very straight, very thin objects. But none of those things will ever be perfectly straight and have zero thickness. So if there does, somehow, exist a true line segment, it certainly isn’t in the concrete world, and therefore it must be in some sort of abstract realm.
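
For the record, here is the geometer’s segment in standard notation (nothing exotic, just the usual textbook definition), which makes vivid how idealized the object is:

$$
\overline{AB} \;=\; \{\, (1-t)A + tB \;:\; t \in [0,1] \,\}
$$

A set of extensionless points between the endpoints $A$ and $B$: determinate length $|B - A|$, zero width, zero thickness. Nothing you will ever pull out of a drawer answers to that description.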

Knowledge of abstract objects

One of the most damning aspects of platonism is its failure to come to terms with how we learn things about abstract objects. The general picture of how we acquire knowledge goes something like this: We perceive an object in the physical world, via physical means (e.g., light bounces off the physical object and hits our eyes), and eventually we process such perceptions in our brains and work with mental representations — i.e., brain states — of the object in question. But an abstract object can’t be processed like this. It is non-physical, and so, e.g., light can’t reflect off of it. So our usual causal theory of knowledge acquisition fails for things like numbers.

Well, then, how is it that we come across any knowledge of abstract objects, if they indeed exist? Some mathematical platonists, like the venerable logician Kurt Gödel, resorted to the idea that we just know truths about mathematical abstracta. As he wrote:

But, despite their remoteness from sense experience, we do have something like a perception also of the objects of set theory, as is seen from the fact that the axioms force themselves upon us as being true. I don’t see any reason why we should have less confidence in this kind of perception, i.e., in mathematical intuition, than in sense perception…

But this is clearly an unacceptable answer to the problem of knowledge of abstract objects. How exactly do the axioms of set theory force themselves upon us? Waving your hands and saying “they just do” isn’t an account of the process, and leaves us in the dark as to how they just do, which is precisely what we need to know before we can take the platonist seriously as an epistemologist. (One need merely look at the history of geometry to see one serious problem with the supposedly “obvious” truth of axioms. Until Lobachevsky and Riemann came along with consistent non-Euclidean geometries, nearly everyone thought that Euclid’s fifth postulate “forced itself upon us”.) How does some feature of a non-spatiotemporal object force itself upon our spatiotemporal brains? The only way would be somewhat magical, and you could look to Descartes to see the folly of such a plan. Descartes posited that minds are substances distinct from brains, and indeed are not spatiotemporally located. Of course, this leaves the problem of how the mind somehow slips into the brain and affects it. Descartes’ answer was that it crept in through the pineal gland. But this is no answer; it merely delays the answer for a moment. How does the non-spatiotemporal mind creep in through the pineal gland, and then into the brain? Descartes had no answer for this, of course, because the whole thing is terribly mysterious: there is no explaining how the non-physical interacts with the physical.

Worries like this keep nominalists well-motivated to stay on their side of the debate.

The argument from indispensability

Even if you’re dead-set against granting the existence of numbers, you think platonism is absurd, you have challenged platonism’s picture of knowledge, and you somehow have all of your nominalist ducks in a row, there is still one very influential argument to contend with as regards numbers’ existence: The argument from indispensability. Hardcore nominalists are often quite scientifically-minded, scientifically-motivated philosophers. And it is this love of science that gets them into trouble with denying the existence of numbers. The argument runs, in broad strokes, like this:

  1. Science is the best arbiter of what exists.
  2. Therefore, if science says something exists, we should accept it.
  3. Science relies (heavily and intractably) on mathematics, which quantifies over numbers.
  4. Therefore, science says that numbers exist.
  5. Therefore, numbers exist.
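
For those who like their arguments regimented, here is one way to lay the skeleton bare. (This regimentation is mine, not Quine’s or Putnam’s official formulation; it compresses steps 1–2 into a single conditional premise.)

$$
\forall x \,\big(\text{IndispensableToScience}(x) \rightarrow \text{Exists}(x)\big), \qquad \text{IndispensableToScience}(\text{numbers}) \;\;\vdash\;\; \text{Exists}(\text{numbers})
$$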

If you’re a good nominalist, you’re more than likely feeling obliged to accept this argument as sound. But if you accept its conclusion, then you’re right back to the issue of explaining what numbers are. They can’t be physical objects, so they must be abstract. But as a nominalist you claim that there are no abstract objects! And you are caught in an intractable dilemma.

Many nominalists give up at this point. Hilary Putnam wrote resignedly:

Quantification over mathematical entities is indispensable for science…; but this commits us to accepting the existence of the mathematical entities in question. This type of argument stems, of course, from Quine, who has for years stressed both the indispensability of quantification over mathematical entities and the intellectual dishonesty of denying the existence of what one daily presupposes.

The talk of “quantification” is a bit of logic talk, but we can paraphrase it into regular English: “If science uses numbers, then science is committed to the existence of numbers.” You might see a glimmer of nominalist hope here. Science also uses frictionless planes, for example, and yet no scientist feels committed to the existence of those. Perhaps there is a way out of our commitment to numbers in the same way. Or perhaps, one might argue, frictionless planes actually do exist as platonic, abstract objects.

But there are two more “obvious” ways to be a nominalist about mathematics.

First, you could argue that numbers exist, and are actually physical objects. Penelope Maddy argues something close to this in her early work, Realism in Mathematics. What she actually argues for there is a version of naturalized platonism — the idea being that what are usually thought of as abstract objects somehow exist in the physical world. But, platonist labels aside, the gain for nominalism on this take would be obvious: numbers, if they are physical objects, would be just another part of the down-to-earth nominalist physical world, like cats, trees, and quarks. This brave strategy, however, ultimately fails. It would take us into some metaphysical thickets to explain why, so I have relegated this to a paragraph at the very end of this post.

Second, you could argue that numbers aren’t actually indispensable to science. Hartry Field famously tried this strategy, claiming that science in fact only seems to rely on mathematics. On Field’s view, this seeming reliance is really just a useful fiction. In order to prove this, Field attempted to nominalize a chunk of physics by removing all reference to numbers within it. This complicated, counterintuitive project has met with equal parts awe and criticism. The consensus is that it is untenable in the long term.

So do numbers exist or not?

Well, if you’re a platonist, you would answer “yes, numbers exist”. And further you would claim that they possess a sort of existence that is abstract — different from the sort of existence that stones, trees, and quarks enjoy. Of course, this means you are in the unenviable position of explaining the coherence of this sort of existence, along with the herculean task of explaining how we know about anything in this abstract, non-physical realm.

If you’re a nominalist, you’d probably answer “no, numbers do not exist”. However, now you have the unenviable job of explaining why mathematics seems so indispensable to science, while science is perhaps our best tool for saying which things exist. The two best nominalist answers to this conundrum seem untenable.

Probably, as is usually the case in philosophy, dogmatically sticking to one side of a two-sided debate will be inadequate. Maddy’s attempt at naturalizing platonism was a brave bridge across the nominalist-platonist divide, but clearly isn’t the right bridge. We’ll examine some other options in a future post.


References and Further Reading

Balaguer, Mark. (1998) Platonism and Anti-platonism in Mathematics. Oxford: Oxford University Press.

Benacerraf, Paul. (1973) “Mathematical Truth”, Journal of Philosophy 70.

Colyvan, Mark. (2001) The Indispensability of Mathematics. Oxford: Oxford University Press.

Irvine, A.D. (1990) Editor. Physicalism in Mathematics. Dordrecht: Kluwer.

Linsky, B. & Zalta, E. (1995) “Naturalized Platonism Versus Platonized Naturalism,” Journal of Philosophy 92.

Maddy, Penelope. (1992) Realism in Mathematics. Oxford: Clarendon Press. Revised paperback edition.


A note on Maddy’s naturalized platonism

Maddy actually thinks that we perceive sets. Number theory, as many logicians are proud to point out, can be reduced to set theory — i.e., numbers can be reduced to sets, which are, of course, generally seen as just another sort of abstract object. Maddy’s move is to bring those sets into the natural world. So when we see an egg, we are perceiving that egg, but we are also perceiving the set containing that egg. (A set containing an object is different from the object itself, as you may recall from your math studies.) And that set containing the egg is a natural object, different from the egg itself. But now we run into trouble. Certainly there must be some difference between an egg and the set containing that egg; otherwise ‘the set containing that egg’ is just a proper name denoting the egg in question, and nothing metaphysical hangs on the distinction. (If you call me “Alec” or “author of this post”, you are not positing the existence of two people — these are just two different names for the same person.) Well, the usual distinguishing feature of abstracta is that they are not spatiotemporally located; but on Maddy’s scheme sets are spatial objects. The problem: our egg and the set containing it necessarily co-exist in the same exact region of space-time, and yet they are supposed to be different things. In what does this difference consist? Certainly nothing physical; otherwise they wouldn’t co-exist in the exact same region of space-time. But then the difference must be something non-physical — i.e., something about the set must be abstract. And if this is the case, then we’re right back to all of the problems inherent in platonism, particularly the problem of how we can have any knowledge of such abstracta.
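
To make “numbers can be reduced to sets” concrete: on the standard von Neumann construction (one textbook reduction among several; I’m not claiming it’s the one Maddy favors), each natural number simply is the set of its predecessors:

$$
0 = \emptyset, \qquad 1 = \{0\} = \{\emptyset\}, \qquad 2 = \{0, 1\} = \{\emptyset, \{\emptyset\}\}, \qquad n + 1 = n \cup \{n\}.
$$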

Categories: Ethics

Trolley Problems

The so-called trolley problems form a set of ethical thought experiments meant to delve into our intuitions about killing, letting die, rights, and obligations.

Driver’s Two Options

The problems come in many forms, but here is the original version. There is a train (or trolley, but who the hell thinks about trolleys anymore) with failed brakes, about to barrel down upon and kill five unsuspecting rail workers. The driver can continue down this track, or steer to the right onto a spur where there is one unsuspecting rail worker awaiting certain doom. What should the driver do?

The intuition this case is generally thought to prompt is that the driver should steer to the right, killing one but saving five. It’s a numbers game wherein, other things being equal, one should kill as few people as possible. Killing one person, it is thought, is better (or less horrible) than killing five.

Of course, one may take issue with this intuition in any number of ways. For instance, there’s the “other things being equal” clause, which we’ll address shortly. (As a preview, imagine that the one worker is close to discovering a cure for cancer, and the five are shiftless hooligans. Perhaps in such a case the numbers game changes, and the utility of the one outweighs the utility of the five. More on this soon.) But to get at a more subtle problem with the case, let’s examine another trolley problem — one without any trolleys.

Judge’s Two Options

This time, imagine a judge faced with the following dilemma. A serial killer has been killing people for months, and everyone is getting understandably nervous. A vigilante group takes five innocent people hostage, and says to the judge: “If you don’t catch the killer and sentence him immediately to be executed, we will kill all five hostages.” The judge, not knowing who the killer is, has the following choice: do nothing and let the five innocent people die, or sacrifice an innocent person as a scapegoat to appease the vigilantes, thus killing one but saving five.

The intuition meant to be provoked here is that the judge has no moral right to sacrifice an innocent person’s life, regardless of any good consequences that act might have. So, in this case, as opposed to the initial trolley problem, the supposed moral is that it is not acceptable to save five by killing one.

So now we have two cases where killing one person would save five other lives, but in one case the killing of the one seems to be morally acceptable, and in the other the killing of the one seems to be morally unacceptable. What is the morally significant difference between these cases?

Killing versus Letting Die

Perhaps the difference is between killing and letting-die. In the case of the judge, she is not actually killing the five hostages (the vigilantes will do the killing), she is letting them die. If she were to sentence the one innocent person to execution, that would be much more of a case of direct killing. In the original trolley case, the driver has the choice between directly killing five or directly killing one. You might argue that faced with such a choice, the only morally significant factor is the numbers. But the judge is faced with a different situation, wherein she can either kill one or let five die. The numbers add up differently here, perhaps.

But perhaps not.

What happens if we eliminate the driver in the trolley case? Our train is driverless and brakeless, and barreling towards our five workers. A bystander is standing by a switch in the tracks, and can either do nothing, letting the five workers die, or throw the switch and send the train to the right, killing the one worker on the spur. What should the bystander do?

The intuition here is generally that the bystander should throw the switch and kill the one, saving the other five. But wait — our judge was supposed to let the five hostages die, so as to avoid killing one. Why is our bystander obligated to kill one in order to save five, when the circumstances seem so similar?

Well, you could argue that the bystander’s case isn’t different at all from the judge’s case, and that, therefore, he should not throw the switch. What if the bystander had three options: throw the switch one way and kill the one, do nothing and let the five die, or throw the switch the other way and kill himself?

Is the bystander morally obligated to throw the switch and kill himself? It would certainly be nice of him, but this would generally be regarded as the act of a Super Samaritan, going above and beyond the normal obligations of morality. But if our bystander is not obligated to save five lives by sacrificing his own life, then perhaps he is not obligated to pay this price with someone else’s life either. That is, perhaps the bystander in the two-options case should indeed, like the judge, let the five die, rather than sacrifice one in order to save five.

The Medicine

We’re getting further away from our initial reasoning in the first trolley case, in which we thought numbers were the primary factor. (I.e., if you have a choice between saving one life and saving five, you should generally choose to save five.) But now we’ve seen some cases in which we should choose to save one instead of five. Could it be that in general the numbers aren’t the relevant moral factor?

Here’s another trolley case to consider (another one without any trolley). Six people all need a special drug in order to live. You have enough to treat either one of them (who needs all of the medicine you have), or the other five (who each need a fifth of the medicine you have). What should you do?

This is another case that, on the face of it, harkens back to our original trolley case. It seems as if, everything else being equal, you should probably save the five instead of the one (let’s call the one “David”), because surely the numbers matter here. Of course, there could be special circumstances involved, and here we have to return to the “everything else being equal” clause that I promised to talk about earlier. Perhaps David has a far greater utility than the five — perhaps he is a cancer researcher, while the five are ne’er-do-wells. Or perhaps the five are all evil — murderers or nazis or CEOs or what have you — while David is a relatively good person. Or perhaps the five are all old and otherwise sick and fairly near death, while David is young and vibrant. Or perhaps there is a more hybrid socio-moral reason to choose to save David over the five: perhaps you are David’s parent, or David’s doctor, or you have signed a binding legal contract to give your medicine to David. These are all justifiable moral factors that break the “everything else being equal” clause here, and would morally allow you to give the medicine to David.

But what if you were simply David’s friend, and had no other reason to give him the medicine than that you want to? Would this make it onto the list of justifiable moral reasons to save David instead of saving the five? Well, generally the intuition is that it is indeed not such a reason. You have no moral or contractual obligation to save David; you just want to save him. And generally this isn’t thought to be a good moral reason to act.

But maybe this is wrong. Suppose now that the drug is owned by David, not you. Would you try to persuade him to give his medicine to the five others? Should you? I should think not. David values his life more than the five strangers’ lives, and no amount of utilitarian mathematics would convince him otherwise (“come on, David — five lives are worth five times the value of your life, and so you should give the five your medicine…”). And David is certainly not violating anyone’s rights by keeping his own medicine — none of the five has any claim to the drug. It would be an act of supreme Samaritanism to give up his own medicine to save others.

But given this new analysis, perhaps in our previous case we were too hasty in throwing “I want to give David my medicine” into the category of morally unacceptable reasons. Perhaps valuing David’s life is a morally acceptable reason for saving his life to the detriment of five others. It is still the case that none of the five has any claim to the drug. (Nor does David, of course.) It’s your drug. But perhaps your valuing David’s life is enough to eclipse the concern about the numbers here.

What, then, about the original case where you have no special concerns for any of the parties involved? Perhaps the numbers still aren’t an important concern here. John Taurek (from whom I took this example) claims just this, and says we should simply flip a coin. Heads: we save David. Tails: we save the other five. This way, each of the six people has a 50-50 chance of living. Taurek’s point is that we can’t measure the value of human lives — at least not in the way that we can measure the value of, say, jewelry. And so, left without this sort of measure, and without any other factors that would count towards breaking the “everything else being equal” deadlock (such as friendship), we should fall back on a random choice. Of course, like many philosophers, he goes a bit too far with his zealotry. He says there’s no difference in a case where you’d be weighing 50 lives against one; I suppose he’d go to the extent of saying there’s no difference weighing 5,000,000 lives against one, or 5,000,000,000 against one. But clearly this is just insanity. Just because you can’t weigh a human life’s value in the same way as a necklace’s doesn’t mean there’s no way to measure its value at all. And it certainly doesn’t mean that 5,000,000,000 lives can’t be seen as more valuable than one.
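
To see the arithmetic Taurek is resisting, here is a toy sketch (my illustrative framing, not Taurek’s; he would reject the very idea of tallying “expected survivors”):

```python
import random

def coin_flip_policy(trials: int = 100_000) -> float:
    """Taurek's proposal: flip a fair coin; heads saves David (one life),
    tails saves the five. Returns the average number of survivors."""
    total = sum(1 if random.random() < 0.5 else 5 for _ in range(trials))
    return total / trials

# Under the coin flip, each individual gets a 50% chance of survival,
# and expected survivors = 0.5 * 1 + 0.5 * 5 = 3.
# Under "always save the greater number", five survive every time,
# but David's chance of survival is 0% -- exactly Taurek's complaint.
print(round(coin_flip_policy(), 2))  # ~3.0
```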

Avoidability

Perhaps the correct account of trolley cases must examine avoidability. Take, for instance, a new non-trolley trolley case: The Surgeon’s Two Options. A surgeon has six patients, five of whom will die very soon without various organ transplants, and one of whom has a broken toe but is otherwise vital and healthy. By an extreme coincidence, the patient with the broken toe has exactly the right blood and tissue types to match all five of the other patients, and thus would be as perfect a transplant match as could be without being a relative. The surgeon is thus presented with two choices: harvest the organs of the patient with the broken toe, and thus save the lives of the other five patients; or merely fix the patient’s toe and let the other five patients die.

In this case, if the surgeon harvests the organs, she has avoidably violated the rights of the patient with the broken toe. That is, she could have not taken the organs, and thus not violated the patient’s right to life, liberty, and the pursuit of metabolism.

In the original trolley case, whichever decision is made, someone will die, and unavoidably so. There’s nothing the driver can do to stop the killing altogether. And if he decides to take the spur and kill the one worker instead of the five, there’s no way his decision could have been informed by the worker’s wishes. In the surgeon’s case, she could simply have asked the toe patient whether he minded having his organs harvested, and the matter would have been perfectly clear.

On one reading of the surgeon’s case, the numbers don’t count, simply because rights are being avoidably violated. On a similar reading of the trolley case, the numbers do count, simply because there are no other morally relevant factors. (And, despite what Taurek claims, the numbers are indeed morally relevant.)

Killing versus Letting Die versus Withdrawing Aid

There’s one more thing we should look at regarding the killing versus letting-die discussion: namely, a grey area between the two, withdrawing aid. It will take us to an interesting place, in the end.

One take on the difference between killing and letting-die is that killing is an act of doing, and letting-die is an act of allowing. (You might have picked up on the strangeness of an act of allowing. That is, you might think these things reside in different metaphysical categories; i.e., you don’t act in order to allow something to happen — in fact, you have to not act in order to allow something to happen. But I think there’s an implicit action in deciding not to act. More on this, soon.) And if this is the proper analysis, then we can apply a similar analysis to the original trolley case and the surgeon’s two options. The driver could just stay on the main track, allowing the train to do what it would have done on its own; and this could be seen as an act of letting-die. If letting-die is a less serious moral offense than actively killing, then perhaps letting five die is still less egregious than killing one. In the case of the surgeon, we have the same issue: letting five die might be less morally egregious than killing one, and thus you’d have your moral decision.

But what about murkier cases of withdrawing aid? Take, for example, this: you are swimming with a friend, and she starts to drown. You start to rescue her, but she is so scared and disoriented that she begins to pull you down with her. You realize that you will both die if you don’t disengage from your rescue attempt. You abort the attempt, and she dies. Did you kill your friend, or allow her to die? Well, you have certainly acted, by pushing your friend off of you, but does this really rise to the level of killing? Perhaps you want to say that your action was one of withdrawing aid, which you might well argue is less morally egregious than an act of killing.

We can fruitfully look here to Judith Jarvis Thomson’s famous thought experiment of the violinist. You have been kidnapped by a radical music-lovers group, and you wake up in a hospital bed next to a world-famous violinist. You are told that the violinist needs your kidneys in order to survive, and so has been hooked up to you while you were unconscious. The question is whether or not unhooking yourself from the violinist is murder. (The original case is meant to show us something about the ethics of abortion.) You might argue, as in the last example, that this is a case of withdrawing aid rather than that of outright killing the violinist.

What if, in a similar scenario, while you ponder what to do, the violinist’s arch-enemy sneaks into the room and disconnects you? This is withdrawing aid as much as the last case was, but it may strike you differently. It somehow seems more like killing than when you disconnect the violinist yourself.

My take is that these cases are both acts of killing. But when you disconnect yourself, it’s a justified killing. That is, your rights have been violated, and you thus have the right to disconnect yourself. That said, I think it’s still an act of killing — justified or not, we should call it what it is. The violinist’s enemy, on the other hand, has no right to kill him, and so his act is not a justified killing, though a killing it obviously still is.

The Proper Analysis

Is the trolley problem solvable in every variation via the same reasoning? I doubt it. Hundreds have tried, of course, and perusing the literature is a fascinating pastime for those who are curious. But I do think that, as in many of the cases above, the proper analysis will usually involve an examination of the rights involved, and that this will often take the moral high-ground above any arguments regarding killing, letting-die, or anything similar. We’ll take a closer look at rights-based systems of ethics in future posts.

Bibliography

McMahan, Jeff. (1993) “Killing, Letting Die, and Withdrawing Aid”. Ethics 103.

Naylor, Margery Bedford. (1988) “The Moral of the Trolley Problem”. Philosophy and Phenomenological Research 48.

Taurek, John. (1977) “Should the Numbers Count?” Philosophy & Public Affairs 6.

Thomson, Judith Jarvis. (1971) “A Defense of Abortion”. Philosophy & Public Affairs 1.

Thomson, Judith Jarvis. (2008) “Turning the Trolley”. Philosophy & Public Affairs 36.

Categories: Arguing Over Nothing

The Athletes-on-Steroids Debate

Arguing Over Nothing: A regular feature on the blog where we argue over something of little consequence, as if it were of major consequence. Arguing is philosophy’s raison d’être, and the beauty of an argument is often as much in its form as its content.

Today, we argue about the acceptability of professional athletes using performance-enhancing steroids. Sides have been randomly assigned. Jim argues here for a pro-steroid position, while I take the con.

Each philosopher is granted 500-750 words to state his/her case, as well as 250-500 words for rebuttal. The winner will be decided by a poll of the readers (or whoever happens to have admin privileges at the appropriate time).


Jim: Arguing for the Use of Steroids

Really? I’ve been assigned the pro position? Dammit. Fine.

Before I present my arguments in favor of this very fine topic, I want first to lay out the purpose of the athlete, showing what I think the athletic endeavor is meant to display or do and what it is not meant for. Once that groundwork has been properly laid, my actual arguments will be easy to see and agree with.

Sport itself is merely a kind of communal physical activity, the doing of which I need not bother defending, as it is patently clear that nearly everyone (and my claim is not affected if ‘nearly everyone’ only picks out a bare majority) desires the company of others (if not every possible ‘other’, then at least some people who are not oneself), and it is physically beneficial to take part in physical activities. Watching sports is another matter entirely, and it is here that the role of the professional athlete needs to be made clear.

One claim about athletes is that they show the rest of us what the human body can do, what an object of grace and strength and fortitude can accomplish. I will not dispute that (though if the topic comes up later, I will be more than happy to give it a shot). I will claim that such displays are well handled by amateur athletes — those who play college sports, or in the Olympics, or even in minor league sports. Such athletes are the ones competing for, among other things, the glory of the game, or to personify the strength of the human spirit, and other such claims. The important thing to note about such athletes (for the most part) is that they are not paid to play. They compete because of the joy and sense of accomplishment and what have you that they receive from the mere fact of competition. Our being allowed to watch such activities is enjoyable, but their purpose, I suggest, is not solely our entertainment. Our entertainment is a by-product of their true purpose. Such is not so for the professional athlete.

The professional athlete exists to entertain us, the non-professional (perhaps even non-) athletes. The pro achieves remarkable things, often more so than the amateur, but those accomplishments are a by-product — the pro’s purpose is to entertain, and any crossover into the realm of ‘attaining human perfection’ and the display thereof is but icing on the cake. We watch professional sports, if we do, because it entertains us. It entertains us with its athleticism, with its drafting us into particular communities of comrades and opponents (our city/division/league is better than yours). We are happiest when our team wins, when we see events that we could not imagine happening otherwise, when we see records broken (records that mean nothing when not used to compare our team to another), when we see impressive feats of scoring or the prevention thereof — we are happiest when we are entertained.

Professional athletes who take steroids are more capable of amazing physical performances than those who do not. We watch professional sports because we want to be entertained by amazing physical performances. In fact, professional sports exist solely to provide such a venue. Therefore, athletes should be allowed to take steroids. They should be monitored to ensure they are as safe as possible, of course, but if steroids make them better serve their purpose, then take them they should.


Alec: Arguing Against Steroid Use

First things first: the line between amateur and professional athlete is quickly evaporating, and we should reframe the argument appropriately. The Olympics, once the exclusive domain of amateur athletes, now allow professional athletes into the mix, because Olympic officials decided to let the best athletes in the world compete, not merely the poorest. There is also more parity than ever between professional and amateur athletes, in that college athletes are not only closer to professional level than ever before, but are equally involved in the entertainment aspect of the athletic industry. College football ad revenues in 2010 — for just the top 15 programs in the U.S. — topped a billion dollars.

So let’s dispense with the pro/amateur distinction. The bottom line is that athletes of any sort have two concerns: being the best they can be, and entertaining others. Even a middle-aged weekend warrior worries about looking good in front of the twenty people watching him play a very mediocre third base. The same warrior revels in the glory of an unusually graceful moment in the field. Alternatively, the most jaded professional athlete can still revel in his athleticism even in the face of the realization that he is just there to make money. And obviously the professional has to worry about his entertainment value, even as he might be conflicted about it.

Now that the metaphysics are out of the way, we can analyze things more easily. The goals of being an athlete are two-fold (at least): being the best human specimen, and being the best entertainer. And this impacts the steroid debate in two ways.

Being the best human specimen. The key word here is, of course, “human”. What we (and the athletes in question) should be concerned with is developing the human body to its greatest potential. Once we start adding manufactured chemicals into the mix, we are getting into the realm of the superhuman or, perhaps more aptly, the hyperhuman. You will argue, no doubt, that athletes should be able to take, say, ibuprofen without being considered enhanced to an unnatural degree. And I agree. But surely there is also a line that we cannot cross. By the time we get to adding bionic body parts to an athlete, we have certainly crossed that line. I argue that steroids have crossed that line as well.

Being the best entertainer. To some degree, it strikes me, no one cares (nor should they) what an entertainer does to enhance themselves for the benefit of the performance. But if we think about it for a moment, we might change our tune. Take, for instance, the extreme case of the singer who lip-syncs in concert. This is the ultimate enhancement to the singer’s biology. (Never mind that the enhancement is external to the singer. Picture a bodily embedded vocal track if it helps you.) But when we discover such an enhancement in practice, we become upset, and rightly so. We want to see live vocal feats — the human body stretched to its limits in a beautiful performance. We don’t want to hear prerecorded “perfection”, just because it’s possible. Similarly with athletes. When we find one cheating (corking a bat, taking steroids, doping), we are rightly dismayed. And this dismay is founded on the same basis of the previous paragraph. We want to see human-ness developed; not hyperhuman-ness.


Jim’s Reply to Alec

I will grant you that the pro/amateur distinction is not a large one for this topic, but my so granting is due more to lack of space than to agreement. I will say this before moving on to the bulk of my reply: that various countries (America included) now include professional athletes in the Olympics does not show a change in the inherent status of what an athlete is or is meant to be; it is instead a very successful attempt to move the Olympics from a showcase of athletic achievement into something very much like an “our team is better than yours, so nah nah nah” mentality. The athletes that competed in the games were, for the most part, already professional in the sense that they often trained only for the Olympics, to the exclusion of anything else. Be that as it may, let’s move on.

Being the Best Human: This, I think, is, or is traditionally thought to be, the main purpose of the athlete. When we think of the classical athlete, it is the Greek ideal we think of, and the supposed desire to reach perfection with the body. That desire was steeped in the idea of natural perfection, but why must we be trapped in such a conception? You mention a line we should not cross, but where we draw that line is arbitrary. Ibuprofen is allowed, but why? Because it is commonly used? That was not always the case — it only became so over time. Reconstructive knee surgery is not natural, is it? And yet it is quite common among athletes. Apparently taking steroids is the norm among bicyclists. Does that make it natural now? Otherwise, what is your definition of natural? It cannot be, or I suspect you do not want it to be, just whatever occurs in nature without our intervention, so what else is it besides what is commonly accepted? Steroids in that latter sense were once unnatural, but are no longer so. Is constant exercise natural? Not in America. Does that disqualify athletes who work out in order to be better? You tell me.

Being the Best Entertainer: I wholeheartedly agree with you about singers being given false aid through the use of auto-tune or whatever the new audio enhancement is going to be called. Those are not biological enhancements, though. There is nothing about that which I believe can properly be labeled a human improvement. Steroids work on the muscles themselves, or so I gather; at the very least, they work on the body directly, and amplify its ability to do more than it presently can. Auto-tune modifies a feature of the body that is separate from biology itself. It takes what the body does and works on it as a separate entity, treating it no differently than one might treat hair that one donates to a charity. Studio work modifies an entertainer no more than CGI does. Most of us know that we are not being entertained by a person, but by a computer’s rendering of some aspect of a person. Steroids do not make the body work differently — they make it work better (leaving open the meaning of ‘better’ here as something that is commonsensical in the realm of sports).


Alec’s Reply to Jim

You have hoisted me by my own metaphysical petard! Well played! Indeed, the line to be drawn between acceptable and unacceptable performance aids is arbitrary. I was sort of hoping you’d miss that. Maybe, however, there’s a slender non-arbitrary thread at which to grasp here.

Ibuprofen doesn’t enhance one’s performance; it merely lets one perform through some minor discomfort. Reconstructive knee surgery doesn’t create a better knee than one originally had; it merely gets a knee back into usable form. These, I claim, fall safely on the acceptable side of the line separating acceptable from unacceptable. Steroids are meant neither as an ameliorative nor as a repair, but explicitly as an enhancement of one’s otherwise natural ability. With steroids, one can be a better athlete. With, e.g., knee surgery, one can at best resume one’s career at the same level as before.

Your example of constant exercise throws an undeniable wrench in my theory, however. Exercise is, clearly, meant to be something that improves one’s athleticism, and therefore could be seen as falling on the unacceptable side of my fine line. Yet obviously this is at best unintuitive and at worst a crushing blow to my theory. I admit that I have no unassailable defense against this. However, let me try one last maneuver. Let’s call “natural exercise” any form of exercise that one could undertake without advanced technology. Any form of running, stretching, weight-lifting, etc., would fall under this umbrella. (Never mind that most, e.g., modern weight machines are obviously technologically enhanced — someone with the appropriate set of rocks and sticks could emulate the majority of this technology.) Now let’s call “enhanced exercise” any form of exercise that relies inherently on technology. For instance, I’m imagining some sort of computer-aided analysis of muscle fibers during a workout, with an algorithm that instantaneously guides the athlete through electrical feedback into better postures. I claim that natural exercise is always acceptable, and enhanced exercise always unacceptable. And with this arguable line drawn anew, I rest my case. Tenuously.

Categories: Metaphysics

What Is Realism?

Do you believe in the existence of stones, trees, cats, and the other everyday objects around us? It’s not a rhetorical question — there are actually philosophers who don’t believe in the existence of this sort of thing. What about the objects of mathematics? — numbers, abstract triangles, infinite quantities? How about the entities and laws of science? Or moral and aesthetic properties? What sorts of furniture are you willing to stow in the universe’s metaphysical attic?

Realism and Language
There is a school of thought that categorizes realism as a doctrine about truth and language instead of existence — the idea being that talking about the existence of sorts of things, and what it means for our talking to correspond to some sort of truth, is the real job of this branch of philosophy. This school of thought will be blatantly ignored by me in this piece, though those who are interested in finding out more can get lots of great references from Michael Devitt’s Realism and Truth.

Realism in philosophy is, broadly, a belief in the existence of some sorts of things. If you believe in the existence of numbers, you are a mathematical realist. If you believe in the existence of unobservable subatomic particles, you are a scientific realist. And so on, through the rest of the disciplines into which philosophy delves — ethics, aesthetics, language, and logic, to name a few.

Of course, just as there are realists about each of these sorts of things, there are also anti-realists — self-proclaimed disbelievers in the existence of those types of objects — and many a heated battle has been waged between the two camps in just about every arena.

Common-Sense Realism

What I’d like to talk about initially is realism (and anti-realism) about a sort of object that many of us take for granted as having an uncontroversial existence: stones, trees, and cats; the everyday objects of the external world. Let’s call this doctrine common-sense realism.

I said “external world” in the previous paragraph as a hat-tip to the debate that pretty much gave birth to the very notion of realism in the modern era: Cartesian skepticism. René Descartes, in his Meditations, pondered the objects about whose existence he could be absolutely certain. In the end, he cast considerable and powerful doubt on the existence of even such mundane and seemingly certain things as stones, trees, and cats, saving his indubitable belief solely for the existence of his own mind. Descartes’ skepticism was so powerful, in fact, that it spawned an incredible genealogy of philosophers arguing about it for centuries to come. In the end, Descartes himself, with the generous and dubious help of his God, wound up believing in stones, trees, and cats, but other philosophers would not so easily shake off the doubts Descartes had raised. George Berkeley, for one, posited that, post-Descartes, it only made sense to believe in the existence of minds — not in the existence of stones, trees, and cats at all. (Stones, trees, and cats, on Berkeley’s take, are actually collections of ideas, which are, as ideas, completely dependent on minds for their existence.) So, thanks in part to Descartes, we have a divide that persists in our thinking about these things to this very day: there is the internal world (our minds) and the external world (stones, trees, and cats). Thanks to the certainty Descartes uncovered, almost no one until recently has been an anti-realist about minds; but many have been anti-realists about the external world.

So there are two facets to being a common-sense realist. It means, for one thing, that you believe in the existence of things like stones, trees, and cats; but for another thing, it means that you don’t think such things are dependent on minds for their existence. A tree, for a common-sense realist, is a real object in the real world, and would exist whether or not humans ever thought about it.

Mind Dependence
What sorts of things would be dependent for their existence on minds? Unless you are a disciple of Berkeley, this might be an odd question. But ponder things like dreams, emotions, and ideas. Clearly these sorts of things are mind-dependent. A stone, on the contrary, if you’re a common-sense realist, exists whether or not any minds exist. The fact that, e.g., dreams are mind-dependent puts them in an odd metaphysical category based on our criteria here. But we can put this to one side for the time being, noting that the fact that mind-dependent entities are mind-dependent is more of a boring truism than an earth-shaking metaphysical revelation.

So if you believe that stones, trees, and cats exist even when you’re not thinking about them, you are a common-sense realist. You might, naturally, be wondering why anyone would be an anti-realist about this sort of thing, or about anything, for that matter. Well, actually, not a lot of philosophers since Berkeley have been common-sense anti-realists. But anti-realism becomes a lot more attractive in other realms.

What other sorts of realism or anti-realism might you buy into?

Scientific Realism

If you’re a common-sense realist, you are also likely to be a scientific realist. Scientific realism is the doctrine that not only do stones, trees, and cats exist, but so do the objects that science posits. If you’re a scientific realist, you include amongst the furniture of your universe such so-called “unobservable” subatomic particles as electrons, bosons, and quarks, as well as objects and phenomena on the other end of the magnitude spectrum such as black holes, gravity, and an expanding universe. One problem with scientific realism is that scientific theories are sometimes wrong, and so the objects that these theories posit can turn out to be fictional in the end. One favorite example in the literature is a 17th century theory of combustion that posited the existence of a substance called phlogiston. The theory, while scientifically accepted at the time, turned out to be wrong, and phlogiston was shown not to exist. So a 17th century scientific realist would have been put in the awkward position of believing in the reality of something that didn’t in fact exist.

How does a scientific realist come to grips with such uncertainty? Well, the general response from scientific realists is that this is the best we can do. Sure, science is sometimes wrong, but it’s still our best bet for uncovering the true nature of the universe. There is no non-scientific, privileged position from which we will ever be able to see the entire truth about the world — there is no window into the room that holds all of the furniture of the universe. Our current scientific theories provide the best view we can get.

Mathematical Realism

If you are a scientific realist, you might also be a mathematical realist, seeing how science and math seem to be so tightly bound together.

Actually, though I am a common-sense and scientific realist, my favorite brand of anti-realism is mathematical anti-realism. I have a hard time stomaching the idea of numbers and similar abstract objects existing independently of minds. I’m not alone in my distaste of mathematical realism, but those of us so disposed do face many issues — chief among them the seeming indispensability of math to science. If one is a scientific realist, and science relies indispensably on math, then on the face of it it seems as if one is committed to the existence of mathematical entities, whether or not one likes it. This indispensability argument has kept philosophers of math very busy over the last few decades.

For many of us on the anti-realist side of the mathematics debate, the big problem is that of causal inertness. Mathematical objects are, by consensus at any rate, abstract — that is, they take up no space and have no causal powers whatsoever. You can’t throw the number 8 through a window, for instance. And yet for a mathematical realist the number 8 still exists, somehow, and is indispensable to science. This sort of existence, for many of us, is just a completely different sort of thing from trees and quarks, which are just the sorts of things that can be thrown through windows. (Though you have to be pretty skilled to throw a quark anywhere.) For a mathematical anti-realist, it makes more sense to think of the number eight as a useful fiction; like Holden Caulfield with an advanced degree in particle physics.

Moral Realism

I’m probably the worst person to be writing about moral realism, because I never really understood what positing actual entities/properties of ethics (philosophers talk more about moral properties than entities) is supposed to accomplish. And yet, on certain takes, this is exactly what moral realism posits. At least mathematical entities are tightly bound to the entities of physics. Moral properties, if they were to exist, would be tightly bound to human psychology and systems of justice — both clearly dependent on human minds for their existence.

Mostly, the case for moral realism is stated in terms of semantics instead of existence — moral realists say that moral statements can be taken to be objectively true or false, in opposition to some common-sense intuitions that moral statements are subjective and/or dependent for their validity on the cultures in which they are uttered. But if “that cat is black” is a true statement because there is indeed a black cat in front of you, then “that person is virtuous” could be held to be true in the same way: there is indeed a virtuous person in front of you. This would, on a naively reasonable take, put virtuousness on a par with blackness; but while one is easily cashed out in terms of low-grade, mind-independent physics, the other is all bundled up with arguably less objective mind-dependent concepts. That’s why I am a moral anti-realist.

What Kind of Realist Are You?

Chat us up in the comments!

Categories: Metaphysics

Are We Living In A Computer Simulation?

We recently explored Cartesian skepticism, and its dark conclusion that we can’t know for sure that the external world exists. This post is in a similar vein, as it asks the question: are we unknowingly living in a computer simulation? One difference between this dark idea and Descartes’ is that if we are indeed living in a computer simulation, there definitely would exist an external world of some sort — just not the one we think there is. Our simulators, after all, would have to live in some sort of an external world in order for there to be computers upon which they could simulate us. But, of course, on this scenario the world that we think of as existing would be a mere virtual creation, and so, for us (poor unknowingly simulated beings), the depressing Cartesian conclusion would remain: our external world does not truly exist.

Of course, if you’ve been even a marginal part of contemporary culture over the last decade or two, you know the movie “The Matrix”, the premise of which is that most of humanity is living mentally in a computer simulation. (Physically, most of humanity is living in small, life-sustaining pods, in a post-apocalyptic real world of which they have no awareness.) You no doubt see the parallel between “The Matrix” and the topic of this post. (Other movies with similar premises include “Total Recall” and “Dark City”, and surely many more that I can’t think of off the top of my head. Which makes me think we have to do a philosophy-in-the-movies blog post soon…) But rest assured that this is no banal foray into Keanu Reevesean metaphysics. (“Whoa.”) The subject of existing in a computer simulation has been pored over to a dizzying extent by philosophers. There’s a lot of meat on this philosophical bone.

Nick Bostrom’s Simulation Argument

Nick Bostrom, a philosopher at Oxford, has developed a most interesting argument, the gist of which is that, with a surprisingly high degree of probability, we may indeed all be living in a computer simulation. His clever argument concerns advanced civilizations whose computational technology is so powerful that they can easily and cheaply run realistic simulations of their ancestors — people like you and me.

If these advanced civilizations are possible, then, says Bostrom, at least one of these three hypotheses must be true:

(1) Most (as in an overwhelmingly high statistical majority) civilizations that get to this advanced computational stage wind up going extinct. (The Doom Hypothesis)

(2) Most (as in an overwhelmingly high statistical majority) civilizations that get to this advanced computational stage see no compelling reason to run such ancestor simulations. (The Boredom Hypothesis)

(3) We are almost certainly living in a computer simulation. (The Simulation Hypothesis)

Bostrom claims that (1) and (2) are each just as likely as (3), but, really, it’s fairly straightforward to argue that they are both actually false. The Boredom Hypothesis, in particular, seems rather implausible. Though we don’t know what such an advanced civilization would think of as worth its time, it’s not unlikely that at least some significant fraction of advanced societies would run such easy and cheap simulations, whether out of anthropological curiosity or just for entertainment. (A lot of our best scientists surely play video games, right?) The Doom Hypothesis is slightly more plausible. Perhaps there’s a technological boundary that most civilizations cross that is inherently dangerous and destructive, and only a negligible fraction of civilizations make it over that hurdle. But it’s still tempting, and not unreasonable, to think that such a barrier isn’t inherent to social and scientific progress.

So, if civilizations don’t generally extinguish themselves before reaching computational nirvana, and if they don’t think that the idea of running ancestor simulations is a silly waste of time, then we have a clear path to the Simulation Hypothesis. Say that a thousand civilizations reach this computational stage and start running ancestor simulations. And say these simulations are so easy and inexpensive that each civilization runs a trillion simulations. That’s a quadrillion simulated civilizations overall. Now divide that quadrillion by however many real civilizations there are in the universe (presumably far fewer than a quadrillion), and you get the odds that you are living in a simulated civilization. Say, for the sake of argument, that there are a million real civilizations in the universe. The odds are then a billion to one against your living in a real civilization. The far more likely proposition, by a landslide, is that you are living in a simulated one.
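
To make the arithmetic concrete, here’s a minimal sketch in Python; the numbers are the made-up figures from above (a thousand simulating civilizations, a trillion simulations apiece, a million real civilizations), not data about the actual universe:

    # Toy version of the simulation-odds arithmetic. All numbers are
    # for-the-sake-of-argument assumptions, not data.
    simulating_civs = 1_000           # civilizations that run ancestor simulations
    sims_per_civ = 1_000_000_000_000  # a trillion simulations each
    real_civs = 1_000_000             # total real civilizations in the universe

    simulated_civs = simulating_civs * sims_per_civ  # a quadrillion simulated ones

    # If you can't tell from the inside which kind you are, your odds of
    # being simulated are the ratio of simulated civilizations to real ones.
    odds = simulated_civs // real_civs
    print(f"{odds:,} to 1 that you are simulated")   # 1,000,000,000 to 1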

Functionalism

One key assumption upon which this argument relies is that things like minds and the civilizations in which they reside are in fact simulatable. This is a contentious claim.

The theory that underwrites the simulability of minds is usually labeled “functionalism” — it gets its traction from the idea that perhaps minds can emerge from hardware besides human brains. If we meet an alien from an advanced civilization, learn her language, and converse with her about the meaning of life, we’d like to say that she has a mind. But if, upon scanning her body, we discover that her brain is in fact made up of hydraulic parts, rather than our electro-chemical ones, would her different hardware mean that she isn’t possessed of a mind? Or would it be the case that, in fact, minds are the kind of software that can run on different sorts of hardware?

If this is indeed the case, then minds can be classified as functional things — that is, a mental state (say, of pondering one’s own significance in an infinite cosmos) is not identical with any particular brain state, but is some sort of functional state that can be realized on all different sorts of hardware. And if this is true, then there’s no reason, in principle, that a computer couldn’t be one of those sorts of hardware.

Given our “successes” in the field of Artificial Intelligence (AI), I have long been skeptical of our ability to create minds in computers. And there’s a proud tradition in philosophy of this sort of skepticism — John Searle, for instance, is one of the more famous anti-AI philosophers out there. (You may have heard of his Chinese Room argument.) But, by and large, I think it is fair to say that most philosophers do come down on the side of functionalism as a philosophy of mind, and so Bostrom feels comfortable using it as a building block to his argument.

I can’t, in this post, get into the debate over AI, functionalism, and the mind, but I will pick on one interesting aspect of the whole simulation issue. Every time I think about successful computer simulations, my mind goes to the simulation of physics rather than the simulation of mental phenomena. Right now, I have a cat in my lap and my legs are propped up on my desk. The weight and warmth of my cat have very diverse effects on my body, and the extra weight is pushing uncomfortably on my knees. My right calf is resting with too much weight on the hard wood of my desk, creating an uncomfortable sensation of pressure that is approaching painful. My right wrist rests on the edge of my desk as I type, and I can feel the worn urethane beneath me, giving way, in spots, to bare pine. My cat’s fur fans out as his abdomen rises with his breathing — I can see thousands of hairs going this way and that, and I stretch out my left hand and feel each of them against my creviced palm. The fan of my computer is surprisingly loud tonight, and varies in pitch with no discernible rhythm. I flake off one more bit of urethane from my desk, and it lodges briefly in my thumb’s nail, creating a slight pressure between my nail and my flesh. I pull it out and hold it between my thumb and finger, feeling its random contours against my fingerprints.

At some point, you have to wonder if computing this sort of simulation would be just as expensive as recreating the scenario atom-for-atom. And maybe if a simulation is as expensive as a recreation, in fact the only reliable way to “simulate” an event would actually be to recreate it. In which case the idea of functionalism falls by the wayside — the medium now matters once again; i.e., feeling a wood chip in my fingernail is not something that can be instantiated in software, but something that relies on a particular sort of arrangement of atoms — wood against flesh.

Who knows, really? Perhaps future computer scientists will figure out all of these issues, and will indeed usher in an era of true AI. But until it becomes clearer that this is a reasonable goal, I’ll stick with my belief that I am not being simulated.

If I am being simulated, a quick aside to my simulator: Perhaps you don’t like meddling in the affairs of your minions, but I could really use a winning lottery ticket one of these days. Just sayin’…

Categories
Epistemology

Cartesian Skepticism

Welcome to the blog’s first foray into epistemology: the philosophical study of knowledge. Today we will be talking about René Descartes, who is enshrined in infamy for two feats: creating a system of geometry that would annoy high school students for hundreds of years to come, and presaging “The Matrix”. Much as I actually liked high school geometry, I would like here to talk about the Cartesian skepticism of the external world that made so many science fiction movies possible.

For those of you who haven’t yet read Descartes’ famous Meditations on First Philosophy (mostly referenced plainly as the Meditations), what are you waiting for? Here’s an old translation into English to get you started. There are also approximately a billion print versions available on Amazon, in case you want a more contemporary translation, along with the ability to scribble in the margins.

The Meditations start with Descartes recounting the none-too-astounding realization that he had been wrong about some things as a youngster.

Several years have now elapsed since I first became aware that I had accepted, even from my youth, many false opinions for true, and that consequently what I afterward based on such principles was highly doubtful; and from that time I was convinced of the necessity of undertaking once in my life to rid myself of all the opinions I had adopted, and of commencing anew the work of building from the foundation, if I desired to establish a firm and abiding superstructure in the sciences.

So his project in the Meditations was very much foundational. Descartes wanted to tear down all things that passed for knowledge, in order to find a kernel of certainty, from which he would build back up a magnificent structure of infallible knowledge. Those of you who remember high school geometry might be having nightmarish flashbacks at this point, remembering how the subject was built up from just a few, allegedly very certain axioms. The axioms were the firm, unassailable foundation upon which the science of geometry was built. Descartes had similar plans for every other science and in fact every human epistemological endeavor.

His method was, simply enough, to sit comfortably in his pajamas and begin doubting everything that he possibly could doubt. The first victim of his skepticism was his senses. “All that I have, up to this moment, accepted as possessed of the highest truth and certainty, I received either from or through the senses. I observed, however, that these sometimes misled us; and it is the part of prudence not to place absolute confidence in that by which we have even once been deceived.” A pretty reasonable place to start doubting things. After all, there are a million and one ways in which we are regularly deceived by our senses: optical illusions abound, hallucinations occasionally crop up, and physical ailments of the eyes and brain can cause misperceptions.

But there’s an even more radical skepticism that can crop up from this line of thought. What if it’s not just the case that the senses deceive, but that they don’t exist at all? Take this picture of the human knowledge machine:

On this picture (which, I think, is a pretty sound depiction of what philosophers of that age thought, and indeed is still how a lot of people picture the mind), the only reliable access to knowledge is via an inner screen that has projected upon it images of the external world. The screen here is inside the brain/mind, and the little person viewing the screen is one’s consciousness. If the senses exist, then sometimes they project something misleading on the inner screen, and this gives rise to optical illusions and hallucinations. But on this picture, a skeptic could go so far as to say that the senses might be fictional. If all we have access to is this inner screen, then we just can’t be sure from where its images come. Maybe they come from the senses, and maybe they don’t. Of course, given that there were really no computers or any decent science fiction at the time, the only 17th Century source that would be powerful enough to accomplish this illusory feat would be God. But since God is supposed to be omnibenevolent, and would therefore not deceive us in this way, Descartes conjured up a reasonable facsimile of sci-fi for the time, and said that perhaps there is an evil demon who deceives each of us in this way.

Well, that’s a lot of doubt, and a lot of the world’s furniture that has suddenly become dispensable. Stones, trees, and cats might not exist. Neither might other people, for that matter. Descartes found himself at this point in an extremely solipsistic position. He might be the only person in the universe. And this person might not even have a body.

At this point, Descartes took some certainty back from the skeptical vortex into which he was falling. He might not have a body, but if he was indeed being deceived by some evil demon, then he was being deceived. “I am, I exist,” he concluded. And each time he thinks this (or anything else, for that matter), his existence is assured.

At this point, we could veer off into metaphysics and the philosophy of mind, and discuss the ontological corollary to this barely optimistic offramp of the Cartesian skeptical superhighway: Dualism. According to Descartes’ theory, the mind is not necessarily connected to a body; that is, it is logically possible for a mind to exist without a brain.

But let’s save this subject for another post. Now, let’s examine where Cartesian skepticism has taken us, epistemologically.

Skepticism of the external world is a very strong philosophical position. It is really quite difficult to debate a skeptic on matters of epistemology, because the default answer of “but can you really know that the external world exists” is very defensible. Try it out for yourself:

Me: This iPhone is great.
You: If it exists.
Me: What do you mean? I’m holding the thing in my hand!
You: You think you are. Maybe you’re dreaming.
Me: I know the difference between a dream and reality.
You: You think you do. But maybe you’re in a dream, and in that dream you dream that you’re awake, but really you’re still just dreaming.
Me: Oh, come on. That leads to an absurd infinite regress of dream states.
You: Well, it’s still possible. And anyway, you could be living in a computer simulation. Or you could be crazy and hallucinating all of this. In any event, you can’t know for sure that you’re holding an iPhone in your hand. You can know that you have an image of holding an iPhone in your mind. Therefore your mind exists. Does that make you feel better?

And you have won the debate!

The Way Out

So do we have to just give in to the skeptic? Is there no hope for those of us who would like to assume the existence of stones, trees, and cats? Real ones… not just images of them in our minds.

Well, yes, there is. It’s called Naturalized Epistemology (or just “naturalism”), and it was foreshadowed by David Hume way back in 1748. I’ll quote a lengthy passage, because it’s so beautifully crafted:

For here is the chief and most confounding objection to excessive scepticism, that no durable good can ever result from it; while it remains in its full force and vigour. We need only ask such a sceptic, What his meaning is? And what he proposes by all these curious researches? He is immediately at a loss, and knows not what to answer. A Copernican or Ptolemaic, who supports each his different system of astronomy, may hope to produce a conviction, which will remain constant and durable, with his audience. A Stoic or Epicurean displays principles, which may not be durable, but which have an effect on conduct and behaviour. But a Pyrrhonian cannot expect, that his philosophy will have any constant influence on the mind: or if it had, that its influence would be beneficial to society. On the contrary, he must acknowledge, if he will acknowledge anything, that all human life must perish, were his principles universally and steadily to prevail. All discourse, all action would immediately cease; and men remain in a total lethargy, till the necessities of nature, unsatisfied, put an end to their miserable existence. It is true; so fatal an event is very little to be dreaded. Nature is always too strong for principle. And though a Pyrrhonian may throw himself or others into a momentary amazement and confusion by his profound reasonings; the first and most trivial event in life will put to flight all his doubts and scruples, and leave him the same, in every point of action and speculation, with the philosophers of every other sect, or with those who never concerned themselves in any philosophical researches. When he awakes from his dream, he will be the first to join in the laugh against himself, and to confess, that all his objections are mere amusement, and can have no other tendency than to show the whimsical condition of mankind, who must act and reason and believe; though they are not able, by their most diligent enquiry, to satisfy themselves concerning the foundation of these operations, or to remove the objections, which may be raised against them.

So, the idea (if you had a hard time navigating the old-school English) is that if skepticism of the external world is true, it leaves one in the unenviable position of nothing mattering. It is not a stance from which one can do any productive theorizing about science, philosophy, or, well, anything except for one’s own mind. (And even that bit of theorizing will stop at the acknowledgement of one’s inner screen accessible to consciousness.)

Do we have a stance from which we can do productive theorizing about things? Assuming that science is generally correct about the state of the world is a good start! After all, science has some of the smartest people in the world (if they and the world exist) applying the most stringent thinking and experimentation known to humanity. And science assumes the existence of things like stones, trees, and cats — things that exist in the world, not merely as ideas in our minds.

Here’s one of the more interesting perspectives on subverting skepticism, from Peter Millican at Oxford:

The gist of the video is that there are two ways to argue every issue. In the case of skepticism of the external world, you can argue, like a naturalist, that you know that stones, trees, and cats are real, and therefore that you know there is an external world; or, like a skeptic, you can argue that you don’t know that there is an external world, and therefore that you don’t know that stones, trees, and cats exist. From a strictly logical point of view, the two strategies are equally plausible. And in both cases you have to assume something to be the case in order to get to your desired conclusion. So do you want to assume that you don’t know there’s an external world, or would you rather assume that you know that stones, trees, and cats exist? Your choice.

If you choose the skeptical path, I hope you’ll choose to pass your solipsistic time entertaining dreams of this blog.

Categories
General

On Definitions in Philosophy

When trying to define a term, we generally think of providing a set of necessary and sufficient conditions: a recipe for including or excluding a thing in a particular category of existence. For instance, an even number (definitions tend to work best in the mathematical arena, since definitions there can be as precise as possible) is definable as an integer that, when divided by 2, leaves no remainder. It is easy, given this definition, to ascertain whether or not a given number is even. Divide it by two and see if it leaves a remainder. If it does, then it’s not even; if it doesn’t, then it is. We have here a clear test for inclusion or exclusion in the set of even numbers.
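
In code, the test is a one-liner. Here’s a quick Python sketch:

    def is_even(n: int) -> bool:
        # An integer is even iff division by 2 leaves no remainder.
        return n % 2 == 0

    print(is_even(10))  # True
    print(is_even(7))   # False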

Outside of mathematics, things get trickier. (Inside mathematics, things can be tricky as well. Imre Lakatos’ excellent book Proofs and Refutations details some of the problems here. If you are mathematically and philosophically inclined, this is a must-read book.)

In his Philosophical Investigations, Ludwig Wittgenstein famously talks about the travails of defining the term “game”. Is there a set of necessary and sufficient criteria that will let us neatly split the world into games and non-games? For instance, do all games have pieces? (No, only board games have these.) Winners and losers? (There are no winners in a game of catch.) Strategy? (Ring-around-the-rosie has no strategy.) Players? (Well, since games are a particularly human endeavor, it would be an odd game that had no human participants. But, of course, some games have only one player.) There seems to be no single set of characteristics that spans everything we’d like to call a game. Wittgenstein’s solution was to say that games share a “family resemblance” — “a complicated network of similarities overlapping and criss-crossing”. A great many games have winners and losers, and so share this family trait; and then there are games that have pieces, and this is another trait that can be shared. Many (but not all) of the games with pieces also have winners and losers, and so there is significant overlap here. Games with strategy span another vast swath of the game landscape, and many of these games have winners and losers, many of which also have pieces. But not all. And so a network of resemblances among games is found — not a single boundary that separates games from non-games, but a set of sets that is overlapping and more or less tightly connected.

This is a brilliant idea, but one that often leaves analytical philosophers with a bad taste in their mouths. If you try to formalize family resemblances (and analytical philosophers love to formalize things), you run up against the same problems as you had with more straightforward definitions. Where exactly do you draw the line in including or excluding a resemblance? Games are often amusing, for instance. But so are jokes. So jokes share one resemblance with games. But jokes are often mean-spirited. And so are many dictators. And dictators are often ruthless. As are assassins. So now we have a group of overlapping resemblances that bridges games to assassins. And if you want to detail the conditions under which this bridge should not take us from one group of things (games) to the other (assassins), you are back to specifying necessary and sufficient conditions.
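
To see the worry in miniature, here’s a toy Python sketch; the trait sets are my own invented placeholders, not anything Wittgenstein ever catalogued:

    # Toy formalization of the family-resemblance worry. The trait
    # sets below are invented placeholders.
    traits = {
        "games":     {"amusing", "rules", "players"},
        "jokes":     {"amusing", "mean-spirited"},
        "dictators": {"mean-spirited", "ruthless"},
        "assassins": {"ruthless", "stealthy"},
    }

    def resembles(a, b):
        # Two categories "resemble" each other if their trait sets overlap.
        return bool(traits[a] & traits[b])

    chain = ["games", "jokes", "dictators", "assassins"]
    for a, b in zip(chain, chain[1:]):
        print(a, "~", b, resembles(a, b))   # each adjacent pair overlaps: True

    print(resembles("games", "assassins"))  # False: no direct overlap...
    # ...and yet the chain of pairwise resemblances bridges games to assassins.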

Wittgenstein, I imagine, would have laughed at this “problem”, telling us that we just have to live with the vague boundaries of things. Which is all well and good, but is easier said than done.

Knowledge

The defining of knowledge gives us a great example of definitions at work and their problems. For those of you who haven’t been indoctrinated in the workings of epistemology, it turns out that a good working definition for knowledge is that it is justified true belief.

I take it as axiomatic as can be that something has to be believed to be known. If you have a red car but you don’t believe that it’s red, you don’t have knowledge of that fact. But, clearly, belief isn’t sufficient to define something as knowledge. For instance, if I believe that my red car is actually blue, I still don’t have any knowledge of its actual color. So we have to bring truth into the picture. If I believe that my car is red, and it is actually red, I’m certainly closer to having a bit of knowledge. But, again, this isn’t sufficient. What if my wife has bought me a red car that I haven’t seen yet, and I believe it’s red only because I had a dream about a red car last night? Do I have knowledge of my car’s color? I’d say not. We need a third component: justification. If I believe that my new red car is indeed red because I’ve seen it with my own eyes (or analyzed it with a spectrometer, if the worry of optical illusions bugs you), then we should be able to say I do indeed have a bit of knowledge here.

In 1963, Edmund Gettier came up with a clever problem for this definition — one that presents a belief that is justified and true, but turns out to not be knowledge. Here is the scenario:

  • Smith and Jones work together at a large corporation and are both up for a big promotion.
  • Smith believes that Jones will get the promotion.
  • Smith has been told by the president of the corporation that Jones will get the promotion.
  • Smith has counted the number of coins in Jones’ pocket, and there are 10.

The following statement is justified:

(A) Jones will get the promotion and Jones has 10 coins in his pocket.

Then this statement follows logically (and is therefore also justified):

(B) The person who will get the promotion has 10 coins in his pocket.

But it turns out that the president is overruled by the board, and Smith, unbeknownst to himself, is actually the one who will be promoted. It also turns out that Smith, coincidentally, has 10 coins in his pocket. Thus, (B) is still true, it’s justified, and it is believed by Smith. However, Smith doesn’t have knowledge that he himself is going to get promoted, so clearly something has gone wrong. Justification, truth, and belief, as criteria of knowledge, let an example of non-knowledge slip into the definitional circle, masquerading as knowledge.
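
Here’s a toy Python encoding of the situation; the flags simply record the stipulations of the story, so nothing deep is computed, but it shows the classical test passing where knowledge intuitively fails:

    # Toy encoding of the Gettier case. Each flag records a stipulation
    # from the story above.
    def is_knowledge_jtb(believed, true, justified):
        # The classical analysis: knowledge = justified true belief.
        return believed and true and justified

    # (B): "The person who will get the promotion has 10 coins in his pocket."
    believed  = True   # Smith believes (B), having inferred it from (A)
    justified = True   # (B) follows logically from Smith's justified premises
    true      = True   # Smith gets the job, and Smith has 10 coins

    print(is_knowledge_jtb(believed, true, justified))  # True...
    # ...yet intuitively Smith does NOT know (B): his justification ran
    # through the false premise that Jones would get the job.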

More Games

Let’s get back to the problem of defining games, and say that, contrary to Wittgenstein, you’re sure you can come up with a good set of necessary and sufficient conditions. You notice from our previous list of possible necessary traits that games certainly have to have players. Let’s call them participants, since “player” is something of a loaded word here (a player presupposes a game, in a way). And now you also take a stand that all games have pieces. Board games have obvious pieces, but so, you say, do other games. Even a game of tag has objects that you utilize in order to move the game along. (In this case, you’re thinking of the players’ actual hands.) So let’s add that to the list, but let’s call it what it is: not pieces so much as tools or implements. And perhaps you are also convinced that all games, even games of catch, have rules. Some are just more implicit and less well-defined than others. So let’s stop here, and see where we are. We have participants, implements, and rules.

And now we begin to see the problem. If we leave it at that, our definition is so loose as to allow under the game umbrella many things that aren’t actually games. A group of lab technicians analyzing DNA could fall under the conditions of having participants, implements, and rules. But if we tighten up the definition, we run the risk of excluding real cases from being called games. For instance, if we tighten the definition to exclude our lab workers from the fun by saying that games also have to have winners and losers, we immediately rule out as games activities like catch and ring-around-the-rosie.
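
A quick Python sketch makes the over- and under-inclusion vivid; the activities and their feature sets are invented placeholders:

    # Toy test of the loose definition (participants + implements + rules).
    # The activities and their features are invented placeholders.
    features = {
        "chess":                 {"participants", "implements", "rules", "winners"},
        "catch":                 {"participants", "implements", "rules"},
        "ring-around-the-rosie": {"participants", "implements", "rules"},
        "DNA analysis":          {"participants", "implements", "rules"},
    }

    loose = {"participants", "implements", "rules"}
    tight = loose | {"winners"}

    def is_game(activity, criteria):
        return criteria <= features[activity]  # all criteria must be present

    for activity in features:
        print(activity, is_game(activity, loose), is_game(activity, tight))
    # Loose: DNA analysis counts as a game (over-inclusive).
    # Tight: catch and ring-around-the-rosie are excluded (under-inclusive).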

Lakatos coined two brilliant phrases for these definitional tightenings and loosenings: “monster-barring” and “concept-stretching”. Monster-barring is an applicable strategy when your definition allows something repugnant into the category in question. You have two options as a monster-barrer: do your utmost to show how the monster doesn’t really satisfy your necessary and sufficient conditions, or tweak your definition to keep the monster out.

Concept-stretching allows one to take a definition and run wild with it, applying it to all sorts of odd cases one might not previously have thought of. For instance, perhaps we should expand entry into the realm of games to include our intrepid DNA lab workers. What would that mean for our ontologies? And what would it mean for people who analyze games? And for lab technicians?

Philosophers love to define terms; they also love to find examples that render definitions problematic. It’s a trick of the trade and a hazard of the business.

Categories
Arguing Over Nothing

The Peanut Butter and Jelly Debate

Arguing Over Nothing: A regular feature on the blog where we argue over something of little consequence, as if it were of major consequence. Arguing is philosophy’s raison d’être, and the beauty of an argument is often as much in its form as its content.

Today, we argue about the proper way to make a peanut butter and jelly sandwich. Jim argues for a radical, new approach, while I side with a more standard approach to the endeavor.

Each philosopher is granted 500-750 words to state his/her case, as well as 250-500 words for rebuttal. The winner will be decided by a poll of the readers (or whoever happens to have admin privileges at the appropriate time).


Jim: Arguing for the bowl method

The purpose of a peanut butter and jelly sandwich, the purpose of any sandwich, I suppose, is to provide a quick bit of sustenance. There are ‘sandwich artists’ in the world, but I have trouble imagining such people working in the medium of peanut butter and jelly. Therefore, the sooner the sandwich is made, the sooner its purpose can be met. By the time one has opened two jars and secured two utensils (surely we can both agree that cross-contamination of the ingredients should not occur within the jars), much time has already been lost and invested. From that point, mixing the two ingredients in a bowl is the best and most efficient way of creating the sandwich. This is so for, primarily, two reasons.

First, peanut butter, even the creamiest sort, is not so easily spread on bread. I will grant that toasted bread provides a more durable spreading surface, but, again, the sandwich is made for a quick repast, so toasting is often overlooked or bypassed. Inevitably, large divots are raised or even removed from the bread by even the most experienced spreader. Once that has been accomplished, if it were accomplished at all, the jelly must be attended to. Securing jelly from the jar with a spreading knife is a feat best left to the young and others with plenty of idle time on their hands. Repeated stabbings into the jar will secure, at best, scant amounts of jelly. It is, obviously, better to use a spoon. However, as is clear to even the dullest imagination, spreading with a spoon leaves much to be desired, literally, as the result tends to be scattered hillocks of jelly, between which are faint traces, like glacial retreatings, of ‘jelly flavor’. Were one to use a spoon for jelly retrieval and a knife for jelly spreading, that is yet another utensil to clean.

The second reason against separate spreads, and so for one bowl of mixed ingredients, is a corollary to the above. When one makes a peanut butter and jelly sandwich, one is looking to taste both, ideally in every bite. Given the condition of the bread on the peanut butter side and the pockets of flavor interspersed with the lack thereof on the jelly side, one is lucky to get both flavors in half the bites taken.

By mixing both peanut butter and jelly in a bowl prior to application, both of the concerns above are fully redressed. The peanut butter, by virtue of its mixing with jelly, becomes much more spreadable for two reasons: it is no longer as thick and it is no longer as dry. A thin and moist substance is always much easier to spread. Furthermore, because of the aforementioned mixing, both flavors will be available in every bite taken. The end result is a much more delicious, easily made (and so efficient), quick meal. As an added bonus, one’s fingers end up with less mess, since only one slice of bread needs attending to, and so the fingers are exposed to mess only once, not twice as with the other method.

While there is the bowl left to clean, in addition to the utensils, what has not been removed from the bowl is easily rinsed. The peanut butter-jelly mix, given its thin and moist nature, is almost always able to be fully removed from the bowl and transferred to the bread. What is not so transferred, whether by design or not, is, by that same nature, easily washed or wiped away in disposal.

The bowl is clearly the way to go when making a peanut butter and jelly sandwich.


Alec: Arguing Against the Bowl Method

I will grant your utilitarian premise on sandwich making (“the purpose of any sandwich, I suppose, is to provide a quick bit of sustenance”), though I will point out that aesthetics could have a valid role to play in this debate. If your PB&J-from-a-bowl sandwich is singularly visually unappetizing (as I imagine it might be) then it will not provide any sustenance whatsoever, but will end up in the trash can instead. Also, note that your utilitarianism here could lead to the creation of a “sandwich” that is made by tossing the ingredients in a blender and creating a PB&J smoothie the likes of which would be eschewed by any rational hungry person.

But I digress.

You claim that peanut butter — even the creamy variety — is difficult to spread on bread. I have two points to make in regard to this claim. First, I haven’t had difficulty spreading peanut butter on bread since I was 12. Perhaps you should have your motor skills tested by a trained kinesiologist. I grant you that spreading a chunky peanut butter on a thin, wispy white bread can be problematic; but a smooth peanut butter on a hearty wheat bread? Not problematic at all. Second, you have pointed to no scientific research showing that mixing peanut butter and jelly in a bowl makes it easier to spread than plain peanut butter. I remain skeptical on this point. And even if it is easier to spread, the labor involved in mixing it with jelly in a separate bowl might be far more work than it is worth in the end.

The knife/jelly problem is a thorny one, indeed, as you have noted. Trying to extricate an ample amount of jelly from a jar with a knife is difficult and annoying. You claim that: “Were one to use a spoon for jelly retrieval and a knife for jelly spreading, that is yet another utensil to clean.” However, you have overlooked the obvious: one can use the knife from the peanut butter to spread the jelly that has been extricated with the spoon. Here is some simple math to show how utensil use plays out in both of our scenarios:

You: 1 knife for dishing peanut butter + 1 spoon for dishing jelly + 1 bowl for mixing.

Me: 1 knife for dishing and spreading peanut butter + 1 spoon for dishing jelly, and reuse the knife for spreading jelly.

So we are equal on our utensil use, and you have used an extra bowl.

And on the subject of this extra bowl, it will be readily admitted by all that a knife with peanut butter on it is annoying enough to clean, while an entire bowl with peanut butter on it is proportionately more annoying to clean. (Again, you claim that a peanut butter / jelly mixture is easier to clean than pure peanut butter, but the research on this is missing. Surely you will allow that a bowl with some peanut butter on it is not a simply rinsed affair.) Plus there’s the environmental impact of cleaning an extra bowl each time you make a sandwich. Add that over the millions of people who make peanut butter and jelly sandwiches each day, and you’ve got a genuine environmental issue.

Creating a peanut butter and jelly sandwich my way also leads to an easier-to-clean knife. After spreading the peanut butter on one slice of bread, you can wipe the knife on the other slice of bread to remove upwards of 90% of the residual peanut butter (Cf. “Peanut Butter Residue in Sandwich Making,” Journal of Viscous Foods 94, 2008, pp. 218-227.) This makes cleanup far easier than in your scenario, and results in potential environmental savings as well.

You do make two solid points. First, your PB&J mixture is potentially much more homogenous than the usual sandwich mixture, resulting in a more equitable PB-to-J ratio per bite. Here I can only revisit my aesthetic claim that eating a standard PB&J sandwich is more appealing than the greyish mixture you propose we slather on bread. Second, your sandwich creation process is indeed potentially less messy on the fingers than mine. To this I have no defense. Into each good life some jelly must fall.


Jim: Rebuttal

I must say, I find many of your points and counterpoints intriguing. All wrong, of course, but still intriguing. Let’s go through them, one at a time, and see where you go astray.

1) I grant both the utilitarian and aesthetic aspects of the sandwich. There are some truly beautiful sandwiches out there; few of them, however, are made at home and are made solely of bread, peanut butter, and jelly. The maker of such a sandwich is often working in a limited environment with a limited medium under a time crunch; otherwise, utility be damned and let the sandwich artist sing. As for the smoothie sandwich, I doubt, as surely do you, that the sole goal of the creator (of the sandwich) is to ingest those ingredients as soon as possible. Ignoring a lack of teeth or the presence of an extremely tight throat, such an option is insane.

2) While I appreciate a gentle jibe as much as the next fellow, to imply that I lack the wrist strength to apply peanut butter to bread is going a bit far. Ad hominem attacks should have no place in philosophical discourse. It is not impossible to spread peanut butter on bread and I will happily grant you the point that it is so much easier to do so on ‘hearty wheat bread’. My point was and is that it is easier to do so if, to use a turn of phrase, the wheels have been greased a bit, and it is my contention that a peanut butter and jelly mixture does just that. However, you are correct that I have no scientific data to back that up. I was under the impression that science need not enter civil discussion, but I will agree that I have no data to back that claim up. Common sense, mere intuition, though, seems to suggest that if jelly is easier to spread than peanut butter, and who would contest that, then surely a mixture of peanut butter and jelly would be easier to spread than peanut butter simpliciter.

3) I fear I only have enough space left to deal with your point concerning the extra cleaning of a bowl. I did take a bit of latitude with that argument and will concede it to you with but one addendum. In almost every home, at the very least in a great many homes, I would guess that the dishes are not washed one at a time, but rather several at once, and rarely immediately after use. If the utilitarian nature of the PB&J sandwich is granted, time is at a minimum and I suspect clean-up will have to wait a more opportune time. While an extra bowl is required during the creation of the sandwich, I do not think that an extra bowl needing to be washed would extend such washing time unduly.

Categories
Logic

Nonmonotonic Logic and Stubborn Thinking

I was struck recently by some similarities between the psychology of stubborn thinking and the history of science and logic. It’s not just individuals that have trouble changing their minds; entire scientific, logical, and mathematical movements suffer from the same problem.

Logic

When people think about logic (which I imagine is not very often, but bear with me on this), they probably think about getting from a premise to a conclusion in a straight line of rule-based reasoning — like Sherlock Holmes finding the criminal perpetrator with infallible precision, carving his way through a thicket of facts with the blade of deduction.

Here’s a sample logical proof that would do Holmes proud.

Birds fly.
Tweetie is a bird.
Therefore Tweetie flies.

We have here a general principle, a fact, and a deduction from those to a logical conclusion.

The problem is that the general principle here is just that: general. It is generally the case that birds fly. But some birds do not fly at all. (Indeed, there’s never a general principle that universally applies: even the laws of physics are arguably fallible. Cf. Nancy Cartwright’s wonderful How the Laws of Physics Lie.) Tweetie could be an ostrich or an emu, or Tweetie could have lost his wings in a window-fan accident, or Tweetie could be dead.

You could shore up your general principle in order to try to make it more universal: birds that aren’t ostriches, emus, wingless, or dead, fly. But this sort of backpedaling is really an exercise in futility. As decades of research in artificial intelligence through the 90s showed us, the more you expand your general principle to cover explicit cases, the less of a general rule it becomes, and the more you realize you have to keep covering further explicit cases, permutations upon permutations that will never end. (E.g., even in the case of death, Tweetie might be able to fly. He could be dead, but also in an airplane at 20,000 feet. Would you amend your general principle to cover this case? It would be a strange sort of “scientific” law that stated “Birds fly, except dead birds that aren’t in airplanes.”)

A brilliant solution to this sort of problem was found via the creation of nonmonotonic logic, a logical system that is what logicians call defeasible — that is, it allows for drawing a conclusion that can be undone by contrary information that later emerges. So the idea is that a nonmonotonic system allows you to conclude that Tweetie flies via the logic above, but also allows you to retract that conclusion if you then find out that Tweetie is, in fact, e.g., dead.

This may not seem like a big deal, since this is how a rational human is supposed to react on a regular basis anyway. If we find out that Tweetie is dead, we are supposed to no longer hold to the conclusion, as logical as it may be, that he flies. But for logicians it was huge. The old systems of logic pinned us helplessly to non-defeasible conclusions that may be wrong, just because the logic itself seemed so right. But now logicians have a formal way of shaking free of the bonds of non-defeasibility.
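
Here’s a minimal Python sketch of the defeasible idea; this is my own toy formulation, not any particular nonmonotonic calculus:

    # Toy defeasible inference: "birds fly" as a default rule that new
    # information can defeat.
    def flies(facts):
        defeaters = {"ostrich", "emu", "wingless", "dead"}
        # Default rule: a bird flies unless some defeater is known.
        return "bird" in facts and not (facts & defeaters)

    facts = {"bird"}        # all we know: Tweetie is a bird
    print(flies(facts))     # True  -- the default conclusion

    facts.add("dead")       # new information arrives
    print(flies(facts))     # False -- the conclusion is retracted

Notice that adding a fact withdrew a conclusion, which is exactly what classical (monotonic) logic forbids; that reversal is what puts the “nonmonotonic” in the name.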

Science

The history of science is rife with examples of the principle-clinging tenacity from which it took logic millennia to escape. A famous case is found in astronomy, where the concept that the Earth was at the center of the universe persisted for more than a dozen centuries. As astronomy progressed, it became clear that a simple model of circular orbits centered on the Earth would not suffice to describe the motion of the planets and the sun in the sky. Eventually, a parade of epicycles was introduced — circles upon circles upon circles of planetary motion spinning this way and that, all in order to explain what we observed in the Earth’s sky, while still clinging to the precious assumption that the Earth is centrally located. The simpler explanation, that the Earth was in fact not the center of all heavenly motion, would have quickly done away with the detritus of a failed theory, but it’s not so easy to change science’s mind.

In fact, one strong line of thought, courtesy of Thomas Kuhn, has it that the only way for scientists to break free from such deeply entrenched conceptions is nothing short of a concept-busting revolution. And such revolutions can take years to gather enough momentum to be effective in mind-changing. (Examples of such revolutions include the jarring transition from Newtonian to Einsteinian physics, and the leap in chemistry from phlogiston theory to Lavoisier’s theory of oxidation.)

Down to Earth

If even scientists are at the mercy of unchanging minds, and logicians have to posit complicated formal systems to account for the ability to logically change one’s mind, we should be prepared in our daily lives to come up against an immovable wall of opinions. Despite what the facts tell us.

Indeed, it isn’t very difficult to find people who have a hard time changing their minds. Being an ideologue is the best way of sticking to an idea despite evidence to the contrary, and ideologues are a dime a dozen these days. What happens in the mind of an ideologue when she is saving her precious conclusion from the facts? Let’s revisit Tweetie. (You can substitute principles and facts about trickle-down economics or global warming for principles and facts about birds, if you like.)

Ideologue: By my reasoning above, I conclude that Tweetie flies.

Scientist: That is some nice reasoning, but as it turns out, Tweetie is dead.

Ideologue: Hmmm. I see. Well, by “flies” I really mean “flew when alive”.

Scientist: Ah, I see. But, actually, Tweetie was an emu.

Ideologue: Of course, of course, but I mean by “flies” really “flew when alive if not an emu”.

Scientist: But so then you’ll admit that Tweetie didn’t actually fly.

Ideologue: Ah, but he could have, if he had had the appropriate physical structure when he was alive.

Scientist: But your conclusion was that Tweetie flies. And he didn’t.

Ideologue: Tweetie was on a plane once.

Scientist: But isn’t that more a case of Tweetie being flown, not Tweetie flying?

Ideologue: You’re just bogging me down in semantics. In any case, Tweetie flies in heaven now. Case closed.

Categories
General

Philosophy Resources on the Web

There is a great wealth of serious philosophy out there on the internet, though you have to dig deep through a great deal of philosophical detritus to get to the good stuff. Here are some of our picks for genuinely good philosophy on the web…

Audio

One of our favorite resources out there is Philosophy Bites: a collection of “podcasts of top philosophers interviewed on bite-sized topics”. The hosts, Nigel Warburton and David Edmonds, respected philosophers themselves, have interviewed a lot of renowned philosophers for the show, ranging from Daniel Dennett to Philip Pettit to Frank Jackson to Martha Nussbaum, all in easy-to-digest 15-minute sessions. The website itself is not exactly a treat to navigate, but you can skip the site and go directly to iTunes to download free podcasts. Or you can shell out three bucks for the iPhone app, which I can vouch is well worth it. They also have the MP3s hosted on libsyn.com, if you like to work through these things old school. One of my favorite podcasts is Nick Bostrom on the Simulation Hypothesis — absurdist metaphysics at its finest! — but really there are very few uninteresting interviews on the site.

Another great resource of philosophy audio is Philosophy Talk, a radio program with podcasts by eminent Stanford philosophers John Perry and Ken Taylor.

Encyclopedias

The next time you’re about to head over to Wikipedia to check out something philosophical, stop yourself and try either the Stanford Encyclopedia of Philosophy or the Internet Encyclopedia of Philosophy. Both sites are peer-reviewed and generally are excellent sources for delving more deeply into philosophy. Check out the Stanford article on thought experiments, for example, or the IEP’s article on Searle’s Chinese Room. Both fine pieces.

Public Domain Texts

If you’re looking for public domain philosophy texts, there are plenty out there, although be prepared to find very little contemporary work. Everybody’s favorite public domain repository, Project Gutenberg, has a respectable collection of philosophy works. The EServer also has a collection of public domain philosophy texts available for download, along with some contemporary pieces that have been appropriately licensed.

If you are looking for more contemporary works online, your best bet is JStor, which scans most of the top philosophy journals and creates PDFs. There are a few journals and articles available for free through JStor to the general public, but if you really want to get the most out of the service, you have to be connected to a university that pays for their best services. If you are so connected, you will have an incredible wealth of philosophy articles at your disposal. If you are a philosophy teacher, or interested in philosophical pedagogy, check out the Philosophy Documentation Center — a subscription service that has all sorts of articles available about teaching philosophy.

Free Online Courses

Universities are starting to beef up their online course offerings, and there are several that offer free courses, consisting of syllabi, lecture notes, slides, audio, and video. Everything short of interaction with and feedback from professors. MIT was one of the first to make freely available such resources. Yale has a couple of courses available as well. As does Notre Dame.

We haven’t gone through any of these courses with a fine-tooth comb, so we can’t say how instructive they really are, but we certainly applaud academia for opening up the ivory tower a bit. If any of you have ever tried any of these courses, let us know what you thought!

The Profession

No list of professional philosophical resources would be complete without a link to the American Philosophical Association — the major professional organization for philosophy professors and students. Their website could use an update from 1999, but there is a good amount of information on the site regarding the profession of philosophy.

If you’re thinking about grad school in philosophy, you should definitely check out the Philosophical Gourmet Report — Brian Leiter’s ranking of graduate programs in philosophy in the English-speaking world.


Let us know if you think of other good web resources for philosophy lovers.