Are We Living In A Computer Simulation?

We recently explored Cartesian skepticism, and its dark conclusion that we can’t know for sure that the external world exists. This post is in a similar vein, as it asks the question: Are we unknowingly living in a computer simulation? One difference between this dark idea and Descartes’ is that if we are indeed living in a computer simulation, there definitely would exist an external world of some sort — just not the one we think there is. Our simulators, after all, would have to live in some sort of an external world, in order for there to be computers upon which they could simulate us. But, of course, the world, on this scenario, that we think of as existing would be a mere virtual creation, and so, for us (poor unknowingly simulated beings) the depressing Cartesian conclusion would remain: our external world does not truly exist.

Of course, if you’ve been even a marginal part of contemporary culture over the last decade or two, you know the movie “The Matrix”, the premise of which is that most of humanity is living mentally in a computer simulation. (Physically, most of humanity is living in small, life-sustaining pods, in a post-apocalyptic real world of which they have no awareness.) You no doubt see the parallel between “The Matrix” and the topic of this post. (Other movies with similar premises include “Total Recall” and “Dark City”, and surely many more that I can’t think of off the top of my head. Which makes me think we have to do a philosophy-in-the-movies blog post soon…) But rest assured that this is no banal foray into Keanu Reevesean metaphysics. (“Whoa.”) The subject of existing in a computer simulation has been pored over to a dizzying extent by philosophers. There’s a lot of meat on this philosophical bone.

Nick Bostrom’s Simulation Argument

Nick Bostrom, a philosopher at Oxford, has developed a most interesting argument, the gist of which is that there is a surprisingly high probability that we are all living in a computer simulation. His clever argument concerns advanced civilizations whose computational technology is so powerful that they can easily and cheaply run realistic simulations of their ancestors — people like you and me.

If these advanced civilizations are possible, then, says Bostrom, one of these three hypotheses must be true:

(1) Most (as in an overwhelmingly high statistical majority) civilizations that get to this advanced computational stage wind up going extinct. (The Doom Hypothesis)

(2) Most (as in an overwhelmingly high statistical majority) civilizations that get to this advanced computational stage see no compelling reason to run such ancestor simulations. (The Boredom Hypothesis)

(3) We are almost certainly living in a computer simulation. (The Simulation Hypothesis)

Bostrom claims that (1) and (2) are each as likely as (3), but, really, it’s fairly straightforward to argue that they are both false. The Boredom Hypothesis, in particular, seems rather implausible. Though we don’t know what such an advanced civilization would think of as worth its time, it’s not unlikely that at least some significant fraction of advanced societies would run such easy and cheap simulations, either out of anthropological curiosity or just for entertainment. (A lot of our best scientists surely play video games, right?) The Doom Hypothesis is slightly more plausible. Perhaps there’s a technological boundary that most civilizations cross that is inherently dangerous and destructive, and only a negligible fraction of civilizations make it over that hurdle. But it’s still tempting, and not unreasonable, to think that such a barrier isn’t inherent to social and scientific progress.

So, if civilizations don’t generally extinguish themselves before reaching computational nirvana, and if they don’t think that the idea of running ancestor simulations is a silly waste of time, then we have a clear path to the Simulation Hypothesis. Say that a thousand civilizations reach this computational stage and start running ancestor simulations. And say these simulations are so easy and inexpensive that each civilization runs a trillion simulations. That’s a quadrillion simulated civilizations overall. Now compare that quadrillion against however many real civilizations there are in the universe, which is surely far fewer than a quadrillion, and you get the odds that you are living in a simulated civilization. Say, for the sake of argument, that there are a million real civilizations in the universe. The odds are then a billion to one against your living in a real civilization. The far more likely proposition is that you are living in a simulated one.
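For the numerically inclined, the back-of-the-envelope arithmetic can be sketched in a few lines of Python. Every figure here is an illustrative assumption from the thought experiment, not data:

```python
# Back-of-the-envelope version of Bostrom's numbers (all figures illustrative).
simulating_civilizations = 1_000            # civilizations that run ancestor simulations
simulations_each = 1_000_000_000_000        # a trillion simulations per civilization
real_civilizations = 1_000_000              # assumed real civilizations in the universe

simulated = simulating_civilizations * simulations_each   # a quadrillion simulated civilizations
odds_against_being_real = simulated // real_civilizations

print(odds_against_being_real)  # prints 1000000000 (a billion to one)
```

Tweak the assumed inputs however you like; as long as simulations vastly outnumber real civilizations, the odds stay lopsided in the same direction.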


One key assumption upon which this argument relies is that things like minds and the civilizations in which they reside are in fact simulatable. This is a contentious claim.

The theory that minds can be simulated is often labeled “functionalism” — it gets its traction from the idea that minds might emerge from hardware besides human brains. If we meet an alien from an advanced civilization, learn her language, and converse with her about the meaning of life, we’d like to say that she has a mind. But if, upon scanning her body, we discover that her brain is in fact made up of hydraulic parts, rather than our electro-chemical ones, would her different hardware mean that she isn’t possessed of a mind? Or would it be the case that, in fact, minds are the kinds of software that can run on different sorts of hardware?

If this is indeed the case, then minds can be classified as functional things — that is, a mental state (say, of pondering one’s own significance in an infinite cosmos) is not identical with any particular brain state, but is some sort of functional state that can be realized on all different sorts of hardware. And if this is true, then there’s no reason, in principle, that a computer couldn’t be one of those sorts of hardware.

Given our “successes” in the field of Artificial Intelligence (AI), I have long been skeptical of our ability to create minds in computers. And there’s a proud tradition in philosophy of this sort of skepticism — John Searle, for instance, is one of the more famous anti-AI philosophers out there. (You may have heard of his Chinese Room argument.) But, by and large, I think it is fair to say that most philosophers do come down on the side of functionalism as a philosophy of mind, and so Bostrom feels comfortable using it as a building block to his argument.

I can’t, in this post, get into the debate over AI, functionalism, and the mind, but I will pick on one interesting aspect of the whole simulation issue. Every time I think about successful computer simulations, my mind goes to the simulation of physics rather than the simulation of mental phenomena. Right now, I have a cat in my lap and my legs are propped up on my desk. The weight and warmth of my cat have very diverse effects on my body, and the extra weight is pushing uncomfortably on my knees. My right calf is resting with too much weight on the hard wood of my desk, creating an uncomfortable sensation of pressure that is approaching painful. My right wrist rests on the edge of my desk as I type, and I can feel the worn urethane beneath me, giving way, in spots, to bare pine. My cat’s fur fans out as his abdomen rises with his breathing — I can see thousands of hairs going this way and that, and I stretch out my left hand and feel each of them against my creviced palm. The fan of my computer is surprisingly loud tonight, and varies in pitch with no discernible rhythm. I flake off one more bit of urethane from my desk, and it lodges briefly in my thumb’s nail, creating a slight pressure between my nail and my flesh. I pull it out and hold it between my thumb and finger, feeling its random contours against my fingerprints.

At some point, you have to wonder if computing this sort of simulation would be just as expensive as recreating the scenario atom-for-atom. And maybe if a simulation is as expensive as a recreation, in fact the only reliable way to “simulate” an event would actually be to recreate it. In which case the idea of functionalism falls by the wayside — the medium now matters once again; i.e., feeling a wood chip in my fingernail is not something that can be instantiated in software, but something that relies on a particular sort of arrangement of atoms — wood against flesh.

Who knows, really? Perhaps future computer scientists will figure out all of these issues, and will indeed usher in an era of true AI. But until it becomes clearer that this is a reasonable goal, I’ll stick with my belief that I am not being simulated.

If I am being simulated, a quick aside to my simulator: Perhaps you don’t like meddling in the affairs of your minions, but I could really use a winning lottery ticket one of these days. Just sayin’…


Cartesian Skepticism

Welcome to the blog’s first foray into epistemology: the philosophical study of knowledge. Today we will be talking about René Descartes, who is enshrined in infamy for two feats: creating a system of geometry that would annoy high school students for hundreds of years to come, and presaging “The Matrix”. Much as I actually liked high school geometry, I would like here to talk about the Cartesian skepticism of the external world that made so many science fiction movies possible.

For those of you who haven’t yet read Descartes’ famous Meditations on First Philosophy (usually referred to simply as the Meditations), what are you waiting for? Here’s an old translation into English to get you started. There are also approximately a billion print versions available on Amazon, in case you want a more contemporary translation, along with the ability to scribble in the margins.

The Meditations start with Descartes recounting the none-too-astounding realization that he had been wrong about some things as a youngster.

Several years have now elapsed since I first became aware that I had accepted, even from my youth, many false opinions for true, and that consequently what I afterward based on such principles was highly doubtful; and from that time I was convinced of the necessity of undertaking once in my life to rid myself of all the opinions I had adopted, and of commencing anew the work of building from the foundation, if I desired to establish a firm and abiding superstructure in the sciences.

So his project in the Meditations was very much foundational. Descartes wanted to tear down all things that passed for knowledge, in order to find a kernel of certainty, from which he would build back up a magnificent structure of infallible knowledge. Those of you who remember high school geometry might be having nightmarish flashbacks at this point, remembering how the subject was built up from just a few, allegedly very certain axioms. The axioms were the firm, unassailable foundation upon which the science of geometry was built. Descartes had similar plans for every other science and in fact every human epistemological endeavor.

His method was, simply enough, to sit comfortably in his pajamas and begin doubting everything that he possibly could doubt. The first victim of his skepticism was his senses. “All that I have, up to this moment, accepted as possessed of the highest truth and certainty, I received either from or through the senses. I observed, however, that these sometimes misled us; and it is the part of prudence not to place absolute confidence in that by which we have even once been deceived.” A pretty reasonable place to start doubting things. After all, there are a million and one ways in which we are regularly deceived by our senses: optical illusions abound, hallucinations occasionally crop up, and physical ailments of the eyes and brain can cause misperceptions.

But there’s an even more radical skepticism that can crop up from this line of thought. What if it’s not just the case that the senses deceive, but that they don’t exist at all? Take this picture of the human knowledge machine:

On this picture (which, I think, is a pretty sound depiction of what philosophers of that age thought, and indeed is still how a lot of people picture the mind), the only reliable access to knowledge is via an inner screen that has projected upon it images of the external world. The screen here is inside the brain/mind, and the little person viewing the screen is one’s consciousness. If the senses exist, then sometimes they project something misleading on the inner screen, and this gives rise to optical illusions and hallucinations. But on this picture, a skeptic could go so far as to say that the senses might be fictional. If all we have access to is this inner screen, then we just can’t be sure from where its images come. Maybe they come from the senses, and maybe they don’t. Of course, given that there were really no computers or any decent science fiction at the time, the only 17th Century source that would be powerful enough to accomplish this illusory feat would be God. But since God is supposed to be omnibenevolent, and would therefore not deceive us in this way, Descartes conjured up a reasonable facsimile of sci-fi for the time, and said that perhaps there is an evil demon who deceives each of us in this way.

Well, that’s a lot of doubt, and a lot of the world’s furniture that has suddenly become dispensable. Stones, trees, and cats might not exist. Neither might other people, for that matter. Descartes found himself at this point in an extremely solipsistic position. He might be the only person in the universe. And this person might not even have a body.

At this point, Descartes took some certainty back from the skeptical vortex into which he was falling. He might not have a body, but if he was indeed being deceived by some evil demon, then there had to be a he there to be deceived. “I am, I exist,” he concluded. And each time he thinks this (or anything else, for that matter), his existence is assured.

At this point, we could veer off into metaphysics and the philosophy of mind, and discuss the ontological corollary to this barely optimistic offramp of the Cartesian skeptical superhighway: Dualism. According to Descartes’ theory, the mind is not necessarily connected to a body; that is, it is logically possible for a mind to exist without a brain.

But let’s save this subject for another post. Now, let’s examine where Cartesian skepticism has taken us, epistemologically.

Skepticism of the external world is a very strong philosophical position. It is really quite difficult to debate a skeptic on matters of epistemology, because the default answer of “but can you really know that the external world exists” is very defensible. Try it out for yourself:

Me: This iPhone is great.
You: If it exists.
Me: What do you mean? I’m holding the thing in my hand!
You: You think you are. Maybe you’re dreaming.
Me: I know the difference between a dream and reality.
You: You think you do. But maybe you’re in a dream, and in that dream you dream that you’re awake, but really you’re still just dreaming.
Me: Oh, come on. That leads to an absurd infinite regress of dream states.
You: Well, it’s still possible. And anyway, you could be living in a computer simulation. Or you could be crazy and hallucinating all of this. In any event, you can’t know for sure that you’re holding an iPhone in your hand. You can know that you have an image of holding an iPhone in your mind. Therefore your mind exists. Does that make you feel better?

And you have won the debate!

The Way Out

So do we have to just give in to the skeptic? Is there no hope for those of us who would like to assume the existence of stones, trees, and cats? Real ones… not just images of them in our minds.

Well, yes, there is. It’s called Naturalized Epistemology (or just “naturalism”), and it was foreshadowed by David Hume way back in 1748. I’ll quote a lengthy passage, because it’s so beautifully crafted:

For here is the chief and most confounding objection to excessive scepticism, that no durable good can ever result from it; while it remains in its full force and vigour. We need only ask such a sceptic, What his meaning is? And what he proposes by all these curious researches? He is immediately at a loss, and knows not what to answer. A Copernican or Ptolemaic, who supports each his different system of astronomy, may hope to produce a conviction, which will remain constant and durable, with his audience. A Stoic or Epicurean displays principles, which may not be durable, but which have an effect on conduct and behaviour. But a Pyrrhonian cannot expect, that his philosophy will have any constant influence on the mind: or if it had, that its influence would be beneficial to society. On the contrary, he must acknowledge, if he will acknowledge anything, that all human life must perish, were his principles universally and steadily to prevail. All discourse, all action would immediately cease; and men remain in a total lethargy, till the necessities of nature, unsatisfied, put an end to their miserable existence. It is true; so fatal an event is very little to be dreaded. Nature is always too strong for principle. And though a Pyrrhonian may throw himself or others into a momentary amazement and confusion by his profound reasonings; the first and most trivial event in life will put to flight all his doubts and scruples, and leave him the same, in every point of action and speculation, with the philosophers of every other sect, or with those who never concerned themselves in any philosophical researches. 
When he awakes from his dream, he will be the first to join in the laugh against himself, and to confess, that all his objections are mere amusement, and can have no other tendency than to show the whimsical condition of mankind, who must act and reason and believe; though they are not able, by their most diligent enquiry, to satisfy themselves concerning the foundation of these operations, or to remove the objections, which may be raised against them.

So, the idea (if you had a hard time navigating the old-school English) is that if skepticism of the external world is true, it leaves one in the unenviable position of nothing mattering. It is not a stance from which one can do any productive theorizing about science, philosophy, or, well, anything except for one’s own mind. (And even that bit of theorizing will stop at the acknowledgement of one’s inner screen accessible to consciousness.)

Do we have a stance from which we can do productive theorizing about things? Assuming that science is generally correct about the state of the world is a good start! After all, science has some of the smartest people in the world (if they and the world exist) applying the most stringent thinking and experimentation known to humanity. And science assumes the existence of things like stones, trees, and cats — things that exist in the world, not merely as ideas in our minds.

Here’s one of the more interesting perspectives on subverting skepticism, from Peter Millican at Oxford:

The gist of the video is that there are two ways to argue every issue. In the case of skepticism of the external world, you can argue, like a naturalist, that you know that stones, trees, and cats are real, therefore you know that there is an external world; or, like a skeptic, you could argue that we don’t know that there is an external world, therefore you don’t know that stones, trees, and cats exist. They are really quite equally plausible strategies, from a strictly logical point of view. And in both cases you have to assume something to be the case in order to get to your desired conclusion. So do you want to assume that you don’t know there’s an external world, or would you rather assume that you know that stones, trees, and cats exist? Your choice.

If you choose the skeptical path, I hope you’ll choose to pass your solipsistic time entertaining dreams of this blog.


On Definitions in Philosophy

When trying to define a term, we think generally of providing a set of necessary and sufficient conditions: a recipe for including or excluding a thing in a particular category of existence. For instance, an even number (definitions tend to work best in the mathematical arena, since definitions there can be as precise as possible) is definable as an integer that when divided by 2 does not leave a remainder. It is easy, given this definition, to ascertain whether or not a given number is even. Divide it by two and see if it leaves a remainder. If it does, then it’s not even; if it doesn’t, then it is. We have here a clear test for inclusion or exclusion in the set of even numbers.
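The inclusion test described above is mechanical enough to write down directly; a minimal sketch in Python:

```python
def is_even(n: int) -> bool:
    """An integer is even if dividing it by 2 leaves no remainder."""
    return n % 2 == 0

print(is_even(4))  # prints True
print(is_even(7))  # prints False
```

This is exactly what a necessary-and-sufficient-conditions definition promises: a decision procedure that sorts every candidate cleanly into the set or out of it.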

Outside of mathematics, things get trickier. (Inside mathematics, things can be tricky as well. Imre Lakatos’ excellent book Proofs and Refutations details some of the problems here. If you are mathematically and philosophically inclined, this is a must-read book.)

In his Philosophical Investigations, Ludwig Wittgenstein famously discusses the travails of defining the term “game”. Is there a set of necessary and sufficient criteria that will let us neatly split the world into games and non-games? For instance, do all games have pieces? (No, only board games have these.) Winners and losers? (There are no winners in a game of catch.) Strategy? (Ring-around-the-rosie has no strategy.) Players? (Well, since games are a particularly human endeavor, it would be an odd game that had no human participants. But, of course, some games have only one player.) There seems to be no single set of characteristics that spans everything we’d like to call a game. Wittgenstein’s solution was to say that games share a “family resemblance” — “a complicated network of similarities overlapping and criss-crossing”. A great many games have winners and losers, and so share this family trait; and then there are games that have pieces, and this is another trait that can be shared. Many (but not all) of the games with pieces also have winners and losers, and so there is significant overlap here. Games with strategy span another vast swath of the game landscape, and many of these games have winners and losers, many of which also have pieces. But not all. And so a network of resemblances between games is found — not a single boundary that separates games from non-games, but a set of sets that is overlapping and more or less tightly connected.

This is a brilliant idea, but one that often leaves analytical philosophers with a bad taste in their mouths. If you try to formalize family resemblances (and analytical philosophers love to formalize things), you run up against the same problems as you had with more straightforward definitions. Where exactly do you draw the line in including or excluding a resemblance? Games are often amusing, for instance. But so are jokes. So jokes share one resemblance with games. But jokes are often mean-spirited. And so are many dictators. And dictators are often ruthless. As are assassins. So now we have a group of overlapping resemblances that bridges games to assassins. And if you want to detail the conditions under which this bridge should not take us from one group of things (games) to the other (assassins), you are back to specifying necessary and sufficient conditions.
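The worry about formalizing family resemblances can itself be made concrete. Here is a toy model in Python, where each thing is just a set of traits and “resemblance” means sharing at least one trait; the categories and traits are my own illustrative assumptions, not a serious ontology:

```python
# Toy model of family resemblance: each thing is a bag of traits,
# and two things "resemble" each other if they share at least one trait.
# The traits below are illustrative assumptions only.
traits = {
    "board game": {"pieces", "winners", "rules", "amusing"},
    "joke":       {"amusing", "mean-spirited"},
    "dictator":   {"mean-spirited", "ruthless"},
    "assassin":   {"ruthless"},
}

def resembles(a: str, b: str) -> bool:
    """True if a and b share at least one trait."""
    return bool(traits[a] & traits[b])

def chained(path: list) -> bool:
    """True if every adjacent pair in the path shares a trait."""
    return all(resembles(x, y) for x, y in zip(path, path[1:]))

# No direct resemblance between games and assassins...
print(resembles("board game", "assassin"))                      # prints False
# ...but a chain of pairwise resemblances bridges them anyway:
print(chained(["board game", "joke", "dictator", "assassin"]))  # prints True
```

The chain is exactly the games-to-assassins bridge described above: without some further necessary and sufficient condition to cut it, overlapping resemblances will happily connect almost anything to almost anything else.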

Wittgenstein, I imagine, would have laughed at this “problem”, telling us that we just have to live with the vague boundaries of things. Which is all well and good, but is easier said than done.


The defining of knowledge gives us a great example of definitions at work and their problems. For those of you who haven’t been indoctrinated in the workings of epistemology, it turns out that a good working definition for knowledge is that it is justified true belief.

Is Knowledge Justified True Belief?
I take it as axiomatic as can be that something has to be believed to be known. If you have a red car but you don’t believe that it’s red, you don’t have knowledge of that fact. But, clearly, belief isn’t sufficient to define something as knowledge. For instance, if I believe that my red car is actually blue, I still don’t have any knowledge of its actual color. So we have to bring truth into the picture. If I believe that my car is red, and it is actually red, I’m certainly closer to having a bit of knowledge. But, again, this isn’t sufficient. What if my wife has bought me a red car that I haven’t seen yet? I believe it’s red because I had a dream about a red car last night. Do I have knowledge of my car’s color? I’d say not. We need a third component: Justification. If I believe that my new red car is indeed red because I’ve seen it with my own eyes (or analyzed it with a spectrometer, if the worry of optical illusions bugs you), then we should be able to say I do indeed have a bit of knowledge here.
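The classical analysis is just a conjunction of the three conditions. A toy rendering in Python (purely illustrative; real epistemology hides all the difficulty inside “justified”):

```python
def jtb_knowledge(believed: bool, true: bool, justified: bool) -> bool:
    """The classical 'justified true belief' analysis: all three must hold."""
    return believed and true and justified

print(jtb_knowledge(believed=True, true=False, justified=False))  # prints False (mere belief)
print(jtb_knowledge(believed=True, true=True, justified=False))   # prints False (the lucky dream)
print(jtb_knowledge(believed=True, true=True, justified=True))    # prints True
```

Note that Gettier’s counterexample, discussed next, satisfies all three conjuncts and still isn’t knowledge — which is precisely the trouble.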

In 1963, Edmund Gettier came up with a clever problem for this definition — one that presents a belief that is justified and true, but turns out to not be knowledge. Here is the scenario:

  • Smith and Jones work together at a large corporation and are both up for a big promotion.
  • Smith believes that Jones will get the promotion.
  • Smith has been told by the president of the corporation that Jones will get the promotion.
  • Smith has counted the number of coins in Jones’ pocket, and there are 10.

The following statement is justified:

(A) Jones will get the promotion and Jones has 10 coins in his pocket.

Then this statement follows logically (and is therefore also justified):

(B) The person who will get the promotion has 10 coins in his pocket.

But it turns out that the president is overruled by the board, and Smith, unbeknownst to himself, is actually the one who will be promoted. It also turns out that Smith, coincidentally, has 10 coins in his pocket. Thus, (B) is still true, it’s justified, and it is believed by Smith. However, Smith doesn’t have knowledge that he himself is going to get promoted, so clearly something has gone wrong. Justification, truth, and belief, as criteria of knowledge, let an example of non-knowledge slip into the definitional circle, masquerading as knowledge.

More Games

Let’s get back to the problem of defining games, and say that, contrary to Wittgenstein, you’re sure you can come up with a good set of necessary and sufficient conditions. You notice from our previous list of possible necessary traits that games certainly have to have players. Let’s call them participants, since “player” is something of a loaded word here (a player presupposes a game, in a way). And now you also take a stand that all games have pieces. Board games have obvious pieces, but so, you say, do other games. Even a game of tag has objects that you utilize in order to move the game along. (In this case, you’re thinking of the players’ actual hands.) So let’s add that to the list, but let’s call it what it is: not pieces so much as tools or implements. And perhaps you are also convinced that all games, even games of catch, have rules. Some are just more implicit and less well-defined than others. So let’s stop here, and see where we are. We have participants, implements, and rules.

And now we begin to see the problem. If we leave it at that, our definition is so loose as to allow under the game umbrella many things that aren’t actually games. A group of lab technicians analyzing DNA could fall under the conditions of having participants, implements, and rules. But if we tighten up the definition, we run the risk of excluding real cases from being called games. For instance, if we tighten the definition to exclude our lab workers from the fun by saying that games also have to have winners and losers, we immediately rule out as games activities like catch and ring-around-the-rosie.

Lakatos coined two brilliant phrases for these definitional tightenings and loosenings: “monster-barring” and “concept-stretching”. Monster-barring is an applicable strategy when your definition allows something repugnant into the category in question. You have two options as a monster-barrer: do your utmost to show how the monster doesn’t really satisfy your necessary and sufficient conditions, or tweak your definition to keep the monster out.

Concept-stretching allows one to take a definition and run wild with it, applying it to all sorts of odd cases one might not have previously thought to. For instance, perhaps we should expand entry into the realm of games to include our intrepid DNA lab workers. What would that mean for our ontologies? And what would it mean for people who analyze games? And for lab technicians?

Philosophers love to define terms; they also love to find examples that render definitions problematic. It’s a trick of the trade and a hazard of the business.

Arguing Over Nothing

The Peanut Butter and Jelly Debate

Arguing Over Nothing: A regular feature on the blog where we argue over something of little consequence, as if it were of major consequence. Arguing is philosophy’s raison d’être, and the beauty of an argument is often as much in its form as its content.

Today, we argue about the proper way to make a peanut butter and jelly sandwich. Jim argues for a radical, new approach, while I side with a more standard approach to the endeavor.

Each philosopher is granted 500-750 words to state his/her case, as well as 250-500 words for rebuttal. The winner will be decided by a poll of the readers (or whoever happens to have admin privileges at the appropriate time).

Jim: Arguing for the bowl method

The purpose of a peanut butter and jelly sandwich, the purpose of any sandwich, I suppose, is to provide a quick bit of sustenance. There are ‘sandwich artists’ in the world, but I have trouble imagining such people working in the medium of peanut butter and jelly. Therefore, the sooner the sandwich is made, the sooner its purpose can be met. By the time one has opened two jars and secured two utensils (surely we can both agree that cross-contamination of the ingredients should not occur within the jars), much time has already been lost and invested. From that point, mixing the two ingredients in a bowl is the best and most efficient way of creating the sandwich. This is so for, primarily, two reasons.

First, peanut butter, even the creamiest sort, is not so easily spread on bread. I will grant that toasted bread provides a more durable spreading surface, but, again, the sandwich is made for a quick repast so that toasting is often overlooked or bypassed. Inevitably, large divots are raised or even removed from the bread by even the most experienced spreader. Once that has been accomplished, if it were accomplished at all, the jelly must be attended to. Securing jelly from jar with a spreading knife is a feat best left to the young and others with plenty of idle time on their hands. Repeated stabbings into the jar will secure, at best, scant amounts of jelly. It is, obviously, better to use a spoon. However, as is clear to even the dullest imagination, spreading with a spoon leaves much to be desired, literally, as the result tends to be scattered hillocks of jelly, between which are faint traces, like glacial retreatings, of ‘jelly flavor’. Were one to use a spoon for jelly retrieval and a knife for jelly spreading, that is yet another utensil to clean.

The second reason against separate spreads, and so for one bowl of mixed, is corollary to the above. When one makes a peanut butter and jelly sandwich, one is looking to taste both in, maximally, each bite. Given the condition of the bread on the peanut butter side and the pockets of flavor intersticed with the lack thereof on the jelly side, one is lucky to get both flavors in half the bites taken.

By mixing both peanut butter and jelly in a bowl prior to application, both of the concerns above are fully redressed. The peanut butter, by virtue of its mixing with jelly, becomes much more spreadable for two reasons: it is no longer as thick and it is no longer as dry. A thin and moist substance is always much easier to spread. Furthermore, because of the aforementioned mixing, both flavors will be available in every bite taken. The end result is a much more delicious, easily made (and so efficient), quick meal. As an added bonus, one’s fingers end up with less mess since only one slice of bread has needed attending to and so one’s fingers are only up for mess-exposure for the one time and not twice as with the other method.

While there is the bowl left to clean, in addition to the utensils, what has not been removed from the bowl is easily rinsed. The peanut butter-jelly mix, given its thin and moist nature, is almost always able to be fully removed from the bowl and transferred to the bread. What is not so transferred, whether by design or not, is, by that same previously mentioned nature, easily washed or wiped away.

The bowl is clearly the way to go when making a peanut butter and jelly sandwich.

Alec: Arguing Against the Bowl Method

I will grant your utilitarian premise on sandwich making (“the purpose of any sandwich, I suppose, is to provide a quick bit of sustenance”), though I will point out that aesthetics could have a valid role to play in this debate. If your PB&J-from-a-bowl sandwich is singularly visually unappetizing (as I imagine it might be) then it will not provide any sustenance whatsoever, but will end up in the trash can instead. Also, note that your utilitarianism here could lead to the creation of a “sandwich” that is made by tossing the ingredients in a blender and creating a PB&J smoothie the likes of which would be eschewed by any rational hungry person.

But I digress.

You claim that peanut butter — even the creamy variety — is difficult to spread on bread. I have two points to make in regard to this claim. First, I haven’t had difficulty spreading peanut butter on bread since I was 12. Perhaps you should have your motor skills tested by a trained kinesiologist. I grant you that spreading a chunky peanut butter on a thin, wispy white bread can be problematic; but a smooth peanut butter on a hearty wheat bread? Not problematic at all. Second, you have pointed to no scientific research that shows that mixing peanut butter and jelly in a bowl makes it easier to spread than plain peanut butter. I remain skeptical on this point. And even if it is easier to spread, the labor involved in mixing it with jelly in a separate bowl might be far more work than it is worth in the end.

The knife/jelly problem is a thorny one, indeed, as you have noted. Trying to extricate an ample amount of jelly from a jar with a knife is difficult and annoying. You claim that: “Were one to use a spoon for jelly retrieval and a knife for jelly spreading, that is yet another utensil to clean.” However, you have overlooked the obvious: one can use the knife from the peanut butter to spread the jelly that has been extricated with the spoon. Here is some simple math to show how utensil use plays out in both of our scenarios:

You: 1 knife for dishing peanut butter + 1 spoon for dishing jelly + 1 bowl for mixing.

Me: 1 knife for dishing and spreading peanut butter + 1 spoon for dishing jelly, and reuse the knife for spreading jelly.

So we are equal on our utensil use, and you have used an extra bowl.

And on the subject of this extra bowl, it will be readily admitted by all that a knife with peanut butter on it is annoying enough to clean, while an entire bowl with peanut butter on it is proportionately more annoying to clean. (Again, you claim that a peanut butter / jelly mixture is easier to clean than pure peanut butter, but the research on this is missing. Surely you will allow that a bowl with some peanut butter on it is not a simply rinsed affair.) Plus there’s the environmental impact of cleaning an extra bowl each time you make a sandwich. Add that over the millions of people who make peanut butter and jelly sandwiches each day, and you’ve got a genuine environmental issue.

Creating a peanut butter and jelly sandwich my way also leads to an easier-to-clean knife. After spreading the peanut butter on one slice of bread, you can wipe the knife on the other slice of bread to remove upwards of 90% of the residual peanut butter (Cf. “Peanut Butter Residue in Sandwich Making,” Journal of Viscous Foods 94, 2008, pp. 218-227.) This makes cleanup far easier than in your scenario, and results in potential environmental savings as well.

You do make two solid points. First, your PB&J mixture is potentially much more homogeneous than the usual sandwich mixture, resulting in a more equitable PB-to-J ratio per bite. Here I can only revisit my aesthetic claim that eating a standard PB&J sandwich is more appealing than the greyish mixture you propose we slather on bread. Second, your sandwich creation process is indeed potentially less messy on the fingers than mine. To this I have no defense. Into each good life some jelly must fall.

Jim: Rebuttal

I must say, I find many of your points and counterpoints intriguing. All wrong, of course, but still intriguing. Let’s go through them, one at a time, and see where you go astray.

1) I grant both the utilitarian and aesthetic aspects to the sandwich. There are some truly beautiful sandwiches out there; few of them, however, are made at home and are made solely of bread, peanut butter, and jelly. The maker of such a sandwich is often working in a limited environment with a limited medium under a time crunch; otherwise, utility be damned and let the sandwich artist sing. As for the smoothie sandwich, I doubt, as surely you do as well, that the sole goal of the creator (of the sandwich) is to ingest those ingredients as soon as possible. Ignoring a lack of teeth or the presence of an extremely tight throat, such an option is insane.

2) While I appreciate a gentle jibe as much as the next fellow, to imply that I lack the wrist strength to apply peanut butter to bread is going a bit far. Ad hominem attacks should have no place in philosophical discourse. It is not impossible to spread peanut butter on bread, and I will happily grant you the point that it is so much easier to do so on ‘hearty wheat bread’. My point was and is that it is easier to do so if, to use a turn of phrase, the wheels have been greased a bit, and it is my contention that a peanut butter and jelly mixture does just that. You are correct, however, that I have no scientific data to back that up; I was under the impression that science need not enter civil discussion. Common sense, mere intuition, though, seems to suggest that if jelly is easier to spread than peanut butter (and who would contest that?), then surely a mixture of peanut butter and jelly would be easier to spread than peanut butter simpliciter.

3) I fear I only have enough space left to deal with your point concerning the extra cleaning of a bowl. I did take a bit of latitude with that argument and will concede it to you with but one addendum. In almost every home, at the very least in a great many homes, I would guess that the dishes are not washed one at a time, but rather several at once, and rarely immediately after use. If the utilitarian nature of the PB&J sandwich is granted, time is at a minimum and I suspect clean-up will have to await a more opportune time. While an extra bowl is required during the creation of the sandwich, I do not think that an extra bowl needing to be washed would extend such washing time unduly.


Nonmonotonic Logic and Stubborn Thinking

I was struck recently by some similarities between the psychology of stubborn thinking and the history of science and logic. It’s not just individuals that have trouble changing their minds; entire scientific, logical, and mathematical movements suffer from the same problem.


When people think about logic (which I imagine is not very often, but bear with me on this), they probably think about getting from a premise to a conclusion in a straight line of rule-based reasoning — like Sherlock Holmes finding the perpetrator with infallible precision, carving his way through a thicket of facts with the blade of deduction.

Here’s a sample logical proof that would do Holmes proud.

Birds fly.
Tweetie is a bird.
Therefore Tweetie flies.

We have here a general principle, a fact, and a deduction from those to a logical conclusion.

The problem is that the general principle here is just that: general. It is generally the case that birds fly. In fact, some birds do not fly at all. (In fact, there is never a general principle that universally applies: even the laws of physics are arguably fallible. Cf. Nancy Cartwright’s wonderful How the Laws of Physics Lie.) Tweetie could be an ostrich or an emu, or Tweetie could have lost his wings in a window-fan accident, or Tweetie could be dead.

You could shore up your general principle in order to try to make it more universal: Birds that aren’t ostriches, emus, wingless, or dead, fly. But this sort of backpedaling is really an exercise in futility. As decades of research in artificial intelligence, particularly through the 1980s and ’90s, showed us, the more you expand your general principle to cover explicit cases, the less of a general rule it becomes, and the more you realize you have to keep covering more and more explicit cases, permutations upon permutations that will never end. (E.g., even in the case of death, Tweetie might be able to fly. He could be dead, but also in an airplane at 20,000 feet. Would you amend your general principle to cover this case? It would be a strange sort of “scientific” law that stated “Birds fly, except dead birds that aren’t in airplanes.”)

A brilliant solution to this sort of problem was found via the creation of nonmonotonic logic, a logical system that is what they call defeasible — that is, it allows for making a conclusion that can be undone by information that eventually emerges to the contrary. So the idea is that a nonmonotonic system allows you to conclude that Tweetie flies via the logic above, but also allows you to change that conclusion if you then find out that Tweetie is, in fact, e.g., dead.
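The flavor of a defeasible default can be sketched in a few lines of code. This is only a toy illustration, not any standard nonmonotonic-logic library; the `flies` function and the fact names are invented for the example. The key point is that adding information can retract a previously licensed conclusion, which is exactly what classical (monotonic) logic forbids:

```python
# A minimal sketch of defeasible inference: a default rule ("birds fly")
# licenses a conclusion only so long as no known fact defeats it.

def flies(facts):
    """Apply the default 'birds fly' to a set of known facts.

    Returns True (concluded by default), False (default defeated),
    or None (the rule doesn't apply at all).
    """
    if "bird" not in facts:
        return None  # no bird, so the default rule is silent
    defeaters = {"ostrich", "emu", "wingless", "dead"}
    if facts & defeaters:
        return False  # new information defeats the default conclusion
    return True  # conclude 'flies' by default, provisionally

beliefs = {"bird"}
print(flies(beliefs))   # True: Tweetie flies, by default

beliefs.add("dead")     # new information arrives...
print(flies(beliefs))   # False: the earlier conclusion is withdrawn
```

Notice that the conclusion shrinks as the fact set grows; in a monotonic system, by contrast, anything provable from a set of premises remains provable no matter what you add to them.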

This may not seem like a big deal, since this is how a rational human is supposed to react on a regular basis anyway. If we find out that Tweetie is dead, we are supposed to no longer hold to the conclusion, as logical as it may be, that he flies. But for logicians it was huge. The old systems of logic pinned us helplessly to non-defeasible conclusions that may be wrong, just because the logic itself seemed so right. But now logicians have a formal way of shaking free of the bonds of non-defeasibility.


The history of science is rife with examples of this principle-clinging tenacity from which it took logic millennia to escape. A famous case is found in astronomy, where the concept that the Earth was at the center of the universe persisted for more than a dozen centuries. As astronomy progressed, it became clear that to describe the motion of the planets and the sun in the sky, a simple model of circular orbits centered around the Earth would not suffice. Eventually, a parade of epicycles was introduced — circles upon circles upon circles of planetary motion spinning this way and that, all in order to explain what we observed in the Earth’s sky, while still clinging to the precious assumption that the Earth is centrally located. The simpler explanation, that the Earth was in fact not the center of all heavenly motion, would have quickly done away with the detritus of clinging to a failed theory, but it’s not so easy to change science’s mind.

In fact, one strong line of thought, courtesy of Thomas Kuhn, has it that the only way for scientists to break free from such deeply entrenched conceptions is nothing short of a concept-busting revolution. And such revolutions can take years to gather enough momentum to be effective in changing minds. (Examples of such revolutions include the jarring transition from Newtonian to Einsteinian physics, and the leap in chemistry from phlogiston theory to Lavoisier’s theory of oxidation.)

Down to Earth

If even scientists are at the mercy of unchanging minds, and logicians have to posit complicated formal systems to account for the ability to logically change one’s mind, we should be prepared in our daily lives to come up against an immovable wall of opinions, despite what the facts tell us.

Indeed, it isn’t very hard to find people who have a hard time changing their minds. Being an ideologue is the best way of sticking to an idea despite evidence to the contrary, and ideologues are a dime a dozen these days. What happens in the mind of an ideologue when she is saving her precious conclusion from the facts? Let’s revisit Tweetie. (You can substitute principles and facts about trickle-down economics or global warming for principles and facts about birds, if you like.)

Ideologue: By my reasoning above, I conclude that Tweetie flies.

Scientist: That is some nice reasoning, but as it turns out, Tweetie is dead.

Ideologue: Hmmm. I see. Well, by “flies” I really mean “flew when alive”.

Scientist: Ah, I see. But, actually, Tweetie was an emu.

Ideologue: Of course, of course, but I mean by “flies” really “flew when alive if not an emu”.

Scientist: But so then you’ll admit that Tweetie didn’t actually fly.

Ideologue: Ah, but he could have, if he had had the appropriate physical structure when he was alive.

Scientist: But your conclusion was that Tweetie flies. And he didn’t.

Ideologue: Tweetie was on a plane once.

Scientist: But isn’t that more a case of Tweetie being flown, not Tweetie flying?

Ideologue: You’re just bogging me down in semantics. In any case, Tweetie flies in heaven now. Case closed.


Philosophy Resources on the Web

There is a great wealth of serious philosophy out there on the internet, though you have to dig deep through a great deal of philosophical detritus to get to the good stuff. Here are some of our picks for genuinely good philosophy on the web…


One of our favorite resources out there is Philosophy Bites: a collection of “podcasts of top philosophers interviewed on bite-sized topics”. The hosts, Nigel Warburton and David Edmonds, respected philosophers themselves, have interviewed a lot of renowned philosophers for the show, ranging from Daniel Dennett to Philip Pettit to Frank Jackson to Martha Nussbaum, all in easy-to-digest 15-minute sessions. The website itself is not exactly a treat to navigate, but you can skip the site and go directly to iTunes to download free podcasts. Or you can shell out three bucks for the iPhone app, which I can vouch is well worth it. They also make the MP3s available for download, if you like to work through these things old school. One of my favorite podcasts is Nick Bostrom on the Simulation Hypothesis — absurdist metaphysics at its finest! — but really there are very few uninteresting interviews on the site.

Another great resource of philosophy audio is Philosophy Talk, a radio program with podcasts by eminent Stanford philosophers John Perry and Ken Taylor.


The next time you’re about to head over to Wikipedia to check out something philosophical, stop yourself and try either the Stanford Encyclopedia of Philosophy or the Internet Encyclopedia of Philosophy. Both sites are peer-reviewed and generally are excellent sources for delving more deeply into philosophy. Check out the Stanford article on thought experiments, for example, or the IEP’s article on Searle’s Chinese Room. Both fine pieces.

Public Domain Texts

If you’re looking for public domain philosophy texts, there are plenty out there, although be prepared to find very little contemporary work. Everybody’s favorite public domain repository, Project Gutenberg, has a respectable collection of philosophy works. The EServer also has a collection of public domain philosophy texts available for download, along with some contemporary pieces that have been appropriately licensed.

If you are looking for more contemporary works online, your best bet is JSTOR, which scans most of the top philosophy journals and makes them available as PDFs. There are a few journals and articles available for free through JSTOR to the general public, but if you really want to get the most out of the service, you have to be connected to a university that pays for full access. If you are so connected, you will have an incredible wealth of philosophy articles at your disposal. If you are a philosophy teacher, or interested in philosophical pedagogy, check out the Philosophy Documentation Center — a subscription service that has all sorts of articles available about teaching philosophy.

Free Online Courses

Universities are starting to beef up their online course offerings, and there are several that offer free courses, consisting of syllabi, lecture notes, slides, audio, and video: everything short of interaction with and feedback from professors. MIT was one of the first to make such resources freely available. Yale has a couple of courses available as well. As does Notre Dame.

We haven’t gone through any of these courses with a fine-tooth comb, so we can’t say how instructive they really are, but we certainly applaud academia for opening up the ivory tower a bit. If any of you have ever tried any of these courses, let us know what you thought!

The Profession

No list of professional philosophical resources would be complete without a link to the American Philosophical Association — the major professional organization for philosophy professors and students. Their website could use an update from 1999, but there is a good amount of information on the site regarding the profession of philosophy.

If you’re thinking about grad school in philosophy, you should definitely check out the Philosophical Gourmet Report — Brian Leiter’s ranking of graduate programs in philosophy in the English-speaking world.

Let us know if you think of other good web resources for philosophy lovers.


What is Philosophy?

What is philosophy? And why are we bothering to blog about it?

Even people trained in philosophy are often hard-pressed to come up with a pithy definition of it. The first time I taught Introduction to Philosophy, I stammered at the front of the class for a good five minutes trying to explain the sorts of things about which I was going to teach them for the next fifteen weeks. (Later in the semester, I stammered for significantly less time, but with the same significant stammering intensity, over the definition of “ethics”. So it’s not just the general term “philosophy” that’s the issue, I think.)

Of course, if you’re a philosophy aficionado you might well already know the problems attached to the process of defining terms. Wittgenstein, famously, in his Philosophical Investigations, took his readers down the rabbit hole in attempting to define the term “game” — even something so seemingly simple can be difficult to pin down with authority and without counterexamples getting in your way.

But it’s not just the general problem of defining terms that is difficult in the case of “philosophy”. The field to which the term attaches is so broad and so nebulous that it’s no wonder it’s so hard to describe.

It may be fruitful here to think of philosophy as a practice, rather than a field. And you learn about a practice (and how to participate in that practice) more by immersion than by definition. So, while it’s fairly unsatisfying to someone just starting out in the practice of philosophy, I think it’s actually not unfair to say at the beginning of a philosophy course “you’ll see what philosophy is by the end of the semester. For now, crack open your Descartes text and let’s talk…”

That doesn’t help you, our much-appreciated reader, to figure out what it is this blog is about, and whether or not you’ll still be a reader next week. So, despite my trepidation, let me take a stab at saying what philosophy is and why we’ll be blogging about it.

The roots of the word “philosophy” harken back to “lover of wisdom”. Indeed, philosophy is all about the love of knowledge, and unearthing pieces of knowledge wherever you can. And when I say “wherever” I’m not kidding. There are philosophical treatises on such abstruse topics as nonexistent objects, and on subjects as far ranging as everything from humor to subatomic physics.

What, you might ask, makes some bit of knowledge about subatomic physics a piece of philosophy rather than a piece of physics? There have actually been scientists who have argued that philosophy of science is about as useful as astrology; and even the great philosopher Bertrand Russell wrote: “as soon as definite knowledge concerning any subject becomes possible, this subject ceases to be called philosophy, and becomes a separate science.” (Bertrand Russell (1912). The Problems of Philosophy. New York: Henry Holt & Co.) His thought was that the sciences provide definite knowledge, while philosophy provides insightful burrowing into ideas that may someday become science. (Of course, Russell was writing in the heady days when it seemed as if science and mathematics would explain everything, but that’s another story…)

Whether or not that’s true, it is most assuredly true that the philosophy of science has hit upon and explored many important areas of knowledge that scientists, busy doing the important work they’re doing, might never have explored. The importance of exploring these areas remains an open question, but if you have a philosophical disposition, you would seldom if ever doubt the importance of what you were studying — not because your area of exploration might yield anything, say, scientifically fruitful, but simply because if it’s an avenue of knowledge, you want to go down that path, whatever lies at the end of it. The journey itself is as important as what you find, along with the fact that whatever you’ve found, it was something that needed discovering.

I remember my first day as a graduate philosophy student, going to the library and just wandering down the aisles. At first, I stuck to the philosophy stacks, marveling at the breadth and depth of the tomes there. But eventually I wandered into the math stacks — a second academic love of mine — and spent some quality time there, once again marveling at the results of humankind’s curiosity. Then it was off to the psychology stacks, and the science stacks, and before I knew it, somehow I was in an aisle of books devoted to 18th Century England. I grabbed a book at random and read a chapter on witchcraft and its relation to the social norms of the times, and marveled at it, even though it was not really something I’d normally be interested in — someone had trodden down this path with great intellectual fervor, and had unearthed theories, knowledge, and connections that no one else had ever thought about in quite the same way. Before putting the book back, I noticed that no one had checked it out of the library for decades. This made me melancholy for a moment, until I realized that if I had written this book, though I’d certainly want people to read it, there would be a big part of me that would be content to have done the work and written it, regardless of my future audience. At least it was in a respected library, filling in a nook in our intellectual history.

If this story resonates with you, you might have a philosophical disposition.

Have I explained what philosophy is yet? Not really, I suppose; though I believe I have explained why we’re bothering to blog about it.

So what is philosophy? The first thing to keep in mind is that it’s always “philosophy of X”, where X can be just about any field. So we have philosophy of existence (generally called metaphysics), philosophy of knowledge (or epistemology), philosophy of morality (ethics), philosophy of art (aesthetics), philosophy of science, philosophy of mathematics, philosophy of language, philosophy of mind, philosophy of humor, philosophy of law, and so many other philosophies-of that it could make your head spin.

I was recently browsing for provocative philosophy paper titles (I thought it would be instructive to look at such titles in order to start to get a sense of what it is that philosophers do), and I came across this essay by Karel Lambert from back in 1974: “Impossible Objects”.

I haven’t read the article (come to think of it, I have to add that to my to-read list!), but I’m guessing that it’s a piece about such “things” as round squares. So now put yourself in a philosopher’s mindset for a moment. Someone says offhandedly to you: “why that’s as likely as a round square,” and you start thinking about that idea. A round square. Well, that’s impossible — such things couldn’t possibly exist. And this gets you thinking… there are things that don’t exist but could if the circumstances were right. Things like a 200-story building in Jamaica or six-legged cows. So there are two classes of things that don’t exist: possible (mammoth buildings in Jamaica) and impossible (round squares). Now you’ve begun carving up reality into interesting categories, and this is a particularly philosophical endeavor.

But wait… “Things” that don’t exist??? How could a thing be nonexistent? Is this really a problem of existence or just a trick of words? This well-trodden path leads one into the philosophy of language, where we ponder sentences like “The round square doesn’t exist.” Is this sentence true? Does “round square” refer to something in the same way that “George Washington” refers to something in the sentence “George Washington existed.”? These are very philosophical questions as well.

To be interested in why no skyscrapers exist in Jamaica is to be (probably) some sort of historian, sociologist, economist, or architect. To be interested in the difference between non-existent Jamaican skyscrapers and non-existent round squares, well, that’s being a philosopher.

See you next time. (?)