
Science and What Exists

To make the transition to Einstein’s universe, the whole conceptual web whose strands are space, time, matter, force, and so on, had to be shifted and laid down again on nature whole.

—Thomas Kuhn

One problem metaphysicians have been dealing with for, well, forever, is the unfortunately necessary intertwining of metaphysics and epistemology. Metaphysics is the philosophical study of what exists; epistemology is the philosophical study of knowledge. And it’s trivial to point out that the best we can do in detailing what exists is to rely on our best epistemology: We can’t talk about what we know about without talking about what (and how) we know. If we know about quarks, it’s not simply the case that quarks exist, but that we figured out that they exist. Our catalogue of items in the universe is inherently tied to our knowledge of those items.

Why is this problematic? Well, many metaphysicians are very conscious and conscientious about keeping existence separate from knowledge of existence. Much of the problem can be traced back to the venerable Bishop Berkeley, who posited that everything in the universe is actually mind-dependent for its very existence — it’s not, Berkeley thought, just that the computer screen in front of you is merely hidden from view when you close your eyes, but that this lack of observation actually means the computer screen is not really there when your eyes are closed. Problems with this theory forced Berkeley to say that God observes everything at all times, and so there’s no worry about things blinking in and out of existence with the blink of an eye. God never blinks. But regardless of the absurdity of this centuries-old bit of philosophy, the aftershocks have stayed with us. There’s something very compelling, apparently, about the idea that our minds have metaphysical power — that minds can create some of reality.

The great irony is that the best scientifically minded philosophers of the 20th century, while trying to shore up the mind-independence of the external world, actually gave proponents of mind-dependence a strong foothold in the metaphysical debate.

Naturalized epistemology — the brainchild of W.V.O. Quine, though it was clearly anticipated hundreds of years earlier by David Hume — takes science to be the paragon of knowledge-farming, the discipline whose results we are most certain about. Naturalism, though, if we accept it, forces us also to acknowledge the following: We can’t make judgements about the world from some point of privileged access outside of science. That is, there is no way to step outside science and see what there is in the world; we don’t get a clearer picture of quarks without science — science itself tells us about quarks, and without science this piece of ontological furniture would not be accessible to us whatsoever. Our metaphysical house, chock full of interesting furniture, wouldn’t merely look somewhat different without science; it would be a bare, dirt-floored cabin with very little of interest in it.

This leads to a very tantalizing point. Science often changes its mind, and in such episodes of change what we take to be our ontology (our catalogue of things that exist) changes as well. For instance, once upon a time science told us that there was a substance called phlogiston that was released from things when they were burned. This substance — a consequence of a good scientific theory that explained several phenomena related to chemistry — was taken by scientists (and the informed public) as existing in the world. If science is our best arbiter of what exists, then, at the time during which science told us that phlogiston existed, there’s a strong sense in which it actually existed. Science, remember, tells us what there is, and there’s no privileged perspective outside of science from which to figure out our metaphysics. It turned out, however, that the phlogiston theory of chemistry ran into serious problems, and was more or less wholesale replaced by the oxygen theory of Lavoisier. In this new theory, there was no place for phlogiston. At this point, science told us that phlogiston does not exist.

There are (at least) two conclusions that can be drawn from this, each of which I will encapsulate using the Kuhnian metaphor at the top of this entry:

Standard Naturalism: The whole of science forms a conceptual web from which vantage point we survey the world. There is no spot outside of the web from which to survey the world. We can change science by changing some part of the web — this amounts to changing our ideas about an unchanging world. The world is independent of our ideas about it, even as we discover new ways to look at what exactly is in it. For instance, we were simply wrong about the existence of phlogiston. It never existed.

Kuhnian Mutant Naturalism: A scientific theory is a conceptual web that is uniquely laid upon the world, giving it its shape. When a new theory is developed, an entirely new web is made. There is still no place outside of the web from which to survey the world, but we can shuck off the entire web in favor of a new one. The world is partly dependent for its existence on our ideas about it — whichever web we throw onto the world actually gives the world its shape. When we change our ideas, we change the world. For instance, phlogiston actually did exist while scientists were working with phlogiston theory. When Lavoisier came up with a new chemical theory, the world actually changed — phlogiston disappeared, and in its place oxygen and other items filled our metaphysical cupboards.

Many have noted that Kuhn’s version of naturalism makes him an anti-realist in the Kantian vein. We won’t get into the thickets of Kantian metaphysics here, but, in short, Kuhn believes that our ideas are not merely a pre-condition for theorizing about things, but that theorizing is itself a pre-condition for the very existence of things. Contrary to this, standard naturalism usually goes hand in hand with common-sense and scientific realism, wherein, as Philip Kitcher notes: “Trivially, there are just the entities there are. When we succeed in talking about anything at all, these entities are the things we talk about, even though our ways of talking about them may be radically different.”

One reason Kuhn is led to his odd metaphysics is his implicit description theory of reference. On a description theory, the only way to correctly refer to an entity is to have its unique description in mind; but if a scientific revolution changes the description associated with a key scientific term, then the old description no longer refers. This leads Kuhn to the idea that competing scientific paradigms are incommensurable. It also motivates his metaphysics. If a term once referred and now it does not, all on the basis of our changing descriptions, then by some inferential jump one could think that this correlation was causal; i.e., that our changing descriptive thoughts cause a change in the world.

We’ll examine description theories and the philosophy of language in an upcoming post. Stay tuned…


Materialism and Doubt

A student emailed me asking about the role of doubt in a materialist/science-dominated culture. It was an excellent question. What role would doubt play for someone who believed that science could find all the answers? We do doubt, but the materialist is often portrayed as a person with a particular sort of confidence in her worldview. The materialist not only believes that everything that exists or could exist is physical, or physically based, but that all such things can be given fully physical explanations as well. While not all materialists do believe such a strong claim, enough do to lend strength to the stereotype.

I suggested to the student that doubt is what drives materialism, and that it is doubt that the materialist uses to argue that materialism is superior to dualism. What follows is how I tried to portray this to my student.

Doubt

I think that most materialists would accept being described as ‘big bang until now’ believers. There was the beginning, whatever that was (and whatever it was, it was entirely physical), and, given the laws of physics, everything has turned out as it has. That we are able to peer into the earliest times of the universe with our telescopes backs up this materialist perspective. It is, of course, possible that there are places or parts of the universe that are not bound by the laws of physics, but that seems less and less likely the more we learn about the universe.

What of doubt, though, you ask? Is it evolutionarily beneficial? I have not read much on that issue specifically, but I have read quite a bit about it in a roundabout fashion. Here is what I think a materialist/scientist would suggest as the role and purpose, naturalistically speaking, of doubt. We are born not as blank slates, but as probability machines. What that means is that while we are not born with knowledge of how the world works, neither are we born with no rules or inclinations at all. Rather, we are born with a set of ingrained tools that allow us to figure out how the world seemingly works. Babies and children (and some adults) rarely take things at face value, despite appearances to the contrary. A child does not know how gravity works until it has seen many things fall (and many things, such as balloons and planes, not fall). The child is constantly touching and tasting and probing its way about and through the world to learn what the world is made of and how it works. But, one might say, that is curiosity, not doubt. I think that is right — at least, partially right.
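To make the “probability machine” metaphor a bit more concrete, here is a minimal sketch of belief updating by Bayes’ rule. It is my own illustration, not something the materialist is committed to, and every number in it is invented: repeated sightings of things falling push confidence in a rough rule like “unsupported things fall” upward, while one surprising observation (a balloon drifting up) knocks that confidence back down and leaves room for doubt.

    # A toy "probability machine": Bayesian updating of confidence in the
    # rough rule "unsupported things fall". All numbers are invented.

    def update(prior: float, fell: bool) -> float:
        """Posterior confidence in the rule after one observation."""
        p_obs_if_rule = 0.95 if fell else 0.05  # if the rule holds, falling is expected
        p_obs_if_not = 0.50                     # without the rule, no expectation either way
        numerator = p_obs_if_rule * prior
        return numerator / (numerator + p_obs_if_not * (1 - prior))

    confidence = 0.5  # the child starts out undecided
    for fell in [True, True, True, True, False, True]:  # a balloon (False) interrupts the pattern
        confidence = update(confidence, fell)
        print(f"saw {'fall' if fell else 'not fall'} -> confidence {confidence:.2f}")

The particular numbers do not matter; the shape of the process does. Confidence is earned from repeated experience, and it is always revisable by the next observation.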

Curiosity is the drive to learn, but the truly curious, which children are, do not merely accept what they encounter. They seek out not just new experiences, but the commonality that exists between and within those experiences. That means that, along with the curiosity, there is doubt present. There is doubt that what the child has just experienced is enough to understand, is correct, is the right sort of standard by which other experiences can be judged. We doubt, though not always (or even often) in the philosophical sense, because of its survival benefits. Should I trust that sound, just because it was trustworthy the first time I heard it? Should I believe that all red fruits are healthy and all blue breads are bad? Doubt drives curiosity drives doubt. If we did not doubt, the first suspected causal unions would have been good enough for us. A virgin in the volcano seems to have forestalled an eruption; therefore, the gods have been appeased. What need would we have of science if we had no doubt?

Curiosity is the desire to learn, but doubt is the tempering of what we have learned into knowledge. A creature that does not doubt will not survive long. And it is doubt that is built into science itself. The idea of falsifiability is based on doubt. If there is no way in which a theory could be shown to be false, it is not considered to be a good or strong theory. That is doubt.

While we are or can be 100% certain of how things seem to us on a sensory basis (I seem to be seeing green; I seem to be tasting an apple; I seem to be hearing crunching; etc.), often what we sense does not fit with what we have previously sensed or with what we currently believe. That is where the doubt comes in. Suppose I hear a voice telling me it is Volthoon and that I must kill my neighbors. I cannot doubt that it seems to me that I am hearing such a voice and that I am hearing it say such a thing, but I can doubt whether there is such a voice saying such things. Maybe I won’t question it (there are many who do not), or maybe I will not think to question it (there are many in this group as well), but I can certainly accept what I am sensing as something that I seem to be sensing without also accepting that it is a real and genuine thing that has not been concocted by my mind alone.

If I were to see a cat bark like a dog, it would confuse the hell out of me, not because I would doubt what I sensed, but because what I seemingly sensed did not fit in with any of my previous sensory experiences. Now I wonder which belief or set of beliefs I will have to drop or alter (and there comes doubt again). Now I wonder if I can trust my eyes or my ears (see the McGurk effect for a cool example of this), or neither or both. This doubt leads to the “why” question, I think, though you are entirely correct that it is a question that I may never be able to answer.

Final Thoughts

Doubt, unsurprisingly, is the philosopher’s bread and butter and beer and pillow. We all want to know what is going on, but we all want to be right. Those are desires that are at unfortunate odds with one another, but they are so because we doubt. I am not sure that the world is a better place because we doubt, but I am reasonably sure that we have survived as a species, and, less importantly, philosophy has thrived as a discipline, because we do.


Nonmonotonic Logic and Stubborn Thinking

I was struck recently by some similarities between the psychology of stubborn thinking and the history of science and logic. It’s not just individuals that have trouble changing their minds; entire scientific, logical, and mathematical movements suffer from the same problem.

Logic

When people think about logic (which I imagine is not very often, but bear with me on this), they probably think about getting from a premise to a conclusion in a straight line of rule-based reasoning — like Sherlock Holmes finding the perpetrator with infallible precision, carving his way through a thicket of facts with the blade of deduction.

Here’s a sample logical proof that would do Holmes proud.

Birds fly.
Tweetie is a bird.
Therefore Tweetie flies.

We have here a general principle, a fact, and a deduction from those to a logical conclusion.

The problem is that the general principle here is just that: general. It is generally the case that birds fly, but some birds do not fly at all. (Indeed, there may not be any general principle that universally applies: even the laws of physics are arguably fallible. Cf. Nancy Cartwright’s wonderful How the Laws of Physics Lie.) Tweetie could be an ostrich or an emu, or Tweetie could have lost his wings in a window-fan accident, or Tweetie could be dead.

You could shore up your general principle in order to try to make it more universal: Birds that aren’t ostriches, emus, wingless, or dead, fly. But this sort of backpedaling is really an exercise in futility. As the decades of artificial intelligence research running up through the ’90s showed us, the more you expand your general principle to cover explicit cases, the less of a general rule it becomes, and the more you realize you have to keep covering more and more explicit cases, permutations upon permutations that will never end. (E.g., even in the case of death, Tweetie might be able to fly. He could be dead, but also in an airplane at 20,000 feet. Would you amend your general principle to cover this case? It would be a strange sort of “scientific” law that stated “Birds fly, except dead birds that aren’t in airplanes.”)

A brilliant solution to this sort of problem was found via the creation of nonmonotonic logic, a logical system that is what logicians call defeasible — that is, it allows you to draw a conclusion that can be undone by information that eventually emerges to the contrary. So the idea is that a nonmonotonic system allows you to conclude that Tweetie flies via the logic above, but also allows you to change that conclusion if you then find out that Tweetie is, say, dead.

This may not seem like a big deal, since this is how a rational human is supposed to react on a regular basis anyway. If we find out that Tweetie is dead, we are supposed to no longer hold to the conclusion, as logical as it may be, that he flies. But for logicians it was huge. The old systems of logic pinned us helplessly to non-defeasible conclusions that may be wrong, just because the logic itself seemed so right. But now logicians have a formal way of shaking free of the bonds of non-defeasibility.
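To make the idea concrete, here is a minimal sketch of defeasible inference. It is my own toy illustration rather than a real nonmonotonic logic engine, and the fact and defeater names are invented: the default “birds fly” licenses the conclusion that Tweetie flies, and that conclusion is withdrawn as soon as a defeating fact is learned.

    # A toy sketch of defeasible inference: a default rule licenses a conclusion
    # unless a known defeater applies. Fact and defeater names are invented.

    DEFEATERS = {"dead", "emu", "ostrich", "wingless"}  # exceptions to "birds fly"

    def flies(facts: set) -> bool:
        """Defeasibly conclude 'flies' from the default 'birds fly',
        unless some defeating fact is present."""
        return "bird" in facts and not (facts & DEFEATERS)

    known = {"bird"}
    print(flies(known))  # True: by default, we conclude that Tweetie flies

    known.add("dead")    # new information arrives
    print(flies(known))  # False: the earlier conclusion is withdrawn

The contrast with the Holmes-style picture is that adding a premise here can remove a conclusion, which a classical, monotonic proof can never do.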

Science

The history of science is rife with examples of this principle-clinging tenacity, from which it took logic millennia to escape. A famous case is found in astronomy, where the idea that the Earth was at the center of the universe persisted for more than a dozen centuries. As astronomy progressed, it became clear that to describe the motion of the planets and the sun in the sky, a simple model of circular orbits centered around the Earth would not suffice. Eventually, a parade of epicycles was introduced — circles upon circles upon circles of planetary motion spinning this way and that, all in order to explain what we observed in the Earth’s sky, while still clinging to the precious assumption that the Earth is centrally located. The simpler explanation, that the Earth was in fact not the center of all heavenly motion, would have quickly done away with the detritus of clinging to a failed theory, but it’s not so easy to change science’s mind.

In fact, one strong line of thought, courtesy of Thomas Kuhn, has it that the only way for scientists to break free from such deeply entrenched conceptions is nothing short of a concept-busting revolution. And such revolutions can take years to gather enough momentum to actually change minds. (Examples of such revolutions include the jarring transition from Newtonian to Einsteinian physics, and the leap in chemistry from phlogiston theory to Lavoisier’s theory of oxidation.)

Down to Earth

If even scientists are at the mercy of unchanging minds, and logicians have to posit complicated formal systems to account for the ability to logically change one’s mind, we should be prepared in our daily lives to come up against an immovable wall of opinions, despite what the facts tell us.

Indeed, it isn’t very hard to find people who have a hard time changing their minds. Being an ideologue is the best way of sticking to an idea despite evidence to the contrary, and ideologues are a dime a dozen these days. What happens in the mind of an ideologue when she is saving her precious conclusion from the facts? Let’s revisit Tweetie. (You can substitute principles and facts about trickle-down economics or global warming for principles and facts about birds, if you like.)

Ideologue: By my reasoning above, I conclude that Tweetie flies.

Scientist: That is some nice reasoning, but as it turns out, Tweetie is dead.

Ideologue: Hmmm. I see. Well, by “flies” I really mean “flew when alive”.

Scientist: Ah, I see. But, actually, Tweetie was an emu.

Ideologue: Of course, of course, but I mean by “flies” really “flew when alive if not an emu”.

Scientist: But so then you’ll admit that Tweetie didn’t actually fly.

Ideologue: Ah, but he could have, if he had had the appropriate physical structure when he was alive.

Scientist: But your conclusion was that Tweetie flies. And he didn’t.

Ideologue: Tweetie was on a plane once.

Scientist: But isn’t that more a case of Tweetie being flown, not Tweetie flying?

Ideologue: You’re just bogging me down in semantics. In any case, Tweetie flies in heaven now. Case closed.