What is “science”?

I’m not sure who writes the website The Barefoot Bum (he appears to be named “Larry” in his website cartoon), but I’m sorry I didn’t run across it a while back, for he’s written two great posts in a row (the other one, which I may discuss later, is on the dreadful dialogue between Gary Gutting and Alvin Plantinga that recently appeared in The New York Times).

The Bum’s first piece, “The limits of science,” is a critique of a paper I’ve written about before: the attack on New Atheism published by Massimo Pigliucci.

Pigliucci’s paper, which appeared in Midwest Studies in Philosophy, is called “New Atheism and the scientistic turn in the atheism movement” (free download), and it reprises the author’s familiar gripes about New Atheism: that people like Dawkins and Harris are philosophically unsophisticated and haven’t grappled with the best arguments for and against theism made by philosophers (Pigliucci even claims that “it seems clear to me that most of the New Atheists [except for the professional philosophers among them] pontificate about philosophy very likely without having read a single professional paper in that field”); and that we define science unduly broadly, especially when seeing religious claims as empirical hypotheses open to examination by reason and observation (and to dismissal if they can’t be so adjudicated). By broadening the definition of science to something like “investigating any claims about reality using reason, observation, testing, and the attitude of doubt and falsifiability,” Pigliucci claims that we’re engaging in the Deadly Sin of Scientism. (Pigliucci’s own definition of science comprises the activities engaged in by professional scientists, while I—and apparently Larry—see “science” as a method of finding things out that can in principle be used by anyone.)

At any rate, The Barefoot Bum’s critique is both better reasoned and more temperate than mine, and I’d recommend your reading his whole piece. As I’m still preoccupied with other stuff (I finished the first draft of my book and have begun revising it), I’ll just post some of what “Larry” says for you to ponder. As you might suspect, I agree with much of it:

Pigliucci’s definition [of science] is too narrow in that we can easily conceive of science being done without many of the institutional characteristics he lists. How general must a theory be to be “scientific”? Is, for example, forensic science really a science? Forensic science seeks to discover what actually happened at a particular point in time, almost the exact opposite of the construction of a general theory about the world. If forensic science is not a science, what is it? Do we need systematic peer review — in something other than the trivial, over-broad sense that all communication is received and modified by listeners — for an endeavor to be scientific? Must we have public or private funding, again in other than the trivial sense that everything is in some sense economic? For decades, science was self-financed, pursued by people with their own income from other sources. Pigliucci’s definition of “science” is as absurd as defining “dining” as something being done in a restaurant using food, which would include eating at McDonalds and exclude my friend, who is an excellent amateur cook, preparing dinner at home.

He claims as well that Pigliucci is being philosophically inconsistent by insisting on a narrow definition of “science” while taking a very broad and loose view of the term “fact”:

Pigliucci argues that the word “fact” connotes “too heterogeneous a category” for science to encompass. Pigliucci asserts a broad definition of “facts,” which includes all statements that one cannot successfully deny; he asserts, for example, that one cannot deny that the sum of the interior angles of a triangle (on a plane) adds up to 180° (150). But this argument can be read as simply the tendency of speakers of natural languages to apply the same word to different categories. Pigliucci’s example is telling: Euclidean geometry is not a fact even in the loosest empirical sense of a fact as a true statement about the world. Instead, Euclidean geometry is a mathematical formalism; to determine whether or not Euclidean geometry accurately describes the real world, we need to actually observe and measure angles. And we find that often Euclidean geometry does not accurately describe the world, as when we draw triangles on a sphere or on the Riemann surfaces near a large mass. We can take the amorphous mass of meanings that constitute the lexicographical content of “fact” and easily divide them into distinct categories: common observation, deductive certainty, settled scientific theories, social totems, and confident assertions. There is no need to hold that broadening the definition of “science” requires that the broader definition include every lexicographical denotation of “fact.”

I’ve thought a lot about mathematics and am coming around to the view that it doesn’t reveal truths about the world, but simply the inevitable consequences, worked out by logic, of a set of axioms. That is why we speak of “proof” in mathematics but not in science. Fermat’s Last Theorem was “proven,” but nobody says “We’ve proved evolution,” for something could always surface that showed evolution to be wrong. (I don’t, by the way, anticipate that!)
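Larry’s triangle example is easy to check numerically. Here is a minimal sketch (the octant triangle used below is my own illustrative choice, not anything from his post): on the unit sphere, a triangle with two vertices on the equator and one at the pole has three right angles, so its interior angles sum to 270°, not the Euclidean 180°.

```python
import math

def sph_angle(a, b, c):
    """Interior angle at vertex a of the spherical triangle abc (unit vectors)."""
    def tangent(p, q):
        # Direction of the great-circle arc p -> q at p: the component of q
        # orthogonal to p, normalized.
        d = sum(pi * qi for pi, qi in zip(p, q))
        t = [qi - d * pi for pi, qi in zip(p, q)]
        n = math.sqrt(sum(x * x for x in t))
        return [x / n for x in t]
    t1, t2 = tangent(a, b), tangent(a, c)
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(t1, t2))))
    return math.acos(dot)

# Octant triangle: two vertices on the equator, one at the north pole.
A, B, C = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
total = math.degrees(sph_angle(A, B, C) + sph_angle(B, C, A) + sph_angle(C, A, B))
print(total)  # about 270 degrees
```

Which geometry applies to the actual world is, as Larry says, an empirical question; the calculation only tells you what each axiom set entails.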

Finally, “Larry” constructs his own definition of science, which I like quite a bit. Go over to his site to see it, but in summary it incorporates investigations limited to the real world, the formation of theories about phenomena, the insistence that those theories be falsifiable through general agreement by rational people, and the idea that theories should be parsimonious, invoking no more assumptions or entities than necessary to explain the observations. This definition of “science,” of course, includes plumbing and car mechanics (“my hypothesis is that there’s a bad fuse in the electrical system”). To me it’s not so important what the dictionary says as that there is a methodology held in common by plumbers and molecular biologists.

In the end, The Barefoot Bum applies his definition to religion, showing that it is in principle “scientific” because it makes empirical claims about the world, but then doesn’t follow the scientific method to examine those claims. His paragraph on this is a marvel of concision:

This definition seems to exclude a lot of religious thought as either unscientific or scientifically false. In The God Delusion, Richard Dawkins proposes the “God Hypothesis.” Dawkins asks: what happens when we try to construe religious thought as science, broadly conceived? Applying the criteria, we first hypothesize that God is real, with real properties. Second, we make a logically connected theory that includes God and His properties. Third, we make this theory falsifiable: it entails logically possible facts which would disprove the theory. Fourth, we demand commonly observable facts that would disprove the theory. If we do so, then we find that a real God has properties that are entirely different from the properties we normally ascribe to persons; a theory of God compatible with the commonly observable facts requires a God who is, unlike ordinary human persons, mechanical and sphexish. Reject any of the criteria, and you concede the argument by contradiction, absurdity, or vacuity. If God is not real, you’re already an atheist. If you cannot make a logically connected theory, you are just babbling. If your theory cannot be falsified, then there’s no way of telling if it’s true or false. If your theory is not falsifiable by commonly observable facts, you are unjustifiably claiming private knowledge. And if your theory is observationally identical to a universe with no personal God, then you’re again already an atheist; a God who makes no difference is no God at all. The only remaining question is whether some people would find this analysis useful, and I know many people who, applying this analysis, have abandoned their religion.

I suspect Pigliucci won’t be happy with Larry’s conclusion: that all empirical claims are ultimately totally within the purview of science. That is, there are no “ways of knowing” other than through science, though there are ways of understanding that fall outside science’s bailiwick:

Does this definition include or exclude anything obviously objectionable? We seem to admit lawyering, but lawyers are not obviously unscientific. This definition excludes pure mathematics (even if a lot of mathematicians are Platonists), but I suspect most mathematicians would not object to being placed outside the boundaries of science. This definition definitely excludes philosophy; I do not know, however, whether Pigliucci would be encouraged or enraged by such exclusion.

Finally, the question remains: does this definition of science “encompass all aspects of human knowledge and understanding”? It certainly does not encompass all aspects of human understanding (even if the definition of “understanding” is so broad as to render the term meaningless). As noted above, it does not include mathematics, literature, or even philosophy, which are uncontroversially parts of human understanding. Perhaps, however, it does encompass all knowledge; it is perhaps the case that anything that legitimately deserves the name “knowledge” really must be scientific, in the sense described above. But I need not answer this question to dispose of Pigliucci’s case; it is enough to find that this broad definition of science is useful and largely unproblematic.

The hallmark of New Atheism is its insistence on two things: seeing religious dogma as comprising real claims about what is true in the universe—as hypotheses—and regarding “faith” as exactly the wrong way to assess those claims. In contrast, the hallmark of New Theology is to desperately elude that New Atheist stance by rendering religious claims immune to empirical examination and reason. Plantinga, as Larry shows in his other article, gets around New Atheism by insisting that the Christian God is simply obvious to anyone who looks.


  1. gbjames
    Posted February 23, 2014 at 10:15 am | Permalink


  2. ekinodum
    Posted February 23, 2014 at 10:18 am | Permalink

    this is superb

  3. Mark Joseph
    Posted February 23, 2014 at 10:18 am | Permalink

    In my never-ending quest to be the firstest with the naïvest, I use Richard Feynman’s definition of science: “That principle, the separation of the true from the false by experiment or experience, that principle and the resultant body of knowledge which is consistent with that principle, that is science.”

    I have seen this explained (by Carl Sagan, I think) as the word “science” being used in two senses. The first is that science is a method, the notorious “scientific method”, seen in the expression “she’s doing science” (i.e., forming a hypothesis, collecting data, theorizing from the results of the data, etc.). By the way, in this sense, I take science in the wide sense that Dr. Coyne does, from the plumber figuring out why the pipe is clogged to the team figuring out the nature of the Higgs boson. The second sense of the word is the content of science, that is, what is (hopefully) taught in science classes, mostly ending in -ology, -onomy, or -ics.

    Note that the two parts of the explanation correspond to the two parts of Feynman’s definition. For me, this explains what science is almost perfectly, and gives me a useful definition of the word.

    In a probably pompous effort to distil a lapidary phrase out of it, I’ve reduced Feynman’s definition to the following: “Methodological naturalism and its results”. Use at your own risk.

    • Diana MacPherson
      Posted February 23, 2014 at 10:42 am | Permalink

      Yes, I agree. The broad definition of science is really using the scientific method as a tool to understand truth. I actually said in a meeting last week that it’s about time we started using data and analysis to inform our decisions, because the scientific method is the ONLY way to know truths. No more gut feels, no more opinions based on limited observation! Happily, people agreed with me. This first method (the broad one) is part of being rational. It should be used in all aspects of life and all academic disciplines.

      The second definition is more the upper case Science and is specific, as you say, to scientific disciplines. That’s the narrow definition. I think people get all up in arms when they don’t understand the distinction of the two.

      • Mark Joseph
        Posted February 23, 2014 at 1:08 pm | Permalink

        I think people get all up in arms when they don’t understand the distinction of the two.

        If I’m not mistaken, that comes directly out of the root philosophy of American culture, namely anti-intellectualism. They want part 2, the content (read: technology) without part 1 (the method, which involves thinking, which is evil, because once you start to think, there’s a chance you might decide that Magic Man didn’t done it).

        Depressing. Frustrating. Dangerous. But not surprising.

        • Diana MacPherson
          Posted February 23, 2014 at 1:12 pm | Permalink

          There is also the “scientism” charge. People get up in arms about that too & I think it’s because they interpret “science” as “Science”.

          • Mark Joseph
            Posted February 23, 2014 at 1:53 pm | Permalink

            While I would love to read every post and every comment on this website, time prohibits it, and I have to give short shrift to some topics; among which have been the numerous posts about free will and about scientism.

            So, would you, oh most exalted #1 commentator of the year (I’m jealous!) be able to answer a question for me? Have the discussions of “scientism” shown that there is anything more to it than just a slur, along the lines of “atheists have no meaning in their lives”? I suspect it’s just a slur, but don’t really know if there is anything to the charge. Thanks.

            • Diana MacPherson
              Posted February 23, 2014 at 4:16 pm | Permalink

              I think those who are not scientists have mostly taken offence & thought that scientists were telling them they were doing things wrong. I’m pretty sure “scientism” is a word made up by those who have felt slighted — I’m thinking about the whole Steven Pinker vs. Wieseltier extravaganza.

  4. Mattapult
    Posted February 23, 2014 at 10:43 am | Permalink

    When I’m uncertain about the definition, boundaries, or capabilities of science, I substitute “science” with “intellectual honesty,” and that clears it up for me.

    • Mark Joseph
      Posted February 23, 2014 at 12:03 pm | Permalink

      How very Feynmanian of you!

      “Science is a long history of learning how not to fool ourselves.”

      “The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that. After you’ve not fooled yourself, it’s easy not to fool other scientists. You just have to be honest in a conventional way after that.”

  5. Posted February 23, 2014 at 11:17 am | Permalink

    I’m not comfortable with the claim that mathematics is not knowledge. From the standpoint of engineering or computer science, we have mathematical models that predict systems very well, because those systems have been heavily engineered to fit the models. In those cases, a mathematical proof is an excellent demonstration of how a system will operate in the world, or how efficient it will be, or whether it can be built, etc.

    While I’m sure this can be parsed out in many different ways, I think most mathematicians and computer scientists would want to retain the words “knowledge” and “truth” when speaking about mathematical findings. It ultimately comes down to the definition and treatment of “knowledge,” which the Bum acknowledges but doesn’t treat with much precision.

    • Latverian Diplomat
      Posted February 23, 2014 at 11:43 am | Permalink

      Since there’s no way to precisely express scientific results without mathematics, the idea that mathematics says nothing about the real world, whereas that is all that science does, seems a little odd.

      • Posted February 23, 2014 at 11:49 am | Permalink

        Mathematics helps us interpret and understand the world (as do philosophy and logic), but one can make a case that these are analytical tools—“helpers,” if you will—and cannot, by themselves, tell us what is true about the universe.

        • Scote
          Posted February 23, 2014 at 12:06 pm | Permalink

          Yet Carroll’s arguments about cosmology in the debate against WLC were largely about how cosmology is all about creating models – mathematical models.

          I can see how various kinds of mathematics are abstract and unrelated to reality, yet math is how science models the world and the universe in order to make theories with predictive value. It seems to me that the issue is complicated and that math isn’t so easily pigeonholed.

          • Posted February 23, 2014 at 12:17 pm | Permalink

            Sean made quite clear that creating models is only one early step in the process, and otherwise of limited utility.

            As vital as modeling in general and math in particular are, you still don’t know if they have any bearing on reality until you make the observation.

            Remember, Calvinball is every bit as valid a form of math as algebra. It’s just that Calvinball isn’t very useful in making sense of reality. The job of a mathematician is to discover maths that are interesting; the job of a scientist is to discover maths that are useful. Utility and interest often correspond for somewhat obvious reasons, and so scientists and mathematicians make great teammates…but they’re doing fundamentally different things, and most of them are aware of the fact.



            • Posted February 23, 2014 at 1:32 pm | Permalink

              “Utility and interest often correspond for somewhat obvious reasons”. Maybe you could let us in on the secret?

              • Posted February 24, 2014 at 6:51 am | Permalink

                Something that’s useful is going to catch your interest. Things that catch your interest, even if not obviously useful, are going to get additional attention; if there’s anything about them that turns out to be useful after all, you’re more likely to discover that fact as a result.



            • Latverian Diplomat
              Posted February 23, 2014 at 8:47 pm | Permalink

              Your statement about models just being an early stage is not really accurate. Models are used to make the predictions that observations test. And then the observations are used to support, reject, or refine the model. The Standard Model is the description of particles and their interactions; the model is the theory, the expression of the state of scientific understanding. Make a new discovery not predicted by the model, and the model will have to be adjusted.

              Not all scientific work is that deeply mathematical, but counting and measuring are fundamentally mathematical ideas, and almost all science does some of that.

              • Posted February 24, 2014 at 1:16 am | Permalink

                You may be confusing the map with the territory. Mathematics simply provides a compact way of expressing the model; the model itself is not just mathematics.

                As someone else said, mathematics is just a language, albeit a very precise one with its own very rigorous grammar.

                It is certainly possible to have mathematical knowledge (what follows from what axioms; proofs of theorems; &c.). But (as Jerry noted in the OP), not all mathematical knowledge is applicable to the real world. And we do not know /what/ mathematical knowledge is applicable to the real world until we do science.


              • Posted February 24, 2014 at 3:09 am | Permalink

                @ant – The point is that scientific theories (models) often start out as guesses based on patterns, and maths is particularly useful for relating patterns. It’s always good to get evidence to confirm that you are on the right track, of course, but nature isn’t always that obliging. Fortunately, you can produce consistent scientific theories without empirical evidence. Very often constructing such models does lead to predictions that can be tested within the bounds of current technology, but we are still waiting WRT theories such as string theory, the many-worlds interpretation of QM, etc. In any case it’s pretty silly to argue that all useful discourse about the world is “empirical”. What does that even mean? You have to have a theory before you can test it. Science would be pretty dull if the only way to formulate an idea was by ploughing through mountains of empirical data.

              • Posted February 24, 2014 at 3:15 am | Permalink

                But what is the point of a theory – better, a hypothesis – if it is not based on (explains) some existing empirical observations?


              • Posted February 24, 2014 at 4:09 am | Permalink

                So would Einstein’s theory of general relativity have been useless, if we hadn’t had the precision instruments to measure the curvature of light around the sun or the deviation in Mercury’s orbit (or some other empirical fact), at the time he formulated it?

                Does knowledge have to have a point anyway? What is the point of pure mathematics? Prime number theory turned out to be useful in encryption, but I don’t think anyone (certainly not a pure mathematician) would argue that encryption is the point of prime number theory.

                Perhaps one might say that theories are about consistent possibilities and the point of testing them empirically is to decide whether those possibilities are actualised in the part of the physical universe we happen to be in. But consistent possibilities, even ones that aren’t actual, according to current data, can still be interesting, not least because the rules that apply at our particular location in time and space may have a limited reach.

              • Posted February 24, 2014 at 4:36 am | Permalink

                Frankly, yes. Empirically based theories of gravity and special relativity already existed when Einstein developed general relativity. Absent that empirical grounding (and absent tests such as those you mentioned — or the need to explain such things) it wouldn’t have been a terribly useful model.

                I think your example about encryption rather supports my point.

                Of course things can still be interesting, just as certain examples of fiction can be interesting (chacun à son goût).

                But they don’t constitute knowledge about the real world.


              • Posted February 24, 2014 at 11:01 am | Permalink

                Well I am not denying that the way to find out if there is a tree in your garden is to look out the window, just that there’s a whole bunch of other interesting things to know that aren’t about whether or not there is a tree in your garden.

                Empiricism is about the constraints that apply in the actual world, but we can also have knowledge about the constraints that apply in all possible worlds or subsets thereof and that’s a part of science too (not to mention mathematics and philosophy). That’s why you can prove stuff in those areas, whereas empirically we are in the position Neo was in before he took the pill.

                Good science melds empirical observations with reasoning; that’s why mathematics is so useful in physics. But once we go up a few levels of reduction we can’t have the precision of mathematics and must rely on less formal methods of reasoning.

              • Posted February 24, 2014 at 11:16 am | Permalink

                “we can also have knowledge about the constraints that apply in all possible worlds or subsets thereof”

                But unless it is something intended to be tested against the real world it is not science. As I already conceded, many fictions can be interesting. But knowledge of Albus Dumbledore’s middle names* is not scientific knowledge about the real world.


                * Percival Wulfric Brian

              • Posted February 24, 2014 at 11:21 am | Permalink

                …with, of course, the caveat that the printed books themselves are very real, as are the thoughts that constitute the imagined reality of the Harry Potter universe.

                That is, you cannot catch a train in London that’ll take you to Hogwarts where you can meet Dumbledore and shake his hand. But you can imagine doing so, and your imagination thereof is real. It’s just a very limited reality that’s a pale shadow of the real reality. Even if it’s still a fun place to visit.



              • Posted February 24, 2014 at 11:24 am | Permalink

                Agreed. I nearly added, “although it is verifiable knowledge of a secondary creation.” And, indeed, I had just verified it, via Wikipedia! 😀


              • Posted February 24, 2014 at 11:17 am | Permalink

                You’re skirting very close to a very dangerous cliff. Specifically, when one doesn’t have an answer, the solution isn’t to make up your best guess and declare that to be the answer; rather, the solution is to admit you don’t know and then set about trying to figure it out. But, until you do have it figured out, you absolutely must (if you are to be honest) remain acutely aware of your ignorance.

                Sean Carroll does this brilliantly. He’ll be the first to tell you that we don’t yet have a solution to quantum gravity and that the answers to a great many very significant questions in physics hang on that solution. He’ll tell you everything you could ever want to know about the parts we do understand, such as the Standard Model, and how we can be confident that we actually do understand them. As soon as he gets close to the areas that aren’t settled, he’ll start using hedging language. And, once he’s well over the line into purely speculative stuff, he’ll say right up front that that’s the case — and, indeed, often introduce models that are known to be worng by saying, “We know this is worng, but it’s helpful to understand it anyway.” As a bonus, while he’s doing that, he gives a great demonstration of how science generally proceeds: by ruling out possibilities, by setting boundaries, and by increasingly narrowing down the options. With cosmology, for example, we know that no magic men are responsible, and we further know that the answer has to result in the standard model plus Relativity; anything that doesn’t have that as an answer is worng…but it still might be an useful stepping stone if it gets part of the answer right.

                What I’m afraid you’re suggesting is taking that last one, or even something that we don’t yet know is right or wrong, and running with it. That’s philosophy, not science.

                Never lose sight of the error bars, in other words.



              • Posted February 24, 2014 at 12:14 pm | Permalink

                One can prove that it’s possible to create a Turing machine in a deterministic simulation with particular rules. This isn’t knowledge based on empiricism; it is a consequence of the rules that apply to deterministic systems. But that just as much represents something one can know as that one can know there is a tree in the garden by looking out of the window. In fact I’m more sure of it, because we don’t have a clear idea what the world outside of us actually is, and there’s no empirical way of determining that.

                @ant – if you want to define science as looking out of the window, then obviously if you aren’t looking out of the window, you aren’t doing science. However, there is a lot of stuff you can know or at least have good reason to believe, that doesn’t involve that.
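                The Turing-machine point above can be made concrete with a minimal sketch (the rule table below is my own toy example, written for illustration): a hand-rolled machine that increments a binary number. That it halts with the correct answer for every input follows from the rules themselves, by logic, not from empirical testing.

```python
# A tiny deterministic Turing machine that increments a binary number.
# Transition table: (state, symbol) -> (symbol_to_write, head_move, next_state)
RULES = {
    ("right", "0"): ("0", +1, "right"),   # scan right over the digits
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),   # hit the blank: start carrying back
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", -1, "done"),    # 0 + carry = 1, halt
    ("carry", "_"): ("1", -1, "done"),    # overflow: prepend a new 1
}

def increment(bits):
    """Run the machine on a binary string; return the incremented string."""
    tape = dict(enumerate(bits))
    pos, state = 0, "right"
    while state != "done":
        write, move, state = RULES[(state, tape.get(pos, "_"))]
        tape[pos] = write
        pos += move
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, "_") for i in range(lo, hi + 1)).strip("_")

print(increment("1011"))  # 1100  (binary 11 + 1 = 12)
```

Whether any physical system actually implements such rules is an empirical question; what the rules entail, once fixed, is not.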

              • Posted February 24, 2014 at 12:43 pm | Permalink

                When you’re looking out the window at a tree, you’re justified in having very narrow error bars that there is, in fact, a tree out the window that you’re looking at. You could be hallucinating, or a brain in a vat, or an infinite number of other conspiracy theories might apply, but they always do and there’s basically never any practical point in addressing them other than to the extent that they prevent your error bars from ever actually touching; that’s what scientists mean when they say that science never actually proves anything.

                When you look out the window at a tree, you might reasonably surmise that there might be other trees elsewhere that you can’t see. Your error bars for this assumption should be wider than for the direct observation, but it still might be the case that these error bars are touching for all practical non-paranoid porpoises.

                The farther out you speculate, the more disconnected you get from observation, the wider you should space your error bars…until, eventually, the error bars get so wide as to be useless. You could very plausibly speculate that you are a brain in a vat, but your error bars on that speculation would have to be wide enough to also encompass the speculation that you’re a subroutine in a computer simulation. With error bars that wide, you can’t meaningfully distinguish between any of a number of radically competing ideas, and so there’s no point in pretending that you actually know anything about the subject.

                Of course, once you’ve gone and made some observations, that then gives you a chance to start to ratchet down the error bars.

                The fundamental problem with philosophy is twofold: first, it doesn’t see the value in error bars in the first place; second, as a result, it never bothers trying to set them appropriately. As such, philosophers assume they know things they really don’t, and they’re unable to acknowledge knowledge they should actually know.



              • Posted February 24, 2014 at 1:31 pm | Permalink

                “there is a lot of stuff you can know or at least have good reason to believe, that doesn’t involve that”

                Not if you want to know things about what’s out of the window.



              • Posted February 24, 2014 at 2:03 pm | Permalink

                Sometimes I want to know what’s out of the window, sometimes I want to solve some logical problem in computer science. And I don’t think that when engaged in the latter activity I’m doing anything that has any less intrinsic “truth” value than if I don my lab coat and measure some bubbling test tubes.

                Cameras look, but to understand what you are looking at you have to engage your logical faculties and interpret what you see in the light of some theory. And likely that theory is a product of creative pattern matching aka intelligent guessing and that isn’t something that is directly deducible from empirical evidence, not least because “all observations are theory laden”.

              • Posted February 24, 2014 at 2:06 pm | Permalink

                …and how do you know when your pattern-matching algorithm is doing what you think it’s doing or what you want it to do?

                You test it against real-world pictures, of course.



              • Posted February 24, 2014 at 2:58 pm | Permalink

                No, we don’t need to test ideas that can be proved logically. As an example from finance: if I write a valuation routine for a financial instrument based on a logical principle (such as arbitrage theory), then it does not depend on empirical testing to verify its correctness. That doesn’t imply that you will be able to use it in the world we are actually in, because of liquidity and timing (sometimes you can, sometimes not; it’s usually volume dependent). Hence, solutions to problems in the real world are often a mixture of logic and practicality. If your point is that real-world problems require real-world solutions, as in this case, then I (of course) agree with that. But I would also point out that general solutions to such problems are 1) true and 2) can have a great degree of practical utility in a wide range of different circumstances. So the point is that this is one (of many) ways that you can know stuff, often of wide applicability, that doesn’t require empirical evidence. And unlike your average empirical evidence, such results apply to Neo, before *and* after he took the pill.
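                A standard textbook instance of the kind of arbitrage reasoning described above is put-call parity (the numbers below are made up purely for illustration): for European options, C - P = S - K*exp(-r*T) follows from a no-arbitrage argument alone, with no empirical model of how prices move.

```python
import math

def call_from_put(put, spot, strike, rate, maturity):
    """European call price implied by put-call parity:
    C = P + S - K * exp(-r * T).
    The relation holds by a no-arbitrage argument, independent of any
    statistical model of the underlying's behaviour."""
    return put + spot - strike * math.exp(-rate * maturity)

# Illustrative inputs, not real market data.
c = call_from_put(put=2.50, spot=100.0, strike=105.0, rate=0.03, maturity=1.0)
```

Whether you can actually capture a violation of the relation in a live market is, of course, the practical (empirical) side the comment goes on to discuss.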

              • Posted February 24, 2014 at 3:02 pm | Permalink

                And, yet, you don’t actually know whether the instrument has the value you think it has until somebody goes ahead and buys it.

                If somebody buys it for a different price than what your model says they should have, which value are you going to record in the books you show to the IRS? The model’s, or the actual sale price?



              • Posted February 24, 2014 at 3:35 pm | Permalink

                No, it doesn’t matter what price some particular person pays; what matters is the differential between bid/offer prices in related markets at any particular time. The theoretically calculated value *is* the actual value of that instrument at that time (provided your calculation has taken into account all related markets), and it’s a value you can realise in practice (provided you can do the arbitrage and pick up the phone quickly enough to hit it, before someone else does). So that’s just one example of why your contention that all knowledge is empirical is nonsense… I could certainly come up with many more.

              • Posted February 24, 2014 at 3:43 pm | Permalink

                Actually, all you’ve done is display my ignorance of the nuances of arbitrage…and your ignorance of what constitutes empiricism.

                Your model tells you when to buy and when to sell, right? And when you follow your model you make money, right? And when you don’t make the money that you expect to make, you look to revise your model to better reflect the realities of when to buy and sell and for how much, right?

                That’s the textbook definition of empirical science in practice.



              • Posted February 24, 2014 at 4:21 pm | Permalink

                No, that’s not how it works at all. The correct value *is* the correct value: not only can you prove it logically using arbitrage, but you can trade it! There is no empirical testing anywhere in sight; it’s irrelevant – my *correct* valuation is your valuation is the valuation. In a world of near-instant communication, that’s how valuation works for liquid, fungible commodities traded in multiple markets. It’s nice to see that there is an area in which you don’t have expertise! In any case, computing is full of examples whereby you can *prove* various aspects of computational behaviour without any reference to empirical data. That’s hardly surprising, since computing is to a large extent automated logic.

              • Posted February 24, 2014 at 4:27 pm | Permalink

                So, try an experiment. Replace all references to the function that calculates what you’re calling the correct value with a call to RAND(). The code still compiles, yes? No core dumps? Produces output, the works?

                How do you know that your carefully-crafted function is more correct than RAND()?

                Simple: because people are willing to trade with you based on the value you come up with in your function, as opposed to the value you’d get if you used RAND() instead.

                There’s your empirical proof that your function is more useful than a call to RAND().

                The next time you change the function, it’s going to be in a manner intended to make more money for the company, right? If it costs the company a billion dollars in the first hour of trading, are you going to insist that your value is still the correct one, even if you had your ironclad proof before you put it into production and an analysis confirms that there were no errors in the proof?
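                (The experiment proposed above is easy to simulate — a hypothetical sketch, not real trading code; the bid/ask band and price ranges are invented. A quote is “filled” only if it lands inside the market’s band, and a model that tracks the band gets filled vastly more often than a RAND() stand-in.)

```python
import random

random.seed(0)

def fill_rate(quote_fn, trials=10_000):
    """Fraction of quotes the market would accept.

    A quote is 'filled' only if it lands inside the prevailing
    bid/ask band -- the empirical check described above.
    """
    fills = 0
    for _ in range(trials):
        mid = random.uniform(90.0, 110.0)   # unobserved fair value
        bid, ask = mid - 0.5, mid + 0.5     # the observable band
        if bid <= quote_fn(bid, ask) <= ask:
            fills += 1
    return fills / trials

def model(bid, ask):
    return (bid + ask) / 2                  # tracks the market

def rand_quote(bid, ask):
    return random.uniform(0.0, 200.0)       # the RAND() stand-in

print(fill_rate(model), fill_rate(rand_quote))
```

                The model’s fill rate is essentially total, the random quote’s is a fraction of a percent — and that observed difference is exactly the kind of empirical evidence at issue.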



              • Posted February 24, 2014 at 5:30 pm | Permalink

                No, because only the correct valuation results in all the legs of the trade being possible. It hardly seems worth explaining this further, though. But the point is that you can generate proofs computationally, and they represent stuff we didn’t know before we did it. Empirical knowledge is about what’s true in this world (or what we imagine to be this world), whereas logic is about what’s true more generally; i.e., empirical knowledge is contingent: for instance, there is a tree on my lawn, but it might just as well have been a flower. But there are also things we can know that are more generally true: for instance, Neo would not have had to change his idea that 2+2=4 after he took the pill, or that the ontological argument for the existence of God is nonsense. Similarly, we can prove lots of stuff about, say, deterministic systems and their behaviour, which are generally true of such systems (and may actually apply in the world we are in if it turns out to be deterministic or even partially so). Even in the circumstance that we exist in some kind of virtual reality, the aliens would somewhere have had to implement the algorithms that make a computer play chess, and we can say general things about them that aren’t tied to empirical measurements.

              • Posted February 24, 2014 at 9:09 pm | Permalink

                But, again, you don’t know even that 2 + 2 = 4 in the real world until you go out and start counting things. And, if the real world worked like some video games, “bonus” points would wreak complete havoc with the notion of regular arithmetic.

                So you’ve got your mathematically pure correct valuation. But, again, you don’t know that it actually has any bearing on real-world financial markets until it starts making you money. Yes, you can be pretty confident that it’ll work, the same way that an aerospace engineer can be pretty confident her new airframe design will be more fuel efficient after computer simulations…but she’s not going to actually know that it is until at least the prototype is in the wind tunnel, if not the skies. And you don’t actually know that your valuation is right, either, until after it’s been traded enough to show up on the financial report.



              • Posted February 24, 2014 at 3:49 pm | Permalink

                “provided your calculation has taken into account all related markets”

                And how do you know that it has?!



              • Posted February 24, 2014 at 2:39 pm | Permalink

                Then you are just looking through different windows! 😀

                I never claimed that hypotheses can be derived directly from observation. Of course you can guess! (Feynman said so!!) But what is the point of guessing if not to explain what you have observed?


  6. Mark Joseph
    Posted February 23, 2014 at 12:05 pm | Permalink

    I just ran into this, which will be of interest to this discussion; a compendium of definitions of science from a Richard Dawkins website.

  7. Posted February 23, 2014 at 12:12 pm | Permalink

    That’s an interesting critique. A couple of problems:

    1. I would of course reject the claim (that the author expresses sympathy for) that mathematics and philosophy may not provide “knowledge.” We know knowledge to be justified, true, anti-Gettiered belief, and so the author would need to argue that mathematical and philosophical beliefs are unlikely to be justified, or that if they are justified, it’s merely in Gettier ways. Since justification is a normative concept, this argument would require stepping outside of observation per se, for justification itself is invisible.

    2. The author imposes a few conditions on science, most prominently that it be falsifiable, justified by observation, and parsimonious. This commits him or her to several controversial philosophical theses. For example, falsifiability is notoriously difficult to explain in a worthwhile way, given that apparently falsifying observations can always be explained away by ad hoc hypotheses, such as that one’s telescope was malfunctioning. (Look up “underdetermination of theory by observation.”) Justification by observation requires a philosophical argument that observation and induction are reliable, on pain of circularity. And parsimony is a philosophical principle as well, also empirically unjustifiable on pain of circularity. So if philosophy doesn’t provide knowledge, then the author must either (a) admit that he or she doesn’t know what science is nor whether it has ever been practiced or (b) embrace circular arguments.

    • Posted February 23, 2014 at 12:31 pm | Permalink

      They are not philosophical theses, but tools that work. As for your denigration of falsifiability, you say that a falsification can always be explained away. Well, I’m sorry, but that’s not always the way it works. The faster-than-light neutrino observation was falsified by finding a loose wire. The static-continent theory was falsified by looking at geographic patterns and now by watching the continents move. “Creation theory” (once a scientific claim) was falsified by mountains of data. So please do not imply that falsifiability is deeply problematic because scientists always try to save their theories. That’s just wrong. As for induction by observation, well, it works, and to me it doesn’t need philosophical justification. If philosophers can’t justify it because the argument is circular, then I guess we can’t do ANY science, can we?

      Finally, parsimony has generally worked–that’s why it’s used. It’s not used because it’s been philosophically justified.

      I’m starting to think that if we listened to all the reasons philosophers give us that we can’t really justify why we do science as we do, we should just give up and twiddle our thumbs. The fact is that we have a bunch of practical tools that work, and I don’t pay a lot of attention to whether those tools seem circular or anything–nor do most working scientists who have made all the advances from which we benefit. How has philosophy helped them do their work?

      Philosophy’s benefits to science, it seems to me, are to find inconsistencies in our reasoning, or help us find clarity, but not to provide some a priori grounding for what we do.

      • Posted February 23, 2014 at 12:40 pm | Permalink

        Philosophy’s benefits to science, it seems to me, are to find inconsistencies in our thinking, or help us find clarity, but not to provide the grounding for what we do.

        Jerry, I think you’ll find that even those benefits aren’t all they’re cracked up to be. Consider all the philosophical objections over the years and still common today to the findings of science, including the “argument from design” against Evolution and the general incomprehensibility of quantum mechanics to lay audiences.

        At best, philosophy could potentially be a source of ideas…but even there we see the most useful ideas coming from those, such as Darwin, who got their hands dirty with the data, and not from philosophers who argued from first principles.

        And, if you dig even further, you’ll find that everything useful that comes out of philosophy departments is only useful to the extent that it’s confirmed by empirical observation — which places it squarely in the realm of science, not philosophy.



        • Posted February 23, 2014 at 2:09 pm | Permalink

          Hello, Mr. Goren.

          Do you agree that philosophy can also provide perspectives that help scientists come up with new ideas? For example, Darwin was famously influenced by Malthus, and the invention of statistics was influenced by Hume’s account of causality. On the other hand, Stephen Jay Gould has argued that the success of the civil rights movement made it easier to see what was wrong with the arguments of the scientific racists who wrote at the beginning of the twentieth century. It is true that none of these scientific conclusions can be deduced from the philosophical developments that precede them, but it does seem like philosophy could have made the scientific developments easier.

          • John Scanlon, FCD
            Posted February 24, 2014 at 5:47 am | Permalink

            Those examples of ‘philosophy’ (from Malthus & Hume) look a lot like mathematical models, whatever else those authors may have occupied themselves with.

            • Posted February 24, 2014 at 7:53 am | Permalink

              Yes, exactly. Thinking up new ideas is obviously a vital part of the scientific process, and there have been people, especially in the past, who wore philosophers’ hats who provided some of that inspiration. But inspiration comes from all sorts of places; it was a dream of the Ouroboros, the mythical snake eating its tail, that inspired Kekulé to think of benzene as a ring-shaped molecule.

              But today, all the low-hanging fruit has been picked, and the only people making meaningful contributions are the ones who’ve devoted their lives to the endeavor. We saw a perfect example of that in Sean Carroll’s debate a few days ago with William Lane Craig. Sean is the real deal, making active and useful contributions to cosmology; WLC is a dilettante who’s read a bunch of the literature but is completely clueless about what any of it actually means. And Craig was trying to inject philosophy into cosmology…and he was still stuck on ideas that were new and exciting before the rise of the Roman Imperium. He couldn’t for the life of him let go of those ancient, long-since-discredited concepts; as a result, he made a complete mess of the modern science. Craig would be an active liability to any astronomy department, not only never contributing anything of value but constantly wasting everybody else’s time with his primitive superstitious nonsense.



      • Diane G.
        Posted February 23, 2014 at 2:22 pm | Permalink

        JAC: “…a bunch of practical tools that work…”

        Another good definition for (some senses of) science.

      • Posted February 24, 2014 at 8:30 am | Permalink

        Thanks for your reply.

        There’s a lot more to these issues than we can deal with here, as I’m sure you know. But I want to emphasize that I’m not arguing that falsifiability, parsimony, or observation are unjustifiable–instead, that they’re unjustifiable when we limit ourselves to science itself.

        Take some theoretical criterion C–perhaps it’s falsifiability, parsimony, elegance, predictive power, or observational support. Now consider this argument:

        1. Science employs criterion C.
        2. Science is successful.
        3. Therefore, criterion C is a mark of truth.

        This argument will be circular unless (2) can be supported without appealing to C. So if science uses falsifiability, then it’s circular to appeal to a track-record argument such as that one as a way of justifying falsifiability. And the same can be said, mutatis mutandis, for observation as well.

        So once again, most philosophers would hold that science is perfectly justified, including most of its theoretical criteria. Many would insist, however, that those theoretical criteria are unjustified when we’re employing only observation.

    • Posted February 23, 2014 at 12:31 pm | Permalink

      The only reliable method anybody has ever demonstrated for determining the degree of confidence warranted in a proposition is empiricism. Philosophy and logic sometimes help to winnow out useless propositions, but they’ve also often thrown the baby out with the bathwater — witness philosophical and logical objections to infinities, non-Euclidean geometries, quantum- and relativistic-scale weirdnesses and contradictions, and the like.

      Indeed, your own closing objection to circular arguments is a perfect example. The obvious conclusion to your objection is that external justification is always required, and that no self-contained system could ever have within it its own justification. Not only is that a minor variation on the Christian / Platonic “First Cause” argument, it’s neither self-consistent nor what we observe in reality. Evolution, for example, needs no external justification, and we see similar self-referential recursive phenomena everywhere we look. And I dare you to offer a justification of an objection to circularity that isn’t itself circular.

      While I’d certainly agree that one should be wary when dealing with circular arguments, since they’re a common rhetorical trick often used deceptively, we know empirically that there are times when there’s nothing wrong whatsoever with circularity.

      And empiricism would be the perfect example of such a case. As Richard Dawkins quoted Randall Munroe: “Science. It works, bitches.”



      • Posted February 23, 2014 at 8:31 pm | Permalink

        Bad ideas are bad ideas, whether they are nominally a part of philosophy, science, logic, mathematics or whatever; on the science side, witness phlogiston, the luminiferous aether, Piltdown Man, cold fusion, N-rays, etc. Arguments need to be addressed on an individual basis, and propagating two-cultures prejudice just makes it harder for people in different fields to communicate with each other productively.

        • Posted February 24, 2014 at 7:41 am | Permalink

          The only relevant question is, how do we know which ideas are bad and which are good?

          And there’s only one answer that’s ever proved reliable: check your ideas against reality.

          That’s the heart and soul of science, and it’s either irrelevant or inconsequential in all other fields.



      • Posted February 24, 2014 at 8:31 am | Permalink

        Thanks for your reply. I think I just have one question then: How do I sort the good circular arguments from the bad ones? (How do you explain to the religionist why she can’t just cite her religion as proof of her religion?)

        • Posted February 24, 2014 at 8:44 am | Permalink

          Oh, that’s trivial — and the whole point of science.

          You go out and make observations based on the predictions and see how well they hold up.

          The Bible, for example, famously says that anything somebody asks in Jesus’s name shall come to pass. That might or might not be true within the pages of the Bible — I’m not enough of a Biblical scholar to know or care — but that’s really just an exercise in literary analysis. If the question is how well that claim holds up in the real world, all you have to do is ask for something in Jesus’s name and see if it does or doesn’t come to pass. And the results are overwhelmingly negative; invoking Jesus makes not the slightest difference as to whether or not anything happens.

          So, if you want to make up some fantasy world where invoking Jesus really does work, if you can figure out some way to live there, wonderful; have at it. But reality is that which persists whether or not you believe in it, and reality has a nasty habit of biting you in the ass if you try to ignore it. Statistically, over evolutionary timescales (if not much shorter), the prudent bet is to align your beliefs with reality, no matter what your religion or philosophy might suggest to the contrary.



    • Torbjörn Larsson, OM
      Posted February 23, 2014 at 1:20 pm | Permalink

      apparently falsifying observations can always be explained away by ad hoc hypotheses, such as that one’s telescope was malfunctioning.

      Preposterous! Testing until error (aka “falsification”) merely means that a specific hypothesis has to be rejected as not working. That means that its constraints go with it.

      You are adding hypotheses and/or changing constraints, which means it is _another_ hypothesis. E.g. from “this hypothesis predicts these observations” to “this hypothesis and not that telescope predicts these observations”. See my comment on overfitting.

      More generally, if testing didn’t work we would never have developed technology, because we would not know a working stone tool from an ordinary stone.

      Other preposterous notions are claims that science relies on induction – only a sometimes tool for making hypotheses, never for testing them – and rejects circularity – a well-tested theory is perfectly circular with its observations on purpose*, and only new observations or theory can break that to move on.

      This type of comment is presumably what happens when you visit a Sophisticated Philosophy™ class instead of studying science.

      *Circularity is only dangerous when you have no observational input, as we can see from the example of how science uses it.

      But no living empirical area is static, as opposed to philosophy, which has no means to move itself out of the hole its barren methodology has dug.

      • Posted February 24, 2014 at 8:36 am | Permalink

        My worry is that appealing to some theoretical criterion (falsifiability, observational testability, parsimony, elegance, predictive power, etc.) cannot be justified by the claim that science works, unless someone can use something other than science to discover that science works.

        If you accept circular arguments (at least sometimes), how do we sort the good ones from the bad ones? Why not appeal to Ouija boards to justify Ouija boards, and the Bible to justify the Bible?

        • Posted February 24, 2014 at 8:45 am | Permalink

          The Bible can justify the Bible, if all you care about is the Bible. But the Bible cannot justify reality; only reality can justify reality. And we have yet to discover a more effective method of discerning reality than by observing it, and no better method of observing it than empirically in a scientific manner.



    • Torbjörn Larsson, OM
      Posted February 23, 2014 at 1:33 pm | Permalink

      And parsimony is a philosophical principle as well, also empirically unjustifiable on pain of circularity.

      Oops, I missed that. More preposterous claims, and I refer to my comment where I attempt to analyse parsimony from an empirical standpoint.

      I don’t claim that it is entirely correct or complete, because I haven’t studied the areas where its methods are justified “on pain of” the fact that they work (non-basic statistics and cladistics). But I claim that it is a tool kit of several empirically justified methods.

    • Posted February 23, 2014 at 1:51 pm | Permalink

      (I am the author of the works cited in the post.)

      I do not actually make the claim that mathematics and philosophy are not knowledge, nor do I actually have any particular “sympathy” for such a claim. Like almost every word in every natural language, “knowledge” is used to denote a variety of loosely related categories. I prefer to simply disambiguate equivocal words, e.g. talk about scientific knowledge, rather than argue the philosophical legitimacy of their various lexicographical uses.

      If your only criticism of my work is of a claim I explicitly do not make, I can conclude that I have done fairly well at justifying the claims that I did make.

      • Posted February 23, 2014 at 8:41 pm | Permalink

        Whoa! BFB – the link with your ID is some sort of sex site.

        • infiniteimprobabilit
          Posted February 24, 2014 at 1:07 am | Permalink

          barefootbum.blogspot.com appears to be the correct site.

          barefootbum.com (as linked from BFB’s sig just above) is indeed a sex site.

          • Posted February 24, 2014 at 1:42 am | Permalink

            Well, technically it is just a parking page for the domain (maybe BFB’s old site?), with links to “sex” sites. But certainly something BFB should fix!


        • Posted February 24, 2014 at 12:03 pm | Permalink

          Ruh roh! Should be fixed now. Thanks.

          • Posted February 24, 2014 at 6:30 pm | Permalink

            That should keep the NSA away for now 🙂

      • Kevin
        Posted February 23, 2014 at 10:10 pm | Permalink

        You have a great site. I will have to read more.

      • Posted February 24, 2014 at 8:32 am | Permalink

        Sorry for misrepresenting you. I retract my attribution of those positions, but not, of course, my criticisms of them.

        • Posted February 24, 2014 at 12:07 pm | Permalink

          An honest mistake, no offense taken. I do think your criticism is not entirely correct; if you want to discuss the issue at greater length, feel free to come over to my blog.

    • Posted February 23, 2014 at 2:22 pm | Permalink

      Occam’s razor should be seen as an invocation to determine what is minimally implied by some particular evidence, as compared to what is extraneous and doesn’t necessarily follow from the evidence. For instance, the theory that “evolution occurred but that god intervened to create man” is on the face of it supported by all the evidence that supports evolution, but we can use Occam’s razor to slice off the god part of the hypothesis, since none of the evidence explicitly requires that. Then we have a *simpler* theory than we had before – don’t multiply hypotheses unnecessarily.

      • Posted February 24, 2014 at 8:33 am | Permalink

        I agree with that use of Ockham’s Razor, but I would want to know whether there is any purely empirical support for it. It strikes me as a philosophical principle employed to constrain science.

        • Posted February 24, 2014 at 11:31 am | Permalink

          I don’t think that all knowledge is empirical; in fact, that is a thoroughly confused (and self-refuting) idea IMHO. But, in the case of Occam’s razor, perhaps one could say that theories that don’t have ad hoc additions make better predictions than ones that do, and that could be tested empirically, at least in certain cases. For instance, the theory that “gravity applies to everyone else, but not to me if I jump off the Eiffel Tower” probably would be falsified by evidence if I did try to jump off the Eiffel Tower. But I haven’t really thought about how one might generalise that to cases such as “if I had jumped off the Eiffel Tower yesterday”, which couldn’t in principle be falsified.

          In order to establish that Occam’s Razor was a universal principle, rather than an empirical one that just applies in the circumstances we happen to be in, I think one would have to come up with a good *logical* argument as to why it should apply universally.

          • Posted February 24, 2014 at 12:01 pm | Permalink

            In order to establish that Occams Razor was a universal principle, rather than an empirical one that just applies in the circumstances we happen to be in, I think one would have to come up with a good *logical* argument as to why it should apply universally.

            That would be a perfect example of not just a philosophical problem, but why philosophy itself is irrelevant and generally useless.

            It is practically a universal principle that there are no universal principles. Rather, there are relevant domains over which certain theories are useful. Suggesting that there might be universality to Occam’s Razor is equivalent to suggesting that the Pythagorean Theorem is similarly universal, when we already know for a fact that it’s actually never true (even if it’s mind-bogglingly useful). And suggesting that the Razor is only useful if it’s universally applicable is another philosophical problem, as is the notion that we should seek or even prefer universally applicable principles.

            All of that only makes sense in a world of Platonic idealism…and, again, we’ve known for centuries that the universe just isn’t like that.

            Occam’s Razor is a very powerful and very useful tool, when used appropriately. There is no more reason to be disappointed in the fact that it can be used inappropriately than there is reason to be disappointed that Pythagoras can’t deal with Relativity — or, indeed, that hammers make lousy screwdrivers.



  8. Posted February 23, 2014 at 12:20 pm | Permalink

    I would agree with many of the above comments that, just as Evolution is both a theory and a fact, science is both a method and a body of knowledge. The method, I would argue, is the process of apportioning belief in proportion with that indicated by a rational analysis of empirical observation. Most other definitions of science don’t capture the idea of error bars, which I consider the most important part of the process.



    • DianeAlliLangworthy
      Posted February 23, 2014 at 1:42 pm | Permalink

      Made me think of Carl Sagan: “Science is more than a body of knowledge. It’s a way of thinking.” Also, I really like your second sentence!

  9. Torbjörn Larsson, OM
    Posted February 23, 2014 at 12:36 pm | Permalink

    Theist: “I propose an invisible magic Man created the universe.”

    Atheist: “I don’t see any invisible man, and I don’t see that magic was either necessary or sufficient in the cosmological process we see.”

    Theist: “Ah, but you haven’t looked hard enough! The Man is invisible so we have to believe.”

    Atheist: “By the way. If no one else was around then, why was that magic man always invisible?”

    In other news, time to drag out philosophy in the light again:

    Pigliucci even claims that “it seems clear to me that most of the New Atheists [except for the professional philosophers among them] pontificate about philosophy very likely without having read a single professional paper in that field”

    Well, the same can be said for Sophisticated Philosophers™. Science and atheism aren’t philosophical questions; it is theology and agnosticism that are.

    SopPhil™ has a 2,000-year-old history of pontificating about science even before there _were_ professional papers in the area!

    we’re engaging in the Deadly Sin of Scientism.

    Translation: “Never mind that SopPhil™ has that long history of Philosophism.”

    • Diane G.
      Posted February 23, 2014 at 2:26 pm | Permalink

      Would love to see “philosophism” used as often as appropriate (which is often!).

  10. Posted February 23, 2014 at 1:04 pm | Permalink

    As a devout Christian, I disagree with many of your conclusions, but I find your blog an intellectual oasis in a desert of talking points from both sides. Like many here I desire to observe and, at times, participate in the debate at a level where I actually feel challenged. This blog more than meets that standard. So here’s a thank you from “the opposition.”

    • Mark Joseph
      Posted February 23, 2014 at 2:02 pm | Permalink

      Howdy RJ:

      You’ve hit on a key point; Dr. Coyne has done a magnificent job of keeping this “blog” (more about that in a second) a great place for people to interact and to keep on topic; I think it also helps a lot that the people here respect each other. The intensity level can get pretty high, but as you’ve indicated, it’s a much higher signal-to-noise ratio than on far too many other sites.

      Now, about the word “blog”. Dr. Coyne prefers it to be referred to as a “website”. It’s pretty much the biggest insider’s badge here, so you might want to play along. In fact, the preferred spelling of the dreaded “b-word” here is “bl*g”.

      Let it never be said that we (well, at least the vast majority of us) atheists are humorless, mechanistic automatons!

  11. Torbjörn Larsson, OM
    Posted February 23, 2014 at 1:06 pm | Permalink

    Jerry’s article evokes some questions:

    the idea theories should be parsimonious, invoking no more assumptions or entities than necessary to explain the observations.

    It seems to me there are two, likely more, senses of parsimony used in science.

    First we have overfitting. “In statistics and machine learning, overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally occurs when a model is excessively complex”.

    I’m reminded of how the current inflationary cosmology is “standard” because when you try 6 instead of 5 basic parameters you don’t gain much from WMAP or Planck data. Instead you run the risk of overfitting. Hence cosmologists place penalties on number of parameters.

    Then we have cladistic parsimony. My understanding is that it is used because it works robustly, spreading the errors without bias. “Parsimony analysis uses the number of character changes on trees to choose the best tree, but it does not require that exactly that many changes, and no more, produced the tree. As long as the changes that have not been accounted for are randomly distributed over the tree (a reasonable null expectation), the result should not be biased. In practice, the technique is robust: maximum parsimony exhibits minimal bias as a result of choosing the tree with the fewest changes.”

    Both of these seem, as opposed to the philosophical parsimony of Occam’s Razor, to be grounded in empirical worth, and more specifically in robustness and freedom from bias.

    But I note that they are not always the most valuable methods: ” Maximum likelihood is a parametric statistical method, in that it employs an explicit model of character evolution. Such methods are potentially much more powerful than non-parametric statistical methods like parsimony, but only if the model used is a reasonable approximation of the processes that produced the data. Maximum likelihood has probably surpassed parsimony in popularity with nucleotide sequence data, and Bayesian phylogenetic inference, which uses the likelihood function, is becoming almost as prevalent.” When understanding deepens, other methods come to the fore.

    Speaking of understanding:

    “That is, there are no ‘ways of knowing’ other than through science, though there are ways of understanding that fall outside science’s bailiwick:”

    “As noted above, [this definition of science] does not include mathematics, literature, or even philosophy, which are uncontroversially parts of human understanding.”

    Really? Is literature a way of “understanding” rather than a way of experience?

    To me, understanding is not correlated with experiences (which can be confusing, hallucinatory, or simply drab) but with the ability to predict facts. Theories allow understanding; facts and theories describe facts, and so constitute knowledge.

    The claim that mathematics by itself, literature, and (oh darwin!) philosophy are parts of understanding seems _very_ controversial to me.

    If The Barefoot Bum takes understanding to be very loose, then any experience can be understanding. But that sounds like drug trips allow understanding. I don’t see the value of such “understanding”.

    • Torbjörn Larsson, OM
      Posted February 23, 2014 at 1:24 pm | Permalink

      Blockquote fail, but it seems legible anyway.

      Sorry about that. :-/

    • Torbjörn Larsson, OM
      Posted February 23, 2014 at 1:40 pm | Permalink

      Also, link fail: cladistic parsimony.

  12. Posted February 23, 2014 at 1:08 pm | Permalink

    I’m not sure it makes sense to conflate Richard Dawkins with Sam Harris. Whilst I respect Sam Harris’s views on religion, his earlier books, and his debating ability, The Selfish Gene and its follow-ups (The Blind Watchmaker, The Extended Phenotype) are in a different league.

  13. Posted February 23, 2014 at 1:13 pm | Permalink

    I’ve thought a lot about mathematics and am coming around to the view that it doesn’t reveal truths about the world, but simply the inevitable consequences, worked out by logic, of a set of axioms.

    I agree with Professor Ceiling Cat on nearly all of this issue, but not quite on that point about mathematics.

    As I see it, those axioms and the logic used to reason from them are themselves empirically derived, adopted from our experience of nature.

    Thus, ultimately, we hold that 1 + 1 = 2 owing to experience of the world, not because it has been proved from any principles of maths (and anyhow, those principles are themselves empirically derived). Mathematics is thus “distilled empiricism”.

    Thus mathematics does tell us about the world and is indeed a branch of broad-definition science.
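    For what it’s worth, the purely formal side of that claim can be made concrete: inside a proof assistant, 1 + 1 = 2 really is derived from the definitions alone, with no appeal to experience. In Lean 4 (a sketch; whether our trust in the axioms and logic themselves is “distilled empiricism” is exactly the question above):

    ```lean
    -- `1 + 1 = 2` follows from the definition of addition on the
    -- natural numbers; `rfl` checks it by pure computation.
    example : 1 + 1 = 2 := rfl
    ```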

    • Posted February 23, 2014 at 7:36 pm | Permalink

      Agree. I myself have made the “distilled empiricism” point at this site in the past.

      But wouldn’t it also be appropriate to say that math is one of the languages with which we describe the relationships between things, be those things concrete or abstract? It’s just a much more efficient language than, say, English. And we wouldn’t say that the words in the English language are knowledge. We use them to convey knowledge.

      • Diana MacPherson
        Posted February 23, 2014 at 8:02 pm | Permalink

        That’s how I see it – a language to describe something. I used to think it was the actual truth but I’ve changed my mind recently and blame the Romantic poets for making me think the former.

    • Posted February 24, 2014 at 7:00 am | Permalink

      Yes, exactly. Draw a right triangle, draw squares on the sides, compare the areas covered by the squares, and you’ll find that the bigger always covers as much area as the other two combined — and I guarantee you that was the original derivation of Pythagoras’s famous Theorem.

      I’d again note that Calvinball is every bit as valid a mathematical system as any other; it’s just nowhere near as useful as the ones we’re familiar with. But even the ones we’re familiar with aren’t as universally useful as is often naïvely understood; Einstein suggested and Eddington demonstrated that Pythagoras is actually always wrong, in the exact same way that Calvinball is always wrong; it’s just that Pythagoras’s errors are rarely anything worth staying up at night worrying about (even though they’re always present), whilst Calvinball almost never produces anything you can actually use.
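      That “Pythagoras fails in curved space” point is easy to check numerically. On a unit sphere, a right triangle with legs a and b has hypotenuse c given by the spherical law of cosines, cos c = cos a · cos b, which only approaches √(a² + b²) as the triangle shrinks (a quick illustrative sketch, not a claim about any particular physical measurement):

      ```python
      import math

      def flat_hypotenuse(a, b):
          """Pythagoras: what Euclidean geometry predicts."""
          return math.hypot(a, b)

      def spherical_hypotenuse(a, b):
          """Hypotenuse of a right triangle on a unit sphere, from the
          spherical law of cosines with a 90-degree included angle:
          cos(c) = cos(a) * cos(b)."""
          return math.acos(math.cos(a) * math.cos(b))

      # A big triangle (legs 0.5 radians): Pythagoras is measurably off.
      big_error = flat_hypotenuse(0.5, 0.5) - spherical_hypotenuse(0.5, 0.5)

      # A tiny triangle: the error is still there, just negligible.
      tiny_error = flat_hypotenuse(1e-3, 1e-3) - spherical_hypotenuse(1e-3, 1e-3)

      print(big_error, tiny_error)
      ```

      The error never quite vanishes on a curved surface; it just becomes too small to care about, which is the point about the Einstein-scale corrections above.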



  14. Gordon
    Posted February 23, 2014 at 1:36 pm | Permalink

    “We seem to admit lawyering, but lawyers are not obviously unscientific.”

    I would hope that lawyers, me being one, are not obviously unscientific. I would not see the law (in the sense of the legal rules) itself as scientific in the broad sense used above but both the development of the law and its application should be.

    Laws, in principle, should be developed using scientific principles: is there a problem with X? How is that best resolved legally? Did that work? In practice, of course, politics, ideology, inertia, and the general human tendency to stick our noses into everyone else’s lives tend to get in the way.

    The law itself is not obviously scientific – it is set of rules to govern behaviour, allocate risk etc: certain actions constitute theft; the original owner of goods retains title against a finder but not necessarily someone who bought the goods in an open market and in good faith; a person spraying herbicide may be liable for damage to a neighbour’s plants.

    And of course the application of the law should be a scientific process: is there a factual basis for any claim made (was the neighbour spraying herbicide or fertilizer)? Does the law provide a remedy in that situation (for example, was the damage too remote)? Has loss been established and quantified?

    • Posted February 23, 2014 at 1:50 pm | Permalink

      But isn’t it the responsibility of a lawyer to put forward the best interpretation of events from the point of view of a particular actor in those events, their client, rather than take a dispassionate view of the overall circumstances, as is required by science?

      • Richard Olson
        Posted February 23, 2014 at 3:40 pm | Permalink

        Legal disputes with two or more adversarial parties arguing claims, depending on the nature of the matter(s) to hand, may be science, art (if one considers debate skill an art), or some mixture of the two. Existing statute/precedent, attorneys, disputants, evidence or lack thereof, judge — and jury if one is involved — all combined form an environment not comparable to a sterile laboratory by any means, yet comprise a set of field conditions, of sorts, for a real world experiment with real outcomes.

        Ken Ham/WLCraig-style debate strategy and behavior is not prohibited, although the legal structure includes means to control such behavior and undermine it to the point of impeachment; that is the principal distinction between debates and, e.g., trial proceedings.

        A rules-based legal process exists, therefore: not precisely the same as, yet not too dissimilar from, the scientific process. Erroneous behavior may creep into both fields, and often does; legal verdicts, like scientific conclusions, are subject to review and reversal. The actions of some attorneys are less than optimal: blundering, perhaps stupid, even downright scurrilously and unscrupulously dishonest. True, too, of some in the science community.

        The two are not perfectly analogous, but development of laws lumbers along with the same end goal in sight as scientific inquiry, I think. Which is, despite all obstacles, to arrive at the best possible outcome presently obtainable.

        • Gordon
          Posted February 23, 2014 at 7:58 pm | Permalink

          Couldn’t have put it better myself.

          The legal system is not perfect but the objective is that a decision in any particular dispute be reached on the basis of the available and relevant evidence and that the overall result (ie the judge’s or jury’s decision) should be dispassionate.

    • Posted February 24, 2014 at 12:23 pm | Permalink

      For the record, I quite admire some lawyers, including many personal friends. Unfortunately, like economics (my own academic field), 99% of lawyers and economists (and philosophers) give the other 1% a bad name.

  15. DianeAlliLangworthy
    Posted February 23, 2014 at 1:37 pm | Permalink

    “At any rate, The Barefoot Bum’s critique is both better reasoned and more temperate than mine…”

    Professor CC is a good hoomanbeing.

  16. Torbjörn Larsson, OM
    Posted February 23, 2014 at 2:19 pm | Permalink

    I have now read Bum’s article (it’s a very good one), and I have seen the light: I shall henceforth count myself among the Newest New Atheists, the Shallow Atheists.

    I assume it is because I, like Harris, take a practical approach to philosophy and its attendant Philosophism. [To paraphrase Pigliucci: ‘a totalizing attitude that regards philosophy as the ultimate standard and arbiter of all interesting questions; or alternatively that seeks to expand the very definition and scope of philosophy to encompass all aspects of human knowledge and understanding.’ E.g. ontology, metaphysics, philosophy of science, induction/circularity/parsimony, and other solipsist notions.]

    I swear, the darn thing has sat on my shelf for 2 000 years. I have turned it over and over. I have shaken it, I have opened it, and I have looked with alarm at its unfondled intestines to see how it works. I have sweet talked it and I have cussed it.

    But nothing seems to get it to help me with empirical matters, or anything else really. What more can I do?

    And it isn’t only me. When did philosophy predict anything useful for anyone? When did it stop telling different histories depending on its audience, when did it stop playing nice with its sinister cousin theology, when did it stop confusing people over even simple science and its entirely empirical underpinnings, and when did it attempt to tell reliable facts?

    So I have Hoose Roolz. If it sits in my house and it hasn’t been of use in a year (when it is expected to be so used), it goes to recycling. That’s where my philosophy went.

    And that is why I am now a proud, Clean Shelf, Shallow Atheist. Because after all, the current runs fastest in the shallow waters.

    • Diane G.
      Posted February 23, 2014 at 2:31 pm | Permalink

      Love that!

      I’m Shallow, too!

  17. Diane G.
    Posted February 23, 2014 at 2:31 pm | Permalink

    “I finished the first draft of my book and have begun revising it”

    Yay! Congrats!

  18. Ian Belson
    Posted February 23, 2014 at 2:53 pm | Permalink

    I like this guy, not only because he is right and has good clarity of thought, but also because he completely does away with the argument from authority by not identifying himself other than by his gender. We then have to focus on his arguments rather than just say that he must be right because he is Jerry Coyne (or Richard Dawkins, Steven Pinker, or the pope).

  19. Rikki_Tikki_Taalik
    Posted February 23, 2014 at 7:23 pm | Permalink

    “Plantinga, as Larry shows in his other article, gets around New Atheism by insisting that the Christian God is simply obvious to anyone who looks.”

    Plantinga then should explain why it is that so many of us who were raised in the religion realized there is an obviousness about it, just not the way he means it. We have looked.

    As Bertrand put it …

    “It doesn’t seem to me that this fantastically marvelous universe, this tremendous range of time and space and different kinds of animals, and all the different planets, and all these atoms with all their motions, and so on, all this complicated thing can merely be a stage so that God can watch human beings struggle for good and evil—which is the view that religion has. The stage is too big for the drama.”

    We are to believe that Yahweh-Yeshua, the leftover war god from a Jewish polytheistic pantheon, who can do anything and everything, has created a tiny planet within an almost unimaginably immense universe especially for the humans he created to live on it, that have let him down so badly that he has been required to curse them and all their descendants for behaving as he created them in The Fall, then wipe them out in a genocide save a handful along with the planet’s wildlife during The Flood aboard a boat, and having failed completely in fixing the matter at any point thus far, became a sort of demi-god who performed tricks like walking on water, transforming water to wine, rubbing spit and mud in humans’ eyes to cure blindness to convince them of his reality, all of this occurring in one relatively remote place in front of and to impress and convince a select few, to have itself tortured and murdered by humans* only to return from the dead and fly into the heavens, all of this to fix the betrayal by humans but which really did nothing but change the aspect in which humans are to worship it for the same unevidenced reward of living forever, leaving humans to convince others of its history and inspired words written by the same fallible humans requiring subsequent translation and editing, with the end result that all of this is to be confirmed as true by having humans self-exploit their emotions and engage in philosophical gyrations to bolster the faith required to accept it and convince others.

    The obviousness is palpable. It’s obviously a human mythological construct. Pardon the run-on.

    *Judas got a bad rap.

    • DianeAlliLangworthy
      Posted February 23, 2014 at 7:43 pm | Permalink

      The run-on is effective. The little bits are what many of us were taught in nice stories by nice teachers each week in our Sunday school or Sabbath school classes or in parochial school religion classes at various levels of immaturity. The big picture is stunningly unbelievable.

    • Mark Joseph
      Posted February 23, 2014 at 8:09 pm | Permalink

      Plantinga then should explain why it is that so many of us who were raised in the religion realized there is an obviousness about it, just not the way he means it.

      Or why it’s not so obvious to Jews and Muslims.

      • Posted February 23, 2014 at 11:05 pm | Permalink

        Well said Mark. Bringing up the other religions is enough to dispel so many arguments from monotheists.

    • Posted February 24, 2014 at 1:07 am | Permalink

      Richard Feynman, I think, not Bernard (Russell).


      • Posted February 24, 2014 at 7:30 am | Permalink

        Think you’re right that it was Feynman, but who is this Bernard Russell chap? :).

        • Posted February 24, 2014 at 8:42 am | Permalink

          Bertrand’s older brother; he would proofread his brother’s manuscripts.

          • Posted February 24, 2014 at 8:53 am | Permalink

            /Me culpa/!


            • Posted February 24, 2014 at 9:04 am | Permalink

              Hola, culpa! Me Ben. Nice meet culpa. Where culpa from? And what culpa do with Ant?


              • Posted February 24, 2014 at 9:24 am | Permalink

                Damn you OS X autocorrect!


              • Mark Joseph
                Posted February 24, 2014 at 9:33 am | Permalink

                Me Mark. Mark hope culpa not myrmecophage.

              • Richard Olson
                Posted February 24, 2014 at 10:05 am | Permalink

                Me encounter third new word (so far) found today on WEIT.

              • Diana MacPherson
                Posted February 24, 2014 at 5:26 pm | Permalink

                Don’t throw Ant off a cliff like last time!

              • Posted February 24, 2014 at 9:04 pm | Permalink

                But he started it!


    • Posted February 24, 2014 at 10:45 am | Permalink

      In the context of the story, what would make more sense is if Judas were the most revered of all saints: he was given the task of being one of the most hated people in history, to (supposedly, if you buy the stuff) betray an innocent man, etc. And it, too, took a life (his). So why is he not regarded as a hero? This came up briefly, after a fashion, in M*A*S*H; see “Quo Vadis, Captain Chandler?”.

  20. Jimbo
    Posted February 23, 2014 at 7:24 pm | Permalink

    Maybe I’m missing something about math in the real world. Didn’t mathematical “laws” of Nature come from analyzing observational data to reveal an underlying order? Isn’t F=m*a the truth about force when “real world” perturbations (e.g. friction) are eliminated?
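    That “perturbations eliminated” idea can be sketched directly: model the measured acceleration of a block with kinetic friction included, and watch it converge to the ideal F = ma prediction as the friction coefficient goes to zero (a toy illustration with made-up numbers, not a real experiment):

    ```python
    def ideal_acceleration(force, mass):
        """Newton's second law with no perturbations: a = F / m."""
        return force / mass

    def measured_acceleration(force, mass, mu, g=9.81):
        """Acceleration of a block pushed along a horizontal surface,
        with kinetic friction (mu * m * g) opposing the applied force."""
        net = force - mu * mass * g
        return max(net, 0.0) / mass

    force, mass = 10.0, 2.0          # newtons, kilograms
    ideal = ideal_acceleration(force, mass)

    # As the friction coefficient shrinks, the measurement approaches the law.
    for mu in (0.3, 0.03, 0.0):
        print(mu, measured_acceleration(force, mass, mu))
    ```

    The law is recovered exactly only in the idealized limit, which is one way of reading the claim that F = ma is “the truth about force” once the real-world perturbations are stripped away.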

    • Posted February 24, 2014 at 10:47 am | Permalink

      There’s a difference between mathematics, and mathematics used as a tool for describing, explaining, etc. Newton’s laws are *not* mathematics, because they refer to facts (properties and things), not to formal objects.

      (Reference, for those who care, and to Massimo Pigliucci, who should but may not, is to vol. 1-2 of Bunge, _Treatise on Basic Philosophy_.)

  21. Alan Feuerbacher
    Posted February 23, 2014 at 8:50 pm | Permalink

    In the NY Times article Jerry referenced, Alvin Plantinga’s arguments basically rest on the standard straw men of standard Sophisticated Apologetics that have been debunked with standard debunking technology. It’s astonishing that Plantinga and his ilk might actually think that such nonsense would work with anyone with a smidge more thinking ability than the standard Christian sheep with whom they’re used to dealing. But I’m sure they don’t, and I’m equally sure that they know they’re spouting lies.

  22. Kevin
    Posted February 23, 2014 at 10:20 pm | Permalink

    Science rules but it is not everything. Knowledge is.

    Arbol is Spanish for tree. That’s a useful sentence. Is it science, or does it require science to parse? A little bit, but not entirely.

    Or Dennett would sometimes say: you can’t find love in the dictionary (a deepity, he would suggest). There is poetic meaning there, even insightful, but where is all the science in it? I haven’t seen enough to suggest science has everything, but it’s still the best thing that ever happened to human beings.

    • Scientifik
      Posted February 24, 2014 at 7:32 am | Permalink

      “I haven’t seen enough to suggest science has everything, but it’s still the best thing that ever happened to human beings.”

      What do you mean by “has everything”?

      • Kevin
        Posted February 24, 2014 at 10:10 am | Permalink

        I meant to say: “I have not seen enough to suggest that everything can be reduced to science.” For example, my desire to mash up some heavy metal with bluegrass on a ukulele is far from science. My wife would call it pained punishment for the cats. Or: I am going to think of a way to cool a room-temperature brick down to 10 K using only transducers. The truth is, it is probably impossible and just wrong, but my attempts to think about solutions, which ultimately will require scientific methods, may lead to some new knowledge; however, my motivating enterprise does not qualify as science, just an idea, albeit one more real than Hamlet.

        • Scientifik
          Posted February 24, 2014 at 12:24 pm | Permalink

          “For example, my desire to mash up some heavy metal with bluegrass on a ukulele is far from science. ”

          Experimentation is a fundamental part of the scientific process. 🙂

  23. Posted February 23, 2014 at 10:33 pm | Permalink

    Every baby knows the scientific method:

  24. Michael Waterhouse
    Posted February 24, 2014 at 12:50 am | Permalink

    I want to read Pigliucci’s paper but it is not, to me, a free download: Wiley Online requires $35 for 24-hour access. The sickening absurdity of such an amount for short access to one article is worthy of comment, but that is not the point. I would like to evaluate the validity of this public criticism of Pigliucci’s paper by reading it.
    Either I’m an idiot for being unable to work out how to access the article, or the claim that it is a free download is wrong and public criticism of a restricted article is dubious.

  25. Michael Waterhouse
    Posted February 24, 2014 at 1:00 am | Permalink

    I must be an idiot. I found a different link and managed to download the article. Please disregard previous comment.

One Trackback/Pingback

  1. […] nature of the universe. I actually read an article just yesterday over at Jerry Coyne’s blog WEIT which brought up some clarity for me and perhaps could be helpful for you as […]
