Is falsifiability a good criterion for a scientific theory?

UPDATE: In a comment below this post, reader Peter Beattie calls attention to a short summary of Popper’s falsifiability criterion that he thinks will be helpful to readers who want the nuances of Popper’s views.

_____________

This will be short.  As many of us know, Karl Popper demarcated a scientific theory from a nonscientific one because the former is falsifiable—there are experiments or observations that can be done to disprove it.   The “theory” of evolution, for example, could be disproven if we regularly found well-dated fossils out of the proper order (like mammals in the Devonian, for instance), if species didn’t have genetic variation to respond to selection, or if we often found “adaptations” in members of one species that were useful only for another species (e.g., a special nipple on a female mole that was only used for suckling mice).

I’m told that falsification is naive as a criterion for good science, and that scientists no longer accept or use that as a criterion.  Some assert that, in contrast, a good scientific theory is one that best explains the data we have.  But it seems to me that this is equivalent to falsifiability, for a theory that best explains the data we have could be shown not to explain the data we have.

At any rate, putting this musing aside, my question is this: is there any scientific fact or theory that is widely accepted despite the fact that it is not in principle capable of being falsified? I am referring to real theories here, not possible theories.

It is my impression, for instance, that string theory in physics isn’t widely accepted as true simply because we haven’t found a way to test it—to see whether its predictions are borne out or not. And I often hear—Anthony Grayling and Hitchens both said this, I believe—that a theory that can explain everything explains nothing (i.e., God constructed the process of evolution). In other words, a theory that can’t be shown wrong is useless. This presupposes falsifiability as a criterion for scientific truth.

Enlighten me here, but note that this discussion deals with the philosophy of science, so if you think that endeavor is useless you shouldn’t be responding!

231 Comments

  1. Posted April 29, 2012 at 8:51 am | Permalink

    I think falsifiability is an important defining feature of science, but it is not the only one. In other words, the mere fact your idea is falsifiable does not make it scientific.

    But I too have heard that falsifiability is not the be-all and end-all of science; most of that, though, centres on the claim that other criteria should also be involved, not that falsifiability should be dropped. My thoughts above reconcile the two views, imo.

    The most convincing alternative to falsifiability as a measure of science is pragmatism, that ideas be judged on their results. However, I don’t see how that is mutually exclusive with falsifiability.

    • Posted April 29, 2012 at 12:51 pm | Permalink

      Falsification is more a special case of simplicity… a criterion which Popper also noted was in use. He just got the ultimate philosophical justification wrong.

      • Torbjörn Larsson, OM
        Posted April 29, 2012 at 1:09 pm | Permalink

        If it is describing testability it isn’t simplicity, but whether it works at all. However that may be, I think Benton nailed it.

    • Posted April 30, 2012 at 2:36 am | Permalink

      Are there any scientific claims (outside mathematics) that are not falsifiable but are nonetheless provable? Proven? True?

      One thinks of astronomical processes distant in time and space – but doesn’t that just underline the fact that they, like all scientific claims, are provisional, and that while falsifiability may not be possible or practicable, it remains the gold standard to which all claims aspire?

      • Posted April 30, 2012 at 5:03 am | Permalink

        Arguably the most general of hypotheses (see my earlier posts on metaphysics and science) cannot be directly refuted; only refuted by having unfruitful consequences.

        This is why I would say things like automata theory count as metaphysics (what could refute *that*?) and yet automata theory is clearly a factual field (and, as it happens, a fruitful one).

  2. Mike W
    Posted April 29, 2012 at 8:52 am | Permalink

    Re string theory et al, I recommend that you check in to the Not Even Wrong blog from time to time: http://www.math.columbia.edu/~woit/wordpress/

    • Jason
      Posted April 29, 2012 at 11:08 am | Permalink

      It’s worth pointing out that Woit’s views on this issue are pretty far outside the mainstream physics community. While there are some prominent physicists who remain skeptical, the majority of theorists still view string theory as the most promising approach to an extraordinarily difficult problem. The lack of testability to date is not due to a failure of the theory, but to the fact that the theory only differs from well-understood theories at energy levels well beyond what we now know how to study. And the mathematical relationships between string theory and more well-understood physical theories suggest that something about it must be on the right track.

      See for example the first answer here:
      http://www.quora.com/Physics/If-a-theory-doesnt-make-any-testable-predictions-what-good-is-it

      • Posted April 29, 2012 at 12:55 pm | Permalink

        I’d note that there are at least two senses of “testable” in use. There’s the more conventional “experimental” sense: having command of the resources to set up an experiment giving nice, clean, distinct yes/no answers. However, testing can also refer to a mathematical sense: evaluating the properties of the hypotheses against whatever experimental results the available resources make accessible; this is implicitly part of the former.

        • Torbjörn Larsson, OM
          Posted April 29, 2012 at 1:10 pm | Permalink

          +1.

      • Torbjörn Larsson, OM
        Posted April 29, 2012 at 1:20 pm | Permalink

        I think it is more than that. When string theory was proposed, it was used to predict flux tubes of the strong force. It has also predicted black hole entropy.

        The problem in both cases is that simpler theories have done the same. So it has passed testing, but not unequivocal testing. That is, you have to complement it with comparisons between theories, because you have competitors for the same predictions.

        Related to this, I think the unequivocal testing is a better concept than “direct” observation. We can never observe reality “directly”.

        But, it seems, by some poorly understood process we can observe reality unequivocally.

        This is how atoms became accepted long before we could observe ions “directly”, blinking in ion traps by way of photonic pumping: the unequivocal “indirect” observation of chemistry’s stoichiometric ratios, combined with models of Brownian motion.

        The problem for string theory is that it still has contenders where it can be tested, and it can’t yet be tested where it has none.

    • Marta
      Posted April 29, 2012 at 3:48 pm | Permalink

      Thanks so much for this link.

  3. Posted April 29, 2012 at 8:53 am | Permalink

    “I’m told that falsification is naive as a criterion for good science, and that scientists no longer accept or use that as a criterion. Some assert that, in contrast, a good scientific theory is one that best explains the data we have.”

    Yes, some think Popper’s falsifiability criterion for how good science is conducted doesn’t quite do the trick. An alternative is a Bayesian approach to evaluating evidence:

    “Support for Bayes’ theorem as a model of scientific inference is bolstered by its ability to formally capture many features of scientific practice, such as confirmation and disconfirmation by logical entailment, i.e, the hypothetico-deductive model of scientific explanation, the confirmatory effect of surprising evidence, and the differential effect of positive and negative evidence (for further discussion of how a Bayesian framework elucidates common scientific reasoning practices, see Howson and Urbach, 1993).”

    From “Can science test supernatural worldviews?” by Yonatan Fishman, Department of Neurology, Albert Einstein College of Medicine, at http://www.naturalism.org/Can%20Science%20Test%20Supernatural%20Worldviews-%20Final%20Author's%20Copy%20(Fishman%202007).pdf
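    The “confirmatory effect of surprising evidence” that Fishman mentions falls straight out of Bayes’ theorem. A toy sketch (all the numbers here are invented for illustration):

```python
def posterior(prior, likelihood, p_evidence):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / p_evidence

prior = 0.5       # initial credence in hypothesis H
likelihood = 0.9  # P(E|H): H strongly predicts the evidence E

# Surprising evidence (low prior probability P(E)) confirms H more
# strongly than evidence we expected anyway.
print(posterior(prior, likelihood, p_evidence=0.5))   # 0.9
print(posterior(prior, likelihood, p_evidence=0.85))  # ~0.53
```

    The same machinery disconfirms: if H entails E and E fails to occur, the likelihood term goes to zero and the posterior collapses — falsification recovered as a limiting case.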

    • BSimon
      Posted April 29, 2012 at 9:41 am | Permalink

      Yes, and a great book is Jaynes, “Probability Theory: The Logic of Science”. As I see it, the big problem with falsifiability is that one can never be 100% sure of falsification – or of anything else. Maybe there was a problem with the experiment – who knows. But the Bayesian approach is on a solid foundation.

    • Torbjörn Larsson, OM
      Posted April 29, 2012 at 12:11 pm | Permalink

      I have never heard anyone claim that the use of Bayes’ theorem is a criterion for a theory, though. You can use it on anything.

      As I understand it, it is a useful tool for comparing hypotheses, extending and easing the use of parsimony. It can’t tell you whether a theory is valid or not, however (unless you substitute Bayesian testing for statistical testing).

      I think it has also been evident in practical use that model-less methods are GIGO. If you try to optimize the parameter space at the same time as you optimize parameters you won’t get anything sensible out of it.

      This is akin to what happens in statistical modeling, where you can shove degrees of freedom in to improve fit. Eventually it will not have anything to do with the actual process.

      Testing works every time – it has to, or the theory won’t go anywhere. Bayesian approaches do so-so in the papers, certainly not used anywhere close to “100 %”.
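      The “shoving in degrees of freedom” point can be made concrete with interpolation: give a model as many free parameters as data points and it “explains” the training data perfectly while losing all contact with the underlying process. A pure-Python sketch (the data are invented for illustration):

```python
def interpolating_fit(xs, ys):
    """Lagrange interpolation: a degree-(n-1) polynomial through all
    n points -- one free parameter per data point."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Noisy samples of a simple process (roughly y = x).
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 0.9, 2.2, 2.8, 4.1, 5.0]

p = interpolating_fit(xs, ys)
print(round(p(2.0), 3))  # hits the training point exactly: 2.2
print(round(p(6.0), 3))  # extrapolation swings wildly off the trend
```

      The interpolant reproduces every training point exactly, yet at x = 6 it lands near −3.2 where the simple trend suggests about 6: a perfect fit that says nothing about the actual process.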

      • Posted April 29, 2012 at 12:59 pm | Permalink

        You might find the paper “Minimum Description Length Induction, Bayesianism and Kolmogorov Complexity” worth a look.

        Ultimately… no, no one uses it directly for practical work, because converting the theorem to an algorithm requires being able to solve the halting problem. That aside, the conventional approach falls out as a special case.

        (The analog of sticking in degrees of freedom doesn’t help here, BTW; they add to the cost of the model.)

    • Scientismist
      Posted April 29, 2012 at 1:24 pm | Permalink

      Thank you, Tom Clark. With 100 comments, I just did a search for the word “Bayes” to see if anyone had pointed out that Bayesian analysis is what we all do, in fact, to judge the validity of a theory. It’s just that we don’t always do it as rigorously as we might. A non-falsifiable theory is just one under which no observation, particular or otherwise, has any chance of being unexpected (everything is expected with certainty), and so observation becomes futile. Enough said.
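      That can be put in Bayes-factor terms (a hedged sketch; the numbers are invented): a theory that spreads its expectation evenly over every possible observation can never gain or lose ground, because its likelihoods never distinguish one outcome from another.

```python
# Two mutually exclusive observations, A and not-A.
falsifiable  = {"A": 0.95, "not_A": 0.05}  # sticks its neck out
explains_all = {"A": 0.50, "not_A": 0.50}  # "expects" both equally

def bayes_factor(obs):
    """Evidence for the risky theory over the explain-everything one."""
    return falsifiable[obs] / explains_all[obs]

print(bayes_factor("A"))      # 1.9: observing A counts for it
print(bayes_factor("not_A"))  # 0.1: observing not-A counts against it
# The flat theory's likelihoods are identical for every outcome, so
# no observation can ever raise or lower its standing.
```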

  4. Posted April 29, 2012 at 8:54 am | Permalink

    Why would a theory that explains everything really explain nothing? Isn’t “everything” what physicists are after in trying to unite grand unified theories with gravity?

    I think the reason “god” explains nothing is precisely because it doesn’t do any explaining. “God” is simply a placeholder. I don’t think the same could be said of those other theories.

    But I’m not a physicist. Or any kind of scientist, really.

    • Posted April 29, 2012 at 9:09 am | Permalink

      Physicists (or scientists in general) are looking to explain everything that is true. Everything that really is. What Hitchens et al. mean is that a theory cannot explain every possible thing whatsoever.

      For example, the theory of evolution (in addition to a few facts about what life forms seem to have existed when) explains why there are no fossil rabbits in the precambrian – they hadn’t evolved yet. The theory of evolution cannot also explain the hypothetical existence of fossil rabbits in the precambrian, you see? Evolution is a theory about the way things work, and based on what we know of life on Earth, it’s impossible for rabbits to have existed during the precambrian. So if we found that they did exist, Evolution would be unable to explain it. A theory that could explain both their existence and non-existence would be useless, because the theory doesn’t tell us anything.

      • Posted April 29, 2012 at 9:21 am | Permalink

        Yes. I was about to comment on your post that we were using different definitions of “everything.”

        • bernardhurley
          Posted April 29, 2012 at 9:26 am | Permalink

          In the philosophical slogan “A theory that explains everything explains nothing,” the word “everything” means something like “every possibility.” Incidentally it would be interesting to know where the idea originates.

          • Posted April 29, 2012 at 1:37 pm | Permalink

            Some physics theories are actually broad explanatory frameworks that can be stretched to explain almost anything. Newton’s laws of motion are a bit like that. They are not falsifiable, because we can always invent new forces to make them fit data. They are discarded when the forces get so complicated as to resemble Ptolemaic epicycles, and when a competing framework provides simpler and more general explanations.

          • Posted April 30, 2012 at 3:55 pm | Permalink

            “A theory that explains everything explains nothing.”

            But you can get a Universe from nothing! So… 

            ;-)

            /@

          • Posted April 30, 2012 at 3:59 pm | Permalink

            Facetious comments aside, Grayling seems to attribute it to Popper (although he may be paraphrasing): “he seems to forget Popper’s killer point, namely, a theory that explains everything explains nothing.”

            /@

      • David Evans
        Posted April 30, 2012 at 4:15 am | Permalink

        My preferred theory for fossil rabbits in the precambrian would be an idiot with a time machine.

    • Posted April 29, 2012 at 1:04 pm | Permalink

      Imagine a “theory” that gives a set of instructions for translating a natural number into a universe’s worth of experimental results; and then says that our universe is described by one of the natural numbers… but doesn’t say WHICH. Not very useful.

      God is similarly not very useful. Sure, your basic omnipotent God can do anything; but that means there’s no reason why he couldn’t have made the sky “green” instead of “blue”… which means “goddidit” doesn’t allow inferring that the sky is blue, and thus does not suffice to “explain” it.
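      The natural-number “theory” above can be caricatured in a few lines (a toy sketch; the bit strings stand in for a universe’s worth of experimental results):

```python
import random

def grand_theory(n):
    """A hypothetical 'theory' mapping each natural number n to a
    complete set of experimental results, encoded as an 8-bit string."""
    rng = random.Random(n)  # deterministic per n
    return "".join(rng.choice("01") for _ in range(8))

# Without being told WHICH n is ours, the theory licenses a huge
# spread of mutually incompatible result sets -- it rules out
# (almost) nothing, and so predicts nothing.
outcomes = {grand_theory(n) for n in range(5000)}
print(len(outcomes), "distinct 'universes' licensed")
```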

  5. Leinil
    Posted April 29, 2012 at 8:56 am | Permalink

    Massimo Pigliucci asserts this in Nonsense on Stilts but never expounds on it, only saying “Philosophers today no longer…”.

    Would love to be enlightened about this.

  6. Posted April 29, 2012 at 8:57 am | Permalink

    Hi Jerry,

    I was fortunate enough to study under David Miller, Karl Popper’s research assistant.

    Additionally, and perhaps ‘therefore’, I am one of the people who still take Popper seriously.

    As you say, evolution could be falsified by ‘rabbits in the precambrian’ (as JBS Haldane put it).

    String Theory might, in part, be falsifiable. Yet the means to achieve this are presently beyond our capabilities. So there is an interesting discussion to be had about the difference between ‘logically unfalsifiable’ and ‘presently unfalsifiable’. From the vantage of our ignorance, how can we tell the difference?

    As for your request for examples of unfalsifiable theories presently accepted in scientific discourse: there are numerous unfalsifiable assumptions that are carried along with falsifiable ones.

    Let’s consider some early versions:

    ‘Forces make objects accelerate.’
    ‘Space is Euclidean.’

    All of those ontological ideas of ‘what the universe is really like’ are often just assumptions tagged onto a mathematics that computes predictions. Of course, there’s a lot more philosophy of science that branches from that consideration.

    Then, there are the methodological assumptions behind most scientific inquiry (that there are universal laws of the universe, for example).

    Regards,

    James Sheils

    • Tyro
      Posted April 29, 2012 at 9:36 am | Permalink

      Is the “space is Euclidean” really a part of modern physics and is it really not falsifiable? I thought that for decades there was an active debate over the shape of space and this was eventually answered based on detailed studies of the CMB by the WMAP.

      It might have been a difficult question to answer, but I don’t think it was ever an assumption or unfalsifiable.

      • Posted April 29, 2012 at 9:50 am | Permalink

        Well, I think Einstein demonstrated that assuming space isn’t Euclidean leads to a more coherent physics. Yet, it doesn’t seem to be something we can ever disprove.

        Is light bending around objects through Euclidean space, or is it traversing straight lines in curved space?

        • Tyro
          Posted April 29, 2012 at 10:17 am | Permalink

          As I mentioned, the measurements of the structure of the cosmic microwave background using WMAP can tell us the shape of space. The way it does this is fairly clever and probably too complicated to lay out in a comment, but it isn’t just musing; there are concrete measurements.

          • Posted April 29, 2012 at 10:33 am | Permalink

            Please explain! I am of the understanding that WMAP is interpreted once one assumes a GR account of the Big Bang. And this *pre-supposes* that space is not Euclidean!

            • Tyro
              Posted April 29, 2012 at 10:58 am | Permalink

              I am of the understanding that WMAP is interpreted once one assumes a GR account of the Big Bang. And this *pre-supposes* that space is not Euclidean!

              I strongly disagree with your use of “assume” and “pre-supposes” :-)

              Okay, a summary…

              1. GR was proposed and supported through many separate lines of evidence.
              2. Big Bang also was proposed and supported through other lines of evidence. One consequence was the cosmic microwave background radiation.
              3. During the early Big Bang, there were several phase transitions. For the earliest phases, light would be absorbed and the universe was opaque but when it cooled sufficiently light could finally propagate, giving us the CMB. Early random irregularities were expanded during inflation but gravity and other forces played a role. The timing of the CMB and our theories (supported by other evidence) lets us predict the sizes of the potential irregularities.
              4. Cosmologists noted that the observed size of these irregularities would vary with the shape of space, as light could converge or diverge.
              5. The WMAP measured the CMB with high sensitivity, allowing us to compare observations with the predictions of the different geometries of space.

              In the end, the WMAP had shown that the universe is flat on a large scale which was a bit of a surprise to many people and led to the accelerating universe and dark energy. (I hope I’m close to accurate with these summaries – I’m not a cosmologist!)

              Having said all of that, we know that the shape of space is not flat on the small scale. We have amazing images of gravitational lensing and other effects of non-Euclidean space. Pretty wonderful. Ultimately, whatever the shape of space might be on the large scale, we know for certain that it isn’t Euclidean on the “small” scale (where small is anywhere from the size of planetary systems to galactic-clusters). Not an assumption, a direct observation.

              • Posted April 29, 2012 at 11:10 am | Permalink

                With respect, I don’t think we’ll get much further with this discussion.

                You speak of ‘evidence’ and say theories are ‘supported’. If you mean what I think you do, then you disagree with Popper (and me) in a very fundamental way about the logic of scientific discovery.

                Thanks for your summary – I was aware of that stuff (did you get it from a Lawrence Krauss video?). Yet I am not convinced that any of it disproves Euclidean space. Rather, all the different physics is threaded together more coherently if we suppose General Relativity (and non-Euclidean space).

                As I said earlier – how can you tell if space is curved and light is traveling in geodesics; or that space is flat and light is bending through it…?

              • Tyro
                Posted April 29, 2012 at 11:21 am | Permalink

                Are you talking about how to distinguish between two theories which have different descriptions but identical predictions?

                I am more familiar with these in QM and not GR so I can’t comment much on your example, sorry. In QM, I think we ultimately have to accept that our theories aren’t so much about telling us what’s “really” going on, they’re about giving us theories which predict or describe the world. In that sense, two theories which result in the same predictions are equivalent even if they start from different ideas.

              • Posted April 29, 2012 at 11:42 am | Permalink

                Yes, I suppose I am! What I mean is that our data-gathering, no matter how broad, always makes assumptions about the space (and time) between the data-gatherers.

                There are no doubt other ways of describing the universe – yet they are probably less coherent. There are perhaps rules that fit the data just as well yet assume Euclidean space. They’d probably need to be fractured – they wouldn’t apply universally.

                And the criterion of finding universal laws is a methodological assumption (something else we presuppose before doing physics!).

              • Tyro
                Posted April 29, 2012 at 11:52 am | Permalink

                I would have said that we can describe the space as if it were Euclidean, not that we are making assumptions. I think that’s a little misleading.

                Is this somewhat like the way we can describe the solar system as if the earth were at the centre? I thought the maths can work out but even so, I would not say that a heliocentric solar system is an unfalsifiable assumption. I can also see the opposite side, where one model makes more sense than another. (This seems especially relevant in QM.) Perhaps this is a place for genuine philosophy :)

            • bernardhurley
              Posted April 29, 2012 at 2:01 pm | Permalink

              Interestingly a lot of cosmologists think that space is flat on a large enough scale, with local bumps in it.

        • Dave Ricks
          Posted April 29, 2012 at 11:38 pm | Permalink

          “Is light bending around objects through Euclidean space, or is it traversing straight lines in curved space?”

          This question has an answer: against bending in Euclidean space, and in favor of straight lines through curved space. My understanding is that the Einstein Field Equations are impossible to solve using 3 Euclidean spatial dimensions plus 1 temporal dimension (except for a trivial solution like everything being zero). Instead, the EFE can be solved using 3 non-Euclidean (curved) spatial dimensions plus 1 temporal dimension. If I had more time, I’d review the Schwarzschild coordinates as the first non-trivial solution.

          And if the EFE define the curvature of space-time, then the curvature can be measured. And nonzero values of that measurement would show how wrong Euclidean geometry is.

          You’re speculating that a Euclidean kludge could exist, but you can’t provide one for us to evaluate. While you’re speculating: what if Napoleon had a B-52 at the Battle of Waterloo? That would be the analogous “philosophy of history”.

          • Posted April 30, 2012 at 2:42 am | Permalink

            Hi Dave,

            Your words suggest you know quite a lot about GR.

            What I have been saying about space is that we must decide which variables affect geometry before doing any experiments.

            With GR, Einstein was found in better agreement with experiment regarding Eddington’s eclipse observations. Yet a flat geometric solution could probably be derived, if we didn’t use SR. What I am saying is that this new assumption brings coherence to physics, yet is not ‘proved’ by any means.

            Your last paragraph sounds like the dismissive tone Krauss has been taking with philosophy and philosophers.

            Yet, if you do know about GR, I’m sure you’ll be familiar with the great influence Ernst Mach had on Einstein. It was Mach who helped Einstein realize that spatial geometry was an assumption of Newtonian Mechanics, and one that could be reformulated.

            • Posted April 30, 2012 at 2:50 am | Permalink

              Sorry, read: “What I am saying is that this new assumption brings coherence to physics, yet is *not* ‘proved’, by any means (or falsifiable).”

        • Posted April 30, 2012 at 4:07 pm | Permalink

          Is light bending around objects through Euclidean space, or is it traversing straight lines in curved space?

          Whichever makes the mathematics easier for a given problem.

          There’s a great story recounted about Elizabeth Anscombe saying to Wittgenstein, that she can “understand why people thought that the sun revolves around the earth.” Ludwig asks, “why?” Anscombe says, “Well, it looks that way.” Wittgenstein responds, “And how would it look if the earth revolved around the sun?”

          /@

          • Posted May 2, 2012 at 2:19 am | Permalink

            If only light had joints so we could see if they bent.

            • Posted May 2, 2012 at 3:03 am | Permalink

              :-D

              Then there’s the Buddhist interpretation: The mind bends.

              /@

  7. Posted April 29, 2012 at 9:00 am | Permalink

    I don’t have an answer to Jerry’s question, but rather a few thoughts on the debate in general.

    It seems to me that falsifiability is a good criterion. The alternative is that a theory “explains” the data, but what does “explain” mean? Post-hoc explanations, a la Freudian psychoanalysis, can be created to explain anything, so one certainly cannot suggest that to “explain” in this sense is of any use to science. What, then, are the criteria for an explanation?

    It seems to me that good theories consist of if/then statements. “If my theory is true, then we should expect these results.”
    If T, then R1.
    If T, then R2.
    If T, then R3.
    Etc.

    You’ll notice that these are what we usually call “predictions.”

    These predictions are falsifiable, according to simple logic. If the consequent of an if/then statement is false, then the antecedent is also false. If we fail to get the expected result (barring the influence of confounding variables or experimental error), then our theory is wrong.

    This also explains what Hitchens and Grayling said (and I believe Popper meant it as well) regarding a theory explaining everything explaining nothing. If you can come up with post-hoc explanations for any result (again, a la Freud), that means you can explain both a result and its opposite, or A and ~A. Which gives us the following logic:
    If T, then A.
    If T, then ~A.

    Which means if your theory is true, we have a contradiction, because both A and ~A follow from it! So, literally, a theory that explains everything (if this is what you take “explain” to mean), explains nothing. It is simply nonsense.
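    The final step checks out mechanically. A brute-force truth-table sketch:

```python
from itertools import product

def implies(p, q):
    """Material conditional: 'if p then q' is false only when p holds and q fails."""
    return (not p) or q

# Every assignment satisfying both "If T, then A" and "If T, then not A"
# makes T false: a theory that entails a result and its opposite
# cannot be true.
survivors = [T for T, A in product([True, False], repeat=2)
             if implies(T, A) and implies(T, not A)]
print(all(not T for T in survivors))  # True
```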

    • Posted April 29, 2012 at 9:34 am | Permalink

      In the spirit of friendly debate, let me ask what the consequences for this philosophy would be if we acknowledge that not every T has exactly one A, and the inverse.

      For (an admittedly absurd) example:

      “If I am in a car, then the car is red” might be shown to be true. But I could also be in a car that is not red. Or I could be in a red vehicle that is not a car.

      It seems to me that when we are proposing hypotheses at the edge of our understanding, it would be very difficult to keep each T/A construct at a 1:1 ratio.

      • Posted April 29, 2012 at 6:34 pm | Permalink

        I’m not sure I understand you. If “If I am in a car, then the car is red” is true, then it would be impossible for you to be in a car that isn’t red.

        Perhaps I’m confused because I don’t know what you mean when you speak of multiple A’s. “A” is a symbolic stand-in for some proposition. Different propositions would be assigned different letters of the alphabet. ~A is simply the logical negation of A.

        • Posted April 29, 2012 at 7:52 pm | Permalink

          Pardon the unclear language. I meant “if one finds oneself in a car, then the car will be red” in the abstract. This if/then doesn’t allow for the possibility that one could, in fact, find oneself in a car, but not a red one. Like I said, this is a silly example; I’m sure there are better examples of this kind of if/then. In fact, somewhere else on this thread someone suggested that a rabbit in the precambrian might demonstrate something funky about time rather than disproving evolution.

          All I’m saying is that it seems to me falsifiability is a tricky thing when your antecedent could have various and possibly mutually exclusive consequents.

          • Posted April 30, 2012 at 7:32 am | Permalink

            Ok… I think you misunderstand how this works. You’re using an example of an If/Then statement that isn’t true. The whole point of a theory is to logically entail something that is true. If you’re successful, then “If T, then A” is true. When the if/then statement is false, that simply means your theory is wrong.

            • Posted April 30, 2012 at 10:09 am | Permalink

              I wouldn’t be surprised if I’m misunderstanding something…but I do get the point you make about good theories. I almost concluded my last comment by saying that I suppose the answer is that such open-ended if/thens are simply bad/useless theories. More work would need to be done to refine the theory until T would in fact necessitate A.

              But then I thought: “there seems to be a problem with ‘necessitate’.” How would we know we’ve reached absolute necessity? Your own example of a good if/then: If a rabbit in the Precambrian, then not evolution. But abb3w’s comment a little south of here tried to show that a rabbit in the Precambrian would not necessitate ‘not evolution’.

              In more mundane, everyday matters I think our approach to truth-seeking is amenable to falsificationism. But when charting new scientific territory, how would you come up with nice, neat if/thens?

              • Posted April 30, 2012 at 10:56 am | Permalink

                Right, good question.

                The simple statement “If T, then R” compresses a lot of statements, assuming T stands for some theory. The long version for why rabbits cannot exist in the precambrian is something more like:

                1. Animals evolved over billions of years from non-living matter (they did not come from anywhere else).
                2. Evolution takes place via gradual changes.
                3. In order for an animal, like a rabbit, to evolve, there must have been very similar animals that lived before it (such that gradual changes could turn those animals’ descendants into rabbits).
                4. There are no rabbit precursors in or before the precambrian. In fact, the animals that most likely are the rabbit precursors did not evolve until well after the precambrian.
                5. Therefore, we will not find evidence of rabbits living in the precambrian.

                Each of these statements 1-4 has a host of facts that count as evidence for it, so if any of these were wrong, we would have to look at a lot of evidence to decide where we made a mistake. If we found rabbits in the precambrian, we would have to figure out which of these was wrong. Perhaps we’re wrong about the evolution of rabbits. Maybe they aren’t actually mammals (and are thus descended from an entirely different, and perhaps earlier, lineage). Of course, in the precambrian, no animals like the kind we have today existed, so it would be hard to think of anything back then that the rabbit could have evolved from. So, in this case, it would be most parsimonious to say that the theory of evolution is just fine, but #5 above is wrong – because it doesn’t take into account time travel.

                So I guess what you’re saying is that it’s hard to know that we’re 100% correct in all of our if/then statements, and I’d say that’s true. But the point is that we do make if/then statements, and the true ones stand. The false ones, we modify.
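                A toy sketch of the point above (illustrative only; the premise labels are my own paraphrases, not a real inference engine): a prediction follows from a conjunction of premises, so a failed prediction refutes only the conjunction, not any one conjunct.

```python
# Toy illustration: the prediction "no rabbits in the Precambrian" follows
# from a conjunction of premises; a Precambrian rabbit would tell us only
# that *at least one* premise is false, not which one.

premises = {
    "animals evolved from non-living matter over billions of years": True,
    "evolution proceeds by gradual changes": True,
    "a rabbit requires very similar precursor animals before it": True,
    "no rabbit precursors exist in or before the Precambrian": True,
}

# The prediction holds only if every premise does.
prediction_holds = all(premises.values())

def diagnose(observed_rabbit):
    """Modus tollens over the conjunction: the observation refutes
    the conjunction, leaving open which conjunct to revise."""
    if observed_rabbit and prediction_holds:
        return "at least one premise is false -- the data alone can't say which"
    return "no conflict"

print(diagnose(observed_rabbit=False))  # "no conflict"
```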

    • Posted April 29, 2012 at 1:09 pm | Permalink

      For the math on “explain”, see “Minimum Description Length Induction, Bayesianism and Kolmogorov Complexity” by Vitanyi and Li.

      Essentially, in an information theory sense an explanation corresponds to a set of program rules plus an input tape to re-create and output the set of experimental data.

      The post-facto bit ultimately gets taken care of not by falsification, but by parsimony. Unless you find a unifying rule (which in turn is thus a significantly new hypothesis), post-facto additions to the input tape very quickly increase information cost, and render the hypothesis non-competitive.
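      The parsimony point can be made concrete with a toy calculation (purely illustrative; character counts are a crude stand-in for Kolmogorov complexity, and the "rule" strings are assumed labels):

```python
# Toy MDL comparison: a "theory" is a rule plus whatever exception data
# (the input tape) it needs to reproduce the observations.
# Total description cost = length of the rule + length of the exception list.

observations = [2 * n for n in range(100)]   # data generated by a simple law
anomalies = {13: 27, 57: 0}                  # two post-facto "patches"
patched = [anomalies.get(i, x) for i, x in enumerate(observations)]

def cost(rule_text, exceptions):
    # crude proxy: characters to state the rule plus characters to list exceptions
    return len(rule_text) + len(repr(exceptions))

unified  = cost("x_n = 2*n", {})         # clean law, empty tape
ad_hoc   = cost("x_n = 2*n", anomalies)  # same law plus a patch list
verbatim = cost("", patched)             # no law: just recite the data

print(unified, ad_hoc, verbatim)  # each patch adds cost; verbatim is worst
```

      Each ad hoc addition lengthens the input tape, so a patched-up hypothesis quickly becomes non-competitive against one with a unifying rule.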

  8. Posted April 29, 2012 at 9:07 am | Permalink

    The “theory” of evolution .. could be disproven if we regularly found well-dated fossils out of the proper order

    Regularly? Wouldn’t once be enough?

    See: http://en.wikipedia.org/wiki/Precambrian_rabbit

    • jaxkayaker
      Posted April 29, 2012 at 9:19 am | Permalink

      In principle, once would be enough, but I think this is what philosophers would call naive falsificationism, which they reject. I suppose it’s to allow for human error or fraud or minor misunderstandings in the theory needing revision.

      • Posted April 30, 2012 at 4:13 pm | Permalink

        So, falsifiability (and validation) depend on consistent, repeatable results. Consider superluminal neutrinos.

        /@

    • Tyro
      Posted April 29, 2012 at 9:41 am | Permalink

      Maybe a couple centuries ago this would have been enough. Today, there’s so much support that this would be filed away as an exception and evolution would go on largely unaffected.

      • Posted April 29, 2012 at 1:12 pm | Permalink

        A pre-cambrian rabbit these days would not be most simply explained as a problem with Evolution in biology, but instead support for the existence of closed time-like curves in Physics.

    • Filippo
      Posted April 30, 2012 at 3:51 am | Permalink

      In my mind is the image of walking off a cliff and my altitude not decreasing, as well as that of discovering a black swan.

      Seems that the gauntlet any theory or proposition runs should be more, not less, rigorous. Falsifiability contributes to that rigor.

      I would think that those who swear by personal, private revelation and “just so” statements would like to see falsification fall by the wayside.

  9. Posted April 29, 2012 at 9:09 am | Permalink

    James,

    I’m sure I’m misunderstanding, but I’m a little confused when you say that “forces make objects accelerate” is non-falsifiable. Surely applying a force without a resistant force to an object and finding that it did not accelerate would give a disproving example of that statement.

    And the statement that space is euclidean is not only falsifiable but it has been falsified using the principles of general relativity, in much the same way one proves the earth isn’t flat by measuring the angles of a large triangle very precisely and discovering that they add up to more than 180 degrees.
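    That triangle test can be sketched numerically (a toy calculation I'm adding for illustration; vertices are unit vectors on a sphere, and the angle at each vertex is the angle between the tangent directions of the two great-circle arcs meeting there):

```python
from math import acos, degrees

def angle_sum(A, B, C):
    """Sum of the interior angles (in degrees) of the spherical
    triangle with vertices A, B, C given as unit vectors."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def tangent(P, Q):
        # direction at P along the great circle toward Q
        t = [q - dot(P, Q) * p for p, q in zip(P, Q)]
        n = dot(t, t) ** 0.5
        return [x / n for x in t]
    def vertex_angle(P, Q, R):
        return acos(dot(tangent(P, Q), tangent(P, R)))
    return degrees(vertex_angle(A, B, C)
                   + vertex_angle(B, C, A)
                   + vertex_angle(C, A, B))

# North pole plus two equator points 90° apart: three right angles.
print(angle_sum((0, 0, 1), (1, 0, 0), (0, 1, 0)))  # ≈ 270, not 180
```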

    I suspect, though, that you meant something else by what you were saying than my simplistic interpretation, so I would really like to read more about your thoughts.

    I do agree with you that the statement that there are universal laws is a bit of an assumption…though it IS predicated on inductive reasoning, I suppose. And it would be very interesting to discover that it wasn’t true, and how that might appear. What would it even mean to learn that there are other laws of nature or places where there aren’t any? Would logic or reason still even apply in any sense?

    • Posted April 29, 2012 at 9:58 am | Permalink

      Doctor Elessar,

      Forces are never directly observed, yet are inferred from motion observations. The Newtonian program was to find accelerations and try to guess the variables (distance, charge, …) and an equation that could predict the accelerations.

      So, I think that the idea of a ‘force’ is very much an unfalsifiable assumption – an axiom of Newton’s physics. Of course, Newtonian mechanics can be reformulated without forces (as Lagrangian mechanics does).

      I have replied about Euclidean space above. I’ll add that from direct visual observation, we only have two 2-dimensional views of the world. We infer (intuitively) that space is flat-3D. Other observations (measurements in experiment) seem to me to be interpreted on top of that assumption.

      Einstein showed that if you abandon that assumption, physics can be rendered more coherent. Specifically, that one could generalize Special Relativity to include gravitation.

      I don’t think he would have claimed to have falsified the hypothesis that space was Euclidean.

      Regards,

      James Sheils

      • bernardhurley
        Posted April 29, 2012 at 10:42 am | Permalink

        One of the earliest criticisms of Newtonian mechanics was that forces were occult. Newton took a strictly instrumentalist view, saying that it was enough that his theory correctly explained the motion of the celestial bodies and of the tides.

        • Posted April 29, 2012 at 10:52 am | Permalink

          Agreed! Newton took the same instrumentalist view on light. In a letter to Hooke he wrote of a disagreement about the spectrum. Hooke denied that the spectrum was as Newton supposed (I think he thought there were only three spectral colours), and thought light was some sort of wave.

          Newton thought light was corpuscular.

          Yet, Newton makes the important point that what one actually observes (the spectrum) is what really matters, and unfalsifiable assumptions about what light really *is* must yield to the data.

          • Posted April 30, 2012 at 5:10 am | Permalink

            But also note, if you read carefully, that Newton (like Galileo) was *not* an instrumentalist more generally. He does *not* say don’t bother trying to find out what light is or how gravity actually works; when he says “I do not feign hypotheses” it is (a) only for that context (he clearly makes up hypotheses elsewhere, and even says so) and (b) it means “I have no idea”, and hence he cannot begin to guess.

      • Dave Ricks
        Posted April 29, 2012 at 11:23 am | Permalink

        Forces are never directly observed? When I throw a ball, I feel a force on my hand. And a variety of sensors can measure the force associated with that acceleration. So we can define experiments to check both sides of F = m a for a test mass m.

        I think you mean that the Newtonian model of an orbit says momentum is changed by a “gravitational force” that we never directly observe while the body is in orbit. But my iPhone screen rotates in response to measurements of gravitational forces on tiny test masses.

        I vaguely recall a professor saying inertial mass and gravitational mass must be equal for Newton’s theory to work, and this point is not trivial. Maybe there’s a point to be made there.

        • Posted April 29, 2012 at 11:37 am | Permalink

          Yes – specifically, the gravitational force isn’t locally observed. This led Einstein to general relativity.

          More generally, what I mean to say is that forces as the *causes* of accelerations and deformations are never observed. Some people (high-school students to start) suppose forces are the causal explanations of mechanics, yet they are unfalsifiable assumptions.

          You can rethink the whole thing in terms of energy and Lagrangians. Works (and fails) the same way, but with different unfalsifiable assumptions.

        • bernardhurley
          Posted April 29, 2012 at 12:57 pm | Permalink

          When you throw the ball up into the air, which force do you observe, the force exerted by your hand on the ball or the reaction of the ball on your hand?

          The question of the equality of inertial mass and gravitational mass was also pointed out as a weakness in Newton’s theory quite early on but not using those terms. It is only with general relativity that a theoretical account could be given about why we would expect them to be equal.

          • Dave Ricks
            Posted April 29, 2012 at 6:15 pm | Permalink

            If I put a load cell between my hand and the ball (and the load cell can have negligible mass compared to the ball), then the load cell reports one value of force which is both: A) the force my hand exerts toward the ball and B) the reaction force of the ball back toward my hand.

            • bernardhurley
              Posted April 29, 2012 at 9:55 pm | Permalink

              Yes, if you make the assumption that Newtonian mechanics is correct you can find ways of measuring the two forces although interestingly you only get one measurement. Is there a way of measuring the two forces independently and showing that they are equal?

              Prior to Newton the general consensus was that when a cue hit a billiard ball, the ball moved because there was a “push” in the direction of the ball; the idea that the ball pushed back would have seemed bizarre. Among those with little scientific education this is the overwhelming view today. I observed some court proceedings concerning demonstrations against a tour of the Springboks in the early ’70s. Witness after witness for the police said the demonstrators were pushing them but they did not push back. This was never questioned even though it’s physically impossible. Do you think your load cell measurement would convince these policemen that there was a force going in both directions?

      • Posted April 29, 2012 at 12:59 pm | Permalink

        I have to say that, to my estimation, you’ve spent a great deal of effort finding clever ways of saying nothing more than that Gödel’s incompleteness theorem can be a real pain in the ass. Also, you seem quite happy to exploit the native ambiguities that attend common uses of ‘hypothesis’ and ‘assumption’, to such an extent that it seems to be nothing other than convenient equivocation.

        So, pick your poison: ‘space is Euclidean’ (or its obverse if you prefer that style of phrasing better) is an assumption or a hypothesis, but it is not both in the same way for the same purpose.

        • Posted April 29, 2012 at 3:48 pm | Permalink

          Well, my estimation is that you’ve not read enough Karl Popper!

          • Posted April 29, 2012 at 7:28 pm | Permalink

            While it’s true I can’t recite him from memory, I fail to recall any point in his writings where he argues that a hypothesis is an assumption, for the former is the thing to be tested and the latter is the thing that is taken as true but isn’t being justified in the context of evaluating the hypothesis.

            But even if he has said that, it doesn’t bear one jot on my noting that you are doing exactly that. Nor is pointing out that Popper did or said anything in any sense a response to my comment.

            • Posted April 30, 2012 at 2:16 am | Permalink

              I think you’ll find Popper took great pains to demonstrate that all theories are guesses, logically speaking. There may be psychological ‘reasons’ for believing a universal hypothesis, yet there are no logical ‘reasons’. This is ‘Hume’s problem of induction’.

              Again, with respect, I think you are taking issue with definitions of words, without having a grasp of Popper’s philosophy – which is what this discussion is about.

              • Peter Beattie
                Posted April 30, 2012 at 3:27 am | Permalink

                » jamesthenabignumber:
                I think you are taking issue with definitions of words, without having a grasp of Popper’s philosophy

                That’s what most would-be critics in this thread are doing, I’m afraid: run with the first naive interpretation of falsifiability that pops into their heads and then declare triumphantly that they can easily disprove it, so that Popper guy must really be way overrated.

                It’s like those people who learn about evolution and that fitter organisms tend to be better at survival, and then conclude that evolution must rest on a circular argument because the definition of fitness seems (to them) to be survival. Or those who proclaim that the idea that matter is mostly empty space must be complete nonsense because they would be falling through their floor if that were true.

              • Posted May 1, 2012 at 2:05 am | Permalink

                Again, you keep rampantly ignoring what I have said *to* you about what *you* wrote; viz., the conflation of an assumption with a hypothesis with respect to Einstein above. I am pointing out that these are not the same things, but you use them interchangeably within the same train of thought. Namely, you discuss Einstein taking as an assumption that the universe isn’t Euclidean, and that in so doing he made the physics more coherent. And then a moment later, you say he wouldn’t have claimed to have disproved the hypothesis that the universe is Euclidean. Well, of course not – he wouldn’t have needed to make the assumption if it could be demonstrated that the universe isn’t Euclidean.

                At any rate, this is my third and final reply to you since your *only* riposte has been to claim that I don’t understand Popper’s reasoning *despite* the fact that I have at no point indicted Popper in anything. I said that *you* are the one conflating two different things as being equivalent. It wouldn’t matter what Popper has ever said – it will remain just as true that I addressed you about what you said. Anyone else who may or may not say the same thing is irrelevant.

              • Posted May 1, 2012 at 4:20 am | Permalink

                Yes – you’re correct that I am using ‘assumption’ as synonymous with ‘hypothesis’, and with ‘guess’ and ‘theory’.

                A universal hypothesis or methodological hypothesis must be a guess. Both are unprovable, and are not rendered ‘more likely’ or ‘more probable’ in light of new data.

                This is central to Popper’s work. So, you seem to be getting very cross about my use of words, but my ideas are shaped by Popper’s solution to the problem of induction.

      • Posted April 29, 2012 at 6:14 pm | Permalink

        Thank you, now I understand what you meant, and it makes perfect sense.

      • Posted April 29, 2012 at 6:48 pm | Permalink

        “Force” is defined as “any influence that causes an object to undergo a certain change, either concerning its movement, direction, or geometrical construction,” so the statement “forces make objects accelerate” is true by definition. We have defined “force” to mean “the thing that makes objects accelerate.”

        I’m not very advanced in physics, but perhaps it is an open question whether this is the right way to think about things. Perhaps “the thing that makes objects accelerate” isn’t really a thing. In this way it would be similar to “energy” – most people think of energy as a thing, but energy is defined as “the ability a physical system has to do work on other physical systems,” which doesn’t answer the question, “When sunlight lands on my face and makes me feel warm, what is it that traveled through space at the speed of light and made my face warm?” We don’t know. Call it a photon, but wtf is a photon?

  10. jaxkayaker
    Posted April 29, 2012 at 9:11 am | Permalink

    Some of the predictions of the theory of relativity were untestable at the time Einstein developed it and its consequences, but were tested much later, as technological innovations permitted. Same for the predicted existence of the Bose-Einstein condensate form of matter, which, iirc, had to wait 80 years to be demonstrated.

    I’ve often puzzled over the phrase “a theory that can explain everything explains nothing”, which, again, iirc, Pigliucci also touches on in “Nonsense on Stilts” with regard to Freudian psychoanalysis. But doesn’t evolutionary theory attempt to explain everything about how organisms evolved?

    • bernardhurley
      Posted April 29, 2012 at 10:50 am | Permalink

      The meaning of “everything” in the phrase is something like “all possible situations.” The theory of evolution, especially in its modern form, implies very specific constraints on what sort of life is possible.

    • Posted April 29, 2012 at 12:49 pm | Permalink

      In the phrase ‘a theory that explains everything explains nothing’, read it as a shorter way of saying: ‘a theory that is constructed whereby no fact or set thereof, real or imagined, is capable of counting against it, and all facts, and all sets thereof, real or imagined, are said to prove or justify it, is a theory that really explains nothing.’ That’ll clear up the ‘everything’ bit.

      It isn’t saying a theory that explains every actual fact explains no facts; it’s saying that a theory which explains all potential facts says nothing, for everything is compatible with it and nothing is able to be contrasted against it.

  11. Sigmund
    Posted April 29, 2012 at 9:11 am | Permalink

    Imre Lakatos gave a great talk on this at the LSE in 1973 – the transcript of which (‘Science and Pseudoscience’) is here:
    http://www2.lse.ac.uk/philosophy/About/lakatos/scienceAndPseudoscienceTranscript.aspx

    His take on the matter is that falsifiability has been succeeded by the idea of “research programmes” – meaning, essentially, theories with predictive power. Those theories that fail in their predictive power – or which offer few or no predictions – are dropped in favor of those that do offer predictions that turn out to be correct.

    • Sigmund
      Posted April 29, 2012 at 9:15 am | Permalink

      To get the full flavor of the talk click on the mp3 link at the bottom of this page
      http://www2.lse.ac.uk/philosophy/about/lakatos/scienceandpseudoscience.aspx
      which is Lakatos – in a Bela Lugosi soundalike accent.

    • Posted April 29, 2012 at 9:26 am | Permalink

      Indeed, I think Lakatos seems to do the best job of recovering the core Popperian insight, but avoiding some of the problems.

      To elaborate a bit: We can talk about ‘degenerating’ and ‘progressive’ research programmes. The degenerating ones spend most of their time performing ad hoc twists, modifying the theory to account for anomalies, and the progressive research programmes continue to make risky predictions and have most of those predictions confirmed. So, for example, Creationism has to spend most of its time trying to explain away all the evidence against it, whereas the Theory of Evolution can progress merrily along with very few anomalies compared to its successful predictions.

      Popper was right, then, that there’s something about good science that’s falsifiable, but it’s a kind of in-practice falsifiability: we tend to learn about it by actually observing a theory’s proponents responding to anomalies.

      • Peter Beattie
        Posted April 29, 2012 at 11:20 am | Permalink

        » Tom:
        The degenerating ones spend most of their time performing ad hoc twists, modifying the theory to account for anomalies, and the progressive research programmes continue to make risky predictions and have most of those predictions confirmed.

        This doesn’t actually add anything to what Popper himself said. Cf. points 6 and 7 in this short summary of falsifiability by Popper.

        • Posted April 29, 2012 at 11:46 am | Permalink

          Right, Popper was aware of those kind of moves. Lakatos took himself to be extending and refining Popper’s ideas.

          If I remember correctly (it’s been nine years since I took a grad seminar in scientific progress), Lakatos’s main contributions were to develop this idea further, for example with the ideas of the core of the theory and the auxiliary hypotheses, and the terms ‘progressive’ and ‘degenerating.’ In some ways, his work is kind of a synthesis of Popper and Kuhn.

          • Peter Beattie
            Posted April 29, 2012 at 12:32 pm | Permalink

            As to auxiliary hypotheses, Popper says this (The Logic of Scientific Discovery, p. 62):

            As regards auxiliary hypotheses we propose to lay down the rule that only those are acceptable whose introduction does not diminish the degree of falsifiability or testability of the system in question, but, on the contrary, increases it.

            In other words, ad hoc additions to a theory are admissible if they increase the epistemic content of the theory. Again, I don’t really see how Lakatos improved on what Popper had already written in 1934.

            • Posted April 29, 2012 at 3:43 pm | Permalink

              Well, I’m no student of Popper’s, nor of Lakatos’s. I always thought Lakatos said a lot more about why theories have auxiliary hypotheses, why it’s not a bad thing, which auxiliary hypotheses should be rejected and which should be retained, and so on.

              You might think that Lakatos was wrong to disagree with Popper, but I don’t think very many philosophers of science believe Lakatos didn’t say a lot of different things from what Popper said.

  12. rtkern
    Posted April 29, 2012 at 9:14 am | Permalink

    Popper was trying to cast “proper science” as purely deductive. According to him (or at least the version of his views that you are being told is naive), theories need to make a prediction such that if that prediction does not come true, then the theory is strictly false. There is no room for probabilities in this view. A theory cannot state that “X will happen 99% of the time” and be disconfirmed when “not X” happens 50% of the time. That does not count as falsifiability according to the naive Popperian view. Most of modern particle theory would have a rough time in this regime. Their predictions are inherently probabilistic, and induction rules the day.

    That said, Popperian falsifiability does have an inductive form that is quite useful. If a theory predicts an observation with high probability and that observation has a low probability on the background knowledge (and other theories), then actually observing that gives more weight to the theory than other observations would. A well-formulated theory should predict observations that could easily turn out to be false on the background knowledge. It just doesn’t need to be a strictly deductive exercise to be scientific.

    • Posted April 30, 2012 at 2:40 pm | Permalink

      This is partially why Popper changed his mind and worked on verisimilitude for a while. His account doesn’t work, and arguably using probability theory itself is confused in such a context. However, Bunge and Niiniluoto and others have also worked on partial truth (and on other ways to avoid strict deductivism, too).

  13. Posted April 29, 2012 at 9:16 am | Permalink

    (Pardon the long comment; I’ll ensure any replies are much shorter.)

    I don’t know whether scientists have abandoned naive falsificationism. But philosophers of science are pretty much unanimous that at least naive falsificationism is false.

    One reason (the ‘Duhem-Quine thesis’): If your theory T predicts X, but you observe not-X, that only falsifies a conjunction of propositions that includes T, not T itself. Either T is false, or your instruments were wrong, or T doesn’t really predict X, or you misrecorded your data, or you made a miscalculation interpreting your data, etc.

    Therefore, to answer your question at the end, there’s a sense in which every theory is, in principle, incapable of being falsified by empirical observation alone. We need non-empirical theoretical criteria. Naturally, instead of rejecting all of science, scientists rejected naive falsificationism instead.

    But yes, I have read several scientists and philosophers of science charge string theory in particular with at least not making any predictions that would favor any of the myriad variations of it over each other.

    As for the replacement for falsificationism that you mention, I think falsifiability is normally taken to be part of what makes an explanation best, but there are other theoretical criteria.

    You’re right that there’s a sense in which unfalsifiable theories explain nothing, but I think it’s because they predict nothing. If we accept a more-or-less confirmation-based theory of scientific learning (thereby setting aside the Problem of Induction), a theory is confirmed when it makes a “risky” prediction: one that could have been false, but isn’t. Unfalsifiable theories don’t make risky predictions.

    One more quick note: We can analyze why falsifiability matters, and why unfalsifiable theories predict nothing, in Bayesian terms: for an unfalsifiable hypothesis, P(H|E) = P(H|~E) – equivalently, P(H|E) = P(H). Thus the evidence doesn’t raise our confidence in the hypothesis.
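    That Bayesian point can be sketched with toy numbers (the priors and likelihoods here are assumed purely for illustration): a risky theory assigns the evidence a different likelihood than its negation does; an unfalsifiable theory assigns every possible observation the same likelihood, so the posterior never moves.

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) by Bayes' theorem, with P(E) expanded by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 0.5
# Risky theory: evidence E is likely if H is true, unlikely otherwise.
risky = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.1)
# Unfalsifiable theory: E was certain either way, so E is uninformative.
vacuous = posterior(prior, p_e_given_h=1.0, p_e_given_not_h=1.0)

print(risky, vacuous)  # risky rises above the prior; vacuous stays at it
```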

    • That Guy Montag
      Posted April 29, 2012 at 11:42 am | Permalink

      Pretty spot on there Tom and well put. Pity there’s no vote up button.

    • Peter Beattie
      Posted April 29, 2012 at 11:47 am | Permalink

      » Tom:
      But philosophers of science are pretty much unanimous that at least naive falsificationism is false.

      Which isn’t all that hard to achieve since no one has ever advocated naive falsificationism, least of all Popper himself.

      • MH
        Posted April 29, 2012 at 12:44 pm | Permalink

        I’d worry that an article from 1963 might not be the greatest evidence possible here, since the points about falsification were pretty well known at that point.

        Also what Popper is saying there is very close to naive falsificationism (though not phrased the same way, partially because the simple explanation of naive falsificationism makes it clear that it won’t work).

    • Torbjörn Larsson, OM
      Posted April 29, 2012 at 12:20 pm | Permalink

      In practice, sorting out whether your experiment works or not is part of the process. Hence the D-Q thesis isn’t relevant for science as we know it.

      I would object to your claim that scientists have rejected naive falsificationism, as they use testing, which falsificationism is generally supposed to map onto. I don’t think scientists are interested in what philosophers think are or are not scientific theories, because they consider themselves the experts and, more importantly, have to know. Hence I doubt very many have either accepted or rejected “naive falsificationism”.

      • Posted April 29, 2012 at 3:48 pm | Permalink

        I’m not sure whether you think you have an argument against the Duhem-Quine thesis here. That is, I don’t see how figuring out whether the experiment worked being part of the process is inconsistent with the thesis.

        I don’t know whether scientists accept or reject naive falsificationism. I should have said that philosophers of science reject it. I imagine those scientists who are also philosophers of science reject it too.

        Whether scientists use testing is obviously irrelevant to whether they accept naive falsificationism; the latter isn’t a thesis about whether scientists use testing.

      • Posted April 30, 2012 at 4:28 pm | Permalink

        Hmm… doesn’t D-Q apply to the superluminal neutrinos? Everyone was looking for another proposition, rather than accept (at least initially) that “nothing can travel faster than the speed of light in a vacuum” had been falsified.

        /@

  14. Egbert
    Posted April 29, 2012 at 9:18 am | Permalink

    I suppose the double-slit experiment falsifies the particle theory.

    If we held it as a strict principle, then we’d throw out much of physics.

    • अहंनास्मि (Ahannāsmi)
      Posted April 29, 2012 at 9:31 am | Permalink

      No, the double-slit experiments (whether for photons or electrons) just falsified our ideas about how particles were supposed to behave. That “falsification” stands at the foundations of modern physics, rather than as a threat to much of its content.

    • bernardhurley
      Posted April 29, 2012 at 9:39 am | Permalink

      For over a century Thomas Young had the reputation of being the person who established the wave theory of light. However, while it falsifies a theory that says light is a stream of particles that obey the laws of Newtonian mechanics (i.e., like minute billiard balls), it does not falsify every possible particle theory.

    • Tyro
      Posted April 29, 2012 at 9:49 am | Permalink

      It certainly showed that the particle model is incomplete and yes, to that extent it did falsify it. However at a sufficiently large scale (and by “large”, we’re still talking about atomic or subatomic) the particle approximation is close enough.

      You can compare it to Newton and Einstein. Newton’s gravity isn’t wrong exactly and we don’t throw it all out, but we now have to recognize that it has limitations.

      • Posted April 30, 2012 at 2:42 pm | Permalink

        This is where partial truth helps (see above).

      • Posted May 2, 2012 at 2:32 am | Permalink

        I prefer to say that parts of it are wrong, exactly, and parts of it are not wrong at all.

    • SLC
      Posted April 29, 2012 at 10:14 am | Permalink

      Not true. The Copenhagen interpretation of quantum mechanics says that, in the case of two slits, every photon that ends up on the other side passes through both slits. However, this cannot be observed because any attempt to observe this results in the collapse of the wave function that describes the photons and results in each photon that ends up on the other side being observed to pass through one slit or the other. This is just one of the many conundrums of quantum mechanics and the reason why Lawrence Krauss can state that nobody understands quantum mechanics.

      • Posted April 30, 2012 at 4:31 pm | Permalink

        More precisely, that no-one understands the Copenhagen interpretation of quantum theory.

        Other interpretations are less bothersome in this respect.

        /@

  15. Posted April 29, 2012 at 9:20 am | Permalink

    Kitteh are awesome!

    Unfalsifiable.

    Well, except for those damn Squid Squad Illuminati.

  16. Myron
    Posted April 29, 2012 at 9:27 am | Permalink

    “Popper’s demarcation criterion has been criticized both for excluding legitimate science (Hansson 2006) and for giving some pseudosciences the status of being scientific (Agassi 1991; Mahner 2007, 518–519). Strictly speaking, his criterion excludes the possibility that there can be a pseudoscientific claim that is refutable. According to Larry Laudan (1983, 121), it “has the untoward consequence of countenancing as ‘scientific’ every crank claim which makes ascertainably false assertions”. Astrology, rightly taken by Popper as an unusually clear example of a pseudoscience, has in fact been tested and thoroughly refuted (Culver and Ianna 1988; Carlson 1985). Similarly, the major threats to the scientific status of psychoanalysis, another of his major targets, do not come from claims that it is untestable but from claims that it has been tested and failed the tests.”

    (http://plato.stanford.edu/entries/pseudo-science/)

  17. Posted April 29, 2012 at 9:28 am | Permalink

    It’s my understanding that string theory is accepted more for the explanatory/predictive power of the maths than because it is falsifiable. That doesn’t mean to say that it will remain unfalsifiable forever.

    It could be that one day we build some kind of ultra-powerful microscope that can “see” strings, whatever they may be, or at least detect some kind of trace of them. (Although, is being able to prove the existence of something the same as falsifiability?)

    • articulett
      Posted April 29, 2012 at 9:55 am | Permalink

      If there were nothing measurable where we would expect something measurable to exist, then it’s a falsifiable claim. So yes, detecting something consistent with the hypothesis makes the claim falsifiable (because we might have detected nothing if the hypothesis were false).

      Falsifiability boils down to, “If the claim is true, we should expect to see x when we do y; if the claim is not true, then we would expect z when we do y.”

    • articulett
      Posted April 29, 2012 at 10:05 am | Permalink

      I’m not familiar with string theory. Is it more like a “tool” for understanding, like “memetics”? I’ve heard religionists who are angry at Dawkins assert that “You can’t prove memes exist”, which makes them feel like they’ve proven Dawkins wrong about something or other. I also hear creationists say that atheists subscribe to string theory even though it’s unfalsifiable, in order to defend their own faith. I don’t subscribe to string theory because I don’t understand it; however, I get my science from scientists, not from those who imagine themselves saved for what they believe. I don’t need to know the answer to dismiss answers that boil down to “magic”.

      Scientific explanations tend to be useful for finding out more.

  18. Posted April 29, 2012 at 9:29 am | Permalink

    Jerry,
    My understanding of this (which may also be naive) is that while falsifiability is not sufficient to demarcate science from non-science, it is still necessary, so that if a hypothesis cannot at least conceptually be falsified, you can say for certain that it isn’t scientific. So to my (limited) understanding, string theory is in the borderlands of science/non-science, because, at least for now, no way to test it has even been conceived.

    The problem with falsifiability that was pointed out by Duhem and Quine is that, conceptually, you can’t know when you falsify a hypothesis whether you are actually falsifying the hypothesis itself or falsifying one of the assumptions upon which it rests.

    Which poses no problem for me. I’m sure that was helpful,
    JK

    • Ben Murray
      Posted April 29, 2012 at 12:33 pm | Permalink

      It happens all the time in experimental science. We didn’t consider all the possibilities, or the right controls weren’t run, or there was an unrecognized problem with the experimental design. The resolution generally comes from more experiments – often by different groups, sometimes by the same group. It’s a problem with knowing when you actually have falsified the hypothesis, not with falsifiability itself.

      • Posted April 29, 2012 at 1:17 pm | Permalink

        Yes. This seems to be something that is conveniently ignored in such discussions. In the abstract, a counter-example is able to demonstrate the falsity of a given hypothesis. But the real world doesn’t work as an abstraction, and isn’t necessarily a simplified model amenable to discrete claims like ‘if x is seen, y is false’. It’s why we don’t run an experiment just once and call it a done deal.

        As a note on an earlier statement: it’s good that philosophers of science have caught on that an observation that on its face would seem to be able to disprove a given conjecture might well just be a problem with some testing apparatus, thereby obviating the need to throw out what is actually a workable conjecture.

        If only philosophers were in labs to help scientists figure out that if an experiment goes sour, it might just *possibly* be the case that there was human error in running it. Or a loose cable. Or a practical joke.

  19. articulett
    Posted April 29, 2012 at 9:45 am | Permalink

    Real things should be distinguishable from delusions, misperceptions, etc. when scientifically tested. I think this is at the heart of falsifiability.

    In addition to being the best explanation for the observed facts, a scientific theory should be able to predict new evidence. It’s a tool for discovering more.

    I think falsifiability is essential for a scientific hypothesis to become a theory. If we are on the wrong track, there should be something which allows us to know that.

  20. Posted April 29, 2012 at 9:52 am | Permalink

    I don’t agree that “for a theory that best explains the data we have could be shown not to explain the data we have”.

    Given a theory and some data, the explanatory power of that specific theory is fixed. It can only change with new data. And I think that the exercise of adding new data to decrease explanatory power is equivalent to falsifiability. But I don’t think that this can be considered a demarcation criterion in the context of “best hypothesis inference”, because the hypothesis must be scientific to begin with.

    • Posted April 29, 2012 at 9:57 am | Permalink

      I don’t think that “best explanation” is a good demarcation criterion, but it is a good way to evaluate the best explanation (tautologically, of course). Falsifiability could be a demarcation criterion, but in this context it seems pointless, because no model is falsifiable in itself.

  21. Cortrm
    Posted April 29, 2012 at 9:59 am | Permalink

    In “Fashionable Nonsense”, Alan Sokal criticizes the Popperian view that falsifiability can serve as a simple criterion for establishing a scientific theory. In the book he gives an example of a theory which is now accepted but couldn’t be falsified at the time because of technological limitations (I don’t remember which theory).

    • bernardhurley
      Posted April 29, 2012 at 10:53 am | Permalink

      Popper makes it plain on various occasions that he is talking about falsification in principle. So whatever theory Sokal was talking about would not be a counterexample.

      • Posted April 30, 2012 at 2:47 pm | Permalink

        The problem there is the vagueness of “in principle”. For example, does it require direct visual inspection outside our light cone? That’s not (to speak loosely) self-contradictory, so it is “logically possible” in the sloppy way the term is often used. But it is certainly nomologically impossible for us. Popper doesn’t do metaphysics, which is in my view the greatest problem with his philosophy of science. It leaves many of these matters vague or underdeveloped. (And leaves him a sitting duck for bad philosophy of mind – Eccles.)

  22. Posted April 29, 2012 at 10:00 am | Permalink

    The oxidative theory of ageing seems quite immune to falsification. I love this take on it: http://www.landesbioscience.com/journals/cc/BlagosklonnyCC7-21.pdf [PDF]

  23. Kevin
    Posted April 29, 2012 at 10:01 am | Permalink

    This simply follows from Bayes’ theorem. If you want to raise the probability of a hypothesis, you propose a test, and the result of that test is either positive (raising the hypothesis’s probability) or negative (decreasing it). If the outcome is always positive (that is, the hypothesis is consistent with every possible outcome), the ‘test’ does not constitute evidence for the hypothesis. So in order for a hypothesis to be raised from a mere guess to a theory, it must have passed some tests where it could have failed (i.e., been falsified).
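
    A minimal numeric sketch of that point (hypothetical numbers, just to illustrate the Bayesian bookkeeping):

    ```python
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    # A risky test: the hypothesis predicts the evidence strongly, while rival
    # hypotheses make it unlikely. Passing raises the probability (0.5 -> 0.9).
    risky = posterior(0.5, 0.9, 0.1)

    # An unfalsifiable "test": every outcome is equally consistent with the
    # hypothesis and its rivals, so the posterior simply equals the prior
    # (0.5 -> 0.5) and nothing is learned.
    vacuous = posterior(0.5, 1.0, 1.0)
    ```

    Only the first kind of test, the one the hypothesis could have failed, moves the probability at all.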

    • Kevin
      Posted April 29, 2012 at 10:04 am | Permalink

      Just read Jerry’s quip at the end, sorry, I used math instead of the philosophy of science. I’m not really well-read on Karl Popper or philosophy of science for that matter.

    • Torbjörn Larsson, OM
      Posted April 29, 2012 at 12:24 pm | Permalink

      Bayes’ theorem doesn’t describe probabilities in this use, though. It is better to say it yields Bayesian expectations. It is a way to judge how to bet.

      • Kevin
        Posted April 29, 2012 at 10:14 pm | Permalink

        Are you saying that we can’t quantify the probability of hypotheses? If so, I would agree, but that is irrelevant to what is being discussed here. We start with a hypothesis; it then passes multiple tests until it becomes a theory. The question was why falsification is a good or bad indicator of science. I was simply saying that it is a side-effect/requirement of the testing process.

  24. Posted April 29, 2012 at 10:02 am | Permalink

    It was the evolutionists who objected to falsification being taught in Kansas.

    • bernardhurley
      Posted April 29, 2012 at 10:55 am | Permalink

      ?

      • Posted April 29, 2012 at 1:22 pm | Permalink

        One of the common Cdesign Proponentists arguments is that evolution is not falsifiable. This may have been raised back in Kansas.

        A quick response to this is “Popper was only close; the rigorous version requires college-level math.”

        • bernardhurley
          Posted April 29, 2012 at 1:30 pm | Permalink

          This may well be true, but Roger’s post suggested that he knew of people, whom he identified as “the evolutionists”, who apparently had objected to falsification being taught. I was hoping he could expand on this so I knew what the heck he was talking about.

          • Posted April 30, 2012 at 8:43 am | Permalink

            Ah. Sorry; wasn’t paying close attention.

        • Posted April 29, 2012 at 3:13 pm | Permalink

          I am referring to the Kansas evolution hearings. One of the points of contention was whether the concept of falsification should be used when describing science.

          • Peter Beattie
            Posted April 29, 2012 at 3:22 pm | Permalink

            Since you helpfully provided the link, can you tell us where on that Wikipedia page a reference is made to something that you can read e.g. in Popper’s summary essay linked at the top of this page?

          • Posted April 29, 2012 at 6:17 pm | Permalink

            The 2001 Kansas science standard had a section on “Learn about falsification”. The AAAS and the evolution lobby were determined to eliminate that, and they did.

            • bernardhurley
              Posted April 29, 2012 at 8:50 pm | Permalink

              Correct me if I am wrong, but I understood that mainstream scientists had boycotted the hearings. I also understood that the amended science standards were eventually rejected as a whole, but again I may be wrong.

              Can you give a reference to anyone who testified at the hearings who objected to the section you mentioned, and tell us what their objections were? It would also be nice to know what this section contained before being asked to comment on it. I and many others don’t have the time to go trawling the internet for the information, and you obviously know more about it than we do, so if you want a sensible discussion of this it would only be polite of you to supply a decent set of references.

            • Posted April 30, 2012 at 12:05 am | Permalink

              Yes, the mainstream scientists boycotted the hearings. Their opinions would be more clear if they had testified. They did succeed in removing falsification. My guess is that they were influenced by those philosophers who hate the concept, or maybe they thought that it was a creationist plot. But ask them why they were chicken to testify, not me.

              • bernardhurley
                Posted April 30, 2012 at 2:32 am | Permalink

                I’m sorry, but you haven’t supplied either enough information to assess what you are saying or any evidence for it. Without knowing what was in this section, it is difficult to know whether it itself is reasonable or not. And without knowing who criticised it and why, it is unclear whether that was reasonable or not. Simply referring to “they” or “evolutionists” tells me nothing. Maybe it means something to people who live in Kansas, but I live in the UK. I have no idea who you are talking about.

                If you cannot come up with the references I will have to consider you somewhat ethically challenged.

              • Posted April 30, 2012 at 9:02 am | Permalink

                Transcripts of the hearings are available at the TalkOrigins website. The opposition to falsification was not from “Evolutionists”, but from Dr. Stephen C. Meyer and other Intelligent Design advocates. EG:

                Q. That you were saying that the idea of falsification is inadequate? Is that what you’re saying?
                A. Yes, that– there’s a wealth of literature in the philosophy of science now showing that scientific theories– that explanation of already known facts is as important or more important to testing of scientific theory as predictions.

                See link for full context, and feel free to look through the rest of the testimony. The main support for falsification appears to be from board members who oppose the pro-ID minority report; and the ID-advocates largely have problems with falsification.

                I’d also note that the historical/operational distinction Dr. Meyer tries making comes from the Young-Earth Creation camp (such as Answers In Genesis), and is not generally accepted in the philosophy-of-science community.

              • Posted April 30, 2012 at 10:03 am | Permalink

                You can read the full description of falsification in the 1999 Kansas science standards. The mainstream scientists called these creationist-influenced science standards, and removed falsification.

    • jaxkayaker
      Posted April 29, 2012 at 1:31 pm | Permalink

      [citation needed]
      [context needed]

  25. DiscoveredJoys
    Posted April 29, 2012 at 10:04 am | Permalink

    How about ‘Nothing can travel faster than the speed of light’?

    How can you falsify (if true) an ultimate limit? All you can say is that we have not yet discovered anything that travels faster than the speed of light (for increasingly large values of ‘not yet’).

    • bernardhurley
      Posted April 29, 2012 at 11:02 am | Permalink

      You seem to have missed the point. Obviously no statement that is true can be falsified. The point of Popper’s criterion is that if a theory in fact is false then there is something it predicts that we could in principle check. If the prediction turns out to be false then the theory is falsified. The prediction that nothing can travel faster than light is a paradigm case of such a prediction.

    • Peter Beattie
      Posted April 29, 2012 at 11:43 am | Permalink

      In addition to what Bernard said, you’ve got it exactly right that the best knowledge we have is never certain and is perhaps best described as ‘not yet falsified’, or as ‘true to the evidence so far’.

    • Torbjörn Larsson, OM
      Posted April 29, 2012 at 12:34 pm | Permalink

      There are absolutes in science; it is a severely wrong portrayal to claim there aren’t.

      – We have solid math in a few cases, Noether’s theorems are famous examples.

      – We have solid no-go theorems in some cases. No ftl is one of them, if you try it physics goes to pieces.

      – And as I commented elsewhere, Carroll notes justly that the physics of everyday life is completely known. Nothing is going to change that.

      This last bit has more to do with the process of science than the first two. (But of course there are motivations for the other two as well: ties to relations between theories, as for Noether’s theorem.) It is due to the elimination process that theories and facts have undergone through time; I describe it in a long comment below.

      Possibly one could also add that we know mechanics is based on reality, as all examples include a principle of “constrained reaction to constrained action”. Reality seems a solid fact.

      Speaking of facts, it is easier to understand the remaining uncertainty (which always exists) if you recognize that facts aren’t philosophically “true” relative to some philosophy/axiom scheme, but absolute.

      • Schenck
        Posted April 29, 2012 at 5:01 pm | Permalink

        “No ftl is one of them, if you try it physics goes to pieces.”
        How does “physics goes to pieces” make it a no-go theorem? Are you saying there are no possible other explanations of what we think of as physics? If FTL is observed, and modern physics has to crumble because of that, then so be it, no?

        “Carroll notes justly that the physics of everyday life is completely known. Nothing is going to change that.”
        Except for observations that contradict it, no? I agree that this is super-unlikely, but that hardly means that if they’re observed, they won’t change it. What if there were some sector of normal space with aberrant physics? That’d be a contradiction of everyday stuff, and we’d need new theories to explain it.

  26. Tyro
    Posted April 29, 2012 at 10:08 am | Permalink

    Re string theory – I thought the problem is that it has so many variables that haven’t yet been constrained. Rather than being a single theory, I thought that it’s more of a class of theories.

  27. Posted April 29, 2012 at 10:09 am | Permalink

    The short answer to your question is no.

    Falsifiability is not a good criterion because all theories can be protected from being falsified. In other words, no theory can ever be falsified if its supporters want to preserve it. This seems counter-intuitive, but as a physicist who also teaches courses on the history and philosophy of science, I have had a long-standing interest in this question and in why philosophers of science seem to be at odds with practicing scientists on the answer.

    To address it, I wrote a 17-part series of blog posts on The Logic of Science; the most pertinent ones are #10, #11, #12, #13, and #14.

    • Posted April 29, 2012 at 6:42 pm | Permalink

      For those who don’t want to read the 17 parts, he argues that Thomas Kuhn falsified Popper by showing that science is irrational and never makes any progress.

      • Posted April 29, 2012 at 9:42 pm | Permalink

        Unless someone falsifies your comment, you’ve just saved me a lot of reading.

  28. Greg G
    Posted April 29, 2012 at 10:13 am | Permalink

    If a theory is falsifiable by making several predictions and all those predictions are verified, it is no longer falsifiable without chucking basic physics, but it is still scientific.

    The bathroom scale showing an increase in weight has better explanations than variable gravitational constants.

    • bernardhurley
      Posted April 29, 2012 at 11:07 am | Permalink

      It is unlikely that, for any but the most trivial of theories, we will actually run out of predictions that can be tested.

      • Greg G
        Posted April 29, 2012 at 8:45 pm | Permalink

        It has been said that the theory of evolution has been so well verified that if a human skeleton was found in the Cretaceous, it would be better evidence for time travel than against evolution.

        • bernardhurley
          Posted April 29, 2012 at 10:22 pm | Permalink

          But time travel would undermine our understanding of the world completely. If some future generation had the technology to do that, why would they do it? To confuse us, maybe?

          • infiniteimprobabilit
            Posted May 1, 2012 at 3:14 am | Permalink

            Future Creationists?

            (After all, not everyone who uses advanced technology understands it. Our present Creationists can use computers. Unfortunately).

  29. Ethan
    Posted April 29, 2012 at 10:14 am | Permalink

    The way that I’ve understood a good scientific theory (from my Undergrad studies) is that a good theory:
    – can explain observed phenomena (the data)
    – allows us to predict future phenomena (provides heuristic value)
    – generates further research and/or allows us to take action (for example, germ theory eventually allowed us to develop antibiotics)
    – is parsimonious
    – is compatible across many domains / other schools of thought (what I call integration)

    The way I always picture this working is that the more of these characteristics a theory has, the better the theory is. Some of these characteristics are at the core of being a theory. I see the first three characteristics as the core characteristics of a scientific theory.

    The characteristics of falsifiability, parsimony, and integration are more supplemental. Possessing any of them helps strengthen a theory’s standing in academia, and allows us to accept the theory more easily. Not having them doesn’t mean the theory isn’t a good one, but if a competing theory does have them, then I’d be more inclined to accept the latter. Of these three supplemental characteristics, integration is the most important, followed closely by falsifiability, then more distantly by parsimony (I don’t think parsimony is that big of a deal, personally).

    Evolution, for example, has very strong integration: biology, geology, and chemistry all support it. It is falsifiable (see rabbits in the Devonian). Compared to its rival “theory” (ID), evolution might not be as parsimonious (though I could even argue against this). ID, on the other hand, has neither integration nor falsifiability. ID could be parsimonious in that “God did it.” However, having one natural process (natural selection) that created all the different species vs. God making every species individually, via presumably different methods, could be considered less parsimonious. The fact that ID possesses one or none of these characteristics (as well as having no predictive value), while evolution possesses all or most of them, means that evolution is the better theory.

  30. Posted April 29, 2012 at 10:22 am | Permalink

    §

  31. Peter Beattie
    Posted April 29, 2012 at 10:31 am | Permalink

    In light of more than a few comments that are, frankly, almost completely ignorant of what Popper’s demarcation criterion means and entails (shadows of Mary Midgley), may I suggest that we refer to this four-page summary of falsifiability by Popper himself, to make sure that we at least understand the bare essentials and have a handy reference for some pertinent quotes?

    • Ben Murray
      Posted April 29, 2012 at 12:48 pm | Permalink

      What a delight to read Popper’s actual words again! In graduate school (microbiology), I read several of his works (including *Conjectures and Refutations*, from which the linked excerpt is taken). I found them very useful in forming my ideas of scientific testability as I did my research, and I continue to find them so to this day (now in teaching).

      It certainly seems to me that a Bayesian approach is at least implicit in the linked excerpt. Popper also deals, at least implicitly, with the Duhem-Quine thesis.

      The other point that I found most striking in the excerpt is that scientific testing requires the right ATTITUDE – you should be TRYING to refute your hypothesis. It’s only worth maintaining the hypothesis if those good-faith attempts fail. PRETENDING to try to refute the hypothesis doesn’t count! In this respect, the psychological aspect of the scientific method is extremely important.

      • Peter Beattie
        Posted April 29, 2012 at 1:07 pm | Permalink

        A relevant quote amplifying the point in your last paragraph would be Popper’s reply to the criticism that it is always possible to ‘rescue’ a system of statements from refutation by introducing ad hoc auxiliary hypotheses, or by simply denying the refuting observation. (Which, by the way, is a strictly logical point: even in deductive logic, the only thing an argument can force you to do is to choose between accepting the correctness of the premises or the falsity of the conclusion.)

        I must admit the justice of this criticism; but I need not therefore withdraw my proposal to adopt falsifiability as a criterion of demarcation. For I am going to propose … that the empirical method shall be characterized as a method that excludes precisely those ways of evading falsification which, as my imaginary critic rightly insists, are logically admissible. According to my proposal, what characterizes the empirical method is its manner of exposing to falsification, in every conceivable way, the system to be tested. Its aim is not to save the lives of untenable systems but, on the contrary, to select the one which is by comparison the fittest, by exposing them all to the fiercest struggle for survival. (LoSD, p. 20)

      • Posted April 30, 2012 at 4:49 pm | Permalink

        That’s very much like developing software: The aim of software testing is not to demonstrate that the software works as designed as such, but to discover where it doesn’t work — to try to break it.
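
        A toy sketch of that attitude (hypothetical function, not from the thread): a merely confirming test versus one written to break the code.

        ```python
        def mean(xs):
            return sum(xs) / len(xs)

        # Confirming test: exercises only the happy path, so it can only agree.
        assert mean([2, 4]) == 3

        # Falsifying test: hunts for inputs where the code breaks. The empty
        # list exposes an unhandled edge case (division by zero).
        try:
            mean([])
            refuted = False
        except ZeroDivisionError:
            refuted = True
        ```

        Only the second test tells you something you didn’t already build in: that the design is incomplete for empty input.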

        /@

    • MH
      Posted April 29, 2012 at 1:06 pm | Permalink

      I am (now, after the previous response to a previous post) a little worried about what Popper is up to there, honestly.

      The problem that the Quine-Duhem point raises isn’t that falsifiability is somehow different in a way that makes it unreliable, after all. It’s that falsification is in exactly the same epistemic boat as verification. We can just as easily go looking for ways to falsify theories, and succeed, as we can go looking for ways to verify them. In both cases the effort is likely to be at least somewhat dishonest – which strikes me as the basic intuition that he’s really hitting there (see the last bit of this comment) – but we can certainly do both.

      The easy way to see this is to look at what the Quine-Duhem problem is actually pointing out: that theories are only testable in large clumps, and not on their own. This is a point that Popper still seems to be missing, when he talks about “genuine tests of a theory”. The point of the problem is that there are no genuine tests of just one theory, in the fullest sense.

      An easy way to “falsify” a true theory would be just to test it in a way that included, as one of the unstated background theories, something that was false (this could mean as little as “the bit that says that this theory applies to this sort of case in this sort of way is not true”, for any of the background theories involved). This is something that could be done intentionally, but it could also be inadvertent to some degree. (And all this puts falsification in the same place as verification.)

      I think also there’s something a bit odd about how a bunch of those criteria seem to be talking more about sincerity of the scientists (genuine attempts to test the theory, etc.) than about the theory itself. I don’t mean that this is bad – it seems like the right thing to say when talking about the difference between pseudoscience and science. But to go from that way of talking to dividing up theories on the basis of whether or not they have a property of falsifiability is very slippery indeed.

      If Popper really is saying that what he means by “This theory is falsifiable” is “the scientists involved were really sincere in their attempts to test this theory” then he’s doing something very strange indeed. Otherwise it looks like he’s trying to put a more convincing gloss on something very much more like naive falsificationism than he wants to admit.

      • Peter Beattie
        Posted April 29, 2012 at 1:56 pm | Permalink

        » MH:
        The problem that the Quine-Duhem point raises isn’t that falsifiability is somehow different in a way that makes it unreliable, after all.

        Could you please give us some actual quotes so that we can assess what those two gentlemen actually said?

        It’s that falsification is in exactly the same epistemic boat as verification.

        Popper regards only strictly universal statements as empirical, and these are in fact not verifiable. But an accepted, or “genuine”, counter-example can falsify them. (Cf. LoSD, p. 49)

        I think also there’s something a bit odd about how a bunch of those criteria seem to be talking more about sincerity of the scientists (genuine attempts to test the theory, etc.) than about the theory itself.

        Well, at least your description, if not the evaluation, is exactly right. Popper says he argues that “empirical science should be characterised by its methods” (LoSD, p. 29). Why? Because “it is impossible to decide, by analysing its logical form, whether a system of statements is a conventional system of irrefutable implicit definitions, or whether it is a system which is empirical in my sense; that is, a refutable system.” (LoSD, p. 61)

        • MH
          Posted April 29, 2012 at 3:04 pm | Permalink

          The Quine-Duhem problem isn’t called that because they worked on it together: Quine was only eight years old when Duhem died. It’s called that because something generally similar was separately formulated by both of them (in reference to similar though not identical views). Quine was arguing about underdetermination of theories by evidence (as a general question, but also in science). Duhem (as I recall though it’s been a few years since I read it) was actually arguing against an account of science that put a lot of weight on critical experiments (tests that chose one theory over the other). More importantly, though, the problem is typically understood in abstraction (it isn’t referring to a single argument, so there isn’t really any canonical source to refer to – just a general problem).

          More importantly, and this goes back to the earliest comment I made a bit above this, it really doesn’t look like Popper is explaining how he wasn’t a naive falsificationist. Actually it looks like he’s accepting that naive falsificationism is a poor account, and trying to salvage what he took to be the relevant and important insight behind it. (And, as a result, it’s probably not good evidence that he wasn’t actually arguing for the naive version.) For example, as the above commenter points out nicely, there’s this passage (which I had initially taken to be from the article you linked, and was embarrassed to have missed, but actually it is from p. 20 of The Logic of Scientific Discovery):

          I must admit the justice of this criticism; but I need not therefore withdraw my proposal to adopt falsifiability as a criterion of demarcation. For I am going to propose … that the empirical method shall be characterized as a method that excludes precisely those ways of evading falsification which, as my imaginary critic rightly insists, are logically admissible.

          Here he’s pretty explicitly proposing that instead of falsifiability being a (logical) difference between systems, the weight is carried by how the people doing the testing are behaving (counting as “empirical inquiry” only those cases where they’re actually doing it right/sincerely/etc.). This is, I think, basically the same as the later points in the list that he gives in the link that you provided.

          The slipperiness that I was pointing out, though, is that he also portrays the capacity to be falsified as being a property of the systems themselves. He does the same thing in the book quoted (earlier, on page 18):

          But I shall certainly admit a system as empirical or scientific only if it is capable of being tested by experience. These considerations suggest that not the verifiability but the falsifiability of a system is to be taken as a criterion of demarcation. In other words: I shall not require of a scientific system that it shall be capable of being singled out, once and for all, in a positive sense; but I shall require that its logical form shall be such that it can be singled out, by means of empirical tests, in a negative sense: it must be possible for an empirical system to be refuted by experience.

          The response to the objection that he thinks has some justice, then, is basically him shifting around one of the accounts attached to the falsificationism he’s advancing (something like: I mean “tested by experience” to only mean “when someone is being sincere and not making mistakes about sub-hypotheses and being properly scientific about the whole thing and so on”). He more or less asserts that it means “only when the testers aren’t doing the sorts of things that can lead to true systems being falsified”. I think he’s probably missing some of the pointedness of the criticism by taking these sorts of things to always involve malfeasance (since the criticism certainly doesn’t say that – one can also evade falsification accidentally while sincerely attempting to do real science; this is why science is so difficult).

          I’m not accusing Popper of any sort of serious malfeasance here, but this is the sort of reasoning that philosophers often parody (in an exaggerated and self-deprecating way) by portraying someone as arguing “So and so has argued that such and such is a counterexample to my theory. However they are understanding my theory in a way that I did not intend, since I intended it to have no counter-examples.”

          • Peter Beattie
            Posted April 29, 2012 at 3:18 pm | Permalink

            » MH:
            Quine was arguing about underdetermination of theories by evidence (as a general question, but also in science).

            Which can only be a criticism against a justificationist theory, which Popper’s is emphatically not.

            Duhem (as I recall though it’s been a few years since I read it) was actually arguing against an account of science that put a lot of weight on critical experiments (tests that chose one theory over the other).

            That’s all very well, but what arguments did he use?

            And in any case, if he were to argue that critical experiments cannot be devised or agreed upon, what are we to make of Einstein’s famous assertion that “if the redshift of spectral lines due to the gravitational potential should not exist, then the general theory of relativity will be untenable”? (Popper, Unended Quest, p. 38) This chimes in perfectly with Deutsch’s idea of good explanations: so hard to vary that a single alteration might ruin the whole thing.

            • MH
              Posted April 29, 2012 at 7:27 pm | Permalink

              I’m not sure what the confusion here is. The argument has already been explained – it’s not really a complicated one. And the general point that Quine made certainly tells against what Popper was arguing (how on earth could it not?).

              The point is simply that hypotheses cannot be tested in isolation since testing any hypothesis requires a significant number of auxiliary hypotheses, claims, etc. to be added in before it can be tested. (One needs to know all sorts of other assumptions about how the world works, how the testing equipment works, etc.) This means that the apparent validity of a falsifying experiment is actually misleading since while it appears to be an instance of modus tollens, it is in fact an instance of modus tollens with a transparently fallacious argument afterwards.

              In other words, it appears to be:
              1. P -> Q
              2. ~Q
              Therefore
              3. ~P

              And that would be a straightforward case of a hypothesis being falsified by empirical evidence (specifically, ~Q). But in reality what is happening is actually:

              1. {P&R&S&T&…..} -> Q
              2. ~Q
              Therefore (implicitly, validly)
              3. ~{P&R&S&T&….}
              Therefore (fallaciously)
              4. ~P

              So I’m really unclear about why you keep asking for what the argument really is, since it’s a really straightforward point. Popper even grants the point, in the quoted excerpt (though, as I’ve said, his defense is a bit slippery). I guess if it still doesn’t make sense the wikipedia entry for it is relatively straightforward. http://en.wikipedia.org/wiki/Duhem%E2%80%93Quine_thesis
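The Duhem–Quine point above can be illustrated with a brute-force sketch (a toy illustration, not anything from the thread): enumerate all truth assignments for the hypothesis P and auxiliaries R, S, observe ~Q, and check what actually gets refuted.

```python
from itertools import product

# Enumerate truth assignments for P (the hypothesis under test) and the
# auxiliary hypotheses R and S. The theory bundle predicts Q only when
# all of P, R, S hold: {P & R & S} -> Q.
def predicts_q(p, r, s):
    return p and r and s

# Observation: ~Q. Which assignments survive the observation?
survivors = [(p, r, s) for p, r, s in product([True, False], repeat=3)
             if not predicts_q(p, r, s)]

# The conjunction {P & R & S} is false in every surviving world
# (that is the valid modus tollens step)...
assert all(not (p and r and s) for p, r, s in survivors)

# ...but P alone is NOT refuted: in some surviving worlds P is still
# true and an auxiliary was the culprit (the fallacious step is
# concluding ~P from ~{P & R & S}).
assert any(p for p, r, s in survivors)
```

The two assertions capture the asymmetry: ~Q validly kills the conjunction, but leaves the lone hypothesis untouched.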

              • Peter Beattie
                Posted April 30, 2012 at 4:12 am | Permalink

                I referred to all that in the comment you replied to. Duhem and Quine seem to be looking for justifications for theories; Popper is not. Popper calls the Duhem–Quine stance ‘conventionalism’, a system he says is “self-contained and defensible”, but:

                Underlying it is an idea of science, of its aims and purposes, which is entirely different from mine. Whilst I do not demand any final certainty from science (and consequently do not get it), the conventionalist seeks in science ‘a system of knowledge based upon ultimate grounds’, to use a phrase of Dingler’s. This goal is attainable; for it is possible to interpret any given scientific system as a system of implicit definitions. (LoSD, p. 59)

                The solution is a methodological one: taking the decision not to save your system of statements from refutation by “any kind of conventionalist stratagem” (LoSD, p. 61).

                And as the paragraph about Einstein and Deutsch was intended to show, your assertion that falsifiability relies on “a transparently fallacious argument” is bogus. The point that Einstein made is two-fold: One, it relies on the purely logical point that Popper also makes, namely to say that accepting the falsity of the conclusion of a deductive argument forces you to accept the falsity of one or more of its premises (your P, Q, R etc.). And two, Einstein looked at his theory as a highly integrated (and thus hard to vary) system of statements that thus functioned as a good explanation, but that would be refuted as a whole (and not just some P) if certain observations would have to be accepted as facts.

                For more detail, see section 4 (“Falsifiability”) of LoSD.

            • Brian
              Posted May 29, 2012 at 8:08 pm | Permalink

              “should not exist”

              What he did not mean by that, or so I think, was “are not seen.”

              If the telescopes did not see them, one would check the equipment, check for obstacles between them and the target, etc., unto checking for unknown phenomena of a type such that their existence would be a simpler explanation than the falsity of the entire theory complex as combined with its buttressing theories.

              Falsifying is verifying that something is wrong from evidence, with both the “something” and the “evidence” composed of conjoined but independent theories and asserted facts. Verifying is falsifying all other possible explanations.

              “Falsifying” and “verifying” are both verbs that in this context should mean (Popper is not using them this way) “to shift one’s probability mass,” with the difference being in whether the outcome was a shift to “tested theory” from the implicit “all other theories” or vice versa. If the shift is from one theory to the others as a group, a theory has been somewhat “falsified.” If the shift is from each member of the group to the one theory, that theory is somewhat “verified.”

              As evidence is never certainly understood, the accuracy of any evidence is itself a theory with weaknesses similar to the theory one is attempting to test.

              It’s easier to take probability mass from one theory and disperse it among an infinity of them than to do the opposite. This is not surprising. Humans are not good at recognizing that some probability near zero is not actually zero, nor at recognizing that probability near one is not actually one.
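The probability-mass picture in the comment above can be sketched as a toy Bayesian update (the numbers are made up for illustration): a disconfirming observation shifts mass from the tested theory T to the catch-all of rival theories, without ever driving P(T) to exactly zero, since the evidence is itself uncertain.

```python
# Toy Bayesian update over {T, not-T}: "falsifying" evidence shifts
# probability mass away from the tested theory T toward the pool of
# all rival theories, but never to exactly zero.
def update(prior_t, likelihood_t, likelihood_rest):
    # Bayes' rule: P(T | E) = P(E | T) P(T) / P(E)
    num = prior_t * likelihood_t
    den = num + (1 - prior_t) * likelihood_rest
    return num / den

p = 0.5
# An observation the theory says should be rare (likelihood 0.05) but
# that the rivals, as a group, accommodate easily (0.6): mass flows
# away from T.
p_after = update(p, likelihood_t=0.05, likelihood_rest=0.6)

assert 0 < p_after < p   # "somewhat falsified", not certainly false
```

This matches the comment's point: near-zero is not zero, and near-one is not one.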

    • stevenjohnson
      Posted April 29, 2012 at 7:21 pm | Permalink

      “It is easy to obtain confirmations, or verifications, for nearly every theory — if we look for confirmations.”

      This is very close to setting up a straw man, which is not a very good way to begin.
      Every statistically controlled experiment or study sets up a bar for confirmation, which in my opinion does not make it easy to find confirmation. We are apparently meant to assume the crudest and simplest (most naive) kinds of activities.

      “Confirmations should count only if they are the result of risky predictions; that is to say, if, unenlightened by the theory in question, we should have expected an event which was incompatible with the theory — an event which would have refuted the theory.”

      Unenlightened by the germ theory, we should expect people to die, perhaps from cancer, without any signs of microbes. A risky prediction that the germ theory makes is that fatal diseases will be accompanied by germs, a risky prediction promptly disconfirmed by the autopsy of a victim of lung cancer. Ergo the germ theory is scientific, but wrong. The erroneous conclusion demanded by this principle of Popper’s suggests that no theory can be stretched beyond what its evidential foundation and internal logic permit, whether that allows “risky” conjectures or not. The failure of a risky prediction may merely mean that the theory’s limits of validity have been reached.

      “Every “good” scientific theory is a prohibition: it forbids certain things to happen. The more a theory forbids, the better it is.”

      It is not clear how this is very different from saying that the more a theory predicts, the better it is. The cell theory forbids that living things be composed of anything but cells or cell products. This formulation makes the cell theory sound like it is refuted by the abiogenetic origin of life.
      Worse, this formulation forgets the most basic prohibition in science, the repudiation of the supernatural. Far better to say that every good scientific theory accepts the metaphysical postulates of philosophical materialism.

      “A theory which is not refutable by any conceivable event is non-scientific. Irrefutability is not a virtue of a theory (as people often think) but a vice.”

      This is just wrong. String theory and the multiverse concept may be wrong, and are as of this time immune to testing and refutation, but they are scientific in that they are deductions from well-tested scientific theories. Comparing the spectra of the Moon and green cheese to test the theory that they are the same makes for a testable theory. The theory that there is a Moon of green cheese in another galaxy, however, is impossible to refute. According to this criterion, the first is a scientific theory, but the latter is not.

      “Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable, more exposed to refutation, than others; they take, as it were, greater risks.”

      The testability of theories lies partly in the phenomena they attempt to explain. This is not relevant to the scientificity of the theory. This criterion seems to be what I think they call a category mistake.

      “Confirming evidence should not count except when it is the result of a genuine test of the theory; and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak in such cases of “corroborating evidence.”)”

      Again, disease germs are not found in all dead bodies. I suppose Virchow’s postulates could be rewritten as negative, disproof steps, but it is hard to see how this is useful. Worse, in complex phenomena, false negatives due to overextension of the theory will be far more likely. The real need for theories to be compatible with each other is entirely overlooked. (Incidentally, this is not really different from the insistence on “risky” hypotheses.)

      “Some genuinely testable theories, when found to be false, are still upheld by their admirers — for example by introducing ad hoc some auxiliary assumption, or by reinterpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering, its scientific status. (I later described such a rescuing operation as a “conventionalist twist” or a “conventionalist stratagem.”)”

      Only the simplest phenomena do not require numerous auxiliary hypotheses in theories. Further, life is ad hoc, and it is not at all clear what possible criteria Popper has for labeling an auxiliary hypothesis ad hoc. Is the breaking of a natural dam an ad hoc hypothesis for the Channeled Scablands, which would otherwise refute theories of stratigraphy? A legitimate philosophy of science would be asking those questions, I think, not foolishly denying the necessity of auxiliary hypotheses in perfectly good science.

      “We can sum up all this by saying that the criterion of the scientific status of a theory is its falsifiability, or refutability, or testability.”

      There are many trivial experiments. There are many false negatives from significant experiments. Experimental results must be interpreted in light of theories of many kinds of phenomena, and the experimental evidence for them, which means that scientificity also lies in the unitarity of explanations of nature. A parapsychologist can hypothesize ESP, make a risky prediction, test it, fail to refute it, yet we can confidently assert that sending information faster than light is impossible and that parapsychology therefore is not a scientific enterprise, no matter how much it fits Popper’s criteria.

      I understand that Popper once denied the theory of natural selection scientific status, which seems to me a natural conclusion of these criteria. Any experiments that falsify natural selection a la Popper really do tend to be rather trivial, don’t they?

    • Schenck
      Posted April 30, 2012 at 11:25 am | Permalink

      FWIW, this extract was one of the first readings in a required class in my Grad School’s PhD program; basically this was one of the first things everyone in the program reads. (well, is assigned to read, not necessarily read, and in the end it wasn’t discussed either, but hey, at least it was there I guess!)

  32. Ougaseon
    Posted April 29, 2012 at 10:47 am | Permalink

    My favorite allegory for the failure of the naive Popperian view of falsifiability comes from Harry Potter and the Deathly Hallows:

    “All right,” said Hermione, disconcerted. “Say the Cloak existed…what about the stone, Mr. Lovegood? The thing you call the Resurrection Stone?”
    “What of it?”
    “Well, how can that be real?”
    “Prove that it is not,” said Xenophilius.
    Hermione looked outraged.
    “But that’s — I’m sorry, but that’s completely ridiculous! How can I possibly prove it doesn’t exist? Do you expect me to get hold of — of all the pebbles in the world and test them? I mean, you could claim that anything’s real if the only basis for believing in it is that nobody’s proved it doesn’t exist!”
    “Yes, you could,” said Xenophilius. “I am glad to see that you are opening your mind a little.”

    We see pretty much every creationist objection to evolution (and indeed most other kinds of quackery) wrapped up in this little bit of text in a children’s book about wizards…

    • Torbjörn Larsson, OM
      Posted April 29, 2012 at 12:01 pm | Permalink

      Before Beattie described naive falsifiability as not testing, I would have said that this wasn’t a failure at all. Testing eliminates, so it works.

      The problem is the strawman that the process doesn’t converge. We can eliminate “the Resurrection Stone” eventually; see my longer comment on this.

      So maybe we should say that naive falsifiability fails but the testing it is supposed to portray does not?

      Hmpf, philosophy _is_ confusing … or confused.

  33. Dominic
    Posted April 29, 2012 at 11:37 am | Permalink

    I was just reading yesterday, Max Perutz (Nobel Chemistry 1962) on Popper on Darwin. Popper split Darwinism, says Perutz, into Passive & Active, Passive being random mutation & natural selection ‘leading inexorably to the evolution of higher forms of life’, which Popper does not like as he considers it deterministic. Active is the seeking by organisms of better niches for themselves, & Popper sees that as the driving force of evolution.

  34. Peter Beattie
    Posted April 29, 2012 at 11:38 am | Permalink

    One more general recommendation and two specific points about falsifiability. Maybe the best beginner’s-to-intermediate-level introduction to Popperian epistemology and philosophy of science is David Deutsch’s The Beginning of Infinity. A superbly written book that I cannot recommend enough.

    As regards so-called “naive falsificationism” (‘I only need to observe X, which I think contradicts theory Y, in order to discard Y’), that is a grotesque Midgley-esque caricature of Popper’s ideas. Please see at least this short essay by Popper for a little more detail.

    Also, a theory has to be a good explanation for some phenomenon in order to even be considered a candidate for falsification. In the words of David Deutsch (TBoI, p. 25): “it is only when a theory is a good explanation – hard to vary – that it even matters whether it is testable. Bad explanations are equally useless whether they are testable or not.” Also, simply being willing to change your mind about some theory isn’t good enough: “Being proved wrong by experiment, and changing the theories to other bad explanations, does not get their holders one jot closer to the truth.” (TBoI, p. 22) And finally, falsification involves a comparative judgment of competing theories, since the negative deductive process of falsification never confers justification on a theory itself but only on our preference for one theory over another.

    • Torbjörn Larsson, OM
      Posted April 29, 2012 at 11:55 am | Permalink

      Well, if that is what Popper says, he isn’t describing testing properly. Testing means precisely naively deducing that which works. Say a specific parameter set, or a range of parameters, in a specific theory. You don’t have to compare theories for that, only have a sound null hypothesis. (Not in the statistical sense where your null is what you test for, but in the adopted science sense of what you test against.)

      The null hypothesis can vary; it can often be the absence of a phenomenon, et cetera. Sometimes it can of course be a competing theory, but the point is that this is not often, and in no way always, the case.

      • Posted April 29, 2012 at 10:01 pm | Permalink

        Absence of a phenomenon is most certainly a competing theory to the existence of the phenomenon.

    • Posted May 1, 2012 at 2:22 am | Permalink

      I wondered if someone else would think of Deutsch. He covers the same ground in The Fabric of Reality, esp. Chapter 3, “Problem Solving”.

      Scientific problem-solving always [sic] includes a particular method of rational criticism, namely experimental testing … Ideally we are always seeking crucial experimental tests – experiments whose outcomes, whatever they are, will falsify more than one of the contending theories. … once an experimental theory has passed the appropriate tests, any less testable rival theories about the same phenomena are summarily rejected, for their explanations are bound to be inferior. This rule is often cited as distinguishing science from other kinds of knowledge-creation. … this rule is really a special case of something that applies naturally to all problem-solving: theories that are capable of giving more detailed explanations are automatically preferred. … 

      [But] even in science most criticism does not consist of experimental testing. … we can criticise and reject [some theories] without bothering to do any experiments, purely on the grounds that they explain no more than the prevailing theories which they contradict, yet make new, unexplained assertions.

      /@

  35. Torbjörn Larsson, OM
    Posted April 29, 2012 at 11:49 am | Permalink

    note that this discussion deals with the philosophy of science, so if you think that endeavor is useless you shouldn’t be responding!

    Is that supposed to be a trick question? As I commented earlier today, if philosophy is studying the success of empiricism, it is doing science.

    Regardless of that, of course, this is philosophy describing a scientific method, testing. So we could in any case discuss what their terms mean, how well they understand the phenomena, et cetera.

    Finally, if this were a general criterion, none of us would ever be discussing the details of religion. Say, whether or not Adam and Eve existed.

    If we go a few years back, I used to say that Popper’s result was a demonstration of how philosophy helps science. I no longer think so, which is part of the reason I have a problem with it.

    Popper analyzed, I think, testability from a philosophic view. It was a good point for him to point out that it is a deductive method.

    From a science viewpoint it doesn’t matter. Testing is robust, and it eliminates faulty observation and theory both. For observation by retesting, for theory by hypothesis testing of predictions.

    But, as I have argued here before, it isn’t enough. We can use parsimony and other criteria besides to pit equally predictive theories, or nearly so, against each other: “a good scientific theory is one that best explains the data we have”. Such measures complement testing, but it is still not enough.

    By some not well understood mechanism, the process converges. As Carroll notes, “The Laws Underlying The Physics of Everyday Life Are Completely Understood”. Quite possibly there is only a finite set of possible theories at any one time – the observable universe is finite despite inhabiting an infinite-dimensional Hilbert space. Certainly we can only propose and test a finite set.

    So currently I think “falsifiability” is the philosophical description, while testability is the scientific observation of the criteria underlying a method. As many physicists have said through the years, I believe.

    The “demarcation criterion” is a purely philosophic question. Science is much larger than that; it is a social phenomenon with remarkable and useful consequences. Testability is a criterion that fruitful theories absolutely need, and it is a sanity criterion for the scientific community.

    I don’t know if one can make it more precise than that; there is so much a theory needs to be able to compete, like good people and other resources. It is like evolution in that contingency is rampant.

    The testing of string theory is a longer story, and I won’t repeat yet what I have said before here. Theory is a working title as well, so it isn’t required to tell that story in order to get to the core issues of science.

  36. Jim Thomerson
    Posted April 29, 2012 at 12:35 pm | Permalink

    As you probably know there was a good bit of soul searching by evolutionary biologists in the late 1960s. We asked ourselves if we were doing science or just telling just so stories. This led to various people you have heard of declaring “I am a Popperian.”

    Both Popper and Kuhn, in slightly different terms, speak of using theories where they work. They used terms like common or ordinary science.

    My favorite example is to point out that my house is built on a flat earth, no adjustment for the 0.6 ft per mile deviation between geodetic earth and a level line. This does not, however, support flat earth as a broad explanatory theory of the earth’s shape.

    That the moon is green cheese is a scientific theory because it is falsifiable. There was a hypothesis that the moons of Mars were, in fact, alien space ships. Eventually this hypothesis was tested by close observation. They look like rocks. The theory must then either be considered falsified, or modified to propose that the Martian moons are alien spaceships disguised as rocks.

    • Posted April 29, 2012 at 4:15 pm | Permalink

      “Both Popper and Kuhn, in slightly different terms, speak of using theories where they work. They used terms like common or ordinary science.”

      Popper’s philosophy of science focuses on the EPISTEMOLOGY of the human empirical science enterprise, on how fallible scientific theories developed by and tested against the empirical data of the physical world by fallible scientists manage over time to produce reliable knowledge of the physical world…

      …While Kuhn’s philosophy of science focuses on the SOCIOLOGY (and psychology, individual to group and vice-versa) and its dynamic influences on the process of producing acceptance within the scientific community of reliable knowledge of the physical world.

      Kuhn and Popper could both be right. Or Kuhn could be right and Popper wrong. Or Popper could be right and Kuhn wrong. Or they could both be wrong. What they (by and large) are NOT is in competition; we are NOT facing a choice of: Popper, or Kuhn?

      I am presently a Popperian fallibilist AND a Kuhnian shifting-paradigmist [so to speak] who thinks scientific knowledge advances via conjectures and empirical refutations (which pare down the competing theories by empirically falsifying those that are false, thus increasing the verisimilitude of the remaining theories) and sometimes does so in “seismic” leaps/fits (shifts of paradigms embraced within the scientific community).

      I could be wrong (and many can be found who think I am wrong), but there it is.

  37. Posted April 29, 2012 at 12:49 pm | Permalink

    Some assert that, in contrast, a good scientific theory is one that best explains the data we have. But it seems to me that this is equivalent to falsifiability, for a theory that best explains the data we have could be shown not to explain the data we have.

    The distinction involves parsimony versus falsification. You may have two alternate theories, each of which can explain the data. However, the one which does so requiring less net information (essentially, a function of rule complexity and initial condition complexity) is more probably correct. For some sufficiently versatile rule sets, they may be able to explain whatever happens, given some initial condition sets. As such, they may be non-falsifiable, but competitive on parsimony grounds.

    This can be mathematically formalized; and the formalization even derived as theorem from a more basic assumption (roughly, that evidence has a pattern). While he conjectured alternate justification for it, Popper also noted the use of parsimony/simplicity in science.
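The formalization gestured at above (parsimony as net information) can be sketched with a toy two-part code in the minimum-description-length spirit: the cost of a model is the bits needed to state it plus the bits needed to encode the data's deviations from it. The bit counts and the Gaussian residual code below are illustrative assumptions, not anything derived from the comment.

```python
import math

# Two-part code: cost(model) = bits to state the model + bits to encode
# the data's residuals under it. Lower total = more parsimonious.
def description_length(model_bits, residuals, noise_sigma=1.0):
    # Crude Gaussian code length for the residuals, in bits:
    # -log2 of a unit-variance normal density at each residual.
    data_bits = sum(
        0.5 * (r / noise_sigma) ** 2 / math.log(2)
        + 0.5 * math.log2(2 * math.pi * noise_sigma ** 2)
        for r in residuals
    )
    return model_bits + data_bits

# A versatile rule set that can accommodate anything pays for its
# complexity up front; a simple model wins unless its residuals are
# much larger.
simple_cost = description_length(model_bits=10, residuals=[0.1, -0.2, 0.1])
flexible_cost = description_length(model_bits=200, residuals=[0.0, 0.0, 0.0])

assert simple_cost < flexible_cost
```

Even though the flexible model fits the data perfectly (zero residuals), its up-front complexity makes it the worse bet on parsimony grounds, which is the sense in which a non-falsifiable but versatile theory can still lose the comparison.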

    This also leaves the sense of “testing” ambiguous. The tests here are not experimental, but mathematical tests of the description of experimental results. Which, despite Zombie Feynman, is true even with more ordinary experiment; it’s not the experiment that’s the actual test, but the bookkeeping done on the experimental results, requiring a function mapping the set of (perhaps sets of) possible results for the experiment to the set of “yes” and “no”. (Experiment design largely deals with keeping the bookkeeping simple and correct.)

    Most of the stuff that is at this level of non-falsification, however, is just the pure mathematics that is implicitly part of any scientific theory.

    • Posted April 29, 2012 at 4:25 pm | Permalink

      I don’t follow.

      Observations of phenomena and experiments with phenomena produce data — qualitative data and (with measurements) quantitative data.

      Mathematics describes — it describes phenomena quantitatively.

      Theories predict.

      Using mathematics, the predictions of theories are tested against (evaluated against) the quantitative data of additional and new observations and experiments.

      The results for a given theory might crucially corroborate that theory (with results that the theory predicted, leaving that theory “on the table” for future tests), or be neutral (not crucial) to the theory (which also leaves it “on the table” but without advancing the empirical corroboration it enjoys), or falsify the theory (crucially contradict a prediction of the theory).

      It is the DATA (most particularly the quantitative data) of the “actual experiment” that are crucial to the corroborating or falsifying of a theory — though granted, if the “bookkeeping” is poor/errant and/or mathematically invalid, we cannot learn what the “actual experiment” tells us.

      WAIT…HOLD ON a sec…

      “…Zombie Feynman…” ???

      Oops, I see that we may not even be able to get into the same conversation, let alone succeed in communicating! Never mind! (Sorry about that, I didn’t catch that bit of sentiment in time to avoid wasting a bit of your time and mine; please forgive me.)

      • Posted April 30, 2012 at 9:29 am | Permalink

        Not surprising you don’t completely follow, since I’m trying to translate pure math into English. But you come close. =)

        All science involves comparison between multiple hypotheses. A form of “null hypothesis” (basically “shit happened”) is always a trivial option for explaining a data set. So, there’s always more than one hypothesis on the table; it’s always comparing two or more.

        Yes, it is the data (and particularly, which data) that are being described that is, in one sense, crucial. However, translating dataset and two models to an ordering of “Model A better than Model B” type implicitly involves using a mathematical function. “Bookkeeping” refers to the math (in a derogatory way). Thus… the test is not the experiment itself that gathers the data, but the bookkeeping done to evaluate the models via the data at the end of the experiment — using a test-function.
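The test-function idea above can be made concrete with a small sketch (the threshold rule is my own illustrative choice, not anything the commenter specified): the experiment produces data, and the "bookkeeping" is a function mapping possible results to a retain/reject verdict.

```python
# A "test" as a function from experimental outcomes to {retain, reject}:
# the experiment gathers the data; the bookkeeping is the decision
# function applied to it afterward.
def make_test(threshold):
    # Illustrative decision rule: reject the model if the mean absolute
    # prediction error exceeds the threshold.
    def test(predictions, observations):
        total_err = sum(abs(p - o) for p, o in zip(predictions, observations))
        mean_err = total_err / len(observations)
        return "reject" if mean_err > threshold else "retain"
    return test

t = make_test(threshold=0.5)
assert t([1.0, 2.0], [1.1, 2.1]) == "retain"   # close fit survives
assert t([1.0, 2.0], [3.0, 4.0]) == "reject"   # gross misfit does not
```

The point carried by the sketch is that the same dataset yields different verdicts under different test functions, so experiment design is largely about fixing the decision function before the data arrive.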

        “Zombie Feynman” is an XKCD reference. (Feynman himself didn’t say the bit about bookkeeping while alive, so far as I can tell.)

  38. Posted April 29, 2012 at 1:18 pm | Permalink

    The theory that ALL cognitive volition (exercises of “free will” as such are commonly thought of) is rigorously strictly mechanically (pre)determined (ultimately by the initial conditions of the universe) and is therefore utterly REALISTIC ILLUSION seems to me to be a theory that cannot even in principle be empirically falsified but is widely accepted.

    But just how widely accepted this empirically unfalsifiable theory (as I have stated it) is, I don’t know; judging from this blog and other recent published writings acceptance IS wide and growing (and dissenters wary of being excoriated and marginalized may be increasingly reluctant to “speak”).

    (My own position on that theory is that it MAY or may NOT be right; but that even IF it IS right and we could somehow KNOW that it is right, we would STILL have to slog our way through life day-by-day and even minute-by-minute “making decisions” that are realistically genuine illusions which [being realistic] MUST be “made,” and so such knowledge would not get us out of the illusory “game” of “decision-making” cognitive volition-exercising.)
    _ _ _ _ _ _ _ _ _ _

    I am a Popperian fallibilist who takes Popper’s epistemology VERY seriously (my avatar photo might give that away). I grant that Popper did not settle all philosophical questions that professional philosophers still wrestle with (I even grant that there are other philosophers of science who have formulated significant improvements on Popper’s philosophy of science), but it sure seems to me (as a physical scientist) that Popper’s philosophy of science has satisfyingly settled the burning philosophical questions we physical scientists see as needing to be settled in order for us to employ (forever fallible) empirical science to produce reliable knowledge of the physical world.

    To borrow from Winston Churchill, Popper’s empirically based epistemology is not perfect, but it seems to (most of) us physical scientists to be WAY out ahead of whatever alternative epistemology is in second place. No scientific theory should ever be regarded as certainly and beyond all possible doubt true, but there are some scientific theories which are the point-in-time “way to bet” if/when you find that you MUST cast a bet. Which scientific theories are those? They are those scientific theories which (at that point in time) are the last theories standing.

    (Might you lose a bet thusly cast? Sure, you might, but that is nonetheless the point-in-time WAY to bet, IF bet you MUST; but DO NOT bet if you do not NEED to bet.)
    _ _ _ _ _ _ _ _ _

    If being (at least in principle even if not yet in practice) empirically falsifiable is not both the necessary AND sufficient condition/criterion for demarcating physical propositions (theories) from META-physical propositions, what the heck is???

    • Posted April 30, 2012 at 2:55 pm | Permalink

      What makes you think there is one such criterion? What makes you think the matter does not come in degrees? What makes you so disdainful of (science oriented) metaphysics, which is vital to the success of science (even if largely tacit in the work of same)?

  39. Emma
    Posted April 29, 2012 at 1:39 pm | Permalink

    My impression is that talking about falsification, the predictive power of a theory, or the fact that “a theory that can explain everything explains nothing” are indeed different ways of presenting a very similar idea.

    If a theory is defined as a set of rules or propositions that determine which part of the space of possibilities will actually be observed in reality (I hope that is a workable definition of a theory!), then a good theory will be one for which reality occupies only a very small part of the whole space of possibilities:
    –> it is falsifiable, because a large part of the space of possibilities is incompatible with the predictions of the theory
    –> it has predictive power, for the same reason
    –> it is the opposite of “a theory that can explain everything”
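    To make this concrete, the definition might be sketched in code (a toy illustration only; the finite “space” and the particular numbers are made up for the sake of the example):

    ```python
    # Toy sketch: a "theory" is the subset of a (here finite, invented)
    # space of possible observations that it permits.
    space = set(range(100))        # all conceivable observations
    strong_theory = {3, 7}         # permits very little -> highly falsifiable
    vacuous_theory = set(space)    # "explains everything" -> unfalsifiable

    def falsifiability(theory, space):
        """Fraction of the possibility space that the theory forbids."""
        return 1 - len(theory) / len(space)

    print(falsifiability(strong_theory, space))   # 0.98
    print(falsifiability(vacuous_theory, space))  # 0.0
    ```

    The theory that forbids almost everything is the one with predictive power; the theory compatible with every possible observation forbids nothing and so cannot be falsified.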

    What do you think (and apologies for my clumsy English!)?

  40. Posted April 29, 2012 at 2:31 pm | Permalink

    Part of the problem, I think, is that it’s just hard to tell what’s at fault when observation doesn’t match prediction. Was the experiment set up or measured incorrectly? Was the experiment influenced by an outside factor? Was the experiment an accurate reflection of the theory? Or was there something wrong with the theory?

    Which of these does a falsifying result mean?

    • Peter Beattie
      Posted April 29, 2012 at 2:49 pm | Permalink

      The OPERA experiment that seemed to have detected faster-than-light neutrinos is an instructive case here. Suppose Einstein had agreed to acknowledge FTL neutrinos as a genuine counter-example to his special theory of relativity: then all the worldly concerns about faulty equipment, bungled measurements, out-of-sync GPS, etc. would have had to be settled before the genuineness of the counter-example could have been admitted. And, interestingly enough, that is exactly the way the scientists involved in the case behaved.

      • Posted April 30, 2012 at 5:20 pm | Permalink

        Ah! I’d mentioned that twice before reading this far! :-)

        /@

  41. MadScientist
    Posted April 29, 2012 at 5:19 pm | Permalink

    I still hold that Popper’s work consists only of musings and post-hoc rationalizations, and is neither essential to the progress of science nor even terribly useful. In the hundreds of years before Popper, some ideas were thrown out because they were falsified and others because they were trivial (they could not even be shown to be true, so what’s the point in discussing whether they can be shown to be false?).

    It is not even essential that a claim be falsifiable for it to be a scientific claim. For theories, hypotheses, and claims of fact, falsifiability (aka fallibility) is a trivial inherent aspect. If you look at the early publications of the Royal Society, it is clear that demonstrability and reproducibility were among the most important criteria (thus the adherence to “Nullius in verba”). Mystical claims like “god did it” are not even demonstrable – they are merely nonsensical claims. Don’t forget that the origins of modern science lie in the question “how do we know that this is reliable information?” – truth and falsifiability were there from the start, and science owes nothing to Popper.

    It seems to me that many people have a vastly over-inflated notion of philosophy’s relevance to science.

    • Posted April 30, 2012 at 5:32 am | Permalink

      What???

      Alas, what you said (and the way you said it) is not even wrong! (Apologies to Wolfgang.)

      “MadScientist,” you have not actually given Popper a thoughtful read, have you?

      Pick up a copy of, say, CONJECTURES AND REFUTATIONS, give it a thoughtful read.

      Or not; I do recognize that (as with mathematics) Popper is not for everyone…but he SHOULD be (as math should be).

      http://en.wikipedia.org/wiki/Not_even_wrong

  42. Greg Fitzgerald
    Posted April 29, 2012 at 6:11 pm | Permalink

    As a novice at philosophical argument, I’m finding it hard to judge the quality of the arguments being marshaled in the previous comments.

    Can someone please give me a clear, concrete example of how taking falsification as the measure of a good theory could ever lead to a false description of the universe? And how adopting another criterion (say, Bayesian) would produce a more accurate description?

    I doubt I’m alone in my befuddlement, and would appreciate any response.

  43. Charles R
    Posted April 29, 2012 at 6:19 pm | Permalink

    Jerry,

    The Many Worlds interpretation of quantum mechanics is not falsifiable if you take the Standard View as your prior. In other words, we have two theories, each explains our observations but no experiment can tell the difference between the two. Falsifiability is not enough.

    • Posted April 30, 2012 at 6:20 am | Permalink

      Yes, falsifiability IS enough to distinguish (demarcate) physics from METAphysics (empirical science from non-science). If the “Many Worlds (universes)” hypothesis cannot be empirically falsified (and presently it seems not empirically falsifiable even in principle), then that simply means that the MWH is a METAphysical proposition (rather than a physical or scientific one) which MAY or may NOT be true but cannot be empirically tested.

      On the other hand, quantum theory is falsifiable, and thus far experiments which could falsify QM if QM were false have corroborated QM stunningly. QM explains chemistry exquisitely! If current QM theory IS actually false, we can expect eventually to discover empirically that it is false, and when we do, we will modify QM theory (or abandon it altogether) as future discoveries warrant. That is why QM theory is scientific.

      But the “Many Worlds” hypothesis seems not empirically testable (in principle, not just in practice); that is why it is presently NOT regarded to be a scientific theory (at least, as I suspect you may recognize, not as we presently define “universe;” that too may be revised in the future, depending on future empirical discoveries and what their implications may warrant).

      Charles R, if you are a physical scientist you already know this (or should already know it).

      If you are not a physical scientist and would like to understand why physical scientists say the things they say (and why they are, um, rather fond of Popper’s philosophy of science even though most philosophers seem hell-bent on marginalizing Popper’s wonderful insight into how empirical science’s historically spectacular success in producing forever fallible (yet nonetheless increasingly reliable) knowledge of the physical world rests on conjectures and refutations), try thoughtfully reading some Popper for your own self.

      If you have already done that, then I reckon we’ll just have to content ourselves with mutually irreconcilable perspectives — which means all is well and normal!

      And if you don’t really care to do that, well, I’m learning how that is pretty normal too…

      • Charles R
        Posted April 30, 2012 at 12:31 pm | Permalink

        If Many Worlds had been proposed first as the answer to “quantum weirdness”, then we would be having exactly the same conversation about the Standard View. The point is that both theories describe our observations, but we don’t have enough evidence to decide between the two. However, since Many Worlds is strictly simpler, we should probably go with that one. So falsifiability isn’t enough; you also need some version of Occam’s razor.

        • Posted April 30, 2012 at 5:32 pm | Permalink

          Actually, MW may be falsifiable: www hedweb com/manworld htm#detect [fill in the .s; WordPress eats comments with this URL for some reason]

          /@

  44. Piero
    Posted April 29, 2012 at 6:34 pm | Permalink

    I’m sorry if this sounds naïve, but I’ve read the whole thread and understood about half of it (being neither a physicist nor a cosmologist). But from my ignorant vantage, I’d like to put forward some observations:

    Everything we know about the universe is construed by our minds. But there’s no guarantee that our minds are adequate tools to understand the universe: they evolved for a wholly different purpose, namely to keep us alive for a few years on the surface of a tiny speck of dust where “reproduction” implies “destruction”.

    I tend to think that the concept of “truth” is overrated. What really matters is whether a given theory can predict future events with a better-than-chance probability. That 100% certainty should be unattainable is, from my point of view, irrelevant: so far no theory has ever been proven to be 100% accurate. Does the gravitational force depend on the square of the distance or on the 1.99999999999999999997584325 power of the distance? It seems logical to accept that the square of the distance is the better answer, but only because we reason that the gravitational force (or the intensity of light, or of any other field) spreads out according to Euclidean geometry. Can we ever prove that space is actually Euclidean, rather than merely quasi-Euclidean, in the sense that it might have, as a whole, a curvature of 0.00000000000000000000000000232?

    Hence, I’d propose a new taxonomy for scientific theories: not even wrong (creationism), definitely wrong (guided evolution), most probably wrong (teleological evolution), most probably right (evolution), according to the probability that predictions based on them turn out to be the case (as determined by our minds). Notice that the range does not include “definitely right”, for the reasons already noted (in much the same way, we can state with certainty that a bubble is not a book, but we cannot pin down the definition of “bubble” precisely and unambiguously).

    In summary, I believe (but cannot prove) that a probabilistic interpretation of Popper’s criterion is still valid; meteorology is often wrong, but it usually predicts the weather with a better-than-chance probability. If it does so consistently, then it falls within the “most probably right” category of my proposed taxonomy.

    • SelfAwarePatterns
      Posted April 29, 2012 at 7:23 pm | Permalink

      I like this. It seems to cover a broader range of theories, such as those in the social sciences.

      However, I wonder about predictability. What about anthropological theories, such as those concerning the reasons humans first adopted agriculture? I don’t necessarily see any predictive value to these theories, except possibly for what future evidence might be discovered by archeologists.

      • Posted April 30, 2012 at 7:11 am | Permalink

        Piero, alas, blog “readers commentary” is essentially confined to “drive-by commentary” and is not a venue for penetrating dialogue and insight-rich expositions. I recommend that you give thoughtful reads for your own self to the following; I think if you do, you will be glad you did, and you will know and understand WAY more than 99-44/100ths of your fellow nonscientists (and more than perhaps 50% of scientists — not kidding). All of these are quite comprehensible by literate persons; no special mathematical, scientific, or philosophical background is required for them to make great sense and provide important insights:

        THE CHARACTER OF PHYSICAL LAW, Richard Feynman, MIT press, 1965.

        THE MEANING OF IT ALL, Richard Feynman, Helix Books, 1998 (of lectures delivered in 1963).

        THE FABRIC OF THE COSMOS, Brian Greene, Alfred A. Knopf, 2004

        WHY DOES E=mc^2?, Brian Cox & Jeff Forshaw, Da Capo Press, 2009.

        THE QUANTUM UNIVERSE, Brian Cox & Jeff Forshaw, Penguin Books, 2011 (and if you have a chemistry background, you might also appreciate ABSOLUTELY SMALL: HOW QUANTUM THEORY EXPLAINS OUR EVERYDAY WORLD, Michael Fayer, 2010).

        A UNIVERSE FROM NOTHING, Lawrence Krauss, Free Press, 2012

        PHILOSOPHY OF SCIENCE: A VERY SHORT INTRODUCTION, Samir Okasha, Oxford University Press, 2002

        Of the next two, in the vernacular: “These be tougher.” But (for me, as one whose formal education was in physical science, not philosophy) they are WELL worthwhile, and still quite comprehensible without severe burning of mental rubber (WAY easier for most literate readers than, say, calculus or linear algebra — these books are not math books):

        CONJECTURES AND REFUTATIONS: THE GROWTH OF SCIENTIFIC KNOWLEDGE, Karl Popper, Harper Torchbooks, 1963.

        OBJECTIVE KNOWLEDGE: AN EVOLUTIONARY APPROACH, Karl Popper, Oxford at the Clarendon Press, 1972.

        Countless other excellent books are of course also available (and perhaps other commenters will offer alternative recommendations), but those are my suggestions. Peace!

        • Piero
          Posted April 30, 2012 at 8:57 am | Permalink

          Thank you very much for your suggestions. I’m currently very interested in (and very ignorant of) these matters, so the books you recommend will be first on my reading list (which now, sadly, amounts to some 200 books).

    • Posted April 30, 2012 at 9:32 am | Permalink

      Depends what sense of the word “prove” you’re using. This also ties to Hume’s Problem of Induction.

      Once again, I plug the Vitanyi-Li paper “Minimum Description Length Induction, Bayesianism and Kolmogorov Complexity”.

  45. Kevin
    Posted April 30, 2012 at 1:42 am | Permalink

    Assuming falsifiability is a necessary condition for a scientific theory, it appears, by popular accounts, that the theory that evolution is a necessary and sufficient condition of DNA is neither falsifiable nor empirically warranted as an inductive hypothesis. By popular accounts, there is no observed evolutionary process that produces a programming language.

    • Posted April 30, 2012 at 5:38 pm | Permalink

      Hmm… 

      1. Are you implying that DNA is a programming language? That would need to be demonstrated before your other claim makes sense.

      2. In a sense, all programming languages have been produced by evolutionary processes, inasmuch as they can be considered as part of the extended phenotype of H. sap.

      /@

  46. barael
    Posted April 30, 2012 at 2:52 am | Permalink

    According to my lay understanding of string theory, it has an additional problem with respect to falsification besides the impractical energy requirements of experiments.

    This additional problem has to do with the fact that string theory seems to permit a huge number of solutions (I’ve seen the number 10^500 tossed out a lot) and picking a solution that exactly describes our universe is, by odds alone, nearly impossible.

    This is, afaik, more a feature than a bug in string theory, but it’s rather obviously problematic from a falsificationist point of view.

  47. Posted April 30, 2012 at 4:22 am | Permalink

    The issue is that falsification in Popper is aimed specifically at logical incompatibility. Popper’s main aim was to dissolve the problem of induction by showing how science could proceed entirely deductively. Popper’s falsifiability is, as such, just about the worst thing a philosopher of science can do: try to radically alter scientific methodology. I don’t know why people here would like it at all.

    The problem is that any position can be made logically consistent with just about any body of evidence. The “God put fossils there to test our faith” idiocy of some creationists is an example. It certainly logically reconciles the existence of fossils with their claim that the earth is 6,000 years old. The issue is that it does not rationally reconcile the two claims that are in tension. It fails on grounds that go beyond deductive logic, because rationality goes beyond deductive logic.

    So when the person who posted the comment says, “But it seems to me that this is equivalent to falsifiability, for a theory that best explains the data we have could be shown not to explain the data we have”: it really is only equivalent if you ignore Popper’s motivation. Certainly this notion of “falsifiability” can’t even presume to solve the problems Popper was hoping to solve. It isn’t Popperian falsifiability in the slightest.

    What it amounts to is basically this: scientific theories must confront the evidence available (and to be collected), and must be able to stand up to that evidence (I leave “stand up to” vague beyond noting that it must invoke a richer notion of rationality than Popper countenanced). The question of whether that’s a noble goal for scientists seems to me pretty boring because the answer is obviously yes. And I’m even fine with calling this a falsifiability criterion (though note that it no longer demarcates science from non-science unless you want to include stuff like literary theory as a science). Just don’t mistake it for Popper.

    • Peter Beattie
      Posted April 30, 2012 at 4:29 am | Permalink

      » dyssebeia:
      The problem is that any position can be made logically consistent with just about any body of evidence.

      And that is exactly the problem that Popper solved. He devised a methodology that prevents us from using that route and steers us towards (systems of) statements that are logically inconsistent with certain (possible) observations.

      • Posted April 30, 2012 at 4:36 am | Permalink

        Can you say more about how he solved it? The Quinean (e.g.) emphasis on the failure of mere deductive falsifiability, so far as I know, gained prominence primarily after Popper. If I am wrong I would like to know.

        • Peter Beattie
          Posted April 30, 2012 at 4:43 am | Permalink

          Can I say more? :D

          Okay, granted, this is a long thread. But still, I’ll have to refer you first to all the comments I (and others) have made in this thread that address your question (and lots of others). If after reading those comments you have some specific questions, I’ll be glad to try and answer them. Deal?

        • Posted April 30, 2012 at 7:20 am | Permalink

          Dyssebeia, alas, blog “readers commentary” is essentially confined to “drive-by commentary” and is NOT a venue for penetrating dialogue and insight-rich expositions.

          It’d be easier (and more productive for you and me and Peter Beattie and EVERYone) if (as a starter, and maybe as a finish too, depending on how it may “grab” you) you’d simply fetch a copy of Karl Popper’s OBJECTIVE KNOWLEDGE and give it a thoughtful read of your own (and perhaps also peruse the abundant essays on the subject available at Rafe Champion’s website, http://www.the-rathouse.com/).

          Or not (Most people opt for “not”).

          • Posted April 30, 2012 at 7:46 am | Permalink

            Note to my own self: DO NOT compose commentary posts whilst piddlin’ on the privy, and especially not when using a cellphone that does not have a spellchecker and nesting parentheses-tracking app!

          • Posted April 30, 2012 at 12:16 pm | Permalink

            Well, yes and no.

            Yes, there is a limit to what can be achieved in a discussion on a comments board like this, and yes, as someone who will be going to graduate school in history and philosophy of science, I do need to read and critically think about Popper directly at a level that I have not yet.

            But I strongly disagree that readers’ commentary does not provide an opportunity for useful and insightful discussion. I can’t count the number of times I’ve received useful foreign perspectives on issues from even the sort of passing discussion that you frequently get in places like this. No, I may not get to the heart of an issue, but I will get perspectives different from my own, perspectives that can challenge my own when I do engage in further exploration on my own. I can’t speak for anyone else, but I find it much easier to evaluate my views if I have an alternative to contrast them with, and frequently other people are better able to provide such alternatives.

          • Posted April 30, 2012 at 12:17 pm | Permalink

            I should add that I have not yet had time to take Peter’s suggestion of reading his and others’ earlier posts. I hope to get around to that later today but I never know with myself.

  48. William Stewart
    Posted April 30, 2012 at 5:43 am | Permalink

    Dear Professor Coyne,
    For what I believe to be an effective critique of Popper’s philosophy of science, see Scientific Reasoning: The Bayesian Approach by Colin Howson and Peter Urbach. I believe that in the philosophy of science the falsifiability problem has been addressed by W.V.O. Quine and Pierre Duhem; see the Wikipedia entry for the Duhem–Quine problem. Howson and Urbach also effectively (I think) criticize frequentist theories of statistical inference. For the record, I am a psychologist and a regular reader of Why Evolution is True.
    Cordially,
    William Stewart

  49. Posted April 30, 2012 at 5:52 am | Permalink

    Falsifiability is not (just) a philosophical justification for doing science. Falsifiability comes about as a necessity of probability (just like Occam’s Razor). A falsifiable hypothesis is just more probable than an unfalsifiable hypothesis.

    Let’s say we stumble upon some binary evidence E and we have only two competing hypotheses — H and ~H — attempting to explain E. If one hypothesis can explain only E while the other can equally explain both E and ~E, then, once E is observed, the hypothesis that can explain only E is more probable. This is because the hypothesis that can explain only E would be falsified by the presence of ~E, whereas the alternative hypothesis is unfalsifiable, since both E and ~E can be equally explained by ~H.

    To make this abstraction more concrete, let’s say that E is a headache (~E is not having a headache), H is a brain tumor, and ~H is a head cold. Let’s further assume that in this alternate reality these are the only two ways of getting a headache, and that the number of people with a brain tumor equals the number of people with a head cold.

    In this alternate reality, 100 out of 100 people with a brain tumor have a headache, while only 50 out of 100 people with a head cold do. If you wake up one day in this alternate universe with a headache, which is more likely: that you have a brain tumor or that you have a head cold? The brain tumor is more likely. But if you don’t have a headache, you most certainly don’t have a brain tumor (that’s falsifiability at work), whereas you could still have a head cold, since the head-cold hypothesis is unfalsifiable with respect to headaches.
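    The arithmetic behind this toy example can be checked with Bayes’ theorem; here is a minimal sketch (variable names are mine, and the equal priors follow from the assumption of equal numbers of tumors and colds):

    ```python
    # Bayes check of the headache example: equal priors, tumors always
    # cause headaches (100/100), colds do so half the time (50/100).
    p_tumor, p_cold = 0.5, 0.5
    p_headache_given_tumor = 1.0
    p_headache_given_cold = 0.5

    # Total probability of waking up with a headache.
    p_headache = (p_headache_given_tumor * p_tumor
                  + p_headache_given_cold * p_cold)

    # Posteriors after observing a headache: tumor wins, 2/3 vs 1/3.
    post_tumor = p_headache_given_tumor * p_tumor / p_headache
    post_cold = p_headache_given_cold * p_cold / p_headache

    # After observing NO headache, the tumor hypothesis is falsified outright.
    p_no_headache = ((1 - p_headache_given_tumor) * p_tumor
                     + (1 - p_headache_given_cold) * p_cold)
    post_tumor_given_no_headache = (
        (1 - p_headache_given_tumor) * p_tumor / p_no_headache)

    print(post_tumor, post_cold, post_tumor_given_no_headache)
    ```

    The sharper hypothesis gains more from the confirming observation precisely because it staked more on it: it would have been driven to probability zero by the disconfirming one.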

    This, by necessity, makes something that can explain everything (i.e. is unfalsifiable) — like god — less probable than any alternatives. Like I said, falsifiability is a logical necessity of probability, just like Occam’s Razor.

    Imagine that E is “evolution” and ~E is “no evolution”, H is “all-powerful god” and ~H is “atheism”. Atheism can explain the arrival of intelligent life only by evolution, yet god can explain both evolution and creation exactly as the Bible describes. Thus, atheism (or anything else besides an all-powerful god) is more probable than an all-powerful god, because an all-powerful god is unfalsifiable.

    • Posted April 30, 2012 at 7:37 am | Permalink

      DITTO what J.Quinton just said !

      Moreover, in addition to not being empirically falsifiable, “Goddidit” (or any hifalutin semantic equivalent like “[supernatural] Intelligent Design”) — EVEN IF TRUE — doesn’t really EXPLAIN anything and actually HALTS (rather than advances) scientific inquiry!

  50. DV
    Posted April 30, 2012 at 7:22 am | Permalink

    >>note that this discussion deals with the philosophy of science, so if you think that endeavor is useless you shouldn’t be responding!

    I’m responding because I have time to waste and Hitchens said: “seek disputation for their own sake; the grave will supply plenty of time for silence” or something like that. :)

    In the end, falsifiability is a criterion for defining the boundary of the term “science” – a semantic problem. Nevertheless, science continues moving forward – observing, explaining, predicting – while philosophers argue over its definition.

    There, that’s my contribution to the argument.

    • Peter Beattie
      Posted April 30, 2012 at 7:47 am | Permalink

      So you didn’t even bother to read the first three paragraphs of the essay by Popper linked to at the top of Jerry’s post, consequently present two of the most common and thoughtless misrepresentations of falsifiability, and yet you think you have made a contribution? *facepalm*

      • DV
        Posted April 30, 2012 at 8:56 am | Permalink

        Hmm… I read it, and I’m not sure where the misrepresentation is. Care to elaborate?

  51. Posted April 30, 2012 at 8:16 am | Permalink

    I could be wrong, but it seems to me that a significant number (perhaps a majority, I dare surmise) of the good folks who’ve commented here have not thoughtfully considered what Karl Popper himself said in expositing his philosophy of science and empirical “falsification” as the demarcation between science (physics) and non-science (METAphysics and the rest of philosophy).

    For the 3 or 4 here who may have an interest in actually checking into Popper’s philosophy of science as he himself articulated it, a thoughtfully abbreviated version of Popper’s CONJECTURES AND REFUTATIONS is available OnLine here:

    http://www.the-rathouse.com/CRContents.html

    (And the full text of C&R is also available OnLine at http://www.questia.com/reader/action/readchecked?docId=78146549 )

    Popular as it may be to accept critiques of Popper’s epistemology and philosophy of science without first reading Popper for one’s own self, in my own personal experience I’ve learned that it is OH-SO-MUCH-EASIER to understand (and evaluate the merits of) critiques of Popper after first thoughtfully reading Popper for one’s own self.

  52. Posted April 30, 2012 at 10:39 am | Permalink

    Well I have written about the subject in my blog:

    Here: http://trippleblue.wordpress.com/2011/09/02/my-thoughts-on-falsifiability-meaning-truth-and-information/
    And Here: http://trippleblue.wordpress.com/2012/04/27/science-and-the-problems-of-falsifiability/

    Briefly, though: falsifiability by itself is not enough, though we may consider it useful to some extent. That does not mean it is worthless or naive; it means that in its raw form it is trivial. By adding or choosing other criteria of knowledge, we can later complete it.

    • Posted April 30, 2012 at 6:01 pm | Permalink

      “Falsifiability by itself is not enough…”

      Not enough for what?

      The only claim made by Popper (or anyone else) about empirical falsifiability that I am aware of is that being empirically falsifiable (vs. not being empirically falsifiable) demarcates scientific (physical) propositions (which CAN be tested against the empirical data of the physical world and demonstrated to be false IF INDEED they ARE false) from non-scientific (METAphysical) propositions (which canNOT be tested against the empirical data of the physical world and demonstrated to be false IF INDEED they ARE false).

      If you disagree that being empirically falsifiable (vs. not being empirically falsifiable) is sufficient for separating scientific (empirically testable) propositions from UNscientific (not empirically testable) propositions, what else do you see as needed to distinguish scientific from UNscientific propositions?

      And if you do NOT disagree, then what exactly is it that “falsifiability by itself” is not enough for? I want to learn what additional claim Popper (or ANYone) has made about what else (other than demarcating physics from METAphysics) empirical falsifiability sufficiently establishes that I missed, THANKS!

      • Posted April 30, 2012 at 7:19 pm | Permalink

        Well, for one, Imre Lakatos (after Thomas Kuhn) pointed out that scientific statements are not as trivial as “All swans are white”.

        Invoking the Duhem–Quine thesis, Lakatos wrote: “The typical descriptive unit of scientific achievements is not an isolated hypothesis, but rather a research programme.”

        After Popper, and particularly after the important problems with falsification (namely Duhem–Quine) came to light, philosophers of science moved toward thinking about science as a system of thought, not as isolated theories.

        //

        I myself happen to agree with Lakatos, but not with Kuhn (or Feyerabend).

        //

        Also, if you look at social sciences (like economics which is my own field), you see that they do not seem to be as falsifiable as Popper wants them to be.

        //

        Bottom line: Falsification is quite unclear. There are gray areas in which it fails, and there are areas which we see it work, but with logical inconsistencies.

        The only area in which I happen to think it works very well is when we talk of something which is “not” science, such as religion, a medium’s cold reading, or other superstitions.

  53. emmageraln
    Posted April 30, 2012 at 6:44 pm | Permalink

    Reblogged this on emmageraln.

  54. friendsofdarwin
    Posted May 1, 2012 at 1:24 am | Permalink

    It seems to me that falsifiability is a pretty good criterion with which to judge a theory that claims to be scientific – albeit not the only criterion.

    If the inventor of a theory cannot suggest one or more tests that would show the theory to be incorrect, then the theory has some serious explaining to do – assuming it claims to ‘explain’ anything.

  55. Posted May 2, 2012 at 12:43 pm | Permalink

    To my uncredentialed, nonscientist, layperson’s brain, many of the forces-dancing-on-pins arguments here confirm a skeptic’s view of the philosophy of science as mental masturbation—extremely pleasurable, legitimate to engage in, but ultimately unproductive.

    If one wants to make an omelet, an egg is still an egg, whether you call it a potential chicken or an unhatched chicken. How does philosophy help us make a better omelet?

    • Filippo
      Posted May 2, 2012 at 3:30 pm | Permalink

      Let’s assume that philosophy/philosophy of science does science and scientists no good.

      How do scientists go about honing their thinking abilities? How do they figure out the best practices in research/experimental design?

      How did Carl Sagan, for example, arrive at the critical thinking skill components of his Baloney Detection Kit? Do junior scientists/researchers receive these higher level thinking skills from mentor scientists/profs on a more-or-less apprentice basis?

      There are non-scientists out there who surely are pretty good thinkers. Consider The Hitch and his Oxford PP&E (Philosophy, Politics and Economics) as the foundation for his own most formidable autodidactical powers.

      If not the word “philosophy” in some shape or form, what words/phrases are generally acceptable to use to describe the source of these cognitive methods/techniques which constitute the “scientific method” and “critical thinking”?

      • Posted May 2, 2012 at 7:39 pm | Permalink

        My critique was directed specifically at philosophy of science. I omitted “of science” the second time. Apologies.

        I happen to be fascinated by philosophy in general. But discussions about whether science is useful, in the sense of how “real” it is, seem to me to be beside the point.

        Here’s another analogy to try and make the point clearer:

        The purpose of a fisherman’s net is to catch fish. It is uniquely designed for that purpose, and has been refined over many years.

        Let’s imagine that a fisherman has asked for help improving the yield of his net.

        This is clearly a problem of the type the scientific method is well-suited to address. There are many different approaches, and both theoretical and experimental science can be useful here.

        There is also some use in asking, why does the fisherman need more fish? Is it “responsible” for her or him to do so? Should the net be designed more humanely? How do we determine “moral responsibility” or measure “humaneness”?

        On the other hand, of what utility is pondering whether a better net will catch more fish, or whether more fish will be caught in a better net?

        There is more than a whiff of toxic Critical Studies in a lot of philosophy of science discussions I come across today. Just my uninformed but interested reaction.

    • Posted May 4, 2012 at 10:09 am | Permalink

      Well, is the question of how “real” it is beside the point?

      There is a proof (given, I think, by Aristotle) on the question “why do we need philosophy?”:

      If we need philosophy, then good for us. Even if we don’t need philosophy, we still need philosophy to tell us why we don’t need it.

      But I digress, since we want philosophy of science: only when one asks oneself “What is science?” can one truly realize that it is a valid, but hard, question.

      If we believe that science only “predicts” things, then yes, asking how real that is would be beside the point. But tell me, my friend, do scientists believe only that? Or do they say science “explains” as well?

      Is it not important to think about how real those explanations are?

      //

      One has to be careful when one talks about mental masturbation. Philosophy of science helps scientists with perhaps one important thing: is what they think is science real? Or maybe it is just a semi-accurate description of what reality actually is? Or maybe it’s all wrong, far from reality?

  56. Posted January 8, 2014 at 6:28 pm | Permalink

    This is a great summary. The idea of God is not a scientific theory. Creationism does not explain the origin of species and has no place in science classes. It’s impossible to find evidence of God’s existence or non-existence. That seems to be fairly clear.

    Why then do scientists try to argue that God does not exist and ask religious people to show them evidence? It seems to me the correct message should be “keep unscientific ideas out of science and come back when you understand what science is and how it works”. On the other hand, why do some scientists find it appropriate to express opinions on religious issues and attempt to use the scientific method where it does not belong? This is what puzzles me.

    I understand the argument “but religion causes harm – that’s why”. Is that some biblical “eye for an eye” logic?

  57. Posted March 17, 2014 at 9:51 am | Permalink

    The Big Bang theory has become unfalsifiable, because one can invent ad hoc explanations, such as dark energy and dark matter, to explain away the discrepancies between theory and observation. The universe is expanding more quickly than the Big Bang theory predicted? No problem. Just add some dark matter and dark energy and voila, the theory lives on. Find something else that is not predicted? Just keep on adding ad hoc explanations.


One Trackback/Pingback

  1. [...] Coyne asks, “Is falsifiability a good criterion for a scientific theory?“  My short answer is “No”, but I’ll try to flesh that out.  Coyne writes: [...]
