Victor Stenger and Janna Levin on (our lack of) free will

June 4, 2012 • 4:57 am

UPDATE: Apologies to Victor for suggesting that he himself was trying to save the words “free will,” which he wasn’t; he suggested replacing them with the word “autonomy” (I’ve modified the text to that effect).  As I read his piece yesterday and wrote mine this morning, I somehow forgot that Victor also called for a change in the retributive justice system, so I’ve crossed out the part of the text that implies that he thinks otherwise.  On that, as on most issues, we’re in agreement. However, I still disagree with his attempt to find some virtue in compatibilism by saying that our decisions are “ours.”  His statement that “[compatibilists] also make another good point when they argue that even if our thoughts and actions are the product of unconscious processes, they are still our thoughts and actions” doesn’t seem like a very good point to me.  It’s bloody obvious, and has little bearing on the issues. I don’t think we need to trawl through philosophy to try to save some idea of “responsibility” beyond “that person did it.”

____________

Victor Stenger is still derailing the accommodationist juggernaut at PuffHo with his latest contribution, “Free will is an illusion.”  I think it’s drawn from his new book, God and the Folly of Faith: The Incompatibility of Science and Religion, in which he covers the topic.

If you know my stand on free will (we don’t have it, at least in the dualistic or “we could have chosen otherwise” sense), you’ll know Victor’s.  He argues that quantum indeterminacy can’t operate at the level of the brain, and I’ll take his word for that. But even if it did, it wouldn’t give us the kind of freedom that all of us (especially the faithful) want.

Where Victor and I part company is that, like nearly all New Atheists who write on this topic (Alex Rosenberg is an exception), Stenger appears to be a compatibilist: that is, he thinks we can salvage our notion of free will by using a different definition:

But here’s some consolation. Even though at the quantum level there is no rigid determinism, the compatibilists are correct in viewing the operations of the brain as causal processes. They also make another good point when they argue that even if our thoughts and actions are the product of unconscious processes, they are still our thoughts and actions. In other words, “we” are not just our conscious minds, but rather the sum of both conscious and unconscious processes. While others can influence us, no one has access to all the data that went into the calculation except our unique selves. Another brain operating according to the same decision algorithms as ours would not necessarily come up with the same final decision since the lifetime experiences leading up to that point would be different.

So, although we don’t have libertarian free will, if a decision is not controlled by forces outside ourselves, natural or supernatural, but by forces internal to our bodies, then that decision is ours. If you and I are not just some immaterial consciousness (or soul) but rather our physical brains and bodies, then it is still “we” who make our decisions. And after all, that’s what the brain evolved to do, whatever role consciousness might play. And, therefore, it is “we” who are responsible for those decisions.

To me that’s not free will in any meaningful sense: all he’s saying here, really, is that individuals appear to make decisions and perform actions.  That’s true of every animal that’s even remotely sentient.  So if our decisions are “ours,” so are those of rotifers, snakes, and squirrels.  When he says “‘we’ are not just our conscious minds, but rather the sum of both conscious and unconscious processes,” how does that confer freedom? How does our unconscious make our decisions more “free”, especially since “free will” is classically connected with conscious decisions?  Further, most of the decisions of other species are probably largely unconscious: programmed, hard-wired behaviors.

And so what if nobody else has access to our data, or if another brain wired differently would make different choices? Two computers that are wired or programmed differently would also make different decisions, but that doesn’t give them free will.
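
To make the computer analogy concrete, here is a minimal sketch (the scoring scheme, weights, and option names are invented purely for illustration) of two deterministic programs that differ only in their “wiring” and so reach different “decisions” from identical inputs; neither is any freer than the other:

```python
# Two deterministic "deciders" that differ only in their weights (their
# "wiring"). Same options, same inputs, no randomness anywhere.

def make_decider(weights):
    """Return a decision function whose behaviour is fixed by its wiring."""
    def decide(options, features):
        # Score each option by a weighted sum of its features; take the max.
        scores = {o: sum(w * f for w, f in zip(weights, features[o])) for o in options}
        return max(scores, key=scores.get)
    return decide

features = {"fold": (0.2, 0.9), "call": (0.8, 0.3)}   # made-up numbers
cautious = make_decider((0.3, 1.0))
aggressive = make_decider((1.0, 0.2))

print(cautious(["fold", "call"], features))    # always "fold"
print(aggressive(["fold", "call"], features))  # always "call"
```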

Our decisions are ours, a rotifer’s decisions are its, and Fluffy the Cat’s decisions are hers.  These are just words—almost deepities in the Dennett-ian sense. And what does Stenger mean by the fact that we are “responsible” for our decisions? That’s another deepity, for all it means is that my “decisions” appear to emanate from the cranium of a person known as Jerry.  But what about real responsibility: given determinism and the effect of our environment on our brains, how does that affect our notion of moral responsibility?  As you know, I think it does, and should have an effect on how we treat criminals.  Those who claim otherwise are, I think, ducking the scientific facts in favor of adhering to a comfortable status quo.  Determinism should promote compassion.

What is important to me is whether our decisions are predetermined (with perhaps a dollop of quantum indeterminacy), and therefore we lose our freedom to really make different choices when given alternatives.  The old notion of true freedom—the ability to do otherwise—has been killed dead by science.  Why are people trying to save the notion of free will by confecting other definitions? Why aren’t they, instead, telling the faithful that they can’t really choose whether to be saved or make Jesus their personal saviour? The faithful are dualists, and religion is our enemy. Much of religion is based on true dualism, and on the existence of a “soul.”  Shouldn’t we be dispelling that dualism instead of engaging in arcane philosophical arguments about what “free will” really means?

What galls me most is when philosophers make a virtue of necessity by telling us that despite determinism and the illusion of dualism, what we do have is actually the kind of free will we want, and the only kind worth wanting. That’s not the kind of free will I want! I want the ability to choose freely among alternatives, just as I want to live forever. But we’re so constituted that neither of these is true. Still, I, like all of us, pretend otherwise. Nevertheless, it’s better to live with the truth: our brains are computers made of meat, and some day that meat will spoil.

Let’s just get rid of the words “we have free will,” and say instead that “our behavior is controlled by factors we don’t understand.”  Isn’t that more accurate? (Stenger suggests using “autonomy”, which to me is less appealing because it means “free from external control and influence,” which still smacks of dualism).

***

Janna Levin, a polymath physicist at Columbia University (she does science, writes popular books as well as novels, and produces essays on art) was interviewed yesterday by Krista Tippett on the NPR show “Mathematics, purpose, and truth.” Have a listen: Levin is fiercely smart and articulate and has absorbed her science into her everyday life. She also agrees with Stenger and me about the lack of free will. Her discussion of God’s nonexistence is from 13:35 to 15:00 in the interview.  Levin’s denial of free will goes from 17:45 to about 19:57, but do listen on to at least 21:30, as she connects the non-intuitiveness of modern physics with the peculiar way we evolved.

Although the odious Tippett tries to turn Levin into some type of quasi-religious or spiritual person (that’s Tippett’s schtick), Levin won’t be moved. She’s a hidebound atheist and a determinist. I have to say that listening to Tippett is like listening to fingernails on a blackboard.  I can account for her popularity only by assuming that a large section of the educated, well-off, and liberal public that listens to NPR has a soft spot for spirituality.

99 thoughts on “Victor Stenger and Janna Levin on (our lack of) free will”

  1. “To me that’s not free will in any meaningful sense”

    To me that is free will in the only meaningful sense, and one that is useful to boot.

    It differentiates between accessible behaviours and non-accessible behaviours – I can call an all-in in poker with AK, and I can fold the hand, and I have on occasion done both, though not at the same time. I can’t levitate, though.

    And it differentiates between behaviours which are forced on me at gunpoint, or its equivalent, and those free of that sort of external constraint.

    I don’t see the sort of free will which demands magic as meaningful.

    Quantum indeterminacy is a red herring, pure and simple. It is the degree of predictability that an organism can achieve that allows the brain to come up with informed decisions.

    1. David, I have to agree with your assessment:

      “To me that’s not free will in any meaningful sense”

      To me that is free will in the only meaningful sense, and one that is useful to boot.

      The kind of absolute free will that is being denied is obviously not the kind of free will that we possess (if talking about free will is helpful in this context). But negating the possibility of choice seems to make a nonsense out of arguing that there is no free will, since the outcome of our agreement or non-agreement is simply determined. That makes a nonsense of acting for reasons. Of course, once a decision is made, the option of having been able to do otherwise is just a red herring. But arguing that we have no ability to choose makes a nonsense of arguing for or against free will, because it implies that we do not do things for reasons (as opposed to causes). Of course, quantum indeterminacy has no role at all to play in this philosophical game.

      1. Eric,

        “That makes a nonsense of acting for reasons.”

        Not to mention it seems to make nonsense of any normative or prescriptive language, moral or otherwise. It’s always puzzling to see folks who first assert “We don’t actually have a choice in any real sense” and follow that up with prescriptions for what we OUGHT to do in case X, Y, Z….

        Vaal

        1. Vaal.

          I completely agree. You could say the same about Richard Dawkins’s many statements about the beauty and grandeur of nature and the universe. It is not sufficient to invent just-so stories to vaguely link ethical and aesthetic perceptions and judgments to “kin selection” etc., nor to write off consciousness as an illusion. In fact, that pretty well assumes widespread dysfunction in the evolutionary process. Evolution is demonstrably real, but it is no more certain that Natural Selection is the whole story than it was that Newton’s mechanical and gravitational laws were a complete description of the universe. The contents of our *minds* are the only data we have, and the material reality we believe in is just one subset of the inferences we draw.

      2. Eric,

        The following suggests you still think that determinism is a problem for rationality and that your notion of real choice, deliberation, etc. is therefore contra-causal:

        “But negating the possibility of choice seems to make a nonsense out of arguing that there is no free will, since the outcome of our agreement or non-agreement is simply determined.”

        It’s both the case that we are (mostly) rational and that we are fully determined in how we adjust our beliefs in response to arguments, so determinism doesn’t undercut the need or desirability of making arguments to change beliefs to become more reality-based. Adducing reasons, and having beliefs change in response, is a type of deterministic process that leverages logic, evidence and a person’s reasoning capacities. Having any sort of indeterministic slack or contra-causal capacity in the process wouldn’t add to the syllogistic force of an argument, or to the reliable influence of evidence on beliefs; rather, quite the opposite.

        The argument that we all agree on is that there is no *contra-causal* free will, and to make the argument (or any other argument) we need not suppose – contra all the evidence – that we are exceptions to determinism, or that determinism makes nonsense of making arguments.

        http://www.naturalism.org/resource.htm#rationality

      3. Eric,

        In the end our “reasons” are the same as “causes”; they are just causes structured in a certain way, namely by brains and computers that represent the world and counterfactuals in certain ways.

        One of the easier ways to look at this is with the chess computer. Both the human and the chess computer make moves in the game based on “reasons,” on an understanding or a programming that hooks into the logical structures of the world or the game. Our “reasoning” is different because we have lively conscious minds that sense things, feel our awareness constantly shifting, and release feel-good emotions when we lock onto a good “reason” or logical structure that will help us win the game (that is, our “reason” hooks up to a good, coherent, perhaps undeniable, structure/reason of the world). But the running of our thoughts and decisions from one moment to the next is deterministic (non-free) just as it is for the chess computer, and there is no significant reason to think otherwise. (A toy sketch of this kind of deterministic move selection appears at the end of this comment.)

        That kind of structure in our brain/mind during chess games probably applies to other kinds of processing, such as reasoning about moral and political issues, as well.

        Our self is free, if all you mean is that our self/brain/mind is running some complex, deterministic internal program that is not immediately being pressed upon by external processes–but then so is the chess or poker computer you are playing.
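
        Here is a toy sketch of the kind of deterministic move selection described above (the move names and evaluation numbers are invented for illustration; a real engine is far more elaborate, but no less deterministic):

        ```python
        # A deterministic move chooser, sketched: survey the legal options,
        # evaluate each by a fixed rule, and take the best. Given the same
        # position twice, the same move comes out twice; the "reasons" just
        # are the causes, structured by the evaluation.

        PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

        def evaluate(move):
            """Toy evaluation: value of whatever the move captures (0 if nothing)."""
            return PIECE_VALUES.get(move.get("captures"), 0)

        def choose_move(legal_moves):
            # max() is the whole "decision": fixed inputs in, fixed choice out.
            return max(legal_moves, key=evaluate)

        legal_moves = [
            {"name": "Nf3", "captures": None},
            {"name": "Bxb7", "captures": "pawn"},
            {"name": "Qxd8", "captures": "queen"},
        ]
        print(choose_move(legal_moves)["name"])  # "Qxd8", every single time
        ```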

    2. And it differentiates between behaviours which are forced on me at gunpoint, or its equivalent, and those free of that sort of external constraint.

      It sounds like you’re approaching free will from more of a legalistic standpoint than a philosophical one. After all, our decisions are always constrained by our physical circumstances, but you’re distinguishing direct coercion from other constraints. There’s a wide spectrum of coercive behavior, ranging from the threat at gunpoint you mention to the threat of losing your job if you choose to sleep in. I’m not sure where you draw the line between what counts as free will and what doesn’t.

      1. Don’t worry about the boundaries. As with most concepts there is no line, but rather a gray area. It is the same with life versus non-life, conscious versus non-conscious, good versus bad.

  2. The more I look into the free will/determinism debate the more it strikes me as a solution in search of a problem. The dualism version is fairly easily dismissed, and the subsequent goal post shift of redefining free will in order to preserve it is so severe as to render it meaningless. While bemoaning the horrible meaninglessness that a lack of free will might imply, proponents of compatibilist free will have no coherent test of what decisions are or are not the product of their precious free will. There is in fact no way that I know of to differentiate a deterministic world from a world that possesses compatibilist free will.

    I have only found nervous philosophical hand wringing and arguments from consequences. I have personally put the matter aside as a waste of my time.

    1. Why would compatibilists need to make such a distinction? Compatibilist free will is deterministic to start with.

      1. You are correct, my statement was sloppy.

        I should have stated that a world with compatibilist free will is indistinguishable from a world without it. Compatibilist free will is indeed deterministic by definition.

    2. Observer,

      Agreed. John K. seems confused about the nature of compatibilism.

      John K.

      “proponents of compatibilist free will have no coherent test of what decisions are or are not the product of their precious free will.”

      Actually it’s pretty simple. Take the case of me sitting in my chair. Am I sitting there of my own Free Will? Well, on the compatibilist notion, if I am sitting here due to my desire to sit here, and if I could get up should I desire, then YES I’m sitting here of my own Free Will.

      Want to test this empirically? Have me sit in my chair, and observe how I can repeatedly get up or sit down if I desire to.
      Voilà – you can empirically test and observe my claim and see Free Will in action.

      Want to test what it is like when such Free Will is absent? Restrain me to the chair and induce in me a desire to stand up. Now, I cannot, in fact, fulfill my desire to stand, and I’m therefore NOT sitting in the chair of my own free will.

      So…you were saying…?

      Vaal

      (If you want to deny the above by saying “but that’s not free will” you are just begging the question, since your challenge is for someone to explain a test of COMPATIBILIST free will – which I assert is a notion of free will quite consonant with our normal notions of choice and free will).

      1. So if I am able to create a machine that can repeatedly sit and stand in a chair, that robot is possessed of free will so long as it is not disabled or restrained?

        I think not. Your test is far from complete.

        We are not talking about legal definitions here, which have obvious meaning and testability.

        1. There is of course an implied prerequisite of consciousness. We’re only talking about conscious agents when we talk about free will.

          Your example of a machine mechanically sitting down and standing up isn’t anywhere close to straining the imagination. Nobody will mistake it for having free will. But if you can create a robot advanced enough that we can say it is a conscious agent, then you can begin to ask if 1) it can rightly be considered to own intentions and 2) it is unrestrained from acting on those intentions by other intention-owners.

          1. You are quite right, but we quickly find ourselves in very murky water. If you are going to add a requirement of consciousness to free will, you need to have a test for consciousness. We are still very noticeably lacking a way to consider an action and determine whether it is an exercise of free will or not. At what point does a computer program become conscious? How are we to determine when it can “rightly be considered to own intentions”? Computers are programmed with goals and intentions all the time; what kind of autonomy will be required for us to grant that they have free will? Can a human brain be damaged enough that it loses its free will?

            These distinctions are never made, and the term remains devoid of any real meaning.

            My robot example was mostly an effort to show how the test proposed by Vaal was lacking.

          2. The distinction that is usually made is “when we can no longer determine any differences in the responses of the machine compared to humans.” Though that may not render an absolutely accurate model of reality, it is the best we can possibly do. Then again not even formal scientific theories are absolutely accurate models of reality.

          3. The boundary conditions will always be murky. They already are with consciousness, even without bringing free will into the discussion. Surely brain damage affects level of consciousness. Even sleep affects consciousness. This doesn’t mean that consciousness doesn’t exist or is devoid of meaning just because we have a practical problem drawing a boundary line.

            We have a practical problem with species boundaries too, but that doesn’t mean the classification is meaningless.

          4. I was only referring to consciousness vis-à-vis free will; I have no intention of denying the existence of consciousness. If my robot is to be disqualified from free will on the basis that it is not a consciousness, I want to make sure this is more than a magical requirement. If I can understand what wickets the robot fails to meet, then I can create a reasonable test for free will. I am talking about properties of an entity. A person is still a consciousness when asleep; dormant, not destroyed and reborn when waking. I want to know what properties an entity, in this case my theoretical robot, requires to be a consciousness. Distinguishing between an awake person and one that is asleep, in a coma, or dead is not the same thing.

            With different species we can at least refer to the ability of life forms to create viable offspring, even if not all of them fit neatly into sharp divisions. I am aware of no such reference point for conscious or “not-conscious” entities.

        2. John K,

          “So if I am able to create a machine that can repeatedly sit and stand in a chair, that robot is possessed of free will so long as it is not disabled or restrained?”

          See DV’s reply.

          You’ve forgotten some components there. To know whether I have free will or not requires that I have a will (desires), that I know what actions I could take to fulfill the desires, and that I can indeed take those actions and I’m not deceived about the situation.

          We humans have the necessary faculties for free will. IF you could build a robot that had such faculties, then yes we can start testing it for free will in the same way as you’d test me for free will.

          “We are not talking about legal definitions here, which have obvious meaning and testability.”

          You telling us what we are talking about as it concerns free will does exactly what I mentioned before: it begs the question. You can’t ask a compatibilist to explain how you’d test for free will and then reject the answer on the grounds that “that’s not free will,” because your demand was: GIVEN YOUR NOTION OF FREE WILL, how would you test for it?

          Even if the compatibilist notion of free will ends up being similar to the legalistic versions, then insofar as it is testable, as you admit those are, your claim that compatibilist free will has no coherent test is false.

          Vaal

          1. Rats, cross posting now. I apologize if you have already addressed this.

            I want a test in order for the notion of free will to be coherent. If I want to posit the existence of a McGuffin, it is not unreasonable for someone to ask for clarification on what that is and what its existence might mean.

            I have no meaningful notion of compatibilist free will, which is the whole problem. I do not think anyone does, which is why I think the term is meaningless. Your empirical test was “Have me sit in my chair, and observe how I can repeatedly get up or sit down if I desire to.” My robot can pass this test, unless you want to be more specific about what qualifies as “desire”, which was not part of the test you proposed. The robot either possesses free will or your test is invalid.

            If you do indeed want to ascribe free will to various children’s toys, which I doubt, then I can admit that the free will you are talking about does indeed exist. I have no difficulty in understanding the legal concept of free will if it applies only to humans by definition, but then the question of “do we possess free will?” is tautologically “yes” and still unworthy of discussion.

          2. Jon K.

            “I have no meaningful notion of compatibilist free will, which is the whole problem. I do not think anyone does, which is why I think the term is meaningless.”

            But I supplied one, and I do not think you are (or need to be) as baffled as you think.

            I have depicted free will claims as being empirical claims about our powers to fulfill our desires. What exactly is so baffling about that? It’s essentially the same logic you and most people use all day long.

            Presumably when you and I say things like “I put the money in my checking account, but I could have put it into my savings account if I wanted to,” or “I lifted the 40 lb weight but I could have lifted the 60 lb weights if I wanted to,” then you understand we are making quite normal empirical claims about our powers to act on our desires. We often make true, informative statements this way to one another. If you think when you and everyone else say these things they are actually nonsense, or meaningless and baffling, then I suggest your confusion and bafflement is more fundamental and all encompassing than is contained in your questions about compatibilist free will.

            But I don’t think you are really baffled by yourself or others when you make claims about what you could do or could have done. Just map the logic of our everyday claims about what can and cannot happen onto compatibilism, as that is essentially what compatibilism (the version I accept, anyway) is.

            Vaal.

          3. Jon K.

            “My robot can pass this test, unless you want to be more specific about what qualifies as “desire”, which was not part of the test you proposed.”

            First, “desire” certainly WAS part of the test I proposed. Go back and read it: every paragraph of my claim and test related free will to the ability to act on a desire!

            Second: Your robot issue is actually a red herring. I understood you to be challenging compatibilists (humans) to show a test of “their precious free will,” meaning, for example, any compatibilist’s free will. Which I did – a test of my own free will. Bringing in questions about robots is a red herring.

            The compatibilist concept I espouse follows from an entity having “desires” and whether the entity has the power to fulfill those desires or not. We only need to admit that we are creatures who have desires, and who act to fulfil our desires to get this off the ground. If you accept we have these properties (desires, ability to take actions to fulfill those desires) then we have an example, us, to test for free will. Whether ANY OTHER entity, animal or robot, has these properties is an interesting question but not necessary for this discussion so long as WE have those properties.

            So do you want to deny we have desires? If so, I’d like to see how without you putting yourself on a trip to fringe-loony-town. So I presume you agree we have desires, which makes your demand concerning other entities like robots moot.

            But since you brought it up: what if you thought this REALLY DOES turn on what we mean by having a “desire”? Well, presuming you agree we humans have desires, I’d ask you, what do YOU mean when you say we have a “desire”? If you are allergic to the idea of giving simple robots “desire,” perhaps you’ll end up with a definition that nicely delineates “desire” within the scope of human properties, and not robots. If so, well then you’ve just given ammunition to the idea that a concept of free will based on desires does not needlessly open the door to accepting free will in any robot. Well done! But what if you end up with a definition that, it turns out, would apply to robots as well? Then it seems you would be in just the same uncomfortable position you think you see in my position, so what is your beef? And we’d still have “desires” on which to base free will. So unless you are willing simply to agree that we generally know what we are talking about when we say we have a “desire,” I’d ask for your definition as well, to see where it leads.

            As for me, I subscribe to the following conception of desires and beliefs: desires and beliefs are both mental attitudes toward propositions. A belief that “my house is painted blue” equates to a mental attitude that the proposition “my house is painted blue” is true, whereas a DESIRE is a mental attitude that a proposition is to be made, or kept, true. Hence the desire that “my house is painted blue” equates to the mental attitude that the proposition “my house is painted blue” is to be made, or kept, true. (I.e., if my house isn’t blue, my attitude is that it is to be painted blue; if it is already blue, my attitude is that it is to be kept blue.)

            Can robots have mental attitudes toward propositions, and hence beliefs, desires, and ways to rationalize about and act toward fulfilling those desires? Maybe this is possible. If so, then we would indeed start to legitimately ask questions like: do they have free will? (A toy sketch of this belief/desire picture appears at the end of this comment.)

            Vaal

            (There is a bit more to the compatibilist ideas as concerns the difference between humans and non-free-willed entities, but that’s a start).
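
            A toy sketch of the belief/desire picture above (the class, fields, and example propositions are invented for illustration, not a serious model of a mind):

            ```python
            # Toy rendering of the propositional-attitude idea: a belief is an
            # attitude that a proposition IS true; a desire is an attitude that
            # a proposition is TO BE MADE (or kept) true. "Acting of one's own
            # free will" is then: acting on one's desire, unrestrained.

            from dataclasses import dataclass, field

            @dataclass
            class Agent:
                beliefs: set = field(default_factory=set)   # propositions held true
                desires: set = field(default_factory=set)   # propositions to be made/kept true
                restrained: bool = False                    # e.g. strapped to the chair

                def acts_freely(self, proposition, able_to_bring_about):
                    """Compatibilist-style check: the agent desires the proposition,
                    can bring it about, and isn't externally restrained."""
                    return (proposition in self.desires
                            and able_to_bring_about
                            and not self.restrained)

            vaal = Agent(desires={"I am standing"})
            print(vaal.acts_freely("I am standing", able_to_bring_about=True))   # True
            vaal.restrained = True                                               # now tied down
            print(vaal.acts_freely("I am standing", able_to_bring_about=True))   # False
            ```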

          4. I have always thought, perhaps incorrectly, that the question of free will is more than the ability of humans to make decisions. People can imagine various consequences based on their understanding and work towards carrying particular scenarios out. This is not controversial. Computers, also, can make decisions. It is as simple as an “if statement” in code. Computers are not supposed to have free will just yet, however. I have been hammering on this distinction in an attempt to understand it. Animals, insects, and bacteria also make decisions yet are somehow not completely possessed of free will.

            A desire, to me, is a goal that a person has. The human element is part of the definition. Programs are routinely created to accomplish goals, but most people will not call those goals “desires” because they are not directly human. If desires do indeed require a human, then the argument that the robot has no free will boils down to the fact that it is not human. Moreover, humans are granted free will by definition. If humanity is a requirement for free will we need only say so, and the question of whether or not we have it is no real question at all. We might as well ask if humans possess human brains.

            The empirical portion of your test, observing a person using a chair as they desire, does not allow for a way to observe desire. That is why the robot passes the test: it fulfills the observable criteria. I could create a robot with a timer that will attempt to use the chair every 10 minutes (roughly the sort of thing sketched at the end of this comment). I want to know if the timer and the “use chair” routine constitute desire, and if not, why not. If it is only because the robot is not human enough, then once again we need only say that free will is a property that only humans possess by definition.

            Perhaps I am drawing too much on the dualism model of free will, where our decisions are sacrosanct and free of the physical world, governed by a “spirit”. Since this invokes a supernatural agent, it is easily dismissed. Compatibilist free will seems to simply rip out the supernatural elements to maintain the concept, but the supernatural elements are so intrinsic that boldly removing them leaves the idea largely incoherent. Why attach the word “free” to a will that is so deterministic?

            You keep using criteria like belief, desire, and mental attitude. These are words we use to describe human behaviors. The problem I keep running into is the unknown method of applying these descriptors to things that are not human. I have no methodology for applying them to non-human things, which is what renders the compatibilist free will idea meaningless. If they cannot be applied to non-human things, then free will is only granted to humans by definition and still not very meaningful.

            (At times my tone has been a bit harsher than I would like. I appreciate the discussion and gladly note that you have been largely unaffected by the aggressive nature of my writing. Thank you for that.)
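
            For concreteness, here is roughly the sort of timer-driven controller imagined above (a sketch only; the function names and the ten-minute interval are illustrative). It satisfies the observable sit/stand criteria, which is exactly the point of the challenge:

            ```python
            # Sketch of the timer-driven chair robot described above: every ten
            # minutes it "decides" (a bare if-statement) whether to stand up or
            # sit back down. It passes the observable get-up/sit-down test; the
            # question is whether the timer plus the "use chair" routine amount
            # to a desire, and if not, why not.

            import time

            def stand_up():
                print("robot stands up")

            def sit_down():
                print("robot sits down")

            def chair_robot(cycles=3, interval_seconds=600):
                seated = True
                for _ in range(cycles):
                    time.sleep(interval_seconds)   # the "every 10 minutes" timer
                    if seated:                     # the whole "decision" is an if-statement
                        stand_up()
                        seated = False
                    else:
                        sit_down()
                        seated = True

            # chair_robot(cycles=2, interval_seconds=1)   # quick demo without the long wait
            ```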

  3. “To me that’s not free will in any meaningful sense:”

    By my reading, Stenger does not say, or imply, that at all. Everything he says in the paragraphs you quoted seems to be accurate and straightforward. He also goes on to say,

    “Calling it “free will” (as compatibilists do) is too confusing, since it suggests some form of dualism, supernatural or not; so let’s call it “autonomy.””

    Which seems to pretty unambiguously show that Stenger does not think that what he described is free will either.

    “And what does Stenger mean by the fact that we are “responsible” for our decisions? That’s another deepity, for all it means is that my “decisions” appear to emanate from the cranium of a person known as Jerry. But what about real responsibility: given determinism and the effect of our environment on our brains, how does that affect our notion of moral responsibility?”

    It is almost like you didn’t read the whole article. Stenger’s views on responsibility seem to be quite similar to those you have espoused in this post and others in the past. He closes his article with this,

    “Obviously, we cannot have a functioning society if we do not protect ourselves from people who are dangerous to others because of whatever it is inside their brains and nervous systems that makes them dangerous. Still, given that we don’t have libertarian free will that sets us above causal laws, it would seem that our largely retributive moral and justice systems need to be re-evaluated, and maybe even drastically revamped.”

    I don’t know, maybe it’s my interpretation of Stenger’s article that is off, but it looks to me like you misread him here.

    1. Yes, I did forget about the “moral responsibility” part when I wrote the piece this a.m. (I read it yesterday); so I apologize for that and have issued an update, as well as crossing out my erroneous criticisms. I still don’t like the word “autonomy,” though, since it smacks of a freedom that we don’t have (I give the first definition I found in my online dictionary). But yes, Victor and I agree on nearly all these issues; I disagree only with his attempt to find some sort of compatibilism in the fact that our decisions are “ours.”

      1. The fact that we make decisions using our internal decision-making capacities is compatible with the fact that this process is deterministic. But you think we don’t *really* make decisions (so they can’t be “ours”) since it’s all determined – you’re holding on to a libertarian, contra-causal criterion for what constitutes a “real” decision. But as I point out in #11, being a contra-causal decider wouldn’t help matters, so you should give up that criterion.

        Meanwhile, “autonomy” (unlike “free will”) isn’t burdened with contra-causal connotations, so seems a good word to describe what Vic’s driving at: having robust internal behavior control resources, including decision-making capacities.

        1. Tom,

          “Free will” = choosing/deciding in a certain way.

          If a person believes in (libertarian) free will, as you argue that many people do, then that means they believe that humans (and their selves) decide/choose in certain ways.

          If they have now rejected libertarian free will, their view of human choices and decision-making is different from what it was before. Now, such people could continue to say that humans “choose” things in the bare sense that computers do, or that a non-computerized machine chooses or structures the world, but how they view human “decisions” has changed.

          To put it another way, for someone who believes in contra-causal free will, the connotation of “choice” or “to decide” as it applies to humans carries that contra-causal belief about the decision-making process. If such an individual then denies free will, they will also have to change the connotation of “to decide” as it applies to human choice.

          1. Agreed, but Jerry seems not to be able to revise his conception of decision-making in light of naturalism; rather, he rejects the idea that we make decisions at all, which is too bad since it makes us out to be non-autonomous puppets. That in turn will impede the acceptance of naturalism, which is why it’s important to correct him on this point. The cover of Harris’s book, and a few of the things he says in it, make the same mistake.

  4. Note I avoided calling it “free will” and made it very clear that the normal notion of conscious free will is an illusion. I called it “autonomy.” Do you disagree with that designation?

    1. I do disagree with the word “autonomy,” but that’s just semantics. See above and the update for my apology for misreading you. However, I still take issue with the sort of compatibilism you adduce, because it allows everything that makes a “decision” (including protozoans) to have free will.

      Cheers!

      1. But Vic specifically rejects compatibilist talk of free will. Rather, we have a continuum of increasing autonomy, from the protozoan to us, based on increasingly complex internal capacities for flexible behavior to cope with changing environments.

    2. Why take out consciousness from the definition of free will? I think that consciousness has always been an obvious if implicit assumption. Mere autonomy won’t do. A protozoan is autonomous but nobody will defend the notion that it has free will.

      Consciousness has to be retained as a prerequisite. That’s the only way the concept of free will makes sense.

      1. I think I might defend the proposition that a protozoan has a sort of proto free will.

        Consciousness – well, when I was learning to read, many years ago, if I had been trying to read the post I am replying to, I would have had to think consciously about the sounds of the letters, making out syllables from those, and words from the syllables, and looking up some words in a dictionary.

        Now, with learning and practice, I read effortlessly, and take the meaning (with a certain scope for error) largely unconsciously.

        Delegating a lot of the process of reading to the unconscious seems to me to enhance my freedom (or ‘freedom’) to understand your post, rather than diminish it.

        Back at the main point, I’ve been thinking about the expression ‘free as a bird’.

        Particularly with regard to migration.

        Many species of bird have migratory urges built in, but I don’t see that a bird that is caged has the freedom to migrate that an uncaged bird has.

        Further, I’d argue that a bird that hypothetically is programmed to follow a magnetic line of force to a destination would be less free than one that can learn to use landmarks to migrate by, learn feeding stations close to the route, process information about wind conditions which might lead it to stop on the route for a while, stuff like that…

        The latter, more realistic creature, I’d suggest, has – even if only metaphorically, but usefully – more degrees of freedom than one programmed to go as the crow flies, so to speak.

        David B

    1. Personally I think ‘autonomy’ is just the right word. It serves several purposes.

      It allows for a complete rejection of dualist free-will: not needed.

      It allows us to associate with it the notion of the human as an automaton, which is what Jerry’s ‘computers made of meat’ implies, and this in turn serves the purpose of emphasising that idea, much to the annoyance of theists, dualists and even many atheist-Darwinists who still can’t bring themselves to reject notions of free-will, and even aspects of dualism, because of perceived consequences for human behaviour.

      It can be used to describe various degrees of autonomy, and this in turn can be applied to many aspects of human and animal behaviour. It can describe self-contained automatic subconscious behaviour of basic life functions. It can describe the complex feedback mechanisms involved in conscious behaviour that superficially seem most autonomous, to the extent that they fool us into thinking we have free-will. And it does all this while acknowledging the external influences from the immediate and past environment of the individual, right back to environmental influences from our evolutionary past.

      Jerry says it implies “free from external control and influence”. But freedom from external control and influence is quite reasonable, as long as it isn’t used to imply the total freedom of dualism. There are degrees of autonomy; and the only degree we are rejecting is 100%.

      I can’t think of a better word for describing humans and their brains.

  5. If we had free will we would not have an identity. Our identity is shaped by our preferences and prejudices. Our choices are determined by them (barring a random element). But we cannot choose our preferences and prejudices. I can choose between strawberry and vanilla ice cream. I cannot choose to like strawberry ice cream better than vanilla ice cream. If I could I would have free will, but I would not be ‘me’.

  6. If your notion of free will evokes the image of a contracausal libertarian dualistic prime mover, then you need to change your view of free will, not jettison it altogether. There are other options out there.

    It’s like ‘consciousness.’ Some people use the term to refer to a spooky soul-substance. But that doesn’t mean consciousness, defined in a more general and reasonable sense, doesn’t exist.

    Free will is what we exercise when we make decisions throughout the day. When confronted with different options, we can weigh different possibilities by predicting their consequences, and select what we will do based on that. This seems to be the algorithm that brains use.

    I freely chose the M&Ms over Snickers. There is nothing occult going on: it is something we can and do study in the neuroscience lab all the time.

    A good little summary of this perspective in the NYT here.

    All this is much more reasonable, and culturally less disastrous, than suggesting we should get rid of the notion altogether. I cringe when I hear people say that ‘free will is an illusion’ because it shows they have let the religious nut dualist types define the problem for them.

    1. The same ‘compatibilist’ position as in the NYT article has been defended in a (very dated but well-written) essay by Sydney Hook called “Moral Freedom in a Determined World,” reprinted in his anthology “Quest for Being,” published by MacMillan.

  7. Heavy handed alert. I wrote:
    If your notion of free will evokes the image of a contracausal libertarian dualistic prime mover, then you need to change your view of free will, not jettison it altogether.

    Change that to ‘You might consider changing your notion of free will rather than abandoning it altogether.’

  8. Stenger, of course, adduces science in his discussion of Free Will. I think we should reference scientific knowledge insofar as it does have implications for free will (to my mind, the most relevant these days are the experiments that seem to identify “decisions” happening before we are consciously aware of them).

    But the problems of Free Will were noticed long before science as we know it came along, simply from observing the apparent contradiction between our (purported) intuition that we make free choices and our OTHER intuition and assumption of Cause and Effect. It was noticed by Greek philosophers (and before) that the logic of cause and effect seemed to undermine free will (the stretching chain of causation seems to undermine ‘us’ as being the cause of our choices).

    Much later, the Newtonian scientific challenge to free will says “Someone with the appropriate amount of knowledge of all the physics could know what you are going to do tomorrow, so you don’t have free will.”

    But Christians (and other deity worshipers before) ALREADY had this problem in God’s Omniscient Foreknowledge of our actions. If God knows that you are going to choose A over B tomorrow, then it’s never true that you ‘could’ choose B, and it seems we don’t have free will.

    The debates over this issue put people into the various camps: Libertarian (God gave us this magic gift of free willed choice, and some say not even God could know our choices), Determinism (yup, that’s right, our choices are determined…suck it up fellow Christians, it’s all for God’s glory), and Compatibilism (yes God knows what we will choose, but we still have morally relevant free will).

    The apparent mismatch between God’s foreknowledge and our having free will is a common “gotcha” argument many atheists throw at Christians. Oddly enough, I’ve seen even some free will compatibilists throw it at Christians. But it seems to me compatibilism would agree with the Christians on this point: God’s ability to know or predict our choices does not entail a lack of “real” choice or free will on our part.

    Another reason for incompatibilists to be annoyed with compatibilists, I suppose…

    😉

    Vaal

  9. Jerry,

    I’d like to know a bit more about your views on revising our ethics, social philosophy, and political philosophy if we decide that libertarian free will does not exist.

    One obvious start would be to impose a system of legal punishment but only for consequentialistic reasons: not because criminals deserve to be harmed, but because punishing them promotes some benefit. But the obvious problem here is the set of traditional objections to consequentialism itself. For example, if we punish people because it deters other people, it doesn’t matter whether the people we punish actually committed the crime–as long as others believe they did. So if we have determinism + legal punishment, don’t we end up deciding to “punish” innocent people?

    As for ethics, since you’re a determinist, you think that people who commit intuitive moral wrongs could not possibly have done otherwise. But normally we take this to be a defense; if a mad scientist implants an electrode that forces you to commit a murder you wouldn’t otherwise have committed, you’re morally off the hook, right? So does your determinism commit you to saying that no one ever actually commits a moral wrong? If so, do you believe that position is more intuitive or less intuitive than the arguments for determinism?

    1. Unless you posit a perfect ignorance of the fact that some people punished are actually innocent, you’re ignoring the feedback that knowledge creates.

      If it’s a known fact that innocent people are punished for crimes they didn’t commit, that undermines the utility of punishment serving as a deterrent.

      Taking pains to punish only the guilty is the only stable way to provide deterrence at all.

      1. Thanny,

        I agree that the system I’m describing would have to take great pains to ensure that people believed the person “punished” was guilty.

        So should the determinist accept a system in which only occasionally, an innocent person is framed and “punished”? Guilty criminals regularly insist that they’re innocent, so the claims of the actually innocent people probably wouldn’t do much. And surely a government would have the resources to frame, say, a few people every year pretty reliably.

        For example, surely North Korea could implement (or perhaps has implemented) such a program successfully. It seems to me that the determinist who believes in a system of legal punishment should say that despite all the evil that North Korea commits, that program (of “punishing” a few innocent people every year) would be a good program, since it promotes goodness: deterrence etc.

        1. Such a system could never be publicly announced policy implemented in an open society, since most folks would understandably reject punishing the innocent as a way to increase the deterrent effect of the law. But that rejection doesn’t depend on our being retributivists; rather, it’s a widely accepted matter of taking people’s autonomy rights to be basic goods to be maximized under a liberal-democratic consequentialist regime: http://www.naturalism.org/morse.htm#autonomy

          1. Tom Clark,

            But in a consequentialistic system, we might still tolerate rights violations in order to prevent future rights violations. If punishment is supposed to prevent various crimes that themselves violate autonomy, then a secret program of framing and “punishing” innocent people, if it reduced crime (as I take the determinist fan of punishment to expect), would be permissible according to the determinist.

            Also, of course, punishing guilty people violates personal autonomy just as much as punishing innocent people.

          2. Thanks for pressing the point, and here’s what I’d say:

            If the right to autonomy can be violated at any time to produce future benefits, it doesn’t exist as a right. So to maximize it, it can’t be violated, even infrequently, as part of a secret policy to produce greater deterrence.

            The guilty are subject to punishment (or more broadly sanctions, restraints, rehab, etc.) since they’ve shown a propensity for misconduct which needs to be addressed for purposes of public safety, reformation, and communicating and reinforcing norms. The deterrent effect (for others) of such responses to wrongdoers is a possible benefit (many are not deterred, obviously), but deterrence itself can’t be pursued independently of responses aimed at the guilty, for the reason I suggest above.

          3. Tom Clark,

            Thanks for your response. I guess I would still worry that if you start talking about rights that can’t permissibly be violated to produce greater goods, you’re not really a consequentialist anymore. You might be okay with that, but my initial response was aimed mostly at consequentialist defenders of punishment.

          4. Thanks Tom.

            I’m mainly concerned with whether there are viable naturalistic justifications for retribution, and whether the notion of deserved (retributive) punishment is necessary to prevent violations of autonomy rights, e.g., punishing the innocent. Thus far I’m not convinced either is the case.

            Human rights, including autonomy, are obviously values or goods that progressives like myself want to protect, such that if a policy has the consequence of compromising such rights, we should avoid it. So it seems to me consequentialism can include human rights as one of the primary values it aims to protect in its quest to maximize human flourishing. But if for some reason consequentialism can’t be construed this way, I’d reconsider consequentialism.

        2. You are bringing all kinds of other issues into the discussion that don’t have anything to do with the implications of there being no free will for our justice system. The occurrence of errors, and the methods and procedures for reducing errors, don’t have anything to do with free will or the lack of it.

          The question of whether or not it is ethical to accept a certain error rate in determining guilt in our justice systems, whether from accident, framing or other, remains the same whether free will exists or not.

          1. darrelle,

            These wouldn’t be “errors” per se.

            My question is: If you accept a consequentialistic justification of a legal punishment system, why do you not also accept that some secret programs of framing and “punishing” innocent people would be permissible?

          2. Ok. But, I just don’t see how you are getting from A to B. Am I misunderstanding that you are implying that A must lead to B for some reason?

            It seems very simple to me. By whatever mechanisms, human beings have developed morals and ethics, and tend to have ethical issues with framing and punishing innocent people. Especially when it is entirely unnecessary to achieve the stated goal. Even if one decides the masses need examples in order to prevent crimes, why frame and punish innocents when there are always guilty people to use for that purpose? I guess I am not understanding what you are trying to get at, because your question makes no sense to me.

            I think the only issue on this topic is what “punishment” to use on guilty persons. The more recent data gleaned from modern cognitive studies certainly indicates that we have huge room for improvement in methods of “punishment”. It’s gonna take time though.

          3. darrelle,

            Thanks for your response.

            You’re right that in general, there will be plenty of guilty people to punish.

            My worry (and I didn’t originally come up with it) is that it would still be true, under the system people like you and Jerry might want, that if there were an opportunity to “punish” an innocent person in a way that produced a very large benefit, without anyone finding out that the person was innocent, you would say that the state should do it.

            That follows from the more general consequentialistic punishment principle: ‘If you can achieve a greater benefit than harm by punishing someone, you should do it.’ The consequentialist might argue that an innocent person being “punished” is a very great harm, but it’s not clear how he or she could independently motivate that position. We all agree that happiness is valuable and unhappiness is disvaluable, but what exactly is supposed to be intrinsically wrong according to the consequentialist with “punishing” innocent people?

          4. Thank you for another attempt at clarifying your argument/question for me, I believe I understand you much more clearly.

            “We all agree that happiness is valuable and unhappiness is disvaluable, but what exactly is supposed to be intrinsically wrong according to the consequentialist with “punishing” innocent people?”

            I am not a philosopher so I don’t know the history of this issue, but my layman’s response is that the consequence of doing so would eventually lead to a society that I would not want to live in. One where happiness is disvalued.

            What is intrinsically right or wrong about any ethical or moral issue? Simply what we decide is right or wrong, which is largely informed by our evolutionary history. Not so simple, actually.

            Note: Earlier I put “punishment” in quotes because my thought is we need to think more like “protecting others from harm” rather than “punish the guilty for their crimes” when devising methods of dealing with the guilty. The focus should be on rehabilitation. Perhaps one day we will make enough progress in the cognitive sciences to make that a reality. I think that institutional poor treatment of prisoners in a society has a negative impact on the society as a whole.

  10. Jerry says:

    “I want the ability to choose freely among alternatives, just as I want to live forever.”

    If you think about it, having the power of undetermined, libertarian, contra-causal choice (to have done otherwise in actual situations) would render you terminally undecided, since there would be no sufficient cause of your choosing one way or another. The only way choices get made is by a preference or value winning out over another. Having an *uninfluenced* radically free decider in charge of deciding the contest of values would stop the process cold – you wouldn’t be able to decide. It’s therefore irrational to want contra-causal freedom, so I hope you and any libertarians out there reconsider your desire for it.

    More in “The flaw of fatalism” at http://www.naturalism.org/fatalism.htm#The%20Flaw%20of%20Fatalism

    1. Even if a radically free decider could somehow come to a decision, such decisions wouldn’t be ours in any meaningful sense, since they’re by definition independent of our personal desires, histories, and brain states.

      I thought we had established quite early in these discussions that uncaused free will is not just illusory but incoherent. So I’m shocked and frankly disappointed to find a smart guy like Jerry claiming he still wants it anyway.

  11. “The ability to do otherwise” only makes experiential sense when the correct Powerball Lottery numbers have been obtained. Illusions aside, this may be the only stochastic solution *worth* having.

  12. I find the problem is one of semantics and homonymous confusion. What is “free” in compatibilist free will?

    Many refer to Dan Dennett’s version of compatibilist free will, but when you look at the details all he is doing is re-defining the illusion as free will. (See, for instance, my comparison of Sam Harris and Dan Dennett on the subject: http://adnausi.ca/post/21526057659)

    Compatibilist free will is essentially a recognition of chaotic complexity, as in the butterfly effect. Under the exact same set of conditions you’ll get exactly the same results. Under ever-so-slightly different conditions you’ll get a different outcome. In one case the butterfly flapping its wings leads to a hurricane years later. In the other case it doesn’t. Does weather have compatibilist free will? (A tiny numerical illustration of this sensitivity is sketched at the end of this comment.)

    I dislike the use of the word “free” here and it leads to confusion in the public. If we say, “Yes, we have free will,” people interpret that to mean their concept of free will, not some re-defined deterministic version. Just look how the homonymous confusion of the word “theory” causes problems with creationists. Semantics are important here. We need to ditch the term “free will”.

    More important is the usage of the concepts, as in the justice system. We can rid ourselves of the “sunk cost fallacy” version of retribution and focus on actual effective measures for reducing future risks via deterrence and rehabilitation. If we look at people as machines we can fix them. It’s much more efficient, successful, and cost effective than the free agent model.

    1. Chad English,

      “Under ever-so-slightly different conditions you’ll get a different outcome. In one case the butterfly flapping its wings leads to a hurricane years later. In the other case it doesn’t. Does weather have compatibilist free will?”

      No, weather doesn’t have free will because it doesn’t have a WILL. It doesn’t have desires (as some compatibilists would say, the weather cannot have the mental attitude that a proposition is to be made or kept true, whereas creatures like human beings can), let alone the capacity to reason about which desires to fulfill and how, as humans can. That’s what the second big word, “WILL,” is doing in “free will.”

      It is a continued source of bemusement that critics of compatibilism so blatantly ignore obvious parts of the equation, and then profess to be baffled.

      Vaal.

  13. The “some studies have shown that decisions are made prior to us being aware of them” argument has always bothered me a bit.

    For one, I don’t see how whether we make decisions with our conscious minds or our unconscious minds matters at all regarding free will, unless you also accept a dualist definition of consciousness. Our conscious minds are just as much a product of a mass of deterministic processes as our unconscious minds. I understand that the argument undercuts the dualist concept of free will because dualists also have a dualist concept of consciousness, but why play by their rules? I think it is better to argue “conscious or unconscious doesn’t matter; both are the result of deterministic processes” rather than “even given your idea of consciousness, you are wrong because . . .”.

    The other reason is that it seems to me we just don’t have enough information yet on how the mind makes decisions. I wouldn’t say that I know the literature well, but the studies I have seen are few and they all involve very simple decisions. What about complex decision-making processes that take place over time scales of minutes, hours, or even days? For example, doing research to figure out what kind of car to buy. I am not suggesting that I have an answer, but I am confident that we don’t know enough yet to say conclusively that all decision making happens at an unconscious level and that any conscious awareness of decision making is post hoc rationalization.

    So, why use the argument when a more direct, more accurate argument can be made? And why use the argument when new data down the road could give a dualist ammunition to counter your argument?

    1. Time does not seem to matter here. Consciousness is always a consciousness of the here and now. Even if I am thinking about yesterday or trying to relive a moment of yesterday, I am only conscious of a memory of yesterday (that somehow, hopefully, ties in with the present conscious thought).

      When buying a car over a week, over days of weighing different factors and looking into my different desires, each conscious moment must be taken singularly. If something like Libet’s picture is right, your conscious thoughts always lag behind the underlying unconscious processing. The final AHA! moment, “I will buy it,” may have been thoroughly vetted by the unconscious, while your conscious awareness probably felt your self pushing toward different questions and sorting through different aspects of the decision. But, again, if Libet holds across the board, any single thought or reasoning process that you consciously experience will lag behind the unconscious activity that produced it, and that activity will itself have been preceded by earlier moments.

      It does not make sense, to me, to think of a conscious choice as a chunk of three days that made the decision. Though, in the end, we may have a division problem, similar to trying to adjudicate cause/effect or trying to cut up a continuous process into different autonomous sections.

      On another note, denying epiphenomenalism, I believe that whatever “consciousness” is, it is structuring the future in some way. It may therefore impact future structures of the brain/mind, say if a “present conscious thought” is “perceived” in some way. In other words, over time a present conscious system may play back into the decision; but, still, how we introspect and what it feels like our consciousness is doing at any one time will be misrepresented to our self.

      I am only guessing, but I would think such interaction between consciousness and the unconscious may be robust and important in structuring who we are over time; that such interaction may make consciousness an important structure in our self-programming. But perhaps, like many others, I feel too strong a need to find some reason for consciousness existing and being “chosen for” by evolution; it is becoming more and more likely that we can create beings that do everything we do, and better, without such creatures being conscious (see Watson on Jeopardy!, for instance).

      1. Good comment, thank you.

        “that such interaction may make consciousness an important structure in our self-programming.”

        That seems likely to me as well. And, as you also said, it might seem likely to me because I want it to be that way.

        1. For many narrow (cognitive) tasks we have created computers that out-perform us: chess, Jeopardy!, data analysis, mathematical calculations, etc.

          We have long created machines that do some physical things better than our bodies do. Strength and speed and repetitions, etc.

          If there is a special aspect to our mental life and mental abilities (and thus to consciousness), it may be some kind of global processing of our selves and thoughts and actions. In time, creating robots that out-perform us may require that such beings also be conscious and have the “mental life” that we do, because they would be centralizing processing and self-representing in a similar way. If we do that, given the other capacities these machines already have, they will without doubt be “better” than us.

          Even if consciousness and that self-representing function is what makes us “special”, and we could not be bettered unless it were recreated, it is still some kind of physically realized relational structure that is capable of being recreated by us, by scientists. Something that we should eventually figure out.

          I guess that is backtracking a bit, but things like logic and reasoning (chess computers) and language use (Watson) were once thought to be hallmarks of our conscious mental life, and they have been recreated in better ways without such processors being conscious, we assume. The only things we have not yet recreated are things like global awareness, self-regulating abilities, and the capacity to navigate rich environments quickly (a deer in a forest, for example). On top of that, human brain/mind structures were not created to be “perfect” but to fit Homo sapiens in hunter-gatherer groups; the fact that through scientific manipulation we can create better structures should not be very surprising, assuming that it’s all physical.

          1. How does one measure and assess “better”? Are there non-anthropocentric ideas about this? Over what period of time? How does one avoid reflexive human exceptionalism and other ideologies?

            Perhaps all we can truly say is that the current adaptations exist, without really making any qualitative judgments?

            In addition, it all has to do with local ecologies.

          2. Good point. I just meant that something like Watson (a computer brain) is better at Jeopardy! than human brains: Watson will always win.

            Our brains/bodies, for now, are better at the game of life, if we define that by being able to reproduce and explore and manipulate our world; and we use our machines as tools to further that goal.

            Assuming we do not script it in, there is no reason to think that machines will ever care about their own reproduction . . . ? though they may care about not having their being extinguished . . . ?

          3. Not so sure. We look at bacteria and social-insect behaviors. That helps us glimpse the core principles of (social) living things. Of course, maybe plants are social too.

            The question we ask of mechanical, non-biological systems is — where will the energy come from?

            Even a million years is nothing for the problems of self-reproduction to be solved. Billions of years seems to be the minimum.

            Simplistically, we assume that the energy-exchange processes of the human brain will crack the basic problems, but that’s unlikely.

            We have read that all the energy of the sun is not enough for some of the fantasy future things being promoted.

            Then there are the heat-dissipation problems. It is all electron flows and thermodynamics, ultimately.

  14. NPR sells two things, basically, because of audience demand: fear and warm fuzzies. The warm fuzzies on NPR are cloying although the 1-2 punch of fear and warm fuzzies is commercially proven.

    The question of beauty in science seems simple. Nervous systems evolved to align with empirical reality in a probabilistic way — so any sense of alignment with predictive reality will feel “good” or “beautiful.”

  15. In the full interview both people talk about how logic/science/reason is not existentially sufficient – and therefore, supposedly, Turing and Gödel committed suicide!! So dumb.

    Suicide is a symptom of the brain condition of depression. It is a brain circuit birth defect. The rest is nonsense. Sheesh.

  16. I can account for her popularity only by assuming that a large section of the educated, well-off, and liberal public that listens to NPR has a soft spot for spirituality.

    Ha, not me! The only time I stopped donating to KQED in the last 16 years was a couple of years ago when they started to veer annoyingly close to woo in some locally-produced programs, both in terms of religion and medicine (trying to posit alt-med as “another way of healing” when it’s 98% BS and the 2% that isn’t is not “alt”). I let them know in no uncertain terms why I was canceling my monthly donation.

    I’m glad to say they’re back on a more even keel now (not because of the stand I took, I’m sure!) and I started donating again (though a large factor there was getting the This American Life USB drive as a gift…)

    1. I thought Tippett did a good job. She asked a question and let Levin respond at length. I think Levin enjoyed the interview, judging by the female bonding at the end of the unedited version.

      1. It was a cloying interview, with female bonding taking precedence over hard thinking and problem discussion.

        Tippett’s goal seemed to be to have a hard-nosed scientist on who would NOT challenge her silly platitudes and ideas. By everyone “making nice,” she succeeded in promoting her ideology to her listeners.

        Tippett wraps her hard-edged demagoguery in a halting, “sweet,” giggling verbal style and lots of warm fuzzy ideas. Clever.

  17. Let’s just get rid of the words “we have free will,” and say instead that “our behavior is controlled by factors we don’t understand.”

    That is very agreeable, as it is compatible with the compatibilist stand. I find this attrition process more or less converging.

    Minor points:

    When he says “‘we’ are not just our conscious minds, but rather the sum of both conscious and unconscious processes,” how does that confer freedom? How does our unconscious make our decisions more “free”, especially because “free will” is classically connected with conscious decisions?

    While not having read Stenger’s piece, I would assume he means that the unconscious factors (which we don’t understand) support the illusion and, by magnifying it, strengthen the sense of autonomy.

    Why are people trying to save the notion of free will by confecting other definitions?

    As I said before, “will”, or here “autonomy”, isn’t about saving a notion but proposing an efficient, testable model.

    But there is also this:

    Stenger suggests using “autonomy”, which to me is less appealing because it means “free from external control and influence,” which still smacks of dualism

    Autonomous agents are part and parcel of computer science. I didn’t realize anyone could see dualism in a testable monistic model! If so, we have an insurmountable problem.

    When did “compatibilists” become dualists, or supporters of dualism, really? I feel like I didn’t get the notice.

  18. I forgot this minor point:

    What is important to me is whether our decisions are predetermined (with perhaps a dollop of quantum indeterminacy),

    “Predetermined” translates to physics as “block universe”, and that is an outstanding issue, as is most everything else connected with what time really is. This is IMHO overreaching.

    And if we get into physics, I wouldn’t say that quantum indeterminacy is the larger dollop. The real modification of a global universal clock, as measured by the expansion of the universe, would be local entanglement. Entangled systems would have to decohere to catch up with the global clock, temporarily confusing “predetermination” (if it exists).

    I assume Stenger could get into this a lot more expansively than I attempted here.

  19. Had I had a choice in the matter, I would apologize for not asking this a long long time ago, but my impression (now) is that this free will issue is so important because you link it to dualism and then link dualism to religion, and you want to use the no-free-will argument to dispatch dualism and then religion. Do I finally have this agenda right? But if the substitute for “free will” is “we don’t know why we behave like we do” then why would you believe that this no-free-will argument, however rational, will allow you to make any headway with a religious person? Perhaps I still don’t get it, but you seem to be thinking this is a great argument to use against religious people, while that same great argument suggests that people can’t be expected to listen to any rational argument if they have their own “reasons” to continue as they are. See, e.g., your own discussions of the apparent effect of living in Sweden as opposed to the U.S.A. Or am I still confused?

  20. One more question I forgot to mention, and then I’ll hold off on commenting for a while.

    For those who think that scientific discoveries provide evidence against libertarian free will, presumably they think so because there has been some set of experiments, tests, or observations that in principle might not have provided this evidence, but did. (The anti-libertarian hypothesis should be “falsifiable.”)

    So: What scientific discovery could have provided evidence for libertarian free will? If there’s no such imaginable discovery, then how could scientific discoveries have provided evidence against libertarian free will?

    In other words: What is it that we observed such that if we observed something different instead, we would have had evidence for libertarian free will?

  21. Jerry says:

    Much of religion is based on true dualism, and on the existence of a “soul.” Shouldn’t we be dispelling that dualism instead of engaging in arcane philosophical arguments about what “free will” really means?

    I think this explains the rather obstinate unwillingness to even properly consider whether the term ‘free will’ might be useful. It’s a political issue: we should deprive believers of the oxygen provided by a term they interpret as supporting dualism.

    But that is a nonsense, as I have said here before. Two reasons:

    To say that freedom cannot mean something contra-causal, as Jerry insists on repeating as if it were somehow news, is to deal in the commonest of commonplaces. Not even the most devout believer thinks he can will himself to become a bicycle. Even for the most devout believer, free will does not mean being able to do anything whatsoever, but being able to avoid making certain bad decisions, which always refers to our capacity to connect a choice to arguments that convince us of the superiority of one course of action over another. Many times, that happens only after the fact: we can learn and do things differently should a sufficiently similar situation arise again. Sometimes, we can just simulate courses of action in our minds and choose from them. Especially this last one is a freedom we do not share with the rotifer, or even our cats.

    What Jerry is doing is to insist that ‘freedom’ may only ever mean one thing, viz. being suspended from a skyhook. But that’s impossible, which is exactly why Dennett explains at length that freedom must evolve (and hence must be a thing that comes in degrees). To insist that this evolved thing cannot be freedom is like saying that ‘species’ can only mean something separately created, because that’s what “the whole world has meant for thousands of years” when they used the word. In the case of ‘free will’, they didn’t even do that (cf. the robot example). But as the ‘species’ example shows, how many people have used a word in a certain way is irrelevant anyway, as long as we can educate them about how to understand it properly and about what its limitations are.

  22. And on the ‘could have done differently’ idea, as I have also said here before:

    The statement, “My dad could easily have been drafted during ’Nam”, is a good example of what we actually do mean when we say that we could have done otherwise. I think we are perfectly aware (if we are aware of anything) that what we are saying is, ‘If things had been only a little different, my dad would have been drafted.’ And if that little difference is something that is part of us—e.g. part of our intentions, aspirations, our awareness etc.—then we are right to say that we could have done differently—notwithstanding the fact that every molecule in us works just according to the general laws of physics. But it’s our molecules, and our molecules work differently according to which intentions etc. we have.

    In other words, that we are not totally free is reflected in the fact that if we follow all our conscious states back (and at a sufficiently low level), then we will see that nothing could have been different only because it wasn’t different; but now that we are conscious of what that situation was like, we have a chance to react differently the next time that situation arises, or even to alter the (upcoming) situation so that we will be able to act differently. And even if that time that doesn’t make a difference, there will always be a next time for us to effect outcomes that are more in line with who we wish to be and how we would like to behave. This kind of feedback means that any future situations will be different; the only question is whether they will be different enough for our freedom to work on that difference.

    1. Nicely put in both posts, Peter.

      I’ve used “morality” in the same way as your “species” example to point out that compatibilists, Libertarian free willists and regular folk are essentially referring to the same thing when they say “free will,” in the same way that we reference the same concept by talking of “species” or “morality.”

      The difference is that we have different explanations for the BASIS of those notions. Morality concerns whether we have normative rules for our actions toward our fellow creatures, obligations, things we “ought” to do. Most secular ethicists and moral philosophers would agree with Christians that “yes we do” have morality.
      But morality exists not due to magic beings, but due to real facts about us and the world, making magic unnecessary and erroneous as the basis for morality.
      The fact that for most of human history people gave a wrong explanation for the basis of morality – a God – does not mean there is no morality, or that one cannot be talking about “morality” without God as its foundation. People can be educated about this.

      Similarly, the issue of Free Will revolves around how you answer the question, concerning our choice-making: “Could I have done otherwise?” If your answer is “yes” then you are talking about free will, and have affirmed we have it. We then move to ask “what is the BASIS for asserting you could have chosen otherwise?”

      To the extent any religious or dualist or libertarian person tries to give some contra-causal “magic” answer as the basis – “I could have done differently because my choices are magically exempt from the chain of causation,” they can be dismissed (via argument).

      But as in the case of explaining the existence of species and morality, we can go on to point out there is an actual basis for free will – for intelligibly affirming “yes, I could have chosen otherwise.”
      It’s still free will, but we establish it on the basis of reality, not magic.

      This is why it is irksome when this conversation is saturated with the idea that compatibilism substitutes some “other” concept of free will (i.e., not the “real” version), so that you have to give up the “real” version to accept the alternate compatibilist version. It’s not so. It’s still “free will,” but with a different explanation for why we actually have it.

      Vaal

  23. Because I have free will, I now choose not to have my very next thought.

    Can I make it two out of three?

  24. From the determinism perspective: all the comments above were pre-determined by physical conditions present prior to the time the comments were made, indeed prior to the time the commenters were born! Whether their comments are true or false is a totally irrelevant consideration. They just say what they say and think what they think because that’s what they were pre-determined to think or say; and likewise for those who express different opinions. And likewise for my comment here and now – I just post it because that’s what I’m pre-determined to do and say. If I think that what I’m posting is true, it’s simply because I’m pre-determined to think that (even if it’s false); and likewise if I think it’s false (even if it’s true). There’s no independent way to judge whether pre-determined thoughts are true or false; they just are what they are pre-determined to be from eons past.

  25. We are all just organic robots, that’s all! Our brains are just an organ acting according to all the laws of chemistry and physics, and even the decision-making process is just that. Laplace’s demon in ‘Freedom Evolves’ by Dennett springs to mind. Everything is pre-determined, including the future and every move you make for the rest of your life, so get used to it: there is zero free will.

      1. A rock rolling down a hill, causing destruction to objects such as houses and cars, has a predetermined course governed by the laws of physics, so when you look back in time you can say ‘all of this was going to happen anyway’. Dan Dennett covers this, and it is termed ‘actualism’. It’s the same with the inside of your brain! All the atoms and processes going on in your brain are governed by the laws of physics, and that includes consciousness and whether or not you decide to make tea or coffee! You can look back on your life and say ‘it was all going to happen anyway’. It is a devastating thought, and it took me a while to get used to it, but I survived!

    1. You (like Frank Williams at #27) are confusing “physically caused” with “predetermined”. The story of your life is not predetermined, and I can prove it to you.

      Click here. You’ll see a page of random numbers generated by an intrinsically random quantum process. Now use those numbers to buy a lottery ticket. There’s some small chance you will become a millionaire as a result.

      Now nobody, not even your Laplacean demon, can say in advance how your life will turn out. You have erected a barrier of radical indeterminism that makes prediction impossible even in principle, because there is no predetermined fact of the matter as to which numbers you see when you click the link. It’s truly random.

      This doesn’t give you free will, of course. But it does make nonsense of the claim that the future is fixed and our lives are predetermined before we’re born. So if you want to argue against free will, you’ll need a better argument than that.
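
      For the concrete-minded, here is a minimal sketch of that recipe (mine, not the commenter’s). Python’s secrets module draws on the operating system’s entropy pool, which is only a stand-in for the genuinely quantum source the comment links to, and the 5-of-69-plus-Powerball format is an assumption:

        import secrets

        def pick_ticket():
            """Pick 5 distinct main numbers (1-69) and one Powerball number (1-26)."""
            rng = secrets.SystemRandom()                # OS entropy; stand-in for a quantum RNG
            main = sorted(rng.sample(range(1, 70), 5))
            powerball = rng.randrange(1, 27)
            return main, powerball

        main, pb = pick_ticket()
        print("Main numbers:", main, "Powerball:", pb)

      Whether those draws are truly indeterminate or merely unpredictable is, of course, exactly what the replies below argue about.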

      1. I agree that prediction is impossible for us mere mortals; there is no way of predicting the future, so you are correct there. However, do the physicists know everything about the quantum world yet? I doubt it, and in any case the physical truth of what is going on at that level will probably be explained in years to come. I will for now meet you half-way on the ‘quantum indeterminacy’ issue, as we just don’t know enough yet. I would like you to meet me half-way when I say that non-random, pre-determined processes control just about everything we know about, from evolution to how our brains work. Hamlet said ‘there’s a divinity that shapes our ends’, but I’m happier with a ‘physics that shapes our ends’! Maybe you could be right that some completely random quantum process in my brain will change my life somehow by making me emigrate or join the army, but that just makes me feel less free, if anything, and less liberated, even though that process occurred in ‘me’, in my brain.

      2. If you read physicists (e.g. Brian Greene), you’ll find that, while they think the current quantum theories are correct, they will concede (meeting you half way) that the cause of quantum indeterminacy is unknown. The point being that the situation is: quantum theory and experiment match.

        Many physicists will say, well, that’s all there is to say on the matter, that’s the current state of physics as a field of knowledge, so that’s the way the world is: it’s indeterminate. They will refuse to take part in metaphysical speculation about what is underlying this match between theory and experiment.

        But refusing to speculate isn’t the same as asserting that this is the end of the line, that there is no more to learn. I’ve not seen a physicist make that claim (though some non-physicists seem to). The ones I’ve read state quite clearly that we don’t understand what’s going on at any deeper level.

        One thing to consider is that all our theories are models. They themselves are not reality, only representations of it. Even our experiments are models of reality – in as much as what we observe is limited by our ability to observe it. With regard to quantum physics we have observations that seem spooky and incomplete, but they happen to match mathematical probabilistic models incredibly well.

        The location of the problem as I see it is in the meaning of ‘randomness’, as follows.

        We are very familiar with our probabilistic mathematical models. We happily apply them to observations that we consider to be deterministic but that, for various reasons of complexity, are indeterminate to us. What we call ‘randomness’ is often an expression of this indeterminate knowledge about an otherwise determinate system.

        But what is true ‘randomness’? What does it mean to say an event is random, and not merely an epistemologically indeterminate account of a determinate event? What is the distinction between a probabilistic model and any system it models? If our only model of a system is probabilistic, then how would we decide (i.e. ‘determine’ – this gets all circular) whether the reality modelled is a truly ‘random’ reality, or merely an epistemologically indeterminate determinate reality? If our uncertainty of measurement leaves us uncertain, can we say that the reality we are uncertain about is really random?

        If random events are *not caused*, does this mean they are un-caused? What does that mean – an un-caused event (and one that nevertheless goes on to cause determinate effects)?

        If random events *are caused*, what causes them? What causes a specific ‘random’ decay of an atom in a mass, such that over time the collection of random decays in the mass follows a probabilistic model?
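
        As a small aside (a sketch of my own, not Ron’s; the half-life and sample size are made up), a completely deterministic pseudo-random generator produces ‘decays’ that match the exponential model just as well, which is one way of seeing why a good fit to a probabilistic model cannot, by itself, distinguish true randomness from epistemic indeterminacy:

          import random

          rng = random.Random(12345)           # fixed seed: the same "decays" every run, fully determinate
          half_life = 10.0                     # arbitrary units, assumed for illustration
          decay_rate = 0.6931472 / half_life   # lambda = ln(2) / t_half

          lifetimes = [rng.expovariate(decay_rate) for _ in range(100_000)]
          mean_lifetime = sum(lifetimes) / len(lifetimes)

          print(f"observed mean lifetime: {mean_lifetime:.2f}")
          print(f"model expectation     : {1 / decay_rate:.2f}")   # 1/lambda

        Nothing in the aggregate statistics betrays that the generator is determinate, which is why the fit alone cannot settle the question.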

        When two particles go their separate ways, what is it that ‘causes’ them to follow the same probabilistic outcome? What is the relationship between causation and randomness? Is ‘cause’ the right word?

        Bell, by considering how the outcomes match a probabilistic model rather than a non-probabilistic one, declares that there are no (local) hidden variables. It’s certainly a viable inference from the result, which is fine as far as it goes. But then what does the term ‘entanglement’ mean? It seems like a nice technical word used to cover up a spooky epistemological gap – a physics of the gaps.

        I don’t think we understand ‘randomness’. It’s a term we give to events that follow our probabilistic models, but what does it really mean? And how does ‘causation’ fit into the picture?

        1. Well said Ron. I would love to be alive in a thousand years to see if physicists discover more about the quantum world. It’s over my unqualified head for the moment but maybe one day!

        2. It’s true that we have much still to learn. But what we have learned so far is sufficient to tell us that some ideas are just plain wrong. No amount of new knowledge can resurrect those failed theories. And the classical physics underlying Laplacean predeterminism is one of them. So my point stands that you don’t get to use that as a refutation of free will anymore.

          1. OK, but you can still use other things to refute free will until we clear up all the other mysteries in physics and the human brain.
