New paper on “free won’t” and its relevance to “free will”

I’ve written quite a bit on experiments showing that one can, using brain scans, predict “decisions” before the human subject is conscious of having made them. These decisions include things like deciding when to press a button, or “choice” experiments in which you decide whether to add or subtract, or whether to press a button with your right or left hand. In the “choice” cases, brain scans (EEGs or fMRIs) can predict which choice will be made with significant but not perfect accuracy (about 60-70%), and in some cases those predictions can be made up to 7 seconds before the subject is conscious of having made a decision.

The first of these studies was published in 1985 by Benjamin Libet, who showed that a “readiness potential” (RP) for pressing a button could be seen in the brain about a third of a second (300 milliseconds) before the subjects were conscious of having decided to press it. Since then the decisions have become more complex, the brain scans more refined, and the time by which the readiness potential precedes the conscious decision pushed farther and farther back.

These results won’t surprise any determinists or even free-will compatibilists, who all agree that our decisions are made not by some spooky “will,” but by the laws of physics. And of course we all know of “decisions” we make that appear to derive from our unconscious (e.g., driving a well-travelled route, where you don’t think to yourself “turn here”, but where you seem to be operating on autopilot). But these brain-scan results are distressing to dualists and to those who believe in the religious (libertarian) form of free will, in which decisions are made by something detached from the physical brain.

The implications of these studies—that decisions can precede consciousness of having made them—even disturbed Libet, who, though admitting that his studies did cast doubt on “free will”, still opted for something dualistic: “free won’t.” That is, although the decision to do something might be made in the brain before coming to consciousness, there was still room for dualism in the decision to cancel or override one’s action.

That doesn’t make much sense, since cancellation is still something that takes place in the brain. If you think about it for a minute, you can see that canceling or overriding a decision can in fact derive from the same kinds of physical and neural antecedents as making the decision itself. That is, there’s no substantive difference between deciding to do something and then deciding not to do it. After all, both are decisions, and both might be predictable in advance by brain scans. I find this whole area of research fascinating because of its implications for how we make “decisions.”

A new paper in Proc. Nat. Acad. Sci. by Matthias Schultze-Kraft et al. (free download, full reference at bottom) investigates how the brain works when it cancels a decision. Although it doesn’t show the neural basis for canceling decisions, it does show that if you’ve made a decision to do something, and then are asked to cancel it, there’s a “point of no return” after which you simply can’t cancel it.

The experiment is a very complicated one, with lots of controls, training of computers and investigators, and analyses, so I’ll briefly describe the salient results. My apologies if I get some of this wrong.

Subjects were shown a green light, and then asked to push a button with their foot after silently counting two seconds from the moment they saw the light. An electromyogram (EMG) was connected to the button-pushing leg to detect when movement began, and an electroencephalogram (EEG) was connected to the head to monitor brain activity. (The average time to press the button after the green light went on was 5.4 seconds.)

After some human examination of the EEGs, these brain readouts were analyzed and a computer was programmed to flash a red light whenever it detected that the subject’s “readiness potential” to push the button had begun. The subject would then get “points” (towards a reward, I presume) if, after seeing the red light, they managed to NOT press the button. In other words, the subjects were asked to cancel a movement whose processing had already begun in the brain, but which had not yet produced any actual motion.

The readiness potential in the brain began about one second before the leg muscle produced an EMG signal, and it took another 0.3 seconds before the button was actually pressed. The computer was trained for each subject based on their observed RPs, and when the RP crossed a threshold, the program turned on the red light, telling the subject “DO NOT PRESS BUTTON!”
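
To make the logic concrete, here’s a minimal sketch of that kind of real-time threshold rule. Everything in it (the stand-in classifier, the threshold value, the polling interval) is an illustrative assumption rather than the authors’ actual implementation; all I’m going by is the description above: a classifier trained for each subject, with the red light triggered when its output crosses a threshold.

```python
# Sketch of the real-time "veto" trigger described above (illustrative only).
# rp_score() stands in for the per-subject EEG classifier trained on each
# person's readiness potentials; the threshold and timing values are assumed.
import random
import time

THRESHOLD = 0.7          # assumed per-subject value fixed during training
SAMPLE_INTERVAL = 0.01   # assumed 10 ms polling of the classifier output

def rp_score(t):
    """Stand-in for the trained classifier: a noisy ramp that rises over
    about a second, the way a developing readiness potential might."""
    return min(1.0, t) + random.gauss(0.0, 0.05)

def run_trial(show_red_light):
    """Poll the classifier and flash the 'do not press' signal as soon as
    its output crosses the subject's threshold."""
    t = 0.0
    while rp_score(t) < THRESHOLD:
        time.sleep(SAMPLE_INTERVAL)
        t += SAMPLE_INTERVAL
    show_red_light()   # the subject must now try to veto the prepared press

run_trial(lambda: print("RED LIGHT: do not press the button!"))
```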

Because of variation in when each individual’s RP began and when it crossed the threshold, the light went on at various times before the EMG registered muscle activity and before the subject pressed the button. Sometimes the red light didn’t go on, and subjects pressed the button. But sometimes the red light did go on and they still pressed the button, giving us the Big Result:

If the red light went on 200 milliseconds or less before movement began, subjects could not help starting their move toward pressing the button.

In other words, there’s a “point of no return” that occurs about 0.8 seconds after the RP has started (but before the muscles move), after which—even if the subject sees the red light—he/she cannot help but move. Sometimes they can still avoid pressing the button itself, but their leg has already started moving towards it.
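
The arithmetic behind that 0.8-second figure is just the two rounded numbers above put together (my approximation from this summary, not an exact value from the paper):

```python
# Rounded timeline implied by the numbers quoted above (illustrative only).
rp_before_emg = 1.0    # the RP begins ~1.0 s before the leg-muscle EMG signal
veto_deadline = 0.2    # the red light must come >= ~0.2 s before movement to be obeyed

# Point of no return, measured from RP onset:
print(rp_before_emg - veto_deadline)   # 0.8 seconds
```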

What does this mean? Well, it doesn’t show that there’s “free won’t”. After all, the subjects are cancelling their movement (the “won’t”) as a reaction to seeing a light: an environmental stimulus rather than some conscious “decision”. What it does say is that there appear to be physical constraints on cancelling a decision, so that even if you “want” to cancel in order to get your reward, you can’t. Now the constraint, I think, is likely to be the reaction time to the red light: that is, there’s a certain time you need to see the light, process the information in your brain, and then use it to send a signal to your leg to stop moving; and that time is about 200 milliseconds. In other words, you could still have “free won’t,” but this experiment says little about it. In fact, I’m not sure that this experiment CAN say anything about “free won’t”, since you are not making a “conscious” cancellation but are told to cancel in response to a light. But what it does show is that what is determined by unconscious brain activity can be reversed by an external stimulus, at least up to a point.

What would truly refute the notion of “free won’t” would be a demonstration that the cancellation of a movement (one previously decided and predicted by brain activity) itself shows up as a brain signal (i.e., the cancellation can be predicted) before you’re conscious of it. The authors report two studies of “spontaneous self-cancellation”, and one of them (Brass et al., J. Neurosci. 27:9141 [2007]) might indeed give evidence against “free won’t”, but I haven’t read it. Perhaps readers can, and report back. But since cancellation is a brain output qualitatively similar to an “action” decision, I can’t imagine how there could be libertarian “free won’t” without there also being libertarian free will.

The authors of this paper themselves don’t appear to accept dualistic free will or free won’t (“free won’t”, of course, is just a form of free will), as is clear from their discussion. As you see below, they discuss their results in terms of naturalistic, materialistic brain phenomena, with cancellation associated with specific brain regions. Here’s an excerpt from the paper (I’ve left out the references, but you can see them in the paper). Note how they avoid discussing “free will”, though the senior author, John-Dylan Haynes, said in an interview that he doesn’t think any of these experiments support the idea of free will.

It has been previously reported that subjects are able to spontaneously cancel self-initiated movements. This has been referred to as a “veto”. The possibility of a veto has played an important role in the debate about free will, which will not be discussed further here. Note that the original interpretation of the veto was dualistic, whereas in our case veto is meant akin to “cancellation.” Our study did not directly address the question of which cortical regions mediate the cancellation of a prepared movement. However, many previous studies have investigated the neural mechanisms that underlie inhibition of responses based on externally presented stop signals. Please note that, in contrast to stop signal studies, in our case the initial decision to move was not externally but internally triggered. Conceptually, this could be compared with a race between an internal go signal and an external stop signal. Many stop signal studies have reported that inhibition of a planned movement is accompanied by neural activity in multiple prefrontal regions, predominantly in right inferior PFC. It has been proposed that right inferior PFC [pre-frontal cortex] acts like a brake that can inhibit movements both based on external stimuli or on internal processes such as goals. Another region that has been proposed to be involved in movement inhibition is medial PFC; however, its role is more controversial. On the one hand, stop signal studies show that activity in medial PFC might not directly reflect inhibition. However, it seems to be involved in cancelling movements based on spontaneous and endogenous decisions rather than based on external stop signals.

At least one article (in Gizmodo) has suggested that this study gives some evidence for dualistic, libertarian free will, arguing that “the ‘readiness potential’ doesn’t govern our brain.” But I don’t think this study gives any solace to advocates of libertarian free will. All it shows is that a decision made by the brain, and later arriving at consciousness, can be halted by an external stimulus that also impinges on the brain. That’s exactly what we predict from the notion that the brain is a computer, that consciousness is an epiphenomenon that often follows a brain’s “decision”, and that we can affect the working of the brain by changing the environment of the brain-owner.

____________

Schultze-Kraft, M., D. Birman, M. Rusconi, C. Allefeld, K. Görgen, S. Dähne, B. Blankertz, and J.-D. Haynes. 2015. The point of no return in vetoing self-initiated movements. Proceedings of the National Academy of Sciences, early edition.

18 Comments

  1. Posted January 24, 2016 at 12:56 pm | Permalink

    This is the classic stop-go reaction time task, studied quite a bit by psychologists like Logan (cited in paper). There are individual differences in this capacity.

  2. Lou
    Posted January 24, 2016 at 1:03 pm | Permalink

    Sure sounds like woo!

  3. gordon hill
    Posted January 24, 2016 at 1:14 pm | Permalink

    The question remains, “To what extent are we to be held responsible for our behavior?”

    • Diana MacPherson
      Posted January 24, 2016 at 1:51 pm | Permalink

      I wonder if you could argue that you changed your mind about stabbing someone but it was too late. Darn brain couldn’t stop the stabbing arm.

      Incidentally, my parents’ dog will forget that I’m at their place, and when I get up he roars out of the bedroom where he was sleeping, barking and ready to bite. He sees it’s me and not a home invader, and his legs start stopping his forward motion, but the barking and biting are slower to stop. Usually he just puts a closed mouth on me and looks ashamed.

      • Gordon Hill
        Posted January 24, 2016 at 4:57 pm | Permalink

        Good question… don’t know about stabbing, but I have changed what I was going to say in mid-sentence, which has been beneficial in my marriage of fifty-nine years (which also may mean I don’t think for myself even when I think I am).

        As for dogs, the leg abuse I have experienced is censored… 😉

  4. Posted January 24, 2016 at 1:48 pm | Permalink

    Perhaps I am missing something, but this experiment sounds more like a test of reaction times than a choice experiment, because the subjects are told how to react to the green and red lights. They do not make a choice to do anything (except whether to comply with the order).

    BTW (and I know our esteemed host knows this), in evaluating the 60%-70% prediction success rate in the choice experiments, one should keep in mind that the null is 50%, not zero.

    Has anyone ever done an experiment where subjects are evaluated on their ability to outwit the experimenter’s ability to predict their choice? I ask this because it has been speculated that human unpredictability may have been selected for game-theoretic reasons.

    • Posted January 24, 2016 at 2:05 pm | Permalink

      A variant (perhaps more relevant) of the MSK experiment could be where subjects are asked to choose whether to press the button upon seeing a green light and, if they do choose to press, be followed with an order to cancel that decision upon seeing a red light. People who chose not to press would need to do nothing on red.

  5. Kevin
    Posted January 24, 2016 at 2:06 pm | Permalink

    Woo will. Try doing the experiment on electrons… No outcome except that predicted by the laws of physics.

  6. LG
    Posted January 24, 2016 at 2:55 pm | Permalink

    Very interesting, thanks for posting.

  7. Warren Johnson
    Posted January 24, 2016 at 3:25 pm | Permalink

    As a physicist, I don’t see how you think my discipline has anything useful to say about human psychology.

    Nothing about this experiment is remotely like the kind of experiments that we can deal with. This does NOT mean that human behavior is outside the physical world, but only that the ironclad laws of particle physics can NOT often be used to make useful predictions in the macroscopic world. Heck, we can hardly predict what the physical properties of a new chemical compound are likely to be. Nobody expects the Spanish Inquisition! and nobody expects the high-temperature superconductor!

    There is a lot to be said about the “deterministic” laws of physics, and how they do NOT determine some things. Consider the snowflake. Proverbially, no two are alike. In fact they show an awesome variety and beauty and indeterminism. But no sane physicist thinks they are outside the laws of physics.

    For some beautiful pictures of snowflakes, get any of the books written or photographed by Cal Tech Professor Kenneth Libbrecht, who has made this a subject of his physics research.

    • Posted January 24, 2016 at 9:39 pm | Permalink

      A contradiction:

      You admonish: Quit thinking physical theories have any purchase on the question of free will and human psychology.

      Then: However, physicists are willing to extend their general theories into domains where they lack the means to apply them in detail, such as the formation of snowflakes.

      The reason physicists think snowflakes fall within deterministic relationships is the same reason many people think brains do. Many free-will-denying people are going to use the same general beliefs that physicists use to proclaim that snowflakes fall within deterministic processes.

      We do not take silly (folk) claims about the seeming indeterministic properties of snowflakes as being actual properties of snowflakes.

      Replace “human psychology” with “snowflake properties” in your opening line.

      “As a physicist, I don’t see how you think my discipline has anything useful to say about *snowflake properties*.”

      But you readily assure us that, despite physicists not having a good theory of the unique formation of snowflakes, “sane physicists” have full confidence in the law-like nature of such things.

  8. Q-Tim
    Posted January 24, 2016 at 5:24 pm | Permalink

    As Sam Harris has suggested many times, a bit of experience with mindfulness meditation dispels the illusion of free will experimentally, on an experiential level.

    Just try a so-called ‘strong determination’ sit, where you’re supposed to meditate while maintaining complete stillness of the body and pay close attention to the mental content, which comes as mental images and/or mental talk. For me, the urge to move eventually arrives as a mental image of me moving and breaking the increasingly uncomfortable posture. It, however, remains totally inscrutable how this urge gets blocked. When it fails, the body moves—it just happens. What exactly blocks the urge, how it does so, how and why it fails, and exactly when it is going to fail—none of these comes into view of consciousness.

    Why didn’t I move one second earlier? Why couldn’t I keep still for one more second? I have absolutely no idea. These things are hidden from me pretty much the same way as they are hidden from a person sitting next to me. That is, unless that person is a neuroscientist peering into my brain with fMRI or EEG—in which case they would probably have a much better grasp of what is going on…

  9. Posted January 25, 2016 at 4:04 am | Permalink

    consciousness is an epiphenomenon that often follows a brain’s “decision”

    Prof. Coyne, what do you say about the idea that consciousness is formed by high-level thoughts, those thoughts that need more resources in our brains (time, space, synapses, neurons, etc.), the thoughts that simply resonate more in our brains?
    Thank you,
    Calin

  10. Lyman Baker
    Posted January 25, 2016 at 10:15 am | Permalink

    Hi, Merlin:

    This was forwarded by my son-in-law in San Antonio (a toxicologist by PhD), hence a serious chemist (working now for a big environmental firm), who for some time has been sending me posts from this site. (He didn’t know it, but I’ve since subscribed to the same newsletter/blog/bulletin, which has lots of good stuff, though of course some things less worth attention. You might look it over and see whether you might want to subscribe to it, or bookmark it for an occasional visit to the archives to see what’s been going on. There’s also lots of critique of the anti-Darwinist bullshit that infects the atmosphere here in the U.S., pumped out constantly in the hopes of infecting the public schools.)

    As you can see, the person who posted this one gets pretty deeply into some trends in careful empirical work bearing on decision theory. For an easier introduction to this line of research I’d recommend David Eagleman’s *The Brain: The Story of You*.

    Eagleman is a brilliant guy. He took a bachelor’s degree from Rice University in English literature, then switched to neuroscience, where he’s made an impressive career. But he also wrote a wonderful book that I think I may have mentioned to you, called *Sum: Forty Tales from the Afterlives*.

  11. Posted January 25, 2016 at 11:46 am | Permalink

    Note how we are “quite far in” now, so Dan Dennett’s usual admonishments (he’s right about *this*, as far as I can tell) about not prejudging what “consciousness” is like apply here, especially with regard to timing. Life is messy – the brain is parallel, but *not* globally synchronized, so different bits do stuff at different times.

  12. Alastair Haigh
    Posted January 26, 2016 at 3:44 am | Permalink

    I hope this is sufficiently related, but I’m interested in this topic too. I came across a paper (Lages & Jaworska, 2012) that’s critical of the meaningfulness of the predictive capacity of the patterns found in brain activity by Soon et al. (2008) and subsequent studies such as Bode et al. (2011).

    They argue that, basically because humans are very poor at generating random sequences, you can predict behaviour with a similar accuracy (around 60%) just by analysing previous decision sequences – no fancy fMRI required.

    I don’t understand the technical details of either the pattern classifier they used to predict behaviour from previous decisions or the one they used to predict from the fMRI data, but are the two sources of information not completely unrelated to one another? You can predict behaviour from previous choices – so what – we can predict behaviour from brain activity. The accuracy for both is around 60%. Could that simply be a coincidence? (See the toy sketch after the reference below for how the previous-responses approach can work.)

    It’s a fascinating but fiendishly slippery puzzle and it makes my brain hurt.

    Full reference:

    Lages, M. & Jaworska, K. (2012). How predictable are “spontaneous decisions” and “hidden intentions”? Comparing classification results based on previous responses with multivariate pattern analysis of fMRI BOLD signals. Frontiers in Psychology, 3, 1-8.
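
    Here’s a toy sketch of how prediction from previous responses alone can work. This is not Lages & Jaworska’s actual classifier; the 60% alternation bias and the “always predict a switch” rule are made-up assumptions, chosen only to show how sequential structure by itself can land in the reported accuracy range.

    ```python
    # Toy illustration (not Lages & Jaworska's method): predict each binary
    # choice from the immediately preceding one, exploiting a tendency to
    # alternate more often than a truly random process would.
    import random

    def simulate_choices(n, p_alternate=0.6):
        """Simulated subject who switches hands 60% of the time (assumed bias)."""
        seq = [random.choice("LR")]
        for _ in range(n - 1):
            prev = seq[-1]
            if random.random() < p_alternate:
                seq.append("R" if prev == "L" else "L")
            else:
                seq.append(prev)
        return seq

    def accuracy_from_history(seq):
        """Always predict a switch; count how often that prediction is right."""
        hits = sum(1 for prev, cur in zip(seq, seq[1:]) if cur != prev)
        return hits / (len(seq) - 1)

    choices = simulate_choices(10_000)
    print(f"accuracy from previous responses alone: {accuracy_from_history(choices):.2f}")
    # ~0.60, in the same range as the fMRI-based classifiers, with no brain data
    ```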

  13. Posted February 7, 2016 at 5:01 pm | Permalink

    Thought experiment.

    You have a computer. You programme it entirely yourself with an algorithm of your own devising. You spend years refining the algorithm. The algorithm selects when to push a button after it sees a light go on. You leave the room while an experiment is in progress – an experiment to “predict” when you “decide to push the button”. You reenter the room. The experimenter running the session says, “I have proved you have no free will; the decision to push the button was made when you were out of the room, and you only became ‘conscious’ of it when you reentered.”
    You reply – YES – but that was ME operating in MY computer-form making the decision, a decision of MY free will.

    • Posted February 8, 2016 at 5:23 am | Permalink

      Implications of thought experiment:
      1) What matters is “ultimate responsibility” for a decision, not when you “became conscious” of it
      2) A decision made by a subordinate process which is formed as part of YOUR decision making system still makes YOU ultimately responsible for that decision
      3) Whether you are a Compatibilist or an Incompatibilist, it makes no difference whatsoever WHEN you become “conscious” of that decision, as long as it’s YOUR decision
