A new hypothesis about consciousness

In my view, there are two big problems of consciousness. The first is mechanical: how does it work? (This is called “The Hard Problem of Consciousness”.) What configurations of neurons create “qualia”, the sensation of conscious experience that includes pain, pleasure, self-awareness, and so on? Many theologians and obtuse academics maintain that we’ll never be able to understand how materialism can explain this, and thus use it to either attack materialism and “scientism”, or to plump for God, the Thing That Can Explain Stuff That Science Hasn’t Yet. I’m pretty confident that we’ll one day understand this, but surely not in my lifetime.

That’s the proximal or mechanical problem. The other is evolutionary: what selective pressures, if any, gave rise to consciousness? It surely evolved one way or another, because I doubt that microbes are conscious, but somewhere on the line between us and our microbial ancestors, animals became conscious. (I’m pretty sure that humans aren’t the only conscious animals!)

Now it’s not clear that there was natural selection for consciousness itself, for it may simply be a spandrel—a byproduct of other aspects of brain evolution and activity.

That is in fact the idea of David Oakley and Peter Halligan, professors of psychology and neuropsychology respectively, as outlined in their essay at The Conversation, “What if consciousness is not what drives the human mind?” Here’s a precis of their view; it’s based on a paper that I haven’t yet read (see link at bottom).

Most experts think that consciousness can be divided into two parts: the experience of consciousness (or personal awareness), and the contents of consciousness, which include things such as thoughts, beliefs, sensations, perceptions, intentions, memories and emotions.

It’s easy to assume that these contents of consciousness are somehow chosen, caused or controlled by our personal awareness – after all, thoughts don’t exist until we think them. But in a new research paper in Frontiers in Psychology, we argue that this is a mistake.

We suggest that our personal awareness does not create, cause or choose our beliefs, feelings or perceptions. Instead, the contents of consciousness are generated “behind the scenes” by fast, efficient, non-conscious systems in our brains. All this happens without any interference from our personal awareness, which sits passively in the passenger seat while these processes occur.

Put simply, we don’t consciously choose our thoughts or our feelings – we become aware of them.

As I interpret their essay (see below), our adapted brain is constantly taking in information in a “stream of unconsciousness”, and processes this information in a way to further our reproduction, of which survival and an ability to get along with our fellow humans are components. Some of this leaks into our awareness as “consciousness”, but we neither choose what leaks out nor use it to adjust our behavior:

. . . this may leave one wondering where our thoughts, emotions and perceptions actually come from. We argue that the contents of consciousness are a subset of the experiences, emotions, thoughts and beliefs that are generated by non-conscious processes within our brains.

This subset takes the form of a personal narrative, which is constantly being updated. The personal narrative exists in parallel with our personal awareness, but the latter has no influence over the former.

The personal narrative is important because it provides information to be stored in your autobiographical memory (the story you tell yourself, about yourself), and gives human beings a way of communicating the things we have perceived and experienced to others.

This, in turn, allows us to generate survival strategies; for example, by learning to predict other people’s behaviour. Interpersonal skills like this underpin the development of social and cultural structures, which have promoted the survival of human kind for millennia.

Thus the reception, processing, and acting upon information are the results of natural selection, but consciousness itself is not. As the authors argue, it “does not confer any particular advantage.” At the end they get into issues of free will and responsibility, arguing that these are social constructions (yet also “embedded in the workings of our nonconscious brain”!) that have “a powerful purpose in society” and a “deep impact on the way we understand ourselves.” I’d argue that this is confusing (perhaps it’s explained more clearly in their paper), and even though these concepts may affect how “we understand ourselves”, they are illusions: in the authors’ view, they are not what we think they are.

So much for that. What intrigues me more is their idea that our consciousness is a spandrel, with the real adaptive work going on independent of our awareness. (We know this is true for some things, like our ability to drive from one place to another on cerebral autopilot.) If that’s the case, why the leakage? Is it really a byproduct of deep and unconscious stirrings in our brain—something that’s simply unavoidable given our wiring? Why aren’t we just zombies, with our brain doing everything without the need for consciousness? I doubt that, given our ignorance of how the brain works, Oakley and Halligan have an explanation for this, but their hypothesis is surely intriguing to ponder. It’s sort of the biological equivalent of quantum mechanics: something that’s deeply weird. 

I’ve put their paper and the link below; by all means weigh in below if you’ve read it.

_____________

Oakley, D. A. and P. W. Halligan. 2017. Chasing the rainbow: the non-conscious nature of being. Front. Psychol. 8:1924. 14 November 2017. https://doi.org/10.3389/fpsyg.2017.01924

138 Comments

  1. mikeyc
    Posted January 15, 2018 at 11:09 am | Permalink

    Intriguing. I’m not buying it yet – I still need to read the paper – but an interesting concept.

  2. Posted January 15, 2018 at 11:10 am | Permalink

    I’ve only skimmed the paper, but noticed:

    “embodied self or “center of narrative gravity””

    – This is *not* what Dennett is on about. In fact, the whole idea of an “executive” is in tension with the Dennettian approach to consciousness (which is one reason why I find his stuff on free will exasperating). I think the authors would do better to engage some of this stuff – the idea that consciousness is not “the thing that really matters” is Dennettian in spirit, so to speak, but the details aren’t compatible – I would have loved to see a “face off” between the two.

  3. Posted January 15, 2018 at 11:15 am | Permalink

    Reblogged this on Quaerere Propter Vērum.

  4. YF
    Posted January 15, 2018 at 11:19 am | Permalink

    It seems to me that before one can debate the bases of consciousness one has to define what ‘consciousness’ is in the first place. Awareness? Well, surely bacteria and plants are aware of their environment and adaptively respond to it. Studies show that they are also capable of learning. Are they conscious?

    My prediction is that ultimately the ‘folk’ term ‘consciousness’ will be eliminated by a more complete understanding of how complex biological systems, including brains, work.

    When you drive to work and have no memory of the trip your brain is in state A, and when you do remember the trip (e.g., if an unusual event occurs during the journey), your brain is in state B.

    When you are learning a new skill your brain is in state C, and when that skill eventually becomes automatic, your brain is in state D. And so on…

    • Posted January 15, 2018 at 11:47 am | Permalink

      “It seems to me that before one can debate the bases of consciousness one has to define what ‘consciousness’ is in the first place.”

      Excellent point. I’m an engineer, not a philosopher, so I’m no doubt missing some context, but I don’t see what’s so hard about the “Hard Problem”. Provide some rigorous operational definitions to eliminate confusion and equivocation, then see if it’s still an issue.

      • ppnl
        Posted January 15, 2018 at 3:43 pm | Permalink

        Yeah, you can’t do that, because consciousness is subjective. You can define the color red as a wavelength, but you cannot define the subjective experience of the color red.

        • Posted January 16, 2018 at 8:42 am | Permalink

          Consciousness might be a subjective experience, but what we mean by “consciousness” has to be a shared, preferably objective and operational, definition or else the word is literally meaningless. That would make communication impossible.

          • ppnl
            Posted January 16, 2018 at 11:27 am | Permalink

            Not at all. You could communicate perfectly well with an unconscious robot and never even know that it was not conscious. In fact I have no objective proof that you are conscious. You may be a philosophical zombie.

            • Posted January 16, 2018 at 1:53 pm | Permalink

              I guess I’m not making myself clear. I am trying to distinguish between the subjective experience of consciousness, how we perceive it individually, and the definition of consciousness, the objective, operational description of what we mean by the word in the context of our discussion. The latter might include measurable brain states or behaviors, for example. That’s what we need to avoid talking literal nonsense.

    • Posted January 15, 2018 at 1:48 pm | Permalink

      I know of no studies that show that bacteria and plants learn. But they do have behaviors, and interestingly bacteria at least have individualized behaviors. They differ in how rapidly they spin their flagella, and how readily they suck up lactose.

      • YF
        Posted January 15, 2018 at 2:14 pm | Permalink

        Just a sample:

        Baluška F, Levin M. 2016. On Having No Head: Cognition throughout Biological Systems. Front Psychol 7:902. doi:10.3389/fpsyg.2016.00902

        Gagliano M, Vyazovskiy VV, Borbély AA, Grimonprez M, Depczynski M. 2016. Learning by Association in Plants. Sci Rep 6:38427. doi:10.1038/srep38427

        Adler J, Tso WW. 1974. “Decision”-making in bacteria: chemotactic response of Escherichia coli to conflicting stimuli. Science 184(4143):1292–1294.

        Beekman M, Latty T. 2015. Brainless but Multi-Headed: Decision Making by the Acellular Slime Mould Physarum polycephalum. J Mol Biol 427(23):3734–3743. doi:10.1016/j.jmb.2015.07.007

        Vogel D, Dussutour A. 2016. Direct transfer of learned behaviour via cell fusion in non-neural organisms. Proc Biol Sci 283(1845):20162382. doi:10.1098/rspb.2016.2382

        Trewavas A. 2016. Intelligence, Cognition, and Language of Green Plants. Front Psychol 7:588. doi:10.3389/fpsyg.2016.00588

      • ppnl
        Posted January 15, 2018 at 3:46 pm | Permalink

        Well yeah but then the weather has “behavior”. Any complex system will have individualized idiosyncrasies.

  5. Christopher
    Posted January 15, 2018 at 11:25 am | Permalink

    I tend to get lost quite quickly when attempting to comprehend issues such as consciousness or free will. I might as well be attempting to read it in French or Spanish! However, it often stimulates my own mind into meandering thoughts about these and other “big questions”. The one that popped into my relatively weak and feeble mind this time: when exactly do humans become conscious during our development? Certainly it is not at conception (sorry creationists & bible-thumpers), but is it some time after birth, or maybe around the time of our acquisition of language? (Lots of discussion about memory as well, as there is a threshold, probably tied to language, below which we cannot form memories, probably around age 2.) Does it come “online”, if you will, all at once or in stages? And how do people with various intellectual disabilities figure into this (a difficult and delicate discussion indeed)? So while I rarely feel as if I’ve got a handle on the subject presented, it at least drives me to ponder deep (at least for me) questions, although it always leaves me with the feeling, to paraphrase Brian Cox, that there’s nothing science can’t figure out; I just worry I’m not smart enough to understand it!

    • Posted January 15, 2018 at 11:34 am | Permalink

      Excellent question. Hasn’t anyone looked into this? But then, how would one?

      • mikeyc
        Posted January 15, 2018 at 11:53 am | Permalink

        Here’s one attempt.

        https://www.nature.com/articles/pr200950

        • Christopher
          Posted January 15, 2018 at 1:34 pm | Permalink

          Interesting (what I could understand, anyway). I think much of the problem for me rests in the usage of so much jargon. I find it difficult to process both new words (and their definitions) and the new ideas or arguments being put forth in papers such as these.

        • Posted January 17, 2018 at 1:22 am | Permalink

          Just the abstract is interesting, informative — and understandable. Thanks for the link. Will read the rest later.

        • Posted January 17, 2018 at 6:08 am | Permalink

          One of the most lucid treatments of the subject. Some jargon is inevitable. The lay reader has to look up “myelination” for instance, but that should be fun, if you’re really interested.

    • darrelle
      Posted January 15, 2018 at 12:24 pm | Permalink

      You might enjoy reading about what the cognitive sciences have shown about language. I can recommend The Language Instinct by Steven Pinker as a pretty darn good read that, based on your comment, you would find interesting.

      • Christopher
        Posted January 15, 2018 at 1:28 pm | Permalink

        Excellent to know, as I have already purchased that book and it sits upon my bookshelves awaiting its turn.

    • Posted January 15, 2018 at 2:02 pm | Permalink

      Maybe the best we can do is ask when memories begin to stay ‘remembered’. As in, what are your earliest memories? I think that is quite variable. I remember little snatches of time when in a crib, with a mobile of butterflies over me, and I could not yet talk, only babble. I also remember other brief moments like that. Then nothing. Then something. And the somethings became more frequent over time.

  6. darrelle
    Posted January 15, 2018 at 11:29 am | Permalink

    Don’t have much time, pardon the mess.

    In the excerpts provided the authors seem to contradict themselves.

    Regarding whether or not consciousness confers an advantage and was positively selected for, does it seem plausible that consciousness could provide an advantage in response to stimuli, for example pain? Non-conscious organisms presumably couldn’t anticipate pain but conscious organisms presumably could because they have some conception of self. Related, could consciousness also perhaps confer an advantage in modeling?

    • Posted January 15, 2018 at 9:46 pm | Permalink

      My thoughts, too.

      Is the ability to reason not an advantage? Only conscious organisms can do science. Only conscious organisms can detect and avoid non-obvious threats.

    • Posted January 16, 2018 at 7:46 pm | Permalink

      All your suggestions look extremely plausible to me, and I have no idea whether the authors tried to rule them out.

      A much better approach to consciousness IMO, especially qualia and their function in promoting the organism’s welfare, is Morsella 2005 (pdf format).

  7. rickflick
    Posted January 15, 2018 at 11:29 am | Permalink

    No time to read this now, but fascinating. I wonder why consciousness itself would not be involved as feedback to influence adaptive changes to the brain? Once it became manifest to some degree, its influence on fitness would, it seems, collect new mutations which enhanced this new skill.

    • Tom
      Posted January 15, 2018 at 12:37 pm | Permalink

      Agreed, chance and necessity always allow the possibility of some new strategy that may be of use in one way or another, and in our case perhaps the “wiring” allowed a useful new way of constructing a more accurate model of external reality.
      I sometimes wonder if belief in its widest sense is all the brain really has to go on, and perhaps consciousness helps the belief to be more in accordance with actual reality.

  8. Posted January 15, 2018 at 11:33 am | Permalink

    I think this is probably right, and tightly linked to the illusion of free will, which also seems to be a spandrel.

  9. Ken Kukec
    Posted January 15, 2018 at 11:46 am | Permalink

    Don’t know if I’m the only one, but thinking about this too deeply starts to make me queasy — like a guy in a Twilight Zone episode bumping into himself coming around a corner. Essays like the one in The Conversation ought to come with a dose of Dramamine.

  10. W.Benson
    Posted January 15, 2018 at 11:52 am | Permalink

    I seem to remember having read arguments similar or identical to those of Oakley and Halligan here on WEIT. Somehow they seem neither new nor radical, but rather mainstream.

  11. Posted January 15, 2018 at 12:21 pm | Permalink

    “We do not consciously choose our thoughts or feelings – we become aware of them.”

    I think that’s the way it is.

    • ppnl
      Posted January 15, 2018 at 5:00 pm | Permalink

      I like it as well. But that means we are helpless witnesses to events that we have no control over. If that is so, then what is the point of witnessing them? If we were philosophical zombies, how would the world be different?

      But if we were philosophical zombies, why would we evolve to discuss a consciousness that we do not possess? And if consciousness has no effect, then by what mechanism can we discuss consciousness? If we are helpless witnesses, then we cannot discuss our helplessness.

      • DiscoveredJoys
        Posted January 15, 2018 at 5:12 pm | Permalink

        I’ve often wondered if we expect too much of ‘consciousness’. My thoughts are that most humans spend most of their time awake following learned routines with automatic responses to situations, all modulated by emotions. However really salient automatic ‘stuff’ is also processed by a slower second-guessing function we identify as (at least one type of) consciousness.

        If the second-guessing ‘works’ fitness could be improved as people respond or learn to respond more effectively to stimuli. Thus second-guessing could be selected for by natural (and sexual) selection.

        Throw in culture and language and suddenly understanding the back-seat critic becomes the ‘hard problem’ because we separate it from the humdrum stimulus/prediction/correction/response of the bulk of daily existence.

        • ppnl
          Posted January 15, 2018 at 8:20 pm | Permalink

          Well, it’s kinda hard to parse your meaning here. But tell me how to program a computer to do this kind of “second guessing”, and then tell me if you think that makes the computer conscious.

          Again, the problem is that once you understand the “second guessing” as a deterministic process, there is no need for consciousness. It is just as automatic as your learned routines. It may be more complex, but hand-waving about complexity is no more useful than hand-waving about epiphenomena.

          • DiscoveredJoys
            Posted January 16, 2018 at 4:15 am | Permalink

            I’m quite happy to dismiss the need for ‘consciousness’ as something special or some special activity.

            If consciousness is just part of brain behaviour, and it’s brain behaviours all the way down then the ‘hard question’ simplifies into how we subjectively ‘feel’ *anything*. Whether what we feel is consciousness, anger, free will, hunger, belief or revelation.

            • ppnl
              Posted January 16, 2018 at 11:34 am | Permalink

              Yes, but how do you program a computer to have subjective experiences? If you can’t, then there is really something special happening in the brain. If you can, then show me.

              • rickflick
                Posted January 16, 2018 at 1:11 pm | Permalink

                We can’t say for certain which machines have consciousness. Dennett suggested maybe vending machines have a sliver of consciousness. On the other hand, it seems clear to me that the salient difference between a robot and a human is the liquid matrix. Humans are vastly more complex than computers and have glands and hormones. Asking a robot what the temperature was yesterday invokes a relatively primitive search routine: “-12 degrees”. Asking a human invokes hundreds or thousands of electrical and chemical subroutines which operate in complex feedback loops, producing an ejaculation of emotions: “Boy was it Goddamn Cold! My Chevy wouldn’t start and I was late for bowling.”

              • Posted January 16, 2018 at 2:14 pm | Permalink

                But if you ask about yesterday’s temperature, what do you want to know?
                The exact degrees, or rather feelings and a little small talk? And even if you prefer the latter, it is easily possible to implement an algorithm for the AI that embeds a factual answer in a little conversation.
                Or you could code the AI so that it reacts to certain temperatures with “emotional” responses that resemble your own: annoyance at freezing cold and rain, pleasure at sunshine and warm air 🙂

              • rickflick
                Posted January 16, 2018 at 4:16 pm | Permalink

                I think a well-programmed machine would simulate having an internal conscious condition of some sort. But it wouldn’t arise from hormones and emotion. It would come from a selection of preconceived states. Would it ever get bored and want to go home? Probably not.

      • Posted January 16, 2018 at 2:39 pm | Permalink

        We are not helpless. Have you ever had the feeling that your brain has failed you?
        (Except, of course, those episodes involving the consumption of alcohol or other drugs…)

        Do you think that your brain would let you down from the moment you accepted the fact of non-agency?
        Your brain works fabulously and presents you the best results; we call this presentation consciousness.
        That you do not have control via consciousness, as you have always assumed, doesn’t matter.

        Compare it to animals: we never attributed to them any form of agency. Imagine we could speak with them and tell them: you don’t have free will; your actions are fully determined by the laws of physics.
        Would it plunge the elephant into a crisis of meaning to know that he could not help but snort furiously and flap his ears?
        Would the lion be confused to know that his hunting strategy of bringing down a water buffalo as a pack was not conceived by himself, but that his lion brain had made him hunt in that way?
        Why should we be confused when we know that we are basically just like the animals, just bio-machines? Yes, it’s the way it is – so what?

  12. Posted January 15, 2018 at 12:32 pm | Permalink

    I was just posting on this very topic (the second part). We do not “create” our thoughts through a conscious effort. There is some subconscious process involved but I do not think even that is voluntary. Our thoughts are generated outside of our control. In computer lingo, they are “pre-fetched data.”
    One of our greatest mental powers is of imagination. By using it we can consider the past or the future. Animals which have no imagination live in the present moment, reacting to stimuli but not anticipating them. Our imaginations allow us to consider scenarios; for example “Was that movement in the tall grass due to a zephyr of wind or is there a predator creeping up on me?” We can imagine both. Since wind zephyrs are not particularly harmful, the safest choice is to assume it is a predator and move away from it. This is a survival function that other animals, or at least most other animals, don’t have.
    In order for this to work, though, all of those scenarios need to be “in mind.” This is where our thoughts come from and why. Our imagination is a survival simulator and it is a conscious activity, driven by subconscious thoughts.

    • Posted January 15, 2018 at 2:55 pm | Permalink

      “Animals which have no imagination live in the present moment, reacting to stimuli but not anticipating them.”

      That’s not right.

      Researchers have shown that primates (chimpanzees) have anticipatory thinking, and a theory of mind about the mental states of other group members.

      Animals have no imagination? Have you ever watched a sleeping cat or a dog, obviously dreaming and fighting with their paws against imaginary (sic!) attackers?
      What do you think caused them to do so, if it was not their brain prompting these imaginings in a dream?

    • ppnl
      Posted January 16, 2018 at 2:29 pm | Permalink

      Yeah, animals have no imagination? Somebody has never had a pet.

  13. loren russell
    Posted January 15, 2018 at 12:52 pm | Permalink

    I’ve been pretty satisfied with the Cartesian theater [or perhaps Cartesian multiplex for those easily distracted]. Located conveniently next to the major sensors, neuron committees can view current taps and call up archival footage.

    Add in some neurochemicals in lieu of popcorn. Much more fun than obsessing over spandrels and such.

    • Posted January 16, 2018 at 11:24 am | Permalink

      This runs afoul of Dennett’s criticisms in _Consciousness Explained_, though. For example, the regress that results.

  14. Mark Reaume
    Posted January 15, 2018 at 12:53 pm | Permalink

    A recent podcast of Sam Harris with Anil Seth was on the topic of consciousness. It was 3 hours long (!) and difficult for me to concentrate on fully but it was interesting nonetheless.

    I’d be interested to hear what others thought about their talk and how it relates to this post.

    • strongforce
      Posted January 15, 2018 at 1:00 pm | Permalink

      An excellent discussion well worth the time.

  15. Hal
    Posted January 15, 2018 at 1:08 pm | Permalink

    We are never aware of any of the physical processes underlying cognition, or any other bodily function. Moreover, were we aware of everything that is going on in our bodies, we would probably go mad.

    The Behaviorists concluded that consciousness is an epiphenomenon decades ago, before the cognitive revolution in experimental psychology.

    • ppnl
      Posted January 15, 2018 at 5:11 pm | Permalink

      I would argue that there is no coherent definition of epiphenomena. At least not one useful for this discussion.

      • Hal
        Posted January 15, 2018 at 5:30 pm | Permalink

        That is a term used in the article, and it is generally understood to mean a process that is a by-product of another, is it not?

        • ppnl
          Posted January 15, 2018 at 7:38 pm | Permalink

          The term has many different definitions in many different contexts. I have not found any of the meanings particularly useful in any context.

          For example, gas is a byproduct of digestion, so a fart is an epiphenomenon? What good is that?

          I can explain a fart in terms of chemistry and biology. I don’t need to hand-wave about epiphenomena. Using epiphenomena to hand-wave about consciousness is similarly useless, but it is all we have. Nothing.

          • Posted January 16, 2018 at 11:26 am | Permalink

            Dennett has a discussion of epiphenomenalism (in several senses) in _Consciousness Explained_ and later work. “Non-functional by-product” is a good characterization of the useful notion. Is consciousness one of those? I don’t know, but one has to assume (at least ex hypothesi) some details about what it is first, before one can do that investigation.

            • ppnl
              Posted January 16, 2018 at 12:20 pm | Permalink

              I hate Searle with a passion. Dennett OTOH is just intensely annoying.

              See this video to see why:

              Basically he is saying you can’t tell people that they do not have free will for the same reason you can’t tell people there is no God. If you do that they may rape their sister or something. Yeah… no.

              Sorry but if your brain is part of a deterministic causal network then you have no free will by any definition of free will that matters.

              In order to say that people have free will he changes the definition of free will. He changes it so that people obviously have free will by the new definition. But then coke machines, thermostats and the weather also seem to have free will.

              I still say epiphenomena is a word almost always used to say something dishonest. I just cannot find a situation where it was useful to explain anything. Ever. If it is well defined it is useless and if it is poorly defined it will be abused.

              • Posted January 17, 2018 at 11:29 am | Permalink

                I think that Dennett is more wrong about FW than about general philosophy of mind, including consciousness. If one has interesting things to say on one topic I don’t let the other stuff get to me as much.

  16. d3zd3z
    Posted January 15, 2018 at 1:12 pm | Permalink

    I wonder how this relates to what David Eagleman talks about in “Incognito: The Secret Lives of the Brain.” As I understand his point, he argues that consciousness isn’t so much a passive observer as an artifact of the mechanism that evolved in the brain to resolve conflicts between different “subsystems” in the brain. Most of our mental processes just happen in the modules involved, but when there are conflicts (different parts of the brain try to drive conflicting behaviors), input from those systems is brought together by a more central system to determine which to use.

    For some reason, these multiple streams of input being fed together then end up being perceived by us as consciousness.

    I’m not sure which of these is more explanatory to me. There are definitely aspects (but not most) where I “feel” like I’m making a decision, which might be explained because this consciousness thing is embedded in that mechanism that makes those kinds of decisions.

  17. Posted January 15, 2018 at 1:19 pm | Permalink

    I found Sam Harris’ recent podcast that interviewed Anil Seth on consciousness very illuminating, although counter-intuitive.

    • Posted January 15, 2018 at 5:31 pm | Permalink

      “we know in part, and we prophesy in part….”

      couldn’t resist….

  18. Posted January 15, 2018 at 1:20 pm | Permalink

    I found Sam Harris’ recent podcast that interviewed Anil Seth on consciousness very illuminating, although counter-intuitive.

    • Posted January 15, 2018 at 3:09 pm | Permalink

      I’ve seen Anil Seth in some youtube videos and I found him very inspiring.

      “We’re all hallucinating all the time; when we agree about our hallucinations, we call it ‘reality.’”

      “My research is telling me that consciousness has less to do with pure intelligence and more to do with our nature as living and breathing organisms. Consciousness and intelligence are very different things. You don’t have to be smart to suffer, but you probably do have to be alive.”

      “So perception — figuring out what’s there — has to be a process of informed guesswork in which the brain combines these sensory signals with its prior expectations or beliefs about the way the world is to form its best guess of what caused those signals. The brain doesn’t hear sound or see light. What we perceive is its best guess of what’s out there in the world.”

      http://www.collective-evolution.com/2017/08/08/neuroscientist-shares-how-your-brain-hallucinates-to-create-reality/

  19. Posted January 15, 2018 at 1:29 pm | Permalink

    I am highly skeptical that consciousness (or free will) is a spandrel. Consciousness seems to be too big a part of who we are and how we behave for it not to be involved in selection. For something to be an evolutionary spandrel, it must be neutral with respect to our behavior. How can that be true of consciousness?

  20. Posted January 15, 2018 at 1:54 pm | Permalink

    I had felt that consciousness came about as an emergent property of selection for cleverer brains. Brains with an expanded ability to learn, to reason, to show insight learning, and to exercise enhanced social skills are brains with greater fitness in some contexts. As natural selection moved brains in that direction, they started to become conscious: self-aware. This too has a fitness aspect, since such brains will also exhibit an ability to know what others are seeing and thinking. None of this is in contradiction to the thesis described above.

    • ppnl
      Posted January 16, 2018 at 12:04 am | Permalink

      Hand waving about “emergent properties” is no more helpful than hand waving about “epiphenomena”. Neither is well defined and I suspect you could use the terms interchangeably. For example it gives no clue how or even if you could program a computer to have similar emergent properties.

      The ability to learn and have enhanced social skills is an observable function of a brain. As such I expect it to be understandable as a causal chain. As such you should be able to implement it as a computer program. A computer program does not need consciousness in order to function. It does what it does conscious or not.

      • Posted January 16, 2018 at 11:29 am | Permalink

        Emergent properties are properties of things not possessed by their components. E.g., sufficiently large numbers of water molecules in aggregate under conditions thus and so are wet, have viscosity, etc.

        The interesting thing is trying to figure out when and how it occurs! Oftentimes (as Bunge has suggested) it calls for interdisciplinary work: in the case of consciousness likely biology and psychology, as well as perhaps linguistics and some of the social sciences.

        Incidentally, the above elucidation of emergence should be supplemented by a theory of components, a theory of properties and a theory of things, which I omit for lack of space. 🙂

        • ppnl
          Posted January 16, 2018 at 12:50 pm | Permalink

          By this definition everything is emergent. A house is an emergent property of boards. A board is an emergent property of cellulose fibers. Cellulose is an emergent property of glucose. Glucose is emergent from carbon atoms. Atoms are emergent from interacting protons and neutrons. These are emergent from interacting quarks.

          So saying something is emergent tells us precisely nothing. Yes showing how something emerges is the interesting thing. It is also the only thing.

          None of this offers a clue to how to bridge the divide between the subjective and the objective.

          • Posted January 16, 2018 at 7:49 pm | Permalink

            Yes, pretty much everything is emergent, except for fundamental particles (string theory anyone?). Remind me why this is supposed to be a problem?

            • Posted January 17, 2018 at 5:08 am | Permalink

              No problem. Remember the time when fundamental particles emerged?

          • Posted January 17, 2018 at 11:30 am | Permalink

            Those aren’t *properties*.

  21. Posted January 15, 2018 at 1:58 pm | Permalink

    “At the end they get into issues of free will and responsibility, arguing that these are social constructions (yet also ‘embedded in the workings of our nonconscious brain’!) that have ‘a powerful purpose in society’ and a ‘deep impact on the way we understand ourselves.’ I’d argue that this is confusing (…) and even though these concepts may affect how ‘we understand ourselves’, they are illusions: in the authors’ view, they are not what we think they are.”

    It doesn’t matter that the concept of free will is (only) an illusion, because it is a very useful, helpful illusion, one that serves the purpose of surviving in a society of potentially harmful individuals. That’s why it will take much more effort to change people’s minds on this issue than on the issue of the god illusion.
    The illusion of free will affects our actions and our self-awareness, and it is good for us to think others are evil and behave without morals, and to think we are better than all these sinners. That’s why it is very good to have this illusion, and it will possibly stick forever in human minds. The objective, scientific truth doesn’t provide such nice self-deceptions.

  22. Diana MacPherson
    Posted January 15, 2018 at 2:12 pm | Permalink

    Consciousness is often easily mixed up with self-consciousness, and for good reason, as it’s often described as “what it’s like to be x”. I still don’t necessarily get that. What would be the difference between a conscious ant and an unconscious ant (that isn’t sleeping)?

    • Randall Schenck
      Posted January 15, 2018 at 2:30 pm | Permalink

      Your ant question reminded me of one thing I could recall from this paper. Communication is our advantage over other animals and this ability leads us to speculate or believe things concerning consciousness that we think we have but may not.

    • ppnl
      Posted January 15, 2018 at 4:38 pm | Permalink

      No, the question is more like why is it like anything to be anything at all. This “likeness” is neither useful to explain anything nor does it have an explanation. It just is.

      • Diana MacPherson
        Posted January 15, 2018 at 5:05 pm | Permalink

        But what does an ant without consciousness behave and look like vs. an ant with consciousness? Would the unconscious ant just engage in behaviours to survive and reproduce as an ant, while the conscious ant would think about those behaviours? “Oh how I hate carrying these stupid leaves!”

        • ppnl
          Posted January 15, 2018 at 6:00 pm | Permalink

          Well that is the problem. If consciousness has no effect then there cannot be any difference in the ants.

          But what if the conscious ant attempted to have a discussion of consciousness with the unconscious ant? Could the unconscious ant discuss a consciousness that it does not possess?

          • Diana MacPherson
            Posted January 15, 2018 at 6:27 pm | Permalink

            Conscious Ant: “I hate carrying these leaves”
            Unconscious Ant: …..

            • ppnl
              Posted January 15, 2018 at 7:49 pm | Permalink

              If an ant has the thought “I hate carrying these leaves” that thought exists as a physical measurable state of the ant’s brain. That brain state has an in principle predictable effect on the observable behavior of the ant. For example it may shirk its duty and free load off the colony.

              But this is true regardless of whether the ant experiences the thought. So there can be no measurable difference between the conscious and unconscious ant. They are both just following a causal chain.

  23. Posted January 15, 2018 at 3:16 pm | Permalink

    I feel most discussions of consciousness suffer from a tendency to reify abstract concepts. Just because somebody has created a word for a process, or for the difference between two states, does not mean that there is a thing with that name. In other words, our ancestors created ‘conscious’ to refer to the difference between a living and awake person and a dead, sleeping, knocked-out, or drugged person, and subsequently others started thinking of ‘consciousness’ as a thing that sits in our heads. In addition, there is the tendency to see consciousness as a binary state instead of something that must come as a gradient.

    Taking that into account I do not understand the hard problem of consciousness, and I suspect that the question is misguided. It assumes that there is this mysterious thing that needs an explanation, but to me it looks just like the following:

    “How did growth evolve?”

    “At what point in evolution did animals suddenly acquire growth?”

    “At what point in development does a human child suddenly acquire growth?”

    “How does growth work?”

    “Why do we have growth if I can imagine an organism increasing in size without having growth in it? I mean, what is the adaptive advantage?”

    At least some of these research questions appear clearly misguided if applied to other abstract terms, but for some reason people find them entirely meaningful if applied to what is at its core ‘computing information on one’s surroundings and one’s own relation to them’.

    As for “We do not consciously choose our thoughts or feelings – we become aware of them”, I am likewise uncertain whether that sentence has meaning. Certainly we do more computing of possible outcomes of our next actions than e.g. a wasp, so we have more agency than a wasp (or an oak tree, for that matter). And that is what counts, the rest is semantics.

    • ppnl
      Posted January 15, 2018 at 3:58 pm | Permalink

      But a computer program playing a game may have much more agency than I in that game.

      But does that mean it experiences the game?

    • Posted January 15, 2018 at 4:23 pm | Permalink

      Yes. Who would find it in any way problematic to believe that someone (or something) is (a bit) more or (a bit) less conscious?

      • ppnl
        Posted January 15, 2018 at 4:33 pm | Permalink

        But the point is that consciousness need not be any part of the explanation of how something works. It isn’t a mechanism like a feedback loop. It is more like a statement that “here be magic”.

        • Posted January 15, 2018 at 4:40 pm | Permalink

          What would be your take on the reasonable comparison between growth and consciousness?

          • ppnl
            Posted January 15, 2018 at 5:43 pm | Permalink

            No useful connection at all. It is possible that different things have different levels of consciousness. How would I know? Consciousness is subjective, so I can’t know how strongly something else experiences it. For all I know a mouse’s desire for cheese shines as brightly in its mind as anything I experience. The information content may not be as great, but the power may be the same.

    • Posted January 15, 2018 at 5:06 pm | Permalink

      “Certainly we do more computing of possible outcomes of our next actions than e.g. a wasp, so we have more agency than a wasp (or an oak tree, for that matter). And that is what counts, the rest is semantics.”

      No, the rest is not semantics.
      You compare a human with a wasp, and your result tells you that we have more ability to act, more agency, than a wasp, and that should be all that counts?
      And the wasp can compare itself to a protozoan and can happily say: oh, I have more ability to act than this, that’s all that matters, the rest is semantics. And the protozoan looks at the bacterium: “Oh, I can do more than this,” and so on.
      This way of looking at things leads to nothingness. The point of reference is to look within our species, and so to pursue the question: why can man X not behave like man Y, and can we blame somebody for not behaving as we want him to? The physics says no. That’s the key; that’s what it’s about. We lie to each other in everyday life, and the criminal judges lie to the people and the defendant about it, by making everyone believe that the perpetrator could have done otherwise if he had just tried. The question of consciousness and free will affects us as a society; it is not about looking at the cognitive possibilities and limitations of other species.

    • Posted January 16, 2018 at 12:40 am | Permalink

      ppnl,

      Unfortunately I am not sure what you are trying to say. Your first comment sounds as if you think that there is more to ‘experience something’ than processing sensory input. If that is the case, then I would say that (a) it seems to me that simply processing sensory input with the higher brain functions is precisely what words like experience and conscious were invented to describe, and (b) I find it hard to understand what that extra should be. I would consider the burden of evidence to be on the side of those who believe that e.g. consciousness is an inexplicable magical thing sitting inside human heads. But then your response to jpvuorela reads as if you reject ‘here be magic’ style ad-hoccing yourself…

      As for using growth as an example, a better one may have been something closer to mental processes, e.g. vision. So for the question of the adaptive advantage of consciousness, consider “what is the adaptive advantage of vision if animals could evolve to just see things without having vision?” Asking the same question about consciousness seems to be just as odd.

      As for agency, I would say winning the game is not the same as agency. If faced with a human opponent who is likely to destroy the chess computer in a fit of anger after losing, the computer would have a hard time losing on purpose, for example. Or even figuring out the danger. Or even caring about self-preservation.

      sherfolder,

      Criminal justice being an important question that affects us does not in any way imply that nothing can be learned about consciousness by looking across the tree of life. When a theologian like David Bentley Hart, for example, argues that consciousness is irreducibly complex, his argument falls apart the moment somebody nonchalantly waves in the general direction of other animals and/or human development from zygote to adult. That is also an area of concern.

      • Posted January 16, 2018 at 5:57 am | Permalink

        “just see things without having vision”

        That is, indeed, even more to the point as a comparison.

      • ppnl
        Posted January 16, 2018 at 1:23 pm | Permalink

        ” Unfortunately I am not sure what you are trying to say. Your first comment sounds as if you think that there is more to ‘experience something’ than processing sensory input. ”

        A computer can process input from a camera and react to it appropriately. My phone can recognize my face and unlock itself for me. Do you think it is happy to see me? Do you think it experiences anything at all? So yes I think there is more to “experiencing something” than processing sensory input. I think that is obvious beyond the need to say.

        “what is the adaptive advantage of vision if animals could evolve to just see things without having vision?”

        Well, my phone can see things and respond to them appropriately without being conscious of them. There are even people with blindsight who have no conscious awareness of their visual field and claim to be blind, yet they can respond appropriately to visual stimuli. Oddly, there are also people who are totally blind yet swear up and down that they can see. The mind is a strange place to visit. Unfortunately we live there, and that strangeness becomes invisible.

        ” As for agency, I would say winning the game is not the same as agency. If faced with a human opponent who is likely to destroy the chess computer in a fit of anger after losing, the computer would have a hard time losing on purpose, for example. Or even figuring out the danger. Or even caring about self-preservation. ”

        Well ok but you will have to explain what you mean by agency in a causal way. If you cannot tell me how to detect it, measure it and create it you may as well be talking about magic.

        I could program a computer to take into account the possibility of an angry player. I could even include an algorithm for throwing a game to avoid that anger. What I cannot do is make it “care”. It is a deterministic algorithm that does not need to care in order to function. Even if it did care, that caring would serve no function in the algorithm.
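
ppnl’s point can be made concrete with a toy sketch (entirely my own illustration; the move names and scores are invented): a “throw the game to appease an angry opponent” rule is just another branch in the move chooser, and the program functions identically whether or not anything “cares”.

```python
# Illustrative sketch only: a chess-like move chooser with an
# "appease the angry opponent" branch. Moves and scores are invented.

def choose_move(moves, scores, opponent_angry=False):
    """Pick the best-scoring move, or deliberately the worst one
    (throwing the game) if the opponent is judged likely to react badly."""
    ranked = sorted(moves, key=lambda m: scores[m], reverse=True)
    return ranked[-1] if opponent_angry else ranked[0]

scores = {"Qxf7": 9.0, "Nc3": 1.5, "a3": 0.2}
print(choose_move(list(scores), scores))                       # Qxf7
print(choose_move(list(scores), scores, opponent_angry=True))  # a3
```

The branch encodes the behavior; no inner experience appears anywhere in the causal chain.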

        • Posted January 16, 2018 at 2:49 pm | Permalink

          Again, if conscious of something or experience are supposed to be something qualitatively different than what the computer and phone do, as opposed to merely massively more complicated data processing, then I’d like to hear what it is. That is where the burden of evidence should be. If caring is more than having internal states that lead an entity into taking action to achieve something, then I’d like to know what it is, etc.

          The blindsight example is interesting. I understand then that you claim that we could indeed be philosophical zombies, be blindsighted to all sensory data, and not lose anything. I would have to look into that, but is it really the case that they are fully functional in their cognitive processing of visual input?

          I assume the question about “agency in a causal way” aims at claiming that we do not have agency because we are subject to cause-and-effect. But that is not what agency means. Yes, everything is subject to cause-and-effect (setting aside the observation that C&E is a human-invented concept to haphazardly describe observations in nature, not a deep truth of the universe handed down by the gods), but within that network of events some beings have more internal decision processes than others. Again, originally nobody came up with the word agency (or free will, or whatever) to describe magic and foist it onto the world, but they invented a word to describe the difference between a rock tumbling down a hill and you deciding to walk down the hill as opposed to staying on top of it, an option that the rock didn’t have. Does that difference between you and the rock still exist? Yes? Okay, then you have more agency than the rock.

  24. ppnl
    Posted January 15, 2018 at 3:55 pm | Permalink

    It seems like we experience our thought processes much like how we experience color. Experiencing it gives us no control over it. In that sense then consciousness has no effect. We would do the same thing with consciousness as without it. The same way a camera would take the same picture with or without experiencing color. It is a passive recorder of events that it has no control over.

    But wait! How then can we be having this discussion of consciousness if it has no effect?

    • Mark Reaume
      Posted January 15, 2018 at 4:03 pm | Permalink

      I think consciousness has to do with our brain’s ability to hold attention to something. The senses are sensing all the time but you don’t notice the colour of an object until your attention is pulled to it. The underlying information processing is competing for attention and certain pathways have a higher or lower threshold – like seeing movement at the corner of your vision.

      This may be one narrow aspect of consciousness anyway.
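
The threshold-and-competition idea above can be turned into a toy sketch (entirely illustrative; the channel names and numeric thresholds are invented, not taken from any paper): several sensory channels compete, each with its own detection threshold, and only the signal that most exceeds its threshold captures attention.

```python
# Toy winner-take-all attention selector (illustrative sketch only).
# Channel names and threshold values are invented for the example.

def select_attention(signals, thresholds):
    """Return the channel whose signal most exceeds its threshold,
    or None if nothing breaches its threshold (nothing is 'noticed')."""
    margins = {ch: s - thresholds[ch]
               for ch, s in signals.items() if s >= thresholds[ch]}
    if not margins:
        return None
    return max(margins, key=margins.get)

# Peripheral motion has a low threshold, so a weak motion signal can
# out-compete a stronger but high-threshold colour signal.
signals = {"motion_periphery": 0.4, "colour_centre": 0.6}
thresholds = {"motion_periphery": 0.1, "colour_centre": 0.5}
print(select_attention(signals, thresholds))  # motion_periphery
```

Nothing in the sketch is conscious, of course; it only illustrates how “seeing movement at the corner of your vision” can win the competition despite a weaker raw signal.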

      • Mark Reaume
        Posted January 15, 2018 at 4:14 pm | Permalink

        A quick search on the same site shows that this is not a novel idea:
        The attention schema theory: a mechanistic account of subjective awareness
        https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00500/full

      • ppnl
        Posted January 15, 2018 at 4:29 pm | Permalink

        You can have “higher and lower thresholds” without reference to consciousness. There exist many systems that only pay attention to an input if their attention is pulled to it. These feedback and control mechanisms are important and useful, but they need not involve consciousness.

        • Mark Reaume
          Posted January 15, 2018 at 5:05 pm | Permalink

          “These feedback and control mechanisms are important and useful but need not involve consciousness.”

          Right, but they could lead to consciousness. I think that’s the point. These mechanisms were the things that evolved, our subjective experience is the experience of moving from one item of attention that has breached this threshold to the next.

          Anyway, I’m really out of my depth here, I just thought it was an interesting way of thinking about it.

          • ppnl
            Posted January 15, 2018 at 5:33 pm | Permalink

            Maybe they could but I see no logical connection at all. Maybe they could also lead to magic but you would have to supply an argument for that as well.

            • Mark Reaume
              Posted January 15, 2018 at 5:38 pm | Permalink

              Fine, I won’t comment on this site anymore I guess.

              • ppnl
                Posted January 15, 2018 at 5:52 pm | Permalink

                Sorry if I offended. FWIW I rarely comment here.

      • Posted January 15, 2018 at 4:37 pm | Permalink

        “I think consciousness has to do with our brain’s ability to hold attention to something. The senses are sensing all the time but you don’t notice the colour of an object until your attention is pulled to it.”

        No, that is not correct.
        There are experiments (e.g. by Christof Koch) in which pictures were shown for so short a time that the subjects could not consciously see them, but their unconscious registered them, as later experiments proved. Although they had not consciously seen the pictures, they recognized the “invisible” unseen objects correctly.

        • Mark Reaume
          Posted January 15, 2018 at 5:07 pm | Permalink

          Interesting; I’m not sure how that pertains to what I said, though. It’s possible I’m just missing something.

          • Posted January 15, 2018 at 5:32 pm | Permalink

            You wrote:
            “The senses are sensing all the time but you don’t notice the colour of an object until your attention is pulled to it.“

            Your brain knows the colour of an object even in cases where your attention is not pulled to it. Your unconscious mind knows, yet your consciousness doesn’t.

  25. Don Mackay
    Posted January 15, 2018 at 4:18 pm | Permalink

    I first heard of ‘spandrel’ at a lecture given by Stephen Gould in Auckland, NZ, in the late eighties. I got the distinct impression that the whole idea of the ‘spandrel’ in evolution theory was that it was a place where innovation could escape selection. In his book ‘Evolution by Gene Duplication’ (1970), Susumu Ohno suggested gene duplication as the way forward to explain novelty in genomes. Is there a link between gene duplication and spandrels? I would love some enlightenment from a modern theorist. I am a retired Bio. teacher!

  26. ThyroidPlanet
    Posted January 15, 2018 at 4:36 pm | Permalink

    Sub

  27. Posted January 15, 2018 at 4:46 pm | Permalink

    I think Oakley and Halligan are on to something in questioning the causal role of consciousness. Naturalistic explanations of a phenomenon normally require that all elements playing a role in the explanation be, in principle, observable. In particular, when explaining your behavior, we can appeal to your brain processes as they are observed to control bodily movements, and we can appeal to your intentional states, plausibly construed as being realized by those (potentially observable) processes. However, we don’t and can’t observe your pain, your sensation of red, or any other of your experiences. Your experience only exists for you as an up-and-running cybernetic system; it isn’t the sort of thing that can be observed at all, not even by you. If, therefore, conscious experience is not an observable, we can’t appeal to it in third person explanations of behavior, any more than we can appeal to invisible ghosts, spirits or souls. Experience doesn’t appear in what we might call third person explanatory space: the space inhabited by observables such as brains, bodies and behavior itself.

    So why are we conscious? Stay tuned for the foreseeable future.

  28. ppnl
    Posted January 15, 2018 at 5:25 pm | Permalink

    Tom Clark,

    Yes this is excellent. But here is the problem. If consciousness is not observable even by us then how can we discuss consciousness? Isn’t this discussion an observable effect of consciousness?

    • Posted January 15, 2018 at 7:13 pm | Permalink

      I don’t think we’re in an observational or perceptual relation to our own experiences. Rather, as conscious beings we *consist* of experiences that we can’t distance ourselves from. Yet we can discuss the having of experiences since we’ve learned that experience mediates our contact with the world. We make the distinction between experience and what is experienced, e.g., how the apple appears to me in experience and the apple itself. So experience doesn’t have to be an observed object for us to be able to speak of it.

      • Posted January 24, 2018 at 5:09 am | Permalink

        Sorry this is late, but that explanation won’t work. We learn words like “experience” and “subjective” and “painful” from other speakers of our language. We don’t invent a private language whose phonemes just happen to coincide with the private languages of others.

  29. ppnl
    Posted January 15, 2018 at 8:06 pm | Permalink

    I cannot attach any meaning to “we consist of experiences”. Physically what does that imply? For example how would I program a computer to consist of its experiences?

    As a metaphor it’s lovely but as an explanation of anything…

    • Posted January 16, 2018 at 6:38 am | Permalink

      What I mean is that subjectively all you have available to you is your experiences, e.g., of your body, emotions, thoughts, external world and all physical objects you encounter. It’s all experientially mediated. But since you don’t observe experiences, you consist of them subjectively, even the experience of the I that “has” them. This point isn’t meant to be explanatory, just descriptive of our situation, which is that the world in its entirety is accessed by us via experience, even though we don’t find experience in the world as thus accessed. About which see “Dennett and the reality of red”.

      • ppnl
        Posted January 16, 2018 at 1:31 pm | Permalink

        Does a computer program “consist of experiences”?

        As I said above I find Dennett deeply annoying. In some odd way he seems to want to have his cake and eat it as well.

  30. squidmaster
    Posted January 15, 2018 at 11:01 pm | Permalink

    I’ll buy part of this idea: consciousness is a perception. Brains have developed a perception of what they just did (almost in real time), such that the brain says, ‘I just decided to get a drink’, even though the ‘decision’ to get a drink (for many complex reasons) occurred several seconds before the brain was aware of it. Agreed. The brain certainly makes up a narrative, etc., of its sequential perceptions that we call ‘personal history’. I maintain that this is well established in the neuroimaging literature.

    Is this adaptive and subject to natural selection? This is less clear, but I can certainly argue that perception of ‘what the brain just did’ may well provide vital information to the organism. Working memory provides the organism with the ability to chain several perceptions/actions and associate them with an outcome. This presumably allows animals, among other things, to have a ‘language’ (whether it be crows, dolphins or humans).

    Certainly, the awareness of our brains’ doings (consciousness) is more highly developed in humans than our nearest relatives. It’s certainly a reasonable hypothesis that selection took place on some substrate other than the neural perception of precedent brain activity (I just decided to eat an apple), but it’s also a reasonable (I think more likely) hypothesis that, once awareness of precedent activity reached a certain level of sophistication, that ability itself became fodder for selection.

    I’m not sure how one would test this idea. Any thoughts?

    • ppnl
      Posted January 16, 2018 at 12:23 am | Permalink

      Yes the brain creates a narrative and yes this can be selected for. But you can view that narrative creation as an algorithmic process that can be implemented as a computer program. And as such it need not involve consciousness at all in order to have the same observable effect and utility.

      You, like others here, are confusing the ability to have experiences with the content of those experiences. The mystery is not that we create narratives. The mystery is that we experience those narratives, or anything else for that matter. Color is not a narrative, for example.

      • squidmaster
        Posted January 16, 2018 at 1:04 am | Permalink

        Color is a perception based on the wavelength of light reflected from surfaces and our visual system’s perceptual apparatus. The observation that every organism that can respond to the color ‘red’ agrees that the color is ‘red’ is sufficient to show that the individual experience of ‘red’ is, indeed, ‘red’. The ‘red’ cones in the retina, the cells in the lateral geniculate nucleus of the thalamus, the primary visual cortex and the visual association cortex of humans all react identically when the individual is shown the color ‘red’. This clearly shows, absent obscurantism, that everyone experiences ‘red’ the same way. Consciousness and perception are not a mystery, merely phenomena that are incompletely described.

        • rickflick
          Posted January 16, 2018 at 7:05 am | Permalink

          This consistency and physicality of perception has always suggested to me that the hard problem may not be hard at all. It’s plausible to me that perceptions are accompanied by an “emotional” component involving chemicals and signals that come with the perception. A red color “suggests” feelings associated with the structures involved in its perception, along with memories of past experience. So it doesn’t feel like “nothing” to see a red object; it is always accompanied by the same set of electrical and chemical fireworks, along with the echoes of memory.
          A crude analogy is the feeling of pain when touching a hot stove. Emotions of regret, anger, and disgust follow the experience, giving it a memorable tone or reverberation in the brain: what it’s “like” to burn your hand.

        • ppnl
          Posted January 16, 2018 at 1:38 pm | Permalink

          You use the word “perception” as if it had the magical ability to explain everything. But perception is the very thing that needs explaining. You are going circular.

          Hook a camera to a computer and the computer can detect and report the same color that you see. Did it experience redness? Or did it just detect it in the mechanical sense of a mouse trap detecting a mouse? How do you tell the difference?

  31. Dale Franzwa
    Posted January 15, 2018 at 11:59 pm | Permalink

    An entirely different view from that expressed in this post is that consciousness is a “fundamental and ubiquitous feature of the universe. Mind is everywhere.” This view is called ‘panpsychism’. The magazine, Philosophy Now, issue 121, devotes a section of four articles that both present the view and criticize it scientifically and philosophically.

    If interested, go to the magazine’s website, look up Issue 121 among the back issues, and you can read the four articles for free. If you want to read more, you’ll have to spend money.

    • Posted January 16, 2018 at 12:43 am | Permalink

      Unfortunately that sounds like it would first require a redefinition of ‘consciousness’ and ‘mind’, specifically a watering down to the degree that the terms lose all their original and useful meanings.

    • ppnl
      Posted January 16, 2018 at 12:48 am | Permalink

      That’s pretty good. Well at least it recognizes the problem with consciousness.

      But in what way is the idea testable? What experiment could you do to give evidence? How could you tell the difference between a physicalist universe and a panpsychic universe?

      In the end I fear it just reduces consciousness to a Platonic essence. That really isn’t the way to an explanation.

      • Posted January 16, 2018 at 5:45 am | Permalink

        I don’t think some theorists of panpsychism would accept any other sort of universe than a physicalist one. The point is that consciousness is everywhere on a quantum level. Don’t ask me how to test this.

        • ppnl
          Posted January 16, 2018 at 1:47 pm | Permalink

          But what of use derives from it? I can study the lint in my navel until I starve. I can study panpsychism to the same end.

          Or I can study a fungus and invent penicillin. Or crystals and invent the transistor.

          Unless panpsychism connects to something useful in the world I have no use for it. It is just a castle in the air.

    • Posted January 16, 2018 at 5:40 am | Permalink

      There are theories based on the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level.

      This raises the question: does consciousness need neurons? Which came first?

      I haven’t yet read Roger Penrose’s Fashion, Faith and Fantasy, so I don’t know if he’s tackled some problems of his earlier theories.

    • Posted January 16, 2018 at 11:34 am | Permalink

      Panpsychism in all the varieties I know fails to explain how the miniminds aggregate. For example, in Leibnizian terms, what I experience as “the me” is a “dominant monad”, with my body (cells?) as the reification of other monads. But how do they “work together”? What makes their little bits of mentality not affect mine?

      It is worse in the “materialist” case, that of Chalmers, whose book I am rereading now by chance (for another reason). We are partially composed out of electrons (say) and thus the little “mental pole” of each one has to contribute to the “nonmaterial” experiencing I have. How??

      • Dale Franzwa
        Posted January 17, 2018 at 12:02 am | Permalink

        I’m pleased to see all the interest my post about panpsychism generated. The following issue of PN (#122) contained some letters to the editor reacting to this “radical theory of consciousness”. The latest issue (#123) has a bunch more pushback letters but I haven’t had a chance yet to read them.

  32. Posted January 16, 2018 at 3:42 am | Permalink

    “…the contents of consciousness are generated “behind the scenes”…. without any interference from our personal awareness, which sits passively in the passenger seat…”

    This must be true to some extent because it is possible to go to sleep with a problem and wake up with the solution fully formed.

    But it is also possible to train oneself to concentrate on something, to use the mind like a searchlight in the night sky.

    I agree that few if any of us now alive will see consciousness explained.

    In my opinion the evolutionary explanation is that consciousness is and was a spandrel.

    Like many other adaptation theories, the evolution of consciousness seems to be a “just so” story, a tautology.

    • rickflick
      Posted January 16, 2018 at 7:27 am | Permalink

      “few if any of us now alive will see consciousness explained.”

      Unless perhaps it already has been explained but the explanation has not been widely accepted.

  33. Thanny
    Posted January 16, 2018 at 3:52 am | Permalink

    I think the philosophical concept of a zombie is incoherent.

    Consciousness surely is an emergent property of the complex web of neural activity which provides the cognitive abilities that some philosophers worry could be mimicked by “zombies”. That is, I don’t think their concept of a zombie is even possible. If the entity can do what they posit it can do, it must be conscious as a result.

    And I don’t think it matters what the substrate is. A sufficiently fast computer, running code complex enough to do the kind of processing a brain does, would be just as conscious as the brain whose functional complexity it matches.

    • Posted January 16, 2018 at 11:37 am | Permalink

      Dennett (to pick a professional) and some others (like me) have argued that (philosopher’s) zombies are incoherent, so you have some company.

    • ppnl
      Posted January 16, 2018 at 2:21 pm | Permalink

      Again with the “emergent property”. Can you give a definition?

      I like the idea of substrate independence.

      A philosophical zombie may be incoherent, but it is difficult to see why. To see this, consider my version of Searle’s Chinese room:

      Say there is a dog and someone tortures that dog to death with a cattle prod. That is a serious crime that must be punished.

      Now say someone recorded the event on video. Does playing back the video recreate the pain? Should this person be punished for inflicting pain on an innocent animal? Most people will say no.

      But what if it were a special “camera” that recorded more details, like the state of all the cells and even the brain state of the dog? You can imagine it going all the way down to recording the state of individual atoms if you wish. Now you play this recorded data out in the memory of a computer. Have you recreated the pain of the dog? Have you committed a crime? I mean, it is just recorded data, right? There is just a pattern of charging and discharging capacitors. It is no more real than a movie, right?

      But if you are serious about substrate independence, then you have just caused a dog intense pain and killed it. You are a criminal. If you are not a criminal, then philosophical zombies exist.

      Pick one.

      • Posted January 17, 2018 at 11:35 am | Permalink

        Ask Bentham’s question: does the system suffer? The answer to that, by Dennett’s understanding and mine, is yes, some systems do, and we should avoid inflicting it. We simply think that Nagel’s “something it is like to be” is way too simple. Dennett’s quotation from Wittgenstein about the “conjuring trick” is exactly right, and shows (also) that the eliminative materialism of the Churchlands (*not* that of Quine) is *almost* the same position.

      • Thanny
        Posted January 24, 2018 at 7:32 pm | Permalink

        Searle’s Chinese room does, in fact, both speak and understand Chinese.

        Where he goes wrong, and perhaps you, too, is to oversimplify the description of what’s going on in these hypothetical constructs.

        Consider this: “He took the car’s engine apart and put it back together again.”

        Very easy to describe what took place. We all understand the concept of splitting something into parts, and putting those parts back together into the original thing. But the actual disassembly and reassembly of a modern automobile engine is very complex, with hundreds of parts. The actual task is much more difficult than the simple description of it.

        What a car engine does is actually pretty simple: set off a series of explosions, and convert the resulting kinetic energy of expanding gases into the kinetic energy of linear motion.

        What goes into understanding and producing a human language is many, many orders of magnitude more complex. Yet it’s almost as easy to describe a contraption with a person inside that does the same thing. That simplicity of description is extremely deceptive.

        Same goes for your “recording device”. Your description is fairly simple, and your analogy to a video recording suggestive (deceptively so) of even more simplicity. But you’re not talking about anything remotely simple. What you describe is nothing short of completely recreating a dog’s brain in the act of experiencing pain. It doesn’t matter what it’s made of. All that matters is the pattern of interacting parts – and that pattern will be suffering, just as the original one made of meat was.

        And yes, if you did manage to achieve such a stupendously complex feat of engineering, and wasted it on reproducing a suffering dog’s brain, you should go to jail for a bit.

  34. Eric
    Posted January 16, 2018 at 6:52 am | Permalink

    I’m not buying the “no effect” claim. If that were true, they’d be saying that humans would have built Notre Dame, gone to the moon, etc., even as unconscious or semi-conscious animals. But we don’t see anything approaching human technological sophistication in other animals, and there are some very smart animals out there that have been on the earth for millions of years longer than humans or even hominids.

    I do think they’re right to say that much of what we decide is decided before we consciously ‘think’ it. The evidence for that from experiments is, AIUI, pretty clear. But I also think that the technological and cultural sophistication of humans is almost certainly related to our fairly unique, oversized forebrains and what goes on in them… which is consciousness, AIUI.

    Now I guess it’s possible that the human forebrain does both things but they are utterly separate and independent. But that seems unlikely to me. My guess is that they are intimately related; the same neuronal pathways and linkages that produce the awareness of consciousness are the things that allow us to execute ‘build rocket, go to the moon’ type plans.

    • Posted January 16, 2018 at 7:20 am | Permalink

      I totally agree with the last paragraph. However, this doesn’t address the question of whether consciousness arose or, more precisely, got so complex as a spandrel or as an adaptation.

  35. Posted January 16, 2018 at 7:17 am | Permalink

    About “free will and responsibility as social constructions”:

    The mundane fact is that only things resulting from chance and the laws of nature can be studied. If a god can change the laws at will, it’s goodbye to all method.

    On a deeper, quantum, level it’s not possible to predict every single occurrence, even in principle, but for a fairly 🙂 complex system probabilities apply.

    On the superficial :), that is to say moral, level: people, like any decision makers, have to make choices. This is as free as it gets. In this sense most adults are free, but sometimes courts of law have to make decisions (are free to make decisions?) about an individual’s competence.

  36. Wotan Nichols
    Posted January 16, 2018 at 7:54 am | Permalink

    That business of becoming conscious of parts of an ongoing unconscious personal narrative reminds me very much of Julian Jaynes & his proposal that our self-conscious notion of “I” is a very recent change in the brain’s organization, going back only a few thousand years.

  37. Posted January 16, 2018 at 1:37 pm | Permalink

    The paper’s introduction reminds me of what Thomas Metzinger says in “The Ego Tunnel” (a splendid book about consciousness).

  38. GalvestonTommy
    Posted January 16, 2018 at 1:47 pm | Permalink

    Are we the only creatures that are conscious that we are conscious?

  39. cyan
    Posted January 17, 2018 at 8:45 pm | Permalink

    Consciousness might allow more comparison of different scenarios an action might result in.

    Compare groups of animals with others of similar metabolism (after all, giant tortoises live long lives but they have a slow metabolism compared to mammals).

    Is there a correlation between the groups’ degree of consciousness and their lifespans? Think of African grey parrots and pigeons.

    Is there a correlation between a greater degree of consciousness and increased lifespan?
    – If so, living in a social group with more conscious individuals might result in individuals living past reproductive age helping to care for others still within reproductive age. An advantage for the group, so selection, and so evolution.

    – If so, individuals who do not live within a social group that cares for others, but who are able to reproduce to the end of that long life, would have a survival advantage over those who live a shorter lifespan. Selection and evolution.

    – If so, groups of individuals who do not live in social groups that care for others of reproductive age, yet who live long after reproductive age, would have no survival advantage. A spandrel.

    Consciousness could be an adaptation in some groups (primates and dolphins – the first scenario), an adaptation in others by the second route (African grey parrots), and a spandrel in still others (I cannot think offhand of any species that fit the third scenario, but there certainly could be some).

    • cyan
      Posted January 18, 2018 at 9:40 pm | Permalink

      Could what we call consciousness, as opposed to unconsciousness such as when we sleep, be the process of somehow connecting what is coming in now via sensory input with memories – stored information about previous sensory inputs?
      Sleep is needed to do maintenance in other parts of the body as well as the brain – physical strengthening of the pathways and connections between neurons that were more fleetingly activated during consciousness.

      Do groups that store more memory require more sleep than groups with the same metabolic needs? Groups that undergo torpor are not doing so because of the tremendous comparison of memory against just-experienced input, but because of decreased metabolism. Felines’ enormous need for sleep is due to maintaining their high muscle mass compared to other groups of the same size with less muscle mass.

      Groups of fish the same size as humans would require much less sleep, or none, because their memory capacity is so much less than that of humans.

      How those neural pathways are transformed from initial transient connections into more long-lasting physical structure and then mapping them – that will be fascinating. Of course each individual map will be slightly different from another.

      Human consciousness, in terms of amount, has probably not changed since the first modern humans – Homo sapiens sapiens – since the brain-to-body ratio has remained the same since then. If intelligence is how quickly one can recognize patterns – that is, comparing memory to new situations – then human intelligence has not changed since then. A baby time-swapped from the stone age and raised in a contemporary environment would not be distinguishable in IQ or behavior from anyone else.

      Although humans are evolving in many different phenotypes, I doubt that they are evolving in consciousness, because what would select for it, now that we live in societies? The seemingly excess consciousness we in modern society have (which results in reading and thinking deep thoughts, art, etc.) was the same in our human ancestors, but it would not have been excess, because early human groups would have had to use all of it in the attempt just to stay alive.

      There is no selection for even more consciousness now in society. Those individuals with apparently more of it, or more of it put to use in deep thinking, do not produce more children than those who apparently do little with it. And if a society attempted to select for it, which I hope will never happen, the result would be the increased venality of our species, eventually destroying societies. Our species would not survive long without societies.

      I guess that extra consciousness now, which before society was used for survival, is more like an appendix. There is no selection against it, except in the few individuals who ignore that lower-right abdominal ache; and in some people the bacteria in it provide a bit more digestive aid, but it has no effect on reproductive rates.

      I am most interested in the architecture of consciousness and of course think that it will look the same in all groups of animals – the difference will only be in the amount of structure.

