Has the evolution of consciousness been explained?

Michael Graziano is a neuroscientist and professor of psychology at Princeton University who, on the side, writes novels for both children and adults. His specialty is the neuroscience and evolutionary basis of consciousness, about which he’s written several pieces at The Atlantic.

His June 6 piece, “A new theory explains how consciousness evolved,” attempts to trace how consciousness (which I take to be the phenomenon of self-awareness and agency) could arise through evolution. This is a very good question, although resolving it will ultimately require understanding the “hard problem” of consciousness—the very fact that we are self-aware and see ourselves as autonomous beings. We’re a long way from understanding that, though Graziano is working on the neuroscience as well as the evolution.

In the meantime, he’s proposed what he calls the “Attention Schema Theory,” or AST, which is a step-by-step tracing of how consciousness might have arisen via evolutionary changes in neuronal wiring. To do this, as Darwin did when trying to understand the stepwise evolution of the eye, you need to posit an adaptive advantage to each step that leads from primitive neuronal stimuli (like the “knee reflex”) to full-fledged consciousness of the human sort.

That, of course, is difficult. And we’re not even sure if the neuronal configurations that produced consciousness were really adaptive for that reason—that is, whether the phenomenon of consciousness was something that gave its early possessors a reproductive advantage over less conscious individuals. It’s possible that consciousness is simply an epiphenomenon—something that emerges when one’s brain has evolved to a certain level of complexity. If that were the case, we wouldn’t really need to explain the adaptive significance of consciousness itself, but only of the neural network that produced it as a byproduct.

Now I haven’t read Graziano’s scholarly publications about the AST; all I know is how he describes it in the Atlantic piece. But, as I’ve already said, if you’re describing some complex science in a popular article, at least the outline of that science should be comprehensible and make sense. And that’s what I find missing in the Atlantic article. Graziano lucidly describes the steps by which a lineage could become more complex in its sensory system, with each step possibly enhancing reproduction. But when he gets to the issue of consciousness itself—the phenomenon of self-awareness—he jumps the shark, or, rather, dodges the problem.

Here are the steps he sees in the AST, and when each step might have occurred in evolution.

1.) Simple acquisition of information through neurons or other sensory mechanisms. This could have happened very early; after all, bacteria are able to detect gradients of light and chemicals, and they were around 3.5 billion years ago.

2.) “Selective signal enhancement,” the neuronal ability to pay attention to some environmental information at the expense of other information. If your neuronal pathways can compete, with the “winning signals” boosting your survival and reproduction, this kind of enhancement will be favored by selection. This will confer on animals the ability to adjudicate conflicting or competing signals, paying attention to the most important ones. Since arthropods but not simpler invertebrates can do this, Graziano suggests that this ability arose between 600 and 700 million years ago.
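Graziano’s “selective signal enhancement” is, computationally, a winner-take-all competition among signals. Here is a minimal sketch of that idea; the signal names and strengths are my own invented illustration, not anything from the article:

```python
# Winner-take-all "selective signal enhancement": competing sensory
# signals suppress one another, and only the strongest survives to
# drive behavior. Signal names and strengths are invented examples.

def enhance(signals):
    """Boost the strongest signal and suppress all the others."""
    winner = max(signals, key=signals.get)
    return {name: (strength if name == winner else 0.0)
            for name, strength in signals.items()}

inputs = {"food_odor": 0.8, "dim_light": 0.3, "vibration": 0.5}
attended = enhance(inputs)
print(attended)  # only "food_odor" retains its strength
```

The point of the sketch is only that nothing here requires awareness: a few lines of competition among numbers suffice, which is presumably why selection could favor the mechanism long before anything like consciousness existed.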

3.) A “centralized controller for attention” that could direct one’s “overt attention” among inputs from several different sensory systems (for example, you might want to go after the smell of food rather than toward the darkness, as in that moment it’s better to get food than to hide). This, says Graziano, is controlled by the part of the brain called the tectum, which evolved about 520 million years ago.

The tectum, Graziano adds, works by forming an “internal model” of all the different sensory inputs. As he says,

The tectum is a beautiful piece of engineering. To control the head and the eyes efficiently, it constructs something called an internal model, a feature well known to engineers. An internal model is a simulation that keeps track of whatever is being controlled and allows for predictions and planning. The tectum’s internal model is a set of information encoded in the complex pattern of activity of the neurons. That information simulates the current state of the eyes, head, and other major body parts, making predictions about how these body parts will move next and about the consequences of their movement. For example, if you move your eyes to the right, the visual world should shift across your retinas to the left in a predictable way. The tectum compares the predicted visual signals to the actual visual input, to make sure that your movements are going as planned. These computations are extraordinarily complex and yet well worth the extra energy for the benefit to movement control. In fish and amphibians, the tectum is the pinnacle of sophistication and the largest part of the brain. A frog has a pretty good simulation of itself.
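The engineering sense of “internal model” that Graziano invokes is at least concrete: a simulation that predicts the sensory consequences of a motor command and checks the prediction against actual input. A toy predict-and-compare loop, with made-up numbers standing in for the tectum’s vastly more complex computation:

```python
# Toy "internal model" in the control-engineering sense: predict the
# sensory consequence of a motor command, then compare the prediction
# with the actual input. All numbers are invented for illustration.

def predict_shift(eye_movement_deg):
    """If the eyes move right by x degrees, the visual scene should
    shift left across the retina by the same x degrees."""
    return -eye_movement_deg

def going_as_planned(eye_movement_deg, observed_shift_deg, tolerance=0.5):
    """Return True if prediction and observation roughly agree."""
    error = abs(predict_shift(eye_movement_deg) - observed_shift_deg)
    return error <= tolerance

print(going_as_planned(10.0, -10.2))  # small mismatch: movement on track
print(going_as_planned(10.0, -4.0))   # large mismatch: something is wrong
```

Note that this kind of model is, as Graziano says, “well known to engineers,” and nothing about it entails awareness, which is exactly the worry raised in the next paragraph.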

I’m still not sure what this “internal model” is: the very term flirts with anthropomorphism. If it’s simply a neuronal system that prioritizes signals and feeds environmental information to the brain in an adaptive way, can we call that a “model” of anything? The use of that word, “model,” already implies that some kind of rudimentary consciousness is evolving, though of course such a “model” is perfectly capable of being programmed into a computer that lacks any consciousness.

4.) A mechanism for paying “covert” as well as “overt” attention. Covert attention is what we attend to mentally without overtly orienting toward it: focusing your hearing on a specific conversation nearby while ignoring extraneous sounds, for example. Of course the very concept of “paying selective attention” sort of implies that we have some kind of consciousness, for who is doing the “paying”?

The part of the brain that controls covert attention, says Graziano, is the cortex. That evolved with the reptiles, about 300 million years ago.

And here’s where the problem with the article lies, for Graziano subtly, almost undetectably, says that with this innovation we’ve finally achieved consciousness. His argument is a bit tortuous, though. First he gives a thought experiment implying that cortex equals consciousness, then undercuts that thought experiment by saying that it doesn’t really explain consciousness. He then reverses direction again, bringing consciousness back to center stage. It’s all very confusing, at least to me.

Here’s the part where consciousness comes into his piece. Graziano starts with crocodiles, which have a selectively attentive cortex, and describes a Gedankenexperiment that explicitly suggests consciousness:

Consider an unlikely thought experiment. If you could somehow attach an external speech mechanism to a crocodile, and the speech mechanism had access to the information in that attention schema in the crocodile’s wulst, that technology-assisted crocodile might report, “I’ve got something intangible inside me. It’s not an eyeball or a head or an arm. It exists without substance. It’s my mental possession of things. It moves around from one set of items to another. When that mysterious process in me grasps hold of something, it allows me to understand, to remember, and to respond.”

But then Graziano takes it back, for he realizes that selective attention itself could be a property of neuronal networks, and doesn’t imply anything about the self-awareness and sense of “I” and “agency” that we call consciousness. (Note that the statement “I’ve got something intangible inside me” is an explicitly conscious thought.) But in denying the intangibility of consciousness, he simultaneously affirms its presence. Here’s where the rabbit comes out of the hat:

The crocodile would be wrong, of course. Covert attention isn’t intangible. It has a physical basis, but that physical basis lies in the microscopic details of neurons, synapses, and signals. The brain has no need to know those details. The attention schema is therefore strategically vague. It depicts covert attention in a physically incoherent way, as a non-physical essence. And this, according to the theory, is the origin of consciousness. We say we have consciousness because deep in the brain, something quite primitive is computing that semi-magical self-description. Alas crocodiles can’t really talk. But in this theory, they’re likely to have at least a simple form of an attention schema.

But an “attention schema” isn’t consciousness, not in the way that we think of it. Nevertheless, Graziano blithely assumes that he’s given an adaptive scenario for the evolution of consciousness, an evolution that’s only enhanced because you also have to model the consciousness of others—what Dan Dennett calls “the intentional stance.” Graziano:

When I think about evolution, I’m reminded of Teddy Roosevelt’s famous quote, “Do what you can with what you have where you are.” Evolution is the master of that kind of opportunism. Fins become feet. Gill arches become jaws. And self-models become models of others. In the AST, the attention schema first evolved as a model of one’s own covert attention. But once the basic mechanism was in place, according to the theory, it was further adapted to model the attentional states of others, to allow for social prediction. Not only could the brain attribute consciousness to itself, it began to attribute consciousness to others.

So here he’s finessed the difficulty of self-awareness by simply asserting that once you have mechanisms for providing both covert and overt attention, you have consciousness. I don’t agree (though of course I’ve read only this article). Why couldn’t a computer do exactly the same things, but without consciousness? In fact, they do those things, as in self-driving cars.

Graziano goes on to say that figuring out what other members of your species do, based on the notion that they have consciousness, is itself a sign of consciousness. And again I don’t agree. A computer can have an “intentional stance,” using a program and behavioral cues to direct its own behavior, without consciousness. The “hard problem”—that of self-awareness—has been circumvented: assumed rather than explained.

Graziano finishes by talking about semantic language, something that’s unique to humans and surely does require consciousness (I think! Maybe I’m wrong!). But that’s irrelevant, for the evolution of consciousness has already been assumed.

I admire Graziano for realizing that if consciousness, which is closely connected with our sense of agency and libertarian “free will”, evolved, there may be an adaptive explanation for it. He doesn’t consider that consciousness may be an epiphenomenon of neural complexity, which is possible.

I myself think consciousness and agency are indeed evolved traits, traits whose neuronal and evolutionary bases may elude our understanding for centuries. I take a purely evolutionary view rather than a neuroscientific view, for I’m not a neuroscientist. And using just evolution, one can think of several reasons why consciousness and agency might have been favored by selection. I won’t reiterate these here as I discuss them at the end of my “free will” lectures that you can find on the Internet.  And I always say that the problem of agency is unsolved. It still is, as it is for consciousness.

Graziano is making progress with the neuroscience, but the AST is still a long way from being a good theory of how consciousness evolved.


  1. Dominic
    Posted July 27, 2016 at 11:17 am | Permalink

    This is going to take me a while to digest – a proper essay!🙂
    (typo of ‘neuroscientist’ in penultimate paragraph by the way)

  2. YF
    Posted July 27, 2016 at 11:20 am | Permalink

    I think one first needs to define ‘consciousness’ before attempting to explain what it is and how it evolved.

    Also, studies by Lamme and colleagues suggest that attention is not required for conscious awareness. They are related but distinct phenomena.

  3. Matt
    Posted July 27, 2016 at 11:36 am | Permalink

    Very interesting article. I think one issue with the hard problem is that we don’t have a good idea of what we mean by “consciousness” because our only example is our own. Graziano is attempting to distill what he considers the crucial aspects of it, but if you built a machine with just those aspects it still may not *seem* conscious because what we really mean by that is that they wouldn’t seem human.

    Also, I think he’s using the term model more in the way it’s used in information systems. A model is just a simplified or abstracted representation of something.

    A factory control program might get complex status inputs from every machine in the factory and then use those to maintain a simplified model of the entire system.
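    (To make that concrete, here is a small sketch of what such a simplified model might look like; the machine names, fields, and thresholds are invented for illustration:)

```python
# Hypothetical factory "model": reduce each machine's complex status
# report to one summary value the controller actually acts on.
# All names, fields, and thresholds are invented for illustration.

raw_status = {
    "press_1": {"temp_c": 80, "rpm": 1200, "vibration": 0.02},
    "press_2": {"temp_c": 97, "rpm": 1150, "vibration": 0.09},
}

def simplify(status, max_temp=95, max_vibration=0.05):
    """Abstract each machine's detailed report into 'ok' or 'alert'."""
    return {name: ("alert" if s["temp_c"] > max_temp
                   or s["vibration"] > max_vibration else "ok")
            for name, s in status.items()}

model = simplify(raw_status)
print(model)  # {'press_1': 'ok', 'press_2': 'alert'}
```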

    • Posted July 28, 2016 at 11:49 am | Permalink

      Well said. I would agree with Graziano that this is *a* crucial aspect of consciousness, just not the only one. The fact that we have a model of our own internal states – not just eye and head position but also pains, hot and cold sensations, etc. – is extremely relevant to consciousness, even if it isn’t the whole story.

  4. Diana MacPherson
    Posted July 27, 2016 at 11:50 am | Permalink

    I think he has maybe oversimplified with the internal model. I get what it means from his description, but the assumption is the brain works like a computer, which it does not. This may not affect his conclusions but it can certainly lead to confusion.

    • DiscoveredJoys
      Posted July 27, 2016 at 4:32 pm | Permalink

      There are some philosophers and scientists who think that we don’t build ‘internal models’ in the brain but instead train our environmentally and socially embodied brain to compare learned ‘predictions’ against physical inputs and act accordingly. So we don’t model a parabola to catch a thrown ball but iterate a view/compare/move cycle such that the ball ‘moves’ linearly, in comparison to the background, into our hand.

      Now I can see ways in which reviewing the environment, society, and our body in repetitive loops could build into a centralised perspective that maintains itself ready for the next departure from the predicted input. No internal model required, only an accumulation of predictions.

      Unfortunately just as there is a tendency to use teleological phrasing to discuss evolution, and to discuss agency in terms of free will, we also tend to use computing terms to describe activity in the brain. I suspect our language(s) are woefully inadequate and built on unsound axioms for the tasks.

      • Michael Waterhouse
        Posted July 27, 2016 at 8:09 pm | Permalink

        I wonder how it works in animals.
        For example a cat can jump over a significant gap and land with great accuracy on a tiny spot. A ledge, or the top of a fence for example.
        Or birds zooming around a forest.

        I’m an epiphenomenalist.

        • DiscoveredJoys
          Posted July 28, 2016 at 3:10 am | Permalink

          I’m not brave enough to assert ‘how a cat thinks’ but I can see how a big action like jumping onto the top of a fence is just the macro result of a lot of micro sense/prediction/action/review cycles, some at a very low level. Perhaps the macro action is an epiphenomenon, or simply our oversimplification of a complex matter?

          If you watch a cat jumping it often tenses up and ‘poses’ in a ready to jump position. There *could* be many micro cycles going on, assessing the cat’s internal states (such as muscles prepared to contract, balance and footing established, whiskers and tail sensors functioning and so on), target landing site steady in view, plus continuous prediction/action/review updates during the jump itself to ensure that the feet and legs are in the best position for the final landing.

          Which is perhaps why the considered predator cat is so different from the reflex cat frightened by a cucumber.

          • rickflick
            Posted July 28, 2016 at 5:36 am | Permalink

            Another thing to remember about the athletic ability of our pets is that their perceptions and reflexes seem to operate at a much higher rate than our own. Faster clock speed. Thus, something that seems miraculous to our lethargic mental processing is foolishly simple for a cat, just because it has more quickly computed the geometry.

          • Michael Waterhouse
            Posted July 29, 2016 at 6:44 pm | Permalink

            Good observation.
            I may have led you astray a bit with the epiphenomenalist comment. It was unrelated to the question about cats.

    • Pliny the in Between
      Posted July 27, 2016 at 5:06 pm | Permalink

      While I agree that a simple comparison of brains to computers is problematic, I also see that brains are often thought of as a single organ rather than a dense association of a number of highly specialized parts. My own work in this area shows that complex computing environments utilizing a distributed array of individual (and highly specialized) sub-applications can produce some pretty impressive output that mimics abstractions we see from human decision-makers. Although the overall decision environment is very complex, the individual sub-applications are often pretty straightforward.

      • DiscoveredJoys
        Posted July 28, 2016 at 3:18 am | Permalink

        Indeed. The tricky bit is establishing how far out the ‘computing’ extends. There are many who argue that it makes sense to call the brain ’embodied’ – using the sensors of the body as pre-processed data. There are others who argue that the environment itself cues the embodied brain.

        So if you argue that a desktop computer is made of many parts, not just the CPU, and extends over the power and data networks to distant locations, then perhaps we can refine the ‘brain is a computer’ language further. We need new metaphors, I think.

  5. Kevin
    Posted July 27, 2016 at 11:59 am | Permalink

    We are machines making measurements. How those measurements help us survive is important and complex. We are limited, physically, by our capabilities to make measurements, but that’s only part of the story.

    Some birds, fish, and insects fly in patterns where they are not completely in conscious control, and yet the coordinated actions of the group help the species survive. The capacity for making measurements and calculating has to be weighed against the effort or energy it requires.

    If I can survive, with less energy, I probably will win. If I can survive with less computation or measuring capability, then I will also probably win.

    But not everything is equal in the world where consciousness is present. Higher intelligence and consciousness complicate things seriously because capacities can be thrown ‘sideways’. An uneducated, obese Texan driving an SUV with an AR-15 can take out more buffalo than a highly intelligent, physically superior Sioux hunter on foot.

  6. Posted July 27, 2016 at 12:05 pm | Permalink

    Becoming self-referential would ensure the survival of microbiota. Perhaps consciousness was the bonus feature for integrating the “GPS system” of a complex mashup.

    • Sastra
      Posted July 27, 2016 at 12:41 pm | Permalink

      I am a strange loop.

      • Michael Waterhouse
        Posted July 27, 2016 at 8:11 pm | Permalink

        Good one.

  7. Posted July 27, 2016 at 12:17 pm | Permalink

    If consciousness is not an evolved trait, it must be a by-product of our evolved large neocortex. Given the centrality of consciousness to human nature, I sort of doubt that.

    • juan martinez juan
      Posted July 27, 2016 at 4:29 pm | Permalink

      ‘Given the centrality of consciousness to human nature”

      This could be a bit tautological, since by “human nature” you probably mean the part of our nature that can be explained to other fellow humans, that is, the part we can verbalise and are conscious of.
      But I do think there’s a large part of our nature that unfolds beneath our consciousness’ radar. This would cast some doubt on the alleged centrality of our consciousness, which may well be an epiphenomenon, regardless of our high esteem for it.

      • Posted July 27, 2016 at 6:02 pm | Permalink

        By centrality I meant our ability to attribute similar mental states to others and to use that attribution to anticipate their impending actions.

        • Jeremy Tarone
          Posted July 27, 2016 at 8:42 pm | Permalink

          Theory of mind.

  8. JonLynnHarvey
    Posted July 27, 2016 at 12:28 pm | Permalink

    The normally scintillating playwright Tom Stoppard disappointed a few folk with his 2015 play “The Hard Problem” which was perceived as being overly anti-science, including a character who is a bit of a caricature of Richard Dawkins.
    Review in New Scientist here:

    (However, I look forward to seeing it this fall at the ACT in San Francisco.)


    What Dennett calls “the intentional stance” may partly account for the rise of belief in deities. This has been argued by anthropologist Stewart Guthrie. Effectively, this grants Alvin Plantinga psychologically what one might wish to deny him philosophically when he says he knows God exists the same way he knows other minds exist.

    • Posted July 27, 2016 at 12:53 pm | Permalink

      I talked to Stoppard, and was on a panel with him, a few years ago at the Hay Festival in Wales. At that time he was already questioning evolution, showing some sympathy for the teleological views of Jerry Fodor and Tom Nagel.

    • Posted July 27, 2016 at 9:55 pm | Permalink

      Speaking of Dennett, Graziano’s “AST” seems to be very similar to Dennett’s “fame in the brain”, which goes back to at least 2008.

  9. rickflick
    Posted July 27, 2016 at 12:40 pm | Permalink

    I think consciousness is an emergent property of complex mental activity. In a sense, it doesn’t require an explanation, any more than does “weather”. Humans are unique in that they are capable of recursive consideration of the world. Language is also unique to humans, and it also illustrates the potential complexity of recursive thought. Language is probably necessary for complete self awareness, for without it the notions of “I” and “you” are only implied, never clearly specified. Chimps have this weaker form of consciousness.

    Iterative, recursive thought allows for vast networks of understanding. Once you are able to see other minds as intentional, you can easily include yourself as one of them. The self becomes just an important element in a network of ideas. In other words a theory of mind ultimately implies a theory of self. I could be wrong.

    • juan martinez juan
      Posted July 27, 2016 at 4:34 pm | Permalink

      ‘Given the centrality of consciousness to human nature”

      This could be a bit tautological, since by “human nature” you probably mean the part of our nature that can be explained to other fellow humans, that is, the part we can verbalise and are conscious of.
      But I do think there’s a large part of our nature that unfolds beneath our consciousness’ radar. This would cast some doubt on the alleged centrality of our consciousness, which may well be an epiphenomenon, regardless of our high esteem for it.

      • rickflick
        Posted July 27, 2016 at 5:09 pm | Permalink

        I think this was your reply to Darwinwins. Is the next one below addressed to me?

    • juan martinez juan
      Posted July 27, 2016 at 4:57 pm | Permalink

      Interesting view. But something’s missing; I don’t think that just “seeing” yourself would qualify as consciousness. You’d probably need your self-perception to induce an alteration of your neural/emotional mapping, a change whose acknowledgement would be the first truly self-conscious act.
      I would bet some sort of interactive/iterative process is at the root of consciousness. And I would also bet it’s already been stated in cleverer and better-grounded terms than mine.

    • Posted July 27, 2016 at 10:57 pm | Permalink

      I think there are studies (which I learned about from Pinker) that show much of our thought is abstract and not limited by nor even necessarily shaped by language.

  10. Sastra
    Posted July 27, 2016 at 12:47 pm | Permalink

    One of the common arguments made against creationists and vitalists is that there IS no “magic moment” when one species turns into another, or when non-life becomes life. If scientists were taken back by a time machine to the moment when the critical juncture took place, the likelihood is that only some of them would be saying “see — the critical juncture has taken place!” The rest would be arguing that “no, the important step was earlier” and “No, THAT’S not a new species/life: you’re missing the most important element!”

    And they’d all be right.

    • Posted July 27, 2016 at 2:11 pm | Permalink

      Yeah, I hear the “Did the mother of the first mammal nurse it with milk?” question often. I can’t say I have ever answered it well.

      • darrelle
        Posted July 27, 2016 at 2:45 pm | Permalink

        Like many such questions the easy, short and correct answer is that the question is nonsense. Explaining why it is nonsense takes a whole lot more words. Most people who will ask such a question won’t find either the short answer or the explanation satisfying.

  11. Steve Pollard
    Posted July 27, 2016 at 12:51 pm | Permalink

    The first time I read that book I thought it was great. The second time I wasn’t so sure. Maybe I need to read it again!

    • Steve Pollard
      Posted July 27, 2016 at 3:02 pm | Permalink

      Gosh, that was meant to be a reply to Sastra’s reference to “I am a strange loop” (Hofstadter) a few posts back. Comment still stands, though.

  12. Gregory Kusnick
    Posted July 27, 2016 at 1:00 pm | Permalink

    A computer can have an “intentional stance,” using a program and behavioral cues to direct its own behavior, without consciousness.

    Where do you stand on Chalmers’ notion of “p-zombies” that can do everything humans can do (including talk about consciousness) without actually being conscious? Do you find this idea plausible? If not, on what basis do you assume that a computer can adopt an intentional stance and still lack any degree of consciousness?

    • rickflick
      Posted July 27, 2016 at 1:12 pm | Permalink

      Doesn’t make any sense, except as a philosophical toy. As a thought experiment it’s pure genius. It’s pretty clear that lower animals through apes and man exhibit gradually more conscious minds. A p-zombie would be difficult/impossible to configure. It’s not all or nothing.

      • darrelle
        Posted July 27, 2016 at 3:17 pm | Permalink

        “It’s pretty clear that lower animals through apes and man exhibit gradually more conscious minds.”

        I agree. I think it is an important point to bring up in any discussion of consciousness because most people talk as if the advent of consciousness has only occurred in humans and that it was like having finally gotten the design right, flipped the switch and eureka, consciousness.

        If it is something that occurs to varying degrees among many species, which seems to be the case, then that would seem to say something useful about the epiphenomenon vs selected for issue and about how the brain generates consciousness. Not definitive things perhaps, but useful.

        It also appears that consciousness in individuals is not a discrete on or off phenomenon. Studies suggest that the change from unconscious to conscious is more like a spooling up process in which the degree of consciousness increases over time. And of course there are very many examples of how brain damage or drugs affect consciousness. By degrees rather than on / off.

    • Posted July 27, 2016 at 11:00 pm | Permalink

      If the simulation is accurate enough, does it make sense to call it a simulation?

  13. barn owl
    Posted July 27, 2016 at 1:06 pm | Permalink

    The article in the Atlantic was too vague and oversimplified for my tastes, but I’d lean towards consciousness as an epiphenomenon. I’d like a better definition of consciousness, however. While I think that the tectum is relevant for discussions of selective attention to visual and auditory stimuli, it isn’t involved in processing olfactory stimuli, which humans tend to think aren’t very important. Many of us like to ignore olfaction, because doing so plays into the illusion of free will … “no, we’re visual creatures and make conscious choices based on visual stimuli, none of this bypassing-the-thalamus business for us!” For a mammalian brain such as ours, I don’t see how olfaction can be left out of any discussion of consciousness, however it’s defined. Olfactory information goes straight into the limbic emotion and memory regions of brain – how could it not be important? Wouldn’t one’s spatial maps, memories, and emotional contexts all be relevant to the concepts of consciousness and self-awareness?

    For anyone else who wants more information on attention schema theory, here’s a link to an article by Graziano in an open access journal:


    • Michael Waterhouse
      Posted July 27, 2016 at 8:19 pm | Permalink

      I lean toward epiphenomenalism too.

  14. Virgil Reese
    Posted July 27, 2016 at 1:25 pm | Permalink

    Just seeking clarification. In your statement “We’re a long way towards understanding that”, might you have meant the word “from” rather than “towards”?

  15. Torbjörn Larsson
    Posted July 27, 2016 at 2:09 pm | Permalink

    So here he’s finessed the difficulty of self-awareness by simply asserting that once you have mechanisms for providing both covert and overt attention, you have consciousness. I don’t agree (though of course I’ve read only this article). Why couldn’t a computer do exactly the same things, but without consciousness? In fact, they do those things, as in self-driving cars.

    Isn’t this begging the question?

    Graziano could simply say that the computer is analogous to the crocodile, it has lists of its active processes and those processes (should) have log records, but it doesn’t express it in language.

    In Graziano’s words computer webs* of self-driving cars could “have at least a simple form of an attention schema.”

    * I hear they typically have lots of processor cards distributed over the cars.

  16. W.Benson
    Posted July 27, 2016 at 2:23 pm | Permalink

    There is an interesting side issue: Limits on neuron packing inside the brain may constrain the evolution of ‘sophisticated’ behavioral repertoires. One study suggests that the primate mind has been enabled by increased neuron density in the cerebral cortex. A group centered in Rio de Janeiro found that in mammals cortex neuron density normally diminishes with brain size, whereas in primates it is practically unaffected. Correlation shows that in most mammals increasing brain size by a factor of 10 only increases the total number of cortex neurons four times; in monkeys and apes the increase is nearly 10 times. This means that monkeys and apes (including man) have approximately 5 times more cortex neurons than mammals of other groups with brains of comparable size. Cetaceans and Carnivores were not studied. Open access link:

  17. Posted July 27, 2016 at 4:17 pm | Permalink

    Provide a definition of ‘consciousness’ that is not anecdotal.


    • YF
      Posted July 28, 2016 at 9:30 am | Permalink

      Right. This discussion is pretty much pointless until someone can define what they mean by ‘consciousness’.

  18. Posted July 27, 2016 at 4:24 pm | Permalink

    “It’s possible that consciousness is simply an epiphenomenon—something that emerges when one’s brain has evolved to a certain level of complexity. If that were the case, we wouldn’t really need to explain the adaptive significance of consciousness itself, but only of the neural network that produced it as a byproduct.”

    I know I’m about 30 years late to the game here, but can someone tell me what some other examples of epiphenomena are?

    And why is it held that epiphenomena aren’t adaptive?

    • Posted July 27, 2016 at 4:35 pm | Permalink

      We’re talking about spandrels here, right?

      I think Jerry posted something that had me thinking about this paper about a year ago, if memory serves:

      “The Spandrels of San Marco and the Panglossian Paradigm: A Critique of the Adaptationist Programme” (1979).

      However, there’s got to be a concise way of describing it. And my tiny brain is grappling with it all and could use interaction.

    • Michael Waterhouse
      Posted July 27, 2016 at 8:23 pm | Permalink

      I think epiphenomena can’t be adaptive because they are not causal.

      • Gregory Kusnick
        Posted July 27, 2016 at 8:38 pm | Permalink

        In that case consciousness cannot be epiphenomenal. The fact that we can talk about it demonstrates that it participates in the causal chain of behavior.

        Like Charleen, I think Jerry meant that it’s a spandrel, not that it’s acausal.

        • Michael Waterhouse
          Posted July 29, 2016 at 7:29 pm | Permalink

          If epiphenomena participate in our causal chain of behavior and behavior affects survivability, how is it not adaptive?

          • Gregory Kusnick
            Posted July 29, 2016 at 8:44 pm | Permalink

            Epiphenomena, by your definition, are not causal. I’m saying that by that definition, consciousness is not epiphenomenal (because it is causal).

            Some behaviors affect survival; some don’t in any meaningful way (singing in the shower, for instance).

            All of that said, I think consciousness is adaptive, because it lets us mentally simulate possible courses of action and see how we feel about the consequences before committing to a particular action.
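    Kusnick’s mental-simulation idea can be sketched as a toy lookahead loop; all names and numbers here are hypothetical, and it illustrates the idea of evaluating predicted consequences before acting, not a model of the brain:

```python
# Toy lookahead: simulate each candidate action's predicted outcome and
# commit only to the one we "feel" best about (highest score).
def choose(actions, predict, evaluate):
    return max(actions, key=lambda action: evaluate(predict(action)))

# Hypothetical predicted consequences (higher = preferred).
predicted = {"flee": -2, "freeze": 0, "investigate": 3}
best = choose(predicted, predict=predicted.get, evaluate=lambda outcome: outcome)
print(best)  # -> investigate
```

    The adaptive payoff Kusnick describes lives entirely in the `predict` step: the costly or dangerous action is tried in simulation rather than in the world.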

  19. jeffery
    Posted July 27, 2016 at 5:06 pm | Permalink

    A truly fascinating subject, though. I’m reminded of the story about a man who put an electrified wire around his garden to keep out his pigs, who kept breaking out of their pen. It worked well except for one old sow who, after being “buzzed” several times, backed up and ran at the fence, squealing loudly before she ever hit it. She wriggled under it, squealing like mad all the time. This shows numerous “conscious” functions going on in her brain:
    (1) planning: perception of a reward, and what she’d have to suffer to get it.
    (2) anticipation of an event that hadn’t yet happened: squealing BEFORE she actually began to be shocked.
    (3) prediction: she had the “assumption” that, once she got past the wire, the shock would stop!

    The mind is SO amazing…..

  20. Posted July 27, 2016 at 5:52 pm | Permalink

    I have never understood the Hard Problem, at least partly because I have yet to understand what is supposedly so strange about consciousness. The language concept of being conscious was invented by our ancestors to describe the closely related and undeniable observations of: (1) being awake and thus processing information inputs; (2) being an information processing entity as opposed to, say, a rock or a tree (here they were misled in that they could not possibly know that the tree is also doing a bit of that); and (3) the difference between paying attention to a piece of information and making deliberate decisions as opposed to the more instinctive functions of our body handling something.

    So at best I can see this whole discussion being about how and why #3 could possibly arise, because the other two are trivial. But phrased like this even #3 becomes trivial because it should be obvious why thinking carefully before making some decisions is beneficial (just as NOT thinking carefully before evading a blow is beneficial, neatly explaining the evolutionary rationale for the consciousness-subconsciousness distinction).

    Indeed I can only agree with this: “The use of that word, ‘model,’ already implies that some kind of rudimentary consciousness is evolving, though of course such a ‘model’ is perfectly capable of being programmed into a computer that lacks any consciousness.” … except for the last few words. Isn’t a major insight that has to follow logically from our understanding of evolution that these kinds of traits come in a gradient? There would never have been a generation that is conscious while their parents were non-conscious, so “rudimentary” is clearly one step along the way.

    Is the problem behind the philosophers calling this problem oh so hard perhaps simply that consciousness is still mistaken for a magical spark that is either in an individual or not, like a light switch being on or off? And if the concept is garbled from the get-go, then there is no surprise why it cannot be explained properly.

  21. Posted July 27, 2016 at 10:56 pm | Permalink

    I think of the hard problem in terms of the following question. How could we ever make a computer feel pain? You can imagine a computer far more advanced than our contemporary versions. If you think about that question, it seems impossible, and that is why the hard problem is hard.

    • Posted July 27, 2016 at 10:59 pm | Permalink

      This is meant as a comment to Alex SL.

    • Gregory Kusnick
      Posted July 27, 2016 at 11:24 pm | Permalink

      Operationally, it seems like a plausible answer would be to map out in detail all the brain subsystems involved in our experience of pain, and build a computer that implements all of those subsystems, connected in the appropriate ways. Then verify that the resulting robot reports sensations of pain and reacts to pain in the expected ways. That doesn’t seem to be impossible.

      You could take the position that it’s impossible to know that the robot experiences pain. But that’s just special pleading; we have no difficulty accepting that other people (or animals) experience pain, on considerably less evidence.

      • Posted July 28, 2016 at 10:49 am | Permalink

        Instead of impossible, I should have said exceedingly difficult or currently infeasible. I didn’t mean impossible literally.

      • Posted July 28, 2016 at 6:55 pm | Permalink

        Thinking more about this, yes when I see a human or animal showing external signs of pain, I believe they are suffering. But I can write a simple program that causes my computer to print “ouch” when I press Alt-O, but I do not think it feels pain. I will need more than such external signals before I can believe a robot feels pain, but I am not sure what that “more” is.

        I find this a fascinating philosophical question.
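        The commenter’s point is easy to make concrete: a minimal sketch (with a plain function standing in for the real Alt-O key binding, which is omitted) shows how little machinery lies behind the external “pain” signal:

```python
# Minimal stand-in for the commenter's Alt-O program: a canned "pain"
# report with nothing behind it (real keyboard handling omitted).
def on_keypress(key):
    return "ouch" if key == "alt-o" else ""

print(on_keypress("alt-o"))  # -> ouch
```

        Nothing in this function corresponds to a sensation; the gap between the report and any experience behind it is exactly the commenter’s worry.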

    • rickflick
      Posted July 28, 2016 at 5:26 am | Permalink

      The key to the question is “feel”. Computers are normally designed around logical processing of information. A human being is vastly more complex. Our evolutionary roots involve avoidance of pain and seeking of pleasure. A simple flatworm avoids concentrated salt. Does the flatworm feel pain? This requires a network of nerves and a brain designed to respond negatively and positively to inputs. A simple reflex, like pulling your hand away from a hot stove, is followed by a sensation of pain. The brain registers the experience as something not to be repeated for fear of experiencing the pain again. Whatever pain is, it is part of our entire brain/body structure and design. Computers are never conceived this way. They are only designed around the purer forms of calculating – without emotion or feeling. In order to build a computer to feel pain, you’d have to virtually build an animal-like brain/body.
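      The hot-stove sequence described above, reflex first, pain registered afterward so the experience is not repeated, is at bottom a negative-reinforcement loop. A toy sketch (all names illustrative):

```python
# Toy negative-reinforcement loop: the reflex fires first; the pain
# signal is registered afterward, so the stimulus is avoided next time.
learned_aversions = set()

def touch(stimulus, pain_signal):
    if stimulus in learned_aversions:
        return "avoided"                 # learned: never touch again
    if pain_signal < 0:
        learned_aversions.add(stimulus)  # register the experience
        return "reflex withdrawal"
    return "touched"

print(touch("hot stove", -1))  # -> reflex withdrawal
print(touch("hot stove", -1))  # -> avoided
```

      Whatever the felt quality of pain is, this loop captures only the behavioral role rickflick describes, which is the part a computer can already implement.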

      • Posted July 28, 2016 at 7:10 am | Permalink

        Maybe there are two different ways of thinking about the puzzle. If we say the question is “how to build a robot that feels pain” or “how exactly is it that we perceive pain like this and the colour red like that” then I am happy to say that it is a mystery. But my feeling is that your average philosopher making a big thing out of consciousness is asking a question more along the lines of “how could matter ever feel?” or “why are we conscious if we could function just as well if we were philosophical zombies?”

        As for feelings like pain, I still don’t really see the big issue. Yes, we are just vastly more complex but fundamentally the same as the flatworm; and that is how it has to be, because of evolution. (A computer is perhaps not the best model here because it is built in a totally different way.)

        So now we have to be able to cogitate carefully to be able to make better decisions than pure instinct, and for that we have to have internal representations of various types of input data. It seems fairly obvious that to produce a useful model of the world a different type of internal representation is necessary for pain in my shoulder than for a sound coming from behind me. Why they are exactly the way they are is a currently unanswerable question, but why they are and why they are different isn’t. And the fact that there are rare, weirdly wired humans who see sounds or smell visual input suggests that really the same kind of complicated neuronal cable salad is indeed behind it all.

        • rickflick
          Posted July 28, 2016 at 10:11 am | Permalink

          I’ve thought that pleasure and pain are just the way our brains happen to process inputs. Nagel’s question, “What is it like to be a bat?”, makes us wonder whether bats suffer pain as we do or only screech and thrash about as we do. Early on, experimenters thought they could experiment on dogs without worrying about their feelings. The dogs had the right outward reactions, but no qualia. Dennett makes the point that qualia don’t really exist. Qualia are just what it’s like to be a human. No more need be said. Or not.

        • Posted July 28, 2016 at 12:39 pm | Permalink

          Good thoughts here. RE: philosophers who ask “why are we conscious if we could function just as well if we were philosophical zombies?” One could also ask, why do cars have internal combustion engines if a functioning car could be made all-electric? Well yes, one *could*, but that doesn’t mean we can take the internal combustion engine out of *your* car and have a working machine. Consciousness is “epiphenomenal” in exactly the same way that your car’s internal combustion engine is – i.e., not at all.

  22. Posted July 28, 2016 at 3:25 am | Permalink

    Spare me from these hand-waving idiots: Graziano is another of Dennett’s ilk, making what he imagines to be an argument, in the hopes of finessing his ignorance. I mean ignorance in the literal sense: he is ignoring the fact that it is perfectly possible to know, by direct observation, that consciousness is primary. For those who insist upon a definition I offer an experiential one which short-circuits all the conceptual bullshit: “Consciousness is that which is hearing these words right now” (after Francis Lucille).
