Is the human mind like a skunk butt?

In a Philadelphia Inquirer piece on whether Catholics really accept the scientific notion of evolution, columnist Faye Flam interviewed me about the supposed evolutionary “specialness” of humans:

Many biologists are not religious, and few see any evidence that the human mind is any less a product of evolution than anything else, said Chicago’s Coyne. Other animals have traits that set them apart, he said. A skunk has a special ability to squirt a caustic-smelling chemical from its anal glands.

Our special thing, in contrast, is intelligence, he said, and it came about through the same mechanism as the skunk’s odoriferous defense.

I wasn’t really trying to be provocative here: I did have a pet skunk for several years, and it was the first thing that came to mind.  But as soon as I said this, I knew that comparing the human mind to a skunk butt would tick people off.

Sure enough, Vincent Torley at the Discovery Institute doesn’t like it at all. At the intelligent-design website Uncommon Descent, he responds to both my claim and Stephen Hawking’s recent statement about human mortality, “I regard the brain as a computer which will stop working when its components fail. There is no heaven or afterlife for broken down computers; that is a fairy story for people afraid of the dark.”

Torley makes three arguments:

1.  The human mind is not like a computer. Torley reprises Chris Chatham’s list of differences between computers and human minds: computers are digital, minds analogue, there’s no distinction between hardware and software in the brain, the brain is often self-repairing while computers are not, and so on. Torley concludes:

The brain-computer metaphor is, as we have seen, a very poor one; using it as a rhetorical device to take pot shots at people who believe in immortality is a cheap trick. If Professor Hawking thinks that belief in immortality is scientifically or philosophically indefensible, then he should argue his case on its own merits, instead of resorting to vulgar characterizations.

Poor Torley; he wants to believe in the afterlife so badly that he dismisses Hawking’s statement as a “rhetorical” device. In fact it is not.  Hawking’s point was not, of course, that the brain works precisely like a computer—it was that the brain is a meat machine that cranks out thoughts and emotions, and when the brain dies, so do its products.  Hawking is alluding to the very real observations that damaged minds produce damaged thoughts, that you can influence behavior and personality by material interventions in the brain, that you can eliminate consciousness with drugs and reinstate it by withdrawing those drugs, and to the lack of evidence for any soul or immortal component that can survive the death of the brain.  It’s completely irrelevant whether or not the brain works exactly like a computer.

2.  Human minds are totally not like skunk butts. Here Torley relies on recent biological discoveries: a paper by Dorus et al. in Cell showing that the human lineage showed accelerated evolution of genes involved in the nervous system, and a news article in Science claiming that the human lineage shows accelerated evolution not just of gene sequence, but of gene expression.

Well, so what?  The skunk’s system of chemical defense evolved by natural selection (and nobody’s yet looked at the genes involved—maybe their evolution was fast as well), and so did the human mind.  Just because genes involved in building our brain evolved rapidly compared to genes in primates or rodents is not evidence for a fundamental difference between brains and skunk butts.  Some traits evolve fast, others evolve slowly. No biggie. But Torley plumps for God here:

I would argue that these changes that have occurred in the human brain are unlikely to be natural, because of the deleterious effects of most mutations and the extensive complexity and integration of the biological systems that make up the human brain. If anything, this hyper-fast evolution should be catastrophic.  We should remember that the human brain is easily the most complex machine known to exist in the universe. If the brain’s evolution did not require intelligent guidance, then nothing did.

He’s right; nothing did.  All Torley is saying here is that brains are complex and could not have evolved by Darwinian processes—both because they are complex and because most mutations are deleterious, which would make rapid evolution of the brain “catastrophic.”

That argument is nonsense.  So long as advantageous mutations occur, regardless of their rarity, natural selection can build a complex brain without “catastrophe.” And it had about five million years to turn a chimp-sized brain into ours.  Just looking at volume, and assuming an ancestral brain of 500 cc, a modern brain of 1200 cc, and a generation time of 20 years, that’s a change in brain volume of 0.0028 cc/generation, or an average increase of 0.00056% per generation.  Where’s the evidence that this change—at least in volume—was too fast to be caused by selection? We know that current observations of selection, such as that seen in beak size in Darwin’s finches, can be much stronger than this without catastrophic effects!  The finches, after all, are still here, and behaving like finches.
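The arithmetic above is easy to verify; here is a quick back-of-the-envelope check using only the figures stated in the text (500 cc ancestral volume, 1200 cc modern volume, five million years, 20-year generations):

```python
# Check the brain-volume arithmetic: how fast did volume change per generation?
ancestral_cc = 500.0        # assumed ancestral (chimp-sized) brain volume
modern_cc = 1200.0          # modern human brain volume
years = 5_000_000           # time available for the change
generation_years = 20       # assumed generation time

generations = years / generation_years                 # 250,000 generations
per_gen_cc = (modern_cc - ancestral_cc) / generations  # cc added per generation
per_gen_pct = per_gen_cc / ancestral_cc * 100          # % increase per generation

print(f"{per_gen_cc:.4f} cc/generation")   # 0.0028 cc/generation
print(f"{per_gen_pct:.5f}% per generation")  # 0.00056% per generation
```

Both numbers match the figures in the paragraph, and either way the per-generation change is minuscule compared to selection intensities measured in the wild.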

3.  The human mind cannot be the same as the human brain.  Torley leans on the work of Edward Feser here, who has argued that neurochemical activity could never result in thoughts and words that had inherent meaning:

Feser points out that our mental acts – especially our thoughts – typically possess an inherent meaning, which lies beyond themselves. However, brain processes cannot possess this kind of meaning, because physical states of affairs have no inherent meaning as such. Hence our thoughts cannot be the same as our brain processes.

I’ll leave this one to the philosophers, except to say that “meaning” seems to pose no problem, either physically or evolutionarily, to me: our brain-modules have evolved to make sense of what we take in from the environment. And that’s not unique to us: primates surely have a sense of “meaning” that they derive from information processed from the environment, and we can extend this all the way back, in ever more rudimentary form, to protozoans.  Saying that thoughts have meanings that “lie beyond themselves” simply assumes what Torley’s trying to prove.

And the lesson Torley draws from all this? You know what it is, and it shows with crystal clarity that all this intelligent-design palaver is motivated simply by a love of Jesus:

The fact that rational choices cannot be identified with, caused by or otherwise explained by material processes does not imply that we will continue to be capable of making these choices after our bodies die. But what it does show is that the death of the body, per se, does not entail the death of the human person it belongs to. We should also remember that it is in God that we live and move and have our being (Acts 17:28). If the same God who made us wishes us to survive bodily death, and wishes to keep our minds functioning after our bodies have ceased to do so, then assuredly He can. And if this same God wishes us to partake of the fullness of bodily life once again by resurrecting our old bodies, in some manner which is at present incomprehensible to us, then He can do that too. This is God’s universe, not ours. He wrote the rules; our job as human beings is to discover them and to follow them, insofar as they apply to our own lives.

125 Comments

  1. Tulse
    Posted May 30, 2011 at 7:04 am | Permalink

    Remind me how Jesus gives our brains semantics?

    • Al West
      Posted May 30, 2011 at 7:20 am | Permalink

      Well, basically, through the operation of a language module in the brain located in many individuals in Wernicke’s area and Broca’s area, capable of processing statements consisting of regular combinations of aural or visual items as sorted by innate classificatory procedures, arranged in regular codes. Except that Jesus dun it.

      • Sili
        Posted May 30, 2011 at 3:57 pm | Permalink

        I’m sure the crockus must play some part in it.

    • Posted May 30, 2011 at 9:54 am | Permalink

      Poof.

    • Brad
      Posted May 31, 2011 at 8:03 am | Permalink

      Seriously, Tulse! You didn’t read that section of the Bible?? It’s my favorite chapter and verses of the whole thing–how Jeebus does it! Exactly what part of “mi-ra-cle” don’t you understand?? That’s how he does it!

      He he!

  2. Chad
    Posted May 30, 2011 at 7:07 am | Permalink

    “This is God’s universe, not ours. He wrote the rules; our job as human beings is to discover them and to follow them, insofar as they apply to our own lives.”

    Of course, Torley can only make the conclusions he does by completely ignoring all that humankind has actually learned about the universe. He’s trying to draw a veil to hide the fact that he’s strictly working from his prior assumption of a very particular kind of God (and resulting kind of universe), but it’s paper-thin.

    • Tulse
      Posted May 30, 2011 at 7:28 am | Permalink

      This is God’s universe, not ours

      Ah, so that’s what explains the 41 decillion cubic light-years of vacuum at 2.8K. I was confused, because I figured if it were, like, designed for us, it wouldn’t have so much cold empty space.

      • Jeff Engel
        Posted May 30, 2011 at 9:12 am | Permalink

        God likes decillions of cubic light-years of vacuum at 2.8K. It’s His big chilly yard. Sometimes, He spices it up with a planet with liquid water. They’re lawn ornaments, and where He can keep His beetles.

  3. Posted May 30, 2011 at 7:11 am | Permalink

    The brain-computer metaphor is, as we have seen, a very poor one;

    To be fair, it is. However, none of the differences between brains and computers that he mentions would make it any more likely that the brain is either designed, or requires a soul to operate.

    • Posted May 30, 2011 at 7:16 am | Permalink

      Also, this made me laugh:

      We should remember that the human brain is easily the most complex machine known to exist in the universe.

      Who is “resorting to vulgar characterizations” now?

      • DiscoveredJoys
        Posted May 30, 2011 at 8:12 am | Permalink

        So his god is simpler than my brain?

        • Posted May 30, 2011 at 8:34 am | Permalink

          Well, that is what some would claim. It allows them to get around the problem that you can’t invoke a complex God as a solution to the problem why there is complexity. Of course, that throws up a heap of other problems for theists, but apologists are usually only interested in “solving” one objection at a time.

          But I suspect that in this case, your remark would probably be rebutted by asserting that God is outside of this universe.

          My point, however, was that Hawking isn’t allowed to compare the brain to a computer, but Torley can apparently compare it to a machine without a problem.

        • Sili
          Posted May 30, 2011 at 3:59 pm | Permalink

          Well, nothing is pretty simple, yes.

          (But very very very symmetric.)

  4. Al West
    Posted May 30, 2011 at 7:11 am | Permalink

    Feser points out that our mental acts – especially our thoughts – typically possess an inherent meaning, which lies beyond themselves. However, brain processes cannot possess this kind of meaning, because physical states of affairs have no inherent meaning as such. Hence our thoughts cannot be the same as our brain processes.

    Well, this is garbage. While the link-up between mental states and physical states of the brain has yet to be fully worked out (and it will take some time, given the fact that, as Torley notes, human brains are the most complicated machines in the universe), there’s absolutely no reason to refer to meaning somehow outside of the brain or in any way “inherent” to make sense of it. That’s just the homunculus argument, but with a soul instead of a homunculus. That’s untenable nonsense, and there are plenty of naturalistic views on the idea of consciousness, intentionality, and ‘meaning’, including Dennett’s nuanced, albeit controversial, view. Maybe, if Torley is so open-minded as to consider the views of philosophers, he might like to survey Dennett’s work. It has been well-summarised in Tadeusz Zawidzki’s book, Dennett, so there’s no reason not to. John Searle has also produced cogent summaries of philosophy of mind that would be apropos. There’s certainly no need for recourse to supernatural arguments or nonexistent “inherent meanings”…

    • Tulse
      Posted May 30, 2011 at 7:40 am | Permalink

      there are plenty of naturalistic views on the idea of consciousness, intentionality, and ‘meaning’

      But none of those are convincing to any great degree.

      John Searle has also produced cogent summaries of philosophy of mind that would be apropos

      Although Searle’s own view seems to be that semantics is some sort of “inherent” property of brains (although by no means supernatural).

      Of course any account of intentionality or consciousness that requires supernatural explanations is no explanation at all. But I do think there is very good reason to believe that both those properties involve “hard problems”, and don’t have clear solutions.

      • Al West
        Posted May 30, 2011 at 7:49 am | Permalink

        I disagree with many of Searle’s views, and the reason I recommend his work is because he writes clearly and interestingly, and because he has written summaries of philosophy of mind that anyone can read to give them a background to the problems without necessarily selecting a position on them. I agree with you: most accounts of consciousness and intentionality are by their nature dubious, although I favour Dennett’s approach if not all of his conclusions. Methodologically-speaking, Dennett is probably right to try to dismantle the manifest image of human minds and the Cartesian Theatre than to stumble over this image and its inherent unscientific, unnaturalistic form. That is to say, we can’t take ‘meaning’ to be an inherent property of anything, let alone in its current ordinary language meaning.

        Anyway, my point is that there are many viewpoints, some undoubtedly more correct than others, and Feser’s is not one of the more correct ones.

        • Scott near Berkeley
          Posted May 30, 2011 at 9:54 am | Permalink

          I agree with you about John Searle, especially his book “The Mystery of Consciousness” (lying right now on my lap). The biggest benefit within this book is his introduction (to me) of Nobel prize winner Gerald Edelman. I have =attempted= to inculcate Edelman’s “Bright Air, Brilliant Fire: On the Matter of the Mind” (1992) into my reasoning, but it is a very dense read; every sentence carries weight. Edelman has written three or four other books as well. For all those interested in pursuing arguments as to the mind and brain and the relationship of the material mind and the physical death of our being, mental thoughts, memory, and all factors when we die, Edelman is a source you must include. Of course, Terry McDermott’s “101 Theory Drive” cannot be passed by when considering the physical nature of memory and thoughts. No question about this: our thoughts and memories are complex, but physical, and since sodium, calcium, and phosphorous are left behind when we die, so remain with them our memories and everything about our sense of self.

          • Al West
            Posted May 30, 2011 at 10:06 am | Permalink

            Thanks for the recommendation. I’m an amateur with regard to philosophy of the mind – I took a few classes at undergraduate level, and I’m an anthropology grad student at Ox now, constantly trying to engage with the cognitive science literature. In terms of the earlier influential works on the mind, I’m a newbie, to my shame. That’s why I find Searle so helpful: both “The Mystery of Consciousness” and “Mind” are sitting on the shelf behind me as I type, and I’ve found his books on social facts useful if incomplete and unnaturalistic. Clever chap indeed.

    • Posted May 30, 2011 at 7:01 pm | Permalink

      “Feser points out that our mental acts – especially our thoughts – typically possess an inherent meaning, which lies beyond themselves. However, brain processes cannot possess this kind of meaning, because physical states of affairs have no inherent meaning as such. Hence our thoughts cannot be the same as our brain processes.”

      Even so, a jpg stored on a memory stick can have no inherent meaning, nor be processed with an imaging programme, nor be printed on paper, nor understood as a picture by a person looking at it, because magnetic charges (or whatever they put on memory sticks nowadays) can have no inherent meaning as such – what?

  5. Posted May 30, 2011 at 7:13 am | Permalink

    Given that humans are obviously far from the only organisms with brains (even skunks have ’em!*), it’s amazing that they could cling to this notion of the human brain-mind as something qualitatively different.

    it shows with crystal clarity that all this intelligent-design palaver is motivated simply by a love of Jesus

    Or love of authoritarianism:

    He wrote the rules; our job as human beings is to discover them and to follow them, insofar as they apply to our own lives.

    *http://brainmuseum.org/specimens/carnivora/skunk/index.html

  6. Posted May 30, 2011 at 7:20 am | Permalink

    But as soon as I said this, I knew that comparing the human mind to a skunk butt would tick people off.

    Sure enough, Vincent Torley at the Discovery Institute doesn’t like it at all.

    Given the quality of his mind-squirtings, I suspect it hit too close to home.

  7. Jason
    Posted May 30, 2011 at 7:24 am | Permalink

    I wouldn’t say meaning alone is the problem, but the whole idea of “inherent meaning.” To preface, it seems he’s using “meaning” in its anthropocentric, subjective sense. If this is the case, then “inherent meaning” makes no sense since the existence of such meaning is contingent upon the existence of minds capable of attaching a meaning to something. It seems to me that it can only be “inherent meaning” if it has meaning in and of itself which, if the usage is subjective or involves such heavy contingencies, is impossible.

    • Scott near Berkeley
      Posted May 30, 2011 at 10:00 am | Permalink

      We should have a “List of Fuzzies” that religions rely upon to prop up impossible foundations upon which religionists build their arguments. “Inherent Meaning” is a typical reduction to absurdity: it relies upon the meaning of meaning, which relies upon the meaning of meaning, inherent, of course, upon the meaning…etc etc.

      • Jason
        Posted June 4, 2011 at 11:19 am | Permalink

        I like this idea. We could also get to fuzzies peculiar to individual religions, like “Christianity isn’t a religion, it’s a relationship.”

  8. Posted May 30, 2011 at 7:25 am | Permalink

    The human mind is not like a computer.

    Three points.

    1. Though modern computers use binary logic, their electronics are as analog as ever. Voltages do not instantaneously jump from high to low; rather, there is a cutoff whereby any voltage above is considered high and any voltage below is considered low. The actual voltage changes continuously from the one to the other and hopefully has made it far enough away from the cutoff by the time the register is read to avoid ambiguity resulting in an error.

    b) Though brains are full of all sorts of messy electrochemical circuits as opposed to messy electromechanical circuits, there’s every reason to believe (and no reason not to believe) that brains are Turing-complete. That is, there is nothing that a brain can do that a Turing Machine with a long enough tape couldn’t do.

    III/ Any claim that human consciousness is not Turing-complete — that is, any claim that there are things that a human can do that a computer with the right programming and enough memory can’t do — can trivially be demonstrated as being equivalent to a claim that all sorts of other well-established scientific principles are invalid. In particular, you wouldn’t have much trouble constructing a perpetual motion machine by exploiting the mind’s ability to solve otherwise-intractable problems.

    I’ll grant that there is, as of yet, nothing that rises to the level of proof that consciousness is Turing-complete. However, it’s an even better bet that we are meat computers than that abiogenesis was the inevitable result of early terrestrial environmental chemistry. After all, it’s at least theoretically possible (though laughably improbable) that the Earth was seeded from life that got started elsewhere.

    Cheers,

    b&

    • Tulse
      Posted May 30, 2011 at 7:48 am | Permalink

      Any claim that human consciousness is not Turing-complete — that is, any claim that there are things that a human can do that a computer with the right programming and enough memory can’t do

      I’m not sure those two claims are equivalent. Those who argue that consciousness is not a result of abstract functional properties tend to claim that consciousness is an epiphenomenon that plays no causal role in behaviour. In other words, the claim is it is logically possible that one could have a brain that is Turing-complete, but which does not actually possess consciousness (but merely “acts” like it does).

      • AT
        Posted May 30, 2011 at 8:17 am | Permalink

        if it “acts” as if it does then it possesses it

        • Tulse
          Posted May 30, 2011 at 8:21 am | Permalink

          I’m reasonably certain that Tickle-Me-Elmo dolls don’t really feel ticklish.

          • Al West
            Posted May 30, 2011 at 8:55 am | Permalink

            …and I’m reasonably certain that Tickle-Me-Elmo dolls don’t appear to feel ticklish in the way that humans do, and that we’re all fairly good at sorting this fake tickling from “real” tickling. But that’s not the point: consider Dennett’s position with regard to the intentional stance. If you want to find out whether something is ‘ticklish’, then we have to phrase the idea of ‘ticklish’ in terms of a disposition towards certain physical actions in certain circumstances, lest we end up with an unworkable, totally subjective “definition” and an essentialisation that leads to an unnaturalistic view of the mind – another chain of homunculi or a normal everyday infinite regress (“feeling ticklish is what it feels like when you’re tickled”).

            If you’re thinking of consciousness in terms of qualia and their absolute ineffability, then that produces greater metaphysical problems than it solves. It posits, basically, that human brains are different in kind from everything else in the universe. That’s untenable. If it really, really appears that a computer has consciousness, then there’s really no way to tell that it doesn’t, just as it may appear that you have consciousness but we, on the outside, have no way to tell that either, in precisely the same way.

            If there really were a doll that mimicked in every possible way the appearance of tickling, there would be no way to tell that it wasn’t being ‘tickled’, unless you take ‘tickled’ to mean something totally unverifiable and intersubjectively worthless.

            Now, that’s not the only position that exists. But it’s a good one.

            • Posted May 30, 2011 at 9:32 am | Permalink

              qualia

              While there’s a certain level of instinctive appeal to the notion of qualia, the concept falls down pretty hard once one starts to get into the physiology of perception.

              Classically, is what I perceive when I look at something red the same as what you perceive when you look at something red, or might it be that what I see in my mind is what you see in your mind when you look at something blue?

              But then when you get into the mechanics of color perception, you discover that light of a particular wavelength excites certain classes of photoreceptors and causes a predictable electrical signal to travel to a specific area of the brain. There are minor variations from person to person; the response curves of those photoreceptors vary very slightly in different individuals, for example, and even more so from species to species. And then there’re the examples of color blindness and tetrachromacy.

              Nevertheless, human color perception is extremely well understood, and the basic math describing it all was well established almost a century ago. To be sure, there are constant attempts at refinement, but the initial approximations are still far more than adequate for the purposes of consumer electronics today. It’s only researchers and artistic perfectionists who’re affected by the refinements.

              So, to the same extent that our eyes and visual cortexes are nearly identical, our visual qualia are equally nearly identical.

              Of course, it’s trivial to extend this to any of our other senses.

              Cheers,

              b&

              • Posted May 30, 2011 at 9:43 am | Permalink

                So, to the same extent that our eyes and visual cortexes are nearly identical, our visual qualia are equally nearly identical.

                I don’t think that follows from the observation that color receptors are well understood. While the visual inputs might be similar, that doesn’t necessarily mean that they are processed in the same way in every brain. Conditions like synesthesia, or color blindness due to brain damage, show that this is entirely possible.

              • Posted May 30, 2011 at 9:55 am | Permalink

                Well, of course. If the same inputs are processed in different ways, the result of that processing will (generally) be different.

                But the fact that we have a more thorough understanding of some parts of the visual processing system than others doesn’t open doors to magic in the less-well-understood parts.

                Indeed, synesthesia and the malleability of perception to mechanical and chemical modifications only more firmly point to the fact that perception is entirely the result of the electrochemical computation we already know is going on (even though we don’t fully understand that computation).

                Cheers,

                b&

              • Al West
                Posted May 30, 2011 at 9:58 am | Permalink

                The basic perceptual abilities of humans regarding colour may be well understood, but showing that this leads to the exact same percepts or the same categorisations (which are, after all, the only data we have to go on regarding the universality of colour perception) is very hard. Berlin and Kay’s work on colour classifications turned out to be wrong, despite all its influence (see David Turton on Mursi colour classification, for instance), and there could be any number of ways in which the perception of light rays could be affected to produce different subjective experiences or different, let’s say, “qualia”.

                I actually don’t endorse the qualia point of view; in fact, as you may be able to tell, I generally follow Dan Dennett’s views, which I find to be the most reasonable and naturalistic. But solving colour perception, or solving the issue of perceptual universals, is not the same as solving the problem of qualia. We may indeed see these things in the same way, but it’s the “redness” of red, and the subjective nature of the experience, that is supposed to define qualia, not the absolute difference in experience. It’s just that we each have the experience individually such that it is inaccessible to others, not that we might have totally different experiences.

                Well, that may be the case, but it’s not a good way to define anything, as it’s not intersubjectively verifiable (by definition), and it doesn’t seem necessary. Dennett’s view is that qualia are really the result of the responses to such stimuli being extremely complex, for evolutionary reasons, not that there’s some inherent ineffable and metaphysically troubling thing about them.

              • Posted May 30, 2011 at 2:59 pm | Permalink

                But the fact that we have a more thorough understanding of some parts of the visual processing system than others doesn’t open doors to magic in the less-well-understood parts

                Never said nor suggested it did 🙂

            • Tulse
              Posted May 30, 2011 at 10:32 am | Permalink

              If there really were a doll that mimicked in every possible way the appearance of tickling, there would be no way to tell that it wasn’t being ‘tickled’

              Right, but that is an epistemic problem and not an ontological one — it’s about whether we can tell if there is a difference rather than whether there is a difference. Those two things are not at all the same, unless one is a radical logical positivist.

              unless you take ‘tickled’ to mean something totally unverifiable and intersubjectively worthless.

              That’s begging the question — the issue is precisely whether “feeling like being tickled” is something more than merely convincing an observer that one is feeling like being tickled.

              • Al West
                Posted May 30, 2011 at 10:48 am | Permalink

                Yes, that’s true, but asserting the existence of this thing, ‘consciousness’, that a machine might or might not possess, is a problem that has no solution presently. Saying that it could appear to be conscious without actually being conscious implies that we already have a definition of consciousness that is intersubjectively valid and not based on behaviour. There is no coherent, naturalistic view, presently, as to the ontological status of consciousness, and so there’s not really much difference between ontology and epistemology here, troubling though that is.

              • Tulse
                Posted May 30, 2011 at 10:55 am | Permalink

                asserting the existence of this thing, ‘consciousness’, that a machine might or might not possess, is a problem that has no solution presently

                Al, I agree with the current limitations of our understanding, but to then say that if something acts conscious it must be conscious (or, perhaps more accurately, we might as well consider it conscious) seems like giving up to me. It’s the equivalent of the drunk looking for his dropped car keys under the street light because he can see better there.

              • Al West
                Posted May 30, 2011 at 11:32 am | Permalink

                It’s frustrating indeed, and I’m not saying that the computer must be conscious – just that ‘consciousness’ is impossible to define intersubjectively without recourse to behaviour at this point. Anything else that could be asked or said – does the computer have qualia?, for instance – asserts qualities of consciousness that have no verifiable aspects. That’s all.

              • Tulse
                Posted May 30, 2011 at 11:39 am | Permalink

                Anything else that could be asked or said – does the computer have qualia?, for instance – asserts qualities of consciousness that have no verifiable aspects.

                I completely agree, and I think that such qualities are not verifiable in principle, which is what makes consciousness (or at least its subjective nature) such a “hard problem”.

      • Posted May 30, 2011 at 8:19 am | Permalink

        In other words, the claim is it is logically possible that one could have a brain that is Turing-complete, but which does not actually possess consciousness (but merely “acts” like it does).

        I believe the good Dr. Turing had a test named after him which is especially apropos.

        If we have two devices which perform the exact same computation to achieve the exact same result, in what meaningful sense can we differentiate between the mechanisms performing the computation?

        Does Photoshop have a different “consciousness” when run on a Mac as opposed to a PC? What about when Photoshop is run in a PC virtual machine on a Mac?

        It comes down to the identity principle, really. If two things are identical in every aspect…well, then, they’re identical.
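        To put the identity point in code (a toy sketch; the two function names are purely illustrative): two different mechanisms computing the same function are indistinguishable by their outputs alone.

```python
# Two "mechanisms" for the same computation: repeated increment
# versus native addition. Their input-output behaviour is identical.
def add_iterative(a: int, b: int) -> int:
    for _ in range(b):
        a += 1
    return a

def add_builtin(a: int, b: int) -> int:
    return a + b

# Nothing in the results reveals which mechanism produced them.
assert all(add_iterative(x, y) == add_builtin(x, y)
           for x in range(10) for y in range(10))
```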

        Cheers,

        b&

        • Tulse
          Posted May 30, 2011 at 8:26 am | Permalink

          If we have two devices which perform the exact same computation to achieve the exact same result, in what meaningful sense can we differentiate between the mechanisms performing the computation?

          It makes sense if there is something other than objectively verifiable computation going on in one of the mechanisms. The whole issue is whether consciousness arises purely through the embodiment of abstract functionality, or if there is something more. You seem to be begging the question by implicitly claiming that consciousness is simply the result of computation — that’s precisely what is in potential dispute.

          It comes down to the identity principle, really. If two things are identical in every aspect…well, then, they’re identical.

          Of course, but the issue is whether they are identical in every aspect. And the problem with consciousness (or at least the subjective aspects of it) is that there is no objective way to have direct access to it, to determine if it varies between two mechanisms.

          • Posted May 30, 2011 at 8:56 am | Permalink

            You seem to be begging the question by implicitly claiming that consciousness is simply the result of computation — that’s precisely what is in potential dispute.

            Eh, I can very easily turn this ’round.

            My claim is that consciousness is like every other phenomenon ever observed: wholly natural, without the necessity of unobserved and unobservable meta-phenomena.

            In order to claim that consciousness is anything other than an emergent property of computation, one must first either propose a mechanism by which consciousness manifests itself separately from the computational mechanism or demonstrate why natural processes are inadequate to account for the phenomenon.

            As with all such discussions, the matter of definition is essential.

            For me, for these types of discussions, I find it sufficient to define consciousness as the ability to engage in meta-cognition. That is, a self-aware conscious entity is able to perform critical analyses of its own cognition. “Je pense donc je suis.”

            How would you define consciousness, and how is said definition incompatible with a purely naturalistic mechanical model?

            Cheers,

            b&

            • Tulse
              Posted May 30, 2011 at 10:43 am | Permalink

              consciousness is like every other phenomenon ever observed: wholly natural, without the necessity of unobserved and unobservable meta-phenomena

              I completely agree about the “natural” part, but the rest seems to require a kind of determined anesthesia. Can you “observe” my subjective experience? I know that I can’t “observe” the experience of echolocation, or sensing magnetic field lines, or detecting chemical trails (however much I may be able to study the natural mechanisms that produce those experiences), so it seems obvious to me that at least some subjective experiences are unavailable to objective observation.

              In other words, consciousness (or at least subjective experience) is not like other objectively observable phenomena, in that it itself is not objectively observable. We can observe the behaviour it produces, and even the physical processes involved in its production, but we cannot observe the phenomenon itself.

              And yes, as anyone who has ingested interesting chemicals knows, subjective experience is most definitely a product of the brain and its electro-chemical environment and processes. But that doesn’t mean that it is identical to it, any more than a musical instrument is identical to music.

              • Posted May 30, 2011 at 11:19 am | Permalink

                But that doesn’t mean that it is identical to it, any more than a musical instrument is identical to music.

                Is there a difference between the sound produced by a musical instrument and that produced by an electronic recording of the instrument?

                To get around the question of fidelity, consider a live concert in a stadium with an electric guitar as opposed to somebody playing an unamplified acoustic guitar in a living room.

                What’s the difference between the music produced by the live musician strumming the guitar — where the sound of the strings gets picked up by a microphone, the resulting electronic impulses fed to an A/D converter, modified by various effects algorithms, sent to a D/A converter, and then enormously amplified — and adding an extra delay loop whereby the digital signal gets written to a hard drive and then onto a CD, and read back from the CD before being sent to the D/A converter?

                Back to the target, you’re aware of Folding@Home, right? Imagine some far-distant supercomputer capable of accurately and thoroughly simulating biochemical processes down to the level of protein folding on the scale of an entire human body. Is the person thus simulated somehow (or not) differently conscious from you or me?

                If so, then how? If not, how much can you simplify the simulation before consciousness vanishes?

                Cheers,

                b&

              • Tulse
                Posted May 30, 2011 at 11:32 am | Permalink

                Is the person thus simulated somehow (or not) differently conscious from you or me?

                If so, then how? If not, how much can you simplify the simulation before consciousness vanishes?

                The problem as I see it, Ben, is that there is in principle no way to objectively answer the final quoted question. What possible objective test could you use to determine if subjective experience has disappeared? You can’t rely on objective behaviour, since the question itself is precisely whether a simulation that behaves as if it is conscious actually experiences consciousness.

                Here’s another way to put it — some people (such as Searle) have claimed that consciousness is somehow an inherently biological property, and not merely a result of computing abstract functions. Presumably one could create two entities, both of which produce identical externally-observable behaviour, but one of which is biological and the other purely non-biological. What sort of test could one possibly do to determine if Searle is right or wrong? Both entities act exactly the same — how can you determine experimentally whether only one is conscious? As I understand it, you are arguing that such a question makes no sense, but arguing it essentially by fiat. (Or is that a misrepresentation?)

              • Posted May 30, 2011 at 12:11 pm | Permalink

                What possible objective test could you use to determine if subjective experience has disappeared?

                Tell me how you define, “subjective experience,” and I’ll let you know.

                Presumably one could create two entities, both of which produce identical externally-observable behaviour, but one of which is biological and the other purely non-biological. What sort of test could one possibly do to determine if Searle is right or wrong?

                First, I’d need something more concrete as to what’s meant by “biological” and “non-biological.” Of what significance is it that the one entity is calculating 1 + 1 = 2 by measuring voltage differences in a silicon-based substrate or by measuring electrochemical changes in a carbon-based substrate? Does carbon imbue computation with some form of subjectivity that silicon inhibits? If so, what if we make the computer out of graphene rather than silicon — will that make it capable of consciousness?

                I think you’ll find that, once you take this whole notion to reductio meta-extremes, it loses all coherence. On the other hand, when simply sticking with the identity principle that two things identical in all aspects are the same thing, no such absurdities ensue.

                Cheers,

                b&

      • Jeff Engel
        Posted May 30, 2011 at 10:53 am | Permalink

        If consciousness plays no causal role in behavior, then all discussion of conscious states – like this one – is causally independent of conscious states.

        If your conception of consciousness gets you to the point where it can have nothing to do with any discussion of it, then it’s either a myth or you’ve misconceived it along the way.

        • Tulse
          Posted May 30, 2011 at 10:59 am | Permalink

          If consciousness does play a causal role in behaviour, then something other than the causal chain of physical processes is involved, which seems to demand a radical form of dualism.

          • Gregory Kusnick
            Posted May 30, 2011 at 11:17 am | Permalink

            Does mathematics play a causal role in calculating the digits of pi? Even though the calculation can be traced down to a causal chain of physical processes, does that preclude any more abstract level of causal description, without invoking dualism?

            • Tulse
              Posted May 30, 2011 at 11:23 am | Permalink

              I’m not clear what your claim is, Gregory — are you saying that “consciousness” is merely a description, a label that gets applied objectively to a bunch of physical processes?

              • Posted May 30, 2011 at 11:28 am | Permalink

                Gregory can answer for himself, but I think that’s a reasonable statement.

                As I wrote above, for the purposes of these types of discussions, I define consciousness as the ability and / or process of metacognition, of thinking about what you’re thinking. It’s far from the only useful definition, but it seems to encompass everything that gets people riled up.

                What’s your definition of the term?

                Cheers,

                b&

              • Tulse
                Posted May 30, 2011 at 11:37 am | Permalink

                I define consciousness as the ability and / or process of metacognition, of thinking about what you’re thinking

                That captures the semantic, intentional, “cognitive” component, but does not at all capture the subjective experience, the “what it’s like” (to quote Nagel). I think it’s important to keep the semantic and subjective separate conceptually — we have made a huge amount of progress in psychology and cognitive science and AI in understanding how the former works, but as far as I can see, are nowhere close to understanding the latter.

              • Posted May 30, 2011 at 12:03 pm | Permalink

                That captures the semantic, intentional, “cognitive” component, but does not at all capture the subjective experience, the “what it’s like” (to quote Nagel).

                But why on Earth should we assume that the two are different and distinct? Without invoking dualism, of course, that is.

                No, really. I’m trying to formulate a way of expressing the two concepts in a coherent way that distinguishes between the two, and I’m failing miserably.

                If we agree that consciousness (for this discussion’s porpoises) is the capacity for and act of metacognition, and if the subjective experience one experiences while metacogitating is “what it’s like” to be conscious, all we’re doing is defining the exact same thing from two different perspectives. Either 1 + 1 = 2 or 2 – 1 = 1, makes no difference either way.

                It seems all you’ve done is made plain that “what it feels like” to be a conscious human is exactly the same as “what it feels like” to be a down-to-the-molecule accurate computer simulation of a human. And the implication remains that one needn’t go to such extremes in order to get human-grade consciousness indistinguishable from (and therefore identical to) the original.

                Again, unless you’re invoking dualism (which I’m pretty sure you’re not).

                Cheers,

                b&

              • Posted May 30, 2011 at 12:04 pm | Permalink

                Oh — and I still think it’d be really, really, really, really super helpful for you to offer your own definition of the term (even if it’s just to agree with one of the ones already in play here).

                b&

              • Gregory Kusnick
                Posted May 30, 2011 at 12:12 pm | Permalink

                I object to the word “merely”. Deep Blue’s algorithm is in some sense a description of how it plays chess, but it’s not like the physical processes came first and the description was applied later. The algorithm is part and parcel of the workings of the machine; you can’t separate it causally from the physical processes it controls, since those processes wouldn’t occur in the particular ways they do without the controlling algorithm.

                The same, I claim, goes for consciousness. It’s not merely a description of what goes on in our brains; it’s an integral part of what makes our brains work the way they do. It’s (obviously) not just atoms bumping around at random; natural selection built our brains to have properties of self-reflection and internal feedback, and those high-level architectural features affect what goes on at the molecular level.

                As for the subjectivity issue, the fact that we haven’t yet made much progress on that would seem to argue that it’s too early to conclude that it’s not physical (as you seem to be arguing).

              • Tulse
                Posted May 30, 2011 at 12:16 pm | Permalink

                If we agree that consciousness (for this discussion’s porpoises) is the capacity for and act of metacognition

                But we don’t agree on that, which is precisely what’s at issue. As I suggested previously, one can distinguish between the semantic/intentional/”computational” aspect of consciousness, which is traditionally called “cognition”, and the subjective “what it feels like” aspect.

                subjective experience one experiences while metacogitating is “what it’s like” to be conscious, all we’re doing is defining the exact same thing from two different perspectives.

                But that’s not an explanation of why subjective experience exists, of how it arises. I completely agree that (at least in my personal experience) it tends to be correlated with certain kinds of cognitive processes, but that doesn’t explain why it is those cognitive processes and not others that produce that experience. (And frankly, notions of “metacognition” and recursion a la Hofstadter are not explanations either, merely assertions.)

                So, to be clear, I think that the term “consciousness” typically gets applied both to the semantic content of the cognitive processes we are aware of, and to the subjective experiences that are also part of that awareness. While both are interesting, it is the latter that I think is most problematic, precisely because it seems inaccessible to objective empirical observation. That does not mean it is magic, or that it requires Jesus, or that it arises because of some non-material process. But it does mean that how some material process gives rise to subjectivity is difficult to understand, and may in principle be impossible to determine empirically.

                Does that help?

              • Posted May 30, 2011 at 12:30 pm | Permalink

                Does that help?

                Hmmm…not much.

                Can you expand a bit on what you mean by:

                the subjective experiences that are also part of that awareness

                As best I can tell, you’re asserting that the subjective experience is metacognition from the perspective of the one doing the metacogitating. And I fail to see how that gets us anywhere.

                Is rotation somehow different whether you’re on the ferris wheel or on the ground?

                As important as perspective is, I fail to see how it has any bearing on the subject at hand.

                Cheers,

                b&

              • Gregory Kusnick
                Posted May 30, 2011 at 12:43 pm | Permalink

                As far as I can tell, Tulse is worried about why it “feels like something” to be alive and conscious. Why do we have the internal sensation of actually being present in the world, rather than (say) no internal sensations at all, and just going robotically about our business?

                I’m not sure these are coherent questions. Surely evolved creatures that interact meaningfully with their environment must have some awareness of being present in that environment, and that must feel like something to them if they’re to make any practical use of it. The more complex their interactions, the more complex their internal models of themselves and the world. Beyond that, it’s the red herring of qualia all over again (if you’ll pardon the pun).

              • Tulse
                Posted May 30, 2011 at 1:00 pm | Permalink

                Surely evolved creatures that interact meaningfully with their environment must have some awareness of being present in that environment, and that must feel like something to them if they’re to make any practical use of it

                What do you mean by “awareness”? A thermostat is “aware” of the temperature in a room, and the temperature it is set to maintain, but would you say it actually “feels” heat and cold? There is no reason that “internal models” need to be accompanied by subjective experience — Deep Blue maintains very complex models of chess, but I’d guess very few folks would say that it is “conscious”.

                So yeah, I suppose I am referring to experiences that would sometimes get labelled “qualia”, although I don’t want to get deep in the weeds about the various notions of that term, since I think that dredges up connotations and arguments we don’t need to fight. But I do think that there is something fundamentally different about what happens when a car has its “Low Fuel Warning” light come on and when I feel hungry, and it can’t be accounted for merely by the complexity of the “model”.
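                To make the thermostat contrast concrete (a toy sketch; all names are illustrative): its entire “awareness” is a comparison on numbers, with no state anywhere that could count as feeling heat or cold.

```python
# A minimal thermostat: it "senses" temperature and responds,
# but its entire state is two numbers and a comparison.
class Thermostat:
    def __init__(self, setpoint: float, hysteresis: float = 0.5):
        self.setpoint = setpoint
        self.hysteresis = hysteresis

    def decide(self, reading: float) -> str:
        # Pure data-processing: reading in, action out.
        if reading < self.setpoint - self.hysteresis:
            return "heat_on"
        if reading > self.setpoint + self.hysteresis:
            return "heat_off"
        return "idle"

t = Thermostat(setpoint=20.0)
print(t.decide(18.0))  # heat_on
print(t.decide(22.0))  # heat_off
```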

              • Posted May 30, 2011 at 1:00 pm | Permalink

                I tend to agree with Tulse that subjectivity seems like a mysterious superfluity. It’s not hard to build a robotic system which includes a component that keeps track of the activities and states of all the other components, and takes corrective action or raises an alarm when something goes wrong. But I think few of us would claim that the self-monitoring component “experiences” distress when, say, an actuator motor burns out in the way we do when we tear a ligament. Subjectivity does not obviously “add” anything to the capacity for self-monitoring.

                That being said, I agree the question may be incoherent. Really, I only know “what it’s like” to be myself. We all extrapolate that other humans have a similar inner life — though even there, we often go wrong in our assessments of what’s really going on in someone else’s head.

                None of this mandates resorting to a spooky explanation, of course. It may be this is more of a semantic problem than a real one.
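                Such a self-monitoring component can amount to nothing more than bookkeeping (a toy sketch; all names are illustrative):

```python
class Watchdog:
    """Tracks the state of other components and flags failures.

    Pure bookkeeping: nothing here resembles distress.
    """
    def __init__(self):
        self.status = {}

    def report(self, component: str, ok: bool):
        self.status[component] = ok

    def check(self):
        # "Corrective action or alarm": return the failed components.
        return [name for name, ok in self.status.items() if not ok]

w = Watchdog()
w.report("actuator_motor", True)
w.report("servo_2", False)
print(w.check())  # ['servo_2']
```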

              • Tulse
                Posted May 30, 2011 at 1:06 pm | Permalink

                None of this mandates resorting to a spooky explanation, of course

                Of course, but what counts as “spooky” (or “non-natural”) changes over time. Einstein didn’t like the notion of quantum entanglement because he thought it was “spooky”. The ether was postulated because the notion of waves travelling without a medium seemed absurd. Something can be “natural” and still seem “spooky”.

              • Posted May 30, 2011 at 1:09 pm | Permalink

                As far as I can tell, Tulse is worried about why it “feels like something” to be alive and conscious.

                Hmmm…I think you’re on to something, here.

                If we’re agreed that the act (not perception) of consciousness is the perception of perception, then it follows that we’re already in a situation where there are perceptions to be perceived. And it should also follow that meta-perception is itself just another kind of perception.

                In that context, at least to me, it seems a bit silly to suggest that this one particular type of perception is fundamentally different from all other types of perception. Sure, each type of perception — pain, pleasure, hunger, and so on — is unique. But they’re all variations on the same theme.

                Is a cat’s hunger substantively different from a dog’s or a human’s? The physiology involved is essentially identical. Again, without invoking dualism, how can one claim they’re significantly different?

                So why is there a difference between the thoughts associated with hunger and the thoughts associated with thinking about hunger? Of course both invoke feelings and perceptions, and of course those feelings and perceptions are distinguishable — that’s how we tell one thought from another!

                If we couldn’t perceive our own thoughts, we wouldn’t be capable of metacognition. Without perception, without feeling, there’s nothing to perceive or feel. If there were some other mechanism used for self-reflection, it would — of necessity! — be associated with some sort of analogue for perception and feeling; without such, there’s no “there” there to perceive.

                Cheers,

                b&

              • Tulse
                Posted May 30, 2011 at 1:18 pm | Permalink

                If we’re agreed that the act (not perception) of consciousness is the perception of perception

                Depending on how you are using the term “perception”, I think there is a real risk of question-begging here.

                Without their subjective aspects, “perceptions” are just “data”, and “perception” is just “data-processing”. So, in one formulation, the above statement becomes “consciousness is data-processing of data-processing”, which seems to me to be clearly wrong (there are lots of things that process data about data, but which we don’t think are conscious).

                If, on the other hand, you are talking about “perception” in terms of what our subjective sensations are, then you’ve already imported subjectivity into the process.
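                A toy sketch (names illustrative) shows how little “data-processing of data-processing” demands; plenty of mundane software already does it:

```python
class Counter:
    """Processes data, and also processes data about its own processing."""
    def __init__(self):
        self.events = []

    def process(self, datum):
        # First-order data-processing.
        self.events.append(datum)
        return datum * 2

    def meta_process(self):
        # Second-order: data-processing about its own data-processing.
        return {"count": len(self.events), "total": sum(self.events)}

c = Counter()
for x in (1, 2, 3):
    c.process(x)
print(c.meta_process())  # {'count': 3, 'total': 6}
```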

              • Posted May 30, 2011 at 1:19 pm | Permalink

                But I think few of us would claim that the self-monitoring component “experiences” distress when, say, an actuator motor burns out in the way we do when we tear a ligament.

                You’re missing two key differences.

                First, even the most advanced of robotic assembly lines is much simpler than the simplest of vertebrates.

                And, secondly, a vertebrate was designed by a billions-of-years evolutionary process in which having certain types of injury cause distress gave the injured individual a survival advantage. No engineer would program a robotic assembly line to panic and try to run away when a servomotor burns out, or to do more than ask for confirmation before taking an action likely to cause such damage.

                I never tortured ants with a magnifying glass when I was young, so this isn’t from personal experience. But I rather expect that it’s safe to say that a burning ant experiences distress. Wouldn’t you?

                Cheers,

                b&

              • Posted May 30, 2011 at 1:27 pm | Permalink

                Without their subjective aspects, “perceptions” are just “data”, and “perception” is just “data-processing”.

                We keep coming back to the subjective.

                What difference does it make if you’re observing the inner workings of your skull from the inside or if some other entity is making the same observations from the outside?

                So, in one formulation, the above statement becomes “consciousness is data-processing of data-processing”, which seems to me to be clearly wrong (there are lots of things that process data about data, but which we don’t think are conscious).

                On what basis do you make that claim?

                Again: assume a super-duper computer that could model a human down to the molecular level. Would the person thus simulated be conscious? Would that consciousness be substantively different from any other person’s consciousness?

                If not, where do you draw the line? If so, what property has the model failed to take into account?

                Cheers,

                b&

              • Tulse
                Posted May 30, 2011 at 1:31 pm | Permalink

                even the most advanced of robotic assembly lines is much simpler than the simplest of vertebrates

                But that’s simply an assertion that complexity is important, without offering an explanation as to why. I see a lot of this kind of argument from those advocating some sort of functionalism, usually boiling down to “it’s all complexity/recursion/self-modeling”, but with no account of how that is an explanation. There are lots of complex, self-regulating things in this world that are presumably not conscious.

                a vertebrate was designed by a billions-of-years evolutionary process in which having certain types of injury cause distress gave the injured individual a survival advantage

                But why does there have to be a subjective component to that information? As long as the organism responded correctly, why does the injury have to be accompanied by a sensation of pain, or a sensation of anything? Again, you seem to be assuming that certain types of information are necessarily correlated with or involve certain subjective experiences, but you don’t explain why that is the case.

              • Tulse
                Posted May 30, 2011 at 1:41 pm | Permalink

                Holy crap, this has become far more involved than I intended! Sorry for seemingly taking over the thread…

                What difference does it make if you’re observing the inner workings of your skull from the inside or if some other entity is making the same observations from the outside?

                Ripping off Nagel, I presume that you agree that bats have experiences. And I presume you agree that their sensory apparatus is very different from ours, at least with regards to echolocation. So, for me, this implies that bats have experiences of the world that I cannot have, and no amount of poking about in bat skulls, or simulations of bat neural systems, will give me that experience. What bats observe about their own experiences is inaccessible to me in principle.

                assume a super-duper computer that could model a human down to the molecular level. Would the person thus simulated be conscious?

                Assume a super-duper computer that could model an economy down to the last dollar. Would it actually have money? Again, you’re presuming your functionalist conclusion in your example. But that’s what’s at issue. Or, perhaps more accurately, what’s at issue is that it isn’t possible to empirically determine if you’re right or wrong. To go back to our bat example, suppose you create a bat neural simulation — how would you know if it actually had the experiences of a bat (not the behaviour, but experiences)? How could you tell?

              • Posted May 30, 2011 at 1:43 pm | Permalink

                As long as the organism responded correctly, why does the injury have to be accompanied by a sensation of pain, or a sensation of anything?

                In order for the organism to respond, it must perceive that there’s a need to respond. Yes? There must be some sort of stimulus (the sharp fang closing on its flanks); a mechanism to sense the stimulus (nerve endings); a mechanism to interpret the stimulus (the brain); and a mechanism to react to the analysis of the stimulus (run away!).

                Therefore, we know that there must be some sort of sensation, or else the chain is broken and the organism doesn’t respond at all (let alone correctly).

                At this point, we’re back to qualia: is the sensation that this particular organism experiences in reaction to being bitten by a predator that we label “pain” the same sensation that this other organism experiences in reaction to being bitten by a similar predator?

                Hopefully, in light of this analysis, the concept of “qualia” should be self-evidently meaningless.

                Cheers,

                b&

              • Posted May 30, 2011 at 1:55 pm | Permalink

                Assume a super-duper computer that could model an economy down to the last dollar. Would it actually have money?

                Funny you should ask that.

                Last I heard, there were people who were making a living in the real world by earning virtual money in online multiplayer video games. There’re even exchange rates and all the rest you’d expect with an economy. And, presumably, the same sorts of inflationary potential any other fiat currency faces.

                Clearly, your hypothetical economy has a great deal of money in the context of the simulation. How much that money is worth in the context of the environment the simulation is running in is no different a calculation from how much a given E-book or movie download is worth.

                For that matter, you do realize that most of our financial system is already entirely electronic? How much do you think the database entries in your bank’s computers that hold your account balances are worth? Is that actually money? How much is that piece of plastic in your wallet worth — the few pennies it took to manufacture it, or the hundreds / thousands / whatever of dollars you can buy by presenting it to somebody? And, if you buy an expensive dinner and turn it into shit by way of your digestive system, is that really worth more than if you bought rice and beans and turned it into much the same shit?

                Cheers,

                b&

              • Tulse
                Posted May 30, 2011 at 1:57 pm | Permalink

                There must be some sort of stimulus (the sharp fang closing on its flanks); a mechanism to sense the stimulus (nerve endings); a mechanism to interpret the stimulus (the brain); and a mechanism to react to the analysis of the stimulus (run away!).

                The same is true of a thermostat.

                Again, it seems to me you’re importing the notion of some subjective percept in the term “stimulus” (and “sense”). If you use the more neutral term “information” or “data”, the chain outlined above doesn’t seem to need any notion of subjective “pain”.

                we know that there must be some sort of sensation, or else the chain is broken and the organism doesn’t respond at all

                Our bodies process all sorts of signals that don’t involve conscious subjective experiences. Our pancreas detects levels of glucose and insulin and regulates those without any “sensation”. Our blood pressure involves homeostatic functions that generally have no “sensations”. Our physical selves process vast amounts of information, take in all sorts of data and signals, without any “sensations”. So clearly “sensations” are not a necessary component of responding to environmental or internal change.

                At this point, we’re back to qualia: is the sensation that this particular organism experiences in reaction to being bitten by a predator that we label “pain” the same sensation that this other organism experiences in reaction to being bitten by a similar predator?

                Ben, that is not the central question of qualia — the central question is why is some data-processing, some information-crunching, some signal analyzing, accompanied by subjectivity, and how can a purely third-person objective model of the world account for such a thing?

                If we agree that sensations exist, and aren’t solipsists, we can agree that similar organisms likely have similar subjective sensoria, without being able to explain why they have sensoria to begin with.

              • Gregory Kusnick
                Posted May 30, 2011 at 2:01 pm | Permalink

                Tulse, if you’re asking why the perception of pain is unpleasant, rather than neutral or subconscious, it’s because discomfort is a more powerful motivator than mere information. A correct response includes being afraid of letting it happen again.

                If you’re asking why an organism can’t be uncomfortable, afraid, etc. without actually feeling anything, then as far as I can see you’ve robbed the word “feeling” of any coherent meaning.

              • Posted May 30, 2011 at 2:15 pm | Permalink

                If you’re asking why an organism can’t be uncomfortable, afraid, etc. without actually feeling anything, then as far as I can see you’ve robbed the word “feeling” of any coherent meaning.

                I think the question Tulse and I are asking is: Why have “feelings” at all? One could program a robot, appropriately equipped with temperature sensors in its surface covering, to avoid objects hot enough to cause damage. Having once touched a heat source while exploring its world, it would presumably shun fire just as effectively as the proverbial burnt child — where is the necessity for pain or fear?
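                That robot takes only a few lines to sketch. Nothing in the code answers to "pain" or "fear", just a damage threshold and a memory, yet one bad encounter suffices for it to shun the heat source thereafter (a toy illustration; all names are made up):

```python
# A robot that learns, from one bad encounter, to avoid hot objects --
# with no "pain" variable anywhere, just a damage record and a memory.

DAMAGE_THRESHOLD = 60.0  # surface temperature (C) above which damage occurs

class Robot:
    def __init__(self):
        self.avoid = set()  # kinds of object to shun in the future

    def touch(self, kind, temperature):
        if temperature > DAMAGE_THRESHOLD:
            self.avoid.add(kind)   # record the association, once
            return "damaged"
        return "ok"

    def approach(self, kind):
        return kind not in self.avoid

robot = Robot()
robot.approach("candle")          # True: no experience yet
robot.touch("candle", 400.0)      # one burn...
robot.approach("candle")          # False: the burnt robot shuns the fire
```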

              • Posted May 30, 2011 at 2:41 pm | Permalink

                Our pancreas detects levels of glucose and insulin and regulates those without any “sensation”.

                I think a diabetic might disagree with you about the lack of sensation going on there, but never mind. Clearly, we are more perceptive of our hearts and better able to consciously control them than our pancreases, and we are much more aware of and much more capable of controlling our lungs.

                If you consider the metabolic cost of cognition and the adaptive benefits to awareness and control of those organs, I think the answer should become obvious: we have neither the hardware nor the software necessary to perceive those things, any more than a modern car’s engine computer is capable of perceiving or reacting to ambient relative humidity. (I’m guessing…maybe some cars do by now.)

                You might as well ask why bees can see into the ultraviolet but we can’t.

                I think the question Tulse and I are asking is: Why have “feelings” at all? One could program a robot, appropriately equipped with temperature sensors in its surface covering, to avoid objects hot enough to cause damage. Having once touched a heat source while exploring its world, it would presumably shun fire just as effectively as the proverbial burnt child — where is the necessity for pain or fear?

                My first response is to ask you why you wouldn’t characterize the robot’s experiences as a fear of pain? What, specifically, would it be necessary to add to the robot’s experiences in order for it to qualify as a fear of pain?

                My second response would be to remind you that our own reactions and perceptions are the result of an evolutionary process that has settled on a particular optimization.

                So long as you have organisms based on roughly the same biochemistry as we are, excessive heat will cause damage. Those organisms will evolve mechanisms for avoiding heat damage. Those mechanisms will involve perception of damage and imminent damage, and they will be quite effective (given sufficient evolutionary pressure). Avoiding such damage will be highly imperative. If the organism has the brain cycles to spare to examine its own perceptions, it’ll give a label to them.

                If we met such an organism, we’d translate that label as “pain” and / or “fear.”

                I’ve yet to see any reason why we shouldn’t do so, or why we should consider the other’s pain and fear as anything other than exactly that.

                Cheers,

                b&

              • Tulse
                Posted May 30, 2011 at 2:45 pm | Permalink

                If you’re asking why an organism can’t be uncomfortable, afraid, etc. without actually feeling anything, then as far as I can see you’ve robbed the word “feeling” of any coherent meaning.

                If I asked you if a thermostat can react to heat without actually feeling anything, I presume you’d say yes, right? Once again, you’re implicitly presuming what is at issue.

              • Posted May 30, 2011 at 3:00 pm | Permalink

                If I asked you if a thermostat can react to heat without actually feeling anything, I presume you’d say yes, right?

                Okay, I’ll take the bait.

                But, first, permit me a diversion.

                If I told you that Baihu has three legs, you’d immediately inquire as to what tragedy befell him that cost him his leg. Yet it is perfectly true that Baihu has three legs; it just so happens that he has a fourth leg in addition to the three I originally mentioned.

                So, yes. In the same sense that Baihu has three legs, the thermostat feels temperatures.

                For this discussion’s very limited porpoises, the mechanical thermostat is different only quantitatively, not qualitatively, from the system that comprises your heat-sensitive nerves and your brain and everything else that goes along with it.

                Now, obviously, the thermostat is, computationally, vastly dwarfed by even an ant — let alone a human. It’s like comparing an abacus to Deep Blue, only more so. But, you know what? They’re both members of the same class, and the same fundamental principles underlie both.

                The biggest difference is that the one is complex enough for emergent properties to manifest while the other…is a bimetallic strip hooked up to some switches. And there’s also the fact that you’re conflating one of those emergent properties — the capability for metacognition — with the fundamental properties. But, as I wrote above, Baihu has three^Wfour legs.

                Does that help?

                Cheers,

                b&

              • Gregory Kusnick
                Posted May 30, 2011 at 3:30 pm | Permalink

                I’m going to differ somewhat with Ben here. What makes us different from a thermostat is that the thermostat merely reacts; it learns nothing from the experience, and in that sense experiences nothing. If our whole goal were to react mechanically to heat in the moment, we wouldn’t need feelings either; unconscious reflex arcs would be sufficient. But natural selection has imbued us with purposes beyond that; we gain advantages from being able to learn from experience and model hypothetical experiences, and for that to happen we must have experiences for our mental experience-cruncher to crunch; there must be something it is like to be us, so that we can imagine what it’s like to be us in other circumstances.
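                Gregory's contrast between mere reaction and experience-crunching can also be sketched. The toy agent below (illustrative names only, not a model of any real cognitive architecture) stores episodes and consults them to evaluate hypotheticals it has never acted out:

```python
# A merely reactive system responds and forgets; an experience-cruncher
# records what happened and consults that record to evaluate hypotheticals.

class Learner:
    def __init__(self):
        self.experiences = []  # (situation, action, outcome) triples

    def act(self, situation, action, outcome):
        self.experiences.append((situation, action, outcome))
        return outcome

    def imagine(self, situation, action):
        # model a hypothetical by recalling outcomes of similar episodes
        outcomes = [o for s, a, o in self.experiences
                    if s == situation and a == action]
        return outcomes[-1] if outcomes else "unknown"

agent = Learner()
agent.act("fire", "touch", "burn")
agent.imagine("fire", "touch")   # "burn" -- predicted without re-touching
agent.imagine("fire", "watch")   # "unknown" -- no experience to crunch
```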

              • Posted May 30, 2011 at 6:39 pm | Permalink

                What makes us different from a thermostat is that the thermostat merely reacts; it learns nothing from the experience, and in that sense experiences nothing.

                I’ll both agree and disagree with Gregory’s disagreement with me.

                I’ll agree with him to the extent that I’ve been consistently equating consciousness with metacognition, and the thermostat clearly doesn’t rise to the level of metacognition.

                But I’ll disagree in the narrow implication — and I suspect that Gregory would agree with my disagreement here — that a thermostat is inherently incapable in principle of metacognition.

                If the Sirius Cybernetics Corporation were to build a thermostat, it would most certainly be capable of metacognition (indeed, it would be neurotic if not psychotic) and would therefore be conscious.

                But the thermostat could rightfully be compared with the simplest of organisms whose response to stimuli is equally restricted. And I don’t think anybody here has any trouble with the concept that there’s a continuum linking those simplest of organisms with us.

                Let’s turn it back a bit. Are chimps conscious? Dolphins? New Caledonian crows? Tea partiers? Giant octopussies? Sheep? Two-year-olds?

                Cheers,

                b&

          • Jeff Engel
            Posted May 30, 2011 at 11:27 am | Permalink

            If consciousness is a part of the causal chain of physical processes involved in chatting about consciousness, then there’s no dualism at all. You only get dualism out when you assume dualism going in.

            • Tulse
              Posted May 30, 2011 at 11:34 am | Permalink

              consciousness is a part of the causal chain of physical processes

              What does that even mean? I can presumably trace all the electro-chemical events in your brain that lead to you typing the above post — where does consciousness interact in that chain?

              • Jeff Engel
                Posted May 30, 2011 at 11:41 am | Permalink

                The claim would be that consciousness is one or several of those physical processes.

                Me, I wonder if there aren’t too many unnatural connotations in our concept of consciousness for too many people not to accept an identity account. But if so, I would conclude that the problem is that the concept is at best a loose way of speaking and at worst simply bankrupt.

              • Tulse
                Posted May 30, 2011 at 11:48 am | Permalink

                The claim would be that consciousness is one or several of those physical processes.

                But the subjective nature (and sense of free will) clearly isn’t identical to a physical process — it may arise from such a process, but subjective consciousness is not literally (type-)identical to chemistry. That’s like saying that War and Peace is identical to the particular paper and ink it is printed on.

    • Posted May 30, 2011 at 8:23 am | Permalink

      While the mind is certainly Turing-complete – I can perform any binary operation a computer can do using pencil and paper – that doesn’t mean that the brain can’t be more than Turing-complete. There may exist more general classes of computation than those available through Turing machines. Check out “Super-recursive algorithms”, for instance.
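      The pencil-and-paper point is easy to make literal: a complete Turing machine simulator fits in a dozen lines, and every step is the kind of rote rule-following one could carry out by hand. A minimal sketch (the sample rule table, which just flips bits, is invented for illustration):

```python
# A minimal Turing machine simulator.  The sample transition table flips
# every bit on the tape and halts -- rote rule-following of exactly the
# sort one could carry out with pencil and paper.

def run(tape, table, state="start", blank="_"):
    tape, head = dict(enumerate(tape)), 0
    while state != "halt":
        symbol = tape.get(head, blank)
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run("1011", FLIP))  # 0100
```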

      • Posted May 30, 2011 at 9:05 am | Permalink

        Yeah…the problem with such super-Turing concepts is that they all depend on physical impossibilities to manifest — such as a Turing machine that can perform an infinite number of calculations in a finite amount of time with a finite amount of memory.

        One is left either with the safe dismissal of such concepts out of hand along with the perpetual motion machines or with the rejection of the Church–Turing thesis…which again carries with it the inescapable conclusion that such super-Turing computation could trivially be harnessed to violate the conservation of mass / energy.

        If it comes to a choice between a proposed theory and the first law of thermodynamics, I hope you’ll forgive me for sticking with the latter.

        Cheers,

        b&

        • Posted May 30, 2011 at 9:18 am | Permalink

          Quoting from the link above:

          In comparison with other equivalent models of computation, simple inductive Turing machines and general Turing machines give direct constructions of computing automata that are thoroughly grounded in physical machines.

          You appear to be confused with hypercomputing:

          The terms are not quite synonymous: “super-Turing computation” usually implies that the proposed model is supposed to be physically realizable, while “hypercomputation” does not.

          • Posted May 30, 2011 at 9:41 am | Permalink

            That may well be, though I rather doubt it.

            Can you offer a specific, real or hypothesized, example of something that a “super-Turing” device could compute?

            All the examples on that page are either Turing-complete or physical impossibilities — as is the case with all such examples I’ve ever encountered.

            Don’t forget: you’re proposing the invalidity of the Church-Turing thesis which not only would have profound repercussions throughout all of math and logic, but all sorts of real-world impacts as well. It would either mean that P=NP, for starters, or that one could sidestep that particular problem entirely.

            …and, we inevitably keep coming back to opening the door for exploiting such tricks to create perpetual motion machines.

            So, again: what’s something specific that you think could be computed in our universe but not by a Turing machine?

            Cheers,

            b&

            • Posted May 30, 2011 at 10:10 am | Permalink

              Can you offer a specific, real or hypothesized, example of something that a “super-Turing” device could compute?

              From the page on super-recursive algorithms:

              There are hierarchies of inductive Turing machines that can decide membership in arbitrary sets of the arithmetical hierarchy (Burgin 2005).

              Not that I understand what that is, or how useful that is…
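              For what it's worth, the flavor of that claim can be sketched. An inductive Turing machine is allowed to revise its output and is judged by the value its answers converge to; the catch is that no finite observation tells you convergence has happened. A toy illustration (not Burgin's actual construction):

```python
# Sketch of "computation in the limit": guess whether a program halts by
# running it with ever larger step budgets.  The sequence of guesses
# converges to the right answer -- but no single round tells you that
# convergence has occurred, which is why this yields no usable decision
# procedure for the halting problem.

def halts_guesses(program, max_rounds):
    guesses = []
    for budget in range(1, max_rounds + 1):
        guesses.append(program(budget))   # True iff it halted within budget
    return guesses

# a toy "program" that halts after exactly 5 steps:
halts_after_5 = lambda budget: budget >= 5

halts_guesses(halts_after_5, 8)
# [False, False, False, False, True, True, True, True]
# The guesses stabilize at the true answer, but an observer at round 4
# has no way to know whether a later round would change the guess again.
```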

              All the examples on that page are either Turing-complete or physical impossibilities

              If you’re referring to the hypercomputing page – that’s why I originally linked to a different page, and linked you to the hypercomputing page only to show that we were talking about different things.

              Don’t forget: you’re proposing the invalidity of the Church-Turing thesis

              Do you really think it’s impossible that there might exist some class of generalized Turing-machines? That might be more powerful than Turing machines, while still unable to solve things like the halting problem? I’m not an expert, but it looks to me like there are people actively researching this.

              I hadn’t heard that you could build a perpetual motion machine using a hypercomputer though. Could you give me some pointers how that would work?

              • Posted May 30, 2011 at 10:45 am | Permalink

                I can’t be bothered to pop over to the library to check out Burgin’s book, but one of the reviews linked by Wikipedia suggests I needn’t bother. Mr. Davis, the reviewer, reaches the same conclusions as I already have.

                Do you really think it’s impossible that there might exist some class of generalized Turing-machines? That might be more powerful than Turing machines, while still unable to solve things like the halting problem?

                Yes, for a simple reason. Turing machines are limited either by that which is logically incalculable (such as the Halting Problem) or by that which is physically incalculable (because it requires more computational resources than exist in the universe).

                I’m not an expert, but it looks to me like there are people actively researching this.

                So? There’re people actively researching zero-point energy, time travel, and the proper construction techniques for the Ark.

                I hadn’t heard that you could build a perpetual motion machine using a hypercomputer though. Could you give me some pointers how that would work?

                My favorite example involves Maxwell’s Demon. Maxwell showed that the observation / computation cycle must consume energy or else his Demon could endlessly separate hot and cold gas molecules. As we all should know, it’s trivial to generate power from a thermal gradient; if the Demon existed and used less energy than it created with its sorting work, you’ve got a perpetual motion machine.

                The particulars would depend on the nature of the alleged hypercomputer, but “all” you’d have to do is find some similar way of analyzing something chaotic in a way that lets you exploit the chaos in order to organize it in a way that couldn’t be done classically. Maybe that means predicting wave motions in a closed pool; maybe it means manipulating the stock market; maybe it means exploiting quantum fluctuations to extract zero-point energy. Regardless, your hypercomputer is doing more work per unit of energy than a Turing-equivalent machine is capable of, and turning that into “free” energy becomes a matter of engineering.
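                The Demon scenario sketches easily too (the numbers and setup below are invented for illustration): start with molecules at one temperature, let a demon send fast ones right and slow ones left, and a thermal gradient appears from nothing but sorting:

```python
# Toy Maxwell's Demon: molecules start mixed; the demon lets fast ones
# through to the right chamber and slow ones to the left.  The resulting
# temperature difference could drive an engine -- which is why the demon's
# observation/computation must itself cost energy.

import random

random.seed(42)
speeds = [random.gauss(0.0, 1.0) ** 2 for _ in range(10_000)]  # ~kinetic energies
median = sorted(speeds)[len(speeds) // 2]

left  = [s for s in speeds if s <= median]   # demon sends slow molecules left
right = [s for s in speeds if s > median]    # and fast molecules right

temp = lambda chamber: sum(chamber) / len(chamber)  # mean kinetic energy
temp(right) > temp(left)   # True: a gradient, conjured by sorting alone
```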

                Cheers,

                b&

              • Posted May 30, 2011 at 3:05 pm | Permalink

                Yes, for a simple reason. Turing machines are limited either by that which is logically incalculable

                Then how do you define “calculable”? Because if you say “that which can be calculated by a Turing machine”, you’re running into a little problem called circular reasoning.

              • Posted May 30, 2011 at 3:15 pm | Permalink

                Because if you say “that which can be calculated by a Turing machine”, you’re running into a little problem called circular reasoning.

                Well, first of all, every formal method of computation and analysis to date has been proven to be logically reducible to a Turing machine.

                So, I’ll admit that, at the very least, I’m using a bit of linguistic shorthand.

                …which just means that I’ll turn it back around to you.

                Including the examples on the Wikipedia pages you’ve cited, I’ve yet to encounter an example of something that could be computed at all that a Turing machine can’t compute.

                Can you offer an example and / or a definition of calculation that is possible but not with a Turing machine?

                If your example is one of the ones on Wikipedia, can you explain what it is that the example does that a Turing machine can’t, as well as how that particular calculation is possible at all?

                I’ll give you a hint: there’s a Fields Medal in it for you if you can.

                Cheers,

                b&

              • Posted May 31, 2011 at 2:14 am | Permalink

                I’ll give you a hint: there’s a Fields Medal in it for you if you can.

                No I can’t. But there’d also be a Fields Medal in it if you could prove that the Church-Turing thesis holds. It’s generally thought it does (just like most people think P != NP) but we don’t know for sure yet.

              • Posted May 31, 2011 at 2:18 am | Permalink

                I’ll give you a hint: there’s a Fields Medal in it for you if you can.

                No I can’t, as I had already admitted. But there’d also be a Fields Medal in it if you could prove that the Church-Turing thesis holds. It’s generally thought it does (just like most people think P != NP) but we don’t know for sure yet.

            • Posted May 30, 2011 at 10:16 am | Permalink

              Also please note that I’m not claiming that mind requires anything super-Turing to work, let alone hypercomputing or anything supernatural. I just don’t think it’s a settled question yet whether Turing-completeness is the end-all-be-all of computability.

              • Posted May 30, 2011 at 10:58 am | Permalink

                I just don’t think it’s a settled question yet whether Turing-completeness is the end-all-be-all of computability.

                It might be that the deadbolt hasn’t been thrown yet, but the door’s been slammed shut, latched, the handle lock has been turned and the key removed.

                About the best one could hope for at this point would be if we were in a Matrix-style simulation and if there were some way to access the computer that’s doing the simulation (or even other “outside” computers the simulator can communicate with). Even then it should be apparent that all we’re doing is broadening our horizons and not escaping any fundamental logical limitations of reality.

                It’d be rather like “solving” our current petroleum crisis by mining the Jovian atmosphere.

                Cheers,

                b&

    • Posted May 30, 2011 at 3:44 pm | Permalink

      I’ve done a bit of work on the merits (and lack of same) of hypercomputing/super-Turing computation. I’ve never seen it claimed (much less argued) that a hypercomputer would be a perpetual motion machine. Care to provide a reference? I’m curious …

      • Posted May 30, 2011 at 6:16 pm | Permalink

        Keith, it’s buried in the knock-down, drag-out argument above, but the basic idea is that something which can beat a Turing machine could also serve as Maxwell’s Demon. The particular system to game would depend on the nature of the ultramegasupracomputer, of course; it needn’t necessarily be a matter of separating hot and cold gaseous particles.

        Cheers,

        b&

        • Posted May 31, 2011 at 2:22 am | Permalink

          Doesn’t that assume that a machine that can beat a Turing machine uses less energy than it generates by playing Maxwell’s demon? I don’t see why it has to – and in fact, many hypercomputing concepts appear to fail precisely because they require unphysical amounts of energy.

          • Posted May 31, 2011 at 4:49 am | Permalink

            I don’t get it either.

            A perfect analogue machine (e.g. a Siegelmann-style neural network, with arbitrary real-valued weights) is super-Turing, but is unrealistic because of *that* aspect of it. To pick another example, a “Zeus machine” presumably needs an infinite amount of energy to operate (though I’m not sure that’s true: it might be that its energy usage would converge).

            • Posted May 31, 2011 at 5:40 am | Permalink

              If you could actually construct and operate a machine that required an infinite amount of energy, you would — by definition — have access to an infinite amount of energy.

              You could then use a surplus of your infinite energy supply to power the perpetual motion machine you’d need to generate the energy in the first place.

              I’m not enough of a theoretical physicist to think of a way off the top of my head of exploiting a system that transcends Planck limitations to create a perpetual motion machine, but I don’t think it’d be too hard for an expert in the system to do so.

              Cheers,

              b&

              • Posted June 2, 2011 at 3:08 pm | Permalink

                But why would a hypercomputer require an infinite amount of energy? What’s the connection?

  9. Posted May 30, 2011 at 7:36 am | Permalink

    When I read “brains and skunk butts” I thought that you were going off on Sarah Palin again. 🙂

  10. gc
    Posted May 30, 2011 at 7:37 am | Permalink

    The human mind is not simply the brain; it is brain and body. Without the somatosensory systems, emotions, and feelings, the self, rational thinking, and consciousness do not function.

    There is a “ghost in the machine” but it is not a real ghost, it is purely physical processes operating with extremely complex rules of natural chemical and electro-chemical actions and under the “guiding hand” of evolution.

    Read “Self Comes to Mind”, Antonio Damasio.

    • AT
      Posted May 30, 2011 at 8:19 am | Permalink

      good point!

  11. Posted May 30, 2011 at 7:40 am | Permalink

    The brain-computer metaphor is, as we have seen, a very poor one;

    So according to the IDists, biological mechanisms are exactly as similar to human-built machines as is convenient to their argument, and exactly when it is convenient, but neither more nor otherwise than that.

  12. Ron McLaughin
    Posted May 30, 2011 at 8:12 am | Permalink

    My brain reacts to Torley’s opinions with profound fear. This is because I believe that those opinions are very much like the opinions of a significant majority of people who make decisions as holders of political office and positions of power in government agencies. Turns out that some human minds are much like skunk butts in their ability to reason.

  13. Gayle Stone
    Posted May 30, 2011 at 9:07 am | Permalink

    Sexual selection, in the words of Darwin, ‘has been most efficient’ in producing humans with different features, skin pigmentation and brains in only 160,000 years since Mitochondrial Eve and 60,000 from Y-Chromosomal Adam. This is the shortest speck of time, on the largest scale that can be illustrated, compared to the rest of the time it took Natural Selection to get us started. Yes, it has been a very rapid change for humans and other sexually reproducing organisms. These “wishful” humans should get back to basics instead of dreaming up supernatural ideas!

  14. Drosera
    Posted May 30, 2011 at 9:23 am | Permalink

    Hyperfast evolution of the central nervous system probably just indicates the presence of a steep gradient in the fitness landscape. It means that there was strong selection towards higher positions on the slope leading to increased brain size and higher intelligence. Articial selection has proved that such changes can go fast. Once the conditions were such that increased intelligence would be extremely beneficial, our furry ancestors rapidly evolved into more and more intelligent ape-men. It’s a pity that people like Torley represent a downhill regression.

    • Drosera
      Posted May 30, 2011 at 9:26 am | Permalink

      Articial => Artificial

    • Gregory Kusnick
      Posted May 30, 2011 at 10:48 am | Permalink

      A steep gradient also means that beneficial mutations are not all that rare. Any random change (in the systems undergoing rapid adaptation) is as likely to take you higher as lower. It’s only near the adaptive peak that most mutations lead downhill.
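      That claim about gradients checks out in even a one-dimensional toy model (the fitness function and mutation size below are arbitrary choices): far from the peak, about half of random fixed-size mutations go uphill; within a step-length of the peak, nearly all overshoot and land lower:

```python
# Fraction of random mutations that increase fitness, far from vs. near
# an adaptive peak.  Fitness = -x**2 (peak at x = 0); mutations are
# random steps of size up to 1 in either direction.

import random

random.seed(1)
fitness = lambda x: -x * x

def beneficial_fraction(x, trials=100_000):
    wins = 0
    for _ in range(trials):
        step = random.uniform(-1.0, 1.0)
        if fitness(x + step) > fitness(x):
            wins += 1
    return wins / trials

beneficial_fraction(10.0)   # ~0.5  : on a steep slope, half of all steps go up
beneficial_fraction(0.05)   # ~0.05 : near the peak, almost every step goes down
```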

  15. Dominic
    Posted May 30, 2011 at 9:43 am | Permalink

    Arrrgh! He says so many ridiculous things I cannot comment on them all.
    Just because present computers are not self-repairing (or have redundancy?) does not mean we cannot envisage future computers that will be. The nature of flesh means that when the power (food/oxygen) is turned off the body dies & cannot be revived. Clearly computers are made of different material that does not need power all the time, but that does not invalidate Hawking’s comment.
    OK, so the human mind is not the brain – but the electrical activity in the brain IS the mind.

    Prof, I cannot be the only one who is eager to hear about the pet skunk! Do they make good pets? Do they smell sweeter than undergraduate students? Please do a post on it sometime!

  16. Peter R
    Posted May 30, 2011 at 9:56 am | Permalink

    He has obviously never heard of analogue computers :

    http://en.wikipedia.org/wiki/Analog_computer#Modern_era

  17. Scott near Berkeley
    Posted May 30, 2011 at 10:10 am | Permalink

    Hmmmmm, “sodden thought” here, folks. Instead of referring to those who believe in a god and afterlife as “theists” or “believers”, a more useful term is “religionists”. There is no doubt that people such as Stalin, Hitler, and Mao were indeed “Religionists”, as they believed in their own divinity and infallibility…the punishment for apostasy in the societies of Hitler, et al, was death (as in Islam). Heck, Saddam Hussein as much as admitted this to be his “truth”.

    So, IMHO, promote the term “Religionists” for deity-chasers. To the question, “Do you believe in God (Jebus, Allah)?” it would be more accurate to answer, “No, I am -not- a Religionist. I am a Naturalist; I believe in the Natural World.”

    • AT
      Posted May 30, 2011 at 1:03 pm | Permalink

      one can also be clear about the concepts of “belief” and “faith” as they apply to “science”

      there is a way to argue that “science” is “belief-free” or “faith-free” whatever you chose

      the important point here is that a true scientist is aware of the tendency of his brain/mind to form “beliefs” as shortcuts to save time/resources for repetitive computations

      as the body of scientific knowledge constantly gets bigger it is important to develop a habit of “cleaning the house”, that is, to re-evaluate our “beliefs” in view of new knowledge

      the lack of this “habit to clean the house” accounts for mankind not addressing the “real problems” – we are constantly coming back into “goo of institutionalized ignorance”

      and because scientists are humans first they will not sacrifice their social status by promoting a “purely scientific worldview” on the human condition – they will pick and choose only those issues/positions that do not threaten their “good life” (do not disrupt the status quo)

      of course when individual viability is severely compromised for everyone (once the overpopulation/overconsumption reaches its limit) the scientists will have no choice but to “clean the house of their thinking”

      so inevitably religion (or faith/belief based human condition) will die out and only phenomenological discourse will remain – but the resources of the planet at that point of time will be far less than what we have now

      😦

  18. Xenithrys
    Posted May 30, 2011 at 11:14 am | Permalink

    “… our job as human beings is to discover them and to follow them, insofar as they apply to our own lives.”

    And when we do, some jerk who has taken no part in the discovering will decide on the basis of scripture whether we’re right or wrong.

  19. Posted May 30, 2011 at 3:46 pm | Permalink

    I’ve said for a while that the *real* debate isn’t evolution. It is “naturalized” responsibility, mind, action, etc. Philosophers who ignore the neuroscience, sociology, etc. which is still being developed here run the risk of being useless or worse. I suspect this is another area (well, we see it already) where non-scientific philosophy will be ready to lend a hand.

  20. Sili
    Posted May 30, 2011 at 4:03 pm | Permalink

    We should remember that the human brain is easily the most complex machine known to exist in the universe.

    Known, yes.

    For all we know Vegan and Betelgeuzean brains make ours pale in comparison.

    I’m sure that’ll make Whatshisname kneel before Zod when that is demonstrated to him. Zod obviously outdoes his God immediately – not least by actually existing.

  21. Posted May 30, 2011 at 4:31 pm | Permalink

    With one data point I can say conclusively that the Cat Brain is the most complex machine in the Known Universe because it has the ability to control the Human Brain.

    ‘Scuse. Kink wants his Kat Snax. Coming, Kink …

  22. Torbjörn Larsson, OM
    Posted May 30, 2011 at 6:41 pm | Permalink

    Feser points out that our mental acts – especially our thoughts – typically possess an inherent meaning, which lies beyond themselves. However, brain processes cannot possess this kind of meaning, because physical states of affairs have no inherent meaning as such.

    Torley via Feser may mean that conscious thoughts map to symbols. Disregarding the problem of whether conscious thought is merely an attempt to simulate our actions backwards and forwards (as animal experiments seem to say), which then follows from unconscious thought, or whether we are really choosing consciously, this is no problem in neuroscience.

    As the same Chatham notes, neural network models of our prefrontal cortex self-organize into symbolic processing.

    This explains how we can learn without the problem of over-training that neural networks and computer algorithms usually suffer from. Symbolic processing has a fitness advantage.

    Usually it is hard to understand how some function emerges from other substrate functions. However, if the same function already appears in the substrate, only a correlative coupling remains to be explained. Specifically here, basic functions of the parts of the brain that handle vision and sound use symbolism, which then easily couples to other functions like language and writing as they evolve.

    As a matter of fact, Feser using his own argument conclude from modern neuroscience that “brain processes … possess this kind of meaning”, that “physical states of affairs have … inherent meaning as such”. He just killed explicit dualism and, specifically, the need for implicit supernaturalism.

    And they wonder why we call them IDiots?

    • Torbjörn Larsson, OM
      Posted May 30, 2011 at 6:47 pm | Permalink

      I should add that I guess having a fitness advantage doesn’t explain why symbolic processing evolved, merely that it isn’t surprising to observe.

    • Torbjörn Larsson, OM
      Posted May 30, 2011 at 6:50 pm | Permalink

      Oops. “Feser using his own argument conclude” – Feser using his own argument _should_ conclude.

    • greg byshenk
      Posted May 31, 2011 at 2:59 am | Permalink

      Piggybacking here onto Torbjörn’s comment, I’d like to focus a bit more on that same bit of Torley’s text:

      Feser points out that our mental acts – especially our thoughts – typically possess an inherent meaning, which lies beyond themselves.

      He may be right, here, so long as one resists the supernaturalist’s desire for magic. One might well suggest that what makes something a ‘thought’ is that it has “meaning”, and thus ‘meaning’ is “inherent” in thought. But one cannot legitimately leap from there to the reification of ‘inherent meaning’ as some sort of thing.

      The problem with doing so is that ‘meaning’ is fundamentally relational. A thought (or concept, or any kind of sign — with ‘sign’ here being used broadly as “anything that signifies, or can have meaning”) has “meaning” only in relation to other thoughts. If we ask what some sign ‘means’, then it can be cashed out only in terms of other signs, and its relation to them.

      An isolated mark somewhere could have all manner of different meanings: a letter, a number, a graph, etc. — but only in relation to other things. In isolation it has no “inherent” meaning. Similarly, if we were to attempt to imagine an isolated thought unrelated to any other, what would its “meaning” be? How could it have any at all?

      And indeed Torley seems to recognize this, if less than fully, in noting that the meaning “lies beyond” the thought.

      But once we see this, I submit that the entire argument collapses. For, if ‘meaning’ is relation, then there is nothing necessarily incoherent in supposing that meaning inheres in the relation of neural states or other ‘material’ things.

      Of course, this does not prove materialism (eliminative or otherwise) to be true, as it doesn’t prove that meaning actually is such a relation. But it does mean that Torley’s disproof fails.
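The relational picture of meaning in the comment above can be caricatured in a few lines. This is a toy illustration, not a theory of semantics: a sign's "meaning" is taken to be nothing over and above its relations to other signs, so an isolated mark has none.

```python
# A sign's "meaning" modeled purely as its relations to other signs.
relations = {
    "bachelor": {"unmarried", "man"},
    "unmarried": {"not", "married"},
    "man": {"adult", "male"},
    "mark": set(),  # an isolated mark that stands in no relations
}

def meaning(sign):
    """Cash out a sign's 'meaning' as the set of signs it relates to."""
    return relations.get(sign, set())

# A related sign can be cashed out in terms of other signs...
assert meaning("bachelor") == {"unmarried", "man"}

# ...while an isolated mark has no "inherent" meaning at all.
assert meaning("mark") == set()
```

On this picture there is nothing incoherent about meaning inhering in relations among physical states; the dictionary could as well map neural states as words.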

    • Al West
      Posted May 31, 2011 at 3:24 am | Permalink

      “Symbolic” processing is problematic, because the things we call symbols in ordinary language (a sign of the cross, a star of David, or, to use an example given by Dan Sperber in his 1975 book, “Rethinking Symbolism”, the placing of butter on the heads of prominent individuals in certain rituals among the Dorze of Ethiopia) are not the things we are talking about when we talk about “symbolic” processing or the symbolic capacity of language. Symbols like that are about knowledge and the evocation of feelings, not meaning in the sense that language has, and I’d recommend Sperber’s book for a detailed account of the reasons for this (it’s a shame it’s not much more famous than it is). Take the following sentences:
      a) The lion roared.
      b) The lion emitted its characteristic call.
      c) The lion RRROARED!

      In terms of semantics and reference to the outside world, no extra information is added or taken away in any of these statements: each one portrays a lion roaring. What is changed is the evocation of the situation, not the understanding of the situation, which in each case is the same.
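The three lion sentences can be put schematically: identical truth-conditional content, three distinct surface evocations. A toy sketch, where the `("roar", "lion")` predicate-argument representation is invented for the example:

```python
# Three surface forms mapped to one invented semantic representation.
semantic_content = {
    "The lion roared.": ("roar", "lion"),
    "The lion emitted its characteristic call.": ("roar", "lion"),
    "The lion RRROARED!": ("roar", "lion"),
}

# All three sentences cash out to the same predicate-argument content...
assert len(set(semantic_content.values())) == 1

# ...even though the surface forms (the evocations) are all distinct.
assert len(set(semantic_content.keys())) == 3
```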

      Basically, when we talk about symbolic capacities and ‘meaning’, we have to be very clear about what we’re talking about. Feser, and Torley therefore, and actually most philosophers and even psychologists, are often quite confused about what, exactly, is meant by the word ‘meaning’.

      Anyway, it is the human capacity for semantic memory that allows for this abstraction from pragmatic context and creates what we call ‘meaning’, and it is not inherent in anything except the information-processing capacities of normal human brains. All that is required to show Torley wrong is to link up the neuroscience end with the cognitive-psychology end, which might be easier than we think.

  23. Diane G.
    Posted May 30, 2011 at 7:55 pm | Permalink

    (subscribing)

  24. Posted May 31, 2011 at 1:48 am | Permalink

    “Just because … is not evidence …”

    Oh, that poor, defenseless grammar. D`:

    His second argument looks weird. The others seem standard fare. “Is too!”
    But there’s something special about the second one.

  25. Kevin
    Posted May 31, 2011 at 11:32 am | Permalink

    Oh for crying out loud, people…

    Hawking’s analogy is perfectly apt, and has nothing to do with whether or not the brain is digital or analog, the Turing test, or any other such issue.

    Brains and computers are natural. They rely on an energy source to operate. No energy source, no operation.

    Of course, brains have a disadvantage that once the energy source is turned off, ALL of the data within it are discarded. Computers can be plugged back in — and external back-ups can be provided as well to avoid loss of data.

    So, to a large extent, computers are better than brains. Because with brains, off is not just off, it’s permanently and irrevocably off. And there’s no external back-up.
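Kevin's contrast is, at bottom, the difference between volatile in-memory state and persistent external storage. A minimal sketch in Python (the file name and data are invented for the example):

```python
import json
import os
import tempfile

# Volatile state: like a brain's contents, it exists only while "powered".
memories = {"language": "English", "skills": ["reading", "arithmetic"]}

# External back-up, which brains lack: serialize the state to disk.
backup_path = os.path.join(tempfile.mkdtemp(), "backup.json")
with open(backup_path, "w") as f:
    json.dump(memories, f)

# Simulate the power going off: the in-memory copy is discarded.
del memories

# "Plug the computer back in" and restore from the external back-up.
with open(backup_path) as f:
    restored = json.load(f)
```

For the brain there is no `backup.json`: when the in-memory copy is discarded, nothing remains to restore from.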

    How tough is that?

    • Diane G.
      Posted June 2, 2011 at 12:14 am | Permalink

      Thank you! My sentiments exactly.


One Trackback/Pingback

  1. […] Butts and brains Jerry Coyne happened to mention that the skunk’s odor defense mechanism (a nozzle near its anus that emits the stinky substance) and the human brain are products of […]
