Matthew Cobb and others on BBC: “Do insects feel pain?”

Have a listen to this 26-minute BBC show (click on screenshot to go to the show, which should be accessible worldwide). It’s the second part of a show about whether it’s moral to kill or hurt insects.

A personal note: I avoid killing insects, or any animal, whenever possible. I may swat a mosquito, but if I see a millipede, an earwig, or anything else in my home or lab, I take it outside and release it. Yes, I killed millions of flies doing genetics research over my career, but I always killed them humanely, first putting them to sleep. (When I was doing undergraduate research on flies at William & Mary, I would take my spare flies to the roof of the biology building and let them go. I was finally caught doing this by my advisor, who chewed me out for polluting the natural gene pool—of the cosmopolitan species Drosophila melanogaster!) But yes, I do eat meat, and am aware of this hypocrisy, so there’s no need to remind me of it.

This show features a number of scientists, including our own Matthew Cobb, weighing in on the issue of whether insects feel pain. It’s not just pain alone, though—it’s a matter of sentience and consciousness, of whether in some sense insects value their lives. Of course we don’t know what it is to be a fly or a bee, but some scientists and philosophers urge caution because of the possible consciousness (and pain “qualia”) of insects. I always err on the side of caution, and my grounds are these. Both insects and mammals like us have evolved the ability to avoid stimuli that might threaten our survival and reproduction. Pain is simply an evolved sensation that tells mammals to get away from harmful stimuli. If pain weren’t, well, “painful,” we wouldn’t be so quick to avoid it. Thus it’s at least possible that insects also feel “pain” in the sense that they don’t like sensations that are harmful. (Of course that doesn’t mean that their aversion involves anything like the pain we feel when we sit on a thumbtack.)

Of course it’s possible that the whole aversion behavior in insects and so-called “lower” animals comes through a system of evolved automatic responses that never passes through consciousness or produces qualia. But it’s possible that it does, so, like many other scientists (see below), I err on the side of caution. After all, science progresses: one example is recent evidence that fish can feel pain, after years in which people thought they didn’t. With this increasing awareness of possible animal sentience come stricter regulations on how scientists can treat their research animals.

Anyway, have a listen; it’s a good show, with thoughtful opinions on both sides. Matthew, as always, is eloquent. I asked him if he had a quote about the show for readers here, and he sent this:

I think it’s a very thought-provoking programme by Adam Hart, and the producer, Andrew Luck-Baker. My take? We don’t know the answer, so be as nice as you can to insects, just in case. Which, in my experience, is how most scientists act towards their animals. But maybe readers think we are wasting our time doing this?
h/t: Christopher

124 Comments

  1. ThyroidPlanet
    Posted June 20, 2018 at 9:56 am | Permalink

    I’ll send this to someone who just had their face puff up from a wasp sting.

    As for the meat thing : Beyond Meat has yummy sausages and burgers.

    • Posted June 20, 2018 at 11:39 am | Permalink

      I guess you’re saying that if a wasp stings you, you have the right to kill it, even if it doesn’t sting again. Do you think that if a cat claws you, you have the right to kill it, too?

      Killing like that is revenge, pure and simple.

      • Posted June 20, 2018 at 12:10 pm | Permalink

        I claim that right, whether the motivation is revenge or not. With the insect, not with the cat. Cats are not wasps.

        There is nothing wrong in drawing lines where none exist. Some lines are imposed on us by law (and that is right, so long as all get a say in how those laws are made). For all others, you may not agree on where the line ought to be drawn but I’m going to draw it where I will.

        FTR – I rescue spiders from drains, I let bugs out of the house; I don’t swat them, I’ve taught my boys to never kill anything, insects included, just because they can. But insects are not cats.

      • ThyroidPlanet
        Posted June 20, 2018 at 12:23 pm | Permalink

        The wasp that stung my friend took off. Nobody’s going to hunt it for revenge.

        But consider

        Wasps build a nest at someone’s home somewhere, possibly with children.

        I don’t know what else to do in practical terms except spray them with poison, vacuum them up, or otherwise; that is, I don’t know a humane way to move that nest without wearing beekeeper gear and spending a day walking it way out somewhere. But theoretically I think it can be done. Alternatively, one could encourage predators of wasps, if the nest were accessible.

        • nicky
          Posted June 20, 2018 at 12:36 pm | Permalink

          At present I have 6 red paper-wasp nests under my roof. Nobody ever got stung in 16 years, except the one guy who wanted to remove a nest. And yes, I have young children, they know they have to leave them alone. These wasps are the least of my concerns. I bet the spraying would do more harm to my children than the wasps ever would.

          • ThyroidPlanet
            Posted June 20, 2018 at 12:48 pm | Permalink

            Therefore….

          • Filippo
            Posted June 20, 2018 at 4:30 pm | Permalink

            In high school I was mowing around the house. I felt the strangest feeling on top of my head, which very quickly turned into quite significant pain. Apparently one or more hornets in a nest ten feet above me didn’t care for my presence. I confess to being consumed with rage. I went to the grocery store for the top-of-the-line spray, and commenced to accomplishing my revenge.

            I wonder what concern the ichneumonid wasp feels while it is accomplishing its selective paralysis of the ganglia of its prey.

            • infiniteimprobabilit
              Posted June 20, 2018 at 7:34 pm | Permalink

              That last paragraph is why I feel justified in killing any wasp I see.

              That, and the fact that yellow wasps have been declared a noxious pest in New Zealand because of the effect they have on other native insects.

              cr

        • ThyroidPlanet
          Posted June 20, 2018 at 1:18 pm | Permalink

          Other humane solutions I should have acknowledged:

          Never go near/avoid nest (for this example though, the nest would have to be on a door)

          Sell house

          Move out

          … Next owners would probably kill wasps though.

      • infiniteimprobabilit
        Posted June 20, 2018 at 7:31 pm | Permalink

        I am happy to take pre-emptive revenge on every wasp I see. Nasty vicious things.

        On the other hand, I rescue millipedes and spiders and mantises and put them outside, even earwigs. Not ants though. They get squirted.

        I also rescue huge leopard slugs that appear inside the kitchen near the door at night. They must crawl through the tiny crack under the door which I would have thought far too narrow (it’s only a couple of millimetres). But then they’re molluscs, like octopuses, which can famously squeeze through impossibly narrow gaps.

        cr

        • ThyroidPlanet
          Posted June 20, 2018 at 8:14 pm | Permalink

          I’d like to emphasize that my argument is from an interest in safety only. Imagine a little kid poking a wasp nest – they’d probably end up in the hospital. Also, roofers can’t work if there’s a nest. Etc.

          I’d like to add also that I am on record here at WEIT for killing ants to help grass grow, but I have since found better ways to help the plants without outright killing ants. The point is that discussing the topic – in as informal a setting as it was – helped me find better ways to go about solving the problem.

          Lastly, I’m saddened to hear about the strong “revenge” position. I guess it’s an age thing. I too was more hateful to stinging insects as a youth. There, I said it. I’m not a youth anymore.

          • GBJames
            Posted June 21, 2018 at 7:07 am | Permalink

            “I’m not a youth anymore.”

            It seems few of us, here, are.

            • ThyroidPlanet
              Posted June 21, 2018 at 7:10 am | Permalink

              I’m also getting older

        • Posted June 21, 2018 at 5:35 am | Permalink

          “Nasty vicious things.” Sorry – but that is a ridiculous thing to say. Vicious is a term that can only be applied to human motivations.

      • friendlypig
        Posted June 22, 2018 at 3:21 am | Permalink

        What about the billions of tons of crops that are destroyed by insects? The millions of people affected by malaria and other insect-borne diseases? I have suffered for years from hay fever; whenever I get bitten or stung during the hay fever season, my body’s response is to produce large, painful, and very itchy swellings. Sorry, Jerry et al, I do not give a damn if insects are sentient or have an IQ of 140; they’re a nuisance.

        • Jonathan Wallace
          Posted June 22, 2018 at 6:54 am | Permalink

          As E. O. Wilson has remarked, if insects were to suddenly disappear from the Earth, ecosystems would collapse into chaos. If we disappeared, the rest of life on earth would manage fine without us (give or take a very small number of species, such as head lice, that are intimately engaged with our species). Writing off all insects as a nuisance that it would be good to eliminate is seriously ill-judged, irrespective of whether they feel pain.

          • Posted June 22, 2018 at 10:19 am | Permalink

            It’s good to be at the top of the food chain but it also means we’re expendable. Life went on just fine after the dinosaurs left. I suspect it will do well after we are gone as well.

    • Posted June 21, 2018 at 5:32 am | Permalink

      Last summer I was stung on the head three times in quick succession by three wasps. I experienced a sudden sharp pain but that went after a minute & I had no swelling or reaction. The wasps were around before us & I hope will be around after us.

  2. Hempenstein
    Posted June 20, 2018 at 10:02 am | Permalink

    For me, and in distinct contrast to my pathologically arachnophobic daughter, spiders get a free pass. And I go as far as relocating earthworms when digging (both out of sympathy and to help them avoid the piggy & opportunist robins).

    Mosquitoes, yellowjackets, most ants & moths, OTOH…

    • GBJames
      Posted June 20, 2018 at 11:58 am | Permalink

      You brute… starving the poor robins!

    • David Coxill
      Posted June 20, 2018 at 12:03 pm | Permalink

      Never met a spider that would not benefit from a good squishing.
      Just joking. I wish I weren’t so terrified of them; the only ones I can bear on my hand are the little black-and-white jumping spiders.

  3. John Hamill
    Posted June 20, 2018 at 10:03 am | Permalink

    Hi Jerry,

    This is really interesting and I think the principle of caution described by you and Matthew is a good one. I’d love to get your opinion on a related question. If a robotic AI learned to “get away from harmful stimuli”, would you extend the same principle of caution to software? I don’t think it’s the case yet, but it doesn’t seem at all fanciful to imagine that we may soon have AIs that can ‘think’ in a way that may be as sophisticated as an insect can ‘think’. Perhaps before too long, you’ll extend the same ethical obligations to your smartphone as you do to earwigs?

    John.

    • Posted June 20, 2018 at 11:03 am | Permalink

      If we built AIs that we felt needed to be terminated under some conditions, we would provide a mechanism whereby it didn’t feel pain, given virtually any definition of “feel pain”, assuming we had control over its programming.

      On the other hand, ending the life of a sentient being, even if painless, ends their striving toward goals, life experience, relationships with others, etc. Eventually AIs will be complex enough for that to matter to us humans.

      Of course, in a world with infinite cloud-based backups, we can take solace in telling ourselves that no AI ever dies as long as we have them properly backed up.

      • Posted June 20, 2018 at 11:35 am | Permalink

        We may already lack control over the programming of the most complicated AIs, so we should start thinking …

        Why? because the behaviours these days are effectively learned – they are due to pattern matching and associations now …

        I for one worry about classification, even: neural nets with hidden layers that say yay or nay to something but in a completely incomprehensible way.

        • Posted June 20, 2018 at 11:44 am | Permalink

          This is incorrect. I think what you may be referring to is the fact that it is hard for humans to figure out why a neural net application works. We can train an AI to identify, say, cats vs. dogs, but that doesn’t tell us how it works. We humans learn nothing from the exercise. We still control all the AI’s programming, however. And we can pull the plug at any time.

          Some newer AI research is into neural nets where the “neurons” contain code that is evolved. In a sense, this means that the AI is writing its own code. However, this means much less than it sounds like. We have programs that generate code in many situations. There is nothing new with this. The one thing that is important to note in the current context is that no AI writes ALL its own code. These AI programs still run on normal operating systems which are programmed by humans and can be terminated in the usual way. AI programs generating their own code is just a mundane programming technique.
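To make the point about code-generating programs concrete, here is a toy sketch in Python (purely illustrative; the function name is made up and this is not how any real AI system works). A program emits new source text, yet the generated code still runs inside an interpreter we control and can terminate at any time:

```python
# Toy illustration of "a program generating code". The generated source is
# new text the author never typed, but it executes entirely within a host
# program (and operating system) written by humans.

def make_power_function(n):
    """Generate, compile, and return a function that raises x to the nth power."""
    source = f"def power(x):\n    return x ** {n}\n"
    namespace = {}
    exec(source, namespace)  # compile and run the generated source text
    return namespace["power"]

cube = make_power_function(3)
print(cube(2))  # 8
```

The host decides what gets generated and when it runs; in that narrow sense, "writing its own code" is a mundane programming technique, as the comment above says.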

          • John Hamill
            Posted June 20, 2018 at 12:06 pm | Permalink

            I’m not sure we can assume that ‘we can pull the plug at any time’. That’s what Bostrom’s book was about. I also don’t think it’s correct to say that we ‘control all the AIs programming’. Contemporary AIs are more taught than programmed. Even narrow AIs regularly learn how to do unexpected things that their programmers didn’t anticipate and couldn’t control.

            • Posted June 20, 2018 at 12:22 pm | Permalink

              These discussions always suffer from an ambiguous time frame and context and misunderstandings that result.

              No AI being created now is beyond having its plug pulled. Someday we could create an AI that actively fights against having its plug pulled. I suppose that’s possible today if we include AI-based weapons. Certainly if a missile is headed toward you, you do not have the ability to pull its plug in any real sense, unless we consider blowing it up with another missile as “pulling its plug”.

              Sure, AI programs can do unexpected things but my phone does unexpected things sometimes. That has nothing to do with our ability to pull its plug. We still have ultimate control. If an AI creates things or does something unexpected, it is because we programmed it to do so.

              I believe that we will eventually create powerful AIs, perhaps even ones that can change their own programming as they do in the movies. We are a long way from that now.

              • John Hamill
                Posted June 20, 2018 at 12:32 pm | Permalink

                Nobody programmed the Facebook AI to make up its own non-human language … and yet:

                https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/

                I don’t think it’s true to say that an AI only does what it is programmed to do. It is not programmed. It learns. If AlphaGo could only make moves that a human programmed it to make, then in principle, it could not be a better player than the best human programmer. In reality, it learned to play better than any human programmer can play. It learned to play in an improved way that its programmers didn’t program.

              • Posted June 20, 2018 at 12:47 pm | Permalink

                I didn’t at all say “an AI only does what it is programmed to do”. However, that doesn’t mean an AI can do anything it wants to do. People have created self-modifying programs practically since computers were invented. It’s nothing new. People have created programs that do things they never intended them to do. That’s nothing new either.

                AlphaGo can only improve its ability to play Go. It can’t do anything else. While AlphaGo is an amazing achievement, its creators have made dubious statements that reporters have run with. Gary Marcus has written an interesting paper on the limitations of AlphaGo and their approach. Even the claim that it learned how to play Go by itself is misleading. See https://arxiv.org/pdf/1801.05667:

                “Thus, rather being an illustration of the power of tabula rasa learning, AlphaGo is actually an illustration of the opposite: of the power of building in the right stuff to begin with. With the right initial algorithms and knowledge, complex problems are learnable (or learnable given some sort of real-world constraints on compute and data). Without the the right initial algorithms, representations and knowledge, many problems remain out of reach. Convolution is the prior that has made the field of deep learning work; tree search has been vital for game playing. AlphaZero has combined the two.”

              • John Hamill
                Posted June 20, 2018 at 12:54 pm | Permalink

                I believe what you wrote was this:

                “If an AI creates things or does something unexpected, it is because we programmed it to do so.”

                I don’t believe that is accurate. If all of the moves made by AlphaGo are moves that it was programmed to do, how does it beat the best human players? Are you saying the programmers are the greatest Go players in the world and are providing this knowledge to the AI?

              • Posted June 20, 2018 at 2:46 pm | Permalink

                I guess what I wrote was not very clear. What I meant by that is that the creativity and ability to do something unexpected is very restricted. For example, AlphaGo can only surprise us with good Go moves. I suppose it might also surprise its programmers by crashing but nothing more than that.

                A somewhat simplistic analogy is that if I program an algorithm to take square roots, it doesn’t mean that I know all the square roots it can output. As with AlphaGo, you have to run the program to tell what it is going to do. The difference is merely a matter of degree.
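The square-root analogy can be made concrete with a minimal Python sketch (Newton’s method; the function name and tolerance are my own, purely illustrative). The author writes only the update rule; the concrete outputs appear only when the program runs:

```python
def sqrt_newton(x, tol=1e-12):
    """Approximate the square root of x by Newton's method.

    The programmer knows the *rule* (repeatedly average the guess with
    x/guess) but not the concrete outputs; those only emerge at run time.
    """
    if x < 0:
        raise ValueError("negative input")
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0
    while abs(guess * guess - x) > tol * x:
        guess = 0.5 * (guess + x / guess)  # Newton update step
    return guess

# Only by running it do we learn what it outputs for a given input.
print(sqrt_newton(2.0))  # close to 1.41421356...
```

As the analogy says, the difference between this and a program like AlphaGo is a matter of degree: both can only be predicted by being run, but both stay inside the space their designers laid out.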

              • Adam M.
                Posted June 20, 2018 at 2:15 pm | Permalink

                I think what Paul is saying is that if they learn it’s because they were programmed to learn, and the possible ways in which they can learn were specified by the designers.
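A minimal sketch of that point in Python (a toy perceptron, hypothetical and unrelated to AlphaGo): the designer fixes the learning rule, but the final weights, and hence the behaviour, come from the data rather than from the programmer’s own skill at the task:

```python
# Toy sketch: the programmer specifies only the *update rule*; nobody
# hand-codes the final weights. The learned behaviour can still never
# leave the space the rule defines (a linear threshold on two inputs).

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a linearly separable two-input task."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred
            w[0] += lr * err * x1  # fixed update rule, chosen by the designer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach it logical AND from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in data])  # [0, 0, 0, 1]
```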

              • Posted June 20, 2018 at 2:51 pm | Permalink

                Exactly. AI programs can certainly still surprise us but they can’t operate outside the space in which they were designed to operate. Just as my square root program can only output a number, not “hello world”.

              • Posted June 20, 2018 at 3:07 pm | Permalink

                Yes, but much of the focus on AI is “general intelligence” as opposed to something specifically targeted like driving a car or interpreting an X-Ray.

                Such an AI would be as constrained by the designs of its engineers as you yourself are by the hopes and dreams of your parents, and for the same reasons.

                I suspect we’re still quite a long ways from that form of “strong” AI. Long before then, though, AI as we’re already familiar with it will have thoroughly disrupted our society. Commercial driving as a profession likely won’t exist in a decade, and certainly not in a quarter century — and it’ll take out all the associated supporting roles, from dispatchers to the hotels (and everything else) at truck stops. And there won’t be any jobs for them, because even the doctors and lawyers are going to be out of work alongside them at the soup line. Hell, you won’t even be able to get a job as a strawberry picker…

                …all because of AI.

                This is a bad thing if you define a person’s worth by take-home pay. This is a good thing if you’re willing to distribute the fruits of society’s labors based on some model other than the scarcity of human labor. We’re no longer in an “all hands on deck” crisis mode for mere sustenance; we’ve got far more people than there are jobs that need to be done — and, thanks to robots and AI, the ratio of work done per person alive is only going to continue to skyrocket. So why not let all people live as they wish, whether or not that includes doing something an insanely-wealthy individual or corporation is willing to pay somebody to do?

                Cheers,

                b&

                >

              • Posted June 20, 2018 at 3:51 pm | Permalink

                Perhaps you meant to say that there’s LESS focus on AGI (Artificial General Intelligence) than on the specific targeted kind. The people working on AGI had to invent this new acronym because most of the work on AI lately is of the targeted commercial variety involving neural nets used in specific applications.

                I agree with you that AI is going to take over many jobs formerly done by humans though I doubt that lawyers and doctors will be put out of work soon. What they do has an unavoidable human component. They will certainly get some help from AI.

                Our economic system is going to need to make some changes, but such changes have happened in the past and the world has adjusted. The biggest obstacle to solving these problems is leaders who are still fighting the battles that were important 50 years ago. We have a president who is touting clean coal, for example. China will likely adjust more quickly.

              • Posted June 20, 2018 at 4:44 pm | Permalink

                Doctors and lawyers are already amongst the biggest job losses.

                Lawyers, especially. Law firms used to be huge by modern standards, with insane numbers of people doing “document research” that now is little more than typing a term in a search box. Just as you used to need an accountant to do your taxes but now can do it all yourself online with TurboTax, most routine legal work can similarly be done cheaply online. Even the high-powered stuff isn’t immune, as there’s good reason to suspect that IBM’s Watson can already give the average lawyer a run for the money.

                Radiologists who spent six-figure sums on a decade of education don’t do as well at reading diagnostic imaging as modern AI. They miss stuff the AI finds and consider significant stuff the AI correctly dismisses as insignificant.

                Many people place high value on the “personal touch” that a human lawyer or doctor can give. But I bet most people (and virtually all companies) would pick an impersonal computer lawyer that wins in court over a sympathetic human that loses to the computer, and basically everybody is going to pick the computer that correctly diagnoses cancer over the friendly doctor who gets it wrong.

                And we haven’t even begun to touch on the overwhelming majority of jobs, most of which are already considered low-skilled jobs of last resort. Never mind that manufacturing plants employ only a tiny fraction of the workers they did a generation ago, that the days of Henry Ford’s factory floor assembly line teeming with people are long gone. Boston Dynamics already has prototype robots that can replace warehouse pickers, and there are already burger joints using robots to flip burgers. Most retail locations have already automated away at least a significant fraction of the cashiers in favor of self-serve checkouts. Hell, even construction…many homes are built from factory-made components that are assembled rapidly on site by skeleton crews.

                Programmers, too…any programmer even vaguely familiar with history recognizes that her own personal productivity is many orders of magnitude greater today than it would have been in the punch card era, and that every software update that makes her own job easier is one that reduces the need for programmers in the first place. Far too many people think out-of-work truck drivers are supposed to re-train themselves as Web programmers or the like…but, thanks to WordPress, Web programming is mostly unskilled labor that doesn’t call for a dedicated employee any more — the now-streamlined marketing team doesn’t need a programming team any more, and can probably do without the design team.

                Basically, if you think there’s a job, any job, that people are currently paid for, that can be done better by a human than by a robot from a generation in the future (if not one already here today), you’re delusional. We already know that humans are far inferior to machines in many ways, and it’s now unarguable that there is simply no domain in which humans are, in principle, superior to machines.

                If you really want a taste of just how deep this rabbit hole goes, consider how many Millennials have closer relationships with their friends’ in-game avatars than they do with the friends themselves….

                b&

                >

              • Posted June 20, 2018 at 4:57 pm | Permalink

                A lot of what you mention here is not AI but computerization. Of course, “AI” today is a marketing term so one can call practically anything AI.

                What you say about programmers is just plain wrong. Yes, the productivity of a modern programmer is way higher than it was in the past, but this is due to faster computers and improved tools, none of which have much to do with AI. I entered the industry in 1974 and just retired, so I definitely have been there for all the changes. The demand for good software engineers has pretty much always been high, and right now it is higher than ever.

                Certain kinds of programming task have always been subsumed by automation. It’s really the way the industry has always worked. At first, a task must be custom programmed. Later, it is handled by a program that takes input from a form filled out by the customer, with no programmer involved. However, overall the number of programming jobs keeps rising. You are correct that these programming jobs are not going to be filled by retraining truck drivers.

              • John Hamill
                Posted June 20, 2018 at 5:02 pm | Permalink

                AlphaGo can “only make good Go moves” or else crash. When you or I are playing Go, we can also “only make good Go moves” or else crash.

                Of course AIs only operate within their own limitations, but that’s equivalent to saying that AIs are not magical. The point is that the capabilities of a neural network can be beyond those of their programmers, unlike an expert system.

                When AlphaGo makes super-human Go moves (which it does) that specific behaviour has not been programmed by a human. Obviously. By definition. AIs can learn to be super-human and need not be restricted to only the knowledge and capability of their programmers.

              • Posted June 20, 2018 at 5:16 pm | Permalink

                I think you are missing my point. Everything operates within its limitations. That’s built into the definition of “limitation”. My point is that AlphaGo is not designed to do anything other than play Go. It is not an artificial general intelligence and was not designed to be.

                A neural network can be trained to do a certain task faster and more accurately than its programmers or, in fact, any human. However, that is not true of every task. There are still many tasks that we have no idea how to get a neural net to do as well as a human. We also don’t have a general way to combine these neural net task engines together, except by having humans program it.

              • John Hamill
                Posted June 20, 2018 at 5:40 pm | Permalink

                Nobody has argued that a super-human AGI exists. The argument was that …

                “… it doesn’t seem at all fanciful to imagine that we may soon have AIs that can ‘think’ in a way that may be as sophisticated as an insect can ‘think’.”

                I don’t see why it is at all relevant to anything in this thread, to point out that a super-human AGI doesn’t yet exist.

              • Posted June 20, 2018 at 6:13 pm | Permalink

                These threads often have a life of their own. One responds to what the last person said and it becomes a game of telephone. I do maintain that my first comment was at least relevant but that is obviously in the eye of the beholder.

                Reconsidering your original statement, I do not think it is at all obvious that we’ll soon have an AGI as sophisticated as an insect brain. Of course, that depends completely on your definition of “soon”. There’s still a lot that is unknown. What AI researchers are working on now has little to do with insect brains anyway.

              • John Hamill
                Posted June 21, 2018 at 12:52 am | Permalink

                Nobody said that any AI research is anything to do with insect brains. What you said was this …

                “If an AI creates things or does something unexpected, it is because we programmed it to do so.”

                I believe that’s wrong. If that were true then by definition, no AI could ever do anything super-human, and there are plenty of counter-examples to that idea. I think the mistake is to consider contemporary AIs as ‘programmed’ to carry out their narrow functions.

              • Posted June 21, 2018 at 11:29 am | Permalink

                I think you are misinterpreting my statements. Perhaps I could have been clearer though you are also choosing an obtuse reading. I am saying that an AI’s creativity and ability to give unexpected or super-human results is because its designers programmed that ability into it. I am not saying this will always be the case, just that it is the case now. I am a believer in Strong AI and I am actually doing work on it myself. However, current AI programs like AlphaGo can’t do anything but play a good game of Go. Their ability to surprise us is limited to great Go moves. Even the statement that AlphaGo learned to play Go by itself is misleading. It definitely didn’t learn how to play like a human would.

                There’s a lot of hype surrounding AlphaGo and similar AI programs created by the researchers and enhanced by the press. People are misled into thinking we are about to build AIs that approach the abilities of the human mind. Maybe, but it has nothing to do with programs like AlphaGo. See the references I give in other comments here.

              • Posted June 21, 2018 at 11:24 am | Permalink

                I was not talking about the decision to “pull the plug”; I was talking about the decision to make use of a (binary, so “simple” in that sense) classifier in a way that profoundly affects people’s lives. I’ve oversimplified an actual classifier to show that even the simplest case has profound ethical implications if used in a “technological mode” where there is no theoretical background one can use as a check.

                (I was at a talk at NRC the other day where the goal was to replace solving the Schrodinger equation in materials science. That’s *much* less of a big deal than an ANN for law enforcement, IT Security (my current profession), intelligence, etc.)

              • Posted June 21, 2018 at 12:15 pm | Permalink

                Certainly the more autonomy we give our machines, the greater the danger that something bad will happen. Even if we have the ability to pull the plug, it may be too late. With a rogue missile, we might lose the ability to pull the plug even if such a feature was programmed. Of course, this is already a problem that is somewhat independent of AI. AI just makes it worse.

                AI could eventually improve on this. We might think that a human-controlled weapon wouldn’t make the same mistakes as one controlled by AI, but that’s obviously too simplistic. An AI can potentially react to danger faster than a human. It also doesn’t fall asleep or take drugs. Humans and AIs will both make mistakes, just different ones.

              • John Hamill
                Posted June 21, 2018 at 11:47 am | Permalink

                “I am saying that an AI’s creativity and ability to give unexpected or super-human results is because its designers programmed that ability into it.”

                This is wrong. Super-human abilities can’t be programmed by humans. Obviously. By definition. Where AlphaGo has acquired super-human abilities it is through *learning* by playing itself. Super-human abilities cannot be programmed by humans, otherwise they would be merely human abilities. Obviously. By definition. Contemporary AIs *learn* more than they are programmed.

              • Posted June 21, 2018 at 12:21 pm | Permalink

                They learn more BECAUSE they were programmed to. Whatever learning mechanism they use, it was created by human programmers.

                It is a simple confusion to think that just because we wrote a program, we know what it will generate. In general, we only know what a program will do by running it. Programs can always surprise us with their output. Of course, there are levels of surprise. If my program crashes because it has a bug, I am surprised but only a little. If my program beats the best human Go player, I am more surprised. It is all a matter of degree.
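
                To make the “we only know what a program will do by running it” point concrete, here is a toy sketch (the example is my own, invented purely for illustration; it has nothing to do with AlphaGo’s internals): even a short, fully deterministic program can have output that is impractical to predict without running it.

```python
def collatz_steps(n: int) -> int:
    """Count applications of the 3n+1 rule until n reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# The program is trivial to read, yet predicting its output by
# inspection is famously hard: for n = 27 it takes 111 steps.
print(collatz_steps(27))
```

                The programmer wrote every line, yet learns the answer only by running it; that is the sense in which a program’s output can surprise its own author.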

              • John Hamill
                Posted June 21, 2018 at 12:45 pm | Permalink

                Nobody said that because we write a program we will know exactly what it will generate. In fact, I said precisely the opposite. AIs are designed so that they can learn to exceed human capabilities. Their capabilities then are not merely those that they have been programmed with (contra your claim); in fact they can learn abilities that exceed those of their programmers. That a super-human AI may surprise its programmers with novel abilities is a *feature*. In fact it’s the very purpose of the exercise. That an inadvertent error in a piece of software may cause a crash is a *bug*. A feature and a bug are not just two kinds of surprises that differ only by degree. One is success and the other is failure. Obviously.

              • Posted June 21, 2018 at 12:50 pm | Permalink

                We are just repeating our arguments here so I’m done.

              • John Hamill
                Posted June 21, 2018 at 1:12 pm | Permalink

                I disagree. I don’t think you’re repeating the same argument at all. First you argued that anything an AI does must be something it has been explicitly programmed to do by a human programmer. This is not consistent with an AI having any super-human capability. By definition. Then you argued that such super-human capabilities are just ‘surprises’ that differ from crashes only in degree. This confuses features with bugs. In this manner you haven’t been repeating the same argument but instead making new arguments, which are each wrong in novel ways.

              • Posted June 21, 2018 at 2:34 pm | Permalink

                You have completely misunderstood my points. I work in AI, for God’s sake! I have no reason to engage in an argument at the level of “computers can only do what they are programmed to do”. If you think that’s what I’m saying then you are wasting my time and your own.

                If you disagree, then read the article I linked to or pretty much anything written recently by Gary Marcus. His opinion is by no means unique but he has written extensively on the hype surrounding AI and he is a well-known AI researcher.

              • GBJames
                Posted June 21, 2018 at 1:31 pm | Permalink

                “AI having any super-human capability. By definition.”

                I saw this comment upstream and it bothered me a bit but I didn’t react until now…

                I don’t see how this is true. We create machines with super-human capabilities all the time. No human can fly. None can stay underwater for more than a few minutes. None can lift eight tons of weight. But we create machines with these super-human capabilities and don’t think twice about it. Why is AI any different than other machinery (in this regard)?

              • Posted June 21, 2018 at 2:50 pm | Permalink

                I agree. We do create machines with super-human capabilities all the time. This is just someone misinterpreting what I said. AI is not different in this regard.

                Perhaps the analogy with airplanes is a good one. While they do something that humans can’t do, the engineers that build them aren’t at all surprised because they designed them to work that way. AI is no different. That was my point in a nutshell.

                In a sense, “computers only do what they are programmed to do” is correct, but this has subtle nuances that matter. Right now we don’t have AIs that can program themselves in the everyday sense, though many programs generate code, so even that is not the whole story.

                When looking at claims that AlphaGo learned to play Go by itself, we must look closely at what this means or be misled by the hype.

              • John Hamill
                Posted June 21, 2018 at 1:58 pm | Permalink

                Humans can create flying machines based on the physics that humans can understand. Humans can program software to play human-level Go based on the strategies of Go that humans can understand.

                However, an AI can learn new super-human Go strategies that are beyond the capabilities of any human. Such super-human strategies are ‘learned’ by the AI by playing itself. They are not programmed into the AI by the human programmers, otherwise they wouldn’t be super human. By definition.

                If you encountered a flying machine that only flies based on physics beyond the Standard Model that humans currently understand, then that would be analogous to encountering a super-human Go player. If a flying machine is designed based on super-human physics then it was not designed by a human. By definition. If an AI can play Go using super-human strategies, then those strategies were not programmed by a human. By definition.

              • John Hamill
                Posted June 21, 2018 at 2:40 pm | Permalink

                I’m engaging with your argument on the level of quoting it verbatim within quotation marks, and then describing why it’s wrong. Like this statement …

                “If an AI creates things or does something unexpected, it is because we programmed it to do so.”

                I’m a Computer Science professional too.

              • John Hamill
                Posted June 21, 2018 at 2:54 pm | Permalink

                “Perhaps the analogy with airplanes is a good one. While they do something that humans can’t do, the engineers that build them aren’t at all surprised because they designed them to work that way. AI is no different. That was my point in a nutshell.”

                AI is no different because AI designers aren’t surprised by what an AI does? Erm … AI designers are very often surprised by what super-human AIs do.

              • Posted June 21, 2018 at 3:01 pm | Permalink

                Not really. Do you think AlphaGo’s designers were surprised that their creation made good moves? I am sure they were happy and couldn’t have predicted precisely how it won its games but that’s not the same thing at all. My square root algorithm produces answers that I don’t know ahead of time but I am not surprised by them.
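
                A minimal sketch of the kind of square-root routine meant here (hypothetical; the commenter doesn’t specify an algorithm, so Newton’s method is assumed for illustration): the author cannot predict the digits of the answer in advance, but is never surprised that it *is* the square root.

```python
def newton_sqrt(x: float, tol: float = 1e-12) -> float:
    """Approximate sqrt(x) for x > 0 by Newton's method."""
    guess = x if x > 1 else 1.0
    # Repeatedly average the guess with x/guess; this converges
    # quadratically to the square root.
    while abs(guess * guess - x) > tol * x:
        guess = 0.5 * (guess + x / guess)
    return guess

print(newton_sqrt(2.0))  # ~1.414213562373...
```

                The designer knows *why* every answer is correct, even without knowing the answer ahead of time, which is exactly the distinction being drawn with AlphaGo.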

              • John Hamill
                Posted June 21, 2018 at 3:17 pm | Permalink

                I think that the AlphaGo designers are unable to distinguish the good AlphaGo moves from the bad AlphaGo moves, because AlphaGo is using super-human strategies.

                Your statement here about ‘surprising results’ is quite different from your original contention about ‘surprising results’ though. Your original contention was as follows …

                “Programs can always surprise us with their output. Of course, there are levels of surprise. If my program crashes because it has a bug, I am surprised but only a little. If my program beats the best human Go player, I am more surprised. It is all a matter of degree.”

                This statement is entirely incoherent and confuses bugs with features. Since the discussion has regressed into efforts to re-write previous statements, I think we’ve reached the end. Thanks for your time and enjoy the rest of your day.

              • Posted June 21, 2018 at 3:42 pm | Permalink

                I stand by what I said. If what I have said seems to be evolving, it is simply me attempting to make my point in different ways in the hope that you’ll understand it. I guess that’s not going to happen. You enjoy the rest of the day as well.

          • Posted June 21, 2018 at 11:20 am | Permalink

            Take the hidden layer of an ANN designed to classify “terrorist” or “not terrorist”.

            What description of the hidden layer is possible? In general – and this is the *virtue* of ANNs that have been promoted for years by the likes of Paul Churchland – this is impossible (except by simply summarizing all the weights). Non-propositional knowledge!

            Now law enforcement makes use of it to classify people at the airport. How are we to know where it is going wrong, if it is, if we can’t understand its operation?

            (I realize that this may be true of us in some sense- that’s PC’s point, but we at least can “flatten our state space” into words …)
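
            The opacity point can be made concrete with a minimal sketch (the architecture, weights, and input are all invented for illustration, and bear no relation to any real screening system): the hidden layer is just a grid of numbers, and inspecting those numbers yields no propositional account of what any unit “means”.

```python
import math

# Hypothetical 2-input, 3-hidden-unit, 1-output network.
# The weights below are arbitrary numbers chosen for illustration;
# reading them tells you nothing about *what* each hidden unit detects.
W_hidden = [[0.7, -1.3], [2.1, 0.4], [-0.9, 0.8]]
b_hidden = [0.1, -0.5, 0.3]
W_out = [1.2, -0.7, 0.5]
b_out = -0.2

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def classify(x: list[float]) -> float:
    """Forward pass: returns a score in (0, 1) with no explanation attached."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(W_out, hidden)) + b_out)

score = classify([0.5, -0.5])
print(round(score, 3))  # just a number; the "reasons" are only the weights
```

            A human expert can be asked *why* they flagged someone; this classifier can only be asked for its weights.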

            • Posted June 21, 2018 at 12:08 pm | Permalink

              Yes, it’s a big problem with neural nets. It’s kind of ironic as it is analogous to human experts’ inability to explain how they do things, as you suggest.

              Besides preventing us from reaching in and fixing problems, it also prevents many applications from approaching human performance.

              It is clear from psychological research that our sense processing takes advantage of our knowledge of the world, common sense, etc. Neural nets can only learn from their training data. Our brain’s neural nets benefit from what we expect to see.

  4. Posted June 20, 2018 at 10:12 am | Permalink

    Oh good, I am not the only crazy one. I avoid killing insects and spiders unless they are an intolerable nuisance to me. Much to the dismay of my wife who thinks I’m nuts. When I shoo them out of the house she screams at me “They’ll only come back in!”

  5. Randall Schenck
    Posted June 20, 2018 at 10:42 am | Permalink

    Very interesting look at something we hardly think about. Just as, some few years ago, we did not think about the consequences of the pesticides produced and used in large quantities, not only for the target insects but also for other insects and larger animals.

  6. Christopher
    Posted June 20, 2018 at 10:43 am | Permalink

    If you listen to the first episode, you’ll get the back story on this insect question. Apparently there was a bit of a kerfuffle over a citizen science study asking people to build wasp traps. Some thought it was unnecessarily cruel, others didn’t care because they hate wasps, others said it was for the greater science good. For me, it was very much about the tone the scientists took, “harnessing the hate” for wasps. Even if the result is the same, laughing and joking about killing things puts me right off, and that, Prof, is why your eating meat is very different and not at all hypocritical. Unlike enjoying stomping on a spider, as many do, you don’t seem to enjoy the killing of the animal. You clearly respect them, and, again unlike just killing a bug because it’s there, your meat eating has a purpose, no different than the wasp carrying a stung spider or caterpillar away for food. I’ve been a vegetarian for 20+ years, I routinely rescue insects and spiders, I never use bug spray (get the hose and spray water on a wasp nest if they’ve built in a bad spot) but I find no fault with your carnivory, unlike those who enjoy hunting because they like shooting things, or those who swerve to hit an animal on the road. Intentions matter, morally and philosophically, and most certainly including scientific intentions.

  7. Posted June 20, 2018 at 10:46 am | Permalink

    Reblogged this on The Logical Place.

  8. Mehul Shah
    Posted June 20, 2018 at 10:54 am | Permalink

    So, I’m not the only crazy one.

    There was a spider living in my car for a while. Not sure how he survived, but eventually, I captured and released it to the wild – my colleague who was in the car at the time was totally dumbfounded.

    I try not to invite insects into the house (by keeping it clean), so I don’t have to kill them. The only time I do kill is when I’m out backpacking, and then it’s just impossible to avoid it.

    There’s a better (at least equally good) case to be made for not killing/using pigs, cows, goats, chickens etc. There’s no doubt that they suffer. Natalie Portman has a new documentary out on the subject called Eating Animals.

  9. Jon Gallant
    Posted June 20, 2018 at 11:24 am | Permalink

    A recent article in the Seattle Times reports that “the mussels in Puget Sound contain detectable levels of oxycodone.” So, these molluscs are evidently sentient enough to take pain-killers from time to time. I must say, this is what makes Northwestern moules marinières particularly good.

    • Posted June 20, 2018 at 11:37 am | Permalink

      Well, that might be a case of what my father calls the “amazing work the analytical chemists do”. Lots of stuff can be detected at far lower concentrations than was possible even 10 or 20 years ago – by orders of magnitude in some cases.

      That’s why some of these “pesticides in drinking water!!” headlines are alarmist. Yes, there are such, but if they are at ppb concentrations, should we care?

      • Posted June 20, 2018 at 12:20 pm | Permalink

        That’s correct, and the amounts of prescription drugs found in Puget Sound mussels are far below the amounts mussel doctors would have prescribed to them, let alone amounts that might impact other forms of life.

        Still, it does suggest a problem, doesn’t it? If things we make as drugs to treat humans appear in the wider environment after passing though us, then maybe we’re using too much of the drugs. That’s the take home message.

        Same is true of pesticides or anything else we allow to leach into our water. We need to be aware of these issues and take action where warranted (there’s the rub).

        • Posted June 20, 2018 at 12:38 pm | Permalink

          You’re assuming that the drugs work by being consumed, an assumption I wouldn’t make. And you’re making lots of other similar assumptions, including some that could be very dangerous. What if the body tolerates relative overdoses well, but a low dosage is ineffective? It would be most irresponsible in such a case to attempt to “dial in” the “perfect” dosage.

          …and that’s long before we get to the very real problem of people disposing of no-longer-needed medications by dumping them down the toilet….

          b&

          >

          • Posted June 20, 2018 at 12:55 pm | Permalink

            I’m not making any assumptions. I wasn’t addressing the use and mis-use of drugs, I was commenting on the value of finding trace amounts of man-made substances in the water.

            If something we make – something which would not otherwise naturally get into the water – is being found at detectable levels, then that suggests there may be a problem at the source. Something we should be aware of and act on, if warranted. Knowledge is power.

            In the case of drugs, whether the source is urine or people flushing them down the toilet doesn’t make any difference to my comment. If it is something we think ought not to be in the water, detecting it there is the first step. Further, the mere fact that we can detect a substance, irrespective of any effects it might have on the environment, suggests something about our society we may wish to address. Such as our opiate crisis (we don’t need drugged mussels to know we have a problem there).

        • Posted June 21, 2018 at 11:30 am | Permalink

          But *everything* is in everything if the scope is large enough or you look hard enough.

          For example, there are likely on the order of 3 plutonium atoms in your body, right now. These are intensely radioactive and will likely kill a cell or two as they decay.

          Should you worry? No, of course not. Dose makes the poison.

          Should you *not* worry about contaminants? No, not that either, for the same reason.

    • nicky
      Posted June 20, 2018 at 12:12 pm | Permalink

      That is an outstanding line of research: if there are some hormones killing pain, there must be something like pain* to begin with.

      *a disagreeable alarm stimulus that affects proper functioning if overdosed.

      • nicky
        Posted June 20, 2018 at 11:21 pm | Permalink

        Oops, misread that; I was thinking of endogenous painkillers, such as endorphins, produced by the mussels

  10. Frank Bath
    Posted June 20, 2018 at 11:28 am | Permalink

    I try not to kill insects, especially spiders; however, I am not merciful with silverfish. A housefly battering itself against a window can be difficult to help, and it saddens me to find it dead of exhaustion later.
    I understand there are Jain holy men in India who employ a sweeper to brush the ground ahead of them in case they tread on an insect. I don’t know who does the same for the sweeper.

    • nicky
      Posted June 20, 2018 at 11:56 am | Permalink

      Spiders are not insects, but what is your bug with these harmless silverfish?

      • Frank Bath
        Posted June 21, 2018 at 5:48 am | Permalink

        Thank you for the correction, my bad, I was over generalising. The kitchen in the flat I have recently taken over is plagued with silverfish, which come out of every crack at night – much like people have cockroaches. I have to squirt them. It’s been a battle.

        • GBJames
          Posted June 21, 2018 at 7:11 am | Permalink

          I find silverfish creepy, too.

          I just went over to Wikipedia and learned that they are fond of wallpaper paste and books. I didn’t know that.

  11. Posted June 20, 2018 at 11:34 am | Permalink

    Pain is not the problem, as a careful introspection of your own relationship to pain will reveal.

    Fear is the problem.

    There are severe forms of pain that some might even embrace. Fear is either acknowledged and then ignored or simply not a factor at all. Childbirth for some, especially repeat mothers, comes to mind — as does that coming-of-age ritual involving a glove with bullet ants woven into it. Even the least brave amongst us can appreciate the pain of jumping into a cold lake along with our relationship to the pain and the fear of it.

    And there are trivial pains that cause intolerable fear or aversion. For example, back pain can be very intense, yes, but the spasms are over in a moment…and most spasms are very mild. What’s crippling is the fear that the momentary pain will keep building exponentially and will last forever. So even a small twinge causes great fear as a harbinger of much worse to come…even though, statistically, the worse very rarely does come and doesn’t last very long when it does.

    In reality…it’s only pain. And it won’t last. Save for an unlucky few with chronic pain, it doesn’t last long, even if it seems interminable at the time. And even if it lasts the rest of your life…well, that’s, at most, mere decades away — a blink of an eye.

    Not to suggest that pain is desirable, or something to seek out, or not something to avoid. However, a full perspective of pain can change it from something overwhelming to just another body sensation, like hunger or warmth. You’re then free to make rational responses to it, and the worst part — the fear — mostly evaporates.

    Cheers,

    b&

    • nicky
      Posted June 20, 2018 at 11:55 am | Permalink

      I agree that a short pain, especially when you know it will be short, is more easily gotten over than longer-lasting pain. However, although fear may certainly compound the suffering, I’m sceptical about whether it is fear that causes most of the suffering caused by pain.

    • GBJames
      Posted June 20, 2018 at 12:01 pm | Permalink

      What’s wrong with fear? It is just recognition, usually based on experience, that doing X is going to result in painful consequences.

      • Posted June 20, 2018 at 12:13 pm | Permalink

        Fear is rarely as rational as you paint it. Rather, fear tends to become all-encompassing and overwhelming.

        For example, the best way to get over back pain is (typically, for the usual case, but dependent on the cause so “ask your doctor”) to move and stretch — yet the moving and stretching will almost inevitably cause pain. Fear usually prevents people from moving and stretching through the pain. Rationally, it is much better to accept the inevitable pain — once your back has gone out, it’s going to continue spasming no matter what you do or don’t do — and work through it in the most productive manner possible. But the fear causes people to shut down entirely, to desperately attempt to minimize even the slightest twinge…and that winds up prolonging and worsening the pain.

        Worse, you can become afraid of fear, in a vicious circle that is especially powerful and destructive….

        Another perspective: those suffering from back pain spend 99 44/100% of their time _not_ experiencing pain. The pain comes in brief (though, granted, sometimes intense) and infrequent bursts. Yet they often spend 100% of their time crippled by fear. If that isn’t enough to convince you that fear is the problem, not pain….

        Again, this is not to minimize the suffering. The suffering is real, very real! It’s just that it’s of vital importance to understand the true nature of the suffering in order to alleviate it. Treating the pain would help alleviate the suffering, but the problem is the suffering, not the pain. Directly address the suffering and then you can independently make a rational decision as to what to do about the pain.

        b&

        >

        • Randall Schenck
          Posted June 20, 2018 at 4:56 pm | Permalink

          I have a cat that has a terrible fear of any persons other than myself or the other staff, my wife. Do not know why, but he came to us at birth from his mother, who came from the wild. Cannot figure that one.

          Hey, are you back now or just stopping in?

      • GBJames
        Posted June 20, 2018 at 12:20 pm | Permalink

        Who says it needs to be rational to be useful? Pain isn’t rational but it (along with fear) tends to keep the fingers out of the flame.

        You can take anything and make a pathology of it, which you’re doing here regarding “fear”.

      • nicky
        Posted June 20, 2018 at 12:25 pm | Permalink

        That is the function of pain: preventing harmful consequences (yes, it can misfire, but that is a tangent here, I’m sure you’ll agree). That is why we have pain, and why I think insects do feel something like pain; it is the reason why any animal must feel something that approaches it.
        Maybe I misunderstood you, but I took issue with the claim that the most important part of the suffering caused by pain is the fear of pain. I do not think so. It is important, but not the most important factor, IMHO.

    • Mark R.
      Posted June 20, 2018 at 8:58 pm | Permalink

      There are also fearful medical conditions that don’t involve physical pain at all. Atrial fibrillation is one. I don’t think stroke victims suffer from a lot of pain either.

      I think putting pain instances into the category of ‘physical’ undermines the reality that pain diminishes from memory; probably an evolved trait. Like your example of maternity pain.

      Fun reading your words as always.

  12. mirandaga
    Posted June 20, 2018 at 11:36 am | Permalink

    A fish story: once, on the Clackamas River in Oregon, I hooked a 12-pound steelhead. While I was playing it (or vice versa) I observed something very strange: another steelhead about the same size was following the hooked steelhead around. Wherever the hooked fish went, the other followed. To all appearances, it looked like the second fish was upset and trying to help out.

    I’ve never seen this behavior before or since in all my years of fishing, and steelhead are notoriously easily spooked fish, which made it even more remarkable. In any case, I’m more careful about making assumptions about what fish and other animals feel or don’t feel.

    • Posted June 20, 2018 at 12:27 pm | Permalink

      It could have been a mating pair. The male and female stay together for a short time, building their redd (a nest of gravel) and mating periodically – the female deposits some eggs and male spreads his semen (milt) over them. Then they cover the eggs with more gravel.

      If that was the case, you may have hooked the female and the male followed because males try to stay close by females so that only he gets to fertilize her eggs.

  13. nicky
    Posted June 20, 2018 at 11:38 am | Permalink

    I’m sure flies and bees can feel something like pain. I saw how frantic flies got when they were pinned and their wings pulled out, or when a nasty ‘uncle’ of mine burnt a bee in a flower, crushing it with his burning cigar tip.
    That being said, killing mosquitos is self defense. I don’t mind them sucking my blood, but I do mind the itchy pustule they leave, not to mention their sleep-wrecking buzz.
    Moreover, I kill flies, since they are vectors of all kinds of diseases; that would fall under self defense too.
    For flies I have a good bare-hand technique: you slowly slide your index and ring fingers on either side of the fly (you have to approach from front or back; from the side it doesn’t work). Since the stimulus is about equal from either side they cannot decide to budge (or maybe the stimuli cancel each other), and then the middle finger, which you held up with your other hand, springs smashingly down: instantaneous, no suffering.
    Other insects, as well as spiders I leave alone.
    When in Mafikeng, I was the only one not to use poison against roaches, result? I had a lot of flatties (spiders of the Selenops genus) that hunted my roaches. It seems those spiders are much more sensitive to poison than the roaches. As a result I was the only house on the hospital campus that had no roaches, just a skeleton here or there.

  14. Posted June 20, 2018 at 11:41 am | Permalink

    An important, but difficult topic – thanks to all for tackling it.

  15. E.A. Blair
    Posted June 20, 2018 at 11:45 am | Permalink

    Outdoors, I’m more than willing to live and let live, only making exceptions when I catch some critter trying to make a snack of my blood – then it’s slap first and ask questions later. However, when it comes to indoor space, I’m less than tolerant. My policy is that I do not willingly share my living space with any creature whose natural complement of legs is less than two or more than four.

    More recently, I amended that rule to include the proviso “big enough to see” so as to exclude my beneficial internal menagerie.

    A good friend of mine who died recently had his home infested with bedbugs, which are not compatible with coexistence. He hadn’t been able to deal with the problem before he passed, so now his family is charged with the task of debugging the house before they can take care of his belongings. For every good multilegged critter that can share a house, there are dozens that are much more problematic.

    • nicky
      Posted June 20, 2018 at 12:46 pm | Permalink

      That is very hateful towards amputees, like say diabetics. What other ‘less than two legs’ were you referring to?

      • E.A. Blair
        Posted June 20, 2018 at 1:28 pm | Permalink

        Did you even read my comment? If you did, did you understand it? I said that I do not willingly share my living space with any creature whose natural complement of legs is less than two or more than four. Humans who are missing a leg due to amputation or birth defect still carry the genetic pattern of a two-legged primate.

        That does not prevent me from being staff to an amputee cat (one of my best cats had feline diabetes and required daily insulin injections, though he had all his limbs), nor would it preclude a human amputee partner.

        Less than two legs? It means I would not keep a pet snake. You have heard of snakes, haven’t you?

        I deeply resent your characterization of me as hateful towards amputees and diabetics. I leave it to other commenters to judge whether an apology is in order.

        • Adam M.
          Posted June 20, 2018 at 2:27 pm | Permalink

          I read his comment as tongue-in-cheek. It’s possible he wasn’t really accusing you.

          • nicky
            Posted June 20, 2018 at 11:24 pm | Permalink

            😁

        • nicky
          Posted June 20, 2018 at 11:23 pm | Permalink

          Oops, skipped the ‘natural’, sorry

        • nicky
          Posted June 20, 2018 at 11:36 pm | Permalink

          Snakes are four-legged animals (Tetrapoda) 😁, but I see you don’t like snails. I don’t particularly like snails inside either, unless in garlic butter.

    • Liz
      Posted June 21, 2018 at 9:25 am | Permalink

      I’m in agreement here. I generally won’t kill ladybugs, daddy long legs, big, fuzzy bumble bees, dragonflies, grasshoppers, cicadas, millipedes, centipedes, earthworms, or caterpillars. I have never seen a live Asian Longhorned Beetle but I definitely would save it and call to report it. I don’t kill unknown spiders because I am afraid one might injure my fingers. I found a tick on me after hiking several weeks ago. I got it out before it stayed in for too long. I wouldn’t kill it, but ticks are also difficult to kill. Mosquitoes I will slap to get them off of me but don’t necessarily want to kill because then you see all of the blood they just took from you. If it is anything else that’s in my house, it comes down to doing whatever it takes to get it out. It doesn’t always work out to save it.

  16. nicky
    Posted June 20, 2018 at 12:07 pm | Permalink

    In laboratories they have all kinds of rules for treating mice humanely – the lab mice, that is, the good mice. However, the ‘wild’ mice, the bad mice, are treated without any consideration (sticky traps and horrors like that). Ironically, a good mouse escaping becomes a bad mouse, for which (whom?) any kind of cruel killing method goes.
    This is elaborated in Hal Herzog’s “Some We Love, Some We Hate, Some We Eat” (say, cats, rats and little lambs). I won’t post the link, since last time I did that the whole cover appeared.

    • Posted June 20, 2018 at 12:31 pm | Permalink

      The “bad” mice need to be killed because they bring in disease – which the lab mice are often unable to defend against. The same is true of escaped mice – lab mice are kept in SPF (Specific Pathogen-Free) environments. If one should escape and be returned to the colony, it can bring disease back with it.

      Further, if wild mice breed with the lab mice, it can completely destroy their value as research subjects. Lab mice are (for the most part) clonal: they are homozygous at every locus. Introduce new alleles and you can’t rely on what your experiments are telling you.

      • nicky
        Posted June 20, 2018 at 12:57 pm | Permalink

        Mikey, I did not contest at all that they should be killed/exterminated (there are indeed very good reasons to do so); I just pointed out that the good mice have all kinds of protection against cruelty, against being killed (yes, we kill them) in an inhumane manner, but that the bad mice have no such protection: anything cruel, sometimes unimaginably cruel, goes there. A kind of schizophrenic attitude/set of rules, like most of our interactions with other animals.
        What would our host think of a nice duck curry or a Peking Duck? I hope you get my point.

        • Posted June 20, 2018 at 1:09 pm | Permalink

          Ah. I see. Quite right. Carry on, then. 🙂

      • infiniteimprobabilit
        Posted June 20, 2018 at 7:42 pm | Permalink

        I would never wish to kill a mouse.

        BUT – we have got mice in our roof. Droppings everywhere. I would be happy to ignore them if they were house-trained.

        So – how to get rid of them?

        So far, I’m ignoring the problem. This is not going to make it go away.

        cr

        • nicky
          Posted June 20, 2018 at 11:28 pm | Permalink

          a cat maybe? 😊

          • Liz
            Posted June 21, 2018 at 11:32 am | Permalink

            I can’t even tell you how many people said to me last month, when I had the same problem, “Get a cat.” It was at least 10.

        • Liz
          Posted June 21, 2018 at 11:30 am | Permalink

          I would never wish to kill a mouse either. But if there is a mouse inside, it has to go. It was about a month ago that I first saw a mouse in my very first home, which I got all by myself. I went directly to Home Depot and bought about $80 worth of every type of trap. I thought the glue traps were the best ones to use, so I set out four of them and went to work the next day.
          I started to feel uneasy at work having them there. I googled what to do and started reading all sorts of terrible things about glue traps. I took the rest of the day off, called an exterminator who couldn’t come out until four days later due to Memorial Day, and went home in a panic. I didn’t find anything on the glue traps but found a mouse the following morning. It looked dead, and I was completely worn down and exhausted from it.
          An hour later, I went to throw the mouse and trap out, as I had looked it up and talked to several people who told me I was worried for nothing: “Wait until you have a real problem. Like having to replace your heating system.” I went to the glue trap with the mouse, and it was now alive and trying to escape. I lost it and started crying; it was either the mouse or me. If it got loose, it might bite me. I put a plastic container over it and dragged it out of the house, crying the whole time. It was terrible. Awful.

        • Posted June 21, 2018 at 4:52 pm | Permalink

          There is a nice, clear, humane SmartMouse trap that works well. They’re small; I have them in several locations.

          Only one year of being catless and mice tried moving in. Now that I have two indoor ones, they don’t even try.

          The problem is more the relocation: releasing them far enough away that they don’t come back. I took to releasing them in abandoned houses in my neighborhood, of which we have many. They will then migrate to someone else’s inhabited house, of course.

  17. busterggi
    Posted June 20, 2018 at 12:09 pm | Permalink

    I also relocate most arthropods if possible; spiders are an exception, as I consider them my first line of defense, so they stay with me. I did squash a huge cockroach last night that flew in my living room door. Sorry, archy, but none of my cats is named mehitabel. I’ve contacted the CT Agricultural Center to see if they’re getting reports of cockroaches increasing as wild critters in the state due to climate change.

    The whole qualia thing is based on supposition; frankly, I don’t believe in philosophical zombies any more than actual ones. It’s always seemed like a religion-based assumption of human superiority to me.

  18. Posted June 20, 2018 at 12:09 pm | Permalink

    “If pain wasn’t, well ‘painful,’ then we wouldn’t be so quick to avoid it…Of course it’s possible that the whole aversion behavior in insects and so-called ‘lower’ animals comes through a system of evolved automatic response that doesn’t make its way through consciousness or produce qualia. But it’s possible that it does include that, so, like many other scientists (see below) I err on the side of caution.”

    Thus far no one’s been able to show how qualia like pain contribute to behavioral responses like aversion over and above what their neural correlates accomplish. If someone says well pain *just is* the correlates, the same point applies: we can tell the story of aversion in terms of the correlates and leave out any reference to pain as a feeling. Still, if qualia are present (a very hard question to answer), as you say that’s a good reason to act compassionately.

  19. rom
    Posted June 20, 2018 at 1:03 pm | Permalink

    Very simply, there are some life forms with which I do not want to share certain spaces. There are some inanimate materials I do not want to share a space with, either.

    Striving to be amoral has certain advantages.

    Take feeding mealworms to ducks: giving an advantage to eight ducklings will of course disadvantage other ducklings in the coming years.

    This is all part of an unfolding universe. The morality plays we play are just memes striving for survival. But then so is my amoral meme.

  20. eric
    Posted June 20, 2018 at 1:22 pm | Permalink

    It seems to me that practically every time a scientist asks a question of the type “is human-type consciousness a necessary prerequisite for mental capability X?”, the answer turns out to be “no.” So I’ve (tentatively, provisionally) decided to skip to the end of this logical train of thought and conclude that yes, animals have qualia. Yes, they can calculate. Yes, they can reason in a basic sense. No, they don’t have what we recognize as consciousness, but this feature doesn’t appear to be necessary for any of those other capabilities. Rather, consciousness appears to be an overlay. And if there’s some mental capability that we seem to observe in both a lizard and a mammal, then there’s every biological reason to think that the brain module to do that ‘thing’ evolved long before, and works independently of, the ‘consciousness’ module.

    • Posted June 20, 2018 at 3:17 pm | Permalink

      “Rather consciousness appears to be an overlay.”

      Agreed, but of course this violates the common intuition that conscious experience like pain plays a role in behavior control. It’s simply assumed that it does, but closer consideration suggests that the private, subjective, qualitative character of pain and other experiences (qualia) won’t end up in third-person explanations of action; only the observable neural correlates of qualia will.

      • eric
        Posted June 20, 2018 at 7:49 pm | Permalink

        Consciousness could still play a role in behavior control. Gom Jabbar! 🙂 But what I’m saying is that an animal would still feel pain and respond to the pain without it. Maybe not the same way a conscious animal would, but in its own way.

        And yes, I’m a determinist. If consciousness is an overlay/module, then the signals and pathways associated with the consciousness phenomenon could have an effect on brain function without implying indeterminism. To use a really bad analogy: stick a capacitor in a circuit and it will behave differently without being indeterminate. Stick consciousness in an animal and it may likewise behave differently, but this doesn’t imply determinism is wrong.

  21. CAS
    Posted June 20, 2018 at 4:58 pm | Permalink

    I also avoid killing anything when possible.
    RE: Of course it’s possible that the whole aversion behavior in insects and so-called “lower” animals comes through a system of evolved automatic response that doesn’t make its way through consciousness or produce qualia.
    It may be that pain (call it a quale) is a simpler, more efficient, flexible, and centralized way of dealing with avoidance and survival than evolving a large number of separate automatic responses. I’m not at all clear that there is anything mysterious about qualia, or that the term is even meaningful, since it simply describes the subjective experiences of an organism.

  22. Posted June 20, 2018 at 7:19 pm | Permalink

    In summer I have taken to herding flies (some are big “bush” flies) out of the house, out doors, out windows!!!! What the hell!
    I like to think they are part of the greater food chain, spiders the same, where I can. I like to think that they help regulate the population of other insects, something I’ll never know.
    White-tails get the squish: an Australian immigrant with a nasty bite… arachnid racist, I am. Wasps I ignore until someone gets hysterical and I must do my duty… and sometimes I still ignore them.
    Rule of thumb for me: inside is human territory; outside, the jungle is yours, and good luck, fellow creatures.

  23. David Coxill
    Posted June 20, 2018 at 7:35 pm | Permalink

    Where do you draw the line at killing living things?
    I read that Albert Schweitzer hated to kill bacteria in his research.

  24. Howiekornstein
    Posted June 21, 2018 at 3:33 am | Permalink

    I write this having just killed a mosquito. My feeling on this subject is that it is ultimately impossible to rise above our own nature as evolved animals and hold any morally satisfying stance on killing much lower species; any of our compromises on the subject belie an underlying hypocrisy. We as animals are inherently parasites, our existence predicated on killing. Evolution itself is embodied violence: Darwin’s beautiful tangled bank is a vicious battlefield. Our magnificent cognitive powers, which make us able to address these philosophical issues, evolved from our superior capabilities as particularly efficient carnivores. How can we rise above any of this? Yes, we can mitigate the amount of carnage and pain we cause, but there is no happy solution that allows us to claim a posture of moral goodness.

  25. Steve Pollard
    Posted June 21, 2018 at 8:11 am | Permalink

    I am one of those people whom wasps seem to go for. Whether it is some component of my personal pheromones or whatever, I don’t know. What I do know is that if there are wasps about, they home in on me like heat-seeking missiles. It’s become a bit of a family joke.

    And sometimes avoidance doesn’t work, and swatting is the only way to buy a bit of peace. For me, I mean, not the wasps.

    On the other hand, I do seem a bit less attractive to mozzies, and even midges, than most other folk.

  26. Jonathan Wallace
    Posted June 22, 2018 at 7:26 am | Permalink

    “But yes, I do eat meat, and am aware of this hypocrisy, so there’s no need to remind me of it.”

    I don’t think there is any hypocrisy in your position, which seems to be that in some circumstances (scientific research, provision of food, and pest control, for example) it may be necessary to kill animals, but that where this is so it should be done as humanely as possible, and any killing that is not necessary is to be avoided. People may differ on what killing is necessary (vegetarians clearly do not perceive the necessity of killing animals to eat them), but simply by living we all require the killing or eviction of some animals, whether through the killing of pest species in crops, the destruction of habitat to make way for our homes and places of work, and so on. It is as well to acknowledge this whilst endeavouring to ensure that we cause no more harm than we have to to our fellow inhabitants of the planet.

  27. Jan
    Posted July 10, 2018 at 2:27 pm | Permalink

    My opinion of this whole subject can be summarized as follows: “Humans have a right to kill, torture, and otherwise exploit any living being that isn’t human for whatever purposes they deem necessary.” And I have yet to see any argument as to why it shouldn’t be so that wouldn’t amount to just self-righteous posturing or vacuous moralizing.

    • Posted July 10, 2018 at 2:33 pm | Permalink

      And my opinion is that you are a horrible person who is willing to torture animals if it would give you pleasure. If you can’t see any reason why that’s bad, you need professional help. I’m serious.

    • Posted July 11, 2018 at 6:35 am | Permalink

      Jan,
      Putting it in terms of rights suggests a legal framework. Do you mean to say that? I would guess probably not. Lots of things that are legal are immoral, such as foreclosing on widows; lots of things that are moral can be, and have been, illegal: cannabis use, freeing slaves, etc.
      So I suspect you mean something like “It’s moral to do it.” But then the onus is on you, not the opposition, to provide arguments (maybe self-righteous and posturing, but arguments nonetheless).
      The usual route is to say either
      1) that animals can’t suffer (which seems unlikely on the face of it), or
      2) that their suffering doesn’t matter (which requires further argument), or
      3) that said suffering is real, but outweighed by human benefit.
      It would help the rest of us if we knew which of these three (or some other variant) you prefer. Then we’ll get working on the rebuttals…

  28. Diane G
    Posted July 10, 2018 at 5:31 pm | Permalink

    sub

  29. Posted August 2, 2018 at 3:08 am | Permalink

    Hi, I think it is an uncomfortable thing when humans use insects as a source of entertainment, regardless of the condition of the insects when the entertainment is complete. I am thinking of the UK programme “I’m a Celebrity…Get Me Out of Here!”. There are jungle trials for the celebrities which, if passed, earn extra food. Some of the trials involve insects, snakes, etc. being in close proximity to each other. The insects are sometimes even piled up in tubes. Injury to the insects is inevitable.
    I have written to the producers and asked them not to do this, but they tell me that it is OK because the insects are bred specifically for the purpose. They miss my point (deliberately?). The possibility that pain and stress could be caused to the insects means I find this morally wrong. A living creature is being abused for human fun. It is on a par with discovering that a childhood friend enjoyed lighting matches and sticking them into snail shells to watch the ooze come out.
    I would be surprised if insects were unable to feel pain in some way. When people are unable to feel pain in part of their body, they need to constantly check that they are still OK; they don’t know when they have injured themselves. Pain is there to tell us something is amiss. Insects need this too: it reinforces avoidance of dangerous situations. It is a survival tool.
    If you need to kill to eat, fair enough. If culling is needed to protect crops, etc., OK. But insects, just because they are a lower order of animal, and especially given that pain is a possibility for them, shouldn’t be excluded from the ethos of respecting living creatures.
