Two morning tw**ts

Although I don’t follow anyone on Twi**er, as I’d never get anything done if I did, I do count on the kindness of strangers (and readers) to call interesting tw**ts to my attention. Here’s one that Grania sent me.

I watched the video that these tweets ultimately link to, and put it below.

Sam is, of course, referring to the famous trolley problem first outlined by philosopher Philippa Foot. As you probably know, modern versions involve a choice in which taking an action will lead to one person’s death, while inaction will lead to several deaths. Five people are on one track, with a runaway train about to smash into them, surely killing all five. But, by pulling a switch, you can divert the train onto a track where it will hit only one person. Do you take that action? (I’d say “yes”.)

An alternative is that you’re standing on a footbridge over the tracks with a fat guy beside you whom you don’t know. If you throw him onto the tracks, you can stop the train and save five lives, though the chubby man dies. (You’re assumed to be too thin to stop the train.) Do you heave that person onto the tracks? The consequences are exactly the same, but most people, including me, would say “no” to that question. It’s interesting to ponder why we see a difference between these two innate feelings, and why we somehow feel that hurling the fat guy is wrong.

There are lots of variants of this problem, all designed to explore our moral intuitions. It’s a good Gedankenexperiment to explore why we have different knee-jerk reactions to “moral” situations that are fundamentally similar.

Below is a funny video in which a father who knows about the trolley problem poses it to his son. His solution is unique, but I have to say that if I were a kid, I would probably have done the same thing!


Finally, Matthew Cobb, who reads the Times Literary Supplement, found a review of a book called Holy Sh*t: A Brief History of Swearing (but why the asterisk, given that there’s already a well-known book called On Bullshit?). He shared some of the review’s contents on Twi**er, and it was shared widely. Trigger warning: scatology, profanity, and sexuality!

Looking up “Gropecuntelane,” I found there’s a long Wikipedia entry for it, and that many streets in England were given that name, all because they were where prostitutes plied their trade. (British street names were often derived from the activities taking place there.) There were in fact several streets in London alone with this name, but all had disappeared by the end of the 16th century.


  1. ThyroidPlanet
    Posted September 1, 2016 at 8:40 am | Permalink

    If you don’t do anything, no one can sue you in court…. right?

    • Posted September 1, 2016 at 1:41 pm | Permalink

If you don’t do anything, no one can sue you in court? I don’t think this is true. If you live next to someone who falls in their yard, and if they broke a hip and are moaning on the ground, I think there are laws that say you MUST do something to help that person. I am not sure of this. Any other opinions here?

      • infiniteimprobabilit
        Posted September 1, 2016 at 5:54 pm | Permalink

        It may vary from country to country, but as I understand it, in English-common-law based countries, no. Not unless a ‘duty of care’ exists or some specific regulation (e.g. ‘health and safety’) applies.
        But IANAL so don’t quote me!

        That doesn’t mean the guy can’t sue you later, anybody can sue for anything.


    • infiniteimprobabilit
      Posted September 1, 2016 at 7:20 pm | Permalink

While I’m at it, never mind just being sued. If you push the fat guy you are quite likely to be charged with murder. If you don’t, you won’t.

Also, there’s the legal doctrine of ‘assumption of risk’. (Note that my knowledge of US law comes mostly from perusing Kevin Underhill’s excellent site ‘Lowering the Bar’.) For example, a baseball spectator hit by the ball may fail in his lawsuit because that is a known risk he implicitly accepted by attending.
      In this case, it is arguable that the trolley passengers accepted some risk of a crash whereas the fat guy on the bridge didn’t.


      • Dale Franzwa
        Posted September 2, 2016 at 12:18 am | Permalink

Good observation. What Foot did with the trolley problem was refute utilitarianism as a valid ethical system. Five is greater than one, right (duh)? Therefore, the proper ethical response is to flip the switch and save five rather than one? The problem here is that utilitarianism has no built-in provisions for making exceptions to the preferred response. Consider this example: The five guys are terrorists who have captured Jerry (Angry Cat Man) and tied him to the main track. They then move to the siding and start planting a bomb. Ben Goren is in the tower with the switch observing all this. He can also see the train arriving ahead of schedule. Which way do you think he flips the switch? (Your guess is as good as mine).

        The fat man version requires you to commit murder in order to save five unknown men. Do you think a jury will see things your way and forgive you for murdering one guy in order to save five? Again. . .

        • infiniteimprobabilit
          Posted September 2, 2016 at 3:07 am | Permalink

          Reminds me of an old English legal maxim, “Circumstances alter cases”.

          As soon as we start trying to add some context to the ‘trolley problem’ we change it to some extent.


  2. rationalmind
    Posted September 1, 2016 at 8:40 am | Permalink

    My favourite piece of graffiti from a toilet wall was genuine Latin.

    Vidi Vici Veni

    I just wonder how many people understood it.

    • Ken Kukec
      Posted September 1, 2016 at 10:37 am | Permalink

      I recall mine (seen in a crapper near the philosophy department):

      “To do is to be” — Sartre

      “To be is to do” — Camus

      “Do be do be do” — Sinatra

      • HaggisForBrains
        Posted September 2, 2016 at 3:32 am | Permalink

        I remember that one from uni in the sixties.

    • gravelinspector-Aidan
      Posted September 2, 2016 at 1:02 pm | Permalink

      The version I sent back to Matthew was “Veni Vidi Victi VD.” Of archaeological vintage.

    • Mark Joseph
      Posted September 2, 2016 at 4:26 pm | Permalink

The English translation of that exact phrase occurs in the song “Fireballs,” just after the 45-second mark.

      How and why I know that is a story too tedious for telling.

  3. kieran
    Posted September 1, 2016 at 8:46 am | Permalink

    I was asked that question years ago…fatty got thrown in front of the train straight away. If you’re willing to flick a switch to kill someone you shouldn’t balk at getting your hands dirty doing it yourself.

    • keith cook + / -
      Posted September 2, 2016 at 4:46 pm | Permalink

If you believe the war movies I used to watch, where it was protocol to close the hatch on your shipmates in a flooding compartment, all you need is to be ordered. Save the ship at all costs, and possibly more lives, by this action. In a submarine I doubt you would even think about this, as it applies to everyone on board.
The fat man problem is likely to backfire on me: fat men don’t want to die and fight back, and the dead would be six.

  4. Posted September 1, 2016 at 8:51 am | Permalink

    Chris Donald of Viz magazine is great at coining rude euphemisms.

    Here is a sample from ‘Hail Sweary’ edited by Graham Dury, Davey Jones and Simon Thorp, with an intro by Charlie Brooker of Philomena Cunk fame.

    farmer’s footprint n. A huge skidmark. ‘That’s the last time I let Meatloaf in to use the Sistine Chapel shitter. The fat bastard’s left a right farmer’s footprint in the U-bend, your holiness.’

    • Ken Kukec
      Posted September 1, 2016 at 11:03 am | Permalink

      Such an elegant and fanciful usage note! 🙂

      Reminds me of Karen Elizabeth Gordon’s grammar guide The Transitive Vampire (well, sort of, anyway).

    • gravelinspector-Aidan
      Posted September 2, 2016 at 1:04 pm | Permalink

      If you want to plumb those depths, you need to consult Roger’s Profanisaurus. It’s getting a bit aged now, but is still being updated in print.

  5. Taz
    Posted September 1, 2016 at 8:57 am | Permalink

    It’s interesting to ponder why we see a difference between these two innate feelings, and why we somehow feel that hurling the fat guy is wrong.

    I haven’t studied this stuff, but could it have something to do with the fact that the premise is absurd? Throwing the switch is plausible – the fat guy’s body mass stopping a train is not. (The idea that his would and yours wouldn’t – and you would somehow be able to calculate that – is even less so.) No matter how much you try to think of the situation in the abstract it’s hard not to view the act of throwing someone in front of a train (in the hopes of stopping it) as the action of a sociopath.

    • Richard Bond
      Posted September 1, 2016 at 11:55 am | Permalink

      I agree completely. My response to this so-called problem is that the premise calls for completely impossible perfect information. One of my favourite anti-philosophy jokes:

      A physicist, a chemist and a philosopher are washed up on a desert island, with only a tin of beans to eat. To open the tin, the physicist suggests heating it in a fire until the pressure forces it open, but the philosopher objects that it will explode and blast the beans everywhere. The chemist suggests putting the tin where the waves can wash over it and corrode a hole, but the philosopher objects that the seawater will get in and spoil the beans. The physicist and the chemist challenge the philosopher to find a way. After much deep thought, the philosopher comes up with “Let us suppose that we have a tin-opener…”

      My personal label for these contrived and improbable conundrums is “tin-opener philosophy”.

      (I guess that in the USA this would be “can-opener”.)

    • Paul S
      Posted September 1, 2016 at 1:32 pm | Permalink

My first question when I heard the problem was who’s driving the trolley. Someone or something has to be on the dead-man switch.
I understand what the premise is supposed to be, but it’s based on having zero knowledge of trolleys, much less physics.
I highly doubt that answering a question about an implausible situation has any bearing on what you would do in a crisis. As stated, I questioned the plausibility of the premise and asked many questions. However, when confronted with an apparent heart attack victim at a highway rest area, I administered CPR while others watched until an ambulance arrived.

      • Posted September 1, 2016 at 6:02 pm | Permalink

        Indeed, one might even suspect that no philosopher ever has even so much as imagined that there might be a respected, licensed, professional position such as, “railway safety engineer,” or, if so, that such a person could possibly be of any help in clarifying their confusion.

        I imagine that a philosopher, if so challenged, would object that the “thought” “experiment” isn’t meant to be realistic…but, in so doing, would be offended were you to observe that that means that the question therefore has no more bearing on reality than that of the number of theologians that can tango on a philosophical conehead.

        Am I being snarky? Maybe a bit. But I’m also quite serious.

        Philosophical “thought” “experiments” as ludicrously oversimplified as these are as useless as having students weigh a feather and a brick and use Newtonian gravitation to predict which will actually hit the floor first when the teacher drops them simultaneously. Unless, of course, the teacher is making a point about either air resistance or the danger of naïve oversimplification — but the philosopher expects us to take the Trolley “thought” “experiment” seriously and thinks there’s something meaningful to be learned from it.




      • gravelinspector-Aidan
        Posted September 2, 2016 at 1:14 pm | Permalink

However, when confronted with an apparent heart attack victim at a highway rest area, I administered CPR while others watched until an ambulance arrived.

I’ve killed one more person than I’ve ever wanted to by performing CPR (incorrectly; I’d never been trained in it at that time. I will not make that error again.)
        The niceties of philosophy leave me strangely unmoved. I remember one night of horrible radio traffic where a couple of guys were presented with a real world version of the thought experiment. Seven men who need not necessarily have died did die. Utilitarianism, simply and naively applied, would have saved those men’s lives.

        • Posted September 2, 2016 at 2:23 pm | Permalink

          I’m sure you’d agree with me that the real lesson from your night of bad radio isn’t whether or not to shove the fat man in the potato cannon in order to derail the Amtrak train before it triggers the nuclear ICBM launch…

…but rather which safety protocols need to be reevaluated or developed in order to prevent a similar tragedy from ever occurring again in the first place.

Why doesn’t it ever occur to any philosopher to consider what could have been done to keep people off live tracks, properly maintain the trolley’s brakes, and so on?



          • gravelinspector-Aidan
            Posted September 2, 2016 at 5:24 pm | Permalink

…but rather which safety protocols need to be reevaluated or developed in order to prevent a similar tragedy from ever occurring again in the first place.

            The protocols were well established. The rescuers were ordered to return to disembark the 5 people they’d pulled out of the burning water. The rescuers saw two more people in the water waving and went back into the fire to try to get them. The riser ruptured. Seven dead, not two dead.
Procedures were not followed. People died whose fate was not sealed.

            • Posted September 2, 2016 at 6:37 pm | Permalink

              Oh, I’m so sorry.

              I can certainly sympathize with the overwhelming instinctive urge to go above and beyond and save the additional two — as well as the guilt they would have anticipated had they not and the riser not ruptured until after they would have made it to safety.

              But…as you so often note, that procedure was obviously written in somebody else’s blood, and the true tragedy is that it takes even more blood to fully establish it — and maybe not even then.

              Human intuition is really, really, really ineffective during triage situations — including ones that unfold achingly slowly, such as the decision of whether or not to invade Iraq during the Gulf War. We feel an overwhelming urge to do something, and anything that’s proposed is perceived as better than nothing…even when nothing really is the better course of action.

              I don’t know of any more effective way to overcome this failing of ours other than, as you yourself do, drill in procedures written with the blood of failure.



              • gravelinspector-Aidan
                Posted September 3, 2016 at 1:57 pm | Permalink

                Human intuition is really, really, really ineffective during triage situations

                That’s why you don’t rely on intuition but procedures.

    • eric
      Posted September 1, 2016 at 9:46 pm | Permalink

      The premise used to be absurd, but with driverless cars probably just a few decades away, it isn’t any more. If we give control to computers, we’re going to have to come up with heuristics for them to follow for selecting between various accident outcomes. Such as: do you let your passenger die or crash into another car that has a different passenger, likely killing them instead? How about if the other car has two people in it – should the safety of the passenger be paramount? Consider it from the owner’s perspective; you probably do want your car to value your safety more than others.

None of this is the trolley problem in exact detail, but it gets pretty close. An automated car is going to have to be programmed on whether to turn and intentionally hit the “fat guy” car coming the other way, or do nothing and cause a four-car pileup. We, the programmers, are going to have to pick. So we will have to ‘solve’ trolley-like problems (i.e., pick some set of rules to govern computer responses in those cases) pretty soon, whether we want to or not.
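The rule-set eric describes could be sketched in a few lines. To be clear, this is a toy illustration only, not anything a real vehicle uses: the function names, weights, and outcome probabilities are all invented for the example. The idea is simply to enumerate the available maneuvers, score each by expected weighted harm, and pick the least bad one.

```python
# Toy sketch of a "pick some set of rules" accident heuristic.
# All names, weights, and probabilities here are invented for illustration.

def expected_harm(outcome, passenger_weight=1.0, stranger_weight=1.0):
    """Weighted sum of estimated fatality risks for one maneuver."""
    return (passenger_weight * outcome["p_passenger_fatality"]
            + stranger_weight * outcome["p_stranger_fatality"]
              * outcome["strangers_at_risk"])

def choose_maneuver(options, **weights):
    """Return the maneuver with the lowest expected weighted harm."""
    return min(options, key=lambda o: expected_harm(o, **weights))

options = [
    {"name": "straight", "p_passenger_fatality": 0.1,
     "p_stranger_fatality": 0.5, "strangers_at_risk": 4},
    {"name": "swerve", "p_passenger_fatality": 0.3,
     "p_stranger_fatality": 0.4, "strangers_at_risk": 1},
]
print(choose_maneuver(options)["name"])
```

Note that the whole moral debate in this thread lives inside the weights: set `passenger_weight` high enough and the car always protects its owner, which is exactly the regulatory question eric raises below.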

      • Gregory Kusnick
        Posted September 1, 2016 at 10:14 pm | Permalink

        You make a fair point, but I think your examples all correspond more or less to the original trolley scenario: risking m lives to save n lives, appropriately weighted.

        The analogy to the fat man scenario would be if the car decides to save its passengers by using pedestrians’ body mass to cushion its crash. If we find engineers programming that sort of logic into their vehicles, they deserve to be prosecuted.

        • eric
          Posted September 2, 2016 at 1:49 pm | Permalink

That depends on whether the engineers are doing what society has told them to do or not. You’re presuming everyone will accept your moral calculus. I’m not sure that’s true. I think to a lot of prospective car buyers, “this car puts the safety of its passengers (i.e., your child) paramount, and counts it as worth two random strangers” would be a selling point.

          This could be particularly true if these future cars don’t track the number of people in each vehicle. Then the choice is just value your own vehicle equal/higher/lower than someone else’s. If you’re buying a minivan for your 4 kids, I think it’s perfectly reasonable to think you might value your own vehicle higher than a sub-compact, even under the moral calculus that everyone’s life is equal.

          There are all sorts of ramifications here. Should such programming be regulated by government or should the purchaser have a choice? Seems like a prisoner’s dilemma, which points to regulation being needed. But then there will be a potential black market for programmers, and rich people might get the illicit “value me more” types anyway. If we allow choice, how do we stop that race to the bottom? Or is that a problem – maybe we want cars to protect their passengers as a first priority. And so on, and so on…

          • Gregory Kusnick
            Posted September 2, 2016 at 2:32 pm | Permalink

            Perhaps I’m not being clear. Valuing your own passengers’ lives over those of random strangers is part of the “appropriate weighting” I mentioned.

            The dilemma I’m trying to highlight involves, say, a decision whether to crash into a brick wall or a crowd of pedestrians. The brick wall poses a greater risk to the vehicle’s passengers, and it may turn out that some of that risk can be mitigated by choosing the softer target, inflicting moderate bodily harm on some pedestrians in order to save passenger lives. Nevertheless I claim the car should choose the wall (and ethically responsible engineers should program it to so choose), rather than reduce pedestrians to the status of involuntary crash buffers.

            This is analogous to the argument against compulsory kidney donation, even when such donation can save a life without much physical harm to the donor.

            • Posted September 2, 2016 at 6:30 pm | Permalink

              The dilemma I’m trying to highlight involves, say, a decision whether to crash into a brick wall or a crowd of pedestrians.

              Again, you’re proposing godlike knowledge and evaluation skills be instilled into the car.

              Is it a brick wall, or a brick-pattern carnival tent filled with even more people? Is one of the pedestrians blocking the view of a fire hydrant that’s going to cause certain death to the occupant when the car slams into it? And so on.

And, again again, you miss the even bigger problem…namely, why was the car navigating at high speed between a brick wall and a crowd of pedestrians in the first place? Why didn’t the car detect the crowd of pedestrians earlier and slow down to a speed such that it could safely stop if one of them darted out? I mean, your hypothesis means the car is able to distinguish between crowds of people and not-crowds of not-people…so, it saw the crowd of people and failed to slow to a safely navigable speed…why, exactly? And, if it hasn’t managed that basic and obvious and essential maneuver…you expect it capable of radically greater sophistication…why…?

              Or are you suggesting that somebody has sabotaged the brakes and / or steering in a way that the car didn’t detect until too late? If so…why on Earth would you expect the car to protect anybody from acts of sabotage? Aside from taking reasonable precautions to protect the car from sabotage in the first place, why even worry about how the car’s supposed to function after it’s been successfully sabotaged?

              If I might observe, this entire discussion is a perfect example of why philosophy is the worst possible discipline to investigate questions such as these. It’s so far removed from real life, so far lost in Platonic Ideal Land, that any “conclusions” you might reach philosophically are even less relevant than a physicist’s spherical cow.




              • infiniteimprobabilit
                Posted September 2, 2016 at 7:39 pm | Permalink

                Avoid spherical cows at all costs. Admittedly said cow is probably dead already, but if it’s that bloated the impact is likely to be… messy.



            • nicky
              Posted September 2, 2016 at 9:28 pm | Permalink

              I somehow suspect that a ‘self-driving’ car programmed to crash into a brick wall when in doubt is not going to be a big hit. 🙂

          • infiniteimprobabilit
            Posted September 2, 2016 at 7:21 pm | Permalink

            This gets impossibly complicated very rapidly.

            When the crash is inevitable, should we (our computer) hit a small car (and probably kill its pax) or a bus – whose passengers would probably survive, unless of course the bus then crashes and kills them?

            (This implies that buses should drive as fast as possible, so that a crash *will* kill all on board; this will be taken account of by the approaching car’s computer which will then choose to avoid the bus – yes?)

            Of course, if the choice is between the (side of a) bus and a truck, we should obviously sacrifice ourselves and hit the truck – unless of course the truck is carrying e.g. LPG…



      • Posted September 2, 2016 at 10:17 am | Permalink

        If we give control to computers, we’re going to have to come up with heuristics for them to follow for selecting between various accident outcomes. Such as: do you let your passenger die or crash into another car that has a different passenger, likely killing them instead? How about if the other car has two people in it – should the safety of the passenger be paramount?

        No, we’re not. Not even vaguely.

        You’re presuming that the cars will have both some sort of transhuman power of observation that lets them distinguish between a mannequin in the trunk and a kidnap victim in there, plus the ability to perform these sorts of moral calculuses in microseconds.

        In reality, the cars just need to be good drivers, as we already understand the phrase. Observe the road around, looking for potential dangers. Maintain a speed that reaches a “good enough” balance between preserving maneuverability and keeping up with traffic. If a likely danger scenario is present — such as kids playing ball in a driveway — slow down. If anything gets in the way of the car, slam on the brakes and avoid everything if possible. If not possible, avoid moving things first and big things second.

        That’s it, really.
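The “good driver” recipe above can be caricatured in a few lines. Everything here is my own toy rendering (the names and the crude priority tuple are invented, not anyone’s actual control code); the point is only that “brake, then avoid moving things first and big things second” is an ordering rule, not a moral calculus.

```python
# Toy rendering of the priority order: brake hard, then steer to miss
# moving obstacles before stationary ones, and bigger before smaller.
# Names and the priority tuple are invented for illustration.

def avoidance_priority(obstacle):
    """Lower sorts first: moving before stationary, then larger size."""
    return (0 if obstacle["moving"] else 1, -obstacle["size"])

def plan(obstacles):
    """Always brake, then address obstacles in priority order."""
    actions = ["brake hard"]
    for ob in sorted(obstacles, key=avoidance_priority):
        actions.append(f"steer around {ob['name']}")
    return actions

obstacles = [
    {"name": "parked van", "moving": False, "size": 3},
    {"name": "pedestrian", "moving": True, "size": 1},
]
print(plan(obstacles))  # the moving pedestrian outranks the stationary van
```

No knowledge of who the pedestrian is, or how many Rhodes Scholars are in the van, is required or used.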

        Do we currently hold people responsible for not being able to stop in time when a kid runs out from behind a parked car? Even if a NASA expert can propose, after the fact, a way that the driver could have ricocheted off the parked car onto a trajectory that would have saved the kid?


        Then why should we expect our robot drivers to do even better?

        When it comes right down to it, the only reason to think that the Trolley “thought” “experiment” applies to automotive automation is if you think computers have divine powers of knowledge and wisdom.

        In practice, as a society we’ll be better off when robot cars are no worse than the average driver. That’s all.

        They don’t have to be Mario Andretti all fresh and rested at the start of the race. They just need to get a passing score on the DMV exam — and that alone will save countless lives. Why? Because the car will always be driving as well as it did at the DMV exam. It’s never going to be sleepy, worrying over what to say to the boss, distracted by the kids fighting in the back seat, or looking under the seat for the slip of paper with the phone number they just dropped — let alone drunk, shaving, texting, or whatever.

        The problem with human drivers isn’t that they’re crashing into five Rhodes Scholars instead of one fat junkie. The problem is that their attention wavers. Solve that problem — which is exactly what robot cars do — and you could potentially save more lives every year than were lost during the entire course of the Vietnam War.




        • eric
          Posted September 2, 2016 at 2:00 pm | Permalink

The difference, Ben, is I can’t make a fully informed decision about my surroundings in milliseconds. That’s why car accidents today aren’t trolley problems: even though the driver makes a split-second decision about whether to turn the wheel left, right, or straight, it’s not really an informed moral choice the way the philosophical trolley car situation is.

          But to a computer, milliseconds is plenty of time to communicate with the other vehicles around it, calculate trajectories, impact velocities, etc. and make a calculated decision whether to veer left, right, or go straight. To make a calculated decision to hit another car (and potentially cause that passenger a fatal injury) in order to avoid potentially fatal injury to the car’s own passenger, or not.

          I guess we could choose to program our auto-drives to not do such a calculus, but that would also be a moral programming choice, and IMO frankly a much less moral one than trying to get the car to make the best possible decision using the most recent available information it can get.

          • Posted September 2, 2016 at 2:32 pm | Permalink

            As I wrote, and you now make explicit, you’re presuming godlike powers for robot cars. And if you’re going down that path, why not assume that they’ll be able to see a potential crash developing in enough time to avoid it entirely? What makes you think such a car is going to have enough time and control to decide who lives and who dies, but not enough time and wisdom to slow down to 15 MPH on the crowded residential street in the first place?

            Indeed, I think that would help you realize the absurdity of your position. Propose a real-world accident scenario where the robot car needs the sort of ethical brilliance you desire of it, and why the computer is better equipped to deal with the scenario than your favorite philosopher who moonlights on the Formula 1 circuit.




  6. Posted September 1, 2016 at 8:59 am | Permalink

    This child is a NJ Transit officer in reverse.

  7. fzulps
    Posted September 1, 2016 at 9:02 am | Permalink

    The two-year-old’s solution strikes me as an ideal variation of the classic “Kill ’em all and let G*d sort it out” approach to problem solving.

  8. Posted September 1, 2016 at 9:22 am | Permalink

    Nickolaus chose the Arnaud Amalric solution.

  9. mordacious1
    Posted September 1, 2016 at 9:35 am | Permalink

    The kid’s reasoning might be that, if people are stupid enough to stand on the tracks and not jump off as the train approaches, they shouldn’t be in the gene pool. Sound reasoning.

    • Posted September 1, 2016 at 9:44 am | Permalink

      Or, he’s establishing in-group solidarity while abrogating any possibility of survivor’s guilt.

      • Mark Sturtevant
        Posted September 1, 2016 at 11:11 am | Permalink

        The kid is clearly exercising his prerogatives as a young member of the cis white patriarchy.

  10. Posted September 1, 2016 at 9:43 am | Permalink

    And then there’s the Zaphod Beeblebrox solution:

    “Zaphod,” she (Trillian) said patiently, “they (Ford Prefect and Arthur Dent) were floating unprotected in open space … you wouldn’t want them to have died would you?”

    “Well, you know … no. Not as such, but …”

    “Not as such? Not die as such? But?” Trillian cocked her head on one side.

    “Well, maybe someone else might have picked them up later.”

    “A second later and they would have been dead.”

    “Yeah, so if you’d taken the trouble to think about the problem a bit longer it would have gone away.”

    • infiniteimprobabilit
      Posted September 1, 2016 at 5:58 pm | Permalink


      DNA was a frickin’ genius!


  11. dooosp
    Posted September 1, 2016 at 9:43 am | Permalink

I’ve always found the pushing-the-fat-man example to be severely lacking, because no matter how I try to think about it, I can’t believe the fat guy is so fat that he can stop the trolley. And if he is, then it’s very likely that, even if you don’t push him, only one of the people on the track will die and the others will only get hurt.

Anyhow, there are too many unknowns with the pushing-the-fat-guy example, and that’s why it’s the more uncomfortable one, not just because you’re pushing a fat guy to his death. Pulling a lever has no unknowns; it’s all mechanical.

    • darrelle
      Posted September 1, 2016 at 11:06 am | Permalink

      If the fat guy were massive enough to stop the train, approximately Godzilla size I’d say, how in hell would the average person (or even the 1/100 of 1 percenter) be able to push him any fraction of an inch?

  12. Posted September 1, 2016 at 9:44 am | Permalink

Interestingly, the trolley problem is now more than a philosophical puzzle.

    • Posted September 1, 2016 at 11:43 am | Permalink

This has been true approximately since the dawn of robot ethics as a field (around 2000).

    • Posted September 1, 2016 at 5:47 pm | Permalink

      Self-driving vehicles are not new; we’ve had them for millennia.


  13. K
    Posted September 1, 2016 at 9:47 am | Permalink

For me, it’s an easy decision: I flip the switch but I don’t push the fat guy. Why? It’s probably partly the same reason why it’s easier to drop bombs at 20,000 feet and kill people than it is to do it in person: I don’t want to look the fat guy in the eye when I do the deed.

    I think most people would want to distance themselves from an act like this and it’s a lot easier to just “flip a switch”.

    • Kevin
      Posted September 1, 2016 at 10:15 am | Permalink

Or program a computer. Teslas and Google cars will have time to work on and refine algorithms that couple Utilitarian and Libertarian weights to statistical models which ultimately provide a single probability of action.

      Computers will ultimately decide faster and better (statistically moral outcome) than we can.

  14. Gordon
    Posted September 1, 2016 at 9:51 am | Permalink

So the ‘Grope Lane’ I was shown in Shrewsbury has (as might be expected given the town’s most important son) evolved to have a simpler name, allowing it to survive in a more prudish environment.

    • neil
      Posted September 1, 2016 at 11:49 am | Permalink

      The one in Whitby became “Grape Lane”.
      If I were a doctor, I’d open a haemorrhoid clinic there…

  15. Kevin
    Posted September 1, 2016 at 9:55 am | Permalink

    Nicolas has adopted the American education strategy: if not everyone can get a cookie, then no one gets a cookie.

  16. Ken Kukec
    Posted September 1, 2016 at 10:17 am | Permalink

    Good for the kid in the video; he’s clearly a nihilist in the making.

    Fatman ever gets near me on a bridge over the streetcar tracks, I’ll shove him overboard on general principles.

    Call it the ethical equivalent of preventative war.

  17. Posted September 1, 2016 at 11:00 am | Permalink

    The fundamental problem with the Trolley Problem is that it’s no different from the too-familiar dilemma of the Nazi officer demanding of the husband to pick his wife or his children to be gunned down, with the philosopher playing stand-in for the Nazi.

    Frame it that way, and the real answer is obvious: you’re being set up to be blamed for a catastrophe not of your making, and you’re royally fucked no matter what. Worrying how you’d react is pointless. And, for that matter, Milgram already showed us that people will follow stern orders given by people wearing official-looking clothing, even if those orders are abhorrent…so we should be surprised, why, exactly, that the authority figure of the philosopher is able to compel abhorrent behavior in the victims of this “thought” “experiment”?

    In the real world, you’re not going to be touching critical industrial safety equipment in the middle of a crisis unless you’ve got the proper training or somebody with the proper training is instructing you. (How many of us would even recognize a modern railroad switch for what it is?) You make whatever snap judgement you can and focus your real attention on cleaning up the mess — especially rendering first aid as best you can and then giving a full and honest report to the investigators.

    Those who actually have to worry about these sorts of problems have all sorts of training and skills to deal with them. Doctors learn about triage, and MSF doctors, alas, are true experts at it. At the same time as pilots are taught how to handle engine malfunctions, they’re taught how to seek out the best emergency landing sites — and that training emphasizes minimizing casualties on the ground. And, contrary to earlier posts on this thread, the programmers of self-driving cars aren’t worried about the trolley problem, because the right solution is the same as you yourself would do when a kid chases a ball right in front of your car: slam on the brakes, maneuver to avoid hitting anything, accept that some accidents are inevitable, and do your best to spot the kid and ball and the potential for disaster in the first place.

    That last one bears some further explanation. The “thought” “experiment” is that, for example, the brakes have failed just as the car is coming to a stop at a crowded intersection, and the car has to choose to kill the mother with the baby stroller or the wino. In reality, a self-driving car isn’t going to start unless all its self-checks are up-to-date, including its record of required maintenance inspections, so basically the only way the brakes are failing in the first place is if they’ve been sabotaged. And we’re expecting the car manufacturers to worry about how to program their cars to drive when somebody’s cut the brake lines…why, exactly? (Don’t forget that the car’s going to include a brake fluid level and pressure monitoring system, so you’re going to need some pretty sophisticated means of fooling the car into thinking it’s safe to drive.)

    So…much ado about nothing. Way, way, way, way too much ado about nothing.

    • infiniteimprobabilit
      Posted September 1, 2016 at 6:04 pm | Permalink

      So, you push the philosopher onto the track instead. Might not stop the trolley, but at least the bastard won’t be around to put anyone else in an impossible position.


      • HaggisForBrains
        Posted September 2, 2016 at 3:42 am | Permalink


    • reasonshark
      Posted September 1, 2016 at 6:52 pm | Permalink

      I think the thought experiment itself is a distraction from the main moral issues: choosing between fewer and greater losses; actively killing at least one person to achieve an end versus passively letting at least one person die when one could’ve prevented it, again to achieve an end.

  18. bric
    Posted September 1, 2016 at 11:18 am | Permalink

    Malcolm Tucker lives!

    • neil
      Posted September 1, 2016 at 11:47 am | Permalink

      Malcolm Tucker is the best Prime Minister we never had….

  19. Posted September 1, 2016 at 11:53 am | Permalink

    I’ve said ever since I encountered “Pedro and the Indians” in B. Williams’ work in an undergraduate ethics class that, as a gamemaster of many years, I find this scenario horribly designed. My classmates and instructor asked how, and I said that an RPG with only two choices is a very bad computer game, not a scenario with a live GM and other players. Or, if you prefer, anyone who has seen an episode of _MacGyver_ or a lot of _Star Trek_ knows there are “always alternatives”. I was told then, “but here by hypothesis there aren’t!” So I said, fine, pick one then, but what does the conclusion of the thought experiment tell us about the world in which we live, in which we *can* try to be like MacGyver, or even just live as real humans? There was no answer.

    If I had been thinking less RPG and more logic (amazing for me :)) I realize now I could have put it in terms of _ex falso quodlibet sequitur_, which is an interesting question about thought experiments in general that I came up with when debating the “twin earth” scenario beloved of H. Putnam and his followers.

    There has to be some way of constraining them – idealizations have to be studied, somehow. Hofstadter and Dennett talk about “knobs”, which is a good idea, but doesn’t quite answer the logical point.

  20. Mark Sturtevant
    Posted September 1, 2016 at 12:21 pm | Permalink

    The swearing bit had activated some adjacent neurons in my brain about adult coloring books with bad words. It would be restful to sit in a window seat, next to a cat, while coloring in scrolly, artful letters that spell out b u l l s h i t…

    • Zetopan
      Posted September 6, 2016 at 5:25 am | Permalink

      “…artful letters that spell out b u l l s h i t…”
      Which reminds me of something that I did many years ago. We had just gotten some new computers into the labs (brand: Everex) and some of them had a short multi-character alphanumeric LED display on the front panel. I always read the manuals (many, if not most do not) and found out that it was possible to put up a scrolling message on that display using a “hot key”, which was otherwise unused in normal operation. So I created a small program that could be activated with a single “hot key” on the keyboard that would cause BULLSHIT to scroll across the display.
      Whenever anyone matching that description came by (we did have our share of flakes) I would simply hit the hot key while smiling and listening to them tell me something worthless. Not a single targeted individual caught on, they were completely oblivious to what was happening around them when they were describing their latest great feat, or whatever else they wanted to waste my time about.

  21. p. puk
    Posted September 1, 2016 at 12:31 pm | Permalink

    (British street names often derived from the activities taking place there.)

    I think you will find this true all over the Old World. Just a single quick example of quaint street naming from Delft: Waagsteeg (Scales Alley) is named for the large communal scales that traders would use when making deals in the town square.

    The building that housed the scales is now home to a rather good restaurant conveniently called De Waag.

    Europe is littered with the like.

  22. Q-Tim
    Posted September 1, 2016 at 1:22 pm | Permalink

    I don’t know, maybe it’s just me, but I really don’t see much of a moral paradox with the trolley problem.

    You would flip the switch to save the five even if there were no lone guy on the track. If you flip the switch and the lone guy dies, his death is collateral damage; it is not the instrument of saving the five. If, by a happy accident, he decides to walk off the track for some reason, the five will still be saved.

    In fact, in a real situation, after flipping the switch, one would do anything in their power to get the lone guy off the tracks (like shouting your lungs out, or something).

    But in the throwing a fat guy scenario, he becomes a *tool* to save the five. He *has* to die to save them.

    Moreover, the fat guy is an innocent third party, while the lone guy on the track is already in harm’s way: before looking at the actual position of the switch, there was a 50% chance that the trolley was coming his way anyway.

    Clearly, nobody wants to live in a society where it is acceptable to use a random person’s life as tool to save five other people.

    • Q-Tim
      Posted September 1, 2016 at 1:29 pm | Permalink

      In the same vein, there is a huge difference between neglecting a heavily wounded person in order to save five other people and snatching a perfectly healthy person off the street to extract his organs in order to save five other people.

      If the latter option becomes morally acceptable, society will quickly descend into utter chaos.

      • reasonshark
        Posted September 1, 2016 at 6:58 pm | Permalink

        If the latter option becomes morally acceptable, society will quickly descend into utter chaos

        It would be nice to think that was true, but there’s no conclusive evidence for it, even statistically. To speak anecdotally, it’s hard to read lists of historical atrocities and conclude that barbaric practices make a society particularly prone to self-destruction. Heck, the Aztecs got by with regular pointless blood sacrifices, apparently in the hope of staving off the end of the world each night.

        Never underestimate humanity’s ability to adapt to evil.

    • Gregory Kusnick
      Posted September 1, 2016 at 2:02 pm | Permalink

      And yet during the Vietnam War, the US was just such a society, in which a majority considered it acceptable to expend the lives of random young men as tools in a game of Cold War brinkmanship.

  23. Graham
    Posted September 1, 2016 at 3:39 pm | Permalink

    Two parallel roads run through my village, which used to be a coaching stop on the Oxford to London road. The coaches would come down the High Street and turn into one of the coaching inns to change horses. They would then exit from the rear of the inn onto the parallel road which was quite sensibly known as The Backside. Alas at some point this became unacceptable and sadly it’s now known as Church Lane.

  24. Posted September 2, 2016 at 8:14 am | Permalink

    Sam Harris’s latest podcast goes into moral/ethical philosophy in great detail and it is fascinating. I highly recommend it.
