Two math posers

Poser #1: Yesterday a colleague from another school asked me how to test whether hummingbirds would visit two related species of flowers nonrandomly, that is, whether the flowers were reproductively isolated because the hummingbird (which pollinates as it sips nectar) prefers one over the other. He proposed an experiment in which he would put two individual plants of each species in a four-plant array, and then watch which ones the hummingbirds visited.  His initial supposition was that if there were no reproductive isolation (that is, the species are equivalently attractive to the bird), any two bird visits would result in the flowers being of different species 50% of the time. As he said,

“All else being equal, you’d expect a bird that’s falling from the sky (telling itself, ‘I am GOING to pick two random flowers’) – the thing should visit two A’s 25% of the time, two B’s 25% of the time, and 50% of the time it should visit one of each species.

“Again, it SHOULD pick two DIFFERENT species 50% of the time if it somehow chose them simultaneously.”

But then he realized that that wasn’t right. He asked me what the answer was, and, after about 5 minutes of thought, it came to me; it’s obvious once you think about it.

This is equivalent to putting two black balls and two white balls in an urn, and then picking two balls. What are the chances that you’d pick two balls of different color?

It’s not 50%. And the answer doesn’t change whether you draw the balls successively or grab two balls at once.

What’s the answer? (It’s the same as if the department has four new graduate students: two males and two females, and asks you to put two of them in a vacant, two-person office. What are the chances they’d be of opposite sex if you choose randomly?)

Explain your answers below.

And this real-world biological problem brought to mind a very famous hypothetical problem:

Poser #2. This is the most counterintuitive probability poser I know: the famous “Monty Hall” problem, about which my pal Jason Rosenhouse wrote a whole book. It’s based on the old television game show, “Let’s Make a Deal,” hosted by Monty Hall, which gave contestants a choice like the one described below.

Here’s the deal: You are shown three doors. Behind one of them is a fabulous prize, like a car or a vacation. Behind the other two are trivial prizes, say pillows.  You choose a door.  The host, who knows what’s behind every door, then opens one of the doors you didn’t choose, revealing a pillow.  He then asks you, “Do you want to switch doors now?” That is, he’s saying you can stick with the door you originally chose, or switch to the other, unopened one. Whatever you decide to do, you get what’s behind the door you stick with at the end.

The question: should you switch doors? The intuitive answer is “no, it doesn’t matter: the chance I’ll choose the one with the prize is 50% whether I switch or not.”

That’s the wrong answer. It does matter.

I’m sure some of you know the answer, but wait a while before you explain it in the comments.  For those who can’t wait, the explanation is here.

153 Comments

  1. Steve Reilly
    Posted May 21, 2013 at 11:46 am | Permalink

    For number one, 2/3? The bird goes to one species. Then there are 3 remaining plants, and if it chooses randomly, it has a 1/3 chance of picking the same species again and a 2/3 chance of picking a different species.
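
    A quick way to check this is by simulation. A minimal Python sketch, assuming the bird picks two distinct plants uniformly at random:

        import random

        trials = 100_000
        different = 0
        for _ in range(trials):
            # Draw two distinct flowers from two A's and two B's.
            first, second = random.sample(["A", "A", "B", "B"], 2)
            different += first != second
        print(different / trials)  # ~0.667, i.e. 2/3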

    • aspidoscelis
      Posted May 21, 2013 at 5:13 pm | Permalink

      Agreed.

    • Kevin Alexander
      Posted May 22, 2013 at 4:56 am | Permalink

      Steve, you’re smarter than I am. I had to get some paper and list every possible combination to get the answer.

  2. Posted May 21, 2013 at 11:47 am | Permalink

    Hummingbirds are opportunistic. I’ve watched them in my gardens and at my feeder for many years. There are flowers they like because those flowers produce lots of good nectar, while others don’t even get a glance. Hyssop good, zinnias good, marigolds – eh. I have concluded, among other things, that hummingbirds are extremely intelligent, yet I have been unable to find a single paper in which someone has studied hummingbird intelligence.

  3. Rebecca Harbison
    Posted May 21, 2013 at 11:49 am | Permalink

    I remember when presented with Problem 2 in college, I had to think through all the possible results. It’s the long way, but it can be done for a small problem like this, and it helps you think about how it works in a larger case.

    The same for problem 1 — if you write down all the ways to select two things from four, and then sort them into ‘like’ and ‘unlike’ categories, you can see.

    Though actually, thinking about it, Problem 1 has a different answer if you are choosing two samples from four than if you choose from six (three black, three white), and the probability should approach 50% as the number of possible samples becomes large. (Or if you allow for ‘replacement’ — sampling the same source twice.)

  4. eric
    Posted May 21, 2013 at 11:50 am | Permalink

    #1: It’s 2/3. Let’s call whatever group you picked first group A and the one you didn’t group B. It doesn’t actually matter which group you pick first or how you picked it. What matters is that the second pick now comes from a pool of 2 opposites (B’s) and 1 same (A). So if the second pick is random, you should pick an opposite second 2/3 of the time.

    I’ve read Rosenhouse’s Monty Hall book and agree with him, so I won’t spoil that one for others.

    • eric
      Posted May 21, 2013 at 11:53 am | Permalink

      P.S. I am assuming hummingbirds can’t or won’t pick the same flower twice. If they can, then you may be back to 50%.

  5. Michael Fisher
    Posted May 21, 2013 at 11:59 am | Permalink

    Balls:- 1b,2w,3b,4w

    Combinations of two balls
    12 diff
    13 same
    14 diff
    23 diff
    24 same
    34 diff

    Chances of picking diff = 4/6 = 67%
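
    The same enumeration takes only a few lines of Python, using the 1b/2w/3b/4w labels above (a sketch):

        from itertools import combinations
        from fractions import Fraction

        colours = {1: "b", 2: "w", 3: "b", 4: "w"}
        pairs = list(combinations(colours, 2))               # the 6 unordered pairs
        diff = [p for p in pairs if colours[p[0]] != colours[p[1]]]
        print(Fraction(len(diff), len(pairs)))               # 2/3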

    • Michael Fisher
      Posted May 21, 2013 at 12:10 pm | Permalink

      Short video:- THE BALL magic trick

  6. NewEnglandBob
    Posted May 21, 2013 at 12:02 pm | Permalink
  7. Posted May 21, 2013 at 12:05 pm | Permalink

    Is the problem purely one of mathematics, though?

    If the humming-bird is at plant #1, won’t its choice of the second plant be affected by the distances to the other plants?

    • Posted May 21, 2013 at 12:07 pm | Permalink

      I probably should have said something about what constitutes a fair pattern for the plants, and how to interpret the results – but that’s left as an exercise to the reader.

      • Posted May 21, 2013 at 1:33 pm | Permalink

        From observation — yes, the distance to the next plant or flower is very much a determining factor. They do not read math or puzzle books and are extremely practical most of the season because of very high energy costs (and very territorial as well). The few weeks prior to migration is hummingbird crazy time, but energy is abundant then.

        • aspidoscelis
          Posted May 21, 2013 at 5:22 pm | Permalink

          The problem of placing plants to avoid having mere proximity bias the results raises a further issue… if our question is, “Do hummingbirds choose among flowers randomly?” obviously we want to avoid confounding variables like proximity. If our question is, “Does nonrandom pollination by hummingbirds reduce or eliminate gene flow between these plants?” then the answer to the question may lie in precisely those confounding variables. Suppose the hummingbird doesn’t really care what species it visits; it just picks the next closest flower. If plants are nonrandomly distributed in natural populations (which they almost certainly are) then you get nonrandom pollination regardless of whether the hummingbird has any tendency whatsoever to distinguish among the flowers.

          Hummingbird preferences may not be a particularly good proxy for randomness of actual pollination in the field.

          • Posted May 21, 2013 at 6:39 pm | Permalink

            Another “confounding variable” is pattern recognition by predators.

            Locally, we have burrowing owls out at the Berkeley Marina, from October to April. People are told (by signage) not to stare too long, or point (!!) as raptors (typically hawks) use the information to find the owls at their burrows.

            We all tend to forget that raptors have incredible eyesight. Some flower-to-flower movement by hummingbirds may have evolved as the least dangerous type of movement, whether it means returning to the same flower, moving from flower to flower, etc., all evolving to maximize survival.

            A spray of water, broken down into discrete colors, also seems to draw hummingbirds. For consideration.

        • cherrybombsim
          Posted May 22, 2013 at 8:48 am | Permalink

          You hang one flower in a pot at the apex of a tetrahedron.

    • Gregory Kusnick
      Posted May 21, 2013 at 12:10 pm | Permalink

      Well, the title of the post is “Two math posers”, so I presume we’re meant to think of them as abstract math problems, in which case the correct answer to #1 is 2/3.

      But like you I’m mystified as to why Jerry’s friend thinks this is a plausible model of hummingbird behavior.

    • Jeremy Pereira
      Posted May 21, 2013 at 12:39 pm | Permalink

      My brother – a statistician – has a favourite problem: I take a coin out of my pocket and toss it ten times. It comes up heads ten times in a row. If you want to guess the next toss right, how should you call it?

      A variation on the Monty Hall problem that might shed some light on the correct answer. There are a thousand doors; nine hundred and ninety-nine have pillows behind them and one has the car. You choose one door, then Monty opens up 998 of the others, all with pillows behind. Should you switch? In making your choice, bear in mind that the probability that you chose the door with the car first time is only 1/1000.

      • Steve Reilly
        Posted May 21, 2013 at 12:43 pm | Permalink

        Coin problems usually begin with a caveat about “a fair coin”, but your brother’s begins with him taking it out of his pocket. Not sure if I’m going against an unstated rule, but I’d say heads since there’s clearly a decent chance that the coin is weighted.

        • jono4174
          Posted May 22, 2013 at 12:13 am | Permalink

          > Coyne problems usually begin with a

          prize for first place – often a signed copy of ‘Why Evolution is True’

          • Steve Reilly
            Posted May 22, 2013 at 5:19 pm | Permalink

            Well done!

        • Jeremy Pereira
          Posted May 22, 2013 at 5:22 am | Permalink

          That is correct. In fact, there’s a high probability that it is double headed. Of course, you could argue that he is clearly an adept cheat and you shouldn’t even play the game.

      • Posted May 21, 2013 at 12:55 pm | Permalink

        First I’d take the coin and check whether it had two heads; then I’d flip it a few times to check that it wasn’t obviously a biased coin, and then I’d want to give it to a disinterested party who was blindfolded to flip it if I decided it was worth trying to guess.

      • E.A. Blair
        Posted May 21, 2013 at 4:29 pm | Permalink

        Whenever I flip a coin to help me make a decision, it doesn’t matter what the result is. The reason is explained in this grook:

        A Psychological Tip

        “Whenever you’re called on to make up your mind,
        and you’re hampered by not having any,
        the best way to solve the dilemma, you’ll find,
        is simply by spinning a penny.
        No – not so that chance shall decide the affair
        while you’re passively standing there moping;
        but the moment the penny is up in the air,
        you suddenly know what you’re hoping”

        — Piet Hein

      • neil
        Posted May 22, 2013 at 8:29 am | Permalink

        There is a workaround for this. Say you want to decide by flipping a coin but you don’t know if it is fair. Start by letting the party who does not own the coin call heads or tails as usual. Agree to flip the coin in pairs until the two flips in a pair differ – say heads then tails, or tails then heads (you might have to do this a while if the coin is weighted). Also agree that if heads came first in that pair it is “heads” and if tails came first it is “tails”.
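
        This is von Neumann’s trick, and it is easy to check by simulation. A minimal Python sketch, with a hypothetical biased_flip standing in for the suspect coin (the 0.8 bias is made up for illustration):

            import random

            def biased_flip(p_heads=0.8):
                # A hypothetical weighted coin.
                return "H" if random.random() < p_heads else "T"

            def fair_flip():
                # Flip in disjoint pairs; HT -> "H", TH -> "T", HH/TT -> try again.
                while True:
                    a, b = biased_flip(), biased_flip()
                    if a != b:
                        return a

            results = [fair_flip() for _ in range(100_000)]
            print(results.count("H") / len(results))  # ~0.5 despite the 80/20 coin

        The pairing matters: HT and TH each occur with probability p(1-p) however the coin is weighted, so discarding the HH and TT pairs leaves an exactly fair call.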

      • Jeff Lewis
        Posted May 22, 2013 at 9:36 am | Permalink

        My smart ass answer to coin toss questions always used to be that there was a small but finite chance that it wouldn’t come up either heads or tails, but that it would land on edge and stay there. A few years ago, I actually had that happen when I dropped a coin – it didn’t even roll. And according to this paper, there’s about a 1 in 6000 chance of that happening with a nickel.

      • Posted May 22, 2013 at 10:57 am | Permalink

        “My brother – a statistician – has a favourite problem: I take a coin out of my pocket and toss it ten times. It comes up heads ten times in a row. If you want to guess the next toss right, how should you call it?”

        If that’s all the information we are given, then the obvious answer is to call heads. A better question is: If that’s all the information we are given, what is the probability that the next flip will be a head?

  8. Posted May 21, 2013 at 12:13 pm | Permalink

    Question 1: the probability of different colors is 2/3. Reason: look at the probability that both are the same color: 2/(4 choose 2) = 1/3 and 1 – 1/3 = 2/3.

    Or put another way: the choice of the first ball doesn’t matter. There are three choices for the second ball: 2 of them yield a different color, 1 yields the first color.

    Question 2: yes you should switch. Reason: you have a probability of 1/3 of getting right on your first selection. Since you are always shown a “wrong choice”, switching allows you a 2/3 probability of getting it right.

    This is easy to see if you change the experiment to, say, 100 doors. Pick one, then you are shown 98 doors that are wrong. Would you switch to the remaining door? I would!
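
    For anyone who would rather test than argue: a minimal Monty Hall simulation (a Python sketch, assuming the host always knowingly opens a losing, unchosen door):

        import random

        def play(switch):
            car = random.randrange(3)
            pick = random.randrange(3)
            # The host opens a door that is neither your pick nor the car.
            opened = random.choice([d for d in range(3) if d not in (pick, car)])
            if switch:
                # Switch to the one remaining closed door.
                pick = next(d for d in range(3) if d not in (pick, opened))
            return pick == car

        trials = 100_000
        for switch in (False, True):
            wins = sum(play(switch) for _ in range(trials))
            print("switch" if switch else "stick", wins / trials)  # stick ~0.333, switch ~0.667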

    • Posted May 21, 2013 at 12:14 pm | Permalink

      The second is an easy way to demonstrate Bayes’ Law.

    • JBlilie
      Posted May 21, 2013 at 1:42 pm | Permalink

      Nicely put, thanks! Your explanations really helped the old bulb go off over my head!

    • Diana MacPherson
      Posted May 21, 2013 at 5:26 pm | Permalink

      I thought of it this way too, almost like you’re doing throughput analysis and trying to avoid the defects. :) I never trust my math answers though, so I’m glad you saw it the same way.

    • Kevin Alexander
      Posted May 22, 2013 at 5:13 am | Permalink

      I first saw the Monty Hall thing years ago. It took me almost a day to figure out.
      First, I used the Rule of Trick Questions which is The Obvious Answer is Wrong so it can’t be 50% so you should either switch or not but it does matter.
      When you pick the first door you divide the three doors into two sets, the set of all doors you picked and the set you didn’t and there’s a two thirds chance your set doesn’t have the fabulous prize. That can’t change just because Monty opened one of the other doors, so you should switch.

      • sailor1031
        Posted May 22, 2013 at 5:38 am | Permalink

        What confuses the issue though is that you now get to guess again between only two possibilities. If you didn’t have the second guess your chance would be one in three still, as it was before the door was opened.

        • Posted May 22, 2013 at 7:02 pm | Permalink

          Think of it this way: after your first guess, you will always be shown a bad door. So your choices are:

          1. Stick with your first guess (which will win if you guessed correctly; probability is 1/3).

          2. Assume that you didn’t get it right the first time and switch (probability that you didn’t get it right was 2/3).

          • sailor1031
            Posted May 25, 2013 at 1:35 pm | Permalink

            Think of it this way – probability of making a correct choice between only two possibilities cannot be 1/3.

            • Posted May 25, 2013 at 3:26 pm | Permalink

              Uh, no. Don’t think of it that way.

  9. Jiten
    Posted May 21, 2013 at 12:19 pm | Permalink

    Poser 1.

    There are 12 possible ordered pairs, of which 4 have balls of the same colour, either 2 black or 2 white. So the answer is 8/12 or 66.666666%.

  10. ladyatheist
    Posted May 21, 2013 at 12:24 pm | Permalink

    Poser #1: The first time the hummingbird goes to a flower, the odds of any particular color being chosen would be 50%. Now that the first one has been chosen, the odds of the other color being chosen are 2/3, because only one of the first color remains among the three left to choose from. So the answer is 2/3.

    I remember reading the solution to the Monty Hall problem and it made my brain hurt, so I didn’t click on the link.

    If I get 50% right will you give me extra credit for showing up to work on a Friday?

    • Jeremy Pereira
      Posted May 22, 2013 at 5:38 am | Permalink

      The Monty Hall thing is quite simple, if you think about it in the right way. The switching policy always works if you chose the wrong door in the first place. The sticking policy always works if you chose the right door in the first place.

      So all you need to do is calculate the probabilities that you chose the right door and wrong door in the first place. These are 1/3 and 2/3 respectively.

      • Siegfried Gust
        Posted May 22, 2013 at 5:58 am | Permalink

        Well explained

      • Diane G.
        Posted May 22, 2013 at 11:00 pm | Permalink

        Love that “framing!”

  11. Posted May 21, 2013 at 12:27 pm | Permalink

    Does the Monty Hall problem apply to the game “Deal or No Deal”?

    In this game 26 boxes each have money in them (well, the value is written down; the money’s not actually in there), ranging from (in the UK at least; it varies internationally, I hear) 1p to £250,000.

    You start with a box. As the game goes on, you select other boxes, and you see what the value of their contents is. There’s a banker, who will offer you a sum of money (based on what values are left in the boxes). If you don’t deal with the banker you will end up in a situation where there are two boxes left, and you are given the option to swap.

    Assuming you’re in a situation where the two values left are the penny and the £250K, I’ve always assumed the Monty Hall problem still applies, and that switching is the better strategy. However, I’ve been told it doesn’t, because the boxes are removed without knowing what’s in which. I fail to see how this affects things.

    • Steve Reilly
      Posted May 21, 2013 at 12:35 pm | Permalink

      I don’t fully understand the game you’re describing. But with Monty Hall, it only works if we assume that MH 1) knows where the prize is hidden and 2) will only open a door that doesn’t contain the prize. But I’m not exactly sure if that’s relevant to your question since I’ve never seen Deal or No Deal.

      • Logicophilosophicus
        Posted May 22, 2013 at 3:45 am | Permalink

        Never seen it? You haven’t missed a thing. Basically it feels like a quiz with no questions… The mathematical basis is banal. For example, the first US $1,000,000 winner faced a final decision where she would either receive $200,000 or $1,000,000 depending on a random choice. The twist is that “The Banker” offers the contestant a “deal” – drop out of the game, taking instead the amount offered. The woman in question was offered $561,000. The mathematical expectation is obviously $600,000. The real question, obviously, is about the contestants’ need-vs-greed – and that is much more stark when the last two sums available are, at the most extreme, $0.01 and $1,000,000. If you enjoy watching people in an agony of indecision, DOND is for you. But not for me.

    • Gregory Kusnick
      Posted May 21, 2013 at 12:36 pm | Permalink

      If I’ve understood your description correctly, then you should swap. The basic idea is that the two remaining boxes or doors contain opposite prizes, one big and one tiny, and your chances of holding the big one are 1/N. In that situation the chances that the other remaining box holds the big prize must be (N-1)/N, so you should swap.

    • Tyle
      Posted May 21, 2013 at 12:53 pm | Permalink

      Those who have told you that switching wouldn’t help are correct.

      The reason that switching is better in the Monty Hall case is that the host never takes away the winning box – he always takes away one of the losers.

      This means that when the box is taken away in the Monty Hall case *you don’t learn anything new about your initial choice*.

      In the Deal or No Deal scenario, OTOH, your estimate of the worth of your initially-picked-box should increase each time another non-originally-picked-box is revealed not to be the winner. This is precisely because the boxes which are discarded were not guaranteed, by the rules of the game, to be losers.

      Suppose the game were as follows: Pick a random box. All other (24) non-winning boxes are discarded by the host, leaving your initial pick, and one other. Should you switch? Absolutely!! Let me play this game please. :)

      The difference is hopefully clear – in this scenario, when the host discards the 24 boxes, you don’t learn anything about your initial choice (because whether it was a winner or not, the host discards 24 boxes). But you do learn about the remaining unpicked box – the only way it isn’t the winning box is if you picked the winner to begin with! So you should switch.

      Meanwhile, for the game that you described, you are learning about all remaining boxes each time a non-winning box is revealed (namely, learning that one of this now-smaller-set has to be the winning box).
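
      The distinction is easy to demonstrate. Below is a sketch comparing an informed host, who always discards losers, with a blind one, who discards at random and where we keep only the games in which the winner happened to survive (the Deal or No Deal situation):

          import random

          TRIALS = 100_000

          def informed_host(n=3):
              # The host knowingly opens every unpicked loser; you switch to the survivor.
              wins = 0
              for _ in range(TRIALS):
                  car, pick = random.randrange(n), random.randrange(n)
                  if car != pick:
                      survivor = car  # the only unpicked door left is the car
                  else:
                      survivor = random.choice([d for d in range(n) if d != pick])
                  wins += survivor == car
              return wins / TRIALS

          def blind_host(n=3):
              # Unpicked doors are opened at random; void the game if the car is revealed.
              wins = games = 0
              for _ in range(TRIALS):
                  car, pick = random.randrange(n), random.randrange(n)
                  survivor = random.choice([d for d in range(n) if d != pick])
                  if car not in (pick, survivor):
                      continue  # the car was revealed early; no final choice to make
                  games += 1
                  wins += survivor == car
              return wins / games

          print(informed_host())  # ~0.667: switching wins 2/3 of the time
          print(blind_host())     # ~0.5: switching gains you nothing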

    • eric
      Posted May 21, 2013 at 1:37 pm | Permalink

      From what I’ve heard, the actual gameshow ‘Deal or No Deal’ intentionally lowballs the first two or three offers, because it’s bad television when someone just takes a briefcase and leaves with it. No dramatic tension, and now you have to do the big opening sequence (where all the ladies prance out with their cases) all over again.

      That aside, if you’re talking about the math, then here it is: your expected payoff is the average of the remaining (unopened) suitcases. If the banker offers you less, don’t take it. If the banker offers you more, take it and leave. As a starting point, with no boxes open the average is around $130k.

  12. Posted May 21, 2013 at 12:34 pm | Permalink

    Randomly, if you pick two balls from a pair of black and a pair of white balls, you’ll pick two of different colors 2/3 of the time. Whatever ball you count as “first”, there’s only one other ball that matches and two others that are different.

    The Monty Hall problem is too old to bother with; change your door selection.

    sean s.

  13. Aelfric
    Posted May 21, 2013 at 12:40 pm | Permalink

    With apologies to EnglishAtheist, who I believe is talking about something else, I will do my best to explain the Monty Hall problem. You are presented with three doors, one of which contains a prize. You pick one, say, door number 3. The host then shows you that door number 1 is a loser, and asks if you want to switch to door number 2. You had a 1/3 chance of picking the winning door initially, and a 2/3 chance of picking the wrong door. The key is that the host ALWAYS picks a losing door; the host knows more than you do. The act of the host showing you a losing door essentially “collapses” the 2/3 probability that you were wrong into the remaining, unpicked door–door #2. Thus, it is ALWAYS a better decision to switch to the unopened door, because you have a 2/3 chance of success.

  14. Posted May 21, 2013 at 12:51 pm | Permalink

    Hummingbird flower choice isn’t random, and hummingbirds eat insects as well as nectar, so they may visit a plant more than once. Here in SE Arizona, where hummingbirds are abundant and diverse, there are many endemic plant species (in several unrelated families!) that have evolved showy red or yellow tubular flowers with hummingbirds as the primary pollinators. Hummingbirds are also attracted to native trees that host lots of insects, and even visit a few non-native ornamental plants.
    In my yard, I have native trees, three or four native hummingbird shrubs in bloom at any one time for most of the year, hummingbird feeders, and two or three species of hummingbirds on any given day (six species recorded for the yard). Our hummingbirds spend most of their time in the shady, “buggy” trees (oaks, mesquite, and desert hackberry), visiting the flowers in the morning and evening (freshest flowers first, regardless of species). They use the feeders mostly in bad weather or when the plants aren’t blooming as much.

  15. Barney
    Posted May 21, 2013 at 1:04 pm | Permalink

    If you want a real argument about permutations, probability and information with a simple set-up, try this one: “I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?”

    http://www.newscientist.com/article/dn18950-magic-numbers-a-meeting-of-mathemagical-tricksters.html?full=true .

    I had probably the longest fight I’ve ever had on the web trying to persuade some people that the New Scientist, and the mathematicians quoted, had got it right. I tried different ways of explaining it, as did some others; I wrote a simple-to-understand program for them to run experimentally; some people, after a thread with over 250 replies, were accusing me of lying and falsifying the results of the program, while refusing to run it themselves, or explain why they thought it wasn’t the correct simulation. They just couldn’t accept the answer that all the mathematicians give.

    • Posted May 21, 2013 at 1:22 pm | Permalink

      I agree that the Tuesday Birthday Problem (TBP) is even less intuitive than the Monty Hall Problem. The fact that knowing the day of the week that one son is born on is relevant to the probability that the other child is a boy is shocking. I wrote a brief blog post about it here.

    • Rick M
      Posted May 21, 2013 at 5:02 pm | Permalink

      Barney & Jay

      Why is the birth order of the children factored into the equation?

      Any parent with two children has one of these combinations: BB BG GG. Eliminate GG; then BB and BG remain as combinations, so even odds.

      I absolutely think I should trust your answers but I can’t make it squeeze into my brain. I finally figured out the Monty Hall problem in the dust on my dashboard on a very lonely 500 mile trip. I’ll keep trying.

      Try this…

      Sportscoats come in a 3 or 4 button style. I never count the buttons when I buy a coat. I have 2 sportscoats, one is a 3 button, I bought it on a Tuesday. What is the probability I have two 3 button sportscoats?

      Should it be the same odds as the TBP?

      • Barney
        Posted May 21, 2013 at 5:16 pm | Permalink

        Yes, I think that should have the same odds (we assume that all days of the week are equally likely for births or purchases of sportscoats).

        As well as Jay’s blog explanation, you might try this ‘graphical’ one – we make a 14×14 grid, which represents all the possible combinations of boy/girl and day of week (boy Monday, boy Tuesday, … girl Sunday) for 2 children, one child on each axis.

        Then we pick out the combinations that we know satisfy “one is a boy born on a Tuesday” – anything in the row *or* column marked ‘boy Tuesday’. There are 27 of these – each line is 14 long, but 1 cell is where they meet. Of those, the first half of the row/column is when both are boys – each of these sections is 7 long, but, again, they intersect, and there’s a total of 13 of them. So the probability of 2 boys is 13 in 27.

        I hope this will display OK – it may come out in a proportional font, so it may not look like a tidy grid. An ‘x’ of any sort means a case where (at least) one child is a boy born on a Tuesday; a capital ‘X’ means both are boys, with (at least) one born on a Tuesday.

        ---BBBBBBBGGGGGGG
        ---MTWTFSSMTWTFSS
        -----------------
        BM-+X++++++++++++
        BT-XXXXXXXxxxxxxx
        BW-+X++++++++++++
        BT-+X++++++++++++
        BF-+X++++++++++++
        BS-+X++++++++++++
        BS-+X++++++++++++
        GM-+x++++++++++++
        GT-+x++++++++++++
        GW-+x++++++++++++
        GT-+x++++++++++++
        GF-+x++++++++++++
        GS-+x++++++++++++
        GS-+x++++++++++++
        
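        The grid count is easy to verify mechanically. A short Python enumeration (a sketch; day 1 stands in for Tuesday):

        from itertools import product

        kids = [(sex, day) for sex in "BG" for day in range(7)]  # 14 kinds of child
        families = list(product(kids, repeat=2))                 # the 196 cells of the grid
        tues_boy = [f for f in families if ("B", 1) in f]        # the marked row or column: 27
        both_boys = [f for f in tues_boy if f[0][0] == "B" and f[1][0] == "B"]
        print(len(both_boys), "/", len(tues_boy))                # 13 / 27
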
      • Gregory Kusnick
        Posted May 21, 2013 at 5:31 pm | Permalink

        Why is the birth order of the children factored into the equation?

        Because it’s a way of telling the two children apart.

        Replace the children with coins. Say I have a dime and a penny in my pocket. I toss them randomly on the table. There are four (not three) equally likely ways they can fall:

        DH PH
        DH PT
        DT PH
        DT PT

        There are two distinct ways to throw heads-and-tails and we have to count them both. So the number of ways to make at least one heads is three (not two), and I’m twice as likely to throw heads-and-tails as I am to throw all heads.

        • Rick M
          Posted May 21, 2013 at 8:31 pm | Permalink

          Thanks for the reply,Gregory.

          But, we don’t have a penny and a dime. We have two pennies (children) and each penny can land either heads or tails (girl or boy) so HH HT TT. We know one is H so only HH or HT is possible. It doesn’t seem to matter in what order the pennies were minted or the day they were minted.

          I admit I may be misconstruing the problem and will continue to ponder.

          • Gregory Kusnick
            Posted May 21, 2013 at 9:04 pm | Permalink

            But in fact the mint dates are printed right on the pennies. We might have a 1986 penny and a 2005 penny. So we’re still in the same boat as the penny-and-dime scenario, with two distinct coins and two distinct ways to make heads-and-tails:

            1986H 2005T
            1986T 2005H

            Now suppose the dates are worn down so they’re (nearly) illegible and we can’t (easily) tell the coins apart. Does that change the odds of throwing two heads? Do the odds depend on whether we have a magnifying glass handy? I hope you can see that they don’t. Whether the coins look alike is not relevant; what’s relevant is that there are two of them, and each one falls independently of the other.

            If you doubt this I suggest you actually try throwing pairs of coins and counting up the ways they fall. With enough throws you’ll find that mixed heads-and-tails outnumber all heads by two to one, whether or not you read the dates.

            • Rick M
              Posted May 21, 2013 at 9:54 pm | Permalink

              I think we are saying the same thing. In your scenario if one of the pennies has two heads (which would be the case if you translate this back to children and gender and you are told one of the two children is a boy) then it’s even odds that the coins will land HT or HH. The correct answer is supposed to be 13/27 for HH if the double-headed coin is minted on Tuesday.

          • Posted May 21, 2013 at 9:58 pm | Permalink

            The most foolproof way to solve probability problems like these is to enumerate the sample space—the set of all possible outcomes—in a way that each element of the sample space is equally likely. Then all you have to do to find the probability of the event of interest is to count the number of elements of interest and divide by the total number of elements in the sample space.

            For the two-coin flip example, assume you have two identical pennies. Then there are two sets that we could write to describe the sample space:

            {HH, HT, TH, TT} or {HH, HT, TT}.

            These sets differ in whether we consider HT and TH to be distinct outcomes. The advantage of considering them as distinct (as in the first set) is that each outcome in this set is equally likely, with a probability of 1/4. So if we want to know what the probability is of say getting 2 tails, we can just add up the number of outcomes with two tails (1) and divide by the number of elements in the set (4). Likewise, if we want to know the probability of getting exactly one tail out of two flips, we can just add the number of outcomes with exactly one tail (2) and divide by the number of outcomes in the set (4).

            This counting up of individual outcomes and dividing by the total number of outcomes in the set obviously does not work for the second set. If we tried to naively determine the probability of any of the outcomes in the second set by the method above, we would incorrectly conclude that the probability of each of the three listed outcomes was 1/3.

            In order to correctly compute the probabilities of events using the second set, we have to know the probability of each outcome in the set (ie, 1/4, 1/2, 1/4, respectively), but having to know these probabilities defeats the purpose of enumerating the sample space in the first place.

            In fact, we can see that the second set was actually derived from the first by collapsing two outcomes, HT and TH, which occur with the same probability as each other (and every other possible outcome) into a single event, so the first set is really the more fundamental enumeration of the sample space.

            • Rick M
              Posted May 22, 2013 at 3:43 am | Permalink

              Thanks for the reply Jay. I still don’t follow the logic there but that’s happened to me before. I need to learn more about probability theory.

      • Barney
        Posted May 21, 2013 at 6:24 pm | Permalink

        I just noticed “Why is the birth order of the children factored into the equation?”

        It’s not the *order*. What we are told is that one of the children (and that means “at least one”, for the 13/27 probability) was born on a Tuesday. That could be the older, it could be the younger (or it could be both, which is what makes it interesting).

        Effectively, what’s happening is that information about the children is gradually being revealed. At first, we know there are 2 children, and, if they are described by sex and day of birth, each has 2×7=14 ways of being described – so there’s 14×14=196 ways of describing the pair. Then we’re told at least one is a boy born on a Tuesday. That cuts down the possible combinations to 27. Then we’re asked to say “how many of those combinations are both boys, with at least one born on a Tuesday?” – and that’s 13. So, having been told about the Tuesday, the probability of 2 boys is 13/27.

        • Rick M
          Posted May 21, 2013 at 9:43 pm | Permalink

          I’ll keep trying, but let me detail my thought process (forget sportscoats; apparently they come in 2 or 3 button styles, not 3 or 4, or so my partner says).

          If you have the patience Barney, please read this and let me know where I go off the track.

          A parent tells me they have two children and asks me to guess their gender. I have 3 combinations to choose from – GB or GG or BB. (I could say BG or GG or BB because I don’t care which order they were born in, just the gender.)

          The parent next tells me one child is a boy and born on a Tuesday. Now I have 2 combinations – GB or BB (again, I could say BG or BB). I subtract a B, so the remaining child is either G or B. I ignore the day of birth, or astrological sign, height, or hair color because that tells me nothing about gender. I calculate a 1/2 probability of being right and toss a coin to decide.

          You are saying I should guess girl because the probability is 14/27.

          • Barney
            Posted May 22, 2013 at 12:30 pm | Permalink

            Yes – it can seem, as Jay said, ‘shocking’. The thing is that we haven’t actually said “here is one person, and some information about them; what are the chances of facts about the other?”, we’ve said “here is some information that applies to one or both of a pair of children; what are the chances of facts about the pair?” – although it’s tempting to say “I just want to consider ‘the second child'”. But really, there isn’t a ‘second’ child.

            Try this analogy:
            I have thrown 2 dice. At least one of them is a ‘6’. What is the probability that both of them are even?

            We can construct a 6×6 grid of all the possible combinations of throwing 2 dice; then restrict it to the case in which at least one of them is a ‘6’; and then work out how many combinations are ‘all even’.

            First, all the 36 combinations of 2 dice - an 'x' represents 'still possible', a '.' 'not possible':
            
             123456
            1xxxxxx
            2xxxxxx
            3xxxxxx
            4xxxxxx
            5xxxxxx
            6xxxxxx
            
            Then, cut this down to those in which at least one of them is a '6' - we are left with 11:
            
             123456
            1.....x
            2.....x
            3.....x
            4.....x
            5.....x
            6xxxxxx
            
            Then which of those are all even - 5:
            
             123456
            1......
            2.....x
            3......
            4.....x
            5......
            6.x.x.x
            

            so, when we know at least one of them is a 6, the chances of both being even are 5 out of 11. But saying the result of a die throw is a ‘6’ is the same as saying ‘it is even, and exactly divisible by 3’. So the information that at least one die is divisible by 3 affects our knowledge of how many can be even. That’s the equivalent of knowing the day of birth.
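
            The same mechanical check works for the dice version (a short Python sketch):

            from itertools import product

            rolls = [r for r in product(range(1, 7), repeat=2) if 6 in r]      # the 11 cells
            both_even = [r for r in rolls if r[0] % 2 == 0 and r[1] % 2 == 0]  # 5 of them
            print(len(both_even), "/", len(rolls))                             # 5 / 11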

    • Posted May 21, 2013 at 8:13 pm | Permalink

      Ok. I see why knowing the day of the week on which one of the children was born would have an effect. But I think the question should then be: “what is the probability that I have two boys, one of which was born on a Tuesday?”

      Perhaps people are reacting to the way the question is asked, which seems to ignore the information about the day of the week on which one child was born, and ask only “what’s the probability of having two children, both boys?”

      • DV
        Posted May 22, 2013 at 2:16 pm | Permalink

        Yep this phrasing of the question is part of the trick. But the really clear phrasing is “what is the probability that I have two boys, at least one of which was born on a Tuesday?”

        • Posted May 24, 2013 at 9:07 am | Permalink

          Of course you’re right. My phrasing could be seen to exclude the possibility that both were born on a Tuesday, which is a possibility not excluded by the setup.

          I’m a little surprised my comment didn’t get more attention. It becomes easy to see the relevance of the day of the week when the question is asked properly. I think most of the referenced hot debate about this problem is/was taking place between people arguing at cross purposes.

    • Logicophilosophicus
      Posted May 22, 2013 at 9:39 am | Permalink

      There is a lot in common between the Tuesday-boy and Monty Hall teasers.

      MH only works if it is clear that whatever you choose, a losing option will always/inevitably then be revealed. Imagine you are playing find-the-lady at a carnival sideshow (you bet you can pick out the queen from three cards which are shown, turned face down, and swiftly slid around for a few seconds, ending in a row). You point to a card, and, unexpectedly, the carny turns over one of the two remaining cards – not a queen – and offers you a switch to the third card. As Damon Runyon simply puts it, “Do not take that bet…”

      The Tuesday-boy solution (that the odds of a brother are better than 1/3 given the birthday information) only works if the day – Tuesday – was specified before the parent was picked. If he says, instead, “I have two children one of whom is a boy born on… let me check my diary… ah, it was a Tuesday,” then the chance that the second child is also a boy remains 1/3. (Well, very slightly more in the real world – they could be monozygotic twins, for example.)

      Note that the New Scientist analysis states that the tighter the restriction, the closer the probability is to 1/2. For boy-born-on-July-4th it would be very close; for boy-born-on-an-aeroplane, even closer. Sounds ok… But for a really tight restriction such as boy-called-Wesley-Trent-Snipes the probability is unity…
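
      The protocol dependence is easy to simulate. A sketch (assuming even sex ratios and uniform birth days, with day 1 standing in for Tuesday): the first case keeps every family with at least one Tuesday boy; the second conditions only on “at least one boy”, with the day looked up in the diary afterwards, so the Tuesday information adds nothing.

          import random

          N = 200_000
          TUES_BOY = ("B", 1)

          def family():
              return [(random.choice("BG"), random.randrange(7)) for _ in range(2)]

          def p_two_boys(families):
              return sum(f[0][0] == "B" and f[1][0] == "B" for f in families) / len(families)

          samples = [family() for _ in range(N)]
          # Tuesday specified in advance: keep families with at least one Tuesday boy.
          print(p_two_boys([f for f in samples if TUES_BOY in f]))                # ~13/27 = 0.481
          # Day looked up afterwards: condition only on "at least one boy".
          print(p_two_boys([f for f in samples if any(c[0] == "B" for c in f)]))  # ~1/3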

      • DV
        Posted May 22, 2013 at 1:48 pm | Permalink

        Where does 1/3 come from? If given no other information aside from there being two children and one of them being a boy, the chance of the other child being also a boy is 1/2. There are only two possible sexes for the other child.

        This is different from the 4 flowers of 2 kinds scenario.

        • DV
          Posted May 22, 2013 at 1:59 pm | Permalink

          I see now it’s 1/3 because the one boy was not identified as which specific child.

    • Marley52
      Posted June 3, 2013 at 8:34 pm | Permalink

      13/27 is the answer to the question “Of all 2 children families who have at least 1 boy born on a Tuesday what is the probability that the family consists of 2 boys?”

      That wasn’t the question asked, though. The answer to the question posed is 1/2, as is the answer to the question “I have 2 children, one is a boy. What is the probability I have 2 boys?” The answer is not 1/3 (or the question is too ambiguous to answer).

      • E.A. Blair
        Posted June 4, 2013 at 5:41 am | Permalink

        I finally realized what’s been bothering me about this part of the posers. It assumes that the probability of any given pregnancy is exactly 50% male/50% female, when, in fact, there are many factors which can predispose a particular couple towards offspring of one gender, and it’s only when we take aggregate human reproduction into account that the 50/50 ratio is approached. I’m not a biologist, but I have read enough to be fairly certain that Henry VIII should have been blaming himself, not his long-suffering wives, for the lack of a male heir.

      • Jeremy Pereira
        Posted June 4, 2013 at 7:10 am | Permalink

        No, you are wrong. The information “one is a boy” only eliminates the possibility that they are both girls. That leaves three scenarios:

        a) the oldest is a boy and the youngest is a girl,
        b) the oldest is a girl and the youngest is a boy
        c) they are both boys

        Each of those scenarios is equally likely (assuming the sex of each child is independently determined and there is no bias to one sex or the other), and in two cases out of three one of the children is a girl.

        If the question was “the oldest is a boy”, then we are back to 1/2 because, in addition to eliminating the two girl scenario, that also eliminates scenario b above.

        • Marley52
          Posted June 4, 2013 at 7:50 am | Permalink

          I’m not wrong.
          The mistake you’re making is not accounting for the fact that if I had a boy and a girl I could have equally (truthfully) said “I have 2 children, one is a girl. What is the probability I have 2 girls?”
          Just because I didn’t doesn’t mean you can assign it a probability of zero (at least not without making an assumption which is neither stated nor implied in the question asked)

          • Logicophilosophicus
            Posted June 4, 2013 at 9:59 am | Permalink

            The assumption is that the parent CAN say that one is a boy. He could say “None of your business” or “I don’t believe gender is fixed” – but the puzzle assumes that if he can claim a boy, he does.

            Personally, having dealt with language professionally all my working life, I am sure that the natural meaning of “One is a boy” is “I have one of each” – but the subsequent question makes it clear that we are in the land of mathematicians, which is far from natural. (Someone defined a mathematician as someone who fills the bath by turning the hot tap full on, the cold tap half on, and leaving the plug out…)

            • Marley52
              Posted June 4, 2013 at 10:49 pm | Permalink

              The assumption being made is not that the parent CAN say that “one is a boy”, but that the parent CANNOT say “one is a girl” when the parent has both genders. There’s no basis for making such an assumption.

              If there are 2 red cards and 1 black card face down on a table, you pick 1 but don’t look at it, I pick 1 from the other 2 cards and turn it over. It’s a red card. What’s the probability the card you picked is the black card? The answer is not 1/3.

              Just because it didn’t happen (parent says “one is a girl”), doesn’t mean it couldn’t happen. Probabilities should be assigned accordingly.

              • Logicophilosophicus
                Posted June 5, 2013 at 3:12 am | Permalink

                “…pick 1 from the other 2 cards and turn it over.”

                If you mean “pick one at random and expose it without checking what it is” then, yes, you get a different result from “knowing that at least one is red, secretly determine that a card is indeed red before choosing/exposing it.” But you don’t specify which regime applies in your statement.

  16. Posted May 21, 2013 at 1:06 pm | Permalink

    The Monty Hall problem has been around a long time. When Marilyn vos Savant originally posed it in Parade magazine and then gave the solution, she received angry rebuttals from numerous math professors with PhDs who called her an idiot, more or less. A few eventually wrote to acknowledge their error. For those who don’t care to puzzle through the math, just try playing the game with someone acting as Monty Hall 10-20 times and see what happens.

    • Posted May 21, 2013 at 2:27 pm | Permalink

      The NYT has an online version of it:

      http://www.nytimes.com/2008/04/08/science/08monty.html?_r=0

    • Posted May 21, 2013 at 2:45 pm | Permalink

      “just try playing the game with someone acting as Monty Hall 10-20 times and see what happens.” 10-20 times – Ay, there’s the rub! Contestants on the Hall show got to play the game only ONCE! What would happen if they played it 10-20 times is wholly irrelevant when you play it just once; then, whether you switch makes no difference – it’s just pure luck.

      • Steve Reilly
        Posted May 21, 2013 at 3:24 pm | Permalink

        So probability theory means nothing when an event is only happening once? Imagine we were betting on one throw of a 20-sided die. You win if 1 comes up, I win otherwise. Would you say, “Sure, that’s a fair bet. It’s just pure luck whether 1 will come up or not”?

        Switching increases the odds of winning the Monty Hall problem to 2/3. So yes, it makes a difference even if you’re playing just once.

        • Posted May 21, 2013 at 5:23 pm | Permalink

          Suppose 1 does come up and I win. Should I then have not taken the bet? I think not – but I admit that how much the bet was for would have influenced me! As for the Hall choice, I’d want the empirical data specifically about the contestants on the Hall show. How many switched, how many didn’t, how many in each category won or lost, were the actual results in accord with the probability calculations? I’ve never seen any data about that . . .

          • Posted May 21, 2013 at 5:52 pm | Permalink

            Seriously? A series of people on the show counts as data, but a series of people doing the exact same thing, just not on TV, doesn’t count? I don’t think you’ve thought this through.

          • Steve Reilly
            Posted May 21, 2013 at 5:54 pm | Permalink

            According to Leonard Mlodinow’s book The Drunkard’s Walk, “Statistics from the television program bear this out: those who found themselves in the situation described in the problem and switched their choice won about twice as often as those who did not.” But he doesn’t give a citation.

            Here’s a Times article about a simulation: http://www.nytimes.com/1991/07/21/us/behind-monty-hall-s-doors-puzzle-debate-and-answer.html?src=pm

      • Thanny
        Posted May 21, 2013 at 9:53 pm | Permalink

        Completely wrong-headed. Whether it’s one person playing the game 20 times, or 20 people playing the game once each, the probability works out exactly the same. In fact, if it’s just one person playing the game exactly once, for all time, the probability is the same.

        The simplest way to look at it is that your initial pick had a 1/3 chance of being correct and the remaining two doors have a combined 2/3 chance of being correct. Nothing that happens after your pick changes those odds. What happens is that the host removes a guaranteed-incorrect door from the unpicked doors, which means that the one remaining door inherits the 2/3 chance of being correct.

        So the sensible move is to switch to the door that has a 2/3 chance of having the prize.

  17. Eric Shumard
    Posted May 21, 2013 at 1:10 pm | Permalink

    The number of ways to choose m items from a set of n is (n,m) = n!/(m! x (n-m)!), where n! = n factorial = 1 x 2 x 3 x … x n.
    So (n,2) = the number of ways to choose 2 items from a set of n = n * (n – 1) / 2.
    From a set of n/2 white balls and n/2 black balls, the probability of choosing 2 balls that are both white is the number of ways to choose 2 white balls / the number of ways to choose 2 balls =
    (n/2, 2) / (n, 2).
    The probability of choosing 2 black balls is the same as the probability of choosing 2 white balls so the probability of choosing 2 balls of the same color is 2 x the above expression. So, the probability of choosing 2 balls of the same color =
    (after some algebra) (n/2 – 1) / (n – 1).
    This asymptotically approaches 1/2 as n increases, which is usually the immediate intuitive answer. For n = 4 (as in the hummingbird example), the answer is 1/3.
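
    A quick check of that closed form against brute-force enumeration (a Python sketch):

        from itertools import combinations
        from fractions import Fraction

        def p_same(n):
            # P(two draws match) from n/2 white and n/2 black balls, by enumeration.
            balls = ["w"] * (n // 2) + ["b"] * (n // 2)
            pairs = list(combinations(range(n), 2))
            same = sum(balls[i] == balls[j] for i, j in pairs)
            return Fraction(same, len(pairs))

        for n in (4, 6, 10, 100):
            assert p_same(n) == Fraction(n // 2 - 1, n - 1)
            print(n, p_same(n))  # 1/3, 2/5, 4/9, 49/99: creeping up toward 1/2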

    • Kevin
      Posted May 21, 2013 at 2:38 pm | Permalink

      Yeah, that’s what I came up with intuitively. Glad to see math tuition matches my math intuition.

      • Eric Shumard
        Posted May 21, 2013 at 3:07 pm | Permalink

        Or simpler, following Andrikzen’s comment below,
        P(B) x P(B|B) = (1/2) x (n/2 – 1)/(n – 1)
        P(W) x P(W|W) = (1/2) x (n/2 – 1)/(n – 1)
        …so same result.
        P(B|B) = number of black balls left after choosing one black ball / number of balls left after choosing one ball.

  18. Andrikzen
    Posted May 21, 2013 at 1:18 pm | Permalink

    P(B) x P(W|B) = 1/2 x 2/3 = 1/3
    P(W) x P(B|W) = 1/2 x 2/3 = 1/3
    1/3 + 1/3 = 2/3

  19. AR.
    Posted May 21, 2013 at 1:47 pm | Permalink

    Here is a tricky problem I give people:
    Two carts go onto a race track for a two lap race.
    Cart A does the first lap at 20 Mph, and the second lap at 30 Mph.
    Cart B does both laps at 25 Mph.
    Does A win, B win or is it a draw?

    • Steve Reilly
      Posted May 21, 2013 at 2:02 pm | Permalink

      B would win. Assume the track is 300 miles around. B does each lap in 12 hours, for a total of 24 hours. A does the first lap in 15 hours, and the second in 10 hours, for a total of 25 hours.

      The easy response is to average A’s MPHs for each lap, figure that it equals B’s MPH, and declare a tie. But obviously that doesn’t work. Averaging MPHs like that only seems to work (I think, someone correct me if I’m wrong) if you travel at 20 MPH and 30 MPH for equal times rather than for equal distances.

      • AR.
        Posted May 21, 2013 at 4:19 pm | Permalink

        You are correct.

        The way to average the MPHs is to take the harmonic mean – the reciprocal of the average of the reciprocals: 2/(1/20 + 1/30) = 24.
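
        In code, with a hypothetical lap length that cancels out of the comparison:

            lap = 300.0                    # miles; any lap length gives the same ranking
            time_A = lap / 20 + lap / 30   # 15 + 10 = 25 hours
            time_B = 2 * lap / 25          # 24 hours
            print(time_A, time_B, 2 / (1 / 20 + 1 / 30))  # 25.0 24.0 24.0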

  20. E.A. Blair
    Posted May 21, 2013 at 3:02 pm | Permalink

    The Monty Hall poser was used on an episode of NUMB3RS. I caught it on a late-night rerun last week.

  21. Posted May 21, 2013 at 3:11 pm | Permalink

    Skipping reading the thread to fail at the posers unassisted…

    Four flowers, two of each type. One bird (for the moment) and we’re going to wait for birdie to take a snack, fly away for a bit, then snack another time — not necessarily on a different flower. We’re going to “force” it to sip twice; not sipping is not allowed.

    The correct denominator should be 4 x 4 = 16 ordered possibilities in all.
    (1-1, 1-2, 1-3, 1-4, 2-1, 2-2, 2-3, 2-4…etc.)

    If two are type A (say 1 & 2), that makes 50% of the observations (the first two rows in the matrix) the correct number IF we further restrict the situation to one where a type A flower is supped from FIRST. There are more cells in rows 3 and 4 involving a type A flower, namely half of them. So the correct answer should be 75% that the bird, taking two supposedly independent sups, would hit on a type A flower *at least* once.

    Perhaps that’s not the answer one is looking for, because there would probably be some autocorrelation involved (destroying independence) — if birdie first sups from flower 1, what is the chance it will come back for a repeat, given the vagaries of position, whatever. If birdie was allowed to sup from one set of four flowers, then ANOTHER four flowers substituted for the second sup – randomizing the positions of the flowers and doing the experiment a zillion times, then hopefully we’d restore some modicum of independence.

    Monty Hall: I know that one. Not good to bash that one out here. A good one that has gotten more than a few statisticians into fistfights, even though it’s extremely straightforward, if you can just set the thing up properly.

    For problems of a more vicious variety though, try googling “two-ladder problem” and its variants. Hint: Newton invented a method for precisely these kinds of mathematical problems when he was trying to reconcile planetary motion with observation.

    • Posted May 21, 2013 at 3:27 pm | Permalink

      Ah yes, and maybe I should answer the question 1 as posed. Probability of picking *two different* type flowers (with replacement) would seem to be 50%.

      …but if the situation is restricted to having to choose two different plants (meaning: “without replacement”), then our denominator becomes 12 with 8 possibilities selecting for different types. (2/3)

      Easier to use different birds, and see how they sup initially. Measure first sup, and not bother with combinatorials and their assumptions of independence.

  22. Posted May 21, 2013 at 3:11 pm | Permalink

    Here is a lovely tutorial/walkthrough for Bayesian estimation of probabilities that I picked up on some time ago and have been evangelical (no pun, I promise) in spreading ever since.

    http://yudkowsky.net/rational/bayes

  23. Posted May 21, 2013 at 3:27 pm | Permalink

    Marcus du Sautoy explained the Monty Hall problem to Alan Davies on BBC2’s Horizon program a couple of years back

  24. Posted May 21, 2013 at 3:31 pm | Permalink

    JAC: The question: should you switch doors? The intuitive answer is “no, it doesn’t matter: the chance I’ll choose the one with the prize is 50% whether I switch or not.”

    The intuitive answer has two parts: 1) it doesn’t matter which one I choose; 2) the chance is 50% either way.

    1) is right, because a Hall show contestant plays the game only once, and probabilities, which presume more than one play, are irrelevant. 2) is wrong, because probabilities, 50% or any other, just don’t apply to a single play.

    • Steve Reilly
      Posted May 21, 2013 at 3:38 pm | Permalink

      So imagine a coin heavily weighted to tails. If you were betting on a single toss of that coin, you’d be indifferent to betting on heads or tails?

      • Posted May 21, 2013 at 5:40 pm | Permalink

        If the actual empirical data about the contestants on the Hall show (how many switched, how many didn’t, how many won or lost, etc. – which I’ve never heard of) accorded with the math, I might reconsider; but I just don’t have confidence (faith?) that math rules. But suppose – and no doubt there were some – someone who didn’t switch but won. Should that contestant have switched – and lost? I think not. The math says that 1/3 of the non-switchers will WIN. If I choose not to switch, how can I figure whether I’m in the 1/3 that will win, or the 2/3 that would lose? I don’t see any way to answer that!

        • Posted May 21, 2013 at 6:00 pm | Permalink

          I would LOVE to play cards with you.

        • Gregory Kusnick
          Posted May 21, 2013 at 6:51 pm | Permalink

          If I choose not to switch, how can I figure whether I’m in the 1/3 that will win, or the 2/3 that would lose? I don’t see any way to answer that!

          That’s the whole point. You can’t know which door is a winner. All you can do is choose the door that’s more likely to be a winner — and after the reveal, that’s the other door from the one you chose. You can still be wrong either way, but at least by switching you’ve maximized your chances of being right.

    • Gregory Kusnick
      Posted May 21, 2013 at 3:42 pm | Permalink

      Or try this:

      There’s a tumor growing in your brain. Without surgery, there’s a 90% chance you’ll die within a year. With surgery, there’s a 1% chance you’ll die on the table, and a 99% chance you’ll live to a ripe old age.

      Are you seriously saying that because you only live once, it doesn’t matter whether you have the surgery or not?

    • Posted May 21, 2013 at 4:25 pm | Permalink

      I think there’s a probable oopsie in there.

      • Posted May 21, 2013 at 5:48 pm | Permalink

        In where, exactly?

        • Posted May 21, 2013 at 11:04 pm | Permalink

          Probabilities don’t necessarily involve more than one play. That’s the beginning of where you went boomph.

          Chuck a 6-sided die in the air: what’s the chance it comes up a one? Flip a fair coin: what’s the chance it lands tails? The entire edifice of establishing likelihoods rests on an abstraction: a judgment about the “fairness” of the randomization tool being used.

          Even the probabilities of single quantum events happening or not happening… SINGLE events — heck, this stuff is so fundamental, I shouldn’t be typing so much.

          Besides missing that, you are also missing how Monty Hall reveals information when he opens the door. This is the crux of the problem – the bit that can get statisticians fighting each other until it finally clicks. For Monty to open a door is to reveal a state of the system, because if he opened the door with the prize, it would be game over. He had to open a door with a piece of shit behind it, which in essence “collapsed” a bunch of possibilities, making it twice as advantageous to switch your guess – but you have to work out why by enumerating the possibilities.

          You haven’t even gotten out of the starting gate by presupposing that probabilistic assessments can only be made in situations where there are multiple observations. That is clearly wrong. It betrays a fundamental misunderstanding of what probability means. You are confusing the very powerful abstraction (which works amazingly well) with an experimental method for figuring out probabilities by doing repeated measurements. They are not the same thing.
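
          Enumerating the possibilities by hand is quickest, but a simulation settles the argument just as well. A minimal Python sketch of the standard game (the door labels and trial count are mine):

          ```python
          import random

          def monty_trial(switch):
              """One round: Monty knows where the car is and always
              opens a goat door that the player didn't pick."""
              doors = [0, 1, 2]
              car = random.choice(doors)
              pick = random.choice(doors)
              opened = random.choice([d for d in doors
                                      if d != pick and d != car])
              if switch:
                  pick = next(d for d in doors if d != pick and d != opened)
              return pick == car

          n = 100_000
          print(sum(monty_trial(True) for _ in range(n)) / n)    # ~2/3
          print(sum(monty_trial(False) for _ in range(n)) / n)   # ~1/3
          ```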

  25. J.J. Emerson
    Posted May 21, 2013 at 3:42 pm | Permalink

    What I love about the Monty Hall problem is that whichever answer the casual observer confronts first, the correct one or the wrong one, seems intuitively obvious. And when confronted with both in quick succession, it can even seem paradoxical.

  26. Robert MacDonald
    Posted May 21, 2013 at 5:08 pm | Permalink

    Here’s a variation on the Monty Hall problem — Bertrand’s Box Paradox.
    There are three boxes in front of you, one has two silver coins inside, one has a silver coin and a gold coin, and one has two gold coins. You open a box at random and take out a coin, which happens to be gold. What is the probability that the second coin in the box is gold as well?

    • Barney
      Posted May 21, 2013 at 5:30 pm | Permalink

      I would say it’s 50% – you know you haven’t opened the box with 2 silver coins, so you know you have one of the 2 remaining boxes. You do not have more to differentiate them, so it’s 50-50.

      • Steve Reilly
        Posted May 21, 2013 at 5:59 pm | Permalink

        But isn’t it more likely, if you have gold, that it came from the 2 gold coin box than the 1 gold coin box? Twice as likely, in fact, so the chances of the other coin being gold are 2/3.

        • Robert MacDonald
          Posted May 21, 2013 at 6:28 pm | Permalink

          Points to Steve…

        • Barney
          Posted May 21, 2013 at 6:36 pm | Permalink

          I see your point. My answer would work if I picked a box, and then someone said “there’s at least one gold coin in there”. But by me picking one coin out and seeing it’s gold, I have also eliminated a case where it’s the mixed box, but I happen to pick the silver coin out first. So it is, as you say, twice as likely that I’ve picked the double gold box.
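
          In Bayes form: P(both gold | gold drawn) = P(gold drawn | GG box) × P(GG box) / P(gold drawn) = (1 × 1/3) / (1/2) = 2/3. A quick simulation (a sketch of mine) agrees:

          ```python
          import random

          def second_coin_gold(trials=100_000):
              """Pick a box at random, draw a random coin from it; given
              the coin is gold, how often is its box-mate gold too?"""
              boxes = [("S", "S"), ("S", "G"), ("G", "G")]
              gold_first = gold_both = 0
              for _ in range(trials):
                  box = list(random.choice(boxes))
                  random.shuffle(box)             # random coin comes out first
                  if box[0] == "G":
                      gold_first += 1
                      gold_both += (box[1] == "G")
              return gold_both / gold_first

          print(second_coin_gold())   # ~2/3
          ```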

  27. Posted May 21, 2013 at 5:16 pm | Permalink

    I know both of these. I used to get into a lot of arguments about the Monty Hall paradox.

    2 white W
    2 black B

    If you select ball W on your ‘first’ pick, you’re then selecting (determining your set probability) from a set that consists of B, B & W.

    Your odds are now 2/3 for WB and 1/3 for WW.

    If you select Ball B your secondary selection comes from W, W, B.

    Your odds are now 2/3 for BW and 1/3 for BB.

    Monty Hall is easy enough, as well:

    You have two sets of doors:

    Set 1 consists of one door which has a 1/3 probability of having the item.

    Set 2 consists of two doors which have a cumulative 2/3 chance of having the item.

    When you get additional information about set 2, the second SET STILL RETAINS its 2/3 probability.

    Therefore you change. It’s really easy to demonstrate.

  28. Diana MacPherson
    Posted May 21, 2013 at 5:28 pm | Permalink

    This is why Barbie said “math is hard” :)

  29. Posted May 21, 2013 at 7:28 pm | Permalink

    In the Monty Hall problem, there’s an assumption that often goes unstated: the host Monty Hall is always guaranteed to open a door with a pillow behind it – he never reveals the car.

    If Monty Hall chose from the other doors at random, then the intuitive answer would be the correct one.

    • Logicophilosophicus
      Posted May 22, 2013 at 9:59 am | Permalink

      If by the intuitive answer you mean, “It doesn’t matter,” you are dangerously mistaken. In card play we call this grandly The Principle of Restricted Choice. If Monty Hall chose randomly, he was twice as likely to pick a goat with two goats remaining as with only one – in this version you have a 2/3 chance by NOT switching.

      • Logicophilosophicus
        Posted May 22, 2013 at 11:18 am | Permalink

        Damn! so I thought – but good old total enumeration tells me that of the 4 times MH reveals a goat RANDOMLY, two are from goat+goat and two from goat+car. So (stupid me) switching improves your chance from 1/3, but now only to 1/2 rather than 2/3. Still, the switch is correct. Damn!

    • Gregory Kusnick
      Posted May 22, 2013 at 10:12 am | Permalink

      I’m not sure what you think the intuitive answer is, but if Monty reveals the car, then it’s game over, you lose, and the opportunity to switch doesn’t arise.

      In games that conform to the problem statement, i.e. he reveals a goat and offers you the opportunity to switch, the smart thing is to take it, as already explained elsewhere in the thread. Whether Monty knew in advance he’d find a goat there is irrelevant; what matters is that now you both know that the car must be elsewhere.

      • Logicophilosophicus
        Posted May 22, 2013 at 10:19 am | Permalink

        The two goats are analogous to the two gold coins…

        • Logicophilosophicus
          Posted May 22, 2013 at 11:20 am | Permalink

          But you are right about the switch :(

      • Posted May 22, 2013 at 10:47 am | Permalink

        You’re incorrect. If Monty opens one of the two remaining doors at random even though the car could be behind one of them, then there is no advantage to switching doors. As you say, “[N]ow you both know that the car must be elsewhere.” True, but the car has a 50–50 chance of being behind your door or the remaining door. Under this scenario the probability that you already have the correct door has risen to 1/2.

        The fact that Monty will never open the door that the car is behind is crucial to the problem. If Monty will never reveal the car, then Monty’s opening a door gives you no information about whether your initial choice was correct: if you have already picked the car, then Monty will reveal a goat; if you have not already picked the car, Monty will still reveal a goat. Therefore, unlike the previous scenario, Monty’s revealing a goat does not change the probability that you have already picked the car; it remains 1/3.

        However, under this scenario, Monty is giving you information about the probability that the car is behind the last remaining door. If you did not already pick the car (and the probability of this is 2/3), then the car must be behind the last remaining door. Therefore, by switching doors, you increase your chances of winning from 1/3 to 2/3.
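
        The two mechanisms are easy to compare side by side. A sketch of mine, conditioning the random-Monty games on his happening to reveal a goat:

        ```python
        import random

        def trial(random_monty):
            """Return (offer_made, switch_wins, stay_wins) for one game.
            With random_monty, Monty opens one of the two unchosen doors
            blindly, and the game is void if he exposes the car."""
            doors = [0, 1, 2]
            car = random.choice(doors)
            pick = random.choice(doors)
            others = [d for d in doors if d != pick]
            if random_monty:
                opened = random.choice(others)
                if opened == car:
                    return (False, False, False)       # car revealed
            else:
                opened = random.choice([d for d in others if d != car])
            last = next(d for d in doors if d != pick and d != opened)
            return (True, last == car, pick == car)

        for mode in (False, True):
            offers = [r for r in (trial(mode) for _ in range(100_000)) if r[0]]
            print("random" if mode else "knowing",
                  sum(r[1] for r in offers) / len(offers),   # P(win | switch)
                  sum(r[2] for r in offers) / len(offers))   # P(win | stay)
        # knowing Monty: switch ~2/3, stay ~1/3
        # random Monty:  switch ~1/2, stay ~1/2
        ```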

        • Gregory Kusnick
          Posted May 22, 2013 at 11:01 am | Permalink

          You’re right; “games that conform to the problem statement” are a different set if Monty chooses at random than if he chooses knowledgeably.

          • Logicophilosophicus
            Posted May 22, 2013 at 11:21 am | Permalink

            Agreed.

  30. garardi
    Posted May 21, 2013 at 9:43 pm | Permalink

    In the first problem there are 12 possible ways to draw 2 balls from the set XXYY:

    XX, XX, YY, YY
    XY, XY, XY, XY
    YX, YX, YX, YX

    This gives 4 matched pairs and 8 mismatches; of every three equally likely draws, one matches and two don’t.
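
    The same enumeration takes a couple of lines of Python (the ball labels are mine):

    ```python
    from itertools import permutations

    # All ordered draws of two distinct balls from X, X, Y, Y
    balls = ["X1", "X2", "Y1", "Y2"]
    draws = list(permutations(balls, 2))        # 4 * 3 = 12 ordered draws
    mixed = [d for d in draws if d[0][0] != d[1][0]]
    print(len(mixed), "of", len(draws))         # 8 of 12, i.e. 2/3
    ```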

  31. Thanny
    Posted May 21, 2013 at 10:05 pm | Permalink

    Shortcut for the first problem is to look at what’s left when you make your first pick.

    There are two ways left to end up with two different objects, and one way left to end up with two same objects. Three ways total, two of which produce a mix, ergo 2/3 mixed.

  32. Dominic
    Posted May 22, 2013 at 2:31 am | Permalink

    Isn’t this a variation of the sock drawer problem? How many socks do you have to pick to get a matching pair…

    • Dominic
      Posted May 22, 2013 at 4:06 am | Permalink

      I tried this with 4 screws, two with a spot of Tippex on the end, picking two at a time 30 times.
      Black & white came up 21 times,
      white & white 5 times,
      black & black 4 times.

      That looks pretty convincing to me.

  33. Tony Lawless
    Posted May 22, 2013 at 3:44 am | Permalink

    I offer this as a simple way to consider poser 1
    Contestant picks box 1 with a 1/3 chance or 33.33% of that being the prize
    Host picks box 3 which reveals no prize.
    There is now a 1/2 chance or 50% chance that box 2 has the prize, but box 1 still retains its value
    of only 33.33% chance
    Comparing box 2 to box 1 gives a 50.00/33.33 or 3/2 favour to box 2 having the prize

    • Tony Lawless
      Posted May 22, 2013 at 3:59 am | Permalink

      I think you meant poser 2

    • Gregory Kusnick
      Posted May 22, 2013 at 10:17 am | Permalink

      You’ve left 1/6 of the total probability unaccounted for. Once box 3 has been eliminated, P(box 1) + P(box 2) must add up to one. If P(box 1) = 1/3, then P(box 2) must be 2/3, not 1/2.

  34. Posted May 22, 2013 at 7:26 am | Permalink

    The “solution” as to whether or not you should switch in the Monty Hall problem seems wrong. Perhaps another example of statistics leading us astray (or another example of my misunderstanding statistics).

    Let’s imagine two people playing the Monty Hall game. There are three doors: 1, 2, and 3. Person A picks door 1; person B picks door 3. The host reveals that door 2 hid a goat. According to the “correct” solution to the Monty Hall problem, both players can increase their odds of winning by switching, which seems impossible.

    What am I missing? Anyone?

    • Logicophilosophicus
      Posted May 22, 2013 at 10:10 am | Permalink

      Strange but true: each will have a better chance than the original 1/3 after switching. Well, not so strange really, because the better chance (1/2) is the same even if they don’t switch. MH had a goat available to reveal, so A or B had already made the right pick.

    • Gregory Kusnick
      Posted May 22, 2013 at 10:21 am | Permalink

      What you’re missing is that you’ve now overconstrained Monty’s options. Suppose player A and player B both pick doors with goats. Now there’s no goat left for Monty to reveal.

      For Monty to reveal a goat in the two-player game, one of the players must already have chosen the car, but we don’t know which one. So each remaining door has an equal chance of hiding the car, and there’s no advantage to switching.

      • Posted May 22, 2013 at 11:46 am | Permalink

        Imagine that they’re playing the game in separate rooms, or on separate computers. It’s all the same, really. From each of their perspectives, the game is the same Monty game: they selected one door, saw a reveal, and then had the option to switch. That you get 50% winners whether both switch or neither switches seems to make it clear that switching per se isn’t what improves your odds.

        • Steve Reilly
          Posted May 22, 2013 at 11:49 am | Permalink

          But your example only works if we assume that the door neither player picks has a goat. Without that assumption, your example doesn’t make sense. And by including that assumption, you’ve changed the odds.

        • Gregory Kusnick
          Posted May 22, 2013 at 12:10 pm | Permalink

          How separate are you proposing to make them? Could the two players pick the same door? Could Monty reveal different doors to different players?

          If so, then you don’t have a two-person game; you have two independent one-person games, so of course the one-person probabilities apply to each. And Monty could find himself giving away two prizes.

          But if the games are linked — only one prize, one reveal, and (at most) one winner — then it’s not the same as the original games, and the additional constraints change the probabilities.

          • Posted May 22, 2013 at 12:26 pm | Permalink

            Sure; they could pick the same door. The only restriction I put on them is that they don’t know what other people picked. See the example I gave below.

    • Steve Reilly
      Posted May 22, 2013 at 10:49 am | Permalink

      What would MH do in your example if the prize were behind door #2? Would he reveal it, so that the game ends before anyone gets a chance to switch? That’s important, since it indicates a 1/3 chance that the game ends prematurely. And since the choice was forced on him, he’s given us no info about any door except door 2. So you can revise your chances of winning with either door to 50%, and there’s no good to come of switching.

      In the original game, it’s important that MH may or may not have a choice. You pick door number one. There’s a one-third chance the prize is there; if so, MH can pick at random between doors 2 and 3.

      On the other hand, there’s a 2/3 chance you have the wrong door. So if the prize is behind door 2, MH is forced to reveal door 3, and if it’s behind 3, he’s forced to reveal door 2. Since there’s a 2/3 chance his choice was forced on him, there’s a 2/3 chance that the door he doesn’t reveal has the prize.

      • Posted May 22, 2013 at 11:48 am | Permalink

        See the above comment. All we have to do is imagine these are different players who do not have access to what other people pick. Whether one switches or not, in that case, leads to the same numbers of wins and losses.

        • Steve Reilly
          Posted May 22, 2013 at 11:50 am | Permalink

          Nope, as I point out in my response to your response above, you’re making a crucial assumption that changes the odds.

        • Steve Reilly
          Posted May 22, 2013 at 12:08 pm | Permalink

          Actually, can you spell out how this 2 players in separate places game would work? What happens if one goes for door 1, the other for door 3, and the prize is behind door 2? MH can’t open that door for them, so each would see the other’s door open (I guess?) and then switching would be the winning move. But I’m not sure how it’s supposed to work.

          • Posted May 22, 2013 at 12:24 pm | Permalink

            Sure. Let’s say it’s a standard economics guessing game. Each subject is seated at their own computer and can only see their own choices.

            They are presented with three doors on the screen. If they select the correct door, they get some prize money (say, a dollar). This game is to be repeated a dozen times and, further, the “correct” door is the same for each person in the room.

            So let’s start with 99 subjects. Each subject makes an initial choice: door 1, door 2, or door 3. For the sake of ease of math, let’s say they pick each option equally. 33 pick door 1, 33 pick door 2, and 33 pick door 3. The program then reveals that door 2 is a bust. Everyone who picked door 2 is out.

            From the point of view of the 66 remaining subjects who picked doors 1 or 3, the game now resembles the precise Monty Hall setup: They had three choices, selected one, then had one they did not select revealed to them. They do not know what choices other people made, or even if anyone else was eliminated. In fact, they need not even know anyone else but them is playing.

            In any case, of those remaining 66 players, 33 are on door 1 and 33 are on door 3. It becomes immediately clear that the “always switch” rule makes opposing predictions: it would tell everyone on door 1 they can increase their odds of winning by moving to door 3, and everyone on door 3 that they can increase their odds by moving to door 1. If everyone switches, half of the 66 win and the other half lose; the same outcome would obtain if everyone stayed.

            So what am I missing?

            • Gregory Kusnick
              Posted May 22, 2013 at 12:41 pm | Permalink

              Your game differs from the Monty Hall game by eliminating players in the reveal. So players who survive the reveal know that they’ve survived, and that they’re now in a restricted subset of the game as a result.

              In the Monty Hall game there’s no elimination of players. There’s only one player, who always survives. Monty never reveals the player’s chosen door; he always reveals some other door. So the information the player has after the reveal is different in the two cases.

              • Posted May 22, 2013 at 12:49 pm | Permalink

                That players are eliminated doesn’t seem to change the underlying logic of the game for those still playing. People were eliminated at random, essentially, and those who were not eliminated are in precisely the same position as someone in the initial Monty Hall problem. They have no privileged access to any new information that ought to affect their choice in any way.

              • Gregory Kusnick
                Posted May 22, 2013 at 1:17 pm | Permalink

                The fact that they survived the elimination tells them they’re now in a subset of the game in which they have a 50% chance of being right. Surviving the cut raises their odds from 1/3 to 1/2, because 1/3 of the initial probability mass has been eliminated.

                In the Monty Hall game, by contrast, there’s no chance that Monty will reveal the player’s chosen door, so the player gets no new information about that door, which retains its 1/3 chance of being the winner. What he gets is information about the door Monty could have revealed but didn’t, which now has a higher chance of being a winner: up from 1/3 to 2/3.

                The key difference is that in your game the reveal is unconstrained and effectively random; in the Monty game it’s constrained by the player’s pick and by the location of the prize. If you ran 99 simultaneous Monty Hall games, you couldn’t make the same reveal in all of them. You’d have to tailor the reveal to each individual game, depending on the player’s choice, thereby injecting information about which door(s) you’re permitted to reveal.

              • Posted May 22, 2013 at 2:06 pm | Permalink

                I think I got it now, though. Weird how the involvement of other people changes it. Ah well.

              • Steve Reilly
                Posted May 23, 2013 at 10:27 am | Permalink

                Jesse, it isn’t that the involvement of other players changes anything. It’s that in your version of the game, there’s a 1/3 chance that MH will open the door you’ve chosen. That changes the game, obviously, for the 1/3 of the players who are eliminated without being given a chance to switch, and the other 2/3 are receiving different information than they do in the canonical game.

                You can imagine a game with a single player where MH will open one of the two non-prize doors at random. If it’s the one you chose, you lose, and if it’s the other, you can switch or not. That game is equivalent to yours, and in that game there’s no benefit to switching if you aren’t eliminated. Either remaining door has a 50/50 chance of winning.
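
                A sketch of that equivalent game (the labels and counts are mine) bears out the 50/50:

                ```python
                import random

                def survivor_trial(switch):
                    """Monty opens one of the two non-prize doors at
                    random; if it's yours, you're out (None). Otherwise
                    you switch or stay."""
                    doors = [0, 1, 2]
                    car = random.choice(doors)
                    pick = random.choice(doors)
                    opened = random.choice([d for d in doors if d != car])
                    if opened == pick:
                        return None          # eliminated before any offer
                    if switch:
                        pick = next(d for d in doors
                                    if d != pick and d != opened)
                    return pick == car

                n = 100_000
                for switch in (True, False):
                    outcomes = [survivor_trial(switch) for _ in range(n)]
                    alive = [o for o in outcomes if o is not None]
                    print("switch" if switch else "stay",
                          sum(alive) / len(alive))   # both ~0.5
                ```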

  35. Diane G.
    Posted May 22, 2013 at 11:21 pm | Permalink

    sub

  36. Posted May 28, 2013 at 3:19 pm | Permalink

    So, Schrödinger said, if the first cat is dead, what is the chance that the cat in one of the three remaining boxes is alive?

    /@

