Reader Peter sent me this paradox (it’s not really a “paradox” as I understand the meaning of that term, but a result that, like the Monty Hall problem, is deeply counterintuitive). It’s called Bertrand’s Box Paradox after French mathematician Joseph Bertrand, who raised it in an 1889 book on probabilities.

The setup is simple:

There are three boxes:

- a box containing two gold coins,
- a box containing two silver coins,
- a box containing one gold coin and a silver coin.

The ‘paradox’ is in the solution to this question. **After choosing a box at random and withdrawing one coin at random, if that happens to be a gold coin, what is the probability that the *next* coin drawn from the same box will also be a gold coin?**

A graphic representation:

I got it after a few minutes of cogitation, but I won’t give you the answer. The only hint is that it’s not what you’d first think, unless you’re a savant. Give your answer and reasoning in the comments, and I’ll chime in showing which answer is right. The link to the paradox at top gives the answer, but try not to look till you’ve given it a go.

It turns out that the probability is identical to that for winning by “switching doors” under the Monty Hall problem with three doors—and for pretty much the same reason.


## 237 Comments

2/3 – If you picked a gold coin, then it can’t be box 3. That means there are 3 coins left in play, 2 of which are gold.

But the rule is that you have to draw from the *same* box. That changes things.

Also there are only 2 gold coins left, not 3.

However, the Monty Hall strategy of “switch boxes if you’re allowed” still holds. This will give you a (0.5*0.5) + (0.5*1.0) chance of drawing a gold coin. Sticking with your original choice gives you a (0.5*0.0)+(0.5*1.0) chance of drawing a gold coin.

(For both expressions, the first 0.5 in each parenthesis refers to the fact that you don’t know which box you drew from. The second refers to your chance of drawing a gold coin from the other box)

“Also there are only 2 gold coins left, not 3.”

Which is why Michael already mentioned that…

But there are two ways you could have picked the gold coin from the first box but only one way you could have picked the coin from the middle box. Therefore the probability is 2/3 that you picked the coin from box 1 so 2/3 that you have a gold coin in the box you chose.

I agree with Michael. We know you have either box 1 or 2. The chance of the gold coin you picked being from box 1 — which means the next coin will also be gold — is 2/3.

I originally thought it might be 1/2, but quickly realized that there was not an equal number of gold and silver coins remaining.

Yes.

There are 3 gold coins. One of them is paired with a silver, two of them are paired with another gold.

If the gold coins are called A, B and C, then: if you happened to get coin A, the next pick will be coin B. If you happened to get coin B, the next pick will be coin A. If you happened to get coin C, the next pick will be silver.

That’s 2/3 chance of another gold.

(I was explaining the Monty Hall to an educated friend of mine last week – a highly educated constitutional lawyer – and he still couldn’t ‘see’ it. Maybe my explanation was at fault. There just seems to be something about that problem that’s counter-intuitive).

cr

There’s a 50:50 chance you get gold again. You’ve eliminated the possibility that box has 2 silver. You’re down to the remaining ball being either gold or silver.

But are you equally likely on the other two boxes?

I’m going for 2/3rds.

You’re correct for the same reason as in the Monty Hall problem.

1/2 for the same reason. I think it might be from the same box, though.

I agree. It is 50%. The first ball was gold, so it is either the gold-gold box or the gold-silver box with the same probability.

Nope. 2/3.

Yes, I knew this by the time I went to bed yesterday. I did not heed your warning to sit down and think longer, but the other readers convinced me.

Thank you for the interesting and educating problem.

Yep, 50% would be my bet too, but my math is atrocious, which makes me suspicious. But I can’t see what else it could be, even though it seems too obvious. I assume there’s a trick somewhere.

I’d agree with you, and it came to me quickly, so probably (chances are) it’s wrong. I’m not a savant.

It can’t be 50% because you already removed a gold coin, so there aren’t an equal number of gold and silver balls left.

I fail to understand how this matters, though. If you’ve pulled out a gold coin, the box with two silvers is already eliminated. It’s irrelevant to the problem because you can’t switch boxes. The probability would change from 1/2 only if you were told you could or must switch boxes. But the problem, as written here, truly leaves you a 1/2 chance of pulling another gold coin from the same box from which you’ve already extracted a gold ball.

Nope. Imagine repeating this 6 times, each time picking a ball. Write down the cases.

*Always go back to careful specification of the population* in tricky problems.

Also, symmetry. If you didn’t pick gold then you picked silver. Now look carefully, can both those cases be 50-50?

Right after I posted that, I realized my mistake. It’s much easier to explain it this way: we can forget about the third box. Pretend there are only two boxes, one with one gold and one silver ball, the other with two gold balls. If you pulled out a gold ball, then it’s more likely (66%) you chose the box with two gold balls than it is you chose the one with one gold and one silver ball (33%). The third box is still irrelevant, though. Those numbers transfer, as it’s the same with counting all the balls left: two golds and one silver.

Oh wait, I see now. It rests on the probability that you had picked a golden ball in the first place from one of the two boxes. It’s more probable that a gold ball was from the all-gold box than from the half-and-half box, so that changes the probability of which box you chose from. I get it now!

I also realize now that one can look at it this way, or from the standpoint of the three balls remaining in the two relevant boxes. After picking a gold, there are two golds and one silver left. Forget the boxes and pool those balls together, and you have 2/3 gold balls and 1/3 silver ball.

Sub

Is this at all related to the fact that switching checkout lanes at Costco always seems to result in the person in the lane you switched from getting done before the person in front of you in the new lane?

No, that’s 100% probability.

😀

If you’ve picked a gold ball you must have chosen box 1 or 2. I guess that this gives a 50:50 of the next ball being gold.

My thinking, too. This is not really about choosing coins, it’s about the boxes.

50%

Apparently, you picked a gold ball from box 1 or from box 2. If you draw from the same box, the chance of gold is 100% if it is box 1 (1 gold ball left) and 0% if it is box 2 (the one gold ball is gone).

Since I am not a savant and did not think long enough, this must be the wrong answer.

Sub (on both thoughts!)

It’s twice as likely you picked from the double gold box.

That box has two of the possible scenarios, the other box has only one.

2/3.

The probability of choosing gold from box 1 is 100%.

The probability of choosing gold from box 2 is 50%.

The probability of choosing gold from box 3 is 0%.

The question is essentially asking P(Box 1 | Gold). So, using Bayes’ Theorem, just solve P(Gold | Box 1) * P(Box 1) / P(Gold).

P(Gold | Box 1) is 100%.

P(Box 1) is 33%.

P(Gold) is 50%.

You end up with 33%/50% or 1/3 * 2 = 66%.

So there is a 66% chance the next coin you pick will be Gold.

Or, to put some English around that math. The chances that you’ve chosen Box 1 given that you’ve drawn Gold, is the same as the probability of drawing gold from Box 1 times the probability of choosing Box 1 all divided by the probability of choosing Gold period. Or, to expand the denominator out: the probability of choosing gold given you chose Box 1 plus the probability of you choosing gold given you didn’t choose Box 1.

This is actually very similar to the Monty Hall problem, with the information about the initial coin draw in the role of “Monty”. But in this case, to get both gold coins you should stick. If you were aiming to maximize profit, and you found you had drawn Silver, you should switch, just like the Monty Hall problem.

Sorry, this wasn’t actually meant to be a reply to you Mike. I guess you were the bottom most reply when I started and I forgot I had to scroll back to the top before hitting reply.

I’m with you Starr. 2/3.

Your first pick already happened. The probabilities involved in its outcome have no effect on the probability of the next pick. In fact, the existence of the box with only silvers has no relevance to the question at all, its only role is a distraction.

Your 2nd sentence is not true. Imagine picking from 1000 boxes of each type. If you pick gold, would it more likely be from a double-gold box or a gold/silver box?

Obviously the first and I see your point. Still, this involves a probability of an already happened event.

Anyway, I am not sure of my answer (especially since we were warned that it is not obvious and mine is an obvious one), but stick to it.

I also mistakenly thought it was 50%. Wikipedia has a helpful reformulation: let the boxes with two coins become cards with two faces. There is a white/white card, a black/black card, and a white/black card. The question is “after picking a card at random, what is the probability that it has the same color on both sides?”

Reading Wikipedia on the topic is kind of cheating. 🙂

The card analogy is not convincing for me, because I do not see it as a simple rephrasing. The argument that pools the contents of the two possible boxes and says that out of 3 remaining balls two are golds, is better.

BTW, I am totally going to do this experimentally once I have a lot of time. I just need three really identical boxes and a way to reliably randomize their content in every turn.

“Still, this involves a probability of an already happened event.”

Probabilities aren’t just about predicting the future.

E.g., convicting someone of murder — beyond reasonable doubt (a probability statement) — is not saying whether someone *will* commit murder, it’s saying whether someone *has* murdered.

That example has nothing to do with mathematical probability.

How many ways are there to draw Gold?

You could be in Box 1 / Position 1

You could be in Box 1 / Position 2

You could be in Box 2 / Position 1

How many ways are there to draw Silver, given Gold?

You could be in Box 2 / Position 2

If we chose 1-1, we will next draw 1-2.

If we chose 1-2, we will next draw 1-1.

If we chose 2-1, we will next draw 2-2.

There are no other options. Two out of three options give us another Gold.
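The three-case enumeration above can be checked mechanically; a minimal Python sketch (the coin labels and variable names are just for illustration):

```python
# List every equally likely (box, first-coin) draw; keep only gold first
# draws and record which coin is left behind in the same box.
boxes = [("G", "G"), ("G", "S"), ("S", "S")]

remaining = []
for box in boxes:
    for i, first in enumerate(box):
        if first == "G":
            remaining.append(box[1 - i])  # the coin still in that box

print(remaining)  # ['G', 'G', 'S'] -- two of the three cases give gold again
```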

This is (so far) the most elegant explanation.

There isn’t really a next pick. The rules state that you have to pull the second coin from the same box that you chose for the first pick. An equivalent way of stating the problem would be: given that you pulled a gold coin, what is the chance that you pulled it from the box with 2 gold coins vs. the chance you pulled it from the box with one of each coin?

Your free will in the second pick is an illusion. If you are a hard determinist like our esteemed PCC, your first pick was an illusion too.

There is no illusion of free will involved. This is random picking. Even if you believe in free will (I do not), the outcome of a truly random event is not affected by any kind of will, free or not.

As for the argument of “what is the chance that you pulled this coin from the box with 2 gold coins vs the chance you pulled it from the box with one of each coin” I understand it. It just goes against my instinct, so I am trying to handle the probabilities of the two draws as independent ones, but actually the second is dependent on the first.

Sorry, I was making a joke with the second paragraph. I should have put a 😉 after it. Please don’t take the second paragraph seriously.

I think the “paradox” works because of how they state the problem. My initial instinct was 2/3, but I had the odd sensation when I read other responses and the first one stating 50% sounded reasonable too. I had to think about it a bit to come up with a way to reframe the problem to justify my impression that you were more likely to be with the 2 gold coin box.

This isn’t complicated. After the first pick, there are two boxes left with a gold ball. The first pick was from one of those two, so 50/50 that the box will have another gold ball.

I phrased that wrong, after the first pick there is either one box with two gold balls and the other box with a silver ball, or two boxes each with a gold ball. Still, 50/50 which one you choose.

I will say 3/4 because it has to be more than 50/50

50%. Based on the premise, the box in question is either Box 1 or 3 with equal probability. Box 2 is eliminated from the problem by the premise. If the box chosen was Box 1, then the next coin chosen from that box will gold. If the box chosen was Box 2, then the next coin chosen from that box will be silver.

Arrgh! The last sentence should have referred to Box 3, not 2. Wish we could edit our own comments like we can on some other blogs.

“Based on the premise, the box in question is either Box 1 or 3 …”

Yes.

“… with equal probability.”

You sure?

Yes, I think so. The third box with only silver coins doesn’t bear on the problem, given the premise.

But you’re going to miss Box 3 half the time, owing to picking silver, so you are not equally likely to be on Box 1 or Box 3.

It bears on the problem because you know you didn’t pick a S/S box. This is information you should use to help you figure the probability.

Yes it’s true that if you only had two boxes to start with and you drew a gold coin from one of them, you would be at 50/50. But you didn’t just have two boxes, you had three.

No, not sure. I am pretty bad at these things though I will admit that I didn’t think about it too long. When I saw virtually everyone else was coming up with 2/3, I found the page on Wikipedia: https://en.wikipedia.org/wiki/Bertrand%27s_box_paradox. It is an interesting problem.

Nope. I was wrong.

I thought 2/3 immediately but am probably wrong 🙂

A cooler sounding name would have been Bertrand’s box paradox.

Cooler indeed. Bertrand’s box paradox rocks, though some curmudgeons think a pox on Bertrand’s box paradox.

Bollocks to the pox on the box paradox!

2 out of 3. You picked a gold coin so it is not box 3. There is only one silver coin left that you might pick out whereas there are two gold coins that you could still pick out.

+1

This is the simplest way to look at it, IMO.

Ignore the box factor, just focus on how many coins of each type are left.

2/3. Indeed non-intuitive.

The intuitive answer is 50% – since you know you’ve picked either the double gold box or the one gold, one silver box, then if it’s the first, the 2nd ball will be gold, and if the 2nd, the 2nd will be silver. So 50%.

But this is wrong. To see why, imagine you repeat the experiment many, many times. Ignore the times you picked from the double silver box. Of the times you pick a gold ball, 2/3 of those will be from the double gold box, and 1/3 from the one gold, one silver box. So, 2/3 of the time when you reach into the same box again, you’ll pull out a gold.

So the correct answer is 2/3.

In your example you are repeating the experiment starting from before the first ball is drawn. However, in the question asked, the result of the first draw is a preset condition, a certain event that has already happened. The experiment starts after that.

No, the experiment starts before picking the first coin, with: “After choosing a box at random and …”. That’s crucial.

The facts that we picked a box that contains at least one gold ball and then pulled a gold ball out of it are input parameters of the question with 100% probability, since it has already happened. The question starts by declaring that. If you start a series of experiments by picking random boxes, then pulling the first ball from them, you invalidate the starting premise of the question half of the time.

Agreed. So take 1000 of each type of box, then consider all the outcomes following:

Randomly pick a box,

Take a coin from it,

it is gold.

Right.

After each selection event, the odds change.

(As an example, what are the odds of tossing six heads in a row with a fair coin? 1 in 2^6, i.e. 1 in 64. Now, if you’ve just tossed 5 heads in a row, what are the odds of making it six in a row? – 50-50. Because we’ve just overcome odds of 32 to 1 to get this far).

cr

If you play the game 900 times, you’ll draw S/S 300 times, S/G 150 times, G/S 150 times, and G/G 300 times. On average.

So correct, if you’ve drawn G first, then you’re in the G/S situation 150/450 times or in the G/G situation 300/450 times.

Yes, 2/3rd is correct. It is because the second chance is not independent of the first one. Your explanation is the clearest one yet.

To make it less counterintuitive, you can also compare it to these programs where you have 3 boxes, one with a big prize. You chose one, and then the presenter opens one box, which does not contain the prize. The presenter then asks if you want to change the box you chose initially. Obviously you do, because the other box has a 2/3rd chance of containing the prize.

Oops, I just see that that comparison is the ‘Monty Hall’ the others are referring to.

My answer is that there is a 2/3 chance the next coin will be gold:

The problem indicates that you have already selected a gold coin. There are three ways to select a gold coin: select gold coin 1 from the box with 2 gold coins, select gold coin 2 from the box with 2 gold coins, or select the gold coin from the box with the gold and silver coin. Two of these ways of getting a gold coin involve you selecting from the box with 2 gold coins. One of these ways involves you selecting from the box with the silver and gold coin. Therefore, if you have selected a gold coin, there is a 2/3 chance you have selected the box with 2 gold coins, and a 1/3 chance you have selected the box with 1 silver and 1 gold coin. If you have selected from the box with 2 gold coins, the probability of selecting a second gold coin is 1. If you have selected the gold coin from the box with a gold and silver coin, your probability of selecting a second gold coin is 0. Thus your total probability of selecting a second gold coin, if the first coin you selected was gold, is (2/3) * (1) + (1/3) * 0 = 2/3.

I’m also assuming you do not replace the gold coin before drawing again. If you replace the gold coin before redrawing, your chances change to:

(2/3) * (1) + (1/3) * (1/2)

= (4/6) + (1/6)

= 5/6 chance of drawing a second gold coin
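Both numbers (2/3 without replacement, 5/6 with) fall out of the same posterior over boxes; a small sketch of the arithmetic using exact fractions (the variable names are mine, not from the thread):

```python
from fractions import Fraction as F

# Posterior over the boxes given that the first draw was gold (Bayes).
prior = F(1, 3)                           # each box equally likely a priori
p_gold_given_box = [F(1), F(1, 2), F(0)]  # GG, GS, SS
p_gold = sum(prior * p for p in p_gold_given_box)           # = 1/2
posterior = [prior * p / p_gold for p in p_gold_given_box]  # 2/3, 1/3, 0

# No replacement: second coin is gold with certainty from GG, never from GS.
no_replace = posterior[0] * 1 + posterior[1] * 0
# With replacement: certainty from GG, probability 1/2 from GS.
with_replace = posterior[0] * 1 + posterior[1] * F(1, 2)

print(no_replace, with_replace)  # 2/3 5/6
```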

I think that assumption is implicit in the problem 😉

cr

If Bertrand Russell had gay-married Joseph Bertrand, would his name have become … well, never mind that; I bet they’d have had some intense maths-and-logic pillow talk.

And if Whoopi Goldberg had married the English actor Peter Cushing she would be known as

Whoopi Cushing.

Yeah. Same with Richard Cardinal Cushing, the former prelate of Boston. But he’d taken a vow of celibacy, so not sure things would’ve worked out so well for him and the Whoop.

It is interesting that there’s a paradox named for both of them, yes.

(Russell’s paradox *is* a paradox in the sense of a self-contradictory statement.)

If memory serves, Russell showed it was not a true paradox. He pointed out that the apparent paradox stemmed from a confusion about levels of meaning. I could be wrong.

The *resolution* of how to avoid it can involve different levels of meaning, but that’s only one possible solution: it is still a paradox in (say) Frege’s logic.

The “ordinary language” version is the “barber” paradox.

There are only two boxes that have gold coins in them, so the box chosen is one of those two. One of those boxes has a second gold coin; the other does not, so the odds are 50:50, or 50%, that there will be another gold coin in the box.

2/5.

Even money. If you don’t get that immediately, stay away from the gaming tables. (Hell, stay away from the gaming tables anyway; they’re a sucker’s bet.)

It’s the fact that’s the ‘obvious’ answer that makes me think you are wrong.

Ok. I’ve thought it through further, and since the first box was chosen at random, and since the odds of selecting a gold coin first out of the mixed box are only half what they are for the double-gold box, the overall odds of selecting a second gold coin are two out of three.

I’ma stay offa the gaming tables, too. It’s too hard to make a hard-eight at craps, anyway. 🙂

If you *do* get that immediately, stay away from the gaming tables. 🙂

😀

50%

I’d say the probability that the next ball is golden is 1/2.

It’s a conditional probability. What is the probability of getting a gold coin given that the 1st chosen was gold? A tree diagram (with 21 branches and 22 nodes) can be helpful to understand the problem.

P(B|A) = P(A and B)/P(A) = (1/3)/(1/2) = 2/3

It is important to distinguish the overall probability from the probability that is left when picking the second coin/ball.

I used to know the answer to this puzzle. But then I got older.

After getting it wrong, I looked it up on Wikipedia. The key is that the box from which the first gold coin is drawn is more likely to be the one with 2 gold coins, precisely because it holds 2 of the 3 gold coins.

I think one confusion people who don’t come up with 2/3 have is that they don’t realize you need to count how many ways there are to draw. The coins/balls aren’t fungible. If you have 2 gold balls in a box, you need to imagine giving them labels like A and B. You could choose A then B, or B then A, from that box. Those are two distinct possibilities that must be accounted for.

Thus, if we call Gold A/Gold B Box 1, and Gold A/Silver B Box 2, we see that there are twice as many ways to draw gold out of Box 1 as Box 2. Given we already drew one Gold, Box 2 only gives us 1 possibility, we drew Gold A, and will get Silver B next. However there are two possibilities in Box 1. If we drew Gold A we will draw Gold B, while if we drew Gold B we will draw Gold A. Three scenarios, two of which give us another gold.

Yes. The event space S of choosing two balls has 6 elements. If g is the event of choosing a gold ball and s is choosing silver, S is: {g,g},{g,g},{g,s},{s,g},{s,s},{s,s}. If the 1st choice was g, the event space shrinks to {g,g},{g,g},{g,s}. The answer should be obvious at this point.

+1

No, it’s 2/3. 😉

I think this is a very clear and precise explanation!

I think it helps do away with the following confusion: whether the probability before you pick factors into the probability after we know the gold coin has been chosen.

CONcise is the word I was looking for

concise

Thank you sleep committee

I think this point is particularly good for understanding the answer

There are not 3 possibilities because a gold coin has already been drawn. One box is eliminated for having no gold. So the remaining possibilities are one box with one more gold coin and the other box with gold/silver, or one box with silver remaining and the other box with gold/gold. Only two possibilities, so 50/50.

There are three possibilities, I listed them.

You could choose Gold Coin A from Box 1, and will next draw Gold Coin B.

You could choose Gold Coin B from Box 1, and will next draw Gold Coin A.

You could choose Gold Coin A from Box 2, and will next draw Silver Coin B.

All three possibilities assume you have picked a gold coin already. Which one do you think is invalid such that you are left with a 50/50 shot at drawing gold?

50% But that’s probably wrong according to an argument that will sound as convoluted as arguments for God.

These puzzles always bother me because, ultimately, they boil down to semantics. Mathematically, it’s clear-cut: the correct answer (2/3) is a conditional probability (we have already observed something to happen – a gold coin on the first draw), whereas the “intuitive” answer (1/2) is the probability of an intersection of events (gold on the first draw *and* gold on the second draw).

I tell my intro probability students to always examine the sentence structure for *and* or *if* (or *given*). “And” implies unconditional probability, whereas “if” or “given” imply conditional probability. A cleverly worded question can really boil down to just semantics though, which is why I tend to avoid such examples when teaching.

Still, they are always fun to think about and to start arguments amongst your friends and colleagues!

Would it not be 33%? If you are forced to choose from the same box twice then essentially you are just asking what is the likelihood of initially choosing the box with two gold coins, 1/3.

1/3. The only way you can take out two gold balls from the same box is if you chose the first box. The odds of that were 1/3.

Wouldn’t it just be a 33% chance? If you have to choose from the same box again then it is equivalent to asking what is the probability of the choosing the box with 2 gold coins in the first instance, 33%.

Sorry, thought my first one didn’t post.

Every Fall I teach a class in counter-intuitive probabilities at an afterschool program in the East Bay. I always use Bertrand’s box, and yes, it is the same as the Monty Hall problem. It is in a limited way a kind of Bayesian analysis.

An equivalent problem is the Three Prisoners’ problem:

“Three prisoners, A, B and C, are in separate cells and sentenced to death. The governor has selected one of them at random to be pardoned. The warden knows which one is pardoned, but is not allowed to tell. Prisoner A begs the warden to let him know the identity of one of the others who is going to be executed. “If B is to be pardoned, give me C’s name. If C is to be pardoned, give me B’s name. And if I’m to be pardoned, flip a coin to decide whether to name B or C.”

The warden tells A that B is to be executed. Prisoner A is pleased because he believes that his probability of surviving has gone up from 1/3 to 1/2, as it is now between him and C. Prisoner A secretly tells C the news, who is also pleased, because he reasons that A still has a chance of 1/3 to be the pardoned one, but his chance has gone up to 2/3. What is the correct answer?”

This is a nice variation and not as identical to Monty Hall as I first thought. I think that A’s chance of the pardon has remained at 1/3 and C’s has doubled to 2/3.

Yes, I think so.

A is still 1/3, as it was from the beginning. The information won’t have changed his chances.

However, if it isn’t A then it must be C (by the terms of the question) so he must be 2/3.

cr

To a propensitist, the MH, BB and 3prisoners are only the same if the entities are determined at random.

King to prisoner:

Behind one of these doors is a fire-breathing dragon. Behind the other is a princess.

Now, which do you choose?

Prisoner: The one with the princess, of course.

(Stolen from Wizard of Id, probably)

cr

The paradox here comes down to when you start applying probabilistic reasoning.

If you start *after* picking the first coin, then there’s a 50% chance of picking another gold coin. If you start *before* picking the first coin, then there’s a 66% chance of picking a gold coin after having picked the first gold coin.

Saying this another way, if you start from the assumption that this question is asking you to update on some initial information, then you’re likely to use Bayes Theorem. If you don’t start from that assumption, then you’re likely to just count the number of remaining gold coins available between the two boxes with gold coins in them.

If you start before picking any coin, then your prior probability is 33%. When you encounter the first gold coin, then you use Bayes Theorem to update your prior. If you start after picking the first gold coin, your prior probability is 50% and no update happens. I don’t agree that you should start *after* picking the first gold coin, but this is what some people do nonetheless, and is why it seems like a “paradox”.

And like PCC said, this is the same thing that happens with the Monty Hall problem.

Is there a cat involved in this?

Cat? Box? Calling Dr. Schrodinger …

You’ll pick a gold coin from the box with 2 golds twice as often as you will from the gold/silver box. Therefore, the chance that the next coin is gold is 2/3.

Do the experiment 3,000,000 times from the beginning, before even choosing the box. Let ~ mean approximately, and it’ll be very close here percentage-wise. Then:

(A) ~ 1,000,000 times you’ll pick 2s box, and in the end ignore, because you got s not g.

(B) ~ 1,000,000 times you’ll pick 2g box, and proceed; and of course get g after proceeding.

~1,000,000 times you’ll pick the 1g and 1s box.

(C) Of those, ~500,000 times you’ll get s at first, not g, so ignore.

(D) In the remaining ~500,000 cases, you’ll get g, so you’ll proceed and have s as the remaining one.

So the updated info only involves cases (B) and (D), with ~1,500,000 experiments completed. Of these, ~1,000,000 give g as the remaining coin, and ~500,000 give s.

Therefore, the probability for g as the remaining coin ‘is’ ⅔ = 1,000,000/1,500,000.
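The tally above can be reproduced with exact integers instead of “approximately a million”; a minimal sketch mirroring cases (A)–(D):

```python
# Run the 3,000,000 experiments exactly: 1,000,000 per box.
N = 3_000_000
per_box = N // 3

gold_first_gg = per_box       # case (B): every draw from the 2g box is gold
gold_first_gs = per_box // 2  # case (D): half the draws from the 1g/1s box
# (cases (A) and (C) start with silver and are discarded)

completed = gold_first_gg + gold_first_gs  # experiments that count
print(gold_first_gg, completed, gold_first_gg / completed)
```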

I didn’t look, and won’t till morning.

I was thinking the odds of picking a 2nd gold ball is 1/2. It is now a matter of which of the two boxes is the one with two gold balls.

It’s 1/2. The coin is either gold or silver, the 3rd coin doesn’t come into it.

Ah. “All probabilities are 50%. Either a thing will happen or it won’t”. 😉

cr

Forgot to say, but clearly I’ve done it as though I were a “frequentist” not a “Bayesian”, though I’m the latter if anything.

But really I’m committed to neither. I don’t think the word ‘probability’ has yet been understood properly within non-Everettian basic physics, even though intelligent humans know how to calculate with ‘it’.

David Wallace, in what seems to me to be a terrific book explaining the ‘many-worlds’ interpretation (he wouldn’t call it that last word seriously) claims that this ‘quantum mechanics with no measurement problem’ actually comes much closer to a complete understanding of the word ‘probability’. And this despite the fact that a shallow objection to Everettian quantum theory would say: everything that could happen does happen, so how could there be any probability at all?

Keep forgetting, but of course Monty Hall can be explained in exactly the same way with 3 quadrillion experiments, and no word ‘probability’ occurring till the last sentence. So nothing original; it’s the same so-called paradox, in reality.

My answer: 2/3.

My reasoning is thus: the boxes are a distraction as far as the initial pick is concerned – there are 6 balls, 3 of them gold, so a 1/3 chance for each gold ball. You have 2 ways to pick the box with 2 gold balls and 1 way to pick the box with only one. The first box will always yield another gold, the other always gives silver, so all that matters is the probability of picking the box with 2 gold balls, which is 2/3.

Nice!

The question was: What are the odds of getting a G on the SECOND draw, GIVEN THAT the first one was G.

This condition implies that you cannot have chosen the S/S box in the first stage where you’ve chosen the box. In fact, it is already irrelevant that there were 3 (or however many) boxes at the outset – you already *have* the gold ball in your hand which means your box is either G/G or S/G with equal probability. And therefore, with equal probability, the remaining ball in the box is G. So, 50%.

Now, let’s look at the solution …

My reasoning: There are two gold balls left and one silver ball after pulling a gold ball from a box. So you have a 2 in 3 chance of pulling a gold ball. The containers are an artificial constraint in the problem.

Just think of the options:

Container 1: G G or G G

Container 2: G S or S G

Container 3: S S or S S

Since you have pulled Gold, there are only three options remaining: C1 G G, C1 G G, C2 G S. Since you don’t know which scenario you’re in, each is equally probable. So 2/3 of the remaining scenarios give you the gold.

Being the computer nerd I am, I wrote a program to simulate this problem. Results:

1st run

1000 iterations

505 times gold picked first

335 times gold picked once gold picked first

Percentage: 66.3%

2nd run

1000 iterations

507 times gold picked first

346 times gold picked once gold picked first

Percentage: 68.2%

3rd run

1000 iterations

512 times gold picked first

335 times gold picked once gold picked first

Percentage: 65.4%
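
A minimal sketch of the kind of simulation described above (the commenter’s actual program isn’t shown, so the box layout and counting here are a reconstruction, in Python):

```python
import random

def simulate(trials, seed=None):
    """Count draws where the first coin is gold, and where the
    second coin from the same box is also gold."""
    rng = random.Random(seed)
    boxes = [["G", "G"], ["G", "S"], ["S", "S"]]
    first_gold = second_gold = 0
    for _ in range(trials):
        box = rng.choice(boxes)[:]  # pick a box at random (copy it)
        rng.shuffle(box)            # random draw order within the box
        if box[0] == "G":           # condition on gold drawn first
            first_gold += 1
            if box[1] == "G":
                second_gold += 1
    return first_gold, second_gold

fg, sg = simulate(100_000)
print(sg / fg)  # hovers near 2/3, matching the runs above
```

With only 1000 iterations per run, proportions like 66.3% and 68.2% are within normal sampling noise of 2/3.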

OK, if you really want an argument about this kind of problem, try this:

https://www.newscientist.com/article/dn18950-magic-numbers-a-meeting-of-mathemagical-tricksters/?full=true

‘Gary Foshee, a collector and designer of puzzles from Issaquah near Seattle walked to the lectern to present his talk. It consisted of the following three sentences: “I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?”

The event was the Gathering for Gardner earlier this year, a convention held every two years in Atlanta, Georgia, uniting mathematicians, magicians and puzzle enthusiasts. The audience was silent as they pondered the question.

“The first thing you think is ‘What has Tuesday got to do with it?’” said Foshee, deadpan. “Well, it has everything to do with it.” And then he stepped down from the stage.’

Wow

Before I look at spoilers besides Monty Hall

I’m confused because you don’t choose between boxes

But

I get that there is a probability

… it’s just …

You can’t switch…

Hm…

If you are in box 1, you get a gold coin.

If you are in box 2, you do not get a gold.

What are the chances you are in box 1? Two thirds because 2 of the 3 gold coins are in box 1.

This repeats what Scott and Mark Ayling said but with a simpler (to me at least) logic.

No this is not why its 2/3. But it is 2/3.

You have a 2/3 chance of picking a «pure» box (pure gold or pure silver) and a 1/3 chance of having picked from the mixed box.

So 2/3 of the time you have picked a pure box, so 2/3 of the time you will pick the same color again.

No, the chance of you having picked a pure silver box is 0 because you have chosen a gold coin. We can ignore the silver boxes because we did not choose one.

Think about adding 98 more silver boxes. Before we chose, there was 99% chance we would choose a pure box but that does not mean there is a 99% chance we are in the gold box.

I reasoned that you have chosen the box with two gold coins with 2/3 probability, but that sounds too obvious. Where is the paradox?

The answer doesn’t seem counterintuitive to me, but maybe that’s from prior exposure to Monty Hall.

My first guess was 50%. Before looking at the responses here I thought maybe 75%.

My idea was: a 50% chance of picking gold from the 2nd box, 100% from the first, so the chance you picked Box 1 is 75%.

At first I was thinking 2/3 and all the 1/2 people were foolish, but then I started thinking 1/2. These problems have always given me trouble, and I have no life, so I decided to let empiricism rule the day. I ran 44 trials and came out 14 silver (dark penny) to 30 gold (shiny penny). The chance that the true odds are 50:50 yet my trials turned out that way is very slim (that’s a whole other calculation), so I’m pretty confident it’s 2/3.

I think it works this way. You can have three outcomes: gold then silver, gold 1 then gold 2, gold 2 then gold 1. There are two ways to pull out a gold second but only 1 way to pull a silver second, out of three different ways to pull out a gold first. You can also pull silver then gold, but the problem as stated excludes that outcome.

I found myself in the same situation as you, but I’m lazy so I wrote a script. Gold: 4999894 – Silver 2499691.

My intuition says it’s a 2/3 chance: since you picked gold, there’s a 2/3 chance you are in the «pure gold» box already, and therefore in 2/3 of such cases you will draw gold again. For me the answer comes if you imagine repeating the experiment 999 times: then 333 times you have picked from the mixed box (and will therefore pick the other color next), but 666 times you picked from a pure box, and will therefore pick the same color again.

Here’s another way to think about the problem: Pick a box at random. Pull out one coin at random. What’s the probability that the next coin is of the same type as the first coin? It is the same as the probability of picking a box with two coins of the same type which is obviously 2/3.

The distribution of gold and silver coins is symmetric so the probability of getting 2 coins of the same type is independent of the type of the first coin. Now only consider events where the first coin was gold. The probability of getting 2 coins of the same type, in this case gold, is still 2/3 because it cannot depend on the choice of the first coin due to symmetry.

Or use Bayes’s theorem:

Let 1G = event where 1st coin is gold

2G = event where both coins are gold

P(A) = probability of event A.

P(A|B) = Probability of event A given that event B has occurred

Bayes’s theorem states:

P(2G|1G) = P(1G|2G)*P(2G)/P(1G)

P(1G|2G) = probability that the first coin is gold given that both coins are gold = 1

P(2G) = probability that both coins are gold = 1/3

P(1G) = probability that the 1st coin is gold = 1/2

so P(2G|1G) = (1 × 1/3)/(1/2) = 2/3
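
The Bayes calculation above can be checked numerically (a small sketch using the same event definitions):

```python
# Events as defined above: 1G = first coin is gold, 2G = both coins gold.
p_2g = 1 / 3          # P(2G): one of the three boxes is double gold
p_1g = 1 / 2          # P(1G): 3 of the 6 coins are gold
p_1g_given_2g = 1.0   # P(1G|2G): the double-gold box always yields gold

# Bayes's theorem: P(2G|1G) = P(1G|2G) * P(2G) / P(1G)
p_2g_given_1g = p_1g_given_2g * p_2g / p_1g
print(p_2g_given_1g)  # 0.666..., i.e. 2/3
```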

The silver box is irrelevant because you have picked a gold coin.

Think about having 98 silver boxes. There is a 99% chance that the two coins will match UNTIL YOU PICK A GOLD. At that point, you have a two-thirds chance of being in the gold box because two thirds of the gold coins are there.

Sure, there could be a billion double silver boxes and the answer would still be 2/3, however, given the specific setup of the problem one can use a symmetry argument to get the correct answer without enumerating the probabilities of all possible outcomes. The point is that there is an elegant method to get the answer in this case without resorting to tedium.

We agree that adding 98 silver boxes does not change the outcome but, if I understand what you are saying correctly, it does make symmetry no longer relevant.

I think that means symmetry is not a valid way of looking at this problem. It just happens to work in a specific instance and is not a general method of solving this type of problem.

My solution (see 45) seems the simplest explanation but, of course, that is what I would think since it is my solution. It can be generalized to work regardless of the number of boxes (easily) or coins (with a little effort).

Symmetry is indeed a valid way of looking at this problem. Since the answer is independent of the number of double silver boxes, one is free to consider any number of double silver boxes so why not choose the configuration that makes the calculation simpler, i.e., deliberately choose to consider the configuration of one double silver box since it has symmetry between silver and gold and then argue from symmetry. This specific problem isn’t that complex so the symmetry method may not seem significantly simpler than your method but it can be a powerful technique in general. The trick of adding or subtracting data that don’t change the answer but make the calculation simpler by introducing symmetry is a common practice in the field of signal processing.

Or, if there were a billion double silver boxes one could argue that the answer is the same as a single double silver box and then still argue from symmetry.

Yep, so long as only two boxes contain gold coins.

Please see comment above.

Here’s a tip for those who do not see the answer intuitively. List all possible cases. Do this carefully, each case once. That is the “population” you sampled. The probability is given by counting cases.

I have nothing new to add, but I like this post. 🙂 (I am an academic statistician.)

The answer is 2/3. Not for Monty Hall reasons though. The MH analogue would be that you choose a box but before you choose a coin Monty opens one of the other boxes to reveal it has two silvers and asks if you would like to switch to the remaining box. The answer is yes because there is only a 1/3 probability that you chose the box with two golds and you double your odds by switching to the remaining box. With the Bertrand paradox, you learn something about the box you chose, not the boxes you didn’t choose.

See 39. for how I claim it is logically identical to Monty Hall.

“With the Bertrand paradox, you learn something about the box you chose, not the boxes you didn’t choose.”

You learn the box you did choose is gg or gs.

Therefore you learn that the box you didn’t choose is ss. So your last phrase quoted is false.

I think much of the confusion and disagreement is with people using the words “odds” or “chance” or “probability” without being crystal clear about what they think they mean. Speaking of a large number of identical experiments is the best way of doing that.

You learn that ONE of the boxes you didn’t choose is ss, you do not learn WHICH box is ss as you would in the MH problem.

You’ll win an odds-on 2-to-1 shot two out of three times.

I take back the claim of a MH analogue in this case. There is no MH analogue because Monty must be able to open an empty curtain regardless of the contestant’s choice. In my analogue, Monty cannot reveal the ss box if the contestant has chosen it. More reason to claim that Bertrand’s paradox and the MH problem are not the same.

The question as phrased is rather poor. You’re dropped into the middle of a sequence of events, where half of the results thus far possible have already been discarded.

The real question being asked is more like, “Assuming you picked a gold coin, what is the probability that you picked that gold coin from the box containing two gold coins rather than the box containing only one gold coin?”

Which makes the answer they were looking for rather obvious – you have two chances to pick gold from one box, and one chance to pick gold from the other, out of three chances total. So the probability that you picked the gold coin from the box containing two gold coins is 2/3.

The Monty Hall problem at least lets you implicitly go from start to finish, so the odds in question are the odds of winning from the very beginning. Though the scenario starts with you having already chosen a door, you aren’t excluding any of the possible choices from the start (i.e. you could have chosen either goat or the car).

This scenario is more contrived.

No, it’s asking the question “what is the probability that you pick gold given that you picked gold last time?” There’s nothing wrong with that kind of question; in fact we ask it quite frequently.

What is the probability of you having a certain disease given that you tested positive for the disease? Assume the test is correct 99% of the time and only 1 person in 10,000 has the disease.

The answer is (IMO) a little bit counterintuitive. Out of 1,000,000 people, 100 will have the disease and 99 will be correctly diagnosed by the test. On the other hand, 999,900 won’t have the disease but 9,999 of them will have been falsely diagnosed as having it. So the probability that you have the disease, having tested positive, is 99/(99 + 9,999), which is less than 1%.
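
The arithmetic in that example can be written out explicitly (a sketch using the numbers as stated: 99% accuracy, prevalence of 1 in 10,000):

```python
population = 1_000_000
prevalence = 1 / 10_000      # 1 person in 10,000 has the disease
accuracy = 0.99              # test is correct 99% of the time

sick = population * prevalence               # 100 people
true_positives = sick * accuracy             # 99 correctly flagged
healthy = population - sick                  # 999,900 people
false_positives = healthy * (1 - accuracy)   # 9,999 wrongly flagged

# P(disease | positive test) = true positives / all positives
p = true_positives / (true_positives + false_positives)
print(p)  # just under 1%
```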

Usually when having a test of that nature, you are doing so because you have manifested some of the symptoms of a disease, thus cutting down the initial population, but there are situations in which people do get tested randomly for certain conditions, like professional athletes and drug tests. I assume the authorities have thought of this, especially as they presumably do lots of tests on each urine sample provided by athletes, which raises the probability of getting at least one false positive.

Huh? If you have tested positive for a disease, assuming the test procedure is accurate, then you are 100% likely to have the disease by definition.

Maybe I misunderstand your statement. But this is definitional, no?

As for the question itself… there’s nothing wrong with the question, but it is different from “What is the probability of picking gold twice in a row?” (before any picking has started).

I said in my post that the test is 99% accurate.

That’s effectively 100%. If you stipulated 99.9999999% your position would mandate 99.9999999/9,999, still less than 1%.

You see the problem?

Rubbish. 99% is 99%. That means it gets one diagnosis out of 100 wrong, or 10 out of 1,000, or 10,000 out of a million.

How is 99% different than 99.999% with regard to affecting this comparison?

“99% is 99%” is not a serious answer. It is true, but irrelevant. I’m sure you would agree that 99% and 89% are effectively the same from your point of view. So why is 99.9999% significantly different? They all approximate 100%.

Hell, let’s go with 100%. Your analysis would say “100/9,999 is only 1%”.

Of course it is relevant. The point is that with a success rate of 99%, out of a million people that do not have the disease, 10,000 will be diagnosed incorrectly as having the disease. If the success rate is 99.9999% then only one would be diagnosed incorrectly as having the disease.

If your suppositions were true then diagnostic techniques would be of no value because pretty much NOTHING is 100% absolutely-for-certain true.

You go to the doctor. He tests you for, say Lyme disease. He says “The test is 99% accurate”.

The chances that you have Lyme disease are greater than a randomly selected un-tested member of the population. Otherwise, don’t bother going to doctors.

What do you mean “suppositions”? 99/100 means the same thing as 990,000/1,000,000. It’s a mathematical truism.

But yes, the story really does have an implication for diagnostic techniques for rare diseases. That’s why I brought it up.

Well I don’t know what the incidence of Lyme disease is in the population, but assuming it is quite low maybe 1 in 1000 or less, and 99% is the accuracy of the test, then the fact that a random person tests positive does not mean they have Lyme disease or even a high chance of having Lyme disease according to the maths.

In reality doctors do not pick random people off the street and administer tests for lyme disease; you have probably gone to the doctor because you are suffering some of the symptoms and then, if the doctor is still uncertain, he might administer a second set of tests.

It _seems_ like you are conflating two different probability situations. The probability of a randomly selected individual having a disease is not the same as the probability of an already-selected person who tests positive having the disease.

I don’t know any more about Lyme disease than you do. It doesn’t matter. The Dread Gomboo is an equally useful example. It occurs in only .00001 of the population but the diagnostic test is almost totally accurate. It only fails one time in a million. (This terrible disease has no noticeable symptoms for years but suddenly you are struck with ten minutes of intense pain and then you expire. Untreated, it is always fatal.) The only known treatment is dandelion root salads.

If you test positive for Gomboo you would be wise to make sure your dandelion garden is well tended. Your neighbor can probably just keep his weed-free garden as is. Your chances for a miserable 10 minute demise are greater than his.

No it doesn’t seem like that at all. In fact I explicitly discussed the difference in the last paragraph of my previous post.

Just adding my voice to say you are completely, 100% in the wrong on the issue of disease testing.

If you take a test (sans symptoms) with a 1% false positive rate for a very rare disease, it’s still more likely that a positive result is wrong than that you have the disease.

The a priori chance of having the disease is so much smaller than the a priori chance of a false positive that a false positive doesn’t increase the chance of actually having the disease above 50%.

That’s why multiple tests are done, and symptoms of the disease are considered. No diagnostic test is 100% accurate, and you must compound as much evidence as possible to conclude that a condition is real.

It may be a common way of posing such questions in statistics, but as the confusion about this type of problem shows, it’s a pretty poor mapping onto natural language.

You’re dropping someone into a situation where there are two possibilities, so it’s quite natural to conclude that the chances are 50%. Because they are, if that’s the only event taking place – reaching in for that last coin.

But the question is about the entire chain of events, and there are better ways of phrasing it to make that clear, in normal language.

It’s easier than Monty Hall.

You have a random gold ball. There is no chance it came from Box 2 and a 1/3 chance that it came from Box 3. The answer is then obvious.

You pick a gold coin.

There are three gold coins.

Odds that you got it from box 1: 2/3.

Odds that you got it from box 2: 1/3.

So odds of another gold coin are 2/3 since box 1 has 1 left and 2/3 are odds you picked that box.

Yes, this is the right answer.

Answer is right, but is the explanation completely convincing?

Is it not to be expected that [the underlying assumption (really the fact) that each of the 3 gold coins has equal odds of being chosen] itself needs to be justified for the explanation to be complete?

After all, the situation of the 3 coins in the boxes is not symmetric.

This is asking for a justification of lines 3 and 4 in the explanation. To emphasize, such a justification certainly does exist. But to me, it depends on thinking what “odds” is supposed to mean.

I’m never convinced completely on this type of question (especially when people call them paradoxes) until I’ve mentally done my 3 quadrillion experiments. Maybe the fact that I don’t do science of this kind professionally makes me more suspicious of what seems like overly glib explanations.

I think this is an alternatively clear and precise explanation to my other fave up there ^^^

I finally convinced myself of the correct solution to the Monty Hall problem empirically. I had a friend take an ace and two deuces and do a three card monty shuffle. [There’s that Monty again.] I’d pick a card without looking. He would pick up the other two without letting me see, and discard a deuce. After a few dozen runs, I was convinced: yes, my original pick only won about one third of the time. For some psychological reason, it then became obvious; the friend didn’t have to toss a deuce, what he in effect was saying was “If either of these two cards you didn’t pick is the ace, you win.” That is obviously a two-thirds chance of winning.

The simplest way to understand MH is that there is a 1/3 probability that the prize is behind the curtain you chose and a 2/3 probability that is behind one of the other two curtains. That doesn’t change when Monty opens one of those curtains and shows it empty (he can do that in any event), so you can increase your chances to 2/3 by switching to the remaining curtain.

I won’t dispute it sounds like they are somehow different in English. But here’s the isomorphism abstractly between them:

Translate the phrase “picks gold coin”

back and forth to the phrase “kcips the car”.

Here “kcip” is a new English verb which is the negation of the verb “pick”, in our expanded English. So really that silly phrase is the same as “does not pick car” or “does not open the curtain which hides the car”.

So I maintain the (so-called) paradoxes are logically identical (or logically isomorphic if you want fancy language).

Sorry, that should be after Darwinwins reply to my reply to 54.

I’m going with 0.5 as well. You chose either box 1 or 3 so now, no matter how likely that choice was, you have equal probability of being in 1 or 3. Unlike the Monty Hall case, you don’t have the option of switching boxes.

When I googled “Bertrand’s Paradox”, I found something abstruse about a randomly chosen chord of a circle being longer than a side of an equilateral triangle inscribed in that circle. Is this somehow equivalent to that?

“something abstruse”. There’s a conspiracy to confuse the rest of us about probability… probably.

Like the Monty Hall problem, the probabilities assume that you play the game many times; then in the long run, switching will “win” very close to 2/3 of the time.

But if you play the game only once, then (seems to me) what would happen in the long run is irrelevant – long-term frequencies just don’t apply to a single case.

Are not “long-term frequencies” exactly what you mean by probability when applied to the “single case”?

No, what I mean is that long-term frequencies are (as I said) not applicable to a single event. It’s widely thought that they are; I disagree. For something similar to my view, see S.J.Gould: “The Median isn’t the Message”

Actually, no. This was refuted in 1930 by Jean Ville. The “frequentist” account of probability is wrong (and IMO so is the Bayesian) – propensities are the way to go. 🙂

I was asking Frank what was his fundamental definition of probability, not asserting that the frequentist (incoherent) one was mine. But most here have only a sort of vague prejudice about the matter, which leads to all sorts of arguments–entertaining if mostly unenlightening.

In fact I agree with your evident feeling that both it and the Bayesian ‘philosophy’ leave much to be desired, as I expressed quite explicitly in the 2nd paragraph of 38. above. Admittedly my thoughts on this are not as extensive as they should be. I’ll try to read up more on the ‘propensity’ point of view. Right now, it is only the difficult chapters 4 to 6 of David Wallace’s “The Emergent Universe” which give me the feeling that that might be a fundamental viewpoint on probability which is coherent. But that is very hard stuff for me, and would entail a definite commitment to the many-worlds of Everett as a quasi-classical consequence of taking bare quantum theory seriously.

But for discussing a so-called paradox like this in such a crowd, surely just treating probability in a naive frequentist way is almost necessary (as I do in 35.) and quite sufficient. It doesn’t lead to the wrong answer if the logic employed is sound and complete. This despite me myself waffling on in other places urging people to think clearly about what they really mean by the words “odds” “likelihood” “probability” etc.

(as opposed to snowing most here with technical notation from math courses on probability, or with Bayes’ theorem, or with cookbook jargon from low level scientific stats).

Once again, I goofed with “..Universe” rather than “..Multiverse”!

Probability statements can be *tested* by gathering frequency data – I think that’s why people make the confusion.

The propensity interpretation is originally due to Popper, and has been defended by Mellor, Bunge, Armstrong (I think) and others.

The answer to the coin problem is 2/3, not 1/2 or anything else. Read the link on the problem.

–Professor Ceiling Cat (Emeritus)

I don’t understand the Monty Hall problem, either. It’s been a great frustration for many years.

Here’s a thought then. Imagine playing Monty Hall, but instead of picking a door the prize is behind, you pick a door it isn’t behind. You get what is behind the other doors. Say your choice is “the prize is not behind door number 1, so give me what’s behind doors 2 and 3”. What are the chances you are correct and win? 2/3. Easy to see.

Well, always switching is exactly that game. It is EXACTLY that game. Monty asks you to pick a door. You say “I pick door number 1, and I switch.” Work it out, you win if the treasure is behind either 2 or 3.

I don’t “deny” it, I promise I’ll switch doors if I’m ever on the game show, and I even claim I could pass a (basic) test about it, but I’ve never had any luck actually understanding it despite way too many hours of study. (I understand that the problem doesn’t “reset,” but I’ve not grasped why it doesn’t.)

The difference from the Monty Hall case here is that you can’t switch boxes. The result of the 2nd draw is determined 100% by whether you chose the box with one gold or the box with two golds on the first draw, because you’re constrained to do your 2nd draw from the same box. Given that you’ve drawn gold already, you could only have picked the one-gold or the two-gold box, and that doesn’t give you any clue about which one you did in fact pick; each box had the same 1/3 probability. Now that you know you’ve picked one of the two boxes that had at least 1 gold, your probability of having picked the two-gold box is 1/2.

I understand the math if the MH problem remains a “3 door” choice. What I don’t understand is why it remains a 3-door problem after Monty shows the contestant that one door does not hide the prize. At that point, why doesn’t it become a “2-door” choice?

Perhaps I can help. This is how I think about Monty Hall. This is similar to the above answer.

Let’s just think about the scenarios under which you would win:

There are 4 possible scenarios:

(1) you pick a goat and switch

(2) you pick the car and switch

(3) you pick a goat and don’t switch

(4) you pick the car and don’t switch

You win if:

(1) you initially choose the car and don’t switch

(2) you initially choose a goat and switch

You lose if:

(1) you initially choose the car and switch

(2) you initially choose a goat and don’t switch

The above statements about winning and losing are always true. Since the probability of initially choosing the car is 1/3, you will lose 1/3 of the time if you switch. There are no other scenarios under which switching causes you to lose. In other words, the only situation under which you lose after you switch is when you pick the car on your first pick, which happens 1/3 of the time (because it is 1 of 3 doors).

Conversely, the probability of choosing a goat is 2/3. You win when you switch after picking the goat. Thus, switching causes you to win 2/3 of the time. In other words, if you pick a goat, you know that switching will result in winning. Because picking a goat is more likely than picking the car initially, it follows that switching will increase your odds of winning.

Another way of thinking about it: switching basically exchanges goats for cars. Given the choice, you would certainly change the game so that there were 2 cars and a single goat, right? Well, because it doesn’t matter which goat you chose beforehand, switching after you choose is no different from switching before you choose. The fact that one door is removed does not change how likely it was that you chose a goat; that happens 2/3 of the time.
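
The switch-versus-stick reasoning above can be checked with a quick simulation (a sketch assuming the standard rules, in which Monty always opens a goat door the player didn’t pick):

```python
import random

def monty_trial(switch, rng):
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that hides a goat and wasn't picked
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
print(sum(monty_trial(True, rng) for _ in range(n)) / n)   # near 2/3
print(sum(monty_trial(False, rng) for _ in range(n)) / n)  # near 1/3
```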

Well, without reading further, I’d say your odds are fifty-fifty. After all, there are only two boxes which contain gold coins, and you’ve pulled a gold coin out of one of them, so the odds are even that you’ve got the box with the two gold coins.

2/3. This is conditional probability in action. There are 3 gold balls at first, all equally likely to be chosen. 2 of those lie in the box of 2, 1 lies in the box with the silver. So this is really P(two-gold box | gold was selected) = P(gold | two-gold box) P(two-gold box) / [P(gold | two-gold box) P(two-gold box) + P(gold | mixed box) P(mixed box)] = (1 × 1/3) / ((1 × 1/3) + (1/2 × 1/3)) = 2/3.

The simplest way to understand this is to number the 6 balls. 1, 2, & 3 are Gold, 4, 5 & 6 are silver. So before you’ve done anything, there are 6 equally likely events, corresponding to the 6 balls. We know that 4-6 didn’t happen. But 1, 2 & 3 are equally likely. In the case of 1 & 2, you’re going to get a 2nd Gold ball; 2 times out of 3. So the probability is 2/3.
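
Numbering the balls as above, the conditional count can be confirmed by brute-force enumeration (a small sketch):

```python
# Boxes as pairs of numbered balls: 1-3 are gold, 4-6 are silver
boxes = [(1, 2), (3, 4), (5, 6)]
gold = {1, 2, 3}

gold_first = 0    # equally likely cases where the first draw is gold
gold_second = 0   # of those, cases where the second draw is also gold
for a, b in boxes:
    for first, second in [(a, b), (b, a)]:  # two equally likely orders
        if first in gold:
            gold_first += 1
            if second in gold:
                gold_second += 1

print(gold_second, gold_first)  # 2 3
```

Two of the three equally likely gold-first cases (balls 1 and 2) are followed by gold, giving 2/3.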

Let’s simplify the problem temporarily and say there are only two boxes, one with GG and one with GS. If we draw G on the first draw, as stated in the problem, then the odds of drawing G on the second draw are obviously 50/50: either we originally drew from the GG box or we originally drew from the GS box.

Now let’s complicate the problem and say that *after* drawing the G on the first draw, we set a third box on the table next to the first two, and the third box has SS. Does that change the odds on the second draw? Of course not. The rules of the problem require that we draw the second time from the box we drew from the first time, so the presence of the third box is irrelevant because we can’t draw from the third box by the rules of the problem. So if we place the third (SS) box on the table *after* we draw the G on the first draw, the odds of drawing another G are 50/50.

Now let’s say we place the third (SS) box on the table *before* the first draw. We are now in the position of the problem as stated. The consensus at this web site ((PAWS)(Praise Always the Web Site)) seems to be that the odds in this case are 2/3 for drawing G on the second draw.

So now we seem to have a true paradox. If we place the third box on the table *before* the first draw, the odds of drawing G on the second draw are 2/3, but if we place the third box on the table *after* the first draw, the odds of drawing G on the second draw are 1/2. Why should the time when we place the third box on the table affect the odds on the second draw?

In the first scenario with only two boxes, the probability of having chosen the box with GG is 2/3, not 1/2. Each of the four balls is equally likely to be chosen; among the three gold choices, two are from the GG box.

“the odds of drawing G on the second draw are obviously 50/50” – why ‘obviously’? It turns out you’re wrong, so your leap to a conclusion here was incorrect.

In your starting scenario, there were 4 possible balls you could have picked first; it’s gold, so we eliminate the scenario where silver is picked first. So we know we’re in 1 of 3 situations: 1 is that we picked the gold ball in the box with the silver one, and two are that we’ve picked one of the two gold balls. We can’t switch which balls are where, so this has given us the answer: the chances that the other ball in the box is gold is 2/3.

Because the question setter is playing the part of Monty Hall and has essentially cheated and looked inside the boxes. Monty has not played fair and has ruled out a disproportionate number of potential silver-second outcomes. He can only do this if you have had the option of these silver-second possibilities in the first place. It is crucial that the silver-silver box is on the table at the time of your choice or else Monty will not have this power to cheat. The puzzle is highly counter-intuitive for most people. Do not play poker against anyone for whom it is obvious.

My first answer was 1/2, but because the Monty Hall problem was mentioned, I think it’s 2/3. I only got the reasoning down afterwards.

What we have: a gold ball

Probability of the box we chose from being

1) double silver: 0

2) double gold: 2/3

3) mixed: 1/3

Probability of the boxes containing another gold ball

1) double silver: 0

2) double gold: 1

3) mixed: 0

Probability of getting another gold ball GIVEN that you have a gold ball

= (2/3)×1 + (1/3)×0

=2/3

Did I do good? 😀

Yes, excellent. Very clear.

I tested this empirically. Using 10,000 randomly sequenced numbers from random.org, picking the last digit of each, discarding all the 9s, then scaling the remaining digits to be 1, 2, or 3 to represent the boxes in the order they are shown in the diagram, I found that box 1 (containing 2 golds) came up 3000 times of the 6000 times where box 3 (containing no golds) was not the one chosen. Since we are told that the 2nd gold has to come from the same box as the one chosen first, you will get the 2nd gold if and only if box 1 is chosen for the first gold. In 9000 random trials that should come up 3000 times, and random.org confirms it. Mind you, the random numbers in that sequence don’t repeat, but the last digits should still be (and are) randomly distributed.

It is certainly 2/3.

Here is an easy way of thinking about it. Assume that you have picked a gold coin. You don’t know which box it was from, but you know it’s a gold coin.

If you picked box 1, you will always get a gold coin. If you picked box 2, you will get a gold coin 1/2 of the time. Therefore, it is twice as likely that you picked the gold coin from box 1 than from box 2. Hence,

Pr[box 1] = 2*Pr[box 2] (1)

Pr[box 1] + Pr[box 2] = 1 (2)

3*Pr[box 2] = 1 (3)

Pr[box 2] = 1/3 (4)

QED.

Gotta be 50/50 but that would be too easy

My logic was that even though the one box contains two gold coins, it doesn’t matter that there are two. Since one box gives you a 100 percent chance and the other gives you a zero percent chance. The all-silver box can be ignored altogether. Maybe I am wrong though, because I found that easy.

You are right that it is not a paradox, but as we can see from the many answers that do not give 2/3, it is not really intuitive either.

We were discussing this in the pub, as you do, & the problem is that you introduce knowledge into the system…

Hey folks – run the experiment 100 times & then get back to us – with evidence! 🙂

Yes, those who still say 50% despite the link’s two proofs should just go ahead and use three boxes with two different types of coins. THAT will settle it.

There are 3 ways to pull a gold coin.

You can pull the first gold coin from the two gold coin box (win), the second gold coin from the two gold coin box (win),

or the single gold coin in the one gold coin box (lose).

The chance of the other coin being gold is therefore 2 out of 3.

The reason it is confusing is because you have 3 boxes with 2 coins, and it’s easy to get those 1/2 and 1/3 chances mixed up unless you actually count the possible choices.

If you had 999 Gold+Gold boxes, 1 Gold+Silver box, and 1 Silver+Silver box, then the illusion falls apart. Few people would then think the chance is anywhere near 50%.
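The coin-counting argument generalizes neatly to the 999-box variant. A minimal sketch with exact rational arithmetic, assuming boxes are chosen uniformly (the function name is my own):

```python
from fractions import Fraction

def p_second_gold(gg_boxes, gs_boxes):
    """Probability the second coin is gold, given the first was gold.

    With boxes chosen uniformly and each box holding 2 coins, every
    individual coin is equally likely to be the first draw, so we can
    just count gold coins. Silver-silver boxes never produce a first
    gold, so they drop out of the count entirely.
    """
    gold_coins = 2 * gg_boxes + gs_boxes  # ways to draw a gold coin first
    winning = 2 * gg_boxes                # those whose box-mate is also gold
    return Fraction(winning, gold_coins)

print(p_second_gold(1, 1))    # classic puzzle: 2/3
print(p_second_gold(999, 1))  # 999 GG boxes: 1998/1999, nowhere near 50%
```

With 999 gold-gold boxes the answer is 1998/1999, which makes the illusion collapse just as the comment says.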

50/50

You’ve eliminated the box with the two silver balls, so either you have the one with two gold, or the one with one gold, one silver.

Here’s a question for the maths junkies. To my ear, this puzzle sounds like it relates to Bayesian statistical inference. Is there a prior probability involved based on having picked gold initially?

It can be solved using Bayes theorem. In fact that was my first answer in this thread. I think it is clearer why the answer is 2/3 by just doing some simple counting though.

To reiterate the Bayesian calculation though. Asking whether the next coin will be gold is equivalent to asking: What is the probability I drew from Box 1 (containing 2 gold coins) given that I drew a gold coin.

P(Box 1 | Gold) = P(Gold | Box 1) * P(Box 1) / P(Gold).

All the terms on the right side of the equation are known:

P(Gold | Box 1) is 1 (all coins in Box 1 are gold)

P(Box 1) is 1/3 (there are 3 boxes)

P(Gold) is 1/2 (there are 3 gold coins and 3 silver coins)

So P(Box 1 | Gold) = 1 * 1/3 / 1/2 = 1/3 * 2 = 2/3.

Alternatively, you could ignore Box 3 (2 silver) and make the equivalent calculation:

P(Gold | Box 1) is 1

P(Box 1) is 1/2

P(Gold) is 3/4

1/2 * 4/3 = 2/3.
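Both versions of the Bayes calculation above can be checked with exact rational arithmetic; here is a minimal sketch using Python’s `fractions` module (variable names are my own):

```python
from fractions import Fraction

# Version 1: all three boxes considered.
p_box1 = Fraction(1, 3)           # prior: 3 equally likely boxes
p_gold_given_box1 = Fraction(1)   # both coins in Box 1 are gold
p_gold = Fraction(1, 2)           # 3 of the 6 coins are gold

posterior = p_gold_given_box1 * p_box1 / p_gold
print(posterior)   # 2/3

# Version 2: ignore Box 3 (silver-silver) from the start.
posterior2 = Fraction(1) * Fraction(1, 2) / Fraction(3, 4)
print(posterior2)  # 2/3
```

Using `Fraction` rather than floats keeps the arithmetic exact, so the two routes to 2/3 agree to the digit rather than approximately.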

Thanks.

Bayes’ *theorem*, which has only a loose connection to Bayesian statistics.

2/3, because it’s twice as likely the box you picked the first gold ball from was the box with two gold balls in it instead of the box with one gold and one silver.

My favorite *real* paradox is The Paradox of the Unexpected Hanging. I haven’t had any success in resolving it.

How is this an inaccurate description of the state of affairs at the relevant time?

“There is a box. It has a ball in it. The ball is either gold or it is silver. What are the chances that it is gold?”

I’m not sure how you mean that, but maybe you’re talking about the fallacy of characterizing an uncertain state of nature as an objective probability or “chance”, when, strictly speaking, probability can only apply to events happening in the future, and the question should more properly address personal level of confidence based upon knowledge.

Or maybe that’s not what you meant at all.

Does it really make a difference if I’d worded it “If I take the ball out of the box what are the chances that it will be gold?”

This is not like the Monty Hall problem in that you can’t switch doors.

“What are the odds that your next coin will be gold?”

Exactly the same as the odds it will be silver. It must be either gold or silver therefore 50/50. Any previous action is irrelevant. You could have 100 boxes with only silver and it would not change the odds.

Monty Hall’s key role in his problem is not to offer the choice, it is to make the reveal. He removes some of the possible outcomes but he does not do so ‘fairly’ and the remaining outcomes are skewed such that it is smart to switch. The role of Monty in this gold and silver problem is played by the question setter when they say ‘you got a gold coin first’. That removes some of the possible outcomes but again in a skewed way such that your odds of having a gold coin left in the box are greater than your intuition is telling you.

Your last sentence is correct. Having more silver boxes is not relevant once you are told that your first coin is gold. The question setter swept those possibilities away and left you with the three remaining ones just as before.

Exactly right. Monty deliberately chooses a door to open that he secretly knows does **not** reveal the grand prize, and so, in a sense, “does not do so fairly.”

If Monty was known to have no knowledge of where the big prize was, and he randomly chose to open an unchosen door at the risk of exposing the prize, there would be no gain for the contestant in switching choices.

The rules:

Don’t look inside the boxes.

The second pick must be from the same box as the first.

The data:

1 Gold has been picked from a box. This implies the box with 2 silver balls is out of the game.

The real challenge:

Establish the probabilities of HAVING PICKED THE BOX WITH TWO GOLD COINS IN THE FIRST DRAW BECAUSE THIS IS THE ONLY WAY TO OBTAIN 2 GOLD COINS.

Answer: 50%

Why?: At this point, there are only two boxes but, according to the rules, you can pick ONLY from the same box the first draw came from (no free will). This box can have a gold or a silver.

There seem to be (at least) four approaches people use here:

a) A more or less “direct” approach that sticks as closely as possible to the particular elements of the problem and defers calculation as long as possible — even altogether. Number 40 is an example, as is my own method back in response to 1. Some people who use this method may also add to the original conditions by, say, explicitly distinguishing the different coins.

b) A systematic approach that organizes possibilities using an exhaustive list or tree diagram and then counts up the result. This may also involve distinguishing the coins more closely.

c) Bayesian formalism

d) Empirical / simulation of trials.

Approach (a) seems (at least to me as a user of it) to directly get at the “why” of the result. On the other hand, however, it also seems to be the method most used by those who get the wrong answer. Approach (b), while seeming sort of “brute force-ish,” uses a powerful, reliable heuristic that can be applied to many other problems. (c) and (d) are the most general, sophisticated approaches, though I could still imagine after using either of these to get the answer being left with a sense of surprise — and of wanting to see an explanation that appeals more directly to the problem situation.

Aha! The key is not the selection of the box, but the draw of the first ball. The third box is a distraction, because once you are down to two boxes, GG and GS, and you have drawn a gold ball, it is more likely that you have drawn it from the GG box.

When you choose the GS box, you will draw a gold ball half the time. When you choose the GG box, you will draw a gold ball all the time.

“All the time” is twice as often as “half the time,” as 2/3 is twice 1/3.

First time you picked one of the two boxes with gold. Fifty-fifty it was the one with two golds. Same box second time. Fifty percent chance you’ll find gold again. Not sure why this should be hard or what I’m likely missing.

See the next day’s post for the answer to what you’re missing.

Think in terms of balls, not boxes. Number the balls 1 through 6. Balls 1, 2 and 3 are gold. Balls 1 and 2 are together in the first box. When you make your pick, the boxes are irrelevant, because you’re picking a ball — a random number from 1 to 6. Ignore the 3 out of 6 cases where you pick a silver ball first. If your first pick is gold, it’s 2/3 likely you picked ball 1 or 2, in which case your second pick (the other ball in that box) will also be gold. It’s 1/3 likely you picked ball 3, in which case your second pick will be silver. 2/3 vs. 1/3.
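The six-ball argument above can be spelled out as a tiny enumeration (a Python sketch; the numbering follows the comment, with the box-mate pairings made explicit):

```python
# Number the balls 1..6: balls 1-3 are gold, 4-6 are silver.
# Box mates: (1,2) gold-gold, (3,4) gold-silver, (5,6) silver-silver.
mate = {1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5}
gold = {1, 2, 3}

# Condition on the first pick being gold, then ask about the box-mate.
first_gold_picks = [b for b in mate if b in gold]
wins = [b for b in first_gold_picks if mate[b] in gold]

print(len(wins), "of", len(first_gold_picks))  # 2 of 3
```

Because all six balls are equally likely first picks, counting the three gold cases directly gives the 2/3 without any probability formulas.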

The problem is phrased thus in the OP:

“After choosing a box at random and withdrawing one coin at random, if that happens to be a gold coin, what is the probability that the next coin drawn from the same box will also be a gold coin?”

Phrased in this way, the probability must be 50%, since the probability of the first coin being gold is not part of the overall probability, but an already-given starting condition. You have a gold coin *already* in your hand. Re-run it a thousand times or a million, you’re always starting at that point, with the gold coin already chosen as part of the ground conditions. With the choices still available to you, one box yields a probability of 100%, the other of 0%, so without knowing which, the overall probability can only be 50%.

Any conceptualization of the problem that results in a 2/3 probability must include the probability of choosing the first coin. That would require the original statement of the problem to be re-phrased.

The counterintuitiveness of the Monty Hall problem also rests on a condition that is often glossed over or fudged in the phrasing of the problem: that the door the host opens after the contestant’s first choice cannot be *either* the prizewinning door *or* the door the contestant has chosen. That changes the probability because it forces the host to reveal that one of the unchosen doors definitely does not contain the prize, thereby changing the information we have about the other one. Without this condition, the phrasing of the problem can make it seem as if the mere fact of choosing can change what is behind a door.

When Joseph used the word “paradox,” he didn’t mean the problem itself. He meant an explanation for why the answer can’t be 1/2, since it would create a paradox.

What if you take a coin out of the box, but don’t look at it? Instead, you keep it in a closed fist and ask “what is the probability that this (unknown) coin is the same as the one still in the box?” This has to be 2/3.

But if revealing it to be gold changes the probability to 1/2, revealing it to be silver would also change the probability to 1/2. That’s a paradox

Oops, hit “return” too early. It’s a paradox since, if the probability changes to 1/2 regardless of what you reveal, it must already be 1/2. But it’s 2/3.
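The closed-fist argument can be checked numerically too: draw a coin without “looking” at it and tally how often it matches the coin left in the box (a hypothetical Python sketch):

```python
import random

boxes = [["gold", "gold"], ["gold", "silver"], ["silver", "silver"]]

trials = 100_000
matches = 0
for _ in range(trials):
    # Pick a box, draw one coin unseen; the other stays in the box.
    drawn, left = random.sample(random.choice(boxes), 2)
    if drawn == left:
        matches += 1

# The unseen coin matches its box-mate about 2/3 of the time,
# whatever color it later turns out to be.
print(matches / trials)
```

No conditioning is needed here: two of the three boxes hold matching pairs, so the match rate is 2/3 before anything is revealed, which is exactly the point of the closed-fist version.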

The problem is not just similar to Monty Hall; with a clever transformation it is identical. Say you pick Door #1. The game is “GS” if that door has the car, “GG” if Door #2 does, and “SS” if Door #3 does. If Monty opens #3, you know the game has at least one G, but not if it has two.

There’s another famous variation that is identical to a box problem with four boxes; the extra one also has one of each kind of coin. “A woman you know has two children, and you know that at least one is a boy. What is the probability that both are boys?” Most books you see this in will say the answer is 1/3, which uses the incorrect method that says the answer to the three-box version is 1/2. The paradox I described says the four-box answer must be 1/2.