Newcomb's Paradox

Veritasium recently posted this video on Newcomb’s Paradox. I also debated the matter with someone in the Shoutbox, and now I feel compelled to make a post on this. I believe that with a little logic, the singular correct answer, and also the source of the seeming paradoxicality, becomes crystal clear.

The Newcomb Scenario

1. The Two Versions

There are really two formulations of the Newcomb Scenario, and the subtle difference in their phrasing has massive implications. In the Shoutbox, I overstated my position by saying there was only one version of the problem. In the Veritasium video, I’d argue my version is the one being explained. In Nozick’s original publication, however, the language is quite vague, and as such, it lacks the criteria needed to be my version. I will explain what I mean below.

1.1 The Set-up in Both Versions:

  1. You walk into a room.

  2. In it, there are two boxes and a super-intelligent entity.

  3. You know for a fact this being is a near-perfect predictor of what people will choose when faced with Newcomb’s dilemma.

  4. Box 1 is clearly filled with $1000, but the contents (or lack thereof) of Box 2 are not known to you.

  5. It is explained to you that before you entered this room, the entity either put $1,000,000 into Box 2, or it did not. Whether it did so was based on its prediction of your answer to the following dilemma: Do you take only Box 2, or both Box 1 and Box 2?

  6. If the entity predicted you would only grab Box 2, then it put the $1,000,000 in Box 2. If it predicted you would grab both Box 1 and Box 2, then it put nothing in Box 2.

  7. You will never play this game again.

So, do you grab only Box 2, or do you grab both Box 1 and Box 2?

The current formulation may seem complete and clear, but there are a few ambiguities to iron out.

The First Question: Did you know this dilemma would be posed to you, even as a hypothetical, before the entity made its prediction and fixed the boxes?

The Second Question: By the premise of the scenario, you face this choice after the entity has filled the boxes. The question of which decision you ought to make after the entity has fixed the contents of the boxes is one thing. But the question you are being asked here and now is hypothetical. You could construe your answer to the above puzzle as your answer to what your strategy should be before the entity has even begun determining the contents of the boxes.

In the Veritasium video, at 1:33, it is said that the entity “…made its prediction [and thus fixed the contents of the boxes] before you knew about the problem.” I interpret this as a “no” to the first question above. That removes the practical need to answer the second question as we analyze this version. Nozick’s version arguably does not demand this interpretation, though whether his derivations imply he too uses my interpretation, I don’t know. I haven’t read his paper. Anyways, I will analyze the version involving foreknowledge separately, and first deal with the version where one is ignorant.

2. The First Version

We begin with the same set-up, but with an additional eighth premise.

2.1 The Set-up in Version One:

  1. You walk into a room.

  2. In it, there are two boxes and a super-intelligent entity.

  3. You know for a fact this being is a near-perfect predictor of what people will choose when faced with Newcomb’s dilemma.

  4. Box 1 is clearly filled with $1000, but the contents (or lack thereof) of Box 2 are not known to you.

  5. It is explained to you that before you entered this room, the entity either put $1,000,000 into Box 2, or it did not. Whether it did so was based on its prediction of your answer to the following dilemma: Do you take only Box 2, or both Box 1 and Box 2?

  6. If the entity predicted you would only grab Box 2, then it put the $1,000,000 in Box 2. If it predicted you would grab both Box 1 and Box 2, then it put nothing in Box 2.

  7. You will never play this game again.

  8. You did not know you would be faced with this dilemma before you entered the room; you didn’t even know about it as a hypothetical.

So, in this situation, do you take only Box 2, or do you take both Box 1 and Box 2?

Firstly, note that this dilemma is logically impossible for you right now: you do know about the Newcomb dilemma, at least as a hypothetical, and premise 8 disallows that. You could of course forget about all this. Anyways, that is not really important, because the scenario is pretty unrealistic to begin with.

By the way, people who choose the former are called One-Boxers (OBs) and people who choose the latter are called Two-Boxers (TBs).

2.2 Why You Should Take Both Boxes

You’re standing in the room, and Box 2’s contents have already been determined. There is nothing you can do to change its contents.

There is a probability P that the entity predicted that you are an OB. So, the expected utilities of taking one box versus two boxes are the following:

\text{EU(one-box)} = \$0 + P \cdot \$1{,}000{,}000
\text{EU(two-box)} = \$1{,}000 + P \cdot \$1{,}000{,}000

Since you are standing in the room right now, and the entity has already made its choice… you can have no effect on P. (We’re assuming you cannot influence the past here…)

At this point, no matter what, you will gain $1000 more by going for two boxes.
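To make the dominance concrete, here is a minimal sketch in Python (the sample values of P are arbitrary; nothing about them is part of the original problem):

```python
# Minimal sketch of the dominance argument in section 2.2.
# P is whatever fixed probability you assign to the entity having
# predicted you are an OB; the sample values below are arbitrary.

MILLION = 1_000_000

def eu_one_box(p):
    # Leave Box 1 behind: $0, plus the million with probability P.
    return 0 + p * MILLION

def eu_two_box(p):
    # Take the visible $1000, plus the same P * $1,000,000 term.
    return 1_000 + p * MILLION

for p in (0.01, 0.5, 0.99):
    print(p, eu_two_box(p) - eu_one_box(p))  # the gap is always $1000
```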

Your heart sinks as you realize this…

Because if you know this, then the entity probably knew that you would know this, which means P is likely a very small number. The entity is a very good predictor, and from all of this, you have just proven to yourself that you are probably a TB… as such, you have proven that P is a small number.

Goddamn, you’re too smart for your own good! But there is no changing the past. Whether or not those million dollars are in Box 2, you ought to grab both Box 2 and Box 1.

2.3 The Seeming Paradoxicality:

Even those of us who agree with my argumentation above cannot deny that OBs are richer than us… They seem to be playing the Newcomb Game better than us, so surely they must be more rational than us TBs?

This is the mistake that gets made. Equating better performance in the Newcomb Game (specifically Version 1) with higher rationality is incorrect.

To see why, we must delve into what the Newcomb Game really is.

2.4 What Is the Newcomb Game?

Keep in mind we are working within Version 1 here. The Newcomb Game is a two-phase game.

PHASE 1

In this phase, you live your life being completely oblivious to Newcomb’s scenario. As such, you are oblivious to whether you are an OB or a TB. You must be oblivious, because knowing whether you are an OB or a TB is knowing what you would do in Newcomb’s scenario, and that entails that you know about Newcomb’s scenario at this point, which contradicts the eighth premise.

But if you don’t know if you’re an OB or a TB, how can you even be one?

If we live in a deterministic universe, then your future choice is already determined, and this determines your trait of being an OB or a TB.

But if we live in an indeterministic universe, then what you will do in Newcomb’s scenario is nonetheless highly predictable, by the third premise. Your extremely strong predisposition toward one-boxing or two-boxing is a trait; I simplify this to simply having the trait of being an OB or, respectively, the trait of being a TB.

The best strategy for Phase 1 is to simply be lucky and dumb enough to have the trait of being an OB… oh the bliss of ignorance!

PHASE 2:

The second phase is defined as precisely after the entity has filled the boxes. At this point, the boxes’ contents cannot be changed.

At this point, once you are informed of your dilemma, the best strategy is to grab both boxes, by the argument in section 2.2. There is nothing to be done about the choice the entity made, the prediction it made. The most critical part of the game is over, and by now, you should simply take as many boxes as possible. At this point, being a TB is the most rational.

2.5 Conclusion For Version One

How can we resolve the seeming paradox at play here? First, we must define rationality.

What does it mean to be rational?

In this context, being rational means you use logic and reason to maximize the utility of your outcomes, utility here being measured in dollars (as it often is).

Are TBs rational?

In Version 1, to be a TB is the rational stance. It is also the worst stance.

You see, the Newcomb Game is a game that rewards irrationality, and punishes rationality.

“But wait! A rational person sees that, and rationally changes their behavior!”

Sure, but in this case, they don’t see it! That is why I have been so anal about the exact parameters of this scenario. If TBs had known about the dilemma during Phase 1, they could impact the future decision made by the entity by becoming an OB.

However, strictly speaking, that is self-contradictory, because being an OB in this case is defined in terms of a scenario in which the participant does not know of the scenario, even in theory, before they enter it.

Below, we define playing better or worse in a phase of the Newcomb Game solely in terms of how your play in THAT phase affects how much money you (likely) bring home.

So, here are the conclusions we can draw about the Newcomb Scenario V1:

  1. TBs play the Newcomb Game worse than OBs. Specifically, they play Phase 1 worse than OBs, but they play Phase 2 better than OBs. (This is generally speaking, as most TBs will take home less money than OBs)

  2. Nobody plays Phase 1 of the Newcomb Game (ir)rationally, because nobody knows they are playing the game at that point, and they don’t know about their status of being an OB or a TB, so they can make no rational decision to change it. Therefore, Phase 1 is played arationally, by virtue of being played unknowingly.

  3. OBs are simply irrational. Nonetheless, they play the Newcomb Game better, because Phase 1 rewards being irrational, and Phase 2 does not heavily punish OBs for being irrational (they only miss out on $1000 after all). Note that Phase 1 does not reward playing Phase 1 irrationally, for there is no such thing. In Phase 1, OBs are being irrational, but they are not playing irrationally.

  4. In Phase 2, you should be a TB. In Phase 1, you should not be a TB. But you don’t get to consciously impact what you are in Phase 1, so what you should and should not be is irrelevant to the status of your rationality.

It may seem like I am being pedantic about the details here, but they really matter. For one, by pointing these details out, we dispel any notion of a contradiction. You see, someone might wonder, “how could an OB be irrational during Phase 1, yet also be playing Phase 1 more rationally than a TB?” That would be a contradiction, if it weren’t for the fact that OBs, just like TBs, are not playing Phase 1 rationally, nor irrationally. OBs are playing Phase 1 better than TBs, but they are both playing Phase 1 arationally.

3. The Second Version

Now, things will be fundamentally different. We will remove the eighth premise. This immediately divides all the participants up into those who know, and those who don’t.

Now, for those who don’t know, the situation is pretty much identical to Version 1. The eighth premise has been removed, and then put back in, not as a constraint on the game, but as a coincidental fact of their life. And we can’t fault them for not knowing they could be playing Phase 1 of the Newcomb Game, because there are infinitely many games they could be playing, all of which could demand (potentially mutually exclusive) Phase 1 strategies from them. So, we classify the people in the second version who don’t know the same as all the people in the first version.

3.1 Those Who Know…

Imagine you’re a blissfully ignorant, happy, soon-to-be-millionaire OB. Well, sorry, here I am, convincing you to become a TB. My proof of the rationality of being a TB could be considered an infohazard.

But… is being a TB really the rational choice in Version Two?

Well, I direct your attention back to The Second Question at the start of this post. In Version Two of the Newcomb Problem, it is of extreme importance when you are evaluating your TB-vs-OB stance.

Are you in Phase 1 right now, or are you in Phase 2? This wasn’t really a relevant question in Version 1, but now it is. And technically, the exact formulation of the Newcomb Problem may perhaps be explicitly asserting that you are in Phase 2, but we’ll still ask what you should do if you are a knowing player still in Phase 1.

3.2 Starting in Phase One

If you’re already an OB, then maybe you shouldn’t fix something that ain’t broke. Then again, if your OB-ism is built on a house of cards, then maybe you should really make your commitment to OB-ism ironclad. And if you’re a TB, then you’ve definitely got to do something.

But the problem is, even if you’re in Version 2, being a TB means you’ve seen the light. You know that it would be best for you to become an OB during Phase 1, but you also know that once Phase 2 arrives, it will nonetheless be best to be a TB…

I mean, the optimal (and impossible) strategy would somehow be truly being an OB during Phase 1, and then flip to being a TB during Phase 2. But just knowing that means that your ability to truly become an OB during Phase 1 is very limited.

You cannot simply posture as an OB. By the very premise, the entity is great at predicting what you will do. If you genuinely aren’t going to one-box, then the entity will likely know that.

You need to pre-commit to OB-ism…

But you cannot change the logical fact that once you enter Phase 2, it is mathematically better to two-box… Or can you?

You must give yourself an irremovable penalty for two-boxing, and this penalty acts as the pre-commitment. It really acts as a self-hack that makes committing to OB-ism not just rational in Phase 1, but also rational in Phase 2.

This penalty could be as simple as signing a legally binding contract with someone in which you promise to not two-box, with the fee for violating the contract, denoted as F, being greater than $1000. Let’s assume you cannot withdraw from this contract, at least not once you’ve entered Phase 2.

Then, we can re-examine the expected utilities of one-boxing and two-boxing, again using P as the probability that the entity predicted you would one-box.

\text{EU(one-box)} = \$0 + P \cdot \$1{,}000{,}000
\text{EU(two-box)} = \$1{,}000 - F + P \cdot \$1{,}000{,}000

Since we know that F > 1000, we have that \text{EU(two-box)} < \text{EU(one-box)}. And as a rational actor, having made it so that this is the case, we have forced ourselves into becoming an OB. And the entity knows this, being the master predictor that it is, meaning it has very likely put a million dollars in that box we’re about to pick up.
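To see the self-hack numerically, here is a small sketch (F = $5,000 is an arbitrary assumed fee; any F > $1000 gives the same result):

```python
# Sketch of the pre-commitment self-hack: a binding penalty F > $1000
# for two-boxing makes one-boxing the better option for every P.
# F = 5000 is an arbitrary assumed figure.

MILLION = 1_000_000
F = 5_000

def eu_one_box(p):
    return p * MILLION

def eu_two_box(p):
    return 1_000 - F + p * MILLION

for p in (0.01, 0.5, 0.99):
    assert eu_two_box(p) < eu_one_box(p)  # holds for all P once F > 1000
print("With the contract signed, one-boxing dominates.")
```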

But setting up a contract is a bit of an inconvenience. Why can’t you just do without? A penalty-less Phase 1 conversion, which some people describe in vague, magical terms like “you need to really convince yourself of OB-ism”, is really impossible unless you remove some of your knowledge and rationality. And that hardly seems less of an inconvenience than setting up a contract.

But why would you do all this just on the off-chance that you find yourself in a Newcomb situation? Well, you wouldn’t. You would have to know that you very likely are going to face the Newcomb dilemma for all of this Phase 1 preparation to be worth it… unless there is some way you can turn yourself from a TB into an OB without suffering the costs of setting up a silly contract, or removing some of your relevant knowledge and rationality. And even if there were such a low-cost method, are you going to do that for every kind of Newcombian problem? …Well, some Newcombian problems are actually kind of relevant. These structures are more general than one silly thought experiment, and apparently, different theories of decision-making have arisen in response to Newcomb’s Paradox. I haven’t investigated them at all, but their very existence points to there being something of practical importance to this whole thing.

3.3 Starting in Phase Two

Okay, what if you knew about the Newcomb problem, but you didn’t prepare? Not only did you know about it, you knew you were actually going into that room at some point, to face the dilemma, and yet you made no preparations. You gave yourself no penalty for two-boxing, for example.

Well then, your best and most rational choice at this point is two-boxing.

However, we can now say that the players who deliberately turned themselves into OBs not only played better than you, but they played more rationally than you. So, in this scenario, you’re not only broke, but also stupid, hah!

4. The Iterated Version

If you look at the original set-up, you see the seventh premise states you will not play again.

If we remove this premise, and instead say that you will get to play this game over and over, with some fixed amount of time in between… then things change.

Now, you don’t need to give yourself a penalty, because you can commit to a principled practice on a purely rational basis, thus meaning there’s no flip from OB-ism to TB-ism when you go from Phase 1 to Phase 2. Once you’re at Phase 2, whether you prepared or not, you can simply recognize the incredible power you wield by beginning the practice of one-boxing every time.

On the extreme off-chance that the entity is wrong the first time you one-box, it will almost certainly never be wrong again. So, in the iterated version, assuming you get to replay within a reasonable amount of time, you should always one-box.
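As a toy illustration, and only under the simplifying assumption (mine, not part of the original set-up) that the entity predicts whatever you chose in the previous round and pessimistically guesses “two” on the first, the numbers come out like this:

```python
# Toy sketch of the iterated version. Assumption (not from the original
# problem): the entity predicts your previous round's choice, and
# pessimistically predicts "two" on the first round.

def iterated(choice, rounds=100):
    predicted = "two"
    total = 0
    for _ in range(rounds):
        box2 = 1_000_000 if predicted == "one" else 0
        total += box2 if choice == "one" else 1_000 + box2
        predicted = choice  # the entity adapts to your last choice
    return total

print(iterated("one"))  # misses the million at most once: 99,000,000
print(iterated("two"))  # collects $1000 per round: 100,000
```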

5. The Conclusion

Newcomb’s Paradox seems to split people pretty evenly. But I think that’s because people are not thinking about it precisely enough. Once you get really precise, it reduces to relatively simple logic, in my opinion.

What do you guys think? Also, below are some polls, divided up into the different versions of Newcomb’s Scenario:

Version 1: Are you an OB or a TB?

  • OB
  • TB

Version 2: Are you an OB or a TB?

  • OB
  • TB

Iterated Version: Are you an OB or a TB?

  • OB
  • TB

Well, I think the question here is a typo. I would grab Box 2. The thing gets a little tricky depending on how good a predictor it is. If perfect, yea, Box 2. It’s gotta be pretty bad at it to get me to grab both, but if Box 2 had, say, $1001 in it instead of $1M, the predictor would need to be really close to perfect to get me to take just one box.

This does not affect my answer.

Second question didn’t seem to ask a question.

Is it? I presume said entity only needs to note my nature, and I’m hardly trying to fool it. It’s not to my advantage to try to trick it.

I’m told the rules when I walk into the room. I know about it before I decide.

I don’t think that’s valid. It knows with almost certainty my nature. I cannot violate my nature, and more importantly, it’s not to my advantage to attempt to do so. I want the million. All the OB people walk away with a million, and all the TB people walk away with 1000. Rule 3 says I know this. Which do I want to be?

Agree, and yet I take the one box, with confidence and no regret, no heart sinking. As I said, it gets more dicey if the 2nd box might have barely more than a thousand in it, but the million makes the choice trivial.

In the end, I actually don’t see a paradox.

We don’t think our answer is irrational, and we have the money to prove it.

Some are just more rational than others, and that’s something the predictor can know ahead of time.

True, but if you look at it from that PoV, the predictor also has no choice in the matter.

You equate rationality with being dumb… Interesting.

And yet you call the TB people the rational ones. Maybe it’s these repeated assertions that are what’s irrational.

Most of the remaining post is just repeating the same reasoning (and contradictory labels).

Upgraded to ‘proof’ now I see, despite all the contradictions you yourself note when following your argument.

Pretty easy since it’s the rational choice, per your definition of rationality.

You seem to be seeking a way to cheat the predictor, despite it not being in your interest to do so. Not a rational endeavor.

As does the Monty Hall problem. My contempt for humanity is justified.

I didn’t vote, but I’m OB all the way obviously.

@noAxioms is right, it’s taking box 2 (not box 1) or taking both.

I’m a one boxer. One boxers are the kind of people who walk away with a million dollars, I want to be the kind of person that does that as well.

If the entity is a near-perfect predictor then:

  1. If I only pick Box 2 then I will win $X, where X is almost certainly 1,000,000
  2. If I pick both Box 1 and Box 2 then I will win $X + $1,000, where X is almost certainly 0

The above justifies only picking Box 2.

Two-boxer reasoning ignores the “where X is almost certainly Y” part of this reasoning, but this part is essential to the problem.

Not really. The choice of boxes you make in the moment doesn’t determine (causally) whether X is almost certainly the million or nothing. You are already either in the scenario where box 2 contains nothing or in the scenario where box 2 contains the million. In both cases, picking box 1 just adds $1000.

Maybe imagine, instead of “two boxes”/“one box”, it’s “take all the money there” or “take only some of the money there”. Obviously you should take all the money.

Are you saying that one or both of these is false?

  1. If I only pick Box 2 then I am almost certain to win $1,000,000
  2. If I pick both Box 1 and Box 2 then I am almost certain to win $1,000

Both are false in the sense that it doesn’t follow. It’s as if you were saying “If 1+1 = 2 then it will rain tomorrow”.

The only thing that determines whether there is 1 million or nothing in box 2 is what the predictor based their prediction on. So, at best, what you did before entering the room and having to choose the boxes, not the choice of boxes.

It matters that the predictor is a near-perfect predictor. The very premise of the experiment is such that in almost all games where the participant only picks Box 2 the predictor placed $1,000,000 in Box 2 and in almost all games where the participant picks both Box 1 and Box 2 the predictor placed $0 in Box 2. This is why (1) and (2) are true.

Compare with the variation where the predictor is infallible; if you only pick Box 2 then you are guaranteed to win $1,000,000 and if you pick both Box 1 and Box 2 then you are guaranteed to win $1,000. It’s impossible to win either $0 or $1,001,000.

It matters that the predictor is a near-perfect predictor. The very premise of the experiment is such that in almost all games where the participant only picks Box 2 the predictor placed $1,000,000 in Box 2 and in almost all games where the participant picks both Box 1 and Box 2 the predictor placed $0 in Box 2. This is why (1) and (2) are true.

What you say is true but it doesn’t show (1) and (2) are true. You are almost certain to win (at least) $1,000,000 IF your mental state or actions before the choice of boxes influenced the predictor towards putting the million in box 2. It has (almost) nothing to do with your present choice.

Compare with the variation where the predictor is infallible; if you only pick Box 2 then you are guaranteed to win $1,000,000 and if you pick both Box 1 and Box 2 then you are guaranteed to win $1,000. It’s impossible to win either $0 or $1,001,000.

This version is problematic because it assumes a choice that isn’t there. As you said, $0 and $1,001,000 are impossible. Because if the predictor predicts X will happen, then X will happen. There is no choice for the player.

Yes, if the predictor has a 100% success rate then 100% of the time it predicts that X will happen, X will happen.

And if the predictor has a 99% success rate then 99% of the time it predicts that X will happen, X will happen. Therefore, if I only pick Box 2 then 99% of the time I will win $1,000,000 and if I pick both Box 1 and Box 2 then 99% of the time I will win $1,000. Therefore, it’s rational for me to only pick Box 2.
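Spelling that out as expected values, on this reading of the 99% figure:

\text{EU(one-box)} = 0.99 \cdot \$1{,}000{,}000 + 0.01 \cdot \$0 = \$990{,}000
\text{EU(two-box)} = 0.99 \cdot \$1{,}000 + 0.01 \cdot \$1{,}001{,}000 = \$11{,}000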

It seems to me the controversy is because some people think they can outguess the predictor. “It Thinks I’m going to do A. Therefore I’m going to do B. But, of course, it predicted I would think that way, so I must really do A.” On and on. But there’s no reason to think we can outwit the predictor, and there is reason to think the predictor is going to correctly predict our choices. In my case, it would correctly predict that I want $1 million, and I’m not going to take a chance on losing it in the hope of getting an extra thousand on top of the million. It’s a huge risk for an insignificant reason.

It’s not just a 100% success rate, it’s omniscience (‘will always have a 100% success rate’). It “forces”, in a way, the action of the player. Consider someone making bets on sports (or anything) in the real world. By incredible luck, they could have a 100% success rate so far; that doesn’t mean that the next time they say team A will win, team A will win.

That’s the key difference and when you go to 99%, you lose this omniscience and you go back to the real world where it’s only a correlation.

And if the predictor has a 99% success rate then 99% of the time it predicts that X will happen, X will happen

Sure

Therefore, if I only pick Box 2 then 99% of the time I will win $1,000,000 and if I pick both Box 1 and Box 2 then 99% of the time I will win $1,000.

No, it doesn’t follow. The success rate means 99% of the time the predictor predicted one box, the person did pick one box. This doesn’t mean that 99% of the time the person picked one box, the predictor predicted one-box.

Consider the moment the predictor has already made the prediction, and specifically, it predicted you would pick one box and put the million in the box. In this moment, everyone who picks two boxes wins more than people who pick one box.

And if instead the predictor made the prediction and predicted you would pick two boxes, same thing. Everyone who picks two boxes wins more than people who pick one box.

You are exactly at this point, and in 100% of cases, taking two boxes wins you more than taking only one box.

Nope, two-boxers don’t assume they can outwit the predictor. It’s really just simple causal reasoning.

“It Thinks I’m going to do A. Therefore I’m going to do B. But, of course, it predicted I would think that way, so I must really do A.” On and on

You might be conflating the stage before the prediction (and the choice) and the stage after.

This doesn’t happen. There is already an answer the predictor arrived at and you know that, you really just have to take all the money in front of you.

In the context of the thought experiment the predictor having a 99% success rate isn’t just a description of its past predictions.

The problem can be rephrased as such:

  1. There are 100 participants, 100 rooms, and 200 boxes
  2. The predictor has made 100 predictions and placed the appropriate amounts into each Box 2
  3. 99 of its predictions will be correct
  4. You are one of the participants

Given the above, there is a 1% chance that you are the failed prediction, so if you only pick Box 2 then there is a 99% chance that you will win $1,000,000 and if you pick both Box 1 and Box 2 then there is a 99% chance that you will win $1,000.

It is rational to only pick Box 2.
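A quick simulation of this rephrasing (a sketch only; it applies the 99% figure per participant, which is the reading used above):

```python
# Monte Carlo sketch of the 100-rooms rephrasing: each participant has
# a 99% chance of being predicted correctly, independent of the others.
import random

random.seed(0)

def average_winnings(choice, trials=100_000):
    total = 0
    for _ in range(trials):
        correct = random.random() < 0.99
        predicted = choice if correct else ("two" if choice == "one" else "one")
        box2 = 1_000_000 if predicted == "one" else 0
        total += box2 if choice == "one" else 1_000 + box2
    return total / trials

print(average_winnings("one"))  # about 990,000
print(average_winnings("two"))  # about  11,000
```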

In the context of the thought experiment the predictor having a 99% success rate isn’t just a description of its past predictions.

It is, but it’s not as restrictive as you might think, or as I might have portrayed it. We can accept that the next time the predictor predicts, there is a 99% chance the person will follow the prediction.

The problem can be rephrased as such

I am not sure it can, but even in this scenario, you say

if you only pick Box 2 then there is a 99% chance that you will win $1,000,000 and a 1% chance that you will win $0.

but it doesn’t follow, as I said; the success rate doesn’t mean that. If you know probability theory, the success rate is P(\text{choice} = X \mid \text{pred} = X) (the probability the person chose X given the predictor predicted X), but what you are talking about here is P(\text{pred} = X \mid \text{choice} = X) (the probability the predictor predicted X given the person chose X).

Even in this scenario, you should take two boxes, or in other words, take all the money there is. Replace the language around boxes with simply cash being presented in front of you, and the question simply becomes “should you take all the cash in front of you or only some of it?”
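To put numbers on that conditional-probability distinction (all figures here are invented purely for illustration): suppose the predictor predicts “one-box” for 10% of participants, and its success rate P(choice = X | pred = X) is 99% either way. Then:

```python
# Invented illustration: a 99% success rate P(choice=X | pred=X) does
# not by itself pin down P(pred=X | choice=X); the base rate matters.

p_pred_one = 0.10  # assumed fraction of "one-box" predictions
acc = 0.99         # success rate, P(choice = X | pred = X)

# Joint probabilities of (prediction, choice) pairs where choice = one:
p_one_and_one = p_pred_one * acc              # predicted one, chose one
p_two_and_one = (1 - p_pred_one) * (1 - acc)  # predicted two, chose one

# Bayes' rule: P(pred = one | choice = one)
print(p_one_and_one / (p_one_and_one + p_two_and_one))  # ~0.917, not 0.99
```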

If there’s a possibility that it can make the wrong prediction, then never mind.

If he is predicting with 99%+ accuracy, then I think about it like this:

It seems likely that whatever is causing him to predict that some individual is a one boxer also seems to be causing that person to actually be a one boxer.

It’s not about your choice causally affecting the past, changing what he predicts. Obviously. It’s about what he predicts and your current choice having some cause in common.

And that cause can be many things, but one of them can be simply “being the type of person that one boxes.”

And you can only be the type of person that one boxes if you actually one box.


To make this more precise, let’s phrase it this way: “should you take it all or only half?”. You don’t get to see how much money there is before choosing.

If the entity predicts that you will take it all then it will place $1,000 in the room. If the entity predicts that you will take only half then it will place $1,000,000 in the room.

Given that the entity’s predictions are correct 99% of the time, 99% of people who take it all will take $1,000 and 99% of people who take only half will take $500,000.

It’s very unlikely that I’d take either $500 or $1,000,000, so I will choose to take half.

This smuggles in what happens before.

To make this more precise, let’s phrase it this way: “should you take it all or only half?”. You don’t get to see how much money there is before choosing.

It doesn’t matter whether you see the money or not, the fact is that there is money and you can either take it all or take some of it.

What statement here would you disagree with?

“Once the prediction “one box” has been made, in 100% of cases, those who take two boxes win more money”

“Once the prediction “two boxes” has been made, in 100% of cases, those who take two boxes win more money”

“We are in a situation where the prediction has already been made, therefore me taking two boxes will win me more money.”