Also, one-boxers love to talk about a “near-perfect” predictor, but if we follow their reasoning, the predictor only needs a success rate of 51%. I don’t know whether they still endorse one-boxing at that success rate.
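To put numbers on that (a rough sketch in Python, assuming the usual $1,000 / $1,000,000 payoffs and conditioning the box contents on your choice; this is just an illustration, not anyone’s official model):

```python
# Expected winnings when the predictor is right with probability p,
# under the usual payoffs: $1,000 in the visible box, $1,000,000 in the opaque box.

def ev_one_box(p: float) -> float:
    # Predictor right (prob p): opaque box is full -> win $1,000,000.
    # Predictor wrong (prob 1 - p): opaque box is empty -> win $0.
    return p * 1_000_000

def ev_two_box(p: float) -> float:
    # Predictor right (prob p): opaque box is empty -> win only $1,000.
    # Predictor wrong (prob 1 - p): opaque box is full -> win $1,001,000.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.50, 0.5005, 0.51, 0.999):
    print(f"p={p}: one-box {ev_one_box(p):,.0f}  two-box {ev_two_box(p):,.0f}")

# One-boxing overtakes two-boxing once p exceeds 0.5005, i.e. just over 50%,
# which is where the "51%" figure comes from.
```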
And it’s definitely not rational to assume causality when you see correlation. Just look at those correlations.
You are committing the same fallacy that switchers make in the two envelopes problem.
I don’t think so. Why do you think I am committing the same fallacy?
It is true that A1 = X + 1,000 and it is true that B1 = X + 1,000 but it’s not true that A1 = B1. And that’s because A1 = X + 1,000 where X = 0 and B1 = X + 1,000 where X = 1,000,000. In treating these as a single “X + 1,000” outcome you are conflating two different values of X.
It is indeed the case that in your example X refers to different values depending on whether we are in A or B, but this isn’t the case in my modeling, so I don’t see the problem.
That it isn’t the case in your modelling is the problem. A1 does not equal B1 and so you cannot say that both are equal to “X + 1,000”. That’s conflation and is why your reasoning is invalid.
Because if he’s just guessing, then I might as well two-box. If he knows with 99.9% certainty, I can assume he knows that because he has some kind of access to signals about my decision-making. If I don’t think he has access to those signals… two-box, obviously.
I don’t think it’s a big surprise that when the scenario changes, the answer changes. The scenario given was that he gets it right remarkably frequently (a near-perfect predictor, in the OP); that’s the scenario where I one-box.
But if you witness them over and over and over again, and you can tell it’s not just some weird selection bias, then… you do.
That’s how you know your keyboard works. That’s how you know that pressing the k key causes the letter k to appear when you’re typing a post: pressing that key has a repeated correlation with seeing the letter on the screen. There’s no logical necessity that the key press will cause that result; you believe in the causal chain because of the witnessed correlation.
That it isn’t the case in your modelling is the problem. A1 does not equal B1 and so you cannot say that both are equal to “X + 1,000”. That’s conflation and is why your reasoning is invalid.
So, the X’s should be different? Why does it matter that A1 does not equal B1? I am not using them in my modeling. The X in your A/B scenario has nothing to do with the X in my scenario.
Then what’s the threshold? Guessing is 50% and near-perfect is 99.9%. When do you go from two-boxing to one-boxing?
But if you assume they “know”, then it’s just the omniscient scenario. Unless they don’t actually know the decision you will take, but they “know” something else?
Yes. There are some correlations where I think there’s causality and some where I don’t. And because I know there are a lot of correlations that aren’t causal, I don’t think it’s rational to assume causality in the mere presence of correlation.
You shouldn’t use Xs at all. You should just specify the actual amounts in real numbers.
There are four possible outcomes:
Win $0
Win $1,000
Win $1,000,000
Win $1,001,000
You need to assess and compare the probability of each outcome, and as we established with the comparison between the predictor that’s almost always right and the predictor that’s almost always wrong, the reliability of the predictor is an essential factor in determining the probabilities.
If the predictor is almost always right then, conditional on my choice, P(win $1,000,000) = P(win $1,000) > P(win $0) = P(win $1,001,000).
So I either choose one and almost always win $1,000,000 or I choose both and almost always win $1,000.
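To spell those probabilities out (a small sketch; p = 0.999 is just an illustrative “near-perfect” rate, and conditioning on my choice is exactly the assumption under dispute):

```python
# Outcome probabilities conditional on each choice, for an assumed
# success rate p = 0.999 (the "near-perfect" case).
p = 0.999

one_box = {"$1,000,000": p, "$0": round(1 - p, 3)}         # predictor right / wrong
two_box = {"$1,000": p, "$1,001,000": round(1 - p, 3)}     # predictor right / wrong

print("one-box:", one_box)   # almost always $1,000,000
print("two-box:", two_box)   # almost always $1,000
```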
Maybe that’s a good question; I can’t tell. I don’t know where the threshold is. But we don’t have to grapple with that, because the OP says near-perfect. We just have to grapple with the situation at hand.
Ok. In your model, the problem is that you don’t take into account the fact that C1 is as likely as D2 and C2 is as likely as D1. It doesn’t depend on the predictor’s success rate at all. The probability of C1 (and D2) is simply the probability that there is no money in the box; it’s just some number x. And the other is 1-x.
So if we take C, we have 1,000x + 1,001,000(1-x), and if we take D, we have 0·x + 1,000,000(1-x). So clearly C is better.
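Spelled out as a quick check (a minimal sketch of the model just described, where x is the same probability of an empty box whichever choice is made):

```python
# x is the probability that the opaque box is empty, taken to be the same
# number whether you pick C (both boxes) or D (only the opaque box).

def ev_c(x: float) -> float:
    # Both boxes: empty -> $1,000, full -> $1,001,000
    return 1_000 * x + 1_001_000 * (1 - x)

def ev_d(x: float) -> float:
    # Only the opaque box: empty -> $0, full -> $1,000,000
    return 0 * x + 1_000_000 * (1 - x)

for x in (0.0, 0.3, 0.7, 1.0):
    print(f"x={x}: C {ev_c(x):,.0f}  D {ev_d(x):,.0f}  difference {ev_c(x) - ev_d(x):,.0f}")

# With a single x shared by both choices, C beats D by exactly $1,000 for every x.
```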
We do if we want to understand why it matters that he is near-perfect. Why is 99% enough to one-box? Why don’t you require 99.9999% to one-box? If we want to be rational, we can’t leave that to vibes.
I choose only Box 2. Therefore, I know that I will win one of these two amounts:
$1,000,000
$0
If the predictor is almost always right then I will almost always win $1,000,000 and if the predictor is almost always wrong then I will almost always win $0.
Whereas if the predictor simply tosses a coin then I will win $1,000,000 half the time and $0 the other half.
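Putting rough numbers on that (a quick sketch; the three success rates are only illustrative):

```python
# Expected winnings from taking only Box 2, for three illustrative predictors.
for label, p in [("almost always right", 0.999),
                 ("almost always wrong", 0.001),
                 ("coin toss", 0.5)]:
    ev = p * 1_000_000   # predictor right -> box is full -> $1,000,000; wrong -> $0
    print(f"{label}: expected winnings ${ev:,.0f}")
```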
If you want to be more rational (i.e., win more money than your current strategy will net you), you can absolutely leave it to vibes. You don’t need that level of precision to improve on your current strategy.
That’s why I avoided talking about your probabilities…
Okay, let’s try something else. Someone throws a coin; you don’t know the result. Can the probability that the coin landed on heads or tails depend on what you’ll do after the throw?
More generally, an event happened with a certain outcome. You don’t know the outcome. Can the probability that you are in the world where a certain outcome was obtained depend on what you’ll do after that event happened?
If you think that when the predictor is right 99% of the time you should one-box, what would you argue to someone saying “No, they need to be right at least 99.99% of the time”?
Do you think you are right and that person would be wrong? Why?