While I don’t think we live in a computer simulation, it’s also not a strong move to base your argument on the claim that nature is truly random, since someone can simply reject that premise.
Quantum mechanics is not established as “proven true randomness.” I’m not sure where that claim is coming from, and the examples you put in parentheses don’t really support it.
Heisenberg’s uncertainty principle
“Uncertainty” concerns knowledge, which is epistemic; it is not necessarily about what exists, so it doesn’t automatically imply anything ontological. For instance, imagine a qubit that carries only one bit of information but can be measured along three different axes. Changing the orientation of the measurement device could be understood as perturbing that bit. In that case, you cannot measure X, Y, and Z simultaneously because the system does not actually have three independent degrees of freedom at once. If those perturbations behaved chaotically rather than fundamentally randomly, you would still recover the same statistical results.
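The incompatibility of the three axes is easy to see numerically. Here is a minimal sketch (using NumPy; the choice of initial state is mine for illustration) showing that a qubit prepared with a definite Z value has a maximally uncertain X value, and that measuring X destroys the previously definite Z value:

```python
import numpy as np

# Pauli operators for two of the three measurement axes.
Z = np.array([[1, 0], [0, -1]], dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)

def prob_plus(state, op):
    """Born-rule probability of the +1 outcome for a Pauli measurement."""
    eigvals, eigvecs = np.linalg.eigh(op)
    plus = eigvecs[:, np.argmax(eigvals)]   # +1 eigenvector
    return abs(np.vdot(plus, state)) ** 2

def project_plus(state, op):
    """Post-measurement state after obtaining the +1 outcome."""
    eigvals, eigvecs = np.linalg.eigh(op)
    plus = eigvecs[:, np.argmax(eigvals)]
    new = plus * np.vdot(plus, state)
    return new / np.linalg.norm(new)

psi = np.array([1, 0], dtype=complex)   # |0>: definite Z = +1
print(prob_plus(psi, Z))                # ≈ 1.0: Z is certain
print(prob_plus(psi, X))                # ≈ 0.5: X is maximally uncertain

psi_after = project_plus(psi, X)        # "measure X", obtain +1
print(prob_plus(psi_after, Z))          # ≈ 0.5: Z is no longer definite
```

The statistics alone don’t distinguish whether the perturbation introduced by reorienting the apparatus is fundamentally random or merely chaotic, which is the point being made above.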
wave packet reduction
Write each component of the quantum state in polar form. This splits it into two real-valued vectors: the magnitudes, whose squares form a probability distribution, and the phases. When you make a measurement, you can perform a Bayesian update on the probability vector using Bayes’ theorem, just as in a classical probabilistic system. Afterward, reattach the phases and recombine into a complex vector. The result is equivalent to what you would get from wavefunction collapse.
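The procedure above can be sketched in a few lines. This toy example (NumPy; the state vector and measured outcome are arbitrary choices) splits a state into magnitudes and phases, applies Bayes’ theorem to the squared magnitudes for a projective measurement in the same basis, recombines, and checks that the result matches textbook collapse:

```python
import numpy as np

psi = np.array([0.6, 0.8j], dtype=complex)   # an arbitrary normalized state

# Polar form: two real-valued vectors.
r = np.abs(psi)
theta = np.angle(psi)
p = r ** 2                                   # a genuine probability distribution

# Projective measurement in this basis yielding outcome k.
# The likelihood of that outcome given basis state j is delta_{jk},
# so Bayes' theorem concentrates the posterior on k.
k = 1
likelihood = np.zeros_like(p)
likelihood[k] = 1.0
posterior = likelihood * p
posterior /= posterior.sum()

# Recombine phases and magnitudes into a complex vector.
psi_updated = np.sqrt(posterior) * np.exp(1j * theta)

# Textbook collapse: project onto |k> and renormalize.
proj = np.zeros_like(psi)
proj[k] = psi[k]
psi_collapsed = proj / np.linalg.norm(proj)

print(np.allclose(psi_updated, psi_collapsed))   # True
```

For a measurement in the same basis the two prescriptions coincide exactly; for other measurements one would first rotate into the measurement basis, but the Bayesian step itself is unchanged.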
Wavefunction collapse only seems strange if you assume value indefiniteness, meaning you deny that systems have definite properties when unobserved. If you instead keep object permanence, then collapse can be understood as updating your knowledge about a configuration that already existed prior to measurement.
Bell’s inequality violations…the experimentally proven violation of Bell’s inequalities directly proves that there cannot exist any hidden determinism
You might want to look at Bell’s paper “On the Impossible Pilot Wave,” where he discusses a fully deterministic model first proposed by de Broglie in 1927 that reproduces all predictions of quantum mechanics. He presents it as a counterexample to the claim that quantum mechanics must be inherently random or value indefinite.
Bell did not see his theorem as disproving determinism. In fact, his work was motivated by examining deterministic models. In 1964, alongside “On the Einstein Podolsky Rosen Paradox,” he also wrote “On the Problem of Hidden Variables in Quantum Mechanics” (published in 1966), where he criticized von Neumann’s argument against hidden variables.
Bell’s goal was broader. He aimed to show that combining object permanence, locality, and the predictions of quantum mechanics leads to a contradiction. Importantly, he did not assume determinism. When he referred to “hidden variables,” he was not talking about extra parameters added to restore determinism, but rather the idea that particles possess definite properties even when unobserved.
For example, if you measure particle positions and observe statistical distributions, the “hidden variable” in Bell’s sense is simply the actual position of the particle. These are not exotic unseen parameters, but the very outcomes you observe when measurements occur.
“The usual nomenclature, hidden variables, is most unfortunate. Pragmatically minded people can well ask why bother about hidden entities that have no effect on anything? Of course, every time a scintillation occurs on screen, every time an observation yields one thing rather than another, the value of a hidden variable is revealed.”
— John Bell, “Einstein-Podolsky-Rosen experiments”
“That [the actual position of the particle] rather than ψ is historically called a ‘hidden’ variable is a piece of historical silliness.”
— John Bell, “On the Impossible Pilot Wave”
So Bell’s theorem is not about ruling out determinism. It is about the incompatibility of locality with the combination of realism and quantum predictions. Bell himself leaned toward keeping realism and abandoning locality. The idea of rejecting object permanence was not something he seriously entertained.
If you read his conclusions, the takeaway is not that hidden variables are impossible, but that any such theory cannot be Lorentz invariant. In other words, it must be nonlocal. Others have since pointed out that anything explained through nonlocality can equally be explained through nontemporality instead. However, nontemporality is still not Lorentz invariant.
You already accept nonlocality as a premise. Once you do that, there is no longer a clear argument against determinism, because there are explicit counterexamples like Bohmian mechanics that reproduce quantum predictions within a nonlocal deterministic framework.
The position that denies hidden variables tends to come from trying to preserve locality at all costs. But in doing so, it gives up more than determinism: it gives up realism itself. There is an old saying: “reality doesn’t exist, but thank God it is local!”
If locality is already off the table, there is no need to go that far. You can still argue that a fundamentally random interpretation is simpler or more practical, and that we lack evidence for deterministic alternatives. But that is a preference, not a proof.