Can Rational Agency Survive in a Physically Closed World?

Against the Causal Exclusion of the Mental, and Against the Picture of Agency That Motivates It

(See tl;dr summary in the first reply below)

Participants in the Newcomb thread will know that a recurring pressure point in that discussion has been the apparent tension between two things most of us want to say: that an agent’s rational character is stable and legible enough for a perceptive predictor to anticipate their actions, and that the agent nonetheless faces genuinely open alternatives from their own practical standpoint. In the Newcomb thread, @Suny has pressed this tension from one side — insisting that the agent retains genuine freedom in front of the boxes, which means their choice must be independent of what the predictor has already done, and therefore two-boxing is rational. @Hypericin has pressed it from the other — accepting that the agent’s choice is determined by the same rational character the predictor read, embracing compatibilism, and one-boxing. I promised a thread that addresses this tension on its own terms, so here it is.

The question isn’t specific to the Newcomb problem. It’s the old question of whether we are free, sharpened by a challenge from the philosophy of mind that most discussions of free will don’t adequately engage with: Jaegwon Kim’s causal exclusion argument, which purports to show that if every physical event already has a sufficient physical cause, there’s no causal work left for rational agency to do. Getting clear on where that argument goes wrong, I’ll argue, is the key to dissolving the apparent tension between the legibility of rational character and the openness of practical deliberation.

I want to argue that your reasoning really does do something — that rational agency is a genuine causal power, irreducible to the physical processes that underlie it, even though it operates entirely within a physically closed world. No spooky stuff, no violations of physics. But getting there requires working through a serious challenge.


The Challenge: Does Physics Leave Room for the Mind?

The Hard Determinist’s Challenge

Here’s a familiar thought. Your brain is a physical system. When you deliberate about what to do — whether to take the job, whether to apologise, whether to study for the exam — your brain is in some physical state, and the laws of physics determine what physical state it transitions to next. By the time you experience yourself as “deciding,” the physics has already settled the matter. Your feeling of deciding is real enough as an experience, but it’s not doing any causal work. The real causes are physical: neurons, synapses, electrochemical signals, all governed by the same laws that govern falling rocks and orbiting planets.

This is the hard determinist picture. It’s intuitive, it’s scientifically respectable, and many smart people find it compelling. Sam Harris puts it bluntly: you don’t author your own thoughts. They simply appear in consciousness, produced by prior physical causes you neither chose nor control.

But there’s something deeply strange about this picture — something that becomes visible once you take it seriously as a guide to practical life rather than just as a theoretical claim.

The Practical Absurdity

Nobody actually lives this way. Harris himself deliberates about what arguments to make, revises his prose, and holds people responsible for their actions. The chess grandmaster doesn’t say: “My move is determined by my brain state, so I might as well push the first piece my hand touches.” The student doesn’t say: “Whether I study or not is already settled by prior physical causes, so I might as well watch Netflix.” These conclusions follow from the hard determinist picture, if taken as a guide to practical reasoning. But nobody draws them, not even hard determinists.

Why not? The standard answer is that the hard determinist picture is true but practically irrelevant. We can’t help deliberating, even if deliberation doesn’t really do anything. But I think this underestimates the problem. The issue isn’t just that we can’t stop deliberating. It’s that the hard determinist picture, taken seriously, makes deliberation pointless. And deliberation is manifestly not pointless. The student who studies does better on the exam. The grandmaster who calculates deeply plays better chess. The person who carefully weighs the pros and cons of the job offer makes a better decision. These aren’t just correlations. The studying produces the better performance, the calculation produces the better move. If you doubt this, consider: would you accept surgery from a surgeon who didn’t bother deliberating about where to cut, on the grounds that the outcome of the procedure was already determined by physics?

So we have a tension. On one hand, physics seems to be causally complete. Every physical event has a sufficient physical cause. On the other hand, rational deliberation seems to be genuinely causally efficacious. Reasoning well actually produces better outcomes. How can both of these be true?

A quick aside before we go further. Someone might object: “But physics isn’t deterministic: quantum mechanics tells us that physical processes are fundamentally probabilistic.” This is true, but it doesn’t help. Physical causal closure doesn’t require that the laws be deterministic. It requires that the physical story be self-contained: that you never need to leave the level of physics to account for why any physical event happened, or why physical events have the probabilities they do. Probabilistic laws are still physical laws, and they leave no gaps for non-physical causes to slip into. Quantum indeterminacy gives you randomness, not freedom, and random noise is no more hospitable to rational agency than clockwork determinism is. The question of whether your reasoning does real causal work is equally pressing whether the underlying physics is deterministic or probabilistic.
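One common way to make the probabilistic version of closure precise is as a screening-off condition (a standard gloss, offered here only as a formalisation of the paragraph above, not as a new premise of the argument):

\[
\Pr(P_2 \mid P_1, X) = \Pr(P_2 \mid P_1)
\]

for any non-physical variable X: once the prior physical state P₁ is fixed, conditioning on anything further adds nothing to the objective probabilities that physics assigns to P₂.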


Kim’s Challenge: The Exclusion Argument

The philosopher Jaegwon Kim sharpened this tension into what I think is the most powerful argument against the causal efficacy of the mental. The argument has a deceptively simple structure. I’ll lay it out in three steps.

Step 1: Physical causal closure. Every physical event that has a cause has a physical cause. When your arm moves, that movement has a complete physical explanation: signals from motor neurons, muscle contractions, and so on, all the way back to brain states governed by physical laws. There’s no point in the physical causal chain where a non-physical cause needs to intervene to keep things going. The physics is, as it were, self-contained.

This doesn’t mean physics can explain everything. It means that for any physical event — your arm moving, your vocal cords vibrating, your fingers typing — you can in principle trace a complete causal history that never leaves the physical level. No gaps, no mysterious interventions.

Step 2: Mental states depend on physical states. Your belief that it’s raining, your desire for coffee, your judgment that you should apologise: these mental states aren’t free-floating. They depend on what’s going on in your brain. (This is sometimes put by saying that mental states supervene on physical states. They depend on them, are realised by them, but aren’t reducible to them. If two people are in the exact same physical state down to the last atom, they’re in the same mental state. But the same mental state could be realised by different physical configurations.)
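For anyone who wants the textbook formulation of that parenthesis: writing P(x) for x’s total physical state and M(x) for its mental state, the supervenience claim is a one-way conditional, and the failure of the converse is just multiple realisability (one mental state, many possible physical realisers):

\[
\forall x \, \forall y \; \bigl( P(x) = P(y) \rightarrow M(x) = M(y) \bigr), \quad \text{but not conversely.}
\]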

Step 3: The exclusion. Now here’s the punch. Suppose you deliberate about whether to apologise, conclude that you should, and then apologise. Your reason (the judgment that you should apologise) and your brain state (the neural configuration that realises that judgment) both seem to be causes of the same outcome (the apology). But the physical cause — the brain state — is already sufficient for the outcome, by Step 1. The physical causal chain doesn’t need any help from “reasons.” So what causal work is the reason doing? It seems to be excluded from the causal picture by the sufficiency of the physical cause.

Kim’s conclusion: if you want to maintain that the physical world is causally closed (Step 1), and that mental states depend on physical states (Step 2), then you must accept that mental causation is redundant. Your reasons don’t cause your actions. Your brain states do. The reasons are just along for the ride.

Why This Is Hard to Resist

The argument is powerful because each step is individually plausible:

  • Denying Step 1 (physical causal closure) means accepting gaps in physics: places where non-physical causes intervene in physical processes. This is what old-fashioned Cartesian dualism required, and it faces the notorious problem of explaining how a non-physical mind could push physical matter around.

  • Denying Step 2 (mental dependence on the physical) means accepting that mental states float free of the brain. It is a kind of dualism most people find implausible.

  • Denying Step 3 (the exclusion) means accepting causal overdetermination: every action has two independent sufficient causes (one physical, one mental), like a person simultaneously shot by two bullets, each independently lethal. This is logically possible but seems extravagant and ad hoc if it holds for every single voluntary action.

So Kim’s argument doesn’t rest on any exotic premises. It follows from two claims that almost everyone accepts (physical closure and mental dependence on the physical) plus the reasonable principle that we shouldn’t multiply causes beyond necessity.


The Non-Accidentality Reply

I accept Steps 1 and 2. The physical world is causally closed, and mental states do depend on physical states. But I reject Step 3 — the conclusion that mental causation is therefore redundant. Here’s why.

Grant, for the sake of argument, that the physical story is causally closed. P₁ (the physical state of your brain before deliberation) determines P₂ (the physical state after deliberation) via the laws of physics. And M₂ (your decision) is realised by P₂. So far, so good. Kim’s premises are in place.

But now ask a question that the purely physical story cannot answer: why does P₂ happen to be a physical state that realises M₂?

Think about what’s happening. P₁ realises M₁ (your awareness of the situation, your assessment of the considerations, your rational grounds). And P₂ realises M₂ (your decision to act). The mental transition from M₁ to M₂ is rationally intelligible: given those grounds, that decision makes sense. It’s the kind of transition we recognise as good reasoning.
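Schematically, this is Kim’s familiar diagram, redrawn (the vertical arrows are realisation, the bottom arrow is physical causation, and the dashed arrow marks the mental-level transition whose causal status is in dispute):

\[
\begin{array}{ccc}
M_1 & \overset{?}{\dashrightarrow} & M_2 \\
\uparrow & & \uparrow \\
P_1 & \longrightarrow & P_2
\end{array}
\]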

Now, the physical laws tell us that P₁ leads to P₂. But the physical laws don’t care about rational intelligibility. They govern the interactions of particles, the propagation of electrical signals, the dynamics of chemical reactions. There is nothing in the laws of physics that says: “when a brain state realises an awareness of rain, the next brain state must realise an intention to take an umbrella.” The laws just say: given these particular physical conditions, those particular physical conditions follow.

So the fact that the trajectory your brain follows happens to be a trajectory between physical states that realise rationally connected mental states — that the transition from P₁ to P₂ is non-accidentally a transition from “grounds for acting” to “acting on those grounds” — is something the physical story alone doesn’t explain.

But the point here is not merely that physics leaves a coincidence that cries out for a further explanation tacked on alongside the physical one. It’s that the physical facts themselves do their explanatory work, with respect to producing a rational action, in virtue of the mental properties they realise.

Consider a chess computer. At a particular moment, the machine’s transistors are in some specific configuration — call it E₁. The laws of electrodynamics determine that E₁ leads to E₂, a later transistor configuration that results in a signal being sent to move the bishop. The electronic story is causally complete. But now ask: why does E₁ lead to a bishop move specifically? The answer isn’t really about the particular voltages and current flows that constitute E₁. It’s that E₁ is a physical state that realises a particular stage in the execution of the chess algorithm — say, the point at which the minimax search and positional evaluation function have converged on the bishop move as optimal. E₁ produces a bishop-moving E₂ in virtue of realising that functional state.

You can see this clearly through the fact that the same functional state could be realised by very different physical hardware. A different chess engine, running on different chips with entirely different electrical properties, implementing the same algorithm (or even a different algorithm that converges on the same evaluation), would arrive at the same bishop move from the same functional state. The physical trajectories would differ in every electronic detail. What they’d share is the functional property: they all realise a state in which the algorithm has determined the bishop move to be optimal. What explains the convergence across physically diverse implementations isn’t any shared electronic feature. The electronics may be completely different. It’s the shared higher-level property: the functional state that the algorithm has reached.
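To make the multiple-realisability point concrete, here is a deliberately toy sketch in Python (the move names and evaluation numbers are invented for illustration; nothing here models a real chess engine). Two “engines” with entirely different internal organisation realise the same evaluation function, and what explains their agreement on the bishop move is that shared functional specification, not any shared implementation detail:

```python
# Toy illustration of multiple realisability. All values are made up;
# this is not a real chess engine.

CANDIDATES = ["Bb5", "Nf3", "d4"]  # hypothetical legal moves in some position

class TableEngine:
    """Realises the evaluation as a flat lookup table."""
    _table = {"Bb5": 9, "Nf3": 5, "d4": 3}

    def best_move(self) -> str:
        return max(CANDIDATES, key=lambda m: self._table[m])

class FeatureEngine:
    """Realises the same evaluation by summing positional features:
    a structurally different implementation of the same functional state."""
    _features = {
        "Bb5": [4, 5],  # made-up feature scores (e.g. pin + development)
        "Nf3": [2, 3],
        "d4":  [1, 2],
    }

    def best_move(self) -> str:
        return max(CANDIDATES, key=lambda m: sum(self._features[m]))

# Different internals, same functional state, same move:
assert TableEngine().best_move() == FeatureEngine().best_move() == "Bb5"
```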

Strip away the algorithmic description and you don’t just lose a useful gloss on the electronics. You lose the very thing in virtue of which E₁’s leading to E₂ is a case of executing a sound chess move rather than one arbitrary transistor state succeeding another. The higher-level description isn’t a supplementary explanation offered alongside the electronic one. It’s what the electronic trajectory operates through when the question is why the machine moved the bishop.

The same structure holds for rational agency. P₁, the fully specific physical state of your brain, is causally sufficient for P₂ — the physics is complete. But P₁ produces an M₂-realising P₂ in virtue of being a realiser of M₁ (your rational grounds). The same rational assessment could in principle be carried by different neural configurations, or even by a very different kind of physical system altogether — and any such realiser would have led to a state realising M₂. What explains this convergence across diverse physical configurations is not any feature specific to this particular physical trajectory, but the fact that they all realise M₁, and that M₂ is what M₁ rationally calls for.

This is why the physical story, while telling you that P₁ leads to this specific P₂, leaves something unexplained: why P₂ had to be such as to realise M₂. That necessity doesn’t come from physics. It comes from the rational connection at the mental level.

And this modal “had to” is not an abstract explanatory posit visible only to philosophers. In the case of a rational agent, it is the most familiar thing in the world: it is the agent’s own practical recognition that this is what needs to be done. The chess player moves the bishop because the position demands it. The student studies because the exam requires preparation. The person apologises because the situation calls for it. In each case, the agent does what they do because they recognise a rational requirement, and it is precisely this recognition, operating through its physical realisation, that explains why the physical trajectory had to be such as to realise that action rather than any other. The “had to” of rational necessity and the “had to” of the non-accidentality argument are one and the same: they are rational agency at work.

One important disanalogy between the chess engine and the human agent is worth flagging. The chess engine’s strategic structure was imposed from outside, by the programmer. So a physicalist might concede the explanatory point about the chess engine while insisting that it’s merely a useful description we impose for our own convenience. The claim I’m making about rational agency is stronger: the agent’s rational structure isn’t externally imposed. It is the agent’s own, cultivated through development, education, and the practices of holding one another responsible. It isn’t a convenient shorthand for underlying physics. It is a genuine form of causal organisation. But this stronger claim can wait, because the weaker chess-engine version of the point is already enough to show that Kim’s exclusion argument fails even for deterministic functional artefacts like chess engines.


What About Free Will? The Agent Isn’t Part of the Furniture

The causal exclusion argument isn’t just an abstract puzzle in the philosophy of mind. It connects directly to the oldest and most practical philosophical question: are we free to choose between alternative opportunities for action?

Here’s the connection. If the physical story is complete and the mental story is redundant, then a natural conclusion follows: since the prior physical state of the world (P₁) fully determines the outcome (P₂, and hence M₂), the agent couldn’t have done otherwise. The past was fixed, the laws are fixed, and together they fix the outcome. Your deliberation was a passenger, not a driver.

This is essentially van Inwagen’s Consequence Argument for incompatibilism, and it’s the same reasoning that motivates hard determinism: since everything you do is determined by prior physical states and the laws of nature, you never really had a choice.
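For readers who want the schematic version: writing Np for “p is true and no one has, or ever had, any choice about whether p”, with P₀ the state of the world in the remote past and L the laws of nature, the textbook presentation runs roughly as follows (this is just van Inwagen’s own schema, not an addition to it):

\[
\begin{array}{ll}
1. & \Box\bigl((P_0 \wedge L) \rightarrow A\bigr) \quad \text{(determinism)} \\
2. & N\bigl((P_0 \wedge L) \rightarrow A\bigr) \quad \text{(from 1, by rule } \alpha\text{)} \\
3. & N(P_0 \wedge L) \quad \text{(no one has a choice about the past or the laws)} \\
4. & N(A) \quad \text{(from 2 and 3, by transfer rule } \beta\text{)}
\end{array}
\]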

Both hard determinists and a certain kind of libertarian accept this reasoning. The hard determinist concludes: we’re not free. The contra-causal libertarian concludes: since we are free, something must break the physical chain: there must be some point where the agent’s will intervenes from outside the causal order.

I think both are wrong, and wrong for the same reason. They share a picture of the agent that is fundamentally mistaken.

Imagine you’re standing at a bus station. One bus has already departed. You didn’t catch it, and that’s settled. It’s now part of the past, beyond your power to change. The question “what should I do?” now operates within the constraints left by that settled fact.

Hard determinism and contra-causal libertarianism both conceive of the agent’s own rational character on the model of the departed bus — as part of the fixed past that constrains the agent from outside. On this picture, the chain P₁→P₂→M₂ is a set of settled facts bearing down on the agent, and the agent either submits to them (hard determinism) or somehow breaks free of them (contra-causal libertarianism).

But this picture smuggles in a fundamental confusion. The agent is not part of their own circumstances. My rational character (which includes my capacity to weigh reasons, my cultivated dispositions, my practical judgment) is not a feature of the landscape I must navigate. It is me, the one doing the navigating.

When I say “I could have done otherwise,” I don’t mean that the past could have been different. I mean that I could have exercised my capacities differently. Consider J. L. Austin’s famous example from his essay “Ifs and Cans”: the golfer who misses a putt and says “I could have made that.” They don’t mean: “in some other possible world with a different past, a different version of me would have made it.” They mean: right here, with this green, this ball, this distance; I had the skill to sink it, and I didn’t. The circumstances were fine. I fell short.

This only makes sense if we distinguish, within the total situation, between the circumstances the agent faces (the layout of the green, the distance to the hole, the wind; all of these are genuinely fixed, external constraints) and the agent who faces them (whose capacities and their exercise are precisely what’s at issue). The hard determinist lumps the agent’s rational character in with the circumstances, treats it as one more fixed antecedent condition, and then observes that, given all of that, the outcome was determined. Of course it was! You’ve included the agent’s actual exercise of their capacities among the fixed conditions. But the agent isn’t a fixed condition of their own action. They’re the one acting.

The contra-causal libertarian makes the same mistake in reverse. Seeing that the agent-plus-circumstances determines the outcome, they conclude that freedom requires the agent to somehow act independently of their own rational character, injecting an element of indeterminacy that breaks the chain. But an action disconnected from the agent’s rational character isn’t a freer action. It’s a random one. The chess grandmaster whose move bears no intelligible connection to their assessment of the position hasn’t exhibited superior freedom; they’ve had a seizure.

What Makes the Past Settled?

There’s a further point worth pausing on. What makes the past settled, and what therefore makes it belong to the agent’s fixed circumstances rather than to what remains open for them to determine, is the fact that it is past.

This matters because the hard determinist’s argument depends on the fixity of the past: since P₁ is fixed, and the laws are fixed, P₂ follows, and the agent is powerless. But consider what you’re doing when you say “the past is fixed.” You are drawing a line between what is settled and what remains open. You are placing yourself at a temporal vantage point from which some things are behind you and beyond your reach and other things are ahead of you and up to you. You are, in short, deliberating, occupying precisely the practical standpoint of an agent situated in time, facing alternatives, determining what to do.

Someone who says “the past is fixed, therefore your deliberation is powerless” is in a peculiar position. The very act of advancing that argument, marshalling considerations, drawing a conclusion, presenting it as something we ought to accept, presupposes the practical standpoint it claims to undermine.

This isn’t just a debater’s trick. It reveals something about the concept of the “fixed past” itself. The distinction between past and future, between what is settled and what remains open, doesn’t appear in physics. The equations of fundamental physics describe trajectories through state space with no privileged direction of time. They don’t mark any point as “now” or any portion of the trajectory as “done.” The settledness of the past is inseparable from the temporal perspective of a deliberating agent within the world. It is the perspective of someone for whom the departed bus is genuinely beyond reach while the next bus is still catchable. Strip away that perspective and you don’t have a “fixed past” constraining a powerless agent. You have a trajectory through state space, with no privileged temporal direction and no one to be constrained.


Putting It Together

Here’s the positive picture. You are a rational agent: a being whose cultivated capacity for reasoning shapes what you do. This capacity is realised physically: it is embodied in the structure and states of your brain. The physical trajectory your brain follows is causally closed. Every physical state has a sufficient physical cause, and no physical law is violated.

But the physical story, while complete on its own terms, doesn’t explain why your brain’s trajectory non-accidentally tracks rationally connected mental states. That explanation comes from the rational level: your grounds made your decision intelligible, and this rational connection is what makes the physical trajectory non-accidentally a trajectory between realisers of good reasoning rather than a trajectory between realisers of random noise.

When you deliberate and act on your conclusions, your rational assessment is genuinely causally efficacious not by violating physics but by being the rational structure that the physical trajectory expresses. The agent acts within a causally closed physical system simply by being the kind of organised system whose higher-level structure shapes which physically possible trajectory the system follows.

This isn’t unique to minds. Consider why a particular DNA sequence reliably produces a functioning heart. Every step (protein folding, cell differentiation, tissue formation, and so on) has a complete biochemical explanation. The chemistry is causally closed. But if you ask why this sequence of chemical reactions is, non-accidentally, a sequence that produces a working heart, the answer requires the biological level. The organism is a teleologically organised system whose functional structure constrains which of the countless chemically possible trajectories the molecules actually follow. The chemistry explains each step; the biology explains why the steps add up to a heart rather than a tumour.

A rational agent is that kind of system too — but one whose organising structure is rational rather than merely biological. Your cultivated capacity for reasoning shapes which physically possible trajectory your brain follows, in the same way that biological organisation shapes which chemically possible trajectory the developing embryo follows. In both cases, the higher-level structure operates entirely through the lower-level processes, violating no laws. And in both cases, it does explanatory work the lower level alone cannot do.

And when you say “I could have done otherwise,” you’re right not because the past could have been different or because some non-physical force might have intervened, but because you — your rational capacities, your skill, your practical judgment — are not part of the fixed furniture of your circumstances. You are the agent who navigates those circumstances, and the reactive attitudes that structure practical life (regret when you fall short, pride when you succeed, blame and praise from others) target precisely this: the gap between having a capacity and exercising it well. That gap is where your freedom lives.


Where This Goes

This thread will develop these arguments further as discussion proceeds. Among the questions I expect to address:

  • Unactualised capacities: What does it mean to say “I could have done otherwise” if the physical past determines the outcome? How do we properly analyse the general abilities that ground responsibility?
  • The reactive attitudes: Praise, blame, regret, and pride aren’t just responses to independently existing capacities. They’re partially constitutive of those capacities through scaffolding the development and exercise of practical rationality.
  • Teleological organisation: What makes a rational agent different from a thermostat or a chess computer? The answer involves the kind of substance the agent is: a teleologically organised being whose rational structure is genuinely its own, not an externally imposed programme.

For those coming from the Newcomb thread: the position I’ve defended there, that the agent’s rational deliberation is genuinely causally upstream of the predictor’s action, rests on the foundations laid out here. If rational agency is a genuine causal power, then the Newcomb predictor’s sensitivity to the agent’s rational grounds is sensitivity to something causally real, not to an epiphenomenal shadow. I’d encourage Newcomb-specific discussion to continue in that thread, where it has plenty of momentum already, and to keep this thread focused on the underlying questions about rational agency, causal exclusion, and free will. Cross-references between the two threads are of course welcome since the arguments feed into each other.

I’m looking forward to the discussion.


I’ll be drawing on work by G. E. M. Anscombe, J. L. Austin, M. R. Ayers, P. F. Strawson, Robert Kane, Anthony Kenny, Susan Wolf and others as the thread develops. The specific arguments here are developed in more detail in two unpublished manuscripts (“Autonomy, Consequences and Teleology,” 2009, and “The Problem of Free Will, Determinism and Responsibility,” 2018/2021).


tl;dr Summary of the original post

After posting my long OP I realized that the new ThePhilosophyForum has a Gemini-powered “summarize” feature. I clicked the button and Gemini generated a very good albeit incomplete summary. Here it is, with minor corrections and the missing bit added:


Pierre-Normand initiates a discussion titled “Can Rational Agency Survive in a Physically Closed World?” to address the tension between the predictability of rational character and the genuine openness of practical deliberation, a conflict previously explored in the Newcomb thread. The post tackles this tension through Jaegwon Kim’s causal exclusion argument, which purports to show that if every physical event has a sufficient physical cause, there is no causal work left for rational agency to do.

The first core argument is the non-accidentality reply to Kim. The physical story is causally complete — P₁ determines P₂ via the laws of physics. But P₂ had to be such as to realise the rational decision M₂, and this necessity doesn’t come from physics. It comes from the fact that P₁ realises rational grounds M₁, and M₂ is what M₁ rationally calls for. Multiple realisability confirms the point: physically diverse systems realising the same rational assessment converge on the same rational outcome, and what explains this convergence is the shared mental-level property, not any shared physical feature. The mental level isn’t a convenient gloss on the physics — it’s what the physical trajectory operates through in producing a rational action.

The second core argument targets the picture of agency shared by hard determinists and contra-causal libertarians alike. Both treat the agent’s own rational character as part of the “fixed past” that constrains them from outside — like a departed bus. But the agent is not part of their own circumstances. What settled the outcome wasn’t “the past” bearing down on the agent; it was the agent themselves, exercising their rational capacities. The hard determinist packs the agent into the fixed conditions and concludes they’re powerless; the contra-causal libertarian does the same and demands the agent break free. Both mislocate the agent — placing them outside the process rather than recognising them as the rational being whose character the process expresses.


What a long OP. Takes a long time to digest, and I did not like the summary, so going straight to the larger post.

This exploration of free agency is already intensely associated with those with a 2B rational character. The 1B types strive for openness and predictability. This alone, long before the likely choice is concluded by the predictor, probably gives solid ground for a very high percentage prediction rate. @Hypericin of course has been the hard one to predict this way since he’s on the fence and hasn’t consistently argued for one single side.

The topic is about being free, and the Newcomb example unfortunately rewards lack of freedom. Those attempting an exercise of free will come up short. The scenarios that seem to reward free will all seem to be based on random, not willed behavior.

Seems like a non-sequitur since the rational agency was already part of the cause of said event. A little detached in the Newcomb case, but not disjoint.

I want to argue … — that rational agency is a genuine causal power, irreducible to the physical processes that underlie it, even though it operates entirely within a physically closed world.

Good, because I’ve struggled to express exactly that in a manner which I feel satisfies myself.

For the topic at hand, I’ll accept this. In general, QM allows uncaused events, but human decision making does not rely on such mechanisms.
But also note that the cause of your arm movement is routed through your mental processes, not around them.

Step 2 seems to be a statement of physical monism. We’re not presuming supernatural intervention, separate mental properties of matter, or other magic here.

Step 3: … But the physical cause — the brain state — is already sufficient for the outcome, by Step 1. The physical causal chain doesn’t need any help from “reasons.”

This seems very wrong. It’s like asserting that a computer program in memory is sufficient to send an email, not requiring any help from the processor. The reasoning is part of the chain from the initial state to the apology. It’s not overdetermination. The apology would never happen without the reasoning, which is simply the dynamics of how the state evolves. So I find it easy to deny step 3.

You word it differently, but I think we agree.

… transition from “grounds for acting” to “acting on those grounds” — is something the physical story alone doesn’t explain.

I actually think it can. I mean, it’s complex, but it’s no different than a similar explanation for something simpler like our email-sending process or your chess example.

One important disanalogy between the chess engine and the human agent is worth flagging. The chess engine’s strategic structure was imposed from outside, by the programmer.

The old programs maybe. The new ones not so. Those programs are fully written with no knowledge of any game. The rules are explained to it, and it figures everything out by itself, which is more than most humans do by reading books and such. The old programs won using human strategy and brute lookahead power. The new ones actually innovate, and they blow all the competition away.
The programmer teaches it to learn, but the programmer has no idea why it moved the bishop that last turn. They cannot point to a piece of code that made it do that in the board position at hand.

I buy complete, but the mental story (the processing) is part of it, not redundant at all. It’s not a passenger, even if the outcome is fixed. The state alone does not realize any choice without dynamic evolution. To not have a choice means that the outcome occurs even without the processing, which would be a different initial state, or an outcome that isn’t a function of that initial state.

I’m trying to work out a scenario like Newcomb except it rewards free will (and not randomness) instead of deterministic behavior. Not sure if it can be done. Trying to use free will to cross the street is almost sure death.

The hard determinist concludes: we’re not free.

I’m not a hard determinist, but I also don’t conclude the sort of freedom as defined by those using determinism as a weapon.

The contra-causal libertarian concludes: since we are free,

Ah yes, the path of rationalizing an improbable premise. Hence all the literature attempting to do just that.

something must break the physical chain: there must be some point where the agent’s will intervenes from outside the causal order.

Which means your will is not your own, but that of the external agent (that which possesses you), point being to give that external agent the responsibility for actions instead of the human puppet being controlled. I will admit to that being a legit reason for the external agent needing free will, but it makes no sense to the human thus possessed who is at times forced to do irrational things. In a twist of irony, the external agent needs to utilize rational choices as well. It just has different objectives than would the physical human, left with this calorie-wasting brain and no reason to use it. Evolution would not have selected for that.

Golf example of ‘could have done otherwise’:

They mean: right here, with this green, this ball, this distance; I had the skill to sink it, and I didn’t. The circumstances were fine. I fell short.

Hmm, a matter of skill poorly executed as opposed to the wrong choice being deliberately made, sometimes spun as physics compelling a different choice than one you’d have otherwise wanted, which is ludicrous.

I don’t like the example since it isn’t one of deliberate choice, but your inability to find a better one mirrors mine: In the absence of a magical external agent worrying about its afterlife prospects being dependent on what it does with its puppet avatar, I see zero practical use for free will. I’ve failed to design a game to reward it.

The distinction between past and future, between what is settled and what remains open, doesn’t appear in physics.

It does, but physics treats it as a relation to any arbitrary event. The Titanic, already holed but afloat, puts the collision in its past, but the death of most of its passengers is still under its future control to minimize, which they will choose (future tense) not to.

Interestingly, given an initial state of any closed system and fully deterministic rules, all the subsequent states are defined and there’s no actual need for the structure to be actualized. The determined states still have humans insisting on their actuality, feeling pain, whatever. Are those non-actualized beings responsible for their choices? You bet they are. Reactive attitudes seem only relevant to rational agency, and not at all to choices not caused by the state at hand.

Thanks for the post Pierre. Been waiting a while for it.


This doesn’t seem right: this is a misunderstanding, not an impractical consequence. The student is using their understanding of determinism to justify making an entirely different decision (not to study) than they would have otherwise made. That will have consequences. Perhaps that move itself was determined, who knows? But to get the grade they want they have to do the studying, determinism or no.

Inevitability doesn’t mean inevitable regardless of the student’s own actions. The student is as much a part of the causal picture as anything else.

@noAxioms, Thanks for taking the time to work through the OP. That’s much appreciated!

You suggest the Newcomb problem “rewards lack of freedom” and that “those attempting an exercise of free will come up short.” I think this reveals something important about how the picture of freedom you’re working with, particularly its relation to unpredictability, differs from mine.

A chess grandmaster’s brilliant move is often predictable to a skilled observer precisely because it’s brilliant, because it’s what the position demands and the grandmaster has the skill to see it. On the freedom-as-unpredictability picture, the grandmaster is less free the better they play. The novice who blunders unpredictably is exercising more freedom than Magnus Carlsen finding the only winning continuation. That seems exactly backwards. Carlsen’s move is the paradigm of free rational action. It flows from his cultivated capacity to assess the position and act on that assessment. The predictability is a consequence of the freedom, not its absence.

Now apply this to Newcomb. The one-boxer who recognises that their rational grounds are what the predictor tracked, and one-boxes accordingly, is doing a similar thing: exercising a rational capacity well, in circumstances that reward exercising it well. The predictor rewards good reasoning, not lack of freedom. Every game that rewards good reasoning rewards freedom if freedom is the exercise of rational capacity rather than unpredictability. You say you can’t design a game that rewards free will. I’d say you’re surrounded by them. Every practical situation where reasoning well produces better outcomes is such a game.

You also say you’re “not a hard determinist” but don’t “conclude the sort of freedom as defined by those using determinism as a weapon.” I think this is the right instinct, but the picture of freedom you’re reaching for — one that’s compatible with determinism and doesn’t require contra-causal intervention — is precisely what my OP was trying to articulate. It’s what the agent/circumstances distinction delivers. The golfer who says “I could have sunk that putt” isn’t claiming the past could have been different. They’re saying: given these circumstances (this green, this distance, this lie), I had the skill and fell short. This example may not be best construed as an exercise of free will since the golfer can’t choose to sink a putt rather than miss, but the main point is that the capacity was there even when the exercise failed (and the golfer can kick themselves for having failed to train better, or to evaluate the situation more carefully) because the agent isn’t a part of their own external circumstances. That’s also where freedom lives — not in unpredictability, but in the gap between having a capacity and exercising it well or badly — in the more paradigmatic sorts of cases where an agent’s failure can more clearly be traced to a defect in rational character.

Incidentally, you say that “all the literature” on libertarian free will involves rationalising an improbable premise. This may reflect the balance of discussion on forums like TPF, but it doesn’t match the philosophical landscape. A PhilPapers survey of professional philosophers found 59% compatibilists, 14% libertarians, and 12% hard determinists. Compatibilism — the view that freedom and determinism are compatible — is the dominant position, and most compatibilists do preserve a genuine sense of “could have done otherwise,” just not a contra-causal one. The libertarian minority (where I’d place myself, though my version purports to be consistent with causal closure at the physical level) is small but not negligible, and most of the published philosophical literature on free will is compatibilist rather than contra-causal libertarian.

You say that the reasoning is “part of the chain” from the initial state to the outcome, like a processor executing a program. I broadly agree, although the devil is in the details. Consider your processor example. A processor transitions through a sequence of voltage states, each physically determined by the prior state and the laws of physics. The electronics is causally closed — you never need to leave the level of voltages and logic gates to explain why this transistor switched at that moment. So far, the physical story is complete. But now ask: why does this particular sequence of voltage transitions non-accidentally constitute the sending of an email rather than its deletion? The answer requires the computational level: the program, the protocol, the functional organisation that makes these voltage patterns accomplish something significant. The voltage story explains that each transition happens; the computational story explains why the transitions add up to an email being sent rather than mere electrical noise.

This is exactly the structure of the non-accidentality argument. The physical story explains that brain state P₁ transitions to brain state P₂. But why does P₂ non-accidentally realise a mental state (say, the decision to apologise) that is rationally intelligible in light of what P₁ realised (say, the recognition that you were wrong)? The physical trajectory, complete as it is, doesn’t explain that. What explains it is that P₁ realised a recognition-of-wrongdoing, and the decision-to-apologise is what that recognition rationally calls for. The physical facts do their work in virtue of the rational properties they realise.

You say: “I actually think [the physical story alone] can explain this. It’s complex, but it’s no different than a similar explanation for the email-sending process.” But notice what you just did — you appealed to the email-sending process, which is a computational description, not a voltage-level one. You helped yourself to exactly the higher-level explanatory structure whose necessity the argument is establishing. If you tried to explain in purely voltage-level terms why this transistor sequence sends an email rather than formatting a hard drive, you’d find you can’t. That’s not because the voltages are causally insufficient for the intended physical effect (they aren’t), but because the property “sends an email” isn’t a voltage-level property. It’s a computational property that the voltages realise. And explaining why the voltage sequence realises that computational property requires the computational level.
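To see the “not a voltage-level property” point in miniature, here is a toy Python sketch (the machines, states, and interpretation maps are all hypothetical; nothing here models a real mail stack). The predicate “realises SENDING” is fixed by each machine’s interpretation map, not by any physical feature the low-level states share:

```python
# Toy sketch: "realises SENDING" is a computational-level property whose
# low-level extension is wildly disjunctive. Hypothetical machines only.

# Machine A's states are voltage triples; Machine B's are relay settings.
INTERP_A = {(5, 0, 5): "IDLE", (0, 5, 5): "SENDING", (0, 0, 0): "OFF"}
INTERP_B = {"up-up": "IDLE", "down-up": "SENDING", "down-down": "OFF"}

def realises_sending(state, interp) -> bool:
    """Does this low-level state realise the SENDING functional state
    under this machine's interpretation map?"""
    return interp.get(state) == "SENDING"

# Nothing at the "physics" level unifies (0, 5, 5) with "down-up" -- they
# are not even the same kind of object -- yet both fall under the single
# computational-level property:
assert realises_sending((0, 5, 5), INTERP_A)
assert realises_sending("down-up", INTERP_B)
```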

So when you say “the reasoning is part of the chain,” I think you’re right, but in a way that supports my argument rather than competing with it. The reasoning is part of the causal process. It’s just not reducible to a physical-level description of that process, because what makes it reasoning, and what makes the transition from “recognition of wrongdoing” to “decision to apologise” non-accidental, is a rational-level property that the physical trajectory operates in virtue of.

Regarding the chess engine, the disanalogy I was flagging doesn’t actually depend on whether the strategic principles were externally programmed or internally developed. What matters is that even in the AlphaZero case there are functional norms (what the system is supposed to do) that individuate the system’s computational states and distinguish correct operation from malfunction. Without those norms, you can’t even say the system has bugs. You’d just have different physical trajectories, none better or worse than any other. And those functional norms are precisely what prevents the computational explanation from collapsing to the physical one. They’re what makes it true and explanatory that AlphaZero “moved the bishop because centralising the bishop controls key squares,” rather than “moved the bishop because these particular transistors switched in this particular pattern.”

So your observation actually strengthens the case. If even a system that develops its own strategic principles still requires a non-physical level of explanation (the computational/functional level) to make sense of what it’s doing, then the claim that “the physical story is all you need” is already refuted, and that is so even before we get to the further question of what distinguishes a human agent’s relationship to rational norms from a chess engine’s relationship to functional norms. That further question is real and important, but it’s a question for a later stage of the argument.

The issue about physics and the past-present-future distinctions made from the perspective of an embodied agent (as they relate to McTaggart’s A and B series) is one that also merits delving into, but I’ll do so on a future occasion.

Kind of the opposite. If the move is predictable by a mere ‘skilled’ observer, it’s probably pretty obvious (not brilliant) to the grandmaster. The brilliant moves are the ones not predicted, that only he can see.

Predicting chess moves at 99% accuracy is unreasonable since there’s no one right answer like there is in the cases we’ve been considering. Yes, at times there is but one move that doesn’t lead to disaster. Those moves don’t require brilliance, just an absence of incompetence.

Good reasoning, yes, but free will isn’t useful for that.
You imply a definition of “freedom is the exercise of rational capacity”. You’re right, we don’t agree on definitions then. Any programmed fully deterministic device can do that.

A compatibilist says ‘the ability to choose what you want’, which is just the absence of external compulsions to do otherwise, such as your employer’s expectations, or gravity preventing your ability to fly. This is pretty in line with your usage above, and is not particularly the definition I used when saying what Newcomb does not reward. I’m using more of a libertarian definition there, and it is that sort of free will that I cannot think of a benefit.

Agree with all that. I wasn’t in any way suggesting most are libertarians. I said that most libertarian arguments involve “rationalising an improbable premise”. You’re a libertarian? Then I can ask what benefit you think it brings, what decision is better made utilizing it.

Computers are designed to be deterministic, to do the same thing each time, avoiding any amplification of quantum uncertainty. And it is essentially impossible to ‘explain’ say a file server action in terms of only voltages and logic gates. All the above applies to brains as well, all in principle explainable as voltages, chemicals, and neurons that act as the gates. Evolution also has selected out any biological primitive that might amplify quantum uncertainty, and it would very much have evolved otherwise if there was any information in said uncertainty, but no, it’s all just noise.

Nobody knows exactly how either (say a chess program or brain) works, but both are in principle ‘explainable’ in terms of the primitives. The libertarian position seems to be otherwise. Libertarian FW is incompatible with determinism, but as you’ve acknowledged, it’s also incompatible with randomness, the only alternative. So the libertarian position seems incompatible with quantum mechanics in general, which hasn’t really any alternative these days.

I had trouble reading your OP when you switch from P₂ to M₂. It seems to me that only a lack of full understanding (just as with the chess program) makes it look like P and M cannot be the same thing, when really one is just a low-level description and the other a high-level description of the exact same thing.
And yet a computing device (robot?) would have no trouble advancing from a state of “recognition of wrongdoing” to “decision to apologise”. Why should it be so mysterious when any rational agent like us does the same thing?

I brought up chess (like AlphaZero) because its creators are quoted as saying they have no idea how it’s arriving at its move choices. Yes, hardware failures are malfunctions and are expected not to happen for normal behavior. Ditto with people.

That’s just describing the exact same thing at different levels. Yea, it’s really hard to do such an explanation at the wrong level.
One can similarly claim that photosynthesis does not occur, only quantum field disturbances interacting with each other. The one description does not preclude the other.

You’re the one apparently wanting an explanation at the voltage level, which is several levels of description away from what you’re trying to describe. It’s much easier to describe in terms of algorithms and computation, but brains don’t really work that way: No instruction set single point where the work is being done. It’s a network, and the language to describe what a network is doing is different.
Thing is, a simple single instruction stream process, nay, even a Turing machine (about as primitive and inefficient as can be) can do anything the human can do (says the physicalist), just not as fast.

I use both, depending on context. The present might be an illusion, but it’s a dang pragmatic illusion, and our fitness depends on it.


@Pierre-Normand

Really impressive thread — this is one of the clearest presentations of a non-reductive position on rational agency I’ve come across. The non-accidentality framing is elegant and the chess computer analogy does a lot of heavy lifting very effectively. I’m broadly sympathetic to where you’re heading, but I want to push on a point that I think matters for the overall success of the argument.

You’ve established that the rational level is explanatorily indispensable — that the physical story, while causally complete on its own terms, can’t explain why brain trajectories non-accidentally track rationally connected mental states. I think that’s right. But explanatory indispensability by itself is compatible with a range of ontological interpretations, and not all of them give you what you need.

Here’s what I mean. A sufficiently clever reductionist can accept everything you’ve said and respond: “Sure, the rational-level description does explanatory work the physical description can’t replicate. But that’s a fact about us — about what we find explanatory — not a fact about the world. The rational description is a useful, maybe even indispensable, parsing of a physical process. But parsings don’t have causal powers. The actual causal work is still happening at the physical level; we just can’t track it without higher-level vocabulary because the relevant physical properties are wildly disjunctive.” This is how I read @hypericin’s reply above.

I don’t think this response works, but I think your argument as stated doesn’t quite block it. You say the physical trajectory “operates through” the rational structure, that the rational structure “shapes” which physically possible trajectory the system follows. But what do these phrases mean ontologically? If “operates through” is just a way of saying “is usefully described by,” then the reductionist absorbs your point without conceding anything. If it means something stronger — that the rational form is a real organizational principle that genuinely constrains which lower-level trajectory unfolds — then you’re committed to a metaphysics where the universe is structured such that irreducibly higher-order forms of intelligibility can emerge from and condition lower-order processes.

Your biological analogy actually points in exactly the right direction here. The reason we don’t think the heart is “just a useful redescription of chemistry” is that biological organization has a kind of reality that resists reduction — the organism is a genuine unity whose functional structure channels chemical possibilities in ways that aren’t capturable as disjunctions over chemical properties. But what grounds that? What kind of universe has to obtain for genuinely novel levels of organization to emerge and exercise real constraint?

I think you need something like this: the universe is characterized by what we might call emergent probability — lower-level processes generate manifolds of possibility, and higher-order forms of organization emerge that are genuinely new levels of intelligibility, not just re-descriptions of the lower level. These higher-order forms don’t violate lower-level regularities but they do constrain which possibilities within those regularities get actualized. The relationship is one of genuine formal causation, not just explanatory convenience.

Without that kind of commitment, the non-accidentality argument — powerful as it is — stays at the level of “you need the rational description to explain this.” And a stubborn reductionist will always reply: “need for explanation ≠ causal reality.” You need to close that gap.

None of this is meant as a criticism of the overall direction. I think you’re right about where the argument needs to go, especially the stuff about teleological organization and the reactive attitudes. Looking forward to seeing those developed. Just think the metaphysical foundations need to be made more explicit if the argument is going to fully land.


A physicist once displayed an equation (don’t ask me to write it down) on a screen and declared that that equation describes all the particles inside your body, and all the forces acting on those particles. It is a complete and fully deterministic account of your body (brain included). He, out of deference to our Xin, didn’t make the conclusion explicit.

I actually agree with all of this. On the hard determinist picture, however, the student’s deliberation is a determined intermediate step in a causal chain whose endpoint was already fixed by prior physical states. The student’s experience of weighing reasons and deciding to study is real enough, but it isn’t settling anything. The physics already settled it. Sam Harris himself is explicit about this: you don’t author your thoughts, they appear in consciousness produced by prior causes you neither chose nor control.

Now, the Netflix student who says “it’s already settled, so I might as well not bother” is indeed drawing a bad practical conclusion. But consider why it’s bad. You say it’s because the student’s decision has consequences. Studying produces the good grade. But on the hard determinist picture, whether the student decides to study or watch Netflix was also already settled. The student can’t use the insight “my actions have consequences” to guide their deliberation, because their deliberation isn’t guiding anything. It’s just the next determined domino falling.

Think of it this way. When I’m choosing between two buses and I discover that bus A has already departed, it’s pointless to deliberate about whether to take bus A or bus B. There’s only one live option. On the hard determinist picture, at the moment of deliberation all options except the one I’m predetermined to select are already “departed buses.” The hard determinist might reply: “But you don’t know which bus has departed, so you still need to deliberate.” That’s true, but then deliberation is being justified by the agent’s ignorance of the determined outcome, not by the agent’s capacity to settle the outcome. That’s a very different thing from saying the student is genuinely “part of the causal picture” in the robust sense we both seem to intend.

What you want to say, I think, is something stronger. The student’s reasoning genuinely produces the outcome, that deliberation actually settles which way things go, and that this is compatible with the physical story being complete. But that’s the position my OP is defending. It’s my non-accidentality argument, the claim that rational agency is a genuine causal power operating within a physically closed world. It’s not the hard determinist’s position. Hard determinists like Sam Harris, Daniel Wegner, Galen Strawson and Derk Pereboom say that the physics settles everything and the reasoning is along for the ride. If you think practical reasoning genuinely does causal work that isn’t reducible to the physical level (I’ll say more about this in my forthcoming response to @EQV) then you’re agreeing with my OP and disagreeing with Harris, which is a fine place to be.

@EQV, thanks for your patient reading and remarkably constructive criticism! This is exactly what I need to better articulate my positions.

This really goes to the heart of the matter. I agree that explanatory indispensability by itself doesn’t always close the gap to causal reality, and that a clever physicalist will be tempted to recast the non-accidentality argument as a point about us (what we find explanatory) rather than a point about the world (what’s “really” causally efficacious). So let me try to close the gap more explicitly.

Start with what “wildly disjunctive” means in this context. The class of physical brain states that could realise the decision to apologise has no physical unity. There is nothing at the level of physics or neurophysiology that all and only the apologising-brain-states have in common. The same decision could be realised by enormously different physical configurations (and/or different neural architectures). What makes all of these disparate physical states states of the same type is the rational-level property. They all realise the decision to apologise, and what individuates that decision is its rational content. It’s what the recognition of having been wrong calls for.

The reductionist insists that this is a fact about us: we can’t track the physical properties without the higher-level vocabulary because the relevant physical class is too disjunctive for us to handle, but the causal work is still physical. Notice, though, what this commits them to. They’re saying that the physical trajectory landing in the class of apologising-brain-states rather than the class of blame-deflecting-brain-states is, at the physical level, not an instance of any physical regularity. There’s no physical law or physical natural kind that groups the apologising states together. The relevant regularity (i.e. the non-accidental connection between recognising wrongdoing and deciding to apologise) exists only at the rational level.

Now here’s the question: is that regularity real, or is it merely a useful parsing? Consider the contrastive question: why did the trajectory land in the class of M₂-realising states (apologising) rather than M₃-realising states (deflecting blame)? The reductionist must answer: P₁ was physically sufficient for P₂, and P₂ happens to belong to the M₂ class. But “happens to” is exactly the wrong word. It doesn’t happen to — it had to, because P₁ realised the recognition of wrongdoing, and apologising is what that recognition rationally calls for. The physical trajectory was constrained to land somewhere within the M₂-realising region of state space not by a physical law (there is none that groups the M₂-realisers together), but by the rational organisation of the system.

This is the “in virtue of” point from my OP, but let me now make the ontological claim explicit. When I say P₁ produces an M₂-realising P₂ in virtue of being a realiser of M₁, I am therefore not just making a claim about what we find explanatory. I’m making a claim about which properties of P₁ are operative in producing an outcome with the character it has. The rational properties are operative. Change what M₁ is (change the agent’s grounds) and you change which class of physical states P₂ must fall into, even though the particular physical state P₂ might vary for countless physical reasons. The rational-level property is doing the work of constraining the trajectory to a region of state space whose boundaries are invisible to physics, because those boundaries are defined by rational content.

To put it another way: the contrast class in “why M₂ rather than M₃?” isn’t a physical contrast. No physical property distinguishes the M₂-realisers as a group from the M₃-realisers as a group. The contrast is individuated by rational properties. So if you grant that the causal structure of the world includes the fact that this system reliably produces M₂ rather than M₃ in circumstances like these (and it indeed does), that’s what it means for rational agency to be a real feature of the world. The contrastive structure of the causation tracks the contrastive structure of the rational explanation, not because we’re projecting our explanatory interests onto neutral physical processes, but because the system really is organised in a way that makes rational-level contrasts causally operative.

The email example from my response to @noAxioms illustrates this. Not even a committed reductionist thinks that “this circuit sends an email” is merely a useful parsing of voltage transitions that we impose for our convenience. The computational organisation of the system is a real feature that constrains which voltage trajectories occur. A bug in the software (which is a departure from the computational norms) is a real malfunction, not merely a voltage pattern we happen to disapprove of. If the reductionist accepts that computational organisation is genuinely real (not merely a useful description) when it comes to machines, they face a burden in explaining why rational organisation in agents is any different. The “useful parsing” move, applied consistently, would entail that no computer has ever actually sent an email. There have only been voltage transitions we find convenient to describe that way. I don’t think any reductionist wants to live there, though they sometimes appear to argue that they should!
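To make the bug point concrete, here is a minimal sketch in Python (the function and examples are invented for illustration, not drawn from any real system). At the “physical” level the machine executes every step exactly as written; the malfunction exists only relative to the computational norm the code was meant to satisfy:

  # Invented toy example: buggy_sort is *meant* to return its input in
  # ascending order, but it performs only a single bubble pass.
  def buggy_sort(xs):
      out = list(xs)
      for i in range(len(out) - 1):
          if out[i] > out[i + 1]:
              out[i], out[i + 1] = out[i + 1], out[i]
      return out

  print(buggy_sort([3, 1, 2]))  # [1, 2, 3]: happens to look correct
  print(buggy_sort([3, 2, 1]))  # [2, 1, 3]: a malfunction, but only
                                # relative to the sorting norm; every
                                # machine step executed exactly as coded

Both runs are equally lawful and equally complete as physical processes; only the computational description distinguishes the correct run from the defective one.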

I love this question because, natural as it is, it is liable to invite the conflation of two things that are better kept apart.

There’s a synchronic emergence question: why does this system’s physical trajectory non-accidentally track rationally connected mental states right now? And there’s a diachronic emergence question: how did systems with this kind of organisation come to exist in the first place?

The synchronic emergence answer is: because of how the system is organised. The rational-level organisation constrains which physical trajectories occur. This is what the non-accidentality argument establishes directly. The heart doesn’t pump blood because chemistry happens to produce pumping-like molecular movements. The heart pumps blood because the organism is a teleologically organised system whose functional structure channels chemical possibilities toward cardiac function. The chemistry explains each molecular interaction; the biological organisation explains why those interactions non-accidentally add up to a working heart. Likewise, the brain doesn’t happen to transition between states that realise rationally connected thoughts. It does so because the agent is a rationally organised system whose cultivated deliberative structure channels physical possibilities toward rational outcomes.

The diachronic emergence answer is: through natural and cultural history. Dissipative structures arise through thermodynamic self-organisation. Hearts exist because evolutionary selection pressures operated on populations of organisms over geological time. Rational agents exist because biological evolution, individual development, education, habituation, and enculturation produced beings with cultivated capacities for reasoning. Each of these histories is itself a legitimate domain of intellectual inquiry: evolutionary biology, developmental psychology, ethology, cultural anthropology, etc.

The crucial point is that these are complementary explanations, not competing ones. The diachronic story explains why there are rationally organised systems. The synchronic story explains what such systems do and why their behaviour is non-accidentally rational. The diachronic story explains the genesis of the organisation; the synchronic story explains the causal work the organisation does once it exists. Neither story is reducible to the other, and neither is reducible to physics.

This is why I think the language of “emergent probability,” while pointing in the right direction, risks suggesting that what needs explaining is how the universe permits higher-order organisation, when what’s actually doing the work is the specific organisational histories of specific kinds of systems within the material universe. The “room” for distinct explanatory levels is demonstrated by the actual existence of self-organising dissipative structures, organisms, and rational agents, and elucidated by the specific developmental and evolutionary histories that produced them.

The reductionist who asks “but where does the rational organisation come from, if the universe is fundamentally physical?” is asking the diachronic question and expecting it to undermine the synchronic answer. But it doesn’t. The fact that evolution produced hearts through biochemical processes beginning at a time when there were no hearts doesn’t make cardiac function “just chemistry.” The fact that learning and enculturation produced rational agents through biological processes doesn’t make rational agency “just biology.”

You’re right that the metaphysical foundations need to be made explicit, and I hope this goes some way toward that. The further questions you flag about teleological organisation and the reactive attitudes are ones I intend to explore further. In particular, what distinguishes a rational agent from a chess engine (or a heart) is that the agent’s relationship to the norms governing their activity is recognitional: they can articulate their reasons, be held to them, fall short of them, and call them into question. That’s where reactive attitudes like pride, regret, resentment, praise, blame, and so on, show up as partially constitutive of the kind of psychological organisation that makes rational agency what it is. But that’s for a future post (or future thread, maybe).


You are nearly onto something, at least that rational agency has the final say over what the will proposes, this ‘say’ leading to more success. It seems that one’s nature is taken into account by the will, which at its best has creativity, high learning ability, rumination, a capacity not to react instantly, and more.

The initial product of the will may not be right if it has forgotten something or not learned it well enough.

In a Bridge card game, my partner, through bidding, asked me if I had the queen of trumps. I didn’t, and my will was going to bid as such, since it had forgotten that our partnership had ten trumps between us, including the Ace and the King, so the opponents’ Queen would most of the time be easy to collect, as they only had three trumps between them.

Well, maybe my will was tired or moving too fast, but this was an online game, so I could play as slowly as I wanted. My rumination time solved it, and so I bid that I had the Queen or the tenth trump. Of course, now my will knows it for sure for the future.

Perhaps I’ve not added much here, but keep on plugging.

I’d like to challenge this series of arguments on the grounds that physics is itself incomplete. When it comes to agency, what is the basis of saying that an agent’s actions have ‘physical causes’? The causal sequence that ostensibly describes the relationship of neural activities and physical movement is enormously complex. And the question of causal relations is fraught even in physics. So why should this ‘causal closure’ argument just be accepted as an inviolable principle?

Personally, I think it’s because of this problem:

Bear in mind that Cartesian dualism is itself a philosophical model, not a scientific hypothesis of any kind. We all know perfectly well the historical precedents and consequences of that fateful move, but the upshot is that both ‘mind’ and ‘matter’ are then reduced to abstractions. Even Descartes himself was then at a loss to describe how these abstractions interacted (the pineal gland was the suggestion).

So might it not be the case that Kim’s definition of physicalism simply is the Cartesian picture with the ghost exorcised from the machine? The causal closure argument is tantamount to assuming physicalism is the case, rather than showing it to be true.

Organisms of all kinds act in order to…. I can’t think of any entities in physics which act in that manner.

Clearly, the objects of our fears and desires do not cause behavior in the same way that forces and energy cause behavior in the physical realm. When my desire for the pot of gold at the end of the rainbow causes me to go on a search, the (nonexistent) pot of gold is not a causal property of the sort that is involved in natural laws ~ Zenon W. Pylyshyn, Computation and Cognition: Toward a Foundation for Cognitive Science (Cambridge, MA: MIT Press, 1984), xii.

I feel you and @EQV are defending an easier position than the one Kim is actually attacking. You are conflating two distinctions: low vs. high levels of organization, and physical vs. mental descriptions.

Physicalism doesn’t stumble on higher levels of organization. For instance, no physicalist would be stumped by the operation of car engines. An engine is not just a haphazard assemblage that happens to generate usable power. It is a rationally designed organization of parts that work in unison. Every physicalist would happily concede this, because that rational organization is still a part of p: p is not just stuff; it includes the relations between stuff that produce higher-level properties.

Consider how a reductionist neurologist might respond to the apology example. They might say, “it might seem to you like you are deciding whether to apologize, but you are really only witnessing the real work: a struggle for dominance between the amygdala and the prefrontal cortex, where the amygdala’s fearful activation detecting loss of face competes with the PFC’s function of maintaining social harmony.” Or something like that. What the neurologist definitely won’t talk about is chemical reactions, sodium gradients, dendrites, or the like. They would discuss the scenario in terms of the level of organization appropriate to it.

But note that whatever level of organization the neurologist speaks at, that level, while demonstrating behaviors specific to it, will still be explicable in terms of a previous level. And that previous level will also be explicable in terms of a yet more fundamental level, until you do reach chemical reactions. You get physical closure sitting alongside the emergent properties of higher levels of organization.

Whereas m does not have that luxury. It begins at m. And so m is vulnerable to Kim’s epiphenomenal attack in a way that levels of organization of p are not.


I think we’re closer to agreement than the framing of your challenge suggests, but there’s an important distinction I want to draw out because I think it’s the hinge on which everything turns.

You write that physics is “itself incomplete,” and ask why causal closure should be “accepted as an inviolable principle.” I agree with the first claim but want to resist the conclusion you draw from it. The causal closure of the physical domain doesn’t mean that physics is a complete account of everything that happens. It means something much more modest: that within the domain of event-event nomological explanation (which is the domain where we trace physical states to prior physical states via laws) the story doesn’t develop gaps that require non-physical interventions to fill. Neurons don’t pause mid-firing to await instructions from an immaterial soul. The electrochemistry is self-contained at its own level.

But, and this is the crucial point, that self-containment doesn’t mean the physical level is the only level at which genuine causal work gets done. That’s precisely the conflation my OP is designed to resist. Kim’s exclusion argument assumes that if the physical story is self-contained, any further causal story must be redundant. My non-accidentality reply shows this doesn’t follow. The physical trajectory operates through its rational-level organisation, in the same way that electronic transitions in a chess computer operate through the algorithm they implement. The algorithm isn’t a supplement to the electronics, competing with it for causal work. It’s what the electronics is doing, described at the level that makes its non-accidental character intelligible.

So I’m not defending Kim’s physicalism. I’m granting him his strongest premise, causal closure, and showing that even with that premise in hand, rational agency isn’t excluded. This is a stronger move than denying closure, because denying it invites exactly the Cartesian picture you rightly criticise: a non-physical mind reaching down to nudge physical matter, with an explanatory gap at the point of contact. You’re right that Kim’s framework inherits the Cartesian division between mental and physical substance. Both the hard determinist and the Cartesian dualist assume that if the mind does real causal work, it must be competing with physics for causal territory. The response isn’t to contest physics’s claim to its territory. It’s to reject the assumption that causal territory is a unified pie to be shared.

Your point about organisms acting in order to is exactly right, and it’s where the positive account lives. Teleological organisation — the kind of structure in virtue of which an organism’s chemical trajectory non-accidentally adds up to a functioning heart, or an agent’s neural trajectory non-accidentally adds up to a rational action — is genuinely real and genuinely causally efficacious, even though it operates consistently with the closure of the physical domain. The Pylyshyn passage you quote makes the closely related point that the intentional object of a desire (the pot of gold) doesn’t cause behaviour the way forces cause physical events. Rational causation isn’t event-event efficient causation. An agent’s recognition of reasons doesn’t push their neurons around the way one billiard ball pushes another. It is the rational organisation in virtue of which the neural trajectory takes the course it does.

One might even put it in terms sympathetic to our shared Kantian and Bitbolian interests: the causal closure of the physical domain holds for what Kant would call the empirical character of causation. This is the domain of event-event lawful succession. But actions, understood as exercises of rational agency, also have what Kant calls an intelligible character. They are initiations of new causal chains by an agent acting on rational grounds, not merely the latest link in a chain of prior physical events. Jennifer Hornsby makes a similar point in contemporary terms: actions don’t have upstream physical causes in the way that physical events cause other physical events. They have physical conditions (e.g. enabling circumstances, the state of the agent’s body, the perceptual situation) but the action itself, qua exercise of a rational capacity, is the agent’s own doing, not something that was done to them by their prior brain state.

This is fully consistent with physical causal closure. The physical story is self-contained at its own level. But the physical story doesn’t exhaust what’s real or what’s causally at work. I think this is actually closer to what you want to say than denying closure would be, and it has the advantage of not requiring any “gaps” in physics for the mind to slip through.

Incidentally, Bitbol’s own treatment of downward causation, in his paper “Downward Causation without Foundations”, is the closest account to my own that I know. He explicitly argues for the causal closure of the physical domain as relationally disclosed to us, while insisting that this closure doesn’t license the exclusion of higher-level causal structure. His position is quite different from George Ellis’s, who needs “leeway at the bottom” for higher-level constraints to select among low-level possibilities. This is a picture that, as some might notice, has some uncomfortable affinities with contra-causal libertarianism.


Yes, probably! I might have missed the nuance - but then, it is quite a long OP - and, I admit, homed in on my habitual bogeyman. But, as always, I most appreciate your careful analyses, and will ponder carefully before jumping in again with guns blazing.

Of course, and very well said. Appreciate the references, too - it was from you that I first learned of Michel Bitbol.


There’s a sleight of hand in this paragraph. Note these two connected sentences:

  1. Rational deliberation seems [my emphasis] to be genuinely causally efficacious
  2. Reasoning well actually [my emphasis] produces better outcomes

It is possible that (1) is true but (2) is false. So when you ask “how can both of these be true?” it’s unclear whether “both” refers to physics + (1) or to physics + (2).

I think that if physics and (2) are true then reductionism follows, and so if physics is true and reductionism is false then (2) is false and only (1) is true.

The final sentence here is debatable. It is possible that there is a one-to-one correspondence between physical states and mental states.

If M₂ is reducible to P₂ then this question makes no sense.

If M₂ is not reducible to P₂ then this question is ambiguous; is it asking why P₂ happens to realise a mental state or is it asking why this mental state happens to be M₂? If the former, then it’s perhaps like asking why the physical laws are what they are, and if it’s the latter then it might be that a mental state only counts as M₂ if it is realised by P₂.

I don’t quite understand what this is saying. There are three different parts to this:

  1. P₁ realises M₁
  2. P₁ causes P₂
  3. P₂ realises M₂

Are you saying that (2) is true only because (1) is true?

I think that this is a misrepresentation of hard determinism. The argument isn’t that rational deliberation is causally inefficacious but that we have free will only if rational deliberation is not causally determined by antecedent events — but it is, and so we don’t.


@noAxioms, @Michael, your replies raise different questions but they converge on the same underlying issue, so I want to address them together. I’ll address further issues raised by noAxioms in a separate reply. The main issue for now is: what do P₁, P₂, M₁, and M₂ actually refer to, and what kind of “reduction” is at stake? I think some of the resistance to my OP comes from a natural misreading of the notation, so let me try to clear it up with the help of a diagram I should have included from the start. (I had uploaded a similar one in the Newcomb thread.)

Here’s what the labels mean, and, just as importantly, what they don’t mean.

P₁ and P₂ are fully specific physical states of the brain. By “fully specific” I mean: the complete microphysical configuration, every neuron, every synapse, every electrochemical detail. P₁ is the state before deliberation, P₂ the state after. The horizontal arrow between them is nomological: the laws of physics take you from P₁ to P₂. The physical story is complete. No gaps.

M₁ and M₂ are rational-level properties. M₁ is the agent’s assessment of their situation (their grounds for acting), M₂ is their decision. The horizontal arrow between them is rational intelligibility: given those grounds, that decision makes sense.

The vertical arrows are the realisation relation. P₁ realises M₁; P₂ realises M₂. This means: this particular physical state is one way — among potentially many — of being in that rational state. The mental state is not a separate thing sitting alongside the physical state. It’s a property of the physical state: what that physical configuration amounts to, rationally speaking.
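Rendered in plain text, the picture looks roughly like this:

  M₁ --- rational intelligibility ---> M₂
  ^                                    ^
  | realises                           | realises
  |                                    |
  P₁ --- laws of physics ------------> P₂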

Now, here’s the crucial point that I think both of you are missing, and it’s the point on which everything turns.

@noAxioms, you write that P and M are “just one being a low level description and the other a high level description of the exact same thing.” In one sense this is right: this particular M₁ is realised by this particular P₁. They aren’t two separate objects. So far we agree. But Kim’s argument doesn’t target this claim. The question Kim raises is about types, not tokens.

Think about it this way. The fully specific physical state P₁, with every neuron in exactly this configuration, realises M₁ (say, “recognises they should apologise”). But there are countless different physical configurations that would also realise M₁. A different brain, wired differently at the micro level, could realise the same rational assessment. The class of all physical states that realise “recognises they should apologise” has no shared microphysical property. There’s no neural signature common to every possible brain state that amounts to recognising you should apologise. The only thing that unites the class is the rational-level property M₁ itself.

This is multiple realisability, and it’s the standard argument against what philosophers of mind and biology, following Jerry Fodor, sometimes call type-type reduction: the claim that mental types (like “recognises they should apologise”) can be mapped onto physical types (some specific neural pattern) via bridge laws. If there were such a mapping, then you could in principle replace every mental-level explanation with a physical-level one, and the mental vocabulary would be a convenient shorthand, genuinely eliminable. Kim’s exclusion argument is designed to push non-reductive physicalists toward accepting that consequence: either reduce or be excluded.

The OP’s response is: neither. The mental level does causal work that the physical level, even though complete on its own terms, cannot do. And the way to see this is through the right question.

@Michael, you quoted the OP’s question — “why does P₂ happen to be a physical state that realises M₂?” — and replied that if M₂ is reducible to P₂, the question makes no sense. You’re right that it doesn’t, on the reductionist picture. But that’s because the reductionist picture ducks the question rather than answering it. Let me reformulate the question more precisely, because the original phrasing was ambiguous in a way that invited exactly your response, so that’s mostly my fault:

What is it about P₁ that accounts for the fact that it caused a physical state P₂ that non-accidentally realises M₂ rather than a physical state P₂ that realises some other mental state M₃, or no coherent mental state at all?

This is a contrastive causal question, and it’s the question that does the real work. Note what’s being asked. We’re not asking why P₂ and M₂ are correlated synchronically (the reductionist can answer that: they’re token identical, or the one constitutes the other). We’re asking about the diachronic transition: why did the causal process that started at P₁ land on a P₂ that realises this rational outcome rather than some other one?

The physical laws give you: P₁ leads to P₂. That’s complete and sufficient as a physical explanation. But the physical laws don’t care about rational intelligibility. There’s nothing in the laws of physics that says “when a brain state realises an assessment that one should apologise, the next brain state must realise a decision to apologise.” The laws just say: given these conditions, those conditions follow.

So what explains the convergence? Why is it that physically diverse realisers of M₁ — brains with completely different neural architectures — all produce physical states that realise M₂? The physical trajectories have nothing in common at the microphysical level. What they share is that they all start from a state realising M₁ and end at a state realising M₂. The only explanation for that convergence is at the rational level: M₂ is what M₁ rationally calls for.

This is why I say P₁ produces an M₂-realising P₂ in virtue of being a realiser of M₁. It’s not that there are two competing causes (a physical one and a mental one, that would be causal overdetermination). It’s that the physical cause does its work through the rational organisation it realises. The rational level is what makes it the case that the physical trajectory had to land within the class of M₂-realisers rather than anywhere else in the vast space of physically possible successor states.

Let me reuse a familiar example. Consider two chess computers: one running on silicon chips, another (hypothetically) running on vacuum tubes. Both are in a state that realises “the minimax algorithm has converged on bishop-to-e4 as optimal.” The electronic trajectories in the two machines share nothing at the hardware level — different voltages, different switching mechanisms, different physical substrates entirely. Yet both produce a state that realises “sends the signal for bishop-to-e4.” Why? Not because of any shared electronic property. Because both were in a state realising the same algorithmic evaluation, and that evaluation calls for that move. The algorithmic level isn’t a convenient redescription of the electronics. It’s what explains the convergence across physically diverse implementations.
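For the programmers reading along, here is a minimal runnable sketch of that convergence in Python (a toy two-ply game tree with invented move labels; no real engine is being modelled). The two “substrates” are deliberately unrelated data structures, yet the same minimax evaluation forces the same move:

  # One algorithm, two unrelated representations of the same position.
  def best_move(state, legal_moves, play, leaf_score):
      """Exhaustive minimax: pick the move with the best worst case."""
      def value(s, maximizing):
          moves = legal_moves(s)
          if not moves:                     # terminal: static evaluation
              return leaf_score(s)
          vals = [value(play(s, m), not maximizing) for m in moves]
          return max(vals) if maximizing else min(vals)
      return max(legal_moves(state),
                 key=lambda m: value(play(state, m), False))

  # "Silicon" substrate: the game tree as nested dicts.
  silicon = {"Be4": {"a": 3, "b": 5},   # opponent picks the minimum: 3
             "Nf3": {"c": 2, "d": 9}}   # opponent picks the minimum: 2
  best_silicon = best_move(
      silicon,
      legal_moves=lambda s: list(s) if isinstance(s, dict) else [],
      play=lambda s, m: s[m],
      leaf_score=lambda s: s)

  # "Vacuum tube" substrate: the same position as lists of pairs.
  tubes = [("Be4", [("a", 3), ("b", 5)]),
           ("Nf3", [("c", 2), ("d", 9)])]
  best_tubes = best_move(
      tubes,
      legal_moves=lambda s: [m for m, _ in s] if isinstance(s, list) else [],
      play=lambda s, m: dict(s)[m],
      leaf_score=lambda s: s)

  assert best_silicon == best_tubes == "Be4"

The two object graphs share nothing structurally; what they share is the evaluation they implement, and it is that shared algorithmic level, not any hardware-level property, that predicts the convergence on the same move.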

@noAxioms, you wrote: “One can similarly claim that photosynthesis does not occur, only quantum field disturbances interacting with each other. The one description does not preclude the other.” I actually agree and this supports my argument rather than undercutting it. The photosynthesis description isn’t merely a convenient shorthand for the quantum field description. It picks out a real biological process that explains why these particular quantum field disturbances non-accidentally produce glucose rather than some other chemical outcome. If you asked “why did this collection of quantum interactions produce glucose?”, an answer purely in terms of quantum fields would be complete as a physical story but would miss the biological explanation: because this is a chloroplast, a teleologically organised system whose functional structure constrains which of the countless quantum-mechanically possible trajectories the particles actually follow.

You also wrote that “both are in principle ‘explainable’ in terms of the primitives.” There’s something importantly right about this, and I don’t want to dismiss it glibly. When a boxer dodges a punch, the scientifically minded inquirer rightly asks: what neurophysiological processes enable the psychological capacity to see the swipe and react in time? And when the neurophysiologist answers, the next inquirer rightly asks: what electrochemical mechanisms enable those neurophysiological functions? These are genuine, scientifically respectable explanations. This is what the physicist Steven Weinberg once argued in his “Two Cheers for Reductionism”: that the “arrows of explanation” always point downward toward more fundamental levels.

But notice also what these explanations are explaining. They explain enablement: what material organisation makes it possible for the system to exercise a given capacity. They don’t explain what counts as a proper exercise of that capacity as opposed to a defective one, or a complete failure of actualisation. The neurophysiology of vision can tell you what structures enable a boxer to see the incoming punch. It doesn’t tell you what makes this particular dodge warranted rather than useless. That contrast is individuated at the level of boxing skill, where the norms of the sport and the goals of the boxer are specified, not at the level of neural firing patterns. And this is what Weinberg was missing when he insisted that all arrows of explanation must point to lower levels.

There’s an interesting asymmetry here, which P. M. S. Hacker and John McDowell have both noted. Material-level explanations are often more successful at explaining abnormal behaviour (e.g. mental illness, cognitive biases, systematic errors, software bugs) than they are at explaining successful singular performances. And the reason is telling: when something goes wrong, the material level often contains the relevant contrast (this neural pathway is damaged, this neurotransmitter is depleted, this memory address is corrupted). But when something goes right (when the boxer dodges at precisely the right moment, when the chess player finds the winning move, when the agent recognises they should apologise and does so) the relevant contrast (right move vs. wrong move, appropriate response vs. inappropriate one) is individuated at the rational or functional level, not the material one. An explanation at the material level that purports to explain why the agent apologised rather than deflected blame would have to marshal a contrastive class (the class of P₂-states that realise apologising vs. those that realise deflecting) that has no unity at the physical level. It’s a wild disjunction, united only by the rational-level property.

So the arrows of explanation do point downward when the question is about enablement. But not every question is a question about enablement. “Why did this system produce glucose rather than formaldehyde?” is not asking what molecular machinery enables photosynthesis. It’s asking why the process went this way rather than that way and the answer requires the biological level, because the contrast class is individuated biologically, not quantum-mechanically.

The same applies to rational agency. “Why did this brain transition to a state realising ‘decides to apologise’ rather than ‘decides to deflect blame’?” is not a question about what enables the capacity to deliberate. It’s a contrastive question whose contrast class is individuated at the rational level and the answer (because the agent’s assessment of the situation rationally called for an apology) is a rational-level answer that the physical story alone cannot provide, not because the physics is incomplete, but because the question isn’t a question about enablement.

@Michael, a few more specific points.

You identify a “sleight of hand” between “rational deliberation seems to be genuinely causally efficacious” and “reasoning well actually produces better outcomes.” But the structure of the OP is: here is what seems to be the case (rational deliberation is efficacious), here is what also seems to be the case (physics is causally closed), here is why you might think they can’t both be true (Kim’s exclusion argument), and here is why they can (the non-accidentality reply). The “seems” in the opening is not a hedge or a dodge. It’s setting up the tension that the rest of my OP aims at resolving.

You suggest that “if physics and (2) are true then reductionism follows.” This is a substantive philosophical claim, and it’s actually what Kim argues. If you accept physical causal closure and the genuine causal efficacy of the mental, Kim says you must accept reduction. But the non-accidentality reply shows this doesn’t follow, precisely because the mental level does causal work through (not in competition with) the physical level. This is work that the physical level alone cannot account for, as the contrastive question reveals.

Finally, you write that hard determinism isn’t about causal inefficacy but about the claim that rational deliberation is “causally determined by antecedent events.” This is a fair correction of the way my OP frames hard determinism in the opening section. But my OP’s argument addresses this version too, in the “Agent Isn’t Part of the Furniture” section. The hard determinist says: P₁ (plus laws) determines P₂, so the agent couldn’t have done otherwise. But this packs the agent’s rational exercise into the antecedent conditions. The agent’s rational character (e.g. their capacity to weigh reasons, their cultivated practical judgment, and their specific knowledge and understanding of the situation) is not a prior event that determines their action from behind. It is the agent themselves, exercising their capacities. The “determination by antecedent events” framing is what generates the illusion: it treats the agent as one more piece of the fixed furniture rather than as the one doing the deliberating.

Actually, I’m not saying this. For the sake of argument, assume hard determinism. The student’s mistake is really basic. They are saying “well, I was going to study, but now that I’ve realized everything is determined anyway, I might as well watch Netflix.” They are assuming that the causal line as it was determined before they discovered determinism remains the settled one, and that there is therefore no reason to study. But their discovery of determinism, and their response to it, is itself an event in the predetermined chain. Their response to that awareness will lead to a bad grade.

Getting a good grade sits at the end of chains of events in which the student studies. Determinism, or freedom, does not affect this at all. Determinism offers no shortcuts like this. Determinism just means that the student was always the sort of person to be confused by determinism itself, fail to study, and fail the test.


A hard determinist can accept that an agent is “the one doing the deliberating” and still argue that because their deliberation is causally determined they don’t have free will.

Your argument just seems to be a description of compatibilist free will. The hard determinist (and free will libertarian) will respond by saying that compatibilist free will isn’t “really” free will — or to avoid the nebulous term “really”, isn’t the type of free will that they care about, which is libertarian free will.

What does this even mean? Again, we have these three facts:

  1. P₁ realises M₁
  2. P₁ causes P₂
  3. P₂ realises M₂

What is M₁ causally responsible for? Unless you are trying to replace (2) with “P₁ and M₁ cause P₂”, I don’t see how your argument refutes the causal exclusion argument. If P₁ is sufficient, then by definition M₁ is unnecessary.

Because even if mental states are not reducible to physical states, and even if multiple realisability is true, the connection between them is more than just causal. I hinted at this in my previous comment when I said that a mental state only counts as M₂ if it is realised by P₂.

I forget the philosopher who argued this, but a case can be made that emotions are the sensation of one’s physiology. I don’t cry because I’m sad; I’m sad because I cry. Similarly, the case can be made that I intend to move the chess piece because my body is preparing to move it; or, to be more precise, the intention is the awareness that it is going to happen. The idea that we can separate the content of some mental phenomena from what the physical phenomena are doing is called into question. So this “convergence” is neither accidental nor evidence that mental states are causally efficacious.

It’s a bit unclear to me what you think the student’s mistake is on the assumption that determinism is true. If we are to assume that determinism is true, it doesn’t appear to be consistent to say that the student is making a mistake in assuming that determinism is true and therefore in concluding that whatever grades they will obtain in the future was already determined in the past (and by the past “physical state of the world” as van Inwagen’s argument would have it).

So, what you may be gesturing at is that if the student was predetermined to watch Netflix, then they may have been predetermined to do it on the basis of bad reasons, whereas if they were predetermined to study, then they likely would have been predetermined to do it for better reasons. (And who would not want to do things for better reasons?) But what you fail to explain is what makes the hard determinist argument for watching Netflix (or anything else they might want to do) bad reasoning if, by stipulation, all the metaphorical buses leading to comparatively better or worse grades, except for one, have already departed the station.

While you suggest that the reasoning is bad and that the deterministic thesis changes nothing about the fact that the Netflix watcher reasons badly and the studious student reasons well, the truth of the matter remains that a Netflix watcher who watches Netflix on the grounds that all the buses leading to better or worse grades, except one, have already (ex hypothesi) departed the station is reasoning soundly on the basis of true premises!

And this seems to me to be precisely why hard determinists have always suspected compatibilism of being a sleight of hand. Once the compatibilist grants that the physical past determines the physical future, they’ve supplied the only premise the already-departed-buses argument needs. If it’s true that the student’s grades were already determined before they deliberated then, at the time of deliberation, the student’s grades are already set in stone. The compatibilist wants to insist that the student’s deliberation is part of the causal chain that produces the outcome. The hard determinist replies that since there’s only one bus left at the station (which the compatibilist has granted), even though we don’t yet know which one it is, why should anyone pretend that there is any point in choosing between buses?

A way out of this conundrum, of course, is my OP thesis that even on the assumption that physical-level determinism is true, the student keeps all of their options open until they settle on one themselves on the basis of their own rational grounds (which, in this case, are their apprehension of the instrumental connection between studying and achieving good grades, together with their own valuation of good grades).

So, on my view, the student’s main mistake isn’t a failure to insert their deliberative episode into the P-deterministic (and predetermined) causal chain. That is indeed the diagnosis a compatibilist would make. I’m rather suggesting that the student’s real failure is a failure to recognize their deliberative episode as the actualization of a capacity to determine which (normatively defined) M-causal chain is to be realized by whatever (nomological) P-causal chain runs through it, on rational grounds that are good by their own lights. And it’s because they have the ability to make this rational determination that they are the one settling the consequences of their own actions, regardless of the alleged fixity of “the past”.