You are losing me here. This does not seem to be an effective rejoinder by the determinist, and then your “way out” just seems to be a restatement of compatibilism.
Determinism says that the choice you will ultimately make, whichever one that happens to be, is the bus that hasn’t left. But to you, every bus still looks like it is waiting at the station, and you still have to pick a bus and get on. Each still goes to a different destination. You can’t say, “the game is rigged, I was always going to choose whichever bus I will get on, I abstain. I’m waiting at the station.” As soon as you do this, you blink confusedly, look around you, and find yourself riding on a bus.
To try to be crystal clear, the student’s error:
Original sequence
Study → Good grade
Then, the student discovers determinism. They accept it, and therefore know the outcome of the test is set in stone. But, they believe that their discovery of determinism, and their resultant decision to watch Netflix, somehow “doesn’t count”.
Student’s imagination:
Discover determinism → Watch Netflix instead! → Get good grade anyway!
They thought the outcome of 1 is what was set in stone, and that therefore studying is pointless. When in
Reality
Discover determinism → Watch Netflix → fail test
Under determinism, 3 is what was set in stone. The student was always going to discover determinism, watch Netflix, and fail the test. The reason 3 was set in stone is the collision between the discovery of determinism and the student’s poor reasoning powers. If they were someone who reasoned better, they would realize that determinism or no, studying is causally necessary to get a good grade. That in fact the metaphysical question of determinism cannot be empirically resolved, as it has no observable consequences. And so, to make any decision on the basis of determinism is to make a mistake.
I’m fully onboard with that. M1 can be realized by all sorts of different P1’s, leaving me still trying to grok this ‘right question’.
To me this sounds like asking about a collection of atoms arranged in a way P₁ which realizes R₁ [rock imbalanced at top of hill]. The diachronic transition from P₁ to P₂ utilized nothing but physical law governing the particles. P₂ realizes R₂ which is [rock at bottom of hill]. Note the multiple realisability. P₁ can be any of multiple valid states, varying in kind of rock, mass, and (within limits) shape and orientation, but the point is they all realize R₁. Physics doesn’t know about rocks or even objects, only atoms, some arrangements of which constitute P₁.
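The coarse-graining being described can be sketched in code. This is a toy model, not physics: `realizes` and `physical_law` are hypothetical names, and the “mean height decides the macro-state” rule is an illustrative stand-in for real coarse-graining. The point it shows is just that many distinct micro-states realize the same R₁, the dynamics act only on micro-states, and yet each resulting P₂ realizes the same R₂.

```python
# Toy model: many micro-states ("atom arrangements") realize one macro-state.
# The dynamics ("physical law") act only on micro-states; macro-states like
# [rock at top of hill] are coarse-grained equivalence classes of them.

def realizes(micro_state):
    """Coarse-graining map: which macro-state a micro-state realizes."""
    # Illustrative rule: the mean particle height decides the macro
    # description; composition and shape details are ignored.
    heights = [h for (_, h) in micro_state]
    mean_h = sum(heights) / len(heights)
    return "rock at top of hill" if mean_h > 50 else "rock at bottom of hill"

def physical_law(micro_state):
    """Micro-dynamics: every particle falls; knows nothing about rocks."""
    return [(x, max(h - 60, 0)) for (x, h) in micro_state]

# Two different micro-states (granite vs. basalt, say) realize the same R1...
granite = [(0, 90), (1, 95), (2, 100)]
basalt  = [(0, 80), (1, 85), (2, 120)]
assert realizes(granite) == realizes(basalt) == "rock at top of hill"

# ...and the micro-dynamics carry each to a P2 realizing the same R2.
assert realizes(physical_law(granite)) == "rock at bottom of hill"
assert realizes(physical_law(basalt)) == "rock at bottom of hill"
```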
If that doesn’t cause the same concern with you, then perhaps you suspect that mental processes don’t follow these same sorts of rules.
The only explanation for that convergence is at the rational level: M₂ is what M₁ rationally calls for.
Just like R₂ is what R₁ calls for. OK, it isn’t a rational calling in that case, but I can make R into a Roomba instead of a rock, and then we have an example that is closer to what the human does. Do you have a problem with P₁ realising R₁ [obstacle detected] leading to physical state P₂ realising rational outcome R₂ [choose to go around to the right]? and not say R₃ [call for help]? Just trying to narrow down where your issue is: rationality, or machine vs. human?
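The Roomba case can be made concrete with a toy controller. All names and thresholds here are hypothetical (no real Roomba API is being described); the sketch just shows a low-level sensor state P₁ realizing [obstacle detected] and deterministically yielding a state realizing R₂ [go around to the right] rather than R₃ [call for help], with R₃ reachable only from a different P₁.

```python
# Toy robot controller: raw sensor values (the "physical" level) map to a
# high-level action. The same transition rule yields R2 [go around right]
# from one P1 and R3 [call for help] from a different P1.

def next_action(sensor_readings):
    """Map raw sensor values to a high-level action description."""
    front, left, right, battery = sensor_readings
    obstacle_ahead = front < 0.3          # R1: [obstacle detected]
    if not obstacle_ahead:
        return "go straight"
    if battery < 0.05:
        return "call for help"            # R3 is reachable, just not from this P1
    # R2: routing decision grounded in the very same physical state
    return "go around right" if right > left else "go around left"

# A P1 realizing [obstacle detected, clearance on the right, healthy battery]:
assert next_action((0.1, 0.2, 0.9, 0.8)) == "go around right"
# A different P1, realizing [obstacle detected, battery nearly dead]:
assert next_action((0.1, 0.2, 0.9, 0.01)) == "call for help"
```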
Half a billion years of evolution results in DNA that generates a neurological configuration that produces a response that is more fit. Doesn’t explain the rock or the Roomba, but it explains the apology. It doesn’t always work, but if an initial state realising [need to apologise] doesn’t actually produce a state realising [apologising], then the initial state isn’t really an instance of P₁, but rather a different state, such as [awareness of needing to do something you have no intention of doing].
But that’s also part of P₁, a physical state that realizes the capacity of rational reasoning. Any computer has this for instance, despite there being no rationality in any transistor or electron involved in the machine’s physical state P.
We seem to be generally in agreement, but I don’t see where your asserted libertarianism comes into play anywhere. The only nit I have with your last bit there is that there are all those other physically possible states, but the only possible successor states are the M₂-realisers. Most of the time. There are exceptions: I need to apologize but the house caught fire just now, so we put that off.
Concerning chess: Yes, despite there being only 3 states assignable to any position (W L D), the ‘perfect player, having solved the game’ who selects randomly from any valid move that does not degrade state, will win almost no games against a normal strong player. He will rapidly degrade into an incredibly weak position where only perfect play will stave off the loss. The game will end in a draw. Good moves aren’t just (not lose), but strive to limit the width of the path that the opponent must follow. All this is sort of a response to the chess bit in the Newcomb thread, but chess was brought up here, so there.
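The “width of the path” point can be illustrated with a toy solved game (an abstract tree, not chess; all node names are made up). Two moves both preserve the draw, so a player selecting randomly among non-degrading moves treats them alike; but one leaves the opponent three safe replies and the other only one, which is exactly the pressure a strong player exploits.

```python
# Toy solved game: values are from the opponent's side after the root move
# (+1 opponent wins, 0 draw, -1 opponent loses).

TREE = {
    "root": ["safe_wide", "safe_narrow"],
    "safe_wide":   ["d1", "d2", "d3"],   # opponent has three drawing replies
    "safe_narrow": ["d4", "l1", "l2"],   # only one reply avoids losing
}
LEAF_VALUES = {"d1": 0, "d2": 0, "d3": 0, "d4": 0, "l1": -1, "l2": -1}

def opponent_safe_replies(move):
    """How many replies keep the opponent from losing."""
    return sum(1 for reply in TREE[move] if LEAF_VALUES[reply] >= 0)

# Both moves are value-preserving ("do not degrade state"): the opponent's
# best reply still holds the draw either way...
assert max(LEAF_VALUES[r] for r in TREE["safe_wide"]) == 0
assert max(LEAF_VALUES[r] for r in TREE["safe_narrow"]) == 0
# ...but one move narrows the opponent's safe path from 3 replies to 1.
assert opponent_safe_replies("safe_wide") == 3
assert opponent_safe_replies("safe_narrow") == 1
```

A random chooser among value-preserving moves is indifferent between the two; a player striving to limit the width of the opponent’s path prefers `safe_narrow`.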
I’m good with all that. I’m kind of still bewildered why any of this needs asking in the first place, why it happens to be the ‘right question’.
I think it’s wrong to mix levels in the same sentence like that. At least, ‘why did this collection of quantum interactions produce a quantum state that happens to realize glucose?’.
This is what the physicist Steven Weinberg once argued in his “Two Cheers for Reductionism”: that the “arrows of explanation” always point downward toward more fundamental levels.
Yet your comment that mildly offended me just above pointed upwards from QM to glucose, but yes, glucose is explained by QM, and QM is not explained by glucose.
True not only of the P vs M boundary, but of various philosophies of mind as well.
It’s asking why the process went this way rather than that way and the answer requires the biological level, because the contrast class is individuated biologically, not quantum-mechanically.
See? We’re in agreement so far.
Because it isn’t P₁. Maybe it’s somebody else whose rational character is more inclined to deflect (similar to differing actions in reaction to the Newcomb scenario). Maybe it’s the same person, but in a different circumstance. Neither state is P₁, although each of these states does realize a form of ‘recognises they should apologise’.
This is pretty much in line with when I said ‘the choice made (M₂) is a function of the deliberation (said diachronic transition). That makes it your choice, and puts the responsibility of that choice upon you.’
And once again, this has nothing to do with determinism, hard or otherwise, since none of it is any different given non-determinism.
Yes, but I might challenge said hard determinist to come up with an example where free will would have yielded a better choice.
Similarly, I can challenge him to explain how lack of determinism somehow allows libertarian free will. It doesn’t.
Sometimes. Some forms of determinism say that plenty of busses haven’t left, but they’re not labeled, and thus where you end up isn’t up to you; it’s essentially random. Think MWI, where you get on all the busses.
The argument doesn’t ever go that way. “Watch Netflix, since the bad grade is determined” mistakenly mixes having a choice with not having it. The argument isn’t valid unless argued from a consistent PoV. From an objective standpoint, the conclusion to slough off is defective reasoning, all indeed quite determined. Natural selection will have him digging ditches in no time.
The student here also makes the mistake of determinism releasing him from responsibility, which in this case is being judged for poor marks and not getting the good job.
That’s a good question. The response is that even if P₁ and M₁ are token-identical, and hence are in a sense the same event, they don’t denote the same equivalence classes and don’t play the same roles in causal explanations.
In his 1996 paper “The role of contrast in causal and explanatory claims”, Christopher Hitchcock begins with raising the interesting issue of contrastive stress in causal claims (following Dretske and van Fraassen). The first example he provides is the sentence “Walter ran to the classroom”. This sentence can be stated with the stress being put on three different parts: “Walter ran to the classroom”, “Walter ran to the classroom”, “Walter ran to the classroom”.
This is most relevant when the sentence is part of a question like “Why did Walter run to the classroom?” The three ways to put the stress call for three different sorts of explanations, each targeting a different contrast class. Why Walter rather than someone else? (Because Walter knew how to repair the slide projector.) Why running rather than walking? (Because the situation was urgent.) Why the classroom rather than somewhere else? (Because the malfunctioning slide projector was in the classroom.) And each explanation makes reference to a different causal antecedent. It’s never just a raw “event” (e.g. Walter running to the classroom) that is “caused to happen” by a past “event”. And that is true in physics too, unless the word “cause” is used in a very artificial manner to denote so-called “total causes” like space-like spacetime slices.
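Hitchcock’s point can be mimicked with a toy lookup (illustrative only; no claim that he formalizes it this way): the event token is held fixed, and what varies across the three why-questions is the contrast class, each of which selects a different explanatory antecedent.

```python
# One event token, three contrastive why-questions, three explanations.
# What varies is not the event but the contrast class the question targets.

EVENT = ("Walter", "ran", "to the classroom")

EXPLANATIONS = {
    # Why Walter (rather than someone else)?
    "agent":       "Walter knew how to repair the slide projector.",
    # Why running (rather than walking)?
    "manner":      "The situation was urgent.",
    # Why the classroom (rather than somewhere else)?
    "destination": "The malfunctioning projector was in the classroom.",
}

def why(event, contrast):
    """Answer a contrastive why-question about one and the same event."""
    return EXPLANATIONS[contrast]

# Same event, different contrast class, different causal antecedent:
assert why(EVENT, "manner") != why(EVENT, "destination")
```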
In Kim’s causal schema, if we grant that M₂ and P₂ are token-identical, and therefore denote the same “event”, asking why it is that M₂ occurred, or P₂ occurred, amounts to stressing two different features of this event. Saying that P₁ (specifically) caused P₂ answers why this particular physical configuration obtained. Saying that P₁ being a realizer of M₁ makes intelligible why the agent did M₂, and hence explains why it is that P₂ happened to be a realizer of M₂ (even though it didn’t necessarily have to be on that account alone, since intelligibility relations between reasons and actions aren’t strict laws of nature).
In other words, explaining why it is that my limbs moved in this or that way, and why it is that this or that neural firing pattern happened in my skull, shouldn’t make reference to the same sort of antecedent cause as the question why it is that I chose to watch Netflix. The two different kinds of explanations, answering different questions, make reference to different causal antecedents.
So, when you are asking me what it is that M₁ is causally responsible for if P₁ is sufficient, the question presupposes that there is only one causal question on the table. But it is like asking what it is that makes Walter’s knowledge of the functioning of slide projectors causally relevant to explaining why he ran to the classroom if the urgency of the situation is sufficient to explain why he ran. The two causes account for different features of the “event”. Walter having the knowledge explains why he was called, and the urgency of the situation explains why he ran.
In the general case illustrated by Kim’s causal exclusion schema, the contrastive argument draws attention to the fact that there are two distinct causes. “P₁ causes P₂” answers the physical question. “P₁ being a realiser of M₁ makes intelligible why P₂ realises M₂” answers the rational question. M₁ isn’t an extra cause pushing P₁ aside, as Kim would have it, and it isn’t epiphenomenal either. It’s the feature of the event in virtue of which a different question gets answered. And this question is: what were the rational grounds on the basis of which the agent’s action can be disclosed as rational.
I thought you were arguing against reductionism? That seems inconsistent with token identity?
With that in mind, the rest of what you say seems to be a matter of language; we can describe events using the language of physics (mathematics) or we can describe events using English. I can agree with that, but like before it doesn’t really do the job of answering the hard determinist’s objections to us having free will; that we have free will only if our deliberation is not causally determined by antecedent events, such that we could have done otherwise even if the physics were the same.
Thanks for the excellent reply — it definitely gave me a lot of ponder. You and I are in agreement on many things. The only real point of divergence that I see is to be found in the quotation above. While I don’t necessarily fault you for wanting to avoid grand metaphysical gestures, I think you will eventually need to go down that path if you want to fully stabilize your position. Without a metaphysical framework in place, the meaning of terms like “organization”, “constraint” and “causal relevance” are left “blowing in the wind”, which leaves you more vulnerable to reductionistic reinterpretations of those terms. In other words, until you make explicit the intelligible form that is disclosed by each of the individual cases that you’ve pointed to, it remains unclear exactly what these cases are really meant to establish. Thoughts?
It’s not clear to me what you mean by “explicable in terms of”. Could you clarify?
It’s only vulnerable to Kim’s epiphenomenal attack if we already assume some kind of reductionistic picture of the world. But this is precisely the point that is being challenged. The claim is that physical closure and causal explanation pull apart. They are not the same thing.
Your analysis seems unnecessarily complicated. Simply conceptualize it as two minds: the primitive mind and the conscious mind.
The conscious mind has the ability to take input from the primitive mind, analyze it and accept it or veto it which it can do iteratively. The conscious mind is what we think of as ourselves. It is what has capacity for free will since it can deliberate independently from the primitive mind.
At times you seem to allude to the above concept, yet at others you seem to speak of the mind as a monolith.
That two different properties of a “material event” have distinct causes isn’t just a matter of semantics. The two causes (e.g., the situation being urgent or Walter being knowledgeable about slide projectors) aren’t different names for the same thing. The token-identity claim was granted by me for the sake of argument. But it ought to be made more precise. It boils down to accepting the supervenience thesis, but an unstated caveat is that, following Wiggins, I endorse material constitution without identity, and hence I am not a reductionist. You may think that reductionism just is the claim that people are entirely constituted by physical stuff. Consider, though, that a statue and the bronze lump that makes it up at a given time aren’t the same material objects when they have different forms—different principles of persistence and individuation.
I’m reminded of the neat little book Real Natures and Familiar Objects (2005) by Crawford L. Elder, which I had been much impressed by. I think it’s from there that I first inherited the insight that the specific material constitution of a real object (that always has both matter and form lest it be undefined as a persistent object, be it a rock, a chair, a cat, or a galaxy) is an accident of this object just as much as any other macroscopic accidental property of it is. It may also be from there that I derived the insight that tracing back causal chains leading to such an object’s behavior misses the real causal antecedent entirely when they track its accidental properties, such as its material constitution over time.
For instance, say a car reaches a city intersection and turns left. Why did it turn left? The friction forces between the tires and the road will figure early in the material causal chain. Further upstream in this chain, the friction forces between the driver’s hands and the steering wheel will also figure, and then the photons bouncing off the street name sign and entering the driver’s eyeballs. We then must ask why it is that some patterns of neural firings were precisely what they were. But upstream of such neurophysiological events figures the fact that there was oxygen around so that the driver’s brain didn’t suffer anoxia. This leads us to the wrong causal antecedent: that the car turned left because the driver’s brain cells got the specific oxygen intakes that they got.
In order to get to the relevant causal antecedent, we must ascend, as it were, to high-level psychological properties that physics has nothing to say about. And there we find that the car turned left because the driver wanted to go back home and turning left led them there. But this isn’t a mere redescription of the first story. It moves us from (1) describing the physical causal antecedents of the raw motions of the car to (2) describing the intelligible grounds of the intentional actions of the driver. The car itself (and the driver’s brain cells) merely get caught up in the intersection of those two different causal histories, as it were, just like the atoms that make up both the lump of bronze and the statue at a given time do.
The oxygen intake is a genuine part of the physical causal chain leading to the car turning left. Without it, the brain doesn’t function as it should and the driver’s hands don’t turn the wheel. Biology and psychology, though, and not physics, tell us what it is for a brain to function as it should. But even more importantly, citing the specific oxygen intake as the cause of the left turn is absurd. And its absurdity isn’t a matter of insufficient detail. Adding more physical detail (the specific oxygen molecule trajectories, the specific metabolic pathways) doesn’t get you any closer to the right answer. It gets you further away.
The only way to reach the relevant causal antecedent (e.g., the driver wanted to go home and turning left led there) is to ascend to the rational level. And ascending to the rational level, without dismissing the enabling role of the material realization, demands that we attend to the question why it is that P₂ (e.g., the specific pattern of neural firings) was such as to realize M₂ specifically (e.g. a decision to turn left). This leads us to the right causal antecedent, picked from the right contrastive class: not the specific material realization P₁, but rather the fact that P₁ realized some M₁ relevant to making M₂ intelligible in the light of it.
Suppose you have a higher level p2 supposedly built from a lower level p1. Given any description using p2 language, that same description should be rewritable using p1 language only. It might be incredibly awkward and unusably verbose, but that is beside the point.
There should be no part of the p2 description that escapes translation to p1. If there is, either the understanding of the relationship between p2 and p1 is incomplete, or there is compositionally more to p2 than p1.
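The translation claim above can be sketched in miniature. The function names are hypothetical, and the example is deliberately trivial: a “p2-level” description (‘sum the list’) rewritten in a “p1-level” vocabulary of primitive one-at-a-time steps. If the rewrite preserves everything, the two descriptions are extensionally equivalent, however much more verbose the p1 version becomes.

```python
# A p2-level description rewritten in p1 language only.

def total_p2(numbers):
    """p2-level description: 'sum the list'."""
    return sum(numbers)

def total_p1(numbers):
    """p1-level 'translation': only primitive steps, one element at a time."""
    acc = 0
    for n in numbers:
        # even this step could be unfolded further into repeated +1 operations
        acc = acc + n
    return acc

# The verbose p1 rewrite leaves no remainder: same extension everywhere tested.
assert total_p2([3, 1, 4, 1, 5]) == total_p1([3, 1, 4, 1, 5]) == 14
```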
But this is ambiguous. Are we talking about the rational properties the agent is consciously aware of? Or, are we talking about formal rationality, independent of awareness?
If the latter, it seems very plausible that these are already present in p. For instance, a chess computer is not just transistors, or a computer executing a series of op codes. It includes all of the higher level “rationality” necessary to decide which move is best. But this doesn’t mean that Stockfish has m. It just has p, including all its lower and higher levels of description. Only human players are aware of the rationality they are exercising, only they have m.
Similarly, nematodes’ nervous systems are not just assemblages of neurons. Their neurons fire in coordinated, higher level states, corresponding to human feelings such as pain. Their nervous systems recognize the aversive stimulus and induce the worm to avoid it. But this doesn’t imply m, it is very plausible that the worm is a biological machine, without inner states.
You seem to be defining m as higher level p, then attacking Kim by saying we really need higher level p. This is what I meant earlier when I said you were defending an easier target than what Kim is actually attacking. He can concede we need higher level p, while still claiming that m is redundant.
You acknowledge that Stockfish isn’t “just transistors”. We are agreed that the functional description at the algorithmic level does explanatory work that the raw physical level description of its transistors (e.g. voltages, currents, etc.) can’t do. But then either those algorithmic descriptions are reducible to hardware level descriptions or they aren’t.
If they are, then you must retract your acknowledgement. Stockfish is just transistors after all, and you need not mention the algorithm at all to explain how Stockfish is able to recognize good chess moves in a specific board position.
If they aren’t reducible, then you have a functional level that supervenes on but isn’t captured by the physical level description. And this is exactly the structure Kim’s argument targets. You are just relocating my M₁/M₂ to what you are calling “higher-level P,” but Kim’s exclusion argument applies in the same way. The physical description of the transistors is sufficient. So what causal work is the algorithm doing? This is exactly the question my OP answers: the algorithm is what makes it non-accidental (and therefore explains the fact) that the transistor state transitions constitute good chess moves rather than arbitrary voltage fluctuations.
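The non-accidentality claim can be given a toy rendering (an analogy, not an exegesis of Kim; all names are made up). The very same “substrate”, here a list of ints standing in for voltages, can be updated by an algorithm or by arbitrary perturbation, and physics describes both runs equally well; only the algorithmic description explains why one run’s state transitions non-accidentally track correct answers.

```python
import random

# Same substrate, two physically possible transition histories. Only one
# is governed by an algorithm, and only that fact explains why its final
# state is guaranteed to be correct rather than accidentally so.

def algorithmic_step(state):
    """Update governed by an algorithm: one pass of bubble sort."""
    state = list(state)
    for i in range(len(state) - 1):
        if state[i] > state[i + 1]:
            state[i], state[i + 1] = state[i + 1], state[i]
    return state

def arbitrary_step(state, rng):
    """Physically possible but algorithm-free: swap two random cells."""
    state = list(state)
    i, j = rng.randrange(len(state)), rng.randrange(len(state))
    state[i], state[j] = state[j], state[i]
    return state

state = [5, 2, 4, 1, 3]
for _ in range(len(state)):
    state = algorithmic_step(state)
# Non-accidental: the algorithm guarantees the sorted result.
assert state == [1, 2, 3, 4, 5]
```

An arbitrary-step run might land on the sorted list too, but nothing about it would explain that outcome; that asymmetry is the explanatory work the algorithmic description does.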
And then regarding the functional significance of consciousness, or the lack thereof, the same dilemma applies to whatever you reserve the label “M” for. If consciousness supervenes on your “higher-level P,” then Kim’s argument runs against consciousness too. The functional level is already sufficient, so consciousness is excluded. You would need the same kind of reply I give in the OP to preserve a functional role for conscious awareness, or else accept that consciousness floats free of the physical entirely (which looks like Cartesian dualism).
I’d like to know what it is that you mean by “independently” in the last sentence. Do you mean that the conscious mind represents a capacity to deliberate that isn’t enabled, or realized, by the brain, and that operates in parallel, as it were? Or do you simply mean that the neurophysiological story doesn’t fully explain what it is that the agent is doing when they deliberate? Then you’d be defending the irreducibility of the mental domain to the physical domain, without necessarily denying Kim’s supervenience thesis. This is what my OP does.
But note that chess has a very specific ‘salience landscape’ to allude to a term much used by cognitive scientist and philosopher John Vervaeke.
But the nematode nevertheless seeks to preserve itself and to propagate regardless of any conscious awareness on its part. So it has ‘skin in the game’ in a way that no machine does.
I am quite appreciative of your demand; however, my already fairly large OP wasn’t meant to advocate for a form of non-reductive physicalism but rather to grant to physicalists as much as possible that they might reasonably want (including chiefly the supervenience of the mental over the physical and the causal closure of the physical) and show that even with all of that, reductive physicalism remains indefensible.
Regarding my own favored form of Aristotelian naturalism, it wasn’t the aim of my OP, or indeed of the thread, to advocate for it since the two manuscripts where I had begun doing so are one hundred pages in total and I wanted my OP not to exceed ten. My first 2009 manuscript (titled Autonomy, Consequences and Teleology), though, had as one of its primary aims to distinguish and characterize grades of autonomy that belong to (1) natural non-living material objects, (2) dissipative structures, (3) functional artifacts, (4) living things, (5) non-rational animals, and (6) rational animals. And accounting for the strong emergence (both synchronic and diachronic) of the last three categories requires jettisoning simplistic mechanistic stories. In the case of natural evolution, this means also displacing “selfish gene” + “blind selection” stories with Evo-Devo + niche construction approaches and, in the case of rational animals, supplementing such approaches with accounts of the bootstrapping of individuals (children) into culturally evolved rational forms of life.
But doing all of this, although it might be responsive to your demand “that [I] make explicit the intelligible form that is disclosed by each of the individual cases,” would be beyond the scope of the present thread. I’ll be happy to broach some of those topics as they arise and are relevant here and elsewhere, of course.
I’ve been busy, but haven’t forgotten your excellent questions, or @Hypericin’s worries about my already-departed-busses argument against determinism. I’ll get back to them as time permits!
OK, so how does this work in the case of an apology? Suppose we go ahead and try to perform a “translation” into neurobiology. We end up with a description that features things like “the struggle for dominance between the amygdala and the prefrontal cortex”, but at no point will the normative/rational structure of the apology be reproduced. That’s a problem, because the normative/rational structure is constitutive of what it is for something to be an apology.
Now the conclusion that you would apparently draw from this is that either (1) our understanding of the relationship between an apology and the present state of neurobiology is incomplete, or (2) there is more (compositionally) to an apology than just neurobiology. The presumption is that there should be some translation between the two that preserves everything, and that it’s just a matter of finding it. But what justifies this presumption?
That’s understandable, and my intention wasn’t to make you recapitulate your entire metaphysics here on the thread. My aim was to point out that, in the absence of such a metaphysics, non-accidentality only demonstrates that the world can be described under more than one legitimate vocabulary. As it stands, the determined reductionist can grant basically everything you’ve said and still maintain that the ontological conclusion you want to draw simply doesn’t follow. The appeal to dissipative structures and teleological organization is suggestive, but it’s not yet clear that it amounts to anything more than that in the absence of an overarching theory.
Based on your response, there are some concepts that you don’t seem to be understanding. Perhaps it will help if I flesh out the conceptual model a bit.
Do you have a solid basic functional understanding of computers? Hardware, operating system, higher level apps and how they interact in a multitasking environment? If so, it will be relatively easy to explain. There are concepts that are readily transferable.
Whether your view here is correct or not, it’s a really important discussion and I think for the most part you’ve done a great job expressing yourself. It’s something that I’ve been thinking about a lot these past couple years. The exact nature of mental emergence.
What explanation? How are the rational and the physical (“physical” as physicalists commonly use it) related in any way that isn’t simply accidental? On the epistemic assumptions common to physicalism and its deflated understanding of causality, such a connection seems impossible to establish.
So sure, I agree: an agent apologizes in virtue of their understanding making that the rational thing to do. But what does “in virtue of” mean here? What actually links the rational and the physical such that physical states can bear rational properties at all, and such that a brain state can be a realizer of a judgment rather than just a neural configuration that happens to correlate with one? The common assumptions re a reduction to mathematics or computation seem particularly fraught here. How does one get merit, guilt, teloi, experiences, etc. starting from mathematics?
On the epistemic and metaphysical assumptions physicalists typically assume, the link is either inexplicable, purely nominal, or both. Even if there were a genuine explanatory link between the rational and the physical, on those assumptions, i.e., empiricism, third-person observability, mechanistic temporal causality, etc., such a link would seem impossible to establish. (My great muse Hume points this out quite well; yet, of course, causal-closure is itself unfalsifiable and not strictly observable on these grounds either).
Barring some explanation of the relation, the grounding of freedom in it seems problematic.
That would be my main point, but I guess this gets to my gripe about “hard” determinism. What is “hard” about it aside from the bare posit that free will doesn’t exist provided it obtains? Normally, the thesis is framed in terms of the “causal closure of the physical.” But this tends to pack in some hefty assumptions about causality itself, and it alone does no work re the denial of freedom unless it is also assumed that agents aren’t involved in physical causation in the first place. How do we get to that hidden premise? Generally, through more questionable assumptions, particularly reductionism, the assumption that being is reducible to some univocal formalism/mathematization, and quite often smallism (the assumption that smaller = more fundamental).
But none of these themselves are on particularly firm footing. The empirical track record for reductionism is, to my mind, quite weak. Thermodynamics to statistical mechanics was impressive (although far more nuanced than it is often presented as), but what is a strong example since then (particularly one that isn’t actually a unification rather than a reduction)? More than a century on, even the basics of molecular structure haven’t been reduced.
Smallism in particular seems like the sort of thesis the wise wizard in some coming-of-age novel chides an apprentice for.
Good points, but I think the difficulty is greater than this. It’s a self-refuting position, first because it denies that reason and discourse are ordered to truth per se, and second because it undermines any epistemic warrant we have for believing our own judgements in the first place, since they can only ever be accidentally related to selection processes (indeed, in epiphenomenalist versions it simply follows that our reasoning and sensations never, on pain of contradicting the thesis, affect behavior, which means they can never influence reproduction, which in turn means they can never be selected for).
I guess another question is, can there be “rational” freedom without teleology? Presumably, reason needs to be ordered to truth, and action ordered to some good that we can be right or wrong about for “freedom” to be anything other than arbitrary.