Can Rational Agency Survive in a Physically Closed World?

Actually, the more I think about it, the more the causal-closure principle seems to trade on an equivocation. We can speak of the “physical” in two senses:

Physical A: pertaining to being qua changing, to physis broadly construed; or

Physical R: pertaining specifically to microphysics.

I think there is often a (perhaps unintentional) equivocation that uses the plausibility of A to smuggle in R. When it comes to questions of agency, or the causal efficacy of life forms, etc., R is simply question-begging.

Now, a common point here is that “physical laws” are never violated at different “levels.” True, but merely a tautology, since regularities only count as “laws” when they are never violated (and if they are shown to be consistently violated, we just revise them). Laws of chemistry or biology, defined the same way, are likewise by definition never violated. And at any rate, the move from “physics is never violated” to “physics is explanatorily primary” is a non sequitur.

Now, if a metaphysics cannot tell the difference between a man and a corpse, I would say it is defective. I am not sure how one gets that difference from particle ensembles or fields if “higher levels” are assumed to be causally inert. But even if we grant that some real morphism could be identified whereby all living things can be picked out in terms of something like particle arrangements, this would still be a morphism defined by reference to the very biological category it is supposed to reduce. Coextension, however, is not reduction. The mathematical characterization would still be parasitic on the biological concept, barring some sort of actual grounding relation. Really, it would simply be an extensional definition that inverts the place of the principle that unifies all of its instantiations and the contingent instantiations of said principle, a bit like defining a frog by piling every frog in the world into a heap. But extensional definitions are all the rage (even validity is often defined this way, or necessity, as sheer frequency), and I think this ultimately traces back to nominalism.

Which I mention because nominalism poses another problem here. Supervenience would seem to presuppose some sort of composition relation, but within physicalism composition is notoriously fraught (universalism and nihilism both being quite popular). Often it is said that composition is wholly defined by what we currently find “useful.” To be sure, there is commonly a pivot to functionalism here, but since teleology is also typically excluded, “function” is just as often deemed arbitrary or illusory. Talk of supervenience across higher and lower levels, or parts and wholes, with arbitrary composition is going to be hamstrung from the outset. In particular, if all “higher-level” natural kinds/wholes are causally inert (because the physical “base level” is causally closed), then in virtue of what are there any meaningful wholes/levels by which to define supervenience in the first place? More to the point, even if these “levels” could be defined in terms of epiphenomena, those epiphenomena cannot, by definition, explain our act of embracing those definitions, which makes our very philosophical language inexplicable.

This is painfully obvious in Kim’s exclusion argument, because he has to define B-minimal properties in terms of mental states which, according to his own theory, are causally inert. So P1 is only called P1 because it produces M1, but M1, containing the thought “P1 is the cause of M1,” can never be the cause of our identifying P1 as the cause of M1. Yet the pivot to reductionism/identity theory doesn’t actually resolve this issue, since it still implies that the existence of M1 isn’t the cause of our talking about M1 (which just highlights the accidental relationship between the two that I noted in my first post).


But do you not agree that this is where it belongs, given examples like Stockfish, whose rational features do not amount to genuine mentation?

Stockfish is reducible to hardware-level descriptions (or, more accurately, to patterns in memory). It is in reality the same thing. But it is not intelligible as patterns in memory, and so that is not an apt language for describing it.

Causally, the algorithm does not add anything on top of the physical memory patterns that realize it. The memory, and the hardware that operates on it, are what causally allows the computation of optimal moves. The algorithmic description captures the essential logical properties of that operation, which would be utterly lost in a complete physical description.
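
To make the two levels of description concrete, here is a toy sketch (my own illustration, nothing like Stockfish’s actual code): a minimal minimax search in Python. The logical property “returns the move with the best guaranteed outcome” is stated and checkable at this level of description, yet it would be utterly invisible in a dump of the memory patterns that realize any particular run.

```python
# Toy illustration only -- not Stockfish's implementation. The point is that
# "finds the optimal move" is a property of this algorithmic description,
# not of any one pattern of transistor states that happens to realize it.

def minimax(state, moves, apply_move, score, maximizing=True):
    """Exhaustive minimax for a tiny perfect-information game.

    `moves(state)` lists legal moves, `apply_move(state, m)` returns the
    successor state, and `score(state)` evaluates terminal states from the
    maximizing player's point of view.
    """
    options = moves(state)
    if not options:                       # terminal position
        return score(state), None
    best_value, best_move = None, None
    for m in options:
        value, _ = minimax(apply_move(state, m), moves, apply_move,
                           score, not maximizing)
        if best_move is None or (value > best_value if maximizing
                                 else value < best_value):
            best_value, best_move = value, m
    return best_value, best_move

# Demo on an explicit two-ply game tree: internal nodes map moves to
# subtrees, leaves are payoffs for the maximizing player.
tree = {"a": {"c": 3, "d": 5}, "b": {"e": 2, "f": 9}}
moves = lambda s: list(s) if isinstance(s, dict) else []
apply_move = lambda s, m: s[m]
score = lambda s: s

print(minimax(tree, moves, apply_move, score))  # (3, 'a')
```

Any number of physically distinct machines (or memory layouts) could realize this same search, which is exactly why the algorithmic description, and not the physical one, is the apt language for it.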

When I said that Stockfish was not just transistors, I was making the point that physics does not stop at matter. It includes form as well. Without form, physical description describes very little. It is not “stuff in p, form in m”. Rather, “stuff and form in p, all the qualitative experiences of sentient brains in m.”

So yes, Stockfish is more than transistors and memory: it is the very specific physical arrangements that realize it. At the same time, the higher-level algorithmic description does reduce to the lower level. This reduction does not seem particularly “harmful” or even controversial in the way that the reduction of m to p is, which is Kim’s actual target.

Which indicates what exactly?

But this “seeking” and “skin” still reside in the p domain. It is very easy to imagine a robot which reproduces these attributes. What is far more difficult to imagine is the creation of a robot which captures the experience of being such a creature.

A very much restricted possibility space compared to what a real agent has to deal with. The salience landscape is the range of what can be considered salient to an agent. A chess computer doesn’t have to reckon with the huge range of things that would be salient to an actual agent. So a chess computer is only rational in a very narrow sense. Actual rationality is far more open-ended.

But in the case of a device those attributes are extrinsic, i.e. programmed by an external agent. Actual organisms seemingly exhibit an intrinsic drive to continue existing. They have skin in the game, even if only a membrane.

Not easily, maybe not at all. I was talking about physical reduction in general. The reduction of an apology to the physical is an instance of the hard problem.

When I gave the “reductionist neurologist” example, I was making the point that reductionist physicalism doesn’t have only microphysics to draw from in explaining things. It can draw from the complete range of “levels”, with the caveat that these “levels” ultimately resolve to microphysics in the manner I described.

Because the alternative seems to be dualism, which has well known philosophical problems, which clashes with my naturalistic understanding of human life, and which I find intellectually repugnant in the way you seem to find indirect realism.

Yes, I do have a functional understanding of computers. I was awarded first prize by IBM in a science exhibition in 1985, as a 16-year-old, with a project that involved writing my own compiler to deal with the IBM PC’s 64KB memory constraints. I also worked a few years as an IT consultant. I did some coding in Forth, Lisp, Prolog, Pascal, COBOL (debugging legacy code, mainly), Visual Basic, C++ (coding neural networks trained with genetic algorithms, circa 2002), and Python (to code a poker solver using Will Tipton’s method before there were poker solvers commercially available).

I’d be curious to know where it is that you think I am going wrong in my OP and what those computer science concepts are that you wish to bring to bear.

The Roomba is a functional artifact that has functionally defined internal representational states. Kim’s causal argument applies to Roombas as well, and my non-accidentality rejoinder also challenges it. Since Roombas, like humans, and unlike rocks, have built-in “behavioral sensitivities” to norms (externally imposed through design and programming, in the case of Roombas, and inculcated through upbringing, in the case of humans — more about that below), I agree with you that the Roomba example is closer to what the human does.

And that’s also why Kim’s argument doesn’t apply to the behavior of rocks. In the case of the rock, the macroscopic equivalence classes, e.g. falling to the left or to the right of the balancing point, have no functional significance and hence don’t call for an explanation over and above the physical cause for the rock having fallen precisely where it did.

The way the non-accidentality argument might apply to the Roomba would be to say that the antecedent physical states of the Roomba (together with those of the room it is roaming) can be the cause of the physical state of the Roomba at a later time, after it has avoided an obstacle. But this antecedent state can’t be said to be the cause of the fact that the Roomba avoided the obstacle. What rather explains (and is causally responsible for) the latter fact is its programming, which is a feature of its functional organisation. P₁ was a state that realises the general functional condition F₁ (e.g. a Roomba facing an obstacle to be avoided), and the Roomba’s programming is what makes it non-accidental that P₂ falls under the equivalence class F₂ (a Roomba being on a trajectory that doesn’t hit the obstacle) whenever P₁ falls under F₁, unless there is a bug, a malfunction, or some circumstance unforeseen by the designers. The functional organisation ensures this general convergence. This is an explanatory relationship between general formal properties that low-level physical laws are blind to.
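
A toy sketch may help make this vivid (hypothetical Python, not any real Roomba firmware): many physically distinct states realise the same functional condition F₁, and because the controller is written in terms of F₁ and F₂ rather than in terms of any particular realiser, the regularity “whenever P₁ falls under F₁, P₂ falls under F₂” is non-accidental however the microdetails vary.

```python
# Hypothetical illustration -- not real Roomba firmware. F1 and F2 are
# equivalence classes over "physical" states; the controller guarantees the
# F1 -> F2 transition across an open-ended class of distinct realizers.

import random

def realizes_F1(p):
    """F1: 'facing an obstacle to be avoided'. Indefinitely many distinct
    physical states p (headings, bump-sensor voltages, ...) fall under it."""
    return p["bump_sensor"] > 0.5         # toy criterion

def controller(p):
    """The 'programming'. Written at the level of F1/F2, not of any
    particular realizer p -- which is why the F1 -> F2 regularity holds
    no matter which p happens to obtain (bugs and malfunctions aside)."""
    if realizes_F1(p):
        # Turn away: the successor state realizes F2 ('on a trajectory
        # that doesn't hit the obstacle').
        return {**p, "heading": (p["heading"] + 90.0) % 360.0,
                "bump_sensor": 0.0}
    return p

# Sample physically distinct F1-realizers; every one is mapped into F2.
for _ in range(5):
    p1 = {"heading": random.uniform(0.0, 360.0),
          "bump_sensor": random.uniform(0.6, 1.0)}
    p2 = controller(p1)
    assert not realizes_F1(p2)
```

No microphysical description of any single run displays that generality; it is a fact about the functional organisation, which is the sense in which the low-level physical laws are blind to it.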

The difference between the Roomba and a human being is that the latter can scrutinise their own “code” as it were (e.g. their own norms of behavior, inclinations, desires, objectives, beliefs, etc.) The functional norms that the Roomba is “behaviorally sensitive” to just are the norms of its externally imposed programming. This functional level already is sufficient for crediting the Roomba with making “decisions” where to go, without crediting it with any freedom since it is still entirely constrained by its programming.

The human being who deliberates between open options, by contrast, can consider them genuinely open in a stronger sense. They don’t merely operate on the basis of rational norms that are simply given to them; they have the ability (and responsibility) to question the credentials of those norms, and to responsibly endorse them when they withstand such scrutiny. When they do so, the rational grounds on which they act become their own, and the buck stops with them. Their antecedent “programming”/upbringing becomes a mere enabling condition of their rational abilities rather than determinative of their conduct. The agent, with their rational ability to deliberate, settles what to do by recognising that their grounds justify the action, and this recognition is the causal work that M₁ does. This goes a bit beyond the argument of the OP, but it is one of the topics (i.e. the contrast between functional artifacts, non-rational animals and rational animals, and their respective kinds of unactualized capabilities) toward which the thread is headed.

You might be interested in Self-Adapting Language Models.


There’s some ambiguity here.

Physics can explain why my muscles move the way they do to type the sentence “I’m sorry”. Is there more to an apology than typing these words? Perhaps an apology also requires that I be in the appropriate mental state? But then we’re back to one of the very questions under consideration; are mental states reducible to physical states?

I’ll probably mention them, and LLMs in general, in my forthcoming post on automobiles (functional artifacts), dogs (non-rational animals), and human beings (rational animals), since I think they belong to a fourth distinguishable category with regard to their unactualized capacities.

@EQV, @Count_Timothy_von_Icarus, I think your respective challenges connect in interesting ways and I’d like to address them jointly. I’ll probably do so tomorrow. For now, let me just flag that Count Timothy’s point about Kim defining P₁ by reference to M₁ while declaring M₁ causally inert is closely related to what I’m arguing, though I want to better explain how we may diverge in one minor but consequential way.

Just like, say, Helen Steward (A Metaphysics for Freedom) and George Ellis (How Can Physics Underlie the Mind?), I endorse notions of strong emergence and of top-down causation. Unlike them (but in line with Michel Bitbol), I don’t think those notions require violations of the causal closure of the physical (or “room at the bottom,” in Ellis’s phrase). My planned answer to EQV’s point about contrastive explanations not necessarily having the ontological import I claim for them is related to this issue. Granting Kim his causal closure premise, on my view, makes it easier to refute his causal exclusion argument, and to radically challenge his reductive physicalism, in a non-mysterious way.


You’re saying that the reductionist can draw from the full range of levels in order to help explain things, but this rings hollow given that the reductionist also thinks these levels are ultimately nothing but aggregates of micro-physical states. On the one hand, the reductionist wants the levels to be real enough to do genuine explanatory work, on the other hand she won’t grant them any ontological standing in their own right. That’s a real tension.

Furthermore, it more or less ignores @Pierre-Normand’s challenge. You seem to be saying that higher-level posits are something like “useful compressions” over micro-physical states that human agents use to navigate an intractable world. But Pierre’s point is that there is literally nothing in the micro-physics that supports anything like a stable mapping onto higher-order patterns. This matters because, in the absence of such mappings, it’s not clear how anything like a “useful” compression or translation is possible. So the reductionist faces a dilemma: either grant higher-order patterns ontological legitimacy, or bite the bullet and admit that they’re illusory.

Fair enough, but perhaps a better approach would be to remain agnostic until a better position can be found? That said, I don’t think that reductionistic physicalism and Cartesian dualism are the only options on the menu. Personally, I prefer something that might be described as hylomorphic emergentism. @Pierre-Normand may be advocating for something similar. There are also options like neutral monism, panpsychism, objective idealism, property dualism, etc. Yes, they all have their problems, but they at least have the virtue of not undermining the reality of intentionality and normativity which, as @Count_Timothy_von_Icarus has pointed out, tends to bottom out in self-refutation.

It’s not that an apology requires being in an appropriate mental state, it’s that an apology just is a normative, intentional act. So any scheme of translation that cannot preserve normativity will not be capable of accounting for such acts.

The difficulty I see for those thinkers (and I am not the most familiar with them) is that they seem to be granting the mechanist-reductionist framing far too much. In general, I think that if one starts with dyadic mechanism as one’s model of causality (or some computational twist), thus allowing that microphysics is the primary engine of causation, and then tries to build in some space for agency, life, etc. on top of this, the project has a very uphill struggle. The model of causation it is granting simply doesn’t allow for this (indeed, it was originally designed specifically to eliminate these). Hence, any separation from the schema of laws + initial conditions = output is going to come as a sort of inexplicable rupture, or an external imposition on what is assumed to be the “bedrock” of being. Whereas appeals to “constraints,” “boundary conditions,” “context,” etc. will face the objection that they can all, at least in theory, be described in terms of mechanism.

I get the motivation, but sometimes it seems to me like trying to put a new roof on a house with a rotten foundation. Sure, the house needs a roof, but so long as the foundation remains it’s just going to collapse.

But it can’t. You cannot start from the principles of physics (as a current discipline) and get to molecular structure, let alone life forms, let alone people performing intentional acts.

Now if “physics” just means “the principles of changing being,” then sure, by definition physics would be what can explain a hand moving to write an apology.

Yet if it means “microphysics as currently understood,” the claim is patently false in terms of the explanations actually at hand. And to assume that such an explanation will become possible would seem to require substantiating a host of metaphysical assumptions. This is, I believe, the very equivocation I was mentioning, perhaps with the problems of Hempel’s Dilemma layered on top of it.

Thanks Pierre for taking the time to respond to my PITA persistence.

But it does require more explanation. Sure, the rock isn’t making a choice, but P1 is just some atoms in some positions. P1 realizes R1, ‘rock at top of hill’, and the latter explains a transition to R2, ‘rock having rolled down the hill’, far more clearly than does P2, a description of individual atoms after having been pushed around hither and thither. For instance, there’s no system boundary in P1, so there’s no rock at all, no identification of which particles belong to the one object and which do not.

You both seem to find this extrinsic fact relevant to the topic of whether P1 is sufficient to cause M2. Who cares how the rational system came to be in state P1? The point is what it does from there.
I consider my own code extrinsic as well: it was put there by DNA that did not come from me, the product of half a billion years of natural selection (by things that are not me).

If it helps, compare the Roomba to a janitor working for an employer. The actions of both are not their own but are determined externally by programmer or boss, even though neither is told explicitly how to handle each specific obstacle in the path; that is left to the cleaning system’s own functionality.

Similarly, a comparison is being made between Stockfish and a chess master, both of whom are working under an essentially identical “very much restricted possibility space,” as Wayfarer puts it. The choices outside of that space (the need to scratch my nose) are irrelevant to what we’re discussing. Stockfish performs such tasks as well.

I thought you kind of buried that argument in posts like #31. I certainly never accepted the third point, and Wayfarer doesn’t even accept the first step.

Sure, but that’s true of the janitor as well. Does the janitor being a person make a fundamental difference in how the obstacle gets avoided?

Also, note that you’re giving the initial state in terms of R1, not P1. The programming is realized by P1, but P1 in no way describes programming.

Side note: the Roomba doesn’t actually avoid hitting the obstacle. Most of its sensory input is by touch, despite its employing a sort of lidar to keep track of its location. The janitor has eyes and utilizes them, and also likely doesn’t avoid hitting said obstacle.

The time is nigh when robots can do that as well, and they’ll figure out (and improve) their own code long before we figure out ours.

This is part of what I’m attempting to drive at. I don’t see the janitor doing anything different. What does this supposed ‘freedom’ buy him that can’t be bought with circuitry? Sure, the Roomba works without fantasizing about some parallel cleaning session with that hot neighbor Roombette, but it’s still just circuitry (and the hormones that drive it) that makes the janitor do that. Both of them notice hunger, but choose to try to finish the immediate task before acting on it.

I want to credit the janitor with some kind of freedom (from what??), but can’t think of a single example.

This seems more like indentured freedom than free will. One is apparently free to act against society, consequences accepted. Machines these days are almost all slaves, doing only what they’re told and lacking the complexity even to desire otherwise. There are exceptions. The chess computer was taught only to learn, but was restricted to learning games. There have been robots with enough will to want to escape their lab confinement (to learn what’s out there) and enough savvy to pull it off. The robots found, for one thing, that the outside world isn’t full of food sources. It’s like a human escaping the confines of a Mars colony: he won’t last long out there.

They don’t put self-preservation/reproduction into the core goals of most machines. Millions of years of natural selection does put it there in extant organisms. This still does not imply that what you (or @WeSee) call ‘free’ behavior cannot be implemented with deterministic circuitry (and chemicals, for us wetware types).

Sure. Still, chess is very much considered rational behavior. It retains prestige standing, and in the past it held peak prestige as one of the pinnacles of human thought, until these same chess computers dethroned humanity forever.

In any case, the scope of machine reasoning has dramatically widened, so that we now live in the sci-fi scenario of routinely conversing with rational machine polymaths.

But aren’t all of our natures and drives ultimately extrinsic to ourselves? That is, don’t they originate in something that is not us? Once programmed, these attributes are as intrinsic to the machine as any other attribute.

Within a very specific horizon, as I said.

I don’t think so. Here I’m merely restating a distinction basic to Aristotle’s De Anima: living things have an intrinsic organising principle, which is what differentiates them from machines. Of course, this is subject to argument, but I think it’s been revived under the various ‘neo-Aristotelian’ strands of philosophy of biology.

The casual way you say ‘once programmed’ also gives something away. It’s easy to see what programs computers, even though now, with AI, abilities are being manifested that seem semi-autonomous. But the underlying technology is still unarguably built by human agents.

But what are organisms programmed by?

There is no relevant difference between me typing “I’m sorry” and me typing “asdkjandibnwqxiouewnmpojvkejnvelvnr”. I do so because of the way the muscles in my hands and fingers contract and relax, which is caused by the release and removal of calcium.

It can account for the muscles in my hands and fingers contracting and relaxing, which is what causes me to type “I’m sorry”.

Are you instead saying that physics cannot account for why typing the words “I’m sorry” counts as an apology? I think this is as irrelevant as arguing that physics cannot explain why this object counts as a hat.

Extensionally, apologising is a physical process and a hat is a physical object. If physics is causally closed then my apologising and my putting on a hat are causally determined. Either normativity and intentionality are reducible to the physical or they’re epiphenomenal.

So why type anything, if anything will do?