Can Rational Agency Survive in a Physically Closed World?

My point was simply that the answer that is entered is determined by mathematics, not by ‘the disposition of the body’.

“Nothing more than” seems overly reductive even for a reductionist. I think they would agree that it is ultimately causally explained by the microphysics.

You say “if”; do you agree that in principle it can?

Here is an interesting thought experiment. Suppose in 100 million years an advanced alien species discovers Earth. They find the only thing that remains of our civilization: my old Tandy chess computer. They took a completely different technological path, using computational fungi, and have no clue about Von Neumann architectures. Given this, could they derive the rules of chess?

It would certainly be a heavy lift. They would need to discover the basic principles of the computer, then, at a higher level, the assembly language, and then, at the highest level, they would need to deduce the program’s operation from the assembly code in ROM. But I don’t see why in principle it couldn’t be done.

If this is so, can’t we say that the computer’s function, including what malfunction means, is already in the physical description? (The physical description being coequal with the physical object, as one can be transformed into the other.)

I don’t think we yet have reason to think that the physical description is coequal with the object. The thesis that I (and others) have been defending is that the intelligibility of the object is not exhausted by its physical description.

Perhaps we should try to get aligned regarding what we mean by “physical description”, just to make sure we’re on the same page. I’ve been assuming that “physical description” refers to a description of the object at the most granular level of “resolution”. Given the current state of physics, this would correspond to something like the standard model of particle physics. Is that what you’ve had in mind as well?

Now, to your questions: would a faithful simulation of the chess computer at this level reproduce all of the high level behavior? I think that for the purposes of this discussion we can answer “yes”. I doubt that this will ever be achieved in practice but, for now, let’s just assume that it could. And with regard to the alien scenario, yes, I suspect that a sufficiently intelligent alien species could probably reverse engineer the chess computer and determine how each layer of functionality works.

Does either of these imply that the chess-playing algorithm is “in” the physical description of the computer? My answer is “no”. How would you define an “illegal move” or a “segmentation fault” using only the terms and relations of particle physics? I don’t see how you ever could. These concepts are defined entirely in relation to the normative structure appropriate to their respective levels. That’s not to deny that these things can be “realized” through particle physics, but they aren’t reducible to it.
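Just to make that vivid with a toy sketch of my own (the rules are simplified and the code is purely illustrative): the concept of an "illegal move" is defined entirely in chess-level vocabulary, squares, pieces and their permitted movements, and never mentions particles or fields.

```python
# A toy sketch (my own, simplified): "illegal move" is defined in the vocabulary
# of chess (squares, piece movement), not in the terms of particle physics.

def is_legal_knight_move(frm, to):
    """A knight move is legal iff it jumps in an L-shape to a different square."""
    file_diff = abs(ord(frm[0]) - ord(to[0]))  # files a..h
    rank_diff = abs(int(frm[1]) - int(to[1]))  # ranks 1..8
    return sorted((file_diff, rank_diff)) == [1, 2]

print(is_legal_knight_move("g1", "f3"))  # True: a legal knight move
print(is_legal_knight_move("g1", "g3"))  # False: an illegal move, by the rules of chess
```

Nothing in that definition appeals to physics at all; the normative notion of legality lives entirely at the level of the game.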

To put it differently: the chess computer is a concrete unity exhibiting intelligibility across multiple dimensions of its being. Particle physics is just one of those dimensions, but the intelligibility of the object qua chess computer is not exhausted by that single dimension.

To properly understand the chess computer we need to understand what it is, and why it is, in addition to how it operates and what it’s made of. Traditionally, each of these “dimensions” of explanation was referred to as a separate “cause”: material, efficient, formal and final. The basic idea was that if you wanted to explain something, you needed to appeal to all four dimensions. We’ve lost touch with this idea in modern times, and I think it’s led to a lot of philosophical confusion along the way.

Turning back to the example of the chess computer, I agree with you that if you “zoom in” to the level of particle physics you will not find anything “extra” happening at that level. No laws are violated, no extra “stuff” needs to be posited.

But the mistake is in thinking that this settles the question of what the Chess computer is. It doesn’t. The Chess computer is a hierarchically stratified, multi-dimensional intelligibility, with each level in the hierarchy exhibiting its own four-fold causal structure. The Chess computer can’t be fully understood without understanding it across all of these dimensions, and certainly not by restricting ourselves only to the level of particle physics.

There’s a lot more to say about all of this, but I’ll stop here. What are your thoughts on this so far?

This post clearly demonstrates the OP is under a grave misapprehension. :laughing:

@hypericin

I had thought about this, because a lot of physicists sometimes talk like this. I had the idea for a thought experiment, Laplace’s Printer, which can perfectly replicate any object, provided you have the “specs” for it (problems with the No-Cloning theorem notwithstanding).

Anyhow, I’ll avoid boring you with it. The main takeaway I had was that all the specs for a key, taken in isolation, aren’t going to tell you what it is a key for. Maybe it opens a dragon’s vault of treasure, or lets you operate an alien deathray, or maybe it’s for a ’94 Accord. It couldn’t tell you what you ought to do with the key either, even if it were the key to some planet-destroying deathray. The point being, it made me realize the authors I was reading were perhaps being a bit incautious (or I was, or both), because the description they were speaking to was really only “operationally complete,” i.e., complete as far as predicting a handful of things (but not even for explaining how those predictions worked). Similarly, even a very fine-grained description of a copy of Anna Karenina won’t tell you one bit about the story unless you bring in outside information. Plus, we could get into all the meaningful work the “void” seems to do in making particles, molecules, etc. behave as they do.

Anyhow, if context matters, then it would seem the “complete physical description” of any one thing requires giving a complete physical description of the entire cosmos. I guess this is sort of the horizontal version of what you get if you accept that there are hierarchically ordered principles at work in being, since explaining any one principle will ultimately send you ascending up to the very top (which encompasses everything).

I didn’t really have much to add otherwise, it’s just an area I find interesting.

Apologies for the long delay. I’ve been busy with very important things (such as binge-watching The Good Place).

You still are taking me to be disagreeing about theses we are agreed upon. I am arguing nearly the exact opposite regarding the relationship between P₁ and R₂. Their relationship is accidental in some respects and non-accidental in others. With respect to the causation of R₂, the occurrence of P₁ specifically is accidental. It is only in virtue of the fact that P₁ happens to realise R₁ that R₂ had to occur. And this “had to” functional modality is grounded in the functional organisation of the system that P₁, P₂, R₁ and R₂ are all features of (the former two, physical, and the latter two, we might say, formal).

In my OP, the non-accidentality argument rests on a premise, overlooked by Kim, that the functional organisation of the system is what mainly explains why R₂ rather than some alternative functional state was caused to occur. It’s not P₁ as such that explains this. It’s the fact that P₁ realises R₁, together with the fact that it’s part of the system’s functional organisation to produce R₂ when R₁ obtains. In the case of a rational agent, both the agent’s standing rational dispositions and their relevant antecedent circumstances (which are likewise a feature of what P₁ happens to realise) explain why it’s no accident that P₁ produced a P₂ that realises R₂.
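Perhaps a toy schematic helps make the structure explicit (this is just my own illustrative gloss, not a formal model, and all the names are made up):

```python
# A toy schematic (my own gloss, not a formal model). Physical states are
# fine-grained; each one realises a coarser functional (R) state.

P1, P2, P2_alt = "P1", "P2", "P2_alt"           # hypothetical fine-grained physical states

realises = {P1: "R1", P2: "R2", P2_alt: "R3"}   # the realisation relation

def physical_dynamics(p):
    # The closed physical story: P1, as it happens, evolves into P2.
    return {P1: P2}[p]

functional_organisation = {"R1": "R2"}          # when R1 obtains, the system produces R2

p_next = physical_dynamics(P1)
assert realises[p_next] == functional_organisation[realises[P1]]

# The physical story explains why this particular P2 occurred. The functional
# story explains why, given that P1 realised R1, the successor had to be a state
# that realises R2 rather than R3: two contrastive questions, two explanatory frames.
```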

It’s true that few people hesitate to recognise the non-accidental relationship between R₁ and R₂ in non-philosophical contexts. But reductive physicalists like Kim argue explicitly that this relationship is epiphenomenal. They insist that the “real” cause of R₂ (which they misleadingly assimilate to P₂) is simply the fact that P₁ was actual. And here is where the problem arises: P₁, taken as a fully specific physical state, is simultaneously too specific and too unspecific to serve as the cause of R₂. It is too specific, because it buries the causally relevant feature, that P₁ is a realiser of R₁, under a mountain of physical details that are irrelevant to the production of R₂. And it is too unspecific because it fails to disentangle what in P₁ realises the agent’s rational abilities from what realises their circumstances. Without that disentanglement, you cannot even state what the cause of R₂ (or M₂ in the case of rational animals) is, let alone explain it. As @EQV also stresses, you cannot even state what it is R₂ is a state of.

This is where the contrastive character of explanation does its work, and where it connects to your justified concerns about generality and specificity. The contrastive stress we place on the explanandum (“why R₂ rather than R₃?”) determines the degree of specificity with which we attend to the event, and thereby singles out the relevant explanatory frame within which the explanans is to be specified. Asking “why did this physical state occur?” and asking “why did the agent apologise rather than deflect blame?” place different contrastive stresses on what is, allegedly, the same underlying event. The first question calls for physical antecedents at physical specificity. The second calls for rational antecedents — the agent’s grounds, abilities, and circumstances — at the level of specificity appropriate to rational explanation. And it is only at that level that the “system” can be disentangled from its circumstances, and what is functionally or intelligibly relevant about it can be brought into focus.

The second main theme of my OP was precisely that the agent must not be identified with their physical substrate. This identification merges into one undifferentiated physical state (1) the agent’s rational capacities, (2) the conditions that enable or impede their exercise, and (3) the features of their circumstances (e.g. their needs, opportunities, actual affordances) in the light of which their choices are warranted. Rational explanations of action must rather individuate agents at the level at which these three are distinguishable and, in the case of functional artifacts, at the level at which their functioning can be recognized as functional or defective.


But if P1 is the cause of P2, and P1 “realizes” R1 and P2 realizes R2, then P1 is itself the cause of something that realizes R2, no?

I get that the accidental relationship between P and R makes it impossible to say, from P2 alone, that it is R2, but it still seems like the two (P and R) are operating on parallel tracks. Or is the point that P1 only causes P2 in virtue of R1? But in that case, P1 is not causally closed. Whereas, if P is causally closed, then it simply follows that P1 (which “realizes” R1) is sufficient to explain P2, which realizes R2. Maybe you cannot derive R from P, but this seems more like a sort of dualism bridged by a brute relationship than formal causality. R1 is not the cause of P2, or if it is, we have a sort of duplication of causes, since P1 is said to be sufficient to cause P2.

I don’t really agree either way since I’d reject the entire supervenience framing. To suppose that P is causally closed is already to suppose that matter (i.e., some fundamental substrate) possesses its own form and causal powers. That’s why I don’t think it works to import hylomorphism back on top of supervenience, since form and matter (act/potency) cease to be fundamental principles. Whereas, classically, there is no discrete P versus R (or M versus F) as horizontal “levels,” but rather, at each and every rung on the “Great Chain of Being,” there is both form and matter—substrate and that which informs it—down to sheer potency (which is nothing at all).

That, and all sorts of objections to simple local supervenience (e.g., brain states) require either a pivot to global supervenience (including across time), which makes the thesis so broad as to be vacuous, or, more often, a retreat to claiming that difficult properties such as “being the corpse of x” or “being a key to y” are actually reducible to brain states. The latter is just restating the thesis supervenience was called in to explain, making it viciously circular; the former, I’d argue, tends towards vacuous tautology.

Kim anticipates this objection with the B-minimal move, no? Still, I think you’re right. Kim’s move is circular because he can only define the B-minimal in terms of M, and so too for various other properties, normativity, etc.

But his point would be that, so long as P1 is sufficient to explain P2, and P2 is said to realize R2, he’s said all he needs to. Whatever “causal” force R has is, while perhaps explanatory, also duplicative.


Yes, you’re right! I wasn’t aware of this move (and had to look it up) but, yes, indeed, the notion of a B-minimal property is exactly what Kim needs for his causal argument to go through in light of my criticism of it. But it’s also what turns a weak and unobjectionable supervenience thesis (that I think we can endorse) into a much stronger supervenience framework (and your instinct to reject this framework is one I share). It’s a move that smuggles into the “physical” description of the system some of the very high-level mental/formal/normative features of the system (together with its criteria of individuation: what counts as persisting, for such a system, and what counts as being part of it rather than being external to it) that Kim’s argument purports to downgrade as epiphenomenal and causally inert.

When you say “P₁ is sufficient to cause P₂, which realises R₂, so R₁ is duplicative,” that’s the exclusion argument. My answer is already given in my OP. P₁’s sufficiency for P₂ answers one contrastive question (why this physical state rather than that one?), while P₁’s being a realiser of R₁ answers a different one (why a P₂ that realises R₂ rather than R₃?). These aren’t parallel tracks. R₁ is the feature of P₁ in virtue of which the second question gets answered. And Kim’s strongest attempt to eliminate it smuggles it back in.

I don’t know if Kim really would have much of a problem with your solution. The idea that R1 is a feature of the more basic P1, and that the move from P1 to P2 (which implies R2) is wholly explained in terms of P is all he is after. If P is causally closed and R is a feature of P, that seems to be all he needs for the exclusion argument.

I think we can break this down:

Either R exerts causal influence on P or it doesn’t.

P is said to be causally closed. Ergo, it doesn’t.

R is a feature of P. P is not a feature of R. P is said to realize R. R is not said to realize P. The dependence relationship goes in one direction. Hence, P is sufficient alone (closed) and R is dependent upon it (whereas if R is also closed, I would say we have parallel tracks).

These would seem to be simply present in the premises of R supervening on P in the first place, combined with the premise of causal closure (that R is irrelevant to P).

But like I said, I don’t know if Kim would disagree with you. He would just say you’re tracking an epistemic, not an ontic, distinction. R is a description of P. And if we object that something’s being a man or an apology is an ontic fact, he will just point to the causal closure of P and the dependence of R on P, and say that it is merely descriptive in virtue of the fact that the causal power lies with P, since it is P that is closed, not R, and R is dependent on P.

Really, I think his argument was taken as so “devastating” because it straightforwardly just explicates what is already in the premises. I agree it is devastating, but as a reductio against physicalism.

By this statement, I mean that given the right technology (which we will never invent), it is theoretically possible to produce the object given an exhaustive description, or the description given the object. So for the purposes of this discussion, we can treat these as the same. If one is intelligible, or not, then so is the other.

I think we are aligned as to what physical description means. I was thinking about a (practically implausible) inventory of each atom (or even subatomic particle) and their relative positions.

I think we at least partially agree. But you are conflating two very different things: what the object is, with how the object is understood.

A chess computer is what it is; it is complete in itself, whether or not it is understood by anyone. It is not just particle physics, or just a computer, or just an implementer of an algorithm. It is all of these simultaneously. But as soon as the computer is understood by someone, then it fractures into “levels”: it is understood as basic physics, or understood as chemistry, or computation, or algorithm. All of these levels of understanding are real, and all are partial; none fully captures the whole object. This fracturing is not a property of the object; it is a consequence of the limitations of minds that are unable to grasp the totality of the object at once.

As a level of understanding, particle physics is partial, and completely inadequate to grasp the object. The high level features are just not intelligible in those terms. It would be impossible to define higher level concepts like “algorithm” and “segmentation fault” using particle physics.

But as a specification of the object, particle physics is complete. The specification provides all the information needed to produce the specific, unique, physical working object. This object has the full causal potency of a chess computer. All the levels of description we have been talking about inhere in the object. The object is a Von Neumann architecture, running an assembly language program which implements a chess evaluation algorithm. And equally, the object is an assemblage of particles. Note that none of the levels except the fundamental physical one is adequate as a specification. All of them are abstractions, as innumerably many different objects might fulfill them.

This gets to the meaning of reduction. Reduction doesn’t work on the level of understanding. Every “level” requires its own understanding. If reduction were about understanding, then almost nothing could be reducible to anything. Rather, it works on the level of “is”.


A veritable Ding an sich!

But the problem is that the specification of the rules of chess is not and has never been described by particle physics.

“Instantiated by”, not “described by”.

I think this is far from obvious. How, from microphysics alone, could you possibly determine exactly which particles belong to a man versus his immediate environment? If you didn’t know what a tree was, or what this particular tree was, how would you uniquely delineate it?

Indeed, within analytic physicalist metaphysics itself this problem, often explored through “the Problem of the Many,” has been used to argue that objects/wholes are ultimately arbitrary from a variety of directions (often with the existence of objects coming down to our feelings of “usefulness,” a sort of voluntarism where the human will makes everything what it is through metaphysically primitive desiring). This is really just a delimited case of the ancient problem of “The One and the Many.”

Nor is it clear that all the levels of description inhere in an object. Nothing about the microphysics of Anna Karenina taken in isolation will tell you it is a book, let alone fiction, let alone Russian, etc.

Nor is it clear that this gets you a description of a thing’s causal powers. Even the basics of molecular structure haven’t been reduced, over a century on. The idea that they can be is based on metaphysical assumptions (univocity, cosmic homogeneity, etc.) which, ironically, come from theology, not from looking at experimental data (for them to come from experimental data we’d need successful, complete reductions, but these are assumed to exist based on metaphysical presuppositions, not because we possess them). Even the paradigm case of thermodynamics and statistical mechanics is more nuanced, since the reduction works as an approximation under idealized conditions. Arguably, this makes it a case of asymptotic explanation, not reduction proper. Likewise, the Past Hypothesis is an extrinsic additional posit not derived from SM (even if well-supported). And then the assumption of discrete microstates as real entities is also blurred by QM; hence the complaint that the paradigmatic case of successful reduction is simply a reduction occurring between idealized approximations.

Plus, per the No Cloning Theorem, such a complete duplication is actually impossible.


To the extent that an object’s boundary is fuzzy, that extent of fuzziness is irrelevant to us. So perhaps the description will include a few pieces of lint, or it might miss a few plastic molecules from the chassis. This is ok, so long as we deem this description to be the same as the original.

You are right in that I should drop the word “unique”. There are innumerable microphysical descriptions that will resolve to us as holistically the “same”.

The story of Anna Karenina does not relate to the physical book in the way the computation of chess positions relates to the chess computer. The book is a repository of symbolic content. The relation of symbol to meaning is conventional, and anyone who reads the book is required to participate in this convention by correctly decoding these symbols in order to arrive at the story. Whereas, computation of chess positions is part of the nature of the chess computer. That is what the computer does, and this operation does not require the context of an entire language to succeed.

Similarly, a key is a key to something by standing in a certain physical relationship to a lock. The “key to” property is relational, while the computational property of the computer is not.

I’m not fully parsing this paragraph. But if the description, even theoretically, can reproduce the object (not exactly, certainly not reproducing all the quantum microstates, but exactly enough in the ways that matter to us, which here means reproducing its causal power to play chess), then it must constitute a description, however obfuscated, of the thing’s causal powers.

Specifying what it is that the computer does, you’ve seemingly acknowledged, requires specifying the algorithm that it is running. And you also seem to be suggesting that a complete physical specification of the computer determines what this algorithm is. But the specification of the algorithm is normative in a way that the physical description isn’t. That’s because a computer cannot fail to abide by the physical laws that govern the evolution of its physical states, but it can fail to abide by its algorithmic specification. In other words, it can malfunction without violating any physical law.

For instance, increasing the voltage provided by the power supply to the CMOS or TTL chip, in a 1990s chess computer, just a little bit over the specification, may alter the switching threshold of a single transistor, and therefore also the behavior of a single logical gate, in such a way that the computer outputs a different move in the exact same chess position (or same sequence of input moves). Does that mean the computer fails to abide by its program or that it is, in the context of the new supplied voltage, implementing a different program? The only principled way to answer this question is with reference to (algorithm-level) design specifications that the underlying physics leave entirely open. The algorithm that the computer is implementing — a feature that you take to be non-relational (i.e. a matter of what it is that the computer “physically is” in itself) — turns out to be relational, by your own standard, after all.
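To make this more concrete, here is a toy sketch (the numbers and components are entirely hypothetical, not a claim about any actual chip): a “gate” that never violates its physical rule, yet only counts as implementing AND, or as malfunctioning, relative to a design specification.

```python
# Toy sketch (hypothetical values): a "gate" that always obeys its physical rule,
# but only counts as implementing AND relative to a design specification.

def physical_gate(v_a, v_b, threshold):
    # The physics: output is high iff both inputs exceed the switching threshold.
    # This rule is never violated, whatever the threshold happens to be.
    return 1 if (v_a > threshold and v_b > threshold) else 0

def and_spec(a, b):
    # The algorithm-level specification the designers intended.
    return a & b

encode = {0: 0.0, 1: 3.3}  # hypothetical voltage levels for logical 0 and 1

for a in (0, 1):
    for b in (0, 1):
        nominal = physical_gate(encode[a], encode[b], threshold=1.5)     # in-spec supply
        overvolted = physical_gate(encode[a], encode[b], threshold=3.5)  # shifted threshold
        print(a, b, "spec:", and_spec(a, b), "nominal:", nominal, "overvolted:", overvolted)

# With the shifted threshold, the output at (1, 1) diverges from and_spec, yet no
# physical rule was broken. Whether this counts as a malfunction or as the
# implementation of a different program is settled by the spec, not by the physics.
```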

Several others have already raised a number of good objections here, and I agree with most of them. Rather than recapitulate those objections, I’m going to take my critique in a slightly different direction.

First, you claim that the irreducibility of the various levels (physical, chemical, computational, algorithmic) is a limitation of our understanding rather than a feature of being itself. But how did you come to understand this? If you’re right, and understanding can’t grasp being as it “really” is, then grasping this would require occupying a standpoint outside of any and all understanding. But that’s incoherent. When it comes to the determination of being, understanding is the only game in town. So I would say that this line of argument is self-refuting in its current formulation.

Second, and coming at this from a different angle, your claim that many different objects could fulfill a given higher-level description actually cuts against your position. Here you are explicitly acknowledging that being a chess computer is multiply realizable. But this strongly implies that the chess algorithm has its own identity conditions independent of any particular physical implementation of it. Otherwise there would be no sense in which they are all implementing the same algorithm. And if that’s true, then I don’t see how you can maintain that particle physics is ontologically privileged with respect to higher levels. Sure, without particle physics there could be no chess computer, but neither could there be a chess computer without a chess algorithm. Neither is more “real” than the other. They simply have different roles to play in the ontological economy of the chess computer.
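To put some flesh on the multiple-realizability point (a hypothetical toy example of my own, not anything from the thread): two structurally different implementations can satisfy the same algorithmic specification, and what makes them the “same” is fixed at the algorithmic level, not by their differing substrates.

```python
# A toy illustration (hypothetical): two structurally different realizations of
# the same algorithm-level specification, here a simple material-count evaluation.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate_loop(pieces):
    # Realization 1: an explicit loop with an accumulator.
    total = 0
    for piece in pieces:
        total += PIECE_VALUES.get(piece, 0)
    return total

def evaluate_functional(pieces):
    # Realization 2: a different structure satisfying the same specification.
    return sum(PIECE_VALUES.get(p, 0) for p in pieces)

position = ["P", "P", "N", "R", "Q", "K"]
assert evaluate_loop(position) == evaluate_functional(position)
# What makes both count as the same evaluation procedure is settled at the
# algorithmic level; nothing about their differing internal structure decides it.
```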

There are two things that actually exist in the machine: the program stored in ROM, and the high-level logical operations it actually performs (which in normal cases map exactly to the program in ROM). It is failing to abide by the program stored in ROM, because here it is performing a slightly different set of logical operations than the program specifies. Neither of these is relational.

Neither of those two are the design specifications, which exist on paper and in the minds of the designers. These may or may not fully correspond to what is actually on the machine. More than likely, not.

The computer with its program, and with any flaws that might exist on the chip, is not physically “entirely open”. It is a deterministic machine, just like any other machine.

The intent of this post is to flesh out what I wrote earlier:

As a whole, the brain is a multistate parallel processing powerhouse.

For sake of simplicity:
Brain = hardware
Primitive mind = OS
Conscious mind = app running as a separate process with its own workspace, that is to say “state”. For all intents and purposes it can run largely independently from the OS.

The conscious mind:
Can receive messages/requests from the primitive mind.
Can send messages/requests to the primitive mind.

Can deliberate, that is to say process information and make a choice. Can run simulations to do so.
Can learn adaptively via a highly complex, recursively feedback-looping system.

Can initiate actions by sending a message request to the primitive mind.

What’s more, it is a meta-aware self-organizing system that can largely recondition the primitive mind as well. Think neuroplasticity and downward causation.

As such, for all intents and purposes it can serve as a self-caused cause for a given action. It has far too much autonomy to be reasonably considered predetermined, or merely a link in a chain of causes.
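If it helps, here is a deliberately crude sketch of the analogy in code (all names and details are made up; it is only meant to picture the architecture): the conscious mind as a process with its own state, exchanging messages with the “OS” and reconditioning it.

```python
# A deliberately crude sketch of the analogy (all names hypothetical): the
# conscious mind as a process with its own workspace, exchanging messages with
# the "OS" (primitive mind) and sometimes reconditioning it (downward causation).

class PrimitiveMind:
    def __init__(self):
        self.reflex_bias = {"threat": "flee"}   # standing dispositions

    def signal(self, msg):
        # The OS surfaces a signal to the conscious process.
        return msg

    def act(self, action):
        print("body executes:", action)

class ConsciousMind:
    def __init__(self, primitive_mind):
        self.primitive = primitive_mind
        self.workspace = []                     # its own state

    def deliberate(self, signal):
        # Run a crude "simulation" of options before choosing.
        options = ["flee", "negotiate"]
        choice = max(options, key=len)          # stand-in for evaluating outcomes
        self.workspace.append((signal, choice))
        return choice

    def recondition(self, situation, new_response):
        # Downward influence: adjust the OS's standing disposition.
        self.primitive.reflex_bias[situation] = new_response

primitive = PrimitiveMind()
mind = ConsciousMind(primitive)
incoming = primitive.signal("threat")           # message up
primitive.act(mind.deliberate(incoming))        # deliberation, then message down
mind.recondition("threat", "negotiate")         # reconditioning the primitive mind
```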

But there are plenty of things we understand we don’t understand. It doesn’t seem problematic that I grasp that I don’t grasp the number googol.

It is not all or nothing. I have some grasp of some of being, and I have some grasp of the kinds of things I do grasp, and the kinds of things I do not.

This particular problem seems analogous to the “levels” of physical scale. We tend to grasp scale as discrete levels: for instance, the microscopic scale, the scale of ants, the everyday scale, the scale of looking down from an airplane, the global scale, the cosmic scale. We understand that nature is a synthetic continuum; such discretization is an artifact of the limitations of perspective and our ability to conceptualize. This understanding is in no way “from a standpoint outside any and all understanding”. It is metacognitive: understanding about our own understanding.

My point was that there are many different physical configurations that would conform to our holistic understanding of “the same chess computer”. And so that holistic understanding, just like the computer conceived in isolation and the algorithm conceived in isolation, is an abstraction: something that can be concretely realized in multiple ways.

When we talk about “the chess algorithm”, we need to be very clear which we are talking about: the algorithm as realized by this particular computer, or the algorithm as an abstraction, conceived in isolation. Abstractions may be identical with other abstractions, but then again they are not real. At least, whatever reality you might afford abstractions, they are not concretely real in the same way physically realized things are. Whereas concrete, physical things are absolutely individual; to compare them with other concrete things, the aspect that is compared must first be abstracted away from the physical.

The privilege of the microphysical is this: concrete “levels”, such as the algorithm implemented by a particular computer, stand in a “realized by” relationship with a more fundamental level. Whereas, at some point, the microphysical is what it is; it does not need to be realized by anything.