But my claim was not “it’s all or nothing”, nor was I denying the reality of metacognition. My claim was that you have drawn a boundary between (1) how things are as disclosed via understanding (ontological irreducibility of the levels) and (2) how things really are (ontological reducibility of the levels). But the only way to assess how things really are is through understanding. Do you see the problem?
You seem to be trying to overlay the concrete/abstract distinction onto the lower/higher level distinction, but I don’t think this works. When I play a game of chess with the chess computer, the chess computer is actually playing chess. This particular game with these particular moves is not an abstraction, but a concrete execution of the chess algorithm by a concrete individual. So yes, “chess-playing” as a “general type” is an abstraction, but this instance of it is not.
Even if we were to grant that particle physics is foundational in the sense of not being realized by anything more fundamental (which may not be true), it simply doesn’t follow that it is more real. This is like saying that the foundation of a house is more real than the second story because the second story couldn’t exist without the foundation. “Realized by” does not imply “reducible to”.
I did not say that ontological irreducibility of levels was disclosed by understanding. I said that, as a perspective for understanding most objects, particle physics is completely inadequate. We need to think in terms of levels of behavior in order to have any hope of understanding. This says nothing about irreducibility.
I don’t think it works either, nor did I write it. If you read what I did write, I was responding to your claim that to admit multiple realizability is to admit objects with their own identity conditions. My response stands, so long as we admit that multiple realizability implies abstraction as well. “The algorithm” as a set of operations is a multiply realizable abstraction, while this algorithm, implemented on this hardware, is not.
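The abstract/concrete distinction being drawn here can be made vivid with a toy sketch (my own illustration, not anything from the thread): “Euclid’s algorithm” as an abstract type is multiply realizable, and here are two distinct concrete realizations of that one abstraction.

```python
# "Euclid's algorithm" as an abstract type vs. two concrete tokens of it.
# Each function below is a distinct realization of the same set of
# operations; neither one *is* the other, yet both instantiate the type.

def gcd_recursive(a, b):
    """One realization of Euclid's algorithm."""
    return a if b == 0 else gcd_recursive(b, a % b)

def gcd_iterative(a, b):
    """A second, structurally different realization of the same abstraction."""
    while b != 0:
        a, b = b, a % b
    return a

# Both tokens satisfy the identity conditions of "computing the gcd".
print(gcd_recursive(48, 18), gcd_iterative(48, 18))  # 6 6
```

Running either on the same inputs yields the same result, which is just what makes them realizations of one abstract type rather than two different algorithms.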
Or perhaps, “reducible to” does not imply “less real”. Few would consent to having their house demolished because the house isn’t real, it’s really just the components. No one is denying that the house has emergent properties (shelter, protection, stability) that aren’t present in its disordered components, but that arise only with those components’ correct arrangement. What is denied is the essentialist notion of the house as something over and above its components, that the components do not explain the house, that the house is not present in the components, in short, that the house is not reducible to its components.
There’s an interesting parallel to this in an ancient Buddhist text, The Questions of King Milinda, a dialogue between that king (thought to be the historical Greco-Bactrian King Menander) and one Venerable Nagasena, who is representing the Dharma. The dialogue concerns the nature of the chariot that the King came to the meeting on. Paraphrased, Ven. Nagasena makes the point that the chariot is not the wheels, not the axle, not the carriage part or the horse-poles, and so on. So there can be no chariot without the parts, but it’s also nothing other than its parts.
Where I think Ven. Nagasena’s argument is vulnerable is that, apart from this particular chariot, the King, or rather the King’s culture, also possesses the idea of a chariot. And in that time (~150–155 BCE) having chariots was a major military advantage, possibly even one by which King Milinda had secured his throne. Furthermore, even if you were provided with all the parts for said chariot, without the idea of a chariot you would in no way be able to assemble the actual vehicle. So in that sense, there is something other than the parts which comprise the chariot (or any other designed artefact), namely the design or the idea, without which it wouldn’t hang together.
I’m doing my best to interpret you on this point, but I confess that I’m finding it difficult.
You seemed to agree with me that we can’t understand what a chess computer is in terms of particle physics alone. For instance you wrote:
You then wrote:
My interpretation was that you were trying to draw a distinction between two different types of irreducibility:
(1) Epistemic irreducibility — we need multiple, mutually irreducible conceptual frameworks to understand the chess computer
(2) Ontological irreducibility — the chess computer really is a unity of multiple, mutually irreducible layers of real structure
In case it hasn’t been clear, I accept both (1) and (2), whereas I take it that you accept (1) but deny (2). In that case, you would be committed to something like conceptual/explanatory pluralism combined with ontological monism, which holds that we need multiple irreducible conceptual frameworks to understand the world, but the world does not actually have multiple irreducible layers of real structure.
To my mind, the question is then: how are (1) and (2) related to each other? If our very best understanding of the world requires multiple irreducible levels of conceptualization, then on what basis can we say that reality itself doesn’t correspond to those levels? That would require access to reality independent of our best understanding, which is impossible.
So perhaps, at this juncture, it would be best if you could either confirm or deny my interpretation and point out where my rebuttal goes astray. From my perspective you seem to be sliding back-and-forth with regard to your stance on (2). Could you clarify?
===========
Again, I’m doing my best to interpret you here and am not trying to put words in your mouth, but I think you may have missed my point. My original intention in bringing up multiple realizability was not merely to argue that abstract types are real, but that the type has its own identity conditions that aren’t reducible to any particular physical configuration. You seem to accept that abstract types have their own identity conditions, while arguing that this applies only at the level of the abstract type and has no bearing on this algorithm, implemented on this hardware.
But the point that I was trying to make was that, while there is a difference between the chess algorithm qua abstract type and the chess algorithm qua concrete token, the identity conditions must apply to both. Otherwise what would make this concrete activity an instance of playing chess rather than something else? Or are you saying that the chess computer isn’t “really” playing chess at all?
Furthermore, this isn’t something that is peculiar to the chess algorithm. After all, the token/type distinction applies even at the level of particle physics itself. “Electron”, qua abstract type, is also a multiply realizable abstraction, whereas this electron is concrete. Applying your logic, we would have to conclude that electrons are “not real” in just the same way that you want to say that chess algorithms are “not real”. But that would put you in an awkward spot since, for you, particle physics is supposed to be the level that realizes everything else: an “unrealized realizer” if you will. So taking that path seems like it would be problematic for you.
Or I suppose you could try to take the bull by the horns and claim that the world does not instantiate any abstract types, but that leaves you with a world that exhibits no structure whatsoever, and I’m guessing you wouldn’t wish to go down that path.
So perhaps you could better clarify how you think the token/type distinction helps support your position?
==========
So this is an interesting response, and I think it gets to the heart of the matter. You say that “no one is denying that the house has emergent properties”, but you think that these can be fully accounted for in terms of the components and their “arrangement”. Is that right?
Assuming that’s right, here’s a question for you: is “the arrangement” itself ontologically reducible to the component parts?
If your answer is “no”, then I think you are in a bit of a pickle. After all, if “the arrangement” is not reducible to the component parts, then what is it? If the parts alone don’t constitute the house, then it must be the parts plus something like form. If you don’t like the word “form”, then call it “arrangement”, “organization”, “structure”. It doesn’t really matter. The salient point is that the component parts aren’t enough.
But if you think that “the arrangement” is ontologically reducible to the component parts, then you need to explain what distinguishes a house from a pile of bricks without appealing to “the arrangement”, since “the arrangement” isn’t “something more” than the component parts themselves. This seems like a very difficult problem.
Also, you said "the house has emergent properties (shelter, protection, stability) that aren’t present in its disordered components”. I totally agree, but look at what this now commits you to. If you grant that “the arrangement” is not ontologically reducible to the component parts and that the emergent properties are not present in the disordered component parts themselves, then the emergent properties must arise from the form of the house — where else could they come from? But this is the position that I have been defending the whole time.
Finally, your appeal to “essentialism” seems like more of a rhetorical device than a substantive point. When I say that “the house” is not reducible to its component parts, I’m not positing some additional ghostly entity. I’m simply pointing out that “the house” consists of its component parts plus their formal organization. In other words, I’m just taking the notion of formal structure seriously and acknowledging that it is just as much a feature of reality as the component parts themselves.
In any event, I’m happy to stand corrected on any of these points and I look forward to further clarifications on your end. Thanks.
This is something I agree with @hypericin about, but which I struggle to explain or write about or even fully conceptualize myself.
Understanding something in the way we humans do requires abstractions and generalizations. Those generalizations and abstractions, imo, may not be directly derivable from the lowest level of physics, even if every particular instance of those things happening is happening on a bedrock of low level physics.
For example, the generalizations and abstractions we use to understand and predict human psychology, or economies - even if every individual event in those realms is a consequence of physics happening, the broad abstractions won’t be found by analyzing physics.
I believe that the world does not “actually have multiple irreducible layers of real structure”, but I do believe the world DOES actually have multiple REDUCIBLE layers of real structure. I would call my stance here “middle emergence”. Middle emergence is functionally similar to weak emergence, and is a denial of strong emergence, but with the additional claim that emergent layers can have some kind of ontological “realness” to them. I don’t know how @hypericin feels about all of what I’ve said above; curious for his thoughts.
But for a specific example, think about Conway’s Game of Life. It’s Turing complete, which means you can program any piece of software ever written to happen inside a Conway simulation, but the ways in which we understand software may not be easily understandable in terms of the cells in Conway’s game. And yet it’s still the case that the behaviour that’s happening is happening as a result of the cell-level behaviour.
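The Game of Life point can be made concrete in a few lines. Below is a minimal sketch (my own, assuming nothing beyond Conway’s published rules): the only “laws” in the code are the cell-level birth/survival rules, yet a higher-level pattern, the glider, persists and travels across the grid, reappearing shifted by (1, 1) every four generations.

```python
from collections import Counter

def step(live):
    """One generation of Conway's Game of Life on an unbounded grid.
    `live` is a set of (x, y) coordinates of living cells."""
    # Count how many live neighbours each cell has.
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# The canonical glider. Nothing in `step` mentions gliders,
# yet the pattern recurs, translated diagonally, every 4 steps.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)  # True: the glider "moved", cell rules did the work
```

The glider’s “motion” is nowhere in the rules; it is a pattern we recognize at a higher level, fully generated by the cell-level dynamics.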
Thanks for chiming in, and I think that the “middle emergence” position you are articulating is probably close to the position that @Hypericin has been defending. And while I definitely get the appeal, I think the same tension I’ve been pointing to in Hypericin’s formulation applies here as well.
I think that the following question exposes the tender point: what does it mean for a layer to be “ontologically real” but also “reducible”?
If reducible means “identical”, then it’s not clear how the higher layer can be truly real in its own right. In that case, it seems like it’s “really” just the lower layer with a new label.
If reducible means “dependent on”, then this is compatible with the higher layer having its own real structure. This is basically the position that I have been defending. Higher-levels are dependent on lower without being identical with them.
If reducible means “causally implemented by”, then this again seems compatible with the higher layer having its own real structure. I don’t deny that Hypericin’s chess computer is causally implemented by particle physics; what I deny is that the chess computer is nothing more than particle physics.
And this denial of “nothing more” is where the discussion seems to get bogged down. The emergentist says the chess computer is “something more” than particle physics, and the reductionist interprets this as if another entity is being posited. But that’s not quite what’s being claimed. What’s being claimed is particle physics is merely one aspect of the chess computer’s being or objecthood. In other words, there’s more to being an object than (1) having a material substrate and (2) being a product of causal processes. There’s also (3) having a formal structure and (4) having dynamic orientation. To be an object is to be explicable across all four dimensions.
With respect to Conway’s Game of Life, yes, this is something that Hypericin and I discussed earlier in much the same vein. I don’t deny that “gliders” and “eaters” are reducible to the cells and their activation rules. But I question whether the Game of Life is an apt analogy for the world in general. We can’t just assume that it is without begging the question against the emergentist. And there seems to be good reason for thinking it’s a disanalogy rather than an analogy. Whereas it’s clear that the “motion” of the glider can be fully explained at the level of cells and activation rules, it’s not at all clear that “illegal move” or “segmentation fault” can be fully “explained” in terms of particle physics. But again, this gets back to the question of what it means to “explain” something.
And that’s probably the crux of the issue right there. I’m working with a picture of explanation that is more “robust” than the one that the reductionist (of any stripe) is typically working with. If you think that “explaining something” is a matter of elucidating its material substrate and modeling how it was produced, then emergentism will always feel like it’s trying to sneak something “extra” into the equation. So it comes down to whether the other aspects of explanation ((3) and (4) from the list above) capture anything “real” about the object that is not captured by the other two. I think they do. And I think that denying this leads to a number of unsavory conceptual difficulties.
Yes, that’s reductionism in middle emergence to me.
You refer to emergentist and reductionist as if they’re opposites, but in my view they’re just two sides of the same coin. I’m an emergentist and a reductionist. Emergence is walking up a set of stairs, reduction is walking down them. I believe in the set of stairs.
Nice. It sounds like our thoughts regarding this matter might be closer than it seemed. You seem to be acknowledging that, while higher layers are “dependent on” and “causally implemented by” the layers below, they are still real in their own right. So if you agree that higher levels legitimately have their own structure, and that this structure is not reducible to the structure of the layers below, then we may be on the same page.
I guess where I’d push back a bit is on calling this “reductionism”. Traditionally, reduction requires identity or elimination, which are stronger than either dependence or causal implementation, neither of which imply that higher level structures are reducible to the layers below. Is there a specific reason you want to hold on to the label “reductionism” when describing your position?
To me, causally implemented by and reducible to are synonyms. Maybe we’re just using those words differently.
If everything causally reduces to X, then there’s no specific event you can’t describe and explain in terms of X - but a “specific event” is different from the higher level abstractions and patterns we use to predict things in general. In principle, physics can explain any specific event, while it may not be capable of explaining high level general principles.
I think that this is running up against the same tension I’ve been discussing with Hypericin. You’re saying “specific events” are reducible to physics, but higher-level patterns aren’t. That’s fine, but if these same patterns are just “generalizations” over those very same events, then shouldn’t they be reducible to physics too? Or are you saying that these generalizations are somehow “more” than the specific events they generalize over?
Also, this notion that physics can explain any “specific event” doesn’t seem right. “The chess computer made an illegal move” is a specific event, but it can’t be explained in terms of physics. Physics lacks the generalizations to capture the normative structure of an illegal move. So “physics explains every event” seems to beg the question of what it means to explain something.
Finally, you said you consider “causally implemented by” and “reducible to” to be synonyms, but it’s worth noting that this is not how the words are generally used in philosophy. Causal implementation is a dependence relation, whereas reducibility typically implies something stronger like identity, elimination, or translation. So it’s still not clear to me what the word “reduction” is doing for you.
It’s something I’m not quite sure about. Maybe, truthfully, it is all reducible in every sense of the word even if we can’t ever comprehend how to reduce it.
It sounds like you and I both agree that it’s unlikely that there’s strong emergence, right? Strong emergence as in: there are things that happen that aren’t causally arising as a consequence of low-level laws, of small things doing their things, but as a consequence of a whole new class of laws that govern macroscopic things. We both agree that that’s not likely to be the case, right?
I completely disagree here. You interpret the specific event as an illegal move, but really it’s… a bunch of pixels on the screen. So when you ask to explain why it made that illegal move, in terms of physics, you have to approach the question in terms of physics of course. So the question becomes, “what sequence of events led these pixels to appear on my screen?” And that is absolutely something that can be explained in terms of physics. I think.
I appreciate your honesty here, but it’s hard to see how “it’s all reducible but I can’t explain how” amounts to anything more than hand waving. If we can’t comprehend the reduction, then what positive evidence do we have for thinking it obtains? Couldn’t we plausibly argue that its incomprehensibility is evidence that it doesn’t obtain?
Sure, I wouldn’t argue that higher levels override physics. We can accept causal closure at the physical level while denying that all formal structure is reducible to particle physics. And with regard to @Pierre-Normand’s question regarding rational agency, formal structure matters a lot.
This is the heart of the issue. What does the word “really” mean here and what does it say about the relationship between the “illegal move” and the “pixel”? Are you proposing identity? Elimination? Translation? Something else?
And isn’t a pixel “really” just a light emitting diode? And isn’t a light emitting diode “really” just some organic compounds? How do we determine which generalizations are the “real” ones and which are merely “interpretive”?
Also, you say that the salient question is “what sequence of events led these pixels to appear on my screen?” But why is this the only question that matters? The answer to that question doesn’t actually tell us anything about what an illegal move is, so how can it count as a full explanation?
I DON’T think it’s the only question that matters, but if you want to understand the event physically, you have to talk about the event physically. You don’t get to smuggle in high level concepts into low level explanations. The physical explanation will involve talking about physical things, not abstract things - EVEN IF those abstract things still have some kind of emergent real-ness to them
Agreed. Though, to be fair, no one was proposing that we should smuggle high level concepts into low level explanations. @Pierre-Normand’s original claim in the OP was that causal closure at the physical level doesn’t rule out the reality of rational agency. That is, rational agency has a formal structure that’s not reducible to particle physics per se, and a full explanation of the agent’s behavior must appeal to that formal structure.
That’s why I keep pressing for clarification of phrases such as “is really just”, “is nothing more than”, “is explained by” and “has emergent real-ness”. The whole issue rides on what these phrases actually mean, and what they imply about the reality of higher level patterns.
Apologies, I haven’t read all the past responses but I am going to give my opinion frankly because it is possibly the least popular perspective but I maintain that it’s the right one:
I don’t think “free will” defines a coherent concept. Even if we lived in a non-deterministic universe, what we call a rational choice is inherently a product of prior events and our own predilections. Indeed, this is not hypothetical, because our current (tentative) understanding of the universe is that it is not deterministic – there appear to be genuine random events – but such randomness is usually met with a shrug in the context of free will, because randomness doesn’t seem like a choice.
But this is exactly my point – what we mean by choice is inherently deterministic.
What about if there’s such a thing as a soul?
Well, we could ask: are souls blank slates? When I was born, was my soul the same as yours when you were born, or do souls have different characteristics from day 1? It doesn’t seem to help at all with understanding where uncaused choice comes from; it’s just a nebulous cloud we can claim grants this thing, somehow.
I agree with your summary of our positions, with the exception that I’m not comfortable with the term “irreducible” in (1), because our best understanding does not just include our conceptualization of individual levels. On top of this, it includes understanding of how the levels relate to one another. Together, these understandings form a holistic picture: a unitary object whose dynamics exhibit multiple levels of behavior that are best grasped with multiple levels of understanding.
For instance, we both know that a computer program generally cannot be understood at the level of assembly language or byte code. This is simply too low-level; higher-level features are completely obscured here. We need to look at source code. And in fact, we habitually divide source code itself into levels, each depending on the levels below and supporting the levels above.
Does this mean that we need to grant some kind of deep ontological reality to each of these levels? No, because we understand how these levels relate to one another, and we understand that each of these levels is a partial view on an organic whole, which is in fact completely described by the assembly language, however poor that language is for actual understanding.
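The assembly-vs-source-code point can be illustrated with a toy example (in Python rather than actual assembly, so this is only an analogy): the same computation written twice, once so the intent is legible, and once “flattened” the way a compiler might lower it. Nothing is computationally added or lost between the two, but the higher-level features are obscured in the second.

```python
# High-level view: the "what" is transparent.
def sum_of_squares(xs):
    """Sum the squares of a sequence of numbers."""
    return sum(x * x for x in xs)

# "Low-level" view: the same computation, flattened into one accumulator
# and an explicit index loop, the way a compiler might lower it.
# It completely describes the behaviour, yet obscures the intent.
def sum_of_squares_lowlevel(xs):
    acc = 0
    i = 0
    n = len(xs)
    while i < n:
        t = xs[i]
        acc = acc + t * t
        i = i + 1
    return acc

print(sum_of_squares([1, 2, 3]))           # 14
print(sum_of_squares_lowlevel([1, 2, 3]))  # 14
```

Both views describe one and the same process; the levels differ in what they make understandable, not in what they describe.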
And where does that get you?
I agree, the particular chess computer fulfills membership in the abstract type chess computer. But what of it? My point in bringing up abstract types is that multiple realizability implies we are dealing with abstractions, and abstractions suggest nothing in particular about ontology beyond the specific ontological requirements of type membership.
Sure, some abstract types, electron for instance, may indeed “carve at the joints” of the world. But most do not. Men with green hats is as much an abstract type as electron. Whether a particular fulfills an abstract type is a fact. But it is a fact involving something that must be defined into being. I think we agree there is no ghostly man-with-green-hat type that infuses me when I wear a green hat.
Is it though?
I have been remiss if I have suggested reducibility to mere matter was ever possible. It is not. Physical reality is a combination of form and substance, and I don’t think many physical reductionists would suggest otherwise. Form resides firmly in the “p” of the OP.
And so when I say that the chess computer’s behavior is reducible to its microphysics, this is not the microphysics of a hodgepodge. It is the low-level physical dynamics of a very constrained arrangement of matter. That constraint is essential; a small deviation in sensitive areas such as the memory can destroy the device’s causal power.
Whereas you seem committed to something over and above form and matter. If each of the levels is irreducible, that irreducibility cannot be a function of form. It is the same object, and so the same form cuts across every level.
I’m mostly in agreement, with the caveat that I don’t think an additional “middle emergence” is necessary. This is already fully weak emergence.
I think the ontology you are grasping at is the ontology of behavior, and/or the causal power of the lower-level components working together to produce the higher-level effect. Weak emergence does not commit you to believing the emergent levels are illusions.
Overlook this if you’ve already considered it and I missed it. I’ve been reading a book by Brian Cox about light cones, world lines, Feynman diagrams, and what that has to do with causality. To a physicist, it may make sense to say that all the things you’ve done and will do exist, but much of it just isn’t accessible to you. I’m not trying to make the case that you need to believe that, but just saying it’s on the table as a serious take on how the universe may work. What Cox emphasizes is the reasons for our assumptions about time and space: how they’re related to what we are, and how physics can be imagined as Flatlanders trying to piece together the evidence of dimensions we can’t even imagine, but must be there.
Trying to work out the situation purely a priori may be a worthy exercise, but we might be running around in Flatlander pathways that don’t really match up with the universe. So maybe just take the force of a priori arguments with a grain of salt. Do you agree with that?
One note here. On the one hand, the chess computer defines what is and isn’t a legal move. This is captured in the programming, and ultimately the microphysics. Here the “normativity” is mechanical.
But then, what if there is an error in the code, such that the computer does not always play legal chess? This fact isn’t a part of the computer, at any emergent level. Rather, the normativity here is of the human variety.
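The two normativities can be sketched in code (a hypothetical toy, not any actual chess engine). The function below is the machine’s *mechanical* standard of legality: whatever it returns is, by the machine’s own lights, the law. The deliberate bug (admitting diagonal moves for a rook) means the machine will bless moves that are illegal by the *human* standard, the rules of chess, and that standard lives nowhere in the program.

```python
def rook_move_ok(src, dst):
    """This machine's (buggy) test for a legal rook move.
    src and dst are (file, rank) pairs.

    Intended rule: a rook moves along a rank or a file.
    DELIBERATE BUG: the last clause also admits diagonal moves,
    which the rules of chess reserve for bishops and queens."""
    same_file = src[0] == dst[0]
    same_rank = src[1] == dst[1]
    diagonal = abs(src[0] - dst[0]) == abs(src[1] - dst[1])  # the bug
    return same_file or same_rank or diagonal

# Mechanically legal AND legal chess:
print(rook_move_ok((0, 0), (0, 5)))  # True

# Mechanically "legal" by this program's own standard, yet illegal chess.
# Nothing inside the program marks this as an error; only the human
# standard outside the machine does.
print(rook_move_ok((0, 0), (3, 3)))  # True
```

Inside the program, the diagonal move is simply what the code computes; calling it an “illegal move” or a “bug” appeals to a norm the hardware and software nowhere contain.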