Philosophy of Mind and of Consciousness

No. That is a stereotyped idea of ‘the spiritual’. ‘Oh, people used to believe that thunder was caused by the Gods, but now they know it’s the result of moving masses of air.’

This is what I mean by an ‘outside’ or ‘external’ view. Here, in fact throughout, you are treating consciousness as phenomenon, something to be explained in other terms, presumably with reference to evolutionary biology and neurology. You’re arguing for what Thomas Nagel has described as ‘neo-darwinian materialism’. I’m not trying to cast pejoratives, so I’ll break it down:

  • Neo-darwinian - from the modern synthesis of evolutionary biology, Darwinian natural selection combined with Mendelian genetics.
  • Materialism - mind is the product of the brain, purpose is either adaptive or illusory.
  • Reality including the lived reality of beings can be understood in terms of molecular biology and related scientific disciplines

My objection to this is philosophical. In keeping with the OP, the kinds of objection that are associated with phenomenology do not require objective arguments or ‘falsifiable hypotheses’ about the origin of life. They are grounded in the observation that scientific practice is itself an inherently first-person activity.

Before genes, atoms, or neurons can be categorized, there must be a primary act of observation; after all, you cannot have a measurement without a measurer.

Which, of the cell under the microscope or the interpretive role of the scientist, is more “fundamentally real”? Cells, atoms, the ostensible “fundamental constituents” of science, all exist within an explanatory framework. But that framework is not itself among the objects of analysis. It resides in the minds of the observers and interpreters of the data. But the mind has already been ‘bracketed out’ by the presumption of objectivity. That’s what I mean by an ‘outside view’.

A couple of academic references. Daniel Dennett’s The Fantasy of First-Person Science argues that the “first-person point of view” is a sub-personal illusion—a “user-illusion” created by the brain’s biological software. He treats the “I” as a fictional center of narrative gravity, essentially saying that if science can’t measure it from the outside, it isn’t “real” in any fundamental sense.

Zahavi’s Killing the Straw Man is a response from the perspective of phenomenology. Zahavi argues that Dennett and his ilk are attacking a “straw man” version of phenomenology. He points out that you cannot “explain away” the first-person perspective using third-person science because third-person science is itself a first-person achievement.

It is just this kind of remark that vitiates the point of discussion. Presumably in saying this, you seek to persuade a reader of a fact. A fact is subject to judgement. But if a judgement is simply a ‘process update’, then who has been persuaded of what?

@Togo @Meta_U

Thank you both for your replies. While I find the topic of free will interesting and would be happy to discuss it further, I don’t want to derail @Wayfarer’s thread, and (at the moment) I’m not feeling compelled to create a new thread on the topic. That said, if one of you wants to start a new thread and lay out your view, I’d be happy to participate. Thanks!

I think your objection may be easier to answer than you’re suggesting, though the answer might be unsatisfying for a different reason than you expect.

I wouldn’t dispute that you can point the methods of science at science — that’s what the sociology of science, the cognitive science of reasoning, bibliometrics, etc. all do. And they produce genuinely useful results. So to your question “why can’t science study science?”, I think the answer is: it can, and it does.

But notice what those studies produce: empirical descriptions of how scientists behave, what institutional structures correlate with replicable results, what cognitive biases tend to distort judgment, and so on. What they don’t and can’t produce — not because of some mystical limitation but because of a straightforward logical point — is a justification of those practices in the first place. If you try to give an empirical argument for the evidential status of empirical evidence, you’re moving in a circle. Not a vicious circle necessarily, but I’d suggest that it’s a circle that can’t do the justificatory work being asked of it.

This isn’t the eye-can’t-see-itself metaphor, which I agree is too pat. It’s closer to: you can’t use a measuring instrument to certify its own accuracy without some independent standard. And the question “what makes that standard the right one?” is not an empirical question — it’s a normative one. It requires you to say something about what ought to count as good evidence and why, and that “ought” isn’t the kind of thing you settle by running another experiment.

So the issue isn’t science vs. “some other system we have reason to trust far less.” The issue is that certain questions are of a different logical type than the questions science is designed to answer, and recognizing that isn’t an indictment of science — it’s a description of what makes it work. Science’s power comes precisely from restricting itself to a certain kind of question. The residual questions I mentioned aren’t competitors to science; they’re what you get when you try to understand why that restriction is so spectacularly productive.

I see what you’re saying: if intelligibility is something we access, then when inquiry fails we can’t tell whether the fault lies with us or with the thing. But that framing assumes intelligibility is all-or-nothing — either the thing is fully intelligible to us right now, or it’s unintelligible full stop.

What actually happens in inquiry is neither of those. We partially understand, we revise, we correct. Sometimes we hit a wall because we lack the right concepts or instruments, and sometimes because the phenomenon is genuinely more complex than we initially thought. But in both cases, the process of self-correction is what tells us the intelligibility is real while our grasp of it remains incomplete. That’s exactly the space where discovery lives.

And here’s the thing — I think that the problem you’re raising actually cuts harder against the Kantian position. If the conditions of intelligibility are entirely part and parcel of us, then failure to understand becomes deeply puzzling. Why would our own conceptual machinery fail to determine an object, if the determination is coming entirely from our side? You’d need some account of why the knower’s own equipment misfires, and that account is going to end up pointing back at something about the object that resists or constrains the knower’s activity.

I’m not saying that every thing we encounter will be intelligible to us — that we’ll always succeed in understanding it. I’m saying that the things we encounter are the kind of things that admit of being understood, whether or not we happen to pull it off.

Those are very different. The first would be an overzealous claim about human omniscience. The second is just the claim that reality has a structure that is in principle open to understanding — and that when we do understand something, what we’ve grasped isn’t something we’ve projected onto it.

Think of it this way: a lock admits of being picked. That doesn’t mean every locksmith will succeed, or that anyone is even guaranteed to. It means the lock has an internal structure such that, given the right approach, it yields. The structure is a feature of the lock, not of the locksmith. Some locks might exceed every locksmith who ever lives. That doesn’t make the lock structureless — it makes the locksmiths fallible.

So no, I’m not presupposing that we’ll find every thing intelligible. I’m saying the burden actually runs the other way. Every time inquiry does succeed — every time we correct a misunderstanding, or discover why a previous hypothesis failed — we get further confirmation that there’s intelligible structure there to be engaged with. The finitude of the knower is real, but it’s a limitation on us, not evidence that the world is opaque in principle.

It’s not just a temporal distinction. “Already understood” and “able to be understood” don’t differ merely in when understanding happens — they differ in what they say about the object. “Able to be understood” is a claim about the thing’s own character: that it has a structure which can ground correct understanding. That’s not a statement about the timing of cognition, it’s a statement about what’s there to be known.

And I think your move to say “well, it could just be our own thinking that’s understood” actually strengthens my point rather than undermining it. Because when you do understand your own thinking — when you grasp, say, why a particular inference works — you’re not making it work by grasping it. You’re recognizing a structure that was already operative. The insight lands because it gets something right. That “getting something right” is exactly what I mean by accessing intelligibility rather than producing it, and it applies whether the object is a physical thing, a mathematical relation, or your own cognitive process.

The real question is: when understanding succeeds, is the success just internal coherence — thought agreeing with itself — or is it a matter of thought hitting on something that constrains it from beyond itself? If it’s only the first, then I genuinely don’t see how error is possible. Every coherent thought would be equally “successful.”

Ha — agreed on that last point. This has been a genuinely enjoyable exchange.

One parting thought. You say that because some inquiry is successful, it’s contradictory to call inquiry in general inexplicable. But I think there’s a subtle equivocation there. I’m not saying we can’t observe that inquiry sometimes works. We can. What I’m saying is that Kant’s framework lacks the resources to explain why it works — what it is about the relation between knower and known that makes successful inquiry possible rather than a happy accident. “Some inquiries succeed” is a datum. The question is whether Kant’s framework can account for it, or just shrug and note that it happens.

That’s where our real disagreement sits, I think. You’re satisfied with a system that describes the internal mechanics of cognition and leaves the question of its purchase on reality modestly unanswered. I want an account where the success of inquiry is intelligible — where we can say not just that we got it right but what it means to get it right. And that requires letting intelligibility be a feature of what’s known, not just a feature of the knowing.

But yes — we’ve each staked our ground and I don’t think either of us is likely to be moved by the other. Good exchange, genuinely. These are the conversations worth having.

There might be the same lack of surprise that I think the opposite. :grin:

Why do you consider the topic of free will as a derailing of the thread? The question of the op is the difference between philosophy of mind, and philosophy of consciousness.

My argument was that traditional philosophy of mind takes as a starting point, that the mind is an immaterial causal agent (free will). The more modern philosophy of consciousness tends to take a more scientific perspective, portraying consciousness, and consequently mind, as something caused by the material brain.

So I argue that the difference between them is well represented by the difference between free will and determinism. And I argue that the two are incompatible, as free will and determinism are incompatible.

If you refuse to discuss the difference between free will and determinism, then my judgement is that you refuse to take the topic of the thread seriously. Instead of considering my proposal of where a real difference between the two may lie, you want to avoid that thorny area because it does not support your belief that there is no real difference between the two. Then you continue to pretend that there is no real difference between the two, without justifying your stance.

Well now, that’s an interesting question, isn’t it?

I think this is probably exactly what happens; you learn that you can learn by looking.

One way to put this would be to make your circle hermeneutic: we start with a pre-conceptual habit, thematize that and refine it. It seems clear enough to me that this happens with individual humans, with cultures, and so on.

The original habit is likely biological, and how and why that comes about is no doubt a long and interesting story.

So yes, I think we (as individuals, cultures, species) almost certainly do rely on evidence to determine that we should rely on evidence.

The “should” in that sentence is probably pretty overloaded, but for me it’s mostly instrumental, right up to “should if you want to honor our epistemic norms.” You no doubt read it as something else.

I don’t believe they are. My points are quite specific.

Then my objection that this is mere special pleading stands. Our memories are part of the decision making, because they are used as an input to decision making. Our habits and personality are part of decision making, because they change the results of decision making. If our conscious sense experiences are used as an input into future decision making, then they are just as much part of ‘us’ as any other part of the algorithm you describe. Whether we have any real control over it is another matter, but it’s part of the system either way.

Do you have a reason for considering this part of the algorithmic processing, in which sense experience is produced and the results stored, as being any different from any other part of the algorithmic system?

Even a time delay, and there is no evidence for such a delay, but even a time delay wouldn’t be enough to separate the two. Memories have time delays. So does personality formation.

Wrong kind of illusion. An optical illusion of water in a desert is a physical phenomenon caused by the bending of light. Other optical illusions may depend upon the physical structure of the processing that takes place. Those illusions have physical components, and physical outputs. The ‘illusion’ you are talking about is an entirely ephemeral thing that has no physical basis whatsoever. A purely mental construct, time-separated from neuronal firing. A thing beyond science.

Predictive coding is about pattern matching for understanding incoming or stored information. Predictions, that is long-term planning, are handled in a different way by a different part of the brain.

The similarity is entirely illusory. The only thing the two have in common is that both are running models in which, in theory, any input can be tailored to produce any output. But the details on which they operate are not similar.

They thought they were similar, back in the 90s, but it turned out not to be the case.

Sure you can. You just have to distinguish between instant reactions, short-term operation and long-term operation, and have different systems for each. Which is what we find in practice. That way you can make instant reactions, very fast actions, and yet still usefully deliberate on a decision for seconds or even minutes at a time.

I don’t know how to respond to that without being incredibly patronising. Let’s just say I don’t think you’re better informed than I am.

Yeah, I don’t agree. But I’m trying to keep the discussion focused on the philosophy.

And there’s plenty to discuss there. Because what you’re describing is entirely speculative, and entirely beyond scientific observation, let alone falsification. It’s just a physically determined model with any inconvenient subjective bits exiled beyond a cordon sanitaire, to a non-physical realm of pure experience.

So all our thinking is done in the determined bit, all our identity is located in the determined bit, and then we invent a purely mental realm of subjective experience, to contain all the bits that don’t really fit.

And there’s a lot to object to there. Why the special pleading for subjective experience, to the extent of making up a new form of reality/abstraction to safely contain it in? Why not just have subjective experience as physical as everything else?

Why this insistence that identity must be absent or on the physical side, when the subjective sense experience is used as an input along with sensory data and structural personality?

And what recommends this model over any other? Just try mirroring it into a theological argument. On the one side we have physical processes, and if you insist an algorithm that they run, and on the other side we have God, decision making and our subjective identity and experience. God makes all the decisions, but we feel like we’re making choices. However, it’s not actually us, it’s an algorithm produced by physical structures which are operated by God.

I’d expect you’d dismiss such a construction, with assumption piled on assumption, outright. But what’s the difference between that and your own model? Both insist that both decision-making and our real identity are absent or elsewhere, without any evidence to support that. Both posit the creation of another realm/abstraction to contain all the bits that don’t fit in the physical. Both lack any kind of explanatory power. Both owe their structure to an attempt to preserve an idea or set of ideas (determinism, theological determinism) that otherwise might not feature in the model.

Why not just follow the science? We know from scientific models that the best-fit explanations for how the world works are not all determined ones. And we equally know from scientific models that the best predictors of human behaviour are those that model us as having a subjective identity involved in decision making.

Sure, it’s possible to create a model that removes that ability or declares it to be illusory, and it’s possible to create a narrative to explain why this is totally undetectable to us. But why declare our decisions to be secretly made by some undetectable process or being? What is gained?

Pretty much, yep; intelligibility does not have degrees. We do not say of a thing it is intelligible because it is blue, or round or heavy or next door or twenty thousand years old. We say a thing is intelligible because, first, it is a possible experience, and second, it is that to which concepts can be applied in order to identify how the thing is to be known if it should become an experience.

An interesting offshoot of that might be…..does that, or how does that, relate to the hard problem? While the intelligibility of the brain may be discoverable according to your thesis, even though sometimes we hit a wall for some reason, in cases such as the hard problem where there isn’t any self-correction, how does intelligibility of the brain remain real? Are you not left to say eventually the wall comes down? I don’t think it’s justified to prophesy here, insofar as the phenomenal complexity may just be too great.

And from the strictest possible Kantian argument, how in the dickens can one expect the brain, the alleged space where discovery lives, to explain itself?

Before going any further, try this on for size: when you say “conditions of intelligibility” you’ve accented the wrong subject. You’ve made intelligibility that to which conditions belong, in which case degrees of it are reasonable to suppose, when in fact, as I’m arguing anyway, intelligibility is itself a condition of something that is the subject, in which case degrees of it are unwarranted. And if you meant to say the concepts we assign are those conditions of intelligibility, then you’ve misplaced the conceptions, in that they belong to the thing said to be intelligible simply because concepts can be assigned or applied to it.

With that out of the way……

Intelligibility is a general qualitative condition, understanding is a particular cognitive function. One really has nothing to do with the other; the best one might say is the understanding can only work with something intelligible to it, with something to which the set of logical rules for its functions, applies.

Our conceptual machinery doesn’t fail to determine; it only fails if whatever its determinations are do not sufficiently accord with Nature’s provisions on the one hand, re: experience, or with our own will on the other. It is judgement that informs us of the relative conformity of conceptual determinations, whether epistemological or moral. Intelligibility is irrelevant with respect to morality of course, even though determinations from conceptual machinery aren’t.

While the determinations are entirely from our side, these one and all ensue from mere representations of that which is not on our side, or, which is the same thing, outside us.

I don’t think it’s fair to blame Nature because we misjudged something contained in it. It is hardly Nature’s fault we thought there was a lake in the middle of the road up ahead. Isn’t to call it a mirage an admission of our own faulty determinations? I submit we account for our own equipment’s misfires every time we find sufficient reason to change our minds.

I’ve never denied intelligibility to be a feature of what’s known, only contending with the notion intelligibility belongs to what’s known. I’ve stated for the record herein, that I think intelligibility is a qualitative condition of a thing, which is to say intelligibility is a secondary property, which is to say it is a feature, all of which depend on the knower to provide.

What it is between the knower and the known that makes successful inquiry possible, is a how, not a why.

Kant’s framework does account for both successful and unsuccessful inquiries, as long as the method he invented is understood the way he intended.

By my account, no inquiry will even be possible if its object is unintelligible.

Question for ya: what do you think of the idea that intelligibility serves as primary ground for the indirect realist paradigm?

I agree that Consciousness is “just one mental process among many”. Scientists typically divide mentality into two or three categories : Consciousness (intentional, effective), Subconscious (automatic, affective), and sometimes Pre-conscious (e.g. Memories). But I suppose the OP focuses on human-type Consciousness because it is inferred indirectly from behavior — including words — and less subject to scientific empiricism & reductionism, hence more endlessly philosophically debatable.

Single-cell Amoeba appear to interact with their environment automatically & mechanically, without conscious awareness. Their core DNA seems to serve as a rudimentary brain. But when you ascend the levels of animal behavior, interactions tend to be more flexible & adaptable, as their brains become more complex. So “higher” animals, such as dogs, behave more intentionally (with a clear self/other concept), and are able to discriminate (choose) between alternative responses. Apparently at the top of the brain-chain, Humans can be more definitive (categorical) & rational (logical) & memorious (factual recall). Therefore, Philosophy seems to require abstract engagement with other minds, via metaphysical thoughts & ideas instead of physical things & feelings.

That philosophical detachment from the physical world may be what led to the ancient concepts of immaterial Spirits & Souls as stand-ins for physical bodies and brains. Hence, the designation of Philosophy as Metaphysical, by contrast with Physical Science. As you implied, Mind & Consciousness are different, not at a fundamental factual level, but at the super-structure level of theoretical postulations. Which is why I have concluded that Consciousness is emergent instead of fundamental.

PS___ What is fundamental in the evolving Cosmos is Causation. :slightly_smiling_face:

Mental causation is vitally important to the integrated information theory (IIT), which says consciousness exists since it is causally efficacious.

But the bending of light isn’t, by itself, an illusion. The optical illusion of water is only an illusion to an entity that can see it, and think, even if it doesn’t think explicitly like humans do, “That looks like water.”

Not mechanically! That’s the dread ‘mechanist analogy’ which is a hangover from Cartesian dualism where anything physical could be described in mechanical terms. Whereas no machine acts in such a way (leaving aside simple analogies like thermostats that ‘react’ according to environmental cues or today’s AI systems which are highly adaptive but still artificial as it says on the label.)

I will look at that paper you linked as ‘mental causation’ is something I’m interested in. Integrated Information Theory (IIT) is big subject in its own right. I might put together a separate thread on that.

But you can, again, see how this eventually took the form of Cartesian Dualism. That’s why I continually hark back to René Descartes, and have created another thread on it, based on a Medium essay, Descartes’ Ghost. I maintain that Cartesian dualism is still highly influential in the way we think even if we’re not consciously aware of it. The instinctive use of the analogy of mechanism, which is used in many different contexts, ultimately goes back to it.

@EQV check this out https://www.bloomsbury.com/us/measuring-the-immeasurable-mind-9781793640123/

Author of above paper, Matthew Owen.

@Gnomon - that paper is important and deserves close reading


That’s a good point.

But still I feel there is a distinction to be drawn between a physical input that misleads a neural processing system and an entirely non-physical mental entity that exists as pure subjective experience without any physical component, even to the extent of being time-delayed from its point of origin.

One is not an example of the other.

@Togo
I don’t see any justification for thinking that any mental activity or consciousness exists without a physical base. We’re certainly not aware of any such entities.

My point is that there is no illusion without consciousness. Which I think proves that consciousness is not an illusion. The idea that the thing that perceives illusions is, itself, an illusion seems nonsensical. I’m not saying you’re saying that. I’m just using what you said as a response to those who say consciousness is an illusion. (Which I saw being discussed, even though I can’t find it now. I don’t have a computer. I do everything on my cell phone, and I haven’t yet learned how to get around this new platform.)

If you are not aware of your own soul, then maybe you haven’t spent enough time in self-reflection.

Here’s a question for you. The first living physical body on this planet consisted of highly organized, directed activity. Wouldn’t it be the case that the cause of this highly organized, directed activity, which constitutes the “physical base” of living beings, is prior to that physical base? In other words, that which directs the activity is necessarily prior to the directed activity.

Can you provide a more thorough description of the first living physical body, and its specific highly organized, directed activities?

I would argue that intelligibility plainly does have degrees — or more precisely, our grasp of it does, and the thing itself has layers of intelligible structure that yield themselves to inquiry progressively. We understood combustion before we understood oxidation. We understood oxidation before we understood electron transfer. Each stage was a genuine grasp of something real about what was going on, and each was also incomplete. That’s not a case of something flipping from unintelligible to intelligible; it’s a case of deeper and deeper penetration into a structure that was there all along.

Your formulation — “that to which concepts can be applied” — is exactly where I think the Kantian picture goes wrong. It makes intelligibility a function of our conceptual readiness rather than a feature of the thing. But then you have no way to explain why some concepts work and others don’t. If intelligibility is just the applicability of concepts, then any internally consistent set of concepts should work equally well. The fact that they don’t — that nature pushes back against our conceptual schemes and forces revision — is precisely what I mean by intelligibility being on the side of the object.

The hard problem is a great test case actually, but I think it cuts the opposite direction from what you intend. Notice what’s happening when we say consciousness is “hard” to explain: we’re saying that our current conceptual and methodological toolkit hasn’t cracked it yet. But we’re also saying there’s something there that demands explanation — that the phenomenon is real, structured, and not yet adequately understood. That’s exactly the posture of inquiry toward something whose intelligibility outruns our current grasp.

Am I saying the wall will come down? No — I’m finite, the problem might exceed us. I said as much earlier with the lock analogy. But the right inference from “we can’t currently explain X” is not “X has no intelligible structure.” It’s “we don’t yet understand the intelligible structure of X.” Those are very different postures and they lead to very different methodological attitudes.

As for the brain explaining itself — I actually think that framing smuggles in a confusion. It’s not the brain that explains anything. We explain things, using our brains among other things, but the act of understanding isn’t reducible to a brain event. If it were, then the hard problem wouldn’t be hard — it would just be another neuroscience problem. The fact that it resists that reduction is itself interesting, and I think it points toward something real about the irreducibility of understanding to its material substrate.

I think you’re drawing a distinction between intelligibility and understanding that actually ends up undermining your own position. If intelligibility is just the bare formal property of being “something to which concepts can apply,” then it’s doing almost no philosophical work. It just means “possible object of experience” — which for Kant is tautological, since anything that shows up in experience is by definition something concepts can be applied to. You’ve purchased a clean distinction at the cost of making intelligibility trivial.

The interesting question — the one that matters — is why do some conceptual determinations accord with Nature’s provisions and others don’t? You say judgment informs us of the conformity. Fine. But conformity to what? You say: to representations of what is outside us. But now ask yourself — what is it about what’s outside us that makes some representations conform and others fail? If you answer “we can’t say, because we only have the representations,” then you’ve made the success of inquiry a brute fact with no explanation. If you answer “something about the structure of the thing constrains which representations succeed,” then you’re conceding what I’ve been arguing — that intelligibility belongs to the thing, not merely to our conceptual readiness.

You can’t have it both ways. You can’t say our determinations arise from representations of what’s outside us and that intelligibility has nothing to do with the thing outside us. The representations are constrained from somewhere. That somewhere is what I’m calling the intelligible structure of the object.

But why do we find sufficient reason to change our minds? Because further experience and inquiry reveal that our initial determination doesn’t hold up. The mirage is a perfect example — we thought there was a lake, we investigate further, and reality doesn’t cooperate. We revise. But what forced the revision? Not just internal coherence checking. It was the world failing to behave the way our initial judgment predicted it would. That’s the object constraining cognition from outside.

I’m not “blaming Nature” for our errors. I’m saying that the very fact that we can recognize something as an error — that we can distinguish a mirage from a lake — requires that there’s a determinate way things are, independent of our determination. If the only story is “we made faulty determinations and then made better ones,” you still owe an account of what makes the second determination better. And “it accords more closely with Nature’s provisions” just is the claim that the object has a structure that our judgments are answerable to. Which, again, is my position.

Why? Do you not believe that the first living physical body on this planet consisted of highly organized, directed activity? If that is the case, can you give an example of a living body which does not consist of highly organized directed activity, so that I can understand what your concern is?