It seems to me that what traditionally goes under the label philosophy of mind in analytic philosophy is largely concerned with questions like:
The mind–body problem
Mental causation
Physicalism vs dualism
Functionalism, identity theory, etc.
In other words, it treats “mind” as a theoretical posit within a broadly naturalistic framework. Representative figures include:
John Searle
Ned Block
Jaegwon Kim
Hilary Putnam
But over the last few decades there’s also been something slightly different emerging — sometimes called philosophy of consciousness — influenced by phenomenology, enactivism, embodied cognition, and even cross-cultural sources (Buddhism and Advaita Vedanta in particular).
This second stream seems less concerned with the relation between mental states and brain states, and more concerned with:
The structure of lived experience
First-person givenness
The role of embodiment and environment
The limits of third-person explanation
It draws more heavily on developments in cognitive science — particularly the so-called “4E” approaches (embodied, embedded, enactive, extended) — as well as neuroscience, though often with a critical stance toward straightforward reductionism.
Key text:
The Embodied Mind, by Varela, Thompson, and Rosch (revised edition, 2015).
Other influential authors include:
Shaun Gallagher
Alva Noë
Andy Clark
So I’m wondering:
Is this just a change of emphasis within the same field?
Or are we seeing the emergence of a genuinely different philosophical project, with a different starting point?
Would it make sense to distinguish philosophy of mind (as traditionally analytic and theory-driven) from philosophy of consciousness (as broader, phenomenologically informed, and more interdisciplinary)?
Curious how others here see the distinction, and whether there is one.
It might make sense to make two different philosophical disciplines — one for mind and one for consciousness — but only if the two types of phenomena are different at some fundamental level. You won’t be surprised to find that I don’t think they are. For me, consciousness is just one mental process among many. It strikes me that splitting the subject up presupposes they are different when that hasn’t been established.
‘Our perception of reality is not an exact representation of the objective truth but rather a combination of sensory inputs and the brain’s interpretation of these signals. This interpretation is influenced by past experiences and is often predictive, with the brain creating categories of similar instances to anticipate future events.’
EDIT: this was supposed to be a reply to @Wayfarer, but it looks like I accidentally replied to @T_Clark instead. Sorry about that. Still getting the hang of the new platform.
========
@Wayfarer – I think you’re identifying something real, but I’d resist the binary framing slightly. The distinction you’re drawing—between a theory-first, third-person naturalism versus a lived-experience-first, first-person phenomenology—is sharp enough. But I suspect the more interesting question is why this split emerged and what it reveals about deeper methodological commitments.
The traditional philosophy of mind you describe tends to start from a particular stance: we have a completed third-person scientific picture of the world, and the philosophical problem is to show how mental phenomena fit into it. The mind becomes what needs to be explained away or reduced to or functionalized into the physical order. There’s an implicit theoretical ideal at work—the idea that real explanation moves from the general, structural, mathematical level downward to particular cases.
The emerging consciousness studies you mention inverts this starting point. It begins with what’s undeniably available: the structure of lived experience itself. From there, it asks what our best empirical understanding of cognition and embodiment can illuminate about that structure, rather than asking what structure must disappear to make room for a completed physics.
The question then isn’t whether these are different fields—they’re clearly operating with different orientations. The deeper question is whether they’re asking compatible questions or genuinely incommensurable ones.
Here’s what I’d suggest: the traditional philosophy of mind is driven by a particular epistemological anxiety—the fear that subjective appearance doesn’t match objective reality, so mental phenomena need to be vindicated by reduction to something more fundamental. But this anxiety already presupposes a gap that may not be there. The lived world of perception, intention, and embodied action is part of how the world actually is. A complete natural science wouldn’t render this irrelevant; it would illuminate how that lived world coheres with everything else we know.
So rather than two separate projects, you might have one unfolding project working through its own internal tensions. The newer work isn’t abandoning naturalism or third-person rigor—it’s insisting that both of those commitments are better served by taking first-person structure seriously from the start, not as an afterthought to be accommodated by the theory-driven approach.
The real payoff of phenomenologically-informed work isn’t that it gives us a different kind of knowledge insulated from science. It’s that it clarifies what we’re actually explaining when we do empirical cognitive science. You can’t explain perception, intentionality, or embodied agency unless you’ve first gotten clear on what those phenomena actually are—and that requires the kind of careful, non-reductive attention to lived experience that this newer literature brings.
In that sense, maybe the distinction is less between “philosophy of mind” and “philosophy of consciousness,” and more between two different moments in a single explanatory project: the moment where we clarify what needs explaining, and the moment where we develop theories to do the explaining.
100%. Right on the mark. I agree the deeper issue may be methodological rather than disciplinary.
The “epistemological anxiety” point is especially interesting. Much of philosophy of mind does seem driven by the assumption that the third-person scientific picture is already authoritative, and the task is to reconcile subjectivity with it.
Perhaps the shift is less a split between fields and more a shift in starting point — from fitting mind into a pre-given ontology, to clarifying what consciousness actually is before theorising about it.
The question then becomes whether these are genuinely complementary moments of one project, as you suggest, or whether the starting points lead to subtly different conceptions of what counts as explanation in the first place.
Either way, your point is well taken. But you might still agree that the ‘Consciousness Studies’ sub-discipline is at least somewhat different from ‘Philosophy of Mind’?
Right - I think there is a link between embodied cognition and Eastern philosophies. That is made clear in The Embodied Mind which draws on Buddhist ‘abhidharma’ (which is the philosophical psychology of Buddhism).
On further thought, my choice of subtitle was poor. I changed it to Philosophy of Mind AND Consciousness Studies - which overlap but which are different disciplines.
‘Consciousness Studies’ is much more eclectic. The agenda for the biennial consciousness conferences in Tucson, and the associated Center for Consciousness Studies there, amply illustrate that.
Yes, that’s a fair correction — “Consciousness Studies” as practiced at Tucson and in that broader literature really is more eclectic, both in method and in what it’s willing to count as evidence or insight. It draws on phenomenology, contemplative traditions, neuroscience, even physics, in ways that mainstream analytic philosophy of mind typically wouldn’t.
Philosophy of Mind is a field in which you encounter questions that concern at least three other fields: Philosophy of Science, Philosophy of Language, and Metaphysics.
For some time I reluctantly accepted that a physicalist must reject the idea that mind exists. Dennett, for instance, explains it away as an illusion.
Turns out you can be a physicalist and believe that the mind exists. For example, as an emergent biological phenomenon, and, according to Searle, without assuming dualism.
I think physicalism is true but unfinished business.
I was going to post this as evidence that mainstream cognitive science can be “embodied” as much as approaches that are more in line with my understanding of your ideas. It’s from Damasio’s “The Feeling of What Happens.”
I have come to conclude that the organism, as represented inside its own brain, is a likely biological forerunner for what eventually becomes the elusive sense of self. The deep roots for the self, including the elaborate self which encompasses identity and personhood, are to be found in the ensemble of brain devices which continuously and nonconsciously maintain the body state within the narrow range and relative stability required for survival. These devices continually represent, nonconsciously, the state of the living body, along its many dimensions. I call the state of activity within the ensemble of such devices the proto-self, the nonconscious forerunner for the levels of self which appear in our minds as the conscious protagonists of consciousness: core self and autobiographical self.
While looking on the web, I found Damasio is considered a strong proponent of embodied cognition. I was surprised.
I believe the two are fundamentally incompatible, and the gap you mentioned cannot be closed. This is the gap between our lived experience of intention as a cause, along with the concept of free will in general, and the determinist assumptions which are employed by physicists to make activities predictable and intelligible.
In other words, there is a fundamental difference in the understanding of “causation” which makes the two incompatible, and this is a gap which cannot be closed without altering some foundational principles. Compatibilism claims to close the gap, but this is only a superficial pretense — an insistence that the gap can be closed which does not pay due respect to the actual problems.
This is like claiming that because we can use statistics and probabilities to successfully predict a large percentage of occurrences of a specific type of activity, we therefore understand that activity. Radioactive decay is an example: the half-life is highly predictable, which is taken to mean that the activity is properly “caused”; yet we cannot say which atoms will decay, which means the “cause” is beyond our grasp.
Then the trend is to posit some sort of randomness as inherent within the cause — a posit which designates the cause as unintelligible. This is the pattern in philosophical interpretations of types of causation which scientific principles cannot grasp: randomness is assigned as inherent within the cause, and the cause is thereby declared unintelligible. Examples might include quantum fluctuations, abiogenesis, and even random gene mutations.
Ultimately, the type of causation which is most highly intelligible to the mind of a self-reflective individual, the ambition and final cause of intention, becomes totally unintelligible to the science of consciousness, as having inherent randomness.
I think, like EQV, that there’s no real conflict or difference between them, and that this convergence is a recent development — the result of predictive coding, a more serious approach to studying psychedelics, and AI systems gaining more attention as examples of emergent systems.
I think we have recently realized that there’s a limit to a reductionist approach to understanding the brain and mind. Concepts like criticality seem crucial for explaining the complete phenomenological experience, and people returning from clinical death report a strange “re-assembly” of their cognitive experience and perception as the brain fires up its different parts (described as almost like experiencing the dimensions of reality adding together until a full three-dimensional experience is reached). So the purely reductionist physiological approach is not enough — but it is not invalid either.
So I think it’s more a fusion of fields. The emergent aspects of our consciousness behave similarly to the emergent properties of AI systems, able to do things that were not programmed, because both are based on the criticality of a chaotic system with weights that drive the direction of that chaos.
So it is becoming clear that our consciousness is deeply interlinked with the entirety of our body as an adaptable system tuned to reach a certain criticality of chaos — like a stable electric arc between two poles that maintains a steady shape as long as every part of it maintains the right conditions.
What I’ve found most fascinating in recent years is the concept of our perception and consciousness being a controlled hallucination: the untethered mind would hallucinate to such extremes that we would lose all ability to function properly. This is what experiments with psychedelics have suggested — they shut off the parts of us that most likely handle the tethering, the grounding of our hallucination. The hallucinations we get through psychedelics arise not because we trigger something, but because we let our brain loose to do whatever it wants.
Our perception and consciousness are therefore more a hallucination driven by the prediction system trying to predict the next moment in time based on previous information, then verified by the sensory data we constantly receive. Without psychedelics, and outside of REM sleep, our senses ground this hallucination into a stable perception of reality.
Our different brain systems are therefore more of a suppression system for an extreme hallucination machine, aimed towards trying to predict the next moment of reality. With sensory information, it grounds this stream of hallucinations so that the next prediction is based on the grounded information, granting a sense of causality of the reality around us. As we see the glass fall, everything is predicted to form a proper experience of reality within the limits of our sensory perception.
But this also functions on a more complex level, which is the level that distinguishes us humans from other animals: our long-term memory being used as a prediction algorithm for higher complexities. We can process not just a prediction of each moment in time; we can also store chunks of entire events and ground predictions of entire chunks of reality. That is, we don’t just follow the glass falling, using predictive coding to maintain a stream of current events — we are also able to hallucinate a prediction of the outcome of a larger chunk: that the glass will smash to bits when it hits the ground.
It’s this complex form of prediction that I think is the distinguishing difference between animals and us: the ability of our consciousness to predict much further forward in time.
As I’ve described in another thread, the reason for this is adaptability. It’s the evolutionary trait that humanity mastered — the ability to adapt to any situation in nature, so complex that it generated communication between members of a pack for the purpose of adapting on a longer timescale. Collaboration is a byproduct of our consciousness being able to make long-form predictions, which enabled us to include allies within a problem-solving process.
Fundamentally, I think the entire experience of our perception and cognition is a byproduct that emerged out of a simple evolutionary trait which, driven to its maximum function, accidentally produces a more aware experience. We didn’t evolve consciousness as a complex trait in itself; rather, a simple necessity to adapt to changing conditions required aspects of our cognition to reach that adaptability, and this accidentally caused the awareness that led to the experience we now have of ourselves and reality.
It is a physiological cause, but the phenomenological experience we have is an emergent result of a process that was meant to solve just a single thing in nature. The “software” started to behave in more complex ways than initially intended.
And the “software”, the emergent state of consciousness, can be filled with many other phenomena that have to do with the structure of our lived experience. What I think is happening now is that people are starting to realize that we cannot approach this with a reductionist method, because the emergent aspects of our experience don’t seem to come from the physical directly, but from the sum of “software” processing that gives the abstract emergent state of mind demanded by the ability to adapt to changing conditions. It is impossible to work backwards from the effect to a physiological cause, because the cause is the sum of the already emerging phenomena — a second layer of abstraction from chaos, essentially.
All our experiences are linked to this adaptability processing, giving us an illusion — or perhaps even granting us a higher perception of reality — as a byproduct that wasn’t intended by the evolutionary trait we initially developed.
Our consciousness would therefore be the unintended result of evolution solving a survivability problem.
And we may very well participate in nurturing a further evolutionary development of higher cognition as the world we’ve formed demands solutions and adaptation to complexities we initially didn’t have when developing our current consciousness. Like a perception of higher-dimensional thinking, narrative layering, symbolic linking, etc. Things we are today struggling to comprehend, which means there’s an evolutionary push towards developing even higher cognitive abilities for adapting to more complex experiences of reality.
To conclude, all of this requires all fields to loosen up and look at the entire thing holistically. We have physiological causes creating emergent events, which in turn create further emergent properties that form the chunked processing of reality, which further produces even higher levels of navigating that reality as an abstract simulation.
I think all fields are starting to talk to each other, because the errors in one seem to find the solution somewhere else and vice versa.
This is a fine example of what I talked about above. When the type of causation involved cannot be understood by the determinist principles employed by science, the cause of the observed activity is designated as unintelligible, i.e. random chance, or “chaos” in this example.
So instead of allowing for a type of causation which is incompatible with determinism, final cause, occurrences which could very possibly be explained as the effects of a final cause type causation, are categorized as chance. This approach dissuades the individual philosopher from following up on the reality of final cause.
If everything which has been caused by final cause type causation is designated as caused by chance, then there is no need to consider the reality of final cause. Final cause slips out of the picture, under the illusion that things caused by this type of causation are random chance, or chaotic activity. That, I propose, is what is common to “chaos theory” proponents. The importance of the initial conditions, which are demonstrated to be so critical to the outcome, is dismissed as chance occurrence.
But that could easily be criticized as ‘evolutionary reductionism’, could it not? It’s pretty well standard neo-Darwinism applied to cognition. But there’s a yawning philosophical gap in this account: all the genius and creativity and existential angst and ecstasy, an accidental by-product of the ‘four F’s’ of evolution — fighting, fleeing, feeding and reproduction. Leaves something out, don’t you think?
I want to backtrack to this earlier comment, because it says something important. The ‘particular epistemological anxiety’ you refer to is, I suggest, very much characteristic of modern culture and society. It is what Richard Bernstein described as the ‘Cartesian Anxiety’:
Descartes leads us with an apparent and ineluctable necessity to a grand and seductive Either/Or. Either there is some support for our being, a fixed foundation for our knowledge, or we cannot escape the forces of darkness that envelop us with madness, with intellectual and moral chaos.
(The context of the ‘chaos’ that Descartes wrote of was that of the Thirty Years War and the associated religious conflicts.) But according to Bernstein (Beyond Objectivism and Relativism), this gave rise to a state whereby knowledge either has an indubitable foundational certainty or it tends to devolve into a matter of opinion or social custom — that is, relativism. This apparent dilemma of ‘relativism v objectivism’ is precisely what his book seeks to address (through pragmatism).
Furthermore, after Descartes, mind and world become separated in principle. Thought becomes an inner, private domain, while nature is conceived as an external system governed by impersonal laws. This is one of the distinguishing characteristics of the early modern period.
I say that this underlies much of the discussion and debate in academic philosophy of mind. So it’s not straightforward that the ‘perceived gap’ isn’t really there, or that we ought to be able to see how perception, intention and embodied action just are ‘part of the world we see’ — because by definition, these factors are not objective.
It was just this that was the basis of Chalmers’s ‘hard problem’ of consciousness (‘Facing Up to the Problem of Consciousness’). It is why, in that original paper, he proposes a non-reductive theory in which experience is taken as fundamental. Hence it was one of the seminal papers that started modern “Consciousness Studies” as an eclectic sub-discipline distinct from pure “analytic philosophy of mind”.
Isn’t that simplifying evolution as a factor? And also ignoring the fact that since the dawn of consciousness, it has itself been evolving and expanding in its complexity. We didn’t magically wake up one day with self-awareness; the development of our modern form of consciousness has been an evolutionary process in itself.
In essence, demystifying the origin of consciousness does not trivialize its value.
The actual implications of “adaptability” as an evolutionary trait are more consequential in complexity than I think people realize. It’s overlooked because people want consciousness to be more “magical” than it fundamentally seems to be.
We want our own sense of existence to be special. It’s deeply existential and a form of humanity’s ego at play; we get a nihilistic notion when we’re beginning to understand how we function. Throughout the history of medicine, many worked against new discoveries, claiming them to be a blasphemous mockery of the magical design of God or gods. Each time a new discovery was made and verified, the idea of “is that it?” appeared as a nihilistic emotion, driving rejection in society. Reaching the point in history of us examining our own consciousness naturally produces the largest rejection of explanations, as we are trying to maintain the “magic” of our very being, not just an organ or bone.
We don’t want our sense of being to be summarized by a reaction of “is that it?”. Reaching that point produces too much nihilism for people to accept, and it turns all questions of the nature of consciousness back into a religious realm as a form of survival mechanism.
But consciousness as we experience it today is most likely a byproduct of the trait of adaptability taken to its logical conclusion. Taking a holistic look at the entire field, it doesn’t point to us being more “magical”; rather, it points to consciousness as something more “basic”. However, that does not mean consciousness has less value, or that we lose something valuable. It only means that the basic mechanism is simpler than our religious concepts say or want it to be — and it doesn’t erase the fact that the ramifications and consequences of this adaptability mechanism are almost exponential in their potential and further emergence.
We may view our cognition as miraculous, close to a religious experience, but that’s because we’re “analyzing the tool used to analyze the tool.” It becomes a feedback loop that constantly tries to understand its own understanding, held back by the limits of such conceptualization: fully analyzing itself would require separation from itself, and even if that were possible, it would only run into the same problem again.
We can only start to understand consciousness by examining how it could possibly develop. What are the evolutionary steps and reasons? This points us towards how the mechanism began and how it works, and from that we can extrapolate answers to the questions of why we act in certain ways, are able to think about our own thinking, etc.
This does not reduce the complexity of our consciousness, though. If consciousness is driven by adaptability as the main function, what we’re seeing is a chaotic runoff effect in which layers upon layers of these mechanisms of adaptability intersect and have evolved into more complex emergent forms.
The result of all these drops of adaptation to survive reality is an ocean of intersecting functions that produce what is, to us, a very complex experience — one from which self-awareness and a sense of being emerge as a byproduct of all of this firing at the same time.
We will never be able to have a full introspective ability to think about our own mechanisms because doing so essentially leads to a form of mental breakdown. Our entire being is an automatic function, and our experience of freedom within this is an illusion brought on by the brain constantly adapting to new conditions with internalized models of reality to simulate the world around us for the sake of finding a path forward that’s adapted to those new conditions. And this automation has evolved into being able to adapt to larger ranges of being, able to simulate long-term consequences. To try and actively be aware of these steps becomes a mental wrestling match against what enables that awareness in the first place.
We do not think of all the gears, parts, software and hardware when we drive a car. The experience of driving is something other than just a conclusion of the mechanisms. You cannot say that the experience and emotion of driving a car are derived from the cogs, oil and explosive chemical reactions within the engine. Yet it’s all of that which enables the car to be driven and in turn spawn the experience of driving.
Our experience of consciousness is a byproduct of the mechanism, but that doesn’t negate the quality of the experience, the importance of that experience and the experience itself becoming a thing of its own.
And one emergent system usually produces further emergent systems at scale. The social realm, with all of us sharing our individual experiences, becomes a much larger system in itself. And we’ve yet to ask the question: what emergent thing comes out of humanity’s individual consciousnesses acting together as a chaotic system?
To fully understand the mind and consciousness, we need to role-play as a separate entity, examining humans from afar, because we can’t really use this emergent illusion of our experience as the measuring tape for what consciousness is. It’s like using a chair as a tool to verify what a chair is, instead of looking at the fundamental reasons why a chair was created and examining the parts that define it as a chair. Since we are as physical as the rest of the universe, we can study the formation of our being, and that will probably yield more answers than trying to use the conclusion to figure out the conclusion.
Christoffer - there’s a lot here I find myself agreeing with at the level of orientation — particularly the point that demystifying consciousness doesn’t automatically deflate its value, and that emergent complexity can be genuinely surprising even when the underlying mechanism is relatively simple. That’s an important corrective to a certain kind of knee-jerk resistance in these discussions.
But I want to press on a tension I notice running through your post. On one hand, you want to preserve the reality and value of conscious experience — “the experience itself becoming a thing of its own.” On the other hand, you describe freedom as an “illusion” and our entire being as “automatic function.” Those two commitments sit uneasily together. If the experience is genuinely something, with its own character and consequences, in what sense is the freedom it seems to involve merely illusory? You’ve moved from “the mechanism is simpler than we thought” to “the experience is a byproduct” — but byproduct of what, exactly, and what follows from that?
The car analogy is vivid, but I think it actually highlights the problem rather than solving it. You say the experience of driving isn’t derived from the cogs and explosive reactions — and that’s right. But that means the experience isn’t simply reducible to or eliminable in favor of the mechanism. So when you then say consciousness is “a byproduct of the mechanism,” what work is “byproduct” doing? It sounds like you want emergence without wanting to commit to what emergence actually requires: that the higher-level organization has its own real properties that can’t just be read off from the lower level.
There’s also something worth examining in “adaptability” as the core explanatory concept. Adaptation explains why certain capacities were selected for, but it doesn’t straightforwardly explain the normative dimension of cognition — the fact that we don’t just respond to conditions, but assess them, get them right or wrong, hold each other to standards. An organism that tracks its environment is doing something, but an agent who makes a judgment and is answerable to whether it’s true seems to be doing something additional. Does “adaptability taken to its logical conclusion” actually get you there, or does it describe the platform on which that capacity runs without explaining the capacity itself?
I’m curious whether you think the “illusion of freedom” framing is essential to your view, or whether you’d be comfortable saying something weaker — that freedom as folk psychology conceives it is misleading, but that something real and important is picked out by our practices of reasoning, committing, and holding one another accountable. That seems more consistent with what you want to preserve about the value of experience.