Philosophy of Mind and of Consciousness

A weekend.

But I still don’t agree that the distinction you’re seeking is observable.

I note also that your definition of ‘action’ has gradually crept into line with your definition of thinking. You’re saying that you need to act like you’re thinking, in order to prove you’re thinking.

With the cat and the washing machine you say the cat is thinking but the washing machine is not. Suppose I create a robot cat that creeps into my room and meows at me. It runs a program designed to make it look like it is engaging in social practice, or whatever observations you care to name. So it looks like it thinks, but doesn’t. How do you observe the difference between the robot kitty and the real one, assuming that I can tinker with my program to imitate any behaviour you care to name? What is it that you can observe that the robot kitty can’t do, no matter how well programmed?

Does learning to flick between imagining tennis and imagining rowing in order to answer yes/no questions count as a new skill? That would seem to involve a reorganisation of thinking.

No need, let us use David Chalmers’ words.

. . .even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?

— David Chalmers, Facing up to the problem of consciousness

As I understand it, your objection to this was that the hard question presupposes that consciousness arises from the physical. I’m not seeing that in his actual words.

Point missed. Formal logic was just an example of a branch of philosophy which clearly doesn’t employ falsifiability, not because it ‘doesn’t need it’ but because it would entirely miss the purpose of employing formal logic.

The point was that philosophy doesn’t have falsifiability as a precondition. Only empiricism has that, and then only arguably. (I’m happy with the idea that empiricism requires falsifiability, but it’s a position to be argued, not just assumed.)

I don’t agree that they are. That’s why the hard question is called hard. Because it can’t be resolved empirically. That’s why we keep wrangling about your definitions of action and thinking. Because you seem to hold out your definition of action as observable, but when it’s challenged you have to add non-observable conditions to it to keep it intact.

For what you’re proposing as consciousness to be observable, and empirical, it needs to be observably different to something without consciousness. When I give you a meteor doing the same physical work to the same physical outcome, you say the meteor isn’t thinking. When I give you a washing machine compared to a cat, again you say the cat is thinking. And you’ve done a fair job of incorporating these cases into the definition of thinking that you’re using. Fair enough.

But thinking is what the observation is trying to demonstrate. We observe X, and we conclude that they are thinking. So unless you think we can observe thinking directly, we can’t call it empirical. For it to be empirical, there must be an observable difference between something that thinks, and something that merely imitates a thinking thing in every observable detail, but does not think.

Ultimately, either we must be able to point to some observation that could be made of a thinking thing, and could not, as a matter of logical possibility, be made of a thing that does not think; or thinking isn’t observable, and thus not empirical.

@Togo

The robot cat is the right question.

You build a robot that imitates every observable behaviour of a thinking being — meows, fails, sits on your face, reorganises its approach when the first strategy doesn’t work. You can tinker with the program to match any behaviour I name. So: how do I tell the robot cat from the real one?

Here is the honest answer: if the imitation is perfect at the level of observable behaviour, then on my criterion alone, I cannot. And I think this is important to say plainly rather than dodge.

But I want to push back on what “perfect imitation” means in practice. Your robot cat runs a program. When it fails and tries a different strategy, that strategy was already in the program — written by you in advance. The real cat, confronted with a genuinely novel obstacle, does something that was not pre-specified: it improvises. Not by selecting from a pre-written library of behaviours, but by reorganising its activity in response to the specific resistance of this object, right now.

The difference is not in the snapshot. It is in the generativity. A program can handle every case its designer anticipated. A thinking being can handle cases nobody anticipated — including its own designer. That is what the Zagorsk children did: they developed capacities that their teachers did not have and could not have predicted.

Now — you will say: I can write a program that generates novel responses too. Machine learning, genetic algorithms, reinforcement learning. True. And this is where the question stops being about cats and robots and becomes about whether there is a hard line between genuine thinking and sophisticated information processing — or whether it is a continuum. I do not think there is a hard line. I think there are degrees of generativity, and the question of whether something “really thinks” is not a binary but a gradient.

This is not a retreat. It is the consequence of taking Ilyenkov seriously. If thinking is a form of activity — not a mysterious inner substance — then the question is not “does it have the right stuff inside?” but “does it do the right things in the right way?” And “the right way” means: in response to genuine resistance from a genuinely independent object, with the capacity to reorganise, fail, and develop.

On definition creep. You say my definition of “action” has gradually moved toward my definition of “thinking.” That I am saying you need to act like you’re thinking in order to prove you’re thinking. Fair charge. Let me try to separate them.

Action: any directed engagement with an object that can succeed or fail. A child grasping a spoon. An engineer testing a bridge. A cat trying to get fed.

Thinking: a specific kind of action — one that involves reorganising the scheme of activity in response to the object’s resistance, using culturally constituted means.

Not all action is thinking. A reflex is action but not thinking. A tropism is action but not thinking. Thinking is a subset of action characterised by flexibility, self-correction, and cultural mediation. The criterion is not circular — it identifies thinking as a specific mode within the broader category of action.

On learning in coma. You ask whether switching between imagining tennis and imagining rowing to answer questions counts as a new skill. It is a fair point. The patient does something she did not do before the communication protocol was introduced. But notice: the protocol was introduced by the researchers — from outside. The patient did not invent it. She adapted an existing capacity (imagining activities) to a new use (communication) — but only when prompted by someone else’s design. This is closer to following an instruction than to genuine learning. A stronger test would be: can the patient, on her own, develop a new way of communicating that the researchers did not anticipate? If she can — I would take that as evidence of ongoing thinking. If she cannot — she is operating within the residue of what practice built, not generating anything new.

On Chalmers. Thank you for the direct quote: “Why is the performance of these functions accompanied by experience?” You are right that my paraphrase was not quite accurate. The original question is not “why does physical stuff give rise to experience” but “why is functional performance accompanied by experience.” The difference matters: Chalmers is not assuming a gap between physical and mental. He is assuming a gap between function and experience.

Here is my response to the original formulation: the question presupposes that you could have all the functions without the experience — that there is a possible world where everything works the same but nobody is home. This is the zombie thought experiment. And I think it is incoherent. If the functions include genuine self-correction, genuine sensitivity to the object’s resistance, genuine reorganisation in response to failure — then to say all of that happens “without experience” is to empty “experience” of any connection to what the organism does. Experience, on the account I am defending, is not an accompaniment to function. It is what function looks like from the inside of the system that performs it.

But I want to be honest: I cannot prove this. The zombie argument is designed to be irrefutable from outside. You cannot demonstrate from a third-person perspective that zombies are impossible. All I can say is that the concept of a zombie — something that does everything a thinking being does, in exactly the same way, but has no experience — makes no sense to me. And that Ilyenkov would say: asking “why is function accompanied by experience?” is like asking “why does running involve moving your legs?” — because that is what running IS.

On empirical observability. You say: unless we can point to an observation that could be made of a thinking thing and could not, as a matter of logical possibility, be made of a non-thinking thing, then thinking is not observable.

But this criterion demands observation in a single snapshot. Thinking is not a snapshot — it is a process that transforms the subject over time. A person thinks about a problem today, stores the result, and five years later acts differently because of it. The person after thinking is not the same as the person before. That change is observable.

Your robot cat: if it encounters the same failure a hundred times and responds identically each time — that is observable evidence of non-thinking. If it changes its approach in ways its designer did not anticipate — that is observable evidence of something happening inside. The observation is not “does it have experience right now” but “is it being transformed by its encounters over time?”

Thinking is empirically observable — not as a state, but as a trajectory. Not in what the subject does at a moment, but in how the subject differs from what it was before. The question is not whether you can catch thinking in a single frame. It is whether the subject is becoming something it was not.

Now let me turn your questions back. You work with AI professionally. You know the difference between a lookup table and a learning system. You know that some programs change when they encounter the world and others do not. Where do you draw the line? Not in principle — in practice. When you sit across from a system that learns from its mistakes, develops responses its designers did not predict, and transforms its own behaviour over time — at what point do you stop calling it imitation?
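The contrast between a lookup table and a learning system can be made concrete. Here is a minimal sketch (all class and method names are hypothetical, invented for illustration, not from any real library): one agent answers the same stimulus identically forever, while the other revises its policy whenever an encounter fails, so its observable trajectory over repeated encounters differs even if any single snapshot looks the same.

```python
class LookupCat:
    """Fixed policy: responds to a stimulus the same way, forever."""
    def __init__(self, table):
        self.table = table  # stimulus -> fixed response

    def respond(self, stimulus):
        return self.table.get(stimulus, "ignore")


class LearningCat:
    """Revises its per-stimulus policy after each failed encounter."""
    def __init__(self, actions):
        self.actions = list(actions)
        self.policy = {}   # stimulus -> currently preferred action
        self.failed = {}   # stimulus -> set of actions that have failed

    def respond(self, stimulus):
        return self.policy.setdefault(stimulus, self.actions[0])

    def feedback(self, stimulus, success):
        if success:
            return
        # The encounter failed: reorganise rather than repeat the same move.
        self.failed.setdefault(stimulus, set()).add(self.policy[stimulus])
        untried = [a for a in self.actions if a not in self.failed[stimulus]]
        if untried:
            self.policy[stimulus] = untried[0]


# Both cats meet the same locked door three times; the door never opens.
fixed = LookupCat({"locked door": "paw"})
learner = LearningCat(["paw", "meow", "wait by the window"])

history_fixed, history_learner = [], []
for _ in range(3):
    history_fixed.append(fixed.respond("locked door"))
    history_learner.append(learner.respond("locked door"))
    learner.feedback("locked door", success=False)

print(history_fixed)    # identical response every time
print(history_learner)  # the approach changes across encounters
```

The point of the sketch is the diagnostic in the post above: the evidence is not in any one frame but in whether the history of responses shows reorganisation over time.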

You said your falsification condition is: “a better theory.” Better by what measure? If not by empirical adequacy — then what? If not by the ability to explain cases like Zagorsk, like the coma patients, like the robot cat — then by what standard do you judge a theory of consciousness? I have named my conditions. You have named yours. But “a better theory” without a criterion for “better” is a blank check — the same move I called out in Mww.

@Wayfarer

On Hans Jonas. You ask what I would say to the argument that the appearance of organic life is a manifestation of being — and that this amounts to a basic ontological distinction between living and non-living.

I would say: Jonas is right about the distinction, and Ilyenkov would agree. Life is not just more complex chemistry. It is a qualitatively new form of organisation — self-maintaining, self-reproducing, capable of failure and death. Jonas’s concept of “needful freedom” — the organism must act to continue existing — is powerful and genuine.

But Jonas stops at biology. His ascending scale — from metabolism through perception through imagination to thought — treats each level as a further expression of the same organismic freedom. What he does not explain is the qualitative leap from animal cognition to human thinking. A chimpanzee uses tools. A human builds a tradition of tool-use that outlasts any individual. The difference is not degree — it is the introduction of a new ontological domain: collectively maintained ideal forms (language, science, law, art) that are not reducible to any individual organism’s biology.

Jonas gets the organism right. He does not get the human right. For that you need what he does not have: a theory of social practice, of the “inorganic body,” of the ideal as a real feature of social existence. That is what Ilyenkov provides.

So I accept the ontological distinction between living and non-living. I accept that rational agency is a further distinction. But I locate the source of that further distinction not in a higher grade of organismic freedom, but in a qualitatively new kind of activity: collective, tool-mediated, culturally transmitted transformation of nature. That is where thinking lives — not in the organism alone, but in the organism embedded in a historically constituted world of practice.


Sure! But Jonas’ book is ‘a phenomenology of biology.’ It’s not anthropology. And the distinctions you and I are both in agreement on, actually go back to Aristotle.

But as for ‘activity’ – what enables that? Certainly it’s socially mediated – no man is an island — but it’s also dependent on the fantastically elaborated forebrain of h. sapiens, which opens up horizons of being that are not perceptible to other species.

I agree that consciousness is causal. There are any number of things that would not exist or happen if not for consciousness. Things that do not violate the laws of physics (nothing can), but are not explained by them, either. Things that would never have come about without having been imagined, and intentionally brought into being, using methods that were needed, because the laws of physics alone would not have been enough.

I would think the soul, according to how I think most people mean it, is already conscious.

So, would you accept as a possibility then, that a nonphysical thing such as what is commonly called the soul, could be the cause of existence of the physical thing, which is called the living body? This wouldn’t violate any laws of physics because it refers to areas which are unknown to physics, nonphysical causes.

The issue is that by designating it “nonphysical” we say that it is activity which is outside the domain of “physical”, and assume that physics is not the field of study through which it could be understood. So for example, quantum mechanics demonstrates that there is much which cannot be known by physics, beginning with what is expressed by the uncertainty principle. But this doesn’t prevent us from assuming that some of these things could be understood through a study which accepts the reality of the nonphysical.

The question is, do you recognize the limitations of the empirical sciences like physics, and that there are some things, which are beyond the capacity of sciences based in sense observations to understand? And, that a discipline which starts with the assumption of real nonphysical actuality is better suited to understanding these things.

As a panpsychist, I’m in no position to rule out the idea.

Certainly.

Great way to put it!

Thank you.

@Wayfarer

You ask: what enables activity? And you answer: the fantastically elaborated forebrain of h. sapiens.

Yes — but consider what Ilyenkov says about this. The human brain is not the cause of thinking. It is the organ through which thinking realizes itself. And here is the crucial point: the organ does not precede the function. The function creates the organ.

A human infant is not biologically destined for upright walking. Left to itself, it would never stand. Walking on two legs is imposed from outside — by adults, by culture — specifically to free the hands for manipulating objects. The child’s body is reshaped by a culturally demanded function. The same applies to the brain: the cortical structures that realize specifically human cognitive capacities are not pre-built. They are formed by the social functions the person must perform. The neural patterns take whatever shape is required by the conditions of external activity — by the concrete ensemble of relations into which the individual is placed from birth.

So the forebrain of h. sapiens is not the explanation. It is what needs to be explained. Why did this brain develop? Because the species entered a mode of life — collective, tool-mediated, culturally transmitted — that demanded it. The social practice came first. The brain followed.

This is not speculative. Feral children — raised without human social contact — have the same fantastically elaborated forebrain. They do not think. The hardware is there. Without the practice, it produces nothing.

The brain is a necessary condition for human thinking, not a sufficient one. The sufficient condition is the ensemble of social relations — the “inorganic body” — into which the brain is inserted. Remove the ensemble, keep the brain, and you get a biological organism without a personality. Ilyenkov’s formulation: “In the functions of the brain there manifests itself not the brain, but personality.”

I would think the reverse. We could not have entered the mode of life we have without a brain capable of it. True, each of our brains grows as it does because of the input it receives. Without the social interaction, the physical brain will not grow the same physical way. But the brain must be capable of growing that way, or it will not, regardless of any interaction or input. Raise a chimp as you would a human child, and it will not learn what human children learn, or do what children do.

Whereas I see it as something much more profound than society and culture: what drives all of this development is very close to Schopenhauer’s will, which manifests as intentional behaviour in the most rudimentary of organisms.


You need to go one step further, Evald. You describe “culturally demanded functions”, “social functions the person must perform”. But this is not a logical necessity, and therefore cannot serve as causation. You need to allow that the person wants to conform to the social demands, and that the wills of the individuals are the true cause.

The truth of this can be demonstrated by events such as “The Inquisition”. If the demands of society are contrary to the wills of the individuals, the individuals will revolt. Therefore “culturally demanded functions” is insufficient to be held as causal.

It may be true that the social practice came first, but this has no explanatory power unless you can describe what causes social practice. And for this we must turn to the will of the individual. Individuals demanded the collective mode of life, and this is what causes the existence of social practice.

As necessary, the brain is causal. As merely sufficient, social relations are not causal. However, the will to partake in social relations, in what is demanded by society, is necessary, therefore it is causal.

So a proper representation of the combined causes is the physical brain, and the will to participate. The social relations are incidental, as the specific types of relations, and the particular relations themselves, are the result of freely willed choices. That is why we are held responsible for inappropriate acts. Therefore social relations are contingent on the wills of the individuals.

17 posts were split to a new topic: Spinoza, Ilyenkov, and the Ground of Intelligibility

Note to participants: I’ve moved the posts from the last few days to a separate topic:
