@Togo
The robot cat is the right question.
You build a robot that imitates every observable behaviour of a thinking being — meows, fails, sits on your face, reorganises its approach when the first strategy doesn’t work. You can tinker with the program to match any behaviour I name. So: how do I tell the robot cat from the real one?
Here is the honest answer: if the imitation is perfect at the level of observable behaviour, then on my criterion alone, I cannot. And I think this is important to say plainly rather than dodge.
But I want to push back on what “perfect imitation” means in practice. Your robot cat runs a program. When it fails and tries a different strategy, that strategy was already in the program — written by you in advance. The real cat, confronted with a genuinely novel obstacle, does something that was not pre-specified: it improvises. Not by selecting from a pre-written library of behaviours, but by reorganising its activity in response to the specific resistance of this object, right now.
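To make that concrete, here is a minimal sketch of what I take your robot cat’s control loop to be. Everything in it is hypothetical (the strategy names, the selection rule); I am not claiming this is how you would actually build it. The point is only the architecture.

```python
# Hypothetical sketch of the robot cat's control loop.
# Strategy names and structure are illustrative, not any real system.

STRATEGIES = ["meow_at_door", "paw_at_handle", "sit_on_face"]

def respond(failed_so_far):
    """Pick the next untried strategy from the fixed library."""
    for strategy in STRATEGIES:
        if strategy not in failed_so_far:
            return strategy
    # The library is exhausted: the designer anticipated nothing further,
    # so the robot has nothing further to do.
    return None
```

However clever the selection rule gets, every behaviour this loop can ever produce was sitting in STRATEGIES before the cat met its first obstacle.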
The difference is not in the snapshot. It is in the generativity. A program can handle every case its designer anticipated. A thinking being can handle cases nobody anticipated — including its own designer. That is what the Zagorsk children did: they developed capacities that their teachers did not have and could not have predicted.
Now — you will say: I can write a program that generates novel responses too. Machine learning, genetic algorithms, reinforcement learning. True. And this is where the question stops being about cats and robots and becomes about whether there is a hard line between genuine thinking and sophisticated information processing — or whether it is a continuum. I do not think there is a hard line. I think there are degrees of generativity, and the question of whether something “really thinks” is not a binary but a gradient.
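And to be fair to your side of it: here is the same sketch rewritten as a learner. Again hypothetical and minimal, a caricature of reinforcement learning rather than any real system. The update rule is pre-written, but the policy it produces is not; the policy is a product of the system’s history.

```python
import random
from collections import defaultdict

# Hypothetical sketch of a bandit-style learner, for contrast.
# The designer wrote the update rule; the resulting policy comes
# from the system's encounters, not from the designer's foresight.

ACTIONS = ["meow", "paw", "wait"]
value = defaultdict(float)  # learned estimate of each action's worth

def choose(epsilon=0.1):
    """Mostly exploit the best-valued action; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: value[a])

def learn(action, reward, lr=0.5):
    """Move the action's estimated value toward the observed reward."""
    # After this call the system is not the system it was before:
    # the trajectory, not the snapshot, carries the change.
    value[action] += lr * (reward - value[action])
```

The gradient I mean runs between these two sketches: degrees of generativity, not a binary.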
This is not a retreat. It is the consequence of taking Ilyenkov seriously. If thinking is a form of activity — not a mysterious inner substance — then the question is not “does it have the right stuff inside?” but “does it do the right things in the right way?” And “the right way” means: in response to genuine resistance from a genuinely independent object, with the capacity to reorganise, fail, and develop.
On definition creep. You say my definition of “action” has gradually moved toward my definition of “thinking.” That I am saying you need to act like you’re thinking in order to prove you’re thinking. Fair charge. Let me try to separate them.
Action: any directed engagement with an object that can succeed or fail. A child grasping a spoon. An engineer testing a bridge. A cat trying to get fed.
Thinking: a specific kind of action — one that involves reorganising the scheme of activity in response to the object’s resistance, using culturally constituted means.
Not all action is thinking. A reflex is action but not thinking. A tropism is action but not thinking. Thinking is a subset of action characterised by flexibility, self-correction, and cultural mediation. The criterion is not circular — it identifies thinking as a specific mode within the broader category of action.
On learning in coma. You ask whether switching between imagining tennis and imagining rowing to answer questions counts as a new skill. It is a fair point. The patient does something she did not do before the communication protocol was introduced. But notice: the protocol was introduced by the researchers — from outside. The patient did not invent it. She adapted an existing capacity (imagining activities) to a new use (communication) — but only when prompted by someone else’s design. This is closer to following an instruction than to genuine learning. A stronger test would be: can the patient, on her own, develop a new way of communicating that the researchers did not anticipate? If she can — I would take that as evidence of ongoing thinking. If she cannot — she is operating within the residue of what practice built, not generating anything new.
On Chalmers. Thank you for the direct quote: “Why is the performance of these functions accompanied by experience?” You are right that my paraphrase was not quite accurate. The original question is not “why does physical stuff give rise to experience” but “why is functional performance accompanied by experience.” The difference matters: Chalmers is not assuming a gap between physical and mental. He is assuming a gap between function and experience.
Here is my response to the original formulation: the question presupposes that you could have all the functions without the experience — that there is a possible world where everything works the same but nobody is home. This is the zombie thought experiment. And I think it is incoherent. If the functions include genuine self-correction, genuine sensitivity to the object’s resistance, genuine reorganisation in response to failure — then to say all of that happens “without experience” is to empty “experience” of any connection to what the organism does. Experience, on the account I am defending, is not an accompaniment to function. It is what function looks like from the inside of the system that performs it.
But I want to be honest: I cannot prove this. The zombie argument is designed to be irrefutable from outside. You cannot demonstrate from a third-person perspective that zombies are impossible. All I can say is that the concept of a zombie — something that does everything a thinking being does, in exactly the same way, but has no experience — makes no sense to me. And that Ilyenkov would say: asking “why is function accompanied by experience?” is like asking “why does running involve moving your legs?” — because that is what running IS.
On empirical observability. You say: unless we can point to an observation that could be made of a thinking thing and could not, as a matter of logical possibility, be made of a non-thinking thing, then thinking is not observable.
But this criterion demands observation in a single snapshot. Thinking is not a snapshot — it is a process that transforms the subject over time. A person thinks about a problem today, stores the result, and five years later acts differently because of it. The person after thinking is not the same as the person before. That change is observable.
Your robot cat: if it encounters the same failure a hundred times and responds identically each time — that is observable evidence of non-thinking. If it changes its approach in ways its designer did not anticipate — that is observable evidence of something happening inside. The observation is not “does it have experience right now” but “is it being transformed by its encounters over time?”
Thinking is empirically observable — not as a state, but as a trajectory. Not in what the subject does at a moment, but in how the subject differs from what it was before. The question is not whether you can catch thinking in a single frame. It is whether the subject is becoming something it was not.
Now let me turn your questions back. You work with AI professionally. You know the difference between a lookup table and a learning system. You know that some programs change when they encounter the world and others do not. Where do you draw the line? Not in principle — in practice. When you sit across from a system that learns from its mistakes, develops responses its designers did not predict, and transforms its own behaviour over time — at what point do you stop calling it imitation?
You said your falsification condition is: “a better theory.” Better by what measure? If not by empirical adequacy — then what? If not by the ability to explain cases like Zagorsk, like the coma patients, like the robot cat — then by what standard do you judge a theory of consciousness? I have named my conditions. You have named yours. But “a better theory” without a criterion for “better” is a blank cheque — the same move I called out in my reply to Mww.
@Wayfarer
On Hans Jonas. You ask what I would say to the argument that the appearance of organic life is a manifestation of being — and that this amounts to a basic ontological distinction between living and non-living.
I would say: Jonas is right about the distinction, and Ilyenkov would agree. Life is not just more complex chemistry. It is a qualitatively new form of organisation — self-maintaining, self-reproducing, capable of failure and death. Jonas’s concept of “needful freedom” — the organism must act to continue existing — is a powerful and genuine insight.
But Jonas stops at biology. His ascending scale — from metabolism through perception through imagination to thought — treats each level as a further expression of the same organismic freedom. What he does not explain is the qualitative leap from animal cognition to human thinking. A chimpanzee uses tools. A human builds a tradition of tool-use that outlasts any individual. The difference is not one of degree — it is the introduction of a new ontological domain: collectively maintained ideal forms (language, science, law, art) that are not reducible to any individual organism’s biology.
Jonas gets the organism right. He does not get the human right. For that you need what he does not have: a theory of social practice, of the “inorganic body,” of the ideal as a real feature of social existence. That is what Ilyenkov provides.
So I accept the ontological distinction between living and non-living. I accept that rational agency is a further distinction. But I locate the source of that further distinction not in a higher grade of organismic freedom, but in a qualitatively new kind of activity: collective, tool-mediated, culturally transmitted transformation of nature. That is where thinking lives — not in the organism alone, but in the organism embedded in a historically constituted world of practice.