The Harder Problem of Quiddity

I had mentioned this problem in another thread, but I figured I’d give the problem its own framing with some additional bullet points:

There is a “harder problem of quiddity” that emerges from the assertion that human reason is wholly discursive and calculative. The “harder problem” isn’t about physicalism per se. It’s about the denial of any receptive noetic faculty (a view ubiquitous in modern thought, including in much modern idealism). Such a view leads to either the denial of noetic content (quiddity) or else the problematic claim that noetic content emerges from wholly discursive information processing and synthesis and is reducible to these (and so is implicitly epiphenomenal).

The problem is: how do we get the act of understanding and noetic content from sheer rule-following or computation?

The problem has two horns:

A. If noetic content is denied, the denial itself denies its own intelligibility.

B. If the claim is that noetic understanding “emerges” from discursive rule-following, it has to explain how. Prima facie, you cannot get content from computation or rule-following alone. You just get a Chinese Room. This “emergence” cannot be strong emergence either, else the claim that reason is wholly discursive would be false. An appeal to “strong emergence” simply defaults on the claim.
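To make the Chinese Room point concrete, here is a minimal sketch (the rulebook, symbols, and replies are all invented for illustration): a program that follows its rules flawlessly while grasping nothing about what any symbol means.

```python
# Toy "Chinese Room": pure symbol manipulation via a lookup table.
# The rulebook below is invented for illustration; the program follows
# it perfectly while "understanding" nothing about the symbols it shuffles.

RULEBOOK = {
    "你好": "你好！",        # handed this squiggle, return that squiggle
    "你好吗？": "我很好。",
}

def room(symbols: str) -> str:
    """Apply the rulebook mechanically; unknown input gets a stock reply."""
    return RULEBOOK.get(symbols, "请再说一遍。")

print(room("你好吗？"))  # → 我很好。
```

The program is behaviorally adequate for its tiny domain, which is exactly the point: nothing about rule-conformity, at any scale, looks like it adds up to the act of understanding.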

Many thinkers try to have their cake and eat it too here. They will describe reason as wholly discursive, deny any receptive noetic faculty (often as “woo”), and then go right along invoking noetic content whenever they want to explain freedom, reasoning, normativity, etc. This is inconsistent, or at best leaves a huge leap unexplained.

The focus on qualia distracts from this “harder problem.” Qualia are a deflated version of quiddity, stripped of its metaphysical import and noetic intelligibility.

I’d argue that the focus on qualia over similarly problematic issues (intentionality, normativity, etc.) is the result of upstream, often unstated metaphysical assumptions, most notably, the presupposition that reason is wholly discursive. With common formulations of “the Hard Problem,” the “unresolved remnant” that must be explained is often itself already defined so as to be incredibly thin. All we are left with, on many accounts, is having to explain “what it is like to taste strawberries,” etc. To my mind, this rather massively undersells the problem. Explaining “the taste of coffee” just gets us to a Chinese Room that tastes.

Now, folks might be skeptical of quiddity and values, seeing them as “spooky,” but such skepticism presupposes semantic/noetic content in order to even be stated. Concerns over “woo” are themselves only intelligible if something is already understood. That is, the challenge to the substantial reality of noetic understanding is parasitic on that very understanding (a transcendental argument).

The same issue crops up with value. Presumably, one must already understand that truth is superior to falsity in order for science to have any proper ends at all. You need the noetic apprehension of truth as genuinely good, as worth pursuing for its own sake, to even get the epistemic enterprise off the ground.

In public comments anywhere philosophy is discussed, or if you prompt any major LLM on this topic, you’re sure to see claims that any receptive noetic faculty, or the apprehension of simples (the “first act of the mind” in traditional logic), is “mystical,” “magical,” or “spooky.” I guess my point here is that having a qualityless, intelligibilityless shadow realm is not, prima facie, any less spooky, nor is positing that the act of understanding is some sort of inert “ghost in the machine” unrelated to behavior.


Appendix:

For more detail, here are some of the key problems (there are many):

Problems of Self-refutation

A. If understanding is just something like statistical pattern recognition over sensory inputs, then the claim that “understanding is just statistical pattern recognition” is itself just a statistical pattern—with no more claim to truth than any other output. The theory undermines its own epistemic authority. Disagreement reduces to different outputs resulting from different “training data.”

B. To assert the theory is either to understand it—in which case genuine understanding exists and the theory is false—or to merely produce a high-probability output—in which case no argument has been made and there is no reason to accept it.

C. Defenders of this view may object that understanding “emerges from” but is “reducible to” computation or discursive rule-following. Yet there exists no explanation of how something “follows rules so hard it starts understanding.” Dyadic mechanistic causality leaves no mechanism for such a transition. More to the point, either reason is wholly discursive and reducible to discursive synthesis and rule-following or it isn’t. If it isn’t, if this is “strong emergence,” then the claim that reason is wholly discursive is simply false, because a receptive noetic faculty “emerges” in a way that is irreducible, making it in some sense fundamental.

D. Any physical system of sufficient complexity can be described as completing any computation. Pancomputationalism renders computational theories of mind vacuous.

E. Eliminativism about noetic receptivity and propositional attitudes cannot be coherently stated, because stating it requires the very capacities it eliminates.

The Higher-Order Epiphenomenalism Problems

F. The wholly discursive, “rule-following” view of reason makes the act of understanding and all noetic content epiphenomenal. This is strictly more corrosive than eliminativism re qualia because it eliminates genuine agency and any normative force for reason, and denies the very content of the position itself.

G. If understanding is causally inert it cannot have been selected for by evolution. The evolutionary framework naturalism typically invokes to explain cognition is flatly incompatible with epiphenomenalism re noetic content.

H. This higher-order epiphenomenalism posits a unique one-way causal relation found nowhere else in nature, with no account of why it exists, why it is correlated with biological processes, or why it has the specific character it has. This is not parsimonious—it is ad hoc, and is itself a form of implicit dualism.

I. There is lots of evidence that the mind is structured like a “user interface” (although I reject the analogy). Its selectivity, its organization around salience and meaning, etc., make no sense on an epiphenomenalist account and require increasingly elaborate “just-so” stories that cannot appeal to understanding playing any functional role in reproduction and survival. All of this, to save a set of metaphysical doctrines (e.g., a wholly dyadic, mechanistic, temporal view of causality, the homogeneity of nature, etc.) that are not prima facie true nor even particularly plausible.

The Quiddity and Intentionality Problems

J. A compressed statistical model has the what without the why. Genuine understanding involves the why, which is why it generates genuine insight rather than mere interpolation.

K. The directedness of thought toward necessary, universal, and normative content cannot be reconstructed from a contingent, accidentally temporally ordered causal history that lacks these. The magic of “strong emergence” is needed to cross gaps at many stages here.

The Problem of Normativity and the Proper Ends of Reason

L. Computation and statistical inference are purely descriptive. They generate no genuine normative ought. They cannot get us to the understanding that truth is more choice-worthy than falsity and the proper end of inquiry.

M. The invocation of “pragmatism and instrumentality all the way down” doesn’t resolve this issue. It simply bottoms out in voluntarism.

N. The gap between probably and necessarily is not closed by adding more data. The necessity of a valid inference, the impossibility of a contradiction, etc. cannot be reconstructed from any frequency distribution however large.
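Point N can be illustrated with a toy example (the swan case is mine, added for illustration, not from the thread): a sample frequency can sit at exactly 1.0 and still carry no necessity, while a logical truth is checked over every possible case, not an observed sample.

```python
# A frequency estimate can reach 1.0 and still deliver no necessity.
# Observing only white swans drives the sample frequency to exactly 1.0,
# yet nothing in the arithmetic distinguishes "always, so far" from
# "necessarily so" -- the estimate is one black swan away from falsity.

def observed_frequency(observations: list[bool]) -> float:
    """Fraction of observations that came out True."""
    return sum(observations) / len(observations)

print(observed_frequency([True] * 10_000))  # 1.0, from finite data

# Contrast a logical necessity: verified over ALL valuations, not a sample.
tautology = all(p or not p for p in (True, False))
print(tautology)  # True, and could not have been otherwise
```

However large the sample, the first print reports a contingent frequency; the second reports exhaustion of the possibility space, which is a different kind of claim entirely.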

The Historical Debunking Problem

O. Denying that intelligibility exists in being per se, or that the mind can receive this content, is not common across philosophical traditions. Actually, it’s basically wholly absent outside Islamic and Christian traditions influenced by theological voluntarism. Hence, claims that any view other than this one is “spooky” or “mystical” need to be argued for. As is, such charges often seem like little more than appeals to indoctrinated bias and aesthetic taste.


I suppose I should add that these problems apply, in a different register, to attempts to reduce reason to language (also discursive).

There you have an additional issue. If an appeal is made to what is “useful” we have to ask, “in virtue of what can we say that a way of speaking is truly useful, rather than merely apparently so?” On wholly descriptive accounts, a way of speaking is useful just in case it persists. But this reduces to the vacuous declaration that “we speak (and reason) the way we do because this is how we have persisted in speaking.”

Likewise, any appeal to doing therapy in order to simply dissolve the issue of noetic content seems to presuppose some sort of understanding, else all therapy would amount to is firing out one set of language outputs to counteract other sorts of language outputs.

Strewth, what a post.

So you would like to have explained how we can get genuine meaning or understanding from just following rules or computation?

Is that your issue?

Of all I’ve read of yours, this one actually resonates with me. Most of it.

Good stuff.

Precisely. I’m being vague because it’s a large, broad issue. We could talk just computational theory of mind (CTM), just Bayesian brains, just Humean constant conjunction, Kant’s axiomatic assumption that reason is wholly discursive (which Fichte identifies as problematic in this way), etc.

Let’s just look at Bayesian brains. How does statistical processing of inputs somehow become understanding? It doesn’t seem like it can. CTM is particularly sketchy here because its proponents will say you can get everything a mind does out of steam pipes just so long as the computation is instantiated, whereas someone like Kant is different because the synthesis isn’t quite so formal (although still wholly discursive).
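For concreteness, here is a minimal Bayesian update, the arithmetic core that “Bayesian brain” accounts generalize (the rain/pavement numbers are invented for illustration). The open question pressed above is how iterating this multiplication-and-normalization could ever amount to understanding rather than recalibrated numbers.

```python
# Minimal Bayes update: posterior ∝ likelihood × prior, then normalize.
# This is the whole formal core of "Bayesian brain" stories; nothing in
# it is anything but arithmetic over weights attached to hypothesis labels.

def bayes_update(prior: dict[str, float], likelihood: dict[str, float]) -> dict[str, float]:
    """Return the normalized posterior over the hypotheses in `prior`."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())  # normalizing constant
    return {h: p / z for h, p in unnorm.items()}

prior = {"rain": 0.3, "no_rain": 0.7}
likelihood = {"rain": 0.9, "no_rain": 0.2}   # P(wet pavement | hypothesis)
posterior = bayes_update(prior, likelihood)
print(posterior)  # rain ≈ 0.66, no_rain ≈ 0.34
```

The strings “rain” and “no_rain” do no semantic work here; they are opaque keys. Swap them for “h1” and “h2” and the computation is identical, which is the worry in miniature.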


I won’t disagree. The computational theory of the mind does appear inadequate. But I suspect it’s also somewhat outdated.

The Wittgenstein/Kripke discussion of rule following, amongst other things, also leads us to rejecting the idea of rule-following as merely syntactic. Following a rule is in the end an activity rather than a set of symbols. Hence one way to deal with the issue is enactivism - treating rule following as an activity.

This would be to reject the very notion of an essence apart from our practices.

I think that’s a good plan. There is only a problem if one presumes that there are quiddities apart from our practices.

I doubt we will find agreement.

But good OP. Nice.

How does that resolve these issues?

If by ‘content’ you mean understanding… the moral of the Chinese Room might be merely that you get it only by learning to indulge a cooperative make-believe about reference. Searle’s complaint about AI was its lack of a proper semantics: its reliance on statistical pattern recognition that is really just syntax.

Truly understanding talk is more like walking: negotiating and predicting and solving problems of physically getting around. Only, a level more (even more) subtle, in involving inference about pretended (non-physical) semantic relations, between hot air and objects.

A dog understands whereabouts in the world a ball has landed, literally. And how to retrieve it. A human child understands whereabouts in the world a word has landed, figuratively. And how to retrieve it or spread it around.

That seems a hard enough skill that it should evade robotics for some time yet.

If it were rule following all the way down, you would need additional “meta” sets of rules that instructed as to how to follow any given set of rules.

So, the conclusion would seem to be that intuition, synthesis, plays a crucial part and that analysis and rule following are secondary and derivative―and completely unable to get off the ground on their own.

Indeed, and more importantly, knowing where it is worth walking to and why.

That would be my take. I guess an added layer here is that you also have the problem of all the skeptical arguments that have been built off the assumption that reason is wholly discursive. A lot of these are arguments from underdetermination. Pattern recognition over sense observations is inadequate to uniquely specify when a particular rule is being followed, when a word refers to a particular entity, when a particular scientific interpretation of experimental data is accurate, etc. This also includes Humean arguments against causality, many forms of the “brain in a vat” argument, Boltzmann Brain arguments, Gettier problems (in subtle ways), and really all sorts of arguments in philosophy of language and political philosophy (e.g., Rawls, Habermas).

Since the premise is extremely widely held (indeed, so widely that it is not even stated in most of the examples listed), it’s not surprising that it affects a lot of philosophy. (I should note that in many cases some of their substance might hold up without the assumption of wholly discursive reason, but with very different solutions available.)

An interesting side note here is my last point. This view is relatively recent (first in Islam, then in Christianity, and then secular Western thought by inheritance). It’s not a common position across traditions, and the way a receptive noetic faculty was removed from consideration in the West is a bit odd, because the main focal point was originally claims of authority over the interpretation of Scripture, not science. But by this point it is often considered to be definitive of any view that is properly “scientific.”

I think it should be acknowledged that ‘reason’ can be defined in various ways. Ethological studies have shown that many kinds of animals behave in ways that can plausibly be understood to display reasoning.

So, this is another trajectory from which to show that reason cannot be exclusively defined as discursive, since discourse requires symbolic language which we have very good reasons to think that animals do not possess.

I mentioned earlier intuition and synthesis. It certainly seems that animals are also able to intuitively synthesize an understanding of what is going on in their environments.

It also seems phenomenologically obvious that not all our reasoning is conscious, or to put it another way, explicit. We seem to be able to render some of our implicit thinking explicit, but not all of it. This is where poetry and the other arts play a role that explicatively definitive thought cannot.

I would generalize the idea of “a receptive noetic faculty” to all of life. Also, whereas it is generally counted as instinct in animals, the role of intuition in humans may become very confusing in the phenomenological endeavour to understand it, as it becomes entangled with the capacity for fantasy due to the symbolically enabled imaginative faculty that we alone seem to possess.

Regarding hermeneutics, the conventionally established forms of authoritative scriptural interpretation seem to have often been at odds with individual mystical experience and its expressions throughout human history.

Simply that your argument in the OP assumes the rules are found, not made. The picture is that there are rules for how things are, and that to understand a rule is to apprehend the rule correctly. Now mere syntax cannot do this; we need a semantics, an interpretation. Therefore we need a receptive “noetic” sense.

But if rules are only constituted as a part of a practice, then to understand is to participate in the practice, not to grasp something external to it.

It’s back to our core disagreement, from previous discussions. I don’t agree with the necessity of essentialism. You presume it.

Now that we get to play with actual Chinese rooms, we can see how they fall over. A recent example was when someone told an AI that they needed to wash their car, and that the car wash was only a few hundred metres away. Should they walk or drive to the car wash? The AI responded that it would make more sense to walk the short distance to the car wash…

It failed to participate in the “form of life” that is constituted by washing one’s car.

You cannot participate in a practice unless you understand it. Understanding a practice which consists in rule-following cannot be a matter of following further ‘meta’ rules, because that would introduce an infinite regress.

Then you can never participate in something novel.

That’s like saying that animals cannot respond appropriately to situations they have not previously encountered.

The simple fact is that we learn most activities, and moreover demonstrate that prowess, not by remembering the rules but by participating in them. The assistant can study every evening for a year, be able to answer all the professor’s questions correctly, and yet may still bring a block when asked for a slab.

We can and do participate in activities without understanding them. It’s how we learn.

I put that to ChatGPT 5.2, which responded (in part):

As for language models: they do not possess metaphysical commitments or attitudes. They synthesize patterns from the texts on which they were trained. If a tradition treats noetic receptivity seriously, that seriousness will generally be reflected in the response. If certain online communities mock it, that tone can also be reproduced. There is nothing intrinsic to discursive analysis or statistical language modeling that entails dismissing classical accounts of intellect as ‘magical’.

The substantive philosophical issue is not whether the idea ‘sounds mystical’, but whether discursive or computational accounts of cognition can adequately explain the grasp of universals, necessity, intentionality, and normativity. That question remains open in contemporary philosophy of mind, and thoughtful people can disagree about it without resorting to ridicule.

Me, I don’t think that LLMs display intentional behaviour in the same sense that humans do, because they’re not conscious beings, although they can simulate conscious intentionality with a spooky degree of realism - another point which ChatGPT affirmed:

Large language models do not possess consciousness in that sense. They do not have experiences, a point of view, or awareness of meaning. They generate outputs by modeling statistical regularities in language data. While their behavior can resemble reasoning and even norm-sensitive discourse, this resemblance does not by itself establish genuine intentionality.

I suggest the reason why some will contest this claim is that the intentional nature of being is not something objectively discernible - which is also the source of what you’re designating the ‘harder problem of quiddity’.

Good OP. I also think that the argument you’re making can be framed by speaking of intelligibility, i.e. the property of entities to be ‘understood’ (at least in part) by reason. That is, conceptual schemes really seem to ‘capture’ something about the real, and all explanations are based on the assumption that the world is intelligible, otherwise they would not be explanations. So, whoever denies intelligibility has to either consider it an illusion or think about it in purely ‘pragmatic’ terms. If the latter option is true, however, the problem remains: why are conceptual schemes useful? What makes them useful? At a certain point, it seems to me that pragmatism eventually leads one to accept either the idea that the world is intelligible (that ‘quiddity’ and the receptive faculty you’re speaking of are real) or that intelligibility is completely illusory, which IMO doesn’t seem very credible.

So would something like the Kripkenstein be a solution here then? I think it’s actually one of the more stark examples of the problems listed above, a sort of reductio even.

For one, Kripke’s skeptic has no legs to stand on if there is a receptive faculty (e.g., on non-empiricist accounts of how mathematics is understood). Kripke doesn’t even allow this as an option. Kripke assumes, without argument IIRC, that any receptive or rational faculty would itself need interpretation, and therefore collapses into a regress. But that assumption already presupposes a wholly discursive picture of understanding.

Second, if Kripke’s theory were true, it wouldn’t be “true” in any thick sense, it would merely be validly assertable (i.e., an appropriate behavioral output), and only within the context of a particular community (not even all communities).

Maybe the position could be salvaged if community practices, and what is found “useful,” were grounded in some way. They aren’t though. Community practices are based on “usefulness.” But then “usefulness” is given no measure by which we can say whether a community’s practices are truly useful, as opposed to merely apparently useful. Practices are “useful” just in case we observe them persisting within a community. This means Kripke’s argument is “true,” i.e., assertable, just in case it is assertable within a given community. So it’s “true” just in case people accept it as “true.” Error, in turn, just becomes local deviance. There is no distinction between a community seeming right and being right.

But community practices are wrong all the time, about what is good and what is true. There is, prima facie, no good reason to find community standards normatively binding in any thick sense.

Note also that because truth depends on a “usefulness” that bottoms out in whatever is felt to be useful, this is also straightforwardly voluntaristic, just in a more “democratic” fashion.

And sure, Kripke could appeal to a shared “form of life,” biological needs, etc. However, this is just another performative contradiction. Aside from being very vague, the idea of a “form of life” that grounds practices and anchors them towards truth (in any thick sense) is either:

A. Asserting exactly the sort of claim to metaphysical truth and causal explanations that is being denied to others; “truth for me, not for thee,”—a performative contradiction; or

B. Such appeals to how “forms of life” or biology ground practices are themselves merely “currently assertable according to some communities’ practices.” They are not true of how language works and evolves per se. That’s all there is to it. This explanation is not “true of language,” but rather “assertable in some contexts.”

Likewise, there is a transition problem. If a community changes its mind (e.g., moving from Newtonian physics to Einsteinian), Kripke’s model struggles to explain this as progress. Has thought become more or less adequate to being, or have assertability conditions just changed? Did they change this way because the new practice is more adequate to being? But then the same regress also exists. So, the sentence:

“Relativity replaced Newtonian physics because it was more accurate” would itself only be true in virtue of whether community practices currently render it as assertable. Progress and change become inscrutable (voluntaristic) because there is no meta-level from which to speak of them.

To be honest, I think the plausibility of these sorts of theories, or something like Rorty’s theory of truth, simply trade on equivocation. It would be more up front for them to simply deny truth and knowledge per se, and introduce some other substitute term. Flipping back and forth between the natural language senses of the term and the redefinitions just clouds these sorts of issues.

And I think this undermines appeals to “doing therapy,” in that the “errors” and “problems of language” being addressed would themselves only be “errors” and “problems” if the community feels them to be so.