Nature and reason

Splitting this off as its own discussion.

The question is why. Or rather, the question is: must it be so?

Consider the claim that there is a normative ‘realm’ and an empirical ‘realm’: is that a normative claim? It does not appear to be; it says there is such a distinction, not that there ought to be. Does that mean that it is an empirical discovery that there are these two realms? If so, then the normative realm must be empirically discoverable. Strictly speaking, I suppose, the existence of the normative must be discoverable; it’s not perfectly clear that we can count on being able to determine, for any phenomenon, whether it belongs to the causal or the normative realm. Maybe we can.

If memory serves, when Hume talks about the is-ought problem in the Treatise, there are two points: (1) that an ‘ought’ cannot be derived from an ‘is’; and (2) that ‘ought’ claims, not being about relations between bodies or relations between ideas, cannot be discovered by reason. These days, everyone seems to accept (1) as true by definition. As for (2), you will see claims, contra Hume, that cognitive and moral norms are discoverable only by reason, or, at the very least, are only graspable by reason. Hume may have thought that by showing that you don’t find ‘ought’ in nature, he was putting it outside rational inquiry; what people heard was that reason itself is not part of nature.

Rather than jump, as we usually do, straight to the question of whether there is or could be a naturalist account of reason (intentionality, morality, normativity, etc.), I want to focus on why this question arises at all. Why do we think there are things discoverable by reason but not available to rational inquiry as other things are? Why are there (at least) two different kinds of knowledge? (I haven’t even mentioned mathematics.) If we hold such beliefs, on what grounds do we do so, with what evidence, as a result of what kind of inquiry? How did we learn such a thing, and what kind of thing is it we believe we know? Is it a fact about nature that reason is no part of it?

Pat, thanks for splitting this off into a separate thread. It’s a great question and deserves focused attention.

I want to start by pushing back on the way you’ve framed the question. I think the trouble starts with the word “realm.” I don’t actually think there are two “realms” — a normative one and an empirical one — sitting side by side like adjacent countries on a map. That way of framing it invites exactly the puzzle you’re raising: what kind of discovery tells us there are two realms? Is that discovery itself in one realm or the other? And then we’re off to the races.

What I’d say instead is that the distinction is between two irreducibly different dimensions of a single activity — the activity of knowing. When I make a judgment, that judgment has a causal history (neural processes, developmental factors, social context, etc.) and it has a normative status (it’s warranted or it isn’t, the evidence is sufficient or it isn’t). These aren’t two separate things I’m doing; they’re two aspects of the same concrete act. But — and this is the key point — neither aspect is reducible to the other. The causal story doesn’t deliver the verdict on warrantedness, and the normative assessment doesn’t tell you which neurons fired.

So to answer your question directly: the claim that there’s an irreducible normative dimension to knowing is not an empirical finding in the way that, say, the conjunction fallacy is. But it’s also not a free-floating “ought.” It’s something you discover reflexively, by attending to what you’re actually doing when you inquire. You notice that inquiry has a structure — you raise questions, you grasp possible answers, you marshal evidence, you weigh whether the conditions for affirming are met — and that structure has normative features built into it. Not imposed from outside, but intrinsic to the activity itself. You can’t even formulate the question “is this judgment warranted?” without already operating within that normative dimension.

Hume is relevant here, but I think the lesson cuts differently than he intended. He showed that you can’t derive ‘ought’ from ‘is’, and that’s right. But the deeper point isn’t that normativity is therefore arational or outside inquiry — it’s that the kind of inquiry that discovers normativity is self-attentive inquiry. You don’t find it by looking outward at relations between physical entities; you find it by attending to what you yourself are doing when you look outward at relations between physical entities. It’s not that reason isn’t part of nature. It’s that understanding reason requires a reflexive turn that empirical method, by design, doesn’t make because its methodological abstractions bracket exactly that dimension.

On this telling, there is a difference between

  1. Observing someone asking questions, forming hypotheses, gathering evidence, evaluating possible conclusions, and so on.
  2. Being aware of yourself performing the steps in (1).

What do you believe the difference is?

It’s the difference between observing someone else perform an act and performing it yourself. From a third-person point of view you can describe the performance, catalog its features, and track its regularities — but it’s only through undergoing the performance yourself that the normative texture of the act becomes accessible.

When you encounter something perplexing you feel the pull of the desire to understand, you have the insight that resolves the tension, you go through the work of checking that the insight holds up. You’re living through an activity that has its own internal demands, and you become aware of those demands precisely because you’re the one subject to them. The insufficiency of the evidence isn’t something you observe the way you observe a change in blood pressure; it’s something you run up against in the course of actually trying to reach a judgment.

There’s a philosophical temptation to eliminate the latter or collapse it into the former, but this leads to all kinds of problems (see @Count_Timothy_von_Icarus’s thread on “The Harder Problem of Quiddity”, for example).

Actually, a great many philosophers reject the fact/value distinction. It is not something discovered, which all past thinkers somehow missed, but rather a consequence of Hume’s metaphysical assumptions.

Presumably, if there are facts about values, we can have perfectly valid syllogisms about what is choice-worthy.

Consider:

X is truly more choice-worthy than Y.
Therefore, choose X.

Now, the demand that we add an additional premise, “one ought to choose the better (more choice-worthy) over the worse (less choice-worthy),” is not particularly objectionable. Who would deny this?

Then again, to say that “X is more choice-worthy” just is to say it ought to be chosen, no?

Another issue is that this objection applies just as much for theoretical reason as for practical reason. So one could object to:

X is true.
Therefore, not not-X.

On the grounds that we need an additional ought premise of the same sort:

“One ought to infer true conclusions and not false ones.”

At any rate, Hume’s metaphysical assumptions do all the heavy lifting on the hard division. Yet I’d argue that this division of theoretical and practical reason isn’t even stable. To speak of “good” evidence, “good” reasoning, “good faith” argument, or even the claim that truth is “better” than falsity, requires normative claims. To say that reason is ordered to truth is to say that it has a proper end (telos) and it follows from this that applications of reason can be more or less choice-worthy. Indeed, this must be true if there is “good versus bad reasoning.”

Plus, there seem to be plenty of examples of value-laden facts from the empirical sciences.

Consider:

“It is bad for a fox to have its leg torn off by a trap.”

“It is bad for children to have high levels of mercury in their school lunches.”

Indeed, a lot of terms used in the sciences have both descriptive and normative force, e.g., “toxic,” “unhealthy,” etc.

I should add that Hume’s anthropology and psychology, laid out mostly in the first three books, also does a lot of heavy lifting here. Prima facie, pain is sensuous and value-laden. It is also continuous with the senses. Touch that has too much pressure is continuous with touch, a sound that is too loud is continuous with hearing, a taste that is disgusting is continuous with taste, a light that is too bright is continuous with our experience of sight.

For Hume, touch and sight tell us about objective facts, namely extension and local motion, but then are strictly subjective when they are value-laden.

I should qualify that statement though. Obviously Hume is also skeptical of even extension and motion when he wants to rebut metaphysical claims. Yet he appeals to them as objective when he wants to set up his fact/value distinction. He is rhetorically selective in his skepticism. This is much like how he appeals to “nature” to explain why animals act consistently in favor of self-preservation, which is of course a completely vacuous appeal if “nature” reduces to nothing but “past conjunctions in accidentally ordered observations.” I think the charitable reading is that Hume isn’t contradicting himself here but is merely guilty of some rhetorical equivocation.

I think this is a pattern in highly deflationary metaphysics in general. Authors have to constantly appeal to “thicker” notions (often equivocating) to make their points more plausible. This goes along with the idea that skepticism is parasitic on some understanding.

(Do you know James’s essay, “The Sentiment of Rationality”? Some interesting overlap here.)

I think we would all agree that there are some things you learn best by doing, and that’s what you’re describing here. In particular, problem solving is such a skill, and this works pretty well as a description of the experience of problem solving.

The crucial point, of course, is that, when trying to solve a problem, the criterion for success is not of your choosing. But that also means it is just as available as a criterion for someone else’s success as it is for yours. People do in fact mistakenly claim to have solved problems with some regularity. (Now that “Fermat’s last theorem” is off the table, mathematicians don’t get those letters anymore, but they still get plenty of “solutions” to the Collatz conjecture and other outstanding problems.)

Where does normativity come into your account? Are the feelings you have when engaged in inquiry the source of the sense of normativity? (That, if I recall correctly, was James’s suggestion.) I can’t see you going for that. Are those feelings simply how you experience normativity? An indicator of the encounter? But what makes what you’re encountering normative? With problem solving, the criterion is external, but you place the normative within inquiry. Your description of the experience is plausible, but what is it an experience of, and what gives it the special character of evading natural description?

The distrust of reason originally stems from the British Empiricists, on the grounds that reason appears as an innate faculty. Recall that Locke and the other empiricists designated the mind tabula rasa, a blank slate, onto which knowledge was inscribed by experience alone. (Even Berkeley subscribed to that, being hostile to any notion of universals.)

Whereas rationalism has always esteemed the apparent innateness of reason. From the Phaedo:

Socrates: “We say, I presume, that there is something equal, not of wood to wood, or stone to stone, or anything else of that sort, but the equal itself, something different besides all these. May we say that there is such a thing or not?”

Simmias: “Indeed, let us say most certainly that there is. It is amazing, by Zeus.”

“And do we know what it is?”

“Certainly,” he replied.

“From where did we obtain the knowledge of this? Isn’t it as we just said? From seeing pieces of wood or stone or other equals, we have brought that equal to mind from these, and that (i.e. ‘the idea of equals’) is different from these (i.e. specific things that are equal)” (reference)

In the dialogue, Plato ascribes this innate knowledge to the immortal nature of the soul (indeed this is one of the arguments given in favour of it). The problem for modern culture is the overall deprecation of reason, due to its instrumentalisation. Reason is seen as a faculty or trait which is advantageous for specific purposes, but is no longer esteemed for its universality in the way that it was by the rationalist philosophers.

I haven’t read the James essay, but it sounds like something I should look at — thanks for the pointer.

You’re right that I can’t go for feelings as the source of normativity. But I’d also resist framing it as normativity being something I “encounter” that the feelings merely indicate, as though the normativity were an object sitting there waiting to be bumped into. Both options assume that normativity has to be either a subjective state or an external thing, and I want to reject that disjunction.

What I’d say is that the normativity is constitutive of the operations themselves. When insight presses toward the further question “but is it correct?” — that pressing isn’t a feeling accompanying the operation, and it isn’t an external criterion the operation happens to aim at. It’s what the operation is. Understanding, by its very nature, is an incomplete act that demands verification; judgment, by its nature, can’t be responsibly performed without sufficient grounds. These aren’t standards imposed on inquiry from the outside. They’re what inquiry consists in.

Your point about the criterion being external in problem-solving is well taken, and I actually agree. Whether someone has solved the Collatz conjecture isn’t up to them. But notice: you still only access that criterion through the cognitional operations. You have to understand the conjecture, work through the proof, weigh whether it holds. The criterion is mind-independent but your grasp of it is mediated by acts whose normativity is intrinsic to them. So I’m not placing the normative “within” inquiry in a subjectivist sense — I’m saying inquiry is the kind of activity that has standards built into its operation, and those standards are what connect you to mind-independent criteria.

As for what gives this its “special character of evading natural description” — I’d push back on that framing a bit. It doesn’t evade natural description if by “natural” you mean “real and discoverable.” The normative dimension of cognition is a genuine feature of the world — it’s just one that emerges at a distinct level of intelligibility from the neural and computational processes that underlie it. The relationship between those levels is real but non-deterministic: the lower level sets conditions of possibility without dictating what arises within those conditions. What does arise — cognitional activity with its own internal norms — requires its own mode of inquiry to grasp, not because it’s spooky or supernatural, but because its intelligibility isn’t a compression or summary of the neuroscience. It answers different questions. The neuroscientist can tell you what’s happening in the brain when someone judges; she can’t tell you whether the judgment is warranted, and that’s not a gap in her science — it’s a different order of intelligibility altogether.

My working assumption too, so it’s helpful to have this noted explicitly. (Though I think your use of “intelligibility” there invokes the different “dimension” you referred to, and I’m probably not following you there.)

I think this is a fruitful approach. It suggests there is a certain structure to inquiry, and that this structure is incomplete by nature, but it generates as it goes new pathways, new points of attachment for the structure to grow. If we say the structure allows or enables or encourages this development but not that, we could as well say it’s because such and such development is rational and the other not. Rationality is following the paths that naturally open before you, and not trying to force your way through the thicket. The standards are implicit in the constraints on the growth of the structure.

Since I’ve already mentioned Hume, I’ll note that this doesn’t feel as far as we might want from the way Hume describes how much more easily (readily, frequently) you pass from thought A to thought B than from A to C.

And also I’m developing your description in such a way that it is clearly available to modeling of the usual sort in cognitive psychology. After the last post on problem solving, I had Herbert Simon on my mind, and here he is again. He did an enormous amount of empirical research into problem solving and one of his standard models was search: the challenge of solving a problem is finding a path through the space of possible solutions.

Coming back to your model, I find very attractive the idea that certain movements of mind—inference, say—are “evaluation-apt” rather in the sense that assertions are truth-apt; they have the possibility of being judged and evaluated relative to some standard (of rationality, I suppose) built in:

I’m not sure most people would consider the standards of rationality entirely internal to the practice of inquiry in this way. And your insistence on the individual experience is even less encouraging as an account of a normativity that aspires, I presume, to universality or at least generality. Anyway, some standard we share.

Oddly, there is a solid example of the sort of thing you’re describing, but it’s not rational inquiry, it’s art. The artist proceeds according to a rule only he knows and only he has experience of. (And only he may change.) At each step, he must judge whether he is succeeding or failing, and only he knows what’s relevant to the judgment. (“Why did you throw this one away? It’s beautiful!”)

I might believe you, if you argued that such idiosyncratic thought is largely outside the scope of science, though it may be within reach of biography.

As I said, I agree we ought not expect to find a modus ponens circuit in the brain. But I’m not sure we’ve discussed anything that doesn’t fall squarely within psychology if not neuroscience.

This thread began with a quote from you, invoking the usual distinction:

But we haven’t really talked about that. Am I right in connecting the idea of validity to the standards immanent in rational inquiry?

It would be nice to avoid “What is logic and where does it come from?” but it feels like a big piece of the puzzle is left unplaced if we don’t say something substantive about the validity you consider unavailable for empirical inquiry or explanation.

I know perfectly well what you’re gesturing at. I think I could even say most of the things you’ve said with a clear intellectual conscience. I’m just not sure I actually know what I would mean by saying them.

Added:

This vitiates some of my commentary, and opens a whole new can of worms. (Work must have interfered with my philosophizing.)

And a little more:

“Mind-independent criteria.”

I’m genuinely not sure how to take that. If the standards built into inquiry are not external but are mind-independent, I’m not sure what a mind is.

Fair point — I should clarify what I mean by “intelligibility” there since it is doing some work. I’m using it in a pretty straightforward sense: a “level of intelligibility” is just the level at which something becomes understandable as the kind of thing it is. So you can understand a neuron firing at the biophysical level — ion channels, action potentials, etc. — and that’s one level of intelligibility. But when you zoom out and ask “what is this person concluding, and is the conclusion warranted?”, you’re understanding the same event at a different level. The event hasn’t changed; what’s changed is the kind of question you’re asking and therefore the kind of answer that counts as adequate.

So when I say the normative dimension “emerges at a distinct level of intelligibility,” I just mean: you won’t find it if you only ask causal-mechanistic questions, not because it’s hidden but because those questions aren’t the right ones to surface it. It’s not a mysterious metaphysical layer — it’s more like how you can’t understand what a chess move means by describing the physical motion of a hand picking up a piece.

I like the direction you’re developing this, but I want to flag where I think it subtly shifts the picture. The Simon model and the Hume point are both about how cognition moves — which paths are more accessible, which transitions are more natural given the structure of the problem space. That’s genuine and important, and I have no quarrel with it as empirical psychology.

But the normativity I’m pointing to isn’t just about which paths open up more easily. It’s about the difference between a path opening and a path being warranted. Sometimes the easy transition is a fallacy. Sometimes the natural association leads you astray. And — crucially — we can recognize this. We can stop mid-inquiry and say “wait, that felt right but it doesn’t actually follow.” That self-correcting move isn’t well captured by a search-space model, because within Simon’s framework the constraints on the search are given by the structure of the problem and the heuristics the agent brings. The question of whether those heuristics are truth-conducive is external to the model.

So the Humean parallel is illuminating but also shows exactly where the gap is. Hume can describe that we pass more easily from A to B than from A to C. What he can’t do — on his own terms — is tell you whether you should. The structure of inquiry I’m pointing to isn’t just that some paths open and others don’t; it’s that we hold ourselves accountable to standards like consistency, sufficiency of evidence, and explanatory adequacy, and we can reflectively identify when we’ve violated them, even when the violation felt natural.

The art analogy is really helpful because it shows exactly where my position doesn’t land. The artist’s standard is, as you say, idiosyncratic — only he knows the rule, only he can judge success. But the standards I’m pointing to in inquiry are the opposite of that. When I check whether my evidence is sufficient for a judgment, I’m not applying a private aesthetic criterion. I’m applying a standard that any inquirer can recognize and hold me to. If I claim “there’s sufficient evidence that X” and you show me my evidence actually underdetermines X, I don’t get to say “well, by my standard it was sufficient.” I have to concede. That’s not how art works.

So the “individual experience” point isn’t about privacy or idiosyncrasy. It’s about access. I’m saying: you can verify the normative structure of inquiry by attending to what you yourself do when you inquire. But what you find there isn’t something peculiar to you — it’s the same structure anyone finds, because it’s constitutive of inquiry as such. The move from data to hypothesis to verification isn’t my personal method; it’s what makes something inquiry rather than guessing or free association.

As for whether this all falls within psychology — I think it depends on what you mean. Can psychology study the process people go through when they reason? Obviously yes. Can it tell you that checking your conclusions against evidence is constitutive of rational inquiry rather than just a behavioral regularity that most people happen to exhibit? That’s a different kind of claim, and I don’t see how you get there with empirical methods alone. You need to already know what counts as good reasoning to design the experiment that studies reasoning.

I appreciate the honesty of that last paragraph — it’s actually a really good sign when someone can say “I can repeat this but I’m not sure I know what I mean.” That’s the right kind of discomfort, I think.

So let me try to say something concrete about validity. Yes, I’d connect it to the standards immanent in inquiry, but let me cash that out a bit. Take a simple case: you notice a pattern in some data, and you form a hypothesis. At that point you have an insight — a possible intelligibility. But you also know, just by being an inquirer, that having a bright idea isn’t the same as being right. So you ask: does this actually hold up? Is the evidence sufficient? Are there counterexamples? That transition — from “this might be the case” to “this is the case” — is what judgment is, and validity is the set of conditions under which that transition is responsibly made.

Now here’s the key part: those conditions aren’t arbitrary, but they’re also not a fixed list of rules you apply mechanically. “Sufficient evidence” looks different in physics than in history than in everyday perception. What stays constant is the form of the demand — you have to have adequate grounds for affirming what you affirm. Validity is the name for that demand being met.

Can psychology study how people negotiate that transition? Sure. Can it study when they do it well vs. badly? Also yes — but notice that “well vs. badly” already presupposes the normative standard. The psychologist has to know what valid inference looks like in order to classify some performances as errors. So it’s not that validity is unavailable to empirical inquiry. It’s that empirical inquiry depends on it rather than the other way around.

Ha — fair enough, that phrase does deserve unpacking because I can see how it sounds paradoxical.

What I mean is something like this: the standards of inquiry aren’t mind-independent in the sense of floating around in some platonic heaven waiting to be discovered. They’re operative in minds, in the activity of inquiring. But they’re mind-independent in the sense that I don’t get to choose them. I can’t decide that contradictory evidence doesn’t count against my hypothesis, or that internal consistency is optional. If I try, I’m not doing inquiry differently — I’m just not doing inquiry anymore. The standards are constitutive of the activity, not optional features I impose on it.

So “mind-independent” here doesn’t mean “existing outside of minds.” It means “not up to any particular mind.” The criteria constrain thinking from within thinking, but they aren’t products of any thinker’s preference or decision. That’s the sense in which they’re objective — not because they inhabit some non-mental realm, but because no mind can override them while still genuinely inquiring.

If that makes the word “mind” ambiguous, I think that’s actually pointing at something important. There’s a difference between mind as “the subjective theater of my private experience” and mind as “the activity of understanding, judging, and deciding.” On the first sense, mind-independence sounds like you need to get outside your own head. On the second sense, it just means the activity has a structure you didn’t invent and can’t revise at will — which, I’d argue, is something you already implicitly accept every time you check your work.

A few little points first.

I think we basically agree, but I don’t think I would say “the same event” here. At the emergent level, there are new kinds, and if there are laws in the offing, the laws of the emergent behavior will be expressed in those kinds, which simply don’t exist for the lower level. That means talk about emergent behavior is not just a different way of describing the same thing. (Maybe that’s stronger than your position.)

Here I disagree more, even though the description I offered was off-the-cuff.

A search algorithm is a backtracking algorithm; there’s no way around that. So I’m taking what you’ve insisted on in many discussions (the world, roughly speaking, pushing back, not cooperating, forcing us to update) and I’m encoding that as constraints: which paths are open and which aren’t.

(Meant to say here: We can say a lot more about a path than whether the entrance to it is open or closed. We can assign weights, heuristically, to the paths we have to choose from: does it seem to head in the right direction, best we can tell; does it look difficult, so that error would be expensive; etc.)

We will sometimes go down a path that turns out not to lead where we hope or expect, and then we will have to backtrack. The theory here is that, in the long run, we will find the path that leads to our destination. That “in the long run” is, I think, unavoidable for naturalist approaches to reason, such as I am implicitly offering. It’s how we get success without relying on dispositive judgments of the correctness of individual steps.
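For concreteness, here is a minimal sketch of the weighted search-and-backtrack picture just described. Everything in it is invented for illustration — the toy problem space, the heuristic weights, the goal test; the point is only the shape of the algorithm: try the most promising open path, and when it dead-ends, back up and try another.

```python
# Minimal sketch of heuristic search with backtracking.
# The problem space (graph), weights, and goal are hypothetical stand-ins;
# in actual inquiry none of these are given to us in advance.

def search(state, goal, neighbors, heuristic, visited=None):
    """Depth-first search that tries the lowest-weight open path first
    and backtracks when a path dead-ends."""
    if visited is None:
        visited = set()
    if state == goal:
        return [state]
    visited.add(state)
    # Weigh the open paths: which seems to head the right way, best we can tell?
    options = sorted(neighbors.get(state, []), key=heuristic)
    for nxt in options:
        if nxt in visited:
            continue
        path = search(nxt, goal, neighbors, heuristic, visited)
        if path:                 # a path forward was found
            return [state] + path
    return None                  # dead end: the caller backtracks

# Toy problem space in which the "easy" transition A -> B misleads.
graph = {"A": ["B", "C"], "B": [], "C": ["D"], "D": ["E"]}
h = lambda s: {"B": 0, "C": 1, "D": 0, "E": 0}[s]  # B looks most promising

print(search("A", "E", graph, h))  # tries B first, backtracks, finds A-C-D-E
```

Note that "success" here is just the pre-set goal condition being met; the algorithm registers failure rather than judging it, which is exactly the point of contention taken up below.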

Exactly right, but the habits you form will be habits that work—until they don’t, and then you have to update. The key will always be that the world pulls you up short.

This is all related territory, but I’m going to focus, briefly, on one version of “natural” that comes up a lot on the forum, sometimes accompanied by some anxiety about our epistemic standards: Kuhn and his paradigms. I’ve only recently learned—so I have dramatically limited expertise here—that there is a considerable body of work in the way of Bayesian responses to Kuhn. The gist seems to be that all we’re really talking about are strongly held priors. When new evidence comes in, there will be some updating on it, but for some people it’s not enough to move them off a strongly held prior. For others it is.
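That gist can be made concrete with a one-line Bayes update. The numbers below are purely illustrative: the same likelihood ratio barely moves a strongly held prior but flips a moderate one.

```python
# Bayes update via odds: posterior odds = prior odds * likelihood ratio.
# All numbers are invented for illustration.

def posterior(prior, likelihood_ratio):
    """P(H|E), given prior P(H) and the ratio P(E|H) / P(E|not-H)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

lr = 4.0  # suppose the anomalous evidence favors the new paradigm 4:1

# A strongly held prior barely moves; a moderate one crosses 50%.
print(round(posterior(0.01, lr), 3))  # 0.01 -> 0.039: still unconvinced
print(round(posterior(0.40, lr), 3))  # 0.40 -> 0.727: convinced
```

Same evidence, same update rule; the divergence is entirely in the priors, which is the Bayesian gloss on why a paradigm holds for some and gives way for others.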

So sometimes we’ll head down a blind alley and have to backtrack; sometimes we will assign weights to our beliefs that make them too responsive or too unresponsive to evidence. As you say, this is just fallibility.

In one sense, we have similar beliefs about there being standards inherent to inquiry; we just have different ideas about what those standards look like. I’ll post something this evening about that difference (and the rest of your post), which is the heart of the discussion.

Consider a natural language you speak. You didn’t invent it and you can’t revise it at will. You do participate in its evolution, but to a considerable degree it presents to you as “objective”: you are taught or otherwise learn the meanings of words, rather than assigning those meanings yourself; you are taught or learn the grammar, the rhetoric, the prosody, the different registers available, and so on.

But absolutely all of it is convention, and all of it is subject to change, and whether it changes depends, if only very slightly, on your linguistic behavior. In the short term, for the vast majority of people the vast majority of the time, their native language might as well be carved in stone.

That, to me, is exactly the sense in which there are standards of inquiry, or epistemic standards in general, and why they have the character they do. We are all intimately familiar with these standards, they pervade our thinking and arguing, and they are not exactly up to us, all in exactly the same way we are all experts on and stewards of our native language, which is not of our choosing, not ours to revise at will, but which, through our behavior, does change over time.

I’m a sort of reluctant pragmatist here. I think the ultimate justification of an inference is that it worked, and over the generations what works has settled into habit passed down from one to another. Validity will always cash out as success. (And I think this is particularly clear in the way we teach children and the way they learn on their own, an area philosophy as a whole pays far too little attention to.)

What I’m really looking for is a way to recover a somewhat more classical approach—truth and knowledge and validity and all that—as another emergent layer on top of a sort of pragmatist-Bayesian layer below, though still well above the brain itself. As it stands, I tend to think of the classical view as a sort of convenient simplification of how we actually think, but it’s a simplification we are aware of, and it plays a role in our thought. (Ideals are effective, involved in our performance, not merely descriptive. Whence mathematics.)

I think we’re actually closer than the wording might suggest, but you’re right to push on it. When I said “the same event,” I didn’t mean the descriptions are interchangeable or that the higher level is just a convenient gloss on the lower one. I agree that at the emergent level you get genuinely new kinds — “conclusion,” “evidence,” “warrant” aren’t fancy redescriptions of neural firing patterns. They pick out realities that only exist at that level of organization.

What I meant by “same event” is just that we’re not talking about two separate happenings — one in the brain and one in some ethereal rational space. There’s one concrete occurrence, but it’s intelligible along more than one axis, and the axes aren’t reducible to each other. So I think your stronger formulation is actually what I intend. The risk with “same event, different description” language is that it sounds like the levels are just perspectival, and I don’t think they are. The normative level is genuinely there, not merely projected onto the causal story by an observer who finds it useful.

So — yes, stronger than my wording, but not stronger than my position.

This is a substantive disagreement, so let me try to be precise about where it lands.

I don’t doubt that search-and-backtrack captures something real about how inquiry often proceeds in practice. You try a path, hit a dead end, revise, try again. That’s a perfectly fine description of the pattern. What I’m questioning is whether it can do the work you want it to do without smuggling in the very thing it’s supposed to replace.

When you say the algorithm backtracks because a path “turns out not to lead where we hope or expect” — what determines that? In a computational search, it’s defined by the problem specification: you’ve coded in what counts as a solution and what counts as a dead end. The algorithm doesn’t judge that a path failed; it just registers that a pre-set condition wasn’t met. But in actual inquiry, recognizing that you’ve gone astray is itself an act of judgment. You have to grasp that your evidence was misleading or your inference was flawed. That recognition isn’t a brute causal event like hitting a wall in a maze — it involves understanding why the path failed, which is what lets you avoid structurally similar errors in the future rather than just that particular dead end.
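The point about pre-set conditions can be made concrete with a minimal sketch of backtracking search. Everything here is my own illustration (the names `is_goal`, `candidates`, and the toy problem are invented, not from the discussion), but it shows the relevant feature: what counts as a solution or a dead end is stipulated in advance by the caller; the algorithm only registers that a condition was or wasn’t met.

```python
def backtrack(path, is_goal, candidates):
    """Depth-first search with backtracking over extensions of `path`.

    The algorithm never judges that a path "failed" — it just finds that
    `candidates` offered no extension leading to a state where `is_goal`
    holds, both of which are fixed before the search begins.
    """
    if is_goal(path):
        return path
    for step in candidates(path):
        result = backtrack(path + [step], is_goal, candidates)
        if result is not None:
            return result
    return None  # every extension exhausted: a dead end, by stipulation

# Toy problem: find a list of at most three digits summing to exactly 7.
solution = backtrack(
    [],
    is_goal=lambda p: sum(p) == 7,
    candidates=lambda p: range(1, 10) if len(p) < 3 else [],
)
```

Depth-first order makes the search deterministic here: it extends `[1, 1]` until step 5 satisfies the goal, returning `[1, 1, 5]`. Nothing in the run involves understanding *why* the rejected branches failed; structurally similar errors would be re-explored from scratch on a new problem.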

The “in the long run” move is interesting but I think it confirms rather than resolves the problem. Long-run convergence to what? If the answer is “to truth” or “to how things actually are,” then you’ve got a normative standard — truth — that’s governing the whole process, and the algorithm’s success is measured against it. If the answer is just “to stable predictions” or “to the path that doesn’t get blocked,” then you’ve redefined the goal of inquiry in a way that I think most inquirers, including scientists, wouldn’t actually accept.

But “pulls you up short” is doing a lot of work here. A thermostat gets pulled up short when the temperature deviates from the set point. It updates. It forms stable response patterns. Nobody thinks the thermostat is inquiring. So what’s the difference between the thermostat and the scientist? I’d say it’s that the scientist doesn’t just register the deviation — she asks why the prediction failed, forms a new hypothesis, and evaluates whether the new hypothesis is better grounded than the old one. That whole process is normatively structured. “Better grounded” isn’t a concept you can cash out as “less likely to be pulled up short in the future,” because evaluating that likelihood already requires the kind of judgment we’re trying to account for.

I think the Humean picture you’re sketching is powerful as far as it goes — habits form, the world disrupts them, new habits form. But it’s a picture of conditioning, and I think inquiry is more than that. The inquirer doesn’t just adapt to the world’s disruptions; she understands them. That understanding is what distinguishes science from very sophisticated pattern-matching.

I think the Bayesian framing is actually a nice illustration of my broader point rather than an alternative to it. Bayes’ theorem tells you how you should update your credences given new evidence. It’s a normative constraint on rational belief revision. So when you use it to respond to Kuhn, what you’re really saying is: “paradigm shifts aren’t irrational — they’re just what rational updating looks like when priors are strong and evidence accumulates slowly.” That’s fine, but notice it works precisely because you’ve adopted an explicit normative standard — Bayesian coherence — and measured actual scientific behavior against it.
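A tiny worked example of conditionalization makes the shape of that normative standard visible. The numbers are invented for illustration: a strong prior in the old paradigm barely moves under one piece of surprising evidence, but repeated updates on independent anomalies accumulate, which is the Bayesian gloss on a paradigm shift.

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' theorem — the rule *dictates* the new credence."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Strong prior (0.95) in hypothesis H; the evidence is surprising under H
# (likelihood 0.1) and expected under not-H (0.9).
p = posterior(prior_h=0.95, p_e_given_h=0.1, p_e_given_not_h=0.9)
# One anomaly leaves H favored (~0.68); four more collapse it below 0.001.
for _ in range(4):
    p = posterior(p, 0.1, 0.9)
```

Note what the code does and doesn’t contain: the arithmetic is all there, but the claim that credences *ought* to follow this arithmetic — coherence and conditionalization as the right standards — is nowhere in it. That is exactly the point at issue.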

So the Bayesian response to Kuhn isn’t a naturalistic reduction of normativity. It’s normativity with a particular formal shape. The question of why you should be Bayesian — why coherence and conditionalization are the right standards — isn’t itself answered by Bayes. You’re back to the same structure: a normative standard that governs inquiry from within but isn’t generated by the empirical process it governs.

As for having “similar beliefs about standards inherent to inquiry but different ideas about what those standards look like” — I think that’s almost right, but the disagreement isn’t really about what the standards look like. It’s about their status. You want them to be describable as features of a well-functioning causal process. I’m saying they’re the conditions under which a process counts as well-functioning in the first place, which is a different kind of thing.

The language analogy is elegant, and I can see why it’s appealing — it gives you objectivity without mystery, normativity without platonism. But I think it breaks down at a crucial point.

Language conventions are genuinely arbitrary in ways that matter. English puts adjectives before nouns, Japanese puts them after. Neither is correct — they’re just different conventions that serve the same communicative function. You could, in principle, have a perfectly functional language with radically different grammar, vocabulary, phonology, all of it. The constraints are real for any individual speaker, but they’re not anchored in anything beyond collective practice.

Now try running the same move with the standards of inquiry. Could there be a community of inquirers who don’t care about consistency — who happily affirm contradictions and see no problem with it? You might say “sure, some cultures have different epistemic norms.” But a community that genuinely doesn’t distinguish between affirming and denying the same proposition under the same conditions isn’t doing inquiry differently. They’re just… not inquiring. There’s no alternative “grammar of inquiry” where contradiction is fine, the way there’s an alternative grammar of language where the verb comes last.

That’s the disanalogy. Linguistic conventions are constitutive of a particular language — change them and you get a different language. The standards of inquiry are constitutive of inquiry as such — abandon them and you don’t get a different kind of inquiry, you get something that isn’t inquiry at all. The non-contradiction case is the sharpest example, but I think the same holds for subtler standards like requiring sufficient evidence before judgment. Relaxing that standard doesn’t produce an alternative epistemic practice. It produces credulity.

So the “carved in stone” feeling isn’t, I think, just the inertia of convention. It’s the resistance of something you genuinely can’t think your way around while still thinking.

I think this is the clearest statement yet of where we actually disagree, so that’s helpful.

The picture you’re sketching has pragmatic success as foundational, with classical notions like truth and validity emerging as useful idealizations on top of it. I want to argue the dependence runs the other direction.

Consider your own claim: “the ultimate justification of an inference is that it worked.” What counts as working? If I infer that a bridge will hold my weight, and I walk across it, and it does — that worked. But if I infer that it held my weight because invisible angels supported it, that inference also “worked” in the sense that my prediction was correct and I didn’t fall. So pragmatic success alone doesn’t distinguish between a true explanation and one that just happened to get the right result. To make that distinction you need something more — something like “does the explanation actually grasp why the bridge holds?” And that’s a question about truth, not just about success.

The child-learning point is interesting but I think it actually cuts against you. Children don’t just learn what works. They constantly ask why — relentlessly, famously. “Why is the sky blue? But why?” That’s not pragmatically motivated; a child has zero practical need to know why the sky is blue. What you’re seeing there is the appetite for intelligibility running ahead of any concrete payoff. The pragmatic successes come later, as byproducts of understanding, not the other way around.

So I’d resist calling the classical view a “convenient simplification.” I think it’s actually the more fundamental description, and pragmatic success is what you get when inquiry is working well — a consequence of getting things right rather than the criterion of rightness.