@Wayfarer
Great passage to kick things off. Brennan gives a clean statement of the standard textbook reading — what you might call the “manual Thomism” account of abstraction. And it’s a perfectly serviceable entry point. But I think it’s worth flagging early that this reading, while widespread, smooths over some real tensions in Aquinas’s own texts — tensions that matter for how we connect this material to contemporary philosophy.
The big issue is what we mean by “abstraction” here. On the Brennan reading, agent intellect basically strips away material conditions from the phantasm until you’re left with the universal essence. It sounds almost mechanical: you start with a concrete sensible particular, you subtract the individuating features, and out pops the intelligible species. But if you go back to Aquinas carefully — especially the De Veritate and the relevant questions in the Summa (I.84-85) — what’s actually doing the cognitive work isn’t well described as subtraction. It’s an act. Agent intellect doesn’t peel layers off a phantasm; it illuminates the phantasm so that possible intellect can grasp an intelligibility that was only potentially intelligible in the sensible presentation.
In my opinion, the person who really nailed this down was Bernard Lonergan, in his Verbum: Word and Idea in Aquinas (originally a series of articles in Theological Studies, 1946–49, later collected as a book). Lonergan argued that the tradition had badly misread Aquinas on this point by assimilating his account to a kind of conceptualist extraction — as if understanding were fundamentally about getting the right abstract representation. What Lonergan showed, through painstaking textual work, is that for Aquinas the primary cognitive event is what Lonergan calls insight into phantasm: an act of understanding that grasps intelligible form in the imaginative presentation, not apart from it. The concept (the “inner word” or verbum) is then a product of that act of understanding — its expression, not its source.
This matters enormously for your question about modern philosophy. Because on the manual reading, the Thomistic account looks like a kind of naive realism about universals that most contemporary philosophers would reject outright. But on Lonergan’s reading, the structure is actually quite sophisticated: you have (1) sensory presentations, (2) an active intelligence that raises questions about those presentations (“what is it? why is it so?”), (3) acts of understanding that grasp intelligible patterns, and (4) concepts that formulate what’s been understood. That four-level structure maps much better onto contemporary discussions in philosophy of mind and epistemology than the simple “senses know particulars, intellect knows universals” schema suggests.
There are genuine modern parallels worth noting. The closest structural analogue is probably Husserl’s account of sensory hyle and noetic acts. For Husserl, raw sensory content (hyle) is not yet an experience of something until it’s animated by an intentional act — a noesis — that gives it sense and directedness. That’s remarkably close to the Thomistic picture of the phantasm being rendered actually intelligible by agent intellect, even if the philosophical context is very different. Robert Sokolowski’s work on “categorial intuition” (see his Introduction to Phenomenology, 2000, ch. 6) makes the connection nicely: we don’t just passively absorb sensory data, we articulate it — we see that “this is a house,” we register identities and differences, we grasp wholes and parts. Those acts of articulation are intellectual operations performed on and through sensory experience, not somehow outside it. Michael Polanyi’s distinction between focal and tacit knowing (Personal Knowledge, 1958) also tracks something similar from a different angle — the idea that intelligent grasp always operates through a subsidiary awareness of particulars toward a focal awareness of pattern or meaning. Polanyi didn’t frame it in Thomistic terms, but the structural isomorphism is hard to miss. There’s also significant overlap with work in the 4E cognitive science space, especially by those working at the intersection of phenomenology and cognitive science (Varela, Thompson, Noë, etc.).
Where this intersects with — and I think genuinely challenges — a lot of contemporary work is on the question of whether understanding is reducible to having the right representations. On Lonergan’s retrieval of Aquinas, understanding is an act that can’t be fully captured by any of its products (concepts, propositions, theories). That’s a claim with real bite against both classical empiricism and computationalist theories of mind, where cognition bottoms out in representations and their manipulation.
That said, there are serious modern challenges that any neo-Thomistic position has to reckon with. In my opinion, the most pressing comes from Sellars and his heirs — the “Myth of the Given” worry. The Thomistic picture seems to assume that sensory experience delivers a structured datum (the phantasm) that intellect then operates on. But Sellars argued convincingly that there’s no layer of cognition that is simultaneously (a) non-conceptual and “merely given” and (b) capable of justifying or grounding conceptual knowledge. If the phantasm is pre-conceptual, how does it serve as a basis for intellectual insight? If it already has conceptual structure, then the sharp senses/intellect divide starts to look artificial. McDowell picked this thread up in Mind and World and argued that perceptual experience is already conceptual “all the way down.” Now I actually think the Lonerganian position has resources here — because insight into phantasm isn’t a matter of the phantasm justifying the concept the way a premise justifies a conclusion; it’s a creative act of intelligence that grasps an intelligibility the phantasm merely occasions. But it’s a real challenge, and it forces you to be much more precise about the phantasm-insight relationship than the manual tradition ever was.
The other major challenge is straightforwardly naturalistic. If cognitive neuroscience can give us a complete causal account of how humans categorize, abstract, and form concepts — in terms of neural networks, pattern recognition, statistical learning over exemplars — then what work is “agent intellect” actually doing? Why posit an immaterial power at all? This is basically the Churchlands’ line (see P.M. Churchland, Matter and Consciousness, 1984), and it puts pressure not just on the Thomistic framework but on any account that treats understanding as irreducible to physical process. The standard Thomist response is that the neuroscience describes necessary conditions for understanding without describing understanding itself — that grasping, say, the Pythagorean theorem is not identical with any particular neural firing pattern, because the same theorem is grasped through radically different neural substrates across individuals. That’s a legitimate point, but I’d be the first to admit it needs a lot more careful development than the neo-Thomist literature typically gives it.
It’s also worth stepping back and noticing that the Brennan passage, precisely because it focuses on abstraction as a cognitive mechanism, leaves out the deeper metaphysical context that gives the whole account its real force. For Aquinas, the intellect’s capacity to grasp universal form isn’t just an interesting feature of human psychology — it’s a mode of participation. The agent intellect is a created participation in uncreated light (see Summa I, q.84, a.5; and De Veritate q.11, a.1), and the forms it grasps are themselves participations in the divine ideas. This is where the classical transcendentals come in: being, truth, goodness, and beauty aren’t four separate properties tacked onto things; they’re convertible aspects of being as such. Every being, insofar as it is actual (= being), is also intelligible (= true), desirable (= good), and — in the integrity of its form — beautiful. So when intellect grasps the form of a thing, it isn’t just performing an epistemological operation. It’s encountering the thing as a participation in being, and therefore as simultaneously true, good, and beautiful.
The modern thinker who recovers something like this most powerfully is probably Charles Taylor, whose account in Sources of the Self (1989) and A Secular Age (2007) of how naturalism progressively “disenchants” the world is really a story about how we lost this participatory framework — the sense that knowing a thing truly is simultaneously an encounter with its goodness. Taylor wouldn’t put it in Thomistic jargon, but the underlying worry is the same: once you flatten the knowing subject into a detached, neutral observer processing inputs, you’ve cut the nerve that connects epistemology to ethics and aesthetics. I think that’s a loss, and I think the transcendentals framework — properly retrieved, not just repeated as scholastic formula — gives you resources for re-establishing those connections that a purely naturalistic epistemology doesn’t have.
If you want to dig further into the textual issues (which are surprisingly complex), Verbum is the essential text, though it’s not easy going. Lonergan’s later Insight: A Study of Human Understanding (1957) then takes the basic cognitional structure he found in Aquinas and develops it independently of the Thomistic framework — and crucially, chapters 19 and 20 of Insight are where Lonergan works out his own account of how the dynamism of inquiry implicitly intends the transcendentals, arriving at a notion of being as “the objective of the pure desire to know.” Frederick Crowe’s and Robert Doran’s edition of Verbum (University of Toronto Press, Collected Works vol. 2) has helpful editorial notes. For a secondary source that connects this to analytic epistemology, see Hugo Meynell’s An Introduction to the Philosophy of Bernard Lonergan (U of T Press, 1991).
Another take from a different angle is Anthony J. Lisska’s Aquinas’s Theory of Perception: An Analytic Reconstruction (Oxford University Press, 2016). Lisska rehabilitates Aquinas’s perceptual realism from the bottom up, working in an analytic key, whereas Lonergan retrieves Aquinas’s theory of intellect from the inside out, working in a cognitional-theoretic key. They’re actually fairly complementary — Lisska gives you a much more developed account of what’s happening before the moment of insight that Lonergan cares about — though I suspect Lonergan would find Lisska’s naturalistic framing a bit too accommodating to precisely the reductionist pressures that the Thomistic account of agent intellect is meant to resist.
Also worth checking out is John Deely’s Four Ages of Understanding and Intentionality and Semiotics. For Deely, cognition is fundamentally semiosis (a la Peirce): a triadic relational process (sign–object–interpretant) that can’t be collapsed into dyadic causal interactions. He reads Aquinas’s species — both sensible and intelligible — as signs, and the whole movement from sensation through phantasm to intellectual grasp as a chain of sign-activity. His key Thomistic resource here isn’t actually Aquinas directly so much as John Poinsot (John of St. Thomas), whose Tractatus de Signis (1632) Deely edited, translated, and championed as the great overlooked achievement of the tradition.
The contrast between Deely and Lonergan is pretty sharp. Lonergan’s question is always: what is the conscious act that the knower performs at each stage? The emphasis is on cognitional operations — experiencing, understanding, judging — as acts that the subject does. Deely’s question is different: what is the relational structure that makes any cognitive contact with an object possible in the first place? His emphasis is ontological rather than cognitional — he’s interested in how signs function as a unique mode of being (what he calls relation in the technical scholastic sense, where relations can be “mind-independent” or “mind-dependent” in ways that cut across the modern subjective/objective divide). But Deely does flag something that Lonergan underdevelops, which is the role of signs and linguistic mediation in the cognitional process. Lonergan’s early work especially tends to treat insight as something that happens between the solitary knower and the phantasm, without enough attention to how sign-systems shape what can show up as intelligible in the first place.
Anyway, I’ve written way too much now. Let me know if there’s anything in particular you’d like to explore or push back on. Thanks.