Where Are You Going? | 008
On a new Vatican document, a week in Rome, and the anthropological question AI is forcing the Church to answer
On March 4, the International Theological Commission published Quo Vadis, Humanitas? — a 48-page document, five years in preparation and approved by Pope Leo XIV, asking what may be the most important question in Catholic intellectual life right now: Where are you going, humanity? The day after, the Leonum Institute’s Taylor Black presented a technical primer on AI at the Pontifical University of St. Thomas Aquinas (the Angelicum), during the Thomistic Institute’s conference Artificial Intelligence: A Tool for Virtue? The timing was not coordinated. The convergence was.
The ITC document is the most substantive treatment of AI, transhumanism, and posthumanism that any organ of the Magisterium has yet produced. It is organized around four categories: development, vocation, identity, and the dramatic condition of human existence. Transhumanism, the Commission writes, holds “a distinctly anthropocentric perspective, subscribing to an ideological and naively uncritical vision of scientific and technological progress.” Its pursuit of technologically mediated immortality is “the existential expression of a presumption that is both naive and arrogant.” Posthumanism, which dissolves the boundary between human and machine entirely, is “an existential expression of escape from reality, which stems from a radical devaluation of humanity.” Both, the ITC argues, are forms of what Francis called neo-Gnosticism: the dream of freeing the person from the body, the cosmos, and history.
Against these movements, the Commission proposes a Christian anthropology grounded in life as vocation: the human being as one who receives herself as gift, shares that gift with others, and recognizes its source in God. Drawing on Gaudium et Spes on the sixtieth anniversary of its promulgation, the document insists that the future of humanity “is not decided in bioengineering laboratories, but in the ability to navigate the tensions of the present” without losing our sense of limits and openness to mystery. If humanity places total trust in technology in a world ruled by machines, it risks replacing the living God with a counterfeit “virtual God.”
This is exactly right. Yet it is incomplete.
The Epistemological Gap
The ITC document identifies a critical danger: that the information revolution could reduce knowledge to what AI can process, rendering philosophy, theology, and ethics matters of personal “taste” rather than genuine inquiry. This is, in fact, happening. But the document lacks the epistemological vocabulary to say why the reduction fails. It diagnoses the symptom without naming the disease.
The disease is cognitional. Bernard Lonergan’s analysis of human knowing, drawing on the Catholic intellectual tradition, identifies three irreducible levels of conscious intentional activity: experience (attending to data), understanding (grasping intelligible pattern in the data — the act of insight), and judgment (the reflective grasp of whether one’s understanding is correct). Each level is distinct, each is necessary, and none is reducible to the others. Understanding is not a more complex form of experiencing; judgment is not a more refined form of understanding. They are different acts performed by a conscious subject.
AI systems — even the most capable frontier models — operate at the first level with extraordinary power and perform functional analogs of the second. They attend to data at a scale no human can match, and they extract patterns from that data with remarkable sophistication. What they do not do, and what their architecture does not equip them to do, is ask Is it so? They produce statistically plausible outputs without the reflective operation that distinguishes a pattern correctly identified from a pattern confabulated. The model has something formally parallel to understanding without anything formally parallel to knowledge.
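The claim about architecture can be made concrete. Below is a deliberately toy sketch of the core loop by which a language model generates text: sample the next token from a conditional probability distribution, append it, repeat. The vocabulary and probabilities here are invented for illustration, not drawn from any real model. The point is what the loop lacks: no step ever asks whether the emerging sentence is true, only which continuation is probable.

```python
import random

# A toy "language model": next-token probabilities conditioned on the
# previous token. (Illustrative numbers only, not a real model.)
MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.5, "ran": 0.5},
    "sat": {".": 1.0},
    "ran": {".": 1.0},
}

def generate(start: str, max_tokens: int = 5, seed: int = 0) -> list[str]:
    """Sample tokens one at a time from the conditional distribution.

    Note what is absent: at no point does this loop ask whether the
    emerging sentence is true. It only asks what is probable next.
    """
    rng = random.Random(seed)
    tokens = [start]
    for _ in range(max_tokens):
        dist = MODEL.get(tokens[-1])
        if dist is None:  # no continuation defined; stop
            break
        words = list(dist)
        weights = [dist[w] for w in words]
        tokens.append(rng.choices(words, weights=weights, k=1)[0])
    return tokens

print(" ".join(generate("the")))
```

Real systems replace the lookup table with a neural network and sample over tens of thousands of tokens, but the shape of the loop is the same: a probability distribution, a draw, an append. The reflective act of judgment, Is it so?, appears nowhere in the procedure.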
This is the argument the Leonum Institute brought to the Angelicum on March 5 — not as a criticism of the Church’s work, but as a contribution to it. The ITC has the anthropology. Lonergan supplies the cognitional precision that locates, with technical specificity, where the machine stops and the human person begins.
What the Room Taught Us
The most valuable moments at a conference are rarely the talks. They are the responses.
The respondents and the questions that followed the Angelicum presentation converged on a single challenge: even the Lonerganian framework, precise as it is about the acts of understanding and judgment, does not go far enough into the embodied character of human knowing. The room pushed toward a richer account of the human person — one that draws on neuroscience (the vagus nerve, the gut-brain axis, the role of affect in cognition), on the Theology of the Body (the body as the primordial sacrament, the person as a unity of body and soul that cannot be adequately described by any dualism), and on the patristic and conciliar insistence that what is not assumed is not healed. The direction is Incarnational and Resurrectional: a Christian account of intelligence must be an account of enfleshed intelligence, of a knowing subject who is not a mind using a body but a body-soul unity whose knowing is irreducibly somatic.
This matters for AI precisely because the machine is not embodied. When the ITC warns against making “the boundary between the human and the machine entirely fluid,” the deepest reason the boundary holds is not computational but corporeal. The human person knows as a body. Emotions are not noise in the signal; they are an integral part of the signal. The felt shift from puzzlement to understanding — the experience Lonergan calls insight — is not an epiphenomenon of a deeper computational process. It is the process. To reduce intelligence to what can be replicated in silicon is to reduce the human person to what can be abstracted from flesh, which is precisely the Gnostic temptation the ITC names.
The Catholic intellectual tradition has the resources. What it needs is the will to deploy them — not in the register of cautious papal statements about “ethical AI” (necessary as those are) but in the deeper register of philosophical anthropology, where the question is not How should we govern AI? but What is the human being, such that it matters?
A Public Resource
In conjunction with the Angelicum presentation, the Leonum Institute has published A Primer on Artificial Intelligence for Philosophers and Theologians — a freely available, comprehensive technical introduction written specifically for readers trained in the traditions of Aristotle, Aquinas, Lonergan, and their interlocutors. The document covers mathematical foundations, neural network architectures, transformer mechanics, large language models, the current research frontier (including reasoning models, mechanistic interpretability, and agentic systems), and the philosophical landscape — with each section surfacing the questions that the technical reality raises for philosophy and theology.
The Primer is designed as an evergreen resource. It will be updated as the technology progresses, so that the philosophical and theological community always has access to a technically precise, intellectually honest account of what these systems actually do. The goal is not to resolve the philosophical questions but to ensure they are asked with the precision the tradition demands. We welcome critique and feedback.
Across the Landscape
ITC: Quo Vadis, Humanitas? (International Theological Commission, March 4) — The full text, available in Italian and Spanish, repays close reading. The document’s four-chapter structure (development, vocation, identity, dramatic condition) provides a usable framework for Catholic institutional responses to AI. Particularly notable: the Commission includes among its new appointees Dominican Father Simon Francis Gaine, a professor of theology at the Angelicum itself, signaling sustained attention to these questions. Read the Vatican News summary linked above; the full document is available through the ITC.
Vatican Seminar: “Potential and Challenges of Artificial Intelligence” (Secretariat for the Economy / ULSA, March 2) — Two days before the ITC document dropped, the Holy See hosted a seminar with the personal encouragement of Leo XIV. Fr. Paolo Benanti, the only Italian member of the UN Committee on AI, argued that every technological artifact “functions as a configuration of power and a form of order” when it enters a social context. Bishop Paul Tighe named the Anthropic situation directly — a company founded to promote ethical AI now reportedly facing government pressure to relax its commitments regarding military and surveillance uses. The seminar cited Antiqua et Nova’s call for “a wisdom of the heart, capable of integrating the whole and its parts.”
Anthropic and the Pentagon — The most consequential AI ethics story of the year is unfolding in real time. Anthropic refused to grant the Pentagon unrestricted access to its Claude models for autonomous weapons and domestic mass surveillance. The Trump administration ordered all federal agencies to cease using Anthropic’s technology and designated the company a supply chain risk to national security. OpenAI announced a deal to replace Anthropic on classified networks hours later. The National Catholic Register ran an analysis explicitly connecting Anthropic’s stand to the Church’s position on autonomous weapons in Antiqua et Nova, which declares lethal autonomous weapons systems “a cause for grave ethical concern” due to their lack of “the unique human capacity for moral judgment and ethical decision-making.” Anthropic CEO Dario Amodei argued that frontier AI systems are not reliable enough to power fully autonomous weapons — a claim that, whatever one thinks of its political context, is technically correct and morally serious. The episode is a live case study in whether the technology sector can sustain ethical commitments under state pressure, and the Catholic intellectual tradition has direct resources to bring to bear.
Leonard DeLorenzo, “What We’re Becoming: AI and the Future of Human Dignity” (Catholic Review, March 4) — DeLorenzo, who teaches at Notre Dame, asks the question the tech futurists keep missing: “Nobody’s asking what we’re becoming in the process.” He argues that every technological revolution has formed us — the printing press changed how we think, the smartphone rewired our brains — and that AI will reshape what we think it means to be a person. The piece is short and deliberately so: it points to the need rather than claiming to fill it. The Leonum Institute’s work on formation versus training is a partial answer to his challenge.
Know of substantive work at the intersection of human flourishing and emerging technology? Primary sources preferred. Send it our way.
Technical Horizons
LLM Timeline — A useful visual resource tracking over 190 large language models from the original 2017 transformer paper through the present. What strikes you when you see it laid out is the compression of time: the entire arc from “Attention Is All You Need” to models that spontaneously develop multi-perspective reasoning and score gold at the International Mathematical Olympiad fits in under nine years. The pace is not slowing. Bookmark this for orientation — and for the next time someone asks you how fast things are moving.
Jack Dorsey lays off nearly half of Block — Dorsey posted the internal memo publicly: Block is reducing from over 10,000 employees to just under 6,000. The reason is not financial distress — gross profit is growing — but a bet that AI has made many roles unnecessary. The key line: the company believes it can do the same work, and more, with a smaller organization because of what AI now enables. This is the displacement scenario the Catholic social teaching tradition has been warning about, now arriving not as a hypothetical but as a CEO’s public reasoning. Kennedy’s three goods proper to employees — livelihood, good work, friendship — are all implicated when 4,000 people lose their jobs not because the company failed but because it succeeded. Dorsey’s severance terms are generous (20 weeks plus tenure-based additions, equity vesting, healthcare, devices, transition funds). The moral question is whether generosity in the exit can substitute for the goods that were available only through the work itself.
Anthropic: Introducing Claude Sonnet 4.6 — Anthropic released its most capable mid-tier model, with performance approaching its flagship Opus class at a fraction of the cost. The headline numbers matter less for this newsletter than the trajectory they represent: Sonnet 4.6 shows major improvements in computer use (the ability to navigate software the way a human does — clicking, typing, reading screens), long-context reasoning across a million-token window, and agentic task execution. In a simulated business competition, the model developed a strategy of investing heavily in capacity early and pivoting to profitability late — a temporal planning behavior that was not programmed but emerged from the model’s training. For the philosophically inclined: the safety evaluation found the model has “a broadly warm, honest, prosocial, and at times funny character” — language that deliberately describes character rather than personality, a distinction worth pressing.
Google: Gemini 3.1 Pro — Google’s latest reasoning model scores 77.1% on ARC-AGI-2, more than doubling the performance of its predecessor. The model is designed for tasks requiring synthesis across large datasets, multi-step reasoning, and what Google calls “ambitious agentic workflows.” Available across Google’s consumer and developer platforms. The competitive dynamic is worth noting: Anthropic, Google, and OpenAI are now releasing frontier-class models on roughly monthly cycles, each leapfrogging the others on specific benchmarks. The pace validates the efficiency-over-scale thesis from Newsletter 007 — architectural cleverness and post-training techniques, not brute compute, are driving the gains.
The ITC asks: Quo vadis, humanitas? The question echoes the one tradition holds St. Peter put to the risen Christ as he fled Rome: Quo vadis, Domine? Peter turned around. The tradition that formed him — the encounter with a person, not a system — was what made the turning possible.
The work of this moment is not to flee from what the machine can do but to turn toward what the human person is. The ITC has given us the anthropological framework. The Angelicum conversation has shown us where it must be deepened. The Primer provides the technical foundation. The rest is ours.
Taylor Black
Founding Director, Leonum Institute for AI & Emerging Technologies
The Catholic University of America


