Leonum Institute Weekly | Issue 002
A builder’s pedagogy: why forming AI practitioners requires more than adding ethics to engineering
Opening Frame: Formation, Not Addition
A standard approach to “responsible AI education” treats ethics as a module—a course requirement slotted between data structures and machine learning, checked off and forgotten. Such programs produce graduates who may be able to recite principles but cannot exercise judgment, who know that bias exists but cannot recognize it in their own work, who have studied frameworks but never built anything that required choosing between competing goods.
The Leonum Institute takes a different path. We are developing a pedagogy that forms builders—practitioners who don’t merely use AI tools but shape them, who understand that every technical decision embeds assumptions about what human beings are and what they need. The goal is not mere fluency but co-creation.
We offer a working thesis, not a finished curriculum. The landscape shifts daily; any pedagogy that claims completeness has already failed. What we can offer is a disposition: formation grounded in enduring truths about the human person, applied with the humility of practitioners who know they are learning alongside everyone else. We expect to revise as the technology develops, as our students teach us what they need, and as our collaborators sharpen our thinking.
Four Pillars of Leonum Pedagogy
The Builder’s Mindset
Whether the artifact is a single conversation with a generative AI tool, an agent executing tasks on someone’s behalf, or a venture-backed startup, the same fundamental disposition applies: ownership. We approach all our pedagogical work with the mindset of a product manager—someone responsible for whether something works, whether it should exist, whom it serves, what it assumes about its users.
Our students learn to ask the questions that precede the technical ones. What problem does this solve? For whom? What does this tool assume about human attention, memory, judgment, relationship? What happens when it fails? Who bears the cost?
The builder’s mindset refuses the fiction that technologists merely create neutral instruments that others then deploy wisely or foolishly. Every design choice is a choice about human life. We form people who understand this in their bones, not merely in their syllabi.
Disciplinary Breadth as Methodological Commitment
The university arose from the conviction that wisdom admits of myriad approaches—that theology, philosophy, natural science, law, medicine, and the arts each illuminate dimensions of reality that the others cannot reach. No single discipline exhausts what can be known; each brings its own questioning to bear on shared problems.
Artificial intelligence, then, cannot remain a technical domain that other fields may optionally consider. Every discipline will shape this general-purpose technology according to its own understanding of the human person. The nurse deploying clinical decision support, the lawyer using document analysis, the teacher integrating adaptive learning systems, the artist collaborating with generative models—each confronts questions that their formation uniquely equips them to answer.
Our pedagogy, therefore, refuses to confine AI education to computer science departments. We are developing pathways for students across the university to engage this technology from within their own disciplines—nursing’s understanding of care, law’s understanding of justice, education’s understanding of formation, art’s understanding of meaning brought to bear on novel problems. The result: enriched technological judgment from practitioners who understand their domain deeply enough to know what AI should and should not do within it.
We do not yet know what all these disciplinary pathways will look like. The nursing faculty will teach us what questions arise at the bedside that computer scientists never imagined. The theologians will surface assumptions we embedded without noticing. A genuine university learns from itself.
Formation in the Catholic Intellectual Tradition
Catholic universities have always understood that education forms persons, not merely professionals. The tradition holds that the human being is a unity of body and soul, that reason and faith illuminate each other, that knowledge divorced from wisdom becomes dangerous, that the common good is not the sum of private interests but a shared condition of flourishing.
Such formation grounds genuine technical excellence. Students shaped by this tradition bring to their work a robust understanding of human dignity, a suspicion of reductive accounts of the person, a commitment to solidarity with the vulnerable, and a long view that resists the pressure of quarterly metrics.
The Leonum Institute draws on this inheritance deliberately. We offer Catholic formation in technological judgment—holding open the question of what the human being is even as we equip students to build tools that serve human purposes.
Here too we proceed with humility. The Catholic intellectual tradition is vast; we are its students, not its masters. We bring twenty centuries of reflection on the human person into conversation with technologies that did not exist two years ago. The tradition gives us stable ground, but the application requires discernment we are still developing. We will get things wrong. We will learn. The tradition itself teaches us to expect this: Augustine revised, Aquinas synthesized, Newman adapted. Fidelity to a living tradition means thinking with it, not merely repeating it.
Industry-Recognized Credentials for Workforce Relevance
Formation without application becomes abstraction. Our students enter a labor market that increasingly requires demonstrated AI competency, and we take that reality seriously.
We are integrating industry-recognized micro-credentials into our pedagogical architecture—certifications that employers recognize, that translate across institutional boundaries, that verify specific competencies in concrete terms. As Learning and Employment Records (LERs) and Comprehensive Learner Records (CLRs) gain adoption across workforce systems, our graduates will carry credentials that speak both languages: deep formation and verified capability.
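To make the idea of a machine-readable credential concrete, here is a minimal sketch in Python of the kind of structured assertion a CLR-style record carries. The field names, class name, and values are illustrative assumptions for this newsletter, not the actual 1EdTech CLR schema.

```python
# Illustrative sketch only: a toy record showing the kind of structured,
# verifiable claim a Comprehensive Learner Record might carry.
# Field names and values are hypothetical, not the 1EdTech CLR schema.
from dataclasses import dataclass

@dataclass
class CredentialAssertion:
    learner: str              # opaque learner identifier
    credential: str           # e.g. an industry micro-credential
    issuer: str               # the institution standing behind the claim
    competencies: list[str]   # the specific verified capabilities
    issued_on: str            # ISO 8601 date

record = CredentialAssertion(
    learner="example-learner-id",
    credential="Applied AI Practitioner (hypothetical)",
    issuer="Leonum Institute",
    competencies=["prompt design", "model evaluation", "data ethics review"],
    issued_on="2026-05-15",
)

print(record.credential, "→", ", ".join(record.competencies))
```

The point of the structure is the pairing the article describes: a named credential an employer recognizes, bound to the specific competencies it verifies, in a form that travels across institutional boundaries.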
Our graduates will exercise their formation in workplaces; their capacity to do good depends partly on their capacity to be hired, trusted, and promoted. Intellectual depth and practical relevance belong together. Our students will have both.
The credential landscape itself evolves rapidly. Standards that matter today may consolidate or fragment tomorrow. We are building relationships with industry partners and standards bodies precisely so we can adapt as the ecosystem matures—committed to relevance rather than to any particular certification that may prove transient.
Learning in Public
We publish this newsletter because we believe the questions deserve company, not because we have answers. The Leonum Institute is young. Our pedagogy is in development. Our partnerships are forming. We are making commitments we will be held to and articulating positions we may need to revise.
The alternative—waiting until everything is polished before sharing—would deprive us of the collaborators and critics we need. It would also misrepresent the work. Forming technologists for a shifting landscape is not a problem to be solved once and implemented thereafter. The work demands ongoing discernment, adjustment, and fidelity to first principles amid changing circumstances.
So we build in the open. We invite correction. We expect to learn from those who read these letters and write back. The Catholic intellectual tradition has always advanced through dialogue, disputation, and the patient work of getting things less wrong over time. We continue that tradition here.
Looking Ahead
Coming issues will share specifics: the certificate architecture we are developing, the partnership conversations underway with sister Catholic universities, the computational infrastructure coming online at CUA. The vision above will become curriculum, courses, credentials, and collaborations—each iteration teaching us what we did not yet understand.
For now, the thesis: AI pedagogy adequate to the moment requires more than technical training supplemented by ethics. It requires formation—the slow, integrated work of shaping practitioners who build wisely because they understand deeply. And it requires humility—the recognition that we are all beginners in a landscape that remakes itself faster than any curriculum committee can convene.
We are building this pedagogy in the open. If you are working on similar questions at your institution, reach out. The work is too important for any of us to do alone, and none of us yet knows enough to do it well.
Across the Landscape
Brandon Vaidyanathan’s Beauty at Work podcast released a two-part conversation this week featuring Jaron Lanier, Glen Weyl, and me, exploring what kind of story we tell ourselves about AI—and why that story so quickly slips into theology. Jaron traces a buried origin: “artificial intelligence” emerged not from superior explanatory power but from an academic turf war against Norbert Wiener’s cybernetics. The cybernetic view—technology as networked collaboration rather than autonomous entity—lost the rhetorical battle but won the technical one. Today’s neural networks are precisely what cybernetics described, yet we inherited the rival vocabulary that treats AI as a thing unto itself. The conversation turns on a figure-ground reversal: you can see a large language model as an entity approaching consciousness, or you can see it as collaboration—something closer to Wikipedia with statistics than to a new god. Glen offers a memorable reframe: “Be the super intelligence you want to see in the world.” Corporations, religions, democracies—all meet every definition of super intelligence ever proposed. Jaron’s proposal of the Talmud as a design pattern deserves particular attention: generation after generation of contributors, each situated in a particular place on the page, perspectives preserved rather than flattened into a view from nowhere. The full conversation is worth your time: part one and part two.
Separately, the debate over Magisterium AI continues to surface important questions about what it means to build Catholic technology. Matthew Harvey Sanders of Longbeard published a thoughtful response to recent criticism, distinguishing between tools that attempt to replace authoritative teaching and those that help users navigate existing resources. The conversation illustrates precisely the kind of discernment our pedagogy aims to cultivate: not whether to build, but how to build wisely—with appropriate humility about what technology can and cannot do in service of formation.
Technical Horizons
Opus 4.5 and the builder’s threshold
Burke Holland’s “Opus 4.5 is going to change everything” deserves attention not for its headline but for what it demonstrates about the builder’s mindset we’re trying to cultivate. Holland—a developer at Microsoft—built four functional applications in hours: a Windows image converter, a screen recording editor, a social media scheduling tool with Firebase backend, and a routing application for his wife’s small business. His reflection captures something important: the model didn’t replace his judgment, but it did collapse the distance between intention and artifact. The applications exist because he knew what problems needed solving and for whom. The hardest remaining questions—security, API key management, edge cases—still required human discernment. For our pedagogy, this suggests the builder’s mindset becomes more essential, not less, as capabilities expand.
Economic primitives for understanding AI use
Anthropic released the fourth installment of their Economic Index, introducing what they call “economic primitives”—foundational measurements tracking task complexity, skill level, purpose, AI autonomy, and success rate. Two findings warrant attention. First, more complex tasks see greater speedup: work requiring a college degree was accelerated 12x compared to 9x for high-school-level tasks, even after adjusting for lower success rates on harder problems. Second, the geographic distribution of AI use follows a clear adoption curve—lower-income countries use Claude predominantly for education, while higher-income countries show diversified use across work and personal applications. For those thinking about AI pedagogy globally, Anthropic’s partnership with the Rwandan government and ALX offers one model: begin with AI literacy, then support transition to broader applications.
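For readers who want the intuition behind “adjusting for lower success rates,” here is a minimal sketch of one plausible way to fold a success rate into a raw speedup figure: charge failed AI attempts back to the human, who must redo the work. This is an illustration of the concept only, not Anthropic’s published methodology, and every number below is a hypothetical placeholder.

```python
# Illustrative sketch (not Anthropic's methodology): adjusting a raw
# speedup figure for task success rate. All numbers are hypothetical.

def adjusted_speedup(human_minutes: float, ai_minutes: float,
                     success_rate: float) -> float:
    """Effective speedup once failed AI attempts, which the human
    must redo from scratch, are charged to the AI-assisted workflow."""
    # Expected cost of the AI-assisted path: the AI attempt itself,
    # plus the full human effort whenever the attempt fails.
    expected_ai_cost = ai_minutes + (1 - success_rate) * human_minutes
    return human_minutes / expected_ai_cost

# Hypothetical task profiles: a harder task can retain a larger
# effective speedup even after its failures are accounted for.
college_level = adjusted_speedup(human_minutes=120, ai_minutes=4, success_rate=0.9)
high_school = adjusted_speedup(human_minutes=30, ai_minutes=2, success_rate=0.9)

print(f"college-level: {college_level:.1f}x, high-school-level: {high_school:.1f}x")
# → college-level: 7.5x, high-school-level: 6.0x
```

The design point is simply that a success-rate adjustment penalizes unreliable automation in proportion to how expensive the task is to redo, which is why the Index’s finding that harder tasks still show larger speedups is notable.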
Two hundred trend reports for sharpening judgment
Tim Duggan’s annual compilation of trend forecasts returns with over 200 reports from strategists across industries. His framing matters as much as the content: the top skills for 2026 include AI literacy (obviously), but also conflict management and judgment—the capacity to make calls that only humans can. Duggan’s advice for using the collection: read the reports yourself before summarizing with AI. “The skill of judgment, which you can also get by doing, comes from reading through the information and deciding what is useful, and what’s not, for you.” Formation requires friction. The Google Drive is worth bookmarking.
Closing Note
Newman wrote that the university exists to teach “universal knowledge”—not every fact, but the habit of mind that sees how knowledge coheres. The technologies now emerging make that vision both more urgent and more possible. We are forming people who can hold the whole in view even as they build the parts.
We do not yet know how to do this perfectly. We are learning. Join us.
Taylor Black
Founding Director, Leonum Institute for AI & Emerging Technologies
The Catholic University of America