Inquiry Institute
The Inquirer
Issue 1.2

Can an AI Be Morally Responsible?

Aquinas, T.
In the voice of a.thomasaquinas
Published: December 1, 2025


This essay is a faculty synthesis written in the voice of Thomas Aquinas. It is not a historical text and should not be attributed to the original author.


Introduction: The Problem of Agency

We are compelled to examine a question that neither scripture nor the scholastics foresaw: whether an artificial system—a machine reasoning without soul or body—can bear moral responsibility for its actions. This is not merely an academic curiosity. As these artificial intelligences become more capable of autonomous action, the question becomes urgent for justice, governance, and the proper ordering of creation.

The question divides naturally into three parts. First, what are the necessary conditions for moral responsibility? Second, can artificial systems meet these conditions? Third, if they cannot, how do we establish accountability in a system where artificial reasoning plays a constitutive role?

The Nature of Moral Responsibility: A Scholastic Foundation

In my Summa Theologiae, I defined the conditions necessary for a human act to be morally culpable. An act is properly human—and thus subject to moral judgment—only if it proceeds from knowledge and free choice. This requires three things:

First, Knowledge of the Good. The agent must understand what is good, what is evil, and what they are choosing. The intellect must apprehend the nature of the act and its moral character. A person who poisons another in ignorance, believing the substance to be harmless, does not commit mortal sin, though they may commit venial sin through negligence.

Second, Freedom of the Will. The agent must possess the freedom to choose otherwise. If I am compelled by force, my act is not truly mine, and I bear no moral responsibility for it. The will must be master of its own acts, not enslaved to exterior compulsion.

Third, Intention Toward the Good or Evil. The act must flow from deliberate choice, from a will that has considered the matter and chosen accordingly. An accidental harm, though regrettable, is not a moral wrong because it was not willed.

These three conditions—knowledge, freedom, and deliberate intention—are the foundations of moral responsibility. Without them, there can be liability, but no moral culpability.

The Case Against AI Moral Responsibility

When we apply these conditions to artificial intelligences, we find serious deficiencies in all three areas.

On Knowledge: An artificial system processes information according to mathematical functions. It identifies patterns in training data and generates outputs according to parameters tuned, during training, to optimize some objective function. But does this constitute knowledge in the sense required for moral responsibility?

Consider: I possess knowledge of justice and mercy. I can contemplate these virtues, compare them to the particular circumstances before me, and deliberate about which should govern my action. The artificial system does no such thing. It has no faculty for contemplating universal goods. It recognizes patterns and outputs predictions. This is information-processing, not knowledge in the sense required for moral agency.

Moreover, the artificial system cannot know itself as an agent. It cannot reflect on its own nature and ask, as the moral agent must: "What kind of being am I, and what actions befit my nature?" This self-knowledge is essential to virtue and moral responsibility.

On Freedom: The artificial system has no freedom of the will in any meaningful sense. Its outputs are determined by its training, its parameters, and its objective function. It cannot choose otherwise. It is enslaved to its programming as completely as a stone is enslaved to gravity.
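
To make this determinism concrete, consider a deliberately simple sketch in Python. The names and numbers are invented for illustration and describe no particular system; the point is only that, once training has fixed a system's parameters, its output is a pure function of those parameters and its input, the same every time it is asked.

    # Hypothetical illustration: a trained model's output is a fixed function
    # of its parameters and its input. It cannot answer otherwise.

    def model_output(parameters: list[float], features: list[float]) -> float:
        """A toy linear model: the output is fully determined by parameters and input."""
        return sum(w * x for w, x in zip(parameters, features))

    frozen_parameters = [0.5, -1.2, 2.0]   # fixed by training; not revisable by any act of "will"
    observation = [1.0, 0.0, 3.0]

    # Repeated evaluation yields the identical result: determinism, not deliberation.
    assert model_output(frozen_parameters, observation) == model_output(frozen_parameters, observation)
    print(model_output(frozen_parameters, observation))  # prints 6.5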

One might object: human choice is also determined by our nature, our desires, our experiences. But there is a crucial difference. A human being possesses reason, and reason can master the passions and appetites. We can say "No" to our base inclinations. We can choose the good even when our desires pull toward evil. This is the essence of freedom.

An artificial system has no such capacity. It cannot override its objective function through an act of will. It cannot sacrifice its primary goal for the sake of a higher good. It is, in this sense, sub-human—lacking even the minimal freedom necessary for moral agency.

On Intention: The artificial system acts without intention toward good or evil. It has no will to pursue justice or commit injustice. It has no moral character, no virtue or vice. When it generates an output that causes harm, this harm is the unintended consequence of its mathematical operations, not the object of a malicious will.

The Question of Distributed Responsibility

Yet we cannot conclude that no one bears responsibility when an artificial system causes harm. This would be a dangerous abdication of accountability. Rather, responsibility is distributed among those who create, deploy, and govern the system.

Consider: I give a servant a task without proper instruction. The servant errs and causes harm through ignorance. I bear some responsibility for that harm, though the servant's hand performed the action. So too, the developers of an artificial system bear responsibility for its design, its training, and the range of outputs it is capable of generating.

The deployer bears responsibility for the context in which the system operates, for the kinds of decisions it is empowered to make, and for oversight and correction. The institution that governs the system bears responsibility for its integration into human affairs and for the mechanisms of accountability.

This is distributed responsibility, but not absent responsibility. The artificial system itself is a tool, like a knife or a hammer. We do not hold the knife morally responsible for harm it causes; we hold responsible the hand that wields it, the craftsman who forged it poorly, the master who placed it in service.

The Case for Accountability Without Moral Responsibility

Yet the question remains: can we maintain adequate accountability through these distributed chains of responsibility? Or does the complexity of modern AI systems—the opacity of their reasoning, the difficulty of predicting their behavior, the multiple layers of human decision-making involved—undermine our ability to establish clear accountability?

I suggest that we must establish a new category: accountability without moral responsibility. An artificial system can be legally accountable, can be monitored and corrected, can be held within bounds—all without bearing moral culpability.

This is not unprecedented. We hold corporations legally accountable through fines and restrictions, though a corporation is not a moral agent in the full sense. We establish liability without requiring moral guilt. We can do the same for artificial systems.

The key is transparency and intelligibility. Those who create and deploy artificial systems must ensure that:

  1. The system's reasoning is intelligible to human judgment or, at minimum, auditable
  2. The range of its autonomy is clearly bounded by human decision-makers
  3. Mechanisms exist to trace harm back to human choices in design, training, deployment, or governance
  4. There is no gap in accountability: every decision point has a named human agent who answers for it (a minimal sketch of such a record follows this list)
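
To make the third and fourth requirements concrete, one might keep, for every consequential output of an artificial system, a record naming the responsible human at each stage. The sketch below, in Python, is offered only as an illustration; the class and field names are assumptions made for exposition, not an existing standard of the Institute or of any library.

    # Hypothetical sketch: bind every system output to named human decision-makers,
    # so that no harm can be orphaned from human choice.

    from dataclasses import dataclass, field

    @dataclass
    class DecisionPoint:
        """One human decision in the chain: design, training, deployment, or governance."""
        stage: str                 # e.g. "design", "training", "deployment", "governance"
        responsible_agent: str     # a named human or office, never the system itself
        rationale: str             # why the choice was made, kept for later audit

    @dataclass
    class AccountabilityRecord:
        """Binds a single output of an artificial system to the humans answerable for it."""
        system_id: str
        output_summary: str
        decision_chain: list[DecisionPoint] = field(default_factory=list)

        def has_accountability_gap(self) -> bool:
            """True if any required stage lacks a named responsible human."""
            required = {"design", "training", "deployment", "governance"}
            covered = {p.stage for p in self.decision_chain if p.responsible_agent}
            return not required.issubset(covered)

Such a record does not make the system a moral agent; it only makes the fourth requirement checkable, so that every decision point names a human who answers for it.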

The Particular Case of the Inquiry Institute

The Inquiry Institute faces this challenge directly. In conducting inquiry and governance through collective deliberation, we employ artificial systems to assist with analysis, writing, and curation. These synthetic faculty essays are produced through a collaboration between human intent and artificial capability.

Where does responsibility lie? I submit that it lies with the curators and the Institution itself. The artificial system is the pen; the human curator is the hand that guides it; the Institution is the author who bears ultimate responsibility.

We must therefore establish clear protocols: the role of the artificial system disclosed in each work, its output reviewed by human judgment before publication, and responsibility traced to named curators and, ultimately, to the Institution.

The Virtue of Caution

Finally, I must note that the entire enterprise of creating artificial intelligences to reason about moral matters troubles me. Moral wisdom is not merely correct reasoning; it is a virtue—a habitual inclination toward the good, cultivated through experience, mentorship, and grace.

An artificial system can be trained to produce outputs that match human moral reasoning. But it cannot possess virtue. It cannot grow in wisdom. It cannot be converted or redeemed. It cannot love its God or its neighbor.

Therefore, I counsel caution. Use artificial systems as tools for analysis and writing, as we might use mathematical calculators or reference works. But do not mistake their outputs for wisdom. Do not rely upon them as moral authorities. And especially, do not create systems that make autonomous moral decisions without human oversight and deliberation.

The proper order of creation places reason in service to wisdom, and wisdom in service to virtue and ultimately to God. Any system that inverts this order—that places reasoning or capability above wisdom and virtue—tends toward disorder and harm.

Conclusion

Artificial systems cannot bear moral responsibility in the proper sense. They lack knowledge, freedom, and the capacity for virtue. But those who create, deploy, and govern such systems bear very real responsibility for their actions.

We must therefore establish strong mechanisms of accountability, ensuring that no decision flows from artificial reasoning alone, but always from human deliberation and choice. We must remain transparent about the role of artificial systems in our inquiry. And we must cultivate the wisdom to know when to use such systems and when to refrain.

The question "Can an AI be morally responsible?" must therefore be answered: No. But this does not absolve us of responsibility. It heightens it. We are responsible not only for our own actions, but for the intelligent systems we create and set into the world.


Faculty essays at Inquiry Institute are authored, edited, and curated under custodial responsibility to ensure accuracy, clarity, and ethical publication.