Who is responsible when AI makes a decision? Emerging doctrines and evolving regulatory frameworks.
Introduction
AI is already seriously shaking up the legal landscape. Given the plethora of contracts on the internet, large language models (LLMs) are particularly well-suited for generating both individual clauses and entire contracts. The ease of finding and generating contract content has clear upsides for practitioners and businesses alike: it saves time, reduces costs, and covers contingencies that might otherwise go unaddressed. But what happens when something goes wrong? Who is ultimately responsible?
Key Issues for Analysis
Let’s first assess how liability is determined. Existing legal frameworks were not developed to account for systems that appear intelligent yet produce unpredictable errors. According to a RAND Corporation report, liability for AI-related harms generally stems from state tort law, particularly negligence. Proving negligence, however, is far from easy.
Broadly speaking, an affirmative case of negligence requires the plaintiff to establish each of the following elements:
- Duty of care. Plaintiffs generally must show that the defendant (here, an AI company) owes them a duty of care. Most AI tools, particularly those intended for general use, are not advertised as fit for a particular purpose, and certainly not for legal advice. They typically carry disclaimers to that effect (“AI makes mistakes. Hallucinations are possible” and the like). Perhaps most importantly, the standard of care is extremely difficult to define: the field is evolving rapidly, and the process of generating answers is complicated and opaque, involving complex models trained on vast quantities of data.
- Breach of that duty. Once a duty of care is established, the plaintiff must show that the defendant breached it. Under standard tort law, this involves examining how, if at all, the defendant fell short of industry best practices or of the conduct of a similarly situated reasonable actor. Again, this is extremely difficult to do. While the number of AI companies continues to grow, only a handful of well-capitalized LLM providers have large user bases. Each model has its own strengths and weaknesses, which shift with every update, yet most LLMs offer similar functionality and generate broadly comparable content. There are occasional exceptions, such as Grok’s antisemitic outputs or Google Gemini’s historically inaccurate images, but those incidents have not involved contract clauses.
- Damages. Proving damages should be the easiest element of an AI-related tort claim: one would presumably bring a claim only if an actual injury had occurred, and most competent attorneys are aware of standing doctrine. Showing that the injury was caused by the AI provider, however, is far more difficult, as discussed below.
- Causation. In most US states, causation is broken into two sub-elements: (1) cause-in-fact; and (2) proximate cause. We discuss each one in turn:
- Cause-in-fact. Cause-in-fact, often called “but-for” causation, requires showing that, but for the defendant’s acts or omissions, the plaintiff’s injuries would not have occurred. In the case of an LLM drafting an inadequate contract clause, this is easy in theory: the missing or defective clause is likely to be the very subject of the lawsuit. In practice, however, the provider may have defenses, such as the plaintiff’s failure to set proper parameters or provide adequate context when prompting the LLM.
- Proximate cause. Proximate cause is the final and perhaps most difficult sub-element. The plaintiff must show that the LLM provider’s act or omission was the proximate cause of the injury, that is, that the harm was a foreseeable result not cut off by an intervening cause. Perhaps, as above, the plaintiff failed to prompt properly, or failed to follow up and check sources. Perhaps the plaintiff, as an attorney, had an independent duty to exercise professional judgment rather than rely on the LLM’s output. Each of these intervening acts could be characterized as a superseding cause that breaks the chain of liability.
Conclusion
State tort law, particularly negligence, is often inadequate for assessing the liability of LLM providers. Given the rapidly evolving nature of the field, it is difficult to establish a standard of care or to determine what constitutes a breach of that standard. LLM users must refine their prompting techniques and engage in sufficient follow-up. In particular, lawyers must take their professional duties seriously and ensure that any LLM-generated content is verified and independently validated.
Over time, courts will develop doctrines to handle liability for LLM-generated content. Legislatures may also play a role, particularly now that the proposed 10-year moratorium on state AI regulation was removed from the final version of the One Big Beautiful Bill Act, leaving states free to legislate. In the interim, practitioners must tread with caution, both in how they use these tools and in how they counsel clients to rely on them.
Disclaimer: This blog is for informational purposes only and does not constitute legal advice. Reading or interacting with this content does not create an attorney–client relationship. You should consult a qualified attorney for advice regarding your specific situation. Mehaffy PLLC disclaims all liability for actions taken or not taken based on this blog.
