
Courts test new frontier of defamation law as AI enters the mix

Artificial intelligence illustration showing a person and laptop (Depositphotos)



Key Takeaways:

  • Court cases in Maryland, Georgia and Delaware are testing whether traditional standards apply to AI-generated falsehoods.
  • Legal experts say AI developers, platforms and users could all face defamation liability depending on human involvement.
  • Section 230 immunity for online platforms faces new scrutiny as courts decide if it applies to AI-generated speech.
  • States such as Texas and California are introducing AI governance laws.

ST. LOUIS, MO — As generative AI is increasingly used in workplaces, marketing departments and law firms, its capacity to generate false or misleading information is creating a new legal frontier — defamation by AI. Courts nationwide are now assessing whether traditional defamation standards apply when there’s no human author. Attorneys in Missouri and elsewhere say the outcomes could reshape not only how businesses use the technology but also how they’re held responsible when it spreads falsehoods.

Over the past two years, lawsuits testing what attorneys describe as “AI-assisted defamation” have emerged, with risks generally falling into four categories: hallucination, juxtaposition, omission and misquote. Hallucination occurs when an AI system fabricates information entirely. Juxtaposition happens when truthful facts about different people are conflated, falsely implying they refer to the same person. Libel by omission involves leaving out critical context in a way that alters the meaning, and misquote occurs when an AI attributes words inaccurately or distorts what was said.

The first major U.S. lawsuit to address the issue came in Georgia. In Walters v. OpenAI, radio host Mark Walters alleged that ChatGPT falsely claimed he had been sued for embezzlement by the Second Amendment Foundation and had served as its treasurer. All of it was wrong; Walters was not involved in the lawsuit or the organization.

A second lawsuit, Battle v. Microsoft, was filed in Maryland by aerospace educator Jeffrey Battle, who said Microsoft’s Bing search engine falsely connected him with a convicted terrorist of a similar name. When users searched for “Jeffery Battle,” the search engine displayed a short summary implying that the professor had been sentenced for seditious conspiracy.

Social media activist and former music video director Robby Starbuck filed two suits this year — one against Meta and one against Google — after claiming the companies’ AI systems fabricated statements portraying him as a criminal and a participant in the Jan. 6 Capitol riot. Starbuck dropped his April case against Meta in August after the parties agreed to collaborate on reducing political bias and false content. In October, he filed a new complaint against Google in Delaware state court, alleging that Bard and its successor, Gemini, created dozens of false statements accusing him of sexual assault and other falsehoods. He seeks more than $15 million in damages.

So far, none of these lawsuits has resulted in a plaintiff’s win or a clear precedent, but they are already testing the boundaries of established law. The first known case of this kind to reach a decision, Walters’ lawsuit against OpenAI, resulted in a win for the AI company. The judge dismissed Walters’ defamation claim, finding the evidence of reputational harm and fault insufficient.

“Everyone knows this thing hallucinates. Everyone knows that there’s a warning sign on there … It’s up to the person who receives the information to verify it,” said Michael L. Nepple of Thompson Coburn. “They’re (OpenAI) warning people. There’s a pop-up saying, hey, this thing may not give you the truthful answer. You need to verify it before you use it.”

Old rules and new technology

Despite the novelty of mainstream generative AI tools, much of the question comes back to traditional defamation law standards.

“If the standards of defamation law are going to apply, I don’t see anybody changing defamation law in light of AI,” said Bernard “Bernie” Rhodes, an attorney at Lathrop GPM.

Rhodes said elements like publication, falsity, harm and the defendant’s level of fault remain the anchors of any libel case, regardless of whether a human or an algorithm generates the words. He pointed to the Georgia decision as an example: only one person, a journalist testing ChatGPT, saw the false statements about Walters and immediately recognized them as untrue, so Walters couldn’t show any injury to his reputation.

“The court threw out the case because the guy couldn’t prove damage to his reputation. The person who read it did not think less of him as a result,” Rhodes said. He added that the outcome might have been different if the AI’s lies had actually been believed and spread by others.

Even when an AI’s falsehoods reach a wider audience, the key question is who bears legal responsibility for their “publication.” Under traditional defamation law, liability generally attaches to whoever communicates the false statement to a third party. Rhodes said that dynamic becomes murkier when the speaker is an algorithm rather than a person. He compared it to a wire-service situation in media law: news outlets have a qualified privilege to republish information from reputable sources, like the Associated Press, so long as they have no reason to doubt its accuracy. But because generative AI tools are known to make mistakes, it’s unclear whether journalists or users can rely on that same defense.

The answer likely depends on the status of the person defamed. For private individuals, publishing an unverified AI-generated statement could be considered negligence. For public figures, the higher “actual malice” standard from New York Times v. Sullivan applies — the plaintiff must show the publisher knew the information was false or acted with reckless disregard for the truth. Proving actual malice may be difficult when a human relies on an AI’s output unless there were clear red flags.

Joseph L. Meadows, who leads the defamation and First Amendment practice at Gordon Rees Scully Mansukhani, agrees that the core framework of libel law remains intact but sees plenty of opportunities for plaintiffs to try new theories. In his view, anyone involved in creating or disseminating an AI-generated defamatory statement could end up in the crosshairs.

“AI developers can be sued … AI platforms … could be sued … (and) anybody who’s using AI to communicate thoughts … could be at risk of a defamation claim,” Meadows said.

This played out when Walters sued the AI developer (OpenAI), while Starbuck sued the platform owners (first Meta, now Google). Meadows said exactly who will be deemed the legally responsible “speaker” may turn on how much human involvement there was in generating the content. If a journalist or businessperson lightly edits the AI output or combines it with their own words, that human user is clearly a publisher. But it becomes less clear if the process is fully automated.

“When you deal with AI-generated speech, that’s challenging, because who do you look at for the mental state of what was said?” Meadows said. “If the AI is fairly autonomous and there isn’t a lot of human involvement, perhaps you look at the developers … What did they know, or what should they have known, when they were designing the AI to create the speech?”

Section 230’s uncertain shield

Another unresolved question is whether AI companies can claim the same immunity that shields online platforms from liability. Section 230 of the Communications Decency Act, enacted in 1996, generally protects internet platforms from being treated as the “publisher” of third-party content, meaning websites aren’t liable for most defamatory posts or comments made by users.

“As a general rule, Section 230 provides the platforms with a defense for third-party content, not their own content,” Nepple said. “The only time platforms get in trouble is when they create the content.”

While Nepple said the framework is well established, others see its application to AI as less certain.

“There’s a hotly debated issue at the moment of whether or not the platform may have some defense under Section 230,” Meadows said. “Some believe Section 230 does not apply in the instance when it’s AI-generated speech, and some believe that 230 still does apply.”

He said that if humans were heavily involved in crafting the output — for instance, through specific prompt engineering or edits — a platform might argue it was still “hosting” user content.

“I think the issue hinges on whether or not the AI speech had any significant human involvement in generating the content,” Meadows said. “If there are no human beings involved at all in the speech, then I guess there is the argument that 230 wouldn’t apply in that instance.”

Benjamin J. Siders, a technology and IP attorney with Lewis Rice, expects AI companies to invoke Section 230 in their defense but is skeptical that courts will be eager to grant a blanket free pass.

“Whether it would apply to the AI company is harder to say. I would think certainly they’re going to avail themselves of that and claim that defense,” Siders said.

Traditional social networks like Reddit or Facebook remain covered by Section 230 when they passively host user posts, but an AI system like ChatGPT is actively generating content in response to a user query. The user provides the input but doesn’t dictate the exact words of the output.

“The AI company is not deciding what it’s going to say, necessarily. It’s all stochastically generated output,” Siders said. “To that, I analogize to the law of domestic and wild animals. If you keep a wild animal and it breaks out to hurt somebody … the horse or tiger did it, but it’s your animal. You’re responsible for making sure it’s safe.”

In the same way, AI companies training and deploying powerful large language models for profit may face arguments that they must cage the creature, so to speak.

Siders believes courts will be “reluctant to just give them a complete pass” and say the company has zero responsibility for what its AI says.

State legislation and future laws

While courts sort out how existing defamation and immunity doctrines apply, lawmakers have begun addressing broader risks of AI-generated content. A handful of states have passed statutes aimed at AI misuse. The Texas Responsible AI Governance Act, signed in June 2025, is one example. The law doesn’t create a private right to sue an AI company for violations; instead, it gives exclusive enforcement power to the state attorney general but establishes liability, with fines up to $200,000 per violation, for certain intentional abuses of AI. Those include using AI to facilitate crimes, create deepfakes of real people or engage in unlawful discrimination.

Defamation by AI is not explicitly covered, but rising legislative attention to AI’s impact could mean more comprehensive regulation is on the horizon. California also recently passed its own AI bill, while federal legislation has been introduced, but not passed, in previous years. Recently, Sen. Josh Hawley, R-Mo., introduced the Artificial Intelligence Risk Evaluation Act of 2025. The measure would create a program to give Congress empirical data and analysis for federal oversight of AI, ensuring that regulatory decisions are evidence-based.

Navigating AI risks in legal practice

For attorneys in Missouri, generative AI brings not just new client matters but also considerations for their own professional conduct. Lawyers are increasingly using tools like ChatGPT to draft briefs, summarize discovery materials or generate marketing content for their firms. These uses can be productive but also carry ethical and liability risks.

“There’s nothing wrong with lawyers or any other businessperson using AI as a tool … you just have to be responsible about it,” Meadows said. “Maybe it’s OK for the first draft, but … you have to … do your homework … to make sure that what you’re filing is accurate.”

The Missouri Legal Ethics Counsel and the American Bar Association have both emphasized that the duty of competence (Rule 4-1.1) requires lawyers to understand the “limitations and risks of generative AI” if they choose to use it in practice. That includes thoroughly vetting the AI’s output, protecting client information and supervising any non-lawyer assistance, like AI systems, under Rule 5.3.

“Anytime you have more speech, there is going to be an increased risk of potential defamation liability,” Meadows said.

Missouri has not yet seen a high-profile AI defamation suit, but local attorneys are watching developments elsewhere. The consensus so far is that courts will not treat AI as an excuse. Proving fault may be trickier when an AI is involved, especially under the actual malice standard, since a machine by itself has no intent. Plaintiffs will likely pivot to examining the intentions or recklessness of the humans and companies behind the AI.

Siders said the law has historically not allowed actors to evade responsibility by pointing to a lack of direct control over a harmful instrument. In the same way, he suggested, using AI must be paired with vigilance.

“I tell everybody who will listen, I like AI, I use it … It’s a useful interesting technology that could be very effective in the right hands, but it’s like anything else. It’s the Peter Parker principle: with great power comes great responsibility, and you have to understand what its limitations are,” Siders said. “It’s when you don’t (understand the limits) that you let your guard down and you can get burned.”

Ultimately, the new frontiers of defamation law in the generative AI era will require attorneys to balance innovation with established legal principles.

“We’ll probably get a lot of clarity over the next couple of years,” Siders said.
