The rise of generative AI has increased the number of lawsuits resulting from false outputs that can damage an individual’s reputation. Generative AI can directly generate defamatory content or be used to create misleading impressions, such as deepfakes. Under South African law, damage to a person’s reputation or dignity can give rise to legal claims.
AI-related defamation lawsuits are being filed with increasing regularity. The first was filed in Australia in 2023 and concerned ChatGPT: Hepburn Shire Council Mayor Brian Hood launched a defamation claim against OpenAI, the owner of ChatGPT.
The case concerned false output generated by ChatGPT, which claimed that the mayor had served prison time on bribery charges in a matter in which he was actually the whistleblower. The case was resolved in early 2024 after corrections were made to ChatGPT’s output.
Another interesting case, this time in the United States of America (USA), involved American filmmaker, journalist, and activist Robert Starbuck. His complaint, filed on April 29, 2025, set out the circumstances:
“Imagine waking up one day and finding out that a multibillion-dollar corporation was telling everyone that you were an active participant in the January 6, 2021 Capitol riot, one of the most disgraceful events in American history, and that you have been arrested and charged with misdemeanor crimes in connection with your involvement in that event.
Furthermore, imagine if these accusations were completely false…
…Finally, imagine that a technology company continued to publish these and other lies about you for nine months after you first asked them to stop.”
Lawsuit against Meta Platforms
Based on this, Starbuck filed a defamation lawsuit against Meta Platforms, Inc., the owner of the Meta AI chatbot. In August 2024, Starbuck had discovered that the chatbot’s output contained false and harmful statements about him.
According to the complaint, Starbuck “did everything in his power to alert Meta to the error and seek its cooperation in addressing the problem.”
However, despite attempts to inform the company of this, the defamatory output reportedly continued.
All references to Starbuck were eventually removed from the chatbot’s text output, but additional misinformation appears to have been added via the Meta AI voice feature, including allegations that Starbuck “pleaded guilty to disorderly conduct” in connection with the Capitol riot and that he “advanced Holocaust denialism.”
Who is responsible?
The question is: who is responsible when AI damages someone’s reputation? The Delaware Superior Court might have answered that question in this case, but a public apology from Meta’s Joel Kaplan indicated that “the parties have (already) resolved this issue” and that they are working together to reduce the risks associated with hallucinations.
Another case, also in the US, involves Mark Walters, a media personality, radio talk show host, and Second Amendment (right to bear arms) advocate who filed a defamation suit against OpenAI in 2023.
He claimed that Frederick Riehl, a journalist and editor of a news site focused on Second Amendment rights, had used ChatGPT to generate statements implicating Walters in embezzlement.
Mr. Walters sued OpenAI (the owner of ChatGPT). However, Georgia’s Gwinnett County Superior Court ruled in favor of OpenAI in May 2025 on various grounds, one of which was that Walters, as a public figure, would have to demonstrate actual malice (knowledge of falsity) on the part of OpenAI.
The court held that OpenAI was not liable. The primary rationale for this decision appears to be that ChatGPT’s inclusion of a disclaimer below the prompt bar means that a reasonable reader would recognize that ChatGPT can make mistakes.
In considering whether the disputed output conveyed a defamatory meaning as a matter of law, the court scrutinized this “hypothetical reasonable reader” test.
The court found that “[d]isclaimer or warning language affects the determination” of whether this objective “reasonable reader” standard is met.
Because of the repeated disclaimers, ChatGPT users in Mr. Riehl’s position could not believe that the output consisted of “actual facts” without verifying the information.
The order references Mr. Riehl’s testimony that he was “skeptical” of the output, knew it was “not true” and consisted of “misinformation,” and was aware of ChatGPT’s propensity to hallucinate.
Because Mr. Riehl did not believe this output, the court concluded that it could not, as a matter of law, convey a defamatory meaning. The court found that this alone would have been sufficient to rule in favor of OpenAI and grant summary judgment.
South African law
Although no such case has yet been decided in South Africa, AI platforms may not be as lucky there as ChatGPT was in the Walters case. Under South African law, such a publication is likely to be considered defamatory despite any disclaimer.
Disclaimers are not a “magic wand” that cures defamatory speech, and courts may need to take a very close look at the systems and processes that platforms employ if those platforms are required to prove that they acted without negligence.
At the very least, such platforms would have a duty to act reasonably when notified of defamatory or illegal content. There is nothing artificial about defamation lawsuits, as the AI platforms operating in South Africa will quickly find out.
Unauthorized reproduction is prohibited. © 2026 Bizcommunity.com. Provided by SyndiGate Media Inc. (Syndigate.info).