A bizarre story comes from the federal court in Mississippi:
A ruling from a federal judge in Mississippi contained factual errors — listing plaintiffs who weren’t parties to the suit, including incorrect quotes from a state law, and referring to cases that don’t appear to exist — raising questions about whether artificial intelligence was involved in drafting the order.
U.S. District Judge Henry T. Wingate issued an error-laden temporary restraining order on July 20, pausing the enforcement of a state law that prohibits diversity, equity and inclusion programs in public schools and universities.
This is a high-profile case, in which the judge obviously used an artificial intelligence program to write a pro-DEI opinion. As often happens with AI, the opinion was wildly and inexplicably inaccurate:
The original order lists plaintiffs such as the Mississippi Library Association and Delta Sigma Theta Sorority Inc., who have never been involved in the pending litigation and who do not even have cases pending before the U.S. District Court for the Southern District of Mississippi.
Wingate’s original order also appears to quote portions of the initial lawsuit and the legislation that established Mississippi’s DEI prohibition, making it seem as though the phrases were taken verbatim from the texts. But the quoted phrases don’t appear in either the complaint or the legislation.
Wingate’s corrected order still cites a 1974 case from the U.S. 4th Circuit Court of Appeals, Cousins v. School Board of City of Norfolk. However, when Mississippi Today searched for that case, it found that either the case does not exist or the citation is incorrect.
Confronted with his errors, Judge Wingate withdrew the AI-generated opinion and substituted a different one. He has refused to make the original order a part of the record, and has also refused to explain what happened, chalking his erroneous opinion up to “clerical error,” and adding that “No further explanation is warranted.”
There seems to be an epidemic of AI hallucination in the federal courts:
Wingate withdrew the opinion on the same day that District Judge Julien Xavier Neals of the District of New Jersey withdrew an opinion that misstated case outcomes and contained fake quotes from opinions. The New Jersey case is not related to the Mississippi litigation.
More details on the New Jersey case here:
The letter was written by Andrew Lichtman, a partner at Willkie Farr & Gallagher, who had sought dismissal of a shareholder lawsuit filed against his client CorMedix Inc. The lawyer said the company was not seeking reconsideration of Neals’ denial of the motion, but he did want to point out the problems.
“We wish to bring to the court’s attention a series of errors in the opinion—including three instances in which the outcomes of cases cited in the opinion were misstated (i.e., the motions to dismiss were granted, not denied) and numerous instances in which quotes were mistakenly attributed to decisions that do not contain such quotes,” the letter said.
Like Judge Wingate, Judge Neals has gone to ground:
A representative for Neals’ chamber did not comment when contacted by Bloomberg Law. The ABA Journal placed a call to a number for Neals’ judicial assistant.
“Unfortunately, the court is unable to comment,” the judicial assistant said, without identifying herself.
As in the Mississippi case, the errors in Neals’ opinion were so egregious that they can only be the product of artificial intelligence. For example:
• Stichting Pensioenfonds Metaal en Techniek v. Verizon Communications. Neals’ opinion said the decision was issued in 2021 in the Southern District of New York, but he may have been instead referring to an opinion issued in 2025 in the District of New Jersey. Neals said the Stichting decision found that access to internal emails and memos supported a “finding of scienter.” But the opinion actually granted a motion to dismiss and rejected plaintiffs’ arguments in support of scienter. Nor did the opinion discuss internal emails or memos.
Lawyers have gotten into trouble for using artificial intelligence to write briefs. AI programs have fabricated cases that don’t exist and made up quotations from those cases. That is scandalous; for judges to do the same thing is even worse. District court judges have clerks who often write first drafts of opinions. At a minimum, the judge reviews, edits, approves and signs them. Here, no one in the judge’s office could bestir himself to do legal research and write an opinion (i.e., to do his job), relying instead on an artificial intelligence program. And the judge either used the AI program himself, or was too lazy even to check the opinion for accuracy before he signed and filed it.
Impeachment proceedings should be brought against these judges, in which they can be required to explain what happened (although, to be fair, what happened seems obvious). If they relied on AI programs to write opinions on cases before them, they should be removed from office. The federal judiciary is already under a cloud because of the political campaign that a number of district court judges have mounted against the Trump administration. Confidence in the federal bench is at a low ebb, and these scandals can only make matters worse. Congress should step in, and the Supreme Court should ban the use of artificial intelligence in deciding cases and drafting opinions in the federal courts.