January 30, 2025 | Westlaw Today | 4 minute read

A First Amendment challenge to a Minnesota anti-deepfake law presents the latest cautionary tale about the use of generative artificial intelligence in court filings, particularly expert submissions.

In Kohls v. Ellison, a Stanford professor submitted an expert declaration to the District of Minnesota on behalf of Minnesota Attorney General Keith Ellison addressing the dangers of AI. The declaration itself contained fake citations because the professor had used generative AI to draft it. As the court aptly stated in its January decision: “The irony.”

The case comes at a time when expanding AI usage is prompting courts to adopt rules and form committees to evaluate the role of AI in the courts and to develop guidelines addressing concerns about the accuracy and reliability of AI-generated evidence and other legal materials.

The underlying lawsuit challenged the constitutionality of a Minnesota state statute aimed at curbing the use of deepfakes to influence elections by imposing criminal penalties for the dissemination of AI-generated content within 90 days before an election. Deepfakes use generative AI to create realistic images, audio, or video of people saying and doing things that never actually happened. In defense of the statute, the Minnesota Attorney General submitted a declaration from a Stanford professor regarding the potential dangers that deepfakes pose to free speech and democracy.

When plaintiffs’ counsel noticed citations in the declaration to non-existent publications, they speculated that the declaration was produced using generative AI and that the citations were hallucinations, or made-up references produced by AI. They asked the court to exclude the declaration as unreliable. The Minnesota Attorney General’s office admitted that the expert used GPT-4o, a generative AI tool, to draft the declaration, but contended the substance of the declaration remained valid notwithstanding the errant citations.

The court excluded the declaration after finding that the expert’s unchecked use of generative AI to create the declaration “shatters [the expert’s] credibility” and rendered the declaration unreliable. The court noted that “signing a declaration under penalty of perjury is not a mere formality” and that it could not “accept false statements — innocent or not — in an expert’s declaration submitted under penalty of perjury.”

The court noted the “steep” consequences for submitting filings with hallucinated AI-generated citations and reasoned that the trust that should be inherent in signing a declaration under the penalty of perjury had been “broken.”

The court further noted that Rule 11 of the Federal Rules of Civil Procedure imposes a strict obligation on counsel to verify the accuracy of court filings. The court pointed out that generative AI, while a useful tool that “has the potential to revolutionize legal practice for the better,” may “now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content” and said it was adding “its voice to a growing chorus of courts around the country declaring the same message: verify AI-generated content in legal submissions.”

The decision is not the first, nor the last, to concern the misuse of generative AI. (Indeed, at least three opinions have since issued — against both lawyers and pro se parties — regarding admitted or suspected use of generative AI resulting in fictitious citations.) Nor is this the first case to concern the use of generative AI in relation to expert reports.

In October 2024, a New York State Surrogate’s Court confronted a damages expert’s use of generative AI in the Matter of Weber. In that case, the damages expert admitted using Microsoft’s Copilot AI tool to check his damages calculations, but he was unable to recall his exact inputs to Copilot and was unfamiliar with how the tool worked. The court’s discussion of generative AI and the admissibility of AI-assisted expert analysis should serve as both a warning and a guide for the considerations at play when experts employ AI.

The court sought to test the reliability of the damages expert’s conclusion by entering prompts of its own into Copilot. Those prompts returned three different answers, none of which matched the expert’s calculation. The court also asked Copilot “are you accurate” and “are you reliable,” which respectively returned answers that “it’s always wise to verify” and “should always be verified by experts and accompanied by professional evaluations before being used in court.”

The court ultimately held that because of the “rapid evolution of artificial intelligence and its inherent reliability issues … counsel has an affirmative duty to disclose the use of artificial intelligence” and the AI-generated evidence must be subject to the applicable test for expert reliability before it is introduced.

That holding comes as the Advisory Committee on Evidence Rules considers changes to the Federal Rules of Evidence regarding the authenticity of AI-generated materials. Specifically, one proposed rule change would hold machine-generated material to the same standard as human testimony under Rule 702. Another would amend Rule 901(b) to ensure the reliability of AI-generated material by requiring a description of the training data and program and a showing that the results are reliable.

Multiple jurisdictions are taking different steps to regulate the use of AI. The Supreme Court of Delaware, for instance, along with dozens of other courts, has implemented rules to ensure the reliability of AI and the protection of confidential information. At least 15 state supreme courts and bar associations, including those of California, New York, and Texas, have created committees and task forces dedicated to determining the most appropriate use of AI in legal practice. These proposed rules and task forces are aimed at ensuring that AI-generated evidence is authentic and accurate, and that AI used in legal practice is reliable and keeps client information confidential.

In addition, courts across the country have imposed local rules governing professional conduct as it relates to reliance on AI. With the explosion of AI tools and further advancements in generative AI around the corner, attorneys should expect more changes to individual court rules and to interpretations of the existing Federal Rules of Evidence as they apply to AI-generated content.

Article was originally published by Thomson Reuters’ Westlaw Today on January 30, 2025.