January 15, 2025

AI Expert’s Credibility Undermined by AI-Generated Citations in Deepfake Case

by Alan Brooks

In a striking example of AI-related irony, a federal judge has excluded expert testimony from a Stanford University AI researcher after discovering he used AI to generate fake academic citations in a legal declaration about the dangers of AI.

U.S. District Judge Laura M. Provinzino sharply criticized Jeff Hancock, co-director of Stanford University’s Cyber Policy Center, in a ruling on Friday that threw out his expert declaration in a case challenging Minnesota’s law on deepfakes. As the judge noted, “Professor Hancock, a credentialed expert on the dangers of AI and misinformation, has fallen victim to the siren call of relying too heavily on AI — in a case that revolves around the dangers of AI, no less.”

The case centers on Minnesota Statute Section 609.771, which imposes criminal penalties for sharing certain AI-generated political content that could influence elections. The law is being challenged by social media personality Christopher Kohls and state Rep. Mary Franson, who argue it violates First Amendment protections.

Hancock admitted using GPT-4o to help draft his declaration supporting the state’s position. The AI tool generated citations to nonexistent academic articles, which Hancock failed to verify before submitting the document under penalty of perjury.

Judge Provinzino found it “particularly troubling” that Hancock typically validates citations with reference software for academic articles but didn’t apply the same diligence to a legal declaration. “One would expect that greater attention would be paid to a document submitted under penalty of perjury than academic articles,” she wrote.

While Minnesota Attorney General Keith Ellison’s office apologized for the fake citations and requested permission to file a corrected version, the judge refused, stating that Hancock’s use of fabricated sources “shatters his credibility with this Court.”

The ruling adds to growing concerns about AI use in legal proceedings. Judge Provinzino referenced several recent cases where courts have sanctioned attorneys for including AI-generated fake citations, suggesting that lawyers may now need to specifically ask witnesses whether they used AI in preparing declarations and how they verified any AI-generated content.

Despite excluding Hancock’s testimony, the judge ultimately denied the plaintiffs’ request for a preliminary injunction against the law. She determined that the statute does not restrict constitutionally protected parody and satire, noting that content that “cannot reasonably be interpreted as stating facts about an individual” falls outside the law’s scope.

The case continues in the U.S. District Court for the District of Minnesota as Christopher Kohls et al. v. Keith Ellison et al., case number 0:24-cv-03754.
