Newsletter

Have you talked to your expert about their AI use?

Issue 18 

Lawyers should be aware of a sobering new twist on the cautionary tale of court filings containing hallucinations. In an AI deepfakes case titled Kohls et al. v. Ellison et al., a Stanford professor filed an expert declaration that included hallucinated academic studies.[i] The Hon. Laura M. Provinzino’s order in this matter excluded the expert’s testimony and cautioned that Federal Rule of Civil Procedure 11(b) may require lawyers to confirm whether their witnesses have used AI to draft their declarations, and what has been done to verify AI-generated content.[ii]  

Kohls et al. v. Ellison et al. 

Plaintiffs Christopher Kohls and Mary Franson brought a complaint against Keith Ellison, in his official capacity as the Attorney General of Minnesota, and Chad Larson, in his official capacity as County Attorney of Douglas County, Minnesota.[iii] Plaintiffs challenged Minnesota Statute Section 609.771, entitled “Use of Deep Fake Technology to Influence an Election.”[iv] The complaint sought to enjoin enforcement of the statute, and a ruling that the statute was unconstitutional under the First and Fourteenth Amendments to the United States Constitution.[v]  

Stanford professor Jeff Hancock was hired as an expert by the Office of the Minnesota Attorney General to address the influence of AI on social media and the psychological impact of deepfakes shared on social media.[vi] Plaintiffs filed a Daubert motion to exclude expert declarations, arguing in part that Professor Hancock’s declaration cited a study that did not exist.[vii]  

Mr. Ellison moved for leave to file an amended expert declaration, and Professor Hancock subsequently filed a new declaration in support of Mr. Ellison's motion that acknowledged three citation errors in his original declaration: two cited studies did not exist, and a third misidentified the authors of a study.[viii] Professor Hancock stated that he had used GPT-4o in preparing his declaration, and theorized that GPT-4o had misinterpreted the word “[cite],” which he had inserted as a placeholder reminder to add the citations manually later, as a command to generate citations.[ix] Professor Hancock further stated that without the “[cite]” placeholders he overlooked the hallucinated citations, but he stood behind the substantive points in his declaration and sought to provide new citations for the points supported by the hallucinated studies.[x]  

The Hon. Laura M. Provinzino granted plaintiffs’ motion to exclude Professor Hancock’s expert testimony, stating in part:    

Professor Hancock’s citation to fake, AI-generated sources in his declaration—even with his helpful, thorough, and plausible explanation (ECF No. 39)—shatters his credibility with this Court.  At a minimum, expert testimony is supposed to be reliable.  Fed. R. Evid. 702; see also Daubert v. Merrell Dow Pharms., Inc., 509 U.S. 579, 589 (1993) (explaining that expert testimony must be “not only relevant, but reliable”).  More fundamentally, signing a declaration under penalty of perjury is not a mere formality; rather, it “alert[s] declarants to the gravity of their undertaking and thereby have a meaningful effect on truthtelling and reliability.”  Acosta v. Mezcal, Inc., No. 17-cv-0931 (JKB), 2019 WL 2550660, at *2 (D. Md. June 20, 2019); see also In re World Trade Ctr. Disaster Site Litig., 722 F.3d 483, 488 (2d Cir. 2013) (explaining that the “under penalty of perjury” affirmation “impresses upon the declarant the specific punishment to which he or she is subjected for certifying to false statements”).  The Court should be able to trust the “indicia of truthfulness” that declarations made under penalty of perjury carry, but that trust was broken here.  Davenport v. Bd. of Trs. of State Ctr. Comm. Coll. Dist., 654 F. Supp. 2d 1073, 1083 (E.D. Cal. 2009).    

Moreover, citing to fake sources imposes many harms, including “wasting the opposing party’s time and money, the Court’s time and resources, and reputational harms to the legal system (to name a few).”  Morgan v. Cmty. Against Violence, No. 23-cv-353WPJ/JMR, 2023 WL 6976510, at *8 (D.N.M. Oct. 23, 2023).  Courts therefore do not, and should not, “make allowances for a [party] who cites to fake, nonexistent, misleading authorities”—particularly in a document submitted under penalty of perjury.  Dukuray v. Experian Info. Sols., 23 Civ. 9043 (AT) (GS), 2024 WL 3812259, at *11 (S.D.N.Y. July 26, 2024) (quoting Morgan, 2023 WL 6976510, at *7).  The consequences of citing fake, AI generated sources for attorneys and litigants are steep.  See Mata, 678 F. Supp. 3d at 466; Park, 91 F.4th at 614–16; Kruse, 692 S.W.3d at 53.  Those consequences should be no different for an expert offering testimony to assist the Court under penalty of perjury.[xi] 

Importantly, the Hon. Laura M. Provinzino cautioned lawyers:   

To be sure, Attorney General Ellison maintains that his office had no idea that Professor Hancock’s declaration included fake citations, ECF No. 38 ¶¶ 4–6, and counsel for the Attorney General sincerely apologized at oral argument for the unintentional fake citations in the Hancock Declaration.  The Court takes Attorney General Ellison at his word and appreciates his candor in rectifying the issue.  But Attorney General Ellison’s attorneys are reminded that Federal Rule of Civil Procedure 11 imposes a “personal, nondelegable responsibility” to “validate the truth and legal reasonableness of the papers filed” in an action.  Pavelic & LeFlore v. Marvel Ent. Grp., 493 U.S. 120, 126–27 (1989).  The Court suggests that an “inquiry reasonable under the circumstances,” Fed. R. Civ. P. 11(b), may now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.[xii]    

What to Take Away from Kohls et al. v. Ellison et al. 

First, when it comes to AI, no lawyer is an island unto themselves. As contemplated in Kohls et al. v. Ellison et al., a lawyer’s duty in AI-related matters may extend beyond the lawyer’s own use of AI. Lawyers must develop AI competency if they wish to proactively reduce their risk of becoming embroiled in an AI-related mishap.  

Second, if a Stanford professor who has published over 15 studies on AI[xiii] can miss hallucinated material in his court filing, so could nearly any lawyer. Lawyers, after developing AI competency, should develop an AI strategy if they wish to explore the possible benefits of AI, as well as to manage the risks associated with AI. Such a risk management strategy could include, among other things:

  • AI education for all the people in your organization;
  • Policies and procedures governing the use of AI in your organization; and
  • Checklists to identify AI-generated content and verify the accuracy of any such content filed with a court, including filings by experts.  

Reduce Your Risk by Developing AI Competency and an AI Strategy 

Lately, AI has been reminding me of microplastics – it feels like AI is showing up everywhere. And the pervasiveness of AI suggests that even those lawyers who are not using AI are going to notice AI impacting their work in an increasing number of ways.  

As illustrated by the circumstances of Kohls et al. v. Ellison et al., a proactive approach to identifying and managing AI risks is the best way to avoid unforeseen AI consequences. If you haven’t done so already, now is the time to develop your AI competency and an AI strategy. When you’re ready, I can help you get there faster with A Lawyer’s Practical Guide to AI. You can get the guide here. 

Thanks for being here. 

Jennifer
Good Journey Consulting

[i] Order Granting in Part and Denying in Part Plaintiffs’ Motion to Exclude Expert Testimony and Denying Defendant’s Motion for Leave to File an Amended Expert Declaration at 2-3, Kohls et al. v. Ellison et al., Case No. 0:24-cv-03754 (D. Minn. filed Sept. 27, 2024). 

[ii] Id. at 9-11. 

[iii] Complaint at 1, Kohls v. Ellison. 

[iv] Complaint at 1-2, Kohls v. Ellison; Minn. Stat. § 609.771 (2023). 

[v] Complaint at 8, Kohls v. Ellison. 

[vi] Expert Declaration of Professor Jeff Hancock at 2, Kohls v. Ellison. 

[vii] Plaintiffs’ Memorandum of Law in Support of Daubert Motion to Exclude Expert Declarations at 1, Kohls v. Ellison. 

[viii] Motion for Leave to File Amended Expert Declaration at 1; Declaration of Professor Jeffrey Hancock in Support of Motion for Leave to File an Amended Declaration at 1-2, Kohls v. Ellison. 

[ix] Id. at 6-7. 

[x] Id. at 1-2, 7, 10. 

[xi] Order Granting in Part and Denying in Part Plaintiffs’ Motion to Exclude Expert Testimony and Denying Defendant’s Motion for Leave to File an Amended Expert Declaration at 10-11, Kohls v. Ellison. 

[xii] Id. at 9-10. 

[xiii] Declaration of Professor Jeffrey Hancock in Support of Motion for Leave to File an Amended Declaration at 3, Kohls v. Ellison. 
