Expert Witnesses Using AI
A federal court issued a stern order to show cause against a defense attorney who filed a legal brief containing nearly 30 serious errors, which the attorney admitted were partially generated by AI without proper verification. The case, Coomer v. Lindell, sends a clear warning to expert witnesses: generative AI demands the same rigorous verification standards that apply to all professional work submitted to courts.
Imagine being cross-examined on the stand, only to discover that a key citation in your expert report references a case that does not exist. A version of that nightmare became reality for a defense attorney in Coomer v. Lindell, a case involving Mike Lindell, CEO of MyPillow, Inc. The attorney admitted that he used generative AI to help draft a legal brief and failed to verify the AI-generated content before filing. The result? Nearly 30 serious errors and a scathing judicial order demanding an explanation for why sanctions should not be imposed.
While this case centers on attorney misconduct, the implications extend directly to expert witnesses, particularly valuation professionals who testify in litigation.
The Court's Ruling and Its Foundation
The judge's response was unequivocal. The attorney violated the fundamental rule requiring that all court filings be grounded in law and fact. By submitting AI-generated content without verification, the attorney failed to meet basic professional standards expected of officers of the court.
For expert witnesses, the parallel is direct. Federal Rule of Evidence 702 requires that expert opinions be based on sufficient facts or data and be the product of reliable principles and methods. An expert report containing inaccurate legal references, fabricated case citations, or mischaracterized facts would fail this standard, whether those errors originated from AI or any other source.
Why This Matters for Valuation Experts
Valuation experts face unique pressures in litigation. You are often working under tight deadlines, dealing with complex financial data, and explaining technical concepts to judges and juries who may have limited financial expertise. The temptation to use AI tools to draft sections of reports or summarize case law is understandable.
However, Coomer v. Lindell demonstrates that efficiency cannot come at the expense of accuracy. Opposing counsel scrutinize expert reports line by line, searching for any weakness to exploit during cross-examination. If you rely on AI-generated content that proves wrong or misleading, you may be publicly discredited in court.
Consider the practical scenarios. An AI tool might generate a summary of comparable transaction data that mischaracterizes key deal terms. It could cite valuation case law that has been overturned or misstate the holdings of relevant precedents. It might produce financial calculations that appear sophisticated but contain fundamental methodological errors. In each scenario, the expert bears full responsibility.
The Growing Expectation of Transparency
Beyond verification, there is a growing expectation that professionals involved in litigation disclose whether they used AI in preparing documents or opinions. Courts are beginning to require transparency around AI usage. If an expert conceals reliance on AI or fails to verify its output, that expert may face judicial criticism similar to what we saw in Coomer v. Lindell.
The solution is straightforward: treat AI-generated content the same way you would treat work produced by a junior analyst. Review it thoroughly, verify every factual assertion, and take full responsibility for the final product.
Practical Steps for Expert Witnesses
First, establish a clear verification protocol. If you use AI to draft any portion of an expert report, create a checklist that requires you to independently verify each factual claim, each citation, and each calculation. Document this verification process so you can demonstrate your diligence if questioned.
Second, understand the limitations of generative AI. These tools are sophisticated pattern recognition systems, not legal or financial experts. They can hallucinate case citations, misstate legal principles, and generate plausible-sounding analysis that is fundamentally flawed. Your professional judgment cannot be delegated to an algorithm.
Third, consider disclosure. While not yet universally required, proactively disclosing your use of AI tools demonstrates transparency and may protect you from allegations that you concealed relevant information.
The Integrity of Expert Testimony
At its core, this case is about professional integrity. The court system depends on experts who provide reliable, well-founded opinions. When you take the stand as an expert witness, you are helping judges and juries make decisions that affect real people and real businesses. That responsibility demands the highest standards of accuracy and truthfulness.
Generative AI can be a valuable tool in your practice. It can help you research faster, draft more efficiently, and manage larger volumes of information. But it is exactly that: a tool. The professional judgment, the verification, and the ethical responsibility remain squarely with you.
Key Takeaways
The Coomer v. Lindell ruling establishes that AI-generated content must be verified to the same standard as any other professional work submitted to courts, with failure to do so potentially resulting in sanctions and referrals for professional discipline.
Expert witnesses operating under Federal Rule of Evidence 702 must ensure all opinions are based on sufficient facts and reliable methods. AI-generated errors in expert reports can result in excluded testimony, diminished credibility, and harm to both the expert and the retaining party.
Courts are moving toward requiring disclosure of AI usage in litigation materials. Valuation experts should establish verification protocols, understand AI limitations, and consider proactive disclosure to maintain transparency and professional integrity.
Opposing counsel actively search for weaknesses in expert reports. Unverified AI content creates vulnerabilities that can be exploited during cross-examination, potentially discrediting the expert and damaging the client's case.
Conclusion
Coomer v. Lindell serves as a critical warning for all professionals who appear in court, including valuation experts. The message is clear: generative AI is a powerful tool, but it requires the same rigorous verification and professional judgment that has always been expected of expert witnesses. Your credibility, your client's interests, and the integrity of the judicial process all depend on maintaining these standards.
Source: BVWire, June 2025