The New Frontier: AI in Depositions and Legal Proceedings

Artificial intelligence (AI) is moving quickly, and new AI applications are continually entering the legal world. While many legal professionals already employ tools such as smart notetakers and summarization assistants, a new generation of AI products is rapidly entering the field for deposition use. These advanced tools promise significant gains in efficiency along with instant insight.

When weighing AI for your proceedings, there are some important questions to ask: What are you really permitting when you allow AI participation? Who owns or accesses the output? How might the use (or misuse) of AI affect confidentiality, privilege, witness credibility, and the integrity of your case?

Below, we highlight some AI use cases, the concerns they may raise, and how you can proactively protect yourself and your clients.

What AI Tools Are Being Utilized in Proceedings?

  • Real-time or after-the-fact transcript summarization: For example, tools that turn audio/video depositions into searchable transcripts, generate summaries of key points, label speakers, and let you query by topic.
  • Behavioral video analysis / credibility / demeanor tools: According to a recent article, “AI-powered behavioral analysis addresses the fundamental challenge of objectively assessing witness credibility and emotional responses during depositions. AI systems can detect micro-expressions, vocal stress patterns, and behavioral inconsistencies with quantifiable confidence scores.” (https://sonix.ai/ai/ai-for-witness-interviews/)
  • Admissions and contradictions detection: For example, Lexitas’s Deposition Insights+ offers features including an on-demand AI assistant, “agentic search” across large transcripts and videos, extraction of contradictions and admissions, and behavioral video analysis.

Where & How Is the AI Output Stored, and What Are the Confidentiality / Privacy Concerns?

Allowing AI to access your proceedings, particularly depositions and any audiovisual recordings used to capture testimony, raises questions about data storage, access, ownership, reuse, security, and metadata. Some issues to consider:

  • Storage location, access, and retention: Many AI tools are cloud-based. Who hosts the data? Is the vendor using shared servers, multi-tenant cloud environments, or encrypted segregated storage? What are the retention policies (how long will the audio/video/transcript/analytics remain)? If the deposition video/audio is processed by an AI platform, does the vendor retain a copy (or derivative analytics) beyond the scope of your case?
  • Reuse of data, derivative analytics, or training data: Some platforms may use depositions (or aggregated anonymized data) to “train” their AI or build further analytics. Are you giving consent (explicitly or implicitly) for your data to be used this way? Could your deposition excerpt end up in a training set used for other cases or a benchmarking tool?
  • Metadata and behavioral analytics: If the platform is doing behavioral video analysis (micro-expressions, vocal stress, credibility scoring), those are derivative data layers beyond merely “what was said.” They might create profile data about the witness (demeanor, stress levels, credibility score). How is that data handled, protected, and stored?
  • Privilege, confidentiality, and attorney-client / work product concerns: Especially when you are discussing strategy, witness prep, or internal debriefing, are you sure that uploading audio/video to an AI tool doesn’t waive privilege or expose internal strategy? What if the vendor’s data handling policies are weak or ambiguous?
  • Recording/consent statutes: In some jurisdictions (including Washington), if an AI tool is recording or processing audio/video of a deposition (and it was not noted as a videotaped deposition), you must ensure compliance with local laws.

Suggested Notice/Consent Language for AI Use (or Non-Use)

To ensure that no AI (especially generative AI) is used without your express consent in a deposition or proceeding, you can use or adapt the following language in your notice or consent form:

All participants confirm that no generative AI (artificial intelligence) technology shall be used during the deposition, including but not limited to: summarizing the proceedings by audio or video means or performing real-time behavioral or credibility analysis of the deponent (or other participants), unless prior written consent of all parties is given and all parties have been advised of the nature of usage and storage of same.

You might also supplement with a clause specific to your jurisdiction. For example, in Washington:

  • Under RCW 9.73.030 (Washington’s intercept/recording statute), it is unlawful to intercept or record a private communication without the consent of all parties.
  • Washington is an “all-party consent” state (i.e., you must have consent from all participants to record a private conversation).

Additional Considerations for Legal Practitioners

Vendor diligence: If you do consent to AI usage, run a short “vendor checklist”:

  • What data is collected (audio, video, transcript, metadata, behavioral analytics)?
  • Where is it stored (jurisdiction, cloud vs local, encryption)?
  • Who has access (vendor staff, third-party subcontractors, other clients)?
  • Retention policy: how long will your files and analytics be kept, and can you delete them?
  • Reuse/training: does the vendor use your data (even anonymized) to train their models or provide metrics to other clients?
  • Security certifications (SOC 2, ISO 27001, etc.), breach notification policy.
  • Ownership: who owns the transcripts, analytics, video clips, metadata? Can you get copies?

Disclosure to opposing party: Even if you choose not to use AI, consider discussing with the opposing party whether they plan to use AI. If they do, you may want to negotiate specific carve-outs or limitations (for example, “no behavioral analytics without agreement,” or “vendor must certify data deletion after 90 days,” etc.).

Witness awareness & consent: Make sure the deponent or witness is aware of any recording/AI usage, especially if behavioral or video analytics are involved. Transparency fosters fairness and helps protect against arguments of surprise or unfair tactics.

Privilege/work-product boundaries: If you upload internal witness-prep videos or internal strategy session recordings to an AI platform, consider the implications for privilege and confidentiality. Make sure vendor contract terms preserve confidentiality and do not inadvertently waive your protections.

Future-proofing & method transparency: AI is evolving rapidly; what is novel today may be commonplace tomorrow. Consider building into your agreements language about versioning (what algorithm version was used?), explainability (can the vendor show how the analytics were derived?), and chain of custody (who touched the data, when, and how?). These will matter if you ever face admissibility challenges or motions in limine concerning AI-derived evidence.

It is important to educate your team. Ensure your paralegals, legal assistants, and team members understand the technology, potential risks, and your internal policies. Consider adding a short internal checklist or memo to your teams so that everyone is asking the right questions before any AI tool is engaged in a case.

As AI continues to reshape legal proceedings, having the right partner matters. If you have questions about this blog post and/or how we can support your cases, the team at Buell Realtime Reporting is here to help—reach out any time to start the conversation.