The Discovery Mismatch: Old Doctrine, New Systems
Editor’s Note: Discovery in 2026 is no longer defined by volume alone. Legal teams are now handling data sources created, transformed, and discarded in ways the rules of civil discovery were never designed to anticipate, from auto-deleting collaboration platforms to AI systems that generate summaries and records without human review. Courts are responding by applying long-standing discovery principles to these modern systems, raising new questions about intent, governance, and credibility. This article builds on insights from HaystackID’s recent webcast, examining how recent decisions are shaping expectations around preservation, proportionality, and defensibility. As the cases discussed make clear, the growing risk for organizations lies not in innovation itself, but in relying on default settings and passive governance as discovery obligations continue to evolve.
By HaystackID Staff
Discovery doctrine has traditionally assumed a relatively stable relationship between records and human behavior. Emails were written intentionally. Documents were saved deliberately. Retention policies, once set, governed records that behaved predictably over time.
That assumption no longer holds.
Modern collaboration platforms automatically record meetings, generate transcripts, summarize discussions, and delete underlying data on preset schedules. AI tools create prompts, outputs, and derivative content at machine speed—often without a clear owner or review process. Information now appears, transforms, and disappears in ways that strain the traditional discovery framework. Yet when disputes arise, courts continue to ask familiar questions:
- What should have been preserved?
- At what point did the legal obligation to preserve relevant information begin?
- Was the organization acting reasonably and in good faith?
That tension framed the recent HaystackID® webcast, “eDiscovery Lessons for 2026: Spotlighting the Top ESI Cases and Trends from 2025,” which examined how courts are responding to AI systems, ephemeral communications, and modern data practices. What emerged was not a rejection of precedent or a judicial scramble to invent new rules. Instead, courts sent a clearer and more demanding signal: traditional discovery principles still apply, and organizations are expected to apply them deliberately, consistently, and with foresight across technologies never designed with litigation in mind.
The cases discussed during the program reflect a broader recalibration. As data sources multiply and automation accelerates record creation, courts are less interested in novelty and more focused on intent, governance, and credibility. The message heading into 2026 is not that discovery is broken, but that passive reliance on defaults has become indefensible.
Deletion Defaults Still Work… Until They Don’t
Recent decisions around collaboration and video platforms reaffirm a principle long embedded in discovery law: automatic deletion policies can protect organizations when they predate any duty to preserve and are applied consistently. Courts declined to impose sanctions where recordings disappeared under established retention rules before litigation became reasonably foreseeable.
“We’re not going to impose any sanctions because these recordings went away before the duty to preserve attached,” said webcast moderator and eDiscovery attorney Phil Favro, Founder, Favro Law LLC.
To some observers, those outcomes felt counterintuitive. In reality, the courts’ decisions reflect continuity rather than change. Courts have long held that information governance policies, when implemented in good faith and followed consistently, do not become suspect simply because litigation arises later.
From the bench, the reasoning remains pragmatic, according to webcast panelist Judge Allison H. Goddard, Magistrate Judge for the U.S. District Court, Southern District of California, who said, “because they had policies in place and they were following their policies, they didn’t get sanctioned.”
Yet the absence of sanctions does not signal a safe harbor. Selective retention choices—where employees decide which meetings to preserve and which to allow to lapse—introduce risk even when those decisions occur before a preservation duty attaches. Inconsistent retention patterns invite questions about intent, completeness, and fairness that no policy language can fully neutralize.
As collaboration platforms continue to blend chat, voice, video, and shared workspaces, the margin for error narrows. Organizations entering 2026 cannot rely solely on deletion defaults. Those defaults remain defensible only when paired with clear escalation paths, well-understood preservation triggers, and disciplined override mechanisms once legal obligations arise.
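The interplay between a deletion default and a preservation override can be sketched in pseudocode-style Python. This is a minimal illustration, not any platform’s actual API; the class and field names (Recording, legal_hold, retention_days) are hypothetical, and real retention systems involve far more state. The key point it models is the one courts have emphasized: the automatic schedule governs until a duty to preserve attaches, at which point the override must take priority.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical model of a recorded meeting subject to a retention schedule.
# Field names are illustrative only.
@dataclass
class Recording:
    created: date
    retention_days: int      # the deletion default set by policy
    legal_hold: bool = False  # flipped once a duty to preserve attaches

def should_delete(rec: Recording, today: date) -> bool:
    """Apply the deletion default, with the legal-hold override taking priority."""
    if rec.legal_hold:
        # A preservation duty suspends the default schedule entirely.
        return False
    return today >= rec.created + timedelta(days=rec.retention_days)

# A recording past its 30-day window is deletable -- unless a hold was placed.
rec = Recording(created=date(2026, 1, 1), retention_days=30)
print(should_delete(rec, date(2026, 2, 15)))  # True: window expired, no hold
rec.legal_hold = True
print(should_delete(rec, date(2026, 2, 15)))  # False: hold overrides the default
```

The design choice worth noting is that the hold check comes first and is unconditional: a defensible program never lets the retention clock outrank an attached preservation duty, no matter how the policy is otherwise configured.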
The Hidden Risk Lurking in AI-Generated Meeting Notes
If deletion policies raise one category of concern, AI-generated meeting artifacts raise another, often quieter and more difficult to unwind. Transcripts, summaries, prompts, and autogenerated notes frequently persist long after the underlying recording disappears, creating written records that may be incomplete, inaccurate, or misleading.
“These AI-generated transcripts of meetings are particularly problematic,” said webcast panelist Ruth Hauswirth, Special Counsel and Head of Litigation & eDiscovery at Cooley LLP. “Unless somebody is going and verifying the accuracy of these transcripts… we don’t know if they’re actually accurate.”
That uncertainty matters because transcripts and summaries carry an inherent sense of authority. They look final. They read as complete. In adversarial or regulated settings, they may be treated as definitive, even when they omit tone, context, or nuance that shaped the original conversation. The risk intensifies when organizations retain transcripts without retaining recordings, leaving no reliable way to verify what was actually said.
Summaries amplify the problem. Unlike transcripts, summaries are interpretive by design. They compress discussion into conclusions, often without signaling that judgment was applied, or that it was applied by a system rather than a person. In litigation, those summaries can take on outsized significance, particularly when they capture sensitive discussions or legal strategy in simplified form.
The deeper issue is retention without purpose. Information that provides value on day one may carry disproportionate risk weeks later. As Eric Stansell, Senior Counsel for Discovery at Tyson Foods, said during the webcast, “there’s frankly more value in getting rid of data than keeping it too long.”
For legal teams, this year demands a shift in mindset. AI-generated derivatives require the same, if not greater, discipline than emails and chat messages. If organizations choose to generate them, they must explicitly decide how long they remain valid, who owns them, and when they stop serving a legitimate purpose.
Privacy Enforcement Has Entered the Discovery Chat
Privacy enforcement no longer sits adjacent to discovery; it shapes it. Regulatory actions in 2025 made clear that privacy failures surface quickly once litigation or an investigation begins, expanding scope, cost, and exposure before discovery even starts.
Regulators are scrutinizing not policy language but operational reality. Opt-out mechanisms, retention controls, and third-party data sharing must function as promised across systems and over time. When they do not, enforcement actions introduce years of monitoring, audits, and reporting obligations that reshape how organizations collect, preserve, and produce information.
Favro framed the issue in terms of consequence rather than intent. Privacy missteps often occur long before litigation is anticipated, yet they dictate what data exists, where it resides, and how defensible an organization’s practices appear once discovery obligations attach.
As Stansell shared, the immediate penalty may appear manageable, but the real impact lies in “mandatory annual review” and long-term oversight. Those obligations introduce persistent discovery friction, forcing organizations to account for historical data flows they can no longer explain cleanly.
Judge Goddard captured the credibility dimension succinctly: “If you’re telling customers you’re not going to sell their information and then you do it through a third party, it’s not a great practice for a company in the long term anyway.”
In litigation, those gaps rarely remain isolated to privacy claims alone. Data collected or shared improperly often becomes discoverable evidence, drawing privacy failures into unrelated disputes. Discovery readiness increasingly depends on whether privacy controls align with actual data handling practices, not just regulatory expectations.
Applying Old Rules to New Systems, Intentionally
The cases shaping discovery heading into 2026 show courts applying familiar standards—preservation, relevance, proportionality, privilege, admissibility—to data environments that behave nothing like the records those standards were built around. Auto-deleting collaboration tools, AI-generated summaries, and privacy-driven data flows all require earlier, more deliberate decisions.
Defaults are no longer neutral. Retention settings, summaries, labels, and opt-out mechanisms carry legal consequences long before litigation begins. Courts continue to reward organizations that act intentionally, document purpose clearly, and align policy with practice. They show less patience for governance by convenience or assumption.
The rules remain the same. The discipline required to apply them has increased. In 2026, defensibility will depend less on technology choice and more on how thoughtfully organizations govern the systems they already rely on.
What This Means for Your AI Strategy
The webcast discussion naturally leads to the next question: how do these principles apply in practice as organizations roll out AI tools? Our upcoming program addresses exactly that.
Join us on February 25 for the webcast, “Real Benefits, Real Constraints: A Practical Guide to Copilot Rollout.” During the program, industry leaders will discuss how to:
- Understand how Copilot uses your data.
- Translate leadership objectives into business-owned pilots with defined outcomes.
- Get the best results through strong information management and governance practices.
Attendees will also learn how to balance near-term enablement with risk management, with practical methods for measuring value, adoption, and control.
About HaystackID®
HaystackID® solves complex data challenges related to legal, compliance, regulatory, and cyber requirements. Core offerings include Global Advisory, Cybersecurity, Core Intelligence AI™, and ReviewRight® Global Managed Review, supported by its unified CoreFlex™ service interface. Recognized globally by industry leaders, including Chambers, Gartner, IDC, and Legaltech News, HaystackID helps corporations and legal practices manage data gravity, where information demands action, and workflow gravity, where critical requirements demand coordinated expertise, delivering innovative solutions with a continual focus on security, privacy, and integrity. Learn more at HaystackID.com.
Assisted by GAI and LLM technologies.
SOURCE: HaystackID