When AI Disclosures Say Too Little and Mean Too Much
Editor’s Note: The question regulators are asking has changed. It’s no longer whether an organization disclosed its use of AI; it’s whether anyone reading that disclosure genuinely understands what it means. That distinction, as the expert panel made clear during HaystackID’s recent webcast, “Meaningful Transparency in AI: What Privacy Laws Actually Require,” is what separates defensible disclosure from regulatory exposure. Vague language, inconsistent messaging, and policies that don’t reflect operational reality are the threads that enforcement investigations pull on first. The organizations that will fare best are those that can show their disclosures mean what they say, at every layer, across every team and third-party relationship. Transparency has stopped being a drafting problem and become a governance one.
By HaystackID Staff
Somewhere between a submitted résumé and a hiring manager’s inbox, AI made a decision about whether a candidate was worth someone’s time. The privacy notice on the company’s website mentions using AI “for internal business purposes.” Nobody flagged that ambiguous language as a problem or asked what it meant… until a regulator did.
That gap was the through line legal tech experts traced during a recent HaystackID® webcast, “Meaningful Transparency in AI: What Privacy Laws Actually Require.” The panel’s concern was not the AI itself but the language, or the lack of it, that organizations use to explain how they deploy the technology.
“Transparency used to mean disclosure,” webcast moderator Christopher Wall, HaystackID’s Data Protection Officer, said to kick off the webcast. “Now, transparency means comprehension.”
That distinction carries weight. A disclosure that is difficult to understand is more than unhelpful; it creates exposure. A company might describe its AI use in broad strokes in its marketing materials, while offering more specific details on its website. Neither version may be wrong on its own, but together they leave the consumer with conflicting information.
“Organizations struggle to come up with a consistent approach across their different business units,” said webcast panelist Ken Suh.
This lapse not only creates a poor user experience but also leaves a paper trail. When regulators or litigants start pulling threads, contradictions between public-facing notices and internal practices are what they’re looking for. The disclosure that seemed defensible in isolation looks different when set alongside an internal policy that says something else entirely.
Transparency as an Enforcement Trigger
Across the EU, the UK, and dozens of U.S. states, the expectation is increasingly consistent: explain how AI works, what data it relies on, and how it affects the people on the receiving end of its decisions. That expectation builds on existing privacy frameworks, such as the GDPR and CCPA, which already require transparency around data use and automated decision-making.
The EU AI Act codifies those obligations for deployers and providers, with phased implementation continuing through 2026. In the U.S., state-level laws targeting AI in employment, healthcare, and consumer-facing applications sit alongside federal executive orders on AI development and safety.
What makes the enforcement picture particularly sharp is that AI-specific law isn’t the only tool available. State attorneys general don’t need a dedicated AI statute to act.
“State attorneys general’s offices have very broad and powerful mandates under consumer fraud,” said webcast panelist Patrick Zeller, General Counsel at JetStream Security, who predicted an increase in straightforward consumer fraud actions brought to enforce privacy notices and AI regulations.
“There’s a basic fairness and transparency under consumer fraud that requires you to give proper disclosure to consumers,” he said.
Zeller framed non-disclosure less as a technical violation and more as a breach of contract. If a company isn’t telling consumers what it’s doing, it’s taking something they didn’t agree to give.
Private litigation follows the same pattern. Suh described the historical arc: state attorneys general identify the issues, enforcement actions establish exposure, and class actions multiply. The first targets, he explained, tend to be organizations that have already drawn public attention for other reasons, which means reputational risk often arises before legal risk.
The Clearview AI case runs through this arc at scale. By scraping billions of images from public websites to build a biometric database sold to law enforcement and private companies, Clearview triggered enforcement actions across multiple jurisdictions, including litigation under Illinois’ Biometric Information Privacy Act and investigations by European data protection authorities over unlawful data collection practices.
Webcast panelist Aleida Gonzalez, Global Advisory Managing Director at HaystackID, drew a hard line: improper collection doesn’t become acceptable just because the data is useful. If consumers weren’t given proper notice or a chance to opt out, the data is tainted from the start. Beyond regulatory exposure, unclear AI disclosures can disrupt operations, erode trust, and increase the cost of responding to investigations.
The Internal Problem Behind External Failures
Before explaining to individuals how AI affects them, organizations need to know the answer themselves. That sounds painfully obvious. In practice, it’s where most fall short.
Many companies deploy AI across the enterprise, often quickly and in response to competitive pressure, without building a unified picture of where it lives, what data it touches, or how it influences decisions. Without “an inventory of where you’re using AI,” Zeller said, the leap to specific, meaningful disclosure is nearly impossible. That lack of visibility becomes more critical as the level of scrutiny—and risk—varies by use case, with areas such as biometric data, hiring, and healthcare drawing the most immediate regulatory attention.
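To make that inventory concrete, here is a minimal sketch of what a single entry might capture. Everything in it, the AIInventoryEntry class, its field names, and the example values, is a hypothetical illustration rather than a schema the panel endorsed or any law prescribes.

```python
from dataclasses import dataclass

# Hypothetical sketch of one AI inventory entry. The class and field
# names are illustrative assumptions, not a standard or legal schema.
@dataclass
class AIInventoryEntry:
    system_name: str              # the tool or model in question
    business_unit: str            # who owns and operates it
    purpose: str                  # the decision or task it supports
    data_categories: list[str]    # personal data it touches
    affects_individuals: bool     # does it influence decisions about people?
    human_review_available: bool  # can an affected person request review?
    disclosure_url: str           # the public notice describing this use
    last_reviewed: str            # when this record was last verified

entry = AIInventoryEntry(
    system_name="resume-screening-tool",
    business_unit="HR",
    purpose="Rank incoming resumes against posted job qualifications",
    data_categories=["resume text", "employment history", "education"],
    affects_individuals=True,
    human_review_available=True,
    disclosure_url="https://example.com/privacy#ai-hiring",
    last_reviewed="2025-01-15",
)
```

Even a record this small answers the questions a regulator asks first: where AI runs, what data it sees, and which public notice is supposed to describe it.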
The documentation problem makes that gap worse. Many organizations license governance software that comes with policy templates and assume the work is done. Gonzalez sees the consequences regularly. The notices get published, the policies get filed, and nobody goes back to check whether any of them reflect what the organization actually does.
A notice that looks complete on paper but diverges from operational reality doesn’t just fail consumers. In the hands of a regulator or a plaintiffs’ attorney, it becomes evidence. The solution does not require more language. It requires better language.
Users expect explanations that answer three core questions:
- What is AI doing?
- What data does it use?
- How does it affect me?
In practice, that means replacing abstract statements with concrete descriptions, going from “we use AI for internal business purposes” to “we use AI to screen résumés based on defined qualifications. You can request a human review.”
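One way to operationalize those three questions is a pre-publication check that flags a draft disclosure until each has an answer. The sketch below is a hypothetical illustration; the key names and the missing_answers helper are assumptions, not a compliance standard.

```python
# Hypothetical check that a drafted disclosure answers the three core
# questions before it ships. The required keys are assumptions that map
# one-to-one to the questions above; they are not a regulatory standard.
REQUIRED_ANSWERS = {
    "what_it_does": "What is AI doing?",
    "data_used": "What data does it use?",
    "effect_on_you": "How does it affect me?",
}

def missing_answers(disclosure: dict[str, str]) -> list[str]:
    """Return the core questions a draft disclosure leaves unanswered."""
    return [
        question
        for key, question in REQUIRED_ANSWERS.items()
        if not disclosure.get(key, "").strip()
    ]

vague = {"what_it_does": "We use AI for internal business purposes."}
specific = {
    "what_it_does": "We use AI to screen resumes against defined qualifications.",
    "data_used": "Resume text, employment history, and education.",
    "effect_on_you": "AI ranks your application; you can request a human review.",
}

print(missing_answers(vague))     # ['What data does it use?', 'How does it affect me?']
print(missing_answers(specific))  # []
```

The vague boilerplate fails two of the three questions on contact; the specific version, like the résumé-screening language above, passes all of them.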
Pinpointing the Breaking Point
Transparency failures accumulate quietly across layers that nobody is watching all at once.
Consumer-facing notices are the most visible failure point, but often not the first. Employee policies struggle to keep pace with the rapid adoption of AI across hiring, performance management, and termination workflows. A company might update its hiring tool without touching the disclosure. The gap opens quietly and stays open.
Vendor relationships introduce another layer. Data doesn’t stay inside an organization. It moves to law firms, consultants, and third-party processors, many of whom apply their own AI tools to it. Zeller described what he sees regularly: companies updating contractual agreements, auditing non-disclosure agreements, and trying to understand whether anyone downstream is using AI in ways that create additional exposure. Most find the answer more complicated than they expected.
Internal governance is where the fragmentation becomes hardest to defend. Policies exist. Impact assessments are filed. But when an investigator arrives and starts comparing what the documents say to what the organization actually does, the distance between those two tends to be the story. Each layer compounds the one before it—and together they hand regulators and litigators exactly the inconsistency they’re looking for.
Say What You Do and Do What You Say
Regulators aren’t looking for perfect disclosures. They’re looking for honest ones, from organizations whose behavior aligns with what they disclose.
That alignment has to hold across everything: privacy notices, internal policies, data flows, vendor agreements, and actual system behavior. When those elements drift apart, the gaps become the story. Inconsistency across business units is one of the most common ways that drift starts. HR updates a hiring tool; the disclosure never changes. Legal signs a vendor agreement without flagging the AI clause. Nobody is lying, exactly, but nobody is telling the same story either.
The challenge of maintaining consistency compounds at scale. An organization operating across multiple states or international jurisdictions has to satisfy dozens of overlapping regulatory expectations while keeping disclosures concise enough that someone will actually read them. There’s no universal template. But the expectation running through every framework is the same: say what you do and do what you say.
That consistency starts with clarity about purpose. When AI expands what an organization can do, the temptation is to do more without stopping to ask whether more fits the mission.
From there, execution becomes more practical. The organizations that get this right use plain language rather than technical shorthand, layer disclosures so users can find the level of detail they need, and treat those disclosures as living documents rather than static ones. They revisit language as AI use evolves. They make it easy to ask questions. And they build internal checkpoints—active AI inventories, documented data flows, governance approvals—so that when a regulator asks what the organization does with AI, the answer doesn’t have to be reconstructed from scratch.
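As one example of such a checkpoint, a hypothetical staleness check could compare when each system last changed against when its public disclosure was last reviewed, surfacing exactly the hiring-tool scenario described above. The field names and the stale_disclosures helper are illustrative assumptions, not a prescribed control.

```python
from datetime import date

# Illustrative checkpoint: flag inventory entries whose system changed
# after the public disclosure was last reviewed. The field names echo
# the hypothetical inventory sketch earlier; this is one possible
# control, not the panel's method or a legal requirement.
def stale_disclosures(entries: list[dict]) -> list[str]:
    """Return systems whose disclosure predates their last change."""
    return [
        e["system_name"]
        for e in entries
        if date.fromisoformat(e["system_last_changed"])
        > date.fromisoformat(e["disclosure_last_reviewed"])
    ]

inventory = [
    {"system_name": "resume-screening-tool",
     "system_last_changed": "2025-03-02",
     "disclosure_last_reviewed": "2024-11-20"},  # changed after review: flag
    {"system_name": "support-chat-assistant",
     "system_last_changed": "2024-10-01",
     "disclosure_last_reviewed": "2024-12-05"},
]

print(stale_disclosures(inventory))  # ['resume-screening-tool']
```

A check like this doesn’t prove a disclosure is accurate, but it catches the quiet gap that opens when a system evolves and its notice doesn’t.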
None of that eliminates risk. It reduces it by closing the distance between what an organization says and what it actually does. That distance is often where enforcement begins.
Out with the Old and In with the Defensible
AI transparency has moved. It’s no longer a line item on a compliance checklist; it’s among the first things regulators, courts, and consumers measure organizations against.
The organizations that treat it as a documentation exercise will find that out the hard way. Those that build it into governance, workflows, and user experience—so that what they say and what they do are the same thing at every layer—are the ones positioned to withstand scrutiny when it arrives. That work requires visibility into where AI operates, discipline around how that information gets communicated, and the organizational structure to keep it current as AI use evolves.
Wall closed the session with the standard that ties it together: “Transparency is not saying you use AI. It’s explaining AI, honestly.”
That’s the shift. Disclosure was the old bar. Comprehension is the new one, and the distance between them is where most organizations still have work to do.
Go from vague disclosures to defensible AI governance with HaystackID AI Governance Services. Learn how we can help your organization build the controls, documentation, and oversight frameworks that withstand scrutiny.
About HaystackID®
HaystackID® solves complex data challenges related to legal, compliance, regulatory, and cyber requirements. Core offerings include Global Advisory, Cybersecurity, Core Intelligence AI™, and ReviewRight® Global Managed Review, supported by its unified CoreFlex™ service interface and eDiscovery AI™ technology. Recognized globally by industry leaders, including Chambers, Gartner, IDC, and Legaltech News, HaystackID helps corporations and legal practices manage data gravity, where information demands action, and workflow gravity, where critical requirements demand coordinated expertise, delivering innovative solutions with a continual focus on security, privacy, and integrity. Learn more at HaystackID.com.
Assisted by GAI and LLM technologies.
SOURCE: HaystackID