[Webcast Transcript] Meaningful Transparency in AI: What Privacy Laws Actually Require

Editor’s Note: Transparency used to be a best practice. Now it’s a compliance risk. As AI adoption accelerates, privacy laws and emerging AI frameworks are converging on one shared standard: disclosures that are clear, consistent, and actually meaningful to the people they affect. In a recent HaystackID® webcast, “Meaningful Transparency in AI: What Privacy Laws Actually Require,” data privacy experts go beyond checkbox compliance to examine what transparency really demands: alignment between policies, practices, and real-world behavior. With regulators paying closer attention, the bar is rising. Generic statements aren’t enough. The organizations getting this right are building user-centered communication that drives real comprehension. For legal, compliance, and technology professionals: disclosure is the floor. Comprehension is the standard.


Expert Panelists

+ Aleida Gonzalez
Global Advisory Managing Director, HaystackID

+ Ken Suh, JD, MBA
Tech-Focused Attorney, Adjunct Professor (AI, IP, and Privacy), Faculty Affiliate (AI), Board Advisor, Computer Scientist, and Entrepreneur

+ Christopher Wall (Moderator)
DPO and Special Counsel for Global Privacy and Forensics, HaystackID

+ Patrick Zeller, FIP, CCEP-I
General Counsel, JetStream Security 


[Webcast Transcript] Meaningful Transparency in AI: What Privacy Laws Actually Require

By HaystackID Staff

Saying you use AI is no longer enough. As expert moderator Christopher Wall framed it early in a recent HaystackID webcast, “transparency used to mean disclosure… but today, now it means comprehension.” During the program, “Meaningful Transparency in AI: What Privacy Laws Actually Require,” data privacy experts explained that regulators and users now expect clear explanations of how AI works, what data it relies on, and how it affects decisions. Vague, generic language like “we use AI for internal business purposes” doesn’t meet that bar. As expert panelist Ken Suh put it, the first question any attorney should ask a client is simple: What does that actually mean?

The panel was clear: language that creates confusion rather than clarity isn’t just unhelpful; it’s a liability.

Inconsistency carries just as much risk as omission. During the conversation, expert panelist Aleida Gonzalez noted that many disclosures “are too vague… far too much legal or technical jargon.” Messaging that shifts across business units leaves users with conflicting information; what’s vague in a marketing notice may become oddly specific on a website, and neither version serves the consumer. During the webcast, the panel stressed that transparency isn’t just an external communication problem. When policies, workflows, and real-world practices diverge, regulators and litigators take notice. Enforcement begins with a simple check: does your behavior match your policies? If it doesn’t, scrutiny follows.

The path forward is practical and starts from the inside out. Before organizations can explain AI externally, they need visibility into where it operates internally. An active AI inventory, data flow maps, and documented governance approvals are the foundation from which any regulatory inquiry will start. From there, the panel encouraged teams to pressure-test disclosures with real users, simplify language, and layer detail where needed rather than overloading a single notice.
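To make that foundation concrete, here is a minimal sketch, in Python, of what one entry in an active AI inventory might look like. The fields, names, and checks are our own illustrative assumptions, not a template from the panel or any regulator; the point is simply that each AI use case can be paired with its purpose, data, disclosures, and approval so that every public statement can be traced back to a documented practice.

    # Minimal sketch of an AI inventory entry (illustrative assumptions only;
    # field names and checks are not a regulatory standard or the panel's template).
    from dataclasses import dataclass, field

    @dataclass
    class AIInventoryEntry:
        system_name: str              # the AI tool or model in use
        business_purpose: str         # plain-English purpose, not boilerplate
        data_categories: list[str]    # personal data the system relies on
        affects_decisions: bool       # does it influence decisions about people?
        disclosures: list[str]        # where users are told (privacy notice, FAQ, in-app)
        approved_by: str = ""         # documented governance approval, if any

        def gaps(self) -> list[str]:
            """Flag obvious transparency gaps before a regulator does."""
            problems = []
            if not self.disclosures:
                problems.append("AI use is not disclosed anywhere")
            if self.affects_decisions and not self.approved_by:
                problems.append("decision-affecting AI lacks documented approval")
            if "internal business purposes" in self.business_purpose.lower():
                problems.append("purpose is boilerplate; say what the AI actually does")
            return problems

    # Hypothetical hiring-screen entry (names and values invented for illustration)
    entry = AIInventoryEntry(
        system_name="ResumeScreener (vendor tool)",
        business_purpose="Rank applicant resumes against posted job criteria",
        data_categories=["resume text", "work history"],
        affects_decisions=True,
        disclosures=["careers-page AI notice", "application FAQ"],
        approved_by="AI governance committee, March 2025",
    )
    print(entry.gaps())  # prints [] when the entry is complete and specific

Run against a real inventory, even a simple check like this surfaces the gaps the panelists describe below: undisclosed AI uses, boilerplate purposes, and decision-affecting tools that never received a documented approval.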

Read the transcript below or watch the full recording to learn why effective AI transparency requires precision, consistency, and genuine alignment between policy, practice, and user experience.


Transcript

Mary Mack
Thank you for joining today’s HaystackID webinar, “Meaningful Transparency in AI: What Privacy Laws Actually Require,” hosted by EDRM. I’m Mary Mack, CEO and Chief Legal Technologist of the Electronic Discovery Reference Model, also known as EDRM. Today’s expert panel is led and moderated by HaystackID’s Data Protection Officer, Christopher Wall. It includes Patrick Zeller, General Counsel of JetStream Security; Ken Suh, a tech-focused attorney, professor, computer scientist, and entrepreneur; and Aleida Gonzalez, Global Advisory Managing Director at HaystackID. We’re recording today’s webcast for future on-demand access and for your ongoing reference needs. This webcast will be available on EDRM’s global webinar channel for the next quarter. Before turning it over to Christopher for a fuller introduction and the agenda, Holley Robinson of EDRM will share a few brief notes on the webinar console. Over to you, Holley.

Holley Robinson
Thanks, Mary. If you look at the top of your screen, you’ll see the HaystackID logo, which you can click on to learn more about HaystackID. You’ll also see an option to contact Team HaystackID directly, as well as speaker bios where you can learn more about today’s presenters. Moving down, you’ll see the Q&A box where you can type in your questions for today’s faculty, and we highly encourage you to do so. Below that, you’ll find today’s resources, including the slide deck and a link to learn more about HaystackID’s AI Governance Services. There are also registration links for the upcoming HaystackID webcast, “When Seeing Isn’t Believing: Deepfakes, Digital Evidence, and Proving Authenticity in the Age of AI,” on April 22nd at 12 PM Eastern, as well as the upcoming EDRM workshop with HaystackID, “Discovery at a Crossroads: Global Perspectives on Emerging Challenges,” on April 29th at 11 AM Eastern. We’d love to have you join us again. Lastly, you’ll see some emojis down at the bottom of your screen. Please feel free to use them and react throughout the webcast. Over to you, Chris.

Chris Wall
Thank you, Mary. Thank you, Holley. Hello, everyone, and welcome to this month’s HaystackID webcast. I’m Chris Wall, your moderator for today’s presentation and discussion. On behalf of the entire team at HaystackID, I’d like to thank you for joining today’s presentation and discussion, titled “Meaningful Transparency in AI: What Privacy Laws Actually Require.” Like all things AI, this topic is hopefully timely and practical as we find ourselves living in an increasingly AI-saturated world. This webcast, as Mary and Holley mentioned, is part of HaystackID’s ongoing educational series, designed to help you stay ahead of the curve in achieving your cybersecurity, information governance, and eDiscovery objectives. And as Holley mentioned, we are recording today’s webcast for future on-demand viewing, and the recording, along with the complete presentation transcript, will be available not just at EDRM but also on the HaystackID website, HaystackID.com. Today, we’re going to talk about what AI transparency means, why it’s a good thing, and some considerations for making sure that your use of AI meets today’s transparency standards. During the next hour or so, we’re going to focus on why and how we should elevate AI transparency from vague statements to clear, consistent, and operational disclosures, and, more importantly, on how to make those disclosures clearer and more consistent for your customers, your employees, and yourself. Joining me today are three awesome panelists who are leaders in the privacy, information governance, and AI fields: Aleida Gonzalez, Ken Suh, and Patrick Zeller. I’ll invite each one of you to give yourselves a brief intro. We’ll start with you, Aleida.

Aleida Gonzalez
Hi, everyone, and thank you for joining us. I’m Aleida Gonzalez. I work for HaystackID as a managing director with the Advisory Services Group. I began my career as a litigator for about 10 years, working primarily as an Assistant State’s Attorney, a prosecutor, in Chicago. I’m also in the military, and I’ve spent the last 10 years working in policy throughout what is now called the Department of War. I began with HaystackID last year, and I recently earned my certification as an AI governance professional. I’ll pass it over to Ken.

Ken Suh
My practice primarily focuses on compliance and litigation matters related to privacy and AI. You saw in the intro that I have some side gigs going. I also have a startup that I co-founded with a classmate at the University of Chicago, using AI in the healthcare space. And I teach AI and data privacy courses at the University of Illinois Chicago and the University of Illinois College of Law.

Chris Wall
Thanks.

Ken Suh
Over to Patrick.

Chris Wall
Patrick.

Patrick Zeller
Good morning. I’m general counsel at JetStream Security. Don’t tell my marketing team, but I’m jumping the gun on that announcement this morning. I have 20 years of experience leading privacy on the legal side, cybersecurity, AI, and emerging tech at Fortune 100 companies and in highly regulated industries.

Chris Wall
Thanks, Patrick. And as I mentioned, my name’s Chris Wall. I’m DPO, in-house counsel, and chair of our privacy advisory practice at HaystackID. My job at HaystackID is to guide our clients through the AI, privacy, and data protection thicket as part of cyber investigations, information and AI governance exercises, and traditional discovery. But more immediately, my job today is to help guide this discussion and to draw out as many great bits of wisdom as I can from our three panelists as we talk about AI. Because who doesn’t want to hear more about AI these days? Our discussion today will be less about hallucination management, AI ethics, or the looming August EU AI deadline, which are all important topics, just not for today. Today, we’re going to be talking about transparency, transparency in the use of AI, particularly where that need for transparency intersects with our privacy obligations. As we get started, as a housekeeping matter, this webinar is designed to help you make the best use of the hour you spend with us. So we welcome your input. We’re a big crowd, about 75, maybe close to 100 people in today’s webinar, but we want to make this hour as practically beneficial to you as possible. So we’ll watch the chat box, and if you have questions, please drop them in the chat, and we’ll try to address them as we go. If we don’t, we’ll try to take them up with you after the webinar’s over. And of course, I will repeat what’s now become a standard disclosure by saying that each of our panelists today is speaking on his or her own behalf, and their comments or any views they express may or may not reflect the views or positions of their respective employers or the organizations that they work for and that give them paychecks. So with that, let’s dive right in. Today, we’ll start with the shift to meaningful transparency. We’re going to look at the legal convergence, that is, the coming together of various AI regulatory frameworks. We’re going to define what the touchstones of meaningful disclosure look like, and then we’ll move from external notices to automated decision making and internal governance, and finally, some practical takeaways. What would a good webinar be without some practical takeaways from your hour with us? So, turning to our panelists, I’ll lead off with one question for each of you. And I know we’re going to have lots of sentences from everybody here, but in one brief sentence, what frustrates you most about the AI disclosures you see today? We’ll go alphabetically here today. Aleida?

Aleida Gonzalez
What frustrates me most is that many of the disclosures I see are too vague. They don’t tell the consumer what’s actually going on, including whether or not AI is being used at all. Overall, there’s far too much legal or technical jargon.

Chris Wall
All right. We’ll certainly cover that today. Ken?

Ken Suh
Yeah. I think what we see is that organizations really struggle to come up with a consistent approach across their different business units, and I think that ties in well with what Aleida said. So in their marketing, you may see something that’s very vague, and maybe there was some decision behind that. But if you go to their website, it becomes very specific. So even the specifics become confusing to the consumer in the context of the other materials that have been put out.

Chris Wall
Yeah. We’ll talk about consistency for sure as a big part of transparency. Patrick?

Patrick Zeller
I think a fundamental challenge to transparency is being aware, or having an inventory, of where you’re using AI in your company and on your websites, so that you can make the leap to being more specific and transparent.

Chris Wall
Perfect. Well, thanks. We’re going to cover all of those bugaboos and more here. So why are we talking about this now, and what makes this a big deal today? Over the past decade or so, we’ve probably all become familiar with, well, hopefully we all have, the privacy notice or the disclosure statement, often a long, legalistic, and rarely read work of prose on just about every website we visit today. We’re going to piggyback on that notice-and-disclosure topic, but with AI. The AI issues we’re going to talk about today are really less about an algorithm’s math and more about what organizations say, how consistently they act on what that notice says, and the user experience you and I have when we visit these websites or deal with these organizations. Because while that’s an interesting exercise, surely, and it’s good for consumers generally, it’s also what regulators are increasingly going to be looking at. We’re going to talk about disclosures that are overly technical, or generic, or maybe even under-technical and vague, disclosures that have been criticized and, in some cases, found legally non-compliant, and we’ll look at some case law and case studies surrounding that. The question is whether those disclosures enable a reasonable person, and we’ll come back to that reasonableness standard, to understand the significance and consequences of using that service provider’s services, or even just its website and the AI behind it. As we talk through the notice and disclosure topic today, you’re going to hear us stress the need for explanations that are plain, contextual, and actionable, not just technical, because transparency used to mean disclosure. If there’s one takeaway from me today, it’s this: transparency used to mean disclosure. But today, now it means comprehension. Panel, I’m going to turn to you here, and I’m going to start with Patrick this time. Patrick, you’ve seen more than a few of these privacy notices, and you’ve drafted more than a few. What’s your litmus test, if you have one? What makes a disclosure operational or comprehensible, and not just a check-the-box privacy exercise?

Patrick Zeller
I think one of the biggest challenges with all notices is that they tend to be written for regulators and not for consumers or customers, right? So the ability to make a notice simple for a user to comprehend, I think a lot of times that could be solved with sort of an overview or a picture of the data flows, sort of what’s happening where. I keep thinking of a picture being worth a thousand words to simply explain what’s going on for better transparency.

Chris Wall
Yeah. And I think that applies not just to AI but to our privacy notices themselves, right? I think we could probably be a lot more transparent there, too, instead of the legal gobbledygook we see.

Aleida Gonzalez
Exactly.

Chris Wall
Aleida, you live in the governance world now. So, through a governance lens, where do you see gaps in transparency first appear? Do you see them in the privacy notice, someplace closer to the user experience, or in contracting vehicles? Where do they usually pop up first?

Aleida Gonzalez
I typically see them everywhere, whether as outright gaps or as vague disclosures, because an organization may have its notices out there, but they’re so vague that you can probably consider them a gap, right? The key thing from a consumer perspective is that what they’re interested in is not just the fact that AI was used, but how it was used and what it means to them as the consumer. And if organizations would just use plain English, I think that would come across to consumers much better, so that there is no gap and their disclosures aren’t so vague.

Chris Wall
Thanks, Aleida. Ken? So, a really popular use of AI today, of course, and this is our first touch on practical implementation here, is assisting HR departments with hiring decisions. In the employment context, and I know at Jackson Lewis, employment work is all you do, what’s one sentence that every candidate-facing disclosure should include when AI screens applications or when AI is used in those hiring decisions?

Ken Suh
I’ll just use Illinois as an example because that’s where I sit. And I think here to Patrick’s point and to what Aleida said, regulators have actually tried to be helpful. And so the one sentence summary has to include the AI product that’s being used, and how it’s being used. Are you using this to screen resumes or to filter for certain criteria? And then a point of contact. So if someone has questions about this AI tool, they can reach out. And the regulation’s actually pretty specific about that. I think it’s a move in the right direction that’s helpful for our clients and for regulators as well.

Chris Wall
Let’s talk about why. Okay? The whys. So across the EU, the UK, and the US, the through line is pretty much the same. You’ve got to disclose the fact that AI is used. You’ve got to disclose what it does, what data it relies on, how it affects people, and, of course, what choices or rights the user might have. If we turn to privacy a little bit here, the GDPR and California’s CCPA took effect in 2018 and 2020, respectively, and both deal with fairness, automated decision making, what we would often term AI today, and keeping consumers informed. And then if we move from privacy to AI, you’ve got the EU AI Act, which took effect back in August of 2024, with more of its obligations becoming applicable this coming August. It hard codes transparency, both in terms of what users must be told and the appropriate use of AI by deployers and providers. In the US, while there may not be any comprehensive federal AI law, just as there’s no comprehensive federal privacy law, at least 45 states, at my last count anyway, have passed AI regulations in one form or another that focus on transparency, deepfakes, and AI in employment or healthcare. Those state laws sit alongside federal executive orders targeting AI development and safety. And all of those have to deal with what we’re talking about today, and that’s transparency. Patrick, you’re a former prosecutor. Are there any tools available to state and local governments today, regardless of whether they’ve got a comprehensive privacy law on the books, for protecting consumers?

Patrick Zeller
All the state attorneys general have very broad and powerful mandates under consumer fraud statutes, and very large consumer fraud divisions. And outside of California and Texas, I really think we’re going to see an uptick in simple consumer fraud cases used to enforce privacy notices and AI regulations, because there’s a basic fairness and transparency requirement under consumer fraud law that requires you to give proper disclosures to consumers. And think of it as sort of a breach of contract scenario. If you’re not telling them something you’re doing, or you’re not being fully transparent, the AGs have very broad enforcement authority under those statutes.

Chris Wall
Very similar to how the FTC enforces privacy in the US in large part, right?

Patrick Zeller
Exactly.

Chris Wall
So, Aleida, how do you look to operationalize all of these regulations? How do you put them into practice? How do you incorporate them into your governance within an organization?

Aleida Gonzalez
For the most part-

Chris Wall
Specifically, I guess, particularly for a non-technical audience.

Aleida Gonzalez
Yes. For the most part, I would say organizations shouldn’t lose sight of what their mission and purpose are for the business. I’d recommend making sure that the mission statement is posted on a wall in very large print. Because when you maintain that mentality, this is my business purpose, this is what I am doing here, then everything else within all your policy documents is going to flow. But when you throw AI into the mix, organizations may tend to think, “Oh, I can do all these additional things with AI.” And then they start to lose sight. And the minute you start losing sight, everything else goes along with that, including your policy documents, your disclosure documents, and your privacy notices. That’s where organizations begin to struggle: they lose sight of the entire purpose of the organization, and they start sidetracking. And once you start sidetracking, you have to go back and revisit that mission statement and decide whether you’re going to change that document or change how you’re going to use AI, because that’s where it all begins, from a business perspective as well as from a consumer perspective. Consumers understand that your business does X, but now you’re changing things, and then you start losing sight of your customers as well.

Chris Wall
Yeah. Thanks, Aleida. And Ken, I can’t think of… Well, there are few places where AI today has as much visibility as it does in the workplace. So if I’m an employer, how can I tailor transparency for my employees, particularly around concerns about surveillance, productivity monitoring, and performance management, or even the use of AI in hiring?

Ken Suh
Yeah. I think the challenge that employers often have is that, up to now, they’ve kind of cobbled together these policies. By that, I mean privacy policies, use policies, device policies, office policies, so there’s a stack of policies that employees have to go through and understand. And now we have this technology that the business is under immense pressure to implement, and for good reason. I think there’s some huge upside to some of these AI tools. So what we try to do is say, “Okay, you need to update the policies you have. You’ve gone down that road. But perhaps provide a single place or a single notice where you incorporate all of those, and provide that to your candidates, your current employees, and your former employees, whose benefits you may be monitoring. As part of your exercise as an organization, you’re doing that inventory every year and forcing yourself to go through those existing policies and make sure you understand them as well.”

Chris Wall
Yeah. Thanks, Ken. I think that if you don’t understand them yourself, and you’re the one responsible for putting that policy out there, there’s definitely a problem, right? So I’ve got a question for all three panelists here, speaking specifically about chatbots, since that’s one way a lot of us might interact with AI. Let’s say you need to schedule a healthcare appointment, and the provider is using AI to help schedule it, maybe even to help diagnose over the phone. Maybe you can talk to us a little bit about how that touches on two-party consent in certain states and might bring wiretapping laws into play. Talk to us about the use of AI in that context. And I’ll start with you, Patrick. You want to weigh in here?

Patrick Zeller
Sure. Off the top of my head, the use of AI in chatbots currently requires disclosure in a couple of states, and there are more state regulations on the way. Wiretapping laws come into play too. This hasn’t happened yet, but I anticipate it. If your AI can identify the person who’s speaking, and it’s doing word-for-word transcription without proper notice and consent, it could trigger wiretapping laws. I believe 14 states have all-party consent. And if you’re not getting consent from everybody who’s part of your meeting or part of the communication, you could be violating those states’ criminal laws. Again, in those states, everybody has to consent to the recording; one-party consent is not going to work. It’s a common misconception, because one-party consent is what we see in a lot of movies and TV shows. That’s the federal standard.

Chris Wall
Ken or Aleida, you want to weigh in too here? If not, that’s okay. We can move on.

Ken Suh
Yeah. I think it’s important to know, if you are launching a chatbot, what the purpose is. And Aleida kind of touched on that a little earlier. There’s no general-purpose chatbot. I know we all want that, but what’s the function of this chatbot? Is it to schedule appointments for an existing patient? Is it to do some intake for a new patient? I think that’s going to drive a lot of the regulatory and legal requirements around wiretapping, disclosure, and consent, and what has to go in there. And you can create workflows that help incorporate those things into the chatbot. You just have to know what you want to use it for.

Chris Wall
Thanks. Aleida, anything to add?

Aleida Gonzalez
I would just add, as far as disclosures that you are dealing with a chatbot go: for greater transparency, explain how the chatbot is being used. Is it used primarily for scheduling purposes, or is it collecting information from the consumer? I think it would put consumers at greater ease if they knew specifically how the chatbot is being used and for what purpose.

Chris Wall
And then of course, what data they’re collecting and how long it’s retained.

Aleida Gonzalez
Yes.

Chris Wall
That’s got to be included in their privacy policy. We are going to talk here in a few minutes about where these disclosures should be made and what’s sufficient. But we just gave a specific use case for chatbots, used for scheduling or maybe even for general diagnoses, so let’s talk briefly now about clarity, context, and accuracy. Those are really the touchstones when we talk about transparency, and really the underpinnings of any organization. We’ve got to remember who the audience is when it comes to disclosure; we’ve always got to think about with whom we’re being transparent and what they’re looking for. So let’s go through these different audiences. I created a little chart here, if it’s helpful. When we talk about consumer-facing disclosures, Aleida, let’s talk about consumers first. What does a consumer-facing phrase look like to you?

Aleida Gonzalez
An ideal consumer-facing phrase would be written in plain English and would, at a bare minimum, address how the AI is involved in the decision-making and how it impacts me. Most disclosures or privacy notices may have some of that in there. But I think if companies had a very short version of their two- or three-page disclosures and notices, written for the consumer, with a link to the longer legal and technical jargon explanations, consumers would most likely appreciate that.

Chris Wall
Yeah. I think the bottom line is the impact on them and their choices. If we look to the next audience, it’s employees, and there’s obviously overlap between consumers and employees. Ken, what are some pitfalls, or maybe some things in the employee notices you see, that make you cringe?

Ken Suh
I think that-

Chris Wall
Or maybe ones you’d want to share. I don’t want to just point at the cringeworthy ones.

Ken Suh
I think the biggest challenge, and frankly, it’s an understandable one from a business perspective, is how to incorporate updating the AI disclosure in changes to your hiring and employment decision-making process. Most companies have a workflow for hiring. They have a workflow for annual evaluations. They have a workflow for terminations. And as they update those, they have to start incorporating, “How do I update the AI disclosure if I’ve used AI?” That’s tough to do because oftentimes those different workflows are siloed. And so it just starts a long conversation when we say, “Okay, where is this being updated and how are you disclosing this to employees?” Because sometimes the answer is, “Well, it’s once a year we send this out.” And so we have to really dig in there, and I think it catches a lot of clients by surprise. Where I think employees do really well, and this is maybe better, I’m just comparing to our experience early on-

Chris Wall
Employees or employers?

Ken Suh
Employers. Sorry.

Chris Wall
Employers.

Ken Suh
Yeah. And this is like comparing to privacy 10 years ago. I think employers understand now that just because they have an office in three states doesn’t mean they don’t have to comply with the other states if they’re recruiting in 50 states or beyond. And I think Patrick, and maybe everyone else on this call, can sympathize with the client who’s very surprised that their privacy program needs to be compliant with California, even if they don’t have an office there. But these days, I think companies understand that. So that conversation tends to go really well.

Chris Wall
So shifting now, Patrick, let’s talk about the regulator view. All right. What are the regulators looking for? And we’ll tackle the vendors and business partners last. But is there one of these four elements, or anything about clarity, context, or accuracy in particular, that you see organizations struggle with most in practice?

Patrick Zeller
I think it’s a follow-up to understanding what’s going on, but also how you’re sending your data downstream, which could also be used in AI. So law firms, consultants, a lot of times have some of your most sensitive data. I see a lot of companies updating contractual agreements and also trying to get an understanding of whether anyone else is using AI downstream. And then also reviewing their non-disclosure agreements to see how that impacts their use of AI within their company, and then also potential downstream risks as well.

Chris Wall
I think you’re absolutely right. And I think when regulators look at that, they’re looking for consistency, both within the organization and anywhere onward. And I’ll add, under that last category of business partners or vendors, that at HaystackID, every contract we write, either with our vendors or where we are the vendor, includes a DPA, or data processing agreement. That, among other things, clearly outlines the parties’ roles and responsibilities, including any AI application or AI prohibition, as the case may be. So if we have legacy clients or legacy vendors who didn’t have a DPA in place, they do now, or we have clients who write anti-AI language into their terms with us. And we’ve flagged that in our contract lifecycle management tool so that when they come back and tell us they’d like to use AI in an engagement, because everybody does now, we can issue an AI amendment. That commonality between privacy and AI around roles, responsibilities, and transparency is welcome and, frankly, makes a lot of sense to me. But for our clients and our organizations, sometimes it’s a steep learning curve. We talked about how regulators are looking at consistency in what you say you do across your disclosures, your cookie notices, maybe your internal AI register, and your contracts. And we know that EU regulators in particular have faulted generic disclosures, and that contradictions between your notice or disclosure and your internal documents or practices really raise red flags. The bottom line is what Aleida was talking about earlier: vague does not equal compliance. So I’m going to stick with you, Patrick, if I can. We talked about knowing your audience. When you see these notices, how do you know if they’re written for the regulator or written for the consumer? You touched on that upfront when we first started, and I’d like you to dive into it if you could, please. How do you know, or what’s the distinction?

Patrick Zeller
I think the distinction comes back to something Ken touched on earlier: companies have sort of a pile of policies. So look at those policies and make sure you have consistency across them, and look to integrate your data collection points, where data’s going into AI. It’s really helpful to have that visibility so you can tick and tie what you’re doing with the data and then be able to easily explain it. And a couple of times I’ve seen it done where people lay out not only what they’re doing, but also what they’re not doing. We’re not transferring; we’re not selling. That makes it crystal clear in your notices what you’re doing.

Chris Wall
Hey, Ken. Where do you see, or envision, enforcement investigations starting with these potential inconsistencies, whether from a state AG or from specific privacy or AI regulators, as the case may be?

Ken Suh
I think it has to start with the state AGs, some kind of consumer protection arm of a state AG’s office. That’s the pattern we’ve seen historically, whether it’s in the employment candidate context, the person-walking-down-the-street context, or someone interacting with a website for privacy concerns. In many states, the AGs are the ones, I think, who are leading the charge on identifying issues. Then, quickly afterwards, what we also see historically is that private class actions and private litigation follow, and sometimes that tends to move the needle as well. But some of the issues we’re trying to help clients with are making sure this is understandable. It’s a tough job in compliance, and I think we can all sympathize, when you get the question, “Is this 100% compliant?” That’s really a difficult question to answer, because the disclosure has to meet the views of dozens of state regulators and their AG’s offices, as well as consumers now. And how do you do that in a way that is concise, readable, and understandable? I think many companies are trying to do their best, and we’ll just have to wait and see. Unfortunately, a lot of times, the first targets are not sympathetic defendants. They’re people who have somehow drawn the ire of the public.

Chris Wall
Yeah. We’ll talk about a couple of those here, for sure. So if you were still… Sorry, go ahead, Ken.

Ken Suh
No, no, go ahead. Yeah.

Chris Wall
I was just going to say, Aleida, if you were still in enforcement, what’s the low-hanging fruit you would go after?

Aleida Gonzalez
I’d go after whether or not the policies match the behavior, or the behavior matches the policies. A lot of software companies nowadays are making AI governance much easier for organizations. They’ll sell you the software, and within these software options, they have templates for you to draft your policies, your privacy notices, your disclosure notices. But one thing most companies fail to do is actually read them and make sure that these notices and policies actually conform to the business, the business model, and the business purpose. And when they fail to do so, if an investigator were to come out and read these policies, they’re going to ask: does your behavior within the organization match what you’re saying to the public? And if it doesn’t, you’re likely to receive greater scrutiny.

Chris Wall
Thanks, Aleida. And I echo your sentiment about preferring ordinary language, and I’m a lawyer, over technical or legal jargon. I think everybody prefers ordinary language. So the idea is to make disclosures tangible. We’ve been talking a lot, vaguely, frankly, ironically, about these AI disclosures, but they’ve got to clearly state who, what, where, when, and how AI is used. And we’ve all seen claims like “we are an ethical AI deployer” or “we have explainable AI practices.” But to your point, Aleida, those need to be backed up with other documentation or a source of some sort. So, Ken, let’s start with you. As a practical starting point, can you think of a before-and-after example that you’ve seen implemented with a client? We’ve got up here some general language that you’ll often see in these AI disclosures. Can you give us some practical, real-life examples?

Ken Suh
I think maybe the lowest-hanging fruit for the group is this idea that “we use AI for our internal business purposes.” That’s a phrase that appears in privacy policies and continues to carry over.

Chris Wall
What does that mean?

Ken Suh
Exactly. That’s the first question I have for clients. What does that mean, and how do I explain this to somebody? And we can keep the phrase, and that’s fine, but you have to include what it means. Are you using it for administrative purposes, filling out paperwork? Or are you using it for internal purposes, such as determining raises? That’s also an internal purpose. And so we can work with businesses on good wording that’s accurate and also discloses properly, but these common phrases that have made their way around different types of policies are very challenging for us.

Chris Wall
So I’ll give you a real-life example. This morning, I was on the phone with an insurance company, and I got the recording that says, “We may use AI on the recording of this call to improve our services.” Very timely for me to get that notice when I had this phone call with this insurance company. So, Aleida, you deal with clients across a lot of industries. From a practical angle, how might they improve that statement?

Aleida Gonzalez
I’d say they should at least explain how. And I get it, the attorneys for these companies need to protect the client and protect the business, so maybe they do need to be a bit vague. But there’s a sweet spot where you at least explain the how. Is the AI being used just to answer the calls, to get them answered quickly? Okay, that’s a lot easier to deal with than a complete chatbot that doesn’t know anything or can’t answer my questions. But at least explain the how. Are they improving your calls by providing the representative with more information at their fingertips? Or are they collecting data on you? Explaining how the AI is being used during your call would be far more acceptable to consumers.

Chris Wall
In my case, I’d love to know how they are using AI to improve the services they gave me on that hour-long phone call. Sorry, Patrick, or Ken. Ken.

Ken Suh
Something I always remind clients of, as very tangible advice, is that you don’t have to put it all in one place. I work with a lot of financial services clients as well, and they don’t want to make that recorded message 20 minutes long. Okay, then refer people to a website. I’m not saying that’s the best approach, but it’s better than what you have today. It’s our job as lawyers to come up with workable solutions, not just say no.

Chris Wall
I’m going to come back to Patrick here to wrap up this topic, though. Patrick, the risk obviously is where we’re not actually doing what we say we do in our disclosures. So how do you document? How do you evidence the fact that what you say you’re doing is actually being done in that AI disclosure?

Patrick Zeller
I like to describe myself as a recovering litigator, and I go back to trial advocacy training where they sort of whip into your soul the KISS method of keep it simple, stupid, for basic communication. And I think, Chris, your question, the biggest challenge there is sort of the fact-finding. Like, rolling up your sleeves, figuring things out. I think a lot of companies will turn on a notice like that because they’re thinking about using it, and they’re trying to get their arms around it. So they’re trying to get ahead. But Ken has a great point. A lot of common sense can bring us to a much better position. Referencing a website, referencing documentation, there’s not a lot of guidance. So anything you can do to sort of make it easier and more transparent so people can get more information, I think you’re on the right track.

Chris Wall
All right. So let’s look at some of these disclosures. You’ve got good examples and some less good examples here on the screen. And I just want to go through these real quick. Let our audience look at them, the stronger approach, the less strong approach, in maybe 15 seconds each. Aleida, I’m going to ask you first, how can an organization pressure test its disclosures before regulators do?

Aleida Gonzalez
I go back to making sure that your behavior within the organization matches what your policies say, and vice versa.

Chris Wall
Patrick, same. Quick. What are some of the biggest themes that you see in meaningful transparency on these notices?

Patrick Zeller
I think getting creative and explaining things, and then having some of your employees read them and see if they can understand them, right?

Chris Wall
Yeah. That’s a great way to pressure test. Give it to somebody to see if they understand it. Give it to your high school students, see if they understand it, right?

Aleida Gonzalez
Right.

Chris Wall
Ken, anything you want to add here?

Ken Suh
No, that last bit is so important. Let someone else read it. It doesn’t have to be your outside counsel or your in-house counsel, but let someone else read it.

Chris Wall
Yeah. Look, I’m going to mention one more thing here. If we look at leading AI regulations like the EU AI Act, those regulations are based on principles of product safety. And it’s helpful, maybe, if we look at AI as a product with reasonably foreseeable outcomes of its use. Just as we would give product safety notices on any other product, we’d probably want to take that same approach with AI. I can’t think of a good product safety example that I’ve read lately, but it’s the same idea. So let’s look at a scenario, and consider what meaningful transparency might be in a real use case. One of the most widely cited enforcement examples related to AI transparency and data practices involved Clearview. Clearview, as many of you may know, developed a facial recognition system by scraping billions of images from publicly accessible websites, including social media, news websites, blogs, and other online sources. The company used those images to build a massive biometric facial recognition database and sold access to that database to law enforcement, private companies, and other organizations. Individuals whose images were collected were never informed, and in most cases, they had no ability to consent or opt out. Obviously, for the privacy pros on the call, all of that creates a lot of privacy concerns. Regulators across many jurisdictions, including Italy, Greece, France, the UK, Australia, Canada, and Illinois, determined that those practices violated numerous privacy laws. The Illinois suit, in particular, was a BIPA suit led by the ACLU. I want to flip it over to the panel here for some reactions. Patrick, I’ll start with you. When you look at Clearview, where do you see the transparency breakdowns, and how could they have achieved meaningful transparency for the reasonable person?

Patrick Zeller
I think they would have needed to disclose upfront that they were potentially capturing biometrics when capturing facial images. And I’ll also add that there was a case filed just last week against JCPenney, which had its own AI, some sort of simulator for trying on makeup before you purchase it. That lawsuit raises very similar biometrics claims around their AI.

Chris Wall
Oh, thanks, Patrick. Ken, I see there’s also a tension here between usefulness, I mean, there’s a real use case for law enforcement, and the need for privacy and transparency. And AI is big here; it was the driver in identifying all of that biometric information. How do you see it playing out for companies that maybe aren’t in the extreme position Clearview was, but still operate in sensitive spaces like the workplace?

Ken Suh
Yeah. I think here it goes back to making things very understandable to the average employee, and you have to assume the average employee is not your CEO, so that they actually understand what’s happening. Clearview is a very difficult case; it’s out in the open. We saw a Super Bowl ad recently that got some public backlash. But in the workplace, you do have control over your environment. You can control the messaging. You can control how employees receive information. We always advise clients to make sure employees understand what’s happening. I think the backlash from not understanding is much greater than the backlash from knowing about something that’s already happening.

Chris Wall
Yeah. Aleida, anything you want to add here, maybe from a governance and ethics standpoint?

Aleida Gonzalez
I like to use the fruit of the poisonous tree doctrine. For those of you who aren’t familiar with it, imagine a poisonous tree. You’re not going to take the apples off of it, make apple pie, and feed people, right? So if you didn’t collect the information properly, meaning did you give consumers a chance to opt out? Did you provide the appropriate notices? Is the stated purpose of your collection actually the reason for your collection? If you didn’t do any of that from the very beginning, then don’t use the information. And I think that’s where meaningful transparency comes in. Because if you do all the right things from the very beginning, then you should ideally be okay with your use of AI and collection.

Chris Wall
Thanks, Aleida. And I think that applies both in the law enforcement context and in our regular workplace. If we’re doing the right things, then it’s fully defensible all the way through our use of that AI, whatever the use might be. So let’s talk about where those disclosures might occur. Okay? Do we just put it in the company policy, in the company handbook? Is that sufficient? We’ve got a great policy at HaystackID, by the way. It’s beautifully written. Wonderful piece of prose. Don’t you agree, Aleida? It’s just marvelously written. Sorry.

Aleida Gonzalez
Agree. Greatly written.

Chris Wall
We typically see these AI notices in a privacy notice; we’ve talked about that for the last 45 minutes or so. Those long webpages that few people actually read. But the question is, when should an organization create a separate AI disclosure, outside of the employee handbook that everyone always reads, and outside of maybe your privacy notice? Are there some AI uses that demand separate notice, and not something buried in the middle of a privacy policy somewhere? Let’s lead off the panel here. Patrick… Well, we finished with Aleida, so we’ll stick with Aleida here. Aleida, for an FAQ structure in consumer services, any pointers there, if that’s where you decide to put it?

Aleida Gonzalez
I agree with what Ken said earlier. Don’t just limit it to one location. But I think the FAQs and help center language would certainly help, especially with help centers, because you’re dealing with other people, and they can break it down to plain English, particularly when you’re dealing with sensitive information or biometric information.

Chris Wall
Ken, in the workplace?

Ken Suh
Yeah. You can’t emphasize it enough. I receive those notices from our firm as well, and they’re wonderful, and of course I read all of them. But I think the more resources people have, the better your argument as a company to say, “I’ve done what I can to explain. I provided the policy, I provided FAQs, I provided additional plain language translations.”

Chris Wall
In the handbook, right? It’s also in the employee handbook.

Ken Suh
Exactly. So we’ve given people the resources. And I love the analogy to the product safety line of regulations. We don’t know right now, but eventually it could go down that path where people reasonably can’t expect certain things from AI, and we can reel back some of these disclosures. But where we sit today, the less you disclose, the higher the risk you’re taking.

Chris Wall
Yeah. I don’t think we can disclose enough. I think that’s the common theme here, in the workplace and everywhere else: disclose at the time you collect information and at the time it’s going to affect the consumer or the employee, or whoever it might be.
So, Patrick, we see in-app patterns for these AI disclosures. Anything you’d like to see there when you see that model?

Patrick Zeller
I think that there’s another gap I’d like to touch on, and that is potential internal corporate transparency. A lot of people think that if they’re using enterprise or dedicated AIs, they’re safe, and that’s certainly safer. But they can also share data, and your execs need to be aware of this. It can share data within the company. So any type of-

Chris Wall
Shadow AI is what you’re talking about, right?

Patrick Zeller
Well, shadow AI, but also if you buy an enterprise AI that’s dedicated to the company. If people outside of legal can see what legal is searching, you could have a problem with attorney-client privilege. You could have a problem with trade secrets. I’m aware of a company where a CEO was doing compensation analysis with an enterprise AI tool. He wasn’t in incognito mode, and other people in the company could see those searches. So there’s an internal transparency angle here as well.

Chris Wall
Yeah, that’s a great point: understanding where your internal information is shared, because not all parts of the organization should see it. And I’ll wrap up with JIT, just-in-time notices, if I can, just because I love saying JIT. I think it’s a good rule of thumb for all of these notices: would a reasonable person feel surprised or uneasy to learn after the fact that AI was involved? In those cases, the user should see a concise, action-linked disclosure before the AI runs. Again, just like any kind of product safety regulation, that’s the touchpoint. Would a reasonable person expect their data to be used that way? Anything else anybody wants to add here on where those notices should appear?

Patrick Zeller
They should appear early and often.

Chris Wall
Early and often. Well spoken-

Patrick Zeller
Like voting in Chicago.

Chris Wall
I was going to say, spoken like our true Chicago panelists here. Early and often. All right, let’s talk briefly. We’ve got two more things to cover here. A big part of transparency is making sure that you have your internal processes refined and documented. We touched on documentation a little earlier, and for some reason, we keep coming back to those foundational information governance questions. Do you keep an active AI inventory? Do you have a data flow map? Have you done a privacy impact assessment, or PIA, and an AI impact assessment? Remember, those two things are not the same, not necessarily the same anyway. Do you have a description of what data you’re using to train? And do you have a file of governance approvals, if you have them? What AI applications or use cases have you approved within your organization? Any regulatory inquiry is going to start there, I think, with what you have internally. What documentation do you have? And then they’re going to look for alignment between what you say publicly and what you actually do. Aleida, I’m going to start with you if I can. How do you coach teams to write reviewable DPIAs and AIAs, and then use those to inform, or even incorporate them into, your disclosures?

Aleida Gonzalez
I’d advise organizations to use these impact assessments, whether they’re for AI or for data protection, as your explainability instruments. Tell your story there. Go back and think about your organization’s mission statement; this is a place to tell your story. Because when an investigator comes in, when you’re being audited, if you didn’t say it, whether in your impact assessment or another document, it’s as if it didn’t happen. So use the impact assessments as a place to explain your five Ws, your who, what, when, where, and why, because that will hopefully at least give you some ease when you’re dealing with regulators.

Chris Wall
Yeah. Thanks, Aleida. Ken and Patrick, anything you want to add here?

Ken Suh
Just one bit on that assessment: this is the place to stress-test why you want to use the AI. Often, on the advisory engagements we get, we see the first few assessments a client has worked through, and the stated rationale is just to save money or to be more efficient. How is it going to save money? What kind of efficiency do you expect? What analysis have you done? The more information you can provide, the better. Some of this is, how effective is this marketing campaign going to be? There’s always some analysis that has to be done. We encourage clients to do due diligence and to document that diligence, and their expectations, in this process.

Chris Wall
Yeah. Thanks, Ken.

Patrick Zeller
I would add-

Chris Wall
Go ahead, Patrick.

Patrick Zeller
To your point about shadow AI, that’s the idea. Are your employees using something you’re not aware of, or is there confusion? Did you purchase an enterprise tool, and do they think the AI on their personal phone is the same thing? So making sure things are labeled, knowing what people are actually doing, rolling up your sleeves and getting your arms around that, goes a long way.

Chris Wall
Thank you. Let’s wrap up here. Meaningful transparency is table stakes now. Remember to use plain language and tell the audience what matters to them. Consistency, I think we talked about: back your claims with what you actually do and what your documents say you do. So I’m going to go to each panelist and get three actionable takeaways. All right? I’m going to start with you, Ken.

Ken Suh
All right. I’ll keep it short. So you have to get executive-level buy-in. It has to be top down. They have to buy into this idea that you need strong governance. You should rely on your experience with similar processes, like privacy, marketing, and advertising. You have governance for those; rely on that experience. And then lastly, I assume I don’t understand any AI tool, and I try to learn what they’re doing through information and documents. I think it’s dangerous to walk in as attorneys and assume, “Oh, it works like this.” We don’t know. We’re not part of that process.

Chris Wall
Thanks, Ken. Aleida.

Aleida Gonzalez
Back to the fruit of the poisonous tree doctrine. If the information wasn’t collected appropriately, don’t use it. Second, assume that everything that you do behind the scenes is going to be amplified all over social media. Is that something you’re going to be proud of, or is that something you’re going to be embarrassed about? And finally, I’d say context matters. Make sure that your privacy and your disclosure documents are tailored to your audience.

Chris Wall
Thank you, Aleida. Patrick, we’ll give you the last word here today.

Patrick Zeller
Trust, but verify. I like Ken’s point of digging in to figure out what’s going on. There’s a big difference between public AIs and enterprise or private AIs within your organization, but you need to know how those function and who they’re sharing data with. And you could still be sharing it internally within your organization and still causing problems. I really like the fruit of the poisonous tree quote.

Chris Wall
Thank you. I’ll give everybody one last thought here. Remember, transparency is not saying you use AI. It’s explaining AI, honestly. We thank everybody for joining today’s webcast and for allowing us to go over by a couple of minutes. We truly appreciate your taking time from your busy schedules and taking an interest in our educational series. So please don’t miss our next webcast on Wednesday, April 22nd, “When Seeing Isn’t Believing: Deepfakes, Digital Evidence, and Proving Authenticity in the Age of AI.” In that April webcast, our expert panel will discuss how digital media authentication works in real-world scenarios and how organizations can build repeatable processes to evaluate and respond to deepfake risks before they escalate into legal, financial, or reputational crises. So check out our website, HaystackID.com, to learn more, to register for the April webcast, and to explore our extensive library of on-demand webcast content. Thank you for joining us today. And thank you, thank you, thank you to our panelists, Patrick, Ken, and Aleida, for spending your time with us. Back to you, Mary.

Mary Mack
Thanks, Chris. And we thank you indeed for joining today’s HaystackID webcast. As Chris said, we have an upcoming webcast as well. On behalf of EDRM, sincere appreciation is extended to our panelists for sharing their expertise with us and to our EDRM community for sharing great questions. Thank you.


Expert Panelists

+ Aleida Gonzalez
Global Advisory Managing Director, HaystackID

An accomplished attorney and military intelligence officer, Aleida Gonzalez brings a rare combination of legal expertise, national security experience, and strategic advisory capability that uniquely positions her to lead in the field of AI governance. With nearly a decade of service as a prosecutor and extensive military experience, Aleida has operated at the intersection of law, policy, and security, advising senior leaders, managing complex investigations, and shaping outcomes in high-stakes environments. Her AI Governance certification builds on decades of deliberate preparation for managing high-risk, high-consequence systems where accountability, policy, and operational discipline intersect.

+ Ken Suh, JD, MBA
Tech-Focused Attorney, Adjunct Professor (AI, IP, and Privacy), Faculty Affiliate (AI), Board Advisor, Computer Scientist, and Entrepreneur

Ken Suh is an entrepreneur, business advisor, and trusted attorney with a strong record of helping executive-level stakeholders achieve their business objectives and lead teams in high-growth environments. His business acumen, technical experience, and legal expertise enable him to deliver practical, impactful, and mission-focused advice to stakeholders across functional roles at the highest levels of an organization. Ken has advised clients in nearly every industry vertical on regulatory, litigation, and compliance risks associated with data privacy, cybersecurity, and intellectual property. He has successfully scaled multiple legal teams and mentored younger attorneys. Ken is a co-founder and board member of a telemedicine start-up that won first prize in the prestigious Global New Venture Challenge (GNVC) at the University of Chicago Booth School of Business, and he advises numerous companies on corporate growth strategy and governance best practices. In addition, Ken is an Adjunct Professor teaching substantive courses on artificial intelligence, entrepreneurship, leadership, data privacy and cybersecurity, and intellectual property.

+ Christopher Wall (Moderator)
DPO and Special Counsel for Global Privacy and Forensics, HaystackID

Chris Wall is DPO and Special Counsel for Global Privacy and Forensics at HaystackID. In his Special Counsel role, Chris helps HaystackID clients navigate the cross-border privacy and data protection landscape and advises clients on technical privacy and data protection issues associated with cyber investigations, data analytics, and discovery. Chris began his legal career as an antitrust lawyer before leaving traditional legal practice to join the technology consulting ranks in 2002. Prior to joining HaystackID, Chris worked at several global consulting firms, where he led cross-border cybersecurity, forensic, structured data, and traditional discovery investigations.

+ Patrick Zeller, FIP, CCEP-I
General Counsel, JetStream Security

Patrick E. Zeller is the General Counsel at JetStream Security. He was most recently Aristocrat’s Senior Vice President, Chief Privacy Officer, and managing Data Protection and AI Counsel, where his team was responsible for privacy, data protection, cybersecurity, Gen AI, and data compliance in over 100 jurisdictions worldwide. His work experience includes global responsibility for privacy and data protection issues across 180 countries and more than 110,000 employees in the biotechnology, pharmaceutical, medical device, and direct-to-consumer product sectors. He has created and led other global programs in information governance, data privacy (including GDPR, CCPA/CPRA, China, Brazil, Vietnam, and Russia), records management (records cleanup and defensible destruction), eDiscovery, cybersecurity, Gen AI, and data protection. He is a former litigator and federal computer crimes prosecutor. Patrick builds global privacy, data strategy, and data protection programs that enable the use of data for business growth. He is a problem solver with a proven track record of practical business advice in creating and maintaining strategic privacy, data protection, and security programs, and a leader who draws on positive, collaborative leadership to balance privacy and cybersecurity risks with business needs and growth opportunities. Patrick is a member of The Sedona Conference® International and Cybersecurity working groups and of the International Association of Privacy Professionals; he holds FIP, CIPP/US, CIPM, CIPP/E, and CCEP-I certifications and is pursuing a Gen AI certification at Stanford University. Patrick is a frequent author and speaker on privacy and data protection topics, including Gen AI, blockchain/DLT, IoT, cross-border discovery, developing trends, technology-assisted review, and privilege and ethics in eDiscovery, privacy, and digital compliance.


About EDRM

Empowering the global leaders of e-discovery, the Electronic Discovery Reference Model (EDRM) creates practical global resources to improve e-discovery, privacy, security, and information governance. Since 2005, EDRM has delivered leadership, standards, tools, guides, and test datasets to strengthen best practices throughout the world. EDRM has an international presence in 145 countries, spanning six continents. With 19 active projects, EDRM provides an innovative support infrastructure for individuals, law firms, corporations, and government organizations seeking to improve the practice and provision of data and legal discovery.

HaystackID® solves complex data challenges related to legal, compliance, regulatory, and cyber requirements. Core offerings include Global Advisory, Cybersecurity, Core Intelligence AI™, and ReviewRight® Global Managed Review, supported by its unified CoreFlex™ service interface and eDiscovery AI™ technology. Recognized globally by industry leaders, including Chambers, Gartner, IDC, and Legaltech News, HaystackID helps corporations and legal practices manage data gravity, where information demands action, and workflow gravity, where critical requirements demand coordinated expertise, delivering innovative solutions with a continual focus on security, privacy, and integrity. Learn more at HaystackID.com.

Assisted by GAI and LLM technologies.

SOURCE: HaystackID