[Webcast Transcript] When Seeing Isn’t Believing: Deepfakes, Digital Evidence, and Proving Authenticity in the Age of AI
Editor’s Note: Synthetic media is reshaping evidentiary standards, making it imperative for organizations to have a plan in place before a crisis forces the issue. During a recent HaystackID® webcast, forensics and legal experts drew on real-world cases and operational experience, walking attendees through the anatomy of a deepfake attack, the forensic and eDiscovery capabilities required to respond, and the five questions every organization needs to answer within 48 hours of a suspected incident. The experts dove into the cross-border dimensions of digital evidence, focusing on the EU E-Evidence Framework, which takes effect in August 2026. Read the transcript to learn more.
Expert Panelists
+ Rene Novoa, CCLO, CCPA, CJED
Vice President of Forensics, HaystackID
+ Jeff Shapiro
Managing Director, Europe, HaystackID
+ Todd Tabor
Senior Vice President of Forensics, HaystackID
+ John Wilson, ACE, AME, CBE [Moderator]
Chief Information Security Officer and President of Forensics, HaystackID
By HaystackID Staff
- Is the media authentic?
- Were identities compromised?
- What data was accessed?
- What regulatory exposure exists?
- Can you prove the timeline?
Transcript
Mary Mack
Thank you for joining our webcast, “When Seeing Isn’t Believing: Deepfakes, Digital Evidence and Proving Authenticity in the Age of AI”, hosted by EDRM. I’m Mary Mack, CEO and Chief Legal Technologist of EDRM, the Electronic Discovery Reference Model. Today’s expert panel is led and moderated by HaystackID’s Chief Information Security Officer and President of Forensics, John Wilson, and includes Rene Novoa, Jeff Shapiro, and Todd Tabor. We are recording today’s webcast for future on-demand access, and as with all HaystackID webinars hosted by EDRM, the recording will be available on the EDRM Global Webinar Channel through the next quarter to support your continued learning and reference needs. And before turning it over to John for a fuller introduction and the agenda, Holley Robinson of EDRM will share a few brief notes on the webinar console and some resources. Over to you, Holley.
Holley Robinson
Thanks, Mary. If you look at the top of your screen, you’ll see the HaystackID logo, which you can click on to learn more about HaystackID. You’ll also see an option to contact Team HaystackID directly, as well as speaker bios where you can learn more about today’s presenters. Moving down, you’ll see the Q&A box where you can type in your questions for today’s faculty, and we encourage you to do so. We’ll be answering questions during and after the webcast. Below the Q&A, you’ll find today’s resources, including the slide deck and a link to learn more about HaystackID’s VALID suite. There are also registration links for the upcoming EDRM workshop with HaystackID, “Discovery at a Crossroads, Global Perspectives on Emerging Challenges,” happening next Wednesday, April 29th, at 11:00 AM Eastern, as well as HaystackID’s next webcast, “The AI eDiscovery Sea Change: Privilege, Work Product, and Hyperlink Productions,” on May 20th at 12 P.M. Eastern. We’d love to have you join us again. Lastly, you’ll see some emojis down at the bottom of your screen. Please feel free to use them and react throughout the webcast. Over to you, John.
John Wilson
Hi, everyone. Welcome to another HaystackID webcast. I’m John Wilson, your expert moderator for today’s presentation and discussion, “When Seeing Isn’t Believing: Deepfakes, Digital Evidence and Proving Authenticity in the Age of AI.” This webcast is part of HaystackID’s ongoing educational series designed to help you stay ahead of the curve in achieving your cybersecurity, information governance, and eDiscovery objectives. We are recording today’s webcast for future on-demand viewing, and we’ll make the recording, along with the complete presentation transcript, available on the HaystackID website. Today, we will discuss how synthetic media is reshaping evidentiary standards and what you need to know about authenticating digital evidence, challenging fabricated content, and building defensible processes before a crisis forces the issue. By the end of the next hour, you will know the five questions you must ask in the first 48 hours when someone claims that digital evidence in your matter was generated by AI, and you will leave knowing which of these questions your organization cannot answer today. So, a quick word on me. I’m the CISO and president of forensics here at HaystackID, and that means I spend my time on investigations that clients don’t want to talk about publicly. Today, I’m the moderator. Let’s open it up, and then my colleagues will do the work here. So Todd Tabor is our senior vice president of forensics. He runs the day-to-day of our forensics team, and when a client calls at 2 A.M. with a problem, Todd is the one whose plan is running behind the scenes. He’s going to walk you through what response actually looks like in the first 48 hours. Rene Novoa is our vice president of forensics. He leads our forensics lab in Chicago, and he runs our R&D work on emerging technologies, which right now is mostly AI-enabled attacks on digital evidence. Rene will walk you through the threat landscape and the anatomy of a deepfake attack.
And Jeff Shapiro is our managing director of Europe. He works on multi-jurisdictional matters, eDiscovery investigations, and regulatory response across the U.S., the UK, and the EU. Jeff is going to take us through the framework, the five questions every organization needs to be able to answer, and what the regulatory overlay looks like. So, in January 2024, a finance worker at Arup, a global engineering firm, joined a video call. He recognized his CFO. He recognized his colleagues. The CFO walked the team through an urgent series of 15 fund transfers totaling $25 million. The transfers went out. Then they discovered that every other person on that call was a deepfake. The CFO, the colleagues, all of them, AI-generated in real time, speaking in the vernacular, speaking in the voice of those individuals. Two weeks later, the real CFO found out, and he had never been on that call. Now that’s a fraud story, but I want you to think of it as an evidence story because here’s what happened next. The company lawyers had to prove that the video their own employees watched on their own systems with their own eyes was a fake, and they didn’t have the tools, the process, or the expertise to do it on their own. So now I’ve brought two glasses of water, and I want to show you something. These two glasses of water are identical. They’re both clear. One of them is pure drinking water. The other has been contaminated with something colorless, odorless, and invisible. Now, your instinct right now is to try to figure out which one is contaminated. You’re looking for a difference, a tint, a particle, something that gives an indicator. That’s the detection instinct, and that’s the same instinct most people bring to the deepfake problem. If we just get better at spotting the fakes, we’ll be fine. But that’s not the hard problem. The hard problem is this. How do you prove to a judge, a regulator, or a board that this glass, the one you believe is real, is safe?
Because once you know that contamination exists, you can’t see it, you can’t smell it, you can’t test for it with the tools you have today, the presumption of safety is gone, not just for the contaminated glass, but for both glasses and every glass on the table. That’s exactly what’s happening to digital evidence right now. The existence of AI-generated content doesn’t just create false evidence. It contaminates the presumption of authenticity for all evidence. Every video, every audio recording, every document in your case file now carries an implicit question, but is it real? Your case files are full of glasses of water, and the question isn’t which ones are contaminated. The question is, can you prove which ones are safe? And it’s not just us saying that. When Arup reported their incident, Hong Kong police confirmed it was one of multiple deepfake fraud cases they had under investigation in that same time period. The FBI’s Internet Crime Complaint Center flags synthetic media as an emerging vector in its 2024 annual report. And in the UK, the Law Commission has opened a formal consultation on whether existing evidence law is adequate for AI-generated content. The problem isn’t theoretical; law enforcement knows, regulators know, and the legal system is starting to catch up to what you just felt with the glasses. So the assumption is broken, but broken how exactly? What are the specific threat vectors? Where are most organizations actually exposed? Rene, why don’t you walk us through what you’re seeing?
Rene Novoa
Yeah, thanks, John. I think that was a great intro into the many vectors we see in our intelligence, the primary vectors for our practice here at HaystackID. And I think that we also don’t want to, as with your example of the two glasses of water, we don’t want to narrow in on spending so much time trying to prove the authenticity of it, right? There still has to be business. We still have to have a workflow. We have to have a process. And I think this is where it’s very important to have a process of being able to identify how to look at all media and items like digital media that are coming into your organization, because we could get bogged down with trying to make what our eyes see and our ears hear what our mind’s going to believe. At HaystackID, we have this threat landscape, and John did a fantastic job. Thanks, John, for that intro on that case, on the media attacks, right? So that’s that synthetic media, where the people believed what they heard, audio impersonating those individuals, and what they saw, delivered over a very familiar tool like Teams with visual identification online. They believed it to be true, and the money was sent, right? And then there’s identity impersonation, which goes beyond just fraud at a large organization like that. We’ve also got to start looking at the different applications like Cash App or Bitcoin or things that you’re doing on the blockchain, where you’re doing a lot of KYC and uploading your driver’s license, or gambling websites, right? People have to upload their IDs. Now, if those can be synthetically created, someone can steal your information, go onto FanDuel or Underdog or other betting platforms, or open up bank accounts online using these types of money transfers.
And so a lot of damage can be happening, and not just in the “Oh, that would never happen to me, we’re all in person” sense. We have to think beyond just these large cases, and as you said, $25 million was extracted from an organization. And so we have these different vectors so that we can break it down and make sure it makes sense, and we build workflows for these types of cases. And so this is why we have these different threat vectors. We have evidence manipulation, where it is very easy now with the technology, even on our mobile devices, to edit the location of where we’re at, the geolocation of photos; you can actually do that now. And it’s not for evil, but we have to understand what the technology is doing, how evidence can be changed, and how to detect when certain metadata and items are being altered in actual evidence that gets put into court or in front of the organization for review. So with that, we’re able to now edit evidence or manipulate it, but we’re also creating evidence, right? We’re having media fabrication, we’re creating fake individuals. There are entire websites for marketing purposes: I need a family, or older adults dressed really nicely in a park, and we can generate real-looking people. They’re not real people; they’re AI-generated individuals, but they look so real. And if we can do this for marketing, what else can these fake people be used for, people you’re never going to track down with facial recognition, right? And so we need to be able to understand when complete media is fabricated, when it’s created all from scratch, and how it can be used. And that’s where the technology is getting better. When we have media fabrication, and we create it from scratch, there are also technologies to then obfuscate the authenticity. With the things that we’re going to be looking at, can they manipulate it? Can they mask the original source so that our tools find it very, very hard to detect that this might be a fake, right?
And there is no certainty when we do detection. And that’s one thing that I think is very important to understand about these AI threats. It is very hard to say with certainty, based on how digital media moves across our platforms. Just the fact that we text a photograph or a video from my Android to an iPhone means a lot of metadata is stripped out, and there’s compression. And so when it gets to, let’s say, John’s phone, John, I’m sending you a text message, certain metadata is already removed. The fact that metadata is missing from that source media on your phone doesn’t mean it’s AI or was manipulated; that’s just how digital media, digital evidence, moves through different platforms. And so we need to understand how data moves from one platform to the other, what can be manipulated or added to, and really, refine our tools to be able to detect those. And I think last … Go ahead.
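[Editor’s note: Rene’s point about media changing as it moves between platforms is also why forensic teams record a cryptographic hash of evidence at the point of collection. The following is a minimal Python illustration of that idea, with hypothetical byte strings standing in for media files, not a HaystackID tool.]

```python
import hashlib

def sha256_of(data: bytes) -> str:
    # Forensic tools record a cryptographic hash of evidence at collection time;
    # any later change to the bytes produces a completely different digest.
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for a photo as captured vs. as received after texting
original = b"<jpeg bytes including full EXIF metadata>"
in_transit = b"<jpeg bytes after EXIF stripping and recompression>"

print(sha256_of(original) == sha256_of(in_transit))  # -> False: the copies no longer match
```

As Rene notes, a mismatch like this does not prove manipulation or AI generation; it only proves the bytes changed somewhere between the source and the copy in hand, which is exactly why collecting as close to the source as possible matters.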
John Wilson
Rene, I think that leads right into the trust stack.
Rene Novoa
Yeah. I think all of these tie together, and I think we need to turn to understanding how we trust this evidence. And I think you wanted to talk about how we see that.
John Wilson
Yeah. So let’s talk about the trust stack itself. Yeah.
Rene Novoa
Oh, wonderful. So I think that with the trust stack, that’s exactly what I was going through. How do we trust? How do we authenticate this information? And we’re getting it in so many different ways, whether it’s over FTP, whether it’s emailed, and then also, just like I mentioned, how we’re texting each other, how media moves across. So, when we look at data that comes into my inbox or is presented to me on a hard drive, what are our steps to look at that data? There is no way to authenticate media just from what’s provided to me, right? So we need to have those levels, but we don’t want to slow down business. We want to be able to look at data, be able to trust it, and have a quick workflow to identify the key indicators that things have been either manipulated or completely fabricated, right? We have the different levels. The pictures get through very fast because, for most people, there aren’t a lot of ways to check beyond what they can see. Your example of the water was right on because we saw two clear glasses, but there was no way to know that one of them was contaminated. We can go more into communication as we work down the stack. We have our own teams and how we communicate with each other, making sure organizations aren’t doing off-channel communications. I know a lot of financial firms don’t allow WhatsApp or Signal chat off-channel. Everything needs to be done through either Teams or Bloomberg. They have all these trusted communication channels. They can still be manipulated, but it makes it much harder. There is a much higher level of trust when I’m communicating with you on a trusted platform than on whatever else you may have in the organization. And that’s why we need to make sure we’re controlling how we share information, how we send documents to each other, how we send photos and other digital media to each other, and how we communicate with each other.
They have to be trusted platforms, ones that have been vetted. So we may not be able to stop everything 100%, but we really need to get close to making sure that the level of trust in information going back and forth is there, and then add additional education. And I think that’s where the identity trust comes into place because that’s going to be the individual, right? That’s going to be your 2FA, that’s going to be your authenticator apps. And so we need to build a model that takes us from where we’re most vulnerable all the way down to the most secure and find that equilibrium somewhere in the middle, so that we have as much trust as possible without slowing down the organization. And I hear that a lot: well, if we have to have all these privacy controls, all these roadblocks, we’re not going to get any work done. And so we need to find where that balance is, and really, education is key, so that we do take the time and not just blindly trust.
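[Editor’s note: the authenticator apps Rene refers to implement the open HOTP/TOTP standards (RFC 4226 and RFC 6238). A minimal Python sketch of the underlying math, for illustration only, not production code:]

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # RFC 4226: HMAC-SHA1 over an 8-byte big-endian counter, dynamically truncated.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of the last byte picks a 4-byte window
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    # RFC 6238: HOTP keyed to the current 30-second time window.
    return hotp(secret, int(time.time()) // step)

# RFC 4226 test vector: the shared secret "12345678901234567890" at counter 0
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

Because the code depends on a shared secret and the current time window, a deepfake caller who looks and sounds exactly right still cannot produce it, which is why this kind of identity trust sits near the secure end of the stack.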
John Wilson
And I think that’s a great, great point, and it’s a perfect segue to … We now know what the threats are and where the defenses don’t hold. Todd, can you tell us what happens when the call comes in?
Todd Tabor
Thanks, John. Sure. Well, when that stack breaks down, you really have three pretty significant outcomes. Fraud is the most obvious because it’s usually right there in front of you, like in the Arup case, with the $25 million gone in the wire transfers. And it’s usually the easiest part to quantify and understand because the money’s out the door. But then there’s data theft. Oftentimes the deepfake video call is, sure, a vector, a social engineering vector, and also a fraud vector, but while your fake CFO is explaining the urgency of the transfer, there may be something else going on; that deepfake may be a distraction from an end goal of grabbing other data on your network, or getting encryption keys, or some other highly sensitive piece of information. And then the longest tail in this thing is the regulatory exposure, because you’ve got personal data that could be accessed, definitely exposed. You have GDPR and state privacy clocks beginning to run. And if the material is not public, then you’ve got other issues; you may have exposed yourself to SEC questions, industry regulatory questions, and financial ones. And so the clock is moving, and you’re on it, whether you like it or not.
Rene Novoa
And then there’s reputational exposure, too.
Todd Tabor
Oh yeah.
Rene Novoa
Because even though it’s fake and it’s not true, once it’s out there and you’re exposed in some sort of criminal activity, that’s what people believe; again, what their eyes see and their ears hear, their mind believes. And there are a lot of other things that can be very damaging to an organization when the trust fails. Sorry, John. And sorry, Todd, there you go.
John Wilson
Oh yeah. I mean, that’s perfect because I think a lot of people … The case that we opened with, which is Arup, was a $25 million transfer. In a lot of these cases, though, it’s a $100,000 transfer. It’s a $50,000 transfer. The impacts are smaller, but again, that’s because it’s not the primary attack. That’s the distraction. That’s what’s trying to draw you away from the activity behind the scenes, where they’re doing the rest of the attack, where they’re exposing further credentials or further details of your network. They’re trying to learn about a transaction that’s getting ready to happen, and so they’re trying to get insider trading information so they can leverage it for profit that way. And that’s what really makes this intriguing. In the work we’ve done so far, in the majority of it, the deepfake fraud that occurred was really just a distractor from the real activity being perpetrated. So, Todd, let’s jump into the further complications of that. So now we understand what’s going on, but there is a whole other level to it when you start talking about how almost everything is cross-border these days because everybody is interacting on a global level. There are customers who might be from other countries that are participating in your organization, purchasing your items, or interacting with your company. Can you walk us through that a little bit?
Todd Tabor
Sure. And I’ll kind of try to give you a concrete example, so that makes the point a little clearer. Let’s say a client discovers that a product design, one of their product designs, has shown up in some competitors’ patent filings. Well, the evidence trail is going to run through multiple platforms, maybe Teams, SharePoint, corporate email, mobile messaging, all of that. And then you have multiple platforms that you have to deal with, data that has significant financial information behind it. And now if that information has moved across multiple borders, you’ve got multiple jurisdictions involved, so that you have to understand the laws related to those jurisdictions, the timing and emphasis on each of those platforms, and how the information transferred and what your movements are.
Rene Novoa
It’s really where the data sits, so where it actually lands, right?
Todd Tabor
That’s right.
Rene Novoa
Where it’s being stored.
Todd Tabor
That’s right.
Rene Novoa
And then it also breaks the provenance; there’s disruption just because of all those things you said, Todd: the locations, the different platforms, and then the chain of custody can be broken, and that can definitely cause problems, especially when you have different jurisdictions… Ah, I’m all tied up. You guys understand what I’m saying, but we’ll get through this together. But yeah, you have those things disrupted, and that leads to conversations that get more into the legal world.
Jeff Shapiro
Yeah. On Todd’s point, the Arup case is based on a UK-headquartered company, which is where I’ve been located since 2013. And often with cloud data, we think of it as being borderless, but in fact, it sits within certain tenants within certain geographic environments. Due to data privacy regulations and other regulations that exist, you can’t just grab data from one location and send it to another without potentially triggering data transfer restrictions and violations. So it’s incredibly important in these first hours when you suspect that an issue has arisen, that you figure out where your data sits, who has access to it, and where it can be transferred. What that indicates is that this is not just something you do at the point in time when an investigation has arisen. This goes further back into information governance and a proactive posture that you, as a corporate or law firm, need to find yourself in. You need to understand your data residency and landscape so that you can respond quickly. And I’ll touch upon this later, but with the new EU E-Evidence standard coming into effect in August 2026, this is going to be more critical than ever.
John Wilson
Yeah. And so I’ll talk about one specific example, and then we’ll keep moving forward. In one engagement, the mobile messages that were central to a case were held in a jurisdiction that doesn’t recognize U.S. litigation obligations. So we had to work with local counsel there to build a lawful basis for collection, to accomplish the work under their framework and not under the U.S. litigation framework, while also meeting our client’s duty to preserve under the federal rules back here in the U.S. And that’s where the investigations get complicated. That’s where all of these things get very difficult, because a forensics team that only knows U.S. procedure, or an eDiscovery team that doesn’t do cross-border work, can both get stuck. And so, Rene, let’s jump into the actual deepfake attack chain. Let’s talk through that.
Rene Novoa
Yes. Yeah. Yes, John. So just like you said on the collections, where does that data sit? How is it going to be targeted? But one of the biggest things about collections, especially when you do have a deepfake attack, is getting it as close to the source as possible. So as it goes through the many different platforms, whether it’s been text messaged or emailed or it’s sitting somewhere on some SharePoint, getting as close to the point of creation, and I know a lot of times that’s hard, before it’s been moved around from legal to IT and then over to an organization like us that does have a workflow plan in place. We definitely want to make sure we’re not just collecting where they say, “Hey, just collect.” It’s really about preservation at the point of attack and not after it’s already been viewed by so many others, right? So once we’re able to, we go ahead and collect, and we also look at other samples within the organization. We need something to compare it to, not just a regular photo or even a document. We need to understand how to train a model, what metadata should be there, what little nuggets of a document, a certain writing style, certain headings, all those things that may be off in an AI-generated document. It does the best it can, and at a glance it’s going to fool you, but down at the pixel level or the document level, we’re able to understand where those miscalculations come up and raise a red flag. So, having sample data for the company, and then training the model, and then understanding where that compromised channel was, and this is where we can find those holes.
John Wilson
Right. Well, when you’re talking through this five-step attack chain, you have to understand that this is how the attacker approaches an organization. They’re getting samples, they’re using those samples to train the model, and then they’re going to start to work on how to compromise channels. How do we get in? What’s our entry point? Do we get into the email? Do we get in through chats? Do we get in through video, through their conference systems? How do we do all of that? And then how do we execute the requests and move into payments or data requests? And keep in mind that executing a request is really complicated and deep, because it’s no longer just, “Hey, I’m going to make this fake video and try to get a transfer.” Again, as we talked about before, that’s a distraction. Their execution is now very multi-pronged, multifaceted, hitting the organization financially, regulatorily, and reputationally. They’re going really deep across the organization for multiple purposes because, A, the deepfake attack part is the distraction, but having the right info also reinforces their legitimacy, and that’s why they train their models. So when you’ve got the CEO on the line, you know that he’s from Texas and talks with a Southern drawl, or you know the parlance of how he actually talks; all those things reinforce the legitimacy. And whether that’s a deepfake audio or just drafting an email, if they can get access to a mailbox, they index that mailbox, and they can understand the writing style of that individual.
Rene Novoa
And that’s the thing we’re going to have to reverse, though, John. That’s exactly the hard part, too: being able to reverse that across your threat vectors, right? Trying to understand how they got there, how they learned it, what was exposed for them to build those models, and how those got compromised. So it’s also reverse engineering: taking what the intruder did, but on the way back, going back and undoing that information, right? Because we have to understand not only where to stop it, but also where we’d otherwise allow them to keep coming in and keep learning, as we put up all these barriers and education and try to help the organization recover. We have to make sure it cannot happen again. And it’s very important to understand how they did it, using almost the same process they did. We’re going to have to reverse engineer it.
John Wilson
That’s right. Which is a perfect lead-in to Todd. Let’s talk through the first 48 hours. What happens? What needs to occur?
Todd Tabor
Sure. In the first 48 hours, you’ve got to preserve, secure, reconstruct, trace, and assess the problems with your data, wherever they are. You’ve got to preserve your logs, your mailboxes, your chat histories, anything that could be potential evidence and could potentially be an access point; you need to preserve that. And that all needs to be done immediately, with everybody put under legal hold. Then you need to secure your network and make sure the attacker isn’t still in your environment doing damage, pulling secondary sources, or looking for access points on other systems. And then reconstruct the timeline. Find out what actually happened across all your systems, who saw what, when, and on which channel. This is where media authentication lives. And this is where specific videos, audio, and documents need to be authenticated. And then trace: we need to follow the data. Where did it go? Who accessed it? What left the environment? What persistence did the attacker establish? This is usually the step that reveals the full scope of the damage, and it is always larger than the first report of the incident. And assess means putting it all together in a defensible record so that we have a way to legally move forward and report on our obligations. So here’s what I want you to take from this slide. These five steps require two capabilities that are usually separate in our industry: forensic investigation and eDiscovery. You need forensic services to preserve and collect. You need forensic expertise to authenticate the media and reconstruct the timelines. And you need eDiscovery infrastructure to process the evidence defensibly across platforms, across jurisdictions, and to format it in a way that will survive litigation and regulatory review. At HaystackID, we run both those capabilities under one engagement. That’s not a sales line.
That’s a practical requirement of how 48-hour response usually works because if you have evidence in hand from a forensic firm and you have to hand it off to an eDiscovery vendor, you lose one to three days, and now you’ve added a custody gap to the exact moment when custody matters most.
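[Editor’s note: the “preserve” and “assess” steps Todd describes usually begin with a hash manifest of the collected items. A simplified Python sketch of that concept, an illustration rather than HaystackID’s actual workflow (the file name is hypothetical):]

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def preservation_manifest(paths: list[Path]) -> str:
    # For each preserved item, record name, size, SHA-256, and a UTC timestamp,
    # so later copies and productions can be verified against collection-time state.
    entries = []
    for p in paths:
        data = p.read_bytes()
        entries.append({
            "file": p.name,
            "bytes": len(data),
            "sha256": hashlib.sha256(data).hexdigest(),
            "collected_utc": datetime.now(timezone.utc).isoformat(),
        })
    return json.dumps(entries, indent=2)

# Demo with a throwaway file standing in for a piece of evidence
item = Path("call_recording.mp4")  # hypothetical evidence item
item.write_bytes(b"fake media bytes for illustration")
print(preservation_manifest([item]))
item.unlink()  # remove the demo file
```

A record like this, generated at the moment of collection rather than at handoff, is what lets the assess step produce a defensible account of exactly what was preserved and when, and it is the kind of custody gap that a one-to-three-day handoff between vendors puts at risk.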
John Wilson
Yeah. And I think that’s a great lead-in. So Jeff, let’s talk about what the questions are. What are the five questions? We said at the beginning of this that everybody would understand the five questions and then tie all of that into your international expertise.
Jeff Shapiro
Great. Thanks, John. Putting a lot of pressure on me to bring this home. That’s all right. So when a deepfake or an alleged deepfake attack occurs, there are five key questions that an organization needs to ask and try to answer in the first 48 hours. Now, the framework is generally the same everywhere, but the regulatory overlay changes depending on the jurisdiction and data type. So from an operational standpoint, if you can’t answer any one of these five, the other four become significantly harder to solve because they’re inherently interdependent. Let’s look at question one. Is the media authentic? This matters differently depending upon where the potential dispute, investigation, et cetera, plays out. In the U.S., under FRE 901, courts are increasingly requiring the party introducing digital evidence to affirmatively prove it hasn’t been AI-manipulated. In the UK, however, documents are generally presumed authentic and less formally challenged. And in the EU, evidence rules for civil trials vary by individual member state, meaning organizations must navigate different authentication standards across borders. Let’s look at question two, were identities compromised, and combine it with question three, what data was accessed? This is where cyber overlaps with privacy and human impact. If the CFO is deepfaked, you have to determine where the threat actors obtained the audio. Further, if attackers breached internal systems to acquire biometric data, regulators like the UK ICO have indicated this can be viewed as a severe GDPR data breach and violation. Question four: What regulatory exposure exists? This requires coordinating with your legal counsel to assess holistic liability. A deepfake attack can trigger varied exposure. We already mentioned SEC cybersecurity disclosures in the U.S. There’s the NIS2 Directive in Europe. You’ve got the EU AI Act transparency obligations.
And there’s also the introduction of civil litigation risk, as third parties or vendors may be impacted by fraudulent instructions. And finally, we turn to question five. Can we prove the timeline? Beyond the standard breach reporting window, this is about the forensic timeline of the attack itself. Proving the attack timeline is a heavy forensic lift, and it’s becoming a highly time-sensitive one. So, for example, I previously mentioned the EU e-evidence framework, which goes live in August 2026. It allows European authorities investigating cyber crimes to issue emergency production orders demanding underlying data in as little as eight hours. While there are legal mechanisms to object, such as physical impossibility or fundamental rights concerns, those objections generally must be raised within that same eight-hour window. This means that if your internal teams struggle to rapidly trace the forensic timeline of a deepfake, responding to these statutory deadlines can become a significant organizational hurdle, and that really goes to what Todd was saying before about needing forensic expertise, as well as the technological eDiscovery expertise, in order to be able to do this. John, could we go to the next slide? Let’s transition to a practical scenario: your CEO announces a major partnership, markets react, and it turns out the video was entirely AI-generated. This is not a hypothetical future state. The underlying technology and distribution channels to do this exist right now and are being used right now. Synthetic media has already been used to attempt to move markets, often starting on a smaller scale. The question is, what happens when it’s a convincing video of a named CEO, pushed out through seemingly legitimate channels, that moves the market? So the challenge extends far beyond your IT department. It touches on securities and market abuse regulations because material statements have moved the market.
It becomes a board-level governance issue requiring immediate communication strategies. It introduces operational friction, as internal teams may face severe decision-making paralysis while trying to verify which instructions are actually real. And it introduces litigation and insurance complexities. Shareholders may react to financial losses. Organizations may find that their standard cyber insurance policies require a traditional network intrusion to trigger a full payout. But what happens if the deepfake relies purely on social engineering over the phone or social media, without breaching a firewall? Then the insurance coverage can be complicated and may be difficult to secure. All of these operational and legal challenges map back to the five questions on the prior slide. You need to verify the media, secure identities, and assess data access. You and your counsel may find yourselves coordinating simultaneously with the U.S. SEC, the FCA in the UK under the Market Abuse Regulation, and ESMA in Europe. You need to prove the timeline, and that determines whether your public disclosures and trading halts were handled in a timely manner. The pattern of attack doesn’t necessarily change as it scales up, but the corporate stakes certainly do.
Rene Novoa
I mean, just one thing to add there, Jeff. I think that’s here today. We’ve seen major news organizations accidentally push deepfake videos that they believed were genuine. It didn’t change markets, but it could have changed public opinion, and that’s just as dangerous. That matters for every organization, because the content could be about your organization and end up on mainstream channels and news broadcasts. We need to be prepared for the reactions to those videos or statements, to have our own statements ready, and to have the law on our side as well. And I think that’s here now. This is not “prepare for.” We’re already here, and in my opinion, we’re already behind the ball.
John Wilson
Yeah. I mean, just imagine Apple announced a new CEO that’s going to come out later this year, but he’s not currently a major public figure. He’s well known within the organization, but not as well known outside the organization. He’s done a few presentations and a few things in the public eye. How easy or how scary would it be if he came out and said, “Hey, Apple’s going to launch touchscreen computers,” or Apple’s going to reinvent AI, and they’re developing their own AI platform, and made announcements of that nature? What impacts would that have at that scale?
Jeff Shapiro
Yeah. And not only at that scale, but you also have the scale of a sole trader, a solo practitioner. What happens if you’re an influencer on social media? You have content that’s out there for anyone to access, potentially, which means there are multiple attack vectors to create that synthetic media and ruin your brand, ruin your reputation. We’re already seeing that happen on social media websites. So it’s really, really important that you follow the steps that we’re outlining here.
John Wilson
Yeah, agreed. So really, the future of digital investigations isn’t just about protecting systems from intrusion. Some of this, as Jeff just pointed out, can be done solely with publicly available media. They can get access to your voice, to your likeness, and can create complete video fakes, deepfakes, or synthetic media of a CEO of a company drinking a competitor’s product or whatever it may be. And they can do that without having to get behind your firewall at all. Now, that doesn’t mean that’s not what they’re doing. A lot of times, they are doing that as well because, again, they’re using the distraction and then they’re dipping further into the well.
Rene Novoa
I mean, John, there are entire businesses built on this, on creating deepfakes and teaching people how to do it. Like Jeff mentioned with influencers, you can be an influencer with deepfakes and AI, and there are complete business models built around the how-tos. So people are going to become more and more educated on how to do this, and we have to be just as educated on how to prove it and authenticate it. Sorry, go ahead, Jeff.
Jeff Shapiro
Just on this slide, on this future of digital trust, I want to leave you with four words today. Verify, authenticate, trace, and prove. The way I look at this is that the future of digital investigations involves proving the authenticity and provenance of the digital evidence your organization relies on, right? We’ve seen regulatory shifts that generally map directly to these concepts. Under verify and authenticate, you have to navigate frameworks like the EU AI Act, which asks organizations to be transparent about what content is synthetic. Under trace, that’s adhering to privacy standards overseen by entities like the UK ICO, which is increasingly scrutinizing how biometric data is sourced. And under prove, that’s meeting evidentiary standards in court and preparing for frameworks like the EU e-evidence regulation, which demands rapid production of digital files. This really expands upon the traditional incident response plans that hopefully your organization has in place. Those plans allow you to expertly answer, “Did an attacker get in, and did we contain them?” That perimeter defense remains critical. However, the next generation of incident response plans is increasingly being asked to answer a second question: Can we prove that the evidence we’re relying on, including data pulled from our own servers, is authentic? Because if a regulator, a court, or a business partner asks that question and your team cannot reliably answer it, subsequent legal and operational steps are put at risk. Protecting digital trust is really a core component of corporate governance. It protects financial value, reputation, and the people within your organization. So as we move forward, we need to think about and treat eDiscovery and digital forensics not just as a traditional litigation support function, but as part of a rapid 24/7 incident response capability. And in that, it’s becoming a strategic necessity. Back to you, John.
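[Editor’s illustration] Jeff’s four words map naturally onto a forensic chain-of-custody record. The sketch below is a minimal, hypothetical illustration (the function and field names are ours, not a HaystackID tool) of how cryptographic hashing ties “trace” and “prove” together: matching digests across custody steps demonstrate the evidence bytes were never altered.

```python
import hashlib
from datetime import datetime, timezone

def custody_entry(evidence: bytes, actor: str, action: str) -> dict:
    """One chain-of-custody record: who did what, when, to which exact bytes."""
    return {
        "sha256": hashlib.sha256(evidence).hexdigest(),  # fingerprint of the media
        "actor": actor,
        "action": action,
        "utc_time": datetime.now(timezone.utc).isoformat(),
    }

clip = b"...suspect video bytes..."  # placeholder for collected media content

log = [
    custody_entry(clip, "analyst_a", "collected"),
    custody_entry(clip, "analyst_b", "verified"),
]

# Matching digests across custody steps show the bytes were unchanged --
# the kind of artifact that supports "can we prove the timeline?"
assert log[0]["sha256"] == log[1]["sha256"]
```

Production workflows rely on forensic imaging and logging tools rather than ad hoc scripts; this only illustrates the hashing principle behind them.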
John Wilson
Yeah. No, I completely agree. So thank you very much, Jeff. Thank you very much, team. Five questions, 48 hours. In our experience, most organizations can’t answer all five. So what does that mean? Where do you take that? Where do you go? At the start of the hour, I told you I’d leave you with the five questions, so let me count them off. One, is the media authentic? Two, were identities compromised? Three, what data was accessed? Four, what regulatory exposure exists? And five, can we prove the timeline? For each one your organization can answer, keep a finger up and count it in your head. How many can you keep up right now? All five? Four? Two? One? Now take the first finger you put down, the question you’re least confident about. That’s the one I’d say to take back to your team. Tell them, “Hey, I’ve just received a credible report that a video used in an internal decision was AI-generated,” and ask them to answer that one question. How would they prove it or disprove it? How would they provide the evidence, the timeline? Then you can move on to the other questions. If they can answer that one question cleanly, hey, you’re ahead of most organizations. If they can’t, and in our experience, most can’t, that’s not a failure; that’s a finding. That’s your starting point, something to start going to work on in your organization. The five questions are how you start clearing the water. If you can’t answer all five in 48 hours, that’s your gap, and that’s the conversation we should be having.
Rene Novoa
John, can you repeat that? We have a question in the box. I just wanted you to say those questions again, slowly.
John Wilson
Oh, absolutely. So again, is the media authentic? Were identities compromised? What data was accessed? What regulatory exposure exists? And can we prove the timeline?
Rene Novoa
Yep.
John Wilson
So really, that is the end of our presentation. We will take any additional questions or discussion that anyone would like to have. And I’m being asked to go back to the slide, so the questions are on screen again.
Mary Mack
John, your team identified eight hours as one of the timeline deadlines. Can you talk a little bit more about that eight-hour deadline?
John Wilson
Absolutely. Jeff, I will put you on the line for that one.
Jeff Shapiro
Sure. So there’s a new evidence framework coming online in August of 2026. It’s called the EU E-Evidence Framework. Its full name is Regulation (EU) 2023/1543 on European Production and Preservation Orders for Electronic Evidence. Now, this applies to criminal proceedings. It covers 26 EU member states, as Denmark opted out, and it comes into effect on the 18th of August 2026. It is not just for criminal proceedings happening within the member states; it could involve citizens or residents of those states in which an investigation is happening. And it allows EU judicial authorities to issue direct orders to service providers. So authorities can issue preservation orders freezing data for 60 days, or production orders demanding data in 10 days, or eight hours in emergencies. Now, there are certain nuances and uncertainties around this. The framework puts data types into certain tiers. It mandates the use of what’s called the e-CODEX IT system, which features a 25 megabyte file limit. That means large video files may require compression, which opens a whole issue around the custody and provenance of evidence and could potentially strip metadata necessary to authenticate deepfakes. And so there are discussions being had around this framework in terms of what addenda need to go into it. What you need to know right now is that if you deal with the EU, if you have clients who are in the EU, if you are ever faced with a potential criminal investigation or prosecution, then you need to know more about this framework, and you need to make sure that your organization is ready to comply with it. This would become part of your incident response planning. Incident response planning isn’t just for cyber incidents; it could be for other reputational incidents, litigation, dawn raids, regulatory investigations, et cetera.
But this is something that you need to build into your incident response planning so that you can ensure that you can comply with those eight-hour or 10-day timelines.
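[Editor’s illustration] To make the compression concern above concrete, here is a hedged sketch using synthetic byte strings (not real video data): re-encoding a file to fit a size cap such as the 25 MB e-CODEX limit changes its bytes, so its digest no longer matches the preserved original’s, and embedded metadata can be lost along the way.

```python
import hashlib

MAX_TRANSFER_BYTES = 25 * 1024 * 1024  # e-CODEX transfer cap cited in the webcast

def sha256_hex(data: bytes) -> str:
    """Digest of the exact file content; changes if even one byte changes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins: an original clip carrying a metadata field,
# and a re-encoded copy where compression dropped that metadata.
original = b"ftyp-isom|video-stream|meta:creation_time=2026-08-18T09:00:00Z"
recompressed = b"ftyp-isom|video-stream-recompressed"

assert len(recompressed) <= MAX_TRANSFER_BYTES   # the copy fits the cap...
# ...but it no longer matches the preserved original's fingerprint,
# and the creation-time metadata needed for authentication is gone.
assert sha256_hex(original) != sha256_hex(recompressed)
assert b"creation_time" not in recompressed
```

This is why the panel flags compression as a chain-of-custody problem: hashing both the preserved original and any transmitted copy, and documenting the transformation between them, is one way to keep the production defensible.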
John Wilson
Thank you. Thank you very much for that, Jeff. I’m glad that you were able to share that. It looks like we have no other questions from the audience. So I will say thank you very much for being here today. We truly value your time and appreciate your interest in our educational series. Don’t miss our upcoming April 29th workshop with the EDRM, Discovery at a Crossroads: Global Perspectives on Emerging Challenges. During the program, our experts will share how discovery practices are changing, where legal frameworks are starting to align, and where they are still pulling in different directions, and what all of that means for legal compliance and investigative teams on the ground. Check out our website, HaystackID.com, to learn more. Register for this upcoming workshop and explore our extensive library of on-demand webcasts. Once again, thanks for joining us today for this webcast, and we hope you have a great day.
Mary Mack
Thanks, John. And we thank our wonderful, trusted partner, HaystackID. Thank you to our panelists for sharing their expertise. And before closing, we have another webinar for you to mark in your calendar. It’s HaystackID’s next webinar, The AI eDiscovery Sea Change: Privilege, Work Product, and Hyperlink Productions. It’s happening on Wednesday, May 20th at 12 Eastern. You can find the registration link in today’s resources, and we hope to see you there. And on behalf of EDRM, sincere appreciation is extended for your participation today, and wishing everybody a productive day. Thank you.
John Wilson
Thanks, everybody.
Expert Panelists
+ Rene Novoa, CCLO, CCPA, CJED
Vice President of Forensics, HaystackID
As Vice President of Forensics for HaystackID, Rene Novoa has more than 20 years of technology experience conducting data recovery, digital forensics, eDiscovery, and account management and sales activities. During this time, Rene has performed investigations in both civil and criminal matters and has directly provided litigation support and forensic analysis for seven years. Rene has regularly worked with ICAC, HTCIA, IACIS, and other regional task forces supporting State Law Enforcement Division accounts and users in his most recent forensic leadership roles.
+ Jeff Shapiro
Managing Director, Europe, HaystackID
Jeff Shapiro is the Managing Director for Europe at HaystackID where he oversees the development and growth initiatives across the region. Shapiro, a seasoned legal and technology professional, brings extensive experience advising on complex, multijurisdictional matters spanning eDiscovery, information governance, cybersecurity, litigation, investigation, and regulatory response. His career includes tenures at several industry-leading professional service firms, including a top global consultancy and a Magic Circle law firm. Jeff has a reputation for objective, consultative leadership and a proven track record in building large-scale operations. Jeff has long focused on giving back to the legal technology community in London, including his ACEDS UK volunteer work and vice president position, as well as his past roles as the ILTA UK Litigation Support Chair and a Relativity User Group steering committee member.
+ Todd Tabor
Senior Vice President of Forensics, HaystackID
In 2021, Todd Tabor joined HaystackID and is currently the Vice President of PMO, Forensics. In this role, he is responsible for the identification, hiring, training, and development of HaystackID’s Forensic Project Management Team as well as developing the processes and procedures of that team. Prior to joining HaystackID, Todd was the Executive Vice President of Operations for Veristar.
+ John Wilson, ACE, AME, CBE [Moderator]
Chief Information Security Officer and President of Forensics, HaystackID
As Chief Information Security Officer and President of Forensics at HaystackID, John provides consulting and forensic services to help companies address various matters related to electronic discovery and computer forensics, including leading forensic investigations, cryptocurrency investigations, and ensuring proper preservation of evidence items and chain of custody. He regularly develops forensic workflows and processes for clients ranging from major financial institutions to governmental departments, including Fortune 500 companies and Am Law 100 law firms.
About EDRM
Empowering the global leaders of e-discovery, the Electronic Discovery Reference Model (EDRM) creates practical global resources to improve e-discovery, privacy, security, and information governance. Since 2005, EDRM has delivered leadership, standards, tools, guides, and test datasets to strengthen best practices throughout the world. EDRM has an international presence in 145 countries, spanning six continents. EDRM provides an innovative support infrastructure for individuals, law firms, corporations, and government organizations seeking to improve the practice and provision of data and legal discovery with 19 active projects.
About HaystackID®
HaystackID® solves complex data challenges related to legal, compliance, regulatory, and cyber requirements. Core offerings include Global Advisory, Cybersecurity, Core Intelligence AI™, and ReviewRight® Global Managed Review, supported by its unified CoreFlex™ service interface and eDiscovery AI™ technology. Recognized globally by industry leaders, including Chambers, Gartner, IDC, and Legaltech News, HaystackID helps corporations and legal practices manage data gravity, where information demands action, and workflow gravity, where critical requirements demand coordinated expertise, delivering innovative solutions with a continual focus on security, privacy, and integrity. Learn more at HaystackID.com.
Assisted by GAI and LLM technologies.
SOURCE: HaystackID