[Webcast Transcript] Building Elite Cyber Incident Response Capabilities at Scale
Editor’s Note: When a breach occurs, the clock starts. Regulators expect timely reporting, individuals expect notification, and counsel requires a defensible path from discovery to disclosure—measured in days, not weeks.
Presented here is the narrative overview and full transcript of the HaystackID® webcast, “Building Elite Cyber Incident Response Capabilities at Scale,” recorded on Wednesday, October 29, 2025. The session features Michael Sarlo, Chief Innovation Officer and President of Global Investigations and Cyber Incident Response Services (Moderator); Kevin Golas, Managing Director, Advisory Group; and Anya Korolyov, Executive Vice President, Cyber and Legal Data Intelligence Strategy.
The webcast examines how organizations compress discovery-to-notification timelines while preserving accuracy and defensibility across jurisdictions. Topics include treating cyber incident response as a dedicated function; first-hour scoping that aligns legal and technical workstreams; staffing models that combine project management with subject-matter expertise; targeted automation for dense data classes such as spreadsheets and scanned PDFs; and the measured use of AI to accelerate identification and threat hunting with human validation. Practical governance considerations are addressed, including documentation and defensibility memoranda, tool versioning and chain of custody, licensing and concurrency limits, cloud elasticity, and expectations for carrier-funded engagements.
This introduction orients cybersecurity, information governance, and eDiscovery professionals to the program’s structure and key takeaways. The narrative provides context for the themes covered; the transcript offers the complete record for citation and reference.
Expert Panelists
+ Michael Sarlo [Moderator]
Chief Innovation Officer and President, Global Investigations and Cyber Incident Response Services, HaystackID
+ Kevin Golas
Managing Director, Advisory Group, HaystackID
+ Anya Korolyov
Executive Vice President, Cyber and Legal Data Intelligence Strategy, HaystackID
[Webcast Overview] Building Elite Cyber Incident Response Capabilities at Scale
By HaystackID Staff
On a Wednesday in late October, the webcast opened with a simple premise that felt anything but simple: when a breach hits, a clock starts. Regulators will expect answers, individuals will expect notification, and counsel will expect a path from chaos to clarity that takes days, not weeks. That timer framed the story the panel set out to tell—how a small, seven-person unit at HaystackID became a global, Chambers-recognized Cyber Incident Response (CIR) practice built to meet that clock without losing precision.
Michael Sarlo, Chief Innovation Officer and President of Global Investigations and Cyber Incident Response Services, and serving as moderator, introduced the setting with the matter-of-fact cadence of someone who has spent late nights in server rooms and war rooms. The narrative moved quickly to the origin scene: an eDiscovery and digital forensics business where handling unusual data sources, alternative operating systems, and uncooperative timelines was already routine. From that foundation, incident response was less a pivot and more a widening of the aperture—same urgency, higher stakes, new ownership.
Anya Korolyov, Executive Vice President, Cyber and Legal Data Intelligence Strategy, entered as the architect of the operating blueprint. Early on, she recognized that incident response only “looks” like eDiscovery. Waiting for perfect legal instruction would cost days that the clock would not spare. The team had to lead—asking the first-hour questions that shape every downstream decision: What industry is affected? Which jurisdictions and regulators apply? What data types are in play—structured databases, email, servers, cloud stores? Which third parties sit between evidence and outcome, and how much time will their confirmations add? Each answer shifted the workflow, the tools, and the notification strategy. The playbook would need to be firm enough to replicate and flexible enough to adapt.
Kevin Golas, Managing Director, provided the counterweight from the response line: services live or die on scoping and tooling. EDR and SIEM platforms help, but not all environments are equal, and not every endpoint is virtual. Some weeks still begin with a physical server pulled from a rack. Memory collection has gaps. License concurrency caps can stall surge work. Cloud elasticity can save a weekend; it also requires teams that can move between AWS, Google Cloud, and Snowflake data centers without missing a step. Growth demanded not just more people, but the right mix: project managers to keep milestones and communication clean; subject-matter experts for forensics, IR, cloud, network, mobile, and macOS; and a follow-the-sun cadence that reduces handoffs and burnout.
Tools became a character of their own. Early pilots chased “all-in-one” answers and lost time. The turning point came when the team focused on specific bottlenecks—the dense spreadsheets, scanned PDFs, and legacy formats where notification lists actually live. Purpose-built automation followed. A matter that once required two months to extract notification data from 6,000 spreadsheets contrasted sharply with a recent case that processed 45,000 spreadsheets in about two and a half weeks. The lesson was plain: build for the documents that move the outcome; buy where commercial platforms are strong; connect both to infrastructure that can flex when the clock demands it.
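The shape of that purpose-built automation can be suggested with a small sketch. Everything below is illustrative: the header list, the sample data, and the CSV-based scan are assumptions made for this overview, not a description of HaystackID’s production tooling, which the panel does not detail.

```python
import csv
import io
import re

# Headers that suggest a column holds notification-relevant PII.
# This list is a hypothetical example, not a vendor's actual taxonomy.
PII_HEADERS = {"name", "ssn", "social security", "dob", "date of birth", "mrn"}

# A loose nine-digit pattern; deciding whether a hit is a real SSN
# still takes format rules and human confirmation downstream.
SSN_PATTERN = re.compile(r"\b\d{3}-?\d{2}-?\d{4}\b")

def flag_pii_columns(csv_text):
    """Return {header: hit_count} for columns whose header or values look like PII."""
    reader = csv.DictReader(io.StringIO(csv_text))
    hits = {}
    for row in reader:
        for header, value in row.items():
            header_l = (header or "").strip().lower()
            looks_like_pii = any(h in header_l for h in PII_HEADERS)
            if looks_like_pii or SSN_PATTERN.search(value or ""):
                hits[header] = hits.get(header, 0) + 1
    return hits

sample = (
    "Employee Name,SSN,Office\n"
    "Jane Doe,123-45-6789,Chicago\n"
    "John Roe,987-65-4321,Boston\n"
)
print(flag_pii_columns(sample))  # → {'Employee Name': 2, 'SSN': 2}
```

In practice, flagged columns would feed a human review queue rather than a notification list directly; the point of the automation is to aim reviewers at the cells that matter.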
Artificial intelligence accelerated the story without replacing its protagonists. Detection and threat hunting moved from hours to minutes. Script generation shortened the path to patterns and anomalies. PII/PHI identification improved dramatically—though medical references still needed human judgment to separate general mentions from regulated health information. Vision models could parse stubborn, scanned content, but budgets had to balance model costs against measured human review. AI became the compass, not the captain: it pointed experts to the right terrain faster, while decisions remained grounded in legal standards and context.
Defensibility added the final act. The team wrote everything down. Defensibility memos tracked volumes inbound, processing paths, tool versions, decision points, validations, and outcomes. Forensic records captured timestamps, software versions, and chain of custody. These artifacts answered regulators before questions were asked and helped counsel close cases with confidence. Documentation did more than satisfy inquiry; it converted speed into credibility.
The narrative acknowledged a recurring subplot: carrier-funded engagements often aim to restore operations to a prior state, not to redesign a security program mid-crisis. Expectations must be aligned early—what “reasonable effort” means, when sampling is sufficient, and where stop conditions sit once notification goals are met. Clarity keeps clients, carriers, and counsel on the same page while the clock continues to tick.
By the time the webcast closed, the journey from a compact, high-talent team to an enterprise-scale practice felt less like a leap and more like a sequence of informed choices: treat CIR as its own function; hire for range and resilience; make the first hour count; aim automation at the highest-value bottlenecks; use AI to shorten the path, not to define the destination; invest in infrastructure that scales both up and out; and document every step so outcomes are not only fast, but defensible.
The clock still starts at breach. This story showed how to finish on time—and finish right.
Watch the recording or read the transcript below to learn more.
Transcript
Michael Sarlo
Hi, everyone, and welcome to another HaystackID webcast. I’m Michael Sarlo, serving as an expert panelist, lead, and moderator for today’s webcast titled Building Elite Cyber Incident Response Capabilities at Scale. This program is part of HaystackID’s ongoing educational series supporting cybersecurity, information governance, and eDiscovery objectives. We are recording today’s webcast for future on-demand viewing, and we’ll make the recording, along with a complete transcript, available on the HaystackID website at www.haystackid.com.
Today’s session examines building cyber incident response capabilities through the lens of how organizations compress timelines from discovery to notification, moving in days, not weeks, while preserving accuracy and defensibility across jurisdictions. Before turning to the agenda, brief speaker introductions follow. I don’t know, Anya, maybe you could … Well, I’ll introduce myself.
So, I’m Mike Sarlo, I’m Haystack’s chief innovation officer and our president of Global Investigations and Cyber Incident Response. I’m a digital forensic examiner by way of my operational background. I founded our eDiscovery practice and our forensics practice, and, in my current role, I work as a cross-functional expert solving some of our most pressing new data challenges, leading the cyber team from a business standpoint, and also serving as a key technical advisor and relationship manager for some of our largest clients. Anya, do you want to introduce yourself… Or, Kevin, do you want to introduce yourself?
Kevin Golas
Sure. I’m Kevin Golas, I’m managing director of advisory services and cybersecurity services here at Haystack. Like Mike, I’ve been around for the last 20 years in the cybersecurity space, a lot of that time at the forefront, working at some of the forensics and cybersecurity companies that developed the software and leading their services organizations. And I’ve worked at companies like T-Mobile, Grant Thornton, and OpenText, helping them start their cybersecurity programs and evolve them.
Anya Korolyov
Hi, everyone, I’m Anya Korolyov, I’m the executive vice president of cyber and legal data intelligence strategy at Haystack. I’ve been in the legal industry for going on close to 20 years. Before turning to cyber, I was mostly concentrating on antitrust investigations and second requests, and then, five years ago, I turned to the cyber data mining area and have built a team here at Haystack that we’re going to discuss.
Michael Sarlo
Great. So, just where we’re at. Haystack started really as an eDiscovery and digital forensics third-party vendor at a time when, really, your eDiscovery providers and your digital forensics providers were usually two separate organizations. Many of our competitors at this time, and I’m talking about 15 years ago, I’ve been with the company since the founding, would be working with maybe a more boutique digital forensics provider, one company providing eDiscovery processing. Back in those days, we didn’t have Relativity; we had Concordance and Summation, if that, and the most complex challenges were taking a hard drive, filtering it, running some search terms, and just TIFFing everything with a Bates layer.
So, I tell this story because part of our natural evolution into what I believe is probably one of the most sophisticated cyber incident response offerings in the eDiscovery space, from an eDiscovery provider who’s made that transition at Haystack, wouldn’t have been possible without our commitment to digital forensics, handling more complex data sources, and being at the forefront of that. From mobile phones to alternative operating systems and web-based systems, it was a natural transition for us, especially on the digital forensics incident response front, but also on the data mining front, as there are certainly robust human challenges there requiring pure grit and long sleepless nights from personnel on that front of the house as well.
Cybercriminals don’t sleep, and so, I tell you one thing, having the appropriate coverage usually going into the weekend is something that is important for anybody in the space. But not only that, you need to have experts available around the clock. You can’t have a team that’s just a fill-in team sitting there working at night or on the weekend; you need that expertise to really provide oversight through the process because every challenge is different.
And so, I have my first question for Anya. Anya has been with me for many, many, many years here through second requests and things like that. She’s a Relativity Master, she has a computer science background, she’s an attorney, and she’s also a key leader in our legal data intelligence function, in cyber, and in all things AI. So, Anya, you’ve seen us grow different businesses throughout the years. What was the moment you realized that this seven-person team could become a serious, scalable cyber offering?
Anya Korolyov
Yeah, it’s interesting. Both of us keep mentioning second requests, and I think a second request, for the legal community, comes as close as possible to what is needed for a cyber incident response. The pace of it is even more rigorous than a second request. You usually have somewhere between 30 and 60 days from the incident to notification of people across the United States and across other countries, so it’s an incredibly fast-moving environment. And we were used to … We’re an eDiscovery vendor, we were used to handling large matters, fast-moving matters, and then we were working with some law firm partners that had a couple of cases for us, and they wanted a specialized team, and we got through the matters, and then Mike and I looked at each other and said this is-
Michael Sarlo
Who are they?
Anya Korolyov
This isn’t eDiscovery. It’s close; we’re going to use the same tools, but this is not eDiscovery. And then, as we were trying to take our time and build the playbook, we got an offer to work with our law firm partners again on one of the largest incidents at that time. It was a zero-day incident, and it was across, I believe, about 80 jurisdictions, and it was a challenge, and we were able to get through it; we were able to deliver. I believe, at some point, pretty much the entirety of Haystack had their hands in that matter because we needed all the personnel that we could possibly get. And at the end of that matter is when I think we both realized that we need to build a team made of people that are both technically and legally inclined, that can speak to the attorneys, that can speak to the people on the DFIR side like Kevin and understand the forensics talk, that are capable of moving at such a fast pace, and that are also, I’m going to say, almost lazy people that don’t want to do something five times over and over, so they will instinctively come up with a solution to automate everything that they do.
I think that was our starting point. We said we need all of these, and then we started looking for that personnel. And that was quite a challenge, as this industry’s fast-moving as it is, but it’s not … To find somebody who has both technical and legal abilities and is creative enough to create new automation is challenging.
Michael Sarlo
Let me add something there, too, Anya.
Anya Korolyov
Yeah.
Michael Sarlo
You talked really about a multidisciplined approach, and some of the folks that have allowed us to plant a flag here have been with us for 15 years. They started like Anya and Kevin, really, in an environment where you had to do everything. The way that vendors evolve now is, oftentimes, very segmented. You have your technical analysts who never really had to communicate with a client, you have a project manager who may have never actually had to use a tool to touch data, and you have some folks in between who have maybe touched both. Very rarely do you get the folks who have also managed the digital forensic side of the equation and eDiscovery project management, touching data, servicing data, and delivering it to clients. And so, we’re looking for those key stakeholders that are going to really grow anything. I believe that organizations are often built on the back of a few heroes. Those few heroes are key in training the folks here.
The way we’ve structured this in my division is that everybody’s really getting a lot of exposure to everything, very much how it was in the old days. And so, it allows us to be, I think, more fluid in our delivery, to fill holes, and to scale. And we’ll talk about scale, because Anya described three incidents and throwing almost the whole company at something. And at a certain point, bodies do matter, especially in the beginning, because we’re going to talk about the technology and how it wasn’t great. We had to build our own; what’s available off the shelf is still not that great, and so we continue to build our own. Kevin, do you want to answer the same question for us? What was a turning point for you here in building a scalable offering that could compete with some of the biggest players on the DFIR side of the equation?
Kevin Golas
Yeah. And what you and Anya already covered when we were talking about … As everyone knows, at Haystack, we won a large engagement a couple of years back, and that engagement allowed us to scale, and we had to really scale fast, and we had to scale with quality. So, like you and Anya have already talked about, the key element to that is when do you insource it and when do you outsource it, and what skill sets do you need. And you really have to have a good understanding of where you want to go, how you want to get there, and how you’re going to do it, because you got to think about margins, you got to think about resources, you got to think about …
Being a services company, you have to think about, I can’t just bring on too many resources because then you’re going to have bench players. And then, what Mike and Anya talked about, having the right resources: sometimes it is beneficial to go out to a partner, because I don’t look at us as competitors; I look at us as partners. I can’t tell you how many times throughout my career I’ve partnered up with somebody that would be considered a competitor, but at the end of the day, we have different skill sets that we could leverage across that.
So, just being able to expand on that, build that, and then, like I said, looking for the right resources, we did it with milestones. We wanted to say, okay, over the next six months, here’s where we want to be, and then reverse-engineer how we’re going to get there. So, over those six months, we wanted to build on the East Coast and West Coast, we wanted to have certain geographical timelines aligned with what the customer wanted, and then we were able to build on that with skill sets, geographical location, and then, like you just talked about, Mike, not everyone can do everything.
So, who’s going to be the project manager, who’s going to be the accountable lead here, and dividing up those responsibilities. And what we’ve talked about before is that communication is key across those stakeholders. Like Mike was talking about, having that communication across those stakeholders is key to building that … From seven to, in our case, 121 folks throughout the United States and even into other geographical locations outside the United States, but we’ll probably go into that a little bit further.
Michael Sarlo
Yeah, sure. So, Anya, what were some of the biggest early hurdles for you? Was it tools? I heard you mention throwing the whole company at something. Was it process? Was it a combination? How did you start to untangle it and, at that stage, what did success really look like?
Anya Korolyov
Yeah. I mentioned earlier that cyber incident response, the data mining piece of it looks and smells like eDiscovery, but it’s not. And I think one of the earliest challenges that we needed to realize is that we usually rely on our law firms and our attorneys that we work with to tell us what they need, and then we deliver that. With incident response, it’s a little bit different. We needed to become the experts.
I think that was our first challenge: to realize that not all clients that have been through an incident, and not all the law firms they’re working with (sometimes they don’t want to work with a law firm; sometimes they think they can handle it themselves), are made equal, and that we needed to become the experts in the field as far as consultation, as far as identifying the data, as far as getting to the end goal as quickly as possible.
We needed to build a team that will become those experts so they will be able to advise the end client, they will be able to advise the law firm. Of course, the law firm always makes the final decision, but we needed to become very strong in understanding the overall life cycle of an incident response. We didn’t really have documentation, we didn’t have playbooks, we didn’t have a process, and we needed to build that process, and that was one of the toughest challenges because no two incidents are alike.
There’s always going to be differences in data, differences in what is in the data, differences in how much the end client is willing to help or knows their data, so a lot of it is you build a playbook, but you also play it by ear with every incident. So, that was one of the challenges as well. The tools were not there. AI, at the time we started, was just emerging … It was peeking out, but it wasn’t really what it is today.
Michael Sarlo
Rudimentary identification, really, was all we had, right? Yeah.
Anya Korolyov
Yeah. Which helped, but we also needed to figure out evolving not only the identification with the AI piece but also search terms and the calibration of search terms, because, of course, search terms are always going to be over-inclusive when looking for PII. So, we needed to figure that out, but we also needed to figure out a lot of tools because, again, you are dealing with something where you have 30 to 60 days, and sometimes you’re looking at terabytes of data. Often, the company that is suffering from an incident knows there is an incident, but they do not know what was taken.
They know where it was taken from, but not exactly what. So, we’re looking at terabytes of data, and sometimes, halfway through the matter, we find out, okay, well, this is what was actually taken, so we needed to be nimble enough to pivot in the middle of the process as well. So, a lot of it depended on the matter coming in and figuring out also the initial questions to ask when the matter comes in. And as we realized, there were quite a lot of initial questions that we needed to ask, unlike eDiscovery.
So, all of that, and then figuring out how to work with Kevin and his team as well, because they get involved way earlier than the data mining, so that was a challenge a little bit as well. Even though we’re used to dealing with forensics, again, most of the time, we knew what needed to be collected. With cyber incidents, it’s not always the case.
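The search-term calibration Korolyov mentions, checking how over-inclusive a term is by sampling its hits before committing reviewers to the full population, can be roughly sketched as follows. The function name, its parameters, and the human-populated `reviewed` map are hypothetical illustrations, not the team’s actual workflow.

```python
import random

def estimate_term_precision(hit_doc_ids, reviewed, sample_size=5, seed=7):
    """Estimate how over-inclusive a search term is by sampling its hits.

    `hit_doc_ids` is the set of documents the term hit; `reviewed` maps a
    doc id to True when a human confirmed the hit is real PII (that review
    happens outside this sketch). Returns precision over the sample."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    population = sorted(hit_doc_ids)
    sample = rng.sample(population, min(sample_size, len(population)))
    confirmed = sum(1 for doc_id in sample if reviewed.get(doc_id, False))
    return confirmed / len(sample)

# Hypothetical term that hit 1,000 documents, of which only the
# even-numbered ones actually contain PII per human review. Precision
# near 0.5 suggests the term needs narrowing (proximity limiters,
# format constraints) before the full population is reviewed.
hits = set(range(1000))
reviewed = {doc_id: doc_id % 2 == 0 for doc_id in hits}
print(estimate_term_precision(hits, reviewed, sample_size=100))
```

Terms whose sampled precision is too low get refined and re-run; terms that sample clean can go straight to extraction.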
Kevin Golas
Yeah.
Michael Sarlo
So, Kevin, onto that then. Interested to hear, coming in, you’re relying on partnerships, you’re biasing toward cross-functional expertise. One thing, Anya, that was mentioned but is a little bit obvious to us: when we started to do this, we were a $200 million plus company, and we had the benefit of … And our CTO is on as an attendee, so he always would like to yell at us for trying to build a plane as we flew it. And, unfortunately, it’s your IT folks and all the administrative folks, your leadership team, who have to break down walls to allow you to do this stuff because, as Anya mentioned, we had a really large matter, that zero-day.
Actually, we had several under our belt, we got better, we knew what we were getting into, and when we say jurisdictions, we mean global, so UK, EU, Asia. Having that global footprint just to begin with, people and expertise that can handle data from a digital forensic standpoint and from a document review and project management standpoint, is key. But also just the raw infrastructure, from hardware and firepower, and being able to deploy that quickly and to repurpose it from an eDiscovery workflow and digital forensics workflow to a cyber incident response workflow.
So, Kevin, how have you identified those key growth points, and how has your scoping process evolved when you’re dealing with an incident? What are the things you’re asking that maybe you weren’t asking before, and how are you prioritizing where you start with a client who might be experiencing something? And how are you working with Anya’s team from that perspective to enhance the overall downstream data mining process?
Kevin Golas
Yeah. There’s a lot to unpack there, but to your point-
Michael Sarlo
Sorry, that’s right. Those are four questions in one, so just pick what you want to answer. You get the gist, right?
Kevin Golas
Yeah, yeah. Well, I do want to just talk about tooling, which … We’ll migrate into those questions you just asked, Mike. When we talk about tooling, you really have to look at the services that you’re offering and see if you have the right tools in place to offer those services to the different clients. As you know, there are clients that are in retail, some are in aerospace, some are MSPs, so you have to understand the different technology that you have. We went through a phase where we wanted to build our own EDR agent. That was craziness because, to build that, you got to keep up with Linux, macOS, Windows, all the different updates. It became very problematic.
So, we learned early on, let’s use one of the commercial tools for EDR, won’t say any of the names, and then have folks that understand it because, if you know EDR one, it’s a different workflow than EDR two. And then some of those EDRs now are actually getting into forensics, and I say forensics in quotes, because they can bring back data, but, depending on what engagement you’re in, is that good enough, is that going to be defensible in court? And, if it’s not, then you got to have the different tools, which are more your forensics tools, that are better suited for anything that’s going to go to court or even potentially go to court.
And then you just got to understand, like I said, what the customer’s needs are and where that customer … where you need to be, and then make sure you have the right tooling. And, again, not everything is SaaS-based. We were dealing with something just last week where there was an actual server hit with ransomware. That was a physical server pulled offline; we couldn’t get to it. How do you do it? You put a jump server in there; you have to have someone that understands networking to be able to isolate that off and to be able to help the client with their firewall. So, to your point, Mike, you look at what the service engagement and what your service offerings are, make sure you have the right tooling that aligns with that, and make sure, from a tooling standpoint, you have people that understand it because, like I said, every platform has a different workflow. They just do. And once you understand that workflow, then you can understand what you need to do.
Michael Sarlo
It’s interesting that you say a physical server because it’s … You can go back, actually. Let’s sit on the other slide for a minute. When you talk about a physical server and having the right skills to do things, you may have people who are very advanced in dealing with AWS and Google workflows and Snowflake, but you put them into a server room and they don’t know where to start. So, it’s really amazing. In information technology and software development in general, you’re really now in a world where you still need to know the hardware, but you really need to know the different operating systems and endpoints, and there’s so much out there that you start to really have to build teams with specialized expertise; nobody can be an expert in everything.
I remember, one time I was an expert witness for something and I was pretty young, I was in my early 20s, and they asked me a question: “Sir, were you in high school when this started?” And I pulled out my Facebook and I said, “Yes, but I can guarantee you I could find you a person, a 16-year-old, who would be a better expert witness than I would today.” Likewise, we need guys who have experience with hardware at scale and deployments and building out data centers, and those are folks who have, oftentimes, been in the industry [inaudible 00:22:20] 20-plus years like Kevin. And so, you need the full equation and gamut.
Let’s talk a little bit, Anya, Kevin, about that unclear ownership between technical and legal workflows. I think this is really interesting and something that, when we first started to do this … We’ve really been doing this for almost four years, dabbling. We started cyber incidents before then but weren’t really branding it as much. And what I felt is we really were blessed, in a way, to sometimes get access to some of the top-level attorneys who are dealing with clients. Really, you didn’t have that layer of the associates and the juniors underneath them. But you also have to understand, I think, a lot of times with a major partner, an eDiscovery partner at a law firm, or somebody who’s in data protection, I’m really talking about the Am Law 20, which we were very blessed to be working with as we built this out, which is a little bit different than the insurance space, for sure.
How do you somewhat balance the need of them wanting to be an expert and you knowing you have the experience and pushing back and establishing who owns what because we all know, in the vendor side, that shit flows downstream and so we’re going to get blamed no matter what. What’s that delicate balance there for each of you? If you can each answer that question.
Anya Korolyov
Yeah. And that goes back to some of the things I was saying earlier. We had to figure out what the important things are to ask in the very beginning. And, again, no two law firms are the same, no two attorneys are the same; everybody’s going to have their own way of working with things. But we had to really establish for ourselves some of the, probably, more legal-implication questions to ask before we even see the data, because those legal questions absolutely drive how we build our workflow and how we can help the law firms with the automation, the technology, and all of that.
And some of the things we learned to ask early on are what industry the end client is in. Because a hospital versus a college versus an insurance company versus a mortgage company, those are different things. And not only what kind, but where they are located, because the regulations are very different across the states, across the countries, so we needed to be mindful of the timeline for ourselves but also understand: are we looking for financial information, are we looking for HIPAA information, is there potentially involvement of minors in the data? All of those questions change how we proceed with the matter.
And then the other thing we learned to ask is what type of data we’re dealing with. Is this structured data? Do we have databases? Because, at that point, we need to involve our data science team and make sure that we have somebody who can rebuild the database and who can work with us and help us, guide us through the database. Or is this just server data? Are we talking about an email compromise? Very different thing as well. And the other hard lesson I think we learned was incidents that involve third parties. So, if you think of an insurance company that sells insurance, their customers are the ones that are going to have to make the decision on notification and confirm the people involved, the people that have their sensitive information in the data, but it’s almost adding an additional layer. So, not only do you notify the underlying people identified in the data, you also have to build in the time to notify the third parties, and that’s usually … That adds on a month, sometimes two months, because people take their time to review the data and respond.
So, that was probably one of the hardest lessons we learned in the last five years: third-party incidents are not the same as a single company with just employees, or even a hospital with patients. Those are still almost a little bit easier to get through than something involving a lot of third parties.
Michael Sarlo
Sure.
How has AI changed that? Let’s talk about that real quick.
Anya Korolyov
Yeah.
Michael Sarlo
Has AI changed to help you with that specific use case? Whenever we get an email from an insurer, they’ll say there might be PHI. Is AI filling the gap in data mining?
Anya Korolyov
It’s definitely getting there. I think the technology has improved a great deal in the last three years from what we have seen, and there are quite a few tools out there in the market. There is no magic wand, there is no tool that will look at a document that has three names and one social security number and automatically know whose social security number it is; even within the context of the document, you’ll still need a little bit of confirmation from a human on that front. But it has definitely evolved.
As far as identification and extraction, we now have way more info types in our toolbox that we can identify. The medical side is always, I think, going to be a challenge because there is a legal difference between what is just medical information versus what is considered PHI. One of the matters that we were dealing with, it was an email compromise, and we did expect to see some HIPAA material and some patient information, but the numbers were just staggering. It took us a while to narrow it down, but we eventually figured out that the person whose email it was, they were subscribed to Ann Taylor, and Ann Taylor made nursing clothing.
So, you had a name, Ann Taylor, and you had nursing information, so it kept flagging as a human being in the document and also as medical information. So, medical is always going to be the hardest to figure out, and what crosses that line into actual notification on PHI. With PII, it’s a lot easier. A social security number is a social security number, there are clear formats for it, and AI has been an incredible tool in separating out what is just a nine-digit number versus what is an actual social security number. We have narrowed down that effort considerably. So, we’re excited about the AI revolution, we’re testing out a lot of tools, and we’re hoping that, someday … The regulations might not improve-
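[Editor’s note: the kind of rule-based pre-filter Korolyov describes, separating a bare nine-digit number from a plausible Social Security number, can be sketched as below. This is an illustrative example, not HaystackID’s actual tooling; the function names, the 12-character context window, and the keyword list are assumptions for the sketch. The issuance rules (no 000/666 or 900-series area, no 00 group, no 0000 serial) are the published SSA constraints, and ambiguous bare digit runs are still deferred to human confirmation.]

```python
import re

# 9 digits, optionally delimited AAA-GG-SSSS (delimiter must be consistent).
SSN_RE = re.compile(r"\b(\d{3})([- ]?)(\d{2})\2(\d{4})\b")
CONTEXT = re.compile(r"\b(ssn|social security)\b", re.I)

def issued(area: str, group: str, serial: str) -> bool:
    """SSA issuance rules: these combinations have never been assigned."""
    return not (area in ("000", "666") or area.startswith("9")
                or group == "00" or serial == "0000")

def find_ssns(text: str) -> list[str]:
    hits = []
    for m in SSN_RE.finditer(text):
        area, delim, group, serial = m.groups()
        if not issued(area, group, serial):
            continue
        # Bare 9-digit runs are ambiguous (account numbers, invoice IDs):
        # require a nearby keyword; delimited values pass on format alone.
        window = text[max(0, m.start() - 12):m.end() + 12]
        if delim or CONTEXT.search(window):
            hits.append(m.group(0))
    return hits

print(find_ssns("Invoice 123456789 for parts; SSN 078-05-1120; ref 666-12-3456"))
# -> ['078-05-1120']
```

The bare invoice number is skipped for lack of context, and 666-12-3456 fails the issuance rules even though it is formatted like an SSN; only candidates that survive both checks go on to human review.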
Michael Sarlo
But we’re building a lot of our own tools too, right?
Anya Korolyov
Yeah.
Michael Sarlo
Yeah, agreed on that front. Kevin, same question for you. How is AI amplifying your practice and changing the way you approach cyber incidents at the various phases? Early on in detection, then through remediation and reporting, and, I guess, let’s even go as far downstream as post-incident, how you’re securing environments. Anya, you can move to the next slide, please.
Kevin Golas
Yeah. I just want to cover quickly, like what Anya did with the scoping, and then I’ll get into the AI, how that has evolved as well. When you’re doing more of your incident response, scoping is key. When you’re on the phone with a client, you want to find out as much information as you possibly can. What is their environment? When did it happen? Because, at the end of the day, what you have to do is you have to start somewhere and then work back so that scoping is very important. Do you have any Cloud assets? Do you have any backups?
If it’s ransomware, you want to understand what happened, did you see a ransomware note? Again, to Anya’s point, what are your third party vendors? Do they have access? What do they have access to? You really have to map that out on the first or second call because you have to know where to go, what to do, how to get there and then understand what does that attack surface look like and then understand where did they see this happen first and then what downstream and upstream effects does that have from that.
Now to your point on AI, AI has evolved a lot. If you were to ask me this question probably last year, I would say we were at the infancy stage. Well, I actually think we’ve gone probably one stage past that because AI has gotten really, really good. My son just started as a SOC analyst, I won’t say the company, but they’re using AI a lot for detections. To your point, Mike, you can now do threat hunting pretty much through AI. I still agree with Anya, you need that human element, that human validation, but, at the end of the day, you can write scripts. You don’t have to know Python, you don’t have to know Java or the other scripting languages, because AI can write them for you, and AI can look through that data and look for those anomalies very fast.
When you’re doing an incident response, it’s key to have that. A lot of times, folks want us to use their current tooling; well, we like to use our own tooling as much as we possibly can because of all the workflow, automation, and AI scripts that we’ve already written to go look for these particular anomalies, these IOCs, or these TTPs, tactics, techniques, and procedures. We’re looking for those things in that particular environment, and AI allows us to do it in minutes instead of hours.
So, it really gives us a lot … It gives us that instant visibility. It tells us here’s what you need to go look at, instead of looking at 10,000 endpoints and trying to find out exactly what happened, by having that telemetry to look into, dissect, and understand: I think this is when it happened, I think this is the initial infection vector, and here are the steps that I need to take. Anya was talking about playbooks before, having a playbook, understanding what that particular incident is and then using that playbook, but, to Mike’s point, AI helps you get there within minutes or even hours instead of days or weeks.
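[Editor’s note: the scripted IOC sweep Golas describes, scanning endpoint telemetry for known indicators and pivoting from the earliest hit toward the initial infection vector, could be sketched as below. The IOC values, event shape, and field names are hypothetical; real sweeps run against EDR/SIEM telemetry feeds, not an in-memory list.]

```python
# Hypothetical IOC set; in practice fed from threat intel or AI triage.
IOCS = {
    "ip": {"203.0.113.7"},                 # TEST-NET address, illustrative
    "domain": {"evil.example.net"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb924"
               "27ae41e4649b934ca495991b7852b855"},
}

def sweep(events: list[dict]) -> list[dict]:
    """Flag telemetry events matching any IOC; the earliest hit
    approximates the initial infection vector responders pivot from."""
    hits = [e for e in events
            if e.get("ip") in IOCS["ip"]
            or e.get("domain") in IOCS["domain"]
            or e.get("sha256") in IOCS["sha256"]]
    return sorted(hits, key=lambda e: e["ts"])   # earliest first

events = [
    {"ts": "2025-10-02T09:14:00", "host": "ws-12", "domain": "evil.example.net"},
    {"ts": "2025-10-01T23:55:00", "host": "srv-3", "ip": "203.0.113.7"},
    {"ts": "2025-10-02T10:00:00", "host": "ws-07", "ip": "198.51.100.4"},
]
for hit in sweep(events):
    print(hit["ts"], hit["host"])   # earliest hit prints first
```

As the panel stresses, a match list like this narrows where humans look; it does not replace the analyst who confirms whether srv-3 really was patient zero.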
Michael Sarlo
Look, and take folks like CrowdStrike, they just announced a partnership, I believe it was yesterday actually, with NVIDIA. They’re going to be building autonomous agents using GPU-based acceleration. So, you’re going to get that human element really at the endpoint now through agentic workflows. And I think with AI in general, you both said it, this is what we find in eDiscovery: AI gets you to the documents or the data points that humans actually need to look at faster. It doesn’t eliminate you; you still need that human expertise, it’s critical. But when you get to an industry like cyber, where we have a massive shortfall of expertise, it really is something that’s going to change the way we defend against incidents, the way we understand incidents, and the way we fight cybercrime, so it’s really an exciting time to be in this field.
So, I want to go back a little bit and talk about the talent engine. Kevin, let’s start with you, and, Anya, we’ll come to you, since we actually started this on the back of the broader company and then really started to hire and grow individually in different departments, like data mining. I want to get a little bit more granular for folks who might be looking to build their own department. Kevin, what profiles did you hire first? ICs, forensics, Cloud, IR, PMO? And how did you avoid burnout during the start-up and scaling?
Kevin Golas
Yes to all of them. So, what you have to understand is, and you know this as well as I do, Mike, you have to have the right folks in place. A project manager is key because that person is going to make sure everything’s coordinated. But to your point, do you need a forensics expert? Because that’s different than an IR expert; you have to understand the different tooling and the different methodologies behind each, so it really depends on the services that we were delivering.
In my experience, that unicorn we were talking about very early on is hard to find. The person who understands forensics, understands cyber workflow and process flow, and understands Cloud and networking as a whole, those are usually very siloed skill sets, so finding the right person to fill each one is key. So, when you talk about what you need, again, it depends on what the goal is. On that one engagement, we really needed folks who could do source code review and pen testing. Source code review is a different skill set than pen testing. And then is it pen testing mobile or is it-
Michael Sarlo
Totally different skill set. Big, big, big different skill set, yeah.
Kevin Golas
Yeah, totally. And I’ve-
Michael Sarlo
Go ahead, go ahead.
Kevin Golas
And in my past life, I’ve tried to have eDiscovery folks do forensics work, and they’ll see a job processing and just restart it, and I’m like, “Oh, my God, you just put us back 24 hours, you can’t just stop things. You’ve got to let them run and then fix them, you’ve got to adjust accordingly.” So, it’s having those different skill sets, understanding the different things, but you need to make sure that they all work together. Like I said, what your service engagement or your contract with that customer is will dictate the resources you need, but I found a project manager is key to document everything, to keep everything flowing, and to make sure that all the milestones are met. And then having the right independent contractors might be another element, depending on geographical location, if you’re doing follow-the-sun or whatever the case may be. Anya?
Anya Korolyov
I agree 100%. Finding a person who can do it all, there are probably maybe 10 people in the industry who can, and, obviously, having one person run the whole matter is not sustainable. So, for-
Michael Sarlo
Wait, Anya, isn’t that what we hired you for, to do everything?
Anya Korolyov
I was going to say, you can hire me but, outside of that …
Michael Sarlo
It’s not sustainable, yeah.
Anya Korolyov
It’s not sustainable.
Michael Sarlo
No, please don’t try to hire her.
Anya Korolyov
Yeah.
Michael Sarlo
She’s off the market.
Anya Korolyov
And so, for us on the data mining side, once we have the data, I think, again, a project manager, somebody who can take everything from point A to point B, who can speak to legal and translate that, and who knows when to involve forensics, when to involve the data science team, and when to involve somebody to help more with automation, that is a key person, absolutely. But also, underneath that person, when we started building this out, we were looking for people with slightly different skill sets.
We have people who are more knowledgeable in the data, who come maybe from forensics and understand data to an extent that I might not, but they can really get in there and figure it out, because there’s always going to be some esoteric data; it’s never going to be just nice, clean emails. And then you need somebody who can run the actual review, because, at some point, you will bring in a team of contract reviewers or offshore people who have to look at the documents and confirm the extractions, or do the extraction, so you need somebody to manage those people and communicate with counsel on progress there.
And that person has to not only be the go-between, but they also have to have enough understanding of the laws and the process to be able to flag things for counsel. Because counsel is not going to look at documents; they don’t have the time for it outside of some very specific matters and some very specific documents, so you really need somebody knowledgeable enough, with the right background, to do that for counsel. And then, at the end, you also need a specialized person because, sooner or later, you’re sitting on 55 million or 70 million lines that have been identified and extracted, and you need to deduplicate them into what will essentially become the notification list. So, you need somebody technical enough to understand how that deduplication works, what makes sense, and how to instinctively know which lines are the same person when it comes down to manual work at the end.
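[Editor’s note: the deduplication step Korolyov describes, collapsing millions of extracted lines into one notification-list entry per individual, might be sketched like this. The normalization rules and row fields here are illustrative assumptions; production entity resolution handles far messier variants, which is why a human still adjudicates the residual matches.]

```python
import re

def norm_name(name: str) -> str:
    """Case-fold and strip punctuation so 'SMITH, John' == 'John Smith'."""
    parts = re.findall(r"[a-z]+", name.lower())
    return " ".join(sorted(parts))

def dedupe(rows: list[dict]) -> list[dict]:
    """Collapse extracted lines to one entry per (person, identifier)."""
    seen, out = set(), []
    for r in rows:
        key = (norm_name(r["name"]), re.sub(r"\D", "", r.get("ssn", "")))
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

rows = [
    {"name": "SMITH, John", "ssn": "078-05-1120"},
    {"name": "John Smith",  "ssn": "078051120"},
    {"name": "Jane Doe",    "ssn": "219-09-9999"},
]
print(len(dedupe(rows)))   # -> 2
```

The two John Smith lines, though formatted differently, resolve to a single notification entry; lines that share a name but not an identifier would remain separate and fall to manual review.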
So, those are very different skill sets. What they all do share, and this is one of the things we always consider when we are interviewing somebody for our team: we can teach the process, we can show them our tools, and we can teach them what to do, but it’s such a fast-paced environment and it is stressful. As Mike said, the threat actors are not nice guys, they’re not our best friends; they’re people who want to make our lives as miserable as possible, and they will always hit on a Friday or Saturday night, they will always hit on a holiday. There’s always a lot of pressure, and the company that is going through this is never going to be a happy client.
You have to have the mentality that you will always be talking to somebody who is stressed, who wants to get through this as quickly as possible. So, what we usually look for, outside of specific skills, is enthusiasm to learn something new, somebody who actually thrives in a fast-moving environment. Not everybody does, and that’s perfectly normal, but we need those people who maybe didn’t study for the test until the night before the exam in college, crammed it all in, and were able to pull it off. It’s really those personalities that thrive in this environment.
Kevin Golas
Yeah, I just want to add one thing to that, Anya: having SMEs, subject matter experts. I found in forensics and IR, having that person who understands Mac. I don’t understand Mac that well, so, if you’re going to have me do the investigation, I’m probably going to struggle because I’m going to have to do a lot of looking and researching. But having that person who understands Mac, having that person who understands cell phones, where information is stored, how it is stored, what’s the best tool to use to extract or collect it. And then, like I said, it’s always good to have subject matter experts so you can lean on those people, either for the knowledge they have on how you would do something, or to assist you in that particular investigation. I found that to be key as well when you’re building out a department.
Michael Sarlo
Let’s jump to the next slide a little bit here. And I want to just talk about tools, Anya. I think one of the questions I was going to ask you is what is one of the biggest mistakes we made, and I think the biggest mistake, from my own perspective, is that we wasted so much time vetting so many different tools that clearly couldn’t do it at first blush, with POCs and things like that.
Where have you used off-the-shelf tools versus building, and how has your approach to building changed from the early days to where we are now? And when I say early days, Haystack maybe handled three to seven matters at a time at the start of this; granted, the matters were larger in some ways, multi-terabyte, and consumed many more hours. We still handle very large matters, but we’ve optimized so much that we’re able to handle far more, upwards of anywhere from about 150 to 200 matters running concurrently at any given time.
And think about that, too. From a speed standpoint, these typically need to be done in 60 days. So, Anya, let’s talk about that technology piece if you can touch on it a little bit. Kevin, same for you as well.
Anya Korolyov
Yeah. That was definitely one of the lessons I think we all learned. And I think what we should have done is identified a very specific problem area and then looked around and asked: is there a tool that will address this particular problem, versus is there a tool that can do it all? Because the truth of the matter is, right now in the market, there really isn’t a tool that can do it all. And what I mean by that is the data that any company, and I want to emphasize any company, no matter how small, is sitting on is vast, and it’s only growing.
All the communication tools that we use, we’re just accumulating data. And everybody in the industry always says, well, at least we don’t do paper anymore. We don’t, because we scanned it all and now it’s sitting in the Cloud. So, all of those early ’80s and ’90s scanned PDFs, they’re sitting somewhere and they usually are involved in an incident. So, what we identified as the biggest problem is not so much, okay, we have a terabyte of data and half of it potentially contains PII, we need to get through it all. We eventually looked at it from a different perspective and said there is always a subset of the data that is the most dense in what it contains.
So, your structured data, your spreadsheets, that kind of thing, and the scanned PDFs that are usually the problem. How do we solve for those, and is there a tool on the market for them? And then we realized that there really isn’t, because either it’s incredibly costly, and all of these incidents do come down to money, you usually have your insurance, they have limits, so you’ve got to stay within that framework, or the tools are just not quite there yet. So, that’s when we turned around and said we need to come up with our own tools to address this very specific problem.
We need to identify the most dense documents, and then we need to be able to extract from those documents in the most automated fashion, with as little human involvement as possible, because that is where 99% of the notification list will come from, that is where the heart of the data is, and that is where the majority of the mistakes get made if you have a human doing it. And that’s, I think, one of the biggest lessons we learned from that zero-day matter.
And in five years, as Mike said, we’ve gone from handling five to seven matters to 200 at the same time. We also had a matter that involved 6,000 spreadsheets that took us two months to get through and extract information from when we first started doing this, because we were doing it manually. Now, we just handled a matter that had 45,000 spreadsheets, and we were able to get through extraction in two and a half weeks. That is just incredible growth, and it speaks to the tools we have created and continue to improve on. We learn a lesson from every single matter; we look at it and say, well, here’s a new thing that’s been introduced, let’s make sure our tools are keeping up with all of those challenges as well.
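[Editor’s note: the “find the densest documents first” triage Korolyov describes could be sketched with a simple hits-per-size score, as below. The pattern set, scoring formula, and filenames are illustrative assumptions; real pipelines score many more info types and feed the queue to automated extraction.]

```python
import re

# Crude PII patterns: delimited SSNs and 16-digit card-like runs.
PII = re.compile(r"\b\d{3}-\d{2}-\d{4}\b|\b\d{16}\b")

def density(doc: str) -> float:
    """PII hits per 1,000 characters -- a rough 'denseness' score."""
    return 1000 * len(PII.findall(doc)) / max(len(doc), 1)

docs = {
    "roster.csv": "Smith,078-05-1120\nDoe,219-09-9999\n",
    "memo.txt":   "Quarterly planning notes. " * 40,
}
# Work the densest documents first: that is where the notification
# list will largely come from.
queue = sorted(docs, key=lambda name: density(docs[name]), reverse=True)
print(queue)   # densest first -> ['roster.csv', 'memo.txt']
```

Ranking by density rather than raw volume is what lets a small team put its automated extraction, and its human validation hours, where they matter most.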
Kevin Golas
Yeah. And I’ll just add to that-
Michael Sarlo
I think-
Kevin Golas
Oh, go ahead.
Michael Sarlo
Go ahead, Kevin, please.
Kevin Golas
I was going to say, yeah, when you talk about tooling, especially on the IR side of the house, what is the right tool? You were talking about enterprise tools, Mike; it really depends. We found that enterprise tools, commercially available tools, we were talking about EDR and SIEMs, are very good, but then you have to have your one-off processes too, like how are you going to collect memory? Because EDR technology doesn’t really collect memory well, so you have to have certain vendors that do that, and do it really well, so you only use the best. And then where do you put it back into? Do you put that into a Cloud-based environment that you have? And then you also have to look at … I’ve had issues with licensing. Only three people can have access to that particular environment at one time and you need five, so now you’ve got to scramble and go get licensing, you find out how much that costs, and then you’re over budget.
So, understanding what your current workflow is, where that information is going to reside, and how many different groups need access to it will dictate what kind of tooling you’re going to need and how you’re going to do it, and I can’t stress enough: know your licensing. It sounds very simple, but I’ve had issues where, again, three people had access and I actually needed five, and then I had to get on the phone with the vendor and try to buy two more licenses at a discount. When you are in need-
Michael Sarlo
Usually for a three-year extension, too, right?
Kevin Golas
That’s right.
Michael Sarlo
And this is interesting, you both really touched on it. You really need your own infrastructure. The infrastructure, the service, and your capability to deliver are inextricably connected, and they have to be scalable and able to flex. This can be tough for large enterprises that just aren’t used to procuring for growing and shrinking workloads. Cloud has definitely been key for us, establishing Cloud expertise in order to push workloads, especially as we get into more AI workflows and things like that, where we want to be able to scale rapidly to handle certain data sets.
One thing Anya mentioned was cost around AI, and it really is those garbage-in, garbage-out style documents. In the world of AI, it’s all very neat when you have perfect extracted text. Well, if you’ve ever seen extracted text from an Excel file, it doesn’t usually look correct when you open it up. Likewise, look at extracted text from a scanned PDF or a medical document: it doesn’t really exist, you get a few garbled characters. And so, even just being able to identify those outlier documents that need special handling, not even acting on them with AI, just being able to identify them, has been a key element for us.
And the reason cost comes up with AI is that you end up having to use vision-based AI models that analyze PDFs and the like more as graphics or multimedia, and this just costs more, consuming an incredibly large number of tokens. We’ve seen situations where we could have used generative AI to extract a large PDF and it would’ve been $140 to extract, where human review is maybe 30 bucks an hour, $35 an hour. So, you end up having to balance those scales; it’s somewhat of a gamble, and I think seeing a lot of data environments is critical for being able to scale as well.
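[Editor’s note: the outlier-identification step Sarlo describes, spotting documents whose extracted text is too sparse or too garbled to trust, can be sketched with two simple heuristics, as below. The thresholds and the function name are illustrative assumptions, not a documented HaystackID method; flagged files would be routed to OCR or vision models rather than standard text pipelines.]

```python
def needs_special_handling(extracted: str, file_bytes: int) -> bool:
    """Heuristic outlier check: scans and image-heavy PDFs yield little
    or garbled text relative to file size, so route them to OCR/vision."""
    if file_bytes == 0:
        return False
    letters = sum(ch.isalpha() or ch.isspace() for ch in extracted)
    yield_ratio = len(extracted) / file_bytes        # text per byte on disk
    clean_ratio = letters / max(len(extracted), 1)   # readable share of text
    return yield_ratio < 0.001 or clean_ratio < 0.6

# A 2 MB "PDF" that yielded a few garbled characters is flagged; a
# text-rich file of the same size with plenty of prose is not.
print(needs_special_handling("\x0c%PDF garb#@!", 2_000_000))      # True
print(needs_special_handling("plain prose " * 40_000, 2_000_000)) # False
```

Because the vision-model path is the expensive one, a cheap pre-check like this is what keeps the per-document cost gamble Sarlo mentions under control.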
And so, on that subject, we also see different client profiles. We deal a lot with insurance carriers, and we deal with off-panel type relationships. Kevin, how does your approach change when you’re responding to IR for, let’s say, more of a mid-market customer who’s purely paying through the carrier? How do you meet the demands of the carrier? How do you make sure the client’s getting what they need? What does that look like from a budgeting and relationship management standpoint? And what are the disadvantages of working with carriers as opposed to big law firms like Norton Rose Fulbright?
Kevin Golas
Yeah. I’ll actually start with the last question first. What we’ve seen is, when you’re dealing with insurance carriers, the carriers want to put the client back into the state they were in before the incident happened. They don’t want to focus on evolving that client, because that’s not what cyber insurance is for; it’s for getting you back up on your feet to where you were beforehand. And what we have to stress with clients is that, as much as they want to put in MFA and certain controls and procedures around it, at the end of the day, this particular incident is about getting them back up and running, as the carrier intends. It’s not about evolving them to the next stage of maturity in their security posture, which a lot of clients want to get to; you have to understand that has to be done in phases.
So, when we’re doing IR on an insurance engagement, we know what we need to do and we understand what the goals are, but helping the client keep that containment, if you will, and getting them back to normal operations from that ransomware event has been a little bit of … Not a challenge, but you have to orchestrate the client’s expectations against the insurance carrier’s expectations for that particular engagement, and what a good outcome looks like for both of them may not always be aligned.
Anya Korolyov
Very good point. And I think one of the personal challenges I had to overcome at the beginning of this: if I know that my data was involved in an incident, me, personally, as a citizen, of course I want to know. But, to the same end, the company and the law firms have to decide: where is the company located, what are the regulators requiring, and what is a reasonable effort to achieve that? Most of the time, there’s not going to be a scorched-earth approach of let’s look at every single document, let’s make sure we identify every single piece of PII, every single thing. It’s usually: what is reasonable, what can we get through as quickly as possible, what are the most dense documents we can grab? And that’s where the insurance barriers come in as well, because the company has insurance that they’re paying for, the insurance has a certain amount of money it can dedicate to this, and does the company want to pay for anything outside of that themselves?
Those are the questions and pathways we work through with counsel, and we ask counsel questions at the beginning as well. Where are our limits? How much do you want to get out of this? We got through the majority of the data, then sampled what’s left and identified one more person; is that one more person reason enough for us to keep going until we cannot identify another new person, or are we satisfied because we only identified one new person in 300 documents, is that the line where you want to stop? And I think a lot of those decisions are driven by insurance companies and by the budget limitations we have for each incident.
Michael Sarlo
And I would say, sometimes, just because a company has so much money, they want to throw so many resources at this and get it to perfection, and you never really can get it there. Just like any digital forensics investigation, you can look at a single computer for 200, 300 hours if you want, but, usually, we know exactly where the drift is, so to speak. Anya, why don’t you jump to the next slide; we’re going to wrap up pretty quickly. So, none of this matters without being defensible, and so defensibility is prime and key. Anya, what have you done to create a defensible process and key work product? How do you align what your team is doing against the deliverable and really put a bow on a matter, delivering your customers a baked cake when sometimes they don’t know what a baked cake looks like?
Anya Korolyov
Yeah. The key to defensibility, in my opinion, is documentation. We document every single thing that we do: the sizes of the documents that came in, everything we’ve identified, what decisions were made and when they were made, pretty much every single thing, and we create what we call defensibility memos for each and every matter. Sometimes the regulators ask questions that are answered in those memos, sometimes they ask for the full memo. Sometimes counsel writes those memos and uses ours to pull in the key information. So, for me, defensibility equals documentation every single time.
Kevin Golas
Yeah, I would agree-
Anya Korolyov
Kevin?
Kevin Golas
Yeah, I was going to say I would agree with that. When you’re doing an incident response, you want everything documented, from the date and time that you did the collection, to when you did the processing, to what machine you used to do that processing, to what version of the software you used. I’ve been in court before where they’ve asked me what version of the software I used, whether it was the latest version, and, if it wasn’t the latest version, why not. You have to know all those things as you’re doing this. To Anya’s point, documentation is probably your best defensibility. You also have to test the tools to make sure they’re the right tools and have the right qualifications. And like I said, using commercial tools is a little bit easier because all of that is public knowledge, with SOC 2s and so on; when they have different software releases, you have to understand what’s in each release.
But to Anya’s point, if you document every step of the process and make sure you’ve not missed anything in that incident response, that’s usually the best route to defensibility. I have, unfortunately, testified a lot, and you have to have your Ts crossed and your Is dotted because you’re going to get called on it.
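[Editor’s note: the documentation discipline both panelists describe, recording what was processed, with which tool and version, when, and with a hash for later verification, could be captured in structured records like the sketch below. The field names, tool name, and version are hypothetical; this illustrates the kind of line item a defensibility memo or chain-of-custody log draws from, not a prescribed schema.]

```python
import hashlib
import json
from datetime import datetime, timezone

def custody_record(path: str, data: bytes, tool: str, version: str) -> dict:
    """One defensibility line item: what was processed, with what tool
    and version, when, and a hash so the artifact can be re-verified."""
    return {
        "artifact": path,
        "sha256": hashlib.sha256(data).hexdigest(),
        "tool": tool,
        "tool_version": version,             # regulators and courts ask
        "processed_utc": datetime.now(timezone.utc).isoformat(),
    }

rec = custody_record("evidence/mail.pst", b"...raw bytes...",
                     tool="pst-extractor", version="4.2.1")
print(json.dumps(rec, indent=2))
```

Hashing at processing time is what lets anyone later confirm the artifact testified about is the artifact collected, and recording the tool version answers the exact courtroom question Golas recounts.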
Michael Sarlo
Totally. Let’s go to the next slide here and we’ll close out. Definitely, documentation is key. Formalizing your cyber incident response function as its own standalone business function is, I think, key for any organization dabbling in building a team here. You really can’t sustain the growth that’s needed, nor can you avoid the burnout, without making this a dedicated function. Build a roadmap. We’ve talked about documentation: identify where you’re at today, identify where you want to go, and build a roadmap to get there, whether it’s people, investment, or software. Understand what your key benchmarks are around growth and keep those in focus as you grow. Learn from your real-world scaling experiences, learn with your clients. Document your mistakes, document your lessons learned. These are high-stakes, high-pressure engagements.
I used to be a cook, and we would scream and yell at each other through just about every shift when the rush happened, but back then everybody smoked, and so did I. We’d always go have a cigarette afterwards, pat each other on the back, and talk about what we could have done better. This is so important in any business function. Don’t hide behind your mistakes, tackle them head on, because, oftentimes, your clients are going to learn from them as well; they’re making mistakes with you. Identify those key folks that you’re working with and grow with them. That’s what we’ve done. We’ve been blessed to have some really amazing law firm partners who have given us grace as we grew, and we’ve been blessed to work with many different groups, and we believe we’ve taken the best bits and made a bulletproof process.
So, we really appreciate you listening to us. Thanks for joining today’s webcast. The time and attention given to this educational series are truly valued. For those interested in continuing education, don’t miss the upcoming EDRM workshop on November 4th, 2025, Framing Construction Discovery’s Future with AI-Powered Document Review. The program will outline defensible, AI-enabled workflows for specialized construction data types; there’s still a ton of litigation in any large construction project. For you seasoned vets who’ve dealt with these data sources, you know there are challenges, but, fear not, there’s technology available.
Visit haystackid.com to learn more. Register for the November 4th workshop and explore the extensive library of on-demand webcasts. We’ve got a ton of content up there, highly relevant; we’re really into educating the community. Once again, thank you for joining today’s session on Building Elite Cyber Incident Response Capabilities at Scale. Have a great, wonderful day, and we appreciate you. Thank you so much. Bye-bye.
Expert Panelist Bios
+ Michael Sarlo [Moderator] Chief Innovation Officer and President, Global Investigations and Cyber Incident Response Services, HaystackID
Michael Sarlo works closely with HaystackID’s software development and data science teams to deliver best-in-class data collection, eDiscovery, and review solutions that allow legal teams to act on data types typically not conducive to collection, review, or production in the context of eDiscovery. Sarlo works closely with clients on the most challenging and complex regulatory, investigative, and civil litigation matters. Sarlo also oversees HaystackID’s Cyber Discovery and Incident Response Services division. He leads a cross-functional team of HaystackID experts that regularly assists insurers, breach coaches, and their corporate clients when a data breach occurs.
+ Kevin Golas
Managing Director, Advisory Group, HaystackID
Kevin Golas is a Managing Director in HaystackID's Advisory Group. An accomplished cybersecurity executive with over 20 years of experience in cybersecurity, risk management, and data compliance, Golas has worked at large enterprise companies such as T-Mobile, Grant Thornton, and OpenText. He has a proven track record of developing and implementing effective cybersecurity programs, mitigating cyber risks, and protecting sensitive data. Golas is a passionate and dedicated cybersecurity leader committed to making the world a safer place, and a highly sought-after speaker and thought leader in the cybersecurity community.
+ Anya Korolyov
Executive Vice President, Cyber Incident Response and Advanced Technologies Group, HaystackID
Anya Korolyov, Executive Vice President of the Cyber Incident Response and Advanced Technologies Group at HaystackID, has 18 years of experience in the legal industry as a licensed attorney, including 15 years in eDiscovery focusing on data mining, complex integrated workflows, and document review. In her role at HaystackID, Korolyov develops and implements the strategic direction of Cyber Incident Response. She is one of the industry's leading experts on Data Breach Incident Response, Notification, and Reporting, with a solid understanding of machine learning, custom object development, regular expression manipulation, and other technical specialties.
About HaystackID®
HaystackID® solves complex data challenges related to legal, compliance, regulatory, and cyber requirements. Core offerings include Global Advisory, Cybersecurity, Core Intelligence AI™, and ReviewRight® Global Managed Review, supported by its unified CoreFlex™ service interface. Recognized globally by industry leaders, including Chambers, Gartner, IDC, and Legaltech News, HaystackID helps corporations and legal practices manage data gravity, where information demands action, and workflow gravity, where critical requirements demand coordinated expertise, delivering innovative solutions with a continual focus on security, privacy, and integrity. Learn more at HaystackID.com.
Assisted by GAI and LLM technologies.
Source: HaystackID