[Webcast Transcript] Anatomy of a Second Request: Managing Antitrust Agency Second Requests

Editor’s Note: On February 26, 2020, HaystackID shared a comprehensive overview of the role of eDiscovery in the management of antitrust agency Second Requests. While the full recorded presentation is available for on-demand viewing via the HaystackID website, provided below is a transcript of the presentation as well as a PDF version of the accompanying slides for your review and use.

Anatomy of a Second Request: Managing Antitrust Agency Second Requests

Hart-Scott-Rodino Act-driven Second Request responses require an uncommon balance of understanding, expertise, and experience to successfully deliver certified compliant responses based on FTC or DOJ mandates, guidance, and deadlines.

In this presentation, expert investigation, eDiscovery, and M&A panelists will present actionable considerations for managing responses to antitrust agency Second Requests, including tactics, techniques, and lessons learned from twelve recent responses.

Webcast Highlights

+ Defining Second Requests: The Requirement, Task, and Prevalence
+ A Different Type of Discovery: Five Second Request Characteristics
+ Managing Second Requests: A Provider’s Perspective
+ Anatomy of a Second Request: The Elements and Execution of Recent Requests

Presenting Experts

Michael Sarlo, EnCE, CBE, RCA, CCLO, CCPA
HaystackID – Partner and Senior EVP

John Wilson, ACE, AME, CBE
HaystackID – CISO and President of Forensics

Mike Bryant
Knox Capital – Operating Partner

Seth Curt Schechtman
HaystackID – Senior Managing Director of Review Services

Anya Korolyov
HaystackID – Senior Consultant

Kevin Reynolds
HaystackID – Senior Project Manager


Presentation Transcript

Introduction

Hello, I hope you’re having a great week. My name is Rob Robinson, and on behalf of the entire team at HaystackID, I’d like to thank you for attending today’s webcast titled Anatomy of a Second Request. Today’s webcast is part of HaystackID’s monthly series of educational presentations conducted on the BrightTALK network and designed to ensure listeners are proactively prepared to achieve their computer forensics, eDiscovery, and legal review objectives during investigations and litigations. 

Our expert presenters for today’s webcast include six eDiscovery subject matter experts and authorities. 

First, we have Michael Sarlo. Michael is a partner and executive vice president of eDiscovery and digital forensics for HaystackID. In this role, Michael facilitates operations related to eDiscovery, digital forensics, and litigation strategy both in the U.S. and abroad. 

Also, we have John Wilson. John is our Chief Information Security Officer and President of Forensics at HaystackID and is a certified forensic examiner, licensed private investigator, and information technology veteran with more than two decades of experience working with the U.S. government and both public and private companies. 

Additionally, we have Mike Bryant. Mike currently serves as an operating advisor as part of Knox Capital Holdings and focuses on private equity investments in the legal, business services, and tech sectors. 

We also have Seth Curt Schechtman. Seth is a Senior Managing Director of Review Services for HaystackID. Seth has 15 years of industry and 13 years of big law experience, focused on legal review, supporting multimillion-dollar review projects, including class actions, MDLs, and Second Requests. 

Next, we have Anya Korolyov. As a senior consultant with HaystackID, Anya has 12 years of experience in eDiscovery and has managed large scale, complex projects in healthcare, antitrust, and patent infringement litigations, as well as navigated Second Requests as an attorney and as a senior consultant. 

Our final speaker today is Kevin Reynolds. Kevin is a knowledge expert in Second Request investigations and complex data manipulation and has a unique combination of international experience and exposure to high-volume, comprehensive eDiscovery projects, including the direction and management of a multi-language, 100-plus-terabyte on-site project in the European Union where he served as the forensic project manager, the processing technician, the Relativity project manager/technician, and the production technician. 

Today’s presentation will be recorded for future viewing, and a copy of the presentation materials will be available for all attendees. Also, as with all of our webcasts, it will be available on-demand for viewing directly from both the HaystackID website and the BrightTALK network. At this time, I’d like to turn the mic over to our expert presenters, led by Michael Sarlo, for their comments and considerations on eDiscovery support of antitrust agency Second Requests. 

Michael Sarlo

Thank you very much, Rob. I appreciate the kind words as usual, and I’m excited to be back on the webcast series for HaystackID. My name is Mike Sarlo, and as many of you who’ve attended these on a frequent basis know, we do quite a few of these. As Rob mentioned, today’s topic is the Anatomy of a Second Request. We’re going to take a deep dive through HaystackID’s playbook, expertise, and custom solutions as they relate to Second Requests. We’ll begin by kicking off the mic to Mike Bryant, who’s going to define Second Requests and really set the functional stage for us. We’re going to get into more of the characteristics of a Second Request. We’re going to get deeper into managing a Second Request from an operations standpoint, different caveats and things you need to be aware of to be successful in dealing with these projects. We’re going to jump into managed attorney review. We have some really good people here to discuss that topic as well. We’re going to look at some of HaystackID’s proprietary technology as it relates to streamlining privilege logging, stemming and thread analysis, and quality control mechanisms for review, and then we’re going to finish off just touching on HaystackID’s overall expertise and capabilities as it relates to Second Requests. 

So, I’ll go ahead and kick off the mic here to Mike Bryant, who’s going to define Second Requests for us. 

Mike Bryant

Thanks, Rob and Mike. If you’re on this webinar, you likely are familiar with the 1976 HSR Act. It was enacted, in essence, to avoid anti-competitive outcomes through the M&A process, and it requires the parties to an M&A transaction to notify the FTC and the DOJ and provide information and documentation related to the transaction. At any time, the FTC or the DOJ may make a second request for additional information and documentary material, and hence these requests are known as Second Requests. 

Something of note is that when Congress passed the HSR Act, it created a minimum dollar threshold to limit the burden of pre-merger reporting, and in 2000, it amended the HSR statute to require the annual adjustment of these thresholds based on changes in the U.S. gross national product. So, reportability under the Act changes from year to year as the statutory thresholds adjust, and recently, in fact, less than 30 days ago, the PNO notified us that the thresholds changed. What is known as the $50 million as-adjusted threshold, because that was the size of transaction initially set in 1976, has now pushed all the way up; for 2020, that threshold is $94 million. 

The fees associated with Second Requests have increased as well based on the size of the transaction. The only reason I mention this is because it’s just important to note, whether you’re a practicing attorney or a service provider in this space, that the goalposts change regularly, and we need to be familiar for instance at HaystackID as to that dynamic and be privy to these changes to be responsive to our constituents, both the in-house and outside counsel. 

Second Requests really follow the framework of the model request for additional documentary material as published by the PNO inside the FTC. Not surprisingly, and I know you’re generally familiar with this, that framework centers around the accelerated timelines, which we’ll be discussing; the standard of substantial compliance; the disparate cadre of custodians and locations that are normally involved, because again, these are sizable transactions; and support for multilingual requirements, often as a result of disparate data sets. In all cases, as much advanced technology as you can reasonably apply is fair game with regard to Second Requests because of these intense requirements. 

I would say on that front, having had a front-row seat as HaystackID has increased its capacity to execute on these complex matters, the FTC is acutely aware of the demands of this Act and what those impose upon the parties involved, both the attorneys and the service providers, and I think they’re also very aware that there’s a finite universe of attorneys and service providers that are up to this challenge, and that’s something that I think will come out in this presentation. The web of talent that you need to execute, and the scale that you need to execute, is instructive, and we’ll talk a lot about that through this presentation as we hand this over to the experts. 

A final thought is that, in the scheme of things, really only a couple percent of the total number of companies that submit filings to the FTC and DOJ are subject to Second Requests, so this is a narrow universe of less than 100 matters, but the table stakes are just so high. I’ll just state, as someone that deals in M&A day-to-day, that I see this increasing. I see at least steady to increasing M&A activity, at least for the first, second, and third quarters of this year, and we look forward to continued growth here. Fortunately, you’ll hear from what has become an all-star team of forensics, tech, and managed review talent that has to work in unison with the attorneys, both outside and in-house counsel, to pull this off. 

So, I’ll hand it off to the team and thank you again for listening in. 

Michael Sarlo

Thank you for that, Mike. I really appreciate your expertise and thoughtfulness as it relates to this topic. 

So, we’re going to get into the meat of this now. Our next section is called A Different Type of Discovery. We’re going to get into the characteristics of Second Requests, and I think the biggest characteristic that frontloads all of this, certainly, is the accelerated timelines, and I’ll kick it off to Kevin Reynolds to give you all just a little bit of background on what that really looks and feels like from a service provider perspective. 

Kevin Reynolds

Thank you, Mike. With the DOJ’s requirements, as soon as the deal filings have been made with the antitrust agencies, 13 days after publicly announcing the deal, we have about 30 days to meet substantial compliance. Now, this is a very tight deadline, and the best way to ensure that we meet these deadlines is understanding all elements of the Second Request. That means that you’re going to have to have strong communication between the law firm and the vendor, and then, within the vendor, between forensics, processing, review, and production. Now, the only way that you’re going to be able to meet these requirements is to really understand what the DOJ is requesting when it comes to production format, TAR documentation, sample productions, collection logs, the full gamut of the process. Now, within that 30 days, everything is going to happen very, very rapidly, so everybody on the team has to be communicating on all fronts very quickly and efficiently, and all cylinders need to be firing. 

Michael Sarlo

Fully agreed, and it’s completely an exercise in controlled chaos, so to speak. There are so many things you can plan for, and you want to be able to execute early on everything you can control, but as a service provider, your capability to be reactive, and to anticipate being reactive, is just critical to meet these timelines, and, really, there’s so much work that needs to be done. What’s a little bit different about these types of cases, as opposed to a typical litigation or an investigation, is the standard of substantial compliance. Typically speaking, when you’re responding to an interrogatory or you’re in a very structured discovery workflow, the standard is conducting a defensible search and producing documents in whole, with no deficiencies, and handing them over; that’s the standard for most regulatory requests and for any type of general litigation. With a Second Request, it’s a time-driven best effort, and you still need to get 95% of the way there, because you never want to be in a position where you’re not able to say you have fully complied with the Second Request. That can expose your client to fines and/or quite a bit of risk as it relates to dramatic scope creep, and it gives the DOJ and FTC a major opening to expand scope around different types of custodians or departments they’re interested in. So it’s definitely a best effort against time, and being able to document that, end-to-end, with all the tracking logs, and to leave no stone unturned, is key to being able to defend your position that you have fully complied. 

This really always does begin at data mapping and data [lake collection]. I’ll kick it off to my colleague, John Wilson, to discuss how we handle these from a data standpoint and a collection standpoint. 

John Wilson

Thank you. So, when we’re engaging with clients, it becomes really important to gain a solid understanding of where all of the client data is as soon as possible, as early as possible in a Second Request matter, because timing is of the essence. It becomes really important to not only understand the geolocations where the important custodians in the matter may potentially be located, so that we can make sure we have resources ready to deploy to those locations; we also need to fully understand where all of the key repositories of data within the organization are, what those repositories look like, and how we can access them. Some tools have multiple levels of subscription that allow for different types of exports to occur, depending on what type of subscription the client has. Making sure we have all of that information mapped out and planned out, so that we can be very time-efficient in getting the work done, getting the data collected, and then into processing as quickly as possible, becomes really critical when you’re dealing with the short time fuse of a Second Request. 

Michael Sarlo

Thanks, John. One thing we always encounter is counsel and client pushing for a certain scope, and we’re always really trying to get them to look toward the data types that are going to require more work, which is their mobile phones and things like Slack, where there’s a lot of computational time to prepare and format that data. We want to be able to pull the trigger quickly if we’re not able to pre-collect in full upfront, which is oftentimes what we’re really aiming for: to go in with a bigger shovel. 

Another unique characteristic is multiple languages, and Seth Schechtman, who oversees our managed review division, deals with multi-language reviews on a day-to-day basis, so we’ll let him opine as well. 

Seth Curt Schechtman

Thanks for the introduction, Mike. This is Seth. I run the review division at HaystackID. Given how large Second Request transactions are, dealing with multinational companies, you’re always going to see foreign language documents. Most of you may not know, but HaystackID does review a little bit differently. Before reviewers come into our ecosystem, they’re all tested. We give them a protocol, we give them questions, and we see how they perform, grading them on speed and accuracy. We do that in English. We also do that in foreign languages, so we have multiple foreign language tests posted. The typical way other review vendors do it is to have the reviewers take an ALTA test, but all that tests is their competency level in that individual language. It has nothing to do with their ability on review, so we test them on both to ensure that, before they review any documents, we can be assured that they have both the qualities and skills necessary to meet our standards. 

Michael Sarlo

Thank you. And, Kevin, you might want to talk about some of our translation technology, which I think is also critical. 

Kevin Reynolds

Absolutely. It’s absolutely critical, Mike. Due to the DOJ’s requirement to have foreign language documents translated and provided in a separate production volume when meeting compliance, the key factor here is that you want to be proactive: as soon as data is hitting your review platform, you want to be identifying any other languages on documents and getting those translated as soon as possible, to help streamline and not cause any backlog in work when trying to meet compliance. It is also important to get this translation process started quickly and efficiently so that when you do apply priv terms, you’re accounting for as much potentially privileged information as possible and not slowing down managed review in getting those documents reviewed. Internally at HaystackID, we use Authenticity.AI, which does foreign language translation on the fly, as well as bulk foreign language or machine translation, and it does a very good job of allowing us to add additional documents and get us 90% of the way to the production volume data set. Thank you, Mike. 
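Editor’s Note: As an illustration of the language-triage idea Kevin describes, below is a minimal Python sketch. It is not the Authenticity.AI implementation; it assumes the open-source langdetect package and hypothetical document fields, and simply flags non-English documents for a machine-translation queue so priv terms can be run against English text early.

```python
# Minimal sketch of early language triage (illustrative, not the
# Authenticity.AI implementation). Assumes `pip install langdetect`
# and documents represented as dicts with 'id' and 'extracted_text'.
from langdetect import detect
from langdetect.lang_detect_exception import LangDetectException

def route_for_translation(docs):
    """Return (doc_id, detected_language) pairs needing translation."""
    translation_queue = []
    for doc in docs:
        text = (doc.get("extracted_text") or "").strip()
        if len(text) < 20:            # too little text to classify reliably
            continue
        try:
            lang = detect(text)
        except LangDetectException:   # e.g., text is all digits/symbols
            continue
        if lang != "en":
            translation_queue.append((doc["id"], lang))
    return translation_queue

# Documents flagged here would be batch-translated, and the English
# output indexed so privilege terms hit original and translated text.
```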

Michael Sarlo

Kevin, I would add that one of the really unique features of Authenticity is the ability to deliver back a translated native from a machine translation endpoint, which makes things so much easier from a review standpoint, as far as integrating translations into your productions. All of this really streamlines these matters, and I think HaystackID is both a services company and a technology company; there’s just a need for advanced technologies, and they’re the gold standard in order to complete these matters. Having worked quite a bit with the DOJ and FTC, really, as an enterprise vendor, to even check the box as a go-to, they’re going to want you to have Brainspace and Nuix, and the regulators are weighing in. There’s full transparency in these matters on what technologies are going to be used. They’re looking into all of this. They want to know what you’re using for a processing platform. They want to know what you’re using for a review platform. They want to know what you’re using for your technology-assisted review and analytics platform, and they prefer Brainspace and Nuix for sure. Additionally, Nuix gives us some advanced capabilities to handle more iterative workflows as we get downstream, when we’re [pre-leg] prepping for a Second Request, as far as more massive throughput, and also the ability to handle multiple deduplication sets on the fly, which can be critical if you’re adding and dropping custodians in a processing universe. 
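Editor’s Note: Below is a conceptual sketch of the “deduplication sets on the fly” idea, not the Nuix implementation. The assumption is that every instance of a document is retained with its hash and custodian, so the deduplicated view can be recomputed against whichever custodian list is currently in scope; the field names and priority scheme are illustrative.

```python
# Recompute a dedupe view for the active custodian set (illustrative).
def dedupe_view(instances, active_custodians, priority):
    """instances: dicts with 'md5', 'custodian', 'id';
    priority: custodian ordering that decides which copy is kept.
    Assumes every active custodian appears in `priority`."""
    rank = {c: n for n, c in enumerate(priority)}
    best = {}
    for inst in instances:
        if inst["custodian"] not in active_custodians:
            continue
        key = inst["md5"]
        if key not in best or rank[inst["custodian"]] < rank[best[key]["custodian"]]:
            best[key] = inst        # keep the highest-priority copy
    return list(best.values())

# Dropping a custodian is just a new call with a smaller active set:
# documents they uniquely held fall out, while shared documents survive
# via the next custodian in priority order.
```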

I want to kick it off to Anya as well, because the TAR component is a huge piece of this, and in general, we at Haystack find that the DOJ, the FTC, and our clients who really do these favor a TAR 1.0 workflow. Anya, if you want to get into some of the granularities there of how we do this, I would appreciate it. 

Anya Korolyov

Thank you, Mike. As we’ve already covered, given the timelines and the scope of the data that will need to be reviewed to substantially comply here, a traditional linear review is not a favorable workflow, and so we turn to Technology-Assisted Review, which is a review process where humans work with the software to train it to identify relevant documents. We have two workflows available to us: TAR 1.0 and TAR 2.0. The key difference between the two is that TAR 1.0 uses a simple learning approach, where a subject matter expert trains the TAR algorithm until it reaches stable and acceptable results, and at that point, the algorithm ranks the documents, identifying responsive and non-responsive documents. TAR 2.0 uses a continuous active learning approach, which allows the algorithm to learn continuously throughout the entire review process. TAR 2.0 is generally the preferred option for a large-scale eDiscovery review, especially in instances where we have rolling productions or it’s an investigation-type review where the scope of discovery is not clearly defined at the outset and might evolve over time. However, in the case of Second Requests and third-party subpoenas, TAR 1.0 is absolutely the most reasonable and affordable approach to document review. 

Our objective here is to quickly and efficiently make a reasonable effort to identify and produce requested documents, and actually, the Department of Justice pretty much spells out the workflow of TAR 1.0 in its predictive coding model agreement. 
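Editor’s Note: For readers unfamiliar with the mechanics, below is a minimal Python sketch of a TAR 1.0 (simple learning) loop using scikit-learn. It is an illustration of the workflow Anya describes, not the Brainspace engine or the DOJ’s model-agreement protocol; the batch sizes, stability test, and the `sme_review` callable are all assumptions.

```python
# TAR 1.0 sketch: an SME codes training rounds, the model is refit each
# round, and training stops once recall against a blind, held-out
# control set stabilizes; the frozen model then ranks the corpus.
# (Round selection here uses uncertainty sampling; random sampling is
# equally common in simple-learning protocols.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def tar_one_point_oh(corpus, sme_review, control_ids, control_labels,
                     batch=500, stability=0.02, max_rounds=20):
    """corpus: {doc_id: text}; sme_review: callable doc_ids -> {id: 0/1};
    control set is held out and never used for training."""
    ids = list(corpus)
    vec = TfidfVectorizer(max_features=50000)
    X = vec.fit_transform(corpus[i] for i in ids)
    row = {d: n for n, d in enumerate(ids)}

    labeled, prev_recall = {}, None
    to_code = ids[:batch]                      # seed round
    for _ in range(max_rounds):
        labeled.update(sme_review(to_code))    # SME codes the round
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[[row[d] for d in labeled]], list(labeled.values()))

        # Measure recall on the control set to test stability.
        preds = clf.predict(X[[row[d] for d in control_ids]])
        hits = sum(int(p) and int(t) for p, t in zip(preds, control_labels))
        recall = hits / max(1, sum(control_labels))
        if prev_recall is not None and abs(recall - prev_recall) < stability:
            break                              # model has stabilized
        prev_recall = recall

        # Next round: unlabeled documents the model is least sure of.
        scores = clf.predict_proba(X)[:, 1]
        pool = sorted((d for d in ids if d not in labeled),
                      key=lambda d: abs(scores[row[d]] - 0.5))
        to_code = pool[:batch]

    # Final pass: rank everything; the responsiveness cutoff is then
    # negotiated and documented with the agency.
    probs = clf.predict_proba(X)[:, 1]
    return {d: float(probs[row[d]]) for d in ids}
```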

Now, not all data that we collect is favorable to put through the TAR process. TAR will be most effective when applied to text-based, user-generated documents such as emails or electronic documents: Word documents, PDFs, presentation slides, and so on. Documents with little text and no substantive discussion, for example, Outlook calendar invitations that don’t have any content in the body, cannot be effectively analyzed. Neither can audio, video, or image files. So, what we do at the beginning is identify a set of documents that we can put into Brainspace for the TAR process, by identifying documents with text and substantive discussion, and then we also have sets of data such as Slack and mobile data, with chat messages and SMS messages, that must go through a different approach. 
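Editor’s Note: The TAR-eligibility triage Anya describes can be sketched as a simple split. The heuristics below (extension lists, minimum body length) are illustrative assumptions, not HaystackID’s production rule set.

```python
# Split a processed data set into TAR-eligible and non-TAR populations
# (illustrative heuristics): text-based, user-generated documents go to
# the TAR engine; contentless invites and audio/video/images do not.
TEXT_BASED = {".msg", ".eml", ".doc", ".docx", ".pdf", ".ppt", ".pptx", ".txt"}
NON_TEXT   = {".mp3", ".wav", ".mp4", ".mov", ".jpg", ".png", ".tif"}

def split_for_tar(docs, min_body_chars=100):
    """docs: iterable of dicts with 'ext', 'doc_type', 'extracted_text'."""
    tar_set, other = [], []
    for d in docs:
        body = (d.get("extracted_text") or "").strip()
        ext = d["ext"].lower()
        if ext in NON_TEXT:
            other.append(d)                 # audio/video/images: no text
        elif d.get("doc_type") == "calendar" and len(body) < min_body_chars:
            other.append(d)                 # contentless meeting invites
        elif ext in TEXT_BASED and len(body) >= min_body_chars:
            tar_set.append(d)               # substantive, text-based
        else:
            other.append(d)                 # outliers: triage by hand
    return tar_set, other
```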

And I’m going to turn it over to Kevin to discuss our workflow for those. 

Kevin Reynolds

Now, internally, we have developed a program that can process mobile phones, called Mobile Integrator, and this allows you to review documents very quickly and also comply with the DOJ’s requirements for mobile phone productions. Typically speaking, we would sometimes run search terms, and sometimes you don’t need to, but this application allows you to identify conversations between different individuals and group those together, and then we can pull in five messages before or five after to show the context of those communications. Another big one is going to be Slack chat, Skype databases, as well as Microsoft Teams: being able to extract that data, process it, search that information, and then also organize it for production in such a way that it’s readable, clear, and concise. The other piece of this would be hard copy documents. Hard copy documents are not good candidates for TAR because of OCR quality, so they need to be human-reviewed for responsiveness and privilege and then produced right away. And then any other type of very specific collection would be a big one: if a custodian identified a folder on their laptop, desktop, or within a document share that is highly relevant to the Second Request and easily identified, push it into processing and have that material reviewed and produced very early on in the matter. 
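Editor’s Note: The “five messages before, five after” grouping Kevin describes can be illustrated as a context-window pass over sorted chat messages. This is a conceptual sketch, not HaystackID’s Mobile Integrator; the data shape is assumed.

```python
# For each message that hit a search term, keep the five messages
# before and after it within the same conversation, so reviewers see
# the exchange rather than a lone text (illustrative sketch).
def with_context(messages, hit_ids, window=5):
    """messages: list of dicts sorted by (conversation_id, timestamp);
    hit_ids: set of message ids that matched search terms."""
    keep = set()
    by_convo = {}
    for idx, m in enumerate(messages):
        by_convo.setdefault(m["conversation_id"], []).append(idx)
    for idxs in by_convo.values():
        for pos, idx in enumerate(idxs):
            if messages[idx]["id"] in hit_ids:
                lo = max(0, pos - window)
                hi = min(len(idxs), pos + window + 1)
                keep.update(idxs[lo:hi])   # 5 before, the hit, 5 after
    return [messages[i] for i in sorted(keep)]
```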

Michael Sarlo

So, there’s a lot of coordination using different technologies, and you need to be thinking about leveraging them before you even start, and they’re really just a given, and all of that goes into management of Second Request, just really being prepared, planning around what you do know and what you don’t know, and then being able to execute. 

Strictly speaking, the first phase of the Second Request begins with the notification that you’re now subject to a Second Request, as Mike Bryant mentioned, after filing HSR. That said, and I think this is really a common misconception in the industry on the service provider side, these aren’t unanticipated Black Swan events. Part of good counsel’s job is to advise their clients, on the deal side and more on the enforcement side, whether or not their deal is going to be subject to a Second Request. They always know these are coming, and in that sense, there is more time to prepare. That’s why, in reality, on the law firm side, there are really only a few shops that do these right, and they are the industry-standard go-to resources for these; you end up working with the same people, and they become battle-tested teams as well. Certainly, the FTC has the model request guide for Second Requests, and as an eDiscovery vendor, you should have all of your operating procedures based on this model, and you need to know it like the back of your hand, from a playbook perspective, across all teams who are working together on these. 

Really, the big thing to keep in mind on these is that we’re dealing with a lot of iterative workflows. Typical eDiscovery for a general litigation is very much a left-to-right workflow. An investigation is more circular. With a Second Request, it’s really top-down: as soon as we’re collecting data, we’re reviewing data and we’re pre-producing data, and all of this is happening at once, from a preparation standpoint, knowing certain custodians we think will be involved in the Second Request versus getting that Second Request and actually then having to maybe add other custodians or be prepared with different resources. Anya and Kevin, if you can talk about just how you manage that workflow, I think it would really set the stage here. 

Kevin Reynolds

Well, thank you, Mike. Being proactive is the biggest key factor when it comes to a Second Request: understanding all of your collection points and how information is going to be handed from team to team. If you have that early identification of your data, you’ll be able to quickly identify which data sets will be able to go through TAR and which will not, and you can get the review teams moving and started very, very quickly. At this point, you want to look at the low-hanging fruit that we can get the review teams started on first, as that always allows them to start understanding the data itself as they are reviewing. 

So, when we’re looking at that low-hanging fruit, very similar to hot documents (or hardcopy documents, I should say), documents that can’t go through TAR or that we wouldn’t put through TAR, such as Excels or audio and video, you can get the review teams moving on that data very rapidly, and once you get them moving, all of the other pieces will start to fall into place. But it has to be very proactive, and you have to understand your data. 

Michael Sarlo

Thank you, Kevin. So, really, getting ahead of the data is key here, and John and I work on very large data collection matters, sometimes involving 300 or 400 devices, and John gets stuck with a lot of my rush requests for resource management, and John, if you want to talk about that, I’d appreciate it. 

John Wilson

Yes, as we alluded to earlier, it becomes a real resource management challenge, because these matters move very quickly, and as the data is digested and delivered, you may get new requests. Those new requests, while you’re in the middle of the process, might be that we need to go get an executive at this location, and they’re only going to be available for two hours, and that location is in the middle of nowhere. Being able to maneuver and have the resources staged and ready when you’re involved in one of these matters takes significant planning, so that we can respond and get the individuals that are needed out to those locations, or to disparate systems. Sometimes you have a mainframe that has data that you need, and so you need to have someone get out to the data center to collect that mainframe data. But, again, it really is a big logistics challenge. It’s making sure that you’ve planned ahead and understood what all the data sources could potentially be, so that when the request comes in, you have the right team members in place to get to the right locations and have the work completed in as short a timeframe as possible, meeting the time constraints of the Second Request. That’s generally the whole key: starting with the planning stage, and in your planning stage, you’ve got to plan to have those resources available throughout the process. It’s not just a ‘go out, collect, and we’re done’. You have to have those resources ready throughout the entire process, because additional requests, clarifications, expanded timeframes, or expanded custodian lists all can occur and need to be responded to and dealt with in extremely short order. 

Michael Sarlo

So, really, what it’s about is you need to have a triage team on standby for pretty much the duration of the matter, leaving resources available who are going to get on a flight at 6 p.m. to be at a client site at 7 a.m. the next morning. There’s a lot of work here as far as preparing data for TAR and dealing with rolling data volumes, and Kevin, I don’t know if you want to talk a little bit about how you set up a lot of these searches, and the ways that you cut down time getting to market on collected data through analytics. 

Kevin Reynolds

Yes, Mike, thank you. One of the best ways is to always, always document your process. Doing so will allow you to quickly identify documents that are not eligible for TAR and to quickly have that base foundation. At Haystack, we have gathered a lengthy list of file types, file extensions, and information that we can apply to any data set after a quick, high-level review of the information once the data has been processed. This is a process that we have adopted internally, so we have a database of information that will typically point out all of the documents not eligible for TAR, making it an easier, quicker process for us to get up and running than having to really analyze everything from top to bottom. Since we do have that database, we do not have to; we only have to worry about the outliers that fall outside of what we have already identified internally. 
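Editor’s Note: The “known file types” database Kevin describes can be sketched as a simple lookup that leaves only never-before-seen extensions for human triage. The CSV layout and file name below are illustrative assumptions, not the actual HaystackID database.

```python
# Classify a processed data set against a persisted extension database
# (illustrative): known extensions are pre-classified from past matters;
# only unseen extensions ("outliers") need analyst eyes.
import csv

def classify_extensions(processed_docs, db_path="known_extensions.csv"):
    """db rows: extension,tar_eligible  (e.g. '.docx,yes' / '.zip,no')."""
    with open(db_path, newline="") as f:
        known = {r["extension"].lower(): r["tar_eligible"] == "yes"
                 for r in csv.DictReader(f)}
    eligible, ineligible, outliers = [], [], []
    for doc in processed_docs:
        ext = doc["ext"].lower()
        if ext not in known:
            outliers.append(doc)    # new extension: analyst decides, and
        elif known[ext]:            # the decision is written back to the
            eligible.append(doc)    # database for the next matter
        else:
            ineligible.append(doc)
    return eligible, ineligible, outliers
```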

Michael Sarlo

And then finally, the big piece of all this, since we’re talking a lot about tech: Seth, if you want to talk about the ways that you prepare to get ahead of data on the review side, and then go into the next slide on some of the additional tasks that people often don’t plan for. 

Seth Curt Schechtman

Sure, Mike. Staffing is a huge component, making sure you have certified reviewers who know what they’re doing coming into the matter. In terms of staffing numbers and timing, typically, first-level review and Q.C. will take about half the time of the review, and the remaining half is logging, so it’s essential that you’re, I’ll say, “overstaffing” from the beginning of the matter, because you may not know what other data is coming in, given rolling collections. It’s important to get in front of it; you do not want to miss that 30-day deadline. These are multinational companies with huge mergers; you do not want to miss it and put the deal in jeopardy. Overstaff at the beginning, with planning for review to be done during the first two weeks, at least the initial stage and Q.C., and then you shift over to logging. 
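Editor’s Note: The staffing-and-timing rule of thumb Seth describes can be made concrete with back-of-the-envelope arithmetic. Every number below is an illustrative assumption, not a HaystackID benchmark.

```python
# Rough staffing math for the first half of a ~30-day compliance window
# (all rates assumed for illustration): first-level review plus Q.C. in
# the first two weeks, logging afterward, with headroom for rolling
# collections.
docs_expected = 1_000_000   # post-TAR review population (assumed)
docs_per_hour = 50          # per-reviewer first-level rate (assumed)
hours_per_day = 8
review_days   = 10          # first half of ~20 working days

per_reviewer = docs_per_hour * hours_per_day * review_days   # 4,000 docs
base_staff   = -(-docs_expected // per_reviewer)             # ceil -> 250
staffed      = int(base_staff * 1.3)   # ~30% headroom for rolling data

print(f"{base_staff} reviewers at steady state; staff ~{staffed} up front")
```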

I know we’ve talked a couple of times about the FTC/DOJ model guide. The model guide is just a guide. Everyone on this webcast can read through it. Learning through the experience of the 12 Second Requests we did last year, you see there is wiggle room. You may notice [inaudible] Second Requests with regard to privilege logging, there’s a partial privilege log mentioned; we know which regulators and which attorneys at the agencies allow you to do that and which ones don’t. There can be variances across logging procedures as well as the use of technology, stemming, and whether the production is inclusives or non-inclusives, so you want to use the experience that we have doing these requests to your advantage. There’s more leeway than you think, given the model rules, and the model rules will lay out what’s PII, what’s PHI, and technology usage. But like I said, there is leeway there, and you want to learn where you can go and where you can’t go with certain regulators. 

Michael Sarlo

Thank you, Seth. All of this is focused on a data reduction strategy, and I would say that we’re always trying to prime our clients for TAR on some of these matters. Certainly, you may have cases where you might use TAR to define responsiveness, and your client might be in a highly regulated industry, or has been through many litigations, or has trade secrets that may not be in scope, such that they want to put eyes on documents that have been coded “responsive” by a Technology-Assisted Review engine. That can happen sometimes, and it can significantly expand the requirement for first-level review. Typically speaking, the workflow here is that data is processed and de-duplicated in ECA, and that de-duplicated data is all pumped into the Technology-Assisted Review platform. We’re not introducing bias via search terms. The cut there is usually about 40%, hopefully, via de-duplication and also by shaving off the portion of the population that’s non-TAR eligible, or just by culling document types that are not relevant to the matter using forensic culling techniques and document extension reports. Signature analysis is actually often happening before we go into ECA. We’re always cleansing data; we’re not just jamming hard drives through a processing platform. 

The next step really comes as you get into analytics and assisted review, and that’s going to get your cut down to somewhere between about 20 to 40% of the source data volume. That’s a little bit larger than what you would expect when leveraging TAR for more granular issues in a general litigation or investigation matter, so you need to be prepared for larger review populations happening en masse, and this is a good slide, I think, that actually gives reasonably accurate figures, to some approximation, based on the many Second Requests we have worked on here at HaystackID. 
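Editor’s Note: To make the data-reduction funnel concrete, here is the arithmetic using the approximate percentages from the talk against an assumed starting volume; the 5-million-document universe is purely illustrative.

```python
# The reduction funnel, as illustrative arithmetic (percentages are the
# approximate figures from the talk; the starting volume is assumed).
source_docs = 5_000_000                  # assumed processed universe
after_cull  = int(source_docs * 0.60)    # ~40% cut: dedupe, file-type
                                         # culling, non-TAR carve-outs
low, high = int(source_docs * 0.20), int(source_docs * 0.40)

print(f"culled universe: {after_cull:,}")                 # 3,000,000
print(f"review population after TAR: {low:,}-{high:,}")   # 1-2 million
```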

I’m going to hand it back off to Seth; really, the managed attorney review component of this is where a lot of the controlled chaos happens. We have a battle-tested method as it relates to eDiscovery, and we would like to think we have a battle-tested method as it relates to the review components of these matters, but being able to anticipate changes in protocols and things of that nature is just critical. It’s leveraging, from a client’s standpoint, advising to push people into multiple centers, [inaudible] quality, bringing in the right attorneys. 

Seth, do you want to talk a little bit about ReviewRight and how that works here, I would greatly appreciate it. 

Seth Curt Schechtman

We spoke a little bit earlier about testing our reviewers; they’re all tested and certified. When they come into our ecosystem, we track their metrics, overturn rates, accuracy, and performance throughout the reviews. We have multiple review centers around the country. We also have secure virtual remote review, and it really works for these Second Requests, because you may have specialized skills needed on the language front, where you may have to go to other markets to get people that are qualified. The big advantage of remote is being able to scale up and down according to the review. 

We’ve had multiple Second Requests where we’ve needed to add hundreds upon hundreds of people in a short timeframe. If you go to one market, or even multiple markets, on-prem, you can tap out those markets very quickly. A lot of our Second Request reviews emanate out of D.C., as most of them do, but they can easily spread throughout our locations, as well as secure remote. That gets you better quality people, because you’re choosing from a larger pool, so you get better quality, better pace, and better performance there. 

Moving on to discussions of privilege logging, we’re going to get into the weeds here. It’s a huge component of the Second Request. Obviously, the regulators are looking for the produced documents, but they also want to see what you withheld from production. We’ve developed unique, proprietary tools of our own to help us get through these logging procedures. 

I’ll hand it off to Kevin and Anya to talk through some of the things that we’ve developed. 

Anya Korolyov

Thank you, Seth. During our experience with these 12 Second Requests last year, we noticed that across all the rolling phases of collection, processing, assisted review, and rolling production, one of the stages that kind of gets overlooked and is not addressed early enough is the privilege log, which actually is part of the substantial compliance requirement. We think it’s very important that everybody, from counsel across the review team to the eDiscovery support teams, is on the same page from the very start of the matter about the automation steps that need to be taken for the priv log. 

Questions like: What gets logged? How is the privilege log sentence actually going to read? Are the parent emails or the children attachments logged, and does the description get propagated down? All of these things [inaudible] so that the deadlines for the privilege log are met across everybody’s teams. 

We have identified name normalization as one of the biggest components of the privilege log, and the one that takes up the most significant amount of time. We think that stage should begin pretty much as soon as we have, through the TAR 1.0 process, identified our responsive population, then run our privilege terms across that and identified our potential privilege population. At that point, we can begin the name normalization. 

Again, having gone through so many of these, we’ve developed tools, and I’m now going to turn it over to Kevin once more to discuss what those tools are and how we’ve found them helpful. 

Kevin Reynolds

For name normalization, as Anya was saying, it does take up a significant amount of time, so we have actually developed a process internally, and an application, that will help us parse out names and email addresses. This process will actually get us about 50% of the way to the finish line, and it allows us to step into creating a true dictionary database of the normalized names and the email addresses associated with each normalized name. 

Now, this can be changed at any instance. We can change the formatting; it can be last name-comma-first name, first name/last name, however we want to do it, and it also allows us to add esquires into the data sets. Once the dictionary has been created, we can update fields in Relativity, or any review platform, very quickly to move the actual priv log along. 

Now, this is a huge benefit when you do have a custom dictionary. For example, one of our clients actually had three Second Requests. So, how does that help us? It helps us speed up the priv log for the other Second Requests, because more than likely the same names are going to be used over and over again, since they are at the same company, just in different Second Requests. So, we’re able to leverage that across multiple matters, quickly lessen the burden of the priv log process for the managed review team, and rinse, wash, and repeat to make things a lot easier and more efficient. 
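Editor’s Note: Below is a minimal Python sketch of the name-normalization dictionary Kevin describes. It is illustrative, not the internal HaystackID application: recipient strings are parsed, each address is keyed to one canonical “Last, First” entry, attorneys can be tagged with “Esq.”, and the dictionary is persisted so later matters for the same client can reuse it.

```python
# Build and apply a reusable name-normalization dictionary (sketch).
import json
from email.utils import getaddresses

def normalize(recipients_field, dictionary, attorneys=frozenset()):
    """recipients_field: raw 'To'/'CC' string; dictionary: email -> name;
    attorneys: lowercase addresses to flag with 'Esq.' on the log."""
    out = []
    for display, addr in getaddresses([recipients_field or ""]):
        addr = addr.lower()
        if addr not in dictionary:
            # First sighting: derive "Last, First" from the display name
            # or the mailbox; an analyst can correct the entry later.
            name = display or addr.split("@")[0].replace(".", " ").title()
            if "," in name:
                dictionary[addr] = name.strip()   # already "Last, First"
            else:
                parts = name.split()
                dictionary[addr] = (f"{parts[-1]}, {' '.join(parts[:-1])}"
                                    if len(parts) > 1 else name)
        entry = dictionary[addr]
        if addr in attorneys and not entry.endswith(", Esq."):
            entry += ", Esq."
        out.append(entry)
    return "; ".join(out)

def save(dictionary, path):   # persist for reuse across Second Requests
    with open(path, "w") as f:
        json.dump(dictionary, f, indent=2)
```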

Seth Curt Schechtman

Thanks, Kevin and Anya. Now, I’m going to speak to some specifics for privilege logging. These are questions you could ask outside of Second Requests, but it’s particularly important within Second Requests to try to narrow the scope of logging. I mentioned the partial priv log earlier; we don’t see it that often, to be honest, but it is available. 

The first question to ask is whether you can do a categorical log, not document by document. Some jurisdictions encourage it. We haven’t seen it in the Second Request arena, but it’s something to ask for in all matters. In terms of what documents actually get logged, is it all privileged documents, or can you log email families in a single entry and note that they have privileged attachments? Do privileged redacted documents have to be logged? Maybe you will hand over some information [inaudible] the government will actually get the documents, and you will have the to/from/cc information in there; they still won’t know what’s being physically redacted, but sometimes the government will not require those to be logged. 

In terms of non-inclusives, I see we have a question about the use of threading and producing most-inclusive emails; the answer is sometimes, in some situations, they will allow you not to log those documents or produce those documents. In general, though, they are asking you to review and produce non-inclusives as well. But you can always ask whether those non-inclusive documents have to be logged. We’ve developed techniques for pulling up the to/from/cc information from lesser-included threads onto the log, which will give you some bargaining power in terms of not having to log those individually. 

Do exact duplicate documents have to be logged? Typically, you will see them in the attachments. We have seen a few matters where they do not require the duplicates to be logged. 

In terms of what information has to appear on the log, can the to, cc, and bcc fields be collapsed into a single field called “Recipients”, as opposed to breaking them out separately? The reason you might want to do that is the government sometimes will attack the log in terms of verb choices: if you’re making a claim that attorney advice has been requested, but the email isn’t going to (and I say to, specifically to, as opposed to cc) the attorney, sometimes those will bounce back and they will ask questions. So people collapse the to/cc/bcc into a single field called “Recipients” to preemptively prevent questions. 

Do families have to be logged if they are to/from/cc outside counsel? We have seen certain Second Requests where documents, emails, and attachments that are to/from outside counsel do not have to be logged, but not for those that cc outside counsel. 

Another piece which can save time: can document titles and email subject lines be used to describe what the legal advice is with regard to, instead of having reviewers enter a free text field or use dropdown menus? Every little bit helps here in terms of meeting the deadline and making sure that you’re handing over the information that needs to be handed over. 

In terms of third parties appearing on the log, especially if you’re producing a unique people list with company affiliations, which in general has to be produced in these Second Requests, you want to make sure you’re targeting those documents early, running screens, and presenting those domains or individuals to outside counsel or to the client to try to find out what their relationship is. Are they agents? Are they functional equivalents of employees? You want to make sure that if there is a common interest claim or [inaudible] claim that you’re actually making it. On the log, typically, you will have to include those parties when you’re making the common interest claim. 

Who are the per se privilege breakers? You can create a highlighting set for them, not only government entities and state entities, but third parties in general. You can certainly scrub or screen to/from/cc information for those and pull them out of the review set for your privilege-only review. 

In terms of instructing reviewers, this is huge for Second Requests or any other matter: always instruct them to never assume a third party breaks privilege. Why? Because they don’t know. They’re seeing a potential privilege breaker, but it’s just potential. You want to make sure you’re logging it, keeping track of it, sending the list off, and then researching it before you make a final call. It’s much easier to flip documents from “privileged” to “not privileged” than to go back and try to hunt them down if you’ve coded them as “not privileged”. 

Again, common interest holders/joint defense: you want to make sure you’re listing them. Watch merger dates; you do not want to claim a common interest with the merging entities before the merger announcement. 

Run sweeps for privileged documents that are to/from/cc .gov and .state domains and addresses. We will do .orgs as well; those don’t always break privilege, but you want to make sure you’re taking a second there to look at them. 

Also, make sure you’re running searches for privilege withhold documents that contain .gov and .state addresses. At best, if those are forwarded on internally at the company, they should be partially privileged; they shouldn’t be withheld as privileged. 
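Editor’s Note: The domain sweeps Seth describes amount to screening the header fields of privilege-coded documents for likely privilege-breaking domains. The sketch below is illustrative; the regex patterns and field names are assumptions.

```python
# Screen to/from/cc/bcc of privilege-coded documents for .gov/.state
# addresses (and .orgs for a second look) -- illustrative sketch.
import re

GOV_PATTERN = re.compile(r"@[\w.-]+\.(gov|state\.\w\w\.us)\b", re.I)
ORG_PATTERN = re.compile(r"@[\w.-]+\.org\b", re.I)

def sweep(privileged_docs):
    """privileged_docs: dicts with 'id', 'coding', 'to','from','cc','bcc'."""
    flagged = []
    for d in privileged_docs:
        heads = " ".join(d.get(f) or "" for f in ("to", "from", "cc", "bcc"))
        if GOV_PATTERN.search(heads):
            # A full withhold with a .gov recipient is suspect: at best it
            # should be partially privileged once forwarded internally.
            flagged.append((d["id"], "gov/state", d["coding"]))
        elif ORG_PATTERN.search(heads):
            flagged.append((d["id"], "org: verify relationship", d["coding"]))
    return flagged
```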

Does every entry need an attorney in the description or the to/from/cc/bcc fields? Typically, the answer is yes, but you want to make sure that you’re consulting with outside counsel or in-house counsel to confirm that it is a requirement. 

In terms of verb choices, I know I’ve mentioned this before: requesting legal advice. You want to make sure that if you’re claiming legal advice is being requested, those emails are going to attorneys. If you can collapse the to/cc/bcc fields into a single recipients field, even better. But if you can’t, just make sure you’re using the correct verb choices. 

The same with rendering legal advice: because you’re making a claim of rendering legal advice, you want to ensure that the advice is actually coming from an attorney, not being sent to an attorney. These are easy ways for the government to attack the log. 

The next portion we want to get into is Q.C., stem detection in particular. This gets me really, really excited, and for those of you deep in the review world, you will see how terrific a tool it is for ferreting out inconsistencies at the Q.C. level. We will do Q.C. to track down inconsistencies across MD5s, typically non-emails, as well as [near] dupe groups, but given rolling collections and review, portions of email threads may obviously be reviewed non-sequentially. If you have all the data up front, consistency at first-level review is a little bit higher, because reviewers are going through all documents sequentially, but if documents are out of order in particular, you can get inconsistencies. We always recommend threading, grouping the documents, emails, and their attachments within thread groups, so we’re reviewing sequentially from earliest to latest, to make sure that the reviewers have context. 
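Editor’s Note: The duplicate-consistency sweep Seth mentions can be sketched as a simple grouping check. This is illustrative, not HaystackID’s tooling; field names are assumptions.

```python
# Flag MD5 (or near-dupe) groups whose members carry different
# privilege calls -- each flagged group goes to a Q.C. queue.
from collections import defaultdict

def inconsistent_groups(docs, key="md5"):
    """docs: dicts with 'id', a hash/group field, and 'priv_coding'."""
    groups = defaultdict(list)
    for d in docs:
        if d.get(key):
            groups[d[key]].append(d)
    return {k: [d["id"] for d in members]
            for k, members in groups.items()
            if len({d["priv_coding"] for d in members}) > 1}

# Usage: run once per review batch; a senior reviewer reconciles any
# group returned, e.g. inconsistent_groups(batch, key="near_dupe_group").
```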

Now, if you don’t have to, in the end, produce the non-inclusives, then you can always suppress them from review at the first jump. That is certainly rare in these Second Request cases. 

I’m going to walk you through what it means to be a consistent thread, then we will talk through what is inconsistent, and then I’ll show you an example of an inconsistency and why these threads get really complicated. Being able to parse them and use our proprietary tool to determine which stems of the threads are inconsistent is what’s so revolutionary within the review world. 

You see the first example there, all coded “not privileged”, starting with the non-inclusive, consistent throughout; not a big deal, nothing to check there, nothing that would be caught in Q.C. The next one would be privilege withhold throughout, and these are, of course, codings on emails, not attachments. We look to the email for the consistency or the inconsistency of privilege. The third one is P.P. (partially privileged), or priv redact, again consistent throughout the thread; nothing to catch. The fourth one (column D) starts off as not privileged and then gets to partially privileged, which is consistent, not inconsistent: the first email sent is not priv, then it comes to include some request for legal advice, or adds an attorney requesting or rendering legal advice; you redact out that piece of information and then produce the rest of it, so not privileged to partially priv is consistent. The last example starts off as a privilege withhold going to partially privileged; again, consistent. You can withhold the first email, and then when the replies or forwards come in, redact out the earliest-in-time email and produce the rest as consistent. 

Here are some examples of inconsistent stems. You’re starting off with not privileged on the earliest-in-time email, and then it switches to partially privileged; a reply comes in, or it’s a request that gets forwarded, so that’s consistent coding there. What causes the inconsistency is the third example, where it’s coded as “privilege withhold”. You’re withholding the most inclusive email in this string, but the earliest-in-time is not privileged and the second-in-time is partially privileged; there is the inconsistency. You want to make sure you’re catching these examples in Q.C.: the one coded as “privilege withhold” may, in fact, need to be partially privileged, or maybe, in fact, the full thread is not privileged. This is easy to catch. You can do a search for all inclusives coded “privilege withhold” that have partial priv or not priv non-inclusives, and they can be caught quickly. 

The next example shows where it starts off as privilege withhold, then goes to partially privileged, which again is allowed, and then switches to not privileged. These are the ones you want to catch, where the most inclusive is coded “not privileged” but the underlying emails are coded as “withhold”: either it got forwarded on to a third party, or it should have been coded as “not privileged” in the first two emails. It’s important to make sure that you’re catching these examples as well. 

Now, we come to what we typically see. If things were in a straight line, it would be pretty simple, but we want to catch threads that are branching off in multiple directions. A piece of this thread is coded all as “not privileged”, a couple of non-inclusives there, and then an inclusive consistently coded, similar to the example we showed earlier. We then add in another branch, or what we call a “stem”, here. On the left side, you will see a couple that are coded as “partially privileged” with their common node coded “not privileged”. Again, legitimate, right? Consistent. You have the left side being forwarded to an attorney with a request for legal advice, and then the attorney replies; you’re redacting out that information, but the first email-in-time where it branches off (the common node) is not privileged. 

Now, traditionally in the review world, if you did inconsistency searches across threads, this would come up in a search. Why would it come up? Because you have one inclusive coded “not privileged”, and then you have another inclusive coded “partially privileged”, as well as a non-inclusive partial priv on the left side branch. You know, when you’re reviewing this document, that you’ve coded it correctly; the stems are correctly coded inconsistently, because the conversation branches off, but the tools as they currently exist in the market cannot distinguish between these stems. If you run a Q.C. search, as this thread stands now, it will show up as an inconsistency even though it’s consistent. 

Why is that troubling? Because it takes a lot of time and effort: you have to jump into this thread, even if you’re using the thread visualization within Relativity, to see that it’s consistent, or correctly coded inconsistently. The powerful tool that we’ve built is able to parse out these stems at a Q.C. level, so it will check the coding across each individual stem (one that’s coded consistently “not privileged”, one that goes from not privileged to partial priv) and come back with a consistently coded thread. You are not spending time and money trying to hunt down something that does not exist. 

Now, I’ll add one additional piece to this thread, and this is where a true inconsistency pops up. If the earlier-in-time email, the common node for all the branches, is coded as “not privileged” in yellow, and then you have a little piece on the left side, an inclusive, that’s coded as “privilege withhold”, you will see the problem: for a document that’s inclusive and coded as “privilege withhold”, what you’re saying is that the entire email is privileged, but an earlier-in-time email has already been coded as “not privileged”. That is where the inconsistency pops up. So, our thread stemming tool will flag that piece of the stem as inconsistent, and a QC-er or a higher-level person will jump into it and clear it up, whether by coding that “privilege withhold” document “not privileged” or by flipping that yellow document to partially privileged, which may then cause other portions of the stem to change. But that’s how powerful the tool is. It saves time, reduces Q.C. costs, and actually hunts down the portions of the thread that are coded inconsistently at first level, ensuring that coding is up to standard and making your privilege logging more efficient and easier once you move to that stage. 
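Editor’s Note: The stem-level consistency idea Seth walks through can be expressed compactly in code. The sketch below is a conceptual illustration, not HaystackID’s proprietary tool: each email knows its parent, a “stem” is the path from the earliest-in-time email down to one inclusive email, and only transitions allowed by the examples above (not privileged to partial, withhold to partial, or a code continuing unchanged) pass; anything else, such as a withhold appearing below a not-privileged common node, is flagged for Q.C.

```python
# Stem-level privilege-consistency check over a branching email thread
# (conceptual sketch). Allowed transitions, earliest -> latest:
#   not_priv -> not_priv | partial   (advice added later, then redacted)
#   partial  -> partial
#   withhold -> withhold | partial   (earliest withheld; later replies
#                                     produced with redactions)
ALLOWED = {
    "not_priv": {"not_priv", "partial"},
    "partial":  {"partial"},
    "withhold": {"withhold", "partial"},
}

def stem_inconsistencies(emails):
    """emails: {id: {'parent': id|None, 'coding': str, 'inclusive': bool}}."""
    flagged = []
    for eid, e in emails.items():
        if not e["inclusive"]:
            continue                  # one stem per inclusive email
        # Walk up to the root, then read the stem earliest-to-latest.
        stem, cur = [], eid
        while cur is not None:
            stem.append(cur)
            cur = emails[cur]["parent"]
        stem.reverse()
        for earlier, later in zip(stem, stem[1:]):
            a, b = emails[earlier]["coding"], emails[later]["coding"]
            if b not in ALLOWED[a]:
                flagged.append((eid, earlier, later, f"{a} -> {b}"))
                break                 # this stem goes to Q.C.
    return flagged

# The branching example from the talk: a not_priv common node with one
# stem going not_priv -> partial is consistent and NOT flagged, while a
# sibling stem whose inclusive is coded withhold IS flagged.
emails = {
    1: {"parent": None, "coding": "not_priv", "inclusive": False},
    2: {"parent": 1, "coding": "partial",  "inclusive": True},   # ok
    3: {"parent": 1, "coding": "withhold", "inclusive": True},   # flagged
}
print(stem_inconsistencies(emails))  # [(3, 1, 3, 'not_priv -> withhold')]
```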

Michael Sarlo

Let me just summarize the main pieces of this tool. It gives you the capability to suppress inclusive threads within a tree; no other email threading technology really analyzes individual fully inclusive emails against each other, and this really does that, and it cuts down what needs to be logged, assuming you’re able to get a regulator to agree to certain workflows. A big thing in all of this is always collaborating with the regulator to get workflow efficiencies that they will agree to. There are just massive time savings and massive cost savings. For a vendor, it allows us to really focus our quality control resources and keep them fully busy where they can make major impacts, at an entire conversation level. 

So, a really great tool. I don’t believe that any other service provider has perfected this. We’ve spent several years perfecting it, and it became battle-tested over the past year as we’ve gone to market, so feel free to reach out for a demo, and certainly, if you’re a current review client of ours, definitely ask us if we are using this tool, because we always are trying to. 

All of this comes down to why HaystackID for these. First and foremost, it’s the team; it’s our ability, as an organization, even at the executive level, to be fully engaged and to break down the walls that appear in very large organizations. We try not to build them, but to create cross-departmental teams, people who actually take this more seriously than a job. As a vendor, these matters can make or break your clients, and they make or break your reputation; you cannot mess one of these up. If fingers are being pointed at you because a half-billion-dollar transaction didn’t go through, there’s a lot of risk. There’s a certain comfort level in doing these and making sure you have the right resources. Having been in this industry for many years and worked with all the people on this call for many years, I’m always incredibly confident we will be able to over-deliver on any project of any size and scale. 

In particular, we did six Second Requests in a four-month period, which, talk to anybody, any law firm, is unheard of. These don’t come up that often. Some firms might get lucky and get a spree of these, then not see them for years. Some maybe do smaller ones. These can range anywhere from 10-50 custodians and onwards. Being able to do six of them in a four-month timeframe is an accolade that nobody else in this industry can claim. 

Then we had a seventh one pop up in the middle of a deal, and then another one, and then a second deal added on during the transaction. Being able to be that fluid for, really, beyond 90 days; last year, in nine months, not even the entire year, we handled a total of 12 requests, and that speaks volumes to, I think, our footprint across the industry and the antitrust world. We have become a go-to resource for these. 

There’s quite a bit of marketing material and case studies you can find online. Certainly, it really just comes down to the overall team and our ability to scale, be responsive, and put the right people in the pocket, making sure you have individuals who are functional and able to work and collaborate with your law firm clients who are overseeing these deals, reducing the number of questions that have to be asked. It’s all about anticipation when you think of teams. At HaystackID, we’re all about anticipating our clients’ needs, and we like to form a collaborative friendship with the people we work with, so that we’re building that team dynamic and it’s not an impersonal buying experience. We’re totally invested in our clients’ success. 

I would encourage anybody to reach out to us. If you have a pending Second Request, we would be happy to assist you and we will open the floor to questions. 

Seth Curt Schechtman

Mike, we do have a question about standard production format with respect to what’s produced natively, so maybe Anya or Kevin can speak to that. 

Anya Korolyov

Sure, we can… Kevin, go ahead. 

Kevin Reynolds

With respect to the natives being produced, typically, with the DOJ, PowerPoints will be produced natively, Excels will be produced natively, as well as audio/video, in the production format. 

The DOJ Antitrust Division, for Second Requests, does have a document you can download from its website that provides full insight into its very, very specific production format. 

Michael Sarlo

“Is this presentation eligible for CLEs?”

This individual webcast is not, but if you’re looking to have this presented to your law firm or corporation as a CLE, we can get it accredited quickly, depending on what locale you’re in; it is eligible in some areas. We can usually work pretty quickly to get the presentation CLE-accredited; it meets the content standards. Feel free to reach out, again, to anybody on this thread to discuss that. As always, [email protected] for general information. 

Thank you all again for joining today. We really appreciate your attendance. If anybody has any follow-up questions, or just wants to talk about our experience doing these and the trends that we’re seeing, it is an evolving landscape. A theme I don’t think we really touched on is that although the DOJ and FTC are highly dispersed organizations with different teams, on the eDiscovery side, as far as legal discourse is concerned, it’s really the same people who are setting the standards. We have close relationships with them as well, and that type of institutional knowledge allows us to anticipate what we’re dealing with. 

“Are there ever claw backs with Second Requests?” 

Typically, no, and claw backs are actually very dangerous here: producing certain privileged communications can actually open up the entire category of those privileged communications. Depending on the nature of it, it can change the standard for privilege, which is unique, actually, to Second Requests. You need to be very careful there. 

Any other questions? 

All right, well, I’ll kick it off to Rob Robinson to end the webcast. 

Closing

Thank you, Michael, and thanks everybody on the team for the excellent information and insight. We also want to thank each and every one of you that attended today. We know how busy your schedule is and we appreciate you taking the time to share it with us. As John and Mike mentioned, an updated PDF copy will be uploaded to the website and you will be able to download it from the “Attachments” tab in your current presentation viewer immediately upon completion of the webcast. 

We also hope that you will have an opportunity to attend our webcast next month. It will be on March 25, on the topic of eDiscovery Gotchas. In that webcast, many of today’s presenters, along with additional eDiscovery authorities, will share information on eDiscovery gotchas and how to avoid them through project management approaches and creative issue resolution methods. 

Thank you again for your attendance today and please have a great week.  


CLICK HERE TO DOWNLOAD THE PRESENTATION SLIDES (PDF)

HaystackID – Anatomy Of A Second Request – Final

CLICK HERE FOR THE ON-DEMAND PRESENTATION (BrightTALK)