[Webcast Transcript] From Remote Collections to ReviewRight: eDiscovery in Our New Remote World

Editor’s Note: On June 17, 2020, HaystackID shared an overview and explanation of our extensive remote collections and review capabilities as part of the Electronic Discovery Reference Model (EDRM) educational webcast series on remote eDiscovery. While the full recorded presentation is available for on-demand viewing via the HaystackID website, provided below is a transcript of the presentation as well as a PDF version of the accompanying slides for your review and use.

From Remote Collections to ReviewRight: eDiscovery in Our New Remote World

Advances in remote eDiscovery technologies coupled with developments that are restricting the ability of organizations to support traditional onsite eDiscovery operations are accelerating the need for remote collections and remote legal document reviews. However, not all remote offerings are equal, and knowing how to evaluate and compare different offerings may mean the difference between positive outcomes and unacceptable results.

In this presentation, industry eDiscovery authorities will share remote eDiscovery insight and demonstrate how HaystackID’s remote offerings may benefit eDiscovery professionals as they consider eDiscovery in our new remote world.

Webcast Highlights

+ Defining Remote eDiscovery: Definitions, Differences, and Decisions
+ Considering Remote Collections: Targets, Tasks, and Technologies
+ Reviewing Remotely: From Reviewer Selection to Secure Technologies
+ Remote eDiscovery Best Practices: Practical Considerations and Recommendations

Webcast Host

+ Mary Mack, CEDS, eDEx, CISSP, CIAM – As CEO and Chief Legal Technologist at EDRM, Mary is an acknowledged industry expert, author, and speaker who is frequently sought out for her commentary, insight, and teaching on eDiscovery.

Presenting Experts

+ Michael Sarlo, EnCE, CBE, CCLO, RCA, CCPA – Michael is a Partner and Senior EVP of eDiscovery and Digital Forensics for HaystackID.

+ John Wilson, ACE, AME, CBE – As CISO and President of Forensics at HaystackID, John is a certified forensic examiner, licensed private investigator, and IT veteran with more than two decades of experience.

+ Vazantha Meyers, Esq. – As Vice President of Discovery Services for HaystackID, Vazantha has extensive experience in advising and helping customers achieve their legal document review objectives.

+ Seth Curt Schechtman, Esq. – As Senior Managing Director of Review Services for HaystackID, Seth has 15 years of industry and 13 years of big law experience focused on legal review.

Presentation Transcript


Mary Mack

Hello and welcome to the EDRM Global Webinar Channel. My name is Mary Mack. I’m the CEO and Chief Legal Technologist for EDRM. Today’s remote offerings webinar is From Remote Collections to ReviewRight: eDiscovery in Our New Remote World. It’s sponsored by our wonderful partner, HaystackID, one of our very first partners, who also sponsored the inaugural Legalweek Jumpstart for us. Our faculty experts are Mike Sarlo, John Wilson, Vazantha Meyers, and Seth Schechtman. We welcome your questions and feedback in the console. All questions are anonymous. This webinar will be recorded and available for replay at your convenience. We’re very grateful you are spending time with us during these challenging times. EDRM is augmenting our substantive webinars with practical ones like this one to support our community as we adapt to our changing times.

Our moderator today is Rob Robinson, the Chief Marketing Officer of HaystackID, and a principal analyst for ComplexDiscovery. Rob is a friend of EDRM and has been very helpful during our first year, co-creating the Legalweek Jumpstart that HaystackID sponsored. Rob, welcome to you and your team.

Rob Robinson

Thank you very much, Mary. We truly appreciate it and are grateful for the opportunity and thank each of you for attending today’s webcast. We know how valuable your time is, and we appreciate your sharing it with us.

Today’s webcast, as mentioned, is kindly hosted by EDRM and is part of HaystackID’s monthly educational series of presentations conducted on the BrightTALK network, primarily designed to ensure listeners like yourself are proactively prepared to achieve your computer forensics, eDiscovery, and legal review objectives during investigations or litigation. Today, as Mary mentioned, our expert presenters will share on remote discovery, and our first expert presenter is Michael Sarlo. Michael is a Partner and Executive Vice President of eDiscovery and Digital Forensics for HaystackID, and in this role, Michael facilitates all operations related to electronic discovery, digital forensics, and litigation strategy, both in the US and abroad, for HaystackID.

Our second presenter is digital forensics and cybersecurity expert John Wilson, and as Chief Information Security Officer and President of Forensics at HaystackID, John’s a certified forensic examiner, licensed private investigator, and IT veteran with more than two decades of experience working with the U.S. government and both public and private companies.

Our next presenter is Vazantha Meyers. Vazantha serves as the Vice President of Discovery Services for HaystackID, and she has extensive experience in advising and helping customers achieve their legal document review objectives. She’s also recognized as an expert in all aspects of traditional and technology-assisted review. Vazantha graduated from Purdue University and obtained her JD from Valparaiso University School of Law.

And last, but certainly not least, is our eDiscovery legal document review expert, Seth Schechtman. As Senior Managing Director of Review Services at HaystackID, Seth has 15 years of industry and 13 years of big law experience focused on legal reviews supporting multimillion-dollar review projects, including class actions, MDLs, and second requests.

As Mary mentioned, today’s presentation will be recorded for future viewing, and a copy of the presentation materials is available for all attendees. You can access these materials directly beneath the presentation viewing window on your screen by selecting the Attachments tab on the far left of the toolbar beneath the viewing window. Also, the recorded webcast will be available for viewing on the EDRM website, the HaystackID website, and the BrightTALK network immediately following today’s live presentation. At this time, I’d again like to thank EDRM, Mary, and Kaylee for the opportunity to present today, and I’ll turn the mic over to our expert presenters, led by Michael Sarlo, for their comments and considerations on our new world of remote discovery. Mike?

Michael Sarlo

Thanks so much, Rob, for the introduction, and thanks to all the EDRM folks as well for allowing us to be here. This is Mike Sarlo from HaystackID speaking. I’ll be emceeing the event along with the rest of my colleagues. We’ve got, I think, a really good presentation here that’s highly relevant for the audience today, focusing on remote eDiscovery with a more granular look at data collection and document review. We’re going to kick off by setting the stage for what remote eDiscovery was and what it has become. We’re going to talk about different strategies for adapting to the remote world as practitioners, be it a lawyer, a legal professional, or anybody else involved in the eDiscovery or information lifecycle process. We’re then going to get into somewhat of a deep dive on the key aspects that make a secure remote review and how that relates to HaystackID’s ReviewRight technology, and then we’re going to dig a little bit deeper into the nitty-gritty of general review workflow in a remote world.

So, really, the first question is what is remote eDiscovery, and I like the definition here. It’s funny because I wasn’t thinking EDRM when we created this, and I guess EDRM works very well, so it’s pretty poignant. What we see remote eDiscovery as is a process of executing on a task, phase, or action across the EDRM where physical distance between humans, data sources, software, experts, legal resources, and legal venues is bridged via technological means to achieve all the different types of goals you could go on and on about here. I think the world certainly had been moving remote. People ask me how I’m adapting, and I always say that I’ve been working fairly remotely for several years now, but what we’ve found is that many of our clients haven’t been, and case in point, certain organizations, be it law firms or even corporations, are better positioned for adapting to remote eDiscovery workflows in general. We find that organizations with strong information governance policies and/or eDiscovery procedures tend to adapt, or have adapted, very quickly.

Typically speaking, we always say this, it’s really up to eDiscovery practitioners to guide organizations, and what we find is that we’re working closer than ever with IT and custodians directly, and there’s really a lot of scoping that goes on there.

I don’t know, John, if you want to maybe touch base on just some of the different things here.

John Wilson

Yes, absolutely. When you start getting into all of this, the world has definitely changed. There are travel constraints, there are health constraints, and you’ve got to make sure that you have social distancing, that your staff can wear masks, check temperatures, and wear gloves if necessary. Even with the airlines announcing this week that most of them are going to require masks while you’re on the flight, all of these changes have had a significant impact on what we can and can’t do. In some areas, legal services have been deemed an essential business; in other areas, they have been deemed not essential, depending on your local jurisdictions. There’s a lot of impact across all of these things, and those are the pretty obvious things. The things that are more difficult to see, but that have had some substantive impact, are the logistics and supply chain disruptions. We needed to get additional systems for our remote kits or additional hard drives for our collection work, and those things were not available because a lot of them were coming from China or from factories that had been closed. So, you’ve got to deal with all of these additional logistical challenges and how you move all of that forward, and then you have your internal disruptions.

Within an organization, normally, to get a collection done, you need to speak to IT; you need IT to provide maybe an administrative account or special permissions. But now, all of a sudden, the IT teams are completely maxed out, and they’re not able to come to a certain location because of the restrictions. There are all sorts of challenges that have had a significant impact.

So, when you start looking into remote eDiscovery and doing remote collections and remote forensics, organizations that have info governance policies in place, and eDiscovery systems and policies in place, are the ones that can adapt much more readily. That’s not to say we can’t be successful with a smaller organization without an InfoGov program or an eDiscovery program, but those organizations have the policies, processes, and procedures that help with the transition to remote efforts: being able to have remote connectivity, being able to get remote control and access to systems, and dealing with the physical geography issues, the challenges of, hey, the office is closed, we’re not allowed to have vendors or outside parties come to the offices because of COVID. All of that has to be addressed within the remote discovery process, because in some instances you have to have an actual remote kit, some hardware that gets shipped out and gets connected, and under these restrictions, you may not even have the ability to have somebody connect those things. So, that’s where the larger organizations are definitely better suited. They have the mechanisms in place, they have best practices for dealing with legal hold and preservation requirements internally, and they have the infrastructure to help support all of that. They have the IT capability, the staff that’s already got a VPN set up, all those sorts of things.

Mike, anything you want to add?

Michael Sarlo

No, it’s all true. Certainly, we’re oftentimes having to really gauge who’s on the other side of the line, I’ll add that. As we’re dealing with small organizations that don’t have centralized IT, there’s a lot of expectation setting that we go through on the data collection side. As soon as we hear about a matter, we’re pushing them towards remote, and it’s important to realize that even when you are doing remote, centralized IT is great when a company has that, along with strong asset tracking and strong lines of communication to their employees. A lot of startup cultures, or situations where you’re dealing with a ton of C-suite-type folks, are not as manageable when you’re trying to schedule these types of remote collections, and when we think about rescheduling, it is more for personal devices: PCs, laptops, desktops, things like that.

One thing we have to be really aware of as a large company, where we’re also advising our clients, is some of the new proportionality standards that we’re seeing across many of our matters as we think about timelines and the capability and ability to execute, be it on certain discovery objectives or on any type of production deadlines. Where it’s become a big issue has been access to physical assets, and all of this turns on data subject safety. Even if we’re shipping a remote kit, we’ve had people ask us whether they are sanitized, and when I say sanitized, I mean physically cleaned. Absolutely, right? So, we came up with some knowledge [slick] there. You have to think about the practitioners, the lawyers, as well. There are certain areas that are opening up for in-person depositions; you have other regions where that’s not the case. The regional differences and regional perceptions around the pandemic certainly can make it difficult to fall into standard operating procedures. In certain cases, just identifying evidence can be critical. We’ve had situations where maybe it’s more of an adversarial forensic event, where we’re in between two parties as a neutral and there’s a phone that needs to be imaged, and with these remote kits, sometimes you don’t know what you’re seeing or what you’re touching, and it’s always great to get something in there so you can take pictures of it. So, we’ve been using webcams and things like that in these scenarios to make sure that we can identify visually and attest to what we’re capturing, in some cases with more of a personal device that doesn’t have an asset [inaudible], and really just getting to the evidence and being able to do it in that manner.

And think about paper. You have some cases where you have boxes and boxes of paper where the government is asking for this stuff, and nobody is really scanning paper, and/or a lot of corporations aren’t going to allow their employees to go in and pick up paper, so there’s a lot to think about, and it can be a strategic asset as well. Some practitioners, I think, have definitely gotten very strategic around the timeline, using COVID as an excuse, but one thing we really do see is that, at the end of the day, these shortcuts matter. We’ve been asked to take certain shortcuts, and we’re always reminding our clients that COVID-19 does not ease your burden to prevent spoliation. It’s just so important to be thinking downstream about data sources that tend to die on the vine, and not to wait too long to preserve because of the social distancing norms, so that you don’t find yourself in a bad situation, because people are always going to go after this stuff any time there’s even a hint of it.

It also gets back to standard workflows for large cases. I think when a lot of firms [inaudible] get a new client, nobody really wants to throw them straight into discovery. I think discovery is always very scary for organizations, especially as you start to see the bills on the lawyer’s side, the vendor’s side, and the expert’s side add up, and oftentimes, more from a relationship standpoint, some litigators may opt for a piecemeal workflow around data collection and eDiscovery processing. In the pandemic world, we need to think sooner and bigger about potentialities that could happen. So, if we have a custodian list, and we think we might have five or 10 more custodians, should we go and get those now, knowing that we may be under a deadline? Are there repositories that are going to rely on servers being shifted? Do we need to implement infrastructure that might be very difficult, depending on certain travel restrictions? And then, with the devices and automated kits, or even if you’re doing remote dial-in, you always have to plan for some failure, not necessarily human error, but failure around being able to get in contact with the end person.

What it all comes down to is the data mapping, and I know it’s an eDiscovery tale as old as time, but it’s really more important than ever now, and you just need to have a good roadmap of all these different things and these different data sources.

And then certainly, again, on the social distancing piece, we’re always operating in the realm of caution, and we always try to give our clients that opportunity. In certain regions, we are weighing, and have been weighing, doing more on-site collections, and have been doing on-site collections, and it really is going to depend on the comfort level of the organization, and even our staff, and just what the matter requires.

John, maybe you can speak about ways that you’ve created more cleanroom imaging processes.

John Wilson

Yes, so again, the challenge becomes dealing with the social distancing norms and operating within them. You’ve got to have some really well-thought-out, well-designed workflows, because you’ve got to deal with (a) has the system been disinfected? Do you have to interact with the custodian directly, or can you interact at a remove: hey, the drop box or drop location is over here, you sign the paper on the red Xs where the Post-its are, and then we come in, we grab it, we do what we’ve got to do, and then we return it. So, there are a lot of challenges around building those entire workflow processes to ensure that you’re providing for the COVID restrictions and following CDC guidance regarding the sanitization of things, not only the custodians’ devices as they’re bringing them to you, but also the devices that you have to use to do the work, which other people may have to touch or connect to their systems within their corporate environment. It really does become a much more substantive workflow challenge, where you’ve really got to have that substantive planning element: OK, we can’t have two custodians show up at the same time, so we have to have them scheduled, one custodian every two hours, or whatever the appropriate rotation needs to be given the work that has to be accomplished and the workflows. So, you have to think through all of those things, and that wasn’t always necessary before; when you had a couple of custodians at a location, you might just go meet them and grab all their stuff. With the COVID restrictions, you have to follow these much deeper protocols. You’ve got to have that level of separation between the custodians, the examiners, and the operators within the organization that you need to interact with.

Michael Sarlo

Yes, and really, it’s about integrating early on and setting up the organization around integrating into the eDiscovery process. This is especially true for those cases that we know could likely become a larger investigation, or just your typical, more robust matter where we expect shifting goalposts. Oftentimes at the point of contact, when we hear about a matter, there are certain things we’ll need to set up remote access, even as part of our contracts. It’s incredibly important, I think, for vendors, and even law firms, to be prepared to answer a lot of security questions. HaystackID handles security audits and questionnaires on a daily basis, and you start to build a library of these, but being able to respond very quickly, and really thinking with a forward-leaning eye towards security, is really important to streamlining and getting moving quickly. We usually ask for those audits to be conducted early on, which, back eight years ago, maybe wouldn’t have been the case; I wouldn’t have been asking to fill out these long forms. But it just makes it so much easier to bridge into repositories that may come into scope at any time during an engagement, especially when you have very large organizations with hundreds or thousands of data repositories, which could be web-based tools or different types of databases, each of which may have its own [SME] leadership team and security protocols. Being able to have that gold-standard document that has been fully approved is critical, and so is getting the IT people, and cyber, and legal very aligned early on regarding what could be seen as more paradigm-breaking workflows that would have been an absolute no-no before the pandemic.

I would say VPN alone oftentimes is not suitable for remote collection tasks. We spend so much of our days dealing with web-based repositories that are hosted behind our clients’ firewalls, JIRA, Confluence, custom ERPs, source code repositories, the list goes on and on, and oftentimes we get set up, or get pushed, to access data through a VPN. Usually, we’ll get email addresses and go through some vetting to get our examiners into a client’s domain formally. Oftentimes, the VPN is never going to be fast enough; you really need to clearly articulate dial-in access needs and make sure that you’re pushing for dedicated virtual machine infrastructure early on if it’s a larger case. It’s really about bringing the vendor’s infrastructure, in some scenarios, inside your client’s firewall, and again, it’s about the application-based discovery where we spend so much of our time these days. To get the throughput required, you may have millions of small web pages with attachments, or database records, that not only need to be captured but need to be presented in a review platform like Relativity in a way where attorneys can act on that data. It’s so important to have that extra horsepower with the data volumes we see, or not even just the data volumes, but the number of data points that need to be analyzed to produce a single record.
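A back-of-envelope throughput calculation illustrates why a VPN alone often isn't fast enough for large collections. The link speeds and efficiency factor below are invented for illustration, not figures from the presentation:

```python
def transfer_hours(gb: float, mbps: float, efficiency: float = 0.8) -> float:
    """Hours to move `gb` gigabytes over a `mbps` megabits/second link,
    discounted by an assumed effective efficiency (protocol overhead,
    encryption, contention on a shared VPN concentrator)."""
    megabits = gb * 8 * 1000          # GB -> megabits
    effective_mbps = mbps * efficiency
    seconds = megabits / effective_mbps
    return seconds / 3600

# Hypothetical comparison: 500 GB over a 20 Mbps VPN tunnel
# versus a 500 Mbps dedicated virtual-machine pipeline.
vpn_hours = transfer_hours(500, 20)         # roughly 69 hours
dedicated_hours = transfer_hours(500, 500)  # roughly 2.8 hours
```

Even with generous assumptions, the VPN path takes days where dedicated infrastructure takes hours, which is the practical argument for standing up virtual machines behind the client's firewall early.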

Certainly, there are different approaches to this. There’s on-premise, behind-the-firewall Relativity; usually, you don’t need to do that unless you’re in certain countries, but getting a processing tool behind your client’s firewall can have a great benefit if you have huge data volumes, or if there’s not a clear line of who had access to what data. Then there’s the situation where we’re just bringing it out: we have some form of dedicated infrastructure, be it more of a remote forensic lab behind our client’s firewalls, or different types of forensic tools, more heavy-gauge stuff, installed, and that’s coming straight out to us. The hybrid model is a little bit of both, the processing and forensics workflows, and certainly for some very sensitive reviews, we have even staffed reviewers behind our clients’ firewalls on terminals on their end, and that’s almost pre-pandemic, but just some food for thought.

And then finally, all the stay-at-home stocks are doing great while the rest of the economy is declining. There’s a reason for that: everybody is using web-based infrastructure to conduct business. When you talk about supply chain, the funny thing that we don’t think of, and I’m just thinking of it now, is the squeeze on internet bandwidth. We all suffered this when COVID first hit and everybody would try to get on a conference call at 10:30 across the entire country and the phone lines weren’t working, and there’s been so much work to stabilize that capability in this country in a rapid timeframe. But from an eDiscovery standpoint, clients’ acceptance of the cloud as oftentimes maybe even de facto more secure, and the way they view the infrastructure of security-minded companies like HaystackID, has made it much easier to turn these into short conversations about scaling infrastructure. Again, maybe it’s a hybrid approach, where you might have a vendor-spun-up Azure instance that has more of a direct tap into your client’s Azure instance, and there are ways to move data very quickly between things like that.

So, really, I always encourage people to think about how they can leverage the cloud to solve more complex computational challenges and where you might need infrastructure where it would be difficult before, it’s become much easier.

So, we’ll kick it over to John, and we’re going to now start talking about ReviewRight, which is really a combination of services under the ReviewRight brand name and our Document Review Services Division. In particular, we’re going to be talking about Secure Remote, and I always tell everybody that a lot of service providers had to switch and move to remote with their reviews over the past three months. HaystackID has been doing it for the past eight years, with over 800 successful remote reviews, second requests, and very large investigations, and what we’ve found is that remote, with the right people, the right processes and technologies, and checks and balances, is always going to be cheaper, and faster, and better. Where there have been a lot of questions is on the security piece, and I think we have a really good solution there.

So, I’ll kick it over to John and the rest of the group. Thank you.

Vazantha Meyers

So, I’ve got to reference John a little bit and piggyback on what Mike said. A lot of the questions that we get about remote review security actually ask how we replicate brick-and-mortar security, and we’ve done three things, which we’re going to go over in the next three slides: we’ve tried to replicate access controls, system controls, and physical controls.

So, one of the ways in which we control access is by using a virtual review environment that controls how reviewers and our clients have access to our internal systems, meaning the review tools, the chat system, and the email system, and we want control over that so that we can control the ins and outs of our environment in the same way we can when it’s physically accessed. Through this gateway, through this portal, we can control access to the internet. We can control what reviewers have access to in terms of whitelisted sites, and what they don’t have access to in terms of blacklisted sites. We can control access to personal email accounts, limiting connections to sending emails from certain accounts and to certain accounts, and receiving documents and emails from certain accounts. We can also restrict downloading, PDFs, printing, packaging, etc. We can also control whether or not a reviewer can access and install external software, and whether or not they have access to applications that we don’t approve of. The whole idea is controlling what comes into our environment and what we allow out of our environment. We have some other controls that John’s going to go through on the next slide, digging a little deeper than this high-level overview, into system controls, network controls, and software requirements.

John, I’ll turn it over to you.

John Wilson

Yes. So, within our environment, V was already speaking about a lot of the environmental controls: we can lock the machines down to only allow the review application access from certain geo-restricted areas by specific users, we can control all aspects of any traffic coming in and out of the machine, and so on. Beyond that, we take a lot of steps with the systems in general and have a certain level of requirement around the systems that are utilized for access. They have to have a certain level of OS, they have to have a certain level of patches applied, and we have minimum password standards that are required. We require two-factor authentication. We make sure that antivirus is up to date, that there’s no lag in the antivirus updates or the security software updates on the systems; that there’s a private VPN secure broadband connection to the jump box; and that there’s router firewall gateway control and consistent speeds. A thing that is frequently overlooked when you’re talking about how you’re addressing remote review is making sure that the reviewer location has adequate speeds, so documents are going to load, and load appropriately, and it’s not going to slow down the review process.

And then lastly, again, making sure that there’s a mobile device or a token ID, whether a hardware token, a software application token on a mobile device, or SMS authentication on a mobile device, for the two-factor or multi-factor authentication within the environment.
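The system requirements John lists, minimum OS patch level, password standards, two-factor authentication, and current antivirus, are essentially an endpoint compliance checklist evaluated before a machine is allowed into the review environment. A minimal sketch (the field names and numeric thresholds are invented for illustration, not HaystackID's actual requirements) might look like:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical endpoint posture record; fields and thresholds are
# illustrative only.
@dataclass
class Endpoint:
    days_since_os_patch: int     # age of the last OS patch, in days
    min_password_len: int        # enforced minimum password length
    two_factor_enabled: bool     # hardware/software token or SMS 2FA
    av_signature_date: date      # last antivirus definition update

def compliance_failures(ep: Endpoint, today: date) -> list[str]:
    """Return the list of failed checks; an empty list means the
    endpoint passes this (hypothetical) checklist."""
    failures = []
    if ep.days_since_os_patch > 30:
        failures.append("os_patches_stale")
    if ep.min_password_len < 12:
        failures.append("weak_password_policy")
    if not ep.two_factor_enabled:
        failures.append("no_2fa")
    if (today - ep.av_signature_date).days > 7:
        failures.append("antivirus_out_of_date")
    return failures
```

In practice a gatekeeping script like this runs at connection time, and a machine with any failures is kept off the jump box until remediated.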

Go ahead, Seth.

Seth Schechtman

So, in addition to everything that V and John spoke about, we have physical workspace security as well as privacy controls, and you always want to make sure that reviewers are agreeing to these things while they’re working. We require that work is performed in a secure space that’s physically and visually secure: it needs to have a lock on the door, and it needs to be locked while they’re working so nobody else can access it. It also needs to be visually secured, so if there’s a window, the blinds or curtains need to be shut, and the monitors need to face away from the window. Other things we require them to agree to include no voice-activated home or phone devices in the room. In terms of privacy controls, there are certain time periods after which their computers automatically lock and boot them out of the system, and the computers can’t be left unattended and unlocked. While we’re having team calls, which we will talk about later, headphones are required. If they are watching documents or viewing records with sound, they have to use headphones there as well. On client calls and team calls, they can’t mention the name of the client or the name of the case. Also, no handwritten notes, no recordings or pictures, and no printing of review materials; we keep everything electronic and secure.

We codify these requirements in our documentation, so for every project, there’s a confidentiality agreement, and we also have them sign a specific work agreement. We have a company handbook with the code of ethics that they must sign and attest to. As with all projects, they must attest that they don’t have conflicts, and we always remind them that they’re bound by their jurisdictional rules of professional responsibility.

I’m going to talk a little bit about reviewer selection. Basically, we come at it with the philosophy that it would be great to know how someone is going to do at their job before they actually do it. That’s certainly the case in other industries and other jobs, but in the review world, we don’t want to get into the situation where you have reviewers who are not quite getting the concept of the review or don’t meet minimum standards. There are four points that I’m going to talk about today: qualification, identification, screening, and then ratings and certification.

The first pillar that we build on is what we call ReviewRight Match. We take the inputs from the project – and this is every single project – the practice area as well as the industry. We keep those things separate and distinct, so someone could have expertise in an industry but not in the practice area, which may not be as important for a case. Certainly, we have cases in areas that are very esoteric, and we want reviewers to have that background. The analogy I always use is: take a random document that was sent X-number of years ago and tell me what it’s about and why you sent it. That’s what we’re asking reviewers to do, except we’re asking them to look not at their own documents but at a random person’s documents, and, within an hour or two of training and reading the background materials, to be completely confident and able to code accordingly. So you want reviewers tailored to the individual projects, whether it’s specific practice-area knowledge or just industry background and experience. I’m sure everyone on this call knows the things they’re super passionate about – just think of a review in that space, how much more fluent you would be in a document, and what that learning curve would be versus a totally foreign subject.

In terms of skills, we test all the reviewers that come into our system. To know their qualifications, we give them a sample protocol and even a practice exam so they can get the feel for it, and then we give them documents pulled as a random sample from a data set. We ask them four questions for each of those documents, touching on privilege, issue coding, relevance, and responsiveness, and we score all of those reviewers on accuracy and speed. On the remote side, which Mike led off with – better, faster, cheaper – the reason is that you have a larger pool to pull from. When you’re pulling from one locale, let’s say DC or New York, or even smaller, lower-cost places, the pool of available reviewers is limited by who is available. Yes, pre-COVID we had offices in those locations where we would get the best of the best in those locales, but for other parties out there, narrowing your pool really limits your access to people at the high end. So what we do is target people who not only have the requisite skills and background in the industry or for the matter, but are also the fastest and most accurate reviewers. That allows you to save on QC or focus on the important things – hot docs, interesting docs – and you get faster reviewers. Again, the way you get better and faster is by opening up your pool and testing reviewers as they come into the system. And we don’t stop there: we track reviewers as they move through our system.
For every task that they do, we track speed and accuracy, and they get rated at the end of each of their projects and tasks. We roll that up and feed it into our back end, and that’s how we’re able to match reviewers to projects, not only from that initial score but from all of the metrics we keep along the way.
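To make the scoring idea concrete, here is a minimal sketch of how a candidate’s test could be graded on accuracy and speed against an answer key, as described above. The data structures, field names, and function are hypothetical illustrations, not HaystackID’s actual system.

```python
from dataclasses import dataclass

# Hypothetical answer key: doc_id -> the correct answer for each question
# (the real test covers privilege, issue coding, relevance, responsiveness).
ANSWER_KEY = {
    "DOC-001": {"privilege": "no", "responsive": "yes"},
    "DOC-002": {"privilege": "yes", "responsive": "yes"},
    "DOC-003": {"privilege": "no", "responsive": "no"},
}

@dataclass
class TestResult:
    accuracy: float       # fraction of answers matching the key
    docs_per_hour: float  # review speed on the test set

def score_candidate(answers: dict, minutes_spent: float) -> TestResult:
    """Score one candidate's coding of the sample documents."""
    total = correct = 0
    for doc_id, key in ANSWER_KEY.items():
        for question, expected in key.items():
            total += 1
            if answers.get(doc_id, {}).get(question) == expected:
                correct += 1
    accuracy = correct / total if total else 0.0
    speed = len(ANSWER_KEY) / (minutes_spent / 60) if minutes_spent else 0.0
    return TestResult(round(accuracy, 2), round(speed, 1))
```

Rolling these per-task scores into a back-end profile, as the speaker describes, is then just a matter of storing each `TestResult` alongside project metrics over time.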

In terms of coming into our ecosystem, we do the traditional things – background checks, which we will talk about – and we also have phone interviews. On the remote side especially, we’re interested in asking questions to see whether they can succeed at remote work. Obviously, remote is different from everyone being in a review center together. We’ve developed systems and processes to ensure that reviewers are kept on the same page and that they’re monitored and audited, which we will talk about, but there are certain questions we want to ask of our reviewers: whether they’re self-starters, whether they’re team players, whether they work better alone or within a team, how they communicate, how they raise issues – trying to identify individuals who will succeed at these tasks.

Of course, you will have a copy of this presentation, so you can go through it and see all the particular questions that we ask.

In terms of background checks, license verification, conflicts checks, we do all of those things as well before any of the reviewers are approved to work on any matter.

Just a little bit about the information that we track on reviewers: we rate them, and we also certify them as they move up from first level to QC to assistant review manager to review manager. We want to make sure they have the requisite skills and abilities, and we’re able to test them along the way so that when they get on projects at those levels, they have the ability to do certain things.

That leads into the next part of the discussion, which I will hand over to V, which talks about Managed Review, not just staffing.

Vazantha Meyers

Thank you. One of the goals of Managed Review is to make sure it’s sensible, efficient, and cost-effective. As Mike pointed out earlier, traditional rules still apply even though we’re in a pandemic and even if we’re in a remote review workflow. One of the things we want to do is bring our experience in this industry to bear and make sure that we’re giving advice to counsel in terms of culling down data and making review more efficient. We have managed review staff with years and a breadth of experience who can bring some of that to the table. The idea is to increase the speed of the review – we’ve talked a little bit about selecting the right reviewers to do that – and to increase the efficiency of the review, which deals a lot with workflow and the tools that you bring to the table.

I’d like to speak about all the tools that we have in the tool chest. We want to bring them all to bear and apply them to every project, to make sure that we’re doing everything possible to reduce the data set and increase the speed, efficiency, and defensibility of the review.

A lot of you are going to be familiar with some of the methodologies that we can apply to a review. It still starts, for the most part, with search term methodology: testing the search terms, analyzing the search terms, making sure that we’re being efficient in how we use them. Maybe it doesn’t apply to every data set, but we take it very seriously, and the idea is to cull down the data set. We also want to optimize workflow. We want to make sure that we’re applying the proper tools – in terms of analysis, culling, or prioritized review – so that we can reduce the data set and increase the accuracy and speed of the review. One of those is custom de-duping. I’m sure you’re familiar with the idea of de-duping across custodians or across the universe; we’ve also applied de-duping at the workflow level to optimize the efficiencies in that particular workflow.
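The de-duping described here can be sketched as a simple hash-based pass that keeps the first instance of each content hash, either globally or within each custodian’s collection. This is an illustrative simplification (real eDiscovery de-duplication typically hashes normalized metadata and content), and the function and field names are assumptions, not HaystackID’s implementation.

```python
import hashlib

def dedupe(documents, scope="global"):
    """Keep the first instance of each content hash.

    documents: iterable of dicts with 'custodian' and 'content' keys.
    scope: 'global' de-dupes across all custodians;
           'custodian' de-dupes only within each custodian's set.
    """
    seen = set()
    kept = []
    for doc in documents:
        digest = hashlib.md5(doc["content"].encode()).hexdigest()
        key = digest if scope == "global" else (doc["custodian"], digest)
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept
```

The `scope` switch is the point of “custom” de-duping: the same hash can be deduped globally for one workflow and per-custodian for another, depending on how the production needs to be organized.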

We also do non-responsive document identification, and I really can’t stress this enough. It’s not technology-driven in the way that continuous active learning or predictive coding is – it’s actually taking documents and saying, hey, these are [this and that] and we can identify them as non-responsive, so let’s look for more of those documents and either apply mass coding or cull them completely out of the system. It’s effective, and we’ve seen massive reductions in cost when we apply that technique.

We also do single-instance review of search term hits, meaning we will look at just the documents that hit on search terms and not review the rest of their families unless the hit document is responsive. I’ve heard that called several terms – ‘first-term response’ is one – but the idea is that we’re only obligated to look at the documents as you negotiated it, and only if the document that hits on the search term is responsive do we need to look at the rest of the family. It’s a very impactful culling methodology, but it’s very simple to implement, so we try to do it, or at least suggest it, on all projects.
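The family logic just described can be sketched in a few lines: always review the search hits, and expand to the full family only when a hit in that family is coded responsive. The data shapes here are hypothetical illustrations of the workflow, not any particular platform’s API.

```python
def docs_to_review(families, hits, responsive_hits):
    """Select documents requiring review under hit-first family logic.

    families: family_id -> list of doc ids in that family.
    hits: set of doc ids that hit on a search term.
    responsive_hits: set of hit doc ids coded responsive at first pass.
    """
    review = set()
    for family_id, members in families.items():
        family_hits = [d for d in members if d in hits]
        review.update(family_hits)  # the hits themselves are always reviewed
        if any(d in responsive_hits for d in family_hits):
            review.update(members)  # responsive hit -> pull the whole family
    return review
```

The culling effect comes from families whose only hits are non-responsive: their non-hit members never enter the review queue at all.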

Also, propagation of coding and redactions, of course, saves a lot of time, particularly when it comes to redactions. We work with data sets that have a lot of complexity: documents are part of different families, but sometimes one of the attachments is the same, and we want to apply coding or redactions across all of those very similar documents. When we can, we try to do that. All of these techniques – including some we haven’t talked about, like predictive coding and active learning – can be applied to data sets, and the idea is to reduce the data set and increase the speed of the review. Not only do these methodologies do both of those things, but they also increase the consistency of the coding and, therefore, the defensibility of the project or the review.

I’m going to turn it right back to Seth so he can talk about the Gauge Analysis, which is a very effective QC process.

Seth Schechtman

Thanks, V. I know we talked a lot about how we screen reviewers coming in and keep all the data from all their projects, but that doesn’t mean a given project is something they will pick up very quickly, so something that we absolutely love to do – and we could do an entire presentation on this – is what we call Gauge Analysis.

Basically, we pull documents from the review set that we think are good examples of fine-line calls – some things on either side of the fence, some things that reviewers may use discretion in coding. We have the reviewers review and code all of those documents – literally the same documents. The sample size depends on the complexity of the protocol: on the more basic side, maybe 15-20 documents; on the more complicated side, when it comes to privilege, 50 or maybe even north of that. What does this allow us to do? It allows us to compare and contrast reviewers’ coding against each other. We also want counsel to code those same documents. Why? Because then we’re able to compare our coding to their coding. We can’t tell you how many reviews we’ve done where our internal team was 100% consistent, and then we get counsel’s feedback on those documents and they say, ‘no, this is what we want the coding to be’. Why does that happen? As much as we love to think that everything we draft is super clear and everyone can understand it, sometimes it’s not. Different people can interpret the same words different ways. Also, the documents may be different than counsel was anticipating. Why? Maybe there’s a hole in the protocol that they thought they had addressed and now they have to change it, or maybe they want to change course – what we did was correct per the written guidelines’ construction, but they want to do something different. It allows us to get on the same page internally, as well as with counsel, right away.
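The core comparison behind a gauge analysis can be sketched simply: every reviewer codes the same document set, counsel codes it too, and each reviewer is scored on agreement with counsel’s coding. The names and data shapes below are assumptions for illustration only.

```python
def gauge_agreement(counsel_coding, reviewer_coding):
    """Measure each reviewer's agreement with counsel on a shared gauge set.

    counsel_coding: doc_id -> counsel's code for that document.
    reviewer_coding: reviewer name -> {doc_id -> that reviewer's code}.
    Returns reviewer name -> fraction of gauge documents matching counsel.
    """
    scores = {}
    for reviewer, coding in reviewer_coding.items():
        matches = sum(
            1 for doc, code in counsel_coding.items()
            if coding.get(doc) == code
        )
        scores[reviewer] = matches / len(counsel_coding)
    return scores
```

A team that is internally consistent but uniformly low against counsel’s answers is the protocol-drafting problem the speaker describes: the reviewers agree with each other, just not with what counsel intended.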

If you don’t do this and the review team jumps in and starts coding, how long is the lag between when the team signs off on documents or releases them to counsel and when counsel phones them? You want to narrow that gap as much as possible, and this is the absolute best way we’ve found to narrow it – immediately after training, before releasing people to the general population of documents.

Now, we will talk about communication. V, I will hand it over to you before you give it back to me to talk about some auditing that we do.

Vazantha Meyers

Thank you. I guess I’m the one that links it back to brick and mortar, but these are also some things we’ve found that replicate what it is to have reviewers in the same room with their QC-ers and their managers. One thing that we’ve done is to make sure that reviewers participate in the mandatory training session. If this was brick and mortar, they would enter the building and we would take attendance, and that doesn’t actually change – Seth is going to talk about auditing in a little bit – but we want to make sure the reviewers are invested at the very beginning of a review. Therefore, training sessions – whether it’s the first training session or iterative training sessions down the road – they have to participate in them. That seems sort of obvious, but we get a lot of calls from reviewers who are situated in a lot of places and have different time schedules, and they want to know whether they can jump on Tuesday onto a project that starts on a Monday. The answer is a firm no from us.

The idea is that part of that training session is to make sure everyone is on the same page. Counsel is invited to that training session. Clients are invited to that training session. We go over the review protocol in depth, in sessions that sometimes run one, two, three hours, and we require reviewers to read the protocol in advance and then ask questions at the time of the session. It’s too valuable to miss, so we make it a requirement.

The other thing that we encourage on projects – depending on the size of the project, its complexity, and how fast it’s moving – is daily or weekly project calls, both internal and external, and the idea is to keep up. A lot of what we talk about in the slides coming up is about having controls in place to make sure everyone is on the same page, and that applies to reporting and questions and answers, which we will talk about a little bit, but it also applies to the ability to do a daily or weekly check-in so that reviewers can get updates on the project. We’ve seen a lot of projects where, after the Gauge Analysis or after some Q&A has come out, the protocol changes or the instructions change, and we want to make sure we stay on top of that. One thing we think is very important is correcting all coding changes, variances, and mistakes very early on, and the daily or weekly project call – I’ve even had calls twice a day because the project was moving so fast – is our ability to keep up with the changes in the project and make sure everyone is on the same page.

It’s also invaluable for the reviewers to be a part of the matter as the living and breathing thing that it is. Questions come up every day – some get answered on a daily basis, sometimes over a week – but we want the reviewers to have the ability to be caught up, as opposed to catching up.

We talked about limiting the reviewers’ ability to print documents or take notes, etc., so we require that instructions are only written – and by written I mean electronic – and we have them available for the reviewers to access in real time, for the most part.

One of the things we don’t have to deal with in remote review is reviewers talking to their neighbors and instructing them on how to code. One of the benefits is that we control how instructions are given through our system.

Another way that we replicate real-time communication is through a secure chatroom, again via the portal I talked about earlier. It allows reviewers to ask questions, whether technical or substantive, in real time, and to have access to QC-ers and review managers who can answer those questions. One of the things I like about the chatroom system we use – and we’ve used it very effectively on projects where 2.2 million documents had to be reviewed in a matter of 20 or 30 days, particularly one that was over the holidays – is that when we had to separate a review into workflows, we were able to separate the reviewers into chatrooms associated with the workflow they were working on. For instance, if we had a team working on privilege review, we could put them in a privilege review chatroom; if we had a team working just on responsiveness review, or redaction, we could put them in that chatroom. It allows the reviewers to focus on the questions that matter to them and see the answers to their coworkers’ questions that matter to them. It gave our RM staff and QC staff insight into all the questions the reviewers were asking, and we could also focus our answers where they were most effective.

Clients can also have access to that – you can restrict or grant permissions – but it’s a really effective tool, and it’s much quieter than a room of talking reviewers, so we think it’s actually better.

We still use a lot of the traditional methods of communicating on a project, and I think everyone is probably very familiar with the concept of Q&A logs. They allow us to memorialize very important, impactful questions and answers that come through – not just as a practical guide for reviewers, but because you also timestamp those questions and answers and where they came from. A lot of times we have questions about privilege terms, privilege names, privilege breakers, and we like to memorialize those in the Q&A logs because they might impact how we QC the data set. That’s very traditional – it’s not limited to remote review – and it’s kept outside the chatroom question and answer for a reason. A lot of the time, the chatroom deals with subjective questions, technical questions, passwords – those still have to be answered, but they don’t really impact the review or QC – so we pull out the questions that do and put them in a communication log, a Q&A log. Not only do the reviewers have access to that Q&A log, but we also send it back and forth to clients. Again, it memorializes the questions and answers, and it really speaks to defensibility – the reviewers are being supervised and their questions are being answered – but it also documents when those questions and answers came in, in case we have to change course or go back and correct a course change.

The other thing that we ask reviewers to do – Seth talked a little earlier about the work-from-home agreement – and one of the terms the reviewers have to agree to before they get on a project is that they will notify us as soon as they have a technical issue, whether it’s on their side or on our side. It’s very important: even though reviewers are using their own networks and systems, and we have requirements for those systems, if something happens on their side or on ours, it can impact the efficiency and timing of the review, so we make it an agreement that they have to notify us of technical issues so that we can resolve or adjust for them. Especially when we get into remote review, it seems like a very simple thing, but it’s very impactful. If someone is struggling to figure out a password because we’ve made it complex, we want to know as soon as they have the problem, not eight hours into the review. The same goes for issues with their Wi-Fi, our connections, etc., so we make it a requirement.

The other thing that’s very important for a review – and we’ve seen this be very impactful – is that a lot of times when we start a review, the client or counsel will come to us with a list of terms, a list of attorneys, some acronyms, some project names, some code words – everything they know about the project. It’s very important to make that a living and breathing document and make sure the reviewers have access to it as a group document, as opposed to notes that they’re taking, because (1) we tell them they can’t take notes, and (2) we want to make sure everyone is reading from the same dictionary. So shared resources, which we make accessible via [inaudible], are very important to a review, and they’re also very important to the team working on the matter – be it an investigation or litigation – to make sure we’re sharing back and forth what we know about the data and what we know about the company.

One of the other things we do – and we encourage this on all projects – is sharing sample documents. We’ve talked about Q&A (questions and answers), but on that same document, or at least in that same communication, we usually include: here are examples of documents that we’re seeing, here’s how we’re coding them, take a look just so you’re aware. You may not necessarily have a question about a document, but if we want to correct something, or it’s important information you should know, we want to share it. A lot of times those sample documents come up because we’ve changed custodians or we’re testing out what ‘hot’ looks like, etc., but we use that same communication flow to also test our understanding of the documents via the feedback-and-sample portion of that document.

So I’m going to turn it over to Seth, and he’s going to talk about how we audit a lot of these requirements. We’ve talked about some communication requirements, etc., and we’re going to get into time auditing, which is also impactful on the efficiency of a review. Everything that we do, we try to audit, test, verify, or at least get agreement on.

Seth, I’m going to turn it back over to you.

Seth Schechtman

Thanks, V, really appreciate it. The title explains it all: trust but verify. You want to make sure that the reviewers are doing the things you’re requiring and asking of them throughout the entire length of the project.

V talked about mandatory participation on calls. With verbal attendance, we want to make sure that they are there, and that they are there for the entirety of the call. Read receipts on project emails: when sending emails, you want to make sure they’re opening and looking at them. Secure chatroom attendance: when the reviewers are working, we require that they are in chat so they can see other questions being asked and ask their own. We audit document history: on the sample documents V talked about, especially documents on the Q&A log, review platforms have a document audit history you can pull to make sure every single reviewer on the team has actually viewed that document. Also, issue log viewing history: on the shared resources, on the share drive where we keep the files, we can see who was last in the document, so if the issue log has changed since the last day and we see that reviewers have not read it, we ping them and tell them to go in and read the log – especially the new entries – or just to remind themselves. You want to make sure that reviewers are keeping up to date with the protocol and keeping abreast of what’s actually going on within the review.

Other audits we do – and we talked about the Gauge, which I get super excited about – include reviewer quizzes, whether at the beginning of the project, the middle, or the end. We want to make sure that they actually know what they’re supposed to be doing and are actually doing it, so we will ask them questions. They could be basic questions about the protocol, about coding propagation, about families, or they could be substantive or procedural issues. We want to make sure they’re following the rules in place at that time, and you can roll those results up, make sure the reviewers are in compliance, and get results out.

We also track DPH – documents per hour – at the team level as well as the individual level, along with overturn tracking. You want to make sure that each individual reviewer is above standard on all the tags they’re using, and if they have weak points – maybe it’s privilege, maybe it’s confidentiality – you want to be pulling metrics on each individual task. When we do QC, we do it by task. Sometimes we QC the entire document, but other times we’re just looking at it for priv or confidentiality, and we track overturns at those field levels so we can roll up metrics based on a random sample or on the entire review population. You want to make sure your review vendors are pulling and gathering this data, to make sure they’re doing their job and have a defensible process.
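The per-reviewer, per-field metrics just described – documents per hour and overturn rates from QC – can be aggregated roughly like this. The record layout and field names are hypothetical, not HaystackID’s actual schema.

```python
def reviewer_metrics(log):
    """Aggregate DPH and per-field overturn rates from QC records.

    log: list of dicts with 'reviewer', 'hours', 'docs', and 'overturns'
         (a mapping of coding field -> (overturned_count, checked_count)).
    Returns reviewer -> {'dph': ..., 'overturn_rate': {field: ...}}.
    """
    totals = {}
    for entry in log:
        t = totals.setdefault(
            entry["reviewer"], {"docs": 0, "hours": 0.0, "fields": {}}
        )
        t["docs"] += entry["docs"]
        t["hours"] += entry["hours"]
        for field, (turned, checked) in entry["overturns"].items():
            prev_t, prev_c = t["fields"].get(field, (0, 0))
            t["fields"][field] = (prev_t + turned, prev_c + checked)
    report = {}
    for reviewer, t in totals.items():
        report[reviewer] = {
            "dph": t["docs"] / t["hours"] if t["hours"] else 0.0,
            "overturn_rate": {
                f: turned / checked
                for f, (turned, checked) in t["fields"].items() if checked
            },
        }
    return report
```

Tracking overturns per field, rather than per document, is what lets you see that a reviewer is strong on responsiveness but weak on privilege, as the speaker notes.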

We also do time and productivity audits. I mentioned individual DPH; you also want to check their clock-in time against their first document time and their clock-out time against their last document time, and see whether any gaps in their day are explainable – client calls, reading issue logs, reading sample documents, all of that is billable – so you want to make sure there aren’t any gaps that are unexplained. We also track by task, so if they’re doing first-level review, that’s easily matched up in the system against usage or active time. We also make sure that any time spent in training is categorized separately, so you can double-check that time outside the system is reasonable for what they’re supposed to be doing.
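A minimal sketch of that time audit: compare clock-in/clock-out against the timestamps of the first and last coded documents, and flag stretches between consecutive documents that exceed a threshold. The threshold and data shapes are assumptions for illustration.

```python
from datetime import timedelta

def time_gaps(clock_in, clock_out, doc_times, threshold_minutes=30):
    """Find gaps in a reviewer's day that may need explanation.

    clock_in / clock_out: datetimes for the billed shift.
    doc_times: datetimes at which documents were coded.
    Returns (start_gap, end_gap, internal_gaps): the lag before the first
    document, after the last, and any between-document gaps over threshold.
    """
    times = sorted(doc_times)
    threshold = timedelta(minutes=threshold_minutes)
    start_gap = times[0] - clock_in
    end_gap = clock_out - times[-1]
    internal_gaps = [
        later - earlier
        for earlier, later in zip(times, times[1:])
        if later - earlier > threshold
    ]
    return start_gap, end_gap, internal_gaps
```

Flagged gaps are not automatically problems – calls and issue-log reading are billable, as noted above – the point is that every gap gets an explanation.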

Coding consistency metrics: I know we talked about the Gauge when reviewers first start, but MD5 is a really powerful tool here too. We talked about making sure that documents that are exactly the same are, in certain circumstances, coded exactly the same, especially for redactions, but you can also use MD5 to check reviewer consistency. You can test and evaluate reviewers to see who is off from the majority. Let’s say you have X-number of documents that went out to the population and one person is in disagreement with their colleagues two-thirds of the time – it’s a powerful check to make sure they’re being kept in line, especially if random sampling or other QC is lagging. You can roll up that data easily and quickly, even without QC-ing documents, just by comparing coding on the exact same documents at first-level review.
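The MD5 consistency check described here can be sketched as: group first-level coding decisions by content hash, take the majority call for each group of identical documents, and count how often each reviewer disagrees with that majority. Data shapes are illustrative only.

```python
from collections import Counter, defaultdict

def disagreement_rates(decisions):
    """Rate each reviewer's disagreement with the majority call on duplicates.

    decisions: list of (md5, reviewer, code) tuples from first-level review.
    Returns reviewer -> fraction of their calls that differ from the
    majority code assigned to the identical document.
    """
    by_hash = defaultdict(list)
    for md5, reviewer, code in decisions:
        by_hash[md5].append((reviewer, code))
    disagreed, total = Counter(), Counter()
    for calls in by_hash.values():
        majority, _ = Counter(code for _, code in calls).most_common(1)[0]
        for reviewer, code in calls:
            total[reviewer] += 1
            if code != majority:
                disagreed[reviewer] += 1
    return {r: disagreed[r] / total[r] for r in total}
```

Because it needs no second-level QC pass – just the first-level coding on identical documents – this check is cheap to run continuously while sampling-based QC catches up.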

I think that brings us to the end. V, anything else you want to say on communications or audits? Otherwise, I think we can open it up to the audience.

Vazantha Meyers

What I will say is that this should be the normal course of how you manage a review. It should be how you do it from the very beginning; it shouldn’t be ad hoc. There are times when you have to go in and do some triage or ask specific questions, but communication should be controlled from the very beginning. I think it helps make a review not just defensible, but a quality review, and not as expensive as it could be.

Michael Sarlo

Thank you. We have a question here, “What are the plans for London and surrounding EU e.g. Germany?”

We already have, basically, full operations in Europe and in the UK. Reviews in all marketplaces have gone largely remote, so that certainly makes things easier, and it has allowed us to actually reduce price in certain scenarios where we don’t necessarily have the burden of space. Reviews are conducted on infrastructure via a secure terminal hosted in either the EU or the UK, for GDPR reasons and also for post-Brexit reasons – those have diverged a bit. We like separate infrastructure, and clients often want separate infrastructure between the UK and the EU, and we offer that. We also offer full data collection and eDiscovery processing – I personally do quite a bit of global investigation-type matters where we’re straddling multiple time zones.

The name of the game on the review side, largely, is ultra-secure terminals in the EU or UK, and then more locked-down access into a hosted review database.

All right, any other questions from the audience?

Thank you again, we really appreciate it. I will kick it off to Rob Robinson to close us out.

Rob Robinson

Thank you, Michael, and thank you to everyone on the team for the excellent information and insight. We appreciate it very much. We also want to thank the EDRM for hosting today’s educational webcast, as well as all of you who took the time out of your schedules to attend. We hope you will have an opportunity to attend next month’s webcast from HaystackID, scheduled for 15 July at 12 p.m. Eastern Time, on the Handling of Non-Traditional Data Sources in eDiscovery. We hope you can attend. I will turn it back to the EDRM team.

Thank you very much.

Mary Mack

Thank you, Rob. We would like to thank you, of course, and your organization, our wonderful partner HaystackID, for making yourselves – Mike, John, Vazantha, Seth – available to us today. We’re very appreciative of the information and education that you have shared with our EDRM community, and we’re thankful to the community for your kind attention.

We will see you next on EDRM Global Webinar channel.


HaystackID – From Remote Collections to ReviewRight – 061720