[Webcast Transcript] Best Practices for Selecting, Staffing, and Supporting Complex and High-Velocity Legal Document Reviews

Editor’s Note: In this insightful webcast hosted by HaystackID on April 19, 2023, our expert panel delved into the best practices for selecting, staffing, and supporting complex and high-velocity legal document reviews. The primary goal of the webcast was to equip attendees with the knowledge required to ensure successful review outcomes.

Led by Kevin Clark, a seasoned professional in discovery analytics and managed review, the expert panel tackled specific review challenges, such as domain and industry expertise prerequisites, location and language requirements, and time and data sensitivity needs. While the entire recorded presentation is available for on-demand viewing, dive into the complete webcast transcript below to gain valuable insights from panelists on effectively managing complex legal document reviews.



Presenting Experts

+ Kevin Clark
HaystackID – Discovery Counsel and Vice President – Analytics and Review Operations

+ Richard Robinson
Toyota North America – Director, Legal Operations and Litigation Support

+ Eric Boyd
Locke Lord LLP – Director of Litigation Support and Docketing

+ Noah Miller
HaystackID – Deputy General Manager, Review Division


Presentation Transcript

Moderator

Hello everybody and welcome to today’s webinar. We’ve got a great presentation lined up for you today, but before we get started, there are just a few general admin points to cover. First and foremost, please use the online question tool to post any questions that you have, and we will share them with our speakers. If we have time to get to them at the end, we will do so. Otherwise, we will follow up via email. Secondly, if you experience any technical difficulties today, please let us know using that same questions tool and a member of our admin team will be on hand to support you. Finally, just to note, this session is being recorded and we’ll be sharing a copy of that recording with you via email in the coming days. So, without further ado, I would like to hand it over to our speakers to get us started.

Kevin Clark

Hello and good day, everybody. Welcome to another HaystackID webcast. We hope you have been having a fantastic week so far. My name is Kevin Clark and I’ll be the moderator and lead today’s presentation discussion entitled Best Practices for Selecting, Staffing, and Supporting Complex and High-Velocity Legal Document Reviews. This webcast is part of HaystackID’s ongoing educational series designed to help you stay ahead of the curve in achieving your cybersecurity, information governance, and eDiscovery objectives.

Today’s webcast is being recorded for future on-demand viewing. We’ll make the recording and a complete presentation transcript available after the webcast is finished and it’ll be available on our website.

Our expert panelists for today’s webcast have extensive experience with law firms, legal departments, and service providers. They will be discussing best practices for selecting, staffing, and supporting complex and high-velocity legal document reviews. So, I’d like to introduce the expert panelists we have joining us today. Next slide.

Our first panelist is Rich Robinson. Rich runs Legal Operations and Litigation Support at Toyota North America. He has extensive experience in eDiscovery and information governance and has been in the industry for many years. He has worked at law firms, within corporations such as Toyota, and also on the vendor side. Next slide, please.

Eric Boyd. Eric Boyd is the Director of Litigation Support and Docketing at Locke Lord. He also has extensive experience in this industry. He currently runs the Litigation Support and Docketing divisions at Locke Lord and has experience both at law firms and on the vendor side. Next slide.

Noah Miller. Noah Miller works with me at HaystackID, where he is the Deputy General Manager of the Review Division. He has been at HaystackID for over 15 years, working in eDiscovery and document review. Next slide.

I am the Discovery Counsel and Vice President of Analytics and Review Operations at HaystackID. I also have extensive experience, both with law firms and on the vendor side, and we are excited that you have joined us today to hear from our panel. Let’s start off with a little housekeeping. Next slide.

So, this presentation is offered for educational and informational use only. All views, opinions, and recommendations expressed are those of the speakers personally and do not represent the views, opinions, or recommendations of any of the firms or organizations with which the speakers are associated. This presentation does not constitute legal advice of the speakers, either individually or on behalf of any of the firms or organizations, and may not be relied upon as legal advice with respect to any specific legal, regulatory, or compliance issue or set of facts. Next slide.

So, now that we’ve gotten the housekeeping taken care of, let’s go over what we’re going to talk about today. So, we’re going to talk about selecting the right team. We’re going to talk about staffing and managing the project. We’ll be discussing managing high-velocity reviews, and we’re going to finish up by discussing using technology to improve eDiscovery doc reviews. Next slide.

To start off with, selecting the right team. Next slide.

To level set, I want to ask a question of our panelists, and I’ll ask you, Rich. What is a complex and high-velocity document review for eDiscovery?

Richard Robinson

So, I think it’s somewhat self-explanatory. As the name suggests, a complex and high-velocity document review is a complex review that requires a quick turnaround time, and there are a number of reasons why a document review might have a quick turnaround time. Perhaps it’s a government investigation, DOJ or SEC, where the volume is large because the search terms are broad, and they don’t give you a lot of time or leeway on your response times; or perhaps outside counsel has been involved in a long, drawn-out process for developing the ESI protocol or the search terms, and when that’s finally completed, they turn around and say, okay, review team, we’ve got four weeks to review these X million documents. So, there are a lot of ways you can get there, but the process involves using technology, software, and expertise to quickly and accurately identify the relevant documents and ensure that the process is defensible and meets all of your legal requirements. High-velocity document review is a critical component of the eDiscovery process because it helps parties efficiently identify key facts, issues, and evidence that might be relevant to the case.

Kevin Clark

Thank you, Rich. Now, Eric, a question for you. When you’re selecting your team, based on your experience, what are some key factors that you look for?

Eric Boyd

Experience, availability, qualifications. You want to make sure that the vendor you’re looking at has been in the space, they have the qualifications, they have the experience, and probably most importantly, they have the availability. So, depending on the timeline and depending on the technology you’re utilizing, you may need 30 folks, you may need 90 folks, you may need 10. It just depends on the platform you’re in, the technology you’re utilizing, and the number of documents that are likely relevant that have to be reviewed. There’s been a change in the industry over the last few years where AI is becoming such a part of the process that you really want to go through and test a couple of seed sets to determine the likelihood of responsiveness, what percentage of your data set is going to have to be reviewed. Then you use that to determine how many reviewers you’re going to need, how much time you have, and how to move forward. You’re looking for a partner, someone like a HaystackID who has all of those intangibles, who has the experience, knows how to utilize the AI or the [CAL] that you want to utilize, and then has the team available to get started and jump in.
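The back-of-the-envelope math Eric describes – sample a seed set, estimate the responsive rate, then derive headcount from the timeline – can be sketched roughly as follows. All of the figures and the function name here are illustrative assumptions, not a HaystackID formula:

```python
# Hypothetical staffing estimate from a seed-set responsiveness rate.
# Every number below is invented for illustration.

def reviewers_needed(total_docs, responsive_rate, docs_per_reviewer_day,
                     workdays_available):
    """Estimate headcount: docs likely needing eyes-on review divided by
    what one reviewer can get through before the deadline, rounded up."""
    docs_to_review = total_docs * responsive_rate
    capacity_per_reviewer = docs_per_reviewer_day * workdays_available
    # Ceiling division: a fraction of a reviewer still means one more person
    return -(-int(docs_to_review) // int(capacity_per_reviewer))

# Example: 1,000,000 docs, seed sets suggest 30% need review,
# 400 docs per reviewer per day, 20 working days before the deadline
print(reviewers_needed(1_000_000, 0.30, 400, 20))  # → 38
```

In practice, the per-reviewer daily rate varies widely with document complexity, as the panel discusses later, so an estimate like this is a starting point for the conversation rather than a commitment.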

Kevin Clark

Thank you, Eric. How about you, Rich? Any factors to add?

Richard Robinson

Yes, no, I think Eric said it perfectly. I think that sometimes when we think about experience, we’re thinking about how many years of experience a reviewer has, but that might not be the only factor. We might be thinking about the platform. Are we using a Relativity review, or a Reveal review, or some other platform to conduct the review? What experience does the team have with that platform? Most review companies claim to be platform-agnostic, but it does help to know what experience the team has.

The other thing is that you might have experience with the subject matter. A financial fraud case has a very different review team than a manufacturing or an R&D or intellectual property case. Experience in the field or on the specific issues can be a very important piece of it.

The other thing that I look for in terms of experience is the experience of the management team. Not just the reviewers themselves, but there’s got to be very strong project management from the review team.

Kevin Clark

Very good point, very good point. I want to head over to Noah. Noah, I’d like for you to share some of the best practice methods that you’ve seen in your experience.

Noah Miller

When you are selecting a team, it’s like planting seeds: if you plant good seeds, you’re more likely to yield a strong crop. We believe, and I’ve seen it borne out in practice, that document review is a skill. You want to design your screening process and methodologies to identify the people that are best suited to do the work. It’s not every attorney. There’s a variety of reasons for that.

How do you want to assess that? Typically, you want to do some type of testing and evaluation. You want to gauge the skills of the team. Ideally, you do that prior to the review, but you also want to do it as the review is ongoing. Any session you do in advance of the review is always preferred, partially because the reviewers are not yet working in your data. They’re not creating errors in the system that you then have to remediate, and you’re getting a sense of what they do in a subject matter-agnostic environment; they’re just working on your general testing. As you test them, you want to be clear about what you are trying to test for. What are you looking for? Is the review going to be, as I think Rich touched on, a complex financial review? Do they have the requisite knowledge about finances? Are they motivated in what they’re doing? Did they respond quickly to the email asking them to get the testing done? Are they diligent in responding to the recruiting team when they’re being brought online? Is the review going to be a simple review where speed is the priority? Is it going to be complex, with a lot of issue spotting? Or, as is most often the case these days, is it going to be both? Is your testing designed to yield the information that will show you both aspects?

After you assess them, you also want to make sure you’ve had a meaningful interview process. You’re going to use that interview not just to discover whether they can hold a conversation in a normal fashion, but also to find out details that you might not otherwise discover. Are they motivated to do well in their position? They may not have quality control experience on their resume, but might they be an untapped resource in that area? Are they going to be reliable in what they do? There are a lot of data points you can gather in that interview process.

Speaking of data points, every piece of data that comes into your ecosystem, you want to make sure you categorize. You want to track it, codify it, search by it, and filter by it, and you want to use it to intelligently staff your reviews, both the review you are staffing currently and anything coming in the future. That’s going to include what they’ve done previously, what they do when they’re working for other people if they’re not permanent employees, and obviously what they’ve done for you: what kind of granular feedback did they receive, and what were their results in comparison to the rest of the team?

Some of the normal screening that you want to undergo is just general background screening. You want to verify if you’re looking at attorneys that their bars are active. You want to make sure that you’ve done your normal conflict checks. You want to make sure that there are no jurisdictional issues in terms of overtime or any other pay issues. Depending on client needs and company policies, you may want to conduct background checks, with a caveat that oftentimes background checks take a little while to process. If they are new, that may be a delay, and especially in a high-velocity situation, you may not have that time. So, it’s always good to have a backlog of people that are already cleared through whatever processes are necessary.

One more thing I will touch on is candidate sourcing. Where do you find your people? A lot of this is just having been in the industry for a while and having a large backlog of people that have applied to work with you or on different matters. I believe Eric touched on this: availability is obviously always key for a particular review. If someone’s not available, you can’t staff them, but you don’t want your decision-making simply to be, these are the first 20 people that are available. You want to make a data-driven decision about the people who are available. Maybe the 65th candidate, who wouldn’t come up if you simply worked down a linear alphabetical list of the people that came in, would actually be an excellent QC resource that you wouldn’t have gotten to otherwise. So, you want to make sure that everything we talked about previously leads you to that decision. Once you’ve got it, you want to get your team together, make sure your management team has vetted them, and then you can move on.

Kevin, all yours.

Kevin Clark

Thank you, Noah. Richard, Eric, anything to add?

Eric Boyd

I would just say one of the other things that we like from our vendors is staying on top of the daily metrics. It’s easy to provide us with reports that allow us to look at who’s doing what. There are outliers. We’ve had reviews where we’ve had folks doing 400 docs a day and other people on the team getting through 80 docs a day. We want to know, okay, what’s the deal here? Sometimes there’s a perfectly good explanation. They show us the documents; they’re very varied and different, sometimes they’re large, or whatever. Just keep track of who’s doing what and what the overturn rates are on the QC side if you’re seeing a lot of overturns. We just ask that the companies we hire, people like HaystackID, manage all of that for us so we don’t have to get into the weeds. Because when we do look, we’re able to see, and we want those anomalies taken care of before they get to us. HaystackID’s done a very good job of that for us.

Kevin Clark

Thank you, Eric. Great points. We’ll definitely delve into that a little deeper in a few minutes, but very good points.

Richard Robinson

I’d like to add, I think when you’re talking about the recruiting process, when you’re spinning up a review team, there’s a temptation to go with the lowest bid. Certainly, I’ve been involved in cases where we had to spin up a large review team for a high-velocity review, and leadership, the decision-makers decided to go with the lowest cost as opposed to potentially the best or most qualified team. There are repercussions for that. Oftentimes, when you get the lowest bid, you get a team of reviewers that may be a little bit more mercenary, and when the opportunity comes for a slightly higher paying project, just 50 cents per hour higher, they’re willing to jump ship and move to another review. I think that the bidding process can sometimes lead to some mistakes, and I think that that’s where you should remember that you get what you pay for.

Eric Boyd

Rich is 100% right. You have to pay for quality, and you have to be willing to pay for quality if quality results are what you’re after.

Kevin Clark

Very good points, thank you. Next slide. One more.

Moving away from selecting the team and moving on to staffing and managing the project, this question is for Noah. What are some best practices that you can share on staffing and managing the project?

Noah Miller

Sure. We touched on the staffing piece earlier, and so we’re now moving on to the next step, which is engaging with the clients and counsel. Just as a preliminary matter, the goal of any review provider is always to work as an extension of the client. We literally just heard from two of our clients mentioning that they want to have problems taken care of before they get to them.

Our goal is basically to remove all of the difficulties and iron out the process so that counsel and the client can effectively manage the case, and make sure that they have the tools they need to prepare for the litigation, the deal, the internal investigation, or whatever the case may be. Especially in a situation where you have a limited amount of time with a high volume of data, it’s very important that each stakeholder is handling their end of it. The provider’s end is to ensure that everything is smooth and all of the tools are available for the client and counsel to effectively manage everything.

With client clearance, you want to make sure that you are submitting your reviewers to the client and counsel for their sign-off. Oftentimes, counsel will have their own process for vetting reviewers, and there may have been situations where those particular reviewers worked with that client or that counsel, and they have additional knowledge that the vendor may not have, so you want to take advantage of all of that. Even if the client says, “Oh, I don’t care, you select the team”, you at least want to give them names and likely resumes, scorecards, some kind of analysis of what the reviewers do, just so that you’ve ensured that counsel has the ability to manage the project. You also want them to have a granular understanding of who is reviewing their data.

Once you’ve ironed out who’s going to be on the team, you want to move on to the onboarding process. Especially in the remote environment, onboarding is pretty critical, because you’re going to have varying levels of technical skills and technical equipment that people are utilizing. So you want to make sure you allow ample time, even in a high-velocity situation, to get everybody onboarded and to ensure that they have the necessary tools and skills to access the data efficiently, so they can eventually review efficiently and do a good job. When review was on-site, that was more of a given: they were your systems, and if they were down, you were quite aware of it. In the remote world, you want to leave time for that. You also want to start that process early; you don’t want to take up any of your actual review time with setup and onboarding, so if you can accomplish that in the vetting process and get everybody signed up and ready to go, that’s definitely better.

Next is going to be review setup. I generally divide this into two parts. First, you’ve got your platform setup, and then what I like to refer to as your stakeholder setup. The first is more self-evident, so I’m going to talk about the latter first, mainly the stakeholder setup.

This is just to make sure that every stakeholder in the matter – whether it’s the client, counsel, internal stakeholders, the vendor, a secondary vendor, the forensics team, whatever – is communicating in an appropriate and efficient manner, whether that’s setting up project distros or establishing, here’s the person you go to for this, and here’s the person you go to for that: a delineation of tasks. You want to make sure everyone is aware of that, because you’re always going to cause issues when emails are going off-chain or things are being handled in a Teams message instead of in front of the whole group. So you just want to ensure that that communication, which we will get into later in more depth with the team, is free-flowing. Because, again, if the goal is to be an extension of the client and counsel, it’s hard to do that if you’re talking at odds, so you want to make sure that is set up efficiently.

The second is the platform, so obviously whether you’re in Relativity, Reveal, DISCO, or another platform, you want to make sure that setup is accurate and well done. Because if you set up the initial stages of the project wrong, you’re going to have a lot of downstream problems, and that’s as simple as making sure that your data has been collected and is the data set that you think it is. If it was derived from search terms, did you verify that all your data was there? When it gets to the review, did you set up your metrics searches correctly so that you are identifying the population at issue? Did you lock down permissions so that people can’t accidentally code the wrong documents? If you went back 10 years, there was an all-documents view that every single reviewer would have every single time, and invariably, 15% of the team would just review in that all-documents view, so the first 100 docs might be coded 50 times by 50 different reviewers. Obviously, we’ve matured a bit from that point, so you want to lock that down so that reviewers can only do what they’re supposed to be doing, and so that you have a delineation of permissions based on role. Your first-level reviewer is going to have fewer permissions than your quality control, who will have fewer than your team lead, who will have fewer than your ARM, who will have fewer than your review manager. You want to make sure that is all consistent and well set up. So, again, setup is exceedingly important.

Moving to review kickoff, you want to make sure that first everyone has the appropriate invite; you’re all on the same Teams meeting or webinar or whatever it is. You want to make sure that everyone knows what they are supposed to do. You would think that, at this point, everyone is sophisticated enough with eDiscovery to know how this is generally done, but there are different understandings throughout the industry. So, our typical approach is to have counsel provide the substantive background and walk us through the coding layout and the analytical decisions to be made in review. A lot of that can feel pedantic – you’re going over the same things you might have covered in every review – but it’s really important, because what you find is that putting a bunch of different people on a call together surfaces questions and information you might not have gotten otherwise, and maybe something that worked in a previous review for a particular reviewer, vendor, or client would work well in the matter at hand. So, you want to make sure that the kickoff call is both substantive and procedural in nature.

Maintenance is a large topic. Obviously, during the review, you want to make sure that you have project upkeep. Especially in quick cases with high volumes, immediate turnarounds, and rolling productions, there is a lot going on, and you want to make sure you have ample resources to handle those tasks. On a larger review, you might have one person responsible for each of the different workflows, one person responsible for coordinating them, and then one person responsible for coordinating the coordinator, depending on what’s going on. With a five-person review, that’s not going to be the case. With a 500-person review, there are a lot more procedural dynamics at play, so you want to make sure that you have the resources to handle them.

You want to make sure that you are communicating effectively. What is happening in the review should be communicated on an on-demand basis to the client so that they know, okay, we’ve got a million documents, we’re through 275,000, we’ve been reviewing for a week and a half, and we are on pace, or we are not on pace. Will we need to go to the government or the other side and say, hey, based on the volumes, we need a little more time? Is that not an option? Do we need to expand the team? If you have these data points, it’s easy to expand the team at week one of a four-week project. At week three and a half, two days before you have to go hands-off with production, it starts to be a little more cumbersome. You can still do it, but you’re starting to really impact your quality. So, if you have all that information early on and you are dynamic in the way you engage with those data points, you can have a much more effective project.
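The on-pace question Noah raises – a million documents, 275,000 reviewed after a week and a half – reduces to a simple throughput comparison. A minimal sketch, using the numbers from his example and assuming a four-week (28-day) deadline:

```python
# Illustrative pace check; the figures are taken from the example above
# and the four-week deadline is an assumption, not a real project.

def on_pace(total_docs, docs_done, days_elapsed, total_days):
    """Compare actual throughput to the rate needed to finish on time.

    Returns (on_pace?, projected days to finish at the current rate)."""
    required_rate = total_docs / total_days    # docs/day needed overall
    actual_rate = docs_done / days_elapsed     # docs/day achieved so far
    projected_finish = total_docs / actual_rate
    return actual_rate >= required_rate, projected_finish

# 1,000,000 docs, 275,000 done after 7.5 days of a 28-day project
ok, eta = on_pace(1_000_000, 275_000, 7.5, 28)
print(ok, round(eta, 1))  # → True 27.3
```

Tracking this daily is what makes the week-one versus week-three-and-a-half difference visible: a shortfall caught early can be fixed by adding reviewers, while the same shortfall caught late mostly costs quality.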

And then review completion. Just be aware of when the project will complete. What are your deadlines? When do you need to be hands-off so the production team can take over? What are the final steps for counsel? What do they want to do at the end? Are they just looking over the production setup or the specifics? Are they looking over any qualitative or quality feedback? Whatever that process is, you want to make sure that it’s ironed out in advance. So, again, you don’t get to four days out and you now have a surprise.

All yours, Kevin.

Kevin Clark

Excellent. Thank you, Noah. You covered some great points. Eric or Rich, do you have any examples you want to share?

Richard Robinson

I guess I’ll jump in first. There’s so much to dig into there. Noah just gave a great bullet point list of things that you need to do to have a successful high-velocity review, but a couple of key points for me are at the beginning, with the onboarding process and the importance of screening. A lot of this depends on what the review is about, how you’re going to staff it, and what the stakes are. There are not many cases today in which I would consider spinning up 200 reviewers to manually and linearly review a couple of million documents. That’s just not the standard workflow anymore. What I would generally do at this point, in most cases, is a continuous active learning model with a much smaller team, in which case that screening process is so much more important. I can’t even imagine telling my managed review company, just go ahead and pick and assign them because I need the numbers. Not that it doesn’t happen, because it absolutely could, but it’s generally not the case as often anymore. So, that screening is incredibly important for pulling together a team of highly efficient, highly experienced individuals, because you want that level of quality. With continuous active learning, the beginning stages are so important.

Another point is that in the screening process, depending on the stakes of the case, there might be other decisions that you need to make. The standard background check that my managed review company is going to perform on each of the reviewers may not be enough in a particular case where the stakes are high for the company and the issues are ones that, if leaked to the press, would be damaging. So, I might need an additional or deeper background check beyond the standard one. Those are the kinds of things that you need to think about in advance.

And then, adding to that point, in the kickoff and the maintenance phase, especially early on, the communication with the persons most knowledgeable about the relevance of a document is so hugely important. Again, project management is key. You have to have a great project manager to keep the team working together. But you also have to have a really great communication plan between the project manager and the persons most knowledgeable, whether it’s outside counsel or in-house counsel, in those early stages, because the continuous active learning model you’re training is picking up those early decisions in prioritizing your documents. Poor decisions early in the process are perfectly normal; your review team is learning the case. But how quickly you overturn those decisions, fix them, and retrain and re-educate the team on what is actually relevant and what is not is so important. So, the kickoff, the first stage of training, and the early maintenance phase are all incredibly key to having a successful review.

Kevin Clark

Excellent points. Thank you.

So, next slide. And one more. So, we’re going to move to our next topic, managing complex and high-velocity reviews. And we’ve skimmed the surface on the previous section, but I think we’re going to get a lot deeper here. And so, I want to ask our panelists, let’s start with Eric first. What are some of the main areas that must be covered when we’re talking about having a successful, complex, and high-velocity doc review?

Eric Boyd

Well, I think it’s important that you have counsel put together a very good and thorough program, such as a coding manual, so that the reviewers understand what the case is about. It typically will include pleadings so that they can review and understand the issues at stake and what matters to us.

We even provide examples of responsive and non-responsive documents, so that the reviewers get a good understanding of what they’re looking for and what they’re looking at. So, it’s very important – and Rich touched on this, as did Noah – that as people are reviewing, our QC-ers are coming up directly behind them, and catching any mistakes they’ve made, marking something responsive that’s not or vice versa. Because in the end, that’s what’s key. That’s what’s teaching your AI how to find and locate what’s potentially relevant.

So, I would say one of the most important pieces is that initial training, where you’re explaining what this case is about, what we’re looking for, what constitutes responsive and not.

And then on top of that, depending on what other issues you may have – are there issue codes? Have we identified specific issue coding that we want the reviewers to handle? All of that plays a role in who we choose and how we choose. And it certainly indicates that, if you’ve thought it through, you don’t want to go with the cheapest version available.

Kevin Clark

Thank you, Eric. And Rich, do you have anything to add? What are your thoughts?

Richard Robinson

I think Eric said it well. I touched on this a little bit before: the training and supervision are so incredibly important. It’s also important that – you’ve got a large volume of documents, you’ve got tight timelines – you have a plan for how you’re going to approach it.

So, even if it’s a continuous active learning model, and you’ve got a relatively small team addressing a large volume of documents, you still want to have a plan of approach to prioritize how you’re going to do the review. How are you going to create the initial review sets? What’s the richness of your document set? Do you want to do a targeted review set or a random review set?

So, all of those things are really important on getting the review started out correctly. And your outside counsel and your project management team, from within the review company, have to be all on the same page with you about how you’re going to approach this. Because I have seen reviews go awry when the review team just starts the process without any kind of clear objective for how they’re going to manage the review. Are you doing four corners doc review? Are you doing threading so that you can review an entire email thread and then use that to build the engine? So, questions like that all have to be worked out in the beginning. If you’re doing continuous active learning, are there still sections of the document set that have to be reviewed linearly, because they just don’t fit the model of continuous active learning? That turns up in my document sets all the time.

And then setting realistic goals. The fact of the matter is, in cases like this, the production deadline is usually an unreasonable deadline. So, you’re trying to do the best you can, you are making a good-faith effort to meet the deadline. But things happen. So, being prepared to say, “Okay, what’s our contingency plan? Can we do a rolling production? Can we start with the most responsive set of documents or the key custodians to do that review first, make that production, and then do a rolling production?” That’s where your outside counsel has to be working with opposing counsel, and setting realistic goals and timelines.

Kevin Clark

Thank you, Rich. And Noah, can you share some of our best practices at HaystackID that you’ve learned through your experience running managed review?

Noah Miller

Yes, I can if I can figure out how to unmute. So, I think we are starting with the quality control process. So, we can move to the next slide.

So, obviously, one of the most important things is: what is your quality control process? How is feedback getting to the reviewers? How are you ensuring that raw first-level output is accurate, efficient, and on point? One of the things that we will frequently start with is our gauge analysis, which is a test we run on 10, 15, or 20 documents that counsel has identified from the dataset. It tests the reviewers' decision-making so that we can accurately gauge how they code documents before we get into the dataset proper.

And if we have time (and oftentimes we don't), it's a really valuable tool, because you can start to delineate the team: these are our superstars, getting 15 of 15 right away, with a grasp on the complex matters; these are the people in the middle; and these people are struggling, or are not appropriate for this review.

And oftentimes, it takes a few days to make that decision once they get into the data. So, it helps if you can make it early on, especially with a uniformly coded dataset, where you have accurate data points per person that are comparable across the same set of data. The thing you run into in the live documents is that one person may be going at 60 documents an hour and another at 35. And maybe that's appropriate. Maybe the person at 60 got into a large family of NR documents that zip by, while the person at 35 was in complex spreadsheets, or PowerPoints, or 100-page meeting minutes; you never really know. Whereas when they're looking at the same data set, it provides a good point of comparison, and it helps you identify potential holes in your coding regime. Because if you have a question that everybody misses, for some reason, it's very possible that your coding instructions were not that clear.
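[Editor's note: the gauge analysis Noah describes, scoring every reviewer against counsel's coding on the same small document set and bucketing them into tiers, can be sketched in a few lines of Python. This is only an illustration; the answer key, accuracy cutoffs, and tier labels are assumptions, not HaystackID's actual tooling.]

```python
# Hypothetical gauge analysis: every reviewer codes the same 15-document
# gauge set, and their calls are scored against counsel's answer key.

# Counsel's coding for the gauge set: doc_id -> "R" (responsive) / "NR".
ANSWER_KEY = {f"DOC-{i:03d}": ("R" if i % 3 else "NR") for i in range(1, 16)}

def score_reviewer(calls: dict) -> float:
    """Fraction of gauge documents a reviewer coded to match counsel's key."""
    correct = sum(1 for doc, code in calls.items() if ANSWER_KEY.get(doc) == code)
    return correct / len(ANSWER_KEY)

def tier(accuracy: float) -> str:
    """Bucket reviewers so calibration effort can be targeted (cutoffs assumed)."""
    if accuracy >= 0.93:
        return "ready"
    if accuracy >= 0.75:
        return "needs calibration"
    return "not a fit for this review"

# A reviewer who matches the key on all 15 documents lands in the top tier.
assert tier(score_reviewer(dict(ANSWER_KEY))) == "ready"
```

Because everyone codes the same documents, the score is directly comparable across reviewers, which live review rates (60 versus 35 documents an hour) are not.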

There’s obviously a lot more that goes into the quality control. I’m going to try to zip through these to make sure we get through all the slides. So, let’s go to the next slide.

Communications. So, I touched on this via the stakeholders. It’s also important to have communication with your review team. Again, we are in remote environments pretty uniformly across the eDiscovery industry at this point. So, how do you communicate? What is your process for doing it? What touchpoints do you have on a daily, on a weekly, on a project basis to ensure that people are getting the information that you need and the communications, and changing as the guidance changes?

So, you've got your initial training session. As multiple speakers on this call have touched on, you want to have a fleshed-out coding protocol, which covers both the background and the coding decisions, and covers all your bases as far as you know the information. Obviously, when you get into the data, things are going to change. But you do want to make that protocol as comprehensive as you can at the beginning.

Are you going to have daily, or weekly, or twice-daily calls depending on the rapidity of the high-velocity review? We’ve had cases where they come in on a Friday, and they’ve got to be done on a Tuesday. We had calls twice a day because the calibration needs to occur in a much shorter timeframe. If you have six months to do a project, you’re probably not going to have daily calls past the first week or two. If you have a two-week project with a million documents or 500,000 documents, you’re probably going to have many more calls. You do have to weigh that, of course, with efficiency. When you’re having a call with 50, 60, 70, 80 people, whatever it is, obviously, that’s time and that’s money. So, you want to make sure that that time is well spent.

You also want to make sure that all instructions are in writing. You are obviously going to have calls where you discuss matters. You always want to follow those up with written summations: "Here's where we are, here's how we're proceeding forward," just so there is a record of what occurred and there are no gaps in understanding. "Oh, I said this, not this." "Oh, you said this." Obviously, you don't want to get into that situation. So, it's always good to encapsulate everything you have said.

Eric, Rich, any questions or comments on that one before we move on?

Richard Robinson

No, I don't think so. I think that both Eric and I have stated already that communication is so important: those instructions, that ongoing training, and maintenance of the review are key.

Noah Miller

Next slide. So, how do you communicate with your team? How are you replicating that environment that we used to have in the review room where people could stand up, go to someone’s computer, and look at documents together?

One of the ways you can do that is having a secure chat room. So, you’ve got your email communication, you’ve got your Teams or messaging communications. And then you’ve got your public square. And this is to replicate the function of the review manager standing up in the middle of the room and saying, “Hey, counsel just gave us an update on X. And here’s what we’re doing”.

Well, how do you replicate that? You can send an email, but people don't see emails as immediately. If you've got a chat room, or a communication regime that allows you to communicate in real time with everyone on a collaborative basis, that's going to allow you to communicate with people more effectively. The way people learn or understand from communications differs from person to person, so you want to try to access all of those channels.

If a written email works, do that. If a chat room works, do that. If you need to have individual calls with someone, you can do that. Basically, you're trying to reach as high a percentage of your team as possible, so that there is comprehension and understanding. And obviously, that's much easier with a smaller team, because you can have much more individualized communication methods.

Obviously, within the secure chat room, you don’t want it to devolve into a Lord of the Flies situation. So, you want to make sure you have people leading the discussion, set out some ground rules on what the communication looks like.

One of the things we always tell people is that on a larger team, there's no place for "Hi, how's it going? How was your lunch?" If you have 100 people doing that, you are very quickly going to have an unusable morass of communication. So, you want to make sure that everything is streamlined and efficient, so that you communicate the information without wasting time in the communication.

Next slide.

Eric Boyd

The only thing I would add to that is that it's a fluid situation. We never know what is actually in the document collection; there are always going to be surprises. So, being able to answer questions in real time on specific documents or types of documents is very important to us.

Noah Miller

Agree completely.

So, Q&A logs and technical issues. So, looking at the latter one first, you want to make sure that the team has been notified of what the process is when they have a technical issue. Who do they email? Who do they go to? Is there a helpdesk line? Do they call someone? What types of issues go to what different people or what different stakeholders? It’s going to be different if you have an end-to-end provider versus one provider during the processing and hosting, and one doing review. So, you want to make sure that no time is wasted figuring out what to do.

And so, oftentimes, a pre-emptive solution is to establish a project guide, a reviewer instruction guide, at the beginning. If this issue occurs, talk to this person; here's the email. If that issue occurs, talk to that person; here's the email, here's the phone number. Always make sure reviewers notify your review management of any issues and copy the review manager, the review management team, or the distro on all of them. Because an individual may experience an issue, but it may be more pervasive than that. If one person reports it, the review manager and management team may be able to diagnose it for the whole team at large and save you some headaches from whatever issue is occurring.

And then obviously, the issue log. We’re big fans of that at HaystackID. And it’s basically an encapsulation of the analytical decisions that you make throughout the review. And of critical import here is the timing of when these things come down.

A very simple example: if you identify an attorney a week and a half in, you obviously want to go back and look at the data you've reviewed before that for that attorney, because their name wouldn't have been highlighted, and they might not have been picked up. You still hope to pick them up, but if there are no indicia that it is an attorney, or that it's a legal communication, you might not have. So, you want to make sure that, from that point going backwards, you look for that attorney and their communications. You won't need to do that going forward, because at that point you'll have identified it for the team, and it'll be subject to the normal priv screens, not the special priv screen of "Oops, we missed this because we didn't know it was there".

Same thing with substantive decisions. If it turns out, "Oh, we now understand that this player does this, or there's a common interest involved with this player," you want to know when that decision came down, so you can look at the data reviewed before it and so there's no communication gap. Everyone understands the issue log is the arbiter of truth. This is where your information is. If you need to change something, it goes in the issue log, whether that's a protocol change or a change based on documents you encounter. From the point at which the review begins, you want to keep some kind of record of your decisions, whether it's internal to Relativity, an Excel, a shared database, whatever it is, so that you can ensure everyone is on the same page, and you don't have half the review doing one thing and half doing another.
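[Editor's note: the issue log Noah describes is essentially a dated decision record that scopes a retroactive sweep. A minimal sketch follows; the entry fields, names, dates, and search term are all hypothetical, and real workflows would run inside the review platform rather than standalone code.]

```python
# Illustrative issue log entry plus the retroactive sweep it drives:
# only documents coded *before* the decision date need the special look.
from dataclasses import dataclass
from datetime import date

@dataclass
class LogEntry:
    entered: date      # when the decision came down
    decision: str      # e.g., a newly identified attorney
    search_term: str   # term used to sweep earlier-reviewed documents

# Hypothetical entry: an in-house attorney identified mid-review.
entry = LogEntry(date(2023, 4, 10),
                 "Jane Roe identified as in-house counsel", "jroe@")

def retro_sweep(entry: LogEntry, reviewed: list) -> list:
    """Documents coded before the decision that hit the new term and so
    need a second privilege look; later documents get the normal screen."""
    return [
        doc_id
        for doc_id, coded_on, text in reviewed
        if coded_on < entry.entered and entry.search_term in text
    ]

reviewed = [
    ("DOC-001", date(2023, 4, 5), "email from jroe@ re: indemnity"),
    ("DOC-002", date(2023, 4, 12), "email from jroe@ re: indemnity"),
    ("DOC-003", date(2023, 4, 5), "meeting minutes"),
]
# Only the pre-decision hit needs the special sweep.
assert retro_sweep(entry, reviewed) == ["DOC-001"]
```

Keeping the decision date on every entry is what makes the backward-looking scope computable at all; without it, every change forces a full re-review.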

Eric, Richard, anything to add or questions?

Richard Robinson

Honestly, no. You’re doing a great job.

Noah Miller

Thank you very much. Next slide.

It's important to provide on-the-fly positive encouragement, as Rich and Eric just did. If you can do that for your review team as well, that's really important.

Shared resources. So, at the beginning of the review, you're not going to know everything you'll know in the middle, and you're definitely not going to know everything you'll know at the end. And so, you want to generate a compendium of information that you update as the review goes on.

So, who are your attorneys? Who are your third parties? Who breaks privilege? What are the relationships between the players? Do you have any project codes? Is there information that you've discovered about the way they communicate? The first time you look at a bunch of traders talking over their various chat methodologies, you're like, "I don't even understand any of the words they're saying." And so, sometimes you have to develop a dictionary to understand what's being discussed, especially when they're talking in shorthand, because you may be missing something exceedingly critical that may form a really important part of the case. Who knew what? What did they know? And when did they know it? You might not catch it without that granular knowledge of what their communication means.

And so, you start to track that, and you track it from everyone in the review. One of the things that we always like to do and recommend is to have some kind of functionality, whether it's a shared document, a shared Excel, an email chain, whatever it is, where your first-level reviewers can identify informational pieces that they didn't know. It may turn out that no one knew; it may turn out that counsel wasn't aware of it. And you can start to assemble this encyclopedia of the case.

So, by the time you're done, you not only have the analytical decisions you've made on the documents, but you also have this resource for future depositions, trial, witness prep, anything else that counsel may do. They can say, "Oh, you know what, we really need to talk to them about this, because we didn't even know it was an issue. It may not be fleshed out in the documents, and maybe we need to do some more collections." It can help guide how the case proceeds forward.

And obviously, individual feedback is of paramount importance. A lot of times, with a lot of reviews (and we don't like to run them like this, so hopefully this doesn't happen with us), reviewers just review: nobody talks to them, there's no feedback, they just sit in a room by themselves and code. They may be doing a great job, and they may not be, and there's no real mechanism to ensure it. What we find is that feedback offered frequently, individually, and to the team really improves your quality. Because if you talk to someone and go through the decision-making process, you'll identify where the gap in understanding was: at what point did they diverge from what you would have done? And sometimes what you find is, "Oh, they actually diverge in a way that makes sense. Let's change the coding option. Let's update counsel. Let's see if that's actually a better way of doing it." So, it's a two-way street. The feedback will help your reviewers, but it will also ensure that you're doing a good job as a team.

Eric, Richard, anything to add?

Eric Boyd

Well, I think we can probably dovetail through some of the next slides on reporting that deals with this.

Kevin Clark

We are running out of time. So, let’s go over Reporting, then we’ll jump to the next section, Technology.

Eric Boyd

The reporting piece is important to us as the end users; this is the most important piece to us. We need to see statistics. We need to see who's doing what, how many documents per day, stuff like that. On the law firm side, we like to see that simply so that we can measure and plan accordingly. But I know that the vendors like to see this so they can do the same. They're better at reading these statistics than I am; they read them daily, every day of the year. But I think we can extract some of the same knowledge, put our heads together, and see where to go next.

So, these are very important to us. How do you feel, Rich?

Richard Robinson

No, I agree 100%. I think that daily reporting metrics are a huge value-add to any managed review.
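[Editor's note: the daily reporting metrics the panel endorses, per-reviewer throughput and a simple pace-versus-deadline projection, can be sketched as follows. All reviewer names and figures are made up for illustration.]

```python
# Hypothetical daily report: documents coded per reviewer, plus a check
# on whether the team's aggregate pace clears the remaining population.

daily_counts = {"Reviewer A": 480, "Reviewer B": 350, "Reviewer C": 410}

def on_pace(remaining_docs: int, review_days_left: int) -> bool:
    """True if yesterday's aggregate pace would clear the remaining
    population by the deadline. Assumes the pace holds exactly, which
    it rarely does; a rolling average is a common refinement."""
    team_per_day = sum(daily_counts.values())
    return team_per_day * review_days_left >= remaining_docs

# 1,240 documents/day across the team: nine days clears a 10,000-document
# remainder, eight does not, which is the signal to adjust staffing.
assert on_pace(10_000, 9) and not on_pace(10_000, 8)
```

The point of the daily cadence is exactly this kind of early signal: a shortfall surfaces while there is still time to add reviewers or negotiate a rolling production.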

Eric Boyd

Right, and daily questions. We usually keep a log of daily questions for decision-making. As we said before, there are always surprises. You come across things you're not expecting.

Noah Miller

That covered it on my end.

Kevin Clark

Let's jump to the next slide, slide 25. We are running out of time, so we're going to jump right into the technology, our final topic.

So, the use of technology is extremely important in being able to run a successful, complex, and high-velocity review. The first point is artificial intelligence and machine learning. We've already mentioned those several times during our discussion, and that's another webinar in itself, so we won't really cover it right now.

But let me hand over to Noah to cover a few of these next points as we close out.

Noah Miller

I'm going to diverge. I think there are supposed to be 350 billion [off mic] per day in 2023. What I noticed in the review industry was that early on, you would just give [off mic] some search terms and some data deduplication; that was your technology. Then the technology caught up and reduced the volumes pretty severely. So, in the middle period, you had reduced your volumes, and you could get through them with smaller teams. But what I've noticed in the last couple of years is that the data volumes are now so enormous that even with the technology culling, you still end up with 800,000, a million, 2 million documents. And these are just priv hits; you've culled them down as much as you can.

And so, having facility with technology is exceedingly important, because you just can’t get through the datasets anymore, whether it’s CAL, whether it’s TAR 1.0, whether it’s new AI versions, it’s really important.

So, I think we can jump to 30 after this.

Eric Boyd

The only thing that I would like to add is that when you're utilizing the technology, you need to work with your teams on the best use case. And what I mean by that is trying to explain to some of the attorney teams why a four-corners review is necessary when utilizing this technology. Because they're like, "Wait, you want to read the email absent its attachment to determine relevancy?" And you have to go through that explanation and make them comfortable that, on the back end, as you do your QC of what's going to get produced versus what's not, you're going to catch everything that should be privileged and everything that should not be produced before you do your production. But I think that is an important conversation to have.

Most attorneys, at first glance, don’t like that idea until you explain to them the importance of it for a successful use of CAL.

Kevin Clark

Great, excellent points. I have had many of those discussions, so very good points.

So, just to recap some takeaways. Effective legal document review projects, especially high-velocity, complex projects, require the selection of the right team. They require effective project management. And they require the proper use of technology to increase the efficiency and accuracy of the review. By following best practices, selecting the right team, and staffing the correct contract attorneys, you're going to have successful outcomes, provided you follow the right process and use the right technology.

So, in wrapping, I want to see if our panelists have any final thoughts they’d like to share.

Richard Robinson

You go first, Eric, since you need to—

Eric Boyd

Yes, I’m going to have to jump for my next meeting. But thank you all for having us and putting this together. I think everything on this is 100% right.

If you want to be successful, be prepared. Put together a good platform, put together a good manual, put together a good plan, and have a good partner.

Kevin Clark

Excellent, thank you, Eric. Rich.

Richard Robinson

Great advice. I'd just add that there's no easy button. What we've laid out here is a great step-by-step manual of best practices, but you have to take a look at the matter at hand. What are the issues? What are the stakes of the case? What's the value of the case? Make decisions based on all of that information, using this as a structure to determine the best way to staff and manage the project.

Kevin Clark

Excellent. And Noah.

Noah Miller

No, that covered it. There’s a lot to it, and just make sure you’re communicating with everybody. That’s really what it comes down to, so that you effectively manage the project.

Kevin Clark

Excellent. Next slide.

So, in closing, we want to thank all the expert panel members for sharing their insight and information. We also want to thank everyone in our audience who took the time out of your busy schedule to attend today's webcast. We truly appreciate and value your time, and we're glad that you've been interested in our educational series.

Don't miss our next educational series webcast. It is going to be on May 17th, and it's going to focus on targeted remote collections and emphasize the importance of consistent and standardized device and service reporting. To learn more, register for this upcoming webcast, or explore our extensive library of on-demand webcasts, visit our website, haystackid.com.

In closing, we do want to mention that we have some amazing products at HaystackID: our ReviewRight services, including ReviewRight Match AI and ReviewRight Staffing, which provide the databases and cover a lot of what we've been talking about. So, we'd love to talk to you more about those and further share our ideas and concepts of best practices for how to run complex and high-velocity doc reviews.

So, thank you all for coming. Hope you all have a great day. And thank you again to the panelists. Take care.

Richard Robinson

Thanks, Kevin. Thanks, Noah.


About HaystackID®

HaystackID is a specialized eDiscovery services firm that supports law firms and corporate legal departments and has increased its offerings and expanded with five acquisitions since 2018. Its core offerings now include Global Advisory, Discovery Intelligence, HaystackID Core™, and artificial intelligence-enhanced Global Managed Review services powered by ReviewRight®. The company has achieved ISO 27001 compliance and completed a SOC 2 Type 2 audit for all five trust principles for the second year in a row. Repeatedly recognized as a trusted service provider by prestigious publishers such as Chambers, Gartner, IDC, and The National Law Journal, HaystackID implements innovative cyber discovery services, enterprise solutions, and legal discovery offerings to leading companies across North America and Europe, all while providing best-in-class customer service and prioritizing security, privacy, and integrity. For more information about its suite of services, including programs and solutions for unique legal enterprise needs, please visit HaystackID.com.

Source: HaystackID