[Webcast Transcript] Automating Privacy and eDiscovery Workflows: Operational Excellence as a Service

Editor’s Note: On May 19, 2021, HaystackID shared an educational webcast designed to inform and update legal and data discovery professionals on how organizations can better manage the volume of DSAR, PII, Discovery, and Incident Response requests in today’s world.

While the full recorded presentation is available for on-demand viewing, provided for your convenience is a transcript of the presentation as well as a copy (PDF) of the presentation slides.


Managing the volume of DSAR, PII, Discovery, and Incident Response requests in today’s world of work is challenging. Many industries are seeing linear growth in these types of requests, but some are seeing an exponential increase.

In this presentation, experts share insight into the automation of unique workflows to better engage with these rapid-response projects. The timelines are short, the data volumes are larger than ever, and the reporting obligations are becoming increasingly onerous. Clients are asking for alternatives and technical solutions to a problem that is not going away any time soon.

Webcast Highlights

+ Automation considerations and legal challenges.
+ Slow down to speed up – tailoring the workflows.
+ We have the technology, let it do what it does best.
+ Working with outside counsel and 3rd parties to define the process.
+ These processes aren’t changing drastically, but we are developing a higher-quality product.
+ What Operational Excellence as a Service truly means.

Presenting Experts

+ Jonathan Flood – Mr. Flood is the Director of EU Discovery Ops at HaystackID. Jonathan is a thought leader who has worked with top-tier law firms in Ireland, in addition to vendors, financial institutions, government agencies, and regulatory bodies.

+ Susanna Blancke, Esq. – Ms. Blancke is the Director of EU Litigation and Client Services for HaystackID. She has extensive global experience concerning cross-border eDiscovery operations and managing larger international and national eDiscovery teams and review units, remotely and on-site.

+ Jennifer Hamilton, JD – Ms. Hamilton is the Deputy GC for Global Discovery and Privacy at HaystackID. Jenny is the former head of John Deere’s Global Evidence Team.

+ David Wallack, Esq., CIPP/E – David is the DPO at HaystackID and has significant experience in complex issues involving data breach, crisis management, GDPR, US privacy regulations, and US litigation discovery obligations.


Presentation Transcript

Introduction

Hello, and I hope you’re having a great week. My name is Rob Robinson, and on behalf of the entire team at HaystackID, I’d like to thank you for attending today’s presentation and discussion titled Automating Privacy and eDiscovery Workflows: Operational Excellence as a Service. Today’s webcast is part of HaystackID’s monthly series of educational presentations conducted on the BrightTALK network and designed to ensure listeners are prepared to achieve their cybersecurity, computer forensics, eDiscovery, and legal review objectives.

HaystackID is excited today to highlight our support for and partnership with Women in eDiscovery. Women in eDiscovery is an organization that brings together women around the world who are interested in technology related to the legal industry. Its goal is to provide opportunities for businesswomen to grow personally and professionally through leadership, education, networking support, and recognition. It was founded in 2007 and has thousands of members operating in more than 30 chapters throughout the world. Women in eDiscovery has kindly supported today’s presentation through its ongoing educational advocacy and its highlighting of today’s webcast.

Additionally, we’re grateful today for support from the Association of Certified eDiscovery Specialists, better known as ACEDS. ACEDS provides training, certification, and professional development courses in eDiscovery and related disciplines to law firms, corporate legal departments, service providers, government, and institutions of higher learning, and we are delighted to partner with them on efforts such as today’s webcast.

Our expert presenters for today’s webcast include four of the industry’s foremost subject matter experts on international privacy considerations in cyber discovery and legal discovery incidents, requests, cases, and matters. I’d like to introduce our four presenters today. The first introduction I’d like to make is that of Jonathan Flood. Mr. Flood is the Director of European Union Discovery Operations at HaystackID. He’s a thought leader who has worked with top-tier law firms in Ireland, in addition to vendors, financial institutions, governmental agencies, and regulatory bodies. Next, I’m honored to introduce Susanna Blancke. Ms. Blancke is the Director of EU Litigation and Client Services for HaystackID, and she has extensive global experience concerning cross-border eDiscovery operations and managing larger international and national eDiscovery teams and review units, both remotely and onsite. Next, I’d like to introduce a well-known eDiscovery counsel, Jennifer Hamilton. Ms. Hamilton is the Deputy General Counsel for Global Discovery and Privacy at HaystackID, and she is the former head of John Deere’s global evidence team. Last but certainly not least, I’d like to welcome David Wallack. Mr. Wallack is the Data Protection Officer at HaystackID, and he has significant experience in complex issues involving data breaches, crisis management, GDPR, US privacy regulations, and US litigation discovery obligations.

Good morning or good afternoon, and welcome, everyone. Today’s presentation will be recorded for future viewing, and a copy of the presentation materials will be available for all attendees. You can access many of these materials right now, directly beneath the presentation viewing window on your screen, by selecting the Attachments tab on the far left of the toolbar beneath the viewing window.

And at this time, I’d like to turn the mic over to our expert presenters, led by Jonathan Flood, for their comments and considerations on automating privacy and eDiscovery workflows. Jonathan.

Core Presentation

Jonathan Flood

Thank you, Rob. Thank you to everybody who has joined this BrightTALK event. I’m very honored to be the presenter for this particular talk. Also, very glad to have this esteemed panel with me as well. I’ve worked with many of these individuals for a number of years and we’ve had a pretty good working relationship over that time. So, it’s a good chance for us to talk about what we do anecdotally on an event like this.

I’ll jump forward and talk a little bit about our agenda first, just to give everybody an idea of what the discussion points will be today. Our first talking point will be automation considerations and legal challenges. We’ll break up what that means into some subgroups, and we can talk about those subgroups based on our experiences. The second thing I want to talk about is slowing down to speed up – this is something very dear to my heart – where we talk about how diving straight into projects can cause problems. Taking the time at the beginning is important, and I think anyone who has done a project, particularly a large privacy project, would attest to that. Thirdly, we’ll talk about the technology involved in these particular projects. I’m not going to talk about any particular technologies; I’m going to talk about them in general, just to give everybody a sense of what’s out there, what’s available, and how it works. And we’ll talk about some of these areas from a workflow perspective – how some of that works for us and how we’ve managed to make some really good headway in automating our processes. The fourth topic will be working with outside counsel and third parties to define the process. This – particularly in the last 12 months for us – has been a very interesting part of the work, where we’re learning a lot about how different industries and jurisdictions do things, and the value of speaking to everybody who is involved in the process is quite important, so that’s a pretty big topic for us. Fifth, we will talk about developing a high-quality product. And lastly, we will talk about operational excellence as a service, and what that really means.

So, on our first slide here, Automation Considerations and Legal Challenges – this is a big title, and what we’ll talk about here is each of four different areas. These are not every single area that can be automated or has legal challenges, but they are the ones that we’ve broken it down into and where each of us has particular expertise. So, we’re going to take each of these sections in turn, and I’ll have each of our panelists talk about their experiences with that particular section.

So, the four sections are:

  1. Breach review
  2. Data remediation
  3. Cross-border discovery
  4. DSARs

And with that, I think what we’ll talk about first is breach review, with respect to it being a mature product [inaudible] and we’ve got a number of workflows in this particular section here. So, I’m going to hand this over to Jenny. Jenny, you’ve been involved in many breach reviews, and you’ve handled this from a legal perspective and an advisory perspective. Can you give us some of your insights into what this means to you, what a mature product looks like, and how some of the workflows work?

Jennifer Hamilton

Yes, I am happy to do that, Jonathan. Thanks for the introduction. So, the breach review is the result of all this cyber activity we’re seeing out there: many, many different attacks, incidents, investigations, and reporting obligations. The interesting thing about it is that reviewing your documents for what you may need to support a cybersecurity incident – whether that’s remediating data or making a report to a regulator – is very similar, in terms of the workflow and the automation considerations, to a regular, plain eDiscovery matter, or a cross-border matter, or data remediation.

So, I’ll just set the table in terms of what the breach review would look like and some of the key challenges, and then we’ll move to some of the other different service offerings where there’s going to be some overlap.

But specifically with breach review, again, we’re seeing a lot of this work because there’s just a lot of cyber activity out there, and companies are putting a lot of resources into identifying these risks to the data – whether that’s personal information, where exposure might be a violation of the CCPA or GDPR, or trade secrets and other sensitive data that the company is obligated to and wants to protect.

And those challenges, again, are similar to some of these other services, where you’ve got a high volume of documents, a high velocity of new documents being created and new emails being received, and then the numerous locations where data could potentially be breached or accessed, and all the different filetypes that go along with it. So, there’s just a huge question mark at the beginning of one of these matters where you believe your data is at risk – it’s been accessed inappropriately – and you have to figure out, well, what data, and where is it? So, this goes hand in hand, and in a lot of cases in parallel, with your data incident investigation, so that as you identify those locations or those filetypes, you can start to collect them or scan them to figure out the scope of what might be at risk and what you might have to report to a regulator. This also runs in parallel with communicating with breach counsel, your in-house folks, and your outside counsel about what the reporting obligations are for different types of documents and their different locations.

And so, the more you can do on the front end, the better, and I think that will flow nicely into the points about data remediation, where we can be more proactive in classifying and categorizing your high-risk data, so you’re that much farther ahead in a breach review. But let’s assume that hasn’t been done, and again, the struggle we all face in this space is that we’re receiving and creating more data than we can remediate. And so, it poses high security risks, but also high cost and a burden to the company when we end up with a breach review project.

And again, just to underscore why it’s important to do this: you could attack this review and identification of at-risk documents in a number of different ways. You can do it manually, you can do it with internal people, businesspeople, in-house counsel. Our service offering is to do it – in a similar vein to a lot of eDiscovery projects – in a diligent, consistent, and repeatable way, so that you can defend to a regulator how you know that this is the data that has been potentially accessed and what that means in terms of reporting obligations. And keep in mind that anytime you’re talking to a regulator about the data, you are certifying to them that what you’re saying is correct and accurate.

So, what we see in more and more of these situations is that companies want to turn this type of work over to a partner who has experience doing this, and who can demonstrate, if needed, to the regulator – and even just to outside counsel or the executive suite – this is what we do, this is how we do it, this is why we do it this way. It is consistent. It is repeatable. It is defensible.

Jonathan Flood

That’s great, Jenny, thanks. And maybe something that’s more akin to the automation consideration here – the only word under the breach review heading here is “Normalization”. For anyone who is not familiar with what normalization means in the context of data like this in a breach review, normalization refers to our ability to understand where we might be referring to the same person in many different ways. So, Jenny, you might be referred to as JH in an email, or as Jen, or Jenny, or Ms. Hamilton. Being able to normalize all that data into a single entity means that in every circumstance where we’re talking about Jenny Hamilton, we can say with confidence that every piece of data attached to your name, with respect to breach data, has escaped the company. And therefore, when it comes to notification, it makes it much easier for us to write that report at the end and say, “Hey, look, we know that there are 10 different Jennies here, but we’re pretty sure that they’re all the same person. We can provide the context for how sure we are based on the data we have”.

And that comes into one of the automation considerations of breach review: it’s really important for the expediency of a project that we quickly identify the variations in names, and the variations in companies, credit card numbers, social security numbers, and so on, and move from what was traditionally a very human-orientated process of identifying these differences into a more automated process that we can QC with human interactivity. For the most part, we can take the heavy lifting off of the early part of the review, automate that process, and focus, really, on the bigger legal challenges within the breach review.
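
[Editor’s note: as a rough illustration of the normalization Mr. Flood describes, the Python sketch below collapses name variants into a single canonical entity. The alias table and matching logic are simplified, hypothetical assumptions; production breach-review tools use far richer matching, backed by the human QC described above.]

```python
# Hypothetical sketch of entity normalization for breach review.
# In practice, the alias table would be built by tooling and QC'd by
# human reviewers rather than hard-coded.
import re
from collections import defaultdict

ALIASES = {
    "jennifer hamilton": "Jennifer Hamilton",
    "jenny hamilton": "Jennifer Hamilton",
    "jenny": "Jennifer Hamilton",
    "jen": "Jennifer Hamilton",
    "jh": "Jennifer Hamilton",
    "ms. hamilton": "Jennifer Hamilton",
}

def normalize(mention: str) -> str:
    """Map a raw mention to its canonical entity, if one is known."""
    key = re.sub(r"\s+", " ", mention.strip().lower())
    return ALIASES.get(key, mention.strip())

def group_mentions(mentions: list[str]) -> dict[str, list[str]]:
    """Group raw mentions under the canonical entity they resolve to."""
    grouped = defaultdict(list)
    for mention in mentions:
        grouped[normalize(mention)].append(mention)
    return dict(grouped)

if __name__ == "__main__":
    hits = ["JH", "Jenny", "Ms. Hamilton", "Jennifer Hamilton", "J. Smith"]
    print(group_mentions(hits))
    # {'Jennifer Hamilton': ['JH', 'Jenny', 'Ms. Hamilton', 'Jennifer Hamilton'],
    #  'J. Smith': ['J. Smith']}
```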

Moving onto the second topic here, data remediation. David, you’re the HaystackID DPO, and you’ve got a lot of experience with discussing data remediation projects and understanding the concepts within various organizations of how their data is stored. Could you talk to us a little bit about the legal challenges and any automation considerations that you’ve seen inside the data remediation space?

David Wallack

Happy to, Jonathan. I wanted to comment on some of the stuff that Jenny just said but I was on mute while I was talking. So, I’m going to circle back on it.

There was a lot to unpack in what she said, and I wanted to add one point on the breach review. There’s some recent case law, particularly out of Virginia, suggesting that if you are preparing reports for regulators in a breach, you may not also be able to claim attorney work product privilege over those reports. So, it’s really necessary to make sure that you are working in silos to maintain privilege: if you are preparing reports in anticipation of litigation, it’s absolutely crucial that those reports be prepared separately from the reports that are being prepared for regulatory purposes. And it’s really important to establish that communication protocol, both internally and also with any outside vendors, your law firms, whoever, to make sure that reports aren’t inaccurately classified for the wrong purpose – for a business purpose or for a regulatory purpose – so that you can maintain privilege over them. Everything that Jenny is saying is absolutely spot on there, so that’s definitely really great stuff.

On the data remediation – and we’ll probably spend most of our time here focused on GDPR – I think just about everybody working within GDPR is now aware of Schrems II, some of the guidance has trickled out from the EDPB, and people are starting to get their hands around it a little bit. But it’s pretty clear from the EDPB that data minimization is a real thing now, not only in theory, but in documented practice.

Some of the legal challenges out there are growing right now when it comes to finding comfort in transferring this data to third countries outside the EEA. There’s no Privacy Shield anymore, obviously, and the standard contractual clauses are great, but it’s also clear that those are not enough. So, the EDPB has been pretty helpful: they’ve put out a roadmap, if you will, and they list six steps that you can take in order to follow the Schrems II ruling.

The first one is you have to do a data transfer mapping exercise, which means that you need to know where all of the data is going, and you actually need to visually display that: where it’s held, where it’s going, and where its endpoint is going to be.

You need to document the legal basis for your transfer tool. Under Article 49, for instance, if, in the context of US litigation, you’re transferring the data in defense of an anticipated legal claim, that all needs to be documented.

And then you need to assess the effectiveness of your safeguards in the third country that you’re transferring the data to. What that particularly means is: do they have any local law or provision which may render any safeguards for the data ineffective? So, in the context of the US, we might say, “Well, is this data going to be subject to some type of further regulatory review or investigation, for instance, that could result in criminal sanctions against any of the individuals whose data is being transferred?” The analysis under the assess-effectiveness test might then be that there are not adequate protections in that country to transfer the data, so you might need to find another mechanism, like a Mutual Legal Assistance Treaty.

And then, most importantly, the next step is to adopt supplementary measures, which is the EDPB’s way of saying that the standard contractual clauses are no longer enough on their own. You actually have to back them up with something of merit.

So, as we circle back to our presentation, what does that mean in the automation world? Because they do want to see things like data minimization, data anonymization, and data pseudonymization.

Now, there are tools that can help get that done, but you need to actually document what you did along the way in order to satisfy a lot of these transfer requirements, and you need to demonstrate, somehow, that you’ve taken active steps to put in supplementary measures to reduce data and to protect data.

So, those are some of the challenges that are out there right now. It’s definitely a daunting task just throwing data into the US. I don’t think anybody really knows what the outcome of some of these transfers might be. Everyone is waiting with bated breath to see who the first person is that gets slapped on the wrist, and I don’t think anyone really wants to be that, especially in the context of US litigation, because you already have a big enough headache going on.

So, I do think that it’s really important that you go to the nth degree here to make sure that these transfers are as compliant as you can possibly make them. So that, at the very least, if you are called to the mat, you can say: this is what we did, we did our best under the circumstances of the US discovery obligation and the GDPR obligation, here’s what we did along with our outside vendor or outside counsel, we documented each step along the way, and this is what we came up with, and it was the best we could do.

Jonathan Flood

Thanks, David. That’s some great insight there, and I think data remediation is a very interesting topic not just for large organizations, but for any organization that has a legacy – I’ll call it a “problem”, and I think it’s fair to say that there is a problem out there – a long legacy problem of storing data for a very long time on the basis that you don’t know what it is and you don’t know what to do with it. One of the automation considerations for this – and I’ll pepper these along the way as we talk – is being able to create a robust workflow to identify data that falls outside your own retention policies. For example, if you don’t need to store data that’s older than seven years – setting aside things that might fall into a regulated environment, let’s say a telco or an insurance agency – and there’s no commercial value in that data to you, keeping it is a bigger liability than it is an asset. So identify it automatically: look for things that have not been accessed in years, create a policy, a structure, and a methodology, record what you do – as David said, it’s important that you record these things – and get rid of it. If it’s not there, it’s not a risk.

Quite often, you’re not going to know what data you had seven years ago, so I think it’s fair to have a discussion about it, but also create those automated environments that allow you to remediate your data.
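
[Editor’s note: a minimal sketch of the automated retention check described above, assuming a hypothetical file share at /data/legacy_share and using last-modified time as a stand-in for “not accessed in years”. As the panel stresses, a real remediation workflow would record the methodology and each decision before anything is deleted.]

```python
# Hypothetical sketch: flag files that fall outside a seven-year
# retention policy and write them to a report for human review.
import csv
import time
from pathlib import Path

RETENTION_YEARS = 7
CUTOFF = time.time() - RETENTION_YEARS * 365.25 * 24 * 3600

def flag_stale_files(root: str, report_csv: str) -> int:
    """Write a CSV of files older than the retention cutoff; return the count."""
    flagged = 0
    with open(report_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["path", "last_modified"])
        for path in Path(root).rglob("*"):
            if path.is_file() and path.stat().st_mtime < CUTOFF:
                writer.writerow([str(path), time.ctime(path.stat().st_mtime)])
                flagged += 1
    return flagged

if __name__ == "__main__":
    # The share path is illustrative only.
    n = flag_stale_files("/data/legacy_share", "stale_files_report.csv")
    print(f"{n} files flagged for remediation review")
```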

Now, before I move over to cross-border discovery, there’s a question from one of our listeners here, and I think it sits squarely between the data remediation and cross-border discovery. And the question is, “How about legal/operational challenges for cross-border discovery outside the EU, such as in Asian countries like India, China, Philippines where there are numerous intercompany or third-party activities, be it back office, R&D, and other outsourcing initiatives?”

Jenny and David, I think you both have some pretty good experience working in these, and Susanna, I think you’re going to cover some of this in your cross-border discovery discussion here in a minute, but I think it fits nicely in the middle here. If any of you want to jump in and tackle this first, I think it’s a good opportunity here.

Jennifer Hamilton

Yes, we have some experience in this area where it’s not just the EU and it’s not the US, and it fits into a category which is a little vague and unfair, called “Other”. It’s clearer what the rules are and what the workflow needs to look like to meet the rules in the EU and in the US – those are pretty well-built-out models – but when you get into other countries where legislation is pending, or the legislation is new, from the legal perspective, you don’t want to build your whole model around proposed legislation, or rules that are new, or rules where you don’t know how they’re going to be interpreted or applied. That makes it very challenging to advise your company on what the real risks are, and if you don’t know what the real risks are, then you really don’t know what your workflow needs to look like – unless you’re one of those companies with the buckets-of-money approach and unlimited budgets to build the perfect framework. For those, go for it.

But for everybody else, for the normal people, you really have to, in some ways, give a lot of extra thought to some of these countries – that’s why this is such a good question. India, China, and the Philippines: there are some drivers in China that end up directing the workflow, like the infamous – I’m trying to think of the name of it – law where the country (China) can classify anything as—

Jonathan Flood

—the State Secrecy Law, I think is what—

Jennifer Hamilton

State secret, and you don’t know until after they’ve classified it, which is after you’ve already collected and processed the data. This really came up in everyday conversation in this space about 10 years ago, when it started becoming an interesting topic and attorneys started advising around it, but it was still unclear how frequently China was going to apply this and what the consequences were.

And then, several years after that, it became clearer that the Government was willing to prosecute, and was prosecuting, both on a [tort] basis and for regulatory violations, but also as a crime – including really high-ranking members of the Communist Party, as well as companies who were doing internal investigations.

So, that’s when you start seeing the workflows being driven by, again, the better-known application and the better-known consequences, and doing something more GDPR-like: if companies have investigations, they’re going to investigate within the Chinese borders, which is difficult, because you don’t have the same number of providers and technologies, and you have to rely heavily on outside counsel and on the deployment of independent instances that you can bring in to address that. Other countries may not have quite that driver, but a lot of civil cases – business disputes – still have criminal sanctions attached to them. Technically, they’re alleging violations that are more criminal-like, so you want to take that into account in terms of what you do. But also, because there is less focus on Federal Rules of Civil Procedure-like rules and the sanctions you’d find in the US, some of those countries actually give you a little more flexibility in how you do things, and you just want to feel like you have the right story to tell the judge or the regulator as to why you did what you did and where you felt it was necessary to comply.

So, there’s definitely some room for more flexibility and creativity in our world when it comes to some of these other countries.

Jonathan Flood

I think it’s fair to say that there’s a big circle around the legal challenges when you’re dealing with some of the further Eastern countries – China and the State Secrecy Law is a really good example. OK, I think that answers the question, and if you’ve got any further questions about that topic, we can dive in there a bit later.

Susanna, I’m going to hand this over to you now, talking about cross-border discovery. I think there are a lot of legal challenges that we can talk about with cross-border discovery, and automation, I think, applies here as it always does. But your experience makes you well placed to discuss cross-border discovery, so I’m handing it over to you.

Susanna Blancke

Thank you so much, Jonathan, for your kind introduction, and thanks to my co-presenters here who make my job now very easy, because they have pretty much touched on every item that I wanted to discuss in regards to cross-border discovery.

For the sake of focusing on the main points, I would like to look at a few items that always present a challenge in cross-border discoveries: location, meaning the location of litigation; legislation; reviewer qualification, if it actually comes to a review; and the availability of technology.

As we have discussed and heard before from Jonathan, if it’s not there, there’s no risk, so if we had no international lawsuits, there would be no problem. However, since we do – and most big corporations today have branches all over the world, sprinkled in every country – the answer to all these challenges is really working very closely with local counsel. And that is easier in some countries and more difficult in others. I will also touch on China in a little bit.

So, in regards to location, we are dealing here with a huge variety of data privacy rules and requirements, even outside of the GDPR. The GDPR is a wonderful piece of legislation for the EU. I actually like to read it, because it’s easy to read and it’s easy to understand. While countries in Europe have their own ability to develop these GDPR rules a little bit further, other countries do not.

If you look at the US, for example, and at the ownership of IP, emails, documents, and communication in general: in the US, companies generally own the communication of their employees. Certain creative businesses, like architectural offices in the US – and in some other countries as well – own the building designs of their employees. This can all be very different if you look at other countries.

For example, in Germany, employees have to sign off on the company’s right to hold their communication. They own their own emails, they own their own documents, and the consequence of all these different approaches in the different countries of the world is that eDiscovery can potentially take a very long time.

Finding out these things after you have collected data, or when you want to start to collect data and you encounter these privacy issues, is something that can cause data to be collected at a later point. It could lead you into spoliation, and altogether it creates a huge delay. That can be prevented by planning appropriately, planning early, knowing your client, and knowing the case. And if in doubt, always contact local counsel to make sure that you understand what the local rules are for the case that you are dealing with.

Which brings me to legislation. Legislation has a huge impact on how data is handled between two or more different countries. For example, say you have a GDPR-related case that requires review in both the US and some country in Europe, and it is a dataset that needs to go through the same workflow, because an overarching workflow is wanted. Nonetheless, European data needs to be protected and can’t be – I don’t want to use the word “leaked” – but can’t be transferred into the US. So, that requires a certain specification of workflows, which means data needs to be sanitized of PII: social security numbers, credit card information, all the personal information that can identify a person and their whereabouts, what their shopping habits are, and everything else.

So, that is something that needs to be kept in mind when you’re dealing with international cases where you are asked to review datasets from different sides of the pond.

Another issue – it’s actually my favorite part of cross-border, because it hits you so unexpectedly – is privilege. The handling of privilege in cross-border litigation can hit you completely unexpectedly when you are dealing with a country where, for example, a paralegal-type position carries a privilege. It doesn’t in all countries. Certainly, in some countries like Ireland, there’s no position like a paralegal, so we don’t have this. Now, would legal secretaries be the kind of job that would carry a privilege? Probably not. But those are things that can change a case tremendously.

So, in addition to understanding what data protection applies in the country you are dealing with, it is also important to know the privilege rules. I think we can all agree there are many other issues that eDiscovery, and particularly international eDiscovery, carries, but privilege is one of the most important and, I feel, has a huge impact on a case.

The same is basically true for banker’s privilege and the extent of the banker’s privilege.

If you have a case that goes further, beyond the investigation, actually into litigation – or litigation is imminent – and you have to go into review, then we have to look at two things. First of all, what reviewer qualifications do you need here, and where are you conducting the review?

Coming back to China, there was a time – and Jenny, you may help me with whether I’m up to date with that information or whether it has changed – when China insisted that document reviews involving material declared a state secret or a state matter could only be conducted in China, by Chinese citizens who had been in the country for a certain amount of time. I think we can all agree that reviewing in China is probably one of the most difficult things that you can do: first of all, getting into the country, getting the right reviewers into the country, and then actually conducting the review under such heavy supervision. But that is something that I feel is also very often forgotten when you deal with international eDiscovery projects. Are the reviewers actually required to be citizens?

Now, in the US, that is true for certain state and military matters, and then there’s actually a distinction being made: what kind of citizen do we have? While, in normal life, a citizen is a citizen, in litigation there is a difference based on what state secrets or military secrets are being reviewed. So, there may be reviews that a naturalized citizen is allowed to conduct after a certain background check, but certain reviews require the reviewer to be a citizen born in the US.

So, those are things that are not necessarily known to an international company that is located in the EU, does business in the US, and is now confronted with certain regulations that it has accidentally violated rather than purposefully followed.

So, that is something that I feel people and review managers need a lot of education on, because it’s easily overlooked. If something like this is discovered while a review is going on, it can have potential effects on the duration of the review, the quality of the review, of course, and the outcome in court.

We have spoken a lot about technology. I think we can also all agree that technology is everywhere. We have an abundance available at our fingertips, but it is not available at the same level of sophistication in every country, so we have to make sure to understand that, if we are working with cross-border cases, our remote collection tool may not work the way it does in the US, or in France, or in Spain. There may not be the internet access to connect to certain tools in certain countries. And all these things need to be addressed in advance of making a move to litigate a cross-border case.

And again, the solution to all of this is preparing in advance, and making sure to contact local counsel and a local tech team that can parachute in and help you connect with the right points of contact for litigation in that country, to make sure that all of these things are addressed.

Yes, I think these are my four favorite points that I’d like to bring to the attention of the audience, because they come up on almost each and every case. They’re very popular – just like DSARs, on which note I am handing it back to Jonathan.

Jonathan Flood

Thanks, Susanna. I think cross-border discovery is a very interesting topic because it’s extremely complex, and although the core of it is very similar to regular discovery and regular reviews, it is, in many ways, infinitely more complex. And having experience in it, as you say, it’s really important to have that local knowledge and to reach out for support to local resources where it makes sense.

And it’s very similar with DSARs. DSARs have always been a thing, at least in the 12 or so years I’ve been working in the industry. There’s always been some element of an access request, based on an employee looking for information from their employer about some matter, or a client of some company wanting to find out what data it holds on them. Where DSARs, I think, fit into this nicely is that they sum up all the various things that we’ve talked about already. We’re talking about collecting unknown amounts of data; we don’t know what the content of it is; and it needs to be redacted in many ways. In many circumstances, the DSARs that I’ve handled – certainly the larger ones – have, over a shorter timescale, cost more per document and been more complex than some of the larger discoveries we’ve done, because when you get to the end of it and you realize you need to produce 5,000 documents to this person who has asked for their data, you also need to redact all of the other information that’s not pertinent to that individual.

So, there’s a huge redaction requirement on DSARs. And while that may be the legal requirement – obviously, under GDPR, you can’t produce other people’s private information to a third party without their consent – we can use some of the tools that we have at our fingertips to automate that now. There are some great tools out there where, rather than applying a search term and redacting the search term, you can apply an almost infinite number of regular expression or pattern-type searches to redact anything that looks like an email address, or a credit card number, or a date of birth, or any of those kinds of identification numbers or other patterns in the data. We can now pretty much automate that, for the most part, to a much higher degree of quality than having 20 or 30 different people doing it.
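
[Editor’s note: a minimal sketch of the pattern-based redaction Mr. Flood describes. The regular expressions here are simplified assumptions; production tools ship much richer pattern libraries and validation (for example, Luhn checks on card numbers), and the output is still sampled and QC’d by human reviewers, as he notes below.]

```python
# Hypothetical sketch: redact anything matching a known PII pattern.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # 13-16 digit sequences
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a known PII pattern with a labeled token."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    doc = "Contact jane.doe@example.com, card 4111 1111 1111 1111, DOB 02/03/1985."
    print(redact(doc))
    # Contact [REDACTED EMAIL], card [REDACTED CARD], DOB [REDACTED DOB].
```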

We come back to this discussion often in the industry about whether people or technology are better. My view is that they are inextricably linked, and you need both to make a good workflow and a good process. So, we can automate the process, but we need to have that QC done by human reviewers to make sure that it’s achieving the goal that we set out to achieve.

DSARs are a really interesting topic, close to my heart, because these days they’re a much more technically focused review, I think, than others.

In an effort to buy back a bit of time – because I think we’ve covered quite a lot on this first slide and we do have a few others – I’m going to cut my DSAR presentation a little short here and finish up this slide with the little note at the bottom. We say, “GDPR considerations, generally”, and I think, to be fair, we can replace GDPR in this circumstance with any local data protection legislation. And really, the considerations that you need to take into account are: what location are you in, what permission do you have, and what are your notification requirements? These are the three things that you need to consider for any dataset, whether you’re moving data from Germany to the US, or whether you’re moving data out of France, for example. You don’t necessarily have the right to just collect data in France; you need to have permission to remove data. These are very pertinent questions, and as we’ve mentioned in, I think, all of our various sections, you need to have local resources and you need to make sure that you can get that advice in pretty much real-time. So, having those resources available to you is important.

So, moving onto the next slide. I said at the beginning that this is something that’s pretty dear to my heart: slowing down to speed up. I’ve spent the best part of two years in an almost entirely automation-focused role – not a replacing-people-with-technology role, but one of truly automating our processes – so that we can produce a better product for our clients and so that our staff are able to spend their time doing the more important work that they’re good at, not spinning plates.

So, with that, I want to break this into a few different sections here, and the first one is that planning is key. I think any one of the three other presenters here could talk about this at length, but I’ll take it in a logical order. Jenny, you’re probably involved in some of the earliest parts of these discussions. From your perspective, how important is planning, and particularly having an interdisciplinary team?

Jennifer Hamilton

Well, it’s absolutely critical, because especially where there are already preexisting workflows and you’re under tight deadlines, the temptation is to jump right in, treat the project at hand like past projects, follow whatever workflows and use whatever tools you’ve used in the past, and move forward. And you lose direction and focus when you haven’t gotten together and really identified what the goals for the project are.

And the goals are usually driven by the legal considerations, risk, and compliance needs of that organization, along with what is technically available, possible, and helpful. Then you’ve got the DPO in the group for planning, where you’ve got a whole different set of considerations about whether, in the project, the employees’ and other stakeholders’ data is going to be treated appropriately, in compliance with the other rules and regulations that come into play in our ever more complex legal landscape.

And so, if you don’t have these planning calls, what you have is a workflow with no direction, one that doesn’t take into account what the true goals are or the true legal risks driving them. Having that initial conversation does cut down on time, rework, and cost, but it also manages the risk appropriately.

Jonathan Flood

That’s a really good point, and I think I’d probably be a very wealthy person if I had a dollar for every time I’ve asked at a meeting about something before we kick off a project just for [inaudible], where you can see the writing on the wall that people are diving in and collecting data and processing data before we’ve even understood what the categories of discovery are or what the issues at hand are. So, it is a really key thing, and particularly the interdisciplinary part. For me, it’s extremely important for any matter that you’re doing that you have stakeholders from every part of the organization involved, to at least have their say. They may not have experience in it, but they may have some invaluable insight that is going to be incredibly important for the process that you’re about to undertake.

David Wallack

I would also jump in there, just to piggyback a little bit off of what Jenny was saying. My experience has been, as well, that when we request to have additional resources brought to the table to have these discussions, not only are they thrilled and pleased that they were invited – oftentimes, it may be, for instance, somebody in a compliance role like the DPO, who maybe doesn’t have a lot to do with discovery – but it never fails to amaze me how much information is shed during those meetings by those initially uninvited resources, who give a tremendous amount of content, not only to us as the service provider. I also see people within their own organization, or their outside counsel, learning things that they had no idea were taking place before the DPO showed up at the meeting and had a conversation about cross-border discovery and his or her view of the world. So, it really is important.

And just to skip back a beat, too: this was a collaboration between some of our US team and our European team, so it was a little bit GDPR-heavy, but by no means would we ever boil down the world and reduce it to flashpoints like GDPR. There are obviously privacy regulations all over the world, in APAC – India has one, as was pointed out in the question from one of our audience members. So, yes, this stuff is incredibly complex, and there is somebody, probably, within the organization who has a good handle on it, and finding that person and bringing them to the table is just absolutely invaluable and crucial.

Jonathan Flood

Yes, you don’t want to find out at the end of your collection project that the data you’re about to collect is actually on a different continent, and that’s not an alien concept to us. We regularly have conversations with clients where they collect the data assuming it’s all in the US, or all in whatever country the primary business is in, only to learn from the IT person or from some other third-party resource that, actually, remember that project, we moved that to India, or we moved that to Finland, or to Poland, or wherever. So, it is really important to learn as much as you can as early as you can.

The second point here is about silos and disinformation. It’s in the same vein as the planning point and having interdisciplinary teams. One of the things we’ve tried to avoid, in our years of experience doing the projects we’ve done, is creating a silo. There are times when you need a silo to keep a project running on its own rails, apart from other parts of the project. But silos can also be dangerous: you can end up with a lot of disinformation about a project and not knowing what’s going on.

So, some of the main things here – and I’ve broken it into some high-level topics – are recruiting, quality control, and project planning. These are all really, really key, and having somebody who really understands the project is very important: somebody with a very high-level view of what’s going on who is able to control all the various aspects.

Susanna, you’ve done a lot of large multi-country reviews with lots of facets. Can you talk a little bit, just briefly, about the recruiting efforts and the quality control and project planning aspects that you’ve been involved in, just to give our audience a flavor of how complex that can be, but how important it is at the end of the day?

Susanna Blancke

Absolutely. The spiciest flavor that I can provide is ransom insurance for teams that you have to send into politically problematic countries. That’s certainly something that needs to be handled during project planning when you are expecting to go into a country to collect data that cannot leave the country but is to be reviewed nonetheless.

Going back a little bit, when it comes to recruiting on the national ground: very often, the recruiting department is necessarily separate from project management, and it doesn’t need to know what the case is about. What is important, however, is to know and communicate to recruiting what phase the project is in, because we have all experienced that projects have different phases. A project that is done may come back after a couple of weeks, then stop again, and then start up again. So, if you have four or five startups and maybe, in phase number three, additional training was provided, sending a blast to the entire team that was on the original review can be potentially dangerous, because you may have people reply to the callout who missed certain training that was provided in phase three. They may not have participated in that phase because they were unavailable, or on vacation, or whatever. It doesn’t need to be a quality issue that kept that individual off phase three, but it is important for recruiting to know, and it is the job of the project manager to communicate that: hey, we had different training here, and it’s recorded and in certain folders; for your next blast to bring the team back, can you please make sure that you only blast people who were on that phase. Because we expect the client wants this data reviewed as fast as possible, and we don’t have the time to retrain. We put all the document information in a certain secured folder on the review platform so that it can be accessed, but we can’t have a full team retrained.

And that is true for the first level and for the QC team, of course. I think those are the highlights. Connected to that is managing client expectations when it comes to review and recruiting. Very often, I feel that clients think we – or, in general, every vendor or recruiting agency – have foreign-language reviewers just sitting on the shelf waiting for a call to review just 50 documents. Very often on multi-language reviews, there are one, two, or three major languages, but there are a few languages that are just not very common. You don’t find Hungarians who also speak Spanish very easily; Thai review languages are usually very difficult to staff. And those are in the mix with maybe French, Italian, and Russian, which you can staff a little more easily. I think it is important that you have interdisciplinary communication that goes through the non-typical project management phases so that everybody can do their job better.

Jonathan Flood

Thank you, Susanna. And the last point I had here was to understand the legal requirements. I think we’ve talked about normalization enough for our audience to understand what it means. David, we’ve had numerous conversations about the joy of saying anonymization and pseudonymization over and over again in meetings. Aside from the difficulty of saying those words, can you give a quick sentence or two describing what anonymization and pseudonymization mean and why they’re important in these projects?

David Wallack

Yes, sure, just at a really high level, data anonymization would be any method that removes any trace of identification to an individual.

Data pseudonymization is a little more recent, and it’s where the name is simply replaced with a pseudonym. It could be another name, or it could simply be a token, like Employee 1, Employee 2 – anything that would be automatically repeated throughout a dataset, so that you would know which person, in theory, was being spoken about, but you wouldn’t know who that person actually was. And this is increasingly what data regulators want to see as part of supplementary measures under GDPR. It gets a little bit of pushback, obviously, in the context of US litigation, for the same reason that redactions do: it removes context from the document, and it could be expensive to go back and find out who the people actually were should you need more context for your US litigation demands. But it is, nonetheless, a tool that is probably going to be seen as more and more of a de facto privacy setting in these types of reviews going forward.
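
[Editor’s note: a minimal sketch of the consistent pseudonymization Mr. Wallack describes, where each person becomes “Employee N” everywhere they appear across a dataset. The hard-coded name list is an illustrative assumption; real tools detect names automatically and handle far more variation.]

```python
# Hypothetical sketch: replace each known name with a stable token so the
# dataset stays readable without exposing identities.
import re

class Pseudonymizer:
    """Assign each known name a stable token and apply it across documents."""

    def __init__(self, known_names: list[str]):
        self.tokens = {name: f"Employee {i + 1}" for i, name in enumerate(known_names)}

    def apply(self, text: str) -> str:
        for name, token in self.tokens.items():
            text = re.sub(re.escape(name), token, text, flags=re.IGNORECASE)
        return text

if __name__ == "__main__":
    p = Pseudonymizer(["Anna Schmidt", "Lukas Meyer"])
    doc1 = "Anna Schmidt emailed Lukas Meyer about the audit."
    doc2 = "LUKAS MEYER replied to anna schmidt the next day."
    print(p.apply(doc1))  # Employee 1 emailed Employee 2 about the audit.
    print(p.apply(doc2))  # Employee 2 replied to Employee 1 the next day.
```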

Jonathan Flood

Thanks, David. I’ll finish this slide off with a brief note – I’ll almost leave it at the three words: research, invest, and train. I really think that, as a team, HaystackID has excelled here, in my opinion anyway, compared with other places and teams I have worked with and seen. The effort you put into researching the technologies that you use needs to be high, and don’t take that with a pinch of salt – it’s important. And that’s the reason why, I think, reaching out to a vendor for large projects is really important: if you’re dipping your toe into a project for the first time, you’re not going to have the time to research all the tools and know which one is best for the project at hand, and they do require investment. These tools aren’t cheap.

The reason vendors exist is that we can make the investment to buy these large-scale products and deploy them across multiple clients and still make it work for us. The other side of it is really making sure that the team is trained, and that goes right from the bottom. If you’re talking about the first-tier reviewer, they need to understand the system. If they’re not trained, they’re not going to be efficient, and the same goes for all the tools along the way. If you’re using analytics tools, automatic redaction tools, or anonymization tools, and you don’t know what you’re doing with them, it’s going to be done wrong a few times – you might eventually get there – and you’re going to learn along the way. When you’re in the thick of a project, it’s really key that you have the right technology, that you know how to use it, and that it’s available to you.

So, as always, I would say, like we’ve said with the other points here, reach out to your vendors, reach out to your consultants, and ask them what tools are available and what tools are important for particular projects. Stay informed, understand it, it’s really key and it will save a lot of time when it comes to kicking off a project for the first time.

So, on the topic of technology, this slide is We Have the Technology, Let It Do What It Does Best. There are lots of technologies out there; this is not a technology discussion per se, it’s more about the automation of tools and the various workflows that you can apply. Again, we ran a little bit long on our first couple of slides here.

So, Jenny, I’m going to throw it over to you to talk about collection a little bit. I know it’s a topic close to your heart. What’s your experience in the collection side of things and the technologies that you’ve experienced in the past?

Jennifer Hamilton

Well, one thing I’ve learned is what the collection tools don’t do: they are not good at really large data collection projects where there’s not a lot of scoping, which is what we call boiling the ocean. Sometimes, when you see the demonstrations of the tools, you get this idea, as a non-technologist and especially in a legal ops setting, that these really can do everything, and that’s just not true. You still need to tell the tool what to do, including putting as much scoping around the collection as possible, and then ensuring, from a legal perspective, that the collection tool is configured the way you understand it needs to be to effectuate the goals of the project.

So, for example, a lot of times you turn off collecting photographs or images in a standard, run-of-the-mill matter in the US; that is not important for certain types of disputes, but it can be extremely important for certain types of investigations or litigation. And so, the challenge there is always making sure that legal and IT are communicating on the case. It goes back to that planning discussion. What are the requirements for the project? Are they driven by legal, or is it a technology issue in terms of the limitations of what the collection tools can or can’t provide?
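
[Editor’s note: a hypothetical sketch of the kind of scoping decisions Ms. Hamilton describes legal and IT agreeing on before a collection tool runs. The keys, values, and filter logic are illustrative only and do not reflect any specific product’s configuration.]

```python
# Hypothetical collection scope, agreed in the planning discussion.
import os

COLLECTION_SCOPE = {
    "custodians": ["custodian01", "custodian02"],       # who to collect from
    "date_range": ("2018-01-01", "2021-05-19"),         # scoping window
    "include_extensions": [".docx", ".xlsx", ".pdf", ".msg"],
    # Images are often excluded for run-of-the-mill disputes, but must be
    # switched back on for investigations where photos may be evidence.
    "collect_images": False,
    "search_terms": ["project alpha", "wire transfer"],
}

IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".tif"}

def within_scope(filename: str, scope: dict) -> bool:
    """Crude file-type filter illustrating one scoping decision."""
    ext = os.path.splitext(filename)[1].lower()
    if ext in IMAGE_EXTENSIONS:
        return scope["collect_images"]
    return ext in scope["include_extensions"]

if __name__ == "__main__":
    for f in ["budget.xlsx", "site_photo.jpg", "notes.txt"]:
        print(f, within_scope(f, COLLECTION_SCOPE))
    # budget.xlsx True / site_photo.jpg False / notes.txt False
```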

Jonathan Flood

Yes, I think that’s a really good summary. Ultimately, as the technologist of this group, my background has been in learning about the tools and using them in the best way possible, and I’m at pains to say that, oftentimes, people throw all the technology and all the tools at something from the very beginning, and they have way too much stuff. As you said, over-collection can be a huge burden and cost a lot of time and effort. If you can, as you say, decide not to collect the images, it’s much quicker to have that meeting at the beginning (a planning discussion), decide it there, and have it implemented – don’t collect the images on the 100, or 200, or 1,000 machines that you’re collecting from, rather than excluding them a thousand times later. These are things that have real cost implications down the line.

Next on that list there is processing tools. I’m going to keep this really high-level. My point about the processing tools is that there are loads of them out there. People have their preferences; we have ours and use certain ones ourselves. The key thing, really, is that from a technology perspective, whatever you’re using, try to automate the simple stuff as much as possible. If people manually move stuff from A to B all the time and you can automate that and save yourself an hour a day, well, that’s a lot of hours in a year, and you can put those hours to better use. There are a lot of inbuilt automation steps that you can use with some of the tools, like deciding whether or not to DeNIST, whether or not you dedupe, or whether or not you run the various processing options such as OCR. If you can queue all those steps up, have them done, and come in the next day with the work all done for you, that’s far preferable to trying to manually work through all these steps yourself.
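
[Editor’s note: a minimal sketch of the queued, unattended processing Mr. Flood describes. The step functions are placeholders for real engine operations such as DeNIST filtering, deduplication, and OCR; actual processing platforms expose their own job queues and options.]

```python
# Hypothetical sketch: queue processing steps so they run unattended.
from typing import Callable

def denist(batch: dict) -> dict:
    batch["steps_done"].append("DeNIST")   # drop known system files
    return batch

def dedupe(batch: dict) -> dict:
    batch["steps_done"].append("dedupe")   # hash-based duplicate removal
    return batch

def ocr(batch: dict) -> dict:
    batch["steps_done"].append("OCR")      # extract text from image-only docs
    return batch

def run_queue(batch: dict, steps: list[Callable[[dict], dict]]) -> dict:
    """Run every queued step in order; in practice this runs overnight."""
    for step in steps:
        batch = step(batch)
    return batch

if __name__ == "__main__":
    job = {"name": "matter-1234", "steps_done": []}
    job = run_queue(job, [denist, dedupe, ocr])
    print(job)  # {'name': 'matter-1234', 'steps_done': ['DeNIST', 'dedupe', 'OCR']}
```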

David, have you got any comments on the PII identification tools that are out there? Any advice or gotchas that you’d like to discuss?

David Wallack

Yes, they are certainly coming along quite nicely. The types of PII these tools can identify have grown very sophisticated. One thing I would note is that a lot of the PII searching and filtering can now be done at the processing step. And then on the automated redaction front, you have tools like Blackout, which is a wonderful tool for automated redactions. There are also other apps that plug into Relativity with automated data anonymization and pseudonymization that work quite nicely. There is still an element of human review that will always be necessary, to sample and test and make sure that they've actually done what they purport to do. But again, it's discovery; the goal is not perfection, it's reasonableness. And I would even venture as far as to say that, to some extent under the GDPR, there is a reasonableness factor too. Data is complex, data regulators know this, and this goes back to documenting that you did your very best to comply with the regulation, to safeguard the data, and to not move any data out that wasn't absolutely key and essential to move out.
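
Editor's Note: As a toy illustration of the pattern David describes, detect, pseudonymize, then sample for human QC, here is a minimal Python sketch using simple regular expressions. Production PII tools use far richer pattern libraries and machine learning; every name and pattern here is illustrative only.

```python
# Toy PII filtering with pseudonymization and a human QC sample.
# Regexes are deliberately simple; real tools are far more sophisticated.
import random
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pseudonymize(text: str) -> str:
    """Replace detected PII with labeled tokens instead of deleting it."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

docs = [
    "Contact jane.doe@example.com re: the claim.",
    "Employee SSN 123-45-6789 is on file.",
]
redacted = [pseudonymize(d) for d in docs]

# Reasonableness, not perfection: pull a sample for human review.
sample_size = max(1, len(redacted) // 10)   # e.g., a 10% QC sample
for doc in random.sample(redacted, k=sample_size):
    print(doc)
```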

So, that’s the brief update of where these tools stand, but they are out there and, certainly, they come leaps and bounds over the past few years.

Jonathan Flood

It’s a great point. The two bottom left and right boxes, I think, sum up this slide nicely, which is know what you’re doing, understand what you’re collecting, where you’re collecting it, and how you’re going to do that and use the appropriate tools. It seems simple, and once you have done this a few times and you know the limitations of the tools, you know which ones suit the job the best, and obviously, automation in those particular scenarios can save you an awful lot of time.

There are tools out there, some of which we use ourselves, where you can collect data remotely from as many machines as you want, all simultaneously. It can all come to a centralized point, pre-searched and ready for loading into a processing engine. That depends on the project you're looking at, but if you know what you're looking for, these things can be pre-configured and you can save yourself a lot of time.
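
Editor's Note: Here is a minimal sketch of what pre-configuring such a simultaneous remote collection might look like. The endpoint names, search terms, central path, and collect_endpoint helper are all hypothetical stand-ins, not any real tool's interface.

```python
# Hypothetical pre-configured remote collection fanning out to many machines
# at once, with results pre-searched and staged to one central location.
from concurrent.futures import ThreadPoolExecutor

SEARCH_TERMS = ["project falcon", "q3 forecast"]    # agreed in planning
CENTRAL_STORE = "//evidence-server/incoming"        # illustrative path

def collect_endpoint(hostname: str) -> str:
    # Stand-in: a real agent would collect from the machine, apply
    # SEARCH_TERMS as a pre-filter, and upload only responsive items.
    return f"{hostname}: pre-searched for {SEARCH_TERMS}, staged to {CENTRAL_STORE}"

endpoints = [f"laptop-{i:03d}" for i in range(1, 101)]   # e.g., 100 machines

with ThreadPoolExecutor(max_workers=20) as pool:         # collect concurrently
    for result in pool.map(collect_endpoint, endpoints):
        print(result)
```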

Next slide here. I think this one is probably dear to the heart of everyone on this call as well, less so mine because I'm more of a technologist, but I think it's an important one nonetheless, and that is working with outside counsel and third parties to define your process. I know we've covered this to some degree, and we're heading up to our 75 minutes here. So, Jenny, if you want to kick off here, please?

Jennifer Hamilton

Sure, I’m happy to. Again, each project is a little bit different from the next, so you have to have these planning calls in the beginning to identify goals, as we’ve discussed, and the way I see it, the goals of the project is a combination of whatever the legal requirements or compliance risk are in combination with a particular breach review, what the security risks are as well. And then once you have those goals established, you can set either the parameters, start the collection, then explore the data, understand what may be interesting or important, and then sort of remediation or report that you might need to make, and then launch into the processing.

The challenge with any of these projects, and with working with outside counsel and third parties, is that the process is iterative all of the time. So, you have to stay closely connected, and each party has to play their role. Outside counsel has to provide advice drawn from the experiences they've had. The company, which has multiple layers of risk, needs to come in with an open mind to understand the different options for meeting the goals they established in the beginning. And the data processor has to take its own responsibilities seriously, both to the client and, in a space of increasing regulation, the increasing responsibilities we have to effectuate.

And so, the iterative part is that this sharing has to keep going back and forth, and back and forth, throughout the entire process, not just the planning phase, but when you're actually doing the execution, or the testing, building the [inaudible] testing leading up to the execution. So, again, everyone needs to stay closely connected throughout the process.

And I know from working with David that he has a lot of experience on these issues as well, so I think this would be a great time to hear some more detail from him on what he finds really important here.

David Wallack

Well, I think that you really covered it nicely, and I agree with how you phrased it, Jenny: it's responsibility sharing. And to point back to the recent EDPB guidance, it has become pretty clear that data processors can no longer simply rely on the direction of their clients when it comes to fulfilling their responsibilities for cross-border data transfers. The data processor also has a responsibility to document all their steps, make sure that the supplementary measures are in place, and make sure that they are using the correct legal basis for the transfer. And that can be difficult because sometimes service providers are, so to speak, data-blind. Oftentimes, we don't know what we're getting until we get it and start looking at it, and by then it might be too late.

So, again, I think this goes back... and it makes me recall a recent experience we had. We had a large Swiss client with a US presence and a broader EU presence as well, and they were involved in US litigation. The outside counsel in the US was of the opinion that they just wanted to throw everything over the wall right away, and that it would be fine for us because they had signed off on it and we were just working at their direction.

But we were able to work with their in-house counsel, and we had a really informative meeting on some of the potential issues that we saw with simply bringing all the data over. The in-house counsel was then able to rendezvous internally with their DPO, who affirmed and highlighted all of these issues we had brought up, and this really informed the entire project.

We came up with a totally different workflow. We bifurcated it, with privacy filtering and review in the EU before only the absolute bare-essential dataset was brought into the US for US litigation review and, eventually, production to opposing counsel. And we had long talks about data anonymization and pseudonymization, and we figured out what the best workflow was. But it wasn't very long ago that it wouldn't have gone that way. I think most vendors would have simply said, "OK, well, if you say so," but I'm not sure that that's really the state of the world anymore.

So, it’s not… I’m not sure that I like liability sharing so much, but I do like the concept of responsibility sharing here because I do think that that sort of accurately describes the context here, that we’re all in this together. I think the data processor and the data controllers’ interests dovetail nicely. Mostly, the obligations marry each other, whether you’re the processor or the controller, so why not all get on the same page with this stuff, why not find out exactly what’s in the data. As Susanna was aptly pointing out, work with local counsel to find out if there’s anything peculiar, like a blocking statute or if there’s a local labor union law, or if it’s a totally different foreign data privacy regime that maybe everybody is unfamiliar with. Why not all work on those issues together to make sure that this is done in the most compliant way possible.

Jonathan Flood

Yes, perfect, and that sums it up. All these slides have been working together in the same direction: working with outside counsel, planning and discussions, and the automation and legal challenges we talked about in the earlier slides are really about defining the process, and that's how you build your automation workflow.

I’m not going to do much talking on this slide, in particular, but I think what this really summarizes, this discussion really summarizes here is what we’re really trying to build is a really high-quality product. What our clients are looking for is the ability to reach over, pick up the phone, or send an email to one of our team and say, “Hey, this is the project we have, here’s how we feel it needs to run, what are your thoughts?” it’s a very consultative approach now these days. We’re not so much the receiver of instructions much like to your point, David, on sharing responsibilities is that we really are in an advisory capacity many times, and we’re discussing these things on an ongoing basis.

The end goal is that we have a really high-quality product. We're doing the same thing we've done for years; the general process hasn't really changed. We collect data, we process it, we put it into some system, it gets reviewed, and then there's the end product, whatever that is, whether it's a litigation, a DSAR, some sort of breach recovery, or any of these other newer versions of the same thing. Ultimately, we've got a bunch of data that needs to be looked at in some fashion, and it needs to go somewhere at the end.

And what the last 10 years have really shown me is that we've got a really broad spectrum of people on the team. The more skills and experience you have on the team, the better you're able to answer the more complex questions that come up. I feel like what we're seeing now is an example of how the complexity of the systems out there has developed and how we've been agile enough to move with those changes and still produce this high quality at the end, which is, I think, really key to the process here.

I’m going to move onto our last actual slide, but this is where I’ll have everybody maybe speak for a little minute or two. This is one of my favorite words or favorite phrases, which is “Operational Excellence”, and I, ultimately, what I kind of summarize this into is for our operations team. What we’re doing and what we’re having our sales team sell to our clients is we’re trying to get our clients to outsource operational excellence. They want our clients to come to us so we can give operational excellence as a service.

But what does that really mean? I think it's really important to understand what a broad phrase like operational excellence really means. At a high level, it's everything we've talked about: subject matter experts, best practices, helping clients. We use a consultative approach all the time, multidisciplinary teams, repeatable workflows, and, more importantly than anything else, we take our own medicine. We do the things ourselves that we tell our clients to do. We are SOC 2 certified, and our data centers are ISO 27001 certified. Whatever requirements we tell people they should be meeting, we try to meet ourselves, so that we can at least speak from experience and tell people how they work.

I’m going to pass it back to the team here, the panel here. Has anyone got any closing remark? I’d like to hear from everybody just to close out this people, process, tools final point here. Any words of wisdom from the team here?

Susanna Blancke

I can start here, Jonathan, with my words of wisdom. Communication is really the glue that keeps all these operational steps together: making sure that all the teams are informed about the next steps; having daily or weekly calls; making sure that the client is not just instructing over the phone but also provides written instructions; and, with all the remote review going on these days, making sure that there are enough chatrooms, specialized for first-level review, for QC, and for review managers. If you have a very hands-on client, maybe that client wants to be included in one of the chatrooms. Budgets are very important, and they all touch on people, process, and tools. So does the way you provide feedback, be it via email with a return receipt sent when it has been read; the audits that are conducted for attendance, for productivity, and for hours worked, which in turn are billed to the client; and the reporting that is done on a daily basis so that there's no surprise at the end of the month or the end of a billing cycle when the client sees the bill and sees how far the project has, or has not, matured.

I wanted to come back to what David said earlier. It is really important to connect with everybody, and to listen to the people who are not usually at the forefront or presenting. The best individuals for knowing where data is located are the team or the individual that works with and processes data at a very basic level. They know how the information is defined, where it is hidden, and where you can find what you're looking for, I think.

That’s how I would like to close it up.

Jonathan Flood

Thanks, Susanna. Jenny, David.

Jennifer Hamilton

I would just say that, as a former client and a buyer of these services, what I'm looking for in terms of operational excellence from a provider is one who knows what I don't know, and who can offer the context of their broader experiences in other industries and what is developing as best practice, so that it can inform the legal decision-making process, be brought to the table proactively, and be included in the operational framework and the workflow so it can be executed on. That's a big ask, especially when the voices of the client and outside counsel can drown out the service partner at times. But I think it's then incumbent on the client to recognize that they don't know what they don't know, to recognize what the provider is really good at bringing to the table, and to encourage the provider to be proactive in that critical communication piece that Susanna is talking about.

David Wallack

And I’ll just add, Jonathan, I suppose here with a little bit of a lesson on compliance I think that I’ve learned over the years, which is to tell people what you’re going to do, do what you said you were going to do, and then explain what you did. You need to communicate it three times, and you need to be able to, at the end say, “Listen, we knew it wasn’t going to be a perfect process”, as we’ve alluded to you several times, all of these privacy regimes are incredibly complex and may be in direct conflict with one another. And so, there may not always be what we could consider legally to be a completely compliant or ideal solution, but what you can do is you can document every step along the way and part of that is this sort of conversational piece, this integrative communicative piece that we’re talking about right now. That is actually part of the documentation and record-keeping requirement.

So, I do think that not only are these conversations important to have, theoretically or for better outcomes, but this level of consultation may actually be required.

Jonathan Flood

Thanks, David. Sorry, I changed the slide there; I didn't mean to cut you off. I'm just conscious that we are over by a little bit of time, but I feel like the topics you were covering were important, and what you had to say comes from real experience. We do dozens of matters a month, we review millions of documents every month, and we've got a lot of combined experience in the company.

So, with that, I’d like to hand this back to Rob. Thank you to all the panelists for being so knowledgeable and talking about the topics with me. Rob, over to you.

Closing

Excellent. Thank you so much, Jonathan, and the entire team. It was an excellent presentation, and we truly appreciate it. We also appreciate the support from our partners, Women in eDiscovery and ACEDS.

Additionally, as Jonathan mentioned, I want to thank all of you who took time out of your schedules to attend today. We know how valuable your time is, and we appreciate you sharing it with us. As a reminder, we also hope you'll have a chance to attend our next monthly webcast, scheduled for next month on Wednesday, June 23, at 12 p.m. Eastern, on the topic of Operationalizing Data Mapping. Again, we hope you can attend.

Thanks for attending today, and this formally concludes our webcast. Have a great day.


CLICK HERE TO DOWNLOAD THE PRESENTATION SLIDES


CLICK HERE FOR THE ON-DEMAND PRESENTATION