[Webcast Transcript] Breaches, Responses, and Challenges: Cybersecurity Essentials That Every Lawyer Should Know

Editor’s Note: On September 15, 2021, HaystackID shared an educational webcast designed to inform and update cybersecurity, information governance, and eDiscovery professionals on how organizations can prepare, address, and respond to the challenges associated with incident response and post-data breach requirements.

While the full recorded presentation is available for on-demand viewing, provided for your convenience is a transcript of the presentation as well as a copy (PDF) of the presentation slides.


Every large corporation and organization today faces the significant threat of cybersecurity incidents. However, most practitioners who handle litigation and investigation matters are unfamiliar with the basics of responding to cybersecurity incidents and with the challenges associated with incident response and post-data breach requirements.

In this panelist discussion and presentation, cybersecurity incident response, legal discovery, and privacy experts will share considerations for handling a cybersecurity incident.

Webcast Highlights

Areas highlighted that will impact incident response and post-data breach discovery and review practices and processes in the coming years include:

+ The Role of the Cyber Insurance Company and Trends in the Cyber Insurance Industry
+ The Roles of Internal Legal, Compliance, IT, and Business Personnel
+ The Roles of Outside Counsel and Incident Response Firms
+ The Workflows of Incident Response Between and Within the Participating Companies and Groups
+ Federal, State, and International Legal Requirements and Strategies for Meeting the Expectations of Different Regulators
+ The Protection of Privilege in Light of Recent Case Law
+ The Proactive Steps that Companies Should be Taking to be Prepared to Respond to Cybersecurity Incidents

Speakers

+ Ashish Prasad – Vice President and General Counsel, HaystackID
+ Michael Sarlo – Chief Innovation Officer & President of Global Investigation Services, HaystackID
+ Jenny Hamilton – Deputy General Counsel for Global Discovery & Privacy, HaystackID
+ Matthew Miller – Senior Vice President of Information Governance and Data Privacy, HaystackID


Presentation Transcript

Introduction

Hello, and I hope you are having a great week. My name is Ashish Prasad, and on behalf of the entire team at HaystackID, I would like to thank you for attending today’s presentation and discussion titled “Breaches, Responses, and Challenges: Cybersecurity Essentials That Every Lawyer Should Know.” 

Today’s webcast is part of HaystackID’s regular series of educational presentations to ensure listeners are proactively prepared to achieve their cybersecurity, computer forensics, eDiscovery, and legal review objectives.

Our expert presenters for today’s webcast include individuals deeply involved in both the world of cyber discovery and legal discovery as some of the industry’s foremost subject matter experts on discovery and review. 

Let me take this opportunity to introduce our speakers. First will be Mike Sarlo, who is the Chief Innovation Officer and President of Global Investigations and Cyber Discovery Services for HaystackID. In this role, he facilitates innovation and operations related to cybersecurity, digital forensics, and eDiscovery, both in the US and abroad. He is also charged with leading the development and design of processes, protocols, and services to support cybersecurity-centric post-data breach discovery and reviews. 

Next, I’d like to introduce Jenny Hamilton. Jenny is Deputy General Counsel for Global Discovery and Privacy at HaystackID. Formerly the head of John Deere’s Global Evidence team, Jenny is a leading expert on corporate discovery, legal and regulatory compliance, and international privacy, and she regularly educates and advises law firms, legal departments, and government agencies on these issues. 

Next is Matt Miller. Matt is Senior Vice President of Information Governance and Data Privacy at HaystackID. With a background first in legal and then in eDiscovery, Matt formerly co-developed Ernst & Young’s Information Governance services practice and led Global IG Advisory Services at Consilio LLC. He has led many complex incident response-related forensic investigations and multinational, petabyte-scale data governance and privacy engagements. 

Finally, back to me. I’m the Vice President and General Counsel at HaystackID. I serve as an expert witness to defend discovery compliance procedures in litigation and investigations. I have formerly served as litigation partner, founder, and chair of the Mayer Brown LLP Electronic Discovery and Records Management practice; founder and CEO of Discovery Services LLC; and General Counsel of eTERA Consulting. In addition, I have served as the Executive Editor of The Sedona Principles: Best Practices, Recommendations, & Principles for Addressing Electronic Document Production; Co-Editor in Chief of the PLI treatise Electronic Discovery Deskbook: Law and Practice; and Executive Editor of The General Counsel’s Guide to Government Investigations. 

On behalf of the entire HaystackID team, welcome — we’re delighted to have you here with us. As a side note, today’s presentation is being recorded for future viewing, and a copy of the presentation materials will be available to all attendees within about a day on the HaystackID website. 

At this time, let’s get started on today’s presentation and discussion, and I’d like to hand the gavel over to my colleague and friend, Mike Sarlo.

Core Presentation

Michael Sarlo

Thank you very much, Ashish. We’ve just had the agenda up, and we’re going to be covering quite a few topics today. Thank you all for coming. We’re going to focus a bit more at a high level — we’ve had a few presentations in a different series that get down into the nitty-gritty of post-breach events, so today’s presentation is going to cover a wider gamut. 

So, just to move on, we’re going to jump into some of the stats here to set the stage. Just so everybody’s aware, the average cost of a breach is rising, despite massive investment by large enterprises in larger InfoSec teams, better security training, and resilience across the enterprise. On a global scale, we went from about $3.86 million in 2020 to about $4.24 million in 2021, and US breaches cost quite a bit more: about $8.64 million in 2020, now up to about $9.05 million in 2021. There is a lot that goes on beyond just sealing the breach, so to speak, that factors into the budget. This includes the cost of lost business, which is a critical thing for any practitioner on the legal side to be aware of, beyond the sensitive data that needs to be mined, collated, and organized into notification lists. There are massive costs just related to the detection, escalation, and notification of breach events, and post-response activities such as credit monitoring — and even the tail end of other lawsuits and DSARs, in Europe and now coming to the US in different capacities, can add quite a long tail to these matters.

Next slide. And every time any of you save that document with a ton of sensitive data to your desktop — when it’s stolen, in the US we’re looking at about $161 per record. That’s a pretty heavy increase in cost. When customer PII was lost or stolen — and it’s the PII itself that matters, not just the record — you’re at about $180 per record in 2021, up about 20% in general, and healthcare organizations continue to be the heaviest hit at about $429 per stolen record.
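Those per-record figures lend themselves to a quick back-of-the-envelope calculation. Here is a minimal sketch using the approximate averages cited above; the record counts are hypothetical, and actual breach costs vary widely by incident and jurisdiction:

```python
# Rough breach-cost estimator using the per-record figures cited above.
# Dollar amounts are the approximate 2021 averages mentioned in the
# presentation; real-world costs vary widely by incident.

COST_PER_RECORD = {
    "generic": 161,        # average cost per stolen record, US
    "customer_pii": 180,   # customer PII, 2021
    "healthcare": 429,     # healthcare records
}

def estimate_exposure(record_counts: dict[str, int]) -> int:
    """Sum a back-of-the-envelope dollar exposure across record categories."""
    return sum(COST_PER_RECORD[kind] * n for kind, n in record_counts.items())

# Hypothetical example: a stolen file holding 10,000 customer PII records
# and 2,000 healthcare records.
exposure = estimate_exposure({"customer_pii": 10_000, "healthcare": 2_000})
print(f"Estimated exposure: ${exposure:,}")  # $1,800,000 + $858,000 = $2,658,000
```

Even a modest desktop file, in other words, can translate into seven-figure exposure before detection, escalation, and notification costs are counted.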

Just to set the stage for anybody who’s not fully aware: what is a ransomware attack? We see them all the time in the news these days — it seems to be an endless barrage of ransoms — but there are also quite a few other cyber-attacks going on: phishing attempts, general intrusions, and DDoS attacks designed to take down business operations and disrupt major sectors. Amazon, Microsoft, and Google infrastructure — we hear about these, they affect us all, and when your Netflix suddenly stops working, that’s often the reason why. Ransomware is very much like a virus, though it functions a little differently. The threat actors usually infiltrate a network and somehow gain some level of elevated command and control over a system or set of systems. Typically, what we see is very far- and wide-reaching control of the enterprise. This is why encryption is so very important these days. 

Commonly, there’s an operation where data is actually exfiltrated from the network. The reason is that large-scale enterprises and even medium-sized businesses have pretty good backups these days, and although in many cases it may not be instantaneous to restore those systems, they can oftentimes be restored from a backup state. So, threat actors have gotten much smarter about this in the past year and a half, and we’ve seen a massive rise in exfiltration of data in addition to ransoms. Once your data is ransomed, you usually get a nice little pop-up whenever you open a file — or even a pop-up on your operating system — asking you to contact the threat actor and negotiate. They’re no longer just asking for a single payment per record. They’re working with what’s called, on their side of the aisle, big game hunting tactics: identifying targets that they know have the capability to pay very large ransoms. Even if the victim can restore the data, because the attackers have exfiltrated it, if you don’t pay, they’ll threaten to post it freely and publicly on the general internet — a strategy and tactic largely called data shaming. In general, we’re seeing that cybercriminals are less interested in stealing personal information for its own sake — although they do — and are really looking to knock out key, entire enterprises that, from basically not being able to function, are typically going to pay that ransom. We’ve seen this with Colonial Pipeline and critical infrastructure here in the US: it just has to come back online, and the ramifications are too extreme not to pay, even if you could restore it.

Next slide. So, I’m going to kick it out to my colleague, Matt, who’s going to talk about the APT lifecycle.

Matthew Miller

Yes, thank you, Michael. What we have here is — as we know, the sophistication of malicious actors has really increased over time, and what you see here is what goes on, on a daily basis. These advanced persistent threats are out there. Right now, malicious actors are attacking, using a variety of different tactics, techniques, and procedures to try and get into the network. They begin with the intelligence-gathering phase where, from, let’s say, a social engineering perspective, they’re scoping out your LinkedIn profile and trying to figure out the easiest method of entry. They’re trying to guess passwords — they have automated algorithms that are just banging against the login page, trying to log in using your credentials — and if they end up establishing a foothold and getting into the network, they’ve completed that initial exploitation. The idea is that they can then take over command and control, leveraging the rights, access, and permissions of the user whose credentials they’ve compromised. 

From there, they are searching around the network and trying to escalate their privileges to a higher level so that they can access more critical and sensitive information — something they know will be valuable to exfiltrate from the organization: data containing personally identifiable information, critical PII, critical intellectual property, things of that nature. They gather it, encrypt it — like Mike was just talking about with ransomware — and exfiltrate it from the network. So, IT security teams are really challenged to complicate what the attackers are doing. There’s a variety of countermeasures that we’ll get into a little later, but ever since the pandemic started and we’ve been working from home, if you’re not using two-factor or multi-factor authentication, or VPNs, those are the points these malicious attackers are going to use. The earlier that you, as an IT security team, can pick up where the bad actors are coming in, the easier it is to respond. 
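As a concrete illustration of one basic countermeasure against the automated password guessing described above, here is a minimal, hypothetical account-lockout sketch. Real environments rely on hardened identity providers, MFA, and rate limiting rather than anything this simple; the class and threshold here are purely illustrative:

```python
# Illustrative sketch only: lock an account after repeated failed logins,
# one simple defense against automated credential-guessing. Production
# systems use identity providers with MFA and rate limiting instead.

MAX_FAILURES = 5

class LoginGuard:
    def __init__(self):
        self.failures = {}   # username -> consecutive failed attempts
        self.locked = set()  # usernames locked pending security review

    def record_attempt(self, user: str, success: bool) -> str:
        if user in self.locked:
            return "locked"
        if success:
            self.failures[user] = 0  # reset the counter on success
            return "ok"
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= MAX_FAILURES:
            self.locked.add(user)    # a real system would alert the SOC here
            return "locked"
        return "retry"

guard = LoginGuard()
for _ in range(5):
    status = guard.record_attempt("alice", success=False)
print(status)  # "locked" after five consecutive failures
```

The point is the early-detection principle Matt mentions: a lockout both slows the attacker and generates a signal the security team can act on.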

And then we’re going to look at the workflows for incident response and get into the interaction between all the different groups, and how complicated it is to handle the lifecycle of a data breach. 

Michael Sarlo

Thanks, Matt. So, discovering a breach — finding out there’s been a breach — may not happen right away. It’s not uncommon for threat actors to gain control of an environment, or to be in any type of IT infrastructure or networking node, for quite some time before the breach is actually discovered. Sometimes the alarms aren’t kicked off until there’s a ransom, or until you start seeing very odd things in the network, when elevated commands start to go on and the actors try to gain access to resource types that are typically better protected — things like backup systems and decryption keys. Sometimes threat actors get greedy, and their teams start to actually deploy tools and enter through additional compromised endpoints. 

Once the breach is discovered, by any means, there’s obviously a massive effort to first try to close any open holes, and a large investigation will usually begin: logs need to be preserved, systems need to be preserved, and the goal is to really lock down systems and lock out threat actors from doing any more damage. It’s not uncommon that we come into a matter, find that the organization we’re working for believes it has sealed the breach and stopped the attack, and come to find that it’s actually still going on. 

Internal response teams are critical here, and my colleague Jenny Hamilton will talk about roles and responsibilities as they relate to different stakeholders inside the organization. That response team, once built out and operationalized, may be a combination of channel partners and managed services providers — a lot of large organizations outsource many of these security operations, or keep bits and pieces in-house with some folks external, just due to the size and scale of their networks and geography. Law enforcement may be contacted if the breach meets the bar, so to speak. Third-party consultants will start to come in from every angle. 

Usually, organizations that have cyber insurance are going to go straight to their insurer and will then be able to work with breach coaches who have specialty knowledge, usually in their industry and/or in the field as a whole. Those folks — these are the lawyers — are going to be a key resource for managing all the different issues going on, including providing advice on the third-party consultants who are going to come in. 

There is a lot that goes on once the breach coaches come in. There are so many different input streams: looking at contracts, and really trying to pinpoint the pocket of data that actually could have been exfiltrated and where there could be harm — harm to the company, to any individuals whose personal data has been compromised, and, in addition, to any business partners. 

The notification process may begin there — certainly for any business partners or customers who are covered entities, via BAAs or contracts that require notification as soon as a breach has been identified. This doesn’t necessarily mean that you’re data mining and sending notification letters at this point. You may just be notifying third parties you do business with, where you have a contractual obligation to notify. There’s a whole set of workstreams, so to speak, that start to happen there, where a single breach can become a breach of many. This is something everybody is concerned about these days when looking at liability provisions in contracts and assessing your insurance as a business. If you’re doing large-scale global business, it’s very important to manage your contractual obligations from a liability standpoint and to have a close relationship with your cyber insurers and your insurance providers in general on the business side. 

Obviously, the PR implications can be extreme, and a measured, transparent approach is oftentimes the best approach these days. We’ve seen many stories in the news of big enterprises that chose not to make public the fact that they’d had a breach, and that has not worked out for them whatsoever. It happened with Facebook, Uber, and other large enterprises, and in general, the advice organizations are given certainly leans toward transparency. 

There’s a lot that goes on just getting to notifications: massive human review leveraging AI and different pieces of technology to streamline identification of sensitive data, and the need to collate and understand, from a geographic standpoint, where the data subjects whose data may have been compromised are located. Different states can have different requirements for what constitutes a breach, how affected individuals need to be notified, and in what timeframe. 

In general, HIPAA in the United States is a good clock to measure all things by. And then there are many types of responses to inquiries. You may have government inquiries, and you may be required to deal with a slew of different state and federal regulators, depending on the size and scope of the breach and the level of harm that’s occurred as a result. 

And then — even if you pay a threat actor a ransom, they’re usually going to give you a nice little report that indicates how they got into your network (in the event of a ransom that was not a nation state on the other end). That will serve as at least a small piece of the puzzle for increasing your defensive posture and resilience. All of a sudden, you start to see organizations spending a lot of money on security, and after a breach, they’re usually in a much better posture — although we’ve seen in the news lately that some organizations continue to be affected over and over again. This often happens simply because massive global networks can be very difficult to secure completely. 

Go ahead, Matt, next slide. 

And really, as an incident response contractor, right away you’re trying to get your arms around what the incident is and what we believe the incident is. Oftentimes, you walk into an environment and work with internal stakeholders who may have their own ideas, or have come to some theories, about what the incident is, where it began, and what its scope is. It’s really important to walk in with open eyes and assess everything on your own — first and foremost, beginning with log preservation and preservation of evidence, which we are all so attuned to in the eDiscovery and digital forensics world. In the IT world, people don’t always think about legal hold, what that means for log preservation, and data that tends to die on the vine. That can sometimes be missed. 

So, many of you coming in as legal practitioners or litigation support professionals are asked to give advice on these matters. The log capture is critical — very important. At the same time, in order to really validate the incident, you’re looking for indicators of compromise. These are any type of remnants in logs of data flows, human flows, and threat actor flows through network endpoints — starting at the switch and the firewall, flowing down to machines, through NASes, down to individual application usage like credential management. All of that goes into a timeline and is analyzed for any type of aberration, anything that falls outside what we would assume to be typical usage. Usually, there are key events that are seen when the breach begins, when a ransom begins, and when there’s exfiltration — and a good indicator is oftentimes a data flow that isn’t a standard baseline workload for that machine, that application, or that process. 
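The baseline-versus-aberration idea described above can be sketched very simply: compare each host's latest transfer volume against its own history and flag large deviations. The log format, hosts, and three-sigma threshold below are all hypothetical; real detection stacks (SIEM/EDR tooling) are far more sophisticated:

```python
# Toy illustration of baseline analysis on network transfer logs: flag
# hosts whose latest outbound volume far exceeds their historical norm —
# the kind of aberration that can indicate exfiltration. The log format
# and the 3-sigma threshold are illustrative assumptions.

from statistics import mean, stdev

def flag_exfil_candidates(transfers: dict[str, list[int]], sigma: float = 3.0):
    """transfers maps host -> daily outbound MB; flag a host when its latest
    day exceeds (baseline mean + sigma * baseline stdev)."""
    flagged = []
    for host, volumes in transfers.items():
        if len(volumes) < 3:
            continue  # not enough history to form a baseline
        baseline = volumes[:-1]
        mu, sd = mean(baseline), stdev(baseline)
        latest = volumes[-1]
        if sd > 0 and latest > mu + sigma * sd:
            flagged.append((host, latest))
    return flagged

logs = {
    "fileserver01": [120, 130, 115, 125, 128, 9800],  # sudden ~9.8 GB spike
    "workstation07": [40, 35, 50, 45, 42, 48],        # normal variation
}
print(flag_exfil_candidates(logs))  # [('fileserver01', 9800)]
```

This is exactly the "standard baseline workload" comparison Mike describes, reduced to a few lines: the spike stands out not because 9,800 MB is inherently large, but because it is wildly atypical for that machine.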

We hear a lot now about third parties who expose other parties to a data breach — the Kaseya breach, as an example, with one of the largest ransom demands ever made. You have a software provider with many organizations using their software. They’re breached, there’s a security flaw in their software, everybody is using it, and now a threat actor has gained access to hundreds of their customers’ data through that security flaw. 

It’s really important to vet the third parties you’re contracting with that do work for you, and also the software you’re using — SaaS models and the like deployed in your stack, even as applications: the little apps you might add to Office 365 or your G Suite, all those little add-ons — all of those can be security threats. Not every company is created equal, and not every software company is attuned to, or required to follow, the same standards to make sure its software is secure. 

So, it’s really about following the breadcrumb trail, and you’re building a plan to create a map of what’s been compromised — how to block it off, how to seal it, how to preserve it, how to shut it down — even going as far as to create internal honeypots to redirect threat actors to closed-off resources they’re not aware of, which is also very important here. At the end of the day, you are ultimately trying to remediate their access and any malware or other security flaw that may have been the cause of the breach to begin with. 

Next slide. And Matt’s going to talk here about the timeline of a major breach and what it looks like. 

Matthew Miller

Let me personalize this slide a little bit. It was Thanksgiving of 2014 and I was at my grandmother’s house in Cape Cod — and I live in Los Angeles — and I got a call that said, “How quickly can you be in London?” And I said, “I don’t know, two days.” And they said, “Can you make it in 24 hours?” And I said, “Why?” He said, “Well, you’re going to run the advanced forensics recovery team for all of EMEA related to the Sony Pictures hack by a nation state.” So, I had to fly back to LA and pick up some suits, and I did. 

And the reason I tell that story is because, if you’ve never experienced something like a data breach at your organization, it really triggers emotions. Once I arrived, I saw a grown man with tears in his eyes because his machine was bricked — meaning it had malware installed on it and he couldn’t get into his operating system — and it had about 14 years’ worth of contracts sitting on it, and he was afraid it hadn’t been backed up recently. 

In order to get that data off the machine and recover all of the user-created data, you have to set up a number of different workflows internally. First of all, machines are usually encrypted too, and that is a way to prevent hackers from getting at your information, but in a situation like a data breach where there’s malware on the computer, if the disk is encrypted, we have to decrypt it just to clean off the malware and get the user-created data off of there — and then scan it and make sure that what we hand back to the organization no longer has any of the remnants of the original attack. 

This is a relatively accurate, quick timeline for a response of that magnitude, but what the IBM Ponemon 2021 report tells us is that, ever since the pandemic and the remote work situation we find ourselves in, companies with more than 50% of their workforce working remotely are taking 58 days longer to identify and contain breaches than those with 50% or less working from home. 

Additionally, since no one in the past was planning for these breaches, there weren’t many proactive activities by the legal, IT, security, data privacy, and records management teams to go out and understand, at a file level, where sensitive data was sitting on the network. So, what you can see in this particular use case — again, it’s real life — is that a lot of time was spent trying to identify and find all of the personally identifiable information that was out there, while simultaneously all these other activities were going on to do the data recovery, with attorney review spun up at the same time. And we’re talking hundreds of reviewers needed to get through this data and identify where all of the PII is located, so that eventually, down the timeline — before a 30 or 45-day notification window has elapsed — you’ve actually been able to find out all of whose data has been compromised. 

So, in that particular case, we were able to get back 94% of the user-created data and have them back online, but that company is still constantly getting attacked, whether through the PlayStation side of the house or the corporate side that we saw in that case. 

So, then we’re going to look at the roles of the key stakeholders, and I believe Jenny Hamilton is going to pick up here. 

Jennifer Hamilton

As you can see from this slide with all the details, there are many cooks in the kitchen, and if you’ve been involved in one of these incident response teams, you know the challenge: many of these workflows need to be done in parallel, and it’s hard to know who is in charge of each workstream, who is in charge of the overall matter — running point and making decisions — and who needs to communicate up and down the chain as needed. 

So, best practice, in my experience: define a core team who can manage multiple parallel workstreams out to an extended team, and then a playbook with concise workflows for low-impact breaches separate from high-impact breaches, as you can see here in the middle. That is much more helpful than an all-in-one playbook. Obviously, you can’t create a playbook for each different type of breach with each different type of impact, but the ones that are most commonly expected and highest priority — particularly the high-impact breaches — are worth the time to invest in creating, and in getting alignment across the team on what their roles are for each. 

So, let’s talk about stakeholders in a little more detail. You’ll see here some different roles and responsibilities by key stakeholder, and you’ll see that there is also some overlap — in-house counsel versus compliance, for example, where both are evaluating the data privacy and disclosure laws and the regulatory requirements for notification, and interacting with outside counsel. My experience is that companies have different ways of parsing out and clarifying these responsibilities, and it’s really a matter of history and culture for each organization. 

Matthew Miller

If I could just chime in — the point about the playbook, that incident response plan, is really key. If you’re counseling corporations that do not have one in place, it’s something to take a look at. And I want to point out something, too — I know this is about what lawyers need to know around IT security. If you haven’t read any of the NIST guidelines, one I would point out as a great starting point is SP 800-53 Rev. 5. It came out in September 2020 and was updated — I believe in October — with a few more points. It talks about the intersection of cybersecurity and data privacy and what we are all concerned about, which is cybersecurity-related privacy events. And it also points you toward other NIST guidelines for setting up the types of playbooks and incident response plans that Jenny was talking about. It’s a good starting point. 

And it literally has different frameworks, processes, and checklists, so that you, as counsel, can go to those corporations and say, “Have we really been able to check all the boxes to make sure that we’re ready in the event of a breach?” 

Jennifer Hamilton

Matt, my preference with incident response playbooks is to put them in the form of one-page checklists, diagrams, and workflows, as opposed to a multi-page, text-only playbook, because when it’s time to go, there’s really not the luxury of reviewing, analyzing, and thinking through how to apply it in each situation. So, the more concise, the more visual, and the easier to consume, the better the playbook. 

So, now let’s talk about our Federal, state, and international legal requirements for notification and who you need to talk to and level-set with. 

There are a number of notification laws that apply if the disclosure contains personal information — notification to individuals, agencies, and regulators, for example — and you need to be prepared to report out what went wrong. What could follow from there would be enforcement proceedings, fines, and — this is a significant point in the United States — litigation, particularly class action litigation, which tends to put a lot of companies on the bubble. As we all know, in the cyber insurance market right now we’re all taking a beating; what coverage you have is being tightened up and is harder to get. Again, in the US, this is really a key point because of the cost of notifying. For, say, a disclosure of patient health records, you have to send a letter under HIPAA, and that letter might cost $1 per patient. So, the notification alone could eat through a lot of the insurance, and a lot of what’s available in the event of litigation and a settlement. 

So, let’s talk specifically about the major legislation on the radar for cybersecurity attorneys and companies: obviously, GDPR is huge on everyone’s mind; HIPAA, which I just mentioned; CCPA, with the CPRA coming up here in the next year; FIPA and the New York SHIELD Act; as well as state law generally. And state law in the United States is significant — if you ever had to do one of those 50-state survey assignments as a young lawyer in a law firm, you know they’re really time-consuming — because you have to evaluate, state by state, what is required to be reported for any one breach, even if there aren’t a lot of affected individuals. 

So, let’s take a little deeper dive into GDPR and HIPAA. With GDPR, the key is that you need to notify without undue delay and, where feasible, no later than 72 hours after becoming aware of the breach — notifying the supervisory authority in accordance with Article 55. This becomes challenging where you have contractors and vendors who may have been breached, and you need them to notify you in enough time that you can figure out what needs to be reported within 72 hours. It has also become very challenging because, in 72 hours, you probably don’t know very much — if it’s a major breach — certainly not enough to report, so the system is burdened with a lot of notices that don’t contain much information and act as placeholders so that the 72-hour rule isn’t violated. 

With HIPAA, it’s a little different depending on how many affected individuals there are. If there are more than 500, you have to notify the Secretary without unreasonable delay, and in no case later than 60 days following discovery of the breach; otherwise, it’s a notice on an annual basis. And let’s not forget that it’s not just the laws we’re concerned about — back to what I said about subcontractors and third parties. The contracts often specify the requirement to notify the client, and depending on how many parties are in the chain, this can also be quite challenging. So, pulling the contract early on and having someone on the team review it is extremely important. 
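The two clocks just described can be sketched as a simple deadline calculation from the moment of discovery. This is illustrative only, not legal advice — both regimes require notification "without undue/unreasonable delay," so these are outer limits, not targets, and the discovery timestamp here is hypothetical:

```python
# Sketch of the two notification clocks discussed above, computed from a
# hypothetical discovery time. Outer limits only — "without undue delay"
# can mean much sooner. Not legal advice.

from datetime import datetime, timedelta

def notification_deadlines(discovered: datetime) -> dict[str, datetime]:
    return {
        # GDPR Art. 33: supervisory authority, where feasible within 72 hours
        "gdpr_supervisory_authority": discovered + timedelta(hours=72),
        # HIPAA (500+ individuals): the Secretary, no later than 60 days
        "hipaa_secretary_500_plus": discovered + timedelta(days=60),
    }

deadlines = notification_deadlines(datetime(2021, 9, 15, 9, 0))
for rule, due in deadlines.items():
    print(f"{rule}: {due:%Y-%m-%d %H:%M}")
```

A breach discovered the morning of September 15, for example, puts the GDPR outer limit at September 18 and the HIPAA outer limit in mid-November, which is why the parallel workstreams Jenny describes have to start immediately.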

So, let’s talk about exceptions to our notification obligations. For a fair number of the reporting authorities, you conduct a risk-of-harm analysis. Under GDPR, you look at whether the breach is likely or unlikely to result in a risk to the rights and freedoms of natural persons; if it is unlikely, notification may not be required. And coming out of that risk-of-harm analysis, if we go to the next slide, another big exception here is encryption.

So, if you can demonstrate that the information was encrypted and the key was also not compromised in the breach, then you may not have a duty to report, and this can be very important on a larger scale. 

So, let’s talk about who to notify. It depends on the type of breach, of course, but as we already talked about, the affected individual, and we didn’t mention that in California it could be the entire household. There’s the Department of Consumer Affairs; the state Attorney General’s office, if we’re in the States; and then outside the United States, you’ve got the foreign data protection authorities for each of the different countries. Be prepared to let them know what happened, what was stolen, how the affected individuals can protect themselves, and what steps you’ve taken.

There is one issue here, and I won’t belabor it, but we should talk about Rule 26(b)(1). In litigation in the United States, there’s a rule that limits the scope of what is discoverable under our pre-trial discovery rules to any non-privileged matter that is relevant to any party’s claim or defense and proportional to the needs of the case. This can be extremely helpful under our otherwise very broad discovery rules here in the States. But it does not apply to the situations where you have an obligation to notify; it will only apply if you end up in litigation.

And speaking of litigation, as many of you know, the attorney-client privilege, as it’s used here in the United States (it is different in other countries), is extremely important because of the risk of litigation and the burden of discovery. Privileged communications between client and counsel are protected from disclosure in litigation, though the privilege is narrowly construed. There’s also a separate attorney work-product doctrine that can protect documents that are prepared, and here’s the key, in anticipation of litigation. That can also extend to an investigator, specifically a breach investigator here, who is taking direction from the attorney who is asking questions, trying to form opinions, and communicating their advice to the client.

So, this has been used, particularly the attorney work-product doctrine, which is separate from attorney-client privilege, to protect reports and advice in the event of a breach. And again, it’s very important to try to preserve that protection in the event of litigation. However, there’s a case, the Capital One consumer data security breach litigation, where the judge said maybe not. The judge held, in July, that a forensic report should be disclosed to the plaintiffs in the lawsuit stemming from the data breach, and rejected Capital One’s argument that the report was protected from disclosure by the work-product doctrine. One thing that I will note here – and you can see some of the specifics, and I encourage you to look at the case online to really understand how this impacts your organization – but one thing I would remind everybody, back from the eDiscovery days: if you are a US practitioner, you know that part of what supports a claim of work-product protection is issuing the legal hold simultaneously with asserting it. So, you may have an internal investigation and be claiming attorney-client privilege and running it at the direction of counsel to get that protection, and also directing investigators to get the attorney work-product protection, but you have to be acting in anticipation of litigation. And one of the tells that you’re truly anticipating litigation, that you have a credible threat, is that you have issued a legal hold simultaneously. The legal hold is a notification to employees telling them to preserve records. If you don’t issue it, then you’re at risk of not getting the protection of the attorney work-product doctrine in any event.

So, here I’m going to turn it over to Mike and Matt to discuss the sensitive data breach reporting that can help with this process of understanding the impact and what might be reportable to a regulator. 

Michael Sarlo

Thanks, Jenny. There’s quite a bit of technology available in the information governance and, increasingly, the privacy management space that can allow for detection of PII and sensitive data, some of it better than others. It’s common practice in the incident response world to do this by running searches and regular expressions.
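As a rough illustration of what "running searches and regular expressions" looks like in practice, here is a minimal Python sketch. The patterns and names are deliberately simplistic placeholders of my own; real incident-response tooling is far more sophisticated.

```python
import re

# Toy patterns for illustration only; production detection adds checksums,
# context, and machine learning to cut down on false positives.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_text(text):
    """Count candidate PII hits per category in a block of text."""
    return {name: len(pattern.findall(text)) for name, pattern in PII_PATTERNS.items()}

sample = "Contact jdoe@example.com, SSN 123-45-6789."
print(scan_text(sample))  # {'ssn': 1, 'email': 1, 'credit_card': 0}
```

Even this toy version shows why validation matters: simple patterns produce false positives and miss context, which is exactly the gap the review and sampling steps discussed below are meant to close.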

One of the key pieces of reporting that you should be looking for in any breach, and now, from the eDiscovery side, in any matter where sensitive data is a concern, given the growing list of state regulations and, more broadly, data privacy law in Europe and other countries, is an understanding of what you’re dealing with inside of a dataset. Data tends to be unknown and tends to build up, and classification of data and data hygiene are, in fact, gray areas in most organizations.

HaystackID has actually developed its own AI, called Protect Analytics, that allows us to detect a set of sensitive data types, even as far as gender and things like religion. And we use this during breaches, and really after breaches, as a method to identify critical pockets of data that we may need to focus our investigation on, to make sure that exfiltration or access hasn’t occurred. Early on, it becomes a hierarchical approach, especially when there’s wide command and control and we believe that we’ve secured at least the major endpoints through which the threat actors gained access to the network. So, it really becomes a question of where to start. This type of impact assessment reporting is very useful for teams on the incident response side, for breach coaches, for the end client, and for the data mining vendors, so that we can really understand where our risk is and the types of sensitive data that may be out there: if we start seeing a lot of health data, or credit card data if they’re a payment processor, what types of legal frameworks are we really subject to from a reporting standpoint? This is fully customizable using our Protect Analytics. And there are a lot of different ways to visualize the data that we offer our clients, which can be very useful, especially when you start to look at things like the different geographies where data subjects may be located. It’s really great if you can extract that data as well.

Really important during any incident and even after any incident is to really get a good data map, so to speak, of where organizations believe their sensitive data may be located, in general, and what systems are being used to store customer data, or other sensitive data types, be it health records, medical records. And oftentimes, that data can be used to enrich data that is being extracted from a compromised dataset, to streamline the efficacy of AI and/or searches to reduce costs as you work to get to a rolled up listing of individuals’ sensitive data that may have been compromised, with the end goal of sending them a notification list. 

So, I always recommend a two-pronged approach: the blind piece of it, which is going in, scraping the data, running the AI, and running the searches; and also working with any known repositories that may contain this type of data in more of a structured format.

Any time that you’re going through a data mining, post-breach discovery exercise, really good reporting is important. We offer our clients robust metrics on a daily basis, and live via dashboards, that allow them to see the different types of PII that reviewers are actually validating beyond searches and AI. And we use that technology to identify pockets of data that should be elevated for human review. As that’s happening, these metrics are overlaid against the initial impact assessment metrics to get a sense of where we stand from a defensibility standpoint. How well is the AI doing? How good are our detection methods? Do we need to change our searches? What’s our precision? Statistical sampling is your friend here, especially when you’re dealing with large quantities of data where there might be a lot of false positives. AI is really a force multiplier for humans in any case, for legal discovery and for cyber discovery, and validation of AI and searches, text searches in general, is very important, insofar as it gives all the operators here the capability to make strategic calls as they relate to risk and accuracy.

We also spend a lot of time looking at data points that do not hit on searches or AI, and that’s where statistical sampling is definitely very important as well, because there’s a lot of risk in what you can’t detect.
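As a sketch of what that kind of sampling-based validation can look like (hypothetical function names and toy data, not HaystackID’s actual workflow), precision can be estimated by having reviewers check a random sample of machine-flagged items:

```python
import random

def estimate_precision(flagged, review, sample_size=100, seed=7):
    """Estimate detection precision by 'human-reviewing' a random sample of
    items the searches/AI flagged; review() returns True for a genuine hit."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = rng.sample(flagged, min(sample_size, len(flagged)))
    confirmed = sum(1 for item in sample if review(item))
    return confirmed / len(sample)

# Toy population: 1,000 flagged documents, 90% of them genuine PII hits.
flagged = [(f"doc{i}", i % 10 != 0) for i in range(1000)]
precision = estimate_precision(flagged, review=lambda doc: doc[1])
print(f"estimated precision: {precision:.0%}")
```

The same sampling logic, applied to items that did not hit on any search or model, gives a handle on the other risk mentioned above: what the detection methods are missing.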

Go ahead, Matt. 

Matthew Miller

So, it’s complicated, as everyone can see. 

Now, there are some ways to get in front of these situations with some proactive steps that organizations can take. In 2020 and 2021, ever since the COVID-19 pandemic, I’ve focused on this a lot, because this is what we’re dealing with on a daily basis. The attackers are getting smarter; they are leveraging social engineering tactics tied to real-life situations to take advantage of folks who are in an uncomfortable situation that they haven’t been in before. They’re putting out fake stimulus payment lures: “click on this link in order to sign up for your stimulus payment,” or something of that nature.

And that one click of the link, that’s where it all breaks down. And the next thing you know, they have gained access to that employee’s credentials and they start that attack process with those TTPs that we talked about earlier today. 

Nation states and organized crime have really stepped up their efforts here around ransomware, and around the complexity of the way that they get into these networks. I was talking to a CISO who told me he has seen a 400% increase in the number of advanced persistent threats over the course of the past year compared to what they used to see.

So, in order to play a little defense, what can we do to get in front of this? Make sure that there’s good backup and disaster recovery. Make sure that you’ve got your data backups, so you can roll back to a clean state of the network. And identify that PII and PHI ahead of time; that’s really something that I focus on on a daily basis in helping organizations. On top of the way that people use their computers daily, patching, or the lack thereof, has caused numerous situations where attackers take advantage of security flaws that haven’t been fixed, so make sure that patching is in place and all up to date.

Then there’s two-factor or multi-factor authentication. Every time now when I log on to my computer, or even if I want to reset my Google password, my Gmail password, it asks me to type in the code it’s sending me by text message on my phone, because that’s something you have; it’s something that’s in your hand. That additional layer of authentication falls upon the employee, but it’s really going to help prevent bad actors from taking over your credentials.

Train your employees. I know that we all maybe don’t like going through the trainings, but for example, the KnowBe4 company has really good training videos where they point out that if you hover over a link, you can see whether or not that link is actually pointing where it’s supposed to go. And if your organization hasn’t added the phishing alert button into Exchange in its Microsoft 365 environment, I would get on top of that. Employees can then send suspicious emails directly to IT security and have them take a look.

So, you want to approach this by asking the questions that will really open things up from an information governance perspective. Where do you stand in understanding where all your sensitive and critical information sits out on the networks? Do we actually know what types of data we have, how we retain that data, and what types of data we process? As a lawyer, when I used to hear the word “processing,” I just thought of someone who collected data from an eDiscovery perspective and had to process that data to put it in a review tool. But when we’re talking about processing PII in the context of GDPR and CCPA, it covers all the different things you are doing with PII, and you really need to understand how PII is flowing around the network, whether we have visibility into that, and who can access and control it.

So, by taking a look and being able to answer these different questions, you can get to the point where you’re actually implementing defensible data disposition on the network, which is really where you want to get to, so that you can eliminate risky data that is sitting out there. If the retention period under the records retention policy or schedule has expired, the data is not on legal hold, and you don’t have a business purpose for holding onto that PII, why not get it off of the network?

I know we’re close to the end here, but I just wanted to show a quick real-life example of scanning an HR folder. Within that HR folder, we determined that about 82% of the drive held data that was over seven years old, with some overlap between the different types of data. This is the ROT data – if you ever hear that terminology: redundant, outdated, and trivial data. If you can clean that up, you won’t find yourself in a situation where, within that HR folder, 5.2 million Social Security numbers are sitting out there that could have been deleted because they were all in folders that were over seven years old.

I think that really does kind of bring it full circle for why those proactive measures need to be taken. So, these steps right here, if you want to learn more about it, we’d be happy to talk to you further. 

And solidifying the foundational elements, that’s a great place to start, getting that data map in place, and being able to identify that data, so eventually you can get to reasonable security measures. 

And let me hand it back over to Ashish. 

Ashish Prasad

Thank you very much, Matt, and also thank you, Jenny, and thank you, Mike, for your excellent comments. Thank you also to everyone who took the time out of their schedule to attend today’s webcast. 

We have a variety of excellent questions. We’re only going to have time for one question and maybe I’ll address it to you, Matt. We received the following question. 

“In the world of electronic discovery, where I am from, there are ethical duties on lawyers to keep abreast of changes in law and technology, to prevent unauthorized disclosure of information to non-clients, and to supervise and manage non-lawyer services competently and effectively. It seems to me it is just a matter of time before the law develops in a way that applies these duties specifically to lawyers who are managing cyber incident response projects. Do you agree with that, or do you think I am being paranoid?”

Matthew Miller

I do not think you’re being paranoid; I do agree with that. The practitioners in this field are handling the most sensitive data that organizations hold, be it employee or customer data, whether it’s PII or IP for that organization. These are the critical information assets; that is your business value. So there should be a duty put upon cybersecurity practitioners, in my opinion, and I think we will get there, to carry the same level of responsibility that is placed upon attorneys today.

Ashish Prasad

Thank you, Matt. And thank you, again, to Jenny and to Mike. 

Closing

We hope that our audience members will have an opportunity to attend our next monthly webcast, which is currently scheduled for October 20. The topic for that webcast is Mobile Device Discovery. It will be featuring one of the nation’s foremost experts on Android and iOS mobile device discovery and analysis tools. We’re also going to have commentary from experts from the digital forensics, eDiscovery collection and international privacy spheres. So, please join us for that and we hope that you have found our webcast to be valuable. 

Thank you, again, for attending. Look to our website, HaystackID.com, for the recording of this webcast. 

This concludes our program. Have a nice day, everyone. 


CLICK HERE TO DOWNLOAD PRESENTATION SLIDES

2021.09.14 - HaystackID - Breaches Responses and Challenges - September Webinar Presentation 091521

CLICK HERE FOR THE ON-DEMAND PRESENTATION


About HaystackID™

HaystackID is a specialized eDiscovery services firm that helps corporations and law firms securely find, understand, and learn from data when facing complex, data-intensive investigations and litigation. HaystackID mobilizes industry-leading cyber discovery services, enterprise managed solutions, and legal discovery offerings to serve more than 500 of the world’s leading corporations and law firms in North America and Europe. Serving nearly half of the Fortune 100, HaystackID is an alternative cyber and legal services provider that combines expertise and technical excellence with a culture of white-glove customer service. In addition to consistently being ranked by Chambers USA, the company was recently named a worldwide leader in eDiscovery services by IDC MarketScape and a representative vendor in the 2021 Gartner Market Guide for E-Discovery Solutions. Further, HaystackID has achieved SOC 2 Type II attestation in the five trust service areas of security, availability, processing integrity, confidentiality, and privacy. For more information about its suite of services, including programs and solutions for unique legal enterprise needs, go to HaystackID.com.