[Webcast Transcript] Real Benefits, Real Constraints: A Practical Guide to Copilot Rollout

Editor’s Note: Copilot for Microsoft 365® is moving from experimentation to enterprise reality, and that shift is showing how closely AI performance is tied to everyday information practices. In the recent HaystackID® webcast, seasoned legal tech leaders explored what it takes to launch Copilot responsibly, starting with a pilot that’s anchored to a real business objective and measurable outcomes. The panel explained why permissions, sensitivity labels, and DLP are not “nice-to-haves,” but the guardrails that determine what Copilot can surface and how it can be handled. They also spotlighted the hidden risk of ROT (redundant, obsolete, and trivial content) and competing document versions, where outdated content can undermine accuracy and, in some environments, safety. The takeaway is straightforward: progress comes through iterative pilots, strong data stewardship, and change management that keeps pace with Microsoft’s rapidly evolving feature set. Read the full transcript to learn more.


Expert Panelists

+ Steve Barsony (Moderator)
Managing Director, HaystackID

+ Michael Elkins
Global Advisory Consultant, HaystackID

+ Dean Gonsowski
Chief Revenue Officer, RecordPoint

+ Glenn O’Brien
Senior Manager, Data Governance Policy Management, RTX


[Webcast Transcript] Real Benefits, Real Constraints: A Practical Guide to Copilot Rollout

By HaystackID Staff

The rapid enterprise rollout of Copilot for Microsoft 365 is forcing organizations to confront a long-deferred reality: AI does not just accelerate work; it amplifies the quality, governance, and risk profile of the data environments it touches. During the HaystackID webcast, “Real Benefits, Real Constraints: A Practical Guide to Copilot Rollout,” a multidisciplinary panel of information governance, data management, and legal technology leaders explored what it truly takes to move Copilot from pilot to production. Their message was clear: successful AI deployment is far less about technology enablement and far more about data readiness, governance maturity, and organizational change.

Copilot will inevitably surface existing weaknesses in enterprise content ecosystems — from over-permissive SharePoint environments and redundant, obsolete, or trivial content (ROT) to inconsistent metadata and unclear data ownership. Unlike prior technology initiatives, where imperfect data could be tolerated, AI systems amplify both strengths and flaws. As one panelist noted, Copilot enhances what you give it: well-governed, curated content produces trustworthy results, while fragmented or stale information leads to unreliable outputs, hallucinations, and risk. This shift is driving a surge in demand for foundational practices such as sensitivity labeling, permissions hygiene, data classification, and curated repositories of authoritative content, disciplines that many organizations historically treated as optional.

Throughout the in-depth discussion, the experts highlighted a critical but often overlooked dimension of AI adoption: defensibility and compliance. Copilot prompts and interactions are discoverable enterprise records stored within Microsoft 365, meaning organizations must treat AI use as part of their legal and regulatory footprint. At the same time, Copilot respects existing permissions and security controls, reinforcing that oversharing and governance debt — not AI itself — are the primary exposure vectors. Panelists urged organizations to view pilots not as isolated experiments but as governance and change-management exercises, where success depends on training, user feedback loops, and measurable business outcomes alongside technical safeguards.

Ultimately, the webcast experts emphasized that there is no seamless Copilot deployment — only iterative progress. Organizations that succeed will be those that treat AI rollout as a continuous cycle of pilot, learn, refine, and scale, grounded in strong data governance and cross-functional ownership. As enterprises expand Copilot and other AI tools across the Microsoft ecosystem and beyond, that cycle of governance and iteration will only grow more important.

Read the transcript below or watch the full recording to learn practical strategies for building a trustworthy, production-ready Copilot environment.


Transcript

Mary Mack
Thank you for joining today’s HaystackID webcast, “Real Benefits, Real Constraints: A Practical Guide to Copilot Rollout,” hosted by EDRM. I’m Mary Mack, CEO and Chief Legal Technologist of EDRM, the Electronic Discovery Reference Model. Today’s expert panel is led and moderated by Steve Barsony, Managing Director of HaystackID, and includes the following faculty: Dean Gonsowski, Chief Revenue Officer of RecordPoint; Glenn O’Brien, Senior Manager of Data Governance Policy Management at RTX; and Michael Elkins, Global Advisory Consultant at HaystackID. We’re recording today’s webcast for future on-demand access, and slides will be available both during and after the webinar. The webcast will be available on the EDRM Global Webinar channel for the next quarter, to help support your ongoing learning and reference needs. And before turning it over to Steve for a fuller introduction and the agenda, Holley Robinson of EDRM will share a few brief notes on the webcast console. Over to you, Holley.

Holley Robinson
Thanks, Mary. If you look at the top of your screen, you’ll see the HaystackID logo, which you can click on to learn more about HaystackID. You’ll also see an option to contact Team Haystack directly, as well as speaker bios where you can learn more about today’s presenters. Moving down, you’ll see the Q&A box where you can type in your questions for today’s faculty, and we highly encourage you to do so. Below that, you’ll find today’s resources, including the slide deck, a link to M365 with HaystackID, the HaystackID suite of Microsoft 365 services, and HaystackID’s information paper, Copilot for Microsoft 365®, a Step-by-Step Flight Plan for Legal Teams. There are also registration links for the upcoming HaystackID webcast, “Meaningful Transparency in AI: What Privacy Laws Actually Require,” on March 25th at 12 P.M. Eastern, as well as the EDRM workshop with HaystackID, “Discovery at a Crossroads: Global Perspectives on Emerging Challenges,” on April 29th at 11 A.M. Eastern. We’d love to have you join us again. Lastly, you’ll see some emojis down at the bottom of your screen. Please feel free to use them and react throughout the webcast. Back to you, Mary.

Mary Mack
Thanks, Holley. And our moderator, Steve Barsony, is a Managing Director at our wonderful trusted partner, HaystackID, where he advises organizations on complex challenges at the intersection of information governance, privacy protection, cyber incident response, Microsoft 365, and legal operations. Steve joined HaystackID from AFH Partners, the advisory firm he founded in 2021, and he has held roles at leading organizations in our space, including Senior Vice President of Innovation and Product, Vice President of Innovation and Technology, and Vice President of Analytics. And with that background, Steve, please take it away.

Steve Barsony
Thanks, Mary. So hi everyone and welcome to another HaystackID webcast. I’ll be your expert moderator for today’s presentation and discussion. The webcast is part of HaystackID’s ongoing educational series, designed to help you stay ahead of the curve in achieving your cybersecurity, information governance, and eDiscovery objectives. We are recording today’s webcast for future on-demand viewing, and we’ll make the recording, along with a complete transcript and presentation, available on the HaystackID website at haystackid.com. Today we’ll explore strategies for implementing Copilot, from how to start safely to measuring value and deciding what’s next. A reminder: the opinions expressed today are those of the panelists and are not representative of their organizations. Before getting into the agenda, I’m going to do some speaker introductions. Dean Gonsowski is the CRO of RecordPoint, a cloud-based data and information governance platform that helps organizations discover, classify, secure, and manage data at scale across multiple systems, while ensuring privacy, compliance, and defensible lifecycle control. He has more than 25 years of customer-facing executive leadership experience in the GRC and eDiscovery space. Dean holds a JD from the University of San Diego School of Law and a BS from the University of California, Santa Barbara. Glenn O’Brien is a Senior Manager, Data Governance and Policy Management at RTX, and leads initiatives focused on data governance frameworks, policy development, and operational execution in a highly regulated enterprise setting. He has two decades of experience aligning people, processes, and technology to support complex legal compliance and risk management environments. Michael Elkins is my colleague at HaystackID in the global advisory practice. 
He’s a strategic information management consultant with over 15 years of experience implementing content-focused solutions and enhancing information accessibility and security across diverse industries. He has a proven track record in developing information management, security, and governance programs that significantly reduce costs and improve operational efficiency. So, to our agenda: we’re going to start today by going through the journey that many of us have already started, and that some of us may be planning to embark on, with Copilot. Now that’s going to take us from the beginning, basically where do we start, moving through what needs to be in place to make sure that it’s secure and that we’ve got information feeding the models and feeding our answers in a way that is both accurate and controllable. We’re going to talk about how we measure success and make sure that we know what’s working and what’s not, so that we can make adjustments to what we’re doing and better align with our goals. And then finally, once we get through that starting stage, how do we move forward? What are our next steps in our journey? So, trustworthy AI. Trustworthy AI sits at the crossroads of AI governance, data governance, and knowledge management, bringing structure and clarity to how organizations use intelligent systems. It’s about making sure that the data feeding your models is accurate, well-managed, and ethically sourced, while also enforcing policies that guide how AI behaves and is monitored. At the same time, it depends on strong knowledge management practices so teams can actually understand, share, and improve the insights behind their AI decisions. When those three areas work together, you get AI that’s transparent, reliable, and aligned with your business values rather than operating as a black box. 
In practice, it feels less like a compliance exercise and more like building a trustworthy ecosystem where people, data, and AI work together. So the first item in our agenda is start with a pilot. Four words, easy enough to say, but the question is, where do we start? Michael, what are your thoughts here?

Michael Elkins
Start with a business problem. Copilot obviously has a technological component to it, but we are going to deploy AI because we’re trying to solve a business issue, or we’re trying to drive productivity to solve a business issue. It’s not really an IT initiative. IT is going to have plenty to do, and they’re going to have a huge role in the process, but it’s really about how we align the business piece with it. We’re trying not to boil the ocean here, so we’re going to start small. There are a lot of components that are part of the readiness and everything that’s going to happen. So the pilot gives us an opportunity to tune ourselves as we go through a larger deployment for the organization.

Dean Gonsowski
And Mike, on that one, as you were talking, I was thinking, I’m sure there are just tons of initiatives out there that say, “Launch, do something with AI.” How do you see people home in on what the right use case is? You can pick any use case, and we’re seeing the failure rates for AI pilots be pretty high. I’m sure people have seen the statistics, but they could be as high as 90 or 95%. How much of that is due to not picking the right use case at the start, do you think?

Michael Elkins
I would say a fair amount. When you go to pick your pilot, part of what you’re looking for would be things like: do I have buy-in, do I have the governance that’s associated with it? Are the leaders behind it? And even in planning my pilot, do I know what I want out of that pilot? Do I have a group that’s really ready to go, with a business case that’s measurable? Do I know what I want at the end? Can I measure it through the process to make sure I’m being successful? That’s kind of a high-level flyover of it, but as we start getting into how we drive the pilot and how we define it, there are a number of components, as we’ll see as we step through this. So, a good example: align the pilot with your objectives. It’s really a business case; it’s not an IT issue. IT is going to be busy. They will be plenty busy throughout the process, but it’s really a business objective that we’re trying to resolve here. Then there’s the ownership and the governance side of things. IT is part of that, and there’s the governance on the policies and the procedures. Do I have governance? There’s AI governance, which is basically telling me I’m doing this responsibly, and there’s IT governance, which is managing all the controls, the safety, and everything in the background. Tracking outcomes: I want to know what’s going to drive me to be successful. And then when I look at that readiness, most people think readiness and they think IT: I’ve got my security components in place, now I’m ready to roll. And that’s not really accurate. There’s a whole lot more to readiness that we need to take into consideration. People think that once I’ve got my sensitivity labels and I’ve got my DLP in place, that’s the whole process. But the reality is, as I go forward with my pilot and I start looking at it, data quality starts becoming an issue. 
We’re not training LLMs; we’re not training Copilot out in the broader web world, but we are training our environment. So, the data quality that we have, the readiness around that: do we have the right taxonomy? Have we gotten rid of ROT? And Dean, you can talk to ROT and the need for really strong compliance. So go ahead and jump in.

Steve Barsony
So before we talk about ROT, let’s talk about the pilot. When we are looking for use cases, Dean and Michael, when you’re looking for use cases for a Copilot rollout, what are the criteria you’re looking at in terms of complex tasks versus repeatable tasks, working with highly sensitive data versus summarization-type tasks? And who are the cohorts? Who are those groups we’re looking to target initially for our Copilots and engage with?

Dean Gonsowski
Yeah, I mean, I think we think about pilots all the time, and with just about any piece of software functionality, it’s very standard to run a pilot and evaluate success criteria. I think with AI initiatives, one of the things we’re seeing is that it’s important for the pilot to be successful, but you also need a broader view of how you’re going to take it to production. And I feel like that’s where a lot of folks get stalled, because you can really curate a sandbox of data and use cases and stakeholders, and, I don’t want to say guarantee it’ll be successful, but that gap between going from pilot to production is where a lot of folks get stalled because of all the issues we’ll cover throughout today: the results don’t just need to be right, they need to be trustworthy, safe, and scalable. Those are the reasons why people don’t get into production as often as they want to. So I think it’s partly how you set the pilot up, but you need to have a path from pilot to production, and almost anybody can get a pilot going. It’s not easy, but with all the tools, it’s certainly doable. But that next level of production, governance, security, data quality, curation, all the things we’re going to talk about today, that’s the hard bit.

Michael Elkins
You can start looking at it and saying, okay, what is it we’re trying to do with Copilot? Am I looking at SOPs? Am I generating more content more effectively? Am I getting better at creating content, whether that’s meeting content, which is one of the core areas where people deploy out of the box? I’m not talking about the agentic side; I’m talking about just the core components of Copilot for Microsoft 365, where I’m in the tenant, and it’s going to be things like: I want to capture meeting information and resolve all of that quickly and easily and with better scope. I want to be able to generate content. I’m going to be doing presentations; I want to be able to generate marketing materials and things along those lines. Or when I get further in, I may look at places like contracts, where I want to be able to identify contract clauses and things like that. So those are the types of things. You want to look at where the value is that you want to get out of it. You want to pick a group that’s really got a good idea of what they’re going to do with the content and is ready to move forward. And again, the pilot is really about defining something that you can wrap your arms around and control, so you’re really testing the way you’re moving forward with the whole process before you put that on a broader scale to the rest of the organization, because there are a lot of moving pieces, from the security side, the data side, data protection, and the process and policy side. So we’re really defining something that we can control and whose outcomes we know we can measure.

Steve Barsony
So in terms of the… So maybe one way of saying that is: a well-understood business process, something that’s repeatable, not reinventing the wheel. And then also looking at those business processes from the perspective that AI in general, and certainly Copilot, will not fix a misunderstood or inefficient business process. It’ll just make it faster and, in a sense, more inefficient.

Michael Elkins
Yeah, Copilot’s going to… Go ahead.

Steve Barsony
Go ahead.

Michael Elkins
I was just going to say Copilot is going to enhance what you give it. If you give it poor information, if your data quality is bad, it’s going to show that; it’s going to emphasize that. If your data quality is great and you’ve got your governance correct, it’s going to emphasize that. That’s basically what’s happening: Copilot is going to expose or emphasize your readiness, good or bad. So be prepared.

Dean Gonsowski
Sorry, you’d mentioned ROT a few minutes ago, Michael, and I think that’s the prime example. We’ve been doing information governance for a couple of decades, and ROT (redundant, obsolete, trivial information) has been mildly necessary to clean up for storage purposes, data obfuscation, search times, and a few other things. But I think now your point around amplification is really critical. If you don’t want your AI results to hallucinate and over-index on certain data types, the cleanup is just mission-critical, and I think a lot of people have ignored that because there wasn’t really a compliance risk or other risk associated with having ROT. It was sort of a nice-to-do. I think now, with AI, if you want your program to be successful, that level of governance and curation is just mission-critical.
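Editor’s note: The ROT cleanup Dean describes can be pictured as a simple triage pass over a file inventory. The sketch below is illustrative only; the field names, the five-year staleness threshold, and duplicate detection by content hash are assumptions for the example, not any vendor’s algorithm.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    path: str
    last_modified: date
    content_hash: str        # e.g., a SHA-256 of the file contents
    is_record: bool = False  # declared business records are never auto-flagged

def flag_rot(docs, stale_after_days=1825, today=date(2025, 1, 1)):
    """Flag duplicates beyond the first copy (Redundant) and non-record
    content untouched for roughly five years (Obsolete)."""
    seen_hashes = set()
    flagged = []
    for d in docs:
        if d.is_record:
            continue
        if d.content_hash in seen_hashes:   # Redundant: exact duplicate
            flagged.append(d.path)
            continue
        seen_hashes.add(d.content_hash)
        if (today - d.last_modified).days > stale_after_days:  # Obsolete
            flagged.append(d.path)
    return flagged
```

In practice the flagged list would feed a review queue rather than an automatic delete, since “trivial” is a judgment call that usually needs a human reviewer or a classifier.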

Glenn O’Brien
AI is going to shine a very bright light into an otherwise dark, dusty corner. It’s going to expose all of your past sins and forgetfulness, and those become real problems later. Things like ROT, things like overexposed permissions, things like sensitive data being secured by obscurity, just because people don’t know it’s there. Well, guess what? They’re going to know it’s there once they can start finding it with AI.

Steve Barsony
So that leads into the next step. Once you’ve got your pilot set up and you know what types of information they’re going to need to feed their workloads, we get to the step that usually starts off being quite daunting. Earlier, Mary mentioned something about governance debt in addition to technology debt. I think that is a really interesting way of looking at it because, as Dean just said, if there’s no penalty for skipping cleanup or labels or keeping your governance and your information management aligned, then you’re not going to see any negative effect. However, when we begin to work with AI systems, we end up relying, from a security standpoint, a data protection standpoint, and an information and knowledge standpoint, on these things that maybe were not as heavily invested in previously. So maybe we could talk a little bit about that readiness and what steps we can take initially, not boiling the ocean, a minimum viable approach. What are the risks we can eliminate, and what can we do to move forward with our pilot while keeping an eye on what we’ll need to do enterprise-wide for wider adoption of Copilot? Glenn, do you want to touch on some of that?

Glenn O’Brien
Yeah. In your intro of me, you talked about my couple of decades’ worth of experience, which made me feel really old considering my birthday was just last week, but most of those decades were spent in legal departments finding data in an eDiscovery context, and now we want to hide that data a little bit in the information governance context, where I’ve been for the last couple of years. And one of the first things that you can do is check your permissions: take a look through your SharePoint environment and check to see how overexposed your content is. That would be the first place I would look to get the biggest bang for your pilot buck, because a lot of people will open up a SharePoint site and make it available to the whole company because they don’t realize what they’re doing, and then they’ll start putting stuff in there, and that’ll layer onto the ROT and the redundant information and versions and all that stuff. But first and foremost, check those permissions, because you’re not going to realize how wide open a lot of this content is.
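Editor’s note: Glenn’s “check your permissions first” advice amounts to scanning site permission grants for broad principals. Here is a minimal sketch under a hypothetical export format; in a real tenant, this data would come from SharePoint Advanced Management oversharing reports or a Microsoft Graph export, not the simplified dictionaries assumed below.

```python
# Principals that typically signal an overexposed site.
BROAD_PRINCIPALS = {"Everyone", "Everyone except external users", "All Company"}

def find_overexposed(sites):
    """sites: list of dicts like {"url": ..., "grants": [principal, ...]}.
    Returns (url, broad_principals) pairs, widest exposure first."""
    hits = []
    for site in sites:
        broad = BROAD_PRINCIPALS.intersection(site["grants"])
        if broad:
            hits.append((site["url"], sorted(broad)))
    # Sites granted to more broad principals sort to the top of the queue.
    return sorted(hits, key=lambda h: -len(h[1]))
```

The output is a worklist: each flagged site gets a human decision about whether the broad grant is intentional before Copilot is allowed to reason over it.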

Dean Gonsowski
Just on that, because I hadn’t spent much time on the permissions front, and this will be the five seconds of promotional RecordPoint stuff: we started with a historical information governance platform that does all the classification and records management. Recently, we’ve added a permissions assurance piece, because we know how critical that is to the totality of what we’re trying to do, and on top of that, we’re doing the AI governance. So it’s just really interesting as we’ve looked at the table stakes for solutions. Governance is one of them, and we see it here: you need to be able to classify and auto-classify (you certainly can’t have your custodians classifying information), manage the permissions, and then manage the totality of the pipeline that goes into the AI agents. Being able to understand, replicate, and defend that is really critical, and we won’t get into all the shadow AI and some of the other stuff until later. But for us, that’s the necessary stack of solutions you need to make these pilots work, because Copilot and all the other AI tools work great; these are just the underpinnings that are going to make them successful and ready to go into production.

Glenn O’Brien
You want to be able to know what Copilot is going to find, and the converse of that is: what don’t you want it to find? So once you get those permissions squared away, then you need to start looking at what you need to restrict from Copilot. Am I going to the SharePoint sites where your C-suite is putting their board materials? Well, restrict that, because obviously you don’t want that available to Copilot. Start to think about sensitivity labels. Don’t allow it to reason over your most restricted content, and then start to funnel that down a little bit more, and then we can start talking about curating data. Again, it goes back to: what is the problem we’re trying to solve? What’s the use case we’re getting at here? Is it to surface enterprise-level content (hey, who’s my contact on this project)? Or is it: I need to look up something on a material safety data sheet or an SOP, because safety is on the line here? We’re going to get into the curation piece of that in a minute, but it goes back to the problem we’re trying to solve, and start working down that path.

Michael Elkins
There are a number of things here. When you look at the back end of Microsoft 365, you will notice that there have been tremendous changes in Purview specifically, driven by Copilot adoption. One of the things that’s available with your Copilot license is SharePoint Advanced Management. So if you are a system administrator, you have access, you can get to the reports, and you can identify the sites that have sensitive information types on them or that have oversharing, so you can use those tools directly. The other side of that is that this is changing constantly. Where we used to say, okay, now that I’ve got those sites, I’m going to use maybe Restricted SharePoint Search and say I’m only going to allow Copilot to use a certain set of sites (you can use these sites, but not the rest), so you could lock that down. Now coming out, and I think it’s in preview now and about to come into production, you’ll have dynamic DLP for Copilot, meaning you can set up DLP policies that will dynamically look at content and say, okay, if a sensitive information type starts showing up on sites, we can lock that down; we can prevent that content from being utilized by Copilot. So the features coming out are constantly changing. DSPM, data security posture management, is one of those things where you can monitor what’s going on in Copilot when you run your data investigations. That’s a new feature that’s out there. New features in DLP, new features in sensitivity labels, the new agents area: there are a lot of tools there, and they’re changing constantly, because Microsoft is also moving at the speed of AI, and we need to move at the speed of AI to make sure that we are doing our best to manage the back end and continue to stay on top of the AI capabilities. 
So that includes data classification, the sensitivity labels, and the access controls. Again, use SAM, SharePoint Advanced Management, to identify those components, and that will give you your baseline for your security guardrails. That’s your security guardrails. We’re not talking about the data side; that’s a whole other issue, because when you pick your Copilot pilot, is your data ready? Are you ready to take that on? Because everything you feed Copilot matters: if your back-end SharePoint environment is a series of glorified file shares, the quality of what you’re going to get out of Copilot, the hallucinations and the information, is going to reflect that. That’s what we’re talking about when we say that information is going to be enhanced or exposed.

Steve Barsony
Before we talk about the actual knowledge management and information management side of things, maybe for folks who haven’t worked as deeply on the classification, sensitivity label, and DLP side, we could talk a little bit about what that means in terms of what we’re trying to protect, how we’re protecting it, and what tools we have available to us. Now, certainly, we’ve got Purview, which does a lot of it inside the M365 environment, but HaystackID is a RecordPoint partner, so we’re able to use tools across the entire landscape and estate of our data, not just the things that sit inside M365. Maybe, Michael, you can talk a little bit about sensitivity labels in general and data classification, and then, Dean, you can follow up with what you’ve seen in terms of the change of focus of folks implementing tools to address the standard problems versus the new problems that AI brings.

Michael Elkins
Sure. So we’ll talk about what happens when I put a prompt into Copilot. That prompt gets sent into the orchestration layer, which takes it, structures it, and then sends it to Microsoft Graph. And that’s your protection layer. The Graph is what takes your sensitivity labels, which is really saying, I have my data classification: I’ve got general, I’ve got confidential, I’ve got restricted or highly confidential. It takes that and puts it into the process. So what happens with that? It says, well, if it’s confidential, do I want it encrypted? It allows you to put behaviors behind your classification system. It may be encrypted, so it prevents it from leaving. So when we talk about Copilot and sharing, and people who are worried about information going out and leaving the organization: Copilot is not your issue. It’s not what’s sending information out. It’s people. It’s what they have access to. So when somebody runs a prompt and gets that information back, the sensitivity labels are one piece of what prevents them from doing something with it. The second piece on top of that is data loss prevention, which could be based on the sensitivity label; I can put behaviors behind the label itself. With DLP, I can prevent people from actually using that content and putting it into prompts. So with DLP for Copilot, I can say, if I put in a prompt like, hey, whose social security number is this? When I put that prompt in, based on the sensitivity labels and the DLP, it’s going to come back and say, “You’re not allowed to do that. You can’t get that information back.” It’s going to stop me straight upfront. So those are the protections of the guardrails, because what we’re really protecting against is, when somebody gets that information, what do they do with it? Do they send it out? Do they use it incorrectly or irresponsibly?
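Editor’s note: The guardrail flow Michael walks through (DLP screening the prompt, sensitivity labels gating the content) can be modeled as a small decision function. This is a toy illustration of the logic only, not how Purview is implemented; the label names, their ranking, and the SSN pattern are assumptions for the example.

```python
import re

# Illustrative label hierarchy; real tenants define their own taxonomy.
LABEL_RANK = {"General": 0, "Confidential": 1, "Highly Confidential": 2}

# A stand-in for one sensitive information type: US Social Security numbers.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def copilot_guardrail(prompt, doc_label, max_label="Confidential"):
    """Return (allowed, reason): DLP screens the prompt first, then the
    document's sensitivity label is checked against the allowed ceiling."""
    if SSN_PATTERN.search(prompt):
        return (False, "DLP: sensitive information type in prompt")
    if LABEL_RANK[doc_label] > LABEL_RANK[max_label]:
        return (False, f"label '{doc_label}' exceeds the allowed ceiling")
    return (True, "ok")
```

The point of the sketch is the ordering: the prompt itself is screened before any content is surfaced, and even a permitted prompt can only ground against content at or below the label ceiling.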

Steve Barsony
Because there’s a difference between what Copilot in a general way has access to, which is based on the permissions that you put on the actual sources, versus what I have access to. So I may be able to see something that I can’t share, not through Copilot per se, but I could take something out of Copilot and then put it into a document that I then distribute. But if I have my sensitivity labels correctly configured and automatically applied, and I’ve got my data loss prevention in place, then, while ultimately the individual is still responsible for both the content that’s being generated and policy compliance, those tools can help us reduce the risk. Maybe not eliminate it completely, but certainly reduce it.

Michael Elkins
Copilot doesn’t share anything. It doesn’t do anything on your behalf to do that. What Copilot does is expose the information you already have access to. So if somebody shared something with you, you’ll have access to it, and it may be one of those things where somebody shared something and you didn’t know you had access to it, but when you do a search, suddenly it’s there. That’s a product of oversharing, or sharing in general. So that’s going to happen, but Copilot is not granting access to anything you shouldn’t have, any more than search in general does.

Glenn O’Brien
It’ll also respect the rules that you have on your handling side. So it respects the sharing, and it also respects the handling rules based upon sensitivity.

Michael Elkins
Right.

Steve Barsony
So do you see an uptick in the way people are viewing what they’re doing with RecordPoint, in terms of taking an existing program and expanding it to make sure the AI and Copilot implementations are covered?

Dean Gonsowski
Yeah, I think the big change for us is that the most forward-thinking organizations around information governance were always heavily regulated entities: financial services, critical infrastructure, et cetera. There are plenty of regulations that govern how they manage their information, particularly the practices around sensitive data, and then there’s the interplay between that and data breaches. The place you don’t want to be, the thing that gets you in the crosshairs, is over-retaining data and not protecting sensitive data, and then that data gets exfiltrated, you get ransomware, and you get all the well-known consequences from that. I think AI really changes the game. Information governance, and subsequently data governance and AI governance, are now table stakes for companies that may never really have thought about regulatory compliance or information governance at all. They thought about storage: yeah, we’ve got information, it’s all about our knowledge workers, data sprawl could be everywhere and anywhere, it didn’t really matter. So the drivers that pushed the more regulated entities have been there for a while, and that’s why we’re seeing a lot of this “let’s go forward really quickly,” because with AI it’s really easy to spin up pilots, and then they’re required to take a step backwards and say, okay, let’s understand things like records management, legal hold, and information governance. That governance angle just wasn’t a primary concern for a lot of entities. And so yes, there’s been a big uptick in “we need to do the basics now if we really want AI to work.”
And then some of it, finally, is “we didn’t even really think about the basics, and we had our pilot get launched, and now it failed.” And it failed really because of, we talked about the business strategy, but the data quality and the governance around it from an IT perspective, because ultimately, if you don’t have the right permissions and controls in place, IT is not going to let you go from pilot to production. So that’s where we’re jumping into the mix, where people want to get out of the blocks quickly, and now they’re getting stuck in a ditch a bit.

Steve Barsony
And you raised a great topic about content management and the quality of the information we’re putting into our pilot, into our AI. But before we jump off of this, we’re talking about protecting the data. So Glenn, you’ve spent quite a lot of time in highly regulated areas. When you talk about doing things like data classification and sensitivity labels, it’s easy enough to break it down simply and say, oh, it’s confidential, restricted, protected, highly confidential, public, that type of thing. In your experience, what are the challenges that folks who aren’t familiar with it should anticipate going into that kind of project?

Glenn O’Brien
It’s easy to say; it’s extremely hard to do when you actually get down to pulling the trigger on it. We spent about a year getting ready for records… not for records, just for the use of that button, the sensitivity button. Lunch and learns, messaging out: it’s coming, it’s coming, it’s coming. Putting videos together about how to actually press the button and, more importantly, why. We used a pretty simple approach, CPR: confidential, proprietary, restricted. We thought that was an easy mnemonic because everybody knows CPR somewhere along the line. So understanding what makes those various levels appropriate, and then what to do about them, because sometimes the tools aren’t as perfect as everybody thinks they are, and there is some configuration that needs to go on in the backend. And then understanding what the ramifications are going to be: if you do say this is restricted, oh, I cannot email this out without encrypting it. So make sure your recipient is aware of what’s going on. Maybe you’re dealing with a customer that isn’t as sophisticated; understand what the feedback from them is going to be, and how to help them through the process, because we are required to send this out as restricted. Change management, change management, change management. If you’re not talking about it six months in advance of getting it done, you’re going to run into problems.

Michael Elkins
Yeah, just to tie this back to the pilot process: every organization is on a different part of the spectrum of deploying the technology. So if you have your DLP and your classification set up, great, but you’re not done, because DLP is going to change. Everything changes as we expand and add more features. At the same time, some organizations don’t have that. So when I start planning what’s going to happen in my pilot, I start looking at, okay, if I don’t have classification and I don’t have DLP, then I have to provide some of those capabilities. I still have to provide those guardrails. In that sense, that’s part of the pilot discussion of who’s the right cohort to do that with, because these are things we still have to put guardrails around and still have to test, but we don’t want to boil the ocean for the entire organization. We need to do that in an area where we can test, validate, and confirm before we do it for the rest of the organization. So all of these go into that decision point, right?

Glenn O’Brien
And then along those lines, though, be not afraid. If you find yourself starting down this pilot road and you say, oh, I don’t have sensitivity labels out there, and this guy told me I need to start talking about it six months in advance, don’t be afraid of that. Think about the goals of your pilot, because that’s what we should be talking about here: the goals of the pilot. If the problem you’re trying to solve is this, and suddenly you run into the problem that, “Oh man, I’ve got a lot of ROT over there. I’ve got a lot of sensitive data over there. I haven’t squared that away yet,” you don’t necessarily need to back away from your pilot, because part of the pilot should be a business process exercise and a technology exercise as well. Refocus your effort on the technology piece of it, look somewhere else, and exercise the AI process on a smaller subset of your data that is less critical. Maybe you’ve got your company policies on a specific website or a SharePoint site. Great. Point your AI over at that. It’s low risk. You don’t have to worry about a whole lot of that stuff while you get your other stuff squared away, because, like I said before, you’re going to shine a very bright light in a dark corner. You’re going to find those things over there. Focus somewhere else where you can manage it while you get the rest of your house in order.

Steve Barsony
So now we’ve looked at protecting the data. We’ve looked at securing the data to make sure it doesn’t go outside the organization, and the people have the right level of permissions to what they need. Now we’re talking about the actual quality of the data that we’re going to be using inside of our AI, inside of our Copilot pilot. Let’s talk a little bit about content quality and metadata readiness. Glenn, we’ll just continue with you in terms of how you think about it in terms of when you start looking at this knowledge management issue.

Glenn O’Brien
So my thoughts on this have changed with my new role. As I mentioned before, I was in legal, and I did touch knowledge management with matter-related stuff, but I’m now in a data governance role, and my eyes have been opened wide. Go find your data governance people. They’re probably sitting in an analytics type of department, dealing with structured data all over the place. I’m proud to call myself one of those geeks right now, because they are going to tell you how to square away your data, including the unstructured data. They’re going to tell you about lineage. They’re going to tell you about a glossary. They’re going to tell you how to catalog your data. You don’t need to catalog every document, but you do need to understand where your data lives and put it in a glossary so people can find it. And one of those AI governance principles is a human in the loop. If you’ve got a repository of data that says Glenn O’Brien is the owner, and something goes wrong with that data, I now have a human in the loop. A, I’ve got a point of contact: go fix it. And B, I’ve got a process in place that says Glenn is going to look at that data every year, or every six months, and verify that it’s still accurate. That is good, clean data governance wholesomeness, and it’s going to help you square away your unstructured data. It’ll help you get some of that ROT squared away. It’ll help you get some of your sensitivity handled, because we’re also going to tell you about classification along the way; we classify our structured data all the time, and we’ll help you understand how to do that on the unstructured side. So go find your data governance geeks. They’ll give you a wealth of information.
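Editor’s note: the catalog-plus-owner pattern Glenn describes (a named owner, a review cadence, a periodic accuracy check) can be sketched with a minimal, hypothetical data-catalog entry. The field names and datasets below are invented for illustration, not any particular catalog product’s schema.

```python
from datetime import date, timedelta

# A minimal, hypothetical data-catalog entry: every repository gets a
# named owner (the human in the loop) and a review cadence.
catalog = [
    {"dataset": "HR-policies", "owner": "g.obrien",
     "last_reviewed": date(2024, 1, 15), "review_cycle_days": 365},
    {"dataset": "safety-data-sheets", "owner": "j.doe",
     "last_reviewed": date(2023, 2, 1), "review_cycle_days": 180},
]

def reviews_due(entries, today):
    """Flag datasets whose periodic accuracy review is overdue, so the
    named owner can be asked to re-verify the content."""
    return [e["dataset"] for e in entries
            if today - e["last_reviewed"] > timedelta(days=e["review_cycle_days"])]

print(reviews_due(catalog, date(2024, 9, 1)))
```

The check is trivial, but it is the mechanism behind “Glenn is going to look at that data every year and verify that it’s still accurate”: a recorded owner plus a recorded cadence makes staleness detectable.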

Michael Elkins
100%. We also have the opportunity to improve our environment as well. Some of you are probably familiar with SharePoint search, and there used to be an acronyms component that got pulled out of SharePoint search. Well, it’s back. It’s actually in Copilot. So the acronyms are there to help provide context when we’re doing prompts: if I see this acronym, what does it mean in relation to my organization? There are also custom data dictionaries, and what those provide is, again, more context, because what you’re really doing is giving Copilot more information about your organization. These may be acronyms that are very specific to my company. And when I do a prompt, if that information has been fed into Copilot, we’re training it, and what you’re really seeing is that we’re providing it context to answer the prompts we’re entering more effectively. And think of it from a metadata perspective. I may have a document that lives in SharePoint, but if I can’t find it and somebody sends me a copy, I’m going to hang onto that forever, and it’s going to sit in my OneDrive or somewhere. So now I’ve got my own copy of the procedure. If somebody puts in, “Show me the best SOP for X,” it’s going to go out and find whatever that person has access to. So there may be 30 versions out there, but only one is accurate; there’s one that’s the official one. This is part of that data cleanup: how do I get rid of all the stuff that’s no longer current? Because we don’t need people keeping their own copies that are out of date. How do I tell Copilot that this is the authoritative version, that this is the one, and this is the metadata I need associated with it? Is it approved? Is it a draft? Is it archived?
So that’s where the metadata starts to really assist us. All right.
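Editor’s note: Michael’s point about stray copies versus the one official document comes down to a metadata filter. A minimal sketch, with hypothetical document records and an assumed “status” field:

```python
# Hypothetical search results: many copies of the same SOP, only one of
# which carries the metadata that marks it authoritative.
results = [
    {"title": "Lockout-Tagout SOP", "status": "draft",    "location": "OneDrive/personal"},
    {"title": "Lockout-Tagout SOP", "status": "archived", "location": "SharePoint/old-site"},
    {"title": "Lockout-Tagout SOP", "status": "approved", "location": "SharePoint/controlled-docs"},
]

def authoritative(docs):
    """Keep only documents whose metadata marks them as the approved,
    official version, the copy an assistant should be steered toward."""
    return [d for d in docs if d["status"] == "approved"]

print(authoritative(results))
```

Without that status field, all three copies look equally valid to a retrieval system; the metadata is what lets the out-of-date copies be excluded.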

Glenn O’Brien
And I hate to say this, but safety is an issue there. It depends on what business you’re in. If you are dealing with SOPs or material safety data sheets or medical devices, and you’ve dropped something from a five-foot drop and you’re wondering, what’s the most this thing is allowed to drop? If it’s really three feet, but you found an out-of-date document that said five feet is still okay, you might rely on that information, and safety could be a problem at that point. I don’t want to be overly dramatic, but it is a true statement. In some environments, lives are at risk. Or your product is at risk, or your reputation is at risk, or your finances are at risk. These are all standard risk management considerations that should push you to get rid of that content so that you reduce your risk at the end of the day.

Steve Barsony
So let’s walk through a couple of strategies for how we can do that, against some concrete examples. We’ve talked a little bit in our discussions about live versus curated data, and you touched on something very specific there: you need an actual owner of an area that deals with safety data, product safety sheets. So let’s walk through how somebody can think about this and how they might apply it.

Glenn O’Brien
I would again go back to talking to your data governance people, because our premise is the pilot and how to get started with it. What’s the problem I’m trying to solve? Okay, the problem I’m trying to solve is that I need to get all of my material safety sheets, my SOPs, my quality reports, whatever, in one area, so I can increase efficiency and reduce overhead time by simply querying my Copilot: what is the maximum capacity of this Allen wrench? If that’s the problem you’re trying to solve, what’s the source of truth for that information? It’s not going to be SharePoint, and we’ve talked about that already. It’s not SharePoint in the wild. It’s not everything on your OneDrive, because you’ve got copies of it. It’s not the group SharePoint site. At this point, you want to start thinking: I want to curate my data. Again, another data governance term. I’m going to curate this data. Those of us who have been around for a while remember this from the knowledge management days: I want to pull the examples, the source of truth, and put them somewhere I know is trustworthy. I’ve got an owner on it, I’ve got a data steward on it. Guess what? All of those are logged in your data catalog, Axon, or whatever it is you’re using for a data catalog. And now I’ve got a process set up where they’re being reviewed every year, and a change control method where, if I’ve got to update one of those documents, I’m updating the catalog to say there’s been an update, and I’ve got an owner and a steward assigned. Then, using your Copilot Studio with that orchestrator Michael was talking about a few minutes ago, your prompt is going to say, oh, this is a curated question, because somebody smarter than me figured out how to program that agent.
And it’s going to say, I need to go over here for this. I don’t want to go to Microsoft 365 in the wild. I need to go specifically to this repository, look at this data, and this is where I’m going to return my answer from.
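Editor’s note: the curated routing Glenn describes (a Copilot Studio agent steering curated questions to a controlled repository instead of tenant-wide search) can be sketched as a simple topic-to-repository lookup. The routes, topic keywords, and URI scheme below are all hypothetical.

```python
# Hypothetical topic-to-repository routes of the kind an agent built in
# Copilot Studio might encode: curated questions go to a controlled,
# stewarded repository instead of tenant-wide search.
CURATED_ROUTES = {
    "safety": "sharepoint://controlled-docs/safety-data-sheets",
    "sop": "sharepoint://controlled-docs/sops",
}

def route(prompt: str) -> str:
    """Pick a grounding source: a curated repository when the topic
    matches, otherwise fall back to general search."""
    lowered = prompt.lower()
    for topic, repo in CURATED_ROUTES.items():
        if topic in lowered:
            return repo
    return "search://tenant-wide"

print(route("What is the maximum drop height in the safety sheet?"))
print(route("Summarize yesterday's meeting notes"))
```

Real agents would classify intent far more robustly than a keyword match, but the design point is the same: curated questions never touch the uncurated estate.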

Michael Elkins
Yep.

Steve Barsony
I’m going to move from… Go ahead, sorry.

Michael Elkins
I was going to say metadata plays a key role in that. Take SOPs, from my time in energy and utilities: you may have an SOP that is specific to a particular location, a facility. So your taxonomies, the metadata you apply to your content, play a key role in making that information effectively available. That’s a core component.

Steve Barsony
I’m going to go to the compliance side. As we start to use Copilot, we still have the same obligations under our general legal compliance and eDiscovery. Michael, do you want to touch on that? Because eDiscovery is probably a concern that everybody on this call shares. We’re just beginning to see requests come in for information around prompts and around what the AI was relying on. Let’s talk a little bit about how Microsoft deals with that: where it’s stored and how we get to it.

Michael Elkins
So if you’re entering prompts, I think this is one of the things we always freak people out with: don’t enter sensitive information into Copilot, not because it’s going to go out, but because it’s discoverable. Every prompt that you put in, all of the information that you enter, it’s saving that. There’s a record of it going into Exchange. So it’s discoverable when somebody wants to look at something. If a case comes in, what you’ve been typing into Copilot will be there for somebody to look at. The other side of that is that IT is probably going to be looking at it as part of the monitoring process. When I look at DSPM for AI in Purview, it’s surfacing information that says these are the people entering the prompts, and these are the sensitive information types that are getting hit. So IT is going to monitor that information to make sure the policies and procedures we’ve got in place are doing the right thing. It’s logging who’s doing the prompts and when, it’s logging all of the information from a transactional perspective, and it’s also saving that information so it can be retrieved if needed. So that is discoverable. Quick note on the compliance side: in Purview, in Compliance Manager, the other thing we want to look at is that there’s the NIST AI framework, and there are the EU AI regulations. Within Purview, in your Compliance Manager, those regulations are there. They’re optional premium assessments, but I would highly recommend them; you can create assessments off of those regulations, and they’ll give you a better idea of what’s going on within your environment, so you can monitor and make sure you’re meeting those requirements. So there’s international compliance, there’s AI compliance in general, and then there’s what’s going on within eDiscovery.
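Editor’s note: the retention-and-monitoring point above can be sketched with a hypothetical interaction log. The same retained records serve two audiences: an eDiscovery pull (everything a custodian typed) and an IT monitoring view (who is tripping sensitive information types). This is a toy model, not the actual Purview or DSPM for AI data model.

```python
from datetime import datetime

# Hypothetical record of Copilot interactions: every prompt is retained,
# so it is both discoverable and monitorable.
interactions = [
    {"user": "alice", "when": datetime(2025, 3, 1, 9, 5),
     "prompt": "Summarize the Q1 sales deck", "sensitive_hits": []},
    {"user": "bob", "when": datetime(2025, 3, 2, 14, 30),
     "prompt": "Whose SSN is 123-45-6789?", "sensitive_hits": ["U.S. SSN"]},
]

def discovery_pull(log, user):
    """Everything a given custodian typed: the kind of record an
    eDiscovery request could reach."""
    return [r["prompt"] for r in log if r["user"] == user]

def monitoring_report(log):
    """Who is tripping sensitive information types: the kind of view IT
    would watch as part of ongoing monitoring."""
    return {r["user"]: r["sensitive_hits"] for r in log if r["sensitive_hits"]}

print(discovery_pull(interactions, "bob"))
print(monitoring_report(interactions))
```

The takeaway mirrors the panel’s warning: once prompts are retained, they are evidence, and treating them that way from day one is part of responsible AI.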

Steve Barsony
Specifically, around eDiscovery, when I’m sitting inside of my Purview eDiscovery, and I’m trying to collect this information for a discovery request, is that something I can do through Purview eDiscovery, and where is Microsoft storing it?

Michael Elkins
So you can. It comes out of Exchange. In essence, if email and the mailbox are part of your discovery process, it goes back into your environment, because again, that’s where it’s storing all of your prompts and all of that information. So that’s all discoverable through Purview. Just be aware, when you’re doing that, that it is discoverable, and don’t put in anything you shouldn’t. This is where responsible AI comes in. Does that cover what you were looking for? On the audit side, Purview’s also got the audits and the audit logs. The audit logs are being tracked too, but those aren’t the prompts; that’s really who’s doing what, when.

Steve Barsony
So all of this that we’re doing in order to get the pilot even on a small scale is going to require behavioral change, organizational change. Maybe we can talk for a moment about change management and how we help people get to where they need to be in order to take advantage of Copilot. Glenn, you’ve had some experience with that.

Glenn O’Brien
Yeah, I think it goes down a couple of paths. One is announce early and often: it’s coming, it’s coming, it’s coming. It’s here. Get ready. And one of the AI governance principles is that we need to make sure our employees are trained, so prompt education. Make sure they’re not just typing in a Google search, because it’s not a Google search. So coach them along the way on the proper way of using it, and on what you should and should not be using it for, to Michael’s point. It’s a continuum of change management, and you manage it just like any other change management exercise: get ready, get ready, get ready, it’s here. And then support. One of the things I think we were going to talk about is how to judge whether or not you’re successful. That’s part of the change management process, as are user surveys, feedback loops, and incident management programs. You need an AI incident management program as part of your AI governance piece of this as well, to understand the after-effects post-implementation: not only whether you can and should move your pilot into production, or need to take it back to the shop and make some changes, or can expand it, or, oh, this satisfies a use case I hadn’t even thought of before and now I can branch off. But then you have to be careful and make sure you have no unintended consequences. So understand how your users are using it through the Purview reports, as well as through user feedback, so you can get sentiment too, because the Purview reports aren’t going to give you that. How well did this work for you? Did you accept the prompt? Did you change it? Or did you just give up because you didn’t like it?

Steve Barsony
So that brings us to measuring success. You’ve mentioned a couple of metrics there. In terms of other metrics we might look at, for the success of the pilot in particular and then as we roll out Copilot in general, Michael, any key ones you look at when doing this kind of readiness assessment and pilot deployment?

Michael Elkins
Yeah. I mean, there are adoption metrics. Are you seeing people start off quickly and then drop off and no longer use it? Are you seeing increasing utilization? Part of that is the surveys you’re going to run about effectiveness, usage, and what people like and don’t like. I would strongly recommend community: having champions within the business who can help people from that training perspective. We kind of skipped past that, but training is a big piece. And then measure the outcomes. They may be quality outcomes: are we improving? Are we better at responding to proposals? Are we faster at creating presentations? Are we faster at reviewing and revising contracts? Those are all metrics that can be defined up front and then measured on the back end. But a lot of it comes back to: are people using it, and are they using more and more features of it? In other words, I’m using Copilot, but am I also using it in Outlook? Am I also using it in PowerPoint? Am I also using it in Excel? Those are all things that can be measured.
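Editor’s note: the adoption metrics Michael lists (drop-off over time, and breadth of use across apps) are straightforward to compute once usage is recorded per user. A minimal sketch over invented usage records; the week labels and app names are illustrative only.

```python
# Hypothetical per-user usage records for two simple adoption metrics:
# retention (still active later on) and feature breadth (distinct apps).
usage = {
    "alice": {"week1": ["Word", "Outlook"], "week4": ["Word", "Outlook", "Excel"]},
    "bob":   {"week1": ["Word"],            "week4": []},
}

def retained(users):
    """Users still active in week 4: a simple drop-off check."""
    return [u for u, w in users.items() if w["week4"]]

def feature_breadth(users):
    """How many distinct apps each user reaches Copilot through."""
    return {u: len(set(w["week1"]) | set(w["week4"])) for u, w in users.items()}

print(retained(usage))
print(feature_breadth(usage))
```

Defining these up front, as the panel suggests, is what makes the back-end measurement meaningful: you know before the pilot starts what “drop-off” and “broader use” will look like in the data.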

Steve Barsony
So, assuming a successful Copilot deployment: we’ve demonstrated that we can safely use Copilot against our data while protecting our PII, PHI, and company trade secrets, and it’s providing some efficiencies in the M365 tools available to us. In terms of what happens next, let’s talk a little bit about extending the capabilities within M365. Dean, are you seeing people move successfully out of Copilot pilots and start to look more broadly at different kinds of deployments, using Copilot Studio and so forth?

Dean Gonsowski
Yeah, I mean, I think certainly there’s expansion within the Microsoft stack, and then there’s expansion just generally. We talked about this in some of our prep calls. Everything is moving so quickly, and I think you have to think about the Microsoft environment, obviously, but also what’s outside of it: what you’re going to connect to, your structured and unstructured information sources, and what blend of AI solutions you’re going to use as well. So for us, it’s how do you control the totality of that, particularly once you leave the Microsoft estate, because all the same things we talked about, information governance, data governance, et cetera, all still apply. So you can’t be super locked down within Microsoft, and then, to the extent you get outside that, be in the wild, wild west. That’s not going to be good either.

Steve Barsony
So we’re running up against time. I left a little bit of space for questions, but we only have one. And I think it’s a good question. The question is, have you truly seen a seamless Copilot deployment?

Dean Gonsowski
Nope.

Glenn O’Brien
But don’t be afraid of that, though. Don’t be afraid of that shift and learn what didn’t quite happen and move on. Don’t be afraid of it.

Michael Elkins
Pilot, lather, rinse, repeat. Pilot, lather, rinse, repeat.

Steve Barsony
Right, right. Well, I want to thank everyone for joining.

Dean Gonsowski
I would say really quick… sorry, Steve. Yes, you’re eventually going to get the pilot nailed, but you’ve got to get into production. Otherwise, all the work you’ve done at the pilot level, if you’re not ready to clear the hurdles to get into production, you’ve set yourself up for a not-great scenario.

Steve Barsony
So I want to thank EDRM, and I want to thank our panelists, and I want to thank the attendees for joining today’s webcast. We really value your time and appreciate your interest in our educational series. Don’t miss our upcoming March 25th webcast, “Meaningful Transparency in AI: What Privacy Laws Actually Require.” During this program, we’ll share how organizations can translate complex AI systems into disclosures that are understandable, accurate, and aligned with legal requirements. Check out our website, haystackid.com, to learn more, register for this upcoming webcast, and explore our extensive library of on-demand webcasts. Once again, thank you for joining us. Hope you all have a great day, and I’m going to turn this back to Mary. Thank you, Mary.

Mary Mack
Well, thank you, Steve, and thank all of you for joining today’s HaystackID webcast, and thank you to our panelists for sharing their expertise and experience in this growing and evolving area. Before closing, Steve mentioned HaystackID’s next webcast, “Meaningful Transparency in AI: What Privacy Laws Actually Require,” happening on Wednesday, March 25th. You can find the registration link in today’s resources. We hope to see you there. And on behalf of EDRM, sincere appreciation is extended for your participation today. Wishing everyone a productive day. Thank you.


Expert Panelists

+ Steve Barsony (Moderator)
Managing Director, HaystackID

Steve Barsony is a Managing Director at HaystackID, where he advises organizations on complex challenges at the intersection of information governance, privacy protection, cyber incident response, CFIUS compliance, Microsoft 365, and legal operations. Steve brings decades of experience helping enterprises manage sensitive data, regulatory risk, and technology-driven change. Steve joined HaystackID in May 2024 from AFH Partners, the advisory firm he founded in 2021. At AFH Partners, Steve worked closely with corporate legal, compliance, and IT leaders to design practical, defensible strategies for information governance, privacy, and cross-border data risk. Prior to AFH Partners, Steve spent more than a decade at Consilio, where he held several senior leadership roles, including Senior Vice President in the Innovation & Product Office, Vice President of Innovation & Technology, and Vice President of Analytics at DiscoverReady. During this time, he played a key role in advancing analytics-driven legal services and shaping technology-enabled approaches to discovery and investigations.

+ Michael Elkins
Global Advisory Consultant, HaystackID

Michael Elkins is a Global Advisory Consultant at HaystackID and a detail-oriented, strategic Information Management Consultant with over 15 years of experience implementing content-focused solutions and enhancing information accessibility and security across diverse industries. He has a proven track record in developing information management, security, and governance programs that significantly reduce costs and improve operational efficiency. Michael is skilled in leading cross-functional teams and fostering partnerships to drive market expansion and deliver exceptional client value.

+ Dean Gonsowski
Chief Revenue Officer, RecordPoint

Dean Gonsowski helps CEOs, executive teams, and venture- and PE-backed companies build scalable revenue engines and accelerate growth. With more than 25 years of customer-facing executive leadership experience, Dean has empowered high-growth SaaS companies—including ActiveNav, Relativity, Clearwell/Veritas, and Recommind/OpenText—to reach their next level of market impact. As the current CRO of RecordPoint, a B2B SaaS company in the GRC space, Dean leads the entire revenue lifecycle—from go-to-market strategy and sales to marketing, customer success, and business development—driving repeatable, measurable growth across products and teams. Dean is passionate about designing effective playbooks, optimizing revenue operations, and helping leaders make data-driven decisions that deliver predictable results. Dean holds a JD from the University of San Diego School of Law and a BS from the University of California, Santa Barbara.

+ Glenn O’Brien
Senior Manager, Data Governance Policy Management, RTX

Glenn O’Brien is a senior information governance, legal operations, and eDiscovery leader with more than two decades of experience aligning people, process, and technology to support complex legal, compliance, and risk management environments. Currently serving as Senior Manager, Data Governance Policy Management at RTX, Glenn leads initiatives focused on data governance frameworks, policy development, and operational execution in highly regulated enterprise settings. His work centers on continuous improvement—ensuring governance programs are practical, scalable, and defensible across global organizations.


About EDRM

Empowering the global leaders of e-discovery, the Electronic Discovery Reference Model (EDRM) creates practical global resources to improve e-discovery, privacy, security, and information governance. Since 2005, EDRM has delivered leadership, standards, tools, guides, and test datasets to strengthen best practices throughout the world. EDRM has an international presence in 145 countries, spanning six continents. EDRM provides an innovative support infrastructure for individuals, law firms, corporations, and government organizations seeking to improve the practice and provision of data and legal discovery with 19 active projects.

HaystackID® solves complex data challenges related to legal, compliance, regulatory, and cyber requirements. Core offerings include Global Advisory, Cybersecurity, Core Intelligence AI™, and ReviewRight® Global Managed Review, supported by its unified CoreFlex™ service interface. Recognized globally by industry leaders, including Chambers, Gartner, IDC, and Legaltech News, HaystackID helps corporations and legal practices manage data gravity, where information demands action, and workflow gravity, where critical requirements demand coordinated expertise, delivering innovative solutions with a continual focus on security, privacy, and integrity. Learn more at HaystackID.com.

Assisted by GAI and LLM technologies.

SOURCE: HaystackID