[Podcast] HaystackID® in the EDRM Illumination Zone: Michael Cammack and Stephanie Wienke

Editor’s Note: AI governance is often treated as a policy function. In practice, it’s anything but; governance is a hands-on discipline embedded in how organizations deploy and manage emerging technologies every day. In this episode of the EDRM Illumination Zone, HaystackID’s Michael Cammack and Stephanie Wienke discuss how governance shifts from a simple approval process into an enterprise-wide capability distributed across teams. The conversation highlights the role of leadership, cross-functional coordination, and independent validation in building programs that hold up under scrutiny. Cammack also introduces the concept of “governance debt,” a practical framework for understanding the risks created when deployment outpaces control. For legal and compliance professionals, the discussion offers a grounded view of what it takes to operationalize governance in real environments, where policy alone is not enough.


AI Governance That Moves Beyond Policy and Into Practice

By HaystackID Staff

Organizations that treat AI governance as a compliance checkbox have already made their first mistake. The gap between written policy and operational practice is becoming more visible as organizations across industries accelerate AI adoption without operationalizing governance to match. The issue isn’t the checklist itself; it’s the assumption that a policy document and an approval workflow amount to a true governance program.

This distinction was at the center of a recent episode of the EDRM Illumination Zone podcast, where Michael Cammack, Deputy Information Security Officer at HaystackID, and Stephanie Wienke, Security Specialist at HaystackID, outlined how we have built AI governance from the inside out, treating it as an operational discipline embedded in how our company actually runs.

More Than an Approval Queue

When HaystackID’s AI Governance Committee launched, its scope was intentionally narrow: evaluate incoming requests, assess risk, and determine whether a tool was fit for deployment. That model worked well until AI adoption accelerated. As more teams began exploring these tools, the volume and complexity of requests made it clear that the committee needed a governance architecture that could scale.

The committee’s remit expanded accordingly, and its members oversee enterprise-wide AI governance anchored in a formal framework, with responsibilities spanning policy alignment, risk management, regulatory monitoring, cross-functional expertise development, and organizational trust. The committee is led by HaystackID Data Protection Officer Christopher Wall, whose approach has been deliberately cross-functional, creating, as Wienke described it, a space where every department has a voice and decisions are made collaboratively rather than handed down from a central authority.

The Strongest Governance Programs Start at the Top

Elevating AI governance from an approval process to an enterprise program requires active leadership ownership.

“AI governance only works when we treat it as an enterprise program, not as a technical checklist. C-suite sponsorship is essential. When leadership is fully engaged, something important happens. Governance becomes about clarity, confidence, and culture, not just control for control’s sake,” said Cammack during the podcast.

That distinction—governance as clarity and culture rather than control—is what separates organizations that realize real value from AI from those that only manage its risks. When leadership is visibly committed, something structural shifts. Pilots evolve into programs. Adoption becomes more consistent across business units. And employees adapting to rapid workflow changes get the clarity and investment in re-skilling they need to use new tools effectively.

Cammack introduced a concept worth broader circulation in this space: governance debt. Just as technical debt accumulates when engineering shortcuts defer complexity to the future, governance debt builds when AI deployment outpaces the oversight infrastructure behind it—models evolving, guardrails shifting, capabilities behaving in ways nobody planned for at launch. Leadership engagement is what keeps that debt from compounding by defining where humans must stay in the loop, ensuring people feel supported as their roles change, and building the internal stability that shows up directly in client-facing delivery.

That last point is where the competitive case becomes concrete. Organizations where leadership treats AI governance as a strategic priority build the type of workforce confidence and delivery consistency that clients notice. Responsible AI practice, structured this way, stops being overhead and starts being a differentiator. In practice, that shows up in faster response times for regulatory inquiries, more consistent review outcomes, and greater confidence in defensibility when decisions are challenged.

Good Governance Needs a Witness

The workforce confidence and delivery consistency Cammack described don’t come from internal commitment alone; they require external validation. HaystackID’s recent expansion of our HITRUST certification is where that principle becomes measurable.

HITRUST has long been the gold standard for compliance assurance in regulated industries. It validates that controls operate in practice, that SOPs align with what’s documented, and that security and privacy programs are truly embedded in how the organization functions. AI introduces new risks to both security and privacy, and traditional frameworks weren’t built to address them. HaystackID deliberately expanded our HITRUST certification to include AI risk controls, bringing those threats into a validated compliance framework rather than managing them separately.

“What makes our move into AI risk so important is that it introduces an entirely new set of threats—deepfake-driven fraud, prompt-based data leakage, rapid model changes that create supply chain exposure, and unexpected agentic behavior,” Cammack said. “All of this is unfolding within a constantly evolving regulatory environment.”

By integrating AI-specific controls into the HITRUST framework, HaystackID moved beyond ad hoc risk management to an independently validated, enterprise-grade approach, giving our clients clear assurance that AI systems are deployed with the right safeguards in place. The scope of that expansion was significant: growing from roughly 300 controls to more than 400 in a single certification cycle, adding new privacy and AI-focused requirements to an already rigorous baseline.

“Adding additional privacy controls and AI risk controls to this audit cycle allowed us to demonstrate our growing maturity as an organization and highlight HaystackID’s strength in these areas,” said Wienke.

Real-World Experience Sharpens Compliance Frameworks

Certification processes like HITRUST are designed to hold organizations accountable, and the strongest ones create space for that accountability to flow in both directions. Cammack has spent years in those conversations, contributing operational expertise that strengthens the framework for everyone. One of the clearest examples involves shared accounts.

Most compliance frameworks flag shared credentials as problematic, and the reasoning is sound: without individual attribution, accountability becomes difficult and incident response harder. But our industry operates under constraints that demand a more nuanced approach. Processes run for hours across shift changes, and stopping a six-hour job to let someone clock out simply isn’t an option. Cammack brought that operational reality directly into his conversations with HITRUST auditors, making the case that the goal was never to eliminate shared credentials outright — it was to ensure accountability. And accountability, properly engineered, is fully achievable even where shared credentials are operationally necessary.

At HaystackID, every user has a unique identity secured with multi-factor authentication (MFA), number matching, and location verification. Where shared credentials are necessary, compensating controls—such as enterprise password management and remote desktop access—ensure each session is tied to a specific user and device, with full auditability. Users never see the password; access is limited to a task-specific session.

“Every single checkout is logged. Every session is tied to an individual. Every action is attributable, and that gets us all the way back to non-repudiation,” Cammack said.
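The checkout-and-attribution pattern described above can be sketched in a few lines. This is a minimal, illustrative Python model of a credential vault, not HaystackID’s actual tooling: the class and method names are hypothetical, and a production system would back this with an enterprise password manager, MFA, and device verification. The point it demonstrates is the control intent Cammack describes: the user receives an opaque session token rather than the shared password, and every checkout lands in an audit trail tied to an individual identity and device.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class CheckoutEvent:
    """One attributable checkout of a shared credential."""
    user: str
    device: str
    resource: str
    timestamp: str


class SharedCredentialVault:
    """Illustrative sketch: shared credentials are checked out per user
    and device, every session is logged, and the secret itself is never
    shown to the user. Names are hypothetical, not a real product API."""

    def __init__(self) -> None:
        self._secrets: dict[str, str] = {}
        self.audit_log: list[CheckoutEvent] = []

    def register(self, resource: str, secret: str) -> None:
        # A real vault would keep this in HSM-backed encrypted storage.
        self._secrets[resource] = secret

    def checkout(self, user: str, device: str, resource: str) -> str:
        if resource not in self._secrets:
            raise KeyError(f"unknown resource: {resource}")
        # Log who, from where, for what, and when -- the attribution
        # that preserves non-repudiation even for a shared account.
        event = CheckoutEvent(
            user, device, resource,
            datetime.now(timezone.utc).isoformat(),
        )
        self.audit_log.append(event)
        # Return an opaque session token; the password never leaves
        # the vault, so access is limited to a task-specific session.
        return hashlib.sha256(
            f"{user}:{device}:{resource}:{event.timestamp}".encode()
        ).hexdigest()
```

In this sketch, auditors reviewing `audit_log` can tie every session back to a named user and device, which is the framework intent the compensating controls preserve.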

Auditors and client security teams consistently reach the same conclusion: the framework’s intent—control and accountability—is fully preserved. Working through these scenarios directly with HITRUST makes that intent more practical and durable in complex environments.

The Governance Debt Clock Is Running

AI governance done well looks less like a program and more like a practice, something continuously maintained, tested, and validated rather than declared complete at launch. Leadership sponsorship, cross-functional coordination, independent certification, and operational embedding aren’t milestones to clear. They’re the ongoing conditions under which responsible AI use actually holds.

For legal technology professionals, the question worth sitting with is not merely whether a governance program exists. It’s whether that program would produce defensible evidence under scrutiny from a regulator, a client, or an auditor who wants to see not just what the policies say, but how the organization behaved when they were put into practice. Governance debt lives in that gap, and it has a way of becoming visible at the worst possible moment.

Cammack’s closing take on AI was characteristically grounded. Somewhere between the optimists predicting AI will cure cancer and the voices warning of existential catastrophe, he lands firmly in the camp of AI as the greatest tool humanity has yet built, and it’s one that demands responsible implementation and controls. That balance between capability and accountability isn’t a constraint on what AI can do. It’s what makes it sustainable.

Governance debt doesn’t resolve on its own. Every deployment without proper oversight, every model update without re-validation, every policy without a testing protocol adds to it. The organizations that treat governance as infrastructure—built before it’s needed, not after—are the ones that will move confidently when the next capability shift arrives.

More About Michael Cammack

Michael Cammack is the Deputy Information Security Officer at HaystackID. He has extensive experience in the field of IT. Cammack is responsible for leading the IT and Security controls team to achieve various certifications, including ISO 27001, SOC 2 Type 2, and, most recently, HITRUST r2. Before his role at HaystackID, Cammack worked at NightOwl Global from 2013 to 2020, where he served as the Director of IT and Security. During his tenure, Cammack played a key role in expanding the company’s presence internationally, establishing data centers in the US, EU, and APAC regions.

More About Stephanie Wienke

Stephanie Wienke is a Security Specialist at HaystackID, where she manages compliance frameworks such as SOC 2 Type 2, ISO 27001, and HITRUST r2, oversees internal audits, and leads the company’s security awareness and policy acknowledgment programs. Wienke also administers third-party vendor security assessments, drafts and revises policies, and creates standard operating procedures to ensure audit compliance. A Certified Information Privacy Manager (CIPM) through IAPP, Wienke brings specialized data privacy and security expertise. Before HaystackID, she worked as a billing and administrative assistant at NightOwl Global and taught high school English, including courses in literature and composition. Her background in education and administration provides a unique perspective, blending analytical and communication skills to address complex security challenges.




The podcast is available on your favorite listening app, including Spotify, Apple Podcasts, and Google Play. The podcast is also available on the EDRM website and is provided below for convenience.



Join HaystackID’s experts as they share actionable insights on today’s most material topics—from how GenAI is reshaping legal data strategies to the latest approaches in digital forensics. Explore our full library of EDRM Illumination Zone podcast episodes.


About the Electronic Discovery Reference Model

Empowering the global leaders of e-discovery, the Electronic Discovery Reference Model (EDRM) creates practical global resources to improve e-discovery, privacy, security, and information governance. Since 2005, EDRM has delivered leadership, standards, tools, guides, and test datasets to strengthen best practices throughout the world. EDRM has an international presence in 136 countries, spanning six continents. EDRM provides an innovative support infrastructure for individuals, law firms, corporations, and government organizations seeking to improve the practice and provision of data and legal discovery with 19 active projects. Learn more at EDRM.net.

About HaystackID®

HaystackID® solves complex data challenges related to legal, compliance, regulatory, and cyber requirements. Core offerings include Global Advisory, Cybersecurity, Core Intelligence AI™, and ReviewRight® Global Managed Review, supported by its unified CoreFlex™ service interface and eDiscovery AI™ technology. Recognized globally by industry leaders, including Chambers, Gartner, IDC, and Legaltech News, HaystackID helps corporations and legal practices manage data gravity, where information demands action, and workflow gravity, where critical requirements demand coordinated expertise, delivering innovative solutions with a continual focus on security, privacy, and integrity. Learn more at HaystackID.com.

Assisted by GAI and LLM technologies.

Source: HaystackID