The stack that wasn't built for agents
The enterprise compliance technology stack was built up over decades, one layer at a time, in response to specific risk events and regulatory requirements. eCommunications archiving came out of financial services regulation in the late 1990s. eDiscovery platforms expanded after the FRCP amendments in 2006. Data loss prevention grew as data privacy regulation proliferated. Each layer solved the problem of its moment.
Every one of those layers was designed around human-generated information. Humans create volume that's bounded by typing speed. Humans make errors in patterns you can study and predict. Humans can be trained, disciplined, and held accountable.
AI agents work in none of those ways. They generate communications at a volume no surveillance system was calibrated for, make errors that look nothing like human errors, and can't be trained on your compliance requirements in a one-day seminar. The existing stack wasn't designed for this. Gartner projects that 40 percent of enterprise applications will feature AI agents by 2026, up from less than 5 percent in 2025. Understanding what each layer can and can't do for AI agents — and where the gap is — is the right starting point for any compliance leader preparing for what's coming.
"By the end of 2026, 'death by AI' legal claims will exceed 1,000 globally due to insufficient AI risk guardrails."
— Daryl Plummer, VP Distinguished Analyst, Gartner · Gartner Top Predictions 2026, October 2025
What each existing layer covers (and where it falls short)
| Stack layer | What it covers | Gap for AI agents |
|---|---|---|
| Identity & Access Management | Controls what data AI agents can access | Doesn't govern what agents say about data they can legitimately access |
| Data Loss Prevention | Detects known sensitive data patterns in outbound communications | Can't catch reasoning-based disclosures or apply regulatory context to the same content |
| eCommunications Archiving & Surveillance | Archives and scans communications after the fact | Post-send model; review queues become unworkable at AI agent volumes |
| AI Output Quality Tools | Scores outputs for accuracy, coherence, general safety | Generic heuristics only — can't enforce org-specific policies or regulatory obligations |
| AI Governance Platforms | Model inventories, risk assessments, access controls, bias audits | Model-level oversight only — doesn't govern individual inference-level interactions |
| Pre-generation Policy Enforcement | Injects org-specific policy context before each inference call | The missing layer — the only one that prevents violations before they occur |
Identity and access management
IAM governs who and what can access which resources. For AI agents, it's a meaningful first line — an agent that can't access sensitive data can't expose it. Most enterprises with serious security practices have IAM reasonably well-developed.
The gap: IAM controls access, not communication. An agent with appropriate access to customer data can still produce communications that discuss that data in unauthorized ways. Access control is necessary but not sufficient for communications compliance.
Data loss prevention
DLP detects and prevents unauthorized transmission of sensitive data by scanning outbound communications for patterns — PII, financial data, classified document fingerprints. It operates on the channel level, not the content reasoning level.
The gap: DLP catches known data patterns. It doesn't catch reasoning-based disclosures — an AI agent that accurately summarizes confidential information in its own words, without reproducing verbatim data. It also can't apply regulatory context. The same sentence may be permitted in one regulatory context and prohibited in another, and DLP has no way to make that distinction.
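To make that distinction concrete, here is a minimal sketch of pattern-based scanning in Python. The `dlp_scan` function, its regexes, and the example messages are all illustrative; they are not drawn from any real DLP product.

```python
import re

# Illustrative pattern-based DLP rules: known sensitive-data shapes only.
DLP_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def dlp_scan(message: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the message."""
    return [name for name, pattern in DLP_PATTERNS.items() if pattern.search(message)]

# The verbatim identifier is caught:
print(dlp_scan("Client SSN is 123-45-6789"))  # ['ssn']
# A reasoning-based disclosure of the same fact passes clean:
print(dlp_scan("The client's taxpayer ID matches the one in the sealed filing"))  # []
```

The verbatim SSN trips the pattern; the paraphrase of the same fact contains nothing the scanner can match, which is exactly the reasoning-based gap described above.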
eCommunications archiving and surveillance
Smarsh, Global Relay, and similar platforms archive electronic communications, apply lexicon-based and ML-based scanning, and surface flagged items for compliance review. They're the core of the communications compliance stack in regulated industries, and they work well for what they were designed for: managing the volume of human-generated communications from registered representatives.
The gap for AI agents is structural. eComs surveillance is a post-send model. At AI agent communication volumes, post-send detection creates review queues that no compliance team can meaningfully work through. More fundamentally, post-send detection can only document violations — it can't prevent them. The supervisory framework FINRA and similar regulators require is increasingly oriented toward prevention, not just documentation.
AI output quality and safety tools
A newer category of tools evaluates AI-generated content for quality, factual accuracy, and adherence to content policies. They operate as post-generation filters or scorers on the output side of inference.
The gap: these tools run on generic content quality dimensions — factual accuracy, coherence, general safety. They're not calibrated to your organization's specific policies, regulatory requirements, or legal posture. An output can pass every generic quality check and still violate your company's specific approved-claims policy or Regulation FD obligations. Organizational specificity isn't something generic AI quality tools can provide.
AI governance platforms
Enterprise AI governance platforms — a growing category — focus on model-level oversight: model inventories, risk assessments, bias audits, access controls, audit trails. They help organizations track which AI models are deployed, who owns them, and what risk assessments have been completed.
The gap: model-level governance doesn't address individual agent interactions. A model that passes a governance review can still produce a policy-violating output in a specific inference call. The governance platform knows the model exists. It doesn't govern what the model says in a given context.
The missing layer
Look at the existing stack and a pattern emerges: every layer operates either on the access side (what can the agent touch?) or the output side (what did the agent produce?) — and typically after the fact. Nothing in the existing stack operates at the inference point, before output is generated, with full knowledge of your organization's specific policies and current context. For general counsel and CCO buyers, AI governance for the agentic enterprise means owning this gap, not waiting for IT to close it.
That's the missing layer: a policy infrastructure system that, at every inference call, assembles a structured context object containing the policies relevant to this specific interaction and injects it into the agent's context window before the model generates its response. The agent completes its task knowing what it can and can't say — not because it was "trained" to comply, but because the constraints are explicitly present in its context at the moment of generation.
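As a rough illustration of the pattern (not InPolicy's actual implementation), a pre-generation step might assemble a structured policy object and prepend it to the model's system message before the inference call. Every class, field, and message name here is hypothetical:

```python
from dataclasses import dataclass, field

# Hypothetical structured context object; the field names are illustrative.
@dataclass
class PolicyContext:
    approved_claims: list[str] = field(default_factory=list)
    prohibited_topics: list[str] = field(default_factory=list)
    regulatory_constraints: list[str] = field(default_factory=list)

    def render(self) -> str:
        """Serialize the policies into a block the model sees before generating."""
        lines = ["## Policy constraints for this interaction"]
        lines += [f"- APPROVED CLAIM: {c}" for c in self.approved_claims]
        lines += [f"- DO NOT DISCUSS: {t}" for t in self.prohibited_topics]
        lines += [f"- REGULATORY: {r}" for r in self.regulatory_constraints]
        return "\n".join(lines)

def build_prompt(policy: PolicyContext, system_prompt: str, user_message: str) -> list[dict]:
    """Inject the rendered policy block into the system message ahead of the model call."""
    return [
        {"role": "system", "content": f"{system_prompt}\n\n{policy.render()}"},
        {"role": "user", "content": user_message},
    ]
```

The point of the sketch is the ordering: the constraints are assembled per interaction and are already in the context window when generation begins, rather than being checked against the output afterward.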
This is the only layer in the stack that provides prevention at the inference point, based on organization-specific policy context. Everything else reacts after the fact.
How a complete agentic compliance stack fits together
At inference time, before any output is generated:
- Policy context injection: relevant policies, regulatory constraints, and organizational context delivered into the agent's context window.
- Tenant context assembly: active litigation posture, approved claims, regulatory history, competitor sensitivities, assembled per organization and kept current.
- Conversational context tracking: awareness of what's already been said in ongoing engagements, to prevent contradictions and unauthorized escalations.
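The conversational-context piece can be sketched as a tracker that remembers what an agent has already committed to in an engagement. The `ConversationTracker` class, its 10 percent cap, and its escalation rule are invented for illustration only:

```python
# Hypothetical tracker: remembers prior commitments in an engagement so a
# proposed new statement can be checked against them. Thresholds are invented.
class ConversationTracker:
    def __init__(self) -> None:
        self.stated_discounts: list[float] = []

    def record_offer(self, discount_pct: float) -> None:
        """Log a discount the agent has already offered in this engagement."""
        self.stated_discounts.append(discount_pct)

    def is_unauthorized_escalation(self, new_discount_pct: float, cap_pct: float = 10.0) -> bool:
        """Flag offers that exceed the org cap or jump well past what was already said."""
        prior_max = max(self.stated_discounts, default=0.0)
        return new_discount_pct > cap_pct or new_discount_pct > prior_max + 5.0
```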
After generation, before send: a review layer for high-stakes outputs — regulatory filings, significant contract terms, client communications above a certain risk threshold. Human review for the highest-risk cases; automated review for routine ones.
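At its core, a review layer of this kind is routing logic. This sketch uses invented output-type names and an arbitrary 0.7 risk threshold to show the shape of the decision:

```python
# Hypothetical post-generation review router; categories and threshold are
# illustrative, not any product's actual configuration.
HIGH_STAKES_TYPES = {"regulatory_filing", "contract_terms"}

def route_for_review(output_type: str, risk_score: float, human_threshold: float = 0.7) -> str:
    """Send high-stakes or high-risk outputs to human review; the rest to automated review."""
    if output_type in HIGH_STAKES_TYPES or risk_score >= human_threshold:
        return "human_review"
    return "automated_review"
```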
After send: eCommunications archiving remains required for regulatory record-keeping obligations regardless of pre-generation controls. Surveillance and lexicon scanning serve as a backstop and provide the audit trail documentation required by FINRA Rule 3110 and similar supervision requirements.
At the platform level: model inventory and risk assessment, IAM governing data access, and DLP catching known sensitive data patterns that escape other controls.
Where InPolicy fits
InPolicy operates at the pre-generation layer — the policy infrastructure that assembles and injects organization-specific policy context at inference time. It also extends to human communications: the browser extension and document editor integration apply the same pre-send policy enforcement to employee communications that the context injection layer applies to AI agents.
The same policy that governs what a sales rep can say in an email also governs what an AI agent drafting a proposal can say. That consistency is built into the architecture, not bolted on afterward.
How to build this incrementally
Most enterprises can't build the entire stack at once. A practical sequence:
- Start with the highest-risk AI agent use case: the workflow where an AI agent produces client-facing or commercially significant communications at scale. That's where pre-generation governance has the most immediate value.
- Map the policies that apply to that workflow and convert them to a structured format the governance layer can apply.
- Deploy the policy context injection layer for that workflow, and validate with compliance that outputs are consistent with policy before scaling.
- Add archiving and an audit trail for record-keeping obligations.
- Extend to additional workflows and, in parallel, to human communications.
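What "a structured format the governance layer can apply" might look like, as a purely illustrative schema (not InPolicy's actual policy format — every field name here is an assumption):

```python
import json

# Hypothetical structured form of a single prose policy. The schema, agent
# name, claim text, and source citation are all invented for illustration.
policy = {
    "id": "claims-001",
    "applies_to": ["sales_proposal_agent"],
    "rule_type": "approved_claims_only",
    "approved_claims": [
        "Reduces deployment time by up to 30% in internal benchmarks",
    ],
    "on_violation": "block_and_rewrite",
    "source_document": "Marketing Claims Policy v4, section 2.1",
}

# A machine-readable policy can be versioned, audited, and matched to a
# workflow at inference time in a way a prose policy document cannot.
print(json.dumps(policy, indent=2))
```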
Frequently Asked Questions
- What's the most important gap in most enterprises' current compliance stacks for AI agents?
- The absence of a pre-generation policy enforcement layer — a system that injects organization-specific policy context into AI agents before they generate output. Most existing compliance tools operate after the fact, which isn't sufficient for AI agent communications at scale.
- Does eCommunications surveillance still matter in an agentic enterprise?
- Yes. eCommunications archiving remains required for regulatory record-keeping, and surveillance provides a backstop and audit documentation. It should complement pre-generation controls, not replace them.
- How does AI governance platform tooling relate to pre-generation policy enforcement?
- They address different layers. AI governance platforms track model-level risk at the program level. Pre-generation policy enforcement governs individual agent interactions at the inference level. Both are needed; neither substitutes for the other.
- Where should enterprises start when building an agentic compliance stack?
- Start with the highest-risk AI agent use case — typically the first workflow where an AI agent produces client-facing or commercially significant communications at scale. Implement pre-generation policy enforcement for that workflow first, then extend incrementally.
