AI Governance · 9 min read

Guide to AI Governance for Legal Buyers

Andrew Becker

CEO & Co-Founder, InPolicy

The wrong buyer has been running the meeting

Pull up any enterprise AI governance vendor's website and read who they're talking to. The demo videos feature CTOs and CISOs. The ROI calculators reference model performance metrics. The implied buyer is somewhere in IT.

That's a problem. Not for the vendors — they'll sell to whoever shows up. The problem is that the exposure AI agents create isn't primarily technical. It's legal, regulatory, and contractual. The people who own that exposure are general counsel and chief compliance officers. And most of them have been watching AI governance get bought by the wrong function for the past two years.

A Gartner survey of 104 general counsel published in October 2025 found that 36 percent of GCs are now focused on AI adoption, building AI skills in their legal department, or improving AI risk management. That share is growing. The GCs who wait for IT to solve a legal problem are learning that lesson the expensive way.

36% of general counsel focused on AI adoption or risk management (Gartner, Oct 2025)
40% of enterprise applications will feature AI agents by 2026, up from under 5% (Gartner, Aug 2025)
63% of organizations have an AI use policy — as a document, not as enforcement (White & Case, 2025)

Why the risk is yours, not theirs

There's a reasonable case that AI safety — ensuring models behave in alignment with human values broadly — belongs with technical teams. That's a model architecture question.

AI governance is different. AI governance is about ensuring AI systems behave according to your organization's specific policies, regulatory obligations, and legal constraints. That's not a technical problem. It's yours.

When an AI agent deployed for client communications at a broker-dealer describes a security as suitable for a retail investor, that's a potential Regulation Best Interest violation. FINRA and the SEC care who supervised the communication, not what model produced it. The compliance function owns that exposure. When an AI agent generating marketing materials states that a product "meets all applicable regulatory requirements" — language no legal team ever approved — that's legal's problem to clean up, not engineering's.

In 2025, FINRA brought 12 enforcement actions for misleading communications, resulting in $6.5 million in fines. Communications violations appeared in the agency's top five enforcement categories for the first time in five years, according to the Eversheds Sutherland 2025 FINRA Sanctions Study. None of those cases got resolved by a CISO.

"What governance mechanisms are in place to oversee AI systems—especially those systems that may employ 'black box' algorithms, where it's not clear how inputs are weighed or outputs derived?"

— Commissioner Caroline A. Crenshaw, SEC · SEC AI Roundtable, March 27, 2025

The agents are already deployed

If your organization is typical, AI agents are probably already operating in workflows that touch external communications. Gartner predicts that 40 percent of enterprise applications will feature AI agents by 2026, up from less than 5 percent in 2025. Most of those deployments happened without a governance framework designed to address legal and regulatory exposure. Engineering built a useful workflow, product shipped it, and legal found out months later.

By the time legal shows up to audit what's already running, the agent has been sending client communications for months. That's a worse position than being in the room before deployment. The goal isn't to slow AI deployment down — that's not a winning position either. It's to establish governance frameworks that let AI agents operate at scale while staying within legal and regulatory boundaries. That requires legal ownership from the start, not a post-hoc review process.

The AI governance market is crowded with tools that address different problems under the same label. For a legal buyer, here's a quick map of what matters and what doesn't.

Dimension | AI Safety | AI Governance
Concern | AI misaligning with human values; societal harm | AI violating your specific policies, regulations, and legal obligations
Who buys it | CISO, AI/ML Engineering | General Counsel, CCO, Chief Risk Officer
What the tools do | Model alignment, bias detection, output filtering for harmful content | Policy enforcement at inference, regulatory context injection, audit trails
Catches | Harmful or offensive content | FINRA Rule 2210 violations, unauthorized commitments, litigation hold breaches, unapproved claims
Urgency for most enterprises | Medium-term strategic concern | Immediate — agents are already deployed and communicating

"Depending on the ways in which a member firm may use Gen AI, such use could implicate virtually every area of a member firm's regulatory obligations."

FINRA, Regulatory Notice 24-09, June 27, 2024

Not your problem: AI safety tooling

AI safety tools focus on general model behavior — output filtering for harmful content, bias detection, alignment metrics. These are legitimate technical concerns. They don't address your organization's specific legal and regulatory posture. An AI output can pass every generic safety check and still violate your company's specific approved-claims policy, FINRA Rule 2210, or an active litigation hold. AI safety tools aren't designed to know those things. They won't catch those violations.

Relevant but incomplete: model-level governance platforms

Enterprise AI governance platforms — model inventories, risk assessments, access controls, audit trails — give organizations visibility into which AI systems are deployed and who owns them. That matters for program management. It doesn't govern individual agent interactions. A model that passes a governance review can still produce a policy-violating output in a specific context an hour later. Model-level oversight and inference-level policy enforcement solve different problems.

What you actually need: inference-level policy enforcement

The governance layer that addresses legal and regulatory exposure operates at the moment an AI agent generates output — before that output reaches any external party. This is usually called pre-generation policy enforcement or context injection: before each inference call, the system assembles the relevant policies and organizational context and delivers them to the agent. The agent completes its task knowing what it can and can't say, based on your company's specific obligations — not generic guidelines.

Think of it as briefing a new hire on your legal exposure before their first client call, except it happens at machine speed, before every single interaction, including the 10,000 that happen while everyone is in meetings.
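For readers who want to see the mechanics, here is a minimal sketch of what context injection can look like. It is illustrative only: PolicyStore, build_governed_prompt, llm_generate, and the policy IDs are hypothetical names, not any specific vendor's API.

```python
# Minimal sketch of pre-generation context injection. PolicyStore, build_governed_prompt,
# and llm_generate are hypothetical names for illustration, not a specific vendor's API.
from dataclasses import dataclass


@dataclass
class Policy:
    policy_id: str
    owner: str   # e.g. "Legal" or "Compliance" -- the function accountable for this rule
    text: str    # the constraint itself, stated in plain language


class PolicyStore:
    """Holds the organization's current policies: approved claims, litigation holds, etc."""

    def __init__(self, policies: list[Policy]):
        self._policies = policies

    def active_for(self, task: str, audience: str) -> list[Policy]:
        # A real system would filter by task type, audience, jurisdiction, product line, etc.
        return list(self._policies)


def build_governed_prompt(store: PolicyStore, task: str, audience: str, request: str) -> str:
    """Assemble policy context and prepend it to the agent's instructions before inference."""
    policies = store.active_for(task, audience)
    policy_block = "\n".join(f"- [{p.policy_id}] ({p.owner}) {p.text}" for p in policies)
    return (
        "Comply with the following organizational policies in your response:\n"
        f"{policy_block}\n\n"
        f"Task ({task}, audience: {audience}):\n{request}"
    )


# Usage: the governed prompt, not the raw request, is what reaches the model.
store = PolicyStore([
    Policy("FIN-2210", "Compliance", "Do not project investment performance or make promissory claims."),
    Policy("LIT-HOLD-07", "Legal", "Do not reference the pending Acme matter in external communications."),
])
prompt = build_governed_prompt(store, "client email", "retail investor",
                               "Draft a follow-up about the new bond fund.")
# response = llm_generate(prompt)  # hypothetical inference call
```

The hard part isn't the prompt assembly — it's keeping the policy store current and owned by Legal, which is the point the evaluation questions below return to.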

The regulatory exposure map

The specific exposure depends on your industry, but a few risk categories show up broadly.

Industry / Area | Applicable rule | AI agent risk
Financial services | FINRA Rules 2210, 3110; Reg BI; Reg FD | Suitability representations, misleading claims, MNPI disclosure, supervision failures
Healthcare | HIPAA Privacy & Security Rules | PHI reproduction in outputs, unauthorized disclosure to external parties
Any industry | Contract law | Unauthorized commitments on pricing, delivery, or service levels in commercial communications
Multi-state operations | Colorado SB 24-205; state privacy laws (CCPA, etc.) | High-risk AI deployment obligations, consumer disclosure requirements, opt-out rights

Communications compliance in financial services

FINRA Rule 2210 requires public communications to be fair, balanced, and not misleading — no projecting investment performance unless specifically permitted, no exaggerated or unwarranted claims. FINRA Rule 3110 requires broker-dealers to establish supervisory systems reasonably designed to achieve compliance. An AI agent operating without pre-generation governance will routinely produce language that doesn't meet these standards: suitability representations no human reviewed, product descriptions missing required disclosures, forward-looking language presented as fact. The supervisory obligation doesn't change because the output was AI-generated.

Selective disclosure

Regulation FD prohibits selective disclosure of material non-public information to market professionals or shareholders. An AI agent with access to internal financial models, deal pipeline data, or investor communications is a Reg FD risk vector. In September 2024, the SEC fined DraftKings $200,000 for a Reg FD violation after material nonpublic revenue information was distributed through an executive's social media account before public disclosure. The SEC's analysis centered on what was disclosed and when — a framework that applies identically to AI agent output, regardless of intent.

Contract and commitment risk

When an AI agent handling commercial negotiations makes a statement about pricing, deliverables, or service levels, that statement may be construed as a representation or an offer. Your contracts team probably doesn't know the agent made it. This risk is largely invisible until a counterparty cites a commitment in a dispute — and it applies in any industry, in any negotiation where an AI agent is involved.

Privacy and data handling

An AI agent with access to customer data or medical records can reproduce protected information in outputs that violate HIPAA, GDPR, or state privacy statutes. HIPAA civil monetary penalties can reach $1.9 million per violation category per year, and HHS doesn't require proof that the disclosure was intentional. Neither do most state privacy laws.

State AI regulation

Colorado's SB 24-205 — the first comprehensive state AI law — imposes risk management obligations on developers and deployers of high-risk AI systems. California, Texas, and several other states have AI-specific bills in various stages. The regulatory surface is expanding faster than most programs can track it. Legal and compliance need to own the monitoring function; IT won't catch these unless someone tells them to look.

What to look for when evaluating AI governance tools

When you evaluate an AI governance platform, the questions worth asking are different from what an IT buyer would ask. Technical integrations matter less than policy specificity. Here's what to focus on.

Does it know your policies, or generic ones? A system that enforces generic content quality guidelines isn't enforcing your legal obligations. The governance layer needs to work from your company's specific approved claims, your regulatory history, your active litigation posture. Ask the vendor how organization-specific policy context gets into the system, how it's maintained, and how it gets updated when your circumstances change — when a new litigation hold is issued, when a product claim gets approved or pulled, when a regulatory inquiry opens.

Does it operate before or after generation? Post-generation output checking can flag violations. It typically can't prevent them from reaching external parties. For regulated communications, prevention matters more than detection. Verify whether the system operates at the inference layer — before output is generated — or only as a post-generation filter.

Can it produce an audit trail a regulator can follow? Under FINRA Rule 3110 and similar supervision frameworks, you need to demonstrate that your supervisory system is reasonably designed. That requires documentation: which policies were applied, what the agent said, what near-misses were caught and corrected, what override requests were made and by whom. Make sure the system produces a documented record, not just aggregate dashboards.

Who owns the policies in the system? If policies live in IT-owned configuration files, they'll drift from your legal requirements. The governance layer should have a clear owner in Legal or Compliance, with controlled update workflows and change logging. The policy context needs to stay current and authoritative — not locked in a config that engineering wrote two product versions ago.
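On the audit-trail question above, one way to make "a documented record, not just aggregate dashboards" concrete is to ask what a single per-interaction record contains. The sketch below is illustrative only; the field names are assumptions, not a regulatory schema or any particular product's output.

```python
# Illustrative shape of a per-interaction governance audit record.
# Field names are hypothetical, not a regulatory or vendor-defined schema.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class GovernanceAuditRecord:
    agent_id: str
    timestamp: str
    policies_applied: list[str]          # policy IDs injected before generation
    output_reference: str                # what the agent said, or a pointer to the stored output
    violations_prevented: list[str] = field(default_factory=list)  # near-misses caught pre-send
    overrides: list[dict] = field(default_factory=list)            # who requested an override, and why


record = GovernanceAuditRecord(
    agent_id="client-comms-agent",
    timestamp=datetime.now(timezone.utc).isoformat(),
    policies_applied=["FIN-2210", "LIT-HOLD-07"],
    output_reference="Follow-up email on bond fund; projected-returns phrasing removed",
    violations_prevented=["Performance projection blocked before send"],
)
print(json.dumps(asdict(record), indent=2))  # the kind of record a reviewer or regulator can walk through
```

A record like this also doubles as the near-miss log discussed in the budget conversation below.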

The budget conversation

Compliance infrastructure isn't usually framed as IT spend, and AI governance shouldn't be either. The framing that lands in a budget conversation: a single significant enforcement action in financial services typically costs more than years of prevention infrastructure. A moderate FINRA enforcement action — fines, outside counsel, internal investigation, enhanced supervision requirements — runs from hundreds of thousands to several million dollars. Litigation exposure from a single unauthorized AI commitment can run larger. A HIPAA violation can reach $1.9 million per category per year before outside counsel.

Prevention infrastructure doesn't need to avoid many incidents to pay for itself. The challenge is making it concrete when the wins are counterfactual. One approach that works: pre-generation governance systems produce a log of near-misses — every communication flagged and corrected before it was sent. That's a concrete record of incidents prevented, not a theoretical benefit. It makes the ROI conversation much easier than explaining risk reduction in the abstract.

One additional point worth making in these conversations: a 2025 White & Case global compliance survey found that 63 percent of organizations have a policy governing employee AI use, and 26 percent plan to implement one. Having a policy document is the easy part. The organizations without the infrastructure to enforce it at the point of communication — for both humans and AI agents — get the liability exposure of a policy they can't demonstrate they're actually enforcing.

Frequently Asked Questions

Should AI governance be owned by Legal or IT?
The organizational risk from AI agent communications — regulatory violations, litigation exposure, unauthorized commitments — is a legal and compliance problem. Legal and Compliance should own the policy framework and governance requirements. IT handles implementation and integration. Both need to be involved, but legal ownership of the policy layer is the difference between governance that stays current and governance that drifts.
What's the difference between AI safety and AI governance for enterprise compliance?
AI safety focuses on ensuring AI systems behave in alignment with human values broadly — a model architecture and research concern. Enterprise AI governance focuses on ensuring AI systems comply with your organization's specific policies, regulatory obligations, and legal constraints. An output can pass every generic safety check and still violate your specific legal requirements. They address different problems and require different tools.
Is eCommunications surveillance enough to govern AI agent communications?
No. eCommunications surveillance is a post-send model — it catches violations after they've reached external parties. At the volume at which AI agents operate, post-send review queues become unworkable, and surveillance alone doesn't satisfy FINRA Rule 3110's requirement for a supervisory system reasonably designed to achieve compliance. Pre-generation governance is the right layer for AI agent communications.
What regulations specifically apply to AI-generated communications?
In financial services: FINRA Rules 2210 and 3110, Regulation Best Interest, Regulation FD. In healthcare: HIPAA. Broadly: state AI laws (Colorado SB 24-205 and others in progress), contract law for unauthorized commitments, and applicable privacy statutes. The common thread is that regulations apply based on the content and context of communications — not on who or what produced them.

See InPolicy in action

Pre-send enforcement and agentic AI governance — built for General Counsel and CCOs.

