
FINRA, SEC, and AI Agents: The Communications Compliance Crisis No One Is Ready For

Andrew Becker

CEO & Co-Founder, InPolicy

The rules weren't written for this

Financial services firms operate under some of the most prescriptive communications compliance frameworks in any industry. FINRA Rule 2210 governs communications with the public. SEC Regulation FD covers selective disclosure of material non-public information. FINRA's suitability rules and Reg BI govern investment recommendations. Broker-dealers are required to supervise all electronic communications — an obligation built around the assumption that a human registered representative produced them.

None of these frameworks anticipated AI agents. Not because the regulators were careless. When these rules were written, an autonomous AI agent handling client communications on behalf of a registered broker-dealer wasn't a near-term operational reality.

It is now. In 2025, FINRA brought 12 enforcement actions for misleading communications, generating $6.5 million in fines — and communications violations appeared in the agency's top five enforcement categories for the first time in five years, per the Eversheds Sutherland 2025 FINRA Sanctions Study. That enforcement trend is running concurrently with AI agent deployments that most firms haven't yet governed. The gap between where regulation is and where AI deployment is heading is precisely where compliance risk concentrates.

What the existing rules actually require

Supervision of communications — FINRA Rule 3110

Rule 3110 requires broker-dealers to establish supervisory systems reasonably designed to achieve compliance with applicable regulations. For communications, that means reviewing and supervising outgoing electronic communications. The rule doesn't specify human-only review, but the supervisory framework has always been designed around human-generated communications.

When AI agents generate communications, the question of what constitutes adequate supervision becomes materially harder. Is sampling 5% of AI-generated outputs sufficient? What does a "reasonably designed" system look like when the agent is sending thousands of client communications per day? No enforcement guidance yet answers this directly, which means firms are making judgment calls in the dark.

Communications with the public — FINRA Rule 2210

Rule 2210 requires all public communications to be fair, balanced, and not misleading. Specific prohibitions: predicting or projecting investment performance unless specifically permitted, making exaggerated or unwarranted claims, omitting material information that would make a statement misleading.

An AI agent that confidently describes a security as "appropriate for your investment goals" or a strategy as "historically reliable" is almost certainly violating Rule 2210 unless a registered principal reviewed and approved that specific language — which, by definition, they haven't if the agent is operating autonomously.

"The content standards of Rule 2210 apply whether member firms' communications are generated by a human or technology tool."

FINRA, Regulatory Notice 24-09, June 27, 2024

Regulation FD

Reg FD prohibits selective disclosure of material non-public information to market professionals or shareholders. An AI agent with access to internal documents, financial models, or deal pipeline information and the ability to communicate with external parties is a Reg FD risk vector most firms haven't fully thought through. The agent doesn't need to be "trying" to make a disclosure. It needs only to reproduce information that happens to be material and non-public in a context where it's selectively communicated. In September 2024, the SEC fined DraftKings $200,000 for a Reg FD violation after material nonpublic revenue performance data was distributed through an executive's social media account before public disclosure — an action that required no intent, only unauthorized distribution of MNPI.

Regulation Best Interest

Reg BI requires broker-dealers to act in the best interest of retail customers when making a securities recommendation. If an AI agent produces anything that could be construed as a recommendation — which a sophisticated language model, given enough context about a customer's situation, very easily can — the Reg BI framework applies. The agent's intent is irrelevant. The content is what matters.

| Rule / Regulation | What it requires | AI agent risk |
| --- | --- | --- |
| FINRA Rule 2210 | Public communications must be fair, balanced, not misleading; no unwarranted claims or performance projections | Agents describing product performance, implying suitability, or making claims no principal approved |
| FINRA Rule 3110 | Supervisory systems must be reasonably designed to achieve compliance | Post-send sampling fails at AI agent volumes; supervision may be found inadequate on exam |
| Regulation Best Interest | Broker-dealers must act in retail customers' best interest when making securities recommendations | Agents generating personalized, account-context responses that constitute de facto recommendations |
| Regulation FD | Prohibits selective disclosure of material non-public information to market professionals or shareholders | Agents with access to internal financials or deal data reproducing MNPI in external communications |

Four scenarios that create real exposure

The client service agent that becomes a de facto registered rep

A broker-dealer deploys an AI agent for client inquiries. The agent draws on product documentation and the client's account profile and responds to a question about whether a particular fund is appropriate for the client's goals with language that sounds like a personalized recommendation. Under Reg BI, this interaction may constitute a recommendation. The agent isn't a registered representative. No principal approved the communication. Compliance finds out when someone reviews the transcript — or when a client files a complaint.

The research summary that contains MNPI

An AI agent generating research summaries for client distribution accesses an internal model incorporating earnings information shared by an investor relations contact under a confidentiality agreement. The summary contains inferences from that information that aren't publicly available. That's selective disclosure of material non-public information, and Reg FD doesn't require intent.

The marketing content that violates 2210

An AI agent assists with drafting marketing materials for a new fund offering. Without specific constraints, the agent produces language describing the strategy as one that has "consistently outperformed in volatile markets" — drawn from historical performance data in its training context, presented without required disclosures and implying predictive performance. This violates Rule 2210 on its face.

The supervision gap that shows up in an exam

A firm deploys an AI agent for high-volume client communications. Compliance establishes a process sampling 5% of outputs — reasonable given resources. A pattern of suitability-adjacent language in the other 95% goes undetected for months. A FINRA examination identifies the pattern. The supervisory system is found inadequate under Rule 3110. This is the scenario that should keep compliance officers up at night, because it's entirely plausible at most firms deploying AI today.

Why eCommunications surveillance alone doesn't cut it

Financial services firms have invested heavily in eCommunications surveillance platforms. These tools are well-suited to the problem they were designed for: managing the volume of human-generated communications from registered reps. Applied to AI agents, they hit two structural problems.

First, volume. AI agents generate far more communications per unit time than human reps. A surveillance system calibrated for human communication volumes will either produce unworkable review queues at AI volumes, or reduce sampling rates to levels that won't satisfy supervisory obligations under Rule 3110.

Second, timing. eCommunications surveillance is a post-send model. The communication has already reached the client before it enters the review queue. For suitability-adjacent language, Reg FD disclosures, and Rule 2210 violations, the damage is done the moment the message sends. Post-send detection creates a paper trail. It doesn't prevent the violation.

What AI agent communications require is a pre-generation governance layer: a system that injects the relevant regulatory context — approved claims, FINRA rule constraints, Reg BI requirements, active Reg FD sensitivities — into the agent's context window before it generates any client-facing output. The agent produces compliant output by construction, not by luck. Building that layer is an AI governance framework question owned by Legal and Compliance — not an IT infrastructure question.
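As a minimal sketch of what "compliant by construction" could mean mechanically — every name and field here is hypothetical, and real policy content would come from Legal and Compliance, not engineering:

```python
from dataclasses import dataclass, field

@dataclass
class PolicyContext:
    """Hypothetical bundle of regulatory constraints injected
    into the agent's context before any output is generated."""
    approved_claims: list[str]
    prohibited_phrases: list[str]
    required_disclosures: list[str]
    regfd_sensitivities: list[str] = field(default_factory=list)

def build_governed_prompt(policy: PolicyContext, client_message: str) -> str:
    """Prepend policy constraints so generation is constrained up front,
    rather than checked after the message has already been sent."""
    constraints = "\n".join([
        "You may only make the following approved claims:",
        *[f"- {c}" for c in policy.approved_claims],
        "Never use language resembling:",
        *[f"- {p}" for p in policy.prohibited_phrases],
        "Every response must include these disclosures:",
        *[f"- {d}" for d in policy.required_disclosures],
        "Topics currently restricted under Reg FD (do not discuss):",
        *[f"- {s}" for s in policy.regfd_sensitivities],
    ])
    return f"{constraints}\n\nClient message:\n{client_message}"
```

The point of the sketch is the ordering: the constraints exist in the context window before the model produces a token, so the review happens at design time, not after the client has the message.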

What an adequate supervision framework looks like

Compliance leaders at financial services firms are actively working through what adequate supervision of AI-generated communications under Rule 3110 actually requires. The practical answer: pre-generation governance plus an audit trail that documents the governance decisions made.

Pre-generation governance without documentation doesn't satisfy Rule 3110 — you can't demonstrate the system was reasonably designed if you can't show what it did. Documentation of violations without prevention doesn't satisfy it either — you've just proven the system wasn't working. A governance layer that injects policy context at inference time, logs the policies applied and outputs produced, and flags cases where a policy override was requested or an output was near a constraint boundary — that's what an adequate AI supervision framework looks like in practice.
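In code terms, each generation event might produce an audit record along these lines — a hypothetical schema, not a prescribed or regulator-endorsed format:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class GovernanceRecord:
    """Hypothetical audit-trail entry, one per AI-generated communication."""
    timestamp: str
    channel: str                  # e.g. "email", "client-chat"
    policies_applied: list[str]   # IDs of policies injected pre-generation
    output_sha256: str            # hash of the output actually sent
    override_requested: bool      # agent asked to bypass a constraint
    near_boundary: bool           # output flagged as close to a limit

def record_generation(channel: str, policies: list[str], output: str,
                      override_requested: bool = False,
                      near_boundary: bool = False) -> str:
    """Return a JSON line suitable for an append-only audit log."""
    rec = GovernanceRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        channel=channel,
        policies_applied=policies,
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        override_requested=override_requested,
        near_boundary=near_boundary,
    )
    return json.dumps(asdict(rec))
```

Hashing the output rather than storing it inline is a design choice, not a requirement — firms with WORM-storage retention obligations would store the full message alongside the record.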

"The rapidly evolving landscape and capabilities of AI agents may call for supervisory processes that are specific to the type and scope of the AI agent being implemented."

FINRA, 2026 Annual Regulatory Oversight Report

A checklist for compliance officers before deployment

  1. Map every communication channel where the agent will operate and every category of party it will communicate with. Client-facing communications are highest risk under FINRA/SEC rules.
  2. Identify the applicable regulatory constraints for each communication type. Rule 2210 governs retail communications. Reg BI governs recommendations. Reg FD governs disclosures. Reg SP governs privacy.
  3. Define what the agent can and cannot say. Work with product and legal to establish approved claims, required disclosures, and prohibited language specific to each communication context.
  4. Implement pre-generation governance. Those constraints need to be injected into the agent's context at inference time — not as a post-send check.
  5. Document your supervision framework. For Rule 3110 purposes, maintain a written record of the governance controls in place, their design rationale, and how they're monitored.
  6. Build an escalation protocol. Define the cases where AI output should go to a registered principal before sending, and wire that into the agent's workflow.
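Item 6 can be sketched as a routing rule that decides whether a draft is held for a registered principal before sending. The trigger phrases below are illustrative placeholders, not a vetted compliance list:

```python
def requires_principal_review(draft: str, account_context_used: bool) -> bool:
    """Decide whether an AI-drafted message must be held for a registered
    principal. Trigger terms below are illustrative, not a compliance list."""
    recommendation_terms = ("recommend", "suitable for you",
                            "appropriate for your goals")
    performance_terms = ("will outperform", "consistently outperformed",
                         "historically reliable")
    text = draft.lower()
    # Personalized language plus account context: possible Reg BI recommendation
    if account_context_used and any(t in text for t in recommendation_terms):
        return True
    # Performance claims: possible Rule 2210 violation regardless of context
    if any(t in text for t in performance_terms):
        return True
    return False

# Personalized suitability language with account context triggers escalation
assert requires_principal_review(
    "This fund is appropriate for your goals.", account_context_used=True)
```

A production version would use the firm's actual lexicon plus a classifier, but the shape is the same: the routing decision happens before the send, and its outcome belongs in the audit trail.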

Frequently Asked Questions

Do FINRA rules apply to AI-generated communications?
Yes. FINRA rules governing communications with the public (Rule 2210) and supervision (Rule 3110) apply based on the nature of the communication and the firm's regulatory obligations, regardless of whether a human or AI agent generated it. Firms remain responsible for supervising AI-generated communications.
Can an AI agent make a securities recommendation under Reg BI?
An AI agent that produces output construable as a personalized recommendation to a retail customer may trigger Reg BI obligations, regardless of intent. The regulatory analysis focuses on the communication's content and context, not who or what produced it.
Is eCommunications surveillance sufficient for supervising AI agent communications?
Not on its own. eCommunications surveillance operates after communications are sent, creating review queues that are impractical at AI agent volumes. Financial services firms should implement pre-generation governance in addition to post-send surveillance.
What does an adequate Rule 3110 supervision framework look like for AI agents?
Pre-generation policy enforcement to prevent violations, an audit trail of governance decisions to demonstrate supervisory design, and escalation protocols for cases requiring principal review. The framework should be documented in a written supervisory procedures manual.

See InPolicy in action

Pre-send enforcement and agentic AI governance — built for General Counsel and CCOs.
