Just drafted an AI policy that bans your ChatGPT side hustle. Shadow AI? Not on my watch.
Why so blunt? Because in financial services, “accidentally pasted client data into ChatGPT” isn't just a whoopsie. It's a compliance nightmare. A GDPR fine. A front-page headline. An FCA notification. Possibly a Senior Managers Regime accountability trigger. The chain of consequences is terrifying once you trace it.
Why AI policy matters more in regulated industries
In an unregulated context, an employee pasting customer data into ChatGPT is bad practice — but recoverable. In a regulated industry it's a reportable event with statutory deadlines, mandatory disclosures, and potentially personal liability for senior managers.
The maths gets ugly fast. A single GDPR Article 33 breach notification costs ~€50K in legal and forensic fees even if the regulator takes no action. An actual fine for personal data unlawfully processed through an external LLM can run into millions. Then there's the supervisory follow-up: enhanced monitoring, a mandatory external audit, restricted product launches for the next eighteen months.
Against that backdrop, “we have an AI policy” isn't a defence. The policy has to actually prevent the behaviour — and you have to be able to prove it does.
What an actually-enforced AI policy contains
The four building blocks every regulated-industry AI policy needs:
No pasting data into random AI tools
Default deny. Every external AI tool is forbidden unless explicitly approved through a vendor risk process that covers data residency, training-data policy, the sub-processor list, and incident-notification SLAs. The approved list is short on day one and grows through a deliberate process.
Critically: the rule has to be enforceable, not just stated. A policy line saying “don't paste client data into AI tools” without a technical control behind it is a wish, not a policy.
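To make that concrete, here's a minimal sketch of the default-deny check an egress proxy or browser agent might run. The APPROVED_TOOLS list, the hostname match and the blocked example are illustrative assumptions, not any specific product's API:

```python
# Minimal default-deny sketch for outbound AI-tool requests.
# APPROVED_TOOLS and the request shape are illustrative assumptions.
from urllib.parse import urlparse

# Day-one allowlist: deliberately short, grown only via the vendor risk process.
APPROVED_TOOLS = {
    "copilot.contoso-tenant.example",  # hypothetical tenant-scoped deployment
}

def is_request_allowed(url: str) -> bool:
    """Default deny: any host not on the approved list is blocked."""
    host = urlparse(url).hostname or ""
    return host in APPROVED_TOOLS

# The enforcement point blocks on False instead of merely logging it;
# that is the difference between a control and a wish.
assert not is_request_allowed("https://chat.openai.com/some-endpoint")
```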
Monitoring for rogue usage
You can't enforce what you can't see. The policy needs a paired monitoring layer that captures actual AI usage at the device level — what tools are accessed, what data crosses into them, by whom. Browser extensions and desktop apps with accessibility-API access are the only methods that see the prompt content itself; network-level inspection misses traffic that goes via mobile data or personal accounts.
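As a sketch of what that device-level capture can check before a prompt leaves the machine, here's an illustrative pattern scan. The patterns (and the CLI- client-ID format) are made-up examples; a real deployment would plug in the organisation's own classifiers:

```python
# Illustrative prompt-content scan a browser extension or desktop agent
# might run before a prompt leaves the device. Patterns are examples only.
import re

SENSITIVE_PATTERNS = {
    "uk_sort_code": re.compile(r"\b\d{2}-\d{2}-\d{2}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "client_id": re.compile(r"\bCLI-\d{6}\b"),  # hypothetical internal format
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

hits = scan_prompt("Summarise account CLI-204981, sort code 20-45-11")
print(hits)  # ['uk_sort_code', 'client_id'] -- a hit should block, not just log
```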
Automated incident response when GenAI gets spicy
When the monitoring fires — sensitive data detected in a prompt, an unapproved tool used with customer records, an MCP agent invoked with elevated privileges — the response can't depend on a human noticing. Automate the routing: high-severity events open a Jira ticket, page the on-call, and (where possible) revoke the tool's access pending review. The first 30 minutes of incident response set the tone for the regulator conversation that comes later.
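Here's a sketch of that routing logic, with stub integrations standing in for whatever ticketing, paging and IAM systems you actually run. The function names and event shape are assumptions, not a vendor API:

```python
from dataclasses import dataclass

@dataclass
class GenAIEvent:
    severity: str  # "low" or "high"
    tool: str      # e.g. "chatgpt"
    user: str
    detail: str

# Stub integrations: placeholders for your real ticketing/paging/IAM calls.
def open_ticket(event: GenAIEvent) -> None:
    print(f"[ticket] {event.severity}: {event.user} -> {event.tool}: {event.detail}")

def page_oncall(event: GenAIEvent) -> None:
    print(f"[page] on-call notified: {event.detail}")

def revoke_access(user: str, tool: str) -> None:
    print(f"[revoke] {user} blocked from {tool} pending review")

def handle_event(event: GenAIEvent) -> None:
    """Route by severity so containment never waits on someone noticing."""
    open_ticket(event)
    if event.severity == "high":
        page_oncall(event)
        revoke_access(event.user, event.tool)

handle_event(GenAIEvent("high", "chatgpt", "gary@example.com",
                        "customer records detected in prompt"))
```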
Labels, encryption and prompt-content controls
Data classification is the bedrock. Every customer record, contract, financial figure and internal document carries a sensitivity label that travels with it. The AI tool either honours the label (some do) or gets blocked at the prompt boundary (the realistic option for SaaS LLMs).
For Microsoft-stack environments, this means Purview sensitivity labels + DSPM for AI + a tool that does the prompt-content scanning Purview can't do natively in non-Edge browsers. For everyone else, it means a dedicated prompt-shield layer.
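Whichever stack you're on, the decision at the prompt boundary itself is simple. A minimal sketch, assuming documents arrive with a sensitivity label already attached; the label names and the tool_honours_labels flag are illustrative:

```python
# Prompt-boundary check driven by sensitivity labels. Label names are
# illustrative; a real deployment would read them from the classifier.
BLOCKED_LABELS = {"confidential", "client-restricted"}

def may_send_to_llm(document_label: str, tool_honours_labels: bool) -> bool:
    """Allow the prompt if the tool enforces labels itself, or if the
    document's label is below the blocking threshold."""
    if tool_honours_labels:
        return True  # the tool applies the label's protections downstream
    return document_label.lower() not in BLOCKED_LABELS

# A SaaS LLM that ignores labels gets the hard block at the boundary:
assert may_send_to_llm("Confidential", tool_honours_labels=False) is False
```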
The “would this stop Gary?” test
Gary works in your sales team. He's been with the company for three years. He's motivated, smart, and time-poor. He just discovered that pasting his client's last quarterly report into Claude gives him a beautifully formatted summary in 8 seconds. He's going to do this every Monday morning unless something physically prevents him.
Apply the Gary test to every line of your AI policy:
- Does this rule make Gary's next paste fail loudly, or does it just make him technically in violation?
- Will Gary know the rule exists at the moment he's about to break it?
- If Gary breaks the rule anyway, will the security team know about it within an hour?
- If the breach is significant, will the response routing happen automatically?
If the answer to all four is yes, the policy is real. If the answer to any is no, that line of policy is decoration.
Common policy mistakes
Five patterns we see repeatedly:
- Policy that lists tools instead of behaviours. “ChatGPT is forbidden” ages badly — the list is out of date the day after it's written. “External AI tools require approval before client data is shared” ages well.
- Policy that lives only in a Confluence page. Nobody reads it. The first time most employees see it is during the post-incident interview.
- Policy with no monitoring layer. If you can't detect violations, you can't enforce them. Every undetected violation erodes the policy's authority.
- Policy that bans without offering an alternative. If the approved tool is slow or the approved process takes weeks, employees will route around it. Always pair a ban with an approved equivalent.
- Policy without senior-management buy-in. If the head of sales says “just use ChatGPT, who's going to know,” the policy is dead. Senior accountability has to be visible.
What companies actually do today
When I ask security leaders what their company's shadow AI strategy actually is, the answers cluster into five honest categories:
- “Everyone's doing it. Even Legal.”
- “Blocked at the firewall. Still used on phones and personal Wi-Fi.”
- “We pretend not to know.”
- “We drafted a policy. I cried a little.”
- “Prompt Shields + DLP + blessings from Compliance.”
Only the last category survives a regulator visit. Of the four others, the most dangerous is the third — “we pretend not to know” — because once a regulator establishes that the company chose ignorance, the penalty multiplies.
Stop bringing unlicensed LLMs to a regulated gunfight
You want to use AI at work? Great. Just don't bring unlicensed LLMs to a regulated gunfight.
Two routes to a real, enforceable AI policy:
- Have the policy, need the controls? Atlas AI Insight Platform gives you the discovery, monitoring and framework-mapped reporting that turn policy lines into enforced controls. 4-week pilot.
- Need the policy itself? Our 8-week AI Governance & Risk Assessment ships an AI policy suite, an operating model and a board-ready evidence pack — written for your regulatory environment.
