Last week I spoke at a workshop in Oslo and ran a simple exercise. I asked the room — about a hundred data and AI leaders — to write down their top three AI risk concerns. Then I asked their security teams the same question separately. The two lists barely overlapped.
That gap between perceived AI risk and actual AI risk is the single biggest reason AI governance programmes stall. Boards are worried about the wrong things. Security teams are firefighting the right things. And the result is policy that looks comprehensive on paper but misses what's actually happening in the business.
Why perceived and actual AI risk diverge
Perceived risk is shaped by what gets attention — vendor briefings, conference keynotes, the news cycle. Actual risk is shaped by what employees do every day with the tools they already have. Those two information streams almost never meet.
Most boards form their AI risk view from three sources: the vendor pitch deck (which talks about hallucinations because that's their differentiator), the news cycle (which talks about lawsuits and copyright), and the analyst report (which talks about regulation). Real exposure rarely makes it into any of those streams — because nobody has the data.
What companies are worried about
The recurring three:
- Hallucinations. Models making things up. Real risk in some contexts (legal advice, medical diagnosis), but in most enterprise workflows the user catches the error before it ships.
- Scary headlines. The Air Canada chatbot lawsuit. The lawyer who cited fictional cases. High-salience, low-frequency events.
- Rogue outputs. The fear that an AI will say something offensive, biased or wildly off-brand. Mitigated by basic content filtering on most production systems.
Notice what's common to all three: they're problems with model output. Visible. Demonstrable. The kind of thing you can screenshot and drop into a board deck.
What's actually happening on the ground
Now compare that to what security teams are actually fighting. Six recurring patterns we see in every engagement:
Shadow AI usage
Employees using ChatGPT, Claude, Copilot, Gemini, Perplexity, Cursor and a dozen other tools — on personal accounts, on mobile data, with no SSO, no logging, no policy. The typical mid-market enterprise has 5–10× more AI tools in active use than IT knows about. None of them are on a register.
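You can get a first-order read on that gap from logs you already have. Here is a minimal sketch in Python, assuming your proxy or DNS logs can be exported as a CSV with user and domain columns; the domain list below is illustrative, not an inventory of every AI tool:

```python
# Sketch: count distinct AI tools and their users from an exported proxy log.
# Assumes a CSV export with "user" and "domain" columns; the domain-to-tool
# mapping is an illustrative sample, not a complete register.
import csv
from collections import defaultdict

AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
    "www.perplexity.ai": "Perplexity",
    "cursor.sh": "Cursor",
}

def shadow_ai_summary(log_path: str) -> dict:
    """Return {tool name: number of distinct users seen in the log}."""
    users_per_tool = defaultdict(set)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            tool = AI_DOMAINS.get(row["domain"])
            if tool:
                users_per_tool[tool].add(row["user"])
    return {tool: len(users) for tool, users in users_per_tool.items()}

if __name__ == "__main__":
    print(shadow_ai_summary("proxy_export.csv"))
```

Even a crude count like this tends to surprise IT, because the tools arrive through the browser rather than through procurement.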
Unapproved vendors
SaaS vendors quietly adding generative AI to existing products. The CRM your sales team has used for five years now ships a Copilot. The recruiting platform your HR team uses now scores candidates with an LLM. None of these went through a fresh procurement review. The data already flowed.
Sensitive data in prompts
The single biggest data-leakage vector in 2026 isn't a hacker — it's an employee pasting a customer list, a contract, or unreleased financial figures into a chat box. One in five AI prompts contains something sensitive, and most enterprises have no visibility into the prompt content because their DLP tools weren't designed to inspect generative-AI traffic.
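Closing that gap means inspecting prompt content before it leaves the network. Here is a minimal sketch of the idea, assuming you sit somewhere in the request path (a browser extension, proxy or gateway); the patterns are illustrative examples, nowhere near a production DLP ruleset:

```python
# Sketch: flag obviously sensitive content in an outbound prompt.
# The patterns are illustrative, not a production DLP policy.
import re

SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "keyword": re.compile(r"\b(confidential|unreleased|customer list)\b", re.I),
}

def flag_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

hits = flag_prompt("Here is our unreleased Q3 customer list: anna@example.com, ...")
if hits:
    print("Block, redact or log for review:", hits)
```

Pattern matching only catches the obvious cases, but it is enough to start producing the visibility numbers the board will ask for.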
No ownership of AI tools
Even where the AI use case is known, nobody owns it. Marketing's customer-segmentation AI? “The agency runs that.” The candidate-screening model? “HR uses the vendor's default settings.” When something goes wrong — biased output, wrong recommendation, regulatory question — there's no name attached. That's an audit failure waiting to happen.
No audit trail
Most enterprises can't answer simple questions: what AI tools were used last quarter? What data went through them? Who approved the EU AI Act risk classification? The answer is usually a hand-edited spreadsheet that's already three months out of date.
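Answering those questions doesn't require exotic tooling, just an append-only record of AI usage that somebody actually maintains. A minimal sketch of what one event might capture; the field names here are hypothetical and should follow your own register:

```python
# Sketch: an append-only AI usage event, enough to answer
# "what was used, by whom, with what data, and who signed off".
# Field names and values are hypothetical placeholders.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUsageEvent:
    timestamp: str
    tool: str                # e.g. "CRM Copilot"
    user: str
    use_case_id: str         # links back to the AI register
    data_categories: list    # e.g. ["customer_pii", "financials"]
    risk_class: str          # e.g. the EU AI Act classification
    approved_by: str         # a named owner, not "the team"

event = AIUsageEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    tool="CRM Copilot",
    user="j.hansen",
    use_case_id="UC-017",
    data_categories=["customer_pii"],
    risk_class="limited",
    approved_by="cto@example.com",
)

with open("ai_usage_log.jsonl", "a") as f:
    f.write(json.dumps(asdict(event)) + "\n")
```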
Weak third-party controls
The supply chain is the new attack surface. Your vendor uses an LLM trained by another vendor, hosted by a third, and the data flows through all three. Most procurement processes ask about SOC 2 and ISO 27001 — they don't ask about AI subprocessors, training data provenance, or model-update policies.
Five questions every board should be asking
If your board is asking about hallucinations and your security team is answering questions about ChatGPT leaks, the conversation is broken. These five questions reset it:
- How many AI tools are in active use across the business right now? If the answer is a round number ending in zero, it's a guess.
- Which of those use cases would be classified as “High Risk” under EU AI Act Annex III? High-risk obligations apply from 2 August 2026, and Norway is in scope via the EEA.
- Who is the named owner of each AI use case? If the answer involves the word “the team,” you don't have ownership — you have plausible deniability.
- What sensitive data has gone through external AI tools in the last 90 days? Most companies can't answer this. The few who can usually wish they couldn't.
- If an auditor asked us tomorrow for evidence of AI risk controls, what would we hand them? A policy PDF doesn't count. A controls spreadsheet from January doesn't count. A live register counts.
Done properly, governance is the unlock
The reflexive response to AI risk is to slow things down — block tools, restrict data, freeze experimentation. That's the wrong instinct. AI governance done properly does the opposite: it lets the business move faster because the risks are visible, owned, and bounded.
Think of it like financial controls. Companies don't restrict employees from spending money — they put spend on a card with a limit, a category, an owner, and an audit trail. The result is more spend, not less, because the board can sign off on growth without losing sleep about fraud.
AI governance is the same primitive applied to a new resource. Show every AI use case. Assign every owner. Score every risk against a framework regulators recognise. Then let the business build.
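Concretely, that primitive is one register entry per use case, with a named owner and an explicit position against each framework. A minimal sketch, with hypothetical fields and values:

```python
# Sketch: one entry in a live AI register. Field names and framework
# statuses are hypothetical; the point is that every use case carries
# a named owner and an explicit risk position per framework.
register_entry = {
    "use_case_id": "UC-017",
    "name": "Candidate screening assistant",
    "owner": "head.of.talent@example.com",   # a person, not "the team"
    "vendor": "RecruitingPlatform Inc.",
    "data_categories": ["candidate_pii"],
    "frameworks": {
        "eu_ai_act": "high risk (Annex III, employment)",
        "nist_ai_rmf": "map and measure complete, manage in progress",
        "iso_42001": "risk assessment done, controls being implemented",
        "owasp_llm_top10": ["prompt injection", "sensitive information disclosure"],
    },
    "last_reviewed": "2026-01-15",
}
```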
Where to start this week
If you're reading this and thinking “we don't have any of this” — you're not alone. Most enterprises don't. The good news is that catching up takes weeks, not quarters.
- Run a 4-week pilot that gives you a live AI register, ownership per use case, and risk scoring against EU AI Act, NIST AI RMF, ISO 42001 and OWASP LLM Top 10. That's the standard motion for the Atlas AI Insight Platform.
- If you don't have a programme to run the pilot into, start with our 8-week AI Governance & Risk Assessment — policy, operating model, framework mapping, board-ready output.
Either way: stop letting the conversation about AI risk be shaped by the things that look scary on stage. Ground it in what your employees are actually doing today.
