The Security Problem Has Moved
When enterprises began deploying LLMs internally, the first wave of security concern focused on outputs. Could the model be jailbroken? Could it generate harmful content? Could it be manipulated into revealing its system prompt?
Those concerns were legitimate. The response was a generation of guardrail tools that inspect prompts and filter outputs. They are now widely deployed. And they are largely solving last year's problem.
The risk in 2026 is not what your LLM says. It is what your LLM can see.
What Changed When AI Became Agentic
The shift happened when organisations moved from LLMs as answering tools to LLMs as acting tools. A conversational chatbot that retrieves answers from a fixed knowledge base has a bounded exposure surface. An agentic workflow with access to CRM data, HR systems, email, financial records, and a codebase has a fundamentally different risk profile.
In agentic deployments, the LLM does not just generate text. It makes tool calls. It retrieves documents. It queries databases. It sends requests to APIs. Each of those actions carries potential for data exposure that has nothing to do with the quality of the model's outputs and everything to do with what the agent was permitted to access in the first place.
A RAG pipeline with access to HR documents, financial records, and customer data can return sensitive compensation data to an employee who asked a general question about company structure. Not because the model misbehaved. Because it had access to information it should never have been able to retrieve.
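The failure mode is easy to see in a toy retriever. The sketch below is entirely hypothetical (the `Document` class, corpus, and keyword matching stand in for a real vector store): nothing in the retrieval path asks who is querying or what they are permitted to see, so a broad query pulls compensation data into the context window.

```python
# Hypothetical sketch of a naive RAG retriever with no access control.
# Document, CORPUS, and retrieve are illustrative, not any real library's API.
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    tags: set = field(default_factory=set)  # e.g. {"hr", "compensation"}

CORPUS = [
    Document("Company structure: Engineering reports to the CTO.", {"org"}),
    Document("Salary bands: L5 engineers earn 140-180k.", {"hr", "compensation"}),
]

def retrieve(query: str) -> list[Document]:
    # Naive substring match standing in for vector similarity.
    # Nothing here checks the requester's identity or the document's
    # sensitivity, so restricted content flows straight to the model.
    terms = query.lower().split()
    return [d for d in CORPUS if any(t in d.text.lower() for t in terms)]

hits = retrieve("company structure and engineers")
# Both documents match: compensation data reaches the context window.
```

The model never "misbehaves" in this flow; the exposure happens entirely at retrieval time.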
Where Current Tools Fall Short
The three most common categories of LLM security tooling each address a real risk, but none addresses the access governance gap.
Prompt injection defences block malicious inputs from manipulating model behaviour. They are necessary but operate before retrieval. They do not control what the model is allowed to retrieve once a legitimate query is processed.
Output scanning tools inspect model responses for sensitive content, PII, or policy violations. They catch what comes out, but by the time a response is generated, the model has already retrieved and processed the data. Scanning the output does not undo the retrieval.
Behavioural monitoring tools track what models are doing in aggregate. They are useful for forensic analysis but are not enforcement mechanisms. They tell you what happened, not what was prevented.
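To make the observation-versus-enforcement distinction concrete, here is a minimal sketch of session-level deviation monitoring. All names (`SessionMonitor`, `record_access`, the baseline table) are illustrative assumptions, not a real product's API; note that the monitor only records a risk event and never blocks the access.

```python
# Hypothetical sketch: flag agent sessions that touch data categories
# outside their historical baseline. Monitoring observes; it does not block.

class SessionMonitor:
    def __init__(self, baseline: dict[str, set]):
        self.baseline = baseline  # agent_id -> categories normally accessed
        self.events = []          # accumulated risk events for later review

    def record_access(self, agent_id: str, category: str) -> None:
        # Deviation from baseline raises a risk event, but the retrieval
        # itself has already happened by the time this runs.
        if category not in self.baseline.get(agent_id, set()):
            self.events.append((agent_id, category))

monitor = SessionMonitor({"support-bot": {"tickets", "kb-articles"}})
monitor.record_access("support-bot", "tickets")  # within baseline: no event
monitor.record_access("support-bot", "payroll")  # deviation: risk event raised
```

The `payroll` access is logged for forensics, but the data has already been retrieved; that is exactly the gap the next section describes.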
None of these tools sits between the agent and the data source. None of them classifies data by sensitivity before it enters the context window. None of them assigns identities to AI agents and enforces least-privilege access policies at the retrieval layer.
The Layer That Is Missing
The access governance gap in enterprise LLM deployments is the same gap that existed in enterprise software a decade ago before identity and access management matured. The principle is the same: every actor that can retrieve data must have an identity, and access must be determined by that identity and the sensitivity classification of the data being requested.
Applying this principle to AI agents requires several capabilities that are not built into any LLM platform. Agents need assigned identities and roles. Data sources need sensitivity classification. Access policies need to be enforced at the retrieval layer, before data enters the context window. Sessions need to be monitored for behavioural deviation. And risk events need to be raised when an agent session begins retrieving data outside its normal pattern.
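The capabilities above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not an implementation: the agent roles, sensitivity labels, and `POLICY` table are hypothetical. The key property is where the check runs: documents outside an agent's permitted sensitivity levels are filtered out before anything reaches the context window.

```python
# Hypothetical sketch of least-privilege enforcement at the retrieval layer.
# Roles, sensitivity labels, and the policy table are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    sensitivity: str  # "public" | "internal" | "restricted"

# Which sensitivity levels each agent identity may retrieve.
POLICY = {
    "org-assistant": {"public", "internal"},
    "finance-agent": {"public", "internal", "restricted"},
}

def authorized_retrieve(agent_role: str, candidates: list[Document]) -> list[Document]:
    # Enforcement happens BEFORE the context window: documents outside the
    # agent's permitted levels are never returned, so they cannot leak
    # regardless of how the model behaves downstream.
    allowed = POLICY.get(agent_role, {"public"})
    return [d for d in candidates if d.sensitivity in allowed]

docs = [
    Document("Quarterly all-hands schedule", "public"),
    Document("Salary bands by level", "restricted"),
]
visible = authorized_retrieve("org-assistant", docs)
# The restricted document is removed before retrieval completes.
```

Unlike output scanning, this design never has to "undo" a retrieval: the salary document is excluded for `org-assistant` but available to `finance-agent`, purely as a function of identity and classification.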
Without this layer, every agentic AI deployment is running with effectively unlimited access to whatever its tooling can reach. That is not a model safety problem. It is an access governance problem. And it requires an access governance solution.
March 16, 2026
