Earlier this year, we wrote about shadow AI inside your organisation and the risks of employees using ungoverned chatbots at work. That problem has not gone away. But a new one has arrived that makes it look manageable by comparison.
Agentic AI is here. And it changes the shadow AI threat model in ways that most security teams are not prepared for.
What are AI agents?
An AI agent is a system that does not just answer questions. It takes actions. It can open pull requests on your code repository. It can query your production database. It can trigger workflows, send emails, schedule meetings, and make decisions across multiple systems without a human approving each step.
Where a chatbot waits for a prompt and returns text, an agent receives a goal and executes a multi-step plan to achieve it. It decides which tools to use, in what order, and whether the result meets the original objective. If it does not, the agent tries again.
This is a fundamental shift. The AI is no longer a tool that humans operate. It is an autonomous actor that operates on behalf of humans, often with the same system credentials and data access as the person who deployed it.
How agents differ from chatbots
The distinction matters because it changes every assumption your security architecture is built on.
A chatbot is a conversation. A human types a question, the AI returns an answer, and the human decides what to do with it. The blast radius is limited to what the AI says. If it hallucinates or accesses something it should not, a human is still in the loop to catch the error before any action is taken.
An agent is a process. It receives a high-level instruction and autonomously determines how to fulfil it. It chains together API calls, database queries, file operations, and external service requests. By the time a human reviews the output, the actions have already been executed.
Consider the difference in practical terms:
- Chatbot scenario: An employee asks a chatbot to summarise last quarter's revenue data. The chatbot retrieves and displays the data. The employee reads it and decides what to do next.
- Agent scenario: An employee tells an agent to prepare the quarterly board report. The agent queries the finance database, pulls CRM pipeline data, accesses HR headcount figures, drafts the report, and emails it to the board distribution list. Seven systems accessed, one email sent, zero human checkpoints.
Key insight: With chatbots, the security question was what data can the AI see. With agents, the question is what actions can the AI take. That is a fundamentally harder problem.
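The agent scenario above can be reduced to a loop that executes each planned step immediately. A minimal sketch, assuming a hypothetical `Step` structure and tool names (this is illustrative, not any specific agent framework's API):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str   # which tool to invoke
    args: dict  # arguments for that tool

def run_agent(plan: list[Step], tools: dict[str, Callable]) -> list:
    """Execute every planned step immediately. There is no approval
    gate: by the time a human reviews the results, every action has
    already happened."""
    return [tools[step.tool](**step.args) for step in plan]

# Hypothetical tools for the board-report workflow described above.
audit_log = []

def query(system: str) -> str:
    audit_log.append(f"read:{system}")
    return f"data from {system}"

def send_email(to: str, body: str) -> str:
    audit_log.append(f"send:{to}")
    return "sent"

plan = [
    Step("query", {"system": "finance_db"}),
    Step("query", {"system": "crm"}),
    Step("query", {"system": "hr"}),
    Step("send_email", {"to": "board@example.com", "body": "Q3 report"}),
]

results = run_agent(plan, {"query": query, "send_email": send_email})
```

Every entry in `audit_log` was written, and the email dispatched, before any human saw a single result.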
Why agents are a bigger shadow AI problem
Shadow AI with chatbots was primarily a data leakage risk. Employees pasted sensitive information into external tools. Serious, but contained. The data left the organisation, and that was the extent of the damage.
Shadow AI with agents is an execution risk. An ungoverned agent does not just read your data. It acts on it. It modifies records, triggers processes, and interacts with external systems. The blast radius is no longer limited to what the AI can see. It extends to everything the AI can do.
Nearly half of the cybersecurity profession is pointing at the same threat. And they are right to do so. When an autonomous agent inherits a user's credentials and system access, it inherits their entire attack surface. Every integration, every API key, every database connection becomes a potential path for unintended or malicious action.
The permission inheritance problem
Most agents today operate with the full permissions of the user who deployed them. A senior developer's agent has the same access as the senior developer. A finance director's agent can reach every system the finance director can reach.
This is not how your organisation intended those permissions to be used. Access controls were designed for humans making deliberate, individual decisions. They were not designed for autonomous systems executing dozens of operations per minute with no human in the loop.
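In code, the inheritance problem is that deployment performs no narrowing at all. A sketch with illustrative permission strings (the pattern, not the names, is the point):

```python
# Illustrative permission strings for one user.
user_permissions = {
    "finance_db:read", "finance_db:write",
    "crm:read", "hr:read", "email:send",
}

def deploy_agent(user_perms: set[str]) -> set[str]:
    """The common pattern today: the agent simply receives the
    deploying user's credentials, so its effective permission set
    is the user's entire permission set."""
    return set(user_perms)  # no narrowing of any kind

agent_permissions = deploy_agent(user_permissions)
```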
The CISO blind spot
The survey data here tells a story that should concern every security leader.
Ninety-seven per cent of respondents expect an incident. That is near-universal certainty. Yet only six per cent report dedicating resources to the problem. There is a 91-point gap between the perceived likelihood of the threat and the resources dedicated to addressing it. That is not a miscalculation. It is a structural blind spot.
Why does this gap exist? Three reasons stand out.
- Agents are adopted bottom-up. Developers and power users deploy agents through existing tools and platforms. They do not go through procurement. They do not file security assessments. The agents simply appear inside the environment, inheriting whatever access their creator already has.
- Existing tools cannot see them. Traditional security monitoring watches for human behaviour patterns. Agents behave differently. They make rapid sequential API calls, access data across multiple systems in seconds, and operate at hours when humans typically do not. Most SIEM platforms are not configured to flag this activity.
- The risk model is unfamiliar. Security teams understand data exfiltration. They understand phishing. The concept of an autonomous internal actor that operates legitimately within your systems but may take unintended actions does not fit neatly into existing threat frameworks.
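The behavioural differences in the second point can be turned into a simple detection heuristic. A sketch using a sliding window over API-call timestamps; the thresholds are arbitrary illustrations, not tuning guidance for any particular SIEM:

```python
from datetime import datetime, timedelta

def agent_like_burst(timestamps: list[datetime],
                     max_calls: int = 20,
                     window: timedelta = timedelta(seconds=10)) -> bool:
    """Flag a credential whose call rate exceeds what a human
    plausibly produces: more than max_calls calls inside any
    sliding window of the given length."""
    ts = sorted(timestamps)
    start = 0
    for end in range(len(ts)):
        # Shrink the window until it spans at most `window` seconds.
        while ts[end] - ts[start] > window:
            start += 1
        if end - start + 1 > max_calls:
            return True
    return False

# 60 calls in six seconds, at 03:00: far outside human behaviour.
base = datetime(2025, 1, 1, 3, 0, 0)
burst = [base + timedelta(milliseconds=100 * i) for i in range(60)]
```

A human browsing a dashboard produces a handful of calls per minute; the burst above would trip the flag immediately.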
The most dangerous threats are the ones your security model was not designed to detect. Agentic AI is exactly that kind of threat.
What enterprise security teams need
Governing agentic AI requires a different approach from governing chatbots. Blocking external tools is not sufficient when the agent is operating inside your own infrastructure with legitimate credentials.
Security teams need three capabilities that most current tooling does not provide:
1. Pre-action enforcement
The security check must happen before the agent acts, not after. Post-action monitoring tells you what went wrong. Pre-action enforcement prevents it from happening. This is especially critical for agents because their actions may be irreversible. A database update, a sent email, or a deployed code change cannot be undone by flagging it in a security log ten minutes later.
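The ordering requirement can be made concrete: the policy decision must sit on the code path before the side effect, so a denied action never runs at all. A minimal sketch with hypothetical policy and tool names:

```python
class ActionBlocked(Exception):
    """Raised when policy denies an action before it runs."""

def enforce_then_act(action: str, args: dict, policy, execute):
    """Pre-action enforcement: the policy check precedes the side
    effect. Post-action monitoring, by contrast, can only report
    what has already happened."""
    if not policy(action, args):
        raise ActionBlocked(f"{action!r} denied before execution")
    return execute(action, args)  # reached only if the policy allows it

# Illustrative policy: this agent may read, but never send email.
def read_only_policy(action: str, args: dict) -> bool:
    return not action.startswith("email:")

executed = []
def execute(action: str, args: dict) -> str:
    executed.append(action)  # stands in for the irreversible side effect
    return "done"

enforce_then_act("db:read", {}, read_only_policy, execute)
try:
    enforce_then_act("email:send", {"to": "board@example.com"},
                     read_only_policy, execute)
except ActionBlocked:
    pass  # the email was never sent: nothing to undo
```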
2. Identity-aware access scoping
An agent should never inherit the full permission set of the user who deployed it. Access must be scoped to the specific task the agent is performing, verified against the data it is trying to reach, and constrained regardless of what the deploying user could access manually.
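One way to express task scoping is as an intersection: the agent receives only permissions that both the user holds and the declared task requires. A least-privilege sketch with illustrative permission strings:

```python
def scope_for_task(user_perms: set[str], task_needs: set[str]) -> set[str]:
    """Issue the agent only the intersection of what the deploying
    user holds and what the declared task requires. Anything the
    user could reach manually but the task does not need is withheld."""
    return user_perms & task_needs

# The user holds five permissions; the task declares it needs two.
user_perms = {"finance_db:read", "finance_db:write",
              "crm:read", "hr:read", "email:send"}
task_needs = {"finance_db:read", "crm:read"}

agent_perms = scope_for_task(user_perms, task_needs)
```

Contrast with the inheritance pattern earlier: here the agent cannot send email or write to the finance database, no matter what its deploying user could do.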
3. Requester-agnostic security
Your data governance model must work identically whether the requester is a human typing a prompt or an autonomous agent executing a workflow. If your security depends on a human being in the loop, it breaks the moment that human is replaced by an agent.
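Requester-agnostic enforcement means the check takes no notice of what kind of entity is asking. A sketch (the entitlement table and identity names are illustrative):

```python
def entitled(identity: str, resource: str,
             entitlements: dict[str, set[str]]) -> bool:
    """The check keys only on identity and resource. Whether the
    requester is a human at a keyboard or an agent mid-workflow is
    deliberately not an input, so there is no human-in-the-loop
    assumption for an agent to break."""
    return resource in entitlements.get(identity, set())

# One entitlement table shared by humans and the agents acting for them.
entitlements = {"alice": {"finance_db", "crm"}}

human_request = entitled("alice", "finance_db", entitlements)  # human path
agent_request = entitled("alice", "finance_db", entitlements)  # agent path
```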
How SCRS handles autonomous agents
This is where Other Me's patent-pending SCRS (Secure Context Retrieval System) architecture becomes directly relevant. SCRS was designed with a principle that turns out to be critical in the agentic era: security enforcement must happen before data retrieval, and it must not depend on who or what is making the request.
Here is how the Dual-Gate architecture addresses the agent threat:
- Gate 1 — Scope-Constrained Search. Before any data retrieval occurs, SCRS evaluates whether the requester is entitled to access the data being requested. If an agent attempts to query data outside its defined scope, the search is blocked entirely. The data is never retrieved, never processed, and never exposed. This works regardless of whether the requester is a human user or an autonomous agent operating with that user's credentials.
- Gate 2 — AEAD Verification. Even after data passes Gate 1, every piece of information in the response is verified through authenticated encryption with associated data. This ensures that the response has not been tampered with and that every element is appropriate for the requesting context. For agents, this means the data they receive is cryptographically verified before they can act on it.
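The two gates can be sketched as follows. All names here are ours, and a plain HMAC over the payload and its context stands in for the full AEAD construction: it shows only the tamper-detection idea, not the SCRS implementation itself.

```python
import hashlib
import hmac

KEY = b"demo-key"  # placeholder key for the sketch only

def tag(payload: bytes, context: bytes) -> bytes:
    """Bind a payload to its requesting context (AEAD stand-in)."""
    return hmac.new(KEY, payload + b"|" + context, hashlib.sha256).digest()

def gate1_search(scope: set[str], resource: str, store: dict):
    """Gate 1: entitlement is evaluated BEFORE retrieval. Out-of-scope
    data is never read from the store, processed, or exposed."""
    if resource not in scope:
        return None  # blocked entirely: nothing was retrieved
    return store[resource]

def gate2_verify(payload: bytes, context: bytes, expected: bytes) -> bytes:
    """Gate 2: the response is authenticated against its requesting
    context before the agent is allowed to act on it."""
    if not hmac.compare_digest(tag(payload, context), expected):
        raise ValueError("response failed verification")
    return payload

# A store whose records carry tags bound to a context.
ctx = b"quarterly-report"
record = b"Q3 revenue: 1.2m"
store = {"finance_db": (record, tag(record, ctx))}
agent_scope = {"finance_db"}

hit = gate1_search(agent_scope, "finance_db", store)  # in scope: retrieved
payload = gate2_verify(hit[0], ctx, hit[1])           # verified before use
blocked = gate1_search(agent_scope, "hr", store)      # out of scope: None
```

Note where the out-of-scope request fails: before the store is ever touched, which is the property the next paragraph depends on.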
Key insight: Pre-retrieval enforcement is the only architecture that scales to agentic AI. If your security model retrieves data first and filters later, an autonomous agent has already accessed everything before the filter engages. SCRS ensures the data never leaves the gate.
The critical advantage of this approach is that it is requester-agnostic by design. SCRS does not care whether the request comes from a human sitting at a keyboard or from an AI agent executing step four of a twelve-step workflow. The entitlement check happens at the data layer, before retrieval, every single time.
Shadow AI taught us that employees will adopt AI tools faster than security teams can evaluate them. Agentic AI raises the stakes because those tools no longer just read data. They act on it, autonomously, at speed, and at scale. The organisations that navigate this transition safely will be those that enforce security before the agent ever touches the data. The ones that do not will become the case studies that the rest of the industry learns from.
Pop Hasta Labs Ltd is registered at UK Companies House (No. 16742039). SCRS Dual-Gate architecture is the subject of UK Patent Application No. 2602911.6.