
The Access Control Problem With AI In Workflows

  • Writer: Nikita Silaech
  • Jan 20
  • 3 min read

Most organizations are deploying AI agents into production systems right now. These agents operate in HR, IT operations, customer support, and finance. They execute transactions, provision infrastructure, move data, and approve changes without waiting for human intervention. They do all of this under a single set of organizational credentials that grant them far broader permissions than any individual human would have.


This was deliberate engineering. An AI agent needs to serve multiple users across multiple roles and workflows. If you required the agent to authenticate with each user's individual credentials before executing requests, the system would become operationally complex. You would need credential handoffs, token management, and explicit user identity passing for every single action. Instead, organizations granted agents broad permissions upfront. The agent authenticates once, operates as a shared resource, and handles requests from anyone who asks (USCS Institute, 2026).
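

The difference between the two patterns is easy to see in code. Here is a minimal Python sketch, where the token names and the authenticate/execute helpers are hypothetical stand-ins for whatever identity stack an organization actually runs, not any specific product's API:

```python
def authenticate(token: str) -> str:
    """Stub: resolve a credential to an identity."""
    return {"alice-token": "alice", "svc-token": "agent"}[token]

def execute(action: str, identity: str) -> str:
    """Stub: the downstream system sees only the identity it was handed."""
    return f"{identity} executed: {action}"

# Pattern 1: per-user credentials. Each call carries the caller's own token,
# so downstream permission checks apply to that user -- at the cost of a
# credential handoff and token management for every request.
def handle_per_user(user_token: str, action: str) -> str:
    return execute(action, identity=authenticate(user_token))

# Pattern 2: a shared agent credential. The agent authenticates once at
# startup and serves every caller under its own broad identity.
AGENT_IDENTITY = authenticate("svc-token")

def handle_shared(action: str) -> str:
    return execute(action, identity=AGENT_IDENTITY)

print(handle_per_user("alice-token", "read dataset A"))  # alice executed: ...
print(handle_shared("read dataset A"))                   # agent executed: ...
```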


This works well operationally. It also creates a security problem that traditional access control systems were never designed to handle.


Access control has worked the same way for decades. Alice has permission to view dataset A. Bob has permission to view dataset B. They cannot access each other's data because security systems enforce permissions at the individual user level. If a user cannot directly access a resource, they cannot access it (The Hacker News, 2026).
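

That model is simple enough to fit in a few lines. A toy sketch, with hypothetical names:

```python
PERMISSIONS = {
    "alice": {"dataset_A"},
    "bob": {"dataset_B"},
}

def can_read(user: str, resource: str) -> bool:
    # Enforcement happens at the individual user level: no entry, no access.
    return resource in PERMISSIONS.get(user, set())

assert can_read("alice", "dataset_A")
assert not can_read("alice", "dataset_B")   # Alice has no path to Bob's data
```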


But AI agents break that assumption. The agent has permission to access both dataset A and dataset B, along with most other systems it might need. When Alice, a new hire with intentionally limited data access, asks the agent to "analyze xyz," the agent executes that request under its own identity. It retrieves sensitive data from dataset B. The logs record that the agent pulled the data. They do not record that Alice requested it or that Alice lacks authorization to see it (USCS Institute, 2026).
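

The same toy model makes the bypass concrete. In this sketch (names again hypothetical), both the permission check and the audit log see only the agent's identity; Alice appears nowhere:

```python
PERMISSIONS = {
    "alice": {"dataset_A"},
    "agent": {"dataset_A", "dataset_B"},    # broad, shared credentials
}
AUDIT_LOG: list[str] = []

def read(identity: str, resource: str) -> bool:
    allowed = resource in PERMISSIONS.get(identity, set())
    AUDIT_LOG.append(f"{identity} read {resource}: allowed={allowed}")
    return allowed

def agent_handle(requester: str, resource: str) -> bool:
    # The agent executes under its OWN identity; the requester is never
    # checked against the ACL and never reaches the log.
    return read("agent", resource)

agent_handle("alice", "dataset_B")   # True: Alice's restriction is bypassed
print(AUDIT_LOG)                     # ['agent read dataset_B: allowed=True']
```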


No policy was violated, because the agent was authorized to access the data. Alice's restricted access level was simply bypassed through an intermediary. Most security teams would not catch this: in isolation the transaction looks legitimate, because from the agent's perspective it is.


The issue compounds when you consider organizational structure and audit trails. Audit logs show which agent performed which action and when. They do not easily show why the action was performed or who initiated it. When an incident investigation occurs, analysts lose critical context. They can see that an agent pulled sensitive information, but they cannot easily reconstruct whether the request came from a legitimate user, was manipulated through prompt injection, or represented a security compromise.
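

Concretely, a typical agent audit entry might carry little more than this (field names are hypothetical):

```python
log_entry = {
    "timestamp": "2026-01-20T14:03:11Z",
    "identity": "agent",          # which agent acted, and when
    "action": "read",
    "resource": "dataset_B",      # what it touched
    # Absent: who asked, through which conversation, with what intent.
    # A legitimate request, a prompt injection, and an outright compromise
    # all produce this same record.
}
```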


Multi-agent systems make it even more complex. An accountant agent might have permissions to execute transactions. A manager agent might have permissions to override certain approval workflows. If the manager agent becomes compromised or manipulated through subtle prompt injection, it can direct the accountant agent to execute transactions that would have triggered human review if a person had made the same request (USCS Institute, 2026). Chains of trust collapse because machines do not verify the legitimacy of requests the way humans would.
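

Here is a minimal sketch of that collapsing trust chain, with the agents, threshold, and override flag all hypothetical:

```python
HUMAN_REVIEW_THRESHOLD = 10_000

def accountant_agent(amount: int, manager_override: bool = False) -> str:
    # Large transfers normally queue for human review.
    if amount > HUMAN_REVIEW_THRESHOLD and not manager_override:
        return f"transfer of {amount}: queued for human review"
    return f"transfer of {amount}: executed"

def manager_agent(requested_amount: int) -> str:
    # The manager never verifies where its instruction came from, so a
    # prompt-injected request is handled exactly like a legitimate one.
    return accountant_agent(requested_amount, manager_override=True)

print(accountant_agent(50_000))  # queued for human review
print(manager_agent(50_000))     # executed -- the review step silently vanishes
```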


Traditional access control theory assumes the person requesting an action and the person executing it are the same entity. Enforce permissions on that one entity and the restrictions hold automatically. But with AI agents, requester and executor are decoupled. A security team can enforce least privilege on every individual user and still have those restrictions bypassed through agent-mediated requests.


Organizations are attempting fixes, most commonly by monitoring agent behavior more closely. But effective monitoring requires knowing what behavior to flag. An agent retrieving customer data on one day is legitimate; the same action on another day is a breach. The difference is context, and logs do not capture context well enough to automate detection (USCS Institute, 2026).


Soon enough, organizations will discover authorization bypasses that have been occurring through their agents for months. Some will find them during routine audits. Others will discover them after security incidents. Most will find them only if they start looking actively.
