Agent Access Control: Secure AI agents must act on behalf of humans and services
Part 1 of 6 in my “Agent Access Control: Pilot → Production” series
After a wave of hype pushing enterprises and tech companies to put AI into every layer of the organization, many teams have hit reality. It’s not that agents aren’t useful; they are. The problem is that most agents, as they’re built today, aren’t practical in production. I’m writing this series to explain the core issues, where organizations typically go wrong, and what it actually takes to ship agents that are enterprise-grade and production-ready.
This series will cover these topics:
- Agent Identity (current article)
- Policy Enforcement
- Tool & Data Boundaries
- Agent Runtimes & Monitoring
- Evidence Trails
- Wrap-up
Examples are anonymized and details are intentionally blended to protect customer confidentiality.
What is "identity"?
Modern enterprises are built on identity infrastructure: SSO, directories, IAM, service principals, workload identities, API auth. These systems are deeply embedded. They’re mission-critical.
They were designed for two kinds of subjects. Humans: someone logs in, gets permissions, and performs actions. And services: predictable workloads with stable ownership and a managed lifecycle.
Agents don’t fit cleanly into either. Agents are dynamic. They plan, route, and chain actions across tools, and they can decide to do something the developer didn’t explicitly anticipate.
That single property breaks the default assumption behind most “agent identity” implementations: that you can give the agent a token and call it a day.
You can’t. Authentication is table stakes. The real question is: what is this agent allowed to do, right now, on behalf of whom, and why?
The breaking point is the tool call
Early agent stacks are optimized for connectivity: plug tools in, call them, ship a working version. Security comes later, and “identity” is treated as a single token.
That leads to three recurring gaps:
1) Static credentials and all-or-nothing scopes
One broad token that unlocks “the connector” is convenient but dangerous. It creates over-permissioned agents by default.
2) No actor/subject distinction
Many systems either impersonate the user completely (“everything looks like Alice”) or collapse everything into the runtime (“everything looks like the agent service”). Both break accountability in different ways.
3) Tool access enforced in ad-hoc code paths
Even when you add checks, they tend to be embedded in application code as conditionals. That makes policies hard to audit and hard to change, and it creates the worst class of enterprise security bug: inconsistent enforcement.
What is “agent identity” then?
Some engineers translate “agent identity” into “an identity account for the agent.” That’s understandable, and it matches how we’ve done things for applications. Even Microsoft’s own framing (Agent ID) points at the operational and security strain of treating agents like classic application identities. The point isn’t that Microsoft is wrong; it’s that the old abstraction doesn’t hold.
But agent identity is not “an agent account.”
Agent identity is the ability to answer (mechanically) these three questions for every tool or API call:
- Subject: who is this action for (a human user or a service role)?
- Actor: what is executing it (which runtime/workload)?
- Authority: what permission slice is granted for this action, in this context, and why?
In one sentence:
Agents authenticate as a workload (Actor), but every tool call must be authorized as a bounded delegation from a principal (Subject), constrained by purpose and context (Authority), evaluated and logged per action.
That principle is what keeps generic agents safe.
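The triple can be made concrete as an explicit context object that every tool call must carry, evaluated by a policy check per action. This is a minimal sketch; the names (`ActionContext`, `SUBJECT_SCOPES`, `authorize`) are illustrative, not a real API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ActionContext:
    subject: str            # who the action is for, e.g. "user:alice"
    actor: str              # which runtime executes it, e.g. "workload:report-agent"
    authority: frozenset    # the permission slice granted for this call

# Hypothetical policy table: the scopes each subject may delegate.
SUBJECT_SCOPES = {
    "user:alice": frozenset({"erp.read", "warehouse.read"}),
}

def authorize(ctx: ActionContext, required_scope: str) -> bool:
    """Evaluate one tool call: the requested scope must be inside the
    granted authority slice, and that slice must itself be delegable
    by the subject (never a superset of what the subject holds)."""
    delegable = SUBJECT_SCOPES.get(ctx.subject, frozenset())
    return required_scope in ctx.authority and ctx.authority <= delegable

ctx = ActionContext(
    subject="user:alice",
    actor="workload:report-agent",
    authority=frozenset({"erp.read"}),
)
print(authorize(ctx, "erp.read"))        # True: inside the delegated slice
print(authorize(ctx, "warehouse.read"))  # False: not granted for this action
```

Note that the check runs per call, not per session: the same agent with the same subject can be allowed one action and denied the next.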
Why does this matter? Imagine a reusable “financial report” agent used by multiple departments. It plans steps, hits the ERP, the data warehouse, and spreadsheets, generates charts, and produces a PDF.
If you grant that agent broad financial access “because it needs to work,” you’ve created a permanent exception. Now a marketing leader can request a report and, through hallucination, misrouting, or tool misuse, the agent can leak accounting data.
The fix is not “better prompts.” The fix is: the agent’s authority must be bounded by the subject it represents and constrained by purpose and context.
If the subject’s identity is delegated to the agent with the same or a narrower level of access, you can be sure the agent cannot reach records or actions the subject itself couldn’t. That is how agent identity needs to be defined.
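Bounded delegation can be sketched as a simple invariant: whatever the agent requests, it receives at most the intersection with what the subject actually holds. The entitlements below are made up for illustration:

```python
def delegated_scopes(subject_scopes: set, requested: set) -> set:
    """The delegation is the intersection of what the subject holds
    and what the agent asks for: never a superset of the subject."""
    return subject_scopes & requested

# Hypothetical entitlements for a marketing leader.
marketing_lead = {"crm.read", "warehouse.marketing.read"}

# The report agent plans broadly and asks for accounting data too.
requested = {"warehouse.marketing.read", "warehouse.accounting.read"}

print(delegated_scopes(marketing_lead, requested))
# accounting access is dropped: the subject never had it to delegate
```

However the agent mis-plans, the accounting scope can never appear in its credential, because the subject could not have delegated it.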
What “good” looks like: identity-bound execution
When it comes to execution delegation, identifying the source of the call and the subject becomes essential. There are two obvious executor identities: human and service.
A human subject is typical for an assistant chatbot or enterprise search.
When a user asks the chatbot a question, the agent passes the user’s identity to the auth service, which, after normal authentication and authorization, issues a scoped token for each data source or enterprise resource on the user’s behalf.
The same applies to services: while an agent executes under a service identity, the auth service must issue equivalently scoped access for the agent.
As presented, the idea is not new compared to OAuth/OIDC token exchange and delegation in traditional security concepts, but there are some differences:
- Per-action identity context as a hard requirement: every tool call must be explainable mechanically (Subject/Actor/Authority), not just the session.
- A clean separation between workload identity and delegated identity: the runtime (Actor) is authenticated/attested as a workload, while authorization is constrained to the delegated Subject’s authority.
- Runtime-scoped authorization: permissions are computed and enforced at action time (per call), reducing damage from LLM mis-planning/hallucinations compared to static agent-wide permissions.
- Auditability/evidence posture: the triple becomes an “evidence spine” for logging and incident response (who/what/why per action).
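The “evidence spine” can be as lightweight as one structured record per tool call carrying the triple plus the decision. A sketch with illustrative field names:

```python
import json
import datetime

def evidence_record(subject: str, actor: str, authority: set,
                    tool: str, decision: str) -> str:
    """Emit one audit record per tool call: who it was for (Subject),
    what executed it (Actor), the permission slice (Authority),
    which tool was hit, and the policy outcome."""
    return json.dumps({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "subject": subject,
        "actor": actor,
        "authority": sorted(authority),
        "tool": tool,
        "decision": decision,
    })

rec = evidence_record(
    subject="user:alice",
    actor="workload:report-agent",
    authority={"erp.read"},
    tool="erp.get_ledger",
    decision="allow",
)
print(rec)
```

With one such record per action, incident response can replay exactly which principal’s authority was exercised at every step, instead of reconstructing it from session-level logs.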
The agent requests a credential and uses it:

```python
token = sopio.get_token(
    user_id="alice_123",
    tool_source="google",
    scope="email.read",
)
# use the token with the Google SDK
```
The “Identity Mesh” concept
What we’ve discussed so far isn’t a brand-new security concept; it’s the same identity and authorization problem, stretched across many tools, many identity types, and agent-driven workflows. As soon as agents operate across SaaS, databases, and internal APIs, “just impersonate the user” stops being enough.
User impersonation breaks down for two reasons:
- It often grants broad, identity-level authority to an agent that can do any action an identity can.
- Many third-party systems don’t support fine-grained delegation (or any impersonation at all), which forces awkward workarounds and inconsistent enforcement.
The complexity increases further when a service triggers an agent on behalf of another identity (for example, a scheduled job generating department-specific reports and distributing them to the right owners). That creates multi-dimensional “on behalf of” relationships and makes per-tool consistency hard.
Identity mesh is a unifying identity-and-authorization coordination layer that helps answer (consistently), for every tool call:
- WHO is this action for? (Subject)
- WHAT is executing it? (Actor)
- WHAT is allowed, right now, and why? (Authority derived from policy + context)
It does this alongside existing AuthN/AuthZ by normalizing identities across providers and tools and ensuring policy decisions remain consistent per action, even when each downstream system has different capabilities.
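One way to picture the mesh is as a single decision point that answers the three questions per action and then picks an enforcement mechanism matching each downstream tool’s capabilities. Everything here (tool names, capability labels, `mesh_decide`) is a hypothetical sketch, not a product API:

```python
# Hypothetical capability registry: each downstream system supports
# a different identity model, yet the decision must stay consistent.
CAPABILITIES = {
    "google_drive": "oauth_delegation",    # supports per-user scoped tokens
    "legacy_erp": "service_account_only",  # no impersonation at all
}

def mesh_decide(subject: str, actor: str, tool: str, scope: str):
    """Return a (decision, mechanism) pair for one tool call.
    The answer to WHO/WHAT/WHAT-IS-ALLOWED is uniform; only the
    enforcement mechanism varies per downstream capability."""
    cap = CAPABILITIES.get(tool)
    if cap is None:
        return ("deny", "unknown_tool")
    if cap == "oauth_delegation":
        # downstream can represent the subject: exchange for a scoped token
        return ("allow", f"delegated_token:{subject}:{scope}")
    # downstream cannot represent the subject: the mesh itself enforces
    # the subject's authority before the service account is used
    return ("allow", f"mesh_enforced:{actor}:{scope}")

print(mesh_decide("user:alice", "workload:report-agent",
                  "legacy_erp", "ledger.read"))
```

The key property is that the Subject/Actor/Authority answer is computed once, centrally, even when the last-mile enforcement has to degrade to whatever the downstream system supports.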
Closing thought
If your agent can do more than any human or service is allowed to do, you’re not shipping automation. You’re shipping a new class of super-user.
This is the elephant in the room right now. Startups and small teams can sometimes ignore it early: fewer tools, fewer integrations, lower blast radius. Enterprises can’t. Without a strong identity model, agents get stuck in pilots, because nobody can approve broad, shared credentials for systems of record.
An “identity mesh” isn’t a brand-new idea. But for agentic systems, it’s becoming a required layer of infrastructure: the thing that makes actions attributable, scoped, and auditable.
Comparing notes
I’m in the weeds on this with my team at Sopio, working with a small number of enterprises moving agent workflows from pilot to production. Those constraints are shaping what we build in real time.
If this post maps to what you’re seeing, I’m happy to compare notes on a quick call.