For most of IAM’s history, “identity” meant “human.” The systems we built — directories, access reviews, approval workflows — were designed around people joining, moving through and leaving organizations. But in any modern enterprise, the majority of identities are not human. They are service accounts, API keys, workload credentials, bots and, increasingly, AI agents. Industry data suggests non-human identities outnumber human ones by orders of magnitude, and the gap is widening fast.
In my experience, this is one of the biggest blind spots in enterprise IAM today. We govern human access with rigor — approval workflows, periodic reviews, JIT policies — and then let service accounts and API keys sit with permanent, broadly scoped access that nobody audits. The OWASP Non-Human Identity Top 10 (2025) reflects this reality: improper offboarding, secret leakage and overprivileged NHIs are the top risks, and they’re based on real breaches at major companies.
The Shift to AI Agent Identity Governance
Modern IAM (Identity and Access Management) is evolving to treat AI agents as first-class identity principals rather than standard service accounts. To govern these autonomous entities effectively, enterprises are adopting four core pillars:
- Distinct Classification: Categorizing agents as copilot, human-initiated or ambient.
- Mandatory Human Ownership: Assigning every agent to a responsible person or team.
- Just-In-Time (JIT) Access: Applying time-bound permissions to minimize the attack surface.
- Instant Kill Switches: Implementing centralized controls to suspend misbehaving agents in seconds.
AI Agent Identity: A New Category of NHI
What is genuinely new — and what we are actively investing in — is the emergence of AI agents as a distinct identity type that doesn’t fit neatly into existing models.
AI agents are not service accounts. Service accounts represent workloads: They’re tied to deployments, not to human teams. An AI agent, on the other hand, is a long-lived, semi-autonomous or fully autonomous software process that uses an LLM to take actions with real side-effects: writing code, creating pull requests, filing tickets, calling APIs. It needs ownership, access review and lifecycle management that service accounts were never designed for.
The industry is converging on this distinction. Microsoft (Entra Agent ID), Okta (AI Agents in Universal Directory), and Google (Agent Identity for Vertex AI) are all modeling agents as first-class identity principals, separate from service accounts and human users. IETF is drafting standards for AI agent authentication. The consistent pattern: Agents need their own identity primitive, not a borrowed one.
What Agent Identity Governance Looks Like
Agent identity management needs to follow a few core principles:
1. Agents Are First-Class Identities, Not Extensions of Service Accounts
Every long-lived, side-effecting AI agent gets a stable identity with a clear type classification. We distinguish between copilot agents (human-tied assistants), human-initiated agents (a human triggers it but isn’t present during execution) and ambient agents (fully autonomous, event or schedule-driven, no human in the loop). This classification directly informs what level of review and oversight each agent requires.
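As a sketch, this classification can be modeled as an explicit enum that drives review cadence. The type names come from the taxonomy above; the review intervals are illustrative assumptions, not stated policy.

```python
from enum import Enum

class AgentType(Enum):
    COPILOT = "copilot"                  # human-tied assistant
    HUMAN_INITIATED = "human_initiated"  # human triggers it, absent during execution
    AMBIENT = "ambient"                  # fully autonomous, no human in the loop

# Illustrative cadence: the less human oversight at runtime,
# the more frequent the governance review (numbers are assumptions).
REVIEW_INTERVAL_DAYS = {
    AgentType.COPILOT: 180,
    AgentType.HUMAN_INITIATED: 90,
    AgentType.AMBIENT: 30,
}

def review_interval_days(agent_type: AgentType) -> int:
    """Look up how often an agent of this type must be reviewed."""
    return REVIEW_INTERVAL_DAYS[agent_type]
```

Encoding the type as a first-class field, rather than a naming convention, is what lets downstream tooling key review and oversight rules off it automatically.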
2. Mandatory Human Ownership
Every agent identity must have a human owner and an owning team. If the owner leaves the company, the agent gets flagged, just like we handle DRI transitions for human-owned resources. Orphaned agents are one of the top NHI risks, and we prevent them by design.
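The orphan check itself is simple set logic. A minimal sketch, with hypothetical field names: flag any agent whose owner no longer appears in the active workforce.

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    owner: str        # the human DRI
    owning_team: str

def flag_orphaned_agents(
    agents: list[AgentRecord], active_employees: set[str]
) -> list[AgentRecord]:
    """Return agents whose human owner has left, so ownership can be reassigned."""
    return [a for a in agents if a.owner not in active_employees]
```

Run against the HR system of record on every termination event, this turns "orphaned agent" from an audit finding into a condition that is detected the day it occurs.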
3. Same Governance as Humans
Agents participate in the same authorization model as human users. They can join groups, hold permissions and be evaluated by the same access checks. They go through periodic access reviews. When a reviewer decides an agent’s access is no longer appropriate, it gets revoked through the same workflow.
4. Kill Switches That Work in Seconds
This item is non-negotiable. When an autonomous agent misbehaves — starts looping, creates hundreds of junk tickets or writes to systems it shouldn’t — we need to stop it immediately. Our design uses a single lifecycle state as the control lever: flip it to “suspended” and every authorization check for that agent returns “denied” within seconds. We can do this per-agent, per-agent-type, per-platform or globally with one API call.
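A minimal sketch of that control lever, assuming an in-memory registry for clarity (a production system would back this with a replicated store so the denial propagates everywhere within seconds):

```python
from enum import Enum

class LifecycleState(Enum):
    ACTIVE = "active"
    SUSPENDED = "suspended"

class AgentRegistry:
    """Toy registry: every authorization check consults lifecycle state first."""

    def __init__(self) -> None:
        self._state: dict[str, LifecycleState] = {}

    def register(self, agent_id: str) -> None:
        self._state[agent_id] = LifecycleState.ACTIVE

    def suspend(self, agent_id: str) -> None:
        # The kill switch: one state flip and every check below starts denying.
        self._state[agent_id] = LifecycleState.SUSPENDED

    def suspend_all(self) -> None:
        # Global kill switch: one call covers the whole fleet.
        for agent_id in self._state:
            self._state[agent_id] = LifecycleState.SUSPENDED

    def is_authorized(self, agent_id: str, permission: str) -> bool:
        if self._state.get(agent_id) != LifecycleState.ACTIVE:
            return False  # suspended or unknown agents are always denied
        return permission in self._permissions_for(agent_id)

    def _permissions_for(self, agent_id: str) -> set[str]:
        # Placeholder: a real system would evaluate the shared authz model here.
        return {"tickets.create"}
```

The design choice is that suspension lives in the authorization path, not in the agent's runtime: a looping agent cannot ignore a denial the way it might ignore a shutdown signal.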
5. No Long-Lived Secrets on Agent Identities
The agent identity record is a governance primitive. It stores who the agent is, who owns it, what it’s allowed to do and whether it’s active. It does not hold credentials. Runtime authentication stays with the workload identity layer (short-lived certificates), while the authorization and governance layer handles everything else.
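Sketched as a data type (all field names here are hypothetical), the defining property is what the record does not contain: there is deliberately no credential field anywhere.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentityRecord:
    """Governance-only record. Note the absence of any secret/credential field:
    runtime authentication stays in the workload identity layer."""
    agent_id: str
    agent_type: str          # "copilot" | "human_initiated" | "ambient"
    owner: str               # human DRI
    owning_team: str
    permissions: frozenset[str]
    state: str = "active"    # the kill-switch lever
```

Separating the governance record from credentials means suspending, reviewing or reassigning an agent never touches secret material, and leaking the record leaks nothing an attacker can authenticate with.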
Why This Matters Now
According to Okta’s research, 91 percent of organizations are already using AI agents, but only 10 percent have governance in place. That gap is a ticking clock. As enterprises deploy more autonomous agents — for code generation, incident response, data pipelines, customer operations — the blast radius of an ungoverned agent grows. An agent with broad, permanent access and no oversight is a more dangerous insider risk than most human accounts.
The right time to build governance into the identity model is before the fleet scales, not after the first incident. We are making this a priority because we believe the identity layer is the natural control plane for agent governance, the same way it became the control plane for human access. The patterns are the same: register, authorize, review, revoke. The execution just needs to account for the fact that agents operate at machine speed and machine scale.
Operationalizing EIAM: Lifecycle and Compliance
Designing an identity architecture is one thing. Keeping it running correctly — continuously, automatically, and at scale — is the real engineering challenge.
Eliminating Privilege Creep and Standing Privileges
In my experience, two problems silently hurt every enterprise’s security posture: privilege creep and standing privileges. Privilege creep compounds when people switch teams and pick up new access but nobody revokes the old one. Standing privileges persist when elevated access — production environments, admin roles, sensitive customer operations — is granted permanently even though it’s only needed once in a while. We address both by wiring the identity lifecycle directly to our HR system of record and enforcing time-bound access by default.
Lifecycle Automation: Let HR Drive Provisioning
We treat our HCM platform as the authoritative source for workforce attributes — team, reporting chain, job function, employee type — and propagate changes downstream in near-real-time through an event-driven pipeline. The critical design decision is refresh cadence: the gap between an HR event and its downstream enforcement is the exposure window, so we aim to keep that as tight as possible.
Instead of hardcoding group memberships, we define policies using expression languages that evaluate against employee attributes automatically. When someone joins, our natively built group engine grants the right birthright access based on who they are and what team they belong to. When someone switches teams, it recalculates, removing what they no longer need and adding what they now qualify for. When someone leaves, deprovisioning fires within minutes across all connected systems. No tickets, no delays, no “I’ll get to it next week.”
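The recompute step can be sketched as pure attribute matching. The rules and attribute names below are hypothetical stand-ins for a real expression language:

```python
def qualifies(employee: dict, rule: dict) -> bool:
    """An employee qualifies for a group when every rule attribute matches."""
    return all(employee.get(attr) == value for attr, value in rule.items())

def recompute_memberships(employee: dict, group_rules: dict) -> set:
    """Derive the full group set from who the employee is right now."""
    return {group for group, rule in group_rules.items() if qualifies(employee, rule)}
```

When a team attribute changes, rerunning the recompute yields the new membership set; the diff against the old set is exactly what to grant and what to revoke, with no human in the loop.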
We complement this with SCIM for pushing decisions to downstream SaaS apps, but SCIM alone isn’t enough. Not every vendor supports it cleanly, and it doesn’t handle all lifecycle states well. So, we layer in event-driven sync for real-time changes and periodic reconciliation to catch drift. Together, these three mechanisms ensure that intended state and actual state stay aligned across the ecosystem.
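The reconciliation pass reduces to a set difference between intended and actual state. A sketch:

```python
def reconcile(intended: set[str], actual: set[str]) -> tuple[set[str], set[str]]:
    """Return (to_grant, to_revoke) so a downstream system converges on intent.
    Access granted directly in a vendor console shows up in to_revoke."""
    return intended - actual, actual - intended
```

Running this periodically against every connected system is what catches the drift that event-driven sync and SCIM, both best-effort in practice, inevitably let through.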
Even with full automation, periodic user access reviews remain the safety net for edge cases — shared groups granting broader access than intended, manual overrides or roles whose risk profile has quietly changed. We’re pushing toward automating the revocation side as well, so rejected entitlements are removed without requiring system owners to submit manual follow-up requests.
Just-In-Time Access: No Standing Privileges
Our principle is straightforward: No one should have standing elevated access. Need admin access for a production incident? Request it through a self-service portal, get approved, use it within a time window and it auto-expires. The entire lifecycle — who requested, who approved, when access was granted and revoked — is auditable end to end.
Not all access deserves the same time limit. We tie maximum JIT duration to a risk score derived from role metadata: privilege level, data sensitivity and potential blast radius. Lower-risk resources get longer windows to reduce engineer toil; high-risk production and customer-data access is kept deliberately short. This tiering is enforced at the platform level, so if a role’s risk score increases, the system validates that the current JIT window is still within the allowed maximum and blocks the change if it isn’t.
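The tiering can be sketched as a threshold table. The scores and windows below are illustrative, not actual policy:

```python
# (minimum risk score, maximum JIT window in hours), highest tier first.
JIT_TIERS = [
    (80, 1),    # high risk: production, customer data
    (50, 8),    # medium risk
    (0, 72),    # low risk: longer windows to reduce engineer toil
]

def max_jit_hours(risk_score: int) -> int:
    """Map a role's risk score to the longest JIT window it may use."""
    for min_score, max_hours in JIT_TIERS:
        if risk_score >= min_score:
            return max_hours
    raise ValueError("risk score must be non-negative")

def window_allowed(requested_hours: int, risk_score: int) -> bool:
    """Re-check whenever a role's risk score changes: a window that was fine
    at medium risk may now exceed the high-risk maximum."""
    return requested_hours <= max_jit_hours(risk_score)
```

Enforcing the table at the platform level, rather than per team, is what makes a risk-score increase automatically invalidate windows that are suddenly too generous.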
This extends to non-human identities too. In cloud environments, short-lived certificates and workload identity federation replace static API keys — turning “key rotation” into continuous renewal with small blast-radius windows by default. For AI agent identities — autonomous agents that generate code, triage alerts or run operational tasks — we apply the same controls with explicit lifecycle states, clear ownership and kill switches that work in seconds. We are actively investing in first-class agent identity management, treating AI agents as a new identity type with the same governance rigor we apply to human users.
What We’ve Learned About EIAM
Keep JIT Memberships Flat
Nesting groups within JIT-enabled groups bypasses time-bound semantics and creates paths for persistent access.
Self-Removal Should Be Approval-Free
Voluntarily dropping elevated access is always a security-positive action.
Drift Detection Is Non-Negotiable
Pair event-driven sync with periodic reconciliation so changes made directly in vendor consoles don’t silently bypass governance.
Measure Adoption, Not Just Availability
Track what percentage of high-risk access is actually time-bound versus standing. That metric tells us whether JIT is our operational norm or just a feature that exists on paper.
AI-Driven Metadata Enrichment
Auto-generating role descriptions and risk classifications by analyzing code repositories dramatically improves the quality of both lifecycle automation and JIT enforcement. Better metadata means smarter automated decisions.
When the secure path is also the easy path, adoption follows naturally.
AI Behind This Article
I practice what I preach. This article was partially drafted with AI assistance. I used it to help structure my thoughts, refine wording and catch gaps in my reasoning. But the perspectives, the architectural opinions and the lessons learned all come from real experience building IAM systems. AI is a force multiplier for getting ideas out of my head and onto paper; it does not replace the years of actually doing the work. That is exactly how we think about AI at Coinbase more broadly. We adopt it aggressively where it accelerates us, but the humans still own the decisions. If building AI‑driven security and identity systems like this appeals to you, take a look at Coinbase’s Product Groups to see where this work shows up across the company.
Frequently Asked Questions
Why is Machine Identity (NHI) management becoming a critical EIAM priority in 2026?
Because the problem got too big to ignore. Non-human identities have always outnumbered human ones — service accounts, API keys, workload credentials — but for years they sat in a governance blind spot. We applied rigorous lifecycle controls to human access and mostly left NHIs alone. That was a calculated risk when NHIs were predictable: a service account runs a cron job, calls the same APIs and doesn’t make autonomous decisions.
AI agents changed the math. An ambient agent that autonomously creates fix PRs or files tickets is not a static service account on a schedule. It makes decisions, writes code and interacts with systems in ways that are hard to predict upfront. If it misbehaves or gets compromised, the blast radius is not “one API call failed” — it is “hundreds of unauthorized changes landed before anyone noticed.” That demands a fundamentally different identity model.
Three things converged in 2025-2026: the fleet scaled from a handful of experimental agents to hundreds across code generation, incident triage, and customer operations; the industry aligned with Microsoft, Okta and Google all shipping agent identity primitives while OWASP published the NHI Top 10; and the governance gap became measurable — 91 percent of organizations using AI agents, only 10 percent with governance in place.
For us, the answer is treating agent identity as a first-class concern in our IAM platform — not bolted on later, but built into the same authorization model we use for humans. Agents get registered, they get owners, they go through access reviews and they have kill switches. The patterns are not new; we are just extending them to a new type of principal that operates at machine speed.
What role does SCIM play in modern EIAM lifecycle management?
SCIM is important, but it gets overrated as a silver bullet for identity lifecycle. It plays a real role, just not the one most people assume. SCIM gives us a standardized protocol for provisioning and deprovisioning identities into downstream SaaS applications. Before SCIM, every integration was a custom snowflake with its own API and data format. SCIM brought a common schema and REST API, which meaningfully reduces the cost of onboarding new systems. That standardization matters.
That said, in my experience SCIM alone is not enough. Not every system supports it. Its user model does not cleanly handle all the lifecycle states we care about — the difference between “suspended” and “deactivated” matters for security, and SCIM does not make that distinction easy. And it is a push protocol, not real-time — when someone is terminated, we cannot wait for the next sync cycle to revoke access. The exposure window matters.
In practice, we use a layered approach: event-driven sync for real-time changes (consuming lifecycle events via message queues so terminations and team changes propagate immediately), SCIM for standard provisioning into SaaS apps that support it, and periodic reconciliation as the safety net to detect and correct drift between intended state and actual state.
SCIM is a valuable piece of that puzzle, but treating it as the whole solution is a mistake. The real work is building a sync and reconciliation layer that handles the messy reality of enterprise identity at scale — partial failures, inconsistent vendor behavior, lifecycle states SCIM does not model and the absolute requirement that terminations propagate in minutes, not hours.
