The Challenges of Generative AI in Identity and Access Management (IAM)
GenAI is transforming every industry, and security is no exception. One area where this transformation is most evident is Identity and Access Management (IAM).
The rise of AI identities creates new, never-before-encountered challenges in the IAM sphere, with traditional methods proving insufficient to handle AI’s new dynamic, complex capabilities.
In my previously published series of articles, “The Challenges of Generative AI in Identity and Access Management (IAM),” I explored four major questions central to adapting IAM for AI-driven environments: the “who,” “what,” “where,” and “when” of identity security.
Through these questions, we’ve explored actionable solutions to help developers adapt their IAM to AI-driven environments. In this final piece in the series, I bring together these insights to outline a comprehensive, proactive IAM framework that addresses the unique challenges posed by generative AI.
Let’s explore how understanding who, what, where, and when in IAM can improve your application’s security.
Rethinking the “Who”: Understanding AI Identity
Over the years, we’ve clearly defined how human and machine identities are treated: for humans, we had passwords, challenges, tokens, and short session spans; for machines, we had secrets and secret managers.
The same separation applies to vulnerabilities. Human identity vulnerabilities mostly stem from human error, such as phishing and password theft, while machine identities suffer from misconfigurations or improperly stored secrets.
With the rise of AI agents came new hybrid identities, making the traditional separation between human users and bots no longer viable.
To handle this newfound challenge and achieve a fully integrated, proactive IAM framework that effectively addresses the complexities of AI identity, I suggest a few key guidelines:
- Always challenge assumptions. Avoid static sessions and continuously re-evaluate authentication and authorization.
- Use ranking over simple verification. This will help us better understand what is known about the user and what isn’t.
- Accept the blurred border between humans and bots. Instead of categorically blocking or accepting bots, recognize new identities in software and secure them appropriately.
- Combine authentication and authorization. Move beyond the traditional separation where authentication verifies identity and authorization defines permissions. Instead, they should be integrated into a unified system that ranks users and tracks their allowed actions.
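To make the last guideline concrete, here is a minimal sketch of a unified system that ranks an identity instead of binary-verifying it, then gates each action on that rank. The signal names, weights, and thresholds are illustrative assumptions, not any specific product’s model:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity: str
    signals: dict  # what we currently know, e.g. {"valid_token": True}
    action: str

# Hypothetical signal weights -- a real deployment would tune these.
SIGNAL_WEIGHTS = {
    "valid_token": 0.4,
    "known_device": 0.2,
    "human_like_behavior": 0.2,
    "declared_agent": 0.2,
}

# Per-action minimum rank, replacing a binary allow/deny list.
ACTION_THRESHOLDS = {"read_docs": 0.4, "delete_account": 0.9}

def rank(signals: dict) -> float:
    """Score an identity from 0 to 1 based on what is known about it."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def authorize(req: AccessRequest) -> bool:
    """Unified check: the identity's rank must clear the action's threshold."""
    return rank(req.signals) >= ACTION_THRESHOLDS.get(req.action, 1.0)
```

An AI agent that presents a valid token and declares itself as an agent ranks at 0.6 here: enough to read documentation, but not enough for destructive actions. Re-running `rank` on every request (rather than once per session) is what makes the model continuous rather than static.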
Tools to Manage AI Identities:
- ArcJet: Provides advanced identity ranking tools to assess and score requests based on behavior, helping to make nuanced access decisions.
- OPAL (Open Policy Administration Layer): Facilitates dynamic policy enforcement by integrating identity rankings, relationship-based access controls (ReBAC), and risk-scoring models.
- Permit.io: Supports advanced delegation mechanisms, like the "check-agent" method, enabling secure relationships between AI agents and the humans or systems they represent.
Understanding “Who,” however, is only the first part of the equation. The next critical question to explore is “What” these identities can do once they gain access.
Rethinking the “What”: Proactive Authorization
Traditional IAM has been heavily focused on managing ingress traffic, or, in other words, what enters an application. However, AI agents operate dynamically, often creating outbound (egress) requests that must also be managed. That means assuming all permissions can be controlled through ingress traffic alone is no longer realistic.
Here are some guidelines that can help create a system that evaluates requests dynamically and adapts permissions as needed:
- Streamline permissions by establishing clear relationships between inbound and outbound requests, ensuring consistent enforcement.
- Decentralize enforcement using separate engines for ingress and egress traffic while maintaining a unified permissions model.
- Anticipate evolving requirements, designing systems that dynamically adapt to the changing needs of AI agents without compromising security.
Tools for Proactive Authorization:
- Lunar.dev is an open-source API gateway designed specifically for egress traffic. It acts as a proxy server for all outbound or internal requests, allowing us to implement authorization checks at this stage.
- Open Policy Agent + OPAL or Permit.io can help Lunar make relevant decisions based on egress traffic. As fine-grained authorization solutions, they let you create policies that control both inbound and outbound traffic, so a single authorization service can manage policies for ingress and egress alike.
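The egress side of this model can be sketched as a policy hook that a proxy calls before forwarding any outbound request. The agent names, allowed hosts, and the `check_egress` function below are illustrative assumptions, not Lunar.dev’s or OPA’s actual API:

```python
from urllib.parse import urlparse

# Hypothetical egress policy: which external hosts each agent may call.
# Keeping this next to the ingress permission model keeps enforcement
# consistent across both directions of traffic.
EGRESS_POLICY = {
    "support-agent": {"api.openai.com", "internal-kb.example.com"},
}

def check_egress(agent_id: str, url: str) -> bool:
    """Hook an egress proxy could call before forwarding a request."""
    host = urlparse(url).hostname
    return host in EGRESS_POLICY.get(agent_id, set())
```

In practice the in-memory dictionary would be replaced by a call to a policy engine, so that policy updates propagate to the proxy without redeploying it.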
This proactive approach ensures that AI-driven systems remain both flexible and secure. With the question of “What” addressed, we can turn to the next critical question: “Where” are these identities trying to go?
Rethinking the “Where”: Managing AI Permissions
Determining “where” identities are allowed to go has also become more complex than ever. AI agents often bypass traditional safeguards and can sometimes access resources dynamically without explicit authorization. Addressing this challenge requires moving beyond static access control models.
The old methods of managing access (static whitelists and blacklists) are no longer sufficient. Access control must instead be dynamic, continuously monitored, and adaptable, giving developers the ability to ensure that AI behavior is understood and secured during every phase of its lifecycle.
Tools for Managing AI Permissions:
- Retrieval-Augmented Generation (RAG): Provides a structured methodology for filtering AI access to authorized data sources. By combining semantic search with access controls, RAG ensures that only approved data is retrieved and utilized by AI systems.
- Dynamic Authorization Services: Authorization services like Permit.io, which support Relationship-Based Access Control (ReBAC), allow teams to enforce fine-grained, contextual permissions. By integrating these services with your AI pipelines, you can dynamically adapt permissions in real time based on user context and behavior.
RAG can filter AI access to only authorized data sources, while dynamic authorization services with ReBAC support provide the necessary flexibility and control.
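A minimal sketch of the permission-aware RAG pattern follows: after semantic search returns candidate documents, each one is checked against an authorization relation before it ever reaches the LLM. The team-membership relation and `is_permitted` helper are illustrative stand-ins for a real ReBAC check (such as a call to an authorization service):

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    owner_team: str
    text: str

# Hypothetical ReBAC-style relation: which teams a user belongs to.
USER_TEAMS = {"alice": {"finance"}, "bot-7": {"support"}}

def is_permitted(user: str, doc: Document) -> bool:
    """Stand-in for a real authorization check against a policy engine."""
    return doc.owner_team in USER_TEAMS.get(user, set())

def retrieve_for_user(user: str, results: list) -> list:
    """Filter semantic-search results so the LLM sees only authorized data."""
    return [doc for doc in results if is_permitted(user, doc)]
```

The key design choice is filtering at retrieval time rather than trusting the model to withhold data it has already seen: documents the user cannot access never enter the prompt.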
By addressing where AI agents are allowed to go, developers can maintain secure operations while adapting to the challenges introduced by these advanced systems.
With this in mind, we will turn to the final question: when should access be granted?
Rethinking the “When”: Dynamic Access Control
The question of “when” access should be granted, adjusted, or revoked is often overlooked in traditional IAM discussions. Static methods like token expiration or session-based authentication are inadequate for the dynamic requirements of AI systems.
To fully answer the question of when, we must shift from viewing access as a static, time-bound concept to understanding it as part of a dynamic, event-driven timeline. This means continuously challenging assumptions, reevaluating sessions, and incorporating real-time data into every access decision.
Here are some key methods and tools to do so:
- Shift to event-driven access control, continuously reevaluating permissions based on real-time data.
- Implement methodologies like Continuous Access Evaluation Profile (CAEP) to adapt to changes in user behavior or system conditions dynamically.
- Leverage tools like OPToggles and OpenFeature to integrate event-driven mechanisms, ensuring access decisions reflect current risk levels and context.
- Create a feedback loop between authentication and authorization providers to ensure strict but dynamic enforcement over time.
Tools for Continuous Access Monitoring:
- OpenFeature: Facilitates event-driven access control by integrating real-time monitoring into IAM pipelines.
- OPToggles: Tracks user and agent behavior, allowing for immediate updates to permissions in response to changes in context.
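The event-driven loop these methods describe can be sketched as a handler that updates or revokes live sessions the moment a risk signal arrives, instead of waiting for a token to expire. The event names and in-memory session store below are assumptions for illustration, not a specific CAEP implementation:

```python
# Live session state; in production this would be a shared store.
active_sessions = {"sess-1": {"user": "alice", "risk": "low"}}

def on_event(session_id: str, event: str) -> None:
    """Feedback loop: authentication-side events update authorization state."""
    session = active_sessions.get(session_id)
    if session is None:
        return
    if event == "credential-compromised":
        del active_sessions[session_id]   # revoke access immediately
    elif event == "new-location":
        session["risk"] = "high"          # downgrade until re-verified

def can_act(session_id: str) -> bool:
    """Checked on every request, not once at login."""
    session = active_sessions.get(session_id)
    return session is not None and session["risk"] == "low"
```

Because `can_act` runs per request, a single event is enough to change the answer mid-session, which is exactly the property static expiry-based models lack.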
Rather than relying solely on token expiration or session timeouts, we should design systems that can intelligently assess when access should end based on changing conditions.
The Path Forward: Toward a Proactive IAM Framework
Addressing the challenges introduced by generative AI in IAM requires a fundamental shift in how we think about identity security. By reevaluating the questions of who, what, where, and when, we can build systems that are more adaptive, secure, and capable of handling the complexities of AI-driven environments.
The future of IAM lies in integration, dynamic monitoring, and proactive adaptation. By embracing these principles, organizations can turn the challenges of generative AI into opportunities for building smarter and more secure applications.
If you have questions or want to learn more about IAM, join our Slack community, where hundreds of devs are building and implementing authorization.
Written by
Gabriel L. Manor
Full-Stack Software Technical Leader | Security, JavaScript, DevRel, OPA | Writer and Public Speaker
Daniel Bass
Application authorization enthusiast with years of experience in customer engineering, technical writing, and open-source community advocacy. Community Manager, Dev Convention Extrovert, and Meme Enthusiast.