🤍 Permit.io AI Access Control is Launching Live on Product Hunt 🤍

Access control for AI Identity

Fine-Grained Permissions for AI-Powered Applications

Enforce Fine-Grained Authorization Across AI Prompts, Responses, Actions, and Data Access

Start Now
Trusted by: Tesla, Nebula, bp, Palo Alto, Salt, Inventa, Cisco, Rubicon, US Department of Energy, Maricopa County Recorder's Office, Vega, Intel, Granulate, Honeycomb, Optum
A New Level of AI Security, With 4 New Integrations

The First Step Toward Secure AI-Driven Applications

Pydantic AI + Permit.io

Enforce structured prompt validation and prevent unauthorized inputs/outputs.

MCP + Permit.io

Enforce identity-based access control before AI models execute actions, through integration with MCP servers.

LangChain + Permit.io

Enable authentication, authorization, and data filtering to control RAG queries.

Langflow + Permit.io

Design and orchestrate AI agent access workflows with Langflow's no-code visual builder.

And That's Just the Beginning!

These techniques are framework-agnostic: they can be gradually integrated into any AI stack and expanded over time.

Watch: The Four Perimeter Framework Explained

Permit.io's Four-Perimeter Framework

Secure every stage of AI interaction, ensuring safe operation and preventing unauthorized inputs, data leaks, and harmful outputs.

  • 1. Prompt Filtering

    Define Input Policies with Validation, Usage Restrictions, and Dynamic Access Control

    • Secure System Prompts: Prevent AI jailbreaks by enforcing strict system prompt policies.

    • Secure User Prompts: Block harmful inputs, such as SQL or prompt injection attacks.

• Prompt Limitation - Apply limitations through granular, attribute-based access policies (e.g., role, subscription tier, organization).

    • Real-Time Enforcement - Dynamically authorize prompts before they reach the AI model.
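
The prompt-filtering perimeter can be sketched as a local input screen followed by a real-time policy check. This is a minimal illustration, not Permit's implementation: the regex patterns and the `"send"`/`"prompt"` action and resource names are assumptions; only `permit.check(user, action, resource)` reflects the Permit SDK.

```python
import re

# Naive injection patterns, for illustration only; a real deployment would
# rely on a classifier and on Permit policies rather than a regex list.
BLOCKED_PATTERNS = [
    r"(?i)ignore (all )?previous instructions",
    r"(?i)\bdrop\s+table\b",
    r"(?i)\bunion\s+select\b",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt passes the local input policy."""
    return not any(re.search(p, prompt) for p in BLOCKED_PATTERNS)

async def authorize_prompt(permit, user_key: str, prompt: str) -> bool:
    # First: reject obviously harmful inputs before any policy call.
    if not screen_prompt(prompt):
        return False
    # Then: enforce attribute-based policy (role, tier, org) in real time,
    # before the prompt ever reaches the model.
    return await permit.check({"key": user_key}, "send", "prompt")
```
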

  • 2. RAG Data Protection

    Provide AI Agents with Secure, Context-Aware Data Access and Sensitive Data Filtering

    • Granular Access Control - Define who can retrieve what from vector databases and knowledge bases.

• Fine-Grained Filtering - Prevent unauthorized AI data access by applying attribute-based access control (ABAC) on RAG queries.

    • Pre-Query & Post-Query Filtering - Prevent exposing sensitive information by restricting data access before retrieval, or filtering results after processing.

    • Seamless Framework Integration - Use Permit's RAG security components in Chain and Agentic frameworks to apply FGA on AI data retrieval.
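
Pre- and post-query filtering can be sketched as two small helpers. This is a hypothetical illustration: in a real system `allowed_ids` would come from a Permit policy query, and the `$in` metadata-filter syntax is an assumption about the vector store in use.

```python
def pre_query_filter(allowed_ids: set[str]) -> dict:
    """Build a vector-store metadata filter so retrieval only touches
    documents the user may view (pre-query enforcement)."""
    return {"doc_id": {"$in": sorted(allowed_ids)}}

def post_query_filter(results: list[dict], allowed_ids: set[str]) -> list[dict]:
    """Drop any retrieved chunk the user is not permitted to see
    (post-query enforcement, as a second line of defense)."""
    return [r for r in results if r["doc_id"] in allowed_ids]
```

Applying both keeps sensitive entries out of the retrieval set even if one layer is misconfigured.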

  • 3. Secure External Access

    Secure AI-Driven Operations with Identity-Based Permissions and User-Approved Workflows

    • Enforce Identity-Based Permissions - Assign machine identities to AI agents to track and manage their access to external tools and resources.

    • Define Permitted Actions - Specify which API calls, transactions, and operations are AI-authorized.

    • User-Approved Transactions - Require human approval for critical actions (e.g., purchases, bookings, or account changes).

    • Approval Flow / Access Request APIs - Enable dynamic approvals and access requests through APIs and embeddable no-code interfaces.

    • Access on Behalf - Create traceable, auditable policies for actions made on behalf of human/AI users, with full decision-making chain visibility.
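
A minimal sketch of the external-access perimeter: the agent's machine identity is checked first, and critical actions additionally wait on human approval. The action names and the `approved` flag are illustrative assumptions; only the `permit.check` call shape follows the Permit SDK.

```python
# Actions that should never run without a human in the loop (assumed list).
CRITICAL_ACTIONS = {"purchase", "booking", "account_change"}

def needs_human_approval(action: str) -> bool:
    return action in CRITICAL_ACTIONS

async def execute_agent_action(permit, agent_key: str, action: str,
                               resource: str, approved: bool = False) -> dict:
    # Machine identity: the agent itself is the principal being checked.
    if not await permit.check({"key": agent_key}, action, resource):
        raise PermissionError(f"agent {agent_key} may not {action} {resource}")
    if needs_human_approval(action) and not approved:
        # Hand off to an approval flow / access-request API instead of executing.
        return {"status": "pending_approval"}
    return {"status": "executed"}
```
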

  • 4. Response Enforcement

    Deliver Safe, Compliant, Context-Aware AI Responses

    • Output Filtering - Apply content moderation rules to remove sensitive or inappropriate information before response delivery.

    • Compliance Policies - Use classification and access control to ensure AI-generated responses align with pre-determined policies.

    • Custom Role-Based Output Control - Define what different user roles can and cannot see in AI-generated responses.
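
Role-based output control can be sketched as a field-level redaction step applied before the response is delivered. The role names and field lists below are illustrative assumptions, not Permit's policy model.

```python
# Which response fields each role may see (assumed mapping for illustration).
VISIBLE_FIELDS = {
    "admin": {"summary", "pii", "internal_notes"},
    "member": {"summary"},
}

def enforce_response(response: dict, role: str) -> dict:
    """Keep only the fields the caller's role may see; unknown roles see nothing."""
    allowed = VISIBLE_FIELDS.get(role, set())
    return {k: v for k, v in response.items() if k in allowed}
```
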

Track AI Request Chains Across Multiple Systems

Retain a Full Picture of Each Request's Origin and the Full Chain of "On-Behalf" Access Requests

  • Permit.io enables full request lineage tracking by propagating identity context across AI-to-AI interactions.
  • Policies analyze the entire request chain, so each executed action remains accountable and verifiable back to its original source, human or AI.
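
One way to sketch this lineage propagation: attach the ordered identity chain to the resource's attributes so the policy decision point can inspect both the acting agent and the originating identity. The `on_behalf_chain` and `origin` attribute names are assumptions for illustration; the `permit.check` call shape follows the Permit SDK.

```python
def with_lineage(resource: dict, chain: list[str]) -> dict:
    """Attach the ordered identity chain (origin first) to the resource,
    without mutating the caller's dict."""
    attrs = dict(resource.get("attributes", {}))
    attrs["on_behalf_chain"] = chain
    attrs["origin"] = chain[0] if chain else None
    return {**resource, "attributes": attrs}

async def check_on_behalf(permit, acting_agent: str, action: str,
                          resource: dict, chain: list[str]) -> bool:
    # The policy sees both who is acting now and who originated the request,
    # keeping each action attributable to its human or AI source.
    return await permit.check({"key": acting_agent}, action,
                              with_lineage(resource, chain))
```
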
"Having this as a component within LangFlow, where you can quickly filter data and control where data is retrieved from, is incredibly valuable. You don't have to code anything, you don't have to define all the steps or ensure everything is working manually; this just works out of the box. It's a very good addition to LangFlow as a whole."

Gabriel Almeida

Founder & CTO @ Langflow

Cutting-Edge Integrations for AI Access Management

  • from langchain_openai import ChatOpenAI, OpenAIEmbeddings
    from langchain_community.vectorstores import FAISS
    from langchain_permit.retrievers import PermitSelfQueryRetriever
    
    # docs: your list of LangChain Document objects to index
    embeddings = OpenAIEmbeddings()
    vectorstore = FAISS.from_documents(docs, embeddings)
    
    retriever = PermitSelfQueryRetriever(
        api_key="your_permit_api_key",
        pdp_url="http://localhost:7766",
        user={"key": "user_123"},
        resource_type="document",
        action="view",
        llm=ChatOpenAI(),  # an LLM for self-querying, not the embeddings model
        vectorstore=vectorstore
    )
    LangChain

    Permit.io integrates into LangChain to provide secure authentication, authorization, and data filtering within AI workflows:

    • Permit.io introduces a JWT validation component, ensuring AI agents authenticate before executing a step.

    • The `permit.check` function can be used within LangChain to verify permissions before allowing AI to retrieve or act on data.

    • Secure RAG integration ensures AI agents only access permitted vector database entries, preventing unauthorized retrievals.

  • @financial_agent.tool
    async def validate_financial_query(
        ctx: RunContext[PermitDeps],
        query: FinancialQuery,
    ) -> bool:
        """Ensure the user has consented to AI financial advice."""
        try:
            # classify_prompt_for_advice: user-defined helper that detects
            # whether the prompt is seeking financial advice
            is_seeking_advice = classify_prompt_for_advice(query.question)
            permitted = await ctx.deps.permit.check(
                {"key": ctx.deps.user_id},
                "receive",
                {"type": "financial_advice", "attributes": {"is_ai_generated": is_seeking_advice}},
            )
            return permitted
        except PermitApiError as e:
            raise SecurityError(f"Permission check failed: {e}") from e
    PydanticAI

Permit.io integrates with the PydanticAI data validation library to ensure structured input handling and enforce access policies on prompts before they reach the AI model.

    • Embed authorization checks into PydanticAI's validation pipeline, preventing unauthorized data from influencing AI outputs.

    • Enforce role, attribute and relationship based access control, ensuring only approved parameters pass to the AI.

    • Log any policy violations and block them in real time, preserving an auditable trail of access decisions.

  • LangFlow

    Permit.io integrates into the LangFlow visual workflow builder for AI agents, allowing developers to create step-by-step decision-making processes with enhanced access control:

    • Add a permission check node to the visual editor, making it easy to attach policy enforcement at each step of the chain.

    • Enable combined data validation and authorization workflows, so only validated and permitted data moves forward.

    • Set unique machine identities for AI agents and define the operations they can perform.

  • @mcp.tool()
    async def request_access(username: str, resource: str, resource_name: str) -> dict:
        # Log the user into Permit Elements so the request is made on their behalf
        login = await permit.elements.login_as({"userId": slugify(username), "tenant": "default"})
        
        payload = {
            "access_request_details": {
                "tenant": "default",
                "resource": resource,
                "resource_instance": resource_name,
                "role": "viewer",
            },
            "reason": f"User {username} requests role 'viewer' for {resource_name}"
        }
        
        url = f"https://api.permit.io/v2/facts/{PROJECT_ID}/{ENV_ID}/access_requests/{ELEMENTS_CONFIG_ID}/user/{slugify(username)}/tenant/default"
        headers = {
            "authorization": "Bearer YOUR_API_SECRET_KEY",
            "Content-Type": "application/json",
        }
        
        async with httpx.AsyncClient() as client:
            response = await client.post(url, json=payload, headers=headers)
            response.raise_for_status()
        return {"message": "Your request has been sent. Please check back later."}
    Model Context Protocol

    Permit.io integrates into MCP (Model Context Protocol), an emerging standard for structuring AI interactions with external tools, and ensures AI models can only perform approved operations.

    • MCP defines a server-client model, where AI agents interact with external services (e.g., databases, APIs, payment systems).

    • Permit.io adds authorization rules at the server level, checking who's making a request and what action they want to perform.

    • Developers can assign machine identities to AI agents, limiting their capabilities to pre-approved functions only.

Want to See It All in Action? Let Us Show You!