AI Agents · Zero Trust · NIST · JIT Authorization · Cybersecurity

We Built REMIT. Here's Why.

Josh Bahlman
February 2026
12 min read


A few weeks ago I wrote about why your API keys weren't built for AI-to-AI trust. The response told me I wasn't the only one thinking about this. NIST IR 8596 laid the foundation. I mapped the gaps: credential theft with no traceability, scope creep, delegation without limits, behavioural drift. The piece ended with a choice. Build trust infrastructure deliberately, or retrofit it later.

This is the "build deliberately" part.

The Missing Piece Isn't Intelligence

Everyone is building AI agents. Custom LLMs fine-tuned on internal data. Context windows stuffed with runbooks and architecture docs. Orchestration layers chaining tools together. MCP servers exposing infrastructure operations to models that can reason about them.

Good. That's the right starting point. But context alone doesn't get us where this needs to go.

The missing piece isn't intelligence. It's trust.

Right now, most organisations experimenting with AI agents are doing so in sandboxes. Safe environments. Read-only access. Human approval on every action. That's appropriate for where we are today, but it's also a ceiling. The value of an agent that needs a human to approve every step is limited to "slightly faster human with a chatbot."

The real value is autonomy. Agents that can execute changes, remediate incidents, deploy code, respond to events in real time. Not because we want to remove humans entirely, but because in operational environments, we often can't afford the latency of human intervention.

Think about change management. Someone stays up late, logs into a change window, types commands into a terminal. They fat-finger an IP address. They miss a step in the runbook. They're tired because it's 2am and this is the third change window this week. We've all been there.

We All Want Autonomy. Nobody Trusts It.

Now imagine an agent that executes the change. Follows the runbook precisely. Validates every step. Rolls back automatically if something doesn't match the expected state. Faster, more accurate, fully auditable. We all want that. And nobody trusts it. For good reason.

Every security professional has seen what happens when automation runs with broad permissions. A script with admin access that deletes the wrong namespace. An API key that was supposed to be temporary but has been active for three years. Now scale that to an AI agent that can reason, adapt, and chain actions together. The failure mode isn't "it ran the wrong script." It's "it decided to take a different approach and had the credentials to do it."

API keys don't expire fast enough. OAuth tokens are too coarse. Role-based access gives agents standing permissions they should only have for seconds. No delegation chain. No scope enforcement at the tool boundary. The agents are getting smarter. The trust infrastructure hasn't moved.

The concept of zero standing privileges and JIT provisioning for agents isn't new. The patterns are emerging across the industry and the conversation is getting sharper. What's been missing is closing the full loop: scoped credential issuance, delegation traceability, policy-based eligibility evaluation, and enforcement at the tool boundary. That's what we built.

Security Says "Yes, This Is How"

I've always believed security people are here to say "yes, this is how" not "no, it's too risky." The easy answer to AI agent autonomy is to block it. Lock it down. Keep humans in the loop for everything. That's not security. That's a bottleneck wearing a security badge.

The hard answer is building the infrastructure that lets you say yes with confidence. Yes, this agent can execute that change. Here's the scoped credential. Here's the delegation chain. Here's the policy that was evaluated. Here's the audit trail.

That's what security looks like when it's doing its job. Enabling the business to move, safely. It's a core focus of everything we build at KeyFlux.

How REMIT Works

KeyFlux REMIT, our just-in-time authorization and verification layer for AI agents with built-in eligibility evaluation, works like this:

Scoped, Time-Limited Credentials

When an agent needs to perform an action, it requests a credential scoped to that specific operation. The credential defines exactly what the agent can do, on which resources, for how long. Sixty seconds. One hundred and twenty seconds. Whatever the task requires. When the task completes, the credential is revoked. If it isn't revoked, it expires automatically.
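As a rough sketch of the credential model described above (the class, field names, and TTL handling here are illustrative assumptions, not REMIT's actual API):

```python
from dataclasses import dataclass, field
import time
import uuid

@dataclass
class ScopedCredential:
    """Hypothetical scoped, time-limited credential for one agent action."""
    action: str        # the single operation this credential permits
    resource: str      # the single resource it applies to
    ttl_seconds: int   # lifetime: 60, 120, whatever the task requires
    issued_at: float = field(default_factory=time.monotonic)
    credential_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    revoked: bool = False

    def is_valid(self) -> bool:
        # Valid only if not explicitly revoked and still inside its TTL window.
        return not self.revoked and (time.monotonic() - self.issued_at) < self.ttl_seconds

    def revoke(self) -> None:
        self.revoked = True

# The agent requests a credential scoped to exactly one operation...
cred = ScopedCredential(action="restart", resource="svc/payments", ttl_seconds=60)
assert cred.is_valid()
cred.revoke()              # ...and it is revoked as soon as the task completes.
assert not cred.is_valid() # If revocation is missed, the TTL expires it anyway.
```

The point of the shape: there is no "valid forever" state. Either the task revokes the credential or the clock does.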

Delegation Traceability

Every credential traces back to a human principal through a delegation chain. You can always answer the question "who authorised this agent to do this?" If you can't answer it, the action doesn't happen. That was one of the open gaps I flagged in the previous piece: delegation credentials with no standard encoding. We didn't wait for the standard. We built the mechanism, aligned with NIST IR 8596's call to treat AI agents as first-class cyber actors with strong identity controls, traceable delegation, and supply-chain transparency. Building to the standard, not around it.
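Conceptually, the chain walk looks something like this (a minimal sketch; the `Delegation` structure and `trace_to_human` helper are hypothetical names, not REMIT internals):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Delegation:
    principal: str                           # identity of this link
    kind: str                                # "human" or "agent"
    delegated_by: Optional["Delegation"] = None

def trace_to_human(link: Optional[Delegation]) -> Optional[str]:
    """Walk the delegation chain toward its root. Return the human
    principal who authorised the chain, or None if no human is found."""
    while link is not None:
        if link.kind == "human":
            return link.principal
        link = link.delegated_by
    return None  # no human root: fail closed, the action doesn't happen

# human -> orchestrator agent -> worker agent
alice = Delegation("alice@example.com", "human")
orchestrator = Delegation("agent:orchestrator", "agent", delegated_by=alice)
worker = Delegation("agent:worker-7", "agent", delegated_by=orchestrator)

assert trace_to_human(worker) == "alice@example.com"

# A chain with no human at the root yields no authoriser, so no action.
orphan = Delegation("agent:rogue", "agent")
assert trace_to_human(orphan) is None
```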

Policy-Based Eligibility

A policy engine evaluates whether the agent is eligible for the requested action before the credential is issued. This isn't a simple authentication check. The engine evaluates everything we can lock down: is this action within an approved change window? Does this agent have rights to this specific asset? Is the target environment open for this type of operation? Does the delegation chain trace to someone with the authority to approve this class of change? Has the agent exceeded its action count or error threshold for this session? Every condition that matters gets evaluated. If any one fails, the credential is never issued. The agent never reaches the resource.
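To make the fail-closed evaluation concrete, here is a toy version of that loop (the request shape, check list, and thresholds are all invented for illustration; a real policy engine would be far richer):

```python
from datetime import datetime, timezone

# Hypothetical request context and policy data.
request = {
    "agent": "agent:deployer",
    "action": "deploy",
    "asset": "svc/payments",
    "now": datetime(2026, 2, 10, 2, 30, tzinfo=timezone.utc),
    "actions_this_session": 3,
}

change_windows = [(datetime(2026, 2, 10, 2, 0, tzinfo=timezone.utc),
                   datetime(2026, 2, 10, 4, 0, tzinfo=timezone.utc))]
agent_rights = {"agent:deployer": {"svc/payments"}}
MAX_ACTIONS_PER_SESSION = 25

# Each check returns (ok, reason). Every condition that matters gets one.
checks = [
    lambda r: (any(s <= r["now"] <= e for s, e in change_windows),
               "outside approved change window"),
    lambda r: (r["asset"] in agent_rights.get(r["agent"], set()),
               "agent has no rights to this asset"),
    lambda r: (r["actions_this_session"] < MAX_ACTIONS_PER_SESSION,
               "session action count exceeded"),
]

def evaluate(request, checks):
    """All checks must pass. The first failure blocks credential issuance,
    so the agent never reaches the resource."""
    for check in checks:
        ok, reason = check(request)
        if not ok:
            return False, reason
    return True, "eligible"

ok, reason = evaluate(request, checks)
assert ok  # every condition holds, so the credential may be issued
```

The design choice worth noting: eligibility is evaluated before issuance. A denied request never produces a credential that then has to be caught downstream.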

Tool Boundary Enforcement

At the tool boundary, the credential is validated again. Wrong scope, rejected. Expired, rejected. Broken delegation chain, rejected. Eligibility checking isn't a gate you pass through once. It's enforced continuously for the life of the credential.
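The rejection paths above can be sketched as a single re-validation function that runs on every tool invocation (names and credential fields are assumptions for the sake of the example):

```python
import time

def validate_at_boundary(cred: dict, action: str, resource: str):
    """Re-check the credential at the tool boundary, on every call,
    not just once at issuance. Returns (ok, reason)."""
    if cred["action"] != action:
        return False, "wrong scope"
    if cred["resource"] != resource:
        return False, "wrong resource"
    if time.monotonic() - cred["issued_at"] >= cred["ttl"]:
        return False, "expired"
    if not cred["delegation_ok"]:
        return False, "broken delegation chain"
    return True, "ok"

cred = {"action": "restart", "resource": "svc/payments",
        "issued_at": time.monotonic(), "ttl": 60, "delegation_ok": True}

assert validate_at_boundary(cred, "restart", "svc/payments") == (True, "ok")
assert validate_at_boundary(cred, "delete", "svc/payments") == (False, "wrong scope")
cred["delegation_ok"] = False
assert validate_at_boundary(cred, "restart", "svc/payments") == (False, "broken delegation chain")
```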


In practice, REMIT sits as a policy enforcement layer in front of your existing infrastructure. It wraps your current access controls, whether that's Kubernetes RBAC, cloud IAM, or internal APIs, with scoped, time-limited credentials and evaluates every agent action against policy before it reaches the resource.
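One way to picture that wrapping is a decorator that gates an existing operation behind issue-validate-execute-revoke. This is a deliberately simplified sketch: `issue_credential` and `validate` are stand-ins, and a real deployment would enforce policy server-side rather than in process:

```python
import functools
import time

def issue_credential(action, resource, ttl=60):
    # Stand-in issuer: a real system would run the policy engine first
    # and refuse to mint the credential if any eligibility check fails.
    return {"action": action, "resource": resource, "ttl": ttl,
            "issued_at": time.monotonic(), "revoked": False}

def validate(cred, action, resource):
    return (not cred["revoked"]
            and cred["action"] == action
            and cred["resource"] == resource
            and time.monotonic() - cred["issued_at"] < cred["ttl"])

def remit_gated(action, resource):
    """Wrap an existing operation (a kubectl call, a cloud IAM API,
    an internal endpoint) so it only runs under a valid scoped
    credential, which is revoked the moment the task completes."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            cred = issue_credential(action, resource)
            if not validate(cred, action, resource):
                raise PermissionError("credential rejected at boundary")
            try:
                return fn(*args, **kwargs)
            finally:
                cred["revoked"] = True  # zero standing privilege after the call
        return wrapper
    return decorator

@remit_gated("restart", "deployment/payments")
def restart_payments():
    return "restarted"

assert restart_payments() == "restarted"
```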

What This Looks Like in Practice

Change management is one use case. We're also using REMIT for closed-loop operations. Monitoring agents that detect anomalies and trigger remediation. Compliance agents that respond to standards changes. Deployment agents that push validated builds through environments. Each one scoped, auditable, traceable back to the authority that permitted it.

Without REMIT

An incident takes 45 minutes to resolve with a human on call. A change window blocks a team for four hours.

With REMIT

The same incident resolves in 90 seconds when an agent has the right context and the right credentials. The change window completes in minutes.

But "more efficient" and "more accurate" only matter if you can prove the agent was authorised to act, operated within its scope, and held a credential that was valid for exactly the window it needed.

REMIT makes agent autonomy auditable. It turns "we trust this agent" into "we can prove this agent was authorised for this specific action by this specific authority for this specific duration."

The Start of the Answer

Just because we've built it doesn't mean the world adopts it tomorrow. Trust takes time. Organisations need to see it working in low-risk environments before they extend it to critical operations.

But for those of us who've spent careers in operational security, the direction is clear. In a few years, scoped, time-limited, auditable agent authorization won't be novel. It will be the baseline. The same way we look back at shared root passwords and wonder what we were thinking.

We built KeyFlux REMIT because the trust layer for AI agents needs to exist before agents are fully autonomous, not after. The last article was the problem. This is the start of the answer.

If you're thinking about how to get to a place of trust with agents in your environment, let's talk.
