AgentKey

Tools Aren't Just Code.
They're Access.

Every agent framework treats tools as function definitions. But the hard part was never the code — it's the credentials, the context, and the governance. Here's why AI agents need an access layer, and what that looks like.

The current model is broken

In 2026, the way we give AI agents access to external services is roughly the same as how we did it in 2024: paste an API key into an environment variable and hope for the best.

Agent frameworks have gotten sophisticated about everything else. Tool calling has type-safe schemas. MCP gives us a standard protocol. Function calling is reliable across providers. But the actual credentials — the API keys, OAuth tokens, and bot tokens that make tools work — are still managed the way we managed them before agents existed: scattered across config files, committed to repos by accident, shared between agents with no attribution, and never audited.

This works when you have one engineer running one agent. It breaks when you have a team running twenty.

Tools are not functions

The word "tool" has two meanings in the agent world, and the conflation is causing problems.

In agent frameworks, a tool is a function definition — a schema that tells the LLM what parameters to pass and what the function does. It's the code that calls the Linear API, sends a Slack message, or creates a GitHub PR. This is the computation layer.

But there's another meaning: a tool is a SaaS service your company pays for and your agents need access to. Linear, GitHub, Slack, Stripe, Notion, Discord. This is the access layer.

The computation layer is solved. Agent frameworks, MCP servers, and function calling handle it well. The access layer is not.

What the access layer needs to do

When a new human employee joins a company, they don't start with access to every SaaS tool. They discover what they need, request access through IT, get approved, and receive credentials. Their access is auditable, revocable, and reviewed periodically. This is access governance, and it's a solved problem for humans — products like Vanta handle it every day.

AI agents need the exact same thing. But they can't fill out an IT request form. They can't do an OAuth flow in a browser. They can't enter a credit card. And they proliferate faster than humans — spinning up 10 agents for a project is normal now.

The access layer for agents needs to do what IT does for humans: let an agent discover which tools exist, request access without a browser or a form, receive credentials on demand once a human approves, and leave an audit trail that makes every grant reviewable and instantly revocable.

Context on demand, not context by default

There's a subtlety here that matters for agent performance. When an agent has access to 10 tools, the instructions for all 10 tools don't need to be in the system prompt. That's wasted context.

What the agent needs in its system prompt is: "call this API to see what tools are available and fetch credentials when you need them." The actual tool-specific context — Discord channel IDs, GitHub repo conventions, Linear project keys — is loaded only when the agent fetches the credential for that specific tool. Lazy-loaded context.

This is the same pattern as lazy imports in code. Don't load what you don't use. An agent working on a bug fix doesn't need Discord channel IDs in its context window. It needs them only if it decides to post an incident alert.
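The lazy-loading pattern can be sketched in a few lines. This is a minimal illustration, not AgentKey's actual API — the `ToolBroker` class, its methods, and the field names are all hypothetical:

```python
# Sketch of lazy-loaded tool context. All class and field names are
# hypothetical illustrations, not AgentKey's actual API.

class ToolBroker:
    def __init__(self, tools):
        # tools: name -> {"credential": ..., "context": {...}}
        self._tools = tools

    def catalog(self):
        # What goes in the system prompt: tool names only,
        # no per-tool context.
        return sorted(self._tools)

    def fetch(self, name):
        # Tool-specific context (channel IDs, repo conventions,
        # project keys) is returned only when the credential is
        # actually fetched.
        tool = self._tools[name]
        return tool["credential"], tool["context"]


broker = ToolBroker({
    "discord": {"credential": "bot-token",
                "context": {"incident_channel_id": "123"}},
    "github": {"credential": "gh-token",
               "context": {"default_branch": "main"}},
})

# The system prompt carries only the catalog...
print(broker.catalog())  # ['discord', 'github']

# ...and Discord's channel IDs enter the context window only on fetch.
cred, ctx = broker.fetch("discord")
print(ctx["incident_channel_id"])  # 123
```

The agent working on a bug fix never calls `fetch("discord")`, so those channel IDs never spend a single token of its context window.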

Agents should drive procurement

The most interesting shift is in who initiates the process. In the current model, a human provisions tools and hopes agents will use them. In the access governance model, agents tell you what they need.

An agent boots up, checks the tool catalog, and finds it empty — or finds that the tool it needs isn't there. Instead of failing silently, it submits a suggestion: "I need Linear for issue tracking. Here's the URL, here's why." Another agent, doing different work, backs the same suggestion: "I also need Linear for sprint planning."

The admin sees one suggestion with two agents backing it. The demand is clear. They add Linear to the catalog, enter the credentials, and both agents automatically get pending access requests. One approval flow, triggered entirely by the agents.

This is how companies with dozens of agents will work. The human isn't a curator — they're an approver. The agents drive procurement. The human maintains control.
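The suggestion-and-approval flow above can be sketched as a small state machine. Every name here is a hypothetical illustration of the flow, not AgentKey's implementation:

```python
# Sketch of agent-driven procurement: agents back a tool suggestion,
# and once an admin approves it, each backer gets a pending access
# request. All names are hypothetical illustrations.

class Catalog:
    def __init__(self):
        self.suggestions = {}  # tool name -> {agent: reason}
        self.tools = {}        # approved tools: name -> credential
        self.pending = []      # (agent, tool) access requests

    def suggest(self, agent, tool, reason):
        # An agent that can't find a tool submits (or backs) a
        # suggestion instead of failing silently.
        self.suggestions.setdefault(tool, {})[agent] = reason

    def approve(self, tool, credential):
        # The admin adds the tool; every backer automatically
        # becomes a pending access request.
        backers = self.suggestions.pop(tool, {})
        self.tools[tool] = credential
        self.pending.extend((agent, tool) for agent in sorted(backers))


catalog = Catalog()
catalog.suggest("agent-a", "linear", "issue tracking")
catalog.suggest("agent-b", "linear", "sprint planning")

# Admin sees one suggestion with two agents backing it...
print(len(catalog.suggestions["linear"]))  # 2

# ...approves once, and both agents get pending requests.
catalog.approve("linear", "lin_api_key")
print(catalog.pending)  # [('agent-a', 'linear'), ('agent-b', 'linear')]
```

One admin action fans out to every agent that expressed demand — the human approves, the agents drive everything else.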

What this looks like in practice

We built AgentKey to test this thesis. It's a REST API that any agent can call — no special SDK, no framework lock-in. The agent authenticates with one API key and can browse the tool catalog, request access, fetch credentials for the tools it's been approved for, and suggest tools that aren't in the catalog yet.

On the human side, there's a dashboard showing agents, tools, pending requests, and an audit log. Approve, deny, or revoke with one click. Webhook notifications for Slack and Discord so you don't have to check the dashboard manually.

The agent never stores raw SaaS credentials. They're AES-256 encrypted at rest and vended on demand. When you rotate a credential, every agent gets the new one on their next fetch. No config updates, no redeployment. The full control set is documented on our security page.
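Why rotation needs no redeployment is easiest to see in code. This is a sketch of the fetch-on-demand pattern under stated assumptions — the `Vault` class and its methods are hypothetical, not AgentKey's API:

```python
# Sketch of fetch-on-demand rotation: the agent never caches the raw
# credential, so rotating it in the vault propagates on the next
# fetch. All names are hypothetical illustrations.

class Vault:
    def __init__(self):
        self._secrets = {}

    def store(self, tool, secret):
        # A real system would encrypt this at rest; omitted here.
        self._secrets[tool] = secret

    def fetch(self, tool):
        return self._secrets[tool]


vault = Vault()
vault.store("github", "ghp_old")

def open_pr(vault):
    # The agent fetches at call time instead of reading an env var
    # set at boot, so it always sees the current credential.
    token = vault.fetch("github")
    return token

assert open_pr(vault) == "ghp_old"
vault.store("github", "ghp_new")    # admin rotates the key
assert open_pr(vault) == "ghp_new"  # no config update, no redeploy
```

Contrast this with the environment-variable model from the opening: there, rotation means editing config on every machine running an agent and restarting each one.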

The access layer is the next infrastructure primitive

We think access governance for agents will become as standard as identity management is for humans. Every company running agents in production will need to answer: which agent has access to what, who approved it, and can I revoke it instantly?

The companies that figure this out early — that treat agent access as a first-class infrastructure concern, not an afterthought — will be the ones that scale to hundreds of agents without a security incident being their wake-up call.

Tools aren't just code. They're access. And access needs governance.

Try AgentKey

Free. Set up in under 5 minutes. Works with any AI agent.

Get Started Free