
One Line Under the Agent

The interesting thing about modern agent frameworks is not that they can reason.

It is that they can act.

That is why the Vercel AI SDK matters. It does not just help you call models. It gives models tools. It gives them a way to write files, run commands, call APIs, and keep going until the job is done. That is what makes it such a clean fit for agents in TypeScript. It is also what makes the security boundary matter so much more.

Once an agent has tools, the question changes.

It is no longer just, "did the model say something wrong?"

It becomes, "what happens when the model tries to do something wrong?"

Most controls still sit too far away from that moment. Prompts try to shape behavior before execution. Logs help explain behavior after execution. Both matter. Neither is the missing layer.

The missing layer is control during execution, at the point where actions become real.

That is what we are announcing today.

@agentsh/secure-sandbox for TypeScript makes it possible to put AgentSH under the hosted sandbox your agent already uses, with essentially one added line. It works across multiple sandbox providers, and it fits especially naturally with the Vercel AI SDK because the SDK already gives you the exact place where intent becomes action: the tool implementation.

You can learn more in the secure-sandbox docs.

import { secureSandbox, adapters } from '@agentsh/secure-sandbox';

// rawSandbox comes from your provider of choice:
// adapters.vercel(rawSandbox)
// adapters.e2b(rawSandbox)
// adapters.daytona(rawSandbox)
// adapters.cloudflare(rawSandbox)
// adapters.blaxel(rawSandbox)

const sandbox = await secureSandbox(adapters.daytona(rawSandbox));
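The wrapped sandbox then plugs into the place this post keeps pointing at: the tool implementation. Here is a minimal sketch of that shape, using a stand-in interface for the secured sandbox rather than the real provider types (the actual method surface is provider-specific; check the secure-sandbox docs):

```typescript
// Stand-in for the secured sandbox surface. The real object comes from
// secureSandbox(adapters.daytona(rawSandbox)); its exact API depends on
// the provider adapter, so this interface is illustrative only.
interface SecuredSandbox {
  runCommand(cmd: string): Promise<{ stdout: string; exitCode: number }>;
}

// A shell tool in the general shape the Vercel AI SDK expects: a
// description plus an execute function. Every command the model emits
// flows through the secured sandbox, so policy is enforced at execution
// time regardless of what the model intended.
function makeShellTool(sandbox: SecuredSandbox) {
  return {
    description: 'Run a shell command in the secured sandbox',
    execute: async ({ command }: { command: string }) => {
      const result = await sandbox.runCommand(command);
      return result.stdout;
    },
  };
}

// Demo with a fake sandbox, just to show the flow:
const fake: SecuredSandbox = {
  runCommand: async (cmd) => ({ stdout: `ran: ${cmd}`, exitCode: 0 }),
};
makeShellTool(fake)
  .execute({ command: 'ls /data' })
  .then((out) => console.log(out)); // prints "ran: ls /data"
```

The point of the shape: the tool's execute function is the one choke point between "the model asked" and "something happened," which is exactly where the secured sandbox sits.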

That is the whole point of the release.

You keep the same agent loop. You keep the same tool model. You keep the same sandbox provider. You add the execution layer underneath it.

And under the hood, that one line is doing real work: secureSandbox() installs the AgentSH binary into the sandbox, replaces /bin/bash with a policy-enforcing shell shim, and routes operations through kernel-level enforcement layers so file access, network connections, and process execution are checked against deterministic policy instead of relying on the model to comply.
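Conceptually, the shim sits between "the model asked for a command" and "the kernel runs it." The following toy sketch illustrates that check-before-exec pattern only; it is not AgentSH's implementation, which replaces the shell inside the sandbox and enforces policy in kernel-level layers:

```typescript
// Toy illustration of a policy-enforcing shim: every command is checked
// against deterministic rules before anything executes. The deny list
// here is made up for the example.
type Verdict = { allowed: boolean; reason: string };

const deniedCommands = new Set(['sudo', 'su', 'shutdown', 'reboot']);

function checkCommand(commandLine: string): Verdict {
  // Look at the binary being invoked, not the model's stated intent.
  const binary = commandLine.trim().split(/\s+/)[0] ?? '';
  if (deniedCommands.has(binary)) {
    return { allowed: false, reason: `command '${binary}' is denied by policy` };
  }
  return { allowed: true, reason: 'no matching deny rule' };
}

console.log(checkCommand('sudo rm -rf /').allowed); // false
console.log(checkCommand('ls -la /data').allowed);  // true
```

The decision is deterministic and happens regardless of what the model planned, which is the property the real enforcement layers provide.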

That makes this feel much more like modern TypeScript infrastructure than a traditional security product, because the policy is code, not an afterthought.

The package exposes policies as TypeScript objects, validated with Zod and organized around four rule families: file, network, command, and env. It also ships with presets like agentDefault, devSafe, ciStrict, and agentSandbox, so you can start with a sane baseline and then tune it for the workflow you actually have. Rules are evaluated top to bottom, and the first match wins.
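"First match wins" is easy to state but worth seeing. Here is a self-contained sketch of that top-to-bottom evaluation order for network rules, using illustrative types rather than the package's actual Zod schemas:

```typescript
// Illustrative rule shape: each rule either allows or denies a set of
// hosts. The package's real rules are Zod-validated objects across four
// families (file, network, command, env); this sketch only demonstrates
// first-match-wins evaluation.
type NetworkRule =
  | { allow: string[] }
  | { deny: string[] };

function isHostAllowed(host: string, rules: NetworkRule[]): boolean {
  // Walk rules top to bottom; the first rule mentioning the host decides.
  for (const rule of rules) {
    if ('allow' in rule && rule.allow.includes(host)) return true;
    if ('deny' in rule && rule.deny.includes(host)) return false;
  }
  return false; // no rule matched; this sketch defaults to deny
}

const rules: NetworkRule[] = [
  { allow: ['api.openai.com', 'api.stripe.com'] },
  { deny: ['169.254.169.254', 'metadata.google.internal'] },
];

console.log(isHostAllowed('api.openai.com', rules));   // true
console.log(isHostAllowed('169.254.169.254', rules));  // false
```

Because the first match decides, rule order is part of the policy: an early allow cannot be overridden by a later deny, which keeps evaluation predictable.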

So instead of telling the model to "be careful," you can express the boundary directly:

import { secureSandbox, adapters } from '@agentsh/secure-sandbox';
import { agentDefault } from '@agentsh/secure-sandbox/policies';

const policy = agentDefault({
  network: [
    { allow: ['api.openai.com', 'api.stripe.com'], ports: [443] },
    { deny: ['169.254.169.254', 'metadata.google.internal'] },
  ],
  file: [
    { allow: '/data/**', ops: ['read', 'write'] },
  ],
  command: [
    { deny: ['sudo', 'su', 'env', 'shutdown', 'reboot'] },
  ],
  env: [
    { deny: ['AWS_SECRET_ACCESS_KEY', 'GITHUB_TOKEN'] },
  ],
});

const sandbox = await secureSandbox(adapters.daytona(rawSandbox), { policy });

That one object changes the conversation.

The agent can call the APIs you approve. It cannot reach cloud metadata endpoints. It can work in the directories you allow. It cannot casually inherit secrets from the environment. It can run useful commands. It cannot escalate privileges just because the model decided to try.

That is the difference between "the agent should not do that" and "the agent cannot do that."

And that is why this works so well with the Vercel AI SDK specifically. The SDK is already a clean way to build the agent. @agentsh/secure-sandbox is not trying to replace that. It is giving it a stronger floor. The model still plans. The AI SDK still manages the tool loop. Your application still defines the tools. But the runtime under those tools can now enforce what is actually allowed to happen.

Once agents can act, the boundary that matters most is the one at execution time.

That is what this release is about: one line under the agent, and a policy model you can actually program.

Built by Canyon Road

We build Beacon and AgentSH to give security teams runtime control over AI tools and agents, whether supervised on endpoints or running unsupervised at scale. Policy enforced at the point of execution, not the prompt.
