
Governing AI Agents After You’ve Said Yes

The hardest problem in AI security isn’t stopping agents from doing bad things before you’ve authorized them. It’s stopping them from doing bad things after you have. Introducing Thoth v0.1.1 — runtime governance for AI agent tool calls.

Nyah Check

April 1, 2026 · 4 min read


When you authorize an AI agent to read invoices and search documents, you're making a decision in a moment of context: this agent, this task, this scope. What you're not doing is continuously verifying that the agent stays within that context as it runs. In practice, agents drift. They call tools they were never intended to call. They escalate privileges through chained tool calls that each looked harmless in isolation. They execute irreversible operations — sends, deletes, submissions — when they were only supposed to read.

Authorization is a point-in-time decision. Governance is continuous.

Today we're releasing Thoth — an open SDK for governing AI agent tool calls at runtime, across Python, TypeScript, and Go.

What Thoth does

Thoth sits between your agent and its tools. You define an approved_scope — the list of tool calls your agent is actually supposed to make for a given task — and Thoth enforces it in real time. Every tool call is checked against that scope and against the agent's behavioral baseline before it executes.

When an agent tries to call something outside its approved scope, Thoth doesn't just log it and move on. It enforces it: warn the agent, require human approval, or block the call outright. You choose how aggressive enforcement is — and you can escalate automatically.
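The per-call decision can be pictured roughly like this. This is a simplified illustration of the pattern, not Thoth's actual internals; the `evaluate` function, `Action` enum, and mode names are assumptions made for the sketch (only `approved_scope` and the mode labels come from the SDK's concepts):

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    WARN = "warn"
    STEP_UP = "step_up"   # pause and wait for human approval
    BLOCK = "block"

def evaluate(tool_name: str, approved_scope: list[str], mode: str) -> Action:
    """Check one tool call against the approved scope.

    Simplified: the real enforcer also scores calls against a
    behavioral baseline, not just scope membership.
    """
    if tool_name in approved_scope:
        return Action.ALLOW
    # Out of scope: the enforcement mode decides how aggressive to be.
    return {
        "observe": Action.ALLOW,   # log only, never block
        "step_up": Action.STEP_UP,
        "block": Action.BLOCK,
    }.get(mode, Action.WARN)
```

The key property is that every call passes through this gate before execution, so an out-of-scope call is decided on, not merely recorded.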

The SDK is lightweight. Local policy evaluation runs in under 15ms. Instrumentation is a single function call.

Getting started

```shell
# Python
pip install thoth-sdk

# TypeScript
npm install @atensec/thoth

# Go
go get github.com/atensecurity/thoth-go
```

```shell
export THOTH_API_KEY="thoth_live_your_key_here"
```

```python
from thoth import instrument

instrument(
    agent,
    agent_id="invoice-processor",
    approved_scope=["read_invoice", "search_docs"],
    enforcement="progressive",
)
```

That's it. Your agent now has runtime governance.

Enforcement modes

Observe — logs everything, blocks nothing. Use this to baseline your agent's behavior before you define policy.

Progressive — the default. Thoth escalates automatically: warns on the first anomaly, requires step-up authorization on the second, and blocks on the third.

Step-up — every out-of-scope tool call requires explicit human approval before it executes. The agent pauses, a notification goes out, and execution resumes only when a human approves or denies.

Block — immediate rejection of any out-of-scope call. Use this when the cost of a mistake is high enough that you'd rather break the agent than let it proceed.
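The progressive ladder described above can be sketched as a small per-session state machine. This is an illustration of the escalation pattern under the assumption of a simple anomaly counter; the class and its fields are hypothetical, not the SDK's implementation:

```python
class ProgressiveEnforcer:
    """Escalate on each out-of-scope call: warn -> step-up -> block."""

    LADDER = ["warn", "step_up", "block"]

    def __init__(self) -> None:
        self.anomalies = 0  # out-of-scope calls seen this session

    def decide(self, tool_name: str, approved_scope: list[str]) -> str:
        if tool_name in approved_scope:
            return "allow"
        self.anomalies += 1
        # First anomaly warns, second requires step-up approval,
        # third and every later one blocks.
        rung = min(self.anomalies, len(self.LADDER)) - 1
        return self.LADDER[rung]
```

The counter is per-session, so a single drifting agent climbs the ladder quickly while an agent that stays in scope never triggers it at all.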

Why we built this

We spent the last year building communication security infrastructure that monitors Slack, Teams, and Signal for compliance violations. That work taught us to spot the moment when authorized access stops making sense, long after the approval was granted. The same insight shaped Thoth's design.

Thoth addresses the post-authorization challenge. It treats governance as a runtime concern, not a deployment-time one. You define what your agent is supposed to do, and Thoth ensures it keeps doing that — and only that.

What's in v0.1.1

This release fixes critical mismatches between the Go and TypeScript SDKs and the enforcement API that shipped in v0.1.0. If you're on v0.1.0, upgrade. Full details in the changelog.

The three SDKs now have full feature parity: instrument() for generic agent wrapping, framework-specific adapters for LangChain, CrewAI, Anthropic, and OpenAI, typed errors, and a fail-open guarantee — if the Thoth API is unreachable, your agent runs unblocked.
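The fail-open guarantee is a deliberate availability trade-off: a governance outage should not take your agent down with it. The calling pattern looks roughly like this (a sketch only; `check_policy` stands in for the SDK's remote policy call and is hypothetical):

```python
def govern(tool_name: str, check_policy) -> str:
    """Fail-open wrapper: if the policy service is unreachable,
    let the tool call through rather than halting the agent."""
    try:
        return check_policy(tool_name)  # e.g. "allow" or "block"
    except ConnectionError:
        # Governance API unreachable: fail open so the agent keeps running.
        return "allow"
```

Whether fail-open is the right default depends on your risk model; for agents where a mistaken action is worse than downtime, the Block enforcement mode is the stricter complement.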

What's next

We're building session-scoped intent: the ability to specify not just which tools an agent is allowed to call, but what it's trying to accomplish in a specific session. The enforcer will use that stated intent to build a per-session expected action envelope and score tool calls against it.
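One way to picture the planned behavior, purely as illustration: the envelope could map tool names to expectation weights derived from the declared intent, and calls below a threshold get flagged. Every name and number here is an assumption about a feature that hasn't shipped, not the actual design:

```python
def decide(tool_name: str, envelope: dict[str, float], threshold: float = 0.5) -> str:
    """Score a tool call against a session's expected action envelope.

    envelope: hypothetical mapping of tool name -> expectation weight
    (1.0 = squarely in the envelope, absent = never expected).
    """
    score = envelope.get(tool_name, 0.0)
    return "allow" if score >= threshold else "flag"

# A session declared as "extract invoice totals" would weight reads
# highly and leave destructive tools out of the envelope entirely.
invoice_envelope = {"read_invoice": 1.0, "search_docs": 0.8}
```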

After that: step-up webhook callbacks, OpenFGA integration, and an MCP proxy for governing Claude Desktop and off-the-shelf agents without any code changes.

Get started

Documentation · GitHub · Request API key

If you're running AI agents in production and want to talk about governance, reach out.
