field_note // 03 · vector V / 03 · attestation · proof · 10 min read

Verifiability of agent runs.

vector   V / 03
status   in progress
updated  2026.05.14
length   ~2,100 words

A useful agent is one that takes actions in the world. A trustworthy agent is one whose actions can be checked — replayed, attested, and proved — by someone other than the platform that ran it. This note describes our work on the verifiability layer for autonomous systems: the cryptographic substrate that turns an agent run from a black box into a record an outside party can independently inspect.

The problem

Today, when an agent does something on behalf of a company, the evidence that the action was correct lives almost entirely on the platform that ran it. The platform logs a trace; the customer reads the trace; the customer takes the platform at its word. If something goes wrong — an unauthorized payment, a hallucinated approval, a tool call that violated policy — the only artifact is whatever the platform happened to record.

This is fine in the demo. It is not fine for finance ops, regulated environments, or any deployment where an external auditor, regulator, or counterparty needs to verify what happened. The current state of agent observability is "trust me." That's not a substrate we can build on.

The verifiability problem has three concrete questions, and the field has not been answering them well. Did the recorded steps actually happen, in that order? (Replay.) Did the claimed code, model, and configuration actually produce them? (Attestation.) And did the run stay inside policy, in a way a third party can check? (Proof.)

What we are building

A verifiability layer that answers all three, with cryptographic properties that make the record meaningful to someone who doesn't trust the runtime. The work has three threads: hash-chained run records for tamper-evident replay; attested execution (TEEs) that binds a trace to the code that produced it; and zero-knowledge proofs that a run stayed inside policy. Not yet a unified product — more a research program.

None of these techniques are individually new. The interesting work is in composing them: figuring out which parts of an agent run need which level of guarantee, and what the overhead looks like when the substrate is actually deployed.
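The cheapest of these building blocks, the hash-chained run record, can be sketched in a few lines. This is a minimal illustration, not the lab's actual record format: each entry commits to the hash of the previous one, so a verifier who holds the log can recompute every link and detect any retroactive edit without trusting the platform that wrote it.

```python
import hashlib
import json

def chain_append(log, event, prev_hash):
    """Append an event, binding it to the previous entry's hash."""
    entry = {"event": event, "prev": prev_hash}
    entry_hash = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append({**entry, "hash": entry_hash})
    return entry_hash

def verify_chain(log, genesis="0" * 64):
    """Recompute every link; any tampered entry breaks the chain."""
    prev = genesis
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"event": entry["event"], "prev": entry["prev"]},
                       sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

# Record a short run, then tamper with it.
log = []
h = "0" * 64
for step in ["tool_call:lookup", "llm_decision:approve", "tx:submit"]:
    h = chain_append(log, step, h)

assert verify_chain(log)
log[1]["event"] = "llm_decision:reject"   # retroactive edit
assert not verify_chain(log)
```

The chain alone only proves internal consistency; binding it to an identity still requires a signature over the head hash, which is where attestation enters.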

design rule The run record must be useful to someone who does not trust the platform. If verification requires taking our word for anything, the record is doing the wrong job — it's just a fancier log.

Why now

Two trends are converging. Agents are doing things with real consequences — payments, contracts, infrastructure changes — faster than the audit substrate has caught up. And the regulatory shape of AI is starting to require provenance: not just "what model did you use?", but "show me the run, signed, with the inputs and the policy state."

A platform that ships agents into regulated environments without a credible verifiability story is going to spend the next five years answering subpoenas. A platform that ships with one becomes the default for the customers who need to actually defend their deployment.

Applied work with the ecosystem

The Aventus ecosystem has been a useful first home for this work because the settlement layer is already a public verifiability surface — transactions are signed, ordered, and observable. The lab's work extends that property to the cognitive layer that produced the transaction in the first place: not just "this transfer happened," but "an agent of this identity, running this version, with this policy state, proposed it; this human approved it; here is the signed trace."
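One way to picture that binding is as a single digest over the claims the prose lists: agent identity, code version, policy state, trace, approver. The sketch below is an assumption about shape, not a real schema; in a deployed system this digest would be signed and anchored on the settlement layer rather than merely computed.

```python
import hashlib
import json

def run_commitment(agent_id, version, policy_hash, trace_hash, approver):
    # Digest binding a run to an identity, a code version, the policy
    # state it ran under, and the human who approved it. Field names
    # are illustrative, not the lab's actual record format.
    payload = json.dumps({
        "agent": agent_id,
        "version": version,
        "policy": policy_hash,
        "trace": trace_hash,
        "approved_by": approver,
    }, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

a = run_commitment("agent-7", "v1.4.2", "pol-abc", "trace-123", "alice")
b = run_commitment("agent-7", "v1.4.2", "pol-XYZ", "trace-123", "alice")
assert a != b  # any change to the policy state changes the commitment
```

Because every claim lives under one digest, a counterparty checking the transfer also implicitly checks the version, the policy state, and the approval that produced it.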

The open questions we are still inside are around overhead (how much of a slowdown is the verifiability tax, and where can it be amortized?), granularity (which decisions deserve a TEE attestation, which deserve only a hash-chain entry, which deserve a ZK proof?), and discoverability (how does the verifier even find the relevant trace at the moment it matters?).
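The granularity question can be made concrete with a toy tiering rule. The step kinds and tier names below are assumptions for the sketch, not the lab's actual taxonomy; the point is the shape of the decision, where expensive guarantees attach only to steps that cross a trust boundary.

```python
# Illustrative granularity policy: every step gets a hash-chain entry,
# state-changing proposals add a TEE attestation, and external
# commitments add a ZK policy proof.
def tier_for(step_kind: str) -> str:
    if step_kind == "commit":    # externally visible, irreversible
        return "zk_proof"
    if step_kind == "propose":   # state-changing inside the run
        return "tee_attestation"
    return "hash_chain"          # reads, retrievals, intermediate steps

run = ["read", "read", "propose", "commit"]
tiers = [tier_for(k) for k in run]
# Expensive proofs are reserved for trust-boundary steps; everything
# else is amortized into the cheap hash chain.
```

Under a rule like this, the verifiability tax scales with the number of consequential actions rather than the length of the run, which is one plausible answer to the amortization question.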

// vector V / 03 — verifiability of agent runs
// partners aventus ecosystem, et al.
// open questions overhead · granularity · discoverability
// next note agentic frameworks for zero-human companies
// open a channel with the lab →