field_note // 01 · vector V / 01 · agents · capital

Agents that move money on-chain.

vector: V / 01
status: in progress
updated: 2026.05.14
length: ~2,400 words

Most discussion of "AI agents" stops at the cognitive layer — what the model reasons about. The harder problem is the rail layer: what happens when an agent decides to actually move value. This note describes our work on the architectural shape an autonomous agent has to take when settled capital is on the line — and why that shape looks different from anything you can build on legacy financial rails.

The problem

Companies that operate on programmable capital still run their back office on tools designed for a different world. Bills arrive as PDFs in a shared inbox. Invoices live in spreadsheets. The books get closed monthly — sometimes — and idle balances sit unused while the operating team manages around them. The cognitive work of finance ops is enormous, the action surface is programmable, and nobody has joined the two with anything that an operator can actually trust.

The obvious answer is automation. The honest answer is that automation here has been structurally unsafe. Anything that moves money needs custody; custody means keys; keys mean the automation vendor becomes a regulated entity, a target, or both. Most "AI for finance" tools today resolve this by becoming read-only dashboards — useful, but stopping exactly where the interesting work begins.

We think the right shape is narrower and more interesting: an agent harness that does all the cognitive work — classification, extraction, matching, proposing — but executes nothing without a signature from the operator's own wallet. The agent is the analyst; the operator is the principal; the wallet is the only thing with custody. None of those three can act outside their scope.

The architectural commitments

Before any specific feature, the work has to make a few commitments that shape everything that comes after. These are not preferences; they are the constraints that make the harness legible to operators, counterparties, and, eventually, regulators.

design rule: Agents propose; humans approve; wallets execute. Each of the three is bounded; none can act outside their scope. This isn't a UX preference — it is the architectural commitment that makes the whole thing legible to regulators, to operators, and to the agents themselves.
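The boundary can be sketched as three narrow interfaces. This is a minimal illustration of the separation, not the harness's actual API — every name here is hypothetical, and a toy HMAC stands in for a real wallet signature:

```python
from dataclasses import dataclass
import hashlib
import hmac


@dataclass(frozen=True)
class Proposal:
    """What the agent is allowed to produce: an inert description of an action."""
    payee: str
    amount: int   # smallest unit of the settlement asset
    memo: str


def digest(p: Proposal) -> bytes:
    # Canonical bytes the operator signs. The agent can construct this,
    # but cannot produce a valid signature over it.
    return f"{p.payee}|{p.amount}|{p.memo}".encode()


class OperatorWallet:
    """The only component holding key material (a toy HMAC key here)."""

    def __init__(self, key: bytes):
        self._key = key

    def approve(self, p: Proposal) -> bytes:
        # The operator's decision, bound to one specific proposal.
        return hmac.new(self._key, digest(p), hashlib.sha256).digest()

    def execute(self, p: Proposal, signature: bytes) -> bool:
        # Refuses anything not signed over exactly this proposal.
        expected = hmac.new(self._key, digest(p), hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)


class Agent:
    """Cognitive layer only: turns a document into a Proposal, nothing more."""

    def propose(self, line: str) -> Proposal:
        payee, amount, memo = line.split(",")
        return Proposal(payee.strip(), int(amount), memo.strip())
```

The property worth noticing is that a `Proposal` is worthless on its own, and a signature is bound to one exact proposal: change the amount after approval and execution fails.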

Why the rails matter as much as the model

The popular framing of "AI agents" puts almost all of the weight on the model — better reasoning, longer context, sharper tools. We think that framing underrates the rail. A model that can correctly classify a transaction is only useful if there is a settled, programmable, low-friction way to act on the classification. On legacy financial rails, that loop is broken: action requires human-mediated APIs, batch settlement, three-day clearing. The model is fast; the rail is slow; the agent ends up as a glorified email drafter.

On-chain rails close the loop. A wallet signature is final. A stablecoin transfer settles in seconds. The cost surface of an autonomous action collapses from days to seconds, which is the only regime where an agent harness actually does something interesting. The agent's value isn't its prose; it is the speed and confidence with which a correct decision becomes a settled outcome.

The result is something a little unusual: a research direction that is half ML and half mechanism design. We spend as much time on transaction construction, signing flows, and policy primitives as we do on prompts and evals.

Applied with the ecosystem

The first work in this direction runs inside the Aventus ecosystem and a small set of partner companies. Working alongside the ecosystem gives us three things the research direction needs: a real settlement surface to prototype against, a counterparty identity layer that makes "who is the agent transacting with?" a tractable question, and a set of design partners willing to deploy early versions in environments where the consequences are real.

This is not a product announcement — it's a research direction. The hard questions we are still working on are around confidence calibration (when is the agent allowed to act with less review?), policy expressiveness (how does an operator describe acceptable behaviour without writing code?), and graceful failure (what happens when the agent is wrong, and how is that visible before the loss compounds?).
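One way to make the calibration question concrete is a policy that maps an agent's confidence and a proposal's size onto a review level. The names and thresholds below are illustrative assumptions, not the lab's actual policy language:

```python
from dataclasses import dataclass
from enum import Enum


class Review(Enum):
    AUTO_QUEUE = "auto_queue"    # batched for one-click operator approval
    FULL_REVIEW = "full_review"  # operator inspects the proposal in detail
    BLOCK = "block"              # the agent may not even surface this


@dataclass(frozen=True)
class Policy:
    """Operator-set limits; the agent can read these, never change them."""
    auto_limit: int        # max amount eligible for reduced review
    hard_limit: int        # above this, the proposal is blocked outright
    min_confidence: float  # calibration floor for reduced review


def review_level(amount: int, confidence: float, policy: Policy) -> Review:
    # Size gates come first: no confidence score overrides the hard limit.
    if amount > policy.hard_limit:
        return Review.BLOCK
    if amount <= policy.auto_limit and confidence >= policy.min_confidence:
        return Review.AUTO_QUEUE
    return Review.FULL_REVIEW
```

Even this toy version shows why the questions interact: the threshold `min_confidence` is only meaningful if the agent's confidence scores are actually calibrated, and the `BLOCK` branch is the simplest form of graceful failure — the error surfaces before it can compound.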

// vector: V / 01 — agents that move money on-chain
// partners: aventus ecosystem, et al.
// open questions: calibration · policy IR · failure modes
// next note: guardrails for agent behaviour
// open a channel with the lab →