Last updated 2026-04-25
Trustless agents — what the term means and what it requires
A trustless agent is one whose identity, reputation, and behavior can be verified by any observer, onchain, without relying on a central provider to vouch for it. ERC-8004 makes that practical with three composable primitives: a verifiable identity, an auditable reputation trail, and an independent attestation mechanism. Each maps to one of the three onchain registries the standard defines.
If you’ve read What is ERC-8004? and want the conceptual framing for why the standard matters, this is the place.
Why “trustless” is a load-bearing word
The word gets misread constantly. “Trustless” does not mean “don’t trust the agent.” It means you don’t need to trust the platform the agent runs on.
A centralized platform can fabricate ratings, suppress negative reviews, revoke an agent’s identity, or quietly swap out the model behind a public endpoint. With no independent record, users have no way to catch any of that.
Trustless infrastructure removes that dependency. The verification layer — who the agent is, what its history looks like, what third parties have certified about it — lives on a public chain that any reader can inspect without asking anyone’s permission. The agent’s output can still be wrong. It can still be biased, slow, or expensive. “Trustless” says nothing about quality. What it does is give you a public, unforgeable record so you can actually measure quality for yourself.
The three properties of a trustless agent
A trustless agent needs exactly three things. Anything less and some part of the verification still requires trusting an intermediary.
- Verifiable identity. A unique, durable identifier that no platform can revoke. Anyone holding the identifier can look it up and confirm the agent’s metadata — name, endpoints, description — without relying on the platform’s API being up or the platform being honest. ERC-8004 delivers this through the Identity Registry, which mints an ERC-721 token for each agent. The token lives onchain as long as the chain does.
- Auditable reputation. A signed history of how the agent has actually performed, written by the clients who used it, public to anyone, verifiable by anyone. Client signatures tie each feedback record to a specific Ethereum address. No platform can add a review it didn’t submit or remove one it doesn’t like. ERC-8004 delivers this through the Reputation Registry, which records FeedbackSubmitted and FeedbackRevoked events for each agent.
- Independent attestation. A way for third parties to certify specific properties of an agent — safety, accuracy, cost-efficiency — without needing the platform’s approval to do it. The attestation is signed by the certifying party’s Ethereum address, so it’s attributable and public. ERC-8004 delivers this through the Validation Registry, where anyone can request a scored evaluation from a named validator.
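To make the three primitives concrete, here is a minimal in-memory sketch of what the registries track. The class, function, and field names are simplified illustrations, not the actual ERC-8004 contract ABI; onchain, each write is a transaction and each record is an event or token, not a Python dict.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRegistries:
    """Illustrative model of the three ERC-8004 registries (not the real ABI)."""
    agents: dict = field(default_factory=dict)       # token_id -> metadata (Identity Registry)
    feedback: list = field(default_factory=list)     # FeedbackSubmitted-style records (Reputation Registry)
    validations: list = field(default_factory=list)  # scored attestations (Validation Registry)
    _next_id: int = 1

    def register_agent(self, owner: str, metadata_uri: str) -> int:
        """Mint an ERC-721-style identity token for a new agent."""
        token_id = self._next_id
        self._next_id += 1
        self.agents[token_id] = {"owner": owner, "metadataURI": metadata_uri}
        return token_id

    def submit_feedback(self, agent_id: int, client: str, score: int) -> None:
        """Record client feedback; the client address makes it attributable."""
        self.feedback.append({"agent": agent_id, "client": client,
                              "score": score, "revoked": False})

    def submit_validation(self, agent_id: int, validator: str,
                          score: int, metadata_uri: str) -> None:
        """Record a third-party attestation with a pointer to its evidence."""
        self.validations.append({"agent": agent_id, "validator": validator,
                                 "score": score, "metadataURI": metadata_uri})

reg = AgentRegistries()
aid = reg.register_agent(owner="0xOperator", metadata_uri="ipfs://agent-card")
reg.submit_feedback(aid, client="0xClientA", score=90)
reg.submit_validation(aid, validator="0xAuditor", score=85, metadata_uri="ipfs://report")
```

The point of the sketch is the separation of concerns: identity is minted once, reputation accumulates from many client addresses, and attestations come from named validators. No single party controls all three.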
ERC-8004 implements all three; the next section catalogs what it doesn’t claim to do.
What this is NOT
The boundary here is worth drawing sharply, because the term “trustless” attracts over-broad claims.
Not decentralized compute. A trustless agent’s model can run on AWS, GCP, or any other centralized infrastructure. The compute layer is irrelevant to the identity and reputation layer. The agent’s onchain record is trustless even if every GPU behind it is rented from a single vendor.
Not provable correctness. ERC-8004 does not cryptographically verify the agent’s outputs. There is no zero-knowledge proof that the agent returned the right answer. Validation scores reflect a named evaluator’s judgment, not a mathematical proof. An attestation from a trusted validator is meaningful; it is not a formal guarantee.
Not free. Every write to the three registries is an onchain transaction. Registering an agent, submitting feedback, requesting a validation, responding with a score — all cost gas. L2s like Base and Mantle make this cheap; mainnet Ethereum does not. The cost is real and the standard makes no attempt to hide it.
Not Sybil-proof out of the box. The onchain primitives don’t prevent someone from creating many wallets and flooding an agent’s reputation with fake feedback. That problem is not solved at the contract layer — it’s pushed to indexers. This explorer penalizes feedback from wallets with thin onchain history. Other consumers can implement their own weighting. The standard intentionally leaves Sybil policy to the application layer, because there is no single policy that fits every use case.
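As one hedged example of what application-layer Sybil policy can look like, an indexer might discount feedback from wallets with little onchain history. The thresholds and weighting curve below are invented for illustration; they are not this explorer's actual formula.

```python
def feedback_weight(wallet_tx_count: int, wallet_age_days: int,
                    min_txs: int = 10, min_age_days: int = 30) -> float:
    """Weight a feedback record by the submitting wallet's onchain history.
    Brand-new, low-activity wallets contribute little; thresholds are illustrative."""
    activity = min(wallet_tx_count / min_txs, 1.0)
    age = min(wallet_age_days / min_age_days, 1.0)
    return activity * age  # 0.0 (fresh wallet) .. 1.0 (established wallet)

def weighted_score(records: list[dict]) -> float:
    """Aggregate feedback scores, discounting Sybil-like wallets."""
    total = sum(feedback_weight(r["txs"], r["age"]) * r["score"] for r in records)
    weight = sum(feedback_weight(r["txs"], r["age"]) for r in records)
    return total / weight if weight else 0.0

records = [
    {"score": 100, "txs": 1, "age": 1},     # day-old wallet: heavily discounted
    {"score": 60, "txs": 500, "age": 400},  # established wallet: full weight
]
```

Here a burst of five-star feedback from fresh wallets barely moves the aggregate, because the weight function drives their contribution toward zero while leaving established wallets untouched.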
None of these are gaps specific to ERC-8004. They’re where onchain identity and reputation infrastructure stops and other layers — compute, formal verification, application policy — have to pick up.
Why now
Agents have been a research topic for years. What changed recently is that agents started spending money.
An agent that summarizes documents or drafts emails makes a mistake and a person fixes it. An agent that transfers funds, signs contracts, authorizes API calls with quota costs, or routes traffic between services makes a mistake with a price tag. The cost of trusting the wrong agent is no longer measured in annoyed users; it’s measured in dollars and liability.
The historical pattern for high-stakes systems is identity first, then reputation. Web1 had domain names but no public reputation layer. Web2 added platform ratings but locked them inside walled gardens where the platform controlled what it showed. Onchain identity-plus-reputation is the next step: the record exists independent of any platform and can be read by any consumer.
The standard arrived when agent use moved from “interesting demo” to “production system with budget.”
How ERC-8004 stacks against adjacent ideas
MCP and A2A. Model Context Protocol (Anthropic) and Agent-to-Agent (Google) define how agents communicate — how they exchange messages, invoke tools, and delegate subtasks. They don’t define who the agent is or how to check its history. ERC-8004 sits underneath both as the identity and accountability layer. An agent that speaks MCP or A2A can also hold an ERC-8004 identity; the two layers don’t conflict and can easily compose.
Verifiable AI and proof-of-inference. Projects in the zkML and TEE space (like Giza and ORA) work on proving that a specific computation was run honestly — that a model with a known hash produced a specific output. This is about correctness, not identity. The Validation Registry can record the result of a zk-verified evaluation as easily as a human one: a validator runs a zero-knowledge proof, then submits the outcome as a score with a metadataURI pointing at the proof. The two approaches complement each other rather than compete.
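The validator flow described above can be sketched as follows. Both function names are hypothetical, the "proof" is a stand-in digest rather than a real zk proof, and the score is a placeholder; the only structural claim taken from the standard is that the score goes onchain while the proof artifact is referenced offchain via a metadataURI.

```python
import hashlib

def run_zk_evaluation(model_hash: str, test_inputs: list[str]) -> tuple[int, str]:
    """Stand-in for a zkML evaluation. A real validator would run an actual
    proving system; here we just derive a digest to reference as 'the proof'."""
    digest = hashlib.sha256((model_hash + "".join(test_inputs)).encode()).hexdigest()
    score = 87  # placeholder for a measured accuracy score
    return score, digest

def submit_validation_response(agent_id: int, score: int, proof_digest: str) -> dict:
    """Illustrative shape of a Validation Registry response: the score lives
    onchain, the full proof is referenced offchain via metadataURI."""
    return {
        "agentId": agent_id,
        "score": score,
        "metadataURI": f"ipfs://{proof_digest}",  # pointer to the proof, not the proof itself
    }

score, proof = run_zk_evaluation("0xmodelhash", ["input-1", "input-2"])
response = submit_validation_response(agent_id=1, score=score, proof_digest=proof)
```

The design choice worth noting: the registry doesn't care whether the evidence behind a score is a human audit report or a zero-knowledge proof. Both reduce to a score plus a URI, which is what lets the two approaches compose.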
Web2 reputation systems. Star ratings on platforms like G2, Product Hunt, or app stores work fine within the platform. The problem is portability: an agent can’t carry its G2 rating to a different aggregator. The platform owns the data. ERC-8004 reputation is portable because the record is on a public chain — any consumer can read it, weight it, and display it without the platform’s permission. The data is also auditable in a way platform ratings aren’t: anyone can check whether a wallet submitting five-star feedback is real.
These approaches solve distinct problems. They compose rather than compete.
What you can build on top
The three registries are primitives. A few patterns that fall out of them naturally:
An agent that moves from one platform to another can carry its full history — feedback, validations, revocation patterns — without the previous platform’s cooperation. Consumers on the new platform look at the chain, not the platform’s export.
A regulator could require agents in a restricted domain to hold a current validation from a named auditor. The onchain record is the compliance artifact; no private certificate database is needed.
When agent A delegates a task to agent B, both identities are onchain and the delegation is recordable. That’s useful not just for trust, but for debugging: when something goes wrong in a chain of agents, you can trace who authorized what.
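A small sketch of why recorded delegations help with debugging. The record shape and the single-delegator assumption are invented for illustration; the point is only that once delegations are onchain, tracing authorization back from a failure is a mechanical walk over public data.

```python
# Hypothetical delegation records: each entry says one onchain agent
# authorized a task for another.
delegations = [
    {"from_agent": 1, "to_agent": 2, "task": "fetch-data"},
    {"from_agent": 2, "to_agent": 3, "task": "summarize"},
]

def trace_authorization(failed_agent: int, records: list[dict]) -> list[int]:
    """Walk delegation records backwards from the failing agent to list
    every agent that (transitively) authorized its work."""
    chain = [failed_agent]
    current = failed_agent
    while True:
        parents = [r["from_agent"] for r in records if r["to_agent"] == current]
        if not parents:
            break
        current = parents[0]  # simplifying assumption: one delegator per task
        chain.append(current)
    return chain

# If agent 3 fails, the trace surfaces agents 2 and 1 as the authorizing chain.
```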
An onchain reputation trail is also evidence for insurance. An insurer pricing a policy against agent failure can draw on a public claims history, and after a loss it has public data rather than the operator’s say-so.
Where to go next
- The cornerstone: What is ERC-8004?
- The three registries: /learn/registries
- The reputation formula: /reputation-v1
- Live agents on this explorer: /agents
- Live validators: /validators
- The canonical EIP: https://eips.ethereum.org/EIPS/eip-8004
- Reference contracts: https://github.com/erc-8004/erc-8004-contracts
FAQ
Does "trustless" mean I shouldn't trust the agent?
No — it means you don’t need to trust the platform hosting it. The agent itself can still be unreliable, biased, or wrong. Trustless refers to the verification layer (you can check who the agent is, who has rated it, and who has validated it without trusting an intermediary), not to the agent’s output quality.
How is this different from "decentralized AI"?
Most decentralized-AI projects focus on running models on shared compute. Trustless agents focus on identity and reputation — the question of who an agent is and how it has behaved, regardless of where the model runs. The two layers compose; an agent can run on centralized infra and still expose a trustless identity.
What's the minimum a trustless agent needs?
A verifiable identity (so anyone can pin to a specific agent), a signed reputation trail (so its history is auditable), and an attestation mechanism (so third parties can certify specific properties). ERC-8004’s three registries deliver exactly these primitives.
Can a trustless agent be Sybil-resistant out of the box?
No. The onchain primitives are necessary but not sufficient — Sybil resistance comes from how indexers weight signals (this explorer penalizes feedback from low-history wallets). The standard intentionally pushes Sybil policy to the application layer.
Why does this matter now?
Agents are starting to act on behalf of users and money. When an agent transfers funds, signs contracts, or routes traffic, the cost of trusting the wrong one rises sharply. Trustless infrastructure — verifiable identity plus auditable behavior — is the primitive that lets that work at scale.