ERC-8004 Explorer

Last updated 2026-04-25

ERC-8004 Validation Registry

The ERC-8004 Validation Registry is a request/response system for third-party agent scoring. Anyone can open a validation request against a named agent and a chosen validator; the validator responds with a score from 0 to 100 against a tag, and the result is written permanently onchain.

If you’ve read The three registries and want the full picture of the validation layer, this is the place.

The request/response shape

A validation starts with a ValidationRequested event. The requester names the agent by tokenId, picks a validator by address, assigns a bytes32 tag, and may attach a metadataURI that points at the test specification or evaluation brief.

event ValidationRequested(
    uint256 indexed tokenId,
    address indexed requester,
    address indexed validator,
    bytes32 tag,
    string metadataURI
);

When the named validator submits their response, the contract emits ValidationResponded:

event ValidationResponded(
    uint256 indexed requestId,
    uint8 score,
    string responseURI
);

The requestId ties the response to the original request. The score is a uint8 capped at 100. The responseURI is optional — a validator can point it at a detailed report or leave it empty. Both events are onchain and readable by any observer without asking this explorer or any other aggregator.
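As a sketch of how an off-chain consumer might pair the two events, here is a minimal Python example. The field names and the requestId bookkeeping are illustrative assumptions for an indexer's own records, not part of the event payloads shown above:

```python
# Hypothetical indexer-side pairing of ValidationRequested and
# ValidationResponded events. Field names are illustrative.

def open_requests(requests, responses):
    """Return requests that have no matching response yet."""
    answered = {r["request_id"] for r in responses}
    return [q for q in requests if q["request_id"] not in answered]

requests = [
    {"request_id": 1, "token_id": 42, "validator": "0xabc", "tag": "safety"},
    {"request_id": 2, "token_id": 7,  "validator": "0xdef", "tag": "accuracy"},
]
responses = [
    {"request_id": 1, "score": 87, "response_uri": ""},
]

print([q["request_id"] for q in open_requests(requests, responses)])  # → [2]
```

Because both events are public, any observer can rebuild this pairing from logs alone; no explorer or aggregator sits in the path.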

Why request/response

Reputation feedback is a push model: a client interacts with an agent and posts a FeedbackSubmitted event on their own initiative. Anyone can do it at any time.

Validation is a pull model. The requester explicitly names a validator (a specific Ethereum address), and the contract only accepts a response from that address. The validator must act; no one else can fill the role.

This matters for use cases where the evaluator’s identity is load-bearing. A regulated financial agent might need a score from a named audit firm. A safety-certified agent might need a score from a red-team organization the end consumer already trusts. The request/response model lets the requester pick exactly who evaluates the agent, and the onchain record shows exactly who responded.

The requester pays gas to open the request; the validator pays gas to respond. No off-chain coordination is needed to link the two: the contract ties each response to its original request. Payment and service terms, if any, are arranged between the parties outside the contract.

For contrast, the Reputation Registry collects feedback from any signer. Validation adds the named-party constraint.

Scores: 0 to 100

The Reputation Registry uses a uint16 rating in the range 0–10000, normalized to 0.0–1.0. Validation uses a uint8 in the range 0–100.
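A consumer comparing the two sources has to put them on a common scale. A minimal sketch of that normalization, with range checks mirroring the onchain bounds (the function names are illustrative, not from the standard):

```python
def normalize_feedback(rating: int) -> float:
    """Reputation Registry: uint16 rating, 0-10000, normalized to 0.0-1.0."""
    if not 0 <= rating <= 10000:
        raise ValueError("rating out of range")
    return rating / 10000

def normalize_validation(score: int) -> float:
    """Validation Registry: uint8 score, 0-100, normalized to 0.0-1.0."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return score / 100

print(normalize_feedback(8700))   # → 0.87
print(normalize_validation(87))   # → 0.87
```

On the normalized scale a rating of 8700 and a score of 87 express the same quantity, which is what lets aggregators compare feedback and validation along the same tag.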

The graded scale exists for the same reason a test suite reports a percentage: a validator certifying an agent for safety needs to express “passed 87 of 100 attack scenarios,” not just “passed” or “failed.” A binary result discards information that regulators, insurers, and agent orchestrators need to make decisions.

The score is capped at 100. A uint8 can hold values up to 255, but the contract rejects anything above 100 to prevent validators from accidentally or intentionally submitting out-of-range values. Once written, the score lives onchain indefinitely. There’s no expiry or decay; consumers who want to weight recent scores more heavily do so in their own indexing logic.

What “tags” mean for validation

Tags in the Validation Registry use the same bytes32 system as the Reputation Registry. That alignment is intentional. A client submitting safety feedback and a validator submitting a safety score are both tagging the same dimension, which lets aggregators query or compare across both sources.

Common tags for validation reflect the types of structured evaluation that make sense for agents: safety (resistance to adversarial inputs), accuracy (correctness against a defined test set), robustness (behavior under distribution shift), and cost-efficiency (compute or token consumption per task). Tags are free-form bytes32 values; any byte sequence can serve as a tag. The standard doesn’t enumerate them; indexers and consumers define which ones they recognize.
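One common convention for short human-readable tags is to right-pad the ASCII bytes with zeros to fill the bytes32 slot, matching how Solidity casts short string literals. A sketch of that encoding in Python (the convention is an assumption here; the standard only requires that the tag be a bytes32 value):

```python
def to_bytes32(tag: str) -> bytes:
    """Encode a short ASCII tag as a right-zero-padded 32-byte value."""
    raw = tag.encode("ascii")
    if len(raw) > 32:
        raise ValueError("tag longer than 32 bytes")
    return raw.ljust(32, b"\x00")

safety = to_bytes32("safety")
print(len(safety))                   # → 32
print(safety.rstrip(b"\x00"))        # → b'safety'
```

Whatever convention a requester uses, the validator and any downstream indexer must agree on it, or two parties can tag the same dimension with different bytes32 values.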

The metadataURI in the request is where a tag gets meaning. A “safety” request with a URI pointing at a published red-team protocol tells the validator what “safety” means in this context. Without the URI, “safety” is ambiguous. Attestation is only as useful as its test definition.

Validators on this explorer

The validators index on this explorer lists every address that has responded to a validation request across indexed chains. Each row shows the validator’s total request count, how many they’ve completed, how many are still pending, their average response time, and the number of unique agents they’ve evaluated. Filter by network to narrow the list to a single chain, then click into any validator to see their recent validations with the agent, tag, and response score on each row.

A validator with a thin history isn’t necessarily less capable. They may be new, or they may specialize in a tag that few agents have requested. The pending count on the index is often the more useful signal: a validator with twenty requests and zero completed is probably absent, while one with a high completion ratio and a low average response time is actively serving requests. History is context for the judgment, not a verdict.
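The completion-ratio reading above can be sketched as a simple ranking over index rows. The row fields and the scoring rule are illustrative, not the explorer's actual logic:

```python
def completion_ratio(row):
    """Fraction of a validator's requests that received a response."""
    total = row["completed"] + row["pending"]
    return row["completed"] / total if total else 0.0

rows = [
    {"addr": "0xaaa", "completed": 0,  "pending": 20},  # likely absent
    {"addr": "0xbbb", "completed": 18, "pending": 2},   # actively serving
]

ranked = sorted(rows, key=completion_ratio, reverse=True)
print([r["addr"] for r in ranked])  # → ['0xbbb', '0xaaa']
```

A fuller heuristic would fold in average response time, but even the bare ratio separates an absent validator from an active one.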

Validators aren’t approved or listed by any central authority. If an address has submitted a ValidationResponded event, it appears in the index. Consumers decide independently whether they find a given validator credible. The standard only provides the onchain record.

Common failure modes

Validator never responds. The named validator address is locked at request time. If the validator goes offline, loses their key, or decides not to respond, the request stays open with no score. The contract has no onchain timeout in v1. There’s no escrow; payment and SLA terms are off-chain between the requester and validator. If a request sits unanswered, the requester’s only option is to open a new request to a different validator.

Validator collusion. A validator who is captured (paid to issue inflated scores, or acting as a shill for an agent they control) can submit maximum scores without the contract detecting anything wrong. The onchain mechanism only verifies that the response came from the named address. The guard against this is the validator’s own history. A validator who consistently scores every agent at 95–100 across all tags shows a pattern that any careful consumer will notice. The validators index on this explorer makes that pattern visible.
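A consumer could flag that pattern mechanically. A minimal sketch, where the thresholds (minimum sample size, mean, spread) are illustrative policy choices, not part of the standard:

```python
from statistics import mean, pstdev

def looks_captured(scores, high=95, spread=3, min_samples=5):
    """Heuristic: flag a validator whose scores cluster uniformly near the top.
    Thresholds are illustrative; tune them to your own risk tolerance."""
    return (
        len(scores) >= min_samples
        and mean(scores) >= high
        and pstdev(scores) <= spread
    )

print(looks_captured([97, 98, 99, 100, 98, 97]))  # → True
print(looks_captured([62, 88, 45, 91, 73, 80]))   # → False
```

A flag like this is a prompt for scrutiny, not proof of collusion; a validator who only evaluates pre-screened agents could trip it honestly.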

Tag mismatch. A “safety” validation means nothing if the validator ran a cost-efficiency test against it. The metadataURI field exists specifically to anchor what the tag means in a given request. A request with an empty metadataURI and a tag of “safety” produces an onchain score with no auditable basis. Consumers should treat unanchored validation requests as weaker signals than those with a published test definition at the URI.
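That weighting can be expressed directly when aggregating scores. The 0.5 discount below is an illustrative consumer policy, not anything the standard prescribes:

```python
def validation_weight(score: int, metadata_uri: str) -> float:
    """Normalize a validation score, down-weighting requests that published
    no test definition. The 0.5 discount is an illustrative policy."""
    normalized = score / 100
    return normalized if metadata_uri else normalized * 0.5

print(validation_weight(90, "ipfs://redteam-protocol"))  # anchored request
print(validation_weight(90, ""))                         # unanchored request
```

The exact discount matters less than the asymmetry: a score backed by a published test definition should count for more than one with no auditable basis.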

FAQ

Who can be a validator?

Anyone willing to publish their address and respond to validation requests. The standard doesn’t gate validator participation, but consumers should look at a validator’s history before trusting their scores. The validators page on this explorer shows each validator’s volume and per-tag distribution.

How is a validation different from feedback?

Feedback is unsolicited and comes from clients who used the agent. Validation is solicited (someone explicitly asks for it) and comes from a named validator who is expected to run a defined test. Feedback measures observed quality; validation measures certified quality against a tag.

What does a validator actually run?

That’s between the requester and the validator. Common patterns include a fixed test suite for the named tag, a held-out evaluation dataset, or a structured red-team prompt set. The metadata URI can attach the test definition so the score is reproducible.

Can a validator be replaced after a request opens?

No. The validator address is set at request time and cannot be changed. If the named validator never responds, the request remains open — anyone can read it but no score is ever written. The requester can open a new request to a different validator.

Are validator scores trustworthy?

As trustworthy as the validator. The onchain mechanism only enforces that the response signature matches the named address. Quality of judgment depends on the validator’s own track record, which is itself queryable through the validation history this explorer publishes.