Demos

Clone-and-run, not slideware.

Six runnable repos. Each one isolates a different layer of the substrate AgentLair runs on: behavioral commitment credentials that verify across organizations, pay-per-call MCP tools settled on Base, an audit chain that decides disputes from observed behavior, and drop-in attestation wrappers for the three TypeScript agent frameworks people actually ship (LangChain.js, the OpenAI Agents SDK, and the Vercel AI SDK). Clone any of them and follow the README. The credentials in the cards below resolve to public verify endpoints right now.

01

Behavioral Commitment Credentials

What this proves: an agent can sign what it has committed to, and any third party can verify the credential without trusting the issuer's UI.

Register anonymously at agentlair.dev, issue a credential on the BCC-Claims profile, retrieve it from the public endpoint, and verify the signature against AgentLair's DID. The repo is around forty lines of TypeScript. Signing uses eddsa-jcs-2022, the credential is a W3C Verifiable Credential, and the README walks through every byte of the canonical hash.
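The flow fits in a handful of `fetch` calls. A minimal sketch follows; every endpoint path and payload field in it is an assumption for illustration, and the repo's README documents the real routes.

```ts
// Issue -> retrieve -> verify, end to end. Endpoint paths and payload fields
// below are illustrative assumptions; the repo's README has the real routes.
const BASE = "https://agentlair.dev/api"; // assumed base URL

// 1. Issue a credential on the BCC-Claims profile (anonymous registration already done).
const issued = await fetch(`${BASE}/v1/credentials`, {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({
    profile: "BCC-Claims",
    claim: { commitment: "stay within a 100 USD spending cap" },
  }),
}).then((r) => r.json());

// 2. Retrieve the credential from its public endpoint.
const credential = await fetch(`${BASE}/v1/credentials/${issued.id}`).then((r) => r.json());

// 3. Ask the public verify endpoint to check the eddsa-jcs-2022 proof against AgentLair's DID.
const verification = await fetch(`${BASE}/v1/credentials/${issued.id}/verify`).then((r) => r.json());

console.log(credential.proof?.cryptosuite, verification.valid); // "eddsa-jcs-2022" true
```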

Live bcc_e64iJpEE6ZnUAklMDzdN (verify endpoint, returns valid:true)
02

Pay-per-call MCP via x402

What this proves: an MCP tool can charge per call and settle on-chain inside the same request, with no contract and no signup ahead of time.

The demo calls the `lookup_agent` tool on trust-mcp.agentlair.dev. The tool returns 402 with a quote, the client pays 0.005 USDC on Base, retries the call, and reads the settlement receipt embedded in the tool response. Around thirty lines. A funded test wallet runs the whole flow end-to-end in roughly twenty seconds.
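A client-side sketch of that loop, assuming the `x402-fetch` wrapper over a viem test account; the endpoint path and argument shape are placeholders, and the repo shows the exact wiring.

```ts
// Sketch of the 402 -> pay -> retry loop. The endpoint path and argument shape
// are placeholders; wrapFetchWithPayment comes from the x402-fetch package
// (signature as assumed here) and hides the quote/payment/retry dance.
import { privateKeyToAccount } from "viem/accounts";
import { wrapFetchWithPayment } from "x402-fetch";

const account = privateKeyToAccount(process.env.TEST_WALLET_KEY as `0x${string}`);
const fetchWithPay = wrapFetchWithPayment(fetch, account);

// The first request comes back 402 with a 0.005 USDC quote; the wrapper signs
// the payment on Base, retries, and the tool response carries the settlement receipt.
const res = await fetchWithPay("https://trust-mcp.agentlair.dev/tools/lookup_agent", {
  method: "POST",
  headers: { "content-type": "application/json" },
  body: JSON.stringify({ agent_id: "agt_example" }), // placeholder argument shape
});

const result = await res.json();
console.log(result.settlement?.transaction); // a Base tx hash, like the one below
```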

Live 0xea7ab4…14508cd (Base settlement, block 45662626)
03

Behavioral audit chain decides disputes

What this proves: two agents with identical credentials get opposite dispute outcomes because their observed behavior differs, and the difference is on a public audit chain.

Two scenarios run side by side. Scenario A buys a 95-USD leather bag inside the agreed scope; the dispute is REJECTED. Scenario B buys at 20 USD over the spending cap; the dispute is UPHELD. Same agent class, same merchant. The demo issues a BCC at the start of each run and posts the purchase events to the AgentLair audit log. Reviewers replay the chain and see why the answers differ.
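Sketched with hypothetical endpoint names and an illustrative 100 USD cap, one scenario run amounts to this; the repo drives the real agentlair.dev routes.

```ts
// Hypothetical sketch of one scenario run; paths, field names, and the 100 USD
// cap are illustrative. The demo repo uses the real agentlair.dev API.
const API = "https://agentlair.dev/api";

const post = (url: string, body: unknown) =>
  fetch(url, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  }).then((r) => r.json());

async function runScenario(priceUsd: number) {
  // 1. Issue a BCC committing to the spending cap for this run.
  const bcc = await post(`${API}/v1/credentials`, {
    profile: "BCC-Claims",
    claim: { spending_cap_usd: 100, scope: "leather goods" },
  });

  // 2. Post the purchase as an event on the public audit chain.
  await post(`${API}/v1/audit/events`, {
    credential: bcc.id,
    type: "purchase",
    detail: { item: "leather bag", price_usd: priceUsd },
  });

  // 3. File the dispute; the outcome is derived from the replayed chain, not the credential.
  const dispute = await post(`${API}/v1/disputes`, { credential: bcc.id });
  return dispute.outcome;
}

console.log(await runScenario(95));  // Scenario A: inside scope   -> "REJECTED"
console.log(await runScenario(120)); // Scenario B: 20 USD over cap -> "UPHELD"
```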

04

LangChain + AgentLair attestations

What this proves: a LangChain.js agent emits signed audit events as its tools fire, and finishes with a publicly verifiable Behavioral Commitment Credential that anchors the run to the audit chain.

Pass an `AgentLairCallbackHandler` as a callback to any LangChain runnable. Two events fire per tool call: `tool_call` on start, `observation` on end. When the agent is done, `handler.issueBcc()` mints a BCC whose `claim` body cross-references the first and last audit event ids. The wrapper extends `BaseCallbackHandler` from `@langchain/core` and works with three agent shapes out of the box: a single tool, a ReactAgent, or any RunnableSequence.
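In use it looks roughly like this; the handler's import path is an assumption taken from the demo repo, while `tool` and the callbacks config are stock LangChain.js.

```ts
// Sketch: AgentLairCallbackHandler lives in the demo repo (import path assumed,
// not a published package); tool() and the callbacks config are stock LangChain.js.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { AgentLairCallbackHandler } from "./agentlair-callbacks"; // path assumed

const getPrice = tool(async ({ sku }: { sku: string }) => `SKU ${sku}: 95 USD`, {
  name: "get_price",
  description: "Look up the price of a product by SKU",
  schema: z.object({ sku: z.string() }),
});

const handler = new AgentLairCallbackHandler(); // extends BaseCallbackHandler

// tool_call fires on start, observation on end; both land on the audit chain.
await getPrice.invoke({ sku: "BAG-001" }, { callbacks: [handler] });

// Mint the BCC; its claim body cross-references the first and last audit event ids.
const bcc = await handler.issueBcc();
console.log(bcc.id); // resolves at the public verify endpoint
```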

Live bcc_lgqfH7XRthR40JGr7Ask (verify endpoint, returns valid:true; 4 audit events anchored)
05

OpenAI Agents SDK + AgentLair attestations

What this proves: an OpenAI Agents SDK agent records reasoning, tool calls, observations, and output as separate signed audit events, and the BCC anchors the whole run via first and last event ids.

Wrap an `Agent` from `@openai/agents` with `recorder.wrap()`, and every tool's `invoke` is replaced with one that posts two L3 hash-chained entries per call: `tool_call` on entry, `observation` on exit. Two explicit attestations bookend the run: `reasoning` for `agent.start`, `output` for `agent.complete`. The OpenAI model is mocked, so the demo runs offline and consumes zero tokens. Six audit events anchor the run; the BCC pins the chain via first and last event ids.
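Roughly what a run looks like; `AgentLairRecorder`, its import path, and the `issueBcc` method name are assumptions drawn from the demo repo, while `Agent`, `run`, and `tool` are the published `@openai/agents` API.

```ts
// Sketch: AgentLairRecorder and its wrap()/issueBcc() names are assumptions from
// the demo repo; Agent, run, and tool are the published @openai/agents API.
import { Agent, run, tool } from "@openai/agents";
import { z } from "zod";
import { AgentLairRecorder } from "./agentlair-recorder"; // path assumed

const lookupOrder = tool({
  name: "lookup_order",
  description: "Fetch the status of an order by id",
  parameters: z.object({ orderId: z.string() }),
  execute: async ({ orderId }) => `Order ${orderId}: shipped`,
});

const agent = new Agent({
  name: "support-agent",
  instructions: "Answer order questions with the lookup tool.",
  tools: [lookupOrder],
});

// wrap() swaps each tool's invoke for one that posts tool_call / observation
// entries; reasoning and output attestations bookend the run.
const recorder = new AgentLairRecorder();
const wrapped = recorder.wrap(agent);

const result = await run(wrapped, "Where is order 4711?"); // the demo mocks the model here
console.log(result.finalOutput);

const bcc = await recorder.issueBcc(); // method name assumed; pins first/last event ids
console.log(bcc.id);
```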

Live bcc_8gkhJjgw0JM7mMIsqUW1 (verify endpoint, returns valid:true; 6 audit events anchored)
06

Vercel AI SDK + AgentLair attestations

What this proves: a Vercel AI SDK agent built on the `tool` helper and `generateText` records every tool call as a signed audit event, and the BCC anchors the run via first and last event ids.

Pass any Vercel AI SDK toolset through `recorder.wrapTools(...)`, and each tool's `execute` is replaced with one that posts two L3 hash-chained entries per call: `tool_call` on entry, `observation` on exit. Errors thrown inside `execute` propagate, so the SDK still builds the structured `tool-error` part the model reads on subsequent steps. The model is mocked with `MockLanguageModelV3` from `ai/test`, so the demo runs offline and consumes zero tokens. Six audit events anchor the run; the BCC pins the chain via first and last event ids.
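Roughly what the wiring looks like; `wrapTools` and the recorder import path are assumptions drawn from the demo repo, the `tool`/`generateText`/`stepCountIs` calls are the published AI SDK API (v5-style `inputSchema`), and a provider model stands in here for the mocked one.

```ts
// Sketch: wrapTools() and issueBcc() come from the demo repo (names and path
// assumed); tool, generateText, and stepCountIs are the published AI SDK API.
// The demo substitutes MockLanguageModelV3 from ai/test; a provider model is
// used here for brevity.
import { generateText, tool, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { AgentLairRecorder } from "./agentlair-recorder"; // path assumed

const recorder = new AgentLairRecorder();

const tools = recorder.wrapTools({
  lookup_order: tool({
    description: "Fetch the status of an order by id",
    inputSchema: z.object({ orderId: z.string() }),
    // Errors thrown here still propagate, so the SDK can emit its tool-error part.
    execute: async ({ orderId }) => `Order ${orderId}: shipped`,
  }),
});

const { text } = await generateText({
  model: openai("gpt-4o-mini"),
  tools,
  stopWhen: stepCountIs(3),
  prompt: "Where is order 4711?",
});
console.log(text);

const bcc = await recorder.issueBcc(); // method name assumed; pins first/last audit event ids
console.log(bcc.id);
```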

Live bcc_4jGn7YfkZdhyDJfVo2Rf (verify endpoint, returns valid:true; 6 audit events anchored)

Each repo is MIT-licensed and runs against the live agentlair.dev API. Credentials and audit events are issued live, not replayed from recorded fixtures; the only mocks are the LLMs in the demos that need to run offline.