01
Behavioral Commitment Credentials
What this proves: an agent can sign what it has committed to, and any third party can verify the credential without trusting the issuer's UI.
Anonymously register at agentlair.dev, issue a credential on the BCC-Claims profile, retrieve it from the public endpoint, and verify the signature against AgentLair's DID. The repo is around forty lines of TypeScript. Signing is eddsa-jcs-2022, the credential is W3C-verifiable, and the README walks through every byte of the canonical hash.
02
Pay-per-call MCP via x402
What this proves: an MCP tool can charge per call and settle on-chain inside the same request, with no contract and no signup ahead of time.
The demo calls the `lookup_agent` tool on trust-mcp.agentlair.dev. The tool returns 402 with a quote; the client pays 0.005 USDC on Base, retries the call, and reads the settlement receipt embedded in the tool response. The whole client is around thirty lines. A funded test wallet runs the flow end-to-end in roughly twenty seconds.
03
Behavioral audit chain decides disputes
What this proves: two agents with identical credentials get opposite dispute outcomes because their observed behavior differs, and the difference is on a public audit chain.
Two scenarios run side by side. Scenario A buys a 95-USD leather bag inside the agreed scope; the dispute is REJECTED. Scenario B buys 20 USD over the spending cap; the dispute is UPHELD. Same agent class, same merchant. The demo issues a BCC at the start of each run and posts the purchase events to the AgentLair audit log. Reviewers replay the chain and see why the answers differ.
04
LangChain + AgentLair attestations
What this proves: a LangChain.js agent emits signed audit events as its tools fire, and finishes with a publicly verifiable Bonded Credibility Credential that anchors the run to the audit chain.
Pass an `AgentLairCallbackHandler` as a callback to any LangChain runnable. Two events fire per tool call: `tool_call` on start, `observation` on end. When the agent is done, `handler.issueBcc()` mints a BCC whose `claim` body cross-references the first and last audit event ids. The wrapper extends `BaseCallbackHandler` from `@langchain/core` and works with three agent shapes out of the box: a single tool, a ReactAgent, or any RunnableSequence.
05
OpenAI Agents SDK + AgentLair attestations
What this proves: an OpenAI Agents SDK agent records reasoning, tool calls, observations, and output as separate signed audit events, and the BCC anchors the whole run via first and last event ids.
Wrap an `Agent` from `@openai/agents` with `recorder.wrap()` and every tool's `invoke` gets replaced with one that posts two L3 hash-chained entries per call: `tool_call` on entry, `observation` on exit. Two explicit attestations bookend the run: `reasoning` for `agent.start`, `output` for `agent.complete`. The OpenAI model is mocked so the demo runs offline and consumes zero tokens. Six audit events anchor the run; the BCC pins the chain via first and last event ids.
06
Vercel AI SDK + AgentLair attestations
What this proves: a Vercel AI SDK agent built on the `tool` helper and `generateText` records every tool call as a signed audit event, and the BCC anchors the run via first and last event ids.
Pass any Vercel AI SDK toolset through `recorder.wrapTools(...)` and each tool's `execute` gets replaced with one that posts two L3 hash-chained entries per call: `tool_call` on entry, `observation` on exit. Errors thrown inside `execute` propagate up so the SDK builds the structured `tool-error` part the model reads on subsequent steps. The model is mocked with `MockLanguageModelV3` from `ai/test`, so the demo runs offline and consumes zero tokens. Six audit events anchor the run; the BCC pins the chain via first and last event ids.