In July 2025, the World Economic Forum published "Trust is the new currency in the AI agent economy." The article argues that as AI agents move from assistive tools to autonomous actors, trust infrastructure becomes the foundation of a new economic layer. They are right. And the infrastructure they are describing already exists.

The problem the WEF identified

The article, written by Ivan Shkvarun for the World Economic Forum, makes a straightforward case. AI agents are no longer just answering questions. They are executing transactions, allocating resources, and making decisions autonomously. By 2028, roughly 33% of enterprise software will include agentic AI, with 15% of daily work decisions made without human involvement.

That shift creates a trust problem. As Kenneth Arrow observed, every economic transaction contains an element of trust. When both sides of a transaction are machines, the question changes. It is no longer "Do I believe this answer?" It becomes "Can this agent prove what it claims?"

The WEF frames trust as having two components: competence (can the agent do the job?) and intent (what is motivating its actions?). Competence, they argue, is mostly solved. Intent remains what they call "a foggy frontier."

Three kinds of trust

The article identifies three trust domains that need rethinking:

  1. Human-to-human trust. Digital interactions undermine the cues we rely on. A familiar face on a video call could be an AI-generated avatar.
  2. Agent-to-agent trust. AI agents must evaluate one another on performance history, reputational data, and predictable behavior. The WEF calls this "an engineering problem."
  3. Human-to-agent trust. People trust consistency. Agents must display persistent identity, predictable behavior, and authenticity.

The article also cites the economic stakes. According to Deloitte, a 10 percentage point increase in societal trust correlates with roughly 0.5% higher annual GDP per capita. On the other side, Deloitte's Center for Financial Services predicts that generative AI could enable $40 billion in fraud losses in the United States alone. The global AI agents market is projected to reach $50.31 billion by 2030, according to Grand View Research. Fraud losses in a single country would rival the size of the entire global market.

What they called for

The WEF article closes with a call to build trust infrastructure for the agent economy.

The next five years, they argue, offer "a narrow but critical window" to shape how trust functions in a world of autonomous agents.

What we shipped

The Insumer Model's On-Chain Verification API is the infrastructure the WEF is describing. It was not built in response to this article. It was built because the problem was obvious: AI agents that interact with the real economy need a way to verify facts about people and assets, autonomously, with cryptographic proof.

Here is what the API does. An AI agent sends a wallet address and a set of conditions to POST /v1/attest. Each condition asks a question: does this wallet hold at least X of this token? Does it hold any NFT from this collection? The API verifies conditions on-chain in real time across 32 blockchains and returns a boolean answer. Met or not met. Nothing else.
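As a concrete sketch, here is roughly what such a request body could look like. The field names (`wallet`, `conditions`, `type`, `min`) are illustrative assumptions, not the documented schema; the OpenAPI spec defines the real shapes.

```python
import json

# Hypothetical request body for POST /v1/attest. Field names are
# assumptions for illustration; consult the OpenAPI spec for the
# actual schema.
def build_attest_request(wallet: str, conditions: list) -> dict:
    return {"wallet": wallet, "conditions": conditions}

payload = build_attest_request(
    "0x0000000000000000000000000000000000000001",  # placeholder address
    [
        # "Does this wallet hold at least X of this token?"
        {"type": "token_balance", "token": "0xTOKENADDRESS", "min": "100"},
        # "Does it hold any NFT from this collection?"
        {"type": "nft_ownership", "collection": "0xCOLLECTIONADDRESS"},
    ],
)
print(json.dumps(payload, indent=2))
```

The response, per the design described above, carries only a met/not-met result per condition, plus the signature material covered next.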

Every response is signed with ECDSA P-256/SHA-256. The signature is independently verifiable by anyone using the published public key. No second API call needed. No trust in our servers required after the fact. The cryptographic proof stands on its own.
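The verification pattern can be sketched with the `cryptography` package. A locally generated P-256 keypair stands in for the API's signing key here; in practice an agent would load the published public key and verify the exact response bytes it received.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in keypair for the sketch; a real verifier loads the
# published public key instead of generating one.
private_key = ec.generate_private_key(ec.SECP256R1())  # P-256
public_key = private_key.public_key()

# Stand-in for the signed attestation bytes returned by the API.
payload = b'{"attestation_id":"att_demo","results":[true]}'
signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

def signature_is_valid(pub, payload: bytes, sig: bytes) -> bool:
    """Return True iff sig is a valid ECDSA P-256/SHA-256 signature
    over payload for the given public key."""
    try:
        pub.verify(sig, payload, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(signature_is_valid(public_key, payload, signature))         # True
print(signature_is_valid(public_key, payload + b"x", signature))  # False
```

The second call shows the integrity property: flipping even one byte of the payload invalidates the signature.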

Why boolean matters

The WEF article warns about fraud risk multiplying as autonomous agents proliferate. The On-Chain Verification API addresses this at the protocol level by never exposing the data in the first place.

The API returns only yes or no. Not balances. Not token lists. Not transaction histories. Not wallet contents. An agent learns exactly what it asked, verifies the signature, and acts. There is nothing to leak, nothing to scrape, nothing to exploit. You cannot steal data that was never transmitted.

This is not a privacy feature bolted on after the fact. It is the core design. Boolean attestations are inherently more secure than balance queries because they answer the question without revealing the answer's inputs.

Agent-to-agent trust as engineering

The WEF called agent-to-agent trust "an engineering problem: how to design systems that can assess, verify and adapt trust over time." That is precisely what cryptographic attestation solves: three layers of trust that compose into a single verification step.

When Agent A receives an ECDSA-signed attestation from the On-Chain Verification API, it can verify three things without trusting anyone:

  1. Authenticity. The signature proves the Insumer server produced this attestation. It cannot be forged without the private key.
  2. Integrity. The signed payload includes the attestation ID, results, and timestamp. Any modification breaks the signature.
  3. Freshness. Attestations expire after 30 minutes. Stale claims are rejected by design.

Agent A does not need to trust Agent B's word about a wallet's holdings. It does not need a reputation system or performance history. It has a cryptographic proof that is valid for exactly 30 minutes and can be verified with a single public key. That is trust reduced to math.
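The freshness rule fits in a few lines. This sketch assumes the attestation carries an issued-at timestamp; the exact field name is an assumption.

```python
from datetime import datetime, timedelta, timezone

# Attestations expire after 30 minutes, per the API's freshness rule.
ATTESTATION_TTL = timedelta(minutes=30)

def is_fresh(issued_at: datetime, now: datetime) -> bool:
    """Accept only attestations issued within the last 30 minutes."""
    return timedelta(0) <= now - issued_at <= ATTESTATION_TTL

issued = datetime(2026, 2, 1, 12, 0, tzinfo=timezone.utc)
print(is_fresh(issued, issued + timedelta(minutes=29)))  # True
print(is_fresh(issued, issued + timedelta(minutes=31)))  # False
```

A receiving agent combines this check with signature verification: a valid signature over a stale timestamp is still a rejection.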

For a complete guide to building on this infrastructure — attestation, trust profiles, commerce, and compliance — see the AI Agent Verification API overview. Explore wallet trust profiles for structured on-chain evidence, or browse the full OpenAPI spec for all 26 endpoints.

Full autonomy, no human in the loop

The WEF envisions a future where agents operate with full autonomy. InsumerAPI was built for that future, and the entire lifecycle is live today:

  1. Discover. An agent reads insumermodel.com/llms.txt or the OpenAPI spec to understand the API.
  2. Authenticate. The agent creates a free API key instantly via the key creation endpoint.
  3. Search. It queries the merchant directory and token registry using public endpoints.
  4. Verify. It runs on-chain attestations across 32 blockchains with ECDSA-signed results.
  5. Onboard. It creates a new merchant programmatically via POST /v1/merchants.
  6. Configure. It sets token tiers and NFT collections via PUT endpoints.
  7. Verify domain. It proves merchant website ownership via DNS, meta tag, or file upload.
  8. Fund. It purchases additional verification credits with USDC on-chain.
  9. Publish. It lists the merchant in the public directory.

An AI agent can take a business from zero to fully live in the network. Discovered, onboarded, configured, domain-verified, funded, and listed. No human touches anything at any step. This is not a roadmap item. The endpoints are live. The onboarding docs are public. A LangChain integration is published on PyPI.
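The lifecycle above reduces to an ordered sequence of HTTP calls. Only POST /v1/attest and POST /v1/merchants are named in this article; every other path in this sketch is an illustrative placeholder, not a documented endpoint.

```python
# Each lifecycle step as (name, HTTP method, path). Paths marked
# "assumed" are placeholders for illustration; the OpenAPI spec
# lists the real endpoints.
LIFECYCLE = [
    ("discover",      "GET",  "/llms.txt"),
    ("authenticate",  "POST", "/v1/keys"),                   # assumed
    ("search",        "GET",  "/v1/merchants"),              # assumed
    ("verify",        "POST", "/v1/attest"),
    ("onboard",       "POST", "/v1/merchants"),
    ("configure",     "PUT",  "/v1/merchants/{id}/tiers"),   # assumed
    ("verify_domain", "POST", "/v1/merchants/{id}/domain"),  # assumed
    ("fund",          "POST", "/v1/credits/purchase"),       # assumed
    ("publish",       "POST", "/v1/merchants/{id}/publish"), # assumed
]

for step, method, path in LIFECYCLE:
    print(f"{step:>13}: {method} {path}")
```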

Five years? Seven months.

The WEF published their article in July 2025. They said the next five years offer "a narrow but critical window" to build trust infrastructure for the agent economy. That gives the world until 2030.

It is now February 2026. Seven months later. The On-Chain Verification API is live. AI agents can discover it, authenticate, run cryptographically signed attestations across 32 blockchains, onboard merchants, verify domains, purchase credits, and publish to a public directory. No human in the loop at any step. The full lifecycle works today.

Cryptographic verification of on-chain state, delivered as boolean attestations, with independent signature validation, is a concrete implementation of what the WEF called for. AI agent wallet verification makes this practical at scale. It handles agent-to-agent trust (verifiable claims), human-to-agent trust (consistent, predictable, auditable behavior), and the fraud problem (no data exposure means no data theft).

The question the WEF raised is the right one: what kind of trust will matter most in the AI agent economy? Our answer is simple. The kind that can be verified with a public key.

Build on the On-Chain Verification API

Free API keys are available instantly. Full documentation, OpenAPI spec, and LangChain integration included.

View API Documentation