The Agent Stack: New Topics & Protocols Every Brand Should Track in 2026
The web is being rebuilt for machines. Underneath the surface noise about ChatGPT and Gemini, a new infrastructure stack is taking shape — protocols for how AI agents talk to tools, talk to each other, authenticate, pay for things, and consume content.
Most of these standards didn’t exist 18 months ago. By the end of 2026, they will define which brands AI agents can find, trust, cite, and buy from.
This is a comprehensive map of that stack — every layer, every major protocol, and what each one means for brand visibility in an agent-first world.
1. The Agent Communication Protocol Stack (Beyond MCP)
MCP is just the bottom layer. The full stack is rapidly stratifying into distinct protocols for distinct jobs.
MCP (Model Context Protocol)
Anthropic-originated, now governed by the Linux Foundation. In December 2025, the Linux Foundation launched the Agentic AI Foundation (AAIF) — co-founded by OpenAI, Anthropic, Google, Microsoft, AWS, and Block — as the permanent home for both A2A and MCP.
By April 2026, more than 10,000 enterprise MCP servers were in production, with over 97 million SDK downloads. It's the agent-to-tool layer.
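Under the hood, MCP is JSON-RPC 2.0: an agent calls `tools/list` and the server replies with typed tool definitions it can then invoke. A hypothetical brand catalog server's response might look like this (the tool name and schema are illustrative, not from any real server):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "search_products",
        "description": "Search the brand catalog by keyword.",
        "inputSchema": {
          "type": "object",
          "properties": {
            "query": { "type": "string" },
            "max_results": { "type": "integer", "default": 5 }
          },
          "required": ["query"]
        }
      }
    ]
  }
}
```

An agent that can read this schema can call the tool without any custom integration work, which is exactly why publishing an MCP server makes a brand "pullable."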
A2A (Agent-to-Agent)
Google’s protocol, also donated to the Linux Foundation. A2A standardizes how AI agents discover, communicate, and collaborate with each other — regardless of their underlying framework. Think of it as HTTP for AI agents. IBM’s Agent Communication Protocol (ACP) merged into A2A in August 2025. This is the agent-to-agent layer.
ANP (Agent Network Protocol)
The decentralized vision. ANP pursues a fully decentralized architecture using W3C DIDs (Decentralized Identifiers) that is technically compelling but not yet ecosystem-ready.
WebMCP
The browser-native variant. In February 2026, Google shipped an early preview of WebMCP in Chrome Canary. WebMCP is a protocol for structured AI agent interactions with websites, introducing two new APIs:
- A Declarative API for HTML forms and standard page elements
- An Imperative API for dynamic JavaScript-driven interactions
Google is developing WebMCP with Microsoft through the W3C, aiming for an open standard that all browsers can adopt.
Why this matters for brands: Every protocol layer is a new surface where brand visibility, citation, and discoverability operate. Companies that have an MCP server are increasingly the ones agents pull from. That’s a measurable visibility dimension — and one Sanbi tracks.
2. Agent Payments — The Money Layer
This is one of the biggest emerging stacks, and it's moving fast. Five protocols are jockeying for position, but they are less competitors than layers of a single stack.
x402 (Coinbase) — Stablecoin micropayments over HTTP
x402 is an open-source payment protocol that uses the HTTP 402 "Payment Required" status code to enable machine-to-machine micropayments. Launched by Coinbase in May 2025, it settles payments in USDC on the Base blockchain; estimates of early volume range from more than 50 million transactions by early 2026 to roughly 165 million agent transactions in its first months.
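The handshake is simple enough to sketch in a few lines. The toy functions below simulate the 402 round-trip in-process; header names and the payment payload are illustrative, not the exact x402 wire format:

```python
"""Sketch of the x402 flow: the first request gets HTTP 402 plus price
metadata, the client attaches a payment payload and retries."""

def resource_server(headers: dict) -> tuple[int, dict, str]:
    """Toy server: requires a payment header before serving content."""
    if "X-Payment" not in headers:
        # 402 Payment Required, with pricing details the agent can act on
        return 402, {"X-Payment-Required": "0.01 USDC on Base"}, ""
    return 200, {}, "premium content"

def agent_fetch(pay) -> str:
    """Toy client: on 402, obtain a payment payload and retry once."""
    status, hdrs, body = resource_server({})
    if status == 402:
        payment = pay(hdrs["X-Payment-Required"])  # settlement happens off-path
        status, hdrs, body = resource_server({"X-Payment": payment})
    assert status == 200
    return body

print(agent_fetch(lambda price: f"signed-payment-for-{price}"))
```

The point of the design: the price lives in the response itself, so an agent can decide to pay and retry with no human in the loop.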
ACP (Agentic Commerce Protocol) — OpenAI + Stripe
OpenAI's ACP, developed with Stripe (and not to be confused with IBM's Agent Communication Protocol mentioned above), is a community-designed, Apache 2.0-licensed framework that enables secure, frictionless transactions inside conversational platforms such as ChatGPT.
Already live: With Instant Checkout, powered by ACP, ChatGPT users in the US can now buy directly from Etsy sellers, with support for Shopify merchants — including Glossier, Vuori, Spanx, and SKIMS — coming soon.
AP2 (Agent Payments Protocol) — Google
Google’s authorization framework. More than 60 organizations are already collaborating with Google to bring AP2 to life, including Adyen, American Express, Ant International, Coinbase, Etsy, Forter, Intuit, JCB, Mastercard, Mysten Labs, PayPal, Revolut, Salesforce, ServiceNow, UnionPay International, and Worldpay.
AP2 uses mandates — tamper-proof, cryptographically signed digital contracts that define exactly what an agent can do on a customer’s behalf.
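A mandate can be sketched as a signed canonical document that verification rejects if any field changes. The sketch below uses stdlib HMAC as a stand-in for AP2's real public-key signatures, and the mandate fields are illustrative, not the actual AP2 schema:

```python
import hashlib
import hmac
import json

def sign_mandate(secret: bytes, mandate: dict) -> str:
    """Sign a canonical serialization of the mandate body.
    (HMAC here is a stdlib stand-in for real asymmetric signatures.)"""
    body = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_mandate(secret: bytes, mandate: dict, sig: str) -> bool:
    return hmac.compare_digest(sign_mandate(secret, mandate), sig)

# A mandate scopes exactly what the agent may do on the customer's behalf
mandate = {
    "agent": "shopping-agent-01",
    "max_amount": "25.00",
    "currency": "USD",
    "merchant": "example-store",
    "expires": "2026-06-01T00:00:00Z",
}
sig = sign_mandate(b"customer-key", mandate)
assert verify_mandate(b"customer-key", mandate, sig)

# Tampering (raising the spend limit) invalidates the signature
tampered = dict(mandate, max_amount="2500.00")
assert not verify_mandate(b"customer-key", tampered, sig)
```

This is what "tamper-proof" means in practice: the merchant can check the signature and refuse any transaction outside the mandate's bounds.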
MPP (Machine Payments Protocol) — Stripe
Stripe’s broader machine-to-machine framing. MPP launched on March 18, 2026.
TAP (Trusted Agent Protocol) — Visa
Visa’s identity layer for payments. Visa’s Trusted Agent Protocol (TAP), launched October 14, 2025 with Cloudflare, signs the agent’s identity into HTTP request headers; merchants verify the signature against Visa’s directory.
The reality: it’s a stack, not a war
x402, ACP, AP2, TAP, and MPP are not interchangeable; they operate at different layers:
| Protocol | Layer |
|---|---|
| x402 | Payment rail |
| AP2 | Authorization framework |
| ACP | Checkout flow |
| TAP | Agent identity in payments |
| MPP | M2M payment infrastructure |
Why this matters for brands: The discovery layer becomes the bottleneck. All of these protocols assume the agent has already found your product. If agents are doing the buying, then being findable and citable by the agent before it transacts is the whole game.
3. Content Monetization for Agents — The Pay-Per-Crawl Layer
This is the defensive side of the same coin: content owners charging for AI access.
Cloudflare Pay-Per-Crawl
In the private beta, publishers can choose to allow individual crawlers, block them, or set per-crawl rates. Cloudflare serves as the go-between, handling payments and forwarding revenue to website owners.
Pay-per-crawl integrates with existing web infrastructure, leveraging HTTP status codes and authentication mechanisms. Each time an AI crawler requests content, it either presents payment intent via request headers for successful access (HTTP 200), or receives a 402 Payment Required response with pricing.
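On the publisher side, the decision reduces to a small policy gate per crawler. A toy sketch of that flow, with illustrative header names rather than Cloudflare's actual ones:

```python
def handle_crawl(crawler: str, headers: dict, policy: dict):
    """Toy pay-per-crawl gate: allow, block, or charge per crawler.
    A numeric policy value is a per-crawl price in USD."""
    rule = policy.get(crawler, "block")
    if rule == "allow":
        return 200, {}
    if rule == "block":
        return 403, {}
    # rule is a price: require payment intent in the request headers
    offered = headers.get("crawler-max-price")
    if offered is not None and float(offered) >= rule:
        return 200, {"crawler-charged": str(rule)}
    return 402, {"crawler-price": str(rule)}

# Per-crawler policy: charge one bot, allow another, block the rest
policy = {"GPTBot": 0.002, "FriendlyBot": "allow"}
```

A crawler that sends no payment intent gets a 402 with the price; one that offers enough gets the content and a charge confirmation.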
Scale signals: On an average day, Cloudflare customers are already sending over one billion 402 response codes.
And critically: Cloudflare now blocks AI crawlers by default for all new users, meaning crawlers like OpenAI's GPTBot can't access your site unless you explicitly allow them.
Why this matters for brands: This creates a measurable shift in what AI can see. If your competitors are behind pay-per-crawl walls, AI visibility tilts toward whoever’s open — or whoever paid. AI accessibility scores are becoming a core part of any AI visibility report.
4. Agent Identity & Authentication
Once agents can pay, you need to know which agent is asking.
Web Bot Auth
An IETF-track standard for cryptographic bot identity. Google published documentation explaining its testing of Web Bot Auth, an experimental IETF protocol that lets websites cryptographically verify some automated requests from bots and AI agents. The protocol adds a verification layer by letting agents sign HTTP requests with cryptographic keys.
The IETF Web Bot Auth Working Group was chartered in early 2026 with milestones for standards-track specifications and a best current practice document.
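Web Bot Auth builds on HTTP Message Signatures (RFC 9421): the agent signs selected request components and the site verifies against a published key. The sketch below builds a simplified signature base and uses stdlib HMAC as a stand-in for the spec's asymmetric (e.g. Ed25519) signatures, where only the public half would be shared:

```python
import base64
import hashlib
import hmac

def signature_base(method: str, authority: str, path: str) -> bytes:
    """Build a simplified signature base over HTTP components, in the
    spirit of RFC 9421 (real bases cover more components and parameters)."""
    return (
        f'"@method": {method}\n'
        f'"@authority": {authority}\n'
        f'"@path": {path}'
    ).encode()

def sign_request(key: bytes, method: str, authority: str, path: str) -> str:
    mac = hmac.new(key, signature_base(method, authority, path), hashlib.sha256)
    return base64.b64encode(mac.digest()).decode()

def verify_request(key: bytes, method: str, authority: str, path: str, sig: str) -> bool:
    return hmac.compare_digest(sign_request(key, method, authority, path), sig)

sig = sign_request(b"agent-key", "GET", "brand.example", "/pricing")
assert verify_request(b"agent-key", "GET", "brand.example", "/pricing", sig)
# Changing any signed component breaks verification
assert not verify_request(b"agent-key", "GET", "brand.example", "/other", sig)
```

Because the signature covers the method, host, and path, a site can tell not just *who* the agent is but that this exact request came from it unmodified.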
W3C DIDs + Verifiable Credentials for Agents
The W3C approach equips each AI agent with a self-controlled digital identity: a ledger-anchored Decentralized Identifier (DID) plus a set of Verifiable Credentials (VCs). A DID is a self-issued identifier whose public key material proves ownership.
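A minimal DID document for a hypothetical brand agent could look like this (the did:web identifier, key, and service endpoint are placeholders):

```json
{
  "@context": ["https://www.w3.org/ns/did/v1"],
  "id": "did:web:agents.example.com",
  "verificationMethod": [
    {
      "id": "did:web:agents.example.com#key-1",
      "type": "Ed25519VerificationKey2020",
      "controller": "did:web:agents.example.com",
      "publicKeyMultibase": "z6MkhaXgBZDvotDkL5257faiztiGiC2QtKLGpbnnEGta2doK"
    }
  ],
  "service": [
    {
      "id": "did:web:agents.example.com#agent",
      "type": "AgentService",
      "serviceEndpoint": "https://agents.example.com/api"
    }
  ]
}
```

Anyone can resolve the DID, fetch this document, and verify that a message signed by the agent matches the published key, with no central registry required.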
ARIA Protocol
A newer post-quantum identity initiative. Filed with NIST docket 2025-0035 on March 9, 2026. NCCoE concept paper submitted April 2, 2026.
ERC-8004
An on-chain agent registry standard with 21,500+ registered agents.
Why this matters for brands: Agent identity adds a new trust dimension: is this brand's agent presence verifiable? Brands that publish verified agent identities will be cited and trusted faster than those that don't, making this a likely future ranking factor.
5. The AI-Readable Web — llms.txt and Successors
The web is being rewritten for machine consumption.
llms.txt
A plain-text file hosted in a website’s root directory that provides a concise, Markdown-formatted map of a site’s most important resources. Proposed in 2024 by Jeremy Howard of Answer.AI, llms.txt serves as a new open-standard convention designed to help LLMs navigate website content with greater precision.
Adoption reality check: As of April 2026, no major AI platform has officially committed to reading llms.txt as a first-class input. Anthropic, OpenAI, and Perplexity have not made formal commitments — though their retrieval pipelines can be prompted to fetch the file, and developer-facing tools like Cursor, Continue, and Aider already do read it.
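The format itself is deliberately simple: an H1 with the site name, a blockquote summary, then H2 sections listing key URLs with short notes. A hypothetical example:

```markdown
# Example Brand

> Example Brand sells sustainable outdoor gear. The resources below are
> kept current and formatted for machine consumption.

## Products
- [Product catalog](https://example.com/products.md): full catalog in Markdown

## Docs
- [Sizing guide](https://example.com/docs/sizing.md): fit and measurement tables

## Optional
- [Press kit](https://example.com/press.md): logos and boilerplate copy
```

The "Optional" section is part of the convention: it marks content an LLM can skip when its context window is tight.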
ai.txt
Proposed by Spawning.ai, this standard addresses the question: Should my content be used to train AI models? Separate from llms.txt, focused on training permissions.
llms-full.txt
Token-optimized full-content version, gaining traction in developer docs. Tools like Fern serve Markdown instead of HTML when they detect LLM traffic, reducing token consumption and accelerating content processing.
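That kind of content negotiation can be as simple as a user-agent check. A minimal sketch; the marker list is illustrative and would need maintaining against real crawler traffic:

```python
# Serve lean Markdown to known LLM crawlers/agents, HTML to everyone else.
LLM_AGENT_MARKERS = ("gptbot", "claudebot", "perplexitybot", "google-extended")

def pick_representation(user_agent: str) -> str:
    """Choose a content type based on the requesting user agent."""
    ua = user_agent.lower()
    if any(marker in ua for marker in LLM_AGENT_MARKERS):
        return "text/markdown"  # fewer tokens, faster for the model to process
    return "text/html"

assert pick_representation("Mozilla/5.0 (compatible; GPTBot/1.2)") == "text/markdown"
assert pick_representation("Mozilla/5.0 (Windows NT 10.0)") == "text/html"
```

The same routing logic works at the CDN or framework layer; the payoff is that an agent burns far fewer tokens reading your page than it would parsing the full HTML.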
Why this matters for brands: This is the most directly adjacent layer to AI visibility. An AI-Readability Audit — does your site serve markdown to bots, expose llms.txt, and surface schema correctly — is one of the fastest ROI moves you can make in 2026.
6. GEO / AEO — The New SEO Layer
This is the category every AI visibility platform competes in — and the market is exploding.
Market size and conversion data
The U.S. Generative Engine Optimization (GEO) market is expected to reach USD 365.4 million in 2026, with a 42.9% CAGR over the forecast period.
The conversion data is the killer stat:
| Source | Conversion Rate |
|---|---|
| ChatGPT | 15.9% |
| Perplexity | 10.5% |
| Claude | 5% |
| Organic search | 1.76% |
Agent-cited traffic is 3 to 9 times more valuable than organic search traffic.
Competitive landscape
The field includes AthenaHQ, Goodie AI, Rankscale, KAI Footprint, Brandi AI, Brandlight, Evertune, Profound, Gauge, Otterly — and Sanbi.ai. Research from Brandlight suggests the overlap between top Google links and AI-cited sources has dropped from 70% to below 20%.
Why this matters for brands: Tracking mentions is table stakes. The differentiation is in the adjacent layers above — protocol presence, accessibility scores, agent identity, per-browser citation behavior — that the leaders haven’t claimed yet.
7. Agent Browsers — Where Buying Decisions Happen
The new “browser” is fundamentally different from Chrome.
Industry estimates suggest 25–35% of operational web traffic at large companies will be agent-generated by end of 2026. Gartner projects that 40% of enterprise applications will include task-specific AI agents by year-end.
The current field
- ChatGPT Atlas
- Perplexity Comet
- Claude Cowork
- Microsoft Edge Copilot Mode
- Arc / Dia
- Brave Leo
- Opera Neon
Capability benchmark
Claude Sonnet 4.6 hit 72.5% on OSWorld in February 2026, reaching the rough ceiling of human performance on that benchmark. Comet Agent runs on Sonnet 4.6 by default and Opus 4.6 for Max users.
The legal signal
The Amazon-Perplexity lawsuit is a key indicator. Amazon’s January 2026 lawsuit challenges Comet’s automated shopping capabilities — the first legal action against agentic browser technology. Platform owners are starting to defend their checkout surfaces.
Why this matters for brands: Each browser has different citation behaviors. If you sell to enterprise buyers, Copilot Mode is probably the agent shopping your pricing page in 2026. It heavily prefers schema.org markup and tends to misclick custom form widgets that don’t use native HTML controls. Per-browser visibility audits are a clear product direction — and one of the highest-leverage things you can measure right now.
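Complete schema.org markup is the concrete artifact behind that preference. A minimal JSON-LD Product snippet, with placeholder values, served in a `<script type="application/ld+json">` tag:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A placeholder product for illustration.",
  "sku": "EX-001",
  "brand": { "@type": "Brand", "name": "Example Brand" },
  "offers": {
    "@type": "Offer",
    "price": "49.00",
    "priceCurrency": "USD",
    "availability": "https://schema.org/InStock",
    "url": "https://example.com/products/ex-001"
  }
}
```

An agent that finds price, availability, and a canonical URL in structured data doesn't need to scrape or guess, which makes your product the easy one to recommend.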
How to Prepare for the Agent Stack
The brands winning the next two years will be the ones who treat the agent stack as a checklist, not a buzzword. A baseline readiness audit looks like this:
- Communication layer — Do you publish an MCP server? Are your APIs agent-friendly?
- Payments layer — Are you ACP/AP2/x402 ready for agent-initiated checkout?
- Crawl layer — Have you set explicit policies for AI crawlers? Are you blocking, allowing, or charging?
- Identity layer — Are you prepared to verify trusted agents via Web Bot Auth or TAP signatures?
- AI-readable web — Do you serve llms.txt, llms-full.txt, and clean Markdown to LLM user agents?
- GEO / AEO — Are you measuring visibility, sentiment, citations, and share of voice across ChatGPT, Gemini, Perplexity, and Claude?
- Agent browsers — Do your forms use native HTML controls? Is your schema.org markup complete and validated?
Every one of these is a measurable lever. And every one is a place where the gap between leaders and laggards is widening every week.
Your Next Step
The agent stack isn’t a future bet — it’s the infrastructure your customers’ AI assistants are already running on. The brands that map their presence across every layer before their competitors do are the ones AI agents will discover, trust, cite, and transact with.
Run a free AI visibility audit at sanbi.ai and see exactly where your brand stands across ChatGPT, Gemini, Perplexity, and Claude — plus where your gaps are in the broader agent stack. It takes 2 minutes, no credit card required.
Sanbi.ai monitors your brand’s AI visibility daily across ChatGPT, Gemini, Perplexity, and Claude — tracking visibility scores, sentiment, citations, agent accessibility, and competitor movements so you always know where you stand in the agent-first web.