AI & Automation

From Agent Sprawl to Agent Society: Why Google's A2A Protocol Hitting 150 Production Orgs Matters More Than the Models

2026.04.27

Cloud Next 2026 Reveals the Real Bottleneck Isn't Smarter Agents — It's Agents That Can Actually Talk to Each Other

For most of 2025, the AI industry's argument was about which model was smartest. By April 22, 2026 — the keynote day of Google Cloud Next — that argument quietly stopped mattering. Within a few hours, OpenAI announced ChatGPT Workspace Agents, Google unveiled the new Gemini Enterprise Agent Platform, and Salesforce expanded Agentforce's integration with Gemini. The headline number nobody expected was buried deeper in the announcement: Google's Agent2Agent (A2A) protocol has reached 150 organizations in production — not pilot — routing real tasks between agents built on different platforms.


A2A launched a year ago with 50 technology partners. It is now governed by the Linux Foundation's Agentic AI Foundation and has shipped version 1.2. That governance shift — from a Google-controlled spec to a foundation-stewarded standard — is the part that quietly broke the previous status quo. Vendor-locked agent frameworks had been the default. A vendor-neutral wire protocol with real production volume changes the bargaining power between buyers and platforms overnight.


The companion launches matter because they describe what A2A is now connecting. Workspace Studio, rolling out to Google Workspace business and enterprise customers, plugs directly into Asana, Jira, Mailchimp, Salesforce, and arbitrary external APIs via webhooks. Vertex AI was renamed the Gemini Enterprise Agent Platform; Model Garden now hosts more than 200 models — Google's own Gemini and Gemma families, Anthropic's Claude, and open-weight Llama variants. And Google committed $750 million to a partner fund covering 120,000 consulting firms, systems integrators, and channel partners to ship agent prototypes for joint customers.


Read together, these launches are less a product announcement than an admission. According to Belitsoft and Gartner research circulating this month, enterprises run 12 AI agents on average today, on track for 20 by 2027. About half of those agents operate in isolation, with no coordination with other systems. Google's own framing this week was unusually candid — it essentially conceded that "agent sprawl" is the dominant problem of 2026, and pitched A2A plus Workspace Studio as the answer.


There's healthy skepticism worth noting. A protocol with 150 production deployments is meaningful, but it isn't ubiquitous. Anthropic's Model Context Protocol crossed 97 million installs in March, and MCP and A2A solve overlapping but distinct problems — MCP handles tool calls between an agent and a resource, while A2A handles agent-to-agent task delegation. Most enterprises will end up running both, plus whatever proprietary glue their CRM vendor ships. The "agent society" that's emerging is not one protocol; it's a layered stack, and we are at the messy early-architectural-decisions phase.
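The tool-call-vs-task-delegation distinction is easier to see on the wire. Below is a rough sketch of the two payload shapes: both protocols ride JSON-RPC 2.0, but MCP invokes a named tool with structured arguments, while A2A hands a free-form task to a peer agent. Field names here follow the public specs only loosely and may drift across protocol versions; the tool name and task text are invented for illustration.

```python
import json

# MCP-style request: an agent invoking a *tool* on a resource server.
# ("search_tickets" is a hypothetical tool name, not a real MCP tool.)
mcp_tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_tickets",
        "arguments": {"query": "open P1 bugs"},
    },
}

# A2A-style request: an agent delegating a *task* to another agent.
# The receiving agent decides how to accomplish it.
a2a_task_delegation = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"kind": "text", "text": "Triage and assign open P1 bugs"}],
        }
    },
}

# Same envelope, different unit of exchange: a typed tool invocation
# vs. a goal handed off to a peer that plans its own tool calls.
for payload in (mcp_tool_call, a2a_task_delegation):
    assert payload["jsonrpc"] == "2.0"
print(json.dumps(a2a_task_delegation, indent=2))
```

The practical upshot: an enterprise stack will typically speak MCP "downward" to tools and data sources, and A2A "sideways" to agents it doesn't own.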


My Perspective: Stop Asking "Which Model" and Start Asking "Which Wire Format"


When a technology category matures, the conversation moves down the stack. Cloud Next 2026 was the week the AI agent conversation moved from model performance — where everyone has roughly converged at "good enough" — to interoperability, governance, and orchestration. For developers and CTOs, this is unambiguously good news.


The shift means buying decisions stop being existential. If you build on Gemini today and Claude eats Gemini's lunch in eight months, A2A means your orchestration layer survives the swap. The lock-in moves up to your data and workflow definitions, where it always belonged. That is what mature enterprise software looks like.


But there's a quieter implication for development teams. If A2A becomes the dominant inter-agent protocol — and Linux Foundation governance suggests it has the right institutional shape to do so — then the next two years of platform engineering are going to look a lot like the early 2010s of microservices. Service discovery, idempotency, observability, retries, dead-letter handling. We solved those problems once for HTTP services. We will solve them again for agents, because agent calls are non-deterministic in a way HTTP calls never were.
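The microservices-era checklist above translates directly into code. Here is a minimal sketch of what that operational rigor looks like wrapped around an inter-agent call: an idempotency key so a retried task runs once, jittered exponential backoff, and a dead-letter queue for tasks that exhaust their retries. The `send` callable stands in for whatever transport (A2A or otherwise) carries the task; everything here is illustrative, not any particular SDK's API.

```python
import hashlib
import json
import random
import time

DEAD_LETTERS = []    # tasks that exhausted retries, parked for review
_RESULT_CACHE = {}   # idempotency: task key -> first successful result

def _task_key(agent_url: str, task: dict) -> str:
    """Stable key so a retried or duplicated task executes only once."""
    raw = agent_url + json.dumps(task, sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()

def call_agent(agent_url: str, task: dict, send, max_retries: int = 3):
    """Delegate `task` to a peer agent via `send`, with idempotency,
    jittered exponential backoff, and dead-lettering on failure."""
    key = _task_key(agent_url, task)
    if key in _RESULT_CACHE:          # already completed: don't re-run
        return _RESULT_CACHE[key]
    for attempt in range(max_retries):
        try:
            result = send(agent_url, task)
            _RESULT_CACHE[key] = result
            return result
        except Exception:
            # capped exponential backoff with jitter (scaled down for demo)
            time.sleep(min(2 ** attempt, 30) * random.random() * 0.01)
    DEAD_LETTERS.append({"agent": agent_url, "task": task})
    return None

# Demo transport: fails twice, then succeeds -- the normal shape of a
# flaky downstream dependency.
calls = {"n": 0}
def flaky_send(url, task):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("agent busy")
    return {"status": "done"}

result = call_agent("https://triage.example/a2a",
                    {"goal": "label bugs"}, flaky_send)
```

The non-determinism point cuts deeper than this sketch shows: with HTTP services a retry reproduces the same response, but a retried agent call may plan differently each time, which is exactly why the idempotency cache stores the *result* rather than re-invoking on duplicates.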


The companies that win this transition will not be the ones that picked the right model vendor. They will be the ones whose engineering teams understood, two years before everyone else, that agents are going to need the same operational rigor we eventually demanded from microservices. The ones still arguing about Gemini vs. Claude are fighting the last war.


The real story of Cloud Next 2026 is not "Google is challenging OpenAI." It's "the agent layer is becoming infrastructure, and infrastructure has different winners than products."

