Google Cloud's flagship annual conference hit full stride on Day 2 (April 23, 2026) in Las Vegas, delivering a wave of infrastructure and AI platform announcements that signal Google's deepest push yet into enterprise agentic AI. From next-gen TPU silicon to a brand-new data architecture, here's everything that dropped.
The Big Picture: Google Goes Full-Stack on Agentic AI
If Day 1 set the stage with Gemini 3.1 and broad AI agent previews, Day 2 filled in the infrastructure backbone that makes those agents actually work at scale. Google's message is clear: it wants to own the full stack — chips, cloud, data, and agent orchestration — so enterprises don't have to stitch together multiple vendors.
Sundar Pichai appeared at the event to reinforce the company's commitment: Google Cloud is betting its enterprise future on agentic AI, and today's announcements are the technical proof.
New Silicon: TPU 8t and TPU 8i
The headline hardware news: Google unveiled its eighth-generation TPUs — two purpose-built chips designed for the distinct demands of training and inference in agentic workloads.
TPU 8t (Training)
The TPU 8t is Google's most powerful training chip ever. Built around breakthrough Inter-Chip Interconnect (ICI) technology, it can scale to 9,600 TPUs in a single superpod — all sharing 2 petabytes of high-bandwidth memory. That's an unprecedented pool of shared memory that lets the largest foundation models train without constant data shuffling.
Performance numbers:
- 3× the processing power of the previous Ironwood generation
- 2× better performance per watt — critical for cost efficiency at scale
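Those superpod figures imply a back-of-envelope per-chip share of the memory pool. The chip count and total HBM are from the announcement; the arithmetic and decimal-unit assumption are ours:

```python
# Back-of-envelope arithmetic from the announced TPU 8t superpod figures.
# Assumes decimal units (1 PB = 1e15 bytes); Google did not specify.
SUPERPOD_CHIPS = 9_600
SHARED_HBM_BYTES = 2e15  # 2 petabytes of shared high-bandwidth memory

per_chip_gb = SHARED_HBM_BYTES / SUPERPOD_CHIPS / 1e9
print(f"HBM per chip: ~{per_chip_gb:.0f} GB")  # ~208 GB
```

Roughly 208 GB per chip, in the same ballpark as current-generation accelerator memory, with the key difference being that the entire 2 PB pool is addressable across the pod.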
TPU 8i (Inference)
The TPU 8i takes a different approach, optimized for the latency-sensitive demands of serving millions of concurrent AI agents. It uses a new Boardfly topology to directly interconnect 1,152 TPUs in a single pod, and packs 3× more on-chip SRAM than its predecessor — enough to host large KV caches entirely on-silicon, cutting memory latency.
The headline number: 80% better performance per dollar versus the prior inference generation. For enterprises running hundreds of AI agents simultaneously, that's a significant cost lever.
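The on-chip SRAM claim is easiest to appreciate with a per-token KV-cache estimate. Google published neither model dimensions nor SRAM capacity, so every figure below is a hypothetical illustration of the arithmetic, not a spec:

```python
# Hypothetical KV-cache sizing sketch. None of these dimensions come from
# Google's announcement; they only illustrate why on-chip caches matter.
n_layers = 48        # hypothetical transformer depth
n_kv_heads = 8       # hypothetical (grouped-query attention)
head_dim = 128       # hypothetical per-head dimension
bytes_per_val = 2    # bf16

# Each token stores one key and one value vector per layer per KV head.
kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_val

seq_len = 32_768
cache_mb = kv_bytes_per_token * seq_len / 1e6
print(f"KV cache per token: {kv_bytes_per_token} bytes")
print(f"Cache for a {seq_len}-token context: ~{cache_mb:.0f} MB")
```

Even this modest hypothetical model needs gigabytes of KV cache at long context, which is why every extra megabyte of SRAM that keeps cache reads off HBM shows up directly in serving latency.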
Gemini Enterprise Agent Platform
Google's most strategically significant software announcement: the Gemini Enterprise Agent Platform, described as "end-to-end connective tissue for the agentic enterprise."
This is Google's answer to the perennial enterprise question: how do you actually manage agents at scale? The platform provides:
- Build — tools to create agents grounded in your enterprise data and APIs
- Scale — infrastructure to run thousands of agents concurrently without manual intervention
- Govern — policy controls, audit logs, and compliance guardrails
- Optimize — feedback loops and observability so agents improve over time
Think of it as mission control for a company's entire fleet of AI agents. Rather than each department spinning up its own rogue agent stack, IT and operations teams get a centralized control plane for seeing what every agent is doing, why, and at what cost.
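The four pillars map naturally onto a central registry that every agent reports through. The platform's real APIs are not yet public, so the sketch below is a purely hypothetical Python data model of the govern/optimize loop, not the actual SDK:

```python
# Hypothetical control-plane sketch -- NOT the Gemini Enterprise Agent
# Platform API, just an illustration of centralized govern/optimize.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner_team: str
    allowed_tools: set[str]
    spend_usd: float = 0.0
    calls: int = 0

class ControlPlane:
    def __init__(self, spend_cap_usd: float):
        self.spend_cap = spend_cap_usd
        self.agents: dict[str, AgentRecord] = {}

    def register(self, rec: AgentRecord) -> None:          # "Build"
        self.agents[rec.name] = rec

    def authorize(self, name: str, tool: str) -> bool:     # "Govern"
        rec = self.agents[name]
        return tool in rec.allowed_tools and rec.spend_usd < self.spend_cap

    def record_call(self, name: str, cost_usd: float) -> None:  # "Optimize"
        rec = self.agents[name]
        rec.calls += 1
        rec.spend_usd += cost_usd

plane = ControlPlane(spend_cap_usd=100.0)
plane.register(AgentRecord("invoice-bot", "finance", {"bigquery.read"}))
plane.record_call("invoice-bot", 0.02)
print(plane.authorize("invoice-bot", "bigquery.read"))  # True
print(plane.authorize("invoice-bot", "email.send"))     # False
```

The point of the sketch is the shape, not the code: one registry enforcing tool policy and spend caps is what separates a managed fleet from departmental sprawl.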
Agentic Data Cloud: A New Architecture for AI-Speed Data
One of the most technically ambitious announcements: Agentic Data Cloud, a completely rethought data architecture built for the speed and scale that agentic AI demands.
Traditional data architectures weren't designed for AI agents that need to query, synthesize, and act on data in milliseconds — not the hours that batch pipelines require. Agentic Data Cloud closes that gap with:
- Cross-cloud Lakehouse — unified analytics across multi-cloud environments without data duplication
- Knowledge Catalog — semantic layer that lets agents understand what data means, not just what it contains, enabling smarter retrieval
- AI-native query engine — optimized for the unstructured, high-frequency data access patterns that agents generate
For enterprises already deep in BigQuery or Spanner, this is an evolution, not a rip-and-replace. Google positioned it as bringing together the best of its existing data products under a new agentic-first umbrella.
Virgo Network: The Data Center Fabric Underneath It All
Less flashy but equally significant: Google announced Virgo Network, a new megascale data center fabric designed to underpin its AI Hypercomputer infrastructure for the next decade.
Virgo replaces older interconnect technology with a fabric purpose-built for the all-to-all communication patterns that large-scale AI training and inference require. It's the plumbing that makes the TPU 8t superpod numbers actually achievable in practice — without it, 9,600 chips talking to each other would create a networking bottleneck that negates the compute gains.
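The bottleneck claim follows from how all-to-all traffic scales: every chip exchanges data with every other chip, so the number of directed flows grows quadratically with pod size. A quick sketch (chip counts from the announcement, the rest generic arithmetic):

```python
# Why all-to-all communication stresses the fabric: directed flows grow
# as n * (n - 1), i.e. quadratically in the number of chips.
def all_to_all_flows(n_chips: int) -> int:
    return n_chips * (n_chips - 1)

# 1,152 = announced TPU 8i pod; 9,600 = announced TPU 8t superpod
for n in (256, 1_152, 9_600):
    print(f"{n:>6} chips -> {all_to_all_flows(n):,} simultaneous flows")
```

Going from a 1,152-chip pod to a 9,600-chip superpod multiplies the flow count by roughly 70×, which is why the fabric, not the silicon, becomes the binding constraint at that scale.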
New Model Releases
Alongside the infrastructure announcements, Google dropped several new model releases available immediately or in preview on Vertex AI:
- Gemini 3.1 Pro — enhanced reasoning and multimodal capabilities for enterprise workloads
- Gemini 3.1 Flash Image (Nano Banana 2) — faster, cheaper image-generation model for high-volume use cases
- Veo 3.1 Lite — lighter video generation model optimized for cost-sensitive deployments
- Lyria 3 Pro — next-generation music and audio generation model
$750 Million Partner Commitment
Google also used the event to announce a $750 million commitment to accelerate partners' agentic AI development — a mix of credits, co-investment programs, and go-to-market support for GSI and ISV partners building on Google Cloud's agentic stack.
This follows a pattern Google has used to build ecosystem momentum: make it economically attractive for Accenture, Deloitte, and hundreds of regional partners to build practices around your platform, creating a distribution network that Google's direct sales force couldn't replicate alone.
Why It Matters
- Full-stack approach means fewer integration headaches for enterprises
- TPU 8i price-performance improvement makes running agents at scale genuinely affordable
- $750M partner program creates broad ecosystem support
- Open architecture (cross-cloud Lakehouse) signals less lock-in than Microsoft's approach
Caveats to Watch
- Agentic Data Cloud still requires migration work for existing BigQuery customers
- Gemini Enterprise Agent Platform is newer and less battle-tested than Azure AI Foundry
- TPU availability has historically lagged behind NVIDIA A100/H100 access times outside top-tier regions
What's Coming on Day 3
Google Cloud Next wraps up April 24 with Day 3 sessions expected to focus on security, developer tools, and cloud-native application modernization. Watch for potential announcements around Chronicle security platform updates, Firebase/Cloud Run improvements, and deeper Workspace AI integration details.
For enterprises evaluating their AI cloud strategy in 2026, Google Cloud Next's Day 2 made one thing clear: Google is no longer content to compete on individual products. It's competing on the entire architecture of the agentic enterprise — and today's announcements are its strongest proof of that ambition yet.