Google Cloud Cements Values of Enterprise Readiness, Full-stack AI and Hybrid Cloud at Next 2025

In April Google Cloud hosted its annual Next event to showcase new innovations in AI. Staying true to the theme of “A New Way to Cloud,” Google focused on AI, including how AI can integrate with enterprises’ existing tech landscape, with partners playing the role of orchestrator. After Google CEO Sundar Pichai spoke about the company’s achievements around Gemini, which is integral to Google Cloud’s strategy, Google Cloud CEO Thomas Kurian highlighted the business’s three key attributes: optimized for AI; open and multicloud; and enterprise-ready. Additionally, Google Cloud announced a series of new innovations that highlight how the company is executing in these three areas in its bid to lead modern AI development.

Google takes an end-to-end approach to AI

When discussing Google Cloud’s three key attributes, Kurian first highlighted how Google Cloud Platform (GCP) is optimized for AI. Based on our own conversations with IT decision makers, this claim is valid: many customers enlist GCP services for specific functional strengths, believing they cannot obtain the same performance from another vendor. This is particularly true of BigQuery, for large-scale data processing and analytics, and increasingly of Vertex AI, which now supports over 200 curated foundation models for developers.
 
Within this set of models is, of course, Gemini, Google’s own suite of models, including the new Gemini 2.5 Pro, which has a context window of 1 million tokens and is reportedly now capable of handling advanced reasoning. To be fair, Google still faces stiff competition from other frontier model providers, but Google’s years of AI research through DeepMind and its ability to have models grounded in popular apps like Google Maps, not to mention Google Search, will remain among its key differentiators.
 
With that said, the AI software stack is only as effective as the hardware it runs on. That is why Google has been making some advances in its own custom AI accelerators, and at the event, Google reaffirmed its plans to invest $75 billion in total capex for 2025, despite the current macroeconomic challenges. A large piece of this investment will likely focus on paying for the ramp-up of Google’s sixth-generation TPU (Tensor Processing Unit) — Trillium — which became generally available to Google Cloud customers in December. Additionally, Google is making some big bets on the next wave of AI usage: inference.
 
At the event, Google introduced its seventh-generation TPU, dubbed Ironwood, which reportedly scales up to 9,216 liquid-cooled chips linked through a high-powered networking layer to support the compute-intensive requirements of inference workloads, including proactive AI agents. In 2024 there was a 3x increase in the number of collective TPU and GPU hours consumed by GCP customers, and while this growth was likely off a small base of hours to begin with, it is clear that customers’ needs and expectations around AI are increasing. These investments in AI hardware help round out key areas of Google’s AI portfolio ― beyond just the developer tools and proprietary Gemini models ― as part of a cohesive, end-to-end approach.
 


Recognizing the rise of AI inference, Google Cloud reinforces longtime company values of openness and hybrid cloud

With its ties to Kubernetes and multicloud editions of key services like BigQuery and AlloyDB, Google Cloud has long positioned itself as a more open cloud than its competitors. In recent quarters, however, the company has sharpened this focus, particularly with GDC (Google Distributed Cloud), which is essentially a manifestation of Anthos, Google’s Kubernetes-based control plane that can run in any environment, including at the edge. GDC has been the source of some big wins recently for Google Cloud, including McDonald’s, which is deploying GDC to thousands of restaurant locations, as well as several international governments running GDC as air-gapped deployments.
 
At Next 2025, Google announced it is making Gemini available on GDC as part of a vision to bring AI to environments outside the central cloud. In our view, this announcement is extremely telling of Google Cloud’s plans to capture the inference opportunity. Per our best estimate, roughly 85% of AI’s usage right now is focused on training, with just 15% in inference, but the inverse could be true in the not-too-distant future. Not only that, but inference will also likely happen in distributed locations for purposes of latency and scale. Letting customers take advantage of Gemini to build applications on GDC — powered by NVIDIA Blackwell GPUs — on premises or at the edge certainly aligns with market trends and will help Google Cloud ensure its services play a role in customers’ AI inference workloads regardless of where they are run.

Boosting enterprise mindshare with security, interoperability and Google-quality search

Kurian mentioned that customers leverage Google Cloud because it is enterprise-ready. In our research, we have found that while Google Cloud is highly compelling for AI and analytics workloads, customers believe the company lacks enterprise-grade capabilities, particularly when compared to Microsoft and Amazon Web Services (AWS). But we believe this perception is changing, and Google Cloud is recognizing that to gain mindshare in the enterprise space, it needs to lead with assets that will work well with customers’ existing IT estates and do so in a secure way. This is why the pending acquisition of Wiz is so important. As highlighted in a recent TBR special report, core Wiz attributes include not only being born in the cloud and able to handle security in a modern way but also connecting to all the leading hyperscalers, as well as legacy infrastructure, such as VMware.
 
Google Cloud has been very clear that it will not disrupt Wiz’s hybrid and multicloud capabilities. In fact, Google Cloud wants to integrate this value proposition, which suggests Google recognizes its place in the cloud market and the fragmented reality of large enterprises’ IT estates. Onboarding Wiz, which is used by roughly half of the Fortune 500, as a hybrid-multicloud solution could play a sizable role in helping Google Cloud assert itself in more enterprise scenarios. In the meantime, Google Cloud is taking steps to unify disparate assets in its security portfolio.
 
At Next 2025, Google Cloud launched Google Unified Security, which effectively brings Google Threat Intelligence, Security Operations, Security Command Center, Chrome Enterprise and Mandiant into a single platform. By delivering more integrated product experiences, Google helps address clients’ growing preference for “one hand to shake” when it comes to security and lays a more robust foundation for security agents powered by Gemini, such as the alert triage agent within Google Security Operations and the malware analysis agent in Google Threat Intelligence, which helps determine whether code is safe or harmful.
 
One of the other compelling aspects of Google’s enterprise strategy is Agentspace. Launched last year, Agentspace acts as a hub for AI agents that uses Gemini’s multimodal search capabilities to pull information from different storage applications (e.g., Google Drive, Box, SharePoint) and automate common productivity tasks like crafting emails and scheduling meetings. At the event, Google announced that Agentspace is integrated with Chrome, allowing Agentspace users to ask questions about their existing data directly through a search in Chrome. This is another clear example of where Google’s search capabilities come into play and is telling of how Google plans to use Agentspace to democratize agentic AI within the enterprise.

Training and more sales alignment are at the forefront of Google Cloud’s partner priorities

Google Cloud has long maintained a partner-first approach. Attaching partner services to virtually all deals; taking an industry-first approach to AI, particularly in retail and healthcare; and driving more ISV co-selling via the Google Cloud Marketplace are a few examples. At Next 2025, Google reaffirmed its commitment to partners, signaling more alignment between field sales and partners to ensure customers are matched with the right ISV or global systems integrator (GSI), a strategy many other cloud providers have tried to employ.
 
When it comes to the crucial aspect of training, partners clearly see the role Google Cloud plays in AI, and some of the company’s largest services partners, including Accenture, Cognizant, Capgemini, PwC, Deloitte, KPMG, McKinsey & Co., Kyndryl and HCLTech, have collectively committed to training 200,000 individuals on Google Cloud’s AI technology. Google has invested $100 million in partner training over the past four years, and as highlighted in TBR’s Voice of the Partner research, one of the leading criteria services vendors look for in a cloud partner is the willingness to invest in training and developing certified resources.

Google Cloud wants partners to be the AI agent orchestrators

As previously mentioned, Vertex AI is a key component of Google Cloud’s AI software stack. At Next 2025, Google Cloud introduced a new feature in Vertex called the Agent Development Kit, an open-source framework for building multistep agents. Google Cloud is also taking steps to ensure these agents can be seamlessly connected regardless of the underlying framework, such as by launching Agent2Agent (A2A), an open protocol similar to the Model Context Protocol introduced by Anthropic.
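Conceptually, a multistep agent framework like the Agent Development Kit lets developers compose discrete agents into an ordered pipeline, with each agent handling one step of a larger task. The minimal sketch below illustrates that idea only; it does not use the actual ADK API, and every class, function and agent name is hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

# Illustrative sketch only: the real Agent Development Kit defines its own API.
# Every name here is hypothetical and exists to show the multistep-agent idea.

@dataclass
class Agent:
    name: str
    handle: Callable[[str], str]  # each step transforms the running task state


def run_pipeline(agents: List[Agent], task: str) -> str:
    """Pass the task through each agent in order, a minimal multistep flow."""
    for agent in agents:
        task = agent.handle(task)
    return task


# Two toy agents: one "gathers facts," the next "drafts" from them
research = Agent("research", lambda t: t + " | facts gathered")
draft = Agent("draft", lambda t: t + " | summary drafted")

result = run_pipeline([research, draft], "summarize Q1 cloud trends")
```

In a real framework, each step would invoke a model or tool rather than a lambda, but the composition pattern is the same.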
 
Nearly all of the previously mentioned GSIs, in addition to Boston Consulting Group (BCG), Tata Consultancy Services (TCS) and Wipro, have contributed to the protocol and will be supporting implementations. This broad participation underscores the recognition that AI agents will have a substantial impact on the ecosystem.
 
New use cases will continue to emerge where agents are interacting with one another, not only internally but also across third-party systems and vendors. With the launch of the Agent Development Kit and the related protocol, Google Cloud seems to recognize where agentic AI is headed, and for Google Cloud’s alliance partners, this is an opportune time to ensure they have a solid understanding of multiparty alliance structures and are positioned to scale beyond one-to-one partnerships.
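For agents from different vendors to interact, they need a shared, framework-neutral message format, which is the problem an open protocol like A2A targets. The sketch below is purely illustrative: the actual A2A protocol defines its own schema and transport, and every field and function name here is invented.

```python
import json

# Hypothetical illustration of structured agent-to-agent handoff. The actual
# A2A protocol specifies its own message schema; all field names are invented.

def make_task_message(sender: str, recipient: str, task: str) -> str:
    """Serialize a task request so an agent on another system can parse it."""
    return json.dumps({"from": sender, "to": recipient, "task": task})


def accept_task_message(raw: str) -> dict:
    """Parse an incoming request without knowing which framework produced it."""
    msg = json.loads(raw)
    return {"accepted": True, "task": msg["task"], "reply_to": msg["from"]}


# One vendor's agent hands a task to another vendor's agent over the wire
wire = make_task_message("booking-agent", "inventory-agent", "check stock")
response = accept_task_message(wire)
```

Because both sides agree only on the message shape, either agent could be swapped for one built on a different framework without changing the exchange.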

Final thoughts

At Next 2025, Google reportedly announced over 200 new innovations and features, but developments in high-powered compute, hybrid cloud and security, in addition to ongoing support for partners, are particularly telling of the company’s plans to capture more AI workloads within the large enterprise. Taking an end-to-end approach to AI, from custom accelerators to a diverse developer stack that will let customers build their own AI agents for autonomous work, is how Google Cloud aims to protect its already strong position in the market and help lead the shift toward AI inferencing.
 
At the same time, Google Cloud appears to recognize its No. 3 position in the cloud market, significantly lagging behind AWS and Microsoft, which are getting closer to each other in IaaS & PaaS revenue. As such, taking a more active stance on interoperability to ensure AI can work within a customer’s existing IT estate, and ensuring that the partners with the enterprise relationships are the ones orchestrating that AI, will help Google Cloud chart its path forward.