Oracle Redefines Data Intelligence in Full-stack Approach

Oracle pivots around data intelligence, owing to its full-stack approach

Oracle has long offered a modern analytics stack tailored to multiple personas and workloads, most notably through Oracle Analytics Cloud (OAC) and Autonomous Data Warehouse (ADW), coupled with the operational data — where the true value exists — in Oracle’s Fusion, NetSuite and Industry Applications. But the 2023 launch of Fusion Data Intelligence (FDI) marked a major shift in Oracle’s analytics strategy and its early vision for data intelligence, in which data is used not only for static, one-and-done reporting but also for continual predictive insights made possible by AI.
 
As a reminder, Oracle delivers FDI as a single-SKU application, an approach not all peers take, so Fusion customers are connected to their data through the quarterly Fusion updates with minimal disruption to their workflows, which is an important enabler of data intelligence. From a technical perspective, FDI also comes with prebuilt AI and machine learning (ML) models, data science capabilities, and even its own separate set of persona-specific intelligent applications (e.g., Supply Chain Command Center), allowing customers to act on a particular use case without leaving FDI. FDI is now the fastest-growing application across the entire Oracle corporation.
 

Oracle Data Intelligence (Source: TBR)


 
Importantly though, effective data intelligence is about not only the application but also the underlying architecture and whether it can effectively support structured and unstructured data for complex analytics use cases. Oracle Cloud Infrastructure (OCI) has become a critical component of Oracle’s business, and because Oracle owns the infrastructure layer, the company has a top-down advantage that many other players cannot match.
 
The 2024 launch of Intelligent Data Lake reaffirmed how Oracle wants to further bridge the gap between applications and infrastructure, with an architecture that integrates with ADW and OAC. Essentially, Intelligent Data Lake is a reworking of existing OCI capabilities, such as cataloging and integration, to create a single abstraction layer that, in true data lake fashion, allows customers to query data on object storage, with support for popular data format frameworks including Apache Iceberg.
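Oracle has not detailed the query interface publicly, but the pattern Intelligent Data Lake describes, querying Iceberg-format tables that sit on object storage through a shared catalog, can be sketched with the open-source PyIceberg library; the catalog endpoint, bucket, namespace and table names below are hypothetical placeholders, not Oracle specifics:

```python
from pyiceberg.catalog import load_catalog

# Hypothetical catalog properties; a real deployment would point at its own
# REST catalog endpoint and an S3-compatible object storage bucket
catalog = load_catalog(
    "lakehouse",
    **{
        "uri": "https://catalog.example.com/api",
        "warehouse": "s3://analytics-bucket/warehouse",
    },
)

# Load an Iceberg table registered in the catalog and push a filter into the scan
orders = catalog.load_table("sales.orders")
emea_orders = orders.scan(row_filter="region == 'EMEA'").to_pandas()
print(emea_orders.head())
```

The point of the abstraction layer is that the same table definition on object storage is reachable from ADW, OAC or a standalone engine without copying the data out.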
 
Many peers have been moving more squarely into the data lake space to make it easier for customers to build AI applications on top of a single copy of data. But in the case of Oracle, Intelligent Data Lake serves as the glue between the infrastructure and applications. With Intelligent Data Lake, Oracle has essentially redelivered its analytics tools as part of the Data Intelligence Platform, offering another key layer that could make the case for best-of-breed customers to consolidate more of their data and business intelligence estates on Oracle.
 
Regarding those application components, customers can leverage FDI as a single product, but the data intelligence approach extends to NetSuite, Oracle Health (Cerner) and Industry Applications. For instance, last year Oracle launched Energy & Water Data Intelligence, leveraging insights from Industry Applications like Oracle Utilities Customer Cloud Service. More notably, as Oracle pivots around data intelligence, the company is taking steps to help customers access non-Oracle data sources.
 
For instance, last year Oracle launched a native Salesforce integration with FDI so customers can combine their CRM and Fusion data within the lakehouse architecture. This means Oracle customers can access Salesforce data with FDI the same way they can with Fusion. It will be interesting to see if Oracle more aggressively expands the data ecosystem in the future, particularly within the back office, to deliver FDI’s value to those outside the Oracle SaaS base.

FDI aligns with partners’ digital transformation ambitions

One of the compelling things about Oracle’s full-stack approach to analytics, from infrastructure up to applications, is that it prevents Oracle from getting caught up in a traditional BI RFP and instead enables the company to sell Oracle Data Intelligence as part of a broader enterprise transformation, which aligns with global systems integrators’ (GSIs) business models. Today, most of GSIs’ Oracle business comes from the applications side, and doing a Fusion SaaS implementation (e.g., ERP, HCM [human capital management]) and then introducing FDI to break down integration barriers and ultimately make those Fusion apps more “intelligent” appears to be a common motion.
 
In some cases, FDI is also displacing components of a build-your-own data strategy. For example, we recently heard a compelling example from Infosys, which modernized a customer’s analytics stack by migrating from Snowflake and Informatica to FDI, which was then integrated with external systems, including NetSuite. In a scenario like this, it is clear that having a lot of data in the Oracle ecosystem can influence a customer’s decision to consolidate on FDI, but it also speaks to the role Oracle plays on the infrastructure side, as FDI addresses not only the analytics pieces but also the underlying data tasks, including data pipelines, and absorbs system-level tasks like ETL (Extract, Transform, Load).
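To make concrete the kind of system-level task FDI’s prebuilt pipelines absorb, below is a generic, hand-scripted extract-transform-load step of the sort a build-your-own stack typically maintains; the connection strings, table and column names are hypothetical, and this illustrates the pattern rather than Oracle’s implementation:

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical source (operational system) and target (warehouse) connections
source = create_engine("oracle+oracledb://etl_user:pwd@source-host:1521/?service_name=OPS")
target = create_engine("postgresql+psycopg2://etl_user:pwd@warehouse-host/analytics")

# Extract: pull yesterday's invoices from the operational schema
invoices = pd.read_sql(
    "SELECT invoice_id, customer_id, amount, currency, invoice_date "
    "FROM invoices WHERE invoice_date = TRUNC(SYSDATE) - 1",
    source,
)

# Transform: filter to posted currencies and derive a reporting column
invoices = invoices[invoices["currency"].isin(["USD", "EUR"])].copy()
invoices["amount_usd"] = invoices["amount"]  # placeholder; real pipelines apply FX conversion

# Load: append into the warehouse fact table
invoices.to_sql("fact_invoices", target, if_exists="append", index=False)
```

Every step above is plumbing rather than analysis, which is why folding it into managed pipelines appeals to customers consolidating their stacks.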
 
Oracle’s full-stack approach to analytics makes a compelling case for consolidation, helping partners create value by eliminating disparate integrations and unlocking ROI. This is particularly true for partners that are perhaps willing to abandon the typical tech-agnostic approach and recommend Oracle as the primary choice from a data and analytics perspective. If Oracle engages a broader external data ecosystem in the future, as discussed above, partners will need to make sure they look beyond the applications layer and leverage Oracle’s broad PaaS and IaaS capabilities for custom development use cases.

Fujitsu Expands AI Strategy in Europe, Emphasizing Collaboration, Compliance and Customization

‘AI can be a knowledge management accelerator, but only if well-fed by an enterprise’s own data’

In February TBR met with two AI leaders in Fujitsu’s European Platform Business to better understand the company’s approach to the AI market, its evolving AI capabilities and offerings, and what we can expect as 2025 unfolds. Maria Levina, AI Business analyst, and Karl Hausdorf, head of AI Business, gave a detailed presentation focused primarily on the European market. The following reflects both that briefing and TBR’s ongoing research and analysis of Fujitsu, the company’s partners and peers, and the overall AI landscape.
 
One highlight that illustrates many facets of Fujitsu’s approach to AI was Levina and Hausdorf’s description of Fujitsu’s customers’ choices between “bring your own data” and “bring your own AI.” The first allows more AI-mature customers to bring their data into a Fujitsu-provided on-premises solution with full support for scaling, maintaining hardware and software, and updating, as needed.
 
The second allows customers to run their own AI stack on a Fujitsu “validated and optimized” platform, developed and maintained by Fujitsu and select technology partners. Critical, in TBR’s view, is Fujitsu’s positioning of these options as responsive to clients’ needs, as determined by all AI stakeholders within an enterprise, including IT, AI and business leaders.
 
Levina and Hausdorf explained, “Together, with our ecosystem of partners, we’re committed to unlock the potential of generative AI for our clients” through “on-premises data sovereign, sustainable private GPT and AI solutions,” focused on “rapid ROI.” Fujitsu is not approaching clients with a technology solution, but rather with options on how to address and solve business problems. As the Fujitsu team said, “Understand the why, know the what, and co-create the how.” The Fujitsu team also noted that the company’s industry expertise resides in the processes and workflows unique and/or critical to an industry.
 

‘Maintaining control of data means owning your AI in the future’

Before diving into Fujitsu’s AI offerings, the Fujitsu team laid out their understanding of the European market, sharing data the company collected around AI adoption, use of AI platforms, and barriers to growth (in Fujitsu’s phrasing, “progress-limiting factors,” which is perhaps a more positive spin on the usual list of barriers and challenges). Fujitsu surveyed or spoke with 400 data and IT professionals across six European countries, and the results indicated that overcoming legacy mindsets continues to be a major impediment to adopting and harnessing the value of AI.
 
TBR’s November 2024 Voice of the Customer Research similarly noted challenges in Europe with “the lack of engagement from employees who are being asked to change.” The Fujitsu team noted that change management, therefore, had to involve all AI stakeholders, including “IT people, business people and AI people” within an enterprise.
 
In TBR’s experience, IT services companies and consultancies continue to find new constituents for change management at their clients as the promise — and disruption — of AI becomes more widespread, reinforcing Fujitsu’s strategy of bringing change management to all AI stakeholders. Lastly, the Fujitsu team noted that among European clients, expectations around AI have risen, especially as AI initiatives have launched across multiple business units. Again, Fujitsu’s research and TBR’s Voice of the Customer Research align around ROI expectations as AI matures.
 
The Fujitsu team introduced their AI platform by delineating the key performance indicators they believe a successful platform must have: scaling, performance and speed, simplicity, energy efficiency, AI services in data centers, and GPUs.
 
Although TBR is not in a position to evaluate the technological strengths, completeness or complexity of Fujitsu’s platform, the expansive KPIs indicate Fujitsu has considered not only the IT needs behind an AI deployment but also the larger business factors, particularly the financial impacts. Levina and Hausdorf then dove into the details, including the two customer options described above (bring your own data and bring your own AI). They discussed how Fujitsu offers consulting around the technical and business implications of AI platforms and solutions, including an “AI Test Drive,” which allows clients to test AI solutions before investing in new technologies, large language models (LLMs) or other AI components.
 
Notably for TBR, Fujitsu’s presentation extensively highlighted the company’s AI alliance partners, including Intel, NVIDIA, AMD and NetApp, as well as a slew of LLM providers, demonstrating an appreciation for the collaborative and ecosystem-dependent nature of AI at the enterprise level. The Fujitsu team also stressed the European nature of its AI strategy and platform.
 
European clients, Fujitsu noted, had specific requirements related to the European Union’s (EU) General Data Protection Regulation and the EU AI Act, as well as a preference for on-premises solutions. The use cases Levina and Hausdorf described included a law firm using Fujitsu-enabled AI solutions to analyze case data, contracts, and corporate and public legal documents, as well as multiple deployments of Fujitsu-enabled private GPTs.

Additional observations

  • Fujitsu remains focused on targeting customers already aligned with the company around AI, a strategy that TBR believes speeds ROI and increases client retention.
  • In contrast to some peers in the IT services market, Fujitsu has capabilities across the entire AI technology stack — hardware, software and services — which Levina and Hausdorf called “highly appealing,” especially to European clients.
  • Levina and Hausdorf made two comments that, in TBR’s view, neatly sum up AI at present: “AI can be a knowledge management accelerator, but only if well-fed by an enterprise’s own data” and “maintaining control of data means owning your AI in the future.”

Fujitsu’s AI prowess makes it an invaluable partner

TBR has reported extensively on Fujitsu’s evolving AI capabilities and offerings, noting in a special report in May 2024: “TBR appreciates that Fujitsu’s combination of compute power and proven AI expertise makes the company a significant competitor and/or alliance partner for nearly every player fighting to turn GenAI [generative AI] hype into revenue.
 
“Second, Fujitsu’s vision of ‘converging technologies’ aligns exceptionally well with the more tectonic trends TBR has been observing in the technology space, indicating that Fujitsu’s market positioning is more strategic than transactional or opportunistic.” Add in Fujitsu’s deepening experience in delivering AI solutions to clients, and TBR continues to see tremendous near-term opportunity and growth for Fujitsu and its ecosystem partners.

Google Cloud Cements Values of Enterprise Readiness, Full-stack AI and Hybrid Cloud at Next 2025

In April Google Cloud hosted its annual Next event to showcase new innovations in AI. Staying true to the theme of “A New Way to Cloud,” Google focused on AI, including how AI can integrate with enterprises’ existing tech landscape, with partners playing the role of orchestrator. After Google CEO Sundar Pichai spoke about the company’s achievements around Gemini, which is integral to Google Cloud’s strategy, Google Cloud CEO Thomas Kurian highlighted the business’s three key attributes: optimized for AI; open and multicloud; and enterprise-ready. Additionally, Google Cloud announced a series of new innovations that highlight how the company is trying to execute on these three areas to be the leader in modern AI development.

Google takes an end-to-end approach to AI

When discussing Google Cloud’s three key attributes, Kurian first highlighted how Google Cloud Platform (GCP) is optimized for AI. Based on our own conversations with IT decision makers, this claim is valid: many customers enlist GCP services purely for functional purposes, as they believe they cannot obtain the same performance with another vendor. This is particularly true of BigQuery, for large-scale data processing and analytics, and increasingly Vertex AI, which now supports over 200 curated foundation models for developers.
 
Within this set of models is, of course, Gemini, Google’s own suite of models, including the new Gemini 2.5 Pro, which has a context window of 1 million tokens and is reportedly now capable of handling advanced reasoning. To be fair, Google still faces stiff competition from other frontier model providers, but Google’s years of AI research through DeepMind and its ability to have models grounded in popular apps like Google Maps, not to mention Google Search, will remain among its key differentiators.
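For readers less familiar with how developers consume these curated models, a minimal sketch of calling a Gemini model through the Vertex AI Python SDK follows; the project ID, region and exact model identifier are assumptions for illustration, and the SDK surface or available model versions may differ from what Google currently ships:

```python
import vertexai
from vertexai.generative_models import GenerativeModel

# Hypothetical project and region; replace with a real GCP project
vertexai.init(project="my-gcp-project", location="us-central1")

# Model ID is illustrative; available Gemini versions vary by release
model = GenerativeModel("gemini-2.5-pro")

response = model.generate_content(
    "Summarize the top three risks in last quarter's sales pipeline."
)
print(response.text)
```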
 
With that said, the AI software stack is only as effective as the hardware it runs on. That is why Google has been making some advances in its own custom AI accelerators, and at the event, Google reaffirmed its plans to invest $75 billion in total capex for 2025, despite the current macroeconomic challenges. A large piece of this investment will likely focus on paying for the ramp-up of Google’s sixth-generation TPU (Tensor Processing Unit) — Trillium — which became generally available to Google Cloud customers in December. Additionally, Google is making some big bets on the next wave of AI usage: inference.
 
At the event, Google introduced its seventh-generation TPU, dubbed Ironwood, which reportedly scales up to 9,216 liquid-cooled chips linked through a high-powered networking layer, to support the compute-intensive requirements of inference workloads, including proactive AI agents. In 2024 there was a 3x increase in the number of collective TPU and GPU hours consumed by GCP customers, and while this growth was likely off a small base, it is clear that customers’ needs and expectations around AI are increasing. These investments in AI hardware help round out key areas of Google’s AI portfolio ― beyond just the developer tools and proprietary Gemini models ― as part of a cohesive, end-to-end approach.
 


Recognizing the rise of AI inference, Google Cloud reinforces longtime company values of openness and hybrid cloud

With its ties to Kubernetes and multicloud editions of key services like BigQuery and AlloyDB, Google Cloud has long positioned itself as a more open cloud compared to its competitors. However, in recent quarters, the company has seemed to hone this focus more closely, particularly with GDC (Google Distributed Cloud), which is essentially a manifestation of Anthos, Google’s Kubernetes-based control plane that can run in any environment, including at the edge. GDC has been the source of some big wins recently for Google Cloud, including with McDonald’s, which is deploying GDC to thousands of restaurant locations, as well as several international governments running GDC as air-gapped deployments.
 
At Next 2025, Google announced it is making Gemini available on GDC as part of a vision to bring AI to environments outside the central cloud. In our view, this announcement is extremely telling of Google Cloud’s plans to capture the inference opportunity. Per our best estimate, roughly 85% of AI’s usage right now is focused on training, with just 15% in inference, but the inverse could be true in the not-too-distant future. Not only that, but inference will also likely happen in distributed locations for purposes of latency and scale. Letting customers take advantage of Gemini to build applications on GDC — powered by NVIDIA Blackwell GPUs — on premises or at the edge certainly aligns with market trends and will help Google Cloud ensure its services play a role in customers’ AI inference workloads regardless of where they are run.

Boosting enterprise mindshare with security, interoperability and Google-quality search

Kurian mentioned that customers leverage Google Cloud because it is enterprise-ready. In our research, we have found that while Google Cloud is highly compelling for AI and analytics workloads, customers believe the company lacks enterprise-grade capabilities, particularly when compared to Microsoft and Amazon Web Services (AWS). But we believe this perception is changing, and Google Cloud is recognizing that to gain mindshare in the enterprise space, it needs to lead with assets that will work well with customers’ existing IT estates and do so in a secure way. This is why the pending acquisition of Wiz is so important. As highlighted in a recent TBR special report, core Wiz attributes include not only being born in the cloud and able to handle security in a modern way but also connecting to all the leading hyperscalers, as well as legacy infrastructure, such as VMware.
 
Google Cloud has been very clear that it will not disrupt Wiz’s hybrid and multicloud capability. In fact, Google Cloud wants to integrate this value proposition, which suggests Google recognizes its place in the cloud market and the fragmented reality of large enterprises’ IT estates. Onboarding Wiz, which is used by roughly half of the Fortune 500, as a hybrid-multicloud solution could play a sizable role in helping Google Cloud assert itself in more enterprise scenarios. In the meantime, Google Cloud is taking steps to unify disparate assets in the security portfolio.
 
At Next 2025, Google Cloud launched Google Unified Security, which effectively brings Google Threat Intelligence, Security Operations, Security Command Center, Chrome Enterprise and Mandiant into a single platform. By delivering more integrated product experiences, Google helps address clients’ growing preference for “one hand to shake” when it comes to security and lays a more robust foundation for security agents powered by Gemini, such as the alert triage agent within Google Security Operations and the malware analysis agent in Google Threat Intelligence to help determine if code is safe or harmful.
 
One of the other compelling aspects of Google’s enterprise strategy is Agentspace. Launched last year, Agentspace acts as a hub for AI agents that uses Gemini’s multimodal search capabilities to pull information from different storage applications (e.g., Google Drive, Box, SharePoint) and automate common productivity tasks like crafting emails and scheduling meetings. At the event, Google announced that Agentspace is integrated with Chrome, allowing Agentspace users to ask questions about their existing data directly through a search in Chrome. This is another clear example of where Google’s search capabilities come into play and is telling of how Google plans to use Agentspace to democratize agentic AI within the enterprise.

Training and more sales alignment are at the forefront of Google Cloud’s partner priorities

Google Cloud has long maintained a partner-first approach. Attaching partner services to virtually all deals; taking an industry-first approach to AI, particularly in retail and healthcare; and driving more ISV coselling via the Google Cloud Marketplace are a few examples. At Next 2025, Google continued to reaffirm its commitment to partners, implying there will be more alignment between field sales and partners to ensure customers are matched with the right ISV or global systems integrator (GSI), a strategy many other cloud providers have tried to employ.
 
When it comes to the crucial aspect of training, partners clearly see the role Google Cloud plays in AI, and some of the company’s largest services partners, including Accenture, Cognizant, Capgemini, PwC, Deloitte, KPMG, McKinsey & Co., Kyndryl and HCLTech, have collectively committed to training 200,000 individuals on Google Cloud’s AI technology. Google has invested $100 million in partner training over the past four years, and as highlighted in TBR’s Voice of the Partner research, one of the leading criteria services vendors look for in a cloud partner is the willingness to invest in training and developing certified resources.

Google Cloud wants partners to be the AI agent orchestrators

As previously mentioned, Vertex AI is a key component of Google Cloud’s AI software stack. At Next 2025, Google Cloud introduced a new feature in Vertex called the Agent Development Kit, which is an open-source framework for building multistep agents. Google Cloud is taking steps to ensure these agents can be seamlessly connected regardless of the underlying framework, such as launching Agent2Agent (A2A), which is an open protocol, similar to protocols introduced by model providers like Anthropic.
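For alliance partners sizing up the developer experience, a minimal sketch of defining an agent with the Agent Development Kit is shown below, following the ADK’s published quickstart pattern; the agent name, tool and model ID are illustrative assumptions rather than anything announced at Next:

```python
from google.adk.agents import Agent

def get_order_status(order_id: str) -> dict:
    """Toy tool: look up an order's status in a hypothetical business system."""
    return {"order_id": order_id, "status": "shipped", "eta_days": 2}

# A single agent that decides when to call the tool above
root_agent = Agent(
    name="order_assistant",
    model="gemini-2.0-flash",  # illustrative model ID
    description="Answers questions about customer orders.",
    instruction="Use get_order_status when the user asks about a specific order.",
    tools=[get_order_status],
)
```

Agents built this way could then expose their capabilities to agents running on other frameworks through A2A, which is the interoperability point the GSIs discussed below are contributing to.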
 
Nearly all of the previously mentioned GSIs, in addition to Boston Consulting Group (BCG), Tata Consultancy Services (TCS) and Wipro, have contributed to the protocol and will be supporting implementations. This broad participation underscores the recognition that AI agents will have a substantial impact on the ecosystem.
 
New use cases will continue to emerge where agents are interacting with one another, not only internally but also across third-party systems and vendors. With the launch of the Agent Development Kit and the related protocol, Google Cloud seems to recognize where agentic AI is headed, and for Google Cloud’s alliance partners, this is an opportune time to ensure they have a solid understanding of multiparty alliance structures and are positioned to scale beyond one-to-one partnerships.

Final thoughts

At Next 2025, Google reportedly announced over 200 new innovations and features, but developments in high-powered compute, hybrid cloud and security, in addition to ongoing support for partners, are particularly telling of the company’s plans to capture more AI workloads within the large enterprise. Taking an end-to-end approach to AI, from custom accelerators to a diverse developer stack that will let customers build their own AI agents for autonomous work, is how Google Cloud aims to protect its already strong position in the market and help lead the shift toward AI inferencing.
 
At the same time, Google Cloud appears to recognize its No. 3 position in the cloud market, significantly lagging behind AWS and Microsoft, which are getting closer to each other in IaaS & PaaS revenue. As such, taking a more active stance on interoperability to ensure AI can work within a customer’s existing IT estate, and guaranteeing partners that have the enterprise relationships are the ones to orchestrate that AI, will help Google Cloud chart its path forward.

Here Comes KPMG: Client Trust, Alliance Focus and Tech-enabled Strategy Emphasized at 2025 Global Analyst Summit

Executing on its Collective Strategy through integrated scale and backed by robust strategic partnerships and platform-enabled services positions KPMG to remain a formidable competitor in the transforming professional services market

KPMG Global Chairman and CEO Bill Thomas kicked off the firm’s 2025 Global Analyst Summit by reinforcing the firm’s mission to be “the most trusted and trustworthy professional services firm.” As we have discussed at length across TBR’s professional and IT services research, firms like KPMG trade on trust with clients, alliance partners and employees. Putting a stake in the ground from the get-go provided Thomas and KPMG’s executives a strong foundation to rely on during the next two days as trust — at the human and technology level — was an underlying theme during presentations and demos.
 
Continuing the firm’s presentation, Thomas outlined KPMG’s evolving Collective Strategy, noting that the firm is 18 months into its latest iteration focused on “accelerating trust and growth.” Among the key enablers of achieving this goal is KPMG’s collapsing of its organizational structure from 150 country-specific member firms to a cluster of 30 to 40 regionally organized “economic units.” TBR views this pivot as the most natural evolution of KPMG’s operating model. For the Big Four, the biggest challenge is how to demonstrate value through integrated scale. Once completed, the reorganization will allow KPMG to minimize internal disruption and better compete for globally sourced opportunities, from what the firm calls “transactions to transformation,” and for large, multiyear, geographically dispersed enterprise, function and foundational transformations.
 
Following Thomas’ presentation, Carl Carande, KPMG U.S. & global lead, Advisory, and Regina Mayor, global head of Clients & Markets, amplified KPMG’s strategy, reinforcing the importance of the firm’s people, technology partners and technology, with AI as the catalyst and change agent of success. For example, Carande recognized that technology relationships are changing in two ways: Relationships are becoming more exclusive, and the multipartner alliance framework offers a multiplier effect — themes TBR has discussed at length throughout our Ecosystem Intelligence research stream.
 
Although KPMG continues to manage a robust network of alliance partners, highlighting its seven strategic partners — Google Cloud, Microsoft, Oracle, Salesforce, SAP, ServiceNow and Workday — solidifies its recognition of these vendors’ position throughout the ecosystem. Mayor expanded on Carande’s discussion around alliances through an industry lens, describing “alliance partners leaning in with KPMG” as they realize efforts to only sell the product will be insufficient. Meanwhile, on the KPMG side, alliance sales partners help figure out how to penetrate sector-specific alliance relationships.
 
Taking such a systematic approach across KPMG’s seven sectors (with the desire to expand these to 14) will allow the firm to demonstrate value and support its evolving Collective Strategy to act as a globally integrated firm. Additionally, new offerings like KPMG Velocity (discussed in depth below) will arm KPMG’s consultants with the necessary collective knowledge management to serve global clients locally, further supporting the firm’s strategy.
 
One could argue that many of KPMG’s steps, including launching partner-enabled industry IP, reinforcing trust, developing regionally organized operations, outlining a select few strategic partners, and investing in platform-enabled service delivery capabilities, resonate with the moves taken by many of its Big Four and large IT services peers. We see two differences.
 
First, KPMG is laser-focused on exactly which of the strategies above to amplify, rather than taking a trial-and-error approach. Second, it is about timing. Some of KPMG’s peers have tried these strategies for some time, with limited success because of poor execution or timing. We believe that as the professional services market goes through its once-in-a-century transformation, KPMG has an opportunity to ride the wave, provided it maintains internal consensus and executes on its operational and commercial model evolution with minimal disruption.
 

 

KPMG’s evolution will largely stem from orchestrating alliances with seven strategic technology partners

At the event, KPMG asserted the role of tech alliance partners in building the “firm of the future.” Although the firm works with a range of ISVs, a targeted focus on the firm’s seven strategic technology partners has become key to the company’s growth profile — with 50% of its consulting business alliance-enabled in the U.S. — and, as the case of previous audit client SAP shows, KPMG has been able to overcome barriers to ultimately help clients get the most out of technology. The firm’s approach of leading with client outcomes first and technology second is unchanged, but prioritizing a tech-enabled go-to-market approach will support KPMG’s position in the market behind two major trends.
 
The first trend is the overall maturation in partner alliance structures we see from the cloud vendors. Changes in programmatic structure, including bringing sales and partner delivery closer together, and an all-around shift in how partners are viewed among historically siloed vendors, could act as enablers for KPMG’s newer capabilities, including Velocity. Second, there is a big paradigm shift underway on where the value of tech exists. Increasingly, we see the firm moving down the stack, a trend enabled by agentic AI and customers’ need to harness their own data and build new applications. Across the Big Seven, there is no shortage of innovation. As the value of AI shifts down the technology stack, KPMG can leverage the technology to deliver business outcomes to clients.
 
To fully describe KPMG’s evolving technology alliance strategy and the firm’s growing capabilities, KPMG leaders hosted a panel discussion that included leaders from Microsoft, SAP, Salesforce and KPMG clients. Todd Lohr, KPMG’s head of Ecosystems for Advisory, set the stage by saying the firm views ecosystems not simply as a collection of one-to-one alliances but, instead, as many-to-many relationships, an idea TBR has increasingly heard expressed by consultancies, IT services companies, hyperscalers and software vendors.
 
Having leaders from technology partners on stage to display a very common example of a tech stack — with SAP as the system of record (SOR), Salesforce in the front office, and Microsoft as the platform with Copilot — was a strong way to depict the “many-to-many relationships” structure and KPMG’s role in orchestrating the ecosystem, especially in scenarios where some of these ISVs may not have a native integration and/or formal collaboration with one another. Lohr noted that KPMG “needs to show up understanding how complicated multiparty relationships work before showing up and working them out ad hoc at the client.” That direct acknowledgment of the challenges inherent in multiparty alliances is decidedly not something TBR consistently hears from KPMG’s peers and partners.

KPMG moves away from vendor agnosticism

One of the most important takeaways for TBR from the summit was KPMG’s willingness, in the right circumstances, to abandon the typical agnostic approach to recommending technologies and instead make a specific technology recommendation based on a deep understanding of the client’s needs. One client example highlighted this new(ish) approach. When the client reached out for advice on a sales-enablement platform, KPMG did not take an agnostic approach and, instead, told the client Salesforce was the only choice, based on KPMG’s evaluation.
 
Part of KPMG’s proposal rested on reworking the client’s processes so Salesforce could work as much out of the box as possible, limiting costs and customizations. As KPMG leaders described it, this reflected the opposite of most consultancies’ (and enterprises’) usual approach of forcing the business processes to work with a new technology. In a competitive bidding process, the lead KPMG partner, according to the client, answered questions on the Salesforce software and implementation issues without turning to others on the KPMG team, demonstrating mastery of Salesforce and the client’s IT environment that reassured the client about KPMG’s recommendations. Further, the client expressly did not want customization layers on top of Salesforce, knowing that would be more expensive over time.
 
Notably, the “fairly comprehensive implementation,” according to the client, took less than a year, including what the client said was “a lot of investment with KPMG in change management.” Recalling best practices TBR has heard in other engagements, the client team and KPMG called the Salesforce implementation Project Leap Frogs to avoid the word “transformation,” enabled champions across the enterprise, and held firm to the approach of making minimal customizations. In discussions with TBR, KPMG leaders confirmed that not being technology agnostic was contrary to the firm’s usual practice but was becoming more common.
 
Reinforcing that notion, a KPMG leader told TBR that the firm had lost a deal after it recommended Oracle and said SAP was not the right fit. The client selected SAP (for nontechnical reasons) but later awarded, without a competitive bidding process, Oracle-specific work to KPMG after noting respect for the firm’s honesty and integrity.

KPMG showcases client-centric innovation in action

ServiceNow implementation

A client story featuring a ServiceNow implementation that brought cost savings and efficiencies to the client notably emphasized change management, a core KPMG consulting capability that is sometimes overshadowed by technologies. The client described the “really good change management program that KPMG brought” as well as the emphasis on a clear data and technology core, out-of-the-box ServiceNow implementation, and limited customizations. In TBR’s view, KPMG’s approach with this engagement likely benefited considerably from the firm’s decades-long relationship with the client, playing to one of KPMG’s strengths, which the firm’s leaders returned to repeatedly in discussions with TBR: Trusted partnerships with clients create long-standing relationships and client loyalty.

Reimagining leaders

One client story centered on a five-day “reimagining leaders” engagement at the Lakehouse facility, conducted by the KPMG Ignition team. Surprisingly, KPMG included an immersive session with a separate KPMG team working on another client’s project, one that had little overlap with the business or technology needs of the leadership engagement client.
 
According to the KPMG Ignition team, the firm showcased how KPMG works, how innovation occurs at the working level, and how KPMG creates with clients, giving them confidence in KPMG’s breadth and depth of capabilities. Echoing sentiments TBR has heard during more than a decade of visiting transformation and innovation centers, KPMG Ignition leaders said that being enclosed on the Lakehouse campus made it easier for clients to be fully present throughout the engagement and removed from the distractions of day-to-day work.
 
KPMG kept the client in the dark about what to expect from the engagement, which prevented any biased expectations from creeping in before the engagement had even started. KPMG Ignition leaders shared additional insights, noting that it was a pilot program for rising leaders at the client, providing an immersive experience that showcased the power of the KPMG partnership.
 
Throughout the five-day immersion at KPMG Lakehouse, participants learned how to apply the methodologies that fuel innovation at KPMG while staying focused on one theme: reimagining leadership of the overall company and of the participants as next-generation leaders, as well as reimagining leadership capabilities at every level of the organization.
 
KPMG equipped the client’s leaders with methodologies emphasizing storytelling, design thinking and strategic insights, and strengthened the client’s culture by fostering high-performing, collaborative teams.
 
One final comment from the Ignition Center leaders: This pilot program “highlighted the fact that AI can be viewed as a wellness play across the agency if you free up capacity and understand what can be achieved.” Based on the use case and sidebar discussions TBR had with KPMG Ignition leaders, we believe Ignition Centers continue to evolve, although the basics remain the same: Get clients into a dedicated space outside their own office, use design thinking, and focus on business and innovation and leadership and change, not on technology.

The art of the possible

A final client story, presented on the main stage, wove together the themes of AI, transformation and trust. The client, a chemicals manufacturer and retailer, said KPMG consistently shared “what’s possible,” essentially making innovation an ongoing effort, not a one-off aspect of the relationship.
 
The client added that his company and KPMG had “shared values … and we understand each other’s cultures,” in part reinforced by KPMG dedicating the same team to the client during a multiyear engagement.
 
In TBR’s view, KPMG’s decision to highlight this client reinforced everything KPMG leaders had been saying during the summit: Relationships, built on consistent delivery and continually coming to the client with ideas and innovations, plus a commitment to the teaming aspect of the engagement, are KPMG’s superpower. Notably, this client was not a flashy tech company, a massive financial institution or a well-known government agency, and the work KPMG did was not cutting-edge or highly specialized but rather core KPMG capabilities — in short, what KPMG does well.

Velocity and GenAI: KPMG’s client-first approach to AI adoption and transformation

KPMG dedicated the second day of the analyst summit to AI, a decision that reflected the firm’s overall approach: Business decisions come first, enabled by technology. Supporting the firm’s AI strategy, KPMG has developed Velocity, a knowledge platform and AI recommendation and support engine, underpinned by one universal method, that pulls together every capability, offering and resource across the firm for the KPMG workforce. According to KPMG leaders, Velocity reinforces the firm’s multidisciplinary model and will become the primary way KPMG brings itself to clients.

In addition to sharing knowledge across the global firm, Velocity will help KPMG’s clients find the right AI journey that matches their ambitions — whether it be Enterprise, Function or Foundation — by allowing them to select a strategic objective they are trying to achieve, which function(s) they want to transform, and which technology platforms they want to transform on. The platform also reaffirms the firm’s acknowledgment of data’s role in AI. In fact, part of the rationale for Velocity was bringing the data modernization and AI business together while maintaining a focus on a sole client outcome. This means KPMG does not care whether customers build their data foundation with a hyperscaler or internally; as one leader in the AI Journey breakout session said, it is just about “helping clients do what they want to do.”

Velocity includes preconfigured journeys based on specific client needs, as developed, understood and addressed in KPMG’s previous engagements. Similar to many consultancies, KPMG begins engagements by developing an understanding of clients’ strategic needs and issues, rather than their technology stack. (TBR comment: easy to say, hard to do, especially when a firm has practices built around specific technologies.)

Velocity is designed to add value to client engagements, including describing, calculating and being accountable for that value. It will also bring a “tremendous amount of information” and is “highly tailorable,” according to a KPMG leader, who also noted that the platform’s adoption, use and usefulness over time will be key. KPMG leaders said the core aspects of AI — even agentic AI — are all the same, separated only by planning and orchestration. For example, KPMG’s AI Workbench underpins how the firm is bringing agents and AI-enabled services to its clients and its people. Velocity, then, is a KPMG offering where every step is focused on achieving client outcomes, which comes back to understanding clients’ key business issues, not simply their technology stack.

The internal launch of Velocity (starting in March 2025) into KPMG’s largest member firms brings the firm’s approach to AI to life. KPMG expects its member firms to be able to start unlocking the power of Velocity beginning in May and will launch Velocity externally later in the year. Amid caution on the client side around the adoption and implementation of AI technologies, KPMG’s David Rowlands, head of AI, discussed how KPMG wants to be client zero for AI, helping to ease clients’ ethics and security concerns by working through experimentation and into adoption and scale. Rowlands highlighted the firm’s attention to knowledge and the need to fully benefit from AI. Training around AI, including the definition of AI and how to use it; creating trust within AI; and learning effective AI prompts also fit within this strategy, enabling both KPMG and clients to effectively embed AI across people and operational strategies.

 

Velocity, AI and the future of audits

Three other AI-centric comments from KPMG leaders stood out for TBR:

  • With AI, “the road to value is paved with human behavior and change,” according to Rowlands, reflecting the firm’s emphasis on the business over the technology and the importance of change management — a core KPMG consulting strength.
  • Rowlands also noted that AI is a critical national infrastructure, dependent on energy, connectivity and networks, and should be considered a national investment priority and national security issue. In TBR’s view, framing AI this way — not as just a tool or another service to be sold — adds credibility to KPMG’s AI efforts.
  • According to Per Edin, KPMG’s AI leader, “ROI is clear and documented, but still not enough adoption to be as measurable as desired.” In TBR’s view, Edin’s sentiments track closely with TBR’s Voice of the Customer and Voice of the Partner research, which have repeatedly shown that interest and investments in AI have outpaced adoption, particularly at scale.

In a breakout session, KPMG walked through the firm’s well-established KPMG Clara platform, a tool designed to help the firm complete its audits more quickly and accurately. In essence, KPMG creates a digital twin of an organization, reflecting the firm’s understanding of where AI can be applied. KPMG Clara Transaction Scoring enables auditors to deliver what the firm calls “audit evidence” and note “outlier” transactions. According to KPMG leaders, “AI agents perform audit procedures and document results for human review, just like junior staff.”
 
Critically, KPMG Clara audits every transaction, not just a sample of transactions, increasing the likelihood of catching problems, issues and outliers. By flagging high-risk transactions, KPMG can deploy professionals to focus on solving real problems rather than adjudicating false positives or meaningless issues. In TBR’s view, this represents the proverbial “higher-value task” long-promised by robotic process automation, AI and analytics.
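KPMG has not disclosed how Clara Transaction Scoring works under the hood, so purely as an illustration of the underlying concept of scoring every transaction and surfacing outliers for human review, a generic anomaly-detection sketch (toy data and scikit-learn’s IsolationForest, not KPMG’s method) might look like this:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Toy general-ledger extract; an audit-scale run would score every transaction in the ledger
transactions = pd.DataFrame({
    "amount":       [120.00, 75.50, 98000.00, 64.20, 110.90, 89.75],
    "days_to_post": [1, 2, 45, 1, 3, 2],
    "manual_entry": [0, 0, 1, 0, 0, 0],
})

features = transactions[["amount", "days_to_post", "manual_entry"]]

# Fit an unsupervised outlier model and score each transaction
model = IsolationForest(contamination=0.2, random_state=42).fit(features)
transactions["anomaly_score"] = -model.score_samples(features)  # higher = more unusual
transactions["flagged"] = model.predict(features) == -1

# Reviewers would focus only on the flagged, highest-scoring rows
print(transactions.sort_values("anomaly_score", ascending=False).head(3))
```

The business value described above comes from that last step: professionals spend their time on the handful of flagged rows rather than rechecking every routine entry.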
 
When pressed by TBR, KPMG leaders said clients were not looking for rate cuts but rather for higher-quality audits and new insights into their operations. Importantly, clients also expect to spend less time on an audit, freeing up professionals’ time: The client can do what they do, and KPMGers can stay focused on flagged issues and generate new insights.
 
TBR remains a bit skeptical, but if clients do not expect a rate cut when KPMG deploys AI to speed up the audit process and instead expect to spend less time internally on what should be a higher-quality audit, TBR considers that a fantastic way to position AI while also reducing KPMG’s professionals’ time. There are two unanswered questions: What happens to the apprenticeship model, in which less-experienced KPMG professionals learn the art, not the science, of audit? And, in a few years, will 95,000 professionals conduct 400,000 audits (twice the current number) or will 50,000 professionals (half the current staff) complete 200,000 audits?
 
Regardless, as the company rolls out internally developed generative AI (GenAI) tools, the learning and experience captured through the firm’s implementation and change management process will undoubtedly be integrated into customer engagements involving third-party solutions. With SAP and Salesforce in attendance, KPMG zeroed in on each vendor’s AI strategy and how the firm plans to support it. To focus on Salesforce, Lohr echoed Salesforce CEO Marc Benioff in calling Agentforce the most successful Salesforce launch ever, which suggests a recognition from KPMG leadership of Salesforce’s agentic AI strategy.
 
For its part, KPMG highlighted the recent launch of an Agentforce Incubator, an experimental experience that can be delivered to clients from any location — a client site, a Salesforce event or a KPMG office — to ignite the ideation stage and begin exploring the road map from proof of concept to production. During one-on-one conversations, TBR explored KPMG’s view of its role in the agentic AI space, and we found it to be both pragmatic and valuable — similar to how the firm must be opinionated in broader digital transformation engagements.
 
KPMG’s journey to becoming an AI orchestrator will require the firm to take a stance on a vendor-by-vendor basis and arrive on-site with a preconceived understanding of the best path forward for clients given their goals. In addition to having an opinion, KPMG also recognizes it must help facilitate the road maps it lays out to clients, which will involve a heavy change management component, as well as a more technical design and development element. With the Agentforce example, once a targeted business outcome is established, an AI agent needs to be designed and developed to achieve the outcome. In many cases, a customer may lack the internal technical resources necessary to build the agent and tackle the problem. As KPMG avoids vendor agnosticism, the company can focus on building out technical resources with the vendors it chooses, building deeper benches with technical training associated with its strategic partners.

KPMG’s Lakehouse offers unique setting for analyst event

Less than 18 months after KPMG broke from the traditional analyst event style, the firm did it again. Hosting 62 analysts and dozens of global executives, clients and partners at its flagship Lakehouse facility for two days of formal and informal interactions, presentations, client use cases and demos, KPMG demonstrated agility in terms of delivery and engagement format yet, with a steady hand, continued to execute on its vision, with its global solutions — Connected, Powered, Trusted and Elevate — and proven IP, methods and enablers coming together through Velocity.
 
KPMG held one-on-one sessions between analysts and executives midway through the first day so that executives were present and engaged. Additionally, KPMG saved the all-about-AI-and-nothing-else sessions for the second day, which came off as, “We get AI is important, but we are also realistic and keeping our heads on straight and not being ‘me too, me too’ about AI.” KPMG senior executives sat in on both the client case study and platform breakout sessions. Subtle message to analysts: This stuff matters enough across the firm to be worth KPMG partners’ time even if it is not in their area.

Conclusion

As a member of the Big Four, KPMG has brand permission and a breadth of services that are relevant to nearly every role in any enterprise. As the firm executes on its Collective Strategy, TBR believes KPMG will accelerate the scale and completeness of its offerings, building on a solid foundation and expanding the gaps between KPMG and other consulting-led, technology-enabled professional services providers. 

KPMG’s global solutions — Connected, Powered, Trusted and Elevate — which resonate with clients and technology partners, have now been brought together into one transformation framework under KPMG Velocity, providing KPMG’s professionals with clear insight into the firm’s strengths and strategy, and underpinning, in the near future, all KPMG’s transformation engagements. KPMG Velocity’s evolving strategy will challenge KPMG’s leaders to execute on the promise of that transformation during the next wave of macroeconomic pressures, talent management battles and technology revolutions. At the same time, KPMG’s leaders recognize that their priorities are transforming the firm’s go-to-market approach, unlocking the power of the firm’s people, reimagining ways of working, and innovating capabilities and service enhancements. 

Success in executing these priorities, in TBR’s view, will come as KPMG shifts from building a foundation to scaling alongside the growing needs of its clients and as the era of GenAI presents yet another opportunity and challenge. Striking the right balance between elevating the potential of GenAI as a value creator and accounting for commercial and pricing model implications will test the durability of KPMG’s engagement and delivery frameworks. 

Although the firm has placed in motion many of the aforementioned investments over the past 12 to 18 months, the one opportunity that is changing relates to speed. As one enterprise buyer recently explained to TBR: “GenAI will force all services vendors to change. The [ones] who [will] be [the] most successful will be [those] who do it fast.” With speed comes risk — which KPMG fully acknowledges and is why KPMG Velocity’s offering is a differentiator for the firm in the market. With KPMG Velocity, all of KPMG’s multidisciplinary and heritage risk and regulatory considerations have been embedded across each transformation journey to ensure clients can remain compliant and avoid the pitfalls that can often arise during transformation.

Special report contributors: Catie Merrill, Senior Analyst; Kelly Lesizcka, Senior Analyst; Alex Demeule, Analyst; Boz Hristov, Principal Analyst

Comcast Business Nears $10B in Annual Revenue and Accelerates Enterprise Growth but Faces Headwinds from Competitive and Macroeconomic Pressures

2025 Comcast Business Analyst Conference, Philadelphia, April 2-3, 2025 — A select group of industry analysts gathered at the Comcast Center in Philadelphia to hear from Comcast Business leaders about the unit’s progress and success with its sales and go-to-market strategies. The central theme of the event was “Everything, Everywhere, All at Once,” which reflects Comcast Business’s ability to provide solutions to its customers through advancements in areas including AI implementation, network technologies, industry partnerships, and acquisitions, including Masergy and Nitel. The event was hosted by CNBC Senior Markets Correspondent Dominic Chu and included a State of the Business session from Comcast Business President Edward Zimmermann, a Strategy & Vision session from Comcast Business Chief Product Officer Bob Victor, and an update on Comcast’s network from Chief Network Officer Elad Nafshi. Also included were panel discussions with senior leadership as well as speaker sessions featuring industry partners Cisco and Intelisys and a Comcast Business customer within the brewing industry.

 

TBR perspective

Since its inception in 2006, Comcast Business has consistently grown into a more formidable competitor in the B2B telecom space. Most of this growth has stemmed from the SMB segment, where Comcast Business’ superior DOCSIS-based, hybrid fiber-coax (HFC) fixed broadband offerings were priced right compared to non-fiber-to-the-premises (FTTP) telco offerings and addressed demand for more bandwidth. Comcast Business’ strategy has evolved over the past decade to target additional growth segments, including midmarket businesses and multinational enterprises, via the operator’s managed services portfolio, strategic acquisitions including Masergy, and partnerships with global operators spanning 130 countries.
 
Other key growth drivers over the past decade include Comcast Business’ increasing focus on the public sector, including federal agencies, as well as the launch of portfolio segments including Comcast Business Mobile and value-added services in areas including SD-WAN, security and unified communications.
 
The success of Comcast Business’ growth strategies has enabled the company to essentially reach its long-term goal of generating $10 billion in annual revenue (Comcast Business generated $9.7 billion in revenue for full-year 2024). Comcast Business, which increased total revenue by about 5% in 2024, is also outpacing incumbent operators competing in the B2B market in revenue growth as service providers including AT&T, Verizon and Lumen continue to face significant revenue erosion from customers disconnecting from legacy data and voice services.
 
Despite its recent strong momentum, Comcast Business will encounter obstacles as it tries to increase revenue due to headwinds including competition from the expansion of fixed wireless access (FWA) and FTTP services, and macroeconomic pressures that will cause businesses to optimize spending on connectivity services. Despite its ability to sustain revenue growth in 2024, driven by increased revenue from enterprise solutions and higher average rates from small business customers, Comcast Business lost a net of 16,000 customer relationships in 2024 (compared to customer relationship net additions of 17,000 in 2023 and 52,000 in 2022).
 
TBR identifies rising B2B FWA adoption, especially through Verizon and T-Mobile, as the primary driver of the net loss, as more businesses gravitate to FWA for its cost savings over traditional fixed broadband services as well as its greater ease of installation, which helps companies quickly launch new branch locations. Comcast Business will also face competitive pressures from operators including AT&T, T-Mobile and Verizon that are expanding their FTTP footprints via organic builds and acquisitions, which will give these operators new opportunities to offer converged service plans combining mobile and broadband services. In addition, Comcast Business will continue to face pressures as businesses migrate off pay-TV and VoIP (voice over IP) services.
 
We expect that current macroeconomic challenges, including mass layoffs within the private and public sectors and uncertainty around tariff impacts, will create headwinds for Comcast Business and the overall U.S. B2B telecom market. Though network connectivity solutions such as broadband will remain essential to businesses, companies will need to optimize spending to counter macroeconomic pressures.
 
TBR believes these challenges will require Comcast Business to lean not only on the strengths of its network and solutions portfolio but also on a stronger value proposition to retain and grow its customer base. Comcast Business is also addressing these industry shifts by evolving its portfolio of adjacent solutions in areas including secure networking, cybersecurity and managed IT services to augment revenue from its core broadband services.

Impact and opportunities

Comcast Business is leveraging AI to optimize its network performance and sales and customer service capabilities

Sessions throughout the event discussed how deeper AI implementation will enable Comcast to enhance capabilities such as network performance and sales and customer service, thereby improving overall efficiency and customer experience. For instance, AI integration is enabling Comcast to automate over 99.7% of all software changes that it is making on its network, which is supporting network self-healing capabilities that can quickly resolve outages. These capabilities will help Comcast Business to more effectively retain customers as recent disruptions experienced by rivals, such as AT&T’s two major prolonged network outages in 2024, have resulted in customer losses and tarnished brand images for impacted operators.
 
AI is also enabling Comcast to enhance the cybersecurity of its network, including through the development of a next-generation firewall embedded into the network, which leverages GenAI and does not require dedicated CPE (customer premises equipment).
 
The vendor is also focused on training its customer care and sales teams to more effectively leverage AI to improve customer support and enhance operations. Comcast Business is increasing the number of AIOps use cases and applying AI and machine learning (ML) across its managed solutions platform to improve service delivery, assurance and management, both for customers and the internal teams that support customers (e.g., help desk, network operations center [NOC] and security operations center [SOC]).
 
Comcast expects AI to not only improve network and operational efficiencies but also provide significant revenue-generation opportunities, though the company is still in the early stages of developing strategies to do so. For instance, Comcast’s edge computing resources enable the company to deliver ultra-low latency of less than 1 millisecond to many of its customers. These capabilities will position Comcast to optimize connectivity and user experience for future advanced AI applications in areas such as AR/VR that will be more dependent on ultra-low latency, though current AI applications such as ChatGPT are not as dependent on ultra-low latency as they are mainly text-based.

Comcast Business continues to accelerate its data speeds to incentivize customers, though industry pricing pressures will hamper connection growth

Throughout the event, Comcast Business promoted its accelerating data speeds, which are aided by network advancements such as DOCSIS 4.0 and mid-split upgrades to Comcast’s HFC network. Enhancements to Comcast Business’ connectivity portfolio include extending the availability of its Dedicated Internet solution and upgrading the service to provide symmetrical download and upload speeds up to 200Mbps over HFC or up to 400Gbps over fiber.
 
Comcast expects to accelerate its Dedicated Internet solution to reach symmetrical speeds of 300Mbps over HFC and reach a total of over three million passings this year. Other updates to the Dedicated Internet solution include adding a network reliability guarantee, which provides SLAs ensuring 99.99% network uptime, and enhanced proactive network monitoring, which enables IT teams to optimize performance.
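For context, a 99.99% uptime commitment leaves only about 53 minutes of allowable downtime per year. A minimal back-of-the-envelope calculation, assuming a 365-day year, illustrates the arithmetic:

    # Convert an uptime SLA percentage into the downtime it allows per year.
    MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

    def allowed_downtime_minutes(uptime_pct: float) -> float:
        return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

    print(allowed_downtime_minutes(99.99))  # ~52.6 minutes of downtime per year
    print(allowed_downtime_minutes(99.9))   # ~525.6 minutes, an order of magnitude more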
 
TBR believes these updates will help to attract clients with bandwidth-intensive workloads, especially customers with more stringent SLA requirements necessitating minimal network downtime. However, TBR also recognizes that competitive pricing and the overall value proposition provided by operators are becoming more influential factors in contract wins within the B2B market. As evidenced by the robust uptake of FWA, small businesses are especially concerned with the value they are getting for the price paid, and they are migrating to lower-cost broadband offerings to obtain internet access that more closely meets their needs and aligns with what they are willing to pay.
 
T-Mobile and Verizon are feeding this market shift to “rightsized bandwidth” through clever marketing and customer education about what businesses actually need. Comcast Business will need to demonstrate why its cutting-edge broadband offerings are necessary for its customers in order to justify the premium pricing. It also has an opportunity to further strengthen the value proposition of its value-added services when combined with its core broadband services. Comcast Business Mobile is a key portfolio segment Comcast Business can further leverage to combat pressures from rivals’ FWA services and converged service bundles.
 
Though Comcast Business Mobile connections are not reported by Comcast, TBR believes just a small portion of Comcast Business customers are currently enrolled in the service, as only 12% of Comcast’s residential broadband customers were enrolled in Xfinity Mobile in 4Q24. Heavier promotional activity, such as offering free lines for a limited time, could help Comcast Business Mobile compete more aggressively against rival B2B smartphone plans while creating a stickier ecosystem to retain high-value broadband customers long-term. Comcast Business Mobile’s impact is limited to the SMB market, however, as the brand supports a maximum of 20 lines per business customer under Comcast’s current MVNO agreement with Verizon.

Comcast will strengthen its enterprise business and expand sales channels via the Nitel acquisition

The analyst event coincided with Comcast Business closing its acquisition of Chicago-based managed service provider Nitel on April 1 from private equity firm Cinven. Nitel is a NaaS (Network as a Service) provider offering solutions in areas such as networking (including SD-WAN and SASE [Secure Access Service Edge]), cloud services and cybersecurity.
 
Through the purchase, Comcast Business will expand its footprint in the midmarket and enterprise customer segments and gain Nitel’s 6,600 clients across the U.S. within verticals including financial services, healthcare and education. Acquiring Nitel also enables Comcast Business to expand its channel distribution strategy to more effectively target new sales opportunities within the midmarket and enterprise segments.
 
Comcast Business is also gaining AI and software tools from the Nitel acquisition that will enable it to enhance its sales and customer service capabilities. These benefits include robust orchestration capabilities, an instant quoting tool that makes it easier to price and establish deals across multiple vendors and sites, and a digital dashboard that offers a single-pane-of-glass view of deployments.

Conclusion

Comcast Business remains in a relatively strong position within the B2B market as the company continues to outpace its competitors in revenue growth and will continue to expand its client base within the midmarket and large enterprise segments by leveraging its Masergy and Nitel acquisitions. Comcast Business also has an opportunity to increase revenue by refining its international strategy and more deeply leveraging assets — such as its managed services portfolio, partnerships with global operators across 130 countries, and numerous acquisitions including Sky, Masergy, Nitel, Deep Blue Communications and Blueface — that enable it to support multinational corporations.
 
However, SMB, which accounts for the majority of Comcast Business’ revenue, is becoming a more challenging segment in which to grow market share as FWA competition and macroeconomic challenges lead to spending constraints. These headwinds will require Comcast Business to become more focused on enhancing its value proposition to retain and grow its SMB client base and combat competitive pressures in the market.

Sheer Scale of GTC 2025 Reaffirms NVIDIA’s Position at the Epicenter of the AI Revolution

As the undisputed leader of the AI market, NVIDIA and its GPU Technology Conference (GTC) are unmatched compared to other companies and their respective annual events when it comes to the enormous impact they have on the broader information technology market. GTC 2025 took place March 17-21 in San Jose, Calif., with a record-breaking 25,000 in-person attendees — and 300,000 virtual attendees — and nearly 400 exhibitors on-site to showcase solutions built leveraging NVIDIA’s AI and accelerated computing platforms.

NVIDIA GTC 2025: Pioneering the future of AI and accelerated computing

In 2024 NVIDIA CEO and cofounder Jensen Huang called NVIDIA GTC the “Woodstock of AI,” but to lead off the 2025 event’s keynote address at the SAP Center, he aptly changed his phrasing, calling GTC 2025 “the Super Bowl of AI,” adding that “the only difference is that everybody wins at this Super Bowl.”
 
While the degree to which every tech vendor “wins” in AI will vary, NVIDIA currently serves as the rising tide that is lifting all boats — in this case, hardware makers, ISVs, cloud providers, colocation vendors and service providers — to help accelerate market growth despite the economic and geopolitical struggles that have hampered technology spending in the post-COVID era. NVIDIA’s significant investments not as a GPU company but as a platform company — delivering on innovations in full-stack AI and accelerated computing infrastructure and software — have provided much of the foundation upon which vendors across the tech ecosystem continue to build their AI capabilities.
 
During the event, which also took place at the nearby San Jose McEnery Convention Center, Huang shared his vision for the future, emphasizing the immense scale of the inference opportunity while introducing new AI platforms to support what the company sees as the next frontiers of AI. Additionally, he reaffirmed NVIDIA’s commitment to supporting the entire AI ecosystem by building AI platforms, rather than AI solutions, to drive coinnovation and create value across the entire ecosystem.

The transformation of traditional data centers into AI factories represents a $1 trillion opportunity

The introduction of ChatGPT in November 2022 captured the attention of businesses around the world and marked the beginning of the generative AI (GenAI) revolution. Since then, organizations across all industries have invested in the exploration of GenAI technology and are increasingly transitioning from the prototyping phase to the deployment phase, leveraging the power of inference to create intelligent agents, power autonomous vehicles and drive other operational efficiencies. As AI innovation persists, driven largely by the vision of Huang and the increasingly capital-rich company behind him, new AI paradigms are emerging and NVIDIA is helping the entire AI ecosystem to prepare and adapt.

The rise of reasoning

On Jan. 27, otherwise known as DeepSeek Monday, NVIDIA stock closed the day down 17.0% from the previous day’s trading session, with investors believing DeepSeek’s innovations would materially reduce the total addressable market for AI infrastructure. DeepSeek claimed that by using a combination of model compression and other software optimization techniques, it had vastly reduced the amount of time and resources required to train its competitive AI reasoning model, DeepSeek-R1. However, at GTC 2025, NVIDIA argued that investors misunderstood the implications of these techniques on the inference side of the AI model equation.
 
Traditional knowledge-based models can quickly return answers to users’ queries, but because basic knowledge-based models rely solely on the corpus of data that they are trained on, they are limited in their ability to address more complex AI use cases. To enhance the quality of model outputs, AI model developers are increasingly leveraging post-training techniques such as fine-tuning, reinforcement learning, distillation, search methods and best-of-n sampling. However, more recently test-time scaling, also known as long thinking, has emerged as a technique to vastly expand the reasoning capabilities of AI models, allowing them to address increasingly complex queries and use cases.

From one scaling law to three

In the past, pre-training scaling was the single law dictating how applying compute resources would impact model performance, with model performance improving as pre-training compute resources increased. However, at GTC 2025, NVIDIA explained that two additional scaling laws are now in effect — post-training scaling and test-time scaling. As their names suggest, model pre-training and post-training are on the AI model training side of the equation. However, test-time scaling takes place during inference, allocating more computational resources during the inference phase to allow a model to reason through several potential responses before outputting the best answer.
 
Traditional AI models operate quickly, generating hundreds of tokens to output a response. However, with test-time scaling, reasoning models generate thousands or even tens of thousands of thinking tokens before outputting an answer. As such, NVIDIA expects the new world of AI reasoning to drive more than 100 times the token generation, equating to more than 100 times the revenue opportunity for AI factories.
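A minimal sketch, using assumed per-request token counts and an assumed output-token price (illustrative figures, not NVIDIA's), shows how a roughly 100 times increase in token generation flows through to AI factory revenue:

    # Illustrative comparison of token volume and revenue for a knowledge-based model
    # vs. a reasoning model; every number here is an assumption for illustration only.
    PRICE_PER_MILLION_OUTPUT_TOKENS = 10.0  # assumed price in dollars

    def revenue(requests: int, output_tokens_per_request: int) -> float:
        total_tokens = requests * output_tokens_per_request
        return total_tokens / 1_000_000 * PRICE_PER_MILLION_OUTPUT_TOKENS

    one_shot = revenue(requests=1_000_000, output_tokens_per_request=300)      # ~$3,000
    reasoning = revenue(requests=1_000_000, output_tokens_per_request=30_000)  # ~$300,000
    print(f"Knowledge-based model revenue: ${one_shot:,.0f}")
    print(f"Reasoning model revenue: ${reasoning:,.0f} ({reasoning / one_shot:.0f}x)")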
 
During an exclusive session with industry analysts, Huang said, “Inference is the hardest computing at scale problem [the world has ever seen],” dispelling the misconception that inference is somehow easier and demands fewer resources than training while also indirectly supporting Huang’s belief that the transformation of traditional data centers into AI factories will drive total data center capital expenditures (capex) to $1 trillion or more by 2028.
 

Graph: NVIDIA Revenue, Growth and Projections (Source: TBR)

 
While $1 trillion in data center capex by 2028 sounds like a lofty threshold on the surface, TBR believes the capex amount and timeline are feasible considering NVIDIA’s estimate that 2024 data center capex was around $400 billion.
 
Additionally, announcements centered on investment commitments to build out data centers became increasingly common during 1Q25, and TBR expects this trend to only accelerate over the next few years. For example, in January the Trump administration announced the Stargate Project with the intent to invest $500 billion over the next four years to build new AI infrastructure in the United States.
 
However, it is worth noting that Stargate’s $500 billion figure represents more than just AI servers; it includes other items such as the construction of new energy infrastructure to power data centers. TBR believes the same holds true for NVIDIA’s $1 trillion figure, especially when considering TBR’s 2024 total AI server market estimate of $39 billion.
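As a rough plausibility check on that trajectory, growing from roughly $400 billion in 2024 to $1 trillion by 2028 implies a compound annual growth rate of about 26%, as the simple calculation below illustrates (figures taken from the estimates cited above):

    # Implied compound annual growth rate (CAGR) to move from ~$400B of data center
    # capex in 2024 to ~$1T in 2028, per the estimates discussed in the text.
    start, end, years = 400e9, 1e12, 4
    cagr = (end / start) ** (1 / years) - 1
    print(f"Implied CAGR: {cagr:.1%}")  # ~25.7% per year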

The more you buy, the more you make: NVIDIA innovates to maximize potential AI factory revenue

To support the burgeoning demands of AI, NVIDIA is staying true to the playbook through which it has already derived so much success — investing in platform innovation and the support of its growing partner ecosystem to drive the adoption of AI technology across all industries.

AI factory revenue relies on user productivity

Reasoning capabilities allow models to meet the demands of a wider range of increasingly complex AI use cases. Although the revenue opportunity of AI factories increases as AI reasoning drives an exponential rise in token generation, expanding token generation also creates bottlenecks within AI factories, so there is inevitably a tradeoff. To maximize revenue potential, AI factories must optimize the balance between token volume and cost per token.
 
From the perspective of an AI inference service user, experience comes down to the speed at which answers are generated and the accuracy of those answers. Accuracy is tied directly to the underlying AI model(s) powering the service and can be thought of as a constant variable in this scenario, while the speed at which answers are generated for a single user is dictated by the rate of output token generation for that specific user. Having more GPUs dedicated to serving a single user results in an increased rate of output token generation for that user and is something that users are typically willing to pay a premium for.
 
However, in general, as more GPUs are dedicated to serving a single user, the overall output token generation of the AI factory falls. On the opposite end of the spectrum, an AI factory can maximize its overall output token generation by changing GPU resource allocations to serve a greater number of users at the same time; however, this has a negative impact on the rate of output tokens generated per user, increasing request latency and thereby detracting from the user’s experience.
 
As NVIDIA noted during the event, to maximize revenue, AI factories must optimize the balance of total factory output token generation and the rate of output token generation per user. However, once the optimal allocation of GPU resources is determined, revenue opportunity hits a threshold. As such, to increase the productivity and revenue opportunity of AI factories, NVIDIA supports the AI ecosystem with its investments in the development of increasingly performant GPUs, allowing for greater total factory output token generation as well as increased rates of output token generation per user.
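This tradeoff can be sketched as a simple optimization problem: as more concurrent users share a fixed pool of GPUs, total factory throughput rises but per-user token rates fall, and a minimum acceptable per-user rate caps how far an operator can push concurrency. In the toy model below, the saturation curve, capacity ceiling and experience floor are all assumptions for illustration rather than NVIDIA data; it simply finds the concurrency level that maximizes total token output (a proxy for revenue) while respecting a per-user floor:

    # Toy model of the AI factory tradeoff described above: total output token
    # throughput vs. per-user token rate as concurrency rises on a fixed GPU budget.
    PEAK_FACTORY_TOKENS_PER_SEC = 1_000_000  # assumed ceiling with maximal batching
    MIN_ACCEPTABLE_USER_RATE = 25            # assumed per-user tokens/sec experience floor

    def factory_throughput(concurrent_users: int) -> float:
        # Total throughput rises with concurrency but saturates (diminishing returns).
        return PEAK_FACTORY_TOKENS_PER_SEC * concurrent_users / (concurrent_users + 5_000)

    def per_user_rate(concurrent_users: int) -> float:
        return factory_throughput(concurrent_users) / concurrent_users

    best_users = max(
        (u for u in range(1, 200_001, 100) if per_user_rate(u) >= MIN_ACCEPTABLE_USER_RATE),
        key=factory_throughput,
    )
    print(f"{best_users:,} concurrent users, "
          f"{factory_throughput(best_users):,.0f} total tokens/sec, "
          f"{per_user_rate(best_users):.1f} tokens/sec per user")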
 
During his keynote address, Huang laid out NVIDIA’s four-year GPU road map, detailing the upcoming Blackwell Ultra as well as the NVIDIA GB300 NVL72 rack, which leverages Blackwell Ultra and features an updated NVL72 design for improved energy efficiency and serviceability. Additionally, he discussed the company’s Vera Rubin architecture, which is set for release in late 2026 and marks the shift from HBM3/HBM3e to HBM4 memory, as well as Vera Rubin Ultra, which is expected in 2027 and will leverage HBM4e memory to deliver higher memory bandwidth. To round out NVIDIA’s four-year road map, Huang announced the company’s Feynman GPU architecture, which is slated for release in 2028.

Scale up before you scale out, but NVIDIA supports both

In combination with NVIDIA’s updated GPU architecture road map, Huang revealed preliminary technical specifications for the Vera Rubin NVL144 and Rubin Ultra NVL576 racks, with each system being built on iterative generations of the company’s ConnectX SuperNIC and NVLink technologies, promising stronger networking performance with respect to increased bandwidth and higher throughput. NVIDIA’s growing focus on NVL rack systems underscores Huang’s philosophy that organizations should scale up before they scale out, prioritizing the deployment of fewer densely configured AI systems compared to a greater number of less powerful systems to drive simplicity and workload efficiency.
 

Graph: 2024 Data Center GPU Market Share (Source: TBR)

 
Networking has become, and continues to become, more integral to NVIDIA’s business as the company’s industry-leading advancements in accelerated compute have necessitated full-stack AI infrastructure innovation. While NVIDIA drives accelerated computing efficiency on and close to the motherboard through the design of increasingly high-performance GPUs and CPUs and its ongoing investments in ConnectX and NVLink, the company is also heavily invested in driving AI infrastructure efficiency through its networking platform investments in Quantum-X InfiniBand and Spectrum-X Ethernet.
 
Although copper is well suited for short-distance data transmissions, fiber optics is more effective over long distances. As such, the scale-out of AI factories requires an incredible number of optical transceivers to connect every NIC (network interface card) to every switch, representing the single largest hardware component in a typical AI data center. NVIDIA estimates that optical transceivers consume approximately 10% of total computing power in most AI data centers. During his keynote address, Huang announced NVIDIA Photonics — what the company describes as a coinvention across an ecosystem of copacked optics partners — to reduce power consumption and the number of discrete components in an AI data center.
 
Leveraging components from partners, including TSMC, Sumitomo and Corning, NVIDIA Photonics allows NVIDIA to replace pluggable optical transceivers with optical engines that are copackaged with the switch ASIC. This allows optical fibers to plug directly into the switch with the onboard optical engine processing and converting incoming data — in the form of optical signals — into electrical signals that can then be immediately processed by the switch. Liquid-cooled Quantum-X Photonic switch systems are expected to become available later this year ahead of the Spectrum-X Photonic switch systems that are coming in 2026. NVIDIA claims that the new systems improve power efficiency by 3.5x while also delivering 10x higher resiliency and 1.3x faster time to deploy compared to traditional AI data center architectures leveraging pluggable optical transceivers.

Securing the developer base

Adjacent to what the company is doing in the data center, NVIDIA announced other, more accessible Blackwell-based hardware platforms, including RTX PRO Series GPUs, DGX Spark and DGX Station, at GTC 2025. At CES (Consumer Electronics Show) 2025 in January, NVIDIA made two major announcements: Project DIGITS, a personal AI supercomputer that provides AI researchers, data scientists and students with access to the Grace Blackwell platform; and the next-generation GeForce RTX 50 Series of consumer desktop and laptop GPUs for gamers, creators and developers.
 
Building on these announcements, at GTC 2025 NVIDIA introduced DGX Spark, the new name of the previously announced Project DIGITS, leveraging NVIDIA GB10 Grace Blackwell Superchip and ConnectX-7 to deliver 1,000 AI TFLOPS (tera floating-point operations per second) performance in an energy-efficient and compact form factor. DGX Spark will come pre-installed with the NVIDIA AI software stack to support local prototyping, fine-tuning and inferencing of models with up to 200 billion parameters, and NVIDIA OEM partners ASUS, Dell Technologies, HP Inc. and Lenovo are already building their own branded versions.
 
To complement its recently unveiled GeForce RTX 50 Series, NVIDIA announced a comprehensive lineup of RTX PRO Series GPUs for laptops, desktops and servers with “PRO” denoting the solutions’ intent to support enterprise applications. At the top end of the lineup, RTX PRO 6000 will deliver up to 4,000 AI TFLOPS performance, making it the most powerful discrete desktop GPU ever created. While DGX Spark systems will be available beginning in July, DGX Station is expected to be released toward the end of the year. DGX Station promises to be the highest-performing desktop AI supercomputer, featuring the GB300 Grace Blackwell Ultra Desktop Superchip and ConnectX-8, with OEM partners, including ASUS, BOXX, Dell Technologies, HP Inc., Lambda and Supermicro, building systems. Together, these announcements highlight NVIDIA’s commitment to democratizing AI and supporting developers.

Software is the most important feature of NVIDIA GPUs

In TBR’s 1Q24 Semiconductor Market Landscape, NVIDIA led all vendors in terms of trailing 12-month (TTM) corporate revenue growth, with hardware revenue accounting for an estimated 88.9% of the company’s TTM top line. However, while NVIDIA’s industry-leading top-line growth continues to be driven primarily by increasing GPU and AI infrastructure systems sales, the reason customers choose NVIDIA hardware ultimately boils down to two interrelated factors: the company’s developer ecosystem and its AI platform strategy.

The CUDA advantage

In 2006 NVIDIA introduced CUDA (Compute Unified Device Architecture), a parallel programming model and framework purpose-built to enable the acceleration of workloads beyond graphics. With CUDA, developers gained the ability to code applications optimized to run on NVIDIA GPUs. Since CUDA’s inception, NVIDIA has relentlessly invested in strengthening the platform, supporting backward compatibility, publishing new CUDA libraries, and giving developers new resources to optimize application performance and simplify application development.
 
As such, many legacy AI applications and libraries are rooted in CUDA, whose documentation is light years ahead of competing platforms, such as AMD ROCm. With respect to driving AI efficiency, several NVIDIA executives and spokespeople at GTC 2025 circled back to the notion that, when it comes to enabling the most complex AI workloads of today and tomorrow, software optimization is as important as, if not more important than, infrastructure innovation and optimization, underscoring the unique value behind NVIDIA’s CUDA-optimized GPUs. In short, at the heart of NVIDIA’s comprehensive AI stack and competitive advantage is CUDA, and as Huang emphasized to the attending industry analysts, “Software is the most important feature of NVIDIA GPUs.”

A new framework for AI inference

As the AI inference boom materializes, NVIDIA has leveraged the programmability of its GPUs to optimize the performance of reasoning models at scale, with Huang introducing NVIDIA Dynamo at GTC 2025. Dynamo is an open-source modular inference framework that was designed to serve GenAI models in multinode distributed environments and specifically developed for accelerating and scaling AI reasoning models to maximize token revenue generation.
 
The framework leverages a technique called “disaggregated serving,” which separates the processing of input tokens in the prefill phase of inference from the processing of output tokens in the decode phase. Traditional large language model (LLM) deployments leverage a single GPU or GPU node for both the prefill and decode phases, but each phase has different resource requirements, with prefill being compute-bound and decode being memory-bound. As NVIDIA’s VP of Accelerated Computing Ian Buck put it, “Dynamo is the Kubernetes of GPU orchestration.”
 
To optimize the utilization of GPU resources for distributed inference, Dynamo’s Planner feature continuously monitors GPU capacity metrics in distributed inference environments to make real-time decisions on whether to serve incoming user requests using disaggregated or aggregated serving while also selecting and dynamically shifting GPU resources to serve prefill or decode inference phases.
 
To further drive inference efficiencies by reducing request latency and time to first token, Dynamo has a Smart Router feature to minimize key value (KV) cache re-computation. KV cache can be thought of as the model’s contextual understanding of a user’s input. As the size of the input increases, KV cache computation increases quadratically, and if the same request is frequently executed, this can lead to excessive KV cache re-computation, reducing inference efficiency. Dynamo Smart Router works by assigning an overlap score to each new inference request as it arrives and then using that overlap score to intelligently route the request to the best-suited resource — i.e., whichever available resource has the highest overlap score between its KV cache and the user’s request — minimizing KV cache recomputation and freeing up GPU resources.
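A minimal illustrative sketch, in the spirit of the overlap-score routing NVIDIA describes (the data structures and scoring heuristic below are assumptions, not Dynamo's actual implementation), routes each new request to the worker whose cached tokens share the longest prefix with the incoming prompt:

    # Hedged sketch: route a request to the worker whose KV cache overlaps most with
    # the incoming prompt, minimizing KV cache recomputation. Token IDs, worker names
    # and the prefix-overlap heuristic are all illustrative assumptions.
    def overlap_score(prompt_tokens: list[int], cached_tokens: list[int]) -> int:
        # Length of the shared prefix between the prompt and a worker's cached tokens.
        score = 0
        for p, c in zip(prompt_tokens, cached_tokens):
            if p != c:
                break
            score += 1
        return score

    def route(prompt_tokens: list[int], worker_caches: dict[str, list[int]]) -> str:
        # Pick the worker with the highest overlap score.
        return max(worker_caches, key=lambda w: overlap_score(prompt_tokens, worker_caches[w]))

    worker_caches = {
        "gpu-node-0": [11, 42, 7, 99],   # cached a different conversation
        "gpu-node-1": [5, 3, 8, 2, 6],   # already cached most of this prompt
    }
    print(route([5, 3, 8, 2, 9, 4], worker_caches))  # prints gpu-node-1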
 
Additionally, Dynamo leans on its Distributed KV Cache Manager feature to support both distributed and disaggregated inference serving and to offer hierarchical caching capabilities. Calculating KV cache is resource intensive, but as AI demand increases, so does the volume of KV cache that must be stored to minimize KV cache recomputation. Dynamo Distributed KV Cache Manager leverages advanced caching policies to prioritize the placement of frequently accessed data closer to the GPU, with less accessed data being offloaded farther from the GPU.
 
As such, the hottest KV cache data is stored on GPU memory with progressively colder data being offloaded to shared CPU host memory, solid-state drives (SSDs) or networked object storage. Leveraging these key features, NVIDIA claims Dynamo maximizes resource utilization, yielding up to 30 times higher performance for AI factories running reasoning models like DeepSeek-R1 on NVIDIA Blackwell. Additionally, NVIDIA leaders state that while designed specifically for the inference of AI reasoning models, Dynamo can double token generation when applied to traditional knowledge-based LLMs on NVIDIA Hopper.
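The hierarchical placement policy can be illustrated with a small sketch in which hotter cache entries stay in GPU memory and colder entries are pushed out toward CPU memory, SSD and object storage; the tier capacities and access-count heuristic below are assumptions for illustration, not NVIDIA's actual policy:

    # Hedged sketch of tiered KV cache placement: more frequently reused entries stay
    # close to the GPU, while colder entries are offloaded to progressively slower tiers.
    TIERS = ["gpu_hbm", "cpu_ram", "local_ssd", "object_storage"]
    TIER_CAPACITY = {"gpu_hbm": 2, "cpu_ram": 4, "local_ssd": 8}  # object storage is unbounded

    def place_entries(access_counts: dict[str, int]) -> dict[str, str]:
        # Sort cache entries from hottest to coldest, then fill tiers in order.
        placement, tier_index, used = {}, 0, 0
        for entry, _ in sorted(access_counts.items(), key=lambda kv: kv[1], reverse=True):
            while tier_index < len(TIERS) - 1 and used >= TIER_CAPACITY[TIERS[tier_index]]:
                tier_index, used = tier_index + 1, 0
            placement[entry] = TIERS[tier_index]
            used += 1
        return placement

    counts = {"convo_a": 120, "convo_b": 95, "convo_c": 40, "convo_d": 12,
              "convo_e": 9, "convo_f": 7, "convo_g": 2}
    for entry, tier in place_entries(counts).items():
        print(f"{entry}: {tier}")  # convo_a and convo_b land in gpu_hbm, convo_g in local_ssd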
 

The Super Bowl but everybody wins

NVIDIA’s astronomical revenue growth and relentless innovation road map aside, perhaps nothing emphasizes the degree of importance the company holds over the future of the entire AI market more than the number of partners that are clamoring to gain a foothold using NVIDIA as a launching point. The San Jose McEnery Convention Center was filled with nearly 400 exhibitors showcasing how NVIDIA’s AI and accelerated computing platforms are driving innovation across all industries. NVIDIA GTC is no longer a conference highlighting the innovations of a single company; it is the epicenter of showcasing AI opportunity, and every company that wishes to play a role in the market was in attendance.
 
The broad swath of NVIDIA’s partner ecosystem was represented. Infrastructure OEMs and ODMs displayed systems built on NVIDIA reference architectures, while NVIDIA inception startups highlighted their own diverse codeveloped AI solutions. However, perhaps the most compelling and largest-scale example of NVIDIA relying on its partners to deliver AI solutions to end customers came from the company’s global systems integrator (GSI) partners.

NVIDIA provides the platform; partners provide the solution

The world’s leading GSIs, including Accenture, Deloitte, EY, Infosys and Tata Consultancy Services (TCS), all showcased how they are leveraging NVIDIA’s AI Enterprise software platform — comprising NIMs, NeMo and Blueprints — to help customers build and deploy their own customized AI solutions with a heavy emphasis on agentic AI. While some of the largest enterprises in the world have the talent required to build bespoke AI solutions, many other organizations rely on NVIDIA-certified GSI partners with training and expertise in NVIDIA’s AI technologies to develop and deploy AI solutions.
 
Agentic AI has emerged as the next frontier of AI, using reasoning and iterative planning to solve complex, multistep problems autonomously, leading to enhanced productivity and user experiences. NVIDIA AI Enterprise’s tools help make this possible, and at GTC 2025, NVIDIA business leaders shed light on three overarching reasons why NVIDIA AI Enterprise has resonated with end customers and NVIDIA partners alike.
 
First, NVIDIA AI Enterprise builds on CUDA to deliver software-optimized full-stack acceleration, much like other NVIDIA AI platforms. Business leaders essentially explained NIMs — the building blocks of AI Enterprise — as an opinionated way of running a GenAI model on a GPU in the most efficient way possible.
 
Second, NVIDIA AI Enterprise is enterprise grade, meaning that the thousands of first- and third-party libraries constituting the platform are constantly maintained with AI copilots scanning for security threats and AI agents patching software autonomously. Additionally, enterprises demand commitments to maintenance and standard APIs that are not going to change, and NVIDIA AI Enterprise ticks these boxes while also offering tiered levels of support services on top of the platform.
 
Finally, because NIMs are containerized and can be orchestrated with Kubernetes, AI Enterprise is extremely portable, allowing the platform to deliver a consistent experience across a variety of environments.

Autonomous vehicles are the tip of the physical AI iceberg

Several of NVIDIA’s automotive partners also attended GTC 2025, displaying their vehicles inside and outside the convention center. These partners all leverage at least one of NVIDIA’s three computing platforms comprising the company’s end-to-end solutions for autonomous vehicles, with several partners leveraging NVIDIA’s entire platform — including General Motors (GM), whose adoption of NVIDIA AI, simulation and accelerated compute was announced by Huang during the GTC 2025 keynote address.
 
While autonomous vehicles are perhaps the most tangible example, NVIDIA’s three computer systems can be used to build robots of all kinds, ranging from industrial robots used on manufacturing lines to surgical robots supporting the healthcare industry. The three computers required to build physical AI include NVIDIA DGX, which is leveraged for model pre-training and post-training; NVIDIA OVX, which is leveraged for simulation to further train, test and validate physical AI models; and NVIDIA AGX, which acts as the robot runtime and is used to safely deploy distilled physical AI models in the real world.
 
Following the emergence of agentic AI, NVIDIA sees physical AI as the next wave of artificial intelligence, and the company has already codeveloped foundation models and simulation frameworks to support advancements in the field with industry-leading partners, such as Disney Research and Google DeepMind.

Conclusion

The sheer scale of NVIDIA GTC 2025 reaffirmed NVIDIA’s position at the epicenter of the AI revolution, with Huang’s keynote address filling all the available seating in the SAP Center. Born from Huang’s long-standing vision of accelerating workloads by applying parallel processing, NVIDIA’s relentless investments in the R&D of the entire AI stack — from GPUs to interconnect and software platforms to developer resources — remain the driving force behind the AI giant’s success and seemingly insurmountable lead over competitors.
 
NVIDIA’s first-mover advantage in accelerated computing was predicated on the company’s CUDA platform and its ability to allow developers to optimize applications running on NVIDIA GPUs. Nearly 20 years later, NVIDIA continues to leverage CUDA and its robust ecosystem of developers to create innovative AI platforms, such as Omniverse and AI Enterprise, that attract partners from every corner of the technology ecosystem. By swimming in its own lane and relying on its growing NVIDIA Partner Network to deliver AI systems and solutions to end customers, NVIDIA has built an unrivaled ecosystem of partners whose actions on the front lines with end customers facilitate the near-infinite gravity behind the company’s AI platforms.

Cloud Opportunity Expected to Increase Once DOGE Disruption Subsides

The U.S. federal government will need modern cloud services to be most efficient, regardless of DOGE-driven changes

Rolling pockets of chaos and an overall cloud of uncertainty may be the best way to describe the first two months of the new Trump administration. One upside to federal contracts is that they tend to be long-term in nature, which provides some stability for all types of vendors with existing contracts. However, the current transition has been rocky, to say the least, as contracts are getting canceled, agency staffing is reduced, and the existence of entire agencies is called into question.
 
Beyond the distinct financial impacts already affecting many federal systems integrators (FSIs) and IT vendors, the overall uncertainty about future changes has complicated government contractors’ ability to conduct business as usual. Short-term uncertainty will likely persist, but eventually we will see a silver lining for the ecosystem of IT providers catering to the needs of the U.S. federal government. The government may become a more streamlined entity, in all respects, but IT will need to remain at the forefront of U.S. government operations.
 
Differences of opinion on optimal levels of funding will persist, but most people concur that the IT infrastructure supporting many core government agencies is antiquated and long overdue for upgrade. After the Department of Government Efficiency (DOGE) completes its cost-cutting and agency reorganizations, the overall approach to modernizing those systems will become clearer. Third parties, including FSIs and IT vendors like Amazon Web Services (AWS), Microsoft, Google and Oracle, will all likely be part of the solution, enabling the reformed federal government to modernize and playing an ongoing role in eliminating waste, fraud and abuse on top of a refreshed IT infrastructure environment.
 


Vendors hope federal spending materializes after the fog of dismantling and reducing headcount dissipates

Reducing the size of the federal workforce was an immediate focus for DOGE. With the “Fork in the Road” email sent by the Office of Personnel Management to encourage staff resignations and the involuntary firing of workers across civilian agencies, the total number of employees shed from the federal workforce is estimated to have surpassed 100,000 in the first two months of the Trump presidency.
 
The entire federal workforce still totals more than 3 million, excluding 1.3 million active military personnel, and additional cuts are a certainty. Early in the formation of DOGE, the idea of cutting up to 75% of federal workers was floated, a target that is likely far-fetched in practice. Regardless, it is clear workforce-reduction efforts will remain a focus as DOGE expands its reach to additional government agencies and pushes beyond the probationary employees who made up the bulk of early reductions.
 
As headcount reductions continue, cloud and software vendors could assist the administration with those cuts while, at the same time, being impacted by the fallout of those cuts. On Workday’s FY4Q25 earnings call, CEO Carl Eschenbach painted the impact of DOGE in an opportunistic light, stating: “In fact, the majority of them [federal IT systems] are still on-premise, which means they’re inefficient. And as we think about DOGE and what that could potentially do going forward, if you want to drive efficiency in the government, you have to upgrade your systems. And we find that as a really rich opportunity.”
 
If, in the era of DOGE, government agencies undertake new, or continue existing, efforts to modernize IT systems and adopt cloud-enabled solutions, it would certainly be a big opportunity not just for Workday, but for the entire federal IT contractor market. The certainty of that opportunity is still questionable, however, given the rapidity with which major changes to how government operates are occurring. Any technology opportunities with USAID (United States Agency for International Development), for instance, are now dubious given the speed with which the agency has been dissolved, even as legal challenges abound.
 
Additional rapid changes will occur with the Department of Education given President Trump’s clear directive to new Secretary of Education Linda McMahon to dismantle the agency. On ServiceNow’s 4Q24 earnings call, CFO Gina Mastantuono noted some of this uncertainty while also remaining optimistic about the federal opportunity, stating the company’s guidance reflects a stronger U.S. federal performance in the back half of 2025, given changes brought on by the administration.

A build-it-yourself approach could challenge packaged IT solutions

DOGE head Elon Musk has clearly employed many of the same techniques and strategies he has used in the past, such as sending a “Fork in the Road” email to Twitter employees and requiring them to send a weekly email of their accomplishments after he purchased Twitter (now called X). With that in mind, it is relevant to think about the approaches to IT that Musk has used as CEO of Tesla and SpaceX for clues about what might occur in the U.S. federal space.
 
For some of the most important mission-critical IT and software decisions at Tesla and SpaceX, Musk deployed a proprietary software package that is shared by the companies to manage core manufacturing and sales, CRM and financial processes. Instead of utilizing a prebuilt solution from the likes of SAP or Oracle, internal teams at SpaceX and Tesla built, customized and manage their own ERP solution named WARPDRIVE. Musk could very well encourage a similar approach in federal agencies, either by licensing WARPDRIVE to those agencies or by directing more proprietary programs to be custom-built to reduce expenditures and theoretically achieve a superior technological solution. Either option would be challenging to implement but remains within the realm of possibility and would effectively reduce the addressable market for third-party IT solutions.
 


Scaling back new and existing awards will stifle revenue for cloud vendors in the short term

In the U.S. federal sector, SIs are a key conduit for how cloud and software companies capture opportunities. The opportunity pipeline and associated timeline for deals is notoriously long for federal spending, but the total opportunity has already decreased in size based on the cuts made by DOGE. Some of the strategies and actions recently used by leading SIs in the federal space are discussed in TBR’s special report, Leading Federal Systems Integrators React to U.S. Department of Government Efficiency. As outlined in the special report, all 12 of the leading federal SIs are looking to reduce expenses and prepare for a slowing of revenue streams in the near term. After a period of federal investment and expansion, this certainly is a change in trajectory for their businesses. In addition to making similar cost reductions, all 12 vendors are also doubling down on their competitive differentiation to secure growth moving forward. All of the recent market shifts, including security, AI and digital transformation, have led FSIs to reinvest in capabilities that provide the best opportunities for long-term expansion.
 
In the short term, even existing contracts with the federal government are subject to reductions or termination, which impacts not only the SI but also the IT vendors that have secured subawards to provide their technology as part of the overall engagement. One example TBR cited in the special report was the $1.5 billion award Leidos has with the Social Security Administration (SSA), which includes subawards for Pegasystems, AWS and multiple other IT vendors. The Leidos deal was scaled back by DOGE, marking the beginning of the disruption to awards with SIs and subawards with IT vendors. SSA represents a small portion of the federal budget, so when DOGE looks to larger agencies such as the Department of Health and Human Services for cost reductions and efficiencies, the impact on the federal SIs and supporting IT vendors will be even greater.
 
In terms of the scale of revenue at stake, AWS alone has won close to $500 million in subaward contracts in the last three fiscal years. That does not directly translate into revenue, however, as the money still needs to be outlaid, a process that is even more tenuous given the current spending environment and actions taken by the DOGE team. In addition to deals tied to FSIs, cloud vendors and software vendors also have direct deals/prime awards with federal agencies that are at greater risk. AWS, for instance, has won a total of $445 million in prime award contracts over the past three fiscal years.
 
Most of those awards are multiyear contracts that are not guaranteed, and the revenue could be reduced or not disbursed. In fact, only $104 million of those awards to AWS have been outlaid, meaning the balance, more than $340 million, could be impacted. It is also important to note these figures only reflect past deals; we anticipate the new federal deal pipeline for vendors like AWS to shrink due to uncertainty and the administration’s focus on cost reductions.

Big cloud deals such as JWCC and Stargate are expected to proceed without significant funding impacts

The impacts of DOGE should be widespread throughout the government, but we expect the top federal IT opportunities, the Stargate Project and the Joint Warfighting Cloud Capability (JWCC) contract vehicle, to avoid major funding challenges. Though both projects are in the early stages and still subject to competitive jockeying between technology providers to secure task orders, we expect the funding to remain available even amid broader spending reductions.
 
The JWCC was announced in 2022 with a total of $9 billion in funding available to Oracle, Microsoft, AWS and Google Cloud. Oracle has been a leading provider under the contract to date. Roughly $2.5 billion has been awarded to the four vendors thus far, leaving more than $6 billion in additional task orders across the contract. The spending bill passed in mid-March to avoid a federal shutdown illustrates the appetite to sustain, if not increase, defense spending. All the participants in JWCC have donated to and publicly supported the administration, which could solidify the longevity of the engagement.
 
Stargate was introduced by President Trump in the early days of his presidency, indicating that the project is likely to proceed in some fashion regardless of any budgetary pressure. The project will be a joint venture with OpenAI, SoftBank and Oracle to initially build a $100 billion data center complex in Texas. Over the next four years, the project aims to build additional large-scale data centers, with a total of $500 billion in funding, making it the largest centralized data center investment in history. While the project carries the public backing of the U.S. government, its funding is anchored by private investors, led by SoftBank, a firm known for its long-term investment strategies. OpenAI, SoftBank, Oracle and MGX are the initial equity investors, while Arm, Microsoft, NVIDIA and OpenAI have been named as technology partners and will have some involvement in the project.

Modern cloud IT solutions should have an elevated role in the restructured federal government

The headcount reductions, eliminations of agencies, and overall uncertainty will disrupt business as usual in the U.S. federal sector at least through the end of 2Q25. Once the new, smaller and streamlined structure emerges, we expect the value of modern IT solutions to be recognized and spending to resume and even increase compared with the prior trajectory. Fewer human resources, likely fewer skilled IT professionals, and an altered view of budgeting and ROI for all initiatives, IT included, all amplify the value that can be added by modernizing the infrastructure and solutions that support the mission of government agencies.
 
Across fragmented environments, many of which remain traditional on-premises estates built on aging technology, consolidation and the use of government-grade cloud delivery can improve performance and reduce the total cost to deliver, even over a relatively short three-to-five-year time frame. On the commercial side, many of the organizations we speak with note that the simplification of their IT environments is one of the strongest drivers of cloud adoption. AI and generative AI capabilities add to the benefits that can now be enabled. And for government agencies, preexisting data protocols and procedures increase their readiness to apply next-generation data analysis and AI. We see the business use cases for AI becoming more compelling on the commercial side, which bodes well for adding real value in the U.S. federal sector as it adapts to a more streamlined way of operating.

Infosys Readies to Deliver Outcomes at Scale Through Enterprise AI

U.S. Analyst and Advisor Meet 2025, New York City, March 4, 2025 — Infosys hosted industry analysts and advisors for a packed afternoon in the company’s offices at One World Trade Center. Using client stories amplified through technology partner support to reinforce Infosys’ role in the IT services, cloud and enterprise AI market, company executives consistently returned to a few main themes, including delivering business outcomes, maintaining trusted relationships, and focusing on speed, agility and simplification.  

 

Infosys’ hub-first strategy in the Americas demonstrates the company’s success with coinnovation and pursuit of large deals

Similar to previous events, Infosys kicked off with an update on the company’s strategy and performance in the Americas region. Anant Adya, Infosys EVP and head of Americas Delivery, led the presentation, highlighting key elements of the company’s success in the region, including its hub-first strategy; investments in and expansion of local talent pools, including in the U.S., Canada, Mexico, Brazil and the rest of LATAM; and strategic bets that are centered on delivering business outcomes and enabled through portfolio offerings such as Infosys Cobalt, Infosys Topaz and Infosys Aster.

Infosys’ six Tech Hubs across the U.S. remain the backbone of the company’s hub-first strategy. Located in Phoenix; Richardson, Texas; Raleigh, N.C.; Indianapolis; Providence, R.I.; and Hartford, Conn., and collectively staffed with thousands of local hires, these centers are increasingly allowing Infosys to drive coinnovation with clients and partners and pursue new opportunities with a key focus on large deals (defined by Infosys as deals over $50 million in value) in areas including cloud, AI, data, the edge and cybersecurity. Infosys has been rebalancing its onshore-offshore effort over the last five years.
 
For example, onshore effort was 24% in 4Q24, down from 27.7% in 4Q19. Offshore effort was 76% and 72.3% in 4Q24 and 4Q19, respectively. The recalibration began during the pandemic, as Infosys began capitalizing on the increase in remote working. The current ratio is also helping the company demonstrate pricing agility when competing for service delivery transformation projects. At the same time, maintaining a steady flow of local hires could help Infosys weather any pushback from the Trump administration on its America First Investment Policy requirements. Although the administration has yet to impose tariffs on companies utilizing services from overseas, it would not be surprising for this to happen in the future. Investing in training programs and collaborating with local universities through the Infosys Foundation would not only create a strong PR framework but also help Infosys increase its recruiting opportunities. Meanwhile, expanding across Canada and key LATAM countries, including Mexico and Brazil, to support both nearshore and locally sourced deals allows Infosys to diversify its revenue stream while enhancing the value of its brand beyond the U.S.
 
As Infosys continues to execute on its well-oiled strategy, investing and expanding in the next growth areas across the company’s cloud and enterprise AI portfolio will largely be centered on calibrating its commercial model as client discussions evolve from transactions to outcomes. For example, to support these expansion efforts, Infosys’ work within the Infosys Cobalt portfolio has evolved from tech optimization and data center migration to developing and applying industry solutions, and now includes accounting for the role of AI.
 
Building out a fluid enterprise to derive greater value from AI has compelled Infosys to develop solutions with an eye toward being more digital, more composable and more autonomous. This solution framing is also helping the company drive next-gen conversations with its technology partners and with clients that are seeking to develop an intelligent enterprise enabled by AI.

Infosys’ pivot toward Outcome as a Service will test the company’s ability to drive change management at scale, starting with its own operations

Expanding on Infosys’ evolving go-to-market strategy, portfolio, talent and collaboration with partners, Infosys Chief Delivery Officer Dinesh Rao, along with a series of client and partner panels throughout the afternoon, not only brought to light the company’s aspirations around driving outcome services opportunities but also discussed at length the challenges stakeholders face, often revolving around change management. Rao’s presentations spanned client use cases, AI evolution, and Infosys’ portfolio adjustment as well as resource pyramid calibration to balance support opportunities in foundational and emerging areas. Three key areas stood out:

  • Client examples: Amplifying value through innovation has helped Infosys capture and deliver services for global clients across manufacturing, retail and consumer packaged goods (CPG), among other verticals, while also positioning the company to test new commercial models. For example, in a multiyear, multibillion-dollar deal supporting a multinational communications provider, Infosys is deploying its Outcome as a Service commercial framework, bundling software, hardware and third-party services on a single platform.
  • AI: Infosys launched NVIDIA-enabled small language models (SLMs), Infosys Topaz BankingSLM and Infosys Topaz ITOpsSLM, targeting clients through core industry and horizontal offerings and allowing them to use their own data on top of the prebuilt SLMs. Additionally, Infosys released the Finacle Data and AI Suite of solutions to support banking clients seeking to enhance IT systems and customer experience using AI. The solutions include Finacle Data Platform, Finacle AI Platform and Finacle Generative AI. Infosys’ investments in industry-centric SLMs, which the company positions with clients as either accelerators or platforms to drive targeted conversations using industry and/or functional enterprise data, closely align with the company’s playbook from a few years ago, when it began developing industry cloud solutions, both proprietary and partner enabled. Embedding generative AI (GenAI) as part of a deal, rather than using it as a lead-in, is a smart strategy as it allows Infosys to better appeal to price-conscious clients and steer conversations toward outcomes and the benefits of the engagement, rather than trying to convince clients to spend a premium on a technology that has yet to prove ROI at scale. We believe Infosys’ investment in agentic AI capabilities for Infosys Topaz, along with the ITOpsSLM, can also position the company to drive nonlinear, profitable engagements, especially with clients that are seeking to migrate and modernize their existing mainframe infrastructure but lack the necessary COBOL skills and understanding of the environment.
  • Resource pyramid: Infosys’ three talent categories — traditional software engineers, digital specialists focused on digital transformation and ongoing support, and Power Programmers — allow the company to balance innovation and growth while calibrating its business and commercial models. The Power Programmers group consists of highly skilled professionals who are responsible for developing products and ensuring that the intellectual property they create and use meets the cost-saving requirements Infosys pitches to clients. Although the other two groups follow a traditional employee pyramid structure, the Power Programmers group is much leaner and resembles the business models that many vendors, including Infosys, may aspire to adopt in the future.

Rao also discussed Infosys’ approach to innovation. The company’s business incubator framework, backed by Infosys’ $500 million innovation fund and enabled through its network of Living Labs, has empowered the company’s employees to think creatively, thus helping Infosys solidify its culture of learning and collaboration. Gaining employee buy-in is a must, especially at a time when the company is pivoting its own operations toward outcome-based service delivery.

AI- and partner-led discussions will continue to guide Infosys’ efforts to solidify its position as a trusted solution broker

Sunil Senan, Infosys’ global head of Data and AI, provided an update on Infosys’ AI-first strategy and portfolio, which have allowed Infosys to stay competitive in a rapidly evolving AI market. Senan noted that the opportunities around agentic AI require a rigorous data and governance strategy — an acknowledgment that is not surprising given the company’s typically humble yet pragmatic approach to emerging growth opportunities.
 
Scaling AI adoption comes with implications and responsibilities, which Infosys is trying to address one use case at a time. For example, in October 2023 Infosys launched the Responsible AI Suite, which includes accelerators across three main areas: Scan (identifying AI risk), Shield (building technical guardrails) and Steer (providing AI governance consulting). These capabilities will help Infosys strengthen ecosystem trust via the Responsible AI Coalition.
 
Infosys also claimed it was the first IT services company globally to achieve the ISO 42001:2023 certification for the ethical and responsible use of AI. Infosys recognizes that AI adoption will come in waves. The first wave, which started in November 2022 and continued for the next 18 to 24 months, was dominated by pilot projects focused on productivity and software development. In the current second wave, clients are starting to pivot conversations toward improving IT operations, business processes, marketing and sales.
 
The real business value will come from the third wave, which will focus on improving processes and experiences and capitalizing on opportunities around design and implementation. Infosys believes the third wave will start in the next six to 12 months. Although this time frame might suit cloud- and data-mature clients, only a small percentage of enterprises are AI ready across all components, including data, governance, strategy, technology and talent. Thus, it might take a bit longer for AI adoption to scale.
 
But as Infosys continues to execute its pragmatic strategy, the company relies on customer success stories that will help it build momentum. And there was no shortage of examples throughout the afternoon, with Infosys clients across the spectrum — from just getting started to scaling hundreds of AI deployments — sharing their experiences with Infosys and within the broader ecosystem.
 
We believe that as Infosys pivots toward an Outcome as a Service commercial model, opportunities to scale AI will stem from the company’s ability to demonstrate value. In a traditional transformation project, the company often deployed professionals to perform typical implementation work and then transferred them to another project; during an AI project, staff would need to stay at a client’s site for a longer period to ensure the technology delivers the value promised. Approaching AI opportunities with a similar focus will not only help Infosys justify its rates but also help the company calibrate its staffing pyramid.
 
Infosys’ long-term success also depends on the company’s relationships with technology partners. During previous iterations of the summit, Infosys held separate alliance-led presentations, but this time around the company folded partner presentations — specifically SAP’s — into a client panel. SAP’s presentation discussed a successful, three-year SAP S/4HANA migration for a global manufacturing client. Although the three-year turnaround was impressive, what stood out was how much the SAP executive was part of the conversation with the client. Speaking with the client on behalf of Infosys demonstrated trust and the depth of the relationship between SAP and Infosys.
 
Throughout TBR’s Ecosystem Intelligence research, we have written extensively that partners speaking on behalf of partners is often the last mile and the biggest challenge for vendors to overcome when they try to differentiate. We understand that vendors, especially IT services vendors, try to maintain tech agnosticism during consulting workshops, but when it comes to the implementation part of the engagement, developing more exclusive messaging resonates with clients much better as it shows knowledge, accountability and trust between parties.

Product Engineering and Quality Engineering present a tale of two cities that can help Infosys deliver value with minimal disruption to its financial profile

As Infosys continues to balance foundational growth with pursuing opportunities in new areas, the company’s evolving portfolio allows it to deliver steady financial results. Executives from Infosys’ Engineering Services and Quality Engineering lines of business, along with clients, highlighted how these two areas are helping Infosys achieve just that. Ben Ko, CEO of Kaleidoscope, a company Infosys acquired in 2020, explained how his company and its portfolio of solutions and products allow Infosys to capture manufacturing and R&D budgets, a slice of the overall enterprise spend that was somewhat untapped prior to Infosys’ expansion into the product engineering space. Infosys Engineering Services remains among the fastest-growing units within the company as Infosys strives to get closer to product development and minimize GenAI-related disruption to its content distribution and support position.
 
Since the 2020 purchase of Kaleidoscope, which gave the company a much-needed infusion of new skills and the IP needed to appeal to the OT buyer, Infosys has enhanced its value proposition to meet GenAI-infused demand. For example, Infosys purchased India-based, 900-person semiconductor design services vendor InSemi and Germany-headquartered engineering R&D services firm in-tech, acquisitions that illustrate the company’s measured-risk approach to enhancing its chip-to-cloud strategy.
 
The purchase of in-tech certainly accelerates these opportunities, bringing in strong relationships with OEM providers, which is a necessary stepping stone as Infosys tries to bridge IT and OT relationships.
 
Meanwhile, Venky Iyengar, Infosys VP and head of Quality Engineering, along with clients, discussed how the Infosys business is adjusting both its value proposition and staffing models to account for automation and AI and to continue to deliver value to clients with minimal disruption to Infosys’ financial profile.
 
While a degree of revenue cannibalization is inevitable in the long run, Infosys’ approach toward platform-enabled quality engineering services, along with its efforts to fold these offerings under broader transformation projects, will allow the company to pivot and develop its position as a solutions broker.

It is all about the margins, and Infosys has the right ingredients to keep shareholders happy

Infosys, like many of its peers, faces a new reality shaped heavily by AI and by a reshuffling of buyers’ IT spending priorities. With IT becoming a utility, we expect enterprises not to cut back on spending but rather to demand that third-party vendors such as Infosys deliver more with less. AI- and automation-enabled service delivery gives Infosys the right tool to execute on such expectations. As long as Infosys allows buyers to see that productivity improvements have driven greater volume, the company will be able to maintain its operating margin. Otherwise, buyers might start pushing back and asking for savings on their contracts when Infosys pitches new work but uses fewer employees. It was evident from the sessions that Infosys, with its enterprise AI capabilities, is strongly positioned to help clients unlock business value and drive growth, in line with the broader industry trend of leveraging AI to meet evolving client demands.
 
We understand that Outcome as a Service is a long-term play that will test Infosys’ culture and ability to manage trust within the ecosystem. The last five years of steady financial performance and the expansion of Infosys’ large and mega deals roster have provided the company with a strong foundation to make that pivot. Many of Infosys’ alliance partners, both technology and services ones, that TBR has spoken with view Infosys as a top delivery partner, thus providing the ecosystem support needed for the company to navigate the evolving IT services market.
 
TBR will continue to cover Infosys within the IT services, ecosystems, cloud and digital transformation spaces, including publishing quarterly reports with assessments of Infosys’ financial model, go-to-market, and alliances and acquisitions strategies.
 
For a comparison with Infosys’ peers and other IT services vendors, TBR includes Infosys in our quarterly IT Services Vendor Benchmark, our semiannual Global Delivery Benchmark and Cloud Ecosystem Report, and our annual Adobe and Salesforce Ecosystem Report; SAP, Oracle and Workday Ecosystem Report; and upcoming ServiceNow Ecosystem Report. Access the data and analysis in each of these reports with a TBR Insight Center™ free trial. Sign up today!

India-centric IT Vendors Leverage Partnerships for Technology Expansion and Market Reach

Expanding through partnerships

The India-centric vendors, which include Cognizant, HCLTech, Infosys, Tata Consultancy Services (TCS) and Wipro, leverage partnerships to expand their technology capabilities and scale while also bringing in industry knowledge to strengthen the value of their portfolios. Although these partnerships do not vary significantly from those of other IT services vendors, the India-centric vendors each bring different benefits, such as price competitiveness and low cost of scale, that can enhance other vendors’ go-to-market strategies and ability to reach underpenetrated markets while also bringing in portfolio expertise.
 
Understanding how similar companies bring different capabilities and strengths to their technology alliance partners highlights opportunities for other ecosystem players, such as smaller software companies, OEMs and niche consultancies, that are looking to expand with the India-centric vendors.
 
Graph: India-centric IT Vendor Headcount for 4Q24

Cognizant

Cognizant forges partnerships with industry-oriented vendors and expands its security and digital capabilities. During 4Q24 and early 2025, Cognizant looked to relationships with key partners such as Salesforce and ServiceNow to enhance the company’s positioning around transformation and software development as well as create opportunities around migration and managed services.
 
As transformation projects increasingly center on AI, developing a suite of offerings that streamline the use of data and analytics, security and managed services helps Cognizant strengthen client relationships and drive new projects. Working with security vendors to deepen its security capabilities and protect digital environments will lead to additional services engagements for Cognizant. Further, partnering around industry expertise is enabling Cognizant to improve its performance in certain verticals, such as recently landing modernization and digitization projects with life sciences clients.
 
Cognizant manages an ecosystem to drive innovation both internally as well as with clients to drive value across industries. In April 2023 Cognizant launched Bluebolt, an innovation program that seeks to develop new ways to address clients’ business challenges. Since the launch, more than 115,000 ideas have been developed, of which 22,000 have been implemented, increasing client engagement. Additionally, Cognizant worked with Microsoft to create the Innovative Assistant, a tool that supports idea generation for Microsoft employees. The tool is something that Cognizant could replicate with other partners.
 
In 2014 Cognizant acquired TriZetto, a healthcare IT software and solutions provider, which added healthcare clients and specialized employees and offerings, creating new opportunities for Cognizant across the healthcare space. Cognizant continues to invest in the platform, offering back- and front-office solutions for payers, providers and patients, as well as care management and connected solutions to transform the patient and physician experience. The acquisition and Cognizant’s continued investments in healthcare offerings resulted in the vertical overtaking financial services as the company’s top revenue generator in 2024.
 
Cognizant’s active acquisition pace brings in a variety of new skills and capabilities to supplement existing areas and enable the company to expand transformation contracts with clients. For example, Cognizant acquired ServiceNow partner Thirdera in December 2023, strengthening its consulting and implementation services. Through such acquisitions, Cognizant has quickly developed its engineering, software and advisory services, enhancing its positioning with clients.

HCLTech

HCLTech’s partner network encompasses technology vendors, industry experts, and research and learning institutions, allowing the company to develop a wider set of in-house expertise and offerings. Adding new hyperscaler partners to expand its capabilities and scale enables HCLTech to deliver a wider range of AI offerings and guide technology services clients’ efficiency-related and insight-driven transformation projects. Further, integrating industry expertise within its technology portfolio improves HCLTech’s ability to address clients’ specific transformation needs.
 
Pursuing solution codevelopment partnerships helps HCLTech leverage internal expertise alongside that of its partners to align its portfolio with emerging pain points resulting from heightened AI, cloud and digital usage. HCLTech will strengthen its relationships with key partners such as Microsoft, Google Cloud, Amazon Web Services (AWS) and IBM to enhance its positioning around AI. In addition, HCLTech will enhance its industry positioning through partners and acquisitions to better tailor its offerings and deepen relationships in the telecom, financial services and manufacturing industries.
 
HCLTech’s ongoing investments in engineering capabilities have deepened the vendor’s expertise, allowing it to offer semiconductor design, manufacturing and validation services. Through acquisitions, HCLTech has added new experience and solutions and strengthened its manufacturing relationships. The integration of Engineering and R&D Services (ERS) sales and go-to-market motions with IT and business services sales will help HCLTech extend the reach of its portfolio, generating new segment opportunities and expanding the company’s reach outside its more mature areas such as manufacturing and automotive.
 
HCLTech leads with its Relationship Beyond the Contract (RBTC) approach, which allows the company to deepen client relationships, better address challenges, and future-proof organizations against disruption and threats. With the heightened demand and interest around generative AI (GenAI), HCLTech’s development of applications, infrastructure, semiconductor offerings and business process solutions underpinned by its GenAI Labs enables the company to secure its client relationships.

Infosys

Infosys’ alliance partner strategy mirrors that of many of its competitors as the company seeks to secure foundational revenue opportunities while pursuing innovation through a measured risk approach. The company strives to differentiate by sticking to its strengths rather than branching too far into partners’ territory, which enterprise buyers strongly appreciate. Recent partnerships centered on GenAI also provide a glimpse into Infosys’ efforts to establish a beachhead in the emerging market as the company navigates choppy market demand and increases its efforts to expand margins.
 
Infosys’ three talent categories — traditional software engineers; digital specialists focused on digital transformation (DT) and ongoing support; and Power Programmers — allow the company to balance innovation and growth as it calibrates its business and commercial models. To support these categories, Infosys is hiring aggressively, particularly in 2025. In January Infosys announced it was planning to expand its Hyderabad, India, operations, adding 17,000 people for a total of 50,000 employees in the region. Although no time frame was outlined for this increase, during the company’s 4Q24 earnings call Infosys’ executives shared that the company plans to hire 20,000 freshers in FY26, up from 15,000 in FY25.
 
Infosys’ broad-based GenAI investments centered on the development of industry-aligned solutions and small language models, largely enabled through collaborations with NVIDIA, Microsoft and Meta, enhance the company’s value proposition when competing for custom model development engagements. In addition to driving opportunities within the telco vertical, we believe Infosys’ collaboration with NVIDIA will also help the company enhance the recently launched Infosys Aster — a set of AI-driven marketing services, solutions and platforms — as Infosys looks to develop a comprehensive strategy for its digital marketing offerings. Supporting clients seeking to enhance contact center operations through the use of AI and GenAI could backfire if technology and business priorities are misaligned, as chatbots have been around for a long time but have had only minimal positive impact on customer services.
 

Watch on demand: $130+ Billion Emerging India Opportunity – India-centric vs. Global IT Services Firms: Who Wins and Why

TCS

TCS has dedicated business units for its three largest technology partners, fostering deep expertise and enabling the development of specialized solutions. These units leverage a comprehensive approach, including certified talent, Centers of Excellence, migration factories and innovation garages, to deliver superior cloud services. This approach allows TCS to effectively guide clients through their cloud migrations, codevelop industry-specific solutions and ultimately drive successful cloud transformations.
 
Beyond its core cloud partnerships, TCS actively cultivates a diverse ecosystem of technology alliances. These partnerships extend beyond the traditional cloud providers, enabling TCS to enhance its own offerings, strengthen partner capabilities and collectively expand market reach. This collaborative approach fosters mutual growth and enables TCS to deliver more comprehensive and innovative solutions to clients.
 
TCS emphasizes its deep expertise in enterprise application deployment and management, combined with its scale and cost-effective resources, to position itself as a valuable partner within the technology ecosystem. The company is actively investing in talent development and AI-driven solutions to meet surging client demand around GenAI. By leveraging strong industry relationships and strategic partnerships with leading technology providers, TCS delivers a comprehensive range of digital and AI services. Collaboration helps TCS enhance its value proposition for clients.
 
TCS stands out among its India-based peers due to its impressive scale, cost-effective labor force, well-balanced portfolio, robust automation framework, in-depth understanding of legacy IT systems and vast expertise in DT. The company’s scale allows it to work across a wider range of client needs and challenges that can be addressed through its DT and application portfolio. Despite TCS’ larger scale relative to peers, the company maintains roughly 75% of its headcount in offshore locations.

Wipro

Wipro continues to expand its partner ecosystem, including incorporating security and enablement services, to ensure the company can provide a wider range of technology solutions. For example, during 4Q24, Wipro partnered with multiple vendors to grow its security services offerings. Working with Netskope and Lineaje helps address risk and vulnerabilities across the technology landscape to drive additional value and strengthen client relationships.
 
In addition to technology development, Wipro looks to deepen its industry expertise through partners, advancing its healthcare and financial services portfolio. Through 2025, Wipro will grow its partner ecosystem to include additional technology capabilities and security services to guide clients’ modernization and efficiency transformations while also maintaining a portfolio that rivals those of its peers.
 
Relative to its India-centric peers, Wipro finds itself in a more precarious position, with slower revenue growth and lower profitability. During 2024 Wipro IT Services (ITS) was able to increase its operating profit, owing to improved internal management, the use of AI and automation tools, and a streamlined talent structure. Wipro ITS’ revenue generation slowed in 2023 and 2024, resulting in a year-to-year decline in 2024 in both local currency and U.S. dollars due to ongoing execution challenges in APMEA and Europe and limited interactions with clients. Capco, a financial services consulting firm Wipro acquired in 2021, remains a bright spot for Wipro, as it added a new approach to serving industry clients in Europe.

Conclusion

Each of the India-centric vendors brings its own strengths and weaknesses that can help enhance partners’ go-to-market strategies and deliver on emerging technologies. The composition of talent varies across the vendors, with some benefiting from technical expertise, such as engineering, while others have a deeper bench of consulting and delivery staff. As AI permeates client engagements, developing a larger partner ecosystem that encompasses different business models, talent and portfolio strengths, as well as offshore delivery leverage, will enable IT services vendors to compete more effectively for limited client spend.
 
Further, internal innovation with partners, including around AI tools that are tested internally before coming to market, strengthens portfolio value and client trust. Partnering outside of typical parameters will bring in much-needed innovation, refreshed talent and enhanced delivery resources to secure client trust and engagement.
 
TBR’s ongoing research and company coverage includes regular analysis of alliances between the leading global systems integrators, including the companies outlined in this report. In addition, we publish the Cloud Ecosystem Report semiannually and the Adobe & Salesforce Ecosystem Report, the SAP, Oracle and Workday Ecosystem Report, the U.S. Federal Cloud Ecosystem Report and the Voice of the Partner Ecosystem Report annually. Access the data and analysis in each of these reports with a TBR Insight Center™ free trial. Sign up today!

Informatica’s Alliance Strategy: Powering GSIs, Scaling AI and Strengthening the Data Ecosystem

Informatica uses the ‘power of three solutions’ to bolster its ecosystem

An increasing amount of research and analysis time at TBR is focused on ecosystem intelligence, which applies a set of questions and frameworks to extend traditional market intelligence and competitive intelligence approaches in an effort to better understand a market. Recently, TBR analysts spoke with Informatica’s Richard Ganley, Senior Vice President, Global Partners, and his insights into the actions the company is taking to enhance its alliance relationships with nine key partners (Figure 1) stood out to the team. We believe Informatica is doing the following things really well:
 

  • Enthusiastically embracing the “power of three solutions,” that is, solutions pulling together resources from a global systems integrator (GSI), a cloud or software vendor, and Informatica. According to Ganley, this approach helps enterprise IT clients “modernize faster … [and] master some of their most critical data with multivendor solutions.”
  • Consistently evaluating GSIs based on their performance with Informatica, including growth, new solutions and mindshare
  • Ensuring the company as a whole understands the evolving importance of the ecosystem to Informatica’s success

Figure 1: Informatica’s key partners (Source: TBR)

Informatica’s relationship with GSIs

Ganley cited four reasons why GSIs want closer relationships with Informatica. First, Informatica has a mature data platform, the Intelligent Data Management Cloud (IDMC). According to Ganley, one part of the platform’s appeal is its simplicity: GSIs “don’t need to work with small vendors who we compete with and pick three or four of them and stitch together their technologies to try and make a platform. They can just work with us and everything is there.”
 
Second, scale. Although Ganley did not say it explicitly, every GSI that TBR covers has been working to consistently (and profitably) move clients from AI pilots and limited deployments to AI at scale. Informatica’s established scale reassures GSI partners. As Ganley put it, GSIs “can see eventually how they can build a billion-dollar practice with Informatica.”
 
Third, Informatica partners with the GSI’s partners, including what Ganley described as “very close engineering relationships with the hyperscalers.” Fourth, Ganley described a “huge uptick” in GSI partners’ professionals being trained and certified on Informatica’s solutions, increasing from around 8,000 per year in 2020 to more than 15,000 in 2024. Ganley noted, “one of the reasons we’re seeing so many of our partners wanting to double down with us [is] because they see us as very important foundational work for AI to be possible.”
 
Ganley also highlighted Informatica’s relationship with LTIMindtree, specifically within the context of how Informatica evaluates (and invests people and resources in) GSI partners. Of the nine strategic GSIs listed in Figure 1, LTIMindtree is unquestionably the smallest in terms of revenue, and Ganley noted that LTI and Mindtree, as separate companies, were very appealing as strategic partners. After the merger was completed and LTIMindtree recruited experienced talent known to Informatica, the two companies revisited a strategic partnership. Informatica laid out specific criteria, and LTIMindtree invested in training and other aspects of the alliance. The CEOs of both companies formally announced the new alliance.
 
The result has been, according to Ganley, highly successful for both parties: “They’ve been absolutely amazing to work with … and their data and AI practice is quite a good size. They’ve got 12,000 people in the practice, and I think that’s more than 10% of their business. So it’s pretty meaningful for them.” In TBR’s view, this deliberate, strategic approach to alliances has been the exception, not the rule, across the IT services, cloud and software ecosystem. Having an explicit set of criteria for continually evaluating a partnership — beyond simply revenue or sales opportunities — is a critical component, as is CEO-to-CEO buy-in. Informatica clearly has this figured out.

Informatica’s ‘power of three’ approach integrates technology in a unique way

Throughout our coverage of Informatica, we regularly discuss the company’s partner-first approach and why Informatica continues to position itself as “the Switzerland of data.” Take Informatica’s seven core tech alliance partners: Microsoft, Amazon Web Services (AWS), Oracle, Google Cloud, Databricks, Snowflake and MongoDB. We cannot identify any company in that list that has a tailored go-to-market approach with all six of the other vendors; even if you take the hyperscalers out of the equation, there is simply too much overlap in their capabilities.
 
Of the vendors TBR covers, Informatica is the only PaaS ISV that has worked across a broad cloud ecosystem in a way that gets the company natively embedded in critical layers of the data stack (i.e., Microsoft Fabric), thus making it easier for customers to adopt more components of IDMC. So, it is not surprising that GSI partners are excited about working with Informatica and unlocking growth via the cross-alliance structure.
 
The seven core tech alliance partners listed above, as well as other SaaS vendors like SAP and Salesforce, are becoming more integrated with each other by improving data sharing, opening up their APIs and making a comprehensive shift toward more open architectures. Although competitive obstacles will continue to exist, this trend could generate many opportunities for Informatica given its already established role with many of these tech partners. SAP’s new partnership with Databricks — in which Databricks will be sold as a native SAP service — offers a great model for Informatica, particularly if it wants to capture more engagements around SAP modernization, which the GSIs will help support.

SAP

SAP is not an Informatica technology partner, but naturally, ingesting, managing and integrating SAP data remains an important use case. We have spoken to enterprise customers that leverage Informatica’s data ingestion capabilities to extract data from SAP systems and make it available in a data lake built on Informatica partners’ platforms, such as Databricks, as part of the ERP modernization process. For many ISVs, developing a partnership with SAP can be difficult, but Informatica’s work with the biggest GSIs — including Accenture, Deloitte and Capgemini, which according to TBR’s SAP, Oracle and Workday Ecosystem Report collectively employ more than 144,000 people trained on SAP offerings — will play a huge role in getting Informatica in front of SAP and the related ERP modernization opportunities.
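To make the extract-to-lake pattern described above concrete, the sketch below shows a minimal, vendor-agnostic version of the flow: pull an extract from an operational system, lightly conform it, and land it as Parquet in lakehouse storage. The file names, columns and target path are hypothetical, and a real pipeline would run through Informatica IDMC (or a comparable integration tool) against live SAP systems rather than pandas.

```python
# Minimal, vendor-agnostic sketch of the extract-to-lake pattern; all names are hypothetical.
import pandas as pd

# 1. Ingest an extract from the operational system
#    (a flat-file export stands in here for data pulled from SAP)
orders = pd.read_csv("sap_sales_orders_extract.csv", parse_dates=["order_date"])

# 2. Apply light conformance so downstream analytics and AI see consistent names and no duplicates
orders = orders.rename(columns=str.lower).drop_duplicates(subset="order_id")

# 3. Land the data as Parquet, partitioned by month, in the lakehouse storage location
#    (hypothetical object-storage path; writing to S3 would also require the s3fs package)
orders["order_month"] = orders["order_date"].dt.to_period("M").astype(str)
orders.to_parquet(
    "s3://example-lakehouse/bronze/sap_sales_orders/",
    partition_cols=["order_month"],
    index=False,
)
```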
 
In describing Informatica’s strategies around “power of three solutions,” Ganley noted that the most frequent teaming approach would include a person from the GSI, a person from that GSI’s technology team (for example, a Deloitte SAP practice professional), and a person from Informatica.
 
In TBR’s view, this approach solidifies Informatica’s relationship with the GSI while helping the GSI solidify its relationship with the cloud or software vendor. As multiparty go-to-market approaches and solutions become more common across the ecosystem, TBR will be watching to see who staffs those teams, which vendor leads, and whether Informatica’s approach is emulated by others.

The value of the ecosystem can be measured: 17%, 47% and 83%

Admittedly, not every player or every professional in the technology space is sold on how ecosystems are changing and how valuable alliances are to long-term growth. Ganley provided perhaps the starkest evidence why ecosystems matter with a few simple numbers: “We looked at basically all the opportunities that we’d had in our system, which we’d either won or we’d lost over the past two years. And we found if we didn’t work with a partner, our win rate was around 17%.
 
If we worked with one partner, it went up to 47%, which kind of makes sense because we’ve got somebody in there speaking up for us, recommending us. But if we worked with two partners, and by two we mean one from the GSI and one from the ecosystem … the win rate goes up to 83%.” 17%, 47%, 83%. TBR has not seen a more compelling case for alliance management and ecosystem intelligence.
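For readers who want to see the arithmetic behind those figures, the sketch below shows how a win-rate-by-partner-count tabulation of opportunity records could be produced. The records, counts and field names are hypothetical and illustrate only the method, not Informatica’s actual data.

```python
# Illustrative sketch: tabulate win rate by the number of partners involved in each opportunity.
# The opportunity records below are hypothetical, not Informatica data.
from collections import defaultdict

# Each record: (number_of_partners_involved, won)
opportunities = [
    (0, False), (0, True), (0, False),   # no partner involved
    (1, True), (1, False), (1, True),    # one partner (e.g., a GSI)
    (2, True), (2, True), (2, False),    # two partners (GSI + ecosystem vendor)
]

totals = defaultdict(int)
wins = defaultdict(int)
for partners, won in opportunities:
    totals[partners] += 1
    wins[partners] += int(won)

for partners in sorted(totals):
    rate = wins[partners] / totals[partners]
    print(f"{partners} partner(s): win rate {rate:.0%} ({wins[partners]}/{totals[partners]})")
```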
 
According to TBR’s Summer 2024 Voice of the Partner Ecosystem Report, data management ranked among the top three growth areas for services vendors in the next two years, signaling that services vendors will continue to invest in resources and guide conversations with mature enterprise buyers that are further along in their digital transformation programs and can embark on the next phase: setting up a strong foundation for generative AI. Informatica’s portfolio and alliance strategy is well aligned with this emphasis on data management, helping the company become an invaluable strategic partner for GSIs and reinforcing its tagline: “Everyone is ready for AI except your data.”
 
Claim your free preview of TBR’s ecosystem intelligence research: Subscribe to Insights Flight today!