Sheer Scale of GTC 2025 Reaffirms NVIDIA’s Position at the Epicenter of the AI Revolution

As the undisputed leader of the AI market, NVIDIA holds an annual event, the GPU Technology Conference (GTC), whose impact on the broader information technology market no other company’s event can match. GTC 2025 took place March 17-21 in San Jose, Calif., with a record-breaking 25,000 in-person attendees, 300,000 virtual attendees and nearly 400 exhibitors on-site showcasing solutions built on NVIDIA’s AI and accelerated computing platforms.

NVIDIA GTC 2025: Pioneering the future of AI and accelerated computing

In 2024 NVIDIA CEO and cofounder Jensen Huang called NVIDIA GTC the “Woodstock of AI,” but to lead off the 2025 event’s keynote address at the SAP Center, he aptly changed his phrasing, calling GTC 2025 “the Super Bowl of AI,” adding that “the only difference is that everybody wins at this Super Bowl.”
 
While the degree to which every tech vendor “wins” in AI will vary, NVIDIA currently serves as the rising tide that is lifting all boats — in this case, hardware makers, ISVs, cloud providers, colocation vendors and service providers — helping to accelerate market growth despite the economic and geopolitical struggles that have hampered technology spending in the post-COVID era. NVIDIA’s significant investments as a platform company, rather than merely a GPU company — delivering innovations in full-stack AI and accelerated computing infrastructure and software — have provided much of the foundation upon which vendors across the tech ecosystem continue to build their AI capabilities.
 
During the event, which also took place at the nearby San Jose McEnery Convention Center, Huang shared his vision for the future, emphasizing the immense scale of the inference opportunity while introducing new AI platforms to support what the company sees as the next frontiers of AI. Additionally, he reaffirmed NVIDIA’s commitment to supporting the entire AI ecosystem by building AI platforms, rather than AI solutions, to drive coinnovation and create value across the entire ecosystem.

The transformation of traditional data centers into AI factories represents a $1 trillion opportunity

The introduction of ChatGPT in November 2022 captured the attention of businesses around the world and marked the beginning of the generative AI (GenAI) revolution. Since then, organizations across all industries have invested in the exploration of GenAI technology and are increasingly transitioning from the prototyping phase to the deployment phase, leveraging the power of inference to create intelligent agents, power autonomous vehicles and drive other operational efficiencies. As AI innovation persists, driven largely by the vision of Huang and the increasingly capital-rich company behind him, new AI paradigms are emerging and NVIDIA is helping the entire AI ecosystem to prepare and adapt.

The rise of reasoning

On Jan. 27, otherwise known as DeepSeek Monday, NVIDIA stock closed down 17.0% from the previous trading session, with investors believing DeepSeek’s innovations would materially reduce the total addressable market for AI infrastructure. DeepSeek claimed that by using a combination of model compression and other software optimization techniques, it had vastly reduced the amount of time and resources required to train its competitive AI reasoning model, DeepSeek-R1. However, at GTC 2025, NVIDIA argued that investors misunderstood the implications for the inference side of the AI model equation.
 
Traditional knowledge-based models can quickly return answers to users’ queries, but because basic knowledge-based models rely solely on the corpus of data that they are trained on, they are limited in their ability to address more complex AI use cases. To enhance the quality of model outputs, AI model developers are increasingly leveraging post-training techniques such as fine-tuning, reinforcement learning, distillation, search methods and best-of-n sampling. However, more recently test-time scaling, also known as long thinking, has emerged as a technique to vastly expand the reasoning capabilities of AI models, allowing them to address increasingly complex queries and use cases.

From one scaling law to three

In the past, pre-training scaling was the single law dictating how applying compute resources would impact model performance, with model performance improving as pre-training compute resources increased. However, at GTC 2025, NVIDIA explained that two additional scaling laws are now in effect: post-training scaling and test-time scaling. As their names suggest, model pre-training and post-training sit on the training side of the equation. Test-time scaling, by contrast, takes place during inference, allocating more computational resources during the inference phase to allow a model to reason through several potential responses before outputting the best answer.
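To put the three regimes in symbols, they can be sketched in the stylized power-law form common in the scaling-law literature. The exponents are placeholders, and the log-linear shape of the test-time relation reflects commonly reported curves rather than figures NVIDIA presented:

```latex
% Stylized scaling relations; exponents and forms are illustrative,
% not figures NVIDIA presented.
\[
\underbrace{L \propto C_{\mathrm{pre}}^{-\alpha}}_{\text{pre-training: loss falls as training compute grows}}
\qquad
\underbrace{\Delta Q \propto C_{\mathrm{post}}^{\beta}}_{\text{post-training: quality gains from fine-tuning/RL compute}}
\qquad
\underbrace{A \propto \log C_{\mathrm{test}}}_{\text{test-time: accuracy grows with per-query inference compute}}
\]
```

The third relation is the new one: quality can now be bought at inference time, query by query, which is why NVIDIA frames reasoning as a driver of compute demand rather than a reducer of it.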
 
Traditional AI models operate quickly, generating hundreds of tokens to output a response. However, with test-time scaling, reasoning models generate thousands or even tens of thousands of thinking tokens before outputting an answer. As such, NVIDIA expects the new world of AI reasoning to drive more than 100 times the token generation, equating to more than 100 times the revenue opportunity for AI factories.
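The arithmetic behind the claim is simple. A minimal sketch, with an assumed per-token price and token counts drawn from the “hundreds” versus “tens of thousands” framing above:

```python
# Back-of-the-envelope check of the "100x more tokens" framing above.
# The per-token price and token counts are illustrative assumptions,
# not NVIDIA's figures.

price_per_million_tokens = 10.0     # assumed price, in dollars

knowledge_model_tokens = 300        # "hundreds" of tokens per response
reasoning_model_tokens = 30_000     # "tens of thousands" incl. thinking tokens

def revenue_per_query(tokens: int) -> float:
    return tokens / 1_000_000 * price_per_million_tokens

print(revenue_per_query(knowledge_model_tokens))   # 0.003  ($ per query)
print(revenue_per_query(reasoning_model_tokens))   # 0.3    (100x the revenue)
```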
 
During an exclusive session with industry analysts, Huang said, “Inference is the hardest computing at scale problem [the world has ever seen],” dispelling the misconception that inference is somehow easier and demands fewer resources than training, while also indirectly supporting his belief that the transformation of traditional data centers into AI factories will drive total data center capital expenditures (capex) to $1 trillion or more by 2028.
 

Graph: NVIDIA Revenue, Growth and Projections (Source: TBR)



 
While on the surface, $1 trillion in data center capex by 2028 sounds like a lofty threshold, TBR believes the capex amount and timeline are feasible considering NVIDIA’s estimate that 2024 data center capex was around $400 billion.
 
Additionally, during 1Q25, announcements of investment commitments to build out data centers became increasingly common, and TBR expects this trend to only accelerate over the next few years. For example, in January the Trump administration announced the Stargate Project, with the intent to invest $500 billion over the next four years to build new AI infrastructure in the United States.
 
However, it is worth noting that Stargate’s $500 billion figure represents more than just AI servers; it includes other items such as the construction of new energy infrastructure to power data centers. TBR believes the same holds true for NVIDIA’s $1 trillion figure, especially when considering TBR’s 2024 total AI server market estimate of $39 billion.

The more you buy, the more you make: NVIDIA innovates to maximize potential AI factory revenue

To support the burgeoning demands of AI, NVIDIA is staying true to the playbook through which it has already derived so much success — investing in platform innovation and the support of its growing partner ecosystem to drive the adoption of AI technology across all industries.

AI factory revenue relies on user productivity

Reasoning capabilities allow models to meet the demands of a wider range of increasingly complex AI use cases. Although the revenue opportunity of AI factories increases as AI reasoning drives an exponential rise in token generation, expanding token generation also creates bottlenecks within AI factories and inevitably there is a tradeoff. To maximize revenue potential, AI factories must optimize the balance between token volume and cost per token.
 
From the perspective of an AI inference service user, experience comes down to the speed at which answers are generated and the accuracy of those answers. Accuracy is tied directly to the underlying AI model(s) powering the service and can be treated as fixed in this scenario, while the speed at which answers are generated for a single user is dictated by the rate of output token generation for that specific user. Dedicating more GPUs to serving a single user increases that user’s rate of output token generation, and it is something users are typically willing to pay a premium for.
 
However, in general, as more GPUs are dedicated to serving a single user, the overall output token generation of the AI factory falls. On the opposite end of the spectrum, an AI factory can maximize its overall output token generation by changing GPU resource allocations to serve a greater number of users at the same time; however, this has a negative impact on the rate of output tokens generated per user, increasing request latency and thereby detracting from the user’s experience.
 
As NVIDIA noted during the event, to maximize revenue, AI factories must optimize the balance of total factory output token generation and the rate of output token generation per user. However, once the optimal allocation of GPU resources is determined, revenue opportunity hits a threshold. As such, to increase the productivity and revenue opportunity of AI factories, NVIDIA supports the AI ecosystem with its investments in the development of increasingly performant GPUs, allowing for greater total factory output token generation as well as increased rates of output token generation per user.
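A toy model illustrates why this is a balancing act rather than a maximization of either metric alone. Every constant and functional form below is an assumption chosen only to show that revenue peaks at an interior operating point:

```python
# Toy model of the AI factory tradeoff described above: more concurrent users
# raises total token throughput but lowers tokens/sec per user, degrading
# experience. All constants and functional forms are illustrative assumptions.

R1 = 150.0   # tokens/sec a single user gets with all resources (assumed)
K = 64       # concurrency at which per-user rate halves (assumed)

def per_user_rate(users: int) -> float:
    return R1 / (1 + users / K)          # per-user speed falls with load

def revenue_per_sec(users: int) -> float:
    rate = per_user_rate(users)
    price = 2e-5 * (rate / R1)           # assume users pay a premium for speed
    return users * rate * price          # users x tokens/sec/user x $/token

best = max(range(1, 1001), key=revenue_per_sec)
print(best, round(revenue_per_sec(best), 4))  # revenue peaks near users == K
```

Under these assumptions revenue is maximized at neither extreme, mirroring NVIDIA’s point that an AI factory must find the balance point, and that only faster GPUs move the whole curve upward.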
 
During his keynote address, Huang laid out NVIDIA’s four-year GPU road map, detailing the upcoming Blackwell Ultra as well as the NVIDIA GB300 NVL72 rack, which leverages Blackwell Ultra and features an updated NVL72 design for improved energy efficiency and serviceability. Additionally, he discussed the company’s Vera Rubin architecture, which is set for release in late 2026 and marks the shift from HBM3/HBM3e to HBM4 memory, as well as Vera Rubin Ultra, which is expected in 2027 and will leverage HBM4e memory to deliver higher memory bandwidth. To round out NVIDIA’s four-year road map, Huang announced the company’s Feynman GPU architecture, which is slated for release in 2028.

Scale up before you scale out, but NVIDIA supports both

In combination with NVIDIA’s updated GPU architecture road map, Huang revealed preliminary technical specifications for the Vera Rubin NVL144 and Rubin Ultra NVL576 racks, with each system being built on iterative generations of the company’s ConnectX SuperNIC and NVLink technologies, promising stronger networking performance with respect to increased bandwidth and higher throughput. NVIDIA’s growing focus on NVL rack systems underscores Huang’s philosophy that organizations should scale up before they scale out, prioritizing the deployment of fewer densely configured AI systems compared to a greater number of less powerful systems to drive simplicity and workload efficiency.
 

Graph: 2024 Data Center GPU Market Share (Source: TBR)



 
Networking has become, and continues to become, more integral to NVIDIA’s business as the company’s industry-leading advancements in accelerated compute have necessitated full-stack AI infrastructure innovation. While NVIDIA drives accelerated computing efficiency on and close to the motherboard through the design of increasingly high-performance GPUs and CPUs and its ongoing investments in ConnectX and NVLink, the company is also heavily invested in driving AI infrastructure efficiency through its networking platform investments in Quantum-X InfiniBand and Spectrum-X Ethernet.
 
Although copper is well suited for short-distance data transmissions, fiber optics is more effective over long distances. As such, the scale-out of AI factories requires an enormous number of optical transceivers to connect every NIC (network interface card) to every switch, making transceivers the most numerous hardware component in a typical AI data center. NVIDIA estimates that optical transceivers consume approximately 10% of total computing power in most AI data centers. During his keynote address, Huang announced NVIDIA Photonics — what the company describes as a coinvention across an ecosystem of copackaged optics partners — to reduce power consumption and the number of discrete components in an AI data center.
 
Leveraging components from partners, including TSMC, Sumitomo and Corning, NVIDIA Photonics allows NVIDIA to replace pluggable optical transceivers with optical engines that are copackaged with the switch ASIC. This allows optical fibers to plug directly into the switch with the onboard optical engine processing and converting incoming data — in the form of optical signals — into electrical signals that can then be immediately processed by the switch. Liquid-cooled Quantum-X Photonic switch systems are expected to become available later this year ahead of the Spectrum-X Photonic switch systems that are coming in 2026. NVIDIA claims that the new systems improve power efficiency by 3.5x while also delivering 10x higher resiliency and 1.3x faster time to deploy compared to traditional AI data center architectures leveraging pluggable optical transceivers.
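Taking the two figures NVIDIA cited at face value, a rough sense of the savings emerges. This is a back-of-the-envelope sketch; the facility power figure is an assumed example:

```python
# Rough arithmetic from the figures cited above: transceivers draw ~10% of
# an AI data center's computing power, and NVIDIA claims 3.5x better power
# efficiency with copackaged optics. The facility power is an assumption.

total_power_mw = 100.0        # assumed facility power, in MW
transceiver_share = 0.10      # NVIDIA's ~10% estimate
efficiency_gain = 3.5         # NVIDIA's claimed improvement

before = total_power_mw * transceiver_share              # 10.0 MW on optics
after = before / efficiency_gain                         # ~2.9 MW with CPO
print(round(before - after, 1), "MW freed for compute")  # ~7.1 MW, ~7% of total
```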

Securing the developer base

Adjacent to what the company is doing in the data center, NVIDIA announced other, more accessible Blackwell-based hardware platforms, including RTX PRO Series GPUs, DGX Spark and DGX Station, at GTC 2025. At CES (Consumer Electronics Show) 2025 in January, NVIDIA made two major announcements: Project DIGITS, a personal AI supercomputer that provides AI researchers, data scientists and students with access to the Grace Blackwell platform; and the next-generation GeForce RTX 50 Series of consumer desktop and laptop GPUs for gamers, creators and developers.
 
Building on these announcements, at GTC 2025 NVIDIA introduced DGX Spark, the new name for the previously announced Project DIGITS, which leverages the NVIDIA GB10 Grace Blackwell Superchip and ConnectX-7 to deliver 1,000 AI TFLOPS (tera floating-point operations per second) of performance in an energy-efficient, compact form factor. DGX Spark will come pre-installed with the NVIDIA AI software stack to support local prototyping, fine-tuning and inferencing of models with up to 200 billion parameters, and NVIDIA OEM partners ASUS, Dell Technologies, HP Inc. and Lenovo are already building their own branded versions.
 
To complement its recently unveiled GeForce RTX 50 Series, NVIDIA announced a comprehensive lineup of RTX PRO Series GPUs for laptops, desktops and servers, with “PRO” denoting the solutions’ intent to support enterprise applications. At the top end of the lineup, RTX PRO 6000 will deliver up to 4,000 AI TFLOPS of performance, making it the most powerful discrete desktop GPU ever created. While DGX Spark systems will be available beginning in July, DGX Station is expected to be released toward the end of the year. DGX Station promises to be the highest-performing desktop AI supercomputer, featuring the GB300 Grace Blackwell Ultra Desktop Superchip and ConnectX-8, with OEM partners including ASUS, BOXX, Dell Technologies, HP Inc., Lambda and Supermicro building systems. Together, these announcements highlight NVIDIA’s commitment to democratizing AI and supporting developers.

Software is the most important feature of NVIDIA GPUs

In TBR’s 1Q24 Semiconductor Market Landscape, NVIDIA led all vendors in terms of trailing 12-month (TTM) corporate revenue growth, with hardware revenue accounting for an estimated 88.9% of the company’s TTM top line. However, while NVIDIA’s industry-leading top-line growth continues to be driven primarily by increasing GPU and AI infrastructure systems sales, the reason customers choose NVIDIA hardware ultimately boils down to two interrelated factors: the company’s developer ecosystem and its AI platform strategy.

The CUDA advantage

In 2006 NVIDIA introduced CUDA (Compute Unified Device Architecture), a programming model and framework purpose-built to enable the acceleration of workloads beyond graphics. With CUDA, developers gained the ability to write applications optimized to run on NVIDIA GPUs. Since CUDA’s inception, NVIDIA has relentlessly invested in strengthening the platform, supporting backward compatibility, publishing new CUDA libraries, and giving developers new resources to optimize performance and simplify application development.
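To make the programming model concrete, the sketch below writes a minimal CUDA kernel using Numba’s Python bindings, one of several ways to target CUDA GPUs today. It is a generic illustration, not NVIDIA sample code, and assumes a CUDA-capable GPU with the numba package installed:

```python
# Minimal CUDA-style kernel via Numba's Python bindings (illustrative only;
# requires an NVIDIA GPU plus the numba package and CUDA toolkit).
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # this thread's global index across the grid
    if i < out.size:          # guard threads that fall past the array end
        out[i] = a[i] + b[i]

n = 1 << 20
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # Numba handles transfers
assert np.allclose(out, a + b)
```

The point of the example is the division of labor CUDA established: the developer expresses data-parallel work once, and the runtime maps it across however many GPU cores are present, which is the portability NVIDIA’s backward-compatibility investments protect.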
 
As such, many legacy AI applications and libraries are rooted in CUDA, whose documentation is light years ahead of competing platforms, such as AMD ROCm. With respect to driving AI efficiency, several NVIDIA executives and spokespeople at GTC 2025 circled back to the notion that, when it comes to enabling the most complex AI workloads of today and tomorrow, software optimization is as important as, if not more important than, infrastructure innovation and optimization, underscoring the unique value behind NVIDIA’s CUDA-optimized GPUs. In short, at the heart of NVIDIA’s comprehensive AI stack and competitive advantage is CUDA, and as Huang emphasized to the attending industry analysts, “Software is the most important feature of NVIDIA GPUs.”

A new framework for AI inference

As the AI inference boom materializes, NVIDIA has leveraged the programmability of its GPUs to optimize the performance of reasoning models at scale, with Huang introducing NVIDIA Dynamo at GTC 2025. Dynamo is an open-source modular inference framework that was designed to serve GenAI models in multinode distributed environments and specifically developed for accelerating and scaling AI reasoning models to maximize token revenue generation.
 
The framework leverages a technique called “disaggregated serving,” which separates the processing of input tokens in the prefill phase of inference from the processing of output tokens in the decode phase. Traditional large language model (LLM) deployments leverage a single GPU or GPU node for both the prefill and decode phases, but each phase has different resource requirements, with prefill being compute-bound and decode being memory-bound. As NVIDIA’s VP of Accelerated Computing Ian Buck put it, “Dynamo is the Kubernetes of GPU orchestration.”
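A conceptual sketch of disaggregated serving follows, with hypothetical pool names and placeholder logic rather than Dynamo’s actual interfaces:

```python
# Conceptual sketch of disaggregated serving: prefill (compute-bound) and
# decode (memory-bound) run on separately sized worker pools, with the KV
# cache handed off between them. Not NVIDIA Dynamo's actual API.

PREFILL_POOL = ["gpu0", "gpu1"]          # sized for raw compute (prefill)
DECODE_POOL = ["gpu2", "gpu3", "gpu4"]   # sized for memory bandwidth (decode)

def prefill(prompt_tokens: list[int]) -> dict:
    # Process the entire prompt in one compute-bound pass, producing KV cache.
    return {"kv_len": len(prompt_tokens)}

def decode(kv_cache: dict, max_new_tokens: int) -> list[int]:
    # Generate output tokens one step at a time against the cached context.
    return [0] * max_new_tokens          # placeholder token ids

kv = prefill(list(range(4096)))          # would run on the prefill pool
out = decode(kv, max_new_tokens=256)     # KV cache handed off to decode pool
print(len(out))
```

Separating the two phases lets each pool be provisioned for its own bottleneck instead of forcing one GPU allocation to serve both.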
 
To optimize the utilization of GPU resources for distributed inference, Dynamo’s Planner feature continuously monitors GPU capacity metrics in distributed inference environments to make real-time decisions on whether to serve incoming user requests using disaggregated or aggregated serving while also selecting and dynamically shifting GPU resources to serve prefill or decode inference phases.
 
To further drive inference efficiencies by reducing request latency and time to first token, Dynamo has a Smart Router feature to minimize key value (KV) cache recomputation. KV cache can be thought of as the model’s contextual understanding of a user’s input. As the size of the input increases, the computation required to build the KV cache increases quadratically, and if the same request is frequently executed, this can lead to excessive KV cache recomputation, reducing inference efficiency. Dynamo Smart Router works by assigning an overlap score to each new inference request as it arrives and then using that score to intelligently route the request to the best-suited resource — i.e., whichever available resource has the highest overlap between its KV cache and the user’s request — minimizing KV cache recomputation and freeing up GPU resources.
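A minimal sketch of KV-cache-aware routing as described, scoring each worker by shared prefix length; the worker names and scoring rule are illustrative assumptions, not Dynamo’s implementation:

```python
# Sketch of KV-cache-aware routing: score each worker by how much of the
# incoming prompt it already has cached, then route to the highest-overlap
# worker so the least KV cache must be recomputed. Conceptual only.

def overlap_score(prompt: list[int], cached_prefix: list[int]) -> int:
    # Length of the shared token prefix between request and a worker's cache.
    n = 0
    for p, c in zip(prompt, cached_prefix):
        if p != c:
            break
        n += 1
    return n

def route(prompt: list[int], workers: dict[str, list[int]]) -> str:
    return max(workers, key=lambda w: overlap_score(prompt, workers[w]))

workers = {"gpu0": [1, 2, 3, 4], "gpu1": [1, 2, 9], "gpu2": []}
print(route([1, 2, 3, 4, 5, 6], workers))  # -> "gpu0" (4 tokens of overlap)
```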
 
Additionally, Dynamo leans on its Distributed KV Cache Manager feature to support both distributed and disaggregated inference serving and to offer hierarchical caching capabilities. Calculating KV cache is resource intensive, but as AI demand increases, so does the volume of KV cache that must be stored to minimize KV cache recomputation. Dynamo Distributed KV Cache Manager leverages advanced caching policies to prioritize the placement of frequently accessed data closer to the GPU, with less accessed data being offloaded farther from the GPU.
 
As such, the hottest KV cache data is stored on GPU memory with progressively colder data being offloaded to shared CPU host memory, solid-state drives (SSDs) or networked object storage. Leveraging these key features, NVIDIA claims Dynamo maximizes resource utilization, yielding up to 30 times higher performance for AI factories running reasoning models like DeepSeek-R1 on NVIDIA Blackwell. Additionally, NVIDIA leaders state that while designed specifically for the inference of AI reasoning models, Dynamo can double token generation when applied to traditional knowledge-based LLMs on NVIDIA Hopper.
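The hierarchical placement logic can be sketched as a tiered LRU cache; the tier names, capacities and eviction policy below are illustrative assumptions, not the Distributed KV Cache Manager’s actual design:

```python
# Sketch of hierarchical KV cache placement: hot entries stay in GPU memory,
# and the coldest entries are demoted down the tiers. Tier capacities and
# the LRU policy are illustrative assumptions only.
from collections import OrderedDict

TIERS = ["gpu_hbm", "cpu_ram", "ssd", "object_storage"]
CAPACITY = {"gpu_hbm": 2, "cpu_ram": 4, "ssd": 8, "object_storage": 10**6}
caches = {tier: OrderedDict() for tier in TIERS}   # each tier is LRU-ordered

def put(key: str, value: bytes, tier_idx: int = 0) -> None:
    tier = TIERS[tier_idx]
    caches[tier][key] = value
    caches[tier].move_to_end(key)                  # mark most recently used
    if len(caches[tier]) > CAPACITY[tier]:
        cold_key, cold_val = caches[tier].popitem(last=False)
        put(cold_key, cold_val, tier_idx + 1)      # demote coldest entry down

def get(key: str) -> bytes | None:
    for tier in TIERS:
        if key in caches[tier]:
            value = caches[tier].pop(key)
            put(key, value, 0)                     # promote hot entry to GPU
            return value
    return None

for i in range(8):
    put(f"kv{i}", b"...")
print({t: list(c) for t, c in caches.items()})     # hottest keys in gpu_hbm
```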
 

The Super Bowl of AI, but everybody wins

NVIDIA’s astronomical revenue growth and relentless innovation road map aside, perhaps nothing emphasizes the degree of importance the company holds over the future of the entire AI market more than the number of partners that are clamoring to gain a foothold using NVIDIA as a launching point. The San Jose McEnery Convention Center was filled with nearly 400 exhibitors showcasing how NVIDIA’s AI and accelerated computing platforms are driving innovation across all industries. NVIDIA GTC is no longer a conference highlighting the innovations of a single company; it is the epicenter of showcasing AI opportunity, and every company that wishes to play a role in the market was in attendance.
 
The broad swath of NVIDIA’s partner ecosystem was represented. Infrastructure OEMs and ODMs displayed systems built on NVIDIA reference architectures, while NVIDIA inception startups highlighted their own diverse codeveloped AI solutions. However, perhaps the most compelling and largest-scale example of NVIDIA relying on its partners to deliver AI solutions to end customers came from the company’s global systems integrator (GSI) partners.

NVIDIA provides the platform; partners provide the solution

The world’s leading GSIs, including Accenture, Deloitte, EY, Infosys and Tata Consultancy Services (TCS), all showcased how they are leveraging NVIDIA’s AI Enterprise software platform — comprising NIMs, NeMo and Blueprints — to help customers build and deploy their own customized AI solutions with a heavy emphasis on agentic AI. While some of the largest enterprises in the world have the talent required to build bespoke AI solutions, many other organizations rely on NVIDIA-certified GSI partners with training and expertise in NVIDIA’s AI technologies to develop and deploy AI solutions.
 
Agentic AI has emerged as the next frontier of AI, using reasoning and iterative planning to solve complex, multistep problems autonomously, leading to enhanced productivity and user experiences. NVIDIA AI Enterprise’s tools help make this possible, and at GTC 2025, NVIDIA business leaders shed light on three overarching reasons why NVIDIA AI Enterprise has resonated with end customers and NVIDIA partners alike.
 
First, NVIDIA AI Enterprise builds on CUDA to deliver software-optimized full-stack acceleration, much like other NVIDIA AI platforms. Business leaders essentially explained NIMs — the building blocks of AI Enterprise — as an opinionated way of running a GenAI model on a GPU in the most efficient way possible.
 
Second, NVIDIA AI Enterprise is enterprise grade, meaning that the thousands of first- and third-party libraries constituting the platform are constantly maintained with AI copilots scanning for security threats and AI agents patching software autonomously. Additionally, enterprises demand commitments to maintenance and standard APIs that are not going to change, and NVIDIA AI Enterprise ticks these boxes while also offering tiered levels of support services on top of the platform.
 
Finally, because NIMs are delivered as containers that can be orchestrated with Kubernetes, AI Enterprise is extremely portable, allowing the platform to deliver a consistent experience across a variety of environments.

Autonomous vehicles are the tip of the physical AI iceberg

Several of NVIDIA’s automotive partners also attended GTC 2025, displaying their vehicles inside and outside the convention center. These partners all leverage at least one of NVIDIA’s three computing platforms comprising the company’s end-to-end solutions for autonomous vehicles, with several partners leveraging NVIDIA’s entire platform — including General Motors (GM), whose adoption of NVIDIA AI, simulation and accelerated compute was announced by Huang during the GTC 2025 keynote address.
 
While autonomous vehicles are perhaps the most tangible example, NVIDIA’s three computer systems can be used to build robots of all kinds, ranging from industrial robots used on manufacturing lines to surgical robots supporting the healthcare industry. The three computers required to build physical AI include NVIDIA DGX, which is leveraged for model pre-training and post-training; NVIDIA OVX, which is leveraged for simulation to further train, test and validate physical AI models; and NVIDIA AGX, which acts as the robot runtime and is used to safely deploy distilled physical AI models in the real world.
 
Following the emergence of agentic AI, NVIDIA sees physical AI as the next wave of artificial intelligence, and the company has already codeveloped foundation models and simulation frameworks to support advancements in the field with industry-leading partners, such as Disney Research and Google DeepMind.

Conclusion

The sheer scale of NVIDIA GTC 2025 reaffirmed NVIDIA’s position at the epicenter of the AI revolution, with Huang’s keynote address filling all the available seating in the SAP Center. Born from Huang’s long-standing vision of accelerating workloads through parallel processing, NVIDIA’s relentless investments in R&D across the entire AI stack — from GPUs to interconnects and from software platforms to developer resources — remain the driving force behind the AI giant’s success and seemingly insurmountable lead over competitors.
 
NVIDIA’s first-mover advantage in accelerated computing was predicated on the company’s CUDA platform and its ability to allow developers to optimize applications running on NVIDIA GPUs. Nearly 20 years later, NVIDIA continues to leverage CUDA and its robust ecosystem of developers to create innovative AI platforms, such as Omniverse and AI Enterprise, that attract partners from every corner of the technology ecosystem. By swimming in its own lane and relying on its growing NVIDIA Partner Network to deliver AI systems and solutions to end customers, NVIDIA has built an unrivaled ecosystem of partners whose actions on the front lines with end customers facilitate the near-infinite gravity behind the company’s AI platforms.

Cloud Opportunity Expected to Increase Once DOGE Disruption Subsides

The U.S. federal government will need modern cloud services to be most efficient, regardless of DOGE-driven changes

Rolling pockets of chaos and an overall cloud of uncertainty may be the best way to describe the first two months of the new Trump administration. One upside to federal contracts is that they tend to be long-term in nature, which provides some stability for all types of vendors with existing contracts. However, the current transition has been rocky, to say the least, as contracts are getting canceled, agency staffing is reduced, and the existence of entire agencies is called into question.
 
Beyond the distinct financial impacts on many federal systems integrators (FSIs) and IT vendors, the overall uncertainty about future changes has complicated government contractors’ ability to conduct business as usual. Short-term uncertainty will likely persist, but eventually we expect a silver lining for the ecosystem of IT providers catering to the needs of the U.S. federal government. The government may become a more streamlined entity, in all respects, but IT will need to remain at the forefront of U.S. government operations.
 
Differences of opinion on optimal levels of funding will persist, but most people concur that the IT infrastructure supporting many core government agencies is antiquated and long overdue for an upgrade. After the Department of Government Efficiency (DOGE) completes its cost-cutting and agency reorganizations, the overall approach to modernizing those systems will come into greater clarity. Third parties, including FSIs and IT vendors such as Amazon Web Services (AWS), Microsoft, Google and Oracle, will all likely be part of the solution, enabling the reformed federal government to modernize and playing an ongoing role in eliminating waste, fraud and abuse on a refreshed IT infrastructure.
 


Vendors hope federal spending materializes after the fog of dismantling and reducing headcount dissipates

Reducing the size of the federal workforce was an immediate focus for DOGE. With the “Fork in the Road” email sent by the Office of Personnel Management to encourage staff resignations and the nonvoluntary firing of workers across civilian agencies, the total number of employees shed from the federal workforce is estimated to have surpassed 100,000 in the first two months of the Trump presidency.
 
The entire federal workforce still totals more than 3 million, excluding 1.3 million active military personnel, and additional cuts are a certainty. Early in the formation of DOGE, the idea of cutting up to 75% of federal workers was floated, though that target is likely far-fetched in reality. Regardless, it is clear the workforce-reduction efforts will continue as DOGE expands its reach to additional government agencies and pushes beyond the probationary employees who made up the bulk of early reductions.
 
As headcount reductions continue, cloud and software vendors could assist the administration with those cuts while, at the same time, being impacted by the fallout of those cuts. On Workday’s FY4Q25 earnings call, CEO Carl Eschenbach painted the impact of DOGE in an opportunistic light, stating: “In fact, the majority of them [federal IT systems] are still on-premise, which means they’re inefficient. And as we think about DOGE and what that could potentially do going forward, if you want to drive efficiency in the government, you have to upgrade your systems. And we find that as a really rich opportunity.”
 
If, in the era of DOGE, government agencies undertake new, or continue existing, efforts to modernize IT systems and adopt cloud-enabled solutions, it would certainly be a big opportunity not just for Workday, but for the entire federal IT contractor market. The certainty of that opportunity is still questionable, however, given the rapidity with which major changes to how government operates are occurring. Any technology opportunities with USAID (United States Agency for International Development), for instance, are now dubious given the speed with which the agency has been dissolved, even as legal challenges abound.
 
Additional rapid changes will occur with the Department of Education given President Trump’s clear directive to new Secretary of Education Linda McMahon to dismantle the agency. On ServiceNow’s 4Q24 earnings call, CFO Gina Mastantuono noted some of this uncertainty while also remaining optimistic about the federal opportunity, stating the company’s guidance reflects a stronger U.S. federal performance in the back half of 2025, given changes brought on by the administration.

A build-it-yourself approach could challenge packaged IT solutions

DOGE head Elon Musk has clearly employed many of the same techniques and strategies he has used in the past: after he purchased Twitter (now called X), for example, he sent a similar “Fork in the Road” email to Twitter employees and required them to send a weekly email of their accomplishments. With that in mind, it is relevant to consider the approaches to IT that Musk has used as CEO of Tesla and SpaceX for clues about what might occur in the U.S. federal space.
 
For some of the most important mission-critical IT and software decisions at Tesla and SpaceX, Musk deployed a proprietary software package that is shared by the companies to manage core manufacturing and sales, CRM and financial processes. Instead of utilizing a prebuilt solution from the likes of SAP or Oracle, internal teams at SpaceX and Tesla built, customized and manage their own ERP solution named WARPDRIVE. Musk could very well encourage a similar approach in federal agencies, either by licensing WARPDRIVE to those agencies or by directing more proprietary programs to be custom-built to reduce expenditures and theoretically achieve a superior technological solution. Either option would be challenging to implement but remains within the realm of possibility and would effectively reduce the addressable market for third-party IT solutions.
 


Scaling back new and existing awards will stifle revenue for cloud vendors in the short term

In the U.S. federal sector, SIs are a key conduit for how cloud and software companies capture opportunities. The opportunity pipeline and associated timeline for deals is notoriously long for federal spending, but the total opportunity has already decreased in size based on the cuts made by DOGE. Some of the strategies and actions recently used by leading SIs in the federal space are discussed in TBR’s special report, Leading Federal Systems Integrators React to U.S. Department of Government Efficiency. As outlined in the special report, all 12 of the leading federal SIs are looking to reduce expenses and prepare for a slowing of revenue streams in the near term. After a period of federal investment and expansion, this certainly is a change in trajectory for their businesses. In addition to making similar cost reductions, all 12 vendors are also doubling down on their competitive differentiation to secure growth moving forward. All of the recent market shifts, including security, AI and digital transformation, have led FSIs to reinvest in capabilities that provide the best opportunities for long-term expansion.
 
In the short term, even existing contracts with the federal government are subject to reductions or termination, which impacts not only the SI but also the IT vendors that have secured subawards to provide their technology as part of the overall engagement. One example TBR cited in the special report was the $1.5 billion award Leidos has with the Social Security Administration (SSA), which includes subawards for Pegasystems, AWS and multiple other IT vendors. The Leidos deal was scaled back by DOGE, marking the beginning of the disruption to awards with SIs and subawards with IT vendors. SSA represents a small portion of the federal budget, so when DOGE looks to larger agencies such as the Department of Health and Human Services for cost reductions and efficiencies, the impact on the federal SIs and supporting IT vendors will be even greater.
 
In terms of the scale of revenue at stake, AWS alone has won close to $500 million in subaward contracts in the last three fiscal years. That does not directly translate into revenue, however, as the money still needs to be outlaid, a process that is even more tenuous given the current spending environment and actions taken by the DOGE team. In addition to deals tied to FSIs, cloud vendors and software vendors also have direct deals/prime awards with federal agencies that are at greater risk. AWS, for instance, has won a total of $445 million in prime award contracts over the past three fiscal years.
 
Most of those awards are multiyear contracts that are not guaranteed, and the revenue could be reduced or not disbursed. In fact, only $104 million of those awards to AWS have been outlaid, meaning the balance, more than $340 million, could be impacted. It is also important to note these figures only reflect past deals; we anticipate the new federal deal pipeline for vendors like AWS to shrink due to uncertainty and the administration’s focus on cost reductions.

Big cloud deals such as JWCC and Stargate are expected to proceed without significant funding impacts

The impacts of DOGE should be widespread throughout the government, but we expect the top federal IT opportunities, the Stargate Project and the Joint Warfighting Cloud Capability (JWCC) contract vehicle, to avoid major funding challenges. Though both projects are in the early stages and still subject to competitive jockeying between technology providers to secure task orders, we expect the funding to remain available even amid broader spending reductions.
 
The JWCC was announced in 2022 with a total of $9 billion in funding available to Oracle, Microsoft, AWS and Google Cloud. Oracle has been a leading provider under the contract to date. Roughly $2.5 billion has been awarded to the four vendors thus far, leaving more than $6 billion in additional task orders under the contract vehicle. The spending bill passed in mid-March to avoid a federal shutdown illustrates the appetite to sustain, if not increase, defense spending. All the participants in JWCC have donated to and publicly supported the administration, which could solidify the longevity of the engagement.
 
Stargate was introduced by President Trump in the early days of his presidency, indicating that the project is likely to proceed in some fashion regardless of any budgetary pressure. The project is a joint venture among OpenAI, SoftBank and Oracle that will initially build a $100 billion data center in Texas. Over the next four years, the project aims to build additional large-scale data centers, with a total of $500 billion in funding, making it the largest centralized data center investment in history. Despite the administration’s role in announcing the project, the funding comes from private investors rather than the federal government, anchored by SoftBank, a firm known for its long-term investment strategies. OpenAI, SoftBank, Oracle and MGX are the initial equity investors, while Arm, Microsoft, NVIDIA and OpenAI have been named as technology partners and will have some involvement in the project.

Modern cloud IT solutions should have an elevated role in the restructured federal government

The headcount reductions, eliminations of agencies, and overall uncertainty will disrupt business as usual in the U.S. federal sector at least through the end of 2Q25. Once the new, smaller and streamlined structure emerges, we expect the value of modern IT solutions to be recognized and spending to resume and even increase compared with the prior trajectory. Fewer human resources, likely fewer skilled IT professionals, and an altered view of budgeting and ROI for all initiatives, IT included, all amplify the value that can be added by modernizing the infrastructure and solutions that support the mission of government agencies.
 
Across fragmented environments, many of which are still traditional on-premises setups based on aging technology, consolidation and the use of government-grade cloud delivery can improve performance and reduce the total cost to deliver, even over a relatively short three-to-five-year time frame. On the commercial side, many of the organizations we speak with note that the simplification of their IT environments is one of the strongest drivers of cloud adoption. AI and generative AI capabilities add to the benefits that can now be enabled. And for government agencies, preexisting data protocols and procedures increase their readiness to apply next-generation data analysis and AI. We see the business use cases for AI becoming more compelling on the commercial side, which bodes well for adding real value in the U.S. federal sector as it adapts to a more streamlined way of operating.

Infosys Readies to Deliver Outcomes at Scale Through Enterprise AI

U.S. Analyst and Advisor Meet 2025, New York City, March 4, 2025 — Infosys hosted industry analysts and advisors for a packed afternoon in the company’s offices at One World Trade Center. Using client stories amplified through technology partner support to reinforce Infosys’ role in the IT services, cloud and enterprise AI market, company executives consistently returned to a few main themes, including delivering business outcomes, maintaining trusted relationships, and focusing on speed, agility and simplification.  

 

Infosys’ hub-first strategy in the Americas demonstrates the company’s success with coinnovation and pursuit of large deals

Similar to previous events, Infosys kicked off with an update on the company’s strategy and performance in the Americas region. Anant Adya, Infosys EVP and head of Americas Delivery, led the presentation, highlighting key elements of the company’s success in the region, including its hub-first strategy; investments in and expansion of local talent pools, including in the U.S., Canada, Mexico, Brazil and the rest of LATAM; and strategic bets that are centered on delivering business outcomes and enabled through portfolio offerings such as Infosys Cobalt, Infosys Topaz and Infosys Aster.
Infosys’ six Tech Hubs across the U.S. remain the backbone of the company’s hub-first strategy. Located in Phoenix; Richardson, Texas; Raleigh, N.C.; Indianapolis; Providence, R.I.; and Hartford, Conn., and collectively staffed with thousands of local hires, these centers are increasingly allowing Infosys to drive coinnovation with clients and partners and pursue new opportunities with a key focus on large deals (defined by Infosys as deals over $50 million in value) in areas including cloud, AI, data, the edge and cybersecurity. Infosys has been rebalancing its onshore-offshore effort over the last five years.
 
For example, onshore effort was 24% in 4Q24, down from 27.7% in 4Q19. Offshore effort was 76% and 72.3% in 4Q24 and 4Q19, respectively. The recalibration began during the pandemic, as Infosys began capitalizing on the increase in remote working. The current ratio is also helping the company demonstrate pricing agility when competing for service delivery transformation projects. At the same time, maintaining a steady flow of local hires could help Infosys weather any pushback from the Trump administration on its America First Investment Policy requirements. Although the administration has yet to impose tariffs on companies utilizing services from overseas, it would not be surprising for this to happen in the future. Investing in training programs and collaborating with local universities through the Infosys Foundation would not only create a strong PR framework but also help Infosys increase its recruiting opportunities. Meanwhile, expanding across Canada and key LATAM countries, including Mexico and Brazil, to support both nearshore and locally sourced deals allows Infosys to diversify its revenue stream while enhancing the value of its brand beyond the U.S.
 
As Infosys continues to execute on its well-oiled strategy, investing and expanding in the next growth areas across the company’s cloud and enterprise AI portfolio will largely be centered on calibrating its commercial model as client discussions evolve from transactions to outcomes. For example, to support these expansion efforts, Infosys’ work within the Infosys Cobalt portfolio has evolved from tech optimization and data center migration to developing and applying industry solutions, and now includes accounting for the role of AI.
 
Building out a fluid enterprise to derive greater value from AI has compelled Infosys to develop solutions with an eye toward being more digital, more composable and more autonomous. This solution framing is also helping the company drive next-gen conversations with its technology partners and with clients that are seeking to develop an intelligent enterprise enabled by AI.

Infosys’ pivot toward Outcome as a Service will test the company’s ability to drive change management at scale, starting with its own operations

Expanding on Infosys’ evolving go-to-market strategy, portfolio, talent and collaboration with partners, Infosys Chief Delivery Officer Dinesh Rao, along with a series of client and partner panels throughout the afternoon, not only brought to light the company’s aspirations around driving outcome services opportunities but also discussed at length the challenges stakeholders face, often revolving around change management. Rao’s presentations spanned client use cases, AI evolution, and Infosys’ portfolio adjustment as well as resource pyramid calibration to balance support opportunities in foundational and emerging areas. Three key areas stood out:

  • Client examples: Amplifying value through innovation has helped Infosys capture and deliver services for global clients across manufacturing, retail and consumer packaged goods (CPG), among other verticals, while also positioning the company to test new commercial models. For example, in a multiyear, multibillion-dollar deal supporting a multinational communications provider, Infosys is deploying its Outcome as a Service commercial framework, bundling software, hardware and third-party services on a single platform.
  • AI: Infosys launched two NVIDIA-enabled small language models (SLMs), Infosys Topaz BankingSLM and Infosys Topaz ITOpsSLM, targeting clients through core industry and horizontal offerings and allowing them to use their own data on top of the prebuilt SLMs. Additionally, Infosys released the Finacle Data and AI Suite of solutions to support banking clients seeking to enhance IT systems and customer experience using AI. The solutions include Finacle Data Platform, Finacle AI Platform and Finacle Generative AI. Infosys’ investments in industry-centric SLMs, which the company positions with clients as either accelerators or platforms to drive targeted conversations using industry and/or functional enterprise data, closely align with the company’s playbook from a few years ago, when it began developing industry cloud solutions, both proprietary and partner enabled. Embedding generative AI (GenAI) as part of a deal, rather than using it as a lead-in, is a smart strategy as it allows Infosys to better appeal to price-conscious clients and steer conversations toward outcomes and the benefits of the engagement, rather than trying to convince clients to spend a premium on a technology that has yet to prove ROI at scale. We believe Infosys’ investment in agentic AI capabilities for Infosys Topaz, along with the ITOpsSLM, can also position the company to drive nonlinear, profitable engagements, especially with clients that are seeking to migrate and modernize their existing mainframe infrastructure but lack the necessary COBOL skills and understanding of the environment.
  • Resource pyramid: Infosys’ three talent categories — traditional software engineers, digital specialists focused on digital transformation and ongoing support, and Power Programmers — allow the company to balance innovation and growth while calibrating its business and commercial models. The Power Programmers group consists of highly skilled professionals who are responsible for developing products and ensuring that the intellectual property they create and use meets the cost-saving requirements Infosys pitches to clients. Although the other two groups follow a traditional employee pyramid structure, the Power Programmers group is much leaner and resembles the business models that many vendors, including Infosys, may aspire to adopt in the future.

Rao also discussed Infosys’ approach to innovation. The company’s business incubator framework, backed by Infosys’ $500 million innovation fund and enabled through its network of Living Labs, has empowered the company’s employees to think creatively, thus helping Infosys solidify its culture of learning and collaboration. Gaining employee buy-in is a must, especially at a time when the company is pivoting its own operations toward outcome-based service delivery.

AI- and partner-led discussions will continue to guide Infosys’ efforts to solidify its position as a trusted solution broker

Sunil Senan, Infosys’ global head of Data and AI, provided an update on Infosys’ AI-first strategy and portfolio, which have allowed Infosys to stay competitive in a rapidly evolving AI market. Senan noted that the opportunities around agentic AI require a rigorous data and governance strategy — an acknowledgment that is not surprising given the company’s typically humble yet pragmatic approach to emerging growth opportunities.
 
Scaling AI adoption comes with implications and responsibilities, which Infosys is trying to address one use case at a time. For example, in October 2023 Infosys launched the Responsible AI Suite, which includes accelerators across three main areas: Scan (identifying AI risk), Shield (building technical guardrails) and Steer (providing AI governance consulting). These capabilities will help Infosys strengthen ecosystem trust via the Responsible AI Coalition.
 
Infosys also claimed it was the first IT services company globally to achieve the ISO 42001:2023 certification for the ethical and responsible use of AI. Infosys recognizes that AI adoption will come in waves. The first wave, which started in November 2022 and continued for the next 18 to 24 months, was dominated by pilot projects focused on productivity and software development. In the current second wave, clients are starting to pivot conversations toward improving IT operations, business processes, marketing and sales.
 
The real business value will come from the third wave, which will focus on improving processes and experiences and capitalizing on opportunities around design and implementation. Infosys believes the third wave will start in the next six to 12 months. Although this time frame might suit cloud- and data-mature clients, only a small percentage of enterprises are AI ready across all components, including data, governance, strategy, technology and talent. Thus, it might take a bit longer for AI adoption to scale.
 
But as Infosys continues to execute its pragmatic strategy, the company relies on customer success stories that will help it build momentum. And there was no shortage of examples throughout the afternoon, with Infosys clients across the spectrum — from just getting started to scaling hundreds of AI deployments — sharing their experiences with Infosys and within the broader ecosystem.
 
We believe that as Infosys pivots toward an Outcome as a Service commercial model, opportunities to scale AI will stem from the company’s ability to demonstrate value. In a traditional transformation project, the company often deployed professionals to perform typical implementation work and then transferred them to another project; in an AI project, staff would need to stay at a client’s site for a longer period to ensure the technology delivers the value promised. Approaching AI opportunities with a similar focus will not only help Infosys justify its rates but also help the company calibrate its staffing pyramid.
 
Infosys’ long-term success also depends on the company’s relationship with technology partners. During previous iterations of the summit, Infosys has had separate alliance-led presentations, but this time around the company included the partner presentations, specifically SAP, in a client panel. SAP’s presentation discussed a successful, three-year SAP S/4HANA migration for a global manufacturing client. Although the three-year turnaround was impressive, what stood out was how much the SAP executive was part of the conversation with the client. Speaking with the client on behalf of Infosys demonstrated trust and the depth of the relationship between SAP and Infosys.
 
Throughout TBR’s Ecosystem Intelligence research, we have written extensively that partners speaking on behalf of partners is often the last mile and the biggest challenge for vendors to overcome when they try to differentiate. We understand that vendors, especially IT services vendors, try to maintain tech agnosticism during consulting workshops, but when it comes to the implementation part of the engagement, developing more exclusive messaging resonates with clients much better as it shows knowledge, accountability and trust between parties.

Product Engineering and Quality Engineering present a tale of two cities that can help Infosys deliver value with minimal disruption to its financial profile

As Infosys continues to balance foundational growth with pursuing opportunities in new areas, the company’s evolving portfolio allows it to deliver steady financial results. Executives from Infosys’ Engineering Services and Quality Engineering lines of business, along with clients, highlighted how these two areas are helping Infosys achieve just that. Ben Ko, CEO of Kaleidoscope, a company Infosys acquired in 2020, explained how his company and its portfolio of solutions and products allow Infosys to capture manufacturing and R&D budgets, a slice of overall enterprise spend that was somewhat untapped prior to Infosys’ expansion into the product engineering space. Infosys Engineering Services remains among the fastest-growing units within the company as Infosys strives to get closer to product development and minimize GenAI-related disruption to its content distribution and support position.
 
Since the 2020 purchase of Kaleidoscope, which provided a much-needed boost by infusing the new skills and IP needed to appeal to the OT buyer, Infosys has enhanced its value proposition to meet GenAI-infused demand. For example, Infosys has purchased India-based, 900-person semiconductor design services vendor InSemi and Germany-headquartered engineering R&D services firm in-tech, acquisitions that reflect a measured-risk approach to enhancing the company’s chip-to-cloud strategy.
 
The purchase of in-tech certainly accelerates these opportunities, bringing in strong relationships with OEM providers, which is a necessary stepping stone as Infosys tries to bridge IT and OT relationships.
 
Meanwhile, Venky Iyengar, Infosys VP and head of Quality Engineering, along with clients, discussed how the Infosys business is adjusting both its value proposition and staffing models to account for automation and AI and to continue to deliver value to clients with minimal disruption to Infosys’ financial profile.
 
While a degree of revenue cannibalization is inevitable in the long run, Infosys’ approach toward platform-enabled quality engineering services, along with its efforts to fold these offerings under broader transformation projects, will allow the company to pivot and develop its position as a solutions broker.

It is all about the margins, and Infosys has the right ingredients to keep shareholders happy

Infosys, like many of its peers, faces a new reality influenced heavily by AI and a reshuffling of buyers’ IT spending priorities. With IT becoming a utility, we expect enterprises not to cut back on spending but rather to demand that third-party vendors such as Infosys deliver more with less. AI- and automation-enabled service delivery gives Infosys the right tools to execute on such expectations. As long as Infosys allows buyers to see that productivity improvements have driven greater volume, the company will be able to maintain its operating margin. Otherwise, buyers might start pushing back and asking for savings on their contracts when Infosys pitches new work that uses fewer employees. It was evident from the sessions that Infosys, with its enterprise AI capabilities, is strongly positioned to help clients unlock business value and drive growth. This aligns with the broader industry trend of leveraging AI to meet evolving client demands.
 
We understand that Outcome as a Service is a long-term play that will test Infosys’ culture and ability to manage trust within the ecosystem. The last five years of steady financial performance and the expansion of Infosys’ large and mega deals roster have provided the company with a strong foundation to make that pivot. Many of Infosys’ alliance partners, both technology and services ones, that TBR has spoken with view Infosys as a top delivery partner, thus providing the ecosystem support needed for the company to navigate the evolving IT services market.
 
TBR will continue to cover Infosys within the IT services, ecosystems, cloud and digital transformation spaces, including publishing quarterly reports with assessments of Infosys’ financial model, go-to-market, and alliances and acquisitions strategies.
 
For a comparison with Infosys’ peers and other IT services vendors, TBR includes Infosys in our quarterly IT Services Vendor Benchmark, our semiannual Global Delivery Benchmark and Cloud Ecosystem Report, and our annual Adobe and Salesforce Ecosystem Report; SAP, Oracle and Workday Ecosystem Report; and upcoming ServiceNow Ecosystem Report. Access the data and analysis in each of these reports with a TBR Insight Center™ free trial. Sign up today!

India-centric IT Vendors Leverage Partnerships for Technology Expansion and Market Reach

Expanding through partnerships

The India-centric vendors, which include Cognizant, HCLTech, Infosys, Tata Consultancy Services (TCS) and Wipro, leverage partnerships to expand their technology capabilities and scale while also bringing in industry knowledge to strengthen the value of their portfolios. Although these partnerships do not vary significantly from those of other IT services vendors, the India-centric vendors each bring different benefits, such as price competitiveness and low cost of scale, that can enhance other vendors’ go-to-market strategies and ability to reach underpenetrated markets while also bringing in portfolio expertise.
 
Understanding how similar companies bring different capabilities and strengths to their technology alliance partners highlights opportunities for other ecosystem players, such as smaller software companies, OEMs and niche consultancies, that are looking to expand with the India-centric vendors.
 
Graph: India-centric IT Vendor Headcount for 4Q24

Cognizant

Cognizant forges partnerships with industry-oriented vendors and expands its security and digital capabilities. During 4Q24 and early 2025, Cognizant looked to relationships with key partners such as Salesforce and ServiceNow to enhance the company’s positioning around transformation and software development as well as create opportunities around migration and managed services.
 
As transformation projects increasingly center on AI, developing a suite of offerings that streamline the use of data and analytics, security and managed services helps Cognizant strengthen client relationships and drive new projects. Working with security vendors to deepen its security capabilities and protect digital environments will lead to additional services engagements for Cognizant. Further, partnering around industry expertise is enabling Cognizant to improve its performance in certain verticals, such as recently landing modernization and digitization projects with life sciences clients.
 
Cognizant manages an innovation ecosystem, both internally and with clients, to drive value across industries. In April 2023 Cognizant launched Bluebolt, an innovation program that seeks to develop new ways to address clients' business challenges. Since the launch, more than 115,000 ideas have been generated, of which 22,000 have been implemented, increasing client engagement. Additionally, Cognizant worked with Microsoft to create the Innovative Assistant, a tool that supports idea generation for Microsoft employees and one that Cognizant could replicate with other partners.
 
In 2014 Cognizant acquired TriZetto, a healthcare IT software and solutions provider, which added healthcare clients and specialized employees and offerings, creating new opportunities for Cognizant across the healthcare space. Cognizant continues to invest in the platform, offering back- and front-office solutions for payers, providers and patients, as well as care management and connected solutions to transform the patient and physician experience. The acquisition and Cognizant’s continued investments in healthcare offerings resulted in the vertical overtaking financial services as the company’s top revenue generator in 2024.
 
Cognizant’s active acquisition pace brings in a variety of new skills and capabilities to supplement existing areas and enable the company to expand transformation contracts with clients. For example, Cognizant acquired ServiceNow partner Thirdera in December 2023, strengthening its consulting and implementation services. Through the acquisitions, Cognizant has quickly developed its engineering, software and advisory services, enhancing its positioning with clients.

HCLTech

HCLTech’s partner network encompasses technology vendors, industry experts, and research and learning institutions, allowing the company to develop a wider set of in-house expertise and offerings. Adding new hyperscaler partners to expand its capabilities and scale enables HCLTech to deliver a wider range of AI offerings and guide technology services clients’ efficiency-related and insight-driven transformation projects. Further, integrating industry expertise within its technology portfolio improves HCLTech’s ability to address clients’ specific transformation needs.
 
Pursuing solution codevelopment partnerships helps HCLTech leverage internal expertise alongside that of its partners to align its portfolio with emerging pain points resulting from heightened AI, cloud and digital usage. HCLTech will strengthen its relationships with key partners such as Microsoft, Google Cloud, Amazon Web Services (AWS) and IBM to enhance its positioning around AI. In addition, HCLTech will enhance its industry positioning through partners and acquisitions to better tailor its offerings and deepen relationships in the telecom, financial services and manufacturing industries.
 
HCLTech’s ongoing investments in engineering capabilities have deepened the vendor’s expertise, allowing it to offer semiconductor design, manufacturing and validation services. Through acquisitions, HCLTech has added new experience and solutions and strengthened its manufacturing relationships. The integration of Engineering and R&D Services (ERS) sales and go-to-market motions with IT and business services sales will help HCLTech extend the reach of its portfolio, generating new segment opportunities and expanding the company’s reach outside its more mature areas such as manufacturing and automotive.
 
HCLTech leads with its Relationship Beyond the Contract (RBTC) approach, which allows the company to deepen client relationships, better address challenges, and future-proof organizations for disruption and threats. With the heightened demand and interest around generative AI (GenAI), HCLTech’s development of applications, infrastructure, semiconductor offerings and business process solutions underpinned by its GenAI Labs enables the company to secure its relationships.

Infosys

Infosys’ alliance partner strategy mirrors that of many of its competitors as the company seeks to secure foundational revenue opportunities while pursuing innovation through a measured risk approach. The company strives to differentiate by sticking to its strengths rather than branching too far into partners’ territory, which enterprise buyers strongly appreciate. Recent partnerships centered on GenAI also provide a glimpse into Infosys’ efforts to establish a beachhead in the emerging market as the company navigates choppy market demand and increases its efforts to expand margins.
 
Infosys' three talent categories — traditional software engineers; digital specialists focused on digital transformation (DT) and ongoing support; and power programmers — allow the company to balance innovation and growth as it calibrates its business and commercial models. To support these categories, Infosys is hiring aggressively, particularly for 2025. In January Infosys announced plans to expand its Hyderabad, India, operations, adding 17,000 people for a total of 50,000 employees in the region. Although no time frame was outlined for this increase, during the company's 4Q24 earnings call Infosys' executives shared that the company plans to hire 20,000 freshers in FY26, up from 15,000 in FY25.
 
Infosys' broad-based GenAI investments centered on the development of industry-aligned solutions and small language models, largely enabled through collaborations with NVIDIA, Microsoft and Meta, enhance the company's value proposition when competing for custom model development engagements. In addition to driving opportunities within the telco vertical, we believe Infosys' collaboration with NVIDIA will also help the company enhance the recently launched Infosys Aster — a set of AI-driven marketing services, solutions and platforms — as Infosys looks to develop a comprehensive strategy for its digital marketing offerings. Supporting clients seeking to enhance contact center operations through the use of AI and GenAI could backfire if technology and business priorities are misaligned, as chatbots have been around for a long time but have had only minimal positive impact on customer service.
 

Watch on demand: $130+ Billion Emerging India Opportunity – India-centric vs. Global IT Services Firms: Who Wins and Why

TCS

TCS has dedicated business units for its three largest technology partners, fostering deep expertise and enabling the development of specialized solutions. These units leverage a comprehensive approach, including certified talent, Centers of Excellence, migration factories and innovation garages, to deliver superior cloud services. This approach allows TCS to effectively guide clients through their cloud migrations, codevelop industry-specific solutions and ultimately drive successful cloud transformations.
 
Beyond its core cloud partnerships, TCS actively cultivates a diverse ecosystem of technology alliances. These partnerships extend beyond the traditional cloud providers, enabling TCS to enhance its own offerings, strengthen partner capabilities and collectively expand market reach. This collaborative approach fosters mutual growth and enables TCS to deliver more comprehensive and innovative solutions to clients.
 
TCS emphasizes its deep expertise in enterprise application deployment and management, combined with its scale and cost-effective resources, to position itself as a valuable partner within the technology ecosystem. The company is actively investing in talent development and AI-driven solutions to meet surging client demand around GenAI. By leveraging strong industry relationships and strategic partnerships with leading technology providers, TCS delivers a comprehensive range of digital services, including AI-enabled offerings, and collaboration helps TCS enhance its value proposition for clients.
 
TCS stands out among its India-based peers due to its impressive scale, cost-effective labor force, well-balanced portfolio, robust automation framework, in-depth understanding of legacy IT systems and vast expertise in DT. The company’s scale allows it to work across a wider range of client needs and challenges that can be addressed through its DT and application portfolio. Despite TCS’ larger scale relative to peers, the company maintains roughly 75% of its headcount in offshore locations.

Wipro

Wipro continues to expand its partner ecosystem, including incorporating security and enablement services, to ensure the company can provide a wider range of technology solutions. For example, during 4Q24 Wipro partnered with multiple vendors to grow its security services offerings. Working with Netskope and Lineaje helps Wipro address risks and vulnerabilities across the technology landscape, driving additional value and strengthening client relationships.
 
In addition to technology development, Wipro looks to deepen its industry expertise through partners, advancing its healthcare and financial services portfolio. Through 2025, Wipro will grow its partner ecosystem to include additional technology capabilities and security services to guide clients’ modernization and efficiency transformations while also maintaining a portfolio that rivals those of its peers.
 
Relative to its India-centric peers, Wipro finds itself in a more precarious position, with slower revenue growth and smaller profits. During 2024 Wipro IT Services (ITS) was able to increase its operating profit, owing to improved internal management, the use of AI and automation tools, and a streamlined talent structure. Wipro ITS' revenue generation slowed in 2023 and 2024, resulting in a year-to-year decline in 2024 in both local currency and U.S. dollars due to ongoing execution challenges in APMEA and Europe and limited client interactions. Capco, a financial services consulting firm Wipro acquired in 2021, remains a bright spot for Wipro, as it has added a new approach to serving industry clients in Europe.

Conclusion

Each of the India-centric vendors brings its own strengths and weaknesses that can enhance partners' go-to-market strategies and help deliver on emerging technologies. The composition of talent varies across the vendors, with some benefiting from technical expertise, such as engineering, while others have a greater bench of consulting and delivery staff. As AI permeates client engagements, developing a larger partner ecosystem that encompasses multiple business models, talent pools and portfolio strengths, as well as offshore delivery leverage, will enable IT services vendors to compete more effectively for limited client spend.
 
Further, innovating internally with partners, including testing AI tools in-house before bringing them to market, strengthens portfolio value and client trust. Partnering outside of typical parameters can also bring in much-needed innovation, refreshed talent and enhanced delivery resources that deepen client trust and engagement.
 
TBR’s ongoing research and company coverage includes regular analysis of alliances between the leading global systems integrators, including the companies outlined in this report. In addition, we publish the Cloud Ecosystem Report semiannually and the Adobe & Salesforce Ecosystem Report, the SAP, Oracle and Workday Ecosystem Report, the U.S. Federal Cloud Ecosystem Report and the Voice of the Partner Ecosystem Report annually. Access the data and analysis in each of these reports with a TBR Insight Center™ free trial. Sign up today!

Informatica’s Alliance Strategy: Powering GSIs, Scaling AI and Strengthening the Data Ecosystem

Informatica uses the ‘power of three solutions’ to bolster its ecosystem

An increasing amount of research and analysis time at TBR is focused on ecosystem intelligence, which applies a set of questions and frameworks to extend traditional market intelligence and competitive intelligence approaches in an effort to better understand a market. Recently, TBR analysts spoke with Informatica’s Richard Ganley, Senior Vice President, Global Partners, and his insights into the actions the company is taking to enhance its alliance relationships with nine key partners (Figure 1) stood out to the team. We believe Informatica is doing the following things really well:
 

  • Enthusiastically embracing the “power of three solutions,” that is, solutions pulling together resources from a global systems integrator (GSI), a cloud or software vendor, and Informatica. According to Ganley, this approach helps enterprise IT clients “modernize faster … [and] master some of their most critical data with multivendor solutions.”
  • Consistently evaluating GSIs based on their performance with Informatica, including growth, new solutions and mindshare
  • Ensuring the company as a whole understands the evolving importance of the ecosystem to Informatica’s success

Figure 1: List of Informatica's key partners (1Q25)

Informatica’s relationship with GSIs

Ganley cited four reasons why GSIs want closer relationships with Informatica. First, Informatica has a mature data platform, the Intelligent Data Management Cloud (IDMC). According to Ganley, one part of the platform's appeal is its simplicity: GSIs "don't need to work with small vendors who we compete with and pick three or four of them and stitch together their technologies to try and make a platform. They can just work with us and everything is there."
 
Second, simply scale. Although Ganley did not say it explicitly, every GSI that TBR covers has been working to consistently (and profitably) bridge the gap from AI pilots and limited AI deployments to AI at scale. Informatica's established scale reassures GSI partners. As Ganley put it, GSIs "can see eventually how they can build a billion-dollar practice with Informatica."
 
Third, Informatica partners with the GSI’s partners, including what Ganley described as “very close engineering relationships with the hyperscalers.” Fourth, Ganley described a “huge uptick” in GSI partners’ professionals being trained and certified on Informatica’s solutions, increasing from around 8,000 per year in 2020 to more than 15,000 in 2024. Ganley noted, “one of the reasons we’re seeing so many of our partners wanting to double down with us [is] because they see us as very important foundational work for AI to be possible.”
 
Ganley also highlighted Informatica's relationship with LTIMindtree, specifically within the context of how Informatica evaluates (and invests people and resources in) GSI partners. Of the nine strategic GSIs listed in Figure 1, LTIMindtree is unquestionably the smallest in terms of revenue, and Ganley noted that LTI and Mindtree, as separate companies, had been very appealing as strategic partners. After the merger was completed and LTIMindtree recruited experienced talent known to Informatica, the two companies revisited a strategic partnership. Informatica laid out specific criteria, and LTIMindtree invested in training and other aspects of the alliance. The CEOs of both companies formally announced the new alliance.
 
The result has been, according to Ganley, highly successful for both parties: “They’ve been absolutely amazing to work with … and their data and AI practice is quite a good size. They’ve got 12,000 people in the practice, and I think that’s more than 10% of their business. So it’s pretty meaningful for them.” In TBR’s view, this deliberate, strategic approach to alliances has been the exception, not the rule, across the IT services, cloud and software ecosystem. Having an explicit set of criteria for continually evaluating a partnership — beyond simply revenue or sales opportunities — is a critical component, as is CEO-to-CEO buy-in. Informatica clearly has this figured out.

Informatica’s ‘power of three’ approach integrates technology in a unique way

Throughout our coverage of Informatica, we regularly discuss the company's partner-first approach and why Informatica continues to position itself as "the Switzerland of data." Take Informatica's seven core tech alliance partners: Microsoft, Amazon Web Services (AWS), Oracle, Google Cloud, Databricks, Snowflake and MongoDB. We cannot identify any company on that list that has a tailored go-to-market approach with all six of the others; even if you take the hyperscalers out of the equation, there is simply too much overlap in their capabilities.
 
Of the vendors TBR covers, Informatica is the only PaaS ISV that has worked across a broad cloud ecosystem in a way that gets the company natively embedded in critical layers of the data stack (e.g., Microsoft Fabric), thus making it easier for customers to adopt more components of IDMC. So it is not surprising that GSI partners are excited about working with Informatica and unlocking growth via the cross-alliance structure.
 
The seven core tech alliance partners listed above, as well as other SaaS vendors like SAP and Salesforce, are becoming more integrated with each other by improving data sharing, opening up their APIs and making a comprehensive shift toward more open architectures. Although competitive obstacles will continue to exist, this trend could generate many opportunities for Informatica given its already established role with many of these tech partners. SAP’s new partnership with Databricks — in which Databricks will be sold as a native SAP service — offers a great model for Informatica, particularly if it wants to capture more engagements around SAP modernization, which the GSIs will help support.

SAP

SAP is not an Informatica technology partner, but naturally, ingesting, managing and integrating SAP data remains an important use case. We have spoken to enterprise customers that leverage Informatica's data ingestion capabilities to extract data from SAP systems and make it available in a data lake from an Informatica partner such as Databricks as part of the ERP modernization process. For many ISVs, developing a partnership with SAP can be difficult, but Informatica's work with the biggest GSIs — including Accenture, Deloitte and Capgemini, which according to TBR's SAP, Oracle and Workday Ecosystem Report collectively employ more than 144,000 people trained on SAP offerings — will play a huge role in getting Informatica in front of SAP and the related ERP modernization opportunities.
 
In describing Informatica’s strategies around “power of three solutions,” Ganley noted that the most frequent teaming approach would include a person from the GSI, a person from that GSI’s technology team (for example, a Deloitte SAP practice professional), and a person from Informatica.
 
In TBR’s view, this approach solidifies Informatica’s relationship with the GSI while helping the GSI solidify its relationship with the cloud or software vendor. As multiparty go-to-market approaches and solutions become more common across the ecosystem, TBR will be watching to see who staffs those teams, which vendor leads, and whether Informatica’s approach is emulated by others.

The value of the ecosystem can be measured: 17%, 47% and 83%

Admittedly, not every player or every professional in the technology space is sold on how ecosystems are changing and how valuable alliances are to long-term growth. Ganley provided perhaps the starkest evidence why ecosystems matter with a few simple numbers: “We looked at basically all the opportunities that we’d had in our system, which we’d either won or we’d lost over the past two years. And we found if we didn’t work with a partner, our win rate was around 17%.
 
If we worked with one partner, it went up to 47%, which kind of makes sense because we’ve got somebody in there speaking up for us, recommending us. But if we worked with two partners, and by two we mean one from the GSI and one from the ecosystem … the win rate goes up to 83%.” 17%, 47%, 83%. TBR has not seen a more compelling case for alliance management and ecosystem intelligence.
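For readers who want to reproduce this kind of measurement against their own pipeline, the arithmetic is simple: segment every closed opportunity by the number of partners attached, then divide wins by total opportunities in each segment. The minimal Python sketch below illustrates the calculation; the records and figures in it are invented for the example and are not Informatica's actual data.

```python
# Minimal sketch of a win-rate-by-partner-count analysis.
# The opportunity records below are invented for illustration.
from collections import defaultdict

opportunities = [
    # (opportunity_id, partners_attached, won)
    ("opp-001", 0, False),
    ("opp-002", 0, True),
    ("opp-003", 1, True),
    ("opp-004", 1, False),
    ("opp-005", 2, True),
    ("opp-006", 2, True),
]

def win_rate_by_partner_count(opps):
    """Return {partner_count: win_rate} over closed (won/lost) opportunities."""
    wins, totals = defaultdict(int), defaultdict(int)
    for _, partners, won in opps:
        totals[partners] += 1
        wins[partners] += int(won)
    return {n: wins[n] / totals[n] for n in sorted(totals)}

for n, rate in win_rate_by_partner_count(opportunities).items():
    print(f"{n} partner(s): {rate:.0%} win rate")
```

Run over two years of real opportunity data, this segmentation is exactly the kind of analysis that produces figures like 17%, 47% and 83%.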
 
According to TBR's Summer 2024 Voice of the Partner Ecosystem Report, data management ranked among the top three growth areas for services vendors over the next two years, signaling that these vendors will continue to invest in resources and guide conversations with mature enterprise buyers that are further along in their digital transformation programs and can embark on the next phase: setting up a strong foundation for generative AI. Informatica's portfolio and alliance strategy is well aligned with this emphasis on data management, helping the company become an invaluable strategic partner for GSIs and reinforcing its tagline: "Everyone is ready for AI except your data."
 
Claim your free preview of TBR’s ecosystem intelligence research: Subscribe to Insights Flight today!

Leading Federal Systems Integrators React to U.S. Department of Government Efficiency 

After a four-year bull market featuring unprecedented spending growth in federal IT, DOGE is creating near-term challenges for FSIs

The newly inaugurated Trump administration and its Department of Government Efficiency (DOGE) have generated massive upheaval across the board in federal operations, including the federal IT segment. As of March 2025, thousands of contracts described by DOGE as “non-mission critical” have been canceled, including some across the federal IT and professional services landscape.
 
The initial contract terminations have fallen disproportionately on smaller-scale programs, many of which were held by small service-disabled veteran-owned vendors, though leading federal systems integrators (FSIs) have not been spared as some large-scale programs have also been canceled. For example, federal IT leader Leidos had a $1.5 billion award with the Social Security Administration (SSA) for systems operations and hardware engineering scaled back significantly in February 2025.
 
There is no question that the federal government must enhance efficiencies and use IT to enable more cost-effective operations, as DOGE promises. TBR estimates that federal agencies are collectively spending over $100 billion every year on systems that are often not interoperable and in desperate need of modernization. The initial weeks of the Trump administration and actions of the DOGE advisory board, however, have thrown the federal sector into chaos. Federal IT vendors are scrambling to adjust their strategies and tactics to align with the market, which is quickly shifting under their feet.
 
However, the lack of clarity from DOGE as to how it is evaluating and will continue to evaluate the merit of federal contracts is making effective strategic planning nearly impossible for federal technology contractors. In this special report, we summarize how the FSIs we track are reacting to the emerging DOGE-driven challenges and how well they are positioned to not only deflect near-term disruptions but also capture future opportunities.
 

Watch Now: GenAI in Federal IT Services in 2025, featuring TBR analysts John Caucis and James Wichert

 

Accenture Federal Services (AFS)

Strengths and Opportunities

AFS' core competencies are well aligned with DOGE's focus on driving efficiencies: broad-based investments in cloud, generative AI (GenAI), innovation and showcase facilities; tight management of its alliance ecosystem; and prudent acquisitions that have been quickly and effectively integrated. Over the last couple of years, AFS has been using AI simulations that incorporate economic modeling and statistical analysis of federal budgets to show federal leaders new ways to reimagine public resource allocation.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

AFS is one of the 10 or so consulting-led FSIs that will be under particularly close scrutiny from the DOGE advisory board and the Trump administration and could consequently see several of its ongoing strategic engagements scaled back. AFS tends to leverage premium pricing as part of its consulting-led go-to-market approach, and the vendor lacks the operational scale and multibillion-dollar engagements of its larger FSI peers, limiting its ability to absorb marketwide disruptions.
 

How will AFS respond to DOGE?

AFS will emphasize the security of its offerings and the vendor’s ability to generate efficiencies for federal agencies, doubling down on its 2024 go-to-market messaging that stressed cybersecurity as core to its digital transformation strategy. AFS will also emphasize its use of cloud, data and AI to drive enhanced efficiencies, and the company believes there will be an even greater appetite for adopting commercial solutions stemming from DOGE’s recommendations. We expect AFS to leverage its corporate parent more heavily than ever to gain access to expertise, case studies and best practices, and the latest digital technologies that have been refined in commercial environments.

Booz Allen Hamilton (BAH)

Strengths and Opportunities

BAH will draft off the unprecedented success of its three-year strategic growth initiative VoLT (Velocity, Leadership and Technology), which ran from the company’s FY23 through FY25 (ending March 31, 2025). The VoLT program generated three straight years of double-digit growth, including multiyear, 20%-plus growth in the lucrative federal health IT market. VoLT also delivered record profits and profit margins and robust cash flows (potentially $900 million-plus in free cash flow in FY25, up from $192 million in FY24 and $527 million in FY23) that can be plowed back into the business. BAH is also well diversified across the civil, Department of Defense (DOD) and Intelligence Community (IC) markets and has over a century of experience in the federal market. BAH sees DOGE as a shift in priorities, not an across-the-board cost takeout.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

Like AFS, BAH will be highly exposed to precipitous cuts in federal budget outlays for consulting services, but BAH’s exposure could be in the billions of dollars, the highest across the FSI landscape. BAH also tends to demand a premium for its advisory services and could face significant price-based competition from smaller advisory-focused peers offering similar but discounted consulting services more tightly aligned with DOGE’s core objectives.
 

How will BAH respond to DOGE?

BAH will tout its experience and successes enhancing efficiencies while crafting strategies for agencies to reinvest savings from DOGE-driven budget cuts, and the eventual implementation of the most cost-effective IT-based solutions, in next-generation technologies to support future missions. The vendor will focus on its versatility while building an even more opportunity-focused mindset in its workforce. BAH will adjust its messaging to emphasize its capabilities to deliver innovation at speed within outcome-based contracting arrangements. Stay tuned for BAH’s next three-year growth strategy, expected to be unveiled in the early months of the firm’s FY26, for more details on how DOGE will impact federal IT’s most venerable firm.

CACI

Strengths and Opportunities

CACI generates 75% of revenue from the DOD and IC, which could shield the company from contract cancellations or revenue contraction if DOGE's focus is tilted toward the civilian sector. Civil-focused cuts could generate opportunities for CACI around its extensive suite of AI technologies, which enable agencies to automate more human-resource-intensive tasks and enhance financial management operations. CACI is already entrenched in DOD- and IC-based modernization efforts and can showcase its success in deploying AI technologies.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

While DOD contracts are protested less often than civil awards, that could change with future large-scale DOD contracts in a DOGE-based federal contracting environment. Roughly 60% of CACI’s order book is cost-plus work, which could continue to be the DOD’s preferred price structure for net-new technology, but it appears DOGE could drive a more fixed-price award structure (less than 30% of CACI’s contracts are structured as firm-fixed-price awards). CACI’s footprint in the civil space is limited, and the company could miss out on opportunities if DOGE turns to the civil segment for AI-enhanced automation solutions to replace furloughed government employees.
 

How will CACI respond to DOGE?

CACI immediately requested meetings with executive-level IT decision makers and contract managers at DOD and IC agencies, encouraging them to ensure optimum speed-to-decision with any DOGE-related actions. CACI does not foresee delays in contract adjudication in the near term; awards in the decision pipeline are now expected to be finalized by the fourth quarter of FFY2025 (federal fiscal year 2025).
 
From a solutions standpoint, CACI will promote its AI technologies as key to federal acquisition reform and its counter-UAS (unmanned aerial systems) technologies as critical to homeland defense. CACI will highlight successes on its BEAGLE (Border Enforcement Applications for Government Leading Edge IT) program with the Department of Homeland Security (DHS) Customs and Border Protection, and on NASA’s NCAPS (NASA Consolidated Applications and Platform Services) program in pursuit of new civil opportunities.

CGI Federal

Strengths and Opportunities

CGI Federal has been gaining strong traction with its core managed services platforms, Momentum (financial management) and Sunflower (asset management). These offerings not only have enabled the company to be a perennial margin leader in TBR’s Federal IT Services Benchmark but also are well aligned to DOGE’s cost takeout objectives with a track record to prove it. CGI Federal’s managed services offerings have also facilitated deep relationships with the Department of Justice (DOJ) and Treasury Department.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

CGI Federal generates roughly 90% of its revenue from the civilian sector and could be overexposed to DOGE-related budget reductions that disproportionately impact civilian agencies. The company has made key strategic acquisitions to bolster its advisory capabilities in recent years (e.g., Array in 4Q21, TeraThink in 1Q20, and Sunflower Systems in 3Q19), but its advisory capabilities lack the maturity, breadth and market reputation of similar services offered by consulting leaders like BAH and AFS. Conversely, CGI Federal could undercut its larger consulting-led peers by offering discounted advisory services that emphasize its core capabilities in enhancing agency fiscal and operations management.
 

How will CGI Federal respond to DOGE?

CGI Federal was one of the 10 consulting-focused FSIs mentioned in the Trump administration’s Feb. 27 memo demanding that agencies review consulting engagements and cut “non-essential consulting contracts.” The company’s reaction to DOGE has been limited compared to its peers, but we anticipate CGI Federal will tout its automation and AI capabilities, along with its flagship financial and asset management platforms, as having the exact capabilities civilian agencies will need to achieve IT-driven cost reductions.
 


General Dynamics Technologies (GDT)

Strengths and Opportunities

Demand for General Dynamics Information Technology’s (GDIT) digital accelerators (Comet 5G, Coral Software Factory, Cove AI Operations, Eclipse Defensive Cyber, Ember Digital Engineering, Everest Zero Trust, Hive Hybrid Multi-Cloud, Luna AI and Tidal Post-Quantum Cryptography) has continued to grow, generating nearly $7.5 billion in contract awards in 2024 compared to more than $2 billion in 2023.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

While the DOD is still GDT’s largest customer, GDIT’s recent expansion into the federal health market leaves it more vulnerable in the short term as the Trump administration looks to rapidly pull back on spending for agencies like the U.S. Department of Health and Human Services (HHS). Additionally, the General Services Administration (GSA) has indicated that GDIT’s consulting contracts will be closely reviewed going forward.
 

How will GDT respond to DOGE?

TBR anticipates that GDIT will invest further in its digital accelerator-centric strategy and increasingly collaborate with partners like ServiceNow. This will allow GDIT to expand its capabilities with emerging technologies and support DOGE’s effort to accelerate IT infrastructure modernization.

IBM Consulting

Strengths and Opportunities

IBM Consulting’s federal market group, IBM-Fed*, is two years removed from its strategic acquisition of Octo Consulting and won positions on a handful of large-scale programs with both civilian and defense agencies in 2024, suggesting the company is fully leveraging the added scale and portfolio depth it obtained from Octo.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

IBM-Fed, while not entirely a newcomer to the federal IT market, is the smallest of the FSIs tracked in TBR’s Federal IT Services Benchmark. While its smaller scale may limit its exposure to DOGE, the company has ongoing, strategic programs with USAID (United States Agency for International Development) ($95 million, five-year contract for cybersecurity services won in 2023) and U.S. Citizenship and Immigration Services (USCIS) ($279 million, five-year contract won in 2Q24) that are likely at risk.
 
The cancellation of these programs would severely impact IBM-Fed’s smaller revenue base relative to its FSI peers. The overarching goal of the Octo acquisition was to expand IBM-Fed’s advisory chops; DOGE could derail IBM-Fed’s still nascent efforts to gain ground in the federal IT market as a consulting-focused professional services competitor.
 

How will IBM Consulting respond to DOGE?

TBR expects IBM-Fed will double down on hybrid cloud and cybersecurity, capabilities at the heart of the company’s burgeoning federal IT growth effort, as DOGE continues upending the federal IT market. IBM-Fed can also leverage hybrid and multicloud capabilities obtained from the now-finalized acquisition of HashiCorp. IBM was one of the companies mentioned in the Trump administration’s Feb. 27 memo regarding federal consulting contracts and the federal government’s use of consultancies.
 
An IBM spokesperson responded in a Nextgov interview the same day the memo was released, saying, “Today, IBM supports the modernization and delivery of mission critical federal services and systems, from processing veteran health claims more quickly to enabling a more efficient digital taxpayer experience. We are … committed to helping agencies become more efficient and deliver better results for the American public.” IBM’s response was necessary, but still rather boilerplate in its tone and lack of detail. (*TBR refers to IBM Consulting’s federal IT operations as IBM-Fed. IBM-Fed is not an official business line title used by IBM or IBM Consulting. The business defined by TBR as IBM-Fed resides within IBM Consulting’s U.S. Public and Federal Market group.)

Leidos

Strengths and Opportunities

Leidos has the largest scale across federal IT, and the number of multiyear, multibillion-dollar engagements on its books is a testament to not only the company's ability to deliver agencywide IT transformation but also its strong contract delivery on prior-year engagements. The vendor is well diversified across the federal IT market with large civil, defense and IC practices. Leidos is also diversified geographically, generating between 10% and 15% of its business from international markets, where the company is gaining traction. Its international operations could enable the company to withstand short-term turbulence in the federal IT market, at least in terms of its profit and loss statement.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

Dynetics, Leidos’ robust and differentiating R&D subsidiary, could be negatively impacted if the Trump administration deemphasizes R&D investments or if DOGE reduces R&D budgets. Leidos also has a large federal health IT business, which could be vulnerable if DOGE targets the health IT market for cost cuts after robust spending under the previous administration. Further, Leidos could face increased protests around its large strategic awards by unsuccessful competitors trying to undercut Leidos on price.
 

How will Leidos respond to DOGE?

Leidos executives stated during the company’s 4Q24 earnings call that they believe DOGE gives the company “increased confidence in our strategy,” but the details remain scarce. Those details are likely forthcoming, as Leidos will unveil North Star 2030, the company’s growth strategies for the next five years, this summer. We anticipate Leidos will emphasize guiding clients through the elimination of regulations, efforts to streamline procurement, and the shift to more outcome-based contracting.
 
Leidos will also increasingly emphasize its ability to accelerate ongoing digital transformation programs and tout its strong record of past performance (e.g., its ongoing, $11.5 billion Defense Enclave Services engagement). Leidos will also focus on developing innovative IT-enhanced military capabilities and will likely plow robust cash flows from FY24 into R&D to accelerate the development of market-ready solutions for next-generation warfighting in areas like hypersonics, unmanned systems and ISR (intelligence, surveillance and reconnaissance) systems.

ICF International

Strengths and Opportunities

After spending over $600 million on M&A to rapidly develop its digital modernization business from 2020 through 2022 (ICF’s digitalization business’ annual revenue more than quintupled over this time frame to reach $500 million), ICF started securing more high-profile federal IT contracts worth over $100 million in 2024 while letting low-margin engagements roll off its books.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

During ICF's earnings call, executives warned that the company's 2025 total revenue could contract by up to 10% compared to 2024. While ICF's environmental work is particularly vulnerable, its digital modernization business will also face disruptions. For example, ICF was already struggling with delays tied to lucrative contracts with USAID before the Trump administration placed the agency in its crosshairs. Additionally, a concentrated push toward outcome-based contracting, along with a sharp reduction in agencies' headcounts, could lead to a pullback in demand for ICF's low-code/no-code platforms.
 

How will ICF International respond to DOGE?

ICF may shift its focus from supporting the Energy, Environment, Infrastructure and Disaster Recovery client market back to developing its digital modernization business, including building upon its fraud detection capabilities.

Maximus

Strengths and Opportunities

Since acquiring Veterans Evaluation Services for $1.4 billion in 2Q21, Maximus has been capitalizing on the surge in opportunities created by the Honoring Our Promise to Address Comprehensive Toxics (PACT) Act while supporting the Department of Veterans Affairs (VA) on a plethora of medical disability examination (MDE) contracts. Business process outsourcing (BPO) remains at the core of Maximus’ go-to-market strategy as the company parlays these initial engagements where it provides more traditional services into more lucrative opportunities with clients like the IRS.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

Maximus has been deepening its relationship with the IRS over the years and in 2023 was named a prime contractor on the $2.6 billion Enterprise Development, Operations Services (EDOS) contract. With the IRS’s 90,000-person workforce expected to be reduced by up to 50% under the Trump administration, future funding could be disrupted and upend Maximus’ efforts to branch out. Similarly, Maximus’ exposure to the VA may become a weakness as the agency’s headcount is expected to be reduced by more than 15% (approximately 80,000 workers) as it comes under greater scrutiny by the Trump administration.
 

How will Maximus respond to DOGE?

While bipartisan support for the PACT Act should persist and provide Maximus’ federal business with steady income, the company will leverage its IT systems, network infrastructure and software development capabilities in the long term to increase clients’ efficiency.

ManTech

Strengths and Opportunities

Being a private company gives ManTech a competitive advantage compared to its FSI peers as consultancies race against the quarterly earnings clock during this chaotic period. With the financial backing of The Carlyle Group, ManTech is one of the best positioned vendors in TBR’s Federal IT Services Benchmark to make an acquisition. ManTech can take its time refining its digital consulting practice, aligning it with the Trump administration’s long-term priorities and scooping up displaced consultants from players like AFS while building upon the fundamentals of consulting: people and permission.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

ManTech has historically been a margin laggard compared to its peers in TBR’s Federal IT Services Benchmark, with the vendor last disclosing an operating margin of 5.2% in 2Q22. While ManTech’s restructuring efforts could bring its operating margin more in line with the competition over time, a prolonged return of the lowest price technically acceptable (LPTA) environment would cause significant harm as ManTech lacks the scale to consistently vie for must-win engagements against Tier 1 peers.
 

How will ManTech respond to DOGE?

TBR anticipates that ManTech will lean further into opportunities with its core defense and intelligence clients, given that DOGE's crackdown on non-mission-critical spending seems to be concentrated more in the federal civilian sphere. As DOGE aims to accelerate the modernization of IT infrastructure and apply AI across agencies, ManTech will continue to identify new ways to weave the emerging technology into agency workflows.

Peraton

Strengths and Opportunities

The three-way megamerger with Perspecta and three Northrop Grumman business units expanded Peraton’s sales sevenfold from roughly $1.0 billion in 2020 to between $7.0 billion and $7.2 billion in 2021, according to TBR estimates. Peraton quickly fused these assets into one homogeneous entity and has been using its newfound scale to reliably compete against established leaders in the federal IT industry, such as GDT, across the civilian and defense markets.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

While Peraton has been further penetrating the federal civilian and health spaces by leveraging its newfound digital transformation, analytics and lifecycle management capabilities, the years of robust health IT budget growth are unlikely to continue under the Trump administration. Peraton is a prime contractor on the Social Security Administration’s IT Support Service Contract II vehicle, which DOGE recently targeted for cuts.
 

How will Peraton respond to DOGE?

Veritas Capital is no longer expected to take Peraton public as federal IT vendors’ valuations have cratered from their November 2024 highs. Under CEO Steve Schorer’s guidance, Peraton will accelerate its efforts to harness emerging technologies to better compete for enterprise IT awards in the $500 million to $2 billion range. As agencies appear to be increasingly open to adopting an “as a Service” model, Peraton will continue to position itself as a cloud services broker to win deals like the Cloud Hosting Solutions III contract and shore up its relationships with partners like SoftIron.

SAIC

Strengths and Opportunities

SAIC has scale, a growing suite of partner-enhanced cloud solutions, and a vendor-agnostic approach to migrating federal IT workloads to cloud environments that have enabled the company to improve its fiscal performance and land strategic engagements. The vendor also derives nearly three-quarters of its revenue from the DOD and IC, potentially sheltering the company from DOGE-related budget cuts in the civil space. SAIC is perhaps the foremost prime contractor with the U.S. Air Force (USAF), leading the service branch's multiyear cloud migration initiatives.
 

Weaknesses, Risks and Areas Exposed to Market Turmoil

SAIC’s marquee engagements are with civilian agencies, including the seven-year, $1.3 billion T-Cloud program won in 1Q23 with the U.S. Department of the Treasury and the $8 billion blanket purchase agreement with the FBI won in 2Q24, potentially overexposing the company to DOGE-based efforts in the civil sector. The company is also in the middle of an operational and organizational restructuring effort that, while gaining traction, could be undermined by DOGE-related market disruption.
 
Recent unsuccessful contract renewals and business-line divestitures have generated lingering organic growth headwinds that have caused SAIC’s top-line growth to lag its peers’, and may limit the company’s ability to buffer the impact of DOGE-related contract reductions or terminations on its top line. SAIC’s margin performance is improving but also trails that of larger federal IT peers; DOGE may jeopardize the company’s continued profit elevation.
 

How will SAIC respond to DOGE?

SAIC will tout its integration expertise and its ability to fuse cloud-based products, platforms and solutions from its partners and joint ventures into customer cloud environments that generate DOGE-mandated savings. SAIC will repurpose its experience delivering cloud-enhanced, mission-critical solutions for the DOD to accelerate cloud-based growth in the civilian sector in 2025 and 2026, emphasizing cloud infrastructures as the optimal way for civil and defense agencies to drive down operating costs.
 
CEO Toni Townes-Whitley appeared on Bloomberg TV in early December, saying, “We think we are well positioned” and “Looking forward to that engagement” with DOGE and the Trump administration. She was the first CEO of the FSIs tracked by TBR to respond to the potential impact of the then-incoming Trump administration, smartly getting out in front of the emerging market disruption. Townes-Whitley also noted that SAIC will look at “where are the levers in an environment where technology will be the enabler of efficiencies,” but also said she expects SAIC will have to “adapt to an environment where spending is more circumspect.”
 
 
TBR’s Federal IT Services research team will provide additional company-specific analysis of the impact of DOGE and the Trump administration’s proposed budget actions on the FSIs in our upcoming blog series, DOGE Federal IT Vendor Impact, featuring reports and profiles of the contractors mentioned in this special report.
 
Subscribe to Insights Flight to receive each blog in your inbox as soon as it publishes and to download a preview of our federal IT services vendor benchmark research.

 

Fujitsu’s Policy Twin: Revolutionizing Public Policy with Digital Twins

The “Social Digital Twin” is born

In February TBR met with a Fujitsu team led by Akihiro Inomata, Ph.D., Senior Project Director, Social Digital Twin Core Project, Fujitsu Research, Fujitsu Limited, to discuss Fujitsu’s Policy Twin, a creative application of digital twin technologies to public policy. The following reflects both that discussion and TBR’s ongoing research and analysis around Fujitsu.
 
Fujitsu's "Social Digital Twin" research and its efforts to bring digital twin concepts and technologies to bear on social issues date back a few years, although the company did not initially describe the work as "Policy Twin." As early as 2022, Fujitsu and Japan's Tsuda University started joint research in community healthcare to find better solutions to bottlenecks in healthcare delivery processes. To tackle Japan's healthcare needs, Fujitsu applied approaches it had previously used successfully in other domains, including what Inomata called "green shared mobility" on a U.K. island, EV charging stations in India, and traffic in Pittsburgh.
 
Leveraging lessons learned from those engagements and seeing the applicability of digital twins beyond the confines of the physical world, Fujitsu conceived the Policy Twin. Using Policy Twin, Fujitsu helps public sector clients develop new policies derived from their existing policy documents and, critically, according to Inomata, conduct a "Digital Rehearsal" that can verify the effectiveness and impact of policies in advance, based on real-world data. He added that the Policy Twin approach helped Fujitsu "solve social challenges … by understanding human behavior and social movements through Social Digital Twin." Policymakers at any level could, with Fujitsu's help, test variations of policies and evaluate the outcomes using Policy Twin, calibrating scenarios based on desired outcomes, all before actually implementing any changes.

Policy Twin success story

Inomata outlined a few critical components for successful implementation of digital twins in a nonphysical world; a simple illustrative sketch follows the list:

  • Fujitsu uses a logic model for running simulations, but the reference policies must come from the same business domain or framework. Fujitsu's Policy Twin approach, for example, could not use policies in public health to digitally rehearse tax policies.
  • Policies must be machine-readable, which typically is not an issue as all public policies are publicly available. The challenge, of course, comes when policies are unclear, inexact, contradictory, or understood but not written down.
  • Fujitsu’s approach must begin with understanding the underlying social issues. Similar to consulting and technology engagements, implementations succeed when directed at specific business problems. Fujitsu’s Policy Twin works best when the stakeholders, including Fujitsu, have clearly defined problems and desired outcomes.
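To make the Digital Rehearsal concept concrete, below is a minimal sketch of what such a rehearsal loop could look like: machine-readable policy variants are run through a simple logic model and ranked by projected outcome before anything is implemented in the real world. This is TBR's illustrative reading, not Fujitsu's implementation; the model, parameters and figures are all invented for the example.

```python
# Hypothetical sketch of a "Digital Rehearsal": score machine-readable policy
# variants against a simple logic model before any real-world rollout.
# The model, parameters and figures are invented; this is not Fujitsu's
# Policy Twin implementation.
from dataclasses import dataclass

@dataclass
class PolicyVariant:
    name: str
    screening_rate: float   # share of residents screened per year
    cost_per_screen: float  # public cost per screening

def rehearse(policy: PolicyVariant, population: int) -> dict:
    """Project one year of outcomes from a toy logic model:
    more screening -> earlier treatment -> avoided downstream cost."""
    screened = int(population * policy.screening_rate)
    program_cost = screened * policy.cost_per_screen
    # Assume 5% of screenings catch an issue early, saving 2,000 per catch.
    avoided_cost = screened * 0.05 * 2000.0
    return {"policy": policy.name, "screened": screened,
            "net_savings": avoided_cost - program_cost}

variants = [
    PolicyVariant("status quo", screening_rate=0.10, cost_per_screen=40.0),
    PolicyVariant("clinic outreach", screening_rate=0.25, cost_per_screen=45.0),
    PolicyVariant("mobile units", screening_rate=0.40, cost_per_screen=60.0),
]

# Rank the variants by projected net savings before implementing anything.
for result in sorted((rehearse(v, population=100_000) for v in variants),
                     key=lambda r: r["net_savings"], reverse=True):
    print(f"{result['policy']}: screened={result['screened']:,}, "
          f"net_savings={result['net_savings']:,.0f}")
```

In a real engagement, the logic model and its parameters would be derived from the client's policy documents and calibrated against real-world data, which is precisely where the machine-readability and shared-framework requirements above come into play.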

Inomata and his team described the example of Fujitsu's work with Japan's Tsuda University around preventive medicine and the cost savings attributable to the use of Policy Twin technologies. Practically, Inomata walked through a recent project in which a local government was able to reduce expenses and improve overall population health by using Policy Twin to create optimal policies around clinic visits. Fujitsu's presentation noted that the policy options developed in the preventive medicine engagement could significantly reduce medical costs and improve health outcomes only one year post-implementation. Additionally, the presentation projected that these policy options could double both the cost savings and the health outcome improvements achieved in the preventive healthcare trial.
 
Fujitsu’s presentation also set ambitious targets for 2025 and 2026, noting that the Policy Twin approach could be applied to societal problems such as “service restructuring to address workforce shortages, disaster prevention and mitigation, and enhancing supply chain resilience.” Inomata also confirmed that the company intends to put the Policy Twin approach under the Uvance Wayfinders umbrella.

Fujitsu shows how technology can be used to benefit society

Applying digital twins to the nonphysical world — and to an inherently political part of the nonphysical world — takes courage and conviction, which are not attributes TBR typically writes about when covering IT services and technology companies. As Figure 1 from Fujitsu shows, the company does not lack for ambition: “Technology to predict future and design society.” Critically, in TBR’s view, Fujitsu will scale its Policy Twin initiative within the embrace of Uvance.
 
As TBR reported last year about the company’s strategy, “Fujitsu will focus on technology consulting, rather than McKinsey-style business consulting, playing to Fujitsu’s legacy technology strengths. In TBR’s view, technology-led consulting reflects the current demand among enterprise consulting buyers to infuse every consulting engagement with technology, a trend well underway before the hype began around generative AI. Fujitsu’s leaders added that Uvance Wayfinders — essentially business and technology consultants — are able to pull together all of Fujitsu’s capabilities and offerings.”
 
Stepping back from the specifics of Policy Twin and its place within Fujitsu, the overall approach of bringing data-driven, digital twin-enabled “digital rehearsals” to public policy strikes TBR as a substantial positive societal contribution, rooted firmly in Fujitsu’s technology legacy, capabilities and innovations. TBR will be watching closely to see which societal challenges Fujitsu takes on next.
 

Figure 1: "Technology to predict future and design society" (Source: Fujitsu)


 
Definitions of terms

  • Social Digital Twin: Fujitsu’s proprietary digital twin technologies
  • Policy Twin: A core technology within Social Digital Twin
  • Digital twins: General digital twin technology

 

Trump Could be the Worst (or Best) Thing Ever for the Telecom Industry

TBR perspective

If the key takeaway from Mobile World Congress 2025 (MWC25) in Barcelona, Spain, could be boiled down to one word, that word would be: warning.
 
Though warnings for the telecom industry have been trumpeted ever since the LTE cycle underwhelmed and failed to bring significant new revenue to telcos, TBR notes that the warning has reached a new level and that the telecom industry faces arguably the most uncertain period in its 150-year history.
 
There is real concern that endemic, chronic issues that have been challenging the telecom industry for many years could be compounded by the agenda and policies of the new U.S. administration, which has created global uncertainty regarding geopolitics, the strength of nation-state alliances, trade policy, economic development and other key areas, ratcheting uncertainty up to levels not seen since the Cold War.
 
Amid this uncertainty, the modus operandi for telcos and their network vendors is to shrink back to basics and cut costs. With catalysts for sustainable, ROI-positive new revenue for telcos remaining unclear, the will to spend more on capex is simply nonexistent. Rather, telcos are becoming more fixated on cost reduction, especially through AI and M&A.
 
Using history as a guide, deep structural change and regulatory reform of the kind the telecom industry yearns for typically occur only in times of monumental crisis, such as severe macroeconomic deterioration, which tends to force governments into action and drive broader restructuring and change at organizations.
 
For example, two of the most significant, large-magnitude industrial changes across major societies in the past 150 years occurred during the Great Depression of the 1930s, which reshaped labor and industrial dynamics, and the Great Recession of 2007-2009, which reshaped the financial services industry and real estate market. Telecom will, unfortunately, need a similar economically driven crisis to bring about the changes that the industry desperately needs.
 
The telecom industry might finally be getting the fundamental, transformational change it needs, and President Donald Trump may well be the catalyzing agent for this change.
 


 

Impact and Opportunities

Europe risks reaching a point of no return and dragging its telecom industry down with it

Europe is regressing, and its way of life is threatened unless drastic change is implemented. This was a frequently discussed theme at MWC25. Although Europe has a leading educational system and talented workforce, regulation and taxation in the region have become so restrictive that they are causing chronic disinvestment, brain drain (mostly to the U.S.) and capital flight (again, mostly to the U.S.).
 
There has also been an acceleration in the decline of birth rates, which portends structural headwinds for European society. Though it is not too late to bring Europe back from the brink and reassert the continent as a major powerhouse in the global economy, the window to fix the situation is closing.
 
For example, TBR’s research suggests Europe is approximately five to seven years behind other first-world economies in key technological areas, such as 5G, cloud, AI and quantum computing, and the gap is widening as the pace of technological change accelerates.
 
How Europe responds to the impact of the Trump administration will shape the bloc’s future. Broad-based, structural reform is required to ensure the highest probability of success, with European Union (EU) member states acting more like one unified bloc that leverages the best of what each state offers.
 
As part of this, a regulatory overhaul is required, one that increases domestic investment and removes restrictive encumbrances to economic development. Additionally, M&A, especially as it pertains to nationally important entities such as telecom operators, must be allowed in order to attain a competitive level of scale and improve the health and financial well-being of these sectors.
 
There are simply too many communication service providers (CSPs) in Europe (between 100 and 200, depending on how operating companies [OpCos] and subsidiaries are counted), most of which are sub-scale, impacting their ability to innovate and invest, especially on the world stage.
 
With three CSPs now remaining in most major countries, Europe’s telcos are minnows in a sea of big fish. More years of the same will further constrict the telecom industry and make it even more unhealthy. Structural reform must happen now.
 
Relevant documents pertaining to the future of the EU, such as the Draghi and Letta reports, were frequently mentioned at MWC25, and many European influencers and decision makers are drawing ideas from those documents to promote change.
 
A potential silver lining for Europe is that Trump’s new world order may usher in a renaissance for the continent, whereby the EU bands together in solidarity and cooperation to address its weaknesses and focus on becoming competitive again on the world stage.
 
Though the deck is stacked against Europe due to the sheer number and scope of problems that the continent faces, recent events that coincided with the timing of MWC25, such as Germany’s new stance on debt and defense spending, could shake the continent awake from its multidecade slumber and be a watershed moment for structural change.

Growth remains the No. 1 problem for the telecom industry, still with no viable solution

When adjusting for inflation, the telecom industry is shrinking and has been for some time. Though mobile has been offsetting chronic weakness in legacy wireline businesses, even now mobile is exhibiting maturity from a global perspective, with industry-level revenue growth rates flatlining.
 
While fixed wireless access (FWA) is a new driver of revenue growth thanks to 5G, the size of the pie is likely to continue shrinking on an inflation-adjusted basis as CSPs fight to attract and retain subscribers, relying on pricing tactics such as discounts for bundling (aka convergence) to do so.
 
The reality is that network APIs, edge computing (including AI inferencing at the edge), network slicing and other areas frequently viewed as growth opportunities for telcos over the past few years are still not yielding substantial revenue, and the revenue that is derived from these areas is more cosmetic (i.e., revenue positive but lacking profitability and scale) than genuine (i.e., ROI positive) in nature.

AI gains traction and becomes more sophisticated

One area in which leading telcos are making progress is applying generative AI (GenAI), and now agentic AI, to boost productivity and reduce costs. TBR notes that the increased level of sophistication of AI solutions demonstrated at MWC25 shows significant progress since MWC24, a bright spot for the industry.
 
Customer care and billing remain the most popular domains for applying AI and represent low-hanging fruit, but sales, marketing and, increasingly, the network domain will all be impacted by AI as well. Many use cases and business cases for AI and GenAI in the telecom industry make logical sense and have the potential to produce significant business outcomes, especially related to cost efficiency.
 
Technological readiness for and commercialization of AI and GenAI are in progress, and much more innovation is in store. Additionally, AI will take on increased importance as telcos navigate the deteriorating geopolitical and economic environment and look to sustain their bottom lines.

5G standalone (SA) adoption remains extremely sluggish, and the gap is widening between leading operators and late adopters

The cost and complexity associated with deploying a 5G core, coupled with the lack of a clear ROI from having a 5G core, continues to stifle the pace of commercial deployment of the technology. While approximately 326 CSPs globally have deployed 5G to date, just 123 have officially signed deals to purchase and deploy a 5G core, and about half of those (62 out of the 123 operators) have not actually begun commercial deployment.
 
Additionally, of the 61 CSPs that have deployed a 5G core, most are not running what some consider a “complete 5G network,” meaning the architecture utilized for the 5G core is not cloud-native. Given that a 5G core is a prerequisite for network slicing, for forthcoming 5G Advanced features and for other key 5G capabilities, such as B2B use cases, most CSPs that have deployed 5G to date still cannot participate in these nascent areas.
 
CSPs cannot hope to capture revenue from network slicing, AI inferencing at the edge, or forthcoming use cases enabled by 5G Advanced if they do not invest in the infrastructure needed to provide these services at scale and with low latency. Most CSPs’ cautious capex strategies are hindering their future revenue growth opportunities and risk ceding the value capture from these services to hyperscalers, most likely, or to their own incumbent vendors that elect to bypass the CSP middleman.

FWA is starting to get the attention it deserves from telcos but still has significant untapped potential

TBR continues to believe that FWA represents one of the biggest opportunities for mobile network operators to monetize their 5G investments and drive scalable revenue growth. Though CSP deployment of 5G FWA continues to grow, most CSPs keep underestimating the potential of the technology, likely because FWA ties up a lot of spectrum resources for relatively low average revenue per user (ARPU). There is also an embedded industry bias toward full fiber, though TBR believes this mindset has begun to soften as FWA has proved its staying power.
 
Technological innovations currently available (e.g., multiband carrier aggregation, beamforming, extended range millimeter wave, non-line-of-sight antenna design, New Radio Unlicensed [NR-U], integrated access and backhaul [IAB], silicon advancements) are likely to bring dramatic improvements in network performance, energy efficiency, and the usability of spectrum to support services such as FWA at large scale. FWA customer-premises equipment (CPE) is also becoming more cost effective to buy and deploy.
 
TBR maintains that 5G FWA should be thought of as wireless fiber and that the notion of having to deploy fiber to every business and residential premises globally is not only economically unfeasible but also unrealistic from a pure time-to-market standpoint to meet digital equity initiatives. 5G FWA can address these challenges and is a far more realistic and economically feasible technology to help the world bridge the digital divide, bring more competition into the global broadband market and support new use cases.
 
Changes to the Broadband Equity, Access, and Deployment (BEAD) Program and other stimulus programs in the U.S. to legitimize FWA (and satellite broadband) are a great step forward in leveraging fiber alternatives without sacrificing significant performance and other benefits that fiber-to-the-premises provides. More than 90% of households and most businesses globally do not need more than 100Mbps of broadband speed, largely thanks to advancements in video compression technologies.
 
Over the next few years, TBR believes the industry’s perception of FWA will shift from being viewed as an “intermediate, good enough” solution pending fiber rollout to a legitimate fiber alternative, and that ultimately up to 50% of global premises (residential dwellings and businesses) could be addressed with FWA, with the balance being served by Fiber-to-the-X (FTTx) and satellites.

Satellite industry enters the telecom hen house

Though there are significant benefits for mobile network operators (MNOs) and their customers in partnering with satellite connectivity providers, there is also a growing undercurrent of concern. Telco leaders are starting to realize that satellite operators, led by Starlink, which had a strategically located booth in one of the prominent courtyards of the MWC venue, pose a legitimate competitive threat.
 
Non-terrestrial networks (NTNs) are advancing quickly, with a broader range of smartphones now off-the-shelf compatible with satellite networks, just as they are with terrestrial networks. From a services perspective, satellite connectivity has advanced from basic SOS messaging services to full text support, with voice and data services on the road map for later this decade, all of which can be utilized by the latest popular smartphones.
 
Satellite broadband is even starting to compete against terrestrial broadband, especially xDSL and FWA. This inflection point has been made possible by significant increases in downlink speeds and by the steady reduction in satellite CPE costs, which historically made satellite connectivity too expensive to be an economically feasible alternative to terrestrial broadband options.
 
Ultimately, according to TBR’s research, it is conceivable that up to 100 million premises globally could be supported by satellite broadband providers, with Starlink likely to remain the frontrunner in the ecosystem.

Defense industry poised to become a major growth area for network vendors

The Russia-Ukraine and Israel-Hamas wars have demonstrated how warfighting has evolved with technology, prompting a reassessment of military strategy, assets and the production of military-related equipment, especially by the U.S. Department of Defense and NATO members in Europe. Additionally, with the U.S. now retreating from Ukraine, Europe is forced to revitalize its own military industrial complex. All of this drives more spending on military and defense, with mobile technology set to be a prominent feature of new systems and solutions.
 
5G, 6G, private cellular networks, edge computing and AI will all be leveraged in some way in modernized military solutions. Of the more than $13 trillion estimated to be spent on defense globally through the rest of this decade, TBR expects many billions of dollars to flow to the telecom industry, with Nokia, Ericsson and a broad range of other vendors, as well as operators, providing the bulk of this equipment and services.

Conclusion

Telecom operators remain unhealthy, and the prognosis is deteriorating. One of the first things the industry needs is a comprehensive reassessment of the regulatory environment to give telcos some breathing room and the flexibility to accelerate their digital transformation journeys. A catalyzing event, which usually stems from crisis, is needed to force the telecom ecosystem to change and to push regulators to create a friendlier economic and competitive environment.
 
TBR maintains that the telecom industry will look very different by the end of this decade and that significant consolidation will need to take place to create more financially healthy and sustainable telcos. It is possible that Trump and his unconventional policies will be the catalyzing agent to usher in this new phase of telecom industry evolution.

GenAI-related Workload Opportunities Compel NTT DATA to Deepen Ecosystem Relationships

NTT DATA turns to partners to unlock new revenue opportunities

According to TBR’s 4Q24 Cloud Ecosystem Report, “Despite the recent slowdown in overall IT services revenue growth, global SIs (GSIs) remain committed to building out their hyperscaler practices as they try to maintain ecosystem stickiness and ensure they are ready when demand rebounds. GenAI [generative AI] continues to influence both services vendors’ and their hyperscaler partners’ go-to-market strategies with new implications centered on security and data privacy.
 
This is a natural market evolution as, following the hype and opportunities to experiment with large language model (LLM)-based tools in the past 24 months, enterprises are turning to proprietary data to scale GenAI deployments. This is resulting in the advent of small language models (SLMs), which are the new battleground for partners to prove value. Absent accounting for implications around data and AI security, these relationships will likely face challenges, especially as slower macroeconomic conditions have placed greater emphasis on vendors to ensure service quality. And delivering quality services begins with access to enterprise data.”
 
A year after completing the integration of various parts of NTT operations and the formation of NTT DATA Group Corp., NTT DATA continues to calibrate its portfolio and skills to protect its No. 2 position in terms of global revenue size among peers within TBR’s IT Services Benchmark. As TBR discussed in the 2Q24 NTT DATA report, the company’s alliance relationships have played an increasing role in these efforts. “Customer demand for cloud migrations remains strong, which presents opportunities for trusted service providers. NTT DATA is building up its alliance network and its internal capabilities around cloud platforms such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud to address demand. By offering complementary services that seamlessly support client transitions to these hyperscaler platforms, NTT DATA is positioning itself to become a critical partner in cloud adoption journeys.”
 

NTT DATA understands the value of ecosystems

In November 2024 NTT DATA made two strategic announcements highlighting its efforts to strengthen trust and expand addressable market opportunities through its relationship with Google Cloud. First, the two deepened their relationship, forming the NTT DATA Google Cloud Business Unit centered on coinnovation and development of data and AI-ready industry solutions. Second, NTT DATA announced the acquisition — which has since closed — of India-headquartered Niveus Solutions.
 
The purchase adds over 1,000 cloud engineers with skills in Google Cloud Platform (GCP), including GCP-native modernization, data engineering and AI. Following the purchase of Niveus Solutions, NTT DATA’s GCP-certified headcount now sits at approximately 3,600 professionals. According to TBR’s estimates in the 4Q24 Cloud Ecosystem Report, this is higher than the GCP-skilled headcount at Atos, Capgemini, DXC Technology, IBM, Infosys and Wipro. We estimate NTT DATA’s GCP-related revenue to be north of $400 million, or about 12% of its total cloud revenue, with the bulk of the remaining revenue share generated by the company’s relationships with Microsoft and SAP.
 

Why Google?

As TBR wrote in its 4Q24 Cloud Ecosystem Report, “In many ways Google Cloud is staying the course with its partner strategy, focusing on scaling existing programs and incentives to help partners close larger deals more quickly. As part of its vision to foster the most ‘open AI ecosystem,’ Google Cloud has recently put a lot of focus on partner breadth and onboarding new partners that can help Google Cloud appeal to new audiences.
 
One example is with developers, and while there are over 1 million developers using GenAI tools, such as Vertex AI on GCP, Google Cloud aims to follow in AWS’ footsteps, boosting developer mindshare and delivering more seamless experiences. As such, Google Cloud has been delivering integrations with platforms like GitHub, which in 4Q24 announced support for Google’s latest Gemini models.
 
The other big priority for Google Cloud is around Marketplace. Though we often put AWS in a category of its own when it comes to marketplaces, with essentially all AWS’ top 1,000 customers having at least one active subscription, it is clear these platforms are where customers are buying their cloud software. As such, Google Cloud has been scaling the Marketplace with Private Offers, allowing resellers to deliver ISV solutions on GCP, and Google Cloud continues to cite momentum from partners co-selling Marketplace solutions alongside GCP. That said, it is clear Google Cloud wants its partners to continue to move away from traditional resell, toward value-added services, and Google Cloud maintains its commitment to driving 100% partner attach on all services deals.”

Pivoting from a two-dimensional foundation to a multiparty ecosystem play will test NTT DATA’s ability to manage trust

NTT DATA understands the need to pivot toward outcome-based services sales. Although it is easier said than done, the company has an opportunity to deliver value to clients provided it relies more on its alliance partners and continues to stick to its core expertise. Additionally, it will be essential for NTT DATA to invest in a partner framework that helps it address the following questions, which TBR outlined in the special report, Top Predictions for Ecosystems & Alliances:

  • Can your alliance partners tell your clients what makes you special?
  • Do your alliance partners’ sales teams know what value you bring to the ecosystem?
  • Are you sure you placed your strategic ecosystem bets on alliance partners that are well positioned for the next growth wave?
  • Are your competitors gaining ground with your common alliance partners through sales programs, go-to-market motions and training that you are not doing?

 



 
According to our Ecosystem Intelligence research, no single vendor has mastered the answers to all of these questions. NTT DATA is not new to managing alliance partnerships, as evidenced by its long-standing relationships with Microsoft and SAP. For example, the company touts $2.5-plus billion worth of SAP services business backed by more than 22,000 SAP-trained professionals. As outlined in TBR’s October 2024 SAP, Oracle and Workday Ecosystem Report, the size of its SAP practice places NTT DATA in a close race with EY and Tata Consultancy Services and above Capgemini, Cognizant, DXC Technology, Infosys and PwC.
 
Moving forward, NTT DATA’s success will also depend on the company’s ability to use a multiparty ecosystem lens and bring parties together. We believe an element of NTT DATA’s success with SAP is its ability to take a three-way approach with Microsoft and SAP to drive more targeted conversations. NTT DATA’s opportunity around Google Cloud will require a similar blueprint. Given Google Cloud’s push in data, AI and security, NTT DATA needs to think strategically about how to bring to the table ISVs that can help fill that gap.

Snowflake’s AI Evolution: Scaling Innovation with a Data-first Strategy 

TBR attended two virtual Snowflake events in January: AI + Data Predictions 2025: How Operationalizing AI Will Drive Technical Advances and Leadership Challenges on Jan. 16 and Snowflake GenAI Day on Jan. 22. During the events we heard from Snowflake leaders, including Chase Ginther, principal architect, AI/ML, and Caleb Baechtold, principal AI architect, Applied Field Engineering. These discussions, coupled with keynote sessions, breakout sessions and TBR’s ongoing analysis of Snowflake’s strategy, underscore the company’s ongoing transformation from a data warehouse innovator to a leader in integrated data and AI platforms.

Snowflake in transition: Scaling AI through a data-first approach

Snowflake’s AI strategy is centered on a data-first approach that leverages the company’s data management strengths to drive development of advanced AI capabilities. Three key aspects of Snowflake’s strategy help it stand out in a highly competitive data and AI platform market.
 
First, the company is leveraging its origins as a data warehouse provider to offer a fully integrated data and AI platform. By prioritizing the management of structured and unstructured data, Snowflake enables AI-driven analytics, machine learning (ML) workflows and advanced processing within a unified ecosystem. Second, Snowflake is using advanced technologies to scale its AI capabilities, including GPUs to accelerate ML workloads; Snowpark Container Services (SPCS) for efficient model deployment; and Snowpark, which enables seamless AI development using Python, Java and Scala. Third, Snowflake is enhancing its ecosystem through open-source AI collaborations via Cortex, integrating models from Meta, Hugging Face and Mistral to power natural language processing, predictive analytics and automation — all within a secure, data-centric framework. By prioritizing data as a foundation for AI, Snowflake enables efficient scaling while ensuring security, performance and governance within its ecosystem.
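The sketch below illustrates how these pieces can fit together in practice: a Snowpark for Python session reads a table and applies a Cortex-hosted model to each row through the SNOWFLAKE.CORTEX.COMPLETE function, so the data never leaves the platform. This is a minimal sketch, not code from Snowflake’s events; the connection parameters, the SUPPORT_TICKETS table and its columns are hypothetical, and the model name is just one of the options Cortex offers.

```python
# Minimal, hypothetical sketch: Snowpark for Python plus a Cortex LLM function.
# Connection parameters, table and column names are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import call_function, col, lit

# Open a session against a Snowflake account (placeholder credentials).
session = Session.builder.configs({
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# Ask a Cortex-hosted open-source model to summarize each support ticket.
# The LLM call runs inside Snowflake, so the rows are never exported.
tickets = session.table("SUPPORT_TICKETS")
summaries = tickets.select(
    col("TICKET_ID"),
    call_function(
        "SNOWFLAKE.CORTEX.COMPLETE",
        lit("mistral-large"),   # one of the models Cortex makes available
        col("TICKET_BODY"),
    ).alias("SUMMARY"),
)
summaries.show()
```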
 
During the Snowflake events, TBR observed that customer demand for scalable, governed and actionable data remains a key driver of Snowflake’s evolution. The company’s ability to manage and harmonize disparate data types was repeatedly emphasized. For example, Ginther highlighted Nissan’s success in using Snowflake to analyze millions of customer profiles across multiple markets. This initiative showcased Snowflake’s ability to address complex, large-scale data challenges while delivering actionable insights for decision making.
 



 

Generative AI: Unleashing untapped potential beyond chatbots

Snowflake’s scalability is not just about performance; it also plays a critical role in empowering AI adoption through favorable cost-to-value alignment. The platform’s pay-as-you-go pricing model adjusts to the dynamic demands of AI applications, particularly for resource-heavy use cases such as generative AI (GenAI) and predictive modeling. This flexible model enables organizations to efficiently grow their AI workloads and lowers the barrier to implementing advanced AI solutions.
 
During Snowflake GenAI Day, the company showcased GenAI’s vast potential beyond traditional applications like chatbots and content generation. For example, Snowflake partner Sigma Computing demonstrated how Snowflake transformed raw Salesforce data into actionable insights. The AI-driven analytics not only improved decision making for Sigma’s business leaders but also reduced the time spent on manual data preparation, unlocking faster, more informed outcomes.
 
However, as enterprises scale their GenAI applications, they face challenges related to data bias, IP risks and ethical AI. To build trust with customers, vendors must design their AI solutions with governance, fairness and transparency in mind to ensure responsible AI deployment. Customers need to implement strong data governance practices that carefully monitor data to avoid perpetuating inaccurate or discriminatory outcomes.

Golden datasets and the future of AI development

One emerging trend highlighted during Snowflake GenAI Day was golden datasets — curated collections of structured and unstructured data optimized for GenAI use cases. These datasets, when enriched by Snowflake’s platform, empower organizations to drive more accurate and impactful AI outcomes. Moreover, Snowflake’s focus on text-to-language prompts, which simplify data interactions by reducing reliance on complex SQL queries, demonstrates its commitment to improving user experiences. Using Snowflake’s Universal Search offering, customers can identify datasets in their accounts based on data quality and usage within their workflows to create optimized — or golden — datasets. Universal Search ensures that users — regardless of their level of technical expertise — can effectively leverage Snowflake’s capabilities for AI development, analytics and decision making.
 
However, building and maintaining golden datasets pose significant challenges. For many organizations, curating and cleaning data at scale require advanced governance frameworks and skilled teams to ensure data quality, relevance and accuracy. Organizations that lack these capabilities may struggle to derive meaningful insights from their AI models. Additionally, errors or inconsistencies in golden datasets can lead to biased outcomes, undermining trust in AI-driven decision making.
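To ground the idea, the sketch below shows, in Snowpark for Python, the kind of basic curation step such a pipeline implies: filter out incomplete records, deduplicate and persist the result as a governed table. The table and column names are assumptions for illustration, and a production golden dataset would layer on far more validation, lineage and governance than this.

```python
# Minimal, hypothetical golden-dataset curation step in Snowpark for Python.
# RAW_CUSTOMER_FEEDBACK, GOLDEN_CUSTOMER_FEEDBACK and all columns are assumed.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

def build_golden_dataset(session: Session) -> None:
    raw = session.table("RAW_CUSTOMER_FEEDBACK")
    golden = (
        raw
        .filter(col("FEEDBACK_TEXT").is_not_null())        # drop empty records
        .filter(col("LANGUAGE") == "en")                   # keep one language
        .drop_duplicates("CUSTOMER_ID", "FEEDBACK_TEXT")   # remove repeats
    )
    # Overwrite so downstream GenAI workloads always read one curated table.
    golden.write.save_as_table("GOLDEN_CUSTOMER_FEEDBACK", mode="overwrite")
```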

Simplifying user interactions

Another topic highlighted during the GenAI Day event was Snowflake’s focus on improving user accessibility. By incorporating text-to-language prompts into its data and AI platform, Snowflake has reduced the technical barrier for users who may lack expertise in SQL or other programming languages. This feature ensures that nontechnical users can interact with the platform effectively, making data-driven insights accessible across diverse teams.

Predictions for 2025: From experimentation to enterprise-grade AI

During the AI + Data Predictions 2025 event, Snowflake forecast a significant shift in AI adoption as enterprises transition from experimental pilots to fully realized, enterprise-grade AI solutions throughout 2025. However, TBR’s 2H24 Cloud Infrastructure & Platforms Customer Research survey results suggest that the adoption of GenAI solutions may progress more slowly than expected in 2025, primarily due to cost constraints and a lack of technical expertise with the emerging technology. Despite these challenges, Snowflake anticipates AI adoption will be driven by AI observability, as businesses increasingly need to prioritize ROI measurement, deployment reliability and regulatory compliance.
 
During the presentation, speakers discussed how Snowflake’s key AI advancements, such as embedding models that enhance the performance of large language models, including GPT models, are enabling task-specific customizations, improving multilingual capabilities and optimizing overall model performance. Snowflake’s platform supports these efforts with containerized runtimes such as Snowflake Notebooks and Snowpark Container Services (SPCS), which provide scalable and efficient tools for AI development. Baechtold emphasized the critical role of robust datasets in supporting both GenAI and traditional ML models. Snowflake’s platform addresses key challenges, such as data security, governance and accessibility, ensuring enterprises can confidently deploy AI solutions across industries ranging from healthcare to manufacturing.
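As a hedged illustration of how such an embedding step might look on Snowflake’s platform, the sketch below calls the Cortex EMBED_TEXT_768 function over a document table to produce vectors that a retrieval workflow could use to ground an LLM. The table names and the embedding model choice are assumptions, not details from the presentation.

```python
# Minimal, hypothetical embedding step: EMBED_TEXT_768 turns each document
# into a 768-dimension vector for retrieval. Table names and model assumed.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import call_function, col, lit

def embed_documents(session: Session) -> None:
    docs = session.table("PRODUCT_DOCS")
    embedded = docs.select(
        col("DOC_ID"),
        call_function(
            "SNOWFLAKE.CORTEX.EMBED_TEXT_768",
            lit("snowflake-arctic-embed-m"),  # an embedding model Cortex hosts
            col("DOC_TEXT"),
        ).alias("EMBEDDING"),
    )
    embedded.write.save_as_table("PRODUCT_DOC_EMBEDDINGS", mode="overwrite")
```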
 


 

Security, governance and containerization: Building trust in AI

Throughout both events, security and governance emerged as central themes in Snowflake’s AI strategy. As enterprises increasingly integrate multiple platforms and environments, the risk of data breaches and compliance violations grows. Snowflake’s approach to governance includes developing best practices around securing cloud configurations, authenticating model access, and monitoring runtime environments to ensure its AI solutions are scalable, secure and compliant with evolving regulations. For example, OM1’s use of Snowflake demonstrated how containerized systems streamline governance processes and enhance scalability and efficiency. By leveraging these systems, Snowflake ensures that clients can deploy AI solutions with confidence, knowing their data and models are protected.
 
Despite Snowflake’s efforts, managing security and governance at scale is an ongoing challenge. Customers operating in highly regulated sectors, such as finance or healthcare, may require additional customizations to ensure they comply with stringent regulatory requirements. Additionally, scaling governance frameworks to accommodate rapidly evolving AI use cases could stretch Snowflake’s platform and resources. Providing consistent, enterprise-grade support while maintaining innovation will be essential for Snowflake to navigate these challenges.

Snowflake’s road map: Scaling innovation while meeting enterprise needs

Looking ahead, Snowflake will continue to focus on expanding its integrated data and AI platform while maintaining its core pillars of scalability, flexibility and observability. The company’s ability to bridge the gap between structured and unstructured data — combined with its investments in user experience, embedding models and AI observability — will place it among the leaders in the next wave of AI innovation.
 
However, Snowflake’s success will depend on its ability to balance innovation with governance, ensuring enterprises can address their unique data challenges while meeting compliance requirements. By focusing on empowering users, streamlining AI deployments and scaling advanced technologies, Snowflake will be well positioned to meet the demands of a rapidly evolving market.

Conclusion

Snowflake’s evolution reflects its commitment to advancing AI through a data-first approach. By addressing the complexities of modern data ecosystems and aligning its platform with emerging AI trends, Snowflake has established itself as a key player in the AI landscape. This strategic focus not only drives digital transformation but also shapes the competitive dynamics of the market, impacting partners, competitors and technology providers. The company has expanded its GenAI capabilities by integrating open-source models such as those from Hugging Face and Meta, enabling customers to deploy and customize AI models more easily.
 
Snowflake also emphasizes AI observability, providing businesses with tools to track performance, optimize outcomes and ensure ROI, while mitigating model drift. Its governance framework ensures regulatory compliance, safeguarding AI data and models across industries. Snowflake’s efforts to simplify the user experience and make AI more accessible to nontechnical users align with new industry standards. By lowering technical barriers, Snowflake is enabling a broader range of businesses to leverage AI and encouraging the market to innovate toward more user-friendly solutions. However, Snowflake faces challenges in integrating diverse data environments and maintaining data quality at scale. The need for significant infrastructure investments, such as GPUs, may also become a hurdle as AI adoption expands.
 
As GenAI and AI observability evolve, Snowflake’s integrated platform is positioned to support partners and stakeholders in navigating the next phase of industry transformation. By offering scalable and secure AI workflows, Snowflake is helping them tackle the challenges of adopting AI at scale across industries. TBR will continue monitoring Snowflake’s progress and its influence on AI-driven business strategies across sectors.