U.S. Mobile Operator Benchmark

TBR Spotlight Reports represent an excerpt of TBR’s full subscription research. Full reports and the complete data sets that underpin benchmarks, market forecasts and ecosystem reports are available as part of TBR’s subscription service. Click here to receive all new Spotlight Reports in your inbox.

Most operators will sustain wireless service revenue and connection growth in 2025 but face headwinds from macroeconomic challenges and Trump administration immigration policies

Most benchmarked operators sustained service revenue growth in 4Q24, driven by connection growth and higher ARPA

Total wireless revenue from benchmarked U.S. operators increased 4.5% year-to-year to $78.9 billion in 4Q24, mainly due to continued postpaid phone subscriber growth and higher average revenue per account (ARPA). Although the market is maturing, operators are maintaining postpaid phone net additions due to factors including population growth and more businesses purchasing mobile devices for employees. Higher ARPA is being driven by operators increasing connections per account (including via growing fixed wireless access [FWA] adoption), uptake of premium unlimited data plans and rate increases implemented over the past year.
 
Though most U.S. operators expect to continue to grow wireless service revenue and connections in 2025, they will face headwinds from factors including macroeconomic pressures (including layoffs within the private and public sectors and uncertainty around tariff impacts) and immigration policies under the Trump administration (including mass deportations).

U.S. operators increase focus on cross-selling mobile and broadband services

U.S. operators are focused on advancing their convergence strategies by offering plans bundling mobile and broadband services. The bundles create a stickier ecosystem to reduce churn long-term via the convenience of enrolling in broadband and mobility services from the same provider as well as by providing discounted pricing compared to purchasing those services separately.
 
Operators including AT&T, Charter, Comcast, T-Mobile and Verizon are growing their ability to offer these bundles via the expanding service availability of their broadband services (including wireline and FWA offerings). Operators are also targeting acquisitions to strengthen their convergence strategies, such as Verizon’s pending purchase of Frontier Communications and T-Mobile’s proposed joint ventures to acquire Metronet and Lumos. Cable operators also have significant opportunity to increase sales of converged services as a relatively low portion of cable broadband customers are enrolled in their service provider’s mobile offering.

AI is providing cost savings and revenue generation opportunities for U.S. operators

U.S. operators are focused on more deeply implementing AI technologies in areas including optimizing customer service and sales & marketing functions as well as enhancing network operations. For instance, deeper AI implementation will help AT&T reach its goal of generating $3 billion in run-rate cost savings between 2025 and 2027, while leveraging AI technologies will help T-Mobile meet its target of reducing the number of inbound customer care calls by 75%.
 
AI will also help operators optimize energy usage, especially as it pertains to network operations. Examples include using AI for optimal, dynamic traffic routing and to determine when to turn on and turn off radios to optimize energy usage. AI, especially providing network and real estate resources to support AI inferencing workloads, will create significant revenue opportunities for operators.
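As a purely illustrative sketch of the radio energy-saving idea described above (the cell names, load figures and threshold below are hypothetical, not drawn from any operator's system), a minimal load-based policy for powering down capacity radios during low-traffic periods might look like:

```python
# Hypothetical illustration: a threshold-based policy for powering down
# capacity-layer radios during low-traffic periods. Real operator systems
# use far richer AI models (traffic forecasts, QoS constraints, coverage
# guarantees, etc.); this only sketches the decision step.

def radios_to_sleep(cell_load, sleep_threshold=0.15):
    """Return the cells whose predicted load is low enough to sleep.

    cell_load: dict mapping cell ID -> predicted utilization (0.0-1.0)
    sleep_threshold: utilization below which a capacity radio may be
                     powered down (hypothetical value)
    """
    return sorted(cid for cid, load in cell_load.items() if load < sleep_threshold)

# Hypothetical overnight load predictions for a small cluster of cells
predicted_load = {"cell_a": 0.62, "cell_b": 0.08, "cell_c": 0.11, "cell_d": 0.40}
print(radios_to_sleep(predicted_load))  # cells b and c fall below the threshold
```

In practice the threshold itself would be replaced by a learned model that weighs predicted traffic against wake-up latency and coverage obligations.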
 
For instance, Verizon views telco AI delivery as having a $40 billion total addressable market, and the company has already secured a sales funnel of over $1 billion in business by leveraging its existing infrastructure and resources.

Operators are focused on cost-cutting initiatives, including streamlining headcount and more deeply implementing AI technologies, to improve margins

The impacts of inflation and challenging macroeconomic conditions, such as lower consumer discretionary spending, higher network operations and transportation expenses, and increased labor-related costs, are limiting profitability for U.S. operators. These challenges are leading operators to implement cost-cutting and restructuring initiatives to improve profitability, such as AT&T’s goal of generating $3 billion in savings from 2025 to the end of 2027 through its latest cost-cutting program.
 
Operators are streamlining headcount as part of their cost-cutting initiatives. For instance, about 4,800 employees are expected to leave Verizon by the end of March 2025 as part of the company’s latest voluntary separation program.
 
To increase cost savings and operational efficiencies, operators are more deeply implementing AI technologies in areas including customer service, field technician support and fleet vehicle fuel consumption.
 
T-Mobile is improving profitability, evidenced by its EBITDA margin growing by 220 basis points year-to-year to 35.4% in 4Q24, reflecting the company's higher revenue and lower network costs aided by greater merger-related synergies. T-Mobile's 2025 guidance for core adjusted EBITDA* is between $33.1 billion and $33.6 billion, compared to $31.8 billion in 2024. Service revenue growth as well as cost-cutting initiatives and merger-related synergies will all contribute to higher core adjusted EBITDA.
 
*Core adjusted EBITDA reflects T-Mobile’s adjusted EBITDA less device lease revenues.
 

Graph: Wireless Revenue, EBITDA Margin and Year-to-year Growth for 4Q24 (Source: TBR)



 

T-Mobile continued to lead the U.S. in postpaid phone and broadband net additions in 4Q24 and recently launched new FWA pricing plans

Operators are attracting FWA customers, mainly because FWA offerings have lower price points compared to other broadband services and are available to customers in markets with limited other high-speed broadband options, such as within rural markets. Though consumers account for the bulk of FWA connections, FWA is also gaining momentum among businesses seeking to reduce connectivity expenses and/or companies needing to quickly launch new branch locations, as the technology can be installed faster than fixed broadband.
 
In 4Q24 T-Mobile continued to lead the U.S. in broadband subscriber growth, driven by its FWA services. The company continued to gain market share against cable companies including Comcast and Charter, which reported steeper broadband customer losses in 4Q24 both year-to-year and sequentially. T-Mobile also reported its highest-ever year-to-year broadband ARPU growth in 4Q24, aided by the company revamping its 5G Home Internet and Small Business Internet plans in December.
 

Graph: Total FWA Net Additions for 4Q23-4Q24 (Source: TBR)



 


 

Wireless capex moderated for most U.S. CSPs in 2024 as they are in the later stages of 5G rollouts

Verizon’s consolidated capex will increase to a guidance range of $17.5 billion to $18.5 billion in 2025, compared to $17.1 billion in 2024 (higher consolidated capex is mainly due to increased wireline capex to support Verizon’s accelerated Fios build). TBR estimates Verizon’s wireless capex in 2025 will be relatively consistent compared to 2024 as the company will focus on the continued expansion of C-Band 5G services into suburban and rural markets.
 
AT&T’s 2025 guidance for capital investment, which includes capex and cash paid for vendor financing, is in the $22 billion range, consistent with $22.1 billion in capital investment in 2024. Capital investment in 2025 will entail materially lower vendor financing payments compared to 2024, while capex is expected to increase year-to-year in 2025. TBR estimates AT&T’s wireless capex will be about $10.6 billion in 2025, which will help to meet AT&T’s goals, including providing midband 5G coverage to over 300 million POPs by the end of 2026 and completing the majority of its transition to open-RAN-compliant technologies by 2027.
 
T-Mobile’s capex guidance for 2025 is around $9.5 billion, compared to $8.8 billion in capex spent in 2024, with spending focused on continued 5G network deployments as well as investments in IT platforms to enhance efficiency and customer experience.

Enterprise Edge Compute Market Landscape



Post Updated: Aug. 6, 2025

Hyperscalers focus investments on AI workloads, which will likely land in the cloud anyway and thus in some ways foster the edge ecosystem

With the edge AI opportunity stemming from the central cloud, the hyperscalers trim edge portfolios and focus investments elsewhere, creating new openings for edge pure plays

Over the past several months, many of the hyperscalers have reevaluated their edge portfolios to focus more on their central cloud services, which is where most of the AI opportunity will land. For example, AWS discontinued edge hardware in the Snow family and, later this year, will sunset features in its IoT Device Management service. In our view, these developments speak to customers’ preference for consuming edge computing as an extension of the central cloud; this includes customers migrating workloads to the cloud and building AI models that can be brought to the edge for a particular use case. This proposition will continue to challenge the “edge-native” players, including many hardware vendors and software pure plays that feed on demand from customers crafting their edge strategies from the ground up. But at the same time, it unlocks more opportunities within the ecosystem. For example, the hyperscalers’ pivot away from first-party IoT services welcomes more openings for IoT specialists that can attach themselves to an AI use case, while allowing the hyperscalers to strategically focus on AI and build the capabilities and infrastructure to support customers’ AI workloads regardless of where they are deployed. In some cases, we could see the hyperscalers investing in AI workloads to actually create an edge ecosystem.

AI use cases at the edge already exist

AI has been a foundational technology in enterprise edge computing for years and continues to support growth of the enterprise edge market, which TBR expects will expand from $58 billion in 2024 to $144 billion in 2029. TBR’s enterprise edge spending forecast has not increased significantly from our previous guidance in 2024, which already incorporated our long-standing assumption that AI will propel market growth. TBR expects that the industrywide focus on generative AI (GenAI) will likely lead to increased adoption of edge computing but that the bulk of enterprises embarking on these projects in 2025 will focus on piloting and adoption of cloud and centralized AI resources.

Compared to other deployment methods, edge expansion still lags

According to TBR’s 2Q24 Infrastructure Strategy Customer Research, 34% of respondents expect to expand IT resources at edge sites and branch locations over the next two years. This is noticeably lower than the 55% who plan to expand IT resources within centralized data centers, while the central cloud and managed hosting are also gaining traction. The possibility of large capital outlays and an unclear path to ROI remain the biggest hurdles to edge adoption, with some customers exploring alternatives that offer a clearer ROI road map.


GenAI will not have a significant impact on enterprise edge market growth, at least in the near term, as customers prioritize their investments in the IT core and cloud

Forecast assumptions

TBR continues to revise its enterprise edge forecast to account for changes in the traditional IT and cloud markets, including the advent of generative AI (GenAI). Although the enterprise edge market benefited from the hype surrounding AI in 2024, many pilot projects may not enter production and more concrete use cases around edge AI need to be developed.

The enterprise edge market is estimated to grow at a 19.9% CAGR from 2024 to 2029, surpassing $144 billion by 2029. Professional and managed services will remain the fastest-growing segment, followed by software, at estimated CAGRs of 22.4% and 19.3%, respectively.
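The forecast figures above are internally consistent; as a quick arithmetic check, the stated growth rate can be reproduced from the 2024 and 2029 market sizes:

```python
# Sanity check: reproduce the ~19.9% CAGR from the $58B (2024) and
# $144B (2029) enterprise edge market sizes cited above.

def cagr(start_value, end_value, years):
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(58, 144, 5)
print(f"{growth:.1%}")  # ~19.9%, matching the forecast
```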

Graph: Enterprise Edge Spending Forecast by Segment for 2024-2029 (Source: TBR)



Although there is general interest in edge across industries, demand varies by vertical, with surveillance and quality assurance use cases particularly strong in healthcare and consumer goods

Overall, the edge use cases that garnered the most interest from respondents were security & surveillance, quality assurance, and network (e.g., vRAN). Despite the AI hype, real-time analytics was the fourth most popular use case, although other use cases may embed these technologies.

Interest in deploying a certain use case is often industry-specific, such as above-average interest in security & surveillance among respondents in the consumer goods vertical.

Enterprise respondents had an above-average interest in remote asset management.

Cloud vendors look to partners to bridge the gap between IT and OT buyers and drive traction for edge solutions

TBR’s newly launched Voice of the Partner Ecosystem Report includes survey results from alliance partnership decision makers across three groups of vendors: OEMs, cloud providers and services providers. For cloud respondents, edge computing ranked No. 4 out of 26 technologies as the area that will drive the most partner-led growth in the next two years. This stems from the big gap that still exists between IT and OT buyers, and an overall optimism about cloud vendors’ ability to use partners to drive adoption.

OT stakeholders understand the edge but are not necessarily thinking about IT solutions through the lens of their own processes. Because of this, edge hardware vendors and cloud providers benefit from partnering with edge-native software vendors that have permission from OT buyers and can help edge incumbents sell solutions, including attached software and services. The dynamics between IT and OT departments reinforce the importance of the vendor ecosystem in the enterprise edge market.

Dell infuses GenAI into its NativeEdge edge operations platform, enabling customers’ edge environments to operate like their centralized data centers

TBR Assessment: Dell is working to build an ecosystem surrounding its hardware, primarily through expanding its NativeEdge operations software, enrolling ISV partners to create validated solution designs for specific industry use cases, and designing new services to facilitate edge infrastructure rollouts. Dell’s edge approach is increasingly intertwined with its AI strategy and its close partnership with NVIDIA, as Dell seeks to capitalize on growth opportunities through AI use cases that require video analytics, speech analytics and inferencing at edge locations. This approach expands Dell’s addressable market, as its previous edge play was primarily focused on computer vision solutions, and NVIDIA’s AI Enterprise software portfolio will open the door to a greater variety of use cases. Dell’s edge hardware portfolio includes not only ruggedized servers and gateways but also storage, backup and hybrid cloud solutions.

Key Strategies

  • Build validated designs for verticals with high growth potential, including telecom, retail and manufacturing.
  • Leverage Dell NativeEdge, an edge operations software platform, to add value on top of the company’s diverse infrastructure portfolio.
  • Simplify edge management using AI, and add edge management features that support the needs of AI-based workloads.
  • Partner with leading ISVs to provide an enhanced edge orchestration experience.

Recent Developments
In November, SVP of Edge Computing Gil Shneorson outlined how Dell NativeEdge 2.0, Dell’s edge orchestration software, better enables AI workloads at the edge. Shneorson emphasized that although AI workloads have existed at the edge for years, Dell Technologies’ (Dell) orchestration platform utilizes AI to make these workloads easier to deploy and manage. One example is a new software feature that offers high-availability clustering, which provides automatic application failover and live virtual machine migration.

The scope of Dell-branded devices and infrastructure that can be managed under NativeEdge is broad — including servers, high-end storage and backup systems, edge gateways, and even workstation PCs. These various types of endpoints can be clustered through the software to act as a single system.


Dell has also updated NativeEdge to address other customer needs surrounding AI, including data mobility and support for NVIDIA Inferencing Microservices (NIMs).

Cloud Components Benchmark


Behind healthy server backlog and new software IP, hardware vendors drive the cloud components market, particularly as software pure plays prioritize migrations entirely to the public cloud

Hardware-centric vendors continue to make their move into software

Over the past several years, the cloud software components market has shifted. Microsoft and Oracle are no longer dominating the market as they prioritize their native tool sets and encourage customers to migrate to public cloud infrastructure. Driven largely by weaker-than-expected purchasing around Microsoft Windows Server 2025, aggregate revenue growth for these two software-centric vendors was down 3% year-to-year in 3Q24. Over the same compare, total software components revenue for the benchmarked vendors was up 14% and total cloud components revenue was up 8%. In some ways, this dynamic has made room for hardware-centric vendors such as Cisco and Hewlett Packard Enterprise (HPE) to move deeper into the software space, particularly as they buy IP associated with better managing orchestration infrastructure in a private and/or hybrid environment.

Backlog-to-revenue conversion for AI servers fuels market growth

Though revenue mixes are increasingly shifting in favor of software, driven in part by acquisitions (e.g., Cisco’s purchase of Splunk), hardware continues to dominate the market, accounting for 80% of benchmarked vendor revenue in 3Q24. Industry standard servers being sold to cloud and GPU “as a Service” providers are overwhelmingly fueling market growth, more than offsetting unfavorable cyclical demand weakness in the storage and networking markets. This growth is largely driven by the translation of backlog into revenue, but vendors are still bringing new orders into the pipeline, which speaks to ample demand from both AI model builders and cloud providers. However, large enterprises are increasingly adopting AI infrastructure as part of a private cloud environment to control costs and make use of their existing data.
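The backlog-to-revenue dynamic described above follows a simple rollforward identity; the figures below are made up for illustration, not vendor-reported:

```python
# Hypothetical illustration of the backlog rollforward that links new
# orders, recognized revenue and ending backlog for AI servers:
#   ending backlog = starting backlog + new orders - revenue recognized

def ending_backlog(starting_backlog, new_orders, revenue_recognized):
    return starting_backlog + new_orders - revenue_recognized

# Example with made-up quarterly figures ($B): backlog still grows even as a
# large slice converts to revenue, because order intake exceeds shipments.
print(ending_backlog(4.0, 3.5, 2.9))  # 4.6
```

This is why sustained order intake matters as much as backlog conversion: once orders fall below shipments, the backlog that fuels future revenue begins to drain.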

Graph: Cloud Revenues by Segment for 3Q23-3Q24 (Source: TBR)


 

Ample scale and strong demand from both CSPs and enterprises extend Dell’s lead in the cloud components market

Cloud components vendor spotlights

Dell Technologies [Dell]

From a revenue perspective, HPE and Cisco once threatened Dell’s cloud components leadership, but the company has been able to distance itself from its nearest competitors. This is largely due to Dell’s performance over the past year, with strong server demand, particularly from Tier 2 cloud service providers (CSPs), propelling the company’s corporate and cloud components revenue growth rate to the double digits. Meanwhile, in 3Q24 Dell shipped $2.9 billion worth of AI servers while backlog reached $4.5 billion, reflecting 181% year-to-year growth during the quarter and indicating strong future revenue performance.

Hewlett Packard Enterprise

Like its peers, HPE is benefiting from AI-related server demand, and in 3Q24 the company reported $1.5 billion in total AI systems revenue. HPE continues to benefit from its ongoing efforts to shift the sales mix in favor of software and services via GreenLake. In 3Q24 HPE completed its acquisition of Morpheus Data, officially equipping HPE with a suite of infrastructure software that allows customers to take core hypervisors, such as KVM and VMware, and use them to build complete private cloud stacks.

Cisco

With its acquisition of Splunk, Cisco has emerged as the leader of the software components market, even surpassing Microsoft in related revenue. But networking still accounts for the bulk of Cisco’s components business, and, as evidenced by a 32% year-to-year decline in total hardware revenue for 3Q24, Cisco is facing headwinds in the core networking business. That said, the company is actively taking steps to build out its portfolio, particularly by integrating more security components into the networking layer, which is where most cyberattacks originate, to boost its long-term competitiveness in the market.


Infrastructure agnosticism and flexible cloud-enabled delivery are core attributes of the service delivery market, cementing IBM’s leadership

Dedicated orchestration tools continue to have their place in the market, both in on-premises and cloud environments, but growth is largely driven by application lifecycle management and orchestration tools that span multiple environments. IBM has a rich history in this space and remains a revenue leader. Cisco used to have a foothold in the market but no longer sells its CloudCenter suite.

Vendor spotlight: IBM

After taking steps to bring watsonx into Maximo in 2Q24 for greater process automation, IBM strengthened its commitment to the asset performance management space with the acquisition of Prescinto. Prescinto offers AI tools and accelerators designed for asset owners and operators, with a focus on renewable energy. This deal is designed to support IBM’s play in certain verticals, particularly energy and utilities.

Graph: Service Delivery and Orchestration Revenue Growth vs Cloud Software Components Revenue Growth for 3Q24 (Source: TBR)


 

AI PC and AI Server Market Landscape


Despite hyperscalers’ increasing investments in custom AI ASICs, TBR expects demand for GPGPUs to remain robust over the next five years, driven largely by the ongoing success of NVIDIA DGX Cloud

The world’s largest CSPs, including Amazon, Google and Microsoft, remain some of NVIDIA’s biggest customers, using the company’s general-purpose graphics processing units (GPGPUs) to support internal workloads while also hosting NVIDIA’s DGX Cloud service on DGX systems residing in the companies’ own data centers.
 
However, while Amazon, Google and Microsoft have historically employed some of the most active groups of CUDA developers globally, all three companies have been actively investing in the development and deployment of their own custom AI accelerators to reduce their reliance on NVIDIA. Additionally, Meta has invested in the development of custom AI accelerators to help train its Llama family of models, and Apple has developed servers based on its M-Series chips to power Apple Intelligence’s cloud capabilities.
 
However, even as fabless semiconductor companies such as Broadcom and Marvell increasingly invest in offering custom AI silicon design services, only the largest companies in the world have the capital to make these kinds of investments. Further, only a subset of these large technology companies engage in the type of operations at scale that would yield measurable returns on investments and total cost of ownership savings. As such, even as investments in the development of custom AI ASICs rapidly rise, the vast majority of customers continue to choose NVIDIA’s GPGPUs due to not only their programming flexibility but also the rich developer resources and robust prebuilt applications comprising the hardware-adjacent side of NVIDIA’s comprehensive AI stack.
 

Graph: Data Center GPGPU Market Forecast for 2024-2029 (Source: TBR)



 

Companies across a variety of industry verticals want to take a piece of NVIDIA’s AI cake

Scenario Discussion: NVIDIA faces increasing threats from both industry peers and partners

NVIDIA GPGPUs are the accelerator of choice in today’s AI servers. However, the AI server and GPGPU market incumbent’s dominance is increasingly under threat from internal and external factors that are largely related. Internally, as Wall Street’s darling and a driving force behind the Nasdaq’s near 29% annual return in 2024, NVIDIA’s business decisions and quarterly results are increasingly scrutinized by investors, forcing the company to carefully navigate its moves to maximize profitability and shareholder returns. Externally, while NVIDIA positions itself largely as a partner-centric AI ecosystem enabler, the number of the company’s competitors and frenemies is on the rise.
 
Despite NVIDIA’s sequentially eroding operating profitability, investor scrutiny has not had a clear impact on the company’s opex investments — evidenced by a 48.9% year-to-year increase in R&D spend during 2024. However, it may well be a contributing factor to the company’s aggressive pricing tactics and rising coopetition with certain partners. While pricing power is one of the luxuries of having a first-mover advantage and a near monopoly of the GPGPU market, high margins attract competitors and high pricing drives customers’ exploration of alternatives.
 
Additionally, the fear of vendor lock-in among customers is something that comes with being the only name in town, and while there is not much most organizations can do to counteract this, NVIDIA’s customers include some of the largest, most capital-rich and technologically capable companies in the world.
 
To reduce their reliance on NVIDIA GPUs, hyperscalers and model builders alike have increasingly invested in the development of their own custom silicon, including AI accelerators, leveraging acquisitions of chip designers and partnerships with custom ASIC developers such as Broadcom and Marvell to support their ambitions. For example, Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP) and Meta have their own custom AI accelerators, and OpenAI is reportedly working with Broadcom to develop an AI ASIC of its own. However, what these custom AI accelerators have in common is their purpose-built design to support company-specific workloads, and in the case of AWS, Azure and GCP, while customers can access custom AI accelerators through the companies’ respective cloud platforms, the chips are not physically sold to external organizations.
 
In the GPGPU space, AMD and, to a lesser extent, Intel are NVIDIA’s direct competitors. While AMD’s Instinct line of GPGPUs has become increasingly powerful, rivaling the performance of NVIDIA GPGPUs in certain benchmarks, the company has failed to gain share from the market leader due largely to NVIDIA CUDA’s first-mover advantage. However, the rise of AI has driven growing investments in alternative programming models, such as AMD ROCm and Intel oneAPI — both of which are open source, in contrast to CUDA — and programming languages like OpenAI Triton. Despite these developments, TBR believes NVIDIA will retain its majority share of the GPGPU market for at least the next decade due to the momentum behind NVIDIA’s closed-source, tightly integrated software and hardware stack.
 


 

Microsoft Copilot+ PCs represent a brand-new category and opportunity for Windows PC OEMs industrywide

PC OEMs expected the post-pandemic PC refresh cycle to begin in 2023, but over the past 18 months those expectations have repeatedly been pushed back, with current estimates indicating the next major refresh cycle will ramp sometime in 2025. While the expected timing of the refresh cycle has changed, the drivers have remained the same: PC OEMs expect that the aging PC installed base, the upcoming end of Windows 10 support — slated for October 2025 — and the introduction of new AI PCs will coalesce, driving meaningful rebounds in the year-to-year revenue growth of both the commercial and consumer segments of the PC market.
 
As organizations graduate from Windows 10 devices to Windows 11 devices, TBR expects many customers will opt for AI PCs to future-proof their investments, understanding that the overall commercial PC market will be dominated by devices powered by Windows AI PC SoCs in a few years’ time. However, while TBR expects the Windows AI PC market to grow at a 44.3% CAGR over the next five years, the driver of this robust growth centers on the small revenue base of Windows AI PCs today.
 
While Apple dominated the AI PC market in 2024 due to the company’s earlier transition to its own silicon platform — the M Series, which features onboard NPUs — TBR estimates indicate that among the big three Windows OEMs, HP Inc.’s AI PC share was greatest in 2024, followed closely by Lenovo and then Dell Technologies. Without an infrastructure business, HP Inc. relies heavily on its PC segment to generate revenue, and as such, TBR believes that relative to its peers — and Dell Technologies in particular — HP Inc. is more willing to trade promotions and lower margins for a greater volume of sales, a key factor in the increasingly price-competitive PC market. TBR estimates Lenovo’s second-place positioning is tied to the company’s growing traction in China, where Lenovo first launched AI PCs leveraging a proprietary AI agent and where Microsoft Copilot has no presence.
 

Graph: Windows AI PC Market Forecast for 2024-2029 (Source: TBR)


The PC ecosystem increases its investments in developer resources to unleash the power of the NPU

 
Currently available AI PC-specific applications, such as Microsoft Copilot and PC OEMs’ proprietary agents, are focused primarily on improving productivity, which drives more value on the commercial side of the market compared to the consumer side. However, it is likely more AI PC-specific applications will be developed that harness the power of the neural processing unit (NPU), especially as AI PC SoCs continue to permeate the market.
 
Companies across the PC ecosystem, including silicon vendors, OS providers and OEMs, are investing in expanding the resources available to developers to support AI application development and ultimately drive AI PC adoption. For example, AMD Ryzen AI Software and Intel OpenVINO are similar bundles of resources that allow developers to create and optimize applications to leverage the companies’ respective PC SoC platforms and heterogeneous computing capabilities, with both tool kits supporting the NPU in addition to the central processing unit (CPU) and GPU.
 
However, as it relates to AI PCs, TBR believes the NPU will be leveraged primarily for its ability to improve the energy efficiency of certain application processes, rather than for enabling the creation of net-new AI applications. While the performance of PC SoC-integrated GPUs pales in comparison to that of discrete PC GPUs purpose-built for gaming, professional visualization and data science, the TOPS performance of SoC-integrated GPUs typically far exceeds that of SoC-integrated NPUs, partly because the two processing units are intended to serve different purposes.
 
The GPU is best suited for the most demanding parallel processing functions requiring the highest levels of precision, while the NPU is best suited for functions that prioritize power efficiency and tolerate lower levels of precision, such as noise suppression and video blurring. As such, TBR sees the primary value of the NPU being extended battery life, an extremely important factor for all mobile devices. This is the key reason TBR believes AI PC SoCs will gradually replace all non-AI PC SoCs, eventually being integrated into nearly all consumer and commercial client devices.
 
One of the reasons PC OEMs are so excited about the opportunity presented by AI PCs is that AI PCs command higher prices, supporting OEMs’ longtime focus on premiumization. Commercial customers, especially large enterprises in technology-driven sectors like finance, typically buy more premium machines, while consumers generally opt for less expensive devices. TBR believes this dynamic will be another significant reason AI PC adoption rises in the commercial segment of the market before the consumer segment.

DOGE Federal IT Vendor Impact Series: General Dynamics Technologies

The Trump administration and its Department of Government Efficiency (DOGE) have generated massive upheaval across the board in federal operations, including in the federal IT segment. As of March 2025, thousands of contracts described by DOGE as “non-mission critical” have been canceled, including some across the federal IT and professional services landscape. TBR’s DOGE Federal IT Vendor Impact Series explores vendor-specific DOGE-related developments and impacts on earnings performance. Click here to receive upcoming series blogs in your inbox as soon as they’ve published.

 

Demand for digital accelerators grows despite federal IT market uncertainty

Although the Department of Government Efficiency (DOGE) claims to have canceled at least six of General Dynamics’ contracts during 1Q25 and the U.S. General Services Administration (GSA) has instructed agencies to scrutinize their work with General Dynamics Information Technology (GDIT) to determine whether it is truly essential, General Dynamics Technologies (GDT) posted better-than-expected results. When General Dynamics released its 1Q25 fiscal results on April 23, it revealed that GDT’s quarterly revenue was $3.4 billion, up 6.8% year-to-year and 5.9% sequentially.
 
GDIT drove this expansion, with its revenue of $2.36 billion surging 9.2% year-to-year and 7.2% sequentially. While the acquisition of Iron EagleX in 3Q24 provided mild inorganic revenue contributions, demand increased rapidly for GDIT’s growing portfolio of digital accelerators: Comet 5G, Coral Software Factory, Cove AI Operations, Eclipse Defensive Cyber, Ember Digital Engineering, Everest Zero Trust, Hive Hybrid Multicloud, Luna AI and Tidal Post-Quantum Cryptography.
 
GDT’s operating margin also benefited from this uptick in volume for GDIT as well as for General Dynamics Mission Systems offerings, improving 40 basis points year-to-year to 9.6% in 1Q25. However, GDT’s operating margin declined 20 basis points sequentially as IT services, rather than high-margin defense electronics, become a more prominent component of the segment’s mix while Mission Systems continues to reshuffle its portfolio.
 
While General Dynamics’ total backlog decreased on a sequential and year-to-year basis, GDT’s backlog of $14.4 billion was up 6.7% year-to-year and 1.8% sequentially. GDT’s bookings were notably robust, with the segment achieving a book-to-bill ratio of 1.1:1, indicating that disruptions to its $120 billion pipeline of opportunities have been minimal despite the new headwinds. While GDT flagged its win and capture rates as being in the 80% range, its leadership highlighted that solicitation, proposal and award processes have been slowing across the market as the Trump administration refines its long-term goals.

How GDT will navigate 2025

Since GDIT secured two wins in the federal health market worth approximately $3 billion in the second half of 2023, the segment has continued to ramp up its efforts to diversify its non-DOD (Department of Defense) revenue base. For example, GDIT secured a contract worth up to $286 million in 4Q24 to keep enhancing the Centers for Medicare & Medicaid Services’ (CMS) Benefits Coordination & Recovery Center by weaving in emerging technologies like AI to streamline the center’s operations.
 
GDIT formally established a Federal Health practice toward the end of 2024, signaling its intent to create deeper ties to agencies within the Department of Health and Human Services (HHS). Although GDT’s non-DOD revenue growth has gained momentum, expanding 8.8% year-to-year in 4Q24 and 5.3% year-to-year in 1Q25, DOGE’s actions may complicate things, given that the bulk of GDT’s canceled contracts thus far have been tied to HHS.
 
Additionally, GDIT’s consulting services have drawn the ire of the GSA. GDIT is one of the 10 vendors the GSA has identified as being set to receive more than $65 billion in 2025 and beyond. The GSA has requested that agencies go through their contracts with these vendors and outline which are mission critical and why. GDIT has been actively working with clients to identify ways to reduce costs and enable efficiencies through technology. Although consulting offers a lucrative avenue for GDIT to expand its margins and build upon its existing relationships with clients, the vendor still prioritizes delivering solutions and IT services in its go-to-market strategy.
 
GDT is not going to give up on the federal health market or on consulting, but TBR anticipates the vendor will increasingly prioritize defense opportunities in the interim, such as a recently awarded contract worth up to $5.6 billion to manage the DOD’s Mission Partner Environment. The DOD has historically been GDT’s largest client and was responsible for more than 58% of its revenue in 1Q25.
 
While the Trump administration is asking for a 23% reduction in nondefense discretionary funding in its FFY26 budget proposal, it wants to keep the DOD’s discretionary spending roughly on par with the $892.5 billion stopgap for FFY25. GDIT is well positioned to capitalize on the DOD becoming increasingly interested in emerging technologies, given its experience with fixed-price and outcome-based contracting. Additionally, GDIT can offer defense and intelligence clients its array of digital accelerators to help offset the disruptions in the federal civilian market.
 
These digital accelerators were responsible for more than $2 billion of the total contracts that GDIT won during 2023. In 2024 GDIT continued to build out its array of digital accelerators and generated nearly $7.5 billion in awards from them. GDIT’s go-to-market strategy is reliant on these digital accelerators.
 
To continue gaining traction with defense as well as civilian clients, GDIT will need to keep leveraging its partners to enhance these solutions and make inroads in these markets. GDIT began ramping up its alliance activity during 2H24 and has continued to do so. For example, GDIT is augmenting its Cove AI Ops Digital Accelerator with ServiceNow’s AI and machine learning platform to make agencies’ systems more efficient.

TBR’s outlook for GDT

At the end of 4Q24, GDT tendered full-year 2025 revenue guidance of $13.5 billion, implying growth of approximately 2.8% over 2024 sales of $13.1 billion. As is its custom, General Dynamics did not update GDT’s guidance this early in the fiscal year.
 
TBR remains skeptical of GDT’s guidance, given the lack of synergy between GDIT and Mission Systems. The latter will continue to transition its portfolio from legacy programs to newer initiatives after starting this arduous process in 2024, and GDIT is tasked with driving revenue expansion in the meantime.
 
Although GDIT is leveraging demand for AI and other emerging technologies, the uncertainty in the federal IT market and the segment’s own small-scale portfolio transition could impede the growth needed to offset Mission Systems’ performance. The Trump administration’s sudden and aggressive adoption of tariffs also increases the likelihood of supply chain bottlenecks and significant program delays.
 
For these reasons, TBR believes that GDT’s guidance for an operating margin of 9.2% could be too lofty, and we anticipate its operating margin could decline from 9.6% in 2024 to 8.9% in 2025. TBR conservatively believes that GDT’s revenue will expand approximately 2% over 2024 sales of $13.1 billion in 2025.

 

TBR’s DOGE Federal IT Impact Series will include analysis of Accenture Federal Services, General Dynamics Technologies, CACI, IBM, CGI, Leidos, ICF International, Maximus, Booz Allen Hamilton and SAIC. Click here to download a preview of our federal IT research and receive upcoming series blogs in your inbox as soon as they’ve published.

 

DOGE Federal IT Vendor Impact Series: IBM Federal


DOGE’s aggressive cost-cutting activities impacted IBM-Fed* in 1Q25

IBM tendered its 1Q25 earnings on April 23, and while the company does not disclose fiscal data about the federal operations of IBM Consulting, IBM’s executives did provide useful color on IBM-Fed and the impact of DOGE. Not surprisingly, IBM-Fed’s contracts with the U.S. Agency for International Development (USAID), much maligned by the Trump administration, suffered cancellations and drawdowns.
 
According to the DOGE-Terminated Contracts Tracker on the GX2 website, which tracks developments in federal contracting, IBM-Fed has had a total of $40.1 million in contracts terminated by DOGE as of the publication of this blog. Cancellations included awards with the Department of the Treasury ($17.5 million in TCV), the Department of Health & Human Services ($3.4 million in TCV), the Commerce Department ($1.3 million in TCV), and the Department of Education ($18 million in TCV). Without disclosing specific revenue data for IBM-Fed, IBM noted that its federal business accounts for less than 5% of IBM’s total corporate revenue and less than 10% of IBM Consulting sales, or, according to TBR estimates, about $490 million in 1Q25, up 3% year-to-year.
 
We note that none of the USAID awards terminated or scaled back by DOGE were listed on the GX2 website. IBM CFO Jim Kavanaugh indicated during the 1Q25 earnings call that IBM-Fed had a “handful of contracts” canceled by DOGE, affecting about $100 million worth of contracts in IBM Consulting’s $30 billion (on an annualized basis) backlog.

The advisory business within IBM-Fed bore the brunt of DOGE-based pressures; the company’s core technology operations may have largely been spared

IBM indicated during the earnings call that 40% of IBM-Fed’s revenue stems from technology-focused work described as “high-value annuitized revenue under contract” and, by implication, is so far unscathed by DOGE. IBM-Fed blends its hybrid cloud, AI and security technologies to offer federal agencies a suite of transformative solutions that are very technology-centric and mission-enabling by nature. Conversely, 60% of IBM-Fed’s sales derive from advisory-based work, which company executives noted during the earnings call would be “more susceptible to discretionary efficiency-type programs.”
 
Based on data about IBM-Fed’s canceled contracts on the GX2 website, we believe the advisory work affected by DOGE included cloud transition and support services, data standards testing and implementation, data quality support services, the acquisition and implementation of integrated workplace management system licenses, and “data at rest” support services (i.e., data that is stored and not being actively used or transmitted). Other contracts were “terminated for convenience,” according to the GX2 website, which did not provide a specific description of the canceled services.
 
IBM-Fed, according to IBM CEO Arvind Krishna, processes claims for veterans, provides procurement services to the General Services Administration (GSA), and has implemented and is currently operating payroll systems for several federal agencies. Krishna acknowledged that “some areas around the edges” of this work “could be viewed as discretionary” by DOGE, but that the bulk of IBM-Fed’s services are mission critical and technology focused.

IBM-Fed will double down on its core cloud, security and AI capabilities to successfully traverse the DOGE-disrupted federal IT space in 2025

According to TBR’s 1Q25 IBM Consulting Earnings Response, “IBM Consulting could experience variability in revenue growth in 2025, and IBM is cautious about the revenue contribution from the business to total corporate revenue due to possible further tightening of discretionary spending driven by macroeconomic uncertainty and the U.S. Department of Government Efficiency’s (DOGE) activities.
 
However, IBM Consulting will continue to gain ground in areas such as generative AI (GenAI) due to IBM’s early advances in the segment, diversifying revenues through new areas of expansion.” To buffer its 2025 sales growth against DOGE’s cost-rationalization efforts and offset revenue losses from the cancellation of advisory work deemed discretionary (and thus expendable) by DOGE, IBM-Fed must play to its strengths in AI- and security-infused hybrid cloud solutions and emphasize how well its offerings align with DOGE’s efficiency agenda.
 
IBM-Fed won large-scale programs with civilian and defense agencies in 2024, thanks to the additional delivery scale and digital transformation offerings it obtained by acquiring Octo Consulting in late 2022, an advantage and key selling point for IBM-Fed when advising the DOGE advisory board. While Octo’s pure play advisory capabilities expose IBM-Fed to DOGE’s federal spending cuts in traditional consulting services, Octo’s oLabs center of excellence showcases IBM-Fed’s acquisition-enhanced cloud, security, data science and DevSecOps capabilities, which sync well with the IT priorities of the Trump administration.

IBM-Fed must accelerate its expansion within the DOD and among national security agencies, particularly by emphasizing its strengths in cloud

Octo’s oLabs also serves national security and defense agencies. The Trump administration has indicated national security will be an overarching budget priority during its term and has hinted at a federal fiscal year 2026 (FFY26) defense budget surpassing $1 trillion for the first time, underscoring the urgency for IBM-Fed to accelerate its expansion with the Pentagon, where it has been gaining traction since acquiring Octo.
 
According to TBR’s 1H25 IBM Federal Vendor Profile, “Some federal IT industry observers believe the Trump administration’s DOGE will accelerate cloud investment as federal agencies may be forced to outsource more operations deemed outside ‘Inherently Governmental Functions (IGF).’ Cloud adoption in the Department of Defense (DOD) continues to far exceed civilian cloud investment, which the GSA’s Federal IT Dashboard (FITD) estimated to be $8.2 billion in FFY24, up from $5.5 billion in FFY23.”
 
IBM-Fed could leverage IBM’s 1Q25 $6.4 billion acquisition of HashiCorp to accelerate DOD-based expansion, as HashiCorp has helped the DOD migrate more than 3,000 applications to the cloud with its Terraform (Infrastructure as Code software) and Vault (identity-based security) tools designed to facilitate migrations to multicloud architectures. The DOD has clearly indicated it favors a multicloud approach for implementing cloud-based edge computing solutions.
 
*TBR refers to IBM Consulting’s federal IT operations as IBM-Fed. IBM-Fed is not an official business line title used by IBM or IBM Consulting. The business defined by TBR as IBM-Fed resides within IBM Consulting’s U.S. Public and Federal Market group.

 


 

DOGE Federal IT Vendor Impact Series: CACI


CACI spared from major DOGE disruptions: Growth and profitability on track for FY25 goals

CACI tendered its 1Q25 earnings on April 23, and TBR did not discern any material impact from DOGE on the company’s business during the quarter, the third fiscal quarter of CACI’s FY25 (ending June 30). The company posted sales of $2.17 billion in 1Q25, up 11.8% year-to-year on a statutory basis and up 5.6% on an organic basis. CACI’s gross margin of 33.8% in 1Q25 was up sequentially from 33.2% in 4Q24, while its operating margin of 9.1% in 1Q25 was up 50 basis points sequentially from 8.6% in 4Q24. The company’s adjusted EBITDA margin was 11.7% in 1Q25, up from 11.1% in 4Q24.
 
CACI believes demand will remain strong through the remainder of its FY25 and into its FY26 for technologies and capabilities at the core of the company’s portfolio: AI-enhanced and commercially honed software-defined solutions delivered with Agile development methodologies; signals intelligence (SIGINT) and electronic warfare (EW) technologies for warfighters, defense vehicles and platforms, and IC applications; and AI-infused financial management offerings.
 
Uninterrupted sales growth and consistent margin performance indicate CACI’s offerings remain well aligned to the Trump administration’s IT investment priorities, particularly as the new administration prepares to expand investment in cybersecurity, national security and national defense, and advanced space-based communications systems for defense, intelligence and civil applications. CACI executives also noted that the federal budget environment is slowly becoming more constructive and more transparent, a positive harbinger for CACI and its fellow federal IT contractors.

CACI’s order book was essentially immune to DOGE-related turmoil in the federal IT market

TBR did not observe any impact from DOGE activities on CACI’s book of business. CACI’s backlog fell 1.3% sequentially, from $31.8 billion to $31.4 billion in 1Q25, but this kind of decline is typical in the company’s third fiscal quarter. CACI’s trailing 12-month (TTM) book-to-bill ratio was 1.5 in 1Q25, down from 1.7 in 4Q24. However, a sequential decline from the second to third fiscal quarter is not unusual for the company. In 1Q25, both the TTM book-to-bill ratio of 1.5 and the quarterly ratio of 1.2 were consistent with figures from the same period last year.
 
Furthermore, CACI’s bookings of $2.2 billion in 4Q23 and $3.5 billion in 1Q24 came during a period of exceptionally robust Department of Defense and Intelligence Community award activity. CACI’s bookings were $2.75 billion in 1Q25, up from $1.2 billion in 4Q24, consistent with the company’s seasonal pattern of sequential bookings expansion between its second and third fiscal quarters. CACI noted in its 1Q25 earnings discussion that DOGE examined seven contracts in the company’s order book, including one that had already been completed. The aggregate impact of eliminating these awards would be only $3 million in TCV, and DOGE has so far notified CACI that just $1 million worth of this ongoing work is likely to be canceled.
 
The company acknowledged that its business development teams have experienced some deceleration in certain aspects of the sales cycle, such as invoice and funding approvals. CACI CFO Jeffrey MacLauchlan said during the earnings call that “things that used to take two or three days are taking four or five days.” CACI’s leadership expects the disruption, which according to the company has been “very manageable” to date, to wane during the second half of federal fiscal year 2025 (FFY25). If sales motions are being impeded by DOGE, TBR would expect to see this reflected in lower-than-expected margin performance by CACI, but we did not observe any DOGE-related margin erosion in CACI’s P&L in 1Q25.
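A book-to-bill ratio compares new bookings to recognized revenue over the same period; a ratio above 1.0 means the backlog is growing faster than revenue is being burned off. A minimal sketch of the trailing calculation follows; the quarterly figures are illustrative placeholders, not CACI's reported series.

```python
# Book-to-bill = new bookings / recognized revenue over the same window.
# Quarterly figures below are hypothetical, for illustration only.
def book_to_bill(bookings: list[float], revenue: list[float]) -> float:
    """Trailing ratio across however many quarters are supplied."""
    return sum(bookings) / sum(revenue)

# Four hypothetical quarters (in $B); a trailing 12-month ratio uses
# the most recent four quarters of each series.
q_bookings = [3.5, 2.9, 1.2, 2.75]
q_revenue = [2.1, 2.05, 2.1, 2.17]
print(round(book_to_bill(q_bookings, q_revenue), 2))
```

Passing a single quarter of each series yields the quarterly ratio instead of the TTM figure.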

Undeterred by the DOGE-disrupted environment, CACI elevates several elements of its FY25 guidance

CACI raised the low end of its FY25 sales guidance range in 1Q25 and is now calling for top-line revenue of between $8.55 billion and $8.65 billion, implying a growth range of between 11.6% and 12.9% over FY24 revenue of $7.66 billion. In 4Q24 the company forecasted $8.45 billion in revenue at the low end of its projected FY25 sales range, implying growth of 10.3% at the bottom of the range.
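The growth range above follows directly from dividing each end of the guided range by prior-year revenue, as this sketch shows using the figures cited in the text:

```python
# Implied year-to-year growth from a guidance range, using CACI's
# FY25 outlook of $8.55B-$8.65B against FY24 revenue of $7.66B.
def implied_growth(guided: float, prior: float) -> float:
    """Percentage growth implied by a guided revenue figure."""
    return (guided / prior - 1) * 100

low = implied_growth(8.55, 7.66)   # ≈ 11.6%
high = implied_growth(8.65, 7.66)  # ≈ 12.9%
print(f"{low:.1f}% to {high:.1f}%")
```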
 
CACI also raised the low end of its guidance for FY25 adjusted net income* in 1Q25 and now expects at least $543 million in FY25, up from $537 million forecasted in 4Q24.
 
CACI elevated its outlook for non-GAAP adjusted diluted earnings per share (ADEPS) in 1Q25, and as of 1Q25 is projecting a range of between $24.24 and $24.87 per share for FY25, up from a previous ADEPS range of between $23.24 and $24.13 per share. Free cash flow guidance was also elevated from $450 million tendered in 4Q24 to $465 million in 1Q25.
 
TBR notes that CACI has twice raised guidance for FY25 sales, adjusted net income, ADEPS and free cash flow since initially tendering its FY25 outlook in 2Q24. CACI is still guiding for a FY25 EBITDA margin in the low 11% range, implying a potential improvement of 100 basis points over FY24’s EBITDA margin of 10.4%, but also suggesting CACI does not expect any DOGE-related margin headwinds through the remainder of FY25.
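The margin improvement cited above is a basis-point calculation (1 basis point = 0.01 percentage point). A minimal sketch; the 11.4% figure is an assumed midpoint of "low 11%," not a company number:

```python
# Basis-point change between two margins expressed in percent.
# 1 basis point (bp) = 0.01 percentage point.
def bps_change(new_margin_pct: float, old_margin_pct: float) -> float:
    return (new_margin_pct - old_margin_pct) * 100

# Assumed 11.4% FY25 midpoint vs. the reported 10.4% FY24 EBITDA margin.
print(round(bps_change(11.4, 10.4)))
```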

CACI will remain vigilant and maintain a constant dialogue with customers

During CACI’s 1Q25 earnings call, CEO John Mengucci described DOGE’s objectives as “peace through strength, secure borders, increased efficiency and technology modernization.” Mengucci and his executive team remain confident that CACI’s strategy and portfolio are and will remain in sync with DOGE’s goals and with the IT strategy of the Trump administration, a contention supported by the company’s 1Q25 fiscal results and its more optimistic FY25 outlook.
 
Regardless, CACI recognizes that federal executives are under pressure to accelerate IT modernization, quickly achieve IT-driven operational efficiencies and curb spending according to DOGE directives. Procurement teams at federal agencies are struggling to keep bid review processes and proposal adjudications on schedule as the Trump administration executes large-scale furloughs across the federal workforce. As such, CACI will keep its executives, business line leaders and business development teams as close as possible to IT decision makers and procurement counterparts in federal agencies for as long as DOGE’s efficiency agenda is in effect.
 
*Adjusted net income: GAAP-compliant net income excluding intangible amortization expense and the related tax impact

 


 

Fujitsu Expands AI Strategy in Europe, Emphasizing Collaboration, Compliance and Customization

‘AI can be a knowledge management accelerator, but only if well-fed by an enterprise’s own data’

In February TBR met with two AI leaders in Fujitsu’s European Platform Business to better understand the company’s approach to the AI market, its evolving AI capabilities and offerings, and what we can expect as 2025 unfolds. Maria Levina, AI Business analyst, and Karl Hausdorf, head of AI Business, gave a detailed presentation focused primarily on the European market. The following reflects both that briefing and TBR’s ongoing research and analysis of Fujitsu, the company’s partners and peers, and the overall AI landscape.
 
One highlight that illustrates many facets of Fujitsu’s approach to AI was Levina and Hausdorf’s description of Fujitsu’s customers’ choices between “bring your own data” and “bring your own AI.” The first allows more AI-mature customers to bring their data into a Fujitsu-provided on-premises solution with full support for scaling, maintaining hardware and software, and updating, as needed.
 
The second allows customers to run their own AI stack on a Fujitsu “validated and optimized” platform, developed and maintained by Fujitsu and select technology partners. Critical, in TBR’s view, is Fujitsu’s positioning of these options as responsive to clients’ needs, as determined by all AI stakeholders within an enterprise, including IT, AI and business leaders.
 
Levina and Hausdorf explained, “Together, with our ecosystem of partners, we’re committed to unlock the potential of generative AI for our clients” through “on-premises data sovereign, sustainable private GPT and AI solutions,” focused on “rapid ROI.” Fujitsu is not approaching clients with a technology solution, but rather with options on how to address and solve business problems. As the Fujitsu team said, “Understand the why, know the what, and co-create the how.” The Fujitsu team also noted that the company’s industry expertise resides in the processes and workflows unique and/or critical to an industry.
 

‘Maintaining control of data means owning your AI in the future’

Before diving into Fujitsu’s AI offerings, the Fujitsu team laid out their understanding of the European market, sharing data the company collected around AI adoption, use of AI platforms, and barriers to growth (in Fujitsu’s phrasing, “progress-limiting factors,” which is perhaps a more positive spin on the usual list of barriers and challenges). Fujitsu surveyed or spoke with 400 data and IT professionals across six European countries, and the results indicated that overcoming legacy mindsets continues to be a major impediment to adopting and harnessing the value of AI.
 
TBR’s November 2024 Voice of the Customer Research similarly noted challenges in Europe with “the lack of engagement from employees who are being asked to change.” The Fujitsu team noted that change management, therefore, had to involve all AI stakeholders, including “IT people, business people and AI people” within an enterprise.
 
In TBR’s experience, IT services companies and consultancies continue to find new constituents for change management at their clients as the promise — and disruption — of AI becomes more widespread, reinforcing Fujitsu’s strategy of bringing change management to all AI stakeholders. Lastly, the Fujitsu team noted that within European clients, expectations around AI have heightened, especially as AI initiatives have launched across multiple business units. Again, Fujitsu’s research and TBR’s Voice of the Customer Research align around ROI expectations as AI matures.
 
The Fujitsu team introduced their AI platform by delineating the key performance indicators they believe a successful platform must have: scaling, performance and speed, simplicity, energy efficiency, AI services in data centers, and GPUs.
 
Although TBR is not in a position to evaluate the technological strengths, completeness or complexity of Fujitsu’s platform, the expansive KPIs indicate Fujitsu has considered not only the IT needs behind an AI deployment but also the larger business factors, particularly the financial impacts. Levina and Hausdorf then dove into the details, including the two customer options described above (bring your own data and bring your own AI). They discussed how Fujitsu offers consulting around the technical and business implications of AI platforms and solutions, including an “AI Test Drive,” which allows clients to test AI solutions before investing in new technologies, large language models (LLMs) or other AI components.
 
Notably for TBR, Fujitsu’s presentation extensively highlighted the company’s AI alliance partners, including Intel, NVIDIA, AMD and NetApp, as well as a slew of LLM providers, demonstrating an appreciation for the collaborative and ecosystem-dependent nature of AI at the enterprise level. The Fujitsu team also stressed the European nature of its AI strategy and platform.
 
European clients, Fujitsu noted, had specific requirements related to the European Union’s (EU) General Data Protection Regulation and the EU AI Act, as well as a preference for on-premises solutions. Some of the use cases Levina and Hausdorf described included a law firm using Fujitsu-enabled AI solutions to analyze case data, contracts, corporate and public legal documents, and multiple deployments of Fujitsu-enabled private GPTs.

Additional observations

  • Fujitsu remains focused on targeting customers already aligned with the company around AI, a strategy that TBR believes speeds ROI and increases client retention.
  • In contrast to some peers in the IT services market, Fujitsu has capabilities across the entire AI technology stack — hardware, software and services — which Levina and Hausdorf called “highly appealing,” especially to European clients.
  • Levina and Hausdorf made two comments that, in TBR’s view, neatly sum up AI at present: “AI can be a knowledge management accelerator, but only if well-fed by an enterprise’s own data” and “maintaining control of data means owning your AI in the future.”

Fujitsu’s AI prowess makes it an invaluable partner

TBR has reported extensively on Fujitsu’s evolving AI capabilities and offerings, noting in a special report in May 2024: “TBR appreciates that Fujitsu’s combination of compute power and proven AI expertise makes the company a significant competitor and/or alliance partner for nearly every player fighting to turn GenAI [generative AI] hype into revenue.
 
“Second, Fujitsu’s vision of ‘converging technologies’ aligns exceptionally well with the more tectonic trends TBR has been observing in the technology space, indicating that Fujitsu’s market positioning is more strategic than transactional or opportunistic.” Add in Fujitsu’s deepening experience in delivering AI solutions to clients, and TBR continues to see tremendous near-term opportunity and growth for Fujitsu and its ecosystem partners.

Inside the AI Hardware Shift: Market Trends Every IT Decision Maker Should Watch in 2025

Silicon vendors and OEMs working together to support AI adoption

While OEMs are responsible for developing and delivering AI-driven and AI-enabling hardware offerings to market, silicon vendors’ innovations are at the heart of the AI hardware revolution.
 
The first wave of AI hardware demand has centered on high-performance AI infrastructure purpose-built to support large-scale AI model training workloads. But growing AI inferencing adoption is driving a second wave of AI hardware demand as clients increasingly transition from the prototyping phase to the deployment phase with custom AI solutions. On the infrastructure side of the AI hardware market, OEMs such as Dell Technologies, Hewlett Packard Enterprise and Supermicro are integrating accelerated computing platforms from companies like NVIDIA. On the client devices side of the market, OEMs such as HP Inc. and Lenovo are developing new AI PC offerings based on system on a chip (SoC) platforms developed by AMD, Intel, Qualcomm and the like.
 
In this TBR Insights Live session, Senior Analyst Ben Carbonneau and Principal Analyst Angela Lambert share an update on developments within the rapidly expanding AI PC and AI server markets as well as key findings from TBR’s AI PC and AI Server Market Landscape. This new research explores the nuances and interconnectedness of the semiconductor and OEM hardware industries, comparing market shares across various industry views and highlighting competitive analysis and forward-looking insights.

Watch this session on AI hardware market trends to learn:

  • TBR’s forecast for the AI PC and AI PC SoC markets
  • Our performance outlook for the AI server and AI server GPGPU (general-purpose computing on GPUs) markets
  • The latest industry trends and ecosystem partnerships
  • Key market dynamics contributing to and inhibiting growth

Watch Now

Excerpt from Inside the AI Hardware Shift: Market Trends Every IT Decision Maker Should Watch in 2025

AI PC market drivers and inhibitors

Market drivers:

  • End of Windows 10 support
  • AI PC advisory and training services
  • Channel incentives for AI PC adoption

Market inhibitors:

  • Lack of killer apps leveraging the neural processing unit (NPU)
  • Lengthening device life cycles
  • Organizations delaying purchases to wait for more powerful NPUs

TBR Insights Live: AI PC and AI Server

Download this session’s presentation deck here.

 
TBR Insights Live sessions are typically held on Thursdays at 1 p.m. ET and include a 15-minute Q&A session following the main presentation. Previous sessions can be viewed anytime on TBR’s Webinar Portal.

 

Google Cloud Cements Values of Enterprise Readiness, Full-stack AI and Hybrid Cloud at Next 2025

In April Google Cloud hosted its annual Next event to showcase new innovations in AI. Staying true to the theme of “A New Way to Cloud,” Google focused on AI, including how AI can integrate with enterprises’ existing tech landscape, with partners playing the role of orchestrator. After Google CEO Sundar Pichai spoke about the company’s achievements around Gemini, which is integral to Google Cloud’s strategy, Google Cloud CEO Thomas Kurian highlighted the business’s three key attributes: optimized for AI; open and multicloud; and enterprise-ready. Additionally, Google Cloud announced a series of new innovations that highlight how the company is trying to execute on these three areas to be the leader in modern AI development.

Google takes an end-to-end approach to AI

When discussing Google Cloud’s three key attributes, Kurian first highlighted how Google Cloud Platform (GCP) is optimized for AI. Based on our own conversations with IT decision makers, this claim is valid: many customers enlist GCP services purely for functional purposes, as they believe they cannot obtain the same performance with another vendor. This is particularly true of BigQuery for large-scale data processing and analytics, and increasingly of Vertex AI, which now supports more than 200 curated foundation models for developers.
 
Within this set of models is, of course, Gemini, Google’s own suite of models, including the new Gemini 2.5 Pro, which has a context window of 1 million tokens and is reportedly now capable of handling advanced reasoning. To be fair, Google still faces stiff competition from other frontier model providers, but Google’s years of AI research through DeepMind and its ability to have models grounded in popular apps like Google Maps, not to mention Google Search, will remain among its key differentiators.
 
With that said, the AI software stack is only as effective as the hardware it runs on. That is why Google has been making some advances in its own custom AI accelerators, and at the event, Google reaffirmed its plans to invest $75 billion in total capex for 2025, despite the current macroeconomic challenges. A large piece of this investment will likely focus on paying for the ramp-up of Google’s sixth-generation TPU (Tensor Processing Unit) — Trillium — which became generally available to Google Cloud customers in December. Additionally, Google is making some big bets on the next wave of AI usage: inference.
 
At the event, Google introduced its seventh-generation TPU, dubbed Ironwood, which reportedly scales up to 9,216 liquid-cooled chips linked through a high-powered networking layer to support the compute-intensive requirements of inference workloads, including proactive AI agents. In 2024 the combined number of TPU and GPU hours consumed by GCP customers tripled, and while this growth was likely off a small base, it is clear that customers’ needs and expectations around AI are increasing. These investments in AI hardware help round out key areas of Google’s AI portfolio ― beyond just the developer tools and proprietary Gemini models ― as part of a cohesive, end-to-end approach.
 

Watch now: Cloud market growth will slow in 2025, but will activity follow? Take a deep dive into generative AI’s impact on the cloud market in 2025 in the TBR Insights Live session below

 

Recognizing the rise of AI inference, Google Cloud reinforces longtime company values of openness and hybrid cloud

With its ties to Kubernetes and multicloud editions of key services like BigQuery and AlloyDB, Google Cloud has long positioned itself as a more open cloud compared to its competitors. However, in recent quarters, the company has seemed to hone this focus more closely, particularly with GDC (Google Distributed Cloud), which is essentially a manifestation of Anthos, Google’s Kubernetes-based control plane that can run in any environment, including at the edge. GDC has been the source of some big wins recently for Google Cloud, including with McDonald’s, which is deploying GDC to thousands of restaurant locations, as well as several international governments running GDC as air-gapped deployments.
 
At Next 2025, Google announced it is making Gemini available on GDC as part of a vision to bring AI to environments outside the central cloud. In our view, this announcement is extremely telling of Google Cloud’s plans to capture the inference opportunity. Per our best estimate, roughly 85% of AI’s usage right now is focused on training, with just 15% in inference, but the inverse could be true in the not-too-distant future. Not only that, but inference will also likely happen in distributed locations for purposes of latency and scale. Letting customers take advantage of Gemini to build applications on GDC — powered by NVIDIA Blackwell GPUs — on premises or at the edge certainly aligns with market trends and will help Google Cloud ensure its services play a role in customers’ AI inference workloads regardless of where they are run.

Boosting enterprise mindshare with security, interoperability and Google-quality search

Kurian mentioned that customers leverage Google Cloud because it is enterprise-ready. In our research, we have found that while Google Cloud is highly compelling for AI and analytics workloads, customers believe the company lacks enterprise-grade capabilities, particularly when compared to Microsoft and Amazon Web Services (AWS). But we believe this perception is changing, and Google Cloud is recognizing that to gain mindshare in the enterprise space, it needs to lead with assets that will work well with customers’ existing IT estates and do so in a secure way. This is why the pending acquisition of Wiz is so important. As highlighted in a recent TBR special report, core Wiz attributes include not only being born in the cloud and able to handle security in a modern way but also connecting to all the leading hyperscalers, as well as legacy infrastructure, such as VMware.
 
Google Cloud has been very clear that it will not disrupt Wiz’s hybrid and multicloud capabilities. In fact, Google Cloud wants to integrate this value proposition, which suggests Google recognizes its place in the cloud market and the fragmented reality of large enterprises’ IT estates. Onboarding Wiz, which is used by roughly half of the Fortune 500, as a hybrid-multicloud solution could play a sizable role in helping Google Cloud assert itself in more enterprise scenarios. In the meantime, Google Cloud is taking steps to unify disparate assets in its security portfolio.
 
At Next 2025, Google Cloud launched Google Unified Security, which effectively brings Google Threat Intelligence, Security Operations, Security Command Center, Chrome Enterprise and Mandiant into a single platform. By delivering more integrated product experiences, Google helps address clients’ growing preference for “one hand to shake” when it comes to security and lays a more robust foundation for security agents powered by Gemini, such as the alert triage agent within Google Security Operations and the malware analysis agent in Google Threat Intelligence to help determine if code is safe or harmful.
 
One of the other compelling aspects of Google’s enterprise strategy is Agentspace. Launched last year, Agentspace acts as a hub for AI agents that uses Gemini’s multimodal search capabilities to pull information from different storage applications (e.g., Google Drive, Box, SharePoint) and automate common productivity tasks like crafting emails and scheduling meetings. At the event, Google announced that Agentspace is integrated with Chrome, allowing Agentspace users to ask questions about their existing data directly through a search in Chrome. This is another clear example of where Google’s search capabilities come into play and is telling of how Google plans to use Agentspace to democratize agentic AI within the enterprise.
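The hub pattern described above, a single search surface federating queries across multiple storage connectors, can be sketched in miniature. The following is a hypothetical illustration only; the `Connector` and `SearchHub` classes and their interfaces are invented for this sketch and do not represent Agentspace’s actual architecture or APIs.

```python
# Hypothetical sketch of a hub-and-connector search pattern: one hub fans a
# query out to several storage backends and merges the results. Illustrative
# only; not Agentspace's real interface.
from dataclasses import dataclass


@dataclass
class Document:
    source: str
    title: str
    body: str


class Connector:
    """One storage backend (e.g., a drive or document store) behind a common interface."""

    def __init__(self, source: str, docs: list[Document]):
        self.source = source
        self.docs = docs

    def search(self, query: str) -> list[Document]:
        # Naive substring match stands in for real semantic/multimodal search.
        return [d for d in self.docs if query.lower() in d.body.lower()]


class SearchHub:
    """Fans a query out to every registered connector and merges the results."""

    def __init__(self):
        self.connectors: list[Connector] = []

    def register(self, connector: Connector) -> None:
        self.connectors.append(connector)

    def search(self, query: str) -> list[Document]:
        results: list[Document] = []
        for c in self.connectors:
            results.extend(c.search(query))
        return results


hub = SearchHub()
hub.register(Connector("drive", [Document("drive", "Q3 plan", "roadmap and budget")]))
hub.register(Connector("sharepoint", [Document("sharepoint", "Budget memo", "budget approval steps")]))

hits = hub.search("budget")
print([d.title for d in hits])  # ['Q3 plan', 'Budget memo']
```

The value of the pattern is that the query surface stays constant while new backends are added behind it, which is why connector breadth (Google Drive, Box, SharePoint and so on) matters as much as search quality.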

Training and more sales alignment are at the forefront of Google Cloud’s partner priorities

Google Cloud has long maintained a partner-first approach. Attaching partner services on virtually all deals; taking an industry-first approach to AI, particularly in retail and healthcare; and driving more ISV co-selling via the Google Cloud Marketplace are a few examples. At Next 2025, Google continued to reaffirm its commitment to partners, implying there will be more alignment between field sales and partners to ensure customers are matched with the right ISV or global systems integrator (GSI), a strategy many other cloud providers have tried to employ.
 
When it comes to the crucial aspect of training, partners clearly see the role Google Cloud plays in AI, and some of the company’s largest services partners, including Accenture, Cognizant, Capgemini, PwC, Deloitte, KPMG, McKinsey & Co., Kyndryl and HCLTech, have collectively committed to training 200,000 individuals on Google Cloud’s AI technology. Google has invested $100 million in partner training over the past four years, and as highlighted in TBR’s Voice of the Partner research, one of the leading criteria services vendors look for in a cloud partner is the willingness to invest in training and developing certified resources.

Google Cloud wants partners to be the AI agent orchestrators

As previously mentioned, Vertex AI is a key component of Google Cloud’s AI software stack. At Next 2025, Google Cloud introduced a new feature in Vertex called the Agent Development Kit, an open-source framework for building multistep agents. Google Cloud is taking steps to ensure these agents can be seamlessly connected regardless of the underlying framework, such as by launching Agent2Agent (A2A), an open protocol similar to those introduced by model providers like Anthropic.
 
Nearly all of the previously mentioned GSIs, in addition to Boston Consulting Group (BCG), Tata Consultancy Services (TCS) and Wipro, have contributed to the protocol and will be supporting implementations. This broad participation underscores the recognition that AI agents will have a substantial impact on the ecosystem.
 
New use cases will continue to emerge where agents are interacting with one another, not only internally but also across third-party systems and vendors. With the launch of the Agent Development Kit and the related protocol, Google Cloud seems to recognize where agentic AI is headed, and for Google Cloud’s alliance partners, this is an opportune time to ensure they have a solid understanding of multiparty alliance structures and are positioned to scale beyond one-to-one partnerships.
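The skill-based routing that makes such agent-to-agent interaction possible can be sketched as follows. This is a hypothetical illustration of the general pattern; the class names and message shapes are invented for this sketch and do not reflect the actual Agent Development Kit API or the A2A wire protocol.

```python
# Hypothetical sketch of skill-based task routing between agents. Each agent
# publishes a "card" advertising its skills; an orchestrator delegates a task
# to the first agent advertising the required skill. Illustrative only; not
# the real A2A specification.
from dataclasses import dataclass
from typing import Callable


@dataclass
class AgentCard:
    """Advertises what an agent can do, so peers can discover and route to it."""
    name: str
    skills: list[str]


@dataclass
class Agent:
    card: AgentCard
    handler: Callable[[str], str]

    def handle(self, task: str) -> str:
        return self.handler(task)


class Orchestrator:
    """Routes each task to a registered agent advertising the needed skill."""

    def __init__(self):
        self.agents: list[Agent] = []

    def register(self, agent: Agent) -> None:
        self.agents.append(agent)

    def delegate(self, skill: str, task: str) -> str:
        for agent in self.agents:
            if skill in agent.card.skills:
                return agent.handle(task)
        raise LookupError(f"no agent advertises skill {skill!r}")


orch = Orchestrator()
orch.register(Agent(AgentCard("summarizer", ["summarize"]), lambda t: f"summary of {t}"))
orch.register(Agent(AgentCard("translator", ["translate"]), lambda t: f"translation of {t}"))

print(orch.delegate("translate", "contract.pdf"))  # translation of contract.pdf
```

The discovery step is the important part: because capabilities are advertised rather than hard-coded, agents built on different frameworks or hosted by different vendors can interoperate, which is precisely the multiparty alliance dynamic partners need to prepare for.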

Final thoughts

At Next 2025, Google reportedly announced over 200 new innovations and features, but developments in high-powered compute, hybrid cloud and security, in addition to ongoing support for partners, are particularly telling of the company’s plans to capture more AI workloads within the large enterprise. Taking an end-to-end approach to AI, from custom accelerators to a diverse developer stack that will let customers build their own AI agents for autonomous work, is how Google Cloud aims to protect its already strong position in the market and help lead the shift toward AI inferencing.
 
At the same time, Google Cloud appears to recognize its No. 3 position in the cloud market, significantly lagging behind AWS and Microsoft, which are getting closer to each other in IaaS & PaaS revenue. As such, taking a more active stance on interoperability to ensure AI can work within a customer’s existing IT estate, and ensuring that the partners with enterprise relationships are the ones orchestrating that AI, will help Google Cloud chart its path forward.