Cloud Ecosystem Report

TBR Spotlight Reports represent an excerpt of TBR’s full subscription research. Full reports and the complete data sets that underpin benchmarks, market forecasts and ecosystem reports are available as part of TBR’s subscription service. Click here to receive all new Spotlight Reports in your inbox.

 

AI data needs raise expectations about how services vendors and hyperscalers should be thinking about their relationships, with region-specific guardrails further testing joint GTM durability

Key trends
Hyperscalers’ dispersion across the globe requires these vendors, along with their services partners, to carve out new operating models that prioritize both go-to-market success and adherence to local requirements. This balance translates into a case-by-case — or rather, country-by-country — approach with local partnerships being the common thread. This is especially true for hyperscalers competing for local market share — particularly in the European Union (EU) — to the extent of relinquishing operational control (e.g., Microsoft Bleu), creating a direct opportunity for regional system integrators (SIs) and local infrastructure operators. While global SIs (GSIs) are also in contention, they must account for these vendors as they build out local presence, along with their commercial, staffing and partner models. Meanwhile, regional legislation is pushing hyperscalers to commit to investment pledges to ensure business continuity, with implications for services partners and buyers, further testing the limits of their relationships.

Go-to-market strategy
Enterprise buyers are becoming increasingly conflicted in their expectations of how vendors can best support their technology needs. When it comes to interoperability, many customers look to leverage third-party vendors, and a smaller percentage expect their cloud vendors to address these concerns directly. At the same time, buyers continue to identify technology expertise as a key skills gap in vendors’ value proposition across regions. These dynamics are further amplified in vendors’ regional go-to-market strategies, especially when it comes to accounting for the role of AI and niche vendors that bring specialized knowledge. In a nutshell, vendors cannot rely on a one-size-fits-all AI ecosystem strategy across regions. Success will require region-specific approaches: IP-led initiatives in APAC, orchestration frameworks in Europe, and startup-centric marketplaces in the Americas. All must be underpinned by interoperable APIs and strong governance to help IT services providers capture and monetize local demand. Executing against such expectations while continuing to rely on a traditional labor-arbitrage model will test professional services firms’ readiness to transform their own operations. It will also test their ability to maintain trust with hyperscalers, which continue to explore opportunities to drive professional services revenue by using agentic AI to simplify the sales process and marketplace.

Vendors
In addition to launching bespoke operating models and investing in local infrastructure, hyperscalers are building out the infrastructure layer to support workload portability and management capabilities that help customers adhere to sovereignty regulations. Google Distributed Cloud is probably the most sovereignty-forward example, but similar comparisons can be made for both Microsoft and Amazon Web Services (AWS). Security is another critical area of hyperscaler investment, whether it is Microsoft’s new deputy chief information security officer (CISO) role for Europe or Google’s continued effort to expand Mandiant’s assets to prevent breaches. Orchestrating these evolving offerings into a cohesive IT estate will be an opportunity for services vendors, especially when deciding how to configure them in a way that is suitable for country-specific needs. Services — including migration, implementation, consulting and advisory — all play a role in navigating the increasingly complex regulatory and security environment, requiring broad hiring in the EU, where the opportunities are the greatest.



Data location will remain a leading barrier to cloud adoption, but interoperability and breaking down data barriers across platforms will present the bigger opportunity for services vendors

As highlighted in TBR’s 2H24 Cloud Infrastructure & Platforms Customer Research, data location ranked as the second-highest cloud pain point after security, with 40% of respondents expecting their cloud vendors to directly address these concerns — largely due to vendors doing a better job of making customers aware of the various data center hosting and encryption options. Conversely, when it comes to interoperability, many customers will leverage third-party vendors, and a smaller percentage expect their cloud vendors to address these concerns directly. This speaks to both the skills that services-led firms have amassed across multiple technology platforms and the high degree of lock-in the hyperscalers still create across their infrastructure. That said, from a technology perspective, vendors are doing a better job of integrating their offerings, particularly in the area of agentic AI, which necessitates more robust data sharing. As more open frameworks, including Google’s A2A (Agent2Agent), mature and become enterprise-ready, they could create a needed level of standardization that GSIs can leverage to build new agents alongside their ISV partners. Deloitte’s partnership with Google Cloud to build ServiceNow-specific agents on A2A is a good example.

“It [sovereign cloud] helps, significantly helps. A completely separated, air-gapped environment different from the regular public cloud itself. And Switzerland, by the way, is a great example. AWS put up two regions in Switzerland. It’s a tiny country. But even then, they had two regions to satisfy this condition of resilience, etc.” — Managing Director (Firmwide) & Chief Data Architect, Financial Services

Cloud vendor insights excerpt

Microsoft’s Cloud Services & Ecosystem Strategy

Microsoft Cloud’s Latest Ecosystem Initiative (Source: TBR 1H25)

Microsoft Cloud’s Estimated Ecosystem Statistics for 1H25 (Source: TBR)

 

Private Cellular Networks Market Forecast


 

While the 5G PCN ecosystem is maturing, it remains underdeveloped compared to older technologies such as Wi-Fi and LTE, slowing the pace of adoption

Robust Wi-Fi and LTE ecosystems, coupled with an underdeveloped 5G-compatible device ecosystem and relatively higher costs, hinder private 5G adoption

The private 5G network market will see robust growth through this decade as a wide range of industries and governments adopt the technology. However, TBR now projects the market will reach $5.3 billion in 2030, down dramatically from our October 2022 forecast of $15 billion in 2030. TBR still believes the private 5G network market will ultimately be several times larger than the projected peak of the private LTE market, but the market is taking much longer to scale than previously expected.

The private 5G network market is challenged by enterprises viewing Wi-Fi and/or LTE as good enough for most non-mission-critical use cases. 5G (including infrastructure as well as endpoint devices and modules) remains far more expensive than Wi-Fi, and enterprises are more comfortable using Wi-Fi; most enterprises choose Wi-Fi as the primary connectivity medium for their private network, with private cellular typically utilized for internet redundancy, backup and failover. Essentially, enterprises have more clarity around LTE and Wi-Fi and are uncertain about 5G PCN ROI, causing them to lean toward existing options.

The limited selection of 5G-compatible endpoint devices (excluding smartphones) remains one of the greatest impediments to private 5G network adoption among enterprises. Ultimately, the device ecosystem for 5G needs to become broader and more dynamic to more closely resemble the device ecosystems for LTE and Wi-Fi and to provide greater selection and lower costs to adopters.

The slow development of the PCN market is partially due to vendor offerings that are not tailored to the enterprise and require trained resources to manage what are effectively scaled-down versions of communication service provider (CSP) RAN infrastructure. However, firms such as Celona are increasingly coming to market with lightweight, Wi-Fi-like PCN solutions that are built for enterprises and do not require specialized labor resources to roll out and manage. Incumbent telecom vendors are also scaling down their offerings to compete with Celona. These innovations will help alleviate this slow development over the course of the forecast.

U.S. will overtake China as the highest-spending country on 5G PCNs, partially due to maturation of the CBRS ecosystem

China has led the market in 5G PCN spend since the market’s inception, but TBR estimates the U.S. will outspend China in 2026. TBR expects the maturing CBRS ecosystem in the U.S. to contribute to growth. Vendors are increasingly coming to market with CBRS-based solutions to meet demand. In September Ericsson debuted Ericsson Private 5G Compact, its scaled-down CBRS-based solution, which followed Nokia’s October 2023 launch of a scaled-down version of its Digital Automation Cloud (DAC) PCN solution called DAC Private Wireless Compact. These solutions are aimed at small and midsize industrial sites, carpeted office environments, and campuses — areas where the traditional private 5G solutions from these vendors would be unnecessarily large and expensive. Ericsson and Nokia are the suppliers for some of the best reference cases for CBRS-based 5G PCN deployments, including Tesla (Ericsson) and Deere & Co. (Nokia) factories. In carpeted enterprises, Celona has made significant inroads, thanks to its lightweight PCN solutions that aim to make PCNs as easy and cost-effective to deploy as Wi-Fi.

5G CBRS momentum should spur growth for minor players in the market. For example, Samsung’s alliance with Amdocs focuses on PCN opportunities that use CBRS spectrum, with Amdocs providing systems integration (SI) in joint engagements. Although DISH has gained minimal traction in PCN thus far, the vendor will benefit from its vast CBRS PAL (priority access license) spectrum licenses, which cover 98% of the U.S. population; DISH won the most CBRS licenses in the 2020 auction.



TBR estimates the private 5G network market will grow at a slower rate than the industry originally expected, reaching $3.5B in 2028, due to persistent ecosystem maturity challenges

Private 5G Network Infrastructure Spend for 2023 through 2028 Estimate (Source: TBR)

TBR Assessment: TBR expects the private 5G market to grow at a more gradual rate and take longer to reach maturity than the industry originally expected as compatible endpoint devices and key 3GPP (3rd Generation Partnership Project) standards are slowly commercialized.

Most non-CSP entities are being selective about where and how to use 5G. The more mission-critical the environment, the more likely 5G will be utilized. In instances where reliability, speed and/or security are the top concerns, companies are prioritizing 5G.

Though enterprise and government interest in 5G remains robust, the timing of deployments is contingent on ROI and the availability of compatible endpoint devices. The fact that Wi-Fi remains a legitimate alternative to cellular technologies for private networks, mitigating some of the need for 5G, is also a headwind.

Private 5G spend will lag private LTE spend through the forecast as the market is hampered by a slowly maturing device ecosystem and lack of certainty around ROI

Global Private Cellular Networks 5G & LTE Spend for 2023 through 2028 Estimate (Source: TBR)

TBR Assessment: TBR expects growth in the private LTE market to slow, with spend ultimately declining, during the remainder of the forecast period, but the slowdown will be more than offset by robust growth in private 5G investment as enterprises and governments adopt the next-generation technology for a broad range of use cases.

Private LTE has been in use for over a decade, and there is a robust vendor, device and application ecosystem that underpins this market, which reduces costs. LTE is sufficient in handling many popular and proven use cases for PCN, reducing the need for 5G. Enterprise CIOs who adopt LTE are reassured about achieving ROI, while 5G ROI is unproven.


Another reason LTE remains the dominant technology is that some vendors offer software upgradability of private LTE solutions to 5G. This approach optimizes TCO and entices enterprises to commit to their platforms before they adopt 5G. Due to these dynamics, TBR expects 5G spend to lag LTE spend through the forecast period.

Telecom AI Market Landscape


 

CSPs have an opportunity to capture meaningful value from AI, but realizing this opportunity requires action and investment

Most CSPs look for quick-hit ROI from GenAI; there is still hesitancy to commit to larger-scope AI initiatives that require significant upfront investment

Leading CSPs are all dabbling with GenAI, with some use cases already commercially scaling, especially in customer care and BSS.

CFOs at some leading CSPs are getting directly involved in AI programs to give these initiatives visibility and to ensure they pay off, with a focus on quick ROI.

So far, CSP managers like what they are seeing and are hopeful about the prospects of AI delivering significant outcomes, but this hope has yet to translate to large-scale investments tied to broad transformations. Rather, CSPs are focused on more tactical, smaller-scope solutions that address specific pain points or that promise fast ROI.

TBR believes it will take some time for most CSPs to evolve this investment behavior, but some other considerations also need to be factored into the equation, such as uncertainty about governance, regulation and data efficacy. For example, AI’s effectiveness correlates closely with the volume and quality of the data input into the model. The reality is that data inside CSP organizations is usually highly disaggregated in silos and on legacy systems, which poses a major challenge to undergoing large-scale AI transformations.

Realizing the $170 billion total opportunity TBR estimates AI presents the telecom industry by 2030 requires CSPs to act differently

AI presents a once-in-a-generation opportunity for the telecom industry to achieve two key objectives: generate new revenue and reduce costs.

However, there is real risk that most CSPs globally will miss out on the AI opportunity due to cultural (behavioral) and regulatory encumbrances, such as long decision cycles, an unwillingness to invest what it takes to win and general risk aversion. These encumbrances, which are endemic to the telecom industry, have resulted in CSPs largely missing out on every major opportunity of the last two decades (e.g., cloud computing, video streaming, digital advertising), many of which were won by the hyperscalers that were willing and able to take these risks.

Though leading CSPs have been investing in AI, TBR notes that most of these investments seem to be myopically focused on quick-hit wins, which is acceptable in the short term, but true opportunity capture will be contingent on broader-scope initiatives, coupled with upfront investment.

CSPs cannot afford to miss out on another opportunity, especially one that has such transformational qualities as AI. Getting AI right should be of paramount concern to CSPs, as competitors that transform with AI may obtain unapproachable differentiation compared to CSPs that do not invest in AI. Said differently, CSPs that do not get AI right may not be competitive anymore, leading to longer-term questions about their viability.



CFOs need to evangelize AI across their organizations to seize this critical opportunity. True digital transformation with AI will require significant upfront investment, and the payoff of some investments takes longer to realize than that of others. CFOs need to balance investment levels with outcomes, but more support is needed from the top to approach AI in a broader manner.

TBR notes that of the $170 billion CSP AI opportunity by 2030, an estimated 50% is likely to be realized in APAC, where there are many CSPs aiming to take a leadership role in the AI market. The lack of domestic hyperscalers (outside of the U.S. and China) and government desire to have sovereignty over technologies of national importance give CSPs an opportunity to fill in the gaps in the market. China-based CSPs are already diving headfirst into the AI opportunity, and they are likely to generate billions of dollars in new revenue and save billions of dollars in costs from AI by 2030.

Telcos have a data usability problem, which could slow the pace of AI adoption in their businesses; addressing this issue requires significant additional investment

CSPs have a data preparedness problem, which is an underestimated issue and will be costly to rectify

The efficacy of and value provided by AI are contingent on the quality, type and volume of data the AI model is trained on. It is therefore imperative that CSPs ensure their data is ready for AI and that the data is managed and governed in a sustainable way. This is especially pertinent now that GenAI can train on a broad range of structured and unstructured data.

The reality is that CSPs’ data does not reside in one data lake. Rather, it is highly disaggregated and siloed across the organization and, in some cases, may be incomplete. There are also restrictions (e.g., regulatory, privacy) and security considerations around certain types of data, which hinder the utility and accessibility of the data, creating another major challenge that companies will have to address before they can appropriately leverage AI.

Individual CSPs have access to petabytes of data, but most of this data resides in legacy systems and/or is subject to other restrictions, creating challenges around pooling and preparing the data to be leveraged for AI.

To rectify this issue, CSPs will need to clean up their technical debt and assemble their data in a single, unified platform. This horizontal data layer will be required to run and realize AI-driven outcomes at scale. Most CSPs will likely need assistance with this task, opening up opportunities for the vendor community.

Addressing this issue will also require significant investment and human resources, and TBR believes CSPs that are in the initial stages of their GenAI initiatives have underestimated or overlooked the problem.

TBR believes this issue could extend CSPs’ timelines for AI commercialization. Vendors and hyperscalers will try to mitigate the data issue by training AI models on their own datasets, which could also be fed by partners’ data, but there is a risk that this will not be sufficient and that AI models will need to be tuned to a specific CSP’s needs to realize desired outcomes. TBR also believes it will be common for stakeholders in the telecom ecosystem to be protectionist regarding their data, not wanting to (or being able to) share their data for fear of regulatory reprisals and a nullification of competitive advantages.

Sovereign cloud requirements have the potential to be a meaningful revenue driver for CSPs in select regions and countries

Countries (e.g., China, Saudi Arabia, Qatar, United Arab Emirates, South Korea) and alliance entities (e.g., NATO) that are more impacted by geopolitics and/or have strong data privacy rules and regulations (e.g., European Union [EU] member countries) are looking into establishing national AI infrastructure (e.g., data centers), also referred to as sovereign cloud.

Sovereign clouds would theoretically be set up and managed by domestic players for national security reasons and could be an area of opportunity for CSPs that reside in those countries or regions, whereby the CSP might host the cloud and provide some value-added services to manage those environments on behalf of the government and related entities.

However, TBR ultimately believes it is more likely that U.S. hyperscalers will participate in some way in the AI value chain for most countries given the broad scope of their involvement and dominance in the global AI ecosystem, despite intentions for national sovereignty. This includes CSPs partnering with hyperscalers to jointly develop, operate and manage cloud resources in-country, as evidenced by Orange’s use of Microsoft’s cloud and AI technology for its sovereign cloud joint venture with Capgemini (called Bleu) in France.

TBR estimates the potential annual AI-related opportunity for CSPs will reach $170B by 2030, approximately 53% of which is new revenue and 47% is cost efficiencies

CSPs have been largely sidelined from the new revenue opportunity presented by AI since the emergence of GenAI in 4Q22, but this is starting to change, evidenced by significant deals won by Lumen and Zayo to provide transport between data centers for AI workloads. There are also some green shoots of demand for hyperscalers to leverage CSPs’ network facilities (e.g., wire centers and central offices) and other real estate assets to colocate AI infrastructure closer to end users, as evidenced by efforts being made by Verizon and AT&T.

All CSPs that are investing in AI currently expect to reap cost efficiencies from the technology. The new revenue opportunity is more nuanced and is CSP- and market-specific in nature. APAC-based CSPs are likely to be the largest beneficiaries of new revenue from AI due to government protections, stimulus and cultural orientations toward early adoption of emerging technologies.

Total Annual Potential Value of AI to CSPs by 2030 (Source: TBR)

IT Infrastructure Market Forecast


 

Organizations will continue to prioritize spending on AI infrastructure

Growth drivers

  • Investment in, renewed focus on and adoption of enterprise AI are increasing demand for high-performing infrastructure.
  • Private and hybrid cloud deployments increase demand for hyperconverged infrastructure form factors.
  • Organizations are prioritizing investments in denser and more energy-efficient infrastructure solutions to make way for AI.
  • Edge deployments are creating net-new workload opportunities for OEMs.

Growth inhibitors

  • The enterprise and SMB spend environment remains cautious and fragile as trade wars erupt.
  • ODMs are largely capturing cloud growth as they produce low-cost, custom, commoditized hardware for hyperscalers.
  • Commodity hardware and the popularity of software-defined infrastructure reduce OEMs’ pricing power.
  • Heightened demand for InfiniBand threatens traditional Ethernet-based networking solutions providers.

 

IT Infrastructure Market Forecast for 2024-2029 (Source: TBR)


 


 

TBR predicts that the top 5 covered IT infrastructure OEMs will achieve double-digit revenue growth from 2024 to 2029, but their respective market shares will decline

IT Infrastructure Market Share for 2024 and 2029 (Source: TBR)


 

Despite shipping over $11B in Blackwell products in 4Q24, NVIDIA is racing to increase production to meet the market’s seemingly insatiable demand for AI servers

Within the OEM market, AI server demand continues to be driven primarily by services providers and model builders, but sovereigns are showing increased interest in OEMs’ AI infrastructure solutions, presenting the OEMs with a major opportunity. Additionally, although enterprise demand for on-premises deployments of AI infrastructure remains soft, especially for the most powerful and thus highest-revenue-generating systems, the industry expects enterprise AI demand will accelerate throughout 2025 and 2026 as customers pursuing tailored AI solutions increasingly transition from the prototyping phase to the deployment phase.

TBR predicts Dell will lead covered vendors in terms of storage revenue growth due in part to increased attached sales opportunities associated with the company’s growing server business

Key takeaways

TBR forecasts the storage market will grow at a 13.4% CAGR from 2024 to 2029 as organizations across a variety of industries invest in modernizing and hybridizing their storage estates to support current and future workloads, including those related to AI. Organizations’ data volumes will continue to grow over the next five years as the rise of AI further underscores the value behind organizations’ proprietary data.
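As a sanity check on what a 13.4% five-year CAGR implies, the compound-growth arithmetic can be sketched as follows; the dollar figures below are illustrative placeholders, not TBR data:

```python
def cagr(begin_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly rate that turns
    begin_value into end_value over the given number of years."""
    return (end_value / begin_value) ** (1 / years) - 1

def grow(begin_value: float, rate: float, years: int) -> float:
    """Apply a constant annual growth rate for a number of years."""
    return begin_value * (1 + rate) ** years

# A 13.4% CAGR over the five years from 2024 to 2029 compounds to roughly
# an 87% cumulative increase: a hypothetical $100B market would reach ~$187.5B.
end_2029 = grow(100.0, 0.134, 5)
print(round(end_2029, 1))                   # cumulative effect of the rate
print(round(cagr(100.0, end_2029, 5), 3))   # backing out the rate recovers 0.134
```

The same arithmetic explains why small differences in assumed CAGR compound into large differences in end-of-forecast market size.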

 

The storage market typically lags trends in the traditional server market, as is presently the case. However, as organizations increasingly transition from prototyping to deploying AI solutions, data management and orchestration have risen toward the top of key customer pain points. Recognizing this, storage OEMs are selling customers on the capabilities of their storage platforms, comprising software, adjacent services and sometimes hardware. Additionally, storage OEMs are forming partnerships with hyperscalers and other ecosystem players, like NVIDIA, to have their storage solutions validated and certified for operability and AI system reference architectures. TBR believes Dell and Hewlett Packard Enterprise (HPE) are well positioned for growth in storage over the next five years due to their strong data management capabilities and increased opportunities around attaching storage sales to server deals.

 

TBR estimates that in 2024 Lenovo overtook NetApp for the third-largest storage market share among covered vendors. The storage market has become more of a priority for Lenovo in recent years due to the segment’s higher margins, as evidenced by the company’s recently announced acquisition of Infinidat. TBR forecasts NetApp will outperform Lenovo in five-year storage revenue CAGR, but Lenovo will retain its positioning in the market among covered vendors.

 

Storage Revenues and Market Share of Top 5 Vendors for 2024 and 2029 (Source: TBR)

 

IT infrastructure OEMs are expanding manufacturing capabilities in Saudi Arabia

EMEA market changes and vendor activities

Relative to the U.S., European economies have had more difficulty recovering from the pandemic; however, looking ahead to 2029, TBR forecasts covered vendors’ IT infrastructure revenue derived from the EMEA region will grow at a 12.9% CAGR due in large part to AI. While the EMEA market pales in comparison to that of the Americas, TBR believes the region’s strong growth will be driven both by rising AI adoption — especially among sovereigns — and by rapidly increasing technology and infrastructure investments in countries like Saudi Arabia.

 

TBR believes HPE is among the best-positioned covered vendors in the EMEA geography. Sovereigns in the region already have a strong working relationship with HPE due to their legacy investments in high-performance computing based on Cray systems, and the company has made some of the strongest commitments among covered vendors to develop infrastructure manufacturing capacity in Saudi Arabia and the rest of the Middle East.

AI & GenAI Model Provider Market Landscape


 

Interest in AI capabilities has not waned as enterprises view the technology as critical to long-term competitive positioning

The buzz around GenAI persists as enterprise interest is leading to adoption. Yet it is still early days, and many enterprises remain in exploration mode. Some use cases, such as data management, customer service, administrative tasks and software development, have already moved from the proof-of-concept stage to production. Still, the exploration phase of AI adoption will be a slow burn as enterprises seek opportunities beyond these low-hanging fruit. As seen in the graph to the right, most enterprises are evaluating AI qualitatively, forgoing quantitative measures in an effort to keep pace with peers, on the assumption that the technology will bring transformational improvement to business operations.

 

Source: TBR 2H24

Reasoning models excel at performing complex, deterministic tasks and have become the most popular models at the back end of agentic AI

The capability improvement brought by the iterative inferencing process has made reasoning models the focal point of frontier model research. In fact, most of the models sitting atop established third-party benchmarks are reasoning models, except for OpenAI’s GPT-4.5, which the company stated would be its last nonreasoning LLM. Put simply, the difference in output quality is too pronounced to ignore, especially regarding complex, deterministic tasks. As seen in the graph, reasoning models outperform their nonreasoning predecessors across the board, with the greatest distinction appearing in coding and math benchmarks. The strength in complex, deterministic tasks makes reasoning models particularly adept at powering agentic AI capabilities, offering a wider range of addressable use cases and greater accuracy. In addition, reasoning frameworks can be leveraged at any parameter count, with available reasoning models ranging from fewer than 10 billion parameters to more than 100 billion.

 

As SaaS vendors continue to build proprietary, domain-specific SLMs [small language models] to power their agentic capabilities, incorporating reasoning frameworks will be an important part of their development strategies. Although the capabilities of reasoning models are impressive, the models bring new challenges and are not necessarily the best choice for every application.

 

Simple content generation and summarization, for instance, do not necessarily require iterative inferencing. Moreover, the greater compute intensity caused by repeated processing at the transformer layer will compound existing challenges to scaling AI adoption. Not only will these models be more expensive to run for the customer, but they will also exacerbate the persistent supply shortages facing cloud infrastructure providers. Microsoft has noted infrastructure constraints as a headwind to AI revenue growth in the past several quarters, and the emerging need for test-time compute adds to these infrastructure demands. As discussed in TBR’s special report, Sheer Scale of GTC 2025 Reaffirms NVIDIA’s Position at the Epicenter of the AI Revolution, NVIDIA’s CEO Jensen Huang stated that reasoning AI consumes 100 times more compute than nonreasoning AI. Of course, this was a highly self-serving statement, as NVIDIA is the leading provider of GPUs powering this compute, but even if the figure is inflated, the difference remains one of orders of magnitude. For the use of reasoning models to continue scaling, this high compute intensity will need to be addressed.

 


 

SaaS vendors will need to get on board with the new Model Context Protocol to ensure customers can use their model of choice

SaaS vendor strategy assessment

From a strategic positioning perspective, TBR does not expect the rising popularity of the Model Context Protocol to have an outsized impact, primarily because we anticipate all application vendors will adopt the framework to ensure customers can leverage the model of their choice. Furthermore, cloud application vendors are positioned to benefit from the standardization of API calls between models and their workloads. Through a standardized API calling framework, these vendors will be better positioned to drive cost optimization and improve workload management for embedded AI tools.

Recent developments

The Model Context Protocol is becoming the standard: The Model Context Protocol (MCP) has been steadily gaining popularity following its release by Anthropic in November 2024. At its core, MCP aims to address the emerging challenge of building dedicated API connectors between LLMs and applications by introducing an abstraction layer that standardizes API integrations. This abstraction layer — commonly referred to as the MCP server — would establish a default method for LLM function calling, which software providers would need to incorporate into their applications so that LLMs can access their data and functions.

 

This standardization offers several benefits for model vendors, such as eliminating the need to build individual connectors for each service and promoting a modular approach to AI service integration, potentially unlocking long-term advantages in areas such as workload management and cost optimization.
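The abstraction-layer idea can be illustrated with a minimal sketch: instead of building a bespoke connector per model/application pair, the application registers its functions once behind a standardized call format that any model can target. This is a conceptual illustration of the pattern, not the actual MCP wire protocol (which is JSON-RPC based), and the tool name and payload shape here are invented for the example.

```python
# Conceptual sketch of the abstraction-layer pattern behind MCP: the
# application exposes functions under stable names, and any LLM can invoke
# them through one standardized call format. Not the real MCP wire protocol.
import json
from typing import Callable

TOOLS: dict[str, Callable] = {}

def tool(name: str):
    """Register an application function under a stable, model-facing name."""
    def wrap(fn: Callable) -> Callable:
        TOOLS[name] = fn
        return fn
    return wrap

@tool("crm.lookup_account")  # hypothetical tool name for illustration
def lookup_account(account_id: str) -> dict:
    # Stand-in for a real SaaS back end.
    return {"id": account_id, "status": "active"}

def handle_call(request_json: str) -> str:
    """Dispatch a standardized tool call, whichever LLM produced it."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](**req["arguments"])
    return json.dumps({"result": result})

print(handle_call('{"tool": "crm.lookup_account", "arguments": {"account_id": "42"}}'))
```

Because every tool is reached through the same dispatch path, cost controls and workload management can be applied in one place rather than per connector, which is the long-term advantage noted above.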

 

For SaaS vendors, there is little reason to resist the shift toward MCP, and its growing popularity may make adoption inevitable. Application vendors like Microsoft and ServiceNow have already begun implementing the protocol by establishing MCP servers for the Copilot suite and Now Assist, respectively, and TBR expects other vendors to follow.

 

It is important to recognize, however, that this approach better suits vendors taking a model-agnostic stance — meaning they aim to empower enterprises to use any LLM to power their agentic capabilities. Vendors that are less model-agnostic are a possible exception. For instance, Salesforce’s emphasis on proprietary models reduces the need for MCP and favors the company’s focus on native connectors between Customer 360 workflows and xGen models.

 

Ultimately, TBR expects Salesforce to adopt MCP, but there is an important distinction in how different SaaS vendors may approach standardization. Today, the BYOM [bring your own model] philosophy remains a priority for Salesforce, but if the company were to eventually push customers to use its proprietary models exclusively with Customer 360, its commitment to MCP could be deprioritized in favor of tighter customer lock-in.

Google enhances AI capabilities with the launch of Gemini 2.5 Pro, revolutionizing search functionality, healthcare solutions and multimodal content generation

Google remains differentiated in the AI landscape through the deep integration of its proprietary models across a broad product ecosystem, including Search, YouTube, Android and Workspace. Although many competitors focus on niche capabilities or open-source development, Google positions Gemini as a comprehensive, multimodal foundation model designed for wide-scale consumer and enterprise adoption. Google’s infrastructure, proprietary TPUs (Tensor Processing Units), and access to vast and diverse data sources provide a significant advantage in training and deploying next-generation models. Gemini 2.5 Pro is a testament to this strength, offering the best performance and largest context window available on the market. Although TBR expects the top spot to continue changing hands, we believe Google’s models will remain among the frontier leaders for years to come.

OpenAI advances AI development with GPT-4.5, cutting-edge agent tools and a premium ChatGPT Pro subscription to expand capabilities and improve user experiences

OpenAI is the most valuable model developer in the market today, largely due to the company’s success in productizing its models via ChatGPT. The mindshare generated by ChatGPT is benefiting the company’s ability to reach custom enterprise workloads, though OpenAI must be mindful of the widening gap in price to performance relative to peers. From a sheer performance perspective, TBR believes the company’s emphasis on securing compute infrastructure via the Stargate Project, as well as its ongoing partner initiatives to gain access to high-quality training data, will ensure its models remain near the top of established third-party benchmarks over the long term.

ServiceNow Ecosystem Report


 

ServiceNow’s evolving value proposition, centered on seamless tech integration and sales alignment, provides a strong backbone in its alliances’ strategy, appealing to client-mindshare-hungry partners

Key trends

The need to unlock data and break down integration barriers between the back, middle and front office is as relevant as ever as customers look to deploy generative AI (GenAI) within their workflows. Acting as an abstraction layer on top of the enterprise system of record (SOR), ServiceNow is in a strong position to message around business transformation and to have more outcome-based conversations with clients, which aligns with IT services companies’ and consultancies’ business models. IT services companies and consultancies that have experience reducing organizations’ technical debt and implementing systems like SAP, Workday and Salesforce are well positioned to use ServiceNow to deliver added value. As evidenced by its introduction of consumption-based pricing for AI Agents, ServiceNow is focused on selling value as part of its GenAI portfolio, which is certainly in step with the market. Still, outcome-based pricing may be something for ServiceNow to consider to further align with the global systems integrator (GSI) ecosystem and stay ahead of its growing list of SaaS competitors.

Go-to-market strategy

As ServiceNow continues to grow and pursue new market opportunities, the company is doing a better job of enabling the ecosystem in both sales and delivery. Compared with some of its SaaS peers, ServiceNow is less established in the market, underscoring a clear need to leverage partners that have C-Suite and line-of-business (LOB) relationships and can articulate ServiceNow’s value as it exists alongside core enterprise applications. Despite its rapid expansion into more SaaS markets, ServiceNow remains a platform company at its core, but being a true platform company requires an ecosystem that can build on that platform. We suspect the Build motion, where partners sell custom, often industry-specific offerings they develop on the Now Platform, will become increasingly critical, helping ServiceNow capitalize on opportunities.

Vendors

Given its smaller base, ServiceNow is unsurprisingly among the fastest-growing practice areas within the GSIs, with average practice-related revenue up 12.9% year-to-year in 4Q24. Several partners have more than $1 billion in commitments with ServiceNow, and in early 2025 Infosys and Cognizant joined their competitors in the Global Elite tier of the ServiceNow Partner Program. Cognizant is also the inaugural partner for ServiceNow’s Workflow Data Fabric platform, a key offering that rounds out ServiceNow’s portfolio, offering zero-copy integrations with key platforms, including Google Cloud and Oracle, to feed ServiceNow’s AI Agents. On the technology side, ServiceNow is also strengthening its partnerships with hyperscalers beyond Microsoft, which could unlock new points of engagement for services partners as they start to embrace the multiparty alliance structure. For example, Deloitte is looking at how it can build agents for ServiceNow-specific use cases, with an immediate focus on the front office, on Google Cloud Platform (GCP), while Accenture included ServiceNow on its partner list for the recently announced Trusted Agent Huddle for agent-to-agent interoperability.

Emergence of multipartner networks will test vendors’ trustworthiness and framework transparency

Prioritizing the needs of partners and enterprise buyers over internal growth aspirations will position vendors across the ICT value chain as leading ecosystem participants. It sounds like an idea born in marketing, but positive digital transformation (DT) outcomes will require multiparty business networks that bring together the value propositions of players across the technology value chain. By leading with their core competencies, players can establish needed trust among partners and customers alike, increasing their competitiveness against other players that have spread themselves too thin with aspirations of being end-to-end DT providers.

 

To better understand these approaches, we have identified four cloud ecosystem relationship requirements that guide how the parties work together.

 

TBR Ecosystem Value Chain (Source: TBR)

TBR has identified four cloud ecosystem relationship requirements that guide how the parties work together

ServiceNow ecosystem relationship best practices

1. Consider the PaaS layer and its role in the SaaS ecosystem: As discussed throughout our research, the value is shifting from “out of the box” to “build your own,” and customers clearly believe building their own custom solutions around a microservices architecture will give their business a competitive advantage. Naturally, ServiceNow wants partners to take the lead in Now Assist delivery, but for the GSIs to see value, GenAI has to actually change the business process.

 

2. Drive awareness through talent development efforts: ServiceNow’s growing portfolio outside the core IT service management (ITSM) space is creating new channel opportunities for services partners to capitalize on, compelling them to invest in training and development programs. Gaining the stamp of approval from a ServiceNow certification program enhances services partners’ value proposition, especially in new areas such as the Creator Workflow and Build portion of the ServiceNow portfolio, which positions them to drive custom application and managed services opportunities. Standing out in a crowded marketplace where services and technology providers vie for each other’s attention will heighten the need to invest in consistent messaging and knowledge management frameworks that build buyer trust.

 

3. Prioritize IT modernization ahead of GenAI opportunities and scaling NOW deployment: Some vendors have made GenAI capabilities available only to cloud-deployed back-office suites, meaning customers still on legacy systems must first migrate to the cloud before they can adopt the emerging technology. Partners must account for this modernization prerequisite by prioritizing traditional migration services through broader programs like RISE with SAP if they hope to pursue new opportunities over the long term. Reducing legacy technical debt will also free up resources, both human and financial, which will allow for broader ServiceNow portfolio adoption.

 

4. Set up outcome-based commercial models to scale adoption across emerging areas and protect against new contenders: Aligning commercial, pricing and incentive models with buyer priorities and business outcomes can allow partners to expand addressable market opportunities, especially as scaling GenAI adoption necessitates greater trust in portfolio offerings. ServiceNow’s consumption-based model provides a short-term hedge against potential tech partner disruptors, which may take on risk to offer similar solutions and better align with services partners’ messaging through the use of outcome-based pricing.

 


 

Acting as an abstraction layer, ServiceNow has a unique opportunity to further expand into the back office to address integration pain points but risks further overlapping with its SOR peers

ServiceNow positions as system of action to expose gaps in core system of record

Existing as a platform layer that orchestrates and integrates workflows, ServiceNow has long been able to enter new markets without much head-to-head competition. But this is changing as ServiceNow, now a $10-plus billion company, continues to drive traction with the LOB buyer by challenging much of the fragmentation that exists within front- and back-office systems. Over the past several quarters, ServiceNow has continued to launch new products in areas like talent management, finance and supply chain. One of the company’s biggest moves was in the front office with the launch of Sales & Order Management (SOM), giving customers the ability to use CPQ (configure, price, quote) and guided selling in a single product. Though ServiceNow famously integrates with all of the systems of record, these new innovations could pose a risk to the likes of SAP, Workday and Salesforce, which perhaps lack the platform capabilities to build custom processes that can be tied back to the workflow, at least in a truly modern way. To be clear, ServiceNow is not interested in being a core CRM, ERP or human capital management (HCM) provider, and today acts as a service delivery system. But having customers store their data in the service delivery layer, rather than the core system of record, so they can use that data against a specific workflow is how ServiceNow aims to position as a “system of action.”

DXC Technology’s ServiceNow Ecosystem Strategy in Review

TBR assessment

DXC Technology has an established history and deep expertise within the ServiceNow ecosystem, with a partnership spanning more than 15 years, a talent pool of over 1,800 ServiceNow experts, and a track record of more than 7,200 global implementations with over 350 instances managed worldwide, all of which position the company as a mature and experienced service provider for ServiceNow. Notable client wins, such as with the city of Milan (medical supply delivery during a crisis), Nordex Group (workplace safety management) and Swiss Federal Railways (unified customer inquiry management) underscore DXC’s ability to leverage the partnership to address diverse and critical business challenges across different industries and sectors. These successes highlight DXC’s capacity to translate its deep ServiceNow knowledge and implementation capabilities into tangible business value for its clients, suggesting a well-established and impactful ServiceNow practice.

Strategic portfolio offering

Through a strategic alliance with ServiceNow and bolstered by a dedicated global business group and acquisitions such as Syscom AS, TESM, BusinessNow and Logicalis SMC, DXC delivers a comprehensive range of ServiceNow-focused solutions. This approach enables DXC to digitize processes, enhance user experiences, and transform service management across the full ServiceNow platform, driving business innovation at scale, including specialized solutions such as those for the insurance industry, where DXC has had core competencies and long-lasting customer relationships. DXC’s offerings span enterprise applications transformation, security solutions, and compute and data center modernization, all designed to maximize client efficiency and agility utilizing the ServiceNow platform. The establishment of a new Center of Excellence in Virginia in November 2024, combining DXC’s industry strengths with ServiceNow’s solutions, further solidifies the two companies’ commitment to streamlining AI adoption and delivering cutting-edge solutions.

 

DXC Technology’s ServiceNow Ecosystem Strategy in Review (Source: TBR ServiceNow Ecosystem Report)

 

U.S. Mobile Operator Benchmark


Most operators will sustain wireless service revenue and connection growth in 2025 but face headwinds from macroeconomic challenges and Trump administration immigration policies

Most benchmarked operators sustained service revenue growth in 4Q24, driven by connection growth and higher ARPA

Total wireless revenue from benchmarked U.S. operators increased 4.5% year-to-year to $78.9 billion in 4Q24, mainly due to continued postpaid phone subscriber growth and higher average revenue per account (ARPA). Although the market is maturing, operators are maintaining postpaid phone net additions due to factors including population growth and more businesses purchasing mobile devices for employees. Higher ARPA is being driven by operators increasing connections per account (including via growing fixed wireless access [FWA] adoption), uptake of premium unlimited data plans, and rate increases implemented over the past year.
 
Though most U.S. operators expect to continue to grow wireless service revenue and connections in 2025, they will face headwinds from factors including macroeconomic pressures (including layoffs within the private and public sectors and uncertainty around tariff impacts) and immigration policies under the Trump administration (including mass deportations).

U.S. operators increase focus on cross-selling mobile and broadband services

U.S. operators are focused on advancing their convergence strategies by offering plans bundling mobile and broadband services. The bundles create a stickier ecosystem that reduces long-term churn via the convenience of enrolling in broadband and mobility services from the same provider, as well as by providing discounted pricing compared to purchasing those services separately.
 
Operators including AT&T, Charter, Comcast, T-Mobile and Verizon are growing their ability to offer these bundles via the expanding service availability of their broadband services (including wireline and FWA offerings). Operators are also targeting acquisitions to strengthen their convergence strategies, such as Verizon’s pending purchase of Frontier Communications and T-Mobile’s proposed joint ventures to acquire Metronet and Lumos. Cable operators also have significant opportunity to increase sales of converged services as a relatively low portion of cable broadband customers are enrolled in their service provider’s mobile offering.

AI is providing cost savings and revenue generation opportunities for U.S. operators

U.S. operators are focused on more deeply implementing AI technologies in areas including optimizing customer service and sales & marketing functions as well as enhancing network operations. For instance, deeper AI implementation will help AT&T reach its goal of generating $3 billion in run-rate cost savings between 2025 and 2027, while leveraging AI technologies will help T-Mobile meet its target of reducing the number of inbound customer care calls by 75%.
 
AI will also help operators optimize energy usage, especially as it pertains to network operations. Examples include using AI for optimal, dynamic traffic routing and for determining when to turn radios on and off. AI will also create significant revenue opportunities for operators, especially in providing network and real estate resources to support AI inferencing workloads.
 
For instance, Verizon views telco AI delivery as having a $40 billion total addressable market, and the company has already secured a sales funnel of over $1 billion in business by leveraging its existing infrastructure and resources.

Operators are focused on cost-cutting initiatives, including streamlining headcount and more deeply implementing AI technologies, to improve margins

The impacts of inflation and challenging macroeconomic conditions, such as lower consumer discretionary spending, higher network operations and transportation expenses, and increased labor-related costs, are limiting profitability for U.S. operators. These challenges are leading operators to implement cost-cutting and restructuring initiatives to improve profitability, such as AT&T’s goal of generating $3 billion in savings from 2025 to the end of 2027 through its latest cost-cutting program.
 
Operators are streamlining headcount as part of their cost-cutting initiatives. For instance, about 4,800 employees are expected to leave Verizon by the end of March 2025 as part of the company’s latest voluntary separation program.
 
To increase cost savings and operational efficiencies, operators are more deeply implementing AI technologies in areas including customer service, field technician support and fleet vehicle fuel consumption.
 
T-Mobile is improving profitability, evidenced by its EBITDA margin growing 220 basis points year-to-year to 35.4% in 4Q24, driven by higher revenue and lower network costs, the latter aided by greater merger-related synergies. T-Mobile’s 2025 guidance for core adjusted EBITDA* is between $33.1 billion and $33.6 billion, compared to $31.8 billion in 2024. Service revenue growth as well as cost-cutting initiatives and merger-related synergies will all contribute to higher core adjusted EBITDA.
 
*Core adjusted EBITDA reflects T-Mobile’s adjusted EBITDA less device lease revenues.
 

Graph: Wireless Revenue, EBITDA Margin and Year-to-year Growth for 4Q24 (Source: TBR)



 

T-Mobile continued to lead the U.S. in postpaid phone and broadband net additions in 4Q24 and recently launched new FWA pricing plans

Operators are attracting FWA customers, mainly because FWA offerings have lower price points compared to other broadband services and are available to customers in markets with limited other high-speed broadband options, such as within rural markets. Though consumers account for the bulk of FWA connections, FWA is also gaining momentum among businesses seeking to reduce connectivity expenses and/or companies needing to quickly launch new branch locations, as the technology can be installed faster than fixed broadband.
 
In 4Q24 T-Mobile continued to lead the U.S. in broadband subscriber growth, driven by its FWA services, aided by the company continuing to gain market share against cable companies including Comcast and Charter, which reported steeper broadband customer losses in 4Q24 both year-to-year and sequentially. T-Mobile also reported its highest-ever year-to-year broadband ARPU growth in 4Q24, which was aided by the company revamping its 5G Home Internet and Small Business Internet plans in December.
 

Graph: Total FWA Net Additions for 4Q23-4Q24 (Source: TBR)



 


 

Wireless capex moderated for most U.S. communications service providers (CSPs) in 2024 as they are in the later stages of their 5G rollouts

Verizon’s consolidated capex will increase to a guidance range of $17.5 billion to $18.5 billion in 2025, compared to $17.1 billion in 2024 (higher consolidated capex is mainly due to increased wireline capex to support Verizon’s accelerated Fios build). TBR estimates Verizon’s wireless capex in 2025 will be relatively consistent compared to 2024 as the company will focus on the continued expansion of C-Band 5G services into suburban and rural markets.
 
AT&T’s 2025 guidance for capital investment, which includes capex and cash paid for vendor financing, is in the $22 billion range, consistent with $22.1 billion in capital investment in 2024. Capital investment in 2025 will entail materially lower vendor financing payments compared to 2024, while capex is expected to increase year-to-year in 2025. TBR estimates AT&T’s wireless capex will be about $10.6 billion in 2025, which will help to meet AT&T’s goals, including providing midband 5G coverage to over 300 million POPs by the end of 2026 and completing the majority of its transition to open-RAN-compliant technologies by 2027.
 
T-Mobile’s capex guidance for 2025 is around $9.5 billion, compared to $8.8 billion in capex spent in 2024, with spending focused on continued 5G network deployments as well as investments in IT platforms to enhance efficiency and customer experience.

Enterprise Edge Compute Market Landscape



Post Updated: Aug. 6, 2025

Hyperscalers focus investments on AI workloads, which will likely land in the cloud anyway and thus in some ways foster the edge ecosystem

With the edge AI opportunity stemming from the central cloud, the hyperscalers trim edge portfolios and focus investments elsewhere, creating new openings for edge pure plays

Over the past several months, many of the hyperscalers have reevaluated their edge portfolios to focus more on their central cloud services, which is where most of the AI opportunity will land. For example, AWS discontinued edge hardware in the Snow family and, later this year, will sunset features in its IoT Device Management service. In our view, these developments speak to customers’ preference for consuming edge computing as an extension of the central cloud; this includes customers migrating workloads to the cloud and building AI models that can be brought to the edge for a particular use case. This proposition will continue to challenge the “edge-native” players, including many hardware vendors and software pure plays that feed on demand from customers crafting their edge strategies from the ground up. But at the same time, it unlocks more opportunities within the ecosystem. For example, the hyperscalers’ pivot away from first-party IoT services welcomes more openings for IoT specialists that can attach themselves to an AI use case, while allowing the hyperscalers to strategically focus on AI and build the capabilities and infrastructure to support customers’ AI workloads regardless of where they are deployed. In some cases, we could see the hyperscalers investing in AI workloads to actually create an edge ecosystem.

AI use cases at the edge already exist

AI has been a foundational technology in enterprise edge computing for years and continues to support growth of the enterprise edge market, which TBR expects will expand from $58 billion in 2024 to $144 billion in 2029. TBR’s enterprise edge spending forecast has not increased significantly from our previous guidance in 2024, which already incorporated our long-standing assumption that AI will propel market growth. TBR expects that the industrywide focus on generative AI (GenAI) will likely lead to increased adoption of edge computing but that the bulk of enterprises embarking on these projects in 2025 will focus on piloting and adoption of cloud and centralized AI resources.

Compared to other deployment methods, edge expansion still lags

According to TBR’s 2Q24 Infrastructure Strategy Customer Research, 34% of respondents expect to expand IT resources at edge sites and branch locations over the next two years. But this is noticeably lower than the 55% who plan to expand IT resources within centralized data centers, while the central cloud and managed hosting are also gaining more traction. The possibility of large capital outlays and an unclear path to ROI remain the biggest adoption hurdles to edge technology, with some customers exploring alternatives that have a clearer ROI road map.


GenAI will not have a significant impact on enterprise edge market growth, at least in the near term, as customers prioritize their investments in the IT core and cloud

Forecast assumptions

TBR continues to revise its enterprise edge forecast to account for changes in the traditional IT and cloud markets, including the advent of generative AI (GenAI). Although the enterprise edge market benefited from the hype surrounding AI in 2024, many pilot projects may not enter production and more concrete use cases around edge AI need to be developed.

The enterprise edge market is estimated to grow at a 19.9% CAGR from 2024 to 2029, surpassing $144 billion by 2029. Professional and managed services will remain the fastest-growing segment, followed by software, at estimated CAGRs of 22.4% and 19.3%, respectively.
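The forecast arithmetic can be verified directly: $58 billion in 2024 compounding at a 19.9% CAGR for five years should land near the $144 billion cited for 2029. A quick check, using only the figures stated in this report:

```python
# Sanity check of the forecast arithmetic: $58B in 2024 compounding at a
# 19.9% CAGR for five years should approach the ~$144B cited for 2029.
base_2024 = 58.0   # $B, TBR's 2024 estimate
cagr = 0.199
years = 5

forecast_2029 = base_2024 * (1 + cagr) ** years
print(f"2029 forecast: ${forecast_2029:.1f}B")  # ≈ $143.7B
```

The compounded figure of roughly $143.7 billion is consistent with the report's "surpassing $144 billion by 2029" rounding.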

Graph: Enterprise Edge Spending Forecast by Segment for 2024-2029 (Source: TBR)



Although there is general interest in edge across industries, demand varies by vertical, with surveillance and quality assurance use cases particularly strong in healthcare and consumer goods

Overall, the edge use cases that garnered the most interest from respondents were security & surveillance, quality assurance, and network (e.g., vRAN). Despite the AI hype, real-time analytics was the fourth most popular use case, although other use cases may embed these technologies.

Interest in deploying a certain use case is often industry-specific, such as above-average interest in security & surveillance among respondents in the consumer goods vertical.

Enterprise respondents had an above-average interest in remote asset management.

Cloud vendors look to partners to bridge the gap between IT and OT buyers and drive traction for edge solutions

TBR’s newly launched Voice of the Partner Ecosystem Report includes survey results from alliance partnership decision makers across three groups of vendors: OEMs, cloud providers and services providers. For cloud respondents, edge computing ranked No. 4 out of 26 technologies as the area that will drive the most partner-led growth in the next two years. This stems from the big gap that still exists between IT and OT buyers, and an overall optimism about cloud vendors’ ability to use partners to drive adoption.

OT stakeholders understand the edge but are not necessarily thinking about IT solutions through the lens of their own processes. Because of this, edge hardware vendors and cloud providers benefit from partnering with edge-native software vendors that have permission from OT buyers and can help edge incumbents sell solutions, including attached software and services. The dynamics between IT and OT departments reinforce the importance of the vendor ecosystem in the enterprise edge market.

Dell infuses GenAI into its NativeEdge edge operations platform, enabling customers’ edge environments to operate like their centralized data centers

TBR Assessment: Dell is working to build an ecosystem surrounding its hardware, primarily through expanding its NativeEdge operations software, enrolling ISV partners to create validated solution designs for specific industry use cases, and designing new services to facilitate edge infrastructure rollouts. Dell’s edge approach is increasingly intertwined with its AI strategy and its close partnership with NVIDIA, as Dell seeks to capitalize on growth opportunities through AI use cases that require video analytics, speech analytics and inferencing at edge locations. This approach expands Dell’s addressable market as its previous edge play was primarily focused on computer vision solutions, and NVIDIA’s AI Enterprise software portfolio will open the door to a greater variety of use cases. Dell’s edge hardware portfolio includes not only ruggedized servers and gateways but also storage, backup and hybrid cloud solutions.

Key Strategies

  • Build validated designs for verticals with high growth potential, including telecom, retail and manufacturing.
  • Leverage Dell NativeEdge, an edge operations software platform, to add value on top of the company’s diverse infrastructure portfolio.
  • Simplify edge management using AI, and add edge management features that support the needs of AI-based workloads.
  • Partner with leading ISVs to provide an enhanced edge orchestration experience.

Recent Developments
In November, SVP of Edge Computing Gil Shneorson outlined how Dell NativeEdge 2.0, Dell’s edge orchestration software, better enables AI workloads at the edge. Shneorson emphasized that although AI workloads have existed at the edge for years, Dell Technologies’ (Dell) orchestration platform uses AI to make these workloads easier to deploy and manage. One example is a new software feature that offers high-availability clustering, providing automatic application failover and live virtual machine migration.

The scope of Dell-branded devices and infrastructure that can be managed under NativeEdge is broad — including servers, high-end storage and backup systems, edge gateways, and even workstation PCs. These various types of endpoints can be clustered through the software to act as a single system.


Dell has also updated NativeEdge to address other customer needs surrounding AI, including data mobility and support for NVIDIA Inference Microservices (NIM).

Cloud Components Benchmark


Behind healthy server backlog and new software IP, hardware vendors drive the cloud components market, particularly as software pure plays prioritize all-in public cloud migrations

Hardware-centric vendors continue to make their move into software

Over the past several years, the cloud software components market has shifted. Microsoft and Oracle no longer dominate the market as they prioritize their native tool sets and encourage customers to migrate to public cloud infrastructure. Driven largely by weaker-than-expected purchasing around Microsoft Windows Server 2025, aggregate revenue growth for these two software-centric vendors was down 3% year-to-year in 3Q24. Over the same compare, total software components revenue for the benchmarked vendors was up 14% and total cloud components revenue was up 8%. In some ways, this dynamic has made room for hardware-centric vendors such as Cisco and Hewlett Packard Enterprise (HPE) to move deeper into the software space, particularly as they buy IP associated with better managing orchestration infrastructure in a private and/or hybrid environment.

Backlog-to-revenue conversion for AI servers fuels market growth

Though revenue mixes are increasingly shifting in favor of software, driven in part by acquisitions (e.g., Cisco’s purchase of Splunk), hardware continues to dominate the market, accounting for 80% of benchmarked vendor revenue in 3Q24. Industry standard servers being sold to cloud and GPU “as a Service” providers are overwhelmingly fueling market growth, more than offsetting unfavorable cyclical demand weakness in the storage and networking markets. This growth is largely driven by the translation of backlog into revenue, but vendors are still bringing new orders into the pipeline, which speaks to ample demand from both AI model builders and cloud providers. However, large enterprises are increasingly adopting AI infrastructure as part of a private cloud environment to control costs and make use of their existing data.
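The backlog dynamic described above can be sketched as a simple book-to-bill model: each quarter a share of backlog ships and converts to revenue, while new orders replenish the pipeline. All figures and the conversion rate below are hypothetical illustrations, not TBR benchmark data.

```python
# Hypothetical sketch of backlog-to-revenue conversion for an AI server
# vendor. Each quarter, a fixed share of backlog ships (converting to
# revenue) while new orders replenish the pipeline. Figures in $B.

def run_quarters(backlog, orders_per_qtr, conversion_rate, quarters):
    """Return (revenue by quarter, ending backlog) under the simple model."""
    revenues = []
    for _ in range(quarters):
        shipped = backlog * conversion_rate           # backlog converted to revenue
        backlog = backlog - shipped + orders_per_qtr  # new orders refill backlog
        revenues.append(shipped)
    return revenues, backlog

# Illustrative inputs: $4.5B starting backlog, $3.0B of new orders per
# quarter, 60% of backlog shipping each quarter.
revs, ending = run_quarters(4.5, 3.0, 0.6, quarters=4)
```

Under these assumptions, quarterly revenue and backlog converge toward a steady state (backlog approaches new orders divided by the conversion rate), which is why sustained order intake, not just backlog drawdown, matters for durable growth.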

Graph: Cloud Revenues by Segment for 3Q23-3Q24 (Source: TBR)


Ample scale and strong demand from both CSPs and enterprises extend Dell’s lead in the cloud components market

Cloud components vendor spotlights

Dell Technologies [Dell]

From a revenue perspective, HPE and Cisco once threatened Dell’s cloud components leadership, but the company has been able to distance itself from its nearest competitors. This is largely due to Dell’s performance over the past year, with strong server demand, particularly from Tier 2 cloud service providers (CSPs), propelling the company’s corporate and cloud components revenue growth rate to the double digits. Meanwhile, in 3Q24 Dell shipped $2.9 billion worth of AI servers while backlog reached $4.5 billion, reflecting 181% year-to-year growth during the quarter and indicating strong future revenue performance.

Hewlett Packard Enterprise

Like its peers, HPE is benefiting from AI-related server demand, and in 3Q24 the company reported $1.5 billion in total AI systems revenue. HPE continues to benefit from its ongoing efforts to shift the sales mix in favor of software and services via GreenLake. In 3Q24 HPE completed its acquisition of Morpheus Data, officially equipping HPE with a suite of infrastructure software that allows customers to take core hypervisors, such as KVM and VMware ESXi, and use them to build complete private cloud stacks.

Cisco

With its acquisition of Splunk, Cisco has emerged as the leader of the software components market, even surpassing Microsoft in related revenue. But networking still accounts for the bulk of Cisco’s components business, and, as evidenced by a 32% year-to-year decline in total hardware revenue for 3Q24, Cisco is facing headwinds in the core networking business. That said, the company is actively taking steps to build out its portfolio, particularly by integrating more security components into the networking layer, which is where most cyberattacks originate, to boost its long-term competitiveness in the market.


Infrastructure agnosticism and flexible cloud-enabled delivery are core attributes of the service delivery market, cementing IBM’s leadership

Dedicated orchestration tools continue to have their place in the market, both in on-premises and cloud environments, but growth is largely driven by application lifecycle management and orchestration tools that span multiple environments. IBM has a rich history in this space and remains a revenue leader. Cisco used to have a foothold in the market but no longer sells its CloudCenter suite.

Vendor spotlight: IBM

After taking steps to bring watsonx into Maximo in 2Q24 for greater process automation, IBM strengthened its commitment to the asset performance management space with the acquisition of Prescinto. Prescinto offers AI tools and accelerators designed for asset owners and operators, with a focus on renewable energy. This deal is designed to support IBM’s play in certain verticals, particularly energy and utilities.

Graph: Service Delivery and Orchestration Revenue Growth vs Cloud Software Components Revenue Growth for 3Q24 (Source: TBR)


AI PC and AI Server Market Landscape


Despite hyperscalers’ increasing investments in custom AI ASICs, TBR expects demand for GPGPUs to remain robust over the next 5 years, driven largely by the ongoing success of NVIDIA DGX Cloud

The world’s largest CSPs, including Amazon, Google and Microsoft, remain some of NVIDIA’s biggest customers, using the company’s general-purpose graphics processing units (GPGPUs) to support internal workloads while also hosting NVIDIA’s DGX Cloud service on DGX systems residing in the companies’ own data centers.
 
However, while Amazon, Google and Microsoft have historically employed some of the most active groups of CUDA developers globally, all three companies have been actively investing in the development and deployment of their own custom AI accelerators to reduce their reliance on NVIDIA. Additionally, Meta has invested in the development of custom AI accelerators to help train its Llama family of models, and Apple has developed servers based on its M-Series chips to power Apple Intelligence’s cloud capabilities.
 
However, even as fabless semiconductor companies such as Broadcom and Marvell increasingly invest in offering custom AI silicon design services, only the largest companies in the world have the capital to make these kinds of investments. Further, only a subset of these large technology companies operate at the kind of scale that would yield measurable returns on investment and total cost of ownership savings. As such, even as investments in the development of custom AI ASICs rapidly rise, the vast majority of customers continue to choose NVIDIA’s GPGPUs due to not only their programming flexibility but also the rich developer resources and robust prebuilt applications that make up the hardware-adjacent side of NVIDIA’s comprehensive AI stack.
 

Graph: Data Center GPGPU Market Forecast for 2024-2029 (Source: TBR)


Companies across a variety of industry verticals want to take a piece of NVIDIA’s AI cake

Scenario Discussion: NVIDIA faces increasing threats from both industry peers and partners

NVIDIA GPGPUs are the accelerator of choice in today’s AI servers. However, the AI server and GPGPU market incumbent’s dominance is increasingly threatened by internal and external factors that are largely related. Internally, as Wall Street’s darling and a driving force behind the Nasdaq’s near 29% annual return in 2024, NVIDIA’s business decisions and quarterly results are increasingly scrutinized by investors, forcing the company to carefully navigate its moves to maximize profitability and shareholder returns. Externally, while NVIDIA positions itself largely as a partner-centric AI ecosystem enabler, the number of the company’s competitors and frenemies is on the rise.
 
Despite NVIDIA’s sequentially eroding operating profitability, investor scrutiny has not had a clear impact on the company’s opex investments — evidenced by a 48.9% year-to-year increase in R&D spend during 2024. However, it may well be a contributing factor to the company’s aggressive pricing tactics and rising coopetition with certain partners. While pricing power is one of the luxuries of having a first-mover advantage and a near monopoly of the GPGPU market, high margins attract competitors and high pricing drives customers’ exploration of alternatives.
 
Additionally, the fear of vendor lock-in among customers is something that comes with being the only name in town, and while there is not much most organizations can do to counteract this, NVIDIA’s customers include some of the largest, most capital-rich and technologically capable companies in the world.
 
To reduce their reliance on NVIDIA GPUs, hyperscalers and model builders alike have increasingly invested in the development of their own custom silicon, including AI accelerators, leveraging acquisitions of chip designers and partnerships with custom ASIC developers such as Broadcom and Marvell to support their ambitions. For example, Amazon Web Services (AWS), Azure, Google Cloud Platform (GCP) and Meta have their own custom AI accelerators, and OpenAI is reportedly working with Broadcom to develop an AI ASIC of its own. However, what these custom AI accelerators have in common is their purpose-built design to support company-specific workloads, and in the case of AWS, Azure and GCP, while customers can access custom AI accelerators through the companies’ respective cloud platforms, the chips are not physically sold to external organizations.
 
In the GPGPU space, AMD and, to a lesser extent, Intel are NVIDIA’s direct competitors. While AMD’s Instinct line of GPGPUs has become increasingly powerful, rivaling the performance of NVIDIA GPGPUs in certain benchmarks, the company has failed to gain share from the market leader due largely to NVIDIA CUDA’s first-mover advantage. However, the rise of AI has driven growing investments in alternative programming models, such as AMD ROCm and Intel oneAPI — both of which are open source in contrast to CUDA — and programming languages like OpenAI Triton. Despite these developments, TBR believes NVIDIA will retain its majority share of the GPGPU market for at least the next decade due to the momentum behind NVIDIA’s closed-source software and its tightly integrated, hardware-optimized stack.
 


 

Microsoft Copilot+ PCs represent a brand-new category and opportunity for Windows PC OEMs industrywide

PC OEMs expected the post-pandemic PC refresh cycle to begin in 2023, but over the past 18 months those expectations have repeatedly been pushed out, with current estimates indicating the next major refresh cycle will ramp sometime in 2025. While the expected timing of the refresh cycle has changed, the drivers have remained the same, with PC OEMs expecting that the aging PC installed base, the upcoming end of Windows 10 support — slated for October 2025 — and the introduction of new AI PCs will coalesce, driving meaningful rebounds in the year-to-year revenue growth of both the commercial and consumer segments of the PC market.
 
As organizations graduate from Windows 10 devices to Windows 11 devices, TBR expects many customers will opt for AI PCs to future-proof their investments, understanding that the overall commercial PC market will be dominated by devices powered by Windows AI PC SoCs in a few years’ time. However, while TBR expects the Windows AI PC market to grow at a 44.3% CAGR over the next five years, the driver of this robust growth centers on the small revenue base of Windows AI PCs today.
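To put the forecast above in perspective, a CAGR compounds annually, so TBR's projected 44.3% CAGR implies the Windows AI PC market would reach roughly six times its current (small) revenue base in five years. The short sketch below only restates that arithmetic; no TBR revenue figures are assumed.

```python
# A CAGR compounds annually: ending = starting * (1 + cagr) ** years.
# A 44.3% CAGR over five years therefore implies a ~6x cumulative
# multiple on today's small Windows AI PC revenue base.

def implied_multiple(cagr, years):
    """Cumulative growth multiple implied by a compound annual growth rate."""
    return (1 + cagr) ** years

multiple = implied_multiple(0.443, 5)
print(round(multiple, 1))  # prints 6.3
```

The takeaway is that headline CAGRs this high are largely a base effect: applied to a small starting base, even a 6x expansion can leave the segment a modest share of the overall PC market.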
 
While Apple dominated the AI PC market in 2024 due to the company’s earlier transition to its own silicon platform — the M Series, which features onboard NPUs — TBR estimates indicate that among the big three Windows OEMs, HP Inc.’s AI PC share was greatest in 2024, followed closely by Lenovo and then Dell Technologies. Without an infrastructure business, HP Inc. relies heavily on its PC segment to generate revenue, and as such, TBR believes that relative to its peers — and Dell Technologies in particular — HP Inc. is more willing to trade promotions and lower margins for a greater number of sales, a key advantage in today’s increasingly price-competitive PC market. TBR attributes Lenovo’s second-place positioning to the company’s growing traction in China’s AI PC market, where Lenovo first launched AI PCs leveraging a proprietary AI agent, as Microsoft Copilot has no presence in the region.
 

Graph: Windows AI PC Market Forecast for 2024-2029 (Source: TBR)


The PC ecosystem increases its investments in developer resources to unleash the power of the NPU

 
Currently available AI PC-specific applications, such as Microsoft Copilot and PC OEMs’ proprietary agents, are focused primarily on improving productivity, which drives more value on the commercial side of the market compared to the consumer side. However, it is likely more AI PC-specific applications will be developed that harness the power of the neural processing unit (NPU), especially as AI PC SoCs continue to permeate the market.
 
Companies across the PC ecosystem, including silicon vendors, OS providers and OEMs, are investing in expanding the number of resources available to developers to support AI application development and ultimately drive the adoption of AI PCs. For example, AMD Ryzen AI Software and Intel OpenVINO are similar bundles of resources that allow developers to create and optimize applications to leverage the companies’ respective PC SoC platforms and heterogeneous computing capabilities, with both tool kits supporting the NPU, in addition to the central processing unit (CPU) and GPU.
 
However, as it relates to AI PCs, TBR believes the NPU will be leveraged primarily for its ability to improve the energy efficiency of certain application processes, rather than enabling the creation of net-new AI applications. While the performance of PC SoC-integrated GPUs pales in comparison to that of discrete PC GPUs purpose-built for gaming, professional visualization and data science, the TOPS performance of SoC-integrated GPUs typically far exceeds that of SoC-integrated NPUs, due in part to the fact that the processing units are intended to serve different purposes.
 
The GPU is best suited for the most demanding parallel processing functions, requiring the highest levels of precision, while the NPU is best suited for functions that prioritize power efficiency and tolerate lower levels of precision, such as noise suppression and video blurring. As such, TBR sees the primary value of the NPU being extended battery life — an extremely important factor for all mobile devices. This is the key reason why TBR believes that AI PC SoCs will gradually replace all non-AI PC SoCs, eventually being integrated into nearly all consumer and commercial client devices.
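The precision trade-off behind NPU efficiency can be made concrete with a generic low-precision (int8) quantization sketch, a format commonly used for NPU-friendly inference. This is an illustrative example only; it is not tied to any specific vendor's NPU toolchain, and the symmetric per-tensor scheme shown is just one common approach.

```python
import numpy as np

# NPUs favor low-precision integer math (e.g., int8) for power efficiency.
# This generic sketch quantizes float32 values to int8 and measures the
# reconstruction error -- the accuracy cost that workloads like noise
# suppression tolerate well.

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: map [-max, max] to [-127, 127]."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.standard_normal(1024).astype(np.float32)
q, scale = quantize_int8(x)
err = np.abs(dequantize(q, scale) - x).max()  # worst case ~ half a quantization step
```

Storing and multiplying 8-bit integers instead of 32-bit floats cuts memory traffic and arithmetic energy substantially, which is the efficiency lever the NPU exploits at the cost of the bounded rounding error measured above.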
 
One of the reasons PC OEMs are so excited about the AI PC opportunity is that AI PCs command higher prices, supporting OEMs’ longtime focus on premiumization. Commercial customers, especially large enterprises in technology-driven sectors like finance, typically buy more premium machines, while consumers generally opt for less expensive devices. TBR believes this is another significant reason AI PC adoption will rise in the commercial segment of the market before the consumer segment.