AI & Data Sovereignty in Technology Partnerships and Alliances

Register for AI & Data Sovereignty in Technology Partnerships and Alliances

 

Commercial Model Alignment Begins to Trump Technology Integration

Join Principal Analyst Boz Hristov and Senior Analyst Catie Merrill on Thursday, July 17, 2025, at 1 p.m. EDT/10 a.m. PDT for an exclusive webinar on AI and data sovereignty in technology partnerships and alliances. TBR’s team will examine how the intersection of regional regulations and emerging AI capabilities is reshaping partner ecosystems.

 

As governments and enterprises demand greater control over data, global systems integrators (GSIs) are increasingly relying on locally based employees to meet sovereignty requirements, ensure compliance and build trust. Boz and Catie will explore how this shift is influencing partner strategies, resourcing models and AI deployment approaches across regions. They will also dive into the commercial implications for technology vendors and GSIs, as aligning commercial models is becoming just as critical as technical integration.

 

This FREE session on alignment with GSIs will include:

  • An exclusive look at our newly expanded regional breakdown of GSI headcount and revenue, part of TBR’s Cloud Ecosystem Report, and what the data reveals about hyperscaler practices in the Americas, EMEA and APAC
  • A look at how European Union AI and data regulations are impacting staffing and training within GSI practices
  • An overview of our new ServiceNow Ecosystem Report and its implications for partners and alliances
  • Key insights from our Voice of the Partner research, including what’s next in AI ecosystem management and multiparty collaboration
  • A discussion on the increasing importance of commercial model alignment over technology integration and how ServiceNow is moving into the core enterprise SaaS market among the likes of SAP, Salesforce, Workday, Adobe and others

 

Register Now

 

TBR Insights Live sessions are typically held on Thursdays at 1 p.m. ET and include a 15-minute Q&A session following the main presentation. Previous sessions can be viewed anytime on TBR’s Webinar Portal.

 

TBR Insights Live: AI & Data Sovereignty in Technology Partnerships and Alliances

AI Inferencing Takes Center Stage at Red Hat Summit 2025

In late May, Red Hat welcomed thousands of developers, IT decision makers and partners to its annual Red Hat Summit at the Boston Convention and Exhibition Center (BCEC). Like the rest of the market, Red Hat has pivoted around AI inferencing, and this conference marked the company’s formal entry into the inferencing market with the productization of vLLM, the open-source project that has been shaping AI model execution over the past two years. Though Red Hat’s push into AI inferencing does not necessarily suggest a deemphasis on model alignment use cases (e.g., fine-tuning, distillation), which were the company’s big strategic focus last year, it is a recognition that AI inferencing is a production environment and that the process of running models to generate responses is where the business value lies. Red Hat’s ability to embed open-source innovation within its products and lower the cost per model token presents a sizable opportunity. Interestingly, Red Hat’s prospects are also evolving in more traditional markets. For instance, Red Hat’s virtualization customer base has tripled over the past year, with virtualization emerging as a strategic driver throughout the company’s broader business, including for communication service providers (CSPs) adopting virtualized RAN and within other domains such as their IT stacks and the mobile core.

Red Hat pivots around AI inferencing

Rooted in Linux, the basis of OpenShift, Red Hat has always had a unique ability to repurpose its assets to expand into new markets and use cases. Of course, AI is the most relevant example, and two years ago, Red Hat formally entered the market with Red Hat Enterprise Linux (RHEL) AI — the tool Red Hat uses to engage AI developers — and OpenShift AI, for model lifecycle management and MLOps (machine learning operations) at scale. These assets have made up the Red Hat AI platform, but at the Red Hat Summit, the company introduced a third component with AI Inference Server, in addition to new partnerships and integrations further designed to make agentic AI and inferencing realities within the enterprise.

 

AI and generative AI (GenAI) are rapidly evolving, but the associated core challenges and adoption barriers, including the high cost of AI models and the sometimes arduous nature of providing business context, remain largely unchanged. Between IBM’s small language models (SLMs) and Red Hat’s focus on reducing alignment complexity, both companies have crafted a strategy focused on addressing these challenges; they aim not to develop the next big AI algorithm, but rather to serve tangible enterprise use cases in both the cloud and the data center.

 

Everyone is aware of Red Hat’s track record of delivering enterprise-grade open-source innovation, and if Red Hat’s disruption with Linux over two decades ago is any indication, the company is well positioned to make real, cost-effective solutions for the enterprise based on reasoning models and AI inferencing.

Red Hat productizes vLLM to mark entry into AI inferencing

Though the project itself is perhaps lesser known, most large language models (LLMs) today leverage vLLM, an upstream open-source project boasting roughly half a million downloads in any given week. At its core, vLLM is an inference server that helps address “inference-time scaling,” or the budding notion that the longer the model runs, or “thinks,” the better the result will be. Of course, the challenge with this approach is the cost of running the model for a longer period of time, but vLLM’s single-server architecture is designed to optimize GPU utilization, ultimately reducing the cost per token of the AI model. Various industry leaders — namely NVIDIA, despite having its own AI model serving stack; Google; and Neural Magic, which Red Hat acquired earlier this year — are leading contributors to the project.
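The cost dynamic described above can be sketched with back-of-the-envelope arithmetic. All figures below are illustrative assumptions, not TBR or vLLM benchmarks; the point is simply that raising GPU utilization (e.g., through batched serving, as vLLM does) divides the cost per generated token:

```python
# Hypothetical back-of-the-envelope model of inference cost per token.
# Every number here is an illustrative assumption, not a measured vLLM figure.

GPU_COST_PER_HOUR = 4.00        # assumed hourly cost of one GPU
TOKENS_PER_SEC_SINGLE = 50      # assumed throughput serving one request at a time
BATCH_UTILIZATION_GAIN = 8      # assumed throughput multiplier from batched serving

def cost_per_million_tokens(tokens_per_sec: float) -> float:
    """Dollars per one million generated tokens at a given throughput."""
    tokens_per_hour = tokens_per_sec * 3600
    return GPU_COST_PER_HOUR / tokens_per_hour * 1_000_000

naive = cost_per_million_tokens(TOKENS_PER_SEC_SINGLE)
batched = cost_per_million_tokens(TOKENS_PER_SEC_SINGLE * BATCH_UTILIZATION_GAIN)

print(f"naive serving:   ${naive:.2f} per 1M tokens")
print(f"batched serving: ${batched:.2f} per 1M tokens")
```

Under these assumed inputs, the sketch prints roughly $22.22 per million tokens for naive serving versus $2.78 with an 8x utilization gain — the same lever, applied at production scale, that makes longer “thinking” runs economically viable.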

 

Leveraging its rich history of turning open-source projects into enterprise products, Red Hat launched AI Inference Server, based on vLLM, marking Red Hat’s first offering from the Neural Magic acquisition. AI Inference Server is included with both RHEL AI and OpenShift AI but can also run as its own stand-alone server. Though perhaps inclined to emphasize IBM’s watsonx models, Red Hat is extending its values of flexibility, choice and meeting customers where they are to AI Inference Server. This new offering supports accelerators outside IBM, including NVIDIA, AMD, Intel, Amazon Web Services (AWS) and Google Cloud, and offers Day 0 support for a range of LLMs. This means that as soon as a new model is released, Red Hat works with the provider to optimize the model for vLLM and validate it on Red Hat’s platform.

 

Building on vLLM’s early success, Red Hat announced LLM-d, a new open-source project, at the Red Hat Summit. LLM-d transcends vLLM’s single-server architecture, allowing inference to run in a distributed manner and further reducing the cost per token. Given the cost, most will agree that inferencing at scale will necessitate distributed infrastructure, and several recent moves across the tech landscape point in this direction. LLM-d is being launched with support from many of the same contributors as vLLM, including NVIDIA and Google (LLM-d runs on both GPUs and TPUs [tensor processing units]).

Partnership with Meta around MCP is all about empowering developers and making agentic AI enterprise-ready

If Google’s launch of the A2A (Agent2Agent) protocol is any indication, Anthropic’s Model Context Protocol (MCP), which aims to standardize how LLMs obtain context, is gaining traction. At the Red Hat Summit, Red Hat committed to MCP by announcing it will deliver Meta’s Llama Stack, integrated with MCP, in OpenShift AI and RHEL AI.

 

To be clear, Red Hat supports a range of models, but Meta went the open-source route early on, bringing Llama Stack, an open-source framework for building applications specifically on Llama models, into the Red Hat environment. This not only exposes Red Hat to another ecosystem but also provides APIs around it. Enlisting Meta at the API layer is an important aspect of this solution, as it enables customers to consume the solution and build new agentic applications, with MCP playing a key role in contextualizing those applications within the enterprise. It is still early days for MCP, and making the protocol truly relevant in enterprise use cases will take time and advancements in security and governance. But Red Hat indirectly supporting MCP within its products signals the framework’s potential and Red Hat’s role in bringing it to the enterprise.

Who would have thought we would be discussing virtualization in 2025?

In 2025 and the world of AI, you don’t often hear of a company putting virtualization at the top of its strategic imperatives list. However, everyone has seen how Broadcom’s takeover of VMware has caused a ripple in the market, with customers seeking cheaper, more flexible alternatives that will not disrupt their current cloud transformation journeys. In fact, when we surveyed enterprise IT decision makers, 42% of respondents indicated they still intend to use VMware, but most plan to do so in a reduced capacity. Of those planning to continue using VMware, a notable 83% are still evaluating other options*.

 

“Options both have increased the prices across the board, 20% to 30%, which is pretty significant. So, you could say myself and my peers are not very happy with the Broadcom method on that, and we’re looking at, you know, definitely options to migrate off VMware when possible. We’re definitely looking at Citrix, and then options from Red Hat and Microsoft.” — CTO Portfolio Manager, Consumer Packaged Goods

 

As a reminder, after Red Hat revolutionized Linux in the early 2000s, the company’s next big endeavor was virtualization. With the rise of cloud-native architectures, Red Hat quickly pivoted around containers, and this is where the company remains most relevant today. However, through the KVM (kernel-based virtual machine) hypervisor, which would eventually be integrated with OpenShift, virtualization has always been a part of the portfolio. Over the past year, given the opportunity surrounding the VMware customer base, Red Hat has actively revisited its virtualization roots in a few primary ways.

 

First, given the risky nature of switching virtualization platforms, Red Hat crafted a portfolio of high-touch services around OpenShift Virtualization, including Migration Factory and a fixed-price offering called Virtualization Migration Assessment. These services from Red Hat Consulting, which are offered in close alignment with global systems integrator (GSI) partners, help customers migrate virtual machines (VMs) as quickly as possible while minimizing risk, largely by having customers migrate VMs first and modernize them later.

 

Second, Red Hat has focused on increasing public cloud support. Red Hat announced at the summit that OpenShift Virtualization is now available on Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure (OCI), in addition to previously announced support for IBM Cloud and AWS, officially making the platform available on all major public clouds. Making OpenShift Virtualization applicable across the entire cloud ecosystem reinforces how serious Red Hat is about capturing these virtualization opportunities. These integrations will make it easier for customers to use their existing cloud spend commitments to offload VMware workloads to any cloud of their choice while maintaining the same cloud-native experience they are used to.

 

Of course, there will always be a level of overlap between Red Hat and the hyperscalers, but ultimately the hyperscalers recognize Red Hat’s role in addressing the hybrid reality and enterprises’ need to move workloads consistently across clouds and within data centers, and they welcome a more feature-rich platform like OpenShift that will spin the meter on their infrastructure.

With virtualization, Red Hat is allowing partners to sell infrastructure modernization and AI as part of the same story

At the conference, we heard from established Red Hat customers that have extended their Linux and container investments to virtualization. Examples included Ford and Emirates NBD; the latter has over 37,000 containers in production and is now migrating 9,000 VMs to Red Hat OpenShift Virtualization for a more consistent tech stack. Based on our conversations with customers, these scenarios — where VMs and containers run side by side — are not an easy sell and require a level of buy-in across the organization.

 

That said, if customers can overcome some of these change management hurdles, this side-by-side approach can offer numerous benefits, largely by creating greater consistency between legacy and cloud-native applications without significant refactoring. Though some GSIs may be better suited to the infrastructure layer than others, partners should recognize the opportunity to use OpenShift Virtualization to have client discussions around broader AI transformations. One of the compelling aspects of Red Hat is that even as it progressed through different phases — Linux, virtualization, containers and now AI — the hybrid platform foundation has remained unchanged. If customers can modernize their infrastructure on the same platform, introducing AI models via OpenShift AI becomes much more compelling.

Virtualization remains a key driver of telecom operator uptake of Red Hat solutions, but AI presents a significant upsell opportunity

Over the past few years, Red Hat has leveraged its virtualization technology in the CSP market, making significant progress in landing new CSP accounts and expanding its account share within this unique vertical. The company’s growth in this market has been aided by factors such as Broadcom’s acquisition of VMware, which initially caused a wave of CSPs to migrate to Red Hat due to the uncertainty surrounding VMware’s portfolio road map. Broadcom’s price hikes are causing a second wave of switching that TBR anticipates will continue for several years.

 

However, Red Hat has also succeeded in more deeply penetrating the telecom vertical due to its savvy marketing, which at times emphasizes that its solutions are “carrier-grade,” along with persistent efforts to raise awareness within the CIO and CTO organizations of CSPs that virtualization and hybrid multicloud strategies will have significant ROI for CSPs. This has led to strong adoption of Red Hat OpenStack and OpenShift, although the Ansible automation platform has lagged in terms of CSP adoption, as this customer segment prefers to use the free, open-source version of Ansible.

 

As CSPs iterate on their AI strategies, Red Hat has the opportunity to play a significant role, including with its new AI Inference Server, as CSPs increasingly embrace edge compute investments. CSPs need to invest upfront to capitalize on the cost efficiency and revenue generation opportunities offered by AI, and Red Hat can help guide them in this direction. CSPs have difficulty moving quickly when new, disruptive technologies emerge, and, with AI specifically, have trouble evaluating and testing AI models themselves due to a lack of in-house expertise. Additionally, they feel constrained by regulations and are concerned about compromising data privacy. Red Hat’s dedicated telecom vertical services can help alleviate these concerns and accelerate CSPs’ investments in AI infrastructure.

Final thoughts

Based on our best estimate, roughly 85% of AI’s current use is focused on training and only 15% on inferencing, but the inverse could be true in the not-too-distant future. Not only that, but AI inferencing will likely occur at distributed locations for latency and scale, a scenario that plays directly to Red Hat’s hybrid platform and its ability to help customers “write once, deploy anywhere.” That is one of the compelling aspects of a platform-first approach; even as new components such as AI models are introduced, the core foundation remains unchanged.

 

Though all of Red Hat’s new innovations, including AI Inference Server and the LLM-d project, do not necessarily suggest a deemphasis on model alignment with assets like InstructLab, it is clear Red Hat is pivoting to address the inference opportunity. With its trusted experience productizing open-source innovation and its ability to exist within a broad technology ecosystem of hyperscalers, OEMs and chip providers, Red Hat is in a somewhat unique position to help transition AI inference from an ideal to an enterprise reality.

 

Further, Red Hat’s virtualization prospects are growing, as TBR’s interactions with customers continue to indicate that they are looking for new alternatives. If the hyperscalers’ recent earnings reports are any indication, the GenAI hype is waning, and we suspect many enterprises will refocus on infrastructure modernization to ultimately move beyond basic chatbots and lay the groundwork for the more strategic applications that inferencing will enable. It will be interesting to see how Red Hat capitalizes on new virtualization opportunities with its hyperscaler and services partners as part of a joint effort to bring customers to a modern platform, where VMs and containers can coexist and drive discussions around AI.

 

*From TBR’s 2H25 IT Infrastructure Customer Research

A Challenger Mindset Transforms HCLTech’s Approach to Financial Services to Achieve Success Through AI

HCLTech hosted industry analysts and advisers on May 13 at the ASPIRE at One World Observatory in New York City. Throughout the afternoon, HCLTech executives, leaders and clients spoke at length about the company’s financial services positioning, direction and activities amid disruption from AI and digital acceleration.

Introduction

During the event, HCLTech leaders consistently highlighted how the company’s culture, deep engineering expertise and unique approach to AI set it apart from its peers and strengthen client relationships. These points were echoed by two financial services clients during a panel discussion. Differentiation remains a challenge for all vendors, yet HCLTech emphasized that although the company may not be different in what it does, it is unique in its approach.

Balancing risk, innovation and talent investment

The event began with a presentation by HCLTech CEO and Managing Director C Vijayakumar (CVK), who gave an overview of the company’s current positioning and future plans. The session centered on HCLTech’s evolution toward an engineering and platform-based mindset and its transformation away from a traditional model that is losing relevance as the relationship between revenue and headcount shifts. To adapt its business model and better align itself with the needs of clients and the market, CVK announced HCLTech’s goal of doubling its revenue with only half of its previous headcount.

 

As roles within the organization have begun to change with the integration of new technology, including AI, HCLTech has had to begin transforming the company’s structure. Revenue per employee has long been a KPI for HCLTech to ensure the company decouples revenue growth from headcount growth. HCLTech’s attention to the metric is reflected in its ability to maintain peer-leading levels relative to Cognizant, Infosys, Tata Consultancy Services (TCS) and Wipro IT Services (ITS), whose trailing 12-month (TTM) revenue per employee was $59,304, $60,338, $49,692 and $45,270, respectively, in 1Q25 — each below HCLTech’s figure of $62,360.

 

It is a lofty goal to deliver the same quality of service at the same speed with fewer people, even with the support of AI tools and strong partnerships. To achieve this goal, HCLTech will rely on its culture and talent, combined with its strategic technology investments including AI, digital and software solutions. CVK emphasized that HCLTech’s culture is deeply embedded in the company’s DNA, making it difficult for competitors to replicate. This culture fosters strong client trust and deepens relationships, as it consistently comes through in conversations with clients. By building on this foundation, HCLTech effectively leverages AI technologies to strengthen existing partnerships and secure new projects.

 

HCLTech’s client management and retention strategy reflects the company’s ability to embed itself within the client environment and serve as a key partner. HCLTech’s deep relationships have enabled the company to better identify and address client challenges as well as opportunities to recommend transformations to clients. As complexity increases across the technology landscape, HCLTech has had to evolve its approach to both new and existing clients. Client willingness to adopt AI tools can be tempered by concerns over managing multiple platforms and the associated risks.

 

As a result, HCLTech often takes a more measured and gradual approach with new clients, focusing on building trust and easing them into transformation. In contrast, with existing clients, HCLTech adopts a more assertive strategy — leveraging its deep understanding of their technology landscapes and industry-specific needs to drive adoption and deliver results more rapidly.

 

CVK closed his presentation by emphasizing the need to be proactive and carry a “paranoid mindset” to stay ahead of technology trends and remain relevant. HCLTech’s ability to build strong relationships with clients enables the company to guide transformations, equipping clients with the tools and services to be proactive and effectively leverage technology across their organizations. With a greater focus on outcomes, HCLTech’s positioning and relationships with clients provide a foundation for the company to grow its wallet share with clients as it balances risks with innovation and invests for future growth.

Demand for modernization and AI influences client needs within the financial services space

Srinivasan (Srini) Seshadri, HCLTech’s chief growth officer and Financial Services lead, discussed the company’s 50,000-person Financial Services practice, which, as HCLTech’s largest industry group, generated $2.9 billion in revenue during FY25. During the presentation, Seshadri emphasized five main features of the company’s Financial Services practice that help it drive value for clients: engineering DNA, outcome orientation, challenger mindset, verticalized services and innovation. The benefit of verticalized services stood out to TBR. A few years ago, HCLTech moved all its service lines under one vertical, creating a unified go-to-market strategy that enables it to deepen client relationships and its positioning around transformation. As vertical and industry expertise does not provide differentiation on its own, HCLTech took it a step further, pairing its industry experience with service lines to better communicate its portfolio and drive value. Taking this approach pulls together HCLTech’s strengths and drives outcomes.

 

Key items influencing HCLTech’s Financial Services activities include adapting to changing regulations, increasing use of Global Capability Centers, and creating and implementing composable products. Aligning its portfolio and resources to help clients navigate current trends and operate more effectively guides HCLTech’s client approach.

 

For example, with the permeation of generative AI (GenAI) and increased adoption of the technology by clients seeking to remain relevant, Seshadri spoke about the evolution of GenAI from a buzzword to actual engagement and usage, including using GenAI to reimagine an autonomous future for Financial Services. HCLTech seeks to integrate GenAI solutions and tools within its clients’ operations, depending on maturity level and understanding, to drive end-to-end value chain transformation.

 

Helping clients use AI to make internal processes better and more efficient and to achieve their goals enhances HCLTech’s value proposition in the financial services industry and enables the company to gain new projects in sensitive areas such as regulation, governance and security.

Prioritizing the main areas within engineering, platform modernization and GenAI aligns HCLTech’s financial services expertise with its key service line strengths around business optimization, design and innovation and enables the company to support client transformations. Seshadri closed his presentation by acknowledging that transformation “is up to the client to implement.” HCLTech’s approach to deal generation is shaped by its deep understanding of culture and clients’ readiness to sustain transformation. By viewing AI as a means to enhance processes and operations — and by factoring in the longevity of each client relationship — HCLTech tailors the pace and intensity of technology integration. This ability to meet clients where they are and ensure lasting transformation distinguishes HCLTech from its peers.

Experience is key to client engagement

Building on Srini’s discussion, Ananth Subramanya, HCLTech’s EVP of Digital Business Services, talked about the industrywide shift in consumer loyalty from the physical product to the experience, with the experience driving engagement. As clients increasingly demand rapid, relevant transformations that drive business outcomes, Subramanya emphasized the importance of balancing speed with stability, acknowledging that while stability may at times constrain velocity, it is essential for sustainable progress. This strategy helps clients build resilience, allowing the customer experience (CX) to permeate the product and platform layers and influence each aspect of the client transformation.

 

HCLTech’s CX-centric delivery approach — anchored in both business processes and user interface (UI) design — deeply embeds the experience within clients’ operations and functions. This foundation empowers clients to engage more effectively and drive meaningful change. Additionally, by enabling end users to experience improvements more rapidly, the approach fosters stronger client loyalty and supports the development of long-term, strategic projects.

AI permeates approach to transformation

Diving more deeply into the impact of AI on financial services activities and client investments, Vijay Guntur, HCLTech’s CTO and head of Ecosystems, discussed the primary needs within financial operations: operational efficiency, accelerated innovation, CX and risk management. Key challenges around data quality and collection, the use of legacy systems, and scalability also remain critical within the financial services space. HCLTech’s investments across AI platforms and solutions have enabled the company to deliver on these needs while embedding industry knowledge to address key client concerns. The company’s four main AI and GenAI offerings are AI Force, AI Foundry, AI Labs and AI/GenAI Engineering. Through these offerings, HCLTech helps clients execute on decision making and handle complex workflows.

 

Using its AI Labs, which span six locations across the U.S., the U.K., Germany, India and Singapore, HCLTech can build and scale AI for clients, helping them work through the early stages of adoption and identify where technology can add value. The labs encapsulate HCLTech’s AI portfolio and create opportunities to implement tools and solutions that lower risk and increase AI efficiency across IT operations, showcasing HCLTech’s offerings as clients undergo transformation and modernization and helping them lead their own AI transformations.

 

The primary AI offering, AI Force, launched in March 2024, takes a platform approach to apply AI technologies within software development and engineering life cycle processes. Further development of the platform has enabled interoperability and greater adoption and AI usage. Guntur emphasized that the platform improves efficiency and shortens time to market, allowing clients to more quickly respond to market needs and remain relevant against peers. With agentic AI emerging as a much-needed use of the technology, AI Force’s ability to embed agentic workflows enhances efficiency and adds value.

 

The second product, AI Foundry, accelerates product development and remodels the value stream using AI and data. With a focus on value streams, modernizing data, and AI that is built within a cognitive infrastructure, AI Foundry uses technology to help clients improve their business operations.

 

HCLTech has a long history of working with AI, building off DRYiCE, the company’s original automation platform. This heritage equips HCLTech with the background and trusted technical expertise, backed by its engineering prowess, to deliver on clients’ AI transformation needs. Further, HCLTech can pursue larger-scale and more aggressive AI-led transformations, helping the company accelerate ahead of its peers in terms of client engagement and growth.

Consulting serves as an entry point to broader financial services activities

In a panel discussion with financial services clients, HCLTech leaders discussed the company’s consulting services and main service line areas. Although consulting has not been a primary investment focus for HCLTech, the company has selectively built out consulting capabilities to address clients’ end-to-end modernization and technology needs. For example, in March 2019, HCLTech acquired Strong-Bridge Envision, a digital consulting firm that complemented its digital and analytics capabilities. Embedding this expertise across its portfolio strengthens the company’s ability to drive AI and platform adoption.

 

The company’s AI Labs are a central part of HCLTech’s consulting offerings. Through the labs, HCLTech delivers technology consulting services, helping clients to identify areas where they would most benefit from AI. As many clients, particularly within the financial services space, look to accelerate innovation to create new products and business models that enable them to remain relevant, technology consulting services bring in essential offerings to help address key areas of client transformations.

 

Looking at the data aspect, consulting is required for many clients to organize and manage datasets. Ensuring data is protected and structured remains vital to valuable and trusted AI usage, increasing the importance of HCLTech’s ability to deliver on data needs in a timely manner.

While these consulting investments may offer limited scale, they are sufficient to remain competitive with peers and to guide clients effectively on AI adoption. This expertise aligns well with the company’s client management strategy, particularly in expanding relationships with existing clients — where HCLTech can lead with a proactive and open-minded approach.

Conclusion

HCLTech concluded the event with a wrap-up by CMO Jill Kouri, who noted key points about HCLTech’s positioning and direction as the company navigates client needs around AI. The main comment that struck TBR analysts referenced the need for a challenger mindset companywide. This approach will help HCLTech transform the way it delivers services and solutions to clients. Leading with a proactive and paranoid mindset embodies the challenger focus, allowing HCLTech to stay ahead of AI and technology trends while complementing its existing strengths.

 

The goal of doubling revenue with half the people will certainly present challenges for HCLTech, but the company’s culture and robust AI portfolio, which provides the technology, engineering expertise and resources needed to deliver on consulting services, will help the company move in the right direction. Further, leveraging an AI-intrinsic point of view, as opposed to an AI-first point of view, secures HCLTech’s positioning around AI and its trust-based relationships with clients, to effectively address key market needs around efficiency and modernization.

Telcos Risk Losing the AI Race Without Strategic Shift; $170B at Stake by 2030

Register for Telcos Risk Losing the AI Race Without Strategic Shift; $170B at Stake by 2030

 

Realizing the AI opportunity

AI presents a once-in-a-generation opportunity for the telecom industry to achieve two key objectives: generate new revenue and reduce costs. However, there is a real risk that most communication service providers (CSPs) globally will miss out on the full benefits of AI. Although leading CSPs have been investing in AI, most of these investments appear to be myopically focused on quick-hit wins. This strategy is acceptable in the short term, but true opportunity capture will be contingent on broader-scope initiatives, coupled with upfront investment.

 

Join Principal Analyst Chris Antlitz Thursday, July 10, 2025, at 1 p.m. EDT/10 a.m. PDT for a live discussion on how CSPs are integrating AI into their internal operations and their products and services. Chris will also share insights from the latest edition of TBR’s Telecom AI Market Landscape, which focuses on the opportunity sizing of key new revenue and cost-efficiency use cases.

 

In this free session on AI opportunity for the telecom industry, Chris will answer:

  • Where does the telecom industry currently stand in terms of generative AI (GenAI) adoption?
  • How big is the opportunity for telcos to generate new revenue from AI by 2030?
  • How significant is the opportunity for telcos to reduce costs through AI by 2030?
  • Who stands to gain if telcos don’t change?

 

Register Now

 

TBR Insights Live sessions are typically held on Thursdays at 1 p.m. ET and include a 15-minute Q&A session following the main presentation. Previous sessions can be viewed anytime on TBR’s Webinar Portal.

 


Sage Analyst Summit: Keeping the Winning Playbook While Evaluating Emerging Changes to the Game

Connect, grow, deliver

TBR spent two days in Atlanta, listening to and speaking with Sage’s management team as part of the company’s annual Analyst Summit, and we walked away impressed. This is a company that knows itself and its strengths. It knows where it needs to improve. It knows where the pain points and constraints are and has consistently done a good job of navigating around both.

 

Most importantly, the company knows its customers, which should come as no surprise considering how long Sage has been serving its SMB install base. Sage has leveraged these strengths and established a large, sticky install base from which to pursue opportunities adjacent to its core business.

 

Sage is focused on three interlocking areas — connect, grow, deliver — which President Dan Miller described during the event:

  • Connect through trusted partner networks
  • Grow by winning new logos through a verticalized suite motion
  • Deliver real, measurable productivity using AI

Each pillar represents a separate part of the company’s go-to-market strategy, but Grow stands out as the most vital to the company’s growth trajectory. Landing and expanding with new logos is the company’s greatest source of revenue growth, with vertical-specific and business operations solutions offering some of the greatest upsell potential. Aligned with this strategy, the company is a disciplined but active acquirer, onboarding new IP to enhance these sales motions.

 

Long-term, AI presents opportunities for the company to upsell into its finance and accounting (F&A) core. As Sage leans into its strengths while building for the future, its ability to scale AI and industry depth across a known and trusted customer base may prove to be the company’s most valuable asset.

Landing with F&A, then expanding with payroll, HR and operations management

Sage’s land-and-expand strategy starts with a stronghold in finance and builds outward through operational adjacencies. Most customers enter through core accounting — typically via Intacct — and expand into areas like payroll, HR, and inventory or distribution management as their needs mature. Vertical-specific modules are critical to this motion, especially in midmarket industries where Sage can tailor functionality to operational nuances.

 

The company reinforces expansion by packaging these capabilities into suites, streamlining procurement and positioning itself as more than just a financial system. Sales teams are trained to identify expansion triggers early; signs like API adoption, workflow customization or manual process bottlenecks often indicate opportunities. Although the company’s product maturity varies across the portfolio, Sage has seen success in service- and product-centric verticals, enabling the company to upsell and cross-sell. This approach, combined with a focus on ease of integration and strong partner involvement, is helping Sage grow account value without overpromising in its product road map.

AI at Sage: Workflow-first, ROI-driven

Sage management spent much time discussing its ambitions in AI. From TBR’s perspective, the tone was very grounded. Although the company will never be at the cutting edge of AI innovation, management did a great job of articulating the current opportunities to upsell AI capabilities. Finance and accounting workflows offer many sales opportunities for Sage to pursue, and the company is investing in R&D to capitalize on them. Similar to many of its application peers, Sage intends to approach agentic AI and generative AI development on a use-case-by-use-case basis. In Sage’s case, this is even more prudent as SMB customers face greater budgetary restrictions and require ROI to be realized in the first year.

 

Sage management highlighted AP automation, time-saving prompts and variance analysis as key areas where the company is achieving success with AI-powered automation. Like several peers, Sage offers a Copilot solution that serves as the unified user interface (UI) for engaging embedded AI tools. Long-term, management expects this UI to become more adaptive, guiding the user through an automated workflow. Guided prompting was another area of focus, and the company is building a library of prompts for end users to leverage as they perform specific tasks. Under the hood, the company intends to run its AI tools on internally trained models built on top of third-party models. CTO Aaron Harris discussed two of these tools: Sage Accounting LLM and APDoc2Vec.

 

As a reminder, Sage partnered with Amazon Web Services (AWS) over a year ago to collaborate on F&A models, and management highlighted the continued effort to build a new multitenant, dependency-based stack.

 

Long-term, TBR expects this work to be pivotal in reducing the cost of running AI workloads, while internally developed models with lower parameter counts than big-name large language models (LLMs) will further enhance cost efficiency at inference. Meanwhile, Sage is still figuring out how to monetize AI, but the industry default is to implement a tiered system. Some high-compute copilots may eventually carry usage-based fees, especially in forecasting, but for now, the priority is to show clear value and price accordingly.

 

In 2025 no conversation is complete without recognizing the platform implications of agentic workflows. Behind the scenes, Sage is preparing for an agent-first architecture by integrating emerging frameworks, such as Model Context Protocol (MCP) and Agent2Agent (A2A), directly into its platforms. The long-term goal is to coordinate these through super agents and plug into the broader agent ecosystem (Salesforce, Microsoft, Google), though this remains part of the long-term road map.

 

That said, the company is building for the future, with an emphasis on data model consistency, dependency-based deployment, and orchestration layers capable of managing multi-agent chains. This is all being done with AWS in the background, keeping the platform anchored at the infrastructure layer.

Sage deepens its partner relationships

Sage’s partner and go-to-market strategy is built for focus and leverage. The company cannot cover every vertical or service need on its own, so partners are central to how it sells, delivers and scales. The revamped Sage Partner Network is tighter, with clear roles across sell, build and serve motions, and expectations tied to growth, not just activity. Multiyear vertical plans, coinvestment and execution discipline are now baseline requirements.

 

Internally, the GTM engine runs through SIGMA, which ties product planning to what the direct and partner channels sell. Sales teams are trained to package suites, identify expansion triggers, and position the platform by vertical need, rather than a feature checklist. To prepare for the platform’s evolution, Sage is already laying the groundwork for a more extensible ecosystem, including plans for an agent marketplace that would give partners a direct path into the next wave of product delivery.

Staying the course and preparing for what lies ahead

Sage’s story at its annual Analyst Summit was not necessarily one of reinvention. Land and expand has been the company’s strategy for years, and it has worked well so far. By anchoring in finance, expanding through vertical suites and operation management, and keeping partners close to the motion, Sage is executing with clarity around who it serves and how it wins. Meanwhile, the platform is evolving, AI is taking shape, and the architecture is catching up to the ambition. None of the company’s claims felt like overpromising.

 

In a market filled with transformation stories, Sage is running a disciplined play. The question is whether it can maintain that discipline as it scales and converts its product investments, especially in AI and agentic workflows, into tangible value for the customers it already knows best.

SAP Sapphire 2025: Legacy Application Leader Moving Confidently Into a Data and AI Future

Staking a claim in a best-of-suite future

At SAP Sapphire 2025, one thing became immediately clear: SAP is no longer chasing the cloud market — it is positioning itself to define it. While best-of-breed has long been the enterprise default, a growing segment of the market is leaning toward consolidation: fewer vendors, tighter integration, faster outcomes. SAP sees an opening. With its dominance as a system of record and a broad portfolio spanning platforms and line-of-business (LOB) suites, the company believes it is uniquely equipped to serve these best-of-suite buyers and made a compelling case at Sapphire that it is actively working to turn this vision into reality.

 

SAP’s messaging has focused heavily on customers already operating in the cloud, shifting attention away from the sizable portion of its base still tethered to ECC (ERP Central Component). The forward-looking emphasis may be warranted. Although cloud migrations remain a strategic priority, they have been at the center of SAP’s story for the better part of a decade. While the customer mix still skews toward legacy deployments, TBR estimates that cloud revenue accounts for more than 60% of SAP’s total corporate revenue, presenting a solid base from which to expand contract sizes.

 

In addition to migration efforts, the company has built out a suite of integration, robotic process automation and data assets — many with high attach rates — that are driving much of its commercial cloud momentum. While TBR believes SAP will continue steadily transitioning legacy customers to the cloud, its land-and-expand strategy among new logos (born-in-the-cloud, midmarket) and existing customers leaning into modernization will provide ample growth opportunities to build on top of migration-related gains. For this reason, TBR believes SAP was justified in prioritizing its platform-centric cloud strategy at Sapphire 2025. The company has built a compelling cloud business, and that road map deserves to be in the spotlight.

Building an agentic flywheel

SAP’s central metaphor this year — the “flywheel” — describes a loop in which enterprise applications feed business data into a semantic layer, which powers AI agents that act on the data and push outcomes back into the apps. Put simply: if you own the context, you control the outcome. SAP believes its depth of structured business data gives it a defensible advantage in the race toward agentic AI. Fragmented stacks, the company argues, are the Achilles’ heel of enterprise automation. SAP promises to reduce the cost and complexity of AI adoption by delivering deeply integrated, outcome-oriented capabilities across its entire suite of products.

The state of Joule as a UI for the AI era

SAP wants 2025 to be the year agents move from prototype to production. Joule remains the user interface (UI), but it is now positioned as an orchestration layer, not just a chatbot. The company showcased use cases ranging from accounts receivable prioritization to automated financial close and proactive risk flagging. These scenarios emphasized traceability. For example, each step is visible to the end user, and each recommendation is auditable. That transparency, enabled by LeanIX, signals SAP’s commitment to building enterprise-grade controls around automation.

 

Today, most of these agents are operating in relatively structured environments. Financial workflows, inventory management and procurement tasks offer well-bounded problems. The leap to agents that navigate fuzzier terrain — customer onboarding, scenario planning or partner collaboration — has not happened yet. Agentic systems will continue to be built on a use-case-by-use-case basis, which takes time. SAP is developing more use cases and, at the event, showed a demo of Joule working through a tariff shock scenario. It featured each member of a fictional C-suite reacting in real time using embedded AI: the CFO reallocating capital, the chief revenue officer rerouting demand, the COO managing supply constraints, and the chief human resources officer rebalancing skills.

 

In TBR’s opinion, the demo felt like an oversimplification of a complex issue, but we were still impressed by the information an agent could gather and the actions it was able to execute. Obviously, agentic AI stands to be highly disruptive to SaaS workflows, and TBR believes SAP is playing the game well. Long-term, the breadth of the company’s ERP and LOB portfolios offers a massive amount of whitespace for innovation, enabling the company to continue attacking the opportunity on a use-case-by-use-case basis as it rides the wave.

Prioritizing semantic cohesion over data consolidation

SAP has spent years refining its data strategy. While Datasphere offered value in real-time processing, it was never intended to serve as a central data platform — especially with Snowflake, Databricks and Google Cloud leading in that space. The launch of Business Data Cloud (BDC) acknowledges this external reliance, advancing the same ambition Datasphere once aimed for: a harmonized, semantically enriched, agent-ready data layer.

 

BDC’s zero-copy architecture and native integrations with platforms like Databricks reflect this evolution. SAP is betting on semantic fabrics, not data lakes. Knowledge graphs across HR, finance and procurement add structure, while embedded governance ensures auditability. This plays to SAP’s strengths. The offering feels tailored to existing customers and midmarket newcomers, especially those with aggressive AI ambitions.

 

That said, harmonized data remains one of the hardest problems in enterprise IT. BDC assumes a level of data maturity that many SAP customers have not yet achieved. A large portion of the install base remains on premises, but for those already in the cloud — or willing to invest — the value proposition is becoming clearer. And SAP’s traction among net-new logos suggests the offering resonates with digital-native buyers looking to operationalize AI quickly.

Turning channel partners into strategic collaborators

The biggest partner takeaway from Sapphire was that SAP is no longer content with resell-and-implement motions. It wants deeper collaboration. The flywheel — applications, data, AI — only spins fast enough when partners are embedded into engineering, orchestration and execution. That shift has forced SAP to rearchitect how it manages partner access, tools and territory, with trust becoming a central pillar of its partner strategy.

 

SAP is also handing over the sales motions for its innovation stack. Partners now have access to the same internal tools used to build and deploy agents: Joule Studio, Prompt Optimizer, LeanIX, SAP Build and WalkMe. This is not only enablement but also an invitation to build within the stack. But access comes with expectation. These tools require fluency, not just familiarity. SAP wants to work with a deeper class of partner that can move from implementation to cocreation.

 

Equally important: territory. SAP is expanding partner-led coverage, particularly in North America and Europe. The new SAP Referral Program, scheduled to launch in 3Q25, formalizes this shift. Strategic partners will now own more of the Customer Value Journey — sales, delivery and post-sales engagement — especially in midmarket and vertical contexts.

 

Perhaps the most strategic move, though, is cultural. SAP is not just training partners; it is also increasingly transferring responsibility. KPMG is delivering structured Joule certifications. Accenture is codeveloping production agents. Capgemini is integrating Databricks into SAP’s data stack. Meanwhile, PartnerEdge, SAP’s overall partner program, is evolving to reward cloud performance, AI capability and vertical differentiation. Success in these areas will see the greatest investment and visibility from SAP.

SAP moves ahead with strategic clarity

All told, Sapphire 2025 marked a turning point, not because SAP introduced a radically new vision but because the company finally appears ready to execute on the one it has been quietly building for years. The narrative has matured, the tools are in place, and the platform is coherent. And the partners, customers and product ecosystem are starting to move together. Some heavy lifting remains, such as around migrations, data harmonization and partner fluency, but if SAP can stay focused on delivering scalable value through agentic AI, integrated data platforms and partner-enabled execution, the next chapter of the company’s growth story will look a lot less like catching up to the cloud and a lot more like leading in it.

IT Infrastructure Market Forecast

TBR Spotlight Reports represent an excerpt of TBR’s full subscription research. Full reports and the complete data sets that underpin benchmarks, market forecasts and ecosystem reports are available as part of TBR’s subscription service. Click here to receive all new Spotlight Reports in your inbox.

 

Organizations will continue to prioritize spending on AI infrastructure

Growth drivers

  • Investment in, renewed focus on and adoption of enterprise AI are increasing demand for high-performing infrastructure.
  • Private and hybrid cloud deployments increase demand for hyperconverged infrastructure form factors.
  • Organizations are prioritizing investments in denser and more energy-efficient infrastructure solutions to make way for AI.
  • Edge deployments are creating net-new workload opportunities for OEMs.

Growth inhibitors

  • The enterprise and SMB spend environment remains cautious and fragile as trade wars erupt.
  • ODMs are largely capturing cloud growth as they produce low-cost, custom, commoditized hardware for hyperscalers.
  • Commodity hardware and the popularity of software-defined infrastructure reduce OEMs’ pricing power.
  • Heightened demand for InfiniBand threatens traditional Ethernet-based networking solutions providers.

 

IT Infrastructure Market Forecast for 2024-2029 (Source: TBR)


 

If you believe you have access to the full research via your employer’s enterprise license or would like to learn how to access the full research, click the Access Research button.

Access Research

 

TBR predicts that the top 5 covered IT infrastructure OEMs will achieve double-digit revenue growth from 2024 to 2029, but their respective market shares will decline

IT Infrastructure Market Share for 2024 and 2029 (Source: TBR)


 

Despite shipping over $11B in Blackwell products in 4Q24, NVIDIA is racing to increase production to meet the market’s seemingly insatiable demand for AI servers

Within the OEM market, AI server demand continues to be driven primarily by services providers and model builders, but sovereigns are showing increased interest in OEMs’ AI infrastructure solutions, presenting the OEMs with a major opportunity. Additionally, although enterprise demand for on-premises deployments of AI infrastructure remains soft, especially for the most powerful and thereby highest-revenue-generating systems, the industry expects enterprise AI demand will accelerate throughout 2025 and 2026 as customers pursuing tailored AI solutions increasingly transition from the prototyping phase to the deployment phase.

TBR predicts Dell will lead covered vendors in terms of storage revenue growth, due in part to increased opportunities to attach storage sales to the company’s growing server business

Key takeaways

TBR forecasts the storage market will grow at a 13.4% CAGR from 2024 to 2029 as organizations across a variety of industries invest in modernizing and hybridizing their storage estates to support current and future workloads, including those related to AI. Organizations’ data volumes will continue to grow over the next five years as the rise of AI further underscores the value behind organizations’ proprietary data.
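To put the forecast in concrete terms, the arithmetic below (a quick sanity check, not TBR data beyond the 13.4% figure itself) shows what a 13.4% CAGR compounds to over the five years from 2024 to 2029:

```python
# What a 13.4% CAGR implies over 2024-2029 (five compounding years).
# Only the 13.4% rate comes from the report; the math is the standard
# compound-growth formula.

cagr = 0.134
years = 5

growth_factor = (1 + cagr) ** years  # total market growth multiple
print(round(growth_factor, 2))  # ~1.88, i.e., the market ends ~88% larger
```

In other words, a market growing at that rate nearly doubles over the forecast window.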

 

The storage market typically lags trends in the traditional server market, as is presently the case. However, as organizations increasingly transition from prototyping to deploying AI solutions, data management and orchestration have risen toward the top of key customer pain points. Recognizing this, storage OEMs are selling customers on the capabilities of their storage platforms, comprising software, adjacent services and sometimes hardware. Additionally, storage OEMs are forming partnerships with hyperscalers and other ecosystem players, like NVIDIA, to have their storage solutions validated and certified for operability and AI system reference architectures. TBR believes Dell and Hewlett Packard Enterprise (HPE) are well positioned for growth in storage over the next five years due to their strong data management capabilities and increased opportunities around attaching storage sales to server deals.

 

TBR estimates that in 2024 Lenovo overtook NetApp to claim the third-largest storage market share among covered vendors. The storage market has become more of a priority for Lenovo in recent years due to the segment’s higher margins, evidenced by the company’s recently announced acquisition of Infinidat. TBR forecasts NetApp will outperform Lenovo in five-year storage revenue CAGR, but Lenovo will retain its positioning in the market among covered vendors.

 

Storage Revenues and Market Share of Top 5 Vendors for 2024 and 2029 (Source: TBR)

 

IT infrastructure OEMs are expanding manufacturing capabilities in Saudi Arabia

EMEA market changes and vendor activities

Relative to the U.S., European economies have had more difficulty recovering from the pandemic; however, looking ahead to 2029, TBR forecasts covered vendors’ IT infrastructure revenue derived from the EMEA region will grow at a 12.9% CAGR due in large part to AI. While the EMEA market pales in comparison to that of the Americas, TBR believes the region’s strong growth will be driven both by rising AI adoption — especially among sovereigns — and by rapidly increasing technology and infrastructure investments in countries like Saudi Arabia.

 

TBR believes HPE is among the best-positioned covered vendors in the EMEA geography. Sovereigns in the region already have a strong working relationship with HPE due to their legacy investments in high-performance computing based on Cray systems, and the company has made some of the strongest commitments among covered vendors to develop infrastructure manufacturing capacity in Saudi Arabia and the rest of the Middle East.

AI & GenAI Model Provider Market Landscape


 

Interest in AI capabilities has not waned as enterprises view the technology as critical to long-term competitive positioning

The buzz around GenAI persists as enterprise interest is leading to adoption. Yet it is still early days, and many enterprises remain in exploration mode. Some use cases, such as data management, customer service, administrative tasks and software development, have already moved from the proof-of-concept stage to production. Still, the exploration phase of AI adoption will be a slow burn as enterprises seek opportunities beyond these low-hanging fruit. As seen in the graph to the right, most enterprises are evaluating AI qualitatively rather than quantitatively, adopting the technology to keep up with peers on the assumption that it will bring transformational improvement to business operations.

 

Source: TBR 2H24

Reasoning models excel at performing complex, deterministic tasks and have become the most popular models at the back end of agentic AI

The capability improvement brought by the iterative inferencing process has made reasoning models the focal point of frontier model research. In fact, most of the models sitting atop established third-party benchmarks are reasoning models, except for OpenAI’s GPT-4.5, which the company stated would be its last nonreasoning LLM. Put simply, the difference in output quality is too pronounced to ignore, especially regarding complex, deterministic tasks. As seen in the graph, reasoning models outperform their nonreasoning predecessors across the board, with the greatest distinction appearing in coding and math benchmarks. The strength in complex, deterministic tasks makes reasoning models particularly adept at powering agentic AI capabilities, offering a wider range of addressable use cases and greater accuracy. In addition, reasoning frameworks can be leveraged at any parameter count, with available reasoning models ranging from fewer than 10 billion parameters to more than 100 billion.

 

As SaaS vendors continue to build proprietary, domain-specific SLMs [small language models] to power their agentic capabilities, incorporating reasoning frameworks will be an important part of their development strategies. Although the capabilities of reasoning models are impressive, the models bring new challenges and are not necessarily the best choice for every application.

 

Simple content generation and summarization, for instance, do not necessarily require iterative inferencing. Moreover, the greater compute intensity caused by repeated processing at the transformer layer will compound existing challenges to scaling AI adoption. Not only will these models be more expensive to run for the customer, but they will also exacerbate the persistent supply shortages facing cloud infrastructure providers. Microsoft has noted infrastructure constraints as a headwind to AI revenue growth in the past several quarters, and the emerging need for test-time compute adds to these infrastructure demands. As discussed in TBR’s special report, Sheer Scale of GTC 2025 Reaffirms NVIDIA’s Position at the Epicenter of the AI Revolution, NVIDIA CEO Jensen Huang stated that reasoning AI consumes 100 times more compute than nonreasoning AI. This is, of course, a self-serving claim, as NVIDIA is the leading provider of GPUs powering this compute, but even allowing for exaggeration, the difference is measured in orders of magnitude. For the use of reasoning models to continue scaling, this high compute intensity will need to be addressed.
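To illustrate why that multiplier matters at fleet scale, the sketch below works through the compute gap with deliberately hypothetical volumes; only the 100x figure is taken from Huang's quoted claim, and the query count and unit normalization are assumptions for illustration:

```python
# Back-of-the-envelope view of the reasoning vs. nonreasoning compute gap.
# The 100x multiplier is Huang's claim as quoted above; the query volume
# and per-query unit cost are hypothetical, chosen only for illustration.

nonreasoning_units = 1        # compute units per nonreasoning query (normalized)
multiplier = 100              # claimed reasoning-to-nonreasoning compute ratio
queries = 1_000_000           # hypothetical daily query volume for a fleet

extra_units = queries * nonreasoning_units * (multiplier - 1)
print(extra_units)  # 99_000_000 extra compute units per day at this volume
```

Even if the true ratio is a fraction of the claimed 100x, the added inference load at production volumes is substantial, which is why test-time compute compounds the supply constraints described above.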

 


 

SaaS vendors will need to get on board with the new Model Context Protocol to ensure customers can use their model of choice

SaaS vendor strategy assessment

From a strategic positioning perspective, TBR does not expect the rising popularity of the Model Context Protocol to have an outsized impact, primarily because we anticipate all application vendors will adopt the framework to ensure customers can leverage the model of their choice. Furthermore, cloud application vendors are positioned to benefit from the standardization of API calls between models and their workloads. Through a standardized API calling framework, these vendors will be better positioned to drive cost optimization and improve workload management for embedded AI tools.

Recent developments

The Model Context Protocol is becoming the standard: The idea of the Model Context Protocol (MCP) has been steadily gaining popularity following its release by Anthropic in November 2024. At its core, MCP aims to address the emerging challenge of building dedicated API connectors between LLMs and applications by introducing an abstraction layer that standardizes API integrations. This abstraction layer — commonly referred to as the MCP server — would establish a default method for LLM function calling, which software providers would need to incorporate into their applications to access LLMs.
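The abstraction-layer idea described above can be sketched in a few lines. The code below is a hypothetical illustration of the pattern, not the official MCP SDK or wire format: tools register once behind a uniform interface, so any model-agnostic client can invoke them by name with JSON-style arguments instead of each model vendor building a bespoke connector per application.

```python
# Hypothetical sketch of the abstraction-layer pattern behind MCP (this is
# NOT the official MCP SDK): an application exposes tools through one
# standardized surface, and any LLM client calls them the same way.

from typing import Any, Callable, Dict


class ToolServer:
    """Stands in for an 'MCP server': one standard surface for many tools."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., Any]] = {}

    def register(self, name: str) -> Callable:
        """Decorator that registers a function as a named, callable tool."""
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            self._tools[name] = fn
            return fn
        return decorator

    def call(self, name: str, arguments: Dict[str, Any]) -> Any:
        # Every tool is invoked identically, so a model vendor writes one
        # connector instead of one per application.
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**arguments)


server = ToolServer()


@server.register("get_invoice_total")
def get_invoice_total(invoice_id: str) -> float:
    # Hypothetical application-side function exposed to any LLM client
    invoices = {"INV-001": 1250.0, "INV-002": 980.5}
    return invoices[invoice_id]


# Any model-agnostic client issues the same standardized call shape:
result = server.call("get_invoice_total", {"invoice_id": "INV-001"})
print(result)
```

The real protocol layers JSON-RPC messaging, capability discovery and transport on top of this idea, but the strategic point is the same: once the call shape is standard, swapping the model behind it becomes trivial.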

 

This standardization offers several benefits for model vendors, such as eliminating the need to build individual connectors for each service and promoting a modular approach to AI service integration, potentially unlocking long-term advantages in areas such as workload management and cost optimization.

 

For SaaS vendors, there is little reason to resist the shift toward MCP, and its growing popularity may make adoption inevitable. Application vendors like Microsoft and ServiceNow have already begun implementing the protocol by establishing MCP servers for the Copilot suite and Now Assist, respectively, and TBR expects other vendors to follow.

 

It is important to recognize, however, that this approach better suits vendors taking a model-agnostic stance — meaning they aim to empower enterprises to power agentic capabilities with any LLM. A possible exception lies with vendors that are less model-agnostic. For instance, Salesforce’s emphasis on proprietary models reduces the need for MCP and favors the company’s focus on native connectors between Customer 360 workflows and xGen models.

 

Ultimately, TBR expects Salesforce to adopt MCP, but there is an important distinction in how different SaaS vendors may approach standardization. Today, the BYOM [bring your own model] philosophy remains a priority for Salesforce, but if the company were to eventually push customers to use its proprietary models exclusively with Customer 360, its commitment to MCP could be deprioritized in favor of tighter customer lock-in.

Google enhances AI capabilities with the launch of Gemini 2.5 Pro, revolutionizing search functionality, healthcare solutions and multimodal content generation

Google remains differentiated in the AI landscape through the deep integration of its proprietary models across a broad product ecosystem, including Search, YouTube, Android and Workspace. Although many competitors focus on niche capabilities or open-source development, Google positions Gemini as a comprehensive, multimodal foundation model designed for widescale consumer and enterprise adoption. Google’s infrastructure, proprietary TPUs (Tensor Processing Units), and access to vast and diverse data sources provide a significant advantage in training and deploying next-generation models. Gemini 2.5 Pro is a testament to this strength, offering the best performance and largest context window available on the market. Although TBR expects the top spot to continue changing hands, we believe Google’s models will remain among the frontier leaders for years to come.

OpenAI advances AI development with GPT-4.5, cutting-edge agent tools and a premium ChatGPT Pro subscription to expand capabilities and improve user experiences

OpenAI is the most valuable model developer in the market today, largely due to the company’s success in productizing its models via ChatGPT. The mindshare generated by ChatGPT is benefiting the company’s ability to reach custom enterprise workloads, though OpenAI must be mindful of the widening price-to-performance gap relative to peers. From a sheer performance perspective, TBR believes the company’s emphasis on securing compute infrastructure via the Stargate Project, as well as its ongoing partner initiatives to gain access to high-quality training data, will ensure its models remain near the top of established third-party benchmarks over the long term.

ServiceNow Ecosystem Report

TBR Spotlight Reports represent an excerpt of TBR’s full subscription research. Full reports and the complete data sets that underpin benchmarks, market forecasts and ecosystem reports are available as part of TBR’s subscription service. Click here to receive all new Spotlight Reports in your inbox.

 

ServiceNow’s evolving value proposition, centered on seamless tech integration and sales alignment, provides a strong backbone for its alliance strategy, appealing to client-mindshare-hungry partners

Key trends

The need to unlock data and break down integration barriers between the back, middle and front office is as relevant as ever as customers look to deploy generative AI (GenAI) within their workflows. Acting as an abstraction layer on top of the enterprise system of record (SOR), ServiceNow is in a strong position to message around business transformation and to have more outcome-based conversations with clients, which aligns with the business models of IT services companies and consultancies. Firms that have experience reducing organizations’ technical debt and implementing systems like SAP, Workday and Salesforce are well positioned to use ServiceNow to deliver added value. As evidenced by its introduction of consumption-based pricing for AI Agents, ServiceNow is focused on selling value as part of its GenAI portfolio, which is certainly in step with the market, though outcome-based pricing may be something for the company to consider to further align with the global systems integrator (GSI) ecosystem and stay ahead of its growing list of SaaS competitors.

Go-to-market strategy

As ServiceNow continues to grow and pursue new market opportunities, the company is doing a better job of enabling the ecosystem in both sales and delivery. Compared with some of its SaaS peers, ServiceNow is less established in these markets, underscoring a clear need to leverage partners that have C-suite relationships, particularly in the lines of business (LOB), and that can articulate ServiceNow’s value as it exists alongside core enterprise applications. Despite its rapid expansion into more SaaS markets, ServiceNow remains a platform company at its core, but being a true platform company requires an ecosystem that can build on that platform. We suspect the Build motion, where partners sell custom, often industry-specific offerings they develop on the Now Platform, will be an increasingly critical motion, helping ServiceNow capitalize on opportunities.

Vendors

Given its smaller size, the ServiceNow practice is unsurprisingly among the fastest growing within the GSIs, with average practice-related revenue up 12.9% year-to-year in 4Q24. Several partners have commitments of more than $1 billion with ServiceNow, and in early 2025 Infosys and Cognizant joined their competitors in the Global Elite tier of the ServiceNow Partner Program. Cognizant is also the inaugural partner for ServiceNow’s Workflow Data Fabric platform, a key offering that rounds out ServiceNow’s portfolio, offering zero-copy integrations with key platforms, including Google Cloud and Oracle, to feed ServiceNow’s AI Agents. On the technology side, ServiceNow is also strengthening its partnerships with hyperscalers beyond Microsoft, which could unlock new points of engagement for services partners as they start to embrace the multiparty alliance structure. For example, Deloitte is looking at how it can build agents for ServiceNow-specific use cases, with an immediate focus on the front office, on Google Cloud Platform (GCP), while Accenture included ServiceNow on its partner list for the recently announced Trusted Agent Huddle for agent-to-agent interoperability.

Emergence of multipartner networks will test vendors’ trustworthiness and framework transparency

Prioritizing the needs of partners and enterprise buyers over internal growth aspirations will position vendors across the ICT value chain as leading ecosystem participants. It sounds like an idea born in marketing, but positive digital transformation (DT) outcomes will require multiparty business networks that bring together the value propositions of players across the technology value chain. By leading with their core competencies, players can establish needed trust among partners and customers alike, increasing their competitiveness against other players that have spread themselves too thin with aspirations of being end-to-end DT providers.

 

To better understand these approaches, we have identified four ecosystem relationship requirements that guide how the parties work together.

 

TBR Ecosystem Value Chain (Source: TBR)

TBR has identified four cloud ecosystem relationship requirements that guide how the parties work together

ServiceNow ecosystem relationship best practices

1. Consider the PaaS layer and its role in the SaaS ecosystem: As discussed throughout our research, the value is shifting from “out of the box” to “build your own,” and customers clearly believe building their own custom solutions around a microservices architecture will give their business a competitive advantage. Naturally, ServiceNow wants partners to take the lead in Now Assist delivery, but for the GSIs to see value, GenAI has to actually change the business process.

 

2. Drive awareness through talent development efforts: ServiceNow’s growing portfolio outside the core IT service management (ITSM) space is creating new channel opportunities for services partners to capitalize on, compelling them to invest in training and development programs. Gaining the stamp of approval from a ServiceNow certification program enhances services partners’ value proposition, especially in new areas such as the Creator Workflow and Build portion of the ServiceNow portfolio, which positions them to drive custom application and managed services opportunities. Standing out in a crowded marketplace where services and technology providers vie for each other’s attention will elevate the need to invest in consistent messaging and knowledge management frameworks that build buyer trust.

 

3. Prioritize IT modernization ahead of GenAI opportunities and scaling Now Platform deployments: Some vendors have made GenAI capabilities available only to cloud-deployed back-office suites, meaning customers still on legacy systems must first migrate to the cloud before they can adopt the emerging technology. Partners must account for this modernization prerequisite by prioritizing traditional migration services through broader programs like RISE with SAP if they hope to pursue new opportunities over the long term. Reducing legacy technical debt will also free up resources, both human and financial, allowing for broader ServiceNow portfolio adoption.

 

4. Set up outcome-based commercial models to scale adoption across emerging areas and protect against new contenders: Aligning commercial, pricing and incentive models that resonate with buyer priorities and achieve business outcomes can allow partners to expand their addressable market opportunities, especially as scaling GenAI adoption requires greater trust in portfolio offerings. ServiceNow’s consumption-based model provides a short-term hedge against potential tech partner disruptors, which may take on the risk of offering similar solutions but can better align with services partners’ messaging through the use of outcome-based pricing.

 

If you believe you have access to the full research via your employer’s enterprise license or would like to learn how to access the full research, click the Access Research button.

Access Research

 

Acting as an abstraction layer, ServiceNow has a unique opportunity to further expand into the back office to address integration pain points but risks further overlapping with its SOR peers

ServiceNow positions as system of action to expose gaps in core system of record

Existing as a platform layer that orchestrates and integrates workflows, ServiceNow has long been able to successfully enter new markets without encountering much head-to-head competition. But this is changing as ServiceNow, a $10-plus billion company, continues to drive traction with the LOB buyer by challenging much of the fragmentation that exists within front- and back-office systems. Over the past several quarters, ServiceNow has continued to launch new products in areas like talent management, finance and supply chain. One of the company’s biggest moves was in the front office with the launch of Sales & Order Management (SOM), giving customers the ability to use CPQ (configure, price, quote) and guided selling in a single product. Though ServiceNow famously integrates with all of the systems of record, these new innovations could pose a risk to the likes of SAP, Workday and Salesforce, which perhaps do not have the platform capabilities to build custom processes that can be tied back to the workflow, at least not in a truly modern way. To be clear, ServiceNow is not interested in being a core CRM, ERP or human capital management (HCM) provider, and today it acts as a service delivery system. But having customers store their data in the service delivery layer, as opposed to the core system of record, so they can use that data against a specific workflow is how ServiceNow aims to position as a “system of action.”

DXC Technology’s ServiceNow Ecosystem Strategy in Review

TBR assessment

DXC Technology has an established history and deep expertise within the ServiceNow ecosystem, with a partnership spanning more than 15 years, a talent pool of over 1,800 ServiceNow experts, and a track record of more than 7,200 global implementations with over 350 instances managed worldwide, all of which position the company as a mature and experienced service provider for ServiceNow. Notable client wins, such as those with the city of Milan (medical supply delivery during a crisis), Nordex Group (workplace safety management) and Swiss Federal Railways (unified customer inquiry management), underscore DXC’s ability to leverage the partnership to address diverse and critical business challenges across different industries and sectors. These successes highlight DXC’s capacity to translate its deep ServiceNow knowledge and implementation capabilities into tangible business value for its clients, suggesting a well-established and impactful ServiceNow practice.

Strategic portfolio offering

Through a strategic alliance with ServiceNow and bolstered by a dedicated global business group and acquisitions such as Syscom AS, TESM, BusinessNow and Logicalis SMC, DXC delivers a comprehensive range of ServiceNow-focused solutions. This approach enables DXC to digitize processes, enhance user experiences and transform service management across the full ServiceNow platform, driving business innovation at scale, including specialized solutions such as those for the insurance industry, where DXC has core competencies and longstanding customer relationships. DXC’s offerings span enterprise applications transformation, security solutions, and compute and data center modernization, all designed to maximize client efficiency and agility using the ServiceNow platform. The establishment of a new Center of Excellence in Virginia in November 2024, combining DXC’s industry strengths with ServiceNow’s solutions, further solidifies the two companies’ commitment to streamlining AI adoption and delivering cutting-edge solutions.

 

DXC Technology’s ServiceNow Ecosystem Strategy in Review (Source: TBR ServiceNow Ecosystem Report)

 

DOGE Federal IT Vendor Impact Series: Maximus

The Trump administration and its Department of Government Efficiency (DOGE) have generated massive upheaval across the board in federal operations, including in the federal IT segment. As of May 2025, thousands of contracts described by DOGE as “non-mission critical” have been canceled, including some across the federal IT and professional services landscape. TBR’s DOGE Federal IT Vendor Impact Series explores vendor-specific DOGE-related developments and impacts on earnings performance. Click here to receive upcoming series blogs in your inbox as soon as they’re published.

 

Maximus is unfazed by the uncertainty in the federal IT market

While vendors like ICF International have disclosed devastating impacts to their FY25 revenue as DOGE upends the stability of the federal IT market with stop work orders and contract cancellations, Maximus remains largely unaffected. Maximus’ leadership team stated on May 8 that a mere $4 million of its FY25 revenue had been negatively impacted year to date by DOGE’s actions.

 

Maximus’ U.S. Federal Services segment has continued to rapidly expand, generating $778 million in revenue during 1Q25. This represents growth of 10.9% year-to-year, all of it organic. The segment’s operating margin also kept improving, expanding 340 basis points year-to-year and 260 basis points sequentially to 15.3% in 1Q25.
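As a quick sanity check on the figures above, the implied prior-period values can be derived with simple arithmetic (a back-of-the-envelope sketch; the derived numbers are not separately reported by TBR):

```python
# Reported 1Q25 figures for Maximus' U.S. Federal Services segment.
revenue_1q25 = 778.0   # $M
growth_yoy = 0.109     # 10.9% year-to-year, all organic

# Implied 1Q24 revenue: divide out the growth rate (~$701.5M).
revenue_1q24 = revenue_1q25 / (1 + growth_yoy)

# Margin deltas: 100 basis points = 1 percentage point.
margin_1q25 = 15.3                        # percent
margin_1q24 = margin_1q25 - 340 / 100.0   # implied ~11.9%
margin_4q24 = margin_1q25 - 260 / 100.0   # implied ~12.7%
```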

 

These robust top- and bottom-line expansions were driven largely by volume growth on clinical assessments. U.S. Federal Services has benefited from the steady increase in demand for medical disability exam (MDE) services since the Honoring Our Promise to Address Comprehensive Toxics Act was passed in 2022. Maximus has been increasingly leveraging productivity-enhancing tools like AI to support these types of engagements so the company can successfully take on higher volumes of work while relying less on temporary contract workers.

 

Maximus has landed in a better position than it held when the Trump administration first took charge in 2017, with significantly expanded capabilities, breadth and scale. While it processes these clinical assessments and supports the Centers for Medicare & Medicaid Services’ multibillion-dollar Contact Center Operations (CCO) contract, Maximus has continued to parlay its existing client relationships into more lucrative opportunities like systems integration and digital transformation work.

 

Maximus is also taking advantage of the bipartisan support for federal agencies to augment their citizen-facing services. The vendor unveiled the Maximus Total Experience Management solution in 2024 to gain traction with the Federal Reserve System and other agencies making these investments.

 

Maximus has booked $2.9 billion in total contract value year to date across its U.S. Federal Services, U.S. Services and Outside the U.S. segments and has an additional $451 million in unassigned awards in its pipeline as of 1Q25.

 

Maximus estimated the value of its total addressable market at $41.2 billion as of 1Q25, down from $41.4 billion in 4Q24. Roughly $24.7 billion, or 60% of Maximus’ total addressable market, is tied to the U.S. Federal Services segment. It is also worth noting that over 60% of Maximus’ revenue during the first half of FY25 was derived from performance-based and fixed-price contracts, which are the Trump administration’s preferred contracting methods.

How Maximus will navigate 2025

Maximus will likely continue to be shielded from the brunt of DOGE’s disruptions, given the bipartisan support for Maximus’ critical citizen services like the CCO and MDE contracts. These two engagements alone were last disclosed as being responsible for around 25% to 30% of Maximus’ $4.9 billion in total FY23 revenue. With the CCO recompete withdrawn and domestic Regions 1 through 4 secured, Maximus’ long-term prospects are more favorable than they were in summer 2024, but the vendor is not completely immune from the surrounding chaos.

 

While the bulk of DOGE’s contract cancellations and stop work orders have focused on various consulting and engineering services, Maximus could still face some disruptions, given the department’s activity in the federal civilian market. Maximus is heavily entrenched in this space and has flagged the Centers for Disease Control and Prevention, IRS and U.S. Securities and Exchange Commission as crucial long-term clients. Maximus’ relationship with the IRS has notably evolved over the years and is the epitome of the vendor’s go-to-market strategy.

 

Maximus gained traction with the IRS initially through its BPO-oriented work before expanding the scope of its services for the agency and becoming a valued technology integrator. Maximus is now competing against top-tier players like Accenture Federal Services as part of the IRS’ $2.6 billion Enterprise Development, Operations Services blanket purchase agreement while providing other crucial services like transitioning the IRS to a cloud-based Enterprise Data Platform. However, the IRS, like many other agencies, could see its budget slashed by billions of dollars in federal FY26. With spending from key clients under threat, Maximus needs to demonstrate that its services are mission critical and in line with the Trump administration’s long-term priorities.

 

Part of DOGE’s stated mission is to modernize agencies’ systems and streamline processes. Maximus can showcase how its technologies are reducing the time needed to deliver critical citizen services without negatively impacting the customer experience (CX). Maximus can also leverage case studies illustrating how the company’s digital transformation and modernization services have positioned agencies for success and put them on the path to responsibly utilizing AI. Maximus’ leadership team recently disclosed that the vendor is currently discussing with clients, and even with DOGE representatives, how to make processes across the government more efficient.

 

Partnerships will be integral as vendors across the federal IT market look to quickly demonstrate their value to the new administration. While Maximus has historically been quiet regarding its alliance activity, this could change as the vendor aims to avoid falling behind. For example, Maximus recently announced a partnership with Salesforce to augment its CX as a Service efforts. The Maximus Total Experience Management solution is being augmented with the Agentforce platform to provide clients with tailored AI agents that use data to adapt to citizens’ needs and simplify interactions.

 

Maximus is also one of the vendors currently considering M&A activity to bolster its operations despite the ongoing uncertainty in the federal IT market. While Maximus will not make any blockbuster moves like its 2021 purchase of Veterans Evaluation Services, which rapidly expanded its clinical assessments business and its relationship with the U.S. Department of Veterans Affairs, the company will explore tuck-in acquisitions that can strengthen its capabilities with emerging technologies and its presence in core markets like the health sector.

 

Maximus’ venture capital arm will also closely monitor potential candidates that fit those criteria. Maximus disclosed during its 4Q24 earnings call that Maximus Ventures has made its first investment since being established in 3Q23. The unnamed company uses human-in-the-loop AI to optimize technicians’ workloads when providing clinical assessment services.

 

TBR anticipates that while Maximus will keep prioritizing federal opportunities relating to CX as a Service as well as technology modernization and optimization, it will balance the risk of its portfolio mix with state and local opportunities. The U.S. Services segment is particularly well positioned to support state and local governments in navigating the Trump administration’s sweeping changes and use these relationships as a chance to provide other services like unemployment insurance support.

 

TBR predicts that Maximus’ FY25 revenue will be $5.3 billion, representing a decline of 0.2% from FY24. Although U.S. Federal Services will continue to rapidly expand, its growth will not offset the normalization of the U.S. Services segment’s performance and the Outside the U.S. segment divestitures. TBR believes that these divestitures and the volume growth on key programs will cause the vendor’s operating margin to just narrowly surpass its FY24 operating margin of 9.2%.

 

TBR’s DOGE Federal IT Impact Series will include analysis of Accenture Federal Services, General Dynamics Technologies, CACI, IBM, CGI, Leidos, ICF International, Maximus, Booz Allen Hamilton and SAIC. Click here to download a preview of our federal IT research and receive upcoming series blogs in your inbox as soon as they’re published.