Oracle’s Full-stack Strategy Underscores a High-stakes Bet on AI

Oracle AI World, Las Vegas, Oct. 13–16: Newly rebranded as Oracle AI World, this year’s event carried the theme “AI changes everything,” a message supported with on-the-ground customer use cases in industries like healthcare, hospitality and financial services. New agents in Oracle Applications, the launch of Oracle AI Data Platform, and notable projections for Oracle Cloud Infrastructure (OCI) revenue also reaffirmed the emphasis Oracle is placing on AI. Though not immune to the risks and uncertainties of the AI market at large, Oracle is certainly executing, with the bulk of revenue from AI contracts already booked in its multibillion-dollar remaining performance obligations (RPO) balance. And yet, as OCI becomes a more prominent part of the Oracle business, big opportunities remain for Oracle, particularly in how it partners, prices and simply exists within the data ecosystem.

OCI rounds out the Oracle stack, strengthening its ability to execute on enterprise AI opportunity

2025 has been a transformative year for Oracle. With the Stargate Project — which pushed RPO to over $450 billion — and the recent promotion of two product leaders to co-CEOs, Oracle is undergoing a big transition that aims to put AI at the center of everything. In both AI and non-AI scenarios, the missing piece has been OCI, which plays a critical role in supporting Oracle’s long-solidified application and database businesses.


But now that OCI is transitioning to a robust, scalable offering that could account for as much as 70% of corporate revenue by FY29 (up from 22% today), Oracle is much better positioned than in the early days of the Gen2 architecture. For the AI opportunity, this means using the full stack — cost-effective compute, operational applications data, and what is now a fully integrated AI-database layer to store and build on that data — to guide the market toward reasoning models, making AI more relevant for the enterprise.

Steps Oracle is taking to simplify PaaS have already been taken by others, but the database will be Oracle’s big differentiator

Cross-platform entitlements from the data lake mark a big evolution in Oracle’s data strategy

For a long time, much of the market seemed resistant to open standards, but in the era of AI, storing data from disparate tools in a single architecture that works with open formats and engines has become common practice. With SQL and Java, open standards have been part of Oracle since the beginning, but Oracle is pivoting more heavily in this direction, with what seems to be a broader vision to support analytics use cases on top of the operational database, where Oracle is strongest. For example, at AI World, Oracle launched Autonomous Data Lakehouse; given how the market has revolved around data lakes and their interoperability, this launch has been a long time coming.

An evolution of Autonomous Data Warehouse, Autonomous Data Lakehouse is integrated directly into the Oracle Database, meaning the database can connect to the data lakehouse and read and write data in open formats, including Apache Iceberg, as well as work with analytics engines like Spark that are used to get data in and out of the lakehouse. Aside from reaffirming Oracle’s commitment to open standards and providing a testimonial for the Apache Iceberg ecosystem at large, Autonomous Data Lakehouse sends a strong message to the market that a converged architecture does not equal lock-in; with Oracle, customers can still pull data from a range of databases, cloud warehouses, streaming and big data tools. When it comes to accessing application data, however, this support is currently specific to Oracle applications.
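To make the engine-side workflow concrete, the sketch below shows an analytics engine (PySpark with the Apache Iceberg runtime) writing and reading an Iceberg table in open format, the same kind of table a lakehouse-integrated database could then query in place. This is an illustrative sketch only: the catalog name, warehouse path and table names are hypothetical, and no Oracle-specific API is used.

```python
# Minimal, hypothetical sketch of the engine-side workflow described above:
# Spark (one of the engines named in the paragraph) writing and reading an
# Apache Iceberg table in open format. Catalog name, warehouse path and table
# names are placeholders; no Oracle-specific API is used.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("iceberg-lakehouse-sketch")
    # The Iceberg Spark runtime version must match your Spark/Scala build.
    .config("spark.jars.packages",
            "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.6.1")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    # A Hadoop-style catalog over object storage stands in for any Iceberg catalog.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3a://example-bucket/warehouse")
    .getOrCreate()
)

spark.sql("CREATE NAMESPACE IF NOT EXISTS lake.sales")

# Write operational-style records into an Iceberg table in open format.
orders = spark.createDataFrame(
    [(1, "OPEN", 120.0), (2, "CLOSED", 75.5)],
    ["order_id", "status", "amount"],
)
orders.writeTo("lake.sales.orders").createOrReplace()

# Any engine that understands Iceberg (Spark here, or a lakehouse-integrated
# database in the scenario above) can read the same table without copying it.
spark.table("lake.sales.orders").show()
```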

As OCI becomes a more prominent part of the business and agentic AI further disrupts the applications market, it will be interesting to see whether Oracle takes the opportunity to support external applications natively. Last year’s decision to launch a native Salesforce integration within Fusion Data Intelligence (FDI), enabling customers to combine their CRM and Fusion data within the lakehouse architecture, suggests Oracle may be moving further in the direction of delivering its PaaS value outside its own apps base, which would create more market opportunity for Oracle.

The days of Oracle’s ‘red-stack’ tactics are starting to fade

Getting data into a unified architecture is only half the battle; a big gap remains at the governance layer for managing different data assets in Iceberg format. Addressing this piece, Oracle is launching its own data catalog as part of Autonomous Data Lakehouse, which, importantly, can work with the three core operational catalogs on the market: AWS Glue, Databricks Unity Catalog and Snowflake Polaris (open version).

Customers will be able to access Iceberg tables in these catalogs and query that data within Oracle. While for some customers a single catalog with a unified API may be ideal, in most scenarios, running multiple engines over the same data is the motivator. Oracle’s recognition of this is a strong testament to where the market is headed in open standards and in making it easier to federate data between platforms.
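As an illustration of what running a second engine over catalog-governed tables can look like in practice, the sketch below uses PyIceberg to read Iceberg tables registered in an external REST-style catalog (a Polaris-like endpoint). The endpoint, credentials, warehouse and table names are placeholders, and this is not Oracle’s documented integration; it simply shows the federation pattern the catalogs above enable.

```python
# Hypothetical sketch: reading Iceberg tables governed by an external REST-style
# catalog (a Polaris-like endpoint) from a second engine, here PyIceberg.
# Endpoint, credentials, warehouse and table names are placeholders.
from pyiceberg.catalog import load_catalog

catalog = load_catalog(
    "external",
    **{
        "type": "rest",
        "uri": "https://catalog.example.com/api/catalog",  # placeholder endpoint
        "credential": "client-id:client-secret",           # placeholder OAuth2 creds
        "warehouse": "analytics",                           # placeholder warehouse
    },
)

# Discover what the governing catalog exposes ...
print(catalog.list_namespaces())

# ... then query an Iceberg table in place; the data is never copied into the
# reading engine's own storage.
orders = catalog.load_table("sales.orders")
open_orders = orders.scan(row_filter="status = 'OPEN'").to_pandas()
print(open_orders.head())
```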

AI Data Platform should provide a lot of simplicity for customers

The Autonomous Data Lakehouse ultimately serves as the foundation of one of Oracle’s other big announcements: AI Data Platform. At its core, AI Data Platform brings together the data foundations — in this case, Autonomous Data Lakehouse integrated with the database — and the app development layer with various out-of-the-box AI models, analytics tools and machine learning frameworks.

Acting as a new OCI PaaS service, AI Data Platform is more a culmination of existing OCI services than a net-new product, though it still marks a big effort by Oracle to bring the AI and data layers closer together, helping create a single point of entry for customers to build AI on unified data. To be clear, this approach is not new, and vendors have long recognized the importance of unifying the data and app development layers. Microsoft helped lead the charge with the 2023 launch of Fabric, which now offers natively embedded SQL and NoSQL databases, followed by Amazon Web Services’ (AWS) 2024 launch of SageMaker AI.

Both offerings leverage the lakehouse architecture and offer integrated access to model tuning and serving tools in addition to the AI models themselves. Of course, in instances like these, Oracle’s differentiation will always rest on the database and the ability for customers to more easily connect LLMs to enterprise data that is already contextualized in the database.
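To illustrate the pattern behind that differentiation claim, the sketch below pulls already contextualized records from an operational database and uses them to ground an LLM prompt. The connection details, table and column names are hypothetical, and the LLM call is deliberately left as a placeholder rather than tied to any specific vendor API.

```python
# Illustrative sketch of the pattern above: pull already contextualized records
# from the operational database and use them to ground an LLM prompt. The
# connection details, table and column names are hypothetical, and the LLM call
# is left as a placeholder rather than tied to any specific vendor API.
import oracledb


def fetch_customer_context(customer_id: int) -> list:
    """Return the customer's most recent orders straight from the database."""
    conn = oracledb.connect(user="app", password="***", dsn="db.example.com/orclpdb1")
    try:
        with conn.cursor() as cur:
            cur.execute(
                """
                SELECT order_id, status, amount, created_at
                FROM sales_orders
                WHERE customer_id = :cid
                ORDER BY created_at DESC
                FETCH FIRST 10 ROWS ONLY
                """,
                cid=customer_id,
            )
            return cur.fetchall()
    finally:
        conn.close()


def build_grounded_prompt(customer_id: int, question: str) -> str:
    """Assemble a prompt that grounds the model in database-resident context."""
    rows = fetch_customer_context(customer_id)
    context = "\n".join(str(row) for row in rows)
    # The surrounding application would pass this prompt to whichever LLM it uses.
    return (
        "Answer using only this customer's recent orders:\n"
        f"{context}\n\nQuestion: {question}"
    )


print(build_grounded_prompt(42, "Which open orders should I follow up on?"))
```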

Hyperscaler Revenue and Growth for 10 Systems Integrators (Source: TBR)

As Oracle becomes more akin to a true hyperscaler, both partners and Oracle must adapt

With AI, platforms are playing a much more prominent role. Customers no longer want to jump between multiple services to complete a given data task. They also want a consistent foundation that can keep pace with rapid technological change. Six of Oracle’s core SI partners are collectively investing $1.5 billion in training over 8,000 practitioners in AI Data Platform, suggesting both Oracle and the ecosystem recognize this shift in customer expectations. It also speaks to the pivot Oracle’s partners may be trying to make. As Oracle strengthens its play in IaaS/PaaS, services partners — which still get the bulk of their Oracle business from SaaS engagements — may need to adjust.

The challenge is that the SIs have already invested so much in AWS, Microsoft and Google Cloud, so viewing Oracle through the hyperscaler lens may be easier said than done. For context, research from TBR’s Cloud Ecosystem Report shows that 10 SIs collectively generated over $45 billion in revenue from their AWS, Azure and Google Cloud Platform (GCP) practices in 2024. Put simply, it may take some effort on Oracle’s part to get SIs to think about Oracle on, say, AWS before AWS on AWS. This effort equates to investments in knowledge management and incentives, coupled with an overall willingness to partner in new ways.

The good news is AI Data Platform, which is available across the hyperscalers, will unlock integration, configuration and customization opportunities, resulting in an immediate win for Oracle in the form of more AI workloads, and eventual sticking points for the GSIs. In the long term, AI Data Platform will serve as a test case for partners’ ability to execute on a previously underutilized portion of the Oracle cloud stack and Oracle’s willingness to help them do so.

Role of SaaS apps pivots around industry outcomes

OCI, including PaaS services like AI Data Platform, is becoming a more prominent part of Oracle’s business. Next quarter (FY2Q26) will mark the inflection point in the Oracle Cloud business when IaaS overtakes SaaS in revenue. But for perspective, a lot of the IaaS momentum is coming from cloud-native and AI infrastructure customers leveraging Oracle for cost-effective compute. Oracle has over 700 AI customers in infrastructure alone, with related annual contract revenue growing in the triple digits year-to-year. Within the enterprise, however, the operational data residing in Oracle’s applications remains integral to the company’s strategy and differentiation.

At Oracle AI World, a lot of the focus was on the progress Oracle has made in delivering out-of-the-box agents not just across the Fusion suite but also in industry applications. Oracle reported it has 600 agents and assistants across the entire apps portfolio, and while the majority are within Fusion, more agents are coming online in the industry suite. These agents will continue to be free of charge, including for the 2,400 customers already taking advantage of AI in Oracle’s applications. While Oracle has long offered a suite of industry apps that are strategically key in helping it appeal to LOB decision makers, Industry Apps will start taking a more central role in Oracle’s strategy, coinciding with the recent appointment of Mike Sicilia, previous head of Industry Apps, to co-CEO.

At the event, it became clear that Oracle is starting to view its applications less as Fusion versus Industry and more as a unified SaaS layer. As customers remain under pressure to deliver outcomes from their generative AI (GenAI) investments, industry alignment will be key, especially as they increasingly find value in using this industry data to tune their own models. As such, TBR can see scenarios in which Oracle increasingly leads with its industry apps, potentially unlocking client conversations in the core Fusion back-office.

With all the talk about catering to outcomes with its industry apps, it will be interesting to see how far Oracle goes to align its pricing model accordingly. It may seem bold, but two decades ago, Salesforce disrupted legacy players, including Oracle, with the SaaS model. Eventually, a vendor will take the risk and align its pricing with the outcomes it claims its applications can deliver.

Final thoughts

The theme of this year’s Oracle AI World was “AI changes everything,” and Oracle is investing at every layer of the stack to address this opportunity. Key considerations at each component include:

  • IaaS: It would be very hard to dispute Oracle’s success reentering the IaaS market with the Gen2 architecture. Large-scale AI contracts will fuel OCI growth, making the IaaS business more than 10x what it is today in four years. With this growth, Oracle will give hyperscalers that have been in this business far longer a run for their money. We know OCI will be a big contender for net-new AI workloads. What will be more telling is whether OCI can continue to gain share with large enterprises, which are heavily invested with other providers.
  • PaaS: Oracle’s steps to simplify the PaaS layer with AI Data Platform, underpinned by Autonomous Data Lakehouse, will help elevate the role of the database within the broader Oracle stack. OLAP specialists will try to disrupt the core database market, and SaaS vendors, even those lacking the storage layer, will position themselves as data companies. Oracle’s ability to deliver a unified platform underpinned by the database to help customers build on their private data in a highly integrated way makes it well positioned to address the impending wave of AI reasoning.
  • SaaS: Today, cost-aware customers are less interested in reinventing processes that are working; they are investing in the data layer. In the next few years, the SaaS landscape will begin to look very different as a result of agentic AI. With these factors in mind, our estimates suggest the PaaS market will overtake SaaS, albeit marginally, in 2029. In Fusion, Oracle has undergone a big evolution from embedded agents to custom development to an agentic marketplace, but the features themselves are ultimately table stakes. A lot of SaaS vendors have tried and failed to do industry suites well. Oracle’s industry portfolio, though still playing the role of application, represents an opportunity for Oracle to go to market on outcomes and make AI more applicable within the enterprise.

Of course, what this AI opportunity really looks like and when it will fully materialize is up for debate. The amount of AI revenue companies are generating compared to what they are investing is still incredibly small, while AI model customers that are operating at heavy losses but making big commitments to Oracle pose an added risk; though, to be fair, Oracle’s ratio of AI revenue to AI investment will be far more favorable than those of its peers.

But Oracle AI World only cemented the sense that Oracle believes the risk of underinvesting far outweighs the risk of overinvesting. If the market adapts and customers show their willingness to put their own private data to work, then Oracle’s full-stack approach will ensure its competitiveness.

U.S. Telecom Enterprise Operator Benchmark

TBR Spotlight Reports represent an excerpt of TBR’s full subscription research. Full reports and the complete data sets that underpin benchmarks, market forecasts and ecosystem reports are available as part of TBR’s subscription service.


Post updated: Nov. 15, 2025

Growth in areas including mobility, FWA, IoT, MEC, PCN and public sector is only partially offsetting erosion in operators’ B2B legacy services

Revenue growth drivers

  • Wireless customer additions for smartphones and other connected devices
  • Customers migrating to higher-tier service offerings
  • Higher public sector revenue driven by first responder initiatives such as AT&T FirstNet as well as from large-scale federal task orders
  • Fixed wireless access (FWA) subscriber additions
  • Price increases
  • Adoption of value-added services in areas including SD-WAN, cybersecurity and unified communication to augment operators’ core broadband and mobility offerings
  • IoT, mobile edge computing (MEC) and private cellular network (PCN) deployments

Revenue growth inhibitors

  • Businesses reducing spending due to macroeconomic challenges including inflationary impacts related to tariffs
  • Legacy data solution customer disconnects, which result from businesses switching to other service providers and/or lower-priced service offerings
  • Customers disconnecting from fixed voice services to use wireless offerings exclusively
  • Asset divestments and operators retiring certain legacy solutions including copper-based services
  • Public sector agencies reducing spending due to Department of Government Efficiency (DOGE)-related cuts

U.S. Enterprise Operator Market Share and Revenue (Source: TBR)

Company profiles excerpt

AT&T’s enterprise wireless revenue growth was driven by FirstNet, which gained nearly 400,000 new connections in 2Q25 and partially offset continued wireline revenue declines

TBR’s assessment of AT&T’s enterprise strategies

AT&T’s total enterprise revenue decreased 4.6% year-to-year in 2Q25 to $7.6 billion, mainly as a result of AT&T Business Wireline revenue declining 9.3% year-to-year to $4.3 billion due to lower demand for legacy voice and data solutions as well as AT&T deemphasizing noncore services within its portfolio. Lower Business Wireline revenue was also attributed to the May 2024 sale of AT&T’s cybersecurity business to LevelBlue, a joint venture between AT&T and WillJam Ventures. Additionally, TBR believes AT&T’s lower Business Wireline revenue in 2Q25 was impacted by disconnects within federal agencies related to the activities of DOGE. Lower Business Wireline service revenue in 2Q25 was partially offset by fiber and advanced connectivity services growing 3.5% year-to-year, driven by higher fiber and fixed wireless revenue in the quarter.

AT&T’s enterprise wireless revenue grew 2.3% year-to-year to $3.3 billion, aided by over 398,000 FirstNet connection additions in 2Q25, bringing the base to 7.5 million total connections.

As of July, AT&T’s 5G RedCap (Reduced Capability) technology reached over 200 million POPs across the U.S. AT&T’s 5G RedCap technology enables clients to lower the cost of deploying IoT solutions by allowing devices to operate at reduced battery consumption and processing power levels while running over AT&T’s 5G SA network. AT&T also announced the certification of the Franklin Wireless RG350, the first commercially approved 5G RedCap mobile hotspot on AT&T’s network. The RG350 is designed to provide connectivity to support scenarios including remote work and travel.


In June AT&T launched AT&T Business Voice, an all-in-one VoIP solution that enables businesses to replace legacy analog systems with a modern digital platform. The service supports multiple business lines, including fax machines, fire alarms, security systems, elevator phones and public safety phones. AT&T Business Voice provides a seamless transition from traditional copper line infrastructure to a robust digital network, allowing businesses to retain their existing phone numbers and equipment while adding new lines as needed. Designed specifically for small and midsize businesses, the solution includes key features such as 24/7 monitoring, built-in battery backup, optional wireless failover to maintain service during broadband interruptions, spam-call protection, and advanced telephony management capabilities.

Following the launch of the consumer-focused version of AT&T Turbo in May 2024, the company released AT&T Turbo for Business in June 2025. The offering ensures businesses maintain reliable and fast mobile connectivity, even during times of high network congestion. Turbo for Business prioritizes the treatment of data for business-critical applications, delivering the highest level of data priority currently available on AT&T’s wireless network. AT&T’s Business Unlimited Premium 2.0 with Turbo plan is available for $15 more per line per month compared to its standard Business Unlimited Premium 2.0 plan.

In June AT&T announced advanced upgrades to its AT&T ESInet emergency communications platform. The new capabilities include native picture and video messaging to Public Safety Answering Points (PSAPs) and automatic crash alerts from select 2026 Toyota vehicles equipped with AT&T’s Connected Car technology. These enhancements enable first responders to access critical data in real time, improving situational awareness and decision making. Although AT&T is currently the only nationwide provider offering these features, the standards-based design allows other wireless carriers to integrate them into their networks. The upgraded features will be available to new AT&T ESInet customers starting in October.

Segment performance excerpt

T-Mobile continued to outperform rivals in total enterprise revenue growth in 2Q25

AT&T and Verizon remain the largest incumbent operators in the U.S. B2B market by revenue due to the companies’ established client relationships and reputations. However, AT&T and Verizon reported total enterprise year-to-year revenue declines of 4.6% and 0.3%, respectively, in 2Q25 as the companies remain challenged by disconnects from customers on legacy wireline solutions.

T-Mobile outpaced benchmarked operators in total enterprise revenue growth in 2Q25. According to the company, T-Mobile for Business led in areas including U.S. business postpaid net additions, business postpaid phone net additions, business 5G FWA net additions and business postpaid churn performance. T-Mobile is attracting clients via the improved reputation of its wireless network as well as its expanded portfolio offerings in advanced services areas such as IoT, MEC and PCN. T-Mobile for Business projects it will generate a double-digit service revenue CAGR from 2023 to 2027.

Lumen’s total enterprise revenue declined 3.7% year-to-year to $2.6 billion as the company generated lower revenue for services including traditional VPN, Ethernet, legacy voice services, and other legacy products and services. Lumen does not compete in the wireless market, which is presenting a challenge as all other benchmarked operators sustained wireless revenue growth in 2Q25 and leverage wireless for bundling with wireline services.

U.S. Operator Total Enterprise Revenue and Growth

U.S. Operator Total Enterprise Revenue and Year-to-year Revenue Growth (Source: TBR)

 

AT&T and Verizon continued to lead the U.S. in large enterprise revenue due to their deep footing among Fortune 500 companies, while Comcast and T-Mobile had the highest growth rates

AT&T and Verizon are significantly outpacing competitors in large enterprise revenue due to the operators’ deep entrenchment among Fortune 500 companies. However, smaller operators are positioning to gain market share among larger businesses. For instance, T-Mobile for Business is trailing other benchmarked operators in large enterprise revenue but is gradually gaining share in the segment as T-Mobile is attracting large enterprises via the competitive pricing of its unlimited data 5G plans; increased advertising; and time-to-market advantage in deploying 5G SA, which enables T-Mobile to offer B2B-specific offerings such as T-Priority.

Comcast’s acquisition of Nitel, which closed on April 1, bolstered the company’s large enterprise revenue in 2Q25. The purchase enables Comcast to expand its footprint in the midmarket and enterprise customer segments, and adds Nitel’s 6,600 clients across the U.S. in verticals including financial services, healthcare and education. Acquiring Nitel also enables Comcast Business to expand its channel distribution strategy to more effectively target new sales opportunities within the midmarket and enterprise segments. Comcast Business is gaining AI and software tools from the Nitel acquisition that will enable it to enhance its sales and customer service capabilities. These benefits include robust orchestration capabilities, an instant quoting tool that makes it easier to price and establish deals across multiple vendors and sites, and a digital dashboard that offers a single-pane-of-glass view of deployments.

U.S. Operator Large Enterprise Revenue and Growth

U.S. Operator Large Enterprise Revenue and Year-to-year Revenue Growth (Source: TBR)

 

Cloud Data & Analytics Market Landscape

TBR Spotlight Reports represent an excerpt of TBR’s full subscription research. Full reports and the complete data sets that underpin benchmarks, market forecasts and ecosystem reports are available as part of TBR’s subscription service.

To better enable AI, vendors reevaluate architecture and take big steps to bring operational data closer to the analytics workflow

Data Cloud Trends and Examples (Source: TBR)

 

New AI applications will drive near-term growth in the data cloud market, while SaaS vendors’ need to pivot as GenAI disrupts will be a defining trend in the coming years

Key takeaways

The data cloud market, which we expect will reach $124 billion in 2025, up 22% from 2024, remains driven by foundational workloads that support the storage and querying of data. While most of the data science and engineering efforts — and associated IT spend — are tied to moving data across systems and preparing it for transformation, the pressure to abstract insights and deliver business value from data is increasing. In the coming years, customers’ expectations around analytics will increase and drive a greater need for tools that can effectively bridge the gaps between technical and business personas within an organization.

Data Cloud Revenue and Market Share (Source: TBR)

AI demand remains strong, and extending AI to the workload (as opposed to extending the workload to the AI), underpinned by the database, is often a winning strategy. Vendors continue to closely integrate their services to promote data sharing and access — meeting customers where their data is — to support AI development.

Though responsible for vast amounts of critical business data, SaaS applications are not driving the storage of that data, which will be important as the rise of agentic AI disrupts the application layer. In the coming years, the data cloud market will be influenced by vendors trying to adapt to the disruption caused by generative AI (GenAI). For instance, SaaS vendors may increasingly buy their way into the data cloud space (e.g., Salesforce’s acquisition of Informatica), while existing data cloud players will explore new growth opportunities. Another big example is Databricks’ recent entry into the database market with Lakebase to bridge the gap between online transaction processing (OLTP) and online analytical processing (OLAP) systems.

Overall PaaS market growth will be fueled by AI offerings, such as Amazon Bedrock, Azure OpenAI and Google Vertex AI. Even so, the relationship between data and AI is symbiotic, and as customers build and deploy more AI apps using these services, adoption of database offerings will increase.


Customer scenario excerpt

Oracle’s ability to provide robust interconnectivity with other hyperscalers proves highly strategic, as legacy Oracle Database customers modernize on other clouds

Customer scenario: A distributed architecture leveraging Oracle and Microsoft Azure drives heavy cost savings

  • A company with a large legacy footprint, including both Microsoft and Oracle databases, leveraged a distributed architecture enabled by Oracle’s interconnectivity agreement with Microsoft Azure. Although this distributed architecture may not be as high-performing as hosting the associated applications in the cloud, its performance was still deemed better than that of the legacy data center.
  • To entice the customer to stay with its Oracle databases, Oracle offered significant savings. The customer ultimately retained its Oracle databases, highlighting the value of Oracle’s partnership with Microsoft. Oracle is also expanding its interconnectivity with other cloud providers, including through its multicloud database strategy.
  • This particular customer decided to ultimately migrate applications to Oracle Cloud Infrastructure (OCI), highlighting the role OCI plays in supporting the apps business.

 

“We ran a lot of performance tests to say, if we have this whole application in Azure, what does that performance look like? And what’s interesting is that Oracle actually has very strong interconnectivity that they’ve built with Azure. And, and when we ran the performance test, it turns out that, you know, the performance is actually better when you go to the cloud, for many use cases, but not all the use cases, right? The application issues are better, but sometimes the data may not be as optimized because now it’s inside Azure. And so we also knew Oracle came to us and said, ‘If you pay a certain amount of money for your on-premises databases, let’s say that money is $3 million, we’ll knock off your rates by 30%. So we’ll save you a million dollars. If, instead of moving all these to the Azure cloud, you move them to our cloud.’ And we thought they were crazy, but they showed us that is technically feasible, they have this high-speed interconnect. And we did that. And we realized that’s a lot of money to save, right? If you can save a couple million dollars, which otherwise I’m stuck in these legacy systems, you know, I’m there for the system for at least a couple of years. And so, it made sense for us to put our databases therefore, in Oracle, and we saw it was giving us better performance than we were in the data center. But slightly, not as great a performance as the whole application was in Azure itself. So we took a little bit of performance hit with this distributed architecture, but it was still better than a data center. And the cost savings were very significant. Were in millions of dollars. And so after we ran all the performance tests, security tests, and all of these pieces, we’ve sort of decided that one of the applications, we’re going to migrate completely, to OCI itself.” — CTO, Manufacturing

Ecosystem developments excerpt

Maturing data cloud companies invest in programmatic partner initiatives, which is critical to improving engagement, accountability and alignment within the ecosystem

Sales & marketing staffing

  • Collective sales and marketing headcount across the seven data cloud pure plays in this report reached roughly 16,600 in CY2Q25, up 24% year-to-year. Snowflake has been particularly focused on hiring more technical sales roles, including sales consultants who can identify new use cases and migration opportunities.
  • Confluent’s transition to a consumption-led GTM model has been supporting productivity, with revenue per S&M employee continuing to grow in the low double digits on a year-to-year basis.

Partner developments

  • On the heels of launching new partner programs including the Accelerate with Confluent program for SI partners, Confluent plans to invest $200 million in its global partner ecosystem. This investment will support new engineering efforts and partner enablement resources on the GTM side.
  • MongoDB continues to invest in the MongoDB AI Applications Program (MAAP), aligning engineering, professional services and partner resources to help digital natives get started with AI. That said, there is an enterprise component to the program, and GSIs like Accenture and Capgemini are participating members.

Sales motions

  • Self-service channel: Currently, 7,300 MongoDB customers are supported through direct sales, which represents just 12% of MongoDB’s nearly 60,000 customers. Naturally, many of MongoDB’s customers are SMBs and midmarket businesses supported through the self-service channel via cloud marketplaces, which act as a great sales productivity lever for MongoDB. As MongoDB continues to look upmarket and focus on capturing wallet share from its largest, most strategic accounts, the number of self-serve customers passed on to direct sales teams will likely continue to decline.
  • Consumption-based selling: Confluent has a consumption-based revenue model and has been taking steps to align its go-to-market model accordingly. For instance, the company is now compensating salespeople on incremental consumption and new logo acquisition.

Geo and industry segmentation

  • Databricks is rapidly expanding in Latin America and opened a new office in São Paulo, Brazil, in early July. Total headcount in the region is expected to reach over 200 employees by the end of this year. Databricks also recently made its platform available in Google Cloud’s São Paulo region, officially making Databricks available on all major public clouds in Brazil.
  • As highlighted in TBR’s 1H25 U.S. Federal Cloud Ecosystem Report, federal cloud spending is expected to surpass $31 billion by FFY28. Data cloud vendors continue to certify their offerings to address this opportunity. For instance, Cloudera has reached some major milestones, including achieving Moderate Provisional Authority to Operate (P-ATO) status for its platform, and recently secured a blanket purchase agreement with the DOD as part of the Enterprise Software Initiative (ESI).

Vendor profiles excerpt

Though still staying true to its iPaaS heritage, Boomi actively repositions as an orchestrator of AI agents to reduce the complexity and sprawl that development platforms are creating

TBR Assessment: Integration Platform as a Service (iPaaS) continues to play a critical role in helping IT departments meet the needs of their business users and, by extension, the end customers. As a key market player, Boomi upholds its long-established position as readily scalable and able to support a low TCO (total cost of ownership). In many ways, Boomi achieved this position by being a cloud-native application — an advantage that not all iPaaS players have. Building on its integration heritage, Boomi is expanding the reach of its platform to support the needs of AI agents, an emerging opportunity as PaaS vendors make it incredibly seamless to spin up an AI agent, indirectly creating more sprawl and complexity. With new tools like Boomi Agentstudio (formerly Boomi AI Studio), which is now generally available, Boomi is giving customers a way to manage, govern and orchestrate agents on a single platform, reinforcing the shift the vendor is making to become more of an agent orchestrator. Though Boomi stays true to its iPaaS roots, which remain a core component of the Boomi Enterprise Platform and Agentstudio, the company clearly recognizes the need to pivot like the rest of the market as a result of GenAI’s emergence.

Customer Insight

“So when we need to integrate through APIs, there are multiple interfaces we need, and we create where we end up transferring the data capture. But at the same time, there are non-APIs where we have connectors and workflows and we’ll be able to transfer the data into multiple applications. Now, when we talk about entire API management, we use Boomi for the entire API management, support and configuration of APIs to ensure that we can centrally test and deploy APIs, and we can enforce contracts and policies with an API gateway.”
— VP Technology, Financial Services

Lenovo Aims to Become a Global Solutions Provider through Strategic Partnerships and AI-driven Innovation

Lenovo reaffirmed its commitment to Hybrid AI for All during the company’s annual Global Industry Analyst Conference, held Oct. 20 to 23 at the company’s U.S. headquarters in Morrisville, N.C. The conference featured a series of closed-door sessions during which Lenovo executives briefed roughly 70 industry analysts on all things Lenovo, from liquid cooling to agentic AI solutions. Throughout the conference, Lenovo executives provided updates on the company’s overall strategy and ambitions to shift its perception from PC vendor to full-stack, end-to-end solution provider.

From PC vendor to full-stack solution provider

At its core, Lenovo is an engineering company with a particular strength in computing, having acquired IBM’s PC business and later its x86 server business. However, Lenovo’s investments in scope expansion underpin its transformation into a solutions- and services-led technology provider.

Since the formation of its Solutions and Services Group in 2020, Lenovo has not looked back. In 2024 the company announced its Hybrid AI Advantage framework, which serves as a powerful example of how the company’s portfolio is widening and how the company’s culture and go-to-market approach are evolving to emphasize its increasingly solutions- and services-led model and its growing focus on full-stack AI.

However, despite the company’s investments in transformation, Lenovo has retained its core competencies in engineering and manufacturing, leveraging its expertise and global footprint to drive innovation, cost efficiencies and scale. For example, in Lenovo Infrastructure Solutions Group, the company has leaned into its unique engineering and manufacturing capabilities to establish its rapidly growing ODM+ business, which targets cloud services providers.

Additionally, within the Lenovo Intelligent Device Group, the company has leveraged these capabilities to mitigate the impacts of tariffs and drive design innovation in an otherwise increasingly commoditized PC market. The company’s Solutions and Services business acts like a margin-enriching interconnective tissue over the company’s robust client and data center hardware portfolios and a catalyst to move the company further up the value chain.

As a dual-headquartered company (North Carolina and Beijing), Lenovo’s global footprint is unmatched by its hardware OEM peers, and the company’s business in China largely operates in its own silo. China is one of several regional launch markets for early pilots. However, Lenovo follows a local-first, global-by-design approach: Solutions are incubated to meet local data, content and regulatory requirements, and when there’s Rest of World (ROW) demand, Lenovo reimplements features for global compliance rather than porting code or models 1:1. No China-based user data, models or services are reused in ROW products.

Building on this global theme, during the conference Lenovo noted that it is hiring Lenovo AI Technology Center (LATC) engineers across the world, in places like Silicon Valley; Chicago; Raleigh, N.C.; Europe; Tel Aviv, Israel; and China. Additionally, the company highlighted its investment in establishing new AI centers of excellence to centralize and expand regional AI talent and support independent software vendors in their development of industry- and use-case-specific solutions. Rather than making a net-new investment, Lenovo has leveraged this strategy successfully to expand its AI library, a catalog of preconfigured AI solutions ready to be customized and deployed by Lenovo. In addition to industry- and use-case-specific AI solutions, the company also works with regional independent software vendors to develop solutions tailored to the preferences of customers in specific geographies, such as China.

While Lenovo’s portfolio and go-to-market strategy may differ slightly by geography, the company’s pocket-to-cloud and One Lenovo initiatives remain the same around the world and are the basis for the company’s differentiation in the market — a theme during every session of the conference. From smartphones to servers, Lenovo is vying for share in every segment, and by investing in the unification and openness of its portfolio, whether it be through the development of homegrown software or new ecosystem partnerships, the company is positioning to grow in the AI era. Changing its perception from a PC powerhouse to a solution provider remains one of Lenovo’s largest challenges, but the company’s work in sponsoring and supporting FIFA and F1 with its full-stack technology capabilities demonstrates its willingness to invest in overcoming this hurdle.

Lenovo is investing to win in enterprise AI and bring smarter AI to all

Lenovo’s AI strategy spans all three of the company’s business units and echoes the Smarter AI for All mantra and pocket-to-cloud value proposition.

During the conference Lenovo reemphasized its belief that the meaningful ramp-up of enterprise AI is on the horizon as AI inferencing workloads continue to proliferate. Lenovo has high-performance computing roots, and its Infrastructure Solutions Group (ISG) derives a significant portion of its revenue from cloud service provider (CSP) customers, in contrast to some of its closest infrastructure OEM competitors. Even so, Lenovo’s investments and the composition of its portfolio emphasize the company’s intent to drive growth through its Enterprise and Small/Medium Business (ESMB) segment, supporting all levels of on-premises AI computing, from the core data center to the far edge.

Additionally, Lenovo continues to go against the grain on the notion that AI workloads belong exclusively on the GPU and makes a case for lighter workloads being deployed more efficiently on the CPU and in smaller edge server form factors. Lenovo’s view is heterogeneous AI: Training and high-throughput inference lean on GPUs; latency-sensitive and personal workloads increasingly run on NPUs and optimized CPUs, and the company’s portfolio spans all three — multi-GPU-ready workstations, edge servers with GPU/CPU mixes, and Copilot+ PCs with NPUs for local inference — so customers can place the right workload on the right engine.

However, perhaps what differentiates Lenovo’s infrastructure portfolio most is the company’s Neptune liquid cooling technology, which comes in three flavors: Neptune, Neptune Core and Neptune Air. Intensive AI and machine learning workloads require dense compute infrastructure that, in some cases, generates so much heat it requires liquid cooling as opposed to traditional air cooling. In addition, even less-intensive workloads often benefit from liquid cooling, which generally operates at lower costs once implemented. This is where Neptune liquid cooling comes in.

The company’s flagship Neptune solution offers full system liquid cooling, making it ideal for the most demanding AI and high-performance computing workloads. The company’s other two offerings — Neptune Core and Neptune Air — deliver lower levels of heat removal but are more easily implemented. For instance, while Neptune Air offers the lowest levels of heat removal, the simplicity of the solution makes it easier to implement, especially in existing data center environments, supporting lower cost transitions to liquid cooling.

TBR sees Lenovo’s family of Neptune solutions as a major advantage, as the variety of offerings targets customers and environments in different stages of liquid cooling adoption. Lenovo’s experience in retrofitting data centers with liquid cooling also presents a strong services opportunity for the company and supports enterprise adoption of higher-power AI servers in their on-premises environments. Further, because liquid cooling is more efficient than air cooling, Neptune supports Lenovo’s sustainability initiatives and delivers strong total cost of ownership savings in many scenarios, which is something IT decision makers tend to scrutinize heavily when making investments.

Unlike its close competitors that have invested heavily in data management and orchestration layers leveraging their networking and storage solutions, Lenovo does not play in the data center networking space, instead choosing to be networking-agnostic and partner-first in this area, which the company sees as an advantage due to geographical differences in customer preference. However, the company’s results have yet to prove that this networking strategy is materially advantageous. Additionally, while complex, networking is typically more margin rich than compute and storage while also presenting myriad attach and services opportunities for OEMs with first-party full-stack infrastructure portfolios.

Adjacent to the company’s infrastructure offerings, Lenovo executives stated during the conference that there should be more adoption of workstations as part of enterprises’ on-premises AI adoption and solution development. Compared to sandboxing AI solutions in the cloud, Lenovo sees its workstations, which can support up to four NVIDIA RTX discrete GPUs, as a more practical and economical option. However, in addition to the company’s Windows-based workstations, Lenovo also showed off its NVIDIA DGX Spark-inspired desktop, which is geared more heavily toward use in conjunction with NVIDIA DGX Cloud.

Rather than running Windows OS, Lenovo’s DGX Spark-inspired desktop runs DGX OS, a Linux-based operating system, and is ideal for buyers that already have DGX Cloud resources. With desktop offerings for AI spanning multiple operating systems, Lenovo’s Intelligent Device Group showcases the company’s ambition to create AI systems for all types of users. Looking ahead, both TBR and Lenovo expect the adoption of GPU-enabled workstations to grow as an increasing number of enterprises experiment with the development and/or customization of preconfigured AI solutions.

Through a services lens, Lenovo’s enterprise AI strategy centers on the company’s Hybrid AI Advantage framework. Similar to frameworks used by competitors such as Dell Technologies and HPE, Hybrid AI Advantage includes NVIDIA AI Enterprise software components intended to allow for the development of industry- and use-case-specific AI solutions. However, while NVIDIA AI Enterprise can be thought of as a collection of foundational tools to build AI agents, Lenovo’s AI library goes a step further, offering more out-of-the-box industry- and use-case-specific AI solutions.

The composition of the Lenovo AI library is largely predicated on solutions developed in conjunction with ISVs through Lenovo’s AI Innovators program. As Lenovo expands its footprint of AI centers of excellence, TBR expects the number of customizable, near-plug-and-play AI solutions to grow, further cementing the company’s differentiation in the marketplace. Additionally, Lenovo argues that its Agentic AI Platform further differentiates Hybrid AI Advantage from what is offered by competitors.

Lenovo’s Solutions and Services Group is equipped with strengthening auxiliary services to support customers wherever they are on their AI journey. This is where the company’s Hybrid AI framework comes into play. The framework has five components: AI discover, AI advisory, AI fast start, AI deploy and scale, and AI managed. The first two components underscore Lenovo’s growing emphasis on delivering professional services, while the third component — where many customers enter the framework — is where Lenovo aligns customers with a solution from the company’s AI library. The last two components highlight Lenovo’s ongoing interest in delivering deployment and ultimately managed services through the company’s maturing TruScale “as a Service” business that caters to both infrastructure solutions and devices deployments.

At the end of the day, Lenovo understands that hardware — specifically compute hardware like PCs and servers — is its strength, but by developing prebuilt solutions and overlaying its expanding services capabilities, the company is investing in moving up the value chain to drive margin expansion and deepen customer engagement.

Intelligent Device Group is doubling down on its unified ecosystem play

At 67.3% of total reported segment revenue in 2Q25, Lenovo’s Intelligent Device Group accounts for the lion’s share of the company’s top line, and TBR estimates 88.5% of the segment’s revenue is derived from the sale of PCs. Over the trailing 12-month (TTM) period ending in 2Q25, TBR estimates the company’s PC business generated nearly $40 billion, growing 14.6% year-to-year and resulting in an approximate 130-basis-point expansion in PC market share, according to TBR’s 2Q25 Devices Benchmark.

In line with the company’s ambitions to change its perception from a PC vendor to a solutions provider, and due to the company’s already established footprint in the PC market, many of the general sessions during the conference focused on the company’s AI position, with specific emphasis on Solutions and Services Group and Infrastructure Solutions Group. However, during the Intelligent Device Group briefings, Lenovo executives confirmed the company has no intention of giving up share in devices — the business on which the company’s success has been predicated. Lenovo touted its leadership position in several segments of the PC market, with the largest being the commercial space. Business leaders acknowledged that it is a good time to take share in the commercial Windows PC market, and by investing in the development of proprietary components and feature sets, the company is actively dispelling the notion that the PC market is fully commoditized. Lenovo has continued its engineering collaboration with Microsoft on Copilot+ PCs.

Beyond the PC, Lenovo’s investments in growing the share of its Motorola smartphone business have been paying off, and the company is not taking its foot off the gas. To drive cross-selling opportunities, the company is deploying a marketing strategy targeting a younger customer base and is leaning into creating a unified device ecosystem integrated with AI features and capabilities, something the company refers to as its One AI, Multiple Devices strategy. However, unlike other unified device ecosystem plays, such as that of Apple, Lenovo’s play is more open, with the company supporting cross-device features between its PCs and smartphones running both Android OS and iOS. Thus far, Lenovo has seen limited traction in cross-selling smartphones and PCs in the commercial space; however, TBR believes the company’s One AI, Multiple Devices strategy could help shift the tide.

PC Data for 2Q25

2Q25 TTM Windows PC Market Share and Estimated 2025 Lenovo PC Revenue Mix (Source: TBR)

 

During its Global Industry Analyst Conference, Lenovo’s focus on maintaining and even expanding its leadership position in the commercial PC segment was obvious, but what was perhaps more interesting was how the company is marketing several of its PCs that are in direct competition with Apple in an effort to appeal to a younger segment of the PC market. Lenovo is promoting its device brand as premium, trusted and innovative — aspects supported by the company’s leadership in PC design and engineering as well as its ongoing investments in device security through proprietary software developments. Lenovo also showcased innovative designs, such as motorized expanding screens, and developments down to the motherboard level, which harkened back to the company’s core legacy competencies in engineering and manufacturing.

Through partnerships and portfolio innovation, Lenovo is gradually changing its perception in the industry

Lenovo’s FIFA and F1 partnerships underscore the company’s investments in growing its brand recognition globally and changing its perception from PC vendor to solutions provider. For example, Lenovo infrastructure will power semi-automated offside calls during the World Cup via computer vision technology. Additionally, Lenovo continues to leverage its Neptune liquid cooling technology as a key differentiator. During the conference Kate Swanborg, SVP, Technology Communications and Strategic Alliances, DreamWorks Animation, discussed how Neptune has allowed DreamWorks to consolidate its data center footprint from 210 air-cooled servers down to 72 liquid-cooled servers.


By leveraging its global engineering and manufacturing footprint in combination with its expanding ecosystem of ISV partners, Lenovo is pairing hardware innovation and supply chain agility with its ever-growing AI library and its new AI centers of excellence to support its ambitions of driving enterprise AI adoption across all kinds of on-premises environments. Constant investments in IT operations management platforms and unified device ecosystem software demonstrate Lenovo’s focus on driving cross-selling within and across its hardware portfolios while increasing the value proposition behind the company’s TruScale managed service offerings.

Fujitsu Showcases Smart GTM Plays, AI-ready Talent and Long-term Sustainability Efforts

As part of Climate Week in New York City, Fujitsu hosted eight analysts on Sept. 25 for a roundtable discussion about sustainability, supply chains and Fujitsu’s emerging Uvance consulting service. Among the Fujitsu leaders in attendance were EVP Chief Sustainability and Supply Chain Officer Takashi Yamanishi and EVP Global Solutions Sinead Kaiya. The following reflects that roundtable discussion and TBR’s extensive and ongoing analysis of Fujitsu’s IT services and consulting efforts, particularly in North America.

Fujitsu’s Uvance gains traction in AI-enabled sustainability

As Fujitsu’s Uvance offering evolves and gains market share and presence, the company’s ability to deliver on AI-enabled sustainability solutions could accelerate overall growth, especially in North America. Use cases highlighting both measurable return on investment in AI and significant cost savings demonstrate that Fujitsu’s relatively smaller scale in North America, as compared to peers such as Accenture or Deloitte, has not prevented the company from delivering value to clients.

TBR believes the critical next steps to growth, perhaps at a faster pace over the next five years, are developing repeatable, IP-driven solutions, learning to compete with fewer employees and more AI agents, and leveraging a scrappy mentality. Finding the right messaging around Uvance, embedding sustainability across all engagements, and increasingly leveraging internal supply chain and cybersecurity expertise to support client-facing opportunities will round out Fujitsu’s strategy. It is no small task, but the company has positioned itself well, as TBR has noted repeatedly over the last few years.

‘From philosophy and targets to execution,’ according to Yamanishi

Folding supply chain and sustainability leadership into one C-Suite role is a slight differentiator for Fujitsu. According to Fujitsu’s EVP Chief Sustainability and Supply Chain Officer, Takashi Yamanishi, the company is combining its focus on suppliers and third-party management with its sustainability offerings and capabilities to expand Fujitsu’s business opportunities. By merging supply chain experience with sustainability imperatives, Fujitsu is creating a compelling business case while simultaneously moving toward its own environmental targets, including in Scope 3 emissions.

Notably, Yamanishi described the close cooperation between Fujitsu’s supply chain and sustainability professionals and the company’s cybersecurity practice, including collaboration around supplier assessments. In addition to greenhouse gas emissions and other Scope 3 metrics, Fujitsu utilizes its own cybersecurity assessment and criteria to strengthen its suppliers’ cybersecurity, enhancing the overall resilience of the supply chain. In TBR’s view, using sustainability and cybersecurity metrics to assess supply chain risks is likely driving more responsiveness and transparency from suppliers around risk mitigation.

Kaiya calls for mindset to ‘Be scrappy’

Fujitsu’s Uvance story continues to evolve, and the roundtable discussion on sustainability reinforced the company’s overall approach of leading with technology-infused business solutions and business value, with a foundation in sustainability. Fujitsu’s EVP Global Solutions Sinead Kaiya highlighted a few aspects of Uvance’s evolution and approach around sustainability.

  • Fujitsu intends for Uvance to account for upward of 30% of the company’s services revenues by 2030, while traditional IT services will account for 60% and modernization the remaining 10%. Uvance’s growth in recent years makes that target likely achievable.
  • Fujitsu’s own intellectual property should be built into all of the company’s engagements. Both Kaiya and Yamanishi distinguished between solutions that exist within Fujitsu’s capabilities and can be deployed as part of an engagement and repeatable solutions (or products) that sit within Uvance and can scale across multiple clients.
  • In North America, as previously discussed with TBR, Fujitsu will focus on a few core industries, notably framed less as traditionally understood verticals and more as shared business challenges, which Fujitsu has positioned itself to help clients tackle. In TBR’s view, this approach reflects the reality that nearly every enterprise operates under business models dominant in multiple industries.

During the slow rollout of Uvance TBR has noted increasingly well-refined explanations of what Uvance does, which kinds of clients Fujitsu is pursuing, and how Uvance can separate itself in a crowded consulting and technology field. Kaiya made two standout points that further cemented TBR’s understanding. First, Uvance will not “just solve the [client’s] problem,” but Fujitsu will be “exceptionally careful in what we can be and what we should be, for profit reasons, repeatable.” Second, in North America Uvance will “be scrappy” and pursue opportunities overlooked or underserved by larger consultancies and IT services companies. In TBR’s view, this strategy of being deliberate with repeatable solutions and taking a self-aware, aggressive approach to the market aligns with Fujitsu’s strengths, current place in the market and opportunities for growth.

Successful deployments depend on reliable technologies and measurable outcomes

Roundtable participants were also shown Uvance and sustainability use cases and demonstrations, most notably:

  • In a supply chain optimization use case in which the client realized a 50% savings in operational costs, Fujitsu leaders said they had not used an outcomes-based pricing model but would consider one if and when they can repeat the solution at a similar client. Time and materials, Kaiya noted, would not be an optimal long-term pricing model.
  • In a Canadian client’s use case, Fujitsu leaders noted a three-month return on investment from a generative AI (GenAI)-enabled solution, making this one of the more successful, understandable and relatable GenAI deployments in recent TBR memory. Based on the client’s use of Fujitsu’s GenAI solution to reduce time spent responding to compliance and regulatory requests, TBR believes Fujitsu will continue investing in industry-specific large language models (echoing the earlier industry cloud trend).
  • Several of the use cases highlighted included a blockchain component, leading TBR to question whether Fujitsu had a dedicated blockchain practice similar to what existed at consultancies and IT services companies in the 2010s and early 2020s. Kaiya and Yamanishi noted that blockchain serves as an enabling technology and is part of the overall solution Fujitsu brings to clients (when needed), but it is not a stand-alone offering. Fujitsu professionals also noted that clients specifically ask for greater transparency and quality control, characteristics inherent in blockchain.

Overall, Fujitsu’s use cases and demos made a compelling case for both Uvance and the company’s underlying sustainability approach. TBR will be watching through 2026 to see whether Fujitsu’s use cases increasingly include outcomes-based pricing, how frequently Fujitsu discusses repeatable solutions, and which other technologies shift from emerging and noteworthy to enabling. TBR will also monitor how Fujitsu adapts its alliance strategy to align more effectively with the needs and strengths of its technology partners. In particular, TBR will seek examples of Fujitsu coordinating multiparty alliances to support the operations of Japanese enterprises based in the Americas.

Sustainability now and forever

The New York City roundtable reinforced for TBR a few truths about Fujitsu now and going forward. First, the company is scrappy, having developed go-to-market and North America strategies that play to its strengths. Second, rapid changes across IT services and consulting are unlikely to catch Fujitsu off guard and unprepared. Fujitsu leaders understand the shifting talent landscape, where IT services and consulting buyers in 2026 and beyond will expect AI in everything and AI-induced savings as part of their engagements. And lastly, circling back to the core reason for the event, Fujitsu knows sustainability may be a lower priority in the U.S. at present, but it will become a top priority again, and Fujitsu has been preparing its offerings, capabilities and clients for that pendulum swing.

 

Partnerships, Not Products, Will Define How Consultancies and Native AI Companies Share Value in Agentic AI Era

Services regain strategic importance

Services are cool again. Native AI companies are embedding services offerings around their products and thinking about services as part of their long-term strategy — well, not too long-term, since how long does any strategy stay the same?


Suppose native AI companies want to deliver services. How will they compete, or perhaps even partner, with traditional IT services companies and management consultancies, which are pursuing their own AI opportunities? These assertions and questions came up repeatedly this summer, and as 2025 winds down, we are starting to see some outcomes and answers.


TBR attended several tech conferences and analyst events in recent months, and AI was the inescapable topic at each one. In particular, KPMG’s Technology and Innovation Symposium in Deer Valley, Utah, stands out, in part because of the sheer breadth of opportunities discussed, use cases highlighted, and future hopes and fears laid out in stark detail.


In our latest blog series, TBR on AI in 2025, we intend to connect those ideas with research and analysis conducted by TBR over the last few years to highlight implications for the companies we cover across the technology ecosystem. Previous topics about the agentic AI age have included human resources management and expectations for enterprise IT architecture. Future posts will discuss who gets first and consistent access to limited resources like energy.

AI-native platforms disrupt the people-dependent services model, forcing incumbents to rethink partnerships

Native AI companies folding services offerings into their products and platforms follow in the large footsteps of the cloud and ERP vendors that persistently maintain professional services offerings to complement their cloud infrastructure and software money-making machines. As these giants have learned, success in services is harder than it looks and requires, at minimum, people and permission. Services are inherently a people business, and delivering consistently, with quality and at scale, demands teams of practitioners and the staff to support them. No client buys services without first being certain the provider can deliver (i.e., permission). Cloud and software companies have always had the latter locked up. Who knows SAP better than SAP professionals, and who can deliver Azure better than Microsoft? But the people part continually presents a stumbling block, or at least a check on scale and growth.


Why does all this framing around the cloud and software giants matter to native AI companies with their products and platforms? It doesn’t. However, it’s the world in which IT services companies and consultancies have been operating. Partnering with native AI companies looking to expand their services offerings will not be the same as fending off Microsoft Professional Services or SAP Services. At TBR, we constantly examine the largest IT services companies and consultancies, studying how they operate, partner, go to market and generate revenue. All of these aspects are changing rapidly in the agentic AI age, allowing us to bring that research to this developing space.

What roles will traditional IT services companies and management consultancies play, particularly as AI companies’ products permeate enterprise clients?

TBR sees three, not mutually exclusive, roles for traditional IT services companies and management consultancies: venturer, concierge and rival.


Consider first the venturer role. Most IT services companies and consultancies lack an impressive track record of investing early in startups, often acting more like traditional service providers than venture capital firms. At traditional IT services companies, the quarterly earnings clock, layers of management and oversight, and competing offerings typically keep startup incubator programs relatively small (and too frequently underfunded).

Why does the people challenge not apply to native AI companies in the ways we have seen with traditional cloud and ERP players?

Solving the people problem is, inherently, part of AI, no matter how often one hears the “human in the loop” chorus. Services offerings folded into native AI companies’ products start optimized for minimal human touch. IT support through a chatbot? Built-in. FinOps solutions? Native.

Yes, delivering services at scale has always demanded people and support staff, but that imperative is fading fast and will not apply to native AI companies.


Consulting firms have not fared much better, given the need for consensus around strategy and investment. Notably, some of the Big Four firms have become more adept at making small investments, capturing enough of a stake to influence without owning. KPMG, in particular, has developed a consistently funded, well-managed startup support program that has seen success in the last couple of years.

Consultancies’ change management expertise positions them as vital partners to emerging AI vendors

Investing can be the gateway into a concierge-like relationship between these giant companies and firms and the relatively small native AI companies. Most commonly, these relationships focus on making introductions to enterprise clients, providing strategic counsel, offering financial and tax advice, and mentoring leaders. TBR sees a critical new opportunity, expressed clearly at the KPMG Technology and Innovation Symposium: change management.


Speakers at that event, and professionals across the AI and consulting space in subsequent discussions, noted that most native AI companies have not worked out the potentially massive change management requirements and implications of adopting their products. Many IT services companies and all management consultancies excel at change management and could be well positioned to provide clients consulting services entwined with a native AI company’s product, provided all parties understand both the complementary offerings and the commercial models. These elements echo TBR’s ecosystem research, which has repeatedly shown that leading companies invest in understanding their alliance partners’ offerings and sales structures.


As with their startup programs, many traditional IT services companies and consultancies have struggled to adequately put themselves in their alliance partners’ shoes. And when those partners are startups or immature native AI companies, that struggle will be harder in the absence of leadership, strategic direction and sustained investment. But that is the potential downside. The upside is that consultancies are perfectly positioned to be change management specialists, helping their largest clients adopt the best new AI.

“When I think of knowledge, there are two pieces. One is, what are the insights to build the product? That’s where our people come in, because we have practitioners who are working with clients on this product, using the right insights to build the right product. That’s Part 1. And second is, do your salespeople know those product features to help go sell? And I think the second part, I definitely see opportunity. There’s a little bit of upskilling and change management required for a lot of these sales folks across the board on how to sell these modern versions of their software in the agentic world.”
– AI professional speaking to TBR about knowledge management, sales and AI

 

And then, there is always the possibility of rivalry

Every traditional IT services company and management consultancy that TBR covers, including Tier 2 companies (enterprise systems integrators, in TBR’s terms), has its own products, and most have platforms. Although not all vendors sell products as stand-alone offerings, they all have accelerators and AI-enabled solutions — quite a few acquired over the last couple of years.


At a technology level, these solutions and native AI companies’ products may compete with or complement one another, but at a business-model level, what matters are the attached services. Traditional IT services companies and consultancies have relied on client intimacy, scale and industry knowledge to stay sticky with their clients and hold off challengers. AI builds that intimacy more quickly than traditional ERP, and industry-specific AI is coming fast, so traditional companies and firms will need to rely on scale, for now, to keep native AI companies’ services offerings at bay, at least with enterprise clients. As many traditional companies and firms seek new markets among smaller clients, focusing on investing and partnering will become the only path to sustained success. Or simply acquire, of course.

What comes next for traditional IT services companies and consultancies?

History keeps rhyming. Native AI companies understand that the services business model often clashes with product and platform strategies. As a result, their investment in services will rise and fall with client demands, leadership changes, market opportunities, and the successful moves of the smartest traditional IT services companies and consultancies.