Will the U.S. Government and Hyperscalers Push the Mobile Industry to the Forefront of 6G?

2025 Brooklyn 6G Summit, Brooklyn, New York, Nov. 5-7, 2025 — More than 300 in-person attendees and 1,600 virtual attendees from academia, technology standards bodies, the public sector, industry analyst firms, network infrastructure and device vendors, communication service providers (CSPs), satellite network operators, semiconductor firms, hyperscalers and other stakeholders of the broader wireless technology ecosystem gathered at the Tandon School of Engineering at New York University in Brooklyn for a mix of presentations, fireside chats and panel discussions regarding the future of wireless networks, with a focus on 6G. There were more than 35 exhibits on display from graduate students at various U.S. universities as well as from a range of telecom ecosystem entities showcasing 6G-related innovations. A broad range of other 6G-related topics were discussed, including the role of AI and machine learning in networks, sustainability (especially related to energy), standards development, spectrum policy, public-private collaboration, and nonterrestrial networks’ role in the ecosystem. The 12th annual event was hosted by Nokia and NYU WIRELESS. This year’s event provided the usual cellular ecosystem updates, but one of the major themes was how the telecom industry should approach converting wireless innovation into value, especially economic value. Bell Labs’ 100th anniversary was also commemorated at the event.

TBR perspective

There is no doubt that the global economy is in the midst of an AI super cycle. What is doubtful, though, is whether the telecom industry will rise to the occasion to support (and monetize) opportunities that arise from AI. There are several fundamental challenges endemic to the telecom industry that could keep it from participating in the AI economy in a significant way, and TBR believes the AI economy could disintermediate at least a portion of the telecom industry.

The AI ecosystem currently operates in 18-month innovation cycles, while the telecom industry remains mired in its decade-long generational cycles; this dissonance will create friction and could be the biggest impediment to the convergence of the AI economy with the telecom industry. Will the rapidly evolving AI ecosystem wait for the telecom industry to support it, or will it be forced to move beyond telecom incumbents and institutions to avoid being held back?

One thing is certain: AI will fundamentally change how networks are utilized, necessitating a new network architecture. The telecom industry, which includes the cellular ecosystem, moves incredibly slowly, and this slowness is diametrically opposed to how the AI ecosystem (and other tech theme areas) moves. Something is going to break, and the real questions are what and when.

Impact and opportunities

A new network architecture is required for AI, as current networks will not suffice

One key aspect of AI workloads, especially those emanating from end-user devices, is that they are uplink-intensive, meaning they rely more heavily on uplink resources from the network than on downlink resources. This is a fundamental issue because macro, cellular-based networks are optimized for downlink capacity (typically a 10-to-1 downlink-to-uplink ratio from a resource-allocation perspective). CSPs will need to make significant investments in new network technologies and rethink how spectrum resources are utilized in order to optimize networks for uplink.

AI traffic also tends to require lower latency than current networks provide and generates higher bursts of traffic than video and other media consumption. AI networks require uplink bandwidth, lower latency and the ability to handle higher bursts in traffic patterns at scale, and none of these requirements can be achieved simply by increasing capacity. These requirements are the opposite of how networks are architected today — optimized for downlink, best-effort or good-enough latency, and predictable traffic patterns — necessitating significant investment by CSPs. This will be a gradual transition, as there is no silver bullet to address the problem quickly. The most promising approach appears to be decoupling the uplink from the downlink to account for transmit power differentials, which would enable network resources to adapt dynamically to traffic demands in real time. Additionally, there is concern as to how willing CSPs will be to invest in uplink when ROI is uncertain.
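
The 10-to-1 resource split described above can be made concrete with a back-of-the-envelope calculation. Only the 10-to-1 ratio comes from the text; the channel bandwidth, spectral efficiency and hypothetical rebalanced split below are illustrative assumptions, not measurements.

```python
# Illustrative sketch of what a downlink-heavy resource split means for
# uplink capacity. All numbers here are assumptions for illustration;
# only the 10:1 downlink:uplink ratio comes from the article.

def capacity_mbps(bandwidth_mhz: float, share: float, bits_per_hz: float) -> float:
    """Rough one-direction cell capacity: bandwidth * resource share * efficiency."""
    return bandwidth_mhz * share * bits_per_hz

BANDWIDTH_MHZ = 100.0  # assumed midband channel bandwidth
EFFICIENCY = 5.0       # assumed spectral efficiency in bits/s/Hz

# Today's downlink-optimized split: 10:1 means 10/11 down, 1/11 up
up_today = capacity_mbps(BANDWIDTH_MHZ, 1 / 11, EFFICIENCY)

# A hypothetical rebalanced 2:1 split for uplink-heavy AI traffic
up_rebalanced = capacity_mbps(BANDWIDTH_MHZ, 1 / 3, EFFICIENCY)

print(f"Uplink today:      {up_today:.0f} Mbps")
print(f"Uplink rebalanced: {up_rebalanced:.0f} Mbps")
print(f"Uplink gain:       {up_rebalanced / up_today:.1f}x")
```

Under these assumed numbers, simply rebalancing the split more than triples uplink capacity without adding spectrum, which is why decoupling uplink from downlink resource allocation is attractive relative to buying more capacity.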

6G will primarily be a software upgrade

Unlike prior Gs, 6G is unlikely to be a massive hardware refresh, but will instead build on top of existing 5G RAN and 5G core (the 5G SA [standalone] architecture) infrastructure as a software upgrade. This is the most likely outcome as CSPs are highly unlikely to have the appetite or the financial wherewithal to invest in another massive network refresh with unclear ROI.

There will be some hardware elements to invest in, such as processing capability that enhances programmability and maximizes AI support, or radios that support new frequencies not covered by existing RAN via software-defined radio technology, but hardware investments will be a fraction of what was spent in prior cellular generations. As a result, TBR expects a shallower capex curve during the 6G cycle, more closely resembling the 5G Advanced spend curve thus far.

Trump likes 6G

The Trump administration has designated several technologies and resources as strategic national priorities. Much as he did during his first administration, when he designated 5G a national strategic priority, Trump views 6G as equally, if not more, important. 6G will be leveraged for a broad range of defense, public safety and societal use cases, which makes investment in and success with the technology a public issue. Integrated sensing and communication (ISAC) is a 6G-related use case of particular interest to the U.S. government for national security reasons. The government is heavily involved in 6G R&D, both directly and indirectly, through agencies such as the Defense Advanced Research Projects Agency, the U.S. National Science Foundation and the U.S. Department of War.

TBR expects the Trump administration to apply similar approaches to seeding and bolstering U.S. innovation in 6G, much as it has in the production of rare earth metals, AI, nuclear power and other key technologies. As part of this, it is reasonable to expect U.S. government equity investments and policy support. The recent Nokia deal with NVIDIA and the agreement between the U.S. government and Nokia both align with this policy.

Cellular standards fracturing goes beyond geopolitics

Cellular standards are not only being regionalized due to geopolitical and nationalistic considerations but are also showing cracks within regional ecosystems. For example, non-standardized technologies are increasingly coexisting with 3rd Generation Partnership Project (3GPP)-based cellular technology. Standards in cloud data centers, optics, chip design and end-user devices are fragmenting, creating technological walled gardens.

For example, hyperscalers now have enough global scale to justify pursuing their own technology standards, and it is reasonable that these companies could, at some point, rival or eclipse traditional standards bodies, such as the 3GPP, in ecosystem influence and market power. Potential catalysts for a change such as this could include the need for AI-native networks (thereby reducing dependencies on CSPs for infrastructure investment and innovation road map alignment) and the need to support the rapidly evolving AR/VR market, which holds multitrillion-dollar revenue potential for hyperscalers.

Unlocking stranded spectrum assets: A prerequisite for 6G leadership

The mobile industry continues to beat the drum for more spectrum, but it should instead focus on fully utilizing the spectrum already allocated. TBR notes there are vast tranches of spectrum in the U.S. market that are broadly underutilized, either for technical or economic reasons. And challenges will only worsen as the industry aims to bring upper midband frequencies into the fray, which have greater propagation challenges and are less suited for macro coverage.

The U.S. needs to do a better job, guided by government institutions like the Federal Communications Commission, of utilizing the CBRS, C-Band, 6GHz and mmWave bands, which are woefully underutilized today. For example, only a relatively small portion of midband spectrum has been deployed in the U.S. market to date, implying that well over half of it has not yet been put to use; CSPs are sitting on it. Most 4G and 5G network traffic in the U.S. today runs over low bands such as 600MHz to 800MHz and the lower midband (1GHz to 2.6GHz, especially 2.5GHz), with C-Band usage increasing but nowhere near its full potential. MmWave bands hold promise, but for economic and technical reasons they have been used only in very specific situations, mostly for LAN capacity.

Additionally, spectrum-warehousing entities like EchoStar (which recently, and reluctantly, agreed to sell a portion of its vast spectrum holdings to Starlink and AT&T) and private equity firms are still sitting on large tranches of unused spectrum. The government should redirect its efforts toward addressing these market dislocations rather than straining to bring new spectrum to market that might never be used (case in point: CBRS). This should be a mandatory government push because dislocations created by negative externalities of capitalism threaten to keep the U.S. behind China in key technological domains; specifically, corporations with a scarcity mindset are hoarding, but not necessarily using, the spectrum resources they have. The U.S. is already behind China across several key technologies (e.g., 5G, hypersonic missiles, electric batteries and vehicles, rare earth element processing); it is unacceptable for the U.S. to also fall behind on 6G.

TBR believes that the spectrum already allocated to CSPs will be deployed once the business case for its use is secured. Spectrum resources will come out of the shadows once ROI is clear. The government can help in this regard.

Data protectionism is stifling innovation and holding back the U.S.

A major limitation for academia and the broader mobile ecosystem is the lack of raw data to leverage for innovation. Data is viewed as a strategic and valuable asset, and the reality is that companies and governments do not want to share their data for various reasons. This limits what academia, research institutions and other companies can use for educational purposes, as well as their ability to innovate by training models and fine-tuning technologies. Here again, China has an advantage over the U.S. because its authoritarian model can accelerate the pace of innovation in targeted areas.

Conclusion

The AI and cellular ecosystems are moving on parallel tracks at different speeds rather than converging, which is a major issue and could lead to a breaking point for the technology industry. If hyperscalers feel that CSPs are holding them back, they can, and as history shows will, work around CSPs and take greater control of their own destinies.

This could include building their own standards for network infrastructure, much as they have for IT infrastructure and endpoint devices, and building out AI-optimized networks far greater in scope than those they operate today (see TBR’s Hyperscaler Digital Ecosystem Market Landscape and Hyperscaler Capex Market Forecast reports for more information). Since it is unlikely the telecom ecosystem will fundamentally change, as it is not geared to do so, greater disintermediation and changing competitive dynamics are likely. It is very possible that 6G could be the last “G.”

Oracle’s Full-stack Strategy Underscores a High-stakes Bet on AI

Oracle AI World, Las Vegas, Oct. 13–16: Newly rebranded as Oracle AI World, the theme of this year’s event was “AI changes everything,” a message supported with on-the-ground customer use cases in industries like healthcare, hospitality and financial services. New agents in Oracle Applications, the launch of Oracle AI Data Platform, and notable projections for Oracle Cloud Infrastructure (OCI) revenue also reaffirmed the emphasis Oracle is placing on AI. Though not immune to the risks and uncertainties of the AI market at large, Oracle is certainly executing, with the bulk of revenue from AI contracts already booked in its multibillion-dollar remaining performance obligations (RPO) balance. And yet, as OCI becomes a more prominent part of the Oracle business, big opportunities remain for Oracle, particularly in how it partners, prices and simply exists within the data ecosystem.

OCI rounds out the Oracle stack, strengthening its ability to execute on enterprise AI opportunity

2025 has been a transformative year for Oracle. With the Stargate Project — which pushed RPO to over $450 billion — and the recent promotion of two product leaders to co-CEOs, Oracle is undergoing a big transition that aims to put AI at the center of everything. In both AI and non-AI scenarios, the missing piece had been OCI, which now plays a critical role in Oracle’s long-solidified application and database businesses.

But now that OCI is transitioning to a robust, scalable offering that could account for as much as 70% of corporate revenue by FY29 (up from 22% today), Oracle is much better positioned than in the early days of the Gen2 architecture. For the AI opportunity, this means using the full stack — cost-effective compute, operational applications data, and what is now a fully integrated AI-database layer to store and build on that data — to guide the market toward reasoning models, making AI more relevant for the enterprise.
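
The projected mix shift can be put in perspective with a rough calculation of how fast OCI would have to grow for its share of corporate revenue to move from 22% to 70%. The 22% and 70% figures come from the text; the four-year horizon and the growth rate assumed for the rest of the business are illustrative assumptions for this sketch.

```python
# Illustrative arithmetic for the OCI revenue-mix shift: solve for the
# revenue multiple m that takes OCI from 22% to 70% of total revenue.
# 22% and 70% are from the article; years and rest_growth are assumptions.

share_now, share_fy29 = 0.22, 0.70
years = 4            # assumed horizon to FY29
rest_growth = 0.05   # assumed annual growth of non-OCI revenue

# If the rest of the business grows to rest_future (relative units), then
#   share_fy29 = m * share_now / (m * share_now + rest_future)
# which rearranges to:
#   m = share_fy29 * rest_future / ((1 - share_fy29) * share_now)
rest_future = (1 - share_now) * (1 + rest_growth) ** years
oci_multiple = share_fy29 * rest_future / ((1 - share_fy29) * share_now)
oci_cagr = oci_multiple ** (1 / years) - 1

print(f"Implied OCI revenue multiple over {years} years: {oci_multiple:.1f}x")
print(f"Implied OCI CAGR: {oci_cagr:.0%}")
```

Under these assumptions the mix shift implies OCI growing roughly 10x over the period, i.e., an annual growth rate in the high 70% range, consistent with the scale of the AI contracts already booked in RPO.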

Steps Oracle is taking to simplify PaaS have already been taken by others, but the database will be Oracle’s big differentiator

Cross-platform entitlements from the data lake mark a big evolution in Oracle’s data strategy

For a long time, much of the market seemed set against open standards, but in the era of AI, storing data from disparate tools in a single architecture that works with open formats and engines has become common practice. With SQL and Java, open standards have been part of Oracle since the beginning, but Oracle is now pivoting more heavily in this direction, with what seems to be a broader vision of supporting analytics use cases on top of the operational database, where Oracle is strongest. For example, at AI World, Oracle launched Autonomous Data Lakehouse; given how the market has revolved around data lakes and their interoperability, this launch has been a long time coming.

An evolution of Autonomous Data Warehouse, Autonomous Data Lakehouse is integrated directly into Oracle Database, meaning the database can connect to the lakehouse, read and write data in open formats, including Apache Iceberg, and work with analytics engines like Spark that are used to get data in and out of the lakehouse. Aside from reaffirming Oracle’s commitment to open standards and providing a testimonial for the Apache Iceberg ecosystem at large, Autonomous Data Lakehouse sends a strong message to the market that a converged architecture does not equal lock-in; with Oracle, customers can still pull data from a range of databases, cloud warehouses, and streaming and big data tools. When it comes to accessing application data, however, this openness is currently specific to Oracle’s own applications.

As OCI becomes a more prominent part of the business and agentic AI further disrupts the applications market, it will be interesting to see whether Oracle takes the opportunity to support external applications natively. Last year’s decision to launch a native Salesforce integration within Fusion Data Intelligence (FDI), enabling customers to combine their CRM and Fusion data within the lakehouse architecture, suggests Oracle may be moving further in the direction of delivering its PaaS value outside its own apps base, which would create more market opportunity for Oracle.

The days of Oracle’s ‘red-stack’ tactics are starting to fade

Getting data into a unified architecture is only half the battle; a big gap at the governance layer for managing different data assets in Iceberg format remains. Addressing this piece, Oracle is launching its own data catalog as part of Autonomous Data Lakehouse, which, importantly, can work with the three core operational catalogs on the market: AWS Glue, Databricks Unity Catalog and Snowflake Polaris (open version).

Customers will be able to access Iceberg tables in these catalogs and query that data within Oracle. While for some customers a single catalog with a unified API may be ideal, in most scenarios, running multiple engines over the same data is the motivator. Oracle’s recognition of this is a strong testament to where the market is headed in open standards and in making it easier to federate data between platforms.

AI Data Platform should provide a lot of simplicity for customers

The Autonomous Data Lakehouse ultimately serves as the foundation of one of Oracle’s other big announcements: AI Data Platform. At its core, AI Data Platform brings together the data foundations — in this case, Autonomous Data Lakehouse integrated with the database — and the app development layer with various out-of-the-box AI models, analytics tools and machine learning frameworks.

Acting as a new OCI PaaS service, AI Data Platform is more a culmination of existing OCI services than a net-new offering, though it still marks a big effort by Oracle to bring the AI and data layers closer together, helping create a single point of entry for customers to build AI on unified data. To be clear, this approach is not new, and vendors have long recognized the importance of unifying the data and app development layers. Microsoft helped lead the charge with the 2023 launch of Fabric, which now offers natively embedded SQL and NoSQL databases, followed by Amazon Web Services’ (AWS) 2024 launch of SageMaker AI.

Both offerings leverage the lakehouse architecture and offer integrated access to model tuning and serving tools in addition to the AI models themselves. Of course, in instances like these, Oracle’s differentiation will always rest on the database and the ability for customers to more easily connect LLMs to the already contextualized enterprise data in the database.

TBR graph: Hyperscaler Revenue and Growth for 10 Systems Integrators

As Oracle becomes more akin to a true hyperscaler, both partners and Oracle must adapt

With AI, platforms are playing a much more prominent role. Customers no longer want to jump through multiple services to complete a given data task. They also want a consistent foundation that can keep pace with rapid technological change. Six of Oracle’s core SI partners are collectively investing $1.5 billion in training over 8,000 practitioners in AI Data Platform, suggesting both Oracle and the ecosystem recognize this shift in customer expectations. It also speaks to the pivot Oracle’s partners may be trying to make. As Oracle strengthens its play in IaaS/PaaS, services partners — which still get the bulk of their Oracle business from SaaS engagements — may need to adjust.

The challenge is that the SIs have already invested so much in AWS, Microsoft and Google Cloud, so viewing Oracle through the hyperscaler lens may be easier said than done. For context, research from TBR’s Cloud Ecosystem Report shows that 10 SIs collectively generated over $45 billion in revenue from their AWS, Azure and Google Cloud Platform (GCP) practices in 2024. Put simply, it may take some effort on Oracle’s part to get SIs to think about Oracle on, say, AWS before AWS on AWS. This effort equates to investments in knowledge management and incentives, coupled with an overall willingness to partner in new ways.

The good news is AI Data Platform, which is available across the hyperscalers, will unlock integration, configuration and customization opportunities, resulting in an immediate win for Oracle in the form of more AI workloads, and eventual sticking points for the GSIs. In the long term, AI Data Platform will serve as a test case for partners’ ability to execute on a previously underutilized portion of the Oracle cloud stack and Oracle’s willingness to help them do so.

Role of SaaS apps pivots around industry outcomes

OCI, including PaaS services like AI Data Platform, is becoming a more prominent part of Oracle’s business. Next quarter (FY2Q26) will mark the inflection point in the Oracle Cloud business when IaaS overtakes SaaS in revenue. But for perspective, a lot of the IaaS momentum is coming from cloud-native and AI infrastructure customers leveraging Oracle for cost-effective compute. Oracle has over 700 AI customers in infrastructure alone, with related annual contract revenue growing in the triple digits year-to-year. Within the enterprise, however, the operational data residing in Oracle’s applications remains integral to the company’s strategy and differentiation.

At Oracle AI World, a lot of the focus was on the progress Oracle has made in delivering out-of-the-box agents not just across the Fusion suite but also in industry applications. Oracle reported it has 600 agents and assistants across the entire apps portfolio, and while the majority are within Fusion, more agents are coming online in the industry suite. These agents will continue to be free of charge, including for the 2,400 customers already taking advantage of AI in Oracle’s applications. While Oracle has long offered a suite of industry apps that are strategically key in helping it appeal to LOB decision makers, industry apps will start taking a more central role in Oracle’s strategy, coinciding with the recent appointment of Mike Sicilia, previously head of Industry Apps, to co-CEO.

At the event, it became clear that Oracle is starting to view its applications less as Fusion versus Industry and more as a unified SaaS layer. As customers remain under pressure to deliver outcomes from their generative AI (GenAI) investments, industry alignment will be key, especially as they increasingly find value in using this industry data to tune their own models. As such, TBR can see scenarios in which Oracle increasingly leads with its industry apps, potentially unlocking client conversations in the core Fusion back-office.

With all the talk about catering to outcomes with its industry apps, it will be interesting to see how far Oracle goes to align its pricing model accordingly. It may seem bold, but two decades ago, Salesforce disrupted legacy players, including Oracle, with the SaaS model. Eventually, a vendor will take the risk and align its pricing with the outcomes it claims its applications can deliver.

Final thoughts

The theme of this year’s Oracle AI World was “AI changes everything,” and Oracle is investing at every layer of the stack to address this opportunity. Key considerations at each component include:

  • IaaS: It would be very hard to dispute Oracle’s success reentering the IaaS market with the Gen2 architecture. Large-scale AI contracts will fuel OCI growth, making the IaaS business more than 10x what it is today in four years. With this growth, Oracle will give hyperscalers that have been in this business far longer a run for their money. We know OCI will be a big contender for net-new AI workloads. What will be more telling is whether OCI can continue to gain share with large enterprises, which are heavily invested with other providers.
  • PaaS: Oracle’s steps to simplify the PaaS layer with AI Data Platform, underpinned by Autonomous Data Lakehouse, will help elevate the role of the database within the broader Oracle stack. OLAP specialists will try to disrupt the core database market, and SaaS vendors, even those lacking the storage layer, will position themselves as data companies. Oracle’s ability to deliver a unified platform underpinned by the database, helping customers build on their private data in a highly integrated way, makes it well positioned to address the impending wave of AI reasoning.
  • SaaS: Today, cost-aware customers are less interested in reinventing processes that are working; they are investing in the data layer. In the next few years, the SaaS landscape will begin to look very different as a result of agentic AI. With these factors in mind, our estimates suggest the PaaS market will overtake SaaS, albeit marginally, in 2029. In Fusion, Oracle has undergone a big evolution from embedded agents to custom development to an agentic marketplace, but the features themselves are ultimately table stakes. A lot of SaaS vendors have tried and failed to do industry suites well. Oracle’s industry portfolio, though still playing the role of application, represents an opportunity for Oracle to go to market on outcomes and make AI more applicable within the enterprise.
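
The 10x-in-four-years projection in the IaaS bullet implies a compound annual growth rate that is easy to sanity-check:

```python
# Implied CAGR for the IaaS projection above: a business growing to 10x
# its current size over four years compounds at 10^(1/4) - 1 per year.
multiple, years = 10, 4
cagr = multiple ** (1 / years) - 1
print(f"Implied CAGR for {multiple}x in {years} years: {cagr:.0%}")  # prints "78%"
```

That is, the projection assumes OCI's IaaS revenue roughly compounds at high-70s percentage growth annually for four straight years, which frames how much of the RPO balance must convert to revenue on schedule.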

Of course, what this AI opportunity really looks like and when it will fully materialize are up for debate. The amount of AI revenue companies are generating compared to what they are investing is still incredibly small, and AI model customers that are operating at heavy losses but making big commitments to Oracle pose an added risk, though, to be fair, Oracle’s ratio will be far more favorable than those of its peers.

But Oracle AI World only cemented that Oracle believes the risk of underinvesting far outweighs the risk of overinvesting. If the market adapts and customers show their willingness to put their own private data to work, then Oracle’s full-stack approach will ensure its competitiveness.

Lenovo Aims to Become a Global Solutions Provider through Strategic Partnerships and AI-driven Innovation

Lenovo reaffirmed its commitment to Hybrid AI for All during the company’s annual Global Industry Analyst Conference, held Oct. 20 to 23 at the company’s U.S. headquarters in Morrisville, N.C. The conference featured a series of closed-door sessions during which Lenovo executives briefed roughly 70 industry analysts on all things Lenovo, from liquid cooling to agentic AI solutions. Throughout the conference, Lenovo executives provided updates on the company’s overall strategy and ambitions to shift its perception from PC vendor to full-stack, end-to-end solution provider.

From PC vendor to full-stack solution provider

At its core, Lenovo is an engineering company with a particular strength in computing, having acquired IBM’s PC business and later its x86 server business. However, Lenovo’s investments in scope expansion underpin its transformation into a solutions- and services-led technology provider.

Since the formation of its Solutions and Services Group in 2020, Lenovo has not looked back. In 2024 the company announced its Hybrid AI Advantage framework, a powerful example of how its portfolio is widening and how its culture and go-to-market approach are evolving to emphasize an increasingly solutions- and services-led model and a growing focus on full-stack AI.

However, despite the company’s investments in transformation, Lenovo has retained its core competencies in engineering and manufacturing, leveraging its expertise and global footprint to drive innovation, cost efficiencies and scale. For example, in Lenovo Infrastructure Solutions Group, the company has leaned into its unique engineering and manufacturing capabilities to establish its rapidly growing ODM+ business, which targets cloud services providers.

Additionally, within the Lenovo Intelligent Device Group, the company has leveraged these capabilities to mitigate the impacts of tariffs and drive design innovation in an otherwise increasingly commoditized PC market. The company’s Solutions and Services business acts as margin-enriching connective tissue over Lenovo’s robust client and data center hardware portfolios and as a catalyst to move the company further up the value chain.

As a dual-headquartered company (North Carolina and Beijing), Lenovo’s global footprint is unmatched by its hardware OEM peers, and the company’s business in China largely operates in its own silo. China is one of several regional launch markets for early pilots. However, Lenovo follows a local-first, global-by-design approach: Solutions are incubated to meet local data, content and regulatory requirements, and when there’s Rest of World (ROW) demand, Lenovo reimplements features for global compliance rather than porting code or models 1:1. No China-based user data, models or services are reused in ROW products.

Building on this global theme, during the conference Lenovo noted that it is hiring Lenovo AI Technology Center (LATC) engineers across the world, in places like Silicon Valley; Chicago; Raleigh, N.C.; Europe; Tel Aviv, Israel; and China. Additionally, the company highlighted its investment in establishing new AI centers of excellence to centralize and expand regional AI talent and support independent software vendors in their development of industry- and use-case-specific solutions. Rather than representing a net-new investment, this is a strategy Lenovo has already leveraged successfully to expand its AI library, a catalog of preconfigured AI solutions ready to be customized and deployed by Lenovo. In addition to industry- and use-case-specific AI solutions, the company also works with regional independent software vendors to develop solutions tailored to the preferences of customers in specific geographies, such as China.

While Lenovo’s portfolio and go-to-market strategy may differ slightly by geography, the company’s pocket-to-cloud and One Lenovo initiatives remain the same around the world and are the basis for the company’s differentiation in the market — a theme during every session of the conference. From smartphones to servers, Lenovo is vying for share in every segment, and by investing in the unification and openness of its portfolio, whether it be through the development of homegrown software or new ecosystem partnerships, the company is positioning to grow in the AI era. Changing its perception from a PC powerhouse to a solution provider remains one of Lenovo’s largest challenges, but the company’s work in sponsoring and supporting FIFA and F1 with its full-stack technology capabilities demonstrates its willingness to invest in overcoming this hurdle.

Lenovo is investing to win in enterprise AI and bring smarter AI to all

Lenovo’s AI strategy spans all three of the company’s business units and echoes the Smarter AI for All mantra and pocket-to-cloud value proposition.

During the conference Lenovo reemphasized its belief that a meaningful ramp-up of enterprise AI is on the horizon as AI inferencing workloads continue to proliferate. Lenovo has high-performance computing roots, and its Infrastructure Solutions Group (ISG) derives a significant portion of its revenue from cloud service provider (CSP) customers, in contrast to some of its closest infrastructure OEM competitors. Even so, Lenovo’s investments and the composition of its portfolio underscore the company’s intent to drive growth through its Enterprise and Small/Medium Business (ESMB) segment, supporting all levels of on-premises AI computing, from the core data center to the far edge.

Additionally, Lenovo continues to go against the grain on the notion that AI workloads belong exclusively on the GPU, making a case for lighter workloads being deployed more efficiently on the CPU and in smaller edge server form factors. Lenovo’s view is heterogeneous AI: Training and high-throughput inference lean on GPUs, while latency-sensitive and personal workloads increasingly run on NPUs and optimized CPUs. The company’s portfolio spans all three engines — multi-GPU-ready workstations, edge servers with GPU/CPU mixes, and Copilot+ PCs with NPUs for local inference — so customers can place the right workload on the right engine.
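
The workload-placement logic described above can be sketched as a toy routing heuristic. The workload categories and rules are illustrative assumptions for this sketch, not Lenovo's actual placement policy.

```python
# Toy sketch of heterogeneous AI workload placement: route each workload
# to a GPU, NPU or CPU based on its profile. The categories and rules are
# illustrative assumptions, not an actual vendor policy.

def place_workload(kind: str, latency_sensitive: bool) -> str:
    """Pick a compute engine for a workload (illustrative heuristic)."""
    if kind in ("training", "high_throughput_inference"):
        return "GPU"  # dense parallel compute for training and bulk inference
    if latency_sensitive:
        return "NPU"  # local, low-latency personal inference
    return "CPU"      # lighter, general-purpose workloads

print(place_workload("training", False))            # prints "GPU"
print(place_workload("assistant_inference", True))  # prints "NPU"
print(place_workload("batch_scoring", False))       # prints "CPU"
```

The point of the heuristic is the same one Lenovo makes: the engine is chosen per workload, so a portfolio spanning all three engines lets the same customer serve training, edge inference and on-device assistants without forcing everything onto GPUs.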

However, perhaps what differentiates Lenovo’s infrastructure portfolio most is the company’s Neptune liquid cooling technology, which comes in three flavors: Neptune, Neptune Core and Neptune Air. Intensive AI and machine learning workloads require dense compute infrastructure that, in some cases, generates so much heat it requires liquid cooling as opposed to traditional air cooling. In addition, even less-intensive workloads often benefit from liquid cooling, which generally operates at lower cost once implemented. This is where Neptune comes in.

The company’s flagship Neptune solution offers full-system liquid cooling, making it ideal for the most demanding AI and high-performance computing workloads. The other two offerings — Neptune Core and Neptune Air — deliver lower levels of heat removal but are more easily implemented. Neptune Air, for instance, offers the lowest level of heat removal, but its simplicity makes it the easiest to deploy, especially in existing data center environments, supporting lower-cost transitions to liquid cooling.

TBR sees Lenovo’s family of Neptune solutions as a major advantage, as the variety of offerings targets customers and environments in different stages of liquid cooling adoption. Lenovo’s experience in retrofitting data centers with liquid cooling also presents a strong services opportunity for the company and supports enterprise adoption of higher-power AI servers in their on-premises environments. Further, because liquid cooling is more efficient than air cooling, Neptune supports Lenovo’s sustainability initiatives and delivers strong total cost of ownership savings in many scenarios, which is something IT decision makers tend to scrutinize heavily when making investments.

Unlike its close competitors that have invested heavily in data management and orchestration layers leveraging their networking and storage solutions, Lenovo does not play in the data center networking space, instead choosing to be networking-agnostic and partner-first in this area, which the company sees as an advantage due to geographical differences in customer preference. However, the company’s results have yet to prove that this networking strategy is materially advantageous. Additionally, while complex, networking is typically more margin rich than compute and storage while also presenting myriad attach and services opportunities for OEMs with first-party full-stack infrastructure portfolios.

Adjacent to the company’s infrastructure offerings, Lenovo executives stated during the conference that there should be more adoption of workstations as part of enterprises’ on-premises AI adoption and solution development. Compared with sandboxing AI solutions in the cloud, Lenovo sees its workstations, which can support up to four NVIDIA RTX discrete GPUs, as a more practical and economical option. In addition to its Windows-based workstations, Lenovo also showed off its NVIDIA DGX Spark-inspired desktop, geared more heavily toward use in conjunction with NVIDIA DGX Cloud.

Rather than running Windows, Lenovo’s DGX Spark-inspired desktop runs DGX OS, a Linux-based operating system, and is ideal for buyers that already have DGX Cloud resources. With desktop AI offerings spanning multiple operating systems, Lenovo’s Intelligent Device Group showcases the company’s ambition to create AI systems for all types of users. Looking ahead, both TBR and Lenovo expect the adoption of GPU-enabled workstations to grow as an increasing number of enterprises experiment with the development and/or customization of preconfigured AI solutions.

Through a services lens, Lenovo’s enterprise AI strategy centers on the company’s Hybrid AI Advantage framework. Similar to frameworks used by competitors such as Dell Technologies and HPE, Hybrid AI Advantage includes NVIDIA AI Enterprise software components intended to enable the development of industry- and use-case-specific AI solutions. However, while NVIDIA AI Enterprise can be thought of as a collection of foundational tools for building AI agents, Lenovo’s AI library goes a step further, offering more out-of-the-box industry- and use-case-specific AI solutions.

The composition of the Lenovo AI library is largely predicated on solutions developed in conjunction with ISVs through Lenovo’s AI Innovators program. As Lenovo expands its footprint of AI centers of excellence, TBR expects the number of customizable, near-plug-and-play AI solutions to grow, further cementing the company’s differentiation in the marketplace. Additionally, Lenovo argues that its Agentic AI Platform further differentiates Hybrid AI Advantage from competitors’ offerings.

Lenovo’s Solutions and Services Group is strengthening its auxiliary services to support customers wherever they are on their AI journey. This is where the company’s Hybrid AI Advantage framework comes into play. The framework has five components: AI discover, AI advisory, AI fast start, AI deploy and scale, and AI managed. The first two components underscore Lenovo’s growing emphasis on delivering professional services, while the third component — where many customers enter the framework — is where Lenovo aligns customers with a solution from the company’s AI library. The last two components highlight Lenovo’s ongoing interest in delivering deployment and, ultimately, managed services through the company’s maturing TruScale “as a Service” business, which caters to both infrastructure solutions and device deployments.

At the end of the day, Lenovo understands that hardware — specifically compute hardware like PCs and servers — is its strength, but by developing prebuilt solutions and overlaying its expanding services capabilities, the company is investing in moving up the value chain to drive margin expansion and deepen customer engagement.

Intelligent Device Group is doubling down on its unified ecosystem play

At 67.3% of total reported segment revenue in 2Q25, Lenovo’s Intelligent Device Group accounts for the lion’s share of the company’s top line, and TBR estimates 88.5% of the segment’s revenue is derived from the sale of PCs. Over the trailing 12-month (TTM) period ending in 2Q25, TBR estimates the company’s PC business generated nearly $40 billion, growing 14.6% year-to-year and resulting in an approximate 130-basis-point expansion in PC market share, according to TBR’s 2Q25 Devices Benchmark.
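The growth and share figures above follow from simple arithmetic; the sketch below shows the two calculations with hypothetical placeholder inputs (the revenue and share values are illustrative, not TBR's underlying data):

```python
def yoy_growth_pct(current: float, prior: float) -> float:
    """Year-to-year growth rate, expressed as a percentage."""
    return (current / prior - 1) * 100

def share_change_bps(current_share_pct: float, prior_share_pct: float) -> float:
    """Market share change in basis points (1 bp = 0.01 percentage point)."""
    return (current_share_pct - prior_share_pct) * 100

# Hypothetical inputs: prior-year TTM PC revenue of $34.5B growing to ~$39.5B
print(round(yoy_growth_pct(39.5e9, 34.5e9), 1))  # growth rate in the mid-teens
# Hypothetical shares: 23.0% rising to 24.3% would be a 130-bp expansion
print(round(share_change_bps(24.3, 23.0)))
```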

In line with the company’s ambitions to change its perception from PC vendor to solutions provider, and given its already established footprint in the PC market, many of the conference’s general sessions focused on the company’s AI position, with specific emphasis on the Solutions and Services Group and Infrastructure Solutions Group. However, during the Intelligent Device Group briefings, Lenovo executives confirmed the company has no intention of giving up share in devices — the business on which the company’s success has been predicated. Lenovo touted its leadership position in several segments of the PC market, the largest being the commercial space. Business leaders acknowledged that it is a good time to take share in the commercial Windows PC market, and by investing in the development of proprietary components and feature sets, the company is actively dispelling the notion that the PC market is fully commoditized. Lenovo has also continued its engineering collaboration with Microsoft on Copilot+ PCs.

Beyond the PC, Lenovo’s investments in growing the share of its Motorola smartphone business have been paying off, and the company is not taking its foot off the gas. To drive cross-selling opportunities, the company is deploying a marketing strategy targeting a younger customer base and is leaning into creating a unified device ecosystem integrated with AI features and capabilities, something the company refers to as its One AI, Multiple Devices strategy. However, unlike other unified device ecosystem plays, such as that of Apple, Lenovo’s play is more open, with the company supporting cross-device features between its PCs and smartphones running both Android OS and iOS. Thus far, Lenovo has seen limited traction in cross-selling smartphones and PCs in the commercial space; however, TBR believes the company’s One AI, Multiple Devices strategy could help shift the tide.

PC Data for 2Q25

2Q25 TTM Windows PC Market Share and Estimated 2025 Lenovo PC Revenue Mix (Source: TBR)


During its Global Industry Analyst Conference, Lenovo’s focus on maintaining and even expanding its leadership position in the commercial PC segment was obvious, but what was perhaps more interesting was how the company is marketing several of its PCs that are in direct competition with Apple in an effort to appeal to a younger segment of the PC market. Lenovo is promoting its device brand as premium, trusted and innovative — aspects supported by the company’s leadership in PC design and engineering as well as its ongoing investments in device security through proprietary software developments. Lenovo also showcased innovative designs, such as motorized expanding screens, and developments down to the motherboard level, which harkened back to the company’s core legacy competencies in engineering and manufacturing.

Through partnerships and portfolio innovation, Lenovo is gradually changing its perception in the industry

Lenovo’s FIFA and F1 partnerships underscore the company’s investments in growing its brand recognition globally and changing its perception from PC vendor to solutions provider. For example, Lenovo infrastructure will power semi-automated offside calls during the World Cup via computer vision technology. Additionally, Lenovo continues to leverage its Neptune liquid cooling technology as a key differentiator. During the conference Kate Swanborg, SVP, Technology Communications and Strategic Alliances at DreamWorks Animation, discussed how Neptune has allowed DreamWorks to consolidate its data center footprint from 210 air-cooled servers down to 72 liquid-cooled servers.
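The DreamWorks anecdote implies a sizable footprint reduction; a quick back-of-the-envelope check using the server counts cited above:

```python
def footprint_reduction_pct(before: int, after: int) -> float:
    """Percentage reduction in server count after a consolidation."""
    return (1 - after / before) * 100

# 210 air-cooled servers consolidated down to 72 liquid-cooled servers
print(round(footprint_reduction_pct(210, 72), 1))  # roughly two-thirds fewer servers
```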


By leveraging its global engineering and manufacturing footprint in combination with its expanding ecosystem of ISV partners, Lenovo is aligning its hardware innovation and supply chain agility with its ever-growing AI library and its AI centers of excellence to support its ambition of driving enterprise AI adoption across all kinds of on-premises environments. Constant investments in IT operations management platforms and unified device ecosystem software demonstrate Lenovo’s focus on driving cross-selling within and across its hardware portfolios while increasing the value proposition behind the company’s TruScale managed service offerings.

Fujitsu Showcases Smart GTM Plays, AI-ready Talent and Long-term Sustainability Efforts

As part of Climate Week in New York City, Fujitsu hosted eight analysts on Sept. 25 for a roundtable discussion about sustainability, supply chains and Fujitsu’s emerging Uvance consulting service. Among the Fujitsu leaders in attendance were EVP Chief Sustainability and Supply Chain Officer Takashi Yamanishi and EVP Global Solutions Sinead Kaiya. The following reflects that roundtable discussion and TBR’s extensive and ongoing analysis of Fujitsu’s IT services and consulting efforts, particularly in North America.

Fujitsu’s Uvance gains traction in AI-enabled sustainability

As Fujitsu’s Uvance offering evolves and gains market share and presence, the company’s ability to deliver on AI-enabled sustainability solutions could accelerate overall growth, especially in North America. Use cases highlighting both measurable return on investment in AI and significant cost savings demonstrate that Fujitsu’s relatively smaller scale in North America, as compared to peers such as Accenture or Deloitte, has not prevented the company from delivering value to clients.

TBR believes the critical next steps to growth, perhaps at a faster pace over the next five years, are developing repeatable, IP-driven solutions, learning to compete with fewer employees and more AI agents, and leveraging a scrappy mentality. Finding the right messaging around Uvance, embedding sustainability across all engagements, and increasingly leveraging internal supply chain and cybersecurity expertise to support client-facing opportunities will round out Fujitsu’s strategy. It is no small task, but the company has positioned itself well, as TBR has noted repeatedly over the last few years.

‘From philosophy and targets to execution,’ according to Yamanishi

Folding supply chain and sustainability leadership into one C-Suite role is a slight differentiator for Fujitsu. According to Fujitsu’s EVP Chief Sustainability and Supply Chain Officer, Takashi Yamanishi, the company is combining its focus on suppliers and third-party management with its sustainability offerings and capabilities to expand Fujitsu’s business opportunities. By merging supply chain experience with sustainability imperatives, Fujitsu is creating a compelling business case while simultaneously moving toward its own environmental targets, including in Scope 3 emissions.

Notably, Yamanishi described the close cooperation between Fujitsu’s supply chain and sustainability professionals and the company’s cybersecurity practice, including collaboration around supplier assessments. In addition to greenhouse gas emissions and other Scope 3 metrics, Fujitsu utilizes its own cybersecurity assessment and criteria to strengthen its suppliers’ cybersecurity, enhancing the overall resilience of the supply chain. In TBR’s view, using sustainability and cybersecurity metrics to assess supply chain risks is likely driving more responsiveness and transparency from suppliers around risk mitigation.

Kaiya calls for mindset to ‘Be scrappy’

Fujitsu’s Uvance story continues to evolve, and the roundtable discussion on sustainability reinforced the company’s overall approach of leading with technology-infused business solutions and business value, with a foundation in sustainability. Fujitsu’s EVP Global Solutions Sinead Kaiya highlighted a few aspects of Uvance’s evolution and approach around sustainability.

  • Fujitsu intends for Uvance to account for upward of 30% of the company’s services revenues by 2030, while traditional IT services will account for 60% and modernization the remaining 10%. Uvance’s growth in recent years makes that target likely achievable.
  • Fujitsu’s own intellectual property should be built into all of the company’s engagements. Both Kaiya and Yamanishi distinguished between solutions that exist within Fujitsu’s capabilities and can be deployed as part of an engagement and repeatable solutions (or products) that sit within Uvance and can scale across multiple clients.
  • In North America, as previously discussed with TBR, Fujitsu will focus on a few core industries, notably framed less as traditionally understood verticals and more as shared business challenges, which Fujitsu has positioned itself to help clients tackle. In TBR’s view, this approach reflects the reality that nearly every enterprise operates under business models dominant in multiple industries.

During the slow rollout of Uvance, TBR has noted increasingly well-refined explanations of what Uvance does, which kinds of clients Fujitsu is pursuing, and how Uvance can separate itself in a crowded consulting and technology field. Kaiya made two standout points that further cemented TBR’s understanding. First, Uvance will not “just solve the [client’s] problem”; rather, Fujitsu will be “exceptionally careful in what we can be and what we should be, for profit reasons, repeatable.” Second, in North America Uvance will “be scrappy” and pursue opportunities overlooked or underserved by larger consultancies and IT services companies. In TBR’s view, this strategy of being deliberate with repeatable solutions and taking a self-aware, aggressive approach to the market aligns with Fujitsu’s strengths, current place in the market and opportunities for growth.

Successful deployments depend on reliable technologies and measurable outcomes

Roundtable participants were also presented with Uvance and sustainability use cases and demonstrations, notably:

  • In a supply chain optimization use case in which the client realized a 50% savings in operational costs, Fujitsu leaders said they had not used an outcomes-based pricing model but would consider such an approach if and when they are able to repeat that approach and solution at a similar client. Time and materials, Kaiya noted, would not be an optimal long-term pricing model.
  • In a Canadian client’s use case, Fujitsu leaders noted a three-month return on investment from a generative AI (GenAI)-enabled solution, making this one of the more successful, understandable and relatable GenAI deployments in recent TBR memory. Based on the client’s use of Fujitsu’s GenAI solution to reduce time spent responding to compliance and regulatory requests, TBR believes Fujitsu will continue investing in industry-specific large language models (echoing the earlier industry cloud trend).
  • Multiple use cases highlighted included a blockchain component, leading TBR to question whether Fujitsu had a dedicated blockchain practice similar to what existed at consultancies and IT services companies in the 2010s and early 2020s. Kaiya and Yamanishi noted that blockchain serves as an enabling technology and is part of the overall solution Fujitsu brings to clients (when needed), but it is not a stand-alone offering. Fujitsu professionals also noted that clients specifically ask for greater transparency and quality control, characteristics inherent in blockchain.

Overall, Fujitsu’s use cases and demos made a compelling case for both Uvance and the company’s underlying sustainability approach. TBR will be watching through 2026 to see whether Fujitsu’s use cases increasingly include outcomes-based pricing, how frequently Fujitsu discusses repeatable solutions, and which other technologies shift from emerging and noteworthy to enabling. TBR will also monitor how Fujitsu adapts its alliance strategy to align more effectively with the needs and strengths of its technology partners. In particular, TBR will seek examples of Fujitsu coordinating multiparty alliances to support the operations of Japanese enterprises based in the Americas.

Sustainability now and forever

The New York City roundtable reinforced for TBR a few truths about Fujitsu now and going forward. First, the company is scrappy, having developed go-to-market and North America strategies that play to its strengths. Second, rapid changes across IT services and consulting are unlikely to catch Fujitsu off guard and unprepared. Fujitsu leaders understand the shifting talent landscape, where IT services and consulting buyers in 2026 and beyond will expect AI in everything and AI-induced savings as part of their engagements. And lastly, circling back to the core reason for the event, Fujitsu knows sustainability may be a lower priority in the U.S. at present, but it will become a top priority again, and Fujitsu has been preparing its offerings, capabilities and clients for that pendulum swing.

 

In Fast-evolving AI Markets, Platform Alignment Determines Who Keeps the Customer

Platform strategy emerges as critical differentiator in agentic AI era

Every company claims to be a data company, every tech company wants to be an AI company, and every smart company has become a platform company. For companies in the technology ecosystem, including consultancies like McKinsey & Co. and EY, OEMs like Dell Technologies (Dell) and Hitachi, connectivity specialists like Nokia and Ericsson, and IT services generalists like Accenture and IBM, finding the right platform partner might be the most strategic decision they need to make before 2026.

The speed of technology change and challenges adapting to business model changes drive the importance of platforms. The rapid developments in AI, most recently with agentic solutions, illustrate how being a platform company enables vendors to maintain a solutioning role even as technology evolves, and new participants become critical in the eyes of customers. Agentic solutions have also created myriad ways for companies to sustain their business models, acting as a conduit between end customers and the changing vendor landscape.

Ecosystem partnerships evolve as agnosticism fades, fast followers rise and multiparty alliances drive wins

Partnering across the technology ecosystem will be different going forward in three specific ways that will make early-2020s alliances seem as wildly dated as the metaverse. First, agnosticism will enter its death throes. Companies, starting with consulting firms, can no longer simply let the customer decide which components of the tech stack will come from which companies. Having no preference no longer implies flexibility to work with anyone and everyone because every company now works with anyone and everyone. Having no preference now implies having no technology innovation or, critically, no commercial relationship to bring to bear on the client’s business and technology needs. Clients have grown accustomed to tech agnosticism and moved beyond it. They want to know which companies have special relationships that bring innovative ideas, the latest solutions and the best commercial terms. Agnosticism is not special at all.


Second, 2026 will be the year when fast followers start following in earnest. TBR has seen industry-leading consultancies, IT services companies, hyperscalers and software vendors align their go-to-market motions, sales and leadership teams, and even cross-training and knowledge management initiatives to build a pipeline and accelerate revenue for all parties. According to data and insights from TBR’s Ecosystem Research, the most profitable and fastest-growing companies have been leveraging their alliances in new and profoundly meaningful ways.

For example, in TBR’s September 2024 Adobe and Salesforce Ecosystem Report, we estimate that IBM, currently among the top four largest by revenue, will have the third-fastest Salesforce-related revenue growth from 2023 to 2028 as “the two will integrate IBM watsonx platform and IBM Granite series models with the Salesforce Einstein 1 Platform. IBM Consulting is creating industry-specific prompt templates and copilot actions for Salesforce applications.” TBR anticipates more companies across the technology ecosystem will emulate the leading companies’ alliance strategies in 2026. Success begets imitation.


Third, transparency wrought by AI adoption will compel companies across the technology ecosystem to partner differently. As agnosticism fades and strategies converge around best practices, differentiation will come from companies’ abilities to engage in multiparty alliances. In March, we highlighted Informatica’s realization that although going to market with one partner led to a win rate of 47%, compared to only 17% when going alone, going to market with more than one partner led to an 83% win rate. A couple of months later, Salesforce acquired Informatica. Without question, Salesforce understands the lessons Informatica learned about the power of multiparty alliances.
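Informatica's reported win rates translate into striking multipliers; a minimal sketch using the percentages cited above:

```python
def win_rate_multiplier(partnered_rate: float, solo_rate: float) -> float:
    """How many times higher the partnered win rate is versus going alone."""
    return partnered_rate / solo_rate

# Win rates cited above: 17% alone, 47% with one partner, 83% with multiple partners
print(round(win_rate_multiplier(0.47, 0.17), 1))  # ~2.8x with one partner
print(round(win_rate_multiplier(0.83, 0.17), 1))  # ~4.9x with a multiparty alliance
```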

Reflecting this understanding, Salesforce has quickly embraced a multiparty ethos, prioritizing third-party integrations within its broader AI platform strategy. Its Agentforce Partner Network already includes Amazon Web Services (AWS), IBM, Workday, Google Cloud, and dozens of SI and ISV partners building reusable agent actions and templates into a shared agent ecosystem rather than separate point products. At the same time, Salesforce has expanded its alliance with Google so that customers can build agents using Gemini models while running core Salesforce workloads on Google Cloud infrastructure, although that capability is not yet generally available. On the services side, partners will lead deployment, governance, and change management. For instance, Cognizant recently announced an expanded partnership to help clients deploy, scale, and govern enterprise agents within the Salesforce ecosystem.

Like services vendors, Salesforce is moving away from an agnostic approach in favor of prioritization. Partner performance KPIs are closely scrutinized, and ecosystem participants showing the most momentum in scaling Agentforce adoption are receiving the bulk of co-marketing and co-innovation resources. From Salesforce’s perspective, it matters little whether a services vendor is an upstart; in fact, the company has seen several emerging partners rise quickly through the ecosystem ranks due to their adaptability and speed amid Salesforce’s growing focus on data and AI. Regardless of which vendors carry the Agentforce torch, Salesforce is making clear distinctions and taking note. By becoming more targeted, the company is better positioned to build structured multiparty alliance frameworks with selected vendors — an approach that will help Salesforce achieve the win rates cited by Informatica.

Next year, companies across the technology ecosystem will begin doing the same, slowly. We are under no illusions that multiparty alliances will become easier beginning Jan. 1, 2026. Commercial arrangements, sales incentives, and selecting the right mix in a multiparty alliance will still require investment, leadership and time. The fastest-growing companies in the technology ecosystem will commit to making their multiparty alliance strategy work, and we expect they will start to see those win rates above 80%.


In this changed environment, TBR will be more closely tracking the companies and firms most likely to be disruptive in the near term. Our current Ecosystem Research stream covers 15-plus global systems integrators’ and consultancies’ relationships with nine cloud companies and software vendors. Of those companies, we expect 2026 to be particularly exciting, perhaps even momentous or strategic, for Deloitte, Salesforce, Dell and IBM. In TBR’s quarterly coverage of those four companies, subscribers will learn about strategies, trends and performances that reflect how ecosystem participants expect the agentic AI age to shape the market.

The Federal Government Shutdown: What It Means for Leading Federal System Integrators

How are federal systems integrators interpreting the latest federal shutdown?

Federal fiscal year 2026 (FFY26) began on Oct. 1, 2025, with the federal government shuttering most operations after lawmakers failed to reach a budget agreement. The current stoppage is the fourth full shutdown since 1995. The last federal closure was the 35-day partial shutdown that occurred from December 2018 to January 2019, during President Trump’s first term. TBR believes federal systems integrators (FSIs) are intensely concerned that the current shutdown will be as long and disruptive as the 2018-2019 shutdown.

Some integrators have noted that while the short-term impact of the shutdown will create a significant growth headwind in the current fiscal year, they expect to see opportunities to backfill shutdown-related sales gaps by mid-FFY26. Again, the duration of the shutdown will be critical, but federal IT vendors are hoping that Congress passes a continuing resolution to end the current shutdown.

Contractors are preparing for the worst-case scenario: Program funding is interrupted altogether for an unknown duration, which seems more likely this time around. The federal IT community is looking to prior shutdowns for strategies to navigate the latest shutdown, such as heavily scrutinizing their current order book for the most vulnerable engagements while prioritizing preservation efforts on strategic, mission-critical work. From what TBR is hearing, the final quarter of FFY25 was more irregular than usual in sweeps and budget flush, which does not bode well for the federal IT community.

Fortunately for contractors, the shutdown comes at the beginning of the federal fiscal year, when prior-year funding remains available, enabling them to continue working on some programs, at least until the funding runs dry. In contrast, the 2018-2019 shutdown began with fewer budget dollars available, exacerbating the fiscal disruption on contractors’ P&Ls and order books.

How are FSIs navigating the shutdown?

FSIs got a head start of sorts earlier in the year with their responses to the disruption caused by the Trump administration’s Department of Government Efficiency (DOGE), and we are seeing contractors take many of the same actions.

Vendors are again drawing closer to procurement staff in the agencies, and contractors’ development teams are pushing to launch recent contract wins, renewals and expansions as soon as possible, while accelerating the adjudication of programs in the business development pipeline. However, the furlough of so many contracting professionals across the federal government since DOGE’s cost-rationalization efforts began in January will make communications with agency acquisition counterparts more difficult than ever. TBR has heard that procurement staffs in some agencies have declined by two-thirds from their levels during the Biden administration. Many other agency contracting professionals retired early or accepted resignation offers.

TBR also expects the shutdown will impact vendor balance sheets in the fourth calendar quarter (4Q25) in the form of more aggressive collections and a sequential decline in days sales outstanding. FSIs will also have limited latitude in their efforts to preserve profitability, as they had already implemented tighter expense controls in response to DOGE’s aggressive contract reviews and cost-cutting actions, leaving little room to further optimize operations and contract execution.
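Days sales outstanding, referenced above, is conventionally computed as receivables relative to revenue, scaled by the days in the period; a minimal sketch with hypothetical balance sheet figures (not drawn from any specific FSI):

```python
def days_sales_outstanding(receivables: float, revenue: float,
                           days_in_period: int = 91) -> float:
    """Average number of days to collect payment after a sale, per quarter by default."""
    return receivables / revenue * days_in_period

# Hypothetical: $1.1B in receivables against $1.8B of quarterly revenue
print(round(days_sales_outstanding(1.1e9, 1.8e9)))  # ~56 days
```

Faster, more aggressive collections shrink the receivables balance and therefore the DSO figure, which is the sequential decline the paragraph above anticipates.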

HR managers are struggling to manage the myriad shutdown-related challenges affecting contractors’ workforces. HR teams are being forced to develop plans for employees most likely to be affected by the shutdown, in terms of temporary furloughs and reassignments, while having honest discussions about the potential for shutdown-driven furloughs to become permanent. Vendors are also reaching out to their strategic and solutions partners, which can expect leading integrators to help them navigate the shutdown.

Vendors are evaluating new cost-cutting measures to offset the erosion of margins that inevitably accompanies any shutdown, including doubling down on efficiencies in operations and contract delivery and, unfortunately, layoffs. Many federal IT professionals at FSIs have defected to commercial IT companies in the wake of DOGE’s impact on the federal technology sector, and attrition among IT workers could spike at FSIs, especially during a prolonged shutdown, creating additional challenges for FSI HR teams.

Project teams are preparing for the challenges of restarting programs stopped by the shutdown and expect, in some cases, it could take up to a week for paused programs to return to full operating tempo. Resuming project work will be complicated by layoffs across agencies and in the FSIs themselves.

We will hear from the federal IT community at the end of October — when the next earnings season begins — regarding how the FSIs expect the shutdown to impact current fiscal-year performance. The full impact will be visible with the 4Q25 earnings season.

What are the biggest risks facing FSIs amid the shutdown?

TBR believes federal IT vendors will suffer multiple margin headwinds, disrupted cash flows from operations, invoicing challenges, and delays in receivables collections in 4Q25 that may linger into 1Q26. We expect most FSIs will be forced to reduce growth, margin, earnings per share and cash flow guidance for FY25 or FY26.

TBR has observed IT professionals at both federal agencies and federal IT vendors departing the sector altogether since the changeover in administration, and we believe the brain drain of IT workers with significant experience and institutional knowledge of federal agency missions and IT infrastructures could continue during the shutdown. Once these people are gone, particularly professionals with advanced technological degrees and/or training in digital technologies like AI, cybersecurity and quantum computing, they will be very difficult to lure back, let alone replace.

Recent contract wins may be at risk of cancellation, and delivery timelines on programs that endure will be significantly delayed as contractors struggle to maintain communications with agency procurement staff. In some cases, program continuity will be disrupted as project teams will not have access to shuttered government facilities.

Contractors who fail to fully and proactively document every shutdown-related expense or disruption may not be able to recover those expenses when the government resumes operations. TBR has heard that once the federal government reopens, contractors may have only 30 days to submit reimbursement claims.

Which contractors are best positioned to minimize shutdown-related disruptions?

FSIs that operate as subsidiaries of larger global IT services firms or consultancies — for example, Accenture Federal Services (AFS), CGI Federal, Deloitte Federal and IBM Consulting’s federal operations — have been moving resources from federal projects to other public sector or commercial programs to retain highly experienced workers. However, transitioning them back to federal work after the shutdown may be met with consternation from workers reluctant to return to the turbulent federal market.

Leidos has more globally diverse operations than many other leading FSIs, with operations in Europe, the Middle East and Australia, as well as in commercial healthcare IT, and has the option of offering to reassign federal IT workers to projects in these markets to prevent them from leaving altogether.

The largest FSIs, particularly those with the flexibility afforded by strong profitability (e.g., Leidos, CACI and Booz Allen Hamilton [BAH]), will use the shutdown as an opportunity to cross-train large swaths of their workforce on AI, cloud, cybersecurity, data science and other emerging technologies, versus implementing layoffs, again to avoid losing skilled IT professionals. The leading FSIs with strategic awards funded by prior-year appropriations (e.g., Leidos, BAH, CACI and SAIC) may also be able to avoid work stoppages or interruptions on ongoing, big-ticket engagements.

FSIs with significant volumes of work on programs considered “essential,” such as national security and national defense work, most notably CACI but also Leidos, BAH and SAIC, may avoid the same shutdown-related pitfalls as vendors without a large presence in the Department of Defense, Intelligence Community or national security agencies in the civilian sector.

Federal IT vendors with large cash reserves will be able to tap into their fiscal war chests to defray at least some of the financial impact of the shutdown on profitability, but this will mean deferring any funding for M&A, joint ventures or internal implementation of AI to streamline operations. Conversely, TBR expects small to midsize federal contractors, particularly those focused exclusively on the federal space, will suffer the most severe consequences of the shutdown.

Access TBR’s federal IT data and analysis with a free trial of TBR Insight Center.

Modernization First: Mongo’s Enduring Pursuit of the AI Opportunity

Now is a good time to be in the database business

Since relational databases first entered the market in the 1970s, a lot has changed. Even as non-relational systems entered the scene to help overcome the scale constraints of their SQL-based counterparts, technological revolutions, from the internet to cloud computing, have ultimately reinforced the database’s core purpose: a transactional system for storing data. But generative AI (GenAI) is changing markets in unexpected ways, and databases are gaining renewed importance as a strategic component of enterprise AI strategies. That is because large language models (LLMs), while revolutionary in their ability to generate content and reason, lack the underlying memory and the ability to execute against the private data that agents thrive on. Customers are now realizing that if they want to bring the next wave of AI applications — AI agents — into production, they need to consider the role of the database.

If it is any indication of the opportunity, almost every cloud vendor now wants to be a data company. Even established data companies are moving down to the OLTP layer, recognizing that while AI models may train well alongside analytics systems (OLAP), AI agents, which thrive on enterprise context, are best built closest to where that enterprise data resides. But even as new analytics vendors vie for a piece of the TBR-estimated $71 billion cloud operational database market, companies with long track records in the operational database business, including MongoDB, will have some clear advantages when addressing customers’ needs around AI, especially as customers are averse to using net-new databases for vector search.

In today’s market, capabilities like vector search and retrieval-augmented generation (RAG) have become a prerequisite for any modern, AI-ready database. While MongoDB was an early mover in delivering vector search natively in-database, specifically with MongoDB Atlas, it is the company’s nearly two decades of experience with document architecture and JSON, the format that underpins LLM tool calling and agentic protocols like MCP (Model Context Protocol), that makes MongoDB well suited to address the AI opportunity once customers overcome legacy application modernization hurdles.
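To make the native in-database vector search described above concrete, the sketch below builds a MongoDB Atlas `$vectorSearch` aggregation stage, the typical retrieval step in a RAG pipeline. The index name (`vector_index`), field path (`embedding`) and toy query vector are illustrative assumptions; running the pipeline requires an Atlas cluster with a vector search index, so the sketch only constructs the pipeline a driver such as PyMongo would execute.

```python
# Sketch: a retrieval query against a MongoDB Atlas vector search index.
# Index name, field path and query vector are illustrative assumptions.

def build_vector_search_pipeline(query_vector, k=5, num_candidates=100):
    """Return an aggregation pipeline retrieving the k documents nearest
    to query_vector, as used in a RAG retrieval step."""
    return [
        {
            "$vectorSearch": {
                "index": "vector_index",          # assumed Atlas index name
                "path": "embedding",              # assumed embedding field
                "queryVector": list(query_vector),
                "numCandidates": num_candidates,  # ANN candidate pool size
                "limit": k,
            }
        },
        # Surface the relevance score alongside the document text
        {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
    ]

pipeline = build_vector_search_pipeline([0.12, -0.03, 0.88], k=3)
# With a live cluster this would run as, e.g. via PyMongo:
#   db.docs.aggregate(pipeline)
```

The point of the sketch is that retrieval lives inside the operational database itself, rather than in a separate, net-new vector store.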

Modernization precedes AI

Although 30% of Atlas’ annual recurring revenue (ARR) comes from customers using at least one AI feature, such as vector search, suggesting early momentum, AI is not yet having a significant impact on MongoDB’s growth. The question then becomes: When will this opportunity manifest? While AI startups and digital natives are ready to adopt scalable databases, most enterprises with bigger budgets are not prepared to make the shift. The reality is that these enterprises, although eager to leverage GenAI and agentic AI, have accumulated decades of technical debt and are often ill-prepared to handle the AI innovations the market is introducing.

Enterprises need a way to justify not only the cost of upgrading but also re-architecting their core infrastructure software to take advantage of AI. This is where the innovation announced at the September MongoDB.local NYC event was focused. In some ways, a lot of MongoDB’s AI work has already been accomplished by default due to its JSON-native architecture and early support for vector search in the cloud. The company is now focusing on preparing customers to take advantage of AI.

“At some point Oracle lost the competitive advantage because it was everything for everybody, and as a result of that, the code base [became] very voluminous and bloated, and that simply wasn’t scalable and agile enough to go around and develop [on] … Oracle Database is still the best on-premises relational system. But when people talk specifically about GenAI, most likely the GenAI decision makers are not going to be the old-world relational database experts.”

Managing Director (Firmwide) and Chief Data Architect, Financial Services

Announcing AMP

MongoDB believes that legacy relational databases are ill-equipped to support the needs of AI. However, this is not a unique point of view, as TBR has heard the same thing during its discussions with enterprise decision makers. Of course, Oracle and other vendors will make it very attractive for customers to upgrade to their AI databases, saving them the cost and risk of migrating to a different data model such as MongoDB’s, while often offering steep discounts.

Even so, there will always be a market for customers that want to reduce their dependency on the likes of Oracle and Microsoft SQL Server, not just for scalability reasons as they consider AI but also to reduce the high degree of lock-in these vendors create. That is where Application Modernization Platform (AMP) comes into play. While MongoDB announced the kind of feature updates you would expect at a database conference, such as expanded support for queryable encryption in MongoDB 8.2, AMP was the company’s big strategic announcement at MongoDB.local NYC.

At its core, AMP brings together the various tools, processes and best practices MongoDB has used over the years to help customers transition from a relational schema to a document schema. As described to participants at the event, AMP follows the “three T’s”: Tools, Techniques and Talent. To be clear, MongoDB has always believed these three attributes are key to a successful migration, and AMP provides the company with an opportunity for some fresh messaging. That said, unlike some of MongoDB’s existing capabilities, such as Modernization Factory and Relational Migrator, AMP leverages GenAI and LLMs to actually convert code, making the platform more than a rebranding or repackaging of existing capabilities, which could help MongoDB expand beyond its core developer audience and engage in more senior-level conversations.
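To make the relational-to-document transition concrete, the hypothetical sketch below reshapes flat relational rows into the nested JSON document a document database stores. The function names and toy schema are assumptions for illustration, not MongoDB’s actual tooling; AMP-style platforms layer LLM-driven code conversion and validation on top of exactly this kind of reshaping.

```python
import json

# Hypothetical sketch of a relational-to-document migration step.
# An AMP-style tool would apply LLM-driven conversion at scale; this
# stand-in simply nests a customer's order rows into one document.

def convert_rows(customer_row, order_rows):
    """Fold a customer row and its related order rows into one document."""
    doc = dict(customer_row)
    doc["orders"] = [dict(o) for o in order_rows]
    return doc

def validate(doc, customer_row, order_rows):
    """Post-conversion check: no data lost in the reshaping."""
    return (doc["id"] == customer_row["id"]
            and len(doc["orders"]) == len(order_rows))

customer = {"id": 1, "name": "Acme"}
orders = [{"order_id": 10, "total": 99.0}, {"order_id": 11, "total": 45.5}]

doc = convert_rows(customer, orders)
assert validate(doc, customer, orders)
print(json.dumps(doc))
```

The validation step matters as much as the conversion itself: migration tooling wins trust by proving nothing was lost, which is why MongoDB pairs its tools with techniques and engineering talent.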

Tools: GenAI for code conversion

Although we have not yet seen significant traction for code conversion as a GenAI use case, customers seem willing to experiment in the hope of lowering the cost and time of migration. This is particularly true as AI puts pressure on businesses to move faster, which is challenging for enterprises that are still spending a significant portion of their IT budgets on maintaining legacy systems. There is no shortage of GenAI-powered code conversion tools in the market to help solve migration challenges, from Microsoft’s GitHub Copilot to Amazon Q Developer: Transform, and AMP acts in a similar capacity, relying on LLMs to analyze the existing code base, test the code and convert it to newer frameworks.

It is too soon to tell if AMP will drive revenue growth for MongoDB, but early use cases are promising. For example, we heard from Geneva-based asset management firm Lombard Odier, which already uses MongoDB as its core database engine and is now using AMP to modernize and migrate 250 applications. We also see an opportunity for MongoDB to leverage its existing hyperscaler partners, which collectively support MongoDB Atlas across 120 cloud regions, to integrate the platform with one of these partners’ existing code conversion capabilities.

Techniques & Talent and the role of services partners

Any GenAI tool on its own is unlikely to solve the modernization problem, and the other attributes of AMP — Techniques and Talent — are where MongoDB adds value. As previously mentioned, MongoDB has been in the business of transitioning customers away from the relational model for decades, and techniques like the App Transformation Framework (ATF) and MongoDB’s engineers, who are equipped to help with validation and deployment, are where MongoDB can add its unique perspective and leverage its code conversion to win these workloads.

That said, we believe MongoDB will still need to consider the role of its services partners, which can provide MongoDB with the enterprise access it needs as it begins to look upmarket and branch outside its core developer audience. For many data ISVs, gaining the attention of global systems integrators (GSIs) is not always easy, as GSIs go where the revenue opportunities are, such as large-scale IaaS migrations or upper-stack areas like analytics that they can build around. With GenAI, however, we see GSIs paying more attention to the data infrastructure layer.

For instance, when we surveyed alliance decision makers at professional services companies, 55% of respondents said data strategy & management represented the most opportunity for partner-led growth over the next two years, up from 42% a year ago. Moreover, GSIs are refocusing on managed services, and a technology partner that can create an enabling layer for cheap and fast migration — leading to customers spending less money on implementation services and more on managed services — represents an emerging opportunity within the ecosystem. In many ways, this shift is being driven by GenAI’s disruption of the traditional services model, but many of MongoDB’s SI partners that were represented at MongoDB.local NYC, including Accenture, Infosys and Capgemini, have also demonstrated success using code conversion tools from other vendors to hasten migrations for clients and scale their own modernization practices.


The same opportunity exists for MongoDB and SIs, provided the company puts the right partner model in place around AMP. At the end of the day, we believe that while MongoDB will be successful in using AMP to transform applications stemming from the data tier, GSIs will play a critical role in extending AMP to the application layer, optimizing that application and, most importantly, tying it to a business outcome. This will be especially true if MongoDB further integrates with its hyperscaler partners, which are perhaps best equipped to bring the GSI to the table and act as an orchestrator of a MongoDB-GSI relationship.

The opportunity is there; success will boil down to execution

Agentic AI has customers and ecosystem participants reconsidering the role of the database. MongoDB is well positioned to capitalize on the AI opportunity, partly due to its JSON-native document database, as well as its early support for native vector search in the cloud. But for customers using legacy systems, there is still much work to be done in preparing for AI, and this is where MongoDB’s investment priorities lie. GenAI provides opportunities to accelerate migrations in ways that customers, which have been running their legacy databases for decades, have never experienced. AMP, which uses LLMs to evaluate and convert code to modern frameworks, combined with MongoDB’s extensive experience in helping customers transition away from the relational model, could be a compelling way to help customers shed their legacy tech debt. Strategically, this could serve as a stepping stone for MongoDB to become a more relevant piece of the enterprise AI strategy. With this opportunity, much of MongoDB’s near-term success will hinge on execution, including a willingness to proactively collaborate with partners in new ways.

PwC Positions AI, Industry Depth and Microsoft Partnership as Catalysts for Asia Pacific Momentum

Singapore event highlights PwC’s regional momentum, client impact and focus on AI-driven transformation

In early August PwC hosted clients, analysts and technology partners in Singapore for an in-person update on the firm’s activities in the Asia Pacific region, with a focus on PwC’s partnership with Microsoft. Among the PwC leaders who spoke were Charles Loh, partner, Singapore Consulting leader, Microsoft Alliance leader, and Digital, Cloud, Data Practice leader, PwC South East Asia Consulting; Winston Nesfield, partner, Insurance and Wealth leader, and AI Advisory leader, PwC South East Asia Consulting; Tracey Kennair (TK), partner and Asia Pacific Microsoft Alliance leader, PwC Australia; Richard Chong, managing director, Microsoft Alliance driver, PwC South East Asia Consulting; Terence Gomes, partner, Cybersecurity, PwC India; and Louise Co, senior manager, Digital Transformation, PwC South East Asia Consulting.

The event included client testimonials, partners’ technology demonstrations, and presentations by the PwC partners listed above, touching on macroeconomic issues, technology (especially AI), and expectations for the firm’s growth in the region. TBR noted that the client stories were exceptionally compelling, in part because the clients all emphasized what PwC did for them, the specific value the firm brought and the business problems the firm solved. Overall, the event included both a large number of sessions spanning a range of business challenges, technologies and PwC engagements, and plenty of time for questions for the PwC partners and their clients. TBR also noted a generally optimistic outlook around the current macroeconomic environment in the region, prospects for growth, and PwC’s clients’ embrace of business model reinvention. As TK said, “Businesses need to reinvent, and that includes PwC!”

Investments in acceleration centers and Microsoft alliance strengthen PwC’s APAC digital transformation play

In his opening comments, Loh said digital transformation is accelerating, rather than slowing down, in the Asia Pacific region, despite challenges around geopolitics, supply chains and overall uncertainty in the global macro economy. PwC expects more consulting growth, and the firm has been investing in acceleration centers in Indonesia, Thailand and the Philippines. In Loh’s view, enterprises in the region need to engage in business model reinvention (BMR) and PwC’s AI-enabled approach to BMR will help companies survive and thrive. In TBR’s view, PwC’s bullish outlook on the APAC region’s demand for consulting reflects sentiments across a wide range of businesses and countries, as evident through PwC’s peers’ investments in the region and TBR’s ongoing discussions with regional enterprises and their consultancies. Further, AI-enabled BMR neatly marries two dominant market trends: incorporating AI into everything and refocusing on growth, not simply cost-cutting.

Extending Loh’s comments around digital transformation, PwC’s TK said that the firm’s clients increasingly expect technology to provide faster returns on investments, leading some clients to narrow the scope of their consulting engagements and take smaller steps toward digital transformation. TK added that PwC’s combination of strategy consulting, business innovation experience and AI-enabled solutions creates a compelling story in the boardroom, particularly when Microsoft accompanies the firm as a technology partner. In TK’s words, PwC and Microsoft are “better together” when they cosell and coinvent. With more than 7,000 PwC professionals in the region trained and certified on Microsoft’s technologies, TK said PwC has a credible and strengthening market position. In addition, TK noted PwC is doubling down on industry expertise, particularly around financial services, consumer markets and manufacturing, the public sector, and healthcare, and has a compelling “customer zero” story around change and risk management. Building on TK’s comments, Kevin Wo, Microsoft ASEAN Chief Partner Officer, added that PwC brings capabilities to move from strategy to execution and an ability to rethink business models, leading to “remarkable momentum” with PwC in the region. He noted that PwC is “investing in their own infrastructure” and helping “customers looking for tangible outcomes.”

Kevin Wo further explained his company’s deepening relationship with PwC by describing a shared commitment to execute on a joint go-to-market plan, including holding CXO roundtables, cobuilding assets and accelerators, and meeting with clients together, early in the engagement and digital transformation process. PwC’s uniqueness, according to Wo, came through the firm’s ability to help every organization become a frontier organization by leveraging AI to reinvent the customer experience, reshape business processes, bring AI into everything, and bend the curve on innovation. He also praised PwC’s “big investment” in Microsoft and the resonance he sees from PwC’s customer zero use case. On agentic AI, Wo said Microsoft looked to PwC to lead with agentic AI innovations that reimagine business models and accelerate enterprises’ AI transformation from proofs of concept to full-scale organizational adoption and customer impact. A final comment from Wo struck TBR as a not-so-subtle warning to Microsoft’s other consulting partners (who were obviously not in the room): being a distribution channel for Microsoft’s products will earn flat or falling revenues going forward, a prediction that echoes TBR’s ecosystem analysis.

TK wrapped up the day’s opening session on a more positive note, observing that while enterprises in the region have been struggling to translate AI into real outcomes, consulting partners (read: PwC) can deliver on strategy, governance and change management, making AI’s impact and business model reinvention within an organization more tangible. TK added that as enterprises increasingly seek a partner that can accelerate AI-enabled transformations, business line leaders, not CIOs, have taken the lead. She noted that PwC has “fantastic deep relationships” with the firm’s clients, well beyond the C-Suite, and can help business line leaders articulate the business advantages of AI-enabled solutions.

Client case studies underscore PwC’s Microsoft partnership

PwC clients featured prominently at the Singapore event, with participants from multiple industries and all sharing details about their engagements with both PwC and Microsoft. A financial services client noted that the PwC-Microsoft team beat out a Deloitte-Amazon Web Services team, in part because the winning team brought “heavy customization” to ensure they met every client need. Critically, PwC and Microsoft also built “four use cases and four apps” (in 12 weeks) together with the client, accelerating adoption and avoiding solutions ill-suited for the client’s IT environment and business processes. Notably, PwC spoke directly to client professionals’ fears about losing their jobs to AI agents, getting ahead of disruptions and jump-starting change management. In TBR’s view, taking on fears head-on accelerates the journey to a trusted partner relationship.

A senior executive with an Indian conglomerate explained that his company selected PwC for a Microsoft Sentinel cybersecurity engagement because of PwC’s close alliance with Microsoft and applicable use cases, especially around IT and OT cybersecurity convergence, as well as the firm’s reliability, strategy consulting capabilities and scale. Notably, the Indian conglomerate is building cybersecurity experience centers specifically for OT environments utilizing PwC consulting and Microsoft solutions. In TBR’s view, this use case, especially given the client’s high profile in India and the growing relevance of IT and OT convergence, could be a marquee showcase for the firm’s capabilities. Critical to the success story is the client’s positive view of PwC’s alliance with Microsoft: every consultancy partners with Microsoft, but this client believes PwC has something unique, which makes for a compelling story for other potential clients.


In sharp contrast to the Indian conglomerate, at least in scale and client size, the CTO of an Australian civil engineering and heavy metals reseller described PwC’s AI-enabled supply chain solution, which brought mispricing mistakes down from an average of 35% to 0% across all contracts. It gets better. The AI-enabled solution allowed the company to apply its “Tier 1 service to Tier 2 clients” and “gave back two days of work a week for a four-person team.” Notably, the company did not reduce its headcount but instead shifted professionals’ time and responsibilities to doing more for customers. When asked what made PwC’s efforts special, beyond the clearly remarkable outcomes, the CTO said PwC delivered on time and on budget because of “lots of prework,” setting realistic goals, and staying within the scope. He added, “PwC enforced discipline.” For TBR, the success speaks for itself, but the more surprising aspect of this use case was the client’s size: at 300 people, this engineering firm is far smaller than the typical PwC client. In TK’s assessment, because of AI, PwC’s “client set is changing!” Indeed.

Conclusion

At a 2017 PwC analyst event in Singapore, TBR noted that PwC let its clients tell their success stories. In 2025 PwC extended that approach to technology partners, most prominently Microsoft, demonstrating the firm has evolved its alliances strategy to leverage one of the most critical means of gaining and retaining clients: have your technology partners tell your story. Let Microsoft explain why PwC is special, abandon being agnostic, and embrace the value that closer relationships bring to every player in the ecosystem, including the client. TBR will be watching closely to see how PwC continues to evolve its alliance strategies and how a growing relationship with Microsoft leads to increased market presence and growth across the region.

Ericsson’s Biggest Customers and Partners (Operators) Are Holding It Back

2025 Ericsson Industry Analyst Event, Boston, Sept. 11, 2025 — A select group of industry analysts gathered at Convene in Boston to hear from Ericsson leaders, partners and customers about the company’s Enterprise business unit’s strategies, capabilities and opportunities in domains such as private cellular networks (PCNs) and network APIs, with AI and 5G monetization serving as themes that ran across the various executive presentations.

TBR perspective

Ericsson struck a cautiously optimistic tone at its annual industry analyst event, which focused on the Enterprise segment. The company acknowledged struggles and highlighted learnings and adaptations, especially pertaining to Vonage and the new Aduna joint venture for network APIs, which will set the stage for better outcomes moving forward. Ericsson is uniquely positioned to capitalize on the still-nascent PCN opportunity that is developing globally, but the vendor’s go-to-market encumbrances continue to constrain its growth prospects.

Specifically, both the range and nature of Ericsson’s partnerships across the broader PCN ecosystem and its go-to-market channel beyond communication service providers (CSPs) for enterprise growth areas remain limited compared with competitors, especially frontrunner Nokia. Nokia decided several years ago to reduce its reliance on CSPs and made a concerted effort to sell its PCN solutions directly to enterprises and through a robust roster of channel partners, including global systems integrators (GSIs), niche systems integrators (SIs), VARs and CSPs.

Ericsson has little control over one of its biggest challenges: CSPs are difficult to deal with, hesitant to work together for competitive reasons, and move slowly. Compounding this, CSPs are Ericsson’s largest customer cohort and partner channel, and TBR estimates more than 97% of Ericsson’s total company revenue through direct and indirect means stemmed from CSPs in 2024. These are key reasons why TBR expects Ericsson’s enterprise revenue will lag its potential in areas such as PCN, network APIs and communication applications.

Impact and Opportunities

FWA has significantly more room to run

Ericsson estimates that approximately 25% of all mobile broadband traffic globally now travels over fixed wireless access (FWA) and that 18% of premises globally will utilize FWA within five years, representing unprecedented growth considering FWA only began to take off in 2020. These statistics align with what TBR has been saying for several years: The market opportunity for FWA is much larger and more vibrant than the industry originally thought.

For example, TBR estimates FWA is technologically and economically feasible to support up to 50% of residential premises in the U.S. This opportunity is helped by the rapid time to deployment and strong value proposition the technology provides end users, especially when compared to other broadband access mediums like fiber-to-the-premises (FTTP), which is laborious and expensive to build out and has higher service costs for the end user. Business premises can also be strong candidates for adopting FWA. Ericsson is arguably the largest beneficiary of the FWA movement on a global basis in terms of revenue generated by selling FWA enablement products and services to CSPs (infrastructure only, not including customer premises equipment [CPE]).

Awareness gap in market slows adoption

On several occasions at the event, Ericsson leaders mentioned that there is an “awareness gap” in terms of the efficacy and outcomes that 5G-enabled technologies are achieving, especially as it pertains to PCN for businesses and the public sector. For example, Newmont, a Tier 1 mining company that adopted a private 5G network from Ericsson to remotely control its dump trucks at the mine, has realized a tenfold increase in coverage and a significant corresponding boost in productivity.

Such outcomes are compelling to enterprises looking to transform their operations to drive revenue growth and/or reduce costs. Though word is gradually getting out that early adopter enterprises are achieving strong results from new technology solutions, more needs to be done. Greater emphasis on partnerships with companies that have the ears of senior management at enterprises, most notably GSIs, is a key way for Ericsson to promote the benefits of these solutions.

Differentiation, both in technology and marketing, needs to be addressed

Ericsson needs to do more to differentiate its technology solutions, and this includes ensuring that differentiation is well messaged to the market. Though some consolidation has occurred in the PCN, network API and communications applications domains, there remains significant competition and fragmentation, with many vendors playing in the market that are not well differentiated. This lack of differentiation is likely another reason Ericsson is struggling to achieve outsized growth in these nascent market areas. TBR did not hear a compelling narrative as to how Ericsson is differentiating itself in terms of technology and partnerships in these key, high-growth market areas. It remains to be seen if Ericsson’s NetCloud AI-powered cloud management and orchestration solution for PCN becomes a key differentiator once it is scaled up to large deployments of the Ericsson Private 5G solution.

Private cellular networks channel remains underdeveloped

Ericsson’s PCN revenue is growing at a relatively strong double-digit rate, but the business is less than 60% of the size of frontrunner (outside of China) Nokia’s PCN revenue, according to TBR’s Private Cellular Networks Vendor Benchmark. This differential in revenue is primarily due to Ericsson’s underdeveloped go-to-market approach and channel relative to Nokia. This is an issue TBR has identified and written about for the past few years, but Ericsson seems to have made minimal progress.

Ericsson is partnering extensively on PCN, but according to TBR’s research, most of that activity is driven by CSPs — even though enterprises and the public sector primarily work with GSIs, niche SIs, VARs and government contractors on digital transformation-related initiatives. Ericsson is also partnering with non-CSPs, including a new agreement with NTT DATA that reflects the kind of deeper, broader partnerships TBR believes Ericsson should pursue. Surface-level partnerships are essentially reseller arrangements, whereas some vendors have robust engagements with key GSIs and other types of partners; Nokia’s relationships with EY and Kyndryl are two such examples.

Connected laptops are a niche market, not a mass market opportunity

Ericsson has jumped on the bandwagon of 5G-connected PCs, and representatives from T-Mobile and HP Inc. spoke at the event about why this is a unique market opportunity. Though a 5G-connected PC sounds like a great feature that end users would utilize, TBR believes the additional cost of embedding the 5G modem in the PC, plus the subscription fees owed to the service provider, sharply limits the pool of users who would find enough value in the product and corresponding service to pay for it. Most people already have smartphones with mobile hotspot tethering, which essentially makes the computer a cellular-connected device at no additional cost.

To be sure, 5G-connected PCs offer a unique capability that the mass market would value, but once extra cost is involved, only a small fraction of that market is likely to pay for the experience. Some enterprises and select types of SMBs (e.g., construction firms) and small office/home office (SOHO) workers (e.g., real estate agents and other road warrior workers) would be unique candidates for 5G-connected PCs, but this would be more of a niche market than a mass market opportunity. As such, TBR suggests network and PC vendors reassess their addressable market projections to be more aligned with observed user behavior and price-for-value considerations.

Conclusion

Ericsson has competitive technology, but its overreliance on CSPs to purchase that technology and/or scale it into end markets remains a weakness that will continue to hamper the company’s ability to participate more significantly in key growth domains, such as PCN. On the network API and communications application side, progress is being made and some scale is occurring, but Ericsson and its CSP partners are up against relatively fast-moving, well-resourced and more specialized entities, most notably hyperscalers and other digital-native players. Addressing the telecom industry’s weaknesses and shortcomings in these market areas will require more investment in channel development and more robust strategic partnerships with entities such as government contractors, GSIs and niche, domain-specific SIs.


Ericsson’s dependence on slow-moving CSPs is a risky proposition, especially when it comes to driving growth in key areas, including PCN, network APIs and communication applications. Ericsson should take a page out of Nokia’s playbook for aligning its opportunity areas with non-CSP players to accelerate growth. Specifically, Ericsson should take a closer look at how Nokia structured its PCN business, especially its channel ecosystem, to reduce its reliance on CSPs. This has proved to be the most effective way to participate in opportunities arising in the enterprise end markets Ericsson is targeting.

Konecta Hybrid Customer Experience Combines Human Expertise with Advanced AI and Digital Capabilities

Konecta Analyst Day, Madrid, May 28, 2025 — Konecta invited industry analysts to the 20th annual ExpoContact, a company-organized event that welcomed more than 1,000 industry leaders, including clients, technology partners and organizations that are looking to improve competitiveness by modernizing customer management. In the morning, Konecta held a special in-person and virtual event for industry analysts in which Konecta executives, clients and technology partners discussed in detail the company’s vision, digital portfolio, and generative AI (GenAI) and agentic AI approach. TBR attended Konecta’s first analyst day event and was impressed by not only the openness of the company and its willingness to communicate with the analyst community but also the closeness of its relationships with partners and clients.

Konecta’s vision and ambition are to become the trusted technology, data and operations partner for clients’ agentic AI transformations

During the event, Konecta CEO Nourdine Bihmane shared details about Katalyst 2028, the company’s three-year plan to become a technology, data and operations partner. Essentially, the company’s goal is to provide AI-driven hybrid customer experience (CX) solutions that combine human expertise with advanced AI and digital capabilities. The plan includes four steps: 1) accelerating the adoption of data, GenAI and agentic AI; 2) increasing digital growth; 3) strengthening the partnership ecosystem; and 4) expanding global reach. The company raised €150 million (or $176 million) to fund the transformation plan. Konecta’s goal is to increase revenue to €2.5 billion (or $2.9 billion) by 2028 and generate between 30% and 40% of total revenue from AI and digital services.

To achieve these targets, the company is training more than 7,100 people on role-specific GenAI technologies and offering proprietary GenAI solutions. Konecta is also launching a new global Digital Business unit with digital offerings and 2,500 employees, including more than 300 trained sales leads. Konecta’s digital services revenue was €150 million (or $176 million) in 2024, and the company plans to increase revenue in the segment to €250 million (or $293 million) in 2027, representing a CAGR of 20%.

Expanding its partnership ecosystem will serve as a lever for future growth, such as by establishing strategic partnerships around GenAI with Google Cloud and Uniphore, and with STC Group in the Gulf Region around GenAI-powered CX solutions. Konecta’s partner ecosystem combines technology leaders, such as hyperscalers, cybersecurity providers, GenAI and large language model (LLM) vendors, hyperautomation and service platform solutions providers, and consulting companies to enable coinnovation and codevelopment with clients.

Notably, Konecta’s open ecosystem has been designed on joint IP, shared outcomes and scalable transformation models. Partnerships among IT services providers and technology vendors are a leading lever for portfolio expansion, and Konecta is moving in a similar direction alongside multiple IT services providers. According to TBR’s 1Q25 IT Services Vendor Benchmark, “The roles of alliance partners are changing in the rapidly evolving professional services market. During the past several years, multiple professional services companies took a technology-agnostic approach to offer flexibility to buyers that were wary of vendor lock-in.

As macroeconomic pressures force buyers to examine their existing technology stacks to ensure they maximize ROI, these buyers are consolidating vendors, compelling professional services companies to develop a preferred, if not exclusive, list of alliance partners. … Vendors are leveraging partners to launch agentic AI offerings to automate tasks and drive operational efficiency, and GenAI offerings to boost productivity and create cost efficiency, encouraging adoption by solving clients’ particular business challenges. NVIDIA-enabled agentic AI solutions dominated alliance announcements during the quarter, including new joint offerings with Accenture, Capgemini, Cognizant, IBM and Wipro.”

Konecta plans to expand by establishing a sales organization that is structured for global reach and local engagement. Notably, the company is opening new delivery centers in Bengaluru, India, and Cairo and is establishing five new AI Global Competence Centers, located in India, Egypt, Spain, Colombia and the U.S., to diversify service delivery capabilities and expand client reach. Such activities will help Konecta improve its global revenue distribution, as presently the company’s revenue is generated mainly from Europe and Latin America, while English-speaking markets and the U.S. contribute approximately 4% of total annual revenue, though the company plans to increase this figure in the coming years. In January Konecta established Egypt as its regional headquarters to serve clients in the Middle East, Africa, Europe and the Americas and announced the opening of a global delivery center and global Center of Excellence (CoE) for GenAI in Cairo.

The company is investing $100 million over the next three years and is planning to hire approximately 3,000 people with digital and technical skills to provide AI solutions, digital transformation, cybersecurity, big data and analytics, IoT, technical support, and multilingual customer services in English, French, German, Italian and Spanish. Konecta is also partnering with the Information Technology Industry Development Agency in Egypt to provide training and upskilling programs for local people, creating future employment opportunities for skilled talent. Konecta’s partnership with Uniphore, announced in November 2024, to deliver industry-specialized AI solutions that enhance CX with hyperpersonalized interactions will augment Konecta’s client reach in the U.S. and U.K. and contribute to revenue expansion in English-speaking markets.

Konecta provides experience services and digital solutions around service design, technology implementation and process optimization
Headquartered in Madrid, Konecta is a provider of transformative experiences and an expert in CX solutions enabled by AI. Konecta has approximately €2 billion (or $2.3 billion) in annual revenue, 120,000 employees across 26 countries and 5,000 digital experts, and supports more than 30 languages. The company offers customer and employee experience services, digital marketing offerings, and products and solutions, such as around CX automation and analytics, all underpinned by AI and GenAI services and advisory and consulting services. Konecta expanded in size and client reach during 2022 through its merger with Comdata, an Italy-based BPO services provider. Comdata had 50,000 employees and annual revenue of approximately €980 million (or $1.15 billion) generated from services such as customer care, back-office and credit management. Since mid-2023 the merged companies have operated under the Konecta brand and currently serve more than 500 blue chip clients. These clients are spread across Europe, Latin America, North Africa, the Middle East and Asia, and the average client tenure of more than 20 years underscores Konecta’s emphasis on long-term relationships.

A renewed management team will be a critical lever for successful execution of the Katalyst 2028 plan. Notably, over the past several months, Konecta has attracted experienced executives with strong technology and industry expertise from its France-based peer Atos, which has been challenged by attrition due to a turbulent and prolonged transformation initiative. Bihmane, who took the position of Konecta’s CEO in April 2024, previously worked at Atos for more than 23 years, including as global CEO and head of Atos’ Tech Foundations business line. In March Adil Tahiri was appointed head of Advisory and Professional Services. Previously, Tahiri’s 21-year tenure at Atos included roles as advisor to Atos’ CEO and head of the CTO office. Oscar Verge, also a long-term Atos leader with 20 years of experience at the company, joined Konecta in October 2024 as chief AI deployment officer.

Konecta is shifting from providing simple automation to orchestration, and AI is a core enabler

While according to Tahiri, “Agentic AI is in the nascent phase,” Konecta’s ambition is to actively transform the industry and create differentiation through digital services. Konecta attracts clients by offering intelligent business orchestration, applying new levels of creativity, such as through real-time and context-aware hyperpersonalized experiences across channels, and orchestrating human and specialized agent interactions. The company provides clients with robust execution through composable agentic platforms and strategic technology and business advisory capabilities to guide clients through their transformations and speed up time to value.

As clients typically run a multitude of business applications, each with its own data repository, the proliferation of agents creates complexities. Konecta is moving from simple automation to orchestration: its agentic AI adapts to dynamic application landscapes and automatically understands, reasons and generates code to extract data and support decision making. Investing in orchestration capabilities and in the development of IP, such as solution accelerators, methodologies and specialized talent, enables Konecta to address clients’ needs around managing their agentic AI environments.

Shifting from utilizing industry LLMs to employing customer-specific LLMs enables Konecta to generate business value from customer-specific data. Delivering high-performing and personalized agentic AI based on real-time, proprietary customer data and workflows enables Konecta to benefit from contextual data intelligence and establish trust with clients. The complexity of digital transformation is pushing Konecta to establish a strategic partner ecosystem, including foundational AI providers and niche domain experts, that is complementary to the company’s expertise.

Egypt is an attractive location for IT services providers
Konecta’s expansion in Egypt is driven by the availability of talent with language skills and technical capabilities and will support the company’s global revenue diversification. However, IT services providers such as Accenture, Capgemini, Atos, IBM and Deloitte are also utilizing Egypt for global service delivery, are planning to expand their resources in the country, and are actively working with government bodies and local educational organizations to develop in-demand skills to support future recruitment. Intensified recruitment interest from IT services providers might challenge Konecta’s expansion activities in the country. For example, in April Capgemini announced it will open an AI CoE in Cairo to enable GenAI and agentic AI adoption for clients globally. The center will consist of data scientists, architects, product engineers and project managers. Capgemini plans to double its headcount in Egypt to approximately 1,200 professionals in digital transformation and innovation by the end of 2025 and to expand to 3,000 people through 2026.

Offering GenAI and agentic AI solutions in an open platform increases Konecta’s value proposition

Konecta provides clients with an industrialized, modular and complete GenAI stack that comprises three solutions — Insights for strategic CX intelligence; Co-pilot for real-time agent augmentation; and Auto-pilot for seamless, AI-driven engagement. The Insights solution converts customer interactions into actionable intelligence, automatically mines 100% of voice and chat logs, correlates them to KPIs and identifies agent-level coaching insights to forecast outcomes. Co-pilot provides agents with contextual AI to uplift conversations; summarizes prior interactions and customer context; and provides intent recognition, nudges and compliance suggestions during calls. Auto-pilot enables conversational automation of activities and provides escalation to human agents for exceptions. Offering GenAI and agentic AI capabilities in the Konecta platform, which is based on open and modular technology stacks, and positioning the solutions as an extension of, not a replacement for, human-delivered services improves the company’s value proposition around deriving productivity gains and expands its client reach.

Investing in GenAI-enabled solutions creates growth opportunities for Konecta, given ongoing buyer interest in adopting GenAI solutions. According to TBR’s November 2024 Digital Transformation: Voice of the Customer Research, “GenAI continues to influence digital transformation (DT) budgets as buyers grapple with juggling hype, ROI and FOMO (fear of missing out). With over three-quarters of respondents combined allocating 26% or more of their DT budgets to GenAI two years after the technology came on the market, it is evident that buyers are eager to explore the possibilities the technology can bring. We do not expect this trend will slow down anytime soon given that the majority of respondents plan to increase their GenAI spend by 10% or more in the next year.”

As macroeconomic pressures force buyers to examine their existing technology stacks to ensure they get the most ROI, Konecta’s GenAI stack demonstrates material outcomes for clients. For example, the Insights solution increases revenue conversion by up to 20% and decreases the ramp-up time for new agents by 20%. The Co-pilot solution enables 98% accuracy in all European languages and a decrease of 30% to 50% in average handling time in managing email and written communication. The Auto-pilot solution automates around 50% of inbound contacts on voice and written channels and reduces cost of interaction by 30%. Demonstrating ROI is critical for solution adoption.

According to TBR’s 1Q25 Digital Transformation: Analytics Professional Services Benchmark, “Enterprises are juggling fear, hype and hope surrounding the potential impact of generative AI (GenAI) on their operating models. This has heightened their expectations for vendors to deliver timely ROI tied to ongoing business process and/or IT modernization transformation, as the implications of technology complexities extend beyond data science, thus creating opportunities for vendors that can manage broad organizational relationships.”

In conclusion

According to TBR’s 2Q25 Accenture Earnings Response, “Transforming the CX domain will remain low-hanging fruit for the next two to three years, offering companies a clear path to apply agentic AI systems for productivity gains. This presents Accenture with a blank canvas to showcase its capabilities at scale and strengthen its position among chief marketing officer buyers. As CX evolves into experience operating systems, powered by continuous feedback and contextual inference, Accenture will need to consider applying multidomain context integration in an era when hyperpersonalization has become table stakes, at least from a communications standpoint.”


Konecta is moving in the right direction, and disciplined execution of its strategic initiatives and investments in platform-based services will enable the company to reach its revenue growth target of €2.5 billion (or $2.9 billion) by 2028. While Konecta’s competitors are making similar investments, the company can succeed due to its emphasis on helping clients reimagine operations, experiences and outcomes with AI, platforms and human creativity, as well as its established local client reach and best-shored service delivery model.