GenAI in 2025: Revolutionizing Agencies and Reshaping Ecosystems

2025 Predictions is a series of special reports examining market trends and business changes TBR expects in the coming year for AI PCs, cloud market share, digital transformation, GenAI, ecosystems and alliances, and 6G.

Top Predictions for GenAI in 2025

    1. GenAI will continue to revolutionize mission-critical functions and day-to-day operations at federal civilian, defense and intelligence agencies
    2. Cloud vendors will splurge on AI investments even as customers grow apprehensive
    3. Infrastructure vendors’ focus will shift from serving cloud companies to making a massive push in enterprise AI
    4. The energy problem is likely to slow the pace of AI market development significantly
    5. GenAI upends pyramids, even as enterprises slow their AI roll

 

Request Your Free Copy of 2025 GenAI Predictions

The state of AI and GenAI in 2024

In 2024 the AI and generative AI (GenAI) landscape faced four key challenges: rising costs, driven by growing investments in data and infrastructure; talent and training gaps; regulatory uncertainty; and macroeconomic pressures. These obstacles will persist into 2025, with additional challenges in the GenAI space becoming increasingly evident.
 
TBR Insights Live: 2025 GenAI Predictions
According to TBR research, the waning GenAI hype has exposed underlying issues, including expensive cloud commitments and fragmented data strategies, creating opportunities for companies that emphasize ROI, complementary technologies and cost management. Adding to the complexity, rising energy costs and heightened awareness of GenAI-related security risks are further shaping this uncertain yet opportunity-filled environment.
 
But after two years of GenAI disruption, a clear trend is emerging across the ecosystem: strategic partnering is becoming essential. Companies such as McKinsey & Co, Wipro, Dell Technologies, Amazon Web Services (AWS) and NVIDIA are adopting this approach, recognizing that no single organization can deliver comprehensive GenAI-enabled solutions alone. Instead, success increasingly depends on leveraging the technology and expertise of ecosystem partners.
 
To read the entire 2025 GenAI Predictions special report, request your free copy today!

Digital Transformation in 2025: From Optimization Fatigue to Business Model Reinvention

2025 Predictions is a series of special reports examining market trends and business changes TBR expects in the coming year for AI PCs, cloud market share, digital transformation, GenAI, ecosystems and alliances, and 6G.

Top Predictions for Digital Transformation in 2025

  1. Transformation comes roaring back
  2. GenAI upends pyramids, even as enterprises slow their AI roll
  3. Ecosystem intelligence becomes a strategic advantage

Request Your Free Copy of 2025 Digital Transformation Predictions

Of the three major focus areas for TBR’s 2025 predictions — strategy consulting, generative AI (GenAI) and ecosystem intelligence — the first may seem a long shot, the second too obvious to be new, and the third too well established to be changing much. All three will upend expectations in 2025 with wildly varying results for the IT services companies and consultancies that TBR tracks and for their technology partners.
 
TBR Insights Live: 2025 Digital Transformation Predictions
 
Strategy consulting’s rebound will come from a renewed push for growth, underpinned by business model reinvention. GenAI will profoundly change the structures and business models of IT services companies and consultancies, all while enterprises struggle to take GenAI to scale (and hey, how about some strategy consulting to help with those struggles?).
 
The need to stand out in a crowded market will compel technology leaders to better align their strategic partnerships, elevating the need for refined and tested ecosystem intelligence and taking alliance management from a good-to-have to strategically critical.
 
To read the entire 2025 Digital Transformation Predictions special report, request your free copy today!

AWS Re:Invent 2024: Innovating and Integrating to Meet AI’s Moment

AWS re:Invent 2024 Overview

Matt Garman kicked off his first re:Invent conference as AWS’ CEO by reinforcing a strategy that has been rooted in AWS’ DNA for over a decade: the notion of “building blocks,” that is, the 220-plus native services AWS offers, each catering to a specific workload and, when used together in a modular fashion, able to address specific use cases. This approach of offering the broadest set of out-of-the-box, user-friendly tools to attract new applications, spin the IaaS meter and feed the lucrative flywheel effect AWS is known for has naturally garnered a lot of interest among developer and startup communities. But Garman was quick to remind us how far AWS has come in catering to the large enterprise.

 

As an example, Garman welcomed JPMorgan Chase Global CIO Lori Beer to the stage to share the company’s aggressive cloud transformation, which consisted of growing from 100 applications on AWS in 2020 to over 1,000 today, powered by a range of services, from Graviton chips to SageMaker to AWS’ fastest-growing service, Aurora. If this success story is any indication and if we factor in the feedback from our own C-Suite discussions, this building-block approach appears to be resonating, solidifying AWS’ position as the leading IaaS & PaaS provider. But with every new application poised to have some AI or generative AI (GenAI) component, this budding technology is raising the stakes, and the hybrid-multicloud reality means customers have a lot of options when it comes to crafting new workloads.

Compute is the foundational building block, with a heavy focus on AI training

Today, AWS offers over 850 Amazon Elastic Compute Cloud (EC2) instance types, and on average, 130 million new EC2 instances are launched daily. This pace of innovation and scale is largely due to AWS’ approach to the virtualization stack dating back to 2012 with the Nitro System, which other hyperscalers have since emulated in their own way, making compute the foundational building block and hallmark of AWS’ success. Though at the event AWS touted its commitment to NVIDIA, with support for Blackwell GPUs coming online next year, as well as to general-purpose workloads via Graviton, much of the focus was on AI training.
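To make the building-block idea concrete, the sketch below launches a single Graviton-based instance with boto3, AWS’ Python SDK. It is a minimal illustration only; the AMI ID is a placeholder, and the instance type, region and networking details would come from a real account’s configuration.

```python
import boto3

# Minimal sketch: launch one Graviton-based (Arm) EC2 instance as a compute "building block."
# The AMI ID below is a placeholder; use an AMI available in your account and region.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m7g.large",          # example Graviton-based general-purpose instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "building-block-demo"}],
    }],
)
print(response["Instances"][0]["InstanceId"])
```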
 

Since first launching its Trainium chip in 2020, AWS has served the needs of AI training workloads, but now AI-driven ISVs like Databricks and Adobe seem to have an appetite for these chips, hoping to deliver cost and performance efficiencies to the wide swath of their customers that also run on AWS. That is why AWS launched Trainium 2 and is making the corresponding EC2 Trn2 instances, each of which encompasses 16 Trainium2 chips, generally available following a period of preview. AWS also reinforced its commitment to continuing to push the compute boundaries on AI training, announcing that Trainium 3, which will be available later next year, will reportedly offer double the compute power of Trainium 2.

Rise of the distributed database

Another core building block of the cloud stack is the database. Distributed databases are nothing new but have been picking up steam as customers in certain industries, including the public sector, want to have data stored within country borders but scale across different regions. At the event, AWS introduced Aurora DSQL, a distributed SQL database that at its core isolates transaction processing from the storage layer so customers can scale across multiple regions with relatively low latency.
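For illustration, the sketch below shows what a transaction might look like from an application’s point of view, assuming a PostgreSQL-compatible endpoint and the psycopg2 driver; the hostname, credentials and schema are placeholders rather than Aurora DSQL specifics.

```python
import psycopg2

# Hypothetical connection details; a real Aurora DSQL (PostgreSQL-compatible) endpoint,
# credentials and authentication mechanism would differ.
conn = psycopg2.connect(
    host="my-cluster.example-region.on.aws",
    dbname="appdb",
    user="app_user",
    password="example-password",
    sslmode="require",
)

# The application issues ordinary SQL transactions; the engine handles separating
# transaction processing from the storage layer and replicating across regions.
with conn, conn.cursor() as cur:
    cur.execute(
        "INSERT INTO orders (order_id, region, total) VALUES (%s, %s, %s)",
        ("ord-1001", "eu-west-1", 49.99),
    )
    cur.execute("SELECT COUNT(*) FROM orders WHERE region = %s", ("eu-west-1",))
    print(cur.fetchone()[0])
```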
 

This development comes at an interesting time in the cloud database market. Database giant Oracle is shaking up the market, making its services available on all leading clouds, including AWS, with the Oracle Database@AWS service now in limited preview. But AWS is focused on choice. While the IaaS opportunity to land Oracle workloads was too good to pass up, particularly when Microsoft Azure and Google Cloud Platform (GCP) are doing the same thing, AWS wants to continue pushing the performance boundaries of its own databases. In fact, it was Google Cloud that AWS targeted at the event, boasting that Aurora DSQL handles read-write operations four times faster than Google Spanner.
 

Watch On Demand: Monetizing GenAI: Cloud Vendors’ Investment Strategies and 2025 Outlook

Creating more unity between data and AI was somewhat inevitable

Jumping on the platform bandwagon, AWS morphs SageMaker into SageMaker AI

AWS launched SageMaker seven years ago, and the machine learning development service quickly emerged as one of AWS’ most popular, innovative offerings, adding 140 new features in the last year alone. But when GenAI and Amazon Bedrock came on the scene, SageMaker found a new home in the GenAI portfolio, acting as the primary tool customers use to fine-tune foundation models they access through the Bedrock service. So, from a messaging perspective, it was not surprising to see AWS announce that SageMaker is becoming SageMaker AI. But what is notable is how SageMaker AI is being marketed, integrated and delivered.

 

First, AWS VP of Data and AI Swami Sivasubramanian introduced the SageMaker AI platform as a one-stop shop for data, analytics and AI, underpinned by SageMaker Unified Studio, which consolidates several disparate AWS data and analytics tools, from Redshift to Glue, into a single environment. Just as importantly, Unified Studio offers a native integration with Bedrock so customers can access Bedrock for GenAI app development within the same interface, as well as Q Developer for coding recommendations.
 

The second important piece is how data is accessed for SageMaker AI. The foundational layer of the SageMaker AI platform is SageMaker Lakehouse, which is accessible directly through Unified Studio, so customers can work from a single copy of data regardless of whether it is sitting in data lakes they created on S3 or in the Redshift data warehouse. This means customers do not have to migrate any existing data to use SageMaker Lakehouse, and they can query data stored in Apache Iceberg format as it exists today. From competitors and/or partners like Microsoft, Oracle and Databricks, we have seen big leaps forward in data lake messaging, so the SageMaker Lakehouse announcement, combined with traditional S3 developments like S3 Tables for the automatic maintenance of Apache Iceberg tables, aligns with the market and is a big reaffirmation of the Apache Iceberg ecosystem.
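As a rough illustration of querying Apache Iceberg data in place, the PySpark sketch below reads an Iceberg table through a generic catalog configuration; the catalog wiring, warehouse path and table names are assumptions, not the actual SageMaker Lakehouse or Unified Studio setup.

```python
from pyspark.sql import SparkSession

# Generic Iceberg-on-S3 catalog configuration (requires the Iceberg Spark runtime jar).
# Catalog name, type and warehouse location are placeholders, not SageMaker Lakehouse settings.
spark = (
    SparkSession.builder
    .appName("iceberg-query-sketch")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lakehouse", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lakehouse.type", "hadoop")
    .config("spark.sql.catalog.lakehouse.warehouse", "s3://example-bucket/warehouse/")
    .getOrCreate()
)

# Query the Iceberg table where it sits -- no copy or migration of the underlying data.
df = spark.sql(
    "SELECT region, SUM(revenue) AS total_revenue "
    "FROM lakehouse.sales.orders GROUP BY region"
)
df.show()
```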

 

In our view, SageMaker AI is a big development for a couple of reasons. First and foremost, it could go a long way in addressing one of the top concerns we often hear from customers pertaining to AWS, which is that they want consistent data without having to leverage multiple disparate services to carry out a task. SageMaker is still available as a stand-alone service for customers that have a specific requirement, but we suspect a lot of customers will find value in serving the full AI life cycle, from initial data wrangling through model development, as part of a unified experience. Since AWS launched the first EC2 instance in 2006, formalizing cloud computing as we know it today, we have watched the market gradually shift toward more complete, integrated solutions. From IBM to Microsoft, many of IT’s biggest players take a platform-first approach to ease common pain points like integration and cost in hopes of enabling true enterprise-grade digital transformation, and SageMaker AI signifies a step in this direction.
 

Secondly, SageMaker AI aligns AWS more closely with what competitors are doing to better integrate their services and sell data and AI as part of the same story. Considering the consolidation of services, data lake architecture and copilot (Amazon Q) integration, Microsoft Fabric is the most notable example, and while there are big technical differences between the two platforms, you can now draw parallels between the two companies and how they are trying to better address the data layer in a broader AI pursuit. For context, TBR’s own estimates suggest Microsoft Azure (IaaS & PaaS) will significantly narrow AWS’ revenue lead, if not overtake AWS, by 2027, and a lot of customers we talk to today give Microsoft a leg up on data architecture. Nothing can displace Microsoft’s ties to legacy applications and the data within them, but SageMaker AI is clearly in step with the market, and if AWS can effectively engage partners on the data side, this solution could help AWS retain existing and compete for new workloads.

AWS’ values of breadth and accessibility extend to Bedrock

Because Bedrock and SageMaker go hand in hand, having a Bedrock IDE (integrated development environment) directly in SageMaker makes a lot of sense. This means within SageMaker AI, customers can access all the foundation models Bedrock supports and the various capabilities, like Agents and Knowledge Bases, that AWS has been rolling out to its audience of “tens of thousands” of Bedrock customers, a base that has reportedly grown fivefold in the last year alone. In true AWS fashion, offering the broadest set of foundation models is integral to the Bedrock strategy. This includes adding support for models from very early-stage AI startups like Luma and poolside, getting them tied to AWS infrastructure early on, and growing them into competitive ISVs over time.
 

Another key attribute of Bedrock has always been democratization and making access to the foundation models as seamless as possible through a single API hosting experience. In line with this strategy, AWS launched Bedrock Marketplace to make it easier for customers to find and subscribe to the 100-plus foundation models Bedrock supports, including those from Anthropic, IBM and Meta, as well as Amazon itself. AWS is the king of marketplaces, so having a dedicated hub that brings together both startup and enterprise-grade AI models in a single experience is certainly notable and further fuels the shift in buyer behavior toward self-service.
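The appeal of that single-API experience is easiest to see in code. The sketch below calls a Bedrock-hosted model through the Converse API in boto3; the model ID and region are examples, and swapping models is largely a matter of changing the modelId string.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Any Bedrock-supported model can sit behind the same call; the ID below is just an example.
response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize our Q3 cloud spend trends."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```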

Partners take note: Security, modernization and marketplace

Despite all the talk around AI and GenAI, security remains the No. 1 pain point when it comes to cloud adoption and was a big theme in the partner keynote. AWS’ VP of Global Specialists and Partners, Ruba Borno, reinforced the importance of AWS’ various specialization programs to demonstrate skills to clients in key areas including security. During the keynote, AWS announced new security specializations, including one around AWS’ Security Lake service. This is a pretty telling development for partners; Security Lake was a service essentially designed with partners in mind, allowing many services-led firms to build integrations and attach managed services. Now these partners can demonstrate their skills with Security Lake to customers, along with other areas in the realm of security, such as digital sovereignty, which aligns with AWS’ upcoming launch of the European Union (EU) Sovereign Cloud region.

 

Aside from security, AWS emphasized modernization and the need for partners to think beyond just traditional cloud migration opportunities. That is why AWS launched new incentives for modernization, including removing funding caps within MAP (Migration Acceleration Program), and rebranded the AWS Migration Competency as the AWS Migration and Modernization Competency. This is pretty telling of where AWS wants partners to focus and, in many cases, how it wants them to change the conversation with buyers, emphasizing the role of modernization in the migration process. Considering how difficult it has become for services players to compete on migration services, as well as the fact that modernization could set the stage for more GenAI usage with tools like Q Developer, we believe this is aligned with where many global systems integrators are headed anyway.

Expanding the reach of AWS Marketplace

No partner discussion would be complete without AWS Marketplace, AWS’ pervasive hub where customers can buy and provision software using their existing cloud spend commitments. Year to date, AWS reports that essentially all of its top 1,000 customers buy on the AWS Marketplace, and usage spans several industries, including the public sector, which has reportedly transacted over $1 billion on AWS Marketplace in the past year. At re:Invent, AWS continued to take steps to expand the reach of AWS Marketplace, getting partners to better engage customers through this channel, with the availability of Buy with AWS. This option allows customers to access AWS Marketplace directly from a partner’s website.

Final thoughts

re:Invent showcased how AWS is pushing the envelope, in both breadth and capability, on the compute, database and AI building blocks customers use to solve specific use cases in the cloud. This approach, coupled with innovations like Bedrock Marketplace and a commitment to early-stage startups, speaks to how AWS will continue to lean into the core strengths that have made the cloud provider what it is today. But just as notably, offerings like SageMaker AI and an alliance with competitor Oracle show how AWS is embracing new tactics and elevating its role within the cloud ecosystem.

AI PCs: Progress, Potential and Hurdles in Redefining the Market in 2025

2025 Predictions is a series of special reports examining market trends and business changes TBR expects in the coming year for AI PCs, cloud market share, digital transformation, GenAI, ecosystems and alliances, and 6G

Top Predictions for AI PCs in 2025

  1. AI PCs will not drive the next commercial PC refresh cycle
  2. Proprietary AI agents will become increasingly prevalent in the AI PC space over the next several quarters

 

Request Your Free Copy of 2025 AI PC Predictions
 

Revitalizing the PC market

For several quarters during 2022 and 2023, major PC OEMs redirected investment away from their PC businesses toward other ventures as PC sales slowed due to market saturation and cautious spending from commercial organizations. Since late 2023, however, this trend has reversed as PC OEMs invest in the development and marketing of PCs with built-in AI capabilities powered in part by a dedicated processor called a neural processing unit (NPU).
 
While PCs with AI capabilities have existed for years, including high-powered workstations that leverage the GPU for AI tasks such as computer-aided design (CAD) and other simulation workloads, new AI PCs will target a much broader user base, including consumer and business users. This latest influx of AI PCs started in December 2023 with Intel’s release of its Core Ultra series of processors, which offload on-device AI tasks to the NPU in order to deliver greater power efficiency. Since then, PC OEMs have released several waves of AI PCs featuring both the first and second generation of Intel’s Core Ultra chips, as well as similar x86 processors from AMD and comparable ARM-based variants from Qualcomm.

TBR Insights Live: 2025 AI PC Predictions
When OEMs first started releasing AI PCs, they shared expectations that the advent of this new product category would help drive the next major PC refresh cycle. However, even as vendors continue to roll out new generations of AI PCs containing increasingly powerful NPUs, adoption remains relatively slow. This is because the presence of an NPU on its own does nothing to increase the value of an AI PC compared with otherwise similar devices; AI PCs require an additional layer of applicable software that makes AI-enabled features easily accessible and user-friendly.
 
Therefore, to build out the market and drive greater adoption of AI PCs over the next few years, silicon providers, PC OEMs and ISVs will need to collaborate around and invest in developing applications that increase the functionality of these devices beyond what can be achieved by a traditional, non-AI PC.
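One way that software layer typically surfaces to developers is through runtime execution providers that route inference to the NPU when one is present and fall back to the GPU or CPU otherwise. The sketch below uses ONNX Runtime to illustrate the pattern; the provider names depend on the silicon vendor’s build, and the model file and input shape are placeholders.

```python
import numpy as np
import onnxruntime as ort

# Prefer an NPU-backed provider (e.g., QNN for Qualcomm silicon), then DirectML, then CPU.
# Availability depends on the installed onnxruntime build and the device's hardware.
preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [p for p in preferred if p in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)  # placeholder model file
print("Running on:", session.get_providers()[0])

# Placeholder input: a single 224x224 RGB image tensor.
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)})
```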
 
To read the entire 2025 AI PC Predictions special report, request your free copy today!

Cloud Market Share in 2025: GenAI Spurs Growth but Does Not Promise Vendors Long-term Gains

2025 Predictions is a series of special reports examining market trends and business changes TBR expects in the coming year for AI PCs, cloud market share, digital transformation, GenAI, ecosystems and alliances, and 6G

Top Predictions for Cloud Market Share in 2025

  1. Scale, innovation and even repatriation will moderate cloud market growth in 2025
  2. Microsoft will narrow the gap with AWS in IaaS & PaaS market share, en route to leadership in 2027
  3. SaaS vendors will shrug off growing GenAI disillusionment, focusing on the long term by prioritizing GenAI agents within their development strategies

 

Request Your Free Copy of 2025 Cloud Market Share Predictions

The GenAI opportunity is developing but does not ensure future cloud market growth

The revenue generated from generative AI (GenAI) offset some of the impact of cost-saving and expense-reduction efforts that defined the IT and cloud market in 2024. We expect some of that luster to fade in 2025, however, as the lack of a clear ROI from GenAI solutions will be a sticking point that slows investment in the coming year. The long-term GenAI opportunity is still sizable and customer interest remains strong, but the coming year will be a transition period for end customer investment in the technology.
 
TBR Insights Live: 2025 Cloud Market Share Predictions
 
At the same time, the leading hyperscalers will use 2025 to expand delivery capabilities and secure their position in the AI market for the long term. We expect double-digit growth in capex spending from the leading vendors like Amazon Web Services (AWS), Microsoft and Google. This dichotomy of accelerated vendor investment and more restrained customer spending will define the coming year.
 
To read the entire 2025 Cloud Market Share Predictions special report, request your free copy today!

6G’s Fate Depends on the Level of Government Intervention

2025 Predictions is a series of special reports examining market trends and business changes TBR expects in the coming year for AI PCs, cloud market share, digital transformation, GenAI, ecosystems and alliances, and 6G

Top Predictions for 6G in 2025

  1. 6G will leverage FR3 spectrum
  2. Capex spend on 6G is likely to be subdued
  3. Scope of government support for the telecom industry will increase and persist to facilitate 6G market development

 

Request Your Free Copy of 2025 6G Predictions

 

Lack of a clear ROI for the private sector to justify investing sufficiently in 6G puts the fate of the technology into the hands of the government

The telecom industry continues to struggle with realizing new revenue and deriving ROI from 5G, even after five years of market development. TBR continues to see no solution to this persistent challenge, and with no catalyst on the horizon to change the situation, communication service providers’ (CSPs) appetite for and scope of investment in 6G will likely be limited.
 
TBR expects CSP capex investment for 6G will be subdued compared with previous cellular network generations, and deployment of the technology will be more tactical in nature, which is a marked deviation from the multihundred-billion-dollar investments in spectrum and infrastructure associated with the nationwide deployments during each of the prior cellular eras.

TBR Insights Live: 2025 6G Predictions

In a longer-term effort to address this situation, TBR expects the level of government involvement in the cellular networks domain (via stimulus, R&D support, purchases of 6G solutions and other market-influencing mechanisms) to significantly increase and broaden, as 6G has been shortlisted as a technology of national strategic importance.
 
With that said, 6G will ultimately happen, and commercial deployment of 6G-branded networks will likely begin in the late 2020s (following the ratification of 3rd Generation Partnership Project [3GPP] Release 21 standards, which is tentatively slated to be complete in 2028). However, it remains to be seen whether 6G will be a brand only or a legitimate set of truly differentiated features and capabilities that bring broad and significant value to CSPs and the global economy.
 
Either way, the scope of CSPs’ challenges is growing, and governments will need to get involved in a much bigger way to ensure their countries continue to innovate and adopt technologies that are deemed strategically important.
 
To read the entire 2025 6G Predictions special report, request your free copy today!

HCLTech AI Force: Scalable, Modular and Backed by Proven AI Expertise

TBR perspective

Disparate and siloed data, specialized software tools and interrelated processes challenge enterprises to gain real value from AI-enabled solutions. HCLTech’s AI Force platform provides visibility into data streams and interdependencies across the software development and operations life cycles, requiring minimal change management and no replacement of existing technology while greatly enhancing an enterprise’s existing IT environment. In short, AI Force is a nondisruptive force multiplier of customers’ technology investments.

 

In late September, TBR met with executives from HCLTech to discuss the company’s AI Force platform, overall business model, and strategies around AI and generative AI (GenAI). The HCLTech team included Apoorv Iyer, EVP and Global Lead, Generative AI Practice; Gopal Ratnam, Vice President, Product Management, Generative AI Products & Platforms; Alan Flower, EVP and Global Head, AI & Cloud Native Labs; and Rohan Kurian Varghese, Senior Vice President, Marketing. This special report reflects that discussion as well as TBR’s ongoing research on and analysis of HCLTech.

AI Force is a GenAI-powered platform that infuses intelligence across every phase of the dev and ops life cycles

HCLTech had an early start in AI, setting up a research team in 2016 and building out its AI engineering strengths around AI silicon; the development of AI-led IP solutions like DRYiCE, iAutomate and SDLC (Software Development Life Cycle), which was a precursor to AI Force; and its strong heritage in Data & AI with strategic acquisitions like Actian, Starschema and, most recently, Zeenea. This has ingrained AI across HCLTech’s portfolio and underpinned transformation projects, allowing customers to seamlessly manage IT and cloud environments. Leveraging this heritage, HCLTech developed AI Force with responsible AI built in and a set of scalable, modular use cases that cover the entire software and operations life cycle, such as requirements and analysis (e.g., user story generation, change impact analysis), development (e.g., code generation, code refactoring), triage (e.g., duplicate defect detection), and technical support.

 

Through AI Force, HCLTech provides clients with a platform that supports not only the software development life cycle, reducing the lift on manual tasks and shortening overall development time, but also the operations life cycle, enhancing overall efficiency and accelerating technology value across an enterprise by reducing accrued technical debt and producing better-quality code. As one HCLTech leader described it, AI Force allows an enterprise to “stitch everything [in the IT environment] together and figure out where the issues are.”

 

Notably, AI Force has been on the market for over a year, is live with more than 25 of HCLTech’s enterprise clients, and serves the broader IT ecosystem within an enterprise, beyond just application development and maintenance teams. An HCLTech leader noted that the AI Force platform “reduces the lift of manual tasks and accelerates the overall service delivery time,” a clear operational and financial benefit for any enterprise and clearly more than simply a collection of software tools. Enterprises can now make intelligent decisions by harnessing data, leading to the accelerated development of products and applications, along with significant cost savings and improved efficiencies.

 

Before diving into specifics around AI Force, HCLTech’s leaders described some of the challenges enterprises face across the software development and operations life cycles, highlighting the complexities inherent in having multiple personas, disconnected processes, siloed data, disparate systems and specialized tools.

 

According to HCLTech, this landscape is missing a digital thread or intelligence hub capable of understanding the entire process end to end, including the data sets generated by specialized tools, and then further unlocking the relationships between the data sets. HCLTech’s AI Force can integrate existing tools rather than replace them, bring data sets together, create a knowledge graph of the relationships between the data sets, and conduct comprehensive root cause analysis.
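To make the “digital thread” idea concrete, the toy sketch below builds a small graph of relationships between artifacts from different tools and walks back from an incident to its upstream candidates. It is purely illustrative; the node names, relations and traversal logic are assumptions, not AI Force’s actual data model.

```python
import networkx as nx

# Toy digital thread: artifacts from separate tools linked into one graph.
G = nx.DiGraph()
G.add_edge("user_story:US-101", "commit:abc123", relation="implemented_by")
G.add_edge("commit:abc123", "build:2024.11.5", relation="included_in")
G.add_edge("build:2024.11.5", "incident:INC-778", relation="suspected_cause_of")

# Root cause analysis as traversal: everything upstream of the incident is a candidate cause.
incident = "incident:INC-778"
for artifact in nx.ancestors(G, incident):
    print("candidate cause:", artifact)
```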

AI Force’s key characteristics and advantages

In the discussion with TBR and during HCLTech’s presentation of AI Force’s capabilities, HCLTech’s AI leaders walked through AI Force’s go-to-market approach, characteristics, architecture, advantages and use cases. HCLTech conducted a demonstration of AI Force in action before turning to the synergies between AI Force and the company’s global network of AI and Cloud Labs.

 

At its core, HCLTech’s AI Force features extensibility, modularity and flexibility. It can integrate smoothly with existing IT environments, be leveraged for a large variety of use cases within an enterprise, and be deployed, consumed and priced in different ways that are suitable to an individual customer’s business needs.

 

In describing HCLTech’s go-to-market strategy, the AI leaders stressed three points:

  1. HCLTech will continue to enhance large-scale engagements with the capabilities and benefits of AI Force from the start, affording the client immediate cost savings.
  2. In other situations, HCLTech will assist clients in deploying AI Force as a platform within the client’s enterprise IT environment.
  3. For clients already engaging HCLTech for managed IT services, AI Force can be deployed to gain cost savings and efficiencies, directly complementing existing managed services. This last approach, in TBR’s view, reinforces HCLTech’s value proposition around offering innovation, even in established managed services engagements, and expands its remit within the enterprise, from simply IT services to more consultative, business-outcomes-driven and AI-enabled solutions. As part of this consultative approach, HCLTech undertakes value stream mapping in the discovery process for deploying AI Force, including a detailed as-is picture, to-be picture, and the true impact at scale. Through this due diligence, HCLTech helps customers select the right projects that can benefit from AI Force.

Appealing broadly across the enterprise and embedding customer context

Recognizing that peers such as Infosys and EY have similarly developed suites of AI-enabled and AI-forward solutions, HCLTech leaders highlighted some aspects they believe distinguish the company’s capabilities, particularly AI Force.

 

First, the solution can be deployed on the cloud, on premises or even in edge-enabled devices, depending on a client’s needs and circumstances. The leaders described this aspect as appealing to HCLTech’s ecosystem partners, which include Microsoft, Amazon Web Services (AWS), SAP and IBM, further noting AI Force’s already established integration with Microsoft’s GitHub Copilot, where it is offered as a certified extension.

 

Second, the HCLTech executives noted AI Force is valuable to more than just coders and enterprise professionals looking for AI-enabled cost- and time-saving assistance. Being extensible and working with multiple large language models (LLMs) makes AI Force flexible enough for a broader enterprise workforce audience.

 

Third, the inclusion of a customer context using enterprise data makes the solution more than simply an addition to an existing LLM accelerator. HCLTech’s leaders emphasized the value of customer context inherent to the platform, noting that HCLTech will train AI models on customer-specific data.

 

On a related note, the HCLTech executives described the underlying AI architecture as “comprehensive, but not complex; unified” and “holistic, therefore not a point solution.” According to HCLTech, AI Force has been granted 18 patents, and its batch processing mode reduces the strain on the underlying cognitive infrastructure, leading to reduced energy consumption. In TBR’s view, the characteristics and architecture likely resonate with IT professionals and particularly software engineers, while the flexibility and customer context significantly enhance the business value of AI Force.

 

Building on key characteristics, the HCLTech AI leaders walked through AI Force’s overall advantages, including a single, unified platform, rather than hundreds of solutions; simplified management and budget; built-in use case prioritization, allowing decision-makers and IT support to focus on the use cases that would lead to business transformation; inherently enabled customer context, greatly enhancing the stickiness of AI Force within an enterprise; and built-in data ingestion and storage, significantly diminishing the likelihood of disjointed or counteracting results.

 

In TBR’s view, AI Force’s advantages play well for different buying and decision-making personas. Procurement, IT operations and even the CFO can appreciate a single solution with simplified management. Business unit leaders can find and deploy use cases suitable to their specific needs. And the inherent stickiness of AI Force can appeal to executives looking to gain advantages from deploying AI-enhanced solutions and not simply paying for another round of new technologies.

Applying GenAI only when and where it is needed

Not every business problem is best solved by deploying GenAI-enabled solutions. HCLTech leaders emphasized that some customer problems can be handled by simple automation, some with traditional AI, and only a niche set through GenAI-enabled solutions.

 

In TBR’s view, HCLTech’s strategic decision to recognize that customers can solve problems with existing technologies and do not always need GenAI-enabled solutions plays well, given enterprise buyers’ fatigue around the constant carousel of emerging technologies and ever-increasing IT budgets. Simply showing customers that AI Force will help identify where GenAI is best suited and where it is not should resonate with IT decision makers and their C-Suite bosses, all of whom are looking for tangible returns on technology investments. If HCLTech can help get more from existing technologies, AI Force is an immediate value-add.

 
Notably, HCLTech works with a wide variety of models and is model agnostic. The choice of model depends on a client’s business problem and the context of the client’s own data. Rather than recommending a model based on technical specifications or a familiarity with a particular model, HCLTech centers the decision on the client’s specific business problem.

Four ways to consume, determined by the customer’s business problem

HCLTech’s customers can take advantage of the AI Force platform in whichever deployment and consumption model fits their needs. HCLTech offers the platform as a stand-alone deployment, embedded into the client’s IT environment, through APIs (which one HCLTech leader described as “headless … behind the scenes”), or on the edge through AI-enabled PCs.

 

Critically, HCLTech leaders assured TBR that the customer’s consumption model of choice made “no difference in how the customer pays for AI Force.” As for decision making around the consumption model, HCLTech leaders said the company advises customers based on the business problem the customer is trying to solve.

 

On this point, TBR believes HCLTech has, itself, made a strategic decision: allow the customer’s environment, needs and business problems to determine the best commercial and technological fit for HCLTech’s platform, rather than HCLTech’s business and commercial needs dictating deployment terms.

 

The discussion included detailed accounts of two deployments at different types of companies. First, to accelerate a legacy IT modernization effort at a financial institution, HCLTech used AI Force to map, migrate and test more than 200 legacy applications.

 

Second, at a massive global technology company, HCLTech used AI Force to radically reduce marketing spend through what an HCLTech leader referred to as “marketing ops transformation from manual-driven content development by a third-party vendor to GenAI-automated content generation.” TBR has been briefed on similar marketing operations improvements through GenAI automation, but none at the same scale or with comparable cost savings as those described by HCLTech.

 

HCLTech leaders also described the company’s recently announced partnership extension with Xerox. The company will leverage automation, product and sustenance engineering, and process operations services — including order to cash, sales and marketing operations, and supply chain and procurement — along with AI Force, to deliver a unified interface that transforms the way employees and clients engage with Xerox.

 

HCLTech describes other AI Force use cases on its website.

Minimal change management and increased visibility provide immediate value

In TBR’s research, GenAI adoption has benefited enterprises with well-managed and orchestrated data, even if that data exists in silos. In contrast, enterprises with little visibility into their data have been challenged to see meaningful returns on their GenAI investments, in part because of a challenge HCLTech identified above: People within an enterprise typically like the specialized software tools they are already using and want to keep using them.

 

HCLTech’s AI Force does not ask for change from multiple personas across an enterprise or for adoption of a new set of tools; it instead provides greater visibility into everyone’s processes, software usage and IT environment and demonstrates how one person, process or tool can affect another. By providing visibility without demanding replacement and adoption, HCLTech’s AI Force can deliver value with minimal change management.

AI Force may be what helps HCLTech survive the coming IT services business model upheaval

As HCLTech’s leaders noted to TBR, HCLTech is not new to AI, as the company had been investing in AI, training its workforce around AI principles and deployments, working with chip manufacturers, and developing and selling software all before GenAI emerged. As one slide in HCLTech’s presentation noted, the company has been “Building and deploying AI solutions since 2016.”

 

Legacy — and maybe more accurately, proven — skills and capabilities lend immediate credibility to what HCLTech brings to clients and partners with AI Force. Further, a significant part of what separates HCLTech from immediate peers is the company’s IP-driven services model, a strategic difference that becomes increasingly relevant as clients ask for more GenAI-enabled services and less labor-dependent services. HCLTech’s business model is not simply enhanced by AI Force and other IP-driven solutions; it might actually be saved by those capabilities as the entire IT services business model undergoes significant, AI-induced change.

 

TBR will be watching as HCLTech develops additional platforms, brings agentic AI solutions to discussions with clients, and enables fully autonomous AI deployments, all built on a solid foundation of expertise, experience and ever-increasing capabilities around artificial intelligence.

Hybrid AI: Lenovo Builds a Portfolio Ready to Address the Confluence of Personal, Enterprise and Public Data

Lenovo Outlines Its Vision for Hybrid AI

Lenovo CEO Yuanqing Yang, better known as YY, opened up the company’s 2024 Tech World event by discussing Lenovo’s stance on what it calls “hybrid AI,” a vision not dissimilar to hybrid cloud.

 

Hybrid AI is the ability to leverage both private (personal or enterprise) and public foundational models together to drive action. YY sees hybrid AI as the path forward for both consumer and enterprise users, with AI agents serving as the vector for combining these multiple data sources and connecting knowledge with specific tasks. AI agents will know their users by integrating disparate data into unified frameworks, will understand their users by creating models of a person or enterprise, and will work for their users by putting this knowledge into action.

 

The incoming era of agentic AI will require multiple agents to work together to make critical connections across public and private data sets. During the Tech World keynote, Lenovo executives demonstrated a handful of capabilities of personal AI agents, from helping students study more effectively for exams to understanding the context of a consumer’s morning routine and ordering their favorite coffee from their usual coffee shop. These types of tasks require the ingestion, understanding and integration of user data across multiple applications.

 

While the discussion around consumer AI was focused on devices, Lenovo’s Enterprise AI showcase largely highlighted its IT infrastructure and services businesses. Core to the hybrid AI theme was the announcement of Hybrid AI Advantage with NVIDIA. Enabled by Lenovo’s full-stack portfolio and new Lenovo AI Library, the joint solution framework highlights how Lenovo believes Hybrid AI Advantage will accelerate enterprises’ AI adoption. This announcement puts Lenovo in formal competition with similar NVIDIA-based joint solution portfolios from Dell Technologies and Hewlett Packard Enterprise (HPE).

 

In contrast to these large-scale high-performance computing (HPC) systems, Lenovo arguably will have equal or greater success through its AI-centric edge business, where Lenovo has a track record of deploying retail, manufacturing and smart city use cases. Multiple server vendors’ AI stories center on their massive 8-GPU AI systems, but Lenovo points out that for many companies, AI will be executed on far smaller and more affordable systems, some with no GPU at all. This strategy plays directly into Lenovo’s “AI for all” mantra.

 

Underneath the enthusiasm for hybrid AI, Lenovo’s mission remains unchanged: It is driving transformation to become a technology leader in global devices, infrastructure solutions and services worldwide. Lenovo positions itself as having an end-to-end technology portfolio, a user-centered approach and an immense emphasis on open innovation. The company offers its customers choices thanks to its partnerships across semiconductor, AI platforms and ISVs; and it leverages its Solutions and Services Group (SSG) to accelerate solution development between its own portfolio and partner ecosystem.

Lenovo Develops Proprietary AI Features to Differentiate Its IDG Portfolio

Overall PC demand decreased over the past several quarters due to lengthening PC life cycles and the lingering effects of post-pandemic market saturation. However, during the pandemic the total addressable market for PCs increased robustly as the number of PCs per household jumped, driven by both work- and learn-from-home initiatives around the world.

 

As such, with these pandemic-bought machines aging, the next major PC refresh cycle is on the horizon and is expected to drive a material rebound in the market, supported by the upcoming end of Windows 10 support and mounting interest around AI PCs.

 

However, the lack of killer applications leveraging the neural processing unit (NPU) has throttled AI PC adoption thus far. At GIAC Lenovo emphasized that it expects new AI PC killer use cases and applications will come in 2025 and 2026, noting that there are already over 100 independent software vendors developing applications leveraging the NPU.

 

Lenovo had also recently announced AI PC Fast Start, an AI-centered advisory and deployment service that helps organizations transition to AI-ready devices and quickly unlock the potential of AI PCs.

Lenovo Announces Aura Edition AI PCs Ahead of the Next Major PC Refresh Cycle

To prepare for this refresh and capitalize on the market’s interest in AI and generative AI (GenAI), Lenovo unveiled a series of new AI PCs, including the company’s Aura Edition AI PCs, which Lenovo developed in deep collaboration with Intel and which include three levels of “Smart” features to enhance the user experience.

 

The Smart Modes feature allows Aura Edition PCs to intelligently adapt to users’ workloads and environments through five submodes, including Shield Mode and Collaboration Mode, which enhance user privacy and optimize video from integrated PC cameras, respectively. The Smart Care feature integrates natural language processing capabilities to drive an enhanced user support experience, and the Smart Share feature allows for cross-device image sharing, supporting smartphones on both Android and iOS platforms.

Lenovo Plans to Leverage Its AI Now Agent to Drive Differentiation in the Market

Over the last decade the Windows PC market has become increasingly commoditized as all OEMs across the industry built machines based on the same PC silicon and operating system, resulting in a lack of material differentiation. However, the rise of AI PCs presents a new opportunity, and Lenovo is working to set itself apart from its peers by working with Meta to develop and integrate an on-device AI agent, dubbed AI Now, through a deepening of their partner engagement.

 

While Microsoft Copilot+ offers a series of GenAI features and experiences for Windows 11 machines leveraging several of Microsoft’s small language models, at GIAC Lenovo Executive Vice President and President of IDG Luca Rossi noted that not all Copilot+ functions are run natively on the device, with certain queries going to the cloud. In contrast, Lenovo AI Now leverages a local large language model to drive new capabilities that complement Copilot+’s feature set.

 

With significant support from Meta, Lenovo’s research team worked extensively to fine-tune the local large language model behind AI Now using Meta’s Llama 3. Through AI Now users can interact in real time with their device’s personal knowledge base, all without relying on cloud computing, providing enhanced data privacy and enabling GenAI features without internet connectivity.
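For readers curious what on-device generation with an open Llama 3 model looks like in practice, the sketch below uses the Hugging Face transformers library; it is a generic illustration under stated assumptions (model ID, prompt, hardware), not Lenovo’s AI Now implementation.

```python
from transformers import pipeline

# Generic local-inference illustration; not Lenovo's AI Now stack.
# Meta-Llama-3-8B-Instruct is a gated model that requires accepting Meta's license,
# and device_map="auto" assumes the accelerate package is installed.
generator = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    device_map="auto",
)

prompt = "Summarize yesterday's meeting notes stored on this PC in three bullet points."
result = generator(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```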

 

AI Now’s capabilities include document management, meeting summarization, device control and content generation, with the AI assistant supporting natural language interaction. Additionally, it is worth reiterating that Lenovo sees AI Now complementing Copilot+ rather than replacing it, as the company does not want its AI PC agent to compete with other cloud-based or cloud-leveraging alternatives.

Lenovo Bets Big on MBG and moto ai

IDG comprises two business units: PCs and Smart Devices (PCSD) and Mobile Business Group (MBG). While the majority of IDG’s investment is focused on PCSD, the larger of the two business units, Lenovo remains committed to expanding its MBG business, which includes Motorola Mobility, to grow its global market share and increase the premium mix of its overall mobile portfolio.

 

Similar to its strategy in the AI PC space, Lenovo MBG continues to invest in the development and integration of AI features within its smartphone lineup through moto ai. While many of the company’s moto ai features showcased at Tech World are in proof of concept or beta stages, Lenovo made clear its plans to bring customized user experiences to market in the near term. New moto ai features include prompts and commands like “Catch me up,” which summarizes personal communications, and “Remember this,” which, when initiated, captures live moments and on-screen information while also providing AI-generated insights.

 

Additionally, Lenovo demonstrated the capabilities of its large action model, which allows Motorola devices to learn from users’ behaviors to offer increasingly personalized responses and translate natural language prompts into actions that can be executed automatically on behalf of the user.

 

Further, Lenovo provided an update on how the company plans to bolster the capabilities of its Smart Connect software solution, launched in February, to enable multidevice experiences across Lenovo’s portfolio of PCs, tablets and smartphones. The integration of new AI features with Smart Connect will enable users to not only transfer personal data across their connected devices but also benefit from cross-device searches and smart actions, allowing them to activate moto ai features directly from their PC using moto ai prompts and commands.
 
Perhaps most noteworthy, Smart Connect supports Lenovo’s hybrid AI strategy by fully integrating device ecosystems, allowing users to instruct their Motorola devices to carry out a complex AI task that cannot be performed locally on the device. Instead, Smart Connect uses a connected AI-enabled device, such as a Lenovo AI PC, to execute the task and return the results to the user’s smartphone.

Lenovo Uses One Lenovo Strategy to Bring Enterprise AI to Fruition

Enterprise AI Solutions Highlight ISG and SSG Integration

While Lenovo operates three distinct groups for devices, infrastructure and services, its enterprise AI solutions pull from expertise across the three businesses. This is particularly evident in the ISG and SSG space with the launch of Lenovo AI Fast Start services and Lenovo Hybrid AI Advantage with NVIDIA.

 

Lenovo’s AI Fast Start professional services help customers identify AI use cases and begin generating value within 90 days. While this may seem like a lofty goal, particularly as the enterprise market struggles to identify and best deploy AI, Lenovo highlighted two examples of this service in action. SAP used AI Fast Start to build an interactive AI avatar for one of its newest experience centers. Formula One also used the service to deploy an AI-based solution that provides a more immersive viewer experience by pulling video from numerous video feeds, enhancing the content and delivering it to the user faster than through manual video management methods.

 

While these use cases are examples of tangible needs that can be met using AI, other use cases are not as evident in the enterprise market. Lenovo has also built an AI advisory practice that identifies ways AI can create value for a business and develops an adoption road map leveraging Lenovo’s AI library of use cases. By using the term “library” to describe its collection of AI use cases, Lenovo is intentionally conveying the impression that it offers specific use cases for everyone.

 

In TBR’s view, this provides some subtle differences from the one-size-fits-most messaging around AI use cases coming from most of Lenovo’s peers and ecosystem players. In addition, TBR notes that Lenovo intends to fully root its AI advisory capability into its technology, rather than taking a McKinsey-like approach to business consulting, playing to Lenovo’s services strengths.

Lenovo Deepens Its Relationship with NVIDIA to Drive Enterprise AI Adoption

Lenovo Hybrid AI Advantage with NVIDIA is first and foremost a collaboration that marries Lenovo’s infrastructure and services portfolios with the NVIDIA AI Enterprise software platform, NVIDIA accelerators and NVIDIA networking solutions. While other companies have collaborated with NVIDIA in the enterprise market, including Dell Technologies with its Dell AI Factory with NVIDIA and HPE with its NVIDIA AI Computing by HPE, Lenovo intends to differentiate itself through its library of horizontal and vertical-specific accelerators, which will help customers build solutions more quickly.

 

Lenovo Hybrid AI Advantage with NVIDIA can be paired with Lenovo AI Advisory and Lenovo AI Fast Start services, Lenovo TruScale GPUaaS, or ISV offerings from Lenovo’s AI Innovators Program.

 

Lenovo is also using its 70,000-employee base as the test bed for AI use cases being brought to market. Ken Wong, executive vice president and president, SSG, notes that SSG’s biggest customer is Lenovo itself. For example, the company has built generative AI-based solutions to generate marketing content and to create customer service agents for its customer support centers.

Lenovo Highlights Engineering Distinctions in Its Wide-ranging AI Server Lineup

Sustainability is a difficult topic to broach when it comes to large-scale AI systems, which consume increasing amounts of electricity and generate more heat with each new generation of AI accelerators. Lenovo provides a compelling approach with its sixth generation of Neptune liquid cooling, which is integrated into its ThinkSystem SC750 and SC777 servers.

 

Unlike other water-cooled systems, Lenovo’s liquid cooling uses conductive copper piping instead of PVC and is able to cool the systems with warm water instead of prechilled water, which consumes additional energy. Compared to air-cooled server systems, Lenovo claims that Neptune can reduce energy consumption for server fans and data center air conditioners by 40%. Lenovo also pairs its in-house liquid cooling design expertise with its data center design and planning, implementation and management services to facilitate liquid cooling technology adoption for AI workloads.

 

Of the major server OEMs, Lenovo is the quickest to point out that not all enterprise AI use cases require high-performance computing. Lenovo’s edge computing business, which is now integrated with its AI business, features its ThinkEdge server portfolio including multiple small form factor servers that operate in edge environments. These servers are the foundation for many of the AI use cases featured in the 165-plus ISV solutions built through Lenovo’s AI Innovators Program.

Responsible AI Serves as Lenovo’s Guiding Principle

Lenovo’s company vision of hybrid AI, in which personal, enterprise and public data sets are used to inform AI agents, is the natural evolution of AI technology but is not without risks around security, privacy and sustainability.

 

In response to these risks, Lenovo has proactively implemented its own AI governance organization to create AI policies and establish trust among its employees, customers and partners. Lenovo has combined its chief security officer and chief AI officer positions into one role under Doug Fisher, based on the company’s belief that security, privacy and ethics are central to designing AI solutions.

Behind the Scenes, Lenovo Is Honing Its Strategy Execution

Following the pandemic and a related multiquarter slump in PC demand that impacted top-line revenue and profitability, Lenovo has underscored its strategy to diversify revenue away from PC, which made up about 74% of total revenue in 2021.

 

While Lenovo has made progress on this goal to some extent, as ISG and SSG have both experienced revenue growth, the company acknowledges it needs to make changes across its portfolio and go-to-market strategies to further accelerate revenue growth.

Lenovo 360 Continues to Target Growth Through Partner Channel

An effective channel strategy is critical to executing on Lenovo’s broad growth initiatives, particularly those around driving ISG hardware to profitable growth. The company’s Lenovo 360 partner framework has simplified the partner process by drastically reducing the number of partner programs and incentive structures, streamlining certification processes, and creating a digital hub that supports demand-generation activities and helps partners track their deal pipeline and sales performance.

 

In tandem with transforming partner engagement, Lenovo is simplifying the ISG product portfolio to focus on the hardware configurations that comprise the most sales volume. This strategy is one of the key ways Lenovo plans to trim costs within its operations and make its portfolio easier for channel partners to sell. Additionally, a simplified infrastructure portfolio will also help Lenovo more easily maintain healthy channel inventory levels.

 

Lenovo acknowledges that the reseller landscape is evolving from traditional value-added reselling toward a services-led approach. As such, the company is evolving its partner framework to better engage a broader set of ecosystem players, including managed services providers and global systems integrators, that are increasingly relevant partners in complex, multivendor solutions.

Lenovo Will Expand into Tangential Markets Where It Can Tap into Existing Strengths

Lenovo wants to capitalize on new markets, including the auto industry, where vehicle architectures are shifting from multiple distributed computing resources spread throughout the vehicle to more centralized compute, specifically around infotainment and autonomous driving systems.

 

Yong Rui, Lenovo’s former CTO, has been appointed to lead the company’s newly formed Emerging Technology Group (ETG), which will spearhead the expansion into in-vehicle computing as well as other emerging tech areas.

 

Lenovo feels its strengths in hardware design and manufacturing will help it expand into a brand-new market with an entirely different set of competitors. Through this expansion, Lenovo will remain true to its own DNA, focusing specifically on compute and leaving other aspects such as software, algorithms and vehicle manufacturing to ecosystem partners.

Lenovo Is Investing in Brand Recognition and Perception

Lenovo is investing in brand recognition through major sports sponsorships. At Tech World, Lenovo announced an expansion of its existing Formula One sponsorship, which will include Lenovo subsidiary Motorola becoming the global smartphone partner for the series.

 

Further, Lenovo announced a partnership with FIFA to become the technology partner for the FIFA World Cup 2026 and the FIFA Women’s World Cup 2027. These investments will help Lenovo drive brand recognition and expand into key growth markets, including premium PCs, premium smartphones, IT infrastructure, and related solutions and services.

Fujitsu’s Strategic Evolution: Transforming for a Future with Uvance at the Core

On Oct. 1, TBR attended Fujitsu’s Executive Analyst Day in Santa Clara, Calif., and engaged with Fujitsu leaders, including Tim White, chief strategy officer; Ted Okada, SVP and head of Technology; Ted Nakahara, SVP and head of Strategic Alliances; Fleur Copping, VP of Strategic Alliances in Regions; and Asif Poonja, EVP and CEO of Fujitsu Americas. The following reflects main stage presentations, breakout sessions and one-on-one discussions, as well as TBR’s ongoing analysis of Fujitsu’s business model, strategy and performance.

Fujitsu in Transition, with Clear Direction and Intent, Playing to Strengths

Three things about Fujitsu stand out in a crowded IT services and consulting market. First, the company is in the middle of an organizational evolution, changing its business model to fit emerging client demands and orienting its go-to-market strategy around Uvance. Second, Fujitsu’s commitment to change in the Americas has completely remade the company around IT services and consulting, with aspirations to become a technology consulting leader. And third, Fujitsu’s alliances strategy, while still dependent on labor-intensive relationships and persistent account-level management, includes all the best practices TBR has seen from larger competitors, with at least one unique twist. In short, Fujitsu’s evolution will likely make the company a highly capable contender as the IT services and consulting market changes.

 

At the start of the analyst event, Tim White, chief strategy officer, explained that Fujitsu’s transition has been underway for a few years and has included allowing the Americas business to shed everything except services. As part of the overall transition, Fujitsu committed to expanding consulting while continuing to deliver on core IT services and modernizations. White noted that Fujitsu is roughly halfway through a three-year plan to grow services and the Americas region has already surpassed targets for 2024. For example, Uvance accounts for 37% of Fujitsu Americas’ business, above the 30% goal.

 

Critically, according to White, Fujitsu has not lost a step on technology advances or quality of services delivered, so clients and alliance partners continue to be well served. The change — the evolution — is primarily in how Fujitsu sees itself and its future. And that future is Uvance.
 

In TBR’s view, understanding Fujitsu’s existing and evolving business model, strategy and performance requires, perhaps surprisingly, a certain separation from the typical analysis, if only because of Fujitsu’s current transition.
 
While there is perhaps some uncertainty among analysts around Fujitsu’s brand, specific offerings and organizational structure, TBR sees no evidence that Fujitsu’s clients and technology alliance partners lack the clarity required to make decisions about Fujitsu’s capabilities, scale and skills.
 
Undoubtedly, Fujitsu’s brand in the Americas could use a significant boost — without which a ceiling could remain for the company’s growth — but the importance of marketwide brand recognition pales in comparison to a successful track record of delivering IT services and consulting, providing innovative solutions, and leveraging the latest technologies to solve clients’ problems.

 

Uvance Is “the Future State of Fujitsu’s Portfolio”

Fujitsu’s leaders stressed the centrality of Uvance in the company’s strategy and vision for IT services, consulting and technology. White described Uvance as “the future state of Fujitsu’s portfolio.” Asif Poonja, CEO of Americas, said, “Uvance is the center of our strategy.” At the center of Uvance is consulting. Fujitsu announced a goal to hire 10,000 consultants, but White and others explained that Fujitsu’s focus is not the number but the portfolio shift toward consulting while still serving clients who need core IT services and modernization.
 
Poonja noted that Fujitsu will focus on technology consulting, rather than McKinsey-style business consulting, playing to Fujitsu’s legacy technology strengths. In TBR’s view, technology-led consulting reflects the current demand among enterprise consulting buyers to infuse every consulting engagement with technology, a trend well underway before the hype began around generative AI (GenAI). Fujitsu’s leaders added that Uvance Wayfinders — essentially business and technology consultants — are able to pull together all of Fujitsu’s capabilities and offerings.

 

In TBR’s view, Uvance is the framework around the company’s “SaaS-like” business model, with the leaders using the term “SaaS-like” but recognizing the phrasing may need further refinement and/or explanation. Fujitsu will use platform-enabled services to drive higher-value conversations and engagements, led by the consultants the company is planning to hire and/or acquire. Fujitsu will sell IP when needed and drive managed services through its delivery capabilities. The shift in the Americas toward becoming an asset-light organization is the first step, and the second step is expanding consulting capabilities and scale. The third step is organizing delivery under a globally run P&L (which Fujitsu may have already begun).
 
Meanwhile, modernization services — moving from mainframe to cloud — remain the engine that keeps Fujitsu running. The company still has its own data centers outside the U.S. and still has plenty of clients running on mainframes, especially in its core verticals, such as public services. For TBR, Uvance’s success may depend on broader adoption of the asset-light Americas strategy, albeit at a pace that does not compromise quality or lose clients in core markets. Again, Uvance is the future state of Fujitsu’s portfolio.

Fujitsu Americas: “Leveraging Global Pillars to Grow”

As described by Poonja and White, Fujitsu in the Americas has persistently pared down its offerings to focus only on IT services and technology consulting, playing to Fujitsu’s strengths and concentrating on industries in which the company has proven capabilities, well-established relationships with clients and differentiated offerings.

 

Poonja added that, although Fujitsu Americas earns a small percentage of Fujitsu’s overall revenue, corporate leadership in Japan recognizes the importance of the Americas market and understands the challenges of building a more widely known brand. Poonja stressed that Fujitsu Americas would continue “leveraging global pillars to grow” while staying focused on regional strengths, specifically in government, manufacturing and AI.

 

In TBR’s view, Fujitsu Americas’ current state and trajectory align well with Fujitsu’s overall corporate strategy. The business aspires to be a top technology consulting company and appreciates the difference between being skilled at technologies and being able to make the business case for Fujitsu’s solutions. As an integral part of its strategy, Fujitsu Americas consistently pulls in the global company’s broader strengths and capabilities.

 

The use cases that Fujitsu’s leaders shared during the event highlighted the company’s technology, such as 5G and AI, and its deployable, offshore scale. Overall, Fujitsu Americas’ leadership presented a compelling story of evolution, strategic focus, early positive results and appreciation for current weaknesses. In contrast to analyst events dominated by marketing messages, Fujitsu maintained a substantive and clear-eyed atmosphere, with discussions centered on realistic expectations for Fujitsu Americas’ changing position in the IT services and consulting market.

Fujitsu’s Alliances: Doing the Hard Work While Taking Customer Zero to Another Level

In both the formal presentations and the informal discussions, Fujitsu’s leaders impressed TBR with the fullness and maturity of the company’s alliances strategy. The ecosystem has changed substantially in recent years, forcing companies to rethink their partnering strategies and more closely examine the best practices of peers, competitors and alliance partners. This shift has been an ongoing focus of TBR’s research, which has increasingly been used by alliance leaders at global technology companies as they undergo this transformation.

 

As part of this research, TBR has analyzed a wide range of alliance strategies and activities, from inadequate and underfunded to strategically thoughtful and exceptionally well managed. Fujitsu Americas, in TBR’s assessment, lands solidly in the latter category, based on the full range of investments and activities that Fujitsu’s leaders described with respect to their five strategic partners: Amazon Web Services (AWS), SAP, Microsoft, Salesforce and ServiceNow. (Note: See TBR’s ecosystem reports for more information.)

 

According to Fujitsu’s leaders, the next strategic partner will be determined by Uvance’s business strategy and continued evolution in the technology space, particularly AI. Keeping perspective on the challenges of managing technology partners, Fleur Copping, VP of Strategic Alliances in Regions, noted that every alliance relationship requires constant attention and, often, engagement-by-engagement reinforcement around Fujitsu’s offerings, capabilities and value proposition. Copping further acknowledged that Fujitsu needs to strengthen partner cosell activities. In other words, even when executing on all the best practices, alliance management remains a hard slog.

 

During the event, TBR noted two additional points on alliances — areas that are perhaps unique to Fujitsu. First, TBR has consistently heard that the customer zero approach to new technologies and offerings resonates with clients by bringing credibility and assurance. IT services companies, consultancies and their technology partners have also told TBR that the customer zero approach helps solidify alliances and can lead to innovations and new solutions. Fujitsu appears to be taking customer zero to the next level. For example, Copping described how Fujitsu brought its internal human resource management professionals to a client meeting about a joint Fujitsu-ServiceNow opportunity. The Fujitsu professionals told the client about their own experiences using the ServiceNow solution. This more personal touch resonated with the client and demonstrated the fullness of Fujitsu’s capabilities to alliance partner ServiceNow.

 

Second, Copping noted that because many of Fujitsu’s customers “don’t have as much of a voice” with the cloud vendors and software giants as the largest enterprises, Fujitsu can be an advocate for these small and midsize enterprises, amplifying their concerns and needs to the likes of Microsoft and SAP. TBR has not heard Fujitsu’s peers explicitly state this marketing message. As a matter of positioning, particularly with technology partners, Fujitsu’s message could be another way of gaining mindshare and differentiating from IT services and consulting competitors.

Consulting Is Harder Than It Looks; Fujitsu Has a Good Plan

White “unabashedly” characterized Fujitsu as a technology company, but emphasized using technology as a means to deliver services rather than making technology a commodity play. In the Americas in particular, Fujitsu would not “move away from our heritage as a technology company” but would more fully embrace consulting and the future portfolio of Uvance.

 

In TBR’s view, keeping Fujitsu’s heralded research, innovation and technology capabilities as foundational strengths makes strategic sense while leaving open questions around consulting. For example, one Fujitsu leader outlined the company’s AI sales approach in four basic steps:

  1. Get the client interested in Fujitsu’s technology
  2. Do a proof of concept with Fujitsu’s AI platform
  3. Allow the client to use a precommercial instance of the platform
  4. Bring in Uvance to develop a full solution, highly customized to the client

 

The fourth step, at a minimum, requires consulting skills, business knowledge and industry expertise, although many of Fujitsu’s peers include those elements throughout the sales and delivery process. Recruiting (or acquiring), retaining and managing consulting talent could affect Fujitsu’s corporate culture and undoubtedly will challenge Fujitsu’s leadership.

 

A further, and perhaps the most significant, obstacle for Fujitsu in the Americas will be gaining permission from clients to deliver consulting. By narrowing its scope to technology consulting — not the broad swath of strategy and operations consulting — Fujitsu plays to its own strengths, lessens the marketing load, and likely does not give up market share, as the company is unlikely to displace firms like McKinsey & Co. or Boston Consulting Group (BCG).

 

Part of gaining permission, in TBR’s view, will be positioning Fujitsu differently with its current clients, particularly with respect to the key personas interacting with Fujitsu professionals. During the event, one Fujitsu leader described current clients’ struggles to adopt GenAI as a combination of an inability to do the basic work of making their data usable, the uncertainty around return on investment, and a fear of running afoul of the law as new regulations come into effect.

 

Yes, Fujitsu can address all of these concerns, but these hurdles impact and reflect the responsibilities of three different personas within an enterprise. Fujitsu’s challenge will be to become the preferred technology consulting provider for all three personas. In short, consulting is harder than it looks, and TBR believes Fujitsu has the right vision, strategy and approach. We will continue to monitor the company’s ability to execute.

 

TBR’s ongoing coverage of Fujitsu includes dedicated quarterly reports and inclusion in appropriate benchmarks, market landscapes and ecosystem reports. Log in to TBR Insight Center to view all current research.

6G Will Not Be Like the Other G’s

TBR Perspective on 6G

6G is unlikely to look like the other G’s in terms of cycle length as well as scope and level of investment, as the beleaguered telecom industry continues to struggle with implementing and realizing ROI from 5G. The telecom industry must also contend with supporting new use cases and embedding AI, ML and sustainability into the fabric of the network while covering security gaps and preparing for a post-quantum cryptography world. Though there is tremendous brainpower (spanning the public and private sectors as well as academia) assembled to tackle these issues, growth prospects for the telecom industry continue to look challenging.
 
6G is shaping up to be an addendum to LTE and 5G, providing a new antenna overlay that supports net-new frequency bands, as well as enhanced spectral efficiency features and capabilities that provide further network performance and operational improvement. The missing link in the value equation remains how the telecom industry will monetize these new technologies beyond traditional mobile broadband (MBB) and fixed wireless access (FWA) services, and this lack of clear monetization threatens to relegate 6G to a continuation of what was observed during the LTE and 5G eras.
 
TBR continues to see no fundamental change or catalyst on the horizon that will bring CSPs more revenue. The primary incentive for CSPs to invest in 6G, therefore, will remain reducing the cost per bit to support growing data traffic. Meanwhile, the ROI for 5G still has not materialized, which will likely limit the appetite for and scope of investment in 6G. As such, TBR expects CSP capex investment in 6G will be subdued compared with previous G’s and deployment of the technology will be tactical in nature, which is a marked deviation from the multihundred-billion-dollar investments in spectrum and infrastructure associated with the nationwide deployments of each prior cellular era.
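 
Because cost per bit, rather than new revenue, is the lever TBR expects to drive investment, a simple hypothetical helps illustrate the dynamic. The spend and traffic figures below are invented for illustration only and are not TBR estimates.

```python
# Toy cost-per-bit calculation: flat network spend against growing data traffic.
# All figures are hypothetical and for illustration only.

annual_network_cost_usd = 10e9      # assumed combined capex and opex for a large CSP
traffic_exabytes = [50, 60, 72]     # assumed annual traffic, growing roughly 20% per year

BITS_PER_EXABYTE = 8e18

for year, eb in enumerate(traffic_exabytes, start=1):
    gigabits_delivered = eb * BITS_PER_EXABYTE / 1e9
    cost_per_gigabit = annual_network_cost_usd / gigabits_delivered
    print(f"Year {year}: {eb} EB carried -> ${cost_per_gigabit:.4f} per gigabit delivered")
```

With revenue roughly flat, carrying about 20% more traffic each year on the same spend is sustainable only if the cost of delivering each bit falls at a comparable rate, which is the efficiency case for 6G investment described above.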
 
Additionally, the 6G cycle may be significantly longer in duration than prior cellular generations due to the exponential increase in complexity inherent in these systems and the pace of data traffic growth, which has been slowing.
 
Against this backdrop, private cellular networks represent a real and significant threat to CSPs, as enterprises can extract most, if not all, of what they need from these networks without requiring CSPs in the value chain. CSPs’ edge assets continue to be considered a key vector for CSPs to reassert themselves in the market, but this view overlooks the alternative paths that enterprises and hyperscalers have to bypass CSPs and get what they need (e.g., real estate, access to power and fiber) at the edge layer.
 
The 5G cycle is now 5 years old, and the telecom industry is still struggling to adopt and deploy virtualization, open RAN and network slicing, much less a 5G standalone (SA) network architecture. This reality implies expectations for 6G will need to be tempered further. TBR believes 6G (at least the first phase of 6G, which will be represented in the 3rd Generation Partnership Project’s [3GPP] Release 21 standards) will only bring spectral and cost-per-bit efficiency improvements and potentially some net-new enterprise-specific features and capabilities. 6G is unlikely to bring any more significant or profound outcomes than 5G, at least not from CSPs.
 
TBR believes hyperscalers, government entities (especially the defense sector) and large enterprises are likely to reap the most benefit from 6G. For CSPs, 6G is likely to primarily be an infill solution to address complex environments and enhance network capacity and speed for existing MBB and FWA offerings.
 
Taken together, 6G will ultimately happen, and commercial deployment of 6G-branded networks will likely begin in the late 2020s, but it remains to be seen whether 6G will be a brand only or a legitimate set of truly differentiated features and capabilities that bring broad and significant value to the global economy. Either way, the scope of CSPs’ challenges is growing, while new value continues to be created outside their purview or goes over the top of their pipes.
 

Watch On Demand: TBR Principal Analyst Chris Antlitz discusses the Looming Business Disruption Among Operators and Vendors as They Strive to Change from Telco to “Tech-co” in the Coming Years

Impacts and Opportunities in 6G

Upper-midband Spectrum Is in Play for 6G

After an initial belief several years ago that 6G would leverage millimeter wave and terahertz spectrum, the wireless technology ecosystem has settled on the upper midbands, specifically in the 7GHz-24GHz range (also known as the Frequency Range 3 [FR3] tranche). Within FR3, 7GHz-15GHz is considered to be the golden range for 6G as it has the best balance between coverage and capacity and there is approximately 1600MHz of total bandwidth that could be made available in the U.S.
 
However, one of the biggest issues with these “golden bands” is the need for CSPs to coexist with incumbent users, such as government entities and satellite operators, which utilize some of these channels for various purposes; that spectrum would need to be cleared, refarmed or shared before CSPs could use it for cellular communications. The telecom industry already has some experience with shared spectrum through Citizens Broadband Radio Service (CBRS), which operates in the 3.5GHz band, so there is a preexisting framework and mechanism in place (i.e., the Spectrum Access System) from which to begin establishing a spectrum sharing system for these new bands.
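 
CBRS’ three-tier priority scheme (incumbent access, Priority Access License and General Authorized Access) is the sharing mechanism most often cited as a starting point. The sketch below is a highly simplified illustration of that tiering logic only; it is not the actual Spectrum Access System protocol, and the channel names, occupancy and grant rule are invented for demonstration.

```python
# Simplified illustration of CBRS-style tiered spectrum access.
# Not the real Spectrum Access System (SAS); tiers and grant rules are reduced to a toy check.

TIER_PRIORITY = {"incumbent": 0, "PAL": 1, "GAA": 2}  # lower number = higher priority

def can_grant(channel_users: list, requested_tier: str) -> bool:
    """Grant a channel only if no higher-priority tier already occupies it."""
    requested_rank = TIER_PRIORITY[requested_tier]
    return all(TIER_PRIORITY[user] >= requested_rank for user in channel_users)

# Hypothetical occupancy in a 3.5GHz-style shared band
channel_occupancy = {
    "ch1": ["incumbent"],   # e.g., a protected federal user is active
    "ch2": ["GAA"],         # only general-authorized users present
    "ch3": [],              # idle
}

for channel, users in channel_occupancy.items():
    print(channel, "-> PAL grant allowed:", can_grant(users, "PAL"))
```

A sharing system for FR3 would face the same basic grant decision at far greater scale, with federal and satellite incumbents occupying the protected tier.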
 
Ultimately, TBR believes that 6G will end up leveraging a mix of spectrum tranches, with midband, upper-midband and mmWave frequencies all in play. Carrier aggregation and other frequency-combination technologies, as well as advancements in beamforming and endpoint devices, make these spectrum bands perform better when working together. Additionally, FR3 spectrum is poor at penetrating walls. Given that around 80% of wireless traffic is generated indoors — a statistic that is unlikely to change materially in the 6G era — FR3 bands would need to be complemented with lower bands to penetrate walls and provide optimal coverage and capacity.
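 
The coverage side of this trade-off follows directly from free-space path loss, which grows with 20log10 of frequency. The comparison below is an idealized free-space calculation only; it ignores wall penetration, foliage and other real-world losses, and the band choices are simply representative of today’s midband, the FR3 “golden range” and mmWave.

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB for a distance in km and a frequency in GHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz * 1000) + 32.44

# Compare representative bands at a fixed 1 km link distance (free space only)
for freq_ghz in (3.5, 10.0, 28.0):   # midband, FR3 "golden range", mmWave
    print(f"{freq_ghz:>5.1f} GHz: {fspl_db(1.0, freq_ghz):.1f} dB path loss at 1 km")
```

Roughly 9 dB separates 3.5GHz from 10GHz in free space, versus about 18 dB to 28GHz, which is why the upper midband can deliver far more bandwidth than today’s midband while avoiding the steep coverage penalty of mmWave, provided it is aggregated with lower bands for indoor coverage.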

Nonterrestrial Networks (NTN), aka Satellite Connectivity, Enters the Mainstream

The NTN domain is flourishing, and satellite connectivity will be a mainstream technology for both businesses and consumers by the end of this decade. Satellite-provided connectivity will cover most of the Earth (and nearly the entire human population) with at least basic text messaging services, though some NTN providers will also provide high-speed broadband services as well as a range of other communications services, such as voice, just like a traditional CSP.
 
The most disruptive impact of NTN will be closing the cellular coverage gap and reducing the digital divide. Approximately 10% of Earth’s surface and 5% of the global human population, or around 400 million people, still lack cellular network coverage, and satellites can close this gap relatively quickly and at a significantly lower cost than building out terrestrial macro base station sites in rural and remote areas. The ability to provide truly global network coverage has created a new paradigm in the telecom industry, shaping end-user expectations and pushing CSPs to align with (and increasingly compete against) NTN providers.

FWA Is Not Getting the Attention It Deserves

The mobile industry continues to largely view FWA as an ancillary offering, and the use case is not receiving the level of attention and innovation that it should given FWA’s resounding success in the market. Some attendees noted that current standards do not adequately factor in and focus enough on FWA and that networks are not architected to optimally support this use case. Spectral efficiency technologies tailored to optimize FWA traffic could free up significant capacity on existing networks that could be utilized for other purposes.
 
There are also energy-efficiency considerations for FWA. Mobile network operators (MNOs) have a vested interest in pushing standards bodies and network vendors to innovate on FWA because FWA margins are low and there is room to alleviate some of this margin impact by applying technological innovations. In addition, MNOs want standards bodies and vendors to focus on architecting cellular standards to support unlicensed spectrum bands so that network coverage and capacity can be enhanced with minimal investment by aggregating licensed spectrum with unlicensed spectrum. The 6GHz band is especially pertinent to these considerations.

The Energy Problem Has No Easy Fix

Though the wireless technology ecosystem will continue to eke out gains in energy efficiency and performance, an as-yet-undetermined paradigm shift will be required to fundamentally break the linear relationship between network performance and energy usage. Additionally, AI is unlikely to help address this issue when factoring in the net energy impact because AI workloads are inherently power hungry.
 
Given this rising demand for energy, the broader economy and public sector should, in addition to driving further reductions in the cost per bit, focus more on innovations in energy production and distribution, such as more deeply exploring small modular [nuclear] reactors (SMRs) and cold fusion, to produce and widely distribute high-output, sustainable, low-carbon-footprint energy. Said differently, it will become increasingly difficult to squeeze energy efficiency out of network infrastructure, so focusing on creating cleaner energy at greater scale is a sounder long-term strategy than emphasizing lower net energy utilization to achieve sustainability goals.

AI and ML Will Initially Be Leveraged for Network Optimization

AI and ML will come into the network domain slowly. Network optimization-related use cases will likely be the initial focus areas, as AI and ML can provide significant outcomes by running complex simulations, such as ray tracing, propagation modeling and channel management (e.g., spectrum access sharing and dynamic spectrum sharing) at scale.
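 
To give a sense of what optimization by simulation means in practice, the toy sketch below brute-forces a single cell parameter against a simplified log-distance propagation model. Everything here (the propagation constants, the sampling grid and the scoring function) is invented for illustration; production AI- and ML-driven optimizers work across thousands of cells and far richer channel models.

```python
import math

def received_power_dbm(tx_power_dbm: float, distance_m: float) -> float:
    """Toy log-distance propagation model (path loss exponent 3.5, arbitrary 40 dB offset)."""
    return tx_power_dbm - (40 + 35 * math.log10(max(distance_m, 1.0)))

def coverage_score(tx_power_dbm: float, cell_radius_m: int = 500, threshold_dbm: float = -100) -> float:
    """Fraction of sampled distances whose received power clears the threshold."""
    samples = list(range(50, cell_radius_m + 1, 50))
    covered = sum(received_power_dbm(tx_power_dbm, d) >= threshold_dbm for d in samples)
    return covered / len(samples)

# Brute-force search: trade coverage against transmit power (a crude stand-in for energy)
candidate_powers_dbm = range(20, 47, 2)
best_power = max(candidate_powers_dbm, key=lambda p: coverage_score(p) - 0.005 * p)
print(f"Best transmit power under this toy objective: {best_power} dBm, "
      f"coverage {coverage_score(best_power):.0%}")
```

The appeal of applying AI and ML here is replacing this kind of exhaustive search with learned models that generalize across thousands of sites, frequencies and parameters at once.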
 
Though AI and ML promise a higher degree of automation to accomplish optimization-related tasks, there is concern that the amount and cost of energy required to run these simulations will outweigh the benefits. There is some validity to this concern, but attendees were confident there will be pockets of use cases or workarounds that will mitigate energy consumption and make networks more resilient and higher performing by leveraging AI and ML.

Western Governments Need to Be More Proactive to Keep Their Countries at the Forefront of Innovation

Evidence suggests the West is falling behind China in key technologies, most notably in 5G SA, 6G, quantum computing, SMRs and other areas, despite Western governments allocating unprecedented sums of fiscal and monetary support to the technology sector and broader economy during and immediately after the COVID-19 pandemic. Governments, therefore, will need to take a more assertive approach rather than setting big-picture guidelines and relying on the private sector to figure things out. Since the current model is not yielding the desired results, a change will be needed to alter the trajectory. Greater reliance on hyperscalers will likely factor into the equation for a solution.
 
The most glaring deficiency in the Western world is the lack of regulatory clarity and a coherent policy agenda. For example, the U.S. Federal Communications Commission has been restrained and restricted from advancing important spectrum policies, and special interests have been creating encumbrances that slow down or prevent the wireless technology ecosystem from moving forward optimally (e.g., inconsistent policies around private spectrum and the use of shared bands like 6GHz create harmonization challenges and disincentivize attaining critical mass in the broader industry).

Scope of Government Support for the Telecom Industry Will Likely Increase

The persistent lack of ROI to justify private sector investment in 6G (and cellular networks more broadly) will ultimately push governments deeper into the telecom industry, prompting them to increase the scope of their involvement in the wireless technology ecosystem and to make these support structures more embedded in nature. During the first half of the 5G cycle, governments around the world pumped many hundreds of billions of dollars in aggregate into their respective domestic technology sectors via various stimulus programs, which provided direct or indirect support in the form of low- or zero-interest-rate loans, subsidies and other market support mechanisms.
 
Additional government backing will be required to enable the full benefits of 6G to come to fruition. Governments have a vested interest in supporting the telecom industry and the broader technology sector, as these sectors provide innovations of societal and national security importance and serve as foundational infrastructure supporting long-term economic development. TBR expects governments in technology-forward countries (especially the U.S., China, Japan and South Korea) and regional blocs (e.g., the European Union) to continue underpinning R&D programs, subsidizing and/or directly paying for infrastructure deployment, and backstopping industry players that relate to national security concerns.
 
This model of industry stimulation was witnessed at unprecedented scale during the COVID-19 pandemic and now serves as a template for further government involvement. Workforce development has also emerged as a top-of-mind initiative for some governments as a means of preparing domestic workforces to handle new technologies and of offsetting the negative economic externalities these technologies create (e.g., labor displacement from AI and how it can be mitigated).

Conclusion

6G will happen one way or another, with pioneering CSPs likely to commence commercial deployments and services branded as 6G by 2030 (as originally expected within the confines of 10-year cellular generation cycles), but the wireless technology ecosystem appears to be taking on much more than it can handle.
 
In addition to addressing the evolution of 3GPP standards for 6G, the ecosystem must also incorporate AI, ML, quantum and other nascent technologies as well as meet societal objectives, such as carbon zero, to align with theoretical expectations for the new G and the new use cases the technology is expected to enable.
 
The requirements for 6G are driving up complexity and are likely to cause the ecosystem to fall short of delivering these outcomes. Greater investment, collaboration and alignment across the public and private sectors, as well as with academia, will be required to address these challenges and set the telecom industry on a better path.