Ericsson’s Biggest Customers and Partners (Operators) Are Holding it Back

2025 Ericsson Industry Analyst Event, Boston, Sept. 11, 2025 — A select group of industry analysts gathered at Convene in Boston to hear from Ericsson leaders, partners and customers about the company’s Enterprise business unit’s strategies, capabilities and opportunities in domains such as private cellular networks (PCNs) and network APIs, with AI and 5G monetization serving as themes that ran across the various executive presentations.

TBR perspective

Ericsson struck a cautiously optimistic tone at its annual industry analyst event, which focused on the Enterprise segment. The company acknowledged struggles and highlighted learnings and adaptations, especially pertaining to Vonage and the new Aduna joint venture for network APIs, which will set the stage for better outcomes moving forward. Ericsson is uniquely positioned to capitalize on the still-nascent PCN opportunity that is developing globally, but the vendor’s go-to-market encumbrances continue to constrain its growth prospects.

Specifically, the range and depth of Ericsson's partnerships across the broader PCN ecosystem remain limited compared to other players in the domain, especially frontrunner Nokia, and Ericsson's go-to-market channel beyond communication service providers (CSPs) for enterprise growth areas is similarly underdeveloped. Nokia decided several years ago to reduce its reliance on CSPs and made a concerted effort to sell its PCN solutions directly to enterprises and through a robust roster of channel partners, including global systems integrators (GSIs), niche systems integrators (SIs), VARs and CSPs.

Ericsson has little control over one of its biggest challenges: CSPs are difficult to deal with, hesitant to work together for competitive reasons, and slow to move. Compounding this, CSPs are Ericsson's largest customer cohort and partner channel; TBR estimates that more than 97% of Ericsson's total company revenue in 2024 stemmed, directly or indirectly, from CSPs. These are key reasons why TBR expects Ericsson's enterprise revenue to lag its potential in areas such as PCN, network APIs and communication applications.

Impact and Opportunities

FWA has significantly more room to run

Ericsson estimates that approximately 25% of all mobile broadband traffic globally now runs over fixed wireless access (FWA) and that 18% of premises globally will utilize FWA within five years, representing unprecedented growth considering FWA only began to take off in 2020. These statistics align with what TBR has been saying for several years: The market opportunity for FWA is much larger and more vibrant than the industry originally thought.

For example, TBR estimates FWA is technologically and economically feasible to support up to 50% of residential premises in the U.S. The opportunity is bolstered by FWA's rapid time to deployment and strong value proposition for end users, especially compared to other broadband access media such as fiber-to-the-premises (FTTP), which is laborious and expensive to build out and carries higher service costs for the end user. Business premises can also be strong candidates for adopting FWA. Ericsson is arguably the largest beneficiary of the FWA movement on a global basis in terms of revenue generated by selling FWA enablement products and services to CSPs (infrastructure only, not including customer premises equipment [CPE]).

Awareness gap in market slows adoption

On several occasions at the event, Ericsson leaders mentioned that there is an “awareness gap” in terms of the efficacy and outcomes that 5G-enabled technologies are achieving, especially as it pertains to PCN for businesses and the public sector. For example, Newmont, a Tier 1 mining company that adopted a private 5G network from Ericsson to remotely control its dump trucks at the mine, has realized a tenfold increase in coverage and a significant corresponding boost in productivity.

Such outcomes are compelling to enterprises looking to transform their operations to drive revenue growth and/or reduce costs. Though word is gradually getting out that early adopter enterprises are achieving strong results from new technology solutions, more needs to be done. Greater emphasis on partnerships with companies that have the ears of senior management at enterprises, most notably GSIs, is a key way for Ericsson to promote the benefits of these solutions.

Differentiation, both in technology and marketing, needs to be addressed

Ericsson needs to do more to differentiate its technology solutions, and this includes ensuring that differentiation is well messaged to the market. Though some consolidation has occurred in the PCN, network API and communications applications domains, there remains significant competition and fragmentation, with many vendors playing in the market that are not well differentiated. This lack of differentiation is likely another reason Ericsson is struggling to achieve outsized growth in these nascent market areas. TBR did not hear a compelling narrative as to how Ericsson is differentiating itself in terms of technology and partnerships in these key, high-growth market areas. It remains to be seen if Ericsson’s NetCloud AI-powered cloud management and orchestration solution for PCN becomes a key differentiator once it is scaled up to large deployments of the Ericsson Private 5G solution.

Private cellular networks channel remains underdeveloped

Ericsson's PCN revenue is growing at a relatively strong double-digit rate, but the business is less than 60% of the size of frontrunner (outside of China) Nokia's PCN revenue, according to TBR's Private Cellular Networks Vendor Benchmark. This differential is primarily due to Ericsson's underdeveloped go-to-market approach and channel relative to Nokia. This is an issue TBR has identified and written about for the past few years, but Ericsson seems to have made minimal progress.

Ericsson is partnering extensively on PCN, but according to TBR's research, most of that activity is driven by CSPs — even though enterprises and the public sector primarily work with GSIs, niche SIs, VARs and government contractors on digital transformation-related initiatives. Ericsson is also partnering with non-CSPs, including a new agreement with NTT DATA that reflects the kind of deeper, broader partnerships TBR believes Ericsson should pursue. Surface-level partnerships are essentially reseller arrangements, whereas some vendors have robust engagements with key GSIs and other types of partners. Nokia's relationships with EY and Kyndryl are two such examples.

Connected laptops are a niche market, not a mass market opportunity

Ericsson has jumped on the bandwagon of 5G-connected PCs, and representatives from T-Mobile and HP Inc. spoke at the event about why this is a unique market opportunity. Though 5G-connected PCs sound like a feature end users would welcome, TBR believes the additional cost of embedding the 5G modem into the PC, plus the subscription fees that would need to be paid to the service provider, sharply limits who would find enough value in the product and corresponding service to actually pay for them. Most people already have smartphones, and mobile hotspot tethering essentially makes the computer a cellular-connected device at no additional cost.

To be sure, 5G-connected PCs offer a capability the mass market would value, but once extra cost is involved, only a small fraction of that market is likely to pay for the experience. Some enterprises, select types of SMBs (e.g., construction firms) and small office/home office (SOHO) workers (e.g., real estate agents and other road warriors) would be strong candidates for 5G-connected PCs, but this is more of a niche market than a mass market opportunity. As such, TBR suggests network and PC vendors reassess their addressable market projections to align more closely with observed user behavior and price-for-value considerations.

Conclusion

Ericsson has competitive technology, but its overreliance on CSPs to purchase that technology and/or scale it into end markets remains a weakness that will continue to hamper the company’s ability to participate more significantly in key growth domains, such as PCN. On the network API and communications application side, progress is being made and some scale is occurring, but Ericsson and its CSP partners are up against relatively fast-moving, well-resourced and more specialized entities, most notably hyperscalers and other digital-native players. Addressing the telecom industry’s weaknesses and shortcomings in these market areas will require more investment in channel development and more robust strategic partnerships with entities such as government contractors, GSIs and niche, domain-specific SIs.


Ericsson's dependence on slow-moving CSPs is a risky proposition, especially when it comes to driving growth in key areas, including PCN, network APIs and communication applications. Ericsson should take a page out of Nokia's playbook for aligning its opportunity areas with non-CSP players to accelerate growth. Specifically, Ericsson should take a closer look at how Nokia structured its PCN business, especially its channel ecosystem, to reduce its reliance on CSPs. This has proved to be the most effective way to participate in opportunities arising in the enterprise end markets Ericsson is targeting.

Konecta Hybrid Customer Experience Combines Human Expertise with Advanced AI and Digital Capabilities

Konecta Analyst Day, Madrid, May 28, 2025 — Konecta invited industry analysts to the 20th annual ExpoContact, a company-organized event that welcomed more than 1,000 industry leaders, including clients, technology partners and organizations that are looking to improve competitiveness by modernizing customer management. In the morning, Konecta held a special in-person and virtual event for industry analysts in which Konecta executives, clients and technology partners discussed in detail the company’s vision, digital portfolio, and generative AI (GenAI) and agentic AI approach. TBR attended Konecta’s first analyst day event and was impressed by not only the openness of the company and its willingness to communicate with the analyst community but also the closeness of its relationships with partners and clients.

Konecta’s vision and ambition are to become the trusted technology, data and operations partner for clients’ agentic AI transformations

During the event, Konecta CEO Nourdine Bihmane shared details about Katalyst 2028, the company's three-year plan to become a technology, data and operations partner. Essentially, the company's goal is to provide AI-driven hybrid customer experience (CX) solutions that combine human expertise with advanced AI and digital capabilities. The plan includes four steps: 1) accelerating the adoption of data, GenAI and agentic AI; 2) increasing digital growth; 3) strengthening the partnership ecosystem; and 4) expanding global reach. The company raised €150 million (or $176 million) to fund the transformation plan. Konecta's goal is to increase revenue to €2.5 billion (or $2.9 billion) by 2028 and generate between 30% and 40% of total revenue from AI and digital services.

To achieve these targets, the company is training more than 7,100 people on role-specific GenAI technologies and offering proprietary GenAI solutions. Konecta is also launching a new global Digital Business unit with digital offerings and 2,500 employees, including more than 300 trained sales leads. Konecta’s digital services revenue was €150 million (or $176 million) in 2024, and the company plans to increase revenue in the segment to €250 million (or $293 million) in 2027, representing a CAGR of 20%.

Expanding its partnership ecosystem will serve as a lever for future growth, such as by establishing strategic partnerships around GenAI with Google Cloud and Uniphore, and with STC Group in the Gulf Region around GenAI-powered CX solutions. Konecta’s partner ecosystem combines technology leaders, such as hyperscalers, cybersecurity providers, GenAI and large language model (LLM) vendors, hyperautomation and service platform solutions providers, and consulting companies to enable coinnovation and codevelopment with clients.

Notably, Konecta's open ecosystem has been designed around joint IP, shared outcomes and scalable transformation models. Partnerships among IT services providers and technology vendors are a leading lever for portfolio expansion, and Konecta is moving in a similar direction alongside multiple IT services providers. According to TBR's 1Q25 IT Services Vendor Benchmark, "The roles of alliance partners are changing in the rapidly evolving professional services market. During the past several years, multiple professional services companies took a technology-agnostic approach to offer flexibility to buyers that were wary of vendor lock-in.

As macroeconomic pressures force buyers to examine their existing technology stacks to ensure they maximize ROI, these buyers are consolidating vendors, compelling professional services companies to develop a preferred, if not exclusive, list of alliance partners. … Vendors are leveraging partners to launch agentic AI offerings to automate tasks and drive operational efficiency, and GenAI offerings to boost productivity and create cost efficiency, encouraging adoption by solving clients’ particular business challenges. NVIDIA-enabled agentic AI solutions dominated alliance announcements during the quarter, including new joint offerings with Accenture, Capgemini, Cognizant, IBM and Wipro.”

Konecta plans to expand by establishing a sales organization that is structured for global reach and local engagement. Notably, the company is opening new delivery centers in Bengaluru, India, and Cairo and is establishing five new AI Global Competence Centers, located in India, Egypt, Spain, Colombia and the U.S., to diversify service delivery capabilities and expand client reach. Such activities will help Konecta improve its global revenue distribution, as presently the company’s revenue is generated mainly from Europe and Latin America, while English-speaking markets and the U.S. contribute approximately 4% of total annual revenue, though the company plans to increase this figure in the coming years. In January Konecta established Egypt as its regional headquarters to serve clients in the Middle East, Africa, Europe and the Americas and announced the opening of a global delivery center and global Center of Excellence (CoE) for GenAI in Cairo.

The company is investing $100 million over the next three years and is planning to hire approximately 3,000 people with digital and technical skills to provide AI solutions, digital transformation, cybersecurity, big data and analytics, IoT, technical support, and multilingual customer services in English, French, German, Italian and Spanish. Konecta is also partnering with the Information Technology Industry Development Agency in Egypt to provide training and upskilling programs for local people, creating future employment opportunities for skilled talent. Konecta’s partnership with Uniphore, announced in November 2024, to deliver industry-specialized AI solutions that enhance CX with hyperpersonalized interactions will augment Konecta’s client reach in the U.S. and U.K. and contribute to revenue expansion in English-speaking markets.

Konecta provides experience services and digital solutions around service design, technology implementation and process optimization
Headquartered in Madrid, Konecta is a provider of transformative experiences and an expert in AI-enabled CX solutions. Konecta has approximately €2 billion (or $2.3 billion) in annual revenue, 120,000 employees across 26 countries and 5,000 digital experts, and supports more than 30 languages. The company offers customer and employee experience services, digital marketing offerings, and products and solutions, such as CX automation and analytics, all underpinned by AI and GenAI services as well as advisory and consulting services. Konecta expanded in size and client reach during 2022 through its merger with Comdata, an Italy-based BPO services provider. Comdata had 50,000 employees and annual revenue of approximately €980 million (or $1.15 billion) generated from services such as customer care, back-office and credit management. Since mid-2023 the merged companies have operated under the Konecta brand and currently serve more than 500 blue-chip clients. These clients are spread across Europe, Latin America, North Africa, the Middle East and Asia and have an average tenure of more than 20 years, underscoring Konecta's emphasis on long-term relationships.

A renewed management team will be a critical lever for successful execution of the Katalyst 2028 plan. Notably, over the past several months, Konecta has attracted experienced executives with strong technology and industry expertise from its France-based peer Atos, which has been challenged by attrition due to a turbulent and prolonged transformation initiative. Bihmane, who became Konecta's CEO in April 2024, previously worked at Atos for more than 23 years, including as global CEO and head of Atos' Tech Foundations business line. In March Adil Tahiri was appointed head of Advisory and Professional Services; Tahiri's 21-year tenure at Atos included roles as advisor to Atos' CEO and head of the CTO office. Oscar Verge, also a longtime Atos leader with 20 years of experience at the company, joined Konecta in October 2024 as chief AI deployment officer.

Konecta is shifting from providing simple automation to orchestration, and AI is a core enabler

While according to Tahiri, “Agentic AI is in the nascent phase,” Konecta’s ambition is to actively transform the industry and create differentiation through digital services. Konecta attracts clients by offering intelligent business orchestration, applying new levels of creativity, such as through real-time and context-aware hyperpersonalized experiences across channels, and orchestrating human and specialized agent interactions. The company provides clients with robust execution through composable agentic platforms and strategic technology and business advisory capabilities to guide clients through their transformations and speed up time to value.

As clients typically have a multitude of business applications, each with its own data repository, the proliferation of agents creates complexity. Konecta is moving from simple automation to orchestration, with agentic AI that adapts to dynamic application landscapes and automatically understands, reasons and generates code to extract data and support decision making. Investing in orchestration capabilities, IP development (such as solution accelerators and methodologies) and specialized talent enables Konecta to address clients' needs around managing their agentic AI environments.

Shifting from utilizing industry LLMs to employing customer-specific LLMs enables Konecta to generate business value from customer-specific data. Delivering high-performing and personalized agentic AI based on real-time, proprietary customer data and workflows enables Konecta to benefit from contextual data intelligence and establish trust with clients. The complexity of digital transformation is pushing Konecta to establish a strategic partner ecosystem, including foundational AI providers and niche domain experts, that is complementary to the company’s expertise.

Egypt is an attractive location for IT services providers
Konecta's expansion in Egypt is driven by the availability of talent with language skills and technical capabilities and will support the company's global revenue diversification. However, IT services providers such as Accenture, Capgemini, Atos, IBM and Deloitte are utilizing Egypt for global service delivery, are planning to expand their resources in the country, and are actively working with government bodies and local educational organizations to develop in-demand skills to support future recruitment. Intensified recruitment interest from IT services providers might challenge Konecta's expansion activities in the country. For example, in April Capgemini announced it will open an AI CoE in Cairo to enable GenAI and agentic AI adoption for clients globally. The center will consist of data scientists, architects, product engineers and project managers. Capgemini plans to double its headcount in Egypt to approximately 1,200 professionals in digital transformation and innovation by the end of 2025 and to expand to 3,000 people through 2026.

Offering GenAI and agentic AI solutions in an open platform increases Konecta’s value proposition

Konecta provides clients with an industrialized, modular and complete GenAI stack that comprises three solutions — Insights for strategic CX intelligence; Co-pilot for real-time agent augmentation; and Auto-pilot for seamless, AI-driven engagement. The Insights solution converts customer interactions into actionable intelligence, automatically mines 100% of voice and chat logs, correlates them to KPIs and identifies agent-level coaching insights to forecast outcomes. Co-pilot provides agents with contextual AI to uplift conversations; summarizes prior interactions and customer context; and provides intent recognition, nudges and compliance suggestions during calls. Auto-pilot enables conversational automation of activities and escalates exceptions to human agents. Offering GenAI and agentic AI capabilities in the Konecta platform, which is based on open and modular technology stacks, and positioning the solutions as an extension of, not a replacement for, human-delivered services improves the company's value proposition around deriving productivity gains and expands its client reach.

Investing in GenAI-enabled solutions creates growth opportunities for Konecta, given ongoing buyer interest in adopting GenAI solutions. According to TBR’s November 2024 Digital Transformation: Voice of the Customer Research, “GenAI continues to influence digital transformation (DT) budgets as buyers grapple with juggling hype, ROI and FOMO (fear of missing out). With over three-quarters of respondents combined allocating 26% or more of their DT budgets to GenAI two years after the technology came on the market, it is evident that buyers are eager to explore the possibilities the technology can bring. We do not expect this trend will slow down anytime soon given that the majority of respondents plan to increase their GenAI spend by 10% or more in the next year.”

As macroeconomic pressures force buyers to examine their existing technology stacks to ensure they get the most ROI, Konecta’s GenAI stack demonstrates material outcomes for clients. For example, the Insights solution increases revenue conversion by up to 20% and decreases the ramp-up time for new agents by 20%. The Co-pilot solution enables 98% accuracy in all European languages and a decrease of 30% to 50% in average handling time in managing email and written communication. The Auto-pilot solution automates around 50% of inbound contacts on voice and written channels and reduces cost of interaction by 30%. Demonstrating ROI is critical for solution adoption.

According to TBR’s 1Q25 Digital Transformation: Analytics Professional Services Benchmark, “Enterprises are juggling fear, hype and hope surrounding the potential impact of generative AI (GenAI) on their operating models. This has heightened their expectations for vendors to deliver timely ROI tied to ongoing business process and/or IT modernization transformation, as the implications of technology complexities extend beyond data science, thus creating opportunities for vendors that can manage broad organizational relationships.”

In conclusion

According to TBR’s 2Q25 Accenture Earnings Response, “Transforming the CX domain will remain low-hanging fruit for the next two to three years, offering companies a clear path to apply agentic AI systems for productivity gains. This presents Accenture with a blank canvas to showcase its capabilities at scale and strengthen its position among chief marketing officer buyers. As CX evolves into experience operating systems, powered by continuous feedback and contextual inference, Accenture will need to consider applying multidomain context integration in an era when hyperpersonalization has become table stakes, at least from a communications standpoint.”


Konecta is moving in the right direction, and disciplined execution of its strategic initiatives and investments in platform-based services will enable the company to reach its revenue growth target of €2.5 billion (or $2.9 billion) by 2028. While Konecta's competitors are making similar investments, the company will succeed thanks to its emphasis on helping clients reimagine operations, experiences and outcomes with AI, platforms and human creativity, as well as its established local client reach and best-shored service delivery model.

Fujitsu Eyes Americas Growth with Alliances and Innovation

On June 5 and 6, 2025, Fujitsu hosted around 25 analysts and advisers for an Executive Analyst Day event in Santa Clara, Calif. Fujitsu leaders from across the globe spoke on various parts of the company's business, and Fujitsu subject matter experts demonstrated solutions currently in research and development or recently piloted with a few Fujitsu clients. The following includes insights from the event and analysis based on TBR's ongoing research around Fujitsu and the broader IT services sector.

Fujitsu is putting the pieces together

The Fujitsu story keeps getting better. A year ago, TBR was struck by Fujitsu's grounded and comprehensive approach to artificial intelligence. Later in 2024, Fujitsu's Uvance initiative led TBR to conclude that the company had "the right vision, strategy and approach. We will continue to monitor the company's ability to execute." And this June Fujitsu added a few more pieces, continuing to demonstrate capabilities, vision and a grounded approach that should garner steadily improving results, especially in the Americas. Those pieces? An enhanced alliances strategy and organization, ambitious and well-supported acquisition and investment capabilities, and an improving story around Uvance. Key components continue to evolve, such as Fujitsu's AI offerings and global delivery operations, both of which support success in the Americas and contribute to a strengthening brand. In TBR's view, Fujitsu has positioned itself, perhaps quietly compared to larger peers in the Americas, as a potentially disruptive force in those arenas where it chooses to compete.

Fujitsu positions its Americas practice and startup strategy as growth pillars

Speaking specifically about the Americas practice, Fujitsu leaders noted a strong performance in 2024 (a 7% profit margin, the highest among Fujitsu's international regions), intentions to partner and acquire more aggressively, and a sustained focus on seven countries and specific U.S. states. Asked how Fujitsu Americas keeps up with quickly changing market conditions, Fujitsu leaders mentioned quantum capabilities, innovation and partnering with ServiceNow, while keeping the company focused entirely on services.

Breaking down various revenue streams, Fujitsu leaders said roughly 40% of revenue comes from Application Development Management and Application Solutions; 30% from partnering with SAP, ServiceNow and Oracle; 20% from workplace solutions and cloud migrations; and 10% from data and AI. In TBR’s view, Fujitsu Americas has built the solid foundation needed to outpace the overall company in terms of growth, provided the Wayfinders initiative gains traction and Fujitsu Americas stays focused on core industries in which Fujitsu has permission to play. Acquisitions and increased investments in alliance partnerships should be fuel added to a well-built fire.

Fujitsu executives highlighted the company’s startup strategy and some recent success stories, including a Japan-based AI biopharma company and an Italy-based AI governance company. Overall, Fujitsu’s startup strategy follows one of two paths: Either Fujitsu takes its own IP and licenses it to a startup (or invests in a startup rather than collecting a licensing fee), allowing the startup to test the market, likely reaching clients not typically within Fujitsu’s target market; or Fujitsu partners with a startup, providing funding, tools, platforms and entrée to Fujitsu’s client base. The second approach, according to Fujitsu, is utilized when the company comes across a technology and thinks, “Fujitsu would not have thought of this.”

In TBR’s view, both approaches reflect Fujitsu’s starting point with startups: R&D. In contrast to peers, which attempt to read market trends, capture emerging customer sentiment or conduct intensive gap analysis, Fujitsu relies on its deep R&D experience and extensive research and technology network to uncover startups well-suited and complementary to Fujitsu’s offerings or doing something unique and worthy of Fujitsu’s investment. Or, one suspects, both.

Advise, design, implement, support

First, the hard truth: In TBR's view, Fujitsu needs a cleaner, more compelling Uvance story, at least in the Americas. Fujitsu executives said the Uvance percentage of the company's Service Solution business increased from 10% to 30% from 2022 to 2025, so Uvance is resonating in the market and is increasingly a critical part of Fujitsu's overall business. Challenging more accelerated growth in the Americas, in TBR's view, is the mixed messaging about what Uvance is and what it brings to Fujitsu's clients. At various times over two days, Uvance was described as:

  • "solving cross-industry issues, filling in the white spaces";
  • a "product and an accelerator" taking "proof of concepts to commercialization";
  • a means to "address social challenges," not just "solving business problems"; and
  • "the center of our business."

Yes, Uvance can be all those things, but the message then gets muddled.

Second, the rest of the truth: In TBR’s view, Fujitsu’s strategy around Uvance, Wayfinders and consulting writ large positions the company well, given Fujitsu’s strengths, accelerating change in the overall consulting business model, and market opportunities. Fujitsu positions Uvance Wayfinders as engaging clients “before implementation,” reflecting the company’s technology-centric value proposition. Fujitsu is not proposing business strategies or solving business problems but is instead applying its core technology strengths to a defined client technology environment and bringing efficiencies and improvements. As one Fujitsu executive said, “Like Accenture, not McKinsey.”

In TBR's research, clients appreciate IT services companies and consultancies that stay within their own lane, doing what they do well and not persistently looking to upsell or increase their footprint. As management consultancies face the existential disruption of AI on their business models, Fujitsu's Uvance neatly complements the company's extensive AI-enabled offerings and capabilities. Fujitsu can advise — and, more importantly, implement — based on experience and talent at scale, harnessing AI disruption rather than being upended by it. Lastly, Fujitsu's focus on three specific industries in the Americas — public sector, manufacturing and retail — helps tremendously. No company has deep expertise in every industry, so delivering a message that Fujitsu excels in three specific areas builds credibility in a market where Fujitsu does not have a large-scale footprint or presence. In TBR's view, this strategic decision to stay focused will be critical to Fujitsu's continued growth in the Americas.

Deepening ServiceNow partnership

In a session dedicated to Fujitsu's global alliances, the company brought in an executive from ServiceNow, complementing the Fujitsu presentation with specific examples of the two companies' strategies and partnership. Fujitsu's global alliances lead, Fleur Copping, noted that technology alliance partners provide entrée to new customers or position Fujitsu differently in the market, essentially amplifying the Fujitsu brand. Notably, every IT services company partners with nearly every technology partner, and differences in technology capabilities barely register, compelling Fujitsu — like other IT services companies — to differentiate in how it partners.

Copping said Fujitsu had professionalized its alliances organization and go-to-market efforts in recognition of the criticality of alliances in the emerging IT services ecosystem. Fujitsu has designated strategic partners — Amazon Web Services (AWS), SAP, ServiceNow, Microsoft and Salesforce — with an intent to develop and deploy scale around all five without becoming siloed around a specific partner. Copping added that co-selling with these partners means being "coherent in front of the clients." In TBR's view, keeping the alliance focus on clients and their outcomes, rather than on commercial models or specific go-to-market motions, is an ideal strategy but is difficult to execute, in part because it is a break from previous practice.

The last two years have seen a “massive uptick in how Fujitsu shows up in the ecosystem,” according to the ServiceNow executive. A few highlights:

  • Fujitsu acquired a ServiceNow boutique consultancy in the APAC region.
  • ServiceNow's "mature rigor" in its process of evaluating partners and offering expanded incentives to drive partner behavior spotlights Fujitsu's higher ranking across the pre-sales to post-sales spectrum.
  • Fujitsu has been an early participant in ServiceNow’s Enterprise Training Agreement program.

Fujitsu and ServiceNow have invested in training their sellers on each other’s portfolios.

Highlighting the third bullet, the ServiceNow executive said, "What does good look like in building talent [around ServiceNow's portfolio]? Fujitsu is the partner I point to." In TBR's view, having a clearly defined and easily understood differentiating quality — one that is clearly tied to the partnership and not an attribute applicable to the company as a whole — is critical to helping technology alliance partners and clients understand what added value an IT services company brings. TBR also noted that ServiceNow uses net-new annual contract value as an incentive for sales staff; in contrast, Fujitsu's revenues depend on client retention and managed services contracts. Innovation, according to both companies, serves as a bridge between the divergent sales incentive structures. TBR notes that innovation remains difficult to measure, but we see the evident alignment of ServiceNow's and Fujitsu's strategies and leadership approaches as perhaps more consequential for the long-term alliance. In sum, Fujitsu and ServiceNow presented a compelling partnership story — one that bears close scrutiny as the broader technology ecosystem evolves.



“In terms of the services, kind of scoping and bundling, that was typically a black-box process. So, we bring each other in at the right times, but they [services partners] never included us in terms of the actual kind of creation of the services offerings and the margins and the profitability, etc. So, we were only kind of aware of that, on a very macro level. It was just the solutioning part of it, saying, ‘OK, we know that this is the top-line business initiative that we’re solving for. And then we need to create the underlying products and services around that to create the recording of the product.’ So that’s the way that we typically work with them.” — Senior Manager of Global Channels and Alliances, Cloud

What peers can learn from Fujitsu

Competitors and technology alliance partners should keep an eye on Fujitsu's acquisitions, investments in startups and new partnerships. Fujitsu's investment portfolio has a mandate to invest for accelerated growth, has leadership support at the highest level, and has experience that has been built over the last few years. In TBR's view, all three of those elements — a mission, leadership, and the muscle memory around acquiring, investing and partnering — are essential. Further, when a Fujitsu executive said the company's CEO starts every acquisition discussion with questions around the technology and the customers, TBR saw a perfect reflection of the company's overall culture and value proposition: take technology and solve customers' (and societal) problems. Fujitsu executives did note concern that the market has been changing so rapidly that acquisitions made today may become less valuable by the time they are absorbed into the company. Overall, TBR believes Fujitsu will invest more in partnering (traditional alliances and corporate venture capital) because of the current market dynamics, not because of a lack of appetite for acquisitions. Circling back to the competitive and alliances implications: be wary of — or partner with — a company that knows how to execute on M&A.

Final thoughts

So, what comes next? In TBR’s view, Fujitsu has the opportunity to expand its market presence and success in the Americas, particularly if the company can leverage three advantages and strengths it enjoys right now. First, technology alliance partners, such as ServiceNow, Salesforce and AWS, are aggressively partnering with IT services companies and consultancies that can show flexible commercial arrangements, bring new clients and coinnovate. Second, Fujitsu has the resources and resolve to aggressively acquire, partner or invest. TBR has not seen an IT services company or consultancy grow without attitude and resources. Third, across all the company’s recent analyst events and briefings, Fujitsu’s core culture has remained focused on bringing technology solutions to clients.


Focus and staying within a company’s strengths, in TBR’s view, separate well-run, high-performing consultancies and IT services vendors from peers. TBR does not anticipate that Fujitsu will wander from its path but does expect that analyst events in 2026 will bring additional Fujitsu strengths to the forefront.

Immigration Policy Changes Portend a Growth Shock for the U.S. Wireless Industry

U.S. wireless operators added more than 51 million new wireless phone connections from 2018 to 2024. Where did they come from?

According to company-reported data, industry data, U.S. government data and TBR estimates, U.S. wireless operators collectively added more than 51 million wireless phone connections (prepaid and postpaid) from the beginning of 2018 through the end of 2024, a relatively large number considering the organic population growth rate in the U.S. has slowed significantly over the past two decades as Americans have fewer children and since most people in the country already have at least one wireless phone.


A portion of this increase in phone connections can be explained by situations where one person has multiple lines, such as through work (e.g., business line or first responder line) or by long-tail situations, such as younger and older people subscribing to wireless phone plans for the first time, as well as the net change from births and deaths in the overall population.

However, based on TBR’s research, this only accounts for approximately 56% of total wireless phone net additions during this time frame. Where did the rest of these phone connections come from?

Approximately 44% of wireless phone connection net additions in the U.S. from 2018 to 2024 were from immigrants (legal and illegal)

According to TBR’s research, these 51 million new phone connections fall into four main categories, as outlined in Figure 1.



As shown in figures 1 and 2, immigration (both legal and illegal*) was the largest driver of wireless phone net additions for U.S. operators from 2018 to 2024, accounting for an estimated 22.3 million net additions, or nearly 44% of total wireless phone net additions during this seven-year time frame.

This number makes sense considering that, according to official government statistics such as from the Department of Homeland Security (DHS) and Customs and Border Protection (CBP), more than 30 million immigrants (legal and illegal) entered and stayed in the U.S. between 2018 and 2024. Of this number, it is reasonable to assume around three-quarters of these immigrants obtained cellphone service (prepaid or postpaid).
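
A rough consistency check using the figures above (the roughly 75% service adoption rate is the assumption stated here, not a separately reported statistic):

  ~30 million immigrants entering and staying (2018-2024) × ~75% obtaining phone service ≈ 22.5 million phone connections,

which aligns closely with the estimated 22.3 million immigration-driven net additions, or roughly 44% of the 51 million total.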

One of the first things immigrants do when they enter the country (legally or illegally) with the intention of staying is purchase wireless phones and wireless phone service. Most people who enter the country are of an age at which they would use phones.

This significant influx of new population into the U.S. drove a secular growth trend for wireless phone connection additions for the wireless industry from 2018 to 2024, and immigration represented the largest driver of wireless phone connection additions for U.S. wireless operators.

However, now that immigration policy is fundamentally changing under the Trump administration, this tailwind has become a headwind for operators and will require a structural reassessment of growth prospects for the industry.

TBR expects this growth shock will begin to present itself in operators’ 2Q25 results, with the effects more clearly seen in 2H25 results.

*Legal immigration primarily includes green card holders, visa holders (e.g., H-1B for temporary workers; F-1 and J-1 for international students) and refugees/asylum seekers. Illegal immigration primarily reflects CBP encounters (all borders), "gotaways" and visa overstays.

 

Why will there be a growth shock for wireless phone net adds in the U.S. in 2025?

Given this growth slowdown, U.S. wireless operators face a conundrum. How will they continue showing robust net phone additions every quarter? There is no easy solution, but in the last section of this report TBR outlines some tactics operators are employing, or might employ in the future, to mitigate the negative immigration-related effects on their earnings results.

Since the start of President Trump’s second term, there has been a reduction of more than 80% in the flow of illegal immigrants into the country, and legal immigration into the U.S. has also declined significantly. Additionally, there are other population headwinds at play:

  • Deportations of immigrants currently residing in the U.S. are ramping up (U.S. government agencies are on pace for more than 500,000 deportations in calendar year 2025, up from more than 271,000 in 2024).
  • Emigration has increased as more people choose to leave the country voluntarily.
  • The birth rate is slowing.
  • The death rate is rising as the baby boomer generation ages.

If we use a conservative estimate and assume a 50% reduction in the number of total new immigrants entering the U.S. from 2025 onward (and if other variables stay constant), this implies a reduction of more than 20% in the total level of new wireless phone connections, a significant challenge for wireless operators since phone connections are operators’ most lucrative offering. Even a 20% reduction in total wireless phone net additions would have a significant impact on operators’ revenue, margins and cash flow.
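
The arithmetic behind that sensitivity is a simple worked step (an illustration that, as noted above, holds all other connection drivers constant):

  44% (immigration's share of net additions, 2018-2024) × 50% (assumed reduction in new immigrants) ≈ 22%,

or slightly more than a one-fifth reduction in total wireless phone net additions.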

It is incorrect to assume immigrants only use prepaid plans

Despite claims by the U.S. wireless industry that immigrants predominantly use prepaid phone service, the reality is that total prepaid phone connections declined by nearly 5 million in aggregate from 2018 to 2024 while postpaid phone net additions exceeded 56 million in the same period. Given that more than 30 million immigrants are estimated to have entered and stayed in the U.S. during that seven-year period (according to official U.S. government statistics from agencies such as CBP and DHS), it is unreasonable to assume that such a large influx of the population was absorbed by the prepaid market. Rather, it must have been mostly absorbed by the postpaid market.


What U.S. telcos are not saying but Canadian telcos are

U.S. wireless operators have been asked by Wall Street analysts on quarterly earnings calls and at investment conferences about the potential effects of immigration policy changes on their phone connection metrics. Thus far, operators have unanimously downplayed any impact. However, given immigration levels have plummeted since January 2025 (illegal border crossings are down 80% to 90% year-to-year) and voluntary and involuntary deportations are ramping up, there must be at least some impact.

Immigration represented approximately 44% of total phone net additions from 2018 to 2024, so assuming legal and illegal immigration levels fall by a conservative 50% compared to the past seven years, phone net additions would decline by more than 20% moving forward, essentially taking the seven-year annual average of 7.3 million industrywide wireless phone net additions down to an annual average of roughly 5.8 million. This reduction in new phone additions (operators' most lucrative connection type from an average revenue per user [ARPU] and margin perspective) implies a growth shock that would force operators to lower revenue and earnings projections. It is possible that operators expect most or all of any reduction in the immigration-driven portion of their phone connection dynamics to be offset by growth in other phone connection types, but history suggests that is unlikely.
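
Applying that roughly 20% haircut to the industrywide average is another simple worked step:

  7.3 million annual net additions × ~20% ≈ 1.5 million fewer additions per year, leaving an annual average of approximately 5.8 million.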

The minimal wireless phone connection net addition impact observed in U.S. wireless operators’ financial results in 4Q24 and 1Q25 is likely due to the following reasons:

  • Lag time between when someone enters the country and obtains a phone and phone plan and when that result is reflected in U.S. wireless operators’ earnings reports
  • The timing of the new administration taking over and implementing and enforcing policy changes
  • The delay between when someone is deported or emigrates and their phone line is shut off
  • The lag in company reporting (earnings are publicly released one to two months after the quarter ends), combined with the surge in immigrants entering the country leading up to the change in administrations and the fact that immigration reform only began to be implemented and enforced in late January 2025

However, this will change, starting as soon as in 2Q25 and more so in 2H25 as the decline in net immigration begins to flow through U.S. wireless operators’ quarterly figures.

By contrast, the Big Three telcos in Canada have been very forthcoming in talking about and shaping expectations with stakeholders about the effects of immigration policy changes in the country. In 2024 the Canadian government reduced target levels for permanent residents by 20% from 2025 to 2026, set caps on international students, and tightened eligibility requirements for foreign workers. These policy changes are not at the same level as those in the U.S. Why are operators in Canada publicly stating that the country’s lower immigration targets are slowing their subscriber net additions, yet U.S. telcos claim there will be no meaningful impact? Something does not add up.

“We anticipate the environment for our businesses to remain competitive in the coming year with continued moderating wireless subscriber growth versus 2024 as Canada’s immigration and foreign student levels decline.” — Glenn Brandt, CFO, Rogers, 4Q24 earnings transcript

“The decrease in gross and net additions this quarter was a result of a less active market, slowing population growth as a result of changes to government immigration policies, and our focus on attracting subscribers to our premium 5G Rogers brand.” — Rogers 1Q25 earnings press release

“In the quarter, Rogers delivered a combined 34,000 net new wireless subscribers, down from 61,000 last year, reflecting the smaller market size due to reduced immigration.” — Glenn Brandt, CFO of Rogers, 1Q25 earnings transcript

Canadian telecom industry experiencing ‘strong subscriber growth’ thanks to immigration: report — Mobile Syrup, Feb. 22, 2024

Telcos blame Canada's immigration policies for slower subscriber growth — Mobile Syrup, May 12, 2025

AT&T, T-Mobile and Verizon all gained the most subscribers from immigration and now have the most subscribers to lose

The largest U.S. telcos will be disproportionately affected by lower immigration levels. These operators have been the biggest beneficiaries of immigration flows, adding millions of net-new wireless phone subscribers to each of their businesses since 2018, but now this tailwind is turning into a headwind.



All of the major U.S. telcos have clusters of branded retail stores near major border-crossing areas with Mexico, such as in southern California and parts of Texas. Having store clusters near crossing hubs enables telcos to cater to the needs of migrants coming into the country (one of the first things immigrants do when they come into the country is buy wireless phones and phone plans) and jockey for new phone subscriber opportunities.

Tactics U.S. telcos can employ to mitigate the immigration impact

Though some telcos might start to publicly discuss the impact of immigration on their businesses, TBR expects most U.S. telcos to remain cagey about these changes and the effects of immigration on their businesses. TBR believes the telcos can leverage certain tactics to mask or mitigate the impact to their business results.

Indicators to watch

  • Free line offers: Offering free lines enables telcos to show enhanced phone connection figures even though a free line does not generate the revenue of an actual phone subscriber. Telcos will likely still receive activation fees and collect some other taxes and fees for allowing subscribers to carry the "free" line.
  • Increase in competitive pricing offers and promotions: Special deals enable telcos to retain existing customers and take phone connection share from other operators. This could serve to offset some of the negative effects of immigration-related phone disconnections.
  • Timing of line shutdowns: Another potential tactic telcos could employ is delaying the roll-off of line shutdowns. One situation where this can occur is when illegal immigrants are deported and their phone service remains in effect even though no one is using it and the bill is unlikely to be paid. There is also a trend of emigration, whereby immigrants (legal or illegal) leave the country voluntarily and cancel their phone service before they leave, which is another headwind to phone connection metrics for the telecom industry.
  • Increase in allowance for doubtful accounts: This is also referred to as allowance for credit losses. The shutdowns described above typically lead to bad debt expense, which will likely rise with deportations on pace to exceed 500,000 in 2025.

Key takeaways

If you see minimal impact in operators’ headline wireless phone subscriber numbers from immigration, keep in mind that the aforementioned tactics are likely in play.

The negative effects might not show up prominently in 2Q25 results, and might not show up in a major way thereafter, because wireless operators may be able to mask much of the impact via programs such as offering a free phone line on multiline plans, as T-Mobile is doing.

Conclusion

Though the topic is rarely discussed, U.S. wireless operators have been relying on both legal and illegal immigrants to drive a significant portion of their underlying growth in phone connection net additions.

U.S. wireless operators face a major new headwind starting in 2H25 as traditionally important drivers of net wireless phone additions — operators’ most lucrative customers — slow meaningfully. With total immigration levels (legal and illegal) into the U.S. down by more than 50% so far this year compared to year-ago levels, coupled with an increase in deportations of existing residents as well as other population headwinds, telcos will struggle to demonstrate the same level of phone net addition numbers they have enjoyed for the past decade.

 

AMD Lays Out its Road Map to Erode NVIDIA’s Dominance in the AI Data Center

All eyes were again trained on San Jose, Calif., during AMD Advancing AI 2025, held on June 12, just three months after NVIDIA GTC 2025. The event centered on AMD’s bold AI strategy that, in contrast to that of its top competitor, emphasizes an open ecosystem approach to appeal to developers and organizations alike. The entire industry seeks increased competition and accelerated innovation in AI, and AMD plans to fill this void in the market.

Catching up with NVIDIA — can AMD achieve the seemingly impossible?

AMD’s Advancing AI 2025 event presented an opportunity for CEO Lisa Su to outline how AMD’s investments, both organic and inorganic, position the company to challenge NVIDIA’s dominant position in the market. During the event’s keynote address, Su announced new Instinct GPUs, the company’s first rack scale solution, and the debut of ROCm 7.0, the next generation of the company’s open-source AI software platform. She also detailed the company’s hardware road map and highlighted strategic partnerships that underscore the increasing viability of the company’s AI technology.

 

However, NVIDIA's dominance in the market cannot be overstated, and the AI incumbent's first-mover advantage has created massive barriers to entry that AMD will need to invest tactfully in overcoming. For instance, TBR estimates NVIDIA derived more than 25 times the revenue AMD did from the sale of data center GPUs in 2024. Nonetheless, AMD is committed to the endeavor, and the company's overall AI strategy is clear: deliver competitive hardware and leverage ecosystem openness and cost competitiveness to drive platform differentiation and gain share in the market.

Acquired assets pave the way for Helios

Su's keynote address began with the launch of AMD's Instinct MI350 Series GPUs, comprising the Instinct MI350X and MI355X. The Instinct MI355X outperforms the MI350X but requires liquid cooling, whereas the MI350X can be air cooled. As such, the Instinct MI355X offers maximum inferencing throughput and is specifically designed to be integrated into high-density racks, while the MI350X targets mixed training and inference workloads and is ideal for standard rack configurations. Both GPUs pack 288GB of HBM3e memory capacity — significantly more than the 192GB offered by NVIDIA's B200 GPUs.

 

The denser memory architecture of the AMD Instinct MI350 Series is a key enabler of the chips' competitiveness with NVIDIA's B200: AMD claims compute performance ranging from parity with to approximately twice that of Blackwell, depending on the floating-point precision of the model being run. However, even more noteworthy was AMD's introduction of its open rack scale AI infrastructure, which was made possible by the company's 2022 acquisition of Pensando Systems.

 

Along with the acquired company’s software stack, Pensando added a high-performance data processing unit (DPU) to AMD’s portfolio. By leveraging this technology and integrating Pensando’s team into the company, AMD unveiled the industry’s first Ultra Ethernet Consortium (UEC)-compliant network interface card (NIC) for AI, dubbed AMD Pensando Pollara 400 AI NIC, in 4Q24, highlighting the company’s support of open standards.

 

At Advancing AI 2025, Su formally announced the integration of Pollara 400 AI NIC with the company's MI350 Series GPU and fifth-generation EPYC CPU to create the company's first AI rack solution architecture, configurable as an air-cooled variant featuring 64 MI350X GPUs or a liquid-cooled variant featuring up to 128 MI355X GPUs. The development of AMD's rack scale solution architecture comes in response to the release of NVIDIA's GB200 NVL72 rack scale solution, with both racks being Open Compute Project (OCP)-compliant to ensure interoperability and simplified integration with existing OCP-compliant infrastructure.

 

Going a step further, at the event Su introduced AMD’s next-generation GPU — the Instinct MI400 series — alongside the company’s next-generation rack scale solution, both of which are expected to be made available in 2026. The Instinct MI400 series is slated to deliver roughly twice the peak performance of the MI355X, while Helios — AMD’s next-generation rack scale solution — will leverage 72 MI400 series GPUs in combination with next-generation EPYC Venice CPUs and Pensando Vulcano network adapters. Unsurprisingly, Helios will adhere to OCP standards and support both Ultra Accelerator Link (UALink) and UEC standards for GPU-to-GPU interconnection and rack-to-rack connectivity.

 

In comparison to the prerelease specs of NVIDIA’s upcoming Vera Rubin NVL72 solution, which is also scheduled to be released in 2026, Helios is expected to deliver the same scale-up bandwidth and similar FP4 and FP8 performance with 50% greater HBM4 memory capacity, memory bandwidth and scale-out bandwidth. However, with AMD GPUs delivering higher memory capacity and bandwidth than equivalent NVIDIA GPUs, this raises the question: Why do NVIDIA GPUs dominate the market?

Developers, developers, developers

At NVIDIA GTC 2025, CEO Jensen Huang said, “Software is the most important feature of NVIDIA GPUs,” and this statement could not be more true. While NVIDIA has benefited from first-mover advantage in the GPU space, currently the company’s GPU release cycle is only slightly ahead of AMD’s in terms of delivering roughly equivalent silicon to market from a compute performance perspective. However, AMD has a leg up when it comes to GPU memory capacity, which helps to drive inference efficiency.

 

Where NVIDIA’s first-mover advantage really benefits the company is on the software side of the accelerated computing equation. In 2006 NVIDIA introduced CUDA (Compute Unified Device Architecture), a parallel computing platform and programming model purpose-built to enable the acceleration of workloads beyond just graphics on the GPU. Since then, CUDA has amassed a developer base nearing 6 million, boasting more than 300 libraries and 600 AI models, all while garnering over 48 million downloads. Importantly, CUDA is proprietary, designed and optimized exclusively for NVIDIA GPUs, resulting in strong vendor lock-in.

 

Conversely, AMD’s ROCm is open source and relies heavily on community contributions to drive the development of applications. Recognizing the inertia behind CUDA and the legacy applications built and optimized on the platform, ROCm leverages HIP (Heterogeneous-computing Interface for Portability) to allow for the porting of CUDA-based code, simplifying code migration. However, certain CUDA-based applications — especially those that are more complex — do not run with the same performance on AMD GPUs after being ported due to NVIDIA software optimizations that have not yet been replicated.
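
One practical consequence of HIP’s CUDA-compatible interface is that higher-level frameworks can hide the vendor difference entirely. The sketch below is a generic PyTorch snippet, not AMD- or NVIDIA-supplied code: on ROCm builds of PyTorch the torch.cuda namespace is backed by HIP, so, assuming the appropriate PyTorch build is installed, the same script can run unmodified on either vendor’s GPUs.

```python
import torch

# On CUDA builds of PyTorch this targets NVIDIA GPUs; on ROCm builds the same
# torch.cuda API is backed by HIP and targets AMD Instinct GPUs instead.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
label = torch.cuda.get_device_name(0) if device.type == "cuda" else "CPU"
print(f"Running on: {label}")

# A small matrix multiply; the code is identical regardless of GPU vendor.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b
print(c.shape)
```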

Recognizing the critical importance of the ecosystem to the company’s broader success, AMD continues to invest in enhancing its ROCm platform to appeal to more developers. At Advancing AI 2025, the company introduced ROCm 7, which promises to deliver stronger inference throughput and training performance compared to ROCm 6. Additionally, AMD announced that ROCm 7 supports distributed inference, which decouples the prefill and decode phases of inferencing to vastly reduce the cost of token generation, especially when applied to AI reasoning models. Minimizing the cost of token generation is key to maximizing customers’ revenue opportunity, especially for those running high-volume workloads, such as service providers.
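
To make the prefill/decode distinction concrete, the toy sketch below separates the two phases into different worker pools, mirroring how disaggregated serving places them on separate GPU pools. This is a conceptual illustration only; it does not use ROCm, vLLM or any real serving API, and the function names are made up.

```python
from concurrent.futures import ThreadPoolExecutor

# Conceptual sketch of disaggregated inference: prefill (prompt processing) and
# decode (token-by-token generation) run on separate worker pools, analogous to
# placing the phases on separate GPU pools. Illustrative stand-ins only.

def prefill(prompt: str) -> dict:
    # Stand-in for processing the full prompt once and producing a KV cache.
    return {"prompt": prompt, "kv_cache": f"<kv for {len(prompt)} chars>"}

def decode(state: dict, max_tokens: int) -> str:
    # Stand-in for generating tokens one at a time from the cached state.
    return " ".join(f"tok{i}" for i in range(max_tokens))

prefill_pool = ThreadPoolExecutor(max_workers=2)   # compute-bound phase
decode_pool = ThreadPoolExecutor(max_workers=8)    # memory-bandwidth-bound phase

state = prefill_pool.submit(prefill, "Summarize the Advancing AI 2025 keynote").result()
output = decode_pool.submit(decode, state, 8).result()
print(output)
```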

 

In addition to distributed inference capabilities similar to those offered by NVIDIA Dynamo, AMD announced ROCm Enterprise AI, a machine learning operations (MLOps) and cluster management platform designed to support enterprise adoption of Instinct GPUs. ROCm Enterprise AI includes tools for model fine-tuning, Kubernetes integration and workflow management. The platform will rely heavily on software partnerships with companies like Red Hat and VMware to support the development of new, use-case- and industry-specific AI applications, and in stark contrast to NVIDIA AI Enterprise, ROCm Enterprise AI will be available free of charge. This pricing strategy is key in driving the development of ROCm applications and the adoption of the platform. However, customers may continue to be willing to pay for the maturity and breadth of NVIDIA AI Enterprise, especially as NVIDIA continues to invest in the expansion of its capabilities.

Partners advocate for the viability of AMD in the AI data center

Key strategic partners, including executives from Meta, Oracle and xAI, joined Su on stage during the event’s keynote, endorsing the company’s AI platforms. All three companies have deployed AMD Instinct GPUs and intend to deploy more over time. These companies are among the largest players in the AI space, and their endorsements underscore the value they see in AMD and the company’s approach of driving a more competitive ecosystem to accelerate AI innovation and reduce single-vendor lock-in.

 

However, perhaps the most noteworthy endorsement came from OpenAI CEO Sam Altman, who discussed how his company is working alongside AMD to design AMD’s next-generation GPUs, which will ultimately be employed to help support OpenAI’s infrastructure. While on stage, Altman also underscored the growing AI market with arguably the most ambitious, albeit somewhat self-serving, statement of the entire keynote: “Theoretically, at some point, you can see that a significant fraction of the power on Earth should be spent running AI compute.” It is safe to say that AMD would be pleased if this ends up being the case; however, for now, AMD is projecting the data center AI accelerator total addressable market will grow to greater than $500 billion by 2028, with inference representing a strong majority of AI workloads.

AMD has become the clear No. 2 leader in AI data center and is well positioned to take share from NVIDIA

AMD’s Advancing AI 2025 event served as a testament to the company’s open-ecosystem-driven and cost-competitive AI strategy while also highlighting how far the company’s AI hardware portfolio has come over the last few years. However, while AMD’s commitment to an open software ecosystem and open industry standards is a strong differentiator for the company, it is also a major risk, as it makes AMD’s success dependent on the performance of partners and consortium members. Nonetheless, TBR sees the reputation of AMD GPUs becoming more positive, but NVIDIA’s massive installed base and developer ecosystem make competing with the industry giant a significant feat.

Well-placed Investments in Emerging Tech Will Enable CGI to Accelerate Growth Long Term

Acquisitions, being attentive to clients’ bottom-line demands, and implementing AI into IP are the backbone of CGI’s “built to grow and last” strategy

On June 5, CGI hosted its Industry Analyst Summit. CEO Francois Boulanger and CGI Board of Directors Executive Chair Julie Godin commenced the meeting, detailing how CGI’s business culture, proximity model and decentralized approach, acquisitions and cocreation with clients are key to the company’s growth strategy. Throughout each session CGI leaders highlighted the company’s emphasis on meeting client objectives, providing flexibility and codeveloping solutions as necessary. Cocreating on projects not only delivers more relevant solutions to the client but also provides CGI with new intellectual property (IP) that it can bring to other clients.

 

Notably, CGI is leaning into its proximity model by acquiring more companies that build out the company’s footprint in metro markets. This is particularly evident with the purchases of U.S.-based Daugherty and Novatec. Other acquisitions such as Momentum Technologies and Apside expand the company’s local presence in Canada. Access to more markets across the U.S., Canada and Europe, alongside new client-led solutions, is broadening the company’s opportunities, particularly around AI-related projects. CGI has also enhanced its data and AI capabilities through the strategic acquisitions of Apside, Novatec and Aeyon.

 

CGI shared insights into its recent AI endeavors, including an example involving the deployment of an air-gapped solution for the North Atlantic Treaty Organization (NATO). For this project, CGI collaborated with NATO’s Allied Command Transformation in Norfolk, Va., to construct and tune models that accelerate the classification, editing and analysis of documents using the knowledge agent AI Felix. Outside client-led solutions, CGI is embedding AI across its existing portfolio to enhance delivery to clients across industries.

As CGI remains largely unaffected by DOGE, enhancements across the company’s public sector portfolio allow it to dig deep on federal deals

Stephanie Mango, president of CGI Federal, led an hourlong panel discussion with executives representing CGI’s U.S. Federal, Canadian and European public sector operations. Globally, CGI’s various government clients are encountering similar challenges associated with rising levels of economic and geopolitical uncertainties and governmentwide changes. Common across the company’s roster of government customers is the enduring demand for IT modernization. CGI is currently helping governments transform outdated legacy IT infrastructures, prioritize digital transformation initiatives, address talent shortages in cybersecurity and AI, adopt zero-trust security architectures, implement sovereign AI and cloud solutions, enhance the security and resilience of government supply chains, and protect critical public sector infrastructure.

 

TBR believes having such a broad swath of activities provides CGI’s public sector practices globally with case studies and success stories to showcase when pursuing new opportunities, talent with relevant experience that can be redeployed to new government markets, and solutions codeveloped with government clients highly relevant to public sector agencies elsewhere. A common go-to-market approach CGI employs across all public sector markets is to help government IT departments and IT decision makers retain a modernization mindset as the company firmly believes governments must view digital transformation as a long-term, multiyear strategy. TBR believes CGI also effectively leverages its client proximity approach to codevelop solutions with government clients and to optimize its agility in responding to fast-changing market dynamics.

 

In the U.S. federal market, where the arrival of the Trump administration and its Department of Government Efficiency (DOGE) has caused sectorwide upheaval, TBR believes CGI Federal is well positioned to capture a growing share of digital modernization work that we expect to accelerate, after the initial shock of billions of dollars in budget cuts and reallocations. Although CGI was included on DOGE’s initial hit list of consultancies under scrutiny, CGI Federal only generates 2% of its sales from “discrete consulting services,” which TBR assumes is a reference to the type of management or strategic consulting services most vulnerable to DOGE.

 

CGI Federal generates over 50% of its revenue from outcome-focused engagements, which are typically structured as fixed-price contracts. According to TBR’s Federal IT Services research practice, federal IT contractors can expect a general shift from cost-plus to fixed-price arrangements as agencies adopt a more outcome-focused mindset regarding new IT outlays. As the federal IT procurement environment focuses more on outcome-based contracting, more of the risk of cost overruns or delivery delays will shift to vendors — a potentially margin-erosive scenario for federal systems integrators (FSIs) that fail to maintain strong program execution.

 

CGI Federal is confident it can adapt to outcome-focused contracting in federal IT but is uncertain how quickly the transition can be completed. CGI Federal has been a perennial margin leader in TBR’s Federal IT Services Benchmark due to its traction with its ever-expanding suite of homespun IP-based offerings like Sunflower and Momentum, and demand for these offerings will at least endure, and likely increase, under DOGE.

 

TBR anticipates additional opportunities for CGI Federal will stem from its proprietary Sunflower (cloud-based asset management) and Momentum (financial management) solutions, as improving asset and financial management are among DOGE’s chief objectives and are in high demand by civilian and defense agencies looking to enhance fiscal and supply chain management, especially to comply with DOGE-related mandates.

 

CGI Federal has ongoing engagements that the company will showcase to win future federal work. For example, CGI Federal is implementing a cybersecurity shared services platform for the Department of Homeland Security (DHS), while the Department of Transportation’s use of the Momentum platform will serve as the case study for similar engagements across the federal civilian market.

 

In the Department of Defense, CGI Federal expects to leverage its fiscal and asset management offerings to capitalize on the recent mandate from Secretary of Defense Pete Hegseth that all U.S. service branches pass financial audits, and the company will cite its recent success on the U.S. Marine Corps Platform Integration Center (MCPIC) engagement to illustrate the full range of its capabilities. In the U.S. state government market, CGI leaders also mentioned that an unnamed state government had established its own version of DOGE with similar efficiency objectives and noted that other states are likely to follow suit, creating an expanding addressable market for the company’s asset and fiscal management platforms to prevent fraud, waste and abuse and to maximize operational transparency.

 

CGI Federal’s 2024 acquisition of Aeyon added process automation and AI capabilities that TBR believes will have high relevance for not only U.S. federal agencies but also state governments. The company also provides low-cost onshore managed services in the U.S. from delivery centers in Lebanon, Va., and Lafayette, La., staffed by 2,000 CGI professionals. Low-cost onshore delivery is also common across CGI’s public sector operations in Canada, but less so in Europe.

 

TBR believes CGI’s alliances with cloud hyperscalers (Amazon Web Services, Google and Microsoft), platform providers (Salesforce, SAP and ServiceNow) and others (UiPath, TrackLight and NetApp) will be key to its future success in not only the U.S. federal market but also public sector markets globally. These partners are also enablers of the company’s IP-focused solution strategy — as important as client-partners in developing new technologies and solutions.

 

CGI does not believe the advent of generative AI (GenAI) marks the beginning of the end for traditional IT services. Rather, CGI intends to leverage its expanding GenAI capabilities to migrate its public sector portfolio of offerings away from lower-value services and embrace higher-value offerings designed to maximize the value and potential of GenAI for public sector agencies. Higher-value services will require CGI to lean more heavily into its well-established proximity model as clients may need more guidance to fully reap the benefits of new offerings as capabilities become increasingly complex. In the long term, CGI may turn to more offshore resources as clients demand greater support.

BJSS provides short-run revenue relief but demonstrating AI competency to clients will be key

Vijay Srinivasan, president of U.S. Commercial and State Government operations, led the session on CGI’s banking segment by highlighting the company’s dedication to serving clients long-term and holistically, providing flexibility to address clients’ objectives rather than focusing only on selling financial services solutions. Following opening remarks, CGI discussed recent trends and concerns in the sector, supported by annual interviews of business and IT executives. First, as many banks strive for increased personalization, they demand AI and real-time capabilities on mobile applications. Unsurprisingly, banking clients are facing challenges deciphering market expectations, given the unpredictable nature of ongoing tariffs. In turn, banking clients are looking for new ways to generate revenue and maintain profitability. The banking industry, alongside many others, is experiencing a deterioration of institutional knowledge as employees retire, fueling demand for AI tools.

 

As banks demand more AI tools and other new technologies, they need legacy IT system modernization, migration support and application modernization. These modernization efforts also help banks execute on their cost-cutting initiatives. During the second half of 2024, CGI modernized a U.S. financial services company’s loan origination system with CGI Credit Studio and implemented its Trade360 platform for Bladex.

 

Despite the recent interest rate cuts made by the European Central Bank (four reductions thus far in 2025) and by the U.S. Federal Reserve (three reductions in 2H24), ongoing uncertainty is driving the need for streamlined processes that enable cost efficiency. CGI shared an example in which the company helped a Canadian bank optimize over 110 core applications, many of which were running on legacy systems, reducing the bank’s run costs by more than 35% year-to-year. The transformation began eight years ago, reflecting CGI’s long-standing relationship with the client, and the deal serves as a blueprint for similar contracts in the industry. Leading digital transformation efforts that support bottom-line initiatives is particularly important in the current environment.

 

Although the financial services sector is experiencing ongoing volatility, the sector and the manufacturing, retail and distribution sector are roughly equal contributors to CGI’s overall revenue. Finding new revenue opportunities and honing strategy within the segment will be vital to sustaining growth. Many IT services companies, including CGI, experienced revenue declines in their financial services sectors in 2024; CGI’s financial services revenue declined in 1Q24, 2Q24 and 3Q24 before increasing by low single digits in 4Q24.

 

In January 2025 CGI completed the acquisition of BJSS, a U.K.-based engineering and technology consultancy with industry expertise in financial services. The acquisition contributed to 8.6% year-to-year growth in the sector. CGI is not the only company prioritizing acquisitions that boost struggling verticals. Accenture recently purchased U.K.-based Altus Consulting, which will improve digital transformation capabilities in the financial services and insurance industries, and Accenture experienced segment revenue growth declines similar to CGI’s in the first half of 2024. TBR believes BJSS will have a meaningful impact on revenue in the short run, but CGI may need to be more persistent in adding new solutions. Although introducing AI capabilities enhances client experience, it may not signal AI competency in the same way as new solutions. CGI may benefit from using acquisitions and more portfolio investments, similar to its investments in the public sector, to foster organic growth.

 

TBR believes CGI’s coinnovation with clients will create new opportunities tailored to industry needs; however, at the same time, other vendors in the past year have been leveraging partnerships to expand market share and provide industry-specific solutions. For example, Cognizant and ServiceNow expanded their partnership to reach midmarket banking clients, and Accenture is collaborating with S&P Global to jointly pursue financial services clients. Joint offerings may motivate clients to invest in these solutions rather than only implementing AI into existing solutions.

 

CGI illustrated how BJSS adds value to the company’s capabilities in its banking vertical in the U.K., specifically related to end-to-end services and product deployment. BJSS’ emphasis on meeting client goals made it a strong cultural fit for CGI. The acquisition came six months after the purchase of Celero’s Canadian credit union servicing business, which deepened CGI’s reach in Canada. These recent acquisitions, alongside recent interest rate cuts and continued additions of AI capabilities in banking solutions, position CGI well for strong performance in the sector. Further, in the company’s most recent earnings call, Boulanger announced the company is seeing “early signs in quarter two of renewed client spending in the banking sector.”

Adaptability with manufacturing clients provides deal opportunities even in a challenging environment

After the public sector, the financial services and the manufacturing, retail and distribution sectors are CGI’s next-largest revenue contributors, according to company-reported data. Similar to financial services, revenue growth in the manufacturing, retail and distribution sector was also volatile throughout 2024. Manufacturing clients will continue to experience a challenging environment with tariffs also contributing to uncertainty. During the industry session, CGI leaders discussed the increased importance of supply chain resiliency, stating that clients are seeking alignment with the company’s talent, data and technology. Investments to improve resiliency, such as in data-sharing ecosystems and capacity management, will be vital for clients, serving as a revenue opportunity for CGI.

 

To capture more revenue opportunities, CGI is completing acquisitions that bolster its standing in manufacturing, similar to recent purchases made to expand in financial services. Although CGI leaders did not directly discuss it during the session, the company’s recent purchase of Novatec expands CGI’s reach into the manufacturing sector in Germany and Spain, specifically in the automotive industry. The acquisition brings capabilities in digital strategy, digital product development and cloud-based solutions, which will help CGI address growing demand for supply chain resiliency. Further, manufacturing sectors are beginning to experience the effects of knowledge loss associated with large numbers of retirements, and increased automation in the sector will help close the resulting gaps. CGI is investing in implementing AI across its manufacturing portfolio, as well as across the rest of the business.

 

CGI is finding its clients are at different places in their digital journeys, and the divide is only increasing. To address this gap, CGI will need to lean into its adaptable nature to meet each client’s needs. CGI included two examples to demonstrate the company’s approach. The first was a key deal with the Volkswagen Group around digitization, in which the two companies jointly created a governance model and collaborated on Agile DevOps. The two formed a new entity, known as MARV1N, a unit that will provide the group with the necessary development support for digitalization projects. Additionally, CGI modernized the group’s legacy systems while developing new IT systems designed to cut operating expenses. The example emphasizes one of CGI’s main themes of the summit: helping clients holistically, specifically by providing meaningful outcomes that improve clients’ bottom line.

 

Similarly, CGI developed and is managing Michelin’s supply chain planning and production manufacturing systems, underpinning the client’s Customer Experience and Services & Solutions focus areas. CGI increased supply chain resiliency by enabling inventory prediction and implementing AI and business performance monitoring, and it also supported Michelin with a machine learning project. The collaboration reflects CGI’s mission to increase automation to improve productivity and enhance supply chain resiliency. In contrast, much of the demand for AI in manufacturing stems from curiosity about the technology rather than from a focus on using it to boost productivity. TBR believes it will become important for CGI to signal to manufacturing clients how AI tools can help boost productivity, which is what CGI did recently in an engagement with Rio Tinto: CGI deployed AI tools to help Rio Tinto reduce production breakdowns, helping the client capture additional revenue.

As CGI’s proximity model and ideology provide longevity, investing in the right next-generation technology will position the company competitively

CGI’s focus on cocreation, infusing AI into its IP, and recent acquisitions have fueled revenue growth that is currently outpacing most other IT services vendors, largely due to an acquisition pace that is surpassing that of its peers, many of which are prioritizing smaller acquisitions with specialized capabilities. Additionally, as concerns of a recession rise, other IT services companies are turning their attention to startups. For example, Accenture has been ramping up its investments in startups, recently investing in AI prediction company Aaru and in Voltron Data, which has GPU-powered data processing capabilities.

 

Similarly, Capgemini is collaborating with ISAI, a France-based tech entrepreneurs’ fund, launching ISAI Cap Venture II centered on investing in B2B startups. Although CGI made brief mentions of its startup framework, CGI Unicorn Academy, and discussed its AI-powered service delivery approach, CGI DigiOps, the company has not made many public announcements about investments or new initiatives. Capturing new technology, especially amid economic uncertainty, could help CGI secure a competitive edge on future revenue opportunities.

 

In CGI’s FY2025, which ends in September, the company is likely to maintain its M&A pace. Acquisitions may help CGI gain an edge over its peers, especially if the targets are well aligned culturally and able to significantly widen CGI’s reach into metro markets. However, to maintain momentum in the long term and attract more clients based on its innovation capabilities, CGI will need to couple the accelerated revenue growth with AI investments that signal productivity improvements. TBR anticipates CGI will continue to expand revenue faster than its peers during its fiscal year, likely growing 5.0% in Canadian dollars and 1.2% in U.S. dollars. Nevertheless, the event reinforced CGI’s reputable strength: forming strong, in-depth, long-term relationships with its clients.

EY Reinvents Its People Advisory Services, Leaning on a Single Methodology to Drive Successful Change

In late April, TBR visited with EY’s People Advisory Services (PAS) team at the firm’s Boston office. Rapid changes in the HR role, along with the need to effectively manage the workforce amid ongoing transformation and increased technology adoption, have driven greater investment in the function and a need to effectively manage the changes. The discussion centered on how these external drivers led to changes in EY’s services. First was a thorough review of the global EY Change Experience (ChX) method, which is an updated, data-driven change management approach that focuses on achieving business outcomes through behavior change. Second was a preview of EY’s CHRO (chief human resources officer) 2030 study, which was published in early June and emphasizes the need for talent readiness and business focus.

Transformation within EY’s PAS

Starting off the discussion, Randy Beck, EY’s Global Organization and People leader, spoke at length about the major market priorities and activities that have been the focus in 2025 as the practice works to provide leading methodologies and thought leadership to clients. EY’s PAS guides client workforce strategies, impacting people experience, organization and workforce transformation, HR transformation, rewards and people transactions, and people mobility.

 

A key area of emphasis was updating the change management service offering to align with market needs. This year, EY implemented a single global, modernized change management methodology, moving away from the over 33 different methodologies previously used across PAS engagements. This unified methodology distinguishes EY from its competitors, enhances global consistency, improves the employee experience, and ensures that clients’ transformations through PAS are more successful and enduring. The methodology seeks to create a data-driven, proactive rather than reactive approach to change, enabling clients to prepare and adjust to the transformation turning points. As part of the launch and to support consistency, EY retrained 2,600 employees in the practice across 60-plus countries using e-learning to recertify its staff. This mandatory training ensures EY’s practitioners will follow the same methodology moving forward.

 

In structuring the methodology, EY sought to embrace modularity and scalability, enabling the company to meet clients at their current maturity point and prepare them for future changes. EY consultants collaborate with clients as change management solution architects to design suitable project approaches, utilizing appropriate tools and technologies to address their requirements.

 

EY sets itself apart from its peers through the methodology, backing its transformation with data and technology and addressing the people side of organizational change. In the methodology, EY calls out Meaning, Empowerment and Growth as the change conditions, or guiding principles, which are applied at the individual, team and organization levels throughout a project to make change stick. Each project approach is also centered on delivering change components under the four change pillars: leadership, engagement, confidence and proficiency.

 

To align with the company’s overall vision and better guide change, EY ensures that members of clients’ leadership teams are in position to guide the transformation. Throughout the engagement, EY provides resources and training for clients, enabling them to support and uphold their transformations. Fostering confidence using data, metrics and established governance exemplifies the need for change and encourages support across the organization. Lastly, the need for skilled individuals to lead and uphold change highlights the importance of ongoing skills development, underscoring the pillar of proficiency.

 

EY shared six additional practices that differentiate the company in the market. One key feature that EY practice leaders demonstrated to TBR was the Network Analysis tool, which plays a critical role in driving effective organizational change. The tool maps trusted networks within the organization and identifies influential leaders, enabling those leaders to champion change and help ensure it takes hold.
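
As a generic illustration of the kind of analysis such a tool performs (not EY’s proprietary implementation), the sketch below uses the open-source networkx library to rank people in a small, made-up collaboration network by betweenness centrality, a common proxy for identifying the influential connectors a network analysis surfaces.

```python
import networkx as nx

# Toy collaboration network; names and edges are entirely made up for illustration.
G = nx.Graph()
G.add_edges_from([
    ("Ana", "Ben"), ("Ana", "Cara"), ("Ben", "Cara"),
    ("Cara", "Dev"), ("Dev", "Elle"), ("Dev", "Farid"),
    ("Elle", "Farid"), ("Farid", "Gita"),
])

# Betweenness centrality highlights people who sit on many shortest paths, a
# common proxy for the "influential connectors" a network analysis tool surfaces.
centrality = nx.betweenness_centrality(G)
for person, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{person}: {score:.2f}")
```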

 

The development of the modernized Change Experience methodology reflects EY’s PAS practicewide commitment to a more modular, proactive and people-centric approach to change. Using the platform, EY seeks to better guide change for its clients and establish “change as a muscle,” enabling the company to be proactive and a part of clients’ normal operations. While acknowledging that change is necessary, EY also recognizes that change can be personal, highlighting the importance of change as an experience that fully engages client talent to make the transformation more successful. Further, the platform drives the value of outcomes, reflecting an industrywide shift toward outcomes-based engagements and risk-sharing on projects.

Conclusion

As workforce and employee experience grow increasingly critical in the era of rapid technological advancement, EY’s refreshed approach within PAS — centered on a unified methodology and a stronger focus on people experience — helps distinguish the firm from its peers and better aligns with technology-driven transformation initiatives. Further, taking a global approach to retraining and methodology creates a more unified approach within the firm to better engage with clients and navigate market change.

 

EY has structured its portfolio to meet clients where they are in their transformation journey, delivering solutions that empower them to embrace change and lead their own initiatives effectively. Integrating experience and continuous change into people advisory and workforce transformation strengthens EY’s competitive edge while enabling clients to sustain long-term progress, anchored by technology.

AI Inferencing Takes Center Stage at Red Hat Summit 2025

In late May, Red Hat welcomed thousands of developers, IT decision makers and partners to its annual Red Hat Summit at the Boston Convention and Exhibition Center (BCEC). Like the rest of the market, Red Hat has pivoted around AI inferencing, and this conference marked the company’s entry into the market with the productization of vLLM, the open-source project that has been shaping AI model execution over the past two years. Though Red Hat’s push into AI inferencing does not necessarily suggest a deemphasis on model alignment use cases (e.g., fine-tuning, distillation), which was the company’s big strategic focus last year, it is a recognition that AI inferencing is a production environment and that the process of running models to generate responses is where the business value lies. Red Hat’s ability to embed open-source innovation within its products and lower the cost per model token presents a sizable opportunity. Interestingly, Red Hat’s prospects are also evolving in more traditional markets. For instance, Red Hat’s virtualization customer base has tripled over the past year, with virtualization emerging as a strategic driver throughout the company’s broader business, including for communication service providers (CSPs) adopting virtualized RAN and within other domains such as their IT stacks and the mobile core.

Red Hat pivots around AI inferencing

Rooted in Linux, the basis of OpenShift, Red Hat has always had a unique ability to leverage its assets to expand into new markets and use cases. Of course, AI is the most relevant example, and two years ago, Red Hat formally entered the market with Red Hat Enterprise Linux (RHEL) AI — the tool Red Hat uses to engage AI developers — and OpenShift AI, for model lifecycle management and MLOps (machine learning operations) at scale. These assets have made up the Red Hat AI platform, but at the Red Hat Summit, the company introduced a third component, AI Inference Server, in addition to new partnerships and integrations further designed to make agentic AI and inferencing realities within the enterprise.

 

AI and generative AI (GenAI) are rapidly evolving, but the associated core challenges and adoption barriers, including the high cost of AI models and the sometimes arduous nature of providing business context, remain largely unchanged. Between IBM’s small language models (SLMs) and Red Hat’s focus on reducing alignment complexity, both companies have crafted a strategy focused on addressing these challenges; they aim not to develop the next big AI algorithm, but rather to serve tangible enterprise use cases in both the cloud and the data center.

 

Red Hat has a well-established track record of delivering enterprise-grade open-source innovation, and if the company’s disruption with Linux over two decades ago is any indication, it is well positioned to deliver real, cost-effective solutions for the enterprise based on reasoning models and AI inferencing.

Red Hat productizes vLLM to mark entry into AI inferencing

Though perhaps lesser known, vLLM — an upstream open-source project boasting roughly half a million downloads in any given week — is leveraged by most large language models (LLMs) today. At its core, vLLM is an inference server that helps address “inference-time scaling,” or the budding notion that the longer the model runs or “thinks,” the better the result will be. Of course, the challenge with this approach is the cost of running the model for a longer period of time, but vLLM’s single-server architecture is designed to optimize GPU utilization, ultimately reducing the cost per token of the AI model. Various industry leaders — namely NVIDIA, despite having its own AI model serving stack; Google; and Neural Magic, which Red Hat acquired earlier this year — are leading contributors to the project.
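
For readers unfamiliar with the project, the snippet below is a minimal offline-inference sketch using vLLM’s documented Python API. The model identifier is an illustrative assumption, and the example shows the upstream open-source project rather than Red Hat’s packaged AI Inference Server.

```python
from vllm import LLM, SamplingParams

# Minimal vLLM offline-inference sketch. The model identifier is an illustrative
# assumption; any Hugging Face-hosted model supported by vLLM could be used.
llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["Explain why GPU utilization matters for cost per token."], params)
for output in outputs:
    print(output.outputs[0].text)
```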

 

Leveraging its rich history of turning open-source projects into enterprise products, Red Hat launched AI Inference Server, based on vLLM, marking Red Hat’s first offering from the Neural Magic acquisition. AI Inference Server is included with both RHEL AI and OpenShift AI but can also run as its own stand-alone server. Though perhaps inclined to emphasize IBM’s watsonx models, Red Hat is extending its values of flexibility, choice and meeting customers where they are to AI Inference Server. This new offering supports accelerators outside IBM, including NVIDIA, AMD, Intel, Amazon Web Services (AWS) and Google Cloud, and offers Day 0 support for a range of LLMs. This means that as soon as a new model is released, Red Hat works with the provider to optimize the model for vLLM and validate it on Red Hat’s platform.

 

Building on vLLM’s early success, Red Hat announced LLM-d, a new open-source project, at the Red Hat Summit. LLM-d transcends vLLM’s single-server architecture, allowing inference to run in a distributed manner, further reducing the cost per token. Given the cost, most will agree that inferencing will necessitate distributed infrastructure, and several recent examples across the tech landscape allude to this. LLM-d is being launched with support from many of vLLM’s same contributors, including NVIDIA and Google (LLM-d runs on both GPUs and TPUs [tensor processing units]).

Partnership with Meta around MCP is all about empowering developers and making agentic AI enterprise-ready

If Google’s launch of A2A (Agent2Agent) protocol is any indication, Anthropic’s Model Context Protocol (MCP), which aims to standardize how LLMs discern context, is gaining traction. At the Red Hat Summit, Red Hat committed to MCP by announcing it will deliver Meta’s Llama Stack, integrated with MCP, in OpenShift AI and RHEL AI.

 

To be clear, Red Hat supports a range of models, but Meta went the open-source route early on, bringing Llama Stack, an open-source framework for building specifically on Llama models, into the Red Hat environment. This not only exposes Red Hat to another ecosystem but also provides APIs around it. Enlisting Meta at the API layer is an important aspect of this solution, as it enables customers to consume the solution and build new agentic applications with MCP playing a key role in contextualizing those applications within the AI enterprise. It is still early days for MCP, and making the protocol truly relevant in enterprise use cases will take some time and advancement in security and governance. But Red Hat indirectly supporting MCP within its products signals the framework’s potential and Red Hat’s role in bringing it to the enterprise.

Who would have thought we would be discussing virtualization in 2025?

In 2025, in a world dominated by AI, you don’t often hear of a company putting virtualization at the top of its strategic imperatives list. However, everyone has seen how Broadcom’s takeover of VMware has caused a ripple in the market, with customers seeking cheaper, more flexible alternatives that will not disrupt their current cloud transformation journeys. In fact, when we surveyed enterprise IT decision makers, 42% of respondents indicated they still intend to use VMware, but most plan to do so in a reduced capacity. Of those planning to continue using VMware, a notable 83% are still evaluating other options*.

 

“Options both have increased the prices across the board, 20% to 30%, which is pretty significant. So, you could say myself and my peers are not very happy with the Broadcom method on that, and we’re looking at, you know, definitely options to migrate off VMware when possible. We’re definitely looking at Citrix, and then options from Red Hat and Microsoft.” — CTO Portfolio Manager, Consumer Packaged Goods

 

As a reminder, after Red Hat revolutionized Linux in the early 2000s, the company’s next big endeavor was virtualization. With the rise of cloud-native architectures, Red Hat quickly pivoted around containers, and this is where the company remains most relevant today. However, through the KVM (kernel-based virtual machine) hypervisor, which would eventually be integrated with OpenShift, virtualization has always been a part of the portfolio. Over the past year, given the opportunity surrounding the VMware customer base, Red Hat has actively revisited its virtualization roots in a few primary ways.

 

First, given the risky nature of switching virtualization platforms, Red Hat crafted a portfolio of high-touch services around OpenShift Virtualization, including Migration Factory and a fixed-price offering called Virtualization Migration Assessment. These services from Red Hat Consulting, which are offered in close alignment with global systems integrator (GSI) partners, help customers migrate virtual machines (VMs) as quickly as possible while minimizing risk, which largely stems from helping customers migrate VMs before modernizing them.

 

Second, Red Hat has focused on increasing public cloud support. Red Hat announced at the summit that OpenShift Virtualization is now available on Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure (OCI), in addition to previously announced support for IBM Cloud and AWS, officially making the platform available on all major public clouds. Making OpenShift Virtualization available across the entire cloud ecosystem reinforces how serious Red Hat is about capturing these virtualization opportunities. These integrations will make it easier for customers to use their existing cloud spend commitments to offload VMware workloads to any cloud of their choice while maintaining the same cloud-native experience they are used to.

 

Of course, there will always be a level of overlap between Red Hat and the hyperscalers, but ultimately the hyperscalers recognize Red Hat’s role in addressing the hybrid reality and enterprises’ need to move workloads consistently across clouds and within data centers, and they welcome a more feature-rich platform like OpenShift that will spin the meter on their infrastructure.

With virtualization, Red Hat is allowing partners to sell infrastructure modernization and AI as part of the same story

At the conference, we heard from established Red Hat customers that have extended their Linux and container investments to virtualization. Examples included Ford and Emirates NBD, which has over 37,000 containers in production and is now migrating 9,000 VMs to Red Hat OpenShift Virtualization for a more consistent tech stack. Based on our conversations with customers, these scenarios — where VMs and containers run side by side — are not an easy sell and require a level of buy-in across the organization.

 

That said, if customers can overcome some of these change management hurdles, this side-by-side approach can offer numerous benefits, largely by creating greater consistency between legacy and cloud-native applications without significant refactoring. Though some GSIs may be better suited to the infrastructure layer than others, partners should recognize the opportunity to use OpenShift Virtualization to have client discussions around broader AI transformations. One of the compelling aspects of Red Hat is that even as it progressed through different phases — Linux, virtualization, containers and now AI — the hybrid platform foundation has remained unchanged. If customers can modernize their infrastructure on the same platform, introducing AI models via OpenShift AI becomes much more compelling.

Virtualization remains a key driver of telecom operator uptake of Red Hat solutions, but AI presents a significant upsell opportunity

Over the past few years, Red Hat has leveraged its virtualization technology in the CSP market, making significant progress in landing new CSP accounts and expanding its account share within this unique vertical. The company’s growth in this market has been aided by factors such as Broadcom’s acquisition of VMware, which initially caused a wave of CSPs to migrate to Red Hat due to the uncertainty surrounding VMware’s portfolio road map. Broadcom’s price hikes are causing a second wave of switching that TBR anticipates will continue for several years.

 

However, Red Hat has also succeeded in more deeply penetrating the telecom vertical due to its savvy marketing, which at times emphasizes that its solutions are “carrier-grade,” along with persistent efforts to raise awareness within the CIO and CTO organizations of CSPs that virtualization and hybrid multicloud strategies will have significant ROI for CSPs. This has led to strong adoption of Red Hat OpenStack and OpenShift, although the Ansible automation platform has lagged in terms of CSP adoption, as this customer segment prefers to use the free, open-source version of Ansible.

 

As CSPs iterate on their AI strategies, Red Hat has the opportunity to play a significant role, including with its new AI Inference Server, as CSPs increasingly embrace edge compute investments. CSPs need to invest upfront to capitalize on the cost efficiency and revenue generation opportunities offered by AI, and Red Hat can help guide them in this direction. CSPs have difficulty moving quickly when new, disruptive technologies emerge, and, with AI specifically, they have trouble evaluating and testing AI models themselves due to a lack of in-house expertise. Additionally, they feel constrained by regulations and are concerned about compromising data privacy. Red Hat’s dedicated telecom vertical services can help alleviate these concerns and accelerate CSPs’ investments in AI infrastructure.

Final thoughts

Based on our best estimate, roughly 85% of AI’s current use is focused on training and only 15% on inferencing, but the inverse could be true in the not-too-distant future. Moreover, AI inferencing will likely occur at distributed locations for latency and scale reasons — a scenario that plays to Red Hat’s hybrid platform and its ability to help customers “write once, deploy anywhere,” which remains core to the company’s value proposition. That is one of the compelling aspects of a platform-first approach; even as new components such as AI models are introduced, the core foundation remains unchanged.

 

Though all of Red Hat’s new innovations, including AI Inference Server and the LLM-d project, do not necessarily suggest a deemphasis on model alignment with assets like InstructLab, it is clear Red Hat is pivoting to address the inference opportunity. With its trusted experience productizing open-source innovation and its ability to exist within a broad technology ecosystem of hyperscalers, OEMs and chip providers, Red Hat is in a somewhat unique position to help transition AI inference from an ideal to an enterprise reality.

 

Further, Red Hat’s virtualization prospects are growing, as TBR’s interactions with customers continue to indicate that they are looking for new alternatives. If the hyperscalers’ recent earnings reports are any indication, the GenAI hype is waning, and we suspect many enterprises will refocus on infrastructure modernization to ultimately move beyond basic chatbots and lay the groundwork for the more strategic applications that inferencing will enable. It will be interesting to see how Red Hat capitalizes on new virtualization opportunities with its hyperscaler and services partners as part of a joint effort to bring customers to a modern platform, where VMs and containers can coexist and drive discussions around AI.

 

*From TBR’s 2H25 IT Infrastructure Customer Research

A Challenger Mindset Transforms HCLTech’s Approach to Financial Services to Achieve Success Through AI

HCLTech hosted industry analysts and advisers on May 13 at the ASPIRE at One World Observatory in New York City. Throughout the afternoon, HCLTech executives, leaders and clients spoke at length about the company’s financial services positioning, direction and activities amid disruption from AI and digital acceleration.

Introduction

During the event, HCLTech leaders consistently highlighted how the company’s culture, deep engineering expertise and unique approach to AI set it apart from its peers and strengthen client relationships. These points were echoed by two financial services clients during a panel discussion. Differentiation remains a challenge for all vendors, yet HCLTech emphasized that although the company may not be different in what it does, it is unique in its approach.

Balancing risk, innovation and talent investment

The event began with a presentation by HCLTech CEO and Managing Director C Vijayakumar (CVK), who gave an overview of the company’s current positioning and future plans. The session centered on HCLTech’s evolution toward an engineering and platform-based mindset and its move away from a traditional model that is no longer relevant as the balance between revenue and talent volumes changes. To adapt its business model and better align with the needs of clients and the market, CVK announced HCLTech’s goal of doubling its revenue with only half of its previous headcount.

 

As roles within the organization have begun to change with the integration of new technology, including AI, HCLTech has had to begin transforming the company’s structure. Revenue per employee has always been a KPI for HCLTech to ensure the company decouples revenue growth from headcount growth. HCLTech’s attention to the metric is reflected in its ability to maintain peer-leading levels relative to Cognizant, Infosys, Tata Consultancy Services (TCS) and Wipro IT Services (ITS), whose trailing 12-month (TTM) revenue per employee was $59,304, $60,338, $49,692 and $45,270, respectively, in 1Q25 — each below HCLTech’s figure of $62,360.
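
As a simple illustration of how that KPI is derived, the sketch below divides a trailing 12-month revenue figure by average headcount. The input values are hypothetical placeholders for illustration only, not company-reported data.

```python
# Revenue per employee = trailing 12-month revenue / average headcount.
# The inputs below are hypothetical placeholders, not company-reported figures.
ttm_revenue_usd = 13_800_000_000   # example: $13.8B TTM revenue
avg_headcount = 220_000            # example: 220,000 employees

revenue_per_employee = ttm_revenue_usd / avg_headcount
print(f"Revenue per employee: ${revenue_per_employee:,.0f}")
```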

 

It is a lofty goal to deliver the same quality of service at the same speed with fewer people, even with the support of AI tools and strong partnerships. To achieve this goal, HCLTech will rely on its culture and talent, combined with its strategic technology investments including AI, digital and software solutions. CVK emphasized that HCLTech’s culture is deeply embedded in the company’s DNA, making it difficult for competitors to replicate. This culture fosters strong client trust and deepens relationships, as it consistently comes through in conversations with clients. By building on this foundation, HCLTech effectively leverages AI technologies to strengthen existing partnerships and secure new projects.

 

HCLTech’s client management and retention strategy reflects the company’s ability to embed itself within the client environment and serve as a key partner. HCLTech’s deep relationships have enabled the company to better identify and address client challenges as well as opportunities to recommend transformations to clients. As complexity increases across the technology landscape, HCLTech has had to evolve its approach to both new and existing clients. Client willingness to adopt AI tools can be tempered by concerns over managing multiple platforms and the associated risks.

 

As a result, HCLTech often takes a more measured and gradual approach with new clients, focusing on building trust and easing them into transformation. In contrast, with existing clients, HCLTech adopts a more assertive strategy — leveraging its deep understanding of their technology landscapes and industry-specific needs to drive adoption and deliver results more rapidly.

 

CVK closed his presentation by emphasizing the need to be proactive and carry a “paranoid mindset” to stay ahead of technology trends and remain relevant. HCLTech’s ability to build strong relationships with clients enables the company to guide transformations, equipping clients with the tools and services to be proactive and effectively leverage technology across their organizations. With a greater focus on outcomes, HCLTech’s positioning and relationships with clients provide a foundation for the company to grow its wallet share with clients as it balances risks with innovation and invests for future growth.

Demand for modernization and AI influences client needs within the financial services space

Srinivasan (Srini) Seshadri, HCLTech’s chief growth officer and Financial Services lead, discussed the company’s 50,000-person Financial Services practice, which as HCLTech’s largest industry group generated $2.9 billion in revenue during FY25. During the presentation, Seshadri emphasized five main features of the company’s Financial Services practice that help it drive value for clients: engineering DNA, outcome orientation, challenger mindset, verticalized services and innovation. The benefit of verticalized services stood out to TBR. A few years ago, HCLTech moved all its service lines under one vertical, creating a unified go-to-market strategy, enabling it to deepen its client relationships and positioning around transformation. As vertical and industry expertise does not provide differentiation on its own, HCLTech took it a step further, pairing its industry experience with service lines to better communicate its portfolio and drive value. Taking this approach pulls together HCLTech’s strengths and drives outcomes.

 

Key items influencing HCLTech’s Financial Services activities include adapting to changing regulations, increasing use of Global Capability Centers, and creating and implementing composable products. Aligning its portfolio and resources to help clients navigate current trends and operate more effectively guides HCLTech’s client approach.

 

For example, with the permeation of generative AI (GenAI) and increased adoption of the technology by clients seeking to remain relevant, Seshadri spoke about the evolution of GenAI from a buzzword to actual engagement and usage, including using GenAI to reimagine an autonomous future for Financial Services. HCLTech seeks to integrate GenAI solutions and tools within its clients’ operations, depending on maturity level and understanding, to drive end-to-end value chain transformation.

 

Helping clients use AI to make internal processes better and more efficient and to achieve their goals enhances HCLTech’s value proposition in the financial services industry and enables the company to gain new projects in sensitive areas such as regulation, governance and security.

Prioritizing the main areas within engineering, platform modernization and GenAI aligns HCLTech’s financial services expertise with its key service line strengths around business optimization, design and innovation and enables the company to support client transformations. Seshadri closed his presentation by acknowledging that transformation “is up to the client to implement.” HCLTech’s approach to deal generation is shaped by its deep understanding of culture and clients’ readiness to sustain transformation. By viewing AI as a means to enhance processes and operations — and by factoring in the longevity of each client relationship — HCLTech tailors the pace and intensity of technology integration. This ability to meet clients where they are and ensure lasting transformation distinguishes HCLTech from its peers.

Experience is key to client engagement

Building on Srini’s discussion, Ananth Subramanya, HCLTech’s EVP of Digital Business Services, talked about the industrywide shift in consumer loyalty from a physical product to the experience, with the experience driving the engagement. As clients increasingly demand rapid, relevant transformations that drive business outcomes, Subramanya emphasized the importance of balancing speed with stability — acknowledging that while stability may at times constrain velocity, it is essential for sustainable progress. The strategy helps users build resilience, enabling the customer experience (CX) to permeate the product and platform layers to ensure it influences each aspect of the client transformation.

 

HCLTech’s CX-centric delivery approach — anchored in both business processes and user interface (UI) design — deeply embeds the experience within clients’ operations and functions. This foundation empowers clients to engage more effectively and drive meaningful change. Additionally, by enabling end users to experience improvements more rapidly, the approach fosters stronger client loyalty and supports the development of long-term, strategic projects.

AI permeates approach to transformation

Diving more deeply into the impact of AI on financial services activities and client investments, Vijay Guntur, HCLTech’s CTO and head of Ecosystems, discussed the primary needs within financial operations: operational efficiency, accelerated innovation, CX and risk management. Key challenges around data quality and collection, the use of legacy systems, and scalability also remain critical within the financial services space. HCLTech’s investments across AI platforms and solutions have enabled the company to deliver on these needs while embedding industry knowledge to address key client concerns. The company’s four main AI and GenAI offerings are AI Force, AI Foundry, AI Labs and AI/GenAI Engineering. Through these offerings, HCLTech helps clients execute on decision making and handle complex workflows.

 

Using its AI Labs, which span six locations across the U.S., the U.K., Germany, India and Singapore, HCLTech can build and scale AI for clients, helping them work through the early stages of adoption and identify where technology can add value. The labs encapsulate HCLTech’s AI portfolio offerings and create opportunities to implement tools and solutions with the goal of driving value. As clients undergo transformation and modernization engagements, the labs help lower risk while increasing AI efficiency across IT operations, showcasing HCLTech’s portfolio and helping clients lead their own AI transformations.

 

The primary AI offering, AI Force, launched in March 2024, takes a platform approach to applying AI technologies across software development and engineering life cycle processes. Further development of the platform has enabled interoperability and driven greater adoption and AI usage. Guntur emphasized that the platform improves efficiency and shortens time to market, allowing clients to respond more quickly to market needs and remain relevant against peers. With agentic AI emerging as one of the most in-demand applications of the technology, AI Force’s ability to embed agentic workflows enhances efficiency and adds value.

 

The second offering, AI Foundry, accelerates product development and remodels the value stream using AI and data. By focusing on value streams, data modernization and AI built within a cognitive infrastructure, AI Foundry helps clients improve their business operations.

 

HCLTech has a long history of working with AI, building off DRYiCE, the company’s original automation platform. This heritage equips HCLTech with the background and trusted technical expertise, backed by its engineering prowess, to deliver on clients’ AI transformation needs. Further, HCLTech can pursue larger-scale and more aggressive AI-led transformations, helping the company accelerate ahead of its peers in terms of client engagement and growth.

Consulting serves as an entry point to broader financial services activities

In a panel discussion with financial services clients, HCLTech leaders discussed the company’s consulting services and main service line areas. Although consulting has not been a primary investment focus for HCLTech, the company has selectively built out consulting capabilities to address clients’ end-to-end modernization and technology needs. For example, in March 2019, HCLTech acquired Strong-Bridge Envision, a digital consulting firm that complemented its digital and analytics capabilities. Embedding this expertise across its portfolio strengthens the company’s ability to drive AI and platform adoption.

 

The company’s AI Labs are a central part of HCLTech’s consulting offerings. Through the labs, HCLTech delivers technology consulting services, helping clients identify the areas where they would most benefit from AI. As many clients, particularly within the financial services space, look to accelerate innovation and create new products and business models that keep them relevant, technology consulting services provide an essential entry point for addressing key areas of client transformations.

 

On the data front, many clients require consulting to organize and manage their datasets. Ensuring data is protected and well structured remains vital to valuable and trusted AI usage, increasing the importance of HCLTech’s ability to deliver on data needs in a timely manner.

While these consulting investments may offer limited scale, they are sufficient to remain competitive with peers and to guide clients effectively on AI adoption. This expertise aligns well with the company’s client management strategy, particularly in expanding relationships with existing clients — where HCLTech can lead with a proactive and open-minded approach.

Conclusion

HCLTech concluded the event with a wrap-up by CMO Jill Kouri, who noted key points about HCLTech’s positioning and direction as the company navigates client needs around AI. The main comment that struck TBR analysts referenced the need for a challenger mindset companywide. This approach will help HCLTech transform the way it delivers services and solutions to clients. Leading with a proactive and paranoid mindset embodies the challenger focus, allowing HCLTech to stay ahead of AI and technology trends while complementing its existing strengths.

 

The goal of doubling revenue with half the people will certainly present challenges for HCLTech, but the company’s culture and robust AI portfolio, which provides the technology, engineering expertise and resources needed to deliver on consulting services, will help the company move in the right direction. Further, leveraging an AI-intrinsic point of view, as opposed to an AI-first point of view, secures HCLTech’s positioning around AI and its trust-based relationships with clients, enabling the company to effectively address key market needs around efficiency and modernization.

Sage Analyst Summit: Keeping the Winning Playbook While Evaluating Emerging Changes to the Game

Connect, grow, deliver

TBR spent two days in Atlanta, listening to and speaking with Sage’s management team as part of the company’s annual Analyst Summit, and we walked away impressed. This is a company that knows itself and its strengths. It knows where it needs to improve. It knows where its pain points and constraints lie, and it has consistently done a good job of navigating them.

 

Most importantly, the company knows its customers, which should come as no surprise considering how long Sage has been serving its SMB install base. Sage has leveraged these strengths and established a large, sticky install base from which to pursue opportunities adjacent to its core business.

 

Sage is focused on three interlocking areas — connect, grow, deliver — which President Dan Miller described during the event:

  • Connect through trusted partner networks
  • Grow by winning new logos through a verticalized suite motion
  • Deliver real, measurable productivity using AI

Each pillar represents a separate part of the company’s go-to-market strategy, but Grow stands out as the most vital to the company’s growth trajectory. Landing and expanding with new logos is the company’s greatest source of revenue growth, with vertical-specific and business operations solutions offering some of the greatest upsell potential. Aligned with this strategy, the company is a disciplined but active acquirer, onboarding new IP to enhance these sales motions.

 

Long-term, AI presents opportunities for the company to upsell into its finance and accounting (F&A) core. As Sage leans into its strengths while building for the future, its ability to scale AI and industry depth across a known and trusted customer base may prove to be the company’s most valuable asset.

Landing with F&A, then expanding with payroll, HR and operations management

Sage’s land-and-expand strategy starts with a stronghold in finance and builds outward through operational adjacencies. Most customers enter through core accounting — typically via Intacct — and expand into areas like payroll, HR, and inventory or distribution management as their needs mature. Vertical-specific modules are critical to this motion, especially in midmarket industries where Sage can tailor functionality to operational nuances.

 

The company reinforces expansion by packaging these capabilities into suites, streamlining procurement and positioning itself as more than just a financial system. Sales teams are trained to identify expansion triggers early; signs like API adoption, workflow customization or manual process bottlenecks often indicate opportunities. Although the company’s product maturity varies across the portfolio, Sage has seen success in service- and product-centric verticals, enabling the company to upsell and cross-sell. This approach, combined with a focus on ease of integration and strong partner involvement, is helping Sage grow account value without overpromising in its product road map.

AI at Sage: Workflow-first, ROI-driven

Sage management spent much time discussing its ambitions in AI. From TBR’s perspective, the tone was very grounded. Although the company will never be at the cutting edge of AI innovation, management did a great job of articulating the current opportunities to upsell AI capabilities. Finance and accounting workflows offer many sales opportunities for Sage to pursue, and the company is investing in R&D to capitalize on them. Similar to many of its application peers, Sage intends to approach agentic AI and generative AI development on a use-case-by-use-case basis. In Sage’s case, this is even more prudent as SMB customers face greater budgetary restrictions and require ROI to be realized in the first year.

 

Sage management highlighted AP automation, time-saving prompts and variance analysis as key areas where the company is achieving success with AI-powered automation. As with several peers, the company’s Copilot solution serves as the unified user interface (UI) for engaging with embedded AI tools. Long-term, management expects this UI to become more adaptive, guiding the user through an automated workflow. Guided prompting was another area of focus, and the company is building a library of prompts for end users to leverage as they perform specific tasks. Under the hood, the company intends to run its AI tools on internally trained models built on top of third-party foundation models. CTO Aaron Harris discussed two of these tools: Sage Accounting LLM and APDoc2Vec.

 

As a reminder, Sage partnered with Amazon Web Services (AWS) over a year ago to collaborate on F&A models, and management highlighted the continued effort to build a new multitenant, dependency-based stack.

 

Long-term, TBR expects this work to be pivotal in reducing the cost of running AI workloads, while internally developed models with lower parameter counts than big-name large language models (LLMs) will further enhance cost efficiency at inference. Meanwhile, Sage is still figuring out how to monetize AI, but the industry default is to implement a tiered system. Some high-compute copilots may eventually carry usage-based fees, especially in forecasting, but for now, the priority is to show clear value and price accordingly.
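To put the parameter-count point in perspective, a rough back-of-envelope sketch is below; it assumes the standard approximation that a dense decoder-only transformer spends roughly 2 x its parameter count in compute per generated token, and the model sizes are purely illustrative rather than Sage’s actual figures.

```python
# Rough, illustrative back-of-envelope sketch (assumption, not Sage data):
# inference compute per generated token for a dense decoder-only transformer
# is approximately 2 FLOPs per parameter.
def flops_per_token(num_params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token."""
    return 2 * num_params

small_domain_model = flops_per_token(7e9)    # hypothetical 7B-parameter internal model
general_purpose_llm = flops_per_token(70e9)  # hypothetical 70B-parameter frontier LLM

# A ~10x smaller model needs roughly ~10x less compute per token served.
print(f"Relative inference compute: {general_purpose_llm / small_domain_model:.0f}x")
```

Under those illustrative assumptions, a smaller, domain-tuned model serves each token at roughly one-tenth the compute of a general-purpose LLM, which is the mechanism behind the cost-efficiency claim.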

 

In 2025 no conversation is complete without recognizing the platform implications of agentic workflows. Behind the scenes, Sage is preparing for an agent-first architecture by integrating emerging frameworks, such as Model Context Protocol (MCP) and Agent2Agent (A2A), directly into its platforms. The goal is to coordinate these through super agents and plug into the broader agent ecosystem (Salesforce, Microsoft, Google), but this remains part of the long-term road map.
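For readers less familiar with MCP, the minimal sketch below shows what exposing a business capability as an MCP tool looks like in practice. It uses the open-source MCP Python SDK; the server name and the variance-analysis tool are hypothetical illustrations of the pattern, not Sage’s actual implementation.

```python
# Minimal, hypothetical MCP server sketch using the open-source MCP Python SDK.
# Server name and tool logic are illustrative only, not Sage's design.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("finance-tools")  # hypothetical server exposing F&A capabilities

@mcp.tool()
def variance_analysis(actual: float, budget: float) -> dict:
    """Return the absolute and percentage variance between actual and budgeted spend."""
    variance = actual - budget
    pct = (variance / budget * 100) if budget else 0.0
    return {"variance": round(variance, 2), "variance_pct": round(pct, 2)}

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so any MCP-capable agent can invoke the tool
```

Once a capability is published this way, any agent framework that speaks MCP can discover and call it, which is what makes the protocol attractive as a plug-in point for the broader agent ecosystem described above.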

 

That said, the company is building for the future, with an emphasis on data model consistency, dependency-based deployment, and orchestration layers capable of managing multi-agent chains. This is all being done with AWS in the background, keeping the platform anchored at the infrastructure layer.

Sage deepens its partner relationships

Sage’s partner and go-to-market strategy is built for focus and leverage. The company cannot cover every vertical or service need on its own, so partners are central to how it sells, delivers and scales. The revamped Sage Partner Network is tighter, with clear roles across sell, build and serve motions, and expectations tied to growth, not just activity. Multiyear vertical plans, coinvestment and execution discipline are now baseline requirements.

 

Internally, the GTM engine runs through SIGMA, which ties product planning to what the direct and partner channels sell. Sales teams are trained to package suites, identify expansion triggers, and position the platform by vertical need, rather than a feature checklist. To prepare for the platform’s evolution, Sage is already laying the groundwork for a more extensible ecosystem, including plans for an agent marketplace that would give partners a direct path into the next wave of product delivery.

Staying the course and preparing for what lies ahead

Sage’s story at its annual Analyst Summit was not necessarily one of reinvention. Land and expand has been the company’s strategy for years, and it has worked well so far. By anchoring in finance, expanding through vertical suites and operations management, and keeping partners close to the motion, Sage is executing with clarity around who it serves and how it wins. Meanwhile, the platform is evolving, AI is taking shape, and the architecture is catching up to the ambition. None of the company’s claims felt like overpromising.

 

In a market filled with transformation stories, Sage is running a disciplined play. The question is whether it can maintain that discipline as it scales and converts its product investments, especially in AI and agentic workflows, into tangible value for the customers it already knows best.