AWS re:Invent 2024: Innovating and Integrating to Meet AI’s Moment

AWS re:Invent 2024 Overview

Matt Garman kicked off his first re:Invent conference as AWS’ CEO, reinforcing a strategy that has been rooted in AWS’ DNA for over a decade: the notion of “building blocks,” in other words, the 220-plus native services AWS offers that each cater to a specific workload and, when used together in a modular fashion, can address specific use cases. This approach of offering the broadest set of out-of-the-box, user-friendly tools to attract new applications, spin the IaaS meter and feed the lucrative flywheel effect AWS is known for has naturally garnered a lot of interest from developer and startup communities. But Garman was quick to remind us how far AWS has come in catering to the large enterprise.

 

As an example, Garman welcomed JPMorgan Chase Global CIO Lori Beer to the stage to share the company’s aggressive cloud transformation, which consisted of growing from 100 applications on AWS in 2020 to over 1,000 today, powered by a range of services, from Graviton chips to SageMaker to AWS’ fastest-growing service, Aurora. If this success story is any indication and if we factor in the feedback from our own C-Suite discussions, this building-block approach appears to be resonating, solidifying AWS’ position as the leading IaaS & PaaS provider. But with every new application poised to have some AI or generative AI (GenAI) component, this budding technology is raising the stakes, and the hybrid-multicloud reality means customers have a lot of options when it comes to crafting new workloads.

Compute is the foundational building block, with a heavy focus on AI training

Today, AWS offers over 850 Amazon Elastic Compute Cloud (EC2) instance types, and on average, 130 million new EC2 instances are launched daily. This pace of innovation and scale is largely due to AWS’ approach to the virtualization stack, dating back to 2012 with the Nitro System, which other hyperscalers have since emulated in their own ways; it has made compute the foundational building block and hallmark of AWS’ success. Though at the event AWS touted its commitment to NVIDIA, with support for Blackwell GPUs coming online next year, as well as to general-purpose workloads via Graviton, much of the focus was on AI training.
 

Since it first announced its Trainium chip in 2020, AWS has served the needs of AI training workloads, but now AI-driven ISVs like Databricks and Adobe seem to have an appetite for these chips, hoping to deliver cost and performance efficiencies to their wide swath of customers that also run on AWS. It is why AWS launched Trainium 2 and is making these EC2 instances, each of which encompasses 16 Trainium 2 chips, generally available following a period in preview. AWS also reinforced its commitment to continuing to push the compute boundaries on AI training, announcing that Trainium 3, which will be available later next year, will reportedly offer double the compute power of Trainium 2.

Rise of the distributed database

Another core building block of the cloud stack is the database. Distributed databases are nothing new but have been picking up steam as customers in certain industries, including the public sector, want to keep data stored within country borders while scaling across different regions. At the event, AWS introduced Aurora DSQL, a distributed SQL database that, at its core, isolates transaction processing from the storage layer so customers can scale across multiple regions with relatively low latency.
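For illustration only, and not drawn from AWS documentation: because Aurora DSQL exposes a PostgreSQL-compatible interface, application code can treat a regional cluster endpoint like an ordinary Postgres database while DSQL coordinates consistency across regions behind the scenes. The endpoint, credential handling and table in this sketch are hypothetical placeholders.

```python
# Minimal sketch: writing to a PostgreSQL-compatible Aurora DSQL endpoint.
# Endpoint, database name, table and credential handling are hypothetical;
# real DSQL clusters rely on short-lived IAM auth tokens rather than static passwords.
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.dsql.us-east-1.on.aws",   # hypothetical regional endpoint
    dbname="postgres",
    user="admin",
    password="<auth-token>",                   # placeholder for a short-lived token
    sslmode="require",
)

with conn, conn.cursor() as cur:
    # A standard transactional write; the application sees ordinary SQL semantics.
    cur.execute(
        "INSERT INTO orders (order_id, region, total) VALUES (%s, %s, %s)",
        ("o-1001", "us-east-1", 42.50),
    )

conn.close()
```

The point of the sketch is that the application-facing experience is plain PostgreSQL; multi-region coordination is handled by the service rather than by application code.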
 

This development comes at an interesting time in the cloud database market. Database giant Oracle is shaking up the market, making its services available on all leading clouds, including AWS, with the Oracle Database@AWS service now in limited preview. But AWS is focused on choice. While the IaaS opportunity to land Oracle workloads was too good to pass up, particularly when Microsoft Azure and Google Cloud Platform (GCP) are doing the same thing, AWS wants to continue pushing the performance boundaries of its own databases. In fact, it was Google Cloud that AWS targeted at the event, boasting that Aurora DSQL handles read-write operations four times faster than Google Spanner.
 

Watch On Demand: Monetizing GenAI: Cloud Vendors’ Investment Strategies and 2025 Outlook

Creating more unity between data and AI was somewhat inevitable

Jumping on the platform bandwagon, AWS morphs SageMaker into SageMaker AI

AWS launched SageMaker seven years ago, and the machine learning development service quickly emerged as one of AWS’ most popular, innovative offerings, adding 140 new features in the last year alone. But when GenAI and Amazon Bedrock came on the scene, SageMaker found a new home in the GenAI portfolio, acting as the primary tool customers use to fine-tune foundation models they access through the Bedrock service. So, from a messaging perspective, it was not surprising to see AWS announce that SageMaker is becoming SageMaker AI. But what is notable is how SageMaker AI is being marketed, integrated and delivered.

 

First, AWS VP of Data and AI Swami Sivasubramanian introduced the SageMaker AI platform as a one-stop shop for data, analytics and AI, underpinned by SageMaker Unified Studio, which consolidates several disparate AWS data and analytics tools, from Redshift to Glue, into a single environment. Just as importantly, Unified Studio offers a native integration with Bedrock so customers can access Bedrock for GenAI app development within the same interface, as well as Q Developer for coding recommendations.
 

The second important piece is how data is accessed for SageMaker AI. The foundational layer of the SageMaker AI platform is SageMaker Lakehouse, which is accessible directly through Unified Studio, so customers can work from a single copy of data regardless of whether it sits in data lakes they created on S3 or in the Redshift data warehouse. This means customers do not have to migrate any existing data to use SageMaker Lakehouse, and they can query data stored in Apache Iceberg format as it exists today. Competitors and/or partners like Microsoft, Oracle and Databricks have made big leaps forward in their data lake messaging, so the SageMaker Lakehouse announcement, combined with adjacent S3 developments like S3 Tables for the automatic maintenance of Apache Iceberg tables, aligns with the market and is a big reaffirmation of the Apache Iceberg ecosystem.
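As a generic illustration of what that open Iceberg path looks like (this is standard Apache Iceberg on Spark, not SageMaker Unified Studio itself; the catalog name, warehouse bucket and table are hypothetical), any engine that speaks Iceberg can query the same table files in place, without copying them:

```python
# Minimal sketch: reading an Apache Iceberg table in place with PySpark.
# Requires the iceberg-spark-runtime and hadoop-aws packages on the classpath;
# catalog name, warehouse bucket and table are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-read")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3a://example-bucket/warehouse")
    .getOrCreate()
)

# The table is queried where it lives; no copy or migration is required.
spark.sql("""
    SELECT region, SUM(total) AS revenue
    FROM lake.sales.orders
    GROUP BY region
""").show()
```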

 

In our view, SageMaker AI is a big development for a couple of reasons. First and foremost, it could go a long way in addressing one of the top concerns we often hear from customers pertaining to AWS, which is that they want consistent data without having to leverage multiple disparate services to carry out a task. SageMaker is still available as a stand-alone service for customers that have a specific requirement, but we suspect a lot of customers will find value in having the full AI life cycle, from initial data wrangling up through model development, served as part of a unified experience. Since AWS launched the first EC2 instance in 2006, formalizing cloud computing as we know it today, we have watched the market gradually shift toward more complete, integrated solutions. From IBM to Microsoft, many of IT’s biggest players take a platform-first approach to ease common pain points like integration and cost in hopes of enabling true enterprise-grade digital transformation, and SageMaker AI signifies a step in this direction.
 

Secondly, SageMaker AI aligns AWS more closely with what competitors are doing to better integrate their services and sell data and AI as part of the same story. Considering the consolidation of services, data lake architecture and copilot (Amazon Q) integration, Microsoft Fabric is the most notable example, and while there are big technical differences between the two platforms, you can now draw parallels between the two companies in how they are trying to better address the data layer in a broader AI pursuit. For context, TBR’s own estimates suggest Microsoft Azure (IaaS & PaaS) will significantly narrow AWS’ revenue lead, if not overtake AWS, by 2027, and a lot of customers we talk to today give Microsoft a leg up on data architecture. Nothing can displace Microsoft’s ties to legacy applications and the data within them, but SageMaker AI is clearly in step with the market, and if AWS can effectively engage partners on the data side, this solution could help AWS retain existing workloads and compete for new ones.

AWS’ values of breadth and accessibility extend to Bedrock

Because Bedrock and SageMaker go hand in hand, having a Bedrock IDE (integrated development environment) directly in SageMaker makes a lot of sense. This means that within SageMaker AI, customers can access all the foundation models Bedrock supports and the various capabilities, like Agents and Knowledge Bases, that AWS has been rolling out to its audience of “tens of thousands” of Bedrock customers, a figure that reportedly reflects fivefold growth in the last year alone. In true AWS fashion, offering the broadest set of foundation models is integral to the Bedrock strategy. This includes adding support for models from very early-stage AI startups like Luma and poolside, tying them to AWS infrastructure early on and helping grow them into competitive ISVs over time.
 

Another key attribute of Bedrock has always been democratization, making access to foundation models as seamless as possible through a single API and hosting experience. In line with this strategy, AWS launched Bedrock Marketplace to make it easier for customers to find and subscribe to the 100-plus foundation models Bedrock supports, including those from Anthropic, IBM and Meta, as well as Amazon itself. AWS is the king of marketplaces, so having a dedicated hub for AI models, from startup offerings to enterprise-grade options, as part of a single experience is certainly notable and further fuels the shift in buyer behavior toward self-service.
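As a rough sketch of what that single-API experience looks like in practice (the Region and model ID below are illustrative choices, not a recommendation), Bedrock’s runtime API lets the same call pattern address different hosted models:

```python
# Minimal sketch: calling a Bedrock-hosted foundation model via the Converse API.
# The Region and model ID are illustrative; swapping the model ID is typically all
# that changes when targeting a different supported model.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{
        "role": "user",
        "content": [{"text": "Summarize the benefits of a data lakehouse in two sentences."}],
    }],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```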

Partners take note: Security, modernization and marketplace

Despite all the talk around AI and GenAI, security remains the No. 1 pain point when it comes to cloud adoption and was a big theme in the partner keynote. AWS’ VP of Global Specialists and Partners, Ruba Borno, reinforced the importance of AWS’ various specialization programs to demonstrate skills to clients in key areas including security. During the keynote, AWS announced new security specializations, including one around AWS’ Security Lake service. This is a pretty telling development for partners; Security Lake was a service essentially designed with partners in mind, allowing many services-led firms to build integrations and attach managed services. Now these partners can demonstrate their skills with Security Lake to customers, along with other areas in the realm of security, such as digital sovereignty, which aligns with AWS’ upcoming launch of the European Union (EU) Sovereign Cloud region.

 

Aside from security, AWS emphasized modernization and the need for partners to think beyond just traditional cloud migration opportunities. It is why AWS launched new incentives for modernization, including removing funding caps within MAP (Migration Acceleration Program), and rebranded the AWS Migration Competency as the AWS Migration and Modernization Competency. This is telling of where AWS wants partners to focus and, in many cases, how it wants them to change the conversation with buyers, emphasizing the role of modernization as part of the migration process. Considering how difficult it has become for services players to compete on migration services alone, as well as the fact that modernization could set the stage for more GenAI usage with tools like Q Developer, we believe this is aligned with where many global systems integrators are headed anyway.

Expanding the reach of AWS Marketplace

No partner discussion would be complete without AWS Marketplace, AWS’ pervasive hub where customers can buy and provision software using their existing cloud spend commitments. Year to date, AWS reports that essentially all of its top 1,000 customers buy on AWS Marketplace, and usage spans several industries, including the public sector, which has reportedly transacted over $1 billion on AWS Marketplace in the past year. At re:Invent, AWS continued to take steps to expand the reach of AWS Marketplace and get partners to better engage customers through this channel with the availability of Buy with AWS, an option that allows customers to access AWS Marketplace directly from a partner’s website.

Final thoughts

re:Invent showcased how AWS is pushing the envelope, in both breadth and capability, on the compute, database and AI building blocks customers use to solve specific use cases in the cloud. This approach, coupled with innovations like Bedrock Marketplace and a commitment to early-stage startups, speaks to how AWS will continue to lean into the core strengths that have made the cloud provider what it is today. But just as notably, offerings like SageMaker AI and an alliance with competitor Oracle show how AWS is embracing new tactics and elevating its role within the cloud ecosystem.

Harnessing AI and Automation in Business Process Outsourcing to Drive Growth Amid Shifting Buyer Priorities

Offering Business Process Improvement Underpinned by AI Technology Enables Vendors to Increase Their Value Propositions and Drive BPO Revenue Growth

Vendors’ business process outsourcing (BPO) businesses continue to benefit from the ongoing shift in buyer priorities from innovation and growth toward business resiliency and optimization. Buyers are investing in automating business processes to free up costs, providing pathways to growth for vendors with AI-powered and platform-based offerings.

 

With generative AI (GenAI) taking center stage in both Accenture’s and clients’ investments, Accenture has an opportunity to further improve the company’s profitability, provided it can position GenAI as a value enabler, rather than another technology that is in search of a problem. However, we expect Accenture, like many of its IT services peers, to continue racing to capture as much business as possible in the managed services space before GenAI picks up and threatens the core value proposition centered on human-backed service delivery.

Graph - 2Q24 BPO Revenue and Growth

Driving Transformation in Healthcare, HR and IT Services with Business Process Outsourcing

Expanding its industry portfolio offerings, such as within healthcare, will enable Cognizant to deliver on finance, HR and back-office functional needs, helping to offset recent declines within its outsourcing services. In June Cengage, an education technology company, renewed its engagement with Cognizant for another seven years of services. As part of the deal extension, Cognizant will also now support Cengage Unlimited, a subscription platform through which users access education courses. Cognizant will also provide cloud and security services in support of Cengage’s finance and HR operations.

 

HCLTech deepens its knowledge base with close partnerships with Google Cloud, AWS and IBM that enable the company to build domain and functional expertise on HR, CRM and finance functions, underpinned by AI. Integrating its partners’ AI platforms and associated talent enabled HCLTech to deliver on clients’ IT and business services needs and generated revenue growth during 2Q24. HCLTech’s partnership with IBM around IBM watsonx brings in AI expertise to address clients’ HR and IT concerns. HCLTech won a deal in 2Q24 with an energy infrastructure company in the U.S. to provide IT and business services to improve the client’s user experience, enabling it to more quickly pursue market opportunities.

Emerging Consultancy Trends: Talent Management and Innovation in the Spotlight

Technology continues to threaten the nature of consulting engagements, requiring consultancies to showcase value and deliver on outcomes. Greater investment in talent frameworks, structure and skill will equip staff to lead client discussions and effectively leverage technology to assist workflows. Partnerships remain a core piece of the technology integration, bringing in new expertise and go-to-market opportunities that enable consultancies to meet a wider variety of client needs. Client retention remains a priority across consultancies but will require the firms to effectively deliver value through services.

Consultancies Will Experience a Shift in Traditional Consulting Services as Technology Is Further Embedded in the Market

Consultancies will manage talent more closely to reach higher quality standards

Consultancies refreshed their network of centers, including new operations with partners as well as those designed for internal use. As the consultancies look to bring both talent and clients in-office to work more collaboratively, improve communication and enhance the culture, the new centers serve as a path to facilitate interactions and engagement.

 

In a joint investment, Microsoft and KPMG opened an Operational Risk Skills Development Center in Quebec, followed by a national rollout of the training center supporting Canadian clients’ efforts to take advantage of generative AI (GenAI) responsibly. KPMG also recently announced the opening of a European Union (EU) AI Hub in Ireland. The AI hub is located inside one of the firm’s Innovation Hubs and is set to house 200 employees with skills in risk, regulatory services and cybersecurity. The AI hub will leverage KPMG’s Trusted AI framework and use technology from Microsoft and Cranium (an AI security startup that was spun out from KPMG Studio in 2023).

 

IBM Consulting has poured investment dollars into training and building a network of hybrid cloud and AI talent globally, including IBM’s launch of an AI Center of Excellence in Abu Dhabi, United Arab Emirates, in January 2023 in partnership with the Mohamed bin Zayed University of Artificial Intelligence.
 

Watch On Demand: $130+ Billion Emerging India Opportunity – Who Wins and Why in India-centric vs. Global IT Services Firms

With a return to in-person discussions and conversations, PwC strengthened its center network to emphasize technology solutions. For example, PwC established a Cyber Managed Services Center in Cork, Ireland; added a GenAI business center to its Luxembourg experience center; and opened an AI excellence center in Saudi Arabia.

 

Accenture announced the openings of GenAI studios in Chicago, Houston, New York, San Francisco, Toronto and Washington, D.C. Accenture Federal Services opened a Cybersecurity Center of Excellence (CoE) in partnership with Google in Washington, D.C., highlighting Accenture’s balanced approach to pragmatism and innovation executed through a well-oiled command-and-control culture.

 

Capgemini announced it had signed a strategic agreement with Amazon Web Services (AWS) to accelerate the adoption of GenAI solutions across organizations. The agreement focuses on helping clients gain knowledge around and realize the value of GenAI in business processes. Together, the two companies will move clients from pilots to production by leveraging Capgemini’s network of AWS CoEs. The partners will fast-track the deployment of industry-specific solutions, assets and accelerators, and create functional use cases through Amazon Bedrock.

New leadership will facilitate the evolution of traditional consulting services

With the appointment in July of a new PwC global chairman, Mohamed Kande, who was formerly the company’s Global Advisory and U.S. Consulting Solutions leader, TBR expects sustained and possibly increased investment in advisory and consulting capabilities across the global network, even as the consulting market overall continues to stagnate. Combined with the announcement of new leadership teams in a number of major territories, including the U.S. and the U.K., a new PwC strategy — to update the firm’s 3-year-old The New Equation strategy — will likely re-emphasize the PwC brand and lean into the firm’s technology expertise.

 

Janet Truncale became EY global chair and CEO on July 1, and TBR expects her top priority will be strengthening the company’s talent base, in part by continuing the EY Badges program and leading with its “better working world” strategy.

 

In February McKinsey & Co. partners re-elected Bob Sternfels as the firm’s global managing partner after three voting rounds. Sternfels has provided some stability to McKinsey in the last few years and will likely continue prioritizing quality over quantity, slowing its hiring efforts and fine-tuning the firm’s existing expertise to meet new demands in a GenAI age.

 

Across all three firms, in TBR’s view, new leadership (or consistent leadership, in McKinsey’s case) has been welcomed at the senior level and seen as necessary in light of post-pandemic changes to the consulting and professional services market.

 

In APAC, management consultancies have seen some leadership changes as well. EY Australia recently announced four new appointments: the leads for consulting in Australia; supply chain for Oceania; Asia Pacific financial services; and private equity consulting. Notably, only two of the people appointed to those four positions have been with EY for more than a few years.

 

In May EY Australia announced Katherine Boiciuc as the firm’s new chief technology officer. Earlier in the year, Boston Consulting Group (BCG) in Australia and New Zealand announced a number of senior-level promotions, including Stephen Hosie (healthcare, private equity and public sector), Whitney Merchant (gas, including liquefied natural gas), Lachlan McDonald (mining, oil & gas, manufacturing, and construction), and James Argent (managing director and partner). BCG also appointed Kelly Newton to serve as comanaging partner in New Zealand. Rounding out the region, KPMG appointed David Rowlands as global head of AI, in addition to the promotion of Tim Robinson to lead the firm’s technology consulting practice in Australia.

Partnerships require additional differentiation to drive value for clients

Differentiation will be key for consultancies to prove value tied to partner technologies including AI, GenAI, digital and cloud. Leaning on centers and talent to communicate the value and possibility of their services will provide consultancies with a slight advantage, but the firms will also need to partner around core AI and GenAI capabilities in addition to offering implementation and management services.

 

Post-pandemic, management consultancies, particularly the Big Four, have expanded the scope and composition of their partner ecosystems, focusing primarily on key technology vendors including Google Cloud, AWS, SAP and Microsoft, as well as niche providers. Consultancies have trained their own staff around partners’ capabilities and accelerated the opening of dedicated centers to facilitate adoption and transformation rooted in different solutions.

 

  • KPMG US runs a global Oracle operating model backed by a Global Oracle Center of Excellence, and the firm recently launched a Global Oracle EMEA Hub to capitalize on growth in that market. KPMG Delivery Network hubs, located across LATAM, EMEA and APAC, are supported by over 8,500 Oracle consultants, including more than 700 Oracle Cloud Infrastructure-certified consultants. KPMG also opened a CoE with Google Cloud to combine Google Cloud’s AI technologies with KPMG’s industry and functional knowledge.
  • PwC also partnered with Google Cloud, but through a different avenue, focusing on tax compliance and analytics. The partnership also dovetails with PwC’s efforts to complement its existing strengths around tax, workforce transformation, customer experience and HR processes. In addition, PwC teamed with Microsoft to open an AI CoE in Saudi Arabia with the goal of developing AI skills and supporting recruitment. The center will host a recruiting program every two months, bringing in new talent to support portfolio expansion.

 

With the consultancies and vendors pursuing similar goals and initiatives through their partnerships, differentiation will be key to the consultancies’ success in scaling new technology and remaining go-to partners for their clients. The CoEs and certified talent give consultancies an avenue to immerse clients in the firm’s culture and values while also exposing them to its services and solutions.

AI PCs: Progress, Potential and Hurdles in Redefining the Market in 2025

2025 Predictions is a series of special reports examining market trends and business changes TBR expects in the coming year for AI PCs, cloud market share, digital transformation, GenAI, ecosystems and alliances, and 6G

Top Predictions for AI PCs in 2025

  1. AI PCs will not drive the next commercial PC refresh cycle
  2. Proprietary AI agents will become increasingly prevalent in the AI PC space over the next several quarters

 

Request Your Free Copy of 2025 AI PC Predictions
 

Revitalizing the PC market

For several quarters during 2022 and 2023, major PC OEMs directed investment from their PC businesses to other ventures as PC sales slowed due to market saturation and cautious spending from commercial organizations. Since late 2023, however, this trend has reversed as PC OEMs invest in the development and marketing of PCs with built-in AI capabilities powered in part by a dedicated processor called a neural processing unit (NPU).
 
While PCs with AI capabilities have existed for years, including high-powered workstations that leverage the GPU for AI tasks such as computer-aided design (CAD) and other simulation workloads, new AI PCs will target a much broader user base, including consumer and business users. This latest influx of AI PCs started in December 2023 with Intel’s release of its Core Ultra series of processors, which offload on-device AI tasks to the NPU in order to deliver greater power efficiency. Since then, PC OEMs have released several waves of AI PCs featuring both the first and second generation of Intel’s Core Ultra chips, as well as similar x86 processors from AMD and comparable ARM-based variants from Qualcomm.

TBR Insights Live: 2025 AI PC Predictions
When OEMs first started releasing AI PCs, they shared expectations that the advent of this new product category would help drive the next major PC refresh cycle. However, even as vendors continue to roll out new generations of AI PCs containing increasingly powerful NPUs, adoption remains relatively slow. This is because the presence of an NPU by itself does little to increase the value of an AI PC compared to otherwise similar devices; AI PCs also require an additional layer of software that makes AI-enabled features easily accessible and user-friendly.
 
Therefore, to build out the market and drive greater adoption of AI PCs over the next few years, silicon providers, PC OEMs and ISVs will need to collaborate around and invest in developing applications that increase the functionality of these devices beyond what can be achieved by a traditional, non-AI PC.
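To make the software-layer point concrete, here is a hedged sketch of one common pattern: an application runs a local model through a runtime that targets the NPU when one is present and falls back to the CPU when it is not. The model file, input shape and provider choice are illustrative assumptions, not a specific vendor’s implementation.

```python
# Minimal sketch: running a local ONNX model and preferring an NPU-backed
# execution provider (e.g., Qualcomm's QNN) with a CPU fallback.
# The model file and input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

preferred = ["QNNExecutionProvider", "CPUExecutionProvider"]
available = ort.get_available_providers()
providers = [p for p in preferred if p in available]  # keep the app functional anywhere

session = ort.InferenceSession("assistant_model.onnx", providers=providers)
print("Active providers:", session.get_providers())

# Dummy input matching the hypothetical model's expected shape.
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)
outputs = session.run(None, {session.get_inputs()[0].name: dummy_input})
print("Output tensor shape:", outputs[0].shape)
```

The runtime plumbing is the easy part; the harder work, as noted above, is building applications on top of it that make the NPU’s presence meaningful to end users.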
 
To read the entire 2025 AI PC Predictions special report, request your free copy today!

Cloud Market Share in 2025: GenAI Spurs Growth but Does Not Promise Vendors Long-term Gains

2025 Predictions is a series of special reports examining market trends and business changes TBR expects in the coming year for AI PCs, cloud market share, digital transformation, GenAI, ecosystems and alliances, and 6G

Top Predictions for Cloud Market Share in 2025

  1. Scale, innovation and even repatriation will moderate cloud market growth in 2025
  2. Microsoft will narrow the gap with AWS in IaaS & PaaS market share, en route to leadership in 2027
  3. SaaS vendors will shrug off growing GenAI disillusionment, focusing on the long term by prioritizing GenAI agents within their development strategies

 

Request Your Free Copy of 2025 Cloud Market Share Predictions

The GenAI opportunity is developing but does not ensure future cloud market growth

The revenue generated from generative AI (GenAI) offset some of the impact of cost-saving and expense-reduction efforts that defined the IT and cloud market in 2024. We expect some of that luster to fade in 2025, however, as the lack of a clear ROI from GenAI solutions will be a sticking point that slows investment in the coming year. The long-term GenAI opportunity is still sizable and customer interest remains strong, but the coming year will be a transition period for end customer investment in the technology.
 
TBR Insights Live: 2025 Cloud Market Share Predictions
 
At the same time, the leading hyperscalers will use 2025 to expand delivery capabilities and secure their position in the AI market for the long term. We expect double-digit growth in capex spending from the leading vendors like Amazon Web Services (AWS), Microsoft and Google. This dichotomy of accelerated vendor investment and more restrained customer spending will define the coming year.
 
To read the entire 2025 Cloud Market Share Predictions special report, request your free copy today!

6G’s Fate Depends on the Level of Government Intervention

2025 Predictions is a series of special reports examining market trends and business changes TBR expects in the coming year for AI PCs, cloud market share, digital transformation, GenAI, ecosystems and alliances, and 6G

Top Predictions for 6G in 2025

  1. 6G will leverage FR3 spectrum
  2. Capex spend on 6G is likely to be subdued
  3. Scope of government support for the telecom industry will increase and persist to facilitate 6G market development

 

Request Your Free Copy of 2025 6G Predictions

 

Lack of a clear ROI for the private sector to justify investing sufficiently in 6G puts the fate of the technology into the hands of the government

The telecom industry continues to struggle with realizing new revenue and deriving ROI from 5G, even after five years of market development. TBR continues to see no solution to this persistent challenge, and with no catalyst on the horizon to change the situation, communication service providers’ (CSPs) appetite for and scope of investment in 6G will likely be limited.
 
TBR expects CSP capex investment for 6G will be subdued compared with previous cellular network generations, and deployment of the technology will be more tactical in nature, which is a marked deviation from the multihundred-billion-dollar investments in spectrum and infrastructure associated with the nationwide deployments during each of the prior cellular eras.

TBR Insights Live: 2025 6G Predictions

In a longer-term effort to address this situation, TBR expects the level of government involvement in the cellular networks domain (via stimulus, R&D support, purchases of 6G solutions and other market-influencing mechanisms) to significantly increase and broaden, as 6G has been shortlisted as a technology of national strategic importance.
 
With that said, 6G will ultimately happen, and commercial deployment of 6G-branded networks will likely begin in the late 2020s (following the ratification of 3rd Generation Partnership Project [3GPP] Release 21 standards, which is tentatively slated to be complete in 2028). However, it remains to be seen whether 6G will be a brand only or a legitimate set of truly differentiated features and capabilities that bring broad and significant value to CSPs and the global economy.
 
Either way, the scope of CSPs’ challenges is growing, and governments will need to get involved in a much bigger way to ensure their countries continue to innovate and adopt technologies that are deemed strategically important.
 
To read the entire 2025 6G Predictions special report, request your free copy today!

Federal IT Spending Will Remain Robust in FFY25 Amid AI Prioritization

Federal IT in 2025: Sustained growth amid modest budget increases and strategic modernization

Since coming into office, the Biden administration has fueled an unprecedented federal IT bull market. While the White House’s proposed federal civilian technology budget of $75.1 billion for federal fiscal year 2025 (FFY25) is the smallest increase in several years (up less than 1% compared to $74.5 billion in FFY24), it is still an increase of more than 14% from $65.8 billion in FFY23, and up 25% from $60.1 billion in FFY21, the last year of the prior administration.

 

FFY25 has started with a continuing resolution (CR), as have most of the last several fiscal years. The impact of the latest CR on the largest federal systems integrators may be limited to shorter-cycle programs in their order books, but some disruptions to larger, longer-term engagements are not out of the question.

 

Despite uncertainties as a new administration comes into power, overall federal IT spending priorities will remain intact. Digital modernization across civilian, defense and intelligence IT infrastructures must continue. Services provided by civilian agencies must be digitized to enhance citizen engagement and operational efficiency. And IT investment by defense and intelligence agencies must continue expanding in response to global geopolitical instability and the ever-rising challenges from U.S. nation-state rivals.

 

In the civil space, IT decision makers are coming under greater scrutiny to demonstrate how effectively they have invested IT budget windfalls from the last several fiscal cycles. This is partially reflected in overall IT spending growth, which will slow in FFY25, at least based on the Biden administration’s initial FFY25 budget request.

 

For example, the U.S. Department of Health and Human Services’ IT budget is expected to decline by $100 million (or 1%) in FFY25 compared to FFY24. IT budgets at some agencies, such as the Department of Education, will be flat in FFY25, while budgets at a handful of agencies, like the Department of Homeland Security, will expand.

 

The Defense Appropriations Act for FFY25 provides $852.2 billion in total funding, a 3.3% increase compared to FFY24. The Biden administration ceased providing greater detail on Department of Defense (DOD) IT outlays early in its term, but TBR assumes defense IT spending will increase in concert with the growth in overall defense outlays and will continue centering on using data to enhance warfighting and intelligence operations, modernizing the Pentagon’s underlying IT infrastructure, and achieving interoperability across service branches and with the defense agencies of U.S. allies.

 

Defense agencies will also ramp up investment in solutions that push data capture and analysis ever further out to the tactical edge. National security will continue to be a bipartisan matter as the global threat environment remains elevated. The total addressable market for federal systems integrators (FSIs) with a presence in the defense and intelligence sectors could be worth at least $200 billion, and potentially $300 billion or more, with a large and growing portion of the market opportunity tied to AI.

 

Technologies like quantum computing and space-based IT architectures have also been deemed critical by the U.S. defense and intelligence communities, and investment is accelerating in these areas. Additionally, AI currently retains a high strategic priority among emerging digital technologies.

 

AI investments across all federal sectors have accelerated as agencies recognize AI’s potential to optimize agency operations and enhance mission-critical agency functions. AI solutions enable DOD and intelligence community (IC) agencies to process high volumes of data, and subsequently generate actionable insights to warfighters and combat commands, as well as to intelligence operators in the field.

 

Federal agencies must also master AI from both a technological and a responsible use standpoint, prior to the inevitable adoption of generative AI (GenAI). The most basic, fundamental distinction between AI and GenAI is that AI is good at analyzing existing content while GenAI generates new content. Much foundational modernization work is still needed across the federal IT environment to accommodate digital technologies like cloud, AI and GenAI, ensuring continued (albeit slower) federal IT growth in FFY25 and beyond.

State of the Federal IT Market: Continue Opportunities Amid Slowing Growth — Watch On Demand Now!

GenAI will continue to revolutionize mission-critical functions and day-to-day operations at federal civilian, defense and intelligence agencies

The business case for GenAI streamlining human-resource-intensive, mission-essential operational tasks is indisputable. Even early GenAI use cases have demonstrated the potential of GenAI for federal agencies. Early AI pilots focused on automating repetitive duties to maximize efficiencies in federal agency workflows, but the scope is expanding to focus on the potential for AI to transform more mission-critical activities.

 

In addition to streamlining operations vis-à-vis AI, DOD and IC agencies are also implementing AI technologies to make sense of the enormous volume of data being generated by networks of satellite and C5ISR (Command, Control, Communications, Computers, Cyber, Intelligence, Surveillance and Reconnaissance) systems and provide actionable intelligence to warfighters, military commands and other national security personnel deployed globally.

 

As a result, TBR expects that AI prototyping initiatives will accelerate in FFY25 and that more pilot projects than ever will convert to formal AI implementation initiatives in FFY25.

FSI-operated, AI-focused CoEs and innovation centers will proliferate across federal IT in FFY25

Federal agencies want to see AI in action, but FSIs must clearly demonstrate the potential ROI of AI to risk-averse agency IT decision makers. The FSIs most proactively managing their alliance ecosystems will take their relationships with commercially focused technology peers to the next level.

 

Like its parent company in commercial markets, Accenture Federal Services (AFS) has actively stood up new showcasing centers in federal IT. AFS’ collaboration with Google Public Sector has been particularly prolific as of late. In 4Q24, AFS and Google Public Sector’s Rapid Innovation Team (RIT) launched the Federal AI Solution Factory to accelerate the development and deployment of AI-powered solutions for federal agencies.

 

The new facility comes on the heels of AFS and Google Public Sector launching a new center of excellence (CoE) in 2Q24 to showcase how GenAI technologies can improve citizen services across federal agencies, following AFS and Google Public Sector teaming to stand up a new Cybersecurity CoE in 4Q23.

AI-related budget outlays in civil agencies will surge in 2025

AI is enhancing the citizen experience by automating human-resource-intensive tasks and enabling civilian agencies to respond proactively, not reactively, to security or operational challenges.

 

Civilian agencies are demanding AI technologies that maximize organizational efficiencies, knowledge management and security and that facilitate digital transformation of monolithic IT systems.

 

According to TBR’s 3Q24 CACI report, “The federal government allocated $3.3 billion for artificial intelligence (AI) in the FFY25 budget request, although TBR believes there is a large volume of undisclosed AI-related spending in the FFY25 budget earmarked for the classified arena in the DOD and IC, and federal law enforcement agencies. Beyond the $3.3 billion allocated for AI in the FFY25 budget, Congress is currently considering a proposal to spend over $30 billion on ‘AI innovation projects’ across civilian agencies, the first such effort to fund large-scale AI adoption in the civil space. The bipartisan nature of the proposal certainly reflects how AI is increasingly being prioritized across the federal IT market.”

 

Civilian agencies will increasingly leverage AI in FFY25 to improve citizen-facing services, achieve regulatory compliance more efficiently, optimize operational workflows, and enhance workforce recruiting and retraining.

HCLTech AI Force: Scalable, Modular and Backed by Proven AI Expertise

TBR perspective

Disparate and siloed data, specialized software tools and interrelated processes challenge enterprises trying to gain real value from AI-enabled solutions. HCLTech’s AI Force platform provides visibility into data streams and interdependencies across the software development and operations life cycles, requiring minimal change management and no replacement of existing technology while greatly enhancing an enterprise’s existing IT environment. In short, AI Force is a nondisruptive force multiplier of customers’ technology investments.

 

In late September, TBR met with executives from HCLTech to discuss the company’s AI Force platform, overall business model, and strategies around AI and generative AI (GenAI). The HCLTech team included Apoorv Iyer, EVP and Global Lead, Generative AI Practice; Gopal Ratnam, Vice President, Product Management, Generative AI Products & Platforms; Alan Flower, EVP and Global Head, AI & Cloud Native Labs; and Rohan Kurian Varghese, Senior Vice President, Marketing. This special report reflects that discussion as well as TBR’s ongoing research on and analysis of HCLTech.

AI Force is a GenAI-powered platform that infuses intelligence across every phase of the dev and ops life cycles

HCLTech had an early start in AI, setting up a research team in 2016 and building out its AI engineering strengths around AI silicon; developing AI-led IP solutions like DRYiCE, iAutomate and SDLC (Software Development Life Cycle), a precursor to AI Force; and deepening its heritage in Data & AI with strategic acquisitions like Actian, Starschema and, most recently, Zeenea. This has ingrained AI across HCLTech’s portfolio and underpinned transformation projects, allowing customers to seamlessly manage IT and cloud environments. Leveraging this heritage, HCLTech developed AI Force with responsible AI built in and with scalable, modular use cases that cover the entire software and operations life cycle, such as requirements and analysis (e.g., user story generation, change impact analysis), development (e.g., code generation, code refactoring), triage (e.g., duplicate defect detection) and technical support.

 

Through AI Force, HCLTech provides clients with a platform that supports not only the software development life cycle, reducing the lift of manual tasks and shortening overall development time, but also the operations life cycle, enhancing overall efficiency and accelerating technology value across an enterprise by reducing accrued technical debt and producing better-quality code. As one HCLTech leader described it, AI Force allows an enterprise to “stitch everything [in the IT environment] together and figure out where the issues are.”

 

Notably, AI Force has been on the market for over a year, is live with more than 25 of HCLTech’s enterprise clients, and serves the broader IT ecosystem within an enterprise, beyond just application development and maintenance teams. An HCLTech leader noted that the AI Force platform “reduces the lift of manual tasks and accelerates the overall service delivery time,” a clear operational and financial benefit for any enterprise and clearly more than simply a collection of software tools. Enterprises can now make intelligent decisions by harnessing data, leading to the accelerated development of products and applications, along with significant cost savings and improved efficiencies.

 

Before diving into specifics around AI Force, HCLTech’s leaders described some of the challenges enterprises face across the software development and operations life cycles, highlighting the complexities inherent in having multiple personas, disconnected processes, siloed data, disparate systems and specialized tools.

 

According to HCLTech, this landscape is missing a digital thread or intelligence hub capable of understanding the entire process end to end, including the data sets generated by specialized tools, and then unlocking the relationships between those data sets. HCLTech’s AI Force can integrate existing tools rather than replace them, bring data sets together, create a knowledge graph of the relationships between the data sets, and conduct comprehensive root cause analysis.
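As a generic illustration of the knowledge-graph idea (this is not HCLTech’s implementation; the tools, artifact IDs and relationships are hypothetical), linking artifacts from different life cycle tools into one graph lets a root cause query simply walk upstream from an incident:

```python
# Minimal sketch: a knowledge graph linking artifacts from different life cycle
# tools, then walking upstream from an incident to find root cause candidates.
# All node IDs and relationships are hypothetical.
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("ticket:REQ-551", "commit:abc123", relation="implemented_by")
graph.add_edge("commit:abc123", "build:2041", relation="produced")
graph.add_edge("build:2041", "deploy:prod-77", relation="deployed_as")
graph.add_edge("deploy:prod-77", "incident:INC-9001", relation="preceded")

# Everything with a path into the incident is a root cause candidate.
candidates = nx.ancestors(graph, "incident:INC-9001")
print(sorted(candidates))
# ['build:2041', 'commit:abc123', 'deploy:prod-77', 'ticket:REQ-551']
```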

AI Force’s key characteristics and advantages

In the discussion with TBR and during HCLTech’s presentation of AI Force’s capabilities, HCLTech’s AI leaders walked through AI Force’s go-to-market approach, characteristics, architecture, advantages and use cases. HCLTech conducted a demonstration of AI Force in action before turning to the synergies between AI Force and the company’s global network of AI and Cloud Labs.

 

At its core, HCLTech’s AI Force features extensibility, modularity and flexibility. It can integrate smoothly with existing IT environments, be leveraged for a large variety of use cases within an enterprise, and be deployed, consumed and priced in different ways that are suitable to an individual customer’s business needs.

 

In describing HCLTech’s go-to-market strategy, the AI leaders stressed three points:

  1. HCLTech will continue to enhance large-scale engagements with the capabilities and benefits of AI Force from the start, affording the client immediate cost savings.
  2. In other situations, HCLTech will assist clients in deploying AI Force as a platform within the client’s enterprise IT environment.
  3. For clients already engaging HCLTech for managed IT services, AI Force can be deployed to gain cost savings and efficiencies, directly complementing existing managed services. This last approach, in TBR’s view, reinforces HCLTech’s value proposition around offering innovation, even in established managed services engagements, and expands its remit within the enterprise, from simply IT services to more consultative, business-outcomes-driven and AI-enabled solutions. As part of this consultative approach, HCLTech undertakes value stream mapping in the discovery process for deploying AI Force, including a detailed as-is picture, to-be picture, and the true impact at scale. Through this due diligence, HCLTech helps customers select the right projects that can benefit from AI Force.

Appealing broadly across the enterprise and embedding customer context

Recognizing that peers such as Infosys and EY have similarly developed suites of AI-enabled and AI-forward solutions, HCLTech leaders highlighted some aspects they believe distinguish the company’s capabilities, particularly AI Force.

 

First, the solution can be deployed in the cloud, on premises or even on edge-enabled devices, depending on a client’s needs and circumstances. The leaders described this aspect as appealing to HCLTech’s ecosystem partners, which include Microsoft, Amazon Web Services (AWS), SAP and IBM, further noting AI Force’s already established integration with Microsoft’s GitHub Copilot, where it is offered as a certified extension.

 

Second, the HCLTech executives noted AI Force is valuable to more than just coders and enterprise professionals looking for AI-enabled cost- and time-saving assistance. Being extensible and working with multiple large language models (LLMs) makes AI Force flexible enough for a broader enterprise workforce audience.

 

Third, the inclusion of a customer context using enterprise data makes the solution more than simply an addition to an existing LLM accelerator. HCLTech’s leaders emphasized the value of customer context inherent to the platform, noting that HCLTech will train AI models on customer-specific data.

 

On a related note, the HCLTech executives described the underlying AI architecture as “comprehensive, but not complex; unified” and “holistic, therefore not a point solution.” According to HCLTech, AI Force has been granted 18 patents, and its batch processing mode reduces the strain on the underlying cognitive infrastructure, leading to reduced energy consumption. In TBR’s view, the characteristics and architecture likely resonate with IT professionals and particularly software engineers, while the flexibility and customer context significantly enhance the business value of AI Force.

 

Building on key characteristics, the HCLTech AI leaders walked through AI Force’s overall advantages, including a single, unified platform, rather than hundreds of solutions; simplified management and budget; built-in use case prioritization, allowing decision-makers and IT support to focus on the use cases that would lead to business transformation; inherently enabled customer context, greatly enhancing the stickiness of AI Force within an enterprise; and built-in data ingestion and storage, significantly diminishing the likelihood of disjointed or counteracting results.

 

In TBR’s view, AI Force’s advantages play well for different buying and decision-making personas. Procurement, IT operations and even the CFO can appreciate a single solution with simplified management. Business unit leaders can find and deploy use cases suitable to their specific needs. And the inherent stickiness of AI Force can appeal to executives looking to gain advantages from deploying AI-enhanced solutions rather than simply paying for another round of new technologies.

Applying GenAI only when and where it is needed

Not every business problem is best solved by deploying GenAI-enabled solutions. HCLTech leaders emphasized that some customer problems can be handled by simple automation, some with traditional AI, and only a niche set through GenAI-enabled solutions.

 

In TBR’s view, HCLTech’s strategic decision to recognize that customers can solve problems with existing technologies and do not always need GenAI-enabled solutions plays well, given enterprise buyers’ fatigue around the constant carousel of emerging technologies and ever-increasing IT budgets. Simply showing customers that AI Force will help identify where GenAI is best suited and where it is not should resonate with IT decision makers and their C-Suite bosses, all of whom are looking for tangible returns on technology investments. If HCLTech can help get more from existing technologies, AI Force is an immediate value-add.

 
Notably, HCLTech works with a wide variety of models and is model agnostic. The choice of model depends on a client’s business problem and the context of the client’s own data. Rather than recommending a model based on technical specifications or a familiarity with a particular model, HCLTech centers the decision on the client’s specific business problem.

Four ways to consume, determined by the customer’s business problem

HCLTech’s customers can take advantage of the AI Force platform in whichever deployment and consumption model fits their needs. HCLTech offers the platform as a stand-alone deployment, embedded into the client’s IT environment, through APIs (which one HCLTech leader described as “headless … behind the scenes”), or on the edge through AI-enabled PCs.

 

Critically, HCLTech leaders assured TBR that the customer’s consumption model of choice made “no difference in how the customer pays for AI Force.” As for decision making around the consumption model, HCLTech leaders said the company advises customers based on the business problem the customer is trying to solve.

 

On this point, TBR believes HCLTech has, itself, made a strategic decision: allow the customer’s environment, needs and business problems to determine the best commercial and technological fit for HCLTech’s platform, rather than HCLTech’s business and commercial needs dictating deployment terms.

 

The discussion included detailed accounts of two deployments at different types of companies. First, to accelerate a legacy IT modernization effort at a financial institution, HCLTech used AI Force to map, migrate and test more than 200 legacy applications.

 

Second, at a massive global technology company, HCLTech used AI Force to radically reduce marketing spend through what an HCLTech leader referred to as a “marketing ops transformation from manual-driven content development by a third-party vendor to GenAI-automated content generation.” TBR has been briefed on similar marketing operations improvements through GenAI automation, but none at the same scale or with comparable cost savings to those described by HCLTech.

 

HCLTech leaders also described the company’s recently announced partnership extension with Xerox. The company will leverage automation, product and sustenance engineering, and process operations services — including order to cash, sales and marketing operations, and supply chain and procurement — along with AI Force, to deliver a unified interface that transforms the way employees and clients engage with Xerox.

 

HCLTech describes other AI Force use cases on its website.

Minimal change management and increased visibility provide immediate value

In TBR’s research, GenAI adoption has benefited enterprises with well-managed and orchestrated data, even if that data exists in silos. In contrast, enterprises with little visibility into their data have been challenged to see meaningful returns on their GenAI investments, in part because of a challenge HCLTech identified above: People within an enterprise typically like the specialized software tools they are already using and want to keep using them.

 

HCLTech’s AI Force does not ask for change from multiple personas across an enterprise or for adoption of a new set of tools; it instead provides greater visibility into everyone’s processes, software usage and IT environment and demonstrates how one person, process or tool can affect another. By providing visibility without demanding replacement and adoption, HCLTech’s AI Force can deliver value with minimal change management.

AI Force may be what helps HCLTech survive the coming IT services business model upheaval

As HCLTech’s leaders noted to TBR, HCLTech is not new to AI: The company had been investing in AI, training its workforce around AI principles and deployments, working with chip manufacturers, and developing and selling software all before GenAI emerged. As one slide in HCLTech’s presentation noted, the company has been “Building and deploying AI solutions since 2016.”

 

Legacy — and maybe more accurately, proven — skills and capabilities lend immediate credibility to what HCLTech brings to clients and partners with AI Force. Further, a significant part of what separates HCLTech from immediate peers is the company’s IP-driven services model, a strategic difference that becomes increasingly relevant as clients ask for more GenAI-enabled services and less labor-dependent services. HCLTech’s business model is not simply enhanced by AI Force and other IP-driven solutions; it might actually be saved by those capabilities as the entire IT services business model undergoes significant, AI-induced change.

 

TBR will be watching as HCLTech develops additional platforms, brings agentic AI solutions to discussions with clients, and enables fully autonomous AI deployments, all built on a solid foundation of expertise, experience and ever-increasing capabilities around artificial intelligence.

Private Cellular Networks: Growth Drivers, Challenges and Opportunities Expected Through 2028


 

What Verticals Will Lead in Private Cellular Networks Adoption Through 2028?

Despite persistent ecosystem maturity challenges, the private cellular networks (PCN) market is growing as leading enterprises advance their Industry 4.0 strategies and governments aim to capitalize on defense and public safety use cases. TBR research indicates that the private 5G network market will see strong growth through this decade as a wide range of industries and governments adopt the technology, but a confluence of factors is slowing the pace of market development relative to the industry’s original expectations.

 

In this TBR Insights Live session, Senior Analyst Michael Soper gives an in-depth look at TBR’s private cellular networks research. Each year TBR publishes a vendor benchmark, market forecast and market landscape on PCN, each covering a different aspect of the market. TBR’s private cellular networks analysis includes rankings of key PCN vendors by various revenue splits, the ecosystem for private LTE and private 5G networks, spend on private LTE- and 5G-related infrastructure, and more.
 

In The Above FREE Webinar on Private Cellular Networks You’ll Learn:

  • Key growth drivers and detractors expected in the PCN market through 2028
  • Which verticals are leading and lagging in PCN adoption
  • Which ecosystem players are positioned to capitalize on trends in the PCN market

 

Excerpt From Private Cellular Networks: Growth Drivers, Challenges and Opportunities Expected Through 2028

TBR private cellular networks research update

Enterprise buying behavior

Key reasons why enterprises do not want to work with CSPs (i.e., buy services from a CSP’s public network):

  • Security, trust and privacy concerns
  • Downtime risk
  • Cultural mindset — some enterprises want control and possess a can-do attitude
  • CSPs’ lack of deep knowledge of industrial processes and pain points
  • Greater flexibility to customize solutions to specific needs
  • SIM cards — enterprises do not want to be tethered to the telco
  • Preference for unlicensed spectrum

Reasons why enterprises might procure 5G services from CSPs:

  • IT and other technical staff may not be comfortable supporting cellular technologies due to a lack of training or credentials and might seek to outsource all or some of this responsibility.
  • Network slices could be a more cost-efficient way of consuming 5G resources (e.g., only pay for what you use).
  • CSPs can provide dedicated spectrum and SLAs.
  • CSPs can cost-effectively provide wide-area network coverage, such as global roaming.

TBR Insights Live sessions are held typically on Thursdays at 1 p.m. ET and include a 15-minute Q&A session following the main presentation. Previous sessions can be viewed anytime on TBR’s Webinar Portal.

Meet MAMAA: The Top 5 Hyperscalers Shaping the Future of Digital Ecosystems

What Are the Top 5 Hyperscalers?

Alphabet, Amazon, Apple, Meta Platforms and Microsoft are the five largest, most comprehensive hyperscalers in the world by a wide margin. This group of Tier 1 hyperscalers is collectively referred to as MAMAA.

 

TBR research shows only the Tier 1 hyperscalers can transcend most, if not all, of the major lifestyle categories to provide a seamless end-to-end ecosystem experience, touching all aspects of people’s lives, primarily due to their scale and access to resources.
 

An In-depth Discussion on How GenAI Will Impact the Telecom Industry — Watch Now!

The World’s Largest Hyperscalers Are Positioned to Win an Outsized Share of the New Opportunities Created in the Digital Era

Tier 1 hyperscalers have momentum as they pursue their end-to-end digital ecosystem goals. Key reasons why hyperscalers will succeed in their digital ecosystem endeavors include:

  • Scale
  • Network effect
  • Proficiency at building and scaling platforms
  • Adeptness at translating data into outcomes
  • Near limitless financial resources
  • Access to the best talent
  • Access to the best legal teams
  • Control of essential intellectual property and patents (e.g., devices, chipsets, AI/machine learning algorithms, Lidar)
  • Tax advantages, as they pay relatively little in taxes

Tier 1 Hyperscalers Intend to Own and Control Critical Aspects of the Value Chain in the Digital Era

The End Goal: Full-scope Digital Ecosystems

TBR believes the top 5 hyperscalers will own and control foundational and critical aspects of the digital economy and capture an outsized portion of the value created during the digital era.

  • Societies will become dependent on their clouds.
  • The majority of internet traffic will run over their networks.
  • Transportation will be directed by their self-driving technologies.
  • A significant portion of financial transactions will be processed via their payment platforms.
  • And much, much more

 

If current trends play out, and assuming interference from regulators remains manageable, TBR believes MAMAA will ultimately become fully integrated, end-to-end digital ecosystem owners, providing essential solutions for businesses and consumers worldwide.

 

At a high level, hyperscalers are all pursuing the same strategy, which is to maximize the value of data. All of the Tier 1 hyperscalers are entering similar markets, introducing similar products and services, investing in similar technology areas, and pursuing similar business models. The underlying goal is to provide an immersive, seamless, end-to-end digital experience to end users (consumers and businesses), which will maximize hyperscalers’ value capture in the digital era. Value in its most basic form resides in the data that hyperscalers have access to, but leveraging that data to produce outcomes is how the hyperscalers make their money.

As Hyperscalers Redefine Digital Ecosystems, Industry Consolidation and Adaptation Become Essential for Long-Term Relevancy

As hyperscalers build out their ecosystems, some incumbent entities will become marginalized and fail, and the remaining incumbent players across industries will have to consolidate, adapt and/or partner with hyperscalers in some way or risk fading from relevancy. The markets for IT, advertising, retail, media, entertainment, financial services and other industries are already facing this disruption. TBR believes transportation, healthcare, education, telecom and other industries will also experience hyperscaler disruption during this decade.

 

The lifeblood of the digital era is data, and the heart is the systems, platforms, marketplaces and other digital infrastructure that make use of that data. Hyperscalers have thus far created the most compelling digital infrastructure that is capable of amassing, synthesizing, automating and utilizing vast amounts of data, which is the key underlying reason why they have been so successful in creating economic value over the past two decades.