Infosys Collaborates with Clients and Partners to Navigate What’s Next in Their AI Transformation Programs

Strong Services Execution, Enabled Through Infosys Cobalt and Focused on Outcomes, Provides Foundation Upon Which Infosys Can Build AI Strategy

The steady performance of Infosys’ cloud business highlights the company’s pragmatic approach to its portfolio and go-to-market efforts, largely enabled by Infosys Cobalt.

 

Building on Infosys Cobalt’s success, the company now has an opportunity to steer client conversations toward AI and is positioning Infosys Topaz as the suite of services and solutions that can bring it all together. Agentic AI (i.e., autonomous AI) is the newest set of capabilities dominating client and partner conversations. Scaling AI adoption comes with implications and responsibilities, which Infosys is trying to address one use case at a time. For example, earlier in 2024, Infosys launched the Responsible AI Suite, which includes accelerators across three main areas: Scan (identifying AI risk), Shield (building technical guardrails) and Steer (providing AI governance consulting). These capabilities will help Infosys strengthen ecosystem trust via the Responsible AI Coalition. Infosys also claimed it was the first IT services company globally to achieve the ISO 42001:2023 certification for ethical and responsible use of AI.
 
Regardless of the client’s cloud and AI adoption maturity, everyone TBR spoke with and those who presented at 2024 Infosys Americas Confluence agreed that the need for data strategy and architecture comes first. Two separate customers perfectly summarized the state of AI adoption: “You can’t get to AI without reliable data across the supply chain,” and “GenAI is not a magical talisman. Companies need to build true AI policy and handle GenAI primitives before scaling adoption, with the shift in mindset among developers and users a key component.”

 

Infosys recognizes that AI adoption will come in waves. The first wave, which started in November 2022 and continued over the last 18 to 24 months, was dominated by pilot projects focused on productivity and software development. In the current second wave, clients are starting to pivot conversations toward improving IT operations, business processes, marketing and sales. The real business value will come from the third wave, which will focus on improving processes and experiences and capitalizing on opportunities around design and implementation. Infosys believes the third wave will start in the next six to 12 months. While this might work for cloud- and data-mature clients, only a small percentage of enterprises are AI ready across all components, including data, governance, strategy, technology and talent. Thus, it might take a bit longer for AI adoption to scale.

 

But as Infosys continues to execute its pragmatic strategy, the company relies on customer success stories that will help it build momentum. As another customer positioned it, “Infosys knows the data and processes. They know what they are talking about. In [the] 11 years since we have worked with them, they have not missed a single release with their … team delivering the outcomes.”

 

We believe Infosys’ position within the ecosystem will also play a role in how fast and successful the company is when it comes to scaling AI with clients. Infosys’ AI-related messaging includes 23 AI playbooks, which focus on value realization spanning technical and business components, such as Foundry and Factory models, as well as change management.

 

Of course, AI and GenAI will also disrupt Infosys’ business model and service delivery. And while many of its peers are still debating internally how to best position themselves with clients and pitch the value of GenAI without exposing their business to too much risk in the long run, Infosys’ thoughtful, analytics-enabled approach to commercial and pricing model management has positioned the company favorably with price-conscious clients that have predominantly been focused on digital stack optimization over the past 18 months.
 
Infosys’ success with large deals is a testament to the effectiveness of the company’s strategy. In FY4Q24 Infosys had $4.5 billion in large deals, which is the highest quarterly large deal value for the company. In addition, investing in and transforming right-skilled talent who can support this model are critical components to the company’s success. While Infosys has trained 270,000 of its employees on AI, we believe it is the composition and depth of these skills that vary across service lines and clients, especially as outcome-based pricing models now represent half of the contracts in some service lines.

Infosys’ Investments in Engineering and Marketing Strengthen Company’s Position as a Solutions Broker

Navigating the hype of GenAI requires Infosys to also recognize and place bets on other areas that are tangential and have a more immediate impact on its value proposition and overall financial performance.

Infosys Tries to Bring CIOs and CMOs Together Through Infosys Aster

Building off the success of Infosys Cobalt and Infosys Topaz, the company launched Infosys Aster, a set of AI-amplified marketing services, solutions and platforms. While Infosys Cobalt and Infosys Topaz have horizontal applications, the domain-specific nature of Infosys Aster provides a glimpse into what we might expect to see from Infosys in the near future, given the permeation of GenAI across organizational processes. Additionally, the marketing orientation of Infosys Aster is not surprising since most GenAI use cases are geared toward improving customer experience.

 

Built around three pillars — experience, efficiency and effectiveness — Infosys Aster will test Infosys’ ability to capitalize on a new wave of application services opportunities and create unique solutions around clients’ first-party data rather than provide off-the-shelf solutions just to ramp up implementation sales.
 
With DMS continuing to act as a conduit for broader digital transformation opportunities for Infosys, we expect the company to use Infosys Aster to position its marketing services portfolio in a more holistic manner, creating a bridge between CMOs and CIOs and also bringing parts of Infosys’ Business Process Management subsidiary into the mix to position the company to capture marketing operations opportunities. Infosys Aster provides a comprehensive set of marketing services across the value chain of strategy, brand and creative services, digital experience, digital commerce, marketing technology (martech), performance marketing and marketing operations.
 
Although this is an area of opportunity for Infosys, rivals such as Accenture have an advantage in the marketing operations domain. We do believe the greater opening for Infosys comes from focusing more on driving conversations around the custom application layer and steering client discussions toward achieving profitable growth through the use of Infosys Aster. Client wins such as with Formula E and ongoing work with the Grand Slam tennis tournaments also allow Infosys to demonstrate its innovation capabilities beyond traditional IT services. Part marketing and part branding, wins such as these elevate Infosys’ capabilities. Executing against its messaging is key for Infosys.

Infosys Engineering Services Will Close Portfolio and Skills Gaps Between IT and OT Departments

Infosys Engineering Services remains among the fastest-growing units within the company as Infosys strives to get closer to product development and minimize GenAI’s disruption to its content distribution and support position. Since the 2020 purchase of Kaleidoscope, which provided a much-needed boost for the company to infuse new skills and the IP needed to appeal to the OT buyer, Infosys has further enhanced its value proposition to also meet GenAI-infused demand.

 

Infosys recently announced the acquisition of the India-based, 900-person semiconductor design services vendor InSemi, which presents a use case where the company applied a measured risk approach to enhance its chip-to-cloud strategy as it tries to balance its portfolio of partner-ready solutions, such as through NVIDIA, with a sound GenAI-first cloud-supported story. Shortly after, Infosys also acquired Germany-headquartered engineering R&D services firm in-tech. The purchase will bolster Infosys’ Engineering Services R&D capabilities and add over 2,200 trained resources to regional operations across Germany, Austria, China, the U.K., and nearshore locations in the Czech Republic, Romania, Spain and India, supporting Infosys’ opportunities within the automotive industry. The purchase of in-tech certainly accelerates these opportunities, bringing in strong relationships with OEM providers, which is a necessary steppingstone as Infosys tries to bridge IT and OT relationships.

 

We do not expect Infosys’ cloud business Infosys Cobalt to slow down anytime soon given the company’s market position for infrastructure migration and managed services as well as its well-run partner strategy with hyperscalers. Adding semiconductor design services bolsters that value proposition as buyers consider whether to use price-attractive CPUs or premium-priced GPU data centers. The latter currently dominates the marketplace, and we expect that trend will not change for at least the next 18 to 24 months. But having semiconductor engineers on its bench can help Infosys start supporting CPU-run models, further appealing to more price-sensitive clients. Meanwhile, Infosys is planning to train 50,000 of its employees on NVIDIA technologies. Lastly, the close collaboration between Infosys Engineering Services and Infosys Living Labs further extends the company’s opportunities to drive conversations with new buyers and demonstrates its ability to build, integrate and manage tangible products.

Infosys’ Reliance on Partners Provides a Strong Use Case of Trust and the Future of Ecosystems

The mutual appreciation between Infosys and partners was amplified throughout 2024 Infosys Americas Confluence. From a dedicated Partner Day to partner-run demos and various sponsorship levels to main-stage presentations, the experience reminded TBR of an event that a technology vendor would typically set up (think: Adobe Summit, AWS re:Invent, Dreamforce, Oracle OpenWorld, to name a few).
 
Infosys’ decision to feature some of its key alliance partners in a way similar to how tech companies do suggests a strong alignment between parties, starting with top-down executive support, through mutual investments in both portfolio and training resources, and most importantly, knowledge management between the parties. In conversations throughout the event with partners, it was evident that Infosys’ strategy is consistent regardless of the length of the relationship, from decades-long relationships, such as with SAP, to emerging but fast-growing alliances, such as with Snowflake. All partners agreed Infosys’ humble approach to managing relationships has put them at ease in working with Infosys and delivering value to joint clients.

 

After attending Infosys’ U.S. Analyst and Advisor Meeting in Texas in March, TBR wrote about Infosys’ relationship with Oracle, highlighting the level of trust and transparency Infosys typically deploys with partners. In TBR’s Summer 2024 Voice of the Partner Ecosystem Report we wrote: “Services vendors most frequently rely on their direct sales efforts and permission to demonstrate value with customers to drive revenue. Using demos and proof-of-concept discussions as a frequent tactic to engage with clients also highlights many of the profiled vendors’ consulting heritage.”

 

The technical expertise came through very vividly and aligned with Infosys’ strengths in playing within its own swim lane. In a main-stage discussion, Infosys and Hewlett Packard Enterprise (HPE) discussed at length the role each plays in pursuing opportunities in areas such as GenAI and the need for greater interaction through a multiparty model, including, for example, the value NVIDIA brings to the table. While one could argue that Infosys’ alliance partner strategy mirrors that of many of its competitors as it seeks to secure foundational revenue opportunities while pursuing innovation through a measured risk approach, the company strives to differentiate by acknowledging its strengths and sticking to them rather than branching too far into partners’ territory, which enterprise buyers strongly appreciate.

Land-and-execute Approach and Expansion Will Follow Naturally

Close to a decade ago, TBR analyzed what Infosys’ five-year strategy should look like. While the company went through leadership and strategy changes during this period to such an extent that one could cite concerns about consistency, those days are over. Infosys now has a well-grounded strategy with executives executing on a clear vision rooted in a land-and-execute approach rather than the typical land-and-expand framework many of its peers aspire to. This puts greater pressure on the company’s quality and talent-retention strategies. While no one is immune to macroeconomic headwinds, the internal growth and training opportunities the company provides for its employees across all levels provide a strong backbone to a culture of learning and trust.

 

TBR will continue to cover Infosys within the IT services, ecosystems, cloud and digital transformation spaces, including publishing quarterly reports with assessments of Infosys’ financial model, go-to-market, and alliances and acquisitions strategies. Access reports as soon as they’re available with TBR Insight Center™ access.

IT Services Vendors Embrace Digital Transformation to Revolutionize the Sports and Entertainment Industry

IT Services Vendors Pursue Opportunities in the Sports and Entertainment Industry

Like every other industry, sports has undergone digital transformation in recent years, greatly improving operations within the industry and fundamentally changing the fan experience. Every major sporting event is enhanced by analytics, both at an operational level and for the fans, and other elements core to IT services, such as cybersecurity and automation, have become fundamental to running a sports operation.

 

Not surprisingly, IT services companies and consultancies have jumped on the bandwagon, increasingly associating their brands with major sport events and leagues, not simply as sponsors but also now as digital transformation, AI and analytics partners.

 

The sports and entertainment industry segment typically contributes a small share of revenue for the 31 vendors covered in TBR’s IT Services Vendor Benchmark compared to established industries such as financial services, public sector and manufacturing. However, an increasing number of IT services providers are building specialized expertise to address the needs of clients in sports and entertainment and to diversify revenue streams. Applying capabilities in areas such as digital design, secure infrastructure and data, and customer experience enables vendors to increase value and capture growth opportunities.

Specialized Expertise and History of Working with Clients in Sports and Entertainment Help Vendors Establish Credibility and Attract New Clients

IBM, Atos, Accenture and Infosys have well-established industry expertise and a history of working with clients in the sports and entertainment sector. In addition to those companies, other IT services providers are developing capabilities and building client relationships to capture opportunities in the sector.

 

Utilizing their solutions, expertise and reputation gained by working with clients in other sectors and applying that knowledge to the sports and entertainment industry enable vendors to expand their client reach. Vendors increasingly utilize digital design capabilities to add value. For example, IBM iX, the experience design business of IBM Consulting, developed a new AI commentary feature for the Wimbledon Championships utilizing watsonx to train the AI in the language of tennis, and then implemented the solution to create engaging commentary for event video clips.

IBM

Utilizes IBM watsonx to Improve Fan Engagement

IBM has a 30-year partnership with the All England Lawn Tennis Club. To help more than 19 million fans globally follow the Wimbledon Championships more closely, IBM has been improving the digital experience of the tournament’s official app and website.

 

In June IBM announced a new feature for the app and website that provides personalized player stories as players advance through the tournament, utilizing data and generative AI (GenAI) from IBM’s watsonx platform. In addition to the Wimbledon Championships, IBM Consulting has been providing insights and improving experiences over the past several years for events such as the Masters Tournament, the U.S. Open and the Grammy Awards; improving user engagement and integrating AI, such as with the ESPN Fantasy Football app; and addressing storage and security needs, such as for the Mercedes-Benz Stadium in Atlanta.

 

For example, IBM has been working with the Masters Tournament for more than 30 years to digitally transform the event by designing solutions and user interfaces and transforming back-end systems to deliver insights through golf data. IBM is utilizing GenAI to convert Masters data into AI-powered narration and insights about players and games.

 

In April IBM announced new fan features for the Masters app and Masters.com to improve the digital experience of the tournament that was held April 11-14. IBM Consulting collaborated with the Masters’ digital team to provide fans with shot-by-shot insights based on data-based projections and analysis for each hole, thanks to GenAI capabilities from IBM watsonx.

 

IBM has also been working with the U.S. Tennis Association (USTA) for more than 30 years. In August IBM announced several fan features for the digital platforms of the 2024 U.S. Open that are powered by IBM watsonx to improve fan engagement and tournament coverage. IBM delivered AI-generated Match Report summaries for singles matches minutes after they were completed utilizing IBM’s Granite 13B large language model (LLM) and the USTA’s data and editorial guidelines. IBM also provided AI commentary with automated English-language audio and subtitles for singles match summaries. Fans also utilized the redesigned IBM SlamTracker experience offering that provides pre-live and post-match insights.

 

In September IBM and ESPN announced enhancements to the ESPN Fantasy app, which is powered by GenAI technologies from IBM watsonx. The new Top Contributing Factors feature within the Waiver Grade and Trade Grade features of the app provide analysis around grades. The grades that are assigned to players are created by AI models built with IBM watsonx, and the information is generated by IBM’s Granite LLM.

Atos

Every Olympics Must Run Flawlessly; There Are No Second Chances

Atos used its well-established expertise in the sports and entertainment industry to provide infrastructure services for the 2024 Paris Olympics and Paralympics and enable a secure and digital Games experience for end users globally. The company has been providing services for the Olympic Movement since 1989. Atos established its relationship with the International Olympic Committee (IOC) as a Worldwide IT Partner in 2001 and provided IT services in that role for the first time at the 2002 Winter Olympics in Salt Lake City. Ensuring that the IT systems behind the Olympics run flawlessly every two years requires dedication and strict execution of processes and timelines.

 

Atos has been expanding its client roster in the sports and entertainment industry, applying its vast experience gained from the Olympics. In December 2022 Atos signed an eight-year deal with the Union of European Football Associations (UEFA) to be the official technology partner for men’s national team competitions. Atos is assisting UEFA in managing, improving and optimizing its technology landscape and operations. It is also managing and securing the hybrid cloud environment and infrastructure that hosts UEFA’s services, applications and data. Atos is the official IT partner of UEFA National Team Football until 2030.

 

In March Atos announced plans to open a Sports Technology Center of Excellence (CoE) in its new Middle East and North Africa headquarters in Riyadh, Saudi Arabia, in 2Q24. The CoE will develop technology applications for athletes, fans and sports organizations in Saudi Arabia. The new CoE provides a way for Atos to capture opportunities in the local sports industry as Saudi Arabia pursues its Vision 2030 goal of positioning itself as a host for leading international sporting events. The center will enable clients to explore solutions around digital transformation, cloud services, cybersecurity, decarbonization, application modernization, DevSecOps and edge computing. Atos provided cybersecurity and infrastructure services for the 2024 Paris Olympic and Paralympic Games utilizing its Technology Operations Centre.

Accenture

Accenture Helps NFL Make Data-driven Decisions and Works with ESPN to Transform Sports Fan Experience

In May Accenture announced a five-year partnership with the NFL in which Accenture will be the Official Business and Technology Consulting Partner. Accenture will help the NFL make data-driven decisions in three business areas: football, financial operations and human resources. Accenture will also support the NFL across multiple areas such as transforming the league’s human capital systems, ERP and analytics, and driving efficiencies and automation across the NFL’s back-office functions.

 

In 2021 ESPN partnered with Accenture, Microsoft and Verizon with the goal of exploring ways to improve the fan experience in sports through technologies such as 5G, augmented reality and mobile edge computing. Accenture and ESPN launched the ESPN Edge Innovation Center to utilize technologies and jointly imagine, explore, conceive and prototype sports entertainment experiences and production capabilities. The combination of design and innovation capabilities with technology and industry expertise enabled Accenture to become ESPN’s Innovation and Founding Consulting Partner. Accenture and ESPN collaborate to enhance live sports broadcasting, develop consumer-facing products and improve the sports fan experience.

Infosys

Client Wins Such as Formula E and Ongoing Work with Grand Slam Tennis Tournaments Allow Infosys to Demonstrate Innovation Capabilities Beyond Traditional IT Services

Since 2015 Infosys has been the Digital Innovation Partner for the Australian Open, Roland-Garros, the Association of Tennis Professionals (ATP) Tour and the International Tennis Hall of Fame, transforming tennis through data, insights and digital experiences. For example, Infosys has been partnering with the ATP to develop digital assets. Infosys’ design capabilities and technical prowess continue to help it attract business in experience design and AI-powered services with sports and entertainment companies, particularly around tennis tournaments.

 

In March Infosys extended its digital innovation relationship with the ATP by three years, until 2026. The ATP will continue to benefit from Infosys’ capabilities in AI, data analytics and cloud. Since the beginning of the partnership in 2015, Infosys has deployed digital assets for ATP Tour, such as reinventing the ATP PlayerZone intranet portal; launched the ATP fan app; and developed systems integration (SI)-driven features powered by Infosys Topaz in the Infosys ATP Stats Center. Infosys and the ATP are also collaborating on the ATP Carbon Tracker, which monitors and helps offset the carbon footprint of players, supporting the ATP’s goal of achieving net-zero emissions by 2040.

 

Outside of tennis, Infosys is the Official Digital Innovation Partner of Madison Square Garden and the New York Knicks and New York Rangers. In May Infosys announced that it will be the official Digital Innovation Partner for the ABB FIA Formula E World Championship, the global motorsport championship for electric cars, for the next three years. Infosys will deliver in-race analytics, improve fan engagement experiences and enhance sustainability reporting and tracking for Formula E.

 

Additionally, Infosys will develop a new AI-powered Fan Customer Data platform to engage 500 million fans by 2030; provide in-race insights utilizing GenAI capabilities through Infosys Topaz; and implement a sustainability data management tool based on AI to help Formula E reduce carbon emissions by 45% by 2030.

Oracle’s Path to $100B+: Unlocking Growth with Multicloud Strategy

Oracle Is Charting a Path for Unprecedented Growth with Its ‘Infrastructure Anywhere’ Vision

Oracle has among the most complete, full-stack cloud portfolios, from infrastructure to database to applications. While Oracle Cloud World 2024 covered a sizable landscape, one theme stuck out during the four-day event: deployment flexibility. This theme reflects how much Oracle has changed compared to 2016, when Gen2 OCI (Oracle Cloud Infrastructure) launched.

 

With multitenant OCI, Dedicated Regions, Cloud@Customer and Oracle Alloy, a specialized service where customers white label OCI services inside their own data centers, Oracle has quickly emerged as one of the most flexible, delivery-agnostic IaaS vendors on the market. Of course, the other big component of Oracle’s “infrastructure anywhere” vision is multicloud, in which customers can run Oracle databases as native services hosted in the data centers of Oracle’s biggest hyperscaler competitors.

 

Not only does this move reflect a major maturity leap for Oracle, in which Oracle cozies up to its rivals to better address the needs of the customer, but it is also critical to the company’s financial strategy. In addition to giving Oracle the flexibility to allocate more capex dollars toward strategic compute and storage resources as opposed to land and buildings, this strategy will help Oracle get its on-premises database support base to the cloud faster. In doing so, Oracle may forfeit lucrative support and license contracts, but the company reports that for every $1 in lost license and support gross profit it could realize as much as $5 in gross profit in the cloud, which is a testament to how quickly the cloud business is growing.

 

The multicloud strategy is also one of the reasons Oracle awed financial analysts not only by raising its FY26 revenue targets by $1 billion, to $66 billion, but also by setting a FY29 goal of $104 billion. This target, backed by Oracle’s $99 billion RPO (remaining performance obligation) balance, implies an average corporate revenue growth rate of roughly 16% over the next five years. This kind of growth was once unheard of for Oracle, but with cloud now overtaking support as the biggest business, Oracle is a different company, and the OCI growth trajectory instills a degree of optimism in Oracle’s ability to disrupt a highly saturated market in the years to come.
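The growth math behind these targets can be sanity-checked in a few lines. The FY29 goal of $104 billion is stated above; the starting revenue base of roughly $53 billion in FY24 is an outside assumption used only for illustration:

```python
# Sanity check of the compound growth implied by Oracle's FY29 target.
# target_fy29 ($104B) is from the text; base_fy24 (~$53B) is an assumed
# starting point, not a figure from the article.
base_fy24 = 53.0     # assumed FY24 revenue, in $B
target_fy29 = 104.0  # stated FY29 revenue goal, in $B
years = 5

# Compound annual growth rate needed to reach the target from the base
cagr = (target_fy29 / base_fy24) ** (1 / years) - 1
print(f"Implied compound annual growth: {cagr:.1%}")
```

On these assumptions the implied compound rate lands in the mid-teens, broadly consistent with the roughly 16% average growth the targets suggest.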

Announcing Oracle Database@AWS

Based on interactions at Cloud World, it is clear the Oracle Database@AWS announcement was the most noteworthy. In our view, given Oracle already launched Oracle Database@Azure, and more recently Oracle Database@Google Cloud, which is now live in four regions, it was only a question of when, not if, Amazon Web Services (AWS) would partner with Oracle.

 

With this announcement, Oracle officially saved the biggest hyperscaler for last, onboarding all the critical partners it needs to migrate legacy database customers and accelerate cloud revenue growth. In terms of how this alliance will work, it is no different from the approach Oracle takes with Microsoft Azure and Google Cloud; Oracle will deliver the hardware and networking inside AWS data centers so customers can provision Oracle database services natively from the AWS console and have the system run in AWS, just as it would if it were hosted in OCI.

 

The approach of physically embedding OCI within other clouds as opposed to just bolting Oracle Database on to other infrastructure through a standard interconnection is important as it will not only give customers the native AWS, Azure and Google Cloud Platform (GCP) experiences they are used to, but also limit latency as the Oracle Exadata hardware is physically located with the appropriate hyperscaler.

 

One could argue Oracle is taking a lot of risk with this strategy, as it is essentially bringing customers and their data closer to AWS, Azure and GCP. But in the age of mounting competition, not to mention generative AI (GenAI), it is a risk worth taking. As one customer at a major financial services firm recently told us, “The GenAI decision makers will not be the old world relational database experts,” and these alliances could help ensure Oracle stays relevant in cloud GenAI discussions by making it easier for customers to use the data within Oracle Database for RAG (retrieval augmented generation), to fine-tune foundation models and build new applications using tools many customers are likely already using, like Amazon SageMaker.
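The RAG pattern the customer alludes to can be sketched minimally: retrieve the stored records most relevant to a question, then assemble them into a grounded prompt for a foundation model. Everything below — the in-memory record list, the bag-of-words similarity and the record text itself — is an invented stand-in for a real vector search over an enterprise database, shown only to illustrate the flow:

```python
# Minimal retrieval-augmented generation (RAG) sketch: rank stored records
# by similarity to a question, then build a context-grounded prompt.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a vector model
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented records standing in for rows held in a business database
records = [
    "Q3 invoice totals for the EMEA region rose 8 percent",
    "customer churn in the retail segment fell last quarter",
    "EMEA invoice disputes are handled by the shared services team",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Return the k records most similar to the question
    q = embed(question)
    return sorted(records, key=lambda r: cosine(q, embed(r)), reverse=True)[:k]

def build_prompt(question: str) -> str:
    # Ground the model's answer in retrieved context rather than open recall
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What happened to EMEA invoice totals"))
```

The design point is that the database, not the model, supplies the facts; swapping the toy similarity for a proper vector index changes the quality of retrieval but not the shape of the pipeline.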

 

We should also point out the concept of data gravity. Customers leveraging these multicloud services will still be established Oracle Database customers with some Oracle SaaS presence, and therefore the bulk of their business data gravity will naturally reside within OCI. Those customers may still be inclined to keep their databases within OCI and not extend to other clouds, but with this strategy, Oracle is at least giving them the option to do so. We expect that these multicloud offerings will gain a lot of traction among Oracle Database customers that have big application footprints on other clouds.

Oracle Analytics Is the Glue Between IaaS and SaaS

Analytics, and the ability to turn data into business insight, is the ultimate objective for nearly every organization. With popular tools like Power BI and Tableau as well as neutral data platforms like Snowflake on the market, customers have a lot of choices when crafting the analytics stack.

 

But customers have also made it clear they want to limit the integration burden, and one of the compelling things about Oracle’s approach to analytics is how it can store customers’ operational data from Fusion applications in the Autonomous Data Warehouse (ADW) for analytics as part of a single SKU. This approach, productized as Fusion Data Intelligence (FDI), reinforces the value of Oracle playing in both the SaaS and IaaS markets and its ability to deliver a unified solution.

Evolving the Data Lake Strategy and Competing as a Unified Solution

Access to operational data in the Fusion suite will remain the hallmark differentiator for FDI, but it is on the infrastructure side where Oracle took a big leap forward with the launch of Intelligent Data Lake. Oracle has been elevating its data lake strategy and positioning for some time, but this announcement puts Oracle more squarely into the space.

 

At its core, Intelligent Data Lake is a reworking of existing OCI capabilities, such as cataloging and integration, to create a single abstraction layer that, in true data lake fashion, allows customers to query data on object storage, such as Amazon S3 or Microsoft OneLake, with support for the popular Apache Iceberg and Delta Lake frameworks.
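A catalog-backed abstraction layer of the kind described can be sketched conceptually as follows. The table names, URIs and tiny reader functions are all hypothetical stand-ins; a real engine would parse Iceberg or Delta Lake metadata rather than call a lambda:

```python
# Conceptual sketch of a data lake abstraction layer: a catalog maps
# logical table names to a storage format and location, so callers query
# external object storage the same way they query native tables.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TableEntry:
    format: str                        # e.g., "iceberg", "delta", "native"
    location: str                      # e.g., an s3:// or native URI
    reader: Callable[[], list[dict]]   # stand-in for a real table scan

catalog: dict[str, TableEntry] = {
    "sales.orders": TableEntry(
        "iceberg", "s3://example-bucket/orders/",
        lambda: [{"order_id": 1, "amount": 120.0}],  # mock remote rows
    ),
    "hr.employees": TableEntry(
        "native", "adw://example/employees",
        lambda: [{"emp_id": 7, "dept": "finance"}],  # mock native rows
    ),
}

def scan(table: str) -> list[dict]:
    """Resolve a logical name through the catalog and read its rows.
    The caller never sees whether the bytes live in object storage or locally."""
    entry = catalog[table]
    return entry.reader()

print(scan("sales.orders"))
```

The single-catalog design is what lets one query plan span native tables and object storage; formats like Iceberg matter because their metadata makes such external tables cheaply discoverable and scannable.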

 

To be fair, with Fabric and BigLake, Microsoft and Google Cloud, respectively, have been similarly making advancements with the data lake architecture to better address analytics workloads. However, Oracle is not only adding the simplicity and performance benefits of the data lake but also delivering the architecture in a way in which customers can run the entire data pipeline and still have all the analytics components in a single SKU.

 

With Oracle’s launch of a native Salesforce integration with FDI, which allows customers to combine their CRM and Fusion data within the lakehouse architecture, Oracle’s vision of embedded clouds at the database layer is extending to analytics.

 

Though FDI’s draw will still be primarily with existing Oracle customers, the company is clearly taking steps to help combine Fusion with non-Fusion data and make its platform more relevant within the cloud ecosystem. While FDI may not rip and replace the analytics footprint within any particular account, we could see scenarios where FDI displaces some components of the stack, such as Snowflake at the infrastructure layer, or on the analytics side, Power BI in Microsoft Fabric.

New Applications Are Being Built on the Analytics Stack

In general, scaling the existing platform components of Oracle Analytics is a top priority for the company, but there is another emerging piece of the analytics vision: Intelligent Applications. Coming soon, Oracle will offer applications — People Leader Workbench for HCM and Supply Chain Command Center for SCM — that sit on top of the Fusion system of record, within the FDI platform.

 

This approach should allow Oracle to target a broader set of personas. For example, People Leader Workbench is not necessarily about reaching only the C-suite but rather anyone who manages people, can benefit from data-driven insights on their teams and, most notably, can act on that insight by connecting back to the Fusion HCM system of record.

What About GenAI?

GenAI has officially exited the hype cycle and is being widely deployed within the enterprise, but when it comes to analytics, capabilities like dashboarding, semantic models and visualization still take precedence. It is still early, but customer feedback suggests that if data is properly configured and guardrails are in place, GenAI in analytics has a lot of potential.
 


One of the key announcements at the event was the general availability of the Analytics Cloud AI Assistant in Oracle Analytics Cloud (OAC), which is based on a large language model (LLM) and lets customers ask questions about their data in natural language. Staying in line with the rest of the Oracle strategy, where GenAI is fully embedded into the portfolio and available to customers at no added cost, the analytics assistant will be available to OAC customers for free as part of their existing instances.

Speaking of SaaS and IaaS

From database alliances to the data lake architecture, Oracle has made many calculated moves at the PaaS layer to better compete for strategic workloads. But there are other innovations and key developments in the upper and lower rungs of Oracle’s cloud portfolio.

Oracle Targets Complete End-to-end Process Automation with AI Agents in Fusion Suite

Since it first entered the GenAI game in late 2023, Oracle has stood out in the SaaS market for not upcharging customers for GenAI in their SaaS applications. This speaks to Oracle’s play at the IaaS layer with the OCI GenAI Service, which is native to the same infrastructure where all of Oracle’s SaaS applications live.

 

Logically, this approach means that as Oracle’s LLM partners, which host in OCI, push the boundaries of their models, Fusion customers stand to benefit in not just using GenAI for basic assisted authoring and summarization use cases (e.g., writing a job description in Fusion HCM or summarizing customer calls in CX), but actually contextualizing data. In the long term, this could mean providing reasoning on that data to manage more complex workflows and deliver business recommendations.

 

At this time last year, Oracle announced 50 GenAI use cases in the SaaS suite. This year, the applications team announced the number of use cases has grown to over 100, while there are now more than 50 AI agents within the Fusion suite. This announcement marks a progression in how Oracle is moving from more generic prompt-and-response use cases in Fusion to actual contextualization use cases, by applying LLM-based RAG agents to address specific goals and roles within a particular business function. In Fusion HCM, this could include a benefits analyst agent, offering users the ability to ask questions, such as which health plan features are available, based on the enrollment data contained in Fusion HCM and the health plan document specific to the company.
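The retrieval-augmented generation (RAG) pattern behind such agents can be sketched roughly as follows. This is a toy illustration under stated assumptions, not Oracle's implementation: retrieval here is naive keyword overlap rather than embeddings, and the model call is a stub; the documents, function names and `stub_llm` are all invented for the example:

```python
# Minimal RAG sketch: retrieve the most relevant document for a question,
# then hand question + context to a language model. The "model" below is a
# stub that simply echoes the retrieved context line.

def retrieve(question: str, documents: list, k: int = 1) -> list:
    """Rank documents by how many question words they contain (naive)."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def answer(question: str, documents: list, llm) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(retrieve(question, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    return llm(prompt)


docs = [
    "Plan A covers dental and vision with a $500 deductible.",
    "Expense reports must be filed within 30 days.",
]
stub_llm = lambda prompt: prompt.splitlines()[1]  # echo the top context line
print(answer("Which plan covers dental?", docs, stub_llm))
```

A production agent would replace the keyword ranking with vector search over enrollment data and plan documents, and the stub with a hosted LLM, but the grounding step, which is what scopes the answer to company-specific data, is the same.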

 

But the most commonly cited example throughout the event was the Document IO agent in Fusion ERP, which can convert a picture of a quote in a particular currency into U.S. dollars and automatically create and load a purchase order (PO) within the system. With these AI agents, we see Oracle taking the next big step in addressing more complete process automation and productivity enhancements within its SaaS portfolio, and ultimately a shift in mindset where it is less about delivering an ERP or HCM system and more about completing end-to-end business processes and experiences.
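Stripped of the document-understanding step, the convert-and-create flow such an agent automates reduces to a small transformation. In the sketch below the OCR output is replaced by an already-parsed dict, and the exchange rates and PO field names are made up for illustration; none of this reflects Fusion ERP's actual data model:

```python
# Toy sketch of the quote-to-PO step: normalize the quote's currency to USD
# and emit a purchase order record. Rates and field names are illustrative.

RATES_TO_USD = {"EUR": 1.08, "USD": 1.0}  # hypothetical spot rates


def quote_to_po(quote: dict, po_id: int) -> dict:
    """Convert a parsed quote into a USD-denominated purchase order."""
    rate = RATES_TO_USD[quote["currency"]]
    return {
        "po_id": po_id,
        "vendor": quote["vendor"],
        "amount_usd": round(quote["amount"] * rate, 2),
    }


quote = {"vendor": "Acme GmbH", "amount": 1000.0, "currency": "EUR"}
po = quote_to_po(quote, po_id=1)
print(po)  # {'po_id': 1, 'vendor': 'Acme GmbH', 'amount_usd': 1080.0}
```

The agent's value is in chaining this deterministic step behind image understanding and system-of-record writes, turning a photo of a quote into a booked PO without manual rekeying.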

OCI Strategy Centers on Growing Within the Large Enterprise and Attracting Cloud-natives

Oracle’s ability to offer some of the most flexible cloud delivery methods in the market is the focus of the OCI strategy and strategic road map, led by high-profile partnerships with AWS and others. But Oracle’s strategy is about being agnostic to not only where customers run OCI but also how they run OCI.

 

For example, at Cloud World, Oracle announced Dedicated Region 25, a longtime investment and feat of engineering that consolidates a standard Oracle Cloud region into just three racks, which were physically displayed on the keynote stage. This configuration extends the value proposition of Dedicated Region, where customers can get the scale and economics of the public cloud inside their own data centers.

 

Dedicated Region 25 could also play a big role in helping Oracle reach new customers. Oracle’s multicloud alliances will undoubtedly be appealing to the large enterprise customer base, but offerings like Dedicated Region 25 could help Oracle attract cloud-native and AI companies looking for a more compact footprint that can still scale to support critical workloads.

Conclusion

Led by its partnership with AWS, Oracle Cloud World 2024 told a story of a maturing business that is turning competitors into partners to better address the needs of the customer. By keeping the lifeblood of the cloud stack, the database, relevant in customers’ cloud transformations, Oracle also ensures it remains competitive in GenAI scenarios, which aligns with the GenAI investments the company is making in other areas of the stack, from analytics to Fusion applications.

 

As the company continues to navigate as a full-stack vendor catering to the existing Oracle base, while simultaneously gaining relevance in the broader cloud ecosystem, there is a lot of potential ahead, and Oracle is well on its way to becoming a $100-plus billion company.

Diversification Into Other Verticals Is Critical to Amdocs Sustaining Long-term Growth

TBR Perspective: Amdocs Must Accelerate Push into Non-Telecom Verticals for Growth and Diversification

Amdocs has made substantial progress on its reinvention, diversifying its customer base, portfolio and business mix while shifting the market perception of the company from a traditional OSS/BSS provider to more of an ICT software transformation specialist. However, most of Amdocs’ transformation thus far pertains to the telecom industry; Amdocs still needs to transition from being a telecom-centric vendor to a multifaceted provider that supports a diversified mix of verticals. The pressure to move in this direction will intensify as the telecom industry’s challenges persist and Amdocs’ organic growth from the industry continues to slow.
 
Amdocs’ current situation is reminiscent of Tech Mahindra’s before it merged with Mahindra Satyam in 2013. Pre-merger, Tech Mahindra was largely viewed as a telecom-only shop and had minimal exposure to other verticals (the company’s revenue split was around 90% telecom and 10% other verticals pre-merger). This specialization helped Tech Mahindra differentiate and compete for business in the telecom vertical but kept it from benefiting from diversification and greater scale.
 
After the Mahindra Satyam merger was completed, Tech Mahindra became a multifaceted ICT services provider, with robust diversification across many verticals. Though TBR is not suggesting Amdocs should or will take a similar approach, Amdocs has already made several acquisitions that bring exposure to nontelecom verticals. However, these acquisitions are relatively small and have not brought transformational changes to the company’s business mix.
 
Amdocs has been involved in nontelecom verticals for at least a couple of decades, and TBR estimates Amdocs’ nontelecom revenue currently composes approximately 10% of the company’s total revenue. While Amdocs has yet to formalize its foray into nontelecom verticals, TBR notes that this is beginning to change as the company seems to be making a stronger push into the financial services vertical, as evidenced by acquisitions (especially Astadia, Projekt202 and Sourced Group) and an increase in dedicated resources to support that vertical.
 
Amdocs is also supporting a variety of brand-forward customers from other verticals, primarily via its Stellar Elements business unit, and is focused on opportunities to help companies in the utilities and media & entertainment verticals with IT and digital transformation.

Impact and Opportunities

Astadia Exposes Amdocs to Mainframe Migration Opportunities

One of Amdocs’ newest acquisitions, Astadia, plays into the nontelecom vertical theme and could serve as a key beachhead to winning more deals with nontelecom customers. Astadia is focused on helping mainframe users migrate to the cloud and has carved out a strong niche in the financial services industry, which is one of the verticals outside of telecom that Amdocs is focusing on. Helping companies migrate off mainframes plays well into Amdocs’ mission-critical transformation value proposition. Amdocs estimates there are 40,000 mainframe computers still in use worldwide by a range of companies and government entities, representing a significant opportunity for net-new business.

Competitor List for Products and Services Broadens for Amdocs

Amdocs’ string of acquisitions and new strategic initiatives, such as the partnership with Microsoft, broadens the scope of companies Amdocs now competes with, from both a products and services standpoint. Historically, Netcracker was Amdocs’ most formidable competitor in terms of portfolio overlap, but that list now includes companies like Salesforce, ServiceNow and Oracle. Meanwhile, on the services side, Amdocs is increasingly crossing paths with traditional C&SI companies, such as Accenture, Tata Consultancy Services and Tech Mahindra.

Amdocs Can Compete (and Win) Against C&SIs like Accenture, Just at Smaller Scale

Amdocs possesses all the capabilities required to drive customer IT and digital transformation, both for and beyond the telecom industry. Though the vendor is less than a tenth of the size of Accenture (which is arguably the benchmark vendor to emulate in the C&SI domain) in metrics such as revenue and headcount, Amdocs can still compete against Accenture and other C&SI firms and win business.
 
Amdocs needs to focus on its specialization in delivering migration and transformation for mission-critical software environments, a skill that is broadly applicable across verticals, as well as its leading KPIs for project completion rates.

There Is More Juice to Squeeze Out of CSPs, but Not Much

Amdocs boasts over 400 communication service provider (CSP) logos globally, including most of the top 50 CSPs, and in many of these accounts Amdocs is already the dominant provider in terms of the products it sells. Therefore, squeezing more revenue out of these customers (and/or taking more market share from competitors) will be increasingly challenging as telecom operators chronically struggle amid market maturity and anemic growth prospects, and resort to cost containment and M&A for additional economies of scale.
 
Amdocs is also proactively trying to move further down market, targeting smaller CSPs such as MVNOs and Tier 3 operators to sustain growth. However, this approach is unlikely to move the revenue needle significantly, given the largest CSPs globally account for well over 80% of the total telecom market opportunity.
 
GenAI Remains Exploratory; Automated, Scaled Usage of GenAI in Commercial Environments Is at Least a Year Away
Amdocs is actively exploring how generative AI (GenAI) can be incorporated across domains, both within its own company and for its customers. Thus far, the company is primarily utilizing GenAI internally for code development and focusing on contact center transformation for its customers. Amdocs’ strategic partnership with Microsoft broadly applies to AI co-innovation and go-to-market efforts, and the current focus is offering a joint solution for marketing and sales process automation.
 
Amdocs is also embedding Microsoft Copilot across its broader product portfolio. TBR notes that the GenAI-enabled “virtual agent” and process automation technology Amdocs showcased at the event were compelling and demonstrate a clear path to business value for CSPs.

Learnings From Partnerships with Hyperscalers Provide a Strong Beachhead Into Other Verticals

Amdocs has been learning a lot from its partnerships with Microsoft, Amazon Web Services and Google Cloud, especially as it pertains to implementing cloud migrations of ICT workloads and digital transformation. Specifically, Amdocs has obtained certifications, status and organizational alignment with hyperscalers. The skills and capabilities Amdocs has developed from the telecom ecosystem can be leveraged across other verticals. Solution cocreation also opens new doors for Amdocs, both within telecom and in other verticals.

Amdocs Makes Waves in CRM for Telecom Leveraging Microsoft Partnership

Amdocs has integrated Microsoft Dynamics (CRM) with Amdocs’ Customer Engagement Platform to offer marketing and sales automation solutions to its customers (TBR notes the joint solution, including the GenAI large language model it uses, is customized specifically for the telecom industry by amAIz, Amdocs’ TelcoGPT).
 
Microsoft Dynamics is integrated with Microsoft’s other key business productivity applications, such as Outlook, O365 and Teams, and the company’s Copilot is embedded across the stack, bringing customers improved outcomes. The joint Amdocs-Microsoft solution will enable the two companies to compete with incumbent CRM providers, especially Salesforce, Oracle and ServiceNow. TBR notes that the joint CRM solution is differentiated by the power of its GenAI platform, an aspect where incumbent CRM providers are lagging, and could displace incumbent CRM providers from CSP accounts. Deals would draw in Amdocs’ systems integration capabilities as well as other services, yielding larger deal sizes.

Conclusion

Amdocs has been navigating the increasingly challenged telecom market well, but with organic growth slowing, the company will need to seek out and accelerate into other areas for more sustainable, long-term growth. Amdocs’ incremental steps into other verticals, mostly via acquisitions, are moving the company in the right direction, but a larger magnitude shift is required.
 
This aspect of Amdocs’ reinvention would encompass the institution of formalized strategic, organizational and portfolio changes gearing the company toward addressing multiple verticals. Doing so would enable Amdocs to expand its total addressable market, diversify its business mix and hedge against downturns in the telecom industry.
 
As a first step toward formalizing its strategy in other verticals, TBR encourages Amdocs to start providing more information about its initiatives in verticals outside of telecom, which are known to be significant but are unquantified and minimally discussed, as these initiatives will become more important to Amdocs’ business results and growth profile over time.

Ericsson Aims to Accelerate Network API Market Development via New Venture with Leading Global Telcos

TBR Perspective

Ericsson’s Enterprise Wireless Solutions unit is exhibiting strong revenue growth and serves as a bright spot amid the company’s broader challenges. Ericsson has a compelling 5G-related portfolio that addresses the unique needs of enterprises ranging from SMBs to large industrial entities. Ericsson’s focus on enhancing its enterprise portfolio in areas including private cellular networks (PCNs), neutral host networks, fixed wireless access (FWA) and IoT will generate new revenue that will help to partially offset declining consolidated revenue, which is being negatively impacted by most Tier 1 operators decreasing network capex as they enter the later stages of 5G deployments.

 

Ericsson’s Enterprise segment has experienced challenges, however, namely declining revenue within its Global Communications Platform division, which includes Vonage. Ericsson appeared to overpay ($6.2 billion) for Vonage, which fits awkwardly within Ericsson’s historical core business. When the deal closed in 2022, it was primarily considered a down payment on developing a network API business with an unproven business model, and Ericsson has essentially confirmed that notion.

 

In October 2023, the company booked an SEK 32 billion ($3 billion) impairment charge on Vonage’s goodwill, writing off half of the acquisition price. The company took a further SEK 11.2 billion ($1.1 billion) noncash charge on the Vonage acquisition in July 2024. TBR believes Ericsson is correcting course, however, by collaborating more deeply with industry partners through its new network API joint venture, which will reduce fragmentation in the market and make it easier for developers to innovate and create new apps and use cases.

 

The joint venture will also provide Ericsson with a more risk-averse approach to tackling the network API opportunity by pooling funding and resources from the partners as the long-term market size for network APIs is uncertain. Ericsson will need to split proceeds from the joint venture with its partners, however, which will limit long-term revenue potential.

Ericsson Realizes the Need to Collaborate with Industry Partners to Accelerate Network API Development

The composition of Ericsson’s new network API joint venture, which currently does not have a formal name and is expected to close in early 2025 pending regulatory approval, entails Ericsson holding 50% equity in the venture, with the following telecom operators holding the remaining 50% of equity: América Móvil, AT&T, Bharti Airtel, Deutsche Telekom, Orange, Reliance Jio, Singtel, Telefónica, Telstra, T-Mobile, Verizon and Vodafone.

 

Vonage and Google Cloud will serve as channel partners for the joint venture, providing access to their ecosystems of millions of developers as well as their partners, and additional communication service providers (CSPs) and channel partners will be invited to join the entity in the future (Ericsson would maintain its 50% share in the venture if additional CSPs join). The goal of the joint venture is to create a platform that will provide network APIs to an ecosystem of developers, including hyperscalers, Communications Platform as a Service (CPaaS) providers, systems integrators and independent software vendors. The joint venture will be in alignment with existing industry network API initiatives, including the GSMA’s Open Gateway and the Linux Foundation’s CAMARA Project.

 

TBR believes the main benefit of the joint venture will be incentivizing developers to focus on the network API market by providing them with a simpler way to create apps at scale. For instance, developers currently need to engage with CSPs on a one-on-one basis to procure network APIs, which can be a slow and complex process. The joint venture aims to accelerate market development by providing combined common APIs that can work from any location or network. Reduced fragmentation will also speed market development as developers will be able to more fully concentrate on new use cases and applications rather than spending time modifying existing applications to make them compatible with networks on an operator-by-operator basis.
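The aggregation benefit described above can be illustrated with a toy routing layer: a developer calls one common function, and the layer dispatches to whichever operator serves the subscriber. The operator stubs, prefixes and function names below are hypothetical and do not represent actual CAMARA or joint-venture interfaces:

```python
# Conceptual sketch: one common API surface in front of per-operator network
# APIs. Each operator backend is a stub; routing is by number prefix.

def telco_a_verify(msisdn: str) -> bool:   # operator A's API (stub)
    return msisdn.startswith("+1")


def telco_b_verify(msisdn: str) -> bool:   # operator B's API (stub)
    return msisdn.startswith("+44")


OPERATOR_BACKENDS = {"+1": telco_a_verify, "+44": telco_b_verify}


def common_number_verify(msisdn: str) -> bool:
    """Single entry point: route to the operator that serves the number."""
    for prefix, backend in OPERATOR_BACKENDS.items():
        if msisdn.startswith(prefix):
            return backend(msisdn)
    raise LookupError("no participating operator for this number")


print(common_number_verify("+14155550100"))   # routed to operator A
print(common_number_verify("+447700900123"))  # routed to operator B
```

In the pre-aggregation world, the developer would have had to integrate `telco_a_verify` and `telco_b_verify` (and every other operator) separately; the common layer is what lets one app work across networks and geographies.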

 

Industry projections for the network API market are wide ranging, with Ericsson citing McKinsey & Co.’s projections that the market will generate around $100 billion to $300 billion in incremental connectivity and edge computing-related revenue for operators by 2030 and that an additional $10 billion to $30 billion in revenue will be generated from the APIs themselves.

 

TBR believes the market size of the segment will mainly hinge on network APIs being able to provide developers with differentiated and compelling capabilities that are distinct from existing 5G capabilities that are available independent of network API access. Enhanced capabilities enabled by network APIs include differentiated connectivity, device-based location, security (e.g., authentication) and network insights.

 

Current primary use cases for network APIs include simplified secure login for devices and advanced network authentication to strengthen fraud prevention. Other main use cases include enabling enhanced location verification and more reliable connectivity to support point-of-sale platforms, as well as optimizing the user experience for entertainment services such as video streaming and gaming applications.

 

Ericsson’s joint venture will create competitive pressures for Nokia, which is providing network API solutions via its Network as Code platform. Nokia has at least 14 Network as Code CSP partners as of June and aims to have more than 30 partners by the end of 2024. Nokia may be challenged in meeting this goal, however, due to potential CSP partners possibly being swayed by the ecosystem and benefits provided by Ericsson’s joint venture. Ericsson’s CSP partners are not tied exclusively to the joint venture, however, and have the option to join Nokia’s ecosystem as well.

For IT Services Companies and Consultancies, the New Joint Venture Could Be a Promising Change Agent in the Broader Ecosystem

From the perspective of global IT services companies and consultancies, such as Accenture, Infosys and Deloitte, Ericsson’s event theme, “Capture the value of enterprise 5G,” remained focused on Ericsson’s opportunities with and through telco operators while providing a modest opening for increased go-to-market and alliance activity.

 

Based on the event presentations, sidebar discussions with Ericsson leaders, and TBR’s analysis of Ericsson over the last two decades, we see two opportunities for Ericsson to enhance its ecosystem plays with IT services companies and consultancies that align well with Ericsson’s overall strategy.

 

First, TBR’s recent Voice of the Partner research shows that cloud and software vendors, OEMs, and IT services companies see 5G as a promising source of near-term growth, nearly on par with generative AI. To address their enterprise clients’ growing 5G needs, IT services companies and consultancies will need closer alliances with incumbent telcos and OEMs, including Ericsson. IT services companies and consultancies will not try to sell their own connectivity solutions but will readily partner to bring those solutions to their enterprise clients if informed, aligned and incented, particularly if the five-to-eight-times revenue multiplier applies to services attached to Ericsson’s hardware.

 

Second, TBR’s ecosystem reports, which cover a dozen leading global IT services companies’ relationships with Amazon Web Services (AWS), Google Cloud, Microsoft Azure, Adobe and Salesforce, confirm that scale remains a key differentiating characteristic, both for alliances managers across the ecosystem and for enterprise clients looking for multiparty, well-orchestrated technology solutions. Ericsson’s joint venture with the 12 operators, backed by Google Cloud as a channel partner, could be highly appealing as an alliance partner, bringing IT services companies and consultancies into contact with new personas within their enterprise clients and creating an expanded playing field for professional and managed services companies. In short, Ericsson’s new joint venture could be an ecosystem catalyst, provided it finds a go-to-market focus and well-led partnerships with the right IT services companies and consultancies.

Ericsson Launches Private 5G and Neutral Host Network Solutions Under its Ericsson Enterprise 5G Segment

At Ericsson Enterprise Industry Analyst Day in September, Ericsson reintroduced its Ericsson Enterprise 5G portfolio, which includes three solutions:

 

  • Ericsson Private 5G: A converged LTE/5G PCN solution with industry and licensed spectrum support
  • Ericsson Private 5G Compact: A U.S. CBRS-based solution designed for enterprises requiring connectivity that is more reliable than Wi-Fi. The solution was previously branded as Cradlepoint NetCloud Private Networks.
  • Ericsson Enterprise 5G Coverage: A turnkey neutral host solution that features certification from all Tier 1 U.S. operators. The solution can support up to three carriers per radio.

 

The relaunch of the Ericsson Enterprise 5G portfolio, in addition to the legacy Cradlepoint business now branded under this segment, will help Ericsson strengthen its messaging within the PCN market and better compete against Nokia, which TBR estimates is the second-largest PCN vendor by revenue globally (behind Huawei) and the largest when excluding China.

 

Ericsson Enterprise 5G Coverage is certified by AT&T, T-Mobile and Verizon, which will be a significant benefit as Ericsson aims to gain headway within the neutral host networks market. Neutral host networks are gradually gaining traction as they are easier to deploy compared to legacy distributed antenna systems (DAS) and can provide significant cost savings as they enable a single neutral host network to support customers from multiple operators without requiring each operator to deploy its own separate infrastructure.

 

Industrial sites, schools and hospitals are the primary locations where neutral host networks are initially being deployed, and Ericsson’s early customers for the solution include Toyota Forklifts in Indiana and engine manufacturer Cummins in New York.

Conclusion

TBR believes Ericsson is effectively positioning to capitalize on 5G-based solutions within the telecom enterprise space, including network APIs, PCNs and neutral host networks. Ericsson is aware that industry collaboration is essential for these segments to reach their peak potential, evidenced by the vendor’s initiatives including the formation of the network API joint venture and gaining certification from AT&T, T-Mobile and Verizon for its neutral host network solution.

 

Ericsson’s success in areas including network APIs, PCN and multi-access edge computing will be impacted by coopetition from hyperscalers within these segments. Though Ericsson has established partnerships with AWS, Google Cloud and Microsoft Azure within multiple portfolio segments, the company’s revenue opportunities will be limited as hyperscalers take a portion of revenue from enterprise deployments.

Nokia’s Fixed Networks Unit Poised for Long-term Growth Despite Market Challenges

TBR Perspective: Nokia’s Fixed Networks Business Unit

Nokia is the largest vendor of fixed network access infrastructure by revenue in the Western economic bloc, a position of strength that exposes the vendor to a range of opportunities that arise in the market. While Nokia remains focused on its fiber-based platform, the vendor is also supporting fixed-wireless access (FWA), which is a rapidly growing service offering in the telecom industry.
Though revenue in Nokia’s Fixed Networks business unit has been uneven over the past few years (primarily due to the disruptions caused by the COVID-19 pandemic), the unit is poised to be one of the biggest beneficiaries of government-supported broadband programs and ongoing internet service provider (ISP) investment in high-speed broadband access technologies, driving a positive revenue trend over at least the next three to five years.
 
Nokia is focused on expanding access to broadband (through fiber and/or FWA) and introducing a future-proof platform for ISPs to build upon. The company is trying to be everything to everyone in this domain by providing a near-complete portfolio (only DOCSIS is missing).
 
Despite Nokia’s favorable market position and government-induced tailwinds for the broadband infrastructure domain, TBR notes that the supply-and-demand dynamics as well as the timing of investments are prone to be disjointed, lengthening the time required to meet infrastructure deployment objectives compared to what was originally expected by the government and the telecom industry.
 
Additionally, TBR remains steadfast in its belief that building fiber out to every household is not economically feasible (despite what the government and stakeholders in the market say they want) and that alternative broadband access technologies (such as FWA and satellite) are going to increase in the global mix to connect the unconnected and underserved people of the world.

Impact and Opportunities for Nokia

BEAD Program Will Likely Stretch to the Mid-2030s due to Challenges and Delays

Broadband Equity, Access, and Deployment (BEAD) Program-supported projects are now slated to begin deployments in 2025, more than a year later than originally planned. There is a long list of reasons (most of which are related to mapping integrity and political processes) why the program has been delayed thus far, and there is a growing list of reasons that suggest it will take longer for the program to fully ramp up and complete its objective (i.e., spend all of the $42.5 billion allocated to the program).
Among the biggest challenges that lie ahead for the BEAD Program are shortages of skilled labor (e.g., fiber splicers and trenching machine operators) and of industrial equipment, such as boring machines, that will be required to deploy fiber to an estimated 5.5 million households across the U.S. Shortages of products that meet the Build America Buy America (BABA) requirements associated with the BEAD Program could also cause a timing and supply issue.
 
Taken together, TBR now believes the deployments tied to the BEAD Program will begin next year, and it could take until the mid-2030s for all of the program’s funding to be disbursed, more than five years longer than the government and market ecosystem originally anticipated. Nokia is doing as much as it can to mitigate and alleviate these potential challenges in the market.
 
For example, Nokia is proactively educating stakeholders in the ecosystem and working with its partners to better match supply with demand for products and resources. This orchestration of the ecosystem will help align stakeholders and enable the industry to put its best foot forward in carrying out this infrastructure build-out program as well as position Nokia to maintain and grow its leading share in the broadband infrastructure market.

Do Not Forget About Non-BEAD Government Programs for Broadband

Though the telecom industry likes to focus on the BEAD Program (likely because it is the largest program by dollar amount in the broadband ecosystem in the U.S. market), there are a variety of other government-supported programs that also deal with broadband, including the American Rescue Plan Act (ARPA), the Rural Digital Opportunity Fund (RDOF), the U.S. Department of the Treasury’s Capital Projects Fund, the Tribal Broadband Connectivity Program, and the U.S. Department of Agriculture’s ReConnect Loan and Grant Program.
 
In aggregate, TBR estimates there is more than $80 billion in direct and indirect government stimulus allocated for broadband-related projects in the U.S. market alone, all of which is slated to be spent by the mid-2030s. There are also a few hundred billion dollars in aggregate in similar broadband-implicated programs in other regions, most notably in China, the European Union, the U.K. and Australia.

Fiber Access Technology Capabilities Exceed Usability, Creating a Conundrum for Vendors

Technological innovations pertaining to fiber access have become so advanced and the bandwidth available through fiber access so massive that the capabilities of the technology far exceed what most end customers could possibly need or use. This disconnect creates a conundrum for vendors such as Nokia that supply the broadband infrastructure market.
 
Though fiber broadband infrastructure is, and will remain, in high demand, most ISPs will be loath to adopt the most cutting-edge technologies because they far exceed what customers would need and put unnecessary additional cost burden on the operator.
 
There are exceptions, such as what Google Fiber and Frontier Communications are deploying (50G and 100G connections, respectively), but TBR believes most ISPs will focus on 10G or lower connections, which provide more than enough bandwidth for the vast majority of households and businesses and are likely to remain future-proof for many years to come.

Overbuilding and One-upmanship Risks New Price War for High-speed Internet Service

The government funding boost, coupled with technological advancements and new entrants into the ISP domain, is creating a situation that is ripe for a price war for broadband services. Specifically, many more markets across the U.S. are likely to have three or more (in some cases up to seven) providers of high-speed broadband service in a given area, including xDSL, FTTx, HFC (via DOCSIS) as well as FWA and satellite (mostly delivered via low Earth orbit [LEO] satellites).
 
Given that a provider typically needs to have more than 30% market share in a given area to achieve profitability in the broadband services market, an increasing number of options puts more power into the hands of end users, which historically suggests the pricing environment will be extremely competitive.
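The share arithmetic behind this point can be made concrete. A minimal illustrative sketch (not a TBR model): if subscribers in an area split evenly among N providers, everyone falls below the ~30% profitability threshold cited above once four or more providers compete.

```python
# Illustrative sketch only: even-split market share vs. the ~30% threshold
# for broadband profitability cited in the text.
PROFITABILITY_SHARE = 0.30  # approximate threshold from the text

def even_split_share(n_providers: int) -> float:
    """Market share each provider holds if subscribers split evenly."""
    return 1.0 / n_providers

for n in range(2, 8):
    share = even_split_share(n)
    print(f"{n} providers -> {share:.0%} each, "
          f"above threshold: {share >= PROFITABILITY_SHARE}")
```

Real markets do not split evenly, of course, but the sketch shows why markets with up to seven providers are structurally prone to price competition: most participants cannot reach a profitable share.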
 
In response to the hotter competitive environment, providers that are multiservice-oriented are trying to attract and lock in market share by offering converged (aka bundled) solutions, usually giving end users a discount as an incentive to sign up and stay.
 
Additionally, TBR notes that ISPs are increasingly engaging in one-upmanship (which is also a symptom of the existence of too many options in a given market), meaning ISPs are marketing ever higher broadband speeds to customers to position their offerings as better than the competition while attempting to incrementally increase average revenue per user.
 
Though this strategy has been effective in years past, it is likely to lose efficacy after speeds surpass the level at which the benefits of faster speeds become imperceptible to end users. Therefore, in aggregate, TBR expects the pricing environment in the U.S. for broadband service to be increasingly competitive through at least the remainder of this decade.

Private Equity Comes into the Fixed Broadband Market

Private equity firms are entering the telecom infrastructure market in a big way, gobbling up assets and forging joint ventures with telcos that want to (or need to) raise capital and hedge their risks. Some private equity-sponsored entities are also now building out their own greenfield fiber-based networks (such as Brookfield Infrastructure Partners’ Intrepid Fiber Networks) and are even moving the market toward wholesale, shared and other forms of open-access models.
 
Though the entrance of private equity into the broadband infrastructure domain is bringing large pools of fresh capital into the market, this trend also risks fueling overinvestment, price compression and disruption of incumbent ISPs’ business models. Regardless, TBR expects private equity to remain attracted to assets that offer consistent cash flow over a long duration, and its inclusion in the telecom ecosystem is likely a net positive for overall market development and evolution.

Existing Government Stimulus May Still Not Be Enough for FTTP; Alternatives Will Likely Be Called on at Scale to Fill in the Gaps

Though governments (and most stakeholders in the telecom ecosystem) across the world want full fiber to every premises, this is still not economically feasible everywhere. For example, it is not uncommon for some locations in the U.S. to cost upward of $1 million per premises to connect with fiber, a price that is politically difficult to justify and is not supported by normal market conditions. In these extreme situations, it is highly likely that governments will allow and embrace alternatives, such as FWA and satellite-based connectivity.
 
TBR notes that FWA and LEO constellations can easily deliver sustained speeds in excess of 100Mbps at a fraction of what it would cost to deploy fiber to the premises (FTTP). With that said, of the estimated 5.5 million households the government has identified as needing a broadband connection in the U.S., TBR would not be surprised if up to 25% are ultimately connected via FWA or satellite (enhancements to DOCSIS and xDSL are also potential options to close the underserved gap). In other countries, that percentage could be even higher.

New Business Models Hold Promise to Connect Low-income Households in Emerging Markets

Upstart ISPs, such as fibertime and Vulacoin in South Africa, have established innovative solutions to cost-effectively provide high-speed broadband services to low-income areas. The network architecture emphasizes FWA and Wi-Fi with a relatively small amount of fiber, and the business model centers on selling units of time (in minutes), which is more affordable for lower-income end users.
 
TBR notes this model requires scale and high utilization to achieve profitability, meaning it is best suited for dense areas, especially impoverished neighborhoods. TBR also notes that access to high-speed internet is a key avenue by which areas can strengthen their local economies and reduce poverty.
 
In addition to South Africa, Brazil is also exploring the use of this model. This approach is also likely to be leveraged in other parts of Africa as well as in parts of India and Southeast Asia.

Conclusion

Government and private equity involvement in the broadband market may prove to be a mixed blessing. Though there are concerning indicators that some key markets (especially the U.S.) have too many broadband providers and that broadband access businesses are becoming overvalued, these market dynamics actually represent tailwinds for Nokia. The company is best positioned to garner a disproportionate share of the investment slated for broadband infrastructure in the Western economic bloc, which includes North America, Europe, developed APAC and select developing markets such as India.
 
Nokia’s outsized and unique position in the broadband infrastructure ecosystem enables the company to play a key role in orchestrating partners and customers to achieve their objectives in the most optimal way possible. Fiber will remain the coveted access medium for high-speed broadband, but the world will also employ other broadband access mediums to a large extent.
 
New ISP and hyperscaler business models, coupled with sustained investments by incumbent ISPs and supported by government stimulus, create an environment ripe for moving the world closer to full broadband coverage for all people.

Atos Powers 2024 Paris Olympics and Paralympics with Cutting-edge IT and AI Solutions

Atos, the worldwide IT partner for the Summer and Winter Olympic and Paralympic Games, invited a group of industry analysts to the 2024 Paris Olympics. The goal of the event was to show Atos in action during the Games with a tour of the Technology Operations Center in Paris, one of the three locations responsible for delivering IT services and keeping the Games running. The analysts also attended a swimming competition at Paris La Defense Arena to experience firsthand the secure, digital environment Atos and its partners provide in running the IT systems behind the Games.

The Olympics Must Run Flawlessly; There Are No Second Chances

Atos utilized its well-established expertise in the sports and entertainment industry to provide IT services for the 2024 Paris Olympics and Paralympics and to enable a secure and digital experience for end users, an audience that typically totals approximately 4 billion viewers globally. Atos has been providing services for the Olympic Movement since 1989. The company established its relationship with the International Olympic Committee (IOC) as a Worldwide IT Partner in 2001 and provided IT services for its first Winter Olympics in 2002 in Salt Lake City. Keeping the IT systems behind the Olympics running uninterrupted every two years requires dedication and strict execution of processes and timelines.

 

According to Angels Martin, general manager Olympics at Atos, “Olympics challenges are similar to other projects; the difference is visibility [of the Games]. No one will postpone the opening ceremony because Atos is not ready.” Martin also explained that cybersecurity management is a vital activity that Atos provides as the Games are one of the most targeted events in terms of cyberattacks, which could threaten the smooth functioning of the Olympics. She also stated that the Games are complex to manage with multiple parties, such as the IOC, sports federations, broadcasters and journalists, requiring services and access to information 24/7 from anywhere on any device. Martin also noted that demand for information has changed significantly since the first engagement 30 years ago, and today Atos is applying AI-driven solutions to enable processes for the Games. For example, Atos used AI solutions for the 2024 Paris Olympics to support the Organising Committees for the Olympic Games in providing scenarios for matching volunteers with job positions based on skills and abilities. In the 2020 Tokyo Olympics Atos provided an AI solution for facial recognition for venue access using accreditation.

Atos Integrates Critical IT Systems and Manages Partners to Run the Games

Atos is responsible for integrating critical IT systems, managing programs with IT vendors that deliver services for the Organising Committees for the Olympic Games, supporting critical applications for the Games and providing security services to enable smooth and uninterrupted running of the Games. For example, for the 2024 Paris Olympics and Paralympics Atos operated the Olympic Management System, which included a volunteer portal, a workforce management system, athlete voting applications, sport entries and qualifications, competition schedule and accreditation. Atos was responsible for the Olympic Diffusion System, which contained Olympic data feed, web results, mobile apps for results, a Commentator Information System, an information system for journalists called MyInfo, and a print distribution system. Atos was also responsible for cloud orchestration between private cloud, public cloud services and data centers at venues.

 

Additionally, Atos applied its expertise around working with a diverse group of technology partners to help run the Games and provided systems integration of applications with other IT providers and partners. Atos integrated partners, such as technology providers, media, the IOC, Organising Committees for the Olympic Games, and security providers, to ensure efficient delivery, operations, timelines and venue management activities. Atos also helped coordinate responses on daily activities and addressed critical events when they occurred. For example, Atos worked with Omega, the timing and scoring sponsor of the 2024 Paris Olympics, to relay results and data to spectators globally in real time. Omega captured raw data around timing and scoring, fed the results into scoreboards and videoboards at venues jointly with Panasonic, and provided data to Atos to feed into the Commentator Information System.

Atos’ Olympics and Paralympics Achievements

Achievements from the 2020 Tokyo Olympics and the 2024 Paris Olympics show the magnitude of work Atos provides. Atos manages approximately 900 events to transmit results instantly from competition and noncompetition venues. The company utilized the volunteer portal to process 200,000 volunteer applications prior to the 2020 Tokyo Olympics, and the number of volunteer applications swelled to 300,000 for the 2024 Paris Olympics. According to Atos, one of the most complex activities around managing people for the Olympic and Paralympic Games is assigning volunteers to the large number of necessary positions. For the 2024 Paris Olympics and Paralympics, Atos improved the volunteer assignment process by implementing an optimized pre-assignment scenario model and an AI-based solution that utilized constraint logic programming to improve position matchups. At the 2020 Tokyo Olympics Atos issued 535,000 accreditations through the system and established 350 accreditation checkpoints with facial recognition in all competition and noncompetition venues. Additionally, cloud usage at the 2020 Tokyo Olympics enabled Atos to reduce the number of physical servers by 50% and improve sustainability.
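To make the constraint-based matching idea concrete, the following is a deliberately tiny toy sketch of volunteer-position assignment via backtracking search over constraints (skill fit and position capacity). All names, skills and constraints here are hypothetical illustrations; Atos’ actual constraint logic programming system is far more sophisticated.

```python
# Toy constraint-satisfaction sketch of volunteer-position matching.
# Hypothetical example only; not Atos' implementation.

def assign(volunteers, positions):
    """Backtracking search: map each volunteer to a position whose required
    skill they hold, without exceeding the position's capacity."""
    names = list(volunteers)
    slots = {p: cap for p, (_skill, cap) in positions.items()}

    def backtrack(i, plan):
        if i == len(names):
            return plan  # every volunteer placed
        name = names[i]
        for pos, (skill, _cap) in positions.items():
            if skill in volunteers[name] and slots[pos] > 0:
                slots[pos] -= 1
                result = backtrack(i + 1, {**plan, name: pos})
                if result is not None:
                    return result
                slots[pos] += 1  # undo the choice and try the next position
        return None  # dead end: triggers backtracking in the caller

    return backtrack(0, {})

volunteers = {"Ana": {"first_aid"}, "Ben": {"languages", "first_aid"},
              "Chloe": {"languages"}}
positions = {"medical": ("first_aid", 1), "info_desk": ("languages", 2)}
print(assign(volunteers, positions))
```

At Olympic scale (hundreds of thousands of applicants, many constraint types), this naive search would be replaced by a dedicated constraint solver, but the structure of the problem, variables, domains and capacity constraints, is the same.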

Every Two Years Atos Organizes Upcoming Games

Typically, pre-project activities for each Olympic Games begin six years prior to the event. For example, pre-project activities for the 2024 Paris Olympics and Paralympics began in 2018, and planning began in 2020 with the development of a master plan and strategy and related responsibilities matrix. In November 2020 Atos appointed the first core team for the 2024 Paris Olympics and Paralympics. In 2021 Atos began designing business requirements and systems infrastructure and established a test lab, and in 2022 the company initiated the building of systems and expanded the testing facility. In June 2023 Atos launched testing activities such as integration tests, acceptance tests, systems tests, events tests and multisport tests to prepare for operating the Games in 2024. During the first several months of 2024, Atos worked on venue deployment, disaster recovery and technical rehearsals.

 

For example, between May 13 and May 17 Atos completed the final technology rehearsal for the 2024 Paris Olympics and Paralympics. The rehearsals, which took place across different locations in Paris and other sites of the Olympic and Paralympic Games, were designed to test IT policies and procedures and how well IT teams can collaborate and handle real-time situations that may impact the Games. Atos is the IT integration leader and coordinates with the Organising Committee for the Olympic Games and with experts and technology partners. The technology rehearsals were conducted in 39 venues, including Atos’ Central Technology Operations Center in Barcelona, Spain, and venues specific to the Games, such as Atos’ Technology Operations Center in Paris, the Main Press Center, the Stade de France and competition venues.

 

The Olympic Games resemble a large-scale international corporation, mobilizing approximately 300,000 people for the duration of the Games. Atos provides IT services with teams located in the host city and in Atos’ facilities in Poland, Morocco and Spain, and serves competition results to more than 4 billion viewers globally. While Atos must set up a new organization for each Summer and Winter Games, the company has a well-established process and deep experience with starting over. Every two years Atos establishes a Technology Operations Center (TOC) in the host city of the Summer or Winter Games. The TOC is the technology command and control center that houses teams from Atos, the IOC, the Organising Committees for the Olympic Games and other technology partners. It consists of approximately 300 people who are coordinated by Atos and available 24/7 while the Olympics and Paralympics are running. Atos also has a Central Technology Operations Center (CTOC) in Barcelona, which is organized in a similar manner as the TOC in the host city. The CTOC delivers remote support during competitions and critical events, such as the volunteer campaigns, and orchestrates applications for the Games; it consists of approximately 80 people who provide services around operations, architecture, security, infrastructure and data management. Atos also has an Integration Testing Lab in Madrid that manages system testing for the Games.

 

Atos Adds New Clients in the Sports and Entertainment Industry

Atos’ engagement with the IOC ends with the 2024 Paris Olympics and Paralympics. However, Atos has been expanding its client roster in the sports and entertainment industry, applying its vast experience gained from the Olympics. In December 2022 Atos signed an eight-year deal with the Union of European Football Associations (UEFA) to be the official technology partner for men’s national team competitions. Atos is assisting UEFA in managing, improving and optimizing its technology landscape and operations. Atos is also managing and securing the hybrid cloud environment and infrastructure that hosts UEFA’s services, applications and data. In July Atos announced that it had successfully delivered key IT services and applications supporting the UEFA EURO 2024 from June 14 to July 14. Atos supported UEFA systems such as accreditation, access control solutions and competition solutions. Atos managed core IT systems through its football service platform and stored and distributed UEFA football data to stakeholders. Atos is the official IT partner of UEFA National Team Football until 2030.

 

Conclusion

Atos has a well-established position and history of operating in the sports and entertainment industry. Expanding its client roster with organizations such as UEFA will help the company maintain its reputation as a reliable IT services provider and innovation partner for major events. Enabling the running of complex events such as the Summer and Winter Olympic Games and the UEFA EURO 2024 championship provides global visibility of Atos’ capabilities and brand and enables the company to augment its client base in the industry.

Investing Big in GenAI Today: The Key to Unlocking Massive Long-term Returns

GenAI requires massive investment now for a chance at massive long-term returns

For most new technologies and trends in the IT space, actual business momentum and revenue generation typically take years to develop. In fact, in many cases, particularly with new technologies available to consumers, monetization may never develop, as the expectation of free trials or advertising-led revenue streams never leads to sustainable business models.

 

The history around monetizing new technologies is what makes the rise of generative AI (GenAI) over the past 18 months so notable. In such a short period of time, we have tangible evidence from some of the largest IT vendors that billions of dollars in revenue have already been generated in the space, with the expectation that even more opportunity will develop in the coming years.

 

AI and GenAI revenue streams have not come without investment, however, as the infrastructure required to enable the new technology has been significant. The three major hyperscale cloud providers have borne the brunt of this required investment, outlaying billions of dollars to build out data centers, upgrade networking and install high-performance GPU-based servers. Amazon Web Services (AWS), Microsoft, Google and other cloud platform providers were already spending tens of billions annually to maintain and expand their cloud service offerings, and GenAI adds significantly to that investment burden.

 

The early revenue growth resulting from GenAI offerings has been promising, but put in the context of the increased investment required, it becomes clear that the business impacts of the technology will play out over an extended time period. Most public companies execute quarterly, plan annually and, as a stretch, project their expectations out over three to five years.
 
The impact of GenAI extends even further, as Microsoft CFO Amy Hood stated on the company’s fiscal 4Q24 earnings call: “Cloud and AI-related spend represents nearly all of our total capital expenditures. Within that, roughly half is for infrastructure needs where we continue to build and lease data centers that will support monetization over the next 15 years and beyond.” That means not only that Microsoft spent $19 billion on capital expenditures during a single quarter to support cloud and AI but also that the time horizon for the returns on that investment stretches beyond a decade.
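A rough back-of-envelope illustrates the horizon Hood describes. Only the $19 billion quarterly capex figure, the "roughly half" infrastructure split and the 15-year horizon come from the text; the straight-line amortization treatment is a simplifying assumption, not a Microsoft disclosure.

```python
# Back-of-envelope sketch of the payback math implied by Hood's comments.
# Straight-line amortization is an illustrative assumption only.
quarterly_capex_b = 19.0   # $B capex in the quarter, from the text
long_lived_share = 0.5     # "roughly half" tied to data center builds
horizon_years = 15         # monetization horizon per Hood

annual_amortization_b = quarterly_capex_b * long_lived_share / horizon_years
print(f"~${annual_amortization_b:.2f}B per year of cost recovery for one "
      f"quarter's long-lived build-out, spread over {horizon_years} years")
```

Even under this simplified view, a single quarter of build-out implies a recurring recovery obligation stretching well into the 2030s, which is why TBR frames these investments as multidecade bets rather than near-term revenue plays.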

 

Microsoft is, in this way, representative of its cloud platform peers: investing huge sums in capital expenditures now to realize modest new revenue streams in the short term while anticipating significant revenue opportunity over the next 20 years.

AI & GenAI versus Capital Expenditures (Amazon Web Services, Microsoft, Google and Oracle)

AI-related revenue is already considerable, with growth expected to persist

TBR estimates the four leading cloud platform vendors generated more than $12 billion in revenue from AI and GenAI services in 2023, which is in and of itself a sizable market. On top of that, we expect revenue from those four vendors to increase by 71% during 2024.
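The arithmetic implied by these two estimates is straightforward; a quick sketch using only the figures in the text:

```python
# Implied 2024 market size from TBR's estimates cited above.
revenue_2023_b = 12.0   # $B: TBR's 2023 estimate for the four leading vendors
growth_2024 = 0.71      # TBR's expected 2024 year-to-year growth rate

revenue_2024_b = revenue_2023_b * (1 + growth_2024)
print(f"Implied 2024 AI/GenAI revenue: ~${revenue_2024_b:.1f}B")
```

In other words, the four leading cloud platform vendors' combined AI and GenAI revenue would land at roughly $20 billion or more in 2024 if that growth rate holds.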

 

A market of that scale and growth trajectory is notable in an IT environment where much more modest growth is the norm. While we expect growth to gradually slow and normalize over the coming years, the AI and GenAI markets remain attractive nonetheless. Below are examples from some of the largest monetizers of GenAI so far, with estimates of the current size of their respective businesses and insights into how they are monetizing.

 

Microsoft (estimated $10 billion in GenAI revenue annually): While Microsoft did not quite meet Wall Street’s lofty expectations for AI-related revenue growth, the company posted a solid quarter in 2Q24. In TBR’s opinion, Microsoft’s GenAI strategy is on the right track, and its financial results align closely with our expectations. In 2Q24 Azure AI services contributed 8 points of Azure’s 29% year-to-year growth, while Copilot was cited as a growth driver for Office 365.

 

Nevertheless, with Office 365 revenue growth decelerating compared to past quarters, it is clear the monetization of GenAI will take time to materialize. Still, given Microsoft’s current capex spend and forecast, the company remains committed to its AI strategy. Management stated that nearly all of the $19 billion in capital expenditures this quarter was focused on the cloud business, with roughly half going toward data center construction and the other half used to procure infrastructure components like GPUs.

 

This hefty commitment indicates that GenAI will remain at the forefront of Microsoft’s product development, go-to-market and partner strategies for years to come as the company looks to turn an early lead into an established position atop the AI and GenAI market.

 

AWS (estimated $2.5 billion in GenAI revenue annually): During AWS’ New York City Summit event in July, Matt Wood, the company’s VP of AI Products, noted that GenAI had already become a multibillion-dollar business for the company. Amazon CEO Andy Jassy has also spoken confidently about the future of AI, publicly proclaiming the company’s belief that GenAI would grow to generate tens of billions in revenue in the coming years.

 

Notably, AWS had been playing in AI infrastructure, with custom chip lines for both training and inference, well before the GenAI hype cycle. Customers are unlikely to undertake the daunting task of moving off industry-standard hardware, but these custom offerings can still be a more cost-effective option for net-new workloads, which is one reason they hold significant potential for GenAI.

 

AWS’ custom offerings, coupled with tools that customers use to build and fine-tune models, such as Bedrock and SageMaker, will continue to spin the EC2 meter. AWS does have other GenAI monetization plans with a two-tiered pricing model for Amazon Q Business and Q Developer. However, it is still early days for these offerings, and Microsoft Copilot entering the mix, at least from the line-of-business (LOB) perspective, clearly indicates AWS faces an uphill battle.

 

Google Cloud (estimated $2 billion in GenAI revenue annually): Unlike some of its peers in the industry, Alphabet has not clearly quantified the impact that GenAI is having on Google Cloud’s top line. However, on Alphabet’s recent earnings call, executives said that GenAI solutions have generated billions of dollars year to date and are used by “the majority” of Google Cloud’s top 100 customers.

 

These results, coupled with a 40-basis-point acceleration in Google Cloud’s 2Q24 revenue growth rate, to 28.8%, signal that while GenAI is having an impact on Google Cloud Platform (GCP) revenue growth, it is very early days. Steps Google Cloud is taking to boost developer mindshare — with over 2 million developers using its GenAI solutions — and align with global systems integrator (GSI) partners to unlock new use cases, leave us confident Google Cloud can more aggressively vie for GenAI spend through 2025.

 

ServiceNow (less than $100 million in GenAI revenue annually): With Now Assist net-new annual contract value (NNACV) doubling from last quarter, ServiceNow’s steady momentum selling GenAI to the enterprise continues. Now Assist was included in 11 deals over $1 million in annual contract value (ACV) in 2Q24, showing positive early signs that the strategy of packaging premium digital workflow products based on domain-specific large language models (LLMs) is resonating.

 

At 45%, ServiceNow’s Pro SKU penetration rate, which represents the percentage of customer accounts on Pro or Enterprise editions of IT Service Management (ITSM), HR Service Delivery (HRSD) and CSM products, is already very strong. Upgrading these already premium customers to Pro Plus SKUs with GenAI, for which ServiceNow has already realized a 30% price uplift, could signify an opportunity for ServiceNow valued at well over $1 billion. Naturally, a big focus is expanding the availability of Pro Plus outside the core workflow products.
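A hypothetical back-of-envelope shows how a 30% uplift can pencil out to more than $1 billion. Only the 45% Pro penetration rate and the 30% price uplift come from the text; the account count and average Pro ACV below are illustrative assumptions, not ServiceNow disclosures.

```python
# Illustrative Pro Plus upsell opportunity sizing. The account count and
# average ACV are hypothetical assumptions chosen for illustration only.
pro_penetration = 0.45     # share of accounts on Pro/Enterprise (from text)
price_uplift = 0.30        # realized Pro Plus price uplift (from text)
total_accounts = 8_000     # ASSUMPTION: addressable enterprise accounts
avg_pro_acv = 1_000_000    # ASSUMPTION: average Pro ACV per account, $

opportunity = total_accounts * pro_penetration * avg_pro_acv * price_uplift
print(f"Illustrative Pro Plus uplift opportunity: ${opportunity / 1e9:.2f}B")
```

Under these assumed inputs, the uplift alone clears the $1 billion mark before accounting for any expansion of Pro Plus beyond the core workflow products.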

 

IBM (less than $2 billion in GenAI revenue annually): Approximately 75% of IBM’s reported $2 billion in GenAI book of business to date stems from services signings, and IBM lands nearly all watsonx deals through Consulting. Companies need help getting started with GenAI in the cloud, and IBM’s ability to lead with Consulting and go to market as both a technology and consulting organization will continue to prove unique in the GenAI wave.

 

On the software side, overcoming challenges with the Watson brand and deciding how much it wants to compete with peers have been obstacles, but IBM is now strategically pivoting around the middleware layer, hoping to act as a GenAI orchestrator that helps customers build and run AI models in a hybrid fashion. This pivot has resulted in a series of close-to-the-box investments, including Red Hat’s InstructLab project, which allows customers to fine-tune and customize Granite models, and IBM Concert for application management.

 

According to IBM, these types of GenAI assets have contributed roughly $0.5 billion to IBM’s AI book of business. By adopting a strategy to embed its AI infrastructure software into the cloud ecosystem of GenAI tools and copilots already widely accepted by customers, IBM ensures it stays relevant with these cutting-edge workloads.

 

Oracle (less than $100 million in GenAI revenue annually): With the Oracle Cloud Infrastructure (OCI) GenAI Service hitting general availability in January and a code assist tool only recently launched into preview, Oracle has been late to the GenAI game. But the company has highlighted several multibillion-dollar contracts for AI training on OCI, which speaks to its tight relationship with NVIDIA and ample supply of GPUs.

 

As an API-based service providing out-of-the-box access to LLMs for generic use cases, the OCI GenAI Service on its own does not necessarily differ from what other hyperscalers are doing. What does stand out is that Oracle offers the entire SaaS suite. Given that all Fusion SaaS instances are hosted on OCI, where the GenAI service was built, Oracle can deliver GenAI capabilities to SaaS customers at no added cost.

 

This means Oracle’s GenAI monetization will come purely from an infrastructure perspective. GPU supply and the cost efficacy of OCI will help Oracle bring new workloads into the pipeline, and we expect a bigger impact on growth in 2025. For context, Oracle’s remaining performance obligations balance is $98 billion (though some of it includes Cerner).
 


Beyond revenue generation, cost savings is part of the value proposition for cloud vendors and customers alike

Many of the leading IT vendors’ GenAI strategies have centered on investing in solutions for customers. However, vendors have also been serving as customer zero for the technology by implementing it internally. The results from their early implementations mirror end-customer use cases, which focus on cost savings and efficiency as the easiest benefits to realize. While many IT vendors have seen operating expenses and headcount level off over the past couple of quarters, implying that AI has had some impact on company efficiency, IBM and SAP have both explicitly cited AI’s impact on their operating models.

 

IBM was one of the earliest vocal proponents for the labor-saving benefits AI could bring to its business. In mid-2023 CEO Arvind Krishna announced a hiring freeze and shared an expectation that AI would replace 8,000 jobs. IBM remains focused on driving productivity gains, which it is largely doing by lowering the internal cost of IT and rebalancing the global workforce. This includes using AI to automate back-office functions. Such efforts have IBM on track to deliver a minimum of $3 billion in annual run-rate savings by the end of 2024.

 

Meanwhile, SAP’s decision to increase its planned FTE reallocation from a previous target of 8,000 to a new range of between 9,000 and 10,000 FTEs shows the company is committed to improving operating efficiency. While the bulk of the restructuring will consist of reallocating FTEs into lower-cost geographies and strategically important business units, taking a customer-zero approach with GenAI is also a component. SAP is leveraging business AI tools focused on areas like finance & accounting and human resources to reduce the labor intensity within the respective business units.

Just like end customers, vendors are investing significantly now in hopes of generating long-term GenAI returns

As seen in TBR’s Cloud Customer Research streams, customers have been investing in GenAI solutions with some haste, forgoing clear ROI measurements or typical budgeting procedures. Customers, as well as the major vendors we cover, have a sense of urgency around GenAI and share the feeling that if they do not embrace these new solutions now, it could place them at a long-term competitive disadvantage. If customers are not making full use of GenAI capabilities, their competitors will be more efficient and productive and capture more growth opportunities. For vendors, the ability to not only deliver GenAI capabilities but also do so at scale will be a competitive necessity for decades to come.

 

In this regard, customers and vendors find themselves in a similar situation, investing in GenAI now for the possibility of a future advantage, but the scale of investment required is quite different. Customers have the good fortune of leveraging scalable, subscription-based services for many of these GenAI technologies. Customers are still extending their IT budgets and paying more to incorporate GenAI, but they do not have large fixed costs and long-term commitments at this point.

 

Vendors, on the other hand, need to make significant investments, even beyond the already huge levels of investment to support cloud services, to capitalize on the GenAI opportunity. The scale of investment cannot be overstated for the largest cloud platform providers like AWS, Microsoft, Google and Oracle. All of these vendors were already investing tens of billions of dollars annually to support data center and infrastructure build-outs.

 

The unique data center and infrastructure requirements of delivering GenAI solutions, including GPU-based systems, are driving double-digit to triple-digit increases in capex spending for leading vendors. Not only is the level of spending notable, but the payback periods are also lengthy. In communicating these increased expenses to investors and Wall Street analysts, vendors like Microsoft have described the returns on these investments as playing out over the next 15 years, a time horizon seldom mentioned previously.

Hybrid, Proximity and Ecosystems Are Elevating the Importance of Colocation

A Multitude of Secular Trends Are Reinforcing Colocation’s Value Proposition

Market trends over the past few years have made several things clear about the IT strategy of most enterprise customers, all of which reinforce the value proposition offered by colocation providers:

  • Hybrid environments will persist — Whether due to existing legacy investments, divisional or regional nuances, or acquisition and divestiture activity, heterogeneity will remain in most IT environments. At one point, the benefits of public cloud made organizations consider a homogeneous, fully cloud-based IT delivery strategy, but those visions have faded for most. The challenge — and goal — is to embrace the hybrid heterogeneous approach and find the best way to integrate, manage and optimize services across these diverse sets of delivery methods and assets. Colocation data centers play a critical role for customers, offering a hybrid approach to facilities and in the interconnection of cloud and on-premises services.
  • Location and proximity matter — The importance of delivery locations is driven by not only the hybrid nature of the environment but also the latency requirements of many workloads. Edge workloads are the clearest example, but there are other cases where latency is critical or where regulations determine the location of data.
  • Investment costs and opportunity costs are important — While organizations are still looking to control and minimize IT costs where possible, there has been a shift toward selective investment. This started when IT became one of the few levers companies could control as they shifted their business models to adapt to changes wrought by the COVID-19 pandemic. Most recently, the onset of generative AI (GenAI) convinced organizations that IT could be a competitive advantage as well, prompting investment in new solutions and technologies to keep pace with the market and key competitors. In this way, organizations are still closely controlling investment costs in new solutions but also can be swayed to spend due to the fear of lost opportunities. Colocation provides an emerging value proposition with GenAI and AI workloads, offering prebuilt facilities and interconnection services without requiring large retrofits or new capital expenditures.
  • Ecosystems equal innovation — Though hyperscalers have become the center of the IT ecosystem over the past decade, the network of infrastructure providers, ISVs, systems integrators (SIs), colocation providers, consultancies and managed services providers remains intact. With the hybrid approach that most customers are embracing, combined with the digital transformations being deployed and then amplified by the onset of new AI and GenAI technology, numerous vendors are part of most enterprise IT solutions. The orchestration of those multiple vendors is critical and most often handled by a trusted SI partner.

Colocation Is a Relied-upon Option for the Vast Majority of Enterprises

According to TBR’s 2Q24 Infrastructure Strategy Customer Research, a significant portion of enterprises report colocation as part of their overall IT delivery strategy. Most have less than 25% of their workloads deployed in colocation facilities, reflecting the two predominant delivery strategies: their own centralized data centers and cloud-based environments. Colocation is even more of a consideration for new workloads, however, as 72% of surveyed respondents expect to deploy between 25% and 50% of net-new workloads using colocation providers.
 
We believe this trend is due to two factors. First, enterprises are reluctant to build their own data center facilities for workloads that perform best outside the cloud or that have location and latency requirements; most organizations want to reduce their data center capacity at this point, not add to it. Second, the data center requirements of many new workloads are more challenging to meet. Increased rack density, greater power draw and specialized GPU-based AI systems call for a modern data center, and the combined challenges of technology, facilities and staffing highlight the value of a ready-to-use colocation facility.

Digital Realty and Equinix Stand Out in a Tightly Packed Colocation Market

Recent trends around hybrid deployment models, latency-sensitive workloads, data residency and AI-reliant solutions have highlighted the sometimes-overlooked benefits of colocation providers. Especially for large enterprise customers, the scale of colocation facilities, strength of alliances and ability to invest in supporting new technologies make a difference in the value of their services. TBR research shows Digital Realty and Equinix are head and shoulders above their peers in their ability to meet enterprise requirements. From a purely data center location perspective, Digital Realty is the market leader worldwide, with 309 data centers, including those from unconsolidated joint ventures, as of 1Q24.

 

Current revenue is only one component of colocation spending decisions; enterprises also want to know their solution providers will be able to scale as demands grow. Especially after the supply constraints of the last couple of years and the ongoing shortage of key components for next-gen workloads like GenAI, customers are not always confident in their ability to access resources on a timely basis. While the supply of colocation capacity remains tight, investing now to guarantee expanded capacity is another differentiator. Here again there are advantages to scale, as Digital Realty outpaced all covered vendors in capital expenditures in 2023. This commitment to current investment signals to customers that they can continue to grow with Digital Realty moving forward.

Digital Realty Is Well Positioned to Address Hyperscaler Demand, Both Financially and in its Go-to-market Approach

Though adamant about vying for mindshare among both enterprises and hyperscalers, Digital Realty has always been better known for its play in wholesale colocation. Over the past several quarters, Digital Realty has employed an aggressive joint venture strategy, allying with private equity firms to build multibillion-dollar hyperscale data centers in both Tier 1 and Tier 2 markets. As such, much of Digital Realty’s financial makeup and recent performance have stemmed from this customer base, with over 80% of annualized base rent from new leases in the >1MW category as of 1Q24. The retail colocation market will undoubtedly continue to grow, led by robust demand for hybrid deployments and cloud adjacency for reasons highlighted earlier in this piece.
 
But many sources continue to suggest a rampant demand surge in the wholesale market as hyperscalers rush to satisfy their own customers’ AI and GenAI deployments. There are several ways Digital Realty is addressing this demand. Some are financial, including the ability to recycle capital by selling off nonscalable, single-tenant facilities to reinvest in strategic markets and maintaining a conservative capital structure; for context, Digital Realty owns nearly all of its facilities, in stark contrast to competitor Equinix, which is still leasing roughly 40% of its data centers. But the other aspect is Digital Realty’s go-to-market approach and how the vendor is nurturing relationships with the hyperscalers and their own partner ecosystems.

Digital Realty and Oracle Have a Strong Customer-partner Relationship: Other Hyperscalers Should Take Note

Digital Realty has always had a strong relationship with Oracle, which is now Digital Realty’s third-largest customer, deploying in 38 locations and representing $170 million in annualized recurring revenue (ARR). It is hard to dispute Oracle’s success pivoting from a traditional software incumbent and SaaS provider to an IaaS challenger with OCI (Oracle Cloud Infrastructure), which is on track to become Oracle’s largest cloud business in the coming years. Digital Realty astutely took notice of OCI’s role in the market, not to mention Oracle’s tight relationship with NVIDIA, which supplied Oracle with GPUs early in the AI wave.
 
Recent developments like connecting to Oracle’s EU Sovereign Cloud and offering NVIDIA-based OCI instances in its high-traffic Northern Virginia data center only reinforce Digital Realty’s role in powering OCI’s expansion. It is one of the reasons Oracle can not only boast more rapid footprint expansion than peers but also deliver on the “distributed cloud” message that nearly all hyperscalers are eager to convey. For perspective, Oracle holds only a single-digit share percentage in the IaaS market, but Oracle’s ability to leverage Digital Realty to expand ahead of peers is notable and something that other hyperscalers that are adamant about building their own data centers should recognize as they fight to capture net-new AI workloads.

SIs and Consultancies Pull It All Together at Scale for Enterprises

For IT services companies and consultancies, two needs mentioned above — orchestration and scale — illustrate how the colocation piece of the enterprise IT ecosystem can provide competitive opportunities.

Orchestration Is Critical and Most Often Handled by a Trusted SI Partner

Companies like Deloitte, Accenture and Infosys have the most complete view of an enterprise’s IT environment, positioning them best to coordinate vendors that provide disparate technologies. Most consultancies and SIs stay in well-defined swim lanes, delivering their added value while facilitating cloud, software and even hardware solutions from an enterprise’s suppliers. In TBR’s research, the market-leading consultancies and SIs use their industry knowledge, influence and reach within a client as the basis for orchestrating a full spectrum of technology providers, calibrated specifically to an enterprise’s IT needs.
 
As described above, colocation continues to be a pressing need, creating an opening for IT services companies and consultancies that have traditionally shied away from alliances too far removed from their core competencies. Just as alliances have formed around cloud computing and enterprise software, with IT services companies and consultancies delivering value through innovation, cost containment and managed services, partnerships with colocation specialists could add a compelling component to an IT services company’s orchestration value proposition. Consultancies’ and SIs’ business models depend on retaining clients and expanding their footprint within those clients. If colocation can become another differentiating factor and improve enterprise clients’ overall IT environments, SIs and consultancies will willingly seek partnerships.

Enterprises Want to Know That Their Solution Providers Can Scale as Demands Grow

If client retention remains critical to SIs’ and consultancies’ business models, scale increasingly marks the difference between average performance and market-leading results. No SI or consultancy can out-scale the largest players, but TBR’s research shows that companies that manage their alliances well can leverage their ecosystem for scale unattainable on their own. In short, no other company can be Accenture, but an SI or consultancy can replicate Accenture’s reach with the combined forces of a well-oiled tech and services ecosystem.
 
Colocation providers already play within the technology ecosystem but have not traditionally been considered a means for consultancies and SIs to increase scale. As AI and GenAI increase compute power demands, enterprises are turning to their consultancies and asking, “How can I take advantage of all this new technology without exploding my IT budget?” and “How can I take this GenAI-enabled solution from pilot to my entire enterprise?” In answering those questions, colocation can become a critical component.

The SI and Consulting Tech Evolution: From ERP to Cloud to GenAI to Colocation

In TBR’s view, SIs and consultancies will never become adept at selling those components of the technology ecosystem that are the furthest from their core competencies, but the market leaders have become exceptional at managing and expanding their ecosystems. TBR’s research around the relationships between the largest cloud providers and the largest SIs demonstrates how much revenue can be driven through alliances. As SIs and consultancies mature their partnering practices, colocation will become another element orchestrated by the likes of Deloitte, Accenture and Infosys. Quite possibly some smaller SIs and consultancies will use colocation as a critical component to scaling themselves into competitive positions against those larger players. As GenAI drives new demands — on compute power, budgets and expertise — TBR will closely watch these relationships between SIs and colocation providers.

Databricks Pivots Around Data Intelligence to Address GenAI Use Cases

Just like it did with the data lakehouse five years ago, Databricks is establishing another paradigm with data intelligence, which has the data lakehouse architecture at its core but is infused with generative AI (GenAI). Data intelligence was a key theme throughout Databricks Data & AI Summit and signals Databricks’ intentions to further democratize AI and ultimately help every company become an AI company.

A Brief Databricks Backstory

Founded by the creators of Apache Spark, Databricks is known as a trailblazer for launching new concepts in the world of data, such as Delta Lake, the open table format with over 1 billion yearly downloads, and the “lakehouse” architecture, which reflects Databricks’ effort to combine the best of the data lake and the data warehouse. Launched in 2020, the lakehouse architecture handles both structured and unstructured data and addresses the data engineer and business analyst personas in a single platform.

 

Delta Lake and Unity Catalog, which governs the data stored in these Delta tables, serve as the basis for the lakehouse architecture and are part of Databricks’ longtime strategy of simplifying the data estate and, by default, AI. But with the advent of GenAI, which is causing the amount of unstructured data to proliferate, Databricks has spearheaded yet another market paradigm, pushing the company beyond its core areas of data ingestion and governance into data intelligence.

 

At the heart of data intelligence are the lakehouse architecture and Mosaic AI, the rebranded result of last year’s MosaicML acquisition, which equipped Databricks with the tools to help customers train, build and fine-tune large language models (LLMs). These are also the same technologies Databricks used to build its own open-source LLM ― DBRX ― sending a compelling message to customers that they, too, can build their own models and use Mosaic AI capabilities to contextualize their data and tailor it to their business, thus achieving true data intelligence.

What Is Data Intelligence?

Databricks’ executives and product managers largely communicated the definition of data intelligence through demonstrations. One of the more compelling demos showed how Mosaic AI can be used to create an agent that will build a social media campaign, including an image and caption for that campaign, to boost sales.

 

The demo depicted how a user can supply transaction data as a tool to supplement a base model, such as Meta’s Llama 3. This demo was key to highlighting one of Databricks’ product announcements, the Shutterstock ImageAI model, which is built on Databricks in partnership with Shutterstock and marks Databricks’ foray into the multimodal model space.

 

The exercise created an image for the fictional social media campaign that included a company’s bestselling product — chosen through transaction data — and a catchy slogan. But to convey the contrast between data intelligence and general intelligence, the demonstrator removed the “intelligence” ― all the data-enabled tools that exist in Unity Catalog ― and generated the image again. This time, the image did not include the bestselling product and was accompanied by a much more generic slogan.

 

This demo reinforced the importance of contextualized data in GenAI and the role of Unity Catalog, which helps govern the data being used, and Mosaic AI, which allows developers to use enterprise data as tools for creating agents (e.g., customer support bots).

 

Data intelligence is about not only the context behind the data but also making that context a reality for the enterprise. For instance, in the above scenario, the demonstrator was able to put the image and slogan into Slack and share it with the marketing team through a single prompt. In this example, it is clear how a customer with Databricks skills could use GenAI in their business.
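Databricks has not published the internals of the demo, but the pattern it illustrates — a base model grounded by enterprise data exposed as callable tools — can be caricatured in a few lines of plain Python. All names and logic below are hypothetical illustrations of the tool-calling idea, not Mosaic AI code:

```python
from collections import Counter

# Hypothetical transaction records an agent could query as a "tool."
TRANSACTIONS = [
    {"product": "trail sneaker", "qty": 3},
    {"product": "rain jacket", "qty": 1},
    {"product": "trail sneaker", "qty": 5},
]

def bestselling_product(transactions):
    """Tool: aggregate transactions to find the top seller."""
    totals = Counter()
    for t in transactions:
        totals[t["product"]] += t["qty"]
    return totals.most_common(1)[0][0]

def generate_campaign(prompt, tools=None):
    """Stand-in for a base model such as Llama 3: with data-backed
    tools it grounds the output; without them it stays generic."""
    if tools and "bestselling_product" in tools:
        product = tools["bestselling_product"](TRANSACTIONS)
        return f"Meet our bestseller: the {product}. {prompt}"
    return f"Great products await. {prompt}"

# With "intelligence" (data tools) vs. without, as in the demo.
with_context = generate_campaign(
    "Shop the sale!", tools={"bestselling_product": bestselling_product}
)
without_context = generate_campaign("Shop the sale!")
print(with_context)     # grounded in transaction data
print(without_context)  # generic slogan
```

Removing the tools dictionary plays the role of removing Unity Catalog’s data-enabled tools in the demo: the same prompt yields a generic result instead of one grounded in the bestselling product.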

Databricks’ Acquisition of Tabular Is a Blow to Snowflake and a Surefire Way to Stay Relevant in the Microsoft Ecosystem

As a company founded on the values of openness and reduced lock-in, Databricks pioneered Delta Lake to ensure any engine can access the data sitting in a data lake. Delta Lake remains the most widely adopted lakehouse format today, handling over 90% of the data processed in Databricks, and is supported well beyond the company, as 66% of contributions to the open-source project come from outside Databricks.

 

But over the past few years, we have seen Apache Iceberg gain traction as a notable alternative, garnering significant investment from data cloud platforms, including Snowflake. When Databricks announced its acquisition of Tabular ― created by the founders of Apache Iceberg ― days before the Data & AI Summit, it signified a strategic shift that will help Databricks target a new set of prospects who are all in on Iceberg, including many digital natives.

 

The general availability of Databricks’ Delta Universal Format (UniForm), which helps unify tables from different formats, indicates the company’s intention to make Delta and Iceberg more interoperable and, over time, potentially reduce the nuances between both formats, though this may be a longer-term vision.

 

The Tabular acquisition in some ways also marginalizes Snowflake’s steps to become more relevant as a Microsoft Fabric partner. Databricks, which is available through Azure as a first-party native service, has always had a unique relationship with Microsoft, and Delta serves as the basis for Microsoft Fabric. But Microsoft’s recent announcement to support Iceberg tables with Snowflake in a push for more interoperability was notable, and now with Tabular, Databricks can ensure it remains competitive in the Microsoft Fabric ecosystem.

It Is All About Governance

First announced three years ago, Unity Catalog has emerged as one of Databricks’ more popular products, allowing customers to govern not just their tables but also their AI models, an increasingly important component in GenAI.

 

At the event, Databricks announced it will open source Unity Catalog, which we watched happen during the Day 2 keynote, when Unity Catalog was uploaded to GitHub. Despite Unity Catalog’s mounting success, this announcement is not surprising and only reinforces the company’s commitment to fostering the most open and interoperable data estate.

 

It is very early days, but open sourcing Unity Catalog could help drive adoption, especially as governance of GenAI technologies remains among the top adoption barriers.

Databricks SQL Is Gaining Momentum

It is no secret that Databricks and Snowflake have been moving into one another’s territories. Databricks, with its expertise in AI and machine learning (ML), has been progressing down the stack, trying to capture data warehouse workloads. Snowflake, with its expertise in data warehousing, is looking to get in on the AI opportunity and address the core Databricks audience of data scientists and engineers.

 

Snowflake’s early lead in data warehousing and strong relationship with Amazon Web Services (AWS) could be making it more difficult for Databricks to attract workloads. Combined with the sheer size of the market, there may never be a scenario in which Databricks becomes a “standard” for data warehousing in enterprise accounts. But Databricks’ messaging that “the best data warehouse is a lakehouse” certainly seems to be working.

 

Traditionally, customers have come to Databricks for jobs like Spark processing and ETL (Extract, Transform, Load), but customers are increasingly looking to Databricks for their data warehouse. These customers fall into two groups. In the first group, customers on legacy systems, such as Oracle, are fed up with the licensing and are looking to modernize. In the second group, existing cloud customers are looking for a self-contained environment with less lock-in, compared to vendors like Snowflake, or are seeking to avoid challenges with system management and scale after having worked with hyperscalers.

 

As highlighted by Databricks Co-founder and Chief Architect Reynold Xin, Databricks SQL is the company’s fastest-growing product, with over 7,000 customers, or roughly 60% of Databricks’ total customer base. During his keynote, Xin touted Databricks SQL Serverless startup times that have improved to five seconds and automatic optimizations that have made BI workloads four times faster than two years ago. Provided Databricks can continue to enhance performance while pushing the boundaries of ease of use to better compete with Snowflake and other vendors for less technical business personas, we expect this momentum to continue, challenging competitors to raise the bar for their own systems.

Databricks Is Bringing an Added Layer of Value to the BI Stack

Databricks AI/BI is a new service available to all Databricks SQL customers that allows them to ask questions using natural language (Genie) and perform analytics (Dashboards). In a demo, we saw the two user interfaces (UIs) in action: BI offers common features like no-code drag-and-drop and cross-filtering, while AI provides the conversational experience where customers can ask questions about their data.

 

Databricks AI/BI may lack some of the complex features of incumbent BI tools, but matching those features is ultimately not the goal of the offering. The true value is in agents that can understand the question the business analyst is asking and what they hope to visualize. This approach exposes the shortcomings of bolting a generic LLM onto a BI tool. And the company is not interested in keeping this value confined to its own BI capabilities. Staying true to its culture of openness, Databricks announced at the event that it will open its API to partners, ensuring Power BI, Tableau and Google Looker customers can take advantage of data intelligence in those BI environments.
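Genie’s internals are not public; in a real system an LLM-backed agent grounded in governed metadata does the interpretation. As a conceptual illustration only, the natural-language-to-analytics routing can be sketched with invented names and naive keyword matching:

```python
# Toy sketch of a Genie-style flow: route a natural-language question
# to an aggregation over tabular data. All names are hypothetical, and
# keyword matching stands in for an LLM-backed agent.
SALES = [
    {"region": "East", "revenue": 120.0},
    {"region": "West", "revenue": 80.0},
    {"region": "East", "revenue": 40.0},
]

def answer(question):
    """Map a question to an aggregation over the SALES rows."""
    q = question.lower()
    if "total" in q and "revenue" in q:
        return sum(row["revenue"] for row in SALES)
    if "top region" in q:
        totals = {}
        for row in SALES:
            totals[row["region"]] = totals.get(row["region"], 0.0) + row["revenue"]
        return max(totals, key=totals.get)
    return "No matching analysis"

print(answer("What is our total revenue?"))   # 240.0
print(answer("Which is the top region?"))     # East
```

The point of the sketch is the shape of the flow — question in, governed data queried, grounded answer out — rather than the matching logic, which is where the agent’s understanding of the analyst’s intent provides the real value.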

Conclusion

With its lakehouse architecture, which was founded on the principles of open-source software and reduced lock-in, Databricks is well positioned to help customers achieve data intelligence and deploy GenAI. The core lakehouse architecture will remain Databricks’ secret sauce, but acquisitions, including those of MosaicML and Tabular, are allowing Databricks to broaden the scope of its platform to tap into new customer bases and serve new use cases.

 

If Databricks can continue to lower the skills barrier for its technology and sell through the partner ecosystem around its platform, the company will no doubt strengthen its hold on the data cloud market and make competitors, including the hyperscalers in certain instances, increasingly nervous.