Inflation’s Effect on SaaS & ITO Operating Models – An Update

Who today has experienced a long-term economic inflationary period?

Inflation is very much in the U.S. news as it reaches 40-year highs. This means a person has to be near the end of their professional career to have experienced the previous inflationary period. One of the authors dimly recalls his economics professors trying to parse what, at the time, was called stagflation, which impacted the United States in the 1970s. Oil price shocks drove up prices, while unemployment remained high. Inflation previously had been explained as too many dollars chasing too few goods and was generally attributed to economies overheating because of very low unemployment rates.

Today economists seek to assess economic fundamentals to predict whether this inflationary spike will be temporary or persistent. Factors suggesting a short-term spike revolve around the well-publicized supply chain disruptions, coupled with record savings levels during the pandemic, when discretionary spending on things like travel and restaurant meals was greatly curtailed and retail spending shifted from in-store shopping to e-commerce.

On the other hand, some economists point to persistent government deficits driven by pumping money into the economy. Given various regulatory and economic uncertainties, much of that money has been sitting on the sidelines. Further, stock market run-ups in valuation have been attributed to investor money seeking higher returns than can be achieved through traditional savings and bond ownership, given the low interest rates on these conservative investment instruments.

Partisans will selectively cite these factors to explain away or criticize the current economic climate. Businesses, on the other hand, face a long-dormant financial risk rearing its ugly head, one that can dramatically impact long-term financial forecasting.

So what are the technology company business models where inflation has near-term impact?

Transaction-based businesses in the IT industry will be able to follow traditional methods of passing costs on to the customer. But for business units working from Anything as a Service (XaaS) subscription models, ITO contracts and infrastructure managed service agreements, the near-term impact could be more acute.

Cloud-enabled SaaS models are a relatively new phenomenon as Industry 4.0 gains momentum. Proponents of these business models also assert that legacy business model metrics and analysis do not apply, given that the majority of selling expenses are recognized in the first fiscal quarter of multiyear agreements while the revenue is recognized ratably over the contract term. As such, the financial spokespeople for these business models lean heavily on relatively new business metrics — annual recurring revenue (ARR), net dollar revenue retention and lifetime customer value — that chart a forecast course for when operating profits will materialize.
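As a simple illustration of that accounting dynamic, consider the sketch below: a hypothetical three-year contract in which the selling expense lands in the first quarter while revenue is recognized ratably, which is why early quarters look unprofitable even when the deal economics are sound. The figures are invented for illustration and simplify real-world treatment (commissions, for example, are often amortized rather than fully expensed up front).

```python
# Illustrative only: hypothetical three-year SaaS deal showing up-front selling
# expense against ratable revenue recognition. All figures are invented.
CONTRACT_TCV = 360_000            # total contract value over the 3-year term
TERM_QUARTERS = 12                # revenue recognized ratably by quarter
SALES_COMMISSION_RATE = 0.10      # selling expense assumed to hit Q1 in full
QUARTERLY_DELIVERY_COST = 18_000  # hosting and support cost per quarter

ratable_revenue = CONTRACT_TCV / TERM_QUARTERS  # $30,000 recognized each quarter

for quarter in range(1, TERM_QUARTERS + 1):
    expense = QUARTERLY_DELIVERY_COST
    if quarter == 1:
        expense += CONTRACT_TCV * SALES_COMMISSION_RATE  # up-front selling expense
    operating_income = ratable_revenue - expense
    print(f"Q{quarter:>2}: revenue {ratable_revenue:>9,.0f}  "
          f"expense {expense:>9,.0f}  operating income {operating_income:>10,.0f}")
```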

ITO contracts have had a somewhat longer evolution, starting as multiyear deals in which vendors could reap greater profits as operating costs declined due to increased automation of overall monitoring and maintenance. These contracts then moved to shorter durations and, more recently, have stipulated cost decreases over time such that any operating cost savings created by the vendor are passed along to, or at least shared with, the customer. The ITO market has likewise seen a shift or rebranding of these customer offers into infrastructure managed services to pivot the contract model to be more in line with SaaS constructs.

When inflation was last a top-of-mind economic consideration, most IT was on premises and operated by company personnel. TBR seriously doubts strategic scenario planning for these new subscription consumption models prior to perhaps late 2020 anticipated the current inflationary levels and their potential operating impact.

What is the immediate inflationary risk to XaaS and ITO business models?

SaaS models take several years to generate profit in what is variously described as the flywheel effect or the force multiplier effect. Labor and utility costs that rise beyond forecast while being tethered to long-term contracts will add several percentage points of operating costs to these models. In this sense, the newer the SaaS operating model, the less cost-structure risk it carries, as it has less renewed revenue locked into previously set prices. TBR expects that the more mature the SaaS model, and the greater the amount of accrued or committed revenue, the more adverse the bottom-line operating impact.
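A rough, back-of-the-envelope sketch of that dynamic, with invented figures: the larger the share of a vendor's revenue locked into previously priced, committed contracts, the less of an unforecast cost increase can be passed through, and the more operating margin compresses.

```python
# Illustrative only: invented figures showing how unforecast cost inflation on
# already-priced, committed revenue compresses operating margin.
def margin_after_inflation(committed_share, baseline_margin, cost_inflation):
    """committed_share: fraction of revenue on fixed, previously priced contracts.
    baseline_margin: forecast operating margin before the inflation surprise.
    cost_inflation: unforecast rise in delivery costs (labor, utilities)."""
    baseline_cost = 1.0 - baseline_margin
    inflated_cost = baseline_cost * (1.0 + cost_inflation)
    # Only the non-committed share of the book can be repriced to recover the increase.
    recovered = (1.0 - committed_share) * baseline_cost * cost_inflation
    return 1.0 - (inflated_cost - recovered)

# A young SaaS book (30% committed) vs. a mature one (80% committed), both
# forecasting a 10% operating margin before a 6% cost surprise.
for share in (0.3, 0.8):
    print(f"{share:.0%} committed: margin falls to "
          f"{margin_after_inflation(share, 0.10, 0.06):.1%}")
```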

The ITO market, on the other hand, has shown persistent declines, resulting in consolidations and divestments to profitably manage eroding revenue streams that traditional ITO vendors seek to convert into managed services agreements. The inflation impact on costing will amplify the need to infuse these business practices with more automated capabilities or increased low-cost (typically offshore) labor as offsets. Still, the operating profit declines in this space will likely worsen unless vendors negotiate incremental cost increases that customers may or may not be willing to accept based on their own issues with cost containment.

What go-forward tactics are in the technology vendor toolbox to mitigate inflationary impact?

Inflation is not new, but the operating models prevalent now were not around when we last experienced it. Business strategists still have a blend of initiatives they can embrace to preserve their operating models and their customer relationships:

  1. Market education: Transparent communication of the cost impacts to the vendor's business, along with any proposals for sharing the burden with customers, can preserve customer loyalty.
  2. Customer research on existing brand perception: The XaaS Pricing team has a very good blog outlining the Van Westendorp Price Sensitivity Meter and its applicability to setting B2B SaaS pricing strategy. That research methodology can help vendors level-set where they stand with customers on value perception and give pricing strategists a line of sight into how much room their brand perception leaves for implementing price increases (see the illustrative sketch after this list).
  3. New contract language for price increases: The historic quiet period on inflation, coupled with the innate reality within technology of “faster, better, cheaper,” has customers expecting price reductions for IT that will require true customer education around inflation as an offset to those prevailing market expectations. This will not help with the inflationary impacts on the existing contracts that must be honored, but can establish a new go-forward pricing model that can take into account a business risk largely dormant for the better part of 40 years.
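For readers unfamiliar with the Van Westendorp Price Sensitivity Meter, the sketch below shows the core calculation on entirely hypothetical survey data: each respondent names the prices at which a product would feel too cheap, a bargain, expensive and too expensive, and the crossing points of the cumulative curves suggest an optimal price point and an indifference price point. This is a generic illustration, not the XaaS Pricing team's implementation, and definitions of the intersection points vary somewhat across sources.

```python
# Illustrative Van Westendorp Price Sensitivity Meter sketch on hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical survey respondents; prices in $/user/month
too_cheap = rng.normal(18, 4, n)
bargain = too_cheap + rng.uniform(4, 10, n)
expensive = bargain + rng.uniform(5, 12, n)
too_expensive = expensive + rng.uniform(5, 15, n)

prices = np.linspace(5, 80, 500)

def share_at_least(answers, p):  # fraction of respondents whose answer is >= p
    return (answers[:, None] >= p).mean(axis=0)

def share_at_most(answers, p):   # fraction of respondents whose answer is <= p
    return (answers[:, None] <= p).mean(axis=0)

curve_too_cheap = share_at_least(too_cheap, prices)         # falls as price rises
curve_cheap = share_at_least(bargain, prices)               # falls as price rises
curve_expensive = share_at_most(expensive, prices)          # rises as price rises
curve_too_expensive = share_at_most(too_expensive, prices)  # rises as price rises

def crossing(falling, rising):
    """Price at which a falling curve first drops below a rising curve."""
    return prices[np.argmax(falling <= rising)]

opp = crossing(curve_too_cheap, curve_too_expensive)  # optimal price point
ipp = crossing(curve_cheap, curve_expensive)          # indifference price point
print(f"Optimal price point ~${opp:.2f}; indifference price point ~${ipp:.2f}")
```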

Inflation as a business risk will persist for the foreseeable future. TBR will be assessing it closely as public companies report their earnings and release their financial filing documents.

As the pandemic continues to push customers to hybrid IT, vendors aim to meet demand with flexible, cloud-like pricing models

Revenue for vendors in TBR’s Cloud Components Benchmark increased an average of 12.6% year-to-year in 3Q21, partly due to a favorable year-ago compare considering the economic impacts of COVID-19 in mid-2020. Further, with many vendors operating transaction-heavy business models, rebounding demand for license products supported revenue growth during the quarter, especially for software-centric vendors like Microsoft and VMware. COVID-19 is causing customers to reevaluate their digital transformation plans; some customers will migrate completely to a cloud environment, which will erode opportunities for certain vendors, while other customers will expand their existing data center investments through solutions like hyperconverged infrastructure (HCI).

[Graph: Cloud Components Benchmark vendor comparison; software-centric vendors shown as blue circles, hardware-centric vendors as orange circles]

Given some customers’ reluctance to move outside the data center, opportunities arise for vendors to push ‘as a Service’ offerings

According to TBR’s 2H21 Cloud Infrastructure & Platforms Customer Research, 42% of respondents plan to keep most of their workloads inside the data center over the next three years. As COVID-19 accelerates customers’ cloud migration timelines, many enterprises turn to self-built private cloud environments as an intermediary step to a fully managed vendor-hosted private or public cloud model.

Further, many larger, established enterprises are looking to protect their existing investments in IT and find that their own data centers are a better fit for certain workloads, particularly those with stringent security or latency requirements. These customer trends present opportunities for hardware-centric vendors such as Hewlett Packard Enterprise and Dell EMC to capitalize on demand for cloud-like consumption services on premises in the coming years.

Data center consolidation persists

Many self-built private cloud customers adopt HCI solutions to modernize their legacy systems and consolidate their overall data center footprint, a trend brought on by cloud migrations and exacerbated by the pandemic. Colocation is emerging as a notable alternative to privately owned data centers in this model, as it offers customers a secure landing spot for their hardware while providing close proximity to major public cloud platforms. Recognizing this trend, OEMs are partnering with colocation providers to offer central management and governance capabilities that facilitate customers’ workloads.

Vendor competition ramps up amid high demand for cloud-like economics on premises

The cloud components market is consolidating around select vendors, such as Microsoft and VMware, specifically in the virtualization space. However, on the hardware side, vendors are emphasizing their consumption-based pricing offerings, seeking differentiation by taking a workload-by-workload approach. While IBM has generally been lacking in consumption-based hardware, the company is expanding its investments in the area, as evidenced by the release of its Tailored Fit Pricing solution for hardware consumption, which applies a pay-as-you-go model to a highly scalable, premium solution like IBM Z.

Gain access to the entire 3Q21 Cloud Components Benchmark, as well as our entire Cloud & Software research, with a 60-day free trial of TBR Insight Center™.

Register for our upcoming webinar, 2022 Predictions: Cloud, an in-depth discussion on the increasing importance of cloud partnerships in the market; how partners will enable growth and stickiness; vendor embrace of open, hybrid architectures; and more.

Webinar: Hyperscalers are reimagining how networks are built, owned and operated

Hyperscalers are building end-to-end networks that embody all attributes and characteristics coveted by communication service providers as part of their digital transformations. Hyperscalers are starting from scratch, completely reimagining how networks should be built and operated. Their clouds, numerous network-related experiments over the past decade, and raft of new network-related technologies on the road map will enable hyperscalers to build asset-light, automated networks at a fraction of the cost of traditional networks.


Join Principal Analyst Chris Antlitz on Thursday, March 24, 2022, for an in-depth, exclusive review of TBR’s most recent Hyperscaler Digital Ecosystem Market Landscape, during which he will discuss hyperscalers’ disruption of the telecom industry, how and why hyperscalers are building networks, and the particular focus of these networks.


The Hyperscaler Digital Ecosystem Market Landscape tracks how and why the world’s largest hyperscalers are disrupting industries to unlock economic value in the digital era, with specific focus on the disruption of the telecom industry. The report focuses on Alphabet (Google), Amazon, Apple, Meta Platforms (Facebook), Microsoft and Rakuten.


Mark your calendars for Thursday, March 24, 2022, at 1 p.m. EDT,
and REGISTER to reserve your space.


Related content:

  1. Hyperscalers are reimagining how networks are built, owned and operated
  2. Top 3 Predictions for Telecom in 2022

Click here to register for more TBR Webinars


Atos future-proofs compute ahead of Great Acceleration

As the world awaits the scientific discoveries needed to bring quantum processors to commercial applicability, Atos’ BullSequana XH3000 allows for ecosystem participation within the compute platform itself and future-proofs any early buyer investments. In its Feb. 16 official announcement of the XH3000 supercomputer, for which TBR was provided pre-briefing access, Atos claims the product will have a six-year life cycle and that it is an open architecture capable of housing up to 38 blades. The blades can accommodate a mix of different XPU processors, with more under consideration and development.

The rapid rise in large data sets and evolving AI/machine learning (ML) algorithms have driven this global appetite for greater compute capacity — an appetite that many data scientists believe will only be sated once quantum computers reach commercial viability. Atos’ early lead in quantum simulators and alliances with various quantum systems vendors imply the company will be capable of pivoting its high-performance computing (HPC) offerings quickly to accommodate the addition of commercial-grade quantum processors when they arrive. Atos’ flexible hybrid supercomputing architecture will sell well in Europe for a variety of reasons and may enable Atos to gain share against notable HPC vendors in North America and Asia.

Data and AI require new compute platforms to address intractable problems

Atos correctly asserts that the state of compute trails the size of the data sets now available for algorithms to run against. Specifically, the world is running out of computational capacity to address the complex problems that can now be simulated and analyzed through increasing digitization.

Proof points offered in the Atos announcement included:

  • Average HPC job durations grow as larger data sets are applied against systems with as many as 10,000 nodes and 25,000 endpoints.
  • Application refactoring and algorithm refinements can provide as much as a 22x speed improvement.
  • Data centricity and edge processing grow in use case applicability, requiring greater hierarchical depth and more localized compute near the application.
  • Hybrid simulation/AI workflows for approximate computing are nearing reality. Atos offered the example of AlphaFold 2 reaching over 90% accuracy in protein folding prediction, whereas classical methods currently achieve between 30% and 40% accuracy.
  • Yet another industry prediction that the physical limits of Moore’s Law are being reached now that the industry is at 3nm technology.
  • Extending the performance gains from classical computing while quantum discovery and commercialization advance will require greater innovation around multiple XPU architectures. These hybrid or heterogeneous compute architectures need a new compute system structure, which Atos believes the XH3000 provides.

The Atos Exascale strategy is a hybrid approach that serves many masters

Atos states the future of supercomputing will be hybrid, consisting in the near term of a blend of classical CPU configurations and specialized processor architectures to address specific workload requirements. Presently, Atos collaborates with AMD (Nasdaq: AMD), Intel (Nasdaq: INTC), Nvidia (Nasdaq: NVDA), SiPearl and Graphcore, among others. Eurocentric chips based on ARM designs are also in the news and have been discussed by Atos.

Atos has addressed the need for future-proof flexibility by designing the standard chassis of the BullSequana XH3000 to accommodate up to 38 compute/switch blades in a single rack, which can be mixed and matched as workflows require from the blades available today and those that become available in the future.

This hybrid architectural design approach serves many masters, such as those addressing:

  • Sustainability: Different cooling and processing designs not only generate greater computational capacity but also, when coupled with the hybrid configurations and algorithm innovations, can lead to lower power consumption, and therefore lower carbon footprints.
  • Sovereignty: Technonationalism is not going away, and Atos is a flagship European technology vendor. Former Atos CEO Thierry Breton is now the commissioner for internal market affairs within the European Union (EU) and has been tasked with managing many elements pertinent to digitization and “enhancing Europe’s technical sovereignty.” The EU has clearly stated its intentions to ensure there are European-controlled processors in market. Hybrid computing structures enable companies to select different processors to address the computational requirements amid the increased attention nation states place on compute access as a strategic national interest.
  • Higher performance: The HPC market increasingly takes on the dynamics of emerging ecosystem business models and requires a physical compute stack that can accommodate the many tech stack variations the ecosystem can create to address the world’s compute and AI challenges. Atos claims it also has built the architecture to be resilient and adaptable for six years without forklift upgrades. This flexibility, Atos asserts, can accommodate new discoveries as the unknowns around deep learning, algorithm development and new processor developments in the classical and quantum computing realms come into view.

Lockheed Martin forced to abandon $4.4B acquisition

On Feb. 13 Lockheed Martin (NYSE: LMT) pivoted and severely altered its FY22 outlook by withdrawing from its $4.4 billion plan to acquire missile and rocket propulsion expert Aerojet Rocketdyne (AR) (NYSE: AJRD). The decision followed months of mounting antitrust pressure and the recent unanimous decision by the U.S. Federal Trade Commission (FTC) to sue Lockheed Martin to block the planned acquisition of AR.

Lockheed Martin looked to challenge Northrop Grumman for missile and rocket propulsion market dominance

In December 2020 Lockheed Martin announced it had entered into a definitive agreement with AR to acquire the missile and rocket propulsion innovator. With this proposed purchase, Lockheed Martin indirectly revealed its plans to disrupt the market dominance Northrop Grumman (NYSE: NOC) has enjoyed since 2018, when it purchased renowned rocket booster manufacturer Orbital ATK. Lockheed Martin hoped its acquisition of AR would follow a trajectory similar to Northrop Grumman’s purchase of Orbital ATK, with the FTC approving the deal so long as Lockheed Martin followed FTC stipulations, such as agreeing not to restrict competing contractors’ access to its missile system products and services.

With the world’s largest defense contractor planning to dedicate significant resources to acquiring AR, Lockheed Martin expected the FTC to approve the deal on the grounds that the combination of Lockheed Martin and AR would be a stronger competitor against Northrop Grumman, giving the U.S. government an additional option when selecting contractors. Expecting approval in 1Q22, Lockheed Martin built its FY22 forecast on gaining an expanded propulsion systems and rocket engines portfolio and priority access to AR resources amid the ongoing supply chain chaos seen across industries.

After a year of setbacks, the FTC intervenes in Lockheed Martin’s proposed acquisition

Lockheed Martin experienced several setbacks almost immediately after announcing the planned acquisition. In February 2021 Raytheon Technologies (NYSE: RTX) stated it would implore regulatory agencies to block Lockheed Martin’s proposed purchase, arguing that the acquisition would give Lockheed Martin an unfair market advantage and Raytheon Technologies would have to purchase approximately 70% of its missile propulsion systems through Lockheed Martin as a result.

In July 2021 Senator Elizabeth Warren petitioned the FTC to probe the acquisition. Despite a bipartisan appeal to the Pentagon by a group of 13 U.S. Congress members in support of the merger in August 2021 and rumors circulating that the Pentagon was in favor of the deal, the FTC voted 4-0 in January 2022 to file a lawsuit impeding Lockheed Martin’s $4.4 billion acquisition.

After initially postponing the vote, the FTC ultimately argued that Lockheed Martin would damage the national defense market and its rivals by acquiring the United States’ only independent provider of essential missile inputs. By reducing industry competition, Lockheed Martin would be able to relax innovation efforts and be less competitive with its pricing, which could result in higher prices for the government. The acquisition would also potentially limit rivals’ access to resources and provide Lockheed Martin with unfair insight into their confidential information, as AR operated as a subcontractor for many of them in the market.

Rather than face an arduous administrative trial against the U.S. government in mid-June, Lockheed Martin opted to simply abandon its acquisition plans.

Hyperscalers’ cloud-based modern network architecture provides strategic advantage over legacy network technologies

2H21 Hyperscaler Digital Ecosystem Market Landscape infographic

Hyperscaler-built networks will look very different from traditional networks

Hyperscalers are building end-to-end networks that embody all the attributes and characteristics coveted by communication service providers (CSPs) as part of their digital transformations. The most significant differences are in the software stack and the access layer, where new technologies enable hyperscalers to build dense mesh networks in unlicensed and/or shared spectrum bands and build out low Earth orbit (LEO) satellite overlays for access and backhaul. Mesh networks will likely be used to provide low-cost, wireless-fiber-like connectivity in urban and suburban environments, while satellites will primarily be leveraged to provide connectivity to rural and remote environments.

Hyperscalers are starting from scratch, completely reimagining how networks should be built and operated. Their clouds, numerous network-related experiments over the past decade, plus the raft of new network-related technologies on the road map will enable hyperscalers to build asset-light, automated networks at a fraction of the cost of traditional networks.

Hyperscaler networks will cost a fraction of traditional networks

TBR estimates hyperscaler networks cost 50% to 80% less to build than traditional networks (excluding the cost of spectrum, which would make the cost differential even more pronounced because hyperscalers will primarily leverage unlicensed and shared spectrum, which is free to use). Most of the cost savings stem from innovations, such as mesh networking, carrier aggregation, LEO satellites and integrated access-backhaul, that enable significantly less wired infrastructure to be deployed in the access layer for backhaul and last-mile connection purposes.

For example, Meta’s Terragraph mesh access point can autonomously hop signals through multiple other access points before sending the data through the nearest available backhaul conduit. In the traditional architecture, some form of backhaul would need to connect to each access point to backhaul the traffic. Mesh signals could also be backhauled through LEO satellites, further limiting the need to deploy wired infrastructure in the access layer, which is one of the most significant costs of traditional networks.
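As a simplified illustration of why mesh architectures reduce wired build-out, the sketch below routes traffic from each access point to the nearest node that has wired backhaul using a generic fewest-hops search. The mesh topology and the assumption that only one site needs fiber are hypothetical, and this is not Terragraph's actual routing protocol.

```python
# Illustrative only: generic fewest-hops routing over a hypothetical mesh of
# access points; not Terragraph's actual protocol. Only nodes in
# `wired_backhaul` need a wired connection; all others reach one over radio hops.
from collections import deque

mesh = {  # access point -> neighbors within radio range (hypothetical topology)
    "AP1": ["AP2", "AP3"],
    "AP2": ["AP1", "AP4"],
    "AP3": ["AP1", "AP4", "AP5"],
    "AP4": ["AP2", "AP3", "AP6"],
    "AP5": ["AP3", "AP6"],
    "AP6": ["AP4", "AP5"],
}
wired_backhaul = {"AP4"}  # only one site has fiber in this example

def path_to_backhaul(source):
    """Breadth-first search for the fewest radio hops to any wired node."""
    visited, queue = {source}, deque([[source]])
    while queue:
        path = queue.popleft()
        if path[-1] in wired_backhaul:
            return path
        for neighbor in mesh[path[-1]]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no backhaul reachable from this node

for ap in sorted(mesh):
    print(ap, "->", " -> ".join(path_to_backhaul(ap)))
```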

Another key area of cost savings stems from cutting out certain aspects of the traditional value chain. By open-sourcing some innovations, such as hardware designs, hyperscalers can foster a vibrant ecosystem of ODMs to manufacture white boxes to compose the physical network. The white-boxing of ICT hardware can lead to cost savings of up to 50% compared to proprietary, purpose-built appliances.

Related Content:

Top 3 Predictions for Telecom in 2022

Webinar: 2022 Predictions: Telecom

Hyperscalers are reimagining how networks are built, owned and operated

Hyperscaler disruption portends structural changes to the telecom industry through this decade

The technological and business model disruption hyperscalers are bringing into the telecom industry portends significant challenges for incumbent vendors and CSPs. TBR sees the scope of disruption becoming acute in the second half of this decade, likely prompting waves of M&A that will reshape the global landscape. CSPs will engage in M&A to stay relevant and financially sound, while incumbent vendors scramble to evolve as their primary business model (selling proprietary hardware and/or software and attached services) is increasingly marginalized and eventually becomes obsolete as hyperscaler innovations spread through the industry.

Hyperscalers do not want to become telecom operators; they want to leverage networks to obtain data and drive their other digital businesses

Hyperscalers are in the data business; providing network connectivity is a means to that end

Hyperscalers are building large-scale networks to drive forward and support their big-picture strategies, which revolve around building out their respective metaverses and supporting a wide range of new digital business models that will be enabled by new technologies such as 5G, edge computing and AI.

To that end, hyperscalers have a vested interest in ensuring the entire world is blanketed with high-speed, unencumbered, intelligent, low-cost connectivity. The economic justification for building the network is driven by hyperscalers’ need to gather and process new types of data to power these new digital business initiatives. TBR notes that this business case is completely different from CSPs’ business case, which monetizes network access rather than the data that comes over the network. The hyperscaler model emphasizes offering free or low-cost connectivity and monetizing the data that flows through the network. The hyperscaler model is far more valuable than the traditional connectivity model and will likely become the predominant business model for connectivity over time.

CSPs sit on vast data lakes and have for many years. These data lakes contain valuable information about subscribers, endpoint devices, real-time location and tracking, and other metrics that are of critical importance for some of the digital business ideas hyperscalers want to commercialize, such as drone package delivery and autonomous vehicles. Owning more of the physical network infrastructure and the core software stack puts hyperscalers in a prime position to capture and monetize this data.

TBR notes that this strategy is already in use in the telecom industry in various places in the world. For example, Reliance Jio and Rakuten are using this strategy in India and Japan, respectively. In both cases, connectivity is given away for free or at a significantly lower cost compared to rival offers, and the data generated by the connections indirectly feeds and monetizes each company’s respective digital businesses, such as advertising, financial services and e-commerce. There is significant evidence suggesting that Alphabet, Amazon, Apple, Meta Platforms and Microsoft all have strategies that are similar but of a far greater magnitude.

Hyperscalers already own and operate the largest networks in the world; the next build-out phase is the mobile core, far edge and access domains

Over two-thirds of global internet traffic traverses hyperscaler-owned network infrastructure at some point in the data’s journey. The vast majority of that traffic travels over hyperscalers’ backbone networks, which primarily comprise optical transmission systems (submarine and terrestrial long-haul optical cables), content delivery networks, and cloud (including central, regional and metro) data centers.

The domains of the network where hyperscalers have yet to dominate at scale are the mobile core, far edge and access layers, but there is mounting evidence to suggest this is changing, thanks to technological advancement and regulatory breakthroughs (e.g., the democratization of spectrum).

TBR’s Hyperscaler Digital Ecosystem Market Landscape focuses on the five primary hyperscalers in the Western world that TBR believes will own the largest, most comprehensive end-to-end digital ecosystems in the digital era. Specifically, the five hyperscalers covered in this report are Microsoft, Alphabet, Meta Platforms, Amazon and Apple. Collectively, TBR refers to these five hyperscalers under the acronym MAMAA. TBR covers the totality of the largest hyperscalers’ businesses, with an emphasis on how they are disrupting the ICT sector. Gain access to this full report, as well as our entire Telecom research, with a 60-day free trial of TBR Insight Center™.

Demand for 5G infrastructure is becoming more robust, though commercial deployments will be delayed by supply chain headwinds in the short term

Supply-demand imbalance delays pace of 5G market development

The pandemic has prompted enterprises and governments to pull forward and broaden the scope of their digital transformations, primarily for business resiliency and cost-reduction purposes but also to tap into new market opportunities. There is significant interest among governments and enterprises across verticals in leveraging 5G and other new technologies, such as AI and edge computing, to adapt economies and societies to the new normal. Though demand for 5G infrastructure is becoming more robust, deployments are being challenged by supply chain limitations.

Though most network vendors successfully navigated supply chain headwinds in 2021 and were nearly able to fully meet demand, 2022 will be more challenging as inventories are now thinner and the shortages of chips, components and labor are impacting the telecom industry more directly. Technological complexity, standards delays and geopolitical encumbrances also threaten to slow the pace of 5G ecosystem development despite broad interest in the technology. There are two primary impacts from the supply chain breakdown: The timing of revenue recognition and cash flow for vendors is altered, and the ability of communication service providers (CSPs) to meet their build-out timelines and participate in market development is hindered.

TBR sees no easy fix to resolve the supply chain issues; rather, it will be a series of small adjustments over time that will enable the supply side to fully recover and meet demand (e.g., it takes a few years to build new chip factories). This is compounded by a demand environment that is above the historical trendline, which is driven by unprecedented government market support and greater pressure on CSPs to invest in their networks to remain competitive.

Related content:

Webinar: 2022 Predictions: Telecom

Special report: Top 3 Predictions for Telecom in 2022

Conversion, integration and ecosystems drive SaaS growth

Applications serve as the vessel for cloud’s business value 

Value, in the form of agility, innovation and efficiency, is now the driving force behind customers’ cloud investments. Applications, in the form of SaaS, are the purest vessel for customers to implement and achieve the value they so desperately want in order to improve their businesses. It is for that reason that TBR published its first Cloud and Software Applications Benchmark, tracking the nuances of the applications space from a workload and subworkload perspective.

Customers’ growing reliance on SaaS solutions is evident in the market growth of the 10 vendors covered in the inaugural report — their aggregate revenue increased by 26.4% year-to-year in 3Q21, a rate that has accelerated over the past year. The Business Applications workload segment, which includes ERP solutions, was the fastest growing, with aggregate revenue for the 10 vendors covered increasing by 28.5% year-to-year during 3Q21. The drivers of this expansion are threefold: the conversion of existing customers to cloud; the integration of solutions through hybrid deployments; and revenue driven by the ecosystems that are critical to the innovation and go-to-market strategy for SaaS solutions.

Providers from all backgrounds now look to existing customers as their first growth option 

For both traditional software providers and companies that were born on the cloud, customers with existing traditional software installations have become one of the main drivers of SaaS growth. Traditional software providers did not always see the market this way. In fact, SaaS was a threat to their existing license and maintenance businesses for quite some time. After years of customers voting with their dollars and selecting SaaS-delivered solutions over the traditional license and maintenance delivery model, nearly all applications vendors currently see their existing bases as the first opportunity for growth.

In some ways, this transition has played out on a workload-by-workload basis. Sales and marketing applications, led by CRM, are on the periphery of most enterprise applications suites and were the earliest to see a shift to SaaS over traditional software purchasing. Salesforce (NYSE: CRM) led this trend, converting many existing customers from traditional leading providers like SAP (NYSE: SAP) and Oracle (NYSE: ORCL). The dynamics in CRM served as a warning shot for many traditional providers. Even the most reluctant SaaS providers, like Oracle, are now focused on offering cloud solutions to their existing customers before their competitors can.

The shift in strategy is well timed for traditional providers, as cloud demand in the Business Applications segment is beginning to accelerate. As shown in Figure 1, Business Applications has the lowest cloud revenue mix for the vendors included in TBR’s Cloud and Software Applications Benchmark, making it the largest opportunity for traditional customer conversion.  

Figure 1

Deep dive: Management consulting and analytics services leading trends in 2021

Join Practice Manager and Principal Analyst Patrick Heffernan, Principal Analyst Boz Hristov, Senior Analyst Elitsa Bakalova and Senior Analyst Kelly Lesiczka on Thursday, Feb. 24, 2022, at 1 p.m. EST/10 a.m. PST for an in-depth analysis of leading trends in the IT services industry, such as vendor performance across regions, service lines and select verticals, and the evolving value proposition as pent-up demand for run-the-business awards continues. The team will also take a deep dive into the management consulting and analytics services segments.

Mark your calendars for Thursday, Feb. 24, at 1 p.m. EST,
and REGISTER to reserve your space.

Related content:

  1. Top 3 Predictions for IT Services
  2. Top 3 Predictions for Management Consulting

Click here to access more TBR webinars.
