Peraton Could Surpass $8B in Sales in 2024, but Will It Go Public?

Updated: Sept. 9, 2024

Will Peraton Issue an IPO?

As Peraton considers going public, it needs to generate predictable revenue and profit streams to avoid pitfalls like failing to meet its forecasted metrics. When the Carlyle Group took Booz Allen Hamilton public in 2010, it assured investors that the business was positioned to keep expanding and to succeed long-term. Peraton’s and Veritas’ leadership teams will undoubtedly take a similar approach. Peraton has become increasingly competitive over the years, and TBR believes it has expanded sales each year, but it remains to be seen whether Peraton is fully realizing the benefits of cost-saving measures or consistently meeting its revenue goals. If it is doing neither, it will not go public.

 

General Dynamics Information Technology (GDIT) and other unencumbered industry peers have been making rapid investments in emerging technologies like generative AI over the last few years. No longer focused on fully integrating its assets, Peraton began to broaden its AI and cloud capabilities more noticeably during 2023 by pursuing strategic relationships with SoftIron and UiPath. These two partnerships enable Peraton to leverage SoftIron’s HyperCloud technology and the UiPath Business Automation Platform while helping clients establish their cloud networks and streamline their workflows.
TBR anticipates that Peraton will continue to expand its partner network to operate as a cloud services broker. Peraton is positioning itself to capitalize on federal agencies’ increasing use of “as a Service” cloud environment models to build their own platforms with desired third-party capabilities, as well as on the steady funding accelerating agencies’ digital modernization journeys, which is expected to persist for the foreseeable future.

 

If Peraton falters with this strategy, it can still pursue opportunities related to next-generation national security. TBR estimates that approximately 45% of Peraton’s sales in 2023 came from the Department of Defense (DOD) and the Intelligence Community (IC). Peraton focuses on underpinning missions of consequence that have high barriers to entry and receive bipartisan funding, like protecting space systems, in addition to supporting national security initiatives.

 

Veritas disclosed in 3Q24 that it had over $40 billion in assets under management. With interest rates expected to remain at elevated levels through 2024, it is unlikely Veritas will make any more multibillion-dollar acquisitions to further augment Peraton. Veritas has demonstrated flexible ownership of Peraton over the years. While Veritas has helped take Peraton to new heights and could pursue a sub-$200 million acquisition to broaden Peraton’s capabilities with emerging technologies, TBR expects Veritas will cash out sooner rather than later.

 

Peraton has undergone several high-profile leadership changes this year, but the most notable announcement is that Steve Schorer will succeed Stu Shea as CEO, president and chairman of the board in September. Schorer was the CEO of Alion Science and Technology before it was acquired by Huntington Ingalls Industries in 2021. Most recently, Schorer has been operating as a senior advisor at Veritas Capital. Given this recent activity, TBR remains confident that Peraton will go public in 2025.

Peraton in 2024

The megamerger has given Peraton the necessary portfolio depth and scale to regularly vie with industry leaders such as Leidos for enterprise IT contracts in the $500 million to $2 billion range in the federal civilian and health spaces while also capitalizing on DOD and IC needs. Now that Peraton’s assets are fully integrated, TBR believes the company is on course to surpass $8.0 billion in annual sales during 2024.

 

Additionally, Peraton’s backlog was last reported at $24.4 billion in the middle of 2022. The company has been placing around 1,200 bids a year worth approximately $40 billion in total. Peraton has not disclosed its current operating margins.

How Peraton Septupled in Size

Private equity firm Veritas Capital officially bought Harris Corporation’s government IT services business for $690 million in 2Q17. The new assets were quickly spun into a stand-alone company, Peraton, helmed by Stu Shea to pursue opportunities in the communications, cybersecurity and space markets.

 

When he was first brought on as Peraton’s president, CEO and chairman of the board, Shea expected that Veritas would financially back his plans for three years before cashing out, since the fund had a five-year life. As part of Shea’s growth strategy, Peraton purchased Strategic Resources International in 2Q18 to augment the company’s telecommunications services portfolio and Solers in 2Q19 to expand its space capabilities. While Peraton did not share the financial values of these transactions, the latter enabled Peraton to generate over $1 billion in annual revenue.

An Arduous Integration Process

In 4Q20, more than three years after appointing Shea, Veritas’ leadership team approached him about Veritas acquiring the IT services operations of three Northrop Grumman business units (collectively referred to as NGIT) and federal IT vendor Perspecta and rolling these assets into Peraton. Veritas purchased NGIT for $3.4 billion in 1Q21, before acquiring Perspecta for $7.1 billion in 2Q21 and bolting on ViON’s cloud operations to Peraton in 3Q23.

 

Anecdotally, Peraton entered this megamerger with industry-leading margins. Following the merger, Peraton’s sales septupled to between $7.0 billion and $7.2 billion in 2021, according to TBR’s estimates, while its headcount surged from 3,500 to 24,000.

 

The largest privately owned federal IT contractor faced hundreds of thousands of obstacles at the start of this integration process, according to Shea. As the leadership team streamlined policies and processes while optimizing the business, they made several notable decisions, such as divesting the systems engineering, integration and support services business to a portfolio company of Veritas. (These assets would later become Arcfield and are still owned by Veritas.)

By divesting this business, Peraton ensured it was fostering ethical business practices by mitigating potential corporate conflicts while narrowing its focus on core operations. In addition to the divestment, Peraton reduced its physical footprint from 150 facilities to fewer than 100. The company also made sweeping workforce rationalizations, shrinking its post-merger headcount from 24,000 in 2021 to 18,000 by the end of 2022. Concurrent with implementing these optimization efforts, Peraton had to contend with an array of impeding factors that plagued other vendors across the industry, including supply chain disruptions, macro inflation and a sustained bid protest environment.

 

Despite this onslaught of obstacles, Peraton has been able to consistently disrupt the public sector market. The company has successfully competed with Tier 1 vendors to secure high-profile contracts such as the Special Operations Forces IT Enterprise Contract III, worth up to $2.8 billion. By August 2022, Shea claimed Peraton had only a few hundred items left to address in its integration master schedule.

PwC Stepping Up When Technology Fails to Deliver Value

In late November 2023, TBR and PwC Transformation Consulting Solutions Leader Tom Puthiyamadam continued a decadelong discussion about the consulting business model, reflecting on changes wrought by the pandemic, technology ecosystem partnerships and generative AI (GenAI). According to PwC’s assessment, technology investments have not delivered the business value or transformative effects enterprises have expected over the last decade. Implementing the latest ERP does not, in itself, deliver growth, and moving workloads to the cloud does not invariably reduce costs. Just as commuters have not taken off in the flying cars that “The Jetsons” promised, business leaders have not seen technology provide transformational results.

Technology Is Easier to Use but Harder to Make Useful — and Still No Flying Cars

For PwC, a new year and a hot new technology, GenAI, provide an opportunity to reassess how consultancies and IT services vendors bring value to their clients, first by defining credible, meaningful business outcomes and then creating a value chain back into the technology, process and operations stacks. What does that actually mean? According to Puthiyamadam and other PwC leaders involved in the discussions with TBR, the starting point is defining business value transformation — a desired end state — and then delivering on trust, transparency and speed.

 

Taking the 10,000-foot view, PwC leaders noted that technology as a whole has been getting easier, perhaps even more so now in the GenAI age. No-code and low-code platforms, visualizations, and GenAI-enabled programs like Microsoft’s Copilot all support a trend toward making technology easier to understand and deploy.

 

Notably, in Puthiyamadam’s words, “The old hard part is still the hard part. Can you stitch it all together? Can you get people to work differently? Can you drive behavioral changes in an enterprise?” And most critically, can a consultancy “deliver on CFO-level outcomes in 12 weeks, not 12 months?”

 

Repeatedly, PwC Consulting leaders came back to a fundamental point around how clients view consultancies: How fast can they deliver measurable, meaningful outcomes? Experience, expertise, technology skills and even scale are table stakes. Speed, combined with quality at a fair price, matters most.
 


  

NASCAR and the Factory Approach: What PwC Can Do Differently

Embracing what PwC leaders have called a “factory approach” to technology-infused professional services engagements allows PwC to reassure clients that the firm is purpose-built when it comes to people, scale, expertise and price.

 

Critically, PwC reassures clients’ IT professionals that the firm provides advisory and support services, availability, and integrated technologies but does not wholly replace those IT professionals’ roles within their organization.

 

In TBR’s view, PwC’s recognition that a standardized, scaled business model — the factory approach — combined with high-touch consulting could actually assuage fears around job disruption may prove critical in coming years as GenAI permeates IT services, generating more uncertainty and fear. Paired with the focus on measurable business outcomes, PwC’s factory approach could help separate the firm from peers.

 

During the discussions with TBR, PwC leaders acknowledged that many enterprise clients struggle with technical debt but challenged the idea that this debt constitutes the biggest obstacle to realizing digital transformation value. Instead, PwC suggested process debt — the ingrained operational tasks, flows and interdependencies — has also accumulated at enterprises, slowing efforts to gain value from the technology (digital) or the business (transformation) investments in digital transformation.

 

PwC leaders further suggested process debt at many enterprises had reached levels that demand attention, even at the cost of additional technical debt. Here, according to PwC, the firm helps clients gain maximum use from current technology investments, finding additional value while accelerating transformation to new (and better) processes with, as needed, new technologies.

 

In TBR’s view, a NASCAR pit crew analogy Puthiyamadam invoked multiple times seemed most appropriate in discussing how PwC could help clients with both their processes and technology. Changing tires fast requires not just better tools but also practice, teamwork and performing under pressure. In an increasingly competitive and budget-constrained IT services and consulting market, bringing NASCAR-like precision and speed to digital transformations will be expected of leading vendors.

Business Value Realization and the Art of Keeping Everyone Honest

Speeding and crashing provides no value on the track or to a business, bringing into play the other two elements PwC sees as critical to a new way of framing consulting: trust and transparency. PwC leaders told TBR that the firm has increasingly been bringing a private equity mindset to its clients’ value realization.

 

Rather than taking three years to fully understand the value of a technology-enabled change, PwC and its clients have been constantly examining ongoing work, determining on a monthly basis whether the expected value continues to be reflected in current progress. The transparency around business value realization — critically here actual measurable business value, not just technology milestones — builds trust and enables speed. PwC has had to reorient its ways of working, reinvigorate its technology training, and build the business model agility to take on financial risks as a way of “keeping everyone honest,” in Puthiyamadam’s assessment.

 

As he pointed out, PwC can help with “modernizing the core while improving the business, realizing value from existing technology. … The client can hit ‘pause’ if they’re not believing [PwC is] going to hit value.”

 

Further on PwC itself, PwC leaders reiterated to TBR that the firm has been training strategists on emerging technologies, an effort that began globally years ago with the Digital Fitness app and has continued to be a learning and development priority. Assessing management consulting overall, Puthiyamadam stated that consultants who are not “trilingual will be irrelevant really soon, if not already. Design, business value, and technology. Must speak all three.”

 

Critically, PwC leaders in the discussion added that the firm’s consultants focus on working within clients’ existing technology stacks, accepting the technology environment as it is and recognizing that perfect is the enemy of progress. Combining what PwC does for itself and what it brings clients, PwC leaders further elaborated that as clients bring new technologies into their IT stack, those clients need the full suite of change management, learning and professional development, and product management critical to successful technology deployments.

 

In TBR’s view, the near-term disruptions in the management consulting and IT services space will require many traditional services — ones that PwC has experience with and credibility around, earned in part by applying those services to the firm itself.

Does PwC Have the New Consulting Business Model? If So, TBR Is Here for It

TBR might not be quite as gloomy as some of PwC’s consulting leaders on the failures of technology to date — we may yet see flying taxis in Paris next summer — but we agree fully that most enterprise information technology has been oversold and has under-delivered in terms of overall business value. PwC’s focus on getting the most from technology that clients have already acquired and on addressing process debt, those sticky business problems that prevent the full value of technology or digital transformation from taking hold, all while delivering value quickly and transparently, strikes TBR as a smart strategy to address an ecosystemwide problem.

 

There is an old saying: “You can have it fast or good or cheap, but not all three.” PwC is challenging that formulation by saying you can have fast and good, and you will always know what you are paying for and what you are getting, even in a previously nebulously defined area like management consulting. And to back it up, PwC will put its own fees at risk, knowing that value will be evaluated at least quarterly, if not more frequently.

 

To TBR, this approach echoes the recent attention around financial operations (FinOps), in which enterprise IT buyers ask how much value they are getting from software, platforms and cloud. At frequent intervals, PwC assesses the value it is bringing to clients, taking no further steps until the expected value is understood and credibly on track. Is PwC disrupting the consulting business model? In TBR’s view, there is no better time for it.

Reliable, Proven and High-functioning: HCLTech’s Cloud-native and GenAI Labs

HCLTech considers the “art of the possible” to be what clients can deploy at scale in the near term. In HCLTech’s Cloud Native Labs, “the possible” is grounded completely in what can be done, not what is theoretically possible. In a decade of visiting innovation and transformation centers, TBR has heard every version of blue-sky creativity and out-of-the-box thinking but cannot recall another IT services vendor definitively connecting “the possible” to “deployable at scale.” 

Grounding the ‘Art of the Possible’

Gracechurch Street Cloud Native Lab Echoes HCLTech’s Fundamentals

In fall 2023, TBR met with Alan Flower, EVP, CTO and global head, Cloud Native & AI Labs; Tom De Vos, Google Cloud Platform (GCP) cloud native architect; and Mani Nagasundaram, global head of Cloud Sales, Financial Services, at HCLTech’s Gracechurch Street Cloud Native Lab, one of a network of HCLTech’s worldwide labs, including a Software Defined Infrastructure Lab in Chennai, India, and a Scale Digital Delivery Center and Digital Innovation Lab in Amsterdam.
 
The HCLTech leaders described in detail the kinds of challenges clients bring to them in the labs as well as why clients come to HCLTech. In use case after use case, the following three elements in HCLTech’s approach in the labs and overall approach to technology and IT services resonated with TBR particularly well based on our experience and view of HCLTech’s peers and ecosystem partners:

 

  • Engineering credibility — HCLTech has always stood out among the large India-centric IT services vendors for its engineering DNA, a mindset that seems to permeate every aspect of the company’s solutions and engagements. Flower first mentioned his company’s engineering legacy in the context of how his teams approach clients’ problems. Then De Vos described a critical element in HCLTech’s engagements at the labs, saying that clients know they are going to be able to “flip a switch” and have a working, materially important solution, not just a PowerPoint presentation or road map.
  • Sustained engagement — HCLTech’s leaders repeatedly described client engagements that extended over multiple lab visits, whether on-site, virtual, or even set up in the client’s facility. While client selection — who comes to the labs and for what kinds of work — is not handled lightly, HCLTech clearly maintains flexibility with respect to how clients can tap into the time and expertise of the HCLTech professionals at the various labs worldwide, reflecting the company’s desire and ability to deliver on client objectives with its portfolio and resources rather than relying entirely on transactional volume.
  • Commitment to relationships — For HCLTech, delivering on client objectives includes keeping the Cloud Native Labs and the entire labs network part of the relationship beyond the contract. Flower repeatedly noted that the labs function as an asset that HCLTech can bring to clients to jump-start problem solving and move from strategic decisions around technology choices and approaches to the training and cultural change management needed to sustain a solution beyond the MVP and pilot stages. That commitment came through in both the use cases Flower described and HCLTech’s understanding that these labs are decidedly not a direct revenue generation source but a critical component to HCLTech’s overall strategy.

 

Technology-centric Cultural Change Management

While HCLTech’s Cloud Native Labs share many attributes with other innovation and transformation centers, including the need to showcase capabilities, challenges managing which clients attend sessions, and opportunities for internal training and skills development, TBR believes these labs could be a blueprint for other IT services vendors, particularly as the entire cloud ecosystem faces disruptions from shifting client expectations and the opportunities around generative AI (GenAI).
 
No client arrives at a consultancy’s or IT services vendor’s innovation and transformation center completely unaware of emerging technologies, nor do any enterprises have blank-slate or pristine technology environments. So when informed clients potentially laden with technology debt arrive at HCLTech’s Cloud Native Labs, the shared mandate to get to a deployable-at-scale solution to a clearly defined (and addressable) problem likely resonates extremely well with clients, in large part because HCLTech continues to engage most frequently with technologists and practitioners, the people tasked with making the tech work at an enterprise.
 
That said, Flower and De Vos repeatedly noted that HCLTech understands the cultural change management needed for any technology solution to scale. Consulting, yes, but within the context of HCLTech’s engineering and technology-problem-solving strengths.

Partnering with the Right Hyperscaler — All 3 of Them

Putting HCLTech’s Cloud Native Labs in context of other consulting and IT services vendors’ innovation and transformation centers necessarily sets aside the cloud focus of these labs. On that point, Flower and De Vos consistently stressed the importance of HCLTech’s hyperscaler partners, including, in no particular order, Microsoft (Nasdaq: MSFT), Amazon Web Services (AWS) (Nasdaq: AMZN) and Google (Nasdaq: GOOGL).
 
Notably, HCLTech partners closely with Red Hat, and the HCLTech executives repeatedly referenced use cases that featured Red Hat’s and IBM’s (NYSE: IBM) technologies. As TBR has previously examined, how consultancies and IT services vendors manage their ecosystem partners at their innovation and transformation centers (and labs) reveals differences in strategic thinking and intent.
 
While full-on branding remains rare and having technology partners’ staff permanently on-site is rarer still, consultancies and IT services vendors have become adept at including technology partners as part of clients’ experiences, almost always when the client has already committed to a particular tech stack (ask us about what happens when a particular Germany-based ERP partner is not in the room). HCLTech remains committed to partnering with a broad ecosystem, following leads from its clients and undoubtedly serving those clients well.
 
Had Flower and De Vos not shared a use case in which a hyperscaler specifically recommended HCLTech to a client — suggesting Flower, De Vos and the rest of the team were best positioned to help the client solve their cloud-related problems — TBR would have questioned how successfully HCLTech balances being cloud vendor agnostic with meeting clients where they are in terms of their existing technology environments and needs. That a cloud vendor could definitively recommend HCLTech to a client indicates HCLTech, aided by its sustained investment in the Cloud Native Labs, has made a compelling case to the cloud vendors.
 
One further note on cloud partners: TBR persistently pushed Flower and De Vos to distinguish between Microsoft Azure, AWS and Google Cloud Platform and detail differences in HCLTech’s alliances. While refusing to pick favorites, the HCLTech leaders described multiple use cases involving each partner, demonstrating a breadth of client challenges and HCLTech solutions and establishing a credibility around HCLTech’s cloud-agnostic strategy.

Cannot Have GenAI Without Cloud (and Cannot Talk Tech Without GenAI)

One cannot have a technology-centric meeting without discussing GenAI. TBR and HCLTech’s Cloud Native Lab leaders shared mostly synchronized views on the implications and opportunities around GenAI, agreeing that infrastructure players and consultancies should see immediate spikes in engagements and revenues. Long term, HCLTech’s focus on security, responsible AI and intimate collaboration with hyperscalers should prove beneficial.
 
Notably, HCLTech also maintains strategic partnerships with Dell Technologies (NYSE: DELL) and Intel (Nasdaq: INTC), two technology vendors that are well positioned to provide the necessary infrastructure for a GenAI adoption wave. Overall, HCLTech’s sobriety around GenAI struck TBR as refreshingly honest. In a setting conducive to blue-sky ideas and bleeding-edge technology musings, HCLTech’s Cloud Native Lab leaders kept the discussion grounded.
 
In a TBR blog, we discussed how GenAI will likely affect IT services vendors like HCLTech: “When looking at the IT services and professional services space, TBR considers two GenAI tracks: What opportunities will vendors seize for generating new revenues, and what changes will GenAI force on how vendors operate? Currently, the first track is pretty straightforward: Fear, uncertainty and doubt around GenAI — fueled by massive hype — create consulting opportunities, particularly for vendors with established governance, risk and compliance offerings.
 
Every vendor has core artificial intelligence, data orchestration, analytics and cloud capabilities, so no vendor can credibly separate itself from the pack with those tools alone. … On the second track, GenAI could be highly disruptive, especially around managed services, to include changes to the staffing pyramid, as less experienced employees either shift to higher-value tasks or leave.”
 
Reflecting on the GenAI discussion with Flower and De Vos, TBR believes HCLTech could begin to separate itself from IT services peers by emphasizing a grounded practicality mindset and a focus on bringing real solutions to scale, even when discussing the potential disruptions of GenAI.

Being Productive in a Time of Chaos and Uncertainty

Grounded and concrete. Partnering smartly and focused on what can realistically scale within clients’ existing or near-term environments. In TBR’s view, HCLTech’s Cloud Native Labs have positioned themselves well for what will likely be an exceptionally turbulent time in the cloud and IT services space. HCLTech also effectively uses the Cloud Native Labs as a platform to showcase products from the HCLSoftware division and to help clients integrate them into the overall solution architecture.
 
Clients’ dissatisfaction with costs and unbridled enthusiasm for GenAI will create unrealistic expectations. Competitive pressures around IT services and hyperscalers’ need to find growth will challenge pricing and engagement models. HCLTech has a reliable, proven, highly functioning cloud lab ecosystem that should be a safe space for clients, technology partners and HCLTech professionals to productively manage through the coming craziness.
 
TBR will highlight HCLTech’s Cloud Native Labs in the next Innovation and Transformation Centers Market Landscape and continue to cover the company in quarterly reports, TBR’s IT Services Benchmark, and in 2024 in TBR’s Cloud Ecosystems Market Landscape.   

 

The Telecom Industry Will Be Calculated in Its Progression to 6G to Ensure Meaningful ROI

Approximately 250 attendees representing entities including telecom network vendors, communication service providers (CSPs), technology companies, regulators, academia, and experts from multiple nontelecom industries converged on the 2023 Brooklyn 6G Summit in late October. The event was hosted by Nokia and NYU Wireless and covered a wide range of topics relevant to 6G, including ICT industry trends, regulatory impacts, the metaverse, AI, vertical use cases, and cloud-native network infrastructure.

TBR Perspective on Telecom Industry Progression to 6G

The 2023 Brooklyn 6G Summit highlighted both the optimism and uncertainty the telecom industry is experiencing as it progresses from the 5G era, which is about halfway through its developmental cycle, to the 6G era, which is expected to commence in 2028, when the first 6G specification in 3rd Generation Partnership Project (3GPP) Release 21 is finalized.
 
Initial commercial 6G network deployments are expected by 2030. The sentiments of optimism and uncertainty around 6G were discussed throughout the event, including in a keynote from AT&T’s EVP of Technology Chris Sambar in which he expressed concerns regarding the ROI of 6G.
 
Sambar stated, “We’re getting a little bit worn out with the economics of the industry” to summarize the challenges AT&T and other operators are currently experiencing in light of high investment costs and limited monetization opportunities in the 5G era. Sambar also remarked, “To be completely honest and transparent, the industry has questions on what is 6G going to bring us, what are the use cases that customers want from 6G and frankly, what is it going to cost us.”
 
Sambar’s keynote, which was one of the initial sessions at the 6G Summit, set the tone for the rest of the event as speakers candidly assessed the current state of the 5G market while discussing the benefits and use cases that are expected to materialize during the 6G era. Though 6G technical specifications and expected use cases are still in the developmental stages, TBR believes operators will be more calculated and tactical in investing in 6G compared to 5G, with a deeper emphasis on ensuring a clear line of sight to ROI before significant spending occurs.
 



 

Impact and Opportunities

Lessons Learned From 5G Era Provide Blueprint to Optimize 6G Deployments

Speakers discussed missteps during the 5G era and the importance of not repeating those mistakes in deploying 6G. A key theme was that the launch of multiple variants of 5G in the U.S. — described as “50 shades of 5G” by an event participant — was ultimately a misstep compounded by premature marketing. This trend was exemplified by the initial launch of 5G services in the U.S. over low-band spectrum, which provided only marginal performance benefits compared to LTE and in turn created a generally tepid initial impression of 5G among consumers.
 
Another notable example was the launch of 5G non-standalone (5G NSA) prior to the deployment of 5G standalone (5G SA). Though 5G NSA enabled operators to launch commercial 5G services faster, 5G NSA lacks key benefits enabled by 5G SA, including faster data speeds, enhanced security, and the ability to support network slicing and lower latency use cases. The separate launches of 5G NSA and 5G SA in turn created complexities and misunderstandings for consumers and enterprises.
 
Event participants noted that the challenges experienced during the 5G era will help guide the industry as it creates more cohesive 6G strategies that enable operators to optimize network spending, provide more tangible initial benefits to customers, and minimize premature marketing of services. Key focus areas for the industry in 6G development include optimizing spectrum allocations for 6G as well as establishing unified global technology standards to minimize fragmentation in the market. For instance, participants at the event noted it would be beneficial for the industry to determine during the earlier stages of standards development whether 6G will be deployed on its own separate network core or on existing 5G cores, and for operators to adhere to one deployment model to avoid the complexities created by 5G NSA and 5G SA.

The Clearance of 6G Spectrum Will Be Vital in Supporting Continued Growth in Data Traffic

Despite the early stages of 6G use cases and the uncertainties around monetization opportunities, operators will need to invest in 6G to remain competitive with each other and to support escalating data traffic long-term, as 6G is projected to support a 10x increase in network usage. The clearance of additional spectrum in the U.S. will be essential to support 6G and for the country to remain at the forefront of the global wireless market, as Sambar noted the U.S. currently ranks No. 10 worldwide in licensed midband spectrum allocation. Key spectrum ranges Nokia expects 6G to be deployed on include the 7GHz-20GHz frequencies to support outdoor cell sites in urban markets, low-band spectrum in the 470MHz-694MHz range to maximize coverage, and sub-terahertz spectrum to provide peak data speeds in localized areas.
 
The National Spectrum Strategy, which was released by the Biden administration in November 2023, will help advance spectrum development in the U.S. The strategy identifies 2,786MHz of airwaves to study in the near term for new uses, including 5G and 6G, across five spectrum bands: 3.1GHz-3.45GHz, 5.030GHz-5.091GHz, 7.125GHz-8.4GHz, 18.1GHz-18.6GHz and 37GHz-37.6GHz.

More Efficient Network Technologies Will Be a Primary Use Case for 6G

Various potential 6G use cases were discussed at the summit, though the time frame for commercial readiness and the willingness of customers to pay for these solutions remain unknown. Many of the use cases discussed involved extended reality (XR) technologies such as AR and VR and included the metaverse and real-world simulations to provide training for users including military personnel and first responders. Use cases around autonomous vehicles, advanced robotics, drones and 8K video were also discussed.
 
TBR expects the most beneficial use cases for 6G will involve the provisioning of advanced technologies that enable operators to more cost-efficiently support rising traffic on their networks. For instance, deeper implementation of artificial intelligence and machine learning technologies will enable operators to enhance self-optimizing network (SON) capabilities to realize cost efficiencies. 6G is also expected to result in deeper implementation of digital twins, which will help operators better anticipate network outcomes and optimize their operations in areas including site management and field operations. Additionally, 6G is expected to be significantly more energy efficient than 5G, which will enable operators to improve cost efficiencies while supporting corporate sustainability goals.
 

Conclusion

The 2023 Brooklyn 6G Summit provided an optimistic yet realistic outlook on the potential of 6G. The telecom industry is particularly concerned regarding the revenue opportunity provided by 6G given the current state of the 5G market. Despite the uncertainty of revenue-generating 6G customer use cases, investments in 6G will likely benefit operators in the long term due to the technology’s ability to support escalating traffic more cost-efficiently on their networks.

AWS Aims to Reinvent GenAI Through Infrastructure Layer, Platform Tools and Applications

Amazon Web Services’ (AWS) 12th annual re:Invent conference was, unsurprisingly, all about generative AI (GenAI). The five-day event showcased all the ways AWS enables this budding technology — which Amazon CEO Andy Jassy claims will add tens of billions of dollars to AWS’ top line — not just through the infrastructure layer AWS is known for, but also through the company’s platform tools and applications. 

AWS Set Out to re:Invent Infrastructure Over a Decade Ago and Is Prepared to Do the Same With GenAI

Dating back to the dot-com bubble and the early days of amazon.com, Amazon gained an understanding of what it takes to provision infrastructure designed to scale at massive volumes. After Amazon spent years trying to overcome scale challenges associated with bringing third-party merchants to its e-commerce engine, AWS was born.
 
Despite all the competition it has welcomed over the past 10 years, AWS is still largely credited with not only pioneering cloud infrastructure but also making it accessible to anyone. As articulated by AWS CEO Adam Selipsky, this could range from a college student using a laptop in their dorm room to some of the most sophisticated enterprises in the world. But largely owing to the pandemic, the cloud market has shifted from a data center outsourcing strategy to a strategic business driver, which means AWS has had to adapt alongside its clients, offering not just traditional hosting services but also full-stack solutions tied to specific use cases.
 
One of the most compelling customer examples highlighting this approach is Pfizer. At the height of the COVID-19 pandemic in 2021, Pfizer pledged to expand its cloud footprint from 10% to 80%. To that end, Pfizer migrated 12,000 applications and 8,000 servers in 42 weeks, which resulted in $47 million in annual savings and the closure of three data centers. This seemingly successful, large-scale transformation has Pfizer now exploring AWS’ GenAI technologies, including Bedrock, to automate manual processes and realize a projected $750 million to $1 billion in annual cost savings.
 
This customer example speaks to the powerful influence AWS’ infrastructure has with clients such as Pfizer — which needed to submit data to the Food and Drug Administration in a matter of days during COVID-19 — that prioritize speed, scale and agility. Holding a significant portion of the cloud infrastructure layer, AWS is looking up the stack to tackle cloud’s next big reinvention: GenAI.
 



 

A closer look at AWS’ GenAI stack

Selipsky’s overview of the AWS GenAI stack was consistent with the commentary Jassy has provided on Amazon earnings calls over the past couple of quarters. Here is a quick look at AWS’ GenAI capabilities and some of the new innovations:

  • Infrastructure: While a great talking point for AWS, there is no arguing that scalable compute serves as the foundation for all things GenAI. For AWS, this includes both custom chips and NVIDIA (Nasdaq: NVDA) GPUs. AWS used re:Invent to launch innovations in both areas, including Amazon Trainium2 (Trn2) instances, which promise a fourfold performance increase over first-generation Trainium (Trn1) for machine learning training workloads, and NVIDIA DGX Cloud on AWS. The latter is particularly interesting and comes after all other major cloud providers have already signed on as hosting partners for NVIDIA’s DGX AI software. As the first company to put GPUs in the cloud, AWS has a unique relationship with NVIDIA, but one that may be growing more contentious as sales teams push AWS’ own chips as part of a cost optimization play designed to maximize customers’ lifetime value. Even so, NVIDIA’s supplier power is significant, and thus the company has a lot of bargaining power with the hyperscalers, which need NVIDIA to supply GPUs to their data centers and, in return, can host DGX and support NVIDIA’s push into the software space.
  • Platform tools and “as a Service” models: The middle layer of AWS’ GenAI stack is largely synonymous with Amazon Bedrock, a managed service used by 10,000-plus customers to access and customize foundation models for their GenAI apps. Making sure customers are not beholden to one model provider and can access an array of options through the same API interface is key to AWS’ strategy. It also contrasts with Microsoft’s (Nasdaq: MSFT) approach and helps AWS position itself as an open and flexible alternative. New models supported via Bedrock include Anthropic’s Claude 2.1, which has a context window of roughly 150,000 words — making it well suited for legal and finance use cases — in addition to internal models, like Amazon Titan Multimodal Embeddings. Breadth of models is key, but improving the native functionality within Bedrock garners the majority of investment from AWS at this layer. This largely includes features that get customers beyond out-of-the-box models to those that can be customized, fine-tuned and applied to business use cases. One example is Knowledge Bases for Amazon Bedrock, a Retrieval Augmented Generation (RAG) service that pulls data from multiple sources (e.g., databases, APIs) to help customers bring data to their models and customize them. (A brief sketch of Bedrock’s single-API access pattern follows this list.)
  • GenAI applications: At the top of the stack are the actual GenAI applications built on foundation models. AWS may have a weaker association here, but this layer is important to rounding out the entire stack and keeping customers invested in AWS. This layer largely comprises CodeWhisperer, the free-for-use code companion that also offers customization capability, which means the application learns from internal code to provide personalized recommendations.
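To make the middle layer concrete, below is a minimal sketch of invoking a foundation model through Bedrock’s runtime API with boto3; the prompt, region and use case are illustrative placeholders, and the model ID follows Bedrock’s published naming for Claude 2.1 but should be verified for a given account and region.

```python
# A minimal sketch of calling a foundation model through Amazon Bedrock's
# runtime API. The prompt is a placeholder; swapping modelId is how Bedrock
# exposes different providers' models behind one API interface.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude 2.x on Bedrock expects a Human/Assistant-formatted text prompt.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the key terms of this clause: ...\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2:1",  # Anthropic's Claude 2.1
    body=body,
)
print(json.loads(response["body"].read())["completion"])
```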

It is all about breadth

With over 220 native services and 600 compute instance types, portfolio breadth has always been a hallmark attribute of AWS. For context, AWS launched 3,300 new features and services in 2022. In his opening keynote, Selipsky went as far as to say that AWS offers 60% more services and 40% more features than its nearest competitor. The approach to GenAI will be no different, as AWS strives to offer the broadest set of capabilities for customers to run, build and deploy GenAI technology.
 
Even in areas where AWS lacks depth or specificity, ISV solutions prove instrumental in filling gaps and, in many cases, do more to drive up a customer’s underlying IaaS resources than AWS’ out-of-the-box services. AWS also has a rich history of bringing very basic services to market and quickly building them into competitive products over time. Perhaps the best example is Amazon SageMaker, which accumulated over 250 features and tens of thousands of customers in the span of six years.

 

GenAI applications: How AWS is entering the copilot race with Amazon Q

At re:Invent, AWS took a bigger leap into the GenAI applications space with the launch of Amazon Q. While incorporating natural language processing (NLP) into various services is not necessarily new to AWS, Q is a GenAI-powered assistant based on 17 years of AWS knowledge designed to bridge the gap between the technical and business-led functions in the enterprise. For example, Q will be integrated with the CodeWhisperer environment so developers can ask questions like, “How do I create code for this function?” while admins can use Q in pretty much any environment (e.g., AWS Management Console, Slack, documentation) to ask questions as generic as, “How do I build a web app on AWS?”

 

But Q also connects to 40 external data sources for business-related tasks, such as data visualization and document summarization, while the assistant integrates with Amazon Connect for contact center optimization and will soon work with the AWS Supply Chain application launched at last year’s re:Invent. Integrating Q across functions like supply chain and customer service, in addition to the analytics stack with QuickSight, suggests AWS wants Q to be the expert assistant not just for building on AWS but also for business.

 

This approach is largely consistent with what we are seeing from competitors integrating copilots and assistants into their SaaS offerings; however, there are a couple of big contrasts between Q and Microsoft’s Copilot and Google Cloud’s (Nasdaq: GOOGL) Duet AI. The first is pricing: Both Copilot and Duet AI are priced at $30 per user per month, while Amazon Q, though still in preview, will come in at $20 and $25 per user per month for the Q Business and Q Builder editions, respectively.
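For a rough sense of what those list prices imply at scale, the back-of-the-envelope comparison below annualizes the per-seat costs; the 1,000-seat deployment is a hypothetical example.

```python
# Back-of-the-envelope annual cost comparison using the list prices cited
# above. The seat count is a hypothetical example.
prices_per_user_per_month = {
    "Microsoft Copilot": 30,
    "Google Cloud Duet AI": 30,
    "Amazon Q Business": 20,
    "Amazon Q Builder": 25,
}

seats = 1_000  # hypothetical enterprise deployment

for product, monthly in prices_per_user_per_month.items():
    print(f"{product}: ${monthly * seats * 12:,} per year for {seats:,} seats")
```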
 
AWS may be undercutting its competitors on price, but Microsoft’s and Google Cloud’s recognition and reach in the productivity space may prove challenging, at least for the Q Business edition. Q Builder, however, may be another story. While including all the capabilities of Q Business, Q Builder is designed for AWS-specific use cases, and in general, anything AWS can do to make developers successful is going to be well received by the customer base. This could include tasks like troubleshooting applications, writing SQL queries or even migrating code. A small pool of Amazon developers tested this last capability internally to upgrade 1,000 applications from Java 8 to Java 17 in two days.

 

The other big difference is that Amazon Q leverages Bedrock, which means the GenAI assistant can pull from multiple third-party models and assign them to the right tasks. Peers have taken a different approach, as their assistants are based on a sole provider: for Google Cloud, this is internal models like Codey, and for Microsoft, this is OpenAI’s GPT models. While we cannot say for certain how customers will view these approaches, for AWS, having Q based on Bedrock speaks to the company’s goal of offering a broad array of models in hopes of challenging Microsoft.

The zero-ETL integrations keep coming

Building on last year’s commitment to a zero-ETL (Extract, Transform, Load) future and the resulting integration between Redshift and Aurora, AWS launched three new zero-ETL relational and nonrelational database integrations with Redshift: Aurora PostgreSQL, RDS (Relational Database Service) and DynamoDB. Just like it wants to offer the broadest set of infrastructure options, AWS wants to ensure it has the breadth of cloud data services customers need so they do not have to compromise on the right tool for the right data task. But even if customers have an array of tools accessible to them, they still need a way to break down data silos, which requires integration.

 

To automatically connect data from source to destination and ease manual ETL processes, AWS is offering more integrations between its database and data warehouse services. We do suspect “zero-ETL” has become more of a marketing term and is essentially glorified data sharing, but there is undoubtedly value in simplifying how businesses connect and analyze data. Even before GenAI broke headlines, businesses were realizing the benefits of breaking down data silos and adopting an integrated data posture, but GenAI should only fast-track data strategies throughout the enterprise.
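To illustrate what “zero-ETL” removes, the sketch below shows the kind of conventional extract-transform-load pass that otherwise has to be written and scheduled, using in-memory sqlite3 databases as stand-ins for an operational store and a warehouse; the table names and cents-to-dollars transform are hypothetical.

```python
# A self-contained illustration of the nightly ETL job that zero-ETL
# integrations aim to eliminate. sqlite3 stands in for an operational
# database (e.g., Aurora) and a data warehouse (e.g., Redshift).
import sqlite3

source = sqlite3.connect(":memory:")     # stand-in for the operational store
warehouse = sqlite3.connect(":memory:")  # stand-in for the data warehouse

source.execute("CREATE TABLE orders (id INTEGER, amount_cents INTEGER)")
source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 1999), (2, 4500)])
warehouse.execute("CREATE TABLE orders_fact (id INTEGER, amount_usd REAL)")

# Extract: pull rows from the operational store.
rows = source.execute("SELECT id, amount_cents FROM orders").fetchall()

# Transform: reshape for analytics (here, cents to dollars).
transformed = [(order_id, cents / 100.0) for order_id, cents in rows]

# Load: write into the warehouse.
warehouse.executemany("INSERT INTO orders_fact VALUES (?, ?)", transformed)
warehouse.commit()

# With a zero-ETL integration, this replication happens continuously and
# automatically; analysts simply query the warehouse-side table.
print(warehouse.execute("SELECT * FROM orders_fact").fetchall())
```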

 

Microsoft recognized this trend years ago and recently launched Fabric, a platform that integrates multiple data services, including Synapse, which is akin to Amazon Redshift, into a single offering. Fabric is a single-source-of-truth platform that addresses the entire data cycle and charges customers based on total IaaS resources consumed, versus the compute and storage for each individual data service. AWS’ approach is different, and while customers have a suite of different data services available to them, it could take more effort for customers to stitch these services together and create a unified environment. The new zero-ETL integrations may help rectify this, but Microsoft’s single platform approach and simplified pricing model, all integrated with Copilot, will be competitive.

At AWS, “partners are the catalysts”

In the second-to-last keynote, VP Worldwide Channels and Alliances Ruba Borno discussed the critical role of partners acting as catalysts to GenAI adoption. This includes both ISVs and global systems integrators, and AWS wants to work with both parties collectively to meet a customer where they are in a journey and work backward from their needs. Delivering solutions as part of an ecosystem was a big focus of the revamped Partner Paths model two years ago, and now AWS is tasked with scaling this model to deliver the GenAI stack to customers.
 
When asked by Borno what partners can do to drive more business with AWS, Selipsky quickly called out proficiency and making sure the skills are in place to build trust with joint customers. Specializations and competencies are a big piece of proficiency and are skills customers appear to be asking for. At the event, AWS announced the general availability of specializations in resilience and cyber insurance and is also revamping its Competency, Service Delivery and Service Ready designations into one program. Another piece of advice for partners was to focus on putting the necessary resources in place to go to market with AWS, which could be anything from established business units to codeveloped centers of excellence.
 

It is always a balancing act between the vendor and partner as to who should invest what in terms of go-to-market resources to achieve collective goals. But the message during the talk between Selipsky and Borno seemed to be that AWS has all the funding, tooling and programs available to partners to make for a successful go-to-market strategy, but the partner has to be willing to engage. As a result, it may be difficult for some Tier 2 partners to grow their AWS business and get in on the GenAI opportunity given the massive resource scale of some of the Tier 1 competitors.
 
As an example, Accenture pledged to train 50,000 developers and technical specialists on Amazon Q and CodeWhisperer over the next two years. Despite GenAI’s potential to automate labor, the technology will only widen the vast IT skills gap, so vendors that can acquire and train the right talent will continue to outperform when it comes to doing business with AWS.
 
Lastly, Selipsky reiterated the important role partners will play in the data ecosystem. Considering it is not the actual foundation models that will differentiate the customer, but rather their data, there is an opportunity for partners here, and anything they can do to help customers establish a data layer that will pave the way for AWS’ GenAI stack will be well received.

Conclusion

While later to the GenAI movement, AWS, with its early establishment in cloud infrastructure, has actually been involved with AI for quite some time. In many ways, the company used re:Invent to raise its voice over the din of AI chatter and showcase the long-standing innovations that it aims to use to build new capabilities and play catch-up with competitors, namely Microsoft. The best example is Amazon Q, a business-focused assistant that is somewhat comparable to Microsoft Copilot, while more Redshift integrations underscore AWS’ goal of better connecting customers to other AWS services, an approach Microsoft is similarly taking with Fabric. Meanwhile, custom compute offerings will continue to serve as a landing spot for net-new workloads, and in some cases, they could be providing cost and performance benefits that help AWS become viewed as not just a hosting provider but also a long-term digital transformation partner.

 

At the end of the day, customers’ consideration of these GenAI offerings will heavily depend on their existing infrastructure footprint, the level of integration required and the business use case. Working with partners to land new business and maintain its IaaS leadership lays a foundation for AWS to build the broadest set of integrations, features and services. In doing so, AWS ensures it can meet clients anywhere in their journey regardless of technical requirement or business need. If properly executed, this approach will help AWS further grow off an $88 billion run rate and maintain its lead over its very fast-following peers in IaaS and PaaS.

With Broadcom at the Helm, Profitability Will Be at the Center of VMware’s Next Chapter

On Nov. 22, 2023, Broadcom officially closed its acquisition of VMware, concluding an 18-month saga that called on the company to navigate several regulatory roadblocks. While these hurdles may have delayed the deal’s closing, TBR suspects most industry watchers have anticipated this outcome for quite some time.

VMware Acquisition Approved by Global Regulators

Global regulators’ early concerns about anticompetitive behavior did not take into account the strategic importance to Broadcom of keeping VMware’s platforms accessible across all hardware options, which makes it unlikely Broadcom would limit these platforms to its own hardware.
 
Chinese regulators were certainly a tail risk given recent geopolitically motivated actions against other U.S.-oriented M&A, yet they ultimately approved the deal, too, perhaps due to Broadcom’s historical ties to the country and the software-centric focus of the acquisition.
 
Now, with the deal done, VMware’s next chapter has begun. It has been a long road for the company, yet many things have remained the same. Although VMware is pushing into new cloud-native platforms, the company’s virtualization platform is still its bread and butter, and much of VMware’s total revenue is tied to this business. The concentration is likely even greater in operating profit terms. As Broadcom takes the reins, VMware’s strategy will revolve around maximizing the value of these profit centers, likely to the detriment of emerging businesses.
 



 

Broadcom Is in Charge and Will Be Guided by Profitability

Broadcom has stated that profitability through cost cutting is the top priority, communicating to investors the goal of achieving adjusted EBITDA of $8.5 billion over the next three years, compared to $3.2 billion of GAAP EBITDA for the 12 months ended CY2Q23. While far from a perfect comparison, the targeted uplift is clearly sizable and will rely heavily on reducing costs.
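As rough arithmetic on those two figures (and acknowledging the adjusted-versus-GAAP mismatch noted above), the target implies EBITDA must nearly triple:

```python
# Rough arithmetic on Broadcom's stated target, using only the figures above.
# Note the imperfect comparison: adjusted EBITDA target vs. trailing GAAP EBITDA.
target_adj_ebitda = 8.5     # $B, goal over the next three years
trailing_gaap_ebitda = 3.2  # $B, 12 months ended CY2Q23

print(f"Implied uplift: {target_adj_ebitda / trailing_gaap_ebitda:.1f}x")  # ~2.7x
```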
 
TBR expects general & administrative costs to see the greatest relative decline as Broadcom executes its synergy plan, which will involve slashing redundant headcount in administrative roles. TBR expects Broadcom to be particularly successful in this area, as leadership has extensive experience folding acquired businesses into existing functions in departments like legal, finance and human resources. This skill will be put to work quickly, likely resulting in multiple rounds of layoffs across these departments.
 
Sales & marketing teams are expected to see impacts as well, as Broadcom makes use of its existing sales teams and channel distribution partners to sell into existing strategic accounts.
 
Headcount reductions have already begun, just days after the deal closed. The total impact of layoffs so far is unclear, yet there are reports that reductions have affected software development and cloud engineering roles as well as administrative roles. While VMware’s R&D budget will undoubtedly shrink, it is unknown by how much. The fact that R&D-related headcount is being cut early does not paint a favorable picture for Broadcom’s commitment to innovation, yet TBR’s estimates indicate that drastic cuts may not be necessary. This aligns with commentary from Broadcom management, which has promised to maintain VMware’s previous development strategy. Still, TBR remains skeptical about future R&D efforts.

Profitability Goals May Negatively Impact License Products and Emerging Solutions Over the Long Term

Along with many industry watchers, TBR has been concerned about Broadcom’s intention to invest in innovation since the initial announcement of the VMware acquisition, given Broadcom’s history with CA Technologies and Symantec. In both instances, the company slashed funding for support and R&D after the acquisition, opting to extract free cash flow from their sticky install bases instead of pursuing organic growth. VMware offers a similar opportunity.
 
Cost concerns are prompting many enterprise customers to preserve past investments, including their virtualization platforms. Moreover, since VMware has built highly integrated solutions with all the Tier 1 hyperscalers, enterprises are better equipped to migrate their virtualization platforms to the cloud, where they are able to set up broader cloud migrations without fully committing to the transition to cloud native.
 
This means VMware commands a large, sticky install base, which would be ideal for Broadcom’s previous strategy. Recognizing this, many partners and customers are rightfully worried about the outcome of this deal, expecting higher licensing prices and diminishing support.

Profit Centers Will See Little Impact from Broadcom Ownership

In addition to promoting margin expansion, raising license prices will encourage more customers to transition to subscription offerings, which highlights an important consideration within this business transformation. While Broadcom will deprioritize certain segments, large portions of VMware will be deemed strategic by Broadcom and will continue to see the same level of investment.
 
For instance, many customers and partners collaborating around cloud-based virtualization platforms like VMware Cloud will see minimal differences because of the change in ownership. For the last 12 months ended CY2Q23, over 34% of VMware’s revenue was generated in the Subscription & SaaS segment, and TBR suspects Broadcom will prioritize many of the offerings within this segment.
 
In May Broadcom CEO Hock Tan pledged to invest an incremental $2 billion per year, with half slated for R&D to support the Cross-Cloud portfolio. Considering that an incremental $1 billion investment would increase R&D spend by around 30% over CY2022 levels, Broadcom’s ownership may actually benefit large swaths of VMware’s Cross-Cloud portfolio by adding resources and accelerating development timelines.

Long-term, Profitability Will Be King

TBR is skeptical about how far into the future Broadcom’s commitment will extend, and it is not clear how Broadcom’s investment will be spread across VMware’s different offerings. Many solutions within the Cross-Cloud portfolio are still underdeveloped and represent opportunities for VMware to achieve sustainable long-term top-line performance.
 
Tanzu is a prime example. The container management platform sits at the heart of the company’s multicloud strategy, which VMware has pushed heavily over the past 18 months, yet TBR suspects Tanzu contributes only a small percentage of total revenue and certainly cannot be considered a profit center.
 
If Broadcom is to achieve its stated profitability goals, VMware will need to scale this offering rapidly. If it does not, TBR expects there will be a limit to Broadcom’s patience, and a spinoff may be in the cards over the long term. To TBR, the $2 billion commitment indicates a willingness to support these emerging businesses only over the short term.

Conclusion

Regardless of how much Broadcom messages around maintaining VMware’s current investment strategies, it is very difficult to reconcile this marketing approach with the company’s stated profitability goals. Thus, TBR suspects large changes have begun to arrive for the virtualization leader.
 
The most immediate impacts will be the significant layoffs that have reportedly removed redundant administrative headcount, along with likely price increases on license products. While there is good reason to expect that many of VMware’s emerging products will be supported over the next couple of years, the long-term view is much more opaque.
 
TBR will be watching for signs of traction and strong execution around many of the emerging solutions included in the Cross-Cloud portfolio, but if they fail to materialize, TBR expects Broadcom’s management to make decisions that benefit profitability.

 

Microsoft Expands PaaS Portfolio on Path to AI Incumbency

A platform company at its core, Microsoft is less concerned with migrating monolithic applications and instead is focused on building a complete data integration and management layer to capture value-add workloads that tie into said applications, all while maximizing clients’ underlying Azure infrastructure usage. To replicate this approach for the AI era, Microsoft has spent years integrating its various data services, from Synapse to Power BI, to automate customers’ entire data pipelines and prepare them for AI adoption. The result is Microsoft Fabric, a new end-to-end SaaS-like data platform that could help Microsoft reach new audiences and spur Azure growth in the continued race for cloud and AI dominance.  

Microsoft Is Investing in Data Cloud to Support Its GenAI Strategy

What Is Microsoft Fabric?

Simply put, Microsoft Fabric is a unified data platform comprising seven core Azure data services: Data Factory, Synapse Data Engineering, Synapse Data Science, Synapse Data Warehouse, Synapse Real-Time Analytics, Power BI and Data Activator. While Microsoft Fabric makes it easier for different personas within an organization, from data engineers to business analysts, to work from the same platform, the hallmark of the new service is its simplified pricing model, which charges customers based on a shared pool of underlying resources consumed across the platform rather than separate compute and storage meters for each individual Azure data service.
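To make that pricing distinction concrete, here is a toy Python sketch contrasting per-service metering with a single pooled capacity model. All service names, rates and usage figures are hypothetical illustrations, not Microsoft’s actual pricing.

```python
# Toy comparison of two billing models; every rate and usage figure
# below is hypothetical, not Microsoft's actual pricing.

# Model 1: each data service meters its own compute and storage.
# service: (compute_hours, storage_gb, $/compute_hour, $/GB-month)
per_service = {
    "warehouse": (120, 500, 2.00, 0.05),
    "pipelines": (80, 50, 1.50, 0.05),
    "analytics": (60, 200, 2.50, 0.05),
}
per_service_total = sum(
    hours * hr_rate + gb * gb_rate
    for hours, gb, hr_rate, gb_rate in per_service.values()
)

# Model 2: one pooled capacity meters all workloads together, so
# idle capacity in one service can absorb bursts in another.
pooled_units = 64        # hypothetical capacity units purchased
rate_per_unit = 8.00     # hypothetical $/unit-month
pooled_total = pooled_units * rate_per_unit

print(f"Per-service bill:     ${per_service_total:,.2f}")
print(f"Pooled-capacity bill: ${pooled_total:,.2f}")
```

The appeal for buyers is less about which bill is lower and more about predictability: one meter across every workload is far easier to budget than a separate compute and storage line item per service.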
 
When we interview enterprise buyers, we continue to find that consolidating point solutions in favor of complete, integrated platforms is a common trend, and Fabric is bound to resonate with customers trying to control runaway cloud costs in a still widely uncertain economy.
 
The other key defining attribute of Microsoft Fabric is the underlying architecture it is built on: OneLake. Microsoft Fabric is based on a single unified repository that allows customers to query data not only in SQL databases but also in object storage, as is customary in the data lake architecture.
 
With OneLake, we see Microsoft moving squarely into the data lake space. Given the symbiotic relationship between data lakes, which are designed for unstructured data, and generative AI (GenAI), OneLake is Microsoft’s under-the-hood way of ensuring that customers can easily load data from multiple sources, put it through the Fabric platform for data management and visualization, and build GenAI applications.
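As a rough illustration of that pattern, the PySpark sketch below pulls a structured Delta table and a folder of unstructured documents from the same lake into one analysis-ready DataFrame. The OneLake-style path, table and file names are hypothetical, and the same calls work against any storage a Spark cluster can reach.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("genai-prep").getOrCreate()

# Hypothetical OneLake-style root; substitute any Spark-accessible
# storage (local disk, S3, ADLS Gen2, etc.).
LAKE = "abfss://sales@onelake.dfs.fabric.microsoft.com/main.Lakehouse"

# Structured side: a Delta table of customer records.
customers = spark.read.format("delta").load(f"{LAKE}/Tables/customers")

# Unstructured side: raw text documents, one row per line of each file.
docs = (
    spark.read.text(f"{LAKE}/Files/support_tickets/*.txt")
    .withColumn("source_file", F.input_file_name())
    # Tag each document with a customer ID parsed from its file name,
    # e.g., "C123_ticket.txt" -> "C123".
    .withColumn("customer_id", F.regexp_extract("source_file", r"(C\d+)_", 1))
)

# Join both shapes of data into a single corpus that a downstream
# GenAI step (embedding, fine-tuning, retrieval) can consume.
corpus = docs.join(customers, on="customer_id", how="left")
corpus.show(truncate=60)
```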
 
Altogether, the unification of Microsoft OneLake and Fabric is the right step for Microsoft and exemplifies how far the company has been willing to go to execute its AI-based growth strategy.
 


Fabric Will Help Microsoft Change the PaaS Landscape but Not Without Infringing on Partners

As highlighted in TBR’s 3Q23 Cloud Data Services Market Landscape, Amazon Web Services (AWS) is the clear leader in the cloud data warehouse market, with Microsoft falling squarely in second place and not significantly ahead of Google Cloud and Snowflake. Azure Synapse has not gained the same level of interest and traction in the market as AWS’ Redshift and Google Cloud’s BigQuery. As a result, Microsoft partnered with Databricks in 2017, developing and delivering the first-party Azure Databricks service.
 
Partnering with Databricks to ensure customers have an effective data analytics platform natively available on Azure, rather than relying on Synapse alone, was a strategic move. With Fabric, however, we now see Microsoft essentially re-delivering Synapse as part of a more complete product that gets to the heart of what customers want: an end-to-end set of capabilities that automate entire data pipelines from data collection and ingestion up to analytics and visualization.
 
This approach should bring Synapse into more client conversations while helping Microsoft expand its reach outside the analytics department. This, of course, raises the question: What becomes of Microsoft’s partnership with Databricks? As part of OneLake, the architecture underpinning Fabric, Microsoft is leveraging Delta Lake — Databricks’ protocol for storing data in an open table format — and this move could persuade Databricks customers to adopt Fabric.
 
Even so, Microsoft OneLake adopts the data lakehouse architecture pioneered by Databricks, and with Fabric’s feature-rich set of upper-stack capabilities, customers may be more inclined to go all in with Microsoft Fabric and its comprehensive pricing model, which would bring a new layer of competition to the Microsoft-Databricks relationship.
 
This trend is indicative of what we are seeing across the cloud landscape. The hyperscalers, even those perceived as more partner friendly, are expanding into new areas of the cloud stack, posing potential risks to their partners, especially as customers continue to indicate their interest in consolidating point solutions.
 
That said, coopetition is nothing new in the cloud landscape, and vendors are getting more adept at navigating competitive differences to deliver outcome-specific solutions to their joint customers.
 
Perhaps the best example is the relationship between AWS and Snowflake, which are both spending millions of dollars to move legacy data warehouse customers to Snowflake’s platform on AWS. While AWS would naturally prefer customers adopt its own data warehouse service, Redshift, over Snowflake, AWS has accepted the trade-off of forfeiting some Redshift customers to Snowflake as long as those customers are running on AWS infrastructure.
 
Microsoft Fabric is much broader than the data warehouse, but if AWS and Snowflake are a barometer of a successful partnership, Microsoft and Databricks will similarly learn to overcome these obstacles.
 
With Fabric, we expect Microsoft will slowly chip away at AWS’ share, and potentially Snowflake’s and Databricks’, in the coming years. However, it is important to note we do not see Fabric as a direct threat to pure play data cloud platforms, particularly Snowflake, which has an established presence and reputation in the data warehouse space specifically, not to mention easy inroads into AWS’ customer base.
 
In our talks with enterprise buyers, we often find customers value Snowflake because it allows them to run separate workloads against a shared data layer that is not tied to any specific cloud infrastructure. Despite the multicloud capabilities in OneLake, nothing changes the fact that the core data warehousing capabilities within Synapse are still built specifically for Azure infrastructure to ensure seamless integration with other Azure services.
 
We have no doubt Fabric will be attractive to Microsoft-centric shops, but attracting customers invested with other cloud providers may be a more difficult feat, solidifying Snowflake’s and Databricks’ unique value propositions.

Data Lakes and GenAI Go Hand in Hand, and Microsoft Wants to Be the First Hyperscaler Strongly Associated with the Architecture

One other interesting consideration with Fabric is Microsoft’s choice of open table format. Considering its partnership with Databricks, Microsoft has opted for Delta Lake, although it plans to add external support for two other popular frameworks: Apache Iceberg and Apache Hudi.
 
In general, Delta Lake is the preferred format for customers that want to build a data lake, while Apache Iceberg is more aligned with data warehouses. Defaulting to Delta Lake reflects Microsoft’s intent to remain relevant with Databricks customers, while allowing customers to query data in object storage (Amazon S3 and eventually Google Cloud Storage) reflects Microsoft’s commitment to the data lake architecture.
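The practical appeal of an open table format is that the same files on storage stay readable by any compatible engine. Below is a minimal sketch assuming the open-source deltalake Python package (the delta-rs bindings), with a local path standing in for object storage:

```python
import pandas as pd
from deltalake import DeltaTable, write_deltalake  # pip install deltalake

# Write a small table in the Delta format: Parquet data files plus a
# _delta_log transaction log that any Delta-aware engine can read.
df = pd.DataFrame({"order_id": [1, 2, 3], "region": ["us", "eu", "us"]})
write_deltalake("/tmp/orders_delta", df)  # local stand-in for object storage

# Read it back without Spark; a Spark, Trino or Fabric engine pointed
# at the same path would see the identical table and its history.
table = DeltaTable("/tmp/orders_delta")
print(table.to_pandas())
print("table version:", table.version())
```

Because the format, not the engine, owns the data, a customer can adopt Fabric on top of existing Delta files, or walk away from it, without rewriting the underlying tables.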
 
Because data lakes can combine structured and unstructured data for prescriptive analytics use cases, they are becoming increasingly popular and, in some scenarios, offer customers a way to bypass data warehouse operations altogether. GenAI, which relies on unstructured data sources such as documents and images, will fuel customers’ desire to consolidate data warehouses into data lakes, leading us to believe that Databricks is in a strong position despite Microsoft’s Fabric announcement.
 
This is also one of the reasons why Snowflake is trying to add more features that support unstructured and semistructured data in hopes of changing its perception in the market from a data warehouse company to a data lake company.
 
The hyperscalers, however, have arguably been behind in their data lake services and messaging, and with OneLake, Microsoft wants to make sure it is the hyperscaler most strongly associated with data lakes and, by extension, GenAI.

GenAI Enablement Sits at the Heart of Microsoft’s PaaS Strategy

Considering Microsoft has arguably made the biggest splash in generative AI, the company’s latest PaaS developments come as no surprise. As TBR discussed in our 2Q23 Cloud Ecosystems Market Landscape, a large language model (LLM) is only as good as the data that goes into it, which means the ability to establish a centralized, single source of truth is very important for any enterprise pursuing a serious generative AI strategy.
 
OneLake’s ability to provide an enterprisewide repository and a no-code API to manage data will help the company address this need, and the GenAI tools embedded within Fabric will help accelerate the transition to unified data pipelines.
 
Mostly in preview today, three Copilot solutions are embedded within Fabric: Copilot for Data Science and Data Engineering, Copilot for Data Factory, and Copilot for Power BI. Broadly, the Copilot solutions in Microsoft Fabric enable code generation capable of automating routine tasks and expediting the transformation of raw data into the structured form that LLMs hunger for.
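The kind of routine transformation these Copilots generate can be pictured with a short pandas sketch that flattens raw, nested JSON events into a structured table; the event shape and field names are invented for illustration.

```python
import pandas as pd

# Raw, semistructured events as they might land in a lakehouse.
raw_events = [
    {"id": 1, "user": {"name": "Ada", "tier": "gold"},
     "actions": [{"type": "view"}, {"type": "buy"}]},
    {"id": 2, "user": {"name": "Grace", "tier": "free"},
     "actions": [{"type": "view"}]},
]

# Flatten the nested records into a tidy, analysis-ready table:
# one row per action, with user attributes promoted to columns.
structured = pd.json_normalize(
    raw_events,
    record_path="actions",
    meta=["id", ["user", "name"], ["user", "tier"]],
)
print(structured)
```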
 
The integrations built over the years between Microsoft’s platform assets and its application portfolios ensure there is plenty of raw data entering Fabric, which, as it becomes structured, presents an ideal environment for enterprises to pursue custom GenAI development. This is where the Azure OpenAI Service enters the conversation.
 
While the Copilot solutions offered by Microsoft provide quick-and-easy access to GenAI capabilities, true transformational value will be unlocked as enterprises build their own GenAI applications around their proprietary data and business processes, presenting a large opportunity for Microsoft.
 
The Azure OpenAI Service has been enabling customers to train LLMs on their proprietary data since it became generally available in January 2023, and at Ignite 2023 Microsoft took another step forward with the public preview launch of Azure AI Studio. A new addition to the Azure OpenAI Service, Azure AI Studio brings together developer tools like the Azure AI SDK with the company’s growing catalog of foundation models to enable customers to build their own copilots and other generative AI applications.
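For a sense of the developer experience, here is a minimal sketch of calling a model deployed through the Azure OpenAI Service with the official openai Python SDK. The endpoint, deployment name and API version are placeholders to replace with your own resource’s values; grounding the call in proprietary data (e.g., via Azure AI Search) layers on top of this basic pattern.

```python
import os
from openai import AzureOpenAI  # pip install openai

# Placeholder resource details; substitute your own deployment's values.
client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # your deployment name, not the model family
    messages=[
        {"role": "system", "content": "Answer using only company data."},
        {"role": "user", "content": "Summarize last quarter's top support issues."},
    ],
)
print(response.choices[0].message.content)
```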
 
As more enterprises pursue custom GenAI development, the unified approach to data management offered by Microsoft Fabric and OneLake will become more valuable, drawing interest from enterprises with large Microsoft footprints, yet coopetition at the data layer will remain the standard.
 
Ultimately, Microsoft’s priority is ensuring all data can be easily fed into its foundation model service, so integrations that connect the Azure OpenAI Service with third-party data leaders like Snowflake and Databricks will prove to be popular alternatives to Microsoft’s end-to-end approach.

Microsoft Is Not Just after the Data Layer: The Race for Hybrid Cloud Control Plane Continues as Azure Arc Reaches 21,000 Customers

Throughout this report, we have touched on Microsoft’s pursuit of the data layer, but it is important to note that Microsoft’s PaaS capabilities are much broader and extend closer to the box. Owing to Windows Server, Microsoft has captured a significant portion of the enterprise OS layer, allowing the company to effectively move into the multicloud control plane, which Microsoft calls Azure Arc.
 
Best thought of as an abstraction layer that stitches together infrastructure assets for capabilities like monitoring, provisioning and observability, all while securing the OS instance, Azure Arc has amassed 21,000 customers in the span of four years.
 
In recent quarters we have seen Microsoft become increasingly transparent in its customer reporting. For instance, Azure Arc’s customer count grew 150% year-to-year in 2Q23 and 140% year-to-year in 3Q23. Working backward from those growth rates puts the customer count at just 7,200 in 2Q22, far below the 21,000 customers announced in 3Q23, and that rapid climb indicates vast interest from Microsoft’s install base of customers trying to bridge the gap between the cloud and the legacy data center.
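A quick back-of-envelope check of those figures, using only the counts and growth rates cited above:

```python
# Back-of-envelope check on the Azure Arc customer counts cited above.
arc_3q23 = 21_000      # announced in 3Q23
growth_3q23 = 1.40     # 140% year-to-year growth in 3Q23
growth_2q23 = 1.50     # 150% year-to-year growth in 2Q23
arc_2q22 = 7_200       # implied base in 2Q22

implied_3q22 = arc_3q23 / (1 + growth_3q23)  # ~8,750 customers
implied_2q23 = arc_2q22 * (1 + growth_2q23)  # 18,000 customers

print(f"Implied 3Q22 base:  {implied_3q22:,.0f}")
print(f"Implied 2Q23 count: {implied_2q23:,.0f}")
```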
 
Another factor driving the platform’s success is Microsoft’s early support for both virtual machines (VMs) and Kubernetes. This approach contrasts with that of Google Cloud, whose primary goal is to move customers away from VMs and toward containers. In other words, Google Cloud wants customers to use GKE (Google Kubernetes Engine) on premises to containerize a VM and keep it there, but it also wants customers to build net-new, cloud-native apps in containers.
 
Google Cloud did launch Anthos for VMs in 2021, which we viewed as a direct counterattack to Azure Arc, albeit not a very effective one: Anthos’ customer count is comparatively low, which could suggest the company has not been as adept at tapping into the VMware customer base and attracting enterprises that are not ready to migrate VMs.
 
We will continue to monitor Azure Arc’s growing customer count in the coming quarters, and it will be interesting to see whether Microsoft begins to leverage Fabric to support other managed data services beyond Azure SQL via Arc, turning the hybrid platform into a more complete, centralized management layer.

IT Ecosystem Trust Paves the Way for GenAI-enabled Growth in 2024

New: Explore the latest digital transformation trends and predictions for the year in TBR’s 2025 Digital Transformation Predictions special report. Uncover how AI, ecosystems and GenAI will change the IT services landscape. Download your free copy today!

Top 3 Predictions for Digital Transformation in 2024

  1. GenAI hype meets reality
  2. Ecosystems fuel disruption and lead to the rise of the superpowers
  3. Cyber, data and regulations — the three-legged stool enabling new digital transformation growth

 

Request Your Free Copy of 2024 Digital Transformation Predictions

 

Challenges and Opportunities in the Era of GenAI and Enterprise Digital Transformation

While cloud remains the backbone of buyers’ digital transformation (DT) programs, generative AI (GenAI) has thrown vendors and their technology partners into a frenzy, especially as enterprise buyers have started paying closer attention to their IT spend in response to macroeconomic headwinds.
This new dynamic creates a plethora of challenges and opportunities for technology and services vendors that guide and manage enterprise DT programs. From vendor consolidation to technology stack simplification, buyers continue to look for ways to optimize their digital assets, making it hard for vendors to introduce new technology without the appropriate use cases. Delivering value in a challenging market requires vendors to act more as strategic partners and collaborate rather than simply transact with enterprises.
 
GenAI is here to stay. There are certainly more unknowns than knowns today, even as everyone across the ecosystem tries to convince others they have found the silver bullet that will enable the creation of the next-gen enterprise business model. As with most new technologies, establishing the right frameworks as well as commercial and pricing models is a necessary first step before adoption can scale. Developing and deploying pricing mechanisms that incorporate pro bono and/or risk-sharing services and using templated offerings to standardize delivery can help vendors maintain their incumbent positions, especially as GenAI will level the skills playing field.
 
Expectations around differentiation are also changing, increasing the need for vendors to add specialization and often spurring them to expand their partner ecosystem. The advent of a new technology stack (e.g., next-gen GPU-run data centers that enable GenAI to reveal its full potential) will compel vendors to re-evaluate and expand their relationships with chip manufacturers — something many software and services vendors have not done for a while.
 
Additionally, the implications for cyber, data, regulations, ethics, and model governance will continue to dominate headlines and vendor-buyer conversations. And while vendors are in the business of making money, we believe the winning formula is to strike the right balance between constantly selling and consistently developing relationships with buyers and partners.
 
To read the entire 2024 Digital Transformation Predictions special report, request your free copy today!

GenAI: A Growth Catalyst for Cloud Evolution in 2024 and Beyond

New: Explore the future of the cloud market in 2025 and beyond in TBR’s 2025 Cloud Market Predictions special report. Learn how AI, GenAI and alliances will shape the industry and affect market share. Download your free copy today!

Top 3 Predictions for Cloud in 2024

  1. Simply providing cloud services at scale is no longer enough for vendors to gain cloud market share
  2. IaaS will become more tailored to workload and regulation
  3. SaaS vendors promote multiproduct sales with generative AI

 

Request Your Free Copy of 2024 Cloud Predictions

GenAI’s Rise Amid Cloud Challenges: Navigating 2024’s Landscape and Shaping the Future

For all the challenges that cloud vendors faced in 2023, there was a promising sprout of opportunity that developed quite rapidly with generative AI (GenAI) technologies. The pace with which GenAI gained not only awareness but also real investment and usage in the market was notable, and we expect end customers’ real investments in the solutions to continue to grow and develop in 2024.
 
However, GenAI solutions on their own will not overcome the headwinds that worked against the market throughout 2023. Many of the forces that caused revenue growth rates to slow precipitously for nearly every major cloud vendor remain in place heading into 2024.
 
 
The general macroeconomic conditions remain uncertain, wars continue to threaten global stability, IT buyers remain cautious about spending, and cloud has reached a saturation point in many IT organizations. So, while we do not expect GenAI technology to return the market and leading vendors to their pre-2023 pace of revenue expansion, it will serve as a small yet rapidly growing segment in 2024 and should become a significant market in 2025 and beyond.
 
We also expect the intensity of AI-focused strategies during 2024 to reflect the importance of the technology to long-term growth. AI could reset the cloud leaderboard for the next decade, so incumbents like Amazon Web Services (AWS) and Salesforce will be keen to protect their large customer bases against mounting AI competition from the likes of Google, Microsoft and SAP.
 
To read the entire 2024 Cloud Predictions special report, request your free copy today!

IT Services and Consulting in 2024: Traversing GenAI Pressures, Talent Challenges, and Regulatory Waves

New: Discover insights into GenAI for 2025 across IT services, cloud, IT infrastructure, federal IT services and more. Download your free copy of TBR’s 2025 GenAI Predictions special report today!

Top 3 Predictions for Professional Services in 2024

  1. The 2023 focus on reskilling and training will pay off in accelerated revenues in 2024
  2. Generative AI will create a pivot to outcomes-based pricing
  3. Regulations will become a major pain point for all

 

Request Your Free Copy of 2024 Professional Services Predictions

Embracing Change, GenAI Hype and the Imperative of Outcome-Based Strategies for IT Services and Consultancies

As they say, nothing in life is certain except for death and taxes. And change. And data overload. And hype about technology and disruption. Predictions provide a perfect platform for big leaps and wild guesses, but at TBR, we are seeing more of the same for 2024, including taxes, data overload, and technology (read: generative AI [GenAI]) hype.
 
IT services and consulting stubbornly remain a people-centric business, despite advances in automation, analytics and AI, and the vendors most adept at attracting and retaining good people continue to outperform peers. Keeping good people when the hype around GenAI suggests that many task-oriented jobs will disappear requires that vendors offer training in new skills and develop new career paths.
 
Concurrent with these pressures on talent, GenAI will pressure contracts: with greater transparency comes greater opportunity to pay for exactly what you got. IT services vendors and consultancies that embrace outcome-based pricing models will increasingly find their clients, particularly those enamored with GenAI (although, who isn’t?), open to creative pricing and reluctant to continue business as usual once GenAI has pushed the client’s procurement office out the door.
 
 
Additionally, governments continue to lean into regulation to mitigate societal risks and to tame or unleash (depending on your political views) commercial activities. After the last three years of dealing with the pandemic, war, and the emergence of robot overlords (read: again, GenAI), we can reasonably expect governments will increasingly seek the security blanket of tighter regulations.
 
Add a splintering of global approaches to trade, finance and geopolitics, and companies face not just more regulations but also overlapping and potentially conflicting compliance obligations, varying wildly by jurisdiction. Death and taxes, indeed.
 
For IT services vendors and consultancies, 2024 looks a little boring. Reskill and train your people so you’ve got the right folks ready to deploy at scale against your clients’ toughest problems. Let someone else handle the easy problems until they get replaced by GenAI. Start baking outcomes-based pricing into every engagement, underpinned by AI and analytics that demonstrate unquestionably what value you are bringing your clients. And lean hard into governance, risk and compliance (GRC); if you do not have those skills already, find a partner.
 
To read the entire 2024 Professional Services Predictions special report, request your free copy today!