The Telecom Industry Will be Calculated in Its Progression to 6G to Ensure Meaningful ROI

Approximately 250 attendees representing entities including telecom network vendors, communication service providers (CSPs), technology companies, regulators, academia, and experts from multiple nontelecom industries converged on the 2023 Brooklyn 6G Summit in late October. The event was hosted by Nokia and NYU Wireless and covered a wide range of topics relevant to 6G, including ICT industry trends, regulatory impacts, the metaverse, AI, vertical use cases, and cloud-native network infrastructure.

TBR Perspective on Telecom Industry Progression to 6G

The 2023 Brooklyn 6G Summit highlighted both the optimism and uncertainty the telecom industry is experiencing as it progresses from the 5G era, which is about halfway through its developmental cycle, to the 6G era, which is expected to commence in 2028, when the first 6G specification in 3rd Generation Partnership Project (3GPP) Release 21 is finalized.
 
Initial commercial 6G network deployments are expected by 2030. The sentiments of optimism and uncertainty around 6G were discussed throughout the event, including in a keynote from AT&T’s EVP of Technology Chris Sambar in which he expressed concerns regarding the ROI of 6G.
 
Sambar stated, “We’re getting a little bit worn out with the economics of the industry” to summarize the challenges AT&T and other operators are currently experiencing in light of high investment costs and limited monetization opportunities in the 5G era. Sambar also remarked, “To be completely honest and transparent, the industry has questions on what is 6G going to bring us, what are the use cases that customers want from 6G and frankly, what is it going to cost us.”
 
Sambar’s keynote, which was one of the initial sessions at the 6G Summit, set the tone for the rest of the event as speakers candidly assessed the current state of the 5G market while discussing the benefits and use cases that are expected to materialize during the 6G era. Though 6G technical specifications and expected use cases are still in the developmental stages, TBR believes operators will be more calculated and tactical in investing in 6G compared to 5G, with a deeper emphasis on ensuring a clear line of sight to ROI before significant spending occurs.
 

Download your free copy of TBR's 2024 Telecom Predictions special report

Impact and Opportunities

Lessons Learned From 5G Era Provide Blueprint to Optimize 6G Deployments

Speakers discussed missteps during the 5G era and the importance of not repeating those mistakes in deploying 6G. A key theme was that the launch of multiple variants of 5G in the U.S. — described as “50 shades of 5G” by an event participant — was ultimately a misstep compounded by premature marketing. This trend was exemplified by the initial launch of 5G services in the U.S. over low-band spectrum, which provided only marginal performance benefits compared to LTE and in turn created a generally tepid initial impression of 5G among consumers.
 
Another notable example was the launch of 5G non-standalone (5G NSA) prior to the deployment of 5G standalone (5G SA). Though 5G NSA enabled operators to launch commercial 5G services faster, 5G NSA lacks key benefits enabled by 5G SA, including faster data speeds, enhanced security, and the ability to support network slicing and lower latency use cases. The separate launches of 5G NSA and 5G SA in turn created complexities and misunderstandings for consumers and enterprises.
 
Event participants noted these challenges experienced within the 5G era will help guide the industry as it creates more cohesive 6G strategies that will enable operators to optimize network spending, provide more tangible initial benefits to customers, and minimize premature marketing of services. Key focus areas for the industry in 6G development include optimizing spectrum allocations for 6G as well as establishing unified global technology standards for 6G to minimize fragmentation in the market. For instance, participants at the event noted it would be beneficial for the industry to determine during the earlier stages of standards development whether 6G will be deployed on its own separate network core or on existing 5G cores, and for operators to adhere to one deployment model to avoid the complexities created by 5G NSA and 5G SA.

The Clearance of 6G Spectrum Will be Vital in Supporting Continued Growth in Data Traffic

Despite the early stages of 6G use cases and the uncertainties around monetization opportunities, operators will need to invest in 6G to remain competitive with each other and support escalating data traffic long-term as 6G is projected to support a 10x increase in usage on networks. The clearance of additional spectrum in the U.S. will be essential to support 6G and for the country to remain at the forefront of the global wireless market, as Sambar cited that the U.S. currently ranks No. 10 worldwide in licensed midband spectrum allocation. Key spectrum ranges Nokia expects 6G to be deployed on include the 7GHz-20GHz frequencies to support outdoor cell sites in urban markets, low-band spectrum in the 470MHz-694MHz range to maximize coverage, and sub-terahertz spectrum to provide peak data speeds in localized areas.
 
The National Spectrum Strategy, which was released by the Biden administration in November 2023, will help in advancing spectrum development in the U.S. The strategy identifies 2,786MHz of airwaves to study in the near term for new uses, including 5G and 6G, across five spectrum bands: 3.1GHz-3.45GHz, 5.030GHz-5.091GHz, 7.125GHz-8.4GHz, 18.1GHz-18.6GHz, and 37GHz-37.6GHz.

More Efficient Network Technologies Will be a Primary Use Case for 6G

Various potential 6G use cases were discussed at the summit, though the time frame for commercial readiness and the willingness of customers to pay for these solutions remain unknown. Many of the use cases discussed involved extended reality (XR) technologies such as AR and VR and included the metaverse and real-world simulations to provide training for users including military personnel and first responders. Use cases around autonomous vehicles, advanced robotics, drones and 8K video were also discussed.
 
TBR expects the most beneficial use cases for 6G will involve the provisioning of advanced technologies that will enable operators to more cost-efficiently support rising traffic on their networks. For instance, deeper implementation of artificial intelligence and machine learning technologies will enable operators to enhance self-optimizing network (SON) capabilities to realize cost efficiencies. 6G is also expected to result in deeper implementation of digital twins, which will help operators better anticipate potential outcomes on their networks and optimize their operations in areas including site management and field operations. Additionally, 6G is expected to be significantly more energy efficient compared to 5G, which will enable operators to improve cost efficiencies while helping to support corporate sustainability goals.
 

Conclusion

The 2023 Brooklyn 6G Summit provided an optimistic yet realistic outlook on the potential of 6G. The telecom industry is particularly concerned regarding the revenue opportunity provided by 6G given the current state of the 5G market. Despite the uncertainty of revenue-generating 6G customer use cases, investments in 6G will likely benefit operators in the long term due to the technology’s ability to support escalating traffic more cost-efficiently on their networks.

AWS Aims to Reinvent GenAI Through Infrastructure Layer, Platform Tools and Applications

Amazon Web Services’ (AWS) 12th annual re:Invent conference was, unsurprisingly, all about generative AI (GenAI). The five-day event showcased all the ways AWS enables this budding technology — which Amazon CEO Andy Jassy claims will add tens of billions of dollars to AWS’ top line — not just through the infrastructure layer AWS is known for, but also through the company’s platform tools and applications. 

AWS Set Out to re:Invent Infrastructure Over a Decade Ago and Is Prepared to Do the Same with GenAI

Dating back to the dot-com bubble and the early days of amazon.com, Amazon gained an understanding of what it takes to provision infrastructure designed to scale at massive volumes. After Amazon spent years trying to overcome scale challenges associated with bringing third-party merchants to its e-commerce engine, AWS was born.
 
Despite all the competition it has welcomed over the past 10 years, AWS is still largely credited with not only pioneering cloud infrastructure but also making it accessible to anyone. As articulated by AWS CEO Adam Selipsky, this could range from a college student using a laptop in their dorm room to some of the most sophisticated enterprises in the world. But largely owing to the pandemic, we have seen the cloud market shift from a data center outsourcing strategy to a tangible business driver, which means AWS has had to adapt alongside its clients with not just traditional hosting services but also full-stack solutions tied to a specific use case.
 
One of the most compelling customer examples highlighting this approach is Pfizer. At the height of the COVID-19 pandemic in 2021, Pfizer pledged to expand its cloud footprint from 10% to 80%. In practice, this meant migrating 12,000 applications and 8,000 servers in 42 weeks, which resulted in $47 million in annual savings and the closure of three data centers. This seemingly successful, large-scale transformation has Pfizer now exploring AWS’ GenAI technologies, including Bedrock, to automate manual processes and realize a projected $750 million to $1 billion in annual cost savings.
 
This customer example speaks to the powerful influence AWS’ infrastructure has with clients such as Pfizer — which needed to submit data to the Food and Drug Administration in a matter of days during COVID-19 — that prioritize speed, scale and agility. Holding a significant portion of the cloud infrastructure layer, AWS is looking up the stack to tackle cloud’s next big reinvention: GenAI.
 

Download your free copy of TBR's 2024 Cloud Predictions special report

A closer look at AWS’ GenAI stack

Selipsky’s overview of the AWS GenAI stack was consistent with the commentary Jassy has provided on Amazon earnings calls over the past couple of quarters. Here is a quick look at AWS’ GenAI capabilities and some of the new innovations:

  • Infrastructure: While infrastructure is a great talking point for AWS, we cannot argue with the fact that scalable compute serves as the foundation for all things GenAI. For AWS, this includes both custom chips and NVIDIA (Nasdaq: NVDA) GPUs. AWS used re:Invent to launch innovations in both areas, including Amazon Trainium 2 instances (Trn2), which promise a fourfold performance increase over Trn1 for machine learning training workloads, and NVIDIA DGX Cloud on AWS. The latter is particularly interesting and comes as all other cloud providers have already signed on as hosting partners for NVIDIA’s DGX AI software. As the first company to put GPUs in the cloud, AWS has a unique relationship with NVIDIA, but one that may be growing more contentious as sales teams push AWS’ own chips as part of a cost optimization play designed to maximize customers’ lifetime value. Even so, NVIDIA’s supplier power is significant, and thus the company has a lot of bargaining power with the hyperscalers, which need NVIDIA to supply GPUs to their data centers, and in return, can host DGX and support NVIDIA’s push into the software space.
  • Platform tools and “as a Service” models: The middle layer of AWS’ GenAI stack is largely synonymous with Amazon Bedrock, a managed service used by 10,000-plus customers to access and customize foundation models for their GenAI apps. Making sure customers are not beholden to one model provider and can access an array of options through the same API interface is key to AWS’ strategy (see the minimal sketch following this list). It also contrasts with Microsoft’s (Nasdaq: MSFT) approach and helps AWS position itself as an open and flexible alternative. New models supported via Bedrock include Anthropic’s Claude 2.1, which has a context window of roughly 150,000 words — making it well suited for legal and finance use cases — in addition to internal models, like Amazon Titan Multimodal Embeddings. Breadth of models is key, but improving the native functionality within Bedrock garners the majority of investment from AWS at this layer. This largely includes features that get customers beyond out-of-the-box models to those that can be customized, fine-tuned and applied to business use cases. One example is Knowledge Bases for Amazon Bedrock, a Retrieval Augmented Generation (RAG) service that pulls data from multiple sources (e.g., databases, APIs) to help customers bring data to their models and customize them.
  • GenAI applications: At the top of the stack are the actual GenAI applications built on foundation models. AWS may have a weaker association here, but this layer is important to rounding out the entire stack and keeping customers invested in AWS. This layer largely comprises CodeWhisperer, the free-to-use coding companion that also offers customization capability, which means the application learns from internal code to provide personalized recommendations.
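
To make the Bedrock point above more concrete, here is a minimal, hypothetical sketch of invoking a hosted model through Bedrock's unified runtime API with boto3. The model identifier, request schema and parameter values are illustrative assumptions rather than a definitive implementation.

```python
# Hypothetical sketch: calling a foundation model through Amazon Bedrock's unified
# runtime API with boto3. Model IDs, request schemas and regional availability vary
# by provider and version; treat the values below as illustrative assumptions.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude-style request body; other providers (e.g., Amazon Titan) expect different schemas.
body = json.dumps({
    "prompt": "\n\nHuman: Summarize the key risks in this supplier contract.\n\nAssistant:",
    "max_tokens_to_sample": 300,
    "temperature": 0.2,
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-v2:1",  # assumed identifier for Claude 2.1 on Bedrock
    body=body,
)

print(json.loads(response["body"].read())["completion"])
```

Swapping providers is largely a matter of changing the model identifier and request body, which is the flexibility AWS positions against single-provider approaches.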

It is all about breadth

With over 220 native services and 600 compute instance types, portfolio breadth has always been a hallmark attribute of AWS. For context, AWS launched 3,300 new features and services in 2022. In his opening keynote, Selipsky went so far as to say that AWS offers 60% more services and 40% more features than its nearest competitor. The approach to GenAI will be no different, as AWS strives to offer the broadest set of capabilities for customers to run, build and deploy GenAI technology.
 
Even in areas where AWS lacks depth or specificity, ISV solutions prove instrumental in filling gaps and, in many cases, do more to drive up a customer’s underlying IaaS resources than AWS’ out-of-the-box services. We also know AWS has a rich history of delivering very basic services to market and quickly building them up into competitive products over time. Perhaps the best example is Amazon SageMaker, which accumulated over 250 features and tens of thousands of customers in the span of six years.

 

GenAI applications: How AWS is entering the copilot race with Amazon Q

At re:Invent, AWS took a bigger leap into the GenAI applications space with the launch of Amazon Q. While incorporating natural language processing (NLP) into various services is not necessarily new to AWS, Q is a GenAI-powered assistant based on 17 years of AWS knowledge designed to bridge the gap between the technical and business-led functions in the enterprise. For example, Q will be integrated with the CodeWhisperer environment so developers can ask questions like, “How do I create code for this function?” while admins can use Q in pretty much any environment (e.g., AWS Management Console, Slack, documentation) to ask questions as generic as, “How do I build a web app on AWS?”

 

But Q also connects to 40 external data sources for business-related tasks, such as data visualization and document summarization, while the assistant integrates with Amazon Connect for contact center optimization, and will soon work with AWS’ Supply Chain application launched at last year’s re:Invent. Integrating Q across functions like supply chain and customer service, in addition to the analytics stack with QuickSight, suggests AWS wants Q to be the expert assistant not just for building on AWS but also for business.

 

This approach is largely consistent with what we are seeing from competitors integrating copilots and assistants into their SaaS offerings; however, there are a couple of big contrasts between Q and Microsoft’s Copilot and Google Cloud’s (Nasdaq: GOOGL) Duet AI. The first is pricing: Both Copilot and Duet AI are priced at $30 per month per user, while Amazon Q, though still in preview, will come in at $20 and $25 per month per user for the Q Business and Q Builder editions, respectively.
 
AWS may be undercutting its competitors on price, but Microsoft’s and Google Cloud’s recognition and reach in the productivity space may prove challenging, at least in the context of the Q Business edition. Q Builder, however, may be another story. While including all the capabilities of Q Business, Q Builder is designed for AWS-specific use cases, and in general, anything AWS can do to make developers successful is going to be well received by the customer base. This could include tasks like troubleshooting applications, writing SQL queries or even migrating code. A small pool of Amazon developers tested this last capability internally to upgrade 1,000 applications from Java 8 to Java 17 in two days.

 

The other big difference is that Amazon Q leverages Bedrock, which means the GenAI assistant is pulling from multiple third-party models and assigning them to the right tasks. Peers have taken a different approach, as their assistants are based on a single provider’s models; for Google Cloud, these are internal models like Codey, and for Microsoft, OpenAI’s GPT models. While we cannot say for certain how customers will view these approaches, for AWS, having Q based on Bedrock speaks to the company’s goal of offering a broad array of models in hopes of challenging Microsoft.

The zero-ETL integrations keep coming

Building on last year’s commitment to a zero-ETL (Extract, Transform, Load) future and the resulting integration between Redshift and Aurora, AWS launched three new zero-ETL relational and nonrelational database integrations with Redshift: Aurora PostgreSQL, RDS (Relational Database Service) and DynamoDB. Just like it wants to offer the broadest set of infrastructure options, AWS wants to ensure it has the breadth of cloud data services customers need so they do not have to compromise on the right tool for the right data task. But even if customers have an array of tools accessible to them, they still need a way to break down data silos, which requires integration.

 

To automatically connect data from source to destination and ease manual ETL processes, AWS is offering more integrations between its database and data warehouse services. We do suspect “zero-ETL” has become more of a marketing term and is essentially glorified data sharing, but there is undoubtedly value in simplifying how businesses connect and analyze data. Even before GenAI made headlines, businesses were realizing the benefits of breaking down data silos and adopting an integrated data posture, but GenAI should only fast-track data strategies throughout the enterprise.
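
For illustration, the sketch below shows what the zero-ETL model looks like operationally under our assumptions: a managed integration is declared once between a source database and Redshift, and AWS handles replication thereafter. The ARNs, names and exact parameter spellings are assumptions based on the RDS CreateIntegration operation and may differ in practice.

```python
# Hedged sketch (not AWS documentation) of declaring a zero-ETL integration between an
# Aurora cluster and an Amazon Redshift target with boto3. Parameter names, ARN formats
# and the response shape are assumptions for illustration.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

response = rds.create_integration(
    IntegrationName="orders-to-analytics",  # illustrative name
    SourceArn="arn:aws:rds:us-east-1:123456789012:cluster:aurora-orders",  # assumed source cluster
    TargetArn="arn:aws:redshift-serverless:us-east-1:123456789012:namespace/analytics-ns",  # assumed target
)

# Once the integration is active, AWS replicates changes from the source into Redshift,
# where the data can be queried without a customer-managed ETL pipeline.
print(response.get("Status"))
```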

 

Microsoft recognized this trend years ago and recently launched Fabric, a platform that integrates multiple data services, including Synapse, which is akin to Amazon Redshift, into a single offering. Fabric is a single-source-of-truth platform that addresses the entire data cycle and charges customers based on total IaaS resources consumed, versus the compute and storage for each individual data service. AWS’ approach is different, and while customers have a suite of different data services available to them, it could take more effort for customers to stitch these services together and create a unified environment. The new zero-ETL integrations may help rectify this, but Microsoft’s single platform approach and simplified pricing model, all integrated with Copilot, will be competitive.

At AWS, “partners are the catalysts”

In the second-to-last keynote, VP Worldwide Channels and Alliances Ruba Borno discussed the critical role of partners acting as catalysts to GenAI adoption. This includes both ISVs and global systems integrators, and AWS wants to work with both parties collectively to meet a customer where they are in a journey and work backward from their needs. Delivering solutions as part of an ecosystem was a big focus of the revamped Partner Paths model two years ago, and now AWS is tasked with scaling this model to deliver the GenAI stack to customers.
 
When asked by Borno what partners can do to drive more business with AWS, Selipsky quickly called out proficiency and making sure the skills are in place to build trust with joint customers. Specializations and competencies are a big piece of proficiency and are skills customers appear to be asking for. At the event, AWS announced the general availability of specializations in resilience and cyber insurance and is also revamping its Competency, Service Delivery and Service Ready designations into one program. Another piece of advice for partners was to focus on putting the necessary resources in place to go to market with AWS, which could be anything from established business units to codeveloped centers of excellence.
 

It is always a balancing act between the vendor and partner as to who should invest what in terms of go-to-market resources to achieve collective goals. But the message during the talk between Selipsky and Borno seemed to be that AWS has all the funding, tooling and programs available to partners that can make for a successful go-to-market strategy, but the partner has to be willing to engage. Put another way, it may be difficult for some Tier 2 partners to grow their AWS business and get in on the GenAI opportunity given the massive resource scale of some of the Tier 1 competitors.
 
As an example, Accenture pledged to train 50,000 developers and technical specialists on Amazon Q and CodeWhisperer over the next two years. Despite GenAI’s potential to automate labor, the technology will only broaden the vast IT skills gap, so vendors that can acquire and train the right talent will continue to outperform when it comes to doing business with AWS.
 
Lastly, Selipsky reiterated the important role partners will play in the data ecosystem. Considering it is not the actual foundation models that will differentiate the customer, but rather their data, there is an opportunity for partners here, and anything they can do to help customers establish a data layer that will pave the way for AWS’ GenAI stack will be well received.

Conclusion

While later to the GenAI movement, AWS, with its early establishment in cloud infrastructure, has actually been involved with AI for quite some time. In many ways, the company used re:Invent to raise its voice over the din of AI chatter and showcase the long-standing innovations that it aims to use to build new capabilities and play catch-up with competitors, namely Microsoft. The best example is Amazon Q, a business-focused assistant that is somewhat comparable to Microsoft Copilot, while more Redshift integrations underscore AWS’ goal of better connecting customers to other AWS services, an approach Microsoft is similarly taking with Fabric. Meanwhile, custom compute offerings will continue to serve as a landing spot for net-new workloads, and in some cases, they could be providing cost and performance benefits that help AWS become viewed as not just a hosting provider but also a long-term digital transformation partner.

 

At the end of the day, customers’ considerations of these GenAI offerings will heavily depend on their existing infrastructure footprint, level of integration required and business use case. Working with partners to land new business and maintain its IaaS leadership lays a foundation for AWS to build the broadest set of integrations, features and services. In doing so, AWS ensures it can meet clients anywhere in their journey regardless of technical requirement or business need. If properly executed, this approach will help AWS further grow off an $88 billion run rate and maintain its lead over its very fast-following peers in IaaS and PaaS.

With Broadcom at the Helm, Profitability Will be at the Center of VMware’s Next Chapter

On Nov. 22, 2023, Broadcom officially closed its acquisition of VMware, concluding an 18-month saga that called on the company to navigate several regulatory roadblocks. While these hurdles may have delayed the deal’s closing, TBR suspects most industry watchers have anticipated this outcome for quite some time.

VMware Acquisition Approved by Global Regulators

The early concerns of global regulators about anti-competitiveness did not take into account the strategic importance to Broadcom of keeping VMware’s platforms accessible across all hardware options, thus eliminating the likelihood of Broadcom limiting these platforms to its own hardware.
 
Chinese regulators were certainly a tail risk given recent geopolitically motivated actions against other U.S.-oriented M&A, yet they ultimately approved the deal, too, perhaps due to Broadcom’s historical ties to the country and the software-centric focus of the acquisition.
 
Now, with the deal done, VMware’s next chapter has begun. It has been a long road for the company, yet many things have remained the same. Although VMware is pushing into new cloud-native platforms, the company’s virtualization platform is still its bread and butter, and much of VMware’s total revenue is tied to this business. This proportion is likely magnified considering the breakdown of operating profit. As Broadcom takes the reins, VMware’s strategy will revolve around maximizing the value of these profit centers, likely to the detriment of emerging businesses.
 

Broadcom Is in Charge and Will be Guided by Profitability

Broadcom has stated profitability through cost cutting is the top priority, communicating to investors the goal to achieve an adjusted EBITDA of $8.5 billion over the next three years compared to $3.2 billion of GAAP EBITDA for the last 12 months ended CY2Q23. While far from a perfect comparison, the targeted uplift is clearly sizable and will rely heavily on reducing costs.
 
TBR expects general & administrative costs to see the greatest relative decline as Broadcom executes its synergy plan, which will involve slashing redundant headcount in administrative roles. TBR expects Broadcom to be particularly successful in this area, as leadership has extensive experience folding acquired businesses into existing functions in departments like legal, finance and human resources. This skill will be put to work quickly, likely resulting in multiple rounds of layoffs across these departments.
 
Sales & marketing teams are expected to see impacts as well, as Broadcom makes use of its existing sales teams and channel distribution partners to sell into existing strategic accounts.
 
Headcount reductions have already begun, just days after the deal closed. The total impact of layoffs so far is unclear, yet there are reports that reductions have affected software development and cloud engineering roles as well as administrative roles. While VMware’s R&D budget will undoubtedly shrink, it is unknown by how much. The fact that R&D-related headcount is being cut early does not paint a favorable picture for Broadcom’s commitment to innovation, yet TBR’s estimates indicate that drastic cuts may not be necessary. This aligns with commentary from Broadcom management, which has promised to maintain VMware’s previous development strategy. Still, TBR remains skeptical about future R&D efforts.

Profitability Goals May Negatively Impact License Products and Emerging Solutions Over the Long Term

Along with many industry watchers, TBR has been concerned about Broadcom’s intention to invest in innovation since the initial announcement of the VMware acquisition, given Broadcom’s history with CA Technologies and Symantec. In both instances, the company slashed funding for support and R&D after the acquisition, opting to extract free cash flow from their sticky install bases instead of pursuing organic growth. VMware offers a similar opportunity.
 
Cost concerns are prompting many enterprise customers to preserve past investments, including their virtualization platforms. Moreover, since VMware has built highly integrated solutions with all the Tier 1 hyperscalers, enterprises are better equipped to migrate their virtualization platforms to the cloud, where they are able to set up broader cloud migrations without fully committing to the transition to cloud native.
 
This means VMware commands a large, sticky install base, which would be ideal for Broadcom’s previous strategy. Recognizing this, many partners and customers are rightfully worried about the outcome of this deal, expecting higher licensing prices and diminishing support.

Profit Centers Will See Little Impact from Broadcom Ownership

In addition to promoting margin expansion, raising license prices will encourage more customers to transition to subscription offerings, which highlights an important consideration within this business transformation. While Broadcom will deprioritize certain segments, large portions of VMware will be deemed strategic by Broadcom and will continue to see the same level of investment.
 
For instance, many customers and partners collaborating around cloud-based virtualization platforms like VMware Cloud will see minimal differences because of the change in ownership. For the last 12 months ended CY2Q23, over 34% of VMware’s revenue was generated in the Subscription & SaaS segment, and TBR suspects Broadcom will prioritize many of the offerings within this segment.
 
In May, Broadcom CEO Hock Tan pledged to invest an incremental $2 billion per year, with half slated for R&D to support the Cross-Cloud portfolio. Considering that an incremental $1 billion investment would increase R&D spend by around 30% over CY2022 levels, Broadcom’s ownership may actually benefit large swaths of VMware’s Cross-Cloud portfolio by adding resources and accelerating development timelines.

Long-term, Profitability Will be King

TBR is skeptical about how far into the future Broadcom’s commitment will go, and it is not clear how Broadcom’s investment will be spread across VMware’s different offerings. Many solutions within the Cross-Cloud portfolio are still underdeveloped and represent opportunities for VMware to achieve sustainable long-term top-line performance.
 
Tanzu is a prime example. The container management platform sits at the heart of the company’s multicloud strategy, which VMware has pushed heavily over the past 18 months, yet TBR suspects Tanzu contributes only a small percentage of total revenue and certainly cannot be considered a profit center.
 
If Broadcom is to achieve its stated profitability goals, VMware will need to scale this offering rapidly. If it does not, TBR expects there will be a limit to Broadcom’s patience and a spinoff may be in the cards over the long term. To TBR, the $2 billion commitment indicates a willingness to only support these emerging businesses over the short term.

Conclusion

Regardless of how much Broadcom messages around maintaining VMware’s current investment strategies, it is very difficult to reconcile this marketing approach with the company’s stated profitability goals. Thus, TBR suspects large changes have begun to arrive for the virtualization leader.
 
The most immediate impacts will be the significant layoffs that have reportedly removed redundant administrative headcount, along with likely price increases on license products. While there is good reason to expect that many of VMware’s emerging products will be supported over the next couple of years, the long-term view is much more opaque.
 
TBR will be watching for signs of traction and strong execution around many of the emerging solutions included in the Cross-Cloud portfolio, but if they fail to materialize, TBR expects Broadcom’s management to make decisions that benefit profitability.

 

Microsoft Expands PaaS Portfolio on Path to AI Incumbency

A platform company at its core, Microsoft is less concerned with migrating monolithic applications and instead is focused on building a complete data integration and management layer to capture value-add workloads that tie into said applications, all while maximizing clients’ underlying Azure infrastructure usage. To replicate this approach for the AI era, Microsoft has spent years integrating its various data services, from Synapse to Power BI, to automate customers’ entire data pipelines and prepare them for AI adoption. The result is Microsoft Fabric, a new end-to-end SaaS-like data platform that could help Microsoft reach new audiences and spur Azure growth in the continued race for cloud and AI dominance.  

Microsoft Is Investing in Data Cloud to Support its GenAI Strategy

What Is Microsoft Fabric?

Simply put, Microsoft Fabric is a unified data platform comprising seven core Azure data services: Data Factory, Synapse Data Engineering, Synapse Data Science, Synapse Data Warehouse, Synapse Real-Time Analytics, Power BI and Data Activator. While Microsoft Fabric makes it easier for customers to connect different personas within an organization, from data engineers to business analysts, the hallmark of the new service is its simplified pricing model, which charges customers based on the total amount of IaaS resources consumed, rather than the compute and storage for each individual Azure data service.
 
When we interview enterprise buyers, we continue to find that consolidating point solutions in favor of complete, integrated platforms is a common trend, and Fabric is bound to resonate with customers trying to control runaway cloud costs in a still widely uncertain economy.
 
The other key defining attribute of Microsoft Fabric is the underlying architecture it is built on, OneLake. Microsoft Fabric is based on a repository that allows customers to query data not just in SQL databases but also in object storage, as is customary in the data lake architecture.
 
With OneLake, we see Microsoft moving squarely into the data lake space. Given the symbiotic relationship between data lakes, which are designed for unstructured data, and generative AI (GenAI), OneLake is Microsoft’s under-the-hood way of ensuring that customers can easily load data from multiple sources, put it through the Fabric platform for data management and visualization, and build GenAI applications.
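
A minimal, hypothetical PySpark sketch of that lakehouse pattern follows; the paths and table name are assumptions for illustration, not a specific Fabric workspace.

```python
# Minimal, hypothetical sketch of the lakehouse pattern Fabric builds on: raw files landed
# in OneLake (or any object store) are written out as Delta tables that downstream SQL, BI
# and AI workloads can query. Paths and table names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("onelake-delta-sketch").getOrCreate()

# Read semi-structured raw data (e.g., JSON event logs) from the lake.
raw = spark.read.json("Files/raw/customer_events/")  # assumed OneLake-relative path

# Light structuring step, then persist as a Delta table for downstream consumption.
structured = raw.select("customer_id", "event_type", "event_ts")
structured.write.format("delta").mode("overwrite").saveAsTable("customer_events")
```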
 
Altogether, the unification of Microsoft OneLake and Fabric is the right step for Microsoft and exemplifies how far the company has been willing to go to execute its AI-based growth strategy.
 

Fabric Will Help Microsoft Change the PaaS Landscape but Not Without Infringing on Partners

As highlighted in TBR’s 3Q23 Cloud Data Services Market Landscape, Amazon Web Services (AWS) is the clear leader in the cloud data warehouse market, with Microsoft falling squarely in second place and not significantly ahead of Google Cloud and Snowflake. Azure Synapse has not gained the same level of interest and traction in the market as Amazon Redshift and Google Cloud’s BigQuery. As a result, Microsoft partnered with Databricks in 2017, developing and delivering the first-party Azure Databricks service.
 
Partnering with Databricks to ensure customers have an effective data analytics platform natively available on Azure rather than Synapse was a strategic move. With Fabric, however, we now see Microsoft essentially re-delivering Synapse as part of a more complete product that gets to the heart of what customers want: an end-to-end set of capabilities that automate entire data pipelines from data collection and ingestion up to analytics and visualization.
 
This approach should bring Synapse into more client conversations while helping Microsoft expand its reach outside the analytics department. This, of course, raises the question: What becomes of Microsoft’s partnership with Databricks? As part of OneLake, the architecture underpinning Fabric, Microsoft is leveraging Delta Lake — Databricks’ protocol for storing data in an open table format — and this move could persuade Databricks customers to adopt Fabric.
 
Even so, Microsoft OneLake adopts the data lakehouse architecture pioneered by Databricks, and with Fabric’s feature-rich set of upper-stack capabilities, customers may be more inclined to go all in with Microsoft Fabric and its comprehensive pricing model, which would bring a new layer of competition to the Microsoft-Databricks relationship.
 
This trend is indicative of what we are seeing across the cloud landscape. The hyperscalers, even those perceived as more partner friendly, are expanding into new areas of the cloud stack, posing potential risks to their partners, especially as customers continue to indicate their interest in consolidating point solutions.
 
That said, coopetition is nothing new in the cloud landscape, and vendors are getting more adept at navigating competitive differences to deliver outcome-specific solutions to their joint customers.
 
Perhaps the best example is the relationship between AWS and Snowflake, which are both spending millions of dollars to get legacy data warehouse customers to Snowflake’s platform on AWS. While AWS would naturally prefer customers adopt its own data warehouse service — Redshift — over Snowflake, AWS has accepted the trade-off of forfeiting some Redshift customers to Snowflake as long as those customers are running on AWS infrastructure.
 
Microsoft Fabric is much broader than the data warehouse, but if AWS and Snowflake are a barometer of a successful partnership, Microsoft and Databricks will similarly learn to overcome these obstacles.
 
With Fabric, we expect Microsoft will slowly chip away at AWS’ share and potentially Snowflake’s and Databricks’ in the coming years. However, it is important to note we do not see Fabric as any kind of direct threat to pure play data cloud platforms, particularly Snowflake, which has the established presence and reputation in the data warehouse space specifically, not to mention easy inroads into AWS’ customer base.
 
In our talks with enterprise buyers, we often find customers value Snowflake as it allows them to run separate workloads as part of a shared data layer that is not tied to any specific cloud infrastructure. Despite the multicloud capabilities in OneLake, nothing changes the fact that the core data warehousing capabilities within Synapse are still built specifically for Azure infrastructure for the seamless integration with other Azure services.
 
We have no doubt Fabric will be attractive to Microsoft-centric shops, but attracting customers invested with other cloud providers may be a more difficult feat, solidifying Snowflake’s and Databricks’ unique value propositions.

Data Lakes and GenAI Go Hand in Hand, and Microsoft Wants to be the First Hyperscaler Strongly Associated with the Architecture

One other interesting consideration with Fabric is Microsoft’s choice of open table format. Considering its partnership with Databricks, Microsoft has opted for Delta Lake, although it plans to add external support for two other popular frameworks: Apache Iceberg and Hudi.
 
In general, for customers that want to build a data lake, Delta Lake is the preferred format while Apache Iceberg is more aligned with data warehouses. Defaulting to Delta Lake reflects Microsoft’s intent to remain relevant with Databricks customers, while allowing customers to query data on object storage (Amazon S3 and eventually Google Cloud Storage) reflects Microsoft’s commitment to the data lake architecture.
 
Due to data lakes’ ability to combine both structured and unstructured data for prescriptive analytics use cases, they are becoming increasingly popular and, in some scenarios, offer customers a way to bypass data warehouse operations altogether. GenAI, which relies on unstructured data sources, such as documents or images, will fuel customers’ desire to consolidate data warehouses into data lakes, leading us to believe that Databricks is in a strong position despite Microsoft’s Fabric announcement.
 
This is also one of the reasons why Snowflake is trying to add more features that support unstructured and semistructured data in hopes of changing its perception in the market from a data warehouse company to a data lake company.
 
The hyperscalers, however, have been arguably behind in their data lake services and messaging, and with OneLake, Microsoft wants to make sure it is the hyperscaler most strongly associated with data lakes and by default, GenAI.

GenAI Enablement Sits at the Heart of Microsoft’s PaaS Strategy

Considering Microsoft has arguably made the biggest splash in generative AI, the company’s latest PaaS developments come as no surprise. As TBR discussed in our 2Q23 Cloud Ecosystems Market Landscape, a large language model (LLM) is only as good as the data that goes inside, which means the ability to establish a centralized, single source of truth is very important for an enterprise pursuing a serious generative AI strategy.
 
OneLake’s ability to provide an enterprisewide repository and a no-code API to manage data will help the company address this need, and the GenAI tools embedded within Fabric will help accelerate the transition to unified data pipelines.
 
Mostly in preview today, there are three Copilot solutions embedded within Fabric: Copilot for Data Science and Data Engineering, Copilot for Data Factory, and Copilot for Power BI. Broadly, the Copilot solutions in Microsoft Fabric enable code generation capable of automating routine tasks and expediting the transformation from raw data to structured, which is what LLMs hunger for.
 
The integrations built over the years between Microsoft’s platform assets and its application portfolios ensure there is plenty of raw data entering Fabric, which, as it becomes structured, presents an ideal environment for enterprises to pursue custom GenAI development. This is where the Azure OpenAI Service enters the conversation.
 
While the Copilot solutions offered by Microsoft provide quick-and-easy access to GenAI capabilities, true transformational value will be unlocked as enterprises build their own GenAI applications around their proprietary data and business processes, presenting a large opportunity for Microsoft.
 
The Azure OpenAI service has been enabling customers to train LLMs on their proprietary data since it became generally available in January, and, at Ignite 2023, Microsoft took another step forward with the public preview launch of Azure AI Studio. A new addition to the Azure OpenAI service, Azure AI Studio brings together developer tools like Azure AI SDK with the company’s growing catalog of foundation models to enable customers to build their own copilots and other generative AI applications.
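
For context, below is a hedged Python sketch of calling a model deployed through the Azure OpenAI Service; the endpoint, API version, deployment name and grounding snippet are assumptions, and in a production pattern the proprietary data would come from a retrieval layer rather than a hardcoded string.

```python
# Hypothetical sketch of calling an Azure OpenAI deployment with the openai Python
# package (v1.x). Endpoint, API version, deployment name and the grounding text are
# illustrative assumptions, not a definitive implementation.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",  # assumed API version
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",  # assumed deployment name, not a base model ID
    messages=[
        {"role": "system", "content": "Answer using only the supplied company data."},
        {"role": "user", "content": "Company data: <retrieved records>\n\nQuestion: Which region grew fastest last quarter?"},
    ],
)

print(response.choices[0].message.content)
```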
 
As more enterprises pursue custom GenAI development, the unified approach to data management offered by Microsoft Fabric and OneLake will become more valuable, drawing interest from enterprises with large Microsoft footprints, yet coopetition at the data layer will remain the standard.
 
Ultimately, Microsoft’s priority is ensuring all data can be easily fed into its foundation model service, so integrations that connect the Azure OpenAI Service with third-party data leaders like Snowflake and Databricks will prove to be popular alternatives to Microsoft’s end-to-end approach.

Microsoft Is Not Just after the Data Layer: The Race for Hybrid Cloud Control Plane Continues as Azure Arc Reaches 21,000 Customers

Throughout this report, we have touched on Microsoft’s pursuit of the data layer, but it is important to note that Microsoft’s PaaS capabilities are much broader and extend closer to the box. Owing to Windows Server, Microsoft has captured a significant portion of the enterprise OS layer, allowing the company to effectively move into the multicloud control plane, which Microsoft calls Azure Arc.
 
Best thought of as an abstraction layer that stitches together infrastructure assets for capabilities like monitoring, provisioning and observability, all while securing the OS instance, Azure Arc has amassed 21,000 customers in the span of four years.
 
In recent quarters we have seen Microsoft become increasingly transparent in its customer reporting. For instance, in 2Q23 and 3Q23 the Azure Arc customer count grew 150% and 140% year-to-year, respectively, which implies a customer count of just 7,200 in 2Q22. That figure is far below the 21,000 customers announced in 3Q23 and indicates vast interest from Microsoft’s install base of customers trying to bridge the gap between the cloud and the legacy data center.
 
Another factor driving the platform’s success is Microsoft’s early support for both virtual machines (VMs) and Kubernetes. This approach contrasts with Google Cloud, whose primary goal is getting customers to move away from VMs and use containers. In other words, Google Cloud wants customers to use GKE (Google Kubernetes Engine) on premises to containerize a VM and keep it there, but also wants customers to build net-new, cloud-native apps in containers.
 
Google Cloud did launch Anthos for VMs in 2021, which we viewed as a direct counterattack to Azure Arc, albeit not a very effective one, as Anthos’ customer count is comparatively low and could suggest the company has not been as adept at tapping into the VMware customer base and attracting enterprises that are not ready to migrate VMs.
 
We will continue to monitor Azure Arc’s growing customer count in the coming quarters, and it will be interesting to see if Microsoft begins to leverage Fabric to support other managed data services outside Azure SQL via Arc to turn the hybrid platform into a more complete, centralized management layer.

IT Ecosystem Trust Paves the Way for GenAI-enabled Growth in 2024

New: Explore the latest digital transformation trends and predictions for the year in TBR’s 2025 Digital Transformation Predictions special report. Uncover how AI, ecosystems and GenAI will change the IT services landscape. Download your free copy today!

Top 3 Predictions for Digital Transformation in 2024

  1. GenAI hype meets reality
  2. Ecosystems fuel disruption and lead to the rise of the superpowers
  3. Cyber, data and regulations — the three-legged stool enabling new digital transformation growth

 

Request Your Free Copy of 2024 Digital Transformation Predictions

 

Challenges and Opportunities in the Era of GenAI and Enterprise Digital Transformation

While cloud remains the backbone of buyers’ digital transformation (DT) programs, generative AI (GenAI) has thrown vendors and their technology partners into a frenzy, especially as enterprise buyers have started paying closer attention to their IT spend in response to macroeconomic headwinds.
This new dynamic creates a plethora of challenges and opportunities for technology and services vendors that guide and manage enterprise DT programs. From vendor consolidation to technology stack simplification, buyers continue to look for ways to optimize their digital assets, making it hard for vendors to introduce new technology without the appropriate use cases. Delivering value in a challenging market requires vendors to act more as strategic partners and collaborate rather than simply transact with enterprises.
 
GenAI is here to stay. There are certainly more unknowns than knowns today, despite everyone across the ecosystem convincing others they have found the silver bullet that will enable the creation of the next-gen enterprise business model. As with most new technologies, establishing the right frameworks as well as commercial and pricing models is a necessary first step before adoption can scale. Developing and deploying pricing mechanisms that incorporate pro bono and/or risk-sharing services and using templated offerings to standardize delivery can help vendors maintain their incumbent positions, especially as GenAI will level the skills playing field.
 
Expectations around differentiation are also changing, increasing the need for vendors to add specialization and often spurring them to expand their partner ecosystem. The advent of a new technology stack (e.g., next-gen GPU-run data centers that enable GenAI to reveal its full potential) will compel vendors to re-evaluate and expand their relationships with chip manufacturers — something many software and services vendors have not done for a while.
 
Additionally, the implications for cyber, data, regulations, ethics, and model governance will continue to dominate headlines and vendor-buyer conversations. And while vendors are in the business of making money, we believe the winning formula is to strike the right balance between constantly selling and consistently developing relationships with buyers and partners.
 
To read the entire 2024 Digital Transformation Predictions special report, request your free copy today!

GenAI: A Growth Catalyst for Cloud Evolution in 2024 and Beyond

New: Explore the future of the cloud market in 2025 and beyond in TBR’s 2025 Cloud Market Predictions special report. Learn how AI, GenAI and alliances will shape the industry and affect market share. Download your free copy today!

Top 3 Predictions for Cloud in 2024

  1. Simply providing cloud services at scale is no longer enough for vendors to gain cloud market share
  2. IaaS will become more tailored to workload and regulation
  3. SaaS vendors promote multiproduct sales with generative AI

 

Request Your Free Copy of 2024 Cloud Predictions

GenAI’s Rise Amid Cloud Challenges: Navigating 2024’s Landscape and Shaping the Future

For all the challenges that cloud vendors faced in 2023, there was a promising sprout of opportunity that developed quite rapidly with generative AI (GenAI) technologies. The pace with which GenAI gained not only awareness but also real investment and usage in the market was notable, and we expect end customers’ real investments in the solutions to continue to grow and develop in 2024.
 
However, GenAI solutions on their own will not overcome the headwinds that worked against the market throughout 2023. Many of the forces that caused revenue growth rates to slow precipitously for nearly every major cloud vendor remain in place heading into 2024.
 
 
The general macroeconomic conditions remain uncertain, wars continue to threaten global stability, IT buyers remain cautious about spending, and cloud has reached a saturation point in many IT organizations. So, while we do not expect GenAI technology to return the market and leading vendors to their pre-2023 pace of revenue expansion, it will serve as a small yet rapidly growing segment in 2024 and should become a significant market in 2025 and beyond.
 
We also expect the intensity of AI-focused strategies during 2024 to reflect the importance of the technology to long-term growth. AI could reset the cloud leaderboard for the next decade, so incumbents like Amazon Web Services (AWS) and Salesforce will be keen to protect their large customer bases against mounting AI competition from the likes of Google, Microsoft and SAP.
 
To read the entire 2024 Cloud Predictions special report, request your free copy today!

IT Services and Consulting in 2024: Traversing GenAI Pressures, Talent Challenges, and Regulatory Waves

New: Discover insights into GenAI for 2025 across IT services, cloud, IT infrastructure, federal IT services and more. Download your free copy of TBR’s 2025 GenAI Predictions special report today!

Top 3 Predictions for Professional Services in 2024

  1. The 2023 focus on reskilling and training will pay off in accelerated revenues in 2024
  2. Generative AI will create a pivot to outcomes-based pricing
  3. Regulations will become a major pain point for all

 

Request Your Free Copy of 2024 Professional Services Predictions

Embracing Change, GenAI Hype and the Imperative of Outcome-Based Strategies for IT Services and Consultancies

As they say, nothing in life is certain except for death and taxes. And change. And data overload. And hype about technology and disruption. Predictions provide a perfect platform for big leaps and wild guesses, but at TBR, we are seeing more of the same for 2024, including taxes, data overload, and technology (read: generative AI [GenAI]) hype.
 
IT services and consulting stubbornly remain a people-centric business, despite advances in automation, analytics and AI, and vendors most adept at attracting and retaining good people continue to outperform peers. Keeping good people when the hype around GenAI suggests that many task-oriented jobs will disappear requires vendors to offer training in new skills and develop new career paths.
 
Concurrent with these pressures on talent, GenAI will pressure contracts — with greater transparency comes greater opportunity to pay for exactly what you got. IT services vendors and consultancies that embrace outcome-based pricing models will increasingly find their clients, particularly those enamored with GenAI (although, who isn’t?), open to creative pricing and reluctant to continue business as usual once GenAI has pushed the client’s procurement office out the door.
 
 
Additionally, governments continue to lean into regulation to mitigate societal risks and to tame or unleash (depending on your political views) commercial activities. After the last three years of dealing with the pandemic, war, and the emergence of robot overlords (read: again, GenAI), we can reasonably expect governments will increasingly seek the security blanket of tighter regulations.
 
Add a splintering of global approaches to trade, finance and geopolitics, and companies face not just more regulations but also overlapping and potentially conflicting compliance obligations, varying wildly by jurisdiction. Death and taxes, indeed.
 
For IT services vendors and consultancies, 2024 looks a little boring. Reskill and train your people so you’ve got the right folks ready to deploy at scale to address your clients’ toughest problems. Let someone else handle the easy problems until they get replaced by GenAI. Start baking outcomes-based pricing into every engagement, underpinned by AI and analytics that demonstrate unquestionably what value you are bringing your clients. And lean hard into governance, risk and compliance (GRC), unless you do not have those skills already, in which case, find a partner.
 
To read the entire 2024 Professional Services Predictions special report, request your free copy today!

Telecom Industry Retrenches in Response to Macroeconomic Pressures

New: Stay ahead of the game in 6G with TBR’s 2025 Telecom Predictions special report. Explore the expected changes and developments in the telecom industry, including AI, cloud and digital transformation. Download your free copy today!

Top 3 Predictions for Telecom in 2024

  1. New round of M&A and bolder combinations are likely to be allowed by regulators
  2. Cash flow management becomes priority due to increase in cost of capital and other headwinds
  3. Open RAN will not be ready for mainstream adoption in 2024

Request Your Free Copy of 2024 Telecom Predictions

 

CSPs and Telecom-centric Vendors Will Have to Adjust to Headwinds in Their Industry and the Wider Economy

Macroeconomic and telecom industry-specific challenges that manifested in 2023 — for example, rising interest rates, inflation, lack of 5G ROI, technological complexity, and the end of key stimulus programs and various other economic support mechanisms instituted by governments during the COVID-19 pandemic — are expected to persist through 2024, prompting a response from communication service providers (CSPs) and their vendor partners.
 

The most impactful and pervasive issue confronting the telecom industry is the rising cost of capital, which has been increasing due to central bankers’ shift from quantitative easing (QE) to quantitative tightening (QT) in an attempt to tamp down inflation. The result thus far is companies are now paying on average two to three (or more) times the interest rates they had grown accustomed to since the Great Recession, when central banks began holding interest rates at close to zero. This relatively abrupt change in monetary and fiscal policy has created a concerning situation for entities that are heavily levered with debt, which encompasses nearly all CSPs and many telecom vendors.
 

CSPs with the weakest financial positions began changing their behavior in 2023, primarily in response to the rising cost of capital, evidenced by fiber build targets being scaled back, assets being revalued and written down, and overall capex budgets being reduced. Some CSPs have also had to layer on more onerous covenants, such as pledging assets as collateral, to secure new debt issuances and partially offset the rise in interest rates.
 

TBR Insights Live - 2024 Telecom Industry Outlook: Navigating Macroeconomic and Industry-specific Turbulence
 

TBR expects many CSPs with relatively stronger financial positions to also change their behavior in 2024. Changed behavior typically occurs after a reassessment of capital structure and capital allocation, which can lead to a variety of outcomes ranging from dividend cuts to capex reduction to M&A events. Said differently, CSP CFOs worldwide will be under an unusual amount of pressure to meet their objectives in 2024, and they are highly likely to place greater emphasis on cost optimization and cash flow management.
 

TBR maintains its belief that the telecom industry will look very different by the end of this decade as current events and entrenched challenges push the industry through an evolution.
 

To read the entire 2024 Telecom Predictions special report, request your free copy today!

HCLTech Solves ‘Know Your Customer’ for European Bank

During HCLTech’s Financial Services Advisor and Analyst Day in New York City this past August, the vendor described an engagement with a European bank in which HCLTech provided a comprehensive Know Your Customer (KYC) solution. TBR requested further details and met with HCLTech leaders responsible for the solution and the vendor’s European Financial Services practice in September in London.

 
HCLTech has a long history of working with banks and has developed an appreciation for the associated challenges, technology environments, and regulatory and compliance demands, in addition to the full stack of ecosystem partners. Additionally, HCLTech understands financial institutions’ KYC risks and has applied the company’s own investment and IP to address clients’ concerns around data and processes.

 

Over the last couple of years, HCLTech created a comprehensive approach to KYC for a European bank, solving a number of the bank's operational challenges. The solution collated siloed processes and gathered related and dependent data and analytics into a single stream, allowing the CIO to see and understand the technology challenges and giving the chief compliance and risk officer greater confidence in controls and reporting.

 

Having engaged this and other financial services clients, HCLTech leaders described the company as the “glue” between IT and risk and compliance. Critically, HCLTech’s leaders said their professionals on the highlighted engagement spoke extensively with the people handling the day-to-day work of analyzing KYC cases.
 

According to Santosh Kumar, Vice President and Head of Financial Services Solutions, EMEA and APAC, no other IT services vendor has received permission from bank CIOs (or even pressed for that permission) to interview and work with KYC analysts when designing a technology solution that meets those analysts' needs. Santosh stated these professionals are frequently considered necessary cost centers within a bank's operations.

 

Diving further into the use case, Abhishek Mishra, Senior Solutions Director, Financial Crime and KYC, HCLTech, first detailed the pain points and trends HCLTech sees across the financial services space, including false positive rates, compliance costs, cloud migration and enhanced data analytics.

 

Against these conditions, according to Mishra, HCLTech positions itself as "the beacon of trust and innovation in the ever-evolving landscape of financial crime prevention" and a vendor capable of empowering "organizations worldwide to safeguard their integrity and financial stability." In support of that aspiration, HCLTech highlighted the company's resilience and experience, enhanced by technologies, particularly smart automation.

 

Further discussing HCLTech’s decision to engage directly with the KYC analysts, Mishra described how he sees a range of KYC issues that are not always apparent at the CIO or chief compliance officer level, including fragmented IT systems and frequent manual interventions into processes that should be standardized and automated.

 

Using the platform codeveloped with the European bank client, HCLTech helped the bank reduce its operations team by 60% and lowered the incidence rate of false positives by 30%. As Mishra noted, the industry standard incidence rate for false positives is around 90%, making any improvement a substantial savings in operations costs. Overall, the breadth and proven depth of HCLTech’s capabilities across the KYC and broader financial services space struck TBR as potentially significant differentiators as IT services vendors face increased pressures on their margins, talent management strategies and business models.
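
To make the arithmetic behind those figures concrete, the short Python sketch below works through what a 30% reduction in false positives against a roughly 90% baseline rate can mean for analyst workload. The alert volume and per-case review time are hypothetical assumptions for illustration only, not figures from HCLTech or the bank.

```python
# Illustrative sketch only: alert volume and review time are assumed, not client data.

monthly_alerts = 10_000          # assumed KYC alerts generated per month (hypothetical)
baseline_fp_rate = 0.90          # ~90% industry false positive rate cited at the briefing
fp_reduction = 0.30              # 30% reduction in false positive incidence
review_hours_per_alert = 0.5     # assumed analyst time per alert (hypothetical)

baseline_false_positives = monthly_alerts * baseline_fp_rate
reduced_false_positives = baseline_false_positives * (1 - fp_reduction)
alerts_avoided = baseline_false_positives - reduced_false_positives
hours_saved = alerts_avoided * review_hours_per_alert

print(f"False positives before: {baseline_false_positives:,.0f}")   # 9,000
print(f"False positives after:  {reduced_false_positives:,.0f}")    # 6,300
print(f"Alerts no longer worked: {alerts_avoided:,.0f}")            # 2,700
print(f"Analyst hours saved per month: {hours_saved:,.0f}")         # 1,350
```

Under these assumed inputs, roughly a quarter of all alerts no longer require manual review, which illustrates why even a modest improvement against a 90% false positive baseline translates into substantial operations savings.
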

The “glue” between IT and risk and compliance

It is a bold claim, and plenty of consultancies, many IT services firms and even some technology vendors would self-describe as the essential connection between functional groups within an enterprise. HCLTech's claim, in this particular case about KYC, holds greater weight given the thorough and fully operational nature of its KYC platform.

 

In a wide-ranging discussion with TBR on this use case, HCLTech's Mishra and Santosh did not shy away from answering several challenging questions, including why banking clients would trust HCLTech with as material a requirement as KYC, as well as how HCLTech interacts with regulators.

 

HCLTech executives showed a refreshingly honest assessment of the company's place in the ecosystem, acknowledging that the Big Four firms continue to play an essential role in providing assurance to both clients and regulators that HCLTech's KYC approach, along with other banking process technologies, meets all criteria for reliability and compliance. Mishra and Santosh also readily acknowledged that the company's role within the ecosystem depends on niche technology vendors, rejecting the idea that HCLTech is "end-to-end" while embracing the need to be a capable and easy-to-work-with ecosystem partner.

 

In addition to recognizing challenges within the ecosystem, HCLTech acknowledged that not every client would or could adopt the company’s KYC solution, given the complexities of banks’ legacy technology environments, ingrained cultural dispositions toward caution around all operational aspects governed by compliance obligations, and the myriad technology and IT services vendors that are scrambling for a chance to sell the next special tech-infused solution to a bank (hearing thunderous echoes here of generative AI).

 

Rather than pressing forward on a one-size-fits-all solution, HCLTech has created sandboxes for banks to experiment with a test solution, including the KYC platform, in safe — but realistic — environments. HCLTech’s well-established credibility among financial services clients unquestionably provides the company with entry to advise on new approaches to solving persistent problems.

Financial crimes will not disappear, but HCLTech could ease banks’ costs

In TBR’s view, offering new ways to solve persistent problems is precisely how HCLTech has tackled KYC. Banks budget a surprisingly substantial amount of their operational costs toward KYC, including funds dedicated specifically to paying fines if (read: when) they are found to be out of compliance.

 

In its European bank use case, HCLTech helped the client reduce its number of FTEs dedicated to KYC by 60% and also delivered an auditable, comprehensive and technology-enabled platform that the bank owns, operates and depends on. KYC challenges will never disappear: Money launderers — perhaps Venice’s second-oldest creation — will always be more creative than governments, banks and technology companies. But if HCLTech can substantially reduce banks’ KYC costs without compromising compliance, it is going to unlock considerable value for banks to invest in additional services, technologies and transformations.

 

TBR believes the keys to HCLTech’s success in KYC include:

  • Continuing to focus on being the “glue” between risk and compliance and IT: HCLTech has established credibility with the latter and has grown its credibility with the former, but both will remain essential to KYC transformation. Sticking to its comfort zone with technologists will limit HCLTech’s ability to scale a KYC solution.
  • Staying within its swim lane: Although this may seem to contradict the point above, HCLTech should focus on delivering comprehensive and highly functional solutions, in sync with the company's engineering DNA. HCLTech executives' willingness during the conversation to cede consulting territory to the Big Four firms (notably EY) struck TBR as exceptionally self-aware in assessing HCLTech's role in the broader banking ecosystem.
  • Remaining patient: TBR has been briefed on countless transformational solutions that are ready to address burdensome costs with bullet-proof technology. And TBR has heard specific transformational use cases cited … but often three, four or five years later, raising the question: “If that solution was so great, why didn’t it scale?”

HCLTech may have something great here. With patience, discipline around partnering, and a focus on collaboration with the right clients in the right setting at the right time, HCLTech's KYC solution could become a materially significant step forward in reducing banks' operational costs, as well as a good thing for society when it comes to combating fraud and financial crimes.

 

Operators Target Emerging 5G Use Cases, but Monetization Will Remain a Challenge

Approximately 100 industry analysts in addition to representatives from well-known telecom operators and vendors convened at the 2023 5G Americas Analyst Forum to discuss the state of the 5G market in North America and Latin America. The event featured keynotes from Ulf Ewaldsson, president of Technology at T-Mobile, and Scott Blake Harris, senior spectrum advisor at the National Telecommunications and Information Administration’s Office of the Assistant Secretary. The event also featured a series of roundtable discussions focused on key topics in areas including 5G network infrastructure and technologies, private cellular networks, multi-access edge computing, IoT, regulatory considerations, and enterprise and consumer 5G use cases.

TBR Perspective

The 2023 5G Americas Analyst Forum highlighted that 5G development in the U.S. is in its middle stages as operators are on track to complete the bulk of their midband 5G spectrum deployments in 2024. The return on investment for 5G remains unclear, especially for Verizon (NYSE: VZ) and AT&T (NYSE: T) due to their heavy investment to acquire C-Band spectrum licenses.
Operators remain challenged in monetizing 5G because use cases, with the exception of fixed wireless access (FWA), are still limited, especially within the consumer market, as LTE remains sufficient to support current smartphone apps in most instances. Meanwhile, revenue generation from enterprise 5G use cases in areas including private cellular networks (PCNs) and multi-access edge computing (MEC) is taking longer than anticipated, as many clients are postponing implementing these solutions until business cases and benefits become more certain.
 

Despite current challenges in monetizing 5G, investments in the technology remain necessary for U.S. operators to remain competitive with each other, to add network capacity to support rapidly growing data traffic, and to gain network efficiencies and cost savings as 5G is significantly better at handling network traffic compared to LTE. Additionally, new technology standards, including 3rd Generation Partnership Project (3GPP) Releases 16 and 17, are helping to unlock the potential of 5G solutions in areas including MEC, network slicing, industrial IoT and V2X (vehicle-to-everything) while the upcoming 3GPP Release 18 will debut 5G Advanced technology. Though the availability of these technologies will create 5G monetization opportunities, TBR expects hyperscalers, application developers, OEMs and other players within the technology industry to capture the majority of new revenue from 5G-related solutions, while operators will serve mainly as connectivity pipes to support these solutions.

Click to register for our next private 5G TBR Insights Live event!

Impacts and Opportunities

5G Adoption Is Accelerating in North America, but Revenue Generation Remains Minimal

Though North America leads other regions in the adoption of 5G-compatible devices and enrollment in service plans, direct revenue generation for operators from smartphone customers is limited due to minimal use cases beyond faster data speeds. TBR believes operators are monetizing 5G in indirect ways, however, including by ensuring strong mobile broadband quality of service to minimize churn and by leveraging enhanced network capacity to support features exclusive to higher-tier service plans, such as increased high-speed mobile hotspot data allotments before speeds are throttled, as well as larger mobile hotspot data tiers. Certain operators, most notably Verizon, are also limiting access to midband 5G services to customers enrolled in premium service plans.
 

FWA currently provides the most significant 5G revenue opportunity for operators, as evidenced by T-Mobile's (Nasdaq: TMUS) and Verizon's FWA services outperforming cablecos and other broadband providers in broadband subscriber growth in recent quarters. Government initiatives will also further FWA customer adoption and service availability, including via broadband funding programs and financial assistance programs such as Metro by T-Mobile offering discounted FWA pricing through the government's Affordable Connectivity Program. However, TBR believes FWA will hinder long-term revenue generation across the broadband industry as a whole, given FWA's lower price points and the fact that most FWA customer additions stem from share shifting away from other broadband providers. FWA will also contribute to "race to the bottom" pricing, as cablecos and other broadband providers will likely price more aggressively over the long term to attract and retain customers.

A National Spectrum Strategy Is Vital to Support 5G Long-term While Creating a Foundation for 6G

Scott Blake Harris discussed the National Spectrum Strategy, an initiative headed by the U.S. Department of Commerce, NTIA and other federal agencies, including the Federal Communications Commission (FCC), to address the long-term spectrum requirements within both the public and the private sectors. The National Spectrum Strategy is expected to be finalized by the end of 2023 and is focused on creating a pipeline to enable the U.S. to maintain its leadership in spectrum-based technologies, ensure long-term spectrum planning in the U.S., and foster unprecedented spectrum access and management through technology development. A key priority of the National Spectrum Strategy is to improve communications between government agencies and the private sector and to identify and evaluate 1500MHz of spectrum in the U.S. that could be repurposed based on the requirements of both sectors over the next decade.
 

The clearance of additional spectrum will be essential for U.S. operators to support rising 5G traffic long-term while helping the U.S. compete at the forefront of 5G development against other leading countries such as China. TBR believes the National Spectrum Strategy may face resistance, however, from federal entities hesitant to clear certain spectrum for the private sector, as CTIA reports the U.S. government controls 600% more midband spectrum than the commercial U.S. wireless industry. For instance, the Department of Defense has expressed reservations about clearing certain spectrum, such as within the 3.1GHz-3.45GHz range, due to national security concerns, as the spectrum currently supports military infrastructure, including defense systems.

Revenue Generation from Enterprise 5G Use Cases Will be Limited for Operators as Other Players Within the Technology Industry Position to Capitalize on These Solutions

Keynotes and roundtables throughout the event discussed the benefits 3GPP Releases 16-18 will provide to support 5G-related network capabilities and use cases.
The technology advancements provided by these releases will help to advance the development of 5G enterprise use cases in areas including MEC, PCN and IoT. However, hyperscalers, OEMs and other players in the telecom ecosystem are also making headway in these areas, which is causing operators to share revenue from these solutions in many cases and to be circumvented altogether in other instances.
 

For instance, AT&T's, T-Mobile's and Verizon's go-to-market strategies for MEC have centered on leveraging partnerships with hyperscalers to accommodate client demand for Amazon Web Services (AWS) (Nasdaq: AMZN), Google Cloud (Nasdaq: GOOGL) and Microsoft Azure (Nasdaq: MSFT) solutions. In many cases, clients are opting to work directly with hyperscalers and OEMs in PCN, circumventing operators altogether.
 

Network slicing is another emerging 5G use case discussed throughout the event that is beginning to gain traction. T-Mobile is positioning itself to be an early leader in network slicing due to its time-to-market advantage in deploying 5G standalone nationwide. The operator recently launched its 5G network slicing beta program nationwide, initially targeting developers seeking to leverage the technology to enhance video calling applications, and T-Mobile will expand the platform to support additional applications and use cases in the future.
 

Initial companies exploring the platform include Dialpad Ai, Google, Cisco and Zoom. TBR expects operators will monetize network slices by offering specialized pricing tiers that optimize coverage and service quality for certain use cases and applications, though in most instances developers and other players will generate the lion's share of new revenue from these use cases. TBR expects the scenario will be similar to the LTE era, in which operators served mainly as the connectivity pipes for new applications in areas such as ride-hailing and video streaming while other players captured nearly all of the new revenue.
 

Leveraging satellite connectivity to support mobile customers was another emerging use case discussed at the event. Satellite connectivity is gaining momentum through new 3GPP standards releases and recent partnerships such as T-Mobile teaming with SpaceX, Verizon partnering with Amazon's Project Kuiper, and Apple (Nasdaq: AAPL) collaborating with Globalstar. Satellite connectivity is initially being leveraged by operators to support emergency SOS texting in remote areas without cellular coverage, though satellites will support more advanced voice and data capabilities in the future. Though partnerships between operators and satellite providers are promoted as mutually beneficial, significant long-term market disruption is possible if satellite providers decide to offer nationwide satellite-based smartphone service directly to consumers once technology capabilities advance and a sufficient number of satellites have been deployed.

Conclusion

The 2023 5G Americas Analyst Forum highlighted the progress operators have made in deploying their 5G networks, especially regarding deploying midband 5G services. This progress, coupled with advancing 3GPP technology standards, provides operators with a foundation to target emerging use cases, especially within the enterprise space. Operators will be challenged, however, in sufficiently monetizing these use cases to generate a viable return on investment that offsets heavy 5G spectrum acquisition and infrastructure deployment costs.