Approximately 250 attendees representing telecom network vendors, communication service providers (CSPs), technology companies, regulators and academia, as well as experts from multiple non-telecom industries, converged on the 2023 Brooklyn 6G Summit in late October. The event was hosted by Nokia and NYU Wireless and covered a wide range of topics relevant to 6G, including ICT industry trends, regulatory impacts, the metaverse, AI, vertical use cases, and cloud-native network infrastructure.
TBR Perspective on Telecom Industry Progression to 6G
The 2023 Brooklyn 6G Summit highlighted both the optimism and uncertainty the telecom industry is experiencing as it progresses from the 5G era, which is about halfway through its developmental cycle, to the 6G era, which is expected to commence in 2028, when the first 6G specification in 3rd Generation Partnership Project (3GPP) Release 21 is finalized.
Initial commercial 6G network deployments are expected by 2030. The sentiments of optimism and uncertainty around 6G were discussed throughout the event, including in a keynote from AT&T’s EVP of Technology Chris Sambar in which he expressed concerns regarding the ROI of 6G.
Sambar stated, “We’re getting a little bit worn out with the economics of the industry” to summarize the challenges AT&T and other operators are currently experiencing in light of high investment costs and limited monetization opportunities in the 5G era. Sambar also remarked, “To be completely honest and transparent, the industry has questions on what is 6G going to bring us, what are the use cases that customers want from 6G and frankly, what is it going to cost us.”
Sambar’s keynote, which was one of the initial sessions at the 6G Summit, set the tone for the rest of the event as speakers candidly assessed the current state of the 5G market while discussing the benefits and use cases that are expected to materialize during the 6G era. Though 6G technical specifications and expected use cases are still in the developmental stages, TBR believes operators will be more calculated and tactical in investing in 6G compared to 5G, with a deeper emphasis on ensuring a clear line of sight to ROI before significant spending occurs.
Lessons Learned From 5G Era Provide Blueprint to Optimize 6G Deployments
Speakers discussed missteps during the 5G era and the importance of not repeating those mistakes in deploying 6G. A key theme was that the launch of multiple variants of 5G in the U.S. — described as “50 shades of 5G” by an event participant — was ultimately a misstep compounded by premature marketing. This trend was exemplified by the initial launch of 5G services in the U.S. over low-band spectrum, which provided only marginal performance benefits compared to LTE and in turn created a generally tepid initial impression of 5G among consumers.
Another notable example was the launch of 5G non-standalone (5G NSA) prior to the deployment of 5G standalone (5G SA). Though 5G NSA enabled operators to launch commercial 5G services faster, 5G NSA lacks key benefits enabled by 5G SA, including faster data speeds, enhanced security, and the ability to support network slicing and lower latency use cases. The separate launches of 5G NSA and 5G SA in turn created complexities and misunderstandings for consumers and enterprises.
Event participants noted these challenges experienced within the 5G era will help guide the industry as it creates more cohesive 6G strategies that will enable operators to optimize network spending, provide more tangible initial benefits to customers, and minimize premature marketing of services. Key focus areas for the industry in 6G development include optimizing spectrum allocations for 6G as well as establishing unified global technology standards for 6G to minimize fragmentation in the market. For instance, participants at the event noted it would be beneficial for the industry to determine during the earlier stages of standards development whether 6G will be deployed on its own separate network core or on existing 5G cores, and for operators to adhere to one deployment model to avoid the complexities created by 5G NSA and 5G SA.
The Clearance of 6G Spectrum Will Be Vital in Supporting Continued Growth in Data Traffic
Despite the early stages of 6G use cases and the uncertainties around monetization opportunities, operators will need to invest in 6G to remain competitive with each other and to support escalating data traffic long-term, as 6G is projected to support a 10x increase in usage on networks. The clearance of additional spectrum in the U.S. will be essential to support 6G and for the country to remain at the forefront of the global wireless market; Sambar noted that the U.S. currently ranks No. 10 worldwide in licensed midband spectrum allocation. Key spectrum ranges Nokia expects 6G to be deployed on include the 7GHz-20GHz frequencies to support outdoor cell sites in urban markets, low-band spectrum in the 470MHz-694MHz range to maximize coverage, and sub-terahertz spectrum to provide peak data speeds in localized areas.
The National Spectrum Strategy, which was released by the Biden administration in November 2023, will help advance spectrum development in the U.S. The strategy identifies 2,786MHz of airwaves to study in the near term for new uses, including 5G and 6G, across five spectrum bands: 3.1GHz-3.45GHz, 5.030GHz-5.091GHz, 7.125GHz-8.4GHz, 18.1GHz-18.6GHz and 37GHz-37.6GHz.
More Efficient Network Technologies Will Be a Primary Use Case for 6G
Various potential 6G use cases were discussed at the summit, though the time frame for commercial readiness and the willingness of customers to pay for these solutions remain unknown. Many of the use cases discussed involved extended reality (XR) technologies such as AR and VR, including the metaverse and real-world simulations that provide training for users such as military personnel and first responders. Use cases around autonomous vehicles, advanced robotics, drones and 8K video were also discussed.
TBR expects the most beneficial use cases for 6G will involve the provisioning of advanced technologies that will enable operators to more cost-efficiently support rising traffic on their networks. For instance, deeper implementation of artificial intelligence and machine learning technologies will enable operators to enhance self-optimizing network (SON) capabilities to realize cost efficiencies. 6G is also expected to result in deeper implementation of digital twins, which will help operators better anticipate potential network outcomes and optimize their operations in areas including site management and field operations. Additionally, 6G is expected to be significantly more energy efficient compared to 5G, which will enable operators to improve cost efficiencies while helping to support corporate sustainability goals.
Conclusion
The 2023 Brooklyn 6G Summit provided an optimistic yet realistic outlook on the potential of 6G. The telecom industry is particularly concerned regarding the revenue opportunity provided by 6G given the current state of the 5G market. Despite the uncertainty of revenue-generating 6G customer use cases, investments in 6G will likely benefit operators in the long term due to the technology’s ability to support escalating traffic more cost-efficiently on their networks.
Steve Vachon, Senior Analyst | Published Dec. 13, 2023 | The Telecom Industry Will Be Calculated in Its Progression to 6G to Ensure Meaningful ROI
Amazon Web Services’ (AWS) 12th annual re:Invent conference was, unsurprisingly, all about generative AI (GenAI). The five-day event showcased all the ways AWS enables this budding technology — which Amazon CEO Andy Jassy claims will add tens of billions of dollars to AWS’ top line — not just through the infrastructure layer AWS is known for, but also through the company’s platform tools and applications.
AWS Set Out to re:Invent Infrastructure Over a Decade Ago and Is Prepared to Do the Same With GenAI
Dating back to the dot-com bubble and the early days of amazon.com, Amazon gained an understanding of what it takes to provision infrastructure designed to scale at massive volumes. After Amazon spent years trying to overcome scale challenges associated with bringing third-party merchants to its e-commerce engine, AWS was born.
Despite all the competition it has welcomed over the past 10 years, AWS is still largely credited with not only pioneering cloud infrastructure but also making it accessible to anyone. As articulated by AWS CEO Adam Selipsky, this could range from a college student using a laptop in their dorm room to some of the most sophisticated enterprises in the world. But largely owing to the pandemic, we have seen the cloud market shift from a data center outsourcing strategy to a strategic business driver, which means AWS has had to adapt alongside its clients with not just traditional hosting services but also full-stack solutions tied to a specific use case.
One of the most compelling customer examples highlighting this approach includes Pfizer. At the height of the COVID-19 pandemic in 2021, Pfizer pledged to expand its cloud footprint from 10% to 80%. Put another way, Pfizer migrated 12,000 applications and 8,000 servers in 42 weeks, which resulted in $47 million in annual savings and the closure of three data centers. This seemingly successful, large-scale transformation has Pfizer now exploring AWS’ GenAI technologies, including Bedrock, to automate manual processes and realize a projected $750 million to $1 billion in annual cost savings.
This customer example speaks to the powerful influence AWS’ infrastructure has with clients such as Pfizer — which needed to submit data to the Food and Drug Administration in a matter of days during COVID-19 — that prioritize speed, scale and agility. Holding a significant portion of the cloud infrastructure layer, AWS is looking up the stack to tackle cloud’s next big reinvention: GenAI.
Selipsky’s overview of the AWS GenAI stack was consistent with the commentary Jassy has provided on Amazon earnings calls over the past couple of quarters. Here is a quick look at AWS’ GenAI capabilities and some of the new innovations:
Infrastructure: While a great talking point for AWS, we cannot argue the fact that scalable compute serves as the foundation for all things GenAI. For AWS, this includes both custom chips and NVIDIA (Nasdaq: NVDA) GPUs. AWS used re:Invent to launch innovations in both areas, including Amazon Trainium 2 instances (Trn2), which promises a fourfold performance increase over Trn1 for machine learning training workloads, and NVIDIA DGX Cloud on AWS. The latter is particularly interesting and comes as all other cloud providers have already signed on as hosting partners for NVIDIA’s DGX AI software. As the first company to put GPUs in the cloud, AWS has a unique relationship with NVIDIA, but one that may be growing more contentious as sales teams push AWS’ own chips as part of a cost optimization play designed to maximize customers’ lifetime value. Even so, NVIDIA’s supplier power is significant, and thus the company has a lot of bargaining power with the hyperscalers, which need NVIDIA to supply GPUs to their data centers, and in return, can host DGX and support NVIDIA’s push into the software space.
Platform tools and “as a Service” models: The middle layer of AWS’ GenAI stack is largely synonymous with Amazon Bedrock, a managed service used by 10,000-plus customers to access and customize foundation models for their GenAI apps. Making sure customers are not beholden to one model provider and can access an array of options through the same API interface is key to AWS’ strategy (a minimal sketch of this unified API appears after this list). It also contrasts with Microsoft’s (Nasdaq: MSFT) approach and helps AWS position itself as an open and flexible alternative. New models supported via Bedrock include Anthropic’s Claude 2.1, which has a context window of roughly 150,000 words — making it well suited for legal and finance use cases — in addition to internal models, like Amazon Titan Multimodal Embeddings. Breadth of models is key, but improving the native functionality within Bedrock garners the majority of investment from AWS at this layer. This largely includes features that get customers beyond out-of-the-box models to those that can be customized, fine-tuned and applied to business use cases. One example includes Knowledge Bases for Amazon Bedrock, a Retrieval Augmented Generation (RAG) service that pulls data from multiple sources (e.g., databases, APIs) to help customers bring data to their models and customize.
GenAI applications: At the top of the stack are the actual GenAI applications built on foundation models. AWS may have a weaker association here, but this layer is important to rounding out the entire stack and keeping customers invested in AWS. This layer largely comprises CodeWhisperer, the free-for-use code companion that also offers customization capability, which means the application learns from internal code to provide personalized recommendations.
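For readers who want to see what that unified Bedrock interface looks like, here is a minimal, hedged sketch using the AWS SDK for Python (boto3); the region, prompt and model choice are illustrative assumptions rather than recommendations:

```python
# Minimal sketch of calling a foundation model through Amazon Bedrock's
# runtime API with boto3. The prompt format follows Anthropic's Claude
# text-completion convention on Bedrock; the region, model ID and
# parameters below are illustrative assumptions.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "prompt": "\n\nHuman: Summarize our Q3 support tickets.\n\nAssistant:",
    "max_tokens_to_sample": 300,
})

# Swapping model providers is largely a one-line change to modelId --
# the "array of options through the same API interface" described above.
response = bedrock.invoke_model(
    modelId="anthropic.claude-v2:1",
    contentType="application/json",
    accept="application/json",
    body=body,
)
print(json.loads(response["body"].read())["completion"])
```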
It Is All About Breadth
With over 220 native services and 600 compute instance types, portfolio breadth has always been a hallmark attribute of AWS. For context, AWS launched 3,300 new features and services in 2022. In his opening keynote, Selipsky went as far as to say that AWS offers 60% more services and 40% more features than its nearest competitor. The approach to GenAI will be no different, as AWS strives to offer the broadest set of capabilities for customers to run, build and deploy GenAI technology.
Even in areas where AWS lacks depth or specificity, ISV solutions prove instrumental in filling gaps and, in many cases, do more to drive up a customer’s underlying IaaS resources than AWS’ out-of-the-box services. We also know AWS has a rich history of delivering very basic services to market and quickly building them up into competitive products over time. Perhaps the best example is Amazon SageMaker, which accumulated over 250 features and tens of thousands of customers in the span of six years.
GenAI Applications: How AWS Is Entering the Copilot Race With Amazon Q
At re:Invent, AWS took a bigger leap into the GenAI applications space with the launch of Amazon Q. While incorporating natural language processing (NLP) into various services is not necessarily new to AWS, Q is a GenAI-powered assistant based on 17 years of AWS knowledge designed to bridge the gap between the technical and business-led functions in the enterprise. For example, Q will be integrated with the CodeWhisperer environment so developers can ask questions like, “How do I create code for this function?” while admins can use Q in pretty much any environment (e.g., AWS Management Console, Slack, documentation) to ask questions as generic as, “How do I build a web app on AWS?”
But Q also connects to 40 external data sources for business-related tasks, such as data visualization and document summarization, while the assistant integrates with Amazon Connect for contact center optimization, and will soon work with AWS’ Supply Chain application launched at last year’s re:Invent. Integrating Q across functions like supply chain and customer service, in addition to the analytics stack with QuickSight, suggests AWS wants Q to be the expert assistant not just for building on AWS but also for business.
This approach is largely consistent with what we are seeing from competitors integrating copilots and assistants into their SaaS offerings; however, there are a couple of big contrasts between Q and Microsoft’s Copilot and Google Cloud’s (Nasdaq: GOOGL) Duet AI. The first one is pricing: Both Copilot and Duet AI are priced at $30 per month per user, while Amazon Q, though still in preview, will come in at $20 and $25 per month per user for Q Business and Q Builder editions, respectively.
AWS may be undercutting its competitors on price, but Microsoft’s and Google Cloud’s recognition and reach in the productivity space may prove challenging, at least in the context of the Q Business edition. Q Builder, however, may be another story. While including all the capabilities of Q Business, Q Builder is designed for AWS-specific use cases, and in general, anything AWS can do to make developers successful is going to be well received by the customer base. This could include tasks like troubleshooting applications, writing SQL queries or even migrating code. A small pool of Amazon developers tested this last capability internally to upgrade 1,000 applications from Java 8 to Java 17 in two days.
The other big difference is that Amazon Q leverages Bedrock, which means the GenAI assistant is pulling multiple third-party models and assigning them to the right tasks. Peers have taken a different approach, as their assistants are based on a sole provider; for Google Cloud, this is internal models like Codey, and for Microsoft, this is OpenAI’s GPT models. While we cannot say for certain how customers will view these approaches, for AWS, having Q based on Bedrock speaks to the company’s goal of offering a broad array of models in hopes of challenging Microsoft.
The Zero-ETL Integrations Keep Coming
Building on last year’s commitment to a zero-ETL (Extract, Transform, Load) future and the resulting integration between Redshift and Aurora, AWS launched three new zero-ETL relational and nonrelational database integrations with Redshift: Aurora PostgreSQL, RDS (Relational Database Service) and DynamoDB. Just like it wants to offer the broadest set of infrastructure options, AWS wants to ensure it has the breadth of cloud data services customers need so they do not have to compromise on the right tool for the right data task. But even if customers have an array of tools accessible to them, they still need a way to break down data silos, which requires integration.
To automatically connect data from source to destination and ease manual ETL processes, AWS is offering more integrations between its database and data warehouse services. We do suspect “zero-ETL” has become more of a marketing term and is essentially glorified data sharing, but there is undoubtedly value in simplifying how businesses connect and analyze data. Even before GenAI made headlines, businesses were realizing the benefits of breaking down data silos and adopting an integrated data posture, but GenAI should only fast-track data strategies throughout the enterprise.
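As a hedged illustration of the developer experience zero-ETL aims for, the sketch below queries replicated data in Redshift like any local table; the workgroup, database and table names are hypothetical:

```python
# Hypothetical sketch: after a zero-ETL integration replicates an Aurora,
# RDS or DynamoDB source into Amazon Redshift, the data can be queried
# like any local table via the Redshift Data API -- no ETL pipeline to
# build or maintain. All resource names below are invented.
import boto3

redshift = boto3.client("redshift-data", region_name="us-east-1")

resp = redshift.execute_statement(
    WorkgroupName="analytics-wg",    # Redshift Serverless workgroup
    Database="orders_replica",       # database backed by the integration
    Sql="SELECT customer_id, SUM(total) AS spend "
        "FROM orders GROUP BY customer_id ORDER BY spend DESC LIMIT 10;",
)
# execute_statement is asynchronous; poll describe_statement and fetch
# rows with get_statement_result once the query finishes.
print(resp["Id"])
```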
Microsoft recognized this trend years ago and recently launched Fabric, a platform that integrates multiple data services, including Synapse, which is akin to Amazon Redshift, into a single offering. Fabric is a single-source-of-truth platform that addresses the entire data cycle and charges customers based on total IaaS resources consumed, versus the compute and storage for each individual data service. AWS’ approach is different, and while customers have a suite of different data services available to them, it could take more effort for customers to stitch these services together and create a unified environment. The new zero-ETL integrations may help rectify this, but Microsoft’s single platform approach and simplified pricing model, all integrated with Copilot, will be competitive.
At AWS, “partners are the catalysts”
In the second-to-last keynote, VP of Worldwide Channels and Alliances Ruba Borno discussed the critical role of partners acting as catalysts for GenAI adoption. This includes both ISVs and global systems integrators, and AWS wants to work with both parties collectively to meet a customer where they are in a journey and work backward from their needs. Delivering solutions as part of an ecosystem was a big focus of the revamped Partner Paths model two years ago, and now AWS is tasked with scaling this model to deliver the GenAI stack to customers.
When asked by Borno what partners can do to drive more business with AWS, Selipsky quickly called out proficiency and making sure the skills are in place to build trust with joint customers. Specializations and competencies are a big piece of proficiency and are skills customers appear to be asking for. At the event, AWS announced the general availability of specializations in resilience and cyber insurance and is also revamping its Competency, Service Delivery and Service Ready designations into one program. Another piece of advice for partners was to focus on putting the necessary resources in place to go to market with AWS, which could be anything from established business units to codeveloped centers of excellence.
It is always a balancing act between the vendor and partner as to who should invest what in terms of go-to-market resources to achieve collective goals. But the message during the talk between Selipsky and Borno seemed to be that AWS makes available to partners all the funding, tooling and programs needed for a successful go-to-market strategy; the partner has to be willing to engage. Put another way, it may be difficult for some Tier 2 partners to grow their AWS business and get in on the GenAI opportunity given the massive resource scale of some of the Tier 1 competitors.
As an example, Accenture pledged to train 50,000 developers and technical specialists on Amazon Q and CodeWhisperer over the next two years. Despite GenAI’s potential to automate labor, the technology will only widen the vast IT skills gap, so vendors that can acquire and train the right talent will continue to outperform when it comes to doing business with AWS.
Lastly, Selipsky reiterated the important role partners will play in the data ecosystem. Considering it is not the actual foundation models that will differentiate the customer, but rather their data, there is an opportunity for partners here, and anything they can do to help customers establish a data layer that will pave the way for AWS’ GenAI stack will be well received.
Conclusion
While a latecomer to the GenAI movement, AWS, with its early establishment in cloud infrastructure, has actually been involved with AI for quite some time. In many ways, the company used re:Invent to raise its voice over the din of AI chatter and showcase the long-standing innovations that it aims to use to build new capabilities and play catch-up with competitors, namely Microsoft. The best example is Amazon Q, a business-focused assistant that is somewhat comparable to Microsoft Copilot, while more Redshift integrations underscore AWS’ goal of better connecting customers to other AWS services, an approach Microsoft is similarly taking with Fabric. Meanwhile, custom compute offerings will continue to serve as a landing spot for net-new workloads, and in some cases, they could be providing cost and performance benefits that help AWS become viewed as not just a hosting provider but also a long-term digital transformation partner.
At the end of the day, customers’ considerations of these GenAI offerings will heavily depend on their existing infrastructure footprint, level of integration required and business use case. Working with partners to land new business and maintain its IaaS leadership lays a foundation for AWS to build the broadest set of integrations, features and services. In doing so, AWS ensures it can meet clients anywhere in their journey regardless of technical requirement or business need. If properly executed, this approach will help AWS further grow off an $88 billion run rate and maintain its lead over its very fast-following peers in IaaS and PaaS.
Catie Merrill, Senior Analyst | Published Dec. 12, 2023 | AWS Aims to Reinvent GenAI Through Infrastructure Layer, Platform Tools and Applications
IT services vendors are ramping up innovation efforts and bringing in new expertise and resources to address emerging needs as client demand reflects a stronger emphasis on software and efficiency solutions. To effectively drive innovation across organizations, vendors are embracing a new culture and different business orientation, investing in talent and reskilling, and creating a broad ecosystem of partners.
Strategic Business Shift: Embracing Emerging Technologies
Over the last few years, IT services vendors have transitioned away from traditional business orientations and are adjusting their operations and portfolio development to focus more on emerging technologies and solutions. While keeping traditional business services, vendors are incorporating expertise and offerings around emerging technologies, including cloud, IoT, security and analytics.
For example, to drive innovation, HCLTech transitioned its business model from traditional services to include digital, IoT and cloud solutions, initially with the establishment of the company’s Mode 1-2-3 strategy, which helped it pursue emerging technology engagements as well as product and platform adoption. The strategy was successful in helping HCLTech build out its emerging technology solutions alongside its traditional services, which evolved into a comprehensive portfolio mix across digital, engineering, cloud, AI and software that powers the company’s digital transformation projects.
Similarly, PwC introduced PwC Products as a way to bring internally developed software and IP directly to the company’s clients, even to the point of launching a click-to-buy option. The change in portfolio driven by innovation also requires investment in talent and culture to facilitate innovation.
Cultivating Innovation: IT Services Vendors Prioritize Talent Investment and Cultural Evolution for Success in the Digital Era
To sustain innovation projects, IT services vendors invest in talent and look to evolve their culture through improved training programs and additional resources for collaboration. Vendors have invested in upskilling for digital capabilities to support internal ideas as well as the integration and delivery of emerging solutions. IT services vendors such as Accenture, McKinsey & Co. and PwC have increased training hours per employee to help further strengthen their innovation resources.
Over the last decade, TBR has repeatedly heard about consultancies’ digital training tools and programs, which are intended to make technologists out of business school majors. Similarly, large IT services vendors have pivoted from one emerging tech to another, staying just steps ahead of their clients.
In addition to skills development, IT services vendors embrace existing skills across their organizations in an effort to gather insights and opportunities around new capabilities. PwC leverages its partners and staff for The Solvers Challenge, a program that invites talent to solve challenges in areas such as environment; workforce; transformation; cyber; and risk and regulation. Through the program, PwC benefits from new ideas and solutions that help propel the firm’s strategy.
Strategic Alliances and Ecosystems: IT Services Vendors Forge Collaborative Partnerships to Drive Innovation and Market Expansion
Partners also work with IT services vendors to support innovation and pursue new go-to-market opportunities. Creating ecosystems of academic researchers, startups, hyperscalers, and specialized technology and industry vendors brings in new expertise to help vendors develop their portfolios and build out scale around emerging technologies. As IT services vendors incorporate revenue goals tied to partnerships, they will aim those partnerships at industry solutions that drive additional value for clients through specialized offerings.
Conclusion
To effectively drive innovation across organizations, IT services vendors create expansive partnership ecosystems, bringing in specialized capabilities and experience across different industries. Additionally, a focus on talent, including reskilling and upskilling, integrates new resources that can drive opportunities in underpenetrated areas with a refreshed portfolio and sales approach paired with new solutions. Lastly, refreshing the business orientation strengthens connections with clients, helping vendors successfully create and foster an innovation-centric culture.
Kelly Lesiczka, Senior Analyst | Published Dec. 8, 2023 | Product Innovation: How IT Services Vendors Are Leveraging Competitive Intelligence
On Nov. 22, 2023, Broadcom officially closed its acquisition of VMware, concluding an 18-month saga that called on the company to navigate several regulatory roadblocks. While these hurdles may have delayed the deal’s closing, TBR suspects most industry watchers have anticipated this outcome for quite some time.
VMware Acquisition Approved by Global Regulators
Global regulators’ early anticompetition concerns did not account for the strategic importance to Broadcom of keeping VMware’s platforms accessible across all hardware options, which makes it unlikely Broadcom would limit these platforms to its own hardware.
Chinese regulators were certainly a tail risk given recent geopolitically motivated actions against other U.S.-oriented M&A, yet they ultimately approved the deal, too, perhaps due to Broadcom’s historical ties to the country and the software-centric focus of the acquisition.
Now, with the deal done, VMware’s next chapter has begun. It has been a long road for the company, yet many things have remained the same. Although VMware is pushing into new cloud-native platforms, the company’s virtualization platform is still its bread and butter, and much of VMware’s total revenue is tied to this business. This proportion is likely magnified considering the breakdown of operating profit. As Broadcom takes the reins, VMware’s strategy will revolve around maximizing the value of these profit centers, likely to the detriment of emerging businesses.
Broadcom Is in Charge and Will Be Guided by Profitability
Broadcom has stated profitability through cost cutting is the top priority, communicating to investors the goal of achieving adjusted EBITDA of $8.5 billion within the next three years, compared to $3.2 billion of GAAP EBITDA for the 12 months ended CY2Q23. While far from a perfect comparison, the targeted uplift is clearly sizable and will rely heavily on reducing costs.
TBR expects general & administrative costs to see the greatest relative decline as Broadcom executes its synergy plan, which will involve slashing redundant headcount in administrative roles. TBR expects Broadcom to be particularly successful in this area, as leadership has extensive experience folding acquired businesses into existing functions in departments like legal, finance and human resources. This skill will be put to work quickly, likely resulting in multiple rounds of layoffs across these departments.
Sales & marketing teams are expected to see impacts as well as Broadcom makes use of its existing sales teams and channel distribution partners to sell into existing strategic accounts.
Headcount reductions have already begun, just days after the deal closed. The total impact of layoffs so far is unclear, yet there are reports that reductions have affected software development and cloud engineering roles as well as administrative roles. While VMware’s R&D budget will undoubtedly shrink, it is unknown by how much. The fact that R&D-related headcount is being cut early does not paint a favorable picture for Broadcom’s commitment to innovation, yet TBR’s estimates indicate that drastic cuts may not be necessary. This aligns with commentary from Broadcom management, which has promised to maintain VMware’s previous development strategy. Still, TBR remains skeptical about future R&D efforts.
Profitability Goals May Negatively Impact License Products and Emerging Solutions Over the Long Term
Along with many industry watchers, TBR has been concerned about Broadcom’s intention to invest in innovation since the initial announcement of the VMware acquisition, given Broadcom’s history with CA Technologies and Symantec. In both instances, the company slashed funding for support and R&D after the acquisition, opting to extract free cash flow from their sticky install bases instead of pursuing organic growth. VMware offers a similar opportunity.
Cost concerns are prompting many enterprise customers to preserve past investments, including their virtualization platforms. Moreover, since VMware has built highly integrated solutions with all the Tier 1 hyperscalers, enterprises are better equipped to migrate their virtualization platforms to the cloud, where they are able to set up broader cloud migrations without fully committing to the transition to cloud native.
This means VMware commands a large, sticky install base, which would be ideal for Broadcom’s previous strategy. Recognizing this, many partners and customers are rightfully worried about the outcome of this deal, expecting higher licensing prices and diminishing support.
Profit Centers Will See Little Impact from Broadcom Ownership
In addition to promoting margin expansion, raising license prices will encourage more customers to transition to subscription offerings, which highlights an important consideration within this business transformation. While Broadcom will deprioritize certain segments, large portions of VMware will be deemed strategic by Broadcom and will continue to see the same level of investment.
For instance, many customers and partners collaborating around cloud-based virtualization platforms like VMware Cloud will see minimal differences because of the change in ownership. For the last 12 months ended CY2Q23, over 34% of VMware’s revenue was generated in the Subscription & SaaS segment, and TBR suspects Broadcom will prioritize many of the offerings within this segment.
In May, Broadcom CEO Hock Tan pledged to invest an incremental $2 billion per year, with half slated for R&D to support the Cross-Cloud portfolio. Considering that an incremental $1 billion investment would increase R&D spend by around 30% over CY2022 levels (implying a CY2022 R&D base of roughly $3.3 billion), Broadcom’s ownership may actually benefit large swaths of VMware’s Cross-Cloud portfolio by adding resources and accelerating development timelines.
Long Term, Profitability Will Be King
TBR is skeptical about how far into the future Broadcom’s commitment will extend, and it is not clear how Broadcom’s investment will be spread across VMware’s different offerings. Many solutions within the Cross-Cloud portfolio are still underdeveloped and represent opportunities for VMware to achieve sustainable long-term top-line performance.
Tanzu is a prime example. The container management platform sits at the heart of the company’s multicloud strategy, which VMware has pushed heavily over the past 18 months, yet TBR suspects Tanzu contributes only a small percentage of total revenue and certainly cannot be considered a profit center.
If Broadcom is to achieve its stated profitability goals, VMware will need to scale this offering rapidly. If it does not, TBR expects there will be a limit to Broadcom’s patience and a spinoff may be in the cards over the long term. To TBR, the $2 billion commitment indicates a willingness to only support these emerging businesses over the short term.
Conclusion
Regardless of how much Broadcom messages around maintaining VMware’s current investment strategies, it is very difficult to reconcile this marketing approach with the company’s stated profitability goals. Thus, TBR suspects large changes have begun to arrive for the virtualization leader.
The most immediate impacts will be the significant layoffs that have reportedly removed redundant administrative headcount, along with likely price increases on license products. While there is good reason to expect that many of VMware’s emerging products will be supported over the next couple years, the long-term view is much more opaque.
TBR will be watching for signs of traction and strong execution around many of the emerging solutions included in the Cross-Cloud portfolio, but if they fail to materialize, TBR expects Broadcom’s management to make decisions that benefit profitability.
Alex Demeule, Research Analyst | Published Dec. 7, 2023 | With Broadcom at the Helm, Profitability Will Be at the Center of VMware’s Next Chapter
A platform company at its core, Microsoft is less concerned with migrating monolithic applications and instead is focused on building a complete data integration and management layer to capture value-add workloads that tie into said applications, all while maximizing clients’ underlying Azure infrastructure usage. To replicate this approach for the AI era, Microsoft has spent years integrating its various data services, from Synapse to Power BI, to automate customers’ entire data pipelines and prepare them for AI adoption. The result is Microsoft Fabric, a new end-to-end SaaS-like data platform that could help Microsoft reach new audiences and spur Azure growth in the continued race for cloud and AI dominance.
Microsoft Is Investing in Data Cloud to Support Its GenAI Strategy
What Is Microsoft Fabric?
Simply put, Microsoft Fabric is a unified data platform comprising seven core Azure data services: Data Factory, Synapse Data Engineering, Synapse Data Science, Synapse Data Warehouse, Synapse Real-Time Analytics, Power BI and Data Activator. While Microsoft Fabric makes it easier to connect the different personas within an organization, from data engineers to business analysts, the hallmark of the new service is its simplified pricing model, which charges customers based on the total amount of IaaS resources consumed, rather than the compute and storage for each individual Azure data service.
When we interview enterprise buyers, we continue to find that consolidating point solutions in favor of complete, integrated platforms is a common trend, and Fabric is bound to resonate with customers trying to control runaway cloud costs in a still widely uncertain economy.
The other key defining attribute of Microsoft Fabric is the underlying architecture it is built on, OneLake. Microsoft Fabric is based on a repository that allows customers to query data not just in SQL databases but also in object storage, as is customary in the data lake architecture.
With OneLake, we see Microsoft moving squarely into the data lake space. Given the symbiotic relationship between data lakes, which are designed for unstructured data, and generative AI (GenAI), OneLake is Microsoft’s under-the-hood way of ensuring that customers can easily load data from multiple sources, put it through the Fabric platform for data management and visualization, and build GenAI applications.
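To ground this, the following minimal PySpark sketch writes and reads a Delta table over OneLake’s published abfss addressing scheme; the workspace and lakehouse names are hypothetical, and a Spark environment with Delta Lake support is assumed:

```python
# Minimal sketch of OneLake's data lake pattern: records land as a Delta
# table and are read back for analytics or GenAI data preparation. The
# workspace and lakehouse names are hypothetical; the URI follows
# OneLake's abfss addressing scheme. Assumes a Spark session configured
# with Delta Lake support (e.g., a Fabric or Synapse Spark environment).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("onelake-sketch").getOrCreate()

path = ("abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/"
        "MyLakehouse.Lakehouse/Tables/support_tickets")

# Write a small structured sample as a Delta table in OneLake ...
df = spark.createDataFrame(
    [(1, "printer offline", "hardware"), (2, "VPN drops", "network")],
    ["id", "ticket_text", "label"],
)
df.write.format("delta").mode("overwrite").save(path)

# ... then query it back like any other table in the lake.
spark.read.format("delta").load(path).groupBy("label").count().show()
```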
Altogether, the unification of Microsoft OneLake and Fabric is the right step for Microsoft and exemplifies how far the company has been willing to go to execute its AI-based growth strategy.
Fabric Will Help Microsoft Change the PaaS Landscape but Not Without Infringing on Partners
As highlighted in TBR’s 3Q23 Cloud Data Services Market Landscape, Amazon Web Services (AWS) is the clear leader in the cloud data warehouse market, with Microsoft falling squarely in second place and not significantly ahead of Google Cloud and Snowflake. Azure Synapse has not gained the same level of interest and traction in the market as AWS’ Redshift and Google Cloud’s BigQuery. As a result, Microsoft partnered with Databricks in 2017, developing and delivering the first-party Azure Databricks service.
Partnering with Databricks to ensure customers have an effective data analytics platform natively available on Azure rather than Synapse was a strategic move. With Fabric, however, we now see Microsoft essentially re-delivering Synapse as part of a more complete product that gets to the heart of what customers want: an end-to-end set of capabilities that automate entire data pipelines from data collection and ingestion up to analytics and visualization.
This approach should bring Synapse into more client conversations while helping Microsoft expand its reach outside the analytics department. This, of course, raises the question: What becomes of Microsoft’s partnership with Databricks? As part of OneLake, the architecture underpinning Fabric, Microsoft is leveraging Delta Lake — Databricks’ protocol for storing data in an open table format — and this move could persuade Databricks customers to adopt Fabric.
Even so, Microsoft OneLake adopts the data lakehouse architecture pioneered by Databricks, and with Fabric’s feature-rich set of upper-stack capabilities, customers may be more inclined to go all in with Microsoft Fabric and its comprehensive pricing model, which would bring a new layer of competition to the Microsoft-Databricks relationship.
This trend is indicative of what we are seeing across the cloud landscape. The hyperscalers, even those perceived as more partner friendly, are expanding into new areas of the cloud stack, posing potential risks to their partners, especially as customers continue to indicate their interest in consolidating point solutions.
That said, coopetition is nothing new in the cloud landscape, and vendors are getting more adept at navigating competitive differences to deliver outcome-specific solutions to their joint customers.
Perhaps the best example is the relationship between AWS and Snowflake, which are both spending millions of dollars to get legacy data warehouse customers to Snowflake’s platform on AWS. While AWS would naturally prefer customers adopt its own data warehouse service — Redshift — over Snowflake, AWS has realized the trade-off of forfeiting some Redshift customers to Snowflake as long as those customers are running on AWS infrastructure.
Microsoft Fabric is much broader than the data warehouse, but if AWS and Snowflake are a barometer of a successful partnership, Microsoft and Databricks will similarly learn to overcome these obstacles.
With Fabric, we expect Microsoft will slowly chip away at AWS’ share and potentially Snowflake’s and Databricks’ in the coming years. However, it is important to note we do not see Fabric as any kind of direct threat to pure play data cloud platforms, particularly Snowflake, which has the established presence and reputation in the data warehouse space specifically, not to mention easy inroads into AWS’ customer base.
In our talks with enterprise buyers, we often find customers value Snowflake as it allows them to run separate workloads as part of a shared data layer that is not tied to any specific cloud infrastructure. Despite the multicloud capabilities in OneLake, nothing changes the fact that the core data warehousing capabilities within Synapse are still built specifically for Azure infrastructure for the seamless integration with other Azure services.
We have no doubt Fabric will be attractive to Microsoft-centric shops, but attracting customers invested with other cloud providers may be a more difficult feat, solidifying Snowflake’s and Databricks’ unique value propositions.
Data Lakes and GenAI Go Hand in Hand, and Microsoft Wants to Be the First Hyperscaler Strongly Associated With the Architecture
One other interesting consideration with Fabric is Microsoft’s choice of open table format. Considering its partnership with Databricks, Microsoft has opted for Delta Lake, although it plans to add external support for two other popular frameworks: Apache Iceberg and Hudi.
In general, for customers that want to build a data lake, Delta Lake is the preferred format while Apache Iceberg is more aligned with data warehouses. Defaulting to Delta Lake reflects Microsoft’s intent to remain relevant with Databricks customers, while allowing customers to query data on object storage (Amazon S3 and eventually Google Cloud Storage) reflects Microsoft’s commitment to the data lake architecture.
Due to data lakes’ ability to combine both structured and unstructured data for prescriptive analytics use cases, they are becoming increasingly popular and, in some scenarios, offer customers a way to bypass data warehouse operations altogether. GenAI, which relies on unstructured data sources, such as documents or images, will fuel customers’ desire to consolidate data warehouses into data lakes, leading us to believe that Databricks is in a strong position despite Microsoft’s Fabric announcement.
This is also one of the reasons why Snowflake is trying to add more features that support unstructured and semistructured data in hopes of changing its perception in the market from a data warehouse company to a data lake company.
The hyperscalers, however, have been arguably behind in their data lake services and messaging, and with OneLake, Microsoft wants to make sure it is the hyperscaler most strongly associated with data lakes and by default, GenAI.
GenAI Enablement Sits at the Heart of Microsoft’s PaaS Strategy
Considering Microsoft has arguably made the biggest splash in generative AI, the company’s latest PaaS developments come as no surprise. As TBR discussed in our 2Q23 Cloud Ecosystems Market Landscape, a large language model (LLM) is only as good as the data that goes inside, which means the ability to establish a centralized, single source of truth is very important for an enterprise pursuing a serious generative AI strategy.
OneLake’s ability to provide an enterprisewide repository and a no-code API to manage data will help the company address this need, and the GenAI tools embedded within Fabric will help accelerate the transition to unified data pipelines.
Three Copilot solutions, mostly in preview today, are embedded within Fabric: Copilot for Data Science and Data Engineering, Copilot for Data Factory, and Copilot for Power BI. Broadly, the Copilot solutions in Microsoft Fabric enable code generation capable of automating routine tasks and expediting the transformation of raw data into the structured form LLMs hunger for.
The integrations built over the years between Microsoft’s platform assets and its application portfolios ensure there is plenty of raw data entering Fabric, which, as it becomes structured, presents an ideal environment for enterprises to pursue custom GenAI development. This is where the Azure OpenAI Service enters the conversation.
While the Copilot solutions offered by Microsoft provide quick-and-easy access to GenAI capabilities, true transformational value will be unlocked as enterprises build their own GenAI applications around their proprietary data and business processes, presenting a large opportunity for Microsoft.
The Azure OpenAI service has been enabling customers to train LLMs on their proprietary data since it became generally available in January, and, at Ignite 2023, Microsoft took another step forward with the public preview launch of Azure AI Studio. A new addition to the Azure OpenAI service, Azure AI Studio brings together developer tools like Azure AI SDK with the company’s growing catalog of foundation models to enable customers to build their own copilots and other generative AI applications.
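For context, a rough sketch of the developer-facing surface of the Azure OpenAI Service using the openai Python SDK follows; the endpoint, API version and deployment name are illustrative assumptions, and grounding the model on proprietary data (e.g., via Azure AI Search) is configured separately:

```python
# Rough sketch of calling a model deployed through the Azure OpenAI
# Service using the openai Python SDK (v1+). Endpoint, API version and
# deployment name are illustrative assumptions; connecting proprietary
# data sources for grounding is configured separately in Azure.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-resource.openai.azure.com",  # hypothetical
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

chat = client.chat.completions.create(
    model="my-gpt4-deployment",  # the Azure deployment name, not a raw model ID
    messages=[
        {"role": "system", "content": "You answer questions about our sales data."},
        {"role": "user", "content": "Which region grew fastest last quarter?"},
    ],
)
print(chat.choices[0].message.content)
```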
As more enterprises pursue custom GenAI development, the unified approach to data management offered by Microsoft Fabric and OneLake will become more valuable, drawing interest from enterprises with large Microsoft footprints, yet coopetition at the data layer will remain the standard.
Ultimately, Microsoft’s priority is ensuring all data can be easily fed into its foundation model service, so integrations that connect the Azure OpenAI Service with third-party data leaders like Snowflake and Databricks will prove to be popular alternatives to Microsoft’s end-to-end approach.
Microsoft Is Not Just after the Data Layer: The Race for Hybrid Cloud Control Plane Continues as Azure Arc Reaches 21,000 Customers
Throughout this report, we have touched on Microsoft’s pursuit of the data layer, but it is important to note that Microsoft’s PaaS capabilities are much broader and extend closer to the box. Owing to Windows Server, Microsoft has captured a significant portion of the enterprise OS layer, allowing the company to effectively move into the multicloud control plane, which Microsoft calls Azure Arc.
Best thought of as an abstraction layer that stitches together infrastructure assets for capabilities like monitoring, provisioning and observability, all while securing the OS instance, Azure Arc has amassed 21,000 customers in the span of four years.
In recent quarters we have seen Microsoft become increasingly transparent in its customer reporting. For instance, in 2Q23 and 3Q23 Azure Arc’s customer count grew 150% and 140% year-to-year, respectively, which implies a customer count of just 7,200 in 2Q22 (150% growth on 7,200 puts 2Q23 at roughly 18,000). That 2Q22 figure is much lower than the 21,000 customers announced in 3Q23 and indicates vast interest from Microsoft’s install base of customers trying to bridge the gap between the cloud and the legacy data center.
Another factor driving the platform’s success is Microsoft’s early support for both virtual machines (VMs) and Kubernetes. This approach contrasts with Google Cloud, whose primary goal is getting customers to move away from VMs and use containers. In other words, Google Cloud wants customers to use GKE (Google Kubernetes Engine) on premises to containerize a VM and keep it there, but also wants customers to build net-new, cloud-native apps in containers.
Google Cloud did launch Anthos for VMs in 2021, which we viewed as a direct counterattack to Azure Arc, albeit not a very effective one, as Anthos’ customer count is comparatively low and could suggest the company has not been as adept at tapping into the VMware customer base and attracting enterprises that are not ready to migrate VMs.
We will continue to monitor Azure Arc’s growing customer count in the coming quarters, and it will be interesting to see if Microsoft begins to leverage Fabric to support other managed data services outside Azure SQL via Arc, turning the hybrid platform into a more complete, centralized management layer.
Alex Demeule, Research Analyst | Published Dec. 6, 2023 | Microsoft Expands PaaS Portfolio on Path to AI Incumbency
The macroeconomic and industry-specific challenges that manifested in 2023 — such as rising interest rates, inflation, lack of 5G ROI, technological complexity, and the end of key stimulus programs and various other economic support mechanisms instituted by governments during the COVID-19 pandemic — are expected to persist through 2024, prompting a response from communication service providers (CSPs) and their vendors.
Join Principal Analyst Chris Antlitz Thursday, Feb. 1, 2024, for an exclusive review of TBR’s 2024 Telecom Predictions special report, Telecom Industry Retrenches in Response to Macroeconomic Pressures. Don’t miss this opportunity to learn how the latest industry challenges will impact your strategy in the coming year!
In This FREE TBR Insights Live Session on the Telecom Industry Outlook for 2024 You’ll Learn:
How companies in the telecom industry are responding to macroeconomic and industry-specific challenges
Why and how CSPs are prioritizing cash flow management
Why open RAN will not be ready for mainstream adoption in 2024
TBR webinars are typically held on Thursdays at 1 p.m. ET and include a 15-minute Q&A session following the main presentation. Previous webinars can be viewed anytime on TBR’s Webinar Portal.
For additional information or to arrange a briefing with our analysts, please contact TBR at [email protected].
TBR | Published Dec. 4, 2023 | 2024 Telecom Industry Outlook: Navigating Macroeconomic and Industry-specific Turbulence
In 2024 generative AI (GenAI) opportunities in the digital transformation (DT) space will continue to swing between enterprisewide and function-aligned models. Vendors with broad-based enterprise relationships will capture a new wave of DT growth as buyers seek guidance and support around their data to ensure it is in compliance and safe from cyber threats.
Watch the full 2024 Digital Transformation Predictions session replay now on TBR’s Webinar Portal!
In This FREE TBR Insights Live Session on GenAI Opportunities and Challenges in Digital Transformation in 2024 You’ll Learn:
How the establishment of more defined use cases around function- and/or department-aligned data will pave the way for GenAI as vendors seek to scale adoption across industries
How the launch of multiparty alliances, beyond the traditional two-dimensional relationships, and the rise of cobranded facilities can promote these relationships and deliver DT to customers as a package rather than through a multiphased approach
The research areas TBR’s Digital team will tackle in 2024
TBR webinars are typically held on Thursdays at 1 p.m. ET and include a 15-minute Q&A session following the main presentation. Previous webinars can be viewed anytime on TBR’s Webinar Portal.
For additional information or to arrange a briefing with our analysts, please contact TBR at [email protected].
TBR | Published Dec. 4, 2023 | Navigating GenAI Opportunities and Challenges in Digital Transformation in 2024
With every technology revolution, the speed of change depends on people; this is especially true in IT services, where robots will never be able to replace the human touch — at least not in 2024. While generative AI (GenAI) is creating much excitement in the industry and is seemingly poised to disrupt every business model, TBR sees IT services and consulting vendors benefiting from the near-term (and constant) hype and positioning for long-term growth around less exciting, but no less important, areas such as governance, risk and compliance.
Join Principal Analyst Patrick M. Heffernan, Senior Analyst Elitsa Bakalova and Senior Analyst Kelly Lesiczka for an exclusive review of TBR’s 2024 Professional Services Predictions special report, IT Services and Consulting in 2024: Traversing GenAI Pressures, Talent Challenges, and Regulatory Waves. Don’t miss this opportunity to learn how your team can take advantage of GenAI hype in the coming year!
In This FREE TBR Insights Live Session on Predictions for GenAI in IT Services in 2024 You’ll Learn:
Why the way vendors manage their employees will remain the best indicator of IT services and consulting success
GenAI’s impact on pricing models and outcomes-based engagements as transparency rules
How companies are profiting from the constant change within “boring” taxes, regulations and supply chains
TBR webinars are typically held on Thursdays at 1 p.m. ET and include a 15-minute Q&A session following the main presentation. Previous webinars can be viewed anytime on TBR’s Webinar Portal.
For additional information or to arrange a briefing with our analysts, please contact TBR at [email protected].
TBR | Published Dec. 4, 2023 | GenAI Hype in 2024: A Deep Dive into IT Services Industry Predictions
The opportunity to monetize generative AI (GenAI) in cloud emerged in 2023 and quickly became a bright spot in an otherwise challenging market. We expect AI-focused strategies to intensify during 2024, reflecting the importance of the technology to vendors’ long-term growth.
Join Principal Analyst Allan Krans, Senior Analyst Catie Merrill and Analyst Alex Demeule Thursday, Feb. 8, 2024, at 1 p.m. EST/10 a.m. PST for a deep dive into how vendors will capitalize on these new GenAI-led opportunities to combat the general slowing of cloud market opportunity growth in 2024. Don’t miss this exclusive discussion and Q&A on TBR’s 2024 Cloud Predictions special report, GenAI: A Growth Catalyst for Cloud Evolution in 2024 and Beyond!
In This FREE TBR Insights Live Session on GenAI and Cloud in 2024 You’ll Learn:
Why cloud delivery alone does not guarantee growth
How incumbents Amazon Web Services and Salesforce will ward off mounting AI competition from Google, Microsoft and SAP
How SaaS vendors will promote multiproduct sales using GenAI
How IaaS will become more tailored to workload and regulation
TBR webinars are typically held on Thursdays at 1 p.m. ET and include a 15-minute Q&A session following the main presentation. Previous webinars can be viewed anytime on TBR’s Webinar Portal.
For additional information or to arrange a briefing with our analysts, please contact TBR at [email protected].
TBR | Published Dec. 4, 2023 | GenAI and the Cloud Revolution in 2024
2024 Predictions is a series of special reports examining market trends and business changes TBR’s analysts expect in the coming year. In the digital transformation edition, our team looks at expectations for GenAI’s impact, the rise of superpowers and the three things enabling new market growth.
Top 3 Predictions for Digital Transformation in 2024
GenAI hype meets reality
Ecosystems fuel disruption and lead to the rise of the superpowers
Cyber, data and regulations — the three-legged stool enabling new digital transformation growth
Challenges and Opportunities in the Era of GenAI and Enterprise Digital Transformation
While cloud remains the backbone of buyers’ digital transformation (DT) programs, generative AI (GenAI) has thrown vendors and their technology partners into a frenzy, especially as enterprise buyers have started paying closer attention to their IT spend in response to macroeconomic headwinds.
This new dynamic creates a plethora of challenges and opportunities for technology and services vendors that guide and manage enterprise DT programs. From vendor consolidation to technology stack simplification, buyers continue to look for ways to optimize their digital assets, making it hard for vendors to introduce new technology without the appropriate use cases. Delivering value in a challenging market requires vendors to act more as strategic partners and collaborate rather than simply transact with enterprises.
GenAI is here to stay. There are certainly more unknowns than knowns today, despite everyone across the ecosystem trying to convince others they have found the silver bullet that will enable the creation of the next-gen enterprise business model. As with most new technologies, establishing the right frameworks as well as commercial and pricing models is a necessary first step before adoption can scale. Developing and deploying pricing mechanisms that incorporate pro bono and/or risk-sharing services and using templated offerings to standardize delivery can help vendors maintain their incumbent positions, especially as GenAI will level the skills playing field.
Expectations around differentiation are also changing, increasing the need for vendors to add specialization and often spurring them to expand their partner ecosystem. The advent of a new technology stack (e.g., next-gen GPU-run data centers that enable GenAI to reveal its full potential) will compel vendors to re-evaluate and expand their relationships with chip manufacturers — something many software and services vendors have not done for a while.
Additionally, the implications for cyber, data, regulations, ethics, and model governance will continue to dominate headlines and vendor-buyer conversations. And while vendors are in the business of making money, we believe the winning formula is to strike the right balance between constantly selling and consistently developing relationships with buyers and partners.
Bozhidar Hristov, Principal Analyst | Published Dec. 4, 2023 | IT Ecosystem Trust Paves the Way for GenAI-enabled Growth in 2024