NVIDIA 2Q24 Earnings Recap: Capitalizing on AI Infrastructure Demand and Strategic Ecosystem Collaborations

The NVIDIA vendor analysis report is new to TBR’s research stream. The report examines corporate strategy, tactics, SWOT analysis, financials, go-to-market approach and resource strategies. Read the inaugural edition today with a free trial of TBR Insight Center™!

Assessment: NVIDIA Earnings 2Q24

Robust AI infrastructure demand from cloud service providers (CSPs), enterprises and consumer internet companies drove a fifth consecutive quarter of triple-digit year-to-year revenue growth in 2Q24. During the company’s recent earnings call, NVIDIA CFO Colette Kress highlighted the strong momentum behind NVIDIA AI Enterprise and noted that the company expects to generate approximately $2 billion from sales of software and support in the current fiscal year.


CEO Jensen Huang emphasized two platform transitions happening simultaneously: general-purpose computing shifting to accelerated computing, and human-engineered software moving to generative AI (GenAI) software. To align with these shifting market paradigms and to support NVIDIA’s expanding software revenue base, the company will continue to devote resources to innovating internally and strengthening its partner ecosystem, which includes a rich variety of ISVs, systems integrators, OEMs and ODMs.

Gain insights from TBR’s Devices Benchmark, explore the demand for AI PCs, and understand the competitive landscape among Intel, AMD and Qualcomm in the evolving Copilot+ PC segment in the Devices TBR Insights Live session below.

NVIDIA Leverages Unique Capabilities of Its Diverse Partner Ecosystem to Drive Growth and Go-to-market Synergies

Strategic Collaboration with Ecosystem Partners for Scalable AI Integration

During NVIDIA’s 2Q24 earnings call, when asked about vertical integration, Huang said he was proud of the disaggregated nature of the company’s supply chain, underscoring NVIDIA’s strategy of staying in its own lane and leveraging ecosystem partners for systems integration. While NVIDIA is unique in offering a full stack of AI factory components, including CPUs, GPUs, networking equipment and software, relying on ODMs and a global integrator supply chain enables worldwide scale as well as custom integration of the company’s components to meet clients’ specific requirements.


For example, NVIDIA-branded systems, such as the upcoming GB200 NVL72 rack-scale system, are designed and architected by NVIDIA as a complete rack but sold as disaggregated components. ODMs and other integration partners receive these components and can then build the systems closer to their final destinations, which reduces logistical complexities.


Similarly, NVIDIA’s MGX platform enables OEMs and ODMs to build more than 100 different modular server variations with increased flexibility compared to the company’s HGX platform, allowing for multigenerational compatibility with NVIDIA products. While these platforms benefit NVIDIA’s ODM and OEM partners by reducing the cost of integrating new components, they are also critical to NVIDIA’s go-to-market strategy as they enable faster time to market of new components.


Strengthening Ties with ISVs and LLM Providers

NVIDIA continues to expand and deepen its relationships with ISVs and large language model (LLM) providers to strengthen its developer ecosystem and NVIDIA AI Enterprise platform offering. For example, in 2Q24 NVIDIA announced a new AI Foundry service that leverages Meta’s Llama 3.1 to allow companies to develop customized AI applications using the capabilities of an open-source frontier-level model. Notably, Accenture was the first to adopt the new service, which it will use to create custom Llama 3.1 models both for internal use and for customers’ applications.


Enhancing AI Workloads

In March NVIDIA introduced a storage validation program for NVIDIA OVX computing systems, similar to its existing storage validation program for NVIDIA DGX BasePOD. In contrast to the company’s DGX systems, which are based on Hopper and Blackwell GPUs, OVX systems leverage NVIDIA’s L40S GPUs, which consume less power than Hopper and Blackwell GPUs and are best suited for training smaller LLMs, like Llama 2 7B, and graphics-intensive workloads, such as running industrial metaverse applications.


With the introduction of its OVX storage validation program, NVIDIA can verify the efficacy of storage solutions from partners, including Dell Technologies, NetApp and Pure Storage, in combination with OVX servers to ensure enterprise-grade performance, manageability, security and scalability for AI workloads. This helps enterprises pair the right storage solution with their NVIDIA-Certified OVX servers, which are available from partners such as Hewlett Packard Enterprise, Lenovo and Supermicro.