$NVDA Q2 2025 AI-Generated Earnings Call Transcript Summary


Aug 29, 2024

The operator introduces the conference call for NVIDIA's second quarter earnings and reminds listeners that the call is being webcast and may contain forward-looking statements. She also mentions the availability of a replay and the company's property rights over the call's content. The CFO then discusses the use of non-GAAP financial measures and provides a source for reconciliation with GAAP measures.

Jensen will participate in a fireside chat at the upcoming Goldman Sachs Communacopia and Technology Conference, and the company's Q3 earnings call is scheduled for November 20, 2024. Revenue for the quarter was $30 billion, a record high. Data center revenue was $26.3 billion, up 154% from the previous year. Cloud service providers and consumer internet companies were the main drivers of this growth, centered on the Hopper architecture and early Blackwell adoption. Key workloads include AI model training and inference, data processing, and database processing. Inference accounted for over 40% of data center revenue over the trailing four quarters.

NVIDIA's demand is coming from a broad range of industries and companies, drawn by its ecosystem and availability in every cloud. The NVIDIA H200 platform, with improved memory bandwidth, is being adopted by large CSPs, consumer internet, and enterprise companies. Data center revenue in China grew sequentially, though the market there remains competitive. NVIDIA's leadership in inference was highlighted in the latest MLPerf benchmarks, and at Computex the company unveiled Blackwell architecture-powered systems and networking for AI factories and data centers. The NVIDIA MGX modular reference architecture is being used by OEM and ODM partners to build over 100 Blackwell-based systems. The NVIDIA Blackwell platform combines multiple chips and software to power the next generation of AI. The NVIDIA GB200 NVL72 system, with fifth-generation NVLink, enables faster inference and the ability to run trillion-parameter models in real time.

Demand for NVIDIA's Hopper and Blackwell platforms is strong, and the company has made a change to the Blackwell GPU mask to improve production yields. Blackwell production is scheduled to ramp in Q4 and is expected to generate several billion dollars in revenue. Hopper shipments are also expected to increase in the second half of fiscal 2025. Ethernet for AI revenue doubled sequentially and is being adopted by a broad set of customers and partners. The Spectrum-X platform is seeing broad market support, and the company plans to launch new products every year to keep pace with demand. Sovereign AI opportunities are expanding, with revenue expected to reach low double-digit billions this year. The enterprise AI wave has also begun, with NVIDIA working with many Fortune 100 companies on AI initiatives.

The growth of NVIDIA is being driven by a range of applications such as AI-powered chatbots, generative AI copilots, and agents that enhance business applications and employee productivity. Amdocs, ServiceNow, SAP, Cohesity, Snowflake, and Wistron are all using NVIDIA for purposes such as transforming customer experience, reducing costs, and improving efficiency. Automotive and healthcare are also significant growth drivers, with the potential to generate billions of dollars in revenue. The company recently announced a new service, NVIDIA AI Foundry, which allows enterprises to develop customized AI applications using open-source models. Accenture is the first company to adopt this service, both for its own use and to help clients deploy generative AI applications.

NVIDIA's NIM technology has been adopted by companies across industries, yielding significant cost savings and improved performance. The company has also introduced NIM Agent Blueprints, which are customizable reference applications for building and deploying AI applications. These blueprints, along with the NVIDIA AI Enterprise software platform, are expected to contribute to the company's revenue growth. In gaming, NVIDIA has seen strong demand for RTX PCs, which can deliver AI-powered experiences. The company's suite of generative AI technologies, called NVIDIA ACE, is available for RTX PCs.

Mecha BREAK is the first game to use NVIDIA's new ACE technology, which includes a language model optimized for on-device inference. The gaming ecosystem is expanding with new titles and a large library on the GeForce NOW cloud gaming service. In professional visualization, revenue is up on increased demand for AI and graphics use cases, particularly in the automotive and manufacturing industries. NVIDIA Omniverse is being used by companies like Foxconn and Mercedes-Benz to create digital twins of their factories, and new USD NIMs and connectors are being introduced to open Omniverse to new industries and incorporate generative AI. In automotive and robotics, revenue is also up due to new customer ramps and increased demand for AI cockpit solutions.

NVIDIA recently won the Autonomous Grand Challenge at CVPR, and many companies are using its Isaac robotics platform. GAAP and non-GAAP gross margins were down sequentially due to a higher mix of new products and inventory provisions. Cash flow from operations was $14.5 billion, with $7.4 billion returned to shareholders. The Board of Directors approved an additional $50 billion share repurchase authorization. For the third quarter, total revenue is expected to be $32.5 billion, with GAAP and non-GAAP gross margins of 74.4% and 75%, respectively. Full-year gross margins are expected to be in the mid-70% range. Operating expenses are expected to grow in the mid-to-upper 40% range as the company develops its next generation of products.

The CFO also gave guidance for GAAP and non-GAAP other income and expense of $350 million and an expected tax rate of 17%. In the Q&A, Jensen Huang, CEO of NVIDIA, addressed the change to a Blackwell GPU mask, reassuring listeners that no functional changes were necessary and that functional samples of Blackwell are already being tested in a variety of system configurations. The change is expected to have minimal impact on production, which remains on track for a Q4 ramp. A variety of Blackwell-based systems were showcased at Computex, and the ecosystem is being enabled to begin sampling them. Huang then addressed a longer-term question.

In response to a question about customer return on investment and its impact on CapEx, NVIDIA CEO Jensen Huang discusses the company's focus on accelerated computing as a solution to the slowing of CPU scaling and the growing demand for computing. This transition is expected to result in lower costs and energy consumption. Additionally, the company has recently released new libraries to support this approach.

The first platform transition from general purpose computing to accelerated computing has resulted in significant cost savings, with some applications experiencing a 50x speed increase. This has enabled the use of large language models and deep learning to train on vast amounts of data, leading to the generative AI revolution. Generative AI is a new way of doing software, using data instead of human-engineered algorithms. It has impacted all layers of computing and allows for the development of remarkable applications.

The speaker discusses the current state of generative AI and its growth in scale and complexity, noting the increasing size of frontier models and the need for more compute and data to train them. He also points to the rise of new applications for generative AI, such as coding and recommender systems, and the potential for significant revenue in the cloud market.

NVIDIA is seeing a strong demand for its AI technology from countries looking to utilize their own data and build their own AI infrastructure. The company is also seeing growth opportunities in the sovereign AI market, where countries want to incorporate their own language and culture into their AI models. The momentum of generative AI is accelerating and there is high demand for both Blackwell and Hopper products.

Jensen Huang, CEO of NVIDIA, discusses the strong demand for their Hopper and Blackwell GPUs. He explains that the demand for Blackwell is due to the lack of GPU capacity among cloud service providers and the need for accelerated computing in data processing. He also mentions that many companies are renting out their GPUs to model makers and generative AI companies, who require them to create products.

Demand for Hopper is driven by the need for capacity that can be put to use immediately and by the race to be first to introduce revolutionary AI technology; customers also want Hopper now for operational and business reasons.

Jensen Huang explains that investing in NVIDIA infrastructure yields immediate returns and is the best ROI investment in computing infrastructure. He suggests comparing it to investing in general-purpose computing infrastructure, which is commoditized and has a lower ROI.

The speaker explains the benefits of investing in infrastructure for generative AI, such as cost savings, increased demand, and potential for future growth. They also suggest using NVIDIA's accelerated computing for building this infrastructure. The questioner asks about the shape of revenue growth, and the speaker mentions an increase in OpEx and purchase commitments as positive indicators.

Jensen Huang discusses the transition from general purpose computing to accelerated computing in data centers, with the use of GPUs becoming essential. He also mentions the need for GPUs in generative AI and explains that Blackwell comes in both air-cooled and liquid-cooled configurations.

The number of data centers implementing liquid cooling is increasing due to its cost-effectiveness and ability to increase AI throughput. The use of NVLink and the upcoming Grace Blackwell technology will further enhance this capability. CSPs are deploying both liquid cooling and traditional air cooling. Next year is expected to be a great year for the data center business, with Blackwell being a game-changer. The industry is going through two platform transitions: general-purpose computing shifting to accelerated computing, and human-engineered software transitioning to generative AI.

The speaker has two questions for Colette. The first is about the revenue from Blackwell in Q4 and whether it will be added on top of expected Hopper demand. The second question is about gross margins and whether the expected mid-70s for the year will result in a 71-72% gross margin for Q4. Colette confirms that Hopper demand will continue to grow in the second half and that Blackwell will ramp up in Q4. She also confirms the mid-70s gross margin for the full year and says there may be a slight difference in Q4 due to transitions and new product introductions.
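The implied Q4 margin in that exchange is a simple weighted-average back-of-envelope. A minimal sketch, with hypothetical figures for the first-half quarters and Q4 revenue (only the Q2 revenue of $30 billion, the Q3 guide of $32.5 billion at 75%, and the mid-70s full-year target come from the call):

```python
# Back-of-envelope: what Q4 non-GAAP gross margin is implied by a
# mid-70s full-year target. Q1 figures and Q4 revenue below are
# illustrative assumptions, not figures from the call.
q_revenue = {"Q1": 26.0, "Q2": 30.0, "Q3": 32.5, "Q4": 35.0}  # $B; Q1/Q4 assumed
q_margin = {"Q1": 0.789, "Q2": 0.757, "Q3": 0.75}             # Q1/Q2 assumed, Q3 guided

fy_target = 0.75  # "mid-70s" full-year gross margin
fy_revenue = sum(q_revenue.values())
known_gross_profit = sum(q_revenue[q] * q_margin[q] for q in q_margin)

# Full-year gross profit minus the first three quarters leaves Q4's share.
q4_gross_profit = fy_target * fy_revenue - known_gross_profit
q4_margin = q4_gross_profit / q_revenue["Q4"]
print(f"Implied Q4 gross margin: {q4_margin:.1%}")
```

Under these assumed inputs the implied Q4 margin lands in the low 70s, consistent with the 71-72% figure raised in the question.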

The speaker addresses a question about the company's revenue growth in different geographies, explaining that the disclosed information may not accurately reflect where the products eventually end up. They mention a shift in who the products are sold to and clarify that the China numbers include multiple industries.

The company has a high product cadence due to the increasing complexity and cost of AI models. They believe that by scaling these models, they will reach a level of usefulness that will lead to the next industrial revolution. They have the ability to design an AI factory every year because they have all the necessary parts.

In the upcoming year, the company plans to ship a record number of CPUs, GPUs, NVLink switches, ConnectX DPUs, BlueField DPUs, and InfiniBand products. Spectrum-X will bring AI to Ethernet and expand the company's market reach through partnerships with a broad set of ecosystem partners. The company's supply chain is disaggregated, allowing it to serve a wide range of customers, and it is focused on designing AI infrastructure while leaving integration to its partners.

Jensen Huang explains that the Blackwell Rack system is designed to be sold as separate components, rather than a complete rack, due to the varying needs of different data centers. The software is designed to work seamlessly across the entire rack, and the system components are integrated into an MGX modular system architecture. The supply chain and integration process is done close to the location of the data centers, with partnerships with ODMs and logistics hubs around the world.

NVIDIA's CEO, Jensen Huang, discussed the company's focus on accelerated computing and its impact on the data center industry. He highlighted the release of new libraries and their potential for opening new markets, such as data science and 5G wireless base stations. Huang emphasized the company's goal to be a technology provider rather than an integrator, and mentioned the strong demand for their products.

NVIDIA has made significant advancements in accelerating gene sequencing and protein structure prediction with Parabricks and AlphaFold2. They are also introducing Blackwell, an AI infrastructure platform that is a major step up from the previous Hopper generation. Blackwell is designed for massive AI workloads and can connect up to 144 GPUs in one rack with an aggregate bandwidth of 259 terabytes per second, delivering 3-5 times more AI throughput in power-limited data centers.
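The rack-scale figures quoted on the call can be sanity-checked with simple division; a quick sketch (the GPU count and aggregate bandwidth are from the call, the per-GPU result is derived):

```python
# Sanity check on the Blackwell rack figures quoted on the call:
# 144 GPUs in one rack with 259 TB/s of aggregate bandwidth.
gpus_per_rack = 144
aggregate_bw_tbps = 259  # TB/s across the rack

# Dividing aggregate bandwidth by GPU count gives the per-GPU share.
per_gpu_bw = aggregate_bw_tbps / gpus_per_rack
print(f"Per-GPU bandwidth: {per_gpu_bw:.1f} TB/s")  # ~1.8 TB/s
```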

NVIDIA's networking capabilities have expanded with NVLink for GPU scale-up, Quantum InfiniBand for supercomputing, and Spectrum-X for AI on Ethernet. The company is also investing in generative AI technology, which is being used across internet services, start-ups, and sovereign AI infrastructure. The NVIDIA AI Enterprise platform, which includes NeMo, NIMs, NIM Agent Blueprints, and AI Foundry, helps enterprises customize and deploy AI models. It is priced at $4,500 per GPU per year, making it a cost-effective option for businesses looking to incorporate AI into their operations.

NVIDIA expects significant growth in their software TAM (total addressable market) as the number of CUDA-compatible GPUs installed increases. By the end of the year, they anticipate a $2 billion run rate for their software. The call has now ended.
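Together, the run-rate and per-GPU pricing imply a rough scale for the paid installed base. A hypothetical back-of-envelope (both inputs are from the call; the license count is derived and ignores discounts or other SKUs):

```python
# Rough implied scale of a $2B annualized software run rate at
# $4,500 per GPU per year for NVIDIA AI Enterprise.
run_rate = 2_000_000_000  # $/year, anticipated by year-end
price_per_gpu = 4_500     # $/GPU/year list price

implied_gpu_licenses = run_rate / price_per_gpu
print(f"Implied GPU licenses: {implied_gpu_licenses:,.0f}")  # ~444,444
```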

This summary was generated with AI and may contain some inaccuracies.
