$NVDA Q3 FY2025 AI-Generated Earnings Call Transcript Summary
The paragraph introduces NVIDIA's Third Quarter Earnings Call for fiscal 2025, conducted by Joel, the conference operator. Stewart Stecker welcomes everyone and confirms the participation of Jensen Huang, CEO, and Colette Kress, CFO. The call is being webcast live on NVIDIA's Investor Relations website and will be available for replay. The content is proprietary, and forward-looking statements may be made during the call, subject to risks and uncertainties outlined in various SEC filings. The call will also discuss non-GAAP financial measures, with reconciliations available on NVIDIA's website. Stewart hands the call over to Colette Kress.
In Q3, NVIDIA achieved record revenue of $35.1 billion, a 17% sequential increase and a 94% year-on-year rise, surpassing their $32.5 billion outlook. This growth was driven by strong performance across market platforms, particularly in the Data Center segment, which saw a 112% year-on-year revenue increase to $30.8 billion. Exceptional demand for NVIDIA's H200, noted for its fast inference performance and efficiency, contributed to the revenue surge. Cloud service providers, making up about half of data center sales, doubled their revenue year-on-year by deploying NVIDIA infrastructure at scale. H200-powered cloud instances are now offered by AWS, CoreWeave, and Microsoft Azure, with Google Cloud and OCI soon to follow. Additionally, regional cloud revenue, consumer internet revenue, and inference platform growth all saw significant increases, bolstered by NVIDIA's Hopper and Ampere infrastructures supporting AI advancements.
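The headline growth figures above can be cross-checked with a little arithmetic. Note that the prior-period revenue values below are implied by the stated percentages, not given in the summary:

```python
# Sanity check of the stated Q3 growth rates.
q3_revenue = 35.1   # $B, stated
seq_growth = 0.17   # 17% sequential increase, stated
yoy_growth = 0.94   # 94% year-on-year increase, stated

# Implied prior-period revenue (derived, not stated in the summary)
implied_q2 = q3_revenue / (1 + seq_growth)
implied_prior_year_q3 = q3_revenue / (1 + yoy_growth)

print(f"Implied Q2 revenue: ${implied_q2:.1f}B")               # ~ $30.0B
print(f"Implied prior-year Q3: ${implied_prior_year_q3:.1f}B")  # ~ $18.1B

# Data Center segment: $30.8B of the $35.1B total
dc_share = 30.8 / q3_revenue
print(f"Data Center share of revenue: {dc_share:.0%}")          # ~ 88%
```

The derived figures line up with the stated percentages, and they show how heavily the quarter leans on the Data Center segment.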
The paragraph highlights NVIDIA's substantial advancements in software and GPU technology, particularly emphasizing the performance improvements and optimization of their Hopper and Blackwell systems. NVIDIA's innovations have significantly boosted inference throughput and reduced token processing times, with further enhancements expected from upcoming releases like NVIDIA NIM. The Blackwell system, now widely distributed to major partners including OpenAI and large enterprises, offers a comprehensive AI infrastructure solution adaptable to various configurations. Oracle and Microsoft have announced plans to leverage Blackwell's capabilities, with Oracle creating Zettascale AI Cloud computing clusters and Microsoft preparing to offer Blackwell-based cloud instances. The demand for Blackwell is exceptionally high, and NVIDIA is working to scale supply to meet customer needs.
NVIDIA's new Blackwell GPUs have significantly outperformed previous models, providing a 2.2 times performance increase over Hopper and reducing the cost of compute by requiring fewer GPUs for benchmarks like GPT-3. The Blackwell architecture, with its NVLink Switch, also offers up to 30 times faster inference performance, making it ideal for advanced inference applications. Major AI companies like Google, Meta, Microsoft, and OpenAI, along with numerous AI-native startups, are utilizing NVIDIA's technology to drive innovation. The focus is now shifting towards Enterprise AI and Industrial AI, with companies like Cadence, Cloudera, and Salesforce working with NVIDIA to develop applications for these fields. Consulting firms like Accenture and Deloitte are also leveraging NVIDIA AI to support global enterprise integration. Accenture, specifically, has created a new business group with 30,000 professionals trained in NVIDIA AI to aid this expansion.
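The cost-of-compute claim follows directly from the stated performance multiple: if per-GPU throughput rises 2.2x, a fixed workload needs proportionally fewer GPUs. A minimal sketch, using a hypothetical fleet size (the 1,000-GPU figure is illustrative, not from the call):

```python
# Illustrative: fewer GPUs needed for a fixed workload at higher throughput.
hopper_gpus = 1000   # hypothetical fleet size for a fixed benchmark workload
speedup = 2.2        # stated Blackwell-over-Hopper performance gain

blackwell_gpus = hopper_gpus / speedup
print(f"GPUs needed with Blackwell: {blackwell_gpus:.0f}")  # ~455
```

Roughly 455 Blackwell GPUs would match the hypothetical 1,000-Hopper workload, which is the mechanism behind the "reducing the cost of compute" claim.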
Accenture is leveraging NVIDIA-powered AI to enhance marketing efficiency. NVIDIA's AI applications are being widely adopted, leading to expected significant revenue growth for NVIDIA AI Enterprise. The company projects its software, service, and support revenue will exceed $2 billion annually by year-end. Industrial AI and robotics are evolving, with NVIDIA Omniverse helping major manufacturers like Foxconn improve efficiency through digital twins. Foxconn has reported significant energy savings in its Mexico facility. Despite export controls, NVIDIA's Data Center revenue in China is growing, though still below historical levels, and the market is expected to remain competitive. NVIDIA is also advancing AI initiatives globally as countries adopt their technology for industrial transformation.
The paragraph highlights significant advancements and partnerships involving NVIDIA's AI technologies. In India, Tata Communications and Yotta Data Services are expanding NVIDIA GPU deployments significantly, while companies like Infosys and Wipro are adopting NVIDIA AI Enterprise for large-scale developer upskilling. In Japan, SoftBank is building an AI supercomputer and upgrading telecommunications with NVIDIA's platforms. Major Japanese companies and consulting firms are also integrating NVIDIA AI. Revenue growth is noted in networking solutions like InfiniBand and Ethernet switches, despite a sequential dip. NVIDIA Spectrum-X Ethernet for AI has seen a threefold increase in revenue, distinguishing itself from traditional Ethernet with advanced scalability and efficiency for AI applications.
In the latest quarter, NVIDIA saw significant growth across its gaming, professional visualization, and automotive segments. Gaming revenue reached $3.3 billion, driven by strong sales of GeForce RTX GPUs and new AI PCs from ASUS and MSI. Professional visualization revenue was $486 million, boosted by demand for AI applications and NVIDIA RTX workstations. Automotive revenue hit a record $449 million, fueled by the adoption of NVIDIA Orin in self-driving technology and electric SUVs from Volvo Cars. Overall, NVIDIA's GPUs continue to play a crucial role in advancing gaming, AI, and automotive technologies.
The paragraph discusses the company's financial performance and outlook, highlighting a decrease in gross margins due to a shift towards more complex systems in their Data Center. Operating expenses rose by 9% due to development costs. In Q3, $11.2 billion was returned to shareholders through share repurchases and dividends. For Q4, projected revenue is $37.5 billion, driven by continued demand for Hopper architecture and the initial rollout of Blackwell products. Gaming revenue is expected to decline due to supply constraints, while gross margins are projected to be around 73%. The Blackwell AI infrastructure is customizable with various NVIDIA-built chips and different networking options, and as it scales, gross margins are expected to stabilize. Operating expenses are projected to be around $4.8 billion (GAAP) and $3.4 billion (non-GAAP). Operating expense growth reflects investment in data centers used for hardware and software development, along with new product introductions.
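The Q4 outlook figures above imply a sequential growth rate and a gross-profit level that are easy to derive (illustrative arithmetic only, not guidance from the call itself):

```python
# Implied Q4 figures from the stated outlook.
q3_revenue = 35.1    # $B, reported
q4_revenue = 37.5    # $B, projected
gross_margin = 0.73  # ~73%, projected

seq_growth = q4_revenue / q3_revenue - 1
gross_profit = q4_revenue * gross_margin

print(f"Implied sequential growth: {seq_growth:.1%}")   # ~6.8%
print(f"Implied gross profit: ${gross_profit:.1f}B")    # ~$27.4B
```

The implied ~7% sequential growth is markedly slower than Q3's 17%, consistent with the noted gaming supply constraints and the early stage of the Blackwell ramp.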
The paragraph outlines financial expectations and upcoming events for the company. It estimates GAAP and non-GAAP other income and expenses to be around $400 million, with a tax rate of approximately 16.5%. The company will attend the UBS Global Technology and AI Conference on December 3rd in Scottsdale and participate in CES in Las Vegas, with a keynote by Jensen Huang on January 6th and a Q&A for analysts on January 7th. Their Q4 fiscal 2025 earnings call is on February 26th, 2025. In a Q&A session, C.J. Muse of Cantor Fitzgerald asks about scaling for large language models, to which Jensen Huang responds that foundation model pre-training scaling continues and highlights the discovery of post-training scaling as another method.
The paragraph discusses advancements in AI training and inference, highlighting three types of scaling: pre-training, post-training, and inference-time scaling. It mentions the innovation of test-time scaling in models like OpenAI's o1, which improves answer quality with longer processing. This has led to increased demand for infrastructure, with the industry transitioning from using 100,000 Hoppers to 100,000 Blackwells for training models. The paragraph also notes a rise in inference demand and AI-native companies, with significant enterprise adoption of agentic AI. The company claims to be the largest inference platform due to its extensive infrastructure.
In the paragraph, Toshiya Hari from Goldman Sachs asks Jensen Huang about NVIDIA's ability to execute its roadmap, including reports of heating issues, and questions about future products like Blackwell Ultra and Rubin. Jensen Huang responds by stating that the production of Blackwell is proceeding well, exceeding previous estimates, and highlighting the efforts of the supply chain team. He acknowledges that demand currently exceeds supply due to the burgeoning interest in generative AI and new foundation models. Huang emphasizes the strong demand for Blackwell and affirms that execution is progressing smoothly.
The paragraph discusses the extensive engineering efforts involved in building and integrating AI supercomputers with partners such as Dell and CSPs like CoreWeave, Oracle, Microsoft, and Google, which are competing to be first. It highlights the complexity of integrating the Blackwell systems into custom data centers globally and mentions the successful increase in the supply chain, with seven custom chips developed for the Blackwell systems. Despite starting from zero shipments last quarter, the company plans to ship systems worth billions this quarter, showcasing a remarkable ramp-up, with many companies participating in their supply chain.
The paragraph highlights the strong partnerships with various companies involved in the development and ramping up of Blackwell. It emphasizes adherence to an annual roadmap to enhance platform performance, which helps to reduce costs for AI training and inferencing, making AI more accessible. In power-limited data centers, high performance per watt is essential, as it translates to greater revenue for partners. Therefore, maintaining this annual plan is crucial for both reducing costs and maximizing revenue.
In the paragraph, Timothy Arcuri from UBS asks about the expected performance and impact of Blackwell, a product mentioned by Jensen Huang. Arcuri inquires if Blackwell will surpass Hopper in sales by April and about the expected pressure on gross margins associated with Blackwell's ramp-up. Colette Kress responds confirming that gross margins are expected to be in the low-70s during the initial ramp-up phase of Blackwell but will improve to the mid-70s quickly after. Jensen Huang adds that demand for Hopper will continue into the following year, but more Blackwells will be shipped in the next quarter compared to the current one.
The paragraph discusses two fundamental shifts in computing: transitioning from CPU-based coding to GPU-powered machine learning, and the development of generative AI. The modernization of data centers worldwide to support machine learning is described as a trillion-dollar transformation, leading to the emergence of "AI factories" that operate continuously, similar to power plants. These factories generate AI and represent a significant new industry that is expected to grow for several years. The speaker emphasizes the early stages of these trends and anticipates their continued expansion.
In the discussion, Colette Kress addresses a question from Vivek regarding NVIDIA's potential to achieve mid-70s gross margins in the second half of 2025, affirming that it is a reasonable and possible goal depending on the ramp mix. Jensen Huang adds that there will be no reduction in demand ("digestion") until global data centers are modernized, as they currently rely on outdated CPU-based infrastructure. He emphasizes the necessity for data centers to transition towards supporting machine-learning and generative AI, predicting this modernization could take about four years, aligning with the growth trajectory of the IT sector.
The paragraph discusses the anticipated growth and modernization of data centers by 2030, highlighting the emerging role of generative AI as a transformative, new market segment. It likens the arrival of innovations like OpenAI to the impact of the iPhone, noting that such technologies do not replace existing ones but create new opportunities. The text outlines the rise of AI-native companies capitalizing on this platform shift, such as those offering digital artist intelligence, basic intelligence, legal intelligence, and digital marketing intelligence. It predicts continued modernization of IT and computing, alongside the creation of AI factories, forming a new industry focused on producing artificial intelligence. The conversation then transitions to a question from Stacy Rasgon of Bernstein Research.
In the paragraph, Colette Kress addresses questions about gross margins and product behavior into the next quarter. She clarifies that their low-70s gross margins range between 71% to 72.5%, potentially reaching mid-70s by year's end, due to improvements in yields and product quality. Regarding the Hopper product, significant growth in orders and rapid deployment of the H200 model is noted, making it the fastest-growing product. Hopper will continue to be sold in various configurations, including in China, while simultaneously ramping up the Blackwell product in Q4.
The paragraph discusses the potential growth of the inference market and its significance in AI technology. Joseph Moore from Morgan Stanley asks about the prospect of inference growth outpacing training in the next year and the use of Hopper chips for inference. Jensen Huang expresses the hope that inference will become widespread, utilized by companies across various departments like marketing and engineering. He envisions a future where AI is embedded in everyday computer experiences, with numerous AI-native startups contributing to this ecosystem. Huang mentions NotebookLM, a Google application he enjoys using, to illustrate the evolving landscape of AI applications.
The paragraph discusses the emergence of "physical AI," a new genre of AI that understands and predicts the physical world, which is beneficial for industrial AI and robotics. Omniverse was created to facilitate the development of physical AI by using synthetic data generation and reinforcement learning with physics feedback. The goal is to improve inference, which is challenging due to the need for high accuracy, high throughput, low cost, and low latency. As AI models become larger and multimodal, the complexity of maintaining these requirements increases.
The paragraph discusses NVIDIA's strong innovation ecosystem, highlighting how its architecture, particularly CUDA, allows for rapid innovation and ease of deployment across various systems globally. In a conference call, Aaron Rakers from Wells Fargo asks about NVIDIA's data center business, particularly the networking segment, which saw a 15% sequential decline despite strong demand and multiple cloud design wins. Colette Kress responds, emphasizing the significant year-over-year growth in networking since acquiring Mellanox, and underlining its importance to their data center strategy, expecting continued success.
In the paragraph, Colette Kress addresses two questions from Atif Malik. Regarding the first question about sovereign AI demand, she explains that the demand remains strong and continues to grow globally, particularly with the rise of generative AI. Countries are building AI models tailored to their languages and cultures, which presents growth opportunities, especially in Europe and the Asia-Pacific region. For the second question about the supply-constrained situation in gaming, Kress mentions efforts to ramp up production across various gaming products to meet demand, without specifically stating if they are reallocating supply toward data centers.
The paragraph discusses the challenge of accelerating gaming supply to meet market demand this quarter, with hopes to improve by the next calendar year. Ben Reitzes from Melius Research asks about sequential growth and potential reacceleration with improved supply, referring to a product named Blackwell. He also inquires about potential changes in tariffs with the new US administration and its impact on the company's China business. Jensen Huang states they guide quarterly, indicating a focus on current shipments and coordination with suppliers for Blackwell. He assures support for any decisions by the new administration while maintaining the company's operations.
In the paragraph, Jensen Huang discusses the distribution of computational resources in AI ecosystems, noting that most compute currently focuses on pre-training foundation models. He explains that while new post-training techniques aim to lower inference costs, there are limits to how much can be computed in advance, so real-time processing and adaptive thinking remain necessary. Huang highlights the growing scale of AI, especially with multimodal foundation models requiring massive amounts of video data for training.
The paragraph discusses the rapid growth and transformation driven by NVIDIA's computing technology, emphasizing the shift from traditional coding to machine learning on GPUs. This is resulting in the overhaul of data center infrastructure to accommodate AI and machine learning, referred to as Software 2.0. The text highlights the surge in demand for NVIDIA's products, particularly due to the expansion of AI models, startups, and inference services. It also notes the emergence of new scaling laws with innovations like OpenAI's o1, which heavily rely on computing power. Overall, AI is reshaping industries and business operations globally.
The paragraph discusses the growing integration of AI and robotics in the workforce, leading to improved job performance and increased investments in industrial robotics due to advancements in physical AI. It highlights the rising demand for training infrastructure as researchers use extensive video and synthetic data to train AI models. The global trend towards developing national AI infrastructures is emphasized, indicating the significant impact of AI across various sectors. NVIDIA is positioned to capitalize on these opportunities due to its expertise and comprehensive infrastructure solutions, catering to diverse AI and robotics needs across different cloud and edge environments.
This summary was generated with AI and may contain some inaccuracies.