$NVDA Q1 2026 AI-Generated Earnings Call Transcript Summary

NVDA

May 28, 2025

The paragraph introduces NVIDIA Corporation's First Quarter Fiscal 2026 Financial Results Conference Call. Sarah, the conference operator, begins by welcoming everyone and providing instructions for the Q&A session. Toshiya Hari then introduces the call participants, including Jensen Huang and Colette Kress from NVIDIA. He notes that the call is being webcast live and will be available for replay. The content is proprietary to NVIDIA and cannot be reproduced without consent. The call may include forward-looking statements subject to risks and uncertainties, and participants are directed to specific financial disclosures for more details. Statements are based on current information as of May 28, 2025, with no obligation for updates unless legally required. Non-GAAP financial measures will also be discussed.

The paragraph discusses the company's strong quarterly performance, with revenue up 69% year-over-year to $44 billion, primarily driven by a significant rise in data center revenue. However, the company is facing challenges from new US export controls on its H20 data center GPU, which was designed for the Chinese market. These controls prevented the shipment of $2.5 billion of H20 revenue in the quarter and resulted in a $4.5 billion write-down of inventory and purchase commitments, though the company was able to reuse some materials, slightly reducing the anticipated charge. The company is assessing options to comply with the new export rules, as losing access to the growing China AI accelerator market could adversely affect its business and benefit competitors. The Blackwell ramp has notably contributed to the surge in data center revenue.

In the latest quarter, Blackwell accounted for approximately 70% of data center compute revenue, with the transition from Hopper largely complete. The introduction of the GB200 NVL72 marked a significant architectural shift for scaling up data center workloads, improving cost efficiency per inference token. With improved manufacturing yields, GB200 NVL racks are now widely available. Major hyperscalers are deploying around 1,000 NVL72 racks weekly, with Microsoft running tens of thousands of Blackwell GPUs and planning to expand significantly with OpenAI as a key customer. The transition to Blackwell Ultra is underway, with GB300 systems now sampling; GB300 maintains design compatibility with GB200 to ensure a seamless transition and offers a roughly 50% boost in dense FP4 inference compute performance along with increased HBM capacity. The company is committed to its annual product cadence, with a roadmap extending to 2028, and reports significant increases in inference demand, with Microsoft processing over 100 trillion tokens in Q1, a fivefold year-over-year increase.

The paragraph discusses the rapid growth and demand for Azure AI services, highlighting advancements in AI inference capabilities powered by NVIDIA's technology. It mentions that inference-serving startups have significantly increased their token generation rates and revenues. The integration of NVIDIA's Blackwell NVL72 has enhanced AI inference throughput for new reasoning models by 30x. It also notes improvements in chatbot latency for financial institutions and highlights impressive performance results in the MLPerf inference benchmark. Additionally, NVIDIA's ongoing software optimizations continue to enhance performance, with Blackwell's performance increasing by 1.5x within a month. The expansion of AI factories, now powered by twice as many GPUs, underscores the accelerating pace of AI deployments.

NVIDIA Corporation is facilitating AI factory deployments with its full-stack architecture, supporting strategic sovereign cloud initiatives in places like Saudi Arabia, Taiwan, and the UAE. The company anticipates significant future infrastructure demand as AI transitions from generative to agentic forms, which will affect industries globally. NVIDIA's Llama Nemotron models enhance agentic AI capabilities, improving accuracy by 20% and inference speed fivefold. Partners like Accenture and Microsoft use these models to transform business practices. NVIDIA's NeMo microservices, widely available, have notably improved performance for companies like Cisco and Nasdaq. Furthermore, NVIDIA's parallelism techniques have accelerated model training times.

The paragraph discusses NVIDIA Corporation's AI applications and partnerships, highlighting a collaboration with Yum! Brands to implement AI in restaurants and cybersecurity work with companies like Check Point and CrowdStrike. It also covers growth in networking revenue, emphasizing NVIDIA's NVLink technology, which significantly boosts data bandwidth. NVLink Fusion was launched, allowing tailored connectivity with partners like MediaTek and Qualcomm. Additionally, NVIDIA's enhanced Ethernet offerings have achieved significant revenue growth, with broad adoption across major cloud service providers and consumer Internet companies.

In this quarter, the company added Google Cloud and Meta as customers for its Spectrum-X and Quantum-X silicon photonics switches, enhancing AI factory scaling capabilities. China data center revenue fell due to export licensing controls, while billings to Singapore, though significant, mostly reflect orders from US-based customers. Gaming revenue hit a record $3.8 billion, with strong adoption of the Blackwell architecture. The company also expanded its AI PC offerings, introducing the GeForce RTX 5060 and 5060 Ti for desktops and laptops, which offer improved performance at competitive prices.

The paragraph highlights recent advancements and announcements from NVIDIA Corporation and Nintendo. Nintendo's soon-to-launch Switch 2 uses NVIDIA's AI technologies and RTX GPUs for improved gaming performance; to date, Nintendo has shipped over 150 million Switch consoles. Professional visualization revenue saw a 19% year-on-year increase. NVIDIA's AI workstations, DGX Spark and DGX Station, offer significant computing power and are set for release later this year. NVIDIA's Omniverse platform is being adopted by major companies for industrial use cases, such as reducing assembly defects and speeding up simulations. In the automotive sector, NVIDIA reported a 72% year-on-year revenue increase, driven by self-driving technology and strong demand for NEVs.

The paragraph discusses NVIDIA's partnerships and developments in AI and robotics, including collaborations with GM and Mercedes-Benz on next-generation vehicles as well as work on humanoid robots. It mentions new foundation models, Isaac GR00T and NVIDIA Cosmos, used by companies like 1X and Uber, and GE HealthCare's use of NVIDIA's platform for robotic imaging and surgery, highlighting the significant future expected in robotics and autonomous vehicles. Financially, NVIDIA reported GAAP and non-GAAP gross margins of 60.5% and 61%, respectively, noting the $4.5 billion charge affecting these figures. Operating expenses rose due to increased compensation and infrastructure investments. In Q1, NVIDIA returned $14.3 billion to shareholders via repurchases and dividends, emphasizing the importance of capital return in its strategy.

The company expects total revenue for the second quarter to be around $45 billion, with growth across all platforms; in the data center, the continuing Blackwell ramp is partially offset by the decline in China revenue, and a loss of about $8 billion in H20 revenue is anticipated for the quarter. GAAP and non-GAAP gross margins are projected at around 71.8% and 72%, respectively, with improvements expected from better Blackwell profitability. Operating expenses are forecast at approximately $5.7 billion on a GAAP basis and $4 billion on a non-GAAP basis, with full-year operating expense growth in the mid-thirties percent range for fiscal year 2026. Other income and expense should yield income of about $450 million, and the tax rate is estimated at 16.5%. Upcoming financial community events include conferences and summits in San Francisco, London, and Paris, with the second-quarter earnings call set for August 27.
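As a rough back-of-the-envelope sketch (an inference from the guidance midpoints summarized above, not company guidance), the implied non-GAAP profitability works out roughly as follows:

```python
# Illustrative arithmetic only: implied Q2 FY2026 non-GAAP profitability,
# derived from the guidance midpoints summarized above (amounts in $B).

revenue = 45.0        # guided "around $45 billion"
gross_margin = 0.72   # non-GAAP gross margin guided at roughly 72%
opex = 4.0            # non-GAAP operating expenses, approximately $4 billion
other_income = 0.45   # other income and expense, roughly $450 million of income
tax_rate = 0.165      # estimated tax rate of 16.5%

gross_profit = revenue * gross_margin            # ~32.4
operating_income = gross_profit - opex           # ~28.4
pretax_income = operating_income + other_income  # ~28.85
net_income = pretax_income * (1 - tax_rate)      # ~24.1

print(f"Implied non-GAAP gross profit:     ${gross_profit:.1f}B")
print(f"Implied non-GAAP operating income: ${operating_income:.1f}B")
print(f"Implied non-GAAP net income:       ${net_income:.1f}B")
```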

The paragraph discusses the impact of US export controls on the AI market, particularly in relation to China. It highlights that China is a major AI market with significant talent and innovation capabilities. The US export ban on the Hopper-based H20 data center GPU has hindered American companies' ability to compete in China, allowing Chinese innovation to flourish independently. The speaker argues that US export restrictions may strengthen Chinese chipmakers and threaten America's global AI leadership, and emphasizes the importance of winning over AI developers to lead in AI technology, citing notable Chinese models such as DeepSeek and Qwen. The paragraph also touches on the emergence of reasoning AI, exemplified by models like DeepSeek R1, which improve over time and enable smarter AI applications.

The paragraph discusses the increasing computational demands of reasoning models like DeepSeek R1, emphasizing the importance of U.S. infrastructure and open-source AI collaboration to maintain leadership in AI. It highlights ongoing projects and investments in onshore manufacturing in the U.S., including TSMC's chip plants in Arizona and partnerships with Foxconn and Wistron for AI supercomputer factories in Texas. The paragraph underscores commitments to strengthen America's AI manufacturing capabilities and the scale of supercomputer production efforts.

The paragraph discusses President Trump's decision to rescind a previous AI diffusion rule and implement a new policy to promote US AI technology with trusted partners. During his Middle East tour, he announced significant infrastructure investments, including an AI project in Saudi Arabia and a campus in the UAE, aiming to bolster US technology leadership and economic benefits such as job creation and trade deficit reduction. The narrative highlights NVIDIA Corporation's focus on AI as a core component of future industrial growth and mentions various countries building national AI platforms. NVIDIA has launched AI infrastructure projects in several countries, seeing sovereign AI as a new growth area. The paragraph concludes with a transition to a Q&A session of a conference call led by Toshiya Hari, with Joe Moore from Morgan Stanley being the first to ask a question about scaling up inference models.

In the paragraph, Jensen Huang of NVIDIA discusses the company's capacity to meet the increasing demand for AI inference systems. He highlights that their current technology, particularly the Grace Blackwell NVL72 systems, is designed to excel at reasoning AI tasks. These tasks are more complex than simple chatbot interactions, involving extensive token generation and step-by-step problem solving. Grace Blackwell is significantly faster and more efficient than previous generations like Hopper, providing a substantial improvement in inference performance. This innovation reduces costs while enhancing response quality and service. Huang emphasizes the redesign of supercomputers to achieve these advancements and notes that they are now in full production. The paragraph ends with the operator, Sarah, introducing Vivek Arya from Bank of America Securities for the next question.

The paragraph discusses the financial impact of the China export controls, reconciling a figure of roughly $15 billion mentioned previously. Colette Kress clarifies that $4.6 billion of H20 revenue was recognized in Q1, while another $2.5 billion of Q1 shipments could not be completed; for Q2, about $8 billion of planned H20 orders cannot be fulfilled, and China data center revenue is expected to decrease significantly. The unshippable inventory and purchase commitments drove the $4.5 billion write-down. Looking forward, the company estimates the China AI accelerator market at roughly $50 billion, a total addressable market it cannot currently serve for lack of a compliant product. Jensen Huang adds that AI technology will transform every industry, suggesting potential growth despite these challenges.
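For reference, a minimal sketch of how those H20 figures add up to the roughly $15 billion previously mentioned (the individual amounts come from the summary above; the reconciliation itself is an illustration, not a company statement):

```python
# Illustrative reconciliation of the H20 figures discussed above (amounts in $B).

recognized_q1 = 4.6    # H20 revenue recognized in Q1 before the new export controls
unshipped_q1 = 2.5     # Q1 H20 shipments that could not be completed
unfulfilled_q2 = 8.0   # planned Q2 H20 orders that cannot be fulfilled

total = recognized_q1 + unshipped_q1 + unfulfilled_q2
print(f"Total H20 revenue discussed across Q1 and Q2: ~${total:.1f}B")  # ~15.1, consistent with the ~$15B figure
```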

The paragraph discusses the early stages of AI adoption across various industries, highlighting AI as a form of infrastructure essential to every sector and country. It emphasizes the role of AI in developing technology and producing tokens, and presents inference as a significant compute workload. The United States is noted as a leader in cloud-based AI, but there's an emphasis on integrating AI into on-premises enterprise systems due to the challenges of moving all company data to the cloud. It mentions new AI products, such as the RTX Pro enterprise AI server and DGX systems, designed for enterprise and developer use, and predicts a future where telecom infrastructures will be software-defined and AI-based.

The paragraph discusses the early stages of 6G infrastructure development, which will heavily rely on AI. It highlights the future integration of AI factories with traditional manufacturing to operate and enhance products, and the emergence of companies focused on robotics and AI development. The conversation shifts to a Q&A, where C.J. Muse asks NVIDIA's Jensen Huang about large GPU cluster investments, mentioning examples such as Saudi Arabia, the UAE, Oracle, and xAI. Huang confirms increasing orders and supply chain enhancements and notes global AI infrastructure expansion, with around 800 AI factories being planned.

The paragraph discusses the emerging field of digitalized intelligence as an essential infrastructure akin to electricity and the Internet. It emphasizes that intelligence is crucial for all industries and countries, predicting that the development of this infrastructure is just beginning. The text highlights the need for "factories" to produce sophisticated forms of intelligence, such as reasoning AI and advanced agents that collaborate to solve problems. The conversation anticipates further announcements regarding this infrastructure in the future. Then, the dialogue shifts to Sarah introducing a question from Ben Reitzes, directed to Colette, asking for clarification regarding specific guidance.

The paragraph covers Ben Reitzes' question about the roughly $8 billion H20 impact embedded in the second-quarter outlook, which he notes is about $3 billion more than anticipated; to still reach the roughly $45 billion revenue goal, the rest of the business must be performing $2 to $3 billion better than expected, implying the non-China segment is exceeding expectations. He also asks how the lifting of the AI diffusion rule affects future growth. Colette Kress confirms the $8 billion estimate for what H20 would have contributed absent the export controls and cites Blackwell growth and available supply as factors in the guidance, with Jensen addressing the remaining points.

Jensen Huang discusses four positive developments in AI from the year. First, there's been a significant increase in demand for reasoning AI, which is becoming crucial for problem-solving despite initial concerns about its accuracy. Second, the lifting of AI diffusion restrictions under President Trump aims to boost the U.S. position in the global AI race by promoting the use of American technology infrastructure. Third, the importance of AI as essential infrastructure akin to electricity and the Internet is being recognized globally, opening up new opportunities. Lastly, enterprise AI, especially agentic AI, is proving to be transformative with its ability to understand complex instructions, solve problems, and utilize tools effectively.

The paragraph discusses the readiness of enterprise AI to take off, following years of developing a computing system that integrates enterprise AI and IT stacks, highlighted by the announcement of the RTX Pro enterprise server. Major IT companies are collaborating in this effort, focusing on the three pillars of enterprise IT: compute, storage, and networking. The importance of industrial AI is also emphasized, as global manufacturing and new plant constructions align with emerging technologies in AI, robotics, and Omniverse. This development necessitates extensive data training to create physical AI systems. Four key drivers are propelling this transformation. Additionally, Sarah mentions a question from Timothy Arcuri of UBS, who inquires about the ability to ship a new modified version of the H20 into China and whether it's currently being built but not shippable in fiscal Q2.

The paragraph discusses NVIDIA's situation regarding export controls that limit their ability to ship products, specifically Hopper, into China. Jensen Huang, the CEO, expresses trust in the president's vision and acknowledges the stringent limits that currently exist. While NVIDIA is considering potential products to serve the Chinese market within these constraints, nothing is currently available or planned. In another part of the discussion, the focus shifts to NVIDIA's networking business, where there is notable strength due to the adoption of Ethernet solutions at cloud service providers (CSPs). Jensen mentions three or possibly four networking platforms, highlighting NVLink, which is designed to scale up computers, a challenging endeavor compared to scaling out.

The paragraph discusses NVIDIA's advancements in networking technologies, highlighting the NVLink platform and the enhancements made to Ethernet through Spectrum-X. These advancements allow for improved performance and utilization of AI clusters, with utilization rates rising from 50% to as high as 90%. This improvement can significantly increase the value of expensive clusters. Additionally, the BlueField platform is mentioned for its role in control plane operations, offering high performance and multi-tenant support. Overall, the company is experiencing strong growth across its networking platforms, and this progress marks the beginning of a new phase of growth for NVIDIA.

The paragraph highlights NVIDIA Corporation's significant growth and leadership in AI technology. The company is advancing in various AI areas, including inference, reasoning agents requiring substantial computing power, and enterprise AI deployments. NVIDIA is prepared to modernize global IT infrastructure with products like RTX Pro and DGX systems, suited for both on-premises and cloud environments. The company is playing a crucial role in industrial AI and AI infrastructure development, with products like Omniverse and Isaac GR00T powering next-generation factories and robotics. NVIDIA's commitment to AI is evident as nations invest in AI infrastructure akin to past investments in electricity and the internet. The paragraph concludes by inviting stakeholders to join NVIDIA's events in Europe, where NVIDIA's CEO will discuss advancements in quantum-GPU computing and robotics at GTC Paris during VivaTech.

This summary was generated with AI and may contain some inaccuracies.