$NVDA Q1 2025 AI-Generated Earnings Call Transcript Summary
The operator welcomes everyone to NVIDIA's First Quarter Earnings Call and introduces the speakers. The call is being webcast and may contain forward-looking statements. The speakers will also discuss non-GAAP financial measures and upcoming events.
In the upcoming weeks, NVIDIA's CEO will deliver a keynote at a technology trade show and present at a technology conference. The company had a record-breaking quarter with revenue of $26 billion, driven by strong demand for their Hopper GPU computing platform. Data center revenue was up 23% sequentially and 427% year-on-year, with large cloud providers representing a significant portion. NVIDIA's AI infrastructure offers a strong return on investment for cloud providers, leading to increased revenue and customer growth.
Leading LLM companies such as OpenAI, Adept, Anthropic, and others are utilizing NVIDIA AI in the cloud for their projects. The Data Center segment saw strong growth this quarter, driven by enterprises, including Tesla's expansion of its AI training cluster. The use of NVIDIA AI infrastructure has led to breakthrough performance in autonomous driving software. Automotive and consumer internet companies are expected to be among the largest growth verticals for Data Center. Meta's Llama 3 large language model, trained on 24,000 H100 GPUs, has kickstarted a wave of AI development and is powering their new AI assistant, Meta AI. As generative AI becomes more prevalent in consumer internet applications, there will be increased demand for AI compute for both training and inference. Inference has driven about 40% of Data Center revenue over the past four quarters, and both training and inference are growing significantly. Large clusters, such as those used by Meta and Tesla, are crucial for AI production and are referred to as AI factories.
NVIDIA is seeing an increasing demand for next-generation data centers that host advanced accelerated computing platforms for AI. They have worked with over 100 customers to build AI factories of various sizes, with some reaching up to 100,000 GPUs. The company's Data Center revenue is diversifying globally as countries invest in Sovereign AI, with nations building up their own domestic computing capacity through various models. NVIDIA's end-to-end offerings and partnerships allow these countries to jumpstart their AI ambitions, with Sovereign AI revenue estimated to reach high single-digit billions this year.
NVIDIA has seen a significant decrease in revenue in China due to export control restrictions. Its Hopper GPU architecture has been in high demand, with the new H200 nearly doubling the inference performance of the H100. The company is also working on improving its AI infrastructure for serving models, but supply of the H100 and H200 is still limited. The highly anticipated Grace Hopper Superchip is now shipping in volume.
At the International Supercomputing Conference, NVIDIA announced that nine new supercomputers are using their Grace Hopper technology, providing a combined 200 exaflops of energy-efficient AI processing power. This includes the fastest AI supercomputer in Europe and three of the most energy-efficient supercomputers in the world. The company also saw strong growth in networking, driven by InfiniBand, and has started shipping their new Spectrum-X Ethernet solution optimized for AI. This has opened up a new market for NVIDIA and is expected to become a multibillion-dollar product line within a year. Additionally, the company launched their next-generation AI factory platform, Blackwell, which offers significantly faster training and inference speeds and enables real-time generative AI on large language models.
The Blackwell platform, with its advanced technology and energy efficiency, will be available in over 100 systems at launch and is designed to support a wide range of data center environments and workloads. NVIDIA also announced a new software product, NIM, which provides optimized containers for AI inference. In the gaming sector, revenue was down 8% sequentially but up 18% year-on-year.
The GeForce RTX Super GPUs have been well received in the market, and GeForce RTX GPUs now have an installed base of over 100 million. NVIDIA has a full technology stack for deploying and running generative AI applications on these GPUs. They have also announced AI performance optimizations for Windows and collaborations with top game developers. In the professional visualization segment, revenue was down 8% sequentially but up 45% year-on-year. NVIDIA believes that generative AI and Omniverse industrial digitalization will drive growth in this segment. They have announced new Omniverse Cloud APIs that have been adopted by major industrial software makers and will be available on Microsoft Azure later this year. Companies are using Omniverse to digitalize their workflows.
NVIDIA's Omniverse-powered digital twins have enabled manufacturing partner Wistron to reduce production cycle times and defect rates. In the automotive sector, revenue increased 17% sequentially and 11% year-on-year, driven by AI cockpit solutions and self-driving platforms. NVIDIA also announced new design wins for its DRIVE Thor platform with leading EV makers. The company's gross margin expanded, while operating expenses also increased. NVIDIA returned $7.8 billion to shareholders and announced a 10-for-1 stock split and an increased dividend. For the second quarter, the company expects sequential growth across all market platforms, with revenue of approximately $28 billion.
NVIDIA expects GAAP and non-GAAP gross margins of 74.8% and 75.5%, respectively, with GAAP and non-GAAP operating expenses of approximately $4 billion and $2.8 billion. Other income and expense is expected to be around $300 million of income, and the tax rate is estimated at 17%. The company's CEO, Jensen Huang, believes that the next industrial revolution has begun and that AI will bring significant productivity gains to many industries. NVIDIA is working with companies and countries to shift traditional data centers to accelerated computing and to build AI factories. CSPs were the first to adopt NVIDIA's technology, realizing cost and energy savings along with revenue growth. The company's Data Center growth is driven by strong demand for generative AI training and inference on the Hopper platform.
The advancement of generative AI is driving a transformation in computing, shifting from information retrieval models to skills generation models. This will lead to the development of AI factories and the growth of multiple vertical markets. The Blackwell platform, along with other technologies, allows for trillion-parameter scale generative AI and the expansion of AI factories. Spectrum-X opens up a new market for large-scale AI in Ethernet-only data centers.
Jensen Huang, CEO of NVIDIA, discussed the company's new software offering, NVIDIA NIM, which provides optimized generative AI for various platforms. He also said that Blackwell production shipments will start in Q2 and ramp in Q3, with customers having data centers set up by Q4, and that he expects significant Blackwell revenue this year. Asked how Blackwell deployment compares with Hopper, Huang explained that Blackwell comes in various configurations and may pose some engineering challenges due to its liquid cooling system.
Demand for GPUs in data centers is high, and the company is working hard to meet it. Asked about demand, the CEO says he will answer the question directly and then give the big-picture view.
Demand for GPUs is high due to the development of applications such as ChatGPT and GPT-4o, multimodal models, Gemini, and Anthropic's models, along with many generative AI startups across various fields. Customers are pressing for quick delivery of systems. In the long term, computers are being completely redesigned to understand intention, enabling reasoning and planning. This is a more profound platform shift than previous ones.
The speaker discusses how the computer industry is changing, with computers now generating intelligent answers instead of retrieving prerecorded files. They also mention the strong demand for H200 and Blackwell products, and anticipate that demand will continue to outstrip supply as they transition to these new products. When asked about competition, the speaker states that they are aware of other companies' internal programs, but do not see them as major competitors in the medium to long term.
NVIDIA's accelerated computing architecture allows for processing of all aspects of a pipeline, from unstructured to structured data, training, and inference. Their architecture is versatile and can be used for various types of computing, making it a sustainable and cost-effective choice. NVIDIA is also present in every cloud and offers a platform for developers to work on. Additionally, they build AI factories, making them a comprehensive and reliable option for customers.
NVIDIA's CEO, Jensen, discusses the growing complexity of AI and how it is not just a chip problem, but a systems problem. He explains how NVIDIA's systems approach optimizes performance and lowers costs, making it the top choice for data centers. The company's rapid introduction of new platforms and significant performance jumps have been unprecedented in the data center industry.
The speaker discusses the rapid pace of technological advancements and how it affects customers who have invested in current products. They mention the upcoming release of Blackwell and the continuous cycle of new products, but reassure that customers should continue building and average their way into the new technology. The importance of time and the value of being able to quickly set up a data center are also emphasized.
The speaker discusses the importance of being the first company to announce groundbreaking AI technology and the race among companies to achieve this. He also highlights the advantage of having all the necessary resources and infrastructure in one place to continuously improve and optimize AI systems, which allows the company to deliver high-performing systems and integrate them into customers' data centers.
The speaker discusses how their company, NVIDIA, has deep knowledge of data center performance and how they optimize it. They also mention their focus on building every chip from the ground up and understanding how it performs. They then answer a question about the adaptability of their solutions for different workloads, stating that while their solutions are versatile, they are not necessarily general-purpose.
The speaker discusses the versatility of their platform and how it has allowed them to accelerate a variety of applications over the years. They mention the importance of parallel processing and heavily threaded code in accelerated computing. They also mention the brittleness of other architectures and how their platform is able to adapt to new advancements in AI. They believe in the scaling of models and are prepared for it due to the versatility of their platform.
The speaker discusses the limitations of creating a computer that is too specific or brittle and notes that the company is currently supply constrained. He mentions the H20 product and efforts to serve customers in China with different Hopper products, adding that the business in China is more competitive now due to limitations on their technology. He clarifies that demand for GB200 systems is high and that NVIDIA has historically sold mostly HGX boards and GPUs, with the systems business being smaller.
Jensen Huang is asked about the strong demand for systems going forward, and he explains that the GB200 is sold the same way as before: as disaggregated components that computer makers integrate into systems. NVIDIA is offering 100 different computer system configurations for Blackwell, a significant increase from previous generations. The Blackwell platform has expanded their offering to include CPUs, liquid cooling, and networking options, making it a more expansive and energy-efficient solution for data centers. They have also added Ethernet support for customers who only operate with Ethernet. Overall, Blackwell has much more to offer customers this generation.
NVIDIA's CEO, Jensen Huang, explains that their ARM-based Grace CPU provides advantages in terms of cost, power consumption, and technical synergies with other chips. This advantage is not possible with current system configurations, and Grace's memory system and architecture allow for a large NVLink domain, which is important for next-generation language models. The partnership with x86 partners is still strong, but Grace offers unique benefits.
The speaker discusses the pace of innovation at NVIDIA, mentioning the upcoming Blackwell chip and a new networking technology. They also mention a rich ecosystem of partners and customers, and the availability of InfiniBand and Ethernet for different computing needs.
NVIDIA is committed to improving InfiniBand, Ethernet, and NVLink as computing fabrics and networks. They will release new switches, NICs, software stacks, CPUs, and GPUs that all run CUDA and their software stack. This will increase speed and decrease TCO, allowing for scaling out with the NVIDIA architecture and driving a new industrial revolution in manufacturing AI tokens. The call has now ended.
This summary was generated with AI and may contain some inaccuracies.