The world's largest buyer of NVIDIA GPUs in 2024 is, in fact, Microsoft, with a purchase volume of nearly 500,000 units, roughly double that of any competitor. Meanwhile, xAI has gleefully shown off its first batch of GB200 NVL72 shipments, celebrating as if the New Year had come early. Does stockpiling more GPUs automatically produce a better model? The giants aren't waiting for an answer; the buying frenzy is already in full swing.

So who is the largest buyer of NVIDIA GPUs this year? The answer has just been revealed: number one is Microsoft, and a comparison chart making that point has been circulating widely online. Omdia estimated the figures from companies' publicly disclosed capital expenditures, server shipments, and supply-chain intelligence. According to analysts at the technology consulting firm, Microsoft purchased 485,000 Nvidia Hopper chips this year, ranking first among global GPU buyers. That puts Microsoft well ahead of Nvidia's second-largest U.S. customer, Meta, which bought 224,000 Hopper chips. Meta is followed by xAI, Amazon (196,000 chips) and Google (169,000 chips). And with Hopper's next-generation successor, Blackwell, arriving soon, Nvidia stands to make even more money.

In short, as the world's major technology companies frantically hoard GPUs and race to assemble ever-larger clusters, Nvidia's market value has soared to $3 trillion this year.

Now xAI has happily shown off the first batch of NVIDIA GB200 NVL72 shipments, and Colossus, the world's largest supercomputing cluster, is about to become an even bigger behemoth. xAI can barely contain its delight: "xAI's Colossus celebrates Christmas early." On November 18, Dell founder and CEO Michael Dell announced that the world's first NVIDIA GB200 NVL72 server racks are now shipping. The AI rocket is about to take off!

But does having the best chips necessarily mean you can build the best AI infrastructure? Not necessarily.
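As a quick sanity check on the figures above, here is a minimal sketch that tallies the Omdia Hopper-purchase estimates quoted in the article (xAI's exact figure is not given in the text, so it is omitted):

```python
# Omdia's estimated 2024 Nvidia Hopper purchases, as cited in the article
# (units are chips; values are estimates, not disclosed order books).
hopper_purchases = {
    "Microsoft": 485_000,
    "Meta": 224_000,
    "Amazon": 196_000,
    "Google": 169_000,
}

# Microsoft's lead over the next-largest buyer, Meta
lead = hopper_purchases["Microsoft"] / hopper_purchases["Meta"]
print(f"Microsoft bought {lead:.1f}x as many Hopper chips as Meta")
```

The ratio works out to about 2.2x, consistent with the article's "roughly double" framing.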
Alistair Speirs, senior director of Azure global infrastructure at Microsoft, explained that chips alone are not enough: you also need the right storage components, infrastructure, software layers, host-management layers, error-correction capabilities, and the other pieces required to build a complete system. Grok, for example, is backed by the world's most powerful supercomputer, and many benchmarks show its performance is excellent; yet netizens still regularly pour cold water on it, pointing out that having the most GPUs in the world does not make your model the best. No matter: the global GPU buying boom has already reached fever pitch, and every giant has only one thing in mind: get on board!

Microsoft, the world's largest buyer of Nvidia GPUs. Over the past two years Nvidia's most advanced GPUs have been in short supply, and at CEO Satya Nadella's direction, Microsoft has gone all in. According to Omdia, Microsoft bought nearly 500,000 GPUs this year, twice as many as any competitor, while also securing the top spot in total investment. Microsoft has invested $13 billion in OpenAI and built out data-center infrastructure that it uses both to run its own AI services, such as Copilot, and to rent to customers through Azure. OpenAI's latest o1 model was trained on Microsoft's Azure cloud infrastructure. Together, the two are fighting Google, Anthropic, a resurgent xAI, and others for the commanding heights of next-generation computing. The success of ChatGPT prompted Nvidia to ramp up Hopper production overnight, and Microsoft's order this year is more than triple the number of same-generation Nvidia AI processors it purchased in 2023.
According to Alistair Speirs, high-quality data-center infrastructure has become an extremely complex, capital-intensive undertaking. It requires years of planning, so accurately forecasting demand growth and maintaining an appropriate buffer of capacity is critical. Indeed, Nvidia GPUs have become Silicon Valley's hottest hard currency, triggering an unprecedented surge in AI investment. Omdia estimates that roughly 43% of server spending in 2024 went to Nvidia, and that the top ten buyers of data-center infrastructure, a list that now includes newcomers xAI and CoreWeave, account for 60% of global computing-power investment. Technology companies' global spending on servers will reach a staggering $229 billion this year: $31 billion from Microsoft and $26 billion from Amazon. "Nvidia GPUs account for a very high share of server capital expenditure, close to peak levels," said Vlad Galabov, director of cloud computing and data center research at Omdia.

New forces emerge to challenge Nvidia's dominance. Nvidia should not celebrate too soon, however. Although it still dominates the AI chip market, its old rival AMD has been eyeing the same prize and has now made a breakthrough: according to Omdia, Meta purchased 173,000 AMD MI300 chips this year, and Microsoft bought 96,000. More notable still, the major technology giants are rapidly developing their own AI chips to reduce their dependence on Nvidia. Google, the pioneer, has spent a decade developing its TPU, while Meta launched the first generation of its own training-and-inference accelerator chip last year; each has deployed about 1.5 million of its self-developed chips. Amazon, the world's largest cloud computing provider, has also made eye-catching moves in AI silicon.
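To put the capex figures above in proportion, a small sketch using only the numbers quoted in the article ($229B global server spend, $31B Microsoft, $26B Amazon, and Omdia's ~43% Nvidia share estimate):

```python
# Server capex figures from the article (Omdia estimates for 2024, in $B).
total_capex_bn = 229
spend_bn = {"Microsoft": 31, "Amazon": 26}

# Each buyer's share of global server spending
for company, bn in spend_bn.items():
    share = bn / total_capex_bn
    print(f"{company}: ${bn}B ({share:.1%} of global server spend)")

# Dollar value implied by Omdia's ~43% Nvidia-bound share
nvidia_bn = 0.43 * total_capex_bn
print(f"Implied Nvidia-bound spend: ~${nvidia_bn:.0f}B")
```

Microsoft's $31B alone is about 13.5% of the global total, and the 43% Nvidia share implies roughly $98B flowing to Nvidia, which helps explain the market-cap surge described earlier.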
Amazon has deployed about 1.3 million of its self-developed Trainium and Inferentia chips this year. A few weeks ago it announced that it would build a supercomputing cluster out of hundreds of thousands of its latest Trainium chips, chiefly to supply computing power to Anthropic, in which Amazon has invested $8 billion, for training next-generation models. By contrast, Microsoft's AI accelerator effort is still in its infancy, with a total of about 200,000 Maia chips deployed.

Among the self-developed chips, demand for Google's TPU has grown the fastest, strongly enough to shake Nvidia's dominance of the GPU market. Broadcom's third-quarter earnings report provides some important clues here. As a semiconductor supplier to Google, Meta, and other technology giants, Broadcom's internal figures reveal otherwise little-known procurement trends, such as how many custom processors Google has been buying. Broadcom CEO Hock Tan has raised the company's semiconductor revenue guidance several times, setting this year's target at $12 billion. On that basis, TPU-related revenue is expected to land between $6 billion and $9 billion, depending on the split between compute devices and networking devices. Alexander Harrowell, chief analyst at Omdia, noted that "while there is some uncertainty about the exact proportion of computing equipment and network equipment, even at the lower estimate of $6 billion, the growth rate of TPU shipments is sufficient to seize market share from Nvidia for the first time." He added, "Google's cloud business continues to grow as a share of total revenue and its profitability continues to improve," with TPU-accelerated instances and TPU-based AI products likely at work behind the scenes. Broadcom is developing custom AI chips (ASICs) for three major customers: Google, Meta, and ByteDance.
It is also working with two further customers to develop next-generation AI XPUs. Clearly, many cloud service providers now prefer ASICs over Nvidia GPUs, mainly because custom silicon is more cost-effective and can be optimized for their internal workloads. Morgan Stanley estimates that, driven by cloud service providers, the market for custom AI chips will grow from $120 billion in 2024 to $300 billion in 2027, outpacing the growth of the GPU market. The rise of TPUs and custom chips is profoundly reshaping the competitive landscape of the AI chip market.

Source: New Wisdom (新智元)
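The Morgan Stanley projection above implies a specific compound annual growth rate; a quick derivation from the article's two data points ($120B in 2024, $300B in 2027):

```python
# Morgan Stanley's projection as quoted in the article: custom AI chip (ASIC)
# market grows from $120B in 2024 to $300B in 2027, i.e. over 3 years.
start_bn, end_bn, years = 120, 300, 3

# Compound annual growth rate: (end / start)^(1/years) - 1
cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 36% per year
```

A roughly 36% annual growth rate is what would have to outpace the GPU market for the claim to hold; note the CAGR depends only on the 2.5x ratio, not on the absolute dollar figures.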