Graphics chip maker Nvidia recently reported its largest quarterly revenue growth in six years, sending its stock up 14% in after-hours trading and drawing attention across the industry. Its CEO Jensen Huang had earlier claimed that Nvidia is already an AI (artificial intelligence) chip company, seemingly riding the AI wave, and the industry is bullish on its chip business, AI chips in particular. Is that really the case?

Look first at the quarter's numbers. Total revenue was $2 billion, up 53.6% year-on-year. Of that, the graphics chip segment contributed 85% of total revenue, growing 52.9% year-on-year to $1.7 billion; the data center business doubled year-on-year to $240 million; and the automotive business grew 60.8% year-on-year to $127 million. From this breakdown it is clear that Nvidia's core business is still graphics chips (discrete graphics cards) for the traditional PC market, while revenue from AI-related fields, or fields closely tied to AI such as data centers, accounts for only about one tenth of the total. Judged by revenue alone, then, Nvidia is far from being an AI chip company.

It is well known that AI is the hot spot the industry is now racing toward. For Nvidia, which has been, and remains, a discrete graphics card (display chip) vendor for the traditional PC industry, to recast itself as an AI company naturally attracts attention. In that sense, we cannot rule out the possibility that Nvidia is riding the AI trend and exaggerating the role its chips play. Of course, this is not to deny that AI chips (chips supporting AI applications and functions) will be a development trend for the chip industry.
In particular, Google's artificial intelligence program AlphaGo defeated Lee Sedol, one of the world's top Go players, using deep learning technology, a sign that artificial intelligence will be the next arena of competition among technology giants. The growth of big data and the Internet of Things has pushed IBM, Google, Facebook, Microsoft, and many large cloud computing providers to race to develop AI, so that the massive data collected by future IoT devices can be analyzed to deliver better services to the market and to users. Note that although vendors name it differently (IBM calls it cognitive computing; Facebook and Google call it machine learning or artificial intelligence), chips, as part of the basic hardware of the data centers that support these technologies and applications, play an important role throughout. Reflecting this trend, relevant statistics suggest that at least 10% of the workloads now running in the data centers (servers) of giants such as IBM, Google, Facebook, Amazon, and Microsoft, and of cloud computing companies, are related to AI applications, whether developing their own AI applications or hosting and running customers' AI development and workloads, and this share will grow as market and user demand for AI grows. The trend poses new challenges to the computing power and power consumption of the basic chips in data centers, and here the GPUs (graphics chips) that Nvidia has long specialized in have natural advantages.
For example, AI requires large-scale parallel computing, and for the same die area a GPU packs in more computing units (integer and floating-point multiply-add units, special-function units, and so on); GPUs are paired with higher-bandwidth memory, so they also perform well in high-throughput applications; and GPUs need far less energy per unit of computation than CPUs. Nevertheless, this does not mean the data centers (servers) of the giants above have no demand for CPUs. On the contrary, the CPU remains an indispensable part of the computing task. In deep learning workloads, a high-performance CPU is still needed to execute instructions and exchange data with the GPU, so that the versatility of the CPU and the parallel processing power of the GPU are combined to best effect. This is why most companies today still use the "CPU + GPU" combination, or heterogeneous computing: in this mode, the serial parts of an application run on the CPU, while the GPU, acting as a coprocessor, handles the compute-intensive parts.

From this perspective, the lack of a CPU is the persistent shortcoming in Nvidia's claim, now and in the future, to be an AI company. And when we mention CPUs, we naturally think of Intel, the leader in that field. Intel is not only the barrier that keeps Nvidia from crossing over into CPUs to become a full AI chip company; even in GPUs, Nvidia's own strength for meeting the AI needs of the giants above, Intel is the biggest challenger. That challenge shows first in Intel's exploration of CPU computing power, for example its recently released Xeon Phi chip for data center servers.
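The "CPU + GPU" division of labor described above can be sketched in a few lines. This is a conceptual illustration only, written in plain Python with an ordinary loop standing in for the GPU kernel; the function names are invented for the example, not taken from any real GPU API:

```python
def serial_prepare(n):
    # Serial phase: setup, branching, and I/O stay on the CPU.
    a = [float(i) for i in range(n)]
    b = [2.0] * n
    return a, b

def parallel_kernel(a, b):
    # Data-parallel phase: every element is computed independently,
    # so on a real GPU each iteration would map to one thread of a
    # kernel launch. A plain loop stands in for that here.
    return [x * y + 1.0 for x, y in zip(a, b)]

a, b = serial_prepare(4)
print(parallel_kernel(a, b))  # [1.0, 3.0, 5.0, 7.0]
```

The point of the split is that the serial phase has dependencies and control flow a CPU handles well, while the kernel phase has none, which is exactly the shape of work a GPU's many computing units can absorb.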
According to Intel's report, the Xeon Phi processor trains 2.3 times faster than Nvidia's GPU, scales 38 percent better across multiple nodes, and can scale up to 128 nodes, something Intel said no GPU then on the market could do. At the same time, a system of 128 Xeon Phi processors is 50 times faster than a single one, meaning the chip's scalability advantage is obvious, and scalability is crucial to meeting the needs of AI applications. Nvidia strongly disputed Intel's claims, pointing out that Intel had used data that was 18 months old, comparing four Maxwell GPUs against four Xeon Phi processors; with updated Caffe AlexNet data, four Maxwell GPUs turn out to be 30% faster than four Xeon Phi processors. Leaving aside whose account is closer to the truth, the war of words over the report at least shows that the CPU, despite lacking a natural advantage here, still has great untapped potential; on the strength of the CPU alone, Intel can narrow the gap with Nvidia, or at least put pressure on it.

In addition, purely in terms of the computing power and implementation needed by AI applications, whether the GPU is the best, or the only, answer is still debated in the industry. Some researchers have found in tests that, compared with a GPU, an FPGA has a more flexible architecture and better performance per unit of energy: deep learning algorithms can run faster and more efficiently on FPGAs, with lower power consumption. This may explain why Intel earlier acquired FPGA maker Altera for $16.7 billion. Speaking of acquisitions, there is another that the industry believes Intel can use to strengthen its competitiveness in AI chips, perhaps even to overtake Nvidia: the acquisition of Nervana Systems, a company specializing in AI chips.
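Intel's own scaling figures are worth a quick sanity check: dividing the achieved speedup by the node count gives the parallel efficiency, i.e. how close the system comes to ideal linear scaling. A back-of-the-envelope calculation using only the numbers quoted above (the formula is the standard definition of parallel efficiency; whether the result is good for this workload is exactly what the two sides dispute):

```python
def parallel_efficiency(speedup, nodes):
    # Efficiency = achieved speedup / ideal linear speedup
    # (ideal = one unit of speedup per node).
    return speedup / nodes

# Intel's claim: 128 Xeon Phi processors run 50x faster than one.
eff = parallel_efficiency(50, 128)
print(f"{eff:.1%}")  # 39.1% of ideal linear scaling
```

So the "50 times faster on 128 nodes" headline is well short of linear scaling, which helps explain why each side could pick numbers that favor its own chips.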
The deep learning chips Nervana Systems has been developing are said to be more cost-effective than GPUs, with 10 times their processing speed. To illustrate Nervana's strength, and the threat it poses to Nvidia, consider an anecdote from the acquisition. When Intel approached Nervana about a sale, Nervana reportedly saw Nvidia as one of the reasonable alternatives, since Nervana's deep learning software Neon can also run on Nvidia chips and could have helped Nvidia shore up its weaknesses. Nvidia, however, was not interested, believing its own GPU-based deep learning technology to be superior. After Nervana closed the deal with Intel, Nvidia appeared to change its mind and tried to reopen negotiations, but by then the opportunity had passed. Some analysts argue that letting Intel acquire Nervana was Nvidia's biggest mistake: through the deal, Intel gains a specific deep learning product and IP that it can use on its own or integrate with its future technology to produce more competitive and creative chips. And integration is what Intel does best. With the newly acquired Nervana Systems, for example, it can fold the related products into chips or multi-chip packages: adding the Nervana Engine IP to a Xeon CPU would offer a low-cost way to deliver the performance acceleration AI requires, productize the Nervana IP, and raise the computing power of Intel's own CPUs to meet the growing demands that AI development and deployment place on data center chips.
To sum up, we believe that, given that AI chip applications are still in their infancy (currently only about one tenth of data center workloads) and given rival Intel's targeted acquisitions in this field and its own potential for improving and integrating around the CPU, Nvidia's stock surge in the name of AI is no guarantee of peace of mind or a smooth road ahead.