As a media outlet focused on the artificial intelligence industry, we were genuinely excited when the Kirin 970 and the Apple A11 were first announced. But more than a month has passed, and apart from manufacturers showing off spec sheets and a round of media hype, we have heard little else. Will mobile AI bring a revolutionary new experience to mobile phones? Will chips like the Kirin 970 and the Apple A11 become the most important drivers of mobile AI? To avoid empty talk, we interviewed developers from four fields: the AI business unit of a BAT company, mobile client development, live-streaming products, and deep-learning image processing. We wanted to know what mobile AI chips look like in their eyes.

BAT developers: model advantages matter more than on-device deployment

As we have discussed before, traces of cloud AI are already everywhere in apps, from selfie beautification to voice assistants. AI already exists on mobile devices. In practice, though, machine learning developers study algorithms, process data, and train models on one end, while mobile developers handle the data structures and programming on the other. The two sides are connected through a cloud server, and there is little overlap between them.
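To make that division of labor concrete, here is a minimal sketch of how a "cloud AI" feature such as selfie beautification is typically wired up today: the app only ships data to a server-side model and renders whatever comes back. The endpoint URL, payload format, and function name below are hypothetical, not any vendor's real API.

```python
import requests  # assuming the `requests` HTTP library is available

def beautify_selfie(image_bytes: bytes) -> bytes:
    """Send a photo to a hypothetical cloud inference endpoint.

    The model lives entirely on the server; the mobile client only
    uploads data and displays the processed result.
    """
    resp = requests.post(
        "https://api.example.com/v1/beautify",   # placeholder endpoint
        files={"image": ("selfie.jpg", image_bytes, "image/jpeg")},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.content  # processed image returned by the cloud model
```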
When it comes to mobile AI, most developers think first about what porting a model into the application would entail: whether it will bloat the installation package, and whether every model update will have to go through app-store review as a new release. Many large companies are pursuing AI at almost any cost, and with vast amounts of data readily available, their developers are almost "spoiled": complex models are trained on massive datasets and continuously updated in pursuit of perfection. For these large enterprises, deciding which end to deploy on seems premature before they have a model that satisfies both themselves and the industry. A product operator on an intelligent product at Alibaba told us that, rather than weighing one deployment environment against another, they currently care more about polishing the algorithm and the model. This reflects the current state of AI development: in the transition from algorithm-driven to product-driven work, developers rarely migrate from one environment to another on their own initiative before users vote with their feet. For large companies with abundant resources in particular, the real insecurity comes from gaps between algorithms rather than from the finer points of user experience.

Developers who have been through a platform shift: endless possibilities for customized AI

However, a developer who lived through the transition from PC to mobile reminded us that mobile AI may not be as simple as "offline AI". During the migration from PC to mobile, developers found that user behavior differs greatly across devices. The same holds even among mobile devices: on phones, tablets, smart watches, and even smart speakers, user behavior still varies widely. Tablet users, for example, are more inclined to open videos, while phone users are more inclined to open text-based content. Yet in cloud-based recommendation algorithms, the data from these devices is usually mixed together and processed as one.
Once mobile devices can process data locally, developers can make recommendation algorithms fit different devices much better. Judging from the specifications the Apple A11 and Kirin 970 have published so far, it makes little sense to move network-dependent algorithms such as smart recommendation entirely onto the device; but taking the cloud's inference results and post-processing them locally can produce a very different user experience. It also means algorithms can be integrated more tightly with devices, and further with individual users. Through the rich sensing capabilities of mobile devices, signals that used to be hard to obtain, such as geographic location, weather, ambient light, time of use, the user's schedule, and even body temperature and heart rate, can be folded into existing algorithm models via local computation. In a future with mobile AI, a food delivery app could put your favorite hot pot at the top of the recommendation feed on rainy days, and put the fastest-delivered meals at the top while you are at work. In short, from the developer's perspective, the greatest value of mobile AI lies in turning "general-purpose AI" into "customized AI".

Hands-on developers: mobile AI needs a more unified programming environment

Fascinating as the idea of "customized AI" is, actual development is not as easy as it sounds. A senior PE engineer at a live-streaming platform told us that mobile GPU/AI chips are still a fairly new concept, with confusing API interfaces and no unified programming and computation model. So far, the Apple A11 exposes its GPU capabilities only through the Core ML framework, while most developers today work in mainstream frameworks such as TensorFlow and Caffe. This will inevitably complicate future migrations: should developers strip down their original framework, or reprogram everything on Core ML? If chip makers provide only a high-level abstraction, mobile AI developers have no choice but to adapt to each hardware abstraction layer one by one. If chip makers instead expose low-level abstractions, the differences can be hidden behind a higher-level layer, which improves development efficiency.
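To illustrate what that migration question looks like in practice, here is a minimal sketch of converting an existing model to Core ML with Apple's coremltools converter. It assumes a Keras model saved as "model.h5" (hypothetical), and the exact converter arguments may differ across coremltools versions.

```python
import coremltools  # Apple's model conversion toolkit

# A minimal sketch, assuming a trained Keras model file "model.h5" (hypothetical).
# Input/output names are placeholders; converter arguments vary by version.
coreml_model = coremltools.converters.keras.convert(
    "model.h5",
    input_names=["image"],
    output_names=["probabilities"],
)
coreml_model.short_description = "Image classifier converted for on-device use"
coreml_model.save("Classifier.mlmodel")  # this file gets bundled into the iOS app
```

The friction the engineer describes is exactly what this hides: a model born in TensorFlow or Caffe has to pass through a converter like this, and any layer the converter does not support has to be rewritten or dropped.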
The obstacles are likely to be worse on Android, whose ecosystem is relatively fragmented. Take today's general-purpose CPUs as an example: different manufacturers use different power-consumption and thermal-control logic. The same neural network may run smoothly on one vendor's CPU yet trigger overheating protection and frequency throttling on another's, causing the network's effective compute performance to fluctuate wildly. As a result, most developers only dare to embed networks with modest computational requirements, or features that are invoked infrequently, and avoid using recurrent neural networks in continuously running features.

Deep learning developers: chips are only the beginning for mobile AI

For users, mobile AI chips mean a better on-device product experience; for developers, mobile AI is not something a chip alone can deliver overnight.
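One pragmatic workaround that follows from this is to treat on-device inference as optional: watch recent latency and fall back to a cloud path (or skip the feature) when the numbers suggest the CPU is being throttled. The sketch below is a hypothetical pattern, not any vendor's API; the class name, budget, and fallback are all assumptions.

```python
import time
from collections import deque


class AdaptiveInference:
    """Run a model on-device only while recent latency stays within budget.

    Hypothetical pattern: a sustained latency spike is treated as a sign of
    thermal throttling, and the caller offloads to a cloud path instead.
    """

    def __init__(self, local_model, budget_ms: float = 50.0, window: int = 20):
        self.local_model = local_model      # callable: input -> prediction
        self.budget_ms = budget_ms          # per-inference latency budget
        self.latencies = deque(maxlen=window)

    def healthy(self) -> bool:
        # Stay on-device until the rolling average exceeds the budget.
        if not self.latencies:
            return True
        return sum(self.latencies) / len(self.latencies) <= self.budget_ms

    def predict(self, x, cloud_fallback):
        if not self.healthy():
            return cloud_fallback(x)        # offload when throttling is suspected
        start = time.perf_counter()
        y = self.local_model(x)
        self.latencies.append((time.perf_counter() - start) * 1000.0)
        return y
```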
We interviewed Deep Black Technology, often described as the Chinese counterpart of Prisma. Its CEO, Jason, told us that most mobile AI chips carry fairly general optimizations for machine learning but little optimization for specific kinds of computation; the convolution operations that deep learning depends on, for instance, are still better suited to the cloud for now. And when teams plan to move from the cloud to the device, they are also constrained by their original framework. TensorFlow makes it easy to deploy on both iOS and Android, which makes migration easier; frameworks maintained by smaller teams, such as Caffe and Torch, are weaker on productization and cannot yet help developers move code onto the two mobile platforms. This gives TensorFlow an extra advantage, especially in consumer applications.
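As an example of why TensorFlow had a head start here, a common workflow at the time was to freeze a trained graph into a single .pb file that TensorFlow's mobile runtimes on iOS and Android could load. The sketch below uses TensorFlow 1.x-style APIs; the checkpoint path and node names are placeholders.

```python
import tensorflow as tf  # TensorFlow 1.x-style APIs, as used at the time

# A minimal sketch: freeze a trained graph into one .pb file that the
# TensorFlow mobile runtime on iOS/Android can load. Paths and node names
# are placeholders, not a specific model.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph("model.ckpt.meta")  # hypothetical checkpoint
    saver.restore(sess, "model.ckpt")
    frozen = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph_def,
        output_node_names=["output/probabilities"],         # placeholder node name
    )
    with tf.gfile.GFile("frozen_model.pb", "wb") as f:
        f.write(frozen.SerializeToString())
```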
Mobile AI chips cannot solve every problem, and many teams are trying to accelerate mobile AI deployment from the software side. Xnor.ai, a Seattle-based AI startup that raised funding just this year, uses binary neural networks to shrink the storage footprint of models, which speeds up computation and lowers its cost, and ultimately makes it possible to run deep learning models on embedded devices without depending on the network. The arrival of the hardware is only one cornerstone of the mobile AI ecosystem; at the business level there is still API adaptation, software optimization, and even the 5G networks that may arrive in the future to consider.

So, are mobile AI chips useless?

After talking with these developers, we began to ask ourselves what the arrival of mobile AI chips really means to them. The hardware alone does not conjure an ecosystem into being, but the bets that phone makers are placing on mobile AI have clearly given most developers confidence and drawn more people into the field. More importantly, mobile AI chips let developers launch lightweight products without worrying about the cost of cloud computing resources or the server outages that come with user growth. It is fair to say that mobile AI chips have lowered the entry threshold for consumer-grade AI applications.

Back to the two chips themselves. The Apple A11 shows no obvious performance weakness, but opening its GPU capabilities only to Core ML and Metal 2 (a graphics API used mainly in games) has clearly dampened many developers' enthusiasm. The approach further consolidates the completeness and stability of the iOS ecosystem, but it suits high-barrier, bespoke development better. Apple's choice is not wrong, yet it is hard to call it right: there is only a fine line between "complete and stable" and "self-limiting". By comparison, if the Kirin 970 can take the lead in opening its NPU compute power to more frameworks and more software, it will surely attract more developers.

Of course, no developer writing their first line of code is thinking about ringing the bell at the New York Stock Exchange; they are simply following their curiosity, trying to talk to an unknown world for the first time. If mobile AI chips can let developers set aside endless requirements for a moment and explore mobile AI with the enthusiasm they had when they first wrote code, that will be a precious gain for chip makers and the artificial intelligence industry alike. Perhaps this is the ultimate significance of mobile AI chips: hardware cannot solve every problem, but it can let more developers take part.

*This article reflects the author's independent views and does not represent the position of Huxiu.com. It is published on Huxiu.com with the authorization of Brain Extreme and edited by Huxiu. When reprinting, please credit the author at the beginning of the article, keep the article intact (including Huxiu's note and the author's identity information), and include the source (Huxiu.com) and a link to this page. Original link: http://www.huxiu.com/article/218331.html. Huxiu reserves the right to pursue liability against anyone who reprints without following these rules.