Apple recently released a series of papers explaining key working mechanisms behind its voice assistant, opening up parts of Siri's design publicly and sharing the underlying ideas with the industry.

The first paper addresses the multitask problem in wake-up processing. In Siri, handling a wake-up usually involves two steps: the system must first decide whether the speech content in the input audio matches the trigger phrase (voice trigger detection), and then decide whether the speaker's voice matches that of one or more enrolled users (speaker verification). The conventional approach handles the two tasks separately, but Apple argues that a single neural network model can solve both tasks at once, and reports that, after evaluation, this combined approach meets expectations in every respect.

In the paper, the researchers give an example of such a model. They trained models built around two design ideas on a dataset of 16,000 hours of annotated audio, of which 5,000 hours carried phonetic labels and the remainder carried only speaker labels. Rather than the usual approach of training a model against multiple labels per sample, Apple trains one model for multiple related tasks by pooling the training data of the different tasks. At equal accuracy, the newly proposed model turns out to be better suited to deployment: it shares computation between the two tasks, which saves considerable memory on the device while also reducing compute time (latency) and power/battery consumption.

In another paper, Apple introduces the design of a speaker recognition system for multilingual speech scenarios: knowledge-graph-assisted decision-making for the dictation system. Taking its acoustic sub-model as an example, this component makes predictions from the evidence carried by the speech signal itself, while its context-aware prediction component takes various interaction-context signals into account. These context signals include information about the conditions under which the request was made, the dictation locales installed on the device, the currently selected dictation locale, and whether the user switched dictation locales shortly before making the request. The results show that the context signals are especially helpful when the speech signal is too short for the acoustic model alone to produce a reliable prediction.

Finally, Apple also presents a complementary study on mitigating false triggers, that is, getting the voice assistant (Siri) to ignore speech that is not directed at it. Building on the idea of designing AI models around graph structures, the researchers propose a graph neural network (GNN) in which each node is associated with a label. Their results show that this model reduces false triggers by 87%.
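To make the multi-task idea from the first paper more concrete, here is a minimal sketch, not Apple's actual model, of one plausible way to build it: a shared acoustic encoder feeds two heads, one for voice trigger detection and one for speaker verification, and each loss is applied only to the samples that actually carry that kind of label (since only part of the 16,000-hour dataset has phonetic labels and the rest has only speaker labels). All layer types, sizes, and names here (including the `spk_classifier` helper) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SharedWakeUpModel(nn.Module):
    def __init__(self, n_mels=40, hidden=256, n_phone_classes=54, spk_embed_dim=128):
        super().__init__()
        # Shared acoustic encoder: the computation is done once and reused by both tasks.
        self.encoder = nn.GRU(input_size=n_mels, hidden_size=hidden,
                              num_layers=2, batch_first=True)
        # Head 1: frame-level phonetic posteriors for trigger-phrase detection.
        self.trigger_head = nn.Linear(hidden, n_phone_classes)
        # Head 2: utterance-level speaker embedding for speaker verification.
        self.speaker_head = nn.Linear(hidden, spk_embed_dim)

    def forward(self, feats):                       # feats: (batch, frames, n_mels)
        enc, _ = self.encoder(feats)                # (batch, frames, hidden)
        trigger_logits = self.trigger_head(enc)     # per-frame phonetic scores
        spk_embedding = self.speaker_head(enc.mean(dim=1))  # pooled speaker vector
        return trigger_logits, spk_embedding

def multitask_loss(trigger_logits, phone_targets, has_phone_label,
                   spk_embedding, spk_targets, has_spk_label, spk_classifier):
    """Apply each task's loss only where the corresponding label exists."""
    ce = nn.CrossEntropyLoss()
    loss = torch.zeros((), device=trigger_logits.device)
    if has_phone_label.any():
        sel = has_phone_label                      # samples with phonetic labels
        loss = loss + ce(trigger_logits[sel].flatten(0, 1),
                         phone_targets[sel].flatten())
    if has_spk_label.any():
        sel = has_spk_label                        # samples with speaker labels
        loss = loss + ce(spk_classifier(spk_embedding[sel]), spk_targets[sel])
    return loss
```

Because both heads sit on the same encoder, the expensive acoustic processing runs once per utterance, which is the mechanism behind the memory, latency, and power savings the article describes.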
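The second paper's combination of an acoustic prediction with interaction-context signals can also be sketched. The snippet below is an assumption for illustration, not Apple's implementation: the context signals the article lists (installed dictation locales, the currently selected locale, and a recent locale switch) are turned into a prior over languages, and the acoustic evidence is down-weighted when the audio is very short. The prior values and the weighting heuristic are made up for the example.

```python
import math

def combine_language_scores(acoustic_log_probs, installed, selected,
                            recently_switched_to=None, audio_seconds=1.0):
    """acoustic_log_probs: dict locale -> log P(locale | audio) from the acoustic sub-model.
    installed: set of enabled dictation locales; selected: currently chosen locale."""
    # Context prior: most mass on the selected locale, some on other installed locales.
    prior = {}
    for loc in acoustic_log_probs:
        if loc == selected:
            prior[loc] = 0.70
        elif loc == recently_switched_to:
            prior[loc] = 0.20
        elif loc in installed:
            prior[loc] = 0.08
        else:
            prior[loc] = 0.02
    # Down-weight the acoustic evidence for very short audio (illustrative heuristic).
    acoustic_weight = min(1.0, audio_seconds / 2.0)
    scores = {loc: acoustic_weight * acoustic_log_probs[loc] + math.log(prior[loc])
              for loc in acoustic_log_probs}
    return max(scores, key=scores.get)

# Example: a 0.5 s utterance where the acoustic model slightly favors en_US,
# but the user has de_DE selected, so the context tips the decision to de_DE.
print(combine_language_scores(
    {"en_US": math.log(0.55), "de_DE": math.log(0.45)},
    installed={"en_US", "de_DE"}, selected="de_DE", audio_seconds=0.5))
```

This reflects the article's point that context matters most when the speech signal is too short for the acoustic model alone to be reliable.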
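The third study's graph-based design is described only briefly in the article ("each node is associated with a label"), so the following is a generic message-passing sketch of that idea rather than the model from Apple's paper: node features are updated from their neighbours, each node is scored against a label, and the node scores are pooled into a graph-level trigger/false-trigger decision. The layer sizes and the pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.msg = nn.Linear(feat_dim, hidden)                 # transform neighbour features
        self.update = nn.Linear(feat_dim + hidden, feat_dim)   # combine with own state
        self.node_label = nn.Linear(feat_dim, 2)               # per-node label scores

    def forward(self, x, adj):
        # x: (nodes, feat_dim) node features; adj: (nodes, nodes) adjacency matrix.
        neighbour_sum = adj @ self.msg(x)                      # aggregate neighbour messages
        x = torch.relu(self.update(torch.cat([x, neighbour_sum], dim=-1)))
        node_logits = self.node_label(x)                       # one label per node
        return node_logits.mean(dim=0)                         # pooled graph-level decision

# Usage: 5 nodes with random features connected in a simple chain.
x = torch.randn(5, 64)
adj = torch.zeros(5, 5)
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
print(TinyGNN()(x, adj))
```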