Apple recently published a series of papers explaining key working mechanisms behind its voice assistant, opening up parts of Siri's design and sharing several of its engineering ideas with the industry.

The first paper addresses a multitasking problem in voice assistants. In Siri, wake-up processing ordinarily involves two steps: the system must first determine whether the speech in the incoming audio matches the trigger phrase (voice trigger detection), and then determine whether the speaker's voice matches that of one or more enrolled users (speaker verification). The usual approach handles the two tasks separately, but Apple argues that a single neural network model can solve both at once, and reports that, after evaluation, the combined approach meets expectations on all fronts.

As an example, the researchers trained models based on two design ideas on a dataset of 16,000 hours of annotated audio, of which 5,000 hours carried phonetic (voice) labels and the remainder carried only speaker labels. Rather than the common approach of training one model to predict multiple labels, Apple trains a model for multiple related tasks by concatenating training data from the different tasks. At equal accuracy, the newly proposed model proved better suited to deployment: it shares computation between the two tasks, substantially reducing on-device memory use, computation and waiting time, and power/battery consumption.

In another paper, Apple introduced the design of a speaker recognition system for multilingual speech scenarios: knowledge-graph-assisted decision-making for the dictation system.
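The computation-sharing idea can be illustrated with a minimal sketch: one shared encoder runs over the audio once, and two lightweight heads consume its output, one producing per-frame phonetic posteriors for trigger detection and one producing a speaker embedding for verification. All dimensions, weights, and pooling choices below are illustrative assumptions, not Apple's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen only for illustration.
FRAME_DIM, HIDDEN, N_PHONES, EMB = 40, 64, 20, 32

# Shared encoder weights: a stand-in for the shared acoustic network.
W_enc = rng.standard_normal((FRAME_DIM, HIDDEN)) * 0.1
# Head 1: phonetic posteriors used for voice trigger detection.
W_trig = rng.standard_normal((HIDDEN, N_PHONES)) * 0.1
# Head 2: projection to a speaker embedding used for verification.
W_spk = rng.standard_normal((HIDDEN, EMB)) * 0.1

def encode(frames):
    """Shared computation: runs once, reused by both task heads."""
    return np.tanh(frames @ W_enc)

def trigger_scores(h):
    """Per-frame softmax over phonetic classes (task 1)."""
    logits = h @ W_trig
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def speaker_embedding(h):
    """Pool over time, project, and length-normalize (task 2)."""
    v = h.mean(axis=0) @ W_spk
    return v / np.linalg.norm(v)

frames = rng.standard_normal((100, FRAME_DIM))  # 100 fake audio frames
h = encode(frames)                # one shared forward pass over the audio
post = trigger_scores(h)          # task 1: trigger-detection posteriors
emb = speaker_embedding(h)        # task 2: speaker-verification embedding
```

Because `encode` is by far the largest component, running it once and branching into two small heads is what saves the memory, latency, and battery the paper describes, compared with running two full independent models.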
Taking the acoustic sub-model as an example: it makes predictions from the speech signal itself, while a context-aware prediction component weighs various interaction-context signals. These context signals include information about the conditions under which the command was issued, the dictation locales installed on the device, the currently selected dictation locale, and whether the user switched locales before making the request. The results show that this design helps in situations where the speech signal is too short for the acoustic model alone to produce a reliable prediction.

In a supplementary study, Apple also tackled the problem of false triggering, that is, getting the voice assistant (Siri) to ignore speech that is not directed at it. Building on the idea of designing AI models around graph structures, the researchers proposed a graph neural network (GNN) in which each node is connected to a label. The results showed that the model reduced false triggers by 87%.
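The interaction between the acoustic prediction and the context signals can be sketched as a simple fusion rule: when the audio is long enough, trust the acoustic model; when it is very short, lean on a contextual prior built from signals like the installed and selected locales. The linear-interpolation rule, the threshold, and the function name below are assumptions for illustration; the paper does not specify this exact mechanism.

```python
def fuse_language_scores(acoustic_probs, context_prior,
                         audio_len_s, min_reliable_s=1.0):
    """Blend acoustic language probabilities with a context prior.

    The shorter the audio relative to `min_reliable_s` (a hypothetical
    threshold), the more weight shifts to the context prior.
    """
    w = min(audio_len_s / min_reliable_s, 1.0)  # acoustic confidence weight
    langs = set(acoustic_probs) | set(context_prior)
    scores = {
        lang: w * acoustic_probs.get(lang, 0.0)
              + (1.0 - w) * context_prior.get(lang, 0.0)
        for lang in langs
    }
    total = sum(scores.values())
    return {lang: s / total for lang, s in scores.items()}

# A 0.2-second utterance: too short for the acoustic model to be
# reliable, so the context prior (user mostly dictates in German)
# dominates the fused decision.
acoustic = {"en": 0.7, "fr": 0.3}
prior = {"en": 0.2, "fr": 0.1, "de": 0.7}
fused = fuse_language_scores(acoustic, prior, audio_len_s=0.2)
```

With this toy input the fused distribution favors "de" even though the acoustic model never proposed it, which mirrors the paper's observation that context signals help precisely when the speech signal is too short.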