Apple recently released a series of papers explaining key working mechanisms of its voice assistant, publicly detailing how parts of Siri work and contributing several of its design ideas to the industry.

In the first paper, Apple addresses the multitask problem in voice assistants. Wake-up processing in Siri usually requires two steps: the system must first determine whether the speech content in the incoming audio matches the trigger phrase (voice trigger detection), and then determine whether the speaker's voice matches the voice of one or more registered users (speaker verification). The common approach handles the two tasks separately, but Apple argues that a single neural network model can solve both tasks at the same time, and reports that, after evaluation, the method's performance meets expectations in all respects. In the paper, the researchers trained models built around two design ideas on a dataset of 16,000 hours of annotated samples, of which 5,000 hours of audio carried speech-content labels and the rest carried only speaker labels. Instead of the usual approach of training one model to predict multiple labels per example, Apple trains a model for multiple related tasks by combining the training data of the different tasks. With comparable accuracy, the newly proposed model is more suitable for deployment: it shares computation between the two tasks, substantially saving memory on the device while reducing computation time (latency) and power/battery consumption. A minimal code sketch of this shared-computation idea appears at the end of this article.

In another paper, Apple introduces the design of a speaker language recognition system for multilingual speech scenarios, in which knowledge-graph information assists the dictation system's decision-making. The acoustic sub-model, for example, makes predictions from the evidence carried by the speech signal itself, while the context-aware prediction component takes various interaction-context signals into account: information about the conditions under which the command was issued, the installed dictation locales, the currently selected dictation locale, and whether the user switched dictation locales before making the request. Apple reports that these context signals help in situations where the speech signal is too short for the acoustic model to produce a reliable prediction; a sketch of how such signals might be combined is shown below.

Finally, Apple describes a complementary study on mitigating false triggers, that is, getting the voice assistant (Siri) to ignore speech that is not directed at it. Following the idea of designing AI models around graph structures, the researchers proposed a graph neural network (GNN) in which each node is associated with a label. They report that the model reduced false triggers by 87%; a generic GNN sketch is included below for illustration.
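To make the shared-computation idea concrete, here is a minimal PyTorch sketch of a multitask model with one shared acoustic encoder feeding two heads, one for trigger-phrase detection and one for speaker embeddings. This is an illustration of the general technique only; the layer choices, sizes, and names are assumptions, not Apple's published architecture.

```python
# Minimal multitask sketch: one shared encoder, two task heads, so voice trigger
# detection and speaker verification reuse the same intermediate features instead
# of running two separate models. Illustrative only; not Apple's actual design.
import torch
import torch.nn as nn

class JointTriggerSpeakerModel(nn.Module):
    def __init__(self, n_mels=40, hidden=128, speaker_dims=64):
        super().__init__()
        # Shared encoder: computed once per utterance, used by both tasks.
        self.encoder = nn.LSTM(input_size=n_mels, hidden_size=hidden, batch_first=True)
        # Head 1: does the audio contain the trigger phrase?
        self.trigger_head = nn.Linear(hidden, 1)
        # Head 2: speaker embedding, compared against enrolled user profiles.
        self.speaker_head = nn.Linear(hidden, speaker_dims)

    def forward(self, features):
        # features: (batch, time, n_mels) log-mel filterbank frames
        encoded, _ = self.encoder(features)
        pooled = encoded.mean(dim=1)               # average pooling over time
        trigger_logit = self.trigger_head(pooled)  # trained on phrase-labelled hours
        speaker_emb = self.speaker_head(pooled)    # trained on speaker-labelled hours
        return trigger_logit, speaker_emb
```

Because the encoder dominates the cost, sharing it is what yields the memory, latency, and power savings the article attributes to the joint model.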
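The multilingual dictation decision can be pictured as fusing an acoustic language prediction with a prior built from context signals. The sketch below uses the signals named in the article (installed locales, currently selected locale, short audio as a fallback condition), but the weighting scheme and function names are hypothetical, not Apple's published method.

```python
# Hypothetical fusion of acoustic language scores with a context prior.
# Falls back to context alone when the audio is too short for the acoustic
# model to be reliable, as described in the article.
def combine_language_scores(acoustic_probs, context_prior, audio_seconds, min_audio=1.0):
    """acoustic_probs / context_prior: dicts mapping locale -> probability."""
    if audio_seconds < min_audio:
        # Speech too short for a reliable acoustic estimate: use context only.
        return max(context_prior, key=context_prior.get)
    combined = {}
    for locale in context_prior:
        # Weighted product of the two evidence sources (assumed scheme).
        combined[locale] = acoustic_probs.get(locale, 1e-6) * context_prior[locale]
    return max(combined, key=combined.get)

# Example: two installed dictation locales on the device.
acoustic = {"en_US": 0.55, "zh_CN": 0.45}
context = {"en_US": 0.2, "zh_CN": 0.8}   # user usually dictates in Chinese
print(combine_language_scores(acoustic, context, audio_seconds=2.5))  # -> zh_CN
```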
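For the false-trigger work, the article only says that a graph neural network labels each node. The following is a bare-bones, generic message-passing sketch of that idea (node features aggregated over neighbours, then classified); it is not the architecture from Apple's paper.

```python
# Generic GNN sketch for node classification: each node receives a label
# (e.g. intended trigger vs. false trigger). Illustrative only.
import torch
import torch.nn as nn

class SimpleGraphLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adjacency):
        # adjacency: float (N, N) 0/1 matrix. Average each node's neighbours
        # (including itself via a self-loop), then transform.
        adj = adjacency + torch.eye(adjacency.size(0))
        adj = adj / adj.sum(dim=1, keepdim=True)
        return torch.relu(self.linear(adj @ node_feats))

class FalseTriggerGNN(nn.Module):
    def __init__(self, in_dim=40, hidden=64, n_classes=2):
        super().__init__()
        self.layer1 = SimpleGraphLayer(in_dim, hidden)
        self.layer2 = SimpleGraphLayer(hidden, hidden)
        self.classifier = nn.Linear(hidden, n_classes)  # one label per node

    def forward(self, node_feats, adjacency):
        h = self.layer2(self.layer1(node_feats, adjacency), adjacency)
        return self.classifier(h)
```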