Revealing the secrets of Siri, Apple publishes a paper explaining the design ideas of the voice assistant

Apple recently released a series of papers explaining key working mechanisms of its voice assistant, pulling back the curtain on Siri and sharing its design ideas with the industry.

In the first paper, Apple addresses a multi-task problem in voice assistants: in Siri, wake-up processing usually requires two steps. The model must first determine whether the speech in the input audio matches the trigger phrase (voice trigger detection), and then determine whether the speaker's voice matches that of one or more registered users (speaker verification). The conventional approach handles the two tasks separately, but Apple argues that a single neural network can solve both at once, and reports that, after evaluation, this approach meets expectations across the board.

In the paper, the researchers give a concrete example. They trained models built on both ideas on a dataset of 16,000 hours of annotated audio, of which 5,000 hours carried trigger-phrase labels and the rest carried only speaker labels. Rather than the usual approach of training a model to predict multiple labels per sample, Apple trains one model for multiple related tasks by combining the training data of the different tasks. The finding: at equal accuracy, the newly proposed model is better suited to deployment, because it shares computation between the two tasks, substantially reducing on-device memory use as well as computation time, latency, and power/battery consumption.
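To make the shared-computation idea concrete, below is a minimal PyTorch sketch of a shared-encoder, two-head model. The class name (JointTriggerSpeakerModel), layer sizes, and the label-masking scheme are illustrative assumptions, not Apple's published architecture; the point is only that one acoustic encoder feeds both a trigger-detection head and a speaker head, and that each loss is applied only to samples that carry the corresponding label.

```python
# Sketch: one shared encoder, two task heads (assumed architecture, not Apple's).
import torch
import torch.nn as nn


class JointTriggerSpeakerModel(nn.Module):
    def __init__(self, n_mels: int = 40, hidden: int = 256, n_speakers: int = 100):
        super().__init__()
        # Shared acoustic encoder: its computation is done once for both tasks.
        self.encoder = nn.LSTM(input_size=n_mels, hidden_size=hidden,
                               num_layers=2, batch_first=True)
        # Head 1: binary voice-trigger detection (does the audio contain the phrase?).
        self.trigger_head = nn.Linear(hidden, 1)
        # Head 2: speaker classification over training speakers; at run time the
        # penultimate representation would be compared against enrolled users.
        self.speaker_head = nn.Linear(hidden, n_speakers)

    def forward(self, mel_frames: torch.Tensor):
        # mel_frames: (batch, time, n_mels) log-mel features.
        encoded, _ = self.encoder(mel_frames)
        pooled = encoded.mean(dim=1)                 # average over time
        trigger_logit = self.trigger_head(pooled).squeeze(-1)
        speaker_logits = self.speaker_head(pooled)
        return trigger_logit, speaker_logits


if __name__ == "__main__":
    model = JointTriggerSpeakerModel()
    batch = torch.randn(8, 120, 40)                  # 8 utterances, 120 frames each
    trigger_logit, speaker_logits = model(batch)

    # As in the paper's data setup, not every sample carries both labels, so each
    # loss is masked to the samples that have its label (shown here for the trigger task).
    trigger_labels = torch.tensor([1., 0., 1., 1., 0., 1., 0., 1.])
    has_trigger_label = torch.tensor([1., 1., 1., 0., 0., 1., 1., 1.])
    bce = nn.functional.binary_cross_entropy_with_logits(
        trigger_logit, trigger_labels, reduction="none")
    trigger_loss = (bce * has_trigger_label).sum() / has_trigger_label.sum()

    speaker_labels = torch.randint(0, 100, (8,))
    speaker_loss = nn.functional.cross_entropy(speaker_logits, speaker_labels)

    (trigger_loss + speaker_loss).backward()
    print(trigger_loss.item(), speaker_loss.item())
```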

In another paper, Apple introduced the design of a speaker language identification system for multilingual speech scenarios: knowledge-graph-assisted decision-making for the dictation system. Taking the acoustic sub-model as an example, it makes predictions from evidence in the speech signal itself, while a context-aware prediction component takes various interaction context signals into account, including the conditions under which the command was issued, the dictation locales installed on the device, the currently selected dictation locale, and whether the user switched locales before making the request.

The results show that these context signals help precisely in situations where the speech signal is too short for the acoustic model alone to produce a reliable prediction.
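A hedged sketch of how such a fusion could work is shown below: an acoustic language-ID posterior is interpolated with a prior built from the context signals named above, with the context weighted more heavily for short utterances. The signal names, weights, and interpolation rule are assumptions for illustration only; the paper does not disclose its exact decision logic.

```python
# Illustrative fusion of acoustic language-ID scores with interaction context
# (installed locales, selected locale, recent locale switch). All weights are assumed.
from dataclasses import dataclass


@dataclass
class DictationContext:
    installed_locales: list
    selected_locale: str
    switched_recently: bool


def context_prior(ctx: DictationContext) -> dict:
    """Turn interaction context into a prior over candidate locales."""
    prior = {loc: 1.0 for loc in ctx.installed_locales}
    prior[ctx.selected_locale] = prior.get(ctx.selected_locale, 1.0) + 2.0
    if ctx.switched_recently:
        # A recent manual switch is a strong hint toward the newly selected locale.
        prior[ctx.selected_locale] += 2.0
    total = sum(prior.values())
    return {loc: p / total for loc, p in prior.items()}


def fuse(acoustic_posterior: dict, ctx: DictationContext,
         utterance_seconds: float) -> str:
    """Interpolate acoustic and context scores; trust context more for short audio."""
    prior = context_prior(ctx)
    # Shorter audio -> less reliable acoustic evidence -> higher context weight.
    context_weight = max(0.2, min(0.8, 1.0 - utterance_seconds / 5.0))
    scores = {}
    for loc in prior:
        acoustic = acoustic_posterior.get(loc, 0.0)
        scores[loc] = (1 - context_weight) * acoustic + context_weight * prior[loc]
    return max(scores, key=scores.get)


if __name__ == "__main__":
    ctx = DictationContext(installed_locales=["en_US", "zh_CN"],
                           selected_locale="zh_CN", switched_recently=False)
    # A very short utterance where the acoustic model slightly favors English:
    # the context prior tips the decision toward the selected locale.
    print(fuse({"en_US": 0.55, "zh_CN": 0.45}, ctx, utterance_seconds=0.8))
```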

In addition, Apple presented a complementary study on mitigating false triggering, that is, getting the voice assistant (Siri) to ignore speech that is not directed at it. Building on the idea of designing AI models around graph structures, the researchers proposed a graph neural network (GNN) in which each node is connected to a label. The results showed that the model reduced false triggers by 87%.
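As a rough illustration of the graph-based idea (not Apple's actual model), the sketch below treats per-frame features of an utterance as nodes in a chain graph, runs a couple of rounds of neighbor averaging as a simple message-passing step, and pools the node states into a single "genuine trigger vs. false trigger" score. The graph construction, layer sizes, and pooling are all assumptions.

```python
# Toy GNN-style classifier for false-trigger mitigation (assumed design).
import torch
import torch.nn as nn


class FalseTriggerGNN(nn.Module):
    def __init__(self, feat_dim: int = 40, hidden: int = 64, rounds: int = 2):
        super().__init__()
        self.embed = nn.Linear(feat_dim, hidden)
        # One shared update applied at every message-passing round.
        self.update = nn.Linear(2 * hidden, hidden)
        self.readout = nn.Linear(hidden, 1)
        self.rounds = rounds

    def forward(self, node_feats: torch.Tensor, adjacency: torch.Tensor):
        # node_feats: (n_nodes, feat_dim); adjacency: (n_nodes, n_nodes), row-normalized.
        h = torch.relu(self.embed(node_feats))
        for _ in range(self.rounds):
            neighbor_msg = adjacency @ h                          # aggregate neighbor states
            h = torch.relu(self.update(torch.cat([h, neighbor_msg], dim=-1)))
        graph_state = h.mean(dim=0)                               # pool all nodes
        return self.readout(graph_state)                          # logit: 1 = genuine trigger


def chain_adjacency(n: int) -> torch.Tensor:
    """Connect each node to its temporal neighbors and itself, row-normalized."""
    a = torch.eye(n)
    for i in range(n - 1):
        a[i, i + 1] = a[i + 1, i] = 1.0
    return a / a.sum(dim=1, keepdim=True)


if __name__ == "__main__":
    frames = torch.randn(20, 40)            # 20 frames of 40-dim acoustic features
    model = FalseTriggerGNN()
    logit = model(frames, chain_adjacency(20))
    print(torch.sigmoid(logit))             # probability the trigger is genuine
```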
