Teaching Alexa to follow conversations


In order to engage customers in longer, more productive conversations, Alexa needs to solve the problem of reference resolution. If Alexa says, “‘Believer’ is by Imagine Dragons”, for instance, and the customer replies, “Play their latest album”, Alexa should be able to deduce that “their” refers to Imagine Dragons.

In the past, we’ve addressed the reference resolution problem by teaching a machine learning system to map correspondences between the variables used by different Alexa services. Alexa’s knowledge base, for instance, might store information about a song’s performer under one variable name, while the Alexa music service stores the same information in a variable called ArtistName. Learning those mappings requires lots of application-specific data, annotated with variable names.

This week, at the annual meeting of the North American Chapter of the Association for Computational Linguistics (NAACL), we presented a new approach to this problem that, in experiments, delivered much stronger results.

We show that this approach scales better than previous ones, and to encourage other researchers to pursue it, we’ve publicly released one of the two data sets we used to evaluate our system; it is built on an existing public data set.

Our new approach is to rewrite customer queries in natural language, substituting entity names and other identifying data for ambiguous references. When a customer says, “Play their latest album”, our system should rewrite the query as “Play Imagine Dragons’ latest album”.
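To make the target behavior concrete, here is a minimal sketch in Python. It assumes the referring expression has already been resolved to its referent; the function name and data layout are our own illustration, not Alexa’s actual code.

```python
# Minimal sketch of the target behavior, not Alexa's implementation.
# We assume "their" has already been resolved to the artist named in
# the previous turn; the rewrite just substitutes the referent.

def rewrite_query(utterance: str, resolved_references: dict) -> str:
    """Replace referring expressions with their resolved referents."""
    return " ".join(
        resolved_references.get(word.lower(), word)
        for word in utterance.split()
    )

print(rewrite_query("Play their latest album", {"their": "Imagine Dragons'"}))
# -> Play Imagine Dragons' latest album
```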

Rather than trying to map variables onto each other across services, the new system rewrites queries in natural language (“their” = “Imagine Dragons’”). For each word of an input sequence, the contextual query rewrite engine adds a word to an output sequence, according to probabilities (blue bars) computed by a neural network.

This approach has several advantages. First, because our rewrite engine learns general principles of reference, it doesn’t depend on any application-specific information, so it doesn’t require retraining when we expand Alexa’s capabilities. Second, it frees backend code from worrying about referring expressions. Individual Alexa services can always assume that they have received a fully specified utterance. Finally, training data can be annotated by any competent language speaker, without specialized knowledge of Alexa’s internal nomenclature.

In addition to the data set that we have released publicly, we also tested our system on a larger in-house data set. We evaluated performance using F1 score, which factors in both false positives and false negatives. On the in-house data set, when a term in the current utterance referred to a term in the most recent system response, our new approach improved the F1 score by 22%. When a term in the current utterance referred to a term in the previous user utterance, the F1 score improved by 25%.

Like our earlier variable-mapping system, our new system is a neural network. When Alexa’s NLU systems receive an utterance, they determine its intent, or the action that the customer wants performed, and they assign individual words to slots, which are variables such as ArtistName or SongName. Slot values are used to identify the specific data items that customers want retrieved.
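As an illustration of what such an NLU parse might look like: the slots ArtistName and SongName are the kind mentioned above, while the intent name “PlayMusicIntent” and the “MediaType” slot below are hypothetical.

```python
# Hypothetical NLU parse for a fully specified utterance. The intent
# name "PlayMusicIntent" and the "MediaType" slot are assumptions made
# for illustration; "O" marks words that fill no slot.
nlu_output = {
    "utterance": "Play Imagine Dragons' latest album",
    "intent": "PlayMusicIntent",
    "slots": [
        ("Play", "O"),
        ("Imagine", "ArtistName"),
        ("Dragons'", "ArtistName"),
        ("latest", "O"),
        ("album", "MediaType"),
    ],
}
```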

The input to our neural network includes the words of the current customer utterance; the words of several prior rounds of dialogue; the intent classification of each turn of dialogue; and, for all words, the slot tags provided by the NLU system.

Wherever possible, however, our system replaces individual words with generic classifiers, such as ENTITYU1, for the first entity named by the user, or ENTITYS3, for the third entity named by the system. These generic classifiers do not replace the slot tags; they complement them.

This approach allows the system to generalize much more effectively during training. It prevents the network from “overfitting”, or paying undue attention to particular characteristics of training examples, such as the individual words of song titles. Instead, it focuses the network’s attention on the syntactic and semantic roles that words are playing.
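A rough sketch of that anonymization step, assuming entity spans have already been identified (for example, from the slot tags); the span format and bookkeeping are our own:

```python
# Sketch of replacing entity mentions with generic placeholders such
# as ENTITYU1 (first entity named by the user) or ENTITYS2 (second
# entity named by the system). Span format and counters are ours.

def anonymize_turn(words, entity_spans, speaker, counter):
    """Replace each (start, end) entity span with ENTITY{U|S}{n}."""
    out, i = [], 0
    while i < len(words):
        span = next((s for s in entity_spans if s[0] == i), None)
        if span:
            counter[speaker] += 1
            out.append(f"ENTITY{speaker}{counter[speaker]}")
            i = span[1]          # jump past the entity span
        else:
            out.append(words[i])
            i += 1
    return out

counter = {"U": 0, "S": 0}
# System turn: "'Believer' is by Imagine Dragons", with two entities.
print(anonymize_turn(
    ["Believer", "is", "by", "Imagine", "Dragons"],
    [(0, 1), (3, 5)], "S", counter))
# -> ['ENTITYS1', 'is', 'by', 'ENTITYS2']
```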

Our network is a pointer network, a variation of the type of sequence-to-sequence (seq2seq) network commonly used in natural-language-generation tasks such as machine translation. A seq2seq network processes an input sequence, such as a string of words, in order and generates an output, such as the equivalent sentence in another language, one item at a time. A pointer network is a seq2seq network whose output is a sequence of references (or pointers) to the words of the input sequence.
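A minimal sketch of one pointer step, assuming we already have encoder states for the input words and a current decoder state; dot-product attention is our assumption, and the paper’s exact scoring function may differ.

```python
import numpy as np

# One pointer step: attention scores over input positions are
# normalized into a distribution, and the "output word" is a pointer
# to an input position. Shapes and scoring are illustrative.

def pointer_step(encoder_states, decoder_state):
    """Return a probability distribution over input positions."""
    scores = encoder_states @ decoder_state      # dot-product attention
    exp = np.exp(scores - scores.max())          # numerically stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
enc = rng.normal(size=(6, 8))   # 6 input words, hidden size 8
dec = rng.normal(size=8)        # current decoder state
probs = pointer_step(enc, dec)
print(probs.argmax())           # index of the input word to copy
```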

With each new round of dialogue, we encode the complete dialogue using a long short-term memory (LSTM) network, a type of neural network that remembers the data it has seen recently and modifies its outputs accordingly. Once the dialogue encoding is up to date, the system begins to rewrite the latest customer utterance, one word at a time. For each word, it decides whether to generate a new word from a list of commonly occurring words or to copy a word from the dialogue history.
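One common way to realize that copy-or-generate decision is a soft gate in the spirit of pointer-generator networks, mixing a vocabulary distribution with the copy distribution. The gate value and both distributions below are toy numbers, and this is our illustration rather than the paper’s exact architecture.

```python
import numpy as np

# Copy-or-generate as a soft gate: with probability p_gen the model
# generates from a fixed vocabulary, otherwise it copies a word from
# the dialogue history. All numbers here are toy values.

def output_distribution(p_gen, vocab_probs, copy_probs, input_word_ids):
    """Combine generate and copy distributions over the vocabulary."""
    dist = p_gen * np.asarray(vocab_probs, dtype=float)
    for pos, word_id in enumerate(input_word_ids):
        dist[word_id] += (1.0 - p_gen) * copy_probs[pos]
    return dist

vocab_probs = np.full(10, 0.1)            # uniform toy vocabulary distribution
copy_probs = np.array([0.7, 0.2, 0.1])    # pointer over 3 history words
input_word_ids = [4, 2, 9]                # vocabulary ids of those words
dist = output_distribution(p_gen=0.3, vocab_probs=vocab_probs,
                           copy_probs=copy_probs, input_word_ids=input_word_ids)
print(dist.argmax())  # -> 4: copying the first history word dominates here
```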

In addition to our in-house data set, we also used a data set developed at Stanford University, along with crowd-sourced rewrites of dialogues from that data set, which we released publicly. Each dialogue was rewritten five times by annotators recruited through Mechanical Turk, who were asked to replace referring terms with their referents. Annotations that received majority votes were incorporated into the new data set.
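A small sketch of that majority-vote filter; the strict-majority threshold and data layout are our assumptions about the procedure.

```python
from collections import Counter

# Keep a crowd-sourced rewrite only if a majority of the five
# annotators produced the same rewrite. Threshold is an assumption.

def majority_rewrite(rewrites):
    text, count = Counter(rewrites).most_common(1)[0]
    return text if count > len(rewrites) / 2 else None

rewrites = [
    "Play Imagine Dragons' latest album",
    "Play Imagine Dragons' latest album",
    "Play Imagine Dragons' latest album",
    "Play the latest album by Imagine Dragons",
    "Play Imagine Dragons' latest album",
]
print(majority_rewrite(rewrites))  # agreed rewrite enters the data set
```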

In exchange for the use of their data set, Stanford has asked that we cite the following paper, which we are happy to do:

Mihail Eric, Lakshmi Krishnan, Francois Charette, and Christopher D. Manning. 2017. Key-Value Retrieval Networks for Task-Oriented Dialogue. In Proceedings of the Special Interest Group on Discourse and Dialogue (SIGDIAL).

Acknowledgments: Pushpendre Rastogi, Tongfei Chen, Lambert Mathias




