How Alexa learned Arabic



The Arabic version of Alexa launched in December 2021, in the Kingdom of Saudi Arabia (KSA) and the United Arab Emirates (UAE), and like all new Alexa languages, it posed a unique set of challenges.

The first was to decide what forms of Arabic Alexa should speak. While the official written language in KSA and the UAE is Modern Standard Arabic (MSA), in everyday life, Arabic speakers use dialectal forms of Arabic, with many vernacular variations.

For customers, engaging with Alexa in their native dialects would be more natural than speaking MSA. So the Alexa AI team — including computational linguists — determined that Arabic Alexa would be able to understand requests in both MSA and Khaleeji (Gulf) dialects.

Alexa’s speech outputs, too, would be in both MSA and a Khaleeji dialect — MSA for formal speech, such as responses to requests for information, and Khaleeji for less formal speech, such as confirmation of alarm times and music selections. This means that someone issuing Alexa a request in one Arabic dialect might get a response in a different one. But that mirrors the experience that Arabic speakers in the region have with each other.

The core components of a new Alexa model are automatic speech recognition (ASR), which converts speech into text; natural-language understanding (NLU), which interprets the text to initiate actions; and text-to-speech (TTS), which converts NLU outputs into synthesized speech.
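To make the division of labor concrete, here is a minimal sketch of how the three stages compose; the function names and stub bodies are illustrative only, not Alexa’s actual interfaces.

```python
# Minimal sketch of the ASR -> NLU -> TTS pipeline described above.
# All names here are illustrative stand-ins, not Alexa's actual interfaces.

def recognize_speech(audio: bytes) -> str:
    """ASR: convert the spoken request into text."""
    raise NotImplementedError

def understand(transcript: str) -> dict:
    """NLU: map the transcript to an intent and its slot values."""
    raise NotImplementedError

def synthesize(response_text: str) -> bytes:
    """TTS: convert the response text into an acoustic waveform."""
    raise NotImplementedError

def handle_request(audio: bytes) -> bytes:
    interpretation = understand(recognize_speech(audio))
    # A dialogue/skill layer (not covered in this article) would act on the
    # interpretation and produce the text of Alexa's response.
    response_text = f"Acting on intent {interpretation.get('intent')}"
    return synthesize(response_text)
```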

A key question for all three components was how to render utterances textually, both as ASR output and TTS input. Written Arabic suppresses short vowel sounds: it would be sort of like spelling the English word “begin” as “bgn”. People are usually able to infer the mssng vwls frm cntxt.

But in formal and educational texts — such as reading primers for children — vowels and some consonantal sounds are indicated by diacritical marks. So the Alexa AI team had to decide whether the ASR output should include diacritics or not.

One of the major differences between dialects is in their vowel sounds, so omitting diacritics makes it easier to create a textual representation of speech that’s applicable to all dialects, which is useful for ASR and NLU.

Moreover, there is little published writing in forms of Arabic other than MSA, so there’s no standard orthography for them, either. Asking annotators to add diacritics could introduce more ambiguity than it alleviates. In the end, the Alexa AI team decided that ASR output should use only two diacritics, the shaddah and the maddah, because they help with pronunciation accuracy on entity names that pass from ASR through NLU to TTS.

These design decisions had separate implications for the various Alexa AI teams — ASR, NLU, and TTS — and of course, each of the teams faced its own particular challenges as well.

ASR

One of the ASR team’s goals was to provide a consistent output, given the lack of standardized orthography for both dialectal Arabic and foreign loanwords. One of their decisions was to represent loanwords — such as the names of French or American musicians or albums — using Latin script.

L to R: Applied-science manager Volker Leutnant and applied scientists Moe Hethnawi and Bashar Awwad Shiekh Hasan

To that end, they used a so-called catalogue ingestion normalizer, which takes in a catalogue of terms in French and English and converts the corresponding Arabic-script outputs of the ASR model into Latin script.
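A minimal sketch of what such a normalizer might look like, assuming a catalogue of Latin-script names and a hypothetical transliterate() helper that predicts how each name surfaces in the ASR model’s Arabic-script output:

```python
# Sketch of a catalogue ingestion normalizer. transliterate() is a
# hypothetical helper standing in for whatever grapheme mapping the real
# system uses to predict the Arabic-script rendering of a Latin-script name.
from typing import Dict, Iterable, List

def transliterate(latin_name: str) -> str:
    """Hypothetical: expected Arabic-script rendering of a Latin-script name."""
    raise NotImplementedError

def build_normalization_table(catalogue: Iterable[str]) -> Dict[str, str]:
    """Map predicted Arabic-script renderings back to canonical Latin-script entries."""
    return {transliterate(name): name for name in catalogue}

def normalize(asr_tokens: List[str], table: Dict[str, str]) -> List[str]:
    """Replace Arabic-script tokens that match catalogue entries with Latin script."""
    return [table.get(token, token) for token in asr_tokens]
```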

Applied-science manager Volker Leutnant and his colleagues on the Alexa Speech team — including applied scientists Moe Hethnawi and Bashar Awwad Shiekh Hasan — began with an English acoustic model, which started out better attuned to human speech sounds than a randomly initialized model. They trained it using public datasets of Arabic speech in the target Khaleeji dialects and data from Cleo, an Alexa skill that allows multilingual customers to help train new-language models by responding to voice prompts with open-form utterances. The Cleo data included labeled utterances in additional Arabic dialects, allowing the ASR model to provide a more consistent user experience for a wider range of customers.

NLU

An NLU model takes in utterances transcribed by ASR and classifies them according to intent, such as playing music. It also identifies all the slots in the utterance — such as song names or artist names — and their slot values — such as the particular artist name “Ahlam”.
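In code, an NLU prediction for a request like “play Ahlam” might be represented roughly as follows; the intent and slot names are placeholders, not Alexa’s actual schema.

```python
# Illustrative NLU output for a request such as "play Ahlam".
# Intent and slot names are placeholders, not Alexa's real schema.
from dataclasses import dataclass
from typing import Dict

@dataclass
class Interpretation:
    intent: str            # e.g. "PlayMusic"
    slots: Dict[str, str]  # slot name -> slot value

example = Interpretation(intent="PlayMusic", slots={"ArtistName": "Ahlam"})
```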

The first thing the NLU model needs to do is to tokenize the input, or split it into semantic units that should be processed separately. In many languages, tokenization happens naturally during ASR. But Arabic uses word affixes — prefixes and suffixes — to convey contextual meanings.

Some of those affixes, such as articles and prepositions — the Arabic equivalents of “the” or “to” — are irrelevant to NLU and can be left attached to their word stems. But some, such as possessives, require independent slot tags. The suffix meaning “my”, for instance, in the Arabic for “my music”, tells the NLU model just which music the customer wants played. Language engineer Yangsook Park and her colleagues designed the tokenizer to split off those important affixes and leave the rest attached to their stems.
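The sketch below illustrates that design in simplified form: only affixes that need their own slot tags are split off, and everything else stays attached to its stem. The affix inventory shown is a placeholder, not the team’s actual list.

```python
# Simplified affix-aware tokenizer: split off only the affixes that carry
# their own slot tags (e.g., possessives); leave articles and prepositions
# attached to their stems. The suffix inventory below is illustrative only.
from typing import List, Tuple

NLU_RELEVANT_SUFFIXES: Tuple[str, ...] = ("ي",)  # e.g., first-person possessive ("my")

def tokenize(utterance: str) -> List[str]:
    tokens: List[str] = []
    for word in utterance.split():
        for suffix in NLU_RELEVANT_SUFFIXES:
            if word.endswith(suffix) and len(word) > len(suffix):
                tokens.extend([word[: -len(suffix)], suffix])
                break
        else:
            tokens.append(word)
    return tokens
```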

The tokenized input passes to the NLU model, which is a trilingual model, able to process inputs in Arabic, French or English. This not only helps the model handle loanwords used in Arabic, but it also enables the transfer of knowledge from French and English, which currently have more abundant training data than Arabic.

Research science manager Karolina Owczarzak and her team at Alexa AI — including research scientists Khadige Abboud, Olga Golovneva, and Christopher DiPersio — resampled the existing Arabic training data to expand the variety of training examples. For instance, their resampling tool replaces the names of artists or songs in existing utterances with other names from the song catalogue.
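A toy version of such a resampler, assuming templates with <SlotName> placeholders and a small, made-up catalogue:

```python
# Toy catalogue-based resampler: fill slot placeholders in an annotated
# template with values drawn from the catalogue. Template syntax and
# catalogue contents are made up for illustration.
import random
from typing import Dict, List

def resample(template: str, catalogue: Dict[str, List[str]],
             n: int, seed: int = 0) -> List[str]:
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        utterance = template
        for slot, values in catalogue.items():
            utterance = utterance.replace(f"<{slot}>", rng.choice(values))
        examples.append(utterance)
    return examples

catalogue = {"ArtistName": ["Ahlam", "<another artist>"],
             "SongName": ["<some song>", "<another song>"]}
print(resample("play the <ArtistName> song <SongName>", catalogue, n=3))
```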

A crucial consideration was how many resampled utterances with the same basic structure to include in the training data. Using too many examples based on the same template — such as “let me hear <SongName> by <ArtistName>” or “play the <ArtistName> song <SongName>” — could diminish the model’s performance on other classes of utterance.

To compute the optimal number of examples per utterance template, the NLU researchers constructed a measure of utterance complexity, which factored in both the number of slots in the utterance template and the number of possible values per slot. The more complex the utterance template, the more examples it required.
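The exact formula isn’t given, but one plausible instantiation scales the number of resampled examples with a score that grows with the slot count and the (log) size of each slot’s value inventory:

```python
# Hypothetical instantiation of the complexity measure: the description above
# only says it factors in the number of slots and the number of possible
# values per slot, so the formula and constants here are guesses.
import math
from typing import Dict

def template_complexity(slot_value_counts: Dict[str, int]) -> float:
    """More slots and larger value inventories => higher complexity."""
    return sum(1.0 + math.log(max(count, 1)) for count in slot_value_counts.values())

def examples_per_template(slot_value_counts: Dict[str, int],
                          base: int = 50, per_unit: int = 20, cap: int = 2000) -> int:
    """Scale the number of resampled examples with complexity, up to a cap."""
    return min(cap, base + int(per_unit * template_complexity(slot_value_counts)))

# e.g., "play the <ArtistName> song <SongName>" against a large music catalogue
print(examples_per_template({"ArtistName": 10_000, "SongName": 1_000_000}))
```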

L to R: Language engineer Yangsook Park, research science manager Karolina Owczarzak, and research scientists Khadige Abboud, Olga Golovneva, and Christopher DiPersio

The model-training process began with a BERT-based language model, which was pretrained on all three languages using unlabeled data and the standard masked-language-modeling objective. That is, words of sentences were randomly masked out, and the model learned to predict the missing words from those that remained. In this stage, the NLU team augmented the Arabic dataset with data translated from English by Amazon Translate.
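The masking step at the heart of that objective can be sketched as follows; the 15% mask rate and the 80/10/10 replacement scheme follow the standard BERT recipe rather than anything Alexa-specific.

```python
# Standard BERT-style dynamic masking: hide ~15% of tokens and train the
# model to recover them. Token ids and constants are illustrative.
import random
from typing import List, Tuple

MASK_ID = 103        # [MASK] id in BERT-style vocabularies (illustrative)
IGNORE_INDEX = -100  # label value the loss function skips

def mask_tokens(token_ids: List[int], vocab_size: int,
                mask_prob: float = 0.15, seed: int = 0) -> Tuple[List[int], List[int]]:
    rng = random.Random(seed)
    inputs, labels = list(token_ids), [IGNORE_INDEX] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:
            labels[i] = tok                            # predict the original token here
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK_ID                    # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token unchanged
    return inputs, labels
```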

Then the researchers trained the model to perform NLU tasks by fine-tuning it on a large corpus of annotated French and English data — that is, utterances whose intents and slots had been labeled. The idea was to use the abundant data in those two languages to teach the model some general principles of NLU processing, which could then be transferred to a model fine-tuned on the sparser labeled Arabic data.

Finally, the model was fine-tuned again on equal amounts of labeled training data in all three languages, to ensure that fine-tuning on Arabic didn’t diminish the model’s performance on the other two languages.

TTS

Whereas diacritics can get in the way of NLU, they’re indispensable to TTS: the Alexa speech synthesizer needs to know precisely which vowel sounds to produce as output. So when the Arabic TTS model gets a text string from one of Alexa’s functions — such as confirmation of a music selection from the music player — it first runs the string through a diacritizer, which adds the full set of diacritics back in.

L to R: Software engineer Tarek Badr, applied scientist Fan Yang, and language engineer Merouane Benhassine.

The TTS researchers, led by software engineer Tarek Badr and applied scientist Fan Yang, trained the diacritizer largely on MSA texts, with some supplemental data in Khaleeji dialects, which the Alexa team compiled itself. Inferring the correct diacritics depends on the whole utterance context: as an analogy, whether “crw” represents “craw”, “crew”, or “crow” could usually be determined from context. So the diacritizer model has an attention mechanism that attends over the complete utterance.
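One simple way to realize such a model is as character-level sequence labeling with a Transformer encoder, whose self-attention spans the whole utterance. The sketch below is an assumption about the general shape of such a model, not Alexa’s actual architecture, and omits positional encodings and training code for brevity.

```python
# Sketch of an attention-based diacritizer as character-level sequence
# labeling: self-attention over the full utterance, one diacritic class
# (including "none") predicted per character. Sizes are illustrative;
# positional encodings and training code are omitted for brevity.
import torch
import torch.nn as nn

class Diacritizer(nn.Module):
    def __init__(self, n_chars: int, n_diacritic_classes: int,
                 d_model: int = 256, n_heads: int = 4, n_layers: int = 4):
        super().__init__()
        self.embed = nn.Embedding(n_chars, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.classify = nn.Linear(d_model, n_diacritic_classes)

    def forward(self, char_ids: torch.Tensor) -> torch.Tensor:
        # char_ids: (batch, seq_len) -> logits: (batch, seq_len, n_classes)
        hidden = self.encoder(self.embed(char_ids))  # attends over the whole utterance
        return self.classify(hidden)

# e.g., a batch of two 10-character utterances
logits = Diacritizer(n_chars=64, n_diacritic_classes=16)(torch.randint(0, 64, (2, 10)))
```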

Outputs that should be in Khaleeji Arabic then pass through a module that converts the diacritics to representations of the appropriate short-vowel sounds, along with any other necessary transformations. This is a rule-based system that language engineer Merouane Benhassine and his colleagues built to capture the predictable relationships between MSA and Khaleeji Arabic.
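Structurally, such a converter can be as simple as an ordered list of rewrite rules applied to the diacritized MSA string; the rules themselves are the linguists’ work and are not published, so they are left as placeholders in this skeleton.

```python
# Skeleton of a rule-based MSA-to-Khaleeji converter: ordered rewrite rules
# applied to the diacritized string. The actual correspondences are the
# linguists' work and are not reproduced here.
import re
from typing import List, Tuple

# (regex pattern, replacement) pairs, applied in order. Real rules elided.
MSA_TO_KHALEEJI_RULES: List[Tuple[str, str]] = [
    # (r"...", "..."),
]

def to_khaleeji(diacritized_msa: str) -> str:
    text = diacritized_msa
    for pattern, replacement in MSA_TO_KHALEEJI_RULES:
        text = re.sub(pattern, replacement, text)
    return text
```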

The text-to-speech model itself is a neural network, which takes text as input and outputs acoustic waveforms. It takes advantage of the Amazon TTS team’s recent work on expressive speech to endow the Arabic TTS model with a lively, conversational style by default.

A new Alexa language is never simply a new language: it’s a new language targeted to a specific new locale, because customer needs and linguistic practices vary by country. Going forward, the Alexa AI team will continue working to expand Arabic to additional locales — even as it continues to extend Alexa to whole new language families.




