The science behind Alexa’s new interactive story-creation experience


In September, Amazon senior vice president Dave Limp unveiled Amazon Devices’ new lineup of products and services. Among them was a new Alexa experience that receives customer prompts and uses AI to generate short children’s stories, complete with illustrations and background music.

The experience is slated for general release later this year. It allows children to choose themes for their stories, such as “underwater” or “enchanted forest”; protagonists, such as “pirate” or “mermaid”; colors, which will serve as visual signatures for the illustrations; and adjectives, such as “silly” or “mysterious”.

From the prompts, an AI engine generates an original five-scene story. For each scene, it also composes an illustration (often animated) and background music, and it selects appropriate sound effects. Since the experience depends heavily on AI models, it can repeatedly generate different stories from the same set of prompts.

A hybrid approach

To ensure both family-friendly visual content and a consistent visual vocabulary, the Alexa story creation experience uses a library of backgrounds and foreground objects that are either designed by artists or AI-generated and then curated. The AI model determines which objects to use and how to arrange them on the screen.

The new Alexa story creation experience uses AI to arrange visual elements on either artist-rendered or AI-generated backgrounds, to illustrate stories produced by a separate AI module. (The images shown in this article are for illustration purposes only.)

Similarly, the background-music module augments composer-created harmonic and rhythmic patterns by automatically generating melodies, which are stored in a library for efficient runtime deployment. An AI model then assembles the background music so that it follows the hero character and matches the moods and themes of the story's scenes. Sound effects corresponding to particular characters, objects, and actions are selected in similar fashion.

The core of the story creation experience, however, is the story generator, which takes user prompts as input and outputs a story. The story text, in turn, is the input to the image and music generators.

Story generator

The story generator consists of two models, both built on top of pretrained language models. The first model — the “planner” — receives the customer-selected prompts and uses them to generate a longer set of keywords, allocated to separate scenes. These constitute the story plan. The second model — the text generator — receives the story plan and outputs the story text.
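
As a rough illustration of that two-stage structure, the sketch below wires a generic pretrained language model into a planner step and a text-generation step. The Alexa models are not public, so the "gpt2" checkpoint and the prompt templates here are stand-ins for illustration only, not the production system.

```python
# Minimal sketch of the planner / text-generator pipeline, assuming a generic
# pretrained causal language model from Hugging Face Transformers.
from transformers import pipeline

# "gpt2" and the prompt templates below are illustrative stand-ins for the
# (non-public) pretrained models the Alexa team built on.
lm = pipeline("text-generation", model="gpt2")

def plan_story(prompts: list[str], num_scenes: int = 5) -> list[str]:
    """Planner: expand the customer's prompts into per-scene keyword lists."""
    plan = []
    for scene in range(1, num_scenes + 1):
        prompt = f"Story keywords: {', '.join(prompts)}. Scene {scene} keywords:"
        text = lm(prompt, max_new_tokens=15, num_return_sequences=1)[0]["generated_text"]
        plan.append(text[len(prompt):].strip())
    return plan

def generate_story(plan: list[str]) -> list[str]:
    """Text generator: turn each scene's keywords into scene text."""
    scenes = []
    for keywords in plan:
        prompt = f"A children's story scene about {keywords}:"
        text = lm(prompt, max_new_tokens=80, num_return_sequences=1)[0]["generated_text"]
        scenes.append(text[len(prompt):].strip())
    return scenes

story = generate_story(plan_story(["underwater", "mermaid", "silly"]))
```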

Choice of character is one of the prompts that the story generator uses to create the story text.

To train the story generator, the Alexa researchers use human-written stories, including a set of stories created in-house by Amazon writers. The in-house stories are labeled according to the themes that customers will ultimately choose from, such as “underwater” and “enchanted forest”.

The first step in the training procedure is to automatically extract salient keywords from each sentence of each story, producing keyword lists, which are used to train the text generator. The lists are then randomly downsampled to just a few words each, to produce training data for the planner.
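
The sketch below illustrates that data-construction recipe under simplifying assumptions: keyword extraction is reduced to a stopword heuristic (the article does not specify the extractor used), and the downsampling is plain random sampling.

```python
# Sketch of building training pairs from a human-written story.
import random

# Toy stand-in for salient-keyword extraction; the real extractor is not described.
STOPWORDS = {"the", "a", "an", "and", "was", "is", "to", "of", "in", "on", "her", "his"}

def extract_keywords(sentence: str) -> list[str]:
    words = [w.strip(".,!?\"").lower() for w in sentence.split()]
    return [w for w in words if w and w not in STOPWORDS]

def build_training_examples(story_sentences: list[str], planner_prompt_size: int = 3):
    # Per-sentence keyword lists paired with the story text train the text generator.
    keyword_lists = [extract_keywords(s) for s in story_sentences]
    text_generator_example = (keyword_lists, story_sentences)
    # A small random sample of those keywords, paired with the full keyword plan,
    # trains the planner to expand a few prompts into a complete story plan.
    all_keywords = [kw for kws in keyword_lists for kw in kws]
    sampled = random.sample(all_keywords, min(planner_prompt_size, len(all_keywords)))
    planner_example = (sampled, keyword_lists)
    return text_generator_example, planner_example
```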

A Transformer-based coherence ranker filters the text generator’s outputs, so that only the stories that exhibit the highest quality in terms of plot coherence (e.g., character and event consistency) are selected. The same model is also used to automatically evaluate the overall quality of generated stories.
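
In code, the filtering step amounts to scoring candidate stories and keeping the best ones. In the sketch below, the scoring function is only a toy placeholder for the Transformer-based ranker, which is not publicly available.

```python
def score_coherence(story_scenes: list[str]) -> float:
    """Toy placeholder for the Transformer-based coherence ranker."""
    # A real ranker would model character and event consistency across scenes;
    # this proxy just penalizes stories that introduce many distinct capitalized names.
    names = {w for scene in story_scenes for w in scene.split() if w.istitle()}
    return 1.0 / (1.0 + len(names))

def filter_stories(candidates: list[list[str]], keep: int = 1) -> list[list[str]]:
    """Keep only the highest-scoring generated stories."""
    return sorted(candidates, key=score_coherence, reverse=True)[:keep]
```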

Scene generation

Because training data for the scene generation module was scarce, the Alexa researchers use a pipelined sequence of models to compose the illustrations. Pipelined architectures tend to work better with less data.

Before being sent to the scene generation model, the story text passes through two natural-language-processing (NLP) modules, which perform coreference resolution and dependency parsing, respectively. The coreference resolution module determines the referents of pronouns and other indicative words and rewrites the text accordingly. For instance, if the mermaid mentioned in scene one is referred to as “she” in scene two, the module rewrites “she” as “the mermaid”, to make it easier for the scene generator to interpret the text.
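
A minimal sketch of that rewrite step follows. The cluster format (token-index spans grouped by entity) is an assumption about what a coreference model would output; the rewriting logic itself is just a simple substitution.

```python
# Sketch of the pronoun-rewriting step that follows coreference resolution.
# The cluster format is hypothetical; in practice it would come from a coref model.

def rewrite_pronouns(tokens: list[str], clusters: list[list[tuple[int, int]]]) -> list[str]:
    out = list(tokens)
    for cluster in clusters:
        # Use the first (usually most descriptive) mention as the canonical name.
        start, end = cluster[0]
        canonical = " ".join(tokens[start:end + 1])
        for s, e in cluster[1:]:
            if s == e and tokens[s].lower() in {"she", "he", "it", "they", "her", "him"}:
                out[s] = canonical
    return out

tokens = "The mermaid found a pearl . She smiled .".split()
clusters = [[(0, 1), (6, 6)]]  # "The mermaid" ... "She"
print(" ".join(rewrite_pronouns(tokens, clusters)))
# -> "The mermaid found a pearl . The mermaid smiled ."
```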

The dependency parser produces a graph that represents the relationships between objects mentioned in the text. For instance, if the text said, “The octopus swam under the boat”, nodes representing the objects “octopus” and “boat” would be added to the graph, connected by a directional edge labeled “under”. Again, this makes the text easier for the scene generator to interpret.
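
The sketch below shows how such relation edges could be read off a dependency parse, using spaCy as a stand-in parser (the article does not name the parser actually used).

```python
# Sketch: extract (object, relation, object) edges from a dependency parse.
import spacy

nlp = spacy.load("en_core_web_sm")

def relation_edges(text: str) -> list[tuple[str, str, str]]:
    doc = nlp(text)
    edges = []
    for token in doc:
        # Prepositions like "under" attach to a verb and carry a prepositional object.
        if token.dep_ == "prep":
            objects = [child for child in token.children if child.dep_ == "pobj"]
            subjects = [child for child in token.head.children if child.dep_ == "nsubj"]
            for subj in subjects:
                for obj in objects:
                    edges.append((subj.text, token.text, obj.text))
    return edges

print(relation_edges("The octopus swam under the boat."))
# -> [('octopus', 'under', 'boat')]
```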

On the basis of the generated text, the scene generator will select a background and place the appropriate figures on it with the appropriate scale and orientation.

The first step in the scene generation pipeline is to select a background image, based on the outputs of the NLP modules and the customer’s choice of theme. The library of background images includes both artist-rendered and AI-generated images.

Next, the NLP modules’ outputs pass to a model that determines which elements from the library of designed objects the scene should contain. With that information in hand — along with visual context — another model chooses the scale and orientation of the objects and places them at specific (x, y) coordinates on the selected background image.
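
A compositional scene can be represented very simply. The sketch below shows one possible data structure, with hard-coded asset names and coordinates standing in for the choices that learned models make in the real pipeline.

```python
# Hypothetical representation of a composed scene: a background plus placed objects.
from dataclasses import dataclass

@dataclass
class Placement:
    asset: str              # key into the curated object library
    x: float                # screen coordinates, normalized to [0, 1]
    y: float
    scale: float
    flip_horizontal: bool   # orientation

@dataclass
class Scene:
    background: str
    placements: list[Placement]

# In the real pipeline these choices come from models conditioned on the NLP
# modules' outputs and the chosen theme; here they are hard-coded for illustration.
scene = Scene(
    background="underwater_reef",
    placements=[
        Placement("octopus", x=0.40, y=0.65, scale=0.8, flip_horizontal=False),
        Placement("boat", x=0.45, y=0.20, scale=1.0, flip_horizontal=True),
    ],
)
```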

Many of the images in the library are animated: for instance, fish placed on the underwater background will flick their tails. But these animations are part of the image design. The scene generator can change the fish's orientations and locations, but the animations themselves are predefined and execute algorithmically, rather than being generated by the model.

Music

To ensure the diversity and quality of the background music for the stories, the Alexa researchers created a large library of instrumental parts. At run time, the system can automatically combine parts to create a theme and instrumental signature for each hero character.

The library includes high-quality artist-created chord progressions, harmonies, and rhythms, which an AI melody generator can use to produce melodies of similar quality that match the instrumentation of existing parts. The AI-created melodies are generated offline and stored in the library with the other musical assets.

In the library, the assets are organized by attributes such as chord progression, rhythm, and instrument type. An AI musical-arrangement system ensures that all the pieces fit together.

Like the illustration module, the music generation model processes text inputs in two ways. A text-to-speech model computes the time it will take to read the text, and a paralinguistic-analysis model scores the text along multiple axes, such as calm to exciting and sad to happy. Both models’ outputs serve as inputs to the musical-arrangement system and help determine the duration and character of the background music.
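
A toy version of that selection logic is sketched below: narration length is estimated with a words-per-second heuristic rather than a text-to-speech model, and the mood scores and library tags are invented for illustration.

```python
# Sketch: use estimated narration duration and mood scores to pick a music asset.

def estimate_duration_seconds(text: str, words_per_second: float = 2.5) -> float:
    # Crude stand-in for the text-to-speech duration estimate.
    return len(text.split()) / words_per_second

def select_music(text: str, library: list[dict], excitement: float, happiness: float) -> dict:
    """Pick the library asset whose mood tags and length best match the scene."""
    duration = estimate_duration_seconds(text)
    def mismatch(asset: dict) -> float:
        return (abs(asset["excitement"] - excitement)
                + abs(asset["happiness"] - happiness)
                + abs(asset["duration"] - duration) / 60.0)
    return min(library, key=mismatch)

library = [
    {"name": "calm_harp_loop", "excitement": 0.2, "happiness": 0.7, "duration": 30},
    {"name": "bright_marimba", "excitement": 0.8, "happiness": 0.9, "duration": 25},
]
print(select_music("The silly octopus juggled five shells!", library,
                   excitement=0.9, happiness=0.8)["name"])
# -> "bright_marimba"
```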

Guardrails

Beyond the compositional approach to scene generation, the researchers adopted several other techniques to ensure that the various AI models’ outputs were age appropriate.

First, they curated the data used to train the models, manually and automatically screening out offensive content. Second, they limited the input prompts for story creation to pre-curated selections. Third, they filter the models’ outputs to automatically identify and remove inappropriate content.
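
Output filtering of this kind typically combines simple blocklists with learned safety classifiers. The sketch below shows that common pattern only as an illustration; the filters actually used in the Alexa experience are not described in detail.

```python
# Illustrative output-side filter; terms and thresholds are placeholders.
BLOCKLIST = {"blocked_term_1", "blocked_term_2"}

def passes_filters(story_text: str, safety_score: float, threshold: float = 0.1) -> bool:
    """Reject stories that contain blocked terms or score as inappropriate."""
    words = {w.strip(".,!?").lower() for w in story_text.split()}
    if words & BLOCKLIST:
        return False
    # `safety_score` is assumed to come from a separate content-safety classifier.
    return safety_score < threshold
```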

In addition, use of the Alexa story creation experience will require parental consent, which parents will be able to provide through the Alexa app.

Together, these measures are intended to make the new Alexa story creation experience both safe and delightful.

[Editor’s note: The Create with Alexa service was officially launched on Nov. 29 for Echo Show devices in the United States. In September, Amazon Science explored the science behind the new service, including how scene generation works and how researchers worked to ensure the experience is age appropriate.]




