Converting text to images for product discovery


Generative adversarial networks (GANs), first introduced in 2014, have proven remarkably successful at generating synthetic images. A GAN consists of two networks: one that tries to produce convincing fakes and one that tries to distinguish fakes from real examples. The two networks are trained together, and the competition between them can converge quickly on a useful generative model.
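To make the adversarial setup concrete, here is a minimal, generic training-loop sketch in PyTorch in which a generator and a discriminator are optimized against each other. It illustrates the general GAN recipe only; the network sizes and losses are illustrative assumptions, not the model described in our paper.

```python
# Minimal sketch of the two-network GAN game: a generator maps noise to fake
# images, a discriminator scores real vs. fake, and the two are trained together.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw real/fake logit
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise)

    # Discriminator: push real examples toward 1, generated fakes toward 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its fakes as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```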

In a paper that was accepted to IEEE’s Winter Conference on Applications of Computer Vision, we describe a new use of GANs to generate examples of clothing that match textual product descriptions. The idea is that a shopper could use a visual guide to refine a text query until it reliably retrieved the product for which she or he was looking.

So, for instance, a shopper could search on “women’s black pants”, then add the word “petite”, then the word “capri”, and with each new word, the images on-screen would adjust accordingly. The ability to retain old visual features while adding new ones is one of the novelties of our system. The other is a color model that yields images whose colors better match the textual inputs.

The output of our image generator (bottom) and that of the traditional StackGAN model. Ours better preserves existing visual features when new ones are added and renders color more accurately.


We tested our model’s performance against those of four different baseline systems that use a popular text-to-image GAN called StackGAN. We used two metrics that are common in studies of image-generating GANs, inception score and Fréchet inception distance. On different image attributes, our model’s inception scores were between 22% and 100% higher than those of the best-performing baselines, while its Fréchet inception distance was 81% lower. (Lower is better.)
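For reference, Fréchet inception distance compares the statistics of Inception-network features extracted from real and generated images. The sketch below shows one common way to compute it from pre-extracted feature arrays; the feature extraction step itself is assumed to have been done with a pretrained Inception network.

```python
# Sketch of the Fréchet inception distance between two sets of
# Inception-network features (feature extraction itself is assumed).
import numpy as np
from scipy import linalg

def frechet_inception_distance(feats_real, feats_fake):
    """feats_*: (N, D) arrays of pooled Inception activations."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)

    # Matrix square root of the product of the two covariance matrices.
    covmean = linalg.sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_f
    return diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean)
```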

Our model is in fact a modification of StackGAN. StackGAN simplifies the problem of synthesizing an image by splitting it into two parts: first, it generates a low-res image directly from text; second, it upsamples that image to produce a higher-res version, with added texture and more-natural coloration. Each of these procedures has its own GAN, and stacking the two GANs gives the model its name.
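The two-stage idea can be sketched as a pair of generators: one maps a text embedding and noise to a low-resolution image, and the other upsamples that image conditioned on the same text. The modules and shapes below are illustrative assumptions, not the published StackGAN architecture, and each stage would be paired with its own discriminator during training.

```python
# Schematic of the two-stage generation idea; shapes and modules are
# illustrative, not the published architecture.
import torch
import torch.nn as nn

class StageOneGenerator(nn.Module):
    """Text embedding + noise -> low-resolution image (e.g., 64x64)."""
    def __init__(self, text_dim=256, noise_dim=100):
        super().__init__()
        self.fc = nn.Linear(text_dim + noise_dim, 64 * 64 * 3)

    def forward(self, text_emb, noise):
        x = torch.cat([text_emb, noise], dim=1)
        return torch.tanh(self.fc(x)).view(-1, 3, 64, 64)

class StageTwoGenerator(nn.Module):
    """Low-res image + text embedding -> upsampled, higher-res image."""
    def __init__(self, text_dim=256):
        super().__init__()
        self.text_to_map = nn.Linear(text_dim, 64 * 64)
        self.refine = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="nearest"),   # 64x64 -> 256x256
            nn.Conv2d(3 + 1, 3, kernel_size=3, padding=1),
            nn.Tanh(),
        )

    def forward(self, low_res, text_emb):
        # Broadcast the text embedding as an extra spatial channel.
        text_map = self.text_to_map(text_emb).view(-1, 1, 64, 64)
        return self.refine(torch.cat([low_res, text_map], dim=1))

text_emb, noise = torch.randn(2, 256), torch.randn(2, 100)
low = StageOneGenerator()(text_emb, noise)     # (2, 3, 64, 64)
high = StageTwoGenerator()(low, text_emb)      # (2, 3, 256, 256)
```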

We add another component to this model: a long short-term memory, or LSTM. LSTMs are neural networks that process sequential inputs in order. The output corresponding to a given input factors in both the inputs and the outputs that preceded it. Training an LSTM together with a GAN in an adversarial setting enables our network to refine images as successive words are added to the text inputs. Because an LSTM is an example of a recurrent neural network, we call our system ReStGAN, for recurrent StackGAN.
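As a rough illustration of the recurrent conditioning, the sketch below runs an LSTM over a tokenized query and exposes one hidden state per prefix of the query; each of those states could serve as the conditioning vector for image generation as words are added. The module names, sizes, and token IDs are assumptions for illustration, not the paper's code.

```python
# Illustrative sketch: an LSTM produces one conditioning vector per query
# prefix, so each added word updates the conditioning of the generator.
import torch
import torch.nn as nn

class SequentialTextEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len); returns one hidden state per prefix.
        outputs, _ = self.lstm(self.embed(token_ids))
        return outputs  # (batch, seq_len, hidden_dim)

encoder = SequentialTextEncoder(vocab_size=10_000)
query = torch.tensor([[11, 42, 7]])   # e.g., tokens for "women's black pants"
conditions = encoder(query)           # a conditioning vector after each word
# conditions[:, 1] would condition generation on the first two words,
# conditions[:, -1] on the full query, and so on.
```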

Synthesizing an image from a text description is a difficult problem, and to make it more manageable, we restricted ourselves to three similar product classes: pants, jeans, and shorts. We also standardized the images used to train our model, removing backgrounds and cropping and resizing images so that they were alike in shape and scale.
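A rough sketch of that kind of standardization, using Pillow to center-crop and resize every image to a common shape, might look like this; background removal is assumed to happen in a separate step.

```python
# Sketch of image standardization: crop and resize to a common shape.
# Background removal is assumed to be handled elsewhere.
from PIL import Image, ImageOps

def standardize(path, out_size=(256, 256)):
    img = Image.open(path).convert("RGB")
    # Center-crop to the target aspect ratio, then resize.
    return ImageOps.fit(img, out_size, Image.LANCZOS)
```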

Auxiliaries

The training of our model was largely unsupervised, meaning that the training data consisted mainly of product titles and standardized images, which didn’t require any additional human annotation. But to increase the stability of the system, we use an auxiliary classifier to classify images generated by our model according to three properties: apparel type (pants, jeans, or shorts), color, and whether they depict men’s, women’s, or unisex clothing. The auxiliary classifier provides additional feedback during training and helps the model handle the complexity introduced by sequential inputs.
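One common way to implement such an auxiliary classifier, in the spirit of AC-GAN, is to attach one classification head per attribute to a shared feature vector and add the resulting cross-entropy losses to the training objective. The head sizes and class counts below are illustrative assumptions, not the paper's values.

```python
# Illustrative auxiliary classification heads over shared image features;
# class counts and feature size are assumptions, not the paper's values.
import torch.nn as nn

class AuxiliaryClassifier(nn.Module):
    def __init__(self, feat_dim=512, n_types=3, n_colors=16, n_genders=3):
        super().__init__()
        self.type_head = nn.Linear(feat_dim, n_types)      # pants / jeans / shorts
        self.color_head = nn.Linear(feat_dim, n_colors)    # clustered color classes
        self.gender_head = nn.Linear(feat_dim, n_genders)  # men's / women's / unisex

    def forward(self, features):
        return (self.type_head(features),
                self.color_head(features),
                self.gender_head(features))

classifier = AuxiliaryClassifier()
ce = nn.CrossEntropyLoss()

def auxiliary_loss(features, type_labels, color_labels, gender_labels):
    # Extra feedback added to the adversarial objective during training.
    t, c, g = classifier(features)
    return ce(t, type_labels) + ce(c, color_labels) + ce(g, gender_labels)
```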

In most AI systems that process text — including ours — textual inputs are embedded, or mapped to points in a representational space such that words with similar meanings tend to cluster together. Traditional word embeddings group color terms together, but not in a way that matches human perceptual experience. The way we encode color is another innovation of our work.

Six different images, all generated from the text string “women’s black pants”. The three on the left were produced by our model, the three on the right by a standard StackGAN model.


We cluster colors in a representational space called LAB, which was explicitly designed so that the distance between points corresponds to perceived color differences. Using that clustering, we create a lookup table that maps visually similar color terms in the textual descriptions to the same color feature. This mapping ensures that, for a given description, the images we generate vary only in shade within the same color, rather than in color altogether. It also makes the training of the model more manageable by reducing the number of color categories that it needs to learn.
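A minimal sketch of that kind of color lookup clusters representative RGB swatches in LAB space with k-means so that near-identical color terms share a single category; the swatch values and cluster count below are assumptions for illustration.

```python
# Sketch: cluster RGB swatches in LAB space with k-means to build a
# color lookup table; swatch values and cluster count are assumptions.
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import KMeans

# Representative RGB swatches for color words seen in product titles.
color_words = ["black", "charcoal", "navy", "royal blue", "red"]
rgb_swatches = np.array([[0.05, 0.05, 0.05],
                         [0.21, 0.22, 0.23],
                         [0.00, 0.00, 0.40],
                         [0.25, 0.41, 0.88],
                         [0.80, 0.05, 0.05]])

# Convert to LAB, where Euclidean distance tracks perceived color difference.
lab = rgb2lab(rgb_swatches.reshape(1, -1, 3)).reshape(-1, 3)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(lab)
lookup = dict(zip(color_words, kmeans.labels_))
# e.g., "black" and "charcoal" are likely to share a cluster ID, so their
# descriptions map to the same color feature during training.
```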

Inception score — one of the two metrics we used in our experiments — evaluates generated images according to two criteria: recognizability and diversity. The recognizability score is based on the confidence of an existing computer vision model in classifying the image. We used three different inception scores, one for each of the three characteristics the classifier is trained to identify: type, color, and gender.
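For reference, the inception score is the exponential of the average KL divergence between each image's predicted class distribution and the marginal class distribution over all generated images. A minimal sketch, assuming per-image class probabilities from a pretrained classifier are already available:

```python
# Sketch of the inception score, computed from per-image class
# probabilities p(y|x) produced by a pretrained classifier (assumed given).
import numpy as np

def inception_score(probs, eps=1e-12):
    """probs: (N, C) array, each row a class distribution for one image."""
    marginal = probs.mean(axis=0)                   # p(y): the diversity term
    kl = probs * (np.log(probs + eps) - np.log(marginal + eps))
    return float(np.exp(kl.sum(axis=1).mean()))     # exp of mean KL(p(y|x) || p(y))
```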

On the type and gender inception scores, ReStGAN yielded 22% and 27% improvements, respectively, over the scores of the best-performing StackGAN models. But on the color inception score, the improvement was 100%, indicating the utility of our color model.




