Product search algorithms, like the ones that help customers place orders through Alexa, aim to return the products that are most relevant to users’ queries, where relevance is usually interpreted as “anything that satisfies the user’s needs”. A common way to estimate customer satisfaction is to rely on the judgment of human annotators. (We annotate only a small fraction of 1% of interactions.)
We’ve noticed that customers frequently engage with voice shopping results that annotators label as irrelevant. At this year’s ACM Conference on Web Search and Data Mining (WSDM), in February, we will present a systematic analysis of this phenomenon.
Our paper presents hypotheses about the reasons for customers’ engagement with seemingly irrelevant results, data that support those hypotheses, and suggestions about how our findings could improve the algorithms voice assistants use to help customers discover products.
The paper explains our methodology in detail, but briefly, my colleagues — David Carmel, Elad Haramaty, Arnon Lazerson, and Yoelle Maarek — and I analyzed two different types of customer interactions with seemingly irrelevant products. The first was purchases, and the second was “engagements”, defined as interactions such as sending a search result to a cell phone or adding a product to the shopping cart.
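For illustration only, here is a minimal sketch of how logged interactions could be bucketed into those two outcome types; the action names are hypothetical stand-ins, not the labels used in our data.

```python
# Illustrative sketch only: bucketing logged interactions into the two outcome
# types analyzed in the paper. Action names are hypothetical stand-ins.
PURCHASE_ACTIONS = {"purchase"}
ENGAGEMENT_ACTIONS = {"send_to_phone", "add_to_cart"}

def interaction_type(action: str) -> str:
    """Map a logged action to 'purchase', 'engagement', or 'other'."""
    if action in PURCHASE_ACTIONS:
        return "purchase"
    if action in ENGAGEMENT_ACTIONS:
        return "engagement"
    return "other"
```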
Our analysis suggests that customers’ likelihood of buying or engaging with seemingly irrelevant products is higher for products that are broadly popular on Amazon and for products that are cheaper than the “relevant” products for a given query. Customers are also much more likely to buy/engage with irrelevant products in a few categories (such as toys and digital products) than in others (such as beauty products and groceries).
We also found that customers who have issued either very short or unusually long queries tend to be more flexible than customers whose queries are of medium length. We used query length as a proxy for specificity: a general (short) query suggests uncertainty and willingness to explore; a specific (long) query decreases the likelihood of an exact match, which makes settling for an approximate match more probable.
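To make that proxy concrete, a simple way to bucket queries by length might look like the sketch below; the token-count thresholds are arbitrary illustrative assumptions, not values from our analysis.

```python
def specificity_bucket(query: str, short_max: int = 2, long_min: int = 6) -> str:
    """Bucket a query by token count as a crude proxy for specificity.

    The thresholds are arbitrary illustrative choices, not values from the paper.
    """
    n_tokens = len(query.split())
    if n_tokens <= short_max:
        return "short"   # general query: uncertainty, willingness to explore
    if n_tokens >= long_min:
        return "long"    # very specific query: an exact match is unlikely
    return "medium"      # medium-length queries showed the least flexibility
```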
Finally, we considered what we call indirect relationships between relevant and irrelevant products. For example, two products have an indirect relationship if they are of the same type, brand, or category or if they tend to be purchased together. We used two different measures of indirect relationship, one based on the meanings of descriptive terms and one based on purchase history. Both correlated with increased likelihood of buying/engaging with seemingly irrelevant results.
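As a rough sketch of what such measures could look like (not the exact formulations in the paper), one could score semantic relatedness with a similarity over the products’ descriptive terms and co-purchase relatedness from order history:

```python
import math
from collections import Counter

def semantic_relatedness(desc_a: str, desc_b: str) -> float:
    """Cosine similarity of term-count vectors built from product descriptions.

    A stand-in for a measure based on the meanings of descriptive terms; a real
    system would likely use learned embeddings rather than raw term counts.
    """
    a, b = Counter(desc_a.lower().split()), Counter(desc_b.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def copurchase_relatedness(orders: list[set[str]], item_a: str, item_b: str) -> float:
    """Jaccard overlap of the orders containing each item, as a rough proxy
    for 'tend to be purchased together'."""
    with_a = {i for i, order in enumerate(orders) if item_a in order}
    with_b = {i for i, order in enumerate(orders) if item_b in order}
    union = with_a | with_b
    return len(with_a & with_b) / len(union) if union else 0.0
```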
The value of “irrelevance”
After performing our statistical analyses, we conducted a pair of experiments to assess the value of including seemingly irrelevant products in our search results. First, we identified 1,500 queries, each associated with one relevant and one irrelevant product, and considered the results of applying five different product selection strategies to all of them.
The first strategy, Optimal, always selects the product that leads to the higher purchase level or engagement level, depending on which we’re measuring. (The engagement/purchase level is the ratio of interactions that resulted in engagement/purchase actions to all the interactions in our data sample.)
Relevant always returns the relevant product, while Irrelevant always returns the irrelevant product. Random arbitrarily selects between the two, and Worst always returns the product that leads to the lower purchase or engagement level. We normalized the results of the other four strategies against Optimal.
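A minimal sketch of how the five strategies and the normalization could be computed, assuming each query is summarized by the engagement or purchase level of its relevant and irrelevant product (the function and field names are illustrative, not from our actual pipeline):

```python
import random

def level(positive_interactions: int, total_interactions: int) -> float:
    """Engagement or purchase level: the share of a product's interactions
    that resulted in the action of interest."""
    return positive_interactions / total_interactions if total_interactions else 0.0

def select(strategy: str, relevant_level: float, irrelevant_level: float) -> float:
    """Level achieved on one query by each product selection strategy."""
    if strategy == "Optimal":
        return max(relevant_level, irrelevant_level)
    if strategy == "Worst":
        return min(relevant_level, irrelevant_level)
    if strategy == "Relevant":
        return relevant_level
    if strategy == "Irrelevant":
        return irrelevant_level
    if strategy == "Random":
        return random.choice([relevant_level, irrelevant_level])
    raise ValueError(f"unknown strategy: {strategy}")

def normalized_results(queries: list[tuple[float, float]]) -> dict[str, float]:
    """Average each strategy's level over all queries, normalized by Optimal."""
    strategies = ["Optimal", "Relevant", "Irrelevant", "Random", "Worst"]
    totals = {s: sum(select(s, r, i) for r, i in queries) for s in strategies}
    optimal = totals["Optimal"] or 1.0  # avoid division by zero
    return {s: totals[s] / optimal for s in strategies}
```

In this framing, the Relevant strategy scores below 1.0 whenever the irrelevant product sometimes outperforms the relevant one, which is exactly the gap discussed below.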
As can be seen in the table at left, for both engagement and purchase, there’s a significant gap between the levels achieved by selecting only relevant results and the optimal levels, which involve purchases of and engagement with seemingly irrelevant results.
In another experiment, we used the same 1,500 queries to train three different machine learning models: one learned to maximize relevance, the second to maximize purchase level, and the third to maximize engagement level. Then we built two fusion models, one that combined the relevance model and the engagement model and one that combined the relevance model and the purchase model.
Each fusion model can be tuned to assign different weights to the outputs of the two models that compose it. In the relevance-purchase fusion model, for instance, setting the relevance and purchase-level weights to 1 and 0, respectively, would yield the output of the relevance model alone; setting both models’ weights to .5 would yield an even blend of both models’ outputs. For both fusion models, we swept through a range of weights and charted the results.
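As a sketch of the idea, assuming a simple linear combination and with placeholder scoring functions standing in for the trained models, a fusion and weight sweep might look like this:

```python
from typing import Callable

Scorer = Callable[[str, str], float]  # (query, product) -> score; illustrative signature

def fuse(relevance_score: Scorer, purchase_score: Scorer,
         w_relevance: float, w_purchase: float) -> Scorer:
    """Linear fusion of two scorers: weights (1, 0) recover the relevance model
    alone, and (0.5, 0.5) blend the two models' outputs evenly."""
    def fused(query: str, product: str) -> float:
        return (w_relevance * relevance_score(query, product)
                + w_purchase * purchase_score(query, product))
    return fused

def weight_sweep(relevance_score: Scorer, purchase_score: Scorer, steps: int = 10):
    """Sweep the purchase-model weight from 0 to 1 and yield the fused scorers,
    so relevance and purchase level can be charted against each other."""
    for k in range(steps + 1):
        w = k / steps
        yield w, fuse(relevance_score, purchase_score, 1.0 - w, w)
```

Sweeping the weight in this way traces out a relevance-versus-purchase-level curve like the one in the chart discussed next.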
As the chart at right shows, there’s a trade-off between relevance and purchase/engagement level: improving performance on one criterion comes at the expense of the other. Relevance still matters, though: if results do not satisfy the customer’s needs but appear to be relevant, the customer may understand and possibly excuse the shortfall. At the same time, purchase and engagement levels capture a more subjective type of relevance that human annotators cannot assess, and optimizing for them may yield more satisfying product recommendations.
The models we used to assess the trade-off between relevance and purchase/engagement level were fairly crude. A more sophisticated machine learning model should be able to achieve better results, particularly if it is explicitly trained to consider some of the factors we identified above, such as query length, price, and indirect relationships.
While still preliminary, our results provide new insights into how to design product search algorithms, suggesting that both objective relevance and purchase/engagement factors should be considered when returning results to customers.