RecSys 2022: “Recommenders are ubiquitous”


The ACM Conference on Recommender Systems (RecSys), the leading conference in the field of recommendation systems, takes place this week, and two Amazon scientists — Max Harper, a senior applied scientist, and Vanessa Murdock, a senior applied-science manager, both in the Alexa Shopping organization — are among the conference’s three general chairs, along with Jennifer Golbeck of the University of Maryland. Harper and Murdock spoke to Amazon Science about the conference program and what it indicates about the state of research on recommender systems.

Amazon Science: Can you tell us a little bit about RecSys?

Max Harper: RecSys has been around for a long time — since the ’90s — and it’s a community that’s interested in both algorithms and applications of machine learning techniques that model the behavior of users. In particular, RecSys focuses on domains where the definition of the best thing for the model to return depends on which person you ask. So it’s personalized.

Senior applied scientist Max Harper (left) and senior applied-science manager Vanessa Murdock, both of the Alexa Shopping organization, are two of the three general chairs at this year’s RecSys.

The classical applications include movies, music, and books, which are obviously taste-driven domains. But these days, it’s expanded into tons of areas, including travel, fashion, and job finding.

In addition to algorithms and applications, I'd say about 20% of the field is interested in people: how people perceive recommendations, how to design user interfaces that work well, and how to shape the user experience in a variety of ways.

There's also a whole host of machine learning issues that come along with it, including how to measure performance, how to scale the algorithms, and how to preserve users' privacy. And finally, an increasingly important issue is the societal impact of these algorithms.


Vanessa Murdock: I sit between the fields of search and recommendation, and they’re somewhat different in that recommendations can be made even if the user isn’t asking for them, whereas search is usually in response to a request.

Recommenders are ubiquitous — they're in many of the apps and tools we use every day. For example, if you're looking for a coffee in Seattle and you look at a map, the first view of the map will show you some points of interest, and then, if you zoom in, you'll see more. You can view those first points of interest as recommendations, but it's not what you usually think of as a recommender.


All of this research on deciding what people would like to engage with has had significant influence on online commerce, ads, and sponsored placements. Your Instagram and TikTok feeds are all recommendations. Your Twitter feed is a set of recommended tweets. It's central to our experience with the digital world in everything that we do.

AS: In 2017, when IEEE Internet Computing celebrated its 20th anniversary, it gave its test-of-time award to Amazon’s 2003 paper on item-to-item collaborative filtering. How has the field evolved since that paper?

MH: The concept of collaborative filtering is still very, very relevant. These days, matrix factorization techniques are much more common; you use them to complete an item-customer matrix. But it’s essentially the same class of techniques.

There's a paper at this year's RecSys, “Revisiting the performance of iALS on item recommendation benchmarks”, and it's part of the RecSys replicability track, which is kind of a unique thing at RecSys. The paper has to do with matrix factorization, which the field thinks of as an old-fashioned technique. And the point the authors make is that a well-tuned matrix factorization algorithm can hold its own against a whole range of more modern deep-learning algorithms.
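To make the matrix-completion idea concrete, here is a minimal alternating-least-squares sketch on a toy customer-item matrix. It illustrates the general technique only; it is not the iALS method benchmarked in the paper, and the data and hyperparameters below are made up.

```python
import numpy as np

# Toy customer-item interaction matrix (1 = the customer engaged with the item).
# Hypothetical data for illustration only.
R = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
], dtype=float)

n_customers, n_items = R.shape
k, reg, n_iters = 2, 0.1, 20          # latent dimensions, regularization, sweeps
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_customers, k))   # customer factors
V = rng.normal(scale=0.1, size=(n_items, k))       # item factors

for _ in range(n_iters):
    # Alternately solve the regularized least-squares problem for each factor
    # matrix while the other is held fixed (plain ALS; iALS additionally
    # weights observed entries differently from unobserved ones).
    U = R @ V @ np.linalg.inv(V.T @ V + reg * np.eye(k))
    V = R.T @ U @ np.linalg.inv(U.T @ U + reg * np.eye(k))

# Completed matrix: predicted affinity for every customer-item pair,
# including pairs with no observed interaction.
print(np.round(U @ V.T, 2))
```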


VM: The reproducibility track at RecSys is especially good because a lot of reported research is incremental gains over many years. In every paper, the numbers always go up, and the results are always significant, but the improvements don’t always add up over time. Having a reproducibility track really sets RecSys apart. It means that as we are making gains in some area, we can look back and say, “Is this really true?”

In my own work, I’ve found that when I’ve tried to reproduce work from other people, the results depend on the collection or the queries or the system parameters. And that’s not what a scientific advance really should be. So I think that that’s a very important track, and more conferences should add it.

Sequential recommendation

AS: What are some of the newer ideas in the field that you find most intriguing?

MH: If I were to pick the number one thing that seems to have taken over the conference, it would be the application of techniques from natural-language processing to the field of recommender systems. In particular, Transformers and large language models like BERT have been adapted to the context of recommendations in an interesting way.


Essentially, these language models learn the semantics of sentences by modeling which words go with which other words, and you can take an analogous approach in the field of recommendations by looking at, not sentences of words, but sequences of items — for example, products at Amazon or movies at Netflix that users engage with. And by using similar training techniques to what they use in NLP, they can solve problems like next-item prediction: given that the user has looked at these three products most recently, what’s the product that they’re most likely to look at next?


That concept is called sequential recommendation, and it is everywhere at RecSys this year.

AS: Does sequential recommendation use the same kind of masked training that language models do?

MH: Yeah, it does. You take a sequence of user behavior, hide one of the items that the user actually interacted with, and try to predict it from the rest of the sequence.
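As a rough illustration of that masked-training idea applied to item sequences: the helper and item names below are hypothetical, and a real system would mask item IDs and feed them to a Transformer rather than manipulating strings.

```python
import random

MASK = "<MASK>"

def mask_one_item(sequence, rng=random):
    """Hide one interaction from a user's history so a model can be trained
    to recover it, analogous to BERT-style masked training on sentences.
    Illustrative sketch only."""
    position = rng.randrange(len(sequence))
    masked = list(sequence)
    target = masked[position]
    masked[position] = MASK
    return masked, position, target

# Hypothetical interaction history.
history = ["running-shoes", "water-bottle", "gps-watch", "headphones"]
inputs, pos, label = mask_one_item(history)
print(inputs, "-> predict", label, "at position", pos)
```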

AS: How is that approach adapted to the new setting?

MH: Two examples I can think of: One is that there aren't necessarily natural boundaries in a sequence of user interactions, so you might be tempted to look at the entire sequence of interactions in order to predict the next one. Researchers are looking at the degree to which recency is important in next-item recommendation.

Another one is that sentences are more predictable: if you’re missing a word in a sentence, it’s more likely that a human could guess what that word is. With a sequence of item clicks or ratings or purchases, there might be a lot of noise with certain items.


Yusan Lin, who joined Amazon Fashion this year as an applied scientist, is a coauthor on a RecSys paper called “Denoising self-attentive sequential recommendation”, and it’s about that concept: how do you find those items that are potentially harmful to the performance of the system and essentially hide them from the training so that the system learns more of a clean language, if you will, of what people are interested in?

VM: Sometimes the sequence of interactions is way too predictable. In e-commerce, if you think about reordering, where, say, you order the same brand of coffee absolutely every week, there’s not really a benefit to recommending that coffee to you, even though it’s very accurate. So there’s some subtlety in there when we’re talking about predicting the next recommendation — the next good recommendation — from a sequence of user interactions.
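A toy illustration of that point: before ranking candidates, a recommender might screen out items the customer already reorders on a regular cadence. The helper function and threshold below are hypothetical, just to show the shape of the idea.

```python
from collections import Counter

def filter_trivial_reorders(candidates, purchase_history, reorder_threshold=4):
    """Drop candidate recommendations the customer already buys routinely
    (e.g., the same coffee every week): predicting them is accurate but
    adds little value. Hypothetical helper, for illustration only."""
    counts = Counter(purchase_history)
    return [item for item in candidates if counts[item] < reorder_threshold]

history = ["coffee"] * 6 + ["filters", "mug"]
print(filter_trivial_reorders(["coffee", "grinder", "mug"], history))
# -> ['grinder', 'mug']
```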

Fairness

AS: Vanessa, are there any other recent research trends that you find particularly interesting?

VM: In the last, say, 10 years, the attention that researchers have been paying to bias and fairness is tremendously important. As we get better at predicting what people need, and as we become more embedded in everyday life, the effort to make sure that we’re not introducing unintended biases is very, very important. It’s a hard problem, and I’m very happy to see attention to that.

AS: What kind of approaches do people take to that problem?

VM: The first thing is that the researcher actually has to be aware of the problem. A lot of times the data is very large, and the items you are trying to predict are a very small subset. Suppose that you have a group of people who have blue hair, and they’re very interested in products for blue hair. You can imagine they are a tiny, tiny proportion of your data. If your recommender is based on what most people like, you’re never going to offer them anything for their blue hair.

It’s a class of problems called unknown unknowns, where there’s a small positive class, but you don’t know how big it is, and you don’t have a way to find that in your data. You know there are some people with blue hair because they’ve interacted with blue-hair things, but you don’t know how many of your customers actually have blue hair.


Some approaches for that are to sample in a clever way or to create synthetic data or to do domain adaptation, where you have a large amount of known data from some other domain that you can adapt to this new area. For instance, you have a lot of data about people who have green hair, and you can adapt that to people with blue hair.
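As a sketch of the simplest of those ideas, sampling the rare positive class more aggressively: the helper below is hypothetical, and synthetic-data generation and domain adaptation are more involved approaches that aren't shown.

```python
import random

def oversample_minority(examples, labels, target_ratio=0.3, rng=random):
    """Duplicate examples from a rare positive class (say, the hypothetical
    blue-hair shoppers) until they make up roughly target_ratio of the data.
    One simple form of 'sampling in a clever way'."""
    positives = [x for x, y in zip(examples, labels) if y == 1]
    negatives = [x for x, y in zip(examples, labels) if y == 0]
    if not positives:
        return examples, labels
    wanted = int(target_ratio * len(negatives) / (1.0 - target_ratio))
    extra = [rng.choice(positives) for _ in range(max(0, wanted - len(positives)))]
    data = [(x, 1) for x in positives + extra] + [(x, 0) for x in negatives]
    rng.shuffle(data)
    xs, ys = zip(*data)
    return list(xs), list(ys)

# 2 positive examples among 20 negatives, boosted toward a 30% share.
xs, ys = oversample_minority(list(range(22)), [1, 1] + [0] * 20)
print(sum(ys), "positives out of", len(ys))
```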

Another is to look at whether the data itself has a skew in the features. Maybe the features are accidentally correlated, or maybe something is not represented well, because the feature space for the blue-hair items is too small. Those are all things to look at.

MH: I totally agree that fairness, along with privacy and explainability, is a big topic at this year's RecSys. There's definitely research into news recommendation, which is a big, important topic for the world. There's the idea of filter bubbles, a long-hypothesized problem that we're now seeing in practice, in which personalization technology makes the range of opinions that we see online shallower and shallower. So, for instance, we'll see news that confirms our own beliefs rather than seeing a diversity of viewpoints.

There's some work on those topics at this year's RecSys. One paper in particular I thought was quite interesting, because the authors took a principled approach to what it means for a news article to be diverse. Most prior research has used a shallow, algorithmic definition of diversity that may or may not line up with what humans perceive as diversity in news articles.

So they took this more principled approach to measuring diversity using natural-language techniques. They provided a mathematical foundation for measuring the diversity of a set of articles and looked at how different algorithms actually behave on a news dataset. I think that work on fairness is really important and will be very influential in years to come.
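As a rough sketch of what an embedding-based diversity measure could look like: average pairwise cosine distance over a set of recommended articles. This is a simple stand-in for illustration, not the formulation proposed in the paper.

```python
import numpy as np

def diversity_score(article_embeddings):
    """Average pairwise cosine distance over a set of recommended articles.
    A simple stand-in for an NLP-based diversity measure."""
    X = np.asarray(article_embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    off_diagonal = sims[~np.eye(len(X), dtype=bool)]
    return float(1.0 - off_diagonal.mean())

# In practice the embeddings would come from a sentence encoder; random here.
rng = np.random.default_rng(0)
print(diversity_score(rng.normal(size=(5, 8))))
```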




