A quick guide to Amazon’s papers at ICML 2024


Amazon’s papers at the International Conference on Machine Learning (ICML) lean — like the conference as a whole — toward the theoretical. Although some papers deal with applications important to Amazon, such as anomaly detection and automatic speech recognition, most concern more-general topics related to machine learning, such as responsible AI and transfer learning. Learning algorithms, reinforcement learning, and privacy emerge as areas of particular interest.

Anomaly detection

Online adaptive anomaly thresholding with confidence sequences
Sophia Sun, Abishek Sankararaman, Balakrishnan (Murali) Narayanaswamy

“Online adaptive anomaly thresholding with confidence sequences” proposes a method for adapting anomaly detection thresholds to distribution drift. In experiments, the researchers modeled a signal as a sequence of hand-drawn numerals from the MNIST dataset and distribution drift as a change from the numeral 0 to the numeral 1. Anomalies were modeled as numerals other than 0 or 1.
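To make the adaptive-thresholding idea concrete, here is a minimal sketch in which the threshold is a high quantile recomputed over recent anomaly scores, so it tracks distribution drift. This is illustrative only: the paper's method uses confidence sequences to provide statistical guarantees, not the simple sliding window assumed here.

```python
import numpy as np

def adaptive_threshold(scores, quantile=0.95, window=200):
    """Recompute a high quantile of recent anomaly scores at each step,
    so the detection threshold adapts as the score distribution drifts.
    (Toy sketch; not the paper's confidence-sequence procedure.)"""
    thresholds = []
    for t in range(len(scores)):
        recent = scores[max(0, t - window):t + 1]
        thresholds.append(np.quantile(recent, quantile))
    return np.array(thresholds)

# Simulated drift: the score distribution shifts upward halfway through,
# and the adaptive threshold follows it.
rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 300), rng.normal(3, 1, 300)])
th = adaptive_threshold(scores)
```

A fixed threshold calibrated on the first half of this stream would flag nearly everything after the shift; the adaptive threshold rises with the drifting distribution instead.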

Automatic speech recognition

An efficient self-learning framework for interactive spoken dialog systems
Hitesh Tulsiani, David M. Chan, Shalini Ghosh, Garima Lalwani, Prabhat Pandey, Ankish Bansal, Sri Garimella, Ariya Rastrow, Björn Hoffmeister

Causal inference

Multiply-robust causal change attribution
Victor Quintas, Taha Bahadori, Eduardo Santiago, Jeff Mu, Dominik Janzing, David E. Heckerman

Code completion

REPOFORMER: Selective retrieval for repository-level code completion
Di Wu, Wasi Ahmad, Dejiao Zhang, Murali Krishna Ramanathan, Xiaofei Ma

Continual learning

MemoryLLM: Towards self-updatable large language models
Yu Wang, Yifan Gao, Xiusi Chen, Haoming Jiang, Shiyang Li, Jingfeng Yang, Qingyu Yin, Zheng Li, Xian Li, Bing Yin, Jingbo Shang, Julian McAuley

Contrastive learning

EMC2: Efficient MCMC negative sampling for contrastive learning with global convergence
Chung Yiu Yau, Hoi-To Wai, Parameswaran Raman, Soumajyoti Sarkar, Mingyi Hong

Data preparation

Fewer truncations improve language modeling
Hantian Ding, Zijian Wang, Giovanni Paolini, Varun Kumar, Anoop Deoras, Dan Roth, Stefano Soatto

Explainable AI

Explaining probabilistic models with distributional values
Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger

Game-theoretical approaches to explainable AI, such as Shapley value analysis, compare the outputs of a black-box model with and without a particular input feature (i, represented as a blue square). Such methods typically operate on scalar values (top), discarding information captured by probabilistic models. “Explaining probabilistic models with distributional values” generalizes the game-theoretical approach to models with distributional outputs (bottom).
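The scalar game-theoretic setup the paper generalizes can be sketched directly. Below is an exact Shapley value computation for a toy model with a small feature set; `predict` is a hypothetical function mapping a set of present features to a scalar output, and how "removal" of a feature is modeled is a design choice in practice.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, features):
    """Exact Shapley values over a small feature set.
    Each feature's value is the weighted average, over all subsets S of
    the other features, of the change in output when it is added to S."""
    n = len(features)
    values = {}
    for i in features:
        others = [f for f in features if f != i]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                S = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (predict(S | {i}) - predict(S))
        values[i] = phi
    return values

# Toy additive model: the output is the sum of the present features' weights.
w = {"a": 1.0, "b": 2.0}
vals = shapley_values(lambda S: sum(w[f] for f in S), list(w))
# For an additive model, each feature's Shapley value equals its weight.
```

Note that `predict` returns a single scalar; the paper's contribution is precisely to replace that scalar comparison with a comparison of output distributions.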

Hallucination mitigation

Multicalibration for confidence scoring in LLMs
Gianluca Detommaso, Martin Bertran Lopez, Riccardo Fogliato, Aaron Roth

Learning algorithms

MADA: Meta-adaptive optimizers through hyper-gradient descent
Kaan Ozkara, Can Karakus, Parameswaran Raman, Mingyi Hong, Shoham Sabach, Branislav Kveton, Volkan Cevher

Variance-reduced zeroth-order methods for fine-tuning language models
Tanmay Gautam, Youngsuk Park, Hao Zhou, Parameswaran Raman, Wooseok Ha

LLM decoding

Bifurcated attention for single-context large-batch sampling
Ben Athiwaratkun, Sujan Gonugondla, Sanjay Krishna Gouda, Hantian Ding, Qing Sun, Jun Wang, Jiacheng Guo, Liangfu Chen, Haifeng Qian, Parminder Bhatia, Ramesh Nallapati, Sudipta Sengupta, Bing Xiang

Privacy

Differentially private bias-term fine-tuning of foundation models
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha, George Karypis

Computation graphs for back-propagation on ordinary (black) and differentially private (black and red) algorithms. In “Differentially private bias-term fine-tuning of foundation models”, Amazon researchers show that fine-tuning bias terms only (lower right) is much simpler than fine-tuning both bias terms and weights (left and top right), preserving accuracy while making training 2 to 30 times as fast.
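The core idea of bias-only fine-tuning can be sketched on a toy linear model: the weight matrix from "pretraining" is frozen, and gradient descent updates only the bias term. This is a simplified illustration; the paper's contribution concerns doing this under differential privacy, where the smaller computation graph also reduces cost.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1, 4))   # "pretrained" weights, frozen throughout
b = np.zeros(1)               # bias: the only trainable parameter
X = rng.normal(size=(64, 4))
y_true = X @ W.T + 0.5        # downstream data differs only by a bias shift

for _ in range(200):
    y_pred = X @ W.T + b
    grad_b = 2 * np.mean(y_pred - y_true, axis=0)  # dMSE/db
    b -= 0.1 * grad_b         # gradient step on b; W is never touched
```

Because only the bias receives gradients, back-propagation never needs the per-weight gradient computations that dominate the cost of full fine-tuning.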

Membership inference attacks on diffusion models via quantile regression
Shuai Tang, Zhiwei Steven Wu, Sergul Aydore, Michael Kearns, Aaron Roth

Reinforcement learning

Finite-time convergence and sample complexity of actor-critic multi-objective reinforcement learning
Tianchen Zhou, Fnu Hairi, Haibo Yang, Jia (Kevin) Liu, Tian Tong, Fan Yang, Michinari Momma, Yan Gao

Learning the target network in function space
Kavosh Asadi, Yao Liu, Shoham Sabach, Ming Yin, Rasool Fakoor

Most reinforcement learning (RL) algorithms involve a function (v) that predicts the value of taking a particular action when an agent is in a particular state. Often, the value function is approximated by two neural networks (θ and w), one that models the current value estimate and one that’s updated in light of new interactions. Usually, the RL loss function is designed to reconcile the two models’ parameter values. But in “Learning the target network in function space”, Amazon researchers show that reconciling the models in function space (left) doesn’t entail reconciling them in the parameter space (right). Their experiments show that dropping the requirement of parameter value equivalence can improve performance on RL tasks.
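The distinction between function space and parameter space can be illustrated with a small example of my own (not from the paper): permuting the hidden units of a ReLU network changes its parameter vector but not the function it computes, so two networks can agree in function space while disagreeing in parameter space.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 3)), rng.normal(size=8)  # hidden layer
W2 = rng.normal(size=(1, 8))                          # output layer

def net(x, W1, b1, W2):
    """One-hidden-layer ReLU network."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0)

# Reverse the order of the hidden units: a different parameter vector...
perm = np.arange(8)[::-1]
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.normal(size=3)
assert not np.allclose(W1, W1p)                       # parameters differ
assert np.allclose(net(x, W1, b1, W2),
                   net(x, W1p, b1p, W2p))             # outputs agree
```

A loss that forces the two networks' parameters to match would treat these as different models; a function-space criterion, as in the paper, correctly treats them as equivalent.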

Near-optimal regret in linear MDPs with aggregate bandit feedback
Asaf Cassel, Haipeng Luo, Dmitry Sotnikov, Aviv Rosenberg

Responsible AI

Discovering bias in latent space: An unsupervised debiasing approach
Dyah Adila, Shuai Zhang, Boran Han, Bernie Wang

Retrieval-augmented generation

Automated evaluation of retrieval-augmented language models with task-specific exam generation
Gauthier Guinet, Behrooz Omidvar-Tehrani, Anoop Deoras, Laurent Callot

Robust learning

Robust multi-task learning with excess risks
Yifei He, Shiji Zhou, Guojun Zhang, Hyokun Yun, Yi Xu, Belinda Zeng, Trishul Chilimbi, Han Zhao

Scientific machine learning

Using uncertainty quantification to characterize and improve out-of-domain learning for PDEs
S. Chandra Mouli, Danielle Maddix Robinson, Shima Alizadeh, Gaurav Gupta, Andrew Stuart, Michael Mahoney, Bernie Wang

Transfer learning

Transferring knowledge from large foundation models to small downstream models
Shikai Qiu, Boran Han, Danielle Maddix Robinson, Shuai Zhang, Bernie Wang, Andrew Wilson




