A Comparison of Methods for Treatment Assignment with an Application to
Playlist Generation
- URL: http://arxiv.org/abs/2004.11532v5
- Date: Sat, 30 Apr 2022 23:16:10 GMT
- Title: A Comparison of Methods for Treatment Assignment with an Application to
Playlist Generation
- Authors: Carlos Fernández-Loría, Foster Provost, Jesse Anderton, Benjamin Carterette, Praveen Chandar
- Abstract summary: We group the various methods proposed in the literature into three general classes of algorithms (or metalearners).
We show analytically and empirically that optimizing for the prediction of outcomes or causal effects is not the same as optimizing for treatment assignments.
This is the first comparison of the three different metalearners on a real-world application at scale.
- Score: 13.804332504576301
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study presents a systematic comparison of methods for individual
treatment assignment, a general problem that arises in many applications and
has received significant attention from economists, computer scientists, and
social scientists. We group the various methods proposed in the literature into
three general classes of algorithms (or metalearners): learning models to
predict outcomes (the O-learner), learning models to predict causal effects
(the E-learner), and learning models to predict optimal treatment assignments
(the A-learner). We compare the metalearners in terms of (1) their level of
generality and (2) the objective function they use to learn models from data;
we then discuss the implications that these characteristics have for modeling
and decision making. Notably, we demonstrate analytically and empirically that
optimizing for the prediction of outcomes or causal effects is not the same as
optimizing for treatment assignments, suggesting that in general the A-learner
should lead to better treatment assignments than the other metalearners. We
demonstrate the practical implications of our findings in the context of
choosing, for each user, the best algorithm for playlist generation in order to
optimize engagement. This is the first comparison of the three different
metalearners on a real-world application at scale (based on more than half a
billion individual treatment assignments). In addition to supporting our
analytical findings, the results show how large A/B tests can provide
substantial value for learning treatment assignment policies, rather than
simply choosing the variant that performs best on average.
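As a minimal illustration of the three metalearner classes, the sketch below trains each on synthetic randomized data. The models, the synthetic outcome, and the outcome-weighted A-learner instance are illustrative assumptions, not the paper's exact estimators.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 5))
t = rng.integers(0, 2, size=n)                    # randomized 50/50 treatment
y = X[:, 1] + t * X[:, 0] + rng.normal(size=n)    # true effect is X[:, 0]

# O-learner: predict outcomes under each arm, assign the argmax outcome.
m0 = GradientBoostingRegressor().fit(X[t == 0], y[t == 0])
m1 = GradientBoostingRegressor().fit(X[t == 1], y[t == 1])
assign_O = (m1.predict(X) > m0.predict(X)).astype(int)

# E-learner: predict the causal effect itself and treat when positive.
# With shared outcome models this coincides with the O-learner; the two
# classes differ in practice because each optimizes a different objective.
assign_E = (m1.predict(X) - m0.predict(X) > 0).astype(int)

# A-learner: learn the assignment directly. Outcome-weighted
# classification is one simple instance: vote for the observed arm,
# weighted by how good its observed outcome was.
w = y - y.min()                                   # nonnegative weights
clf = GradientBoostingClassifier().fit(X, t, sample_weight=w)
assign_A = clf.predict(X)
```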
Related papers
- Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data [102.16105233826917]
Learning from preference labels plays a crucial role in fine-tuning large language models.
There are several distinct approaches for preference fine-tuning, including supervised learning, on-policy reinforcement learning (RL), and contrastive learning.
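As one concrete instance of the contrastive approach, the sketch below computes a DPO-style preference loss on toy numbers; the log-probabilities and beta value are assumptions, not taken from the paper.

```python
import math

def dpo_loss(log_pi_w, log_pi_l, log_ref_w, log_ref_l, beta=0.1):
    # Margin between the preferred (w) and dispreferred (l) responses,
    # measured as policy-vs-reference log-ratios; minimize -log(sigmoid).
    margin = beta * ((log_pi_w - log_ref_w) - (log_pi_l - log_ref_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy per-sequence log-probabilities (made up for illustration).
print(dpo_loss(log_pi_w=-12.0, log_pi_l=-15.0, log_ref_w=-13.0, log_ref_l=-14.0))
```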
arXiv Detail & Related papers (2024-04-22T17:20:18Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
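A minimal sketch of the two ingredients, assuming a simple confidence-threshold abstention rule and least-confidence querying; ASPEST itself uses a more elaborate ensemble and self-training scheme.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_src = rng.normal(size=(200, 3))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.normal(loc=0.5, size=(100, 3))   # distribution-shifted target pool

clf = LogisticRegression().fit(X_src, y_src)
conf = clf.predict_proba(X_tgt).max(axis=1)

# Selective prediction: abstain (label -1) below a confidence threshold.
preds = np.where(conf >= 0.8, clf.predict(X_tgt), -1)

# Active learning: query human labels for the least-confident points.
query_idx = np.argsort(conf)[:10]
```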
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
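A minimal sketch of iterative, column-wise imputation using scikit-learn's IterativeImputer; unlike HyperImpute, this version does not search over per-column model classes.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0, np.nan],
              [3.0, np.nan, 6.0],
              [np.nan, 5.0, 9.0],
              [4.0, 4.0, 8.0]])

# Each column with missing values is modeled from the others, cycling
# until the imputations stabilize.
X_imputed = IterativeImputer(max_iter=10, random_state=0).fit_transform(X)
print(X_imputed)
```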
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
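A minimal sketch of the idea, with invented triples, templates, and sampling size standing in for the knowledge graphs and strategies the paper studies.

```python
import random

# Invented triples and templates standing in for a large knowledge graph.
triples = [("fire", "causes", "smoke"),
           ("rain", "causes", "wet ground"),
           ("knife", "used_for", "cutting")]
templates = {"causes": "{h} can cause {t}.",
             "used_for": "A {h} is used for {t}."}

random.seed(0)
sample = random.sample(triples, k=2)   # the sampling strategy and size
synthetic = [templates[r].format(h=h, t=t) for h, r, t in sample]
print(synthetic)  # text for adapting a language model
```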
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms [91.3755431537592]
We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
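A minimal sketch contrasting the two families on synthetic randomized data: a plug-in (two-model) estimate versus a DR-learner-style pseudo-outcome regression. The models and data are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 4000
X = rng.normal(size=(n, 4))
t = rng.integers(0, 2, size=n)
y = X[:, 0] * t + X[:, 1] + rng.normal(size=n)   # true effect is X[:, 0]
e = 0.5                                          # known randomization propensity

# Plug-in: fit an outcome model per arm and take the difference.
m0 = RandomForestRegressor(random_state=0).fit(X[t == 0], y[t == 0])
m1 = RandomForestRegressor(random_state=0).fit(X[t == 1], y[t == 1])
mu0, mu1 = m0.predict(X), m1.predict(X)
tau_plugin = mu1 - mu0

# Pseudo-outcome: build the doubly robust (AIPW) transformation and
# regress it on X to target the effect function directly.
pseudo = mu1 - mu0 + t * (y - mu1) / e - (1 - t) * (y - mu0) / (1 - e)
tau_dr = RandomForestRegressor(random_state=0).fit(X, pseudo).predict(X)
```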
arXiv Detail & Related papers (2021-01-26T17:11:40Z)
- Machine learning with incomplete datasets using multi-objective optimization models [1.933681537640272]
We propose an online approach to handle missing values while a classification model is learnt.
We develop a multi-objective optimization model with two objective functions for imputation and model selection.
We use an evolutionary algorithm based on NSGA II to find the optimal solutions.
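A minimal sketch of the underlying multi-objective step, keeping only the first non-dominated front; full NSGA-II adds crowding distance and genetic operators, and the stubbed objective scores are placeholders for the imputation and model-selection errors.

```python
import numpy as np

rng = np.random.default_rng(3)
pop = rng.uniform(size=(20, 8))    # candidate (imputation, model) encodings
# Two objectives to minimize, stubbed with random scores for brevity:
# scores[:, 0] = imputation error, scores[:, 1] = classification error.
scores = rng.uniform(size=(20, 2))

# First non-dominated front: keep candidates no other candidate beats
# on both objectives at once.
front = [i for i, s in enumerate(scores)
         if not any((o <= s).all() and (o < s).any() for o in scores)]
print("Pareto-optimal candidates:", front)
```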
arXiv Detail & Related papers (2020-12-04T03:44:33Z)
- View selection in multi-view stacking: Choosing the meta-learner [0.2812395851874055]
Multi-view stacking is a framework for combining information from different views describing the same set of objects.
In this framework, a base-learner algorithm is trained on each view separately, and their predictions are then combined by a meta-learner algorithm.
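A minimal sketch of multi-view stacking with two invented views, out-of-fold base-learner predictions, and a logistic regression meta-learner; all modeling choices are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n = 300
view1 = rng.normal(size=(n, 10))           # e.g., clinical features
view2 = rng.normal(size=(n, 50))           # e.g., omics features
y = (view1[:, 0] + view2[:, 0] > 0).astype(int)

# Base learners: one per view; stack their out-of-fold probabilities so
# the meta-learner is not fit on leaked in-sample predictions.
Z = np.column_stack([
    cross_val_predict(RandomForestClassifier(random_state=0), v, y,
                      cv=5, method="predict_proba")[:, 1]
    for v in (view1, view2)
])
meta = LogisticRegression().fit(Z, y)      # the meta-learner
```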
arXiv Detail & Related papers (2020-10-30T13:45:14Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Does imputation matter? Benchmark for predictive models [5.802346990263708]
This paper systematically evaluates the empirical effectiveness of data imputation algorithms for predictive models.
The main contributions include the recommendation of a general method for empirical benchmarking based on real-life classification tasks.
arXiv Detail & Related papers (2020-07-06T15:47:36Z)
- Bayesian Meta-Prior Learning Using Empirical Bayes [3.666114237131823]
We propose a hierarchical Empirical Bayes approach that addresses the absence of informative priors and the inability to control parameter learning rates.
Our method learns empirical meta-priors from the data itself and uses them to decouple the learning rates of first-order and second-order features.
Our findings are promising, as optimizing over sparse data is often a challenge.
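A minimal sketch of the empirical Bayes idea on toy count data: fit a Beta meta-prior from the pooled data itself, then shrink sparse per-item estimates toward it. The paper applies this at the level of GLM feature groups; the click data here are invented.

```python
import numpy as np

clicks = np.array([2, 0, 30, 1, 5])
views  = np.array([40, 10, 400, 25, 90])
rates = clicks / views

# Method-of-moments fit of a Beta(a, b) meta-prior from the data itself.
m, v = rates.mean(), rates.var()
common = m * (1 - m) / v - 1
a, b = m * common, (1 - m) * common

# Posterior-mean estimates: sparse items shrink more toward the prior.
posterior = (clicks + a) / (views + a + b)
print(posterior)
```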
arXiv Detail & Related papers (2020-02-04T05:08:17Z)