Data-Driven Off-Policy Estimator Selection: An Application in User
Marketing on An Online Content Delivery Service
- URL: http://arxiv.org/abs/2109.08621v1
- Date: Fri, 17 Sep 2021 15:53:53 GMT
- Title: Data-Driven Off-Policy Estimator Selection: An Application in User
Marketing on An Online Content Delivery Service
- Authors: Yuta Saito, Takuma Udagawa, and Kei Tateno
- Abstract summary: Off-policy evaluation is essential in domains such as healthcare, marketing or recommender systems.
Many OPE methods with theoretical backgrounds have been proposed.
It is often unknown for practitioners which estimator to use for their specific applications and purposes.
- Score: 11.986224119327387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Off-policy evaluation (OPE) is a method that attempts to estimate
the performance of decision-making policies using historical data generated by
different policies, without conducting costly online A/B tests. Accurate OPE is
essential in domains such as healthcare, marketing, or recommender systems to
avoid deploying poorly performing policies, as such policies may harm human lives
or destroy the user experience. Thus, many OPE methods with theoretical
backgrounds have been proposed. One emerging challenge with this trend is that
a suitable estimator can be different for each application setting. It is often
unknown for practitioners which estimator to use for their specific
applications and purposes. To find out a suitable estimator among many
candidates, we use a data-driven estimator selection procedure for off-policy
policy performance estimators as a practical solution. As proof of concept, we
use our procedure to select the best estimator to evaluate coupon treatment
policies on a real-world online content delivery service. In the experiment, we
first observe that a suitable estimator might change with different definitions
of the outcome variable, and thus accurate estimator selection is critical
in real-world applications of OPE. Then, we demonstrate that, by utilizing the
estimator selection procedure, we can easily find suitable estimators for
each purpose.
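As a hedged illustration of the abstract's idea, the following Python sketch compares a few standard OPE estimators (IPS, the Direct Method, and Doubly Robust) against a ground-truth policy value and keeps the one with the smallest error. All data, helper names, and the placeholder V_true are synthetic assumptions, not the authors' implementation or the evaluation protocol used in the paper.

```python
import numpy as np

# Synthetic logged bandit feedback (placeholders, not the paper's data):
# actions sampled by a behavior policy, observed rewards, and propensities.
rng = np.random.default_rng(0)
n, n_actions = 10_000, 5
actions = rng.integers(n_actions, size=n)
rewards = rng.binomial(1, 0.3, size=n).astype(float)
pscores = np.full(n, 1.0 / n_actions)                 # behavior propensities
pi_e = rng.dirichlet(np.ones(n_actions), size=n)      # evaluation policy pi_e(a|x)
q_hat = rng.uniform(0.2, 0.4, size=(n, n_actions))    # reward regression model

def ips(actions, rewards, pscores, pi_e):
    """Inverse Propensity Scoring: importance-weighted average of rewards."""
    w = pi_e[np.arange(len(actions)), actions] / pscores
    return np.mean(w * rewards)

def dm(pi_e, q_hat):
    """Direct Method: plug-in estimate from the reward regression model."""
    return np.mean(np.sum(pi_e * q_hat, axis=1))

def dr(actions, rewards, pscores, pi_e, q_hat):
    """Doubly Robust: DM baseline plus an importance-weighted correction."""
    w = pi_e[np.arange(len(actions)), actions] / pscores
    residual = rewards - q_hat[np.arange(len(actions)), actions]
    return dm(pi_e, q_hat) + np.mean(w * residual)

# Data-driven selection: score each candidate by squared error against a
# ground-truth policy value (e.g., obtained from a small online A/B test).
V_true = 0.31  # illustrative placeholder
estimates = {
    "IPS": ips(actions, rewards, pscores, pi_e),
    "DM": dm(pi_e, q_hat),
    "DR": dr(actions, rewards, pscores, pi_e, q_hat),
}
errors = {name: (v - V_true) ** 2 for name, v in estimates.items()}
print("selected estimator:", min(errors, key=errors.get))
```

In a real deployment, the same comparison would be repeated for each outcome definition, since, as the abstract notes, the best estimator can differ across outcome variables.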
Related papers
- AutoOPE: Automated Off-Policy Estimator Selection [7.476028372444458]
The Off-Policy Evaluation problem consists of evaluating the performance of counterfactual policies with data collected by another policy.
We propose an automated data-driven OPE estimator selection method based on machine learning.
arXiv Detail & Related papers (2024-06-26T02:34:48Z)
- OPERA: Automatic Offline Policy Evaluation with Re-weighted Aggregates of Multiple Estimators [13.408838970377035]
Offline policy evaluation (OPE) allows us to evaluate and estimate a new sequential decision-making policy's performance.
We propose a new algorithm that adaptively blends a set of OPE estimators given a dataset, without relying on an explicit selection using a statistical procedure (a sketch of this blending idea appears after this list).
Our work contributes to improving ease of use for a general-purpose, estimator-agnostic, off-policy evaluation framework for offline RL.
arXiv Detail & Related papers (2024-05-27T23:51:20Z)
- $\Delta\text{-}{\rm OPE}$: Off-Policy Estimation with Pairs of Policies [13.528097424046823]
We introduce $\Delta\text{-}{\rm OPE}$ methods based on the widely used Inverse Propensity Scoring estimator.
Simulated, offline, and online experiments show that our methods significantly improve performance for both evaluation and learning tasks.
arXiv Detail & Related papers (2024-05-16T12:04:55Z)
- Optimal Baseline Corrections for Off-Policy Contextual Bandits [61.740094604552475]
We aim to learn decision policies that optimize an unbiased offline estimate of an online reward metric.
We propose a single framework built on the equivalence of common baseline-correction techniques in learning scenarios.
Our framework enables us to characterize the variance-optimal unbiased estimator and provide a closed-form solution for it.
arXiv Detail & Related papers (2024-05-09T12:52:22Z)
- Conformal Off-Policy Evaluation in Markov Decision Processes [53.786439742572995]
Reinforcement Learning aims at identifying and evaluating efficient control policies from data.
Most methods for this learning task, referred to as Off-Policy Evaluation (OPE), do not come with accuracy and certainty guarantees.
We present a novel OPE method based on Conformal Prediction that outputs an interval containing the true reward of the target policy with a prescribed level of certainty.
arXiv Detail & Related papers (2023-04-05T16:45:11Z)
- Uncertainty-Aware Instance Reweighting for Off-Policy Learning [63.31923483172859]
We propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning.
Experimental results on synthetic and three real-world recommendation datasets demonstrate the advantageous sample efficiency of the proposed UIPS estimator.
arXiv Detail & Related papers (2023-03-11T11:42:26Z)
- Policy-Adaptive Estimator Selection for Off-Policy Evaluation [12.1655494876088]
Off-policy evaluation (OPE) aims to accurately evaluate the performance of counterfactual policies using only offline logged data.
This paper studies this challenging problem of estimator selection for OPE for the first time.
In particular, we enable an estimator selection that is adaptive to a given OPE task, by appropriately subsampling available logged data and constructing pseudo policies.
arXiv Detail & Related papers (2022-11-25T05:31:42Z)
- Off-policy evaluation for learning-to-rank via interpolating the item-position model and the position-based model [83.83064559894989]
A critical need for industrial recommender systems is the ability to evaluate recommendation policies offline, before deploying them to production.
We develop a new estimator that mitigates the problems of the two most popular off-policy estimators for rankings.
In particular, the new estimator, called INTERPOL, addresses the bias of a potentially misspecified position-based model.
arXiv Detail & Related papers (2022-10-15T17:22:30Z)
- Identification of Subgroups With Similar Benefits in Off-Policy Policy Evaluation [60.71312668265873]
We develop a method to balance the need for personalization with confident predictions.
We show that our method can be used to form accurate predictions of heterogeneous treatment effects.
arXiv Detail & Related papers (2021-11-28T23:19:12Z)
- Evaluating the Robustness of Off-Policy Evaluation [10.760026478889664]
Off-policy Evaluation (OPE) evaluates the performance of hypothetical policies leveraging only offline log data.
It is particularly useful in applications where online interaction involves high-stakes and expensive settings.
We develop Interpretable Evaluation for Offline Evaluation (IEOE), an experimental procedure to evaluate OPE estimators' robustness.
arXiv Detail & Related papers (2021-08-31T09:33:13Z)
- Control Variates for Slate Off-Policy Evaluation [112.35528337130118]
We study the problem of off-policy evaluation from batched contextual bandit data with multidimensional actions.
We obtain new estimators with risk improvement guarantees over both the pseudoinverse (PI) and self-normalized PI estimators.
arXiv Detail & Related papers (2021-06-15T06:59:53Z)
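As referenced in the OPERA entry above, an alternative to selecting a single estimator is to blend several of them. The sketch below shows one simple, generic aggregation scheme, inverse-variance weighting with bootstrapped variances; it is only an assumption-laden illustration, not the OPERA algorithm or any other method from the papers listed here, and the per-round values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-round values for three candidate estimators; each estimator's
# point estimate of the policy value is the mean of its per-round values.
n = 5_000
per_round = {
    "IPS": rng.exponential(0.3, size=n),     # high-variance candidate
    "DR": rng.normal(0.3, 0.1, size=n),      # lower-variance candidate
    "DM": rng.normal(0.28, 0.05, size=n),    # possibly biased plug-in values
}

def bootstrap_variance(values, n_boot=200):
    """Bootstrap estimate of the variance of an estimator's point estimate."""
    m = len(values)
    means = [values[rng.integers(m, size=m)].mean() for _ in range(n_boot)]
    return np.var(means)

# Weight each candidate inversely to its bootstrapped variance, then blend.
variances = {k: bootstrap_variance(v) for k, v in per_round.items()}
inv_var = {k: 1.0 / max(v, 1e-12) for k, v in variances.items()}
total = sum(inv_var.values())
weights = {k: w / total for k, w in inv_var.items()}
blended = sum(weights[k] * per_round[k].mean() for k in per_round)

print("weights:", {k: round(w, 3) for k, w in weights.items()})
print("blended policy-value estimate:", round(blended, 4))
```

Inverse-variance weighting ignores bias, so this is best read as a toy counterpart to the re-weighted aggregation idea rather than a substitute for any of the listed procedures.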