Individualized Policy Evaluation and Learning under Clustered Network Interference
- URL: http://arxiv.org/abs/2311.02467v2
- Date: Sun, 4 Feb 2024 18:47:55 GMT
- Title: Individualized Policy Evaluation and Learning under Clustered Network Interference
- Authors: Yi Zhang, Kosuke Imai
- Abstract summary: We consider the problem of evaluating and learning an optimal individualized treatment rule under clustered network interference.
We propose an estimator that can be used to evaluate the empirical performance of an ITR.
We derive a finite-sample regret bound for a learned ITR, showing that the use of our efficient evaluation estimator leads to improved performance of learned policies.
- Score: 4.560284382063488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While there now exists a large literature on policy evaluation and learning,
much of prior work assumes that the treatment assignment of one unit does not
affect the outcome of another unit. Unfortunately, ignoring interference may
lead to biased policy evaluation and ineffective learned policies. For example,
treating influential individuals who have many friends can generate positive
spillover effects, thereby improving the overall performance of an
individualized treatment rule (ITR). We consider the problem of evaluating and
learning an optimal ITR under clustered network interference (also known as
partial interference) where clusters of units are sampled from a population and
units may influence one another within each cluster. Unlike previous methods
that impose strong restrictions on spillover effects, the proposed methodology
only assumes a semiparametric structural model where each unit's outcome is an
additive function of individual treatments within the cluster. Under this
model, we propose an estimator that can be used to evaluate the empirical
performance of an ITR. We show that this estimator is substantially more
efficient than the standard inverse probability weighting estimator, which does
not impose any assumption about spillover effects. We derive a finite-sample
regret bound for a learned ITR, showing that the use of our efficient
evaluation estimator leads to improved performance of learned policies.
Finally, we conduct simulation and empirical studies to illustrate the
advantages of the proposed methodology.
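As a rough illustration of why exploiting the additive spillover structure helps, the sketch below (assuming NumPy; the simulated outcome model, the rule pi, and both estimator functions are our own illustrations, not the paper's estimator) contrasts a cluster-level IPW evaluation of an ITR with a plug-in evaluation that uses additivity. The IPW estimator only uses clusters whose observed treatment vector happens to match the rule exactly, so with four units per cluster roughly one cluster in sixteen contributes, whereas the additivity-based estimator uses every cluster.

    # Minimal sketch: evaluating an ITR under an additive spillover model.
    # The data-generating process, the rule pi, and both estimators below are
    # illustrative assumptions, not the estimator proposed in the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    J, m, p = 300, 4, 0.5            # clusters, units per cluster, P(treated)

    # Cluster-total outcome is additive in the individual treatments:
    # Y_j = sum_k (1 + 0.5 * X_jk) * T_jk + noise.
    X = rng.normal(size=(J, m))
    T = rng.binomial(1, p, size=(J, m))
    Y = ((1.0 + 0.5 * X) * T).sum(axis=1) + rng.normal(size=J)

    def pi(X):
        # Example ITR: treat units with a positive covariate.
        return (X > 0).astype(int)

    def ipw_value(X, T, Y):
        # Cluster-level IPW: a cluster contributes only when every observed
        # treatment matches the rule, weighted by the joint propensity p**m.
        match = (T == pi(X)).all(axis=1)
        return np.mean(match * Y / p**m)

    def additive_plugin_value(X, T, Y):
        # Plug-in regression exploiting additivity: regress Y on summary
        # features of (T, X*T), then predict under the rule's assignments.
        D = np.column_stack([np.ones(J), T.sum(axis=1), (X * T).sum(axis=1)])
        beta, *_ = np.linalg.lstsq(D, Y, rcond=None)
        T_pi = pi(X)
        D_pi = np.column_stack([np.ones(J), T_pi.sum(axis=1), (X * T_pi).sum(axis=1)])
        return np.mean(D_pi @ beta)

    print("IPW value estimate:     ", ipw_value(X, T, Y))
    print("Additive value estimate:", additive_plugin_value(X, T, Y))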
Related papers
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Targeted Machine Learning for Average Causal Effect Estimation Using the Front-Door Functional [3.0232957374216953]
Evaluating the average causal effect (ACE) of a treatment on an outcome often involves overcoming the challenges posed by confounding factors in observational studies.
Here, we introduce novel estimation strategies for the front-door criterion based on the targeted minimum loss-based estimation theory.
We demonstrate the applicability of these estimators to analyze the effect of early stage academic performance on future yearly income.
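For reference, these estimation strategies target the standard front-door identification functional (treatment T, mediator M, outcome Y); the formula below is textbook material rather than anything specific to this paper's TMLE construction:

    E[Y(t)] = \sum_{m} P(M = m \mid T = t) \sum_{t'} E[Y \mid T = t', M = m] \, P(T = t'),

with the ACE given by E[Y(1)] - E[Y(0)].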
arXiv Detail & Related papers (2023-12-15T22:04:53Z)
- Doubly Robust Estimation of Direct and Indirect Quantile Treatment Effects with Machine Learning [0.0]
We suggest a machine learning estimator of direct and indirect quantile treatment effects under a selection-on-observables assumption.
The proposed method is based on the efficient score functions of the cumulative distribution functions of potential outcomes.
We also propose a multiplier bootstrap for statistical inference and establish its validity.
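A generic sketch of the multiplier-bootstrap idea (assuming NumPy; the paper applies it to its efficient score functions, which are not reproduced here): rather than resampling observations, each unit's centered influence-function contribution is reweighted by an i.i.d. mean-zero multiplier, and the perturbed average is recomputed to approximate the estimator's sampling distribution.

    # Generic multiplier bootstrap for a mean-type estimator (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    phi = rng.normal(loc=1.0, scale=2.0, size=500)    # stand-in influence-function values
    theta_hat = phi.mean()                            # point estimate

    B = 2000
    draws = np.empty(B)
    for b in range(B):
        xi = rng.normal(size=phi.size)                # mean-zero, unit-variance multipliers
        draws[b] = (xi * (phi - theta_hat)).mean()    # perturbed centered average

    se = draws.std(ddof=1)                            # bootstrap standard error
    ci = (theta_hat - 1.96 * se, theta_hat + 1.96 * se)
    print("95% CI:", ci)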
arXiv Detail & Related papers (2023-07-03T14:27:15Z)
- B-Learner: Quasi-Oracle Bounds on Heterogeneous Causal Effects Under Hidden Confounding [51.74479522965712]
We propose a meta-learner called the B-Learner, which can efficiently learn sharp bounds on the CATE function under limits on hidden confounding.
We prove its estimates are valid, sharp, efficient, and have a quasi-oracle property with respect to the constituent estimators under more general conditions than existing methods.
arXiv Detail & Related papers (2023-04-20T18:07:19Z)
- Uncertainty-Aware Instance Reweighting for Off-Policy Learning [63.31923483172859]
We propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning.
Experimental results on synthetic and three real-world recommendation datasets demonstrate the advantageous sample efficiency of the proposed UIPS estimator.
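For context, the plain inverse-propensity-score (IPS) baseline that such estimators refine looks roughly like this (a sketch, assuming NumPy; the uncertainty-aware reweighting that defines UIPS is not reproduced here):

    # Plain (clipped) IPS off-policy value estimate from logged bandit data.
    # Illustrative baseline only, not the UIPS estimator itself.
    import numpy as np

    def ips_value(rewards, logged_probs, target_probs, clip=10.0):
        # rewards: observed reward for each logged action
        # logged_probs: logging policy's probability of the logged action
        # target_probs: target policy's probability of that same action
        w = np.clip(target_probs / logged_probs, 0.0, clip)   # clipped importance weights
        return float(np.mean(w * rewards))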
arXiv Detail & Related papers (2023-03-11T11:42:26Z)
- Taming Multi-Agent Reinforcement Learning with Estimator Variance Reduction [12.94372063457462]
Centralised training with decentralised execution (CT-DE) serves as the foundation of many leading multi-agent reinforcement learning (MARL) algorithms.
It suffers from a critical drawback due to its reliance on learning from a single sample of the joint-action at a given state.
We propose an enhancement tool that accommodates any actor-critic MARL method.
arXiv Detail & Related papers (2022-09-02T13:44:00Z)
- Imitation Learning by State-Only Distribution Matching [2.580765958706854]
Imitation learning from observation describes policy learning that proceeds in a manner similar to human learning.
We propose a non-adversarial learning-from-observations approach, together with an interpretable convergence and performance metric.
arXiv Detail & Related papers (2022-02-09T08:38:50Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the difficult nature of the one-class problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Generalization Bounds and Representation Learning for Estimation of Potential Outcomes and Causal Effects [61.03579766573421]
We study estimation of individual-level causal effects, such as a single patient's response to alternative medication.
We devise representation learning algorithms that minimize our bound, by regularizing the representation's induced treatment group distance.
We extend these algorithms to simultaneously learn a weighted representation to further reduce treatment group distances.
arXiv Detail & Related papers (2020-01-21T10:16:33Z)