Counterfactual Plans under Distributional Ambiguity
- URL: http://arxiv.org/abs/2201.12487v1
- Date: Sat, 29 Jan 2022 03:41:47 GMT
- Title: Counterfactual Plans under Distributional Ambiguity
- Authors: Ngoc Bui, Duy Nguyen, Viet Anh Nguyen
- Abstract summary: We study counterfactual plans under model uncertainty, in which the distribution of the model parameters is only partially prescribed.
First, we propose an uncertainty quantification tool to compute the lower and upper bounds of the probability of validity for any given counterfactual plan.
We then provide corrective methods to adjust the counterfactual plan to improve the validity measure.
- Score: 12.139222986297263
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanations are attracting significant attention due to the
flourishing applications of machine learning models in consequential domains. A
counterfactual plan consists of multiple possibilities to modify a given
instance so that the model's prediction will be altered. As the predictive
model can be updated subject to the future arrival of new data, a
counterfactual plan may become ineffective or infeasible with respect to the
future values of the model parameters. In this work, we study counterfactual
plans under model uncertainty, in which the distribution of the
model parameters is partially prescribed using only the first- and
second-moment information. First, we propose an uncertainty quantification tool
to compute the lower and upper bounds of the probability of validity for any
given counterfactual plan. We then provide corrective methods to adjust the
counterfactual plan to improve the validity measure. The numerical experiments
validate our bounds and demonstrate that our correction increases the
robustness of the counterfactual plans on different real-world datasets.
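The paper's bounds use only first- and second-moment information about the model parameters. As a rough illustration of how such moment information can yield distribution-free validity bounds, the sketch below applies a Cantelli-type (one-sided Chebyshev) bound to a single counterfactual under a linear model with uncertain weights, treating the counterfactual as valid when the score theta @ x_cf is non-negative (bias folded into x_cf). This is a minimal sketch of the general idea, not the paper's exact construction; `validity_bounds`, `x_cf`, `mu`, and `Sigma` are hypothetical names.

```python
import numpy as np

def validity_bounds(x_cf, mu, Sigma):
    """Cantelli-style lower/upper bounds on P(theta @ x_cf >= 0) when only
    the mean `mu` and covariance `Sigma` of the linear-model parameters
    theta are known. Illustrative only; the paper's bounds may differ."""
    m = float(mu @ x_cf)            # mean of the score theta @ x_cf
    v = float(x_cf @ Sigma @ x_cf)  # variance of the score
    if v <= 0.0:                    # degenerate case: deterministic score
        return (1.0, 1.0) if m >= 0 else (0.0, 0.0)
    if m >= 0:
        return m**2 / (m**2 + v), 1.0   # Cantelli lower bound, trivial upper
    return 0.0, v / (v + m**2)          # trivial lower, Cantelli upper bound

# Toy usage with a hypothetical counterfactual (bias folded into the last entry).
x_cf = np.array([1.5, -0.2, 1.0])
mu = np.array([0.8, 0.4, -0.3])
Sigma = 0.05 * np.eye(3)
print(validity_bounds(x_cf, mu, Sigma))   # (lower, upper) on the probability of validity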
Related papers
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications (a minimal churn-estimation sketch appears after this list).
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Source-Free Unsupervised Domain Adaptation with Hypothesis Consolidation of Prediction Rationale [53.152460508207184]
Source-Free Unsupervised Domain Adaptation (SFUDA) is a challenging task where a model needs to be adapted to a new domain without access to target domain labels or source domain data.
This paper proposes a novel approach that considers multiple prediction hypotheses for each sample and investigates the rationale behind each hypothesis.
To achieve optimal performance, we propose a three-step adaptation process: model pre-adaptation, hypothesis consolidation, and semi-supervised learning.
arXiv Detail & Related papers (2024-02-02T05:53:22Z)
- Refining Diffusion Planner for Reliable Behavior Synthesis by Automatic Detection of Infeasible Plans [25.326624139426514]
Diffusion-based planning has shown promising results in long-horizon, sparse-reward tasks.
However, due to their nature as generative models, diffusion models are not guaranteed to generate feasible plans.
We propose a novel approach to refine unreliable plans generated by diffusion models by providing refining guidance to error-prone plans.
arXiv Detail & Related papers (2023-10-30T10:35:42Z)
- Performative Prediction with Bandit Feedback: Learning through Reparameterization [23.039885534575966]
Performative prediction is a framework for studying social prediction in which the data distribution itself changes in response to the deployment of a model.
We develop a reparameterization that recasts the performative prediction objective as a function of the induced data distribution.
arXiv Detail & Related papers (2023-05-01T21:31:29Z)
- Prediction-Oriented Bayesian Active Learning [51.426960808684655]
Expected predictive information gain (EPIG) is an acquisition function that measures information gain in the space of predictions rather than parameters.
EPIG leads to stronger predictive performance compared with BALD across a range of datasets and models (an illustrative EPIG estimator appears after this list).
arXiv Detail & Related papers (2023-04-17T10:59:57Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
In dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify uncertainty during forecasting using Bayesian approximation, capturing variability that deterministic approaches fail to represent.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Learning Interpretable Deep State Space Model for Probabilistic Time Series Forecasting [98.57851612518758]
Probabilistic time series forecasting involves estimating the distribution of a time series' future values based on its history.
We propose a deep state space model for probabilistic time series forecasting whereby the non-linear emission model and transition model are parameterized by networks.
We show in experiments that our model produces accurate and sharp probabilistic forecasts.
arXiv Detail & Related papers (2021-01-31T06:49:33Z)
- Forethought and Hindsight in Credit Assignment [62.05690959741223]
We work to understand the gains and peculiarities of planning employed as forethought via forward models or as hindsight operating with backward models.
We investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated.
arXiv Detail & Related papers (2020-10-26T16:00:47Z)
- Selective Dyna-style Planning Under Limited Model Capacity [26.63876180969654]
In model-based reinforcement learning, planning with an imperfect model of the environment has the potential to harm learning progress.
In this paper, we investigate the idea of using an imperfect model selectively.
The agent should plan in parts of the state space where the model would be helpful but refrain from using the model where it would be harmful.
arXiv Detail & Related papers (2020-07-05T18:51:50Z)
- Bootstrapped model learning and error correction for planning with uncertainty in model-based RL [1.370633147306388]
A natural aim is to learn a model that accurately reflects the dynamics of the environment.
This paper explores the problem of model misspecification through uncertainty-aware reinforcement learning agents.
We propose a bootstrapped multi-headed neural network that learns the distribution of future states and rewards.
arXiv Detail & Related papers (2020-04-15T15:41:21Z)
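As referenced in the "Predictive Churn with the Set of Good Models" entry above, here is a minimal sketch of empirical churn estimation, assuming the common definition of churn as the fraction of inputs on which two models' predicted labels disagree; `pairwise_churn`, `expected_churn`, and `pred_matrix` are hypothetical names, and the paper's treatment of the expectation over the Rashomon set is more general than this empirical average.

```python
import numpy as np

def pairwise_churn(preds_a, preds_b):
    """Empirical churn between two models: the fraction of inputs on which
    their predicted labels disagree (a common working definition)."""
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

def expected_churn(pred_matrix):
    """Average pairwise churn over a collection of near-optimal models.
    pred_matrix: [R, N] array of predicted labels of R models on N inputs."""
    R = pred_matrix.shape[0]
    pairs = [(i, j) for i in range(R) for j in range(i + 1, R)]
    return float(np.mean([pairwise_churn(pred_matrix[i], pred_matrix[j])
                          for i, j in pairs])) if pairs else 0.0
```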
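For the "Prediction-Oriented Bayesian Active Learning" entry above, here is an illustrative Monte Carlo estimator of EPIG for classification, assuming the parameter posterior is approximated by K samples (for example, an ensemble or repeated dropout passes); the function name `epig_scores` and the array shapes are assumptions for this sketch, not the paper's reference implementation.

```python
import numpy as np

def epig_scores(probs_pool, probs_targ, eps=1e-12):
    """Monte Carlo EPIG estimate for classification.

    probs_pool: [K, N, C] predictive probabilities of K posterior samples
                on N candidate (pool) inputs.
    probs_targ: [K, M, C] the same K samples evaluated on M inputs drawn
                from the target input distribution.
    Returns an [N] array of scores: the mutual information between the
    prediction at a candidate and the prediction at a target point,
    averaged over the sampled target inputs."""
    K = probs_pool.shape[0]
    # Joint predictive p(y, y_* | x, x_*): average over posterior samples.
    joint = np.einsum("knc,kmd->nmcd", probs_pool, probs_targ) / K
    # Product of marginal predictives p(y | x) * p(y_* | x_*).
    indep = np.einsum("nc,md->nmcd", probs_pool.mean(0), probs_targ.mean(0))
    # Mutual information for every (candidate, target) pair, then average
    # over the target samples.
    mi = np.sum(joint * (np.log(joint + eps) - np.log(indep + eps)), axis=(2, 3))
    return mi.mean(axis=1)
```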
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.