Seeing is not Believing: Robust Reinforcement Learning against Spurious
Correlation
- URL: http://arxiv.org/abs/2307.07907v2
- Date: Wed, 25 Oct 2023 23:51:27 GMT
- Title: Seeing is not Believing: Robust Reinforcement Learning against Spurious
Correlation
- Authors: Wenhao Ding, Laixi Shi, Yuejie Chi, Ding Zhao
- Abstract summary: We consider one critical type of robustness against spurious correlation, where different portions of the state do not have correlations induced by unobserved confounders.
A model that learns such useless or even harmful correlation could catastrophically fail when the confounder in the test case deviates from the training one.
Existing robust algorithms that assume simple and unstructured uncertainty sets are therefore inadequate to address this challenge.
- Score: 57.351098530477124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robustness has been extensively studied in reinforcement learning (RL) to
handle various forms of uncertainty such as random perturbations, rare events,
and malicious attacks. In this work, we consider one critical type of
robustness against spurious correlation, where different portions of the state
do not have correlations induced by unobserved confounders. These spurious
correlations are ubiquitous in real-world tasks, for instance, a self-driving
car usually observes heavy traffic in the daytime and light traffic at night
due to unobservable human activity. A model that learns such useless or even
harmful correlation could catastrophically fail when the confounder in the test
case deviates from the training one. Although motivated, enabling robustness
against spurious correlation poses significant challenges since the uncertainty
set, shaped by the unobserved confounder and causal structure, is difficult to
characterize and identify. Existing robust algorithms that assume simple and
unstructured uncertainty sets are therefore inadequate to address this
challenge. To solve this issue, we propose Robust State-Confounded Markov
Decision Processes (RSC-MDPs) and theoretically demonstrate its superiority in
avoiding learning spurious correlations compared with other robust RL
counterparts. We also design an empirical algorithm to learn the robust optimal
policy for RSC-MDPs, which outperforms all baselines in eight realistic
self-driving and manipulation tasks.
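The worst-case optimization that robust MDP formulations such as RSC-MDPs build on can be illustrated with a minimal sketch: value iteration that takes the minimum over a finite uncertainty set of transition models before maximizing over actions. The states, rewards, and the two candidate models below are hypothetical toy values, not the paper's actual construction.

```python
import numpy as np

def robust_value_iteration(P_models, R, gamma=0.9, iters=200):
    """P_models: list of (S, A, S) transition tensors forming the uncertainty set.
    R: (S, A) reward matrix. Returns the robust optimal value function."""
    S, A = R.shape
    V = np.zeros(S)
    for _ in range(iters):
        # For each model, Q[s, a] = R[s, a] + gamma * E_P[V]; keep the worst case.
        Q_worst = np.min([R + gamma * P @ V for P in P_models], axis=0)
        V = Q_worst.max(axis=1)  # greedy over actions
    return V

# Two-state toy example with two candidate transition models.
R = np.array([[1.0, 0.0], [0.0, 1.0]])
P1 = np.array([[[0.9, 0.1], [0.1, 0.9]], [[0.5, 0.5], [0.5, 0.5]]])
P2 = np.array([[[0.5, 0.5], [0.5, 0.5]], [[0.9, 0.1], [0.1, 0.9]]])
V = robust_value_iteration([P1, P2], R)
```

Because the inner minimum ranges over the whole uncertainty set, the robust value can never exceed the value computed under any single model in the set.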
Related papers
- Towards Robust Text Classification: Mitigating Spurious Correlations with Causal Learning [2.7813683000222653]
We propose the Causally Calibrated Robust (CCR) method to reduce models' reliance on spurious correlations.
CCR integrates a causal feature selection method based on counterfactual reasoning, along with an inverse propensity weighting (IPW) loss function.
We show that CCR achieves state-of-the-art performance among methods without group labels, and in some cases it can compete with models that utilize group labels.
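The inverse propensity weighting (IPW) idea in the CCR summary can be sketched in a few lines: per-example losses are reweighted by the inverse of an estimated propensity, so examples over-represented by a spurious correlate contribute less. The function name and clipping constant are illustrative assumptions, not from the paper.

```python
import numpy as np

def ipw_loss(losses, propensities, eps=1e-6):
    # Downweight examples that the (spuriously correlated) sampling process
    # over-represents; clip propensities to avoid division by zero.
    w = 1.0 / np.clip(propensities, eps, None)
    return float(np.mean(w * losses))
```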
arXiv Detail & Related papers (2024-11-01T21:29:07Z) - Is Difficulty Calibration All We Need? Towards More Practical Membership Inference Attacks [16.064233621959538]
We propose a query-efficient and computation-efficient MIA that directly re-leverages the original membership scores to mitigate the errors in difficulty calibration.
arXiv Detail & Related papers (2024-08-31T11:59:42Z) - Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithm [14.517103323409307]
The sim-to-real gap represents the disparity between training and testing environments.
A promising approach to addressing this challenge is distributionally robust RL.
We tackle robust RL via interactive data collection and present an algorithm with a provable sample complexity guarantee.
arXiv Detail & Related papers (2024-04-04T16:40:22Z) - Causal Representation Learning Made Identifiable by Grouping of Observational Variables [8.157856010838382]
Causal Representation Learning aims to learn a causal model for hidden features in a data-driven manner.
Here, we show identifiability based on novel, weak constraints.
We also propose a novel self-supervised estimation framework consistent with the model.
arXiv Detail & Related papers (2023-10-24T10:38:02Z) - Game-Theoretic Robust Reinforcement Learning Handles Temporally-Coupled Perturbations [98.5802673062712]
We introduce temporally-coupled perturbations, presenting a novel challenge for existing robust reinforcement learning methods.
We propose GRAD, a novel game-theoretic approach that treats the temporally-coupled robust RL problem as a partially observable two-player zero-sum game.
arXiv Detail & Related papers (2023-07-22T12:10:04Z) - Uncertainty-Aware Bootstrap Learning for Joint Extraction on
Distantly-Supervised Data [36.54640096189285]
Bootstrap learning is motivated by the intuition that the higher the uncertainty of an instance, the more likely the model's confidence is inconsistent with the ground truth.
We first explore instance-level data uncertainty to create an initial set of high-confidence examples.
During bootstrap learning, we propose self-ensembling as a regularizer to alleviate inter-model uncertainty produced by noisy labels.
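A common way to realize the self-ensembling regularizer mentioned above is to keep an exponential moving average (EMA) "teacher" of the model and penalize disagreement with it; the decay value and penalty form below are generic assumptions, not the paper's exact design.

```python
import numpy as np

def ema_update(teacher, student, decay=0.99):
    # Slowly track the student; the teacher changes little per step.
    return decay * teacher + (1.0 - decay) * student

def consistency_penalty(student_probs, teacher_probs):
    # Mean squared disagreement between student and EMA-teacher predictions,
    # which damps inter-model variance introduced by noisy labels.
    return float(np.mean((student_probs - teacher_probs) ** 2))
```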
arXiv Detail & Related papers (2023-05-05T20:06:11Z) - Probabilistically Robust Learning: Balancing Average- and Worst-case
Performance [105.87195436925722]
We propose a framework called probabilistic robustness that bridges the gap between the accurate yet brittle average case and the robust yet conservative worst case.
From a theoretical point of view, this framework overcomes the trade-offs between the performance and the sample-complexity of worst-case and average-case learning.
arXiv Detail & Related papers (2022-02-02T17:01:38Z) - Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z) - Disentangling Observed Causal Effects from Latent Confounders using
Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z) - Learning Causal Models Online [103.87959747047158]
Predictive models can rely on spurious correlations in the data for making predictions.
One solution for achieving strong generalization is to incorporate causal structures in the models.
We propose an online algorithm that continually detects and removes spurious features.
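A toy illustration of detecting and removing spurious features online: track each feature's correlation with the target across successive data windows and mask out features whose correlation flips sign, since a stable causal feature should correlate consistently. The windowing scheme and sign-stability criterion here are hypothetical, not the paper's algorithm.

```python
import numpy as np

def spurious_mask(windows):
    """windows: list of (X, y) batches collected over time.
    Returns a boolean keep-mask over features (True = sign-stable)."""
    corrs = []
    for X, y in windows:
        c = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
        corrs.append(c)
    corrs = np.stack(corrs)
    # Keep features whose correlation sign agrees across every window.
    return np.sign(corrs).min(axis=0) == np.sign(corrs).max(axis=0)
```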
arXiv Detail & Related papers (2020-06-12T20:49:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.