A Deep Reinforcement Learning Approach to Rare Event Estimation
- URL: http://arxiv.org/abs/2211.12470v1
- Date: Tue, 22 Nov 2022 18:29:14 GMT
- Title: A Deep Reinforcement Learning Approach to Rare Event Estimation
- Authors: Anthony Corso, Kyu-Young Kim, Shubh Gupta, Grace Gao, Mykel J.
Kochenderfer
- Abstract summary: An important step in the design of autonomous systems is to evaluate the probability that a failure will occur.
In safety-critical domains, the failure probability is extremely small so that the evaluation of a policy through Monte Carlo sampling is inefficient.
We develop two adaptive importance sampling algorithms that can efficiently estimate the probability of rare events for sequential decision making systems.
- Score: 30.670114229970526
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An important step in the design of autonomous systems is to evaluate the
probability that a failure will occur. In safety-critical domains, the failure
probability is extremely small so that the evaluation of a policy through Monte
Carlo sampling is inefficient. Adaptive importance sampling approaches have
been developed for rare event estimation but do not scale well to sequential
systems with long horizons. In this work, we develop two adaptive importance
sampling algorithms that can efficiently estimate the probability of rare
events for sequential decision making systems. The basis for these algorithms
is the minimization of the Kullback-Leibler divergence between a
state-dependent proposal distribution and a target distribution over
trajectories, but the resulting algorithms resemble policy gradient and
value-based reinforcement learning. We apply multiple importance sampling to
reduce the variance of our estimate and to address the issue of multi-modality
in the optimal proposal distribution. We demonstrate our approach on a control
task with both continuous and discrete action spaces and show accuracy
improvements over several baselines.
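The abstract sketches the general recipe the paper builds on: estimate a failure probability with importance sampling, p_hat = (1/N) * sum_i 1{fail(tau_i)} * p(tau_i)/q(tau_i) with trajectories tau_i drawn from a proposal q, and adapt q by minimizing a KL (cross-entropy) objective so that it concentrates on failures. The snippet below is a minimal, self-contained sketch of that recipe on a toy one-dimensional system; it uses a plain cross-entropy update for a Gaussian disturbance proposal rather than the paper's state-dependent, policy-gradient and value-based algorithms, and all names, dynamics, and thresholds are illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithms): importance sampling for a rare
# failure event in a toy sequential system, with the proposal adapted by a
# cross-entropy-style KL minimization. Dynamics and constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

HORIZON = 20        # time steps per trajectory
THRESHOLD = 16.0    # failure: cumulative state exceeds this value
NOMINAL_STD = 1.0   # nominal (target) disturbance distribution: N(0, 1)


def rollout(means, n):
    """Sample n disturbance trajectories from the proposal N(means[t], 1)."""
    eps = rng.normal(loc=means, scale=NOMINAL_STD, size=(n, HORIZON))
    states = np.cumsum(eps, axis=1)          # toy dynamics: x_{t+1} = x_t + eps_t
    failed = states.max(axis=1) >= THRESHOLD
    return eps, failed


def log_likelihood_ratio(eps, means):
    """log p(trajectory) - log q(trajectory) for per-step Gaussian disturbances."""
    log_p = -0.5 * np.sum(eps ** 2, axis=1)
    log_q = -0.5 * np.sum((eps - means) ** 2, axis=1)
    return log_p - log_q


# --- Stage 1: adapt the proposal by approximately minimizing the KL divergence
# to the failure-conditioned target (cross-entropy method). ---
means = np.zeros(HORIZON)                    # start from the nominal distribution
for _ in range(10):
    eps, failed = rollout(means, n=5_000)
    weights = failed * np.exp(log_likelihood_ratio(eps, means))
    if weights.sum() == 0.0:
        # No failures yet: tilt toward the trajectories that got closest to failing.
        elite = eps[np.argsort(np.cumsum(eps, axis=1).max(axis=1))[-500:]]
        means = elite.mean(axis=0)
        continue
    # Weighted maximum-likelihood update of the Gaussian proposal means.
    means = (weights[:, None] * eps).sum(axis=0) / weights.sum()

# --- Stage 2: importance-sampling estimate of the failure probability. ---
eps, failed = rollout(means, n=20_000)
is_weights = failed * np.exp(log_likelihood_ratio(eps, means))
p_fail = is_weights.mean()

# Crude Monte Carlo under the nominal distribution, for comparison.
_, failed_mc = rollout(np.zeros(HORIZON), n=20_000)
print(f"importance sampling estimate: {p_fail:.2e}")
print(f"naive Monte Carlo estimate:   {failed_mc.mean():.2e}")
```

With a threshold this far in the tail, the naive Monte Carlo estimate typically sees only a handful of failures in 20,000 rollouts, while the tilted proposal makes failures common and lets the likelihood ratios recover an estimate of the nominal failure probability.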
Related papers
- Embedding generalization within the learning dynamics: An approach based-on sample path large deviation theory [0.0]
We consider an empirical risk perturbation-based learning problem that exploits methods from a continuous-time perspective.
We provide an estimate in the small noise limit based on the Freidlin-Wentzell theory of large deviations.
We also present a computational algorithm that solves the corresponding variational problem, leading to optimal point estimates.
arXiv Detail & Related papers (2024-08-04T23:31:35Z) - Reliability analysis of discrete-state performance functions via
adaptive sequential sampling with detection of failure surfaces [0.0]
The paper presents a new efficient and robust method for rare event probability estimation.
The method can estimate the probabilities of multiple failure types.
It can incorporate information about the identified failure types to increase the accuracy of the estimated probabilities.
arXiv Detail & Related papers (2022-08-04T05:59:25Z) - GANISP: a GAN-assisted Importance SPlitting Probability Estimator [0.0]
The proposed GAN-assisted Importance SPlitting method (GANISP) improves variance reduction for the targeted system.
An implementation of the method is available in a companion repository.
arXiv Detail & Related papers (2021-12-28T17:13:37Z) - Accelerated Policy Evaluation: Learning Adversarial Environments with
Adaptive Importance Sampling [19.81658135871748]
A biased or inaccurate policy evaluation in a safety-critical system could potentially cause unexpected catastrophic failures.
We propose the Accelerated Policy Evaluation (APE) method, which simultaneously uncovers rare events and estimates the rare event probability.
APE is scalable to large discrete or continuous spaces by incorporating function approximators.
arXiv Detail & Related papers (2021-06-19T20:03:26Z) - KL Guided Domain Adaptation [88.19298405363452]
Domain adaptation is an important problem and often needed for real-world applications.
A common approach in the domain adaptation literature is to learn a representation of the input that has the same distribution over the source and the target domains.
We show that with a probabilistic representation network, the KL term can be estimated efficiently via minibatch samples (a rough sketch of such a minibatch estimate appears after this list).
arXiv Detail & Related papers (2021-06-14T22:24:23Z) - Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them in a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical computational trade-offs for different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z) - Risk-Sensitive Deep RL: Variance-Constrained Actor-Critic Provably Finds
Globally Optimal Policy [95.98698822755227]
We make the first attempt to study risk-sensitive deep reinforcement learning under the average reward setting with the variance risk criterion.
We propose an actor-critic algorithm that iteratively and efficiently updates the policy, the Lagrange multiplier, and the Fenchel dual variable.
arXiv Detail & Related papers (2020-12-28T05:02:26Z) - Learning Calibrated Uncertainties for Domain Shift: A Distributionally
Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the estimated density ratio reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z) - Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable
Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z) - Cautious Reinforcement Learning via Distributional Risk in the Dual
Domain [45.17200683056563]
We study the estimation of risk-sensitive policies in reinforcement learning problems defined by a Markov Decision Process (MDP) whose state and action spaces are countably finite.
We propose a new definition of risk, which we call caution, as a penalty function added to the dual objective of the linear programming (LP) formulation of reinforcement learning.
arXiv Detail & Related papers (2020-02-27T23:18:04Z) - GenDICE: Generalized Offline Estimation of Stationary Values [108.17309783125398]
We show that effective estimation can still be achieved in important applications.
Our approach is based on estimating a ratio that corrects for the discrepancy between the stationary and empirical distributions (a minimal sketch of this ratio correction appears after this list).
The resulting algorithm, GenDICE, is straightforward and effective.
arXiv Detail & Related papers (2020-02-21T00:27:52Z)
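The GenDICE entry directly above turns on a correction ratio between the target policy's stationary distribution and the empirical data distribution. As a rough illustration of how such a ratio is used once it has been estimated (learning the ratio is the actual contribution of GenDICE and is not shown here), a self-normalized weighted average looks like the sketch below; the function names and toy data are assumptions, not the paper's API.

```python
# Minimal sketch of a ratio-corrected (self-normalized) estimate, assuming a
# correction ratio tau(s, a) ~= d_pi(s, a) / d_data(s, a) has already been
# learned. Estimating tau itself is the hard part addressed by GenDICE.
import numpy as np

def corrected_average_reward(states, actions, rewards, ratio_fn):
    """Estimate the average reward under the target policy's stationary
    distribution from off-policy data, using correction ratios as weights."""
    tau = np.array([ratio_fn(s, a) for s, a in zip(states, actions)])
    return float(np.sum(tau * rewards) / np.sum(tau))   # self-normalized average

# Toy usage with a hypothetical, hand-specified ratio function.
states = np.array([0, 1, 1, 2, 0, 2])
actions = np.array([0, 1, 0, 1, 1, 0])
rewards = np.array([0.0, 1.0, 0.5, 1.0, 0.0, 0.5])
print(corrected_average_reward(states, actions, rewards,
                               lambda s, a: 1.0 + 0.5 * s))
```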
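The KL Guided Domain Adaptation entry above states that, with a probabilistic representation network, the KL term can be estimated efficiently from minibatch samples. The sketch below shows one plausible form of such a minibatch Monte Carlo estimate, approximating each domain's representation marginal by a mixture of the minibatch's Gaussian encodings; the estimator details and shapes are illustrative assumptions, not the paper's exact construction.

```python
# Rough sketch (details assumed): a minibatch Monte Carlo estimate of
# KL(p_source(z) || p_target(z)) when the encoder maps each input x to a
# diagonal Gaussian q(z | x). Each domain marginal is approximated by an
# equally weighted mixture of that minibatch's Gaussian encodings.
import numpy as np

def log_mixture_density(z, means, stds):
    """Log density of an equally weighted mixture of diagonal Gaussians at z."""
    diff = z[:, None, :] - means[None, :, :]                     # (n, m, d)
    log_comp = -0.5 * np.sum((diff / stds[None]) ** 2
                             + 2.0 * np.log(stds[None])
                             + np.log(2.0 * np.pi), axis=-1)     # (n, m)
    # log-mean-exp over the m mixture components
    mx = log_comp.max(axis=1, keepdims=True)
    return mx.squeeze(1) + np.log(np.mean(np.exp(log_comp - mx), axis=1))

def minibatch_kl(src_means, src_stds, tgt_means, tgt_stds, rng):
    """Monte Carlo KL(p_S || p_T) from one minibatch of encoder outputs."""
    z = rng.normal(src_means, src_stds)       # one z sample per source example
    return float(np.mean(log_mixture_density(z, src_means, src_stds)
                         - log_mixture_density(z, tgt_means, tgt_stds)))

rng = np.random.default_rng(0)
d = 4
src_means, src_stds = rng.normal(size=(32, d)), np.full((32, d), 0.5)
tgt_means, tgt_stds = rng.normal(loc=0.3, size=(32, d)), np.full((32, d), 0.5)
print(minibatch_kl(src_means, src_stds, tgt_means, tgt_stds, rng))
```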