GANISP: a GAN-assisted Importance SPlitting Probability Estimator
- URL: http://arxiv.org/abs/2112.15444v1
- Date: Tue, 28 Dec 2021 17:13:37 GMT
- Title: GANISP: a GAN-assisted Importance SPlitting Probability Estimator
- Authors: Malik Hassanaly and Andrew Glaws and Ryan N. King
- Abstract summary: The proposed GAN-assisted Importance SPlitting method (GANISP) improves variance reduction for the targeted system.
An implementation of the method is available in a companion repository.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing manufacturing processes with high yield and strong reliability
relies on effective methods for rare event estimation. Genealogical importance
splitting reduces the variance of rare event probability estimators by
iteratively selecting and replicating realizations that are headed towards a
rare event. The replication step is difficult when applied to deterministic
systems where the initial conditions of the offspring realizations need to be
modified. Typically, a random perturbation is applied to the offspring to
differentiate their trajectory from the parent realization. However, this
random perturbation strategy may be effective for some systems while failing
for others, preventing variance reduction in the probability estimate. This
work seeks to address this limitation using a generative model such as a
Generative Adversarial Network (GAN) to generate perturbations that are
consistent with the attractor of the dynamical system. The proposed
GAN-assisted Importance SPlitting method (GANISP) improves the variance
reduction for the system targeted. An implementation of the method is available
in a companion repository (https://github.com/NREL/GANISP).
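The companion repository above contains the reference implementation. As a rough illustration of the selection-and-replication loop that GANISP builds on, the Python sketch below runs a genealogical importance splitting estimator on a toy deterministic map; the dynamics, parameter values, and the Gaussian `perturb` stand-in for the GAN sampler are illustrative assumptions, not code from the paper or the repository.

```python
import numpy as np

rng = np.random.default_rng(0)


def step(x):
    # Toy deterministic dynamics (logistic map on [0, 1]). Because the map is
    # noise-free, cloned realizations would stay identical to their parent
    # unless their states are perturbed.
    return 4.0 * x * (1.0 - x)


def perturb(x, scale=1e-3):
    # Stand-in for the generative perturbation: GANISP generates perturbations
    # consistent with the attractor of the system; here we fall back to a
    # small Gaussian perturbation, clipped to the map's domain.
    return np.clip(x + scale * rng.standard_normal(x.shape), 0.0, 1.0)


def splitting_estimator(n_particles=2000, n_levels=4, n_steps=20,
                        threshold=1.0 - 1e-6):
    """Estimate P(max_t x_t > threshold) with selection/replication rounds."""
    x = rng.uniform(0.0, 1.0, n_particles)
    log_prob = 0.0
    for level in range(n_levels):
        # Advance all realizations and record the peak value each attains.
        scores = np.full(n_particles, -np.inf)
        for _ in range(n_steps):
            x = step(x)
            scores = np.maximum(scores, x)
        if level == n_levels - 1:
            # Final level: fraction of realizations that reached the rare event.
            return np.exp(log_prob) * np.mean(scores > threshold)
        # Selection: keep the half that came closest to the event and record
        # the corresponding conditional probability.
        n_keep = n_particles // 2
        keep = np.argsort(scores)[-n_keep:]
        log_prob += np.log(n_keep / n_particles)
        # Replication: clone the survivors and perturb the offspring so their
        # trajectories separate from the parents (the step GANISP improves).
        reps = int(np.ceil(n_particles / n_keep))
        x = perturb(np.tile(x[keep], reps)[:n_particles])


if __name__ == "__main__":
    print(f"Estimated rare-event probability: {splitting_estimator():.3e}")
```

In the paper's setting, the Gaussian `perturb` would be replaced by samples from a GAN trained to produce states consistent with the system's attractor, which is what allows variance reduction when naive random perturbations fail.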
Related papers
- Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z)
- Distributional Shift-Aware Off-Policy Interval Estimation: A Unified Error Quantification Framework [8.572441599469597]
We study high-confidence off-policy evaluation in the context of infinite-horizon Markov decision processes.
The objective is to establish a confidence interval (CI) for the target policy value using only offline data pre-collected from unknown behavior policies.
We show that our algorithm is sample-efficient, error-robust, and provably convergent even in non-linear function approximation settings.
arXiv Detail & Related papers (2023-09-23T06:35:44Z)
- Accurate generation of stochastic dynamics based on multi-model Generative Adversarial Networks [0.0]
Generative Adversarial Networks (GANs) have shown immense potential in fields such as text and image generation.
Here we quantitatively test this approach by applying it to a prototypical process on a lattice.
Importantly, the discreteness of the model is retained despite the noise.
arXiv Detail & Related papers (2023-05-25T10:41:02Z)
- A Deep Reinforcement Learning Approach to Rare Event Estimation [30.670114229970526]
An important step in the design of autonomous systems is to evaluate the probability that a failure will occur.
In safety-critical domains, the failure probability is extremely small so that the evaluation of a policy through Monte Carlo sampling is inefficient.
We develop two adaptive importance sampling algorithms that can efficiently estimate the probability of rare events for sequential decision making systems.
arXiv Detail & Related papers (2022-11-22T18:29:14Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z)
- Tracking the risk of a deployed model and detecting harmful distribution shifts [105.27463615756733]
In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially.
We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate.
arXiv Detail & Related papers (2021-10-12T17:21:41Z)
- Rare event estimation using stochastic spectral embedding [0.0]
Estimating the probability of rare failure events is an essential step in the reliability assessment of engineering systems.
We propose a set of modifications that tailor the algorithm to efficiently solve rare event estimation problems.
arXiv Detail & Related papers (2021-06-09T16:10:33Z)
- Entropy-based adaptive design for contour finding and estimating reliability [0.24466725954625884]
In reliability analysis, methods used to estimate failure probability are often limited by the costs associated with model evaluations.
We introduce an entropy-based GP adaptive design that, when paired with MFIS, provides more accurate failure probability estimates.
Illustrative examples are provided on benchmark data as well as an application to an impact damage simulator for National Aeronautics and Space Administration (NASA) spacesuits.
arXiv Detail & Related papers (2021-05-24T15:41:15Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.