When Machine Learning Meets Importance Sampling: A More Efficient Rare Event Estimation Approach
- URL: http://arxiv.org/abs/2504.13982v1
- Date: Fri, 18 Apr 2025 07:25:56 GMT
- Title: When Machine Learning Meets Importance Sampling: A More Efficient Rare Event Estimation Approach
- Authors: Ruoning Zhao, Xinyun Chen
- Abstract summary: We explore the simulation task of estimating rare event probabilities for tandem queues in their steady state. Existing literature has recognized that importance sampling methods can be inefficient, due to the exploding variance of the path-dependent likelihood functions. We introduce a new importance sampling approach that utilizes a marginal likelihood ratio on the stationary distribution, effectively avoiding the issue of excessive variance.
- Score: 29.286353206449643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Driven by applications in telecommunication networks, we explore the simulation task of estimating rare event probabilities for tandem queues in their steady state. Existing literature has recognized that importance sampling methods can be inefficient, due to the exploding variance of the path-dependent likelihood functions. To mitigate this, we introduce a new importance sampling approach that utilizes a marginal likelihood ratio on the stationary distribution, effectively avoiding the issue of excessive variance. In addition, we design a machine learning algorithm to estimate this marginal likelihood ratio using importance sampling data. Numerical experiments indicate that our algorithm outperforms the classic importance sampling methods.
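As a concrete illustration of the marginal-ratio idea, here is a minimal numerical sketch for a single M/M/1 queue, where both the target and the proposal stationary distributions are geometric and known in closed form. The threshold, traffic intensities, and sample size are illustrative choices; in the paper's tandem-queue setting the marginal likelihood ratio has no closed form and is instead learned from importance sampling data.

```python
import numpy as np

rng = np.random.default_rng(0)

# M/M/1 queue: the stationary queue length is geometric, pi(n) = (1 - rho) * rho**n.
rho, gamma = 0.5, 30        # traffic intensity; rare-event threshold P(Q >= gamma)
rho_tilde = 0.9             # heavier-tailed proposal that pushes mass toward large n

def pi(n, r):
    """Stationary pmf of the queue length under traffic intensity r."""
    return (1.0 - r) * r**n

# Draw directly from the proposal's stationary distribution (support {0, 1, 2, ...}).
samples = rng.geometric(1.0 - rho_tilde, size=100_000) - 1

# Marginal likelihood ratio on the stationary distribution: w(n) = pi(n) / pi_tilde(n).
weights = pi(samples, rho) / pi(samples, rho_tilde)
estimate = np.mean((samples >= gamma) * weights)

print(f"IS estimate : {estimate:.3e}")
print(f"exact value : {rho**gamma:.3e}")  # P(Q >= gamma) = rho**gamma for M/M/1
```

Because the weight attaches to the sampled state rather than to a whole path, its variance stays bounded; crude Monte Carlo would need on the order of 1/rho**gamma samples, here roughly a billion, to observe a single event.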
Related papers
- Exploring Variance Reduction in Importance Sampling for Efficient DNN Training [1.7767466724342067]
This paper proposes a method for estimating variance reduction during deep neural network (DNN) training using only minibatches sampled under importance sampling. An absolute metric to quantify the efficiency of importance sampling is also introduced, along with an algorithm for real-time estimation of importance scores based on moving gradient statistics.
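The sampler such a scheme needs can be sketched in a few lines. The class below is a hypothetical illustration (the names and the smoothing constant are ours, not the paper's): sampling probabilities are maintained as moving averages of per-example gradient norms, and each drawn example carries a 1/(N * p_i) weight so the resulting gradient estimate stays unbiased.

```python
import numpy as np

class ImportanceSampler:
    """Hypothetical minibatch sampler driven by moving gradient statistics."""

    def __init__(self, n_examples, smoothing=0.9):
        self.scores = np.ones(n_examples)  # moving per-example gradient-norm estimates
        self.smoothing = smoothing

    def sample(self, rng, batch_size):
        p = self.scores / self.scores.sum()
        idx = rng.choice(len(p), size=batch_size, p=p)
        weights = 1.0 / (len(p) * p[idx])  # correction that keeps SGD unbiased
        return idx, weights

    def update(self, idx, grad_norms):
        # Exponential moving average of the gradient norms observed this step.
        s = self.smoothing
        self.scores[idx] = s * self.scores[idx] + (1.0 - s) * grad_norms
```

In a training loop one would call sample(), scale the per-example losses by the returned weights, and feed the observed gradient norms back through update().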
arXiv Detail & Related papers (2025-01-23T00:43:34Z) - Enhanced Importance Sampling through Latent Space Exploration in Normalizing Flows [69.8873421870522]
Importance sampling is a rare-event simulation technique used in Monte Carlo simulations.
We propose a method for more efficient sampling by updating the proposal distribution in the latent space of a normalizing flow.
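A rough sketch of the latent-space idea follows, with a hand-coded affine map standing in for a trained normalizing flow and a cross-entropy-style mean update as the proposal adaptation; the failure event, update rule, and all constants are our own illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.5, 0.3], [0.0, 0.8]])  # stand-in for a trained flow T: z -> x

def T(z):
    return z @ A.T

def log_base(z):  # log density of the flow's N(0, I) base distribution
    return -0.5 * np.sum(z**2, axis=1) - 0.5 * z.shape[1] * np.log(2.0 * np.pi)

def is_failure(x):  # rare event defined in data space
    return x[:, 0] > 4.5

mu = np.zeros(2)
for _ in range(20):  # adapt the latent proposal N(mu, I) toward the failure region
    z = rng.normal(size=(5_000, 2)) + mu
    w = np.exp(log_base(z) - log_base(z - mu)) * is_failure(T(z))
    if w.sum() > 0:
        mu = (w[:, None] * z).sum(axis=0) / w.sum()  # cross-entropy-style update

z = rng.normal(size=(20_000, 2)) + mu  # final estimate with the adapted proposal
w = np.exp(log_base(z) - log_base(z - mu))
print(f"estimated failure probability: {np.mean(w * is_failure(T(z))):.3e}")
```

Working in the latent space keeps the likelihood ratio a simple Gaussian ratio even when the data-space density is intractable.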
arXiv Detail & Related papers (2025-01-06T21:18:02Z) - Failure Probability Estimation for Black-Box Autonomous Systems using State-Dependent Importance Sampling Proposals [37.579437595742995]
Estimating the probability of failure is a critical step in developing safety-critical autonomous systems.
Direct estimation methods such as Monte Carlo sampling are often impractical due to the rarity of failures in these systems.
We propose an adaptive importance sampling algorithm to address these limitations.
arXiv Detail & Related papers (2024-12-03T04:28:58Z) - Deep Ensembles Meets Quantile Regression: Uncertainty-aware Imputation for Time Series [45.76310830281876]
We propose Quantile Sub-Ensembles, a novel method to estimate uncertainty with an ensemble of quantile-regression-based task networks.
Our method not only produces accurate imputations that are robust to high missing rates, but is also computationally efficient due to the fast training of its non-generative model.
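For context, a minimal sketch of the quantile-regression building block: the pinball loss, whose minimizer over constants is the tau-quantile, which is what lets an ensemble of quantile heads express predictive intervals. The synthetic data and grid search are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

def pinball_loss(y, pred, tau):
    """Quantile (pinball) loss minimized by quantile-regression networks."""
    diff = y - pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Sanity check on synthetic data: the constant minimizing the tau = 0.9
# pinball loss coincides with the empirical 0.9-quantile.
y = rng.exponential(size=20_000)
grid = np.linspace(0.0, 5.0, 501)
losses = [pinball_loss(y, c, 0.9) for c in grid]
print("argmin of pinball loss :", grid[int(np.argmin(losses))])
print("empirical 0.9-quantile :", np.quantile(y, 0.9))
```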
arXiv Detail & Related papers (2023-12-03T05:52:30Z) - Observation-Guided Diffusion Probabilistic Models [41.749374023639156]
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM).
Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain.
We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines.
arXiv Detail & Related papers (2023-10-06T06:29:06Z) - A Tale of Sampling and Estimation in Discounted Reinforcement Learning [50.43256303670011]
We present a minimax lower bound for the discounted mean estimation problem.
We show that estimating the mean by directly sampling from the discounted kernel of the Markov process brings compelling statistical properties.
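The "sample from the discounted kernel" idea has a compact form: draw a geometric horizon T ~ Geom(1 - gamma), run the chain for T steps, and return the reward observed there; the expectation of that single draw equals (1 - gamma) times the discounted value. The two-state chain below is a toy assumption used only to check the identity.

```python
import numpy as np

rng = np.random.default_rng(3)

P = np.array([[0.9, 0.1], [0.2, 0.8]])  # toy two-state transition matrix
r = np.array([0.0, 1.0])                # per-state rewards
gamma = 0.95

def discounted_kernel_sample(s0):
    T = rng.geometric(1.0 - gamma) - 1  # T in {0, 1, ...}, P(T = t) = (1-gamma)*gamma**t
    s = s0
    for _ in range(T):
        s = rng.choice(2, p=P[s])
    return r[s]                          # unbiased for (1 - gamma) * V(s0)

est = np.mean([discounted_kernel_sample(0) for _ in range(20_000)])
v = np.linalg.solve(np.eye(2) - gamma * P, r)  # exact discounted values
print(f"kernel-sampling estimate: {est:.4f}")
print(f"exact (1-gamma)*V(0)    : {(1.0 - gamma) * v[0]:.4f}")
```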
arXiv Detail & Related papers (2023-04-11T09:13:17Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, according to which we dynamically reweight the objective loss terms so that the network focuses more on the representation learning of uncertain classes.
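As background, a minimal sketch of the evidential output layer this line of work builds on: nonnegative class evidences parameterize a Dirichlet whose total mass yields both a prediction and a per-sample uncertainty. The FIM-based reweighting itself is the paper's contribution and is not reproduced here; the numbers below are made up.

```python
import numpy as np

# Two samples, K = 3 classes: confident evidence vs. near-vacuous evidence.
evidence = np.array([[9.0, 0.5, 0.5],
                     [0.2, 0.3, 0.1]])
alpha = evidence + 1.0                              # Dirichlet parameters
prob = alpha / alpha.sum(axis=1, keepdims=True)     # expected class probabilities
uncertainty = alpha.shape[1] / alpha.sum(axis=1)    # vacuity u = K / sum(alpha)
print(prob)
print(uncertainty)  # ~0.23 for the confident sample vs ~0.83 for the vague one
```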
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - Adapting to Continuous Covariate Shift via Online Density Ratio Estimation [64.8027122329609]
Dealing with distribution shifts is one of the central challenges for modern machine learning.
We propose an online method that can appropriately reuse historical information.
Our density ratio estimation method is proven to perform well, enjoying a dynamic regret bound.
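As a static building block for such online schemes, here is the classic "density ratio by probabilistic classification" construction: train a classifier to distinguish source from target samples and convert its probabilities into a ratio estimate. The Gaussian shift, sample sizes, and use of scikit-learn's LogisticRegression are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
x_src = rng.normal(0.0, 1.0, size=(1_000, 1))  # training (source) covariates
x_tgt = rng.normal(0.7, 1.0, size=(1_000, 1))  # shifted test (target) covariates

X = np.vstack([x_src, x_tgt])
y = np.concatenate([np.zeros(1_000), np.ones(1_000)])  # 0 = source, 1 = target
clf = LogisticRegression().fit(X, y)

# With equal class sizes, w(x) = p(target | x) / p(source | x) estimates q(x)/p(x),
# the weight used for importance-weighted training under covariate shift.
p = clf.predict_proba(x_src)[:, 1]
weights = p / (1.0 - p)
print(weights[:5])
```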
arXiv Detail & Related papers (2023-02-06T04:03:33Z) - A Deep Reinforcement Learning Approach to Rare Event Estimation [30.670114229970526]
An important step in the design of autonomous systems is to evaluate the probability that a failure will occur.
In safety-critical domains, the failure probability is extremely small so that the evaluation of a policy through Monte Carlo sampling is inefficient.
We develop two adaptive importance sampling algorithms that can efficiently estimate the probability of rare events for sequential decision making systems.
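For a reference point on what learned proposals improve over, below is the classical fixed exponential-tilting importance sampler (Siegmund's algorithm) for a sequential rare event: the probability that a random walk with negative drift ever crosses a level b. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, b = -1.0, 1.0, 10.0
theta = -2.0 * mu / sigma**2  # tilt that exactly flips the drift sign

def one_path():
    s, log_lr = 0.0, 0.0
    while s < b:  # under the tilted law the walk drifts upward, so it must cross b
        x = rng.normal(mu + theta * sigma**2, sigma)  # exponentially tilted step
        log_lr += -theta * x + theta * mu + 0.5 * theta**2 * sigma**2
        s += x
    return np.exp(log_lr)  # path likelihood ratio, evaluated at the crossing time

est = np.mean([one_path() for _ in range(5_000)])
print(f"P(random walk ever exceeds {b}) ~= {est:.3e}")
```

Here the likelihood ratio is evaluated at a stopping time and its variance stays controlled; in more complex sequential systems, path-dependent ratios can explode, which is exactly the inefficiency adaptive methods target.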
arXiv Detail & Related papers (2022-11-22T18:29:14Z) - Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
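To illustrate the criterion itself in the one setting where it is exact, the sketch below computes the marginal likelihood of Bayesian linear regression with polynomial features; the evidence trades data fit against complexity (the Occam effect). The data, prior precision alpha, and noise precision beta are toy assumptions; the paper's contribution is scaling approximations of this quantity to deep networks.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
x = np.linspace(-1.0, 1.0, 30)
y = np.sin(np.pi * x) + 0.1 * rng.normal(size=30)
alpha, beta = 1.0, 100.0  # prior precision on weights, observation noise precision

for degree in (1, 3, 9):
    Phi = np.vander(x, degree + 1)  # polynomial design matrix
    # Marginalizing the weights gives y ~ N(0, Phi Phi^T / alpha + I / beta).
    cov = Phi @ Phi.T / alpha + np.eye(len(x)) / beta
    log_evidence = multivariate_normal.logpdf(y, mean=np.zeros(len(x)), cov=cov)
    print(f"degree {degree}: log marginal likelihood = {log_evidence:.1f}")
```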
arXiv Detail & Related papers (2021-04-11T09:50:24Z) - Localized Debiased Machine Learning: Efficient Inference on Quantile Treatment Effects and Beyond [69.83813153444115]
We consider an efficient estimating equation for the (local) quantile treatment effect ((L)QTE) in causal inference.
Debiased machine learning (DML) is a data-splitting approach to estimating high-dimensional nuisances.
We propose localized debiased machine learning (LDML), which avoids this burdensome nuisance-estimation step.
arXiv Detail & Related papers (2019-12-30T14:42:52Z)