A Learning-Based Optimal Uncertainty Quantification Method and Its
Application to Ballistic Impact Problems
- URL: http://arxiv.org/abs/2212.14709v1
- Date: Wed, 28 Dec 2022 14:30:53 GMT
- Title: A Learning-Based Optimal Uncertainty Quantification Method and Its
Application to Ballistic Impact Problems
- Authors: Xingsheng Sun, Burigede Liu
- Abstract summary: This paper concerns the optimal (supremum and infimum) uncertainty bounds for systems where the input (or prior) measure is only partially/imperfectly known.
We demonstrate the learning-based framework on the uncertainty optimization problem.
We show that the approach can be used to construct maps for the performance certificate and safety design in engineering practice.
- Score: 1.713291434132985
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper concerns the study of optimal (supremum and infimum) uncertainty
bounds for systems where the input (or prior) probability measure is only
partially/imperfectly known (e.g., with only statistical moments and/or on a
coarse topology) rather than fully specified. Such partial knowledge provides
constraints on the input probability measures. The theory of Optimal
Uncertainty Quantification allows us to convert the task into a constrained
optimization problem where one seeks to compute the least upper/greatest lower
bound of the system's output uncertainties by finding the extremal probability
measure of the input. Such optimization requires repeated evaluation of the
system's performance indicator (input to performance map) and is
high-dimensional and non-convex by nature. Therefore, it is difficult to find
the optimal uncertainty bounds in practice. In this paper, we examine the use
of machine learning, especially deep neural networks, to address the challenge.
We achieve this by introducing a neural network classifier to approximate the
performance indicator combined with the stochastic gradient descent method to
solve the optimization problem. We demonstrate the learning-based framework on
the uncertainty quantification of the impact of magnesium alloys, which are
promising lightweight structural and protective materials. Finally, we show
that the approach can be used to construct maps for the performance certificate
and safety design in engineering practice.
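The optimization described above can be sketched end to end: a smooth classifier-style surrogate stands in for the performance indicator, the candidate input measure is a finite set of weighted atoms (as OUQ reduction results permit), a moment constraint is enforced by a penalty, and gradient ascent searches for the supremum bound. Everything below (the map `g`, the threshold, the atom count, the mean constraint, all constants) is an illustrative toy, not the paper's impact model or its trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # Toy stand-in for the expensive input-to-performance map
    # (in the paper, an impact simulator approximated by a NN classifier).
    return x ** 2

def soft_indicator(x, threshold=1.0, sharpness=10.0):
    # Smooth surrogate for the indicator 1[g(x) >= threshold].
    return 1.0 / (1.0 + np.exp(-sharpness * (g(x) - threshold)))

def objective(params):
    # Candidate measure: 5 atoms at `pos` with softmax weights.
    # OUQ reduction results justify searching over such discrete measures.
    pos, logits = params[:5], params[5:]
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    p_fail = np.sum(w * soft_indicator(pos))   # P[g(X) >= threshold]
    moment_gap = np.sum(w * pos) - 0.5         # moment constraint E[X] = 0.5
    return p_fail - 10.0 * moment_gap ** 2     # penalty enforces the constraint

def num_grad(f, params, eps=1e-5):
    # Central-difference gradient; an autodiff framework would be used in practice.
    grad = np.zeros_like(params)
    for i in range(len(params)):
        e = np.zeros_like(params)
        e[i] = eps
        grad[i] = (f(params + e) - f(params - e)) / (2.0 * eps)
    return grad

params = 0.1 * rng.normal(size=10)
start = objective(params)
for _ in range(3000):   # gradient *ascent* toward the supremum bound
    params = params + 0.01 * num_grad(objective, params)
print(start, "->", objective(params))  # the bound estimate increases
```

The same loop with the infimum (gradient descent) gives the greatest lower bound; in the paper the surrogate is a trained deep network and the ascent uses stochastic gradients.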
Related papers
- Embedding generalization within the learning dynamics: An approach based-on sample path large deviation theory [0.0]
We consider an empirical risk perturbation based learning problem that exploits methods from a continuous-time perspective.
We provide an estimate in the small noise limit based on the Freidlin-Wentzell theory of large deviations.
We also present a computational algorithm that solves the corresponding variational problem, leading to optimal point estimates.
arXiv Detail & Related papers (2024-08-04T23:31:35Z)
- Efficient Robust Bayesian Optimization for Arbitrary Uncertain Inputs [13.578262325229161]
We introduce a novel robust Bayesian Optimization algorithm, AIRBO, which can effectively identify a robust optimum that performs consistently well under arbitrary input uncertainty.
Our method directly models the uncertain inputs of arbitrary distributions by empowering the Gaussian Process with the Maximum Mean Discrepancy (MMD) and further accelerates the posterior inference via Nystrom approximation.
A rigorous theoretical regret bound is established under MMD estimation error, and extensive experiments on synthetic functions and real problems demonstrate that our approach can handle various input uncertainties and achieve state-of-the-art performance.
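A minimal sketch of the Maximum Mean Discrepancy quantity that AIRBO builds into the Gaussian Process; the RBF kernel, bandwidth, and sample sizes are illustrative choices, and the paper's Nystrom-accelerated posterior inference is not reproduced here:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # k(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=1.0):
    # Biased sample estimator of squared MMD:
    # MMD^2 = E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')]
    return (rbf_kernel(x, x, gamma).mean()
            - 2.0 * rbf_kernel(x, y, gamma).mean()
            + rbf_kernel(y, y, gamma).mean())

rng = np.random.default_rng(0)
same = mmd2(rng.normal(size=(200, 1)), rng.normal(size=(200, 1)))
diff = mmd2(rng.normal(size=(200, 1)), rng.normal(3.0, 1.0, size=(200, 1)))
print(same, diff)  # near zero for matching distributions, large otherwise
```

MMD compares whole distributions rather than point values, which is what lets the method model arbitrarily distributed uncertain inputs.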
arXiv Detail & Related papers (2023-10-31T03:29:31Z)
- Scalable Bayesian Meta-Learning through Generalized Implicit Gradients [64.21628447579772]
The implicit Bayesian meta-learning (iBaML) method not only broadens the scope of learnable priors, but also quantifies the associated uncertainty.
Analytical error bounds are established to demonstrate the precision and efficiency of the generalized implicit gradient over the explicit one.
arXiv Detail & Related papers (2023-03-31T02:10:30Z)
- Log Barriers for Safe Black-box Optimization with Application to Safe Reinforcement Learning [72.97229770329214]
We introduce a general approach for solving high-dimensional non-linear optimization problems in which maintaining safety during learning is crucial.
Our approach called LBSGD is based on applying a logarithmic barrier approximation with a carefully chosen step size.
We demonstrate the effectiveness of our approach on minimizing violations in policy tasks in safe reinforcement learning.
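The log-barrier idea can be illustrated on a one-dimensional toy problem: the constrained objective is replaced by f(x) - eta*log(-h(x)), and the step is shrunk near the boundary so iterates stay strictly feasible. The step-size rule below is a simplified stand-in for the carefully chosen one in LBSGD, and the test problem is invented for illustration:

```python
def lbsgd_minimize(f_grad, h, h_grad, x0, eta=0.1, steps=500):
    """Toy log-barrier descent: minimize f subject to h(x) <= 0 via the
    barrier objective f(x) - eta * log(-h(x))."""
    x = x0
    for _ in range(steps):
        # Gradient of the barrier objective: f'(x) + eta * h'(x) / (-h(x))
        g = f_grad(x) + eta * h_grad(x) / (-h(x))
        # Conservative step: never move more than halfway to the boundary,
        # so every iterate remains strictly feasible.
        step = min(0.02, 0.5 * (-h(x)) / (abs(h_grad(x)) * abs(g) + 1e-12))
        x = x - step * g
    return x

# minimize (x - 2)^2 subject to x - 1 <= 0; the constrained optimum is x = 1
x_star = lbsgd_minimize(
    f_grad=lambda x: 2.0 * (x - 2.0),
    h=lambda x: x - 1.0,
    h_grad=lambda x: 1.0,
    x0=0.0,
)
print(x_star)  # close to 1, slightly inside the feasible set due to the barrier
```

The barrier term diverges as h(x) approaches 0 from below, which is what keeps learning safe throughout, not just at convergence.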
arXiv Detail & Related papers (2022-07-21T11:14:47Z)
- High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with a dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
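Gradient clipping, the shared ingredient of the two methods, can be sketched as follows; the quadratic objective, Student-t noise model, and constants are illustrative choices, not the paper's stepsize rules:

```python
import numpy as np

def clipped_sgd(grad_fn, x0, clip=1.0, lr=0.05, steps=2000, seed=0):
    """Minimal sketch of SGD with gradient clipping: any stochastic
    gradient whose norm exceeds `clip` is rescaled onto the clip ball,
    which tames heavy-tailed noise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        # Heavy-tailed (Student-t) noise corrupts the true gradient.
        g = grad_fn(x) + rng.standard_t(df=2.1, size=x.shape)
        norm = np.linalg.norm(g)
        if norm > clip:
            g = g * (clip / norm)
        x = x - lr * g
    return x

# minimize ||x - 1||^2 under heavy-tailed gradient noise
x = clipped_sgd(lambda x: 2.0 * (x - 1.0), x0=np.zeros(3))
print(x)  # stays near [1, 1, 1] despite the heavy-tailed noise
```

Without clipping, a single extreme noise draw can throw the iterate arbitrarily far; with clipping, each step's displacement is bounded by lr*clip, which is what makes high-probability guarantees possible.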
arXiv Detail & Related papers (2021-06-10T17:54:21Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries [33.53470955144013]
In this paper, we determine optimal lower bounds on the cross-entropy loss in the presence of test-time adversaries, along with the corresponding optimal classification outputs.
We also propose and provide a proof of correctness for a bespoke algorithm to compute this lower bound efficiently.
We use our lower bounds as a diagnostic tool to determine the effectiveness of current robust training methods and find a gap from optimality at larger budgets.
arXiv Detail & Related papers (2021-04-16T21:41:28Z)
- Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
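The problem setting can be illustrated with a toy surrogate-based optimizer: fit a model to the fixed query set, then maximize it within the region covered by data. A quadratic least-squares fit stands in for the paper's high-capacity neural model, and the normalized-maximum-likelihood machinery itself is not reproduced:

```python
import numpy as np

# Offline data: queries of an unknown function at a fixed set of points.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=40)
ys = -(xs - 0.3) ** 2 + 0.01 * rng.normal(size=40)  # hidden optimum at x = 0.3

# Fit a simple surrogate and maximize it on a grid restricted to the
# data's range, so the optimizer cannot exploit wild extrapolation.
coeffs = np.polyfit(xs, ys, deg=2)
grid = np.linspace(xs.min(), xs.max(), 1001)
x_best = grid[np.argmax(np.polyval(coeffs, grid))]
print(x_best)
```

The hard part, which NML-based estimation addresses, is that an expressive surrogate can assign spuriously high values outside the data; the crude range restriction above is only a placeholder for that principled conservatism.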
arXiv Detail & Related papers (2021-02-16T06:04:27Z)
- The Benefit of the Doubt: Uncertainty Aware Sensing for Edge Computing Platforms [10.86298377998459]
We propose an efficient framework for predictive uncertainty estimation in NNs deployed on embedded edge systems.
The framework is built from the ground up to provide predictive uncertainty based only on one forward pass.
Our approach not only obtains robust and accurate uncertainty estimations but also outperforms state-of-the-art methods in terms of systems performance.
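As a generic illustration of single-forward-pass uncertainty (not the cited framework's specific estimator), the entropy of the softmax output already separates confident from ambiguous predictions without any extra passes:

```python
import numpy as np

def predictive_entropy(logits):
    # Uncertainty from one forward pass: entropy of the softmax
    # distribution over classes (high entropy = uncertain prediction).
    z = logits - logits.max()          # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -(p * np.log(p + 1e-12)).sum()

confident = predictive_entropy(np.array([8.0, 0.5, 0.2]))  # peaked logits
uncertain = predictive_entropy(np.array([1.1, 1.0, 0.9]))  # near-uniform logits
print(confident, uncertain)
```

Single-pass signals like this matter on embedded edge systems because ensemble or Monte Carlo approaches multiply latency and energy cost by the number of passes.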
arXiv Detail & Related papers (2021-02-11T11:44:32Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.