Distance Preserving Machine Learning for Uncertainty Aware Accelerator
Capacitance Predictions
- URL: http://arxiv.org/abs/2307.02367v1
- Date: Wed, 5 Jul 2023 15:32:39 GMT
- Title: Distance Preserving Machine Learning for Uncertainty Aware Accelerator
Capacitance Predictions
- Authors: Steven Goldenberg, Malachi Schram, Kishansingh Rajput, Thomas Britton,
Chris Pappas, Dan Lu, Jared Walden, Majdi I. Radaideh, Sarah Cousineau,
Sudarshan Harave
- Abstract summary: Deep neural networks and Gaussian process approximation techniques have shown promising results, but dimensionality reduction through standard deep neural network layers is not guaranteed to maintain the distance information necessary for Gaussian process models.
We build on previous work by comparing the use of the singular value decomposition against a spectral-normalized dense layer as a feature extractor for a deep neural Gaussian process approximation model.
Our model shows improved distance preservation and predicts in-distribution capacitance values with less than 1% error.
- Score: 1.1776336798216411
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing accurate uncertainty estimations is essential for producing
reliable machine learning models, especially in safety-critical applications
such as accelerator systems. Gaussian process models are generally regarded as
the gold standard method for this task, but they can struggle with large,
high-dimensional datasets. Combining deep neural networks with Gaussian process
approximation techniques has shown promising results, but dimensionality
reduction through standard deep neural network layers is not guaranteed to
maintain the distance information necessary for Gaussian process models. We
build on previous work by comparing the use of the singular value decomposition
against a spectral-normalized dense layer as a feature extractor for a deep
neural Gaussian process approximation model and apply it to a capacitance
prediction problem for the High Voltage Converter Modulators in the Oak Ridge
Spallation Neutron Source. Our model shows improved distance preservation and
predicts in-distribution capacitance values with less than 1% error.
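
To make the abstract's comparison concrete, below is a minimal NumPy sketch (not the authors' code; the toy data, feature dimension, and distortion metric are illustrative assumptions) of why a truncated SVD can preserve pairwise distances while an unconstrained dense layer need not: the SVD projects onto orthonormal directions, so distances shrink only by the variance that is discarded, whereas an arbitrary weight matrix can stretch or collapse them freely.

```python
# Hedged sketch (not from the paper): compare distance preservation of a
# truncated-SVD projection against an unconstrained dense layer.
import numpy as np

rng = np.random.default_rng(0)
k = 8                                       # illustrative feature dimension

# Toy inputs with intrinsic dimension k, so a rank-k projection CAN be
# near-isometric: low-rank signal plus a little noise.
X = rng.normal(size=(500, k)) @ rng.normal(size=(k, 64))
X += 0.01 * rng.normal(size=(500, 64))

# Extractor 1: truncated SVD. The top-k right singular vectors are
# orthonormal, so distances within the retained subspace are exact.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z_svd = X @ Vt[:k].T

# Extractor 2: a dense layer with arbitrary (here random) weights.
# Nothing constrains how much it stretches or collapses distances.
W = rng.normal(size=(64, k))
Z_dense = X @ W

def distance_distortion(X, Z, n_pairs=2000):
    """Mean |d_Z/d_X - 1| over random pairs; 0 = perfectly preserved."""
    i, j = rng.integers(0, len(X), size=(2, n_pairs))
    m = i != j
    d_x = np.linalg.norm(X[i[m]] - X[j[m]], axis=1)
    d_z = np.linalg.norm(Z[i[m]] - Z[j[m]], axis=1)
    return float(np.mean(np.abs(d_z / d_x - 1.0)))

print("SVD distortion:        ", distance_distortion(X, Z_svd))    # ~0
print("dense-layer distortion:", distance_distortion(X, Z_dense))  # large
```

In a deep neural Gaussian process approximation, keeping this distortion small is what lets the GP's distance-based kernel produce meaningful uncertainty estimates for out-of-distribution inputs.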
Related papers
- Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks [0.5827521884806072]
Large neural networks trained on large datasets have become the dominant paradigm in machine learning.
This thesis develops scalable methods to equip neural networks with model uncertainty.
arXiv Detail & Related papers (2024-04-29T23:38:58Z)
- Parallel and Limited Data Voice Conversion Using Stochastic Variational Deep Kernel Learning [2.5782420501870296]
This paper proposes a voice conversion method that works with limited data.
It is based on stochastic variational deep kernel learning (SVDKL), which makes it possible to estimate non-smooth and more complex functions.
arXiv Detail & Related papers (2023-09-08T16:32:47Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- A hybrid data driven-physics constrained Gaussian process regression framework with deep kernel for uncertainty quantification [21.972192114861873]
We propose a hybrid data driven-physics constrained Gaussian process regression framework.
We encode the physics knowledge with the Boltzmann-Gibbs distribution and derive our model through a maximum likelihood (ML) approach.
The proposed model achieves good results on high-dimensional problems and correctly propagates uncertainty, even with very limited labelled data.
arXiv Detail & Related papers (2022-05-13T07:53:49Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness [24.473250414880454]
We study principled approaches to high-quality uncertainty estimation that require only a single deep neural network (DNN).
By formalizing uncertainty quantification as a minimax learning problem, we first identify input distance awareness, i.e., the model's ability to quantify the distance of a testing example from the training data in the input space, as a necessary condition for high-quality uncertainty estimation.
We then propose the Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs; a minimal sketch of the spectral-normalization idea appears after this list.
arXiv Detail & Related papers (2020-06-17T19:18:22Z)
- Semi-supervised deep learning for high-dimensional uncertainty quantification [6.910275451003041]
This paper presents a semi-supervised learning framework for dimension reduction and reliability analysis.
An autoencoder is first adopted for mapping the high-dimensional space into a low-dimensional latent space.
A deep feedforward neural network is utilized to learn the mapping relationship and reconstruct the latent space.
arXiv Detail & Related papers (2020-06-01T15:15:42Z)
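
For the SNGP entry above, here is a minimal sketch of the spectral-normalization idea. This is an illustration, not the paper's implementation: the 0.95 norm bound, the 20 power-iteration steps, and the single ReLU layer are assumptions, and SNGP additionally uses residual connections and a random-feature Gaussian process output layer. Rescaling a weight matrix by its largest singular value caps the layer's Lipschitz constant, so feature-space distances cannot expand arbitrarily.

```python
# Hedged sketch of spectral normalization (in the spirit of SNGP, not its
# exact recipe): rescale W so its largest singular value is at most 0.95,
# which bounds how much the layer can expand any pairwise distance.
import numpy as np

rng = np.random.default_rng(1)

def spectral_normalize(W, norm_bound=0.95, n_iter=20):
    """Estimate the top singular value by power iteration and rescale W."""
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                      # approx. largest singular value
    return W * min(1.0, norm_bound / sigma)

W_sn = spectral_normalize(rng.normal(size=(32, 32)))

# ReLU is 1-Lipschitz, so the normalized layer expands no distance
# beyond the norm bound (up to power-iteration accuracy).
relu = lambda z: np.maximum(z, 0.0)
x1, x2 = rng.normal(size=(2, 32))
d_in = np.linalg.norm(x1 - x2)
d_out = np.linalg.norm(relu(W_sn @ x1) - relu(W_sn @ x2))
print(d_out <= 0.95 * d_in)                # True: expansion is bounded
```

Stacking such layers keeps the end-to-end feature map's distances controlled, which is the property the main paper measures when comparing the SVD against a spectral-normalized dense layer as a feature extractor.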