PCENet: High Dimensional Surrogate Modeling for Learning Uncertainty
- URL: http://arxiv.org/abs/2202.05063v2
- Date: Fri, 11 Feb 2022 09:12:39 GMT
- Title: PCENet: High Dimensional Surrogate Modeling for Learning Uncertainty
- Authors: Paz Fink Shustin, Shashanka Ubaru, Vasileios Kalantzis, Lior Horesh,
Haim Avron
- Abstract summary: We present a novel surrogate model for representation learning and uncertainty quantification.
The proposed model combines a neural network approach for dimensionality reduction of the (potentially high-dimensional) data, with a surrogate model method for learning the data distribution.
Our model enables us to (a) learn a representation of the data, (b) estimate uncertainty in the high-dimensional data system, and (c) match high-order moments of the output distribution.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning data representations under uncertainty is an important task that
emerges in numerous machine learning applications. However, uncertainty
quantification (UQ) techniques are computationally intensive and become
prohibitively expensive for high-dimensional data. In this paper, we present a
novel surrogate model for representation learning and uncertainty
quantification, which aims to deal with data of moderate to high dimensions.
The proposed model combines a neural network approach for dimensionality
reduction of the (potentially high-dimensional) data, with a surrogate model
method for learning the data distribution. We first employ a variational
autoencoder (VAE) to learn a low-dimensional representation of the data
distribution. We then propose to harness the polynomial chaos expansion (PCE)
formulation to map this distribution to the output target. The coefficients of
PCE are learned from the distribution representation of the training data using
a maximum mean discrepancy (MMD) approach. Our model enables us to (a) learn a
representation of the data, (b) estimate uncertainty in the high-dimensional
data system, and (c) match high-order moments of the output distribution, all
without any prior statistical assumptions on the data. Numerical experimental
results are presented to illustrate the performance of the proposed method.
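To make the pipeline concrete, here is a minimal end-to-end sketch of the three ingredients the abstract names: a VAE for the low-dimensional representation, a Hermite-polynomial PCE mapping the latent code to the target, and an MMD loss for fitting the PCE coefficients. The layer sizes, polynomial degree, kernel bandwidth, and training loop are illustrative assumptions, not the authors' implementation.

```python
# Hedged PCENet-style sketch: VAE -> Hermite PCE -> MMD fit.
# All settings below are illustrative assumptions.
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Maps d_in-dimensional data to a d_latent-dimensional stochastic code."""
    def __init__(self, d_in, d_latent, width=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d_in, width), nn.ReLU())
        self.mu = nn.Linear(width, d_latent)
        self.logvar = nn.Linear(width, d_latent)
        self.dec = nn.Sequential(nn.Linear(d_latent, width), nn.ReLU(),
                                 nn.Linear(width, d_in))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar, z

def hermite_features(z, degree):
    """Probabilists' Hermite polynomials He_0..He_degree, per coordinate.
    A full PCE would use a tensor-product basis over the latent coordinates."""
    feats = [torch.ones_like(z), z]
    for k in range(1, degree):
        feats.append(z * feats[-1] - k * feats[-2])  # He_{k+1} = z*He_k - k*He_{k-1}
    return torch.cat(feats, dim=1)

def mmd2(a, b, bandwidth=1.0):
    """Biased squared maximum mean discrepancy with a Gaussian kernel."""
    def k(u, v):
        d2 = (u * u).sum(1, keepdim=True) - 2 * u @ v.T + (v * v).sum(1)
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

# Toy end-to-end fit: reconstruct x, regularize the code, and push the
# distribution of PCE outputs toward the observed targets y via MMD.
d_in, d_latent, degree, d_out = 100, 4, 3, 1
x, y = torch.randn(256, d_in), torch.randn(256, d_out)       # placeholder data
vae = VAE(d_in, d_latent)
pce = nn.Linear((degree + 1) * d_latent, d_out, bias=False)  # PCE coefficients
opt = torch.optim.Adam(list(vae.parameters()) + list(pce.parameters()), lr=1e-3)
for _ in range(200):
    x_hat, mu, logvar, z = vae(x)
    recon = ((x_hat - x) ** 2).mean()
    kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).mean()
    loss = recon + kl + mmd2(pce(hermite_features(z, degree)), y)
    opt.zero_grad(); loss.backward(); opt.step()
```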
Related papers
- Bayesian Estimation and Tuning-Free Rank Detection for Probability Mass Function Tensors [17.640500920466984]
This paper presents a novel framework for estimating the joint PMF and automatically inferring its rank from observed data.
We derive a deterministic solution based on variational inference (VI) to approximate the posterior distributions of various model parameters. Additionally, we develop a scalable version of the VI-based approach by leveraging stochastic variational inference (SVI).
Experiments involving both synthetic data and real movie recommendation data illustrate the advantages of our VI and SVI-based methods in terms of estimation accuracy, automatic rank detection, and computational efficiency.
arXiv Detail & Related papers (2024-10-08T20:07:49Z)
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Proximal Symmetric Non-negative Latent Factor Analysis: A Novel Approach to Highly-Accurate Representation of Undirected Weighted Networks [2.1797442801107056]
Undirected Weighted Networks (UWNs) are commonly found in big-data-related applications.
Existing models fail to capture either their intrinsic symmetry or their low data density.
A Proximal Symmetric Non-negative Latent-factor-analysis model is proposed.
arXiv Detail & Related papers (2023-06-06T13:03:24Z)
- IB-UQ: Information bottleneck based uncertainty quantification for neural function regression and neural operator learning [11.5992081385106]
We propose a novel framework for uncertainty quantification via information bottleneck (IB-UQ) for scientific machine learning tasks.
We implement the bottleneck with a confidence-aware encoder, which encodes inputs into latent representations according to the confidence of the input data.
We also propose a data-augmentation-based information bottleneck objective, which can enhance the quality of the extrapolation uncertainty.
arXiv Detail & Related papers (2023-02-07T05:56:42Z)
- Low-rank statistical finite elements for scalable model-data synthesis [0.8602553195689513]
statFEM acknowledges a priori model misspecification by embedding stochastic forcing within the governing equations.
The method reconstructs the observed data-generating processes with minimal loss of information.
The dense covariance matrix at the method's core makes it expensive at scale; this article overcomes that hurdle by embedding a low-rank approximation of the matrix.
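As a rough illustration of the low-rank idea in this entry (the rank r, noise level, and full eigendecomposition are assumptions made for brevity; a scalable implementation would compute only the leading eigenpairs iteratively), keeping the top-r eigenpairs of a dense covariance C lets the Woodbury identity reduce an n-by-n solve to an r-by-r one:

```python
# Toy low-rank covariance solve: C ~= U diag(lam) U^T using the top-r
# eigenpairs, then (sigma2*I + C)^{-1} b via Woodbury with an r x r inverse.
import numpy as np

def lowrank_solve(C, b, r=10, sigma2=1e-2):
    w, V = np.linalg.eigh(C)          # full eigh here only for brevity
    U, lam = V[:, -r:], w[-r:]        # leading r eigenpairs of C
    # (s*I + U L U^T)^{-1} b = b/s - U (s L^{-1} + U^T U)^{-1} (U^T b) / s
    inner = np.linalg.inv(np.diag(sigma2 / lam) + U.T @ U)
    return b / sigma2 - U @ (inner @ (U.T @ b)) / sigma2

rng = np.random.default_rng(0)
C = np.cov(rng.normal(size=(100, 400)))      # dense 100 x 100 covariance
v = lowrank_solve(C, rng.normal(size=100))
```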
arXiv Detail & Related papers (2021-09-10T09:51:43Z)
- Efficient training of lightweight neural networks using Online Self-Acquired Knowledge Distillation [51.66271681532262]
Online Self-Acquired Knowledge Distillation (OSAKD) is proposed, aiming to improve the performance of any deep neural model in an online manner.
We utilize the k-NN non-parametric density estimation technique for estimating the unknown probability distributions of the data samples in the output feature space.
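A minimal sketch of the k-NN density estimate named above (the brute-force distances, the choice of k, and the synthetic data are assumptions for illustration): the density at a query point is k / (n * V_d * r_k^d), where r_k is the distance to the k-th nearest training point and V_d is the volume of the unit ball in d dimensions.

```python
# Brute-force k-NN density estimate: p(x) ~= k / (n * V_d * r_k(x)^d).
import numpy as np
from math import gamma, pi

def knn_density(queries, data, k=10):
    n, d = data.shape
    dists = np.linalg.norm(queries[:, None, :] - data[None, :, :], axis=-1)
    r_k = np.sort(dists, axis=1)[:, k - 1]   # distance to k-th neighbour
    v_d = pi ** (d / 2) / gamma(d / 2 + 1)   # volume of the unit d-ball
    return k / (n * v_d * r_k ** d)

rng = np.random.default_rng(0)
train, test = rng.standard_normal((1000, 2)), rng.standard_normal((5, 2))
print(knn_density(test, train))  # roughly matches the 2-D standard normal pdf
```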
arXiv Detail & Related papers (2021-08-26T14:01:04Z)
- Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation [92.96204497841032]
Causal graphs (CGs) are compact representations of knowledge about the data-generating processes behind the data distributions.
We propose a model-agnostic data augmentation method that allows us to exploit the prior knowledge of the conditional independence (CI) relations.
We experimentally show that the proposed method is effective in improving the prediction accuracy, especially in the small-data regime.
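As a toy instance of how CI relations enable augmentation (the permute-within-strata recipe and the discrete variables here are illustrative assumptions, not necessarily the paper's exact construction): if the causal graph implies X is independent of Y given Z, shuffling X among records that share a Z value produces new samples consistent with that independence.

```python
# Toy CI-based augmentation: permuting X within each stratum of Z preserves
# P(X | Z) and P(Y | Z), so the new samples respect X _||_ Y | Z.
import numpy as np

def augment_ci(x, y, z, seed=0):
    rng = np.random.default_rng(seed)
    x_aug = x.copy()
    for value in np.unique(z):
        idx = np.where(z == value)[0]
        x_aug[idx] = x[idx][rng.permutation(idx.size)]
    return x_aug, y, z   # extra plausible training samples, labels unchanged

z = np.repeat([0, 1], 5)
rng = np.random.default_rng(1)
x = z + rng.integers(0, 2, size=10)   # X depends only on Z
y = 2 * z                             # Y depends only on Z
x_new, y_new, z_new = augment_ci(x, y, z)
```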
arXiv Detail & Related papers (2021-02-27T06:13:59Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
The proposed algorithms offer robustness with little overhead.
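A minimal sketch of one common way to instantiate such distributionally robust training, namely the penalty form of a Wasserstein ball with an inner gradient-ascent maximization (gamma, the step size, and the step count are assumptions, and this is a generic recipe rather than this paper's specific algorithm):

```python
# Penalty-form DRO step: inner ascent finds a worst-case perturbation of the
# batch (transport cost penalized by gamma); outer descent trains on it.
import torch

def dro_step(model, loss_fn, x, y, opt, gamma=1.0, eta=0.1, steps=5):
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):                     # inner maximization
        obj = loss_fn(model(x_adv), y) - gamma * ((x_adv - x) ** 2).sum()
        grad, = torch.autograd.grad(obj, x_adv)
        x_adv = (x_adv + eta * grad).detach().requires_grad_(True)
    opt.zero_grad()
    loss = loss_fn(model(x_adv.detach()), y)   # outer minimization
    loss.backward()
    opt.step()
    return loss.item()

net = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
xb, yb = torch.randn(32, 4), torch.randn(32, 1)
dro_step(net, torch.nn.functional.mse_loss, xb, yb, opt)
```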
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Deep Dimension Reduction for Supervised Representation Learning [51.10448064423656]
We propose a deep dimension reduction approach to learning representations with essential characteristics.
The proposed approach is a nonparametric generalization of the sufficient dimension reduction method.
We show that the estimated deep nonparametric representation is consistent in the sense that its excess risk converges to zero.
arXiv Detail & Related papers (2020-06-10T14:47:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.