Non-Linear Spectral Dimensionality Reduction Under Uncertainty
- URL: http://arxiv.org/abs/2202.04678v1
- Date: Wed, 9 Feb 2022 19:01:33 GMT
- Title: Non-Linear Spectral Dimensionality Reduction Under Uncertainty
- Authors: Firas Laakom, Jenni Raitoharju, Nikolaos Passalis, Alexandros
Iosifidis, and Moncef Gabbouj
- Abstract summary: We propose a new dimensionality reduction framework, called NGEU, which leverages uncertainty information and directly extends several traditional approaches.
We show that the proposed NGEU formulation exhibits a global closed-form solution, and we analyze, based on the Rademacher complexity, how the underlying uncertainties theoretically affect the generalization ability of the framework.
- Score: 107.01839211235583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we consider the problem of non-linear dimensionality reduction
under uncertainty, from both theoretical and algorithmic perspectives. Since
real-world data usually contain measurements with uncertainties and artifacts,
the input space in the proposed framework consists of probability distributions
to model the uncertainties associated with each sample. We propose a new
dimensionality reduction framework, called NGEU, which leverages uncertainty
information and directly extends several traditional approaches, e.g., KPCA,
MDA/KMFA, to take the probability distributions as inputs instead of the
original data. We show that the proposed NGEU formulation exhibits a global
closed-form solution, and we analyze, based on the Rademacher complexity, how
the underlying uncertainties theoretically affect the generalization ability of
the framework. Empirical results on different datasets show the effectiveness
of the proposed framework.
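As a rough, hypothetical illustration of the general idea (not the paper's exact NGEU formulation), the sketch below models each sample as an isotropic Gaussian, builds the closed-form expected RBF kernel between those Gaussians, and feeds it into a standard kernel-PCA eigendecomposition; the Gaussian noise model, bandwidth, and all function names here are assumptions made purely for illustration.

```python
import numpy as np

def expected_rbf_kernel(mu, var, gamma2=1.0):
    """Closed-form E[k(x_i, x_j)] for an RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * gamma2))
    when each sample is modelled as an isotropic Gaussian N(mu_i, var_i * I).
    This is an illustrative uncertainty-aware kernel, not the NGEU kernel itself."""
    n, d = mu.shape
    sq_dists = np.sum((mu[:, None, :] - mu[None, :, :]) ** 2, axis=-1)  # ||mu_i - mu_j||^2
    var_sum = var[:, None] + var[None, :]                               # var_i + var_j
    scale = (1.0 + var_sum / gamma2) ** (-d / 2.0)
    return scale * np.exp(-sq_dists / (2.0 * (gamma2 + var_sum)))

def kernel_pca(K, n_components=2):
    """Standard kernel PCA on a precomputed kernel matrix (a global closed-form solution)."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one           # double-center the Gram matrix
    eigval, eigvec = np.linalg.eigh(Kc)                  # eigenvalues in ascending order
    idx = np.argsort(eigval)[::-1][:n_components]        # keep the largest components
    eigval, eigvec = eigval[idx], eigvec[:, idx]
    alphas = eigvec / np.sqrt(np.maximum(eigval, 1e-12)) # normalized expansion coefficients
    return Kc @ alphas                                   # low-dimensional embedding of the training points

# Toy usage: noisy 2-D measurements with per-sample uncertainty estimates.
rng = np.random.default_rng(0)
mu = rng.normal(size=(100, 2))            # measurement means
var = rng.uniform(0.01, 0.5, size=100)    # per-sample isotropic variances (the "uncertainty")
K = expected_rbf_kernel(mu, var, gamma2=1.0)
Z = kernel_pca(K, n_components=2)
print(Z.shape)  # (100, 2)
```

When every variance is zero, the expected kernel reduces to the ordinary RBF Gram matrix, so the uncertainty-free case recovers plain KPCA.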
Related papers
- Assumption-Lean Post-Integrated Inference with Negative Control Outcomes [0.0]
We introduce a robust post-integrated inference (PII) method that adjusts for latent heterogeneity using negative control outcomes.
Our method extends to projected direct effect estimands, accounting for hidden mediators, confounders, and moderators.
The proposed doubly robust estimators are consistent and efficient under minimal assumptions and potential misspecification.
arXiv Detail & Related papers (2024-10-07T12:52:38Z)
- Linear Opinion Pooling for Uncertainty Quantification on Graphs [21.602569813024]
We propose a novel approach that represents (epistemic) uncertainty in terms of mixtures of Dirichlet distributions.
The effectiveness of this approach is demonstrated in a series of experiments on a variety of graph-structured datasets; a minimal sketch of the Dirichlet pooling step is given after this list.
arXiv Detail & Related papers (2024-06-06T13:10:37Z)
- Domain Generalization with Small Data [27.040070085669086]
We learn a domain-invariant representation within a probabilistic framework by mapping each data point to a probabilistic embedding.
Our proposed method combines a measure of the distribution over distributions (i.e., the global perspective alignment) with distribution-based contrastive semantic alignment.
arXiv Detail & Related papers (2024-02-09T02:59:08Z)
- Learning Correspondence Uncertainty via Differentiable Nonlinear Least Squares [47.83169780113135]
We propose a differentiable nonlinear least squares framework to account for uncertainty in relative pose estimation from feature correspondences.
We evaluate our approach on synthetic data as well as on the KITTI and EuRoC real-world datasets.
arXiv Detail & Related papers (2023-05-16T15:21:09Z)
- Excess risk analysis for epistemic uncertainty with application to variational inference [110.4676591819618]
We present a novel EU analysis in the frequentist setting, where data is generated from an unknown distribution.
We show a relation between the generalization ability and the widely used EU measurements, such as the variance and entropy of the predictive distribution.
We propose a new variational inference method that directly controls the prediction and EU evaluation performance, based on PAC-Bayesian theory.
arXiv Detail & Related papers (2022-06-02T12:12:24Z)
- A General Framework for quantifying Aleatoric and Epistemic uncertainty in Graph Neural Networks [0.29494468099506893]
Graph Neural Networks (GNN) provide a powerful framework that elegantly integrates Graph theory with Machine learning.
We consider the problem of quantifying the uncertainty in predictions of GNN stemming from modeling errors and measurement uncertainty.
We propose a unified approach to treat both sources of uncertainty in a Bayesian framework.
arXiv Detail & Related papers (2022-05-20T05:25:40Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution; a minimal sketch of this estimator is given after this list.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- BayesIMP: Uncertainty Quantification for Causal Data Fusion [52.184885680729224]
We study the causal data fusion problem, where datasets pertaining to multiple causal graphs are combined to estimate the average treatment effect of a target variable.
We introduce a framework which combines ideas from probabilistic integration and kernel mean embeddings to represent interventional distributions in the reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-06-07T10:14:18Z)
- Accounting for Unobserved Confounding in Domain Generalization [107.0464488046289]
This paper investigates the problem of learning robust, generalizable prediction models from a combination of datasets.
Part of the challenge of learning robust models lies in the influence of unobserved confounders.
We demonstrate the empirical performance of our approach on healthcare data from different modalities.
arXiv Detail & Related papers (2020-07-21T08:18:06Z)
- Towards a Kernel based Uncertainty Decomposition Framework for Data and Models [20.348825818435767]
This paper introduces a new framework for quantifying predictive uncertainty for both data and models.
We apply this framework as a surrogate tool for predictive uncertainty quantification of point-prediction neural network models.
arXiv Detail & Related papers (2020-01-30T18:35:36Z)
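For the Linear Opinion Pooling entry above, which represents epistemic uncertainty as a mixture of Dirichlet distributions, the sketch below shows only the generic pooling and entropy-based uncertainty split; the paper's graph-based construction of the individual Dirichlet opinions is not reproduced, and the weights and parameters here are illustrative assumptions.

```python
import numpy as np
from scipy.special import digamma

def pool_dirichlets(alphas, weights):
    """Linear opinion pooling of Dirichlet opinions: q(theta) = sum_k w_k Dir(theta; alpha_k).
    Returns the pooled predictive class probabilities and an (aleatoric, epistemic) split."""
    alphas = np.asarray(alphas, dtype=float)         # shape (K, C): K opinions over C classes
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()

    alpha0 = alphas.sum(axis=1, keepdims=True)       # Dirichlet precisions
    means = alphas / alpha0                          # per-opinion expected class probabilities
    predictive = weights @ means                     # mixture predictive distribution

    # Expected entropy of theta under each Dirichlet (closed form), averaged over the mixture.
    exp_entropy_k = np.sum(means * (digamma(alpha0 + 1) - digamma(alphas + 1)), axis=1)
    aleatoric = weights @ exp_entropy_k
    total = -np.sum(predictive * np.log(predictive + 1e-12))  # entropy of the pooled prediction
    epistemic = max(total - aleatoric, 0.0)          # mutual-information-style epistemic term
    return predictive, aleatoric, epistemic

# Toy usage: three Dirichlet "opinions" over four classes, e.g. from a node's neighbours.
alphas = [[5.0, 1.0, 1.0, 1.0], [1.0, 4.0, 2.0, 1.0], [2.0, 2.0, 2.0, 2.0]]
pred, alea, epis = pool_dirichlets(alphas, weights=[0.5, 0.3, 0.2])
print(pred, alea, epis)
```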
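For the NUQ entry above, the fragment below sketches only the underlying Nadaraya-Watson estimate of the conditional label distribution together with two naive uncertainty scores (predictive entropy and average kernel weight); the paper's actual uncertainty measures and asymptotic analysis are not reproduced, and the kernel bandwidth and toy data are assumptions.

```python
import numpy as np

def nadaraya_watson_label_dist(x_query, X_train, y_train, n_classes, bandwidth=0.5):
    """Nadaraya-Watson estimate of p(y | x): a kernel-weighted average of one-hot labels."""
    diffs = X_train - x_query                                          # (n, d)
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2.0 * bandwidth ** 2))   # Gaussian kernel weights
    onehot = np.eye(n_classes)[y_train]                                # (n, C) one-hot labels
    probs = w @ onehot / (w.sum() + 1e-12)                             # kernel-weighted label frequencies
    entropy = -np.sum(probs * np.log(probs + 1e-12))                   # simple predictive uncertainty
    density = w.mean()                                                 # low value -> little nearby evidence
    return probs, entropy, density

# Toy usage: two Gaussian blobs with different labels, queried between the blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
probs, ent, dens = nadaraya_watson_label_dist(np.array([1.0, 1.0]), X, y, n_classes=2)
print(probs, ent, dens)
```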
This list is automatically generated from the titles and abstracts of the papers on this site.