Parzen Window Approximation on Riemannian Manifold
- URL: http://arxiv.org/abs/2012.14661v1
- Date: Tue, 29 Dec 2020 08:52:31 GMT
- Title: Parzen Window Approximation on Riemannian Manifold
- Authors: Abhishek and Shekhar Verma
- Abstract summary: In graph-motivated learning, label propagation largely depends on data affinity represented as edges between connected data points.
An affinity metric that takes the irregular sampling effect into consideration to yield accurate label propagation is proposed.
- Score: 5.600982367387833
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In graph-motivated learning, label propagation largely depends on data
affinity represented as edges between connected data points. The affinity
assignment implicitly assumes an even distribution of data on the manifold. This
assumption may not hold and may lead to inaccurate metric assignment due to
drift towards high-density regions. The drift-affected, heat kernel-based
affinity with a globally fixed Parzen window either discards genuine neighbors
or forces distant data points to become members of the neighborhood. This
yields a biased affinity matrix. In this paper, the bias due to uneven data
sampling on the Riemannian manifold is addressed by a variable Parzen window
determined as a function of neighborhood size, ambient dimension, flatness
range, etc. Additionally, an affinity adjustment is applied that offsets the
effect of the uneven sampling responsible for the bias. The result is an
affinity metric that accounts for the irregular sampling effect and yields
accurate label propagation. Extensive experiments on synthetic and real-world
data sets confirm that the proposed method increases classification accuracy
significantly and outperforms existing Parzen window estimators in graph
Laplacian manifold regularization methods.
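For intuition, the variable-window idea can be sketched in a few lines of NumPy. The sketch below is not the authors' estimator: their window is derived from neighborhood size, ambient dimension, flatness range, etc., whereas here the per-point bandwidth sigma_i is simply the distance to the k-th nearest neighbor (a common locally adaptive heuristic), and labels are then propagated with a standard graph method (the local-and-global-consistency iteration of Zhou et al., 2004). The function names and the defaults k and alpha are illustrative choices.

import numpy as np

def variable_parzen_affinity(X, k=7):
    # Heat-kernel affinity with a locally adaptive (variable) Parzen window.
    # ASSUMPTION: sigma_i = distance from x_i to its k-th nearest neighbor,
    # a common self-tuning heuristic; the paper instead derives the window
    # from neighborhood size, ambient dimension, flatness range, etc.
    sq = np.sum(X**2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    d = np.sqrt(d2)
    sigma = np.sort(d, axis=1)[:, k]  # column 0 is the point itself
    # Symmetric variable-bandwidth heat kernel:
    # W_ij = exp(-d_ij^2 / (sigma_i * sigma_j)).
    W = np.exp(-d2 / (sigma[:, None] * sigma[None, :] + 1e-12))
    np.fill_diagonal(W, 0.0)
    return W

def propagate_labels(W, y_labeled, alpha=0.99, iters=200):
    # Standard graph label propagation (Zhou et al., 2004) on affinity W.
    # y_labeled: integer labels for the first len(y_labeled) points;
    # the remaining points are unlabeled.
    n, m = W.shape[0], len(y_labeled)
    Y = np.zeros((n, int(max(y_labeled)) + 1))
    Y[np.arange(m), y_labeled] = 1.0
    # Symmetric normalization S = D^{-1/2} W D^{-1/2}.
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    F = Y.copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1.0 - alpha) * Y  # diffuse, then pull toward seeds
    return F.argmax(axis=1)

Swapping the k-NN bandwidth rule for the paper's window function would change only the line computing sigma; the downstream propagation is untouched, which is why the affinity metric is the natural place to correct sampling bias.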
Related papers
- Graph Out-of-Distribution Generalization with Controllable Data Augmentation [51.17476258673232]
Graph Neural Networks (GNNs) have demonstrated extraordinary performance in classifying graph properties.
Due to the selection bias of training and testing data, distribution deviation is widespread.
We propose OOD calibration to measure the distribution deviation of virtual samples.
arXiv Detail & Related papers (2023-08-16T13:10:27Z)
- Manifold Learning with Sparse Regularised Optimal Transport [0.17205106391379024]
Real-world datasets are subject to noisy observations and sampling, so that distilling information about the underlying manifold is a major challenge.
We propose a method for manifold learning that utilises a symmetric version of optimal transport with a quadratic regularisation.
We prove that the resulting kernel is consistent with a Laplace-type operator in the continuous limit, establish robustness to heteroskedastic noise and exhibit these results in simulations.
arXiv Detail & Related papers (2023-07-19T08:05:46Z)
- Concrete Score Matching: Generalized Score Matching for Discrete Data [109.12439278055213]
"Concrete score" is a generalization of the (Stein) score for discrete settings.
"Concrete Score Matching" is a framework to learn such scores from samples.
arXiv Detail & Related papers (2022-11-02T00:41:37Z)
- Score Matching for Truncated Density Estimation on a Manifold [6.53626518989653]
Recent methods propose to use score matching for truncated density estimation.
We present a novel extension of truncated score matching to a Riemannian manifold with boundary.
In simulated data experiments, our score matching estimator is able to approximate the true parameter values with a low estimation error.
arXiv Detail & Related papers (2022-06-29T14:14:49Z)
- The Manifold Hypothesis for Gradient-Based Explanations [55.01671263121624]
Gradient-based explanation algorithms provide perceptually-aligned explanations.
We show that the more a feature attribution is aligned with the tangent space of the data, the more perceptually-aligned it tends to be.
We suggest that explanation algorithms should actively strive to align their explanations with the data manifold.
arXiv Detail & Related papers (2022-06-15T08:49:24Z)
- Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
arXiv Detail & Related papers (2021-10-12T22:04:19Z)
- Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples [67.23103682776049]
Recent zeroth order hard-label attacks on image classification models have shown comparable performance to their first-order, gradient-level alternatives.
It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors.
We propose an information-theoretic argument based on a noisy manifold distance oracle, which leaks manifold information through the adversary's gradient estimate.
arXiv Detail & Related papers (2021-03-04T20:53:06Z)
- Learning Disentangled Representations with Latent Variation Predictability [102.4163768995288]
This paper defines the variation predictability of latent disentangled representations.
Within an adversarial generation process, we encourage variation predictability by maximizing the mutual information between latent variations and corresponding image pairs.
We develop an evaluation metric that does not rely on the ground-truth generative factors to measure the disentanglement of latent representations.
arXiv Detail & Related papers (2020-07-25T08:54:26Z)
- Identification Methods With Arbitrary Interventional Distributions as Inputs [8.185725740857595]
Causal inference quantifies cause-effect relationships by estimating counterfactual parameters from data.
We use Single World Intervention Graphs and a nested factorization of models associated with mixed graphs to give a very simple view of existing identification theory for experimental data.
arXiv Detail & Related papers (2020-04-02T17:27:18Z)