On-Manifold Projected Gradient Descent
- URL: http://arxiv.org/abs/2308.12279v1
- Date: Wed, 23 Aug 2023 17:50:50 GMT
- Title: On-Manifold Projected Gradient Descent
- Authors: Aaron Mahler, Tyrus Berry, Tom Stephens, Harbir Antil, Michael
Merritt, Jeanie Schreiber, Ioannis Kevrekidis
- Abstract summary: This work provides a computable, direct, and mathematically rigorous approximation to the differential geometry of class manifolds for high-dimensional data.
The tools are applied to the setting of neural network image classifiers, where we generate novel, on-manifold data samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work provides a computable, direct, and mathematically rigorous
approximation to the differential geometry of class manifolds for
high-dimensional data, along with nonlinear projections from input space onto
these class manifolds. The tools are applied to the setting of neural network
image classifiers, where we generate novel, on-manifold data samples, and
implement a projected gradient descent algorithm for on-manifold adversarial
training. The susceptibility of neural networks (NNs) to adversarial attack
highlights the brittle nature of NN decision boundaries in input space.
Introducing adversarial examples during training has been shown to reduce the
susceptibility of NNs to adversarial attack; however, it has also been shown to
reduce the accuracy of the classifier if the examples are not valid examples
for that class. Realistic "on-manifold" examples have been previously generated
from class manifolds in the latent space of an autoencoder. Our work explores these
phenomena in a geometric and computational setting that is much closer to the
raw, high-dimensional input space than can be provided by a VAE or other
black-box dimensionality reductions. We employ conformally invariant diffusion maps
(CIDM) to approximate class manifolds in diffusion coordinates, and develop the
Nystr\"{o}m projection to project novel points onto class manifolds in this
setting. On top of the manifold approximation, we leverage the spectral
exterior calculus (SEC) to determine geometric quantities such as tangent
vectors of the manifold. We use these tools to obtain adversarial examples that
reside on a class manifold, yet fool a classifier. These misclassifications
then become explainable in terms of human-understandable manipulations within
the data, by expressing the on-manifold adversary in the semantic basis on the
manifold.
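To make the pipeline concrete, here is a minimal sketch of the three ingredients the abstract names: a diffusion-map approximation of a class manifold, a Nyström-style out-of-sample extension, and an on-manifold projected gradient loop. A plain Gaussian kernel stands in for CIDM, a kernel-weighted barycenter stands in for the paper's Nyström projection back to input space, and every name and parameter (`epsilon`, `eta`, `grad_loss`) is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def diffusion_map(X, epsilon, n_coords):
    """Approximate a class manifold's diffusion coordinates from samples X (n, d).

    A plain Gaussian kernel stands in for the paper's conformally
    invariant diffusion maps (CIDM).
    """
    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-D2 / epsilon)
    d = K.sum(axis=1)
    A = K / np.sqrt(np.outer(d, d))               # symmetric normalization
    vals, vecs = np.linalg.eigh(A)
    top = np.argsort(vals)[::-1][1:n_coords + 1]  # skip the trivial eigenvector
    return vals[top], vecs[:, top]

def nystrom_extend(x, X, vals, vecs, epsilon):
    """Nystrom extension: diffusion coordinates of a novel point x (simplified)."""
    k = np.exp(-np.sum((X - x) ** 2, axis=1) / epsilon)
    k = k / k.sum()                               # crude density normalization
    return (k @ vecs) / vals

def project_to_manifold(x, X, epsilon):
    """Crude preimage of the manifold projection: a kernel-weighted
    barycenter of the class samples, pulling x back toward the manifold."""
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / epsilon)
    return (w / w.sum()) @ X

def on_manifold_pgd(x0, grad_loss, X, epsilon, eta=0.1, steps=20):
    """Projected gradient ascent: climb the classifier loss, then re-project
    each iterate so the adversarial example stays near the class manifold."""
    x = x0.copy()
    for _ in range(steps):
        x = x + eta * grad_loss(x)                # gradient step off-manifold
        x = project_to_manifold(x, X, epsilon)    # snap back onto the manifold
    return x
```

In the paper, the re-projection and tangent information come from CIDM and the spectral exterior calculus (SEC); the barycentric snap-back above is only a stand-in that shows the control flow of on-manifold PGD.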
Related papers
- Scaling Riemannian Diffusion Models [68.52820280448991]
We show that our method enables us to scale to high-dimensional tasks on nontrivial manifolds.
We model QCD densities on $SU(n)$ lattices and contrastively learned embeddings on high-dimensional hyperspheres.
arXiv Detail & Related papers (2023-10-30T21:27:53Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders [45.29194877564103]
This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the approximation power of such networks and derive a bound that depends essentially on the intrinsic dimension of the data manifold rather than on the dimension of the ambient space.
arXiv Detail & Related papers (2022-08-22T19:58:03Z)
- The Manifold Scattering Transform for High-Dimensional Point Cloud Data [16.500568323161563]
We present practical schemes for implementing the manifold scattering transform to datasets arising in naturalistic systems.
We show that our methods are effective for signal classification and manifold classification tasks.
arXiv Detail & Related papers (2022-06-21T02:15:00Z)
- Diagnosing and Fixing Manifold Overfitting in Deep Generative Models [11.82509693248749]
Likelihood-based, or explicit, deep generative models use neural networks to construct flexible high-dimensional densities.
We show that observed data lies on a low-dimensional manifold embedded in high-dimensional ambient space.
We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation.
arXiv Detail & Related papers (2022-04-14T18:00:03Z)
- A singular Riemannian geometry approach to Deep Neural Networks II. Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural network maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z)
- Hard-label Manifolds: Unexpected Advantages of Query Efficiency for Finding On-manifold Adversarial Examples [67.23103682776049]
Recent zeroth order hard-label attacks on image classification models have shown comparable performance to their first-order, gradient-level alternatives.
It was recently shown in the gradient-level setting that regular adversarial examples leave the data manifold, while their on-manifold counterparts are in fact generalization errors.
We propose an information-theoretic argument based on a noisy manifold distance oracle, which leaks manifold information through the adversary's gradient estimate.
arXiv Detail & Related papers (2021-03-04T20:53:06Z)
- Manifold Learning via Manifold Deflation [105.7418091051558]
Dimensionality reduction methods provide a valuable means to visualize and interpret high-dimensional data.
However, many popular methods can fail dramatically, even on simple two-dimensional manifolds.
This paper presents an embedding method built on a novel, incremental tangent space estimator that incorporates global structure as coordinates.
Empirically, we show our algorithm recovers novel and interesting embeddings on real-world and synthetic datasets.
arXiv Detail & Related papers (2020-07-07T10:04:28Z)
- Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification; a minimal sketch of the propagation step appears after this list.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
arXiv Detail & Related papers (2020-03-09T13:51:09Z)
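As noted in the Embedding Propagation entry above, here is a minimal sketch of the propagation step: embeddings are smoothed by solving a graph-regularized linear system over their own similarity graph. The Gaussian bandwidth and the smoothing weight `alpha` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def embedding_propagation(Z, alpha=0.5):
    """Smooth a batch of embeddings Z (n, d) over their own similarity graph.

    Builds a symmetrically normalized adjacency L from pairwise distances and
    returns (I - alpha * L)^{-1} @ Z, a graph-regularized smoothing.
    Bandwidth and alpha are illustrative, not the paper's settings.
    """
    D2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    A = np.exp(-D2 / (D2.mean() + 1e-8))  # Gaussian similarities (assumed bandwidth)
    np.fill_diagonal(A, 0.0)              # no self-loops
    d = A.sum(axis=1)
    L = A / np.sqrt(np.outer(d, d))       # symmetric normalization
    P = np.linalg.inv(np.eye(len(Z)) - alpha * L)
    return P @ Z
```

Because the normalized adjacency has spectral radius at most 1, the inverse exists for any alpha < 1; larger alpha spreads information further along the embedding manifold.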
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.