Diffeomorphic Counterfactuals with Generative Models
- URL: http://arxiv.org/abs/2206.05075v1
- Date: Fri, 10 Jun 2022 13:14:21 GMT
- Title: Diffeomorphic Counterfactuals with Generative Models
- Authors: Ann-Kathrin Dombrowski, Jan E. Gerken, Klaus-Robert Müller, Pan Kessel
- Abstract summary: We propose a simple but effective method to generate such counterfactuals.
More specifically, we apply a suitable diffeomorphic coordinate transformation and then perform gradient ascent in these coordinates to find counterfactuals that are classified with high confidence as a specified target class.
- Score: 2.9822184411723645
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Counterfactuals can explain classification decisions of neural networks in a
human interpretable way. We propose a simple but effective method to generate
such counterfactuals. More specifically, we apply a suitable diffeomorphic
coordinate transformation and then perform gradient ascent in these coordinates
to find counterfactuals that are classified with high confidence as a
specified target class. We propose two methods to leverage generative models to
construct such suitable coordinate systems that are either exactly or
approximately diffeomorphic. We analyze the generation process theoretically
using Riemannian differential geometry and validate the quality of the
generated counterfactuals using various qualitative and quantitative measures.
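As a concrete illustration of the procedure described above, the sketch below performs gradient ascent in the base coordinates of an exactly invertible generative model (a normalizing flow). This is a minimal reading of the method, not the authors' code: flow.forward, flow.inverse, and clf are assumed interfaces.

```python
import torch

def diffeomorphic_counterfactual(x, target, flow, clf, steps=500, lr=1e-2):
    """Gradient ascent on the target-class log-probability, carried out in
    the coordinates z = flow.forward(x) of an invertible generative model.

    Assumed (hypothetical) interfaces: flow.forward maps data -> latent,
    flow.inverse maps latent -> data, clf returns class logits.
    """
    z = flow.forward(x).detach().requires_grad_(True)  # diffeomorphic coordinates
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_cf = flow.inverse(z)                         # map back to data space
        log_probs = torch.log_softmax(clf(x_cf), dim=-1)
        loss = -log_probs[..., target].mean()          # ascend target confidence
        loss.backward()
        opt.step()
    return flow.inverse(z).detach()
```

Because the flow is a diffeomorphism, every iterate corresponds to a point the generative model can represent, which is what separates these counterfactuals from plain adversarial gradient ascent in pixel space.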
Related papers
- Variational Classification [51.2541371924591]
Treating inputs to the softmax layer as samples of a latent variable, this abstracted perspective reveals a potential inconsistency in standard classifier training.
We induce a chosen latent distribution in place of the implicit assumption made by a standard softmax layer.
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational autoencoders.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
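One plausible rendering of the objective summarized above, with a Gaussian posterior over the softmax input z and a chosen per-class Gaussian prior; class_priors and softmax_layer are illustrative names, not the paper's API.

```python
import torch
import torch.nn.functional as F

def variational_classification_loss(mu, logvar, class_priors, labels, softmax_layer):
    """ELBO-style loss: the softmax input z is treated as a latent variable
    with q(z|x) = N(mu, diag(exp(logvar))), regularized toward a chosen
    class-conditional prior N(class_priors[y], I)."""
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)              # reparameterized sample
    log_py = F.log_softmax(softmax_layer(z), dim=-1)  # p(y|z) via the softmax head
    nll = F.nll_loss(log_py, labels)                  # -E_q[log p(y|z)]
    prior_mu = class_priors[labels]                   # the chosen latent distribution
    kl = 0.5 * ((mu - prior_mu) ** 2 + std ** 2 - logvar - 1).sum(-1).mean()
    return nll + kl
```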
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
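The geodesic machinery referenced above can be made concrete with the standard pullback construction. A minimal sketch (not the VTAE architecture itself), assuming decoder maps a 1-D latent tensor to a flat data tensor:

```python
import torch

def pullback_metric(decoder, z):
    """Riemannian metric the decoder g induces on the latent space:
    G(z) = J(z)^T J(z), where J is the Jacobian of g at z."""
    J = torch.autograd.functional.jacobian(decoder, z)  # (data_dim, latent_dim)
    return J.T @ J

def discrete_geodesic_energy(decoder, path):
    """Energy of a discretized latent curve, measured in data space;
    minimizing it over the interior points approximates a geodesic."""
    pts = torch.stack([decoder(z) for z in path])
    return ((pts[1:] - pts[:-1]) ** 2).sum()
```

Interpolating along such geodesics, rather than along straight latent lines, is one concrete sense in which accurate latent-space geometry can improve a deep generative model.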
- Learning Differential Invariants of Planar Curves [12.699486382844393]
We propose a learning paradigm for the numerical approximation of differential invariants of planar curves.
The universal approximation properties of deep neural networks (DNNs) are utilized to estimate geometric measures.
arXiv Detail & Related papers (2023-03-06T19:30:43Z)
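For reference, the classical differential invariant in question, Euclidean curvature, has a closed form that a finite-difference baseline can approximate; the paper's learned DNN estimator is not reproduced here.

```python
import numpy as np

def curvature(x, y):
    """Euclidean curvature of a sampled planar curve via finite differences:
    kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2),
    which is invariant under reparameterization of the curve."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5

# Sanity check: a circle of radius 2 has constant curvature 1/2.
t = np.linspace(0, 2 * np.pi, 400)
print(curvature(2 * np.cos(t), 2 * np.sin(t))[200])  # ~0.5
```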
- A Geometric Perspective on Variational Autoencoders [0.0]
This paper introduces a new interpretation of the Variational Autoencoder framework by taking a fully geometric point of view.
We show that using this scheme can make a vanilla VAE competitive and even better than more advanced versions on several benchmark datasets.
arXiv Detail & Related papers (2022-09-15T15:32:43Z)
- Topographic VAEs learn Equivariant Capsules [84.33745072274942]
We introduce the Topographic VAE: a novel method for efficiently training deep generative models with topographically organized latent variables.
We show that such a model indeed learns to organize its activations according to salient characteristics such as digit class, width, and style on MNIST.
We demonstrate approximate equivariance to complex transformations, expanding upon the capabilities of existing group equivariant neural networks.
arXiv Detail & Related papers (2021-09-03T09:25:57Z)
- Manifold learning-based polynomial chaos expansions for high-dimensional surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) in systems describing complex spatiotemporal processes.
The proposed method achieves highly accurate approximations, which ultimately lead to significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z)
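A minimal polynomial chaos expansion, the building block the summary above refers to, fitted by least squares for a scalar quantity of interest with a standard Gaussian input. The paper's manifold-learning step, which reduces high-dimensional outputs before fitting, is omitted in this sketch.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def fit_pce(xi, y, degree=5):
    """Least-squares PCE y(xi) ~ sum_k c_k He_k(xi), using probabilists'
    Hermite polynomials, which are orthogonal for xi ~ N(0, 1)."""
    Psi = hermevander(xi, degree)                  # design matrix of basis values
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
    return coef

# Usage: cheap surrogate for y = exp(xi); the exact coefficients are
# e^{1/2} / k!, i.e. roughly 1.65, 1.65, 0.82, ...
xi = np.random.randn(2000)
print(fit_pce(xi, np.exp(xi))[:3])
```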
- Convolutional Hough Matching Networks [39.524998833064956]
We introduce a Hough transform perspective on convolutional matching and propose an effective geometric matching algorithm, dubbed Convolutional Hough Matching (CHM).
We cast it into a trainable neural layer with a semi-isotropic high-dimensional kernel, which learns non-rigid matching with a small number of interpretable parameters.
Our method sets a new state of the art on standard benchmarks for semantic visual correspondence, proving its strong robustness to challenging intra-class variations.
arXiv Detail & Related papers (2021-03-31T06:17:03Z)
- Generative Archimedean Copulas [27.705956325584026]
We propose a new generative modeling technique for learning multidimensional cumulative distribution functions (CDFs) in the form of copulas.
We consider certain classes of copulas known as Archimedean and hierarchical Archimedean copulas, popular for their parsimonious representation and ability to model different tail dependencies.
arXiv Detail & Related papers (2021-02-22T20:45:40Z)
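For context, the closed-form case that learned generators generalize: an Archimedean copula is C(u) = psi(psi^{-1}(u_1) + ... + psi^{-1}(u_d)) for a generator psi. Below is the Clayton family with its standard Marshall-Olkin sampler; this is the classical baseline, not the paper's generative model.

```python
import numpy as np

def clayton_cdf(u, theta):
    """Archimedean copula C(u) = psi(sum_i psi_inv(u_i)) with the Clayton
    generator psi(t) = (1 + t)^(-1/theta), psi_inv(u) = u^(-theta) - 1."""
    return (np.sum(u ** -theta - 1.0, axis=-1) + 1.0) ** (-1.0 / theta)

def sample_clayton(n, d, theta, rng=np.random.default_rng()):
    """Marshall-Olkin sampling: U_i = psi(E_i / V) with E_i ~ Exp(1) and
    V ~ Gamma(1/theta, 1), whose Laplace transform is exactly psi."""
    v = rng.gamma(1.0 / theta, size=(n, 1))
    e = rng.exponential(size=(n, d))
    return (1.0 + e / v) ** (-1.0 / theta)
```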
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
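A non-learned stand-in for the formulation described above: each new point is a weighted combination of the nearest neighbors of a seed point. Here the weights are random convex weights; in the paper, both the unified, sorted weights and a high-order refinement are learned by the network.

```python
import numpy as np

def upsample_linear(points, ratio=4, k=3, rng=np.random.default_rng()):
    """Upsample an (n, 3) point cloud to n * ratio points by convex
    interpolation of k nearest neighbors (the linear part of the
    formulation; the learned high-order refinement is omitted)."""
    n = len(points)
    seeds = points[rng.integers(n, size=n * (ratio - 1))]
    d2 = ((seeds[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, :k]               # k nearest neighbors
    w = rng.dirichlet(np.ones(k), size=len(seeds))     # random convex weights
    new_pts = (points[nbrs] * w[:, :, None]).sum(axis=1)
    return np.concatenate([points, new_pts], axis=0)
```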
- Model identification and local linear convergence of coordinate descent [74.87531444344381]
We show that cyclic coordinate descent achieves model identification in finite time for a wide class of functions.
We also prove explicit local linear convergence rates for coordinate descent.
arXiv Detail & Related papers (2020-10-22T16:03:19Z)
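For reference, the algorithm under analysis in its textbook form for the Lasso; each coordinate update is a closed-form soft-threshold. Model identification means that the set of nonzero coordinates stabilizes after finitely many passes, after which convergence restricted to that set is locally linear.

```python
import numpy as np

def lasso_cd(X, y, lam, n_passes=200):
    """Cyclic coordinate descent for min_w 0.5 * ||y - X w||^2 + lam * ||w||_1."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)          # per-column squared norms
    r = y.astype(float).copy()             # residual y - X w, kept in sync
    for _ in range(n_passes):
        for j in range(d):                 # one cyclic pass over coordinates
            r += X[:, j] * w[j]            # remove coordinate j from the fit
            rho = X[:, j] @ r
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r -= X[:, j] * w[j]            # put the updated coordinate back
    return w
```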
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.