Revisiting Latent-Space Interpolation via a Quantitative Evaluation Framework
- URL: http://arxiv.org/abs/2110.06421v1
- Date: Wed, 13 Oct 2021 01:01:42 GMT
- Title: Revisiting Latent-Space Interpolation via a Quantitative Evaluation Framework
- Authors: Lu Mi, Tianxing He, Core Francisco Park, Hao Wang, Yue Wang, Nir Shavit
- Abstract summary: We show how data labeled with semantically continuous attributes can be utilized to conduct a quantitative evaluation of latent-space algorithms.
Our framework can be used to complement the standard qualitative comparison, and it also enables evaluation in domains (such as graphs) where visualization is difficult.
- Score: 14.589372535816619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent-space interpolation is commonly used to demonstrate the generalization
ability of deep latent variable models. Various algorithms have been proposed
to calculate the best trajectory between two encodings in the latent space. In
this work, we show how data labeled with semantically continuous attributes can
be utilized to conduct a quantitative evaluation of latent-space interpolation
algorithms, for variational autoencoders. Our framework can be used to
complement the standard qualitative comparison, and also enables evaluation for
domains (such as graphs) where visualization is difficult. Interestingly,
our experiments reveal that the superiority of interpolation algorithms could
be domain-dependent. While normalised interpolation works best for the image
domain, spherical linear interpolation achieves the best performance in the
graph domain. Next, we propose a simple-yet-effective method to restrict the
latent space via a bottleneck structure in the encoder. We find that all
interpolation algorithms evaluated in this work can benefit from this
restriction. Finally, we conduct interpolation-aware training with the labeled
attributes, and show that this explicit supervision can improve the
interpolation performance.
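As a rough illustration of the trajectory rules compared in the abstract, the sketch below implements linear, spherical linear (slerp), and norm-preserving ("normalised") interpolation in NumPy. This is a minimal sketch assuming the definitions common in the interpolation literature; the paper's exact normalisation may differ, and the 16-dimensional latent codes are hypothetical.

```python
import numpy as np

def lerp(z0, z1, t):
    """Plain linear interpolation between two latent codes."""
    return (1 - t) * z0 + t * z1

def slerp(z0, z1, t, eps=1e-8):
    """Spherical linear interpolation: follow the great-circle arc between the codes."""
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:  # nearly parallel vectors: fall back to lerp
        return lerp(z0, z1, t)
    return (np.sin((1 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def norm_interp(z0, z1, t):
    """Norm-preserving interpolation: rescale the linear path so that
    interpolants of independent Gaussian codes keep the expected norm."""
    return lerp(z0, z1, t) / np.sqrt((1 - t) ** 2 + t ** 2)

# Walk a short trajectory between two hypothetical 16-dimensional latent codes.
rng = np.random.default_rng(0)
z0, z1 = rng.standard_normal(16), rng.standard_normal(16)
trajectory = [slerp(z0, z1, t) for t in np.linspace(0.0, 1.0, 5)]
```

All three rules agree at the endpoints (t=0 and t=1) and differ only in how the midpoints are placed; a quantitative framework like the paper's would compare how attribute values evolve along `trajectory`.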
Related papers
- Efficient Graph Field Integrators Meet Point Clouds [59.27295475120132]
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
The first class, SeparatorFactorization (SF), leverages the bounded genus of point-cloud mesh graphs, while the second class, RFDiffusion (RFD), uses popular epsilon-nearest-neighbor graph representations of point clouds.
arXiv Detail & Related papers (2023-02-02T08:33:36Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Interpolation-based Correlation Reduction Network for Semi-Supervised Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN).
In our method, we improve the discriminative capability of the latent features by enlarging the margin of the decision boundaries.
By combining the two settings, we extract rich supervision from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- Efficient Hierarchical Domain Adaptation for Pretrained Language Models [77.02962815423658]
Generative language models are trained on diverse, general domain corpora.
We introduce a method to scale domain adaptation to many diverse domains using a computationally efficient adapter approach.
arXiv Detail & Related papers (2021-12-16T11:09:29Z)
- Gradient Matching for Domain Generalization [93.04545793814486]
A critical requirement of machine learning systems is their ability to generalize to unseen domains.
We propose an inter-domain gradient matching objective that targets domain generalization.
We derive a simpler first-order algorithm named Fish that approximates its optimization.
arXiv Detail & Related papers (2021-04-20T12:55:37Z)
- Explicit homography estimation improves contrastive self-supervised learning [0.30458514384586394]
We propose a module that serves as an additional objective in the self-supervised contrastive learning paradigm.
We show how the inclusion of this module to regress the parameters of an affine transformation or homography improves both performance and learning speed.
arXiv Detail & Related papers (2021-01-12T19:33:37Z)
- Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors [120.13149176992896]
We present an effectively signed attribute vector, which enables continuous translation on diverse mapping paths across various domains.
To enhance the visual quality of continuous translation results, we generate a trajectory between two sign-symmetrical attribute vectors.
arXiv Detail & Related papers (2020-11-02T18:59:03Z)
- Autoencoder Image Interpolation by Shaping the Latent Space [12.482988592988868]
Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types.
We propose a regularization technique that shapes the latent representation to follow a manifold consistent with the training images.
arXiv Detail & Related papers (2020-08-04T12:32:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.