Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction
- URL: http://arxiv.org/abs/2111.06636v1
- Date: Fri, 12 Nov 2021 10:06:08 GMT
- Title: Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction
- Authors: Xili Dai, Shengbang Tong, Mingyang Li, Ziyang Wu, Kwan Ho Ryan Chan,
Pengyuan Zhai, Yaodong Yu, Michael Psenka, Xiaojun Yuan, Heung Yeung Shum, Yi
Ma
- Abstract summary: This work proposes a new computational framework for learning an explicit generative model for real-world datasets.
In particular, we propose to learn a closed-loop transcription between a multi-class multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space.
Our experiments on many benchmark imagery datasets demonstrate the tremendous potential of this new closed-loop formulation.
- Score: 27.020835928724775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes a new computational framework for learning an explicit
generative model for real-world datasets. In particular, we propose to learn
*a closed-loop transcription* between a multi-class, multi-dimensional data
distribution and a *linear discriminative representation (LDR)* in the feature
space that consists of multiple independent multi-dimensional linear subspaces.
Specifically, we argue that the optimal encoding and decoding mappings sought
can be formulated as the equilibrium point of a *two-player minimax game
between the encoder and decoder*. A natural utility function for this game is
the so-called *rate reduction*, a simple information-theoretic measure of
distances between mixtures of subspace-like Gaussians in the feature space. Our
formulation draws inspiration from closed-loop error feedback from control
systems and avoids the expensive evaluation and minimization of approximated distances
between arbitrary distributions in either the data space or the feature space.
To a large extent, this new formulation unifies the concepts and benefits of
Auto-Encoding and GAN and naturally extends them to the setting of learning a
representation that is *both discriminative and generative* for multi-class and
multi-dimensional real-world data. Our extensive experiments on many benchmark
imagery datasets demonstrate the tremendous potential of this new closed-loop
formulation: under fair comparison, the visual quality of the learned decoder
and the classification performance of the encoder are competitive with, and
often better than, existing methods based on GANs, VAEs, or a combination of
both. We notice that the features so learned for different classes are
explicitly mapped onto approximately *independent principal subspaces* in the
feature space, and that diverse visual attributes within each class are
modeled by the *independent principal components* within each subspace.
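To make the utility function above concrete, the following is a minimal sketch of the rate-reduction quantities involved, written using the definitions from the maximal coding rate reduction (MCR^2) framework that this work builds on. The final schematic game is an assumption about the objective's overall shape; the exact combination and weighting of terms in the paper may differ.

```latex
% Coding rate of n feature vectors Z = [z_1, \dots, z_n] \in \mathbb{R}^{d \times n},
% subject to a distortion \epsilon (MCR^2 definition):
R(Z, \epsilon) = \frac{1}{2} \log \det\left( I + \frac{d}{n\epsilon^2} Z Z^\top \right)

% Rate reduction for a partition of Z into k class subsets Z^1, \dots, Z^k,
% with n_j samples in class j:
\Delta R(Z) = R(Z, \epsilon) - \sum_{j=1}^{k} \frac{n_j}{n} R(Z^j, \epsilon)

% Closed-loop transcription: with encoder Z = f(X; \theta), decoder
% \hat{X} = g(Z; \eta), and transcribed features \hat{Z}^j = f(g(Z^j)),
% a schematic form of the two-player minimax game is
\max_{\theta} \min_{\eta} \sum_{j=1}^{k} \Delta R\left( Z^j \cup \hat{Z}^j \right)

% where \Delta R(Z^j \cup \hat{Z}^j) is the rate reduction of the union under
% the two-set partition \{ Z^j, \hat{Z}^j \}: an information-theoretic
% "distance" between class-j features and their transcriptions.
```

Intuitively, the encoder acts as a critic that expands this distance to expose any discrepancy between the data and its transcription, while the decoder contracts it; no density over arbitrary distributions is ever estimated or compared directly.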
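For readers who prefer code, here is a small self-contained numpy sketch of the same quantities. The helper names, the epsilon value, and the toy data are illustrative assumptions rather than the authors' implementation; in the actual method these terms are computed on encoder features and optimized adversarially over the encoder and decoder parameters.

```python
# Sketch of the coding-rate quantities from the MCR^2 framework; helper
# names, epsilon, and toy data are illustrative assumptions, not the
# authors' code.
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """R(Z, eps) for a d x n matrix of n feature vectors."""
    d, n = Z.shape
    # slogdet returns (sign, log|det|); we take the log-determinant.
    return 0.5 * np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * (Z @ Z.T))[1]

def rate_reduction(Z: np.ndarray, labels: np.ndarray, eps: float = 0.5) -> float:
    """Delta R: rate of the whole set minus size-weighted per-class rates."""
    n = Z.shape[1]
    per_class = sum(
        (np.sum(labels == j) / n) * coding_rate(Z[:, labels == j], eps)
        for j in np.unique(labels)
    )
    return coding_rate(Z, eps) - per_class

def transcription_distance(Zj: np.ndarray, Zj_hat: np.ndarray, eps: float = 0.5) -> float:
    """Rate-reduction 'distance' between class-j features and their
    transcriptions: Delta R of the union under the two-set partition."""
    Z = np.concatenate([Zj, Zj_hat], axis=1)
    labels = np.r_[np.zeros(Zj.shape[1], dtype=int), np.ones(Zj_hat.shape[1], dtype=int)]
    return rate_reduction(Z, labels, eps)

# Toy sanity check on random features (d = 16, n = 200, 4 classes).
rng = np.random.default_rng(0)
Z = rng.standard_normal((16, 200))
y = rng.integers(0, 4, size=200)
print(rate_reduction(Z, y))  # nonnegative; grows as classes occupy distinct subspaces
```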
Related papers
- Subspace Representation Learning for Sparse Linear Arrays to Localize More Sources than Sensors: A Deep Learning Methodology [19.100476521802243]
We develop a novel methodology that estimates the co-array subspaces from a sample covariance for sparse linear arrays (SLAs).
To learn such representations, we propose loss functions that gauge the separation between the desired and the estimated subspace.
The computation of learning subspaces of different dimensions is accelerated by a new batch sampling strategy.
arXiv Detail & Related papers (2024-08-29T15:14:52Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Unsupervised Manifold Linearizing and Clustering [19.879641608165887]
We propose to optimize the Maximal Coding Rate Reduction metric with respect to both the data representation and a novel doubly stochastic cluster membership.
Experiments on CIFAR-10, -20, -100, and TinyImageNet-200 datasets show that the proposed method is much more accurate and scalable than state-of-the-art deep clustering methods.
arXiv Detail & Related papers (2023-01-04T20:08:23Z)
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders [45.29194877564103]
This work introduces a chart autoencoder with an asymmetric encoding-decoding process that can incorporate additional semi-supervised information such as class labels.
We discuss the approximation power of such networks and derive a bound that essentially depends on the intrinsic dimension of the data manifold rather than on the dimension of the ambient space.
arXiv Detail & Related papers (2022-08-22T19:58:03Z)
- Few-Shot Non-Parametric Learning with Deep Latent Variable Model [50.746273235463754]
We propose Non-Parametric learning by Compression with Latent Variables (NPC-LV).
NPC-LV is a learning framework for any dataset with abundant unlabeled data but very few labeled ones.
We show that NPC-LV outperforms supervised methods on image classification on all three datasets in the low-data regime.
arXiv Detail & Related papers (2022-06-23T09:35:03Z)
- Generalized One-Class Learning Using Pairs of Complementary Classifiers [41.64645294104883]
One-class learning is the classic problem of fitting a model to the data for which annotations are available only for a single class.
In this paper, we explore novel objectives for one-class learning, which we collectively refer to as Generalized One-class Discriminative Subspaces (GODS).
arXiv Detail & Related papers (2021-06-24T18:52:05Z)
- Learning optimally separated class-specific subspace representations using convolutional autoencoder [0.0]
We propose a novel convolutional-autoencoder-based architecture to generate subspace-specific feature representations.
To demonstrate the effectiveness of the proposed approach, several experiments have been carried out on state-of-the-art machine learning datasets.
arXiv Detail & Related papers (2021-02-17T11:06:59Z)
- Switch Spaces: Learning Product Spaces with Sparse Gating [48.591045282317424]
We propose Switch Spaces, a data-driven approach for learning representations in product space.
We introduce sparse gating mechanisms that learn to choose, combine and switch spaces.
Experiments on knowledge graph completion and item recommendation show that the proposed Switch Spaces achieve new state-of-the-art performance.
arXiv Detail & Related papers (2020-09-21T16:29:59Z)
- Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis, called joint and progressive subspace analysis (JPSA).
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-09-21T16:29:59Z)
- Deep Metric Structured Learning For Facial Expression Recognition [58.7528672474537]
We propose a deep metric learning model to create embedded sub-spaces with a well-defined structure.
A new loss function that imposes Gaussian structures on the output space is introduced to create these sub-spaces.
We experimentally demonstrate that the learned embedding can be successfully used for various applications including expression retrieval and emotion recognition.
arXiv Detail & Related papers (2020-01-18T06:23:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.