Supervised Dimensionality Reduction and Classification with
Convolutional Autoencoders
- URL: http://arxiv.org/abs/2208.12152v2
- Date: Fri, 26 Aug 2022 12:15:44 GMT
- Title: Supervised Dimensionality Reduction and Classification with
Convolutional Autoencoders
- Authors: Ioannis A. Nellas, Sotiris K. Tasoulis, Vassilis P. Plagianakos and
Spiros V. Georgakopoulos
- Abstract summary: A Convolutional Autoencoder and a Fully Connected
classifier are combined to simultaneously produce supervised dimensionality
reduction and predictions.
The resulting Latent Space can be utilized to improve traditional,
interpretable classification algorithms.
The proposed methodology introduces advanced explainability regarding not
only the data structure, through the produced latent space, but also the
classification behaviour.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The joint optimization of the reconstruction and classification
error is a hard non-convex problem, especially when a non-linear mapping is
utilized. To overcome this obstacle, a novel optimization strategy is
proposed, in which a Convolutional Autoencoder for dimensionality reduction
and a classifier composed of a Fully Connected Network are combined to
simultaneously produce supervised dimensionality reduction and predictions.
It turned out that this methodology can also be greatly beneficial in
enforcing the explainability of deep learning architectures. Additionally,
the resulting Latent Space, optimized for the classification task, can be
utilized to improve traditional, interpretable classification algorithms.
The experimental results showed that the proposed methodology achieved
competitive results against state-of-the-art deep learning methods, while
being much more efficient in terms of parameter count. Finally, it was
empirically justified that the proposed methodology introduces advanced
explainability regarding not only the data structure, through the produced
latent space, but also the classification behaviour.
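As a concrete illustration of the combined objective, here is a minimal
PyTorch sketch assuming an MNIST-like 1x28x28 input; the layer sizes, latent
dimension, and loss weighting alpha are illustrative assumptions, not the
authors' exact architecture.

```python
# Minimal sketch (not the authors' exact architecture): a convolutional
# autoencoder whose latent code also feeds a fully connected classifier,
# trained on a weighted sum of reconstruction and classification losses.
import torch
import torch.nn as nn

class SupervisedCAE(nn.Module):
    def __init__(self, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(              # 1x28x28 -> latent_dim
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(32 * 7 * 7, latent_dim))
        self.decoder = nn.Sequential(              # latent_dim -> 1x28x28
            nn.Linear(latent_dim, 32 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid())
        self.classifier = nn.Sequential(           # FC head on the latent code
            nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z), z

model = SupervisedCAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = 0.5                                        # assumed loss weighting
x = torch.rand(8, 1, 28, 28)                       # dummy batch
y = torch.randint(0, 10, (8,))
x_hat, logits, z = model(x)
loss = alpha * nn.functional.mse_loss(x_hat, x) \
     + (1 - alpha) * nn.functional.cross_entropy(logits, y)
opt.zero_grad(); loss.backward(); opt.step()
```

A single optimizer step descends both losses at once, which is the
simultaneous supervised dimensionality reduction and prediction the abstract
refers to.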
Related papers
- Enabling Tensor Decomposition for Time-Series Classification via A Simple Pseudo-Laplacian Contrast [26.28414569796961]
We propose a novel Pseudo-Laplacian Contrast (PLC) tensor decomposition framework.
It integrates data augmentation and a cross-view Laplacian to enable the extraction of class-aware representations.
Experiments on various datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2024-09-23T16:48:13Z)
- Anti-Collapse Loss for Deep Metric Learning Based on Coding Rate Metric [99.19559537966538]
Deep metric learning (DML) aims to learn a discriminative high-dimensional embedding space for downstream tasks like classification, clustering, and retrieval.
To maintain the structure of embedding space and avoid feature collapse, we propose a novel loss function called Anti-Collapse Loss.
Comprehensive experiments on benchmark datasets demonstrate that our proposed method outperforms existing state-of-the-art methods.
arXiv Detail & Related papers (2024-07-03T13:44:20Z)
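A hedged sketch of the coding-rate quantity behind anti-collapse
regularization like the entry above: this is the generic MCR²-style rate,
not necessarily the paper's exact loss; the eps value and 0.01 weighting are
illustrative assumptions.

```python
import torch

def coding_rate(z, eps=0.5):
    # Rate-distortion "coding rate" of a batch of embeddings z of shape
    # (n, d): larger values mean the batch spans more volume, so
    # maximizing this term counteracts feature collapse.
    n, d = z.shape
    gram = z.T @ z                                   # (d, d)
    eye = torch.eye(d, device=z.device)
    return 0.5 * torch.logdet(eye + (d / (n * eps ** 2)) * gram)

# Usage sketch: subtract the rate from the task loss so minimizing the
# total keeps the embedding space from collapsing (weight assumed).
emb = torch.randn(128, 64, requires_grad=True)
z = torch.nn.functional.normalize(emb, dim=1)
task_loss = torch.tensor(0.0)                        # placeholder DML loss
total_loss = task_loss - 0.01 * coding_rate(z)
total_loss.backward()
```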
- Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data [23.661713049508375]
We propose an algorithm for learning over a compact smooth submanifold in the federated setting with heterogeneous client data.
We show that our proposed algorithm converges sublinearly to a neighborhood of a first-order optimal solution by using a novel analysis.
arXiv Detail & Related papers (2024-06-12T17:53:28Z)
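The summary above leaves the algorithm unspecified; as a generic
illustration of optimizing over a compact smooth submanifold (not the
paper's federated method), here is a Riemannian gradient step on the unit
sphere. The manifold, objective, and step size are all assumptions.

```python
import numpy as np

def sphere_step(x, euclid_grad, lr=0.05):
    # Project the Euclidean gradient onto the tangent space at x, take a
    # step, then retract back onto the unit sphere by renormalizing.
    tangent = euclid_grad - np.dot(x, euclid_grad) * x
    x_new = x - lr * tangent
    return x_new / np.linalg.norm(x_new)

# Toy objective f(x) = x^T A x on the sphere; Riemannian descent drives x
# toward an eigenvector of the smallest eigenvalue of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A + A.T
x = rng.standard_normal(5); x /= np.linalg.norm(x)
for _ in range(300):
    x = sphere_step(x, 2 * A @ x)        # gradient of x^T A x is 2 A x
```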
- Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z)
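A sketch of the linearization idea above on an assumed toy problem (a box
constraint plus an l1 term; not the paper's general scheme): the smooth
term is linearized, the nonsmooth term is kept exact, and the resulting
subproblem over the compact set solves in closed form per coordinate.

```python
import numpy as np

def composite_frank_wolfe(grad_f, lam=0.1, n=20, iters=100):
    # Frank-Wolfe-style sketch for min f(x) + lam*||x||_1 over the box
    # [-1, 1]^n: only the smooth f is linearized; the l1 term stays
    # exact in the subproblem, which is separable per coordinate.
    x = np.zeros(n)
    for t in range(iters):
        g = grad_f(x)
        # argmin over the box of <g, s> + lam*||s||_1, coordinate-wise:
        # s_i = -sign(g_i) if |g_i| > lam, else 0.
        s = np.where(np.abs(g) > lam, -np.sign(g), 0.0)
        gamma = 2.0 / (t + 2.0)              # standard FW step size
        x = (1 - gamma) * x + gamma * s
    return x

# Example: smooth part f(x) = 0.5*||x - b||^2 with a dense target b.
b = np.linspace(-2, 2, 20)
x_star = composite_frank_wolfe(lambda x: x - b)
```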
- Topologically Regularized Data Embeddings [15.001598256750619]
We introduce a generic approach based on algebraic topology to incorporate topological prior knowledge into low-dimensional embeddings.
We show that jointly optimizing an embedding loss with such a topological loss function as a regularizer yields embeddings that reflect not only local proximities but also the desired topological structure.
We empirically evaluate the proposed approach on computational efficiency, robustness, and versatility in combination with linear and non-linear dimensionality reduction and graph embedding methods.
arXiv Detail & Related papers (2023-01-09T13:49:47Z)
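One common differentiable construction for such a topological regularizer,
sketched here, takes 0-dimensional persistence from minimum-spanning-tree
edge lengths and matches them between input and embedding; this is in the
spirit of the entry above, not necessarily its exact loss, and the 0.1
weighting and point clouds are placeholders.

```python
import torch
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edge_lengths(x):
    # Lengths of the MST edges of a point cloud, i.e. its 0-dimensional
    # persistence. Edge *selection* is discrete, but gradients flow
    # through the selected pairwise distances.
    dist = torch.cdist(x, x)
    mst = minimum_spanning_tree(dist.detach().numpy()).tocoo()
    rows = torch.as_tensor(mst.row, dtype=torch.long)
    cols = torch.as_tensor(mst.col, dtype=torch.long)
    return dist[rows, cols].sort().values

x = torch.randn(64, 10)                      # high-dimensional input cloud
z = torch.randn(64, 2, requires_grad=True)   # low-dimensional embedding
embedding_loss = torch.tensor(0.0)           # placeholder for any DR loss
topo_loss = ((mst_edge_lengths(z) - mst_edge_lengths(x)) ** 2).mean()
(embedding_loss + 0.1 * topo_loss).backward()
```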
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
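A minimal sketch of training through a fixed point as described above,
using the common Jacobian-free one-step approximation to implicit
differentiation (an assumption here; the paper's exact estimator may
differ). The toy map is a contraction so the iteration converges.

```python
import torch

def fixed_point_forward(f, z0, iters=50):
    # Iterate z <- f(z) to (approximate) convergence without building
    # the autograd graph, then apply one final differentiable step.
    # Backprop sees only that last step -- a cheap surrogate for full
    # implicit differentiation through the fixed point z* = f(z*).
    z = z0
    with torch.no_grad():
        for _ in range(iters):
            z = f(z)
    return f(z)

# Toy contraction-style refinement z <- tanh(W z + x).
W = (0.1 * torch.randn(8, 8)).requires_grad_(True)
x = torch.randn(8, requires_grad=True)
f = lambda z: torch.tanh(W @ z + x)
z_star = fixed_point_forward(f, torch.zeros(8))
z_star.sum().backward()                      # gradients reach W and x
```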
- Gradient-Based Learning of Discrete Structured Measurement Operators for Signal Recovery [16.740247586153085]
We show how to leverage gradient-based learning to solve discrete optimization problems.
Our approach is formalized by GLODISMO (Gradient-based Learning of DIscrete Structured Measurement Operators).
We empirically demonstrate the performance and flexibility of GLODISMO in several signal recovery applications.
arXiv Detail & Related papers (2022-02-07T18:27:08Z)
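One standard device for making a discrete measurement operator trainable
by gradients, sketched below, is the straight-through estimator: discrete
in the forward pass, identity gradient in the backward pass. The top-k
mask structure and recovery loss are illustrative assumptions, not
necessarily GLODISMO's exact construction.

```python
import torch

def straight_through_mask(logits, k):
    # Hard top-k 0/1 mask in the forward pass (a discrete, structured
    # subsampling pattern); gradients pass straight through to the
    # logits in the backward pass.
    hard = torch.zeros_like(logits)
    hard[logits.topk(k).indices] = 1.0
    return hard + logits - logits.detach()   # straight-through trick

# Learn which k of n coordinates to measure for signals y from some
# distribution (dummy batch, placeholder loss).
n, k = 64, 16
logits = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.05)
y = torch.randn(32, n)
mask = straight_through_mask(logits, k)
loss = ((y * mask - y) ** 2).mean()          # placeholder recovery loss
opt.zero_grad(); loss.backward(); opt.step()
```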
- Unsupervised feature selection via self-paced learning and low-redundant regularization [6.083524716031565]
An unsupervised feature selection method is proposed by integrating the frameworks of self-paced learning and subspace learning.
The convergence of the method is established both theoretically and experimentally.
The experimental results show that the proposed method can improve the performance of clustering methods and outperform other compared algorithms.
arXiv Detail & Related papers (2021-12-14T08:28:19Z)
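The self-paced mechanism named above, shown in its generic hard-threshold
form: each sample is weighted by its current loss and an "age" parameter
grows so harder samples enter training gradually. The loss distribution and
schedule are placeholders, and the paper's feature selection and
low-redundancy terms are omitted.

```python
import numpy as np

def self_paced_weights(losses, lam):
    # Hard self-paced weighting: samples with loss below the age
    # parameter lam count as "easy" and get weight 1; the rest wait.
    return (losses < lam).astype(float)

rng = np.random.default_rng(0)
losses = rng.exponential(1.0, size=100)      # stand-in per-sample losses
lam = np.quantile(losses, 0.3)               # start with the easiest 30%
for epoch in range(5):
    v = self_paced_weights(losses, lam)
    weighted_loss = (v * losses).sum() / max(v.sum(), 1.0)
    lam *= 1.3                               # age schedule (assumed)
```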
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are the most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
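A sketch of the latent-space augmentation described above. The
encode/decode interface is a hypothetical stand-in for a trained
normalizing flow, and the FGSM-style step is an assumed instantiation of
"adversarial perturbation in the latent space", not the paper's exact
procedure.

```python
import torch

class ToyFlow:
    # Stand-in for a trained normalizing flow (hypothetical interface).
    # A fixed well-conditioned linear map keeps the sketch runnable; a
    # real flow would be nonlinear and learned.
    def __init__(self, dim):
        self.W = torch.randn(dim, dim) + dim * torch.eye(dim)
        self.W_inv = torch.linalg.inv(self.W)
    def encode(self, x): return x @ self.W.T
    def decode(self, z): return z @ self.W_inv.T

def latent_adversarial_augment(flow, classifier, x, y, eps=0.1):
    # Map data to the flow's latent space, take an FGSM-style step that
    # increases the classifier's loss there, and decode back: a
    # semantic (latent-space) augmentation rather than a pixel one.
    z = flow.encode(x).detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(classifier(flow.decode(z)), y)
    grad, = torch.autograd.grad(loss, z)
    return flow.decode(z + eps * grad.sign()).detach()

flow, classifier = ToyFlow(16), torch.nn.Linear(16, 10)
x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))
x_aug = latent_adversarial_augment(flow, classifier, x, y)
```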
- Discretization-Aware Architecture Search [81.35557425784026]
This paper presents discretization-aware architecture search (DA²S).
The core idea is to push the super-network towards the configuration of the desired topology, so that the accuracy loss brought by discretization is largely alleviated.
Experiments on standard image classification benchmarks demonstrate the superiority of our approach.
arXiv Detail & Related papers (2020-07-07T01:18:58Z)
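One way to "push the super-network towards the desired topology", sketched
as an entropy penalty on DARTS-style architecture weights: an assumed
surrogate for the idea above, not necessarily DA²S's exact loss. The edge
and operation counts and the 0.1 weight are placeholders.

```python
import torch

def discretization_penalty(arch_logits):
    # Mean entropy of the per-edge operation distributions in a
    # DARTS-style super-network. Driving it down makes the soft
    # architecture nearly one-hot, so the final discretization step
    # loses little accuracy.
    probs = torch.softmax(arch_logits, dim=-1)     # (edges, ops)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1)
    return entropy.mean()

arch_logits = torch.randn(14, 8, requires_grad=True)  # 14 edges, 8 ops
task_loss = torch.tensor(0.0)                         # placeholder
total = task_loss + 0.1 * discretization_penalty(arch_logits)
total.backward()
```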