Learning optimally separated class-specific subspace representations
using convolutional autoencoder
- URL: http://arxiv.org/abs/2105.08865v1
- Date: Wed, 19 May 2021 00:45:34 GMT
- Title: Learning optimally separated class-specific subspace representations
using convolutional autoencoder
- Authors: Krishan Sharma (1), Shikha Gupta (1), Renu Rameshan (2) ((1) Vehant
Technologies Pvt. Ltd., (2) Indian Institute of Technology Mandi, India)
- Abstract summary: We propose a novel convolutional autoencoder based architecture to generate subspace-specific feature representations.
To demonstrate the effectiveness of the proposed approach, several experiments have been carried out on state-of-the-art machine learning datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a novel convolutional autoencoder based
architecture to generate subspace-specific feature representations that are
best suited for the classification task. The class-specific data is assumed to
lie in low-dimensional linear subspaces, which could be noisy and not well
separated, i.e., the subspace distance (principal angle) between two classes is
very low. The proposed network uses a novel class-specific self-expressiveness
(CSSE) layer, sandwiched between the encoder and decoder networks, to generate
class-wise subspace representations that are well separated. The CSSE layer,
along with the encoder and decoder, is trained in such a way that the data
still lies in subspaces in the feature space, with a minimum principal angle
much higher than that of the input space. To demonstrate the effectiveness of
the proposed approach, several experiments have been carried out on
state-of-the-art machine learning datasets, and a significant improvement in
classification performance is observed over existing subspace-based
transformation learning methods.
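The separation objective above is stated in terms of principal angles between class subspaces. As a minimal illustration (not the paper's implementation), the principal angles between two estimated subspaces can be computed with the standard Björck-Golub SVD recipe; the function name and the toy bases below are illustrative assumptions:

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles (radians, ascending) between the column
    spans of A and B, via the Bjorck-Golub SVD method."""
    Qa, _ = np.linalg.qr(A)  # orthonormal basis for span(A)
    Qb, _ = np.linalg.qr(B)  # orthonormal basis for span(B)
    cosines = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(cosines, -1.0, 1.0))

# Two 1-D subspaces of R^3 spanned by orthogonal axes: the
# minimum principal angle is pi/2 (maximally separated classes).
e1 = np.array([[1.0], [0.0], [0.0]])
e2 = np.array([[0.0], [1.0], [0.0]])
print(principal_angles(e1, e2)[0])  # smallest principal angle
```

A training objective like the paper's would push this smallest angle toward π/2 for every pair of class subspaces in the feature space.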
Related papers
- Subspace Prototype Guidance for Mitigating Class Imbalance in Point Cloud Semantic Segmentation [23.250178208474928]
This paper introduces a novel method, namely subspace prototype guidance (SPG), to guide the training of the segmentation network.
The proposed method significantly improves the segmentation performance and surpasses the state-of-the-art method.
arXiv Detail & Related papers (2024-08-20T04:31:46Z)
- Feature Selection using Sparse Adaptive Bottleneck Centroid-Encoder [1.2487990897680423]
We introduce a novel nonlinear model, Sparse Adaptive Bottleneck Centroid-Encoder (SABCE), for determining the features that discriminate between two or more classes.
The algorithm is applied to various real-world data sets, including high-dimensional biological, image, speech, and accelerometer sensor data.
arXiv Detail & Related papers (2023-06-07T21:37:21Z)
- Learning Structure Aware Deep Spectral Embedding [11.509692423756448]
We propose a novel structure-aware deep spectral embedding by combining a spectral embedding loss and a structure preservation loss.
A deep neural network architecture is proposed that simultaneously encodes both types of information and aims to generate structure-aware spectral embedding.
The proposed algorithm is evaluated on six publicly available real-world datasets.
arXiv Detail & Related papers (2023-05-14T18:18:05Z)
- Closed-Loop Data Transcription to an LDR via Minimaxing Rate Reduction [27.020835928724775]
This work proposes a new computational framework for learning an explicit generative model for real-world datasets.
In particular, we propose to learn a closed-loop transcription between a multi-class multi-dimensional data distribution and a linear discriminative representation (LDR) in the feature space.
Our experiments on many benchmark imagery datasets demonstrate the tremendous potential of this new closed-loop formulation.
arXiv Detail & Related papers (2021-11-12T10:06:08Z)
- Subspace Representation Learning for Few-shot Image Classification [105.7788602565317]
We propose a subspace representation learning framework to tackle few-shot image classification tasks.
It exploits a subspace in local CNN feature space to represent an image, and measures the similarity between two images according to a weighted subspace distance (WSD).
arXiv Detail & Related papers (2021-05-02T02:29:32Z)
- Joint and Progressive Subspace Analysis (JPSA) with Spatial-Spectral Manifold Alignment for Semi-Supervised Hyperspectral Dimensionality Reduction [48.73525876467408]
We propose a novel technique for hyperspectral subspace analysis, called joint and progressive subspace analysis (JPSA).
Experiments are conducted to demonstrate the superiority and effectiveness of the proposed JPSA on two widely-used hyperspectral datasets.
arXiv Detail & Related papers (2020-09-21T16:29:59Z)
- OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax Layer [77.90012156266324]
This paper aims to find a subspace of neural networks that can facilitate a large decision margin.
We propose the Orthogonal Softmax Layer (OSL), which makes the weight vectors in the classification layer remain orthogonal during both the training and test processes.
Experimental results demonstrate that the proposed OSL has better performance than the methods used for comparison on four small-sample benchmark datasets.
arXiv Detail & Related papers (2020-04-20T02:41:01Z)
- Robust Large-Margin Learning in Hyperbolic Space [64.42251583239347]
We present the first theoretical guarantees for learning a classifier in hyperbolic rather than Euclidean space.
We provide an algorithm to efficiently learn a large-margin hyperplane, relying on the careful injection of adversarial examples.
We prove that for hierarchical data that embeds well into hyperbolic space, the low embedding dimension ensures superior guarantees.
arXiv Detail & Related papers (2020-04-11T19:11:30Z)
- Ellipsoidal Subspace Support Vector Data Description [98.67884574313292]
We propose a novel method for transforming data into a low-dimensional space optimized for one-class classification.
We provide both linear and non-linear formulations for the proposed method.
The proposed method is observed to converge much faster than the recently proposed Subspace Support Vector Data Description.
arXiv Detail & Related papers (2020-03-20T21:31:03Z)
- Multi-Level Representation Learning for Deep Subspace Clustering [10.506584969668792]
This paper proposes a novel deep subspace clustering approach which uses convolutional autoencoders to transform input images into new representations lying on a union of linear subspaces.
Experiments on four real-world datasets demonstrate that our approach exhibits superior performance compared to the state-of-the-art methods on most of the subspace clustering problems.
arXiv Detail & Related papers (2020-01-19T23:29:50Z)
- Deep Metric Structured Learning For Facial Expression Recognition [58.7528672474537]
We propose a deep metric learning model to create embedded sub-spaces with a well-defined structure.
A new loss function that imposes Gaussian structures on the output space is introduced to create these sub-spaces.
We experimentally demonstrate that the learned embedding can be successfully used for various applications including expression retrieval and emotion recognition.
arXiv Detail & Related papers (2020-01-18T06:23:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.