Improving Deep Representation Learning via Auxiliary Learnable Target
Coding
- URL: http://arxiv.org/abs/2305.18680v1
- Date: Tue, 30 May 2023 01:38:54 GMT
- Title: Improving Deep Representation Learning via Auxiliary Learnable Target
Coding
- Authors: Kangjun Liu, Ke Chen, Yaowei Wang, Kui Jia
- Abstract summary: This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
- Score: 44.61627734250863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep representation learning is a subfield of machine learning that focuses
on learning meaningful and useful representations of data through deep neural
networks. However, existing methods for semantic classification typically
employ pre-defined target codes such as one-hot and Hadamard codes, which
either fail to model inter-class correlation or are inflexible in doing so. In
light of this, this paper introduces a novel learnable target coding as an
auxiliary regularization of deep representation learning, which can not only
incorporate latent dependency across classes but also impose geometric
properties of target codes into the representation space. Specifically, a
margin-based triplet loss and a correlation consistency loss on the proposed
target codes are designed to encourage more discriminative representations, by
enlarging between-class margins in the representation space and by favoring
equal semantic correlation among the learnable target codes, respectively.
Experimental results on several popular visual classification and retrieval
benchmarks demonstrate the effectiveness of our method in improving
representation learning, especially for imbalanced data.
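As a rough illustration of the two auxiliary losses described in the abstract, here is a minimal NumPy sketch. All names, the margin value, and the random initialization are hypothetical; this is not the authors' implementation, only the general shape of a margin-based triplet loss and a correlation consistency loss over learnable per-class target codes:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, code_dim = 5, 16

# One learnable target code per class (random init here; in training
# these would be updated jointly with the network by gradient descent).
codes = rng.normal(size=(num_classes, code_dim))
codes /= np.linalg.norm(codes, axis=1, keepdims=True)

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Margin-based triplet loss: pull the feature toward its class code
    and push it away from other classes' codes by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def correlation_consistency_loss(codes):
    """Favor equal semantic correlation between codes: penalize the
    spread of off-diagonal pairwise correlations around their mean."""
    corr = codes @ codes.T                       # pairwise correlations
    off = corr[~np.eye(len(codes), dtype=bool)]  # off-diagonal entries
    return np.mean((off - off.mean()) ** 2)

# Toy usage: a feature vector close to class 0's target code.
feat = codes[0] + 0.1 * rng.normal(size=code_dim)
loss = triplet_loss(feat, codes[0], codes[1]) + correlation_consistency_loss(codes)
```

In an actual training loop both terms would be added, with weighting coefficients, to the main classification objective.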
Related papers
- Deep Metric Learning for Computer Vision: A Brief Overview [4.980117530293724]
Objective functions that optimize deep neural networks play a vital role in creating an enhanced feature representation of the input data.
Deep Metric Learning seeks to develop methods that aim to measure the similarity between data samples.
We will provide an overview of recent progress in this area and discuss state-of-the-art Deep Metric Learning approaches.
arXiv Detail & Related papers (2023-12-01T21:53:36Z)
- Learning Common Rationale to Improve Self-Supervised Representation for Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
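The detector above is built on GradCAM induced from the SSL objective. As a hedged sketch of the GradCAM computation itself (generic arrays, not the paper's detector or its SSL objective):

```python
import numpy as np

def gradcam(feature_maps, gradients):
    """Grad-CAM: weight each channel's feature map by the spatial mean of
    its gradient, sum over channels, and keep positive evidence only.

    feature_maps, gradients: arrays of shape (C, H, W), where gradients
    are d(objective)/d(feature_maps).
    """
    weights = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(weights, feature_maps, axes=1)  # weighted sum of maps
    return np.maximum(cam, 0.0)                        # ReLU
```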
arXiv Detail & Related papers (2023-03-03T02:07:40Z)
- Convolutional Fine-Grained Classification with Self-Supervised Target Relation Regularization [34.8793946023412]
This paper introduces a novel target coding scheme -- dynamic target relation graphs (DTRG).
Online computation of class-level feature centers is designed to generate cross-category distance in the representation space.
The proposed target graphs can alleviate data sparsity and imbalance in representation learning.
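One plausible reading of "online computation of class-level feature centers" is an exponential moving average over mini-batches, with pairwise center distances forming the relation graph. The following is a sketch under that assumption; the helper names and the momentum value are hypothetical:

```python
import numpy as np

def update_centers(centers, feats, labels, momentum=0.9):
    """Online (EMA) update of class-level feature centers from one
    mini-batch of features (n, d) with integer class labels (n,)."""
    centers = centers.copy()
    for c in np.unique(labels):
        batch_mean = feats[labels == c].mean(axis=0)
        centers[c] = momentum * centers[c] + (1.0 - momentum) * batch_mean
    return centers

def cross_category_distances(centers):
    """Pairwise Euclidean distances between class centers; these edges
    define a relation graph over classes in the representation space."""
    diff = centers[:, None, :] - centers[None, :, :]
    return np.linalg.norm(diff, axis=-1)
```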
arXiv Detail & Related papers (2022-08-03T11:51:53Z)
- Semantic Representation and Dependency Learning for Multi-Label Image Recognition [76.52120002993728]
We propose a novel and effective semantic representation and dependency learning (SRDL) framework to learn category-specific semantic representation for each category.
Specifically, we design a category-specific attentional regions (CAR) module to generate channel/spatial-wise attention matrices to guide the model.
We also design an object erasing (OE) module to implicitly learn semantic dependency among categories by erasing semantic-aware regions.
arXiv Detail & Related papers (2022-04-08T00:55:15Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
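One simple way to picture "randomly eliminating certain class information in each training iteration" is a per-class dropout mask over class channels. This is only an illustrative sketch; the helper name and drop probability are hypothetical, and the paper's actual scheme may differ:

```python
import numpy as np

def drop_class_info(class_scores, drop_prob=0.3, rng=None):
    """Randomly zero out a subset of class channels for one training
    iteration, so features cannot rely on correlations with the
    eliminated classes. class_scores has shape (n, num_classes)."""
    rng = rng or np.random.default_rng()
    num_classes = class_scores.shape[-1]
    keep = rng.random(num_classes) >= drop_prob  # per-class keep mask
    return class_scores * keep, keep
```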
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Adversarial Stacked Auto-Encoders for Fair Representation Learning [1.061960673667643]
We propose a new fair representation learning approach that leverages different levels of representation of data to tighten the fairness bounds of the learned representation.
Our results show that stacking different auto-encoders and enforcing fairness at different latent spaces result in an improvement of fairness compared to other existing approaches.
arXiv Detail & Related papers (2021-07-27T13:49:18Z)
- A Framework to Enhance Generalization of Deep Metric Learning methods using General Discriminative Feature Learning and Class Adversarial Neural Networks [1.5469452301122175]
Metric learning algorithms aim to learn a distance function that brings semantically similar data items together and keeps dissimilar ones at a distance.
Deep Metric Learning (DML) methods automatically extract features from data and learn a non-linear transformation from the input space to a semantic embedding space.
We propose a framework to enhance the generalization power of existing DML methods in a Zero-Shot Learning (ZSL) setting.
arXiv Detail & Related papers (2021-06-11T14:24:40Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet)
Our method assembles two networks with different backbones to learn features that perform well under both classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
- MetaSDF: Meta-learning Signed Distance Functions [85.81290552559817]
Generalizing across shapes with neural implicit representations amounts to learning priors over the respective function space.
We formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task.
arXiv Detail & Related papers (2020-06-17T05:14:53Z)
- Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction [32.21975128854042]
We propose the principle of Maximal Coding Rate Reduction ($\text{MCR}^2$), an information-theoretic measure that maximizes the coding rate difference between the whole dataset and the sum of each individual class.
We clarify its relationships with most existing frameworks such as cross-entropy, information bottleneck, information gain, contractive and contrastive learning, and provide theoretical guarantees for learning diverse and discriminative features.
arXiv Detail & Related papers (2020-06-15T17:23:55Z)
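The coding-rate quantities behind $\text{MCR}^2$ can be sketched directly in NumPy. This is a minimal illustration of the published formulas, not the authors' code; the precision parameter `eps` and the function names are chosen here for illustration:

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """Rate-distortion coding rate of features Z (d x n): roughly the
    number of bits needed to encode the columns of Z up to precision eps,
    computed as (1/2) logdet(I + d/(n * eps^2) * Z Z^T)."""
    d, n = Z.shape
    return 0.5 * np.linalg.slogdet(
        np.eye(d) + (d / (n * eps ** 2)) * (Z @ Z.T)
    )[1]

def coding_rate_reduction(Z, labels, eps=0.5):
    """MCR^2 objective: rate of the whole feature set minus the
    size-weighted sum of per-class coding rates."""
    n = Z.shape[1]
    per_class = sum(
        (np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels)
    )
    return coding_rate(Z, eps) - per_class
```

Maximizing the reduction expands the whole set (diverse features) while compressing each class (discriminative features), which is the trade-off the summary above describes.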
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.