Evolutionary Simplicial Learning as a Generative and Compact Sparse
Framework for Classification
- URL: http://arxiv.org/abs/2005.07076v1
- Date: Thu, 14 May 2020 15:44:56 GMT
- Title: Evolutionary Simplicial Learning as a Generative and Compact Sparse
Framework for Classification
- Authors: Yigit Oktar, Mehmet Turkan
- Abstract summary: Simplicial learning is an adaptation of dictionary learning, where subspaces become clipped and acquire arbitrary offsets.
This paper proposes an evolutionary simplicial learning method as a generative and compact sparse framework for classification.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dictionary learning for sparse representations has been successful in many
reconstruction tasks. Simplicial learning is an adaptation of dictionary
learning, where subspaces become clipped and acquire arbitrary offsets, taking
the form of simplices. Such adaptation is achieved through additional
constraints on sparse codes. Furthermore, an evolutionary approach can be
chosen to determine the number and the dimensionality of the simplices
composing the simplicial, in which the most generative and compact simplicials
are favored.
This paper proposes an evolutionary simplicial learning method as a generative
and compact sparse framework for classification. The proposed approach is first
applied to a one-class classification task, where it proves to be the most
reliable method within the considered benchmark. The most striking results are
observed when evolutionary simplicial learning is applied to a multi-class
classification task. Because sparse representations are generative in nature,
they suffer from a fundamental limitation: they cannot distinguish two classes
lying on the same subspace. This claim is validated through synthetic
experiments, which also demonstrate the superiority of simplicial learning even
as a purely generative approach. Simplicial learning loses its advantage over
discriminative methods in high-dimensional cases, but it can be further
modified with discriminative elements to achieve state-of-the-art performance
in classification tasks.
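To make the coding constraint concrete, the following is a minimal sketch of simplex-constrained coding with reconstruction-based classification, assuming per-class dictionaries whose columns act as simplex vertices. The SLSQP solver, the toy dictionaries, and the two-segment example are illustrative choices, not the authors' evolutionary procedure.
```python
import numpy as np
from scipy.optimize import minimize

def simplex_code(x, D):
    """Code x as a convex combination of the columns of D.

    The constraints c >= 0 and sum(c) = 1 clip the spanned subspace
    into a simplex with an arbitrary offset, as described above.
    """
    k = D.shape[1]
    c0 = np.full(k, 1.0 / k)  # start at the barycenter of the simplex
    res = minimize(
        lambda c: 0.5 * np.sum((D @ c - x) ** 2),   # reconstruction error
        c0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * k,                    # nonnegative codes
        constraints={"type": "eq", "fun": lambda c: np.sum(c) - 1.0},
    )
    return res.x, float(np.linalg.norm(D @ res.x - x))

def classify(x, dictionaries):
    """Assign x to the class whose simplicial reconstructs it best."""
    errors = [simplex_code(x, D)[1] for D in dictionaries]
    return int(np.argmin(errors))

# Two toy classes on the SAME 1-D subspace but on disjoint segments:
# an unclipped subspace model cannot separate them; simplices can.
D0 = np.array([[0.0, 1.0], [0.0, 1.0]])   # segment from (0,0) to (1,1)
D1 = np.array([[2.0, 3.0], [2.0, 3.0]])   # segment from (2,2) to (3,3)
print(classify(np.array([0.4, 0.5]), [D0, D1]))   # -> 0
print(classify(np.array([2.6, 2.5]), [D0, D1]))   # -> 1
```
The toy data places both classes on the same one-dimensional subspace, so an unconstrained subspace coder would assign near-zero error to both classes, while the clipped simplices separate them, mirroring the synthetic experiments described above.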
Related papers
- Cross-Class Feature Augmentation for Class Incremental Learning [45.91253737682168]
We propose a novel class incremental learning approach by incorporating a feature augmentation technique motivated by adversarial attacks.
The proposed approach offers a unique perspective on utilizing previous knowledge in class-incremental learning, since it augments the features of arbitrary target classes.
Our method consistently outperforms existing class incremental learning methods by significant margins in various scenarios.
arXiv Detail & Related papers (2023-04-04T15:48:09Z)
- Learning Context-aware Classifier for Semantic Segmentation [88.88198210948426]
In this paper, contextual hints are exploited via learning a context-aware classifier.
Our method is model-agnostic and can be easily applied to generic segmentation models.
With negligible additional parameters and only +2% inference time, a decent performance gain is achieved on both small and large models.
arXiv Detail & Related papers (2023-03-21T07:00:35Z)
- Generalization Bounds for Few-Shot Transfer Learning with Pretrained Classifiers [26.844410679685424]
We study the ability of foundation models to learn representations for classification that are transferable to new, unseen classes.
We show that the few-shot error of the learned feature map on new classes is small when class-feature variability collapses.
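A rough sketch of this transfer setting, under the assumption of a frozen feature map and nearest-prototype classification on new classes; the `feat` placeholder and toy data are illustrative, and the paper's bound concerns this style of rule rather than this exact code:
```python
import numpy as np

def nearest_prototype(feat, X_support, y_support, X_query):
    """Classify queries by distance to per-class feature means."""
    classes = np.unique(y_support)
    protos = np.stack([feat(X_support[y_support == c]).mean(axis=0)
                       for c in classes])
    d = np.linalg.norm(feat(X_query)[:, None, :] - protos[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]

# placeholder for a frozen pretrained feature map
feat = lambda X: np.asarray(X, dtype=float)

rng = np.random.default_rng(0)
Xs = np.concatenate([rng.normal(0, 0.1, (5, 4)), rng.normal(1, 0.1, (5, 4))])
ys = np.array([0] * 5 + [1] * 5)
print(nearest_prototype(feat, Xs, ys, rng.normal(1, 0.1, (3, 4))))  # -> [1 1 1]
```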
arXiv Detail & Related papers (2022-12-23T18:46:05Z)
- Learning Primitive-aware Discriminative Representations for Few-shot Learning [28.17404445820028]
Few-shot learning aims to learn a classifier that can be easily adapted to recognize novel classes with only a few labeled examples.
We propose a Primitive Mining and Reasoning Network (PMRN) to learn primitive-aware representations.
Our method achieves state-of-the-art results on six standard benchmarks.
arXiv Detail & Related papers (2022-08-20T16:22:22Z)
- Learning Debiased and Disentangled Representations for Semantic Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
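A minimal numpy sketch of such an objective, in which representations sharing a label are pulled together; the temperature value and function name are illustrative assumptions rather than the paper's exact formulation:
```python
import numpy as np

def sup_contrastive_loss(z, labels, tau=0.1):
    """z: (n, d) L2-normalized representations; labels: (n,) class ids."""
    n = z.shape[0]
    sim = z @ z.T / tau                           # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # average log-probability over same-class (positive) pairs per anchor
    n_pos = pos.sum(axis=1)
    per_anchor = -np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(n_pos, 1)
    return per_anchor[n_pos > 0].mean()

z = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
z = z / np.linalg.norm(z, axis=1, keepdims=True)
print(sup_contrastive_loss(z, np.array([0, 0, 1])))  # lower as same-class pairs align
```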
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Open-Set Representation Learning through Combinatorial Embedding [62.05670732352456]
We are interested in identifying novel concepts in a dataset through representation learning based on the examples in both labeled and unlabeled classes.
We propose a learning approach, which naturally clusters examples in unseen classes using the compositional knowledge given by multiple supervised meta-classifiers on heterogeneous label spaces.
The proposed algorithm discovers novel concepts via joint optimization: enhancing the discriminativeness of unseen classes while learning representations of known classes that generalize to novel ones.
arXiv Detail & Related papers (2021-06-29T11:51:57Z)
- Class-Incremental Learning with Generative Classifiers [6.570917734205559]
We propose a new strategy for class-incremental learning: generative classification.
Our proposal is to learn the joint distribution p(x,y), factorized as p(x|y)p(y), and to perform classification using Bayes' rule.
As a proof-of-principle, here we implement this strategy by training a variational autoencoder for each class to be learned.
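The decision rule can be sketched with any per-class density model; in the sketch below a Gaussian stands in for the per-class VAE (an assumption made for brevity), while the Bayes-rule structure matches the description above:
```python
import numpy as np
from scipy.stats import multivariate_normal

class GenerativeClassifier:
    """Classify via argmax_y log p(x|y) + log p(y)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.models_, self.priors_ = [], []
        for c in self.classes_:
            Xc = X[y == c]
            # per-class density p(x|y=c); small ridge keeps cov invertible
            self.models_.append(multivariate_normal(
                mean=Xc.mean(axis=0),
                cov=np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])))
            self.priors_.append(len(Xc) / len(X))   # empirical p(y=c)
        return self

    def predict(self, X):
        scores = np.stack([m.logpdf(X) + np.log(p)  # log p(x|y) + log p(y)
                           for m, p in zip(self.models_, self.priors_)], axis=1)
        return self.classes_[np.argmax(scores, axis=1)]

rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(GenerativeClassifier().fit(X, y).predict(np.array([[3.0, 2.8]])))  # -> [1]
```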
arXiv Detail & Related papers (2021-04-20T16:26:14Z)
- Affinity-Based Hierarchical Learning of Dependent Concepts for Human Activity Recognition [6.187780920448871]
We show that the organization of overlapping classes into hierarchies considerably improves classification performance.
This is particularly true in the case of activity recognition tasks featured in the SHL dataset.
We propose an approach based on transfer affinity among the classes to determine an optimal hierarchy for the learning process.
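As a loose illustration of that last step, a pairwise class-affinity matrix can be turned into a hierarchy by agglomerative clustering; the affinity values below are invented, and the paper's transfer-affinity computation is not reproduced:
```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# illustrative pairwise affinities between four activity classes
# (higher = easier transfer between the two classes)
affinity = np.array([[1.0, 0.8, 0.2, 0.1],
                     [0.8, 1.0, 0.3, 0.2],
                     [0.2, 0.3, 1.0, 0.7],
                     [0.1, 0.2, 0.7, 1.0]])
distance = 1.0 - affinity                       # convert affinity to distance
condensed = squareform(distance, checks=False)  # condensed form for linkage
tree = linkage(condensed, method="average")     # class hierarchy (dendrogram)
print(tree)  # each row: merged clusters, merge distance, cluster size
```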
arXiv Detail & Related papers (2021-04-11T01:08:48Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
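The two-stage recipe can be sketched as follows, with a placeholder encoder standing in for the self-supervised representation (stage one is not actually trained here) and scikit-learn's OneClassSVM as one possible second-stage classifier:
```python
import numpy as np
from sklearn.svm import OneClassSVM

def encode(X):
    """Placeholder featurizer; a real system would apply a frozen
    self-supervised encoder trained on the one-class data."""
    return np.asarray(X, dtype=float)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))        # single-class training data

# Stage 1: map inputs to learned representations
Z_train = encode(X_train)

# Stage 2: fit a one-class classifier on those representations
clf = OneClassSVM(kernel="rbf", nu=0.1).fit(Z_train)

# +1 for inliers, -1 for outliers
print(clf.predict(encode(rng.normal(loc=5.0, size=(3, 8)))))  # likely all -1
```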
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- M2m: Imbalanced Classification via Major-to-minor Translation [79.09018382489506]
In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them struggle to generalize to a balanced testing criterion.
In this paper, we explore a novel yet simple way to alleviate this issue by augmenting less-frequent classes through translation of samples from more-frequent classes.
Our experimental results on a variety of class-imbalanced datasets show that the proposed method improves the generalization on minority classes significantly compared to other existing re-sampling or re-weighting methods.
arXiv Detail & Related papers (2020-04-01T13:21:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.