Convolutional autoencoder-based multimodal one-class classification
- URL: http://arxiv.org/abs/2309.14090v1
- Date: Mon, 25 Sep 2023 12:31:18 GMT
- Title: Convolutional autoencoder-based multimodal one-class classification
- Authors: Firas Laakom, Fahad Sohrab, Jenni Raitoharju, Alexandros Iosifidis,
Moncef Gabbouj
- Abstract summary: One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
- Score: 80.52334952912808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One-class classification refers to approaches of learning using data from a
single class only. In this paper, we propose a deep learning one-class
classification method suitable for multimodal data, which relies on two
convolutional autoencoders jointly trained to reconstruct the positive input
data while obtaining the data representations in the latent space as compact as
possible. During inference, the distance of the latent representation of an
input to the origin can be used as an anomaly score. Experimental results using
a multimodal macroinvertebrate image classification dataset show that the
proposed multimodal method yields better results than the unimodal
approach. Furthermore, we study the effect of different input image sizes, and we
investigate how recently proposed feature diversity regularizers affect the
performance of our approach. We show that such regularizers improve
performance.
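Since the abstract describes the architecture and objective but no code, the following is a minimal sketch of the described setup, assuming PyTorch: two convolutional autoencoders (one per modality) are trained jointly on positive data with a reconstruction loss plus a compactness penalty that pulls latent codes toward the origin, and the anomaly score at inference is the distance of the latent codes to the origin. The layer sizes (32x32 inputs), the penalty weight lam, and the summation of the two modality scores are illustrative assumptions, not choices confirmed by the paper.

```python
# A minimal sketch of the described method, assuming PyTorch; layer sizes,
# `lam`, and the score combination rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """One convolutional autoencoder per modality."""
    def __init__(self, in_channels=3, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),           # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(16, in_channels, 4, stride=2, padding=1),    # 16x16 -> 32x32
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def joint_loss(models, inputs, lam=0.1):
    """Reconstruct the positive inputs of every modality while pulling
    the latent codes of both autoencoders toward the origin."""
    total = 0.0
    for model, x in zip(models, inputs):
        x_hat, z = model(x)
        total = total + F.mse_loss(x_hat, x) + lam * z.pow(2).sum(dim=1).mean()
    return total

@torch.no_grad()
def anomaly_score(models, inputs):
    """Distance of the latent representations to the origin, summed over
    modalities; larger scores indicate more anomalous inputs."""
    return sum(model.encoder(x).norm(dim=1) for model, x in zip(models, inputs))
```

For the two-modality case of the paper, models = [ConvAutoencoder(), ConvAutoencoder()] with a single optimizer over both parameter sets minimizing joint_loss on positive training batches completes the training loop; thresholding anomaly_score then separates the positive class from anomalies.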
Related papers
- An Enhanced Federated Prototype Learning Method under Domain Shift [36.73020712815063]
Federated Learning (FL) allows collaborative machine learning training without sharing private data.
The paper introduces variance-aware dual-level prototype clustering and a novel $\alpha$-sparsity prototype loss.
Evaluations on the Digit-5, Office-10, and DomainNet datasets show that our method performs better than existing approaches.
arXiv Detail & Related papers (2024-09-27T09:28:27Z) - On Modality Bias Recognition and Reduction [70.69194431713825]
We study the modality bias problem in the context of multi-modal classification.
We propose a plug-and-play loss function method, whereby the feature space for each label is adaptively learned.
Our method yields remarkable performance improvements compared with the baselines.
arXiv Detail & Related papers (2022-02-25T13:47:09Z) - Generalized One-Class Learning Using Pairs of Complementary Classifiers [41.64645294104883]
One-class learning is the classic problem of fitting a model to the data for which annotations are available only for a single class.
In this paper, we explore novel objectives for one-class learning, which we collectively refer to as Generalized One-class Discriminative Subspaces (GODS).
arXiv Detail & Related papers (2021-06-24T18:52:05Z) - No Fear of Heterogeneity: Classifier Calibration for Federated Learning
with Non-IID Data [78.69828864672978]
A central challenge in training classification models in real-world federated systems is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model (see the sketch after this list).
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z) - Few-Shot Named Entity Recognition: A Comprehensive Study [92.40991050806544]
We investigate three schemes to improve the model generalization ability for few-shot settings.
We perform empirical comparisons on 10 public NER datasets with various proportions of labeled data.
We achieve new state-of-the-art results in both few-shot and training-free settings.
arXiv Detail & Related papers (2020-12-29T23:43:16Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Mixing Consistent Deep Clustering [3.5786621294068373]
Good latent representations produce semantically mixed outputs when decoding linear interpolations of two latent representations.
We propose the Mixing Consistent Deep Clustering method, which encourages interpolated representations to appear realistic (a sketch follows after this list).
We show that the proposed method can be added to existing autoencoders to further improve clustering performance.
arXiv Detail & Related papers (2020-11-03T19:47:06Z) - Learning Inter- and Intra-manifolds for Matrix Factorization-based
Multi-Aspect Data Clustering [3.756550107432323]
Clustering data with multiple aspects, such as multi-view or multi-type relational data, has become popular in recent years.
We propose to include the inter-manifold in the NMF framework, utilizing the distance information of data points of different data types (or views) to learn the diverse manifold for data clustering.
Results on several datasets demonstrate that the proposed method outperforms the state-of-the-art multi-aspect data clustering methods in both accuracy and efficiency.
arXiv Detail & Related papers (2020-09-07T02:21:08Z) - Selecting Relevant Features from a Multi-domain Representation for
Few-shot Classification [91.67977602992657]
We propose a new strategy based on feature selection, which is both simpler and more effective than previous feature adaptation approaches.
We show that a simple non-parametric classifier built on top of such features produces high accuracy and generalizes to domains never seen during training.
arXiv Detail & Related papers (2020-03-20T15:44:17Z) - Deep Inverse Feature Learning: A Representation Learning of Error [6.5358895450258325]
This paper introduces a novel perspective about error in machine learning and proposes inverse feature learning (IFL) as a representation learning approach.
The inverse feature learning method builds on a deep clustering approach to obtain a qualitative representation of error as features.
The experimental results show that the proposed method leads to promising results in classification and especially in clustering.
arXiv Detail & Related papers (2020-03-09T17:45:44Z)
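Two of the summaries above describe mechanisms concretely enough to sketch. First, the CCVR entry: a hedged illustration of classifier calibration with virtual representations, based only on the summary above. The real CCVR estimates the Gaussian feature statistics across federated clients; here a single pooled feature matrix stands in for that, and the logistic-regression head is an assumption.

```python
# Hedged sketch of CCVR-style calibration: fit one Gaussian per class to
# the feature vectors (an approximated Gaussian mixture), sample virtual
# representations, and refit the classifier head on them. The federated
# aggregation of the Gaussian statistics is omitted.
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate_classifier(features, labels, n_virtual=1000, seed=0):
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for c in np.unique(labels):
        fc = features[labels == c]            # features of class c
        mu = fc.mean(axis=0)
        cov = np.cov(fc, rowvar=False)        # needs >= 2 samples per class
        virtual = rng.multivariate_normal(mu, cov, size=n_virtual)
        xs.append(virtual)
        ys.append(np.full(n_virtual, c))
    head = LogisticRegression(max_iter=1000)  # assumed classifier head
    head.fit(np.vstack(xs), np.concatenate(ys))
    return head
```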
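Second, the Mixing Consistent Deep Clustering entry: one plausible reading of the latent-mixing idea, in which decoding a linear interpolation of two latent codes and re-encoding the result should land back on the interpolated code. The consistency target below is an assumption (the paper's own objective may differ), and model is assumed to expose encoder and decoder modules as in the autoencoder sketch above.

```python
import torch
import torch.nn.functional as F

def mixing_consistency_loss(model, x1, x2):
    """Penalize the autoencoder when the decoded interpolation of two
    latent codes does not re-encode to the interpolated code."""
    lam = torch.rand(x1.size(0), 1, device=x1.device)  # per-sample mix ratio in [0, 1)
    z1, z2 = model.encoder(x1), model.encoder(x2)
    z_mix = lam * z1 + (1.0 - lam) * z2                # interpolate in latent space
    x_mix = model.decoder(z_mix)                       # decode the mixture
    z_back = model.encoder(x_mix)                      # re-encode the decoded mixture
    return F.mse_loss(z_back, z_mix)
```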
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.