Adversarial Multi-Binary Neural Network for Multi-class Classification
- URL: http://arxiv.org/abs/2003.11184v1
- Date: Wed, 25 Mar 2020 02:19:17 GMT
- Title: Adversarial Multi-Binary Neural Network for Multi-class Classification
- Authors: Haiyang Xu, Junwen Chen, Kun Han, Xiangang Li
- Abstract summary: We use a multi-task framework to address multi-class classification.
We employ adversarial training to distinguish the class-specific features and the class-agnostic features.
- Score: 19.298875915675502
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-class text classification is one of the key problems in machine
learning and natural language processing. Emerging neural networks deal with
the problem using a multi-output softmax layer and achieve substantial
progress, but they do not explicitly learn the correlation among classes. In
this paper, we use a multi-task framework to address multi-class
classification, where a multi-class classifier and multiple binary classifiers
are trained together. Moreover, we employ adversarial training to distinguish
the class-specific features and the class-agnostic features. The model benefits
from better feature representation. We conduct experiments on two large-scale
multi-class text classification tasks and demonstrate that the proposed
architecture outperforms baseline approaches.
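The joint objective sketched in the abstract (one softmax multi-class head trained together with per-class one-vs-rest binary heads) can be illustrated as below. This is a minimal NumPy sketch under stated assumptions: the function names, the equal weighting of the two terms, and the omission of the paper's adversarial feature-separation term are simplifications, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multi_task_loss(mc_logits, bin_logits, labels, num_classes):
    """Multi-class cross-entropy plus one one-vs-rest binary
    cross-entropy per class, averaged over the batch.
    (Adversarial term from the paper is omitted.)"""
    n = labels.shape[0]
    probs = softmax(mc_logits)                        # (n, K) softmax head
    ce = -np.log(probs[np.arange(n), labels]).mean()  # multi-class term
    targets = np.eye(num_classes)[labels]             # (n, K) one-vs-rest labels
    p = sigmoid(bin_logits)                           # (n, K) binary heads
    bce = -(targets * np.log(p) + (1 - targets) * np.log(1 - p)).mean()
    return ce + bce
```

In the paper the two heads share an encoder, so gradients from the binary terms shape the same features the multi-class head consumes; here the heads' logits are simply passed in.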
Related papers
- On the Limits of Multi-modal Meta-Learning with Auxiliary Task Modulation Using Conditional Batch Normalization [35.39571632348391]

Few-shot learning aims to learn representations that can tackle novel tasks.
Recent studies show that cross-modal learning can improve representations for few-shot classification.
Language is a rich modality that can be used to guide visual learning.
arXiv Detail & Related papers (2024-05-29T04:29:12Z)
- Class Binarization to NeuroEvolution for Multiclass Classification [9.179400849826216]
Multiclass classification is a fundamental and challenging task in machine learning.
Decomposing multiclass classification into a set of binary classifications is called class binarization.
We propose a new method that applies Error-Correcting Output Codes (ECOC) to design class binarization strategies for neuroevolution-based multiclass classification.
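Class binarization with ECOC reduces to training one binary learner per codeword bit and decoding a test point by nearest codeword in Hamming distance. A minimal sketch of the decoding step, where the codeword matrix in the test is an illustrative example rather than one from the paper:

```python
import numpy as np

def ecoc_decode(codewords, bit_predictions):
    """Pick, per sample, the class whose codeword has minimum
    Hamming distance to the binary learners' predicted bits."""
    # codewords: (num_classes, num_bits) in {0,1}
    # bit_predictions: (n_samples, num_bits) in {0,1}
    dists = np.abs(bit_predictions[:, None, :] - codewords[None, :, :]).sum(axis=-1)
    return dists.argmin(axis=1)
```

Because decoding tolerates up to floor((d-1)/2) bit errors for codewords at minimum Hamming distance d, ECOC adds robustness over plain one-vs-rest decomposition.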
arXiv Detail & Related papers (2023-08-26T13:26:13Z)
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
arXiv Detail & Related papers (2023-06-20T03:00:22Z) - A Multi-Class SWAP-Test Classifier [0.0]
This work presents the first multi-class SWAP-Test classifier inspired by its binary predecessor and the use of label states in recent work.
In contrast to previous work, the number of qubits required, the measurement strategy, and the topology of the circuits used are invariant to the number of classes.
Both analytical results and numerical simulations show that this classifier is not only effective on diverse classification problems but also robust under certain noise conditions.
arXiv Detail & Related papers (2023-02-06T18:31:43Z) - Generalization for multiclass classification with overparameterized
linear models [3.3434274586532515]
We show that multiclass classification behaves like binary classification in that, as long as there are not too many classes, it is possible to generalize well.
Besides various technical challenges, it turns out that the key difference from the binary classification setting is that there are relatively fewer positive training examples of each class in the multiclass setting as the number of classes increases.
arXiv Detail & Related papers (2022-06-03T05:52:43Z) - Semantic Representation and Dependency Learning for Multi-Label Image
Recognition [76.52120002993728]
We propose a novel and effective semantic representation and dependency learning (SRDL) framework to learn category-specific semantic representation for each category.
Specifically, we design a category-specific attentional regions (CAR) module that generates channel- and spatial-wise attention matrices to guide the model.
We also design an object erasing (OE) module to implicitly learn semantic dependency among categories by erasing semantic-aware regions.
arXiv Detail & Related papers (2022-04-08T00:55:15Z) - Gated recurrent units and temporal convolutional network for multilabel
classification [122.84638446560663]
This work proposes a new ensemble method for managing multilabel classification.
The core of the proposed approach combines a set of gated recurrent units and temporal convolutional neural networks trained with variants of the Adam gradients optimization approach.
arXiv Detail & Related papers (2021-10-09T00:00:16Z) - Learning and Evaluating Representations for Deep One-class
Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
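The two-stage recipe (learn representations first, then fit a one-class classifier on them) can be illustrated with a deliberately simple second stage. The mean-center distance scorer below is a hypothetical stand-in for the one-class classifiers used in the paper, and the feature vectors are assumed to come from an already-trained encoder:

```python
import numpy as np

def fit_center(features):
    """Stage two, simplified: summarize in-class training features
    by their mean (stand-in for a real one-class classifier)."""
    # features: (n_train, dim) representations from stage one
    return features.mean(axis=0)

def anomaly_score(center, x):
    """Distance to the in-class center; larger = more anomalous."""
    return np.linalg.norm(x - center, axis=-1)
```

In practice stage two would use something stronger (e.g. a kernel one-class SVM or a density model), but the interface is the same: fit on one-class features, score new points.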
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Metrics for Multi-Class Classification: an Overview [0.9176056742068814]
Classification tasks involving more than two classes are known as "multi-class classification".
Performance indicators are very useful when the aim is to evaluate and compare different classification models or machine learning techniques.
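As one example of such an indicator, macro-averaged F1 computes a one-vs-rest F1 per class and averages them with equal class weight, so minority classes count as much as majority ones. A minimal sketch, not tied to any particular paper's evaluation code:

```python
import numpy as np

def macro_f1(y_true, y_pred, num_classes):
    """Macro-averaged F1: per-class one-vs-rest F1,
    then an unweighted mean over classes."""
    f1s = []
    for c in range(num_classes):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))
```

Micro-averaged F1, by contrast, pools the counts over classes first and therefore tracks overall accuracy in the single-label case.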
arXiv Detail & Related papers (2020-08-13T08:41:44Z)
- Many-Class Few-Shot Learning on Multi-Granularity Class Hierarchy [57.68486382473194]
We study the many-class few-shot (MCFS) problem in both supervised learning and meta-learning settings.
In this paper, we leverage the class hierarchy as a prior knowledge to train a coarse-to-fine classifier.
The model, "memory-augmented hierarchical-classification network (MahiNet)", performs coarse-to-fine classification where each coarse class can cover multiple fine classes.
arXiv Detail & Related papers (2020-06-28T01:11:34Z)
- Learning Class Regularized Features for Action Recognition [68.90994813947405]
We introduce a novel method named Class Regularization that performs class-based regularization of layer activations.
We show that using Class Regularization blocks in state-of-the-art CNN architectures for action recognition leads to systematic improvement gains of 1.8%, 1.2% and 1.4% on the Kinetics, UCF-101 and HMDB-51 datasets, respectively.
arXiv Detail & Related papers (2020-02-07T07:27:49Z)