A Multiple Classifier Approach for Concatenate-Designed Neural Networks
- URL: http://arxiv.org/abs/2101.05457v1
- Date: Thu, 14 Jan 2021 04:32:40 GMT
- Title: A Multiple Classifier Approach for Concatenate-Designed Neural Networks
- Authors: Ka-Hou Chan, Sio-Kei Im and Wei Ke
- Abstract summary: We give the design of the classifiers, which collect the features produced between the network sets.
We use the L2 normalization method to obtain the classification score instead of the Softmax normalization.
As a result, the proposed classifiers are able to improve the accuracy in the experimental cases.
- Score: 13.017053017670467
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This article introduces a multiple classifier method to improve the
performance of concatenate-designed neural networks, such as ResNet and
DenseNet, with the aim of alleviating the pressure on the final classifier.
We give the design of the classifiers, which collect the features produced
between the network sets, and present the constituent layers and the activation
function used to calculate each classifier's classification score. We use the
L2 normalization method to obtain the classifier score instead of the Softmax
normalization. We also determine the conditions that can enhance convergence.
As a result, the proposed classifiers significantly improve accuracy in the
experimental cases, showing that the method not only performs better than the
original models but also converges faster. Moreover, our classifiers are
general and can be applied to all classification-related concatenate-designed
network models.
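The L2-normalization scoring the abstract contrasts with Softmax can be illustrated with a minimal NumPy sketch. This shows the general cosine-style scoring idea (unit-norm features dotted with unit-norm class weight vectors) under hypothetical feature and weight shapes; it is not the paper's exact layer design.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    """Scale vectors along `axis` to unit L2 norm."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def l2_scores(features, weights):
    """Cosine-style class scores: dot products of unit-norm features
    and unit-norm class weight columns, so every score lies in [-1, 1]
    regardless of feature magnitude."""
    f = l2_normalize(features)           # (batch, dim)
    w = l2_normalize(weights, axis=0)    # (dim, classes)
    return f @ w

def softmax_scores(logits):
    """Conventional Softmax normalization, for comparison."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))   # hypothetical intermediate features
w = rng.normal(size=(8, 3))       # hypothetical classifier weights
scores = l2_scores(feats, w)
print(scores.shape)               # (4, 3)
```

Because the scores are bounded cosines rather than unbounded logits, each intermediate classifier produces comparably scaled outputs, which is one plausible reason such scores are easier to combine across network sets.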
Related papers
- Classifier Chain Networks for Multi-Label Classification [0.0]
The classifier chain is a widely used method for analyzing multi-labeled data sets.
We introduce a generalization of the chain: the classifier chain network.
arXiv Detail & Related papers (2024-11-04T21:56:13Z)
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
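The early-exit idea in the summary above can be sketched schematically: a classification branch is run stage by stage, and inference stops at the first head whose confidence clears a threshold. The `stages`, `heads`, and threshold below are hypothetical placeholders, not the actual Dyn-Perceiver layers.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def early_exit_classify(x, stages, heads, threshold=0.9):
    """Run the classification branch stage by stage; exit as soon as
    one head's top softmax probability clears the threshold."""
    for i, (stage, head) in enumerate(zip(stages, heads)):
        x = stage(x)
        probs = softmax(head(x))
        if probs.max() >= threshold:
            return i, probs            # early exit at stage i
    return len(stages) - 1, probs      # fall through to the final head

rng = np.random.default_rng(2)
dim, classes = 8, 4
# hypothetical stage transforms and classifier heads
stages = [lambda x, W=rng.normal(size=(dim, dim)): np.tanh(x @ W)
          for _ in range(3)]
heads = [lambda x, W=rng.normal(size=(dim, classes)): x @ W
         for _ in range(3)]
exit_idx, probs = early_exit_classify(rng.normal(size=(dim,)),
                                      stages, heads, threshold=0.5)
print(exit_idx, probs.shape)
```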
arXiv Detail & Related papers (2023-06-20T03:00:22Z)
- Anomaly Detection using Ensemble Classification and Evidence Theory [62.997667081978825]
We present a novel approach for anomaly detection using ensemble classification and evidence theory.
A pool selection strategy is presented to build a solid ensemble classifier.
We use classification uncertainty to drive the anomaly detection approach.
arXiv Detail & Related papers (2022-12-23T00:50:41Z)
- On the rate of convergence of a classifier based on a Transformer encoder [55.41148606254641]
The rate of convergence of the misclassification probability of the classifier towards the optimal misclassification probability is analyzed.
It is shown that this classifier is able to circumvent the curse of dimensionality provided the a posteriori probability satisfies a suitable hierarchical composition model.
arXiv Detail & Related papers (2021-11-29T14:58:29Z)
- CondNet: Conditional Classifier for Scene Segmentation [46.62529212678346]
We present a conditional classifier to replace the traditional global classifier.
It attends to the intra-class distinction, leading to stronger dense recognition capability.
The framework equipped with the conditional classifier (called CondNet) achieves new state-of-the-art performance on two datasets.
arXiv Detail & Related papers (2021-09-21T17:19:09Z)
- Multiple Classifiers Based Maximum Classifier Discrepancy for Unsupervised Domain Adaptation [25.114533037440896]
We propose to extend the structure of two classifiers to multiple classifiers to further boost its performance.
We demonstrate that, on average, adopting the structure of three classifiers yields the best performance as a trade-off between accuracy and efficiency.
arXiv Detail & Related papers (2021-08-02T03:00:13Z)
- An evidential classifier based on Dempster-Shafer theory and deep learning [6.230751621285322]
We propose a new classification system based on Dempster-Shafer (DS) theory and a convolutional neural network (CNN) architecture for set-valued classification.
Experiments on image recognition, signal processing, and semantic-relationship classification tasks demonstrate that the proposed combination of deep CNN, DS layer, and expected utility layer makes it possible to improve classification accuracy.
arXiv Detail & Related papers (2021-03-25T01:29:05Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
- Equivalent Classification Mapping for Weakly Supervised Temporal Action Localization [92.58946210982411]
Weakly supervised temporal action localization is a recently emerging and widely studied topic.
The pre-classification pipeline first performs classification on each video snippet and then aggregates the snippet-level classification scores to obtain the video-level classification score.
The post-classification pipeline aggregates the snippet-level features first and then predicts the video-level classification score based on the aggregated feature.
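The two pipelines contrasted above differ only in where the aggregation happens, which a small NumPy sketch makes concrete. The linear classifier and random snippet features below are hypothetical stand-ins, not the models from the paper.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def pre_classification(snippet_feats, classify):
    """Classify each snippet first, then average the snippet-level scores."""
    snippet_scores = softmax(classify(snippet_feats))    # (T, classes)
    return snippet_scores.mean(axis=0)                   # (classes,)

def post_classification(snippet_feats, classify):
    """Average the snippet features first, then classify once."""
    video_feat = snippet_feats.mean(axis=0, keepdims=True)   # (1, dim)
    return softmax(classify(video_feat))[0]                  # (classes,)

# hypothetical linear classifier and random snippet features
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 5))
classify = lambda x: x @ W
feats = rng.normal(size=(10, 16))   # 10 snippets, 16-dim features
print(pre_classification(feats, classify).shape)    # (5,)
print(post_classification(feats, classify).shape)   # (5,)
```

Both return a video-level class distribution; the pre-classification route averages probabilities while the post-classification route classifies an averaged feature.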
arXiv Detail & Related papers (2020-08-18T03:54:56Z)
- Conditional Classification: A Solution for Computational Energy Reduction [2.182419181054266]
We propose a novel solution to reduce the computational complexity of convolutional neural network models.
Our proposed technique breaks the classification task into two steps: 1) coarse-grain classification, in which the input samples are classified among a set of hyper-classes, 2) fine-grain classification, in which the final labels are predicted among those hyper-classes detected at the first step.
arXiv Detail & Related papers (2020-06-29T03:50:39Z)
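The coarse-then-fine scheme in the entry above can be sketched as follows. The hyper-class mapping, models, and dimensions are hypothetical illustrations of the two-step idea, not the paper's architecture; the energy saving comes from running only the one fine classifier selected by the coarse step.

```python
import numpy as np

# Hypothetical mapping from coarse hyper-classes to fine labels.
HYPER_CLASSES = {
    "animal": ["cat", "dog", "horse"],
    "vehicle": ["car", "truck"],
}

def argmax_label(scores, labels):
    """Return the label whose score is largest."""
    return labels[int(np.argmax(scores))]

def conditional_classify(x, coarse_model, fine_models):
    """Step 1: pick a hyper-class; step 2: run only that hyper-class's
    fine classifier, skipping all the others."""
    coarse = argmax_label(coarse_model(x), list(HYPER_CLASSES))
    fine = argmax_label(fine_models[coarse](x), HYPER_CLASSES[coarse])
    return coarse, fine

# hypothetical linear models
rng = np.random.default_rng(1)
dim = 8
coarse_W = rng.normal(size=(dim, len(HYPER_CLASSES)))
fine_Ws = {h: rng.normal(size=(dim, len(v)))
           for h, v in HYPER_CLASSES.items()}
coarse_model = lambda x: x @ coarse_W
fine_models = {h: (lambda x, W=W: x @ W) for h, W in fine_Ws.items()}

x = rng.normal(size=(dim,))
coarse, fine = conditional_classify(x, coarse_model, fine_models)
print(coarse, fine)
```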
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.