CondNet: Conditional Classifier for Scene Segmentation
- URL: http://arxiv.org/abs/2109.10322v1
- Date: Tue, 21 Sep 2021 17:19:09 GMT
- Title: CondNet: Conditional Classifier for Scene Segmentation
- Authors: Changqian Yu and Yuanjie Shao and Changxin Gao and Nong Sang
- Abstract summary: We present a conditional classifier to replace the traditional global classifier.
It attends to the intra-class distinction, leading to stronger dense recognition capability.
The framework equipped with the conditional classifier (called CondNet) achieves new state-of-the-art performance on two datasets.
- Score: 46.62529212678346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The fully convolutional network (FCN) has achieved tremendous success in
dense visual recognition tasks, such as scene segmentation. The last layer of
an FCN is typically a global classifier (a 1x1 convolution) that assigns a semantic
label to each pixel. We empirically show that this global classifier, which ignores
the intra-class distinction, may lead to sub-optimal results.
In this work, we present a conditional classifier to replace the traditional
global classifier, where the kernels of the classifier are generated
dynamically conditioned on the input. The main advantages of the new classifier
are: (i) it attends to the intra-class distinction, leading to stronger
dense recognition capability; (ii) it is simple and flexible enough to be
integrated into almost any FCN architecture to improve
the prediction. Extensive experiments demonstrate that the proposed classifier
performs favourably against the traditional classifier on the FCN architecture.
The framework equipped with the conditional classifier (called CondNet)
achieves new state-of-the-art performances on two datasets. The code and models
are available at https://git.io/CondNet.
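The dynamic-kernel idea in the abstract can be sketched in a few lines. The PyTorch module below is a minimal illustration under assumed design choices (global average pooling as the conditioning signal and a single 1x1 kernel-generating layer), not the authors' exact CondNet head:

```python
import torch
import torch.nn as nn


class ConditionalClassifier(nn.Module):
    """Sketch: per-sample 1x1 classifier kernels generated from the input,
    in place of a shared global 1x1-convolution classifier."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        # Kernel-generating head (assumption): pooled descriptor -> one set of
        # 1x1 kernels (num_classes x in_channels) per sample.
        self.kernel_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_channels, num_classes * in_channels, kernel_size=1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feats.shape
        kernels = self.kernel_head(feats).view(b, self.num_classes, c)
        # Apply each sample's own kernels to its own feature map.
        return torch.einsum("bkc,bchw->bkhw", kernels, feats)


if __name__ == "__main__":
    head = ConditionalClassifier(in_channels=256, num_classes=19)
    x = torch.randn(2, 256, 64, 64)   # backbone feature maps
    print(head(x).shape)              # torch.Size([2, 19, 64, 64])
```

In a standard FCN this module would simply replace the final 1x1 convolution; the per-pixel logits it returns are upsampled and supervised exactly as before.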
Related papers
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
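A rough sketch of that decoupling, under assumed stage sizes and a simple confidence-threshold exit rule (neither is the actual Dyn-Perceiver design):

```python
import torch
import torch.nn as nn


class DualBranchEarlyExit(nn.Module):
    """Sketch: a feature branch extracts features; a latent code is refined by
    cross-attention and classified, with early exits only on the latent branch."""

    def __init__(self, dim=128, num_classes=10, num_stages=3, num_latents=8):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.Conv2d(dim, dim, 3, padding=1), nn.GELU())
             for _ in range(num_stages)]
        )
        self.latent = nn.Parameter(torch.randn(num_latents, dim))
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
             for _ in range(num_stages)]
        )
        self.exits = nn.ModuleList([nn.Linear(dim, num_classes) for _ in range(num_stages)])

    def forward(self, x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
        # x: stem features of shape (B, dim, H, W)
        z = self.latent.unsqueeze(0).expand(x.size(0), -1, -1)
        for stage, attn, exit_head in zip(self.stages, self.cross_attn, self.exits):
            x = stage(x)                               # feature branch
            tokens = x.flatten(2).transpose(1, 2)      # (B, HW, dim)
            z, _ = attn(z, tokens, tokens)             # latent branch queries features
            logits = exit_head(z.mean(dim=1))          # classifier on the latent code
            if logits.softmax(-1).max(-1).values.min() > threshold:
                break                                  # all samples confident: exit early
        return logits
```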
arXiv Detail & Related papers (2023-06-20T03:00:22Z)
- Neural Collapse Inspired Feature-Classifier Alignment for Few-Shot Class Incremental Learning [120.53458753007851]
Few-shot class-incremental learning (FSCIL) has been a challenging problem as only a few training samples are accessible for each novel class in the new sessions.
We tackle the resulting misalignment dilemma between features and the classifier in FSCIL, inspired by the recently discovered phenomenon named neural collapse.
We propose a neural-collapse-inspired framework for FSCIL. Experiments on the miniImageNet, CUB-200, and CIFAR-100 datasets demonstrate that it outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-02-06T18:39:40Z)
- Prototype Based Classification from Hierarchy to Fairness [7.129830575525267]
A new neural network architecture, the concept subspace network (CSN), generalizes existing specialized classifiers to produce a unified model.
CSNs reproduce state-of-the-art results in fair classification when enforcing concept independence.
The CSN is inspired by existing prototype-based classifiers that promote interpretability.
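A compact sketch of the prototype-based scoring that such classifiers rely on; the encoder size and squared-Euclidean distance are illustrative choices, not the CSN itself:

```python
import torch
import torch.nn as nn


class PrototypeClassifier(nn.Module):
    """Sketch: class scores are negative squared distances to learned prototypes."""

    def __init__(self, in_dim=784, embed_dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )
        self.prototypes = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)                            # (B, embed_dim)
        dist2 = torch.cdist(z, self.prototypes) ** 2   # (B, num_classes)
        return -dist2  # closest prototype scores highest; train with cross-entropy
```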
arXiv Detail & Related papers (2022-05-27T14:21:41Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of learning a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and fixed during training.
Our experimental results show that this achieves performance comparable to a learnable classifier on balanced image classification datasets.
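The fixed-ETF idea can be sketched directly; the construction below follows the standard simplex-ETF formula, while the head and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn


def simplex_etf(num_classes: int, feat_dim: int) -> torch.Tensor:
    """Build a (num_classes, feat_dim) simplex equiangular tight frame."""
    assert feat_dim >= num_classes, "this construction assumes feat_dim >= num_classes"
    u, _ = torch.linalg.qr(torch.randn(feat_dim, num_classes))   # orthonormal columns
    center = torch.eye(num_classes) - torch.full(
        (num_classes, num_classes), 1.0 / num_classes
    )
    m = (num_classes / (num_classes - 1)) ** 0.5 * u @ center    # feat_dim x num_classes
    return m.t()                                                 # one unit-norm row per class


class FixedETFHead(nn.Module):
    """Sketch: the classifier weights are generated once and never trained."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.register_buffer("weight", simplex_etf(num_classes, feat_dim))  # no gradients

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats @ self.weight.t()  # logits against the fixed ETF directions
```

Only the feature extractor beneath this head is optimized; the pairwise angles between the class vectors are equal by construction.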
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in real-world federated systems is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks, including CIFAR-10, CIFAR-100, and CINIC-10.
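A condensed sketch of the calibration step: per-class feature statistics are turned into virtual representations and only the classifier is re-fit on them. The diagonal-covariance simplification and the function signature are assumptions, not the paper's exact procedure:

```python
import torch
import torch.nn as nn


def calibrate_classifier(classifier: nn.Linear, class_stats: dict,
                         samples_per_class: int = 200, epochs: int = 10, lr: float = 0.01):
    """class_stats: {label: (mean, var)} of penultimate features, e.g. aggregated
    across clients without sharing raw data."""
    feats, labels = [], []
    for label, (mean, var) in class_stats.items():
        # Virtual representations sampled from a per-class Gaussian.
        z = mean + var.sqrt() * torch.randn(samples_per_class, mean.numel())
        feats.append(z)
        labels.append(torch.full((samples_per_class,), label, dtype=torch.long))
    feats, labels = torch.cat(feats), torch.cat(labels)

    opt = torch.optim.SGD(classifier.parameters(), lr=lr)
    for _ in range(epochs):  # only the classifier is updated; the backbone stays frozen
        opt.zero_grad()
        nn.functional.cross_entropy(classifier(feats), labels).backward()
        opt.step()
    return classifier
```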
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- An evidential classifier based on Dempster-Shafer theory and deep learning [6.230751621285322]
We propose a new classification system based on Dempster-Shafer (DS) theory and a convolutional neural network (CNN) architecture for set-valued classification.
Experiments on image recognition, signal processing, and semantic-relationship classification tasks demonstrate that the proposed combination of deep CNN, DS layer, and expected utility layer makes it possible to improve classification accuracy.
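The evidence-fusion core that a DS layer builds on is Dempster's rule of combination; the snippet below shows only that rule on toy mass functions, not the paper's distance-based DS layer or expected utility layer:

```python
from itertools import product


def dempster_combine(m1: dict, m2: dict) -> dict:
    """Combine two mass functions given as {frozenset_of_classes: mass}."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                  # mass falling on the empty set
    return {s: v / (1.0 - conflict) for s, v in combined.items()}  # normalize conflict away


# Two pieces of evidence over {cat, dog}: some mass on a singleton, the rest on
# the whole frame (ignorance), as in set-valued classification.
omega = frozenset({"cat", "dog"})
m_a = {frozenset({"cat"}): 0.6, omega: 0.4}
m_b = {frozenset({"dog"}): 0.3, omega: 0.7}
print(dempster_combine(m_a, m_b))  # {'cat'}: ~0.51, {'dog'}: ~0.15, omega: ~0.34
```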
arXiv Detail & Related papers (2021-03-25T01:29:05Z)
- Self-Supervised Classification Network [3.8073142980733]
A self-supervised end-to-end classification neural network learns labels and representations simultaneously.
It is the first unsupervised end-to-end classification network to perform well on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2021-03-19T19:29:42Z)
- A Multiple Classifier Approach for Concatenate-Designed Neural Networks [13.017053017670467]
We present the design of the classifiers, which collect the features produced between the network sets.
We use L2 normalization to obtain the classification score instead of a dense layer with Softmax.
As a result, the proposed classifiers improve accuracy in the experimental cases.
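That scoring rule amounts to a cosine-similarity head. A minimal sketch, where the scale factor is an illustrative choice rather than the paper's setting:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class L2NormClassifier(nn.Module):
    """Sketch: classification scores from L2-normalized features and class weights,
    replacing a plain dense layer followed by Softmax."""

    def __init__(self, feat_dim: int, num_classes: int, scale: float = 16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between each feature vector and each class vector.
        return self.scale * F.linear(F.normalize(feats, dim=1),
                                     F.normalize(self.weight, dim=1))
```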
arXiv Detail & Related papers (2021-01-14T04:32:40Z)
- Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks.
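A condensed sketch of the two-stage recipe: a frozen encoder from stage one provides representations, and a simple distance-based scorer stands in for the stage-two one-class classifier. The kNN scorer and k value are assumptions, not the paper's exact detector:

```python
import torch
import torch.nn.functional as F


@torch.no_grad()
def embed(encoder, x: torch.Tensor) -> torch.Tensor:
    """Stage 1: map inputs to L2-normalized representations with a frozen encoder."""
    return F.normalize(encoder(x), dim=1)


@torch.no_grad()
def knn_anomaly_score(train_z: torch.Tensor, test_z: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Stage 2: one-class score = mean distance to the k nearest normal embeddings."""
    d = torch.cdist(test_z, train_z)                     # (num_test, num_train)
    return d.topk(k, largest=False).values.mean(dim=1)   # higher = more anomalous
```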
arXiv Detail & Related papers (2020-11-04T23:33:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.