An evidential classifier based on Dempster-Shafer theory and deep
learning
- URL: http://arxiv.org/abs/2103.13549v1
- Date: Thu, 25 Mar 2021 01:29:05 GMT
- Title: An evidential classifier based on Dempster-Shafer theory and deep
learning
- Authors: Zheng Tong, Philippe Xu, Thierry Denœux
- Abstract summary: We propose a new classification system based on Dempster-Shafer (DS) theory and a convolutional neural network (CNN) architecture for set-valued classification.
Experiments on image recognition, signal processing, and semantic-relationship classification tasks demonstrate that the proposed combination of deep CNN, DS layer, and expected utility layer makes it possible to improve classification accuracy.
- Score: 6.230751621285322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new classifier based on Dempster-Shafer (DS) theory and a
convolutional neural network (CNN) architecture for set-valued classification.
In this classifier, called the evidential deep-learning classifier,
convolutional and pooling layers first extract high-dimensional features from
input data. The features are then converted into mass functions and aggregated
by Dempster's rule in a DS layer. Finally, an expected utility layer performs
set-valued classification based on mass functions. We propose an end-to-end
learning strategy for jointly updating the network parameters. Additionally, an
approach for selecting partial multi-class acts is proposed. Experiments on
image recognition, signal processing, and semantic-relationship classification
tasks demonstrate that the proposed combination of deep CNN, DS layer, and
expected utility layer makes it possible to improve classification accuracy and
to make cautious decisions by assigning confusing patterns to multi-class sets.
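The pipeline described in the abstract (CNN features converted to mass functions, combined by Dempster's rule in a DS layer, then an expected-utility layer selecting a possibly multi-class act) can be illustrated with a short, self-contained sketch. The prototype-based mass construction, the pignistic transform, and the size-discounted utility used below are illustrative assumptions made for this sketch; they follow the spirit of the abstract rather than the paper's exact formulation or its end-to-end training procedure.

```python
# Minimal NumPy sketch of a DS-layer + expected-utility pipeline.
# Assumptions (not the paper's exact method): prototype-based evidence,
# a pignistic transform, and a size-discounted utility for set-valued acts.
from itertools import combinations
import numpy as np

K = 3                                    # number of classes, Omega = {0, 1, 2}
OMEGA = frozenset(range(K))

def prototype_mass(feature, prototype, class_idx, alpha=0.9, gamma=0.5):
    """One prototype yields a simple mass function: some mass on its class,
    the remainder on the whole frame Omega (ignorance)."""
    d2 = float(np.sum((feature - prototype) ** 2))
    s = alpha * np.exp(-gamma * d2)
    return {frozenset([class_idx]): s, OMEGA: 1.0 - s}

def dempster_combine(m1, m2):
    """Dempster's rule: conjunctive combination followed by normalization."""
    out, conflict = {}, 0.0
    for a, va in m1.items():
        for b, vb in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + va * vb
            else:
                conflict += va * vb
    return {subset: v / (1.0 - conflict) for subset, v in out.items()}

def pignistic(m):
    """Spread each focal set's mass evenly over its elements."""
    p = np.zeros(K)
    for subset, v in m.items():
        for k in subset:
            p[k] += v / len(subset)
    return p

def best_act(m, tol=0.8):
    """Expected-utility layer (sketch): score every non-empty subset A of Omega
    with u(A, k) = tol**(|A| - 1) if k in A else 0, and pick the best act."""
    p = pignistic(m)
    acts = [frozenset(c) for r in range(1, K + 1)
            for c in combinations(range(K), r)]
    scores = {A: sum(p[k] for k in A) * tol ** (len(A) - 1) for A in acts}
    return max(scores, key=scores.get), scores

# Toy usage: a 2-D "feature" compared against one prototype per class.
feature = np.array([0.9, 1.1])
prototypes = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([-1.0, -1.0])]
m = {OMEGA: 1.0}                          # start from the vacuous mass function
for k, proto in enumerate(prototypes):
    m = dempster_combine(m, prototype_mass(feature, proto, k))
act, _ = best_act(m)
print("selected act (possibly multi-class):", set(act))
```

With these toy values, the ambiguous pattern lying between the class-0 and class-1 prototypes is assigned the set {0, 1} rather than a single class, which corresponds to the cautious, set-valued decisions the abstract refers to.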
Related papers
- Informed deep hierarchical classification: a non-standard analysis inspired approach [0.0]
The approach consists of a multi-output deep neural network equipped with specific projection operators placed before each output layer.
The design of such an architecture, called lexicographic hybrid deep neural network (LH-DNN), has been made possible by combining tools from different and quite distant research fields.
To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks.
arXiv Detail & Related papers (2024-09-25T14:12:50Z)
- Dynamic Perceiver for Efficient Visual Recognition [87.08210214417309]
We propose Dynamic Perceiver (Dyn-Perceiver) to decouple the feature extraction procedure and the early classification task.
A feature branch serves to extract image features, while a classification branch processes a latent code assigned for classification tasks.
Early exits are placed exclusively within the classification branch, thus eliminating the need for linear separability in low-level features.
arXiv Detail & Related papers (2023-06-20T03:00:22Z)
- Hidden Classification Layers: Enhancing linear separability between classes in neural networks layers [0.0]
We investigate the impact of a training approach on deep network performance.
We propose a neural network architecture which induces an error function involving the outputs of all the network layers.
arXiv Detail & Related papers (2023-06-09T10:52:49Z)
- Semantic Guided Level-Category Hybrid Prediction Network for Hierarchical Image Classification [8.456482280676884]
Hierarchical classification (HC) assigns each object multiple labels organized into a hierarchical structure.
We propose a novel semantic guided level-category hybrid prediction network (SGLCHPN) that can jointly perform the level and category prediction in an end-to-end manner.
arXiv Detail & Related papers (2022-11-22T13:49:10Z)
- Do We Really Need a Learnable Classifier at the End of Deep Neural Network? [118.18554882199676]
We study the potential of training a neural network for classification with the classifier randomly initialized as a simplex equiangular tight frame (ETF) and fixed during training.
Our experimental results show that our method achieves performance comparable to a learnable classifier on image classification with balanced datasets.
arXiv Detail & Related papers (2022-03-17T04:34:28Z)
- Fusion of evidential CNN classifiers for image classification [6.230751621285322]
We propose an information-fusion approach based on belief functions to combine convolutional neural networks.
In this approach, several pre-trained DS-based CNN architectures extract features from input images and convert them into mass functions on different frames of discernment.
arXiv Detail & Related papers (2021-08-23T15:12:26Z)
- No Fear of Heterogeneity: Classifier Calibration for Federated Learning with Non-IID Data [78.69828864672978]
A central challenge in training classification models in the real-world federated system is learning with non-IID data.
We propose a novel and simple algorithm called Classifier Calibration with Virtual Representations (CCVR), which adjusts the classifier using virtual representations sampled from an approximated Gaussian mixture model.
Experimental results demonstrate that CCVR achieves state-of-the-art performance on popular federated learning benchmarks including CIFAR-10, CIFAR-100, and CINIC-10.
arXiv Detail & Related papers (2021-06-09T12:02:29Z)
- A Multiple Classifier Approach for Concatenate-Designed Neural Networks [13.017053017670467]
We present the design of the classifiers, which collect the features produced between the network sets.
We use the L2 normalization method to obtain the classification score instead of a Softmax dense layer.
As a result, the proposed classifiers are able to improve the accuracy in the experimental cases.
arXiv Detail & Related papers (2021-01-14T04:32:40Z)
- Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z)
- Fine-Grained Visual Classification with Efficient End-to-end Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z)
- Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements and reducing the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
arXiv Detail & Related papers (2020-05-10T06:11:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.