Deep N-ary Error Correcting Output Codes
- URL: http://arxiv.org/abs/2009.10465v4
- Date: Tue, 15 Dec 2020 02:33:15 GMT
- Title: Deep N-ary Error Correcting Output Codes
- Authors: Hao Zhang, Joey Tianyi Zhou, Tianying Wang, Ivor W. Tsang, Rick Siow
Mong Goh
- Abstract summary: Data-independent ensemble methods like Error Correcting Output Codes (ECOC) attract increasing attention.
N-ary ECOC decomposes the original multi-class classification problem into a series of independent simpler classification subproblems.
We propose three different variants of parameter sharing architectures for deep N-ary ECOC.
- Score: 66.15481033522343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ensemble learning consistently improves the performance of multi-class
classification by aggregating a series of base classifiers. To this end,
data-independent ensemble methods like Error Correcting Output Codes (ECOC)
attract increasing attention due to their ease of implementation and
parallelization. Specifically, traditional ECOC and its general extension,
N-ary ECOC, decompose the original multi-class classification problem into a
series of independent, simpler classification subproblems. Unfortunately,
integrating ECOC, and especially N-ary ECOC, with deep neural networks (termed
deep N-ary ECOC) is not straightforward and not yet fully exploited in the
literature, due to the high expense of training deep base learners. To
facilitate the training of N-ary ECOC with deep base learners, we propose
three variants of parameter-sharing architectures for deep N-ary ECOC. To
verify the generalization ability of deep N-ary ECOC, we conduct experiments
that vary the backbone across different deep neural network architectures for
both image and text classification tasks. Furthermore, extensive ablation
studies on deep N-ary ECOC show its superior performance over other deep
data-independent ensemble methods.
Related papers
- Informed deep hierarchical classification: a non-standard analysis inspired approach [0.0]
It consists of a multi-output deep neural network equipped with specific projection operators placed before each output layer.
The design of such an architecture, called the lexicographic hybrid deep neural network (LH-DNN), was made possible by combining tools from different and quite distant research fields.
To assess the efficacy of the approach, the resulting network is compared against the B-CNN, a convolutional neural network tailored for hierarchical classification tasks.
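The projection operators themselves are not described in this summary; purely for orientation, a generic multi-output network with a placeholder projection before each level's head might look as follows (all names and sizes are hypothetical, not the LH-DNN's actual design).

```python
import torch
import torch.nn as nn

class MultiOutputHierarchical(nn.Module):
    """Generic sketch: one output head per hierarchy level, coarse to fine,
    with a stand-in linear projection where LH-DNN places its operators."""
    def __init__(self, in_dim=784, hid=128, level_sizes=(2, 5, 10)):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.projections = nn.ModuleList(nn.Linear(hid, hid) for _ in level_sizes)
        self.heads = nn.ModuleList(nn.Linear(hid, c) for c in level_sizes)

    def forward(self, x):
        h = self.trunk(x)
        return [head(proj(h)) for proj, head in zip(self.projections, self.heads)]

outs = MultiOutputHierarchical()(torch.randn(4, 784))
print([o.shape for o in outs])  # [(4, 2), (4, 5), (4, 10)]
```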
arXiv Detail & Related papers (2024-09-25T14:12:50Z) - Deep Negative Correlation Classification [82.45045814842595]
Existing deep ensemble methods naively train many different models and then aggregate their predictions.
We propose deep negative correlation classification (DNCC).
DNCC yields a deep classification ensemble where the individual estimator is both accurate and negatively correlated.
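DNCC's exact classification objective is not given in this summary, but the negative-correlation penalty this line of work builds on (from classical negative correlation learning) can be sketched as follows; `lam` and the regression-style error term are illustrative assumptions.

```python
import torch

def ncl_loss(preds, target, lam=0.5):
    """preds: (M, batch) outputs of M ensemble members; target: (batch,)."""
    ens = preds.mean(dim=0)                    # ensemble average
    mse = ((preds - target) ** 2).mean()       # individual accuracy term
    # penalty is minimised when members deviate from the ensemble in
    # opposite directions, i.e. their errors are negatively correlated
    penalty = -((preds - ens) ** 2).mean()
    return mse + lam * penalty

preds = torch.randn(5, 32, requires_grad=True)  # 5 members, batch of 32
loss = ncl_loss(preds, torch.randn(32))
loss.backward()
```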
arXiv Detail & Related papers (2022-12-14T07:35:20Z) - Deep Combinatorial Aggregation [58.78692706974121]
Deep ensemble is a simple and effective method that achieves state-of-the-art results for uncertainty-aware learning tasks.
In this work, we explore a generalization of deep ensembles called deep combinatorial aggregation (DCA).
DCA creates multiple instances of network components and aggregates their combinations to produce diversified model proposals and predictions.
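A minimal sketch of the combinatorial idea, assuming two layers with three interchangeable instances each (DCA's actual component granularity and aggregation rule may differ):

```python
import itertools
import torch
import torch.nn as nn

class ComponentBank(nn.Module):
    """Several interchangeable instances of one network component."""
    def __init__(self, in_dim, out_dim, n_instances=3):
        super().__init__()
        self.instances = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(n_instances))

    def forward(self, x, i):
        return self.instances[i](x)

layer1 = ComponentBank(784, 128)
layer2 = ComponentBank(128, 10)
x = torch.randn(4, 784)

# aggregate predictions over all 3 x 3 component combinations
preds = [layer2(torch.relu(layer1(x, i)), j)
         for i, j in itertools.product(range(3), range(3))]
avg_pred = torch.stack(preds).mean(dim=0)  # (4, 10) diversified prediction
```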
arXiv Detail & Related papers (2022-10-12T17:35:03Z) - Self-Supervised Deep Subspace Clustering with Entropy-norm [0.0]
We propose Self-Supervised deep Subspace Clustering with Entropy-norm (S$^3$CE).
S$^3$CE exploits a self-supervised contrastive network to obtain a more effective feature vector.
A new module with data enhancement is designed to help S$^3$CE focus on the key information of the data.
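The precise "entropy-norm" is not specified in this summary; one plausible reading, sketched below purely as an assumption, is a Shannon-entropy regulariser on row-normalised affinity coefficients.

```python
import torch
import torch.nn.functional as F

def entropy_norm(C, eps=1e-8):
    """C: (n, n) affinity/self-expression coefficients. Each row is
    normalised to a distribution and penalised by its Shannon entropy
    (hypothetical stand-in for S^3CE's entropy-norm)."""
    P = F.softmax(C.abs(), dim=1)
    return -(P * (P + eps).log()).sum(dim=1).mean()

C = torch.randn(16, 16, requires_grad=True)
entropy_norm(C).backward()
```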
arXiv Detail & Related papers (2022-06-10T09:15:33Z) - Nested Collaborative Learning for Long-Tailed Visual Recognition [71.6074806468641]
NCL consists of two core components, namely Nested Individual Learning (NIL) and Nested Balanced Online Distillation (NBOD).
To learn representations more thoroughly, both NIL and NBOD are formulated in a nested way, in which the learning is conducted on not just all categories from a full perspective but some hard categories from a partial perspective.
In the NCL, the learning from two perspectives is nested, highly related and complementary, and helps the network to capture not only global and robust features but also meticulous distinguishing ability.
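As a rough sketch of the full-versus-partial idea (not the paper's exact NIL/NBOD losses), one can distil between two networks both over all classes and over each sample's top-k hardest classes:

```python
import torch
import torch.nn.functional as F

def nested_kl(logits_a, logits_b, k=5):
    """Distil B into A over the full class set and over a partial view of
    the k classes A currently scores highest (a proxy for 'hard' classes)."""
    full = F.kl_div(F.log_softmax(logits_a, 1),
                    F.softmax(logits_b, 1), reduction='batchmean')
    idx = logits_a.topk(k, dim=1).indices
    pa = F.log_softmax(logits_a.gather(1, idx), 1)
    pb = F.softmax(logits_b.gather(1, idx), 1)
    return full + F.kl_div(pa, pb, reduction='batchmean')

la, lb = torch.randn(8, 100), torch.randn(8, 100)
print(nested_kl(la, lb))
```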
arXiv Detail & Related papers (2022-03-29T08:55:39Z) - Deep clustering with fusion autoencoder [0.0]
Deep clustering (DC) models capitalize on autoencoders to learn intrinsic features that in turn facilitate the clustering process.
In this paper, a novel DC method is proposed to address this issue. Specifically, a generative adversarial network and a variational autoencoder (VAE) are coalesced into a new autoencoder called the fusion autoencoder (FAE).
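A rough sketch of one way GAN and VAE objectives can be coalesced, with a discriminator scoring the decoder's reconstructions (FAE's concrete architecture and losses may differ):

```python
import torch
import torch.nn as nn

enc = nn.Linear(784, 2 * 32)        # outputs mean and log-variance
dec = nn.Linear(32, 784)
disc = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())

x = torch.randn(16, 784)
mu, logvar = enc(x).chunk(2, dim=1)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
recon = dec(z)

recon_loss = (recon - x).pow(2).mean()                          # VAE term
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()      # VAE term
adv = -torch.log(disc(recon) + 1e-8).mean()                     # GAN term
loss = recon_loss + kl + adv
```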
arXiv Detail & Related papers (2022-01-11T07:38:03Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
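Teacher-student self-ensembling of this kind is commonly implemented with an exponential moving average (EMA) of the student's weights; a minimal sketch (SE-GAN's discriminator and segmentation specifics are omitted, and the layer sizes are hypothetical):

```python
import torch
import torch.nn as nn

student = nn.Linear(64, 19)   # e.g. 19 segmentation classes
teacher = nn.Linear(64, 19)
teacher.load_state_dict(student.state_dict())

@torch.no_grad()
def ema_update(teacher, student, alpha=0.999):
    """Teacher weights track a slow moving average of the student's."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(alpha).add_(s, alpha=1 - alpha)

# called after each student optimisation step:
ema_update(teacher, student)
```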
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - Bend-Net: Bending Loss Regularized Multitask Learning Network for Nuclei
Segmentation in Histopathology Images [65.47507533905188]
We propose a novel multitask learning network with a bending loss regularizer to separate overlapped nuclei accurately.
The newly proposed multitask learning architecture enhances the generalization by learning shared representation from three tasks.
The proposed bending loss assigns high penalties to concave contour points with large curvatures, and small penalties to convex contour points with small curvatures.
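A curvature-dependent contour penalty in that spirit can be sketched with a discrete turning-angle curvature proxy; the paper's actual curvature estimator and penalty weights differ.

```python
import numpy as np

def bending_penalty(contour, w_concave=10.0, w_convex=1.0):
    """contour: (n, 2) points in counter-clockwise order. Concave,
    high-curvature points get large penalties; convex ones small."""
    prev = np.roll(contour, 1, axis=0) - contour
    nxt = np.roll(contour, -1, axis=0) - contour
    cross = prev[:, 0] * nxt[:, 1] - prev[:, 1] * nxt[:, 0]
    cos = (prev * nxt).sum(1) / (np.linalg.norm(prev, axis=1)
                                 * np.linalg.norm(nxt, axis=1) + 1e-8)
    curv = np.pi - np.arccos(np.clip(cos, -1.0, 1.0))  # turning angle
    concave = cross > 0   # sign convention depends on contour orientation
    return np.where(concave, w_concave, w_convex) * curv

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
print(bending_penalty(square))  # four convex corners, equal penalties
```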
arXiv Detail & Related papers (2021-09-30T17:29:44Z) - Pseudo-supervised Deep Subspace Clustering [27.139553299302754]
Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved impressive performance.
However, the self-reconstruction loss of an AE ignores rich and useful relational information.
It is also challenging to learn high-level similarity without feeding semantic labels.
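For context on the AE-based DSC framework such methods build on, the standard self-expressive objective over latent codes is sketched below (a minimal illustration, not this paper's pseudo-supervision mechanism):

```python
import torch

n, d = 32, 16
Z = torch.randn(n, d)                       # latent codes from the encoder
C = torch.randn(n, n, requires_grad=True)   # self-expression coefficients

# each code is re-expressed as a combination of the others;
# the diagonal is zeroed to forbid trivial self-matching
C_off = C - torch.diag(torch.diag(C))
loss = (Z - C_off @ Z).pow(2).sum() + 0.1 * C_off.abs().sum()
loss.backward()
# spectral clustering is then typically run on the affinity |C| + |C|^T
```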
arXiv Detail & Related papers (2021-04-08T06:25:47Z) - Multi-level Feature Learning on Embedding Layer of Convolutional
Autoencoders and Deep Inverse Feature Learning for Image Clustering [6.5358895450258325]
We use agglomerative clustering as the multi-level feature learning that provides a hierarchical structure on the latent feature space.
Applying multi-level feature learning considerably improves the basic deep convolutional embedding clustering.
Deep inverse feature learning (deep IFL) on CAE-MLE is a novel approach that leads to state-of-the-art results.
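A minimal sketch of the multi-level step, with random features standing in for a trained CAE's embeddings:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# agglomerative clustering imposes a hierarchy on the latent feature space
latent = np.random.randn(200, 32)           # stand-in for CAE embeddings
labels = AgglomerativeClustering(n_clusters=10).fit_predict(latent)
print(np.bincount(labels))                  # cluster sizes at one level
```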
arXiv Detail & Related papers (2020-10-05T21:24:10Z)