Evaluating Deep Neural Network Ensembles by Majority Voting cum Meta-Learning scheme
- URL: http://arxiv.org/abs/2105.03819v1
- Date: Sun, 9 May 2021 03:10:56 GMT
- Title: Evaluating Deep Neural Network Ensembles by Majority Voting cum Meta-Learning scheme
- Authors: Anmol Jain, Aishwary Kumar, Seba Susan
- Abstract summary: We propose an ensemble of seven independent Deep Neural Networks (DNNs), varied only in their training input.
One-seventh of the data is deleted and replenished by bootstrap sampling from the remaining samples.
All the algorithms in this paper have been tested on five benchmark datasets.
- Score: 3.351714665243138
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep Neural Networks (DNNs) are prone to overfitting and hence have high
variance. Overfitted networks do not perform well on new data instances. So,
instead of using a single DNN as the classifier, we propose an ensemble of
seven independent DNN learners formed by varying only the input to these DNNs
while keeping their architecture and intrinsic properties the same. To induce
variety in the training input, for each of the seven DNNs, one-seventh of the
data is deleted and replenished by bootstrap sampling from the remaining
samples. We propose a novel technique for combining the predictions of the DNN
learners in the ensemble. Our method, called pre-filtering by majority voting
coupled with a stacked meta-learner, performs a two-step confidence check on
the predictions before assigning the final class labels. All the algorithms in
this paper have been tested on five benchmark datasets, namely Human Activity
Recognition (HAR), Gas sensor array drift, Isolet, Spambase, and Internet
advertisements. Our ensemble approach achieves higher accuracy than a single
DNN and the average individual accuracy of the DNNs in the ensemble, as well
as the baseline approaches of plurality voting and meta-learning. (A pipeline
sketch follows this abstract.)
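A minimal sketch of the pipeline described in the abstract, assuming scikit-learn-style learners: `MLPClassifier` stands in for the paper's DNNs, and the majority threshold and the choice of meta-learner are assumptions rather than the authors' exact settings. Step one accepts any label that clears the majority vote; step two defers the remaining low-consensus instances to a meta-learner stacked on the base learners' predictions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

N_LEARNERS = 7

def perturb_fold(X, y, i, rng):
    """Delete the i-th seventh of the data and replenish it by bootstrap
    sampling from the remaining samples (one variant per learner)."""
    n = len(X)
    deleted = np.arange(i * n // N_LEARNERS, (i + 1) * n // N_LEARNERS)
    kept = np.setdiff1d(np.arange(n), deleted)
    refill = rng.choice(kept, size=len(deleted), replace=True)
    idx = np.concatenate([kept, refill])
    return X[idx], y[idx]

def fit_ensemble(X, y, seed=0):
    rng = np.random.default_rng(seed)
    learners = []
    for i in range(N_LEARNERS):
        Xi, yi = perturb_fold(X, y, i, rng)
        learners.append(MLPClassifier(hidden_layer_sizes=(64,),
                                      max_iter=500).fit(Xi, yi))
    # Stacked meta-learner trained on the base learners' class predictions.
    stacked = np.column_stack([m.predict(X) for m in learners])
    meta = LogisticRegression(max_iter=1000).fit(stacked, y)
    return learners, meta

def predict_ensemble(learners, meta, X, majority=4):
    """Two-step confidence check: accept labels that clear the majority
    vote, defer low-consensus instances to the stacked meta-learner."""
    votes = np.column_stack([m.predict(X) for m in learners])
    out = np.empty(len(X), dtype=votes.dtype)
    for j, row in enumerate(votes):
        labels, counts = np.unique(row, return_counts=True)
        if counts.max() >= majority:            # step 1: majority vote
            out[j] = labels[counts.argmax()]
        else:                                   # step 2: meta-learner
            out[j] = meta.predict(row.reshape(1, -1))[0]
    return out
```

On a dataset such as Spambase, `fit_ensemble(X_train, y_train)` followed by `predict_ensemble(...)` exercises the two-step scheme end to end.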
Related papers
- E2GNN: Efficient Graph Neural Network Ensembles for Semi-Supervised Classification [30.55931541782854]
This work studies ensemble learning for graph neural networks (GNNs) under the popular semi-supervised setting.
We propose an efficient ensemble learner, E2GNN, to assemble multiple GNNs in a learnable way by leveraging both labeled and unlabeled nodes.
Comprehensive experiments over both transductive and inductive settings, across different GNN backbones and 8 benchmark datasets, demonstrate the superiority of E2GNN.
arXiv Detail & Related papers (2024-05-06T12:11:46Z)
- Detecting Novelties with Empty Classes [6.953730499849023]
We build upon anomaly detection to retrieve out-of-distribution (OoD) data as candidates for new classes.
We introduce two loss functions, which 1) entice the DNN to assign OoD samples to the empty classes and 2) minimize the intra-class feature distances between them (a loss sketch follows this entry).
arXiv Detail & Related papers (2023-04-30T19:52:47Z)
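A hedged sketch of the two losses as summarized above; the tensor names (`logits_ood`, `feats_ood`, `empty_ids`) and the hard-assignment rule are illustrative assumptions, not the paper's API.

```python
import torch

def empty_class_losses(logits_ood, feats_ood, empty_ids):
    """1) entice the DNN to put OoD probability mass on the empty classes;
    2) pull together the features assigned to the same empty class."""
    # (1) maximize probability mass on the block of empty classes
    p_empty = logits_ood.softmax(dim=1)[:, empty_ids].sum(dim=1)
    loss_assign = -torch.log(p_empty + 1e-8).mean()
    # (2) intra-class feature compactness within each empty class
    assign = logits_ood[:, empty_ids].argmax(dim=1)
    loss_compact = feats_ood.new_zeros(())
    for k in range(len(empty_ids)):
        members = feats_ood[assign == k]
        if len(members) > 1:
            loss_compact = loss_compact + ((members - members.mean(0)) ** 2).sum(1).mean()
    return loss_assign, loss_compact
```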
- Multi-Objective Linear Ensembles for Robust and Sparse Training of Few-Bit Neural Networks [5.246498560938275]
We study the case of few-bit discrete-valued neural networks, both Binarized Neural Networks (BNNs) and Integer Neural Networks (INNs).
Our contribution is a multi-objective ensemble approach based on training a single NN for each possible pair of classes and applying a majority voting scheme to predict the final output.
We compare this BeMi approach to the current state-of-the-art in solver-based NN training and gradient-based training, focusing on BNN learning in few-shot contexts (a pairwise-voting sketch follows this entry).
arXiv Detail & Related papers (2022-12-07T14:23:43Z)
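A sketch of the one-network-per-class-pair scheme with majority voting; `make_model` is a placeholder for the paper's solver-trained few-bit networks (any scikit-learn-style binary classifier works for illustration).

```python
from itertools import combinations
import numpy as np

def fit_pairwise(X, y, make_model):
    """Train one binary model per unordered pair of classes."""
    models = {}
    for a, b in combinations(np.unique(y), 2):
        mask = (y == a) | (y == b)
        models[(a, b)] = make_model().fit(X[mask], y[mask])
    return models

def predict_pairwise(models, X, classes):
    """Each pairwise model casts one vote per sample; most votes wins."""
    index = {c: i for i, c in enumerate(classes)}
    votes = np.zeros((len(X), len(classes)), dtype=int)
    for model in models.values():
        for j, pred in enumerate(model.predict(X)):
            votes[j, index[pred]] += 1
    return np.asarray(classes)[votes.argmax(axis=1)]
```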
- Rethinking Nearest Neighbors for Visual Classification [56.00783095670361]
k-NN is a lazy learning method that aggregates the distances between the test image and its top-k neighbors in a training set.
We adopt k-NN with pre-trained visual representations produced by either supervised or self-supervised methods in two steps.
Via extensive experiments on a wide range of classification tasks, our study reveals the generality and flexibility of k-NN integration (a minimal sketch follows this entry).
arXiv Detail & Related papers (2021-12-15T20:15:01Z)
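A minimal sketch of the two-step recipe: extract frozen pre-trained features, then classify by voting over the k nearest training features. The feature extractor itself is assumed to exist upstream; cosine similarity is one common choice, not necessarily the paper's.

```python
import numpy as np

def knn_classify(train_feats, train_labels, test_feats, k=20):
    """Vote over the k nearest training features (cosine similarity)."""
    a = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    b = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    topk = np.argsort(-(b @ a.T), axis=1)[:, :k]   # (n_test, k) indices
    preds = []
    for row in topk:
        labels, counts = np.unique(train_labels[row], return_counts=True)
        preds.append(labels[counts.argmax()])
    return np.asarray(preds)
```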
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Understanding and Improving Early Stopping for Learning with Noisy Labels [63.0730063791198]
The memorization effect of deep neural networks (DNNs) plays a pivotal role in many state-of-the-art label-noise learning methods.
Current methods generally decide the early stopping point by considering a DNN as a whole.
We propose to separate a DNN into different parts and progressively train them to address this problem (a hedged sketch follows this entry).
arXiv Detail & Related papers (2021-06-30T07:18:00Z)
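A hedged sketch of the idea of giving each part of a DNN its own stopping point instead of early-stopping the network as a whole; the part boundaries, per-part epoch budgets, and optimizer are illustrative assumptions, not the paper's exact procedure.

```python
import torch.nn as nn
import torch.optim as optim

def progressive_early_stop_train(parts, stop_epochs, loader, loss_fn, lr=0.01):
    """`parts` is a list of nn.Module stages; `stop_epochs[i]` is the epoch
    after which part i is frozen, so each part stops at its own point."""
    model = nn.Sequential(*parts)
    opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(max(stop_epochs)):
        # freeze every part whose personal stopping epoch has passed
        for part, stop in zip(parts, stop_epochs):
            for p in part.parameters():
                p.requires_grad = epoch < stop
        for x, y in loader:
            opt.zero_grad(set_to_none=True)
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```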
- Adaptive Nearest Neighbor Machine Translation [60.97183408140499]
kNN-MT combines pre-trained neural machine translation with token-level k-nearest-neighbor retrieval.
The traditional kNN algorithm simply retrieves the same number of nearest neighbors for each target token.
We propose Adaptive kNN-MT to dynamically determine the value of k for each target token (a simplified sketch follows this entry).
arXiv Detail & Related papers (2021-05-27T09:27:42Z)
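The paper learns the per-token k with a light network; the following is a simplified heuristic stand-in that shrinks k when the nearest datastore entry is much closer than the rest (the threshold `tau` is an assumption).

```python
import numpy as np

def adaptive_k(distances, k_max=16, tau=1.0):
    """distances: ascending distances from one target-token query to its
    k_max nearest datastore entries; returns how many neighbors to keep."""
    keep = distances[:k_max] <= distances[0] + tau
    return max(1, int(keep.sum()))
```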
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods (a distillation-loss sketch follows this entry).
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
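A sketch of guiding a binary network with a real-valued teacher's final prediction distribution only; a temperature-scaled KL objective is one standard way to realize this, and the temperature value is an assumption.

```python
import torch.nn.functional as F

def distribution_distill_loss(student_logits, teacher_logits, T=1.0):
    """KL divergence between teacher and student output distributions."""
    p_t = F.softmax(teacher_logits / T, dim=1)
    log_p_s = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * (T * T)
```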
- Kernel Based Progressive Distillation for Adder Neural Networks [71.731127378807]
Adder Neural Networks (ANNs), which contain only additions, offer a new way of developing deep neural networks with low energy consumption.
There is, however, an accuracy drop when replacing all convolution filters with adder filters.
We present a novel method for further improving the performance of ANNs without increasing the trainable parameters (an adder-layer sketch follows this entry).
arXiv Detail & Related papers (2020-09-28T03:29:19Z)
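For reference, an adder filter replaces the multiply-accumulate of a convolution with a negative L1 distance between input and weights, so only additions and subtractions are needed; a fully-connected variant is sketched below for brevity.

```python
import torch

def adder_linear(x, weight):
    """x: (batch, in_features); weight: (out_features, in_features).
    Output[s, f] = -sum_i |x[s, i] - weight[f, i]| (additions only)."""
    diff = x.unsqueeze(1) - weight.unsqueeze(0)   # (batch, out, in)
    return -diff.abs().sum(dim=2)
```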
- Self-Competitive Neural Networks [0.0]
Deep Neural Networks (DNNs) have improved the accuracy of classification problems in many applications.
One of the challenges in training a DNN is its need for an enriched dataset to increase accuracy and avoid overfitting.
Recently, researchers have worked extensively to propose methods for data augmentation.
In this paper, we generate adversarial samples to refine the Domains of Attraction (DoAs) of each class. In this approach, at each stage, we use the model learned from the primary and generated adversarial data (up to that stage) to manipulate the primary data in a way that looks complicated to the DNN.
arXiv Detail & Related papers (2020-08-22T12:28:35Z)
- One Versus all for deep Neural Network Incertitude (OVNNI) quantification [12.734278426543332]
We propose a new technique to quantify the epistemic uncertainty of data easily.
This method consists of mixing the predictions of an ensemble of DNNs trained to classify One class vs All the other classes (OVA) with predictions from a standard DNN trained to perform All vs All (AVA) classification (a mixing-rule sketch follows this list).
arXiv Detail & Related papers (2020-06-01T14:06:12Z)
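A minimal sketch of the OVNNI mixing rule summarized in the last entry: scale each All-vs-All class probability by the matching One-vs-All network's confidence, so a class scores high only when both agree (the final normalization step is an assumption).

```python
import numpy as np

def ovnni_scores(p_ava, p_ova):
    """p_ava: (n, C) softmax output of the AvA network;
    p_ova: (n, C) 'this class' probability from each OvA network."""
    scores = p_ava * p_ova         # low when either model is unsure
    return scores / scores.sum(axis=1, keepdims=True)
```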
This list is automatically generated from the titles and abstracts of the papers on this site.