On the Effectiveness of Neural Ensembles for Image Classification with
Small Datasets
- URL: http://arxiv.org/abs/2111.14493v1
- Date: Mon, 29 Nov 2021 12:34:49 GMT
- Title: On the Effectiveness of Neural Ensembles for Image Classification with
Small Datasets
- Authors: Lorenzo Brigato and Luca Iocchi
- Abstract summary: We focus on image classification problems with a few labeled examples per class and improve data efficiency by using an ensemble of relatively small networks.
We show that ensembling relatively shallow networks is a simple yet effective technique that is generally better than current state-of-the-art approaches for learning from small datasets.
- Score: 2.3478438171452014
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks represent the gold standard for image classification.
However, they usually need large amounts of data to reach superior performance.
In this work, we focus on image classification problems with a few labeled
examples per class and improve data efficiency by using an ensemble of
relatively small networks. For the first time, our work broadly studies the
existing concept of neural ensembling in domains with small data, through
extensive validation using popular datasets and architectures. We compare
ensembles of networks to their deeper or wider single competitors given a total
fixed computational budget. We show that ensembling relatively shallow networks
is a simple yet effective technique that is generally better than current
state-of-the-art approaches for learning from small datasets. Finally, we
present our interpretation according to which neural ensembles are more sample
efficient because they learn simpler functions.
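As a rough illustration of the recipe (not the authors' exact setup: the architecture, member count, and probability-averaging rule below are assumptions), a budget-matched ensemble of shallow networks can be sketched in PyTorch as follows:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """A deliberately shallow CNN; depth and width are illustrative,
    not the paper's exact configuration."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

class Ensemble(nn.Module):
    """Averages the softmax outputs of independently trained members."""
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)

    def forward(self, x):
        probs = torch.stack([m(x).softmax(dim=-1) for m in self.members])
        return probs.mean(dim=0)

# Budget matching is done by member count: e.g. 4 small networks instead of
# one network with roughly 4x the parameters/FLOPs.
ensemble = Ensemble([SmallCNN() for _ in range(4)])
x = torch.randn(8, 3, 32, 32)   # a dummy CIFAR-sized batch
print(ensemble(x).shape)        # torch.Size([8, 10])
```

Training each member independently (different seeds, same data) and averaging probabilities is the standard deep-ensemble recipe; the paper's comparison pits such an ensemble against a single deeper or wider network of equal total computational cost.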
Related papers
- Neural networks trained with SGD learn distributions of increasing
complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics, and only exploit higher-order statistics later in training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
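One concrete form of this test, sketched below under the assumption of a Gaussian-cloning protocol (function name and toy data are illustrative): replace each class with samples that match only its empirical mean and covariance; a network early in training should behave almost identically on the real data and on the clone.

```python
import numpy as np

def gaussian_clone(X, y):
    """Replace each class with samples from a Gaussian matching its
    empirical mean and covariance (first/second-order statistics only).
    Early in training, a network's predictions on X and on this clone
    should largely agree if only lower-order statistics are used."""
    Xc = np.empty_like(X, dtype=float)
    rng = np.random.default_rng(0)
    for c in np.unique(y):
        idx = np.flatnonzero(y == c)
        mu = X[idx].mean(axis=0)
        cov = np.cov(X[idx], rowvar=False)
        Xc[idx] = rng.multivariate_normal(mu, cov, size=len(idx))
    return Xc

# toy usage: two 2-D classes separated by their means
X = np.concatenate([np.random.randn(100, 2), np.random.randn(100, 2) + 3])
y = np.array([0] * 100 + [1] * 100)
X_clone = gaussian_clone(X, y)
```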
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Bandit Sampling for Multiplex Networks [8.771092194928674]
We propose an algorithm for scalable learning on multiplex networks with a large number of layers.
An online learning algorithm learns how to sample relevant neighboring layers so that only layers carrying relevant information are aggregated during training.
We present experimental results in both synthetic and real-world scenarios.
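One way to realize such a sampler is a standard EXP3 bandit over the layers; the sketch below is an assumption about the mechanism, not the paper's exact algorithm, and the reward is a stand-in for a real relevance signal.

```python
import numpy as np

class Exp3LayerSampler:
    """EXP3-style bandit over the layers of a multiplex network. The
    reward would measure how much a layer's aggregated neighborhood
    helped the training objective."""
    def __init__(self, n_layers: int, gamma: float = 0.1):
        self.w = np.ones(n_layers)
        self.gamma = gamma

    def probs(self):
        p = (1 - self.gamma) * self.w / self.w.sum()
        return p + self.gamma / len(self.w)

    def sample(self, rng):
        return rng.choice(len(self.w), p=self.probs())

    def update(self, layer: int, reward: float):
        # importance-weighted reward keeps the estimate unbiased
        p = self.probs()[layer]
        self.w[layer] *= np.exp(self.gamma * reward / (p * len(self.w)))

rng = np.random.default_rng(0)
sampler = Exp3LayerSampler(n_layers=5)
for _ in range(100):
    k = sampler.sample(rng)
    reward = rng.random()   # placeholder for a measured relevance signal
    sampler.update(k, reward)
print(sampler.probs())      # probability mass shifts toward useful layers
```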
arXiv Detail & Related papers (2022-02-08T03:26:34Z)
- CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives classification labels from predicted segmentation maps.
We evaluate the effectiveness of our framework on diverse problems, showing that CvS achieves substantially higher classification accuracy than previous methods when given only a handful of examples.
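A minimal sketch of deriving an image-level label from a predicted segmentation map, assuming a majority vote over non-background pixels (one plausible reduction, not necessarily the paper's exact rule):

```python
import torch

def label_from_segmentation(seg_logits: torch.Tensor) -> torch.Tensor:
    """seg_logits: (B, C, H, W) with channel 0 assumed to be background.
    Returns one class label per image by voting over foreground pixels."""
    pred = seg_logits.argmax(dim=1)        # (B, H, W) per-pixel classes
    labels = []
    for p in pred:
        fg = p[p != 0]                     # drop background pixels
        labels.append(fg.mode().values if fg.numel() else torch.tensor(0))
    return torch.stack(labels)

seg = torch.randn(4, 6, 64, 64)   # batch of 4, 5 classes plus background
print(label_from_segmentation(seg))
```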
arXiv Detail & Related papers (2021-10-29T18:41:15Z)
- Exploiting the relationship between visual and textual features in social networks for image classification with zero-shot deep learning [0.0]
In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture.
Our experiments, based on image classification tasks over the labels of the Places dataset, first consider only the visual part.
Taking the texts associated with the images into account can further improve accuracy, depending on the goal.
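A hedged sketch of the idea using OpenAI's clip package (https://github.com/openai/CLIP); the label set, the prompt template, and the visual/textual blending weight alpha are illustrative assumptions, and `image` is expected to be a PIL image:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

labels = ["park", "restaurant", "beach"]   # e.g. a subset of Places labels
prompts = clip.tokenize([f"a photo of a {l}" for l in labels]).to(device)

def classify(image, caption=None, alpha=0.5):
    """Zero-shot classification from the image alone; if a caption
    accompanies the image (as in social networks), blend in its
    similarity to the label prompts. alpha is an illustrative choice."""
    with torch.no_grad():
        img_feat = model.encode_image(preprocess(image).unsqueeze(0).to(device))
        txt_feat = model.encode_text(prompts)
        img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
        txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
        scores = (img_feat @ txt_feat.T).squeeze(0)
        if caption is not None:
            cap_feat = model.encode_text(clip.tokenize([caption]).to(device))
            cap_feat = cap_feat / cap_feat.norm(dim=-1, keepdim=True)
            scores = alpha * scores + (1 - alpha) * (cap_feat @ txt_feat.T).squeeze(0)
    return labels[scores.argmax().item()]
```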
arXiv Detail & Related papers (2021-07-08T10:54:59Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
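A minimal sketch of the contrastive pair idea, with mean pooling standing in for the paper's GNN encoder (all dimensions, names, and the bilinear scorer are illustrative assumptions):

```python
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    """Embed a target node's attributes and the pooled attributes of a
    sampled local neighborhood, then score their agreement. A low score
    for a node's own neighborhood marks the node as anomalous."""
    def __init__(self, in_dim: int, hid: int = 32):
        super().__init__()
        self.node_enc = nn.Linear(in_dim, hid)
        self.ctx_enc = nn.Linear(in_dim, hid)
        self.bilinear = nn.Bilinear(hid, hid, 1)

    def forward(self, x_node, x_neigh):
        h = torch.relu(self.node_enc(x_node))
        c = torch.relu(self.ctx_enc(x_neigh.mean(dim=1)))  # pool the subgraph
        return self.bilinear(h, c).squeeze(-1)             # agreement logit

# positive pair: node plus its own neighborhood; negative pair: another
# node's neighborhood, labelled 0 during training.
scorer = PairScorer(in_dim=16)
x_node = torch.randn(8, 16)       # 8 target nodes
x_neigh = torch.randn(8, 5, 16)   # 5 sampled neighbors each
loss = nn.functional.binary_cross_entropy_with_logits(
    scorer(x_node, x_neigh), torch.ones(8))   # positives labelled 1
```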
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
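The multi-task objective reduces to a weighted sum of the supervised loss and the self-supervised auxiliary losses; a sketch, with the task names and weights assumed rather than taken from the paper:

```python
import torch

def multitask_loss(cls_loss, aux_losses, weights):
    """Joint objective: supervised classification plus weighted
    self-supervised auxiliary tasks (the paper proposes three).
    The weights are hyperparameters, chosen here arbitrarily."""
    return cls_loss + sum(w * l for w, l in zip(weights, aux_losses))

# e.g. the auxiliary tasks might be attribute reconstruction, edge
# prediction, and node-property prediction, each yielding a scalar loss:
cls = torch.tensor(0.9)
aux = [torch.tensor(0.4), torch.tensor(0.7), torch.tensor(0.2)]
print(multitask_loss(cls, aux, weights=[0.5, 0.5, 0.5]))
```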
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges, reflecting the magnitudes of the connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
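A minimal sketch of differentiable connectivity learning: learnable edge logits on a complete DAG, gated by a sigmoid so topology is trained by ordinary backprop (the per-node operations are placeholder linear layers, not the paper's):

```python
import torch
import torch.nn as nn

class LearnableConnectivity(nn.Module):
    """n computational nodes on a complete DAG; every edge i->j (i < j)
    carries a learnable scalar whose sigmoid is its connection strength."""
    def __init__(self, n_nodes: int, dim: int):
        super().__init__()
        self.n = n_nodes
        self.edge = nn.Parameter(torch.zeros(n_nodes, n_nodes))  # edge logits
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_nodes))

    def forward(self, x):
        outs = [torch.relu(self.ops[0](x))]
        for j in range(1, self.n):
            # weighted sum over all earlier nodes: differentiable topology
            agg = sum(torch.sigmoid(self.edge[i, j]) * outs[i] for i in range(j))
            outs.append(torch.relu(self.ops[j](agg)))
        return outs[-1]

net = LearnableConnectivity(n_nodes=4, dim=16)
print(net(torch.randn(2, 16)).shape)   # torch.Size([2, 16])
```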
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Self-supervised Neural Architecture Search [41.07083436560303]
We propose a self-supervised neural architecture search (SSNAS) that allows finding novel network models without the need for labeled data.
We show that such a search leads to comparable results to supervised training with a "fully labeled" NAS and that it can improve the performance of self-supervised learning.
arXiv Detail & Related papers (2020-07-03T05:09:30Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones to learn features that perform well under both the relation-based and margin-based classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
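A sketch of the conjoint idea: one shared embedding trained under two mechanisms at once, a softmax/margin head over class logits and a relation head scoring similarity to class prototypes. Architecture details here are placeholders, not ReMarNet's.

```python
import torch
import torch.nn as nn

class DualHeadNet(nn.Module):
    """Shared embedding with a margin/softmax head and a relation head."""
    def __init__(self, in_dim, n_classes, hid=64):
        super().__init__()
        self.embed = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.margin_head = nn.Linear(hid, n_classes)
        self.relation_head = nn.Sequential(nn.Linear(2 * hid, hid),
                                           nn.ReLU(), nn.Linear(hid, 1))

    def forward(self, x, prototypes):
        h = self.embed(x)                                  # (B, hid)
        logits = self.margin_head(h)                       # softmax mechanism
        B, C = h.size(0), prototypes.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(B, C, -1),
                           prototypes.unsqueeze(0).expand(B, C, -1)], dim=-1)
        relations = self.relation_head(pairs).squeeze(-1)  # relation mechanism
        return logits, relations

net = DualHeadNet(in_dim=32, n_classes=5)
x, protos = torch.randn(8, 32), torch.randn(5, 64)
logits, relations = net(x, protos)
# total loss = cross-entropy on logits + a matching loss on relations
```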
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
- Looking back to lower-level information in few-shot learning [4.873362301533825]
We propose the utilization of lower-level, supporting information, namely the feature embeddings of the hidden neural network layers, to improve classification accuracy.
Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance.
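A minimal sketch assuming nearest-centroid few-shot classification on features concatenated from two depths of the backbone (the layer choice, dimensions, and classifier are assumptions, not the paper's exact method):

```python
import torch

def multi_level_centroids(feats_per_layer, labels, n_classes):
    """Build per-class centroids from features taken at several depths
    of the backbone (the lower-level, supporting information)."""
    feats = torch.cat(feats_per_layer, dim=-1)   # (N, d1 + d2 + ...)
    return torch.stack([feats[labels == c].mean(0) for c in range(n_classes)])

def classify(query_feats_per_layer, centroids):
    q = torch.cat(query_feats_per_layer, dim=-1)
    return torch.cdist(q, centroids).argmin(dim=-1)   # nearest centroid

# toy 5-way 1-shot episode with features from two layers of a backbone
support = [torch.randn(5, 64), torch.randn(5, 128)]   # hidden + penultimate
labels = torch.arange(5)
cents = multi_level_centroids(support, labels, n_classes=5)
query = [torch.randn(10, 64), torch.randn(10, 128)]
print(classify(query, cents))
```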
arXiv Detail & Related papers (2020-05-27T20:32:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.