The Case for High-Accuracy Classification: Think Small, Think Many!
- URL: http://arxiv.org/abs/2103.10350v1
- Date: Thu, 18 Mar 2021 16:15:31 GMT
- Title: The Case for High-Accuracy Classification: Think Small, Think Many!
- Authors: Mohammad Hosseini, Mahmudul Hasan
- Abstract summary: We propose an efficient and lightweight deep classification ensemble structure based on a combination of simple color features.
Our evaluation results show considerable improvements in prediction accuracy compared to the popular ResNet-50 model.
- Score: 4.817521691828748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To facilitate implementation of high-accuracy deep neural networks especially
on resource-constrained devices, maintaining low computation requirements is
crucial. Using very deep models for classification purposes not only slows
neural network training and increases inference time, but also requires more
data to reach higher prediction accuracy and to mitigate false positives.
In this paper, we propose an efficient and lightweight deep classification
ensemble structure based on a combination of simple color features, which is
particularly designed for "high-accuracy" image classifications with low false
positives. We designed, implemented, and evaluated our approach for explosion
detection use-case applied to images and videos. Our evaluation results based
on a large test set show considerable improvements in prediction accuracy
compared to the popular ResNet-50 model, while benefiting from 7.64x faster
inference and lower computation cost.
While we applied our approach to explosion detection, our approach is general
and can be applied to other similar classification use cases as well. Given the
insight gained from our experiments, we hence propose a "think small, think
many" philosophy in classification scenarios: that transforming a single,
large, monolithic deep model into a verification-based step model ensemble of
multiple small, simple, lightweight models with narrowed-down color spaces can
possibly lead to predictions with higher accuracy.
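The verification-based ensemble described in the abstract can be sketched as a cascade: several small models, each restricted to a narrowed-down color space, are chained, and a frame is labeled positive only if every stage confirms it. The stage models below are hypothetical stand-in threshold rules on color statistics, not the paper's actual networks or color spaces:

```python
# Hypothetical sketch of the "think small, think many" verification cascade.
# Each stage is a small, cheap classifier over one narrowed-down color
# feature; a positive prediction requires every stage to agree, which is
# what keeps false positives low.

def red_channel_stage(pixel_stats):
    # Stage 1: explosion-like frames tend to have a strong red component.
    return pixel_stats["red_mean"] > 0.6

def saturation_stage(pixel_stats):
    # Stage 2: verify with a different narrowed color feature (saturation).
    return pixel_stats["saturation_mean"] > 0.5

def brightness_stage(pixel_stats):
    # Stage 3: final lightweight verification on overall brightness.
    return pixel_stats["brightness_mean"] > 0.7

STAGES = [red_channel_stage, saturation_stage, brightness_stage]

def cascade_predict(pixel_stats):
    """Return True only if every small model in the ensemble confirms.

    A single dissenting stage rejects the frame, and most negative frames
    exit at the first stage, so the average inference cost stays close to
    one small model rather than one large monolithic one.
    """
    for stage in STAGES:
        if not stage(pixel_stats):
            return False  # early rejection: no later stage runs
    return True

fiery_frame = {"red_mean": 0.8, "saturation_mean": 0.7, "brightness_mean": 0.9}
dark_frame = {"red_mean": 0.1, "saturation_mean": 0.2, "brightness_mean": 0.1}
```

The speedup claimed over a monolithic model comes from this early-rejection structure: the full chain runs only on frames that survive every preceding check.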
Related papers
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits! [51.668411293817464]
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often restricted to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation [2.1756081703276]
We propose an efficient and lightweight deep classification ensemble structure.
Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content.
Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost.
arXiv Detail & Related papers (2023-09-10T21:54:03Z)
- Quantifying lottery tickets under label noise: accuracy, calibration, and complexity [6.232071870655069]
Pruning deep neural networks is a widely used strategy to alleviate the computational burden in machine learning.
We use the sparse double descent approach to unequivocally identify and characterise pruned models associated with classification tasks.
arXiv Detail & Related papers (2023-06-21T11:35:59Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z)
- T-RECX: Tiny-Resource Efficient Convolutional neural networks with early-eXit [0.0]
We show how a baseline network can be enhanced by the addition of an early-exit intermediate classifier.
Our technique is optimized specifically for tiny-CNN sized models.
Our results show that T-RecX 1) improves the accuracy of the baseline network, and 2) achieves a 31.58% average reduction in FLOPS in exchange for one percent accuracy across all evaluated models.
arXiv Detail & Related papers (2022-07-14T02:05:43Z)
- Core Risk Minimization using Salient ImageNet [53.616101711801484]
We introduce the Salient Imagenet dataset with more than 1 million soft masks localizing core and spurious features for all 1000 Imagenet classes.
Using this dataset, we first evaluate the reliance of several Imagenet pretrained models (42 total) on spurious features.
Next, we introduce a new learning paradigm called Core Risk Minimization (CoRM) whose objective ensures that the model predicts a class using its core features.
arXiv Detail & Related papers (2022-03-28T01:53:34Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Improving Video Instance Segmentation by Light-weight Temporal Uncertainty Estimates [11.580916951856256]
We present a time-dynamic approach to model uncertainties of instance segmentation networks.
We apply this approach to the detection of false positives and the estimation of prediction quality.
The proposed method only requires a readily trained neural network and video sequence input.
arXiv Detail & Related papers (2020-12-14T13:39:05Z)
- A Partial Regularization Method for Network Compression [0.0]
We propose partial regularization, which penalizes only a subset of parameters rather than all of them (full regularization), to conduct model compression at higher speed.
Experimental results show that, as expected, computational complexity is reduced, with lower running time observed in almost all situations.
Surprisingly, it helps to improve some important metrics such as regression fitting results and classification accuracy in both training and test phases on multiple datasets.
arXiv Detail & Related papers (2020-09-03T00:38:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.