Evaluation of Model Selection for Kernel Fragment Recognition in Corn
Silage
- URL: http://arxiv.org/abs/2004.00292v1
- Date: Wed, 1 Apr 2020 08:56:01 GMT
- Title: Evaluation of Model Selection for Kernel Fragment Recognition in Corn
Silage
- Authors: Christoffer Bøgelund Rasmussen and Thomas B. Moeslund
- Abstract summary: We investigate a number of state of the art CNN models for the task of measuring kernel fragmentation in harvested corn silage.
We show improvements in Average Precision at an Intersection over Union of 0.5 of up to 20 percentage points while also decreasing inference time in comparison to previously published work.
- Score: 25.54556810106467
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model selection when designing deep learning systems for specific use-cases
can be a challenging task, as many options exist and the trade-offs between them
can be difficult to know. Therefore, we investigate a number of state-of-the-art
CNN models for the task of measuring kernel fragmentation in harvested corn
silage. The models are evaluated across a number of feature extractors and
image sizes in order to determine optimal model design choices based upon the
trade-off between model complexity, accuracy, and speed. We show that accuracy
improvements can be made with more complex meta-architectures, and that speed can be
optimised by decreasing the image size with only slight losses in accuracy.
Additionally, we show improvements in Average Precision at an Intersection over
Union of 0.5 of up to 20 percentage points, while also decreasing inference time
in comparison to previously published work. These improvements in model
selection enable systems that can aid farmers in improving their silage quality
while harvesting.
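The headline metric above counts a detection as correct when its Intersection over Union (IoU) with a ground-truth box reaches 0.5. A minimal sketch of the IoU computation, assuming axis-aligned boxes in (x1, y1, x2, y2) format (the box representation is an illustrative assumption, not taken from the paper):

```python
def iou(box_a, box_b):
    # Boxes given as (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    # Intersection rectangle: overlap of the two boxes (may be empty).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Under the AP@0.5 convention, a predicted box is matched as a true positive only if this ratio is at least 0.5 for some unmatched ground-truth box.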
Related papers
- Scoreformer: A Surrogate Model For Large-Scale Prediction of Docking Scores
We present ScoreFormer, a novel graph transformer model designed to accurately predict molecular docking scores.
ScoreFormer achieves competitive performance in docking score prediction and offers a substantial 1.65-fold reduction in inference time compared to existing models.
arXiv Detail & Related papers (2024-06-13T17:31:02Z)
- Towards Fundamentally Scalable Model Selection: Asymptotically Fast Update and Selection
An ideal model selection scheme should support two operations efficiently over a large pool of candidate models.
Previous solutions to model selection require high computational complexity for at least one of these two operations.
We present Standardized Embedder, an empirical realization of isolated model embedding.
arXiv Detail & Related papers (2024-06-11T17:57:49Z)
- REFRESH: Responsible and Efficient Feature Reselection Guided by SHAP Values
REFRESH is a method to reselect features so that additional constraints that are desirable towards model performance can be achieved without having to train several new models.
REFRESH's underlying algorithm is a novel technique using SHAP values and correlation analysis that can approximate the predictions of candidate models without having to train them.
arXiv Detail & Related papers (2024-03-13T18:06:43Z)
- Small Effect Sizes in Malware Detection? Make Harder Train/Test Splits!
Industry practitioners care about small improvements in malware detection accuracy because their models are deployed to hundreds of millions of machines.
Academic research is often restricted to public datasets on the order of ten thousand samples.
We devise an approach to generate a benchmark of difficulty from a pool of available samples.
arXiv Detail & Related papers (2023-12-25T21:25:55Z)
- Neural Language Model Pruning for Automatic Speech Recognition
We study model pruning methods applied to Transformer-based neural network language models for automatic speech recognition.
We explore three aspects of the pruning framework, namely the criterion, method, and scheduler, analyzing their contribution in terms of accuracy and inference speed.
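The pruning criterion mentioned in this summary is commonly instantiated as weight magnitude. A minimal sketch of one-shot magnitude pruning on a flat weight list (a simplified illustration of the general technique, not the method from the paper):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Ties at the threshold are also pruned, so slightly more than the
    requested fraction may be removed when magnitudes repeat.
    """
    k = int(sparsity * len(weights))
    if k == 0:
        return list(weights)
    # The k-th smallest magnitude becomes the pruning threshold.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [w if abs(w) > threshold else 0.0 for w in weights]
```

In practice the method and scheduler then decide whether this is applied once after training or gradually, with the sparsity level increasing over several steps.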
arXiv Detail & Related papers (2023-10-05T10:01:32Z)
- Precision-Recall Divergence Optimization for Generative Modeling with GANs and Normalizing Flows
We develop a novel training method for generative models, such as Generative Adversarial Networks and Normalizing Flows.
We show that achieving a specified precision-recall trade-off corresponds to minimizing a unique $f$-divergence from a family we call the PR-divergences.
Our approach improves the performance of existing state-of-the-art models like BigGAN in terms of either precision or recall when tested on datasets such as ImageNet.
arXiv Detail & Related papers (2023-05-30T10:07:17Z)
- Revisiting the Evaluation of Image Synthesis with GANs
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Part-Based Models Improve Adversarial Robustness
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to segment objects into parts while classifying them.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z)
- Real-time Human Detection Model for Edge Devices
Convolutional Neural Networks (CNNs) have replaced traditional feature extraction and machine learning models in detection and classification tasks.
Lightweight CNN models have been recently introduced for real-time tasks.
This paper suggests a CNN-based lightweight model that can fit on a limited edge device such as Raspberry Pi.
arXiv Detail & Related papers (2021-11-20T18:42:17Z)
- A linearized framework and a new benchmark for model selection for fine-tuning
Fine-tuning from a collection of models pre-trained on different domains is emerging as a technique to improve test accuracy in the low-data regime.
We introduce two new baselines for model selection -- Label-Gradient and Label-Feature Correlation.
Our benchmark highlights accuracy gains with the model zoo compared to fine-tuning ImageNet models.
arXiv Detail & Related papers (2021-01-29T21:57:15Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.