Confident Learning for Object Detection under Model Constraints
- URL: http://arxiv.org/abs/2601.11640v1
- Date: Wed, 14 Jan 2026 15:32:08 GMT
- Title: Confident Learning for Object Detection under Model Constraints
- Authors: Yingda Yu, Jiaqi Xuan, Shuhui Shi, Xuanyu Teng, Shuyang Xu, Guanchao Tong,
- Abstract summary: Agricultural weed detection on edge devices is subject to strict constraints on model capacity, computational resources, and real-time inference latency. This paper proposes Model-Driven Data Correction (MDDC), a data-centric framework that enhances detection performance by iteratively diagnosing and correcting data quality deficiencies.
- Score: 0.05131152350448099
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Agricultural weed detection on edge devices is subject to strict constraints on model capacity, computational resources, and real-time inference latency, which prevent performance improvements through model scaling or ensembling. This paper proposes Model-Driven Data Correction (MDDC), a data-centric framework that enhances detection performance by iteratively diagnosing and correcting data quality deficiencies. An automated error analysis procedure categorizes detection failures into four types: false negatives, false positives, class confusion, and localization errors. These error patterns are systematically addressed through a structured train-fix-retrain pipeline with version-controlled data management. Experimental results on multiple weed detection datasets demonstrate consistent improvements of 5-25 percent in mAP at 0.5 using a fixed lightweight detector (YOLOv8n), indicating that systematic data quality optimization can effectively alleviate performance bottlenecks under fixed model capacity constraints.
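The four-way error taxonomy above suggests a simple per-prediction diagnosis by IoU matching against ground truth. The sketch below is an illustrative assumption (the function names, thresholds, and box layout are mine, not the paper's); MDDC's actual diagnosis procedure may differ.

```python
# Illustrative sketch of a four-way detection-error diagnosis (assumed
# thresholds and data layout; not the paper's implementation).

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def categorize(pred, gts, iou_hi=0.5, iou_lo=0.1):
    """Assign one prediction to an error category against ground truth.
    Ground-truth boxes matched by no prediction would be counted as
    false negatives in a full dataset pass."""
    best = max(gts, key=lambda g: iou(pred["box"], g["box"]), default=None)
    ov = iou(pred["box"], best["box"]) if best else 0.0
    if ov < iou_lo:
        return "false_positive"      # no plausible ground-truth match
    if pred["cls"] != best["cls"]:
        return "class_confusion"     # right place, wrong class
    if ov < iou_hi:
        return "localization_error"  # right class, box too loose
    return "true_positive"
```

A full train-fix-retrain pass would aggregate these categories per image, route each category to its corrective action (relabeling, box tightening, hard-negative mining), and re-evaluate under the same fixed detector.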
Related papers
- Reliably Detecting Model Failures in Deployment Without Labels [14.069153343960734]
This paper formalizes and addresses the problem of post-deployment deterioration (PDD) monitoring. We propose D3M, a practical and efficient monitoring algorithm based on the disagreement of predictive models. Empirical results on both a standard benchmark and a real-world large-scale internal medicine dataset demonstrate the effectiveness of the framework.
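The disagreement signal at the heart of such monitoring can be sketched generically. This is not the paper's D3M statistic, only an assumed illustration of label-free monitoring via ensemble disagreement:

```python
import numpy as np

def disagreement_rate(models, x):
    """Fraction of samples on which the models' hard (integer-label)
    predictions are not all identical."""
    preds = np.stack([m(x) for m in models])  # (n_models, n_samples)
    # If min == max along the model axis, every model agreed on that sample.
    return float(np.mean(preds.min(axis=0) != preds.max(axis=0)))
```

A rising rate on unlabeled deployment batches, judged against a threshold calibrated on held-out data, can serve as a deterioration alarm without ground-truth labels.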
arXiv Detail & Related papers (2025-06-05T13:56:18Z)
- AutoML for Multi-Class Anomaly Compensation of Sensor Drift [44.63945828405864]
Sensor drift degrades the performance of machine learning models over time. The standard cross-validation method overestimates performance by inadequately accounting for drift. This paper presents two solutions: (1) a novel sensor drift compensation learning paradigm for validating models, and (2) automated machine learning (AutoML) techniques to enhance classification performance and compensate for sensor drift.
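The drift pitfall can be made concrete with forward-chaining splits, which always validate on data collected after the training window; a shuffled split lets future samples leak into training and hides drift. This is a standard scheme shown as an assumed illustration, not necessarily the paper's validation paradigm:

```python
def time_ordered_splits(n, n_folds=5):
    """Yield (train, validation) index lists in which validation indices
    always follow the training window, unlike shuffled cross-validation."""
    fold = n // (n_folds + 1)
    for k in range(1, n_folds + 1):
        yield list(range(0, k * fold)), list(range(k * fold, (k + 1) * fold))
```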
arXiv Detail & Related papers (2025-02-26T14:34:53Z)
- Robust Confinement State Classification with Uncertainty Quantification through Ensembled Data-Driven Methods [39.27649013012046]
We develop methods for confinement state classification with uncertainty quantification and model robustness. We focus on off-line analysis for TCV discharges, distinguishing L-mode, H-mode, and an in-between dithering phase (D). A dataset of 302 TCV discharges is fully labeled, and will be publicly released.
arXiv Detail & Related papers (2025-02-24T18:25:22Z)
- Machine Learning for ALSFRS-R Score Prediction: Making Sense of the Sensor Data [44.99833362998488]
Amyotrophic Lateral Sclerosis (ALS) is a rapidly progressive neurodegenerative disease that presents individuals with limited treatment options.
The present investigation, spearheaded by the iDPP@CLEF 2024 challenge, focuses on utilizing sensor-derived data obtained through an app.
arXiv Detail & Related papers (2024-07-10T19:17:23Z)
- TeVAE: A Variational Autoencoder Approach for Discrete Online Anomaly Detection in Variable-state Multivariate Time-series Data [0.017476232824732776]
We propose a temporal variational autoencoder (TeVAE) that can detect anomalies with minimal false positives when trained on unlabelled data.
When properly configured, TeVAE raises false alarms only 6% of the time and detects 65% of the anomalies present.
arXiv Detail & Related papers (2024-07-09T13:32:33Z)
- An Improved Anomaly Detection Model for Automated Inspection of Power Line Insulators [0.0]
Inspection of insulators is important to ensure reliable operation of the power system.
Deep learning is being increasingly exploited to automate the inspection process.
This article proposes the use of anomaly detection along with object detection in a two-stage approach for incipient fault detection.
arXiv Detail & Related papers (2023-11-14T11:36:20Z)
- ImDiffusion: Imputed Diffusion Models for Multivariate Time Series Anomaly Detection [44.21198064126152]
We propose a novel anomaly detection framework named ImDiffusion.
ImDiffusion combines time series imputation and diffusion models to achieve accurate and robust anomaly detection.
We evaluate the performance of ImDiffusion via extensive experiments on benchmark datasets.
arXiv Detail & Related papers (2023-07-03T04:57:40Z)
- Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
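One assumed form of such an objective: ordinary cross-entropy on labeled data plus a penalty that is smallest when predictions on the uncertainty dataset are maximally unconfident (uniform). The exact loss and weighting below are my illustration, not the paper's:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def confidence_min_loss(logits_lab, labels, logits_unc, alpha=0.5):
    """Cross-entropy on labeled data plus a term minimized when the
    predicted distribution on the uncertainty dataset is uniform."""
    p_lab = softmax(logits_lab)
    ce = -np.mean(np.log(p_lab[np.arange(len(labels)), labels] + 1e-12))
    # Mean negative log-probability over all classes is smallest when the
    # predicted distribution spreads mass uniformly (low confidence).
    conf_pen = -np.mean(np.log(softmax(logits_unc) + 1e-12))
    return ce + alpha * conf_pen
```

A model trained with this extra term stays accurate on labeled data while refusing to be confident on inputs resembling the uncertainty set, which is the conservative behavior the paper targets.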
arXiv Detail & Related papers (2023-06-08T07:05:36Z)
- A Computer Vision Enabled damage detection model with improved YOLOv5 based on Transformer Prediction Head [0.0]
Current state-of-the-art deep learning (DL)-based damage detection models often lack superior feature extraction capability in complex and noisy environments.
DenseSPH-YOLOv5 is a real-time DL-based high-performance damage detection model where DenseNet blocks have been integrated with the backbone.
DenseSPH-YOLOv5 obtains a mean average precision (mAP) value of 85.25 %, F1-score of 81.18 %, and precision (P) value of 89.51 % outperforming current state-of-the-art models.
arXiv Detail & Related papers (2023-03-07T22:53:36Z)
- Discover, Explanation, Improvement: An Automatic Slice Detection Framework for Natural Language Processing [72.14557106085284]
Slice detection models (SDMs) automatically identify underperforming groups of datapoints.
This paper proposes a benchmark named "Discover, Explain, Improve (DEIM)" for NLP classification tasks.
Our evaluation shows that Edisa can accurately select error-prone datapoints with informative semantic features.
arXiv Detail & Related papers (2022-11-08T19:00:00Z)
- An Outlier Exposure Approach to Improve Visual Anomaly Detection Performance for Mobile Robots [76.36017224414523]
We consider the problem of building visual anomaly detection systems for mobile robots.
Standard anomaly detection models are trained using large datasets composed only of non-anomalous data.
We tackle the problem of exploiting these data to improve the performance of a Real-NVP anomaly detection model.
arXiv Detail & Related papers (2022-09-20T15:18:13Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Out-Of-Bag Anomaly Detection [0.9449650062296822]
Data anomalies are ubiquitous in real world datasets, and can have an adverse impact on machine learning (ML) systems.
We propose a novel model-based anomaly detection method that we call Out-of-Bag anomaly detection.
We show our method can improve the accuracy and reliability of an ML system as a data pre-processing step via a case study on home valuation.
arXiv Detail & Related papers (2020-09-20T06:01:52Z)
- How Training Data Impacts Performance in Learning-based Control [67.7875109298865]
This paper derives an analytical relationship between the density of the training data and the control performance.
We formulate a quality measure for the data set, which we refer to as the $\rho$-gap.
We show how the $\rho$-gap can be applied to a feedback linearizing control law.
arXiv Detail & Related papers (2020-05-25T12:13:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.