Rethinking Evaluation of Infrared Small Target Detection
- URL: http://arxiv.org/abs/2509.16888v2
- Date: Tue, 23 Sep 2025 03:06:34 GMT
- Title: Rethinking Evaluation of Infrared Small Target Detection
- Authors: Youwei Pang, Xiaoqi Zhao, Lihe Zhang, Huchuan Lu, Georges El Fakhri, Xiaofeng Liu, Shijian Lu,
- Abstract summary: This paper introduces a hybrid-level metric incorporating pixel- and target-level performance, proposes a systematic error analysis method, and emphasizes the importance of cross-dataset evaluation. An open-source toolkit has been released to facilitate standardized benchmarking.
- Score: 105.59753496831739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As an essential vision task, infrared small target detection (IRSTD) has seen significant advancements through deep learning. However, critical limitations in current evaluation protocols impede further progress. First, existing methods rely on fragmented pixel- and target-level specific metrics, which fail to provide a comprehensive view of model capabilities. Second, an excessive emphasis on overall performance scores obscures crucial error analysis, which is vital for identifying failure modes and improving real-world system performance. Third, the field predominantly adopts dataset-specific training-testing paradigms, hindering the understanding of model robustness and generalization across diverse infrared scenarios. This paper addresses these issues by introducing a hybrid-level metric incorporating pixel- and target-level performance, proposing a systematic error analysis method, and emphasizing the importance of cross-dataset evaluation. These aim to offer a more thorough and rational hierarchical analysis framework, ultimately fostering the development of more effective and robust IRSTD models. An open-source toolkit has been released to facilitate standardized benchmarking.
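The hybrid pixel- and target-level idea can be illustrated with a minimal sketch. This is not the paper's released toolkit: the centroid-matching rule, the distance threshold, and the returned metric names are assumptions chosen for illustration. It combines pixel-level IoU with target-level probability of detection (Pd) and a false-alarm count over connected components of binary masks.

```python
import numpy as np

def _components(mask):
    """Collect 4-connected components of a binary mask as coordinate arrays."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    comps = []
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                stack, pixels = [(i, j)], []
                seen[i, j] = True
                while stack:  # iterative flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                comps.append(np.array(pixels, dtype=float))
    return comps

def hybrid_irstd_metrics(pred, gt, dist_thresh=3.0):
    """Illustrative hybrid IRSTD evaluation on binary masks (assumed scheme).

    Pixel level: IoU over all pixels.
    Target level: a ground-truth target counts as detected if some unmatched
    predicted component's centroid lies within dist_thresh of its centroid;
    unmatched predicted components count as false alarms.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)

    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = float(inter) / union if union else 1.0

    gt_cent = [c.mean(axis=0) for c in _components(gt)]
    pr_cent = [c.mean(axis=0) for c in _components(pred)]

    matched, detected = set(), 0
    for g in gt_cent:
        for k, p in enumerate(pr_cent):
            if k not in matched and np.hypot(*(g - p)) <= dist_thresh:
                matched.add(k)
                detected += 1
                break
    pd = detected / len(gt_cent) if gt_cent else 1.0
    false_alarms = len(pr_cent) - len(matched)
    return {"iou": iou, "pd": pd, "false_alarms": false_alarms}
```

Reporting these three numbers together, rather than any one alone, is the point of a hybrid-level protocol: a model can score well on IoU while missing small targets entirely, or detect every target while hallucinating false alarms.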
Related papers
- Pushing the Boundaries of Interpretability: Incremental Enhancements to the Explainable Boosting Machine [1.2461503242570642]
This paper aims at improving the Explainable Boosting Machine (EBM), a state-of-the-art glassbox model that delivers both high accuracy and complete transparency. The work is positioned as a critical step toward developing machine learning systems that are robust, equitable, and transparent.
arXiv Detail & Related papers (2025-11-29T15:46:13Z) - AI-Generated Image Detection: An Empirical Study and Future Research Directions [6.891145787446519]
Threats posed by AI-generated media, particularly deepfakes, are raising significant challenges for forensics. While several forensic methods have been proposed, they suffer from three critical gaps. These limitations hinder fair comparison, obscure true robustness, and restrict deployment in security-critical applications.
arXiv Detail & Related papers (2025-11-04T18:13:48Z) - RoHOI: Robustness Benchmark for Human-Object Interaction Detection [78.18946529195254]
Human-Object Interaction (HOI) detection is crucial for robot-human assistance, enabling context-aware support. We introduce the first robustness benchmark for HOI detection, evaluating model resilience under diverse challenges. Our benchmark, RoHOI, includes 20 corruption types based on the HICO-DET and V-COCO datasets and a new robustness-focused metric.
arXiv Detail & Related papers (2025-07-12T01:58:04Z) - Offline Model-Based Optimization: Comprehensive Review [61.91350077539443]
Offline model-based optimization is a fundamental challenge in science and engineering, where the goal is to optimize black-box functions using only offline datasets. Recent advances in model-based optimization have harnessed the generalization capabilities of deep neural networks to develop offline-specific surrogate and generative models. Despite its growing impact in accelerating scientific discovery, the field lacks a comprehensive review.
arXiv Detail & Related papers (2025-03-21T16:35:02Z) - Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward
Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z) - Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for
Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, which is a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z) - Small Object Detection via Coarse-to-fine Proposal Generation and
Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z) - EvCenterNet: Uncertainty Estimation for Object Detection using
Evidential Learning [26.535329379980094]
EvCenterNet is a novel uncertainty-aware 2D object detection framework.
We employ evidential learning to estimate both classification and regression uncertainties.
We train our model on the KITTI dataset and evaluate it on challenging out-of-distribution datasets.
arXiv Detail & Related papers (2023-03-06T11:07:11Z) - One-Stage Cascade Refinement Networks for Infrared Small Target
Detection [21.28595135499812]
Single-frame InfraRed Small Target (SIRST) detection has been a challenging task due to the lack of distinctive inherent target characteristics.
We present a new research benchmark for infrared small target detection consisting of the SIRST-V2 dataset of real-world, high-resolution single-frame targets.
arXiv Detail & Related papers (2022-12-16T13:37:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.