Noise-Robust Tiny Object Localization with Flows
- URL: http://arxiv.org/abs/2601.00617v1
- Date: Fri, 02 Jan 2026 09:16:55 GMT
- Title: Noise-Robust Tiny Object Localization with Flows
- Authors: Huixin Sun, Linlin Yang, Ronyu Chen, Kerui Gu, Baochang Zhang, Angela Yao, Xianbin Cao
- Abstract summary: We propose a noise-robust localization framework leveraging normalizing flows for flexible error modeling and uncertainty-guided optimization. Our method captures complex, non-Gaussian prediction distributions through flow-based error modeling, enabling robust learning under noisy supervision. An uncertainty-aware gradient modulation mechanism further suppresses learning from high-uncertainty, noise-prone samples, mitigating overfitting while stabilizing training.
- Score: 63.60972031108944
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite significant advances in generic object detection, a persistent performance gap remains for tiny objects compared to normal-scale objects. We demonstrate that tiny objects are highly sensitive to annotation noise, where optimizing strict localization objectives risks noise overfitting. To address this, we propose Tiny Object Localization with Flows (TOLF), a noise-robust localization framework leveraging normalizing flows for flexible error modeling and uncertainty-guided optimization. Our method captures complex, non-Gaussian prediction distributions through flow-based error modeling, enabling robust learning under noisy supervision. An uncertainty-aware gradient modulation mechanism further suppresses learning from high-uncertainty, noise-prone samples, mitigating overfitting while stabilizing training. Extensive experiments across three datasets validate our approach's effectiveness. In particular, TOLF boosts the DINO baseline by 1.2% AP on the AI-TOD dataset.
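As a rough illustration only (the paper's actual formulation relies on normalizing flows, whose details are not given in this abstract), the uncertainty-aware gradient modulation idea resembles classical heteroscedastic loss weighting: a predicted per-sample uncertainty scales down the loss contribution, and hence the gradient, of noise-prone boxes. The function names below are hypothetical, not from the paper.

```python
import math

def sample_weight(log_sigma):
    """Down-weighting factor for a sample with predicted log-uncertainty.

    Larger log_sigma (more uncertain, likely noisy annotation) yields a
    smaller weight, suppressing that sample's gradient.
    """
    return math.exp(-log_sigma)

def modulated_loss(errors, log_sigmas):
    """Uncertainty-weighted localization loss over a batch.

    The additive log_sigma term acts as a regularizer so the model cannot
    trivially inflate its uncertainty on every sample.
    """
    total = 0.0
    for err, log_sigma in zip(errors, log_sigmas):
        total += sample_weight(log_sigma) * abs(err) + log_sigma
    return total / len(errors)
```

With zero predicted uncertainty the weight is 1 and the loss reduces to a plain absolute error; as the predicted uncertainty grows, the error term is damped exponentially.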
Related papers
- Small Object Detection in Complex Backgrounds with Multi-Scale Attention and Global Relation Modeling [8.24377869183113]
Small object detection under complex backgrounds is a challenging task due to severe feature degradation, weak semantic representation, and inaccurate localization. Existing detection frameworks are mainly designed for general objects. We propose a multi-level feature enhancement and global relation modeling framework tailored for small object detection.
arXiv Detail & Related papers (2026-03-04T06:57:46Z)
- Stable Neighbor Denoising for Source-free Domain Adaptive Segmentation [91.83820250747935]
Pseudo-label noise is mainly contained in unstable samples in which predictions of most pixels undergo significant variations during self-training.
We introduce the Stable Neighbor Denoising (SND) approach, which effectively discovers highly correlated stable and unstable samples.
SND consistently outperforms state-of-the-art methods in various SFUDA semantic segmentation settings.
arXiv Detail & Related papers (2024-06-10T21:44:52Z)
- Training More Robust Classification Model via Discriminative Loss and Gaussian Noise Injection [7.535952418691443]
We introduce a loss function applied at the penultimate layer that explicitly enforces intra-class compactness. We also propose a class-wise feature alignment mechanism that brings noisy data clusters closer to their clean counterparts. Our approach significantly reinforces model robustness to various perturbations while maintaining high accuracy on clean data.
arXiv Detail & Related papers (2024-05-28T18:10:45Z)
- ROPO: Robust Preference Optimization for Large Language Models [59.10763211091664]
We propose an iterative alignment approach that integrates noise-tolerance and filtering of noisy samples without the aid of external models.
Experiments on three widely-used datasets with Mistral-7B and Llama-2-7B demonstrate that ROPO significantly outperforms existing preference alignment methods.
arXiv Detail & Related papers (2024-04-05T13:58:51Z)
- Iso-Diffusion: Improving Diffusion Probabilistic Models Using the Isotropy of the Additive Gaussian Noise [0.0]
We show how to use the isotropy of the additive noise as a constraint on the objective function to enhance the fidelity of DDPMs. Our approach is simple and can be applied to any DDPM variant.
arXiv Detail & Related papers (2024-03-25T14:05:52Z)
- Impact of Noisy Supervision in Foundation Model Learning [91.56591923244943]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets. We propose a tuning method (NMTune) to affine the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z)
- Dynamic Addition of Noise in a Diffusion Model for Anomaly Detection [2.209921757303168]
Diffusion models have found valuable applications in anomaly detection by capturing the nominal data distribution and identifying anomalies via reconstruction.
Despite their merits, they struggle to localize anomalies of varying scales, especially larger anomalies such as entire missing components.
We present a novel framework that enhances the capability of diffusion models by extending the previously introduced implicit conditioning approach of Meng et al. (2022) in three significant ways.
arXiv Detail & Related papers (2024-01-09T09:57:38Z)
- Dynamic Tiling: A Model-Agnostic, Adaptive, Scalable, and Inference-Data-Centric Approach for Efficient and Accurate Small Object Detection [3.8332251841430423]
Dynamic Tiling is a model-agnostic, adaptive, and scalable approach for small object detection.
Our method effectively resolves fragmented objects, improves detection accuracy, and minimizes computational overhead.
Overall, Dynamic Tiling outperforms existing model-agnostic uniform cropping methods.
arXiv Detail & Related papers (2023-09-20T05:25:12Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods typically require access to the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences arising from its use.