Tackling the Background Bias in Sparse Object Detection via Cropped
Windows
- URL: http://arxiv.org/abs/2106.02288v1
- Date: Fri, 4 Jun 2021 06:59:56 GMT
- Title: Tackling the Background Bias in Sparse Object Detection via Cropped
Windows
- Authors: Leon Amadeus Varga, Andreas Zell
- Abstract summary: We propose a simple tiling method that improves the detection capability in the remote sensing case without modifying the model itself.
The procedure was validated on three different data sets and outperformed similar approaches in performance and speed.
- Score: 17.547911599819837
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object detection on Unmanned Aerial Vehicles (UAVs) is still a challenging
task. The recordings are mostly sparse and contain only small objects. In this
work, we propose a simple tiling method that improves the detection capability
in the remote sensing case without modifying the model itself. By reducing the
background bias and enabling the usage of higher image resolutions during
training, our method can improve the performance of models substantially. The
procedure was validated on three different data sets and outperformed similar
approaches in performance and speed.
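For illustration, a minimal sketch of the general tiling idea described in the abstract: the full-resolution aerial image is split into overlapping crops, a detector runs on each crop, and the resulting boxes are shifted back into full-image coordinates. The tile size, overlap, and the `detect` interface below are assumptions for the sketch, not the authors' exact procedure.

```python
# Minimal sketch of cropped-window tiling for sparse aerial detection.
# Tile size, overlap, and the detect() interface are illustrative assumptions.
import numpy as np


def tile_image(image: np.ndarray, tile: int = 1024, overlap: int = 128):
    """Yield (x0, y0, crop) for overlapping square crops of an (H, W, C) image."""
    h, w = image.shape[:2]
    step = tile - overlap
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            crop = image[y0:min(y0 + tile, h), x0:min(x0 + tile, w)]
            yield x0, y0, crop


def detect_tiled(image: np.ndarray, detect):
    """Run detect(crop) -> [(x_min, y_min, x_max, y_max, score), ...] on each
    crop and map every box back to full-image coordinates."""
    boxes = []
    for x0, y0, crop in tile_image(image):
        for x_min, y_min, x_max, y_max, score in detect(crop):
            boxes.append((x_min + x0, y_min + y0, x_max + x0, y_max + y0, score))
    return boxes  # in practice, cross-tile NMS would merge duplicate detections
```

Any detector wrapper that returns boxes as (x_min, y_min, x_max, y_max, score) can be plugged in as `detect`; during training, the same crops can serve as higher-resolution samples that contain proportionally less background.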
Related papers
- SFTrack: A Robust Scale and Motion Adaptive Algorithm for Tracking Small and Fast Moving Objects [2.9803250365852443]
This paper addresses the problem of multi-object tracking in Unmanned Aerial Vehicle (UAV) footage.
It plays a critical role in various UAV applications, including traffic monitoring systems and real-time suspect tracking by the police.
We propose a new tracking strategy, which initiates the tracking of target objects from low-confidence detections.
arXiv Detail & Related papers (2024-10-26T05:09:20Z)
- DiffUHaul: A Training-Free Method for Object Dragging in Images [78.93531472479202]
We propose a training-free method, dubbed DiffUHaul, for the object dragging task.
We first apply attention masking in each denoising step to make the generation more disentangled across different objects.
In the early denoising steps, we interpolate the attention features between source and target images to smoothly fuse new layouts with the original appearance.
arXiv Detail & Related papers (2024-06-03T17:59:53Z)
- Scale-Invariant Feature Disentanglement via Adversarial Learning for UAV-based Object Detection [18.11107031800982]
We propose to improve single-stage inference accuracy through learning scale-invariant features.
We apply our approach to three state-of-the-art lightweight detection frameworks on three benchmark datasets.
arXiv Detail & Related papers (2024-05-24T11:40:22Z)
- Diffusion-Based Particle-DETR for BEV Perception [94.88305708174796]
Bird-Eye-View (BEV) is one of the most widely-used scene representations for visual perception in Autonomous Vehicles (AVs).
Recent diffusion-based methods offer a promising approach to uncertainty modeling for visual perception but fail to effectively detect small objects in the large coverage of the BEV.
Here, we address this problem by combining the diffusion paradigm with current state-of-the-art 3D object detectors in BEV.
arXiv Detail & Related papers (2023-12-18T09:52:14Z)
- Learned Two-Plane Perspective Prior based Image Resampling for Efficient Object Detection [20.886999159134138]
Real-time efficient perception is critical for autonomous navigation and city scale sensing.
In this work, we propose a learnable geometry-guided prior that incorporates rough geometry of the 3D scene.
Our approach improves detection rate by +4.1 $AP_S$ (+39%) and real-time performance by +5.3 $sAP_S$ (+63%) for small objects over the state of the art (SOTA).
arXiv Detail & Related papers (2023-03-25T00:43:44Z)
- Generalized Few-Shot 3D Object Detection of LiDAR Point Cloud for Autonomous Driving [91.39625612027386]
We propose a novel task, called generalized few-shot 3D object detection, where we have a large amount of training data for common (base) objects, but only a few data for rare (novel) classes.
Specifically, we analyze in-depth differences between images and point clouds, and then present a practical principle for the few-shot setting in the 3D LiDAR dataset.
To solve this task, we propose an incremental fine-tuning method to extend existing 3D detection models to recognize both common and rare objects.
arXiv Detail & Related papers (2023-02-08T07:11:36Z)
- CAMO-MOT: Combined Appearance-Motion Optimization for 3D Multi-Object Tracking with Camera-LiDAR Fusion [34.42289908350286]
3D Multi-object tracking (MOT) ensures consistency during continuous dynamic detection.
It can be challenging to accurately track the irregular motion of objects for LiDAR-based methods.
We propose a novel camera-LiDAR fusion 3D MOT framework based on Combined Appearance-Motion Optimization (CAMO-MOT).
arXiv Detail & Related papers (2022-09-06T14:41:38Z)
- Embracing Single Stride 3D Object Detector with Sparse Transformer [63.179720817019096]
In LiDAR-based 3D object detection for autonomous driving, the ratio of the object size to input scene size is significantly smaller compared to 2D detection cases.
Many 3D detectors directly follow the common practice of 2D detectors, which downsample the feature maps even after quantizing the point clouds.
We propose Single-stride Sparse Transformer (SST) to maintain the original resolution from the beginning to the end of the network.
arXiv Detail & Related papers (2021-12-13T02:12:02Z)
- You Better Look Twice: a new perspective for designing accurate detectors with reduced computations [56.34005280792013]
BLT-net is a new low-computation two-stage object detection architecture.
It reduces computations by separating objects from the background using a very lightweight first stage.
Resulting image proposals are then processed in the second-stage by a highly accurate model.
arXiv Detail & Related papers (2021-07-21T12:39:51Z)
- Frustratingly Simple Few-Shot Object Detection [98.42824677627581]
We find that fine-tuning only the last layer of existing detectors on rare classes is crucial to the few-shot object detection task.
Such a simple approach outperforms the meta-learning methods by roughly 2-20 points on current benchmarks.
arXiv Detail & Related papers (2020-03-16T00:29:14Z)
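The "Frustratingly Simple Few-Shot Object Detection" entry above boils down to freezing a pretrained detector and re-training only its final prediction layer on the rare classes. A minimal sketch of that recipe with a torchvision Faster R-CNN follows; the class count and optimizer settings are illustrative assumptions, not the paper's benchmark configuration.

```python
# Minimal sketch: freeze a pretrained detector and fine-tune only its final
# box classifier/regressor on the few-shot classes (requires torchvision >= 0.13).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 21  # background + base + novel classes (illustrative assumption)

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Freeze all parameters learned during base training ...
for param in model.parameters():
    param.requires_grad = False

# ... then replace the final prediction layer; only this layer stays trainable.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
```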
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.