Improved Handling of Motion Blur in Online Object Detection
- URL: http://arxiv.org/abs/2011.14448v2
- Date: Tue, 30 Mar 2021 14:34:38 GMT
- Authors: Mohamed Sayed, Gabriel Brostow
- Abstract summary: We focus on the details of egomotion-induced blur.
We explore five classes of remedies, where each targets different potential causes for the performance gap between sharp and blurred images.
Beyond deblurring, the other four classes of remedies address multi-scale texture, out-of-distribution testing, label generation, and conditioning by blur-type.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We wish to detect specific categories of objects, for online vision systems
that will run in the real world. Object detection is already very challenging.
It is even harder when the images are blurred, from the camera being in a car
or a hand-held phone. Most existing efforts either focus on sharp images,
with easy-to-label ground truth, or treat motion blur as one of many generic
corruptions.
Instead, we focus especially on the details of egomotion-induced blur. We
explore five classes of remedies, where each targets different potential causes
for the performance gap between sharp and blurred images. For example, first
deblurring an image changes its human interpretability, but at present, only
partly improves object detection. The other four classes of remedies address
multi-scale texture, out-of-distribution testing, label generation, and
conditioning by blur-type. Surprisingly, we discover that custom label
generation aimed at resolving spatial ambiguity, ahead of all others, markedly
improves object detection. Also, in contrast to findings from classification,
we see a noteworthy boost by conditioning our model on bespoke categories of
motion blur.
We validate and cross-breed the different remedies experimentally on blurred
COCO images and real-world blur datasets, producing an easy and practical
favorite model with superior detection rates.
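The listing does not include the paper's code. As a rough, self-contained illustration of how egomotion-style blur might be synthesized on sharp images (e.g. COCO) for the kind of experiments described above, one can build a linear motion-blur kernel and convolve it with an image. All function names and parameters below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def linear_blur_kernel(length, angle_deg, size=15):
    """Build a normalized linear motion-blur kernel.

    A crude stand-in for egomotion-induced blur: a line segment of the
    given length and orientation, rasterized into a size x size grid.
    (Illustrative sketch; not the paper's blur model.)
    """
    kernel = np.zeros((size, size), dtype=np.float64)
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # Sample points densely along the segment and mark the cells they hit.
    for t in np.linspace(-length / 2, length / 2, num=size * 4):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c + t * np.sin(theta)))
        if 0 <= x < size and 0 <= y < size:
            kernel[y, x] = 1.0
    return kernel / kernel.sum()  # normalize so brightness is preserved

def blur_image(img, kernel):
    """Convolve a single-channel image with the kernel (edge padding)."""
    kh, kw = kernel.shape
    pad_y, pad_x = kh // 2, kw // 2
    padded = np.pad(img, ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out
```

Varying `length` and `angle_deg` per image approximates different camera motions; a real pipeline would use a fast FFT-based or library convolution rather than the explicit loops shown here.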
Related papers
- Retrieval Robust to Object Motion Blur [54.34823913494456]
We propose a method for object retrieval in images that are affected by motion blur.
We present the first large-scale datasets for blurred object retrieval.
Our method outperforms state-of-the-art retrieval methods on the new blur-retrieval datasets.
arXiv Detail & Related papers (2024-04-27T23:22:39Z)
- ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) attempts to segment objects that are visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics human behavior when observing vague images, zooming in and out on the camouflaged object.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z)
- Take a Prior from Other Tasks for Severe Blur Removal [52.380201909782684]
We use a cross-level feature learning strategy based on knowledge distillation to learn the priors.
A semantic prior embedding layer with multi-level aggregation and semantic attention transformation then integrates the priors effectively.
Experiments on natural image deblurring benchmarks and real-world images, such as the GoPro and RealBlur datasets, demonstrate our method's effectiveness and ability to generalize.
arXiv Detail & Related papers (2023-02-14T08:30:51Z)
- Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors [0.190365714903665]
We propose a dual adversarial camouflage (DE_DAC) method, composed of two stages to fool human eyes and object detectors simultaneously.
In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images.
In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective.
arXiv Detail & Related papers (2022-10-17T09:07:52Z)
- Thin-Plate Spline Motion Model for Image Animation [9.591298403129532]
Image animation brings life to the static object in the source image according to the driving video.
Recent works attempt to perform motion transfer on arbitrary objects through unsupervised methods without using a priori knowledge.
It remains a significant challenge for current unsupervised methods when there is a large pose gap between the objects in the source and driving images.
arXiv Detail & Related papers (2022-03-27T18:40:55Z)
- A Simple and Effective Use of Object-Centric Images for Long-Tailed Object Detection [56.82077636126353]
We take advantage of object-centric images to improve object detection in scene-centric images.
We present a simple yet surprisingly effective framework to do so.
Our approach can improve the object detection (and instance segmentation) accuracy of rare objects by 50% (and 33%) relatively.
arXiv Detail & Related papers (2021-02-17T17:27:21Z)
- Improving Object Detection with Selective Self-supervised Self-training [62.792445237541145]
We study how to leverage Web images to augment human-curated object detection datasets.
We retrieve Web images by image-to-image search, which incurs less domain shift from the curated data than other search methods.
We propose a novel learning method motivated by two parallel lines of work that explore unlabeled data for image classification.
arXiv Detail & Related papers (2020-07-17T18:05:01Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single image deblurring is really feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.