Trash to Treasure: Low-Light Object Detection via
Decomposition-and-Aggregation
- URL: http://arxiv.org/abs/2309.03548v1
- Date: Thu, 7 Sep 2023 08:11:47 GMT
- Title: Trash to Treasure: Low-Light Object Detection via
Decomposition-and-Aggregation
- Authors: Xiaohan Cui, Long Ma, Tengyu Ma, Jinyuan Liu, Xin Fan, Risheng Liu
- Abstract summary: Object detection in low-light scenarios has attracted much attention in the past few years.
A mainstream and representative scheme introduces enhancers as a pre-processing step for regular detectors.
In this work, we aim to unlock the potential of the enhancer + detector scheme.
- Score: 76.45506517198956
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection in low-light scenarios has attracted much attention in the
past few years. A mainstream and representative scheme introduces enhancers as
a pre-processing step for regular detectors. However, because of the disparity in
task objectives between the enhancer and the detector, this paradigm cannot perform
at its best. In this work, we aim to unlock the potential of the enhancer + detector
scheme. Different from existing works, we extend illumination-based enhancers
(either newly designed or existing) into a scene decomposition module, whose
removed illumination is exploited as an auxiliary input to the detector for
extracting detection-friendly features. A semantic aggregation module is further
established for integrating multi-scale scene-related semantic information in the
context space. In effect, our scheme transforms the "trash" (i.e., the illumination
ignored by the detector) into "treasure" for the detector. Extensive experiments
demonstrate our superiority over other state-of-the-art methods. The code will be
made publicly available upon acceptance.
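To make the described pipeline concrete, below is a minimal, hypothetical PyTorch-style sketch of the decomposition-and-aggregation idea: the enhancer acts as a scene decomposition module, the normally discarded illumination map is re-injected into the detector as an auxiliary cue, and a semantic aggregation module pools multi-scale context. All module names, layer sizes, and the fusion strategy are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the decomposition-and-aggregation idea from the abstract;
# the concrete layers and fusion strategy are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SceneDecomposition(nn.Module):
    """Illumination-based enhancer reused as a decomposition module: predicts an
    illumination map and returns the reflectance together with the normally
    discarded illumination."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, img):
        illum = self.net(img).clamp(min=1e-3)   # illumination map L
        reflect = img / illum                   # enhanced / reflectance image
        return reflect, illum

class SemanticAggregation(nn.Module):
    """Aggregates multi-scale context from a feature map (pyramid-pooling style)."""
    def __init__(self, ch, scales=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(nn.Conv2d(ch, ch, 1) for _ in scales)
        self.scales = scales
        self.fuse = nn.Conv2d(ch * (len(scales) + 1), ch, 1)

    def forward(self, feat):
        h, w = feat.shape[-2:]
        ctx = [feat]
        for s, conv in zip(self.scales, self.branches):
            pooled = F.adaptive_avg_pool2d(feat, s)
            ctx.append(F.interpolate(conv(pooled), size=(h, w),
                                     mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(ctx, dim=1))

class DecomposeAggregateDetector(nn.Module):
    """Turns the 'trash' (removed illumination) into an auxiliary detector cue."""
    def __init__(self, backbone, head, feat_ch=256):
        super().__init__()
        self.decompose = SceneDecomposition()
        self.illum_enc = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.backbone = backbone     # any regular detector backbone (assumed feat_ch output)
        self.aggregate = SemanticAggregation(feat_ch)
        self.head = head             # any regular detection head

    def forward(self, low_light_img):
        reflect, illum = self.decompose(low_light_img)
        feat = self.backbone(reflect)                       # detection-friendly features
        illum_feat = F.interpolate(self.illum_enc(illum), size=feat.shape[-2:],
                                   mode="bilinear", align_corners=False)
        feat = self.aggregate(feat + illum_feat)            # inject illumination as auxiliary cue
        return self.head(feat)
```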
Related papers
- You Only Look Around: Learning Illumination Invariant Feature for Low-light Object Detection [46.636878653865104]
We introduce YOLA, a novel framework for object detection in low-light scenarios.
We learn illumination-invariant features through the Lambertian image formation model.
Our empirical findings reveal significant improvements in low-light object detection tasks.
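The Lambertian-model idea can be made concrete with the classic log-ratio construction: since a Lambertian image factors into reflectance times illumination, differences of log channels largely cancel the illumination term. The snippet below is a generic illustration of that construction, not YOLA's actual implementation.

```python
# Illustrative only: under a Lambertian model I_c = R_c * L, so
# log I_c1 - log I_c2 = log R_c1 - log R_c2, which is (approximately)
# independent of the illumination L. Not YOLA's released code.
import torch

def log_ratio_features(img: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """img: (B, 3, H, W) RGB in [0, 1]; returns 3 illumination-invariant channels."""
    log_img = torch.log(img + eps)
    r, g, b = log_img[:, 0:1], log_img[:, 1:2], log_img[:, 2:3]
    # Pairwise channel differences cancel the shared log-illumination term.
    return torch.cat([r - g, g - b, b - r], dim=1)
```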
arXiv Detail & Related papers (2024-10-24T03:23:50Z)
- MutDet: Mutually Optimizing Pre-training for Remote Sensing Object Detection [36.478530086163744]
We propose a novel Mutually optimizing pre-training framework for remote sensing object Detection, dubbed MutDet.
MutDet fuses the object embeddings and detector features bidirectionally in the last encoder layer, enhancing their information interaction.
Experiments on various settings show new state-of-the-art transfer performance.
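As a rough illustration of what bidirectional fusion between object embeddings and detector features could look like, here is a hypothetical cross-attention sketch; the dimensions and the use of torch.nn.MultiheadAttention are assumptions, not MutDet's code.

```python
# Hypothetical bidirectional fusion via cross-attention; not MutDet's actual code.
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.obj_to_feat = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.feat_to_obj = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, obj_emb: torch.Tensor, feat: torch.Tensor):
        """obj_emb: (B, N, D) object embeddings; feat: (B, HW, D) flattened detector features."""
        # Each side queries the other, so information flows in both directions.
        feat = feat + self.obj_to_feat(query=feat, key=obj_emb, value=obj_emb)[0]
        obj_emb = obj_emb + self.feat_to_obj(query=obj_emb, key=feat, value=feat)[0]
        return obj_emb, feat
```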
arXiv Detail & Related papers (2024-07-13T15:28:15Z)
- FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion [18.829659846356765]
We propose a new synthetic image detector that uses features obtained by inverting an open-source pre-trained Stable Diffusion model.
We show that these inversion features enable our detector to generalize well to unseen generators of high visual fidelity.
We introduce a new challenging evaluation protocol that uses reverse image search to mitigate stylistic and thematic biases in the detector evaluation.
arXiv Detail & Related papers (2024-06-12T19:14:58Z)
- Visible and Clear: Finding Tiny Objects in Difference Map [50.54061010335082]
We introduce a self-reconstruction mechanism into the detection model and discover a strong correlation between it and tiny objects.
Specifically, we insert a reconstruction head in between the neck of a detector, constructing a difference map between the reconstructed image and the input, which shows high sensitivity to tiny objects.
We further develop a Difference Map Guided Feature Enhancement (DGFE) module to make the tiny-object feature representation clearer.
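A minimal sketch of how a reconstruction head and a difference-map-guided enhancement could be wired together is given below; the specific layers and the gating scheme are illustrative assumptions rather than the paper's implementation.

```python
# Minimal sketch of a reconstruction head plus difference-map feature gating;
# layer choices are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReconstructionHead(nn.Module):
    """Reconstructs the input image from a neck feature map and returns a difference map."""
    def __init__(self, in_ch: int = 256):
        super().__init__()
        self.to_rgb = nn.Conv2d(in_ch, 3, 1)

    def forward(self, feat: torch.Tensor, image: torch.Tensor) -> torch.Tensor:
        recon = torch.sigmoid(F.interpolate(self.to_rgb(feat), size=image.shape[-2:],
                                            mode="bilinear", align_corners=False))
        # Tiny objects are hard to reconstruct, so they light up in the residual.
        return (recon - image).abs().mean(dim=1, keepdim=True)  # (B, 1, H, W) difference map

class DifferenceGuidedEnhancement(nn.Module):
    """Uses the difference map as spatial attention over the detector features."""
    def __init__(self):
        super().__init__()
        self.gate = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, feat: torch.Tensor, diff_map: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(self.gate(
            F.interpolate(diff_map, size=feat.shape[-2:], mode="bilinear",
                          align_corners=False)))
        return feat * (1.0 + attn)
```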
arXiv Detail & Related papers (2024-05-18T12:22:26Z)
- Transcending Forgery Specificity with Latent Space Augmentation for Generalizable Deepfake Detection [57.646582245834324]
We propose a simple yet effective deepfake detector called LSDA.
It is based on a simple idea: representations covering a wider variety of forgeries should be able to learn a more generalizable decision boundary.
We show that our proposed method is surprisingly effective and transcends state-of-the-art detectors across several widely used benchmarks.
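The summary only states the guiding idea; one generic way to widen the variety of forgery representations is to interpolate and perturb them in latent space, as in the purely illustrative sketch below (not LSDA's actual method).

```python
# Generic latent-space augmentation sketch (interpolation + noise); it illustrates
# the stated idea of widening forgery representations, not LSDA's actual method.
import torch

def augment_latents(fake_feats: torch.Tensor, alpha: float = 0.4,
                    noise_std: float = 0.05) -> torch.Tensor:
    """fake_feats: (B, D) features of forged samples; returns augmented features."""
    perm = torch.randperm(fake_feats.size(0), device=fake_feats.device)
    lam = torch.distributions.Beta(alpha, alpha).sample(
        (fake_feats.size(0), 1)).to(fake_feats.device)
    mixed = lam * fake_feats + (1.0 - lam) * fake_feats[perm]  # interpolate forgeries
    return mixed + noise_std * torch.randn_like(mixed)         # jitter in latent space
```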
arXiv Detail & Related papers (2023-11-19T09:41:10Z)
- Learning Object-level Point Augmentor for Semi-supervised 3D Object Detection [85.170578641966]
We propose an object-level point augmentor (OPA) that performs local transformations for semi-supervised 3D object detection.
In this way, the resulting augmentor learns to emphasize object instances rather than irrelevant backgrounds.
Experiments on the ScanNet and SUN RGB-D datasets show that the proposed OPA performs favorably against the state-of-the-art methods.
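As an illustration of object-level point augmentation, the sketch below perturbs only the points that fall inside labelled boxes and leaves the background untouched; axis-aligned boxes and the jitter ranges are simplifying assumptions, not OPA's implementation.

```python
# Illustrative sketch of object-level point augmentation: perturb only the points
# inside (pseudo-)labelled boxes. Axis-aligned boxes and the specific jitter
# ranges are simplifying assumptions.
import torch

def augment_object_points(points: torch.Tensor, boxes: torch.Tensor,
                          max_jitter: float = 0.05, max_scale: float = 0.1) -> torch.Tensor:
    """points: (N, 3); boxes: (M, 6) as (cx, cy, cz, dx, dy, dz), axis-aligned."""
    points = points.clone()
    for box in boxes:
        center, dims = box[:3], box[3:]
        mask = ((points - center).abs() <= dims / 2).all(dim=1)
        if mask.any():
            # Random per-object scale and translation applied to the object points only.
            scale = 1.0 + (torch.rand(1, device=points.device) * 2 - 1) * max_scale
            jitter = (torch.rand(3, device=points.device) * 2 - 1) * max_jitter
            points[mask] = (points[mask] - center) * scale + center + jitter
    return points
```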
arXiv Detail & Related papers (2022-12-19T06:56:14Z)
- Unsupervised Change Detection in Hyperspectral Images using Feature Fusion Deep Convolutional Autoencoders [15.978029004247617]
The proposed work aims to build a novel feature extraction system using a feature fusion deep convolutional autoencoder.
It is found that the proposed method clearly outperforms state-of-the-art methods in unsupervised change detection across all the datasets.
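A rough sketch of the stated pipeline, under assumed architecture and distance choices, is shown below: an autoencoder provides multi-layer features for two co-registered images, the features are fused, and a per-pixel feature distance yields an unsupervised change map.

```python
# Rough sketch only: the architecture and distance measure are assumptions,
# not the paper's feature fusion deep convolutional autoencoder.
import torch
import torch.nn as nn

class FusionAutoencoder(nn.Module):
    def __init__(self, bands: int = 128, ch: int = 32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(bands, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.dec = nn.Conv2d(ch, bands, 3, padding=1)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(f1)
        # Return the reconstruction (for unsupervised training) and fused features.
        return self.dec(f2), torch.cat([f1, f2], dim=1)

def change_map(model: FusionAutoencoder, img_t1: torch.Tensor, img_t2: torch.Tensor) -> torch.Tensor:
    """img_t1/img_t2: (1, bands, H, W) co-registered acquisitions; returns (1, H, W)."""
    with torch.no_grad():
        _, feat_t1 = model(img_t1)
        _, feat_t2 = model(img_t2)
    return torch.norm(feat_t1 - feat_t2, dim=1)  # large distance -> likely change
```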
arXiv Detail & Related papers (2021-09-10T16:52:31Z)
- Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the object detection benchmark.
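A generic sketch of adversarial-example-augmented fine-tuning (FGSM-style, assuming the detector returns a scalar training loss and images are normalized to [0, 1]) is shown below; the paper's actual attack and training schedule may differ.

```python
# Generic adversarial fine-tuning sketch, not the paper's exact procedure.
import torch

def fgsm_perturb(detector, images, targets, eps: float = 2.0 / 255.0):
    """Perturbs images along the gradient sign of the detection loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = detector(images, targets)   # assumed to return a scalar training loss
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()

def finetune_step(detector, optimizer, images, targets):
    adv_images = fgsm_perturb(detector, images, targets)
    optimizer.zero_grad()
    # Train on both clean and adversarial views of the batch.
    loss = detector(images, targets) + detector(adv_images, targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```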
arXiv Detail & Related papers (2021-03-23T19:45:26Z)
- Black-box Explanation of Object Detectors via Saliency Maps [66.745167677293]
We propose D-RISE, a method for generating visual explanations for the predictions of object detectors.
We show that D-RISE can be easily applied to different object detectors including one-stage detectors such as YOLOv3 and two-stage detectors such as Faster-RCNN.
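D-RISE builds on the randomized-masking recipe of RISE: probe the detector with masked inputs and weight each mask by how well the target detection survives. The sketch below follows that general recipe; the mask generation and scoring function are simplified assumptions, not D-RISE's exact formulation.

```python
# Sketch of RISE-style black-box saliency for a detector; simplified assumptions.
import torch
import torch.nn.functional as F

def random_masks(n: int, h: int, w: int, grid: int = 8, p: float = 0.5) -> torch.Tensor:
    """n coarse random binary grids upsampled to (n, 1, h, w) soft masks."""
    coarse = (torch.rand(n, 1, grid, grid) < p).float()
    return F.interpolate(coarse, size=(h, w), mode="bilinear", align_corners=False)

def saliency_for_box(detector_score, image: torch.Tensor, n_masks: int = 1000) -> torch.Tensor:
    """detector_score(masked_image) -> scalar confidence for the target detection
    (black-box access only). image: (1, 3, H, W). Returns an (H, W) saliency map."""
    _, _, h, w = image.shape
    masks = random_masks(n_masks, h, w)
    sal = torch.zeros(h, w)
    for m in masks:
        score = detector_score(image * m)   # how well the detection survives masking
        sal += score * m[0]
    return sal / n_masks
```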
arXiv Detail & Related papers (2020-06-05T02:13:35Z)
- Context-Transformer: Tackling Object Confusion for Few-Shot Detection [0.0]
We propose a novel Context-Transformer within a concise deep transfer framework.
Context-Transformer can effectively leverage source-domain object knowledge as guidance.
It can adaptively integrate these relational clues to enhance the discriminative power of the detector.
arXiv Detail & Related papers (2020-03-16T16:17:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.