You Only Look Around: Learning Illumination Invariant Feature for Low-light Object Detection
- URL: http://arxiv.org/abs/2410.18398v1
- Date: Thu, 24 Oct 2024 03:23:50 GMT
- Title: You Only Look Around: Learning Illumination Invariant Feature for Low-light Object Detection
- Authors: Mingbo Hong, Shen Cheng, Haibin Huang, Haoqiang Fan, Shuaicheng Liu
- Abstract summary: We introduce YOLA, a novel framework for object detection in low-light scenarios.
We learn illumination-invariant features through the Lambertian image formation model.
Our empirical findings reveal significant improvements in low-light object detection tasks.
- Score: 46.636878653865104
- License:
- Abstract: In this paper, we introduce YOLA, a novel framework for object detection in low-light scenarios. Unlike previous works, we propose to tackle this challenging problem from the perspective of feature learning. Specifically, we propose to learn illumination-invariant features through the Lambertian image formation model. We observe that, under the Lambertian assumption, it is feasible to approximate illumination-invariant feature maps by exploiting the interrelationships between neighboring color channels and spatially adjacent pixels. By incorporating additional constraints, these relationships can be characterized in the form of convolutional kernels, which can be trained in a detection-driven manner within a network. Towards this end, we introduce a novel module dedicated to the extraction of illumination-invariant features from low-light images, which can be easily integrated into existing object detection frameworks. Our empirical findings reveal significant improvements in low-light object detection tasks, as well as promising results in both well-lit and over-lit scenarios. Code is available at \url{https://github.com/MingboHong/YOLA}.
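The abstract's core idea can be made concrete. Under the Lambertian model, intensity factors as I_c(p) = R_c(p) * L(p) (reflectance times illumination); taking logarithms gives log I_c(p) = log R_c(p) + log L(p), so if the illumination is shared across color channels and approximately constant over a small neighborhood, any weighted combination of log intensities whose weights sum to zero cancels the log L term. The sketch below is one plausible instantiation of the "convolutional kernels with additional constraints" the abstract mentions; it is our hypothetical reading, not the paper's exact module, and all names are ours:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationInvariantConv(nn.Module):
    """Sketch of a learnable, illumination-invariant feature extractor.

    Works in the log domain, where the Lambertian model makes
    illumination an additive term; a kernel whose weights sum to zero
    computes differences of log intensities, cancelling that term.
    """

    def __init__(self, in_channels=3, out_channels=8, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            0.1 * torch.randn(out_channels, in_channels, kernel_size, kernel_size)
        )

    def forward(self, x, eps=1e-4):
        log_x = torch.log(x.clamp(min=eps))  # illumination becomes additive here
        # Zero-sum constraint: subtract each kernel's mean so its weights
        # sum to zero over channels and spatial positions.
        w = self.weight - self.weight.mean(dim=(1, 2, 3), keepdim=True)
        return F.conv2d(log_x, w, padding=w.shape[-1] // 2)
```

Because a smooth illumination change adds (nearly) the same constant to every log intensity under the kernel, the response is unchanged; such features can then be fed to an off-the-shelf detector and trained end-to-end, consistent with the detection-driven training the abstract describes.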
Related papers
- GS-Phong: Meta-Learned 3D Gaussians for Relightable Novel View Synthesis [63.5925701087252]
We propose a novel method for representing a scene illuminated by a point light using a set of relightable 3D Gaussian points.
Inspired by the Blinn-Phong model, our approach decomposes the scene into ambient, diffuse, and specular components (the model is sketched after this entry).
To facilitate the decomposition of geometric information independent of lighting conditions, we introduce a novel bilevel optimization-based meta-learning framework.
arXiv Detail & Related papers (2024-05-31T13:48:54Z)
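For reference, the Blinn-Phong decomposition mentioned above combines an ambient term with a diffuse and a specular term; a toy implementation of the standard model (symbols are the textbook ones, nothing here is specific to GS-Phong):

```python
import numpy as np

def blinn_phong(n, l, v, ka, kd, ks, shininess, light_rgb, ambient_rgb):
    """Classic Blinn-Phong shading: ambient + diffuse + specular.

    n, l, v are unit vectors: surface normal, direction to the light,
    and direction to the viewer. ka/kd/ks are material coefficients.
    """
    h = (l + v) / np.linalg.norm(l + v)                # half-vector
    diffuse = kd * max(float(np.dot(n, l)), 0.0)       # Lambertian term
    specular = ks * max(float(np.dot(n, h)), 0.0) ** shininess
    return ka * ambient_rgb + (diffuse + specular) * light_rgb
```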
- Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation [33.142262765252795]
Detectors trained on well-lit data exhibit significant performance degradation on low-light data due to low visibility.
We propose to boost low-light object detection with zero-shot day-night domain adaptation.
Our method generalizes a detector from well-lit scenarios to low-light ones without requiring real low-light data.
arXiv Detail & Related papers (2023-12-02T20:11:48Z)
- Trash to Treasure: Low-Light Object Detection via Decomposition-and-Aggregation [76.45506517198956]
Object detection in low-light scenarios has attracted much attention in the past few years.
A mainstream and representative scheme introduces enhancers as pre-processing for regular detectors.
In this work, we try to unlock the potential of the enhancer + detector combination.
arXiv Detail & Related papers (2023-09-07T08:11:47Z)
- Weakly-supervised Single-view Image Relighting [17.49214457620938]
We present a learning-based approach to relight a single image of Lambertian and low-frequency specular objects.
Our method enables inserting objects from photographs into new scenes and relighting them under the new environment lighting.
arXiv Detail & Related papers (2023-03-24T08:20:16Z) - Multitask AET with Orthogonal Tangent Regularity for Dark Object
Detection [84.52197307286681]
We propose a novel multitask auto-encoding transformation (MAET) model to enhance object detection in a dark environment.
In a self-supervised manner, the MAET learns the intrinsic visual structure by encoding and decoding a realistic illumination-degrading transformation (a toy degradation is sketched after this entry).
We have achieved the state-of-the-art performance using synthetic and real-world datasets.
arXiv Detail & Related papers (2022-05-06T16:27:14Z)
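The MAET summary above hinges on synthesizing realistic illumination degradation as a self-supervised signal. A toy degradation of the kind such pipelines use (gamma darkening plus noise; MAET's actual transformation is more elaborate, and these names are ours):

```python
import torch

def degrade_illumination(img, gamma_range=(2.0, 3.5), noise_std=0.02):
    """Toy low-light degradation: darken a [0, 1] image with a random
    gamma curve, then add Gaussian sensor-like noise. The sampled gamma
    can serve as the target a self-supervised decoder must recover."""
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    dark = img.clamp(0.0, 1.0) ** gamma
    noisy = (dark + noise_std * torch.randn_like(dark)).clamp(0.0, 1.0)
    return noisy, gamma
```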
- Learning from Pixel-Level Noisy Label: A New Perspective for Light Field Saliency Detection [40.76268976076642]
Saliency detection with light field images is becoming attractive given the abundant cues available.
We propose to learn light field saliency from pixel-level noisy labels obtained from unsupervised, handcrafted-feature-based saliency methods.
arXiv Detail & Related papers (2022-04-28T12:44:08Z)
- High-resolution Iterative Feedback Network for Camouflaged Object Detection [128.893782016078]
Spotting camouflaged objects that are visually assimilated into the background is tricky for object detection algorithms.
We aim to extract the high-resolution texture details to avoid the detail degradation that causes blurred vision in edges and boundaries.
We introduce a novel HitNet to refine the low-resolution representations with high-resolution features in an iterative feedback manner (a structural sketch follows this entry).
arXiv Detail & Related papers (2022-03-22T11:20:21Z)
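A schematic of the iterative feedback idea in the HitNet summary above: repeatedly fuse an upsampled coarse representation with high-resolution features and feed the result back in. This is a structural sketch only; the layer names and fusion rule are our assumptions, not HitNet's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IterativeFeedbackRefiner(nn.Module):
    """Refine coarse features with high-resolution ones over several
    feedback steps; each step's output re-enters the next step."""

    def __init__(self, channels, steps=3):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.steps = steps

    def forward(self, low_res, high_res):
        # Bring the coarse features up to the high-resolution grid once.
        x = F.interpolate(low_res, size=high_res.shape[-2:],
                          mode="bilinear", align_corners=False)
        for _ in range(self.steps):
            # Feedback: the refined estimate is fused with the
            # high-resolution features and becomes the next input.
            x = x + self.fuse(torch.cat([x, high_res], dim=1))
        return x
```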
- Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on the training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z)
- NOD: Taking a Closer Look at Detection under Extreme Low-Light Conditions with Night Object Detection Dataset [25.29013780731876]
Low light proves more difficult for machine cognition than previously thought.
We present a large-scale dataset showing dynamic scenes captured on the streets at night.
We propose to incorporate an image enhancement module into the object detection framework, along with two novel data augmentation techniques.
arXiv Detail & Related papers (2021-10-20T03:44:04Z)
- Sill-Net: Feature Augmentation with Separated Illumination Representation [35.25230715669166]
We propose a novel neural network architecture called Separating-Illumination Network (Sill-Net).
Sill-Net learns to separate illumination features from images; during training, we then augment training samples with these separated illumination features in the feature space (see the sketch after this entry).
Experimental results demonstrate that our approach outperforms current state-of-the-art methods in several object classification benchmarks.
arXiv Detail & Related papers (2021-02-06T09:00:10Z)
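The augmentation step in the Sill-Net summary above can be pictured as recombining one sample's semantic features with illumination features separated from other images. A schematic sketch only; the additive recombination and all names are our assumptions, not the paper's exact formulation:

```python
import torch

def illumination_feature_augment(semantic_feats, illum_feats):
    """Recombine each sample's semantic features with illumination
    features separated from other images in the batch, augmenting the
    training set in feature space rather than pixel space."""
    perm = torch.randperm(illum_feats.size(0))  # shuffle illumination across the batch
    return semantic_feats + illum_feats[perm]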
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.