Low-Light Enhancement Effect on Classification and Detection: An Empirical Study
- URL: http://arxiv.org/abs/2409.14461v1
- Date: Sun, 22 Sep 2024 14:21:31 GMT
- Title: Low-Light Enhancement Effect on Classification and Detection: An Empirical Study
- Authors: Xu Wu, Zhihui Lai, Zhou Jie, Can Gao, Xianxu Hou, Ya-nan Zhang, Linlin Shen
- Abstract summary: We evaluate the impact of Low-Light Image Enhancement (LLIE) methods on high-level vision tasks.
Our findings suggest a disconnect between image enhancement for human visual perception and for machine analysis.
This insight is crucial for the development of LLIE techniques that align with the needs of both human and machine vision.
- Score: 48.6762437869172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-light images are commonly encountered in real-world scenarios, and numerous low-light image enhancement (LLIE) methods have been proposed to improve their visibility. The primary goal of LLIE is to generate clearer images that are more visually pleasing to humans. However, the impact of LLIE methods on high-level vision tasks, such as image classification and object detection, which rely on high-quality image datasets, is not well explored. To explore this impact, we comprehensively evaluate LLIE methods on these high-level vision tasks through an empirical investigation comprising image classification and object detection experiments. The evaluation reveals a dichotomy: while LLIE methods enhance human visual interpretation, their effect on computer vision tasks is inconsistent and can sometimes be harmful. Our findings suggest a disconnect between image enhancement for human visual perception and for machine analysis, indicating a need for LLIE methods tailored to support high-level vision tasks effectively. This insight is crucial for the development of LLIE techniques that align with the needs of both human and machine vision.
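As a concrete anchor for the kind of preprocessing the study evaluates, the simplest LLIE baseline is gamma correction, which lifts dark intensities before an image reaches a downstream classifier or detector. The sketch below is illustrative only (the function and values are not from the paper):

```python
# Toy sketch of a gamma-correction enhancement step on normalized
# pixel intensities in [0, 1]; gamma < 1 brightens shadows.

def gamma_enhance(pixels, gamma=0.5):
    """Apply per-pixel power-law brightening to normalized intensities."""
    return [p ** gamma for p in pixels]

dark = [0.04, 0.09, 0.16, 0.25]          # a dark image patch
bright = gamma_enhance(dark, gamma=0.5)  # roughly [0.2, 0.3, 0.4, 0.5]
```

The paper's point is that such a step can make `bright` look better to a person while leaving a classifier's accuracy unchanged, or even degraded, relative to running it on `dark` directly.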
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - Visibility Enhancement for Low-light Hazy Scenarios [18.605784907840473]
Low-light hazy scenes commonly appear at dusk and early morning.
We propose a novel method to enhance visibility for low-light hazy scenarios.
The framework enhances the visibility of the input image by fully exploiting cues from the different sub-tasks.
A simulation based on the proposed low-light hazy imaging model generates the dataset with ground truths.
arXiv Detail & Related papers (2023-08-01T15:07:38Z) - Self-Aligned Concave Curve: Illumination Enhancement for Unsupervised Adaptation [36.050270650417325]
We propose a learnable illumination enhancement model for high-level vision.
Inspired by real camera response functions, we assume that the illumination enhancement function should be a concave curve.
Our model architecture and training designs mutually benefit each other, forming a powerful unsupervised normal-to-low light adaptation framework.
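The paper's core assumption, that a useful illumination enhancement function is a concave, monotone curve, can be checked numerically on a stand-in curve. The gamma curve below is only an illustrative example of such a curve, not the paper's learned model:

```python
# Illustrative check: a power curve with exponent < 1 is monotone and
# concave on [0, 1], the shape the paper assumes for its learnable
# illumination enhancement function.

def curve(x, gamma=0.4):
    return x ** gamma

xs = [i / 100 for i in range(101)]
ys = [curve(x) for x in xs]

# Discrete concavity test: each midpoint value lies above the chord.
concave = all(
    curve((a + b) / 2) >= (curve(a) + curve(b)) / 2
    for a, b in zip(xs, xs[2:])
)
monotone = all(y1 <= y2 for y1, y2 in zip(ys, ys[1:]))
```

Concavity matters here because it brightens dark regions more aggressively than bright ones, mimicking real camera response functions without blowing out highlights.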
arXiv Detail & Related papers (2022-10-07T19:32:55Z) - Exploring CLIP for Assessing the Look and Feel of Images [87.97623543523858]
We introduce Contrastive Language-Image Pre-training (CLIP) models for assessing both the quality perception (look) and abstract perception (feel) of images in a zero-shot manner.
Our results show that CLIP captures meaningful priors that generalize well to different perceptual assessments.
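The zero-shot assessment idea reduces to comparing an image embedding against embeddings of antonym text prompts. The sketch below uses made-up 3-d vectors as stand-ins for CLIP features (real CLIP embeddings are 512-d and come from the model's image and text encoders):

```python
# Toy sketch of zero-shot quality scoring with antonym prompts.
# All vectors here are fabricated stand-ins for CLIP embeddings.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def softmax2(a, b):
    """Probability assigned to the first of two logits."""
    ea, eb = math.exp(a), math.exp(b)
    return ea / (ea + eb)

image_emb = [0.9, 0.1, 0.2]    # stand-in image embedding
good_prompt = [1.0, 0.0, 0.1]  # stand-in for "a good photo"
bad_prompt = [0.0, 1.0, 0.3]   # stand-in for "a bad photo"

# Quality score: softmax over similarities to the two prompts.
quality = softmax2(cosine(image_emb, good_prompt),
                   cosine(image_emb, bad_prompt))
```

A score above 0.5 means the image embedding sits closer to the "good" prompt; the paper's contribution is showing this prompt-contrast scheme tracks human perceptual judgments without any task-specific training.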
arXiv Detail & Related papers (2022-07-25T17:58:16Z) - Learning with Nested Scene Modeling and Cooperative Architecture Search for Low-Light Vision [95.45256938467237]
Images captured from low-light scenes often suffer from severe degradations.
Deep learning methods have been proposed to enhance the visual quality of low-light images.
It is still challenging to extend these enhancement techniques to handle other Low-Light Vision applications.
arXiv Detail & Related papers (2021-12-09T06:08:31Z) - NOD: Taking a Closer Look at Detection under Extreme Low-Light Conditions with Night Object Detection Dataset [25.29013780731876]
Low light proves more difficult for machine cognition than previously thought.
We present a large-scale dataset showing dynamic scenes captured on the streets at night.
We propose to incorporate an image enhancement module into the object detection framework and two novel data augmentation techniques.
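The data-augmentation idea can be illustrated with a minimal sketch. The function below is a generic synthetic-darkening transform, a common way to simulate low-light conditions from normal-light training images; it is not the paper's actual technique:

```python
# Hypothetical low-light augmentation: scale normalized pixel
# intensities by a random illumination factor in [low, high].
import random

def darken(pixels, low=0.2, high=0.6, seed=None):
    """Return a synthetically darkened copy of a pixel list."""
    rng = random.Random(seed)
    scale = rng.uniform(low, high)
    return [p * scale for p in pixels]

patch = [0.5, 0.8, 0.3]
dark_patch = darken(patch, seed=0)  # a uniformly darker copy of patch
```

Training a detector on both `patch` and `dark_patch` exposes it to the illumination range it will face at night, which is the motivation the paper gives for augmenting rather than only enhancing at test time.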
arXiv Detail & Related papers (2021-10-20T03:44:04Z) - SALYPATH: A Deep-Based Architecture for visual attention prediction [5.068678962285629]
Visual attention is useful for many computer vision applications such as image compression, recognition, and captioning.
We propose an end-to-end deep-based method, called SALYPATH, that efficiently predicts the scanpath of an image through features of a saliency model.
The idea is to predict the scanpath by exploiting the capacity of a deep-based model to predict saliency.
arXiv Detail & Related papers (2021-06-29T08:53:51Z) - Heterogeneous Contrastive Learning: Encoding Spatial Information for Compact Visual Representations [183.03278932562438]
This paper presents an effective approach that adds spatial information to the encoding stage to alleviate the learning inconsistency between the contrastive objective and strong data augmentation operations.
We show that our approach achieves higher efficiency in visual representations and thus delivers a key message to inspire the future research of self-supervised visual representation learning.
arXiv Detail & Related papers (2020-11-19T16:26:25Z) - Unsupervised Foveal Vision Neural Networks with Top-Down Attention [0.3058685580689604]
We propose the fusion of bottom-up saliency and top-down attention employing only unsupervised learning techniques.
We test the performance of the proposed Gamma saliency technique on the Toronto and CAT2000 databases.
We also develop a top-down attention mechanism based on the Gamma saliency, applied to the top layer of CNNs, to improve scene understanding in multi-object images or images with strong background clutter.
arXiv Detail & Related papers (2020-10-18T20:55:49Z) - Rethinking of the Image Salient Object Detection: Object-level Semantic Saliency Re-ranking First, Pixel-wise Saliency Refinement Latter [62.26677215668959]
We propose a lightweight, weakly supervised deep network to coarsely locate semantically salient regions.
We then fuse multiple off-the-shelf deep models on these semantically salient regions as the pixel-wise saliency refinement.
Our method is simple yet effective, and is the first attempt to treat salient object detection primarily as an object-level semantic re-ranking problem.
arXiv Detail & Related papers (2020-08-10T07:12:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.