Learning to Control Camera Exposure via Reinforcement Learning
- URL: http://arxiv.org/abs/2404.01636v1
- Date: Tue, 2 Apr 2024 04:53:39 GMT
- Title: Learning to Control Camera Exposure via Reinforcement Learning
- Authors: Kyunghyun Lee, Ukcheol Shin, Byeong-Uk Lee
- Abstract summary: Poorly adjusted camera exposure often leads to critical failure and performance degradation.
Traditional camera exposure control methods require multiple convergence steps and time-consuming processes.
We propose a new camera exposure control framework that rapidly controls camera exposure while performing real-time processing.
- Score: 8.359692000028891
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Adjusting camera exposure in arbitrary lighting conditions is the first step in ensuring the functionality of computer vision applications. Poorly adjusted camera exposure often leads to critical failure and performance degradation. Traditional camera exposure control methods require multiple convergence steps and time-consuming processes, making them unsuitable for dynamic lighting conditions. In this paper, we propose a new camera exposure control framework that rapidly controls camera exposure while performing real-time processing by exploiting deep reinforcement learning. The proposed framework consists of four contributions: 1) a simplified training ground to simulate the real world's diverse and dynamic lighting changes, 2) a flickering and image attribute-aware reward design, along with a lightweight state design for real-time processing, 3) a static-to-dynamic lighting curriculum to gradually improve the agent's exposure-adjusting capability, and 4) domain randomization techniques to alleviate the limitation of the training ground and achieve seamless generalization in the wild. As a result, our proposed method rapidly reaches a desired exposure level within five steps with real-time processing (1 ms). Also, the acquired images are well exposed and show superiority in various computer vision tasks, such as feature extraction and object detection.
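The reward and state design in contribution 2) is the most concrete part of the framework, so a rough illustration may help. The sketch below is a minimal, hypothetical take on a flicker- and image-attribute-aware reward paired with a lightweight state vector; the brightness target, patch statistics, and weights are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Illustrative constants (assumed, not from the paper): images are normalized
# to [0, 1], and the agent is rewarded for hitting a mid-gray mean intensity.
TARGET_MEAN = 0.5
W_BRIGHTNESS, W_GRADIENT, W_FLICKER = 1.0, 0.5, 0.2  # hand-picked weights


def lightweight_state(image: np.ndarray, prev_exposure: float) -> np.ndarray:
    """Compact state: a few regional brightness statistics plus the last exposure."""
    region_means = [patch.mean() for patch in np.array_split(image, 4)]
    return np.array(region_means + [image.std(), prev_exposure], dtype=np.float32)


def reward(image: np.ndarray, exposure: float, prev_exposure: float) -> float:
    """Reward well-exposed, detailed frames while penalizing exposure flicker."""
    brightness_err = abs(image.mean() - TARGET_MEAN)        # image-attribute term
    gradient_score = np.abs(np.diff(image, axis=0)).mean()   # crude proxy for detail
    flicker = abs(exposure - prev_exposure)                   # large jumps cause flicker
    return (-W_BRIGHTNESS * brightness_err
            + W_GRADIENT * gradient_score
            - W_FLICKER * flicker)
```

A reward of this flavor, evaluated on cheap image statistics rather than full frames, is what lets a small policy update the exposure every frame and makes convergence within about five steps at roughly 1 ms of processing per step plausible.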
Related papers
- Efficient Camera Exposure Control for Visual Odometry via Deep Reinforcement Learning [10.886819238167286]
This study employs a deep reinforcement learning framework to train agents for exposure control.
A lightweight image simulator is developed to facilitate the training process.
Different levels of reward functions are crafted to enhance the VO systems.
arXiv Detail & Related papers (2024-08-30T04:37:52Z)
- Unrolled Decomposed Unpaired Learning for Controllable Low-Light Video Enhancement [48.76608212565327]
This paper makes endeavors in the direction of learning for low-light video enhancement without using paired ground truth.
Compared to low-light image enhancement, enhancing low-light videos is more difficult due to the intertwined effects of noise, exposure, and contrast in the spatial domain, together with the need for temporal coherence.
We propose the Unrolled Decomposed Unpaired Network (UDU-Net) for enhancing low-light videos by unrolling the optimization functions into a deep network to decompose the signal into spatial and temporal-related factors, which are updated iteratively.
arXiv Detail & Related papers (2024-08-22T11:45:11Z)
- Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming [0.728258471592763]
Enhancing low-light images while maintaining natural colors is a challenging problem due to camera processing variations.
We propose Dimma, a semi-supervised approach that aligns with any camera by utilizing a small set of image pairs.
We achieve that by introducing a convolutional mixture density network that generates distorted colors of the scene based on the illumination differences.
arXiv Detail & Related papers (2023-10-14T17:59:46Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task - joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- CuDi: Curve Distillation for Efficient and Controllable Exposure Adjustment [86.97592472794724]
We present Curve Distillation, CuDi, for efficient and controllable exposure adjustment without the requirement of paired or unpaired data.
Our method inherits the zero-reference learning and curve-based framework from an effective low-light image enhancement method, Zero-DCE.
We show that our method is appealing for its fast, robust, and flexible performance, outperforming state-of-the-art methods in real scenes.
arXiv Detail & Related papers (2022-07-28T17:53:46Z)
- Learning Spatially Varying Pixel Exposures for Motion Deblurring [49.07867902677453]
We present a novel approach of leveraging spatially varying pixel exposures for motion deblurring.
Our work illustrates the promising role that focal-plane sensor-processors can play in the future of computational imaging.
arXiv Detail & Related papers (2022-04-14T23:41:49Z)
- Burst Imaging for Light-Constrained Structure-From-Motion [4.125187280299246]
We develop an image processing technique for aiding 3D reconstruction from images acquired in low light conditions.
Our technique, based on burst photography, uses direct methods for image registration within bursts of short exposure time images.
Our method is a significant step towards allowing robots to operate in low-light conditions, with potential applications to robots working in environments such as underground mines and during night-time operation.
arXiv Detail & Related papers (2021-08-23T02:12:40Z)
- Progressive Joint Low-light Enhancement and Noise Removal for Raw Images [10.778200442212334]
Low-light imaging on mobile devices is typically challenging due to insufficient incident light coming through the relatively small aperture.
We propose a low-light image processing framework that performs joint illumination adjustment, color enhancement, and denoising.
Our framework does not need to recollect massive data when being adapted to another camera model.
arXiv Detail & Related papers (2021-06-28T16:43:52Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)