Active Illumination Control in Low-Light Environments using NightHawk
- URL: http://arxiv.org/abs/2506.06394v1
- Date: Thu, 05 Jun 2025 19:14:19 GMT
- Title: Active Illumination Control in Low-Light Environments using NightHawk
- Authors: Yash Turkar, Youngjin Kim, Karthik Dantu
- Abstract summary: NightHawk is a framework that combines active illumination with exposure control to optimize image quality in challenging lighting conditions. Results from field experiments demonstrate improvements in feature detection and matching by 47-197%, enabling more reliable visual estimation in challenging lighting conditions.
- Score: 6.8108562306808835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Subterranean environments such as culverts present significant challenges to robot vision due to dim lighting and lack of distinctive features. Although onboard illumination can help, it introduces issues such as specular reflections, overexposure, and increased power consumption. We propose NightHawk, a framework that combines active illumination with exposure control to optimize image quality in these settings. NightHawk formulates an online Bayesian optimization problem to determine the best light intensity and exposure-time for a given scene. We propose a novel feature detector-based metric to quantify image utility and use it as the cost function for the optimizer. We built NightHawk as an event-triggered recursive optimization pipeline and deployed it on a legged robot navigating a culvert beneath the Erie Canal. Results from field experiments demonstrate improvements in feature detection and matching by 47-197%, enabling more reliable visual estimation in challenging lighting conditions.
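The abstract's online Bayesian optimization over light intensity and exposure time can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the Gaussian-process surrogate, UCB acquisition, normalized [0, 1] search ranges, and the synthetic `image_utility` stand-in for the feature-detector metric are all assumptions.

```python
import numpy as np

def image_utility(intensity, exposure):
    # Hypothetical stand-in for the paper's feature-detector-based metric:
    # utility peaks at a moderate (intensity, exposure) pair and falls off
    # toward under- and overexposure. A real system would score an actual frame.
    return -((intensity - 0.6) ** 2 + (exposure - 0.4) ** 2)

def rbf_kernel(A, B, length_scale=0.2):
    # Squared-exponential kernel on points normalized to the unit square.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior(X, y, Xq, noise=1e-4):
    # Gaussian-process posterior mean/std at query points Xq.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xq)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.clip(1.0 - (v * v).sum(axis=0), 1e-12, None)  # k(x, x) = 1 for RBF
    return mu, np.sqrt(var)

def propose(X, y, rng, n_cand=256, kappa=2.0):
    # Upper-confidence-bound acquisition over random candidate settings.
    Xq = rng.uniform(0.0, 1.0, size=(n_cand, 2))
    mu, sd = gp_posterior(X, y, Xq)
    return Xq[np.argmax(mu + kappa * sd)]

def optimize(n_init=4, n_iter=15, seed=0):
    # Evaluate a few random (intensity, exposure) settings, then refine with BO.
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_init, 2))
    y = np.array([image_utility(*x) for x in X])
    for _ in range(n_iter):
        x_next = propose(X, y, rng)
        X = np.vstack([X, x_next])
        y = np.append(y, image_utility(*x_next))
    best = np.argmax(y)
    return X[best], y[best]
```

Each iteration corresponds to one trial illumination/exposure setting; the event-triggered recursive pipeline described in the abstract would rerun such a loop whenever the scene changes.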
Related papers
- A Poisson-Guided Decomposition Network for Extreme Low-Light Image Enhancement [1.0418202570143507]
We introduce a lightweight deep learning-based method that integrates Retinex-based decomposition with Poisson denoising into a unified encoder-decoder network. Our method significantly improves visibility and brightness in low-light conditions, while preserving image structure and color constancy under ambient illumination.
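The Retinex decomposition at the core of this method splits an image into reflectance and illumination, I ≈ R ⊙ L. A minimal single-scale sketch (not the paper's network; the box-blur illumination estimate, blur radius, and gamma value are illustrative choices):

```python
import numpy as np

def box_blur(img, radius=7):
    # Separable box filter; a crude smooth-illumination estimate standing in
    # for the learned decomposition described in the summary.
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = lambda row: np.convolve(np.pad(row, radius, mode="edge"), k, mode="valid")
    out = np.apply_along_axis(smooth, 0, img)
    return np.apply_along_axis(smooth, 1, out)

def retinex_decompose(img, radius=7, eps=1e-6):
    # Retinex model: image ~ reflectance * illumination.
    illum = np.clip(box_blur(img, radius), eps, None)
    return img / illum, illum

def enhance(img, gamma=0.5):
    # Brighten by gamma-correcting the illumination while keeping reflectance
    # (hence structure and relative color) unchanged.
    refl, illum = retinex_decompose(img)
    return np.clip(refl * illum ** gamma, 0.0, 1.0)
```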
arXiv Detail & Related papers (2025-06-04T21:40:01Z)
- Learning Underwater Active Perception in Simulation [51.205673783866146]
Turbidity can jeopardise the whole mission as it may prevent correct visual documentation of the inspected structures. Previous works have introduced methods to adapt to turbidity and backscattering. We propose a simple yet efficient approach to enable high-quality image acquisition of assets in a broad range of water conditions.
arXiv Detail & Related papers (2025-04-23T06:48:38Z)
- Low-Light Image Enhancement using Event-Based Illumination Estimation [83.81648559951684]
Low-light image enhancement (LLIE) aims to improve the visibility of images captured in poorly lit environments. This paper opens a new avenue from the perspective of estimating the illumination using "temporal-mapping" events. We construct a beam-splitter setup and collect the EvLowLight dataset, which includes images, temporal-mapping events, and motion events.
arXiv Detail & Related papers (2025-04-13T00:01:33Z)
- LENVIZ: A High-Resolution Low-Exposure Night Vision Benchmark Dataset [3.9155038571917005]
The Low Exposure Night Vision (LENVIZ) dataset is a benchmark dataset for low-light image enhancement. LENVIZ offers a wide range of lighting conditions, noise levels, and scene complexities, making it the largest publicly available up-to-4K-resolution benchmark in the field. Each multi-exposure low-light scene has been meticulously curated and edited by expert photographers to ensure optimal image quality.
arXiv Detail & Related papers (2025-03-25T16:12:28Z)
- LUMINA-Net: Low-light Upgrade through Multi-stage Illumination and Noise Adaptation Network for Image Enhancement [26.585985828583304]
Low-light image enhancement (LLIE) is a crucial task in computer vision aimed at enhancing the visual fidelity of images captured under low-illumination conditions. We propose LUMINA-Net, an unsupervised deep learning framework that learns adaptive priors from low-light image pairs by integrating multi-stage illumination and reflectance modules.
arXiv Detail & Related papers (2025-02-21T03:37:58Z)
- Beyond Night Visibility: Adaptive Multi-Scale Fusion of Infrared and Visible Images [49.75771095302775]
We propose an Adaptive Multi-scale Fusion network (AMFusion) for infrared and visible images.
First, we separately fuse spatial and semantic features from infrared and visible images, where the former are used to adjust the light distribution.
Second, we utilize detection features extracted by a pre-trained backbone to guide the fusion of semantic features.
Third, we propose a new illumination loss to constrain the fused image to normal light intensity.
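The summary does not give the form of the new illumination loss, so the following is a speculative sketch: it penalizes deviation of the fused image's global brightness and contrast from nominal "normal light" targets. The `target_mean` and `target_contrast` parameters are invented for illustration.

```python
import numpy as np

def illumination_loss(fused, target_mean=0.5, target_contrast=0.2):
    # Speculative stand-in for AMFusion's illumination loss: pull the fused
    # image (values in [0, 1]) toward a "normal light" global brightness and
    # contrast. The actual loss in the paper may be defined quite differently.
    mean_term = (fused.mean() - target_mean) ** 2
    contrast_term = (fused.std() - target_contrast) ** 2
    return float(mean_term + contrast_term)
```

In training, a term like this would be added to the fusion and detection-guidance losses with a weighting coefficient.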
arXiv Detail & Related papers (2024-03-02T03:52:07Z)
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Enhancing Visibility in Nighttime Haze Images Using Guided APSF and Gradient Adaptive Convolution [28.685126418090338]
Existing nighttime dehazing methods often struggle with handling glow or low-light conditions.
In this paper, we enhance the visibility from a single nighttime haze image by suppressing glow and enhancing low-light regions.
Our method achieves a PSNR of 30.38 dB, outperforming state-of-the-art methods by 13% on the GTA5 nighttime haze dataset.
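PSNR, the metric quoted above, has a standard definition; for 8-bit images, `max_val = 255`:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    ref = np.asarray(reference, dtype=np.float64)
    tst = np.asarray(test, dtype=np.float64)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

For example, a uniform error of 16 gray levels gives MSE = 256 and PSNR = 10·log10(255²/256) ≈ 24.05 dB.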
arXiv Detail & Related papers (2023-08-03T12:58:23Z)
- Improving Aerial Instance Segmentation in the Dark with Self-Supervised Low Light Enhancement [6.500738558466833]
Low-light conditions in aerial images adversely affect the performance of vision-based applications.
We propose a new method that is capable of enhancing the low light image in a self-supervised fashion.
We also propose the generation of a new low light aerial dataset using GANs.
arXiv Detail & Related papers (2021-02-10T12:24:40Z)
- Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is highly competitive with state-of-the-art methods, and has a significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.