A ground-based dataset and a diffusion model for on-orbit low-light image enhancement
- URL: http://arxiv.org/abs/2306.14227v2
- Date: Mon, 8 Apr 2024 12:50:51 GMT
- Title: A ground-based dataset and a diffusion model for on-orbit low-light image enhancement
- Authors: Yiman Zhu, Lu Wang, Jingyi Yuan, Yu Guo
- Abstract summary: We propose a dataset of the Beidou Navigation Satellite for on-orbit low-light image enhancement (LLIE).
To evenly sample poses of different orientations and distances without collision, a collision-free working space and pose stratified sampling are proposed.
To enhance image contrast without over-exposure or blurred details, we design a fused attention mechanism to highlight structures and dark regions.
- Score: 7.815138548685792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: On-orbit servicing is important for maintaining the sustainability of the space environment. A space-based visible camera is an economical and lightweight sensor for situational awareness during on-orbit servicing. However, it is easily affected by low-illumination environments. Recently, deep learning has achieved remarkable success in enhancing natural images, but it is seldom applied in space due to the data bottleneck. In this article, we first propose a dataset of the Beidou Navigation Satellite for on-orbit low-light image enhancement (LLIE). In the automatic data collection scheme, we focus on reducing the domain gap and improving the diversity of the dataset: we collect hardware-in-the-loop images on a robotic simulation testbed that imitates space lighting conditions. To evenly sample poses of different orientations and distances without collision, a collision-free working space and pose stratified sampling are proposed. Afterwards, a novel diffusion model is proposed. To enhance image contrast without over-exposure or blurred details, we design a fused attention mechanism to highlight structures and dark regions. Finally, we compare our method with previous methods on our dataset, which indicates that our method has a better capacity for on-orbit LLIE.
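The two dataset- and model-level ideas in the abstract can be made concrete with short sketches. First, a minimal sketch of pose stratified sampling inside a collision-free working space, assuming the pose space is binned by camera-to-target distance, azimuth, and elevation, and that a simple geometric keep-out test stands in for the testbed's collision check; the bin ranges, sample counts, and collision test are hypothetical and are not the paper's actual parameters.

```python
# Hypothetical sketch of pose stratified sampling in a collision-free workspace.
# Bin edges, sample counts, and the keep-out test are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

DIST_BINS = np.linspace(1.0, 5.0, 5)                # 4 distance strata (metres, assumed)
AZIM_BINS = np.linspace(-np.pi, np.pi, 9)           # 8 azimuth strata
ELEV_BINS = np.linspace(-np.pi / 3, np.pi / 3, 5)   # 4 elevation strata


def is_collision_free(pose):
    """Placeholder for a check against the robotic testbed's workspace geometry."""
    x, y, z = pose[:3]
    return x ** 2 + y ** 2 + z ** 2 > 0.5 ** 2      # assumed keep-out sphere around the target


def sample_poses(per_stratum=2):
    poses = []
    for d_lo, d_hi in zip(DIST_BINS[:-1], DIST_BINS[1:]):
        for a_lo, a_hi in zip(AZIM_BINS[:-1], AZIM_BINS[1:]):
            for e_lo, e_hi in zip(ELEV_BINS[:-1], ELEV_BINS[1:]):
                kept = 0
                while kept < per_stratum:           # draw until the stratum has enough valid poses
                    d = rng.uniform(d_lo, d_hi)
                    az = rng.uniform(a_lo, a_hi)
                    el = rng.uniform(e_lo, e_hi)
                    pose = np.array([d * np.cos(el) * np.cos(az),
                                     d * np.cos(el) * np.sin(az),
                                     d * np.sin(el),
                                     az, el])       # camera position plus look-at angles
                    if is_collision_free(pose):
                        poses.append(pose)
                        kept += 1
    return np.stack(poses)


print(sample_poses().shape)                          # (256, 5): 128 strata x 2 poses each
```

Second, a hedged sketch of a fused attention block that re-weights features with an edge-like structure map and a darkness map derived from the input luminance; the layer sizes, fusion rule, and residual re-weighting are assumptions rather than the paper's exact architecture.

```python
# Hypothetical sketch of a fused attention block for low-light enhancement.
# Layer sizes and the fusion rule are assumptions, not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusedAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # structure branch: predict an edge-like spatial map from the features
        self.structure = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
            nn.Sigmoid(),
        )
        # fuse the structure map and the darkness map into a single attention map
        self.fuse = nn.Conv2d(2, 1, 1)

    def forward(self, feat: torch.Tensor, img: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) features; img: (B, 3, H', W') low-light input in [0, 1]
        struct_map = self.structure(feat)                        # highlights edges/structure
        dark_map = 1.0 - img.mean(dim=1, keepdim=True)           # large where the scene is dark
        if dark_map.shape[-2:] != feat.shape[-2:]:
            dark_map = F.interpolate(dark_map, size=feat.shape[-2:])
        attn = torch.sigmoid(self.fuse(torch.cat([struct_map, dark_map], dim=1)))
        return feat + feat * attn                                # residual re-weighting


# usage sketch
feat = torch.randn(1, 64, 64, 64)
img = torch.rand(1, 3, 256, 256)
print(FusedAttention(64)(feat, img).shape)                       # torch.Size([1, 64, 64, 64])
```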
Related papers
- bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original spatiotemporal resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z) - Towards Robust Event-guided Low-Light Image Enhancement: A Large-Scale Real-World Event-Image Dataset and Novel Approach [7.974102031202597]
We propose a real-world (indoor and outdoor) dataset comprising over 30K pairs of images and events under both low and normal illumination conditions.
Based on the dataset, we propose a novel event-guided LIE approach, called EvLight, towards robust performance in real-world low-light scenes.
arXiv Detail & Related papers (2024-04-01T00:18:17Z) - Boosting Object Detection with Zero-Shot Day-Night Domain Adaptation [33.142262765252795]
Detectors trained on well-lit data exhibit significant performance degradation on low-light data due to low visibility.
We propose to boost low-light object detection with zero-shot day-night domain adaptation.
Our method generalizes a detector from well-lit scenarios to low-light ones without requiring real low-light data.
arXiv Detail & Related papers (2023-12-02T20:11:48Z) - LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z) - Space Debris: Are Deep Learning-based Image Enhancements part of the Solution? [9.117415383776695]
The volume of space debris currently orbiting the Earth is reaching an unsustainable level at an accelerated pace.
The detection, tracking, identification, and differentiation between orbit-defined, registered spacecraft and rogue/inactive space objects is critical to asset protection.
The primary objective of this work is to investigate the validity of Deep Neural Network (DNN) solutions to overcome the limitations and image artefacts most prevalent in images captured with monocular cameras in the visible light spectrum.
arXiv Detail & Related papers (2023-08-01T09:38:41Z) - 6D Camera Relocalization in Visually Ambiguous Extreme Environments [79.68352435957266]
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains.
Our method achieves comparable performance with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% training data.
arXiv Detail & Related papers (2022-07-13T16:40:02Z) - Spatially-Varying Outdoor Lighting Estimation from Intrinsics [66.04683041837784]
We present SOLID-Net, a neural network for spatially-varying outdoor lighting estimation.
We generate spatially-varying local lighting environment maps by combining a global sky environment map with warped image information.
Experiments on both synthetic and real datasets show that SOLID-Net significantly outperforms previous methods.
arXiv Detail & Related papers (2021-04-09T02:28:54Z) - Low Light Image Enhancement via Global and Local Context Modeling [164.85287246243956]
We introduce a context-aware deep network for low-light image enhancement.
First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain.
Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
arXiv Detail & Related papers (2021-01-04T09:40:54Z) - Deep Bilateral Retinex for Low-Light Image Enhancement [96.15991198417552]
Low-light images suffer from poor visibility caused by low contrast, color distortion and measurement noise.
This paper proposes a deep learning method for low-light image enhancement with a particular focus on handling the measurement noise.
The proposed method is very competitive with state-of-the-art methods, and has a significant advantage over others when processing images captured in extremely low lighting conditions.
arXiv Detail & Related papers (2020-07-04T06:26:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.