Real-Time Blind Defocus Deblurring for Earth Observation: The IMAGIN-e Mission Approach
- URL: http://arxiv.org/abs/2505.22128v2
- Date: Wed, 02 Jul 2025 16:31:32 GMT
- Title: Real-Time Blind Defocus Deblurring for Earth Observation: The IMAGIN-e Mission Approach
- Authors: Alejandro D. Mousist
- Abstract summary: This work addresses mechanical defocus in Earth observation images from the IMAGIN-e mission aboard the ISS. Using Sentinel-2 data, our method estimates the defocus kernel and trains a restoration model within a GAN framework. The approach is currently deployed aboard the IMAGIN-e mission, demonstrating its practical application in an operational space environment.
- Score: 55.2480439325792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work addresses mechanical defocus in Earth observation images from the IMAGIN-e mission aboard the ISS, proposing a blind deblurring approach adapted to space-based edge computing constraints. Leveraging Sentinel-2 data, our method estimates the defocus kernel and trains a restoration model within a GAN framework, effectively operating without reference images. On Sentinel-2 images with synthetic degradation, SSIM improved by 72.47% and PSNR by 25.00%, confirming the model's ability to recover lost details when the original clean image is known. On IMAGIN-e, where no reference images exist, perceptual quality metrics indicate a substantial enhancement, with NIQE improving by 60.66% and BRISQUE by 48.38%, validating real-world onboard restoration. The approach is currently deployed aboard the IMAGIN-e mission, demonstrating its practical application in an operational space environment. By efficiently handling high-resolution images under edge computing constraints, the method enables applications such as water body segmentation and contour detection while maintaining processing viability despite resource limitations.
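The abstract's evaluation protocol (degrade clean Sentinel-2 imagery synthetically, then score the restoration against the known clean reference) can be sketched minimally as follows. The uniform blur kernel, image size, and helper names here are illustrative assumptions, not the mission's actual kernel estimate or pipeline:

```python
import numpy as np

def box_defocus(img, k=5):
    """Stand-in defocus: convolve with a uniform k x k kernel (edge-padded)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio of `test` against a clean reference."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))      # stand-in for one Sentinel-2 band
blurred = box_defocus(clean)
print(psnr(clean, blurred) < psnr(clean, clean))  # degradation lowers PSNR
```

With a known clean reference, full-reference metrics such as PSNR and SSIM apply; aboard IMAGIN-e, where no reference exists, the paper instead falls back on no-reference perceptual metrics (NIQE, BRISQUE).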
Related papers
- Deep Learning-Based Image Recovery and Pose Estimation for Resident Space Objects [0.46873264197900916]
Training models able to identify a spacecraft and its pose presents a significant challenge due to a lack of available image data for model training. This paper puts forth an innovative framework for generating realistic synthetic datasets of Resident Space Object (RSO) imagery. An analysis of the proposed image recovery and regression techniques was undertaken, providing insights into the performance, potential enhancements and limitations when applied to real imagery of RSOs.
arXiv Detail & Related papers (2025-01-22T16:50:58Z) - Markers Identification for Relative Pose Estimation of an Uncooperative Target [0.0]
This paper introduces a novel method to detect structural markers on the European Space Agency's (ESA) Environmental Satellite (ENVISAT) for safe de-orbiting.
Advanced image pre-processing techniques, including noise addition and blurring, are employed to improve marker detection accuracy and robustness.
arXiv Detail & Related papers (2024-07-30T03:20:54Z) - Camera-Pose Robust Crater Detection from Chang'e 5 [18.986915927640396]
We evaluate the performance of Mask R-CNN for crater detection, comparing models pretrained on simulated data containing off-nadir view angles with models pretrained on real lunar images.
We demonstrate that pretraining on real lunar images is superior despite the lack of images containing off-nadir view angles, achieving an F1-score of 63.1 for detection and an intersection-over-union of 0.701 for ellipse regression.
arXiv Detail & Related papers (2024-06-07T01:11:31Z) - Efficient Visual State Space Model for Image Deblurring [99.54894198086852]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration. We propose a simple yet effective visual state space model (EVSSM) for image deblurring. The proposed EVSSM performs favorably against state-of-the-art methods on benchmark datasets and real-world images.
arXiv Detail & Related papers (2024-05-23T09:13:36Z) - Resource Efficient Perception for Vision Systems [0.0]
Our study introduces a framework aimed at mitigating these challenges by leveraging memory-efficient patch-based processing for high-resolution images.
It incorporates a global context representation alongside local patch information, enabling a comprehensive understanding of the image content.
We demonstrate the effectiveness of our method through superior performance on 7 different benchmarks across classification, object detection, and segmentation.
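A minimal sketch of the patch-based idea in this entry: tile a high-resolution image into fixed-size patches (edge-padding so the grid divides evenly), so each patch can be processed independently and peak memory scales with patch size rather than image size. The function name and sizes are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def tile_patches(img, patch=32):
    """Split an H x W image into a grid of non-overlapping patches,
    edge-padding so both dimensions are divisible by `patch`."""
    h, w = img.shape
    padded = np.pad(img, ((0, -h % patch), (0, -w % patch)), mode="edge")
    H, W = padded.shape
    # (rows, patch, cols, patch) -> (rows, cols, patch, patch)
    return padded.reshape(H // patch, patch, W // patch, patch).swapaxes(1, 2)

img = np.arange(70 * 90, dtype=float).reshape(70, 90)
patches = tile_patches(img, patch=32)
print(patches.shape)  # (3, 3, 32, 32)
```

The global-context representation the abstract mentions would then be computed separately (e.g. from a downsampled copy) and combined with the per-patch features.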
arXiv Detail & Related papers (2024-05-12T05:33:00Z) - Diffusion Enhancement for Cloud Removal in Ultra-Resolution Remote Sensing Imagery [48.14610248492785]
Cloud layers severely compromise the quality and effectiveness of optical remote sensing (RS) images.
Existing deep-learning (DL)-based Cloud Removal (CR) techniques encounter difficulties in accurately reconstructing the original visual authenticity and detailed semantic content of the images.
This work proposes enhancements at the data and methodology fronts to tackle this challenge.
arXiv Detail & Related papers (2024-01-25T13:14:17Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z) - Space Debris: Are Deep Learning-based Image Enhancements part of the Solution? [9.117415383776695]
The volume of space debris currently orbiting the Earth is reaching an unsustainable level at an accelerated pace.
The detection, tracking, identification, and differentiation between orbit-defined, registered spacecraft and rogue/inactive space objects is critical to asset protection.
The primary objective of this work is to investigate the validity of Deep Neural Network (DNN) solutions to overcome the limitations and image artefacts most prevalent when captured with monocular cameras in the visible light spectrum.
arXiv Detail & Related papers (2023-08-01T09:38:41Z) - 6D Camera Relocalization in Visually Ambiguous Extreme Environments [79.68352435957266]
We propose a novel method to reliably estimate the pose of a camera given a sequence of images acquired in extreme environments such as deep seas or extraterrestrial terrains.
Our method achieves comparable performance with state-of-the-art methods on the indoor benchmark (7-Scenes dataset) using only 20% training data.
arXiv Detail & Related papers (2022-07-13T16:40:02Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.