Space Debris: Are Deep Learning-based Image Enhancements part of the
Solution?
- URL: http://arxiv.org/abs/2308.00408v1
- Date: Tue, 1 Aug 2023 09:38:41 GMT
- Title: Space Debris: Are Deep Learning-based Image Enhancements part of the
Solution?
- Authors: Michele Jamrozik, Vincent Gaudilli\`ere, Mohamed Adel Musallam and
Djamila Aouada
- Abstract summary: The volume of space debris currently orbiting the Earth is reaching an unsustainable level at an accelerated pace.
The detection, tracking, identification, and differentiation between orbit-defined, registered spacecraft, and rogue/inactive space ``objects'' is critical to asset protection.
The primary objective of this work is to investigate the validity of Deep Neural Network (DNN) solutions to overcome the limitations and image artefacts most prevalent in images captured with monocular cameras in the visible light spectrum.
- Score: 9.117415383776695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The volume of space debris currently orbiting the Earth is reaching an
unsustainable level at an accelerated pace. The detection, tracking,
identification, and differentiation between orbit-defined, registered
spacecraft, and rogue/inactive space ``objects'', is critical to asset
protection. The primary objective of this work is to investigate the validity
of Deep Neural Network (DNN) solutions to overcome the limitations and image
artefacts most prevalent in images captured with monocular cameras in the visible
light spectrum. In this work, a hybrid UNet-ResNet34 Deep Learning (DL)
architecture, pre-trained on the ImageNet dataset, is developed. Image
degradations addressed include blurring, exposure issues, poor contrast, and
noise. The shortage of space-generated data suitable for supervised DL is also
addressed. A visual comparison between the URes34P model developed in this work
and the existing state of the art in deep learning image enhancement methods,
relevant to images captured in space, is presented. Based upon visual
inspection, it is determined that our UNet model is capable of correcting for
space-related image degradations and merits further investigation to reduce its
computational complexity.
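Although the paper's exact training pipeline is not reproduced here, the degradations it targets (blurring, exposure issues, poor contrast, noise) hint at how the supervised-data shortage is typically addressed: degrading clean images synthetically to form (clean, degraded) training pairs. A minimal NumPy sketch under that assumption, with illustrative kernel size, gain ranges, and noise level (not the paper's settings):

```python
import numpy as np

def degrade(img, rng):
    """Apply space-typical degradations (blur, exposure shift,
    contrast loss, sensor noise) to a clean image in [0, 1]."""
    # 1. Blur: a 3x3 box filter as a crude stand-in for motion/defocus blur.
    k = 1  # half-width of the box kernel
    h, w = img.shape
    padded = np.pad(img, k, mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            blurred += padded[k + dy : k + dy + h, k + dx : k + dx + w]
    blurred /= (2 * k + 1) ** 2
    # 2. Exposure: random gain simulating over-/under-exposure.
    exposed = blurred * rng.uniform(0.4, 1.6)
    # 3. Contrast: compress values toward the image mean.
    alpha = rng.uniform(0.5, 1.0)
    contrast = exposed.mean() + alpha * (exposed - exposed.mean())
    # 4. Noise: additive Gaussian sensor noise, then clip back to [0, 1].
    noisy = contrast + rng.normal(0.0, 0.02, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(64, 64))  # placeholder clean "render"
degraded = degrade(clean, rng)                # synthetic network input
# (clean, degraded) now form one supervised pair for training an enhancer.
```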
Related papers
- AEROBLADE: Training-Free Detection of Latent Diffusion Images Using Autoencoder Reconstruction Error [15.46508882889489]
A key enabler for generating high-resolution images with low computational cost has been the development of latent diffusion models (LDMs).
LDMs perform the denoising process in the low-dimensional latent space of a pre-trained autoencoder (AE) instead of the high-dimensional image space.
We propose a novel detection method which exploits an inherent component of LDMs: the AE used to transform images between image and latent space.
arXiv Detail & Related papers (2024-01-31T14:36:49Z)
- A ground-based dataset and a diffusion model for on-orbit low-light image enhancement [7.815138548685792]
We propose a dataset of the Beidou Navigation Satellite for on-orbit low-light image enhancement (LLIE).
To evenly sample poses of different orientations and distances without collision, a collision-free working space and a pose-stratified sampling scheme are proposed.
To enhance image contrast without over-exposure or blurred details, we design a fused attention module to highlight structure and dark regions.
arXiv Detail & Related papers (2023-06-25T12:15:44Z)
- SU-Net: Pose estimation network for non-cooperative spacecraft on-orbit [8.671030148920009]
Spacecraft pose estimation plays a vital role in many on-orbit space missions, such as rendezvous and docking, debris removal, and on-orbit maintenance.
We analyze the radar image characteristics of spacecraft on-orbit, then propose a new deep learning network structure named Dense Residual U-shaped Network (DR-U-Net) to extract image features.
We further introduce a novel neural network based on DR-U-Net, namely Spacecraft U-shaped Network (SU-Net) to achieve end-to-end pose estimation for non-cooperative spacecraft.
arXiv Detail & Related papers (2023-02-21T11:14:01Z)
- Near-field SAR Image Restoration with Deep Learning Inverse Technique: A Preliminary Study [5.489791364472879]
Near-field synthetic aperture radar (SAR) provides a high-resolution image of a target's scattering distribution, i.e., its hot spots.
Meanwhile, the imaging result suffers inevitable degradation from sidelobes, clutter, and noise.
To restore the image, current methods make simplified assumptions; for example, the point spread function (PSF) is spatially consistent, the target consists of sparse point scatters, etc.
We reformulate the degradation model into a spatially variable complex-convolution model, where the near-field SAR's system response is considered.
A model-based deep learning network is designed to restore the image.
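As a toy illustration of what a spatially variable (complex) convolution degradation model looks like, here is a hypothetical 1-D NumPy version in which every output sample has its own complex PSF row; the paper's actual near-field SAR system response is far richer:

```python
import numpy as np

def spatially_variable_conv(x, psf_bank):
    """Toy 1-D spatially variable complex convolution:
    y[i] = sum_k psf_bank[i, k] * x_padded[i + k],
    i.e. each output position i uses its own complex PSF row
    instead of one shared, spatially consistent kernel."""
    n = len(x)
    K = psf_bank.shape[1]       # PSF length (odd)
    pad = K // 2
    xp = np.pad(np.asarray(x, dtype=complex), pad)
    y = np.zeros(n, dtype=complex)
    for i in range(n):
        y[i] = psf_bank[i] @ xp[i : i + K]
    return y

# Sanity check: a bank of centred delta PSFs reproduces the input.
x = np.arange(4, dtype=float)
delta_bank = np.tile([0.0, 1.0, 0.0], (4, 1)).astype(complex)
y = spatially_variable_conv(x, delta_bank)
```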
arXiv Detail & Related papers (2022-11-28T01:28:33Z)
- Space Non-cooperative Object Active Tracking with Deep Reinforcement Learning [1.212848031108815]
We propose an end-to-end active visual tracking method based on the DQN algorithm, named DRLAVT.
It can guide the chasing spacecraft to approach an arbitrary non-cooperative space target relying merely on color or RGB-D images.
It significantly outperforms a position-based visual servoing baseline algorithm that adopts the state-of-the-art 2D monocular tracker SiamRPN.
arXiv Detail & Related papers (2021-12-18T06:12:24Z)
- SimIPU: Simple 2D Image and 3D Point Cloud Unsupervised Pre-Training for Spatial-Aware Visual Representations [85.38562724999898]
We propose a 2D Image and 3D Point cloud Unsupervised pre-training strategy, called SimIPU.
Specifically, we develop a multi-modal contrastive learning framework that consists of an intra-modal spatial perception module and an inter-modal feature interaction module.
To the best of our knowledge, this is the first study to explore contrastive learning pre-training strategies for outdoor multi-modal datasets.
arXiv Detail & Related papers (2021-12-09T03:27:00Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find the low-dimensional representation of the attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z)
- Low Light Image Enhancement via Global and Local Context Modeling [164.85287246243956]
We introduce a context-aware deep network for low-light image enhancement.
First, it features a global context module that models spatial correlations to find complementary cues over the full spatial domain.
Second, it introduces a dense residual block that captures local context with a relatively large receptive field.
arXiv Detail & Related papers (2021-01-04T09:40:54Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
- Single-Image HDR Reconstruction by Learning to Reverse the Camera Pipeline [100.5353614588565]
We propose to incorporate the domain knowledge of the LDR image formation pipeline into our model.
We model the HDR-to-LDR image formation pipeline as the (1) dynamic range clipping, (2) non-linear mapping from a camera response function, and (3) quantization.
We demonstrate that the proposed method performs favorably against state-of-the-art single-image HDR reconstruction algorithms.
arXiv Detail & Related papers (2020-04-02T17:59:04Z)
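The three-step HDR-to-LDR formation model described above can be sketched in NumPy; the gamma-curve CRF and 8-bit depth below are illustrative stand-ins, not the paper's learned components:

```python
import numpy as np

def hdr_to_ldr(hdr, gamma=2.2, bits=8):
    """Forward LDR formation model: clipping -> CRF -> quantization."""
    # (1) Dynamic range clipping: scene radiance above 1.0 is lost.
    clipped = np.clip(hdr, 0.0, 1.0)
    # (2) Non-linear camera response function; a gamma curve is a
    #     common stand-in for a calibrated or learned CRF.
    crf = clipped ** (1.0 / gamma)
    # (3) Quantization to the sensor's bit depth.
    levels = 2 ** bits - 1
    return np.round(crf * levels) / levels

hdr = np.linspace(0.0, 4.0, 5)  # toy radiance values, some above 1.0
ldr = hdr_to_ldr(hdr)           # all clipped values collapse to 1.0
```

Reversing the pipeline then amounts to learning approximate inverses of these three lossy steps in turn, which is what motivates embedding the formation model in the network.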
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.