Deep Linear Array Pushbroom Image Restoration: A Degradation Pipeline
and Jitter-Aware Restoration Network
- URL: http://arxiv.org/abs/2401.08171v1
- Date: Tue, 16 Jan 2024 07:26:26 GMT
- Title: Deep Linear Array Pushbroom Image Restoration: A Degradation Pipeline
and Jitter-Aware Restoration Network
- Authors: Zida Chen, Ziran Zhang, Haoying Li, Menghao Li, Yueting Chen, Qi Li,
Huajun Feng, Zhihai Xu, Shiqi Chen
- Abstract summary: Linear Array Pushbroom (LAP) imaging technology is widely used in the realm of remote sensing.
Traditional methods for restoring LAP images, such as algorithms estimating the point spread function (PSF), exhibit limited performance.
We propose a Jitter-Aware Restoration Network (JARNet) to remove the distortion and blur in two stages.
- Score: 26.86292926584254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear Array Pushbroom (LAP) imaging technology is widely used in the realm
of remote sensing. However, images acquired through LAP always suffer from
distortion and blur because of camera jitter. Traditional methods for restoring
LAP images, such as algorithms estimating the point spread function (PSF),
exhibit limited performance. To tackle this issue, we propose a Jitter-Aware
Restoration Network (JARNet) to remove the distortion and blur in two stages.
In the first stage, we formulate an Optical Flow Correction (OFC) block to
refine the optical flow of the degraded LAP images, resulting in pre-corrected
images where most of the distortions are alleviated. In the second stage, for
further enhancement of the pre-corrected images, we integrate two jitter-aware
techniques within the Spatial and Frequency Residual (SFRes) block: 1)
introducing Coordinate Attention (CoA) to the SFRes block in order to capture
the jitter state in orthogonal directions; 2) manipulating image features in
both spatial and frequency domains to leverage local and global priors.
Additionally, we develop a data synthesis pipeline, which applies the Continue
Dynamic Shooting Model (CDSM) to simulate realistic degradation in LAP images.
Both the proposed JARNet and LAP image synthesis pipeline establish a
foundation for addressing this intricate challenge. Extensive experiments
demonstrate that the proposed two-stage method outperforms state-of-the-art
image restoration models. Code is available at
https://github.com/JHW2000/JARNet.
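The second stage described in the abstract combines two concrete ingredients: Coordinate Attention, which summarizes features along the two orthogonal image axes so the block can track the jitter state per row and per column, and feature processing in both the spatial and frequency domains to mix local and global priors. Below is a minimal PyTorch sketch of one such spatial-and-frequency residual block with coordinate attention. It illustrates the idea only and is not the authors' SFRes/OFC code; the module names, channel sizes, and the rfft2/irfft2-based frequency branch are assumptions (the actual implementation is in the linked repository).

```python
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Simplified Coordinate Attention: pool along H and W separately so the
    attention weights retain positional information in both directions."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                         # (B, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)     # (B, C, W, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))    # shared 1x1 conv
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # row-wise weights
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # column-wise weights
        return x * a_h * a_w


class SpatialFrequencyResidualBlock(nn.Module):
    """Toy stand-in for an SFRes-style block: a spatial conv branch (local prior),
    an FFT-domain branch (global prior), and coordinate attention on the fusion."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # 1x1 convs applied to the stacked real/imaginary parts of the spectrum.
        self.freq = nn.Sequential(
            nn.Conv2d(2 * channels, 2 * channels, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * channels, 2 * channels, 1),
        )
        self.coa = CoordinateAttention(channels)

    def forward(self, x):
        spat = self.spatial(x)
        spec = torch.fft.rfft2(x, norm="ortho")
        z = self.freq(torch.cat([spec.real, spec.imag], dim=1))
        real, imag = torch.chunk(z, 2, dim=1)
        freq = torch.fft.irfft2(torch.complex(real, imag), s=x.shape[-2:], norm="ortho")
        return x + self.coa(spat + freq)


if __name__ == "__main__":
    block = SpatialFrequencyResidualBlock(channels=16)
    feats = torch.randn(1, 16, 64, 64)   # e.g. features of a pre-corrected image
    print(block(feats).shape)            # torch.Size([1, 16, 64, 64])
```

Because the frequency branch applies pointwise convolutions to the whole spectrum, every output location is influenced by the entire image, which is one common way to realize the "global prior" the abstract attributes to frequency-domain processing, while the 3x3 spatial branch keeps the complementary local prior.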
Related papers
- ConsisSR: Delving Deep into Consistency in Diffusion-based Image Super-Resolution [28.945663118445037]
Real-world image super-resolution (Real-ISR) aims at restoring high-quality (HQ) images from low-quality (LQ) inputs corrupted by unknown and complex degradations.
We introduce ConsisSR to handle both semantic and pixel-level consistency.
arXiv Detail & Related papers (2024-10-17T17:41:52Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation unfolding network (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
- DR2: Diffusion-based Robust Degradation Remover for Blind Face Restoration [66.01846902242355]
Blind face restoration usually synthesizes degraded low-quality data with a pre-defined degradation model for training.
It is expensive and infeasible to include every type of degradation in the training data to cover real-world cases.
We propose the Robust Degradation Remover (DR2) to first transform the degraded image into a coarse but degradation-invariant prediction, then employ an enhancement module to restore the coarse prediction to a high-quality image.
arXiv Detail & Related papers (2023-03-13T06:05:18Z)
- Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging [142.11622043078867]
We propose a principled Degradation-Aware Unfolding Framework (DAUF) that estimates parameters from the compressed image and physical mask, and then uses these parameters to control each iteration.
By plugging HST into DAUF, we establish the first Transformer-based deep unfolding method, Degradation-Aware Unfolding Half-Shuffle Transformer (DAUHST), for HSI reconstruction.
arXiv Detail & Related papers (2022-05-20T11:37:44Z)
- A Differentiable Two-stage Alignment Scheme for Burst Image Reconstruction with Large Shift [13.454711511086261]
Joint denoising and demosaicking (JDD) for burst images, namely JDD-B, has attracted much attention.
One key challenge of JDD-B lies in the robust alignment of image frames.
We propose a differentiable two-stage alignment scheme, applied sequentially at the patch and pixel levels, for effective JDD-B.
arXiv Detail & Related papers (2022-03-17T12:55:45Z)
- Bringing Rolling Shutter Images Alive with Dual Reversed Distortion [75.78003680510193]
Rolling shutter (RS) distortion can be interpreted as the result of picking a row of pixels from instant global shutter (GS) frames over time; a minimal illustration of this row-sampling view is sketched after this list.
We develop a novel end-to-end model, IFED, to generate a dual optical flow sequence through iterative learning of the velocity field during the RS time.
arXiv Detail & Related papers (2022-03-12T14:57:49Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Deep Atrous Guided Filter for Image Restoration in Under Display Cameras [18.6418313982586]
Under-display cameras (UDC) present a promising opportunity for phone manufacturers to achieve bezel-free displays by positioning the camera behind semi-transparent OLED screens.
Such imaging systems suffer from severe image degradation due to light attenuation and diffraction effects.
We present the Deep Atrous Guided Filter (DAGF), a two-stage, end-to-end approach for image restoration in UDC systems.
arXiv Detail & Related papers (2020-08-14T07:54:52Z)
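The rolling shutter entry above views an RS image as rows picked from successive instant global-shutter frames, which is closely analogous to how a linear-array pushbroom sensor builds an image line by line during platform motion and jitter. The snippet below is a toy NumPy illustration of that row-sampling interpretation only; the function name and the one-frame-per-row timing are hypothetical simplifications, not taken from either paper.

```python
import numpy as np


def sample_rolling_shutter(gs_frames: np.ndarray) -> np.ndarray:
    """Assemble a rolling-shutter image by taking row r from the global-shutter
    frame that is current when row r is read out.

    gs_frames: (T, H, W) stack of instantaneous global-shutter frames; the row
    readout times are assumed to span the whole stack uniformly.
    """
    t_frames, height, width = gs_frames.shape
    rs_image = np.empty((height, width), dtype=gs_frames.dtype)
    for row in range(height):
        # Map the row index to the frame captured at that row's readout time.
        frame_idx = int(round(row * (t_frames - 1) / max(height - 1, 1)))
        rs_image[row] = gs_frames[frame_idx, row]
    return rs_image


if __name__ == "__main__":
    # A bright vertical bar drifting right while rows are read out: the sampled
    # image shows the slanted (sheared) bar typical of rolling-shutter distortion.
    T, H, W = 64, 64, 64
    frames = np.zeros((T, H, W), dtype=np.float32)
    for t in range(T):
        frames[t, :, (10 + t // 2) % W] = 1.0
    rs = sample_rolling_shutter(frames)
    print(rs.shape, rs.sum())  # (64, 64) with one bright pixel per row
```

The sheared bar in this toy output is the same kind of geometric distortion that camera jitter imprints on line-scanned LAP images, which is why rolling-shutter correction methods appear here as related work.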