Accelerating Multiframe Blind Deconvolution via Deep Learning
- URL: http://arxiv.org/abs/2306.12078v1
- Date: Wed, 21 Jun 2023 07:53:00 GMT
- Title: Accelerating Multiframe Blind Deconvolution via Deep Learning
- Authors: A. Asensio Ramos, S. Esteban Pozuelo, C. Kuckein
- Abstract summary: Ground-based solar image restoration is a computationally expensive procedure.
We propose a new method to accelerate the restoration based on algorithm unrolling.
We show that both methods significantly reduce the restoration time compared to the standard optimization procedure.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ground-based solar image restoration is a computationally expensive procedure
that involves nonlinear optimization techniques. The presence of atmospheric
turbulence produces perturbations in individual images that make it necessary
to apply blind deconvolution techniques. These techniques rely on the
observation of many short exposure frames that are used to simultaneously infer
the instantaneous state of the atmosphere and the unperturbed object. We have
recently explored the use of machine learning to accelerate this process, with
promising results. We build upon this previous work to propose several
improvements that lead to better models. In addition, we propose a new method
to accelerate the restoration based on algorithm unrolling. In this method, the
image restoration problem is solved with a gradient descent scheme that is
unrolled and accelerated with the aid of a few small neural networks. The role
of the neural networks is to correct the estimation of the solution at each
iterative step. The model is trained to perform the optimization in a small
fixed number of steps with a curated dataset. Our findings demonstrate that
both methods significantly reduce the restoration time compared to the standard
optimization procedure. Furthermore, we show that these models can be
trained in an unsupervised manner using observed images from three different
instruments. Remarkably, they also exhibit robust generalization capabilities
when applied to new datasets. To foster further research and collaboration, we
openly provide the trained models, along with the corresponding training and
evaluation code, as well as the training dataset, to the scientific community.
Related papers
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy [55.2480439325792]
We introduce deep unrolled self-supervised learning, which alleviates the need for training data by training a sequence-specific, model-based autoencoder.
Our proposed method exceeds the performance of its supervised counterparts.
arXiv Detail & Related papers (2024-03-25T17:40:32Z)
- ReNoise: Real Image Inversion Through Iterative Noising [62.96073631599749]
We introduce an inversion method with a high quality-to-operation ratio, enhancing reconstruction accuracy without increasing the number of operations.
We evaluate the performance of our ReNoise technique using various sampling algorithms and models, including recent accelerated diffusion models.
arXiv Detail & Related papers (2024-03-21T17:52:08Z)
- Image edge enhancement for effective image classification [7.470763273994321]
We propose an edge enhancement-based method to improve both the accuracy and the training speed of neural networks.
Our approach involves extracting high-frequency features, such as edges, from images in the available dataset and fusing them with the original images.
arXiv Detail & Related papers (2024-01-13T10:01:34Z)
- Sample Less, Learn More: Efficient Action Recognition via Frame Feature Restoration [59.6021678234829]
We propose a novel method to restore the intermediate features of two sparsely sampled, adjacent video frames.
With the integration of our method, the efficiency of three commonly used baselines improves by over 50%, with a mere 0.5% reduction in recognition accuracy.
arXiv Detail & Related papers (2023-07-27T13:52:42Z)
- BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping [64.54271680071373]
Diffusion models have demonstrated excellent potential for generating diverse images.
Knowledge distillation has recently been proposed as a remedy that can reduce the number of inference steps to one or a few.
We present a novel technique called BOOT that overcomes these limitations with an efficient data-free distillation algorithm.
arXiv Detail & Related papers (2023-06-08T20:30:55Z)
- Unsupervised Restoration of Weather-affected Images using Deep Gaussian Process-based CycleGAN [92.15895515035795]
We describe an approach for supervising deep networks that are based on CycleGAN.
We introduce new losses for training CycleGAN that lead to more effective training, resulting in high-quality reconstructions.
We demonstrate that the proposed method can be effectively applied to different restoration tasks such as de-raining, de-hazing, and de-snowing.
arXiv Detail & Related papers (2022-04-23T01:30:47Z)
- A comparison of different atmospheric turbulence simulation methods for image restoration [64.24948495708337]
Atmospheric turbulence deteriorates the quality of images captured by long-range imaging systems.
Various deep learning-based atmospheric turbulence mitigation methods have been proposed in the literature.
We systematically evaluate the effectiveness of various turbulence simulation methods on image restoration.
arXiv Detail & Related papers (2022-04-19T16:21:36Z)
- Learning to Rank Learning Curves [15.976034696758148]
We present a new method that saves computational budget by terminating poor configurations early in training.
We show that our model is able to effectively rank learning curves without having to observe many or very long learning curves.
arXiv Detail & Related papers (2020-06-05T10:49:52Z)
- Learning to do multiframe wavefront sensing unsupervisedly: applications to blind deconvolution [0.0]
We propose an unsupervised training scheme for blind deconvolution deep learning systems.
It can be applied to the correction of point-like as well as extended objects.
The network model is roughly three orders of magnitude faster than applying standard deconvolution.
arXiv Detail & Related papers (2020-06-02T08:02:12Z)
- Deep Non-Line-of-Sight Reconstruction [18.38481917675749]
In this paper, we employ convolutional feed-forward networks to solve the reconstruction problem efficiently.
We devise a tailored autoencoder architecture, trained end-to-end, that maps transient images directly to a depth map representation.
We demonstrate that our feed-forward network, even though it is trained solely on synthetic data, generalizes to measured data from SPAD sensors and is able to obtain results that are competitive with model-based reconstruction methods.
arXiv Detail & Related papers (2020-01-24T16:05:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.