Defocus Blur Synthesis and Deblurring via Interpolation and
Extrapolation in Latent Space
- URL: http://arxiv.org/abs/2307.15461v1
- Date: Fri, 28 Jul 2023 10:27:28 GMT
- Title: Defocus Blur Synthesis and Deblurring via Interpolation and
Extrapolation in Latent Space
- Authors: Ioana Mazilu, Shunxin Wang, Sven Dummer, Raymond Veldhuis, Christoph
Brune, and Nicola Strisciuglio
- Abstract summary: We train autoencoders with implicit and explicit regularization techniques to enforce linearity relations.
Compared to existing works, we use a simple architecture to synthesize images with flexible blur levels.
Our regularized autoencoders can effectively mimic blur and deblur, increasing data variety as a data augmentation technique.
- Score: 3.097163558730473
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Though modern microscopes have an autofocusing system to ensure optimal
focus, out-of-focus images can still occur when cells within the medium are not
all in the same focal plane, affecting the image quality for medical diagnosis
and analysis of diseases. We propose a method that can deblur images as well as
synthesize defocus blur. We train autoencoders with implicit and explicit
regularization techniques to enforce linearity relations among the
representations of different blur levels in the latent space. This allows for
the exploration of different blur levels of an object by linearly
interpolating/extrapolating the latent representations of images taken at
different focal planes. Compared to existing works, we use a simple
architecture to synthesize images with flexible blur levels, leveraging the
linear latent space. Our regularized autoencoders can effectively mimic blur
and deblur, increasing data variety as a data augmentation technique and
improving the quality of microscopic images, which would be beneficial for
further processing and analysis.
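The core idea of the abstract, traversing blur levels by linearly combining latent codes, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the encoder and decoder are omitted, and the function and variable names (`blend_latents`, `z_sharp`, `z_blur`) are hypothetical stand-ins for latent codes produced by a regularized autoencoder.

```python
import numpy as np

def blend_latents(z_sharp, z_blur, alpha):
    """Linearly combine two latent codes.

    alpha in [0, 1] interpolates between the sharp and blurred
    representations; alpha > 1 extrapolates toward stronger blur,
    and alpha < 0 extrapolates past the sharp code (deblurring).
    """
    return (1.0 - alpha) * z_sharp + alpha * z_blur

# Toy latent vectors standing in for encoder outputs at two focal planes.
z_sharp = np.array([0.0, 1.0, 2.0])
z_blur = np.array([2.0, 1.0, 0.0])

z_mid = blend_latents(z_sharp, z_blur, 0.5)      # intermediate blur level
z_deblur = blend_latents(z_sharp, z_blur, -0.5)  # extrapolated sharper code
```

In the paper's setting, the blended code would be passed through the decoder to synthesize an image at the chosen blur level; the regularization during training is what makes this linear traversal meaningful.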
Related papers
- Single Exposure Quantitative Phase Imaging with a Conventional Microscope using Diffusion Models [2.0760654993698426]
Transport-of-Intensity Equation (TIE) often requires multiple acquisitions at different defocus distances.
We propose to use chromatic aberrations to induce the required through-focus images with a single exposure.
Our contributions offer an alternative TIE approach that leverages chromatic aberrations, achieving accurate single-exposure phase measurement with white light.
arXiv Detail & Related papers (2024-06-06T15:44:24Z)
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Learning Single Image Defocus Deblurring with Misaligned Training Pairs [80.13320797431487]
We propose a joint deblurring and reblurring learning framework for single image defocus deblurring.
Our framework can be applied to boost defocus deblurring networks in terms of both quantitative metrics and visual quality.
arXiv Detail & Related papers (2022-11-26T07:36:33Z)
- Learning to Deblur using Light Field Generated and Real Defocus Images [4.926805108788465]
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur.
We propose a novel deep defocus deblurring network that leverages the strength and overcomes the shortcoming of light fields.
arXiv Detail & Related papers (2022-04-01T11:35:51Z)
- MC-Blur: A Comprehensive Benchmark for Image Deblurring [127.6301230023318]
In most real-world images, blur is caused by different factors, e.g., motion and defocus.
We construct a new large-scale multi-cause image deblurring dataset (called MC-Blur).
Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios.
arXiv Detail & Related papers (2021-12-01T02:10:42Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to correct autonomously the setup incoherences, thus improving the quality of a ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
- Deep learning Framework for Mobile Microscopy [2.432228495683345]
We discuss the limitations of the existing solutions developed for professional clinical microscopes.
We propose corresponding improvements and compare them to other state-of-the-art mobile analytics solutions.
arXiv Detail & Related papers (2020-07-27T17:27:59Z)
- Single-shot autofocusing of microscopy images using deep learning [0.30586855806896046]
A deep learning-based offline autofocusing method, termed Deep-R, is trained to rapidly and blindly autofocus a single-shot microscopy image.
Deep-R is significantly faster when compared with standard online algorithmic autofocusing methods.
arXiv Detail & Related papers (2020-03-21T06:07:27Z)
- FFusionCGAN: An end-to-end fusion method for few-focus images using conditional GAN in cytopathological digital slides [0.0]
Multi-focus image fusion technologies compress images taken at different focus depths into a single image in which most objects are in focus.
This paper proposes a novel method for generating fused images from single-focus or few-focus images based on a conditional generative adversarial network (GAN).
By integrating the network into the generative model, the quality of the generated fused images is effectively improved.
arXiv Detail & Related papers (2020-01-03T02:13:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.