Deep Autofocus for Synthetic Aperture Sonar
- URL: http://arxiv.org/abs/2010.15687v2
- Date: Fri, 30 Jul 2021 11:38:07 GMT
- Title: Deep Autofocus for Synthetic Aperture Sonar
- Authors: Isaac Gerg and Vishal Monga
- Abstract summary: In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem.
We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus.
Our results demonstrate Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost.
- Score: 28.306713374371814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic aperture sonar (SAS) requires precise positional and environmental
information to produce well-focused output during the image reconstruction
step. However, errors in these measurements are commonly present, resulting in
defocused imagery. To overcome these issues, an autofocus algorithm is
employed as a post-processing step after image reconstruction for the purpose
of improving image quality using the image content itself. These algorithms are
usually iterative and metric-based in that they seek to optimize an image
sharpness metric. In this letter, we demonstrate the potential of machine
learning, specifically deep learning, to address the autofocus problem. We
formulate the problem as a self-supervised, phase error estimation task using a
deep network we call Deep Autofocus. Our formulation has the advantages of
being non-iterative (and thus fast) and not requiring ground truth
focused-defocused image pairs, as is often required by other deblurring deep
learning methods. We compare our technique against a set of common sharpness
metrics optimized using gradient descent over a real-world dataset. Our results
demonstrate Deep Autofocus can produce imagery that is perceptually as good as
benchmark iterative techniques but at a substantially lower computational cost.
We conclude that our proposed Deep Autofocus can provide a more favorable
cost-quality trade-off than state-of-the-art alternatives with significant
potential of future research.
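The iterative baselines the letter compares against optimize an image-sharpness metric over a phase-error estimate by gradient descent. Below is a minimal NumPy sketch of that metric-based loop on a toy scene. This is an illustration, not the authors' Deep Autofocus network: the synthetic scatterer scene, the entropy metric, the quadratic phase error, and all step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy(img):
    # Intensity entropy of a complex image; lower values mean a sharper image.
    p = np.abs(img) ** 2
    p = p / p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def apply_phase(img, phi):
    # Apply a per-bin phase screen along a (toy) along-track frequency axis.
    spec = np.fft.fft(img, axis=0)
    return np.fft.ifft(spec * np.exp(1j * phi)[:, None], axis=0)

# Toy scene: a few bright point scatterers on a dark background.
N = 64
scene = np.zeros((N, N), dtype=complex)
scene[rng.integers(0, N, 12), rng.integers(0, N, 12)] = 1.0

# Defocus the scene with an "unknown" quadratic phase error.
k = np.arange(N) - N // 2
blurred = apply_phase(scene, 0.002 * k**2)

# Metric-based autofocus: descend the entropy metric over the phase
# estimate, using finite differences and backtracking on the step size.
phi = np.zeros(N)
eps = 1e-4
for _ in range(30):
    base = entropy(apply_phase(blurred, -phi))
    grad = np.empty(N)
    for i in range(N):
        d = np.zeros(N)
        d[i] = eps
        grad[i] = (entropy(apply_phase(blurred, -(phi + d))) - base) / eps
    step = 10.0
    while step > 1e-6:
        cand = phi - step * grad
        if entropy(apply_phase(blurred, -cand)) < base:
            phi = cand  # accept only steps that improve the metric
            break
        step /= 2.0

focused = apply_phase(blurred, -phi)
print(f"entropy before: {entropy(blurred):.3f}  after: {entropy(focused):.3f}")
```

Each iteration here costs one metric evaluation per phase bin, which is exactly the per-image cost that a single non-iterative network forward pass avoids.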
Related papers
- Deep Phase Coded Image Prior [34.84063452418995]
Phase-coded imaging is a method to tackle tasks such as passive depth estimation and extended depth of field.
Most of the current deep learning-based methods for depth estimation or all-in-focus imaging require a training dataset with high-quality depth maps.
We propose a new method named "Deep Phase Coded Image Prior" (DPCIP) for jointly recovering the depth map and all-in-focus image.
arXiv Detail & Related papers (2024-04-05T05:58:40Z)
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Precise Point Spread Function Estimation [6.076995573805468]
We develop a precise mathematical model of the camera's point spread function to describe the defocus process.
Our experiments on standard planes and actual objects show that the proposed algorithm can accurately describe the defocus process.
arXiv Detail & Related papers (2022-03-06T12:43:27Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern Deep Learning framework is used to autonomously correct setup inconsistencies, thus improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Aliasing is your Ally: End-to-End Super-Resolution from Raw Image Bursts [70.80220990106467]
This presentation addresses the problem of reconstructing a high-resolution image from multiple lower-resolution snapshots captured from slightly different viewpoints in space and time.
Key challenges for solving this problem include (i) aligning the input pictures with sub-pixel accuracy, (ii) handling raw (noisy) images for maximal faithfulness to native camera data, and (iii) designing/learning an image prior (regularizer) well suited to the task.
arXiv Detail & Related papers (2021-04-13T13:39:43Z)
- Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z)
- Real-Time, Deep Synthetic Aperture Sonar (SAS) Autofocus [34.77467193499518]
Synthetic aperture sonar (SAS) requires precise time-of-flight measurements of the transmitted/received waveform to produce well-focused imagery.
To overcome this, an autofocus algorithm is employed as a post-processing step after image reconstruction to improve image focus.
We propose a deep learning technique to overcome these limitations and implicitly learn the weighting function in a data-driven manner.
arXiv Detail & Related papers (2021-03-18T15:16:29Z)
- Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.