Real-Time, Deep Synthetic Aperture Sonar (SAS) Autofocus
- URL: http://arxiv.org/abs/2103.10312v1
- Date: Thu, 18 Mar 2021 15:16:29 GMT
- Title: Real-Time, Deep Synthetic Aperture Sonar (SAS) Autofocus
- Authors: Isaac D. Gerg and Vishal Monga
- Abstract summary: Synthetic aperture sonar (SAS) requires precise time-of-flight measurements of the transmitted/received waveform to produce well-focused imagery.
To overcome this, an autofocus algorithm is employed as a post-processing step after image reconstruction to improve image focus.
We propose a deep learning technique to overcome these limitations and implicitly learn the weighting function in a data-driven manner.
- Score: 34.77467193499518
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthetic aperture sonar (SAS) requires precise time-of-flight measurements
of the transmitted/received waveform to produce well-focused imagery. It is not
uncommon for errors in these measurements to be present, resulting in image
defocusing. To overcome this, an \emph{autofocus} algorithm is employed as a
post-processing step after image reconstruction to improve image focus. A
particular class of these algorithms can be framed as a sharpness/contrast
metric-based optimization. To improve convergence, a hand-crafted weighting
function to remove "bad" areas of the image is sometimes applied to the
image-under-test before the optimization procedure. Additionally, dozens of
iterations are necessary for convergence, which is a large compute burden for
low size, weight, and power (SWaP) systems. We propose a deep learning
technique to overcome these limitations and implicitly learn the weighting
function in a data-driven manner. Our proposed method, which we call Deep
Autofocus, uses features from the single-look-complex (SLC) to estimate the
phase correction which is applied in $k$-space. Furthermore, we train our
algorithm on batches of training imagery so that during deployment, only a
single iteration of our method is sufficient to autofocus. We show results
demonstrating the robustness of our technique by comparing our results to four
commonly used image sharpness metrics. Our results demonstrate Deep Autofocus
can produce imagery perceptually better than common iterative techniques but at
a lower computational cost. We conclude that Deep Autofocus can provide a more
favorable cost-quality trade-off than alternatives, with significant potential
for future research.
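The mechanics the abstract describes, scoring an image with a sharpness metric and focusing it by multiplying a phase term onto the single-look-complex (SLC) data in $k$-space, can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names, the intensity-squared sharpness metric, the along-track axis convention, and the quadratic phase error are all assumptions chosen for the toy demo.

```python
import numpy as np

def sharpness(img):
    """Normalized intensity-squared sharpness metric (one common choice)."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    return float(np.sum(p ** 2))

def apply_phase_correction(slc, phi):
    """Multiply a per-frequency phase term onto the SLC in k-space
    (FFT along the along-track axis), then transform back."""
    K = np.fft.fft(slc, axis=0)
    K *= np.exp(1j * phi)[:, None]
    return np.fft.ifft(K, axis=0)

# Toy scene: a few point scatterers (sparse, so the focused image has
# high sharpness and any phase error smears energy and lowers it).
img = np.zeros((64, 64), dtype=complex)
img[10, 20] = img[32, 32] = img[50, 45] = 1.0

# Simulate an uncompensated quadratic phase error, then correct it exactly.
k = np.fft.fftfreq(64)
phase_err = 50.0 * k ** 2
defocused = apply_phase_correction(img, phase_err)         # inject the error
refocused = apply_phase_correction(defocused, -phase_err)  # exact correction

assert sharpness(defocused) < sharpness(refocused)
```

A classical metric-based autofocus would search for `phi` iteratively by maximizing `sharpness`; the paper's contribution is a network that regresses the correction from SLC features in a single pass, which is what makes it attractive for low-SWaP platforms.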
Related papers
- Enhancing Digital Hologram Reconstruction Using Reverse-Attention Loss for Untrained Physics-Driven Deep Learning Models with Uncertain Distance [10.788482076164314]
We present a pioneering approach to addressing the autofocusing challenge in untrained deep-learning methods.
Our method delivers significantly better reconstruction performance than rival methods.
For example, the difference is less than 1 dB in PSNR and 0.002 in SSIM for the target sample.
arXiv Detail & Related papers (2024-01-11T01:30:46Z) - Deep Dynamic Scene Deblurring from Optical Flow [53.625999196063574]
Deblurring can provide visually more pleasant pictures and make photography more convenient.
It is difficult to model the non-uniform blur mathematically.
We develop a convolutional neural network (CNN) to restore the sharp images from the deblurred features.
arXiv Detail & Related papers (2023-01-18T06:37:21Z) - DeepRM: Deep Recurrent Matching for 6D Pose Refinement [77.34726150561087]
DeepRM is a novel recurrent network architecture for 6D pose refinement.
The architecture incorporates LSTM units to propagate information through each refinement step.
DeepRM achieves state-of-the-art performance on two widely accepted challenging datasets.
arXiv Detail & Related papers (2022-05-28T16:18:08Z) - Precise Point Spread Function Estimation [6.076995573805468]
We develop a precise mathematical model of the camera's point spread function to describe the defocus process.
Our experiments on standard planes and actual objects show that the proposed algorithm can accurately describe the defocus process.
arXiv Detail & Related papers (2022-03-06T12:43:27Z) - Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
arXiv Detail & Related papers (2021-07-30T06:18:16Z) - Universal and Flexible Optical Aberration Correction Using Deep-Prior Based Deconvolution [51.274657266928315]
We propose a PSF-aware plug-and-play deep network, which takes the aberrant image and PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors.
Specifically, we pre-train a base model from a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters.
arXiv Detail & Related papers (2021-04-07T12:00:38Z) - IMU-Assisted Learning of Single-View Rolling Shutter Correction [16.242924916178282]
Rolling shutter distortion is highly undesirable for photography and computer vision algorithms.
We propose a deep neural network to predict depth and row-wise pose from a single image for rolling shutter correction.
arXiv Detail & Related papers (2020-11-05T21:33:25Z) - Deep Autofocus for Synthetic Aperture Sonar [28.306713374371814]
In this letter, we demonstrate the potential of machine learning, specifically deep learning, to address the autofocus problem.
We formulate the problem as a self-supervised, phase error estimation task using a deep network we call Deep Autofocus.
Our results demonstrate Deep Autofocus can produce imagery that is perceptually as good as benchmark iterative techniques but at a substantially lower computational cost.
arXiv Detail & Related papers (2020-10-29T15:31:15Z) - The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the proposed H-based image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z) - Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.