Implicit Neural Representations for Deconvolving SAS Images
- URL: http://arxiv.org/abs/2112.08539v1
- Date: Thu, 16 Dec 2021 00:24:18 GMT
- Title: Implicit Neural Representations for Deconvolving SAS Images
- Authors: Albert Reed, Thomas Blanford, Daniel C. Brown, Suren Jayasuriya
- Abstract summary: Synthetic aperture sonar (SAS) image resolution is constrained by waveform bandwidth and array geometry.
In this work, we leverage implicit neural representations (INRs), shown to be strong priors for the natural image space, to deconvolve SAS images.
We validate our method on simulated SAS data created with a point scattering model and real data captured with an in-air circular SAS.
- Score: 4.446017969073817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic aperture sonar (SAS) image resolution is constrained by waveform
bandwidth and array geometry. Specifically, the waveform bandwidth determines a
point spread function (PSF) that blurs the locations of point scatterers in the
scene. In theory, deconvolving the reconstructed SAS image with the scene PSF
restores the original distribution of scatterers and yields sharper
reconstructions. However, deconvolution is an ill-posed operation that is
highly sensitive to noise. In this work, we leverage implicit neural
representations (INRs), shown to be strong priors for the natural image space,
to deconvolve SAS images. Importantly, our method does not require training
data, as we perform our deconvolution through an analysis-by-synthesis
optimization in a self-supervised fashion. We validate our method on simulated
SAS data created with a point scattering model and real data captured with an
in-air circular SAS. This work is an important first step towards applying
neural networks for SAS image deconvolution.
Related papers
- Coarse-Fine Spectral-Aware Deformable Convolution For Hyperspectral Image Reconstruction [15.537910100051866]
We study the inverse problem of Coded Aperture Snapshot Spectral Imaging (CASSI).
We propose Coarse-Fine Spectral-Aware Deformable Convolution Network (CFSDCN)
Our CFSDCN significantly outperforms previous state-of-the-art (SOTA) methods on both simulated and real HSI datasets.
arXiv Detail & Related papers (2024-06-18T15:15:12Z)
- Improving Diffusion-Based Image Synthesis with Context Prediction [49.186366441954846]
Existing diffusion models mainly try to reconstruct input image from a corrupted one with a pixel-wise or feature-wise constraint along spatial axes.
We propose ConPreDiff to improve diffusion-based image synthesis with context prediction.
Our ConPreDiff consistently outperforms previous methods and achieves a new SOTA text-to-image generation results on MS-COCO, with a zero-shot FID score of 6.21.
arXiv Detail & Related papers (2024-01-04T01:10:56Z)
- Soft Random Sampling: A Theoretical and Empirical Analysis [59.719035355483875]
Soft random sampling (SRS) is a simple yet effective approach for efficient deep neural networks when dealing with massive data.
It selects a subset uniformly at random, with replacement, from the full data set in each epoch.
It is shown to be a powerful and competitive strategy, with strong performance at real-world industrial scale.
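The sampling step described in this summary can be sketched in a few lines; this is a minimal illustration of uniform sampling with replacement per epoch, not the paper's implementation, and the names (`soft_random_sample`, `ratio`) are made up for the demo.

```python
import numpy as np

def soft_random_sample(n_items: int, ratio: float, rng) -> np.ndarray:
    """Draw round(ratio * n_items) indices uniformly at random,
    with replacement, from the full index set [0, n_items)."""
    k = int(round(ratio * n_items))
    return rng.integers(0, n_items, size=k)

rng = np.random.default_rng(0)
dataset = np.arange(1000)          # stand-in for 1000 training examples
for epoch in range(3):
    idx = soft_random_sample(len(dataset), 0.5, rng)
    batch_pool = dataset[idx]      # this epoch trains only on the sampled pool
    # With replacement, some examples repeat and others are skipped this epoch.
    print(epoch, len(batch_pool))
```

Because the draw is with replacement, each epoch sees a different, partially overlapping slice of the data, which is what makes the scheme cheap on massive data sets.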
arXiv Detail & Related papers (2023-11-21T17:03:21Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Histogram Layers for Synthetic Aperture Sonar Imagery [2.452410403088629]
We present a novel application of histogram layers on SAS imagery.
The addition of histogram layer(s) within the deep learning models improved performance.
arXiv Detail & Related papers (2022-09-08T15:33:35Z)
- Iterative, Deep Synthetic Aperture Sonar Image Segmentation [21.319490900396474]
We propose an unsupervised learning framework called Iterative Deep Unsupervised (IDUS) for SAS image segmentation.
IDUS can be divided into four main steps:
1) A deep network estimates class assignments.
2) Low-level image features from the deep network are clustered into superpixels.
3) Superpixels are clustered into class assignments.
4) The resulting pseudo-labels are used to backpropagate the loss of the deep network's prediction.
A comparison of IDUS to current state-of-the-art methods on a realistic benchmark dataset for SAS image segmentation demonstrates the benefits of our proposal.
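The four-step loop summarized above can be sketched with toy stand-ins; in this illustrative version a random linear map replaces the deep network, an inline k-means replaces the paper's clustering, and no backpropagation is performed. Every name and parameter here is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Tiny k-means: returns a cluster label per row of X."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

# Toy "SAS image": 16x16 pixels, 3 channels of random values.
H, W, C = 16, 16, 3
image = rng.standard_normal((H, W, C))
pixels = image.reshape(-1, C)

# 1) "Deep network" feature extractor (random linear map as a stand-in).
W_net = rng.standard_normal((C, 8))
features = pixels @ W_net                       # low-level features per pixel

# 2) Cluster pixel features into superpixels.
n_sp = 12
superpixels = kmeans(features, k=n_sp)

# 3) Cluster superpixel mean features into class assignments.
used = np.unique(superpixels)
sp_means = np.stack([features[superpixels == j].mean(0) for j in used])
class_of = np.zeros(n_sp, dtype=int)
class_of[used] = kmeans(sp_means, k=3)

# 4) Broadcast classes back to pixels as pseudo-labels; in IDUS these
#    would drive the next round of network training (omitted here).
pseudo_labels = class_of[superpixels].reshape(H, W)
print(pseudo_labels.shape)
```

Iterating steps 1-4 lets the pseudo-labels and the feature extractor refine each other without any ground-truth segmentation masks, which is the unsupervised part of the framework.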
arXiv Detail & Related papers (2022-03-28T20:41:24Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Iterative, Deep, and Unsupervised Synthetic Aperture Sonar Image Segmentation [29.435946984214937]
We present a new iterative unsupervised algorithm for learning deep features for SAS image segmentation.
Our results show that the performance of our proposed method is considerably better than current state-of-the-art methods in SAS image segmentation.
arXiv Detail & Related papers (2021-07-30T11:37:33Z)
- UltraSR: Spatial Encoding is a Missing Key for Implicit Image Function-based Arbitrary-Scale Super-Resolution [74.82282301089994]
In this work, we propose UltraSR, a simple yet effective new network design based on implicit image functions.
We show that spatial encoding is indeed a missing key towards the next-stage high-accuracy implicit image function.
Our UltraSR sets new state-of-the-art performance on the DIV2K benchmark under all super-resolution scales.
arXiv Detail & Related papers (2021-03-23T17:36:42Z)
- Sparse Signal Models for Data Augmentation in Deep Learning ATR [0.8999056386710496]
We propose a data augmentation approach to incorporate domain knowledge and improve the generalization power of a data-intensive learning algorithm.
We exploit the sparsity of the scattering centers in the spatial domain and the smoothly-varying structure of the scattering coefficients in the azimuthal domain to solve the ill-posed problem of over-parametrized model fitting.
arXiv Detail & Related papers (2020-12-16T21:46:33Z)
- Learning Spatial-Spectral Prior for Super-Resolution of Hyperspectral Imagery [79.69449412334188]
In this paper, we investigate how to adapt state-of-the-art residual-learning-based single gray/RGB image super-resolution approaches to hyperspectral imagery.
We introduce a spatial-spectral prior network (SSPN) to fully exploit the spatial information and the correlation between the spectra of the hyperspectral data.
Experimental results on some hyperspectral images demonstrate that the proposed SSPSR method enhances the details of the recovered high-resolution hyperspectral images.
arXiv Detail & Related papers (2020-05-18T14:25:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.