As if by magic: self-supervised training of deep despeckling networks
with MERLIN
- URL: http://arxiv.org/abs/2110.13148v1
- Date: Mon, 25 Oct 2021 16:30:09 GMT
- Title: As if by magic: self-supervised training of deep despeckling networks
with MERLIN
- Authors: Emanuele Dalsasso, Loïc Denis, Florence Tupin
- Abstract summary: A self-supervised training strategy, called MERLIN, based on the separation of the real and imaginary parts of single-look complex SAR images.
Networks trained with MERLIN take into account the spatial correlations due to the SAR transfer function specific to a given sensor and imaging mode.
- Score: 3.94041326670251
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Speckle fluctuations seriously limit the interpretability of synthetic
aperture radar (SAR) images. Speckle reduction has thus been the subject of
numerous works spanning at least four decades. Techniques based on deep neural
networks have recently achieved a new level of performance in terms of SAR
image restoration quality. Beyond the design of suitable network architectures
or the selection of adequate loss functions, the construction of training sets
is of utmost importance. So far, most approaches have considered a
supervised training strategy: the networks are trained to produce outputs as
close as possible to speckle-free reference images. Speckle-free images are
generally not available, which requires resorting to natural or optical images
or the selection of stable areas in long time series to circumvent the lack of
ground truth. Self-supervision, on the other hand, avoids the use of
speckle-free images. We introduce a self-supervised strategy based on the
separation of the real and imaginary parts of single-look complex SAR images,
called MERLIN (coMplex sElf-supeRvised despeckLINg), and show that it offers a
straightforward way to train all kinds of deep despeckling networks. Networks
trained with MERLIN take into account the spatial correlations due to the SAR
transfer function specific to a given sensor and imaging mode. By requiring
only a single image, and possibly exploiting large archives, MERLIN opens the
door to hassle-free as well as large-scale training of despeckling networks.
The code of the trained models is made freely available at
https://gitlab.telecom-paris.fr/RING/MERLIN.
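The statistical property MERLIN builds on can be verified numerically. The sketch below (illustrative only, not the authors' released code) simulates Goodman's fully developed speckle model, under which a single-look complex pixel z = a + ib is circular complex Gaussian: its real and imaginary parts are independent, each with variance R/2 where R is the underlying reflectivity, so one part can serve as a self-supervised training target for a network that only sees the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_slc(reflectivity, rng):
    """Draw single-look complex speckle under Goodman's model:
    real and imaginary parts are i.i.d. Gaussian with variance R/2."""
    std = np.sqrt(reflectivity / 2.0)
    return rng.normal(0.0, std) + 1j * rng.normal(0.0, std)

R = 4.0                              # true reflectivity (homogeneous area)
z = simulate_slc(np.full(200_000, R), rng)
a, b = z.real, z.imag

# Each part alone carries the reflectivity information...
print(np.mean(a**2))                 # ~ R/2 = 2.0
print(np.mean(b**2))                 # ~ R/2 = 2.0
# ...and the two parts are uncorrelated (independent under the model),
# which is what makes one a valid noisy target for the other.
print(np.corrcoef(a, b)[0, 1])       # ~ 0.0
```

This independence is exactly why no speckle-free reference is needed: training a network to predict one part from the other cannot reproduce the noise, only the shared reflectivity.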
Related papers
- Deep Learning Based Speckle Filtering for Polarimetric SAR Images. Application to Sentinel-1 [51.404644401997736]
We propose a complete framework to remove speckle in polarimetric SAR images using a convolutional neural network.
Experiments show that the proposed approach offers exceptional results in both speckle reduction and resolution preservation.
arXiv Detail & Related papers (2024-08-28T10:07:17Z)
- Multi-temporal speckle reduction with self-supervised deep neural networks [2.9979894869734927]
The latest techniques rely on deep neural networks to restore the various structures and textures peculiar to SAR images.
Speckle filtering is generally a prerequisite to the analysis of synthetic aperture radar (SAR) images.
We extend a recent self-supervised training strategy for single-look complex SAR images, called MERLIN, to the case of multi-temporal filtering.
arXiv Detail & Related papers (2022-07-22T14:08:22Z)
- SAR Despeckling Using Overcomplete Convolutional Networks [53.99620005035804]
Despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
arXiv Detail & Related papers (2022-05-31T15:55:37Z)
- Comparison of convolutional neural networks for cloudy optical images reconstruction from single or multitemporal joint SAR and optical images [0.21079694661943604]
We focus on the evaluation of convolutional neural networks that jointly use SAR and optical images to retrieve the missing contents in a single cloud-corrupted optical image.
We propose a simple framework that eases the creation of datasets for the training of deep nets targeting optical image reconstruction.
We show how space partitioning data structures help to query samples in terms of cloud coverage, relative acquisition date, pixel validity and relative proximity between SAR and optical images.
arXiv Detail & Related papers (2022-04-01T13:31:23Z)
- Transformer-based SAR Image Despeckling [53.99620005035804]
We introduce a transformer-based network for SAR image despeckling.
The proposed despeckling network comprises a transformer-based encoder, which allows the network to learn global dependencies between different image regions.
Experiments show that the proposed method achieves significant improvements over traditional and convolutional neural network-based despeckling methods.
arXiv Detail & Related papers (2022-01-23T20:09:01Z) - Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete convolutional recurrent neural network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z) - Multi-Agent Semi-Siamese Training for Long-tail and Shallow Face
Learning [54.13876727413492]
In many real-world face recognition scenarios, the training dataset is shallow, meaning only two face images are available for each identity.
With the non-uniform increase of samples, this issue generalizes to long-tail face learning.
Building on Semi-Siamese Training (SST), we introduce an advanced solution named Multi-Agent Semi-Siamese Training (MASST).
MASST includes a probe network and multiple gallery agents; the former encodes the probe features, and the latter constitutes a stack of
arXiv Detail & Related papers (2021-05-10T04:57:32Z) - Speckle2Void: Deep Self-Supervised SAR Despeckling with Blind-Spot
Convolutional Neural Networks [30.410981386006394]
Despeckling is a crucial preliminary step in scene analysis algorithms.
Recent success of deep learning envisions a new generation of despeckling techniques.
We propose a self-supervised Bayesian despeckling method.
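The blind-spot principle this line of work relies on can be sketched with a toy stand-in for the network: an estimator that predicts each pixel from its neighbours only, never from the pixel itself, so the noisy pixel can safely serve as its own training target. All names below are illustrative, not the paper's code.

```python
import numpy as np

def blind_spot_filter(img):
    """Average the 8 neighbours of each interior pixel, centre excluded,
    mimicking a blind-spot receptive field."""
    out = np.zeros_like(img, dtype=float)
    kernel = np.ones((3, 3))
    kernel[1, 1] = 0.0          # the blind spot: zero weight on the pixel itself
    kernel /= kernel.sum()
    h, w = img.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(img[i - 1:i + 2, j - 1:j + 2] * kernel)
    return out

rng = np.random.default_rng(1)
clean = np.full((32, 32), 5.0)
# Single-look intensity speckle is multiplicative, exponentially distributed:
noisy = clean * rng.gamma(1.0, 1.0, clean.shape)

den = blind_spot_filter(noisy)

# Perturbing a pixel's own value leaves its blind-spot estimate untouched,
# while the estimates of its neighbours do change:
noisy2 = noisy.copy()
noisy2[10, 10] += 100.0
den2 = blind_spot_filter(noisy2)
print(den2[10, 10] == den[10, 10])   # the centre pixel is never seen
```

Because the estimator cannot see the target pixel, it cannot learn to copy the noise, which is what makes training against the noisy image itself meaningful.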
arXiv Detail & Related papers (2020-07-04T11:38:48Z) - SAR Image Despeckling by Deep Neural Networks: from a pre-trained model
to an end-to-end training strategy [8.097773654147105]
Convolutional neural networks (CNNs) have recently been shown to reach state-of-the-art performance for SAR image restoration.
CNN training requires good training data: many pairs of speckle-free / speckle-corrupted images.
This paper analyzes different strategies one can adopt, depending on the speckle removal task one wishes to perform.
arXiv Detail & Related papers (2020-06-28T09:47:31Z) - Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging the adaptive inference networks for deep SISR (AdaDSR)
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z) - Towards Deep Unsupervised SAR Despeckling with Blind-Spot Convolutional
Neural Networks [30.410981386006394]
Deep learning techniques have outperformed classical model-based despeckling algorithms.
In this paper, we propose a self-supervised Bayesian despeckling method.
We show that the performance of the proposed network is very close to the supervised training approach on synthetic data and competitive on real data.
arXiv Detail & Related papers (2020-01-15T12:21:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.