SAR Despeckling Using Overcomplete Convolutional Networks
- URL: http://arxiv.org/abs/2205.15906v1
- Date: Tue, 31 May 2022 15:55:37 GMT
- Title: SAR Despeckling Using Overcomplete Convolutional Networks
- Authors: Malsha V. Perera, Wele Gedara Chaminda Bandara, Jeya Maria Jose
Valanarasu, and Vishal M. Patel
- Abstract summary: SAR despeckling is an important problem in remote sensing, as speckle degrades SAR images.
Recent studies show that convolutional neural networks (CNNs) outperform classical despeckling methods.
This study employs an overcomplete CNN architecture to focus on learning low-level features by restricting the receptive field.
We show that the proposed network improves despeckling performance compared to recent despeckling methods on synthetic and real SAR images.
- Score: 53.99620005035804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Synthetic Aperture Radar (SAR) despeckling is an important problem
in remote sensing, as speckle degrades SAR images and affects downstream tasks
such as detection and segmentation. Recent studies show that convolutional
neural networks (CNNs) outperform classical despeckling methods. Traditional
CNNs try to increase the receptive field as the network goes deeper, thus
extracting global features. However, speckle is relatively small, and
increasing the receptive field does not help in extracting speckle features.
This study employs an overcomplete CNN architecture that focuses on learning
low-level features by restricting the receptive field. The proposed network
consists of an overcomplete branch that focuses on local structures and an
undercomplete branch that focuses on global structures. We show that the
proposed network improves despeckling performance compared to recent
despeckling methods on synthetic and real SAR images.
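The abstract does not spell out the layer configuration, so the following is only a minimal PyTorch sketch of the two-branch idea as described: an overcomplete branch that upsamples early so the effective receptive field stays small (local, speckle-scale features), and an undercomplete branch that downsamples conventionally to capture global structure. All module names, channel widths, and the fusion step are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """3x3 conv + BN + ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class TwoBranchDespeckler(nn.Module):
    """Hypothetical two-branch despeckler (not the paper's exact model).

    Overcomplete branch: upsampling after each conv keeps the effective
    receptive field small, biasing it toward local, speckle-scale features.
    Undercomplete branch: conventional downsampling enlarges the receptive
    field to capture global scene structure.
    """

    def __init__(self, ch=32):
        super().__init__()
        # Overcomplete: spatial size grows, receptive field stays small.
        self.over_enc = nn.Sequential(
            conv_block(1, ch), nn.Upsample(scale_factor=2),
            conv_block(ch, ch), nn.Upsample(scale_factor=2),
        )
        self.over_dec = nn.Sequential(
            conv_block(ch, ch), nn.MaxPool2d(2),
            conv_block(ch, ch), nn.MaxPool2d(2),
        )
        # Undercomplete: spatial size shrinks, receptive field grows.
        self.under_enc = nn.Sequential(
            conv_block(1, ch), nn.MaxPool2d(2),
            conv_block(ch, ch), nn.MaxPool2d(2),
        )
        self.under_dec = nn.Sequential(
            conv_block(ch, ch), nn.Upsample(scale_factor=2),
            conv_block(ch, ch), nn.Upsample(scale_factor=2),
        )
        self.fuse = nn.Conv2d(2 * ch, 1, kernel_size=1)

    def forward(self, x):
        local_feat = self.over_dec(self.over_enc(x))
        global_feat = self.under_dec(self.under_enc(x))
        # Fuse the two branches and predict the clean image.
        return self.fuse(torch.cat([local_feat, global_feat], dim=1))

# Smoke test on a single-channel SAR patch.
net = TwoBranchDespeckler()
noisy = torch.rand(1, 1, 64, 64)
print(net(noisy).shape)  # torch.Size([1, 1, 64, 64])
```

For the synthetic experiments mentioned in the abstract, training pairs for SAR despeckling are commonly built by multiplying a clean intensity image with gamma-distributed multiplicative noise; the paper's actual loss and training protocol are not reproduced here.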
Related papers
- Spatial Bias for Attention-free Non-local Neural Networks [11.320414512937946]
We introduce the spatial bias to learn global knowledge without self-attention in convolutional neural networks.
We show that the spatial bias achieves competitive performance, improving classification accuracy by +0.79% and +1.5% on the ImageNet-1K and CIFAR-100 datasets.
arXiv Detail & Related papers (2023-02-24T08:16:16Z)
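The entry above gives no implementation details; as a loose illustration only, one simple way to inject learned spatial information into a convolutional network without self-attention is a learnable per-position bias map added to the feature tensor. The class name and shapes below are assumed, not taken from the paper.

```python
import torch
import torch.nn as nn

class SpatialBias2d(nn.Module):
    """Hypothetical learnable spatial bias: a trainable H x W map
    broadcast-added to every channel of the feature tensor, letting the
    network encode position-dependent (global) information without any
    attention mechanism. Illustrative only, not the cited construction."""

    def __init__(self, height, width):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(1, 1, height, width))

    def forward(self, x):
        # x: (N, C, H, W); the bias broadcasts over batch and channels.
        return x + self.bias

feat = torch.randn(2, 64, 14, 14)
print(SpatialBias2d(14, 14)(feat).shape)  # torch.Size([2, 64, 14, 14])
```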
- RDRN: Recursively Defined Residual Network for Image Super-Resolution [58.64907136562178]
Deep convolutional neural networks (CNNs) have achieved remarkable performance in single-image super-resolution.
We propose a novel network architecture which utilizes attention blocks efficiently.
arXiv Detail & Related papers (2022-11-17T11:06:29Z)
- Increasing the Accuracy of a Neural Network Using Frequency Selective Mesh-to-Grid Resampling [4.211128681972148]
We propose the use of keypoint frequency selective mesh-to-grid resampling (FSMR) for the processing of input data for neural networks.
We show that, depending on the network architecture and classification task, applying FSMR during training aids the learning process.
The classification accuracy can be increased by up to 4.31 percentage points for ResNet50 and the Oxflower17 dataset.
arXiv Detail & Related papers (2022-09-28T21:34:47Z)
- New SAR target recognition based on YOLO and very deep multi-canonical correlation analysis [0.1503974529275767]
This paper proposes a robust feature extraction method for SAR image target classification by adaptively fusing effective features from different CNN layers.
Experiments on the MSTAR dataset demonstrate that the proposed method outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-10-28T18:10:26Z)
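The entry above describes fusing features from different CNN layers; its multi-canonical correlation analysis step is not reproduced here. Below is only a simplified stand-in showing the generic pattern of tapping several layers of a backbone and combining them into one descriptor. The backbone choice, tap points, input size, and pooling are all assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Simplified multi-layer feature tapping; NOT the paper's CCA-based fusion.
backbone = models.resnet18(weights=None).eval()

layers = {"layer1": None, "layer2": None, "layer3": None}  # assumed tap points

def make_hook(name):
    def hook(module, inputs, output):
        layers[name] = output
    return hook

for name in layers:
    getattr(backbone, name).register_forward_hook(make_hook(name))

x = torch.rand(1, 3, 128, 128)  # e.g. a 128x128 MSTAR chip, channel-replicated
with torch.no_grad():
    backbone(x)  # hooks capture the intermediate feature maps

# Global-average-pool each tapped layer and concatenate into one descriptor;
# a classifier head (e.g. nn.Linear) would consume this vector.
descriptor = torch.cat(
    [feats.mean(dim=(2, 3)) for feats in layers.values()], dim=1
)
print(descriptor.shape)  # torch.Size([1, 448]) = 64 + 128 + 256 channels
```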
- RSI-Net: Two-Stream Deep Neural Network Integrating GCN and Atrous CNN for Semantic Segmentation of High-resolution Remote Sensing Images [3.468780866037609]
A two-stream deep neural network for semantic segmentation of remote sensing images (RSI-Net) is proposed in this paper.
Experiments are conducted on the Vaihingen, Potsdam and Gaofen RSI datasets.
Results demonstrate the superior performance of RSI-Net in terms of overall accuracy, F1 score and kappa coefficient when compared with six state-of-the-art RSI semantic segmentation methods.
arXiv Detail & Related papers (2021-09-19T15:57:20Z)
- Over-and-Under Complete Convolutional RNN for MRI Reconstruction [57.95363471940937]
Recent deep learning-based methods for MR image reconstruction usually leverage a generic auto-encoder architecture.
We propose an Over-and-Under Complete Convolutional Recurrent Neural Network (OUCR), which consists of an overcomplete and an undercomplete Convolutional Recurrent Neural Network (CRNN).
The proposed method achieves significant improvements over compressed sensing and popular deep learning-based methods with fewer trainable parameters.
arXiv Detail & Related papers (2021-06-16T15:56:34Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
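The entry above mentions a dual-objective activation and distance loss for feature visualization. A rough sketch of how two such objectives could be combined by optimizing the input image follows; the layer choice, loss weighting, and optimizer settings are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Frozen CNN whose intermediate features we want to depict.
cnn = models.resnet18(weights=None).eval()
for p in cnn.parameters():
    p.requires_grad_(False)

# Truncate up through layer2 (an assumed choice of visualization layer).
feature_extractor = nn.Sequential(*list(cnn.children())[:6])

reference = torch.rand(1, 3, 224, 224)          # image whose features we depict
image = reference.clone().requires_grad_(True)  # the optimized visualization
opt = torch.optim.Adam([image], lr=0.05)

for step in range(100):
    opt.zero_grad()
    feats = feature_extractor(image)
    activation_loss = -feats.pow(2).mean()             # maximize layer response
    distance_loss = (image - reference).pow(2).mean()  # stay near the reference
    loss = activation_loss + 0.1 * distance_loss       # 0.1 weight is arbitrary
    loss.backward()
    opt.step()
```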
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Dense Residual Network: Enhancing Global Dense Feature Flow for Character Recognition [75.4027660840568]
This paper explores how to enhance local and global dense feature flow by fully exploiting hierarchical features from all convolutional layers.
Technically, we propose an efficient and effective CNN framework, the Fast Dense Residual Network (FDRN), for text recognition.
arXiv Detail & Related papers (2020-01-23T06:55:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.