Change Detection from SAR Images Based on Deformable Residual
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2104.02299v1
- Date: Tue, 6 Apr 2021 05:52:25 GMT
- Title: Change Detection from SAR Images Based on Deformable Residual
Convolutional Neural Networks
- Authors: Junjie Wang, Feng Gao, Junyu Dong
- Abstract summary: Convolutional neural networks (CNNs) have made great progress in synthetic aperture radar (SAR) image change detection.
In this paper, a novel Deformable Residual Convolutional Neural Network (DRNet) is designed for SAR image change detection.
- Score: 26.684293663473415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Convolutional neural networks (CNNs) have made great progress in
synthetic aperture radar (SAR) image change detection. However, the sampling
locations of traditional convolutional kernels are fixed and cannot be adapted
to the actual structure of SAR images. Besides, objects may appear at
different sizes in natural scenes, which requires the network to have stronger
multi-scale representation ability. In this paper, a novel Deformable Residual
Convolutional Neural Network (DRNet) is designed for SAR image change
detection. First, the proposed DRNet introduces deformable convolutional
sampling locations, so that the shape of the convolutional kernel can be
adaptively adjusted according to the actual structure of ground objects. To
create the deformable sampling locations, 2-D offsets are computed for each
pixel according to the spatial information of the input images; the sampling
locations can then adaptively reflect the spatial structure of the input
images. Moreover, we propose a novel pooling module that replaces vanilla
pooling to exploit multi-scale information effectively: hierarchical
residual-like connections are constructed within a single pooling layer, which
improves multi-scale representation ability at a granular level. Experimental
results on three real SAR datasets demonstrate the effectiveness of the
proposed DRNet.
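
To make the offset mechanism concrete, here is a minimal PyTorch sketch of a deformable convolution layer. This is not the authors' implementation: the module name, the initialization, and the use of torchvision.ops.deform_conv2d are assumptions. A plain convolution predicts a 2-D offset (dy, dx) for every kernel position at every pixel, and the deformable convolution then samples the input at the shifted locations:

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class DeformableConv2d(nn.Module):
    """Convolution whose sampling grid is bent by learned per-pixel offsets.

    Hypothetical sketch of the idea described in the abstract, not DRNet code.
    """

    def __init__(self, in_ch, out_ch, k=3, padding=1):
        super().__init__()
        # One (dy, dx) pair per kernel position, predicted at every pixel.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, k, padding=padding)
        nn.init.zeros_(self.offset_conv.weight)  # start from the regular grid
        nn.init.zeros_(self.offset_conv.bias)
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)
        self.padding = padding

    def forward(self, x):
        offsets = self.offset_conv(x)  # (N, 2*k*k, H, W)
        return deform_conv2d(x, offsets, self.weight, padding=self.padding)

x = torch.randn(2, 16, 64, 64)   # toy feature map: batch 2, 16 channels
y = DeformableConv2d(16, 32)(x)  # -> (2, 32, 64, 64)
```

Because the offset branch is zero-initialized, the layer starts out as an ordinary convolution and learns to deform its sampling grid as training progresses.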
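The hierarchical residual-like pooling can be sketched in the same spirit. The abstract does not give the exact design, so the following Res2Net-style module is an assumption: channels are split into groups, each group is pooled after adding the previous group's pooled output, so later groups see progressively larger receptive fields within a single pooling layer:

```python
import torch
import torch.nn as nn

class HierarchicalResidualPool(nn.Module):
    """Multi-scale pooling via residual-like links across channel groups.

    Hypothetical sketch of the pooling module described in the abstract.
    """

    def __init__(self, scales=4):
        super().__init__()
        self.scales = scales
        # Stride-1 pooling keeps the spatial size, so groups can be summed.
        self.pool = nn.AvgPool2d(kernel_size=3, stride=1, padding=1)

    def forward(self, x):
        # The channel count must be divisible by the number of scales.
        groups = torch.chunk(x, self.scales, dim=1)
        outputs, prev = [], None
        for g in groups:
            y = g if prev is None else g + prev  # residual-like connection
            prev = self.pool(y)                  # one more pooling per group
            outputs.append(prev)
        return torch.cat(outputs, dim=1)         # same shape as the input

feats = torch.randn(2, 32, 64, 64)
multi_scale = HierarchicalResidualPool(scales=4)(feats)  # (2, 32, 64, 64)
```

Each successive group has effectively been pooled one more time than the previous one, so the concatenated output mixes receptive fields of several sizes at a granular, channel-group level.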
Related papers
- Boosting Cross-Domain Point Classification via Distilling Relational Priors from 2D Transformers [59.0181939916084]
Traditional 3D networks mainly focus on local geometric details and ignore the topological structure between local geometries.
We propose a novel Relational Priors Distillation (RPD) method to extract priors from transformers well-trained on massive images.
Experiments on the PointDA-10 and the Sim-to-Real datasets verify that the proposed method consistently achieves state-of-the-art performance on unsupervised domain adaptation (UDA) for point cloud classification.
arXiv Detail & Related papers (2024-07-26T06:29:09Z)
- Double-Shot 3D Shape Measurement with a Dual-Branch Network [14.749887303860717]
We propose a dual-branch Convolutional Neural Network (CNN)-Transformer network (PDCNet) to process different structured light (SL) modalities.
Within PDCNet, a Transformer branch is used to capture global perception in the fringe images, while a CNN branch is designed to collect local details in the speckle images.
We show that our method can reduce fringe order ambiguity while producing high-accuracy results on a self-made dataset.
arXiv Detail & Related papers (2024-07-19T10:49:26Z)
- A Model-data-driven Network Embedding Multidimensional Features for Tomographic SAR Imaging [5.489791364472879]
We propose a new model-data-driven network to achieve tomoSAR imaging based on multi-dimensional features.
We add two 2D processing modules, both convolutional encoder-decoder structures, to enhance multi-dimensional features of the imaging scene effectively.
Compared with the conventional CS-based FISTA method and the DL-based gamma-Net method, our proposed method achieves better completeness while maintaining decent imaging accuracy.
arXiv Detail & Related papers (2022-11-28T02:01:43Z)
- Context-Preserving Instance-Level Augmentation and Deformable Convolution Networks for SAR Ship Detection [50.53262868498824]
Shape deformation of targets in SAR images, caused by random orientation and partial information loss, is an essential challenge in SAR ship detection.
We propose a data augmentation method to train a deep network that is robust to partial information loss within the targets.
arXiv Detail & Related papers (2022-02-14T07:01:01Z)
- CSformer: Bridging Convolution and Transformer for Compressive Sensing [65.22377493627687]
This paper proposes a hybrid framework that integrates the detailed spatial information captured by CNNs with the global context provided by Transformers for enhanced representation learning.
The proposed approach is an end-to-end compressive image sensing method, composed of adaptive sampling and recovery.
The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing.
arXiv Detail & Related papers (2021-12-31T04:37:11Z)
- Spatial Dependency Networks: Neural Layers for Improved Generative Image Modeling [79.15521784128102]
We introduce a novel neural network for building image generators (decoders) and apply it to variational autoencoders (VAEs).
In our spatial dependency networks (SDNs), feature maps at each level of a deep neural net are computed in a spatially coherent way.
We show that augmenting the decoder of a hierarchical VAE with spatial dependency layers considerably improves density estimation.
arXiv Detail & Related papers (2021-03-16T07:01:08Z)
- SaNet: Scale-aware neural Network for Parsing Multiple Spatial Resolution Aerial Images [0.0]
We propose a novel scale-aware neural network (SaNet) for parsing multiple spatial resolution aerial images.
To cope with the imbalanced segmentation quality between larger and smaller objects caused by scale variation, SaNet deploys a densely connected feature pyramid network (DCFPN) module.
To alleviate informative feature loss, an SFR module is incorporated into the network to learn scale-invariant features with spatial relation enhancement.
arXiv Detail & Related papers (2021-03-14T14:19:46Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Robust Unsupervised Small Area Change Detection from SAR Imagery Using Deep Learning [23.203687716051697]
A robust unsupervised approach is proposed for small area change detection from synthetic aperture radar (SAR) images.
A multi-scale superpixel reconstruction method is developed to generate a difference image (DI).
A two-stage centre-constrained fuzzy c-means clustering algorithm is proposed to divide the pixels of the DI into changed, unchanged and intermediate classes.
arXiv Detail & Related papers (2020-11-22T12:50:08Z)
- A Convolutional Neural Network with Parallel Multi-Scale Spatial Pooling to Detect Temporal Changes in SAR Images [43.56177583903999]
In synthetic aperture radar (SAR) image change detection, it is quite challenging to exploit the change information in the noisy difference image.
We propose a multi-scale spatial pooling (MSSP) network to exploit the change information in the noisy difference image.
arXiv Detail & Related papers (2020-05-22T03:37:30Z)
- Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address the inference-cost issue by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as the backbone and a lightweight adapter module that takes image features and a resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z)