Unsupervised convolutional neural network fusion approach for change
detection in remote sensing images
- URL: http://arxiv.org/abs/2311.03679v1
- Date: Tue, 7 Nov 2023 03:10:17 GMT
- Title: Unsupervised convolutional neural network fusion approach for change
detection in remote sensing images
- Authors: Weidong Yan, Pei Yan, Li Cao
- Abstract summary: We introduce a completely unsupervised shallow convolutional neural network (USCNN) fusion approach for change detection.
Our model has three features: the entire training process is conducted in an unsupervised manner, the network architecture is shallow, and the objective function is sparse.
Experimental results on four real remote sensing datasets indicate the feasibility and effectiveness of the proposed approach.
- Score: 1.892026266421264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of deep learning, a variety of
deep-learning-based change detection methods have emerged in recent years.
However, these methods usually require a large number of training samples to
train the network model, which makes them expensive to apply. In this paper,
we introduce a completely unsupervised shallow convolutional neural network
(USCNN) fusion approach for change detection. First, the bi-temporal images
are transformed into different feature spaces using convolution kernels of
different sizes, extracting multi-scale information from the images. Second,
the output features of the bi-temporal images produced by the same
convolution kernel are subtracted to obtain the corresponding difference
images, and the difference feature images at each scale are fused into one
feature image with a 1×1 convolution layer. Finally, the output features of
the different scales are concatenated and a further 1×1 convolution layer
fuses the multi-scale information of the image. The model parameters are
obtained by a redesigned sparse function. Our model has three
characteristics: the entire training process is conducted in an unsupervised
manner, the network architecture is shallow, and the objective function is
sparse. It can therefore be seen as a lightweight network model.
Experimental results on four real remote sensing datasets indicate the
feasibility and effectiveness of the proposed approach.
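The fusion pipeline described in the abstract can be illustrated with a minimal
PyTorch sketch. The kernel sizes, channel counts, and the L1-style sparsity
term below are assumptions made for illustration only; the paper's "redesigned
sparse function" is not specified here, so a plain L1 penalty stands in for it.

```python
# Minimal sketch of the USCNN fusion idea, under assumed hyperparameters.
import torch
import torch.nn as nn


class USCNNSketch(nn.Module):
    def __init__(self, in_channels=1, feat_channels=8, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Shared multi-scale convolutions applied to both temporal images.
        self.scale_convs = nn.ModuleList(
            nn.Conv2d(in_channels, feat_channels, k, padding=k // 2)
            for k in kernel_sizes
        )
        # Per-scale 1x1 fusion of the difference features.
        self.scale_fusion = nn.ModuleList(
            nn.Conv2d(feat_channels, feat_channels, 1) for _ in kernel_sizes
        )
        # Final 1x1 fusion of the concatenated multi-scale features.
        self.final_fusion = nn.Conv2d(feat_channels * len(kernel_sizes), 1, 1)

    def forward(self, img_t1, img_t2):
        fused_scales = []
        for conv, fuse in zip(self.scale_convs, self.scale_fusion):
            # Same kernel on both dates, then subtract to get difference features.
            diff = conv(img_t1) - conv(img_t2)
            fused_scales.append(fuse(diff))
        return self.final_fusion(torch.cat(fused_scales, dim=1))


def sparse_objective(output, model=None, l1_weight=1e-3):
    # Stand-in unsupervised objective: encourage a sparse change map, plus an
    # optional L1 penalty on the weights (an assumption, not the paper's loss).
    loss = output.abs().mean()
    if model is not None:
        loss = loss + l1_weight * sum(p.abs().sum() for p in model.parameters())
    return loss


if __name__ == "__main__":
    t1, t2 = torch.rand(1, 1, 128, 128), torch.rand(1, 1, 128, 128)
    model = USCNNSketch()
    change_map = model(t1, t2)
    loss = sparse_objective(change_map, model=model)
    loss.backward()
    print(change_map.shape, float(loss))
```

In this sketch the same convolution weights are applied to both dates so that
the per-scale subtraction isolates temporal differences, mirroring the
description above; producing a binary change map from the fused output (for
example by thresholding) is a separate step not covered by the sketch.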
Related papers
- Lightweight single-image super-resolution network based on dual paths [0.552480439325792]
Single-image super-resolution (SISR) algorithms based on deep learning currently fall into two main model families, one based on convolutional neural networks and the other based on Transformers.
This paper proposes a new lightweight multi-scale feature fusion network model built from two complementary paths, one convolutional and one Transformer-based.
arXiv Detail & Related papers (2024-09-10T15:31:37Z) - Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z) - Scale Attention for Learning Deep Face Representation: A Study Against
Visual Scale Variation [69.45176408639483]
We reform the conv layer by resorting to the scale-space theory.
We build a novel architecture named SCale AttentioN Conv Neural Network (SCAN-CNN).
As a single-shot scheme, the inference is more efficient than multi-shot fusion.
arXiv Detail & Related papers (2022-09-19T06:35:04Z) - Dual-UNet: A Novel Siamese Network for Change Detection with Cascade
Differential Fusion [4.651756476458979]
We propose a novel Siamese neural network for change detection task, namely Dual-UNet.
In contrast to previous methods that encode the bi-temporal images individually, we design an encoder differential-attention module to focus on the spatial difference relationships of pixels.
Experiments demonstrate that the proposed approach consistently outperforms the most advanced methods on popular seasonal change detection datasets.
arXiv Detail & Related papers (2022-08-12T14:24:09Z) - Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Point-Cloud Deep Learning of Porous Media for Permeability Prediction [0.0]
We propose a novel deep learning framework for predicting permeability of porous media from their digital images.
We model the boundary between solid matrix and pore spaces as point clouds and feed them as inputs to a neural network based on the PointNet architecture.
arXiv Detail & Related papers (2021-07-18T22:59:21Z) - ResMLP: Feedforward networks for image classification with
data-efficient training [73.26364887378597]
We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification.
We will share our code based on the Timm library and pre-trained models.
arXiv Detail & Related papers (2021-05-07T17:31:44Z) - Exploiting Invariance in Training Deep Neural Networks [4.169130102668252]
Inspired by two basic mechanisms in animal visual systems, we introduce a feature transform technique that imposes invariance properties in the training of deep neural networks.
The resulting algorithm requires less parameter tuning, trains well with an initial learning rate of 1.0, and easily generalizes to different tasks.
Tested on ImageNet, MS COCO, and Cityscapes datasets, our proposed technique requires fewer iterations to train, surpasses all baselines by a large margin, seamlessly works on both small and large batch size training, and applies to different computer vision tasks of image classification, object detection, and semantic segmentation.
arXiv Detail & Related papers (2021-03-30T19:18:31Z) - Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose to adopt the graph propagation to capture the observed spatial contexts.
We then apply the attention mechanism on the propagation, which encourages the network to model the contextual information adaptively.
Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two benchmarks.
arXiv Detail & Related papers (2020-08-25T06:00:06Z) - Learning to Learn Parameterized Classification Networks for Scalable
Input Images [76.44375136492827]
Convolutional Neural Networks (CNNs) do not exhibit predictable recognition behavior with respect to changes in input resolution.
We employ meta learners to generate convolutional weights of main networks for various input scales.
We further utilize knowledge distillation on the fly over model predictions based on different input resolutions.
arXiv Detail & Related papers (2020-07-13T04:27:25Z) - Extracting dispersion curves from ambient noise correlations using deep
learning [1.0237120900821557]
We present a machine-learning approach to classifying the phases of surface wave dispersion curves.
Standard FTAN analysis of surface waves observed on an array of receivers is converted to an image.
We use a convolutional neural network (U-net) architecture with a supervised learning objective and incorporate transfer learning.
arXiv Detail & Related papers (2020-02-05T23:41:12Z)