Single-stream CNN with Learnable Architecture for Multi-source Remote
Sensing Data
- URL: http://arxiv.org/abs/2109.06094v1
- Date: Mon, 13 Sep 2021 16:10:41 GMT
- Title: Single-stream CNN with Learnable Architecture for Multi-source Remote
Sensing Data
- Authors: Yi Yang, Daoye Zhu, Tengteng Qu, Qiangyu Wang, Fuhu Ren, Chengqi Cheng
- Abstract summary: We propose an efficient framework based on deep convolutional neural network (CNN) for multi-source remote sensing data joint classification.
The proposed method can, in principle, adapt any modern CNN model to any multi-source remote sensing data set.
Experimental results demonstrate the effectiveness of the proposed single-stream CNNs.
- Score: 16.810239678639288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose an efficient and generalizable framework based on
deep convolutional neural network (CNN) for multi-source remote sensing data
joint classification. While recent methods are mostly based on multi-stream
architectures, we use group convolution to construct equivalent network
architectures efficiently within a single-stream network. We further adopt and
improve dynamic grouping convolution (DGConv) to make group convolution
hyperparameters, and thus the overall network architecture, learnable during
network training. The proposed method can therefore, in theory, adapt any
modern CNN model to any multi-source remote sensing data set, and can
potentially avoid the sub-optimal solutions caused by manually chosen
architecture hyperparameters. In the experiments, the proposed method is applied to ResNet
and UNet, and the adjusted networks are verified on three very diverse
benchmark data sets (i.e., Houston2018 data, Berlin data, and MUUFL data).
Experimental results demonstrate the effectiveness of the proposed
single-stream CNNs, and in particular ResNet18-DGConv improves the
state-of-the-art classification overall accuracy (OA) on HS-SAR Berlin data set
from $62.23\%$ to $68.21\%$. The experiments also yield two interesting
findings. First, using DGConv generally reduces the variance of test OA. Second,
a multi-stream design hurts model performance when imposed on the first few
layers, but becomes beneficial when applied to deeper layers. Altogether, these
findings imply that the multi-stream architecture, rather than being a strictly
necessary component of deep learning models for multi-source remote sensing
data, essentially plays the role of a model regularizer. Our code is publicly
available at https://github.com/yyyyangyi/Multi-source-RS-DGConv. We hope our
work can inspire novel research in the future.
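To make the single-stream construction concrete, here is a minimal sketch (assumed PyTorch; the channel counts and the HSI/SAR naming are illustrative, not the paper's actual configuration) showing that a single grouped convolution over concatenated modalities reproduces a pair of per-modality convolutions exactly:

```python
# Minimal sketch: a grouped convolution is arithmetically equivalent to a
# multi-stream pair of convolutions. Sizes and modality names are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two "modalities", e.g. 4 HSI-derived and 4 SAR-derived feature channels.
x_hsi = torch.randn(1, 4, 32, 32)
x_sar = torch.randn(1, 4, 32, 32)

# Multi-stream baseline: one convolution per modality.
conv_hsi = nn.Conv2d(4, 8, kernel_size=3, padding=1, bias=False)
conv_sar = nn.Conv2d(4, 8, kernel_size=3, padding=1, bias=False)

# Single-stream equivalent: one grouped convolution over the concatenation.
conv_grouped = nn.Conv2d(8, 16, kernel_size=3, padding=1, bias=False, groups=2)
with torch.no_grad():
    # Grouped-conv weights have shape (out_channels, in_channels // groups, k, k);
    # group 0 owns output rows 0..7, group 1 owns output rows 8..15.
    conv_grouped.weight[:8].copy_(conv_hsi.weight)
    conv_grouped.weight[8:].copy_(conv_sar.weight)

two_stream = torch.cat([conv_hsi(x_hsi), conv_sar(x_sar)], dim=1)
one_stream = conv_grouped(torch.cat([x_hsi, x_sar], dim=1))
print(torch.allclose(two_stream, one_stream, atol=1e-6))  # True
```

Because the two parameterizations coincide, the number of groups becomes an ordinary hyperparameter of a single-stream network rather than a structural commitment to separate streams.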
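The DGConv step can then be pictured with a heavily simplified sketch (the published DGConv uses binary gates trained with a straight-through estimator; the soft sigmoid relaxation, class name, and sizes below are illustrative assumptions, not the authors' implementation): the channel-connectivity pattern of a convolution is built as a Kronecker product of small gate matrices, each interpolating between a dense pattern and a split into two groups, so the effective grouping is trained by backpropagation.

```python
# Simplified, soft-relaxed sketch of learnable grouping in the DGConv spirit.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDGConv2d(nn.Module):
    """Convolution whose grouping pattern is learnable (soft relaxation)."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        assert channels & (channels - 1) == 0, "sketch assumes power-of-two channels"
        self.num_factors = channels.bit_length() - 1
        self.weight = nn.Parameter(
            0.01 * torch.randn(channels, channels, kernel_size, kernel_size))
        # One scalar gate per Kronecker factor: sigmoid ~1 keeps that factor
        # dense (all-ones), sigmoid ~0 pushes it toward identity (grouped).
        self.gates = nn.Parameter(torch.zeros(self.num_factors))
        self.padding = kernel_size // 2

    def relationship_matrix(self) -> torch.Tensor:
        U = torch.ones(1, 1)
        eye, ones = torch.eye(2), torch.ones(2, 2)
        for g in torch.sigmoid(self.gates):
            U = torch.kron(U, g * ones + (1.0 - g) * eye)
        return U  # (channels, channels), entries in [0, 1]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Masking the dense kernel with U cuts cross-group connections.
        masked = self.weight * self.relationship_matrix()[:, :, None, None]
        return F.conv2d(x, masked, padding=self.padding)

layer = SimpleDGConv2d(8)
out = layer(torch.randn(1, 8, 16, 16))
out.sum().backward()                  # gradients reach the grouping gates
print(out.shape, layer.gates.grad)
```

At the hard-gate limit each factor is either identity or all-ones, and the connectivity matrix reduces to that of a standard group convolution with a power-of-two number of groups.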
Related papers
- SVNet: Where SO(3) Equivariance Meets Binarization on Point Cloud
Representation [65.4396959244269]
The paper tackles the challenge by designing a general framework to construct 3D learning architectures.
The proposed approach can be applied to general backbones like PointNet and DGCNN.
Experiments on ModelNet40, ShapeNet, and the real-world dataset ScanObjectNN demonstrate that the method achieves a good trade-off between efficiency, rotation robustness, and accuracy.
arXiv Detail & Related papers (2022-09-13T12:12:19Z)
- Lost Vibration Test Data Recovery Using Convolutional Neural Network: A Case Study [0.0]
This paper proposes a CNN-based algorithm for recovering lost vibration test data, using the Alamosa Canyon Bridge as a real-world case study.
Three different CNN models were considered, predicting the readings of one and of two malfunctioning sensors.
The accuracy of the model was increased by adding a convolutional layer.
arXiv Detail & Related papers (2022-04-11T23:24:03Z)
- Model Composition: Can Multiple Neural Networks Be Combined into a Single Network Using Only Unlabeled Data? [6.0945220518329855]
This paper investigates the idea of combining multiple trained neural networks using unlabeled data.
To this end, the proposed method makes use of generation, filtering, and aggregation of reliable pseudo-labels collected from unlabeled data.
Our method supports an arbitrary number of input models with arbitrary architectures and categories; a toy sketch of the pseudo-labeling step follows this entry.
arXiv Detail & Related papers (2021-10-20T04:17:25Z)
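As a rough illustration of the Model Composition entry above, here is a toy pseudo-labeling step (the unanimity filter, the confidence threshold, and the throwaway linear models are illustrative assumptions, not the paper's exact procedure):

```python
# Toy sketch: combine trained models via pseudo-labels on unlabeled data.
# Ensemble predictions are aggregated, then filtered for agreement/confidence.
import torch
import torch.nn.functional as F

def pseudo_label(models, unlabeled, conf_threshold=0.9):
    """Return (inputs, labels) on which the ensemble agrees confidently."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(m(unlabeled), dim=1) for m in models])
        conf, labels = probs.mean(dim=0).max(dim=1)  # aggregate the ensemble
        votes = probs.argmax(dim=2)                  # per-model hard predictions
        keep = (votes == votes[0]).all(dim=0) & (conf >= conf_threshold)
    return unlabeled[keep], labels[keep]

# Illustrative usage with untrained linear "models" on random data.
models = [torch.nn.Linear(16, 5) for _ in range(3)]
x_kept, y_pseudo = pseudo_label(models, torch.randn(128, 16), conf_threshold=0.3)
print(x_kept.shape, y_pseudo.shape)  # only the reliably labeled subset survives
```

The surviving pairs would then supervise a single combined network in place of the original ensemble.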
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- DC-NAS: Divide-and-Conquer Neural Architecture Search [108.57785531758076]
We present a divide-and-conquer (DC) approach to effectively and efficiently search deep neural architectures.
We achieve a $75.1\%$ top-1 accuracy on the ImageNet dataset, which is higher than that of state-of-the-art methods using the same search space.
arXiv Detail & Related papers (2020-05-29T09:02:16Z)
- Binarizing MobileNet via Evolution-based Searching [66.94247681870125]
We propose using evolutionary search to facilitate the construction and training scheme when binarizing MobileNet.
Inspired by one-shot architecture search frameworks, we manipulate the idea of group convolution to design efficient 1-Bit Convolutional Neural Networks (CNNs).
Our objective is to come up with a tiny yet efficient binary neural architecture by exploring the best candidates of the group convolution.
arXiv Detail & Related papers (2020-05-13T13:25:51Z)
- Ensembled sparse-input hierarchical networks for high-dimensional datasets [8.629912408966145]
We show that dense neural networks can be a practical data analysis tool in settings with small sample sizes.
The proposed method prunes the network structure by tuning only two L1-penalty parameters.
On a collection of real-world datasets of varying size, EASIER-net selected network architectures in a data-adaptive manner and achieved higher prediction accuracy than off-the-shelf methods on average; a minimal sketch of the two-penalty idea follows this entry.
arXiv Detail & Related papers (2020-05-11T02:08:53Z)
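A minimal sketch of the two-penalty objective mentioned in the EASIER-net entry above (the split into an input-layer penalty for feature selection and a hidden-layer penalty for pruning, along with all names and values, are illustrative assumptions):

```python
# Hypothetical two-penalty L1 objective: lambda1 sparsifies the input layer
# (feature selection); lambda2 sparsifies the remaining weights (pruning).
import torch
import torch.nn as nn

def two_penalty_loss(model, x, y, lambda1=1e-3, lambda2=1e-4):
    linear_layers = [m for m in model if isinstance(m, nn.Linear)]
    input_l1 = linear_layers[0].weight.abs().sum()
    hidden_l1 = sum(l.weight.abs().sum() for l in linear_layers[1:])
    return nn.functional.mse_loss(model(x), y) + lambda1 * input_l1 + lambda2 * hidden_l1

model = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(64, 100), torch.randn(64, 1)
loss = two_penalty_loss(model, x, y)
loss.backward()  # gradients now include the sparsity pressure
print(float(loss))
```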
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with models based on neural architecture search to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
- Inferring Convolutional Neural Networks' accuracies from their architectural characterizations [0.0]
We study the relationships between a CNN's architecture and its performance.
We show that the attributes can be predictive of the networks' performance in two specific computer vision-based physics problems.
We use machine learning models to predict, before training, whether a network can exceed a given threshold accuracy; a minimal sketch of this setup follows this entry.
arXiv Detail & Related papers (2020-01-07T16:41:58Z)
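And a minimal sketch of the before-training prediction setup from the entry above, on purely synthetic data (the attribute set and the random-forest choice are illustrative assumptions; the paper's architectural characterization is richer):

```python
# Synthetic sketch: predict whether an architecture clears a threshold
# accuracy from structural attributes alone, before any training is run.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Illustrative attributes per architecture: e.g. depth, width, log(#params),
# number of conv layers, kernel size; labels are synthetic for this sketch.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```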
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)