A Light-Weighted Convolutional Neural Network for Bitemporal SAR Image
Change Detection
- URL: http://arxiv.org/abs/2005.14376v2
- Date: Sun, 21 Jun 2020 02:40:04 GMT
- Title: A Light-Weighted Convolutional Neural Network for Bitemporal SAR Image
Change Detection
- Authors: Rongfang Wang, Fan Ding, Licheng Jiao, Jia-Wei Chen, Bo Liu, Wenping
Ma, Mi Wang
- Abstract summary: We propose a lightweight neural network to reduce the computational and spatial complexity.
In the proposed network, we replace normal convolutional layers with bottleneck layers that keep the same number of channels between input and output.
We verify our light-weighted neural network on four sets of bitemporal SAR images.
- Score: 40.58864817923371
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, many Convolutional Neural Networks (CNNs) have been successfully
employed in bitemporal SAR image change detection. However, most of the
existing networks are too heavy and occupy a large volume of memory for storage
and calculation. Motivated by this, in this paper, we propose a lightweight
neural network to reduce the computational and spatial complexity and
facilitate the change detection on an edge device. In the proposed network, we
replace normal convolutional layers with bottleneck layers that keep the same
number of channels between input and output. Next, we employ dilated
convolutional kernels with a few non-zero entries that reduce the running time
of convolutional operators. Compared with a conventional convolutional
neural network, our light-weighted neural network is more efficient, with
fewer parameters. We verify our light-weighted neural network on four sets of
bitemporal SAR images. The experimental results show that the proposed network
can obtain better performance than the conventional CNN and has better model
generalization, especially on the challenging datasets with complex scenes.
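The two ingredients the abstract names (bottleneck layers that preserve the channel count, and dilated kernels that widen the receptive field without extra weights) can be made concrete with a quick parameter count. The sketch below is illustrative only, with an assumed reduction factor of 4, and is not the authors' implementation.

```python
# Illustrative parameter count (not the authors' code): a plain 3x3 conv
# versus a bottleneck block (1x1 squeeze -> 3x3 -> 1x1 expand) that keeps
# the same number of channels between input and output.

def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution, bias ignored."""
    return c_in * c_out * k * k

def plain_block(channels):
    # one ordinary 3x3 convolution, same channels in and out
    return conv_params(channels, channels, 3)

def bottleneck_block(channels, reduction=4):
    squeezed = channels // reduction
    return (conv_params(channels, squeezed, 1)    # 1x1 squeeze
            + conv_params(squeezed, squeezed, 3)  # 3x3 on fewer channels
            + conv_params(squeezed, channels, 1)) # 1x1 expand back

def dilated_receptive_field(k, dilation):
    # a dilated k x k kernel keeps its k*k weights but spans a wider window
    return k + (k - 1) * (dilation - 1)

print(plain_block(64))                # 36864 weights
print(bottleneck_block(64))           # 4352 weights (~8.5x fewer)
print(dilated_receptive_field(3, 2))  # 5 (a dilated 3x3 sees a 5x5 window)
```

The reduction factor and block layout here are assumptions for illustration; the point is only that squeezing channels before the 3x3 convolution cuts the dominant term in the weight count.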
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- A Faster Approach to Spiking Deep Convolutional Neural Networks [0.0]
Spiking neural networks (SNNs) have closer dynamics to the brain than current deep neural networks.
We propose a network structure based on previous work to improve network runtime and accuracy.
arXiv Detail & Related papers (2022-10-31T16:13:15Z)
- Parameter Convex Neural Networks [13.42851919291587]
We propose the exponential multilayer neural network (EMLP) which is convex with regard to the parameters of the neural network under some conditions.
In further experiments, we use the same architecture to build the exponential graph convolutional network (EGCN) and evaluate it on a graph classification dataset.
arXiv Detail & Related papers (2022-06-11T16:44:59Z)
- ShiftNAS: Towards Automatic Generation of Advanced Multiplication-Less Neural Networks [30.14665696695582]
ShiftNAS is the first framework tailoring Neural Architecture Search (NAS) to substantially reduce the accuracy gap between bit-shift neural networks and their real-valued counterparts.
We show that ShiftNAS sets a new state-of-the-art for bit-shift neural networks, improving accuracy by 1.69-8.07% on CIFAR10, 5.71-18.09% on CIFAR100, and 4.36-67.07% on ImageNet.
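As background on the bit-shift idea this summary relies on: when weights are constrained to signed powers of two, each multiplication can be replaced by an integer shift. A toy sketch under that assumption, not the ShiftNAS method itself:

```python
# Simplified illustration (not ShiftNAS itself): if a weight equals
# sign * 2**exponent, multiplying an integer activation by it reduces
# to a bit shift plus a sign flip.
def shift_mul(x: int, sign: int, exponent: int) -> int:
    # sign in {+1, -1}, exponent >= 0
    return sign * (x << exponent)

print(shift_mul(3, 1, 4))   # 48, i.e. 3 * 2**4
print(shift_mul(5, -1, 2))  # -20, i.e. 5 * (-2**2)
```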
arXiv Detail & Related papers (2022-04-07T12:15:03Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Learning Frequency-aware Dynamic Network for Efficient Super-Resolution [56.98668484450857]
This paper explores a novel frequency-aware dynamic network for dividing the input into multiple parts according to its coefficients in the discrete cosine transform (DCT) domain.
In practice, the high-frequency part is processed with expensive operations, while the low-frequency part is assigned cheap operations to relieve the computational burden.
Experiments conducted on benchmark SISR models and datasets show that the frequency-aware dynamic network can be employed for various SISR neural architectures.
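One way to picture the frequency split described above, as a hedged sketch: partition the DCT coefficients of a block by frequency index and route the two groups to expensive and cheap branches. The index rule and cutoff below are arbitrary illustrations, not the paper's scheme.

```python
# Illustrative only: split a block's DCT coefficients into "low" and
# "high" frequency groups by the sum of their row/column indices.
def split_dct_coeffs(block, cutoff):
    low, high = [], []
    for i, row in enumerate(block):
        for j, coeff in enumerate(row):
            # small i + j = low spatial frequency (top-left of the block)
            (low if i + j < cutoff else high).append(coeff)
    return low, high

block = [[10, 2],
         [3, 1]]
print(split_dct_coeffs(block, 1))  # ([10], [2, 3, 1])
```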
arXiv Detail & Related papers (2021-03-15T12:54:26Z)
- An Alternative Practice of Tropical Convolution to Traditional Convolutional Neural Networks [0.5837881923712392]
We propose a new type of CNN called Tropical Convolutional Neural Networks (TCNNs).
TCNNs are built on tropical convolutions in which the multiplications and additions in conventional convolutional layers are replaced by additions and min/max operations respectively.
We show that TCNNs can achieve higher expressive power than ordinary convolutional layers on the MNIST and CIFAR10 image datasets.
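The tropical substitution is easy to state in one dimension: multiplications become additions and the summation becomes a max (the max-plus semiring). A minimal sketch of that substitution, not the TCNN implementation:

```python
# Max-plus ("tropical") 1-D correlation: add each weight to its input
# instead of multiplying, then take the maximum instead of summing.
def tropical_conv1d(signal, kernel):
    k = len(kernel)
    out = []
    for i in range(len(signal) - k + 1):
        out.append(max(s + w for s, w in zip(signal[i:i + k], kernel)))
    return out

print(tropical_conv1d([0, 1, 3, 2], [1, 0]))  # [1, 3, 4]
```

Because only additions and comparisons are used, a block like this avoids multipliers entirely, which is the efficiency argument behind tropical layers.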
arXiv Detail & Related papers (2021-03-03T00:13:30Z)
- Deep Neural Networks using a Single Neuron: Folded-in-Time Architecture using Feedback-Modulated Delay Loops [0.0]
We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops.
This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals.
The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
arXiv Detail & Related papers (2020-11-19T21:45:58Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.