Sparse Array Selection Across Arbitrary Sensor Geometries with Deep
Transfer Learning
- URL: http://arxiv.org/abs/2004.11637v2
- Date: Tue, 2 Jun 2020 07:40:35 GMT
- Title: Sparse Array Selection Across Arbitrary Sensor Geometries with Deep
Transfer Learning
- Authors: Ahmet M. Elbir and Kumar Vijay Mishra
- Abstract summary: Sparse sensor array selection arises in many engineering applications, where it is imperative to obtain maximum spatial resolution from a limited number of array elements.
Recent research shows that the computational complexity of array selection can be reduced by replacing conventional optimization and greedy search methods with a deep learning network.
We adopt a deep transfer learning (TL) approach, wherein we train a deep convolutional neural network (CNN) with data of a source sensor array for which calibrated data are readily available and reuse this pre-trained CNN for a different, data-insufficient target array geometry.
Numerical experiments with uniform rectangular and circular arrays demonstrate enhanced performance of TL-CNN over a CNN trained with insufficient data from the same model.
- Score: 22.51807198305316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse sensor array selection arises in many engineering applications, where
it is imperative to obtain maximum spatial resolution from a limited number of
array elements. Recent research shows that the computational complexity of array
selection can be reduced by replacing conventional optimization and greedy
search methods with a deep learning network. However, in practice, sufficient
and well-calibrated labeled training data are unavailable and, more so, for
arbitrary array configurations. To address this, we adopt a deep transfer
learning (TL) approach, wherein we train a deep convolutional neural network
(CNN) with data of a source sensor array for which calibrated data are readily
available and reuse this pre-trained CNN for a different, data-insufficient
target array geometry to perform sparse array selection. Numerical experiments
with uniform rectangular and circular arrays demonstrate enhanced performance
of TL-CNN on the target model over a CNN trained with insufficient data from
the same model. In particular, our TL framework provides approximately 20%
higher sensor selection accuracy and 10% improvement in the
direction-of-arrival estimation error.
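To make the transfer recipe concrete, here is a minimal PyTorch sketch of the two-stage procedure the abstract describes: pre-train a CNN on abundant, calibrated source-array data, then freeze the convolutional feature extractor and fine-tune only the head on the scarce target-array data. All layer sizes, names, and the input encoding (sample covariance fed as a two-channel image) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Illustrative size: an N-element array whose sample covariance is fed
# as a 2-channel (real/imaginary) N x N "image"; the network outputs a
# selection score per sensor for the sparse subarray.
N = 16

class SelectionCNN(nn.Module):
    def __init__(self, n_elements):
        super().__init__()
        self.features = nn.Sequential(          # shared feature extractor
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(              # geometry-specific head
            nn.Flatten(),
            nn.Linear(32 * n_elements * n_elements, 128), nn.ReLU(),
            nn.Linear(128, n_elements),         # per-sensor selection logits
        )

    def forward(self, x):
        return self.head(self.features(x))

model = SelectionCNN(N)
loss_fn = nn.BCEWithLogitsLoss()

# Stage 1: pre-train on the large, well-calibrated source-array dataset
# (standard supervised loop over (covariance image, 0/1 selection label)
# pairs; elided here).

# Stage 2: transfer to the data-insufficient target geometry by freezing
# the source-learned features and fine-tuning only the head.
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)

def fine_tune_step(cov_img, labels):
    """One fine-tuning step on a small target-array batch."""
    optimizer.zero_grad()
    loss = loss_fn(model(cov_img), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Whether to freeze all convolutional layers or only the earliest ones is a design choice; with very few target samples, freezing more of the network reduces overfitting.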
Related papers
- DCP: Learning Accelerator Dataflow for Neural Network via Propagation [52.06154296196845]
This work proposes an efficient data-centric approach, named Dataflow Code Propagation (DCP), to automatically find the optimal dataflow for DNN layers in seconds without human effort.
DCP learns a neural predictor to efficiently update the dataflow codes towards the desired gradient directions to minimize various optimization objectives.
For example, without using additional training data, DCP surpasses the GAMMA method that performs a full search using thousands of samples.
arXiv Detail & Related papers (2024-10-09T05:16:44Z)
- A distributed neural network architecture for dynamic sensor selection with application to bandwidth-constrained body-sensor networks [53.022158485867536]
We propose a dynamic sensor selection approach for deep neural networks (DNNs), which derives an optimal sensor subset for each input sample instead of a fixed selection for the entire dataset.
We show how we can use this dynamic selection to increase the lifetime of a wireless sensor network (WSN) by imposing constraints on how often each node is allowed to transmit.
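The summary does not say how the per-sample discrete selection is made trainable; one standard device for input-dependent gating is a Gumbel-softmax relaxation. The sketch below illustrates that idea under stated assumptions (all module names, sizes, and the transmit-budget penalty are hypothetical, not the paper's architecture).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

M, D = 8, 16   # hypothetical: M candidate sensors, D features per sensor

class DynamicGate(nn.Module):
    """Scores every sensor for the current input and samples a hard 0/1
    gate per sensor via Gumbel-softmax, keeping selection trainable."""
    def __init__(self, m, d):
        super().__init__()
        self.scorer = nn.Linear(m * d, 2 * m)   # (off, on) logits per sensor

    def forward(self, x, tau=1.0):
        logits = self.scorer(x.flatten(1)).view(x.shape[0], -1, 2)
        return F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]  # (B, M)

gate = DynamicGate(M, D)
classifier = nn.Linear(M * D, 4)

x = torch.randn(32, M, D)                           # batch of sensor features
g = gate(x)                                         # per-sample sensor subset
out = classifier((x * g.unsqueeze(-1)).flatten(1))  # unselected sensors zeroed

# A bandwidth constraint can be encoded as a penalty on how often each
# node transmits, e.g.: penalty = F.relu(g.mean(0) - budget).sum()
```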
arXiv Detail & Related papers (2023-08-16T14:04:50Z)
- Sparse Array Design for Direction Finding using Deep Learning [19.061021605579683]
Deep learning (DL) techniques have been introduced for designing sparse arrays.
This chapter provides a synopsis of several direction finding applications of DL-based sparse arrays.
arXiv Detail & Related papers (2023-08-08T22:45:48Z)
- Multidimensional analysis using sensor arrays with deep learning for high-precision and high-accuracy diagnosis [0.0]
We demonstrate that the precision and accuracy of measurements can be significantly improved by feeding a deep neural network (DNN) with the data from a low-cost and low-accuracy sensor array.
The data collection is done with an array composed of 32 temperature sensors, including 16 analog and 16 digital sensors.
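As a toy illustration of the idea, one can regress a reference measurement from the raw readings of a cheap sensor array with a small fully connected network; everything below (shapes, targets, hyperparameters) is a hypothetical sketch, not the paper's setup.

```python
import torch
import torch.nn as nn

# 32 raw readings in (16 analog + 16 digital), one calibrated estimate out.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

readings = torch.randn(128, 32)   # noisy low-cost sensor readings (dummy data)
reference = torch.randn(128, 1)   # ground truth from a precise instrument (dummy)

optimizer.zero_grad()
loss = loss_fn(model(readings), reference)
loss.backward()
optimizer.step()
```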
arXiv Detail & Related papers (2022-11-30T16:14:55Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
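The core mechanism behind a vector-quantized auto-decoder is nearest-codebook lookup with a straight-through gradient, so discrete codes can be learned without direct supervision. A generic sketch of that step follows (codebook size and dimensions are illustrative, not the paper's values).

```python
import torch
import torch.nn as nn

num_codes, code_dim = 256, 8
codebook = nn.Parameter(torch.randn(num_codes, code_dim))

def quantize(z):
    """Snap each latent vector to its nearest codebook entry. The
    straight-through trick passes downstream gradients to z as if
    quantization were the identity; the codebook itself is typically
    trained with a separate codebook/commitment loss (omitted here)."""
    idx = torch.cdist(z, codebook).argmin(dim=-1)   # discrete assignment
    zq = codebook[idx]                              # quantized vectors
    return z + (zq - z).detach(), idx

z = torch.randn(4, code_dim, requires_grad=True)
zq, idx = quantize(z)   # zq is discrete-valued but differentiable w.r.t. z
```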
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Variational Sparse Coding with Learned Thresholding [6.737133300781134]
We propose a new approach to variational sparse coding that allows us to learn sparse distributions by thresholding samples.
We first evaluate and analyze our method by training a linear generator, showing that it has superior performance, statistical efficiency, and gradient estimation.
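The thresholding idea can be illustrated with a reparameterized Gaussian sample passed through a soft-threshold whose level is itself learned, yielding exactly sparse latent codes; the snippet below is a schematic reading of the summary, not the authors' code.

```python
import torch
import torch.nn as nn

def soft_threshold(x, lam):
    """Shrink x toward zero and zero out anything within lam of it,
    so sampled latent codes are exactly sparse."""
    return torch.sign(x) * torch.relu(x.abs() - lam)

# Reparameterized sampling with a learnable threshold (sizes illustrative).
mu = torch.zeros(64, requires_grad=True)
log_sigma = torch.zeros(64, requires_grad=True)
lam = nn.Parameter(torch.tensor(0.5))        # learned threshold level

eps = torch.randn(64)
z = soft_threshold(mu + eps * log_sigma.exp(), lam)   # sparse, differentiable
```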
arXiv Detail & Related papers (2022-05-07T14:49:50Z)
- Generalized Learning Vector Quantization for Classification in Randomized Neural Networks and Hyperdimensional Computing [4.4886210896619945]
We propose a modified RVFL network that avoids computationally expensive matrix operations during training.
The proposed approach achieved state-of-the-art accuracy on a collection of datasets from the UCI Machine Learning Repository.
arXiv Detail & Related papers (2021-06-17T21:17:17Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature-map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature-map constructions achieving comparable error bounds, both in theory and in practice.
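The paper's exact construction is not reproduced here; as rough intuition, the classic random-feature map for a single ReLU layer approximates the arc-cosine kernel that appears as a building block of the ReLU NTK. A minimal sketch under that simplification:

```python
import torch

def relu_random_features(x, m=2048, seed=0):
    """phi(x) = sqrt(2/m) * relu(W x) with Gaussian W; inner products of
    these features approximate the order-1 arc-cosine kernel, one
    ingredient of the ReLU NTK (a simplification, not the paper's full
    construction)."""
    gen = torch.Generator().manual_seed(seed)      # shared W across inputs
    W = torch.randn(m, x.shape[-1], generator=gen)
    return torch.relu(x @ W.T) * (2.0 / m) ** 0.5

x, y = torch.randn(5, 10), torch.randn(5, 10)
K_approx = relu_random_features(x) @ relu_random_features(y).T  # kernel estimate
```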
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Ensembled sparse-input hierarchical networks for high-dimensional datasets [8.629912408966145]
We show that dense neural networks can be a practical data analysis tool in settings with small sample sizes.
The proposed method, EASIER-net, prunes the network structure by tuning only two L1-penalty parameters.
On a collection of real-world datasets with different sizes, EASIER-net selected network architectures in a data-adaptive manner and achieved higher prediction accuracy than off-the-shelf methods on average.
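Schematically, a sparse-input network with two L1 penalties, one on the input layer (feature selection) and one on deeper layers (structure pruning), looks like the following; the two lambda values play the role of the method's two tuning parameters (a hypothetical sketch, not the EASIER-net code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(100, 32), nn.ReLU(), nn.Linear(32, 1))
lam_input, lam_hidden = 1e-3, 1e-4   # the two L1-penalty tuning parameters

def penalized_loss(x, y):
    mse = F.mse_loss(net(x), y)
    l1_input = net[0].weight.abs().sum()    # drives unused inputs to zero
    l1_hidden = net[2].weight.abs().sum()   # prunes hidden-layer structure
    return mse + lam_input * l1_input + lam_hidden * l1_hidden
```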
arXiv Detail & Related papers (2020-05-11T02:08:53Z)
- Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a deep neural network (DNN) with finite element method (FEM) calculations.
Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using conventional methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)