Optimizing Sensor Network Design for Multiple Coverage
- URL: http://arxiv.org/abs/2405.09096v2
- Date: Mon, 20 May 2024 18:32:03 GMT
- Title: Optimizing Sensor Network Design for Multiple Coverage
- Authors: Lukas Taus, Yen-Hsi Richard Tsai
- Abstract summary: We introduce a new objective function for the greedy (next-best-view) algorithm to design efficient and robust sensor networks.
We also introduce a Deep Learning model to accelerate the algorithm for near real-time computations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sensor placement optimization methods have been studied extensively. They can be applied to a wide range of applications, including surveillance of known environments, optimal locations for 5G towers, and placement of missile defense systems. However, few works explore the robustness and efficiency of the resulting sensor network concerning sensor failure or adversarial attacks. This paper addresses this issue by optimizing for the least number of sensors to achieve multiple coverage of non-simply connected domains by a prescribed number of sensors. We introduce a new objective function for the greedy (next-best-view) algorithm to design efficient and robust sensor networks and derive theoretical bounds on the network's optimality. We further introduce a Deep Learning model to accelerate the algorithm for near real-time computations. The Deep Learning model requires the generation of training examples. Correspondingly, we show that understanding the geometric properties of the training data set provides important insights into the performance and training process of deep learning techniques. Finally, we demonstrate that a simple parallel version of the greedy approach using a simpler objective can be highly competitive.
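The greedy (next-best-view) loop described in the abstract can be sketched as follows. This is an illustrative sketch only: the function name, the precomputed boolean visibility matrix, and the plain marginal-gain rule are assumptions for exposition, not the paper's actual objective function (which additionally handles visibility in non-simply connected domains and carries the theoretical optimality bounds).

```python
import numpy as np

def greedy_k_coverage(visible, k, max_sensors):
    """Greedy (next-best-view) selection toward k-fold coverage.

    visible     : (n_candidates, n_points) bool; visible[j, i] means
                  a sensor at candidate j sees domain point i
    k           : required coverage multiplicity for every point
    max_sensors : budget on the number of sensors placed
    """
    n_cand, n_pts = visible.shape
    coverage = np.zeros(n_pts, dtype=int)   # sensors currently seeing each point
    available = np.ones(n_cand, dtype=bool)
    chosen = []
    for _ in range(max_sensors):
        deficit = coverage < k              # points still below k-fold coverage
        # marginal gain: undercovered points each candidate would newly help
        gains = (visible & deficit[None, :]).sum(axis=1)
        gains[~available] = -1              # never re-place a used sensor
        best = int(np.argmax(gains))
        if gains[best] <= 0:                # no candidate improves coverage
            break
        chosen.append(best)
        available[best] = False
        coverage += visible[best].astype(int)
    return chosen, coverage
```

Each iteration places the sensor whose field of view covers the most points that are still below the required multiplicity k, which is the "next-best-view" step the paper's objective refines for robustness.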
Related papers
- HGFF: A Deep Reinforcement Learning Framework for Lifetime Maximization in Wireless Sensor Networks [5.4894758104028245]
We propose a new framework combining heterogeneous graph neural network with deep reinforcement learning to automatically construct the movement path of the sink.
We design ten types of static and dynamic maps to simulate different wireless sensor networks in the real world.
Our approach consistently outperforms the existing methods on all types of maps.
arXiv Detail & Related papers (2024-04-11T13:09:11Z) - Principled Architecture-aware Scaling of Hyperparameters [69.98414153320894]
Training a high-quality deep neural network requires choosing suitable hyperparameters, which is a non-trivial and expensive process.
In this work, we precisely characterize the dependence of initializations and maximal learning rates on the network architecture.
We demonstrate that network rankings in benchmarks can easily change when the networks are trained with better-suited hyperparameters.
arXiv Detail & Related papers (2024-02-27T11:52:49Z) - Active search and coverage using point-cloud reinforcement learning [50.741409008225766]
This paper presents an end-to-end deep reinforcement learning solution for target search and coverage.
We show that deep hierarchical feature learning works for RL and that by using farthest point sampling (FPS) we can reduce the number of points.
We also show that multi-head attention for point clouds helps the agent learn faster but converges to the same outcome.
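Farthest point sampling, mentioned above as the point-reduction step, can be sketched generically as follows; this is a standard FPS implementation for illustration, not that paper's code, and the function name and fixed starting index are assumptions.

```python
import numpy as np

def farthest_point_sampling(pts, m, start=0):
    """Reduce a point cloud pts of shape (n, d) to m points via FPS.

    Each step adds the point farthest from everything selected so far,
    so the subsample spreads evenly over the cloud's geometry instead
    of clustering where the raw points are dense.
    """
    selected = [start]
    # distance from every point to the current selected set
    dists = np.linalg.norm(pts - pts[start], axis=1)
    for _ in range(m - 1):
        nxt = int(np.argmax(dists))         # farthest remaining point
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(pts - pts[nxt], axis=1))
    return pts[selected]
```

Because a selected point's distance to the set drops to zero, it is never picked again, and the m kept points approximately cover the cloud.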
arXiv Detail & Related papers (2023-12-18T18:16:30Z) - Efficient and robust Sensor Placement in Complex Environments [1.1421942894219899]
This paper addresses the problem of efficient and unobstructed surveillance or communication in complex environments.
We propose a greedy algorithm to achieve the objective.
Deep learning techniques are used to accelerate the evaluation of the objective function.
arXiv Detail & Related papers (2023-09-15T17:10:19Z) - Multi-agent Reinforcement Learning with Graph Q-Networks for Antenna Tuning [60.94661435297309]
The scale of mobile networks makes it challenging to optimize antenna parameters using manual intervention or hand-engineered strategies.
We propose a new multi-agent reinforcement learning algorithm to optimize mobile network configurations globally.
We empirically demonstrate the performance of the algorithm on an antenna tilt tuning problem and a joint tilt and power control problem in a simulated environment.
arXiv Detail & Related papers (2023-01-20T17:06:34Z) - Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z) - Finding the Optimal Network Depth in Classification Tasks [10.248235276871258]
We develop a fast end-to-end method for training lightweight neural networks using multiple classifier heads.
By allowing the model to determine the importance of each head, we are able to detect and remove unneeded components of the network.
arXiv Detail & Related papers (2020-04-17T11:08:45Z) - Learning a Probabilistic Strategy for Computational Imaging Sensor Selection [16.553234762932938]
We propose a physics-constrained, fully differentiable, autoencoder that learns a probabilistic sensor-sampling strategy for optimized sensor design.
The proposed method learns a system's preferred sampling distribution that characterizes the correlations between different sensor selections as a binary, fully-connected Ising model.
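The "binary, fully-connected Ising model" over sensor selections can be illustrated with its energy function and a Gibbs sampler. The symbols below (coupling matrix J for pairwise correlations between selections, bias h, spins +1 for "selected") and the sampler itself are generic textbook assumptions, not the learned model from that paper.

```python
import numpy as np

def ising_energy(s, J, h):
    """Energy of a fully-connected Ising model over binary selections.

    s : (n,) spins in {-1, +1}, +1 meaning "sensor selected"
    J : (n, n) symmetric coupling matrix, zero diagonal; encodes
        correlations between different sensor selections
    h : (n,) per-sensor bias
    """
    return -0.5 * s @ J @ s - h @ s

def gibbs_sample(J, h, n_steps=1000, beta=1.0, seed=0):
    """Gibbs sampling of selections from p(s) proportional to exp(-beta * E(s))."""
    rng = np.random.default_rng(seed)
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    for _ in range(n_steps):
        i = int(rng.integers(n))
        # local field on spin i (self-term removed), then the exact
        # conditional probability of s_i = +1 given all other spins
        field = J[i] @ s - J[i, i] * s[i] + h[i]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        s[i] = 1 if rng.random() < p_up else -1
    return s
```

Low-energy configurations correspond to sensor subsets the model considers jointly favorable, so sampling yields candidate designs rather than a single point estimate.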
arXiv Detail & Related papers (2020-03-23T17:52:17Z) - Regression with Deep Learning for Sensor Performance Optimization [0.0]
We re-approach non-linear regression with deep learning enabled by Keras and NumPy.
In particular, we use deep learning to parametrize a non-linear relationship between inputs and outputs of an industrial sensor.
arXiv Detail & Related papers (2020-02-22T19:58:58Z) - Large-Scale Gradient-Free Deep Learning with Recursive Local Representation Alignment [84.57874289554839]
Training deep neural networks on large-scale datasets requires significant hardware resources.
Backpropagation, the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize.
We propose a neuro-biologically-plausible alternative to backprop that can be used to train deep networks.
arXiv Detail & Related papers (2020-02-10T16:20:02Z) - Depthwise Non-local Module for Fast Salient Object Detection Using a Single Thread [136.2224792151324]
We propose a new deep learning algorithm for fast salient object detection.
The proposed algorithm achieves competitive accuracy and high inference efficiency simultaneously with a single CPU thread.
arXiv Detail & Related papers (2020-01-22T15:23:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.