Efficient Sensor Placement from Regression with Sparse Gaussian Processes in Continuous and Discrete Spaces
- URL: http://arxiv.org/abs/2303.00028v7
- Date: Thu, 22 Aug 2024 16:39:34 GMT
- Title: Efficient Sensor Placement from Regression with Sparse Gaussian Processes in Continuous and Discrete Spaces
- Authors: Kalvik Jakkala, Srinivas Akella
- Abstract summary: The sensor placement problem is a common problem that arises when monitoring correlated phenomena.
We present a novel formulation of the sensor placement (SP) problem based on a variational approximation that can be optimized using gradient descent.
- Score: 3.729242965449096
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The sensor placement (SP) problem is a common problem that arises when monitoring correlated phenomena, such as temperature, precipitation, and salinity. Existing approaches typically formulate it as the maximization of information metrics, such as mutual information (MI), and use optimization methods such as greedy algorithms in discrete domains and derivative-free methods such as genetic algorithms in continuous domains. However, computing MI for sensor placement requires discretizing the environment, and its computational cost depends on the size of the discretized environment. These limitations restrict such approaches from scaling to large problems. We present a novel formulation of the SP problem based on a variational approximation that can be optimized using gradient descent, allowing us to efficiently find solutions in continuous domains. We generalize our method to also handle discrete environments. Our experimental results on four real-world datasets demonstrate that our approach generates sensor placements consistently on par with or better than the prior state-of-the-art approaches in terms of both MI and reconstruction quality, all while being significantly faster. Our computationally efficient approach enables both large-scale sensor placement and fast robotic sensor placement for informative path planning algorithms.
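As an illustrative sketch of the core idea described in the abstract (treating sensor locations as the inducing points of a sparse variational Gaussian process and optimizing them with gradient-based methods), the following minimal example assumes the GPflow library; the synthetic data, kernel choice, and sensor count are placeholders, and this is not the authors' implementation.

```python
# Minimal sketch (assumption: GPflow 2.x): choose sensor sites by optimizing
# the inducing inputs of a sparse variational GP with gradient-based methods.
import numpy as np
import gpflow

# Placeholder historical readings of the monitored phenomenon (e.g. temperature).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(500, 2))                 # spatial coordinates
y = np.sin(X[:, :1]) + np.cos(X[:, 1:]) + 0.1 * rng.standard_normal((500, 1))

num_sensors = 10
Z_init = rng.uniform(0.0, 10.0, size=(num_sensors, 2))    # initial sensor sites

# Sparse GP regression: the inducing inputs Z play the role of sensor locations.
model = gpflow.models.SGPR(
    data=(X, y),
    kernel=gpflow.kernels.SquaredExponential(),
    inducing_variable=Z_init,
)

# Gradient-based optimization of the variational bound w.r.t. both the kernel
# hyperparameters and the inducing inputs (the candidate sensor locations).
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)

sensor_locations = model.inducing_variable.Z.numpy()       # optimized placements
print(sensor_locations)
```

Once optimized, the inducing inputs serve as the proposed sensor locations; the abstract also notes a generalization to discrete environments, which this sketch does not cover.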
Related papers
- Adaptive Consensus Gradients Aggregation for Scaled Distributed Training [6.234802839923543]
We analyze the distributed gradient aggregation process through the lens of subspace optimization.
Our method demonstrates improved performance over the ubiquitous averaging on multiple tasks while remaining extremely efficient in both communicational and computational complexity.
arXiv Detail & Related papers (2024-11-06T08:16:39Z)
- A physics-driven sensor placement optimization methodology for temperature field reconstruction [9.976807723785006]
We propose a novel physics-driven sensor placement optimization (PSPO) method for temperature field reconstruction.
The PSPO method significantly outperforms random and uniform selection methods, improving the reconstruction accuracy by nearly an order of magnitude.
arXiv Detail & Related papers (2024-09-27T03:26:38Z)
- Application of deep and reinforcement learning to boundary control problems [0.6906005491572401]
The aim is to find the optimal values for the domain boundaries such that the enclosed domain attains the desired state values.
This project explores the use of deep learning and reinforcement learning to solve boundary control problems.
arXiv Detail & Related papers (2023-10-21T10:56:32Z)
- Memetic algorithms for Spatial Partitioning problems [26.73720392872553]
In this article, we focus on a specific type of spatial optimization problem (SOP), spatial partitioning, on real-world datasets.
We put forward a simple yet effective algorithm called swarm-based spatial memetic algorithm (SPATIAL) and test it on the school (re)districting problem.
arXiv Detail & Related papers (2022-08-04T20:05:46Z)
- Information Entropy Initialized Concrete Autoencoder for Optimal Sensor Placement and Reconstruction of Geophysical Fields [58.720142291102135]
We propose a new approach to the optimal placement of sensors for reconstructing geophysical fields from sparse measurements.
We demonstrate our method on two examples: (a) temperature and (b) salinity fields around the Barents Sea and the Svalbard group of islands.
We find that the obtained optimal sensor locations have a clear physical interpretation and correspond to the boundaries between sea currents.
arXiv Detail & Related papers (2022-06-28T12:43:38Z)
- Super-resolution GANs of randomly-seeded fields [68.8204255655161]
We propose a novel super-resolution generative adversarial network (GAN) framework to estimate field quantities from random sparse sensors.
The algorithm exploits random sampling to provide incomplete views of the high-resolution underlying distributions.
The proposed technique is tested on synthetic databases of fluid flow simulations, ocean surface temperature measurements, and particle image velocimetry data.
arXiv Detail & Related papers (2022-02-23T18:57:53Z)
- An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
- Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms is not only simple and efficient but also applicable to a broad range of contexts.
arXiv Detail & Related papers (2021-07-23T13:43:34Z)
- Distributed Optimization, Averaging via ADMM, and Network Topology [0.0]
We study the connection between network topology and convergence rates for different algorithms on a real world problem of sensor localization.
We also show interesting connections between ADMM and lifted Markov chains, besides providing an explicit characterization of its convergence.
arXiv Detail & Related papers (2020-09-05T21:44:39Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Second-Order Guarantees in Centralized, Federated and Decentralized Nonconvex Optimization [64.26238893241322]
Simple algorithms have been shown to lead to good empirical results in many contexts.
Several works have pursued rigorous analytical justification for studying nonconvex optimization problems.
A key insight in these analyses is that perturbations play a critical role in allowing local descent algorithms to escape saddle points.
arXiv Detail & Related papers (2020-03-31T16:54:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.