Global field reconstruction from sparse sensors with Voronoi
tessellation-assisted deep learning
- URL: http://arxiv.org/abs/2101.00554v1
- Date: Sun, 3 Jan 2021 03:43:53 GMT
- Title: Global field reconstruction from sparse sensors with Voronoi
tessellation-assisted deep learning
- Authors: Kai Fukami, Romit Maulik, Nesar Ramachandra, Koji Fukagata, and
Kunihiko Taira
- Abstract summary: We propose a data-driven spatial field recovery technique based on a structured grid-based deep-learning approach for an arbitrary number of arbitrarily positioned sensors.
The presented technique opens a new pathway towards the practical use of neural networks for real-time global field estimation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Achieving accurate and robust global situational awareness of a complex
time-evolving field from a limited number of sensors has been a longstanding
challenge. This reconstruction problem is especially difficult when sensors are
sparsely positioned in a seemingly random or unorganized manner, which is often
encountered in a range of scientific and engineering problems. Moreover, these
sensors can be in motion and can come online or go offline over time. The key
leverage in addressing this scientific issue is the wealth of data accumulated
from the sensors. As a solution to this problem, we propose a data-driven
spatial field recovery technique founded on a structured grid-based
deep-learning approach for an arbitrary number of arbitrarily positioned sensors. It
should be noted that the naïve use of machine learning becomes prohibitively
expensive for global field reconstruction and is furthermore not adaptable to
an arbitrary number of sensors. In the present work, we consider the use of
Voronoi tessellation to obtain a structured-grid representation from sensor
locations enabling the computationally tractable use of convolutional neural
networks. One of the central features of the present method is its
compatibility with deep-learning based super-resolution reconstruction
techniques for structured sensor data that are established for image
processing. The proposed reconstruction technique is demonstrated for unsteady
wake flow, geophysical data, and three-dimensional turbulence. The current
framework is able to handle an arbitrary number of moving sensors, and thereby
overcomes a major limitation with existing reconstruction methods. The
presented technique opens a new pathway towards the practical use of neural
networks for real-time global field estimation.
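As a quick illustration of the mechanism described in the abstract (filling every grid cell with the value of its nearest sensor, i.e. a Voronoi tessellation, alongside a sensor-location mask, then feeding the resulting image to a convolutional network), a minimal sketch follows. The unit-square domain, grid size, and toy two-channel CNN are assumptions made for illustration, not the authors' exact architecture or training setup.

```python
# Minimal sketch: Voronoi-tessellation input construction plus a toy CNN.
# Grid size, domain, and the network below are illustrative assumptions.
import numpy as np
from scipy.spatial import cKDTree
import torch
import torch.nn as nn

def voronoi_image(sensor_xy, sensor_vals, grid_shape):
    """Nearest-sensor (Voronoi) fill: each grid cell takes the value of its
    closest sensor, yielding a structured-grid image a CNN can ingest."""
    ny, nx = grid_shape
    xs, ys = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
    grid_pts = np.column_stack([xs.ravel(), ys.ravel()])
    _, nearest = cKDTree(sensor_xy).query(grid_pts)   # Voronoi cell assignment
    field = sensor_vals[nearest].reshape(ny, nx)
    mask = np.zeros(grid_shape)                       # marks sensor locations
    ix = np.round(sensor_xy[:, 0] * (nx - 1)).astype(int)
    iy = np.round(sensor_xy[:, 1] * (ny - 1)).astype(int)
    mask[iy, ix] = 1.0
    return np.stack([field, mask])                    # 2-channel CNN input

class ReconCNN(nn.Module):
    """Toy fully convolutional reconstructor (placeholder, not the paper's net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 7, padding=3), nn.ReLU(),
            nn.Conv2d(32, 32, 7, padding=3), nn.ReLU(),
            nn.Conv2d(32, 1, 7, padding=3),
        )

    def forward(self, x):
        return self.net(x)

# Eight randomly placed sensors on a 64x64 grid; the sensor count and
# positions may change between samples without altering the network.
rng = np.random.default_rng(0)
xy = rng.random((8, 2))
vals = np.sin(2 * np.pi * xy[:, 0]) * np.cos(2 * np.pi * xy[:, 1])
inp = torch.from_numpy(voronoi_image(xy, vals, (64, 64))).float()[None]
recon = ReconCNN()(inp)  # in practice, trained against the true field
```

Because the tessellated input always has the same grid shape, the same network can handle a varying number of sensors, including sensors that move between snapshots.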
Related papers
- DynST: Dynamic Sparse Training for Resource-Constrained Spatio-Temporal
Forecasting [24.00162014044092]
Earth science systems rely heavily on the extensive deployment of sensors.
Traditional approaches to sensor deployment utilize specific algorithms to design and deploy sensors.
In this paper, we introduce the concept of dynamic sparse training for the first time, using it to adaptively and dynamically filter the most important sensor data.
arXiv Detail & Related papers (2024-03-05T12:31:24Z)
- Reconstruction of Fields from Sparse Sensing: Differentiable Sensor
Placement Enhances Generalization [0.0]
We introduce a general approach that employs differentiable programming to learn sensor placement within the training of a neural network model (see the sketch after this list).
Our differentiable placement strategy has the potential to significantly increase data collection efficiency, enable more thorough area coverage, and reduce redundancy in sensor deployment.
arXiv Detail & Related papers (2023-12-14T17:44:09Z)
- Leveraging arbitrary mobile sensor trajectories with shallow recurrent
decoder networks for full-state reconstruction [4.243926243206826]
We show that a sequence-to-vector model, such as an LSTM (long short-term memory) network, combined with a decoder network, can map dynamic sensor information to full state-space estimates (a minimal sketch follows this list).
The exceptional performance of the network architecture is demonstrated on three applications.
arXiv Detail & Related papers (2023-07-20T21:42:01Z)
- Inducing Gaussian Process Networks [80.40892394020797]
We propose inducing Gaussian process networks (IGN), a simple framework for simultaneously learning the feature space as well as the inducing points.
The inducing points, in particular, are learned directly in the feature space, enabling a seamless representation of complex structured domains.
We report on experimental results for real-world data sets showing that IGNs provide significant advances over state-of-the-art methods.
arXiv Detail & Related papers (2022-04-21T05:27:09Z)
- Learning Online Multi-Sensor Depth Fusion [100.84519175539378]
SenFuNet is a depth fusion approach that learns sensor-specific noise and outlier statistics.
We conduct experiments with various sensor combinations on the real-world CoRBS and Scene3D datasets.
arXiv Detail & Related papers (2022-04-07T10:45:32Z)
- One-Bit Compressive Sensing: Can We Go Deep and Blind? [15.231885712212083]
One-bit compressive sensing is concerned with the accurate recovery of an underlying sparse signal of interest from one-bit noisy measurements.
We present a novel data-driven and model-based methodology that achieves blind recovery.
arXiv Detail & Related papers (2022-03-13T16:06:56Z)
- Learning-based Localizability Estimation for Robust LiDAR Localization [13.298113481670038]
LiDAR-based localization and mapping is one of the core components in many modern robotic systems.
This work proposes a neural network-based estimation approach for detecting (non-)localizability during robot operation.
arXiv Detail & Related papers (2022-03-11T01:12:00Z)
- A parameter refinement method for Ptychography based on Deep Learning
concepts [55.41644538483948]
Coarse parametrisation of propagation distance, position errors, and partial coherence frequently threatens experiment viability.
A modern deep learning framework is used to autonomously correct these setup inconsistencies, thus improving the quality of the ptychographic reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Adaptive Latent Space Tuning for Non-Stationary Distributions [62.997667081978825]
We present a method for adaptive tuning of the low-dimensional latent space of deep encoder-decoder style CNNs.
We demonstrate our approach for predicting the properties of a time-varying charged particle beam in a particle accelerator.
arXiv Detail & Related papers (2021-05-08T03:50:45Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline to identify whether a recalibration of the camera's intrinsic parameters is required or not.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
- Deep Soft Procrustes for Markerless Volumetric Sensor Alignment [81.13055566952221]
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)
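For the differentiable sensor placement entry above, a minimal sketch of the general idea: sensor coordinates become trainable parameters that are read out of the field with a differentiable interpolator, so the reconstruction loss also shapes where the sensors sit. The bilinear sampling, MLP reconstructor, and all sizes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of differentiable sensor placement: positions are learned
# jointly with a reconstructor. All details here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PlacedSensors(nn.Module):
    def __init__(self, n_sensors=8):
        super().__init__()
        # Sensor positions in [-1, 1]^2, trained alongside the model.
        self.pos = nn.Parameter(torch.rand(n_sensors, 2) * 2 - 1)

    def forward(self, field):                 # field: (batch, 1, H, W)
        grid = self.pos.view(1, -1, 1, 2).expand(field.shape[0], -1, 1, 2)
        # Bilinear interpolation keeps the read-out differentiable w.r.t. pos.
        return F.grid_sample(field, grid, align_corners=True).flatten(1)

sensors = PlacedSensors()
recon = nn.Sequential(nn.Linear(8, 128), nn.ReLU(), nn.Linear(128, 64 * 64))
opt = torch.optim.Adam(list(sensors.parameters()) + list(recon.parameters()))

field = torch.randn(4, 1, 64, 64)             # stand-in training snapshots
opt.zero_grad()
loss = F.mse_loss(recon(sensors(field)), field.flatten(1))
loss.backward()                               # gradients reach sensor positions
opt.step()
```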
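And for the shallow-recurrent-decoder entry, a minimal sketch of the sequence-to-vector idea: an LSTM consumes a time history of sparse, possibly mobile sensor readings, and a shallow decoder expands its final hidden state into a full-field estimate. Layer sizes and the two-layer decoder are assumptions, not the authors' exact model.

```python
# Hedged sketch of a sequence-to-vector LSTM with a shallow decoder.
# Layer sizes, sensor count, and field size are illustrative assumptions.
import torch
import torch.nn as nn

class ShallowRecurrentDecoder(nn.Module):
    def __init__(self, n_sensors=3, hidden=64, n_state=64 * 64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.decoder = nn.Sequential(         # the "shallow" decoder network
            nn.Linear(hidden, 256), nn.ReLU(),
            nn.Linear(256, n_state),
        )

    def forward(self, seq):                   # seq: (batch, time, n_sensors)
        _, (h, _) = self.lstm(seq)            # keep only the final hidden state
        return self.decoder(h[-1])            # (batch, n_state) field estimate

# A 50-step history from 3 mobile sensors -> one 64x64 field per sample.
est = ShallowRecurrentDecoder()(torch.randn(8, 50, 3)).reshape(8, 64, 64)
```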