Energy networks for state estimation with random sensors using sparse labels
- URL: http://arxiv.org/abs/2203.06456v1
- Date: Sat, 12 Mar 2022 15:15:38 GMT
- Title: Energy networks for state estimation with random sensors using sparse labels
- Authors: Yash Kumar and Souvik Chakraborty
- Abstract summary: We propose a technique with an implicit optimization layer and a physics-based loss function that can learn from sparse labels.
Based on this technique, we present two models for discrete and continuous prediction in space.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State estimation is required whenever we deal with high-dimensional dynamical
systems, as the complete measurement is often unavailable. It is key to gaining
insight, performing control or optimizing design tasks. Most deep
learning-based approaches require high-resolution labels and work with fixed
sensor locations, thus being restrictive in their scope. Also, performing proper
orthogonal decomposition (POD) on sparse data is nontrivial. To tackle these
problems, we propose a technique with an implicit optimization layer and a
physics-based loss function that can learn from sparse labels. It works by
minimizing the energy of the neural network prediction, enabling it to work
with a varying number of sensors at different locations. Based on this
technique we present two models for discrete and continuous prediction in
space. We demonstrate the performance using two high-dimensional fluid
problems, Burgers' equation and flow past a cylinder, for the discrete model,
and the Allen-Cahn and convection-diffusion equations for the continuous model.
We also show that the models are robust to noise in the measurements.
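The paper's models wrap a neural network around an implicit optimization layer; as a rough illustration of the underlying idea of recovering a full state by minimizing an energy subject to sparse, randomly placed sensor measurements, here is a minimal NumPy sketch. The smoothness energy, sensor count, and gradient-descent settings are our own illustrative choices, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a 1-D field observed by a few randomly placed sensors.
n = 64
x = np.linspace(0.0, 1.0, n)
u_true = np.sin(np.pi * x)
# Pin both ends and place 6 sensors at random interior points.
sensors = np.concatenate(
    ([0, n - 1], rng.choice(np.arange(1, n - 1), size=6, replace=False))
)
y = u_true[sensors]  # sparse measurements

# Energy: 0.5 * sum_i (u[i+1] - u[i])^2   (smoothness)
#       + lam * sum_s (u[s] - y_s)^2      (data fidelity at sensors only)
lam = 10.0

def energy_grad(u):
    g = np.zeros_like(u)
    g[1:-1] = 2.0 * u[1:-1] - u[:-2] - u[2:]    # interior smoothness gradient
    g[0] = u[0] - u[1]
    g[-1] = u[-1] - u[-2]
    g[sensors] += 2.0 * lam * (u[sensors] - y)  # pull toward measurements
    return g

u = np.zeros(n)
for _ in range(20000):  # plain gradient descent on the energy
    u -= 0.05 * energy_grad(u)
```

Because the loss only touches the sensor indices, the same objective works for any number of sensors at any locations, which is the property the abstract emphasizes.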
Related papers
- Tilt your Head: Activating the Hidden Spatial-Invariance of Classifiers [0.7704032792820767]
Deep neural networks are applied in more and more areas of everyday life.
They still lack essential abilities, such as robustly dealing with spatially transformed input signals.
We propose a novel technique to emulate such an inference process for neural nets.
arXiv Detail & Related papers (2024-05-06T09:47:29Z)
- Foundational Inference Models for Dynamical Systems [5.549794481031468]
We offer a fresh perspective on the classical problem of imputing missing time series data, whose underlying dynamics are assumed to be determined by ODEs.
We propose a novel supervised learning framework for zero-shot time series imputation, through parametric functions satisfying some (hidden) ODEs.
We empirically demonstrate that one and the same (pretrained) recognition model can perform zero-shot imputation across 63 distinct time series with missing values.
arXiv Detail & Related papers (2024-02-12T11:48:54Z)
- Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z)
- Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems [17.373668215331737]
We propose a new approach, deep graph stream support vector data description (SVDD), for anomaly detection.
We first use a transformer to capture both short- and long-term temporal patterns of the monitoring data in temporal embeddings.
We cluster these embeddings according to sensor type and utilize them to estimate the change in connectivity between various sensors to construct a new weighted graph.
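As a small, hedged illustration of turning per-sensor embeddings into a weighted graph, the sketch below uses pairwise cosine similarity between hypothetical embedding vectors as edge weights; this is an illustrative stand-in, not the paper's actual connectivity-change estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical temporal embeddings: one 8-dim vector per sensor (5 sensors).
emb = rng.normal(size=(5, 8))

# Edge weights from pairwise cosine similarity between the embeddings --
# an illustrative stand-in for the paper's connectivity estimate.
unit = emb / np.linalg.norm(emb, axis=1, keepdims=True)
adj = unit @ unit.T
np.fill_diagonal(adj, 0.0)  # weighted sensor graph, no self-loops
```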
arXiv Detail & Related papers (2023-02-24T22:14:39Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Inference from Real-World Sparse Measurements [21.194357028394226]
Real-world problems often involve complex and unstructured sets of measurements, which occur when sensors are sparsely placed in either space or time.
Designing deep learning architectures capable of processing sets of measurements with positions varying from set to set, and of extracting readouts anywhere, is methodologically difficult.
We propose an attention-based model focused on applicability and practical robustness, with two key design contributions.
arXiv Detail & Related papers (2022-10-20T13:42:20Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Neural Flows: Efficient Alternative to Neural ODEs [8.01886971335823]
We propose an alternative by directly modeling the solution curves - the flow of an ODE - with a neural network.
This immediately eliminates the need for expensive numerical solvers while still maintaining the modeling capability of neural ODEs.
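The key construction, modeling the solution curve itself so the initial condition holds by design, can be sketched in a few lines. The tiny MLP and the residual parametrization F(t, x0) = x0 + t * net(t, x0) below are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny random MLP standing in for the learned part of a neural flow.
W1, b1 = rng.normal(size=(16, 2)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def net(t, x0):
    h = np.tanh(W1 @ np.array([t, x0]) + b1)
    return float(W2 @ h + b2)

def flow(t, x0):
    # F(t, x0) = x0 + t * net(t, x0) satisfies F(0, x0) = x0 for ANY weights,
    # so the initial condition holds by construction, and the state at any
    # time t is a single forward pass -- no numerical ODE solver involved.
    return x0 + t * net(t, x0)
```

Evaluating `flow(t, x0)` at an arbitrary `t` costs one forward pass, which is where the speedup over solver-based neural ODEs comes from.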
arXiv Detail & Related papers (2021-10-25T15:24:45Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.