Efficient deep data assimilation with sparse observations and
time-varying sensors
- URL: http://arxiv.org/abs/2310.16187v1
- Date: Tue, 24 Oct 2023 21:13:59 GMT
- Title: Efficient deep data assimilation with sparse observations and
time-varying sensors
- Authors: Sibo Cheng, Che Liu, Yike Guo, Rossella Arcucci
- Abstract summary: We introduce a novel variational DA scheme, named Voronoi-tessellation Inverse operator for VariatIonal Data assimilation (VIVID).
VIVID is adept at handling sparse, unstructured, and time-varying sensor data.
- Score: 17.249916158780884
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Variational Data Assimilation (DA) has been broadly used in engineering
problems for field reconstruction and prediction by performing a weighted
combination of multiple sources of noisy data. In recent years, the integration
of deep learning (DL) techniques in DA has shown promise in improving the
efficiency and accuracy in high-dimensional dynamical systems. Nevertheless,
existing deep DA approaches face difficulties in dealing with unstructured
observation data, especially when the placement and number of sensors are
dynamic over time. We introduce a novel variational DA scheme, named
Voronoi-tessellation Inverse operator for VariatIonal Data assimilation
(VIVID), that incorporates a DL inverse operator into the assimilation
objective function. By leveraging the capabilities of the Voronoi-tessellation
and convolutional neural networks, VIVID is adept at handling sparse,
unstructured, and time-varying sensor data. Furthermore, the incorporation of
the DL inverse operator establishes a direct link between observation and state
space, leading to a reduction in the number of minimization steps required for
DA. Additionally, VIVID can be seamlessly integrated with Proper Orthogonal
Decomposition (POD) to develop an end-to-end reduced-order DA scheme, which can
further expedite field reconstruction. Numerical experiments in a fluid
dynamics system demonstrate that VIVID can significantly outperform existing DA
and DL algorithms. The robustness of VIVID is also assessed through the
application of various levels of prior error, the utilization of varying
numbers of sensors, and the misspecification of error covariance in DA.
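The augmented objective described in the abstract can be sketched in a toy setting. This is a minimal illustration under stated assumptions, not the authors' implementation: a standard 3D-Var cost with background and observation terms, plus an extra penalty tying the state to the output of a learned inverse operator. Here a pseudo-inverse stands in for VIVID's Voronoi-tessellation CNN, and `f_inv`, `lam`, and all dimensions are hypothetical.

```python
# Toy sketch of a 3D-Var cost augmented with a learned inverse-operator term.
# NOT the VIVID implementation: the "inverse operator" is a pseudo-inverse
# stand-in for the paper's Voronoi-tessellation + CNN network.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m = 8, 3                      # state dim, observation dim (toy sizes)
H = rng.standard_normal((m, n))  # observation operator
x_true = rng.standard_normal(n)
y = H @ x_true                   # observations (noiseless for the demo)
x_b = x_true + 0.1 * rng.standard_normal(n)   # background (prior) state
B_inv = np.eye(n)                # inverse background-error covariance
R_inv = np.eye(m)                # inverse observation-error covariance
lam = 1.0                        # weight of the inverse-operator term (assumed)

def f_inv(obs):
    """Hypothetical learned inverse operator: maps observations directly to
    a state estimate (pseudo-inverse in place of a trained network)."""
    return np.linalg.pinv(H) @ obs

def cost(x):
    db = x - x_b                 # background misfit
    do = y - H @ x               # observation misfit
    di = x - f_inv(y)            # extra term linking state and DL inversion
    return db @ B_inv @ db + do @ R_inv @ do + lam * (di @ di)

res = minimize(cost, x_b, method="L-BFGS-B")
x_a = res.x                      # analysis state
```

Because the inverse-operator term supplies a direct target in state space, the optimizer starts with a well-informed descent direction, which is consistent with the abstract's claim that fewer minimization steps are needed.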
Related papers
- Combined Optimization of Dynamics and Assimilation with End-to-End Learning on Sparse Observations [1.492574139257933]
CODA is an end-to-end optimization scheme for jointly learning dynamics and DA directly from sparse and noisy observations.
We introduce a novel learning objective that combines unrolled auto-regressive dynamics with the data- and self-consistency terms of weak-constraint 4D-Var DA.
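The weak-constraint 4D-Var objective mentioned here can be sketched as follows. This is a toy linear example, not CODA's code: the dynamics matrix `A`, the model-error weight, and the window length are all assumptions for illustration.

```python
# Toy weak-constraint 4D-Var: optimize a whole state trajectory against an
# observation-misfit term plus a soft (weak) model-consistency term.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, n = 4, 2                                    # window length, state dim (toy)
A = np.array([[0.9, 0.1], [0.0, 0.95]])        # assumed linear dynamics model

x_true = np.zeros((T, n))
x_true[0] = rng.standard_normal(n)
for t in range(1, T):
    x_true[t] = A @ x_true[t - 1]
y = x_true + 0.05 * rng.standard_normal((T, n))  # noisy direct observations

def wc4dvar_cost(flat):
    """Observation misfit + weakly enforced dynamics over the window."""
    x = flat.reshape(T, n)
    data = np.sum((y - x) ** 2)                      # data-consistency term
    model = sum(np.sum((x[t] - A @ x[t - 1]) ** 2)   # model-error term
                for t in range(1, T))
    return data + 10.0 * model                       # weight 10.0 is assumed

res = minimize(wc4dvar_cost, y.flatten(), method="L-BFGS-B")
x_a = res.x.reshape(T, n)                            # analysis trajectory
```

In CODA's setting the dynamics model itself is learned jointly with the assimilation, rather than fixed as here.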
arXiv Detail & Related papers (2024-09-11T09:36:15Z)
- Hierarchical Features Matter: A Deep Exploration of GAN Priors for Improved Dataset Distillation [51.44054828384487]
We propose a novel parameterization method dubbed Hierarchical Generative Latent Distillation (H-GLaD)
This method systematically explores the hierarchical layers within generative adversarial networks (GANs).
In addition, we introduce a novel class-relevant feature distance metric to alleviate the computational burden associated with synthetic dataset evaluation.
arXiv Detail & Related papers (2024-06-09T09:15:54Z)
- ADLDA: A Method to Reduce the Harm of Data Distribution Shift in Data Augmentation [11.887799310374174]
This study introduces a novel data augmentation technique, ADLDA, aimed at mitigating the negative impact of data distribution shifts.
Experimental results demonstrate that ADLDA significantly enhances model performance across multiple datasets.
arXiv Detail & Related papers (2024-05-11T03:20:35Z)
- Learning Better with Less: Effective Augmentation for Sample-Efficient Visual Reinforcement Learning [57.83232242068982]
Data augmentation (DA) is a crucial technique for enhancing the sample efficiency of visual reinforcement learning (RL) algorithms.
It remains unclear which attributes of DA account for its effectiveness in achieving sample-efficient visual RL.
This work conducts comprehensive experiments to assess the impact of DA's attributes on its efficacy.
arXiv Detail & Related papers (2023-05-25T15:46:20Z)
- Learning in latent spaces improves the predictive accuracy of deep neural operators [0.0]
L-DeepONet is an extension of standard DeepONet, which leverages latent representations of high-dimensional PDE input and output functions identified with suitable autoencoders.
We show that L-DeepONet outperforms the standard approach in terms of both accuracy and computational efficiency across diverse time-dependent PDEs.
arXiv Detail & Related papers (2023-04-15T17:13:09Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Generalised Latent Assimilation in Heterogeneous Reduced Spaces with Machine Learning Surrogate Models [10.410970649045943]
We develop a system which combines reduced-order surrogate models with a novel data assimilation technique.
Generalised Latent Assimilation can benefit both the efficiency provided by the reduced-order modelling and the accuracy of data assimilation.
arXiv Detail & Related papers (2022-04-07T15:13:12Z)
- DAE: Discriminatory Auto-Encoder for multivariate time-series anomaly detection in air transportation [68.8204255655161]
We propose a novel anomaly detection model called Discriminatory Auto-Encoder (DAE)
It uses the baseline of a regular LSTM-based auto-encoder but with several decoders, each receiving data from a specific flight phase.
Results show that the DAE achieves better results in both accuracy and speed of detection.
arXiv Detail & Related papers (2021-09-08T14:07:55Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Longitudinal Variational Autoencoder [1.4680035572775534]
A common approach to analyse high-dimensional data that contains missing values is to learn a low-dimensional representation using variational autoencoders (VAEs)
Standard VAEs assume that the learnt representations are i.i.d., and fail to capture the correlations between the data samples.
We propose the Longitudinal VAE (L-VAE), that uses a multi-output additive Gaussian process (GP) prior to extend the VAE's capability to learn structured low-dimensional representations.
Our approach can simultaneously accommodate both time-varying shared and random effects and produce structured low-dimensional representations.
arXiv Detail & Related papers (2020-06-17T10:30:14Z)
- SUOD: Accelerating Large-Scale Unsupervised Heterogeneous Outlier Detection [63.253850875265115]
Outlier detection (OD) is a key machine learning (ML) task for identifying abnormal objects from general samples.
We propose a modular acceleration system, called SUOD, to speed up large-scale heterogeneous OD.
arXiv Detail & Related papers (2020-03-11T00:22:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.