Accelerated Probabilistic Marching Cubes by Deep Learning for
Time-Varying Scalar Ensembles
- URL: http://arxiv.org/abs/2207.07260v1
- Date: Fri, 15 Jul 2022 02:35:41 GMT
- Title: Accelerated Probabilistic Marching Cubes by Deep Learning for
Time-Varying Scalar Ensembles
- Authors: Mengjiao Han, Tushar M. Athawale, David Pugmire, and Chris R. Johnson
- Abstract summary: This paper introduces a deep-learning-based approach to learning the level-set uncertainty for two-dimensional ensemble data.
We train the model using the first few time steps from time-varying ensemble data in our workflow.
We demonstrate that our trained model accurately infers uncertainty in level sets for new time steps and is up to 170X faster than the original probabilistic model.
- Score: 5.102811033640284
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visualizing the uncertainty of ensemble simulations is challenging due to the
large size of ensemble data sets and their multivariate and temporal features. One
popular approach to studying the uncertainty of ensembles is analyzing the
positional uncertainty of the level sets. Probabilistic marching cubes is a
technique that performs Monte Carlo sampling of multivariate Gaussian noise
distributions for positional uncertainty visualization of level sets. However,
the technique suffers from high computational time, making interactive
visualization and analysis impossible to achieve. This paper introduces a
deep-learning-based approach to learning the level-set uncertainty for
two-dimensional ensemble data with a multivariate Gaussian noise assumption. We
train the model using the first few time steps from time-varying ensemble data
in our workflow. We demonstrate that our trained model accurately infers
uncertainty in level sets for new time steps and is up to 170X faster than the
serial implementation of the original probabilistic model and 10X faster than
its parallel implementation.
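For orientation, here is a minimal NumPy sketch of the per-cell Monte Carlo step that probabilistic marching cubes performs and that the trained network replaces at inference time. The function name, array shapes, and sample count are illustrative assumptions rather than the paper's code.

```python
import numpy as np

def level_crossing_probability(ensemble, isovalue, n_samples=1000, seed=0):
    """Per-cell level-crossing probability for a 2D ensemble, estimated by
    Monte Carlo sampling of a multivariate Gaussian fit to each cell.

    ensemble: (members, H, W) array of scalar fields.
    Returns an (H-1, W-1) field of crossing probabilities.
    """
    rng = np.random.default_rng(seed)
    _, h, w = ensemble.shape
    prob = np.zeros((h - 1, w - 1))
    for i in range(h - 1):
        for j in range(w - 1):
            # Ensemble values at the cell's four corners: (members, 4).
            corners = np.stack(
                [ensemble[:, i, j], ensemble[:, i, j + 1],
                 ensemble[:, i + 1, j], ensemble[:, i + 1, j + 1]], axis=1)
            # Fit a 4-variate Gaussian to the corner values.
            mu = corners.mean(axis=0)
            cov = np.cov(corners, rowvar=False)
            # The level set crosses a sample iff the isovalue lies strictly
            # between the sampled corner extrema.
            s = rng.multivariate_normal(mu, cov, size=n_samples)
            crossing = (s.min(axis=1) < isovalue) & (s.max(axis=1) > isovalue)
            prob[i, j] = crossing.mean()
    return prob
```

In the paper's workflow, a model trained on the first few time steps infers this probability field directly for later time steps, skipping the per-cell sampling loop that dominates the runtime.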
Related papers
- Koopman Ensembles for Probabilistic Time Series Forecasting [6.699751896019971]
We show that ensembles of independently trained models are highly overconfident. Using a training criterion that explicitly encourages the members to produce predictions with high inter-model variance greatly improves the ensembles' uncertainty quantification.
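As a hedged illustration of such a criterion, the sketch below scores the target under a Gaussian formed from the member predictions, so collapsed (zero-variance) ensembles are penalized; the paper's actual criterion may differ.

```python
import math
import torch

def ensemble_gaussian_nll(member_preds, target, eps=1e-6):
    """Variance-aware ensemble criterion (illustrative assumption).

    member_preds: (M, batch, dim) predictions from M jointly trained members.
    Scoring the target under the ensemble's own Gaussian rewards members
    that spread out enough to cover it, discouraging overconfident collapse.
    """
    mean = member_preds.mean(dim=0)
    var = member_preds.var(dim=0, unbiased=False) + eps
    nll = 0.5 * (math.log(2 * math.pi) + torch.log(var)
                 + (target - mean) ** 2 / var)
    return nll.mean()
```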
arXiv Detail & Related papers (2024-03-11T14:29:56Z)
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning exhibits promising potential in alleviating the data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the data distribution of the samples from each class in the feature space.
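The summary does not say which distributional family ProCo uses, so the sketch below assumes a diagonal Gaussian per class purely to make the idea of estimating per-class feature distributions concrete.

```python
import numpy as np

def class_feature_gaussians(features, labels, num_classes, eps=1e-6):
    """Per-class feature-space distribution estimates (diagonal Gaussian
    assumed for illustration). features: (N, d) embeddings; labels: (N,)
    integer class ids. Minority classes with few samples yield noisy
    estimates, which is the imbalanced regime the paper targets."""
    stats = {}
    for c in range(num_classes):
        fc = features[labels == c]
        stats[c] = (fc.mean(axis=0), fc.var(axis=0) + eps)
    return stats
```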
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
- Better Batch for Deep Probabilistic Time Series Forecasting [15.31488551912888]
We propose an innovative training method that incorporates error autocorrelation to enhance probabilistic forecasting accuracy.
Our method constructs a mini-batch as a collection of $D$ consecutive time series segments for model training.
It explicitly learns a time-varying covariance matrix over each mini-batch, encoding error correlation among adjacent time steps.
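A minimal PyTorch sketch of the covariance idea: score a window of D consecutive forecast errors under a joint Gaussian whose Cholesky factor is learned, so autocorrelation among adjacent steps enters the loss. The paper learns a time-varying covariance per mini-batch; a single factor is assumed here for brevity.

```python
import torch

def autocorrelated_error_nll(errors, scale_tril):
    """Negative log-likelihood of residuals under a joint Gaussian.

    errors:     (batch, D) residuals over D consecutive time steps.
    scale_tril: (D, D) lower-triangular Cholesky factor of the learned
                covariance (illustrative stand-in for the paper's
                time-varying matrix).
    """
    d = errors.shape[-1]
    dist = torch.distributions.MultivariateNormal(
        loc=torch.zeros(d), scale_tril=scale_tril)
    return -dist.log_prob(errors).mean()
```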
arXiv Detail & Related papers (2023-05-26T15:36:59Z)
- Predicting the State of Synchronization of Financial Time Series using Cross Recurrence Plots [75.20174445166997]
This study introduces a new method for predicting the future state of synchronization of the dynamics of two financial time series.
We adopt a deep learning framework for methodologically addressing the prediction of the synchronization state.
We find that predicting the state of synchronization of two time series is in general rather difficult, but for certain pairs of stocks it is attainable with very satisfactory performance.
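Cross recurrence plots themselves are standard; a minimal sketch of their construction follows, with the delay-embedding parameters and threshold left as assumptions. The resulting binary matrices are what a deep classifier of synchronization states would consume.

```python
import numpy as np

def cross_recurrence_plot(x, y, eps):
    """Binary cross recurrence plot of two series.

    x: (N, m) and y: (M, m) delay-embedded states; CR[i, j] = 1 when
    states x_i and y_j lie within distance eps of each other.
    """
    dists = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    return (dists <= eps).astype(np.uint8)
```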
arXiv Detail & Related papers (2022-10-26T10:22:28Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- TACTiS: Transformer-Attentional Copulas for Time Series [76.71406465526454]
The estimation of time-varying quantities is a fundamental component of decision making in fields such as healthcare and finance.
We propose a versatile method that estimates joint distributions using an attention-based decoder.
We show that our model produces state-of-the-art predictions on several real-world datasets.
arXiv Detail & Related papers (2022-02-07T21:37:29Z)
- Contrastive learning of strong-mixing continuous-time stochastic processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
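A hedged sketch of how such a contrastive task can be built from one trajectory: positives are true (x_t, x_{t+lag}) pairs, negatives re-pair x_t with a state drawn from the marginal, and the optimal classifier's logit approximates the log density ratio between the transition kernel and the marginal.

```python
import numpy as np

def contrastive_transition_pairs(traj, lag, seed=0):
    """Build labeled pairs for the contrastive task (construction assumed
    from the summary). traj: (T, d) sampled trajectory of the process."""
    rng = np.random.default_rng(seed)
    a, b = traj[:-lag], traj[lag:]                       # true transitions
    neg = traj[rng.integers(0, len(traj), size=len(a))]  # marginal draws
    pairs = np.concatenate([np.stack([a, b], axis=1),
                            np.stack([a, neg], axis=1)])
    labels = np.concatenate([np.ones(len(a)), np.zeros(len(a))])
    return pairs, labels
```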
arXiv Detail & Related papers (2021-03-03T23:06:47Z)
- Temporal Latent Auto-Encoder: A Method for Probabilistic Multivariate Time Series Forecasting [4.131842516813833]
We introduce a novel temporal latent auto-encoder method which enables nonlinear factorization of time series.
By imposing a probabilistic latent space model, complex distributions of the input series are modeled via the decoder.
Our model achieves state-of-the-art performance on many popular multivariate datasets, with gains sometimes as high as 50% for several standard metrics.
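A heavily simplified PyTorch sketch of the architecture's shape, with all module choices and sizes as illustrative assumptions: encode the multivariate series into a low-dimensional latent, model temporal dynamics there, and decode per-step distribution parameters.

```python
import torch
import torch.nn as nn

class TinyTLAE(nn.Module):
    """Toy temporal latent auto-encoder (illustrative, not the paper's model)."""
    def __init__(self, n_series, latent_dim):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_series, latent_dim), nn.Tanh())
        self.temporal = nn.GRU(latent_dim, latent_dim, batch_first=True)
        self.dec = nn.Linear(latent_dim, 2 * n_series)  # Gaussian mean, log-var

    def forward(self, x):            # x: (batch, T, n_series)
        z = self.enc(x)              # nonlinear factorization into the latent
        z, _ = self.temporal(z)      # latent temporal dynamics
        mean, logvar = self.dec(z).chunk(2, dim=-1)
        return mean, logvar          # per-step Gaussian over the input series
```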
arXiv Detail & Related papers (2021-01-25T22:29:40Z)
- Effective Proximal Methods for Non-convex Non-smooth Regularized Learning [27.775096437736973]
We show that the independent sampling scheme tends to improve upon the performance of the commonly used uniform sampling scheme.
Our new analysis also derives a faster convergence speed for this sampling scheme than the best one available so far.
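A minimal sketch of one stochastic proximal-gradient step under an independent (non-uniform) sampling scheme, with names and the exact scheme assumed from the summary: each loss term enters the mini-batch with its own probability and is reweighted so the gradient estimate stays unbiased.

```python
import numpy as np

def prox_l1(v, lam):
    """Soft-thresholding: the proximal operator of lam * ||w||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def independent_sampling_prox_step(w, grad_fn, probs, lam, lr, rng):
    """One proximal step for an l1-regularized objective. grad_fn(i)
    returns the gradient of loss term i at w; term i is sampled with
    probability probs[i] and reweighted by 1/probs[i] for unbiasedness."""
    included = np.nonzero(rng.random(len(probs)) < probs)[0]
    g = np.zeros_like(w)
    for i in included:
        g += grad_fn(i) / probs[i]
    g /= len(probs)
    return prox_l1(w - lr * g, lr * lam)
```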
arXiv Detail & Related papers (2020-09-14T16:41:32Z)
- Predicting Temporal Sets with Deep Neural Networks [50.53727580527024]
We propose an integrated solution based on the deep neural networks for temporal sets prediction.
A unique perspective is to learn element relationships by constructing a set-level co-occurrence graph (sketched below).
We design an attention-based module to adaptively learn the temporal dependency of elements and sets.
arXiv Detail & Related papers (2020-06-20T03:29:02Z)
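A minimal sketch of the set-level co-occurrence graph from the entry above; the input format (sequences of sets of integer element ids) is an assumption.

```python
import numpy as np
from itertools import combinations

def set_cooccurrence_graph(set_sequences, n_elements):
    """Weighted adjacency in which each pair of elements appearing in the
    same set gets its edge incremented; a downstream attention module
    would consume this graph."""
    adj = np.zeros((n_elements, n_elements))
    for sequence in set_sequences:
        for s in sequence:
            for a, b in combinations(sorted(s), 2):
                adj[a, b] += 1.0
                adj[b, a] += 1.0
    return adj
```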