BlackBox: Generalizable Reconstruction of Extremal Values from
Incomplete Spatio-Temporal Data
- URL: http://arxiv.org/abs/2005.02140v3
- Date: Thu, 8 Oct 2020 16:36:22 GMT
- Title: BlackBox: Generalizable Reconstruction of Extremal Values from
Incomplete Spatio-Temporal Data
- Authors: Tomislav Ivek, Domagoj Vlah
- Abstract summary: We present a framework to reconstruct missing data using convolutional deep neural networks.
In order to mitigate bias introduced by any one particular model, a prediction ensemble is constructed.
Our method does not rely on expert knowledge in order to accurately reproduce dynamic features of a complex oceanographic system.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We describe our submission to the Extreme Value Analysis 2019 Data Challenge
in which teams were asked to predict extremes of sea surface temperature
anomaly within spatio-temporal regions of missing data. We present a
computational framework which reconstructs missing data using convolutional
deep neural networks. Conditioned on incomplete data, we employ
autoencoder-like models as multivariate conditional distributions from which
possible reconstructions of the complete dataset are sampled using imputed
noise. In order to mitigate bias introduced by any one particular model, a
prediction ensemble is constructed to create the final distribution of extremal
values. Our method does not rely on expert knowledge in order to accurately
reproduce dynamic features of a complex oceanographic system with minimal
assumptions. The obtained results promise reusability and generalization to
other domains.
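The pipeline described in the abstract — sample many stochastic reconstructions of the missing region, pool an ensemble of imputers, then read off the distribution of extremal values — can be sketched in miniature. This is a toy illustration only: the paper uses trained convolutional autoencoder-like models as conditional distributions, which we replace here with a hypothetical interpolation-plus-noise imputer, and the "ensemble" is just three noise scales.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "field" with a missing region (a stand-in for the sea
# surface temperature anomaly grid used in the Data Challenge).
x = np.sin(np.linspace(0, 3 * np.pi, 100))
mask = np.ones_like(x, dtype=bool)
mask[40:60] = False          # spatio-temporal region of missing data
observed = np.where(mask, x, 0.0)

def sample_reconstruction(observed, mask, noise_scale, rng):
    """One stochastic reconstruction: fill the gap by linear
    interpolation of the observed values, then add imputed noise so
    repeated calls sample from a conditional distribution (the paper
    samples from an autoencoder-like model instead)."""
    idx = np.arange(observed.size)
    filled = np.interp(idx, idx[mask], observed[mask])
    noise = rng.normal(0.0, noise_scale, size=observed.size)
    return np.where(mask, observed, filled + noise)

# "Ensemble" of imputers, here differing only in noise scale; pooling
# their samples mitigates the bias of any one particular model.
ensemble = [0.05, 0.10, 0.15]
samples = np.stack([
    sample_reconstruction(observed, mask, s, rng)
    for s in ensemble for _ in range(200)
])

# Final distribution of the extremal value over the missing region.
extremes = samples[:, ~mask].max(axis=1)
print(round(float(np.median(extremes)), 3))
```

Quantiles of `extremes` would then serve as the predicted distribution of the extreme; observed entries pass through every sample unchanged, so only the gap carries uncertainty.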
Related papers
- Multimodal Atmospheric Super-Resolution With Deep Generative Models [0.0]
Score-based diffusion modeling is a generative machine learning algorithm that can be used to sample from complex distributions.
In this article, we apply such a concept to the super-resolution of a high-dimensional dynamical system, given the real-time availability of low-resolution and experimentally observed sparse sensor measurements.
arXiv Detail & Related papers (2025-06-28T06:47:09Z)
- A Dataset for Semantic Segmentation in the Presence of Unknowns [49.795683850385956]
Existing datasets allow evaluation of only knowns or unknowns - but not both.
We propose a novel anomaly segmentation dataset, ISSU, that features a diverse set of anomaly inputs from cluttered real-world environments.
The dataset is twice as large as existing anomaly segmentation datasets.
arXiv Detail & Related papers (2025-03-28T10:31:01Z)
- Rethinking Benign Overfitting in Two-Layer Neural Networks [2.486161976966064]
We refine the feature-noise data model by incorporating class-dependent heterogeneous noise and re-examine the overfitting phenomenon in neural networks.
Our findings reveal that neural networks can leverage "data noise", previously deemed harmful, to learn implicit features that improve the classification accuracy for long-tailed data.
arXiv Detail & Related papers (2025-02-17T15:20:04Z)
- SeisFusion: Constrained Diffusion Model with Input Guidance for 3D Seismic Data Interpolation and Reconstruction [26.02191880837226]
We propose a novel diffusion model reconstruction framework tailored for 3D seismic data.
We introduce a 3D neural network architecture into the diffusion model, successfully extending the 2D diffusion model to 3D space.
Our method exhibits superior reconstruction accuracy when applied to both field datasets and synthetic datasets.
arXiv Detail & Related papers (2024-03-18T05:10:13Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- ManiFlow: Implicitly Representing Manifolds with Normalizing Flows [145.9820993054072]
Normalizing Flows (NFs) are flexible explicit generative models that have been shown to accurately model complex real-world data distributions.
We propose an optimization objective that recovers the most likely point on the manifold given a sample from the perturbed distribution.
Finally, we focus on 3D point clouds for which we utilize the explicit nature of NFs, i.e. surface normals extracted from the gradient of the log-likelihood and the log-likelihood itself.
arXiv Detail & Related papers (2022-08-18T16:07:59Z)
- Reconstruction of Incomplete Wildfire Data using Deep Generative Models [0.0]
We present a variant of the powerful variational autoencoder models dubbed the Conditional Missing data Importance-Weighted Autoencoder (CMIWAE).
Our deep latent variable generative model requires little to no feature engineering and does not necessarily rely on the specifics of scoring in the Data Challenge.
arXiv Detail & Related papers (2022-01-16T23:27:31Z)
- Harmless interpolation in regression and classification with structured features [21.064512161584872]
Overparametrized neural networks tend to perfectly fit noisy training data yet generalize well on test data.
We present a general and flexible framework for upper bounding regression and classification risk in a reproducing kernel Hilbert space.
arXiv Detail & Related papers (2021-11-09T15:12:26Z)
- Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
- Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks: Theory, Methods, and Algorithms [2.266704469122763]
This paper proposes a new methodology for performing Bayesian inference in imaging inverse problems where the prior knowledge is available in the form of training data.
We establish the existence and well-posedness of the associated posterior moments under easily verifiable conditions.
A model accuracy analysis suggests that the Bayesian probabilities reported by the data-driven models are also remarkably accurate under a frequentist definition.
arXiv Detail & Related papers (2021-03-18T11:34:08Z)
- Learning from Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding [24.3769047873156]
This paper addresses the problem of training a classifier on a dataset with incomplete features.
We assume that different subsets of features (random or structured) are available at each data instance.
A new supervised learning method is developed to train a general classifier, using only a subset of features per sample.
arXiv Detail & Related papers (2020-11-28T02:20:39Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out of distribution data points at test time with a single forward pass.
We scale training of these models with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
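The single-forward-pass rejection idea in the last entry can be illustrated with a minimal sketch: score an input by its RBF kernel value to per-class centroids, and reject when the best score falls below a threshold. Everything here is a hypothetical stand-in — two Gaussian blobs replace the deterministic deep encoder's learned features, and the centroids are plain means rather than the paper's updated centroids.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian blobs as a stand-in "training set" of feature vectors.
class0 = rng.normal(loc=[-2.0, 0.0], scale=0.3, size=(100, 2))
class1 = rng.normal(loc=[+2.0, 0.0], scale=0.3, size=(100, 2))
centroids = np.stack([class0.mean(axis=0), class1.mean(axis=0)])

def rbf_scores(z, centroids, length_scale=1.0):
    """Per-class RBF kernel value exp(-||z - c||^2 / (2 l^2)).
    A high maximum score means z lies near some class centroid;
    a low maximum flags z as out of distribution."""
    d2 = ((z[None, :] - centroids) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def predict(z, centroids, threshold=0.5):
    """Classify or reject in a single pass over the centroids."""
    s = rbf_scores(z, centroids)
    if s.max() < threshold:
        return "reject"           # out-of-distribution input
    return int(s.argmax())        # predicted class index

print(predict(np.array([-2.0, 0.1]), centroids))  # near class 0
print(predict(np.array([0.0, 8.0]), centroids))   # far from both blobs
```

A point near either blob gets its class index; a point far from both scores near zero under every kernel and is rejected, all without sampling or multiple forward passes.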
This list is automatically generated from the titles and abstracts of the papers in this site.