Prediction in ungauged regions with sparse flow duration curves and
input-selection ensemble modeling
- URL: http://arxiv.org/abs/2011.13380v1
- Date: Thu, 26 Nov 2020 16:40:22 GMT
- Title: Prediction in ungauged regions with sparse flow duration curves and
input-selection ensemble modeling
- Authors: Dapeng Feng, Kathryn Lawson and Chaopeng Shen
- Abstract summary: We demonstrate that sparse flow duration curve (FDC) data can be migrated and assimilated by an LSTM-based network, via an encoder.
A stringent region-based holdout test showed a median Kling-Gupta efficiency (KGE) of 0.62 for a US dataset, substantially higher than previous state-of-the-art global-scale ungauged basin tests.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While long short-term memory (LSTM) models have demonstrated stellar
performance in streamflow prediction, there are major risks in applying these
models to contiguous regions with no gauges, i.e., prediction in ungauged
regions (PUR) problems. However, softer data such as the flow duration curve
(FDC) may already be available from nearby stations, or may become available.
Here we demonstrate that sparse FDC data can be migrated and assimilated by an
LSTM-based network, via an encoder. A stringent region-based holdout test
showed a median Kling-Gupta efficiency (KGE) of 0.62 for a US dataset,
substantially higher than previous state-of-the-art global-scale ungauged basin
tests. The baseline model without FDC was already competitive (median KGE
0.56), but integrating FDCs added substantial value. Because of inaccurate
input representations, the baseline models could sometimes produce
catastrophic results. However, model generalizability was further and
meaningfully improved by compiling an ensemble of models with different input
selections.
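The abstract's headline numbers are median Kling-Gupta efficiency (KGE) scores, and the assimilated signal is the flow duration curve (FDC). As a minimal illustration of both quantities (using the standard Gupta et al. 2009 KGE formulation; all variable names and the percentile choices are my own, not taken from the paper):

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta efficiency (Gupta et al., 2009):
    KGE = 1 - sqrt((r-1)^2 + (alpha-1)^2 + (beta-1)^2), where r is the
    linear correlation, alpha the ratio of standard deviations, and
    beta the ratio of means (simulated over observed)."""
    r = np.corrcoef(sim, obs)[0, 1]
    alpha = np.std(sim) / np.std(obs)
    beta = np.mean(sim) / np.mean(obs)
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

def sparse_fdc(flows, exceedance=(2, 5, 10, 25, 50, 75, 90, 95)):
    """A sparse flow duration curve: the flow values exceeded the given
    percentages of the time (a handful of exceedance probabilities,
    as opposed to the full sorted-flow curve)."""
    return np.percentile(flows, [100 - p for p in exceedance])

obs = np.array([1.0, 3.0, 2.5, 4.0, 0.5, 2.0])
sim = obs * 1.1  # a biased but perfectly correlated simulation
print(kge(sim, obs))
print(sparse_fdc(np.random.default_rng(0).gamma(2.0, 2.0, 1000)))
```

A perfect simulation gives KGE = 1; the 10% bias above costs exactly sqrt(0.02), since only the alpha and beta terms deviate from 1.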
Related papers
- More precise edge detections [0.0]
Edge detection (ED) is a fundamental task in computer vision.
Current models still suffer from unsatisfactory precision rates, and model
architectures for more precise predictions still need investigation.
arXiv Detail & Related papers (2024-07-29T13:24:55Z)
- Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs)
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not impose any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z)
- Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion [56.38386580040991]
Consistency Trajectory Model (CTM) is a generalization of Consistency Models (CM)
CTM enables the efficient combination of adversarial training and denoising score matching loss to enhance performance.
Unlike CM, CTM's access to the score function can streamline the adoption of established controllable/conditional generation methods.
arXiv Detail & Related papers (2023-10-01T05:07:17Z)
- Concurrent Misclassification and Out-of-Distribution Detection for Semantic Segmentation via Energy-Based Normalizing Flow [0.0]
Recent semantic segmentation models accurately classify test-time examples that are similar to a training dataset distribution.
We propose a generative model for concurrent in-distribution misclassification (IDM) and OOD detection that relies on a normalizing flow framework.
FlowEneDet achieves promising results on Cityscapes, Cityscapes-C, FishyScapes and SegmentMeIfYouCan benchmarks in IDM/OOD detection when applied to pretrained DeepLabV3+ and SegFormer semantic segmentation models.
arXiv Detail & Related papers (2023-05-16T17:02:57Z)
- Strategic Geosteering Workflow with Uncertainty Quantification and Deep Learning: A Case Study on the Goliat Field [0.0]
This paper presents a practical workflow consisting of offline and online phases.
The offline phase includes building and training an uncertain prior near-well geomodel.
The online phase uses the flexible iterative ensemble smoother (FlexIES) to perform real-time assimilation of extra-deep electromagnetic data.
arXiv Detail & Related papers (2022-10-27T15:38:26Z)
- Physics-Informed Graph Neural Network for Spatial-temporal Production Forecasting [0.0]
Production forecast based on historical data provides essential value for developing hydrocarbon resources.
We propose a grid-free, physics-informed graph neural network (PI-GNN) for production forecasting.
arXiv Detail & Related papers (2022-09-23T23:28:40Z)
- Probabilistic forecasting for geosteering in fluvial successions using a generative adversarial network [0.0]
Fast updates based on real-time data are essential when drilling in complex reservoirs with high uncertainties in pre-drill models.
We propose a generative adversarial network (GAN) trained to reproduce geologically consistent 2D sections of fluvial successions.
In our example, the method reduces uncertainty and correctly predicts most major geological features up to 500 meters ahead of the drill bit.
arXiv Detail & Related papers (2022-07-04T12:52:38Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to perform inference on inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
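The core idea in this entry is simple to state: at prediction time, normalize activations with the statistics of the current (shifted) test batch rather than the running statistics accumulated during training. A minimal numpy sketch of that distinction (illustrative only; real implementations live inside the normalization layers of frameworks such as PyTorch or TensorFlow, and the variable names here are mine):

```python
import numpy as np

def batchnorm(x, mean, var, gamma=1.0, beta=0.0, eps=1e-5):
    """Standard batch-norm transform with externally supplied statistics."""
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def prediction_time_bn(x_batch):
    """Prediction-time batch normalization: compute mean/variance from the
    test batch itself, so a covariate-shifted batch is re-centered."""
    return batchnorm(x_batch, x_batch.mean(axis=0), x_batch.var(axis=0))

# Running statistics accumulated during training (e.g., on clean data).
run_mean, run_var = 0.0, 1.0

rng = np.random.default_rng(1)
shifted = rng.normal(loc=5.0, scale=3.0, size=(256, 4))  # covariate shift

frozen = batchnorm(shifted, run_mean, run_var)  # conventional inference
adapted = prediction_time_bn(shifted)           # prediction-time BN

print(abs(frozen.mean()))   # far from 0: the stored stats are stale
print(abs(adapted.mean()))  # near 0: re-normalized per test batch
```

The downstream layers of a trained network expect roughly zero-mean, unit-variance inputs; the frozen statistics hand them a badly shifted distribution, while the per-batch statistics restore the expected scale.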
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
- A Generative Learning Approach for Spatio-temporal Modeling in Connected Vehicular Network [55.852401381113786]
This paper proposes LaMI (Latency Model Inpainting), a novel framework to generate a comprehensive spatio-temporal quality map of wireless access latency for connected vehicles.
LaMI adopts ideas from image inpainting and synthesis and can reconstruct missing latency samples with a two-step procedure.
In particular, it first discovers the spatial correlation between samples collected in various regions using a patching-based approach, and then feeds the original and highly correlated samples into a Variational Autoencoder (VAE).
arXiv Detail & Related papers (2020-03-16T03:43:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.