A Temporal Stochastic Bias Correction using a Machine Learning Attention model
- URL: http://arxiv.org/abs/2402.14169v5
- Date: Tue, 25 Jun 2024 17:03:22 GMT
- Title: A Temporal Stochastic Bias Correction using a Machine Learning Attention model
- Authors: Omer Nivron, Damon J. Wischik, Mathieu Vrac, Emily Shuckburgh, Alex T. Archibald
- Abstract summary: Bias correction (BC) methods struggle to adjust temporal biases because they mostly disregard the dependence between consecutive time points.
This makes it more difficult to produce reliable impact studies on climate statistics with long-range temporal properties.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Climate models are biased with respect to real-world observations, so they usually need to be adjusted before being used in impact studies. The suite of statistical methods that enables such adjustments is called bias correction (BC). However, BC methods currently struggle to adjust temporal biases, because they mostly disregard the dependence between consecutive time points. As a result, climate statistics with long-range temporal properties, such as heatwave duration and frequency, cannot be corrected accurately. This makes it more difficult to produce reliable impact studies on such climate statistics. This paper offers a novel BC methodology to correct temporal biases, made possible by rethinking the philosophy behind BC: we introduce BC as a time-indexed regression task with stochastic outputs. This rethinking enables us to adapt state-of-the-art machine learning (ML) attention models and thereby learn different types of biases, including temporal asynchronicities. With a case study of heatwave duration statistics in Abuja, Nigeria, and Tokyo, Japan, we show more accurate results than current climate model outputs and alternative BC methods.
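The abstract's framing of BC as a time-indexed regression with stochastic outputs can be illustrated with a minimal sketch. The code below is not the paper's model: it uses a simple kernel-attention readout over historical (model, observation) pairs, with toy data, a hypothetical `tau` bandwidth, and a deliberately warm-biased, lagged model series. It only shows the shape of the idea — attend from the current model value to similar past model values and emit a predictive distribution rather than a point value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy, hypothetical setup: pseudo-observations and a climate-model series
# that is warm-biased (+2) and temporally asynchronous (lagged by 3 steps).
t = np.linspace(0, 4 * np.pi, 200)
obs = 20 + 5 * np.sin(t) + rng.normal(0, 0.5, 200)
model = np.roll(obs, 3) + 2.0

def attention_correct(q, keys, values, tau=1.0):
    """Kernel attention as time-indexed stochastic regression: attend from
    the current model value q to historical model values (keys) and read
    out the paired observations (values). Returns a mean and a spread,
    i.e. a predictive distribution rather than a single corrected value."""
    scores = -((keys - q) ** 2) / tau        # similarity in model space
    w = np.exp(scores - scores.max())        # numerically stable softmax
    w /= w.sum()
    mean = w @ values
    std = np.sqrt(w @ (values - mean) ** 2)  # attention-weighted spread
    return mean, std

mean, std = attention_correct(model[150], model[:100], obs[:100])
corrected_sample = rng.normal(mean, std)     # one stochastic output
```

The stochastic output matters for statistics like heatwave duration: sampling from the predictive distribution, rather than emitting a smoothed mean, preserves the day-to-day variability those statistics depend on.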
Related papers
- A Deconfounding Approach to Climate Model Bias Correction [26.68810227550602]
Global Climate Models (GCMs) are crucial for predicting future climate changes by simulating Earth systems.
GCMs exhibit systematic biases due to model uncertainties, parameterization simplifications, and inadequate representation of complex climate phenomena.
This paper proposes a novel bias correction approach to utilize both GCM and observational data to learn a factor model that captures multi-cause latent confounders.
arXiv Detail & Related papers (2024-08-22T01:53:35Z)
- Attention-Based Ensemble Pooling for Time Series Forecasting [55.2480439325792]
We propose a method for pooling that performs a weighted average over candidate model forecasts.
We test this method on two time-series forecasting problems: multi-step forecasting of the dynamics of the non-stationary Lorenz 63 equation, and one-step forecasting of the weekly incident deaths due to COVID-19.
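The pooling idea in this entry — a weighted average over candidate model forecasts — can be sketched with a plain softmax over per-model scores. This is a hedged illustration only: in the paper the weights come from a learned attention mechanism, whereas here the scores are fixed placeholders.

```python
import numpy as np

def attention_pool(forecasts, scores):
    """forecasts: (n_models, horizon); scores: (n_models,).
    Returns the pooled forecast: a softmax-weighted average of the
    candidate forecasts."""
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w /= w.sum()
    return w @ forecasts

forecasts = np.array([[1.0, 2.0, 3.0],
                      [3.0, 2.0, 1.0]])
# Equal scores reduce pooling to the plain ensemble mean.
pooled = attention_pool(forecasts, np.array([0.0, 0.0]))
# A much larger score concentrates the weight on the first model.
skewed = attention_pool(forecasts, np.array([10.0, 0.0]))
```

With equal scores the result is the simple ensemble mean; as one score grows, the pooled forecast converges to that model's forecast, which is the sense in which the weighting is "attention-based".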
arXiv Detail & Related papers (2023-10-24T22:59:56Z)
- It's an Alignment, Not a Trade-off: Revisiting Bias and Variance in Deep Models [51.66015254740692]
We show that for an ensemble of deep learning based classification models, bias and variance are aligned at a sample level.
We study this phenomenon from two theoretical perspectives: calibration and neural collapse.
arXiv Detail & Related papers (2023-10-13T17:06:34Z)
- Towards Debiasing Frame Length Bias in Text-Video Retrieval via Causal Intervention [72.12974259966592]
We present a unique and systematic study of a temporal bias due to frame length discrepancy between training and test sets of trimmed video clips.
We propose a causal debiasing approach and perform extensive experiments and ablation studies on the Epic-Kitchens-100, YouCook2, and MSR-VTT datasets.
arXiv Detail & Related papers (2023-09-17T15:58:27Z)
- DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in prevalent adaptation methodologies such as test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
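The first defect — batch-only normalization statistics from a small, shifted test batch — is easy to demonstrate numerically. The sketch below is hypothetical and not DELTA's actual procedure; it only illustrates, in the spirit of the paper, why blending the current batch's statistics with the source (training-time) running statistics gives a less distorted estimate. The `alpha` weight is an illustrative placeholder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Source running statistics collected during training (assumed known).
source_mean = 0.0

# A tiny, distribution-shifted test batch: its batch mean is a poor
# estimate of the normalization statistic.
batch = rng.normal(0.0, 1.0, size=4) + 3.0

bn_mean = batch.mean()                  # test-time BN: batch-only estimate
alpha = 0.1                             # weight on the current batch (toy value)
blended_mean = (1 - alpha) * source_mean + alpha * bn_mean
```

The blended estimate stays far closer to the source statistic than the raw batch mean, which is the intuition behind replacing pure test-time BN with a source-anchored estimate.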
arXiv Detail & Related papers (2023-01-30T15:54:00Z)
- Realization of Causal Representation Learning to Adjust Confounding Bias in Latent Space [28.133104562449212]
Causal DAGs (Directed Acyclic Graphs) are usually considered in a 2D plane.
In this paper, we redefine the causal DAG as a do-DAG, in which variables' values are no longer time-stamp-dependent, and timelines can be seen as axes.
arXiv Detail & Related papers (2022-11-15T23:35:15Z)
- Extracting or Guessing? Improving Faithfulness of Event Temporal Relation Extraction [87.04153383938969]
We improve the faithfulness of TempRel extraction models from two perspectives.
The first perspective is to extract genuinely based on contextual description.
The second perspective is to provide proper uncertainty estimation.
arXiv Detail & Related papers (2022-10-10T19:53:13Z)
- Adaptive Conformal Predictions for Time Series [0.0]
We argue that Adaptive Conformal Inference (ACI) is a good procedure for time series with general dependency.
We propose a parameter-free method, AgACI, that adaptively builds upon ACI based on online expert aggregation.
We conduct a real case study: electricity price forecasting.
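The ACI procedure this entry builds on has a simple core update: after each step, nudge the working miscoverage level toward the target depending on whether the latest interval covered the outcome. The sketch below is a toy stand-in (a zero point forecast on i.i.d. noise, illustrative `gamma`), not the paper's AgACI aggregation, but the update rule is the standard ACI one.

```python
import numpy as np

rng = np.random.default_rng(2)

alpha_target, gamma = 0.1, 0.02   # target miscoverage, step size (toy)
alpha_t = alpha_target
scores = []                       # past conformity scores (absolute residuals)
coverage = []

for t in range(500):
    y = rng.normal()              # outcome; the point forecast here is 0
    if t > 20:                    # warm-up before forming intervals
        q = np.quantile(scores, min(max(1 - alpha_t, 0.0), 1.0))
        err = 0.0 if abs(y) <= q else 1.0   # 1 = interval missed y
        coverage.append(1 - err)
        # ACI update: a miss lowers 1 - alpha_t, widening future intervals.
        alpha_t += gamma * (alpha_target - err)
    scores.append(abs(y))
```

AgACI then runs several such learners with different `gamma` values and aggregates them with online expert aggregation, removing the need to tune the step size by hand.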
arXiv Detail & Related papers (2022-02-15T09:57:01Z)
- Learning Sample Importance for Cross-Scenario Video Temporal Grounding [30.82619216537177]
The paper investigates some superficial biases specific to the temporal grounding task.
We propose a novel method called Debiased Temporal Language Localizer (DebiasTLL) to prevent the model from naively memorizing the biases.
We evaluate the proposed model in cross-scenario temporal grounding, where the train / test data are heterogeneously sourced.
arXiv Detail & Related papers (2022-01-08T15:41:38Z)
- Learning from others' mistakes: Avoiding dataset biases without modeling them [111.17078939377313]
State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended task.
Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available.
We show a method for training models that learn to ignore these problematic correlations.
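A common mechanism for this kind of debiasing is a product of experts: combine a weak, bias-prone model's probabilities with the main model's during training, so the loss concentrates on examples the weak model gets wrong. The sketch below shows only the combination step, with placeholder probabilities; it is an assumption-laden illustration, not the paper's exact training recipe.

```python
import numpy as np

def product_of_experts(log_p_main, log_p_weak):
    """Combine per-class log-probabilities of the main and weak models
    and renormalize into a proper distribution."""
    logits = log_p_main + log_p_weak
    z = np.exp(logits - logits.max())   # numerically stable softmax
    return z / z.sum()

p_main = np.array([0.6, 0.4])
p_weak = np.array([0.9, 0.1])           # weak model is confidently biased toward class 0
combined = product_of_experts(np.log(p_main), np.log(p_weak))
```

Training the main model through the combined distribution means that on examples the weak model already answers via the bias, the main model receives little gradient signal, so it is pushed to rely on other, task-relevant features.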
arXiv Detail & Related papers (2020-12-02T16:10:54Z)
- STAS: Adaptive Selecting Spatio-Temporal Deep Features for Improving Bias Correction on Precipitation [27.780513053310223]
We propose an end-to-end deep-learning model for bias correction on precipitation (BCoP), named the Spatio-Temporal feature Auto-Selective (STAS) model, to select optimal ST regularity from EC.
Experiments on an EC public dataset indicate that STAS achieves state-of-the-art performance on several criteria of BCoP, such as threat scores (TS).
arXiv Detail & Related papers (2020-04-13T07:00:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.