Forecasting Geopolitical Events with a Sparse Temporal Fusion Transformer and Gaussian Process Hybrid: A Case Study in Middle Eastern and U.S. Conflict Dynamics
- URL: http://arxiv.org/abs/2506.20935v1
- Date: Thu, 26 Jun 2025 01:53:25 GMT
- Title: Forecasting Geopolitical Events with a Sparse Temporal Fusion Transformer and Gaussian Process Hybrid: A Case Study in Middle Eastern and U.S. Conflict Dynamics
- Authors: Hsin-Hsiung Huang, Hayden Hampton
- Abstract summary: Forecasting geopolitical conflict from data sources like the Global Database of Events, Language, and Tone (GDELT) is a critical challenge for national security. The inherent sparsity, burstiness, and overdispersion of such data cause standard deep learning models to produce unreliable long-horizon predictions. We introduce STFT-VNNGP, a hybrid architecture that won the 2023 Algorithms for Threat Detection competition by overcoming these limitations.
- Score: 0.552480439325792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Forecasting geopolitical conflict from data sources like the Global Database of Events, Language, and Tone (GDELT) is a critical challenge for national security. The inherent sparsity, burstiness, and overdispersion of such data cause standard deep learning models, including the Temporal Fusion Transformer (TFT), to produce unreliable long-horizon predictions. We introduce STFT-VNNGP, a hybrid architecture that won the 2023 Algorithms for Threat Detection (ATD) competition by overcoming these limitations. Designed to bridge this gap, our model employs a two-stage process: first, a TFT captures complex temporal dynamics to generate multi-quantile forecasts. These quantiles then serve as informed inputs for a Variational Nearest Neighbor Gaussian Process (VNNGP), which performs principled spatiotemporal smoothing and uncertainty quantification. In a case study forecasting conflict dynamics in the Middle East and the U.S., STFT-VNNGP consistently outperforms a standalone TFT, showing a superior ability to predict the timing and magnitude of bursty event periods, particularly at long-range horizons. This work offers a robust framework for generating more reliable and actionable intelligence from challenging event data, with all code and workflows made publicly available to ensure reproducibility.
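The listing contains no code, so the following is a minimal, hedged sketch of the two-stage idea: a generic quantile forecaster stands in for the TFT, and a plain exact GP over time stands in for the VNNGP. Every name here (QuantileForecaster, pinball_loss, gp_smooth, the kernel settings) is illustrative, not from the paper.

```python
# Hedged sketch of the two-stage pipeline: (1) a quantile forecaster
# (stand-in for the TFT) emits multi-quantile forecasts; (2) a GP
# (stand-in for the VNNGP) smooths the median and quantifies uncertainty,
# treating the quantile spread as per-step observation noise.
import torch

QUANTILES = torch.tensor([0.1, 0.5, 0.9])

class QuantileForecaster(torch.nn.Module):
    """Stage 1 stand-in: maps a history window to multi-quantile forecasts."""
    def __init__(self, window: int, horizon: int):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(window, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, horizon * len(QUANTILES)),
        )
        self.horizon = horizon

    def forward(self, x):                        # x: (batch, window)
        return self.net(x).view(-1, self.horizon, len(QUANTILES))

def pinball_loss(pred, target):
    """Standard multi-quantile (pinball) loss; target: (batch, horizon)."""
    err = target.unsqueeze(-1) - pred            # (batch, horizon, n_q)
    return torch.maximum(QUANTILES * err, (QUANTILES - 1) * err).mean()

def gp_smooth(t, y, noise_var, length_scale=3.0, signal_var=1.0):
    """Stage 2 stand-in: exact GP regression over time with an RBF kernel
    and heteroscedastic noise taken from the quantile spread."""
    d2 = (t[:, None] - t[None, :]) ** 2
    K = signal_var * torch.exp(-0.5 * d2 / length_scale ** 2)
    L = torch.linalg.cholesky(K + torch.diag(noise_var))
    mean = K @ torch.cholesky_solve(y[:, None], L)    # posterior mean
    v = torch.cholesky_solve(K, L)                    # (K + S)^-1 K
    var = signal_var - torch.sum(K * v.T, dim=1)      # posterior variance
    return mean.squeeze(-1), var.clamp(min=0.0)

# Usage: smooth the median forecast; a wider 10-90% interval means the
# GP trusts that step less, which is the informed-input idea in spirit.
horizon = 12
model = QuantileForecaster(window=24, horizon=horizon)
history = torch.randn(1, 24)                     # toy stand-in for event counts
q = model(history)[0].detach()                   # (horizon, 3) quantiles
spread = (q[:, 2] - q[:, 0]).abs() + 1e-3
t = torch.arange(horizon, dtype=torch.float32)
mean, var = gp_smooth(t, q[:, 1], noise_var=spread ** 2)
```

A real VNNGP restricts each point's variational posterior to its nearest neighbors for scalability; the exact GP above is only a readable stand-in for that smoothing step.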
Related papers
- PFformer: A Position-Free Transformer Variant for Extreme-Adaptive Multivariate Time Series Forecasting [9.511600544581425]
PFformer is a position-free Transformer-based model designed for single-target MTS forecasting.
PFformer integrates two novel embedding strategies: Enhanced Feature-based Embedding (EFE) and Auto-Encoder-based Embedding (AEE).
arXiv Detail & Related papers (2025-02-27T22:21:27Z)
- TrajGPT: Controlled Synthetic Trajectory Generation Using a Multitask Transformer-Based Spatiotemporal Model [5.5481213807584995]
We introduce TrajGPT, a transformer-based, multi-task, joint generative model to address these issues.
Taking inspiration from large language models, TrajGPT poses the problem of controlled trajectory generation as that of text infilling in natural language.
Our experiments on public and private datasets demonstrate that TrajGPT not only excels in controlled synthetic visit generation but also outperforms competing models in next-location prediction tasks.
arXiv Detail & Related papers (2024-11-07T02:47:50Z)
- Timer-XL: Long-Context Transformers for Unified Time Series Forecasting [67.83502953961505]
We present Timer-XL, a causal Transformer for unified time series forecasting.
Based on large-scale pre-training, Timer-XL achieves state-of-the-art zero-shot performance.
arXiv Detail & Related papers (2024-10-07T07:27:39Z)
- TimeDiT: General-purpose Diffusion Transformers for Time Series Foundation Model [11.281386703572842]
TimeDiT is a diffusion transformer model that combines temporal dependency learning with probabilistic sampling.
TimeDiT employs a unified masking mechanism to harmonize the training and inference process across diverse tasks (a masking sketch follows this entry).
Our systematic evaluation demonstrates TimeDiT's effectiveness on fundamental tasks, i.e., forecasting and imputation, in both zero-shot and fine-tuned settings.
arXiv Detail & Related papers (2024-09-03T22:31:57Z)
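A hedged sketch of what a unified masking mechanism can look like, casting forecasting and imputation as one masked-reconstruction task; the function and its arguments are illustrative, not the TimeDiT authors' API.

```python
# Hedged sketch: one boolean mask generator serves two tasks. Forecasting
# hides a trailing horizon; imputation hides random interior positions.
# Illustrative only, not the TimeDiT implementation.
import torch

def make_mask(batch: int, seq_len: int, task: str, ratio: float = 0.25):
    """Returns a boolean mask, True where values are hidden from the model."""
    if task == "forecast":                       # hide the trailing horizon
        mask = torch.zeros(batch, seq_len, dtype=torch.bool)
        mask[:, int(seq_len * (1 - ratio)):] = True
    elif task == "impute":                       # hide random interior points
        mask = torch.rand(batch, seq_len) < ratio
    else:
        raise ValueError(f"unknown task: {task}")
    return mask
```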
- FATE: Focal-modulated Attention Encoder for Multivariate Time-series Forecasting [0.0]
Climate change stands as one of the most pressing global challenges of the twenty-first century, with far-reaching consequences such as rising sea levels, melting glaciers, and increasingly extreme weather patterns.
Accurate forecasting is critical for monitoring these phenomena and supporting mitigation strategies.
Recent data-driven models for time-series forecasting, including CNNs, RNNs, and attention-based transformers, have shown promise but struggle with long-range dependencies and limited parallelization.
In this work, we present the Focal-modulated Attention Encoder (FATE) for reliable time-series forecasting.
arXiv Detail & Related papers (2024-08-21T04:40:18Z)
- Physics-guided Active Sample Reweighting for Urban Flow Prediction [75.24539704456791]
Urban flow prediction is a spatiotemporal modeling task that estimates the throughput of transportation services like buses, taxis, and ride-hailing vehicles.
Some recent prediction solutions bring remedies with the notion of physics-guided machine learning (PGML).
We develop a physics-guided network (PN) and propose a data-aware framework, Physics-guided Active Sample Reweighting (P-GASR).
arXiv Detail & Related papers (2024-07-18T15:44:23Z)
- Scalable Spatiotemporal Prediction with Bayesian Neural Fields [3.3299088915999295]
BayesNF integrates a deep neural network architecture for high-capacity estimation with hierarchical Bayesian inference for robust predictive uncertainty.
BayesNF delivers improvements on prediction problems from climate and public health data containing tens of thousands of measurements.
arXiv Detail & Related papers (2024-03-12T13:47:50Z)
- Cumulative Distribution Function based General Temporal Point Processes [49.758080415846884]
The CuFun model represents a novel approach to TPPs that revolves around the Cumulative Distribution Function (CDF).
Our approach addresses several critical issues inherent in traditional TPP modeling.
Our contributions encompass the introduction of a pioneering CDF-based TPP model and the development of a methodology for incorporating past event information into future event prediction (a CDF sketch follows this entry).
arXiv Detail & Related papers (2024-02-01T07:21:30Z)
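A hedged sketch of the CDF-centric idea: a network constrained to be monotone in the inter-event time outputs F(tau | history), and the density needed for the likelihood comes from autograd. Class and parameter names are illustrative, not the CuFun code.

```python
# Hedged sketch of a CDF-based TPP head: softplus keeps the tau-path
# weights positive, so F is monotonically increasing in tau; the density
# f = dF/dtau comes from autograd. Boundary conditions (F(0)=0, F(inf)=1)
# are omitted for brevity. Not the CuFun authors' implementation.
import torch
import torch.nn.functional as F_

class MonotoneCDF(torch.nn.Module):
    def __init__(self, hidden: int = 32, ctx_dim: int = 16):
        super().__init__()
        self.w1 = torch.nn.Parameter(torch.randn(hidden, 1) * 0.1)
        self.b1 = torch.nn.Parameter(torch.zeros(hidden))
        self.ctx = torch.nn.Linear(ctx_dim, hidden)  # history embedding path
        self.w2 = torch.nn.Parameter(torch.randn(1, hidden) * 0.1)

    def forward(self, tau, h):
        # tau: (batch, 1) inter-event time; h: (batch, ctx_dim) history encoding
        z = torch.tanh(tau @ F_.softplus(self.w1).T + self.ctx(h) + self.b1)
        return torch.sigmoid(z @ F_.softplus(self.w2).T)  # F(tau | history)

    def log_density(self, tau, h):
        tau = tau.clone().requires_grad_(True)
        cdf = self.forward(tau, h)
        pdf = torch.autograd.grad(cdf.sum(), tau, create_graph=True)[0]
        return torch.log(pdf + 1e-12)
```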
- Towards Flexible Time-to-event Modeling: Optimizing Neural Networks via Rank Regression [17.684526928033065]
We introduce the Deep AFT Rank-regression model for Time-to-event prediction (DART).
This model uses an objective function based on Gehan's rank statistic, which is efficient and reliable for representation learning (a loss sketch follows this entry).
The proposed method is a semiparametric approach to AFT modeling that imposes no distributional assumptions on the survival time.
arXiv Detail & Related papers (2023-07-16T13:58:28Z)
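A hedged reading of a Gehan-type rank loss for a deep AFT model; the pairwise form below is one common differentiable variant, not necessarily the DART authors' exact objective.

```python
# Hedged sketch of a Gehan-style rank loss: AFT residuals
# e_i = log(t_i) - f(x_i) are compared pairwise, and each uncensored
# observation is penalized when another residual exceeds its own.
import torch

def gehan_rank_loss(log_time, pred, event):
    """log_time: (n,) log survival times; pred: (n,) model outputs f(x);
    event: (n,) 1.0 if the event was observed, 0.0 if right-censored."""
    e = log_time - pred                          # AFT residuals
    diff = e[None, :] - e[:, None]               # diff[i, j] = e_j - e_i
    return (event[:, None] * torch.relu(diff)).mean()
```

Because the loss only compares residual ranks, it needs no distributional assumption on the survival times, matching the semiparametric claim above.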
- CARD: Channel Aligned Robust Blend Transformer for Time Series Forecasting [50.23240107430597]
We design a special Transformer, i.e., the Channel Aligned Robust Blend Transformer (CARD for short), that addresses key shortcomings of channel-independent (CI) Transformers in time series forecasting.
First, CARD introduces a channel-aligned attention structure that allows it to capture both temporal correlations among signals and dynamical dependence among multiple variables over time.
Second, in order to efficiently utilize multi-scale knowledge, we design a token blend module to generate tokens with different resolutions (sketched after this entry).
Third, we introduce a robust loss function for time series forecasting to alleviate the potential overfitting issue.
arXiv Detail & Related papers (2023-05-20T05:16:31Z)
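One plausible reading of a token blend module: average-pool adjacent tokens at several scales so attention can also mix coarser views of the series. Purely illustrative, not the CARD implementation.

```python
# Hedged sketch: build multi-resolution token sequences by average-pooling
# adjacent tokens at several scales. Illustrative, not the CARD code.
import torch
import torch.nn.functional as F

def token_blend(tokens, scales=(1, 2, 4)):
    """tokens: (batch, seq_len, dim) -> list of (batch, seq_len // s, dim)."""
    blended = []
    for s in scales:
        x = tokens.transpose(1, 2)               # (batch, dim, seq_len)
        x = F.avg_pool1d(x, kernel_size=s, stride=s)
        blended.append(x.transpose(1, 2))        # coarser token sequence
    return blended
```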
- Generative Time Series Forecasting with Diffusion, Denoise, and Disentanglement [51.55157852647306]
Time series forecasting has been a widely explored task of great importance in many applications.
Real-world time series data are often recorded over a short time period, which leaves a large gap between deep model capacity and the limited, noisy data available.
We propose to address the time series forecasting problem with generative modeling, introducing a bidirectional variational auto-encoder equipped with diffusion, denoise, and disentanglement.
arXiv Detail & Related papers (2023-01-08T12:20:46Z)
- Towards Long-Term Time-Series Forecasting: Feature, Pattern, and Distribution [57.71199089609161]
Long-term time-series forecasting (LTTF) has become a pressing demand in many applications, such as wind power supply planning.
Transformer models have been adopted to deliver high prediction capacity because of their high-capacity self-attention mechanism.
We propose an efficient Transformer-based model, named Conformer, which differentiates itself from existing methods for LTTF in three aspects.
arXiv Detail & Related papers (2023-01-05T13:59:29Z)
- Transformer Hawkes Process [79.16290557505211]
We propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies (an intensity sketch follows this entry).
THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin.
We provide a concrete example, where THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
arXiv Detail & Related papers (2020-02-21T13:48:13Z)
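A hedged, single-event-type sketch of a Transformer Hawkes-style conditional intensity: a softplus of a learned function of the hidden state at the last event, extrapolated continuously in time. Names are illustrative and the multi-type details of THP are omitted.

```python
# Hedged sketch: lambda(t) = softplus(alpha * (t - t_j) / t_j + w @ h_j + b),
# a continuous-time intensity driven by the transformer hidden state h_j
# at the most recent event time t_j. Simplified from the THP formulation.
import torch

class IntensityHead(torch.nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.w = torch.nn.Linear(d_model, 1)          # w @ h + b
        self.alpha = torch.nn.Parameter(torch.tensor(0.1))

    def forward(self, h_last, t, t_last):
        # h_last: (batch, d_model); t, t_last: (batch,) times with t >= t_last
        z = self.alpha * (t - t_last) / (t_last + 1e-8) + self.w(h_last).squeeze(-1)
        return torch.nn.functional.softplus(z)        # positive intensity
```

Training would maximize the usual point-process log-likelihood, the sum of log-intensities at observed events minus the integral of the intensity over the observation window, with the integral typically approximated by Monte Carlo.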