FlowNet: Modeling Dynamic Spatio-Temporal Systems via Flow Propagation
- URL: http://arxiv.org/abs/2511.05595v1
- Date: Wed, 05 Nov 2025 14:06:19 GMT
- Title: FlowNet: Modeling Dynamic Spatio-Temporal Systems via Flow Propagation
- Authors: Yutong Feng, Xu Liu, Yutong Xia, Yuxuan Liang,
- Abstract summary: Accurately modeling complex dynamic spatio-temporal systems requires capturing flow-mediated interdependencies and context-sensitive interaction dynamics. Existing methods, predominantly graph-based or attention-driven, rely on similarity-driven connectivity assumptions, neglecting the asymmetric flow exchanges that govern system evolution. We propose Spatio-Temporal Flow, a physics-inspired paradigm that explicitly models dynamic node couplings through quantifiable flow transfers governed by conservation principles. Experiments demonstrate that FlowNet significantly outperforms existing state-of-the-art approaches on seven metrics in the modeling of three real-world systems, validating its efficiency and physical interpretability.
- Score: 43.89691389856747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurately modeling complex dynamic spatio-temporal systems requires capturing flow-mediated interdependencies and context-sensitive interaction dynamics. Existing methods, predominantly graph-based or attention-driven, rely on similarity-driven connectivity assumptions, neglecting asymmetric flow exchanges that govern system evolution. We propose Spatio-Temporal Flow, a physics-inspired paradigm that explicitly models dynamic node couplings through quantifiable flow transfers governed by conservation principles. Building on this, we design FlowNet, a novel architecture leveraging flow tokens as information carriers to simulate source-to-destination transfers via Flow Allocation Modules, ensuring state redistribution aligns with conservation laws. FlowNet dynamically adjusts the interaction radius through an Adaptive Spatial Masking module, suppressing irrelevant noise while enabling context-aware propagation. A cascaded architecture enhances scalability and nonlinear representation capacity. Experiments demonstrate that FlowNet significantly outperforms existing state-of-the-art approaches on seven metrics in the modeling of three real-world systems, validating its efficiency and physical interpretability. We establish a principled methodology for modeling complex systems through spatio-temporal flow interactions.
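The abstract's central idea, redistributing node state through conservation-constrained flow allocation, can be illustrated with a minimal sketch. This is a hypothetical reading of the described Flow Allocation Module, not the authors' code: function and variable names are assumptions, and the softmax-based allocation is one simple way to make each source split its state across destinations so that total mass is conserved.

```python
# Hypothetical sketch of conservation-constrained flow allocation,
# loosely following the FlowNet abstract (names are assumptions).
import numpy as np

def allocate_flow(states, affinity, mask):
    """Redistribute node states along learned affinities.

    Each source node splits its state across destinations via a
    softmax over masked affinities (the mask plays the role of an
    adaptive spatial radius). Because every row of the allocation
    matrix sums to 1, total state "mass" is conserved.
    """
    scores = np.where(mask, affinity, -np.inf)           # suppress out-of-radius pairs
    scores = scores - scores.max(axis=1, keepdims=True)  # numerical stability
    alloc = np.exp(scores)
    alloc = alloc / alloc.sum(axis=1, keepdims=True)     # rows sum to 1 (conservation)
    return alloc.T @ states                              # destinations gather incoming flow

rng = np.random.default_rng(0)
n, d = 5, 3
states = rng.random((n, d))
affinity = rng.random((n, n))
mask = np.ones((n, n), dtype=bool)

out = allocate_flow(states, affinity, mask)
# Conservation check: total state mass is unchanged after redistribution.
print(np.allclose(out.sum(axis=0), states.sum(axis=0)))  # True
```

The conservation property follows directly from the row-normalization: summing the output over destinations recovers the original column sums of `states`, regardless of the affinities.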
Related papers
- Flow Equivariant World Models: Memory for Partially Observed Dynamic Environments [54.23746358078753]
Embodied systems experience the world as "a symphony of flows." Most neural network world models ignore this structure and instead repeatedly re-learn the same transformations from data. We introduce Flow Equivariant World Models, a framework in which both self-motion and external object motion are unified as one-parameter Lie group "flows."
arXiv Detail & Related papers (2026-01-03T05:22:27Z) - FluidFormer: Transformer with Continuous Convolution for Particle-based Fluid Simulation [5.167355296859346]
Learning-based fluid simulation networks have been proven viable alternatives to traditional numerical solvers for the Navier-Stokes equations. We propose the first Fluid Attention Block (FAB) with a local-global hierarchy, where continuous convolutions extract local features while self-attention captures global dependencies. We pioneer the first Transformer architecture specifically designed for continuous fluid simulation, seamlessly integrated within a dual-pipeline architecture.
arXiv Detail & Related papers (2025-08-03T01:44:17Z) - Flow-Through Tensors: A Unified Computational Graph Architecture for Multi-Layer Transportation Network Optimization [20.685856719515026]
Flow-Through Tensors (FTT) is a unified computational graph architecture that connects origin-destination flows, path probabilities, and link travel times as interconnected tensors. Our framework makes three key contributions: first, it establishes a consistent mathematical structure that enables gradient-based optimization across previously separate modeling elements. Second, it supports multidimensional analysis of traffic patterns over time, space, and user groups with precise quantification of system efficiency.
arXiv Detail & Related papers (2025-06-30T06:42:23Z) - FlowMo: Variance-Based Flow Guidance for Coherent Motion in Video Generation [51.110607281391154]
FlowMo is a training-free guidance method for enhancing motion coherence in text-to-video models. It estimates motion coherence by measuring the patch-wise variance across the temporal dimension and guides the model to reduce this variance dynamically during sampling.
arXiv Detail & Related papers (2025-06-01T19:55:33Z) - Rethinking Traffic Flow Forecasting: From Transition to Generation [0.0]
We propose an Effective Multi-Branch Similarity Transformer for Traffic Flow Prediction, namely EMBSFormer. We find that the factors affecting traffic flow include node-level traffic generation and graph-level traffic transition, which describe the multi-periodicity and interaction patterns of nodes, respectively. For traffic transition, we employ temporal and spatial self-attention mechanisms to maintain global node interactions, and use GNNs and temporal convolutions to model local node interactions, respectively.
arXiv Detail & Related papers (2025-04-19T09:52:39Z) - Learning Effective Dynamics across Spatio-Temporal Scales of Complex Flows [4.798951413107239]
We propose a novel framework, Graph-based Learning of Effective Dynamics (Graph-LED), that leverages graph neural networks (GNNs) and an attention-based autoregressive model. We evaluate the proposed approach on a suite of fluid dynamics problems, including flow past a cylinder and flow over a backward-facing step, over a range of Reynolds numbers.
arXiv Detail & Related papers (2025-02-11T22:14:30Z) - TransFlower: An Explainable Transformer-Based Model with Flow-to-Flow Attention for Commuting Flow Prediction [18.232085070775835]
We introduce TransFlower, an explainable, transformer-based model employing flow-to-flow attention to predict commuting patterns.
Our model outperforms existing methods by up to 30.8% in Common Part of Commuters.
arXiv Detail & Related papers (2024-02-23T16:00:04Z) - Predicting fluid-structure interaction with graph neural networks [13.567118450260178]
We present a rotation equivariant, quasi-monolithic graph neural network framework for the reduced-order modeling of fluid-structure interaction systems.
A finite element-inspired hypergraph neural network is employed to predict the evolution of the fluid state based on the state of the whole system.
The proposed framework tracks the interface description and provides stable and accurate system state predictions during roll-out for at least 2000 time steps.
arXiv Detail & Related papers (2022-10-09T07:42:23Z) - Flowformer: Linearizing Transformers with Conservation Flows [77.25101425464773]
We linearize Transformers free from specific inductive biases based on the flow network theory.
By respectively conserving the incoming flow of sinks for source competition and the outgoing flow of sources for sink allocation, Flow-Attention inherently generates informative attentions.
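The conservation idea in this summary, normalizing the incoming flow of sinks and the outgoing flow of sources, can be sketched as a linear-complexity attention variant. This is a simplified, hypothetical rendering of the described Flow-Attention, not the authors' exact formulation; the sigmoid feature map and the softmax-based source competition are assumptions chosen for clarity.

```python
# Hedged sketch of conservation-style linear attention in the spirit
# of the Flowformer summary (simplified; not the authors' exact code).
import numpy as np

def flow_attention(Q, K, V):
    phi = lambda x: 1.0 / (1.0 + np.exp(-x))  # nonnegative "capacity" map
    Qp, Kp = phi(Q), phi(K)
    incoming = Qp @ Kp.sum(axis=0)            # flow each sink (query) receives
    outgoing = Kp @ Qp.sum(axis=0)            # flow each source (key) emits
    # Source competition: sources with larger conserved outgoing flow
    # contribute more (softmax over outgoing flows, rescaled to keep
    # the average weight at 1).
    comp = np.exp(outgoing - outgoing.max())
    comp = comp / comp.sum()
    V_comp = V * (comp * len(K))[:, None]
    # Sink allocation: aggregate in the linear-attention order
    # (Kp.T @ V first), then normalize by each sink's incoming flow,
    # avoiding the quadratic softmax entirely.
    return (Qp @ (Kp.T @ V_comp)) / incoming[:, None]

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((6, 4)), rng.standard_normal((8, 4)), rng.standard_normal((8, 5))
out = flow_attention(Q, K, V)
print(out.shape)  # (6, 5)
```

Because the `(Kp.T @ V_comp)` product is computed before multiplying by `Qp`, cost scales linearly with sequence length rather than quadratically, which is the point of the linearization.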
arXiv Detail & Related papers (2022-02-13T08:44:10Z) - Generative Flows with Invertible Attentions [135.23766216657745]
We introduce two types of invertible attention mechanisms for generative flow models.
We exploit split-based attention mechanisms to learn the attention weights and input representations on every two splits of flow feature maps.
Our method provides invertible attention modules with tractable Jacobian determinants, enabling seamless integration of it at any positions of the flow-based models.
arXiv Detail & Related papers (2021-06-07T20:43:04Z) - Spatial-Temporal Transformer Networks for Traffic Flow Forecasting [74.76852538940746]
We propose a novel paradigm of Spatial-Temporal Transformer Networks (STTNs) to improve the accuracy of long-term traffic forecasting.
Specifically, we present a new variant of graph neural networks, named spatial transformer, by dynamically modeling directed spatial dependencies.
The proposed model enables fast and scalable training over long-range spatial-temporal dependencies.
arXiv Detail & Related papers (2020-01-09T10:21:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.