Geometry-aware Active Learning of Spatiotemporal Dynamic Systems
- URL: http://arxiv.org/abs/2504.19012v2
- Date: Thu, 01 May 2025 12:49:21 GMT
- Title: Geometry-aware Active Learning of Spatiotemporal Dynamic Systems
- Authors: Xizhuo Zhang, Bing Yao
- Abstract summary: This paper proposes a geometry-aware active learning framework for modeling spatiotemporal dynamic systems. We develop an adaptive active learning strategy to strategically identify informative spatial locations for data collection and further maximize prediction accuracy.
- Score: 4.251030047034566
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rapid developments in advanced sensing and imaging have significantly enhanced information visibility, opening opportunities for predictive modeling of complex dynamic systems. However, sensing signals acquired from such complex systems are often distributed across 3D geometries and rapidly evolving over time, posing significant challenges in spatiotemporal predictive modeling. This paper proposes a geometry-aware active learning framework for modeling spatiotemporal dynamic systems. Specifically, we propose a geometry-aware spatiotemporal Gaussian Process (G-ST-GP) to effectively integrate the temporal correlations and geometric manifold features for reliable prediction of high-dimensional dynamic behaviors. In addition, we develop an adaptive active learning strategy to strategically identify informative spatial locations for data collection and further maximize the prediction accuracy. This strategy achieves an adaptive trade-off between the prediction uncertainty in the G-ST-GP model and the space-filling design guided by the geodesic distance across the 3D geometry. We implement the proposed framework to model the spatiotemporal electrodynamics in a 3D heart geometry. Numerical experiments show that our framework outperforms traditional methods that lack mechanisms for incorporating geometric information or for effective data collection.
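As a rough illustration of the two ingredients described in the abstract (a GP whose spatial kernel respects the 3D geometry, and an acquisition rule trading off uncertainty against geodesic space-filling), here is a minimal sketch. The kernel form, hyperparameters, and acquisition weighting are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def rbf_from_dist(D, ls=0.3):
    """Squared-exponential kernel on a precomputed (geodesic) distance
    matrix, so spatial correlation follows the manifold rather than
    straight-line 3D distance."""
    return np.exp(-0.5 * (D / ls) ** 2)

def posterior_variance(D_geo, obs, ls=0.3, noise=1e-2):
    """GP posterior variance at every node after observing nodes in `obs`."""
    K = rbf_from_dist(D_geo, ls)
    K_oo = K[np.ix_(obs, obs)] + noise * np.eye(len(obs))
    K_ao = K[:, obs]  # cross-covariance: all nodes vs. observed nodes
    return np.diag(K) - np.einsum('ij,jk,ik->i', K_ao, np.linalg.inv(K_oo), K_ao)

def next_sensor(D_geo, obs, lam=0.5):
    """Pick the next measurement site: trade off predictive uncertainty
    against a geodesic space-filling term (distance to the nearest
    already-observed node), weighted by `lam`."""
    var = posterior_variance(D_geo, obs)
    fill = D_geo[:, obs].min(axis=1)
    score = (1 - lam) * var / var.max() + lam * fill / fill.max()
    score[obs] = -np.inf  # never re-select an observed node
    return int(np.argmax(score))
```

On a toy ring of nodes (geodesic distance = arc length along the ring), starting from a single observed node this rule selects the antipodal node first, as both the uncertainty and space-filling terms favor the point farthest along the manifold.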
Related papers
- Conservation-informed Graph Learning for Spatiotemporal Dynamics Prediction [84.26340606752763]
In this paper, we introduce the conservation-informed GNN (CiGNN), an end-to-end explainable learning framework. The network is designed to conform to the general conservation law via symmetry, where conservative and non-conservative information passes over a multiscale space via a latent temporal marching strategy. Results demonstrate that CiGNN exhibits remarkable accuracy and generalizability, and is readily applicable to learning the prediction of various spatiotemporal dynamics.
arXiv Detail & Related papers (2024-12-30T13:55:59Z)
- Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z)
- Geometric Trajectory Diffusion Models [58.853975433383326]
Generative models have shown great promise in generating 3D geometric systems.
Existing approaches only operate on static structures, neglecting the fact that physical systems are always dynamic in nature.
We propose geometric trajectory diffusion models (GeoTDM), the first diffusion model for modeling the temporal distribution of 3D geometric trajectories.
arXiv Detail & Related papers (2024-10-16T20:36:41Z)
- Context-Conditioned Spatio-Temporal Predictive Learning for Reliable V2V Channel Prediction [25.688521281119037]
Vehicle-to-Vehicle (V2V) channel state information (CSI) prediction is challenging and crucial for optimizing downstream tasks.
Traditional prediction approaches focus on four-dimensional (4D) CSI, which includes predictions over time, bandwidth, and antenna (TX and RX) space.
We propose a novel context-conditioned spatio-temporal predictive learning method to capture dependencies within 4D CSI data.
arXiv Detail & Related papers (2024-09-16T04:15:36Z)
- STGFormer: Spatio-Temporal GraphFormer for 3D Human Pose Estimation in Video [7.345621536750547]
This paper presents the Spatio-Temporal GraphFormer framework (STGFormer) for 3D human pose estimation in videos. First, we introduce a STG attention mechanism, designed to more effectively leverage the inherent graph distributions of the human body. Next, we present a Modulated Hop-wise Regular GCN to independently process temporal and spatial dimensions in parallel. Finally, we demonstrate our method's state-of-the-art performance on the Human3.6M and MPI-INF-3DHP datasets.
arXiv Detail & Related papers (2024-07-14T06:45:27Z)
- Graph and Skipped Transformer: Exploiting Spatial and Temporal Modeling Capacities for Efficient 3D Human Pose Estimation [36.93661496405653]
We take a global approach to exploit spatio-temporal information with a concise Graph and Skipped Transformer architecture.
Specifically, in the 3D pose stage, coarse-grained body parts are deployed to construct a fully data-driven adaptive model.
Experiments are conducted on the Human3.6M, MPI-INF-3DHP and HumanEva benchmarks.
arXiv Detail & Related papers (2024-07-03T10:42:09Z)
- GNPM: Geometric-Aware Neural Parametric Models [6.620111952225635]
We propose a learned parametric model that takes into account the local structure of data to learn disentangled shape and pose latent spaces of 4D dynamics.
We evaluate GNPMs on various datasets of humans, and show that it achieves comparable performance to state-of-the-art methods that require dense correspondences during training.
arXiv Detail & Related papers (2022-09-21T19:23:31Z)
- Physics-Inspired Temporal Learning of Quadrotor Dynamics for Accurate Model Predictive Trajectory Tracking [76.27433308688592]
Accurately modeling a quadrotor's system dynamics is critical for guaranteeing agile, safe, and stable navigation.
We present a novel Physics-Inspired Temporal Convolutional Network (PI-TCN) approach to learning a quadrotor's system dynamics purely from robot experience.
Our approach combines the expressive power of sparse temporal convolutions and dense feed-forward connections to make accurate system predictions.
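The "sparse temporal convolutions" mentioned in this summary are typically causal dilated 1D convolutions. As a hypothetical illustration (not the paper's PI-TCN code), a minimal version:

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1D convolution: the output at time t depends only on
    x[t], x[t-d], x[t-2d], ..., so no future samples leak into the
    prediction. Dilation widens the receptive field without adding
    parameters, which is the "sparse" aspect."""
    T, k = len(x), len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
                     for t in range(T)])
```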
arXiv Detail & Related papers (2022-06-07T13:51:35Z)
- On the spatial attention in Spatio-Temporal Graph Convolutional Networks for skeleton-based human action recognition [97.14064057840089]
Graph convolutional networks (GCNs) have shown promising performance in skeleton-based human action recognition by modeling a sequence of skeletons as a graph.
Most of the recently proposed spatio-temporal GCN-based methods improve the performance by learning the graph structure at each layer of the network.
arXiv Detail & Related papers (2020-11-07T19:03:04Z)
- Disentangling and Unifying Graph Convolutions for Skeleton-Based Action Recognition [79.33539539956186]
We propose a simple method to disentangle multi-scale graph convolutions and a unified spatial-temporal graph convolutional operator named G3D.
By coupling these proposals, we develop a powerful feature extractor named MS-G3D based on which our model outperforms previous state-of-the-art methods on three large-scale datasets.
arXiv Detail & Related papers (2020-03-31T11:28:25Z)
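The "disentangled" multi-scale graph convolution above refers to aggregating over neighborhoods of exact hop distance k, rather than raw adjacency powers A^k that over-weight nearby nodes. A minimal sketch of building such exact-hop masks (an illustrative reconstruction, not the authors' code):

```python
import numpy as np

def disentangled_adjacencies(A, K):
    """Return [A_1, ..., A_K] where A_k[i, j] = 1 iff the shortest path
    from i to j is exactly k hops. Using exact-hop masks instead of raw
    powers A**k keeps each scale from re-counting closer nodes."""
    n = len(A)
    within1 = ((np.eye(n) + A) > 0).astype(int)  # reachable in <= 1 hop
    reach_prev = np.eye(n, dtype=int)            # reachable in <= 0 hops
    masks = []
    reach = reach_prev
    for _ in range(K):
        reach = ((reach @ within1) > 0).astype(int)  # reachable in <= k hops
        masks.append(reach - reach_prev)             # exactly-k-hop entries
        reach_prev = reach
    return masks
```

Each mask can then drive its own graph convolution branch, e.g. summing normalized `A_k @ X @ W_k` over scales k.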
This list is automatically generated from the titles and abstracts of the papers in this site.