RelMap: Reliable Spatiotemporal Sensor Data Visualization via Imputative Spatial Interpolation
- URL: http://arxiv.org/abs/2508.01240v1
- Date: Sat, 02 Aug 2025 07:25:23 GMT
- Title: RelMap: Reliable Spatiotemporal Sensor Data Visualization via Imputative Spatial Interpolation
- Authors: Juntong Chen, Huayuan Ye, He Zhu, Siwei Fu, Changbo Wang, Chenhui Li
- Abstract summary: This paper introduces a novel spatiotemporal data pipeline that achieves reliable results and produces a novel heatmap representation with uncertainty information. We leverage imputation from Graph Neural Networks (GNNs) to enhance visualization reliability and temporal resolution.
- Score: 18.947107160943595
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate and reliable visualization of spatiotemporal sensor data such as environmental parameters and meteorological conditions is crucial for informed decision-making. Traditional spatial interpolation methods, however, often fall short of producing reliable interpolation results due to the limited and irregular sensor coverage. This paper introduces a novel spatial interpolation pipeline that achieves reliable interpolation results and produces a novel heatmap representation with uncertainty information encoded. We leverage imputation reference data from Graph Neural Networks (GNNs) to enhance visualization reliability and temporal resolution. By integrating Principal Neighborhood Aggregation (PNA) and Geographical Positional Encoding (GPE), our model effectively learns the spatiotemporal dependencies. Furthermore, we propose an extrinsic, static visualization technique for interpolation-based heatmaps that effectively communicates the uncertainties arising from various sources in the interpolated map. Through a set of use cases, extensive evaluations on real-world datasets, and user studies, we demonstrate our model's superior performance for data imputation, the improvements to the interpolant with reference data, and the effectiveness of our visualization design in communicating uncertainties.
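The paper's GNN-based pipeline is not described in implementable detail in this abstract. As a rough, hedged illustration of the kind of uncertainty-aware spatial interpolation it targets, the sketch below uses classical inverse-distance weighting over irregular sensor sites and reports a simple distance-to-nearest-sensor proxy for uncertainty; the function name, the IDW choice, and the uncertainty heuristic are all illustrative, not taken from the paper.

```python
import numpy as np

def idw_interpolate(sensor_xy, sensor_vals, grid_xy, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation with a crude uncertainty proxy.

    sensor_xy: (n, 2) sensor coordinates; sensor_vals: (n,) readings;
    grid_xy: (m, 2) query points. Returns (values, uncertainty) per query,
    where uncertainty grows with distance to the nearest sensor.
    """
    d = np.linalg.norm(grid_xy[:, None, :] - sensor_xy[None, :, :], axis=2)  # (m, n)
    w = 1.0 / (d ** power + eps)
    w /= w.sum(axis=1, keepdims=True)          # normalize weights per query point
    values = w @ sensor_vals
    uncertainty = d.min(axis=1)                # farther from any sensor -> less reliable
    return values, uncertainty

sensors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
readings = np.array([10.0, 20.0, 30.0])
grid = np.array([[0.0, 0.0], [0.5, 0.5]])
vals, unc = idw_interpolate(sensors, readings, grid)
```

A query point sitting on a sensor reproduces that sensor's reading with zero uncertainty; a point equidistant from several sensors falls back to their average, which is exactly the regime where an uncertainty channel in the heatmap matters.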
Related papers
- CAST: Cross-Attentive Spatio-Temporal feature fusion for Deepfake detection [0.0]
CNNs are effective at capturing spatial artifacts, and Transformers excel at modeling temporal inconsistencies. We propose a unified CAST model that leverages cross-attention to effectively fuse spatial and temporal features. We evaluate the performance of our model using the FaceForensics++, Celeb-DF, and DeepfakeDetection datasets.
arXiv Detail & Related papers (2025-06-26T18:51:17Z)
- Multivariate Long-term Time Series Forecasting with Fourier Neural Filter [55.09326865401653]
We introduce FNF as the backbone and DBD as architecture to provide excellent learning capabilities and optimal learning pathways for spatial-temporal modeling. We show that FNF unifies local time-domain and global frequency-domain information processing within a single backbone that extends naturally to spatial modeling.
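The FNF parameterization is not given in this summary; the following sketch only illustrates the general idea of a global frequency-domain filter, where a per-bin gain (learned in the real model, hand-set here) acts on the whole sequence at once, in contrast to a local time-domain convolution. The function name and gains are illustrative assumptions.

```python
import numpy as np

def fourier_global_filter(x, freq_gain):
    """Scale each FFT bin by a gain, then return to the time domain.
    A single pass sees the entire sequence, unlike a small convolution
    kernel that only sees a local window."""
    X = np.fft.rfft(x)                       # global frequency representation
    return np.fft.irfft(X * freq_gain, n=len(x))

n = np.arange(16)
x = np.sin(2 * np.pi * n / 16) + np.sin(2 * np.pi * 4 * n / 16)  # two tones
gain = np.zeros(9)
gain[1] = 1.0                                # keep only the lowest non-DC tone
low = fourier_global_filter(x, gain)
```

With the gain zeroed everywhere except bin 1, the higher-frequency tone is removed exactly, something a short local filter can only approximate.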
arXiv Detail & Related papers (2025-06-10T18:40:20Z)
- Kriformer: A Novel Spatiotemporal Kriging Approach Based on Graph Transformers [5.4381914710364665]
This study addresses the challenges posed by sparse sensor deployment and unreliable data in environmental monitoring.
A graph transformer model, Kriformer, estimates data at locations without sensors by mining spatial and temporal correlations, even with limited resources.
arXiv Detail & Related papers (2024-09-23T11:01:18Z)
- FedNE: Surrogate-Assisted Federated Neighbor Embedding for Dimensionality Reduction [47.336599393600046]
FedNE is a novel approach that integrates the FedAvg framework with the contrastive NE technique.
We conduct comprehensive experiments on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-09-17T19:23:24Z)
- Temporal and Heterogeneous Graph Neural Network for Remaining Useful Life Prediction [27.521188262343596]
We introduce a novel model named Temporal and Heterogeneous Graph Neural Network (THGNN).
THGNN aggregates historical data from neighboring nodes to accurately capture the temporal dynamics and spatial correlations within the stream of sensor data.
We have validated the effectiveness of our approach through comprehensive experiments.
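THGNN's learned layers are not specified in this summary; as a minimal, hedged stand-in for the temporal-then-spatial aggregation it describes, the sketch below averages each node's recent readings and then mean-pools over its graph neighborhood. The function name, the fixed window, and the unweighted pooling are illustrative assumptions.

```python
import numpy as np

def aggregate_neighbor_history(history, adjacency, window=3):
    """history: (T, n) readings per time step; adjacency: (n, n) 0/1 graph.
    Average each node's last `window` readings (temporal summary), then
    mean-pool over the node's neighborhood with a self-loop (spatial
    aggregation) -- a crude stand-in for learned message passing."""
    recent = history[-window:].mean(axis=0)        # temporal summary per node
    a = adjacency + np.eye(adjacency.shape[0])     # include each node itself
    return (a @ recent) / a.sum(axis=1)            # degree-normalized pooling

history = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # 3 steps, 2 nodes
adj = np.array([[0.0, 1.0], [1.0, 0.0]])                  # two connected nodes
pooled = aggregate_neighbor_history(history, adj)
```

Two connected nodes end up sharing one smoothed value, which is the intuition behind borrowing neighbors' history when a node's own stream is noisy or incomplete.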
arXiv Detail & Related papers (2024-05-07T14:08:57Z)
- Reconstructing Spatiotemporal Data with C-VAEs [49.1574468325115]
Conditional continuous representation of moving regions is commonly used.
In this work, we explore the capabilities of Conditional Variational Autoencoder (C-VAE) models to generate realistic representations of regions' evolution.
arXiv Detail & Related papers (2023-07-12T15:34:10Z)
- Adaptive Local-Component-aware Graph Convolutional Network for One-shot Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art.
arXiv Detail & Related papers (2022-09-21T02:33:07Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
arXiv Detail & Related papers (2020-11-01T19:24:27Z)
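The transform-consistency idea above (the same absolute pose should result regardless of which reference image is used) can be sketched in a translation-only toy form; the paper's actual formulation over full camera poses is not given in this summary, so the function name and the squared-distance loss here are illustrative assumptions.

```python
import numpy as np

def transform_consistency_loss(ref_pose_a, ref_pose_b, rel_a, rel_b):
    """Translation-only toy version of a transform-consistency loss.
    ref_pose_*: known absolute positions of two reference images;
    rel_*: the query's estimated position relative to each reference.
    The two implied absolute query poses should agree; their squared
    distance is the self-supervised training signal."""
    abs_via_a = np.asarray(ref_pose_a) + np.asarray(rel_a)
    abs_via_b = np.asarray(ref_pose_b) + np.asarray(rel_b)
    return float(np.sum((abs_via_a - abs_via_b) ** 2))

# Both references imply the query sits at (1, 1): zero loss.
consistent = transform_consistency_loss([0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [-1.0, 1.0])
# The second estimate now implies (2, 1) instead: nonzero loss.
inconsistent = transform_consistency_loss([0.0, 0.0], [2.0, 0.0], [1.0, 1.0], [0.0, 1.0])
```

No ground-truth query pose appears anywhere in the loss, which is what makes the signal self-supervised.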
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.