HydroVision: LiDAR-Guided Hydrometric Prediction with Vision Transformers and Hybrid Graph Learning
- URL: http://arxiv.org/abs/2409.15213v1
- Date: Mon, 23 Sep 2024 16:57:43 GMT
- Title: HydroVision: LiDAR-Guided Hydrometric Prediction with Vision Transformers and Hybrid Graph Learning
- Authors: Naghmeh Shafiee Roudbari, Ursula Eicker, Charalambos Poullis, Zachary Patterson
- Abstract summary: Hydrometric forecasting is crucial for water resource management, flood prediction, and environmental protection.
We propose a hybrid graph learning structure that combines static and dynamic graph learning.
Our method significantly reduces prediction error by an average of 10% across all days, with greater improvements for longer forecasting horizons.
- Score: 4.499833362998488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hydrometric forecasting is crucial for water resource management, flood prediction, and environmental protection. Water stations are interconnected, and this connectivity influences the measurements at other stations. However, the dynamic and implicit nature of water flow paths makes it challenging to extract a priori knowledge of the connectivity structure. We hypothesize that terrain elevation significantly affects flow and connectivity. To incorporate this, we use LiDAR terrain elevation data encoded through a Vision Transformer (ViT). The ViT, which has demonstrated excellent performance in image classification by directly applying transformers to sequences of image patches, efficiently captures spatial features of terrain elevation. To account for both spatial and temporal features, we employ GRU blocks enhanced with graph convolution, a method widely used in the literature. We propose a hybrid graph learning structure that combines static and dynamic graph learning. A static graph, derived from transformer-encoded LiDAR data, captures terrain elevation relationships, while a dynamic graph adapts to temporal changes, improving the overall graph representation. We apply graph convolution in two layers through these static and dynamic graphs. Our method makes daily predictions up to 12 days ahead. Empirical results from multiple water stations in Quebec demonstrate that our method significantly reduces prediction error by an average of 10% across all days, with greater improvements for longer forecasting horizons.
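The paper ships no code, so the following is a minimal sketch of the hybrid graph convolution described in the abstract: a GRU-style cell whose gates propagate messages over both a static adjacency (which the paper derives from ViT-encoded LiDAR terrain embeddings) and a dynamic adjacency learned from node embeddings. All class names, dimensions, and the similarity-based dynamic graph are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridGraphGRUCell(nn.Module):
    """Sketch of a GRU cell whose gates use two graph convolutions:
    one over a static graph (assumed to come from ViT-encoded terrain
    embeddings) and one over a dynamic, learned graph."""

    def __init__(self, num_nodes: int, in_dim: int, hid_dim: int, emb_dim: int = 16):
        super().__init__()
        # Learnable node embeddings that define the dynamic graph.
        self.node_emb = nn.Parameter(torch.randn(num_nodes, emb_dim))
        self.gates = nn.Linear(2 * (in_dim + hid_dim), 2 * hid_dim)
        self.cand = nn.Linear(2 * (in_dim + hid_dim), hid_dim)

    @staticmethod
    def graph_conv(x, adj):
        # One-hop propagation with a row-normalized adjacency.
        return (adj / adj.sum(-1, keepdim=True).clamp(min=1e-6)) @ x

    def forward(self, x, h, static_adj):
        # Dynamic adjacency from node-embedding similarity.
        dyn_adj = F.softmax(self.node_emb @ self.node_emb.T, dim=-1)
        xh = torch.cat([x, h], dim=-1)
        # Two graph-convolution layers: static graph, then dynamic graph.
        msg = torch.cat([self.graph_conv(xh, static_adj),
                         self.graph_conv(xh, dyn_adj)], dim=-1)
        z, r = torch.sigmoid(self.gates(msg)).chunk(2, dim=-1)
        xrh = torch.cat([x, r * h], dim=-1)
        msg_c = torch.cat([self.graph_conv(xrh, static_adj),
                           self.graph_conv(xrh, dyn_adj)], dim=-1)
        c = torch.tanh(self.cand(msg_c))
        return (1 - z) * h + z * c  # next hidden state

# Toy usage: 30 stations, scalar water-level input, 12-step rollout.
cell = HybridGraphGRUCell(num_nodes=30, in_dim=1, hid_dim=32)
h = torch.zeros(30, 32)
for _ in range(12):
    h = cell(torch.randn(30, 1), h, torch.rand(30, 30))
```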
Related papers
- SGFormer: Single-Layer Graph Transformers with Approximation-Free Linear Complexity [74.51827323742506]
We evaluate the necessity of adopting multi-layer attention in Transformers on graphs.
We show that multi-layer propagation can be reduced to one-layer propagation, with the same capability for representation learning.
It suggests a new technical path for building powerful and efficient Transformers on graphs.
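For context, below is a minimal sketch of the kind of single-layer, linear-complexity attention SGFormer argues for, using the generic kernelized linear-attention trick of computing K^T V before multiplying by Q; this is an illustrative pattern, not SGFormer's exact layer.

```python
import torch
import torch.nn.functional as F

def one_layer_linear_attention(x: torch.Tensor) -> torch.Tensor:
    """All-pair propagation over N nodes in O(N * d^2) rather than
    O(N^2 * d), via the feature map phi(x) = elu(x) + 1 (a generic
    linear-attention pattern, not the exact SGFormer layer)."""
    q = F.elu(x) + 1                     # positive feature maps keep
    k = F.elu(x) + 1                     # the normalizer well-defined
    kv = k.transpose(-2, -1) @ x         # (d, d): aggregate values once
    denom = q @ k.sum(0).unsqueeze(-1)   # (N, 1) normalizer
    return (q @ kv) / denom

out = one_layer_linear_attention(torch.randn(1000, 64))  # one layer suffices
```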
arXiv Detail & Related papers (2024-09-13T17:37:34Z)
- Learning Lane Graphs from Aerial Imagery Using Transformers [7.718401895021425]
This work introduces a novel approach to generating successor lane graphs from aerial imagery.
We frame successor lane graphs as a collection of maximal length paths and predict them using a Detection Transformer (DETR) architecture.
We demonstrate the efficacy of our method through extensive experiments on the diverse and large-scale UrbanLaneGraph dataset.
arXiv Detail & Related papers (2024-07-08T07:42:32Z)
- Through the Dual-Prism: A Spectral Perspective on Graph Data Augmentation for Graph Classification [71.36575018271405]
We introduce the Dual-Prism (DP) augmentation method, comprising DP-Noise and DP-Mask.
We find that keeping the low-frequency eigenvalues unchanged can preserve the critical properties at a large scale when generating augmented graphs.
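A hedged sketch of that idea (a hypothetical helper, not the authors' DP-Noise code): eigendecompose the graph Laplacian, leave the lowest-frequency eigenvalues untouched, perturb the rest, and rebuild the graph.

```python
import torch

def dp_noise_like(adj: torch.Tensor, keep: int, sigma: float = 0.1) -> torch.Tensor:
    """Keep the `keep` lowest-frequency Laplacian eigenvalues fixed and
    add Gaussian noise to the remaining (high-frequency) ones."""
    lap = torch.diag(adj.sum(-1)) - adj            # combinatorial Laplacian
    evals, evecs = torch.linalg.eigh(lap)          # ascending eigenvalues
    noise = sigma * torch.randn_like(evals)
    noise[:keep] = 0.0                             # preserve low frequencies
    new_lap = evecs @ torch.diag(evals + noise) @ evecs.T
    new_adj = torch.diag(new_lap.diagonal()) - new_lap  # invert L = D - A
    return new_adj.clamp(min=0.0)                  # keep weights non-negative

aug = dp_noise_like(torch.ones(8, 8) - torch.eye(8), keep=4)
```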
arXiv Detail & Related papers (2024-01-18T12:58:53Z)
- TransGlow: Attention-augmented Transduction model based on Graph Neural Networks for Water Flow Forecasting [4.915744683251151]
Hydrometric prediction of water quantity is useful for a variety of applications, including water management, flood forecasting, and flood control.
We propose a spatiotemporal forecasting model that augments the hidden state in a Graph Convolution Recurrent Neural Network (GCRN) encoder-decoder with attention.
We present a new benchmark dataset of water flow from a network of Canadian stations on rivers, streams, and lakes.
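A rough, hypothetical sketch of how a decoder hidden state in a GCRN encoder-decoder could be augmented with attention over the encoder states (names and wiring are assumptions, not TransGlow's exact design):

```python
import torch
import torch.nn as nn

class AttentionAugmentedState(nn.Module):
    """Augment a decoder hidden state with an attention readout over
    encoder hidden states (illustrative, not TransGlow's exact module)."""

    def __init__(self, hid_dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hid_dim, heads, batch_first=True)
        self.mix = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, dec_h, enc_hs):
        # dec_h: (B, N, H) decoder state; enc_hs: (B, T*N, H) encoder states.
        ctx, _ = self.attn(dec_h, enc_hs, enc_hs)
        return torch.tanh(self.mix(torch.cat([dec_h, ctx], dim=-1)))

aug = AttentionAugmentedState(hid_dim=32)
h_new = aug(torch.randn(2, 10, 32), torch.randn(2, 50, 32))
```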
arXiv Detail & Related papers (2023-12-10T18:23:40Z)
- Deep Prompt Tuning for Graph Transformers [55.2480439325792]
Fine-tuning is resource-intensive and requires storing multiple copies of large models.
We propose a novel approach called deep graph prompt tuning as an alternative to fine-tuning.
By freezing the pre-trained parameters and only updating the added tokens, our approach reduces the number of free parameters and eliminates the need for multiple model copies.
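The mechanism can be sketched in a few lines (hypothetical helper names; the stand-in backbone is a placeholder, not the paper's model): freeze the backbone and learn only a handful of prompt tokens prepended to the input.

```python
import torch
import torch.nn as nn

def make_prompts(num_prompts: int, dim: int) -> nn.Parameter:
    """Learnable prompt tokens to prepend to node features."""
    return nn.Parameter(torch.randn(num_prompts, dim) * 0.02)

backbone = nn.Linear(64, 64)          # stand-in for a pre-trained graph transformer
for p in backbone.parameters():
    p.requires_grad = False           # frozen: no per-task model copies needed

prompts = make_prompts(4, 64)
x = torch.cat([prompts, torch.randn(100, 64)], dim=0)  # prompts + node features
out = backbone(x)                     # gradients reach only `prompts`
```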
arXiv Detail & Related papers (2023-09-18T20:12:17Z)
- Attention-based Dynamic Graph Convolutional Recurrent Neural Network for Traffic Flow Prediction in Highway Transportation [0.6650227510403052]
An Attention-based Dynamic Graph Convolutional Recurrent Neural Network (ADG-N) is proposed to improve traffic flow prediction in highway transportation.
A dedicated gated kernel emphasizing highly related nodes is introduced on complete graphs to reduce overfitting in graph convolution operations.
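One loose reading of that gated kernel, sketched below under stated assumptions (a bilinear relevance score gating a dense adjacency; not the paper's exact formulation):

```python
import torch
import torch.nn as nn

class GatedDenseGraphConv(nn.Module):
    """Feature-dependent gate that down-weights weakly related node pairs
    before dense (complete-graph) convolution. Illustrative only."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Bilinear(dim, dim, 1)   # pairwise relevance score
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (N, D), complete graph
        n = x.size(0)
        gate = torch.sigmoid(self.score(x.repeat_interleave(n, 0),
                                        x.repeat(n, 1))).view(n, n)
        adj = gate / gate.sum(-1, keepdim=True)  # emphasize related nodes
        return torch.relu(self.proj(adj @ x))

y = GatedDenseGraphConv(16)(torch.randn(10, 16))
```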
arXiv Detail & Related papers (2023-09-13T13:57:21Z)
- TransformerG2G: Adaptive time-stepping for learning temporal graph embeddings using transformers [2.2120851074630177]
We develop a graph embedding model with uncertainty quantification, TransformerG2G, to learn temporal dynamics of temporal graphs.
Our experiments demonstrate that the proposed TransformerG2G model outperforms conventional multi-step methods.
By examining the attention weights, we can uncover temporal dependencies, identify influential elements, and gain insights into the complex interactions within the graph structure.
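Inspecting attention weights for this kind of interpretability is straightforward in standard libraries; the snippet below (generic PyTorch, not the TransformerG2G codebase) recovers the averaged attention matrix over a node's temporal history.

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
history = torch.randn(1, 10, 32)   # 10 temporal snapshots of one node
_, weights = attn(history, history, history,
                  need_weights=True, average_attn_weights=True)
print(weights.shape)               # (1, 10, 10): who attends to which timestep
```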
arXiv Detail & Related papers (2023-07-05T18:34:22Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Spectral Augmentations for Graph Contrastive Learning [50.149996923976836]
Contrastive learning has emerged as a premier method for learning representations with or without supervision.
Recent studies have shown its utility in graph representation learning for pre-training.
We propose a set of well-motivated graph transformation operations to provide a bank of candidates when constructing augmentations for a graph contrastive objective.
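As a generic illustration of an augmentation bank feeding a contrastive objective (simple topology edits for brevity; the paper's operations are spectrally motivated and differ in detail):

```python
import random
import torch

def drop_edges(adj: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    mask = (torch.rand_like(adj) > p).float()
    return adj * mask * mask.T          # elementwise M * M^T stays symmetric

def add_self_loops(adj: torch.Tensor, w: float = 1.0) -> torch.Tensor:
    return adj + w * torch.eye(adj.size(0))

AUG_BANK = [drop_edges, add_self_loops]  # bank of candidate transformations

adj = (torch.rand(16, 16) > 0.7).float()
adj = ((adj + adj.T) > 0).float()        # symmetrize the toy graph
view1, view2 = (random.choice(AUG_BANK)(adj) for _ in range(2))  # positive pair
```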
arXiv Detail & Related papers (2023-02-06T16:26:29Z)
- Dynamic Graph Representation Learning via Graph Transformer Networks [41.570839291138114]
We propose a Transformer-based dynamic graph learning method named Dynamic Graph Transformer (DGT).
DGT has spatial-temporal encoding to effectively learn graph topology and capture implicit links.
We show that DGT presents superior performance compared with several state-of-the-art baselines.
arXiv Detail & Related papers (2021-11-19T21:44:23Z)
- TCL: Transformer-based Dynamic Graph Modelling via Contrastive Learning [87.38675639186405]
We propose a novel graph neural network approach, called TCL, which deals with the dynamically-evolving graph in a continuous-time fashion.
To the best of our knowledge, this is the first attempt to apply contrastive learning to representation learning on dynamic graphs.
arXiv Detail & Related papers (2021-05-17T15:33:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.