Graph Sparsification for GCN Towards Optimal Crop Yield Predictions
- URL: http://arxiv.org/abs/2306.01725v1
- Date: Fri, 2 Jun 2023 17:51:56 GMT
- Title: Graph Sparsification for GCN Towards Optimal Crop Yield Predictions
- Authors: Saghar Bagheri, Gene Cheung, Tim Eadie
- Abstract summary: We propose a graph sparsification method based on the Fiedler number to remove edges from a complete graph kernel.
We show that our method produces a sparse graph with good GCN performance compared to other graph sparsification schemes in crop yield prediction.
- Score: 27.415307133655407
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In agronomics, predicting crop yield at a per field/county granularity is
important for farmers to minimize uncertainty and plan seeding for the next
crop cycle. While state-of-the-art prediction techniques employ graph
convolutional nets (GCN) to predict future crop yields given relevant features
and crop yields of previous years, a dense underlying graph kernel requires
long training and execution time. In this paper, we propose a graph
sparsification method based on the Fiedler number to remove edges from a
complete graph kernel, in order to lower the complexity of GCN
training/execution. Specifically, we first show that greedily removing an edge
at a time that induces the minimal change in the second eigenvalue leads to a
sparse graph with good GCN performance. We then propose a fast method to choose
an edge for removal per iteration based on an eigenvalue perturbation theorem.
Experiments show that our Fiedler-based method produces a sparse graph with
good GCN performance compared to other graph sparsification schemes in crop
yield prediction.
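To make the greedy criterion concrete: by standard first-order eigenvalue perturbation, removing an edge (i, j) of weight w changes the graph Laplacian by -w (e_i - e_j)(e_i - e_j)^T, so the Fiedler number lambda_2 shifts by roughly -w (v_2(i) - v_2(j))^2, where v_2 is the Fiedler vector. Below is a minimal illustrative sketch of this greedy rule; it is not the authors' released code, and the function name, API, and stopping rule are assumptions.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian

def fiedler_sparsify(W, num_edges_to_remove):
    """Greedily remove the edge whose removal least perturbs lambda_2 (sketch).

    W: symmetric non-negative (n x n) adjacency matrix of the graph kernel.
    Uses the first-order estimate |delta lambda_2| ~ w_ij * (v2[i] - v2[j])**2;
    cut edges score high (removing one would collapse lambda_2 toward 0),
    so taking the argmin naturally avoids disconnecting the graph.
    """
    W = W.astype(float).copy()
    for _ in range(num_edges_to_remove):
        L = laplacian(W)                    # combinatorial Laplacian L = D - W
        _, vecs = eigh(L)                   # eigenvalues ascending; dense is fine for small n
        v2 = vecs[:, 1]                     # Fiedler vector, eigenvector of lambda_2
        iu, ju = np.triu_indices_from(W, k=1)
        present = W[iu, ju] > 0             # candidate edges still in the graph
        if not present.any():
            break
        ci, cj = iu[present], ju[present]
        score = W[ci, cj] * (v2[ci] - v2[cj]) ** 2  # estimated |shift| in lambda_2
        k = np.argmin(score)
        i, j = ci[k], cj[k]
        W[i, j] = W[j, i] = 0.0             # drop the least-disruptive edge
    return W
```

Recomputing a full eigendecomposition every iteration costs O(n^3) per edge; the paper's fast variant sidesteps this via the perturbation theorem, and in practice a sparse eigensolver such as scipy.sparse.linalg.eigsh could return only the two smallest eigenpairs instead.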
Related papers
- Graph Sparsification for Enhanced Conformal Prediction in Graph Neural Networks [5.896352342095999]
Conformal Prediction is a robust framework that ensures reliable coverage across machine learning tasks.
SparGCP incorporates graph sparsification and a conformal prediction-specific objective into GNN training.
Experiments on real-world graph datasets demonstrate that SparGCP outperforms existing methods.
arXiv Detail & Related papers (2024-10-28T23:53:51Z)
- Spatiotemporal Forecasting Meets Efficiency: Causal Graph Process Neural Networks [5.703629317205571]
Causal Graph Processes (CGPs) offer an alternative, using graph filters instead of relational field layers to reduce parameters and minimize memory consumption.
This paper introduces CGProNet, a non-linear model combining CGPs and GNNs for spatiotemporal forecasting. CGProNet employs higher-order graph filters, optimizing the model with fewer parameters, reducing memory usage, and improving runtime efficiency.
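For intuition, a higher-order graph filter applies a polynomial of a graph shift operator (e.g., the Laplacian L) to a signal, so only repeated matrix-vector products are needed. A minimal sketch under that assumption; CGProNet's actual filter parameterization may differ:

```python
import numpy as np

def polynomial_graph_filter(L, x, h):
    """Apply a higher-order graph filter y = sum_k h[k] * L^k x (sketch).

    L: (n x n) graph Laplacian (or another graph shift operator).
    x: (n,) graph signal; h: filter taps for orders 0..K.
    Computed iteratively, so only K matrix-vector products are needed.
    """
    y = np.zeros_like(x, dtype=float)
    Lk_x = x.astype(float)          # L^0 x
    for hk in h:
        y += hk * Lk_x
        Lk_x = L @ Lk_x             # advance to L^{k+1} x
    return y
```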
arXiv Detail & Related papers (2024-05-29T08:37:48Z)
- ADEdgeDrop: Adversarial Edge Dropping for Robust Graph Neural Networks [53.41164429486268]
Graph Neural Networks (GNNs) have exhibited the powerful ability to gather graph-structured information from neighborhood nodes.
The performance of GNNs is limited by poor generalization and fragile robustness caused by noisy and redundant graph data.
We propose a novel adversarial edge-dropping method (ADEdgeDrop) in which an adversarial edge predictor guides the removal of edges.
arXiv Detail & Related papers (2024-03-14T08:31:39Z)
- Two Heads Are Better Than One: Boosting Graph Sparse Training via Semantic and Topological Awareness [80.87683145376305]
Graph Neural Networks (GNNs) excel in various graph learning tasks but face computational challenges when applied to large-scale graphs.
We propose Graph Sparse Training (GST), which dynamically manipulates sparsity at the data level.
GST produces a sparse graph with maximum topological integrity and no performance degradation.
arXiv Detail & Related papers (2024-02-02T09:10:35Z)
- Learning Large Graph Property Prediction via Graph Segment Training [61.344814074335304]
We propose a general framework that enables learning large-graph property prediction with a constant memory footprint.
We refine the GST paradigm by introducing a historical embedding table to efficiently obtain embeddings for segments not sampled for backpropagation.
Our experiments show that GST-EFD is both memory-efficient and fast, while offering a slight boost on test accuracy over a typical full graph training regime.
arXiv Detail & Related papers (2023-05-21T02:53:25Z)
- DiP-GNN: Discriminative Pre-Training of Graph Neural Networks [49.19824331568713]
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs.
One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them.
In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges.
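As a sketch of the edge-masking step only (illustrative; DiP-GNN's generator/discriminator training is not shown, and the helper name below is hypothetical):

```python
import numpy as np

def mask_edges(edge_index, mask_ratio, seed=0):
    """Randomly hide a proportion of edges for recovery-style pre-training.

    edge_index: (2, E) array of (src, dst) pairs. Returns the visible edges
    (input to the GNN) and the masked edges (the recovery targets).
    """
    rng = np.random.default_rng(seed)
    hidden = rng.random(edge_index.shape[1]) < mask_ratio
    return edge_index[:, ~hidden], edge_index[:, hidden]
```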
arXiv Detail & Related papers (2022-09-15T17:41:50Z)
- Unsupervised Graph Spectral Feature Denoising for Crop Yield Prediction [27.604637365723676]
Prediction of annual crop yields at a county granularity is important for national food production and price stability.
We denoise the relevant features, which are inputs to a deep learning prediction model, via graph spectral filtering.
Using denoised features as input, performance of a crop yield prediction model can be improved noticeably.
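A minimal sketch of graph spectral low-pass denoising, assuming an ideal cutoff filter (the paper's actual filter design may differ):

```python
import numpy as np
from scipy.linalg import eigh

def spectral_lowpass_denoise(L, X, cutoff):
    """Denoise node features by graph spectral low-pass filtering (sketch).

    L: (n x n) graph Laplacian; X: (n x d) feature matrix, one signal per column.
    Keeps graph-frequency components with eigenvalue <= cutoff, assuming the
    useful signal is smooth on the graph and noise dominates high frequencies.
    """
    vals, U = eigh(L)                   # graph Fourier basis (eigenvectors of L)
    H = (vals <= cutoff).astype(float)  # ideal low-pass frequency response
    X_hat = U.T @ X                     # graph Fourier transform of the features
    return U @ (H[:, None] * X_hat)     # filter, then transform back
```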
arXiv Detail & Related papers (2022-08-04T15:18:06Z)
- Graph Condensation via Receptive Field Distribution Matching [61.71711656856704]
This paper focuses on creating a small graph to represent the original graph, so that GNNs trained on the size-reduced graph can make accurate predictions.
We view the original graph as a distribution of receptive fields and aim to synthesize a small graph whose receptive fields share a similar distribution.
arXiv Detail & Related papers (2022-06-28T02:10:05Z)
- Scaling Up Graph Neural Networks Via Graph Coarsening [18.176326897605225]
Scalability of graph neural networks (GNNs) is one of the major challenges in machine learning.
In this paper, we propose to use graph coarsening for scalable training of GNNs.
We show that, by simply applying off-the-shelf coarsening methods, we can reduce the number of nodes by up to a factor of ten without causing a noticeable degradation in classification accuracy.
arXiv Detail & Related papers (2021-06-09T15:46:17Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
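For reference, a sketch of the effective-resistance computation underlying this style of spectral sparsification, where each edge is kept with probability proportional to w_e * R_e (Spielman-Srivastava style); this is not FastGAT's exact procedure:

```python
import numpy as np

def effective_resistances(L, edges):
    """Effective resistance R_e for each edge, via the Laplacian pseudoinverse.

    L: (n x n) graph Laplacian; edges: iterable of (i, j) pairs.
    Uses R_ij = Lp[i,i] + Lp[j,j] - 2*Lp[i,j]; high-resistance edges are the
    structurally important ones and should be kept during sparsification.
    """
    Lp = np.linalg.pinv(L)              # Moore-Penrose pseudoinverse of L
    return np.array([Lp[i, i] + Lp[j, j] - 2 * Lp[i, j] for i, j in edges])
```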
arXiv Detail & Related papers (2020-06-15T22:07:54Z)