Alternately Optimized Graph Neural Networks
- URL: http://arxiv.org/abs/2206.03638v4
- Date: Wed, 19 Jul 2023 06:43:10 GMT
- Title: Alternately Optimized Graph Neural Networks
- Authors: Haoyu Han, Xiaorui Liu, Haitao Mao, MohamadAli Torkamani, Feng Shi,
Victor Lee, Jiliang Tang
- Abstract summary: We propose a new optimization framework for semi-supervised learning on graphs.
The proposed framework can be conveniently solved by alternating optimization algorithms, resulting in significantly improved efficiency.
- Score: 33.98939289745346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have greatly advanced the semi-supervised node
classification task on graphs. The majority of existing GNNs are trained in an
end-to-end manner that can be viewed as tackling a bi-level optimization
problem. This process is often inefficient in computation and memory usage. In
this work, we propose a new optimization framework for semi-supervised learning
on graphs. The proposed framework can be conveniently solved by alternating
optimization algorithms, resulting in significantly improved efficiency.
Extensive experiments demonstrate that the proposed method achieves performance
comparable to or better than state-of-the-art baselines while offering
significantly better computation and memory efficiency.
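To make the alternating scheme concrete, here is a minimal, hypothetical PyTorch sketch of one way such a framework could alternate between propagating pseudo-labels over the graph and fitting a feature transformation; the names (mlp, A_hat, prop_steps) and the specific propagation rule are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def alternating_optimization(X, A_hat, Y, labeled_mask, mlp,
                             epochs=100, prop_steps=10, alpha=0.9, lr=0.01):
    """Alternate between (1) propagating pseudo-labels with the MLP fixed
    and (2) fitting the MLP to the propagated targets. A_hat is assumed to
    be a normalized adjacency matrix and Y a one-hot label matrix."""
    opt = torch.optim.Adam(mlp.parameters(), lr=lr)
    for _ in range(epochs):
        # Step 1: gradient-free label propagation (cheap sparse ops only)
        with torch.no_grad():
            H = torch.softmax(mlp(X), dim=1)
            Z = H
            for _ in range(prop_steps):
                Z = alpha * (A_hat @ Z) + (1 - alpha) * H
            Z[labeled_mask] = Y[labeled_mask]  # clamp known labels
        # Step 2: gradient step on the MLP against propagated pseudo-labels
        opt.zero_grad()
        loss = F.cross_entropy(mlp(X), Z.argmax(dim=1))
        loss.backward()
        opt.step()
    return Z
```

The intended efficiency win, consistent with the abstract, is that backpropagation only ever touches the small MLP, while graph propagation runs without tracking gradients.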
Related papers
- Efficient and Effective Implicit Dynamic Graph Neural Network [42.49148111696576]
We present Implicit Dynamic Graph Neural Network (IDGNN), a novel implicit neural network for dynamic graphs.
A key characteristic of IDGNN is that it is demonstrably well-posed, i.e., it is theoretically guaranteed to have a fixed-point representation.
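Since the summary highlights well-posedness via a guaranteed fixed point, a generic sketch of how an implicit layer can compute such a representation by fixed-point iteration may help; this is not IDGNN's actual architecture, and the contraction rescaling of W stands in, as an assumption, for the paper's well-posedness condition.

```python
import numpy as np

def implicit_layer(A_hat, X, W, U, tol=1e-6, max_iter=500):
    """Solve Z = tanh(W @ Z @ A_hat + U @ X) by fixed-point iteration.
    Rescaling W to be a contraction (and assuming A_hat has spectral
    norm <= 1) guarantees the iteration converges to a unique Z."""
    W = W / max(1.0, 1.1 * np.linalg.norm(W, 2))  # enforce contraction
    B = U @ X  # input injection, constant across iterations
    Z = np.zeros_like(B)
    for _ in range(max_iter):
        Z_next = np.tanh(W @ Z @ A_hat + B)
        if np.linalg.norm(Z_next - Z) < tol:
            return Z_next
        Z = Z_next
    return Z
```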
arXiv Detail & Related papers (2024-06-25T19:07:21Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
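A hedged sketch of the general idea, assuming each relation contributes a one-time aggregated view that is then compressed by a random (Johnson-Lindenstrauss style) projection; the function name and the per-relation inputs are hypothetical, not RpHGNN's API.

```python
import numpy as np

def precompute_views(adjs, feats, out_dim, seed=0):
    """One-time message passing per relation, compressed by a random
    projection so all views share a small uniform width. `adjs` pairs each
    relation's (target x source) adjacency with its source features; all
    relations are assumed to share the same target node set."""
    rng = np.random.default_rng(seed)
    views = []
    for A_r, X_r in zip(adjs, feats):
        M = A_r @ X_r                                      # aggregate once
        R = rng.normal(0.0, 1.0 / np.sqrt(out_dim),
                       size=(M.shape[1], out_dim))         # JL-style sketch
        views.append(M @ R)
    return np.concatenate(views, axis=1)  # regular tensor for a plain MLP
```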
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
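As a generic illustration of alignment from pre-trained embeddings (not T-GAE's exact procedure), one could match the nodes of two graphs by solving an assignment problem over embedding similarities:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_from_embeddings(Z1, Z2):
    """Match nodes of graph 1 to graph 2 by maximizing total embedding
    similarity; Z1, Z2 come from a shared, already-trained encoder."""
    cost = -(Z1 @ Z2.T)                   # higher similarity = lower cost
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))
```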
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics based on minimizing the population loss, which are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- GPN: A Joint Structural Learning Framework for Graph Neural Networks [36.38529113603987]
We propose a GNN-based joint learning framework that simultaneously learns the graph structure and the downstream task.
Our method is the first GNN-based bi-level optimization framework for this task.
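A first-order sketch of one bilevel-style step, alternating a weight update on training nodes with a structure update on validation nodes; exact bilevel methods use hypergradients, so this simplification and all names (gnn, S_logits) are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def bilevel_step(gnn, S_logits, X, y, idx_train, idx_val, opt_w, opt_s):
    """One alternating step: fit GNN weights on training nodes, then
    update a learnable (relaxed) adjacency from the validation loss.
    opt_w optimizes gnn's parameters; opt_s optimizes [S_logits]."""
    # Inner problem: weight update on the current structure
    opt_w.zero_grad()
    A = torch.sigmoid(S_logits)                 # dense relaxed adjacency
    F.cross_entropy(gnn(X, A)[idx_train], y[idx_train]).backward()
    opt_w.step()
    # Outer problem: structure update driven by validation nodes
    opt_s.zero_grad()
    A = torch.sigmoid(S_logits)
    F.cross_entropy(gnn(X, A)[idx_val], y[idx_val]).backward()
    opt_s.step()
```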
arXiv Detail & Related papers (2022-05-12T09:06:04Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
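To illustrate how a low-rank approximation can cut the propagation cost, here is a minimal sketch assuming a rank-k SVD of the propagation matrix; it is not the paper's specific model.

```python
import numpy as np

def low_rank_propagate(A, H, k=32, steps=10, alpha=0.9):
    """Propagate H with a rank-k surrogate of A, so each step costs
    O(n*k*c) instead of O(n^2*c) for n nodes and c channels."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk = U[:, :k] * s[:k]        # fold singular values into the left factor
    Vk = Vt[:k, :]
    Z = H
    for _ in range(steps):
        Z = alpha * (Uk @ (Vk @ Z)) + (1 - alpha) * H
    return Z
```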
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Recurrent Graph Neural Network Algorithm for Unsupervised Network Community Detection [0.0]
This paper proposes a new variant of the recurrent graph neural network algorithm for unsupervised network community detection through modularity optimization.
The new algorithm's performance is compared against the popular and fast Louvain method and the more efficient but slower Combo algorithm recently proposed by the author.
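For reference, the modularity objective that such methods optimize can be computed directly from the adjacency matrix and a hard partition:

```python
import numpy as np

def modularity(A, labels):
    """Newman modularity Q of a hard partition: the fraction of edges
    inside communities, corrected by the expected fraction under a
    degree-preserving null model. labels[i] is node i's community."""
    m = A.sum() / 2.0                          # number of undirected edges
    k = A.sum(axis=1)                          # node degrees
    same_community = labels[:, None] == labels[None, :]
    return ((A - np.outer(k, k) / (2 * m)) * same_community).sum() / (2 * m)
```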
arXiv Detail & Related papers (2021-03-03T16:50:50Z)
- Optimization of Graph Neural Networks with Natural Gradient Descent [1.3477333339913569]
We develop optimization algorithms for graph-based semi-supervised learning by employing natural gradient information in the optimization process.
To the best of our knowledge, this is the first work that has utilized the natural gradient for the optimization of graph neural networks.
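A minimal sketch of a single natural-gradient step, using a damped empirical Fisher estimate as the preconditioner; the estimator and the damping are standard choices assumed here, not necessarily the paper's.

```python
import numpy as np

def natural_gradient_step(theta, grad, per_example_grads, lr=0.1, damping=1e-3):
    """One step preconditioned by a damped empirical Fisher estimate:
    theta <- theta - lr * (F + damping*I)^{-1} grad."""
    G = np.asarray(per_example_grads)          # shape: (batch, n_params)
    F_hat = G.T @ G / G.shape[0]               # empirical Fisher estimate
    F_hat += damping * np.eye(F_hat.shape[0])  # damping keeps it invertible
    return theta - lr * np.linalg.solve(F_hat, grad)
```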
arXiv Detail & Related papers (2020-08-21T18:00:53Z)
- Gumbel-softmax-based Optimization: A Simple General Framework for Optimization Problems on Graphs [5.486093983007419]
We propose a simple, fast, and general algorithm framework based on advanced automatic differentiation techniques provided by deep learning frameworks.
High-quality solutions can be obtained in much less time than with traditional approaches.
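A minimal sketch of the general recipe, assuming a caller-supplied differentiable objective over relaxed node states; the gumbel_softmax relaxation is the standard PyTorch primitive, while the loop details are illustrative.

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_optimize(objective, n_nodes, n_states=2,
                            steps=1000, tau=1.0, lr=0.1):
    """Relax a discrete per-node assignment with Gumbel-softmax and
    minimize a differentiable graph objective by gradient descent."""
    logits = torch.zeros(n_nodes, n_states, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = F.gumbel_softmax(logits, tau=tau, hard=False)  # soft one-hot
        objective(x).backward()
        opt.step()
    return logits.argmax(dim=1)                # decode a discrete solution
```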
arXiv Detail & Related papers (2020-04-14T14:11:00Z)
- Gradient Centralization: A New Optimization Technique for Deep Neural Networks [74.935141515523]
Gradient centralization (GC) operates directly on gradients by centralizing the gradient vectors to have zero mean.
GC can be viewed as a projected gradient descent method with a constrained loss function.
GC is very simple to implement and can be easily embedded into existing gradient based DNNs with only one line of code.
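Following the paper's description, the core operation centralizes each weight gradient to zero mean; the wrapper below is an illustrative sketch, while the centralization itself is essentially the advertised one line.

```python
import torch

def centralize_gradient(grad):
    """Gradient Centralization: subtract the mean over all dimensions of
    each output unit's gradient; for 1-D (bias) gradients it is a no-op."""
    if grad.dim() > 1:
        # The advertised "one line": remove the per-output-unit mean.
        grad = grad - grad.mean(dim=tuple(range(1, grad.dim())), keepdim=True)
    return grad

# Example usage inside a training step, just before optimizer.step():
# for p in model.parameters():
#     if p.grad is not None:
#         p.grad = centralize_gradient(p.grad)
```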
arXiv Detail & Related papers (2020-04-03T10:25:00Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems, including compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using conventional methods, and outperformed all state-of-the-art algorithms tested in our experiments.
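A hedged sketch of the loop the summary describes, with the DNN surrogate and the FEM solver abstracted as caller-supplied functions, since the paper's exact interfaces are not given here:

```python
import numpy as np

def solo_loop(fem_evaluate, sample_design, fit_surrogate, surrogate_argmin,
              n_init=20, n_rounds=50):
    """Self-directed online loop: refit a DNN surrogate on all FEM results
    so far, let it propose the next design, then verify that design with
    an actual (expensive) FEM evaluation."""
    designs = [sample_design() for _ in range(n_init)]
    scores = [fem_evaluate(d) for d in designs]      # expensive FEM calls
    for _ in range(n_rounds):
        model = fit_surrogate(np.array(designs), np.array(scores))
        candidate = surrogate_argmin(model)          # surrogate's optimum
        designs.append(candidate)
        scores.append(fem_evaluate(candidate))       # ground-truth check
    best = int(np.argmin(scores))
    return designs[best], scores[best]
```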
arXiv Detail & Related papers (2020-02-04T20:00:28Z)