Physics-Informed Graph Learning: A Survey
- URL: http://arxiv.org/abs/2202.10679v1
- Date: Tue, 22 Feb 2022 05:46:24 GMT
- Title: Physics-Informed Graph Learning: A Survey
- Authors: Ciyuan Peng, Feng Xia, Vidya Saikrishna, Huan Liu
- Abstract summary: We introduce a unified framework of graph learning models, and then examine existing PIGL methods in relation to the unified framework.
This survey paper is expected to stimulate innovative research and development activities pertaining to PIGL.
- Score: 25.474725468416118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph learning has developed rapidly in recent years and found
innumerable applications across diverse fields. Among its main challenges are
the volume and complexity of graph data. Much research has focused on
preserving graph data in a low-dimensional space, yet graph learning models
often fail to retain the original graph information. To compensate for this
shortcoming, physics-informed graph learning (PIGL) is emerging. PIGL
incorporates physics rules while performing graph learning, which opens up
numerous possibilities. This paper presents a systematic review of PIGL
methods. We begin by introducing a unified framework of graph learning models
and then examine existing PIGL methods in relation to this framework. We also
discuss several future challenges for PIGL. This survey is expected to
stimulate innovative research and development activities pertaining to PIGL.
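To make the core idea concrete, below is a minimal, hypothetical sketch of a physics-informed loss for graph learning. It is not a method described in the survey: the function names are invented, and graph-Laplacian smoothness (a discrete analogue of a diffusion constraint) stands in for whatever physics prior a real PIGL model would use. The loss combines a data-fitting term with a penalty on predictions that violate the physics rule.

```python
import numpy as np

def graph_laplacian(adj):
    # Combinatorial graph Laplacian L = D - A,
    # where D is the diagonal degree matrix of adjacency A.
    deg = np.diag(adj.sum(axis=1))
    return deg - adj

def physics_informed_loss(pred, target, adj, lam=0.1):
    # Data term: mean squared error against observed node values.
    data_loss = np.mean((pred - target) ** 2)
    # Physics term: x^T L x penalizes non-smooth signals on the graph,
    # a toy stand-in for a physics residual (e.g. a discretized PDE).
    L = graph_laplacian(adj)
    physics_loss = pred @ L @ pred / len(pred)
    # Weighted sum; lam balances data fidelity against the physics prior.
    return data_loss + lam * physics_loss

# Illustrative usage on a 3-node path graph: a constant (physically smooth)
# prediction incurs a lower loss than an oscillating one.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
target = np.array([1.0, 1.0, 1.0])
smooth_loss = physics_informed_loss(np.array([1.0, 1.0, 1.0]), target, adj)
rough_loss = physics_informed_loss(np.array([1.0, 0.0, 1.0]), target, adj)
```

In an actual PIGL model, `pred` would come from a graph neural network, the physics term would encode the governing equations of the system being modeled, and both terms would be minimized jointly by gradient descent.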
Related papers
- Parametric Graph Representations in the Era of Foundation Models: A Survey and Position [69.48708136448694]
Graphs have been widely used in the past decades of big data and AI to model comprehensive relational data.
Identifying meaningful graph laws can significantly enhance the effectiveness of various applications.
arXiv Detail & Related papers (2024-10-16T00:01:31Z)
- Continual Learning on Graphs: Challenges, Solutions, and Opportunities [72.7886669278433]
We provide a comprehensive review of existing continual graph learning (CGL) algorithms.
We compare CGL methods with traditional continual learning techniques and analyze the applicability of those techniques to forgetting tasks.
We will maintain an up-to-date repository featuring a comprehensive list of accessible algorithms.
arXiv Detail & Related papers (2024-02-18T12:24:45Z) - A Survey of Data-Efficient Graph Learning [16.053913182723143]
We introduce a novel concept of Data-Efficient Graph Learning (DEGL) as a research frontier.
We systematically review recent advances on several key aspects, including self-supervised graph learning, semi-supervised graph learning, and few-shot graph learning.
arXiv Detail & Related papers (2024-02-01T09:28:48Z) - Graph Domain Adaptation: Challenges, Progress and Prospects [61.9048172631524]
We propose graph domain adaptation as an effective knowledge-transfer paradigm across graphs.
GDA introduces a set of task-related graphs as source graphs and adapts the knowledge learned from the source graphs to the target graphs.
We outline the research status and challenges, propose a taxonomy, introduce the details of representative works, and discuss the prospects.
arXiv Detail & Related papers (2024-02-01T02:44:32Z) - Continual Graph Learning: A Survey [4.618696834991205]
Research on continual learning (CL) mainly focuses on data represented in the Euclidean space.
Most graph learning models are tailored for static graphs.
Catastrophic forgetting also emerges in graph learning models when they are trained incrementally.
arXiv Detail & Related papers (2023-01-28T15:42:49Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - Graph Learning and Its Advancements on Large Language Models: A Holistic Survey [37.01696685233113]
This survey focuses on the most recent advancements in integrating graph learning with pre-trained language models.
We provide a holistic review that analyzes current works from the perspective of graph structure, and discusses the latest applications, trends, and challenges in graph learning.
arXiv Detail & Related papers (2022-12-17T22:05:07Z) - Data Augmentation for Deep Graph Learning: A Survey [66.04015540536027]
We first propose a taxonomy for graph data augmentation and then provide a structured review by categorizing the related work based on the augmented information modalities.
Focusing on the two challenging problems in DGL (i.e., optimal graph learning and low-resource graph learning), we also discuss and review the existing learning paradigms which are based on graph data augmentation.
arXiv Detail & Related papers (2022-02-16T18:30:33Z) - Iterative Deep Graph Learning for Graph Neural Networks: Better and
Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.