Approximate Network Motif Mining Via Graph Learning
- URL: http://arxiv.org/abs/2206.01008v1
- Date: Thu, 2 Jun 2022 12:15:05 GMT
- Title: Approximate Network Motif Mining Via Graph Learning
- Authors: Carlos Oliver, Dexiong Chen, Vincent Mallet, Pericles Philippopoulos,
Karsten Borgwardt
- Abstract summary: Frequent and structurally related subgraphs, also known as network motifs, are valuable features of many graph datasets.
The high computational complexity of identifying motif sets in arbitrary datasets (motif mining) has limited their use in many real-world datasets.
By automatically leveraging statistical properties of datasets, machine learning approaches have shown promise in several tasks with combinatorial complexity.
- Score: 4.2873412319680035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Frequent and structurally related subgraphs, also known as network motifs,
are valuable features of many graph datasets. However, the high computational
complexity of identifying motif sets in arbitrary datasets (motif mining) has
limited their use in many real-world datasets. By automatically leveraging
statistical properties of datasets, machine learning approaches have shown
promise in several tasks with combinatorial complexity and are therefore a
promising candidate for network motif mining. In this work we seek to
facilitate the development of machine learning approaches aimed at motif
mining. We propose a formulation of the motif mining problem as a node
labelling task. In addition, we build benchmark datasets and evaluation metrics
which test the ability of models to capture different aspects of motif
discovery such as motif number, size, topology, and scarcity. Next, we propose
MotiFiesta, a first attempt at solving this problem in a fully differentiable
manner with promising results on challenging baselines. Finally, we demonstrate
through MotiFiesta that this learning setting can be applied simultaneously to
general-purpose data mining and interpretable feature extraction for graph
classification tasks.
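As a rough illustration of the node-labelling formulation (a minimal sketch, not the paper's MotiFiesta implementation): each node can be labelled by whether it lies in an occurrence of a query motif, which is the kind of per-node target a learned model would predict. The brute-force subgraph-isomorphism search below, using networkx, stands in for the mining step; the function name motif_node_labels and the toy graphs are illustrative assumptions.

# Minimal sketch of motif mining as node labelling (illustrative, not MotiFiesta).
# Every node of `graph` gets label 1 if it participates in at least one
# occurrence of `motif`, and 0 otherwise.
import networkx as nx
from networkx.algorithms import isomorphism


def motif_node_labels(graph: nx.Graph, motif: nx.Graph) -> dict:
    labels = {v: 0 for v in graph.nodes}
    matcher = isomorphism.GraphMatcher(graph, motif)
    for mapping in matcher.subgraph_isomorphisms_iter():
        for v in mapping:           # keys of the mapping are nodes of `graph`
            labels[v] = 1
    return labels


if __name__ == "__main__":
    g = nx.barbell_graph(4, 2)      # two 4-cliques joined by a 2-node path
    triangle = nx.cycle_graph(3)    # query motif: a triangle
    print(motif_node_labels(g, triangle))   # clique nodes -> 1, path nodes -> 0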
Related papers
- Dual-level Mixup for Graph Few-shot Learning with Fewer Tasks [23.07584018576066]
We propose a SiMple yet effectIve approach for graph few-shot Learning with fEwer tasks, named SMILE.
We introduce a dual-level mixup strategy, encompassing both within-task and across-task mixup, to simultaneously enrich the available nodes and tasks in meta-learning.
Empirically, SMILE consistently outperforms other competitive models by a large margin across all evaluated datasets in both in-domain and cross-domain settings.
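The dual-level mixup above can be illustrated with a generic embedding-level mixup sketch (standard mixup, not necessarily SMILE's exact formulation): two batches of node embeddings and their labels are interpolated with a weight drawn from a Beta distribution; pairing nodes from the same task gives within-task mixup, pairing nodes from different tasks gives across-task mixup. Shapes and names here are illustrative assumptions.

# Generic mixup on node embeddings (illustrative sketch, not SMILE's code).
import torch


def mixup(x_a, y_a, x_b, y_b, alpha: float = 0.5):
    # Interpolate two batches of embeddings and their one-hot labels.
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x_a + (1.0 - lam) * x_b, lam * y_a + (1.0 - lam) * y_b


# Toy usage: two support sets of 5 nodes with 16-d embeddings and 3 classes.
x1, x2 = torch.randn(5, 16), torch.randn(5, 16)
y1 = torch.nn.functional.one_hot(torch.randint(0, 3, (5,)), 3).float()
y2 = torch.nn.functional.one_hot(torch.randint(0, 3, (5,)), 3).float()
x_mix, y_mix = mixup(x1, y1, x2, y2)
print(x_mix.shape, y_mix.shape)   # torch.Size([5, 16]) torch.Size([5, 3])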
arXiv Detail & Related papers (2025-02-19T23:59:05Z)
- Revisiting Graph Neural Networks on Graph-level Tasks: Comprehensive Experiments, Analysis, and Improvements [54.006506479865344]
We propose a unified evaluation framework for graph-level Graph Neural Networks (GNNs).
This framework provides a standardized setting to evaluate GNNs across diverse datasets.
We also propose a novel GNN model with enhanced expressivity and generalization capabilities.
arXiv Detail & Related papers (2025-01-01T08:48:53Z)
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We introduce a novel approach for learning cross-task generalities in graphs.
We propose task-trees as basic learning instances to align task spaces on graphs.
Our findings indicate that when a graph neural network is pretrained on diverse task-trees, it acquires transferable knowledge.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
- Learning From Graph-Structured Data: Addressing Design Issues and Exploring Practical Applications in Graph Representation Learning [2.492884361833709]
We present an exhaustive review of the latest advancements in graph representation learning and Graph Neural Networks (GNNs).
GNNs, tailored to handle graph-structured data, excel in deriving insights and predictions from intricate relational information.
Our work delves into the capabilities of GNNs, examining their foundational designs and their application in addressing real-world challenges.
arXiv Detail & Related papers (2024-11-09T19:10:33Z)
- How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, covering a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z)
- A Model-Agnostic Graph Neural Network for Integrating Local and Global Information [2.6164652182042505]
Graph Neural Networks (GNNs) have achieved promising performance in a variety of graph-focused tasks.
Existing GNNs suffer from two significant limitations: a lack of interpretability in their results due to their black-box nature, and an inability to learn representations of varying orders.
We propose a novel Model-agnostic Graph Neural Network (MaGNet) framework, which is able to effectively integrate information of various orders, extract knowledge from high-order neighbors, and provide meaningful and interpretable results by identifying influential compact graph structures.
arXiv Detail & Related papers (2023-09-23T19:07:03Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Graph Prototypical Networks for Few-shot Learning on Attributed Networks [72.31180045017835]
We propose a graph meta-learning framework, Graph Prototypical Networks (GPN).
GPN is able to perform meta-learning on an attributed network and derive a highly generalizable model for handling the target classification task.
arXiv Detail & Related papers (2020-06-23T04:13:23Z)
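The prototypical-network idea behind GPN can be sketched generically (an assumption-laden illustration, not the GPN authors' code): each class prototype is the mean embedding of its support nodes, and query nodes are assigned to the class of the nearest prototype. In GPN the embeddings would come from a GNN over the attributed network; plain random tensors are used here for brevity.

# Generic prototypical-network classification sketch (illustrative, not GPN's code).
import torch


def prototypes(support_emb, support_y):
    # One prototype per class: the mean embedding of that class's support nodes.
    return torch.stack([support_emb[support_y == c].mean(dim=0)
                        for c in support_y.unique()])


def classify(query_emb, protos):
    # Assign each query embedding to the class of the nearest prototype.
    return torch.cdist(query_emb, protos).argmin(dim=1)


# Toy usage: 2 classes, 3 support nodes each, 8-d embeddings, 4 query nodes.
support = torch.randn(6, 8)
labels = torch.tensor([0, 0, 0, 1, 1, 1])
queries = torch.randn(4, 8)
print(classify(queries, prototypes(support, labels)))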
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.