Generic and Trend-aware Curriculum Learning for Relation Extraction in
Graph Neural Networks
- URL: http://arxiv.org/abs/2205.08625v1
- Date: Tue, 17 May 2022 20:46:02 GMT
- Title: Generic and Trend-aware Curriculum Learning for Relation Extraction in
Graph Neural Networks
- Authors: Nidhi Vakil and Hadi Amiri
- Abstract summary: We present a generic and trend-aware curriculum learning approach for graph neural networks.
It extends existing approaches by incorporating sample-level loss trends to better discriminate easier from harder samples and schedule them for training.
- Score: 12.335698325757491
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present a generic and trend-aware curriculum learning approach for graph
neural networks. It extends existing approaches by incorporating sample-level
loss trends to better discriminate easier from harder samples and schedule them
for training. The model effectively integrates textual and structural
information for relation extraction in text graphs. Experimental results show
that the model provides robust estimates of sample difficulty and yields
sizable improvements over state-of-the-art approaches across several
datasets.
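
The abstract does not spell out how the loss trends are computed or how the schedule is built, so the following is only a minimal sketch of the general idea: keep a short per-sample loss history, use its slope as the trend, combine the trend with the current loss into a difficulty score, and reveal the easiest fraction of samples first under a simple linear pacing function. The class, parameter, and function names (TrendAwareCurriculum, window, trend_weight) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class TrendAwareCurriculum:
    """Minimal sketch: rank samples by current loss plus the slope (trend)
    of their recent losses, then reveal the easiest samples first."""

    def __init__(self, num_samples, window=5, trend_weight=0.5):
        self.window = window              # how many recent epochs to keep per sample
        self.trend_weight = trend_weight  # weight of the loss trend vs. the current loss
        self.history = [[] for _ in range(num_samples)]

    def record(self, sample_ids, losses):
        """Store this epoch's per-sample losses (called once per epoch)."""
        for i, loss in zip(sample_ids, losses):
            self.history[i] = (self.history[i] + [float(loss)])[-self.window:]

    def difficulty(self, i):
        """Difficulty = current loss + weighted slope of the recent loss curve."""
        h = self.history[i]
        if not h:
            return float("inf")                      # unseen samples are treated as hard
        slope = np.polyfit(range(len(h)), h, 1)[0] if len(h) > 1 else 0.0
        return h[-1] + self.trend_weight * slope

    def schedule(self, epoch, total_epochs, start_frac=0.2):
        """Return the sample ids to train on this epoch: the easiest fraction,
        growing linearly from start_frac to 1.0 (a simple pacing function)."""
        scores = np.array([self.difficulty(i) for i in range(len(self.history))])
        order = np.argsort(scores)                   # easy -> hard
        frac = min(1.0, start_frac + (1 - start_frac) * epoch / max(1, total_epochs - 1))
        return order[: max(1, int(frac * len(order)))]
```

In a training loop one would call record() with the per-sample losses after each epoch and schedule() before the next one; samples whose losses are both high and rising are deferred to later epochs.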
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- CORE: Data Augmentation for Link Prediction via Information Bottleneck [25.044734252779975]
Link prediction (LP) is a fundamental task in graph representation learning.
We propose a novel data augmentation method, COmplete and REduce (CORE), to learn compact and predictive augmentations for LP models.
arXiv Detail & Related papers (2024-04-17T03:20:42Z)
- PAC Learnability under Explanation-Preserving Graph Perturbations [15.83659369727204]
Graph neural networks (GNNs) operate over graphs, enabling the model to leverage the complex relationships and dependencies in graph-structured data.
A graph explanation is a subgraph which is an "almost sufficient" statistic of the input graph with respect to its classification label.
This work considers two methods for leveraging such perturbation invariances in the design and training of GNNs.
arXiv Detail & Related papers (2024-02-07T17:23:15Z)
- Globally Interpretable Graph Learning via Distribution Matching [12.885580925389352]
We aim to answer an important question that is not yet well studied: how to provide a global interpretation for the graph learning procedure?
We formulate this problem as globally interpretable graph learning, which aims to distill high-level, human-intelligible patterns that dominate the learning procedure.
We propose a novel model fidelity metric, tailored for evaluating the fidelity of the resulting model trained on interpretations.
arXiv Detail & Related papers (2023-06-18T00:50:36Z)
- Optimal Propagation for Graph Neural Networks [51.08426265813481]
We propose a bi-level optimization approach for learning the optimal graph structure.
We also explore a low-rank approximation model for further reducing the time complexity.
arXiv Detail & Related papers (2022-05-06T03:37:00Z)
- Addressing Bias in Visualization Recommenders by Identifying Trends in
Training Data: Improving VizML Through a Statistical Analysis of the Plotly
Community Feed [55.41644538483948]
Machine learning is a promising approach to visualization recommendation due to its high scalability and representational power.
Our research project aims to address training bias in machine learning visualization recommendation systems by identifying trends in the training data through statistical analysis.
arXiv Detail & Related papers (2022-03-09T18:36:46Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning
Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., a feedforward neural net) as a lower model that takes features as input and outputs predicted labels; 2) a graph neural network as an upper model that learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data (a minimal sketch of this two-module setup follows this entry).
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
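
A minimal sketch of the two-module idea from the entry above, assuming a simple normalised-averaging form of message passing over a sample-feature bipartite graph and a linear readout as the lower backbone; the variable names, the averaging rule, and the readout are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing(X, feat_emb, rounds=2):
    """Propagate embeddings over the feature-data bipartite graph:
    samples average the embeddings of their observed features, and
    features average the embeddings of the samples they occur in."""
    A = (X != 0).astype(float)                                   # samples x features adjacency
    for _ in range(rounds):
        sample_emb = A @ feat_emb / np.maximum(A.sum(1, keepdims=True), 1)
        feat_emb = A.T @ sample_emb / np.maximum(A.sum(0)[:, None], 1)
    return sample_emb, feat_emb

# toy setup: 6 samples, 4 features observed at training time
X_train = rng.integers(0, 2, size=(6, 4)).astype(float)
feat_emb = rng.normal(size=(4, 8))                               # learned jointly in the real model
sample_emb, feat_emb = message_passing(X_train, feat_emb)

# "lower" backbone model: here just a linear readout on sample embeddings
W = rng.normal(size=(8, 2))
logits = sample_emb @ W

# at test time a new, previously unseen feature (5th column) appears;
# its embedding starts at zero and is extrapolated by message passing
X_test = rng.integers(0, 2, size=(6, 5)).astype(float)
feat_emb_ext = np.vstack([feat_emb, np.zeros((1, 8))])
test_sample_emb, feat_emb_ext = message_passing(X_test, feat_emb_ext)
test_logits = test_sample_emb @ W                                # same backbone, extended features
```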
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embeddings from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Quantifying Challenges in the Application of Graph Representation
Learning [0.0]
We provide an application-oriented perspective on a set of popular embedding approaches.
We evaluate their representational power with respect to real-world graph properties.
Our results suggest that "one-to-fit-all" GRL approaches are hard to define in real-world scenarios.
arXiv Detail & Related papers (2020-06-18T03:19:43Z)
- Comparison of Syntactic and Semantic Representations of Programs in
Neural Embeddings [1.0878040851638]
It compares graph convolutional networks using different graph representations in the task of program embedding.
It shows that the sparsity of control flow graphs and the implicit aggregation of graph convolutional networks cause these models to perform worse than naive models.
arXiv Detail & Related papers (2020-01-24T21:30:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.