Ensembling Graph Predictions for AMR Parsing
- URL: http://arxiv.org/abs/2110.09131v1
- Date: Mon, 18 Oct 2021 09:35:39 GMT
- Title: Ensembling Graph Predictions for AMR Parsing
- Authors: Hoang Thanh Lam, Gabriele Picco, Yufang Hou, Young-Suk Lee, Lam M.
Nguyen, Dzung T. Phan, Vanessa López, Ramon Fernandez Astudillo
- Abstract summary: In many machine learning tasks, models are trained to predict structured data such as graphs.
In this work, we formalize this problem as mining the largest graph that is the most supported by a collection of graph predictions.
We show that the proposed approach can combine the strengths of state-of-the-art AMR parsers to create new predictions that are more accurate than those of any individual model on five standard benchmark datasets.
- Score: 28.625065956013778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In many machine learning tasks, models are trained to predict structured data
such as graphs. For example, in natural language processing, it is very common
to parse texts into dependency trees or abstract meaning representation (AMR)
graphs. On the other hand, ensemble methods combine predictions from multiple
models to create a new one that is more robust and accurate than individual
predictions. In the literature, many ensembling techniques have been proposed
for classification or regression problems; however, ensemble graph prediction
has not been studied thoroughly. In this work, we formalize this problem as
mining the largest graph that is the most supported by a collection of graph
predictions. As the problem is NP-Hard, we propose an efficient heuristic
algorithm to approximate the optimal solution. To validate our approach, we
carried out experiments in AMR parsing problems. The experimental results
demonstrate that the proposed approach can combine the strengths of
state-of-the-art AMR parsers to create new predictions that are more accurate
than those of any individual model on five standard benchmark datasets.
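The paper's exact heuristic is not reproduced in this listing, but the core idea of mining the graph most supported by a collection of predictions can be sketched as a simple edge-voting scheme. The `ensemble_graphs` helper and the example AMR-style triples below are hypothetical illustrations under that assumption, not the authors' algorithm:

```python
from collections import Counter

def ensemble_graphs(predictions, min_support=None):
    """Keep each edge that appears in at least `min_support` of the
    predicted graphs (default: a strict majority).

    Each prediction is modeled as a set of (source, relation, target) triples.
    """
    if min_support is None:
        min_support = len(predictions) // 2 + 1  # strict majority
    counts = Counter(edge for graph in predictions for edge in graph)
    # The surviving edges form the "most supported" ensemble graph.
    return {edge for edge, n in counts.items() if n >= min_support}

# Three hypothetical graphs predicted by different parsers for the same sentence.
g1 = {("want-01", "ARG0", "boy"), ("want-01", "ARG1", "go-02")}
g2 = {("want-01", "ARG0", "boy"), ("want-01", "ARG1", "go-02"),
      ("go-02", "ARG0", "girl")}
g3 = {("want-01", "ARG0", "boy"), ("go-02", "ARG0", "boy")}

print(ensemble_graphs([g1, g2, g3]))
```

In this toy run, only the two edges supported by at least two of the three parsers survive; the real problem is harder because node labels must also be aligned across predictions, which is what makes it NP-hard.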
Related papers
- Ranking and Combining Latent Structured Predictive Scores without Labeled Data [2.5064967708371553]
This paper introduces a novel structured unsupervised ensemble learning model (SUEL)
It exploits the dependencies among a set of predictors with continuous predictive scores, ranks the predictors without labeled data, and combines them into a weighted ensemble score.
The efficacy of the proposed methods is rigorously assessed through both simulation studies and real-world application of risk genes discovery.
arXiv Detail & Related papers (2024-08-14T20:14:42Z) - Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs [49.547988001231424]
We propose the one-shot-subgraph link prediction to achieve efficient and adaptive prediction.
The design principle is that, instead of acting directly on the whole KG, the prediction procedure is decoupled into two steps.
We achieve improved efficiency and leading performance on five large-scale benchmarks.
arXiv Detail & Related papers (2024-03-15T12:00:12Z) - LPNL: Scalable Link Prediction with Large Language Models [46.65436204783482]
This work focuses on the link prediction task and introduces LPNL (Link Prediction via Natural Language), a framework based on large language models.
We design novel prompts for link prediction that articulate graph details in natural language.
We propose a two-stage sampling pipeline to extract crucial information from the graphs, and a divide-and-conquer strategy to control the input tokens.
arXiv Detail & Related papers (2024-01-24T04:50:16Z) - GraphGLOW: Universal and Generalizable Structure Learning for Graph
Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z) - Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help
Multiple Graph Applications [38.83545631999851]
We propose a framework of graph-aware language model pre-training on a large graph corpus.
We conduct experiments on Amazon's real internal datasets and large public datasets.
arXiv Detail & Related papers (2023-06-05T04:46:44Z) - A Graph-Enhanced Click Model for Web Search [67.27218481132185]
We propose a novel graph-enhanced click model (GraphCM) for web search.
We exploit both intra-session and inter-session information to address the sparsity and cold-start problems.
arXiv Detail & Related papers (2022-06-17T08:32:43Z) - Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z) - A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes from an implicit distribution as enhanced negative samples.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z) - Graph Classification by Mixture of Diverse Experts [67.33716357951235]
We present GraphDIVE, a framework leveraging mixture of diverse experts for imbalanced graph classification.
With a divide-and-conquer principle, GraphDIVE employs a gating network to partition an imbalanced graph dataset into several subsets.
Experiments on real-world imbalanced graph datasets demonstrate the effectiveness of GraphDIVE.
arXiv Detail & Related papers (2021-03-29T14:03:03Z) - Discrete Graph Structure Learning for Forecasting Multiple Time Series [14.459541930646205]
Time series forecasting is an extensively studied subject in statistics, economics, and computer science.
In this work, we propose learning the graph structure simultaneously with a graph neural network (GNN) when the graph is unknown.
Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning.
arXiv Detail & Related papers (2021-01-18T03:36:33Z)