Unsupervised Learning of Graph from Recipes
- URL: http://arxiv.org/abs/2401.12088v1
- Date: Mon, 22 Jan 2024 16:25:47 GMT
- Title: Unsupervised Learning of Graph from Recipes
- Authors: Aissatou Diallo, Antonis Bikakis, Luke Dickens, Anthony Hunter, Rob Miller
- Abstract summary: We propose a model to identify relevant information from recipes and generate a graph to represent the sequence of actions in the recipe.
We iteratively learn the graph structure and the parameters of a $\mathsf{GNN}$ encoding the texts (text-to-graph) one sequence at a time, while supervision comes from decoding the graph back into text (graph-to-text) and comparing the generated text to the input.
We evaluate the approach by comparing the identified entities with annotated datasets, comparing the difference between the input and output texts, and comparing our generated graphs with those generated by state-of-the-art methods.
- Score: 8.410402833223364
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Cooking recipes are one of the most readily available kinds of procedural
text. They consist of natural language instructions that can be challenging to
interpret. In this paper, we propose a model to identify relevant information
from recipes and generate a graph to represent the sequence of actions in the
recipe. In contrast with other approaches, we use an unsupervised approach. We
iteratively learn the graph structure and the parameters of a $\mathsf{GNN}$
encoding the texts (text-to-graph) one sequence at a time while providing the
supervision by decoding the graph into text (graph-to-text) and comparing the
generated text to the input. We evaluate the approach by comparing the
identified entities with annotated datasets, comparing the difference between
the input and output texts, and comparing our generated graphs with those
generated by state-of-the-art methods.
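As a rough illustration of this training cycle, the sketch below pairs a toy text-to-graph encoder with a graph-to-text decoder and trains both from reconstruction loss alone. All class names, shapes, and architectural choices are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of the unsupervised text-to-graph-to-text cycle described
# in the abstract. Module names and shapes are hypothetical placeholders.
import torch
import torch.nn as nn

class TextToGraph(nn.Module):
    """Encodes one recipe step into node features and a soft adjacency."""
    def __init__(self, vocab_size, dim, max_nodes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.to_nodes = nn.Linear(dim, dim)
        self.max_nodes = max_nodes

    def forward(self, tokens):                        # tokens: (seq_len,)
        h = self.embed(tokens)                        # (seq_len, dim)
        nodes = self.to_nodes(h[: self.max_nodes])    # (n, dim)
        # Soft adjacency from pairwise node similarity (one simple choice).
        adj = torch.sigmoid(nodes @ nodes.T)          # (n, n)
        return nodes, adj

class GraphToText(nn.Module):
    """Decodes node features back into per-position token logits."""
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.proj = nn.Linear(dim, vocab_size)

    def forward(self, nodes, adj, seq_len):
        ctx = adj @ nodes                              # one message-passing step
        pooled = ctx.mean(dim=0, keepdim=True)         # (1, dim)
        return self.proj(pooled.expand(seq_len, -1))   # (seq_len, vocab)

encoder, decoder = TextToGraph(1000, 64, 8), GraphToText(1000, 64)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, 1000, (12,))          # one recipe step, toy token ids
for _ in range(3):                              # one sequence at a time
    nodes, adj = encoder(tokens)                # text-to-graph
    logits = decoder(nodes, adj, len(tokens))   # graph-to-text
    loss = loss_fn(logits, tokens)              # reconstruction supervises both
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is the loop at the bottom: the graph structure (here a soft adjacency) is never supervised directly; the only signal is how well the decoded text matches the input.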
Related papers
- Instruction-Based Molecular Graph Generation with Unified Text-Graph Diffusion Model [22.368332915420606]
Unified Text-Graph Diffusion Model (UTGDiff) is a framework to generate molecular graphs from instructions.
UTGDiff features a unified text-graph transformer as the denoising network, derived from pre-trained language models.
Our experimental results demonstrate that UTGDiff consistently outperforms sequence-based baselines in tasks involving instruction-based molecule generation and editing.
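A rough sketch of the core architectural idea, a single transformer attending jointly over instruction tokens and graph node tokens to predict denoised edge logits, might look as follows; all names, shapes, and the toy denoising step are assumptions, not UTGDiff's actual code.

```python
# Illustrative sketch of a unified text-graph transformer: instruction
# tokens and node tokens share one attention stack, and edge logits are
# read off node-pair representations. Shapes and names are assumptions.
import torch
import torch.nn as nn

class UnifiedTextGraphBlock(nn.Module):
    def __init__(self, vocab_size, n_atom_types, dim=64, n_edge_types=4):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, dim)
        self.atom_embed = nn.Embedding(n_atom_types, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.edge_head = nn.Linear(2 * dim, n_edge_types)

    def forward(self, instr_tokens, noisy_atoms):
        # instr_tokens: (1, T) token ids; noisy_atoms: (1, N) atom-type ids
        x = torch.cat([self.tok_embed(instr_tokens),
                       self.atom_embed(noisy_atoms)], dim=1)   # (1, T+N, dim)
        h = self.encoder(x)
        nodes = h[:, instr_tokens.size(1):]                    # (1, N, dim)
        n = nodes.size(1)
        pairs = torch.cat([nodes.unsqueeze(2).expand(-1, n, n, -1),
                           nodes.unsqueeze(1).expand(-1, n, n, -1)], dim=-1)
        return self.edge_head(pairs)   # (1, N, N, n_edge_types) edge logits

model = UnifiedTextGraphBlock(vocab_size=100, n_atom_types=10)
instr = torch.randint(0, 100, (1, 6))   # instruction text, tokenised (toy)
atoms = torch.randint(0, 10, (1, 5))    # noisy atom types at one diffusion step
edge_logits = model(instr, atoms)       # denoised edge predictions
```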
arXiv Detail & Related papers (2024-08-19T11:09:15Z) - Explanation Graph Generation via Generative Pre-training over Synthetic Graphs [6.25568933262682]
Explanation graph generation aims to produce structured explanation graphs in response to user input.
Current research commonly fine-tunes a text-based pre-trained language model on a small downstream dataset that is annotated with labeled graphs.
We propose EG3P (Explanation Graph Generation via Generative Pre-training over synthetic graphs), a novel pre-training framework for the explanation graph generation task.
arXiv Detail & Related papers (2023-06-01T13:20:22Z) - Improving Graph-Based Text Representations with Character and Word Level N-grams [30.699644290131044]
We propose a new word-character text graph that combines word and character n-gram nodes with document nodes.
We also propose two new graph-based neural models, WCTextGCN and WCTextGAT, for modeling our proposed text graph.
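A small sketch of this kind of heterogeneous text graph, with document, word, and character n-gram nodes, is shown below; the edge choices are illustrative assumptions rather than the paper's exact construction.

```python
# Sketch: build a heterogeneous text graph with document, word, and
# character n-gram nodes. Edge choices here are illustrative assumptions.
from collections import defaultdict

def char_ngrams(word, n=3):
    padded = f"#{word}#"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def build_wc_text_graph(docs, n=3):
    edges = defaultdict(set)   # node -> set of neighbouring nodes
    for d, text in enumerate(docs):
        doc_node = f"doc:{d}"
        for word in text.lower().split():
            word_node = f"word:{word}"
            edges[doc_node].add(word_node)        # document-word edge
            edges[word_node].add(doc_node)
            for g in char_ngrams(word, n):
                gram_node = f"ngram:{g}"
                edges[word_node].add(gram_node)   # word-ngram edge
                edges[gram_node].add(word_node)
    return edges

graph = build_wc_text_graph(["chop the onions", "saute the chopped onions"])
print(sorted(graph["word:chopped"]))  # neighbours include shared n-gram nodes
```

Because "chop" and "chopped" share character n-grams, they become connected through common n-gram nodes, which is what lets such a graph generalize across word forms.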
arXiv Detail & Related papers (2022-10-12T08:07:54Z) - What does Transformer learn about source code? [26.674180481543264]
Transformer-based representation models have achieved state-of-the-art (SOTA) performance in many tasks.
We propose the aggregated attention score, a method to investigate the structural information learned by the transformer.
We also put forward the aggregated attention graph, a new way to extract program graphs from the pre-trained models automatically.
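A minimal sketch of how such an aggregated attention graph could be computed, averaging attention over heads and layers and thresholding the result, is given below; the aggregation and threshold are simplifying assumptions, not the paper's exact procedure.

```python
# Sketch of aggregating self-attention into a graph: average attention
# over layers and heads, then keep the strongest token-token links.
import numpy as np

def aggregated_attention_graph(attentions, threshold=0.1):
    # attentions: list over layers of arrays shaped (heads, seq, seq)
    agg = np.mean([layer.mean(axis=0) for layer in attentions], axis=0)
    agg = (agg + agg.T) / 2            # symmetrise token-token scores
    return [(i, j) for i in range(agg.shape[0])
            for j in range(i + 1, agg.shape[1]) if agg[i, j] > threshold]

rng = np.random.default_rng(0)
layers = [rng.dirichlet(np.ones(6), size=(4, 6)) for _ in range(2)]  # toy attn
print(aggregated_attention_graph(layers))
```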
arXiv Detail & Related papers (2022-07-18T09:33:04Z) - Hierarchical Heterogeneous Graph Representation Learning for Short Text Classification [60.233529926965836]
We propose a new method called SHINE, based on graph neural networks (GNNs), for short text classification.
First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs.
Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts.
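A toy sketch of the second step, connecting each short document to its most similar neighbours so labels can propagate, might look like this; cosine k-nearest-neighbour similarity is one simple stand-in, not necessarily SHINE's formulation.

```python
# Sketch: build a short-document graph from embedding similarity so labels
# can propagate between similar texts. Cosine kNN is one simple choice.
import numpy as np

def doc_graph(doc_embeddings, k=2):
    x = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    sim = x @ x.T
    np.fill_diagonal(sim, -np.inf)          # no self-loops
    adj = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        for j in np.argsort(sim[i])[-k:]:   # connect k nearest documents
            adj[i, j] = adj[j, i] = 1.0
    return adj

emb = np.random.default_rng(1).normal(size=(5, 16))  # toy doc embeddings
print(doc_graph(emb))
```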
arXiv Detail & Related papers (2021-10-30T05:33:05Z) - Learning to Generate Scene Graph from Natural Language Supervision [52.18175340725455]
We propose one of the first methods that learn from image-sentence pairs to extract a graphical representation of localized objects and their relationships within an image, known as a scene graph.
We leverage an off-the-shelf object detector to identify and localize object instances, match labels of detected regions to concepts parsed from captions, and thus create "pseudo" labels for learning scene graph.
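The pseudo-labelling step can be pictured with a deliberately naive sketch: keep a detected region's label only when it also appears among the concepts parsed from the caption. The string matching below is a simplification of the paper's matcher.

```python
# Sketch of pseudo-labelling: match detector labels against concepts
# parsed from the caption to get weak region labels (naive string match).
def pseudo_label_regions(detections, caption_concepts):
    # detections: list of (box, detector_label); caption_concepts: set of nouns
    labels = []
    for box, det_label in detections:
        match = det_label if det_label in caption_concepts else None
        labels.append((box, match))   # None means no caption support
    return labels

dets = [((0, 0, 40, 40), "dog"), ((50, 10, 90, 60), "frisbee"),
        ((5, 5, 20, 20), "chair")]
concepts = {"dog", "frisbee"}          # parsed from "a dog catches a frisbee"
print(pseudo_label_regions(dets, concepts))
```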
arXiv Detail & Related papers (2021-09-06T03:38:52Z) - Joint Graph Learning and Matching for Semantic Feature Correspondence [69.71998282148762]
We propose a joint graph learning and matching network, named GLAM, to explore reliable graph structures for boosting graph matching.
The proposed method is evaluated on three popular visual matching benchmarks (Pascal VOC, Willow Object and SPair-71k).
It outperforms previous state-of-the-art graph matching methods by significant margins on all benchmarks.
arXiv Detail & Related papers (2021-09-01T08:24:02Z) - Structural Information Preserving for Graph-to-Text Generation [59.00642847499138]
The task of graph-to-text generation aims at producing sentences that preserve the meaning of input graphs.
We propose to tackle this problem by leveraging richer training signals that can guide our model for preserving input information.
Experiments on two benchmarks for graph-to-text generation show the effectiveness of our approach over a state-of-the-art baseline.
arXiv Detail & Related papers (2021-02-12T20:09:01Z) - Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate solutions to enrich the quality of models' implicit graph encodings.
We find that the proposed denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
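For context, a linearized model consumes the graph as a flat token sequence; a minimal bracketed linearization might look like the sketch below (the bracket format is an illustrative choice, not the paper's exact scheme).

```python
# Sketch: linearise a labelled graph (assumed to be a tree here) into a
# bracketed token sequence of the kind a linearised model consumes.
def linearize(graph, root):
    # graph: node -> list of (edge_label, child) pairs
    tokens = [root]
    for edge, child in graph.get(root, []):
        tokens += ["(", edge] + linearize(graph, child) + [")"]
    return tokens

amr_like = {"eat": [("agent", "cat"), ("patient", "fish")]}
print(" ".join(linearize(amr_like, "eat")))
# eat ( agent cat ) ( patient fish )
```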
arXiv Detail & Related papers (2020-12-31T18:17:57Z) - Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning [73.0598186896953]
We present two self-supervised tasks that learn over raw text with guidance from knowledge graphs.
Building upon entity-level masked language models, our first contribution is an entity masking scheme.
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training.
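A toy sketch of an entity masking scheme, masking whole entity spans drawn from a knowledge graph's entity list rather than random subword tokens, might look like this; the greedy span matching is a simplification.

```python
# Sketch of entity masking: mask whole entity spans (looked up from a
# knowledge graph's entity list) instead of random subword tokens.
def entity_mask(tokens, kg_entities, mask_token="[MASK]"):
    out, i = [], 0
    while i < len(tokens):
        matched = False
        for ent in kg_entities:                 # ent is a tuple of tokens
            if tuple(tokens[i:i + len(ent)]) == ent:
                out += [mask_token] * len(ent)  # mask the full entity span
                i += len(ent)
                matched = True
                break
        if not matched:
            out.append(tokens[i])
            i += 1
    return out

sent = "marie curie won the nobel prize".split()
entities = {("marie", "curie"), ("nobel", "prize")}
print(entity_mask(sent, entities))
# ['[MASK]', '[MASK]', 'won', 'the', '[MASK]', '[MASK]']
```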
arXiv Detail & Related papers (2020-04-29T14:22:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.