Controlling Topic-Focus Articulation in Meaning-to-Text Generation using
Graph Neural Networks
- URL: http://arxiv.org/abs/2310.02053v1
- Date: Tue, 3 Oct 2023 13:51:01 GMT
- Title: Controlling Topic-Focus Articulation in Meaning-to-Text Generation using
Graph Neural Networks
- Authors: Chunliu Wang, Rik van Noord, Johan Bos
- Abstract summary: We try three different methods for topic-focus articulation (TFA) employing graph neural models for a meaning-to-text generation task.
We propose a novel node-aggregation encoding strategy for graph neural models which, instead of the traditional encoding that aggregates adjacent node information, learns node representations using depth-first search.
- Score: 8.334427140256606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A bare meaning representation can be expressed in various ways using natural
language, depending on how the information is structured on the surface level.
We are interested in finding ways to control topic-focus articulation when
generating text from meaning. We focus on distinguishing active and passive
voice for sentences with transitive verbs. The idea is to add pragmatic
information such as topic to the meaning representation, thereby forcing either
active or passive voice when given to a natural language generation system. We
use graph neural models because there is no explicit information about word
order in a meaning represented by a graph. We try three different methods for
topic-focus articulation (TFA) employing graph neural models for a
meaning-to-text generation task. We propose a novel node-aggregation encoding
strategy for graph neural models which, instead of the traditional encoding
that aggregates adjacent node information, learns node representations using
depth-first search. The results show that our approach achieves competitive
performance with state-of-the-art graph models on general text generation, and
leads to significant improvements on the task of active-passive conversion
compared to traditional adjacency-based aggregation strategies. The type of TFA
used has a considerable impact on the performance of the graph models.
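The depth-first-search encoding idea described above can be sketched in miniature as follows. This is an illustrative assumption, not the paper's actual implementation: the toy meaning graph, the feature vectors, and mean-pooling as the aggregator are all hypothetical, chosen only to contrast adjacency-based aggregation with a DFS-based one.

```python
def dfs_order(graph, start):
    """Return nodes in depth-first order from `start` (graph: node -> neighbor list)."""
    order, stack, seen = [], [start], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        # Push neighbors in reverse so they are visited in listed order.
        stack.extend(reversed(graph.get(node, [])))
    return order

def mean_vec(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def adjacency_aggregate(graph, feats, node):
    """Traditional GNN-style step: average the node with its direct neighbors only."""
    neigh = [feats[node]] + [feats[m] for m in graph.get(node, [])]
    return mean_vec(neigh)

def dfs_aggregate(graph, feats, node):
    """DFS-based step: average features along the depth-first traversal,
    so the representation reflects a linearised path rather than adjacency alone."""
    return mean_vec([feats[m] for m in dfs_order(graph, node)])

# Toy meaning graph: an 'eat' event with agent and patient roles.
graph = {"eat": ["agent", "patient"], "agent": ["cat"], "patient": ["fish"]}
feats = {"eat": [1.0, 0.0], "agent": [0.0, 1.0],
         "patient": [1.0, 1.0], "cat": [0.0, 0.0], "fish": [2.0, 0.0]}

adj = adjacency_aggregate(graph, feats, "eat")  # sees only direct neighbors
dfs = dfs_aggregate(graph, feats, "eat")        # sees the whole DFS traversal
```

The contrast the abstract draws is visible here: the adjacency step for "eat" never sees "cat" or "fish", while the DFS step folds in the full traversal, giving each node a view of the linearised graph.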
Related papers
- Verbalized Graph Representation Learning: A Fully Interpretable Graph Model Based on Large Language Models Throughout the Entire Process [8.820909397907274]
We propose a verbalized graph representation learning (VGRL) method which is fully interpretable.
In contrast to traditional graph machine learning models, VGRL constrains the parameter space to textual descriptions.
We conduct several studies to empirically evaluate the effectiveness of VGRL.
arXiv Detail & Related papers (2024-10-02T12:07:47Z)
- KMF: Knowledge-Aware Multi-Faceted Representation Learning for Zero-Shot Node Classification [75.95647590619929]
Zero-Shot Node Classification (ZNC) has been an emerging and crucial task in graph data analysis.
We propose a Knowledge-Aware Multi-Faceted framework (KMF) that enhances the richness of label semantics.
A novel geometric constraint is developed to alleviate the problem of prototype drift caused by node information aggregation.
arXiv Detail & Related papers (2023-08-15T02:38:08Z)
- Enhancing Dialogue Generation via Dynamic Graph Knowledge Aggregation [23.54754465832362]
In conventional graph neural networks (GNNs), message passing on a graph is independent of the text.
This training regime leads to a semantic gap between graph knowledge and text.
We propose a novel framework for knowledge graph enhanced dialogue generation.
arXiv Detail & Related papers (2023-06-28T13:21:00Z)
- ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings [20.25180279903009]
We propose Contrastive Graph-Text pretraining (ConGraT) for jointly learning separate representations of texts and nodes in a text-attributed graph (TAG).
Our method trains a language model (LM) and a graph neural network (GNN) to align their representations in a common latent space using a batch-wise contrastive learning objective inspired by CLIP.
Experiments demonstrate that ConGraT outperforms baselines on various downstream tasks, including node and text category classification, link prediction, and language modeling.
arXiv Detail & Related papers (2023-05-23T17:53:30Z)
- Conversational Semantic Parsing using Dynamic Context Graphs [68.72121830563906]
We consider the task of conversational semantic parsing over general-purpose knowledge graphs (KGs) with millions of entities and thousands of relation types.
We focus on models which are capable of interactively mapping user utterances into executable logical forms.
arXiv Detail & Related papers (2023-05-04T16:04:41Z)
- Improving Graph-Based Text Representations with Character and Word Level N-grams [30.699644290131044]
We propose a new word-character text graph that combines word and character n-gram nodes together with document nodes.
We also propose two new graph-based neural models, WCTextGCN and WCTextGAT, for modeling our proposed text graph.
arXiv Detail & Related papers (2022-10-12T08:07:54Z)
- Hierarchical Heterogeneous Graph Representation Learning for Short Text Classification [60.233529926965836]
We propose a new method called SHINE, which is based on graph neural network (GNN) for short text classification.
First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs.
Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts.
arXiv Detail & Related papers (2021-10-30T05:33:05Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated and original data to reinforce its capability of integrating information on graphs.
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- GINet: Graph Interaction Network for Scene Parsing [58.394591509215005]
We propose a Graph Interaction unit (GI unit) and a Semantic Context Loss (SC-loss) to promote context reasoning over image regions.
The proposed GINet outperforms the state-of-the-art approaches on the popular benchmarks, including Pascal-Context and COCO Stuff.
arXiv Detail & Related papers (2020-09-14T02:52:45Z)
- Iterative Context-Aware Graph Inference for Visual Dialog [126.016187323249]
We propose a novel Context-Aware Graph (CAG) neural network.
Each node in the graph corresponds to a joint semantic feature, including both object-based (visual) and history-related (textual) context representations.
arXiv Detail & Related papers (2020-04-05T13:09:37Z)
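Several of the listed papers rely on a batch-wise contrastive objective inspired by CLIP; ConGraT, for instance, uses one to align LM and GNN representations in a common latent space. A minimal sketch of such an objective follows. The function name, the temperature value, and the use of plain Python lists for embeddings are illustrative assumptions, not any paper's actual implementation.

```python
import math

def clip_style_loss(text_emb, node_emb, temperature=0.07):
    """Symmetric batch-wise contrastive (CLIP-style) loss: the i-th text and
    i-th node embedding form the positive pair; every other pairing in the
    batch serves as a negative."""
    def norm(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]

    t = [norm(v) for v in text_emb]
    g = [norm(v) for v in node_emb]
    n = len(t)
    # Cosine-similarity logits, scaled by temperature.
    logits = [[sum(a * b for a, b in zip(t[i], g[j])) / temperature
               for j in range(n)] for i in range(n)]

    def xent(rows):
        # Mean cross-entropy with the diagonal (positive pair) as the target.
        total = 0.0
        for i, row in enumerate(rows):
            m = max(row)
            logz = m + math.log(sum(math.exp(x - m) for x in row))
            total += logz - row[i]
        return total / len(rows)

    cols = [list(c) for c in zip(*logits)]
    # Symmetric: text-to-node and node-to-text directions averaged.
    return 0.5 * (xent(logits) + xent(cols))

# Aligned pairs should score a lower loss than mismatched ones.
aligned = clip_style_loss([[1.0, 0.0], [0.0, 1.0]], [[1.0, 0.0], [0.0, 1.0]])
shuffled = clip_style_loss([[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0], [1.0, 0.0]])
```

The symmetric two-direction average mirrors the CLIP recipe: each modality's encoder is pushed to retrieve its paired item from the other modality within the batch.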
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.