High-order Semantic Role Labeling
- URL: http://arxiv.org/abs/2010.04641v1
- Date: Fri, 9 Oct 2020 15:33:54 GMT
- Title: High-order Semantic Role Labeling
- Authors: Zuchao Li, Hai Zhao, Rui Wang, Kevin Parnow
- Abstract summary: This paper introduces a high-order graph structure for the neural semantic role labeling model.
It enables the model to explicitly consider not only isolated predicate-argument pairs but also the interactions between them.
Experimental results on 7 languages of the CoNLL-2009 benchmark show that the high-order structural learning techniques are beneficial to strong-performing SRL models.
- Score: 86.29371274587146
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic role labeling is primarily used to identify predicates, arguments,
and their semantic relationships. Due to the limitations of modeling methods
and the condition of pre-identified predicates, previous work has focused at
most on the relationships between predicates and arguments and the correlations
between arguments, while the correlations between predicates have long been
neglected. High-order features and structure learning were very
common in modeling such correlations before the neural network era. In this
paper, we introduce a high-order graph structure for the neural semantic role
labeling model, which enables the model to explicitly consider not only the
isolated predicate-argument pairs but also the interaction between the
predicate-argument pairs. Experimental results on 7 languages of the CoNLL-2009
benchmark show that the high-order structural learning techniques are
beneficial to strong-performing SRL models and further boost our baseline
to achieve new state-of-the-art results.
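To make "interactions between predicate-argument pairs" concrete, below is a minimal NumPy sketch of one common realization of high-order structure learning: first-order biaffine arc scores combined with second-order sibling potentials, refined by a few mean-field-style updates. This is an illustrative sketch under assumptions (random toy representations, sigmoid arc posteriors), not the authors' implementation; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def biaffine(x, W, y):
    """Score every (row of x, row of y) pair as x_i^T W y_j."""
    return x @ W @ y.T

# Toy setup: 2 predicates, 4 argument candidates, 8-dim representations.
d, n_pred, n_arg = 8, 2, 4
preds = rng.normal(size=(n_pred, d))  # predicate representations
args = rng.normal(size=(n_arg, d))    # argument-candidate representations

# First-order potentials: one score per isolated predicate-argument arc.
W1 = rng.normal(size=(d, d))
unary = biaffine(preds, W1, args)     # shape (n_pred, n_arg)

# Second-order (sibling) potentials: one score per pair of arcs sharing a
# predicate, i.e., an explicit interaction between predicate-argument pairs.
W2 = rng.normal(size=(d, d))
sibling = biaffine(args, W2, args)    # shape (n_arg, n_arg)
sibling = (sibling + sibling.T) / 2   # symmetric pairwise potential
np.fill_diagonal(sibling, 0.0)        # an arc does not interact with itself

# Mean-field-style refinement: each arc's posterior is updated with the
# expected sibling scores of the other arcs under the current posteriors.
q = 1.0 / (1.0 + np.exp(-unary))      # initial posteriors from unary scores
for _ in range(3):
    logits = unary + q @ sibling      # unary + expected pairwise interactions
    q = 1.0 / (1.0 + np.exp(-logits))

print("arc posteriors after high-order refinement:")
print(q.round(3))
```

In a real model, `preds` and `args` would come from a neural encoder and both potentials would be trained end-to-end; the refinement loop is where the interaction between predicate-argument pairs enters each arc decision.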
Related papers
- Entity-Aware Self-Attention and Contextualized GCN for Enhanced Relation Extraction in Long Sentences [5.453850739960517]
We propose a novel model, Entity-aware Self-attention Contextualized GCN (ESC-GCN), which efficiently incorporates the syntactic structure of input sentences and the semantic context of sequences.
Our model achieves encouraging performance compared to existing dependency-based and sequence-based models.
arXiv Detail & Related papers (2024-09-15T10:50:51Z)
- Prompt-based Logical Semantics Enhancement for Implicit Discourse Relation Recognition [4.7938839332508945]
We propose a Prompt-based Logical Semantics Enhancement (PLSE) method for Implicit Discourse Relation Recognition (IDRR).
Our method seamlessly injects knowledge relevant to discourse relation into pre-trained language models through prompt-based connective prediction.
Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.
arXiv Detail & Related papers (2023-11-01T08:38:08Z)
- Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose a Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- Prototype-based Embedding Network for Scene Graph Generation [105.97836135784794]
Current Scene Graph Generation (SGG) methods explore contextual information to predict relationships among entity pairs.
Due to the diverse visual appearance of numerous possible subject-object combinations, there is a large intra-class variation within each predicate category.
Our Prototype-based Embedding Network (PE-Net) models entities and predicates with compact and distinctive prototype-aligned representations.
Prototype-guided Learning (PL) is introduced to help PE-Net efficiently learn such entity-predicate matching, and Prototype Regularization (PR) is devised to relieve ambiguous entity-predicate matching.
arXiv Detail & Related papers (2023-03-13T13:30:59Z)
- CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument Extraction [22.746071199667146]
Implicit event argument extraction (EAE) aims to identify arguments that may be scattered across the document.
We propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages.
In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models.
arXiv Detail & Related papers (2022-05-01T16:03:54Z)
- Language Models Explain Word Reading Times Better Than Empirical Predictability [20.38397241720963]
The traditional approach in cognitive reading research assumes that word predictability from sentence context is best captured by cloze completion probability (CCP).
Probabilistic language models provide deeper explanations for syntactic and semantic effects than CCP.
N-gram and RNN probabilities of the present word more consistently predicted reading performance compared with topic models or CCP.
arXiv Detail & Related papers (2022-02-02T16:38:43Z)
- Neural Unsupervised Semantic Role Labeling [48.69930912510414]
We present the first neural unsupervised model for semantic role labeling.
We decompose the task into two argument-related subtasks: identification and clustering.
Experiments on the CoNLL-2009 English dataset demonstrate that our model outperforms the previous state-of-the-art baseline.
arXiv Detail & Related papers (2021-04-19T04:50:16Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models. (A minimal nearest-prototype sketch follows this list.)
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
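Since the last entry above reduces, at inference time, to matching instances against per-relation prototypes, here is a minimal nearest-prototype sketch on toy Gaussian data. The data-generation scheme and all names (e.g., `classify`) are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: 3 relation types, 16-dim "contextual" embeddings of entity
# pairs, generated as Gaussian clusters around one center per relation.
n_rel, per_rel, d = 3, 20, 16
centers = rng.normal(scale=3.0, size=(n_rel, d))
x = np.repeat(centers, per_rel, axis=0) + rng.normal(size=(n_rel * per_rel, d))
y = np.repeat(np.arange(n_rel), per_rel)

# One prototype per relation: the mean embedding of its instances.
prototypes = np.stack([x[y == r].mean(axis=0) for r in range(n_rel)])

def classify(batch):
    """Label each instance with the relation of its nearest prototype."""
    dists = np.linalg.norm(batch[:, None, :] - prototypes[None, :, :], axis=-1)
    return dists.argmin(axis=-1)

pred = classify(x)
print("agreement with true relations:", (pred == y).mean())
```

With well-separated clusters this recovers the labels almost perfectly; the substantive contribution in such work is learning embeddings in which relation instances actually form prototype-centered clusters.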