Semantic Role Labeling Meets Definition Modeling: Using Natural Language
to Describe Predicate-Argument Structures
- URL: http://arxiv.org/abs/2212.01094v1
- Date: Fri, 2 Dec 2022 11:19:16 GMT
- Title: Semantic Role Labeling Meets Definition Modeling: Using Natural Language
to Describe Predicate-Argument Structures
- Authors: Simone Conia and Edoardo Barba and Alessandro Scirè and Roberto
Navigli
- Abstract summary: We present an approach to describe predicate-argument structures using natural language definitions instead of discrete labels.
Our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance.
- Score: 104.32063681736349
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: One of the common traits of past and present approaches for Semantic Role
Labeling (SRL) is that they rely upon discrete labels drawn from a predefined
linguistic inventory to classify predicate senses and their arguments. However,
we argue this need not be the case. In this paper, we present an approach that
leverages Definition Modeling to introduce a generalized formulation of SRL as
the task of describing predicate-argument structures using natural language
definitions instead of discrete labels. Our novel formulation takes a first
step towards placing interpretability and flexibility foremost, and yet our
experiments and analyses on PropBank-style and FrameNet-style, dependency-based
and span-based SRL also demonstrate that a flexible model with an interpretable
output does not necessarily come at the expense of performance. We release our
software for research purposes at https://github.com/SapienzaNLP/dsrl.
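The core idea of the abstract can be illustrated with a toy example. The frameset and definitions below are hypothetical, loosely in the style of PropBank framesets; they are not taken from the paper, and the authors' system generates such definitions rather than retrieving them from a fixed inventory. The sketch only shows what it means to replace discrete labels with natural-language descriptions:

```python
# Hypothetical label-to-definition inventory for the predicate "eat.01".
# Discrete labels (ARG0, ARG1) are mapped to natural-language definitions,
# mimicking the paper's generalized formulation of SRL.
DEFINITIONS = {
    "eat.01": "to consume food",
    ("eat.01", "ARG0"): "the consumer, the one eating",
    ("eat.01", "ARG1"): "the meal, the thing being eaten",
}

def describe(predicate_sense, argument_labels):
    """Replace a discrete predicate sense and its argument labels with
    natural-language definitions (lookup stands in for generation)."""
    sense_def = DEFINITIONS[predicate_sense]
    arg_defs = {
        span: DEFINITIONS[(predicate_sense, label)]
        for span, label in argument_labels.items()
    }
    return sense_def, arg_defs

# "The cat ate the fish": discrete labels in, definitions out.
sense, args = describe("eat.01", {"The cat": "ARG0", "the fish": "ARG1"})
print(sense)            # to consume food
print(args["The cat"])  # the consumer, the one eating
```

The output is interpretable on its own: a reader needs no knowledge of the PropBank label inventory to understand what role "The cat" plays.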
Related papers
- How Abstract Is Linguistic Generalization in Large Language Models?
Experiments with Argument Structure [2.530495315660486]
We investigate the degree to which pre-trained Transformer-based large language models represent relationships between contexts.
We find that LLMs perform well in generalizing the distribution of a novel noun argument between related contexts.
However, LLMs fail at generalizations between related contexts that have not been observed during pre-training.
arXiv Detail & Related papers (2023-11-08T18:58:43Z)
- Interpretable Word Sense Representations via Definition Generation: The
Case of Semantic Change Analysis [3.515619810213763]
We propose using automatically generated natural language definitions of contextualised word usages as interpretable word and word sense representations.
We demonstrate how the resulting sense labels can make existing approaches to semantic change analysis more interpretable.
arXiv Detail & Related papers (2023-05-19T20:36:21Z)
- Offline RL for Natural Language Generation with Implicit Language Q
Learning [87.76695816348027]
Large language models can be inconsistent when it comes to completing user-specified tasks.
We propose a novel RL method, Implicit Language Q-Learning (ILQL), that combines the flexible utility maximization framework of RL with the ability of supervised learning to leverage existing data.
In addition to empirically validating ILQL, we present a detailed empirical analysis of the situations where offline RL can be useful in natural language generation settings.
arXiv Detail & Related papers (2022-06-05T18:38:42Z)
- Infusing Finetuning with Semantic Dependencies [62.37697048781823]
We show that, unlike syntax, semantics is not brought to the surface by today's pretrained models.
We then use convolutional graph encoders to explicitly incorporate semantic parses into task-specific finetuning.
arXiv Detail & Related papers (2020-12-10T01:27:24Z)
- Measuring Association Between Labels and Free-Text Rationales [60.58672852655487]
In interpretable NLP, we require faithful rationales that reflect the model's decision-making process for an explained instance.
We demonstrate that pipelines, the existing models for faithful extractive rationalization on information-extraction-style tasks, do not extend as reliably to "reasoning" tasks requiring free-text rationales.
We turn to models that jointly predict and rationalize, a class of widely used high-performance models for free-text rationalization whose faithfulness is not yet established.
arXiv Detail & Related papers (2020-10-24T03:40:56Z)
- Semantic Role Labeling as Syntactic Dependency Parsing [19.919191146167584]
Three common syntactic patterns account for over 98% of the PropBank-style semantic role labeling annotations.
We present a conversion scheme that packs SRL annotations into dependency tree representations through joint labels.
arXiv Detail & Related papers (2020-10-21T17:46:11Z)
- Syntax Role for Neural Semantic Role Labeling [77.5166510071142]
Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence.
Previous studies in terms of traditional models have shown syntactic information can make remarkable contributions to SRL performance.
Recent neural SRL studies show that syntax information becomes much less important for neural semantic role labeling.
arXiv Detail & Related papers (2020-09-12T07:01:12Z)
- Refining Implicit Argument Annotation for UCCA [6.873471412788333]
This paper proposes a typology for fine-grained implicit argument annotation on top of the foundational layer of Universal Conceptual Cognitive Annotation (UCCA).
The proposed implicit argument categorisation is driven by theories of implicit role interpretation and consists of six types: Deictic, Generic, Genre-based, Type-identifiable, Non-specific, and Iterated-set.
arXiv Detail & Related papers (2020-05-26T17:24:15Z)
- Unsupervised Transfer of Semantic Role Models from Verbal to Nominal
Domain [65.04669567781634]
We investigate a transfer scenario where we assume role-annotated data for the source verbal domain but only unlabeled data for the target nominal domain.
Our key assumption, enabling the transfer between the two domains, is that selectional preferences of a role do not strongly depend on whether the relation is triggered by a verb or a noun.
The method substantially outperforms baselines, such as unsupervised and direct-transfer methods, on the English CoNLL-2009 dataset.
arXiv Detail & Related papers (2020-05-01T09:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.