Exploring Non-Verbal Predicates in Semantic Role Labeling: Challenges
and Opportunities
- URL: http://arxiv.org/abs/2307.01870v1
- Date: Tue, 4 Jul 2023 18:28:59 GMT
- Title: Exploring Non-Verbal Predicates in Semantic Role Labeling: Challenges
and Opportunities
- Authors: Riccardo Orlando and Simone Conia and Roberto Navigli
- Abstract summary: Non-verbal predicates appear less frequently in the benchmarks we commonly use to measure progress in Semantic Role Labeling than in many real-world settings.
We show that state-of-the-art systems are still incapable of transferring knowledge across different predicate types.
We present a novel, manually-annotated challenge set designed to give equal importance to verbal, nominal, and adjectival predicate-argument structures.
- Score: 40.46449855403553
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Although we have witnessed impressive progress in Semantic Role Labeling
(SRL), most of the research in the area is carried out assuming that the
majority of predicates are verbs. Conversely, predicates can also be expressed
using other parts of speech, e.g., nouns and adjectives. However, non-verbal
predicates appear in the benchmarks we commonly use to measure progress in SRL
less frequently than in some real-world settings -- newspaper headlines,
dialogues, and tweets, among others. In this paper, we put forward a new
PropBank dataset which boasts wide coverage of multiple predicate types. Thanks
to it, we demonstrate empirically that standard benchmarks do not provide an
accurate picture of the current situation in SRL and that state-of-the-art
systems are still incapable of transferring knowledge across different
predicate types. Having observed these issues, we also present a novel,
manually-annotated challenge set designed to give equal importance to verbal,
nominal, and adjectival predicate-argument structures. We use this dataset to
investigate whether we can leverage different linguistic resources to promote
knowledge transfer. In conclusion, we claim that SRL is far from "solved", and
its integration with other semantic tasks might enable significant improvements
in the future, especially for the long tail of non-verbal predicates, thereby
facilitating further research in this direction.
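To make the distinction between predicate types concrete, below is a minimal Python sketch of how PropBank-style predicate-argument structures can be represented for a verbal, a nominal, and an adjectival predicate. The example sentences, sense identifiers, and role assignments are illustrative assumptions and are not taken from the paper's dataset or challenge set.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    role: str   # PropBank-style role label, e.g. "ARG0", "ARG1"
    span: str   # surface text of the argument

@dataclass
class Predicate:
    lemma: str
    pos: str    # "VERB", "NOUN", or "ADJ"
    sense: str  # PropBank-style sense label (illustrative)
    args: list[Argument] = field(default_factory=list)

# Roughly the same event expressed with three different predicate types.
# Sentences, senses, and role spans are illustrative, not from the paper.
structures = {
    # Verbal:     "The company acquired the startup."
    "verbal": Predicate("acquire", "VERB", "acquire.01", [
        Argument("ARG0", "The company"),
        Argument("ARG1", "the startup"),
    ]),
    # Nominal:    "the company's acquisition of the startup"
    "nominal": Predicate("acquisition", "NOUN", "acquire.01", [
        Argument("ARG0", "the company's"),
        Argument("ARG1", "of the startup"),
    ]),
    # Adjectival: "The founders are proud of the acquisition."
    "adjectival": Predicate("proud", "ADJ", "proud.01", [
        Argument("ARG0", "The founders"),
        Argument("ARG1", "of the acquisition"),
    ]),
}

for kind, pred in structures.items():
    roles = ", ".join(f"{a.role}={a.span!r}" for a in pred.args)
    print(f"{kind:<11} {pred.sense:<11} {roles}")
```

A benchmark that gives equal importance to the three predicate types, as the challenge set described in the abstract does, would balance the three `pos` values rather than being dominated by verbal predicates.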
Related papers
- The language of prompting: What linguistic properties make a prompt successful? [13.034603322224548]
LLMs can be prompted to achieve impressive zero-shot or few-shot performance in many NLP tasks.
Yet, we still lack a systematic understanding of how linguistic properties of prompts correlate with task performance.
We investigate grammatical properties such as mood, tense, aspect, and modality, as well as lexico-semantic variation through the use of synonyms.
arXiv Detail & Related papers (2023-11-03T15:03:36Z)
- Are Large Language Models Robust Coreference Resolvers? [17.60248310475889]
We show that prompting for coreference can outperform current unsupervised coreference systems.
Further investigations reveal that instruction-tuned LMs generalize surprisingly well across domains, languages, and time periods.
arXiv Detail & Related papers (2023-05-23T19:38:28Z)
- Semantic Role Labeling Meets Definition Modeling: Using Natural Language to Describe Predicate-Argument Structures [104.32063681736349]
We present an approach to describe predicate-argument structures using natural language definitions instead of discrete labels (a minimal sketch of this idea appears after this list).
Our experiments and analyses on PropBank-style and FrameNet-style, dependency-based and span-based SRL also demonstrate that a flexible model with an interpretable output does not necessarily come at the expense of performance.
arXiv Detail & Related papers (2022-12-02T11:19:16Z)
- Self-Supervised Speech Representation Learning: A Review [105.1545308184483]
Self-supervised representation learning methods promise a single universal model that would benefit a wide variety of tasks and domains.
Speech representation learning is experiencing similar progress in three main categories: generative, contrastive, and predictive methods.
This review presents approaches for self-supervised speech representation learning and their connection to other research areas.
arXiv Detail & Related papers (2022-05-21T16:52:57Z)
- Testing the Ability of Language Models to Interpret Figurative Language [69.59943454934799]
Figurative and metaphorical language are commonplace in discourse.
It remains an open question to what extent modern language models can interpret nonliteral phrases.
We introduce Fig-QA, a Winograd-style nonliteral language understanding task.
arXiv Detail & Related papers (2022-04-26T23:42:22Z)
- A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z)
- Intrinsic Probing through Dimension Selection [69.52439198455438]
Most modern NLP systems make use of pre-trained contextual representations that attain astonishingly high performance on a variety of tasks.
Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it.
In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted.
arXiv Detail & Related papers (2020-10-06T15:21:08Z)
- Semantic Relatedness for Keyword Disambiguation: Exploiting Different Embeddings [0.0]
We propose an approach to keyword disambiguation grounded in the semantic relatedness between words and senses provided by an external inventory (ontology) that is not known at training time.
Experimental results show that this approach achieves results comparable with the state of the art when applied for Word Sense Disambiguation (WSD) without training for a particular domain.
arXiv Detail & Related papers (2020-02-25T16:44:50Z)
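As referenced in the entry on "Semantic Role Labeling Meets Definition Modeling" above, here is a minimal sketch of the idea of replacing discrete role labels with natural-language definitions of arguments. The definition strings and the helper below are illustrative assumptions, not the actual output of the paper's model.

```python
# Discrete PropBank-style labels for "The company acquired the startup."
discrete_labels = {
    "The company": "ARG0",
    "the startup": "ARG1",
}

# The same arguments described with natural-language definitions
# (illustrative strings, not the definitions produced by the paper's model).
definitions = {
    "The company": "the entity that acquires something",
    "the startup": "the thing that is acquired",
}

def describe(span: str) -> str:
    """Pair an argument span with its definition and its discrete label."""
    return f"{span!r}: {definitions[span]} (discrete label: {discrete_labels[span]})"

for span in discrete_labels:
    print(describe(span))
```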