Syntax Role for Neural Semantic Role Labeling
- URL: http://arxiv.org/abs/2009.05737v1
- Date: Sat, 12 Sep 2020 07:01:12 GMT
- Title: Syntax Role for Neural Semantic Role Labeling
- Authors: Zuchao Li, Hai Zhao, Shexia He, Jiaxun Cai
- Abstract summary: Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence.
Previous studies based on traditional models have shown that syntactic information can make remarkable contributions to SRL performance.
A few recent neural SRL studies, however, suggest that syntactic information has become much less important for neural semantic role labeling.
- Score: 77.5166510071142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semantic role labeling (SRL) is dedicated to recognizing the semantic
predicate-argument structure of a sentence. Previous studies based on
traditional models have shown that syntactic information can make remarkable
contributions to SRL performance; however, the necessity of syntactic
information was challenged by a few recent neural SRL studies that demonstrate
impressive performance without syntactic backbones and suggest that syntax
information becomes much less important for neural semantic role labeling,
especially when paired with recent deep neural network and large-scale
pre-trained language models. Despite this notion, the neural SRL field still
lacks a systematic and full investigation of the relevance of syntactic
information in SRL, for both dependency-based and span-based SRL and for both
monolingual and multilingual settings. This paper intends to quantify the
importance of syntactic
information for neural SRL in the deep learning framework. We introduce three
typical SRL frameworks (baselines): sequence-based, tree-based, and
graph-based, each paired with two categories of approaches for exploiting
syntactic information: syntax pruning-based and syntax feature-based.
Experiments are
conducted on the CoNLL-2005, 2009, and 2012 benchmarks for all languages
available, and results show that neural SRL models can still benefit from
syntactic information under certain conditions. Furthermore, we show the
quantitative significance of syntax to neural SRL models together with a
thorough empirical survey using existing models.
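To make the syntax pruning-based category concrete, the sketch below illustrates one representative form of such pruning: k-order hard pruning over a dependency tree, which keeps only argument candidates that are syntactically close to the predicate. The head-index encoding, function name, and toy sentence are illustrative assumptions for this note, not the paper's actual implementation.

```python
# A minimal sketch of syntax-based argument pruning for dependency SRL,
# in the spirit of k-order hard pruning from prior syntax-aware SRL work.
# The tree encoding and names here are illustrative assumptions.
from collections import defaultdict

def k_order_prune(heads, predicate, k=1):
    """Return candidate argument positions for `predicate`.

    heads     : list[int], heads[i] is the head index of token i (-1 for root)
    predicate : int, index of the predicate token
    k         : maximum syntactic order (k=1 keeps only the direct children
                of the predicate and of each of its ancestors)
    """
    children = defaultdict(list)
    for tok, head in enumerate(heads):
        if head >= 0:
            children[head].append(tok)

    def descendants_within(node, depth):
        # Tokens reachable from `node` in at most `depth` child steps.
        found, frontier = set(), [node]
        for _ in range(depth):
            frontier = [c for n in frontier for c in children[n]]
            found.update(frontier)
        return found

    candidates = set()
    node = predicate
    while node >= 0:                       # walk from the predicate up to the root
        candidates |= descendants_within(node, k)
        candidates.add(node)
        node = heads[node]                 # move to the syntactic head
    candidates.discard(predicate)
    return sorted(candidates)

# Toy example: "The cat chased the mouse", predicate "chased" at index 2.
heads = [1, 2, -1, 4, 2]
print(k_order_prune(heads, predicate=2, k=1))  # -> [1, 4] (direct children)
print(k_order_prune(heads, predicate=2, k=2))  # -> [0, 1, 3, 4] (grandchildren too)
```

With k = 1 this reduces to the classic children-of-ancestors pruning; larger k trades recall of long-distance arguments against a larger candidate set, which is the kind of condition under which the paper measures whether syntax still helps neural SRL.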
Related papers
- Large Language Models as Neurolinguistic Subjects: Identifying Internal Representations for Form and Meaning [49.60849499134362]
This study investigates the linguistic understanding of Large Language Models (LLMs) regarding the signifier (form) and the signified (meaning).
Traditional psycholinguistic evaluations often reflect statistical biases that may misrepresent LLMs' true linguistic capabilities.
We introduce a neurolinguistic approach, utilizing a novel method that combines minimal pair and diagnostic probing to analyze activation patterns across model layers.
arXiv Detail & Related papers (2024-11-12T04:16:44Z)
- Precision, Stability, and Generalization: A Comprehensive Assessment of RNNs learnability capability for Classifying Counter and Dyck Languages [9.400009043451046]
This study investigates the learnability of Recurrent Neural Networks (RNNs) in classifying structured formal languages.
Traditionally, both first-order (LSTM) and second-order (O2RNN) RNNs have been considered effective for such tasks.
arXiv Detail & Related papers (2024-10-04T03:22:49Z)
- Analysis of Argument Structure Constructions in a Deep Recurrent Language Model [0.0]
We explore the representation and processing of Argument Structure Constructions (ASCs) in a recurrent neural language model.
Our results show that sentence representations form distinct clusters corresponding to the four ASCs across all hidden layers.
This indicates that even a relatively simple, brain-constrained recurrent neural network can effectively differentiate between various construction types.
arXiv Detail & Related papers (2024-08-06T09:27:41Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Neuro-Symbolic Reinforcement Learning with First-Order Logic [63.003353499732434]
We propose a novel RL method for text-based games with a recent neuro-symbolic framework called Logical Neural Network.
Our experimental results show RL training with the proposed method converges significantly faster than other state-of-the-art neuro-symbolic methods in a TextWorld benchmark.
arXiv Detail & Related papers (2021-10-21T08:21:49Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework called the Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on Answer Set semantics with neural networks in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Neural Unsupervised Semantic Role Labeling [48.69930912510414]
We present the first neural unsupervised model for semantic role labeling.
We decompose the task into two argument-related subtasks: identification and clustering.
Experiments on the CoNLL-2009 English dataset demonstrate that our model outperforms the previous state-of-the-art baseline.
arXiv Detail & Related papers (2021-04-19T04:50:16Z)
- Structural Supervision Improves Few-Shot Learning and Syntactic Generalization in Neural Language Models [47.42249565529833]
Humans can learn structural properties about a word from minimal experience.
We assess the ability of modern neural language models to reproduce this behavior in English.
arXiv Detail & Related papers (2020-10-12T14:12:37Z)
- Syntax Representation in Word Embeddings and Neural Networks -- A Survey [4.391102490444539]
This paper covers approaches to evaluating the amount of syntactic information included in word representations.
We mainly summarize research on English monolingual data for language modeling tasks.
We describe which pre-trained models and representations of language are best suited for transfer to syntactic tasks.
arXiv Detail & Related papers (2020-10-02T15:44:58Z)