Neural Unsupervised Semantic Role Labeling
- URL: http://arxiv.org/abs/2104.09047v1
- Date: Mon, 19 Apr 2021 04:50:16 GMT
- Title: Neural Unsupervised Semantic Role Labeling
- Authors: Kashif Munir, Hai Zhao, Zuchao Li
- Abstract summary: We present the first neural unsupervised model for semantic role labeling.
We decompose the task into two argument-related subtasks: identification and clustering.
Experiments on the CoNLL-2009 English dataset demonstrate that our model outperforms the previous state-of-the-art baseline.
- Score: 48.69930912510414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of semantic role labeling (SRL) is dedicated to finding the
predicate-argument structure of a sentence. Previous work on SRL is mostly
supervised and does not account for the difficulty of labeling each example,
which can be very expensive and time-consuming. In this paper, we present the
first neural unsupervised model for SRL. We decompose the task into two
argument-related subtasks, identification and clustering, and propose a
pipeline consisting of two corresponding neural modules. First, we train a
neural model on two syntax-aware, statistically developed rules. This model
derives a relevance signal for each token in a sentence, feeds it into a
BiLSTM, and then applies an adversarial layer that adds noise and classifies
simultaneously, enabling the model to learn the semantic structure of the
sentence. We then propose another neural model for argument role clustering,
which clusters the learned argument embeddings biased towards their dependency
relations. Experiments on the CoNLL-2009 English dataset demonstrate that our
model outperforms the previous state-of-the-art non-neural baseline for
argument identification and classification.
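As a rough illustration of the two-module pipeline described in the abstract, the sketch below wires a BiLSTM over token embeddings plus a rule-derived relevance signal, adds Gaussian noise as a stand-in for the adversarial noise-adding layer, and clusters argument embeddings concatenated with up-weighted dependency-relation features. All module names, sizes, the noise mechanism, and the use of KMeans are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of the two-stage unsupervised SRL pipeline described above.
# Module names, sizes, the noise layer, and KMeans are assumptions, not the paper's code.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class ArgumentIdentifier(nn.Module):
    """Stage 1: BiLSTM over token embeddings plus a rule-derived relevance signal,
    with a noise-adding layer before a binary argument/non-argument classifier."""

    def __init__(self, vocab_size, emb_dim=100, hidden=128, noise_std=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # +1 input feature per token: the relevance signal from the syntax-aware rules
        self.bilstm = nn.LSTM(emb_dim + 1, hidden, batch_first=True, bidirectional=True)
        self.noise_std = noise_std
        self.classifier = nn.Linear(2 * hidden, 2)  # argument vs. non-argument

    def forward(self, token_ids, relevance):
        x = torch.cat([self.embed(token_ids), relevance.unsqueeze(-1)], dim=-1)
        h, _ = self.bilstm(x)
        if self.training:                            # inject noise only while training
            h = h + self.noise_std * torch.randn_like(h)
        return self.classifier(h), h                 # per-token logits and reusable embeddings


def cluster_argument_roles(arg_embeddings, dep_rel_onehot, n_roles=10, dep_weight=2.0):
    """Stage 2: cluster argument embeddings biased toward their dependency relations
    by concatenating an up-weighted one-hot dependency-relation feature."""
    feats = torch.cat([arg_embeddings, dep_weight * dep_rel_onehot], dim=-1)
    return KMeans(n_clusters=n_roles, n_init=10).fit_predict(feats.detach().numpy())
```

In this sketch, tokens classified as arguments by stage 1 would have their BiLSTM states passed to stage 2 for role clustering.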
Related papers
- Provable Identifiability of Two-Layer ReLU Neural Networks via LASSO
Regularization [15.517787031620864]
LASSO is extended to two-layer ReLU neural networks, a fashionable and powerful nonlinear regression model.
We show that the LASSO estimator can stably reconstruct the neural network and identify $\mathcal{S}^\star$ when the number of samples scales logarithmically.
Our theory lies in an extended Restricted Isometry Property (RIP)-based analysis framework for two-layer ReLU neural networks.
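As a toy illustration of this setting (not the paper's analysis), one can fit a two-layer ReLU network with an L1 penalty on the first-layer weights and read an estimated support off the surviving input columns; the teacher network, sample sizes, penalty strength, and threshold below are all assumptions.

```python
# Toy illustration of LASSO-style support recovery in a two-layer ReLU model.
# The ground-truth support, network sizes, penalty, and threshold are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
d, k, n = 20, 5, 2000                      # input dim, hidden units, samples
support = [0, 3, 7]                        # true relevant coordinates (assumed)
W_true = torch.zeros(k, d)
W_true[:, support] = torch.randn(k, len(support))
a_true = torch.randn(k)

X = torch.randn(n, d)
y = torch.relu(X @ W_true.T) @ a_true      # noiseless teacher outputs

model = nn.Sequential(nn.Linear(d, k, bias=False), nn.ReLU(), nn.Linear(k, 1, bias=False))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1e-2                                 # L1 strength (assumed)
for _ in range(2000):
    opt.zero_grad()
    pred = model(X).squeeze(-1)
    loss = ((pred - y) ** 2).mean() + lam * model[0].weight.abs().sum()
    loss.backward()
    opt.step()

# Columns of the first-layer weight matrix with non-negligible norm estimate the support.
col_norm = model[0].weight.detach().abs().sum(dim=0)
print("estimated support:", (col_norm > 0.1).nonzero().flatten().tolist())
```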
arXiv Detail & Related papers (2023-05-07T13:05:09Z)
- Dependency-based Mixture Language Models [53.152011258252315]
We introduce the Dependency-based Mixture Language Models.
In detail, we first train neural language models with a novel dependency modeling objective.
We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention.
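A minimal sketch of the mixing step described above: each prior token contributes a next-token distribution (random logits here stand in for the dependency-trained language model), and self-attention weights mix them into one prediction. The shapes and the dot-product attention are assumptions for illustration.

```python
# Minimal sketch of mixing per-token next-token distributions with attention weights.
# Random tensors stand in for trained model outputs; shapes are assumptions.
import torch
import torch.nn.functional as F

vocab, ctx_len, hidden = 1000, 8, 64
dep_logits = torch.randn(ctx_len, vocab)   # next-token logits conditioned on each prior token
query = torch.randn(hidden)                # representation of the current position
keys = torch.randn(ctx_len, hidden)        # representations of the prior tokens

attn = F.softmax(keys @ query / hidden ** 0.5, dim=0)   # self-attention mixing weights
dep_probs = F.softmax(dep_logits, dim=-1)                # one distribution per prior token
next_token_probs = attn @ dep_probs                      # mixture over the vocabulary
print(next_token_probs.sum())                            # sums to 1
```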
arXiv Detail & Related papers (2022-03-19T06:28:30Z)
- End-to-end Semantic Role Labeling with Neural Transition-based Model [25.921541005563856]
End-to-end semantic role labeling (SRL) has received increasing interest.
Recent work is mostly focused on graph-based neural models.
We present the first work of transition-based neural models for end-to-end SRL.
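The entry does not spell out the transition system, so the sketch below is only a generic illustration of the transition-based idea: a scorer chooses an action per step and predicate-argument arcs are built incrementally. The SHIFT/PRED/ARC action set and the toy scorer are assumptions, not the paper's system.

```python
# Generic, illustrative transition loop for building predicate-argument arcs
# incrementally; this action set is an assumption, not the paper's transition system.
from typing import List, Tuple

def greedy_srl_transitions(tokens: List[str], score_action) -> List[Tuple[int, int, str]]:
    """Consume tokens left to right, letting a scorer pick an action per step."""
    arcs, predicates = [], []
    for i, tok in enumerate(tokens):
        action, role = score_action(tok, i, predicates)    # e.g. a neural classifier
        if action == "PRED":
            predicates.append(i)
        elif action == "ARC" and predicates:
            arcs.append((predicates[-1], i, role))          # (predicate, argument, role)
        # "SHIFT" just moves on
    return arcs

def toy_scorer(tok, i, predicates):
    """Stand-in for a trained scorer, for demonstration only."""
    if tok.endswith("s") and not predicates:
        return "PRED", None
    return ("ARC", "ARG") if predicates else ("SHIFT", None)

print(greedy_srl_transitions("Mary eats an apple".split(), toy_scorer))
```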
arXiv Detail & Related papers (2021-01-02T07:35:54Z)
- High-order Semantic Role Labeling [86.29371274587146]
This paper introduces a high-order graph structure for the neural semantic role labeling model.
It enables the model to explicitly consider not only isolated predicate-argument pairs but also the interactions between predicate-argument pairs.
Experimental results on seven languages of the CoNLL-2009 benchmark show that high-order structural learning techniques are beneficial to strong-performing SRL models.
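One way to picture "high-order" scoring (the exact factorization used in the paper is not given here, so this pairwise form is an assumption): add a second-order term over pairs of arguments attached to the same predicate on top of the usual first-order arc scores.

```python
# Illustrative scoring with first-order arc scores plus a second-order term over
# pairs of arguments of the same predicate; random tensors stand in for learned scorers.
import torch

n_tokens, n_preds = 6, 2
first_order = torch.randn(n_preds, n_tokens)              # score(predicate p, argument a)
second_order = torch.randn(n_preds, n_tokens, n_tokens)   # score(p, a1, a2) for sibling arguments

def score_assignment(arcs):
    """arcs: list of (predicate index, argument index) pairs."""
    s = sum(first_order[p, a] for p, a in arcs)
    by_pred = {}
    for p, a in arcs:
        by_pred.setdefault(p, []).append(a)
    for p, args in by_pred.items():                        # add sibling interactions
        for i in range(len(args)):
            for j in range(i + 1, len(args)):
                s = s + second_order[p, args[i], args[j]]
    return s

print(score_assignment([(0, 1), (0, 3), (1, 4)]))
```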
arXiv Detail & Related papers (2020-10-09T15:33:54Z)
- Syntax Role for Neural Semantic Role Labeling [77.5166510071142]
Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence.
Previous studies based on traditional models have shown that syntactic information can make remarkable contributions to SRL performance.
Recent neural SRL studies show that syntax information becomes much less important for neural semantic role labeling.
arXiv Detail & Related papers (2020-09-12T07:01:12Z)
- Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning [134.77207192945053]
Prior methods learn the neural-symbolic models using reinforcement learning approaches.
We introduce the grammar model as a symbolic prior to bridge neural perception and symbolic reasoning.
We propose a novel back-search algorithm which mimics the top-down human-like learning procedure to propagate the error.
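A rough sketch of the back-search idea in a toy arithmetic domain: when the predicted symbol sequence does not satisfy the symbolic check, search nearby corrections and use the first one that does as a pseudo-label. The one-token edit space, the symbol set, and the use of Python eval as the symbolic executor are assumptions for illustration.

```python
# Toy back-search: correct a predicted arithmetic string so it evaluates to the target,
# then treat the correction as a pseudo-label for the perception module.
SYMBOLS = list("0123456789+-*")

def safe_eval(expr):
    """Toy symbolic executor for arithmetic strings; returns None on invalid input."""
    try:
        return eval(expr)
    except (SyntaxError, ZeroDivisionError, NameError):
        return None

def back_search(predicted, target):
    """Return a 1-token correction of `predicted` that evaluates to `target`, if any."""
    if safe_eval(predicted) == target:
        return predicted
    for i in range(len(predicted)):
        for s in SYMBOLS:
            candidate = predicted[:i] + s + predicted[i + 1:]
            if safe_eval(candidate) == target:
                return candidate    # pseudo-label for training
    return None

print(back_search("1+3", 5))        # e.g. corrects to "2+3"
```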
arXiv Detail & Related papers (2020-06-11T17:42:49Z)
- Multi-Step Inference for Reasoning Over Paragraphs [95.91527524872832]
Complex reasoning over text requires understanding and chaining together free-form predicates and logical connectives.
We present a compositional model reminiscent of neural module networks that can perform chained logical reasoning.
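A minimal sketch of chaining neural modules so that one reasoning step's attention feeds the next, in the spirit of neural module networks; the two module types, their parameterization, and the sizes are assumptions, not the paper's architecture.

```python
# Minimal sketch of chained reasoning modules: step 2 consumes step 1's attention.
# Module designs and sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FindModule(nn.Module):
    """Attend over paragraph tokens relevant to a query vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
    def forward(self, paragraph, query):
        scores = paragraph @ self.proj(query)
        return torch.softmax(scores, dim=0)        # attention over tokens

class RelocateModule(nn.Module):
    """Move attention from the previously attended tokens to related tokens."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
    def forward(self, paragraph, prev_attn):
        pooled = prev_attn @ paragraph             # summary of the previous step
        scores = paragraph @ self.proj(pooled)     # relate every token to that summary
        return torch.softmax(scores, dim=0)

dim, n_tok = 32, 10
paragraph = torch.randn(n_tok, dim)
query = torch.randn(dim)
attn1 = FindModule(dim)(paragraph, query)          # step 1: find
attn2 = RelocateModule(dim)(paragraph, attn1)      # step 2: chain on step 1
print(attn2.shape)                                 # torch.Size([10])
```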
arXiv Detail & Related papers (2020-04-06T21:12:53Z)