Persian Semantic Role Labeling Using Transfer Learning and BERT-Based
Models
- URL: http://arxiv.org/abs/2306.10339v1
- Date: Sat, 17 Jun 2023 12:50:09 GMT
- Title: Persian Semantic Role Labeling Using Transfer Learning and BERT-Based
Models
- Authors: Saeideh Niksirat Aghdam, Sayyed Ali Hossayni, Erfan Khedersolh Sadeh,
Nasim Khozouei, Behrouz Minaei Bidgoli
- Abstract summary: We present an end-to-end SRL method that not only eliminates the need for feature extraction but also outperforms existing methods in facing new samples.
The proposed method does not employ any auxiliary features and achieves an accuracy of 83.16 percent, an improvement of more than 16 percentage points over previous methods in similar circumstances.
- Score: 5.592292907237565
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Semantic role labeling (SRL) is the process of detecting the
predicate-argument structure of each predicate in a sentence. SRL plays a
crucial role as a pre-processing step in many NLP applications such as topic
and concept extraction, question answering, summarization, machine translation,
sentiment analysis, and text mining. Recently, in many languages, unified SRL
has attracted considerable attention due to its outstanding performance, which
results from overcoming the error-propagation problem. However, for the
Persian language, all previous works have focused on traditional SRL methods,
leading to a drop in accuracy and imposing feature extraction steps that are
expensive in terms of financial resources, time, and energy consumption. In
this work, we
present an end-to-end SRL method that not only eliminates the need for feature
extraction but also outperforms existing methods in facing new samples in
practical situations. The proposed method does not employ any auxiliary
features and achieves an accuracy of 83.16 percent, an improvement of more
than 16 percentage points over previous methods in similar circumstances.
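The general approach the abstract describes, end-to-end SRL without hand-crafted features, is commonly cast as token-level sequence labeling over contextual embeddings. The sketch below illustrates that framing only; the role inventory, sentence, and randomly initialized weights are hypothetical stand-ins (random vectors take the place of actual BERT hidden states, and the classification head is untrained), not the authors' model.

```python
import numpy as np

# End-to-end SRL as token classification: each token gets a BIO role label
# from a linear head over contextual embeddings. All values are illustrative.
ROLES = ["O", "B-ARG0", "I-ARG0", "B-V", "B-ARG1", "I-ARG1"]

rng = np.random.default_rng(0)
tokens = ["Ali", "read", "the", "book"]          # hypothetical sentence
H = rng.normal(size=(len(tokens), 768))          # stand-in for BERT hidden states
W = rng.normal(size=(768, len(ROLES)))           # untrained classification head
b = np.zeros(len(ROLES))

logits = H @ W + b                               # (seq_len, num_labels)
pred_ids = logits.argmax(axis=-1)                # best label per token
pred_labels = [ROLES[i] for i in pred_ids]
print(list(zip(tokens, pred_labels)))
```

In a trained system the head (and typically the encoder) would be fit on annotated predicate-argument data, so no separate feature-extraction pipeline is needed.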
Related papers
- Tractable Offline Learning of Regular Decision Processes [50.11277112628193]
This work studies offline Reinforcement Learning (RL) in a class of non-Markovian environments called Regular Decision Processes (RDPs)
In RDPs, the unknown dependency of future observations and rewards on past interactions can be captured empirically.
Many algorithms first reconstruct this unknown dependency using automata learning techniques.
arXiv Detail & Related papers (2024-09-04T14:26:58Z)
- DAHRS: Divergence-Aware Hallucination-Remediated SRL Projection [0.7922558880545527]
We propose Divergence-Aware Hallucination-Remediated SRL projection (DAHRS).
We implement DAHRS, leveraging linguistically-informed alignment remediation followed by greedy First-Come First-Assign (FCFA) SRL projection.
We achieve a higher word-level F1 over XSRL: 87.6% vs. 77.3% (EN-FR) and 89.0% vs. 82.7% (EN-ES)
arXiv Detail & Related papers (2024-07-12T14:13:59Z)
- Language Rectified Flow: Advancing Diffusion Language Generation with Probabilistic Flows [53.31856123113228]
This paper proposes Language Rectified Flow.
Our method is based on the reformulation of the standard probabilistic flow models.
Experiments and ablation studies demonstrate that our method can be general, effective, and beneficial for many NLP tasks.
arXiv Detail & Related papers (2024-03-25T17:58:22Z)
- Orthogonal Subspace Learning for Language Model Continual Learning [45.35861158925975]
O-LoRA is a simple and efficient approach for continual learning in language models.
Our method induces only marginal additional parameter costs and requires no user data storage for replay.
arXiv Detail & Related papers (2023-10-22T02:23:44Z)
- Revisiting the Linear-Programming Framework for Offline RL with General Function Approximation [24.577243536475233]
Offline reinforcement learning (RL) concerns pursuing an optimal policy for sequential decision-making from a pre-collected dataset.
Recent theoretical progress has focused on developing sample-efficient offline RL algorithms with various relaxed assumptions on data coverage and function approximators.
We revisit the linear-programming framework for offline RL, and advance the existing results in several aspects.
arXiv Detail & Related papers (2022-12-28T15:28:12Z)
- PriMeSRL-Eval: A Practical Quality Metric for Semantic Role Labeling Systems Evaluation [66.79238445033795]
We propose PriMeSRL, a stricter SRL evaluation metric.
We show that PriMeSRL significantly lowers the reported quality of all SoTA SRL models.
We also show that PriMeSRL successfully penalizes actual failures in SoTA SRL models.
arXiv Detail & Related papers (2022-10-12T17:04:28Z)
- Transition-based Semantic Role Labeling with Pointer Networks [0.40611352512781856]
We propose the first transition-based SRL approach that is capable of completely processing an input sentence in a single left-to-right pass.
Thanks to our implementation based on Pointer Networks, full SRL can be accurately and efficiently done in $O(n^2)$, achieving the best performance to date on the majority of languages from the CoNLL-2009 shared task.
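The pointer mechanism behind this kind of transition-based decoder can be sketched as additive attention that selects an input position at each step. The dimensions, weights, and sentence length below are illustrative placeholders, not the paper's architecture.

```python
import numpy as np

# Toy pointer step: score every encoder position against the current
# decoder state, then "point" at the highest-probability position.
rng = np.random.default_rng(1)
n, d = 5, 16                        # hypothetical sentence length, hidden size
enc = rng.normal(size=(n, d))       # encoder states, one per word
dec = rng.normal(size=(d,))         # current decoder state
W1 = rng.normal(size=(d, d))        # attention parameters (untrained)
W2 = rng.normal(size=(d, d))
v = rng.normal(size=(d,))

scores = np.tanh(enc @ W1 + dec @ W2) @ v   # one score per input position
probs = np.exp(scores - scores.max())
probs /= probs.sum()                        # softmax over positions
pointed = int(probs.argmax())               # position the decoder points to
print(pointed, probs.round(3))
```

Each such step costs $O(n)$, and with $O(n)$ decoding steps in a single left-to-right pass, the full procedure stays within the quadratic bound quoted above.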
arXiv Detail & Related papers (2022-05-20T08:38:44Z)
- Text Generation with Efficient (Soft) Q-Learning [91.47743595382758]
Reinforcement learning (RL) offers a more flexible solution by allowing users to plug in arbitrary task metrics as reward.
We introduce a new RL formulation for text generation from the soft Q-learning perspective.
We apply the approach to a wide range of tasks, including learning from noisy/negative examples, adversarial attacks, and prompt generation.
arXiv Detail & Related papers (2021-06-14T18:48:40Z)
- Reducing Confusion in Active Learning for Part-Of-Speech Tagging [100.08742107682264]
Active learning (AL) uses a data selection algorithm to select useful training samples to minimize annotation cost.
We study the problem of selecting instances which maximally reduce the confusion between particular pairs of output tags.
Our proposed AL strategy outperforms other AL strategies by a significant margin.
arXiv Detail & Related papers (2020-11-02T06:24:58Z)
- Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
- Cross-Lingual Semantic Role Labeling with High-Quality Translated Training Corpus [41.031187560839555]
Cross-lingual semantic role labeling is one promising way to address the problem.
We propose a novel alternative based on corpus translation, constructing high-quality training datasets for the target languages.
Experimental results on Universal Proposition Bank show that the translation-based method is highly effective.
arXiv Detail & Related papers (2020-04-14T04:16:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.