Deep Multi-Task Model for Sarcasm Detection and Sentiment Analysis in
Arabic Language
- URL: http://arxiv.org/abs/2106.12488v1
- Date: Wed, 23 Jun 2021 16:00:32 GMT
- Title: Deep Multi-Task Model for Sarcasm Detection and Sentiment Analysis in
Arabic Language
- Authors: Abdelkader El Mahdaouy, Abdellah El Mekki, Kabil Essefar, Nabil El
Mamoun, Ismail Berrada, Ahmed Khoumsi
- Abstract summary: This paper introduces an end-to-end deep Multi-Task Learning (MTL) model, allowing knowledge interaction between the two tasks.
Overall results show that the proposed model outperforms its single-task counterparts on both Arabic Sentiment Analysis (SA) and sarcasm detection sub-tasks.
- Score: 1.1254693939127909
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The prominence of figurative language devices, such as sarcasm and irony,
poses serious challenges for Arabic Sentiment Analysis (SA). While previous
work has tackled SA and sarcasm detection separately, this paper introduces an
end-to-end deep Multi-Task Learning (MTL) model, allowing knowledge
interaction between the two tasks. Our MTL model's architecture consists of a
Bidirectional Encoder Representations from Transformers (BERT) model, a
multi-task attention interaction module, and two task classifiers. The overall
results show that our proposed model outperforms its single-task counterparts
on both the SA and sarcasm detection sub-tasks.
Related papers
- VEGA: Learning Interleaved Image-Text Comprehension in Vision-Language Large Models [76.94378391979228]
We introduce a new, more demanding task known as Interleaved Image-Text Comprehension (IITC).
This task challenges models to discern and disregard superfluous elements in both images and text in order to answer questions accurately.
In support of this task, we craft a new VEGA dataset, tailored for the IITC task on scientific content, and devise a subtask, Image-Text Association (ITA).
arXiv Detail & Related papers (2024-06-14T17:59:40Z)
- Multitask Multimodal Prompted Training for Interactive Embodied Task Completion [48.69347134411864]
Embodied MultiModal Agent (EMMA) is a unified encoder-decoder model that reasons over images and trajectories.
By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks.
arXiv Detail & Related papers (2023-11-07T15:27:52Z)
- Musketeer: Joint Training for Multi-task Vision Language Model with Task Explanation Prompts [75.75548749888029]
We present a vision-language model whose parameters are jointly trained on all tasks and fully shared among multiple heterogeneous tasks.
With a single model, Musketeer achieves results comparable to or better than strong baselines trained on single tasks, almost uniformly across multiple tasks.
arXiv Detail & Related papers (2023-05-11T17:57:49Z) - MaMMUT: A Simple Architecture for Joint Learning for MultiModal Tasks [59.09343552273045]
We propose a decoder-only model for multimodal tasks that is surprisingly effective at jointly learning these disparate vision-language tasks.
We demonstrate that joint learning of these diverse objectives is simple, effective, and maximizes the weight-sharing of the model across these tasks.
Our model achieves the state of the art on image-text and text-image retrieval, video question answering and open-vocabulary detection tasks, outperforming much larger and more extensively trained foundational models.
arXiv Detail & Related papers (2023-03-29T16:42:30Z) - Multitasking Models are Robust to Structural Failure: A Neural Model for
Bilingual Cognitive Reserve [78.3500985535601]
We find a surprising connection between multitask learning and robustness to neuron failures.
Our experiments show that bilingual language models retain higher performance under various neuron perturbations.
We provide a theoretical justification for this robustness by mathematically analyzing linear representation learning.
arXiv Detail & Related papers (2022-10-20T22:23:27Z) - CS-UM6P at SemEval-2022 Task 6: Transformer-based Models for Intended
Sarcasm Detection in English and Arabic [6.221019624345408]
Sarcasm is a form of figurative language where the intended meaning of a sentence differs from its literal meaning.
In this paper, we present the system we submitted to the intended sarcasm detection task in English and Arabic.
arXiv Detail & Related papers (2022-06-16T19:14:54Z) - BERT-based Multi-Task Model for Country and Province Level Modern
Standard Arabic and Dialectal Arabic Identification [1.1254693939127909]
We present our deep learning-based system, submitted to the second NADI shared task for country-level and province-level identification of Modern Standard Arabic (MSA) and Dialectal Arabic (DA).
The obtained results show that our MTL model outperforms single-task models on most subtasks.
arXiv Detail & Related papers (2021-06-23T16:07:58Z) - Combining Context-Free and Contextualized Representations for Arabic
Sarcasm Detection and Sentiment Identification [0.0]
This paper presents team SPPU-AASM's submission for the WANLP ArSarcasm shared task 2021, which centers on sarcasm and sentiment polarity detection for Arabic tweets.
The proposed system achieves an F1-sarcastic score of 0.62 and an F-PN score of 0.715 on the sarcasm and sentiment detection tasks, respectively.
arXiv Detail & Related papers (2021-03-09T19:39:43Z) - AraBERT and Farasa Segmentation Based Approach For Sarcasm and Sentiment
Detection in Arabic Tweets [0.0]
One subtask aims at developing a system that identifies whether a given Arabic tweet is sarcastic; the other aims to identify the tweet's sentiment.
Our final approach ranked seventh and fourth in the Sarcasm and Sentiment Detection subtasks, respectively.
arXiv Detail & Related papers (2021-03-02T12:33:50Z) - SPLAT: Speech-Language Joint Pre-Training for Spoken Language
Understanding [61.02342238771685]
Spoken language understanding requires a model to analyze an input acoustic signal, understand its linguistic content, and make predictions.
Various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text.
We propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules.
arXiv Detail & Related papers (2020-10-05T19:29:49Z)