TCE at Qur'an QA 2023 Shared Task: Low Resource Enhanced
Transformer-based Ensemble Approach for Qur'anic QA
- URL: http://arxiv.org/abs/2401.13060v1
- Date: Tue, 23 Jan 2024 19:32:54 GMT
- Title: TCE at Qur'an QA 2023 Shared Task: Low Resource Enhanced
Transformer-based Ensemble Approach for Qur'anic QA
- Authors: Mohammed Alaa Elkomy, Amany Sarhan
- Abstract summary: We present our approach to tackle Qur'an QA 2023 shared tasks A and B.
To address the challenge of low-resourced training data, we rely on transfer learning together with a voting ensemble.
We employ different architectures and learning mechanisms for a range of Arabic pre-trained transformer-based models for both tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present our approach to tackle Qur'an QA 2023 shared tasks
A and B. To address the challenge of low-resourced training data, we rely on
transfer learning together with a voting ensemble to improve prediction
stability across multiple runs. Additionally, we employ different architectures
and learning mechanisms for a range of Arabic pre-trained transformer-based
models for both tasks. To identify unanswerable questions, we propose using a
thresholding mechanism. Our top-performing systems greatly surpass the baseline
performance on the hidden split, achieving a MAP score of 25.05% for task A and
a partial Average Precision (pAP) of 57.11% for task B.
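The voting ensemble and answerability thresholding described in the abstract can be sketched as follows; the function names, the score-averaging scheme, and the 0.5 threshold are illustrative assumptions, not details taken from the paper.

```python
from statistics import mean

def ensemble_predict(candidates_per_run, threshold=0.5):
    """Voting-style ensemble over multiple fine-tuning runs.

    candidates_per_run: list of (answer_text, score) pairs, one best
    candidate per run. Scores of runs that agree on the same answer are
    averaged; the answer with the highest averaged score wins. If that
    score falls below `threshold`, the question is declared unanswerable
    (returned as None) -- the thresholding mechanism the paper proposes.
    """
    votes = {}
    for answer, score in candidates_per_run:
        votes.setdefault(answer, []).append(score)
    best_answer, scores = max(votes.items(), key=lambda kv: mean(kv[1]))
    return None if mean(scores) < threshold else best_answer
```

Averaging before thresholding is one simple way to get the prediction stability across runs that the abstract mentions.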
Related papers
- Solution for OOD-CV Workshop SSB Challenge 2024 (Open-Set Recognition Track) [6.998958192483059]
The challenge required identifying whether a test sample belonged to the semantic classes of a classifier's training set.
We proposed a hybrid approach, experimenting with the fusion of various post-hoc OOD detection techniques and different Test-Time Augmentation strategies.
Our best-performing method combined Test-Time Augmentation with the post-hoc OOD techniques, achieving a strong balance between AUROC and FPR95 scores.
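A minimal sketch of fusing Test-Time Augmentation with a post-hoc OOD score: maximum softmax probability (MSP) stands in here for whichever post-hoc techniques the entry actually combined, and all names are illustrative.

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability: a common post-hoc OOD score."""
    z = logits - logits.max()            # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

def tta_ood_score(model, views):
    """Average a post-hoc OOD score over test-time-augmented views.

    `model` maps one augmented view to class logits; `views` are
    augmented copies of a single test sample. A higher score suggests
    the sample is in-distribution.
    """
    return float(np.mean([msp_score(model(v)) for v in views]))
```

Thresholding the averaged score then yields the open-set accept/reject decision.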
arXiv Detail & Related papers (2024-09-30T13:28:14Z)
- DARA: Decomposition-Alignment-Reasoning Autonomous Language Agent for Question Answering over Knowledge Graphs [70.54226917774933]
We propose the Decomposition-Alignment-Reasoning Agent (DARA) framework.
DARA effectively parses questions into formal queries through a dual mechanism.
We show that DARA attains performance comparable to state-of-the-art enumerating-and-ranking-based methods for KGQA.
arXiv Detail & Related papers (2024-06-11T09:09:37Z)
- Smart Sampling: Self-Attention and Bootstrapping for Improved Ensembled Q-Learning [0.6963971634605796]
We present a novel method aimed at enhancing the sample efficiency of ensemble Q learning.
Our proposed approach integrates multi-head self-attention into the ensembled Q networks while bootstrapping the state-action pairs ingested by the ensemble.
arXiv Detail & Related papers (2024-05-14T00:57:02Z)
- Mavericks at ArAIEval Shared Task: Towards a Safer Digital Space -- Transformer Ensemble Models Tackling Deception and Persuasion [0.0]
We present our approaches for task 1-A and task 2-A of the shared task, which focus on persuasion technique detection and disinformation detection, respectively.
The tasks use multigenre snippets of tweets and news articles for the given binary classification problem.
We achieved a micro F1-score of 0.742 on task 1-A (8th rank on the leaderboard) and 0.901 on task 2-A (7th rank on the leaderboard).
arXiv Detail & Related papers (2023-11-30T17:26:57Z)
- Learning to Generalize for Cross-domain QA [11.627572092891226]
We propose a novel approach that combines prompting methods with a linear probing then fine-tuning strategy.
Our method has been theoretically and empirically shown to be effective in enhancing the generalization ability of both generative and discriminative models.
Our method can be easily integrated into any pre-trained models and offers a promising solution to the under-explored cross-domain QA task.
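The linear probing then fine-tuning strategy can be sketched on a toy linear model: the classification head is trained first while the encoder stays frozen, then both train jointly. This is a simplified stand-in for the general recipe, not the paper's implementation.

```python
import numpy as np

def lp_then_ft(encoder_w, head_w, X, y, lr=0.1, probe_steps=50, ft_steps=50):
    """Linear probing then fine-tuning on a tiny logistic model.

    Phase 1 (probing): gradient steps update only `head_w`.
    Phase 2 (fine-tuning): `encoder_w` unfreezes and trains jointly.
    """
    n = len(y)
    for step in range(probe_steps + ft_steps):
        h = X @ encoder_w                         # "encoder" features
        p = 1.0 / (1.0 + np.exp(-(h @ head_w)))   # sigmoid head
        err = p - y                               # logistic-loss gradient signal
        head_w = head_w - lr * h.T @ err / n      # head always trains
        if step >= probe_steps:                   # encoder unfreezes in phase 2
            encoder_w = encoder_w - lr * X.T @ np.outer(err, head_w) / n
    return encoder_w, head_w
```

Probing first gives the head a sensible starting point before full fine-tuning, which is often credited with better out-of-distribution behavior.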
arXiv Detail & Related papers (2023-05-14T17:53:54Z)
- ForkMerge: Mitigating Negative Transfer in Auxiliary-Task Learning [59.08197876733052]
Auxiliary-Task Learning (ATL) aims to improve the performance of the target task by leveraging the knowledge obtained from related tasks.
Sometimes, learning multiple tasks simultaneously results in lower accuracy than learning only the target task, a phenomenon known as negative transfer.
ForkMerge is a novel approach that periodically forks the model into multiple branches and automatically searches for suitable task weights.
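One fork-and-merge round can be sketched with flat parameter vectors; the branch updates and the weight search are stand-ins for training with different task weightings and validation-based selection, and all names are illustrative.

```python
import numpy as np

def fork_merge(params, branch_updates, weights):
    """One ForkMerge-style round on flat parameter vectors.

    The model is 'forked' into branches that each apply their own update
    (e.g. trained under a different auxiliary-task weighting); the
    branches are then merged as a weighted average, with `weights`
    chosen by a search over validation performance.
    """
    branches = [params + u for u in branch_updates]
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()                      # normalize merge weights
    return sum(w * b for w, b in zip(weights, branches))
```

Down-weighting a branch whose auxiliary task hurts validation performance is how this scheme mitigates negative transfer.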
arXiv Detail & Related papers (2023-01-30T02:27:02Z)
- Reducing Variance in Temporal-Difference Value Estimation via Ensemble of Deep Networks [109.59988683444986]
MeanQ is a simple ensemble method that estimates target values as ensemble means.
We show that MeanQ achieves remarkable sample efficiency in experiments on the Atari Learning Environment benchmark.
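The core MeanQ idea, computing the TD target from the ensemble mean, can be sketched as follows; the array layout and greedy bootstrap are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np

def meanq_target(reward, gamma, next_qs):
    """TD target with an ensemble-mean bootstrap (MeanQ-style).

    next_qs: array of shape (K, A) -- Q-value estimates over A actions
    at the next state from K ensemble members. Averaging the members
    before taking the max reduces the variance of the target.
    """
    ensemble_mean = np.mean(next_qs, axis=0)       # average over members
    return reward + gamma * ensemble_mean.max()    # greedy bootstrap
```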
arXiv Detail & Related papers (2022-09-16T01:47:36Z)
- DTW at Qur'an QA 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain [10.172732008860539]
Machine reading comprehension has been understudied in several domains, including religious texts.
The goal of the Qur'an QA 2022 shared task is to fill this gap by producing state-of-the-art question answering and reading comprehension research on the Qur'an.
arXiv Detail & Related papers (2022-05-12T11:17:23Z)
- Learning to Perturb Word Embeddings for Out-of-distribution QA [55.103586220757464]
We propose a simple yet effective DA method based on a noise generator, which learns to perturb the word embedding of the input questions and context without changing their semantics.
We validate the performance of QA models trained with our word-embedding perturbation on a single source dataset, across five different target domains.
Notably, the model trained with ours outperforms the model trained with more than 240K artificially generated QA pairs.
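The paper learns its perturbations with a noise generator; the sketch below substitutes norm-bounded Gaussian noise as a simplified stand-in, so the scale and names are illustrative assumptions.

```python
import numpy as np

def perturb_embeddings(emb, noise_scale=0.1, rng=None):
    """Data augmentation by perturbing word embeddings.

    Adds noise to each token embedding, rescaled so its norm is a small
    fraction (`noise_scale`) of the embedding's own norm -- the intent
    being to vary the input without changing its semantics.
    emb: array of shape (num_tokens, dim).
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(size=emb.shape)
    noise *= noise_scale * np.linalg.norm(emb, axis=-1, keepdims=True) / (
        np.linalg.norm(noise, axis=-1, keepdims=True) + 1e-8
    )
    return emb + noise
```

Training on both the clean and perturbed embeddings is what makes this a data-augmentation scheme rather than an adversarial attack.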
arXiv Detail & Related papers (2021-05-06T14:12:26Z)
- Meta-Generating Deep Attentive Metric for Few-shot Classification [53.07108067253006]
We present a novel deep metric meta-generation method to generate a specific metric for a new few-shot learning task.
In this study, we structure the metric using a three-layer deep attentive network that is flexible enough to produce a discriminative metric for each task.
We obtain surprisingly clear performance improvements over state-of-the-art competitors, especially in the challenging cases.
arXiv Detail & Related papers (2020-12-03T02:07:43Z)
- Device-Robust Acoustic Scene Classification Based on Two-Stage Categorization and Data Augmentation [63.98724740606457]
We present a joint effort of four groups, namely GT, USTC, Tencent, and UKE, to tackle Task 1 - Acoustic Scene Classification (ASC) in the DCASE 2020 Challenge.
Task 1a focuses on ASC of audio signals recorded with multiple (real and simulated) devices into ten different fine-grained classes.
Task 1b concerns the classification of data into three higher-level classes using low-complexity solutions.
arXiv Detail & Related papers (2020-07-16T15:07:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.