Explainable AI: XAI-Guided Context-Aware Data Augmentation
- URL: http://arxiv.org/abs/2506.03484v1
- Date: Wed, 04 Jun 2025 01:47:24 GMT
- Title: Explainable AI: XAI-Guided Context-Aware Data Augmentation
- Authors: Melkamu Abay Mersha, Mesay Gemeda Yigezu, Atnafu Lambebo Tonja, Hassan Shakil, Samer Iskander, Olga Kolesnikova, Jugal Kalita
- Abstract summary: The scarcity of labeled data remains a fundamental challenge in developing robust and generalizable AI models. Conventional data augmentation techniques introduce noise, cause semantic drift, disrupt contextual coherence, lack control, and lead to overfitting. We propose XAI-Guided Context-Aware Data Augmentation. Our approach integrates an iterative feedback loop, which refines augmented data over multiple augmentation cycles based on explainability-driven insights.
- Score: 5.721631408241837
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Explainable AI (XAI) has emerged as a powerful tool for improving the performance of AI models, going beyond providing model transparency and interpretability. The scarcity of labeled data remains a fundamental challenge in developing robust and generalizable AI models, particularly for low-resource languages. Conventional data augmentation techniques introduce noise, cause semantic drift, disrupt contextual coherence, lack control, and lead to overfitting. To address these challenges, we propose XAI-Guided Context-Aware Data Augmentation. This novel framework leverages XAI techniques to modify less critical features while selectively preserving the most task-relevant features. Our approach integrates an iterative feedback loop, which refines augmented data over multiple augmentation cycles based on explainability-driven insights and the model's performance gain. Our experimental results demonstrate that XAI-SR-BT and XAI-PR-BT improve model accuracy on hate speech and sentiment analysis tasks by 6.6% and 8.1%, respectively, compared to the baseline, using the Amharic dataset with the XLM-R model. XAI-SR-BT and XAI-PR-BT outperform existing augmentation techniques by 4.8% and 5%, respectively, on the same dataset and model. Overall, XAI-SR-BT and XAI-PR-BT consistently outperform both baseline and conventional augmentation techniques across all tasks and models. This study provides a more controlled, interpretable, and context-aware solution to data augmentation, addressing critical limitations of existing augmentation techniques and offering a new paradigm for leveraging XAI to enhance AI model training.
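To make the mechanism concrete, below is a minimal sketch of the attribution-guided synonym-replacement idea (in the spirit of XAI-SR), assuming a generic token-level attribution backend and a synonym lexicon. The `attribution` and `synonyms` callables are hypothetical placeholders, not the authors' implementation.

```python
import random
from typing import Callable, List

def xai_guided_synonym_replacement(
    tokens: List[str],
    attribution: Callable[[List[str]], List[float]],  # hypothetical XAI backend
    synonyms: Callable[[str], List[str]],             # hypothetical lexicon lookup
    keep_ratio: float = 0.3,
    replace_prob: float = 0.5,
) -> List[str]:
    """Replace only low-attribution tokens, preserving the most
    task-relevant ones (a sketch, not the paper's released code)."""
    scores = attribution(tokens)
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Protect the tokens the model relies on most.
    protected = set(sorted(range(len(tokens)),
                           key=lambda i: scores[i], reverse=True)[:n_keep])
    augmented = []
    for i, tok in enumerate(tokens):
        candidates = synonyms(tok)
        if i not in protected and candidates and random.random() < replace_prob:
            augmented.append(random.choice(candidates))  # perturb a less critical token
        else:
            augmented.append(tok)                        # keep a task-relevant token
    return augmented
```

In the full pipeline described in the abstract, an outer feedback loop would recompute attributions after each training cycle and retain only augmentations that yield a measurable performance gain.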
Related papers
- VirtualXAI: A User-Centric Framework for Explainability Assessment Leveraging GPT-Generated Personas [0.07499722271664146]
The demand for eXplainable AI (XAI) has increased to enhance the interpretability, transparency, and trustworthiness of AI models. We propose a framework that integrates quantitative benchmarking with qualitative user assessments through virtual personas. This yields an estimated XAI score and provides tailored recommendations for both the optimal AI model and the XAI method for a given scenario.
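As a toy illustration of how such a combined score could be formed (the snippet does not specify the framework's actual weighting), one might blend the quantitative benchmark with the mean persona rating; `alpha` is an assumed mixing weight:

```python
from typing import List

def estimated_xai_score(quantitative: float,
                        persona_ratings: List[float],
                        alpha: float = 0.5) -> float:
    """Blend a quantitative benchmark score with the mean rating of
    GPT-generated personas (a hypothetical weighting, for illustration)."""
    qualitative = sum(persona_ratings) / len(persona_ratings)
    return alpha * quantitative + (1 - alpha) * qualitative
```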
arXiv Detail & Related papers (2025-03-06T09:44:18Z) - IMPACTX: Improving Model Performance by Appropriately predicting CorrecT eXplanations [0.0]
IMPACTX is a novel approach that leverages XAI as a fully automated attention mechanism. It provides proper feature attribution maps for the model's decisions, without relying on external XAI methods.
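A loose sketch of the "XAI as a built-in attention mechanism" idea, assuming a learned per-feature relevance scorer that gates the input; IMPACTX's actual architecture is not reproduced here:

```python
import torch
import torch.nn as nn

class AttributionAttention(nn.Module):
    """Learn a feature-attribution map end-to-end and use it to gate
    the features -- a hypothetical stand-in for the paper's mechanism."""
    def __init__(self, dim: int):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attribution = self.scorer(x)  # per-feature relevance in [0, 1]
        return x * attribution        # attend to the relevant features
```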
arXiv Detail & Related papers (2025-02-17T14:15:20Z) - Verifying Machine Unlearning with Explainable AI [46.7583989202789]
We investigate the effectiveness of Explainable AI (XAI) in verifying Machine Unlearning (MU) within the context of harbor front monitoring.
Our proof-of-concept introduces feature attribution as an innovative verification step for MU, expanding beyond traditional metrics.
We propose two novel XAI-based metrics, Heatmap Coverage (HC) and Attention Shift (AS), to evaluate the effectiveness of these methods.
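The snippet does not define HC and AS precisely; one plausible formulation, offered only as an illustrative sketch, measures how much attribution mass stays inside a region of interest (HC) and how far the attribution centroid moves after unlearning (AS):

```python
import numpy as np

def heatmap_coverage(heatmap: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of attribution mass inside a region of interest --
    one plausible reading of Heatmap Coverage (HC)."""
    total = heatmap.sum()
    return float((heatmap * mask).sum() / total) if total > 0 else 0.0

def attention_shift(before: np.ndarray, after: np.ndarray) -> float:
    """Distance between attribution centroids before and after
    unlearning -- one plausible reading of Attention Shift (AS).
    Assumes non-negative heatmaps with positive total mass."""
    def centroid(h: np.ndarray) -> np.ndarray:
        coords = np.indices(h.shape).reshape(h.ndim, -1)
        weights = h.reshape(-1)
        return (coords * weights).sum(axis=1) / weights.sum()
    return float(np.linalg.norm(centroid(before) - centroid(after)))
```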
arXiv Detail & Related papers (2024-11-20T13:57:32Z) - Unleashing LLM Reasoning Capability via Scalable Question Synthesis from Scratch [54.12139707822201]
We propose ScaleQuest, a novel, scalable, and cost-effective data synthesis method. By generating diverse questions from scratch, we produce a dataset of 1 million problem-solution pairs. Our experiments demonstrate that models trained on our data outperform those trained on existing open-source datasets.
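Schematically, such a synthesis loop might look as follows, where the generator, solver, and quality filter are all placeholders for the paper's actual stages:

```python
from typing import Callable, List, Tuple

def synthesize_pairs(
    generate_question: Callable[[], str],        # hypothetical question generator
    solve: Callable[[str], str],                 # hypothetical solution model
    quality_filter: Callable[[str, str], bool],  # hypothetical filtering stage
    target: int = 1_000_000,
) -> List[Tuple[str, str]]:
    """Generate problem-solution pairs from scratch until the target
    size is reached -- a sketch in the spirit of ScaleQuest."""
    pairs: List[Tuple[str, str]] = []
    while len(pairs) < target:
        question = generate_question()
        answer = solve(question)
        if quality_filter(question, answer):  # keep only high-quality pairs
            pairs.append((question, answer))
    return pairs
```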
arXiv Detail & Related papers (2024-10-24T12:42:04Z) - XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach [2.0209172586699173]
This paper introduces a novel XAI-integrated Visual Quality Inspection framework.
Our framework incorporates XAI and a large vision-language model to deliver human-centered interpretability.
This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications.
arXiv Detail & Related papers (2024-07-16T14:30:24Z) - Explainable AI for Enhancing Efficiency of DL-based Channel Estimation [1.0136215038345013]
Support for artificial-intelligence-based decision-making is a key element in future 6G networks. In such applications, using AI as a black-box model is risky and challenging. We propose a novel XAI-based framework, XAI-CHEST, oriented toward channel estimation in wireless communications.
arXiv Detail & Related papers (2024-07-09T16:24:21Z) - A Deep Dive into Perturbations as Evaluation Technique for Time Series XAI [13.269396832189754]
XAI for time series data has become increasingly important in finance, healthcare, and climate science.
However, evaluating the quality of explanations, such as attributions provided by XAI techniques, remains challenging.
This paper provides an in-depth analysis of using perturbations to evaluate attributions extracted from time series models.
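The standard perturbation test behind such analyses can be sketched as follows: occlude the time steps an attribution ranks highest and measure the drop in model output; a faithful attribution should cause a large drop. This is a generic sketch, not the paper's exact protocol.

```python
from typing import Callable
import numpy as np

def perturbation_drop(model: Callable[[np.ndarray], float],
                      x: np.ndarray,
                      attribution: np.ndarray,
                      k: int = 10,
                      baseline: float = 0.0) -> float:
    """Occlude the k highest-attribution time steps and return the
    resulting drop in the model's scalar output."""
    original = model(x)
    top_k = np.argsort(attribution)[-k:]  # indices of the most important steps
    perturbed = x.copy()
    perturbed[top_k] = baseline           # replace them with a baseline value
    return float(original - model(perturbed))  # larger drop = more faithful
```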
arXiv Detail & Related papers (2023-07-11T08:26:08Z) - On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training [109.9218185711916]
Aspect-based sentiment analysis (ABSA) aims to automatically infer the sentiment polarities toward specific aspects of products or services in social media texts or reviews.
We propose to enhance ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z) - Optimizing Explanations by Network Canonization and Hyperparameter Search [74.76732413972005]
Rule-based and modified backpropagation XAI approaches often face challenges when applied to modern model architectures.
Model canonization is the process of re-structuring the model to disregard problematic components without changing the underlying function.
In this work, we propose canonizations for currently relevant model blocks applicable to popular deep neural network architectures.
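A classic instance of canonization is folding a BatchNorm layer into the preceding linear layer, so that rule-based methods such as LRP see a single, clean layer. A minimal sketch (assuming evaluation mode, running statistics, and a Linear layer with a bias; the paper's canonizations cover more block types):

```python
import torch
import torch.nn as nn

def fold_batchnorm_into_linear(linear: nn.Linear,
                               bn: nn.BatchNorm1d) -> nn.Linear:
    """Canonize a Linear -> BatchNorm1d pair into one equivalent Linear
    layer (a standard BN-folding sketch, not the paper's full method)."""
    fused = nn.Linear(linear.in_features, linear.out_features)
    with torch.no_grad():
        scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)  # gamma / sigma
        fused.weight.copy_(linear.weight * scale[:, None])       # rescale each output row
        fused.bias.copy_((linear.bias - bn.running_mean) * scale + bn.bias)
    return fused
```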
arXiv Detail & Related papers (2022-11-30T17:17:55Z) - Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically, through experiments in toy and realistic settings, how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z) - Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
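A minimal sketch of the complexity-ranked curriculum, assuming any software-engineering complexity metric (e.g., cyclomatic complexity) as the ranking key; the paper's actual staging scheme is not reproduced here:

```python
from typing import Callable, Iterator, List, TypeVar

T = TypeVar("T")

def complexity_curriculum(samples: List[T],
                          complexity: Callable[[T], float],
                          num_stages: int = 3) -> Iterator[List[T]]:
    """Yield training subsets ordered from simple to complex code --
    a sketch of curriculum learning keyed on an SE complexity metric."""
    ranked = sorted(samples, key=complexity)
    stage_size = max(1, len(ranked) // num_stages)
    for s in range(1, num_stages + 1):
        # Each stage adds harder samples while retaining earlier ones.
        yield ranked if s == num_stages else ranked[: s * stage_size]
```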
arXiv Detail & Related papers (2021-11-10T17:58:18Z)