Explainable News Summarization -- Analysis and mitigation of Disagreement Problem
- URL: http://arxiv.org/abs/2410.18560v1
- Date: Thu, 24 Oct 2024 09:07:44 GMT
- Title: Explainable News Summarization -- Analysis and mitigation of Disagreement Problem
- Authors: Seema Aswani, Sujala D. Shetty
- Abstract summary: Explainable AI (XAI) techniques for text summarization provide valuable insight into how summaries are generated.
Recent studies have highlighted a major challenge in this area, known as the disagreement problem.
This problem occurs when different XAI methods offer contradictory explanations for the summary generated from the same input article.
We propose a novel approach that uses sentence transformers and the k-means clustering algorithm to first segment the input article and then generate an explanation of the summary produced for each segment.
- Score: 2.685668802278156
- License:
- Abstract: Explainable AI (XAI) techniques for text summarization provide valuable insight into how summaries are generated. Recent studies have highlighted a major challenge in this area, known as the disagreement problem. This problem occurs when different XAI methods offer contradictory explanations for the summary generated from the same input article. This inconsistency across XAI methods has been evaluated using predefined metrics designed to quantify agreement levels between them, revealing significant disagreement, which impedes the reliability and interpretability of XAI in this area. To address this challenge, we propose a novel approach that uses sentence transformers and the k-means clustering algorithm to first segment the input article and then generate an explanation of the summary produced for each segment. We hypothesize that producing regional, segmented explanations rather than comprehensive ones reduces the observed disagreement between XAI methods. This segmentation-based approach was applied to two news summarization datasets, Extreme Summarization (XSum) and CNN-DailyMail, and evaluated using multiple disagreement metrics. Our experiments validate the hypothesis by showing a significant reduction in disagreement among different XAI methods. Additionally, an easy-to-use JavaScript visualization tool was developed that allows users to interactively explore a color-coded visualization of the input article and the machine-generated summary based on the attribution score of each sentence.
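The abstract gives enough detail to sketch the technical pieces it describes. First, the segmentation step. The snippet below is a minimal sketch, not the authors' released code: it assumes the sentence-transformers, scikit-learn, and NLTK packages, an illustrative embedding model ("all-MiniLM-L6-v2"), and an illustrative segment count, none of which the abstract specifies.

```python
# Minimal sketch of the segmentation step: embed sentences, cluster with
# k-means, and return one sentence list per cluster (segment).
# Assumptions (not from the paper): package choices, model name, and k.
import nltk
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

nltk.download("punkt", quiet=True)      # sentence tokenizer data
nltk.download("punkt_tab", quiet=True)  # needed on newer NLTK versions


def segment_article(article: str, n_segments: int = 4) -> list[list[str]]:
    sentences = nltk.sent_tokenize(article)
    k = min(n_segments, len(sentences))  # guard against short articles
    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model
    embeddings = model.encode(sentences)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    segments: list[list[str]] = [[] for _ in range(k)]
    for sentence, label in zip(sentences, labels):
        segments[label].append(sentence)
    return segments
```

Each segment's summary would then be explained separately by the chosen XAI methods. The abstract says the experiments used multiple disagreement metrics without naming them; one standard example from the disagreement-problem literature (Krishna et al., 2022) is feature agreement, the fraction of overlap between the top-k items of two attribution vectors, applied here at the sentence level:

```python
def feature_agreement(attr_a: list[float], attr_b: list[float], k: int = 5) -> float:
    """Fraction of overlap between the k items with the largest absolute
    attribution under two XAI methods; 1.0 means the methods fully agree
    on the k most important sentences. One possible disagreement metric,
    not necessarily one of those the paper used."""
    def top_k(attr: list[float]) -> set[int]:
        return set(sorted(range(len(attr)), key=lambda i: -abs(attr[i]))[:k])

    return len(top_k(attr_a) & top_k(attr_b)) / k
```

Finally, the visualization: the paper's tool is written in JavaScript, so the hypothetical helper below only illustrates the underlying idea of mapping each sentence's attribution score to a background color intensity.

```python
import html


def colorize(sentences: list[str], scores: list[float]) -> str:
    """Render sentences as HTML spans whose highlight opacity tracks the
    normalized absolute attribution score (illustration only)."""
    peak = max((abs(s) for s in scores), default=0.0) or 1.0
    spans = [
        f'<span style="background: rgba(255, 165, 0, {abs(s) / peak:.2f})">'
        f"{html.escape(text)}</span>"
        for text, s in zip(sentences, scores)
    ]
    return " ".join(spans)
```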
Related papers
- H-STAR: LLM-driven Hybrid SQL-Text Adaptive Reasoning on Tables [56.73919743039263]
This paper introduces a novel algorithm that integrates both symbolic and semantic (textual) approaches in a two-stage process to address the limitations of existing approaches.
Our experiments demonstrate that H-STAR significantly outperforms state-of-the-art methods across three question-answering (QA) and fact-verification datasets.
arXiv Detail & Related papers (2024-06-29T21:24:19Z)
- Local Universal Explainer (LUX) -- a rule-based explainer with factual, counterfactual and visual explanations [7.673339435080445]
Local Universal Explainer (LUX) is a rule-based explainer that can generate factual, counterfactual and visual explanations.
It is based on a modified version of decision tree algorithms that allows for oblique splits and integration with feature importance XAI methods such as SHAP.
We tested our method on real and synthetic datasets and compared it with state-of-the-art rule-based explainers such as LORE, EXPLAN and Anchor.
arXiv Detail & Related papers (2023-10-23T13:04:15Z)
- Scene Text Recognition Models Explainability Using Local Features [11.990881697492078]
Scene Text Recognition (STR) explainability is the study of how humans can understand the cause of a model's prediction.
Recent XAI literature on STR provides only simple analyses and does not fully explore other XAI methods.
We specifically work on data explainability frameworks, called attribution-based methods, that explain the important parts of input data in deep learning models.
We propose a new method, STRExp, that takes into consideration local explanations, i.e., the individual character prediction explanations.
arXiv Detail & Related papers (2023-10-14T10:01:52Z)
- Theoretical Behavior of XAI Methods in the Presence of Suppressor Variables [0.8602553195689513]
In recent years, the 'explainable artificial intelligence' (XAI) community has created a vast body of methods to bridge a perceived gap between model 'complexity' and 'interpretability'.
We show that the majority of the studied approaches will attribute non-zero importance to a non-class-related suppressor feature in the presence of correlated noise.
arXiv Detail & Related papers (2023-06-02T11:41:19Z)
- An Experimental Investigation into the Evaluation of Explainability Methods [60.54170260771932]
This work compares 14 different metrics when applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references.
Experimental results show which of these metrics produces highly correlated results, indicating potential redundancy.
arXiv Detail & Related papers (2023-05-25T08:07:07Z)
- INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations [58.062003028768636]
Current XAI approaches only focus on delivering a single explanation.
This paper proposes a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder).
Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation.
arXiv Detail & Related papers (2022-09-02T13:52:39Z)
- The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey [25.59111855107199]
Neural encoder-decoder models pioneered by the Seq2Seq framework have been proposed to achieve the goal of generating more abstractive summaries.
At a high level, such neural models can freely generate summaries without any constraint on the words or phrases used.
However, the neural model's abstraction ability is a double-edged sword.
arXiv Detail & Related papers (2021-04-30T08:46:13Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Data Representing Ground-Truth Explanations to Evaluate XAI Methods [0.0]
Explainable artificial intelligence (XAI) methods are currently evaluated with approaches that mostly originated in interpretable machine learning (IML) research.
We propose to represent explanations with canonical equations that can be used to evaluate the accuracy of XAI methods.
arXiv Detail & Related papers (2020-11-18T16:54:53Z)
- Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z)
- Exploring Explainable Selection to Control Abstractive Summarization [51.74889133688111]
We develop a novel framework that focuses on explainability.
A novel pair-wise matrix captures the sentence interactions, centrality, and attribute scores.
A sentence-deployed attention mechanism in the abstractor ensures the final summary emphasizes the desired content.
arXiv Detail & Related papers (2020-04-24T14:39:34Z)