Explainable Authorship Identification in Cultural Heritage Applications:
Analysis of a New Perspective
- URL: http://arxiv.org/abs/2311.02237v1
- Date: Fri, 3 Nov 2023 20:51:15 GMT
- Title: Explainable Authorship Identification in Cultural Heritage Applications:
Analysis of a New Perspective
- Authors: Mattia Setzu and Silvia Corbara and Anna Monreale and Alejandro Moreo
and Fabrizio Sebastiani
- Abstract summary: We explore the applicability of existing general-purpose eXplainable Artificial Intelligence (XAI) techniques to AId.
In particular, we assess the relative merits of three different types of XAI techniques on three different AId tasks.
Our analysis shows that, while these techniques make important first steps towards explainable Authorship Identification, more work remains to be done.
- Score: 48.031678295495574
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While a substantial amount of work has recently been devoted to
enhancing the performance of computational Authorship Identification (AId)
systems, little to no attention has been paid to endowing AId systems with the
ability to explain the reasons behind their predictions. This lack substantially hinders the
practical employment of AId methodologies, since the predictions returned by
such systems are hardly useful unless they are supported with suitable
explanations. In this paper, we explore the applicability of existing
general-purpose eXplainable Artificial Intelligence (XAI) techniques to AId,
with a special focus on explanations addressed to scholars working in cultural
heritage. In particular, we assess the relative merits of three different types
of XAI techniques (feature ranking, probing, and factual/counterfactual
selection) on three different AId tasks (authorship attribution, authorship
verification, and same-authorship verification) by running experiments on real AId
data. Our analysis shows that, while these techniques make important first
steps towards explainable Authorship Identification, more work remains to be
done in order to provide tools that can be profitably integrated in the
workflows of scholars.
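To make the first of these technique types concrete, the sketch below trains a toy character n-gram attributor and ranks the features driving one prediction. This is a minimal, hypothetical illustration of feature ranking for authorship attribution (invented corpus, simple linear model), not the pipeline used in the paper:

```python
# Minimal sketch of feature-ranking explanations for authorship
# attribution. Toy corpus and model are hypothetical, not the paper's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "call me ishmael, for the sea was all i knew",
    "the whale rose and the ship shuddered beneath us",
    "it was the best of times, it was the worst of times",
    "the fog crept through the london streets at dawn",
    "reader, i married him, and the moors fell silent",
    "the red room held me until the candle guttered out",
]
authors = ["melville", "melville", "dickens", "dickens", "bronte", "bronte"]

# Character n-grams are a standard stylometric representation.
vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
X = vec.fit_transform(texts)
clf = LogisticRegression(max_iter=1000).fit(X, authors)

# Explain one attribution by ranking features by their contribution
# (coefficient * feature value) toward the predicted author.
doc = vec.transform(["the sea carried the ship past the sleeping whale"])
pred = clf.predict(doc)[0]
row = list(clf.classes_).index(pred)
contrib = clf.coef_[row] * doc.toarray()[0]
top = sorted(zip(vec.get_feature_names_out(), contrib),
             key=lambda t: -abs(t[1]))[:5]
print(pred, top)
```

A scholar would read the top-ranked character n-grams as the stylistic cues on which the attribution rests.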
Related papers
- SyROCCo: Enhancing Systematic Reviews using Machine Learning [6.805429133535976]
This paper explores the use of machine learning techniques to help navigate the systematic review process.
The application of ML techniques to subsequent stages of a review, such as data extraction and evidence mapping, is in its infancy.
arXiv Detail & Related papers (2024-06-24T11:04:43Z) - A Systematic Literature Review on Explainability for Machine/Deep
Learning-based Software Engineering Research [23.966640472958105]
This paper presents a systematic literature review of approaches that aim to improve the explainability of AI models within the context of Software Engineering.
We aim to (1) summarize the SE tasks where XAI techniques have shown success to date; (2) classify and analyze different XAI techniques; and (3) investigate existing evaluation approaches.
arXiv Detail & Related papers (2024-01-26T03:20:40Z) - Strategies to exploit XAI to improve classification systems [0.0]
XAI aims to provide insights into the decision-making process of AI models, allowing users to understand the results beyond the decisions themselves.
Most XAI literature focuses on how to explain an AI system, while less attention has been given to how XAI methods can be exploited to improve an AI system.
arXiv Detail & Related papers (2023-06-09T10:38:26Z) - Impact Of Explainable AI On Cognitive Load: Insights From An Empirical
Study [0.0]
This study measures cognitive load, task performance, and task time for implementation-independent XAI explanation types using a COVID-19 use case.
We found that these explanation types strongly influence end-users' cognitive load, task performance, and task time.
arXiv Detail & Related papers (2023-04-18T09:52:09Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic approaches to explanation that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Let's Go to the Alien Zoo: Introducing an Experimental Framework to
Study Usability of Counterfactual Explanations for Machine Learning [6.883906273999368]
Counterfactual explanations (CFEs) have gained traction as a psychologically grounded approach to generate post-hoc explanations.
We introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study.
arXiv Detail & Related papers (2022-05-06T17:57:05Z) - Beyond Explaining: Opportunities and Challenges of XAI-Based Model
Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically to improve various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
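The sketch below illustrates the underlying idea with a generic greedy counterfactual search on a toy linear classifier: nudge features until the prediction flips. All data and names are invented for illustration; this is the plain counterfactual recipe, not the CEILS method, which instead intervenes on a latent causal representation:

```python
# Generic counterfactual search on a toy model (illustrative only;
# this is not CEILS, which acts in a causal latent space).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = np.array([-1.0, -0.5, 0.2])  # instance currently classified as 0
cf = x.copy()
for _ in range(200):
    if clf.predict(cf.reshape(1, -1))[0] == 1:  # desired outcome reached
        break
    # Greedily move the feature with the steepest effect on the score.
    j = np.argmax(np.abs(clf.coef_[0]))
    cf[j] += 0.1 * np.sign(clf.coef_[0][j])
print("original:", x)
print("counterfactual:", cf)
print("features to change:", cf - x)
```

The difference `cf - x` is the explanation: which features to change, and by how much, to reach the desired outcome.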
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge, understanding, and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
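One plausible way to score such saliency-vs-rationale agreement is token-level average precision; the toy tokens and scores below are invented, and the paper's exact metrics may differ:

```python
# Toy agreement check between a model's saliency scores and a human
# rationale mask, scored as average precision (illustrative metric).
from sklearn.metrics import average_precision_score

tokens   = ["the", "film", "was", "utterly", "brilliant", "."]
human    = [0, 0, 0, 1, 1, 0]                    # human rationale mask
saliency = [0.05, 0.10, 0.02, 0.70, 0.90, 0.01]  # model saliency scores

print("rationale AP:", average_precision_score(human, saliency))
```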
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.