A novel approach to generate datasets with XAI ground truth to evaluate image models
- URL: http://arxiv.org/abs/2302.05624v2
- Date: Tue, 3 Oct 2023 21:01:30 GMT
- Title: A novel approach to generate datasets with XAI ground truth to evaluate image models
- Authors: Miquel Miró-Nicolau, Antoni Jaume-i-Capó, Gabriel Moyà-Alcover
- Abstract summary: We propose a new method to generate datasets with ground truth (GT).
We conducted a set of experiments comparing our GT with real model explanations and obtained excellent results, confirming that our proposed method is correct.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increased usage of artificial intelligence (AI), it is
imperative to understand how these models work internally. This need has led
to the development of a new field called eXplainable artificial intelligence
(XAI). This field consists of a set of techniques that allow us to determine,
in principle, the causes of an AI model's decisions. One main issue in XAI is
how to verify work in this field, given the lack of ground truth (GT). In this
study, we propose a new method to generate datasets with GT. We conducted a
set of experiments comparing our GT with real model explanations and obtained
excellent results, confirming that our proposed method is correct.
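The abstract only describes the idea at a high level. The sketch below is a minimal, hypothetical illustration (not the authors' actual procedure) of what a dataset with XAI ground truth can mean: the label is tied to a known pixel region by construction, so any saliency explanation can be scored against that region. The function names (make_sample, transparent_model_weights, top_k_overlap) and the top-k overlap metric are assumptions chosen for illustration only.

```python
# Hypothetical sketch: synthetic images whose ground-truth (GT) attribution
# mask is known by construction, plus a simple score comparing an explanation
# against that GT mask. Not the authors' method; for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def make_sample(size=16):
    """Image whose label depends only on a fixed 4x4 patch; that patch is the GT mask."""
    img = rng.normal(0.0, 1.0, size=(size, size))
    gt_mask = np.zeros((size, size), dtype=bool)
    gt_mask[2:6, 2:6] = True                    # the only class-relevant region, by construction
    label = int(img[gt_mask].mean() > 0.0)      # label ignores every other pixel
    return img, label, gt_mask

def transparent_model_weights(size=16):
    """A 'transparent' linear scorer that, by design, only reads the GT patch."""
    w = np.zeros((size, size))
    w[2:6, 2:6] = 1.0
    return w

def gradient_saliency(w):
    """For a linear score s = sum(w * img), the input gradient is simply w."""
    return np.abs(w)

def top_k_overlap(saliency, gt_mask, k=None):
    """Fraction of the k most-attributed pixels that fall inside the GT mask."""
    k = int(gt_mask.sum()) if k is None else k
    top = np.argsort(saliency, axis=None)[-k:]          # flat indices of top-k pixels
    return float(np.isin(top, np.flatnonzero(gt_mask)).mean())

img, label, gt_mask = make_sample()
saliency = gradient_saliency(transparent_model_weights())
print("label:", label, "| overlap with GT mask:", top_k_overlap(saliency, gt_mask))
```

For this toy transparent model the overlap is 1.0 by design; a real experiment would replace the linear scorer with a trained model and the gradient with the XAI method under evaluation, keeping the GT mask fixed.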
Related papers
- AI in a vat: Fundamental limits of efficient world modelling for agent sandboxing and interpretability [84.52205243353761]
Recent work proposes using world models to generate controlled virtual environments in which AI agents can be tested before deployment.
We investigate ways of simplifying world models that remain agnostic to the AI agent under evaluation.
arXiv Detail & Related papers (2025-04-06T20:35:44Z) - Explainable AI-Based Interface System for Weather Forecasting Model [21.801445160287532]
This study defines three requirements for explanations of black-box models in meteorology through user studies.
Appropriate XAI methods are mapped to each requirement, and the generated explanations are tested quantitatively and qualitatively.
Results indicate that the explanations increase decision utility and user trust.
arXiv Detail & Related papers (2025-04-01T13:52:34Z) - XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach [2.0209172586699173]
This paper introduces a novel XAI-integrated Visual Quality Inspection framework.
Our framework incorporates XAI and the Large Vision Language Model to deliver human-centered interpretability.
This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications.
arXiv Detail & Related papers (2024-07-16T14:30:24Z) - Robustness of Explainable Artificial Intelligence in Industrial Process Modelling [43.388607981317016]
We evaluate current XAI methods by scoring them based on ground truth simulations and sensitivity analysis.
We show the differences between XAI methods in their ability to correctly predict the true sensitivity of the modeled industrial process.
arXiv Detail & Related papers (2024-07-12T09:46:26Z) - Assessing Fidelity in XAI post-hoc techniques: A Comparative Study with Ground Truth Explanations Datasets [0.0]
XAI methods based on the backpropagation of output information to the input yield higher accuracy and reliability.
However, backpropagation-based methods tend to generate noisier saliency maps.
Findings have significant implications for the advancement of XAI methods.
arXiv Detail & Related papers (2023-11-03T14:57:24Z) - On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms [56.119374302685934]
There have been severe concerns over the trustworthiness of AI technologies.
Machine and deep learning algorithms depend heavily on the data used during their development.
We propose a framework to evaluate the datasets through a responsible rubric.
arXiv Detail & Related papers (2023-10-24T14:01:53Z) - AI-Generated Images as Data Source: The Dawn of Synthetic Era [61.879821573066216]
Generative AI has unlocked the potential to create synthetic images that closely resemble real-world photographs.
This paper explores the innovative concept of harnessing these AI-generated images as new data sources.
In contrast to real data, AI-generated data exhibit remarkable advantages, including unmatched abundance and scalability.
arXiv Detail & Related papers (2023-10-03T06:55:19Z) - Explanation-by-Example Based on Item Response Theory [0.0]
This research explores Item Response Theory (IRT) as a tool for explaining models and measuring the reliability of the Explanation-by-Example approach.
From the test set, 83.8% of the errors are from instances in which the IRT points out the model as unreliable.
arXiv Detail & Related papers (2022-10-04T14:36:33Z) - OAK4XAI: Model towards Out-Of-Box eXplainable Artificial Intelligence for Digital Agriculture [4.286327408435937]
We build an Agriculture Computing Ontology (AgriComO) to explain the knowledge mined in agriculture.
XAI tries to provide human-understandable explanations for decision-making and trained AI models.
arXiv Detail & Related papers (2022-09-29T21:20:25Z) - Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview over techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching, such as supporting trust calibration and enabling rich forms of teaching feedback, as well as potential drawbacks, such as an anchoring effect on the model's judgment and increased cognitive workload.
arXiv Detail & Related papers (2020-01-24T22:52:18Z)