Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and
Explainable Automatic Recruitment
- URL: http://arxiv.org/abs/2012.00360v1
- Date: Tue, 1 Dec 2020 09:36:59 GMT
- Title: Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Fair and
Explainable Automatic Recruitment
- Authors: Alfonso Ortega and Julian Fierrez and Aythami Morales and Zilong Wang
and Tony Ribeiro
- Abstract summary: We evaluate LFIT, an ILP technique that can learn a propositional logic theory equivalent to a given black-box system.
We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applied to other domains.
- Score: 11.460075612587591
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning methods are growing in relevance for biometrics and personal
information processing in domains such as forensics, e-health, recruitment, and
e-learning. In these domains, white-box (human-readable) explanations of
systems built on machine learning methods can become crucial. Inductive Logic
Programming (ILP) is a subfield of symbolic AI aimed at automatically learning
declarative theories about the processes underlying data. Learning from Interpretation
Transition (LFIT) is an ILP technique that can learn a propositional logic
theory equivalent to a given black-box system (under certain conditions). The
present work takes a first step towards a general methodology for incorporating
accurate declarative explanations into classic machine learning by checking the
viability of LFIT in a specific AI application scenario: fair recruitment based
on an automatic tool generated with machine learning methods for ranking
Curricula Vitae that incorporates soft biometric information (gender and
ethnicity). We show the expressiveness of LFIT for this specific problem and
propose a scheme that can be applied to other domains.
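To make the LFIT idea above concrete, the following is a minimal, illustrative sketch: it is not the LFIT algorithm itself nor the authors' implementation. A trained black-box scorer is treated as an oracle over binary propositional features, and a brute-force search induces minimal conjunctive rules that reproduce its positive decisions. The feature names and the toy scorer are assumptions made only for this example.

```python
from itertools import combinations, product

# Illustrative stand-in for a trained CV-ranking model.
# Features are binary for simplicity; LFIT itself handles richer settings.
FEATURES = ["experience", "education", "gender_female", "ethnicity_minority"]

def black_box(assignment):
    """Hypothetical opaque classifier: 'hire' iff experience and education,
    deliberately ignoring the protected attributes."""
    return assignment["experience"] and assignment["education"]

def all_assignments(features):
    for bits in product([False, True], repeat=len(features)):
        yield dict(zip(features, bits))

def induce_rules(features, oracle):
    """Find minimal conjunctions of literals that imply a positive output
    for every assignment they cover (a brute-force stand-in for LFIT)."""
    rules = []
    for size in range(1, len(features) + 1):
        for subset in combinations(features, size):
            for values in product([False, True], repeat=size):
                body = dict(zip(subset, values))
                covered = [a for a in all_assignments(features)
                           if all(a[f] == v for f, v in body.items())]
                if covered and all(oracle(a) for a in covered):
                    # skip rules subsumed by an already-found shorter rule
                    if not any(prev.items() <= body.items() for prev in rules):
                        rules.append(body)
    return rules

if __name__ == "__main__":
    for body in induce_rules(FEATURES, black_box):
        lits = ", ".join(f"{'' if v else 'not '}{f}" for f, v in body.items())
        print(f"hire :- {lits}.")   # prints: hire :- experience, education.
```

In the paper's recruitment setting, a white-box theory of this kind can then be inspected directly, for instance to check whether protected attributes such as gender or ethnicity appear in any rule body.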
Related papers
- The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z)
- Interpretable and Explainable Machine Learning Methods for Predictive Process Monitoring: A Systematic Literature Review [1.3812010983144802]
This paper presents a systematic review on the explainability and interpretability of machine learning (ML) models within the context of predictive process mining.
We provide a comprehensive overview of the current methodologies and their applications across various application domains.
Our findings aim to equip researchers and practitioners with a deeper understanding of how to develop and implement more trustworthy, transparent, and effective intelligent systems for process analytics.
arXiv Detail & Related papers (2023-12-29T12:43:43Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Planning for Learning Object Properties [117.27898922118946]
We formalize the problem of automatically training a neural network to recognize object properties as a symbolic planning problem.
We use planning techniques to produce a strategy for automating the training dataset creation and the learning process.
We provide an experimental evaluation in both a simulated and a real environment.
arXiv Detail & Related papers (2023-01-15T09:37:55Z)
- Feature Visualization within an Automated Design Assessment leveraging Explainable Artificial Intelligence Methods [0.0]
Automated capability assessment, mainly leveraged by deep learning systems driven from 3D CAD data, has been presented.
Current assessment systems may be able to assess CAD data with regard to abstract features, but they provide no geometrical indication of the reasons behind the system's decision.
Within the NeuroCAD Project, xAI methods are used to identify geometrical features which are associated with a certain abstract feature.
arXiv Detail & Related papers (2022-01-28T13:31:42Z)
- A Practical Tutorial on Explainable AI Techniques [5.671062637797752]
This tutorial is meant to be the go-to handbook for any audience with a computer science background.
It aims at providing intuitive insights into machine learning models, accompanied by straightforward, fast, and intuitive explanations out of the box.
arXiv Detail & Related papers (2021-11-13T17:47:31Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
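As a rough illustration of the partially-automated design described here, the sketch below combines stubbed-in sentiment and stance scores and routes low-confidence items to the human analyst instead of deciding automatically. The scoring functions, threshold, and weighting are placeholder assumptions, not the paper's system.

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    label: str        # "disinformation", "legitimate", or "needs_analyst"
    confidence: float

def sentiment_score(text):
    """Stub for a trained sentiment model; returns a score in [-1, 1]."""
    return -0.6 if "outrage" in text.lower() else 0.2

def stance_score(text, claim):
    """Stub for a stance detector: agreement of `text` with `claim` in [-1, 1]."""
    return -0.8 if "hoax" in text.lower() else 0.1

def triage(text, claim, threshold=0.5):
    """Combine automatic signals; defer to the analyst when confidence is low."""
    score = 0.5 * sentiment_score(text) + 0.5 * stance_score(text, claim)
    confidence = abs(score)
    if confidence < threshold:
        return Assessment("needs_analyst", confidence)   # human-in-the-loop
    label = "disinformation" if score < 0 else "legitimate"
    return Assessment(label, confidence)

print(triage("The vaccine rollout is a hoax, share your outrage!", "vaccines are safe"))
print(triage("Local clinic extends opening hours next week.", "vaccines are safe"))
```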
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Altruist: Argumentative Explanations through Local Interpretations of Predictive Models [10.342433824178825]
Existing explanation techniques are often not comprehensible to the end user.
We introduce a preliminary meta-explanation methodology that identifies the truthful parts of feature importance oriented interpretations.
Experimentation strongly indicates that an ensemble of multiple interpretation techniques yields considerably more truthful explanations.
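A toy sketch of that intuition (not the Altruist method itself): run several feature-importance explainers on the same instance and keep only the features whose sign of importance they largely agree on, treating the agreed-upon part as the more truthful portion of the explanation. The explainer outputs below are hard-coded placeholders.

```python
import numpy as np

def truthful_features(importances, agreement=0.8):
    """Keep features whose sign of importance is consistent across explainers.

    importances: dict mapping explainer name -> array of per-feature scores.
    agreement:   fraction of explainers that must share the majority sign.
    """
    matrix = np.array(list(importances.values()))   # explainers x features
    signs = np.sign(matrix)
    majority = np.sign(signs.sum(axis=0))           # majority sign per feature
    frac = (signs == majority).mean(axis=0)         # agreement with the majority
    return np.where((frac >= agreement) & (majority != 0))[0]

# Placeholder outputs from three hypothetical explainers for one instance.
scores = {
    "lime": np.array([0.40, -0.10, 0.05, -0.30]),
    "shap": np.array([0.35, -0.20, -0.02, -0.25]),
    "perm": np.array([0.50, -0.15, 0.01, -0.35]),
}
print(truthful_features(scores))   # features 0, 1 and 3 survive; feature 2 is dropped
```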
arXiv Detail & Related papers (2020-10-15T10:36:48Z)
- Induction and Exploitation of Subgoal Automata for Reinforcement Learning [75.55324974788475]
We present ISA, an approach for learning and exploiting subgoals in episodic reinforcement learning (RL) tasks.
ISA interleaves reinforcement learning with the induction of a subgoal automaton, an automaton whose edges are labeled by the task's subgoals.
A subgoal automaton also consists of two special states: a state indicating the successful completion of the task, and a state indicating that the task has finished without succeeding.
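A minimal sketch of how such a subgoal automaton might be represented as a data structure (illustrative only; this is not the ISA induction procedure, and the task labels are invented for the example):

```python
from dataclasses import dataclass, field

@dataclass
class SubgoalAutomaton:
    """Deterministic automaton whose edges are labelled by task subgoals."""
    initial: str = "u0"
    accept: str = "u_acc"      # task completed successfully
    reject: str = "u_rej"      # task finished without succeeding
    transitions: dict = field(default_factory=dict)   # state -> {subgoal: next state}

    def add_edge(self, src, subgoal, dst):
        self.transitions.setdefault(src, {})[subgoal] = dst

    def step(self, state, observed_subgoals):
        """Advance the automaton given the subgoals observed in this RL step."""
        for subgoal, dst in self.transitions.get(state, {}).items():
            if subgoal in observed_subgoals:
                return dst
        return state   # no labelled subgoal observed: stay in the same state

# Toy "get coffee, then deliver it" task.
aut = SubgoalAutomaton()
aut.add_edge("u0", "got_coffee", "u1")
aut.add_edge("u1", "delivered_coffee", "u_acc")
aut.add_edge("u0", "fell_in_hole", "u_rej")

state = aut.step("u0", {"got_coffee"})
print(state)                                   # u1
print(aut.step(state, {"delivered_coffee"}))   # u_acc
```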
arXiv Detail & Related papers (2020-09-08T16:42:55Z)
- Explainable AI for Classification using Probabilistic Logic Inference [9.656846523452502]
We present an explainable classification method.
Our method works by first constructing a symbolic Knowledge Base from the training data, and then performing probabilistic inference on this Knowledge Base with linear programming.
It identifies the decisive features responsible for a classification as explanations and produces results similar to those found by SHAP, a state-of-the-art Shapley-value-based method.
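A tiny sketch of LP-based probabilistic entailment in the spirit of this approach (not the authors' system): possible worlds over two propositions are the LP variables, sentence probabilities distilled from hypothetical training data become equality constraints, and linprog bounds the probability of the class.

```python
import numpy as np
from scipy.optimize import linprog

# Propositions: A = "feature present", C = "positive class".
worlds = [(a, c) for a in (0, 1) for c in (0, 1)]   # 4 possible worlds

def row(pred):
    """Indicator vector: 1 for each possible world where the sentence holds."""
    return [1.0 if pred(a, c) else 0.0 for a, c in worlds]

# Knowledge base distilled from hypothetical training data:
#   probabilities sum to 1,  P(A) = 0.7,  P(A and C) = 0.63  (i.e. P(C|A) = 0.9)
A_eq = np.array([[1.0] * 4, row(lambda a, c: a == 1), row(lambda a, c: a and c)])
b_eq = np.array([1.0, 0.7, 0.63])

query = np.array(row(lambda a, c: c == 1))          # objective: P(C)
bounds = [(0, 1)] * 4

lo = linprog(query, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
hi = -linprog(-query, A_eq=A_eq, b_eq=b_eq, bounds=bounds).fun
print(f"P(C) lies in [{lo:.2f}, {hi:.2f}]")          # roughly [0.63, 0.93]
```

A feature could then be flagged as decisive when dropping its constraints changes these bounds enough to flip the predicted class, which is one way to read the paper's notion of explanation.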
arXiv Detail & Related papers (2020-05-05T11:39:23Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)