Harnessing value from data science in business: ensuring explainability
and fairness of solutions
- URL: http://arxiv.org/abs/2108.07714v1
- Date: Tue, 10 Aug 2021 11:59:38 GMT
- Title: Harnessing value from data science in business: ensuring explainability
and fairness of solutions
- Authors: Krzysztof Chomiak and Michał Miktus
- Abstract summary: The paper introduces concepts of fairness and explainability (XAI) in artificial intelligence, oriented toward solving sophisticated business problems.
For fairness, the authors discuss bias-inducing specifics as well as relevant mitigation methods, concluding with a set of recipes for introducing fairness in data-driven organizations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The paper introduces concepts of fairness and explainability (XAI) in
artificial intelligence, oriented toward solving sophisticated business problems.
For fairness, the authors discuss bias-inducing specifics as well as
relevant mitigation methods, concluding with a set of recipes for introducing
fairness in data-driven organizations. Additionally, for XAI, the authors audit
specific algorithms paired with demonstrational business use cases, discuss a
range of techniques for quantifying the quality of explanations, and provide an
overview of future research avenues.
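As a concrete illustration of the kind of fairness quantification the paper surveys, the sketch below computes the demographic parity difference between two groups on made-up predictions. It is a minimal example, not the authors' own recipes; the function name and data are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    A common starting point for the kind of fairness auditing the paper
    surveys; a value near 0 suggests parity on this (limited) criterion.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical predictions for applicants from two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # 0.5 -> sizeable gap
```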
Related papers
- Explainability in AI Based Applications: A Framework for Comparing Different Techniques [2.5874041837241304]
In business applications, the challenge lies in selecting an appropriate explainability method that balances comprehensibility with accuracy.
This paper proposes a novel method for the assessment of the agreement of different explainability techniques.
By providing a practical framework for understanding the agreement of diverse explainability techniques, our research aims to facilitate the broader integration of interpretable AI systems in business applications.
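To make the idea of technique agreement concrete, here is a minimal sketch comparing two hypothetical attribution vectors (e.g. from SHAP and LIME) using rank correlation and top-k overlap. This is a generic illustration, not the assessment method the paper proposes; all values are invented.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-feature attributions from two explainers for the same
# prediction; names and values are illustrative only.
attr_method_a = np.array([0.42, 0.10, -0.31, 0.05, 0.22])
attr_method_b = np.array([0.39, 0.02, -0.25, 0.11, 0.18])

# Rank correlation of the attributions: one crude agreement signal.
rho, _ = spearmanr(attr_method_a, attr_method_b)
print(f"Spearman agreement: {rho:.2f}")

# Agreement on the top-k most important features is another common check.
k = 3
top_a = set(np.argsort(-np.abs(attr_method_a))[:k])
top_b = set(np.argsort(-np.abs(attr_method_b))[:k])
print(f"Top-{k} overlap: {len(top_a & top_b) / k:.2f}")
```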
arXiv Detail & Related papers (2024-10-28T09:45:34Z)
- Advancing Fairness in Natural Language Processing: From Traditional Methods to Explainability [0.9065034043031668]
The thesis addresses the need for equity and transparency in NLP systems.
It introduces an innovative algorithm to mitigate biases in high-risk NLP applications.
It also presents a model-agnostic explainability method that identifies and ranks concepts in Transformer models.
arXiv Detail & Related papers (2024-10-16T12:38:58Z)
- A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods [0.0]
Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks.
These models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance.
This paper explores good practices for deploying explainability in AI-based systems for finance.
arXiv Detail & Related papers (2023-11-13T17:56:45Z)
- A Comprehensive Review on Financial Explainable AI [29.229196780505532]
We provide a comparative survey of methods that aim to improve the explainability of deep learning models within the context of finance.
We categorize the collection of explainable AI methods according to their corresponding characteristics.
We review the concerns and challenges of adopting explainable AI methods, together with future directions we deem appropriate and important.
arXiv Detail & Related papers (2023-09-21T10:30:49Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- A toolkit of dilemmas: Beyond debiasing and fairness formulas for responsible AI/ML [0.0]
Approaches to fair and ethical AI have recently fallen under the scrutiny of the emerging field of critical data studies.
This paper advocates for a situated reasoning and creative engagement with the dilemmas surrounding responsible algorithmic/data-driven systems.
arXiv Detail & Related papers (2023-03-03T13:58:24Z)
- Explainable Data-Driven Optimization: From Context to Decision and Back Again [76.84947521482631]
Data-driven optimization uses contextual information and machine learning algorithms to find solutions to decision problems with uncertain parameters.
We introduce a counterfactual explanation methodology tailored to explain solutions to data-driven problems.
We demonstrate our approach by explaining key problems in operations management such as inventory management and routing.
arXiv Detail & Related papers (2023-01-24T15:25:16Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
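The observational quantity such an analysis starts from can be as simple as the total variation: the gap in positive-outcome rates across groups. Below is a minimal sketch of measuring that observed quantity on made-up data; the causal decomposition into mechanisms, which is the paper's contribution, is not shown.

```python
import numpy as np

def total_variation(y, x):
    """Observed disparity: P(y=1 | x=1) - P(y=1 | x=0).

    This is the raw, observational quantity that a causal fairness
    analysis then decomposes into direct, indirect, and spurious
    pathways; that decomposition needs a causal model and is omitted.
    """
    y, x = np.asarray(y), np.asarray(x)
    return y[x == 1].mean() - y[x == 0].mean()

# Hypothetical outcomes y, split by a protected attribute x.
y = np.array([1, 1, 0, 1, 0, 0, 0, 1])
x = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(total_variation(y, x))  # 0.5: a disparity to be explained causally
```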
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
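For intuition, the sketch below runs a generic gradient-based counterfactual search against a toy logistic model. It is illustrative only: the model weights are invented, and CEILS itself intervenes in the latent space of a causal data-generating model so that the suggested changes remain feasible, which this sketch ignores.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def greedy_counterfactual(x, w, b, target=0.5, step=0.1, max_iter=200):
    """Nudge features of x until a logistic model crosses the target score.

    A generic counterfactual search for illustration only; it is NOT the
    CEILS algorithm, which acts on latent causal variables instead of
    raw features and therefore accounts for action feasibility.
    """
    x = x.astype(float)
    for _ in range(max_iter):
        p = sigmoid(w @ x + b)
        if p >= target:
            return x  # counterfactual found
        grad = p * (1 - p) * w  # d p / d x for logistic regression
        x += step * grad / (np.linalg.norm(grad) + 1e-12)
    return x

w, b = np.array([1.5, -2.0, 0.7]), -0.2   # hypothetical loan-scoring model
x0 = np.array([0.2, 0.9, 0.1])            # applicant currently rejected
x_cf = greedy_counterfactual(x0, w, b)
print("suggested feature changes:", x_cf - x0)
```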
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
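One common way to obtain such uncertainty estimates is ensemble disagreement. The sketch below, on invented ensemble outputs, defers high-variance predictions to a human reviewer; the 0.1 threshold is an arbitrary illustration, not a recommendation from the paper.

```python
import numpy as np

# Hypothetical probabilities from a 5-member ensemble for 4 inputs; in
# practice these would come from bootstrapped or independently trained models.
ensemble_probs = np.array([
    [0.91, 0.88, 0.93, 0.90, 0.89],   # confident positive
    [0.65, 0.35, 0.70, 0.30, 0.60],   # disagreement -> uncertain
    [0.08, 0.12, 0.05, 0.10, 0.07],   # confident negative
    [0.70, 0.30, 0.80, 0.25, 0.75],   # strong disagreement -> uncertain
])

mean = ensemble_probs.mean(axis=1)   # point prediction
std = ensemble_probs.std(axis=1)     # disagreement as an uncertainty proxy

# Communicate uncertainty instead of hiding it: route uncertain cases to a
# human reviewer rather than acting on the raw prediction.
for m, s in zip(mean, std):
    decision = "defer to human" if s > 0.1 else ("approve" if m > 0.5 else "reject")
    print(f"p={m:.2f} ± {s:.2f} -> {decision}")
```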
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
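A minimal sketch of one such comparison: scoring how well token-level saliency scores rank human-annotated rationale tokens, using ROC AUC as the agreement measure. The sentence, scores, and annotations are invented for illustration; the paper's full diagnostic battery is much broader.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical sentence: which tokens does the explainer highlight, and
# which did human annotators mark as the rationale for the label?
tokens = ["the", "plot", "was", "painfully", "dull", "throughout"]
saliency = np.array([0.02, 0.85, 0.05, 0.80, 0.95, 0.30])  # explainer scores
human = np.array([0, 0, 0, 1, 1, 0])  # human-annotated rationale mask

top = np.argsort(-saliency)[:2]
print("explainer's top tokens:", [tokens[i] for i in top])

# Treat agreement as a ranking problem: do high saliency scores land on
# human-marked tokens? AUC of 1.0 would mean every rationale token
# outranks every other token; here one distractor ("plot") slips in.
print(f"saliency/human agreement (AUC): {roc_auc_score(human, saliency):.2f}")
```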
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.