Impact of Legal Requirements on Explainability in Machine Learning
- URL: http://arxiv.org/abs/2007.05479v1
- Date: Fri, 10 Jul 2020 16:57:18 GMT
- Title: Impact of Legal Requirements on Explainability in Machine Learning
- Authors: Adrien Bibal, Michael Lognoul, Alexandre de Streel and Benoît Frénay
- Abstract summary: The paper analyzes the explainability requirements imposed by European laws and their implications for machine learning (ML) models, focusing on the explanation obligations that apply to private and public decision-making and how they can be implemented by ML techniques.
- Score: 63.24965775030674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The requirements on explainability imposed by European laws and their
implications for machine learning (ML) models are not always clear. In that
perspective, our research analyzes explanation obligations imposed for private
and public decision-making, and how they can be implemented by machine learning
techniques.
Related papers
- Engineering the Law-Machine Learning Translation Problem: Developing Legally Aligned Models [0.0]
We introduce a five-stage interdisciplinary framework that integrates legal and ML-technical analysis during machine learning model development.
This framework facilitates designing ML models in a legally aligned way and identifying high-performing models that are legally justifiable.
arXiv Detail & Related papers (2025-04-23T13:41:17Z)
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z)
- UnUnlearning: Unlearning is not sufficient for content regulation in advanced generative AI [50.61495097098296]
We revisit the paradigm in which unlearning is used for Large Language Models (LLMs).
We introduce a concept of ununlearning, where unlearned knowledge gets reintroduced in-context.
We argue that content filtering for impermissible knowledge will be required, and that even exact unlearning schemes are not enough for effective content regulation.
arXiv Detail & Related papers (2024-06-27T10:24:35Z)
- LLMs for XAI: Future Directions for Explaining Explanations [50.87311607612179]
We focus on refining explanations computed using existing XAI algorithms.
Initial experiments and a user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
arXiv Detail & Related papers (2024-05-09T19:17:47Z)
- Position Paper: Bridging the Gap Between Machine Learning and Sensitivity Analysis [9.191045750996526]
We argue that interpretations of machine learning (ML) models can be seen as a form of sensitivity analysis (SA); a generic illustration of this connection is sketched after this entry.
We call attention to the benefits of a unified SA-based view of explanations in ML and the necessity of fully crediting related work.
arXiv Detail & Related papers (2023-12-20T17:59:11Z)
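As a toy illustration of the interpretation-as-SA view argued in the position paper above, the sketch below computes permutation feature importance with scikit-learn: shuffling one feature at a time and measuring the drop in held-out accuracy is both a common ML interpretation method and a simple global sensitivity analysis. The dataset, model, and library are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch: permutation importance read as a sensitivity analysis.
# Dataset and model choices are assumptions for illustration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Sensitivity question: how much does held-out accuracy degrade when the
# information in one feature is destroyed by shuffling it?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

top = sorted(
    zip(X.columns, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
)[:5]
for name, mean, std in top:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```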
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Unveiling the Potential of Counterfactuals Explanations in Employability [0.0]
We show how counterfactuals are applied to employability-related problems involving machine learning algorithms.
The use cases presented go beyond the mere application of counterfactuals as explanations.
arXiv Detail & Related papers (2023-05-17T09:13:53Z)
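To make the counterfactual idea from the entry above concrete, here is a deliberately naive sketch on synthetic data: it searches for the closest training instance that the model classifies differently and reports the feature changes that would flip the decision. Real counterfactual methods solve an optimization problem with plausibility and sparsity constraints; the data, model, and nearest-unlike-neighbor shortcut below are assumptions for illustration, not the paper's method.

```python
# Hedged sketch: a naive counterfactual explanation via the nearest
# differently-classified instance. Synthetic data stands in for an
# employability dataset; nothing here reproduces the paper's use cases.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

x = X[0]                     # instance to explain
pred = clf.predict([x])[0]   # current (say, unfavorable) prediction

# Candidate counterfactuals: points the model assigns to a different class.
candidates = X[clf.predict(X) != pred]

# Closest candidate; the feature-wise difference is "what would need to change".
cf = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]
print("prediction:", pred, "->", clf.predict([cf])[0])
print("required feature changes:", cf - x)
```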
- Explainable Machine Learning for Fraud Detection [0.47574189356217006]
The application of machine learning to support the processing of large datasets holds promise in many industries, including financial services.
In this paper, we explore explainability methods in the domain of real-time fraud detection by investigating the selection of appropriate background datasets and runtime trade-offs on both supervised and unsupervised models.
arXiv Detail & Related papers (2021-05-13T14:12:02Z)
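The background-dataset choice and runtime trade-off mentioned in the fraud-detection entry above can be illustrated with the `shap` package, which is an assumed tool here (the abstract does not name its tooling). The background data defines the baseline for Shapley-value estimation, and summarizing it (for example with k-means) trades some fidelity for a large speed-up, which matters in a real-time setting. Everything below is a sketch on synthetic data.

```python
# Hedged sketch: SHAP explanations with a summarized background dataset.
# The shap library, the model, and the synthetic data are assumptions.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Runtime trade-off: a large background is more faithful but much slower,
# so the training data is summarized into 10 k-means centroids.
background = shap.kmeans(X, 10)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Per-feature contributions to the fraud score for a single "transaction".
shap_values = explainer.shap_values(X[:1], nsamples=100)
print(shap_values)
```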
- Individual Explanations in Machine Learning Models: A Survey for Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z)
- Understanding Interpretability by generalized distillation in Supervised Classification [3.5473853445215897]
Recent interpretation strategies focus on human understanding of the underlying decision mechanisms of complex machine learning models.
We propose an interpretation-by-distillation formulation that is defined relative to other ML models.
We evaluate our proposed framework on the MNIST, Fashion-MNIST and Stanford40 datasets.
arXiv Detail & Related papers (2020-12-05T17:42:50Z)
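The generic idea behind interpretation by distillation, of which the paper proposes a particular formulation defined relative to other ML models, can be sketched by fitting a small, human-readable student to a black-box teacher's predictions. The dataset and models below are illustrative assumptions (scikit-learn's digits data stands in for MNIST), not the paper's framework.

```python
# Hedged sketch: distilling a black-box teacher into a shallow, readable tree.
# This is the standard distillation-for-interpretability idea, not the
# paper's exact interpretation-by-distillation formulation.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

teacher = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# The student learns the teacher's predictions rather than the true labels,
# so its rules approximate the teacher's decision mechanism.
student = DecisionTreeClassifier(max_depth=4, random_state=0)
student.fit(X_train, teacher.predict(X_train))

print("fidelity to teacher:", student.score(X_test, teacher.predict(X_test)))
print(export_text(student, max_depth=2))
```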