FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness,
Accountability and Transparency Algorithms in Predictive Systems
- URL: http://arxiv.org/abs/2209.03805v1
- Date: Thu, 8 Sep 2022 13:25:02 GMT
- Title: FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness,
Accountability and Transparency Algorithms in Predictive Systems
- Authors: Kacper Sokol and Alexander Hepburn and Rafael Poyiadzi and Matthew
Clifford and Raul Santos-Rodriguez and Peter Flach
- Abstract summary: We developed an open source Python package called FAT Forensics.
It can inspect important fairness, accountability and transparency aspects of predictive algorithms.
Our toolbox can evaluate all elements of a predictive pipeline.
- Score: 69.24490096929709
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predictive systems, in particular machine learning algorithms, can take
important, and sometimes legally binding, decisions about our everyday life. In
most cases, however, these systems and decisions are neither regulated nor
certified. Given the potential harm that these algorithms can cause, their
qualities such as fairness, accountability and transparency (FAT) are of
paramount importance. To ensure high-quality, fair, transparent and reliable
predictive systems, we developed an open source Python package called FAT
Forensics. It can inspect important fairness, accountability and transparency
aspects of predictive algorithms to automatically and objectively report them
back to engineers and users of such systems. Our toolbox can evaluate all
elements of a predictive pipeline: data (and their features), models and
predictions. Published under the BSD 3-Clause open source licence, FAT
Forensics is available for personal and commercial use.
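To make the kind of check the toolbox automates concrete, the following is a minimal sketch in plain numpy. It is not the FAT Forensics API itself; the function name and data are illustrative.

    # Illustrative sketch only, not the FAT Forensics API. It shows the kind
    # of group-fairness check (demographic parity of predictions) that such
    # a toolbox can compute and report automatically.
    import numpy as np

    def demographic_parity_gap(predictions, protected):
        """Largest difference in positive-prediction rate between groups."""
        rates = [predictions[protected == g].mean() for g in np.unique(protected)]
        return max(rates) - min(rates)

    preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # binary model outputs
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute
    print(f"Demographic parity gap: {demographic_parity_gap(preds, group):.2f}")
    # Group 0's rate is 0.75 and group 1's is 0.25, so the gap is 0.50.

FAT Forensics exposes analogous measures for data, models and predictions; its documentation lists the actual function names.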
Related papers
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Fairness Score and Process Standardization: Framework for Fairness Certification in Artificial Intelligence Systems [0.4297070083645048]
We propose a novel Fairness Score to measure the fairness of a data-driven AI system.
The work also provides a framework to operationalise the concept of fairness and facilitate the commercial deployment of such systems.
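The paper's exact scoring formula is not reproduced in this summary, so the following is only a generic sketch of operationalising fairness as a single number: aggregate several per-group rate ratios and report the worst case, where 1.0 means parity across groups.

    # Hypothetical aggregation sketch, not the paper's actual Fairness Score.
    import numpy as np

    def min_ratio(rates):
        """Worst-case ratio between any two groups' rates, in (0, 1]."""
        rates = np.asarray(rates, dtype=float)
        return rates.min() / rates.max()

    selection_rates = [0.60, 0.45]       # per-group positive-prediction rates
    true_positive_rates = [0.80, 0.70]   # per-group recall
    score = min(min_ratio(selection_rates), min_ratio(true_positive_rates))
    print(f"Fairness score: {score:.2f}")   # 0.75; the worst disparity dominates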
arXiv Detail & Related papers (2022-01-10T15:45:12Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria for a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework supports developing software that incorporates transparency into its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Outlining Traceability: A Principle for Operationalizing Accountability in Computing Systems [1.0152838128195467]
Traceability requires establishing not only how a system worked but how it was created and for what purpose.
Traceability connects records of how the system was constructed and what the system did mechanically to the broader goals of governance.
This mapping reframes existing discussions around accountability and transparency, using the principle of traceability to show how, when, and why transparency can be deployed to serve accountability goals.
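As a hypothetical sketch of what such records could look like in code (not drawn from the paper), a provenance structure attached to each trained model might capture how, from what, and for what purpose it was built:

    # Hypothetical provenance record for traceability; illustrative only.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass(frozen=True)
    class ProvenanceRecord:
        model_id: str
        data_version: str       # e.g. a dataset hash or version tag
        code_commit: str        # VCS revision used for training
        training_config: dict   # hyperparameters, random seeds, etc.
        stated_purpose: str     # the governance goal the system serves
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())

    record = ProvenanceRecord(
        model_id="credit-scorer-v3",
        data_version="v2.1",
        code_commit="a1b2c3d",
        training_config={"seed": 42, "max_depth": 6},
        stated_purpose="assess loan default risk",
    )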
arXiv Detail & Related papers (2021-01-23T00:13:20Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
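A minimal sketch of this idea, assuming an ensemble whose members disagree on uncertain inputs (illustrative code, not taken from the paper): estimate uncertainty as ensemble disagreement and defer low-confidence decisions to a human.

    # Illustrative sketch: uncertainty as ensemble disagreement, used to
    # decide which predictions to automate and which to defer to a human.
    import numpy as np

    def ensemble_uncertainty(member_probs):
        """Std. dev. of the positive-class probability across members."""
        return np.asarray(member_probs).std(axis=0)

    # Three ensemble members' positive-class probabilities for four inputs.
    probs = np.array([[0.91, 0.52, 0.10, 0.73],
                      [0.89, 0.31, 0.12, 0.70],
                      [0.93, 0.75, 0.09, 0.68]])
    unc = ensemble_uncertainty(probs)
    decisions = np.where(unc > 0.1, "defer to human", "automate")
    print(list(zip(np.round(unc, 3), decisions)))   # only the second input is deferred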
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Automatic Open-World Reliability Assessment [11.380522815465985]
Image classification in the open world must handle out-of-distribution (OOD) images.
We formalize the open-world recognition reliability problem and propose multiple automatic reliability assessment policies.
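One simple reliability policy in this spirit, shown as an illustrative sketch rather than the paper's actual method, thresholds the maximum softmax probability to flag likely OOD inputs instead of forcing a closed-world label.

    # Illustrative OOD flagging via maximum softmax probability.
    import numpy as np

    def softmax(logits):
        z = logits - logits.max(axis=1, keepdims=True)   # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    logits = np.array([[4.0, 0.5, 0.2],    # confident, in-distribution
                       [1.1, 1.0, 0.9]])   # ambiguous, possibly OOD
    confidence = softmax(logits).max(axis=1)
    is_ood = confidence < 0.5
    print(list(zip(np.round(confidence, 2), is_ood)))   # [(0.95, False), (0.37, True)]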
arXiv Detail & Related papers (2020-11-11T01:56:23Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the limited ability to explain decisions, and bias in training data are among the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Applying Transparency in Artificial Intelligence based Personalization Systems [5.671950073691286]
Increasing transparency is an important goal for personalization-based systems.
We combine insights from technology ethics and computer science to generate a list of transparency best practices for machine generated personalization.
Based on these best practices, we develop a checklist to be used by designers wishing to evaluate and increase the transparency of their algorithmic systems.
arXiv Detail & Related papers (2020-04-02T11:07:38Z)