FairLay-ML: Intuitive Remedies for Unfairness in Data-Driven
Social-Critical Algorithms
- URL: http://arxiv.org/abs/2307.05029v1
- Date: Tue, 11 Jul 2023 06:05:06 GMT
- Authors: Normen Yu, Gang Tan, Saeid Tizpaz-Niari
- Abstract summary: This thesis explores whether open-sourced machine learning (ML) model explanation tools can allow a layman to visualize, understand, and suggest intuitive remedies to unfairness in ML-based decision-support systems.
This thesis presents FairLay-ML, a proof-of-concept GUI integrating some of the most promising tools to provide intuitive explanations for unfair logic in ML models.
- Score: 13.649336187121095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This thesis explores whether open-sourced machine learning (ML)
model explanation tools can allow a layman to visualize, understand, and
suggest intuitive remedies to unfairness in ML-based decision-support
systems. Machine learning models trained on datasets biased
against minority groups are increasingly used to guide life-altering social
decisions, prompting the urgent need to study their logic for unfairness. Due
to this problem's impact on vast populations of the general public, it is
critical for the layperson -- not just experts in social justice or machine
learning -- to understand the nature of unfairness within
these algorithms and the potential trade-offs. Existing research on fairness in
machine learning focuses mostly on the mathematical definitions and tools to
understand and remedy unfair models, with some directly citing user-interactive
tools as necessary for future work. This thesis presents FairLay-ML, a
proof-of-concept GUI that provides intuitive explanations for unfair logic in
ML models by combining some of the most promising existing research tools
(e.g., Local Interpretable Model-Agnostic Explanations) with an existing
ML-focused GUI framework (e.g., Python Streamlit). We test FairLay-ML using
models of varying accuracy and fairness generated by an unfairness detection
tool, Parfait-ML, and validate our results using Themis. Our study finds that
the technology stack used for FairLay-ML makes it easy to install and provides
real-time black-box explanations of pre-trained models to users. Furthermore,
the explanations provided translate to actionable remedies.
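To make the LIME-plus-Streamlit integration described above concrete, here is a minimal sketch, assuming synthetic tabular data, hypothetical feature names, and a scikit-learn classifier standing in for a pre-trained model; it illustrates the general pattern, not FairLay-ML's actual code.
```python
# A minimal sketch of a Streamlit page that surfaces LIME explanations
# for a black-box classifier. The dataset, feature names, and model
# below are illustrative stand-ins, not FairLay-ML's own artifacts.
import numpy as np
import streamlit as st
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["age", "income", "group"]     # hypothetical features
class_names = ["deny", "approve"]
X = rng.normal(size=(500, 3))
y = (X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic labels

# Stand-in for a pre-trained decision-support model.
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, mode="classification",
    feature_names=feature_names, class_names=class_names,
)

st.title("Black-box explanation demo")
idx = st.slider("Instance to explain", 0, len(X) - 1, 0)
pred = model.predict(X[idx : idx + 1])[0]
st.write("Model decision:", class_names[pred])

# Real-time, model-agnostic explanation of this single decision.
exp = explainer.explain_instance(X[idx], model.predict_proba, num_features=3)
st.table(exp.as_list())  # per-feature weights a layperson can read
```
Saved as app.py, this would be served with `streamlit run app.py`; the table shows which features pushed the decision and in which direction, which is the kind of artifact a lay user can act on.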
Related papers
- Democratize with Care: The need for fairness specific features in
user-interface based open source AutoML tools [0.0]
Automated Machine Learning (AutoML) streamlines the machine learning model development process.
This democratization allows more users (including non-experts) to access and utilize state-of-the-art machine-learning expertise.
However, AutoML tools may also propagate bias through the way they handle data, choose models, and adopt optimization approaches.
arXiv Detail & Related papers (2023-12-16T19:54:00Z)
- Machine Vision Therapy: Multimodal Large Language Models Can Enhance Visual Robustness via Denoising In-Context Learning [67.0609518552321]
We propose Machine Vision Therapy, which aims to rectify noisy predictions from vision models.
By fine-tuning on the denoised labels, model performance can be boosted in an unsupervised manner.
arXiv Detail & Related papers (2023-12-05T07:29:14Z)
- Transparency challenges in policy evaluation with causal machine learning -- improving usability and accountability [0.0]
There is no globally interpretable way to understand how a model makes estimates.
It is difficult to understand whether causal machine learning models are functioning in ways that are fair.
This paper explores why transparency issues are a problem for causal machine learning in public policy evaluation applications.
arXiv Detail & Related papers (2023-10-20T02:48:29Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the "fair few-shot learning" problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Evaluating Language Models for Mathematics through Interactions [116.67206980096513]
We introduce CheckMate, a prototype platform for humans to interact with and evaluate large language models (LLMs).
We conduct a study with CheckMate to evaluate three language models (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics.
We derive a taxonomy of human behaviours and uncover that despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness.
arXiv Detail & Related papers (2023-06-02T17:12:25Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
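As a rough, hedged illustration of the underlying idea (the paper's actual method is multi-task; this sketch swaps in a simpler representation probe): if a sensitive attribute can be decoded accurately from the representation a model learned for its clinical task, the model may be relying on that attribute as a shortcut. All data and names below are synthetic.
```python
# Sketch: probe whether a trained model's hidden representation encodes
# a sensitive attribute. This is a simplified probing approximation of
# shortcut testing, not the paper's exact multi-task procedure.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
attr = rng.integers(0, 2, size=n)                   # sensitive attribute
X = rng.normal(size=(n, 10)) + 0.8 * attr[:, None]  # attribute leaks into features
y = ((X[:, 0] > 0) | (attr == 1)).astype(int)       # label spuriously tied to attribute

X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(X, y, attr, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)

# Recompute the trained MLP's hidden-layer (ReLU) activations by hand.
hid_tr = np.maximum(0, X_tr @ clf.coefs_[0] + clf.intercepts_[0])
hid_te = np.maximum(0, X_te @ clf.coefs_[0] + clf.intercepts_[0])

probe = LogisticRegression().fit(hid_tr, a_tr)
print("task accuracy:         ", clf.score(X_te, y_te))
print("attribute decodability:", probe.score(hid_te, a_te))  # high => possible shortcut
```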
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- Fair Classification via Transformer Neural Networks: Case Study of an Educational Domain [0.0913755431537592]
This paper presents a preliminary investigation of fairness constraints in transformer neural networks on the Law School Student dataset.
We employ fairness metrics for evaluation and examine the trade-off between fairness and accuracy.
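As a concrete (and hedged) illustration of this kind of evaluation, the sketch below scores synthetic predictions on accuracy and on the demographic parity difference; the metric choice and data are illustrative assumptions, not the paper's setup.
```python
# Sketch: report accuracy alongside a group fairness metric, here the
# demographic parity difference
#     |P(yhat = 1 | group = 0) - P(yhat = 1 | group = 1)|.
# Predictions and groups are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=1000)   # protected-group indicator
y_true = rng.integers(0, 2, size=1000)
# A deliberately biased predictor: favors group 1.
y_pred = np.where(group == 1, rng.random(1000) < 0.6, rng.random(1000) < 0.4).astype(int)

accuracy = (y_pred == y_true).mean()
dp_diff = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())
print(f"accuracy = {accuracy:.3f}, demographic parity difference = {dp_diff:.3f}")
```
Sweeping a fairness intervention's strength and plotting these two numbers against each other is one simple way to read off the fairness/accuracy trade-off.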
arXiv Detail & Related papers (2022-06-03T06:34:16Z)
- MAML is a Noisy Contrastive Learner [72.04430033118426]
Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms.
We provide a new perspective on the working mechanism of MAML and discover that MAML is analogous to a meta-learner using a supervised contrastive objective function.
We propose a simple but effective technique, the zeroing trick, to alleviate the interference this noisy objective introduces.
arXiv Detail & Related papers (2021-06-29T12:52:26Z)
- Towards Model-informed Precision Dosing with Expert-in-the-loop Machine Learning [0.0]
We consider an ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop.
We propose a novel human-in-the-loop ML framework aimed at learning problems in which the cost of data annotation is high.
With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may lower experts' workload.
arXiv Detail & Related papers (2021-06-28T03:45:09Z)
- LiFT: A Scalable Framework for Measuring Fairness in ML Applications [18.54302159142362]
We present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems.
We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn.
arXiv Detail & Related papers (2020-08-14T03:55:31Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
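To illustrate the zeroth-order ingredient, here is a minimal sketch that treats a model as an input-output oracle and estimates the gradient of a loss with respect to an input perturbation purely from random finite differences; the toy model, loss, query budget, and step size are assumptions for illustration, not the BAR pipeline itself.
```python
# Sketch: zeroth-order (gradient-free) optimization of an input
# perturbation against a black-box model, the core query-only idea
# behind black-box reprogramming. The model and loss are toys.
import numpy as np

rng = np.random.default_rng(4)
w_hidden = rng.normal(size=16)  # unknown to the reprogrammer

def black_box(x):
    """Opaque model: only its output may be observed, never w_hidden."""
    return 1.0 / (1.0 + np.exp(-(x @ w_hidden)))

def loss(delta, x, target):
    return (black_box(x + delta) - target) ** 2

def zo_gradient(delta, x, target, q=20, mu=1e-3):
    """Average q random-direction finite differences of the loss."""
    base = loss(delta, x, target)
    grad = np.zeros_like(delta)
    for _ in range(q):
        u = rng.normal(size=delta.shape)
        grad += (loss(delta + mu * u, x, target) - base) / mu * u
    return grad / q

x, target = rng.normal(size=16), 1.0
delta = np.zeros(16)
print("initial loss:", loss(delta, x, target))
for _ in range(200):
    delta -= 0.5 * zo_gradient(delta, x, target)  # query-only updates
print("final loss:  ", loss(delta, x, target))
```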
arXiv Detail & Related papers (2020-07-17T01:52:34Z)