FairLay-ML: Intuitive Remedies for Unfairness in Data-Driven
Social-Critical Algorithms
- URL: http://arxiv.org/abs/2307.05029v1
- Date: Tue, 11 Jul 2023 06:05:06 GMT
- Title: FairLay-ML: Intuitive Remedies for Unfairness in Data-Driven
Social-Critical Algorithms
- Authors: Normen Yu, Gang Tan, Saeid Tizpaz-Niari
- Abstract summary: This thesis explores whether open-sourced machine learning (ML) model explanation tools can allow a layman to visualize, understand, and suggest intuitive remedies to unfairness in ML-based decision-support systems.
This thesis presents FairLay-ML, a proof-of-concept GUI integrating some of the most promising tools to provide intuitive explanations for unfair logic in ML models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This thesis explores open-sourced machine learning (ML) model explanation
tools to understand whether these tools can allow a layman to visualize,
understand, and suggest intuitive remedies to unfairness in ML-based
decision-support systems. Machine learning models trained on datasets biased
against minority groups are increasingly used to guide life-altering social
decisions, prompting the urgent need to study their logic for unfairness. Due
to this problem's impact on vast populations of the general public, it is
critical for the layperson -- not just subject matter experts in social justice
or machine learning experts -- to understand the nature of unfairness within
these algorithms and the potential trade-offs. Existing research on fairness in
machine learning focuses mostly on the mathematical definitions and tools to
understand and remedy unfair models, with some directly citing user-interactive
tools as necessary for future work. This thesis presents FairLay-ML, a
proof-of-concept GUI that provides intuitive explanations for unfair logic in
ML models by combining existing research tools (e.g., Local Interpretable
Model-Agnostic Explanations) with an existing ML-focused GUI framework (e.g.,
Python Streamlit). We test FairLay-ML using models of varying accuracy and
fairness generated by the unfairness detection tool Parfait-ML, and validate
our results using Themis. Our study finds that
the technology stack used for FairLay-ML makes it easy to install and provides
real-time black-box explanations of pre-trained models to users. Furthermore,
the explanations provided translate to actionable remedies.
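The abstract names the concrete stack: LIME for black-box explanations and Streamlit for the GUI. The sketch below is a minimal illustration of how those two pieces fit together, not FairLay-ML's actual code; the model, dataset, and column names are invented stand-ins.

```python
# Illustrative sketch only: a pre-trained black-box classifier wrapped
# with LIME and served through Streamlit, in the spirit of FairLay-ML.
# The dataset, features, and model here are toy stand-ins.
import numpy as np
import pandas as pd
import streamlit as st
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

# Toy tabular data with a 0/1-encoded sensitive attribute ("sex").
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "age": rng.integers(18, 70, 500),
    "hours_per_week": rng.integers(10, 60, 500),
    "sex": rng.integers(0, 2, 500),
})
y = ((X["hours_per_week"] + 10 * X["sex"]) > 45).astype(int)  # biased labels

model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X.values,
    feature_names=list(X.columns),
    class_names=["deny", "approve"],
    discretize_continuous=True,
)

st.title("FairLay-ML-style explanation demo")
idx = st.slider("Instance to explain", 0, len(X) - 1, 0)
exp = explainer.explain_instance(X.values[idx], model.predict_proba,
                                 num_features=3)
# Per-feature weights; a dominant sensitive feature signals unfair logic.
st.write(dict(exp.as_list()))
```

Saved as app.py, this runs with `streamlit run app.py`; when the sensitive feature dominates the explanation weights, that is the kind of unfair logic such a GUI is meant to make visible to a layperson.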
Related papers
- Analyzing Fairness of Computer Vision and Natural Language Processing Models
Machine learning (ML) algorithms play a crucial role in decision making across diverse fields such as healthcare, finance, education, and law enforcement.
Despite their widespread adoption, these systems raise ethical and social concerns due to potential biases and fairness issues.
This study focuses on evaluating and improving the fairness of Computer Vision and Natural Language Processing (NLP) models applied to unstructured datasets.
arXiv Detail & Related papers (2024-12-13T06:35:55Z) - Analyzing Fairness of Classification Machine Learning Model with Structured Dataset
This study investigates the fairness of machine learning models applied to structured datasets in classification tasks.
Three fairness libraries were employed: Fairlearn by Microsoft, AIF360 by IBM, and the What-If Tool by Google.
The research aims to assess the extent of bias in the ML models, compare the effectiveness of these libraries, and derive actionable insights for practitioners.
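As a rough illustration of the kind of check such a study performs, the snippet below computes two standard group-fairness metrics with Fairlearn; the labels and group memberships are toy values, not the paper's data.

```python
# Toy example of a Fairlearn bias check; values are illustrative only.
import numpy as np
from fairlearn.metrics import (
    demographic_parity_difference,
    equalized_odds_difference,
)

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # sensitive attr

# 0.0 means both groups receive positive predictions at the same rate.
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
# 0.0 means true- and false-positive rates match across groups.
print(equalized_odds_difference(y_true, y_pred, sensitive_features=group))
```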
arXiv Detail & Related papers (2024-12-13T06:31:09Z) - Learning to Ask: When LLM Agents Meet Unclear Instruction
Large language models (LLMs) can leverage external tools to address a range of tasks unattainable through language skills alone.
We evaluate the tool-use performance of LLMs under imperfect instructions, analyze the error patterns, and build a challenging tool-use benchmark called Noisy ToolBench.
We propose a novel framework, Ask-when-Needed (AwN), which prompts LLMs to ask questions to users whenever they encounter obstacles due to unclear instructions.
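A hypothetical sketch of that control flow is below; `query_llm` and `answer_from_user` are invented stubs standing in for a real model and a real user, not the paper's implementation.

```python
# Hypothetical sketch of an Ask-when-Needed loop; query_llm and
# answer_from_user are invented stubs, not the paper's API.
def query_llm(prompt: str) -> str:
    # Stand-in for a real LLM call: asks once, then acts.
    if "ANSWER:" not in prompt:
        return "QUESTION: Which file should I summarize?"
    return "TOOL_CALL: summarize('report.txt')"

def answer_from_user(question: str) -> str:
    return "ANSWER: report.txt"  # stand-in for a real user reply

def ask_when_needed(instruction: str, max_turns: int = 3) -> str:
    transcript = instruction
    reply = ""
    for _ in range(max_turns):
        reply = query_llm(transcript)
        if reply.startswith("QUESTION:"):
            # Route the ambiguity back to the user instead of guessing.
            transcript += "\n" + reply + "\n" + answer_from_user(reply)
        else:
            return reply  # instruction is now clear enough to act on
    return reply

print(ask_when_needed("Summarize the file."))
```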
arXiv Detail & Related papers (2024-08-31T23:06:12Z) - Fair Few-shot Learning with Auxiliary Sets
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z) - Evaluating Language Models for Mathematics through Interactions
We introduce CheckMate, a prototype platform for humans to interact with and evaluate large language models (LLMs).
We conduct a study with CheckMate to evaluate three language models (InstructGPT, ChatGPT, and GPT-4) as assistants in proving undergraduate-level mathematics.
We derive a taxonomy of human behaviours and uncover that despite a generally positive correlation, there are notable instances of divergence between correctness and perceived helpfulness.
arXiv Detail & Related papers (2023-06-02T17:12:25Z) - Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as part of the fairness assessment of clinical ML systems.
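The multi-task idea can be pictured as a shared encoder with two heads: one predicts the clinical label, the other probes how strongly the sensitive attribute is encoded. The PyTorch sketch below is an illustrative assumption about that setup, not the paper's exact method.

```python
# Illustrative multi-task probe for shortcut learning; the architecture,
# sizes, and loss weighting are assumptions, not the paper's method.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim: int, hidden: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, 1)  # clinical outcome
        self.attr_head = nn.Linear(hidden, 1)  # sensitive-attribute probe

    def forward(self, x):
        z = self.encoder(x)
        return self.task_head(z), self.attr_head(z)

model = MultiTaskNet(in_dim=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

x = torch.randn(64, 10)                       # toy batch
y_task = torch.randint(0, 2, (64, 1)).float()
y_attr = torch.randint(0, 2, (64, 1)).float()

task_logit, attr_logit = model(x)
# An attribute head that trains easily from the shared features suggests
# the encoder carries the shortcut signal.
loss = bce(task_logit, y_task) + 0.5 * bce(attr_logit, y_attr)
opt.zero_grad()
loss.backward()
opt.step()
```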
arXiv Detail & Related papers (2022-07-21T09:35:38Z) - Fair Classification via Transformer Neural Networks: Case Study of an
Educational Domain
This paper presents a preliminary investigation of fairness constraints in transformer neural networks on the Law School Student dataset.
We employ fairness metrics for evaluation and examine the trade-off between fairness and accuracy.
arXiv Detail & Related papers (2022-06-03T06:34:16Z) - MAML is a Noisy Contrastive Learner
Model-agnostic meta-learning (MAML) is one of the most popular and widely adopted meta-learning algorithms.
We provide a new perspective on the working mechanism of MAML and discover that MAML is analogous to a meta-learner using a supervised contrastive objective function.
We propose a simple but effective technique, the zeroing trick, to alleviate the interference introduced by this noisy contrastive objective.
arXiv Detail & Related papers (2021-06-29T12:52:26Z) - Towards Model-informed Precision Dosing with Expert-in-the-loop Machine
Learning
We consider an ML framework that may accelerate model learning and improve its interpretability by incorporating human experts into the model learning loop.
We propose a novel human-in-the-loop ML framework aimed at learning problems in which the cost of data annotation is high.
With an application to precision dosing, our experimental results show that the approach can learn interpretable rules from data and may lower experts' workload.
arXiv Detail & Related papers (2021-06-29T12:52:26Z) - LiFT: A Scalable Framework for Measuring Fairness in ML Applications
We present the LinkedIn Fairness Toolkit (LiFT), a framework for scalable computation of fairness metrics as part of large ML systems.
We discuss the challenges encountered in incorporating fairness tools in practice and the lessons learned during deployment at LinkedIn.
arXiv Detail & Related papers (2020-08-14T03:55:31Z) - Transfer Learning without Knowing: Reprogramming Black-box Machine
Learning Models with Scarce Data and Limited Resources
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
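Zeroth-order optimization is the ingredient that lets BAR work with input-output access only. The snippet below is a generic sketch of a two-point zeroth-order gradient estimator on a placeholder loss, not BAR's actual reprogramming objective.

```python
# Generic two-point zeroth-order gradient estimator; the black-box loss
# is a placeholder standing in for queries to a real black-box model.
import numpy as np

def black_box_loss(theta: np.ndarray) -> float:
    return float(np.sum((theta - 1.0) ** 2))  # stand-in for a model query

def zeroth_order_grad(loss_fn, theta, q=10, mu=1e-2):
    """Average q two-point finite-difference estimates over random directions."""
    grad = np.zeros_like(theta)
    for _ in range(q):
        u = np.random.randn(theta.size)
        delta = loss_fn(theta + mu * u) - loss_fn(theta - mu * u)
        grad += (delta / (2.0 * mu)) * u
    return grad / q

theta = np.zeros(4)
for _ in range(200):                # gradient descent using only queries
    theta -= 0.05 * zeroth_order_grad(black_box_loss, theta)
print(theta)                        # approaches the optimum [1, 1, 1, 1]
```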
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.