Reducing Unintended Bias of ML Models on Tabular and Textual Data
- URL: http://arxiv.org/abs/2108.02662v1
- Date: Thu, 5 Aug 2021 14:55:56 GMT
- Title: Reducing Unintended Bias of ML Models on Tabular and Textual Data
- Authors: Guilherme Alves, Maxime Amblard, Fabien Bernier, Miguel Couceiro and
Amedeo Napoli
- Abstract summary: We revisit the framework FixOut, which is inspired by the approach "fairness through unawareness", to build fairer models.
We introduce several improvements such as automating the choice of FixOut's parameters.
We present several experimental results illustrating that FixOut improves process fairness in different classification settings.
- Score: 5.503546193689538
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Unintended biases in machine learning (ML) models are among the major concerns that must be addressed to maintain public trust in ML. In this paper, we address process fairness of ML models, which consists in reducing the dependence of models on sensitive features without compromising their performance. We revisit the framework FixOut, which is inspired by the approach "fairness through unawareness", to build fairer models. We introduce several improvements, such as automating the choice of FixOut's parameters. FixOut was originally proposed to improve the fairness of ML models on tabular data; here we also demonstrate the feasibility of FixOut's workflow for models on textual data. We present several experimental results illustrating that FixOut improves process fairness in different classification settings.
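As a rough illustration of the "fairness through unawareness" workflow described above, here is a minimal Python sketch: reliance on sensitive features is assessed (the paper uses an explainer such as LIME; absolute logistic-regression coefficients stand in here), and, if sensitive features rank among the top ones, an ensemble of feature-dropped models is aggregated. The feature names, `top_k`, and the aggregation rule are illustrative assumptions, not FixOut's actual API.

```python
# Minimal sketch of a FixOut-style "fairness through unawareness" workflow
# (illustrative, not the authors' code). Absolute logistic-regression
# coefficients stand in for LIME importances to keep the sketch self-contained.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["age", "income", "gender", "zipcode"]   # names are assumptions
sensitive = ["gender", "zipcode"]
n = 1000
X = rng.normal(size=(n, len(features)))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

top_k = 3   # FixOut automates this choice; fixed here for simplicity

base = LogisticRegression().fit(X, y)
importance = np.abs(base.coef_[0])                  # stand-in for LIME importances
top = [features[i] for i in np.argsort(importance)[::-1][:top_k]]

if any(s in top for s in sensitive):
    # Ensemble: one model per dropped sensitive feature, plus one model with
    # all sensitive features dropped; aggregate by averaging probabilities.
    drop_sets = [[s] for s in sensitive] + [sensitive]
    models = []
    for drop in drop_sets:
        keep = [i for i, f in enumerate(features) if f not in drop]
        models.append((keep, LogisticRegression().fit(X[:, keep], y)))
    proba = np.mean([m.predict_proba(X[:, keep])[:, 1] for keep, m in models], axis=0)
else:
    proba = base.predict_proba(X)[:, 1]
print(proba[:5])
```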
Related papers
- Influence Functions for Scalable Data Attribution in Diffusion Models [52.92223039302037]
Diffusion models have led to significant advancements in generative modelling.
Yet their widespread adoption poses challenges regarding data attribution and interpretability.
In this paper, we aim to help address such challenges by developing an influence functions framework.
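For intuition, below is a generic influence-function computation for a small logistic-regression model; the paper's diffusion-specific estimator is not reproduced here, and the damping term is an assumption for numerical stability.

```python
# Generic influence-function sketch for logistic regression (NumPy only);
# not the paper's diffusion-specific estimator.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for _ in range(2000):                       # plain gradient descent suffices here
    w -= 0.1 * X.T @ (sigmoid(X @ w) - y) / n

p = sigmoid(X @ w)
grads = (p - y)[:, None] * X                # per-example loss gradients
H = (X.T * (p * (1 - p))) @ X / n           # Hessian of the mean loss
H = H + 1e-3 * np.eye(d)                    # damping (assumed)

x_t, y_t = X[0], y[0]                       # a "test" point for illustration
g_t = (sigmoid(x_t @ w) - y_t) * x_t
# influence of training point i on the test loss: -g_t^T H^{-1} g_i
influence = -grads @ np.linalg.solve(H, g_t)
print(influence[:5])                        # positive => removing point i lowers test loss
```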
arXiv Detail & Related papers (2024-10-17T17:59:02Z)
- SMILE: Zero-Shot Sparse Mixture of Low-Rank Experts Construction From Pre-Trained Foundation Models [85.67096251281191]
We present an innovative approach to model fusion called zero-shot Sparse MIxture of Low-rank Experts (SMILE) construction.
SMILE allows for the upscaling of source models into an MoE model without extra data or further training.
We conduct extensive experiments across diverse scenarios, such as image classification and text generation tasks, using full fine-tuning and LoRA fine-tuning.
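A minimal sketch of what inference with a sparse mixture of low-rank experts can look like, assuming a shared base weight plus per-expert low-rank deltas; the routing score used here is an illustrative assumption, since SMILE derives its router zero-shot from the source models.

```python
# Minimal sketch of inference with a sparse mixture of low-rank experts
# (illustrative; SMILE's zero-shot construction and routing rule differ in
# detail). Each expert is a low-rank delta (B_i A_i) on a shared base weight.
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts, k = 16, 2, 4, 2            # width, rank, #experts, top-k

W_base = rng.normal(size=(d, d)) / np.sqrt(d)
A = [rng.normal(size=(r, d)) / np.sqrt(d) for _ in range(n_experts)]   # down-proj
B = [rng.normal(size=(d, r)) / np.sqrt(r) for _ in range(n_experts)]   # up-proj

def moe_forward(x):
    # Route by how strongly the input excites each expert's low-rank subspace
    # (an assumed routing score).
    scores = np.array([np.linalg.norm(A[i] @ x) for i in range(n_experts)])
    top = np.argsort(scores)[::-1][:k]
    gates = np.exp(scores[top]) / np.exp(scores[top]).sum()   # softmax over top-k
    out = W_base @ x
    for g, i in zip(gates, top):
        out = out + g * (B[i] @ (A[i] @ x))                   # sparse expert update
    return out

print(moe_forward(rng.normal(size=d))[:4])
```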
arXiv Detail & Related papers (2024-08-19T17:32:15Z)
- Harnessing Large Language Models as Post-hoc Correctors [6.288056740658763]
We show that an LLM can work as a post-hoc corrector to propose corrections for the predictions of an arbitrary Machine Learning model.
We form a contextual knowledge database by incorporating the dataset's label information and the ML model's predictions on the validation dataset.
Our experimental results on text analysis and the challenging molecular predictions show that the approach improves the performance of a number of models by up to 39%.
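A hedged sketch of the post-hoc correction idea: validation-set predictions and labels form the context, and the LLM is asked to confirm or correct a new prediction. `query_llm` is a hypothetical stand-in for any chat-completion call; the paper's prompt and context construction are more elaborate.

```python
# Sketch of the post-hoc correction loop (illustrative). `query_llm` is a
# hypothetical stand-in for any chat-completion call.
def build_context(validation_rows):
    # Contextual knowledge base: features, ML prediction, and true label.
    return "\n".join(
        f"features={r['x']} model_prediction={r['pred']} true_label={r['y']}"
        for r in validation_rows
    )

def correct_prediction(query_llm, validation_rows, x, model_pred):
    prompt = (
        "Past cases (model prediction vs. true label):\n"
        f"{build_context(validation_rows)}\n\n"
        f"New case: features={x}, model predicted {model_pred}.\n"
        "If the pattern of past errors suggests this prediction is wrong, "
        "output a corrected label; otherwise repeat the prediction."
    )
    return query_llm(prompt)

# Toy usage with an echoing stand-in for the LLM:
rows = [{"x": [1, 2], "pred": 0, "y": 1}, {"x": [2, 1], "pred": 1, "y": 1}]
print(correct_prediction(lambda prompt: "1", rows, [1, 1], 0))
```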
arXiv Detail & Related papers (2024-02-20T22:50:41Z)
- CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration [59.48235003469116]
We show that data augmentation consistently enhances OOD performance.
We also show that CF-augmented models that are easier to calibrate exhibit much lower entropy when assigning importance.
arXiv Detail & Related papers (2023-09-14T16:16:40Z)
- fairml: A Statistician's Take on Fair Machine Learning Modelling [0.0]
We describe the fairml package which implements our previous work (Scutari, Panero, and Proissl 2022) and related models in the literature.
fairml is designed around classical statistical models and penalised regression results.
The constraint used to enforce fairness is kept separate from model estimation, making it possible to mix-and-match the desired model family and fairness definition for each application.
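To illustrate how a fairness constraint can stay separate from model estimation, here is a sketch of a penalised regression with an added covariance penalty between fitted values and a sensitive attribute; fairml is an R package and its actual constraint formulations differ, and `lam`/`gamma` are assumed hyperparameters.

```python
# Sketch of penalised regression with a pluggable fairness penalty
# (illustrative; not fairml's formulation). The fairness term penalises the
# covariance between fitted values and a sensitive attribute s, so it can be
# swapped out independently of the model-fit term.
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 4
X = rng.normal(size=(n, d))
s = X[:, 0] + rng.normal(scale=0.5, size=n)          # sensitive attribute (assumed)
y = X @ np.array([1.0, 0.5, -0.5, 0.2]) + 0.8 * s + rng.normal(size=n)

lam, gamma = 0.1, 10.0                               # ridge and fairness strengths
s_c = s - s.mean()

w = np.zeros(d)
for _ in range(5000):                                # gradient descent on the penalised loss
    resid = X @ w - y
    cov = (s_c @ (X @ w)) / n                        # cov(fitted values, s)
    grad = X.T @ resid / n + lam * w + gamma * cov * (X.T @ s_c) / n
    w -= 0.05 * grad

print("cov(fitted, s):", (s_c @ (X @ w)) / n)        # shrinks as gamma grows
```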
arXiv Detail & Related papers (2023-05-03T09:59:53Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
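As one concrete instance of the reweighing strategy, below is the classic Kamiran-Calders scheme, which weights each (group, label) cell so that group membership and label look statistically independent; the paper's drift-based variant is not reproduced here.

```python
# Classic reweighing (Kamiran & Calders) as one instance of the reweighing
# strategy: upweight under-represented (group, label) cells and downweight
# over-represented ones.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, size=n)                                   # sensitive group
label = (rng.random(size=n) < np.where(group == 1, 0.3, 0.6)).astype(int)

weights = np.empty(n)
for g in (0, 1):
    for yv in (0, 1):
        cell = (group == g) & (label == yv)
        # expected cell count under independence / observed cell count
        weights[cell] = n * (group == g).mean() * (label == yv).mean() / cell.sum()

# `weights` can be passed as sample_weight to most scikit-learn estimators.
print(weights[:5])
```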
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- Scaling up Trustless DNN Inference with Zero-Knowledge Proofs [47.42532753464726]
We present the first practical ImageNet-scale method to verify ML model inference non-interactively, i.e., after the inference has been done.
We provide the first zk-SNARK proof of valid inference for a full-resolution ImageNet model, achieving 79% top-5 accuracy.
arXiv Detail & Related papers (2022-10-17T00:35:38Z)
- Statistical inference for individual fairness [24.622418924551315]
We focus on the problem of detecting violations of individual fairness in machine learning models.
We develop a suite of inference tools for the adversarial cost function.
We demonstrate the utility of our tools in a real-world case study.
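A crude sketch of the kind of test involved: flip the sensitive attribute to obtain a "comparable" individual, measure the prediction gap, and bootstrap a confidence interval. The paper's adversarial cost function searches for worst-case comparable inputs rather than using this naive flip.

```python
# Crude sketch of testing for individual-fairness violations (illustrative;
# not the paper's adversarial procedure).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 800, 4
X = rng.normal(size=(n, d))
X[:, 0] = rng.integers(0, 2, size=n)                 # column 0: sensitive attribute (assumed)
y = (X[:, 1] + 0.8 * X[:, 0] > 0).astype(int)        # label leaks the sensitive attribute

model = LogisticRegression().fit(X, y)

X_flip = X.copy()
X_flip[:, 0] = 1 - X_flip[:, 0]                      # crude "comparable individual"
gap = np.abs(model.predict_proba(X)[:, 1] - model.predict_proba(X_flip)[:, 1])

boot = [gap[rng.integers(0, n, size=n)].mean() for _ in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"mean gap: {gap.mean():.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```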
arXiv Detail & Related papers (2021-03-30T22:49:25Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
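A minimal sketch of the zeroth-order ingredient: an additive input program `delta` is learned from input-output queries alone, via averaged finite-difference gradient estimates. The toy black-box model, the hyperparameters, and the omission of multi-label mapping are illustrative assumptions.

```python
# Sketch of the zeroth-order ingredient of black-box reprogramming
# (illustrative; BAR's estimator and multi-label mapping are more refined).
import numpy as np

rng = np.random.default_rng(0)
d = 32

def black_box(x):
    # Stand-in for a well-trained model we can only query; returns class probs.
    logits = np.array([x.sum(), -x.sum()])
    e = np.exp(logits - logits.max())
    return e / e.sum()

def loss(x, target):
    return -np.log(black_box(x)[target] + 1e-12)     # cross-entropy on the query

x0, target = rng.normal(size=d), 0
delta = np.zeros(d)
mu, lr, q = 1e-2, 0.1, 10                            # smoothing, step size, queries/step

for _ in range(300):
    grad = np.zeros(d)
    for _ in range(q):                               # averaged finite-difference estimate
        u = rng.normal(size=d)
        grad += (loss(x0 + delta + mu * u, target) - loss(x0 + delta, target)) / mu * u
    delta -= lr * grad / q

print("loss after reprogramming:", loss(x0 + delta, target))
```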
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
- VAE-LIME: Deep Generative Model Based Approach for Local Data-Driven Model Interpretability Applied to the Ironmaking Industry [70.10343492784465]
It is necessary to expose to the process engineer not only the model predictions but also their interpretability.
Model-agnostic local interpretability solutions based on LIME have recently emerged to improve the original method.
We present in this paper a novel approach, VAE-LIME, for local interpretability of data-driven models forecasting the temperature of the hot metal produced by a blast furnace.
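A sketch of the core idea, assuming access to a trained VAE (a linear encode/decode pair stands in below): perturb in latent space so LIME's samples stay near the data manifold, decode, query the model, and fit a locally weighted linear surrogate.

```python
# Sketch of VAE-LIME's core idea (illustrative). The linear encode/decode
# pair below is only a stand-in for a trained VAE.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
d, z_dim, n_samples = 8, 3, 500
E = rng.normal(size=(z_dim, d)) / np.sqrt(d)         # stand-in encoder
D = np.linalg.pinv(E)                                # stand-in decoder

def black_box(X):                                    # model to explain (stand-in)
    return X @ np.array([1.5, -2.0, 0.5, 0.0, 0.0, 0.3, 0.0, 0.0])

x0 = rng.normal(size=d)
Z = E @ x0 + 0.3 * rng.normal(size=(n_samples, z_dim))   # latent perturbations
X_pert = Z @ D.T                                         # decode to plausible inputs

dist = np.linalg.norm(X_pert - x0, axis=1)
weights = np.exp(-(dist ** 2) / (2 * dist.std() ** 2))   # proximity kernel

surrogate = Ridge(alpha=1.0).fit(X_pert, black_box(X_pert), sample_weight=weights)
print("local importances:", np.round(surrogate.coef_, 2))
```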
arXiv Detail & Related papers (2020-07-15T07:07:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.