Fairness-aware Configuration of Machine Learning Libraries
- URL: http://arxiv.org/abs/2202.06196v1
- Date: Sun, 13 Feb 2022 04:04:33 GMT
- Title: Fairness-aware Configuration of Machine Learning Libraries
- Authors: Saeid Tizpaz-Niari and Ashish Kumar and Gang Tan and Ashutosh Trivedi
- Abstract summary: This paper investigates the parameter space of machine learning (ML) algorithms in aggravating or mitigating fairness bugs.
Three search-based software testing algorithms are proposed to uncover the precision-fairness frontier.
- Score: 21.416261003364177
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the parameter space of machine learning (ML)
algorithms in aggravating or mitigating fairness bugs. Data-driven software is
increasingly applied in social-critical applications where ensuring fairness is
of paramount importance. The existing approaches focus on addressing fairness
bugs by either modifying the input dataset or modifying the learning
algorithms. On the other hand, the selection of hyperparameters, which provide
finer controls of ML algorithms, may enable a less intrusive approach to
influence the fairness. Can hyperparameters amplify or suppress discrimination
present in the input dataset? How can we help programmers in detecting,
understanding, and exploiting the role of hyperparameters to improve the
fairness?
We design three search-based software testing algorithms to uncover the
precision-fairness frontier of the hyperparameter space. We complement these
algorithms with statistical debugging to explain the role of these parameters
in improving fairness. We implement the proposed approaches in the tool
Parfait-ML (PARameter FAIrness Testing for ML Libraries) and show its
effectiveness and utility over five mature ML algorithms as used in six
social-critical applications. In these applications, our approach successfully
identified hyperparameters that significantly improve (vis-a-vis the
state-of-the-art techniques) the fairness without sacrificing precision.
Surprisingly, for some algorithms (e.g., random forest), our approach showed
that certain configuration of hyperparameters (e.g., restricting the search
space of attributes) can amplify biases across applications. Upon further
investigation, we found intuitive explanations of these phenomena, and the
results corroborate similar observations from the literature.
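The abstract's random-forest observation, that restricting the attribute search space can amplify bias, can be illustrated with a small sketch of the kind of experiment such a hyperparameter sweep involves: vary `max_features` and measure accuracy alongside a simple group-fairness metric. The synthetic dataset, the demographic-parity metric, and the specific sweep values below are illustrative assumptions, not the paper's actual benchmarks or the Parfait-ML search algorithm.

```python
# Illustrative sweep (assumed setup, not the paper's benchmarks): vary the
# random forest's max_features and watch accuracy vs. a group-fairness gap.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                  # synthetic protected attribute
x = rng.normal(size=(n, 5)) + group[:, None] * 0.5
y = (x.sum(axis=1) + rng.normal(size=n) > 1).astype(int)

X = np.column_stack([x, group])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0)

def demographic_parity_gap(pred, g):
    # |P(pred=1 | g=0) - P(pred=1 | g=1)|
    return abs(pred[g == 0].mean() - pred[g == 1].mean())

for max_features in (1, 3, None):              # restricting attribute choice
    clf = RandomForestClassifier(
        n_estimators=100, max_features=max_features, random_state=0)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    acc = (pred == y_te).mean()
    gap = demographic_parity_gap(pred, g_te)
    print(f"max_features={max_features}: accuracy={acc:.3f}, DP gap={gap:.3f}")
```

The paper's tool automates a search over such configurations to trace the precision-fairness frontier, rather than a fixed grid as sketched here.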
Related papers
- Fairness Through Controlled (Un)Awareness in Node Embeddings [4.818571559544213]
We show how the parametrization of the CrossWalk algorithm influences the ability to infer sensitive attributes from node embeddings.
This functionality offers a valuable tool for improving the fairness of ML systems utilizing graph embeddings.
arXiv Detail & Related papers (2024-07-29T14:01:26Z) - Masked Thought: Simply Masking Partial Reasoning Steps Can Improve Mathematical Reasoning Learning of Language Models [102.72940700598055]
In reasoning tasks, even a minor error can cascade into inaccurate results.
We develop a method that avoids introducing external resources, relying instead on perturbations to the input.
Our training approach randomly masks certain tokens within the chain of thought, a technique we found to be particularly effective for reasoning tasks.
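The masking step described above can be sketched minimally: each token in a reasoning trace is independently replaced with a mask symbol at some rate. The function name, mask token, and rate below are illustrative assumptions; the paper's exact training recipe may differ.

```python
# Minimal sketch (illustrative, not the paper's exact method): randomly mask
# tokens in a chain-of-thought string before it is used for training.
import random

random.seed(0)
MASK = "<mask>"

def mask_reasoning(tokens, rate=0.2):
    """Replace each token with MASK independently with probability `rate`."""
    return [MASK if random.random() < rate else t for t in tokens]

chain = "add 3 and 4 to get 7 then multiply by 2".split()
print(mask_reasoning(chain))
```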
arXiv Detail & Related papers (2024-03-04T16:21:54Z) - Robustness of Algorithms for Causal Structure Learning to Hyperparameter Choice [2.3020018305241337]
Hyperparameter tuning can make the difference between state-of-the-art and poor prediction performance for any algorithm.
We investigate the influence of hyperparameter selection on causal structure learning tasks.
arXiv Detail & Related papers (2023-10-27T15:34:08Z) - Interactive Hyperparameter Optimization in Multi-Objective Problems via Preference Learning [65.51668094117802]
We propose a human-centered interactive HPO approach tailored to multi-objective machine learning (ML).
Instead of relying on the user guessing the most suitable indicator for their needs, our approach automatically learns an appropriate indicator.
arXiv Detail & Related papers (2023-09-07T09:22:05Z) - Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
arXiv Detail & Related papers (2023-07-08T15:41:48Z) - FAIRLEARN: Configurable and Interpretable Algorithmic Fairness [1.2183405753834557]
There is a need to mitigate any bias arising from either training samples or implicit assumptions made about the data samples.
Many approaches have been proposed to make learning algorithms fair by detecting and mitigating bias in different stages of optimization.
We propose the FAIRLEARN procedure that produces a fair algorithm by incorporating user constraints into the optimization procedure.
arXiv Detail & Related papers (2021-11-17T03:07:18Z) - Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z) - Individually Fair Gradient Boosting [86.1984206610373]
We consider the task of enforcing individual fairness in gradient boosting.
We show that our algorithm converges globally and generalizes.
We also demonstrate the efficacy of our algorithm on three ML problems susceptible to algorithmic bias.
arXiv Detail & Related papers (2021-03-31T03:06:57Z) - A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization [5.337302350000984]
We present Fairband, a bandit-based fairness-aware hyperparameter optimization (HO) algorithm.
By introducing fairness notions into HO, we enable seamless and efficient integration of fairness objectives into real-world ML pipelines.
We show that Fairband can efficiently navigate the fairness-accuracy trade-off through hyperparameter optimization.
arXiv Detail & Related papers (2020-10-07T21:35:16Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z) - Online Hyperparameter Search Interleaved with Proximal Parameter Updates [9.543667840503739]
We develop a method that relies on the structure of proximal gradient methods and does not require a smooth cost function.
Such a method is applied to Leave-one-out (LOO)-validated Lasso and Group Lasso.
Numerical experiments corroborate the convergence of the proposed method to a local optimum of the LOO validation error curve.
arXiv Detail & Related papers (2020-04-06T15:54:03Z)