Optimizing fairness tradeoffs in machine learning with multiobjective meta-models
- URL: http://arxiv.org/abs/2304.12190v1
- Date: Fri, 21 Apr 2023 13:42:49 GMT
- Title: Optimizing fairness tradeoffs in machine learning with multiobjective meta-models
- Authors: William G. La Cava
- Abstract summary: We present a flexible framework for defining the fair machine learning task as a weighted classification problem with multiple cost functions.
We use multiobjective optimization to define the sample weights used in model training for a given machine learner, and adapt the weights to optimize multiple metrics of fairness and accuracy.
On a set of real-world problems, this approach outperforms current state-of-the-art methods by finding solution sets with preferable error/fairness trade-offs.
- Score: 0.913755431537592
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Improving the fairness of machine learning models is a nuanced task that
requires decision makers to reason about multiple, conflicting criteria. The
majority of fair machine learning methods transform the error-fairness
trade-off into a single objective problem with a parameter controlling the
relative importance of error versus fairness. We propose instead to directly
optimize the error-fairness tradeoff by using multiobjective optimization. We
present a flexible framework for defining the fair machine learning task as a
weighted classification problem with multiple cost functions. This framework is
agnostic to the underlying prediction model as well as the metrics. We use
multiobjective optimization to define the sample weights used in model training
for a given machine learner, and adapt the weights to optimize multiple metrics
of fairness and accuracy across a set of tasks. To reduce the number of
optimized parameters, and to constrain their complexity with respect to
population subgroups, we propose a novel meta-model approach that learns to map
protected attributes to sample weights, rather than optimizing those weights
directly. On a set of real-world problems, this approach outperforms current
state-of-the-art methods by finding solution sets with preferable
error/fairness trade-offs.
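A minimal sketch of the idea, assuming a toy setup: a two-parameter "meta-model" maps each sample's protected attribute to a training weight, and a search over those parameters keeps the non-dominated (error, fairness) pairs. All names (`meta_weights`, `evaluate`, the synthetic data) are illustrative, and plain random search stands in for the paper's multiobjective optimizer:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
a = rng.integers(0, 2, n)                           # protected attribute (two groups)
x = np.c_[rng.normal(a.astype(float), 1.0), rng.normal(0.0, 1.0, n)]
y = (x[:, 0] + 0.5 * rng.normal(size=n) > 0.5).astype(int)  # label correlated with a

def meta_weights(theta, a):
    # meta-model: map protected attribute -> per-sample training weight
    return np.where(a == 1, theta[0], theta[1])

def evaluate(theta):
    w = meta_weights(theta, a)
    clf = LogisticRegression().fit(x, y, sample_weight=w)
    pred = clf.predict(x)
    error = (pred != y).mean()
    dp_gap = abs(pred[a == 1].mean() - pred[a == 0].mean())  # fairness cost
    return error, dp_gap

# keep only non-dominated (error, fairness) pairs over random meta-parameters
front = []
for _ in range(200):
    e, f = evaluate(rng.uniform(0.05, 1.0, size=2))
    if not any(pe <= e and pf <= f for pe, pf in front):
        front = [(pe, pf) for pe, pf in front if not (e <= pe and f <= pf)]
        front.append((e, f))
print(sorted(front))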
Related papers
- Unlearning as multi-task optimization: A normalized gradient difference approach with an adaptive learning rate [105.86576388991713]
We introduce a normalized gradient difference (NGDiff) algorithm, enabling us to have better control over the trade-off between the objectives.
We provide a theoretical analysis and empirically demonstrate the superior performance of NGDiff among state-of-the-art unlearning methods on the TOFU and MUSE datasets.
arXiv Detail & Related papers (2024-10-29T14:41:44Z)
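A hedged sketch of what a normalized gradient difference update could look like, based only on the abstract; the paper's adaptive learning rate is omitted, and `ngdiff_step` is an illustrative name, not the authors' API:

```python
import numpy as np

def ngdiff_step(w, g_retain, g_forget, lr=0.1, eps=1e-12):
    # normalized gradient difference: descend the retain loss while ascending
    # the forget loss; normalizing both keeps either objective from dominating
    g = g_retain / (np.linalg.norm(g_retain) + eps) \
        - g_forget / (np.linalg.norm(g_forget) + eps)
    return w - lr * g

# toy usage: retain loss ||w||^2, forget loss ||w - c||^2
w, c = np.array([1.0, -1.0]), np.array([3.0, 3.0])
for _ in range(100):
    w = ngdiff_step(w, 2 * w, 2 * (w - c))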
- NegMerge: Consensual Weight Negation for Strong Machine Unlearning [21.081262106431506]
Machine unlearning aims to selectively remove specific knowledge from a model.
Current methods rely on fine-tuning models on the forget set, generating a task vector, and subtracting it from the original model.
We propose a novel method that leverages all given fine-tuned models rather than selecting a single one.
arXiv Detail & Related papers (2024-10-08T00:50:54Z)
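One plausible reading of the abstract as code: build task vectors from several fine-tuned models, keep only coordinates whose sign agrees across all of them, and negate the merged consensus. `negmerge` and its arguments are illustrative, not the authors' implementation:

```python
import numpy as np

def negmerge(base, finetuned_list, alpha=1.0):
    # task vectors from every fine-tuned model on the forget set
    tvs = np.stack([ft - base for ft in finetuned_list])
    signs = np.sign(tvs)
    # keep only coordinates whose sign agrees across all models
    consensus = np.all(signs == signs[0], axis=0) & (signs[0] != 0)
    merged = np.where(consensus, tvs.mean(axis=0), 0.0)
    return base - alpha * merged   # subtract the consensual direction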
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm that preserves performance while generalizing better is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
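A loose, illustrative sketch of an adaptive reweighing loop in this spirit; the abstract does not describe the actual priority rule, so the per-group boundary-distance weighting below is an assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adaptive_priority_reweigh(x, y, a, rounds=5, gamma=2.0):
    w = np.ones(len(y))
    for _ in range(rounds):
        clf = LogisticRegression().fit(x, y, sample_weight=w)
        margin = np.abs(clf.decision_function(x))
        for g in np.unique(a):
            m = a == g
            # assumed priority rule: samples closer to the decision boundary
            # get larger weights, rescaled within each protected group
            w[m] = np.exp(-gamma * margin[m] / (margin[m].mean() + 1e-12))
        w *= len(y) / w.sum()
    return clf, w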
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
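A rough sketch of the "canary" idea, with random search standing in for the paper's gradient-based adversarial optimization; `in_models`/`out_models` are assumed shadow classifiers trained with and without the target example, and all names are illustrative:

```python
import numpy as np

def craft_canary(x0, in_models, out_models, steps=200, sigma=0.05, seed=0):
    # perturb a query so models trained WITH the target score it very
    # differently from models trained WITHOUT it
    rng = np.random.default_rng(seed)

    def gap(q):
        p_in = np.mean([m.predict_proba(q[None])[0, 1] for m in in_models])
        p_out = np.mean([m.predict_proba(q[None])[0, 1] for m in out_models])
        return abs(p_in - p_out)

    q = x0.copy()
    for _ in range(steps):
        cand = q + rng.normal(0.0, sigma, size=q.shape)
        if gap(cand) > gap(q):
            q = cand
    return q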
- Fairly Accurate: Learning Optimal Accuracy vs. Fairness Tradeoffs for Hate Speech Detection [8.841221697099687]
We introduce a differentiable measure that enables direct optimization of group fairness in model training.
We evaluate our methods on the specific task of hate speech detection.
Empirical results across convolutional, sequential, and transformer-based neural architectures show superior empirical accuracy vs. fairness trade-offs over prior work.
arXiv Detail & Related papers (2022-04-15T22:11:25Z)
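A minimal sketch of a differentiable accuracy-plus-fairness training objective of this kind; the squared soft positive-rate gap below is a stand-in, not the paper's exact measure:

```python
import numpy as np

def fair_loss(p, y, a, lam=1.0, eps=1e-7):
    # cross-entropy on predicted probabilities p ...
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    # ... plus a differentiable group penalty: squared gap in mean soft
    # positive rate between the two protected groups
    gap = p[a == 1].mean() - p[a == 0].mean()
    return ce + lam * gap ** 2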
- FORML: Learning to Reweight Data for Fairness [2.105564340986074]
We introduce Fairness Optimized Reweighting via Meta-Learning (FORML).
FORML balances fairness constraints and accuracy by jointly optimizing training sample weights and a neural network's parameters.
We show that FORML improves equality of opportunity fairness criteria over existing state-of-the-art reweighting methods by approximately 1% on image classification tasks and by approximately 5% on a face prediction task.
arXiv Detail & Related papers (2022-02-03T17:36:07Z)
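A hedged sketch of FORML's bilevel structure, with simple alternation replacing the paper's meta-gradient through the inner update; the weight-update heuristic here is an assumption, not FORML's actual rule:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forml_sketch(x_tr, y_tr, a_tr, x_val, a_val, steps=300, lr=0.1, lr_w=0.05):
    theta = np.zeros(x_tr.shape[1])
    w = np.ones(len(y_tr))
    for _ in range(steps):
        # inner step: weighted logistic-regression gradient on training data
        p = sigmoid(x_tr @ theta)
        theta -= lr * x_tr.T @ (w * (p - y_tr)) / len(y_tr)
        # outer step (heuristic stand-in): shift weight toward whichever
        # protected group is currently disfavored on validation data
        pv = sigmoid(x_val @ theta)
        gap = pv[a_val == 1].mean() - pv[a_val == 0].mean()
        w *= np.exp(-lr_w * gap * np.where(a_tr == 1, 1.0, -1.0))
        w *= len(w) / w.sum()
    return theta, w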
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
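For intuition, the tabular special case that the paper's neural generative model generalizes: under a KL-regularized worst case, the adversary's distribution is an exponential tilting of the empirical one by per-sample loss:

```python
import numpy as np

def worst_case_weights(losses, tau=1.0):
    # KL-regularized DRO adversary: reweight the empirical distribution by
    # exp(loss / tau); the paper replaces this weight table with a generator
    z = np.exp((losses - losses.max()) / tau)   # shift for numerical stability
    return z / z.sum()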
- A Primal-Dual Subgradient Approach for Fair Meta Learning [23.65344558042896]
Few-shot meta-learning is well known for its fast adaptation and accurate generalization to unseen tasks.
We propose a Primal-Dual Fair Meta-learning framework, namely PDFM, which learns to train fair machine learning models using only a few examples.
arXiv Detail & Related papers (2020-09-26T19:47:38Z)
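A sketch of the primal-dual subgradient core (a single step), independent of the meta-learning wrapper: descend the Lagrangian in the model parameters, ascend it in the multiplier. Names and step sizes are illustrative:

```python
import numpy as np

def primal_dual_step(theta, lam, grad_loss, fair_violation, grad_fair,
                     lr_primal=0.1, lr_dual=0.1):
    # primal: subgradient descent on the Lagrangian  loss + lam * fairness
    theta = theta - lr_primal * (grad_loss + lam * grad_fair)
    # dual: ascend in lam, so the multiplier grows while the fairness
    # constraint is violated and shrinks (floored at 0) once it is satisfied
    lam = max(0.0, lam + lr_dual * fair_violation)
    return theta, lam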
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
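A minimal sketch of one way to pressure models apart: penalize pairwise prediction correlation across the ensemble (the paper's actual objective differs in detail):

```python
import numpy as np

def diversity_penalty(preds):
    # preds: (n_models, n_examples) predicted probabilities; penalize
    # squared pairwise prediction correlation so models are pushed apart
    k = len(preds)
    c = np.corrcoef(preds)
    return np.mean(c[~np.eye(k, dtype=bool)] ** 2)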
- Fair Bayesian Optimization [25.80374249896801]
We introduce a general constrained Bayesian optimization framework to optimize the performance of any machine learning (ML) model.
We apply BO with fairness constraints to a range of popular models, including random forests, boosting, and neural networks.
We show that our approach is competitive with specialized techniques that enforce model-specific fairness constraints.
arXiv Detail & Related papers (2020-06-09T08:31:08Z)
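A sketch of a fairness-constrained acquisition function in this spirit, assuming validation error and the fairness metric are each modeled by a fitted scikit-learn GP over hyperparameters: expected improvement weighted by the probability the constraint holds. The `threshold` value and all names are illustrative:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor  # assumed model type

def constrained_ei(gp_error, gp_fair, candidates, best_error, threshold=0.05):
    # gp_error, gp_fair: fitted GaussianProcessRegressor instances
    # expected improvement on validation error ...
    mu, sd = gp_error.predict(candidates, return_std=True)
    z = (best_error - mu) / np.maximum(sd, 1e-9)
    ei = (best_error - mu) * norm.cdf(z) + sd * norm.pdf(z)
    # ... weighted by the probability the fairness metric stays feasible
    mu_c, sd_c = gp_fair.predict(candidates, return_std=True)
    return ei * norm.cdf((threshold - mu_c) / np.maximum(sd_c, 1e-9))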
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
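A sketch of the confidence-weighted transductive update; here the per-query confidence is simply an input array in [0, 1], whereas the paper meta-learns it:

```python
import numpy as np

def refine_prototypes(protos, queries, conf):
    # protos: (C, D) class prototypes; queries: (Q, D); conf: (Q,) in [0, 1]
    d = ((queries[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    assign = d.argmin(1)                     # nearest prototype per query
    new = []
    for c in range(len(protos)):
        m = assign == c
        # fold unlabeled queries into the prototype, weighted by confidence
        num = protos[c] + (conf[m, None] * queries[m]).sum(0)
        new.append(num / (1.0 + conf[m].sum()))
    return np.stack(new)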
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.