Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task
Learning
- URL: http://arxiv.org/abs/2106.02705v1
- Date: Fri, 4 Jun 2021 20:28:54 GMT
- Title: Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task
Learning
- Authors: Yuyan Wang, Xuezhi Wang, Alex Beutel, Flavien Prost, Jilin Chen, Ed H.
Chi
- Abstract summary: We are concerned with how group fairness as an ML fairness concept plays out in the multi-task scenario.
In multi-task learning, several tasks are learned jointly to exploit task correlations for a more efficient inductive transfer.
We propose a Multi-Task-Aware Fairness (MTA-F) approach to improve fairness in multi-task learning.
- Score: 18.666340309506605
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: As multi-task models gain popularity in a wider range of machine learning
applications, it is becoming increasingly important for practitioners to
understand the fairness implications associated with those models. Most
existing fairness literature focuses on learning a single task more fairly,
while how ML fairness interacts with multiple tasks in the joint learning
setting is largely under-explored. In this paper, we are concerned with how
group fairness (e.g., equal opportunity, equalized odds) as an ML fairness
concept plays out in the multi-task scenario. In multi-task learning, several
tasks are learned jointly to exploit task correlations for a more efficient
inductive transfer. This presents a multi-dimensional Pareto frontier on (1)
the trade-off between group fairness and accuracy with respect to each task, as
well as (2) the trade-offs across multiple tasks. We aim to provide a deeper
understanding on how group fairness interacts with accuracy in multi-task
learning, and we show that traditional approaches that mainly focus on
optimizing the Pareto frontier of multi-task accuracy might not perform well on
fairness goals. We propose a new set of metrics to better capture the
multi-dimensional Pareto frontier of fairness-accuracy trade-offs uniquely
presented in a multi-task learning setting. We further propose a
Multi-Task-Aware Fairness (MTA-F) approach to improve fairness in multi-task
learning. Experiments on several real-world datasets demonstrate the
effectiveness of our proposed approach.
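To make the per-task fairness-accuracy trade-off concrete, below is a minimal illustrative sketch (not the paper's MTA-F implementation) of how one might score a multi-task model on both axes for every task, assuming binary tasks and a binary sensitive attribute; names such as `per_task_tradeoff` and `eo_gap` are hypothetical.

```python
import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute TPR gap between the two groups for one binary task."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tpr.append(y_pred[mask].mean() if mask.any() else 0.0)
    return abs(tpr[0] - tpr[1])

def per_task_tradeoff(y_true, y_pred, group):
    """One (accuracy, equal-opportunity gap) point per task.

    y_true / y_pred map task name -> binary label / prediction arrays.
    A multi-task model is scored on both axes for every task, so a single
    model corresponds to a whole set of points, not one point on one curve.
    """
    return {
        task: {
            "accuracy": float((y_true[task] == y_pred[task]).mean()),
            "eo_gap": float(equal_opportunity_gap(y_true[task], y_pred[task], group)),
        }
        for task in y_true
    }

# Toy usage: two binary tasks sharing one binary sensitive attribute.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
labels = {t: rng.integers(0, 2, size=1000) for t in ("task_a", "task_b")}
preds = {t: labels[t] ^ (rng.random(1000) < 0.2).astype(int) for t in labels}
print(per_task_tradeoff(labels, preds, group))
```

Each task contributes its own (accuracy, fairness-gap) pair, so changing the model moves a whole set of points at once; this is the multi-dimensional frontier the abstract refers to.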
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework in which multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little, or even non-overlapping, annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to poor fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Fairness in Multi-Task Learning via Wasserstein Barycenters [0.0]
Algorithmic Fairness is an established field in machine learning that aims to reduce biases in data.
We develop a method that extends the definition of Strong Demographic Parity to multi-task learning using multi-marginal Wasserstein barycenters.
Our approach provides a closed form solution for the optimal fair multi-task predictor including both regression and binary classification tasks.
arXiv Detail & Related papers (2023-06-16T19:53:34Z)
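As a rough, assumption-laden sketch of the closed-form idea in the Wasserstein-barycenter entry above: for a single task, Strong Demographic Parity can be approached by mapping each example's score to its within-group rank and re-reading it from the group-weighted barycenter of the per-group quantile functions. The mid-rank approximation and the name `fair_scores_via_barycenter` are illustrative, not the paper's code; in a multi-task model this post-processing would be applied per task head.

```python
import numpy as np

def fair_scores_via_barycenter(scores, group):
    """Post-process one task's real-valued scores toward Strong Demographic Parity.

    Each score is mapped to its within-group (mid-)rank and re-read from the
    barycenter distribution, whose quantile function is the group-proportion-
    weighted average of the per-group empirical quantile functions.
    """
    groups, counts = np.unique(group, return_counts=True)
    weights = counts / counts.sum()
    # Within-group CDF value (mid-rank) of every example's score.
    ranks = np.empty(len(scores), dtype=float)
    for g in groups:
        idx = np.where(group == g)[0]
        order = scores[idx].argsort().argsort()   # ranks 0 .. n_g - 1
        ranks[idx] = (order + 0.5) / len(idx)
    # Barycenter quantile function evaluated at each example's rank.
    fair = np.zeros(len(scores), dtype=float)
    for w, g in zip(weights, groups):
        fair += w * np.quantile(scores[group == g], ranks)
    return fair

# Toy usage: group 1's raw scores are shifted upward; after adjustment the
# score distributions of the two groups are approximately identical.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=2000)
scores = rng.normal(loc=group * 0.5, scale=1.0)
adjusted = fair_scores_via_barycenter(scores, group)
```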
- Equitable Multi-task Learning [18.65048321820911]
Multi-task learning (MTL) has achieved great success in various research domains, such as CV, NLP and IR.
We propose a novel multi-task optimization method, named EMTL, to achieve equitable MTL.
Our method stably outperforms state-of-the-art methods on the public benchmark datasets of two different research domains.
arXiv Detail & Related papers (2023-06-15T03:37:23Z)
- Learning to Teach Fairness-aware Deep Multi-task Learning [17.30805079904122]
We propose a flexible approach that learns how to be fair in a multi-task setting by selecting which objective (accuracy or fairness) to optimize at each step.
Experiments on three real datasets show that L2T-FMT improves on both fairness (12-19%) and accuracy (up to 2%) over state-of-the-art approaches.
arXiv Detail & Related papers (2022-06-16T18:43:16Z)
- Channel Exchanging Networks for Multimodal and Multitask Dense Image Prediction [125.18248926508045]
We propose Channel-Exchanging-Network (CEN) which is self-adaptive, parameter-free, and more importantly, applicable for both multimodal fusion and multitask learning.
CEN dynamically exchanges channels between sub-networks of different modalities.
For dense image prediction, the validity of CEN is tested in four different scenarios.
arXiv Detail & Related papers (2021-12-04T05:47:54Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training.
arXiv Detail & Related papers (2020-10-29T08:25:43Z)
- Small Towers Make Big Differences [59.243296878666285]
Multi-task learning aims at solving multiple machine learning tasks at the same time.
A good solution to a multi-task learning problem should be generalizable in addition to being Pareto optimal.
We propose a method of under-parameterized self-auxiliaries for multi-task models to achieve the best of both worlds.
arXiv Detail & Related papers (2020-08-13T10:45:31Z)
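The "Small Towers Make Big Differences" entry above only names the technique, so the following is a speculative sketch, assuming "under-parameterized self-auxiliaries" means small extra per-task heads whose losses are added during training and dropped at inference; the class `SmallTowerMTL` and all layer sizes are hypothetical.

```python
import torch.nn as nn

class SmallTowerMTL(nn.Module):
    """Shared-bottom multi-task model with small per-task auxiliary towers.

    The auxiliary towers are deliberately under-parameterized relative to the
    main towers; their losses are added during training and the auxiliaries
    are discarded at serving time.
    """
    def __init__(self, in_dim, tasks, hidden=64, aux_hidden=8):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.towers = nn.ModuleDict({
            t: nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                             nn.Linear(hidden, 1))
            for t in tasks})
        self.aux_towers = nn.ModuleDict({  # much smaller "self-auxiliaries"
            t: nn.Sequential(nn.Linear(hidden, aux_hidden), nn.ReLU(),
                             nn.Linear(aux_hidden, 1))
            for t in tasks})

    def forward(self, x):
        h = self.shared(x)
        main = {t: tower(h) for t, tower in self.towers.items()}
        aux = {t: tower(h) for t, tower in self.aux_towers.items()}
        return main, aux

# Training sketch: each auxiliary head simply adds its own task loss, e.g.
#   loss = sum(bce(main[t], y[t]) + bce(aux[t], y[t]) for t in tasks)
# At inference only the `main` outputs are used.
```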