Learning to Teach Fairness-aware Deep Multi-task Learning
- URL: http://arxiv.org/abs/2206.08403v1
- Date: Thu, 16 Jun 2022 18:43:16 GMT
- Title: Learning to Teach Fairness-aware Deep Multi-task Learning
- Authors: Arjun Roy, Eirini Ntoutsi
- Abstract summary: We propose a flexible approach that learns how to be fair in a multi-task setting by selecting which objective (accuracy or fairness) to optimize at each step.
Experiments on three real datasets show that L2T-FMT improves on both fairness (12-19%) and accuracy (up to 2%) over state-of-the-art approaches.
- Score: 17.30805079904122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness-aware learning mainly focuses on single task learning (STL). The
fairness implications of multi-task learning (MTL) have only recently been
considered and a seminal approach has been proposed that considers the
fairness-accuracy trade-off for each task and the performance trade-off among
different tasks. Instead of a rigid fairness-accuracy trade-off formulation, we
propose a flexible approach that learns how to be fair in an MTL setting by
selecting which objective (accuracy or fairness) to optimize at each step. We
introduce the L2T-FMT algorithm that is a teacher-student network trained
collaboratively; the student learns to solve the fair MTL problem while the
teacher instructs the student to learn from either accuracy or fairness,
depending on what is harder to learn for each task. Moreover, this dynamic
selection of which objective to use at each step for each task reduces the
number of trade-off weights from 2T to T, where T is the number of tasks. Our
experiments on three real datasets show that L2T-FMT improves on both fairness
(12-19%) and accuracy (up to 2%) over state-of-the-art approaches.
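The selection mechanism described in the abstract is easy to prototype. Below is a minimal, hypothetical PyTorch sketch of the idea: a shared-trunk student with one head per task, and a bandit-style teacher that, per task and per step, routes the update to whichever objective (accuracy or fairness) currently looks harder, i.e., has the higher running loss. All names here (StudentNet, fairness_loss, the epsilon-greedy teacher) are illustrative assumptions, not the authors' implementation; the paper trains a teacher network collaboratively with the student.

```python
# Hypothetical sketch of the L2T-FMT idea, not the authors' code: per task and
# per step, a teacher picks whether the student optimizes the accuracy loss or
# a fairness loss, preferring whichever is currently harder (higher running loss).
import random
import torch
import torch.nn as nn

class StudentNet(nn.Module):
    """Hard-parameter-sharing student: shared trunk, one binary head per task."""
    def __init__(self, in_dim, n_tasks, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x):
        h = self.trunk(x)
        return [head(h).squeeze(-1) for head in self.heads]

def fairness_loss(logits, group):
    """Toy demographic-parity surrogate; assumes both groups occur in the batch."""
    p = torch.sigmoid(logits)
    return (p[group == 1].mean() - p[group == 0].mean()).abs()

def train_step(student, opt, x, ys, group, q, eps=0.1):
    """ys: per-task float labels; q[t] = running [accuracy, fairness] loss estimates."""
    bce = nn.BCEWithLogitsLoss()
    total = 0.0
    for t, (logits, y) in enumerate(zip(student(x), ys)):
        losses = (bce(logits, y), fairness_loss(logits, group))
        # Teacher: epsilon-greedy pick of the objective estimated to be harder.
        a = random.randrange(2) if random.random() < eps else int(q[t][1] > q[t][0])
        q[t][a] = 0.9 * q[t][a] + 0.1 * losses[a].item()  # update running estimate
        # Only the chosen objective per task enters the update, so T trade-offs
        # remain instead of 2T.
        total = total + losses[a]
    opt.zero_grad()
    total.backward()
    opt.step()
```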
Related papers
- Token-Efficient Leverage Learning in Large Language Models [13.830828529873056]
Large Language Models (LLMs) have excelled in various tasks but perform better in high-resource scenarios.
Data scarcity and the inherent difficulty of adapting LLMs to specific tasks compound the challenge.
We present a streamlined implementation of this methodology, called Token-Efficient Leverage Learning (TELL).
arXiv Detail & Related papers (2024-04-01T04:39:44Z)
- Fair Resource Allocation in Multi-Task Learning [12.776767874217663]
Multi-task learning (MTL) can leverage the shared knowledge across tasks, resulting in improved data efficiency and generalization performance.
A major challenge in MTL lies in the presence of conflicting gradients, which can hinder the fair optimization of some tasks.
Inspired by fair resource allocation in communication networks, we propose FairGrad, a novel MTL optimization method.
arXiv Detail & Related papers (2024-02-23T22:46:14Z)
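FairGrad's framing suggests alpha-fair resource allocation from communication networks applied to gradient combination. The NumPy sketch below is a hedged illustration of that general idea, not the paper's actual solver: weight the task gradients so that the per-task improvement rates g_i . d stay balanced under an alpha-fair criterion.

```python
# Illustrative alpha-fair gradient combination in the spirit of FairGrad; the
# damped fixed point below is an assumption for exposition, not the paper's solver.
import numpy as np

def alpha_fair_combine(grads, alpha=2.0, iters=100):
    """grads: (T, P) array, one flattened gradient per task; returns joint update d."""
    G = np.asarray(grads, dtype=float)
    w = np.ones(len(G)) / len(G)
    for _ in range(iters):
        d = w @ G                                 # candidate joint direction
        rates = np.maximum(G @ d, 1e-8)           # per-task improvement g_i . d, clipped
        w_new = rates ** (-alpha)                 # alpha-fairness: boost starved tasks
        w = 0.5 * w + 0.5 * w_new / w_new.sum()   # damped update for stability
    return w @ G
```

In the standard alpha-fair family, alpha near 0 recovers plain utilitarian (loss-sum) behaviour, alpha = 1 corresponds to proportional fairness, and large alpha pushes toward max-min fairness across tasks.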
- PEMT: Multi-Task Correlation Guided Mixture-of-Experts Enables Parameter-Efficient Transfer Learning [28.353530290015794]
We propose PEMT, a novel parameter-efficient fine-tuning framework based on multi-task transfer learning.
We conduct experiments on a broad range of tasks over 17 datasets.
arXiv Detail & Related papers (2024-02-23T03:59:18Z)
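The mechanism named in PEMT's title can be pictured as task-specific adapter experts mixed by task-correlation weights. The sketch below is a guess at that shape under our own assumptions; CorrelationGatedAdapters and its gating are hypothetical, not the paper's architecture.

```python
# Hypothetical residual adapter mixture gated by learned task-correlation scores,
# in the spirit of PEMT's title; not the paper's actual design.
import torch
import torch.nn as nn

class CorrelationGatedAdapters(nn.Module):
    def __init__(self, dim, n_experts, bottleneck=16):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, bottleneck), nn.ReLU(),
                          nn.Linear(bottleneck, dim))
            for _ in range(n_experts))
        # One learned correlation score per source-task expert.
        self.corr = nn.Parameter(torch.zeros(n_experts))

    def forward(self, h):
        w = torch.softmax(self.corr, dim=0)  # correlation scores -> mixture weights
        return h + sum(wi * e(h) for wi, e in zip(w, self.experts))
```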
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory.
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
arXiv Detail & Related papers (2022-02-28T16:07:19Z)
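The skill-inventory idea lends itself to a compact sketch: keep a bank of skill modules and give each task a learned allocation over that bank. The soft sigmoid allocation below is a simplifying assumption; the paper associates each task with a discrete subset of skills.

```python
# Illustrative skill-modular layer: each task's weights are an average of the
# skill modules it selects. Soft allocation is our simplification; the paper's
# task-skill matrix is discrete.
import torch
import torch.nn as nn

class SkillLinear(nn.Module):
    def __init__(self, in_dim, out_dim, n_skills, n_tasks):
        super().__init__()
        self.skills = nn.Parameter(torch.randn(n_skills, out_dim, in_dim) * 0.02)
        self.logits = nn.Parameter(torch.zeros(n_tasks, n_skills))  # task-skill allocation

    def forward(self, x, task_id):
        z = torch.sigmoid(self.logits[task_id])      # soft skill subset for this task
        W = (z[:, None, None] * self.skills).sum(0) / z.sum().clamp(min=1e-6)
        return x @ W.T                               # task-specific linear map
```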
- Multi-Task Learning as a Bargaining Game [63.49888996291245]
In multi-task learning (MTL), a joint model is trained to simultaneously make predictions for several tasks.
Since the gradients of these different tasks may conflict, training a joint model for MTL often yields lower performance than its corresponding single-task counterparts.
We propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
arXiv Detail & Related papers (2022-02-02T13:21:53Z)
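As we understand the bargaining view, it has a crisp optimality condition: with per-task gradients g_i stacked in G and Gram matrix K = G G^T, the agreed direction d = sum_i a_i g_i satisfies (K a)_i = 1/a_i, so every task receives a positive, balanced gain g_i . d. The damped fixed-point iteration below is our own illustrative way to approximate that condition, not the paper's solver.

```python
# Sketch of bargaining-style gradient combination: find coefficients a with
# (K a)_i ~ 1/a_i, where K = G G^T. The crude damped fixed point is our
# illustration; the paper uses a dedicated optimization scheme.
import numpy as np

def bargained_update(grads, iters=200):
    G = np.asarray(grads, dtype=float)  # (T, P): one flattened gradient per task
    K = G @ G.T                         # Gram matrix of task gradients
    a = np.ones(len(G))
    for _ in range(iters):
        a_new = 1.0 / np.maximum(K @ a, 1e-8)  # clip to dodge conflicting gradients
        a = 0.5 * a + 0.5 * a_new              # damping for stability
    return a @ G                        # joint direction d = sum_i a_i g_i
```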
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
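The Gumbel-Softmax machinery the title refers to is easy to show in isolation: it yields differentiable, near-one-hot samples. The mixture wiring below, where such a sample weights related tasks when forming one task's prior, is our assumption about how the pieces could fit together.

```python
# Minimal Gumbel-Softmax sampler plus a hypothetical use: build one task's prior
# mean as a relaxed one-hot mixture over related tasks' posterior means.
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits, tau=0.5):
    u = torch.rand_like(logits).clamp_min(1e-9)
    g = -torch.log(-torch.log(u))                  # standard Gumbel(0, 1) noise
    return F.softmax((logits + g) / tau, dim=-1)   # differentiable, near one-hot

task_means = torch.randn(4, 16)                    # posterior means of 4 related tasks
weights = gumbel_softmax_sample(torch.zeros(4))    # logits would be learned in practice
prior_mean = weights @ task_means                  # prior for the current task
```

PyTorch also ships F.gumbel_softmax, which packages the same trick with an optional hard straight-through mode.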
- Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP [39.457091182683406]
We aim to provide task distributions for meta-learning by considering self-supervised tasks automatically proposed from unlabeled text.
Our analysis shows that all these factors meaningfully alter the task distribution, some inducing significant improvements in downstream few-shot accuracy of the meta-learned models.
arXiv Detail & Related papers (2021-11-02T01:50:09Z)
- Semi-supervised Multi-task Learning for Semantics and Depth [88.77716991603252]
Multi-Task Learning (MTL) aims to enhance the model generalization by sharing representations between related tasks for better performance.
We propose a semi-supervised multi-task learning method to leverage the available supervisory signals from different datasets.
We present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy issue among datasets.
arXiv Detail & Related papers (2021-10-14T07:43:39Z)
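A common way to realize such a domain-aware discriminator is domain-adversarial training with a gradient-reversal layer. The sketch below shows that generic pattern under our own assumptions; the paper's exact alignment formulations are not described here.

```python
# Generic domain-adversarial alignment sketch (our assumption, not the paper's
# exact design): a gradient-reversal layer trains shared features to fool a
# per-dataset domain classifier, shrinking the domain discrepancy.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad_out):
        return -grad_out                   # flip gradients into the feature extractor

class DomainDiscriminator(nn.Module):
    """Predicts which dataset a shared feature came from."""
    def __init__(self, feat_dim, n_domains):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_domains))

    def forward(self, feats):
        return self.net(GradReverse.apply(feats))  # cross-entropy vs. domain labels
```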
- Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning [18.666340309506605]
We are concerned with how group fairness as an ML fairness concept plays out in the multi-task scenario.
In multi-task learning, several tasks are learned jointly to exploit task correlations for a more efficient inductive transfer.
We propose a Multi-Task-Aware Fairness (MTA-F) approach to improve fairness in multi-task learning.
arXiv Detail & Related papers (2021-06-04T20:28:54Z)
- Measuring and Harnessing Transference in Multi-Task Learning [58.48659733262734]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We analyze the dynamics of information transfer, or transference, across tasks throughout training.
arXiv Detail & Related papers (2020-10-29T08:25:43Z)
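One concrete way to quantify transference is a lookahead probe: take one SGD step on task i alone and measure how task j's loss moves. The ratio definition below is a common formulation and our assumption, not necessarily the paper's exact measure; loss_i and loss_j are user-supplied callables mapping (model, batch) to a scalar loss.

```python
# Lookahead transference probe: apply one task's gradient step to a copy of the
# model and see how another task's loss changes. Positive values mean task i's
# update helps task j.
import copy
import torch

def transference(model, loss_i, loss_j, batch, lr=1e-2):
    """Relative change in task j's loss after one SGD step on task i's loss."""
    base_j = loss_j(model, batch).item()
    probe = copy.deepcopy(model)          # leave the original model untouched
    loss_i(probe, batch).backward()       # gradients of task i only
    with torch.no_grad():
        for p in probe.parameters():
            if p.grad is not None:
                p -= lr * p.grad          # one lookahead step on task i
    after_j = loss_j(probe, batch).item()
    return 1.0 - after_j / max(base_j, 1e-12)   # > 0: task i helps task j
```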