Skill-Aligned Fairness in Multi-Agent Learning for Collaboration in Healthcare
- URL: http://arxiv.org/abs/2508.18708v2
- Date: Fri, 05 Sep 2025 03:49:51 GMT
- Title: Skill-Aligned Fairness in Multi-Agent Learning for Collaboration in Healthcare
- Authors: Promise Osaine Ekpo, Brian La, Thomas Wiener, Saesha Agarwal, Arshia Agrawal, Gonzalo Gonzalez-Pumariega, Lekan P. Molu, Angelique Taylor
- Abstract summary: In healthcare, equitable task allocation requires workload balance or expertise alignment to prevent burnout and overuse of highly skilled agents. We propose FairSkillMARL, a framework that defines fairness as the dual objective of workload balance and skill-task alignment. We introduce MARLHospital, a customizable healthcare-inspired environment for modeling team compositions and energy-constrained scheduling impacts on fairness.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness in multi-agent reinforcement learning (MARL) is often framed as a workload balance problem, overlooking agent expertise and the structured coordination required in real-world domains. In healthcare, equitable task allocation requires workload balance or expertise alignment to prevent burnout and overuse of highly skilled agents. Workload balance refers to distributing an approximately equal number of subtasks, or equalised effort, across healthcare workers, regardless of their expertise. We make two contributions to address this problem. First, we propose FairSkillMARL, a framework that defines fairness as the dual objective of workload balance and skill-task alignment. Second, we introduce MARLHospital, a customizable healthcare-inspired environment for modeling team compositions and energy-constrained scheduling impacts on fairness, as no existing simulators are well-suited for this problem. We conducted experiments comparing FairSkillMARL in conjunction with four standard MARL methods against two state-of-the-art fairness metrics. Our results suggest that fairness based solely on equal workload might lead to task-skill mismatches and highlight the need for more robust metrics that capture skill-task misalignment. Our work provides tools and a foundation for studying fairness in heterogeneous multi-agent systems where aligning effort with expertise is critical.
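The dual objective described in the abstract can be illustrated with a small sketch. The function name, weights, and exact penalty forms below are illustrative assumptions, not the paper's actual FairSkillMARL objective: a workload-balance term penalizes dispersion in per-agent task counts, and a skill-alignment term penalizes assignments that mismatch agent skills.

```python
import numpy as np

def fairness_penalty(task_counts, skill_match, w_balance=1.0, w_skill=1.0):
    """Toy dual-objective fairness penalty (hypothetical, for illustration).

    task_counts: number of subtasks completed by each agent.
    skill_match: per-assignment indicator (1 if the assigned agent
                 had the required skill, 0 otherwise).
    """
    task_counts = np.asarray(task_counts, dtype=float)
    skill_match = np.asarray(skill_match, dtype=float)

    # Workload balance: penalize dispersion of per-agent workloads
    # (coefficient of variation, 0 when all agents do equal work).
    balance_term = task_counts.std() / (task_counts.mean() + 1e-8)

    # Skill-task alignment: fraction of assignments that mismatched skills.
    mismatch_term = 1.0 - skill_match.mean()

    return w_balance * balance_term + w_skill * mismatch_term

# Equal workloads with all assignments skill-matched incur zero penalty.
print(fairness_penalty([3, 3, 3], [1, 1, 1, 1]))  # 0.0
```

A shaped reward would subtract such a penalty from the team return, trading off task throughput against the two fairness terms via the weights.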
Related papers
- Fair-GNE: Generalized Nash Equilibrium-Seeking Fairness in Multiagent Healthcare Automation
Existing multi-agent reinforcement learning approaches steer fairness by shaping reward through post hoc orchestrations. We address this shortcoming with a learning-enabled optimization scheme among self-interested decision makers. Our results communicate our formulations, evaluation metrics, and equilibrium-seeking innovations in large multi-agent learning-based healthcare systems.
arXiv Detail & Related papers (2025-11-18T04:48:50Z)
- Interactional Fairness in LLM Multi-Agent Systems: An Evaluation Framework
We introduce a novel framework for evaluating Interactional fairness, encompassing Interpersonal fairness (IF) and Informational fairness (InfF), in multi-agent systems. We validate our framework through a pilot study using controlled simulations of a resource negotiation task. Results show that tone and justification quality significantly affect acceptance decisions even when objective outcomes are held constant.
arXiv Detail & Related papers (2025-05-17T13:24:13Z)
- Predicting Multi-Agent Specialization via Task Parallelizability
We show that specialist teams outperform generalist ones when environmental constraints limit task parallelizability. We also observe that as the state space expands, agents tend to converge on specialist strategies, even when generalist ones are theoretically more efficient. Our findings provide a principled framework for interpreting specialization given the task and environment.
arXiv Detail & Related papers (2025-03-19T21:33:48Z)
- Cooperation and Fairness in Multi-Agent Reinforcement Learning
In resource-constrained environments of mobility and transportation systems, efficiency may be achieved at the expense of fairness.
We consider the problem of fair multi-agent navigation for a group of decentralized agents using multi-agent reinforcement learning (MARL).
We find that our model yields a 14% improvement in efficiency and a 5% improvement in fairness over a baseline trained using random assignments.
arXiv Detail & Related papers (2024-10-19T00:10:52Z)
- Fairness-Aware Meta-Learning via Nash Bargaining
We introduce a two-stage meta-learning framework to address issues of group-level fairness in machine learning.
The first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model.
We show empirical effects across various fairness objectives in six key fairness datasets and two image classification tasks.
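A Nash Bargaining Solution maximizes the product of the parties' gains over their disagreement point. As a self-contained toy illustration (not the paper's hypergradient formulation), the sketch below splits a unit budget between two parties by grid search; the function name and the linear utilities are assumptions made for illustration.

```python
import numpy as np

def nash_bargaining_split(gain_a, gain_b, grid=1001):
    """Toy Nash Bargaining Solution for splitting a unit budget
    between two parties with linear utilities u_i = gain_i * share_i
    and disagreement utilities of 0: maximize the product u_a * u_b.
    """
    shares = np.linspace(0.0, 1.0, grid)
    product = (gain_a * shares) * (gain_b * (1.0 - shares))
    return shares[np.argmax(product)]

# With linear utilities the NBS splits the budget evenly,
# regardless of how much the per-unit gains differ.
print(nash_bargaining_split(2.0, 5.0))  # 0.5
```

The even split falls out of the product objective: scaling one party's utility does not move the maximizer of u_a * u_b, which is the scale-invariance property that makes the NBS attractive for resolving conflicting gradients.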
arXiv Detail & Related papers (2024-06-11T07:34:15Z)
- Fair Few-shot Learning with Auxiliary Sets
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Skill-Based Reinforcement Learning with Intrinsic Reward Matching
We present Intrinsic Reward Matching (IRM), which unifies task-agnostic skill pretraining and task-aware finetuning.
IRM enables us to utilize pretrained skills far more effectively than previous skill selection methods.
arXiv Detail & Related papers (2022-10-14T00:04:49Z)
- Fair Machine Learning in Healthcare: A Review
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Understanding and Improving Fairness-Accuracy Trade-offs in Multi-Task Learning
We are concerned with how group fairness as an ML fairness concept plays out in the multi-task scenario.
In multi-task learning, several tasks are learned jointly to exploit task correlations for a more efficient inductive transfer.
We propose a Multi-Task-Aware Fairness (MTA-F) approach to improve fairness in multi-task learning.
arXiv Detail & Related papers (2021-06-04T20:28:54Z)
- Softmax with Regularization: Better Value Estimation in Multi-Agent Reinforcement Learning
Overestimation in Q-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2021-03-22T14:18:39Z)
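The regularization idea in the entry above can be sketched in miniature. The functions below are illustrative assumptions, not the paper's actual update rule: a softmax-weighted baseline over action values (which never exceeds the max, mitigating overestimation) and a squared penalty on joint action-values that deviate from it.

```python
import numpy as np

def softmax_baseline(q_values, tau=1.0):
    """Softmax-weighted average of action values: a softened
    alternative to the max operator that is less prone to
    overestimation (hypothetical helper, for illustration)."""
    z = q_values / tau
    z = z - z.max()  # subtract the max for numerical stability
    w = np.exp(z) / np.exp(z).sum()
    return float((w * q_values).sum())

def regularized_td_penalty(q_joint, q_values, coef=0.1, tau=1.0):
    """Toy regularizer: penalize a joint action-value that deviates
    from the softmax baseline of the individual action values."""
    baseline = softmax_baseline(q_values, tau)
    return coef * (q_joint - baseline) ** 2

q = np.array([1.0, 2.0, 3.0])
print(softmax_baseline(q) < q.max())  # True: the baseline stays below the max
```

Adding such a penalty to the TD loss discourages the learned joint action-values from drifting far above the softened baseline, which is the intuition behind the deviation-penalizing update scheme the entry describes.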
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.