Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks
- URL: http://arxiv.org/abs/2502.04850v1
- Date: Fri, 07 Feb 2025 11:39:27 GMT
- Title: Aequa: Fair Model Rewards in Collaborative Learning via Slimmable Networks
- Authors: Nurbek Tastan, Samuel Horvath, Karthik Nandakumar
- Abstract summary: Collaborative learning enables multiple participants to learn a single global model by exchanging focused updates instead of sharing data.
One of the core challenges in collaborative learning is ensuring that participants are rewarded fairly for their contributions.
This work focuses on fair reward allocation, where the participants are incentivized through model rewards.
- Score: 4.494911384096143
- License:
- Abstract: Collaborative learning enables multiple participants to learn a single global model by exchanging focused updates instead of sharing data. One of the core challenges in collaborative learning is ensuring that participants are rewarded fairly for their contributions, which entails two key sub-problems: contribution assessment and reward allocation. This work focuses on fair reward allocation, where the participants are incentivized through model rewards - differentiated final models whose performance is commensurate with the contribution. In this work, we leverage the concept of slimmable neural networks to collaboratively learn a shared global model whose performance degrades gracefully with a reduction in model width. We also propose a post-training fair allocation algorithm that determines the model width for each participant based on their contributions. We theoretically study the convergence of our proposed approach and empirically validate it using extensive experiments on different datasets and architectures. We also extend our approach to enable training-time model reward allocation.
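The paper's exact slimmable architecture and allocation algorithm are not reproduced in this summary, so the Python sketch below only illustrates the core idea: a width-switchable ("slimmable") layer plus a hypothetical rule that maps normalized contribution scores to discrete width multipliers. All identifiers here (SlimmableLinear, allocate_widths, the width levels) are assumptions for illustration, not the paper's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Module):
    """Linear layer that can run at a fraction of its full output width.

    Sketch only: real slimmable networks (e.g. with switchable batch norm)
    keep per-width statistics and train all widths jointly.
    """
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor, width_mult: float = 1.0) -> torch.Tensor:
        # Use only the first `width_mult` fraction of the output units.
        out = max(1, int(self.weight.shape[0] * width_mult))
        return F.linear(x, self.weight[:out], self.bias[:out])


def allocate_widths(contributions, width_levels=(0.25, 0.5, 0.75, 1.0)):
    """Hypothetical post-training allocation: map contribution scores to widths.

    The top contributor receives the full-width model; every other participant
    gets the largest width level not exceeding their contribution ratio.
    """
    top = max(contributions)
    alloc = []
    for c in contributions:
        ratio = c / top if top > 0 else 1.0
        eligible = [w for w in width_levels if w <= ratio] or [width_levels[0]]
        alloc.append(max(eligible))
    return alloc


# Example: three participants with unequal contribution scores.
print(allocate_widths([0.9, 0.6, 0.2]))  # -> [1.0, 0.5, 0.25]
```

Under this kind of scheme every participant still benefits from the jointly trained weights, but lower contributors receive a narrower slice whose accuracy degrades gracefully, which is the sense in which the final models are "commensurate with the contribution".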
Related papers
- Redefining Contributions: Shapley-Driven Federated Learning [3.9539878659683363]
Federated learning (FL) has emerged as a pivotal approach in machine learning.
It is challenging to ensure global model convergence when participants do not contribute equally and/or honestly.
This paper proposes a novel contribution assessment method called ShapFed for fine-grained evaluation of participant contributions in FL.
arXiv Detail & Related papers (2024-06-01T22:40:31Z)
- FedSAC: Dynamic Submodel Allocation for Collaborative Fairness in Federated Learning [46.30755524556465]
We present FedSAC, a novel Federated learning framework with dynamic Submodel Allocation for Collaborative fairness.
We develop a submodel allocation module with a theoretical guarantee of fairness.
Experiments conducted on three public benchmarks demonstrate that FedSAC outperforms all baseline methods in both fairness and model accuracy.
arXiv Detail & Related papers (2024-05-28T15:43:29Z)
- Secrets of RLHF in Large Language Models Part II: Reward Modeling [134.97964938009588]
We introduce a series of novel methods to mitigate the influence of incorrect and ambiguous preferences in the dataset.
We also introduce contrastive learning to enhance the ability of reward models to distinguish between chosen and rejected responses.
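The summary does not spell out the training objective; the sketch below shows the standard pairwise (Bradley-Terry style) reward-modeling loss that such contrastive refinements typically build on. Here reward_model is assumed to map a tokenized response to a scalar score, and the paper's specific contrastive and data-cleaning additions are omitted.

```python
import torch.nn.functional as F

def pairwise_reward_loss(reward_model, chosen_input, rejected_input):
    """Standard Bradley-Terry style reward-modeling objective (sketch only)."""
    r_chosen = reward_model(chosen_input)      # scalar score per example
    r_rejected = reward_model(rejected_input)  # scalar score per example
    # Maximize the margin by which chosen responses out-score rejected ones.
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```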
arXiv Detail & Related papers (2024-01-11T17:56:59Z)
- Fair yet Asymptotically Equal Collaborative Learning [32.588043205577435]
In collaborative learning with streaming data, nodes jointly and continuously learn a machine learning (ML) model by sharing the latest model updates computed from their latest streaming data.
This paper explores an incentive design that guarantees fairness so that nodes receive rewards commensurate to their contributions.
We empirically demonstrate in two settings with real-world streaming data, that our proposed approach outperforms existing baselines in fairness and learning performance while remaining competitive in preserving equality.
arXiv Detail & Related papers (2023-06-09T08:57:14Z)
- Incentivizing Honesty among Competitors in Collaborative Learning and Optimization [4.999814847776097]
Collaborative learning techniques have the potential to enable machine learning models that are superior to models trained on a single entity's data.
In many cases, potential participants in such collaborative schemes are competitors on a downstream task.
arXiv Detail & Related papers (2023-05-25T17:28:41Z)
- Model-Contrastive Federated Learning [92.9075661456444]
Federated learning enables multiple parties to collaboratively train a machine learning model without communicating their local data.
We propose MOON: model-contrastive federated learning.
Our experiments show that MOON significantly outperforms the other state-of-the-art federated learning algorithms on various image classification tasks.
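MOON's central ingredient is a model-contrastive term that pulls the local model's representation of a batch toward the global model's representation and away from the previous-round local model's. A minimal sketch of that term, assuming cosine similarity and a temperature hyperparameter, follows; how the representations z are extracted from the networks is abstracted away.

```python
import torch
import torch.nn.functional as F

def model_contrastive_loss(z_local, z_global, z_prev, temperature=0.5):
    """Model-contrastive term in the spirit of MOON (sketch only).

    z_local:  batch representations from the current local model
    z_global: representations of the same batch from the global model (positive)
    z_prev:   representations from the previous-round local model (negative)
    """
    pos = F.cosine_similarity(z_local, z_global, dim=-1) / temperature
    neg = F.cosine_similarity(z_local, z_prev, dim=-1) / temperature
    # Cross-entropy over (positive, negative) logits with the positive as target,
    # i.e. -log( exp(pos) / (exp(pos) + exp(neg)) ).
    logits = torch.stack([pos, neg], dim=-1)
    labels = torch.zeros(logits.shape[0], dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

In MOON this term is added, with a weighting coefficient, to the usual supervised loss during each client's local update.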
arXiv Detail & Related papers (2021-03-30T11:16:57Z)
- Collaborative Fairness in Federated Learning [24.7378023761443]
We propose a novel Collaborative Fair Federated Learning (CFFL) framework for deep learning.
CFFL forces participants to converge to different models, thus achieving fairness without compromising predictive performance.
Experiments on benchmark datasets demonstrate that CFFL achieves high fairness and delivers comparable accuracy to the Distributed framework.
arXiv Detail & Related papers (2020-08-27T14:39:09Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
- Task-Feature Collaborative Learning with Application to Personalized Attribute Prediction [166.87111665908333]
We propose a novel multi-task learning method called Task-Feature Collaborative Learning (TFCL).
Specifically, we first propose a base model with a heterogeneous block-diagonal structure regularizer to leverage the collaborative grouping of features and tasks.
As a practical extension, we extend the base model by allowing overlapping features and differentiating the hard tasks.
arXiv Detail & Related papers (2020-04-29T02:32:04Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
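One plausible reading of "making predictions jointly with the server-side shared model" is an additive residual scheme, sketched below under that assumption; the paper's actual formulation may differ, and both module names here are placeholders.

```python
import torch.nn as nn

class ResidualFederatedPredictor(nn.Module):
    """Joint prediction: a shared global model plus a personalized local residual.

    Illustrative assumption only: the client's local model corrects the shared
    model's output rather than replacing it.
    """
    def __init__(self, shared_model: nn.Module, local_model: nn.Module):
        super().__init__()
        self.shared_model = shared_model  # trained jointly across clients
        self.local_model = local_model    # trained only on this client's data

    def forward(self, x):
        return self.shared_model(x) + self.local_model(x)
```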
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.