WallStreetFeds: Client-Specific Tokens as Investment Vehicles in Federated Learning
- URL: http://arxiv.org/abs/2506.20518v1
- Date: Wed, 25 Jun 2025 15:05:01 GMT
- Title: WallStreetFeds: Client-Specific Tokens as Investment Vehicles in Federated Learning
- Authors: Arno Geimer, Beltran Fiz Pontiveros, Radu State
- Abstract summary: Federated Learning (FL) is a collaborative machine learning paradigm which allows participants to collectively train a model while training data remains private. In this paper, we propose a novel framework which introduces client-specific tokens as investment vehicles within the FL ecosystem.
- Score: 1.827018440608344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a collaborative machine learning paradigm that allows participants to collectively train a model while training data remains private. This paradigm is especially beneficial for sectors like finance, where data privacy, security and model performance are paramount. FL has been extensively studied in the years following its introduction, leading to, among other advances, better-performing collaboration techniques, ways to defend against other clients trying to attack the model, and contribution assessment methods. An important element in for-profit Federated Learning is the development of incentive methods to determine the allocation and distribution of rewards for participants. While numerous methods for allocation have been proposed and thoroughly explored, distribution frameworks remain relatively understudied. In this paper, we propose a novel framework which introduces client-specific tokens as investment vehicles within the FL ecosystem. Our framework aims to address the limitations of existing incentive schemes by leveraging a decentralized finance (DeFi) platform and automated market makers (AMMs) to create a more flexible and scalable reward distribution system for participants, and a mechanism for third parties to invest in the federated learning process.
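The AMM component can be illustrated with a constant-product pool pairing a client-specific token against a reserve currency. The class below is a minimal sketch under that assumption, not the paper's actual mechanism; trading fees, slippage bounds, and liquidity provision are omitted.

```python
class ConstantProductAMM:
    """Toy x*y=k pool: a client token traded against a reserve currency.
    Illustrative only; fees and slippage protection are omitted."""

    def __init__(self, token_reserve: float, cash_reserve: float):
        self.token = token_reserve
        self.cash = cash_reserve

    def price(self) -> float:
        """Marginal price of one token in reserve-currency units."""
        return self.cash / self.token

    def buy_tokens(self, cash_in: float) -> float:
        """Swap cash for tokens, preserving the invariant token*cash = k."""
        k = self.token * self.cash
        self.cash += cash_in
        new_token = k / self.cash      # reserves implied by the invariant
        out = self.token - new_token   # tokens released to the buyer
        self.token = new_token
        return out
```

Buying tokens raises their price automatically, which is how third-party demand for a client's token would translate into a market signal about that client's perceived contribution.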
Related papers
- Incentivizing Inclusive Contributions in Model Sharing Markets [47.66231950174746]
This paper proposes inclusive and incentivized personalized federated learning (iPFL). iPFL incentivizes data holders with diverse purposes to collaboratively train personalized models without revealing raw data. Empirical studies on eleven AI tasks demonstrate that iPFL consistently achieves the highest economic utility.
arXiv Detail & Related papers (2025-05-05T08:45:26Z) - Blockchain-based Framework for Scalable and Incentivized Federated Learning [0.820828081284034]
Federated Learning (FL) enables collaborative model training without sharing raw data, preserving privacy while harnessing distributed datasets. Traditional FL systems often rely on centralized aggregating mechanisms, introducing trust issues, single points of failure, and limited mechanisms for incentivizing meaningful client contributions. This paper presents a blockchain-based FL framework that addresses these limitations by integrating smart contracts and a novel hybrid incentive mechanism.
arXiv Detail & Related papers (2025-02-20T00:38:35Z) - On the Volatility of Shapley-Based Contribution Metrics in Federated Learning [1.827018440608344]
Federated learning (FL) is a collaborative and privacy-preserving machine learning paradigm. Inaccurate allocation of contributions can undermine trust and lead to unfair compensation, leaving participants with little incentive to join or actively contribute to the federation. We provide an extensive analysis of the discrepancies of Shapley values across a set of aggregation strategies and examine them on an overall and a per-client level.
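Shapley-based contribution assessment averages each client's marginal utility over all orders in which clients could join the coalition. A brief exact computation (exponential in the number of clients, so feasible only for small federations; the `utility` function is whatever coalition-evaluation metric the federation uses) might look like:

```python
from itertools import combinations
from math import factorial

def shapley_values(clients, utility):
    """Exact Shapley values: each client's weighted average marginal
    contribution over all coalitions of the other clients."""
    n = len(clients)
    values = {c: 0.0 for c in clients}
    for c in clients:
        others = [x for x in clients if x != c]
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Weight = (number of join orders placing c right after s) / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[c] += weight * (utility(s | {c}) - utility(s))
    return values
```

For an additive utility the Shapley value recovers each client's individual contribution exactly; the volatility the paper studies arises because model-accuracy utilities are far from additive.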
arXiv Detail & Related papers (2024-05-13T13:55:34Z) - An Auction-based Marketplace for Model Trading in Federated Learning [54.79736037670377]
Federated learning (FL) is increasingly recognized for its efficacy in training models using locally distributed data.
We frame FL as a marketplace of models, where clients act as both buyers and sellers.
We propose an auction-based solution to ensure proper pricing based on performance gain.
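As one concrete pricing rule (an illustrative choice, not necessarily this paper's design), a sealed-bid second-price auction awards the model to the highest bidder at the second-highest bid, which makes truthful bidding a dominant strategy:

```python
def second_price_auction(bids: dict) -> tuple:
    """Sealed-bid second-price (Vickrey) auction: the highest bidder wins
    and pays the second-highest bid. Returns (winner, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price
```

Because the winner's payment does not depend on their own bid, clients buying models have no incentive to misreport the performance gain they expect.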
arXiv Detail & Related papers (2024-02-02T07:25:53Z) - Incentive Allocation in Vertical Federated Learning Based on Bankruptcy Problem [0.0]
Vertical federated learning (VFL) is a promising approach for collaboratively training machine learning models. In this paper, we focus on the problem of allocating incentives to the passive parties by the active party. Using the Talmudic division rule, which leads to the Nucleolus, we ensure a fair distribution of incentives.
arXiv Detail & Related papers (2023-07-07T11:08:18Z) - Welfare and Fairness Dynamics in Federated Learning: A Client Selection Perspective [1.749935196721634]
Federated learning (FL) is a privacy-preserving learning technique that enables distributed computing devices to train shared learning models.
The economic considerations of the clients, such as fairness and incentive, are yet to be fully explored.
We propose a novel incentive mechanism that involves a client selection process to remove low-quality clients and a money transfer process to ensure a fair reward distribution.
arXiv Detail & Related papers (2023-02-17T16:31:19Z) - Adaptive incentive for cross-silo federated learning: A multi-agent reinforcement learning approach [12.596779831510508]
Cross-silo federated learning (FL) enables organizations to train global models on isolated data.
We propose a novel adaptive mechanism for cross-silo FL, towards incentivizing organizations to contribute data.
arXiv Detail & Related papers (2023-02-15T06:45:35Z) - FedToken: Tokenized Incentives for Data Contribution in Federated Learning [33.93936816356012]
We propose a contribution-based tokenized incentive scheme, namely FedToken, backed by blockchain technology.
We first approximate the contribution of local models during model aggregation, then strategically schedule clients to reduce the communication rounds needed for convergence.
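The abstract does not spell out FedToken's exact approximation; one common proxy for update-time contribution scoring (a labeled assumption here, not the paper's method) is the cosine alignment of each client's update vector with the aggregate, clipped at zero and normalized:

```python
from math import sqrt

def cosine_contributions(local_updates: list, global_update: list) -> list:
    """Hypothetical contribution proxy: cosine alignment of each client's
    update vector with the aggregated update, clipped at zero, normalized
    to sum to one."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    g_norm = sqrt(dot(global_update, global_update))
    scores = []
    for u in local_updates:
        denom = sqrt(dot(u, u)) * g_norm
        # Updates pointing away from the aggregate contribute nothing.
        scores.append(max(0.0, dot(u, global_update) / denom) if denom else 0.0)
    total = sum(scores)
    if total == 0.0:
        return [1.0 / len(scores)] * len(scores)
    return [s / total for s in scores]
```

Such per-round scores could then back token minting in proportion to each client's share.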
arXiv Detail & Related papers (2022-09-20T14:58:08Z) - Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z) - A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a data privacy-preserved machine learning paradigm, and realizes the collaborative model trained by distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients due to the fact that the task is privately trained by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z) - An Incentive Mechanism for Federated Learning in Wireless Cellular Network: An Auction Approach [75.08185720590748]
Federated Learning (FL) is a distributed learning framework that can handle distributed data in machine learning. In this paper, we consider an FL system that involves one base station (BS) and multiple mobile users.
We formulate the incentive mechanism between the BS and mobile users as an auction game where the BS is an auctioneer and the mobile users are the sellers.
arXiv Detail & Related papers (2020-09-22T01:50:39Z) - LotteryFL: Personalized and Communication-Efficient Federated Learning with Lottery Ticket Hypothesis on Non-IID Datasets [52.60094373289771]
Federated learning is a popular distributed machine learning paradigm with enhanced privacy.
We propose LotteryFL -- a personalized and communication-efficient federated learning framework.
We show that LotteryFL significantly outperforms existing solutions in terms of personalization and communication cost.
arXiv Detail & Related papers (2020-08-07T20:45:12Z)