MURIM: Multidimensional Reputation-based Incentive Mechanism for Federated Learning
- URL: http://arxiv.org/abs/2512.13955v1
- Date: Mon, 15 Dec 2025 23:18:32 GMT
- Title: MURIM: Multidimensional Reputation-based Incentive Mechanism for Federated Learning
- Authors: Sindhuja Madabushi, Dawood Wasif, Jin-Hee Cho
- Abstract summary: Federated Learning (FL) has emerged as a leading privacy-preserving machine learning paradigm. FL continues to face key challenges, including weak client incentives, privacy risks, and resource constraints. We propose MURIM, a MUlti-dimensional Reputation-based Incentive Mechanism that jointly considers client reliability, privacy, resource capacity, and fairness.
- Score: 3.8054072718666574
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has emerged as a leading privacy-preserving machine learning paradigm, enabling participants to share model updates instead of raw data. However, FL continues to face key challenges, including weak client incentives, privacy risks, and resource constraints. Assessing client reliability is essential for fair incentive allocation and ensuring that each client's data contributes meaningfully to the global model. To this end, we propose MURIM, a MUlti-dimensional Reputation-based Incentive Mechanism that jointly considers client reliability, privacy, resource capacity, and fairness while preventing malicious or unreliable clients from earning undeserved rewards. MURIM allocates incentives based on client contribution, latency, and reputation, supported by a reliability verification module. Extensive experiments on MNIST, FMNIST, and ADULT Income datasets demonstrate that MURIM achieves up to 18% improvement in fairness metrics, reduces privacy attack success rates by 5-9%, and improves robustness against poisoning and noisy-gradient attacks by up to 85% compared to state-of-the-art baselines. Overall, MURIM effectively mitigates adversarial threats, promotes fair and truthful participation, and preserves stable model convergence across heterogeneous and dynamic federated settings.
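The incentive rule described in the abstract (rewards allocated by contribution, latency, and reputation, gated by a reliability check) can be sketched as follows. This is a minimal illustration in the spirit of MURIM, not its actual formulation: the weights, the reputation floor, and the scoring functions are all assumptions.

```python
# Hypothetical sketch of a multidimensional reputation-weighted payout.
# Weights, the reputation floor, and the latency normalization are
# illustrative assumptions, not MURIM's published mechanism.

def murim_style_payout(clients, budget, w_contrib=0.5, w_latency=0.2, w_rep=0.3):
    """Split `budget` among clients by a weighted composite score.

    Each client is a dict with:
      contribution: marginal utility of its update, in [0, 1]
      latency:      round-trip time in seconds (lower is better)
      reputation:   historical reliability, in [0, 1]
    Clients below a reputation floor earn nothing (unreliable-client gating).
    """
    max_lat = max(c["latency"] for c in clients) or 1.0
    scores = {}
    for c in clients:
        if c["reputation"] < 0.2:  # reliability-verification gate (assumed threshold)
            scores[c["id"]] = 0.0
            continue
        lat_score = 1.0 - c["latency"] / max_lat  # faster clients score higher
        scores[c["id"]] = (w_contrib * c["contribution"]
                           + w_latency * lat_score
                           + w_rep * c["reputation"])
    total = sum(scores.values())
    return {cid: budget * s / total if total else 0.0 for cid, s in scores.items()}
```

Gating on reputation before computing the payout is what prevents a malicious client from earning rewards merely by submitting fast, low-latency updates.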
Related papers
- Stragglers Can Contribute More: Uncertainty-Aware Distillation for Asynchronous Federated Learning [61.249748418757946]
Asynchronous federated learning (FL) has recently gained attention for its enhanced efficiency and scalability. We propose FedEcho, a novel framework that incorporates uncertainty-aware distillation to enhance asynchronous FL performance. We demonstrate that FedEcho consistently outperforms existing asynchronous federated learning baselines.
arXiv Detail & Related papers (2025-11-25T06:25:25Z) - FLARE: Adaptive Multi-Dimensional Reputation for Robust Client Reliability in Federated Learning [0.6524460254566904]
Federated learning (FL) enables collaborative model training while preserving data privacy. It remains vulnerable to malicious clients who compromise model integrity through Byzantine attacks, data poisoning, or adaptive adversarial behaviors. We propose FLARE, an adaptive reputation-based framework that transforms client reliability assessment from binary decisions to a continuous, multi-dimensional trust evaluation.
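The continuous, multi-dimensional trust evaluation that FLARE's summary describes can be sketched as an exponential moving average over several per-round observations. The dimension names, their mapping to [0, 1], and the decay factor below are all illustrative assumptions, not FLARE's actual design.

```python
# Illustrative sketch of a continuous, multi-dimensional trust score in the
# spirit of FLARE; dimension names, score mappings, and the EMA decay factor
# are assumptions for demonstration only.

def update_trust(prev_trust, accuracy_delta, update_norm_ratio, participation,
                 decay=0.8):
    """Blend a new multi-dimensional observation into a running trust score.

    accuracy_delta:    change in validation accuracy after including the client
    update_norm_ratio: client update norm / cohort median norm (near 1.0 is normal)
    participation:     fraction of recent rounds the client responded in
    Returns a trust value in [0, 1]; binary accept/reject decisions are
    replaced by thresholding or weighting on this continuous score.
    """
    # Penalize update norms far from the cohort median (possible poisoning).
    norm_score = max(0.0, 1.0 - abs(update_norm_ratio - 1.0))
    # Map an accuracy delta into [0, 1], with 0.5 as the neutral point.
    acc_score = min(1.0, max(0.0, 0.5 + accuracy_delta))
    observation = (acc_score + norm_score + participation) / 3.0
    return decay * prev_trust + (1.0 - decay) * observation
```

Because the score is continuous, a client with one anomalous round loses some trust rather than being permanently ejected, while a persistently adversarial client decays toward zero.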
arXiv Detail & Related papers (2025-11-18T17:57:40Z) - Information-Theoretic Reward Modeling for Stable RLHF: Detecting and Mitigating Reward Hacking [78.69179041551014]
We propose an information-theoretic reward modeling framework, InfoRM, based on the Information Bottleneck principle. We show that InfoRM filters out preference-irrelevant information to alleviate reward misgeneralization. We also introduce IBL, a distribution-level regularization that penalizes such deviations, effectively expanding the optimization landscape.
arXiv Detail & Related papers (2025-10-15T15:51:59Z) - CoSIFL: Collaborative Secure and Incentivized Federated Learning with Differential Privacy [1.1266158555540042]
CoSIFL is a framework that integrates proactive alarming for robust security and local differential privacy. A Tullock contest-inspired incentive module rewards honest clients for both data contributions and reliable alarm triggers. We prove that the server-client game admits a unique equilibrium, and analyze how clients' multi-dimensional attributes, such as non-IID degrees and privacy budgets, jointly affect system efficiency.
arXiv Detail & Related papers (2025-09-27T08:45:40Z) - Understanding and Benchmarking the Trustworthiness in Multimodal LLMs for Video Understanding [59.50808215134678]
This study introduces Trust-videoLLMs, the first comprehensive benchmark evaluating 23 state-of-the-art videoLLMs. Results reveal significant limitations in dynamic scene comprehension, cross-modal resilience, and real-world risk mitigation.
arXiv Detail & Related papers (2025-06-14T04:04:54Z) - Addressing Data Quality Decompensation in Federated Learning via Dynamic Client Selection [7.603415982653868]
Shapley-Bid Reputation Optimized Federated Learning (SBRO-FL) is a unified framework integrating dynamic bidding, reputation modeling, and cost-aware selection. A reputation system, inspired by prospect theory, captures historical performance while penalizing inconsistency. Experiments on FashionMNIST, EMNIST, CIFAR-10, and SVHN datasets show that SBRO-FL improves accuracy, convergence speed, and robustness, even in adversarial and low-bid interference scenarios.
arXiv Detail & Related papers (2025-05-27T14:06:51Z) - Mitigating Membership Inference Vulnerability in Personalized Federated Learning [6.260747047974035]
Federated Learning (FL) has emerged as a promising paradigm for collaborative model training without the need to share clients' personal data. We introduce IFCA-MIR, an improved version of IFCA that integrates MIA risk assessment into the clustering process. We demonstrate that IFCA-MIR significantly reduces MIA risk while maintaining model accuracy and fairness comparable to the original IFCA.
arXiv Detail & Related papers (2025-03-12T14:10:35Z) - FinP: Fairness-in-Privacy in Federated Learning by Addressing Disparities in Privacy Risk [2.840505903487544]
FinP is a novel framework specifically designed to address disparities in privacy risk. It mitigates disproportionate vulnerability to source inference attacks (SIA). It achieves improvement in fairness-in-privacy with minimal impact on utility.
arXiv Detail & Related papers (2025-02-25T00:56:47Z) - MetaTrading: An Immersion-Aware Model Trading Framework for Vehicular Metaverse Services [92.40586697273868]
Timely updating of Internet of Things data is crucial for achieving immersion in vehicular metaverse services. We propose an immersion-aware model trading framework that enables efficient and privacy-preserving data provisioning through federated learning. Experimental results show that the proposed framework outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2024-10-25T16:20:46Z) - Certifiably Byzantine-Robust Federated Conformal Prediction [49.23374238798428]
We introduce a novel framework Rob-FCP, which executes robust federated conformal prediction effectively countering malicious clients.
We empirically demonstrate the robustness of Rob-FCP against diverse proportions of malicious clients under a variety of Byzantine attacks.
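Rob-FCP hardens federated conformal prediction against Byzantine calibration statistics. The non-robust baseline it builds on can be sketched as pooling clients' nonconformity scores and taking the standard split-conformal quantile; the function name below is illustrative, and Rob-FCP's actual contribution is the robust aggregation layered on top of this step.

```python
import math

# Minimal sketch of federated split-conformal calibration (the non-robust
# baseline that Rob-FCP hardens); the function name is illustrative.

def federated_conformal_quantile(client_scores, alpha=0.1):
    """Pool nonconformity scores reported by clients and return the
    split-conformal threshold at miscoverage level alpha.

    client_scores: list of per-client lists of calibration scores.
    A malicious client can inflate or deflate this threshold by reporting
    fake scores, which is the vulnerability Rob-FCP's Byzantine-robust
    aggregation targets.
    """
    pooled = sorted(s for scores in client_scores for s in scores)
    n = len(pooled)
    rank = math.ceil((n + 1) * (1 - alpha))  # standard conformal rank
    rank = min(rank, n)                      # guard the small-sample edge case
    return pooled[rank - 1]
```

At prediction time, any label whose nonconformity score falls below this threshold enters the prediction set, which is why a single poisoned calibration set can silently shrink or bloat every client's sets.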
arXiv Detail & Related papers (2024-06-04T04:43:30Z) - G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
arXiv Detail & Related papers (2023-06-08T07:15:04Z) - FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients). We propose a trigger reverse-engineering based defense and show that our method achieves improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.