FRIDA: Free-Rider Detection using Privacy Attacks
- URL: http://arxiv.org/abs/2410.05020v2
- Date: Fri, 19 Sep 2025 07:02:15 GMT
- Title: FRIDA: Free-Rider Detection using Privacy Attacks
- Authors: Pol G. Recasens, Ádám Horváth, Alberto Gutierrez-Torre, Jordi Torres, Josep Ll. Berral, Balázs Pejó
- Abstract summary: Federated learning enables multiple parties to train a machine learning model collaboratively. Free-riders compromise the integrity of the learning process and slow down the convergence of the global model. We propose FRIDA: free-rider detection using privacy attacks.
- Score: 1.1269336981919518
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is increasingly popular as it enables multiple parties with limited datasets and resources to train a machine learning model collaboratively. However, similar to other collaborative systems, federated learning is vulnerable to free-riders - participants who benefit from the global model without contributing. Free-riders compromise the integrity of the learning process and slow down the convergence of the global model, resulting in increased costs for honest participants. To address this challenge, we propose FRIDA: free-rider detection using privacy attacks. Instead of focusing on implicit effects of free-riding, FRIDA utilizes membership and property inference attacks to directly infer evidence of genuine client training. Our extensive evaluation demonstrates that FRIDA is effective across a wide range of scenarios.
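The core intuition behind inference-based detection can be sketched with a simple loss-threshold membership check: a client that genuinely trained on its claimed data will fit that data noticeably better than a free-rider who returns an untrained model. The toy linear model and function names below are hypothetical illustrations, not FRIDA's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def per_sample_loss(w, X, y):
    # per-sample logistic loss of a linear model (a stand-in for the
    # client's submitted update)
    p = 1 / (1 + np.exp(-(X @ w)))
    return -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def membership_score(w, X, y):
    # lower mean loss on the claimed training data suggests the model
    # was actually trained on it (threshold-based membership inference)
    return per_sample_loss(w, X, y).mean()

# toy dataset labeled by a hidden linear rule
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = (X @ w_true > 0).astype(float)

# honest client: a few crude gradient-descent steps on its own data
w_honest = np.zeros(5)
for _ in range(200):
    p = 1 / (1 + np.exp(-(X @ w_honest)))
    w_honest -= 0.5 * X.T @ (p - y) / len(y)

# free-rider: returns the initial (global) model untouched
w_freerider = np.zeros(5)

honest_score = membership_score(w_honest, X, y)
freerider_score = membership_score(w_freerider, X, y)
print(honest_score < freerider_score)  # honest client fits its data better
```

The detector only needs the submitted parameters and the claimed data distribution; no ground-truth labels about honesty are required.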
Related papers
- BadFU: Backdoor Federated Learning through Adversarial Machine Unlearning [7.329446721934861]
Federated learning (FL) has been widely adopted as a decentralized training paradigm. In this paper, we present the first backdoor attack in the context of federated unlearning.
arXiv Detail & Related papers (2025-08-21T13:17:01Z)
- Federated Testing (FedTest): A New Scheme to Enhance Convergence and Mitigate Adversarial Attacks in Federated Learning [35.14491996649841]
We introduce a novel federated learning framework, which we call federated testing for federated learning (FedTest).
In FedTest, the local data of a specific user is used to train the model of that user and test the models of the other users.
Our numerical results reveal that the proposed method not only accelerates convergence rates but also diminishes the potential influence of malicious users.
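The cross-testing scheme can be sketched as a score matrix: each user evaluates every other user's model on its own local data, and a model that scores poorly across all users' data stands out. The setup below (user count, noise levels, the linear toy models) is hypothetical, not FedTest's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(1)

def accuracy(w, X, y):
    # accuracy of a linear classifier sign(X @ w) against labels y
    return float((((X @ w) > 0).astype(float) == y).mean())

d, n_users = 5, 4
w_ref = rng.normal(size=d)  # hypothetical "good" solution

# each user's local data, labeled by the reference rule
data = []
for _ in range(n_users):
    X = rng.normal(size=(50, d))
    data.append((X, (X @ w_ref > 0).astype(float)))

# three honest users submit models near the good solution;
# user 3 submits an unrelated (malicious or untrained) model
models = [w_ref + 0.05 * rng.normal(size=d) for _ in range(3)]
models.append(rng.normal(size=d))

# cross-testing: scores[i][j] = accuracy of user j's model on user i's data
scores = np.array([[accuracy(w, X, y) for w in models] for X, y in data])
suspect = int(scores.mean(axis=0).argmin())
print(suspect)  # the consistently low-scoring user is flagged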
arXiv Detail & Related papers (2025-01-19T21:01:13Z)
- Celtibero: Robust Layered Aggregation for Federated Learning [0.0]
We introduce Celtibero, a novel defense mechanism that integrates layered aggregation to enhance robustness against adversarial manipulation.
We demonstrate that Celtibero consistently achieves high main task accuracy (MTA) while maintaining minimal attack success rates (ASR) across a range of untargeted and targeted poisoning attacks.
arXiv Detail & Related papers (2024-08-26T12:54:00Z)
- Guaranteeing Data Privacy in Federated Unlearning with Dynamic User Participation [21.07328631033828]
Federated Unlearning (FU) can eliminate influences of Federated Learning (FL) users' data from trained global FL models.
A straightforward FU method involves removing the unlearned users and subsequently retraining a new global FL model from scratch with all remaining users.
We propose a privacy-preserving FU framework, aimed at ensuring privacy while effectively managing dynamic user participation.
arXiv Detail & Related papers (2024-06-03T03:39:07Z)
- Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems.
This paper examines existing federated unlearning approaches, analyzing their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy.
We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- FLEDGE: Ledger-based Federated Learning Resilient to Inference and Backdoor Attacks [8.866045560761528]
Federated learning (FL) is a distributed learning process that allows multiple parties (or clients) to collaboratively train a machine learning model without having them share their private data.
Recent research has demonstrated the effectiveness of inference and poisoning attacks on FL.
We present a ledger-based FL framework known as FLEDGE that holds parties accountable for their behavior and achieves reasonable efficiency in mitigating inference and poisoning attacks.
arXiv Detail & Related papers (2023-10-03T14:55:30Z)
- Selective Knowledge Sharing for Privacy-Preserving Federated Distillation without A Good Teacher [52.2926020848095]
Federated learning is vulnerable to white-box attacks and struggles to adapt to heterogeneous clients.
This paper proposes a selective knowledge sharing mechanism for FD, termed Selective-FD.
arXiv Detail & Related papers (2023-04-04T12:04:19Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
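A simplified view of the relaxed-loss idea: train normally while the mean loss sits above a target level, and flip the objective's sign once it drops below, so member losses are not driven toward zero (near-zero member losses are exactly what membership inference exploits). The function name and the scalar formulation below are a hypothetical sketch, not the paper's exact training procedure:

```python
import numpy as np

def relaxed_objective(per_sample_loss, alpha):
    # descend while the mean loss is above the target alpha;
    # switch to ascent (negated loss) once it falls below, so
    # optimization hovers around alpha instead of memorizing
    mean = float(per_sample_loss.mean())
    return mean if mean > alpha else -mean

# above the target: ordinary loss, gradient descent proceeds
print(relaxed_objective(np.array([1.0, 3.0]), alpha=0.5))  # 2.0
# below the target: sign flips, pushing the loss back up toward alpha
print(relaxed_objective(np.array([0.2, 0.2]), alpha=0.5))  # -0.2
```

Because the switch depends only on the current loss level, it adds negligible overhead to any classification training loop, matching the "easy implementation" claim.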
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Ensemble Federated Adversarial Training with Non-IID data [1.5878082907673585]
Adversarial samples can confuse and mislead client models for malicious purposes.
We introduce a novel Ensemble Federated Adversarial Training Method, termed as EFAT.
Our proposed method achieves promising results compared with solely combining federated learning with adversarial approaches.
arXiv Detail & Related papers (2021-10-26T03:55:20Z)
- Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning [98.05061014090913]
Federated learning (FL) emerges as a popular distributed learning schema that learns from a set of participating users without requiring raw data to be shared.
While adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users has imposed significant challenges.
We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users.
We propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics.
arXiv Detail & Related papers (2021-06-18T15:52:33Z)
- Privacy-Preserving Federated Learning on Partitioned Attributes [6.661716208346423]
Federated learning empowers collaborative training without exposing local data or models.
We introduce an adversarial learning based procedure which tunes a local model to release privacy-preserving intermediate representations.
To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm.
arXiv Detail & Related papers (2021-04-29T14:49:14Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Towards Causal Federated Learning For Enhanced Robustness and Privacy [5.858642952428615]
Federated learning is an emerging privacy-preserving distributed machine learning approach.
Data samples across all participating clients are usually not independent and identically distributed.
In this paper, we propose an approach for learning invariant (causal) features common to all participating clients in a federated learning setup.
arXiv Detail & Related papers (2021-04-14T00:08:45Z)
- A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning [24.442595192268872]
Federated learning (FL) is an emerging practical framework for effective and scalable machine learning.
In conventional FL, all participants receive the global model (equal rewards), which might be unfair to the high-contributing participants.
We propose a novel RFFL framework to achieve collaborative fairness and adversarial robustness simultaneously via a reputation mechanism.
arXiv Detail & Related papers (2020-11-20T15:52:45Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
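The label-only setting can be sketched as follows: query the victim on many noisy copies of a point and measure how stable the returned label is, since training members tend to sit further from the decision boundary. The linear victim, noise scale, and function names below are hypothetical illustrations, not the paper's exact attack:

```python
import numpy as np

rng = np.random.default_rng(2)

w = np.array([1.0, 0.0])  # victim: exposes only hard labels, no scores

def predict_label(x):
    return int(x @ w > 0)

def label_stability(x, n_queries=100, sigma=0.5):
    # repeated queries on noisy copies of x; the fraction agreeing
    # with the clean label is the membership signal
    base = predict_label(x)
    agree = sum(predict_label(x + sigma * rng.normal(size=x.shape)) == base
                for _ in range(n_queries))
    return agree / n_queries

member_like = label_stability(np.array([3.0, 0.0]))     # far from boundary
boundary_like = label_stability(np.array([0.05, 0.0]))  # near boundary
print(member_like > boundary_like)
```

The mentioned defenses target exactly this signal: gradient perturbation (differential privacy) during training blurs the boundary's dependence on members, and output perturbation at prediction time destabilizes the repeated-query labels.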
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Free-rider Attacks on Model Aggregation in Federated Learning [10.312968200748116]
We introduce the first theoretical and experimental analysis of free-rider attacks on federated learning schemes based on iterative parameter aggregation.
We provide formal guarantees for these attacks to converge to the aggregated models of the fair participants.
We conclude by providing recommendations to avoid free-rider attacks in real world applications of federated learning.
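The attack surface can be sketched in a few lines of FedAvg: a plain free-rider returns the global model unchanged, a disguised one adds small noise so its update is not an exact copy, and either way the aggregate is pulled toward the honest solution by only a fraction of the honest step. The specific noise scale and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def fedavg(updates):
    # plain FedAvg: coordinate-wise mean of client models
    return np.mean(updates, axis=0)

global_w = rng.normal(size=5)

# honest client: performs a real local training step
honest = global_w - 0.1 * rng.normal(size=5)       # stand-in gradient step
# plain free-rider: returns the global model unchanged
plain = global_w.copy()
# "disguised" free-rider: adds small noise so its update
# is not an exact copy of the global model
disguised = global_w + 1e-3 * rng.normal(size=5)

new_global = fedavg([honest, plain, disguised])
# the round still moves toward the honest update, but only by
# roughly 1/3 of the honest step: convergence slows, it does not stop
progress = np.linalg.norm(new_global - global_w)
honest_step = np.linalg.norm(honest - global_w)
print(progress < honest_step)
```

This matches the abstract's convergence claim: aggregation still converges to the fair participants' model, just at a cost borne by the honest clients.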
arXiv Detail & Related papers (2020-06-21T20:20:38Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.