Random Client Selection on Contrastive Federated Learning for Tabular Data
- URL: http://arxiv.org/abs/2505.10759v1
- Date: Fri, 16 May 2025 00:20:02 GMT
- Title: Random Client Selection on Contrastive Federated Learning for Tabular Data
- Authors: Achmad Ginanjar, Xue Li, Priyanka Singh, Wen Hua
- Abstract summary: Vertical Federated Learning (VFL) has revolutionised collaborative machine learning by enabling privacy-preserving model training across multiple parties. However, it remains vulnerable to information leakage during intermediate computation sharing. Contrastive Federated Learning (CFL) was introduced to mitigate these privacy concerns through representation learning. This paper presents a comprehensive experimental analysis of gradient-based attacks in CFL environments.
- Score: 11.930322590346139
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vertical Federated Learning (VFL) has revolutionised collaborative machine learning by enabling privacy-preserving model training across multiple parties. However, it remains vulnerable to information leakage during intermediate computation sharing. While Contrastive Federated Learning (CFL) was introduced to mitigate these privacy concerns through representation learning, it still faces challenges from gradient-based attacks. This paper presents a comprehensive experimental analysis of gradient-based attacks in CFL environments and evaluates random client selection as a defensive strategy. Through extensive experimentation, we demonstrate that random client selection proves particularly effective in defending against gradient attacks in the CFL network. Our findings provide valuable insights for implementing robust security measures in contrastive federated learning systems, contributing to the development of more secure collaborative learning frameworks.
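The defensive strategy evaluated in the abstract, random client selection, can be illustrated with a minimal sketch. The function name, parameters, and round loop below are illustrative assumptions, not the authors' implementation: per round, only a random subset of clients contributes intermediate representations, so an observer of any single round sees updates from an unpredictable subset of parties.

```python
import random

def select_clients(client_ids, fraction, seed=None):
    """Randomly sample a subset of clients for one training round.

    Hypothetical helper sketching the random-client-selection defence
    discussed in the paper; not the authors' actual implementation.
    """
    rng = random.Random(seed)
    k = max(1, int(len(client_ids) * fraction))  # at least one client per round
    return rng.sample(client_ids, k)

# Example: 10 parties, 30% participate per round. An attacker cannot
# predict which parties' gradients appear in a given round.
clients = list(range(10))
participants = select_clients(clients, fraction=0.3, seed=42)
print(len(participants))  # 3 clients selected this round
```

Varying the participating subset each round limits how much gradient information an adversary can accumulate about any single party, at the cost of slower per-party convergence.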
Related papers
- Robust Asymmetric Heterogeneous Federated Learning with Corrupted Clients [60.22876915395139]
This paper studies a challenging robust federated learning task with model heterogeneous and data corrupted clients. Data corruption is unavoidable due to factors such as random noise, compression artifacts, or environmental conditions in real-world deployment. We propose a novel Robust Asymmetric Heterogeneous Federated Learning framework to address these issues.
arXiv Detail & Related papers (2025-03-12T09:52:04Z)
- Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning [83.90283731845867]
We consider feature reconstruction attacks, a common risk targeting input data compromise. We show that Federated-based models are resistant to state-of-the-art feature reconstruction attacks.
arXiv Detail & Related papers (2024-12-16T12:02:12Z)
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm. Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage. This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- UIFV: Data Reconstruction Attack in Vertical Federated Learning [5.404398887781436]
Vertical Federated Learning (VFL) facilitates collaborative machine learning without the need for participants to share raw private data. Recent studies have revealed privacy risks where adversaries might reconstruct sensitive features through data leakage during the learning process. Our work exposes severe privacy vulnerabilities within VFL systems that pose real threats to practical VFL applications.
arXiv Detail & Related papers (2024-06-18T13:18:52Z)
- Secure Vertical Federated Learning Under Unreliable Connectivity [22.03946356498099]
We present vFedSec, the first dropout-tolerant VFL protocol.
It achieves secure and efficient model training by using an innovative Secure Layer alongside an embedding-padding technique.
arXiv Detail & Related papers (2023-05-26T10:17:36Z)
- Efficient Vertical Federated Learning with Secure Aggregation [10.295508659999783]
We present a novel design for training vertical FL securely and efficiently using state-of-the-art security modules for secure aggregation.
We demonstrate empirically that our method does not impact training performance whilst obtaining a 9.1e2 to 3.8e4 speedup compared to homomorphic encryption (HE).
arXiv Detail & Related papers (2023-05-18T18:08:36Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration. We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients). We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z) - RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z) - A Framework for Evaluating Gradient Leakage Attacks in Federated
Learning [14.134217287912008]
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients.
Recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks.
We present a principled framework for evaluating and comparing different forms of client privacy leakage attacks.
arXiv Detail & Related papers (2020-04-22T05:15:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.