CATFL: Certificateless Authentication-based Trustworthy Federated
Learning for 6G Semantic Communications
- URL: http://arxiv.org/abs/2302.00271v1
- Date: Wed, 1 Feb 2023 06:26:44 GMT
- Title: CATFL: Certificateless Authentication-based Trustworthy Federated
Learning for 6G Semantic Communications
- Authors: Gaolei Li, Yuanyuan Zhao, Yi Li
- Abstract summary: Federated learning (FL) provides an emerging approach for collaboratively training semantic encoder/decoder models of semantic communication systems.
Most existing studies on trustworthy FL aim to eliminate data poisoning threats that are produced by malicious clients.
A certificateless authentication-based trustworthy federated learning framework is proposed, which mutually authenticates the identities of clients and the server.
- Score: 12.635921154497987
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) provides an emerging approach for collaboratively
training semantic encoder/decoder models of semantic communication systems,
without private user data leaving the devices. Most existing studies on
trustworthy FL aim to eliminate data poisoning threats that are produced by
malicious clients, but in many cases, eliminating model poisoning attacks
brought by fake servers is also an important objective. In this paper, a
certificateless authentication-based trustworthy federated learning (CATFL)
framework is proposed, which mutually authenticates the identity of clients and
server. In CATFL, each client verifies the server's signature information
before accepting the delivered global model to ensure that the global model is
not delivered by fake servers. Conversely, the server verifies each client's
signature information before accepting the delivered model updates, to ensure
that they are submitted by authorized clients. Compared to PKI-based methods,
CATFL avoids the high overhead of certificate management.
Meanwhile, client anonymity can shield data poisoning attackers, while
real-name registration may expose user-specific private information.
Therefore, a pseudonym generation strategy is also presented in CATFL to
achieve a trade-off between identity traceability and user anonymity, which is
essential to conditionally prevent user-specific privacy leakage.
Theoretical security analysis and evaluation results validate the superiority
of CATFL.
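The mutual-verification flow described in the abstract (the client checks the server's signature on the global model before accepting it; the server checks each client's signature on the model updates before aggregating them) can be sketched minimally. This is not the paper's construction: HMAC over shared keys stands in for certificateless public-key signatures so the sketch runs with the standard library alone, and the pseudonym rule is a hypothetical keyed hash chosen to illustrate conditional traceability.

```python
import hashlib
import hmac
import os

# Illustrative stand-in only: CATFL uses certificateless public-key
# signatures; here HMAC over a shared key models sign/verify so the
# mutual-authentication flow can be shown with the stdlib alone.
def sign(key: bytes, message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), tag)

def make_pseudonym(real_id: str, kgc_secret: bytes) -> str:
    # Hypothetical pseudonym rule: a keyed hash that only the
    # key-generation centre (KGC) can re-derive, so the client stays
    # anonymous to others but remains traceable by the KGC.
    return hmac.new(kgc_secret, real_id.encode(), hashlib.sha256).hexdigest()[:16]

# --- toy run of one round ---
kgc_secret = os.urandom(32)   # KGC secret (hypothetical)
server_key = os.urandom(32)   # models the server's signing key
client_key = os.urandom(32)   # models one client's signing key

pseudonym = make_pseudonym("alice@device-7", kgc_secret)

# Server -> client: global model plus signature; the client verifies
# before accepting, rejecting models delivered by fake servers.
global_model = b"global-model-round-3"
server_sig = sign(server_key, global_model)
assert verify(server_key, global_model, server_sig)

# Client -> server: update signed under the pseudonym; the server
# verifies before aggregating, rejecting unauthorized updates.
update = pseudonym.encode() + b"|local-update"
client_sig = sign(client_key, update)
assert verify(client_key, update, client_sig)

# A tampered model fails verification.
assert not verify(server_key, b"poisoned-model", server_sig)
```

The point of the sketch is only the ordering of checks: neither side acts on a received message until the other side's signature has been verified.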
Related papers
- CryptoFormalEval: Integrating LLMs and Formal Verification for Automated Cryptographic Protocol Vulnerability Detection [41.94295877935867]
We introduce a benchmark to assess the ability of Large Language Models to autonomously identify vulnerabilities in new cryptographic protocols.
We created a dataset of novel, flawed, communication protocols and designed a method to automatically verify the vulnerabilities found by the AI agents.
arXiv Detail & Related papers (2024-11-20T14:16:55Z)
- Protection against Source Inference Attacks in Federated Learning using Unary Encoding and Shuffling [6.260747047974035]
Federated Learning (FL) enables clients to train a joint model without disclosing their local data.
Recently, the source inference attack (SIA) has been proposed where an honest-but-curious central server tries to identify exactly which client owns a specific data record.
We propose a defense against SIAs by using a trusted shuffler, without compromising the accuracy of the joint model.
arXiv Detail & Related papers (2024-11-10T13:17:11Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
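The double-masking technique mentioned in this summary builds on pairwise mask cancellation. A minimal sketch of that cancellation step follows; real SecAgg derives the masks from pairwise key agreement and adds a second, self-mask layer with secret sharing to tolerate dropouts, all of which is omitted here.

```python
import random

# Pairwise-mask cancellation: each pair (i, j) with i < j shares a
# mask m_ij; client i adds it and client j subtracts it, so the
# server's sum of masked updates equals the sum of true updates
# while no individual update is sent in the clear.
def masked_updates(updates, seed=0):
    rng = random.Random(seed)
    n = len(updates)
    masks = {(i, j): rng.randrange(1000)
             for i in range(n) for j in range(i + 1, n)}
    out = []
    for i, u in enumerate(updates):
        m = sum(masks[(i, j)] for j in range(i + 1, n)) \
            - sum(masks[(j, i)] for j in range(i))
        out.append(u + m)   # what each client actually sends
    return out

updates = [5, 7, 11]
sent = masked_updates(updates)
# Individual masked values hide the inputs, but the sum is exact:
assert sum(sent) == sum(updates)
```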
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- FLoW3 -- Web3 Empowered Federated Learning [0.0]
Federated Learning is susceptible to various kinds of attacks like Data Poisoning, Model Poisoning and Man in the Middle attack.
Validation is done through consensus, employing Novelty Detection and the Snowball protocol.
The system is implemented in Python, with Foundry used for smart contract development.
arXiv Detail & Related papers (2023-12-09T04:05:07Z)
- Who Leaked the Model? Tracking IP Infringers in Accountable Federated Learning [51.26221422507554]
Federated learning (FL) is an effective collaborative learning framework to coordinate data and computation resources from massive and distributed clients in training.
Such collaboration results in non-trivial intellectual property (IP) represented by the model parameters that should be protected and shared by the whole party rather than an individual user.
To block such IP leakage, it is essential to make the IP identifiable in the shared model and locate the anonymous infringer who first leaks it.
We propose Decodable Unique Watermarking (DUW) for complying with the requirements of accountable FL.
arXiv Detail & Related papers (2023-12-06T00:47:55Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a global shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose Fed-EDKD technique to improve the current popular FL schemes to resist C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FedSOV: Federated Model Secure Ownership Verification with Unforgeable Signature [60.99054146321459]
Federated learning allows multiple parties to collaborate in learning a global model without revealing private data.
We propose a cryptographic signature-based federated learning model ownership verification scheme named FedSOV.
arXiv Detail & Related papers (2023-05-10T12:10:02Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
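The ensemble idea in this summary can be sketched as a majority vote over group models, with a rough version of the certification condition. The function names are illustrative, training is stubbed out (each "model" is reduced to the label it would predict), and the paper's exact bound additionally handles tie-breaking.

```python
from collections import Counter

# FLCert-style ensemble sketch: clients are partitioned into disjoint
# groups, one global model is trained per group, and the predicted
# label is the majority vote over the group models.
def flcert_predict(group_labels):
    return Counter(group_labels).most_common(1)[0][0]

def is_certified(group_labels, k):
    # Rough certificate: k malicious clients fall into at most k
    # groups, which can move at most k votes from the winner to the
    # runner-up, so the vote is unchanged when the margin exceeds 2k.
    counts = [c for _, c in Counter(group_labels).most_common()]
    runner_up = counts[1] if len(counts) > 1 else 0
    return counts[0] - runner_up > 2 * k

groups = ["cat"] * 5 + ["dog"] * 2   # 7 group models, 2 poisoned
assert flcert_predict(groups) == "cat"
assert is_certified(groups, 1) and not is_certified(groups, 2)
```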
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- Efficient and Privacy Preserving Group Signature for Federated Learning [2.121963121603413]
Federated Learning (FL) is a Machine Learning (ML) technique that aims to reduce the threats to user data privacy.
This paper proposes an efficient and privacy-preserving protocol for FL based on group signature.
arXiv Detail & Related papers (2022-07-12T04:12:10Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
- CRFL: Certifiably Robust Federated Learning against Backdoor Attacks [59.61565692464579]
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
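The two mechanisms this summary names, clipping and smoothing of model parameters, can be sketched on a plain list of values. The function name and constants are illustrative; the sample-wise certification analysis itself lives in the paper, not in this code.

```python
import math
import random

# CRFL-style sketch: clip the parameter vector to a norm bound, then
# add Gaussian noise to smooth the global model. Parameters are a
# plain Python list here rather than real model weights.
def clip_and_smooth(params, clip_norm=1.0, sigma=0.01, seed=0):
    rng = random.Random(seed)
    norm = math.sqrt(sum(p * p for p in params))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [p * scale for p in params]           # norm <= clip_norm
    return [p + rng.gauss(0.0, sigma) for p in clipped]

params = [3.0, 4.0]          # norm 5 -> rescaled to norm 1
smoothed = clip_and_smooth(params)
# With the noise disabled, the output norm is exactly the clip bound:
assert abs(math.hypot(*clip_and_smooth(params, sigma=0.0)) - 1.0) < 1e-9
```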
arXiv Detail & Related papers (2021-06-15T16:50:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.