Cerberus: Exploring Federated Prediction of Security Events
- URL: http://arxiv.org/abs/2209.03050v1
- Date: Wed, 7 Sep 2022 10:31:20 GMT
- Title: Cerberus: Exploring Federated Prediction of Security Events
- Authors: Mohammad Naseri, Yufei Han, Enrico Mariconti, Yun Shen, Gianluca
Stringhini, Emiliano De Cristofaro
- Abstract summary: We explore the feasibility of using Federated Learning (FL) to predict future security events.
We introduce Cerberus, a system enabling collaborative training of Recurrent Neural Network (RNN) models for participating organizations.
- Score: 21.261584854569893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern defenses against cyberattacks increasingly rely on proactive
approaches, e.g., to predict the adversary's next actions based on past events.
Building accurate prediction models requires knowledge from many organizations;
alas, this entails disclosing sensitive information, such as network
structures, security postures, and policies, which might often be undesirable
or outright impossible. In this paper, we explore the feasibility of using
Federated Learning (FL) to predict future security events. To this end, we
introduce Cerberus, a system enabling collaborative training of Recurrent
Neural Network (RNN) models for participating organizations. The intuition is
that FL could potentially offer a middle-ground between the non-private
approach where the training data is pooled at a central server and the
low-utility alternative of only training local models. We instantiate Cerberus
on a dataset obtained from a major security company's intrusion prevention
product and evaluate it vis-a-vis utility, robustness, and privacy, as well as
how participants contribute to and benefit from the system. Overall, our work
sheds light on both the positive aspects and the challenges of using FL for
this task and paves the way for deploying federated approaches to predictive
security.
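To make the setup concrete, here is a minimal sketch of the kind of federated training loop the abstract describes: each participating organization trains a local RNN next-event predictor on its own logs, and a server averages the resulting parameters FedAvg-style, so raw event data never leaves the organization. The architecture, event-vocabulary size, and use of plain unweighted FedAvg are illustrative assumptions, not the exact Cerberus design.

```python
# Minimal FedAvg-style sketch of federated training of an RNN event predictor.
# Architecture, vocabulary size, and plain FedAvg are illustrative assumptions,
# not the actual Cerberus implementation.
import copy
import torch
import torch.nn as nn

VOCAB = 500            # number of distinct security-event types (assumed)
EMBED, HIDDEN = 64, 128

class EventRNN(nn.Module):
    """Predicts the next security event given a sequence of past events."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMBED)
        self.rnn = nn.LSTM(EMBED, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, VOCAB)

    def forward(self, x):                   # x: (batch, seq_len) event ids
        out, _ = self.rnn(self.embed(x))
        return self.head(out[:, -1, :])     # logits for the next event

def local_update(global_model, data_loader, epochs=1, lr=1e-3):
    """One organization's local training round on its private event logs."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for seqs, next_events in data_loader:
            opt.zero_grad()
            loss_fn(model(seqs), next_events).backward()
            opt.step()
    return model.state_dict()

def fed_avg(updates):
    """Server-side aggregation: parameter-wise mean of client updates."""
    avg = copy.deepcopy(updates[0])
    for key in avg:
        avg[key] = torch.stack([u[key].float() for u in updates]).mean(dim=0)
    return avg

# One communication round: each participant trains locally, the server averages.
global_model = EventRNN()
# client_loaders = [...]  # one DataLoader of (event_seq, next_event) per org
# updates = [local_update(global_model, dl) for dl in client_loaders]
# global_model.load_state_dict(fed_avg(updates))
```

In this sketch the server only ever sees model parameters, which is the middle ground between pooling raw data centrally and training purely local models that the abstract refers to.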
Related papers
- Edge-Only Universal Adversarial Attacks in Distributed Learning [49.546479320670464]
In this work, we explore the feasibility of generating universal adversarial attacks when an attacker has access to the edge part of the model only.
Our approach shows that adversaries can induce effective mispredictions in the unknown cloud part by leveraging key features on the edge side.
Our results on ImageNet demonstrate strong attack transferability to the unknown cloud part.
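As a rough illustration of this attack surface, the sketch below optimizes a single universal perturbation against an attacker-held edge sub-network by pushing the perturbed edge features away from the clean ones. The input shape, the feature-deviation objective, and the optimizer are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of an edge-only universal perturbation: with access to
# the edge sub-network only, optimize one perturbation that maximally shifts
# the edge-side features, hoping the shift transfers to the unseen cloud part.
# The feature-deviation objective and input shape are assumptions.
import torch

def universal_edge_attack(edge_model, loader, eps=8 / 255, steps=100, lr=0.01):
    edge_model.eval()
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)  # assumed input shape
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        for x, _ in loader:                      # images assumed in [0, 1]
            clean = edge_model(x).detach()
            perturbed = edge_model((x + delta).clamp(0, 1))
            loss = -torch.norm(perturbed - clean, p=2)  # push features apart
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)          # keep the perturbation small
    return delta.detach()
```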
arXiv Detail & Related papers (2024-11-15T11:06:24Z)
- Trustworthy Federated Learning: Privacy, Security, and Beyond [37.495790989584584]
Federated Learning (FL) addresses concerns by facilitating collaborative model training across distributed data sources without transferring raw data.
We conduct an extensive survey of the security and privacy issues prevalent in FL, underscoring the vulnerability of communication links and the potential for cyber threats.
We identify the intricate security challenges that arise within the FL frameworks, aiming to contribute to the development of secure and efficient FL systems.
arXiv Detail & Related papers (2024-11-03T14:18:01Z)
- Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve adding small, specially designed noise.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z)
- Defending against Data Poisoning Attacks in Federated Learning via User Elimination [0.0]
This paper introduces a novel framework focused on the strategic elimination of adversarial users within a federated model.
We detect anomalies in the aggregation phase of the federated algorithm by integrating metadata gathered by the local training instances with Differential Privacy techniques.
Our experiments demonstrate the efficacy of our methods, significantly mitigating the risk of data poisoning while maintaining user privacy and model performance.
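A toy sketch of the user-elimination idea, under assumed details: clients release training-loss metadata under Laplace noise (a simple Differential Privacy mechanism), and the server drops statistical outliers before averaging. The particular anomaly score, threshold, and noise calibration are placeholders, not the paper's design.

```python
# Illustrative sketch of user elimination during aggregation: clients report
# loss metadata with Laplace noise, and the server excludes statistical
# outliers from the averaging step. Score, threshold, and DP calibration
# are assumptions, not the paper's exact mechanism.
import numpy as np

def dp_noisy_loss(true_loss, sensitivity=0.5, epsilon=2.0, rng=None):
    """Client side: release local training loss under Laplace noise."""
    rng = rng or np.random.default_rng()
    return true_loss + rng.laplace(0.0, sensitivity / epsilon)

def select_benign(noisy_losses, z_thresh=2.0):
    """Server side: keep clients whose reported loss is not an outlier."""
    losses = np.asarray(noisy_losses)
    z = (losses - losses.mean()) / (losses.std() + 1e-8)
    return [i for i, zi in enumerate(z) if abs(zi) <= z_thresh]

def aggregate(client_weights, keep):
    """FedAvg restricted to the clients that passed the anomaly check."""
    kept = [client_weights[i] for i in keep]
    return [np.mean(layer, axis=0) for layer in zip(*kept)]

# Example round: nine ordinary clients and one whose loss metadata is far off.
reported = [dp_noisy_loss(l) for l in [0.4] * 9 + [7.8]]
benign = select_benign(reported)     # the outlier is (very likely) excluded
# new_global = aggregate(client_weights, benign)
```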
arXiv Detail & Related papers (2024-04-19T10:36:00Z)
- Security and Privacy Issues and Solutions in Federated Learning for Digital Healthcare [0.0]
We present vulnerabilities, attacks, and defenses based on the widened attack surfaces of Federated Learning.
We suggest promising new research directions toward a more robust FL.
arXiv Detail & Related papers (2024-01-16T16:07:53Z)
- Security and Privacy Issues of Federated Learning [0.0]
Federated Learning (FL) has emerged as a promising approach to address data privacy and confidentiality concerns.
This paper presents a comprehensive taxonomy of security and privacy challenges in Federated Learning (FL) across various machine learning models.
arXiv Detail & Related papers (2023-07-22T22:51:07Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
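For context, the sketch below shows a generic gradient-matching inversion in the spirit of "deep leakage from gradients": the attacker optimizes a dummy input and soft label so that the gradient they induce matches the gradient a client shared with the server. It is a standard illustration of the attack class, not the specific baseline attack introduced in that paper.

```python
# Generic gradient-matching inversion sketch (deep-leakage style), not the
# paper's specific baseline: optimize a dummy input and soft label so that
# their induced gradient matches the gradient observed from a client.
import torch

def invert_gradients(model, observed_grads, in_shape, n_classes, steps=100):
    dummy_x = torch.randn(1, *in_shape, requires_grad=True)
    dummy_y = torch.randn(1, n_classes, requires_grad=True)   # soft label
    opt = torch.optim.LBFGS([dummy_x, dummy_y])

    def closure():
        opt.zero_grad()
        pred = model(dummy_x)
        loss = torch.sum(-torch.softmax(dummy_y, dim=-1)
                         * torch.log_softmax(pred, dim=-1))
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        match = sum(((g - og) ** 2).sum()
                    for g, og in zip(grads, observed_grads))
        match.backward()
        return match

    for _ in range(steps):
        opt.step(closure)
    return dummy_x.detach(), dummy_y.detach()
```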
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Challenges and approaches for mitigating byzantine attacks in federated learning [6.836162272841266]
Federated learning (FL) is an attractive distributed learning framework in which numerous wireless end-user devices can train a global model while their data remain local.
Despite this promising prospect, the Byzantine attack, an intractable threat in conventional distributed networks, has been found to be quite effective against FL as well.
We propose a new Byzantine attack method called the weight attack to defeat existing defense schemes, and conduct experiments to demonstrate its threat.
arXiv Detail & Related papers (2021-12-29T09:24:05Z)
- Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy is 90.96% higher than the advanced deep learning models.
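The "secure parameter aggregation" idea can be illustrated with a toy additive-masking scheme: clients add pairwise random masks that cancel in the sum, so the server learns only the aggregate update, not any individual one. Real protocols additionally handle key agreement and client dropouts; the masking and seeding below are simplifications, not FedGRU's actual mechanism.

```python
# Toy additive-masking sketch of secure aggregation: pairwise random masks
# cancel in the sum, so the server only learns the aggregate update.
# Real protocols derive masks from pairwise keys and tolerate dropouts;
# the shared seed here is a simplification.
import numpy as np

def masked_updates(updates, seed=0):
    """Each client pair (i, j) shares a mask; i adds it, j subtracts it."""
    rng = np.random.default_rng(seed)
    n = len(updates)
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[i].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

# The server only ever sees the masked vectors; their mean equals the true mean.
true_updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
server_view = masked_updates(true_updates)
aggregate = np.mean(server_view, axis=0)   # == np.mean(true_updates, axis=0)
```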
arXiv Detail & Related papers (2020-03-19T13:07:49Z)