Anomaly Detection via Federated Learning
- URL: http://arxiv.org/abs/2210.06614v1
- Date: Wed, 12 Oct 2022 22:40:29 GMT
- Title: Anomaly Detection via Federated Learning
- Authors: Marc Vucovich, Amogh Tarcar, Penjo Rebelo, Narendra Gade, Ruchi
Porwal, Abdul Rahman, Christopher Redino, Kevin Choi, Dhruv Nandakumar,
Robert Schiller, Edward Bowen, Alex West, Sanmitra Bhattacharya, Balaji
Veeramani
- Abstract summary: We propose a novel anomaly detector via federated learning to detect malicious network activity on a client's server.
Using our novel min-max scaler and sampling technique, called FedSam, we show that federated learning allows the global model to learn from each client's data.
- Score: 3.0755847416657613
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning has helped advance the field of anomaly detection by
incorporating classifiers and autoencoders to distinguish between normal and
anomalous behavior. Additionally, federated learning has provided a way for a
global model to be trained on multiple clients' data without requiring the
clients to share their data directly. This paper proposes a novel anomaly
detector via federated learning to detect malicious network activity on a
client's server. In our experiments, we use an autoencoder with a classifier in
a federated learning framework to determine whether network activity is benign
or malicious. Using our novel min-max scaler and sampling technique, called
FedSam, we show that federated learning allows the global model to learn from
each client's data and, in turn, provides a means for each client to improve
their intrusion detection system's defense against cyber-attacks.
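The abstract does not spell out FedSam's implementation, but the two ingredients it names, min-max feature scaling and a global model aggregated from client updates, can be sketched generically. The sketch below is a minimal, hypothetical illustration (all function names are my own, not the paper's): per-client min-max scaling of network features, plus FedAvg-style weighted averaging of client model parameters, which is the standard aggregation rule in federated learning.

```python
import numpy as np

def min_max_scale(X, lo=None, hi=None):
    """Scale each feature column to [0, 1]. `lo`/`hi` may be bounds
    shared across clients; by default, per-client column min/max are used."""
    lo = X.min(axis=0) if lo is None else lo
    hi = X.max(axis=0) if hi is None else hi
    span = np.where(hi > lo, hi - lo, 1.0)  # guard against constant columns
    return (X - lo) / span

def fedavg(client_weights, client_sizes):
    """FedAvg-style aggregation: average each parameter tensor across
    clients, weighted by each client's local sample count."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]
```

In a full round, each client would scale its local traffic features, train the autoencoder-plus-classifier locally, and send only its updated weights to the server, which aggregates them with `fedavg` and broadcasts the result back; raw data never leaves the client.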
Related papers
- Federated Face Forgery Detection Learning with Personalized Representation [63.90408023506508]
Deep generative technology can produce high-quality fake videos that are indistinguishable from real ones, posing a serious social threat.
Traditional forgery detection methods rely on directly training on centralized data.
The paper proposes a novel federated face forgery detection framework with personalized representation.
arXiv Detail & Related papers (2024-06-17T02:20:30Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without directly sharing their data with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z) - Network Anomaly Detection Using Federated Learning [0.483420384410068]
We introduce a robust and scalable framework that enables efficient network anomaly detection.
We leverage federated learning, in which multiple participants train a global model jointly.
The proposed method performs better than baseline machine learning techniques on the UNSW-NB15 data set.
arXiv Detail & Related papers (2023-03-13T20:16:30Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Abuse and Fraud Detection in Streaming Services Using Heuristic-Aware
Machine Learning [0.45880283710344055]
This work presents a fraud and abuse detection framework for streaming services by modeling user streaming behavior.
We study the use of semi-supervised as well as supervised approaches for anomaly detection.
To the best of our knowledge, this is the first paper to use machine learning methods for fraud and abuse detection in real-world scale streaming services.
arXiv Detail & Related papers (2022-03-04T03:57:58Z) - UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label
Inference Attacks Against Split Learning [0.0]
The split learning framework splits the model between the client and the server.
We show that the split learning paradigm can pose serious security risks and provides no more than a false sense of security.
arXiv Detail & Related papers (2021-08-20T07:39:16Z) - Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Provably Secure Federated Learning against Malicious Clients [31.85264586217373]
Malicious clients can corrupt the global model, causing it to predict incorrect labels for test examples.
We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients.
Our method can achieve a certified accuracy of 88% on MNIST when 20 out of 1,000 clients are malicious.
arXiv Detail & Related papers (2021-02-03T03:24:17Z) - Adversarial Robustness through Bias Variance Decomposition: A New
Perspective for Federated Learning [41.525434598682764]
Federated learning learns a neural network model by aggregating the knowledge from a group of distributed clients under the privacy-preserving constraint.
We show that this paradigm might inherit the adversarial vulnerability of the centralized neural network.
We propose an adversarially robust federated learning framework, named Fed_BVA, with improved server and client update mechanisms.
arXiv Detail & Related papers (2020-09-18T18:58:25Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.