Federated Learning with Anomaly Detection via Gradient and Reconstruction Analysis
- URL: http://arxiv.org/abs/2403.10000v1
- Date: Fri, 15 Mar 2024 03:54:45 GMT
- Title: Federated Learning with Anomaly Detection via Gradient and Reconstruction Analysis
- Authors: Zahir Alsulaimawi
- Abstract summary: We introduce a novel framework that synergizes gradient-based analysis with autoencoder-driven data reconstruction to detect and mitigate poisoned data with unprecedented precision.
Our method outperforms existing solutions by 15% in anomaly detection accuracy while maintaining a minimal false positive rate.
Our work paves the way for future advancements in distributed learning security.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the evolving landscape of Federated Learning (FL), the challenge of ensuring data integrity against poisoning attacks is paramount, particularly for applications demanding stringent privacy preservation. Traditional anomaly detection strategies often struggle to adapt to the distributed nature of FL, leaving a gap our research aims to bridge. We introduce a novel framework that synergizes gradient-based analysis with autoencoder-driven data reconstruction to detect and mitigate poisoned data with unprecedented precision. Our approach uniquely combines detecting anomalous gradient patterns with identifying reconstruction errors, significantly enhancing FL model security. Validated through extensive experiments on MNIST and CIFAR-10 datasets, our method outperforms existing solutions by 15% in anomaly detection accuracy while maintaining a minimal false positive rate. This robust performance, consistent across varied data types and network sizes, underscores our framework's potential in securing FL deployments in critical domains such as healthcare and finance. By setting new benchmarks for anomaly detection within FL, our work paves the way for future advancements in distributed learning security.
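The abstract describes the detection pipeline only at a high level. As a rough illustration of how gradient-pattern screening and autoencoder reconstruction error could be combined on the server side, here is a minimal sketch; the z-score rule, both thresholds, and the tiny autoencoder are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Small autoencoder used to score client data by reconstruction error."""
    def __init__(self, dim=784, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def gradient_anomaly_scores(client_grads):
    """Z-score of each client's gradient norm against the cohort of updates."""
    norms = np.array([np.linalg.norm(g) for g in client_grads])
    return np.abs(norms - norms.mean()) / (norms.std() + 1e-8)

def reconstruction_scores(autoencoder, client_batches):
    """Mean reconstruction error of each client's data under the autoencoder."""
    errors = []
    with torch.no_grad():
        for x in client_batches:  # x: (batch, dim) tensor
            errors.append(torch.mean((autoencoder(x) - x) ** 2).item())
    return np.array(errors)

def flag_suspicious(client_grads, client_batches, autoencoder,
                    z_thresh=2.0, recon_thresh=0.05):
    """Flag a client when either signal is anomalous. Combining the two
    signals is the core idea; the threshold values are illustrative."""
    g = gradient_anomaly_scores(client_grads)
    r = reconstruction_scores(autoencoder, client_batches)
    return [i for i in range(len(client_grads))
            if g[i] > z_thresh or r[i] > recon_thresh]
```

In a training round, the server would run `flag_suspicious` over the received updates before aggregation and exclude or down-weight the flagged clients.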
Related papers
- An Empirical Study of Vulnerability Detection using Federated Learning [19.14259520825083]
Federated Learning (FL) has been investigated as a promising means of addressing the data silo problem in vulnerability detection.
This paper first proposes VulFL, an effective evaluation framework for FL-based vulnerability detection.
Our study sheds light on the potential of FL in vulnerability detection, which can be used to guide the design of FL-based solutions for vulnerability detection.
arXiv Detail & Related papers (2024-11-25T05:21:12Z)
- Typicalness-Aware Learning for Failure Detection [26.23185979968123]
Deep neural networks (DNNs) often suffer from the overconfidence issue, where incorrect predictions are made with high confidence scores.
We propose a novel approach called Typicalness-Aware Learning (TAL) to address this issue and improve failure detection performance.
arXiv Detail & Related papers (2024-11-04T11:09:47Z)
- Trustworthy Intrusion Detection: Confidence Estimation Using Latent Space [7.115540429006041]
This work introduces a novel method for enhancing confidence in anomaly detection in Intrusion Detection Systems (IDS).
By developing a confidence metric derived from latent space representations, we aim to improve the reliability of IDS predictions against cyberattacks.
Applied to the NSL-KDD dataset, our approach focuses on binary classification tasks to effectively distinguish between normal and malicious network activities.
arXiv Detail & Related papers (2024-09-19T08:09:44Z)
- FedAD-Bench: A Unified Benchmark for Federated Unsupervised Anomaly Detection in Tabular Data [11.42231457116486]
FedAD-Bench is a benchmark for evaluating unsupervised anomaly detection algorithms within the context of federated learning.
We identify key challenges such as model aggregation inefficiencies and metric unreliability.
Our work aims to establish a standardized benchmark to guide future research and development in federated anomaly detection.
arXiv Detail & Related papers (2024-08-08T13:14:19Z)
- Enhancing Security in Federated Learning through Adaptive Consensus-Based Model Update Validation [2.28438857884398]
This paper introduces an advanced approach for fortifying Federated Learning (FL) systems against label-flipping attacks.
We propose a consensus-based verification process integrated with an adaptive thresholding mechanism.
Our results indicate a significant mitigation of label-flipping attacks, bolstering the FL system's resilience (see the sketch after this entry).
arXiv Detail & Related papers (2024-03-05T20:54:56Z)
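One illustrative reading of the consensus-plus-adaptive-threshold idea above, not the paper's implementation: score each client update by its distance to a robust consensus update, and let the acceptance threshold adapt to each round's score distribution. The median reference and the mean-plus-k-std threshold are assumptions of this sketch.

```python
import numpy as np

def consensus_filter(updates, k=1.5):
    """Accept client updates whose distance to the coordinate-wise median
    stays below an adaptive threshold (mean + k * std of the distances)."""
    U = np.stack(updates)                       # (n_clients, n_params), flattened updates
    consensus = np.median(U, axis=0)            # robust reference update
    dists = np.linalg.norm(U - consensus, axis=1)
    threshold = dists.mean() + k * dists.std()  # re-estimated every round
    accepted = [u for u, d in zip(updates, dists) if d <= threshold]
    if not accepted:                            # fall back if everything was rejected
        return consensus
    return np.mean(accepted, axis=0)            # aggregate the surviving updates
```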
- Enhancing Infrared Small Target Detection Robustness with Bi-Level Adversarial Framework [61.34862133870934]
We propose a bi-level adversarial framework to promote the robustness of detection in the presence of distinct corruptions.
Our scheme remarkably improves IoU by 21.96% across a wide array of corruptions and by 4.97% on the general benchmark.
arXiv Detail & Related papers (2023-09-03T06:35:07Z)
- PULL: Reactive Log Anomaly Detection Based On Iterative PU Learning [58.85063149619348]
We propose PULL, an iterative log analysis method for reactive anomaly detection based on estimated failure time windows.
Our evaluation shows that PULL consistently outperforms ten benchmark baselines across three different datasets.
arXiv Detail & Related papers (2023-01-25T16:34:43Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent work on inverting deep neural networks from model gradients has raised concerns about FL's ability to prevent the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Semantic Perturbations with Normalizing Flows for Improved Generalization [62.998818375912506]
We show that perturbations in the latent space can be used to define fully unsupervised data augmentations.
We find that latent adversarial perturbations that adapt to the classifier throughout its training are most effective.
arXiv Detail & Related papers (2021-08-18T03:20:00Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme, matching the accuracy of softmax models (see the sketch below).
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
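As a rough sketch of the single-forward-pass rejection idea in this last entry: embed the input, compute an RBF similarity to per-class centroids, and reject when no class is close enough. The encoder, `sigma`, and the rejection threshold are hypothetical, and the paper's centroid-updating scheme (e.g., a momentum update of class means) is omitted for brevity.

```python
import torch
import torch.nn as nn

class CentroidRejector(nn.Module):
    """Deterministic classifier that rejects inputs far from every class centroid."""
    def __init__(self, encoder, n_classes, embed_dim, sigma=0.3):
        super().__init__()
        self.encoder = encoder   # any nn.Module mapping inputs to embeddings
        self.centroids = nn.Parameter(torch.randn(n_classes, embed_dim))
        self.sigma = sigma

    def forward(self, x):
        z = self.encoder(x)                      # (batch, embed_dim)
        d2 = torch.cdist(z, self.centroids) ** 2
        return torch.exp(-d2 / (2 * self.sigma ** 2))   # RBF similarity per class

    def predict_or_reject(self, x, threshold=0.5):
        sim = self.forward(x)
        conf, cls = sim.max(dim=1)
        cls = cls.clone()
        cls[conf < threshold] = -1   # -1 marks "reject as out-of-distribution"
        return cls
```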