Detection of Insider Attacks in Distributed Projected Subgradient
Algorithms
- URL: http://arxiv.org/abs/2101.06917v1
- Date: Mon, 18 Jan 2021 08:01:06 GMT
- Title: Detection of Insider Attacks in Distributed Projected Subgradient
Algorithms
- Authors: Sissi Xiaoxiao Wu, Gangqiang Li, Shengli Zhang, and Xiaohui Lin
- Abstract summary: We show that a general neural network is particularly suitable for detecting and localizing malicious agents.
We propose to adopt one of the state-of-the-art approaches in federated learning, i.e., a collaborative peer-to-peer machine learning protocol.
In our simulations, a least-squares problem is considered to verify the feasibility and effectiveness of AI-based methods.
- Score: 11.096339082411882
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gossip-based distributed algorithms are widely used to solve
decentralized optimization problems in various multi-agent applications, yet
they are generally vulnerable to data injection attacks by internal malicious
agents, as each agent locally estimates its descent direction without
authorized supervision. In this work, we explore the application of artificial
intelligence (AI) technologies to detect internal attacks. We show that a
general neural network is particularly suitable for detecting and localizing
the malicious agents, as it can effectively capture the nonlinear
relationships underlying the collected data. Moreover, we propose to adopt one
of the state-of-the-art approaches in federated learning, i.e., a
collaborative peer-to-peer machine learning protocol, to facilitate training
our neural network models by gossip exchanges. This advanced approach is
expected to make our model more robust to insufficient training data or
mismatched test data. In our simulations, a least-squares problem is
considered to verify the feasibility and effectiveness of AI-based methods.
Simulation results demonstrate that the proposed AI-based methods improve the
detection and localization of malicious agents over score-based methods, and
that the peer-to-peer neural network model is indeed robust to these data
issues.
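The setting the abstract describes can be sketched in a few lines: agents run a gossip-based projected subgradient method on a least-squares problem while one insider injects a bias into the states it broadcasts. This is a minimal illustrative sketch, not the authors' code; the network size, ring mixing weights, constraint radius, and constant-bias attack model are all assumptions made for the example.

```python
# Minimal sketch of a gossip-based projected subgradient method for a
# least-squares problem, with one malicious agent injecting data into
# the states it shares. All dimensions, mixing weights, and the attack
# model are illustrative assumptions, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(0)

n_agents, dim = 5, 3
# Each agent i holds local data (A_i, b_i) for f_i(x) = ||A_i x - b_i||^2 / 2.
A = [rng.normal(size=(4, dim)) for _ in range(n_agents)]
x_true = rng.normal(size=dim)
b = [Ai @ x_true for Ai in A]

# Doubly stochastic mixing matrix for a ring of agents (gossip weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def project(x, radius=10.0):
    # Projection onto an l2 ball: the "projected" step of the algorithm.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

malicious = 2            # assumed insider index
attack = np.ones(dim)    # illustrative constant data-injection vector

x = np.zeros((n_agents, dim))
for t in range(1, 201):
    shared = x.copy()
    shared[malicious] += attack       # insider corrupts its broadcast state
    mixed = W @ shared                # gossip averaging with neighbors
    step = 1.0 / t                    # diminishing step size
    for i in range(n_agents):
        grad = A[i].T @ (A[i] @ x[i] - b[i])   # local least-squares gradient
        x[i] = project(mixed[i] - step * grad)

# Honest agents are pulled away from x_true by the injected bias.
err_honest = np.mean([np.linalg.norm(x[i] - x_true)
                      for i in range(n_agents) if i != malicious])
print(err_honest)
```

Because the injection leaves a persistent, structured signature in the state trajectories that honest agents observe, those trajectories can serve as training data for a detector; the paper's point is that a neural network can capture such nonlinear signatures better than hand-crafted score-based statistics.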
Related papers
- Edge AI Collaborative Learning: Bayesian Approaches to Uncertainty Estimation [0.0]
We focus on determining confidence levels in learning outcomes considering the spatial variability of data encountered by independent agents.
We implement a 3D environment simulation using the Webots platform to simulate collaborative mapping tasks.
Experiments demonstrate that BNNs can effectively support uncertainty estimation in a distributed learning context.
arXiv Detail & Related papers (2024-10-11T09:20:16Z) - Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z) - Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Data-efficient Weakly-supervised Learning for On-line Object Detection
under Domain Shift in Robotics [24.878465999976594]
Several object detection methods have been proposed in the literature, the vast majority based on Deep Convolutional Neural Networks (DCNNs).
These methods have important limitations for robotics: Learning solely on off-line data may introduce biases, and prevents adaptation to novel tasks.
In this work, we investigate how weakly-supervised learning can cope with these problems.
arXiv Detail & Related papers (2020-12-28T16:36:11Z) - A cognitive based Intrusion detection system [0.0]
Intrusion detection is one of the important mechanisms that provide computer networks security.
This paper proposes a new approach based on a Deep Neural Network and a Support Vector Machine classifier.
The proposed model predicts attacks with better accuracy than similar intrusion detection methods.
arXiv Detail & Related papers (2020-05-19T13:30:30Z) - Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.