SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things
and Cyber-Physical Systems based on Machine Learning
- URL: http://arxiv.org/abs/2101.02780v1
- Date: Thu, 7 Jan 2021 22:01:30 GMT
- Title: SHARKS: Smart Hacking Approaches for RisK Scanning in Internet-of-Things
and Cyber-Physical Systems based on Machine Learning
- Authors: Tanujay Saha, Najwa Aaraj, Neel Ajjarapu, Niraj K. Jha
- Abstract summary: Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are increasingly being deployed across multiple functionalities.
These devices are inherently not secure across their comprehensive software, hardware, and network stacks.
We present an innovative technique for detecting unknown system vulnerabilities, managing these vulnerabilities, and improving incident response.
- Score: 5.265938973293016
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cyber-physical systems (CPS) and Internet-of-Things (IoT) devices are
increasingly being deployed across multiple functionalities, ranging from
healthcare devices and wearables to critical infrastructures, e.g., nuclear
power plants, autonomous vehicles, smart cities, and smart homes. These devices
are inherently not secure across their comprehensive software, hardware, and
network stacks, thus presenting a large attack surface that can be exploited by
hackers. In this article, we present an innovative technique for detecting
unknown system vulnerabilities, managing these vulnerabilities, and improving
incident response when such vulnerabilities are exploited. The novelty of this
approach lies in extracting intelligence from known real-world CPS/IoT attacks,
representing them in the form of regular expressions, and employing machine
learning (ML) techniques on this ensemble of regular expressions to generate
new attack vectors and security vulnerabilities. Our results show that 10 new
attack vectors and 122 new vulnerability exploits can be successfully generated
that have the potential to exploit a CPS or an IoT ecosystem. The ML
methodology achieves an accuracy of 97.4% and enables us to predict these
attacks efficiently with an 87.2% reduction in the search space. We demonstrate
the application of our method to the hacking of the in-vehicle network of a
connected car. To defend against the known attacks and possible novel exploits,
we discuss a defense-in-depth mechanism for various classes of attacks and the
classification of data targeted by such attacks. This defense mechanism
optimizes the cost of security measures based on the sensitivity of the
protected resource, thus incentivizing its adoption in real-world CPS/IoT by
cybersecurity practitioners.
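The pipeline described in the abstract (attack intelligence encoded as regular expressions over exploit primitives, with ML used to score new combinations) can be conveyed with a small, self-contained sketch. The primitive names, toy chains, regular expression, and classifier below are hypothetical stand-ins, not the paper's actual data or model:

```python
# Minimal sketch (not the authors' code): known CPS/IoT attack chains are
# written as sequences of exploit primitives, a regular expression captures
# one generalized chain, and a simple ML model scores unseen candidate
# chains. Primitive names, the toy dataset, and the classifier choice are
# hypothetical placeholders for the paper's actual pipeline.
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

# Hypothetical known attack chains, each a sequence of exploit primitives.
known_attacks = [
    "scan sniff spoof_can inject_frame",          # e.g., in-vehicle network attack
    "scan default_creds firmware_dump escalate",
    "phish default_creds lateral_move exfiltrate",
]
# A regular expression generalizing the first chain: optional recon steps,
# then CAN-bus spoofing followed by frame injection.
canbus_pattern = re.compile(r"(scan\s+)?(sniff\s+)?spoof_can\s+inject_frame")
print(bool(canbus_pattern.search(known_attacks[0])))        # True

# Toy supervised step: label chains as exploitable (1) or benign (0), then
# train a classifier on token n-grams so it can score new candidate chains.
benign = ["scan heartbeat log_rotate", "firmware_dump checksum_verify"]
texts = known_attacks + benign
labels = [1] * len(known_attacks) + [0] * len(benign)

vectorizer = CountVectorizer(ngram_range=(1, 2))
features = vectorizer.fit_transform(texts)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)

# Score an unseen candidate chain produced by recombining known primitives.
candidate = ["sniff spoof_can escalate exfiltrate"]
print(clf.predict_proba(vectorizer.transform(candidate))[0])
```

In the paper's methodology, the regular-expression ensemble is built from documented real-world exploits, and the ML step is what yields the reported 97.4% accuracy and 87.2% search-space reduction; the snippet only conveys the shape of that pipeline.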
Related papers
- Countering Autonomous Cyber Threats [40.00865970939829]
Foundation models (FMs) present dual-use concerns both broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z)
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Adv-Bot: Realistic Adversarial Botnet Attacks against Network Intrusion Detection Systems [0.7829352305480285]
A growing number of researchers have recently been investigating the feasibility of such attacks against machine learning-based security systems.
This study investigates the practical feasibility of adversarial attacks, specifically evasion attacks, against network-based intrusion detection systems.
Our goal is to create adversarial botnet traffic that can avoid detection while still performing all of its intended malicious functionality.
arXiv Detail & Related papers (2023-03-12T14:01:00Z)
- Attack Techniques and Threat Identification for Vulnerabilities [1.1689657956099035]
For security teams, prioritization and focus become critical so that limited time is spent on the highest-risk vulnerabilities.
In this work, we use machine learning and natural language processing techniques, as well as several publicly available data sets.
We first map vulnerabilities to a standard set of common weaknesses, and then map those weaknesses to attack techniques.
This approach yields a Mean Reciprocal Rank (MRR) of 0.95, an accuracy comparable with those reported for state-of-the-art systems.
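For context, Mean Reciprocal Rank averages the reciprocal of the rank at which the correct attack technique appears for each vulnerability. A minimal illustration follows, with made-up vulnerability-to-technique rankings and ATT&CK-style IDs used purely as example labels:

```python
# Minimal illustration of Mean Reciprocal Rank (MRR), the metric reported
# above; the vulnerability-to-technique rankings are made-up examples.
def mean_reciprocal_rank(ranked_predictions, ground_truth):
    """ranked_predictions: per-query lists of candidate labels, best first."""
    reciprocal_ranks = []
    for candidates, correct in zip(ranked_predictions, ground_truth):
        rank = candidates.index(correct) + 1 if correct in candidates else None
        reciprocal_ranks.append(1.0 / rank if rank else 0.0)
    return sum(reciprocal_ranks) / len(reciprocal_ranks)

# Three hypothetical vulnerabilities, each with attack techniques ranked by a model.
preds = [["T1190", "T1059"], ["T1059", "T1190"], ["T1078", "T1059"]]
truth = ["T1190", "T1190", "T1078"]
print(mean_reciprocal_rank(preds, truth))   # (1 + 0.5 + 1) / 3 = 0.833...
```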
arXiv Detail & Related papers (2022-06-22T15:27:49Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Reinforcement Learning for Feedback-Enabled Cyber Resilience [24.92055101652206]
Cyber resilience provides a new security paradigm that complements inadequate protection with resilience mechanisms.
A Cyber-Resilient Mechanism (CRM) adapts to known or zero-day threats and uncertainties in real time.
We review the literature on RL for cyber resiliency and discuss the cyber-resilient defenses against three major types of vulnerabilities.
arXiv Detail & Related papers (2021-07-02T01:08:45Z)
- GRAVITAS: Graphical Reticulated Attack Vectors for Internet-of-Things Aggregate Security [5.918387680589584]
Internet-of-Things (IoT) and cyber-physical systems (CPSs) may consist of thousands of devices connected in a complex network topology.
We describe a comprehensive risk management system, called GRAVITAS, for IoT/CPS that can identify undiscovered attack vectors.
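As a rough illustration of the graph view such a risk management system operates on, the sketch below models a small, invented IoT deployment as a directed graph and enumerates attack paths from an internet-facing entry point to a critical asset; the device names and edges are hypothetical and are not taken from the GRAVITAS paper:

```python
# Hypothetical IoT deployment modeled as a directed graph: nodes are devices,
# edges are compromise/pivot steps an attacker could take. Enumerating simple
# paths from an internet-facing entry point to a critical asset yields the
# kind of aggregate attack vectors a graph-based risk manager reasons about.
network = {
    "router": ["camera", "hub"],
    "camera": ["hub"],
    "hub": ["door_lock", "hvac_controller"],
    "door_lock": [],
    "hvac_controller": [],
}

def attack_paths(graph, src, dst, path=()):
    """Yield every simple path from src to dst; each path is one attack vector."""
    path = path + (src,)
    if src == dst:
        yield path
        return
    for nxt in graph.get(src, []):
        if nxt not in path:              # skip cycles
            yield from attack_paths(graph, nxt, dst, path)

for p in attack_paths(network, "router", "door_lock"):
    print(" -> ".join(p))
# router -> camera -> hub -> door_lock
# router -> hub -> door_lock
```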
arXiv Detail & Related papers (2021-05-31T19:35:23Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial EXEmples: A Survey and Experimental Evaluation of Practical Attacks on Machine Learning for Windows Malware Detection [67.53296659361598]
Adversarial EXEmples can bypass machine learning-based detection by perturbing relatively few input bytes.
We develop a unifying framework that not only encompasses and generalizes previous attacks against machine-learning models, but also includes three novel attacks.
These attacks, named Full DOS, Extend and Shift, inject the adversarial payload by respectively manipulating the DOS header, extending it, and shifting the content of the first section.
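These header-space attacks exploit a standard property of the PE format: the Windows loader reads only the "MZ" magic and the e_lfanew pointer from the DOS header, leaving the bytes in between free to perturb. A minimal sketch of locating that perturbable region follows (PE layout facts only; the attack algorithms themselves are described in the cited paper):

```python
# PE layout facts only (the attack algorithms are in the cited paper): the
# Windows loader reads just the "MZ" magic at offset 0x00 and the 4-byte
# little-endian e_lfanew pointer at offset 0x3C from the DOS header, so the
# bytes in between can be perturbed without breaking execution.
import struct

def perturbable_dos_bytes(pe_bytes: bytes) -> range:
    """Return DOS-header byte offsets that can change without breaking loading."""
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not a PE file")
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]   # offset of "PE\0\0"
    assert pe_bytes[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"
    return range(0x02, 0x3C)     # everything between the magic and e_lfanew

# Hypothetical usage on a binary read from disk:
# offsets = perturbable_dos_bytes(open("sample.exe", "rb").read())
```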
arXiv Detail & Related papers (2020-08-17T07:16:57Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)