Insider Threats Mitigation: Role of Penetration Testing
- URL: http://arxiv.org/abs/2407.17346v1
- Date: Wed, 24 Jul 2024 15:14:48 GMT
- Title: Insider Threats Mitigation: Role of Penetration Testing
- Authors: Krutarth Chauhan
- Abstract summary: This study aims to deepen understanding of penetration testing as a critical component of insider threat defense.
We look at how penetration testing is used in different industries, present case studies with real-world implementations, and discuss the obstacles and constraints that businesses must overcome.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional security solutions are insufficient to address the urgent cybersecurity challenge posed by insider attacks. While a great deal of research has been done in this area, our systematic literature analysis attempts to give readers a thorough grasp of penetration testing's role in reducing insider risks. We aim to arrange and integrate the body of knowledge on insider threat prevention by using a grounded theory approach for a thorough literature review. This analysis classifies and evaluates the approaches used in penetration testing today, including how well they uncover and mitigate insider threats and how well they work in tandem with other security procedures. Additionally, we look at how penetration testing is used in different industries, present case studies with real-world implementations, and discuss the obstacles and constraints that businesses must overcome. This study aims to improve the knowledge of penetration testing as a critical part of insider threat defense, helping to create more comprehensive and successful security policies.
Related papers
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z) - A Security Risk Assessment Method for Distributed Ledger Technology (DLT) based Applications: Three Industry Case Studies
This study aims to raise awareness of the cybersecurity of distributed ledger technology.
We have developed a database with possible security threats and known attacks on distributed ledger technologies.
The method has subsequently been evaluated in three case studies.
arXiv Detail & Related papers (2024-01-22T20:57:23Z) - The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z) - Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z) - Towards new challenges of modern Pentest
This study aims to present current methodologies, tools, and potential challenges applied to Pentest from an updated systematic literature review.
Also, it presents new challenges such as automation of techniques, management of costs associated with offensive security, and the difficulty in hiring qualified professionals to perform Pentest.
arXiv Detail & Related papers (2023-11-21T19:32:23Z) - Adversarial Machine Learning In Network Intrusion Detection Domain: A Systematic Review
It has been found that deep learning models are vulnerable to data instances that can mislead them into making incorrect classification decisions.
This survey explores research that employs different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z) - A framework for comprehensible multi-modal detection of cyber threats
Detection of malicious activities in corporate environments is a very complex task, and much effort has been invested in automating it.
We discuss these limitations and design a detection framework which combines observed events from different sources of data.
We demonstrate applicability of the framework on a case study of a real malware infection observed in a corporate network.
arXiv Detail & Related papers (2021-11-10T16:09:52Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Dos and Don'ts of Machine Learning in Computer Security
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z) - Modeling Penetration Testing with Reinforcement Learning Using Capture-the-Flag Challenges: Trade-offs between Model-free Learning and A Priori Knowledge
Penetration testing is a security exercise aimed at assessing the security of a system by simulating attacks against it.
This paper focuses on simplified penetration testing problems expressed in the form of capture the flag hacking challenges.
We show how this challenge may be eased by relying on different forms of prior knowledge that may be provided to the agent.
arXiv Detail & Related papers (2020-05-26T11:23:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.