To Patch or Not to Patch: Motivations, Challenges, and Implications for Cybersecurity
- URL: http://arxiv.org/abs/2502.17703v1
- Date: Mon, 24 Feb 2025 22:52:35 GMT
- Title: To Patch or Not to Patch: Motivations, Challenges, and Implications for Cybersecurity
- Authors: Jason R. C. Nurse
- Abstract summary: We take a fresh look at the question of patching and critically explore why organizations choose to patch or decide against it. Key motivators include organizational needs, the IT/security team's relationship with vendors, and legal and regulatory requirements. The study also uncovers numerous significant reasons why the decision is taken not to patch.
- Score: 2.7195102129095003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As technology has become more embedded into our society, the security of modern-day systems is paramount. One topic which is constantly under discussion is that of patching, or more specifically, the installation of updates that remediate security vulnerabilities in software or hardware systems. This continued deliberation is motivated by complexities involved with patching; in particular, the various incentives and disincentives for organizations and their cybersecurity teams when deciding whether to patch. In this paper, we take a fresh look at the question of patching and critically explore why organizations and IT/security teams choose to patch or decide against it (either explicitly or due to inaction). We tackle this question by aggregating and synthesizing prominent research and industry literature on the incentives and disincentives for patching, specifically considering the human aspects in the context of these motives. Through this research, this study identifies key motivators such as organizational needs, the IT/security team's relationship with vendors, and legal and regulatory requirements placed on the business and its staff. There are also numerous significant reasons discovered for why the decision is taken not to patch, including limited resources (e.g., person-power), challenges with manual patch management tasks, human error, bad patches, unreliable patch management tools, and the perception that related vulnerabilities would not be exploited. These disincentives, in combination with the motivators above, highlight the difficult balance that organizations and their security teams need to maintain on a daily basis. Finally, we conclude by discussing implications of these findings and important future considerations.
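To make the balance described in the abstract concrete, below is a minimal sketch of a patch-versus-defer heuristic. Every factor name, weight, and threshold here is an illustrative assumption rather than anything proposed in the paper; real prioritization would also draw on vulnerability scoring and organizational policy.

```python
# Illustrative sketch only: factors and weights are hypothetical, chosen to
# mirror the motivators and disincentives the paper identifies.
from dataclasses import dataclass

@dataclass
class PatchContext:
    regulatory_pressure: float   # 0..1, legal/regulatory requirements
    vendor_trust: float          # 0..1, relationship with / confidence in vendor
    exploit_likelihood: float    # 0..1, perceived chance of exploitation
    staff_capacity: float        # 0..1, available person-power for patching
    patch_reliability: float     # 0..1, track record of this vendor's patches
    automation_level: float      # 0..1, how automated patch management is

def patch_decision(ctx: PatchContext, threshold: float = 0.0) -> str:
    """Weigh motivators against disincentives; a positive score favors patching."""
    motivators = (
        0.4 * ctx.regulatory_pressure
        + 0.2 * ctx.vendor_trust
        + 0.4 * ctx.exploit_likelihood
    )
    disincentives = (
        0.4 * (1 - ctx.staff_capacity)       # limited resources
        + 0.3 * (1 - ctx.patch_reliability)  # fear of bad patches
        + 0.3 * (1 - ctx.automation_level)   # manual patch-management burden
    )
    score = motivators - disincentives
    return "patch" if score > threshold else "defer (and reassess)"

if __name__ == "__main__":
    ctx = PatchContext(regulatory_pressure=0.9, vendor_trust=0.6,
                       exploit_likelihood=0.3, staff_capacity=0.2,
                       patch_reliability=0.7, automation_level=0.4)
    print(patch_decision(ctx))  # "patch" under these assumed weights
```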
Related papers
- Realigning Incentives to Build Better Software: a Holistic Approach to Vendor Accountability [7.627207028377776]
We argue that the challenge around better quality software is due in no small part to a sequence of misaligned incentives.
Lack of liability means software vendors have every incentive to rush low-quality software onto the market.
This paper outlines a holistic technical and policy framework we believe is needed to incentivize better and more secure software development.
arXiv Detail & Related papers (2025-04-10T14:05:24Z)
- Towards Trustworthy GUI Agents: A Survey [64.6445117343499]
This survey examines the trustworthiness of GUI agents in five critical dimensions.
We identify major challenges such as vulnerability to adversarial attacks and cascading failure modes in sequential decision-making.
As GUI agents become more widespread, establishing robust safety standards and responsible development practices is essential.
arXiv Detail & Related papers (2025-03-30T13:26:00Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Position: A taxonomy for reporting and describing AI security incidents [57.98317583163334]
We argue that specific taxonomies are required to describe and report security incidents of AI systems.
Existing frameworks for either non-AI security or generic AI safety incident reporting are insufficient to capture the specific properties of AI security.
arXiv Detail & Related papers (2024-12-19T13:50:26Z)
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- A Mixed-Methods Study of Open-Source Software Maintainers On Vulnerability Management and Platform Security Features [6.814841205623832]
This paper investigates the perspectives of OSS maintainers on vulnerability management and platform security features. We find that supply chain mistrust and a lack of automation for vulnerability management are the most challenging issues. Barriers to adopting platform security features include a lack of awareness and the perception that they are not necessary.
arXiv Detail & Related papers (2024-09-12T00:15:03Z)
- From Chaos to Consistency: The Role of CSAF in Streamlining Security Advisories [4.850201420807801]
The Common Security Advisory Format (CSAF) aims to bring security advisories into a standardized format.
Our results show that CSAF is currently rarely used.
One of the main reasons is that systems are not yet designed for automation.
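As a concrete illustration of the standardized format this paper studies, here is a minimal sketch of a CSAF-style advisory skeleton. The field names follow the public OASIS CSAF 2.0 schema, but the values are placeholders and the document is not intended to be schema-complete.

```python
# Illustrative sketch: a minimal CSAF-style advisory skeleton serialized as JSON.
# Field names follow the public OASIS CSAF 2.0 schema; values are placeholders.
import json

advisory = {
    "document": {
        "category": "csaf_security_advisory",
        "csaf_version": "2.0",
        "title": "Example: fix for a hypothetical flaw in ExampleProduct",
        "publisher": {
            "category": "vendor",
            "name": "Example Vendor",
            "namespace": "https://example.com",
        },
        "tracking": {
            "id": "EXAMPLE-2024-0001",
            "status": "final",
            "version": "1",
            "initial_release_date": "2024-08-27T10:00:00Z",
            "current_release_date": "2024-08-27T10:00:00Z",
            "revision_history": [
                {"date": "2024-08-27T10:00:00Z", "number": "1",
                 "summary": "Initial release"}
            ],
        },
    },
}

# Machine-readable output is the point: downstream tooling can ingest this
# directly instead of scraping free-text advisories.
print(json.dumps(advisory, indent=2))
```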
arXiv Detail & Related papers (2024-08-27T10:22:59Z)
- Security in IS and social engineering -- an overview and state of the art [0.6345523830122166]
The digitization of all processes and the opening to IoT devices has fostered the emergence of a new form of crime, i.e. cybercrime.
The maliciousness of such attacks lies in the fact that they turn users into facilitators of cyber-attacks, to the point of being perceived as the "weak link" of cybersecurity.
Knowing how to anticipate attacks, identify weak signals and outliers, and detect and react quickly to computer crime are therefore priority issues, requiring an approach based on prevention and cooperation.
arXiv Detail & Related papers (2024-06-17T13:25:27Z)
- Artificial Intelligence in Industry 4.0: A Review of Integration Challenges for Industrial Systems [45.31340537171788]
Cyber-Physical Systems (CPS) generate vast data sets that can be leveraged by Artificial Intelligence (AI) for applications including predictive maintenance and production planning. Despite the demonstrated potential of AI, its widespread adoption in sectors like manufacturing remains limited.
arXiv Detail & Related papers (2024-05-28T20:54:41Z)
- Red-Teaming for Generative AI: Silver Bullet or Security Theater? [42.35800543892003]
We argue that while red-teaming may be a valuable big-tent idea for characterizing GenAI harm mitigations, industry may effectively apply red-teaming and other strategies behind closed doors to safeguard AI.
To move toward a more robust toolbox of evaluations for generative AI, we synthesize our recommendations into a question bank meant to guide and scaffold future AI red-teaming practices.
arXiv Detail & Related papers (2024-01-29T05:46:14Z)
- Just-in-Time Detection of Silent Security Patches [7.840762542485285]
Security patches can be silent, i.e., they do not always come with comprehensive advisories such as CVEs.
This lack of transparency leaves users oblivious to available security updates, providing ample opportunity for attackers to exploit unpatched vulnerabilities.
We propose to leverage large language models (LLMs) to augment patch information with generated code change explanations.
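A minimal sketch of that idea follows, assuming a generic text-completion client: `llm_complete` is a hypothetical placeholder for whatever LLM API is available, and the prompt wording is an assumption, not the paper's actual pipeline.

```python
# Illustrative sketch: augmenting a silent patch (a raw diff) with a generated
# explanation. `llm_complete` is a placeholder for any str -> str LLM client.
from typing import Callable

PROMPT_TEMPLATE = """You are reviewing a code change.
Explain in 2-3 sentences what the following diff changes and whether it
looks like a security fix (and why):

{diff}
"""

def explain_patch(diff: str, llm_complete: Callable[[str], str]) -> str:
    """Return a natural-language explanation of a code change."""
    return llm_complete(PROMPT_TEMPLATE.format(diff=diff))

# Usage (wrap any client as a str -> str function):
#   explanation = explain_patch(diff_text, my_llm_client)
```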
arXiv Detail & Related papers (2023-12-02T22:53:26Z)
- AI for IT Operations (AIOps) on Cloud Platforms: Reviews, Opportunities and Challenges [60.56413461109281]
Artificial Intelligence for IT operations (AIOps) aims to combine the power of AI with the big data generated by IT Operations processes.
We discuss in depth the key types of data emitted by IT Operations activities, the scale and challenges in analyzing them, and where they can be helpful.
We categorize the key AIOps tasks as - incident detection, failure prediction, root cause analysis and automated actions.
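As a toy illustration of the incident-detection task named above, the sketch below flags points in a metric stream whose rolling z-score deviates sharply from recent history. The window size and threshold are assumptions; production AIOps systems use far richer models.

```python
# Illustrative sketch of AIOps-style incident detection via rolling z-scores.
from collections import deque
from statistics import mean, stdev

def detect_incidents(metric_stream, window: int = 30, z_threshold: float = 3.0):
    """Yield (index, value) for points that deviate sharply from recent history."""
    history = deque(maxlen=window)
    for i, value in enumerate(metric_stream):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                yield i, value
        history.append(value)  # update baseline after checking the point

# Usage: alerts = list(detect_incidents(request_latencies_ms))
```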
arXiv Detail & Related papers (2023-04-10T15:38:12Z)
- Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges [0.76146285961466]
How to securely develop machine learning-based modern software systems (MLBSS) remains a major challenge.
Latent vulnerabilities and privacy issues exposed to external users and attackers are largely neglected and hard to identify.
We consider that security for machine learning-based software systems may arise from inherent system defects or external adversarial attacks.
arXiv Detail & Related papers (2022-01-12T23:20:25Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
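One pitfall class this paper discusses is data snooping, e.g., randomly splitting temporal security data so the model effectively trains on the future. Below is a minimal sketch of the time-respecting alternative; the column name and split fraction are hypothetical.

```python
# Illustrative sketch: a time-aware split for temporal security data.
# A random split (shuffle=True) can leak future information into training.
import pandas as pd

def temporal_train_test_split(df: pd.DataFrame, time_col: str = "timestamp",
                              train_frac: float = 0.8):
    """Train on the earliest fraction of events, test on the rest."""
    df = df.sort_values(time_col)
    cut = int(len(df) * train_frac)
    return df.iloc[:cut], df.iloc[cut:]

# Anti-pattern for temporal data (for contrast):
#   from sklearn.model_selection import train_test_split
#   train, test = train_test_split(df, test_size=0.2, shuffle=True)
```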
arXiv Detail & Related papers (2020-10-19T13:09:31Z)