Vulnerability Disclosure or Notification? Best Practices for Reaching Stakeholders at Scale
- URL: http://arxiv.org/abs/2506.14323v1
- Date: Tue, 17 Jun 2025 09:04:26 GMT
- Title: Vulnerability Disclosure or Notification? Best Practices for Reaching Stakeholders at Scale
- Authors: Ting-Han Chen, Jeroen van der Ham-de Vos
- Abstract summary: Security researchers are interested in security vulnerabilities, but these vulnerabilities create risks for stakeholders. Coordinated Vulnerability Disclosure has been an accepted best practice for many years in disclosing newly discovered vulnerabilities. Vulnerability notification has some similarities with vulnerability disclosure but should be distinguished from it, as it poses different challenges and requires a different approach.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Security researchers are interested in security vulnerabilities, but these vulnerabilities create risks for stakeholders. Coordinated Vulnerability Disclosure has been an accepted best practice for many years in disclosing newly discovered vulnerabilities. This practice has mostly worked, but it can become challenging when many different parties are involved. There has also been research into known vulnerabilities, using datasets or active scans to discover how many machines remain vulnerable. Ethical guidelines suggest that researchers also make an effort to notify the owners of these machines. We posit that this is not vulnerability disclosure but rather a distinct practice: vulnerability notification. It shares some similarities with vulnerability disclosure but should be distinguished from it, as it poses different challenges and requires a different approach. Based on our earlier disclosure experience and on prior work documenting disclosure and notification operations, we provide a meta-review of vulnerability disclosure and notification to observe how strategies have shifted in recent years. We assess how researchers initiated their messaging and examine the outcomes. We then compile best practices for existing disclosure guidelines and for notification operations.
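As a purely illustrative sketch (not a method from the paper), the snippet below shows one step that large-scale notification operations commonly automate: resolving an IP address observed as vulnerable to an abuse contact via RDAP. The rdap.org bootstrap redirector and the top-level-entity-only traversal are simplifying assumptions; real campaigns also handle nested entities, WHOIS fallbacks, bounces, and national CERT routing.

```python
# Hedged sketch: find abuse-contact emails for an IP via RDAP, using
# only the Python standard library. rdap.org is a public bootstrap
# redirector; 192.0.2.1 is a documentation-range placeholder address.
import json
import urllib.request

def abuse_contacts(ip: str) -> list[str]:
    """Return email addresses of top-level entities with the 'abuse' role."""
    with urllib.request.urlopen(f"https://rdap.org/ip/{ip}") as resp:
        data = json.load(resp)
    emails = []
    for entity in data.get("entities", []):
        if "abuse" not in entity.get("roles", []):
            continue
        # vCard properties look like ["email", {}, "text", "abuse@example.net"]
        for prop in entity.get("vcardArray", [None, []])[1]:
            if prop[0] == "email":
                emails.append(prop[3])
    return emails

if __name__ == "__main__":
    print(abuse_contacts("192.0.2.1"))
```

In practice the hard part the paper studies is not the lookup but what follows: choosing a sender identity, wording the message, and measuring whether recipients actually remediate.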
Related papers
- Identity Theft in AI Conference Peer Review [50.18240135317708]
We discuss newly uncovered cases of identity theft in the scientific peer-review process within artificial intelligence (AI) research. We detail how dishonest researchers exploit the peer-review system by creating fraudulent reviewer profiles to manipulate paper evaluations.
arXiv Detail & Related papers (2025-08-06T02:36:52Z) - DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective [59.66984417026933]
We introduce a novel taxonomy, classifying existing methods based on their reliance on internal features (IF) (inherent to the data) versus external features (EF) (artificially introduced for auditing). We formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intended to falsely implicate an unused dataset. Building on the understanding of existing methods and attack objectives, we further propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery. Our benchmark, DATABench, comprises 17 evasion attacks, 5 forgery attacks, and 9
arXiv Detail & Related papers (2025-07-08T03:07:15Z) - On Categorizing Open Source Software Security Vulnerability Reporting Mechanisms on GitHub [1.7174932174564534]
Open-source projects are essential to software development, but publicly disclosing vulnerabilities without fixes increases the risk of exploitation. The Open Source Security Foundation (OpenSSF) addresses this issue by promoting robust security policies to enhance project security. Current research reveals that many projects perform poorly on OpenSSF criteria, indicating a need for stronger security practices.
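As a hedged illustration of one reporting mechanism such studies categorize, the sketch below checks whether a repository publishes a security policy via the public GitHub REST contents API. The three candidate paths mirror the locations where GitHub recognizes SECURITY.md; unauthenticated requests are rate-limited, so this is a sketch rather than a survey-grade tool.

```python
# Hedged sketch: detect a SECURITY.md security policy in a GitHub repo
# using the REST contents API (no third-party dependencies).
import urllib.error
import urllib.request

def has_security_policy(owner: str, repo: str) -> bool:
    for path in ("SECURITY.md", ".github/SECURITY.md", "docs/SECURITY.md"):
        url = f"https://api.github.com/repos/{owner}/{repo}/contents/{path}"
        req = urllib.request.Request(url, headers={"User-Agent": "policy-check"})
        try:
            with urllib.request.urlopen(req):
                return True  # 200 OK means the file exists at this path
        except urllib.error.HTTPError as err:
            if err.code != 404:
                raise  # e.g. 403 rate limiting should not read as "absent"
    return False

if __name__ == "__main__":
    print(has_security_policy("ossf", "scorecard"))  # example repository
```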
arXiv Detail & Related papers (2025-02-11T09:23:24Z) - Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks. We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance. Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
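A toy demonstration (ours, not the paper's) of the second threat makes the mechanism concrete: appending the query's own terms to an off-topic passage inflates a purely lexical overlap score. The paper's targets are neural retrievers, rerankers, and LLM judges; term overlap is used here only because the effect is easy to inspect.

```python
# Toy sketch: query-term injection inflating a lexical relevance score.
def overlap_score(query: str, passage: str) -> float:
    q = set(query.lower().split())
    return len(q & set(passage.lower().split())) / len(q)

query = "treatment for migraine headaches"
benign = "our clinic offers physiotherapy for sports injuries"
injected = benign + " treatment for migraine headaches"  # injected query copy

print(overlap_score(query, benign))    # 0.25 (only "for" overlaps)
print(overlap_score(query, injected))  # 1.0 after injection
```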
arXiv Detail & Related papers (2025-01-30T18:02:15Z) - Investigating Vulnerability Disclosures in Open-Source Software Using Bug Bounty Reports and Security Advisories [6.814841205623832]
We conduct an empirical study on 3,798 reviewed GitHub security advisories and 4,033 disclosed OSS bug bounty reports. We are the first to determine the explicit process describing how OSS vulnerabilities propagate from security advisories and bug bounty reports.
arXiv Detail & Related papers (2025-01-29T16:36:41Z) - Discovery of Timeline and Crowd Reaction of Software Vulnerability Disclosures [47.435076500269545]
Apache Log4J was found to be vulnerable to remote code execution attacks.
More than 35,000 packages were forced to update their Log4J libraries to the latest version.
It is practically reasonable for software developers to update their third-party libraries whenever software vendors have released a vulnerability-free version.
arXiv Detail & Related papers (2024-11-12T01:55:51Z) - Rethinking the Vulnerabilities of Face Recognition Systems:From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed that FRS are vulnerable to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z) - Measuring the Exploitation of Weaknesses in the Wild [0.0]
A weakness is a bug or fault type that can be exploited through an operation that results in a security-relevant error.
This work introduces a simple metric to determine the probability of a weakness being exploited in the wild for any 30-day window.
Our analysis reveals that 92% of the weaknesses are not being constantly exploited.
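The paper defines its own probability metric; as a loudly hypothetical reading, the sketch below estimates it as the fraction of daily-sliding 30-day windows in which at least one exploitation event for a weakness was observed. The event dates are fabricated for illustration.

```python
# Hypothetical sketch: fraction of sliding 30-day windows containing at
# least one observed exploitation event (one plausible reading of the
# paper's metric, not its actual definition).
from datetime import date, timedelta

def window_exploitation_rate(events, start, end, window_days=30):
    n_windows = (end - start).days - window_days + 1
    hits = 0
    for offset in range(n_windows):
        lo = start + timedelta(days=offset)
        hi = lo + timedelta(days=window_days)
        if any(lo <= e < hi for e in events):
            hits += 1
    return hits / n_windows

events = [date(2024, 1, 5), date(2024, 3, 20)]  # made-up sightings
print(window_exploitation_rate(events, date(2024, 1, 1), date(2024, 6, 30)))
```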
arXiv Detail & Related papers (2024-05-02T13:49:51Z) - Attack Techniques and Threat Identification for Vulnerabilities [1.1689657956099035]
Prioritization and focus become critical for security teams, who must spend their limited time on the highest-risk vulnerabilities.
In this work, we use machine learning and natural language processing techniques, as well as several publicly available data sets.
We first map the vulnerabilities to a standard set of common weaknesses, and then map the common weaknesses to attack techniques.
This approach yields a Mean Reciprocal Rank (MRR) of 0.95, comparable to the figures reported for state-of-the-art systems.
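For readers unfamiliar with the metric: MRR averages the reciprocal of the 1-based rank at which the correct answer appears in each ranked list. A minimal sketch, with invented example rankings over real ATT&CK technique IDs:

```python
# Minimal sketch of Mean Reciprocal Rank; the example rankings are invented.
def mean_reciprocal_rank(ranked_lists, gold):
    total = 0.0
    for preds, answer in zip(ranked_lists, gold):
        if answer in preds:
            total += 1.0 / (preds.index(answer) + 1)  # 1-based rank
    return total / len(gold)

preds = [["T1190", "T1059"], ["T1059", "T1190"], ["T1003", "T1059"]]
gold = ["T1190", "T1190", "T1059"]
print(mean_reciprocal_rank(preds, gold))  # (1 + 1/2 + 1/2) / 3 ≈ 0.667
```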
arXiv Detail & Related papers (2022-06-22T15:27:49Z) - Early Detection of Security-Relevant Bug Reports using Machine Learning: How Far Are We? [6.438136820117887]
In a typical maintenance scenario, security-relevant bug reports are prioritised by the development team when preparing corrective patches.
Open security-relevant bug reports can become a critical leak of sensitive information that attackers can leverage to perform zero-day attacks.
In recent years, approaches for the detection of security-relevant bug reports based on machine learning have been reported with promising performance.
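The cited approaches vary, but a common baseline shape is TF-IDF features over report text with a linear classifier. A hedged sketch under that assumption, with a fabricated toy training set (real studies train on labeled issue trackers):

```python
# Hedged baseline sketch: TF-IDF + logistic regression for flagging
# security-relevant bug reports. Toy data; not any paper's exact model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "buffer overflow in parser allows remote code execution",
    "authentication bypass when session token is reused",
    "typo in settings dialog label",
    "dark mode colors look wrong on the dashboard",
]
labels = [1, 1, 0, 0]  # 1 = security-relevant

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reports, labels)
print(model.predict(["crash caused by heap overflow in image decoder"]))
```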
arXiv Detail & Related papers (2021-12-19T11:30:29Z) - Certifiers Make Neural Networks Vulnerable to Availability Attacks [70.69104148250614]
We show for the first time that fallback strategies can be deliberately triggered by an adversary.
In addition to naturally occurring abstains for some inputs and perturbations, the adversary can use training-time attacks to deliberately trigger the fallback.
We design two novel availability attacks, which show the practical relevance of these threats.
arXiv Detail & Related papers (2021-08-25T15:49:10Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.