Lessons from the Long Tail: Analysing Unsafe Dependency Updates across
Software Ecosystems
- URL: http://arxiv.org/abs/2309.04197v1
- Date: Fri, 8 Sep 2023 08:17:09 GMT
- Title: Lessons from the Long Tail: Analysing Unsafe Dependency Updates across
Software Ecosystems
- Authors: Supatsara Wattanakriengkrai, Raula Gaikovina Kula, Christoph Treude,
Kenichi Matsumoto
- Abstract summary: We present preliminary data based on three representative samples from a population of 88,416 pull requests (PRs).
We identify unsafe dependency updates (i.e., any pull request that risks being unsafe during runtime).
This includes developing best practices to address unsafe dependency updates not only in top-tier libraries but throughout the ecosystem.
- Score: 11.461455369774765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A risk in adopting third-party dependencies into an application is their
potential to serve as a doorway for malicious code to be injected (most often
unknowingly). While many initiatives from both industry and research
communities focus on the most critical dependencies (i.e., those most depended
upon within the ecosystem), little is known about whether the rest of the
ecosystem suffers the same fate. Our vision is to promote and establish safer
practices throughout the ecosystem. To motivate our vision, in this paper, we
present preliminary data based on three representative samples from a
population of 88,416 pull requests (PRs) and identify unsafe dependency updates
(i.e., any pull request that risks being unsafe during runtime), which clearly
shows that unsafe dependency updates are not limited to highly impactful
libraries. To draw attention to the long tail, we propose a research agenda
comprising six key research questions that further explore how to safeguard
against these unsafe activities. This includes developing best practices to
address unsafe dependency updates not only in top-tier libraries but throughout
the entire ecosystem.
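The abstract's notion of an unsafe dependency update can be made concrete with a small sketch. The function below is entirely illustrative (not from the paper): it flags npm-style dependencies declared with floating version ranges, one common way a compromised new release can flow into an application at install time.

```python
import json

def find_floating_ranges(manifest_text):
    """Return dependencies in an npm-style manifest whose version
    specifiers are not pinned to an exact version (toy heuristic)."""
    manifest = json.loads(manifest_text)
    floating = {}
    for name, spec in manifest.get("dependencies", {}).items():
        # '^', '~', and comparison ranges all float; so do '*' and 'latest'.
        if spec.startswith(("^", "~", ">", "<")) or "*" in spec or spec == "latest":
            floating[name] = spec
    return floating

example = '{"dependencies": {"left-pad": "^1.3.0", "lodash": "4.17.21"}}'
print(find_floating_ranges(example))  # {'left-pad': '^1.3.0'}
```

A real tool would also need to inspect lockfiles and transitive dependencies; this only shows the direct-dependency case the abstract is concerned with.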
Related papers
- On Security Weaknesses and Vulnerabilities in Deep Learning Systems [32.14068820256729]
We specifically look into deep learning (DL) framework and perform the first systematic study of vulnerabilities in DL systems.
We propose a two-stream data analysis framework to explore vulnerability patterns from various databases.
We conducted a large-scale empirical study of 3,049 DL vulnerabilities to better understand the patterns of vulnerability and the challenges in fixing them.
arXiv Detail & Related papers (2024-06-12T23:04:13Z)
- See to Believe: Using Visualization To Motivate Updating Third-party Dependencies [1.7914660044009358]
Security vulnerabilities introduced by applications using third-party dependencies are on the increase.
Developers are wary of library updates, even those that fix vulnerabilities, citing reasons such as being unaware of the update or judging that the migration effort outweighs the benefit.
In this paper, we hypothesize that the dependency graph visualization (DGV) approach will motivate developers to update.
arXiv Detail & Related papers (2024-05-15T03:57:27Z)
- A Survey of Third-Party Library Security Research in Application Software [3.280510821619164]
With the widespread use of third-party libraries, associated security risks and potential vulnerabilities are increasingly apparent.
Malicious attackers can exploit these vulnerabilities to infiltrate systems, execute unauthorized operations, or steal sensitive information.
Research on third-party libraries in software has therefore become paramount to addressing this growing security challenge.
arXiv Detail & Related papers (2024-04-27T16:35:02Z)
- PsySafe: A Comprehensive Framework for Psychological-based Attack, Defense, and Evaluation of Multi-agent System Safety [73.51336434996931]
Multi-agent systems, when enhanced with Large Language Models (LLMs), exhibit profound capabilities in collective intelligence.
However, the potential misuse of this intelligence for malicious purposes presents significant risks.
We propose a framework (PsySafe) grounded in agent psychology, focusing on identifying how dark personality traits in agents can lead to risky behaviors.
Our experiments reveal several intriguing phenomena, such as the collective dangerous behaviors among agents, agents' self-reflection when engaging in dangerous behavior, and the correlation between agents' psychological assessments and dangerous behaviors.
arXiv Detail & Related papers (2024-01-22T12:11:55Z)
- Safeguarded Progress in Reinforcement Learning: Safe Bayesian Exploration for Control Policy Synthesis [63.532413807686524]
This paper addresses the problem of maintaining safety during training in Reinforcement Learning (RL).
We propose a new architecture that handles the trade-off between efficient progress and safety during exploration.
arXiv Detail & Related papers (2023-12-18T16:09:43Z)
- Dependency Practices for Vulnerability Mitigation [4.710141711181836]
We analyze more than 450 vulnerabilities in the npm ecosystem to understand why dependent packages remain vulnerable.
We identify over 200,000 npm packages that are infected through their dependencies.
We use 9 features to build a prediction model that identifies packages that quickly adopt the vulnerability fix and prevent further propagation of vulnerabilities.
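The paper's nine features are not listed in this summary, so the sketch below is purely hypothetical: a toy heuristic score standing in for a trained prediction model, built from invented maintenance signals (release frequency, commit recency, open-issue ratio) rather than the paper's actual feature set.

```python
def quick_fix_adoption_score(release_freq_per_year, days_since_last_commit,
                             open_issue_ratio):
    """Toy heuristic: higher score = more likely to adopt a
    vulnerability fix quickly. Not the paper's model."""
    score = 0.0
    score += min(release_freq_per_year / 12.0, 1.0)       # frequent releases
    score += 1.0 if days_since_last_commit < 30 else 0.0  # recently active repo
    score += 1.0 - min(open_issue_ratio, 1.0)             # issues get closed
    return score / 3.0  # normalise to [0, 1]

print(quick_fix_adoption_score(24, 7, 0.2))   # actively maintained -> near 1
print(quick_fix_adoption_score(1, 400, 0.9))  # stale package -> near 0
```

A real model would be fitted to labelled adoption data; the point here is only the shape of the task: map per-package signals to a fix-adoption likelihood.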
arXiv Detail & Related papers (2023-10-11T19:48:46Z)
- Analyzing Maintenance Activities of Software Libraries [65.268245109828]
Industrial applications heavily integrate open-source software libraries nowadays.
I want to introduce an automatic monitoring approach for industrial applications to identify open-source dependencies that show negative signs regarding their current or future maintenance activities.
arXiv Detail & Related papers (2023-06-09T16:51:25Z)
- Promises and Perils of Mining Software Package Ecosystem Data [10.787686237395816]
Third-party packages have led to the emergence of large software package ecosystems with a maze of inter-dependencies.
Understanding the infrastructure and dynamics of package ecosystems has given rise to approaches for better code reuse, automated updates, and the avoidance of vulnerabilities.
In this chapter, we review promises and perils of mining the rich data related to software package ecosystems available to software engineering researchers.
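The "maze of inter-dependencies" these ecosystems exhibit is, concretely, a directed graph. As an illustrative sketch (package names invented), resolving a package's transitive dependency set is a plain breadth-first traversal:

```python
from collections import deque

def transitive_deps(graph, package):
    """Breadth-first traversal of a dependency graph, where an edge
    A -> B means "A depends on B". Returns all transitive dependencies."""
    seen = set()
    queue = deque(graph.get(package, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(graph.get(dep, []))
    return seen

ecosystem = {
    "app": ["express", "lodash"],
    "express": ["body-parser"],
    "body-parser": ["lodash"],
}
print(sorted(transitive_deps(ecosystem, "app")))
# ['body-parser', 'express', 'lodash']
```

The `seen` set makes the traversal robust to the cycles and diamond dependencies that real package ecosystems contain, which is exactly what makes mining them non-trivial.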
arXiv Detail & Related papers (2023-05-29T03:09:48Z)
- Evaluating Model-free Reinforcement Learning toward Safety-critical Tasks [70.76757529955577]
This paper revisits prior work in this scope from the perspective of state-wise safe RL.
We propose Unrolling Safety Layer (USL), a joint method that combines safety optimization and safety projection.
To facilitate further research in this area, we reproduce related algorithms in a unified pipeline and incorporate them into SafeRL-Kit.
arXiv Detail & Related papers (2022-12-12T06:30:17Z)
- Towards Safe Policy Improvement for Non-Stationary MDPs [48.9966576179679]
Many real-world problems of interest exhibit non-stationarity, and when stakes are high, the cost associated with a false stationarity assumption may be unacceptable.
We take the first steps towards ensuring safety, with high confidence, for smoothly-varying non-stationary decision problems.
Our proposed method extends a type of safe algorithm, called a Seldonian algorithm, through a synthesis of model-free reinforcement learning with time-series analysis.
arXiv Detail & Related papers (2020-10-23T20:13:51Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.