Systematic literature review of the trust reinforcement mechanisms exist in package ecosystems
- URL: http://arxiv.org/abs/2407.02522v1
- Date: Wed, 26 Jun 2024 13:21:24 GMT
- Title: Systematic literature review of the trust reinforcement mechanisms exist in package ecosystems
- Authors: Angel Temelko, Fang Hou, Siamak Farshidi, Slinger Jansen
- Abstract summary: The main aim of this review was to pinpoint prevailing trends, methods, and concerns in trust tools for the present npm environment.
By analyzing earlier studies, our intention was to spot any overlooked areas and steer our research to address them.
- Score: 1.76118478012913
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We conducted a thorough SLR to better grasp the challenges and possible solutions associated with existing npm security tools. Our goal was to delve into documented experiences and findings. Specifically, we were keen to learn about the motivations behind choosing third-party packages, software engineers' responses to warning messages, and their overall understanding of security issues. The main aim of this review was to pinpoint prevailing trends, methods, and concerns in trust tools for the present npm environment. Furthermore, we sought to understand the complexities of integrating SECO into platforms such as npm. By analyzing earlier studies, our intention was to spot any overlooked areas and steer our research to address them.
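As background for the "warning messages" and npm security tooling that the abstract refers to, the following is a minimal, illustrative sketch, not taken from the paper: it tallies the findings that `npm audit --json` reports for a project, grouped by severity. It assumes the JSON report shape used by npm 7 and later (a top-level "vulnerabilities" object keyed by package name); the helper name and typing are ours.

```typescript
// Hypothetical sketch: count npm audit warnings by severity for one project.
// Assumes npm >= 7, whose JSON report nests findings under "vulnerabilities".
import { execFileSync } from "node:child_process";

function summarizeAuditWarnings(projectDir: string): Record<string, number> {
  let raw = "{}";
  try {
    raw = execFileSync("npm", ["audit", "--json"], {
      cwd: projectDir,
      encoding: "utf8",
    });
  } catch (err) {
    // npm exits non-zero when vulnerabilities are found; the JSON report
    // is still written to stdout, so recover it from the thrown error.
    const stdout = (err as { stdout?: string }).stdout;
    if (stdout) raw = stdout.toString();
  }
  const report = JSON.parse(raw) as {
    vulnerabilities?: Record<string, { severity: string }>;
  };
  const counts: Record<string, number> = {};
  for (const finding of Object.values(report.vulnerabilities ?? {})) {
    counts[finding.severity] = (counts[finding.severity] ?? 0) + 1;
  }
  return counts;
}

// Example: print how many low/moderate/high/critical warnings this project triggers.
console.log(summarizeAuditWarnings(process.cwd()));
```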
Related papers
- A Survey on Trustworthy LLM Agents: Threats and Countermeasures [67.23228612512848]
Large Language Models (LLMs) and Multi-agent Systems (MAS) have significantly expanded the capabilities of LLM ecosystems.
We propose the TrustAgent framework, a comprehensive study on the trustworthiness of agents.
arXiv Detail & Related papers (2025-03-12T08:42:05Z)
- A Survey of Safety on Large Vision-Language Models: Attacks, Defenses and Evaluations [127.52707312573791]
This survey provides a comprehensive analysis of LVLM safety, covering key aspects such as attacks, defenses, and evaluation methods.
We introduce a unified framework that integrates these interrelated components, offering a holistic perspective on the vulnerabilities of LVLMs.
We conduct a set of safety evaluations on the latest LVLM, DeepSeek Janus-Pro, and provide a theoretical analysis of the results.
arXiv Detail & Related papers (2025-02-14T08:42:43Z)
- LLMs in Software Security: A Survey of Vulnerability Detection Techniques and Insights [12.424610893030353]
Large Language Models (LLMs) are emerging as transformative tools for software vulnerability detection.
This paper provides a detailed survey of LLMs in vulnerability detection.
We address challenges such as cross-language vulnerability detection, multimodal data integration, and repository-level analysis.
arXiv Detail & Related papers (2025-02-10T21:33:38Z)
- A Survey on Point-of-Interest Recommendation: Models, Architectures, and Security [40.18083295666298]
Point-of-Interest (POI) recommendation systems are crucial for enriching user experiences, enabling personalized interactions, and optimizing decision-making processes in the digital landscape.
We systematically examine the transition from traditional models to advanced techniques such as large language models.
We address the increasing importance of security, examining potential vulnerabilities and privacy-preserving approaches.
arXiv Detail & Related papers (2024-10-03T04:11:42Z)
- Towards Threat Modelling of IoT Context-Sharing Platforms [4.098759138493994]
We propose a framework for threat modelling and security analysis of a generic IoT context-sharing solution.
We identify significant security challenges in the design of IoT context-sharing platforms.
Our threat modelling provides an in-depth analysis of the techniques and sub-techniques adversaries may use to exploit these systems.
arXiv Detail & Related papers (2024-08-22T02:41:06Z)
- SoK: Comprehensive Security Overview, Challenges, and Future Directions of Voice-Controlled Systems [10.86045604075024]
The integration of Voice Control Systems into smart devices accentuates the importance of their security.
Current research has uncovered numerous vulnerabilities in VCS, presenting significant risks to user privacy and security.
This study introduces a hierarchical model structure for VCS, providing a novel lens for categorizing and analyzing existing literature in a systematic manner.
We classify attacks based on their technical principles and thoroughly evaluate various attributes, such as their methods, targets, vectors, and behaviors.
arXiv Detail & Related papers (2024-05-27T12:18:46Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools such as the risk rating methodology used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
- Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science [65.77763092833348]
Intelligent agents powered by large language models (LLMs) have demonstrated substantial promise in autonomously conducting experiments and facilitating scientific discoveries across various disciplines.
While their capabilities are promising, these agents also introduce novel vulnerabilities that demand careful consideration for safety.
This paper conducts a thorough examination of vulnerabilities in LLM-based agents within scientific domains, shedding light on potential risks associated with their misuse and emphasizing the need for safety measures.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- The Art of Defending: A Systematic Evaluation and Analysis of LLM Defense Strategies on Safety and Over-Defensiveness [56.174255970895466]
Large Language Models (LLMs) play an increasingly pivotal role in natural language processing applications.
This paper presents the Safety and Over-Defensiveness Evaluation (SODE) benchmark.
arXiv Detail & Related papers (2023-12-30T17:37:06Z)
- Zero-Knowledge Proof-based Verifiable Decentralized Machine Learning in Communication Network: A Comprehensive Survey [31.111210313340454]
Decentralized approaches to machine learning introduce challenges related to trust and verifiability.
We present a comprehensive review of Zero-Knowledge Proof-based Verifiable Machine Learning (ZKP-VML).
arXiv Detail & Related papers (2023-10-23T12:15:23Z)
- Representation Engineering: A Top-Down Approach to AI Transparency [132.0398250233924]
We identify and characterize the emerging area of representation engineering (RepE).
RepE places population-level representations, rather than neurons or circuits, at the center of analysis.
We showcase how these methods can provide traction on a wide range of safety-relevant problems.
arXiv Detail & Related papers (2023-10-02T17:59:07Z)
- Robust Recommender System: A Survey and Future Directions [58.87305602959857]
We first present a taxonomy to organize current techniques for withstanding malicious attacks and natural noise.
We then explore state-of-the-art methods in each category, including fraudster detection, adversarial training, and certifiable robust training for defending against malicious attacks.
We discuss robustness across varying recommendation scenarios and its interplay with other properties like accuracy, interpretability, privacy, and fairness.
arXiv Detail & Related papers (2023-09-05T08:58:46Z)
- New Challenges in Reinforcement Learning: A Survey of Security and Privacy [26.706957408693363]
Reinforcement learning (RL) is one of the most important branches of AI.
RL has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics.
Some of these applications and systems have been shown to be vulnerable to security or privacy attacks.
arXiv Detail & Related papers (2022-12-31T12:30:43Z)
- Q-Mixing Network for Multi-Agent Pathfinding in Partially Observable Grid Environments [62.997667081978825]
We consider the problem of multi-agent navigation in partially observable grid environments.
We suggest a reinforcement learning approach in which the agents first learn policies that map observations to actions and then follow these policies to reach their goals.
arXiv Detail & Related papers (2021-08-13T09:44:47Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.