Security Vulnerabilities in Software Supply Chain for Autonomous Vehicles
- URL: http://arxiv.org/abs/2509.16899v1
- Date: Sun, 21 Sep 2025 03:22:05 GMT
- Title: Security Vulnerabilities in Software Supply Chain for Autonomous Vehicles
- Authors: Md Wasiul Haque, Md Erfan, Sagar Dasgupta, Md Rayhanur Rahman, Mizanur Rahman
- Abstract summary: We analyze security vulnerabilities in open-source software components in autonomous vehicles. The goal is to inform researchers, practitioners, and policymakers about the existing security flaws in the commonplace open-source software ecosystem in the AV domain.
- Score: 5.600944414742589
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The interest in autonomous vehicles (AVs) for critical missions, including transportation, rescue, surveillance, reconnaissance, and mapping, is growing rapidly due to their significant safety and mobility benefits. AVs consist of complex software systems that leverage artificial intelligence (AI), sensor fusion algorithms, and real-time data processing. Additionally, AVs are becoming increasingly reliant on open-source software supply chains, such as open-source packages, third-party software components, AI models, and third-party datasets. Software security best practices in the automotive sector are often an afterthought for developers. Thus, significant cybersecurity risks exist in the software supply chain of AVs, particularly when secure software development practices are not rigorously implemented. For example, Upstream's 2024 Automotive Cybersecurity Report states that 49.5% of cyberattacks in the automotive sector are related to exploiting security vulnerabilities in software systems. In this chapter, we analyze security vulnerabilities in open-source software components in AVs. We utilize static analyzers on popular open-source AV software, such as Autoware, Apollo, and openpilot. Specifically, this chapter covers: (1) prevalent software security vulnerabilities of AVs; and (2) a comparison of static analyzer outputs for different open-source AV repositories. The goal is to inform researchers, practitioners, and policymakers about the existing security flaws in the commonplace open-source software ecosystem in the AV domain. The findings would emphasize the necessity of security best practices earlier in the software development lifecycle to reduce cybersecurity risks, thereby ensuring system reliability, safeguarding user data, and maintaining public trust in an increasingly automated world.
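The chapter's core comparison, tallying static-analyzer findings across repositories such as Autoware, Apollo, and openpilot, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's actual methodology: the finding records, field names, and counts are hypothetical placeholders, and real analyzer output (e.g., from a SARIF or JSON report) would need its own parsing step.

```python
from collections import Counter

# Hypothetical static-analyzer findings; the schema (repo/severity/cwe)
# is illustrative and does not match any specific tool's output format.
findings = [
    {"repo": "Autoware", "severity": "high", "cwe": "CWE-119"},
    {"repo": "Autoware", "severity": "medium", "cwe": "CWE-476"},
    {"repo": "Apollo", "severity": "high", "cwe": "CWE-787"},
    {"repo": "openpilot", "severity": "low", "cwe": "CWE-563"},
    {"repo": "Apollo", "severity": "high", "cwe": "CWE-119"},
]

def tally(findings):
    """Count findings per (repo, severity) pair for cross-repo comparison."""
    return Counter((f["repo"], f["severity"]) for f in findings)

counts = tally(findings)
print(counts[("Apollo", "high")])  # 2
```

A table of such per-repository, per-severity counts is one straightforward way to present the kind of cross-repository comparison the chapter describes.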
Related papers
- ORCA -- An Automated Threat Analysis Pipeline for O-RAN Continuous Development [57.61878484176942]
Open-Radio Access Network (O-RAN) integrates numerous software components in a cloud-like deployment, opening the radio access network to previously unconsidered security threats. Current vulnerability assessment practices often rely on manual, labor-intensive, and subjective investigations, leading to inconsistencies in the threat analysis. We propose an automated pipeline that leverages Natural Language Processing (NLP) to minimize human intervention and associated biases.
arXiv Detail & Related papers (2026-01-20T07:31:59Z) - Frontier AI Auditing: Toward Rigorous Third-Party Assessment of Safety and Security Practices at Leading AI Companies [57.521647436515785]
We define frontier AI auditing as rigorous third-party verification of frontier AI developers' safety and security claims. We introduce AI Assurance Levels (AAL-1 to AAL-4), ranging from time-bounded system audits to continuous, deception-resilient verification.
arXiv Detail & Related papers (2026-01-16T18:44:09Z) - Report on NSF Workshop on Science of Safe AI [75.96202715567088]
New advances in machine learning are leading to new opportunities to develop technology-based solutions to societal problems. To fulfill the promise of AI, we must address how to develop AI-based systems that are accurate and performant but also safe and trustworthy. This report is the result of the discussions in the working groups that addressed different aspects of safety at the workshop.
arXiv Detail & Related papers (2025-06-24T18:55:29Z) - S3C2 Summit 2024-09: Industry Secure Software Supply Chain Summit [50.93790634176803]
Over the past several years, there has been an exponential increase in cyberattacks targeting software supply chains. The ever-evolving threat of software supply chain attacks has garnered interest from the software industry and the US government. Three researchers from the NSF-backed Secure Software Supply Chain Center (S3C2) conducted a Secure Software Supply Chain Summit with a diverse set of 12 practitioners from 9 companies.
arXiv Detail & Related papers (2025-05-15T17:48:14Z) - Cybersecurity for Autonomous Vehicles [0.0]
The increasing adoption of autonomous vehicles is bringing a major shift in the automotive industry. As these vehicles become more connected, cybersecurity threats have emerged as a serious concern. This paper discusses major cybersecurity challenges like vulnerabilities in software and hardware, risks from wireless communication, and threats through external interfaces.
arXiv Detail & Related papers (2025-04-28T18:31:37Z) - In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety. First, we propose using standardized AI flaw reports and rules of engagement for researchers. Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs. Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z) - Position: Mind the Gap-the Growing Disconnect Between Established Vulnerability Disclosure and AI Security [56.219994752894294]
We argue that adapting existing processes for AI security reporting is doomed to fail due to fundamental shortcomings with respect to the distinctive characteristics of AI systems. Based on our proposal to address these shortcomings, we discuss an approach to AI security reporting and how the new AI paradigm, AI agents, will further reinforce the need for specialized AI security incident reporting advancements.
arXiv Detail & Related papers (2024-12-19T13:50:26Z) - Assessing the Threat Level of Software Supply Chains with the Log Model [4.1920378271058425]
Free and open source software (FOSS) components are estimated to make up more than 90% of all software systems.
This work presents a novel approach of assessing threat levels in FOSS supply chains with the log model.
arXiv Detail & Related papers (2023-11-20T12:44:37Z) - Software Repositories and Machine Learning Research in Cyber Security [0.0]
The integration of robust cyber security defenses has become essential across all phases of software development.
Attempts have been made to leverage topic modeling and machine learning for the detection of these early-stage vulnerabilities in the software requirements process.
arXiv Detail & Related papers (2023-11-01T17:46:07Z) - Software supply chain: review of attacks, risk assessment strategies and security controls [0.13812010983144798]
Software products can serve as a distribution vector for cyberattacks that target organizations through their software supply chain.
We analyze the most common software supply chain attacks and present the latest trends among them.
This study introduces security controls to mitigate the analyzed cyberattacks and risks by linking them to real-life security incidents and attacks.
arXiv Detail & Related papers (2023-05-23T15:25:39Z) - Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges [0.76146285961466]
How to securely develop modern machine learning-based software systems (MLBSS) remains a major challenge.
Latent vulnerabilities and privacy issues exposed to external users and attackers are largely neglected and hard to identify.
We consider that security for machine learning-based software systems may arise from inherent system defects or external adversarial attacks.
arXiv Detail & Related papers (2022-01-12T23:20:25Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.