BOMs Away! Inside the Minds of Stakeholders: A Comprehensive Study of
Bills of Materials for Software Systems
- URL: http://arxiv.org/abs/2309.12206v2
- Date: Fri, 22 Sep 2023 16:14:38 GMT
- Title: BOMs Away! Inside the Minds of Stakeholders: A Comprehensive Study of
Bills of Materials for Software Systems
- Authors: Trevor Stalnaker, Nathan Wintersgill, Oscar Chaparro, Massimiliano Di
Penta, Daniel M German, Denys Poshyvanyk
- Abstract summary: Software Bills of Materials (SBOMs) have emerged as tools to facilitate the management of software dependencies, vulnerabilities, licenses, and the supply chain.
Recent studies have shown that SBOMs are still an early technology not yet adequately adopted in practice.
We identify 12 major challenges facing the creation and use of SBOMs, including those related to the SBOM content, deficiencies in SBOM tools, SBOM maintenance and verification, and domain-specific challenges.
- Score: 11.719062411327952
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Software Bills of Materials (SBOMs) have emerged as tools to facilitate the
management of software dependencies, vulnerabilities, licenses, and the supply
chain. While significant effort has been devoted to increasing SBOM awareness
and developing SBOM formats and tools, recent studies have shown that SBOMs are
still an early technology not yet adequately adopted in practice. Expanding on
previous research, this paper reports a comprehensive study that investigates
the current challenges stakeholders encounter when creating and using SBOMs.
The study surveyed 138 practitioners belonging to five stakeholder groups
(practitioners familiar with SBOMs, members of critical open source projects,
AI/ML practitioners, cyber-physical systems practitioners, and legal practitioners) using differentiated
questionnaires, and interviewed 8 survey respondents to gather further insights
about their experience. We identified 12 major challenges facing the creation
and use of SBOMs, including those related to the SBOM content, deficiencies in
SBOM tools, SBOM maintenance and verification, and domain-specific challenges.
We propose and discuss 4 actionable solutions to the identified challenges and
present the major avenues for future research and development.
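For readers unfamiliar with SBOM formats, the following minimal sketch shows roughly what a single component entry might look like in a CycloneDX-style SBOM; the component name, version, license, and purl are hypothetical placeholders and are not taken from the paper or its study data.

import json

# Minimal, illustrative CycloneDX-style SBOM with a single component.
# The package name, version, license, and purl are hypothetical examples,
# not data from the study.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "library",
            "name": "example-http-client",  # hypothetical dependency
            "version": "2.4.1",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
            "purl": "pkg:pypi/example-http-client@2.4.1",
        }
    ],
}

print(json.dumps(sbom, indent=2))

An inventory of this kind is what makes the dependency, license, and vulnerability management described in the abstract machine-readable, and it is the artifact whose content, maintenance, and verification the identified challenges concern.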
Related papers
- Augmenting Software Bills of Materials with Software Vulnerability Description: A Preliminary Study on GitHub [8.727176816793179]
This paper reports the results of a preliminary study in which we augmented SBOMs of 40 open-source projects with information about Common Vulnerabilities and Exposures.
Our augmented SBOMs have been evaluated by submitting pull requests and by asking project owners to answer a survey.
Although the augmented SBOMs were, in most cases, not directly accepted because project owners required continuous SBOM updates, the feedback received shows the usefulness of the suggested SBOM augmentation.
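As a rough illustration of the kind of augmentation this study describes, the sketch below attaches CVE identifiers to matching components of an SBOM-like structure; the vulnerability mapping is hard-coded and hypothetical, standing in for a real vulnerability database rather than reproducing the paper's method.

# Hypothetical mapping from (package, version) to known CVE identifiers;
# in practice this information would come from a vulnerability database.
known_vulnerabilities = {
    ("example-http-client", "2.4.1"): ["CVE-2023-0001"],
}

def augment_sbom(sbom: dict) -> dict:
    """Annotate each SBOM component that matches a known vulnerable release."""
    for component in sbom.get("components", []):
        key = (component.get("name"), component.get("version"))
        cves = known_vulnerabilities.get(key)
        if cves:
            # CycloneDX defines a dedicated 'vulnerabilities' section; for
            # brevity this sketch simply annotates the component in place.
            component["x-known-cves"] = list(cves)
    return sbom

Keeping such annotations current as new CVEs are published is exactly the continuous-update concern raised by the project owners surveyed in the study.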
arXiv Detail & Related papers (2025-03-18T08:04:22Z) - SBOM Challenges for Developers: From Analysis of Stack Overflow Questions [2.1122022139737426]
The proportion of resolved questions about SBOM use is 15.0%, which is extremely low.
The number of new questions has increased steadily from 2020 to 2023.
SBOM users face three major challenges with SBOM tools.
arXiv Detail & Related papers (2025-02-06T11:08:29Z) - Gaps Between Research and Practice When Measuring Representational Harms Caused by LLM-Based Systems [88.35461485731162]
We identify four types of challenges that prevent practitioners from effectively using publicly available instruments for measuring representational harms.
Our goal is to advance the development of instruments for measuring representational harms that are well-suited to practitioner needs.
arXiv Detail & Related papers (2024-11-23T22:13:38Z) - SUPER: Evaluating Agents on Setting Up and Executing Tasks from Research Repositories [55.161075901665946]
SUPER aims to capture the realistic challenges faced by researchers working with Machine Learning (ML) and Natural Language Processing (NLP) research repositories.
Our benchmark comprises three distinct problem sets: 45 end-to-end problems with annotated expert solutions, 152 subproblems derived from the expert set that focus on specific challenges, and 602 automatically generated problems for larger-scale development.
We show that state-of-the-art approaches struggle to solve these problems, with the best model (GPT-4o) solving only 16.3% of the end-to-end set and 46.1% of the scenarios.
arXiv Detail & Related papers (2024-09-11T17:37:48Z) - Apprentices to Research Assistants: Advancing Research with Large Language Models [0.0]
Large Language Models (LLMs) have emerged as powerful tools in various research domains.
This article examines their potential through a literature review and firsthand experimentation.
arXiv Detail & Related papers (2024-04-09T15:53:06Z) - An Empirical Study of Challenges in Machine Learning Asset Management [15.07444988262748]
Despite existing research, a significant knowledge gap remains in operational challenges like model versioning, data traceability, and collaboration.
Our study aims to address this gap by analyzing 15,065 posts from developer forums and platforms.
We uncover 133 topics related to asset management challenges, grouped into 16 macro-topics, with software dependency, model deployment, and model training being the most discussed.
arXiv Detail & Related papers (2024-02-25T05:05:52Z) - Competition-Level Problems are Effective LLM Evaluators [121.15880285283116]
This paper aims to evaluate the reasoning capacities of large language models (LLMs) in solving recent programming problems in Codeforces.
We first provide a comprehensive evaluation of GPT-4's perceived zero-shot performance on this task, considering various aspects such as problems' release time, difficulties, and types of errors encountered.
Surprisingly, the perceived performance of GPT-4 has experienced a cliff-like decline on problems released after September 2021, consistently across all difficulties and types of problems.
arXiv Detail & Related papers (2023-12-04T18:58:57Z) - SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models [70.5763210869525]
We introduce SciBench, an expansive benchmark suite for Large Language Models (LLMs).
SciBench contains a dataset featuring a range of collegiate-level scientific problems from mathematics, chemistry, and physics domains.
The results reveal that the current LLMs fall short of delivering satisfactory performance, with the best overall score of merely 43.22%.
arXiv Detail & Related papers (2023-07-20T07:01:57Z) - On the Way to SBOMs: Investigating Design Issues and Solutions in
Practice [25.12690604349815]
The Software Bill of Materials (SBOM) has emerged as a promising solution, providing a machine-readable inventory of software components used.
This paper presents an analysis of 4,786 GitHub discussions from 510 SBOM-related projects.
arXiv Detail & Related papers (2023-04-26T03:30:31Z) - Machine Learning Practices Outside Big Tech: How Resource Constraints
Challenge Responsible Development [1.8275108630751844]
Machine learning practitioners from diverse occupations and backgrounds are increasingly using machine learning (ML) methods.
Past research often excludes the broader, lesser-resourced ML community.
These practitioners share many of the same ML development difficulties and ethical conundrums as their Big Tech counterparts.
arXiv Detail & Related papers (2021-10-06T17:25:21Z) - Understanding the Usability Challenges of Machine Learning In
High-Stakes Decision Making [67.72855777115772]
Machine learning (ML) is being applied to a diverse and ever-growing set of domains.
In many cases, domain experts -- who often have no expertise in ML or data science -- are asked to use ML predictions to make high-stakes decisions.
We investigate the ML usability challenges present in the domain of child welfare screening through a series of collaborations with child welfare screeners.
arXiv Detail & Related papers (2021-03-02T22:50:45Z) - Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)