No Silver Bullet: Towards Demonstrating Secure Software Development for Danish Small and Medium Enterprises in a Business-to-Business Model
- URL: http://arxiv.org/abs/2503.04293v1
- Date: Thu, 06 Mar 2025 10:25:15 GMT
- Title: No Silver Bullet: Towards Demonstrating Secure Software Development for Danish Small and Medium Enterprises in a Business-to-Business Model
- Authors: Raha Asadi, Bodil Biering, Vincent van Dijk, Oksana Kulyk, Elda Paja,
- Abstract summary: This study investigates ways for SMEs to demonstrate their security when operating in a business-to-business model. Our findings indicate five distinctive security demonstration approaches, namely: Certifications, Reports, Questionnaires, Interactive Sessions and Social Proof. We discuss the challenges, benefits, and recommendations related to these approaches, concluding that none of them is a one-size-fits-all solution.
- Score: 0.6407952035735351
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Software-developing small and medium enterprises (SMEs) play a crucial role as suppliers to larger corporations and public administration. It is therefore necessary for them to be able to demonstrate that their products meet certain security criteria, both to gain the trust of their customers and to comply with standards that demand such a demonstration. In this study we have investigated ways for SMEs to demonstrate their security when operating in a business-to-business model, conducting semi-structured interviews (N=16) with practitioners from different SMEs in Denmark and validating our findings in a follow-up workshop (N=6). Our findings indicate five distinctive security demonstration approaches, namely: Certifications, Reports, Questionnaires, Interactive Sessions and Social Proof. We discuss the challenges, benefits, and recommendations related to these approaches, concluding that none of them is a one-size-fits-all solution and that more research into the relative advantages of these approaches and their combinations is needed.
Related papers
- Advancing Embodied Agent Security: From Safety Benchmarks to Input Moderation [52.83870601473094]
Embodied agents exhibit immense potential across a multitude of domains.
Existing research predominantly concentrates on the security of general large language models.
This paper introduces a novel input moderation framework, meticulously designed to safeguard embodied agents.
arXiv Detail & Related papers (2025-04-22T08:34:35Z) - MMDT: Decoding the Trustworthiness and Safety of Multimodal Foundation Models [101.70140132374307]
Multimodal foundation models (MMFMs) play a crucial role in various applications, including autonomous driving, healthcare, and virtual assistants.
Existing benchmarks on multimodal models either predominantly assess the helpfulness of these models, or only focus on limited perspectives such as fairness and privacy.
We present the first unified platform, MMDT (Multimodal DecodingTrust), designed to provide a comprehensive safety and trustworthiness evaluation for MMFMs.
arXiv Detail & Related papers (2025-03-19T01:59:44Z) - Reproducibility Study of Cooperation, Competition, and Maliciousness: LLM-Stakeholders Interactive Negotiation [0.0]
We validate the original findings using a range of open-weight models. We propose a communication-free baseline to test whether successful negotiations are possible without agent interaction. This work also provides insights into the accessibility, fairness, environmental impact, and privacy considerations of LLM-based negotiation systems.
arXiv Detail & Related papers (2025-02-22T14:28:49Z) - Safety at Scale: A Comprehensive Survey of Large Model Safety [298.05093528230753]
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z) - Assessing AI Adoption and Digitalization in SMEs: A Framework for Implementation [0.0]
There is a significant gap between SMEs and large corporations in their use of AI. This study identifies critical drivers and obstacles to achieving intelligent transformation. It proposes a framework model to address key challenges and provide actionable guidelines.
arXiv Detail & Related papers (2025-01-14T15:10:25Z) - Agent-SafetyBench: Evaluating the Safety of LLM Agents [72.92604341646691]
We introduce Agent-SafetyBench, a comprehensive benchmark to evaluate the safety of large language model (LLM) agents. Agent-SafetyBench encompasses 349 interaction environments and 2,000 test cases, evaluating 8 categories of safety risks and covering 10 common failure modes frequently encountered in unsafe interactions. Our evaluation of 16 popular LLM agents reveals a concerning result: none of the agents achieves a safety score above 60%.
arXiv Detail & Related papers (2024-12-19T02:35:15Z) - On Large Language Models in Mission-Critical IT Governance: Are We Ready Yet? [7.098487130130114]
Security of critical infrastructure has been a pressing concern since the advent of computers. Recent events reveal the increasing difficulty of meeting these challenges. We aim to explore practitioners' views on integrating Generative AI into the governance of IT MCSs.
arXiv Detail & Related papers (2024-12-16T12:21:05Z) - MultiTrust: A Comprehensive Benchmark Towards Trustworthy Multimodal Large Language Models [51.19622266249408]
MultiTrust is the first comprehensive and unified benchmark on the trustworthiness of MLLMs. Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts. Extensive experiments with 21 modern MLLMs reveal some previously unexplored trustworthiness issues and risks.
arXiv Detail & Related papers (2024-06-11T08:38:13Z) - Multimodal Large Language Models to Support Real-World Fact-Checking [80.41047725487645]
Multimodal large language models (MLLMs) carry the potential to support humans in processing vast amounts of information.
While MLLMs are already being used as a fact-checking tool, their abilities and limitations in this regard are understudied.
We propose a framework for systematically assessing the capacity of current multimodal models to facilitate real-world fact-checking.
arXiv Detail & Related papers (2024-03-06T11:32:41Z) - A Survey of Confidence Estimation and Calibration in Large Language Models [86.692994151323]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide range of tasks in various domains.
Despite their impressive performance, they can be unreliable due to factual errors in their generations.
Assessing their confidence and calibrating them across different tasks can help mitigate risks and enable LLMs to produce better generations.
arXiv Detail & Related papers (2023-11-14T16:43:29Z) - SMEs Confidentiality Issues and Adoption of Good Cybersecurity Practices [0.0]
Small and medium-sized enterprises (SMEs) are considered more vulnerable to cyber-attacks.
We are designing a do-it-yourself (DIY) security assessment and capability improvement method, CYSEC.
In this paper, we explore the importance of dynamic consent and its effect on SMEs' trust perception and information sharing.
arXiv Detail & Related papers (2020-07-16T09:24:51Z) - "It's Not Something We Have Talked to Our Team About": Results From a Preliminary Investigation of Cybersecurity Challenges in Denmark [0.5249805590164901]
We conducted a preliminary study running semi-structured interviews with four employees from four different companies.
Our results show that companies are lacking fundamental security protection and are in need of guidance and tools.
We discuss steps towards further investigation and the development of a framework targeting SMEs that want to adopt straightforward and actionable IT security guidance.
arXiv Detail & Related papers (2020-07-10T09:07:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.