TrustOps: Continuously Building Trustworthy Software
- URL: http://arxiv.org/abs/2412.03201v1
- Date: Wed, 04 Dec 2024 10:41:01 GMT
- Title: TrustOps: Continuously Building Trustworthy Software
- Authors: Eduardo Brito, Fernando Castillo, Pille Pullonen-Raudvere, Sebastian Werner,
- Abstract summary: We argue that gathering verifiable evidence during software development and operations is needed for creating a new trust model.
We present TrustOps, an approach for continuously collecting verifiable evidence in all phases of the software life cycle.
- Score: 42.81677042059531
- Abstract: Software services play a crucial role in daily life, with automated actions determining access to resources and information. Trusting service providers to perform these actions fairly and accurately is essential, yet challenging for users to verify. Even with publicly available codebases, the rapid pace of development and the complexity of modern deployments hinder the understanding and evaluation of service actions, including for experts. Hence, current trust models rely heavily on the assumption that service providers follow best practices and adhere to laws and regulations, which is increasingly impractical and risky, leading to undetected flaws and data leaks. In this paper, we argue that gathering verifiable evidence during software development and operations is needed for creating a new trust model. Therefore, we present TrustOps, an approach for continuously collecting verifiable evidence in all phases of the software life cycle, relying on and combining already existing tools and trust-enhancing technologies to do so. For this, we introduce the adaptable core principles of TrustOps and provide a roadmap for future research and development.
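As a concrete illustration of the core idea of collecting verifiable evidence across life-cycle phases, the sketch below is a minimal, hypothetical example (not taken from the paper) of the kind of signed, hash-chained evidence record a CI step could emit and a later auditor could verify. It assumes Python 3 and the third-party `cryptography` package; all record fields and function names are illustrative.

```python
# Minimal sketch (not from the paper): recording signed, hash-linked evidence
# for one life-cycle phase of a build artifact.
# Assumes Python 3 and the third-party `cryptography` package.
import hashlib
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def make_evidence(artifact: bytes, phase: str, prev_digest: str,
                  key: Ed25519PrivateKey) -> dict:
    """Hash an artifact, chain it to the previous record, and sign the result."""
    record = {
        "phase": phase,                      # e.g. "build", "test", "deploy"
        "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
        "prev_record_sha256": prev_digest,   # hash chain across phases
        "timestamp": time.time(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = key.sign(payload).hex()
    return record


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()       # in practice: a managed CI signing key
    evidence = make_evidence(b"fake build artifact", "build", prev_digest="", key=key)
    # A verifier holding the matching public key can check the signature and
    # walk the prev_record_sha256 chain to audit earlier phases.
    unsigned = {k: v for k, v in evidence.items() if k != "signature"}
    key.public_key().verify(bytes.fromhex(evidence["signature"]),
                            json.dumps(unsigned, sort_keys=True).encode())
    print("evidence verified:", evidence["artifact_sha256"])
```

In a TrustOps-style pipeline, such records would be produced continuously by existing tooling rather than by hand, and the chaining allows evidence from development, testing, and operations to be audited as one trail.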
Related papers
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective [314.7991906491166]
Generative Foundation Models (GenFMs) have emerged as transformative tools.
Their widespread adoption raises critical concerns regarding trustworthiness across dimensions.
This paper presents a comprehensive framework to address these challenges through three key contributions.
arXiv Detail & Related papers (2025-02-20T06:20:36Z)
- Development and Adoption of SATD Detection Tools: A State-of-practice Report [5.670597842524448]
Self-Admitted Technical Debt (SATD) refers to instances where developers knowingly introduce suboptimal solutions into code.
This paper provides a comprehensive state-of-practice report on the development and adoption of SATD detection tools.
arXiv Detail & Related papers (2024-12-18T12:06:53Z)
- Advocate -- Trustworthy Evidence in Cloud Systems [39.58317527488534]
The rapid evolution of cloud-native applications, characterized by dynamic, interconnected services, presents significant challenges for maintaining trustworthy and auditable systems.
Traditional methods of verification and certification are often inadequate due to the fast-paced and dynamic development practices common in cloud computing.
This paper introduces Advocate, a novel agent-based system designed to generate verifiable evidence of cloud-native application operations.
arXiv Detail & Related papers (2024-10-17T12:09:26Z)
- LightSC: The Making of a Usable Security Classification Tool for DevSecOps [0.0]
We propose five principles for a security classification to be DevOps-ready.
We then exemplify how one can make a security classification methodology DevOps-ready.
Since such work seems to be new within the usable security community, we extract from our process a general, three-step recipe.
Our tool is perceived (by the test subjects) as most useful in the design phase, but also during the testing phase where the security class would be one of the metrics used to evaluate the quality of their software.
arXiv Detail & Related papers (2024-10-02T17:17:14Z)
- Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion? [60.84912551069379]
We present the Code-Development Benchmark (Codev-Bench), a fine-grained, real-world, repository-level, and developer-centric evaluation framework.
Codev-Agent is an agent-based system that automates repository crawling, constructs execution environments, extracts dynamic calling chains from existing unit tests, and generates new test samples to avoid data leakage.
arXiv Detail & Related papers (2024-10-02T09:11:10Z)
- From Literature to Practice: Exploring Fairness Testing Tools for the Software Industry Adoption [5.901307724130718]
In today's world, we need to ensure that AI systems are fair and unbiased.
Current fairness testing tools need significant improvements to better support software developers.
New tools should be user-friendly, well-documented, and flexible enough to handle different kinds of data.
arXiv Detail & Related papers (2024-09-04T04:23:08Z)
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources [100.23208165760114]
Foundation model development attracts a rapidly expanding body of contributors, scientists, and applications.
To help shape responsible development practices, we introduce the Foundation Model Development Cheatsheet.
arXiv Detail & Related papers (2024-06-24T15:55:49Z)
- Agent-Driven Automatic Software Improvement [55.2480439325792]
This research proposal aims to explore innovative solutions by focusing on the deployment of agents powered by Large Language Models (LLMs).
The iterative nature of agents, which allows for continuous learning and adaptation, can help surpass common challenges in code generation.
We aim to use the iterative feedback in these systems to further fine-tune the LLMs underlying the agents, so that they become better aligned with the task of automated software improvement.
arXiv Detail & Related papers (2024-06-24T15:45:22Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- On the Importance of Trust in Next-Generation Networked CPS Systems: An AI Perspective [2.1055643409860734]
We propose trust as a measure to evaluate the status of network agents and improve the decision-making process.
Trust relations are based on evidence created by the interactions of entities within a protocol.
We show how utilizing the trust evidence can improve the performance and the security of Federated Learning; a minimal aggregation sketch follows this entry.
arXiv Detail & Related papers (2021-04-16T02:12:13Z)
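The entry above describes using evidence-based trust to harden Federated Learning. One plausible mechanism, not necessarily the paper's specific scheme, is to weight each client's model update by its trust score during aggregation, so that low-trust clients have little influence on the global model. The sketch below is plain Python with illustrative names only.

```python
# Minimal sketch (not from the paper): trust-weighted federated averaging.
# Clients with low evidence-based trust contribute less to the global update.
from typing import List


def trust_weighted_average(updates: List[List[float]],
                           trust: List[float]) -> List[float]:
    """Aggregate client model updates, weighting each by its trust score."""
    total = sum(trust)
    aggregated = [0.0] * len(updates[0])
    for update, t in zip(updates, trust):
        weight = t / total
        for i, value in enumerate(update):
            aggregated[i] += weight * value
    return aggregated


if __name__ == "__main__":
    # Three clients: the third is suspected of poisoning, so its evidence-based
    # trust score is low and its update barely moves the global model.
    updates = [[0.10, 0.20], [0.12, 0.18], [5.0, -5.0]]
    trust = [1.0, 1.0, 0.05]
    print(trust_weighted_average(updates, trust))
```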
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.