Confidential Computing Transparency
- URL: http://arxiv.org/abs/2409.03720v1
- Date: Thu, 5 Sep 2024 17:24:05 GMT
- Title: Confidential Computing Transparency
- Authors: Ceren Kocaoğullar, Tina Marjanov, Ivan Petrov, Ben Laurie, Al Cutter, Christoph Kern, Alice Hutchings, Alastair R. Beresford
- Abstract summary: We propose a Confidential Computing Transparency framework with progressive levels of transparency.
This framework goes beyond current measures like open-source code and audits by incorporating accountability for reviewers.
Our tiered approach provides a practical pathway to achieving transparency in complex, real-world systems.
- Score: 7.9699781371465965
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Confidential Computing is a security paradigm designed to protect data in-use by leveraging hardware-based Trusted Execution Environments (TEEs). While TEEs offer significant security benefits, the need for user trust remains a challenge, as attestation alone cannot guarantee the absence of vulnerabilities or backdoors. To address this, we propose a Confidential Computing Transparency framework with progressive levels of transparency. This framework goes beyond current measures like open-source code and audits by incorporating accountability for reviewers and robust technical safeguards, creating a comprehensive trust chain. Our tiered approach provides a practical pathway to achieving transparency in complex, real-world systems. Through a user study with 400 participants, we demonstrate that higher levels of transparency are associated with increased user comfort, particularly for sensitive data types.
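To make the trust gap concrete: remote attestation lets a verifier check which binary a TEE is running, but says nothing about whether that binary deserves trust. The sketch below (Python, using the third-party `cryptography` package) illustrates this with a hypothetical attestation flow; the key handling, report format, and measurement values are all invented for illustration and are not the paper's protocol.

```python
# Sketch: verifying a hypothetical TEE attestation report.
# Attestation proves WHICH binary runs, not that it is free of
# vulnerabilities or backdoors; that gap motivates the paper's
# transparency framework. All values here are illustrative.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Demo stand-in for the hardware vendor's attestation key.
vendor_key = Ed25519PrivateKey.generate()
vendor_pub = vendor_key.public_key()

# TEE side: the hardware measures the loaded code and the vendor key
# signs the resulting report.
enclave_binary = b"...enclave code..."
measurement = hashlib.sha256(enclave_binary).digest()
report = b"report-v1|" + measurement
signature = vendor_key.sign(report)

# Verifier side: check the signature, then compare the measurement
# against a hash obtained out of band (e.g., from a reviewed,
# reproducibly built release).
expected = hashlib.sha256(enclave_binary).digest()
try:
    vendor_pub.verify(signature, report)
    print("measurement matches:", report.split(b"|", 1)[1] == expected)
except InvalidSignature:
    print("attestation rejected")
```

Both checks can pass for a binary that contains a backdoor; establishing that the measured code is trustworthy is what the paper's tiered transparency levels (reviewed source, accountable reviewers, technical safeguards) are meant to provide.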
Related papers
- The Pitfalls of "Security by Obscurity" And What They Mean for Transparent AI [4.627627425427264]
We identify three key themes in the security community's perspective on the benefits of transparency.
We then provide a case study discussion on how transparency has shaped the research subfield of anonymization.
arXiv Detail & Related papers (2025-01-30T17:04:35Z)
- Balancing Confidentiality and Transparency for Blockchain-based Process-Aware Information Systems [46.404531555921906]
We propose an architecture for blockchain-based PAISs aimed at preserving both confidentiality and transparency.
Smart contracts enact, enforce and store public interactions, while attribute-based encryption techniques are adopted to specify access grants to confidential information.
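The access-control idea in that last line can be sketched briefly. Real CP-ABE schemes bind the policy into the ciphertext with pairing-based cryptography; the toy below only mimics the policy logic with an ordinary symmetric cipher (Fernet from the `cryptography` package), so it illustrates the interface, not the security model. All attribute names are invented.

```python
# Sketch of the access-control logic behind attribute-based
# encryption. NOT real ABE: here a symmetric key is simply released
# when the requester's attributes satisfy the policy.
from cryptography.fernet import Fernet

def encrypt_with_policy(plaintext: bytes, policy: set[str]):
    key = Fernet.generate_key()
    return {"policy": policy, "ciphertext": Fernet(key).encrypt(plaintext)}, key

def decrypt_if_authorized(blob, key, attributes: set[str]) -> bytes:
    # Grant access only if the requester holds every required attribute.
    if not blob["policy"] <= attributes:
        raise PermissionError("attributes do not satisfy policy")
    return Fernet(key).decrypt(blob["ciphertext"])

blob, key = encrypt_with_policy(b"confidential process data", {"auditor", "org:acme"})
print(decrypt_if_authorized(blob, key, {"auditor", "org:acme", "role:lead"}))
```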
arXiv Detail & Related papers (2024-12-07T20:18:36Z)
- Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
We propose the Authenticated Cyclic Redundancy Integrity Check (ACRIC).
ACRIC preserves backward compatibility without requiring additional hardware and is protocol agnostic.
We show that ACRIC offers robust security with minimal transmission overhead (≤ 1 ms).
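ACRIC's actual construction is its own; as a rough illustration of the general trick of fitting a keyed integrity check into a legacy CRC field, consider this hypothetical Python sketch:

```python
# Sketch: a keyed integrity check sized to fit a legacy 4-byte CRC-32
# field, so frame layout and length stay unchanged. ACRIC's actual
# construction differs; the key below is illustrative only.
import hashlib
import hmac
import zlib

KEY = b"shared-session-key"

def legacy_crc(frame: bytes) -> bytes:
    # Plain CRC-32: detects random errors but is trivially forgeable.
    return zlib.crc32(frame).to_bytes(4, "big")

def authenticated_check(frame: bytes) -> bytes:
    # Truncated HMAC in the same 4 bytes: forgery now requires the key.
    return hmac.new(KEY, frame, hashlib.sha256).digest()[:4]

frame = b"\x01\x02payload"
tag = authenticated_check(frame)
# Receiver recomputes the tag and compares in constant time.
print(hmac.compare_digest(tag, authenticated_check(frame)))
```

Truncating the MAC to the CRC's width trades authentication strength for backward compatibility, which is exactly the kind of constraint legacy protocols impose.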
arXiv Detail & Related papers (2024-11-21T18:26:05Z)
- Blockchain-Enhanced Framework for Secure Third-Party Vendor Risk Management and Vigilant Security Controls [0.6990493129893112]
This paper proposes a comprehensive secure framework for managing third-party vendor risk.
It integrates blockchain technology to ensure transparency, traceability, and immutability in vendor assessments and interactions.
arXiv Detail & Related papers (2024-11-20T16:42:14Z)
- Trustworthy AI: Securing Sensitive Data in Large Language Models [0.0]
Large Language Models (LLMs) have transformed natural language processing (NLP) by enabling robust text generation and understanding.
This paper proposes a comprehensive framework for embedding trust mechanisms into LLMs to dynamically control the disclosure of sensitive information.
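As a toy stand-in for such dynamic disclosure control (not the paper's design, whose trust mechanisms are richer), a trust-level gate can redact matched sensitive spans before output; the patterns and tiers below are invented:

```python
# Sketch: gating sensitive spans in model output by requester trust.
# Illustrative only; real deployments need far more robust detection.
import re

SENSITIVE = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def gate_output(text: str, trust_level: int) -> str:
    # Higher-trust callers see the raw text; others get redactions.
    if trust_level >= 2:
        return text
    for name, pattern in SENSITIVE.items():
        text = pattern.sub(f"[{name} redacted]", text)
    return text

print(gate_output("Contact jane@example.com, SSN 123-45-6789.", trust_level=0))
```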
arXiv Detail & Related papers (2024-09-26T19:02:33Z)
- Physical Layer Deception with Non-Orthogonal Multiplexing [52.11755709248891]
We propose a novel framework of physical layer deception (PLD) to actively counteract wiretapping attempts.
PLD combines physical layer security (PLS) with deception technologies.
We prove the validity of the PLD framework with in-depth analyses and demonstrate its superiority over conventional PLS approaches.
arXiv Detail & Related papers (2024-06-30T16:17:39Z)
- Privacy-Preserving Deep Learning Using Deformable Operators for Secure Task Learning [14.187385349716518]
Existing methods for privacy preservation rely on image encryption or perceptual transformation approaches.
We propose a novel Privacy-Preserving framework that uses a set of deformable operators for secure task learning.
arXiv Detail & Related papers (2024-04-08T19:46:20Z)
- HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z)
- Blockchain-based Zero Trust on the Edge [5.323279718522213]
This paper proposes a novel approach based on Zero Trust Architecture (ZTA) extended with blockchain to further enhance security.
The blockchain component serves as an immutable database for storing users' requests and is used to verify trustworthiness by analyzing and identifying potentially malicious user activities (the hash chaining behind this is sketched below).
We discuss the framework, processes of the approach, and the experiments carried out on a testbed to validate its feasibility and applicability in the smart city context.
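The tamper-evidence that the blockchain component provides rests on hash chaining, which the following minimal sketch demonstrates; a real deployment would use an actual ledger, and the request strings are invented:

```python
# Sketch: the hash chaining that makes a request log tamper-evident.
# Each entry commits to its predecessor, so rewriting history breaks
# every later hash. Illustrative only.
import hashlib
import json
import time

chain = [{"prev": "0" * 64, "request": "genesis", "ts": 0}]

def entry_hash(entry: dict) -> str:
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

def append_request(request: str) -> None:
    chain.append({"prev": entry_hash(chain[-1]), "request": request, "ts": time.time()})

def verify() -> bool:
    return all(chain[i]["prev"] == entry_hash(chain[i - 1]) for i in range(1, len(chain)))

append_request("user42: door-sensor read")
append_request("user42: camera feed access")
chain[1]["request"] = "tampered"   # any edit to history...
print(verify())                    # ...is detected: False
```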
arXiv Detail & Related papers (2023-11-28T12:43:21Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that the required architectural changes can be made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
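As one concrete example of the kind of fidelity probe such an audit might include (illustrative only, not the paper's metric suite), the total-variation distance between a real and a synthetic marginal can be computed directly:

```python
# Sketch: one fidelity probe an audit of synthetic data might run.
# Total-variation distance between marginal distributions; the data
# below is made up for illustration.
from collections import Counter

def tv_distance(real, synthetic) -> float:
    r, s = Counter(real), Counter(synthetic)
    support = set(r) | set(s)
    return 0.5 * sum(abs(r[v] / len(real) - s[v] / len(synthetic)) for v in support)

real = ["A", "A", "B", "C", "A", "B"]
synth = ["A", "B", "B", "C", "C", "B"]
print(f"TV distance: {tv_distance(real, synth):.3f}")  # 0 = identical marginals
```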
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- The Many Facets of Trust in AI: Formalizing the Relation Between Trust and Fairness, Accountability, and Transparency [4.003809001962519]
Efforts to promote fairness, accountability, and transparency are assumed to be critical in fostering Trust in AI (TAI).
The lack of exposition on trust itself suggests that trust is commonly understood, uncomplicated, or even uninteresting.
Our analysis of TAI publications reveals numerous orientations which differ in terms of who is doing the trusting (agent), in what (object), on the basis of what (basis), in order to what (objective), and why (impact).
arXiv Detail & Related papers (2022-08-01T08:26:57Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
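One simple way to operationalize communicating uncertainty, sketched below with invented thresholds and wording, is to report normalized predictive entropy alongside the predicted label:

```python
# Sketch: surfacing predictive uncertainty next to a prediction.
# Normalized entropy is one of many signals such work surveys;
# thresholds and phrasing here are invented for illustration.
import math

def predict_with_uncertainty(probs: dict[str, float]) -> str:
    label = max(probs, key=probs.get)
    entropy = -sum(p * math.log(p) for p in probs.values() if p > 0)
    confidence = 1 - entropy / math.log(len(probs))  # 1 = certain, 0 = uniform
    flag = "low confidence, consider human review" if confidence < 0.5 else "high confidence"
    return f"{label} ({confidence:.0%}, {flag})"

print(predict_with_uncertainty({"approve": 0.48, "deny": 0.42, "refer": 0.10}))
```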
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Applying Transparency in Artificial Intelligence based Personalization Systems [5.671950073691286]
Increasing transparency is an important goal for personalization-based systems.
We combine insights from technology ethics and computer science to generate a list of transparency best practices for machine generated personalization.
Based on these best practices, we develop a checklist to be used by designers wishing to evaluate and increase the transparency of their algorithmic systems.
arXiv Detail & Related papers (2020-04-02T11:07:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.