A Confidential Computing Transparency Framework for a Comprehensive Trust Chain
- URL: http://arxiv.org/abs/2409.03720v2
- Date: Thu, 05 Dec 2024 22:06:35 GMT
- Title: A Confidential Computing Transparency Framework for a Comprehensive Trust Chain
- Authors: Ceren Kocaoğullar, Tina Marjanov, Ivan Petrov, Ben Laurie, Al Cutter, Christoph Kern, Alice Hutchings, Alastair R. Beresford
- Abstract summary: Confidential Computing enhances privacy of data in-use through hardware-based Trusted Execution Environments.
TEEs require user trust, as they cannot guarantee the absence of vulnerabilities or backdoors.
We propose a three-level conceptual framework providing organisations with a practical pathway to incrementally improve Confidential Computing transparency.
- Score: 7.9699781371465965
- License:
- Abstract: Confidential Computing enhances privacy of data in-use through hardware-based Trusted Execution Environments (TEEs) that use attestation to verify their integrity, authenticity, and certain runtime properties, along with those of the binaries they execute. However, TEEs require user trust, as attestation alone cannot guarantee the absence of vulnerabilities or backdoors. Enhanced transparency can mitigate the reliance on naive trust. Some organisations currently employ various transparency measures, including open-source firmware, publishing technical documentation, or undergoing external audits, but these require investments with unclear returns. This may discourage the adoption of transparency, leaving users with limited visibility into system privacy measures. Additionally, the lack of standardisation complicates meaningful comparisons between implementations. To address these challenges, we propose a three-level conceptual framework providing organisations with a practical pathway to incrementally improve Confidential Computing transparency. To evaluate whether our transparency framework contributes to an increase in end-user trust, we conducted an empirical study with over 800 non-expert participants. The results indicate that greater transparency improves user comfort, with participants willing to share various types of personal data across different levels of transparency. The study also reveals misconceptions about transparency, highlighting the need for clear communication and user education.
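The abstract describes attestation as the mechanism by which a TEE proves its integrity and the identity of the binaries it runs, while noting that users must still trust that the attested code is free of vulnerabilities. The user-side check can be sketched roughly as follows. This is an illustrative sketch only: real attestation schemes (e.g. AMD SEV-SNP or Intel TDX) use hardware-rooted asymmetric certificate chains rather than a shared secret, and every name below is hypothetical.

```python
import hashlib
import hmac

def measure(binary: bytes) -> str:
    """Measurement: a hash of the binary the TEE claims to be running."""
    return hashlib.sha256(binary).hexdigest()

def sign_report(key: bytes, measurement: str, nonce: str) -> str:
    """Stand-in for the hardware's signature over the attestation report.
    Uses HMAC with a shared secret purely for illustration."""
    return hmac.new(key, f"{measurement}|{nonce}".encode(), hashlib.sha256).hexdigest()

def verify_report(key: bytes, measurement: str, nonce: str, signature: str,
                  expected_measurement: str) -> bool:
    """User-side check: the signature must be valid AND the measurement must
    match a binary the user expects (e.g. a published open-source build).
    Note what this does NOT prove: that the attested binary is free of
    vulnerabilities or backdoors -- the gap the paper's framework targets."""
    expected_sig = sign_report(key, measurement, nonce)
    return (hmac.compare_digest(expected_sig, signature)
            and measurement == expected_measurement)

# Example: a user verifies a report for a binary they can reproduce.
binary = b"firmware-v1.0"
key = b"device-secret"           # hypothetical; real TEEs use asymmetric keys
nonce = "freshness-nonce-42"
m = measure(binary)
sig = sign_report(key, m, nonce)
ok = verify_report(key, m, nonce, sig, expected_measurement=measure(binary))
print(ok)  # True for a matching, correctly signed report
```

The comment in `verify_report` marks the paper's central point: attestation establishes *which* code is running, not that the code is trustworthy, which is why the authors argue transparency measures must complement it.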
Related papers
- The Pitfalls of "Security by Obscurity" And What They Mean for Transparent AI [4.627627425427264]
We identify three key themes in the security community's perspective on the benefits of transparency.
We then provide a case study discussion on how transparency has shaped the research subfield of anonymization.
arXiv Detail & Related papers (2025-01-30T17:04:35Z)
- Balancing Confidentiality and Transparency for Blockchain-based Process-Aware Information Systems [46.404531555921906]
We propose an architecture for blockchain-based PAISs aimed at preserving both confidentiality and transparency.
Smart contracts enact, enforce and store public interactions, while attribute-based encryption techniques are adopted to specify access grants to confidential information.
arXiv Detail & Related papers (2024-12-07T20:18:36Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- The Many Facets of Trust in AI: Formalizing the Relation Between Trust and Fairness, Accountability, and Transparency [4.003809001962519]
Efforts to promote fairness, accountability, and transparency are assumed to be critical in fostering Trust in AI (TAI).
The lack of exposition on trust itself suggests that trust is commonly understood, uncomplicated, or even uninteresting.
Our analysis of TAI publications reveals numerous orientations which differ in terms of who is doing the trusting (agent), in what (object), on the basis of what (basis), in order to what (objective), and why (impact).
arXiv Detail & Related papers (2022-08-01T08:26:57Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Applying Transparency in Artificial Intelligence based Personalization Systems [5.671950073691286]
Increasing transparency is an important goal for personalization-based systems.
We combine insights from technology ethics and computer science to generate a list of transparency best practices for machine generated personalization.
Based on these best practices, we develop a checklist to be used by designers wishing to evaluate and increase the transparency of their algorithmic systems.
arXiv Detail & Related papers (2020-04-02T11:07:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.