Clio-X: A Web3 Solution for Privacy-Preserving AI Access to Digital Archives
- URL: http://arxiv.org/abs/2507.08853v1
- Date: Wed, 09 Jul 2025 05:30:38 GMT
- Title: Clio-X: A Web3 Solution for Privacy-Preserving AI Access to Digital Archives
- Authors: Victoria L. Lemieux, Rosa Gil, Faith Molosiwa, Qihong Zhou, Binming Li, Roberto Garcia, Luis De La Torre Cubillo, Zehua Wang
- Abstract summary: This paper explores how privacy-enhancing technologies (PETs) can help archives preserve control over sensitive content while still making it available to researchers. We present Clio-X, a decentralized, privacy-first Web3 digital solution designed to embed PETs into archival workflows and support AI-enabled reference and access.
- Score: 1.3713383780077602
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As archives turn to artificial intelligence to manage growing volumes of digital records, privacy risks inherent in current AI data practices raise critical concerns about data sovereignty and ethical accountability. This paper explores how privacy-enhancing technologies (PETs) and Web3 architectures can help archives preserve control over sensitive content while still making it available to researchers. We present Clio-X, a decentralized, privacy-first Web3 digital solution designed to embed PETs into archival workflows and support AI-enabled reference and access. Drawing on a user evaluation of a medium-fidelity prototype, the study reveals both interest in the potential of the solution and significant barriers to adoption related to trust, system opacity, economic concerns, and governance. Using Rogers' Diffusion of Innovation theory, we analyze the sociotechnical dimensions of these barriers and propose a path forward centered on participatory design and decentralized governance through a Clio-X Decentralized Autonomous Organization. By integrating technical safeguards with community-based oversight, Clio-X offers a novel model for ethically deploying AI in cultural heritage contexts.
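The abstract does not describe Clio-X's internals, but the general idea of embedding a PET into an archival access workflow can be illustrated with a minimal, hypothetical sketch: redacting simple personally identifiable information (PII) from a record before it is exposed to an AI-enabled reference service. The patterns and function names below are illustrative assumptions, not part of Clio-X.

```python
import re

# Hypothetical PET step: redact simple PII patterns from an archival
# record before passing it to an AI-enabled reference service.
# These regexes are illustrative only; a real deployment would use far
# more robust techniques (e.g. NER-based de-identification).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_record(text: str) -> str:
    """Replace each matched PII span with a typed placeholder, e.g. [EMAIL]."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

record = "Contact the donor at jane.doe@example.org or 604-555-0100."
print(redact_record(record))
# -> Contact the donor at [EMAIL] or [PHONE].
```

In a pipeline like the one the paper describes, a step of this kind would sit between the archival store and the AI reference layer, so the model only ever sees the redacted view.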
Related papers
- Rethinking Data Protection in the (Generative) Artificial Intelligence Era [115.71019708491386]
We propose a four-level taxonomy that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline.
arXiv Detail & Related papers (2025-07-03T02:45:51Z) - Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z) - The Critical Canvas--How to regain information autonomy in the AI era [11.15944540843097]
The Critical Canvas is an information exploration platform designed to restore balance between algorithmic efficiency and human agency.
The platform transforms overwhelming technical information into actionable insights.
It enables more informed decision-making and effective policy development in the age of AI.
arXiv Detail & Related papers (2024-11-25T08:46:02Z) - Reclaiming "Open AI" -- AI Model Serving Can Be Open Access, Yet Monetizable and Loyal [39.63122342758896]
The rapid rise of AI has split model serving between open-weight distribution and opaque API-based approaches. This position paper introduces, rigorously formulates, and champions the Open-access, Monetizable, and Loyal (OML) paradigm for AI model serving.
arXiv Detail & Related papers (2024-11-01T18:46:03Z) - Assistive AI for Augmenting Human Decision-making [3.379906135388703]
The paper shows how AI can assist in the complex process of decision-making while maintaining human oversight.
Central to our framework are the principles of privacy, accountability, and credibility.
arXiv Detail & Related papers (2024-10-18T10:16:07Z) - Privacy-Preserving Decentralized AI with Confidential Computing [0.7893328752331561]
This paper addresses privacy protection in decentralized Artificial Intelligence (AI) using Confidential Computing (CC) within the Atoma Network.
CC leverages hardware-based Trusted Execution Environments (TEEs) to provide isolation for processing sensitive data.
We explore how we can integrate TEEs into Atoma's decentralized framework.
arXiv Detail & Related papers (2024-10-17T16:50:48Z) - Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied to data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z) - Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation [0.26107298043931204]
Generative AI has ushered in the ability to generate content that closely mimics human contributions.
These models can be used to manipulate public opinion and distort perceptions, resulting in a decline in trust towards digital platforms.
This study contributes to marketing literature and practice in three ways.
arXiv Detail & Related papers (2024-03-17T13:08:28Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.