The Privacy Pillar -- A Conceptual Framework for Foundation Model-based Systems
- URL: http://arxiv.org/abs/2311.06998v1
- Date: Mon, 13 Nov 2023 00:44:06 GMT
- Title: The Privacy Pillar -- A Conceptual Framework for Foundation Model-based Systems
- Authors: Tingting Bi, Guangsheng Yu, Qinghua Lu, Xiwei Xu, Nick Van Beest,
- Abstract summary: Foundation models present both significant challenges and incredible opportunities.
There is currently a lack of consensus regarding the comprehensive scope of both technical and non-technical issues that the privacy evaluation process should encompass.
This paper introduces a novel conceptual framework that integrates various responsible AI patterns from multiple perspectives.
- Score: 10.261775049562708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI and its relevant technologies, including machine learning, deep learning, chatbots, virtual assistants, and others, are currently undergoing a profound transformation of development and organizational processes within companies. Foundation models present both significant challenges and incredible opportunities. In this context, ensuring the quality attributes of foundation model-based systems is of paramount importance, with a particular focus on privacy, which is especially challenging due to the sensitive nature of the data and information involved. However, there is currently a lack of consensus regarding the comprehensive scope of both technical and non-technical issues that the privacy evaluation process should encompass. Additionally, there is uncertainty about which existing methods are best suited to effectively address these privacy concerns. In response to this challenge, this paper introduces a novel conceptual framework that integrates various responsible AI patterns from multiple perspectives, with the specific aim of safeguarding privacy.
Related papers
- Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z) - State-of-the-Art Approaches to Enhancing Privacy Preservation of Machine Learning Datasets: A Survey [0.0]
This paper examines the evolving landscape of machine learning (ML) and its profound impact across various sectors.
It focuses on the emerging field of Privacy-preserving Machine Learning (PPML)
As ML applications become increasingly integral to industries like telecommunications, financial technology, and surveillance, they raise significant privacy concerns.
arXiv Detail & Related papers (2024-02-25T17:31:06Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - Advancing Differential Privacy: Where We Are Now and Future Directions for Real-World Deployment [100.1798289103163]
We present a detailed review of current practices and state-of-the-art methodologies in the field of differential privacy (DP)
Key points and high-level contents of the article originated from the discussions at "Differential Privacy (DP): Challenges Towards the Next Frontier".
This article aims to provide a reference point for the algorithmic and design decisions within the realm of privacy, highlighting important challenges and potential research directions.
arXiv Detail & Related papers (2023-04-14T05:29:18Z) - A Survey of Trustworthy Federated Learning with Perspectives on
Security, Robustness, and Privacy [47.89042524852868]
Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
arXiv Detail & Related papers (2023-02-21T12:52:12Z) - New Challenges in Reinforcement Learning: A Survey of Security and
Privacy [26.706957408693363]
Reinforcement learning (RL) is one of the most important branches of AI.
RL has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics.
Some of these applications and systems have been shown to be vulnerable to security or privacy attacks.
arXiv Detail & Related papers (2022-12-31T12:30:43Z) - 10 Security and Privacy Problems in Large Foundation Models [69.70602220716718]
A pre-trained foundation model is like an "operating system" of the AI ecosystem.
A security or privacy issue of a pre-trained foundation model leads to a single point of failure for the AI ecosystem.
In this book chapter, we discuss 10 basic security and privacy problems for the pre-trained foundation models.
arXiv Detail & Related papers (2021-10-28T21:45:53Z) - Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks against robustness and their defenses, and 3) inference attacks against privacy and their defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z) - More Than Privacy: Applying Differential Privacy in Key Areas of
Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
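Several of the papers listed above rely on differentially private data release, where only a noised statistic is published instead of the raw data. A minimal sketch of the standard Laplace mechanism for a counting query is shown below; the function names and the example query are illustrative, not drawn from any of the cited papers.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from the Laplace(0, scale) distribution via inverse-CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(values, threshold, epsilon: float) -> float:
    """Release a count query under epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so the Laplace noise scale is
    sensitivity / epsilon = 1 / epsilon.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of `epsilon` give stronger privacy at the cost of noisier answers; the sanitized count, not the underlying records, is what gets published.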
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.