10 Security and Privacy Problems in Large Foundation Models
- URL: http://arxiv.org/abs/2110.15444v3
- Date: Fri, 9 Jun 2023 15:53:54 GMT
- Title: 10 Security and Privacy Problems in Large Foundation Models
- Authors: Jinyuan Jia, Hongbin Liu, Neil Zhenqiang Gong
- Abstract summary: A pre-trained foundation model is like an ``operating system'' of the AI ecosystem.
A security or privacy issue of a pre-trained foundation model leads to a single point of failure for the AI ecosystem.
In this book chapter, we discuss 10 basic security and privacy problems for the pre-trained foundation models.
- Score: 69.70602220716718
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation models--such as GPT, CLIP, and DINO--have achieved revolutionary
progress in the past several years and are commonly believed to be a promising
approach for general-purpose AI. In particular, self-supervised learning is
adopted to pre-train a foundation model using a large amount of unlabeled data.
A pre-trained foundation model is like an ``operating system'' of the AI
ecosystem. Specifically, a foundation model can be used as a feature extractor
for many downstream tasks with little or no labeled training data. Existing
studies on foundation models mainly focused on pre-training a better foundation
model to improve its performance on downstream tasks in non-adversarial
settings, leaving its security and privacy in adversarial settings largely
unexplored. A security or privacy issue of a pre-trained foundation model leads
to a single point of failure for the AI ecosystem. In this book chapter, we
discuss 10 basic security and privacy problems for the pre-trained foundation
models, including six confidentiality problems, three integrity problems, and
one availability problem. For each problem, we discuss potential opportunities
and challenges. We hope our book chapter will inspire future research on the
security and privacy of foundation models.
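To make the "feature extractor" usage described in the abstract concrete, the following minimal sketch (not taken from the chapter) freezes a stand-in encoder and trains only a small linear head on a handful of labeled examples; the `FrozenEncoder` class, its dimensions, and the random data are hypothetical placeholders for a real foundation model such as CLIP or DINO.

```python
# Minimal sketch (illustration only): a frozen pre-trained encoder used as a
# feature extractor, with a small linear head as the only trainable part.
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Placeholder for a pre-trained foundation model used as a feature extractor."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())
        for p in self.parameters():           # freeze: downstream tasks reuse these features
            p.requires_grad = False

    def forward(self, x):
        return self.backbone(x)

encoder = FrozenEncoder().eval()
head = nn.Linear(128, 10)                      # the linear probe trained for the downstream task
opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Tiny labeled downstream set: the point is that very little labeled data is needed.
x = torch.randn(64, 3, 32, 32)
y = torch.randint(0, 10, (64,))

for _ in range(20):
    with torch.no_grad():
        feats = encoder(x)                     # features from the frozen foundation model
    loss = loss_fn(head(feats), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"linear-probe loss: {loss.item():.3f}")
```

The design point is that all learnable parameters live in the small head, which is why a compromised or backdoored foundation model becomes a single point of failure for every downstream task built on top of it.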
Related papers
- New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
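For context on how such leakage is commonly measured, here is a minimal, generic sketch of a loss-threshold membership-inference test (an illustration, not the privacy-backdoor attack from the paper): examples with unusually low loss under the fine-tuned model are guessed to be training members. The model, data, and threshold below are hypothetical.

```python
# Generic loss-threshold membership inference (illustration only).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 2))        # stand-in for a fine-tuned model
loss_fn = nn.CrossEntropyLoss(reduction="none")

def membership_scores(model, x, y):
    """Per-example loss; lower loss is weak evidence of membership."""
    with torch.no_grad():
        return loss_fn(model(x), y)

# Hypothetical member / non-member batches.
x_member, y_member = torch.randn(32, 20), torch.randint(0, 2, (32,))
x_nonmember, y_nonmember = torch.randn(32, 20), torch.randint(0, 2, (32,))

threshold = 0.7                                # would be calibrated on shadow data in practice
guess_member = membership_scores(model, x_member, y_member) < threshold
guess_nonmember = membership_scores(model, x_nonmember, y_nonmember) < threshold
print("flagged as members:", int(guess_member.sum()), "of 32 true members,",
      int(guess_nonmember.sum()), "of 32 non-members")
```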
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- A Comprehensive Study on Model Initialization Techniques Ensuring Efficient Federated Learning [0.0]
Federated learning (FL) has emerged as a promising paradigm for training machine learning models in a distributed and privacy-preserving manner.
The choice of model initialization method plays a crucial role in the performance, convergence speed, communication efficiency, and privacy guarantees of federated learning systems.
Our research meticulously compares, categorizes, and delineates the merits and demerits of each technique, examining their applicability across diverse FL scenarios.
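To illustrate what "model initialization" refers to in a federated setting, the sketch below runs one round of federated averaging in which the global model starts from a random initialization (a pre-trained state could be loaded instead); the model, clients, and data are hypothetical and not taken from the survey.

```python
# Minimal FedAvg sketch (illustration only): the global model's starting point
# is the "initialization" choice such studies compare.
import copy
import torch
import torch.nn as nn

def local_update(model, x, y, lr=0.1, steps=5):
    model = copy.deepcopy(model)               # each client trains its own copy
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """Average client weights parameter-wise."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(10, 2)                # random init; a pre-trained state_dict
                                               # could be loaded here instead

clients = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(4)]
client_states = [local_update(global_model, x, y) for x, y in clients]
global_model.load_state_dict(fed_avg(client_states))
print("completed one federated round with", len(clients), "clients")
```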
arXiv Detail & Related papers (2023-10-31T23:26:58Z)
- Can Foundation Models Help Us Achieve Perfect Secrecy? [11.073539163281524]
A key promise of machine learning is the ability to assist users with personal tasks.
A gold standard privacy-preserving system will satisfy perfect secrecy.
However, privacy and quality appear to be in tension in existing systems for personal tasks.
arXiv Detail & Related papers (2022-05-27T02:32:26Z)
- On the Opportunities and Risks of Foundation Models [256.61956234436553]
We call these models foundation models to underscore their critically central yet incomplete character.
This report provides a thorough account of the opportunities and risks of foundation models.
To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration.
arXiv Detail & Related papers (2021-08-16T17:50:08Z)
- Decentralized Federated Learning Preserves Model and Data Privacy [77.454688257702]
We propose a fully decentralized approach, which allows knowledge to be shared between trained models.
Students are trained on the outputs of their teachers via synthetically generated input data.
The results show that an initially untrained student model, trained on its teacher's outputs, reaches F1-scores comparable to the teacher's.
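As a rough illustration of this teacher-student setup (synthetic inputs, teacher outputs as training targets), consider the minimal sketch below; the models and the random "synthetic" data are stand-ins rather than the paper's actual method.

```python
# Minimal teacher-student sketch (illustration only): the student never sees
# real data; it is trained to match the teacher's outputs on synthetic inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2)).eval()
student = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

for _ in range(100):
    x_syn = torch.randn(32, 10)                # synthetically generated inputs
    with torch.no_grad():
        target = F.softmax(teacher(x_syn), dim=1)   # teacher's soft predictions
    loss = F.kl_div(F.log_softmax(student(x_syn), dim=1), target, reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final distillation loss: {loss.item():.4f}")
```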
arXiv Detail & Related papers (2021-02-01T14:38:54Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.