Privacy of Autonomous Vehicles: Risks, Protection Methods, and Future Directions
- URL: http://arxiv.org/abs/2209.04022v1
- Date: Thu, 8 Sep 2022 20:16:21 GMT
- Title: Privacy of Autonomous Vehicles: Risks, Protection Methods, and Future Directions
- Authors: Chulin Xie, Zhong Cao, Yunhui Long, Diange Yang, Ding Zhao, Bo Li
- Abstract summary: We provide a new taxonomy for privacy risks and protection methods in AVs.
We categorize privacy in AVs into three levels: individual, population, and proprietary.
- Score: 23.778855805039438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in machine learning (ML) have enabled its wide
application across domains, and one of the most exciting applications is
autonomous vehicles (AVs), which have spurred the development of ML
algorithms spanning perception, prediction, and planning. However, training
AVs usually requires a large amount of data collected from diverse driving
environments (e.g., cities) as well as different types of personal
information (e.g., working hours and routes). Such large-scale collected
data, treated as the new oil for ML in the data-centric AI era, usually
contains a large amount of privacy-sensitive information that is hard to
remove or even audit. Although existing privacy protection approaches have
achieved some theoretical and empirical success, a gap remains when applying
them to real-world applications such as AVs. For instance, when training AVs,
privacy-sensitive information can be revealed not only by individually
identifiable information but also by population-level information, such as
road construction within a city, and by proprietary-level information, such
as AV vendors' commercial secrets. It is therefore critical to revisit the
frontier of privacy risks and the corresponding protection approaches in AVs
to bridge this gap. To this end, we provide a new taxonomy for privacy risks
and protection methods in AVs, categorizing privacy in AVs into three levels:
individual, population, and proprietary. We explicitly list recent challenges
in protecting each level of privacy, summarize existing solutions to these
challenges, discuss lessons and conclusions, and outline potential future
directions and opportunities for both researchers and practitioners. We
believe this work will help shape privacy research in AVs and guide the
design of privacy protection technologies.
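To make the three-level taxonomy concrete, here is a minimal illustrative
sketch in Python. The level names and the first example under each level come
directly from the abstract; the remaining examples and all identifiers
(PrivacyLevel, EXAMPLE_RISKS) are hypothetical illustrations, not code or
terminology from the paper.

from enum import Enum

class PrivacyLevel(Enum):
    # The paper's three levels of privacy in AVs.
    INDIVIDUAL = "individual"    # information tied to one person
    POPULATION = "population"    # aggregate information about a region
    PROPRIETARY = "proprietary"  # an AV developer's business information

# Hypothetical examples of data that could leak at each level; the first
# item in each list is taken from the abstract.
EXAMPLE_RISKS = {
    PrivacyLevel.INDIVIDUAL: ["working hours and routes", "home location"],
    PrivacyLevel.POPULATION: ["road construction within a city", "traffic flow"],
    PrivacyLevel.PROPRIETARY: ["commercial secrets of AVs", "planner parameters"],
}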
Related papers
- Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions [12.451936012379319]
Large Language Models (LLMs) represent a significant advancement in artificial intelligence, finding applications across various domains.
Their reliance on massive internet-sourced datasets for training raises notable privacy issues.
Certain application-specific scenarios may require fine-tuning these models on private data.
arXiv Detail & Related papers (2024-08-10T05:41:19Z) - Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks [42.3024294376025]
Machine unlearning is a research hotspot in the field of privacy protection.
Recent research has found potential privacy leakage in various machine unlearning approaches.
We analyze privacy risks in various aspects, including definitions, implementation methods, and real-world applications.
arXiv Detail & Related papers (2024-06-10T11:31:04Z) - Work-in-Progress: Crash Course: Can (Under Attack) Autonomous Driving Beat Human Drivers? [60.51287814584477]
This paper evaluates the inherent risks in autonomous driving by examining the current landscape of AVs.
We develop specific claims highlighting the delicate balance between the advantages of AVs and potential security challenges in real-world scenarios.
arXiv Detail & Related papers (2024-05-14T09:42:21Z) - A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research in AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z) - Privacy in Large Language Models: Attacks, Defenses and Future Directions [84.73301039987128]
We analyze the current privacy attacks targeting large language models (LLMs) and categorize them according to the adversary's assumed capabilities.
We present a detailed overview of prominent defense strategies that have been developed to counter these privacy attacks.
arXiv Detail & Related papers (2023-10-16T13:23:54Z) - Can Language Models be Instructed to Protect Personal Information? [30.187731765653428]
We introduce PrivQA -- a benchmark to assess the privacy/utility trade-off when a model is instructed to protect specific categories of personal information in a simulated scenario.
We find that adversaries can easily circumvent these protections with simple jailbreaking methods through textual and/or image inputs.
We believe PrivQA has the potential to support the development of new models with improved privacy protections, as well as the adversarial robustness of these protections.
arXiv Detail & Related papers (2023-10-03T17:30:33Z) - A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released (a minimal sketch of this idea follows this list).
arXiv Detail & Related papers (2023-09-27T14:38:16Z) - More Than Privacy: Applying Differential Privacy in Key Areas of Artificial Intelligence [62.3133247463974]
We show that differential privacy can do more than just privacy preservation in AI.
It can also be used to improve security, stabilize learning, build fair models, and impose composition in selected areas of AI.
arXiv Detail & Related papers (2020-08-05T03:07:36Z)
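Two of the entries above rest on differentially private release of
statistics: a sanitized quantity is published instead of the raw data. As a
minimal sketch of that idea, the classic Laplace mechanism for a counting
query is shown below; the function name laplace_count, its parameters, and
the trip data are hypothetical illustrations, not code from any of the
surveyed papers.

import numpy as np

def laplace_count(values, threshold, epsilon, sensitivity=1.0):
    # Differentially private count of values above a threshold.
    # A counting query has sensitivity 1 (one record changes the count by
    # at most 1), so Laplace noise with scale sensitivity/epsilon yields
    # epsilon-differential privacy.
    true_count = sum(v > threshold for v in values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Hypothetical usage: publish roughly how many trips exceeded 30 km
# without exposing any individual trip record exactly.
trip_lengths_km = [12.4, 35.1, 8.9, 42.0, 27.3]  # made-up data
print(laplace_count(trip_lengths_km, threshold=30.0, epsilon=0.5))

A smaller epsilon means more noise and stronger privacy; this is the
privacy/utility trade-off that several of the papers above investigate.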
This list is automatically generated from the titles and abstracts of the papers on this site.