Location Privacy Threats and Protections in 6G Vehicular Networks: A Comprehensive Review
- URL: http://arxiv.org/abs/2305.04503v2
- Date: Wed, 08 Jan 2025 02:47:39 GMT
- Title: Location Privacy Threats and Protections in 6G Vehicular Networks: A Comprehensive Review
- Authors: Baihe Ma, Xu Wang, Xiaojie Lin, Yanna Jiang, Caijun Sun, Zhe Wang, Guangsheng Yu, Suirui Zhu, Ying He, Wei Ni, Ren Ping Liu
- Abstract summary: Location privacy is critical in vehicular networks, where drivers' trajectories and personal information can be exposed.
This survey reviews comprehensively different localization techniques, including sensing infrastructure-based, optical vision-based, and cellular radio-based localization.
We classify Location Privacy Preserving Mechanisms (LPPMs) into user-side, server-side, and user-server-interface-based, and evaluate their effectiveness.
- Score: 23.901688216192397
- License:
- Abstract: Location privacy is critical in vehicular networks, where drivers' trajectories and personal information can be exposed, allowing adversaries to launch data and physical attacks that threaten drivers' safety and personal security. This survey reviews comprehensively different localization techniques, including widely used ones like sensing infrastructure-based, optical vision-based, and cellular radio-based localization, and identifies inadequately addressed location privacy concerns. We classify Location Privacy Preserving Mechanisms (LPPMs) into user-side, server-side, and user-server-interface-based, and evaluate their effectiveness. Our analysis shows that the user-server-interface-based LPPMs have received insufficient attention in the literature, despite their paramount importance in vehicular networks. Further, we examine methods for balancing data utility and privacy protection for existing LPPMs in vehicular networks and highlight emerging challenges from future upper-layer location privacy attacks, wireless technologies, and network convergences. By providing insights into the relationship between localization techniques and location privacy, and evaluating the effectiveness of different LPPMs, this survey can help inform the development of future LPPMs in vehicular networks.
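To make the user-side LPPM category concrete, the following is a minimal sketch of one widely cited approach, planar-Laplace location perturbation in the spirit of geo-indistinguishability. The function name, epsilon value, and metre-based coordinates are illustrative assumptions, not details taken from the survey.

```python
# Minimal sketch (not from the survey): a user-side LPPM that perturbs a
# vehicle's reported position with planar Laplace noise (geo-indistinguishability).
import math
import random
from scipy.special import lambertw

def perturb_location(x: float, y: float, epsilon: float) -> tuple[float, float]:
    """Return a noisy (x, y) position in metres; smaller epsilon means more noise."""
    theta = random.uniform(0.0, 2.0 * math.pi)   # random direction
    p = random.random()                          # uniform in [0, 1)
    # Inverse CDF of the planar Laplace radial distribution (uses Lambert W, branch -1).
    r = -(lambertw((p - 1.0) / math.e, k=-1).real + 1.0) / epsilon
    return x + r * math.cos(theta), y + r * math.sin(theta)

if __name__ == "__main__":
    print(perturb_location(0.0, 0.0, epsilon=0.01))  # noise on the order of 100 m
```

Smaller epsilon gives stronger privacy but a larger expected displacement, which is exactly the utility-privacy balance the survey examines.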
Related papers
- Investigating Vulnerabilities of GPS Trip Data to Trajectory-User Linking Attacks [49.1574468325115]
We propose a novel attack to reconstruct user identifiers in GPS trip datasets consisting of single trips.
We show that the risk of re-identification is significant even when personal identifiers have been removed.
Further investigations indicate that users who frequently visit locations that are only visited by a small number of others tend to be more vulnerable to re-identification.
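As a rough illustration of why rarely visited locations drive re-identification risk, here is a toy sketch, not the paper's attack; the data layout and rarest-location scoring are assumptions.

```python
# Toy sketch: shortlist candidate users for an anonymous trip by its rarest location.
from collections import defaultdict

def location_popularity(trips_by_user):
    """Map each (discretised) location to the number of distinct users visiting it."""
    visitors = defaultdict(set)
    for user, trips in trips_by_user.items():
        for trip in trips:
            for loc in trip:
                visitors[loc].add(user)
    return {loc: len(users) for loc, users in visitors.items()}

def candidate_users(anonymous_trip, trips_by_user, popularity):
    """Users whose history contains the trip's least-visited location."""
    rarest = min(anonymous_trip, key=lambda loc: popularity.get(loc, float("inf")))
    return {u for u, trips in trips_by_user.items()
            if any(rarest in trip for trip in trips)}
```

The fewer users the shortlist contains, the more identifying a single trip is, matching the observation above.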
arXiv Detail & Related papers (2025-02-12T08:54:49Z)
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged that aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
- Differentially Private Data Release on Graphs: Inefficiencies and Unfairness [48.96399034594329]
This paper characterizes the impact of Differential Privacy on bias and unfairness in the context of releasing information about networks.
We consider a network release problem where the network structure is known to all, but the weights on edges must be released privately.
Our work provides theoretical foundations and empirical evidence into the bias and unfairness arising due to privacy in these networked decision problems.
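A minimal sketch of the kind of private release being analysed, assuming the standard Laplace mechanism on edge weights; the sensitivity and epsilon values are placeholders, not values from the paper.

```python
# Sketch: release edge weights with Laplace noise calibrated to sensitivity/epsilon.
import numpy as np

def release_edge_weights(weights, epsilon, sensitivity=1.0):
    """Add Laplace(sensitivity/epsilon) noise to each edge weight before release."""
    rng = np.random.default_rng()
    scale = sensitivity / epsilon
    return {edge: w + rng.laplace(0.0, scale) for edge, w in weights.items()}

# Downstream decisions (e.g. shortest paths) computed on the noisy weights can be
# systematically biased, which is the unfairness the paper studies.
noisy = release_edge_weights({(0, 1): 4.0, (1, 2): 2.5}, epsilon=0.5)
```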
arXiv Detail & Related papers (2024-08-08T08:37:37Z)
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding and protection against privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
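For intuition, here is a toy attribute-inference baseline that uses structure alone, guessing a hidden label from the majority label of a node's neighbours; it is far simpler than the paper's attack model and purely illustrative.

```python
# Toy sketch: infer a hidden node attribute from neighbours' disclosed attributes.
from collections import Counter

def infer_attribute(node, edges, known_labels):
    """Guess the node's hidden attribute from the labels of its graph neighbours."""
    neighbours = [v for u, v in edges if u == node] + [u for u, v in edges if v == node]
    labels = [known_labels[n] for n in neighbours if n in known_labels]
    return Counter(labels).most_common(1)[0][0] if labels else None

# Example: a node whose neighbours mostly disclose "18-25" likely shares that label.
print(infer_attribute(0, [(0, 1), (0, 2), (2, 3)], {1: "18-25", 2: "18-25", 3: "35-44"}))
```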
arXiv Detail & Related papers (2024-07-26T07:40:54Z)
- Your Car Tells Me Where You Drove: A Novel Path Inference Attack via CAN Bus and OBD-II Data [57.22545280370174]
On Path Diagnostic - Intrusion & Inference (OPD-II) is a novel path inference attack leveraging a physical car model and a map matching algorithm.
We implement our attack on four different cars and a total of 41 tracks in different road and traffic scenarios.
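A rough sketch of the physical-car-model step such an attack relies on: dead-reckoning a relative path from speed and yaw-rate samples read off the CAN bus, which can then be fed to a map-matching stage. The signal names, units, and simplified kinematics are assumptions, not the OPD-II implementation.

```python
# Sketch: integrate CAN speed and yaw-rate into a relative (x, y) path.
import math

def dead_reckon(samples):
    """samples: iterable of (dt_s, speed_m_per_s, yaw_rate_rad_per_s) -> [(x, y), ...]."""
    x = y = heading = 0.0
    path = [(x, y)]
    for dt, speed, yaw_rate in samples:
        heading += yaw_rate * dt               # integrate turning
        x += speed * dt * math.cos(heading)    # integrate forward motion
        y += speed * dt * math.sin(heading)
        path.append((x, y))
    return path
```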
arXiv Detail & Related papers (2024-06-30T04:21:46Z)
- Indoor Location Fingerprinting Privacy: A Comprehensive Survey [0.09831489366502298]
The pervasive integration of Indoor Positioning Systems (IPS) leads to the widespread adoption of Location-Based Services (LBS).
Indoor location fingerprinting employs diverse signal fingerprints from user devices, enabling precise location identification by Location Service Providers (LSP).
Despite its broad applications across various domains, indoor location fingerprinting introduces a notable privacy risk, as both LSP and potential adversaries inherently have access to this sensitive information, compromising users' privacy.
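To see why the fingerprint itself is privacy-sensitive, here is a minimal k-nearest-neighbour localisation sketch: whoever holds the fingerprint database (the LSP or an adversary) can turn a device's RSSI report into a position. The data layout and the choice of k are illustrative assumptions.

```python
# Sketch: locate a device by matching its RSSI fingerprint against a survey database.
import math

def knn_locate(query_fp, database, k=3):
    """query_fp: {ap_id: rssi_dbm}; database: [((x, y), {ap_id: rssi_dbm}), ...]."""
    def distance(ref_fp):
        shared = set(query_fp) & set(ref_fp)        # access points seen by both
        if not shared:
            return float("inf")
        return math.sqrt(sum((query_fp[ap] - ref_fp[ap]) ** 2 for ap in shared))
    nearest = sorted(database, key=lambda entry: distance(entry[1]))[:k]
    xs, ys = zip(*(pos for pos, _ in nearest))
    return sum(xs) / len(xs), sum(ys) / len(ys)     # centroid of the k best matches
```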
arXiv Detail & Related papers (2024-04-10T21:02:58Z)
- Differentiated Security Architecture for Secure and Efficient Infotainment Data Communication in IoV Networks [55.340315838742015]
Negligence on the security of infotainment data communication in IoV networks can unintentionally open an easy access point for social engineering attacks.
In particular, we first classify data communication in the IoV network, examine the security focus of each data communication, and then develop a differentiated security architecture to provide security protection on a file-to-file basis.
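A very small sketch of what file-level differentiation could look like; the data classes and protection levels below are assumptions for illustration, not the architecture proposed in the paper.

```python
# Sketch: map each infotainment data class to a differentiated protection level.
from enum import Enum

class DataClass(Enum):
    PUBLIC_MEDIA = "integrity check only"         # e.g. cached map tiles, streamed media
    PERSONAL = "encrypt at rest and in transit"   # e.g. contacts, trip history
    SAFETY_RELATED = "encrypt, sign, and audit"   # e.g. OTA configuration files

def protection_for(data_class: DataClass) -> str:
    """Return the protection applied to a file of the given class."""
    return data_class.value

print(protection_for(DataClass.PERSONAL))
```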
arXiv Detail & Related papers (2024-03-29T12:01:31Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
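A hedged sketch of the noise-injection remedy the summary points to: clipping a client's model update and adding Gaussian noise locally before it enters secure aggregation. The clip norm and noise scale are illustrative placeholders, not values from the paper.

```python
# Sketch: DP-style local noising of a client update before secure aggregation.
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float = 1.0,
                     noise_std: float = 0.1) -> np.ndarray:
    """Clip the update's L2 norm, then add Gaussian noise before SecAgg."""
    scaled = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return scaled + np.random.normal(0.0, noise_std, size=update.shape)
```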
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Protecting Personalized Trajectory with Differential Privacy under Temporal Correlations [37.88484505367802]
This paper proposes a personalized trajectory privacy protection mechanism (PTPPM).
We identify a protection location set (PLS) for each location by employing the Hilbert curve-based minimum distance search algorithm.
We put forth a novel Permute-and-Flip mechanism for location perturbation, adapting a mechanism originally introduced for privacy-preserving data publishing to the location perturbation setting.
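For reference, here is a sketch of the generic Permute-and-Flip mechanism applied to picking a reported location from a protection location set, scoring candidates by negative distance to the true location; the scoring function and sensitivity are assumptions, not the paper's exact construction.

```python
# Sketch: Permute-and-Flip selection of a perturbed location from a candidate set.
import math
import random

def permute_and_flip(true_loc, candidate_locs, epsilon, sensitivity=1.0):
    """Pick one candidate; closer candidates are exponentially more likely."""
    scores = {c: -math.dist(true_loc, c) for c in candidate_locs}
    best = max(scores.values())
    order = list(candidate_locs)
    random.shuffle(order)                       # "permute"
    for cand in order:                          # "flip" a biased coin per candidate
        if random.random() <= math.exp(epsilon * (scores[cand] - best) / (2 * sensitivity)):
            return cand                         # the best candidate always accepts

pls = [(0.0, 0.0), (50.0, 20.0), (120.0, -40.0)]   # hypothetical protection location set
print(permute_and_flip((10.0, 5.0), pls, epsilon=0.1))
```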
arXiv Detail & Related papers (2024-01-20T12:59:08Z)
- Sparse Federated Training of Object Detection in the Internet of Vehicles [13.864554148921826]
Object detection is one of the key technologies in the Internet of Vehicles (IoV).
Current object detection methods are mostly based on centralized deep training; that is, the sensitive data collected by edge devices must be uploaded to a central server.
We propose a federated learning-based framework in which well-trained local models are shared with the central server.
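A minimal sketch of the weight-sharing step, assuming plain FedAvg-style aggregation; the weighting by local sample counts is an assumption, and the paper's sparse-training specifics are not modelled here.

```python
# Sketch: server-side averaging of client model parameters (FedAvg-style).
import numpy as np

def federated_average(client_weights, client_sizes):
    """Average client model parameters, weighted by local dataset size."""
    total = float(sum(client_sizes))
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Raw camera/LiDAR data never leaves the vehicles; only model parameters are uploaded.
global_w = federated_average([np.ones(4), np.zeros(4)], client_sizes=[300, 100])
```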
arXiv Detail & Related papers (2023-09-07T08:58:41Z)
- Privacy-Utility Trades in Crowdsourced Signal Map Obfuscation [20.58763760239068]
Crowdsourced cellular signal strength measurements can be used to generate signal maps that improve network performance.
We consider obfuscating such data before the data leaves the mobile device.
Our evaluation results, based on multiple, diverse, real-world signal map datasets, demonstrate the feasibility of concurrently achieving adequate privacy and utility.
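As a simple illustration of on-device obfuscation before upload (the paper evaluates more sophisticated schemes), the noise scales below are arbitrary assumptions.

```python
# Sketch: blur a crowdsourced signal report on the device before it is uploaded.
import random

def obfuscate_report(lat, lon, rss_dbm, loc_sigma_deg=0.001, rss_sigma_db=3.0):
    """Add zero-mean Gaussian noise to the reported position and signal strength."""
    return (lat + random.gauss(0.0, loc_sigma_deg),
            lon + random.gauss(0.0, loc_sigma_deg),
            rss_dbm + random.gauss(0.0, rss_sigma_db))
```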
arXiv Detail & Related papers (2022-01-13T03:46:22Z)