Privacy Information Classification: A Hybrid Approach
- URL: http://arxiv.org/abs/2101.11574v1
- Date: Wed, 27 Jan 2021 18:03:18 GMT
- Title: Privacy Information Classification: A Hybrid Approach
- Authors: Jiaqi Wu, Weihua Li, Quan Bai, Takayuki Ito, Ahmed Moustafa
- Abstract summary: This study proposes and develops a hybrid privacy classification approach to detect and classify privacy information from online social networks (OSNs).
The proposed hybrid approach employs both deep learning models and ontology-based models for privacy-related information extraction.
- Score: 9.642559585173517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A large amount of information is published to online social networks
every day, and end-users may unconsciously disclose privacy-related information
about themselves. Identifying privacy-related data and protecting online social
network users from privacy leakage are therefore significant tasks. Under this
motivation, this study proposes and develops a hybrid privacy classification
approach to detect and classify privacy information from OSNs. The proposed
hybrid approach employs both deep learning models and ontology-based models for
privacy-related information extraction. Extensive experiments are conducted to
validate the proposed hybrid approach, and the empirical results demonstrate
its superiority in protecting online social network users from privacy leakage.
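The abstract describes combining deep learning models with ontology-based models. A minimal sketch of how such a hybrid decision could be wired together (the concept list, function names, and the OR-combination rule below are illustrative assumptions, not the paper's actual ontology or model):

```python
# Hybrid privacy classifier sketch: a learned model's score is combined
# with a rule-based ontology lookup. The concept set is hypothetical.
PRIVACY_ONTOLOGY = {"address", "phone number", "birthday", "passport"}

def ontology_match(text: str) -> bool:
    """Rule-based branch: fire if any ontology concept appears in the text."""
    lowered = text.lower()
    return any(term in lowered for term in PRIVACY_ONTOLOGY)

def hybrid_classify(text: str, model_score: float, threshold: float = 0.5) -> bool:
    """Flag text as privacy-related if either branch fires."""
    return model_score > threshold or ontology_match(text)
```

Taking the union of the two branches trades some precision for recall, which fits a leak-warning use case; the paper's actual fusion strategy may differ.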
Related papers
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding and protection against privacy risks emanating from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
arXiv Detail & Related papers (2024-07-26T07:40:54Z)
- A Survey of Privacy-Preserving Model Explanations: Privacy Risks, Attacks, and Countermeasures [50.987594546912725]
Despite a growing corpus of research on AI privacy and explainability, little attention has been paid to privacy-preserving model explanations.
This article presents the first thorough survey about privacy attacks on model explanations and their countermeasures.
arXiv Detail & Related papers (2024-03-31T12:44:48Z)
- When Graph Convolution Meets Double Attention: Online Privacy Disclosure Detection with Multi-Label Text Classification [6.700420953065072]
It is important to detect such unwanted privacy disclosures in order to alert the affected users and the online platform.
In this paper, privacy disclosure detection is modeled as a multi-label text classification problem.
A new privacy disclosure detection model is proposed to construct an MLTC classifier for detecting online privacy disclosures.
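Modeling disclosure detection as multi-label text classification means a single post can receive several privacy labels at once. A sketch of the independent per-label decision rule (the label names are hypothetical; any model producing per-label logits would plug in here):

```python
import math

# Hypothetical privacy label set; the paper's actual categories are
# not specified in this summary.
LABELS = ["location", "health", "relationship"]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold: float = 0.5):
    """Multi-label decision: each label fires independently when its
    sigmoid-activated score exceeds the threshold."""
    return [lab for lab, z in zip(LABELS, logits) if sigmoid(z) > threshold]
```

Unlike softmax multi-class classification, the sigmoid-per-label formulation lets zero, one, or several labels fire on the same text.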
arXiv Detail & Related papers (2023-11-27T15:25:17Z)
- A Unified View of Differentially Private Deep Generative Modeling [60.72161965018005]
Data with privacy concerns is subject to stringent regulations that frequently prohibit data access and data sharing.
Overcoming these obstacles is key for technological progress in many real-world application scenarios that involve privacy sensitive data.
Differentially private (DP) data publishing provides a compelling solution, where only a sanitized form of the data is publicly released.
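DP data publishing releases noisy statistics rather than raw records. A common building block for such sanitization is the Laplace mechanism, sketched generically below (this is a standard construction, not the survey's specific method):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) by inverse-transform sampling."""
    u = random.random() - 0.5  # uniform in [-0.5, 0.5)
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def sanitized_count(true_count: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy count satisfying epsilon-DP via the Laplace mechanism."""
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller epsilon means a larger noise scale and stronger privacy; the released value is then safe to publish regardless of who queries it.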
arXiv Detail & Related papers (2023-09-27T14:38:16Z)
- Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z)
- Momentum Gradient Descent Federated Learning with Local Differential Privacy [10.60240656423935]
In the big data era, concerns about the privacy of personal information have become more pronounced.
In this article, we propose integrating federated learning and local differential privacy with momentum gradient descent to improve the performance of machine learning models.
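How local noise injection and server-side momentum might be composed can be sketched on a scalar toy problem (the clipping constant, noise scale, and function names are illustrative assumptions, not the paper's exact algorithm):

```python
import random

def local_noisy_gradient(grad: float, clip: float, sigma: float) -> float:
    """Client side: clip the gradient and add Gaussian noise before upload,
    so the server never sees the exact local gradient."""
    g = max(-clip, min(clip, grad))
    return g + random.gauss(0.0, sigma * clip)

def server_momentum_step(w, client_grads, velocity, lr=0.1, beta=0.9,
                         clip=1.0, sigma=0.5):
    """Server side: average the noisy client uploads, then apply a
    momentum update to the shared model parameter."""
    noisy = [local_noisy_gradient(g, clip, sigma) for g in client_grads]
    avg = sum(noisy) / len(noisy)
    velocity = beta * velocity + avg
    return w - lr * velocity, velocity
```

Averaging over many clients cancels much of the per-client noise, which is why momentum over the averaged gradient can recover accuracy lost to local perturbation.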
arXiv Detail & Related papers (2022-09-28T13:30:38Z)
- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)
- The Privacy Onion Effect: Memorization is Relative [76.46529413546725]
We show an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable exposes a new layer of previously-safe points to the same attack.
This suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
arXiv Detail & Related papers (2022-06-21T15:25:56Z)
- Applications of Differential Privacy in Social Network Analysis: A Survey [60.696428840516724]
Differential privacy is effective in sharing information and preserving privacy with a strong guarantee.
Social network analysis has been extensively adopted in many applications, opening a new arena for the application of differential privacy.
arXiv Detail & Related papers (2020-10-06T19:06:03Z)
- Learning With Differential Privacy [3.618133010429131]
Differential privacy comes to the rescue with a formal promise of protection against leakage.
It applies a randomized response technique at data-collection time, which promises strong privacy with good utility.
arXiv Detail & Related papers (2020-06-10T02:04:13Z)
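The randomized-response technique mentioned above can be sketched in a few lines (a standard textbook construction, not necessarily the exact variant the paper uses): each user reports a possibly flipped bit, and the aggregator debiases the noisy total.

```python
import math
import random

def randomized_response(true_bit: bool, epsilon: float) -> bool:
    """Report the true answer with probability e^eps / (e^eps + 1);
    otherwise report its flip. Satisfies epsilon-local-DP per user."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else not true_bit

def estimate_rate(reports, epsilon: float) -> float:
    """Aggregator side: unbiased estimate of the true positive rate
    recovered from the flipped reports."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed + p - 1.0) / (2.0 * p - 1.0)
```

No individual report reveals a user's true bit with certainty, yet the debiased aggregate converges to the true population rate as the number of reports grows.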
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.