Community Detection for Access-Control Decisions: Analysing the Role of
Homophily and Information Diffusion in Online Social Networks
- URL: http://arxiv.org/abs/2104.09137v2
- Date: Mon, 7 Jun 2021 08:55:24 GMT
- Authors: Nicolas E. Diaz Ferreyra, Tobias Hecking, Esma Aïmeur, Maritta Heisel, and H. Ulrich Hoppe
- Abstract summary: Access-Control Lists (ACLs) are one of the most important privacy features of Online Social Networks (OSNs).
This work investigates the use of community-detection algorithms for the automatic generation of ACLs in OSNs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Access-Control Lists (ACLs) (a.k.a. friend lists) are one of the most
important privacy features of Online Social Networks (OSNs) as they allow users
to restrict the audience of their publications. Nevertheless, creating and
maintaining custom ACLs can place a high cognitive burden on average OSN users, since it normally requires assessing the trustworthiness of a large
number of contacts. In principle, community detection algorithms can be
leveraged to support the generation of ACLs by mapping a set of examples (i.e.
contacts labelled as untrusted) to the emerging communities inside the user's
ego-network. However, unlike users' access-control preferences, traditional
community-detection algorithms do not take the homophily characteristics of
such communities into account (i.e. attributes shared among members).
Consequently, this strategy may lead to inaccurate ACL configurations and
privacy breaches under certain homophily scenarios. This work investigates the
use of community-detection algorithms for the automatic generation of ACLs in
OSNs. In particular, it analyses the performance of this approach
under different homophily conditions through a simulation model. Furthermore,
since private information may reach the scope of untrusted recipients through
the re-sharing affordances of OSNs, information diffusion processes are also
modelled and taken explicitly into account. Finally, the removal of gatekeeper nodes is explored as a strategy to counteract unwanted data dissemination.
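
As a rough illustration of the pipeline the abstract describes, the following sketch (not the authors' simulation model) uses networkx to detect communities in a toy ego-network, block every community containing a contact labelled as untrusted, simulate re-sharing with a simple independent-cascade process, and prune high-betweenness gatekeeper nodes. The graph, labels, re-share probability, and centrality cutoff are all hypothetical.

```python
# Illustrative sketch only -- NOT the authors' simulation model. The graph,
# untrusted labels, re-share probability and centrality cutoff are hypothetical.
import random
import networkx as nx
from networkx.algorithms import community

def acl_from_communities(ego_net, untrusted_examples):
    """Map untrusted example contacts to detected communities and block
    every community that contains at least one of them."""
    blocked = set()
    for comm in community.greedy_modularity_communities(ego_net):
        if comm & untrusted_examples:          # community holds an untrusted contact
            blocked |= set(comm)
    return set(ego_net) - blocked              # remaining contacts form the audience

def simulate_resharing(ego_net, audience, p_reshare=0.3, seed=42):
    """Toy independent-cascade diffusion: a post shared with `audience` may be
    re-shared along each edge with probability `p_reshare`."""
    rng = random.Random(seed)
    informed, frontier = set(audience), list(audience)
    while frontier:
        node = frontier.pop()
        for neighbour in ego_net[node]:
            if neighbour not in informed and rng.random() < p_reshare:
                informed.add(neighbour)
                frontier.append(neighbour)
    return informed

G = nx.karate_club_graph()                     # hypothetical ego-network
untrusted = {0, 33}                            # hypothetical labelled examples
acl = acl_from_communities(G, untrusted)
print("leak:", simulate_resharing(G, acl) & untrusted)

# Gatekeeper removal: exclude high-betweenness bridge contacts from the
# audience to curb unwanted dissemination (a simple stand-in strategy).
gatekeepers = {n for n, b in nx.betweenness_centrality(G).items() if b > 0.1}
print("leak without gatekeepers:",
      simulate_resharing(G, acl - gatekeepers) & untrusted)
```

In the abstract's terms, the more homophilous the detected communities are with respect to the user's trust preferences, the better the blocked communities approximate the intended ACL; the cascade step shows how a post can still leak to untrusted contacts via re-sharing unless gatekeeper nodes are pruned from the audience.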
Related papers
- Unveiling Privacy Vulnerabilities: Investigating the Role of Structure in Graph Data [17.11821761700748]
This study advances the understanding of, and protection against, privacy risks arising from network structure.
We develop a novel graph private attribute inference attack, which acts as a pivotal tool for evaluating the potential for privacy leakage through network structures.
Our attack model poses a significant threat to user privacy, and our graph data publishing method successfully achieves the optimal privacy-utility trade-off.
arXiv Detail & Related papers (2024-07-26T07:40:54Z)

- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)

- Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
arXiv Detail & Related papers (2024-06-06T06:12:15Z)
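
As a loose illustration of the semi-decentralized setup this summary sketches (not the paper's actual estimator or privacy mechanism), each node below perturbs its private vector with Gaussian noise, averages with the neighbours it can currently reach, and relays the result to a server. The connectivity probability and noise scale are hypothetical.

```python
# Illustrative sketch -- NOT the paper's scheme. Connectivity probability and
# noise scale are hypothetical parameters.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 8                          # vector dimension, number of nodes
x = rng.normal(size=(n, d))          # private local vectors

def local_consensus(x, adjacency, noise_scale=0.1):
    noisy = x + rng.normal(scale=noise_scale, size=x.shape)   # local DP-style noise
    relayed = []
    for i in range(n):
        reachable = [j for j in range(n) if adjacency[i, j]] + [i]
        relayed.append(noisy[reachable].mean(axis=0))         # consensus over reachable set
    return np.stack(relayed)

# Intermittent connectivity: each link is up with probability 0.5.
adjacency = rng.random((n, n)) < 0.5
np.fill_diagonal(adjacency, False)

server_estimate = local_consensus(x, adjacency).mean(axis=0)  # server averages relays
print(np.linalg.norm(server_estimate - x.mean(axis=0)))       # estimation error
```
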
- GNNBleed: Inference Attacks to Unveil Private Edges in Graphs with Realistic Access to GNN Models [3.0509197593879844]
This paper investigates edge privacy in contexts where adversaries possess black-box GNN model access.
We introduce a series of privacy attacks grounded on the message-passing mechanism of GNNs.
arXiv Detail & Related papers (2023-11-03T20:26:03Z)

- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
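Read as a hedged sketch, this claim resembles a Girsanov-style bound for noisy (Langevin-type) training with gradient noise variance $\sigma^2$; the paper's exact statement, constants, and conditions will differ:

$$
D_{\mathrm{KL}}\big(\theta_{0:T} \,\|\, \theta'_{0:T}\big) \;\le\; \frac{1}{2\sigma^{2}} \int_{0}^{T} \mathbb{E}\Big[\big\|\nabla_{\theta}\mathcal{L}(\theta_{t}; D) - \nabla_{\theta}\mathcal{L}(\theta_{t}; D')\big\|^{2}\Big]\, dt,
$$

where $D$ and $D'$ are worst-case neighbouring datasets, so the bound is driven by the expected squared gradient norm along the training trajectory, as the summary states.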
arXiv Detail & Related papers (2023-10-31T16:13:22Z)

- Evading Community Detection via Counterfactual Neighborhood Search [10.990525728657747]
Community detection is useful for social media platforms to discover tightly connected groups of users who share common interests.
Some users may wish to preserve their anonymity and opt out of community detection for various reasons, such as affiliation with political or religious organizations, without leaving the platform.
In this study, we address the challenge of community membership hiding, which involves strategically altering the structural properties of a network graph to prevent one or more nodes from being identified by a given community detection algorithm.
arXiv Detail & Related papers (2023-10-13T07:30:50Z)
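
A toy greedy heuristic in the spirit of this problem (not the paper's counterfactual neighborhood search): repeatedly cut the target's direct ties into its detected community until the detector no longer groups it with the original members. The detector choice, edit budget, and example graph are hypothetical.

```python
# Toy greedy heuristic -- NOT the paper's counterfactual search. Edit budget,
# detector choice and the example graph are hypothetical.
import networkx as nx
from networkx.algorithms import community

def hide_membership(G, target, max_edits=5):
    """Cut the target's direct intra-community ties until the detector stops
    grouping it with its original community (or the edit budget runs out)."""
    original = next(c for c in community.greedy_modularity_communities(G) if target in c)
    H = G.copy()
    for _ in range(max_edits):
        detected = next(c for c in community.greedy_modularity_communities(H) if target in c)
        overlap = (detected & original) - {target}
        if not overlap:
            return H                           # membership successfully hidden
        tie = next((n for n in H[target] if n in overlap), None)
        if tie is None:
            break                              # only indirect ties remain
        H.remove_edge(target, tie)
    return H

H = hide_membership(nx.karate_club_graph(), target=2)
```
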
- Social-Aware Clustered Federated Learning with Customized Privacy Preservation [38.00035804720786]
We propose a novel Social-aware Clustered Federated Learning (SCFL) scheme, where mutually trusted individuals can freely form a social cluster.
By mixing model updates within a social group, adversaries can only eavesdrop on the social-layer combined results, not the private updates of individuals.
Experiments on a Facebook network and the MNIST/CIFAR-10 datasets validate that our SCFL can effectively enhance learning utility, improve user payoff, and enforce customizable privacy protection.
arXiv Detail & Related papers (2022-12-25T10:16:36Z)

- Cross-Network Social User Embedding with Hybrid Differential Privacy Guarantees [81.6471440778355]
We propose a Cross-network Social User Embedding framework, namely DP-CroSUE, to learn the comprehensive representations of users in a privacy-preserving way.
In particular, for each heterogeneous social network, we first introduce a hybrid differential privacy notion to capture the variation of privacy expectations for heterogeneous data types.
To further enhance user embeddings, a novel cross-network GCN embedding model is designed to transfer knowledge across networks through those aligned users.
arXiv Detail & Related papers (2022-09-04T06:22:37Z)

- Content Popularity Prediction in Fog-RANs: A Clustered Federated Learning Based Approach [66.31587753595291]
We propose a novel mobility-aware popularity prediction policy, which integrates content popularities in terms of local users and mobile users.
For local users, the content popularity is predicted by learning the hidden representations of local users and contents.
For mobile users, the content popularity is predicted via user preference learning.
arXiv Detail & Related papers (2022-06-13T03:34:00Z)

- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU's prediction accuracy is 90.96% higher than that of the advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
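
For context on the aggregation step such federated schemes rely on, here is a minimal FedAvg-style sketch (illustrative only; FedGRU's secure parameter aggregation protocol and GRU training loop are not reproduced). The parameter shapes and client sizes are hypothetical.

```python
# Minimal FedAvg-style sketch of the aggregation step federated schemes like
# FedGRU build on -- illustrative only; the secure aggregation protocol and
# GRU training loop are omitted. Shapes and client sizes are hypothetical.
import numpy as np

def federated_average(client_weights, client_sizes):
    """Weighted average of per-client parameter dictionaries."""
    total = sum(client_sizes)
    return {k: sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
            for k in client_weights[0]}

rng = np.random.default_rng(1)
# Hypothetical GRU parameter shapes for 3 clients with different data volumes.
shapes = {"W_z": (16, 8), "W_r": (16, 8), "W_h": (16, 8), "b": (16,)}
clients = [{k: rng.normal(size=s) for k, s in shapes.items()} for _ in range(3)]
global_model = federated_average(clients, client_sizes=[120, 80, 200])
```
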
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.