Adversaries with Limited Information in the Friedkin--Johnsen Model
- URL: http://arxiv.org/abs/2306.10313v2
- Date: Tue, 12 Sep 2023 19:05:59 GMT
- Title: Adversaries with Limited Information in the Friedkin--Johnsen Model
- Authors: Sijing Tu, Stefan Neumann, Aristides Gionis
- Abstract summary: In recent years, online social networks have been the target of adversaries who seek to introduce discord into societies.
We present approximation algorithms for detecting a small set of users who are highly influential for the disagreement and polarization in the network.
- Score: 25.89905526128351
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, online social networks have been the target of adversaries
who seek to introduce discord into societies, to undermine democracies and to
destabilize communities. Often the goal is not to favor a certain side of a
conflict but to increase disagreement and polarization. To get a mathematical
understanding of such attacks, researchers use opinion-formation models from
sociology, such as the Friedkin--Johnsen model, and formally study how much
discord the adversary can produce when altering the opinions for only a small
set of users. In this line of work, it is commonly assumed that the adversary
has full knowledge about the network topology and the opinions of all users.
However, the latter assumption is often unrealistic in practice, where user
opinions are not available or simply difficult to estimate accurately.
To address this concern, we raise the following question: Can an attacker sow
discord in a social network, even when only the network topology is known? We
answer this question affirmatively. We present approximation algorithms for
detecting a small set of users who are highly influential for the disagreement
and polarization in the network. We show that when the adversary radicalizes
these users and if the initial disagreement/polarization in the network is not
very high, then our method achieves a constant-factor approximation relative to
the setting in which the user opinions are known. To find the set of influential users, we
provide a novel approximation algorithm for a variant of MaxCut in graphs with
positive and negative edge weights. We experimentally evaluate our methods,
which have access only to the network topology, and we find that they have
similar performance as methods that have access to the network topology and all
user opinions. We further present an NP-hardness proof, resolving an open
question posed by Chen and Racz [IEEE Trans. Netw. Sci. Eng., 2021].
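The Friedkin--Johnsen dynamics underlying the paper can be sketched concretely. Under the standard convention, each user holds an innate opinion s_i, and the equilibrium expressed opinions are z = (I + L)^{-1} s, where L is the graph Laplacian; disagreement sums w_ij (z_i - z_j)^2 over edges, and polarization sums the squared deviations of z from its mean. This is a minimal illustration of those textbook definitions, not the paper's algorithms, and the exact index normalizations used by the authors may differ.

```python
import numpy as np

def friedkin_johnsen_equilibrium(adj, s):
    """Equilibrium expressed opinions z = (I + L)^{-1} s of the
    Friedkin--Johnsen model, where L = D - A is the graph Laplacian."""
    deg = np.diag(adj.sum(axis=1))
    laplacian = deg - adj
    return np.linalg.solve(np.eye(len(s)) + laplacian, s)

def disagreement(adj, z):
    """Sum over edges of w_ij * (z_i - z_j)^2, each edge counted once."""
    diff = z[:, None] - z[None, :]
    return 0.5 * np.sum(adj * diff ** 2)

def polarization(z):
    """Sum of squared deviations of the expressed opinions from their mean."""
    centered = z - z.mean()
    return np.sum(centered ** 2)

# Example: two connected users with opposite innate opinions.
adj = np.array([[0.0, 1.0],
                [1.0, 0.0]])
s = np.array([1.0, -1.0])
z = friedkin_johnsen_equilibrium(adj, s)  # -> [1/3, -1/3]
```

On this two-node graph, (I + L) = [[2, -1], [-1, 2]], so the expressed opinions shrink toward each other (z = [1/3, -1/3]); an adversary "radicalizing" a user corresponds to pushing its entry of s to an extreme value, which inflates both indices.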
Related papers
- Structure and dynamics of growing networks of Reddit threads [0.0]
We study a Reddit community in which people participate to judge or be judged with respect to some behavior.
We model threads of this community as complex networks of user interactions growing in time.
We show that the evolution of Reddit networks differs from that of other real social networks, despite falling in the same category.
arXiv Detail & Related papers (2024-09-06T07:53:33Z) - User Strategization and Trustworthy Algorithms [81.82279667028423]
We show that user strategization can actually help platforms in the short term.
We then show that it corrupts platforms' data and ultimately hurts their ability to make counterfactual decisions.
arXiv Detail & Related papers (2023-12-29T16:09:42Z) - Evading Community Detection via Counterfactual Neighborhood Search [10.990525728657747]
Community detection is useful for social media platforms to discover tightly connected groups of users who share common interests.
Some users may wish to preserve their anonymity and opt out of community detection for various reasons, such as affiliation with political or religious organizations, without leaving the platform.
In this study, we address the challenge of community membership hiding, which involves strategically altering the structural properties of a network graph to prevent one or more nodes from being identified by a given community detection algorithm.
arXiv Detail & Related papers (2023-10-13T07:30:50Z) - BeMap: Balanced Message Passing for Fair Graph Neural Network [50.910842893257275]
We show that message passing could amplify the bias when the 1-hop neighbors from different demographic groups are unbalanced.
We propose BeMap, a fair message passing method, that balances the number of the 1-hop neighbors of each node among different demographic groups.
arXiv Detail & Related papers (2023-06-07T02:16:36Z) - The Devil is in the Conflict: Disentangled Information Graph Neural
Networks for Fraud Detection [17.254383007779616]
We argue that the performance degradation is mainly attributed to the inconsistency between topology and attribute.
We propose a simple and effective method that uses the attention mechanism to adaptively fuse two views.
Our model can significantly outperform state-of-the-art baselines on real-world fraud detection datasets.
arXiv Detail & Related papers (2022-10-22T08:21:49Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Relational Graph Neural Networks for Fraud Detection in a Super-App
environment [53.561797148529664]
We propose a framework of relational graph convolutional networks methods for fraudulent behaviour prevention in the financial services of a Super-App.
We use an interpretability algorithm for graph neural networks to determine the most important relations to the classification task of the users.
Our results show that there is added value in models that take advantage of the alternative data of the Super-App and the interactions found in their high connectivity.
arXiv Detail & Related papers (2021-07-29T00:02:06Z) - Your most telling friends: Propagating latent ideological features on
Twitter using neighborhood coherence [0.0]
We use Twitter data to produce an ideological scaling for 370K users, and analyze the two families of propagation methods on a population of 6.5M users.
We find that, when coherence is considered, the ideology of a user is better estimated from those with similar neighborhoods, than from their immediate neighbors.
arXiv Detail & Related papers (2021-03-12T13:01:59Z) - Detecting Online Hate Speech: Approaches Using Weak Supervision and
Network Embedding Models [2.3322477552758234]
We (i) propose a weak supervision deep learning model that quantitatively uncovers hateful users and (ii) present a novel qualitative analysis to uncover indirect hateful conversations.
We evaluate our model on 19.2M posts and show that our weak supervision model outperforms the baseline models in identifying indirect hateful interactions.
We also analyze a multilayer network, constructed from two types of user interactions in Gab (quote and reply) and interaction scores from the weak supervision model as edge weights, to predict hateful users.
arXiv Detail & Related papers (2020-07-24T18:13:52Z) - Fairness Through Robustness: Investigating Robustness Disparity in Deep
Learning [61.93730166203915]
We argue that traditional notions of fairness are not sufficient when the model is vulnerable to adversarial attacks.
We show that measuring robustness bias is a challenging task for DNNs and propose two methods to measure this form of bias.
arXiv Detail & Related papers (2020-06-17T22:22:24Z) - Adversarial Attack on Community Detection by Hiding Individuals [68.76889102470203]
We focus on black-box attack and aim to hide targeted individuals from the detection of deep graph community detection models.
We propose an iterative learning framework that takes turns to update two modules: one working as the constrained graph generator and the other as the surrogate community detection model.
arXiv Detail & Related papers (2020-01-22T09:50:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.