Heterogeneous Peer Effects in the Linear Threshold Model
- URL: http://arxiv.org/abs/2201.11242v1
- Date: Thu, 27 Jan 2022 00:23:26 GMT
- Title: Heterogeneous Peer Effects in the Linear Threshold Model
- Authors: Christopher Tran, Elena Zheleva
- Abstract summary: The Linear Threshold Model describes how information diffuses through a social network.
We propose causal inference methods for estimating individual thresholds that can more accurately predict whether and when individuals will be affected by their peers.
Our experimental results on synthetic and real-world datasets show that our proposed models can better predict individual-level thresholds in the Linear Threshold Model.
- Score: 13.452510519858995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Linear Threshold Model is a widely used model that describes how
information diffuses through a social network. According to this model, an
individual adopts an idea or product after the proportion of their neighbors
who have adopted it reaches a certain threshold. Typical applications of the
Linear Threshold Model assume that thresholds are either the same for all
network nodes or randomly distributed, even though some people may be more
susceptible to peer pressure than others. To address individual-level
differences, we propose causal inference methods for estimating individual
thresholds that can more accurately predict whether and when individuals will
be affected by their peers. We introduce the concept of heterogeneous peer
effects and develop a Structural Causal Model which corresponds to the Linear
Threshold Model and supports heterogeneous peer effect identification and
estimation. We develop two algorithms for individual threshold estimation, one
based on causal trees and one based on causal meta-learners. Our experimental
results on synthetic and real-world datasets show that our proposed models can
better predict individual-level thresholds in the Linear Threshold Model and
thus more precisely predict which nodes will get activated over time.
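The adoption rule the abstract describes can be sketched in a few lines of Python. This is a minimal illustration of Linear Threshold Model diffusion with per-node (heterogeneous) thresholds, not the paper's estimation method: the toy graph, the threshold values, and the seed set below are all illustrative assumptions.

```python
def ltm_diffusion(neighbors, thresholds, seeds, max_steps=10):
    """Run LTM diffusion: a node activates once the fraction of its
    active neighbors reaches that node's individual threshold."""
    active = set(seeds)
    for _ in range(max_steps):
        newly_active = set()
        for node, nbrs in neighbors.items():
            if node in active or not nbrs:
                continue
            frac = sum(n in active for n in nbrs) / len(nbrs)
            if frac >= thresholds[node]:
                newly_active.add(node)
        if not newly_active:
            break  # no node changed state: diffusion has converged
        active |= newly_active
    return active

# Toy undirected graph as an adjacency dict (illustrative only).
neighbors = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b", "d"],
    "d": ["b", "c", "e"],
    "e": ["d"],
}
# Heterogeneous thresholds: some nodes are more susceptible to
# peer pressure (low threshold) than others (high threshold).
thresholds = {"a": 0.5, "b": 0.3, "c": 0.6, "d": 0.4, "e": 0.9}

final = ltm_diffusion(neighbors, thresholds, seeds={"a"})
```

With these thresholds a single seed cascades through the whole graph, whereas a uniform high threshold (the homogeneous assumption the paper argues against) stops the same cascade immediately, which is why estimating thresholds at the individual level changes which nodes are predicted to activate.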
Related papers
- Sub-graph Based Diffusion Model for Link Prediction [43.15741675617231]
Denoising Diffusion Probabilistic Models (DDPMs) represent a contemporary class of generative models with exceptional qualities.
We build a novel generative model for link prediction using a dedicated design to decompose the likelihood estimation process via the Bayesian formula.
Our proposed method presents numerous advantages: (1) transferability across datasets without retraining, (2) promising generalization on limited training data, and (3) robustness against graph adversarial attacks.
arXiv Detail & Related papers (2024-09-13T02:23:55Z)
- Towards Theoretical Understandings of Self-Consuming Generative Models [56.84592466204185]
This paper tackles the emerging challenge of training generative models within a self-consuming loop.
We construct a theoretical framework to rigorously evaluate how this training procedure impacts the data distributions learned by future models.
We present results for kernel density estimation, delivering nuanced insights such as the impact of mixed data training on error propagation.
arXiv Detail & Related papers (2024-02-19T02:08:09Z)
- Distilling Influences to Mitigate Prediction Churn in Graph Neural Networks [4.213427823201119]
Models with similar performance can exhibit significant disagreement in their predictions on individual samples, referred to as prediction churn.
We propose a novel metric called Influence Difference (ID) to quantify the variation in reasons used by nodes across models.
We also consider the differences between nodes with stable and unstable predictions, positing that both rely on different reasons across models to an equal extent.
As an efficient approximation, we introduce DropDistillation (DD) that matches the output for a graph perturbed by edge deletions.
arXiv Detail & Related papers (2023-10-02T07:37:28Z)
- Dual Student Networks for Data-Free Model Stealing [79.67498803845059]
Two main challenges in data-free model stealing are estimating gradients of the target model without access to its parameters, and generating a diverse set of training samples.
We propose a Dual Student method where two students are symmetrically trained in order to provide the generator a criterion to generate samples that the two students disagree on.
We show that our new optimization framework provides more accurate gradient estimation of the target model and better accuracies on benchmark classification datasets.
arXiv Detail & Related papers (2023-09-18T18:11:31Z)
- A performance characteristic curve for model evaluation: the application in information diffusion prediction [3.8711489380602804]
We propose a metric based on information entropy to quantify the randomness in diffusion data, then identify a scaling pattern between the randomness and the prediction accuracy of the model.
Data points produced by different sequence lengths, system sizes, and degrees of randomness all collapse onto a single curve, capturing a model's inherent capability of making correct predictions.
The validity of the curve is tested by three prediction models in the same family, reaching conclusions in line with existing studies.
arXiv Detail & Related papers (2023-09-18T07:32:57Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important in forecasting nonstationary processes or with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Confidence estimation of classification based on the distribution of the neural network output layer [4.529188601556233]
One of the most common problems preventing the application of prediction models in the real world is lack of generalization.
We propose novel methods that estimate uncertainty of particular predictions generated by a neural network classification model.
The proposed methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.
arXiv Detail & Related papers (2022-10-14T12:32:50Z)
- On the Prediction Instability of Graph Neural Networks [2.3605348648054463]
Instability of trained models can undermine the reliability of, and trust in, machine learning systems.
We systematically assess the prediction instability of node classification with state-of-the-art Graph Neural Networks (GNNs).
We find that up to one third of the incorrectly classified nodes differ across algorithm runs.
arXiv Detail & Related papers (2022-05-20T10:32:59Z)
- Estimation of Bivariate Structural Causal Models by Variational Gaussian Process Regression Under Likelihoods Parametrised by Normalising Flows [74.85071867225533]
Causal mechanisms can be described by structural causal models.
One major drawback of state-of-the-art artificial intelligence is its lack of explainability.
arXiv Detail & Related papers (2021-09-06T14:52:58Z)
- A Twin Neural Model for Uplift [59.38563723706796]
Uplift is a particular case of conditional treatment effect modeling.
We propose a new loss function defined by leveraging a connection with the Bayesian interpretation of the relative risk.
We show our proposed method is competitive with the state-of-the-art in simulation setting and on real data from large scale randomized experiments.
arXiv Detail & Related papers (2021-05-11T16:02:39Z)
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.