Minimizing Polarization in Noisy Leader-Follower Opinion Dynamics
- URL: http://arxiv.org/abs/2308.07008v1
- Date: Mon, 14 Aug 2023 08:52:24 GMT
- Title: Minimizing Polarization in Noisy Leader-Follower Opinion Dynamics
- Authors: Wanyue Xu and Zhongzhi Zhang
- Abstract summary: We consider a problem of polarization optimization for the leader-follower opinion dynamics in a noisy social network.
We adopt the popular leader-follower DeGroot model, where the opinion of every leader is identical and remains unchanged, while the opinion of every follower is subject to white noise.
We show that the objective function is monotone and supermodular. We then propose a simple greedy algorithm with an approximation factor $1-1/e$ that approximately solves the problem in $O((n-q)^3)$ time.
- Score: 17.413930396663833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The operation of creating edges has been widely applied to optimize relevant
quantities of opinion dynamics. In this paper, we consider a problem of
polarization optimization for the leader-follower opinion dynamics in a noisy
social network with $n$ nodes and $m$ edges, where a group $Q$ of $q$ nodes are
leaders, and the remaining $n-q$ nodes are followers. We adopt the popular
leader-follower DeGroot model, where the opinion of every leader is identical
and remains unchanged, while the opinion of every follower is subject to white
noise. The polarization is defined as the steady-state variance of the
deviation of each node's opinion from leaders' opinion, which equals one half
of the effective resistance $\mathcal{R}_Q$ between the node group $Q$ and all
other nodes. Concretely, we propose and study the problem of minimizing
$\mathcal{R}_Q$ by adding $k$ new edges with each incident to a node in $Q$. We
show that the objective function is monotone and supermodular. We then propose
a simple greedy algorithm with an approximation factor $1-1/e$ that
approximately solves the problem in $O((n-q)^3)$ time. To speed up the
computation, we also provide a fast algorithm to compute
$(1-1/e-\epsilon)$-approximate effective resistance $\mathcal{R}_Q$, the running
time of which is $\widetilde{O}(mk\epsilon^{-2})$ for any $\epsilon>0$, where the
$\widetilde{O}(\cdot)$ notation suppresses the ${\rm poly}(\log n)$ factors.
Extensive experimental results show that our second algorithm is both effective
and efficient.
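A minimal sketch of the greedy scheme may help make the procedure concrete. The code below is my own illustration, not the authors' implementation: it assumes the standard grounded-Laplacian identity $\mathcal{R}_Q = \operatorname{Tr}(\bar{L}^{-1})$, where $\bar{L}$ is the Laplacian restricted to follower rows and columns, and all function and variable names are hypothetical. Because every new edge must be incident to a leader in $Q$ and the leaders are grounded out, adding an edge $(u, v)$ with $u \in Q$ changes $\bar{L}$ by the rank-one term $e_v e_v^\top$, so by the Sherman-Morrison formula the trace drops by $(\bar{L}^{-2})_{vv} / (1 + (\bar{L}^{-1})_{vv})$. One $O((n-q)^3)$ inversion up front plus $O((n-q)^2)$ work per round is consistent with the abstract's stated $O((n-q)^3)$ cost.

```python
# Hedged sketch of greedy edge addition for minimizing R_Q = Tr(Lbar^{-1});
# names and scaffolding are illustrative, not from the paper's code.
import numpy as np
import networkx as nx

def greedy_minimize_RQ(G, Q, k):
    """Add k leader-follower edges, greedily maximizing the drop in Tr(Lbar^{-1})."""
    Q = set(Q)
    nodes = list(G.nodes)
    followers = [v for v in nodes if v not in Q]
    pos = {v: i for i, v in enumerate(followers)}
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    keep = [nodes.index(v) for v in followers]
    Linv = np.linalg.inv(L[np.ix_(keep, keep)])  # one O((n-q)^3) inversion
    added = []
    for _ in range(k):
        best_v, best_gain = None, -1.0
        for v in followers:
            if all(G.has_edge(u, v) for u in Q):
                continue  # v is already linked to every leader
            i = pos[v]
            col = Linv[:, i]
            # Sherman-Morrison: trace drop for the update Lbar + e_v e_v^T
            gain = (col @ col) / (1.0 + Linv[i, i])
            if gain > best_gain:
                best_v, best_gain = v, gain
        if best_v is None:
            break  # no admissible leader-follower edge remains
        u = next(u for u in Q if not G.has_edge(u, best_v))
        G.add_edge(u, best_v)
        added.append((u, best_v))
        i = pos[best_v]
        col = Linv[:, i].copy()
        Linv -= np.outer(col, col) / (1.0 + Linv[i, i])  # rank-one inverse update
    return added, 0.5 * np.trace(Linv)  # polarization = R_Q / 2
```

Note that the marginal gain depends only on the follower endpoint $v$, since all leaders hold the identical fixed opinion; the sketch therefore scans followers rather than all $(u, v)$ pairs.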
Related papers
- Online Matrix Completion: A Collaborative Approach with Hott Items [19.781869063637387]
We investigate the low-rank matrix completion problem in an online setting with $M$ users, $N$ items, $T$ rounds, and an unknown rank-$r$ reward matrix $R \in \mathbb{R}^{M \times N}$.
arXiv Detail & Related papers (2024-08-11T18:49:52Z) - Dynamic Correlation Clustering in Sublinear Update Time [33.079608335361286]
We study the classic problem of correlation clustering in dynamic node streams.
In this setting, nodes are either added or randomly deleted over time, and each node pair is connected by a positive or negative edge.
We present an algorithm that maintains an $O(1)$-approximation with $O(\mathrm{polylog}\, n)$ amortized update time.
arXiv Detail & Related papers (2024-06-13T14:07:15Z) - Near Sample-Optimal Reduction-based Policy Learning for Average Reward
MDP [58.13930707612128]
This work considers the sample complexity of obtaining an $\varepsilon$-optimal policy in an average reward Markov Decision Process (AMDP).
We prove an upper bound of $\widetilde{O}(H \varepsilon^{-3} \ln \frac{1}{\delta})$ samples per state-action pair, where $H := \mathrm{sp}(h^*)$ is the span of the bias of any optimal policy, $\varepsilon$ is the accuracy, and $\delta$ is the failure probability.
arXiv Detail & Related papers (2022-12-01T15:57:58Z) - Lower Bounds on the Worst-Case Complexity of Efficient Global
Optimization [11.523746174066702]
We derive a unified lower bound for the complexity of efficient global optimization in terms of the metric entropy of a ball in its corresponding reproducing kernel Hilbert space (RKHS).
We show that this lower bound nearly matches the upper bound attained by non-adaptive search algorithms for the commonly used squared exponential kernel and the Matérn kernel.
arXiv Detail & Related papers (2022-09-20T11:57:13Z) - Faster Sampling from Log-Concave Distributions over Polytopes via a
Soft-Threshold Dikin Walk [28.431572772564518]
We consider the problem of sampling from a $d$-dimensional log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a polytope $K$ defined by $m$ inequalities.
Our main result is a "soft-warm" variant of the Dikin walk Markov chain that requires at most $O((md + d L^2 R^2) \times md^{\omega-1}) \log(\frac{w}{\delta})$ arithmetic operations to sample from $\pi$.
arXiv Detail & Related papers (2022-06-19T11:33:07Z) - Approximate Function Evaluation via Multi-Armed Bandits [51.146684847667125]
We study the problem of estimating the value of a known smooth function $f$ at an unknown point $\boldsymbol{\mu} \in \mathbb{R}^n$, where each component $\mu_i$ can be sampled via a noisy oracle.
We design an instance-adaptive algorithm that learns to sample according to the importance of each coordinate, and with probability at least $1-\delta$ returns an $\epsilon$-accurate estimate of $f(\boldsymbol{\mu})$.
arXiv Detail & Related papers (2022-03-18T18:50:52Z) - Corralling a Larger Band of Bandits: A Case Study on Switching Regret
for Linear Bandits [99.86860277006318]
We consider the problem of combining and learning over a set of adversarial algorithms with the goal of adaptively tracking the best one on the fly.
The CORRAL algorithm of Agarwal et al. achieves this goal with a regret overhead of order $\widetilde{O}(\sqrt{MST})$, where $M$ is the number of base algorithms and $T$ is the time horizon.
Motivated by this issue, we propose a new recipe to corral a larger band of bandit algorithms whose regret overhead has only logarithmic dependence on $M$.
arXiv Detail & Related papers (2022-02-12T21:55:44Z) - Active Sampling for Linear Regression Beyond the $\ell_2$ Norm [70.49273459706546]
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector.
We show that this dependence on $d$ is optimal, up to logarithmic factors.
We also provide the first total sensitivity upper bound $O(d^{\max\{1, p/2\}} \log^2 n)$ for loss functions with at most degree-$p$ growth.
arXiv Detail & Related papers (2021-11-09T00:20:01Z) - Maximizing Influence of Leaders in Social Networks [15.05158252504978]
We consider the edge addition problem for the DeGroot model of opinion dynamics in a social network with $n$ nodes and $m$ edges.
We show that our algorithm is efficient and effective, which scales to large networks with more than a million nodes.
arXiv Detail & Related papers (2021-06-11T02:31:46Z) - Optimal Regret Algorithm for Pseudo-1d Bandit Convex Optimization [51.23789922123412]
We study online learning with bandit feedback (i.e. learner has access to only zeroth-order oracle) where cost/reward functions admit a "pseudo-1d" structure.
We show a lower bound of $\min(\sqrt{dT}, T^{3/4})$ for the regret of any algorithm, where $T$ is the number of rounds.
We propose a new algorithm that combines randomized online gradient descent with a kernelized exponential weights method to exploit the pseudo-1d structure effectively.
arXiv Detail & Related papers (2021-02-15T08:16:51Z) - Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that, for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)