Maximizing Influence of Leaders in Social Networks
- URL: http://arxiv.org/abs/2106.06128v1
- Date: Fri, 11 Jun 2021 02:31:46 GMT
- Title: Maximizing Influence of Leaders in Social Networks
- Authors: Xiaotian Zhou and Zhongzhi Zhang
- Abstract summary: We consider the edge addition problem for the DeGroot model of opinion dynamics in a social network with $n$ nodes and $m$ edges.
We show that our algorithm is efficient and effective, scaling to large networks with more than a million nodes.
- Score: 15.05158252504978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The operation of adding edges has been frequently used in the study of
opinion dynamics in social networks for various purposes. In this paper, we
consider the edge addition problem for the DeGroot model of opinion dynamics in
a social network with $n$ nodes and $m$ edges, in the presence of a small
number $s \ll n$ of competing leaders with binary opposing opinions 0 or 1.
Concretely, we pose and investigate the problem of maximizing the equilibrium
overall opinion by creating $k$ new edges in a candidate edge set, where each
edge is incident to a 1-valued leader and a follower node. We show that the
objective function is monotone and submodular. We then propose a simple greedy
algorithm with an approximation factor $(1-\frac{1}{e})$ that approximately
solves the problem in $O(n^3)$ time. Moreover, we provide a fast algorithm with
a $(1-\frac{1}{e}-\epsilon)$ approximation ratio and
$\tilde{O}(mk\epsilon^{-2})$ time complexity for any $\epsilon>0$, where
$\tilde{O}(\cdot)$ notation suppresses the ${\rm poly} (\log n)$ factors.
Extensive experiments demonstrate that our second approximation algorithm is
efficient and effective, scaling to large networks with more than a million
nodes.
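In the DeGroot model, each node repeatedly updates its opinion to a weighted average of its neighbors' opinions, $x(t+1) = W x(t)$, with leader opinions held fixed, and the objective above is the sum of equilibrium opinions. Below is a minimal Python sketch of the classic greedy for maximizing a monotone submodular set function under a cardinality constraint, which is the structure behind the paper's $O(n^3)$ algorithm. The evaluator `overall_opinion` is a hypothetical placeholder for solving the DeGroot equilibrium after adding a set of candidate edges; the paper's fast $\tilde{O}(mk\epsilon^{-2})$ variant, which avoids exact evaluation, is not reproduced here.

```python
# Hedged sketch: generic greedy for a monotone submodular objective under a
# cardinality constraint. `overall_opinion(edges)` is a hypothetical
# placeholder that should return the equilibrium overall opinion of the
# DeGroot dynamics after adding `edges`; it is not implemented here.

def greedy_edge_addition(candidate_edges, k, overall_opinion):
    """Select k edges from candidate_edges, each step taking the edge with
    the largest marginal gain in the equilibrium overall opinion."""
    chosen, remaining = set(), set(candidate_edges)
    for _ in range(min(k, len(remaining))):
        base = overall_opinion(chosen)
        # Pick the edge whose addition increases the objective the most.
        best = max(remaining, key=lambda e: overall_opinion(chosen | {e}) - base)
        chosen.add(best)
        remaining.remove(best)
    return chosen
```

By the classic Nemhauser-Wolsey-Fisher result, monotonicity and submodularity of the objective are exactly what give this greedy its $(1-\frac{1}{e})$ guarantee.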
Related papers
- Deep Neural Networks: Multi-Classification and Universal Approximation [0.0]
We demonstrate that a ReLU deep neural network with a width of $2$ and a depth of $2N+4M-1$ layers can achieve finite sample memorization for any dataset comprising $N$ elements.
We also provide depth estimates for approximating $W^{1,p}$ functions and width estimates for approximating $L^p(\Omega;\mathbb{R}^m)$ for $m\geq 1$.
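For concreteness, here is a minimal numpy sketch (an illustration, not from the paper) of the architecture family this result concerns: a deep ReLU network whose hidden layers all have width $2$. The random weights below only exhibit the shape; the weights achieving the stated memorization are constructed explicitly in the paper.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def width2_relu_net(x, weights, biases):
    """Forward pass of a deep ReLU network whose hidden layers all have
    width 2; the output layer is linear."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    return weights[-1] @ h + biases[-1]

# Random instance: input dim 3, four width-2 hidden layers, scalar output.
rng = np.random.default_rng(0)
dims = [3, 2, 2, 2, 2, 1]
weights = [rng.standard_normal((dout, din)) for din, dout in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(dout) for dout in dims[1:]]
print(width2_relu_net(rng.standard_normal(3), weights, biases))
```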
arXiv Detail & Related papers (2024-09-10T14:31:21Z)
- A Near-Linear Time Approximation Algorithm for Beyond-Worst-Case Graph Clustering [18.29151197560866]
We consider the semi-random graph model of [Makarychev, Makarychev and Vijayaraghavan, STOC'12].
A polynomial-time algorithm is known to approximate the Balanced Cut problem up to value $O(\alpha)$ [MMV'12] as long as the cut $(A, B)$ has size $\Omega(\alpha)$.
We study the fine-grained complexity of the problem and present the first near-linear time subroutine that achieves performance similar to that of [MMV'12].
arXiv Detail & Related papers (2024-06-07T11:40:54Z)
- Minimizing Polarization in Noisy Leader-Follower Opinion Dynamics [17.413930396663833]
We consider a problem of polarization optimization for the leader-follower opinion dynamics in a noisy social network.
We adopt the popular leader-follower DeGroot model, where the opinion of every leader is identical and remains unchanged, while the opinion of every follower is subject to white noise.
We show that the objective function is monotone and supermodular. We then propose a simple greedy algorithm with an approximation factor $1-1/e$ that approximately solves the problem in $O((n-q)^3)$ time.
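As an illustration of the model being optimized (a toy sketch under assumed names, not the paper's algorithm), the following simulates noisy leader-follower DeGroot dynamics: leaders keep one fixed identical opinion while followers average their neighbors' opinions and receive white noise. The paper's objective is then a polarization measure of the steady-state opinions.

```python
import numpy as np

def simulate_leader_follower(W, leaders, leader_value, x0, noise_std, steps, seed=0):
    """Toy simulation of noisy leader-follower DeGroot dynamics.
    W: row-stochastic averaging matrix; leaders: indices of leader nodes,
    all holding the fixed opinion leader_value; followers get white noise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    x[leaders] = leader_value
    for _ in range(steps):
        x = W @ x + noise_std * rng.standard_normal(len(x))
        x[leaders] = leader_value  # leaders are unaffected by averaging or noise
    return x
```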
arXiv Detail & Related papers (2023-08-14T08:52:24Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Most Neural Networks Are Almost Learnable [52.40331776572531]
We show that for any fixed $\epsilon>0$ and depth $i$, there is a poly-time algorithm that learns random Xavier networks of depth $i$.
The algorithm runs in time and sample complexity of $(\bar{d})^{\mathrm{poly}(\epsilon^{-1})}$, where $\bar{d}$ is the size of the network.
For some cases of sigmoid and ReLU-like activations the bound can be improved to $(\bar{d})^{\mathrm{polylog}(\epsilon^{-1})}$.
arXiv Detail & Related papers (2023-05-25T22:27:42Z)
- Mind the gap: Achieving a super-Grover quantum speedup by jumping to the end [114.3957763744719]
We present a quantum algorithm that has rigorous runtime guarantees for several families of binary optimization problems.
We show that the algorithm finds the optimal solution in time $O^*(2^{(0.5-c)n})$ for an $n$-independent constant $c$.
We also show that this runtime guarantee holds for a large fraction of random instances from the $k$-spin model and for any fully satisfiable or slightly frustrated $k$-CSP formula.
arXiv Detail & Related papers (2022-12-03T02:45:23Z)
- Distributed Saddle-Point Problems Under Similarity [173.19083235638104]
We show that a given suboptimality $\epsilon>0$ is achieved over master/workers networks in $\Omega(\cdot)$ rounds of communication.
We then propose algorithms matching the lower bounds for either type of network (up to log factors).
We assess effectiveness of the proposed algorithms on a robust logistic regression problem.
arXiv Detail & Related papers (2021-07-22T14:25:16Z)
- Quantum complexity of minimum cut [0.2538209532048867]
We characterize the quantum query and time complexity of the minimum cut problem in the adjacency matrix model.
Our algorithm uses a quantum algorithm for graph sparsification by Apers and de Wolf (FOCS 2020) and results on the structure of near-minimum cuts by Kawarabayashi and Thorup (STOC 2015) and Rubinstein, Schramm and Weinberg (ITCS 2018).
arXiv Detail & Related papers (2020-11-19T13:51:49Z)
- Streaming Complexity of SVMs [110.63976030971106]
We study the space complexity of solving the bias-regularized SVM problem in the streaming model.
We show that for both problems, for dimensions of $\frac{1}{\lambda\epsilon}$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$.
arXiv Detail & Related papers (2020-07-07T17:10:00Z)
- Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we show an algorithm with improved sample complexity.
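As a reference point for what "model-free" means here, below is a textbook tabular Q-learning sketch (a generic baseline, not the paper's algorithm), which learns from sampled transitions alone. The environment interface `step(s, a)` is a hypothetical assumption.

```python
import numpy as np

def q_learning(step, S, A, gamma=0.9, alpha=0.1, eps=0.1,
               episodes=1000, horizon=100, seed=0):
    """Generic tabular Q-learning. `step(s, a)` is a hypothetical environment
    interface returning (next_state, reward, done). Not the paper's method."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((S, A))
    for _ in range(episodes):
        s = int(rng.integers(S))
        for _ in range(horizon):
            # Epsilon-greedy action selection.
            a = int(rng.integers(A)) if rng.random() < eps else int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            # One-step temporal-difference update toward r + gamma * max_a' Q(s', a').
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
            if done:
                break
            s = s2
    return Q
```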
arXiv Detail & Related papers (2020-06-06T13:34:41Z)