Fast Evaluation for Relevant Quantities of Opinion Dynamics
- URL: http://arxiv.org/abs/2101.08143v2
- Date: Sat, 12 Jun 2021 15:19:51 GMT
- Title: Fast Evaluation for Relevant Quantities of Opinion Dynamics
- Authors: Wanyue Xu, Qi Bao, Zhongzhi Zhang
- Abstract summary: We present a nearly linear time algorithm that estimates the relevant quantities by reducing their computation to evaluating $\ell_2$ norms of certain vectors.
Our algorithm is based on Laplacian solvers and has a proven theoretical error guarantee for each quantity.
We conduct extensive numerical experiments on a variety of real networks, which demonstrate that our approximation algorithm is efficient, effective, and scalable to large graphs with millions of nodes.
- Score: 10.463365653675693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the main subjects in the field of social networks is to quantify
conflict, disagreement, controversy, and polarization, and several quantitative
indicators have been developed for these concepts. However, directly computing
these indicators involves matrix inversion and multiplication, which is
computationally infeasible for large-scale graphs with millions of nodes. In this
paper, by reducing the problem of computing the relevant quantities to evaluating
$\ell_2$ norms of certain vectors, we present a nearly linear time algorithm to
estimate all of these quantities. Our algorithm is based on Laplacian solvers and
has a proven theoretical error guarantee for each quantity. We conduct extensive
numerical experiments on a variety of real networks, which demonstrate that our
approximation algorithm is efficient, effective, and scalable to large graphs
with millions of nodes.
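As a concrete illustration of the reduction described above, the following is a minimal sketch, not the authors' implementation, assuming the Friedkin-Johnsen opinion dynamics model in which the expressed opinions satisfy $(I+L)z = s$ for internal opinions $s$ and graph Laplacian $L$, so that quantities such as polarization, disagreement, internal conflict, and controversy become squared $\ell_2$ norms of vectors obtainable from a single linear solve. The function name `fj_quantities` and the use of SciPy's conjugate gradient as a stand-in for a nearly linear time Laplacian (SDD) solver are illustrative assumptions; the paper's nearly linear time guarantee additionally relies on approximation techniques not reproduced here.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

def fj_quantities(adj, s):
    """Estimate conflict-related quantities under the Friedkin-Johnsen model.

    adj : symmetric sparse adjacency matrix (n x n), nonnegative weights
    s   : internal opinions, shape (n,)
    """
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).ravel()
    L = sp.diags(deg) - adj                     # graph Laplacian
    M = sp.identity(n, format="csr") + L        # I + L is SDD, so fast solvers apply

    # Solve (I + L) z = s iteratively instead of forming a dense inverse;
    # CG here stands in for a nearly linear time Laplacian solver.
    z, info = cg(M, s)
    assert info == 0, "CG did not converge"

    z_bar = z - z.mean()                        # mean-centered expressed opinions
    return {
        # Each quantity is the squared l2 norm of a suitable vector.
        "polarization": float(z_bar @ z_bar),           # ||z - mean(z) 1||_2^2
        "disagreement": float(z @ (L @ z)),             # sum_{(u,v)} w_uv (z_u - z_v)^2
        "internal_conflict": float((s - z) @ (s - z)),  # ||s - z||_2^2
        "controversy": float(z @ z),                    # ||z||_2^2
    }

# Toy usage: an undirected cycle with random internal opinions in [0, 1].
if __name__ == "__main__":
    n = 1000
    rng = np.random.default_rng(0)
    rows = np.arange(n)
    cols = (rows + 1) % n
    A = sp.coo_matrix((np.ones(n), (rows, cols)), shape=(n, n))
    A = (A + A.T).tocsr()                       # symmetric, unit edge weights
    print(fj_quantities(A, rng.random(n)))
```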
Related papers
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose QuEE, a more general dynamic network that can combine both quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
arXiv Detail & Related papers (2024-06-20T15:25:13Z) - An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices involved.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - A Fast Algorithm for Moderating Critical Nodes via Edge Removal [19.130541561303293]
We study the problem of removing $k$ edges from a network to minimize the information centrality of a target node.
We propose three greedy approximation algorithms using novel techniques such as random walk-based Schur complement approximation and fast sum estimation.
To complement our theoretical analysis, we conduct a comprehensive set of experiments on synthetic and real networks with over one million nodes.
arXiv Detail & Related papers (2023-09-09T13:54:34Z) - Randomized Polar Codes for Anytime Distributed Machine Learning [66.46612460837147]
We present a novel distributed computing framework that is robust to slow compute nodes, and is capable of both approximate and exact computation of linear operations.
We propose a sequential decoding algorithm designed to handle real-valued data while maintaining low computational complexity for recovery.
We demonstrate the potential applications of this framework in various contexts, such as large-scale matrix multiplication and black-box optimization.
arXiv Detail & Related papers (2023-09-01T18:02:04Z) - Multi-task Learning for Gaussian Graphical Regressions with High
Dimensional Covariates [5.726899123970559]
We propose a multi-task learning estimator for fitting Gaussian graphical regression models.
For computation, we consider an efficient augmented Lagrangian algorithm, which solves subproblems with a semi-smooth Newton method.
We show that the error rate of the multi-task learning based estimates improves substantially over that of separate node-wise lasso estimates.
arXiv Detail & Related papers (2022-05-21T20:48:51Z) - A Survey of Quantization Methods for Efficient Neural Network Inference [75.55159744950859]
Quantization is the problem of distributing continuous real-valued numbers over a fixed discrete set of numbers to minimize the number of bits required.
It has come to the forefront in recent years due to the remarkable performance of Neural Network models in computer vision, natural language processing, and related areas.
Moving from floating-point representations to low-precision fixed integer values represented in four bits or less holds the potential to reduce the memory footprint and latency by a factor of 16x.
arXiv Detail & Related papers (2021-03-25T06:57:11Z) - Overlapping community detection in networks via sparse spectral
decomposition [1.0660480034605242]
We consider the problem of estimating overlapping community memberships in a network, where each node can belong to multiple communities.
Our algorithm is based on sparse principal subspace estimation with iterative thresholding.
We show that a fixed point of the algorithm corresponds to correct node memberships under a version of the block model.
arXiv Detail & Related papers (2020-09-20T07:31:09Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Accelerating Neural Network Inference by Overflow Aware Quantization [16.673051600608535]
The inherent heavy computation of deep neural networks prevents their widespread application.
We propose an overflow aware quantization method by designing trainable adaptive fixed-point representation.
With the proposed method, we are able to fully utilize the computing power to minimize the quantization loss and obtain optimized inference performance.
arXiv Detail & Related papers (2020-05-27T11:56:22Z) - Quantized Decentralized Stochastic Learning over Directed Graphs [52.94011236627326]
We consider a decentralized learning problem where data points are distributed among computing nodes communicating over a directed graph.
As the model size grows, decentralized learning faces a major bottleneck: the communication load incurred by each node transmitting messages (model updates) to its neighbors.
We propose the quantized decentralized learning algorithm over directed graphs that is based on the push-sum algorithm in decentralized consensus optimization.
arXiv Detail & Related papers (2020-02-23T18:25:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.