Cycle-tree guided attack of random K-core: Spin glass model and
efficient message-passing algorithm
- URL: http://arxiv.org/abs/2110.05940v3
- Date: Tue, 19 Jul 2022 06:05:31 GMT
- Title: Cycle-tree guided attack of random K-core: Spin glass model and
efficient message-passing algorithm
- Authors: Hai-Jun Zhou
- Abstract summary: A minimum attack set contains the smallest number of vertices whose removal induces complete collapse of the K-core.
Here we tackle this optimal initial-condition problem from the spin-glass perspective of cycle-tree maximum packing.
The good performance and time efficiency of CTGA are verified on the regular random and Erdős-Rényi random graph ensembles.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The K-core of a graph is the maximal subgraph within which each vertex is
connected to at least K other vertices. It is a fundamental network concept for
understanding threshold cascading processes with a discontinuous percolation
transition. A minimum attack set contains the smallest number of vertices whose
removal induces complete collapse of the K-core. Here we tackle this
prototypical optimal initial-condition problem from the spin-glass perspective
of cycle-tree maximum packing and propose a cycle-tree guided attack (CTGA)
message-passing algorithm. The good performance and time efficiency of CTGA are
verified on the regular random and Erdős-Rényi random graph ensembles. Our
central idea of transforming a long-range correlated dynamical process to
static structural patterns may also be instructive to other hard optimization
and control problems.
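The K-core definition above is constructive: repeatedly deleting every vertex with fewer than K remaining neighbours yields the K-core, and a minimum attack set is the smallest vertex set whose removal makes this cascade empty the whole subgraph. The sketch below illustrates the pruning cascade on a tiny hand-made graph; the graph, the choice K=3, and the attacked vertex are illustrative assumptions, not examples from the paper.

```python
# Minimal sketch of K-core pruning: iteratively remove vertices whose
# degree has dropped below K, propagating the cascade to their neighbours.
from collections import deque

def k_core(adj, k):
    """Return the vertex set of the K-core of an undirected graph.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    queue = deque(v for v, d in degree.items() if d < k)
    removed = set()
    while queue:
        v = queue.popleft()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                degree[u] -= 1
                if degree[u] < k:
                    queue.append(u)  # cascade: u now also falls below K
    return set(adj) - removed

# Illustrative graph: a 4-clique {0,1,2,3} plus a pendant vertex 4.
adj = {
    0: {1, 2, 3},
    1: {0, 2, 3},
    2: {0, 1, 3},
    3: {0, 1, 2, 4},
    4: {3},
}
core = k_core(adj, k=3)   # the 4-clique survives; vertex 4 is pruned
print(core)               # {0, 1, 2, 3}

# Attacking (removing) a single clique vertex collapses the entire 3-core,
# illustrating the discontinuous cascade the attack problem exploits.
attacked = {v: nbrs - {0} for v, nbrs in adj.items() if v != 0}
print(k_core(attacked, k=3))  # set()
```

Here {0} is a minimum attack set for K=3: after its removal every remaining vertex eventually drops below degree 3 and the cascade empties the core.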
Related papers
- Rethinking and Accelerating Graph Condensation: A Training-Free Approach with Class Partition [56.26113670151363]
Graph condensation is a data-centric solution to replace the large graph with a small yet informative condensed graph.
Existing GC methods suffer from intricate optimization processes, necessitating excessive computing resources.
We propose a training-free GC framework termed Class-partitioned Graph Condensation (CGC)
CGC achieves state-of-the-art performance with a more efficient condensation process.
arXiv Detail & Related papers (2024-05-22T14:57:09Z) - Disentangled Condensation for Large-scale Graphs [31.781721873508978]
Graph condensation has emerged as an intriguing technique to save the expensive training costs of Graph Neural Networks (GNNs)
We propose to disentangle the condensation process into a two-stage GNN-free paradigm, independently condensing nodes and generating edges.
This simple yet effective approach runs at least 10 times faster than state-of-the-art methods with comparable accuracy on medium-scale graphs.
arXiv Detail & Related papers (2024-01-18T09:59:00Z) - MeanCut: A Greedy-Optimized Graph Clustering via Path-based Similarity
and Degree Descent Criterion [0.6906005491572401]
Spectral clustering is popular and attractive due to its remarkable performance, easy implementation, and strong adaptability.
We propose MeanCut as the objective function and greedily optimize it in degree descending order for a nondestructive graph partition.
The validity of our algorithm is demonstrated by testing it on real-world benchmarks and in a face-recognition application.
arXiv Detail & Related papers (2023-12-07T06:19:39Z) - One-step Bipartite Graph Cut: A Normalized Formulation and Its
Application to Scalable Subspace Clustering [56.81492360414741]
We show how to enforce a one-step normalized cut for bipartite graphs with linear-time complexity.
In this paper, we first characterize a novel one-step bipartite graph cut criterion with normalized constraints, and theoretically prove its equivalence to a trace problem.
We extend this cut criterion to a scalable subspace clustering approach, where adaptive anchor learning, bipartite graph learning, and one-step normalized bipartite graph partitioning are simultaneously modeled.
arXiv Detail & Related papers (2023-05-12T11:27:20Z) - Hierarchical cycle-tree packing model for $K$-core attack problem [0.0]
A hierarchical cycle-tree packing model is introduced here for this challenging optimization problem.
We analyze this model through the replica-symmetric cavity method of statistical physics.
The associated hierarchical cycle-tree guided attack (hCTGA) is able to construct nearly optimal attack solutions for regular random graphs.
arXiv Detail & Related papers (2023-03-02T06:47:33Z) - Graph Signal Sampling for Inductive One-Bit Matrix Completion: a
Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, i.e., BGS-IMC which considers continuous random Gaussian noise in the graph Fourier domain.
arXiv Detail & Related papers (2023-02-08T08:17:43Z) - Total Variation Graph Neural Networks [5.571369922847262]
Recently proposed Graph Neural Networks (GNNs) are trained with an unsupervised minimum cut objective.
We propose a GNN model that computes cluster assignments by optimizing a tighter relaxation of the minimum cut.
arXiv Detail & Related papers (2022-11-11T14:13:14Z) - Learning to Solve Combinatorial Graph Partitioning Problems via
Efficient Exploration [72.15369769265398]
Experimentally, ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem.
Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73%.
arXiv Detail & Related papers (2022-05-27T17:13:10Z) - Semi-Supervised Clustering of Sparse Graphs: Crossing the
Information-Theoretic Threshold [3.6052935394000234]
Block model is a canonical random graph model for clustering and community detection on network-structured data.
No estimator based on the network topology can perform substantially better than chance on sparse graphs if the model parameter is below a certain threshold.
We prove that, once an arbitrary fraction of the labels is revealed, recovery is feasible throughout the parameter domain.
arXiv Detail & Related papers (2022-05-24T00:03:25Z) - Distributed stochastic optimization with large delays [59.95552973784946]
One of the most widely used methods for solving large-scale optimization problems is distributed asynchronous gradient descent (DASGD)
We show that DASGD converges to a global optimum under the same delay assumptions.
arXiv Detail & Related papers (2021-07-06T21:59:49Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.