Practical and Performant Enhancements for Maximization of Algebraic Connectivity
- URL: http://arxiv.org/abs/2511.08694v1
- Date: Thu, 13 Nov 2025 01:02:22 GMT
- Title: Practical and Performant Enhancements for Maximization of Algebraic Connectivity
- Authors: Leonard Jung, Alan Papalia, Kevin Doherty, Michael Everett
- Abstract summary: We develop a specialized solver for algebraic connectivity that yields an average 2x runtime speedup. We propose automatic schemes that guarantee graph connectivity without requiring manual specification of edges.
- Score: 7.054572089717147
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Long-term state estimation over graphs remains challenging as current graph estimation methods scale poorly on large, long-term graphs. To address this, our work advances a current state-of-the-art graph sparsification algorithm, maximizing algebraic connectivity (MAC). MAC is a sparsification method that preserves estimation performance by maximizing the algebraic connectivity, a spectral graph property that is directly connected to the estimation error. Unfortunately, MAC remains computationally prohibitive for online use and requires users to manually pre-specify a connectivity-preserving edge set. Our contributions close these gaps along three complementary fronts: we develop a specialized solver for algebraic connectivity that yields an average 2x runtime speedup; we investigate advanced step size strategies for MAC's optimization procedure to enhance both convergence speed and solution quality; and we propose automatic schemes that guarantee graph connectivity without requiring manual specification of edges. Together, these contributions make MAC more scalable, reliable, and suitable for real-time estimation applications.
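The algebraic connectivity that MAC maximizes is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value), which is positive exactly when the graph is connected. A minimal sketch of computing it with a dense eigendecomposition (illustrative only, not the paper's specialized solver):

```python
import numpy as np

def laplacian(n, edges):
    """Combinatorial Laplacian L = D - A of an undirected graph."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1.0
        L[j, j] += 1.0
        L[i, j] -= 1.0
        L[j, i] -= 1.0
    return L

def algebraic_connectivity(n, edges):
    """Second-smallest Laplacian eigenvalue (Fiedler value).
    Positive exactly when the graph is connected."""
    return np.linalg.eigvalsh(laplacian(n, edges))[1]

# Sparser path graph vs. denser cycle on the same 4 nodes:
path = [(0, 1), (1, 2), (2, 3)]
cycle = path + [(3, 0)]
# Adding the closing edge strictly increases the Fiedler value,
# which is the quantity MAC tries to keep large while dropping edges.
```

In this framing, sparsification is a budgeted edge-selection problem: keep the subset of candidate edges whose Laplacian retains the largest Fiedler value.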
Related papers
- GPU-friendly and Linearly Convergent First-order Methods for Certifying Optimal $k$-sparse GLMs [7.079949618914198]
Branch-and-Bound (BnB) frameworks can certify optimality using perspective relaxations. Existing methods for solving these relaxations are computationally intensive, limiting their scalability. We develop a unified proximal framework that is both linearly convergent and computationally efficient.
arXiv Detail & Related papers (2026-03-01T22:26:09Z) - MAC: An Efficient Gradient Preconditioning using Mean Activation Approximated Curvature [7.512116180634991]
Second-order optimization methods for training neural networks, such as KFAC, exhibit superior convergence by utilizing curvature information of the loss landscape. We analyze the two components that constitute the layer-wise Fisher information matrix (FIM) used in KFAC: the Kronecker factors related to activations and pre-activation gradients. We propose efficient approximations for them, resulting in a computationally efficient optimization method called MAC. To the best of our knowledge, MAC is the first algorithm to apply the Kronecker factorization to the FIM of attention layers used in transformers and explicitly integrate attention scores into the preconditioning.
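The efficiency of Kronecker-factored preconditioning rests on a standard identity: if a layer's FIM is approximated as F ≈ A ⊗ G with symmetric factors, then (A ⊗ G)⁻¹ vec(V) = vec(G⁻¹ V A⁻¹) under column-major vec, so one large solve factors into two small ones. A minimal numpy check of that identity (an illustration of the general KFAC-style shortcut, not this paper's MAC algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def spd(k):
    """Random symmetric positive-definite matrix."""
    M = rng.standard_normal((k, k))
    return M @ M.T + k * np.eye(k)

A, G = spd(3), spd(4)            # small Kronecker factors
V = rng.standard_normal((4, 3))  # "gradient" shaped to match G (left) and A (right)

# Naive: invert the full 12x12 Kronecker product.
naive = np.linalg.solve(np.kron(A, G), V.flatten(order="F"))

# Factored: two small solves, G^{-1} V A^{-1}, then re-vectorize.
factored = (np.linalg.solve(G, V) @ np.linalg.inv(A)).flatten(order="F")
```

For a layer with m inputs and n outputs, the naive route costs O((mn)³) while the factored route costs O(m³ + n³), which is why the factorization matters.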
arXiv Detail & Related papers (2025-06-10T05:38:04Z) - A Quantum Genetic Algorithm Framework for the MaxCut Problem [49.59986385400411]
The proposed method introduces a Quantum Genetic Algorithm (QGA) using a Grover-based evolutionary framework and divide-and-conquer principles. On complete graphs, the proposed method consistently achieves the true optimal MaxCut values, outperforming the Semidefinite Programming (SDP) approach. On Erdős-Rényi random graphs, the QGA demonstrates competitive performance, achieving median solutions within 92-96% of the SDP results.
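For context, MaxCut asks for a vertex bipartition that maximizes the number of crossing edges. A brute-force reference for small graphs (exponential in n, nothing like the quantum genetic algorithm above, but handy for checking small instances):

```python
from itertools import product

def max_cut(n, edges):
    """Exhaustive MaxCut: best crossing-edge count over all 2^n bipartitions."""
    return max(
        sum(1 for i, j in edges if side[i] != side[j])
        for side in product((0, 1), repeat=n)
    )

# Complete graph K4: the balanced 2/2 split cuts 4 of the 6 edges.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
```

On K4 the optimum is 4, matching the general fact that a complete graph on 2k vertices has MaxCut k².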
arXiv Detail & Related papers (2025-01-02T05:06:16Z) - Graph-Mamba: Towards Long-Range Graph Sequence Modeling with Selective State Spaces [4.928791850200171]
We introduce Graph-Mamba, the first attempt to enhance long-range context modeling in graph networks.
We formulate graph-centric node prioritization and permutation strategies to enhance context-aware reasoning.
Experiments on ten benchmark datasets demonstrate that Graph-Mamba outperforms state-of-the-art methods in long-range graph prediction tasks.
arXiv Detail & Related papers (2024-02-01T17:21:53Z) - Scalable Bayesian Structure Learning for Gaussian Graphical Models Using Marginal Pseudo-likelihood [2.312692134587988]
We develop continuous-time (birth-death) and discrete-time (reversible jump) Markov chain Monte Carlo (MCMC) algorithms that efficiently explore the posterior over graph space. The algorithms scale to large graph spaces, enabling parallel exploration for graphs with over 1,000 nodes.
arXiv Detail & Related papers (2023-06-30T20:37:40Z) - Efficiently Learning the Graph for Semi-supervised Learning [4.518012967046983]
We show how to learn the best graphs from the sparse families efficiently using the conjugate gradient method.
Our approach can also be used to learn the graph efficiently online with sub-linear regret, under mild smoothness assumptions.
We implement our approach and demonstrate significant (~10-100x) speedups over prior work on semi-supervised learning with learned graphs on benchmark datasets.
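The conjugate gradient method referenced above solves symmetric positive-definite systems Ax = b (e.g., graph Laplacian plus regularization) using only matrix-vector products, which is what makes it attractive for large sparse graphs. A textbook sketch, not the paper's implementation:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A via conjugate gradients."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:  # residual small enough: converged
            break
        p = r + (rs_new / rs) * p  # next A-conjugate direction
        rs = rs_new
    return x
```

In exact arithmetic CG converges in at most n iterations for an n x n system, and far faster when the spectrum is clustered, which is the regime graph learning methods exploit.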
arXiv Detail & Related papers (2023-06-12T13:22:06Z) - Extending Compositional Attention Networks for Social Reasoning in Videos [84.12658971655253]
We propose a novel deep architecture for the task of reasoning about social interactions in videos.
We leverage the multi-step reasoning capabilities of Compositional Attention Networks (MAC) and propose a multimodal extension (MAC-X).
arXiv Detail & Related papers (2022-10-03T19:03:01Z) - CSGO: Constrained-Softassign Gradient Optimization For Large Graph Matching [0.7456521449098222]
This paper integrates several well-known graph matching algorithms into a framework: the constrained gradient method. In attributed graph matching tasks, CSGO achieves over a 10x speedup compared to current constrained gradient algorithms.
arXiv Detail & Related papers (2022-08-17T11:25:03Z) - Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes this metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - MPLP++: Fast, Parallel Dual Block-Coordinate Ascent for Dense Graphical Models [96.1052289276254]
This work introduces a new MAP-solver, based on the popular Dual Block-Coordinate Ascent principle.
Surprisingly, by making a small change to the low-performing solver, we derive the new solver MPLP++ that outperforms all existing solvers by a large margin.
arXiv Detail & Related papers (2020-04-16T16:20:53Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.