Faster Inference of Cell Complexes from Flows via Matrix Factorization
- URL: http://arxiv.org/abs/2508.21372v1
- Date: Fri, 29 Aug 2025 07:32:29 GMT
- Title: Faster Inference of Cell Complexes from Flows via Matrix Factorization
- Authors: Til Spreuer, Josef Hoppe, Michael T. Schaub
- Abstract summary: Given a set of edge-flow signals observed on a graph, lift the graph to a cell complex, such that the observed edge-flow signals can be represented as a sparse combination of gradient and curl flows on the cell complex. We show that our new approach is significantly less computationally expensive than prior heuristics, while achieving only marginally worse performance in most settings.
- Score: 13.248211055426077
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the following inference problem: Given a set of edge-flow signals observed on a graph, lift the graph to a cell complex, such that the observed edge-flow signals can be represented as a sparse combination of gradient and curl flows on the cell complex. Specifically, we aim to augment the observed graph by a set of 2-cells (polygons encircled by closed, non-intersecting paths), such that the eigenvectors of the Hodge Laplacian of the associated cell complex provide a sparse, interpretable representation of the observed edge flows on the graph. As it has been shown that the general problem is NP-hard in prior work, we here develop a novel matrix-factorization-based heuristic to solve the problem. Using computational experiments, we demonstrate that our new approach is significantly less computationally expensive than prior heuristics, while achieving only marginally worse performance in most settings. In fact, we find that for specifically noisy settings, our new approach outperforms the previous state of the art in both solution quality and computational speed.
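The sparse representation the abstract targets rests on the Hodge decomposition of an edge flow into gradient and curl components. The following is a minimal illustration on a single filled triangle, not the paper's matrix-factorization heuristic: B1 is the node-to-edge incidence matrix, B2 the boundary matrix of the one 2-cell, and the flow is split by orthogonal projection onto im(B1^T) and im(B2).

```python
import numpy as np

# Illustrative cell complex: a triangle on nodes 0, 1, 2 with its
# interior filled as a 2-cell. Columns of B1 are the oriented edges
# (0,1), (0,2), (1,2); B2 is the boundary of the cycle 0 -> 1 -> 2 -> 0.
B1 = np.array([[-1, -1,  0],
               [ 1,  0, -1],
               [ 0,  1,  1]])
B2 = np.array([[1], [-1], [1]])

def project(A, f):
    """Orthogonal projection of f onto the column space of A."""
    return A @ np.linalg.lstsq(A, f, rcond=None)[0]

f = np.array([2.0, 1.0, 3.0])   # an observed edge flow
f_grad = project(B1.T, f)       # gradient component: im(B1^T)
f_curl = project(B2, f)         # curl component: im(B2)
f_harm = f - f_grad - f_curl    # harmonic remainder (zero once the cell is filled)
```

The gradient part has zero curl (B2^T f_grad = 0) and the curl part has zero divergence (B1 f_curl = 0); adding the 2-cell makes the harmonic remainder vanish, which is exactly what allows a sparse gradient-plus-curl representation.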
Related papers
- Learning Time-Varying Graphs from Incomplete Graph Signals [1.7430416823420511]
We develop an efficient Alternating Direction Method of Multipliers (ADMM) algorithm for the problem of imputing missing data from graph signals. We prove that the proposed ADMM scheme converges to a stationary point.
arXiv Detail & Related papers (2025-10-19T11:12:13Z) - Taming the Interacting Particle Langevin Algorithm: The Superlinear case [0.0]
We develop a new class of algorithms, stable under such non-linearities, called tamed interacting particle Langevin algorithms (tIPLA). We obtain non-asymptotic convergence error estimates in Wasserstein-2 distance for the new class under the best known rate.
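The taming idea can be sketched in a few lines: rescale the drift so that one discrete step stays bounded even when the gradient grows superlinearly. Below is a single-particle sketch only (not the interacting-particle tIPLA scheme); the potential and step size are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_U(x):
    # superlinearly growing drift, e.g. from the potential U(x) = x**4 / 4
    return x ** 3

def tamed_langevin_step(x, gamma, rng):
    g = grad_U(x)
    # taming: the effective drift is bounded by 1/gamma, so a single
    # Euler step stays controlled even where grad_U explodes
    drift = g / (1.0 + gamma * np.abs(g))
    return x - gamma * drift + np.sqrt(2.0 * gamma) * rng.standard_normal()

x = 10.0   # start far out, where an untamed Euler step would diverge
for _ in range(2000):
    x = tamed_langevin_step(x, gamma=0.05, rng=rng)
```

Without the denominator, the first step from x = 10 would be gamma * 10**3 = 50 and the iterates blow up; with taming, the chain settles near the well of U.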
arXiv Detail & Related papers (2024-03-28T17:11:25Z) - Stochastic Gradient Descent for Gaussian Processes Done Right [86.83678041846971]
We show that when done right -- by which we mean using specific insights from the optimisation and kernel communities -- gradient descent is highly effective.
We introduce a stochastic dual descent algorithm, explain its design in an intuitive manner, and illustrate the design choices.
Our method places Gaussian process regression on par with state-of-the-art graph neural networks for molecular binding affinity prediction.
arXiv Detail & Related papers (2023-10-31T16:15:13Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Representing Edge Flows on Graphs via Sparse Cell Complexes [6.74438532801556]
We introduce the flow representation learning problem, i.e., the problem of augmenting the observed graph by a set of cells.
We show that this problem is NP-hard and introduce an efficient approximation algorithm for its solution.
arXiv Detail & Related papers (2023-09-04T14:30:20Z) - Graphon Pooling for Reducing Dimensionality of Signals and Convolutional Operators on Graphs [131.53471236405628]
We present three methods that exploit the induced graphon representation of graphs and graph signals on partitions of [0, 1]^2 in the graphon space.
We prove that those low dimensional representations constitute a convergent sequence of graphs and graph signals.
We observe that graphon pooling performs significantly better than other approaches proposed in the literature when dimensionality reduction ratios between layers are large.
arXiv Detail & Related papers (2022-12-15T22:11:34Z) - Approximate Decomposable Submodular Function Minimization for Cardinality-Based Components [30.33731479053404]
Minimizing a sum of simple submodular functions of limited support has numerous applications in machine learning.
We develop fast techniques for instances where components in the sum are cardinality-based, meaning they depend only on the size of the input set.
We develop the first approximation algorithms for this problem, where the approximations can be quickly computed via reduction to a sparse graph cut problem.
arXiv Detail & Related papers (2021-10-28T02:36:55Z) - Cogradient Descent for Bilinear Optimization [124.45816011848096]
We introduce a Cogradient Descent algorithm (CoGD) to address the bilinear problem.
We solve one variable by considering its coupling relationship with the other, leading to a synchronous gradient descent.
Our algorithm is applied to solve problems with one variable under a sparsity constraint.
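For intuition, the bilinear problems CoGD targets can be as simple as rank-one factorization. The sketch below is the plain alternating-gradient baseline on min ||X - a b^T||^2; CoGD itself additionally synchronizes the coupled updates, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-one bilinear least squares: recover vectors a, b with X = a b^T.
a_true = rng.standard_normal(5)
b_true = rng.standard_normal(4)
X = np.outer(a_true, b_true)

a = rng.standard_normal(5)
b = rng.standard_normal(4)
lr = 0.02
for _ in range(5000):
    R = np.outer(a, b) - X   # residual
    grad_a = R @ b           # gradient of 0.5 * ||R||_F^2 in a
    grad_b = R.T @ a         # gradient of 0.5 * ||R||_F^2 in b
    a -= lr * grad_a
    b -= lr * grad_b
```

The two gradients are coupled through the residual R, which is the coupling relationship the summary refers to.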
arXiv Detail & Related papers (2020-06-16T13:41:54Z) - Fast Convex Relaxations using Graph Discretizations [13.977100716044102]
Matching and partitioning problems are fundamental to computer vision applications.
Applying convex relaxation techniques comes with significant computational effort, reducing their feasibility in practical applications.
This setup allows us to faithfully work on graph discretizations constructed by SLIC or Cut-Pursuit.
arXiv Detail & Related papers (2020-04-23T11:14:38Z) - Semiparametric Nonlinear Bipartite Graph Representation Learning with Provable Guarantees [106.91654068632882]
We consider the bipartite graph and formalize its representation learning problem as a statistical estimation problem of parameters in a semiparametric exponential family distribution.
We show that the proposed objective is strongly convex in a neighborhood around the ground truth, so that a gradient descent-based method achieves linear convergence rate.
Our estimator is robust to any model misspecification within the exponential family, which is validated in extensive experiments.
arXiv Detail & Related papers (2020-03-02T16:40:36Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
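A minimal sketch-and-solve illustration with a Gaussian embedding follows; the paper analyzes more refined iterative first-order variants, which this toy example does not implement.

```python
import numpy as np

rng = np.random.default_rng(2)

# Overdetermined least-squares problem: min_x ||A x - b||.
n, d = 2000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Gaussian projection: compress the n rows down to m << n rows,
# then solve the much smaller least-squares problem exactly.
m = 200
S = rng.standard_normal((m, n)) / np.sqrt(m)
x_sketch = np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
```

Because a Gaussian sketch with m on the order of d rows is a subspace embedding for the column span of A, the sketched solution lands close to the exact one at a fraction of the cost.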
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.