CCMamba: Selective State-Space Models for Higher-Order Graph Learning on Combinatorial Complexes
- URL: http://arxiv.org/abs/2601.20518v1
- Date: Wed, 28 Jan 2026 11:52:13 GMT
- Title: CCMamba: Selective State-Space Models for Higher-Order Graph Learning on Combinatorial Complexes
- Authors: Jiawen Chen, Qi Shao, Mingtong Zhou, Duxin Chen, Wenwu Yu
- Abstract summary: Topological deep learning has emerged for modeling higher-order structures beyond pairwise interactions. We propose Combinatorial Complex Mamba, the first unified Mamba-based neural framework for learning on combinatorial complexes. CCMamba reformulates message passing as a selective state-space modeling problem by organizing multi-rank incidence relations into structured sequences processed by rank-aware state-space models.
- Score: 16.627877999057436
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Topological deep learning has emerged for modeling higher-order relational structures beyond the pairwise interactions that standard graph neural networks fail to capture. Although combinatorial complexes offer a unified topological framework, most existing topological deep learning methods rely on local message passing via attention mechanisms, which incurs quadratic complexity and remains low-dimensional, limiting scalability and rank-aware information aggregation in higher-order complexes. We propose Combinatorial Complex Mamba (CCMamba), the first unified Mamba-based neural framework for learning on combinatorial complexes. CCMamba reformulates message passing as a selective state-space modeling problem by organizing multi-rank incidence relations into structured sequences processed by rank-aware state-space models. This enables adaptive, directional, and long-range information propagation in linear time without self-attention. We further show theoretically that the expressive power of CCMamba's message passing is upper-bounded by the 1-Weisfeiler-Lehman test. Experiments on graph, hypergraph, and simplicial benchmarks demonstrate that CCMamba consistently outperforms existing methods while exhibiting improved scalability and robustness to depth.
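The core idea in the abstract — flattening the cells of a complex into one rank-ordered sequence and running an input-dependent (selective) state-space recurrence over it in a single linear-time pass — can be illustrated with a toy scan. This is a minimal sketch, not the paper's actual architecture: `selective_ssm_scan` and all weight matrices here are hypothetical random stand-ins for learned parameters, and the real model would use rank-aware parameterizations and parallel scan kernels.

```python
import numpy as np

def selective_ssm_scan(X, d_state=8, seed=0):
    """Toy selective state-space (Mamba-style) scan, linear in sequence length.

    X: (T, d) sequence of cell features, e.g. nodes, edges, and faces of a
    combinatorial complex flattened into one rank-ordered sequence.
    All projections are random placeholders for learned weights (assumption).
    """
    rng = np.random.default_rng(seed)
    T, d = X.shape
    # Input-dependent gates make the recurrence "selective": each step
    # decides, from its own input, how much past state to keep or forget.
    W_delta = rng.normal(scale=0.1, size=(d, 1))       # step-size gate
    B_proj = rng.normal(scale=0.1, size=(d, d_state))  # input -> state
    C_proj = rng.normal(scale=0.1, size=(d, d_state))  # state -> readout gate
    A = -np.exp(rng.normal(size=(d_state,)))           # stable diagonal dynamics

    h = np.zeros(d_state)
    Y = np.empty((T, d_state))
    for t in range(T):  # one pass: O(T) time, O(d_state) memory
        delta = np.log1p(np.exp(X[t] @ W_delta))       # softplus -> positive step
        A_bar = np.exp(delta * A)                      # discretized decay in (0, 1)
        h = A_bar * h + delta * (X[t] @ B_proj)        # selective state update
        Y[t] = (X[t] @ C_proj) * h                     # gated readout
    return Y
```

Note the contrast with attention: the scan touches each cell once and carries a fixed-size state, so cost grows linearly with the number of cells rather than quadratically with pairwise interactions.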
Related papers
- SEHFS: Structural Entropy-Guided High-Order Correlation Learning for Multi-View Multi-Label Feature Selection [32.73824178667282]
We propose Structural Entropy-Guided High-Order Correlation Learning for Multi-View Multi-Label Feature Selection (SEHFS). SEHFS groups features with strong high-order redundancy into a single cluster within the encoding tree. A new framework based on the fusion of information theory and matrix methods is adopted, which learns a shared semantic matrix and view-specific contributions to reconstruct a global view matrix.
arXiv Detail & Related papers (2026-03-03T14:15:18Z) - Topology Identification and Inference over Graphs [61.06365536861156]
Topology identification and inference of processes evolving over graphs arise in timely applications involving brain, transportation, financial, power, as well as social and information networks. This chapter provides an overview of graph topology identification and statistical inference methods for multidimensional data.
arXiv Detail & Related papers (2025-12-11T00:47:09Z) - The Human Brain as a Combinatorial Complex [3.849079578881503]
Current graph-based representations of brain networks miss the higher-order dependencies that characterize neural complexity. We propose a framework for constructing combinatorial complexes (CCs) from fMRI time series data that captures both pairwise and higher-order neural interactions. This work provides a framework for brain network representation that preserves fundamental higher-order structure invisible to traditional graph methods.
arXiv Detail & Related papers (2025-11-22T19:04:13Z) - Sequential-Parallel Duality in Prefix Scannable Models [68.39855814099997]
Recent developments have given rise to various models, such as Gated Linear Attention (GLA) and Mamba. This raises a natural question: can we characterize the full class of neural sequence models that support near-constant-time parallel evaluation and linear-time, constant-space sequential inference?
arXiv Detail & Related papers (2025-06-12T17:32:02Z) - Broad Spectrum Structure Discovery in Large-Scale Higher-Order Networks [1.7273380623090848]
We introduce a class of probabilistic models that efficiently represents and discovers a broad spectrum of mesoscale structure in large-scale hypergraphs. By modeling observed node interactions through latent interactions among classes using low-rank representations, our approach tractably captures rich structural patterns. Our model improves link prediction over state-of-the-art methods and discovers interpretable structures in diverse real-world systems.
arXiv Detail & Related papers (2025-05-27T20:34:58Z) - Topological Deep Learning with State-Space Models: A Mamba Approach for Simplicial Complexes [4.787059527893628]
We propose a novel architecture designed to operate with simplicial complexes, utilizing the Mamba state-space model as its backbone.
Our approach generates sequences for the nodes based on the neighboring cells, enabling direct communication between all higher-order structures, regardless of their rank.
arXiv Detail & Related papers (2024-09-18T14:49:25Z) - GrootVL: Tree Topology is All You Need in State Space Model [66.36757400689281]
GrootVL is a versatile multimodal framework that can be applied to both visual and textual tasks.
Our method significantly outperforms existing structured state space models on image classification, object detection and segmentation.
By fine-tuning large language models, our approach achieves consistent improvements in multiple textual tasks at minor training cost.
arXiv Detail & Related papers (2024-06-04T15:09:29Z) - Topological Deep Learning: Going Beyond Graph Data [26.325857542512047]
We present a unifying deep learning framework built upon a richer data structure that includes widely adopted topological domains.
Specifically, we first introduce combinatorial complexes, a novel type of topological domain.
We develop a class of message-passing combinatorial complex neural networks (CCNNs), focusing primarily on attention-based CCNNs.
arXiv Detail & Related papers (2022-06-01T16:21:28Z) - Deep Equilibrium Assisted Block Sparse Coding of Inter-dependent Signals: Application to Hyperspectral Imaging [71.57324258813675]
A dataset of inter-dependent signals is defined as a matrix whose columns demonstrate strong dependencies.
A neural network is employed to act as structure prior and reveal the underlying signal interdependencies.
Deep unrolling and Deep equilibrium based algorithms are developed, forming highly interpretable and concise deep-learning-based architectures.
arXiv Detail & Related papers (2022-03-29T21:00:39Z) - Dist2Cycle: A Simplicial Neural Network for Homology Localization [66.15805004725809]
Simplicial complexes can be viewed as high dimensional generalizations of graphs that explicitly encode multi-way ordered relations.
We propose a graph convolutional model for learning functions parametrized by the $k$-homological features of simplicial complexes.
arXiv Detail & Related papers (2021-10-28T14:59:41Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.