ECHO: Encoding Communities via High-order Operators
- URL: http://arxiv.org/abs/2602.22446v1
- Date: Wed, 25 Feb 2026 22:14:29 GMT
- Title: ECHO: Encoding Communities via High-order Operators
- Authors: Emilio Ferrara
- Abstract summary: Community detection in attributed networks faces a fundamental divide: topological algorithms ignore semantic features, while Graph Neural Networks (GNNs) encounter devastating computational bottlenecks. We introduce ECHO, a scalable, self-supervised architecture that reframes community detection as an adaptive, multi-scale diffusion process. ECHO completely bypasses traditional O(N^2) memory bottlenecks without sacrificing the mathematical precision of global gradients.
- Score: 8.970269049715933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Community detection in attributed networks faces a fundamental divide: topological algorithms ignore semantic features, while Graph Neural Networks (GNNs) encounter devastating computational bottlenecks. Specifically, GNNs suffer from a Semantic Wall of feature over-smoothing in dense or heterophilic networks, and a Systems Wall driven by the O(N^2) memory constraints of pairwise clustering. To dismantle these barriers, we introduce ECHO (Encoding Communities via High-order Operators), a scalable, self-supervised architecture that reframes community detection as an adaptive, multi-scale diffusion process. ECHO features a Topology-Aware Router that automatically analyzes structural heuristics (sparsity, density, and assortativity) to route graphs through the optimal inductive bias, preventing heterophilic poisoning while ensuring semantic densification. Coupled with a memory-sharded full-batch contrastive objective and a novel chunked O(N·K) similarity extraction method, ECHO completely bypasses traditional O(N^2) memory bottlenecks without sacrificing the mathematical precision of global gradients. Extensive evaluations demonstrate that this topology-feature synergy consistently overcomes the classical resolution limit. On synthetic LFR benchmarks scaled up to 1 million nodes, ECHO achieves scale-invariant accuracy despite severe topological noise. Furthermore, on massive real-world social networks with over 1.6 million nodes and 30 million edges, it completes clustering in mere minutes with throughputs exceeding 2,800 nodes per second, matching the speed of highly optimized, purely topological baselines. The implementation utilizes a unified framework that automatically engages memory-sharded optimization to support adoption across varying hardware constraints. GitHub Repository: https://github.com/emilioferrara/ECHO-GNN
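The abstract's central memory claim is that node-to-community similarities can be extracted in chunks, keeping peak memory at O(N·K) rather than materializing the O(N^2) pairwise node-node matrix. The paper's actual routine lives in the linked repository; the snippet below is only a minimal NumPy sketch of that general idea, with hypothetical names (`Z` for node embeddings, `C` for K community centroids, `chunked_similarity`), not ECHO's implementation.

```python
import numpy as np

def chunked_similarity(Z, C, chunk_size=4096):
    """Sketch of chunked O(N*K) similarity extraction (names hypothetical).

    Z : (N, d) node embeddings; C : (K, d) community centroids.
    Peak working memory per step is O(chunk_size * K), so the full
    O(N^2) node-node similarity matrix is never materialized.
    """
    # L2-normalize rows so each block is a cosine-similarity block.
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    Cn = C / np.linalg.norm(C, axis=1, keepdims=True)
    out = np.empty((Z.shape[0], C.shape[0]), dtype=Z.dtype)
    for start in range(0, Z.shape[0], chunk_size):
        stop = min(start + chunk_size, Z.shape[0])
        out[start:stop] = Zn[start:stop] @ Cn.T  # (chunk, K) block
    return out
```

Assigning each node to its highest-similarity centroid (`argmax` over the K columns) then yields a community labeling without ever holding more than one chunk of similarities in memory at once.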
Related papers
- Joint Optimization of Model Partitioning and Resource Allocation for Anti-Jamming Collaborative Inference Systems [52.842088497389746]
This letter focuses on an anti-jamming collaborative inference system in the presence of a malicious jammer. We first analyze the effects of jamming and DNN partitioning on inference accuracy via data regression. We propose an efficient alternating optimization-based algorithm, which decomposes the problem into three subproblems.
arXiv Detail & Related papers (2026-03-03T03:52:52Z) - kNN-Graph: An adaptive graph model for $k$-nearest neighbors [17.882218619659756]
We present an adaptive graph model that decouples inference latency from computational complexity. We demonstrate this architecture significantly accelerates inference speeds, achieving real-time performance, without compromising classification accuracy.
arXiv Detail & Related papers (2026-01-23T07:15:53Z) - Learning Wireless Interference Patterns: Decoupled GNN for Throughput Prediction in Heterogeneous Multi-Hop p-CSMA Networks [4.303580795892996]
Decoupled Graph Convolutional Network (D-GCN) is a novel architecture that explicitly separates processing of a node's own transmission probability from neighbor interference effects. D-GCN attains 3.3% NMAE, outperforms strong baselines, and remains tractable even when exact analytical methods become computationally infeasible.
arXiv Detail & Related papers (2025-10-15T22:13:59Z) - Gaussian Primitive Optimized Deformable Retinal Image Registration [19.882820812725523]
Deformable retinal image registration is notoriously difficult due to large homogeneous regions and sparse but critical vascular features. We introduce a novel iterative framework that performs structured message passing to overcome these challenges. Experiments on the FIRE dataset show that GPO reduces the target registration error from 6.2 px to 2.4 px and increases the AUC at 25 px from 0.770 to 0.938.
arXiv Detail & Related papers (2025-08-23T00:44:50Z) - Sheaf Graph Neural Networks via PAC-Bayes Spectral Optimization [13.021238902084647]
Over-smoothing in Graph Neural Networks (GNNs) causes collapse in distinct node features. We introduce SGPC (Sheaf GNNs with PAC-Bayes), a unified architecture that combines cellular-sheaf message passing with several mechanisms. Experiments on nine homophilic and heterophilic benchmarks show that SGPC outperforms state-of-the-art spectral and sheaf-based GNNs.
arXiv Detail & Related papers (2025-08-01T06:39:28Z) - ReDiSC: A Reparameterized Masked Diffusion Model for Scalable Node Classification with Structured Predictions [64.17845687013434]
We propose ReDiSC, a structured diffusion model for structured node classification. We show that ReDiSC achieves superior or highly competitive performance compared to state-of-the-art GNN, label propagation, and diffusion-based baselines. Notably, ReDiSC scales effectively to large-scale datasets on which previous structured diffusion methods fail due to computational constraints.
arXiv Detail & Related papers (2025-07-19T04:46:53Z) - Geminet: Learning the Duality-based Iterative Process for Lightweight Traffic Engineering in Changing Topologies [53.38648279089736]
Geminet is a lightweight and scalable ML-based TE framework that can handle changing topologies. Its neural network size is only 0.04% to 7% of existing schemes. When trained on large-scale topologies, Geminet consumes under 10 GiB of memory, more than eight times less than the 80-plus GiB required by HARP.
arXiv Detail & Related papers (2025-06-30T09:09:50Z) - ScaleGNN: Towards Scalable Graph Neural Networks via Adaptive High-order Neighboring Feature Fusion [73.85920403511706]
We propose ScaleGNN, a novel framework that adaptively fuses multi-hop node features for scalable and effective graph learning. We show that ScaleGNN consistently outperforms state-of-the-art GNNs in both predictive accuracy and computational efficiency.
arXiv Detail & Related papers (2025-04-22T14:05:11Z) - Decentralized Nonconvex Composite Federated Learning with Gradient Tracking and Momentum [78.27945336558987]
Decentralized federated learning (DFL) eliminates reliance on the client-server architecture. Non-smooth regularization is often incorporated into machine learning tasks. We propose a novel DNCFL algorithm to solve these problems.
arXiv Detail & Related papers (2025-04-17T08:32:25Z) - Verifying message-passing neural networks via topology-based bounds tightening [3.3267518043390205]
We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs).
Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems.
We test on both node and graph classification problems and consider topological attacks that both add and remove edges.
arXiv Detail & Related papers (2024-02-21T17:05:27Z) - T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - Task-Oriented Sensing, Computation, and Communication Integration for
Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
arXiv Detail & Related papers (2022-07-03T06:57:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.