Learning from Frustration: Torsor CNNs on Graphs
- URL: http://arxiv.org/abs/2510.23288v1
- Date: Mon, 27 Oct 2025 12:59:45 GMT
- Title: Learning from Frustration: Torsor CNNs on Graphs
- Authors: Daiyuan Li, Shreya Arya, Robert Ghrist
- Abstract summary: We introduce Torsor CNNs, a framework for learning on graphs with local symmetries encoded as edge potentials. We demonstrate its applicability to multi-view 3D recognition, where relative camera poses naturally define the required edge potentials.
- Score: 0.4369550829556577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most equivariant neural networks rely on a single global symmetry, limiting their use in domains where symmetries are instead local. We introduce Torsor CNNs, a framework for learning on graphs with local symmetries encoded as edge potentials: group-valued transformations between neighboring coordinate frames. We establish that this geometric construction is fundamentally equivalent to the classical group synchronization problem, yielding: (1) a Torsor Convolutional Layer that is provably equivariant to local changes in coordinate frames, and (2) the frustration loss, a standalone geometric regularizer that encourages locally equivariant representations when added to any NN's training objective. The Torsor CNN framework unifies and generalizes several architectures, including classical CNNs and Gauge CNNs on manifolds, by operating on arbitrary graphs without requiring a global coordinate system or smooth manifold structure. We establish the mathematical foundations of this framework and demonstrate its applicability to multi-view 3D recognition, where relative camera poses naturally define the required edge potentials.
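The abstract describes a frustration loss that penalizes disagreement between neighboring node features after transporting them along group-valued edge potentials. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch: it assumes per-node features expressed in local frames, orthogonal matrices as edge potentials, and a squared-error penalty on the transport mismatch (the function name and signature are hypothetical, not from the paper).

```python
import numpy as np

def frustration_loss(features, edges, potentials):
    """Hypothetical sketch of a frustration-style regularizer.

    features:   (N, d) array; row i is node i's feature vector, expressed
                in node i's local coordinate frame.
    edges:      list of (i, j) index pairs.
    potentials: dict mapping (i, j) -> (d, d) orthogonal matrix g_ij that
                transports node i's frame into node j's frame.

    Penalizes the mismatch between node j's features and node i's
    features transported along the edge potential.
    """
    loss = 0.0
    for (i, j) in edges:
        g_ij = potentials[(i, j)]
        transported = g_ij @ features[i]        # express f_i in frame j
        loss += np.sum((features[j] - transported) ** 2)
    return loss / max(len(edges), 1)

# Toy example: two nodes whose local frames differ by a 90-degree rotation.
theta = np.pi / 2
g = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
f = np.array([[1.0, 0.0],    # node 0's feature in its own frame
              [0.0, 1.0]])   # node 1's feature in its own frame
# g @ f[0] equals f[1], so the two representations are consistent
# and the loss is (numerically) zero.
print(frustration_loss(f, [(0, 1)], {(0, 1): g}))
```

Adding such a term to any network's training objective would push representations toward local equivariance, consistent with the abstract's description of the loss as a standalone geometric regularizer.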
Related papers
- On the Geometric Coherence of Global Aggregation in Federated GNN [0.0]
Federated Learning (FL) enables distributed training across multiple clients without centralized data sharing. Graph Neural Networks (GNNs) model relational data through message passing. In federated GNN settings, client graphs often exhibit heterogeneous structural and propagation characteristics. Our work identifies a geometric failure mode of global aggregation in Cross-Domain Federated GNNs. We propose GGRS, a server-side framework that regulates client updates prior to aggregation based on geometric admissibility criteria.
arXiv Detail & Related papers (2026-02-17T11:34:04Z) - Adaptive Riemannian Graph Neural Networks [29.859977834688625]
We introduce a novel framework that learns a continuous and anisotropic metric tensor field over the graph. It allows each node to determine its optimal local geometry, enabling the model to fluidly adapt to the graph's structural landscape. Our method demonstrates superior performance on both homophilic and heterophilic benchmarks.
arXiv Detail & Related papers (2025-08-04T16:55:02Z) - Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z) - Beyond Canonicalization: How Tensorial Messages Improve Equivariant Message Passing [15.687514300950813]
We present a framework based on local reference frames ("local canonicalization") which can be integrated with any architecture without restrictions. Our framework applies to message passing on geometric data in Euclidean spaces of arbitrary dimension. We demonstrate the superiority of tensorial messages and achieve state-of-the-art results on normal vector regression and competitive results on other standard 3D point cloud tasks.
arXiv Detail & Related papers (2024-05-24T09:41:06Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens towards practical utilization of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - A new perspective on building efficient and expressive 3D equivariant graph neural networks [39.0445472718248]
We propose a hierarchy of 3D isomorphism to evaluate the expressive power of equivariant GNNs.
Our work leads to two crucial modules for designing expressive and efficient geometric GNNs.
To demonstrate the applicability of our theory, we propose LEFTNet which effectively implements these modules.
arXiv Detail & Related papers (2023-04-07T18:08:27Z) - Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z) - NeuroMorph: Unsupervised Shape Interpolation and Correspondence in One Go [109.88509362837475]
We present NeuroMorph, a new neural network architecture that takes as input two 3D shapes.
NeuroMorph produces a smooth interpolation and point-to-point correspondences between them.
It works well for a large variety of input shapes, including non-isometric pairs from different object categories.
arXiv Detail & Related papers (2021-06-17T12:25:44Z) - Neural Subdivision [58.97214948753937]
This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling.
We optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category.
We demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
arXiv Detail & Related papers (2020-05-04T20:03:21Z) - A Simple Fix for Convolutional Neural Network via Coordinate Embedding [2.1320960069210484]
We propose a simple approach to incorporate the coordinate information to the CNN model through coordinate embedding.
Our approach does not change the downstream model architecture and can be easily applied to pre-trained models for tasks like object detection.
arXiv Detail & Related papers (2020-03-24T00:31:27Z) - A Rotation-Invariant Framework for Deep Point Cloud Analysis [132.91915346157018]
We introduce a new low-level purely rotation-invariant representation to replace common 3D Cartesian coordinates as the network inputs.
Also, we present a network architecture to embed these representations into features, encoding local relations between points and their neighbors, and the global shape structure.
We evaluate our method on multiple point cloud analysis tasks, including shape classification, part segmentation, and shape retrieval.
arXiv Detail & Related papers (2020-03-16T14:04:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.