Quasi-Conformal Convolution: A Learnable Convolution for Deep Learning on Riemann Surfaces
- URL: http://arxiv.org/abs/2502.01356v2
- Date: Tue, 04 Feb 2025 07:01:04 GMT
- Title: Quasi-Conformal Convolution: A Learnable Convolution for Deep Learning on Riemann Surfaces
- Authors: Han Zhang, Tsz Lok Ip, Lok Ming Lui
- Abstract summary: Deep learning on non-Euclidean domains is important for analyzing complex geometric data. We introduce Quasi-conformal Convolution (QCC) to define convolution on non-Euclidean domains. We develop the Quasi-Conformal Convolutional Neural Network (QCCNN) to address a variety of tasks related to geometric data.
- Score: 3.096214093393036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning on non-Euclidean domains is important for analyzing complex geometric data that lacks common coordinate systems and familiar Euclidean properties. A central challenge in this field is to define convolution on domains, which inherently possess irregular and non-Euclidean structures. In this work, we introduce Quasi-conformal Convolution (QCC), a novel framework for defining convolution on Riemann surfaces using quasi-conformal theories. Each QCC operator is linked to a specific quasi-conformal mapping, enabling the adjustment of the convolution operation through manipulation of this mapping. By utilizing trainable estimator modules that produce Quasi-conformal mappings, QCC facilitates adaptive and learnable convolution operators that can be dynamically adjusted according to the underlying data structured on Riemann surfaces. QCC unifies a broad range of spatially defined convolutions, facilitating the learning of tailored convolution operators on each underlying surface optimized for specific tasks. Building on this foundation, we develop the Quasi-Conformal Convolutional Neural Network (QCCNN) to address a variety of tasks related to geometric data. We validate the efficacy of QCCNN through the classification of images defined on curvilinear Riemann surfaces, demonstrating superior performance in this context. Additionally, we explore its potential in medical applications, including craniofacial analysis using 3D facial data and lesion segmentation on 3D human faces, achieving enhanced accuracy and reliability.
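The abstract's core idea — a convolution whose behavior is steered by a mapping that a trainable estimator module predicts — can be illustrated very loosely in NumPy. The sketch below warps the sampling stencil of an ordinary 2D convolution by per-tap displacements; the names `qc_style_conv`, `bilinear_sample`, and the `offsets` array are hypothetical, and the sketch deliberately ignores the actual quasi-conformal parameterization (Beltrami coefficients on a Riemann surface). It is not the authors' implementation.

```python
import numpy as np

def bilinear_sample(img, ys, xs):
    """Sample img at fractional coordinates (ys, xs) with bilinear interpolation."""
    h, w = img.shape
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    dy = np.clip(ys - y0, 0.0, 1.0)
    dx = np.clip(xs - x0, 0.0, 1.0)
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

def qc_style_conv(img, kernel, offsets):
    """Convolution whose sampling stencil is warped by a learned mapping.

    offsets has shape (kh, kw, 2): per-tap (dy, dx) displacements that, in a
    QCC-style model, a trainable estimator module would output. Zero offsets
    reduce to ordinary cross-correlation with valid padding.
    """
    kh, kw, _ = offsets.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    base_y, base_x = np.meshgrid(
        np.arange(out.shape[0], dtype=float),
        np.arange(out.shape[1], dtype=float), indexing="ij")
    for i in range(kh):
        for j in range(kw):
            ys = base_y + i + offsets[i, j, 0]
            xs = base_x + j + offsets[i, j, 1]
            out += kernel[i, j] * bilinear_sample(img, ys, xs)
    return out
```

With all offsets zero the operator coincides with a standard convolution, which mirrors the paper's claim that QCC unifies spatially defined convolutions as special cases of one learnable family.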
Related papers
- Geometry-Aware Spiking Graph Neural Network [24.920334588995072]
We propose a Geometry-Aware Spiking Graph Neural Network that unifies spike-based neural dynamics with adaptive representation learning. Experiments on multiple benchmarks show that GSG achieves superior accuracy, robustness, and energy efficiency compared to both Euclidean SNNs and manifold-based GNNs.
arXiv Detail & Related papers (2025-08-09T02:52:38Z) - Adaptive Riemannian Graph Neural Networks [29.859977834688625]
We introduce a novel framework that learns a continuous and anisotropic metric tensor field over the graph. It allows each node to determine its optimal local geometry, enabling the model to fluidly adapt to the graph's structural landscape. Our method demonstrates superior performance on both homophilic and heterophilic benchmarks.
arXiv Detail & Related papers (2025-08-04T16:55:02Z) - Geometric Operator Learning with Optimal Transport [77.16909146519227]
We propose integrating optimal transport (OT) into operator learning for partial differential equations (PDEs) on complex geometries. For 3D simulations focused on surfaces, our OT-based neural operator embeds the surface geometry into a 2D parameterized latent space. Experiments with Reynolds-averaged Navier-Stokes equations (RANS) on the ShapeNet-Car and DrivAerNet-Car datasets show that our method achieves better accuracy at lower computational cost.
arXiv Detail & Related papers (2025-07-26T21:28:25Z) - Estimating Dataset Dimension via Singular Metrics under the Manifold Hypothesis: Application to Inverse Problems [0.6138671548064356]
We propose a framework to deal with three key tasks: estimating the intrinsic dimension (ID) of the manifold, constructing appropriate local coordinates, and learning mappings between ambient and manifold spaces. We focus on estimating the ID of datasets by analyzing the numerical rank of the VAE decoder pullback metric. The estimated ID guides the construction of an atlas of local charts using a mixture of invertible VAEs, enabling accurate manifold parameterization and efficient inference.
arXiv Detail & Related papers (2025-07-09T21:22:59Z) - AdS-GNN -- a Conformally Equivariant Graph Neural Network [9.96018310438305]
We build a neural network that is equivariant under general conformal transformations. We validate our model on tasks from computer vision and statistical physics.
arXiv Detail & Related papers (2025-05-19T09:08:52Z) - Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z) - Score-based pullback Riemannian geometry [10.649159213723106]
We propose a framework for data-driven Riemannian geometry that is scalable in both geometry and learning.
We produce high-quality geodesics through the data support and reliably estimate the intrinsic dimension of the data manifold.
Our framework can naturally be used with anisotropic normalizing flows by adopting isometry regularization during training.
arXiv Detail & Related papers (2024-10-02T18:52:12Z) - Geometry of the Space of Partitioned Networks: A Unified Theoretical and Computational Framework [3.69102525133732]
The "space of networks" has a complex structure that cannot be adequately described using conventional statistical tools. We introduce a measure-theoretic formalism for modeling generalized network structures such as graphs, hypergraphs, or graphs whose nodes come with a partition into categorical classes. We show that the resulting metric space is an Alexandrov space of non-negative curvature, and leverage this structure to define gradients for certain functionals commonly arising in geometric data analysis tasks.
arXiv Detail & Related papers (2024-09-10T07:58:37Z) - Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z) - Improving embedding of graphs with missing data by soft manifolds [51.425411400683565]
The reliability of graph embeddings depends on how much the geometry of the continuous space matches the graph structure.
We introduce a new class of manifolds, called soft manifolds, that addresses this mismatch.
Using soft manifolds for graph embedding, we obtain continuous spaces suitable for data-analysis tasks over complex datasets.
arXiv Detail & Related papers (2023-11-29T12:48:33Z) - Tuning the Geometry of Graph Neural Networks [0.7614628596146599]
Spatial graph convolution operators have been heralded as key to the success of Graph Neural Networks (GNNs).
We show that this aggregation operator is in fact tunable, and exhibit explicit regimes in which certain choices of operators -- and therefore, embedding geometries -- might be more appropriate.
arXiv Detail & Related papers (2022-07-12T23:28:03Z) - Neural Convolutional Surfaces [59.172308741945336]
This work is concerned with a representation of shapes that disentangles fine, local, and possibly repeating geometry from global, coarse structure.
We show that this approach achieves better neural shape compression than the state of the art, as well as enabling manipulation and transfer of shape details.
arXiv Detail & Related papers (2022-04-05T15:40:11Z) - Surface Vision Transformers: Attention-Based Modelling applied to Cortical Analysis [8.20832544370228]
We introduce a domain-agnostic architecture to study any surface data projected onto a spherical manifold.
A vision transformer model encodes the sequence of patches via successive multi-head self-attention layers.
Experiments show that the SiT (Surface Vision Transformer) generally outperforms surface CNNs, while performing comparably on registered and unregistered data.
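The patch-mixing step these summaries describe — surface data cut into patches whose features are combined by self-attention — can be sketched as a single head in NumPy. This is a generic attention layer under assumed weight shapes, not the paper's multi-head SiT model:

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a patch sequence.

    x: (n_patches, d) features, one row per surface patch; wq/wk/wv: (d, d)
    projection weights. A SiT-style model would stack several multi-head
    layers; this shows only the core mixing step.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])       # pairwise patch affinities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax: rows sum to 1
    return attn @ v                              # attention-weighted mix
```

Because every patch attends to every other patch, the layer is indifferent to where patches sit on the sphere, which is consistent with the summary's claim of comparable performance on registered and unregistered data.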
arXiv Detail & Related papers (2022-03-30T15:56:11Z) - Geometry-Contrastive Transformer for Generalized 3D Pose Transfer [95.56457218144983]
The intuition of this work is to perceive the geometric inconsistency between the given meshes using the powerful self-attention mechanism.
We propose a novel geometry-contrastive Transformer that efficiently perceives global geometric inconsistencies in 3D structure.
We present a latent isometric regularization module together with a novel semi-synthesized dataset for the cross-dataset 3D pose transfer task.
arXiv Detail & Related papers (2021-12-14T13:14:24Z) - Neural Marching Cubes [14.314650721573743]
We introduce Neural Marching Cubes (NMC), a data-driven approach for extracting a triangle mesh from a discretized implicit field.
We show that our network learns local features with limited receptive fields, hence it generalizes well to new shapes and new datasets.
arXiv Detail & Related papers (2021-06-21T17:18:52Z) - Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We adapt a primal-dual framework from the graph-neural-network literature to triangle meshes.
Our method takes features for both edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights of our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.