Modelling brain connectomes networks: Solv is a worthy competitor to hyperbolic geometry!
- URL: http://arxiv.org/abs/2407.16077v1
- Date: Mon, 22 Jul 2024 22:36:04 GMT
- Title: Modelling brain connectomes networks: Solv is a worthy competitor to hyperbolic geometry!
- Authors: Dorota Celińska-Kopczyńska, Eryk Kopczyński
- Abstract summary: We suggest an embedding algorithm based on Simulated Annealing that allows us to embed connectomes into Euclidean, Spherical, Hyperbolic, Solv, Nil, and product geometries.
Our findings suggest that while three-dimensional hyperbolic embeddings yield the best results in many cases, Solv embeddings perform reasonably well.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Finding suitable embeddings for connectomes (spatially embedded complex networks that map neural connections in the brain) is crucial for analyzing and understanding cognitive processes. Recent studies have found two-dimensional hyperbolic embeddings superior to Euclidean embeddings in modeling connectomes across species, especially human connectomes. However, those studies had limitations: geometries other than Euclidean, hyperbolic, or spherical were not considered. Following William Thurston's suggestion that the networks of neurons in the brain could be successfully represented in Solv geometry, we study the goodness-of-fit of the embeddings for 21 connectome networks (8 species). To this end, we suggest an embedding algorithm based on Simulated Annealing that allows us to embed connectomes into Euclidean, Spherical, Hyperbolic, Solv, Nil, and product geometries. Our algorithm tends to find better embeddings than the state-of-the-art, even in the hyperbolic case. Our findings suggest that while three-dimensional hyperbolic embeddings yield the best results in many cases, Solv embeddings perform reasonably well.
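To make the embedding procedure concrete, below is a minimal sketch of simulated-annealing graph embedding. It is illustrative, not the authors' implementation: it assumes a two-dimensional Poincaré-disk model of hyperbolic space and a logistic edge-probability objective, whereas the paper works with three-dimensional geometries (including Solv and Nil) and its own goodness-of-fit measure; all names (`poincare_distance`, `embed`, the parameters `r` and `t`) are hypothetical.

```python
# Minimal sketch: simulated-annealing embedding of a graph into the
# Poincare disk, maximizing a logistic edge-likelihood. Illustrative
# only -- the paper's objective, moves, and geometries differ.
import math
import random

def poincare_distance(u, v):
    """Hyperbolic distance between two points of the open unit disk."""
    diff2 = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    denom = (1 - u[0] ** 2 - u[1] ** 2) * (1 - v[0] ** 2 - v[1] ** 2)
    return math.acosh(1 + 2 * diff2 / denom)

def log_likelihood(pos, pairs, r, t):
    """Log-likelihood under p(edge) = 1 / (1 + exp((d - r) / t))."""
    ll = 0.0
    for (i, j), is_edge in pairs:
        d = poincare_distance(pos[i], pos[j])
        z = (d - r) / t
        ll += -math.log1p(math.exp(z if is_edge else -z))
    return ll

def embed(nodes, edges, steps=20000, r=2.0, t=0.5):
    """Anneal node positions; naive O(n^2) objective per step."""
    nodes = list(nodes)
    pairs = [((i, j), (i, j) in edges)
             for i in nodes for j in nodes if i < j]
    pos = {v: (random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
           for v in nodes}
    cur_ll = log_likelihood(pos, pairs, r, t)
    for step in range(steps):
        temp = max(1.0 - step / steps, 1e-9)  # linear cooling schedule
        v = random.choice(nodes)
        old = pos[v]
        cand = (old[0] + random.gauss(0, 0.05),
                old[1] + random.gauss(0, 0.05))
        if cand[0] ** 2 + cand[1] ** 2 >= 0.98:
            continue  # reject moves that leave the disk
        pos[v] = cand
        new_ll = log_likelihood(pos, pairs, r, t)
        # Metropolis rule: always accept improvements, sometimes worsenings.
        if new_ll >= cur_ll or random.random() < math.exp((new_ll - cur_ll) / temp):
            cur_ll = new_ll
        else:
            pos[v] = old
    return pos
```

For example, `embed(range(5), {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)})` anneals a 5-cycle; `edges` is assumed to be a set of pairs `(i, j)` with `i < j`.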
Related papers
- Towards Non-Euclidean Foundation Models: Advancing AI Beyond Euclidean Frameworks [19.08129891252494]
This workshop focuses on the intersection of Non-Euclidean Foundation Models and Geometric Learning (NEGEL). Non-Euclidean spaces have been shown to provide more efficient and effective representations for data with intrinsic geometric properties.
arXiv Detail & Related papers (2025-05-20T14:28:59Z) - Geometry Distributions [51.4061133324376]
We propose a novel geometric data representation that models geometry as distributions.
Our approach uses diffusion models with a novel network architecture to learn surface point distributions.
We evaluate our representation qualitatively and quantitatively across various object types, demonstrating its effectiveness in achieving high geometric fidelity.
arXiv Detail & Related papers (2024-11-25T04:06:48Z) - Hyperbolic Brain Representations [44.99833362998488]
We look at the structure and functions of the human brain, highlighting the alignment between the brain's hierarchical nature and hyperbolic geometry.
Empirical evidence indicates that hyperbolic neural networks outperform Euclidean models for tasks including natural language processing, computer vision and complex network analysis.
Despite its nascent adoption, hyperbolic geometry holds promise for improving machine learning models.
arXiv Detail & Related papers (2024-09-04T19:58:25Z) - Riemannian Residual Neural Networks [58.925132597945634]
We show how to extend the residual neural network (ResNet) to general Riemannian manifolds.
ResNets have become ubiquitous in machine learning due to their beneficial learning properties, excellent empirical results, and easy-to-incorporate nature when building varied neural networks.
arXiv Detail & Related papers (2023-10-16T02:12:32Z) - Deep Learning the Shape of the Brain Connectome [6.165163123577484]
We show for the first time how one can leverage deep neural networks to estimate a geodesic metric of the brain.
Our method achieves excellent performance in geodesic-white-matter-pathway alignment.
arXiv Detail & Related papers (2022-03-06T17:51:31Z) - Dive into Layers: Neural Network Capacity Bounding using Algebraic Geometry [55.57953219617467]
We show that the learnability of a neural network is directly related to its size.
We use Betti numbers to measure the topological geometric complexity of input data and the neural network.
We perform experiments on the real-world MNIST dataset, and the results verify our analysis and conclusions.
arXiv Detail & Related papers (2021-09-03T11:45:51Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Fully Hyperbolic Neural Networks [63.22521652077353]
We propose a fully hyperbolic framework to build hyperbolic networks based on the Lorentz model.
We show that our method has better performance for building both shallow and deep networks.
arXiv Detail & Related papers (2021-05-31T03:36:49Z) - Multi-View Brain HyperConnectome AutoEncoder For Brain State Classification [0.0]
We propose a new strategy to build a hyperconnectome for each brain view based on the nearest-neighbour algorithm.
We also design a hyperconnectome autoencoder framework which operates directly on the multi-view hyperconnectomes.
Our experiments showed that the embeddings learned by HCAE yield better results for brain state classification.
arXiv Detail & Related papers (2020-09-24T08:51:44Z) - Deep Hypergraph U-Net for Brain Graph Embedding and Classification [0.0]
Network neuroscience examines the brain as a system represented by a network (or connectome).
We propose Hypergraph U-Net, a novel data embedding framework leveraging the hypergraph structure to learn low-dimensional embeddings of data samples.
We tested our method on small-scale and large-scale heterogeneous brain connectomic datasets including morphological and functional brain networks of autistic and demented patients.
arXiv Detail & Related papers (2020-08-30T08:15:18Z) - Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator (a minimal sketch follows after this list).
arXiv Detail & Related papers (2020-04-06T13:25:46Z) - The impossibility of low rank representations for triangle-rich complex networks [9.550745725703292]
We argue that such low-rank graph embeddings do not capture salient properties of complex networks.
We mathematically prove that any embedding that can successfully create these two properties must have rank nearly linear in the number of vertices.
Among other implications, this establishes that popular embedding techniques such as Singular Value Decomposition and node2vec fail to capture significant structural aspects of real-world complex networks.
arXiv Detail & Related papers (2020-03-27T20:57:56Z)
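As a footnote to the affine-skip-connection entry above, here is a minimal sketch of the idea under simplifying assumptions: it uses a dense symmetrically normalized adjacency and a plain linear transform as the convolution path, whereas the paper combines the skip with arbitrary graph convolution operators; the class name `AffineSkipGraphConv` is hypothetical.

```python
# Minimal sketch of an affine skip connection: a graph convolution path
# plus a fully connected (affine) path on the raw input features.
import torch
import torch.nn as nn

class AffineSkipGraphConv(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.conv = nn.Linear(in_dim, out_dim)  # weights for the conv path
        self.skip = nn.Linear(in_dim, out_dim)  # affine skip: W x + b

    def forward(self, x, adj_norm):
        # Neighbourhood aggregation followed by a linear transform,
        # plus a pointwise affine map of the untouched input features.
        return torch.relu(adj_norm @ self.conv(x) + self.skip(x))
```

Here `adj_norm` would be, e.g., the symmetrically normalized adjacency D^{-1/2}(A + I)D^{-1/2}. Unlike an identity residual connection, the skip path is a learned affine map, so the input and output dimensions need not match.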