Geometric Algebra Transformer
- URL: http://arxiv.org/abs/2305.18415v3
- Date: Mon, 20 Nov 2023 08:31:51 GMT
- Title: Geometric Algebra Transformer
- Authors: Johann Brehmer, Pim de Haan, Sönke Behrends, Taco Cohen
- Abstract summary: Problems involving geometric data arise in physics, chemistry, robotics, computer vision, and many other fields.
There is no single architecture that can be applied to such a wide variety of geometric types while respecting their symmetries.
We introduce the Geometric Algebra Transformer (GATr), a general-purpose architecture for geometric data.
- Score: 16.656636729960727
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Problems involving geometric data arise in physics, chemistry, robotics,
computer vision, and many other fields. Such data can take numerous forms, for
instance points, direction vectors, translations, or rotations, but to date
there is no single architecture that can be applied to such a wide variety of
geometric types while respecting their symmetries. In this paper we introduce
the Geometric Algebra Transformer (GATr), a general-purpose architecture for
geometric data. GATr represents inputs, outputs, and hidden states in the
projective geometric (or Clifford) algebra, which offers an efficient
16-dimensional vector-space representation of common geometric objects as well
as operators acting on them. GATr is equivariant with respect to E(3), the
symmetry group of 3D Euclidean space. As a Transformer, GATr is versatile,
efficient, and scalable. We demonstrate GATr in problems from n-body modeling
to wall-shear-stress estimation on large arterial meshes to robotic motion
planning. GATr consistently outperforms both non-geometric and equivariant
baselines in terms of error, data efficiency, and scalability.
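The abstract's central ingredient is the representation of geometric primitives as 16-component multivectors of the projective geometric algebra G(3,0,1). The sketch below is a minimal illustration of that representation, assuming the standard PGA convention of encoding a 3D point as a trivector; the basis ordering and the `embed_point` / `extract_point` helpers are illustrative assumptions, not GATr's actual API.

```python
# Minimal sketch of the 16-dimensional projective geometric algebra
# G(3,0,1) representation described in the abstract. Illustration only:
# the basis ordering and helpers are assumptions, not the GATr API.
import numpy as np

# 16 basis blades: 1 scalar, 4 vectors, 6 bivectors, 4 trivectors,
# 1 pseudoscalar. e0 is the homogeneous direction with e0^2 = 0.
BASIS = [
    "1",
    "e0", "e1", "e2", "e3",
    "e01", "e02", "e03", "e12", "e13", "e23",
    "e021", "e013", "e032", "e123",
    "e0123",
]

def embed_point(p):
    """Encode a 3D point as the PGA trivector e123 + x*e032 + y*e013 + z*e021."""
    x, y, z = p
    mv = np.zeros(len(BASIS))
    mv[BASIS.index("e123")] = 1.0  # homogeneous weight
    mv[BASIS.index("e032")] = x
    mv[BASIS.index("e013")] = y
    mv[BASIS.index("e021")] = z
    return mv

def extract_point(mv):
    """Recover the 3D point by dividing out the homogeneous e123 component."""
    w = mv[BASIS.index("e123")]
    return np.array([mv[BASIS.index("e032")],
                     mv[BASIS.index("e013")],
                     mv[BASIS.index("e021")]]) / w

p = np.array([1.0, -2.0, 0.5])
mv = embed_point(p)
assert mv.shape == (16,)                  # one multivector channel
assert np.allclose(extract_point(mv), p)  # round trip is exact
```

Because Euclidean transformations act linearly on these multivector components, layers built from algebra-native operations (geometric products and grade-respecting linear maps) remain E(3)-equivariant by construction.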
Related papers
- Fully Geometric Multi-Hop Reasoning on Knowledge Graphs with Transitive Relations [50.05281461410368]
We introduce GeometrE, a geometric embedding method for multi-hop reasoning. It does not require learning the logical operations and enables full geometric interpretability. Our experiments show that GeometrE outperforms current state-of-the-art methods on standard benchmark datasets.
arXiv Detail & Related papers (2025-05-18T11:17:50Z)
- Geometry-Informed Neural Operator Transformer [0.8906214436849201]
This work introduces the Geometry-Informed Neural Operator Transformer (GINOT), which integrates the transformer architecture with the neural operator framework to enable forward predictions for arbitrary geometries.
The performance of GINOT is validated on multiple challenging datasets, showcasing its high accuracy and strong generalization capabilities for complex and arbitrary 2D and 3D geometries.
arXiv Detail & Related papers (2025-04-28T03:39:27Z)
- GAGrasp: Geometric Algebra Diffusion for Dexterous Grasping [3.9108453320793326]
We propose GAGrasp, a novel framework for dexterous grasp generation.
By encoding the SE(3) symmetry constraint directly into the architecture, our method improves data and parameter efficiency.
We incorporate a differentiable physics-informed refinement layer, which ensures that generated grasps are physically plausible and stable.
arXiv Detail & Related papers (2025-03-06T06:00:55Z)
- Large Language-Geometry Model: When LLM meets Equivariance [53.8505081745406]
We propose EquiLLM, a novel framework for representing 3D physical systems.
We show that EquiLLM delivers significant improvements over previous methods across molecular dynamics simulation, human motion simulation, and antibody design.
arXiv Detail & Related papers (2025-02-16T14:50:49Z)
- Geometry Informed Tokenization of Molecules for Language Model Generation [85.80491667588923]
We consider molecule generation in 3D space using language models (LMs).
Although tokenization of molecular graphs exists, that for 3D geometries is largely unexplored.
We propose Geo2Seq, which converts molecular geometries into $SE(3)$-invariant 1D discrete sequences.
arXiv Detail & Related papers (2024-08-19T16:09:59Z)
- Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics [4.4970885242855845]
The Lorentz Geometric Algebra Transformer (L-GATr) is a new multi-purpose architecture for high-energy physics.
L-GATr is first demonstrated on regression and classification tasks from particle physics.
We then construct the first Lorentz-equivariant generative model: a continuous normalizing flow based on an L-GATr network.
arXiv Detail & Related papers (2024-05-23T17:15:41Z)
- Transolver: A Fast Transformer Solver for PDEs on General Geometries [66.82060415622871]
We present Transolver, which learns intrinsic physical states hidden behind discretized geometries.
By computing attention over physics-aware tokens encoded from slices, Transolver can effectively capture intricate physical correlations.
Transolver achieves consistent state-of-the-art results with a 22% relative gain across six standard benchmarks and also excels in large-scale industrial simulations.
arXiv Detail & Related papers (2024-02-04T06:37:38Z)
- A Hitchhiker's Guide to Geometric GNNs for 3D Atomic Systems [87.30652640973317]
Recent advances in computational modelling represent atomic systems as geometric graphs, with atoms embedded as nodes in 3D Euclidean space.
Geometric Graph Neural Networks have emerged as the preferred machine learning architecture powering applications ranging from protein structure prediction to molecular simulations and material generation.
This paper provides a comprehensive and self-contained overview of the field of Geometric GNNs for 3D atomic systems.
arXiv Detail & Related papers (2023-12-12T18:44:19Z)
- Euclidean, Projective, Conformal: Choosing a Geometric Algebra for Equivariant Transformers [16.656636729960727]
We study versions of this architecture for Euclidean, projective, and conformal algebras.
The simplest Euclidean architecture is computationally cheap, but has a smaller symmetry group and is not as sample-efficient.
Both the conformal algebra and an improved version of the projective algebra define powerful, performant architectures.
arXiv Detail & Related papers (2023-11-08T15:12:31Z)
- GTA: A Geometry-Aware Attention Mechanism for Multi-View Transformers [63.41460219156508]
We argue that existing positional encoding schemes are suboptimal for 3D vision tasks.
We propose a geometry-aware attention mechanism that encodes the geometric structure of tokens as relative transformations.
We show that our attention, called Geometric Transform Attention (GTA), improves learning efficiency and performance of state-of-the-art transformer-based NVS models.
arXiv Detail & Related papers (2023-10-16T13:16:09Z)
- Geometric Clifford Algebra Networks [53.456211342585824]
We propose Geometric Clifford Algebra Networks (GCANs) for modeling dynamical systems.
GCANs are based on symmetry group transformations using geometric (Clifford) algebras.
arXiv Detail & Related papers (2023-02-13T18:48:33Z)
- Geometry-Contrastive Transformer for Generalized 3D Pose Transfer [95.56457218144983]
The intuition of this work is to use the powerful self-attention mechanism to perceive geometric inconsistencies between the given meshes.
We propose a novel geometry-contrastive Transformer that efficiently perceives global geometric inconsistencies across 3D structures.
We present a latent isometric regularization module together with a novel semi-synthesized dataset for the cross-dataset 3D pose transfer task.
arXiv Detail & Related papers (2021-12-14T13:14:24Z)
- Geometric Algebra Attention Networks for Small Point Clouds [0.0]
Many problems in the physical sciences involve relatively small sets of points in two- or three-dimensional space.
We present rotation- and permutation-equivariant architectures for deep learning on these small point clouds.
We demonstrate the usefulness of these architectures by training models to solve sample problems relevant to physics, chemistry, and biology.
arXiv Detail & Related papers (2021-10-05T22:52:12Z)
- Symmetry-driven graph neural networks [1.713291434132985]
We introduce two graph network architectures that are equivariant to several types of transformations affecting the node coordinates.
We demonstrate these capabilities on a synthetic dataset composed of $n$-dimensional geometric objects.
arXiv Detail & Related papers (2021-05-28T18:54:12Z)
- Embed Me If You Can: A Geometric Perceptron [14.274582421372308]
We introduce an extension of the multilayer hypersphere perceptron (MLHP).
Our model is superior to the vanilla multilayer perceptron when classifying 3D Tetris shapes.
arXiv Detail & Related papers (2020-06-11T15:25:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.