UNiTE: Unitary N-body Tensor Equivariant Network with Applications to
Quantum Chemistry
- URL: http://arxiv.org/abs/2105.14655v1
- Date: Mon, 31 May 2021 00:48:18 GMT
- Title: UNiTE: Unitary N-body Tensor Equivariant Network with Applications to
Quantum Chemistry
- Authors: Zhuoran Qiao, Anders S. Christensen, Frederick R. Manby, Matthew
Welborn, Anima Anandkumar, Thomas F. Miller III
- Abstract summary: We propose the unitary $N$-body tensor equivariant neural network (UNiTE) for a general class of symmetric tensors.
UNiTE is equivariant with respect to the actions of a unitary group, such as the group of 3D rotations.
When applied to quantum chemistry, UNiTE outperforms all state-of-the-art machine learning methods.
- Score: 33.067344811580604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Equivariant neural networks have been successful in incorporating various
types of symmetries, but are mostly limited to vector representations of
geometric objects. Despite the prevalence of higher-order tensors in various
application domains, e.g. in quantum chemistry, equivariant neural networks for
general tensors remain unexplored. Previous strategies for learning equivariant
functions on tensors mostly rely on expensive tensor factorization which is not
scalable when the dimensionality of the problem becomes large. In this work, we
propose the unitary $N$-body tensor equivariant neural network (UNiTE), an
architecture for a general class of symmetric tensors called $N$-body tensors.
The proposed neural network is equivariant with respect to the actions of a
unitary group, such as the group of 3D rotations. Furthermore, it has a linear
time complexity with respect to the number of non-zero elements in the tensor.
We also introduce a normalization method, viz., Equivariant Normalization, to
improve generalization of the neural network while preserving symmetry. When
applied to quantum chemistry, UNiTE outperforms all state-of-the-art machine
learning methods of that domain, with an average improvement of over 110% on
multiple benchmarks. Finally, we show that UNiTE achieves robust zero-shot
generalization on diverse downstream chemistry tasks while being three orders
of magnitude faster than conventional numerical methods at competitive
accuracy.
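To make the symmetry constraint concrete, the following is a minimal NumPy sketch (not the authors' code) of the simplest case, N = 2: a 2-body tensor transforms by conjugation, T -> U T U^T, under a rotation U, and an equivariant map f must satisfy f(U T U^T) = U f(T) U^T. The toy polynomial map and the Frobenius-norm normalization below are illustrative stand-ins, since the abstract specifies neither the architecture nor the exact form of Equivariant Normalization.

import numpy as np

def random_rotation(rng):
    # Draw a random 3x3 rotation via QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))   # fix column signs to make the factorization unique
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]        # flip one column to land in SO(3)
    return q

def toy_equivariant_map(t):
    # A matrix polynomial commutes with conjugation: f(U T U^T) = U f(T) U^T.
    return t + 0.5 * t @ t

def equivariant_normalize(t, eps=1e-8):
    # Dividing by the Frobenius norm, which is invariant under rotation,
    # rescales the tensor while preserving equivariance; this is a
    # hypothetical stand-in for the paper's Equivariant Normalization.
    return t / (np.linalg.norm(t) + eps)

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))
T = 0.5 * (T + T.T)               # a symmetric 2-body tensor
U = random_rotation(rng)

lhs = equivariant_normalize(toy_equivariant_map(U @ T @ U.T))
rhs = U @ equivariant_normalize(toy_equivariant_map(T)) @ U.T
print(np.allclose(lhs, rhs))      # True: the pipeline is rotation-equivariant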
Related papers
- Compressing multivariate functions with tree tensor networks [0.0]
One-dimensional tensor networks are increasingly being used as a numerical ansatz for continuum functions.
We show how more structured tree tensor networks offer a significantly more efficient ansatz than the commonly used tensor train.
arXiv Detail & Related papers (2024-10-04T16:20:52Z)
- Equivariant Neural Tangent Kernels [2.373992571236766]
We give explicit expressions for neural tangent kernels (NTKs) of group convolutional neural networks.
In numerical experiments, we demonstrate superior performance for equivariant NTKs over non-equivariant NTKs on a classification task for medical images.
arXiv Detail & Related papers (2024-06-10T17:43:13Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach builds on a recently introduced framework for learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work is a step towards the practical use of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Unifying O(3) Equivariant Neural Networks Design with Tensor-Network Formalism [12.008737454250463]
We propose using fusion diagrams, a technique widely employed in simulating SU(2)-symmetric quantum many-body problems, to design new equivariant components for equivariant neural networks.
When applied to particles within a given local neighborhood, the resulting components, which we term "fusion blocks," serve as universal approximators of any continuous equivariant function.
Our approach, which combines tensor networks with equivariant neural networks, suggests a potentially fruitful direction for designing more expressive equivariant neural networks.
arXiv Detail & Related papers (2022-11-14T16:06:59Z)
- PELICAN: Permutation Equivariant and Lorentz Invariant or Covariant Aggregator Network for Particle Physics [64.5726087590283]
We present a machine learning architecture that uses a set of inputs maximally reduced with respect to the full 6-dimensional Lorentz symmetry.
We show that the resulting network outperforms all existing competitors despite much lower model complexity.
arXiv Detail & Related papers (2022-11-01T13:36:50Z)
- Generalization capabilities of neural networks in lattice applications [0.0]
We investigate the advantages of adopting translationally equivariant neural networks over non-equivariant ones.
We show that our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts.
arXiv Detail & Related papers (2021-12-23T11:48:06Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that the resulting network, TinvNN, strictly guarantees transformation invariance and is general and flexible enough to be combined with existing neural networks.
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
The Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Tensor-Train Networks for Learning Predictive Modeling of Multidimensional Data [0.0]
A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications.
We show that the weights of a multidimensional regression model can be learned by means of tensor networks, yielding a powerful and compact representation.
An algorithm based on alternating least squares is proposed for approximating the weights in TT-format at reduced computational cost.
arXiv Detail & Related papers (2021-01-22T16:14:38Z)