A Study on Quantum Graph Neural Networks Applied to Molecular Physics
- URL: http://arxiv.org/abs/2408.03427v1
- Date: Tue, 6 Aug 2024 20:06:48 GMT
- Title: A Study on Quantum Graph Neural Networks Applied to Molecular Physics
- Authors: Simone Piperno, Andrea Ceschini, Su Yeon Chang, Michele Grossi, Sofia Vallecorsa, Massimo Panella
- Abstract summary: This paper introduces a novel architecture for Quantum Graph Neural Networks, which is significantly different from previous approaches found in the literature.
The proposed approach produces outcomes comparable to those of previous models but with fewer parameters, resulting in a highly interpretable architecture rooted in the underlying physics of the problem.
- Score: 0.5277756703318045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel architecture for Quantum Graph Neural Networks, which is significantly different from previous approaches found in the literature. The proposed approach produces outcomes comparable to those of previous models but with fewer parameters, resulting in a highly interpretable architecture rooted in the underlying physics of the problem. The architectural novelties arise from three pivotal aspects. Firstly, we employ an embedding updating method that is analogous to classical Graph Neural Networks, thereby bridging the classical-quantum gap. Secondly, each layer is devoted to capturing interactions of distinct orders, aligning with the physical properties of the system. Lastly, we harness SWAP gates to emulate the problem's inherent symmetry, a strategy not currently found in the literature. The results obtained in the considered experiments are encouraging and lay the foundation for continued research in this field.
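The SWAP-gate symmetry idea mentioned in the abstract can be illustrated with a minimal, self-contained sketch. This is purely illustrative and does not reproduce the paper's circuits: a SWAP gate exchanges the states of two qubits, which is why it can encode an exchange symmetry between identical particles in a molecule.

```python
# Illustrative only: the 2-qubit SWAP gate as a 4x4 permutation matrix
# acting on an amplitude vector ordered |00>, |01>, |10>, |11>.
# It exchanges the |01> and |10> amplitudes, i.e. it swaps the two qubits.

SWAP = [
    [1, 0, 0, 0],
    [0, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]

def apply_gate(gate, state):
    """Multiply a gate matrix by a state vector."""
    return [sum(g * s for g, s in zip(row, state)) for row in gate]

state_01 = [0, 1, 0, 0]           # qubit 0 in |0>, qubit 1 in |1>
swapped = apply_gate(SWAP, state_01)
print(swapped)                    # [0, 0, 1, 0] -> the qubit states exchanged
```

A state that is already symmetric under exchange (e.g. |00> or |11>) is left unchanged by SWAP, which is the invariance property the architecture exploits.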
Related papers
- Discovering Message Passing Hierarchies for Mesh-Based Physics Simulation [61.89682310797067]
We introduce DHMP, which learns Dynamic Hierarchies for Message Passing networks through a differentiable node selection method.
Our experiments demonstrate the effectiveness of DHMP, achieving 22.7% improvement on average compared to recent fixed-hierarchy message passing networks.
arXiv Detail & Related papers (2024-10-03T15:18:00Z)
- Point Neuron Learning: A New Physics-Informed Neural Network Architecture [8.545030794905584]
This paper proposes a new physics-informed neural network architecture.
It embeds the fundamental solution of the wave equation into the network architecture, enabling the learned model to strictly satisfy the wave equation.
Compared to other PINN methods, our approach directly processes complex numbers and offers better interpretability and generalizability.
arXiv Detail & Related papers (2024-08-30T02:07:13Z)
- Understanding the differences in Foundation Models: Attention, State Space Models, and Recurrent Neural Networks [50.29356570858905]
We introduce the Dynamical Systems Framework (DSF), which allows a principled investigation of all these architectures in a common representation.
We provide principled comparisons between softmax attention and other model classes, discussing the theoretical conditions under which softmax attention can be approximated.
This shows the DSF's potential to guide the systematic development of future, more efficient and scalable foundation models.
arXiv Detail & Related papers (2024-05-24T17:19:57Z)
- A singular Riemannian Geometry Approach to Deep Neural Networks III. Piecewise Differentiable Layers and Random Walks on $n$-dimensional Classes [49.32130498861987]
We study the case of non-differentiable activation functions, such as ReLU.
Two recent works introduced a geometric framework to study neural networks.
We illustrate our findings with some numerical experiments on classification of images and thermodynamic problems.
arXiv Detail & Related papers (2024-04-09T08:11:46Z)
- A Novel Convolutional Neural Network Architecture with a Continuous Symmetry [10.854440554663576]
This paper introduces a new Convolutional Neural Network (ConvNet) architecture inspired by a class of partial differential equations (PDEs).
With comparable performance on the image classification task, it allows for the modification of the weights via a continuous group of symmetry.
arXiv Detail & Related papers (2023-08-03T08:50:48Z)
- The autoregressive neural network architecture of the Boltzmann distribution of pairwise interacting spins systems [0.0]
Generative Autoregressive Neural Networks (ARNNs) have recently demonstrated exceptional results in image and language generation tasks.
This work presents an exact mapping of the Boltzmann distribution of binary pairwise interacting systems into autoregressive form.
The resulting ARNN architecture has weights and biases of its first layer corresponding to the Hamiltonian's couplings and external fields.
arXiv Detail & Related papers (2023-02-16T15:05:37Z)
- Hybrid neural network reduced order modelling for turbulent flows with geometric parameters [0.0]
This paper introduces a new technique combining a classical Galerkin-projection approach with a data-driven method to obtain a versatile and accurate algorithm for the resolution of geometrically parametrized incompressible turbulent Navier-Stokes problems.
The effectiveness of this procedure is demonstrated on two different test cases: a classical academic back step problem and a shape deformation Ahmed body application.
arXiv Detail & Related papers (2021-07-20T16:06:18Z)
- Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining Neural Networks, that is different from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that are extended also to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
- Forecasting Sequential Data using Consistent Koopman Autoencoders [52.209416711500005]
A new class of physics-based methods related to Koopman theory has been introduced, offering an alternative for processing nonlinear dynamical systems.
We propose a novel Consistent Koopman Autoencoder model which, unlike the majority of existing work, leverages the forward and backward dynamics.
Key to our approach is a new analysis which explores the interplay between consistent dynamics and their associated Koopman operators.
arXiv Detail & Related papers (2020-03-04T18:24:30Z)
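The forward-backward consistency idea in the Koopman entry above can be sketched in a toy form. This is a hypothetical illustration, not the authors' model: exact time reversal requires the backward operator to invert the forward one, so a natural training penalty measures how far their product is from the identity.

```python
# Illustrative only: a "consistency" penalty for a forward operator C and
# a backward operator D, measuring how far C @ D deviates from the identity.

def matmul(A, B):
    """Plain matrix product of two nested-list matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def consistency_loss(C, D):
    """Squared Frobenius distance of C @ D from the identity matrix."""
    P = matmul(C, D)
    n = len(P)
    return sum((P[i][j] - (1.0 if i == j else 0.0)) ** 2
               for i in range(n) for j in range(n))

C = [[2.0, 0.0], [0.0, 4.0]]    # toy forward dynamics
D = [[0.5, 0.0], [0.0, 0.25]]   # its exact inverse, so the loss vanishes
print(consistency_loss(C, D))   # 0.0
```

In an actual model, C and D would be learned jointly from data and this penalty would be one term of the training objective alongside the prediction losses.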
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.