De-novo Chemical Reaction Generation by Means of Temporal Convolutional
Neural Networks
- URL: http://arxiv.org/abs/2310.17341v3
- Date: Wed, 1 Nov 2023 23:27:13 GMT
- Title: De-novo Chemical Reaction Generation by Means of Temporal Convolutional
Neural Networks
- Authors: Andrei Buin, Hung Yi Chiang, S. Andrew Gadsden, Faraz A. Alderson
- Abstract summary: We present here a combination of two networks, Recurrent Neural Networks (RNN) and Temporal Convolutional Neural Networks (TCN).
Recurrent Neural Networks are known for their autoregressive properties and are frequently used in language modelling, with direct application to SMILES generation.
The relatively novel TCNs possess similar properties with a wide receptive field while obeying the causality required for natural language processing (NLP).
It is shown that different fine-tuning protocols have a profound impact on the generative scope of the model when applied to a dataset of interest via transfer learning.
- Score: 3.357271554042638
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present here a combination of two networks, Recurrent Neural
Networks (RNN) and Temporal Convolutional Neural Networks (TCN), for de novo
reaction generation using the novel reaction-SMILES-like representation of
reactions (CGRSmiles) with atom mapping directly incorporated. Recurrent Neural
Networks are known for their autoregressive properties and are frequently used
in language modelling, with direct application to SMILES generation. The
relatively novel TCNs possess similar properties with a wide receptive field
while obeying the causality required for natural language processing (NLP). The
combination of both latent representations, expressed through the TCN and the
RNN, results in overall better performance compared to the RNN alone.
Additionally, it is shown that different fine-tuning protocols have a profound
impact on the generative scope of the model when applied to a dataset of
interest via transfer learning.
Related papers
- Exploiting Heterogeneity in Timescales for Sparse Recurrent Spiking Neural Networks for Energy-Efficient Edge Computing [16.60622265961373]
Spiking Neural Networks (SNNs) represent the forefront of neuromorphic computing.
This paper weaves together three groundbreaking studies that revolutionize SNN performance.
arXiv Detail & Related papers (2024-07-08T23:33:12Z)
- Topological Representations of Heterogeneous Learning Dynamics of Recurrent Spiking Neural Networks [16.60622265961373]
Spiking Neural Networks (SNNs) have become an essential paradigm in neuroscience and artificial intelligence.
Recent advances in the literature have studied the network representations of deep neural networks.
arXiv Detail & Related papers (2024-03-19T05:37:26Z)
- On The Expressivity of Recurrent Neural Cascades [48.87943990557107]
Recurrent Neural Cascades (RNCs) are recurrent neural networks with no cyclic dependencies among recurrent neurons.
We show that RNCs can achieve the expressivity of all regular languages by introducing neurons that can implement groups.
arXiv Detail & Related papers (2023-12-14T15:47:26Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A comparison between Recurrent Neural Networks and classical machine learning approaches In Laser induced breakdown spectroscopy [0.8399688944263843]
Recurrent Neural Networks are classes of Artificial Neural Networks in which connections between nodes can form cycles, allowing information to persist across time steps.
The laser-induced breakdown spectroscopy (LIBS) technique is used for quantitative analysis of aluminum alloys with different Recurrent Neural Network architectures.
arXiv Detail & Related papers (2023-04-16T08:26:11Z)
- Heterogeneous Recurrent Spiking Neural Network for Spatio-Temporal Classification [13.521272923545409]
Spiking Neural Networks are often touted as brain-inspired learning models for the third wave of Artificial Intelligence.
This paper presents a heterogeneous spiking neural network (HRSNN) with unsupervised learning for video recognition tasks.
We show that HRSNN can achieve similar performance to state-of-the-art backpropagation-trained supervised SNNs, but with less computation.
arXiv Detail & Related papers (2022-09-22T16:34:01Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the Neural Tangent Kernel (NTK) has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- The Spectral Bias of Polynomial Neural Networks [63.27903166253743]
Polynomial neural networks (PNNs) have been shown to be particularly effective at image generation and face recognition, where high-frequency information is critical.
Previous studies have revealed that neural networks demonstrate a spectral bias towards low-frequency functions, which yields faster learning of low-frequency components during training.
Inspired by such studies, we conduct a spectral analysis of the Neural Tangent Kernel (NTK) of PNNs.
We find that the Π-Net family, i.e., a recently proposed parametrization of PNNs, speeds up the learning of higher-frequency components.
arXiv Detail & Related papers (2022-02-27T23:12:43Z)
- Coupled Oscillatory Recurrent Neural Network (coRNN): An accurate and (gradient) stable architecture for learning long time dependencies [15.2292571922932]
We propose a novel architecture for recurrent neural networks.
Our proposed RNN is based on a time-discretization of a system of second-order ordinary differential equations; a sketch of such an update follows this entry.
Experiments show that the proposed RNN is comparable in performance to the state of the art on a variety of benchmarks.
arXiv Detail & Related papers (2020-10-02T12:35:04Z)
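As a concrete illustration of that discretization, here is a minimal, hypothetical coRNN-style cell. The Euler-style scheme, the tanh nonlinearity, and the damping parameters gamma and eps are assumptions based on the summary above, not necessarily the paper's exact formulation.

```python
# Hypothetical, simplified coRNN-style cell: a time-discretization of the
# second-order ODE  y'' = tanh(W y + W_hat y' + V u + b) - gamma*y - eps*y'.
# Parameter names and this particular Euler-style scheme are assumptions.
import torch
import torch.nn as nn

class CoRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size, dt=0.01, gamma=1.0, eps=0.01):
        super().__init__()
        self.dt, self.gamma, self.eps = dt, gamma, eps
        self.W = nn.Linear(hidden_size, hidden_size, bias=False)      # acts on y
        self.W_hat = nn.Linear(hidden_size, hidden_size, bias=False)  # acts on z = y'
        self.V = nn.Linear(input_size, hidden_size)                   # acts on input u

    def forward(self, u, y, z):
        # z approximates the velocity y'; y is the hidden state.
        z_new = z + self.dt * (
            torch.tanh(self.W(y) + self.W_hat(z) + self.V(u))
            - self.gamma * y - self.eps * z
        )
        y_new = y + self.dt * z_new   # update position using the new velocity
        return y_new, z_new
```

The damping terms (gamma, eps) are what give this family its gradient stability over long time dependencies: the oscillatory dynamics neither explode nor vanish for suitable parameter choices.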
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent; a sketch of such an alternating loop follows this entry.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
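A min-max formulation of this kind is typically trained by alternating gradient steps on the two players. The sketch below is purely illustrative: the network shapes, the synthetic data, and the quadratic-penalty objective are assumptions, not the paper's estimation procedure.

```python
# Hypothetical alternating min-max loop: player f minimizes the objective
# while adversary g maximizes it. Architectures, data, and the objective
# g(x)*(f(x)-y) - 0.5*g(x)^2 are illustrative assumptions only.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))  # primal player
g = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))  # adversary
opt_f = torch.optim.SGD(f.parameters(), lr=1e-3)
opt_g = torch.optim.SGD(g.parameters(), lr=1e-3)

def game_value(x, y):
    # g probes the residual of the operator equation; the quadratic term
    # keeps the inner maximization well-posed.
    return (g(x) * (f(x) - y)).mean() - 0.5 * (g(x) ** 2).mean()

for step in range(1000):
    x = torch.randn(128, 4)   # stand-in for observed covariates
    y = torch.randn(128, 1)   # stand-in for outcomes

    opt_g.zero_grad()
    (-game_value(x, y)).backward()   # adversary ascends the objective
    opt_g.step()

    opt_f.zero_grad()
    game_value(x, y).backward()      # primal player descends the objective
    opt_f.step()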
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)