Accelerating nanomaterials discovery with artificial intelligence at the
HPC centers
- URL: http://arxiv.org/abs/2208.07612v1
- Date: Tue, 16 Aug 2022 09:02:16 GMT
- Title: Accelerating nanomaterials discovery with artificial intelligence at the
HPC centers
- Authors: Şener Özönder and H. Kübra Küçükkartal
- Abstract summary: Studying the properties of chemicals, drugs, biomaterials and alloys requires decades of dedicated work.
New artificial intelligence and optimization methods can be used to invert this research procedure.
We present an example smart search on the doped graphene quantum dot parameter space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Studying the properties of chemicals, drugs, biomaterials and alloys
requires decades of dedicated work. Oftentimes, however, the outcome is not what
is expected for practical applications. This research procedure can be inverted
by new artificial intelligence and optimization methods. Instead of studying the
properties of a material and its structurally close derivatives, the chemical
and structural parameter space that contains all possible derivatives of that
material can be scanned quickly and intelligently at HPC centers. As a result,
the particular material that has the desired physical or chemical properties
can be found. Here we show how Bayesian optimization, Gaussian process
regression and artificial neural networks can be used toward this goal. We
present an example smart search on the doped graphene quantum dot parameter
space.
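The smart search described in the abstract can be sketched as a standard Bayesian optimization loop: fit a Gaussian process surrogate to a few expensive evaluations, then let an acquisition function pick the next candidate. The sketch below is a minimal illustration, not the authors' code; the 1D objective is a hypothetical stand-in for an expensive property calculation (e.g. a first-principles run on a doped graphene quantum dot), and all function names and parameter ranges are assumptions.

```python
import numpy as np
from math import erf

def objective(x):
    # Hypothetical target property as a function of one doping parameter.
    return -np.sin(3.0 * x) - x**2 + 0.7 * x

def rbf_kernel(a, b, length=0.3):
    # Squared-exponential (RBF) covariance between two 1D point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-6):
    # Standard GP regression equations; returns predictive mean and std.
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    var = 1.0 - np.sum(K_s * (K_inv @ K_s), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for maximization: E[max(f - best, 0)] under the GP posterior.
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.array([erf(v / np.sqrt(2.0)) for v in z]))
    pdf = np.exp(-0.5 * z**2) / np.sqrt(2.0 * np.pi)
    return (mu - best) * cdf + sigma * pdf

rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 2.0, 200)       # candidate parameter values
x_obs = rng.uniform(-1.0, 2.0, size=3)   # a few initial random evaluations
y_obs = objective(x_obs)

for _ in range(10):                       # smart-search iterations
    mu, sigma = gp_posterior(x_obs, y_obs, grid)
    ei = expected_improvement(mu, sigma, y_obs.max())
    x_next = grid[np.argmax(ei)]          # most promising candidate
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print(f"best parameter: {x_obs[np.argmax(y_obs)]:.3f}, "
      f"best value: {y_obs.max():.3f}")
```

In a real materials search the scalar parameter would be replaced by the chemical and structural parameter vector, and each call to `objective` would dispatch an electronic-structure calculation to the HPC queue; the surrogate-plus-acquisition loop stays the same.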
Related papers
- InversionGNN: A Dual Path Network for Multi-Property Molecular Optimization [77.79862482208326]
InversionGNN is an effective yet sample-efficient dual-path graph neural network (GNN) for multi-objective drug discovery.
We train the model for multi-property prediction to acquire knowledge of the optimal combination of functional groups.
Then the learned chemical knowledge helps the inversion generation path to generate molecules with required properties.
arXiv Detail & Related papers (2025-03-03T12:53:36Z) - On the practical applicability of modern DFT functionals for chemical computations. Case study of DM21 applicability for geometry optimization [55.88862563823878]
This study evaluates the efficiency of the DM21 functional in predicting molecular geometries.
We implement geometry optimization for the DM21 functional in PySCF.
Our findings reveal both the potential and the current challenges of using neural network functionals for geometry optimization in DFT.
arXiv Detail & Related papers (2025-01-21T14:01:06Z) - Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
Neuromorphic computing uses spiking neural networks (SNNs) to perform inference tasks.
Embedding a small payload within each spike exchanged between spiking neurons can enhance inference accuracy without increasing energy consumption.
Split computing, where an SNN is partitioned across two devices, is a promising solution.
This paper presents the first comprehensive study of a neuromorphic wireless split computing architecture that employs multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z) - Geometry-Informed Neural Operator for Large-Scale 3D PDEs [76.06115572844882]
We propose the geometry-informed neural operator (GINO) to learn the solution operator of large-scale partial differential equations.
We successfully trained GINO to predict the pressure on car surfaces using only five hundred data points.
arXiv Detail & Related papers (2023-09-01T16:59:21Z) - Speed Limits for Deep Learning [67.69149326107103]
Recent advancement in thermodynamics allows bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, given some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z) - Efficient SGD Neural Network Training via Sublinear Activated Neuron
Identification [22.361338848134025]
We present a fully connected two-layer neural network for shifted ReLU activation to enable activated neuron identification in sublinear time via geometric search.
We also prove that our algorithm can converge in $O(M^2/\epsilon^2)$ time with network size quadratic in the coefficient norm upper bound $M$ and error term $\epsilon$.
arXiv Detail & Related papers (2023-07-13T05:33:44Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - HOAX: A Hyperparameter Optimization Algorithm Explorer for Neural
Networks [0.0]
The bottleneck for trajectory-based methods to study photoinduced processes is still the huge number of electronic structure calculations.
We present an innovative solution, in which the amount of electronic structure calculations is drastically reduced, by employing machine learning algorithms and methods borrowed from the realm of artificial intelligence.
arXiv Detail & Related papers (2023-02-01T11:12:35Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - NeuralNEB -- Neural Networks can find Reaction Paths Fast [7.7365628406567675]
Quantum mechanical methods like Density Functional Theory (DFT) are used with great success alongside efficient search algorithms for studying kinetics of reactive systems.
Machine Learning (ML) models have turned out to be excellent emulators of small molecule DFT calculations and could possibly replace DFT in such tasks.
In this paper we train state-of-the-art equivariant Graph Neural Network (GNN)-based models on around 10,000 elementary reactions from the Transition1x dataset.
arXiv Detail & Related papers (2022-07-20T15:29:45Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural
Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing together the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Modelling and optimization of nanovector synthesis for applications in
drug delivery systems [0.0]
This review focuses on the use of artificial intelligence and metaheuristic algorithms for nanoparticle synthesis in drug delivery systems.
Neural networks are better at modelling nanovector (NV) properties than linear regression algorithms and response surface methodology.
For metaheuristic algorithms, benchmark functions were optimized with cuckoo search, firefly algorithm, genetic algorithm and symbiotic organism search.
arXiv Detail & Related papers (2021-11-10T20:52:27Z) - Deep-Learning Density Functional Theory Hamiltonian for Efficient ab
initio Electronic-Structure Calculation [13.271547916205675]
We develop a deep neural network approach to represent DFT Hamiltonian (DeepH) of crystalline materials.
The method provides a solution to the accuracy-efficiency dilemma of DFT and opens opportunities to explore large-scale material systems.
arXiv Detail & Related papers (2021-04-08T14:08:10Z) - FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often occupy large number of parameters and require heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.