Generating QM1B with PySCF$_{\text{IPU}}$
- URL: http://arxiv.org/abs/2311.01135v1
- Date: Thu, 2 Nov 2023 10:31:20 GMT
- Title: Generating QM1B with PySCF$_{\text{IPU}}$
- Authors: Alexander Mathiasen, Hatem Helal, Kerstin Klaser, Paul Balanca, Josef
Dean, Carlo Luschi, Dominique Beaini, Andrew Fitzgibbon, Dominic Masters
- Abstract summary: This paper introduces the data generator PySCF$_{\text{IPU}}$, which uses Intelligence Processing Units (IPUs).
It allows us to create the dataset QM1B with one billion training examples containing 9-11 heavy atoms.
We highlight several limitations of QM1B and emphasise the low-resolution of our DFT options, which also serves as motivation for even larger, more accurate datasets.
- Score: 40.29005019051567
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The emergence of foundation models in Computer Vision and Natural Language
Processing has resulted in immense progress on downstream tasks. This progress
was enabled by datasets with billions of training examples. Similar benefits
are yet to be unlocked for quantum chemistry, where the potential of deep
learning is constrained by comparatively small datasets with 100k to 20M
training examples. These datasets are limited in size because the labels are
computed using the accurate (but computationally demanding) predictions of
Density Functional Theory (DFT). Notably, prior DFT datasets were created using
CPU supercomputers without leveraging hardware acceleration. In this paper, we
take a first step towards utilising hardware accelerators by introducing the
data generator PySCF$_{\text{IPU}}$ using Intelligence Processing Units (IPUs).
This allowed us to create the dataset QM1B with one billion training examples
containing 9-11 heavy atoms. We demonstrate that a simple baseline neural
network (SchNet 9M) improves its performance by simply increasing the amount of
training data without additional inductive biases. To encourage future
researchers to use QM1B responsibly, we highlight several limitations of QM1B
and emphasise the low-resolution of our DFT options, which also serves as
motivation for even larger, more accurate datasets. Code and dataset are
available on Github: http://github.com/graphcore-research/pyscf-ipu
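As a concrete illustration of the label-generation step the abstract describes, the sketch below uses stock (CPU) PySCF to compute two properties of the kind a QM1B-style pipeline produces for each molecule: the converged DFT total energy and the HOMO-LUMO gap. This is a minimal sketch only; the B3LYP functional and STO-3G basis are illustrative assumptions, not the exact low-resolution settings used for QM1B, and the paper's IPU-accelerated generator is the PySCF$_{\text{IPU}}$ code at the GitHub link above.

```python
# Minimal sketch: compute DFT labels for one small molecule with vanilla PySCF.
# Functional/basis are illustrative choices, not the QM1B settings.
from pyscf import gto, dft

# Build a small molecule (water) from Cartesian coordinates in Angstrom.
mol = gto.M(
    atom="O 0.0000 0.0000 0.1173; H 0.0000 0.7572 -0.4692; H 0.0000 -0.7572 -0.4692",
    basis="sto-3g",
)

# Restricted Kohn-Sham DFT; the converged total energy is one training label.
mf = dft.RKS(mol)
mf.xc = "b3lyp"
total_energy = mf.kernel()

# HOMO-LUMO gap from the molecular-orbital energies (another common label).
occupied = mf.mo_energy[mf.mo_occ > 0]
virtual = mf.mo_energy[mf.mo_occ == 0]
homo_lumo_gap = virtual.min() - occupied.max()
print(f"E_total = {total_energy:.6f} Ha, HOMO-LUMO gap = {homo_lumo_gap:.6f} Ha")
```

Running such a calculation once per molecule is what makes DFT dataset generation expensive; the paper's contribution is moving this workload onto hardware accelerators so it can be repeated a billion times.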
Related papers
- SEART Data Hub: Streamlining Large-Scale Source Code Mining and Pre-Processing [13.717170962455526]
We present the SEART Data Hub, a web application that allows researchers to easily build and pre-process large-scale datasets featuring code mined from public GitHub repositories.
Through a simple web interface, researchers can specify a set of mining criteria as well as specific pre-processing steps they want to perform.
After submitting the request, the user will receive an email with a download link for the required dataset within a few hours.
arXiv Detail & Related papers (2024-09-27T11:42:19Z) - Dataset Quantization [72.61936019738076]
We present dataset quantization (DQ), a new framework to compress large-scale datasets into small subsets.
DQ is the first method that can successfully distill large-scale datasets such as ImageNet-1k with a state-of-the-art compression ratio.
arXiv Detail & Related papers (2023-08-21T07:24:29Z) - Neural Architecture Search via Two Constant Shared Weights Initialisations [0.0]
We present a zero-cost metric that is highly correlated with training set accuracy across the NAS-Bench-101, NAS-Bench-201 and NAS-Bench-NLP benchmark datasets.
Our method is easy to integrate within existing NAS algorithms and takes a fraction of a second to evaluate a single network.
arXiv Detail & Related papers (2023-02-09T02:25:38Z) - Scalable training of graph convolutional neural networks for fast and accurate predictions of HOMO-LUMO gap in molecules [1.8947048356389908]
This work focuses on building GCNN models on HPC systems to predict material properties of millions of molecules.
We use HydraGNN, our in-house library for large-scale GCNN training, leveraging distributed data parallelism in PyTorch.
We perform parallel training on two open-source large-scale graph datasets to build a GCNN predictor for an important quantum property known as the HOMO-LUMO gap.
arXiv Detail & Related papers (2022-07-22T20:54:22Z) - NeuralNEB -- Neural Networks can find Reaction Paths Fast [7.7365628406567675]
Quantum mechanical methods like Density Functional Theory (DFT) are used with great success alongside efficient search algorithms for studying kinetics of reactive systems.
Machine Learning (ML) models have turned out to be excellent emulators of small molecule DFT calculations and could possibly replace DFT in such tasks.
In this paper we train state-of-the-art equivariant Graph Neural Network (GNN)-based models on around 10,000 elementary reactions from the Transition1x dataset.
arXiv Detail & Related papers (2022-07-20T15:29:45Z) - Scaling Up Models and Data with $\texttt{t5x}$ and $\texttt{seqio}$ [118.04625413322827]
$\texttt{t5x}$ and $\texttt{seqio}$ are open source software libraries for building and training language models.
These libraries have been used to train models with hundreds of billions of parameters on datasets with multiple terabytes of training data.
arXiv Detail & Related papers (2022-03-31T17:12:13Z) - Superiority of Simplicity: A Lightweight Model for Network Device Workload Prediction [58.98112070128482]
We propose a lightweight solution for series prediction based on historic observations.
It consists of a heterogeneous ensemble method composed of two models - a neural network and a mean predictor.
It achieves an overall $R^2$ score of 0.10 on the available FedCSIS 2020 challenge dataset.
arXiv Detail & Related papers (2020-07-07T15:44:16Z) - One-step regression and classification with crosspoint resistive memory arrays [62.997667081978825]
High speed, low energy computing machines are in demand to enable real-time artificial intelligence at the edge.
One-step learning is supported by simulations of Boston house-price prediction and of training a 2-layer neural network for MNIST digit recognition.
Results are all obtained in one computational step, thanks to the physical, parallel, and analog computing within the crosspoint array.
arXiv Detail & Related papers (2020-05-05T08:00:07Z) - On Coresets for Support Vector Machines [61.928187390362176]
A coreset is a small, representative subset of the original data points.
We show that our algorithm can be used to extend the applicability of any off-the-shelf SVM solver to streaming, distributed, and dynamic data settings.
arXiv Detail & Related papers (2020-02-15T23:25:12Z)