An ensemble of VisNet, Transformer-M, and pretraining models for
molecular property prediction in OGB Large-Scale Challenge @ NeurIPS 2022
- URL: http://arxiv.org/abs/2211.12791v2
- Date: Wed, 16 Aug 2023 11:23:05 GMT
- Title: An ensemble of VisNet, Transformer-M, and pretraining models for
molecular property prediction in OGB Large-Scale Challenge @ NeurIPS 2022
- Authors: Yusong Wang, Shaoning Li, Zun Wang, Xinheng He, Bin Shao, Tie-Yan Liu
and Tong Wang
- Abstract summary: The ViSNet Team achieved an MAE of 0.0723 eV on the test-challenge set, dramatically reducing the error by 39.75% compared with the best method in last year's competition.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this technical report, we present our solution for the OGB-LSC 2022
Graph Regression Task. The goal of this task is to predict a quantum chemical
property, the HOMO-LUMO gap, for a given molecule in the PCQM4Mv2 dataset. For
the competition, we designed two kinds of models: Transformer-M-ViSNet, a
geometry-enhanced graph neural network for fully connected molecular graphs,
and Pretrained-3D-ViSNet, a ViSNet pretrained by distilling geometric
information from optimized structures. With an ensemble of 22 models, the
ViSNet Team achieved an MAE of 0.0723 eV on the test-challenge set,
dramatically reducing the error by 39.75% compared with the best method in
last year's competition.
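Since the headline result comes from a plain model ensemble, a minimal sketch of the scoring step may help. This is illustrative only (synthetic arrays, not the authors' pipeline): uniform averaging over per-model HOMO-LUMO gap predictions, scored by MAE in eV.

```python
import numpy as np

def ensemble_mae(per_model_preds: np.ndarray, targets: np.ndarray) -> float:
    """MAE of a uniform-average ensemble.

    per_model_preds: (n_models, n_molecules) predicted HOMO-LUMO gaps in eV.
    targets: (n_molecules,) reference gaps in eV.
    """
    ensemble_pred = per_model_preds.mean(axis=0)  # average over the models
    return float(np.abs(ensemble_pred - targets).mean())

# Synthetic example with 22 models, matching the ensemble size in the report.
rng = np.random.default_rng(0)
targets = rng.uniform(1.0, 10.0, size=1024)                      # fake labels
preds = targets + rng.normal(0.0, 0.1, size=(22, targets.size))  # fake outputs
print(f"ensemble MAE: {ensemble_mae(preds, targets):.4f} eV")
```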
Related papers
- Triplet Interaction Improves Graph Transformers: Accurate Molecular Graph Learning with Triplet Graph Transformers
We propose the Triplet Graph Transformer (TGT) that enables direct communication between pairs within a 3-tuple of nodes.
TGT is applied to molecular property prediction by first predicting interatomic distances from 2D graphs and then using these distances for downstream tasks.
arXiv Detail & Related papers (2024-02-07T02:53:06Z)
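The TGT entry above describes a two-stage pipeline: predict interatomic distances from the 2D graph, then feed those distances to the downstream property model. A toy sketch of that data flow (both functions are hypothetical stand-ins, not the paper's models; hop-count distances substitute for the learned distance predictor):

```python
import numpy as np

def predict_distances(adjacency: np.ndarray) -> np.ndarray:
    """Stage 1: estimate interatomic distances from the 2D molecular graph.
    Here, hop counts via Floyd-Warshall stand in for a learned predictor."""
    dist = np.where(adjacency > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(dist.shape[0]):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist

def predict_property(node_feats: np.ndarray, distances: np.ndarray) -> float:
    """Stage 2: a downstream property head conditioned on those distances.
    Here, a distance-decayed pooling, purely illustrative."""
    return float((np.exp(-distances) @ node_feats).mean())

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-atom chain
feats = np.ones((3, 4))                                          # fake atom features
prediction = predict_property(feats, predict_distances(adj))
```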
- Heterogenous Ensemble of Models for Molecular Property Prediction
We propose a method for considering different modalities of molecules.
We ensemble these models with a HuberRegressor.
This yields a winning solution to the 2nd edition of the OGB Large-Scale Challenge (2022).
arXiv Detail & Related papers (2022-11-20T17:25:26Z)
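The HuberRegressor ensembling mentioned in the entry above maps directly onto scikit-learn's HuberRegressor, which fits a linear blend that is robust to outlier predictions. A minimal stacking sketch on synthetic data (the three-model setup and feature layout are assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import HuberRegressor

rng = np.random.default_rng(0)

# Rows = molecules; columns = per-model predictions on a held-out set.
y_valid = rng.uniform(1.0, 10.0, size=500)
valid_preds = y_valid[:, None] + rng.normal(0.0, 0.2, size=(500, 3))

# Fit a robust linear blender on the validation predictions...
blender = HuberRegressor().fit(valid_preds, y_valid)

# ...then blend the same models' test-set predictions.
y_test = rng.uniform(1.0, 10.0, size=200)
test_preds = y_test[:, None] + rng.normal(0.0, 0.2, size=(200, 3))
mae = np.abs(blender.predict(test_preds) - y_test).mean()
print(f"blended MAE: {mae:.4f}")
```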
- ViSNet: an equivariant geometry-enhanced graph neural network with vector-scalar interactive message passing for molecules
We propose an equivariant geometry-enhanced graph neural network called ViSNet, which elegantly extracts geometric features and efficiently models molecular structures.
Our proposed ViSNet outperforms state-of-the-art approaches on multiple MD benchmarks, including MD17, revised MD17 and MD22, and achieves excellent chemical property prediction on QM9 and Molecule3D datasets.
arXiv Detail & Related papers (2022-10-29T07:12:46Z)
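To make the "vector-scalar interactive message passing" of the ViSNet entry above concrete, here is a toy equivariant update under simplifying assumptions (a sketch, not the published architecture): vectors influence scalars only through rotation-invariant projections, and scalars gate directional vector messages, so scalar features stay invariant and vector features stay equivariant.

```python
import numpy as np

def vs_message_passing(s, v, pos, edges, w=0.1):
    """One toy vector-scalar interactive update (illustrative only).

    s: (n, d) scalar node features     -- invariant under rotation
    v: (n, 3, d) vector node features  -- rotate with the molecule
    pos: (n, 3) coordinates; edges: list of (i, j) index pairs
    """
    s_new, v_new = s.copy(), v.copy()
    for i, j in edges:
        r = pos[j] - pos[i]
        r_hat = r / np.linalg.norm(r)
        # Scalars receive rotation-invariant signals: neighbor scalars plus
        # the projection of neighbor vectors onto the bond direction.
        s_new[i] += w * (s[j] + r_hat @ v[j])
        # Vectors receive scalar-gated directional and vector messages,
        # which keeps the update equivariant.
        v_new[i] += w * (np.outer(r_hat, s[j]) + v[j] * s[j])
    return s_new, v_new

rng = np.random.default_rng(0)
s, v = rng.normal(size=(4, 8)), rng.normal(size=(4, 3, 8))
pos = rng.normal(size=(4, 3))
s2, v2 = vs_message_passing(s, v, pos, edges=[(0, 1), (1, 2), (2, 3)])
```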
- PSAQ-ViT V2: Towards Accurate and General Data-Free Quantization for Vision Transformers
Data-free quantization can potentially address data privacy and security concerns in model compression.
Recently, PSAQ-ViT designed a relative value metric, patch similarity, to generate data from pre-trained vision transformers (ViTs).
In this paper, we propose PSAQ-ViT V2, a more accurate and general data-free quantization framework for ViTs.
arXiv Detail & Related papers (2022-09-13T01:55:53Z)
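The "patch similarity" metric in the PSAQ-ViT entry above can be read as pairwise similarity between a pre-trained ViT's patch features, which data-free methods can use as an optimization signal when synthesizing calibration images. A toy version using cosine similarity (my reading of the summary, not the paper's exact definition):

```python
import numpy as np

def patch_similarity(patch_feats: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between a ViT's patch features.

    patch_feats: (n_patches, dim) features from a pre-trained model.
    Returns an (n_patches, n_patches) similarity matrix usable as a
    training signal for synthetic calibration data.
    """
    normed = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    return normed @ normed.T
```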
- Global Context Vision Transformers
We propose the global context vision transformer (GC ViT), a novel architecture that enhances parameter and compute utilization for computer vision.
We address the lack of inductive bias in ViTs and propose to leverage modified fused inverted residual blocks in our architecture.
Our proposed GC ViT achieves state-of-the-art results across image classification, object detection and semantic segmentation tasks.
arXiv Detail & Related papers (2022-06-20T18:42:44Z)
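For the "modified fused inverted residual blocks" named in the GC ViT entry above, a generic fused-MBConv-style block in PyTorch looks roughly like the following (a common formulation, not GC ViT's exact block): a 3x3 expansion convolution, a 1x1 projection, and a residual connection.

```python
import torch
from torch import nn

class FusedInvertedResidual(nn.Module):
    """Generic fused-MBConv-style block: expand with a 3x3 conv,
    project back with a 1x1 conv, and add a residual connection."""
    def __init__(self, channels: int, expansion: int = 4):
        super().__init__()
        hidden = channels * expansion
        self.body = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.GELU(),
            nn.Conv2d(hidden, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual connection

x = torch.randn(1, 64, 56, 56)
y = FusedInvertedResidual(64)(x)  # same shape as x
```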
- Benchmarking Graphormer on Large-Scale Molecular Modeling Datasets
This note describes recent updates to Graphormer.
With a global receptive field and an adaptive aggregation strategy, Graphormer is more powerful than classic message-passing-based GNNs.
Meanwhile, it greatly outperforms competitors in the recent Open Catalyst Challenge.
arXiv Detail & Related papers (2022-03-09T15:40:10Z)
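The "global receptive field" claimed for Graphormer in the entries above and below means every node attends to every other node, instead of only to graph neighbors as in classic message passing. A bare-bones illustration with plain scaled dot-product attention (Graphormer additionally injects structural encodings as attention biases, omitted here):

```python
import numpy as np

def global_attention(h: np.ndarray) -> np.ndarray:
    """All-pairs attention over node features h of shape (n_nodes, dim).
    Every node aggregates from every other node, unlike message passing,
    which is restricted to graph neighbors."""
    scores = h @ h.T / np.sqrt(h.shape[1])            # (n, n) pair scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    return weights @ h
```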
- An Empirical Study of Graphormer on Large-Scale Molecular Modeling Datasets
"Graphormer-V2" could attain better results on large-scale molecular modeling datasets than the vanilla one.
With a global receptive field and an adaptive aggregation strategy, Graphormer is more powerful than classic message-passing-based GNNs.
arXiv Detail & Related papers (2022-02-28T16:32:42Z)
- On Graph Neural Network Ensembles for Large-Scale Molecular Property Prediction
The PCQM4M-LSC dataset defines a molecular HOMO-LUMO property prediction task on about 3.8M graphs.
We show our current work-in-progress solution, which builds an ensemble of three graph neural network models.
arXiv Detail & Related papers (2021-06-29T15:58:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.