Learning the Neighborhood: Contrast-Free Multimodal Self-Supervised Molecular Graph Pretraining
- URL: http://arxiv.org/abs/2509.22468v1
- Date: Fri, 26 Sep 2025 15:16:20 GMT
- Title: Learning the Neighborhood: Contrast-Free Multimodal Self-Supervised Molecular Graph Pretraining
- Authors: Boshra Ariguib, Mathias Niepert, Andrei Manolache
- Abstract summary: We introduce C-FREE (Contrast-Free Representation learning on Ego-nets), a simple framework that integrates 2D graphs with ensembles of 3D conformers. C-FREE learns molecular representations by predicting subgraph embeddings from their complementary neighborhoods in the latent space. C-FREE achieves state-of-the-art results on MoleculeNet, surpassing contrastive, generative, and other multimodal self-supervised methods.
- Score: 21.71848826907517
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: High-quality molecular representations are essential for property prediction and molecular design, yet large labeled datasets remain scarce. While self-supervised pretraining on molecular graphs has shown promise, many existing approaches either depend on hand-crafted augmentations or complex generative objectives, and often rely solely on 2D topology, leaving valuable 3D structural information underutilized. To address this gap, we introduce C-FREE (Contrast-Free Representation learning on Ego-nets), a simple framework that integrates 2D graphs with ensembles of 3D conformers. C-FREE learns molecular representations by predicting subgraph embeddings from their complementary neighborhoods in the latent space, using fixed-radius ego-nets as modeling units across different conformers. This design allows us to integrate both geometric and topological information within a hybrid Graph Neural Network (GNN)-Transformer backbone, without negatives, positional encodings, or expensive pre-processing. Pretraining on the GEOM dataset, which provides rich 3D conformational diversity, C-FREE achieves state-of-the-art results on MoleculeNet, surpassing contrastive, generative, and other multimodal self-supervised methods. Fine-tuning across datasets with diverse sizes and molecule types further demonstrates that pretraining transfers effectively to new chemical domains, highlighting the importance of 3D-informed molecular representations.
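To make the pretraining objective concrete, the sketch below illustrates the core idea described in the abstract: predict the latent embedding of a fixed-radius ego-net from its complementary neighborhood, with no negative samples. The encoder architecture, pooling, and regression loss used here are illustrative assumptions, not the authors' hybrid GNN-Transformer implementation.

```python
# Minimal sketch of a contrast-free "predict the ego-net from its complement"
# objective, loosely following the C-FREE description above. All module choices
# (MLP encoders, mean pooling, MSE loss) are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EgoNetPredictor(nn.Module):
    def __init__(self, node_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Hypothetical stand-ins for the paper's backbone: one encoder embeds
        # the ego-net, another embeds its complementary neighborhood.
        self.ego_encoder = nn.Sequential(nn.Linear(node_dim, hidden_dim), nn.ReLU(),
                                         nn.Linear(hidden_dim, hidden_dim))
        self.ctx_encoder = nn.Sequential(nn.Linear(node_dim, hidden_dim), nn.ReLU(),
                                         nn.Linear(hidden_dim, hidden_dim))
        # Predictor head maps the complement embedding to the ego-net embedding.
        self.predictor = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, ego_feats: torch.Tensor, ctx_feats: torch.Tensor) -> torch.Tensor:
        # Mean-pool node features as a placeholder for message passing / attention.
        target = self.ego_encoder(ego_feats).mean(dim=0)    # ego-net embedding
        context = self.ctx_encoder(ctx_feats).mean(dim=0)   # complement embedding
        pred = self.predictor(context)
        # Contrast-free regression loss in latent space: no negatives needed.
        return F.mse_loss(pred, target.detach())

# Toy usage: 5 atoms inside the ego-net, 12 atoms in its complement.
model = EgoNetPredictor(node_dim=16)
loss = model(torch.randn(5, 16), torch.randn(12, 16))
loss.backward()
```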
Related papers
- BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning [11.862370962277938]
We present a novel generative model, BindGPT, which uses a conceptually simple but powerful approach to create 3D molecules within the protein's binding site.
We show how such a simple conceptual approach, combined with pretraining and scaling, can perform on par with or better than the current best specialized diffusion models.
arXiv Detail & Related papers (2024-06-06T02:10:50Z) - SE3Set: Harnessing equivariant hypergraph neural networks for molecular representation learning [27.713870291922333]
We develop an SE(3) equivariant hypergraph neural network architecture tailored for advanced molecular representation learning.
SE3Set has shown performance on par with state-of-the-art (SOTA) models for small molecule datasets.
It excels on the MD22 dataset, achieving a notable improvement of approximately 20% in accuracy across all molecules.
arXiv Detail & Related papers (2024-05-26T10:43:16Z) - 3D-Mol: A Novel Contrastive Learning Framework for Molecular Property Prediction with 3D Information [1.1777304970289215]
3D-Mol is a novel approach designed for more accurate spatial structure representation.
It deconstructs molecules into three hierarchical graphs to better extract geometric information.
We compare 3D-Mol with various state-of-the-art baselines on 7 benchmarks and demonstrate its superior performance.
arXiv Detail & Related papers (2023-09-28T10:05:37Z) - Geometry-aware Line Graph Transformer Pre-training for Molecular Property Prediction [4.598522704308923]
Geometry-aware line graph transformer (Galformer) pre-training is a novel self-supervised learning framework.
Galformer consistently outperforms all baselines on both classification and regression tasks.
arXiv Detail & Related papers (2023-09-01T14:20:48Z) - Automated 3D Pre-Training for Molecular Property Prediction [54.15788181794094]
We propose a novel 3D pre-training framework (dubbed 3D PGT).
It pre-trains a model on 3D molecular graphs, and then fine-tunes it on molecular graphs without 3D structures.
Extensive experiments on 2D molecular graphs are conducted to demonstrate the accuracy, efficiency and generalization ability of the proposed 3D PGT.
arXiv Detail & Related papers (2023-06-13T14:43:13Z) - Bi-level Contrastive Learning for Knowledge-Enhanced Molecule Representations [68.32093648671496]
We introduce GODE, which accounts for the dual-level structure inherent in molecules. Molecules possess an intrinsic graph structure and simultaneously function as nodes within a broader molecular knowledge graph. By pre-training two GNNs on different graph structures, GODE effectively fuses molecular structures with their corresponding knowledge graph substructures.
arXiv Detail & Related papers (2023-06-02T15:49:45Z) - Learning Versatile 3D Shape Generation with Improved AR Models [91.87115744375052]
Auto-regressive (AR) models have achieved impressive results in 2D image generation by modeling joint distributions in the grid space.
We propose the Improved Auto-regressive Model (ImAM) for 3D shape generation, which applies discrete representation learning based on a latent vector instead of volumetric grids.
arXiv Detail & Related papers (2023-03-26T12:03:18Z) - Geometry-Complete Diffusion for 3D Molecule Generation and Optimization [3.8366697175402225]
We introduce the Geometry-Complete Diffusion Model (GCDM) for 3D molecule generation.
GCDM outperforms existing 3D molecular diffusion models by significant margins across conditional and unconditional settings.
We also show that GCDM's geometric features can be repurposed to consistently optimize the geometry and chemical composition of existing 3D molecules.
arXiv Detail & Related papers (2023-02-08T20:01:51Z) - 3D Infomax improves GNNs for Molecular Property Prediction [1.9703625025720701]
We propose pre-training a model to reason about the geometry of molecules given only their 2D molecular graphs.
We show that 3D pre-training provides significant improvements for a wide range of properties.
arXiv Detail & Related papers (2021-10-08T13:30:49Z) - GeoMol: Torsional Geometric Generation of Molecular 3D Conformer Ensembles [60.12186997181117]
Prediction of a molecule's 3D conformer ensemble from its molecular graph plays a key role in cheminformatics and drug discovery.
Existing generative models have several drawbacks, including a failure to model important elements of molecular geometry.
We propose GeoMol, an end-to-end, non-autoregressive and SE(3)-invariant machine learning approach to generate 3D conformers.
arXiv Detail & Related papers (2021-06-08T14:17:59Z) - ATOM3D: Tasks On Molecules in Three Dimensions [91.72138447636769]
Deep neural networks have recently gained significant attention in molecular learning.
In this work we present ATOM3D, a collection of both novel and existing datasets spanning several key classes of biomolecules.
We develop three-dimensional molecular learning networks for each of these tasks, finding that they consistently improve performance.
arXiv Detail & Related papers (2020-12-07T20:18:23Z) - Self-Supervised Graph Transformer on Large-Scale Molecular Data [73.3448373618865]
We propose a novel framework, GROVER, for molecular representation learning.
GROVER can learn rich structural and semantic information of molecules from enormous unlabelled molecular data.
We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules -- the biggest GNN and the largest training dataset in molecular representation learning.
arXiv Detail & Related papers (2020-06-18T08:37:04Z)