Exploiting Hierarchical Interactions for Protein Surface Learning
- URL: http://arxiv.org/abs/2401.10144v1
- Date: Wed, 17 Jan 2024 14:10:40 GMT
- Title: Exploiting Hierarchical Interactions for Protein Surface Learning
- Authors: Yiqun Lin, Liang Pan, Yi Li, Ziwei Liu, and Xiaomeng Li
- Abstract summary: Intrinsically, potential function sites in protein surfaces are determined by both geometric and chemical features.
In this paper, we present a principled framework based on deep learning techniques, namely Hierarchical Chemical and Geometric Feature Interaction Network (HCGNet)
Our method outperforms the prior state-of-the-art method by 2.3% on the site prediction task and 3.2% on the interaction matching task.
- Score: 52.10066114039307
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting interactions between proteins is one of the most important yet
challenging problems in structural bioinformatics. Intrinsically, potential
function sites in protein surfaces are determined by both geometric and
chemical features. However, existing works only consider handcrafted or
individually learned chemical features from the atom type and extract geometric
features independently. Here, we identify two key properties of effective
protein surface learning: 1) relationship among atoms: atoms are linked with
each other by covalent bonds to form biomolecules instead of appearing alone,
leading to the significance of modeling the relationship among atoms in
chemical feature learning. 2) hierarchical feature interaction: the neighboring
residue effect validates the significance of hierarchical feature interaction
among atoms and between surface points and atoms (or residues). In this paper,
we present a principled framework based on deep learning techniques, namely
Hierarchical Chemical and Geometric Feature Interaction Network (HCGNet), for
protein surface analysis by bridging chemical and geometric features with
hierarchical interactions. Extensive experiments demonstrate that our method
outperforms the prior state-of-the-art method by 2.3% on the site prediction
task and by 3.2% on the interaction matching task. Our code is available at
https://github.com/xmed-lab/HCGNet.
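The core idea of the abstract (bridging per-atom chemical features and surface-point geometric features through hierarchical interaction) can be illustrated with a minimal, hypothetical sketch. This is not the authors' HCGNet implementation; the feature dimensions, the k-nearest-atom aggregation, and the concatenation-based fusion are illustrative assumptions standing in for the learned interaction modules described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy protein: 8 atoms carrying 4-dim chemical features, and 5 surface
# points carrying 3-dim geometric features (all sizes are illustrative).
atom_xyz = rng.normal(size=(8, 3))    # atom coordinates
atom_chem = rng.normal(size=(8, 4))   # per-atom chemical features
surf_xyz = rng.normal(size=(5, 3))    # surface point coordinates
surf_geom = rng.normal(size=(5, 3))   # per-point geometric features

def gather_atom_context(surf_xyz, atom_xyz, atom_chem, k=3):
    """For each surface point, average the chemical features of its
    k nearest atoms -- a crude stand-in for point-to-atom interaction."""
    # Pairwise distances between surface points and atoms.
    d = np.linalg.norm(surf_xyz[:, None, :] - atom_xyz[None, :, :], axis=-1)
    knn = np.argsort(d, axis=1)[:, :k]   # indices of the k nearest atoms
    return atom_chem[knn].mean(axis=1)   # shape: (n_surf, chem_dim)

# Fuse geometric features with aggregated chemical context.
chem_ctx = gather_atom_context(surf_xyz, atom_xyz, atom_chem)
fused = np.concatenate([surf_geom, chem_ctx], axis=-1)
print(fused.shape)  # (5, 7): 3 geometric + 4 chemical dims per point
```

In the actual framework these interactions are learned and applied hierarchically (atom-to-atom, then atom/residue-to-surface); the sketch only shows why a surface point's representation benefits from chemical context gathered from nearby atoms.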
Related papers
- Atom-Motif Contrastive Transformer for Molecular Property Prediction [68.85399466928976]
Graph Transformer (GT) models have been widely used in the task of Molecular Property Prediction (MPP)
We propose a novel Atom-Motif Contrastive Transformer (AMCT) which explores atom-level interactions and considers motif-level interactions.
Our proposed AMCT is extensively evaluated on seven popular benchmark datasets, and both quantitative and qualitative results firmly demonstrate its effectiveness.
arXiv Detail & Related papers (2023-10-11T10:03:10Z) - C3Net: interatomic potential neural network for prediction of
physicochemical properties in heterogeneous systems [0.0]
We propose a deep neural network architecture for atom type embeddings in its molecular context.
The architecture is applied to predict physicochemical properties in heterogeneous systems.
arXiv Detail & Related papers (2023-09-27T00:51:24Z) - Atomic and Subgraph-aware Bilateral Aggregation for Molecular
Representation Learning [57.670845619155195]
We introduce a new model for molecular representation learning called the Atomic and Subgraph-aware Bilateral Aggregation (ASBA)
ASBA addresses the limitations of previous atom-wise and subgraph-wise models by incorporating both types of information.
Our method offers a more comprehensive way to learn representations for molecular property prediction and has broad potential in drug and material discovery applications.
arXiv Detail & Related papers (2023-05-22T00:56:00Z) - EquiPocket: an E(3)-Equivariant Geometric Graph Neural Network for Ligand Binding Site Prediction [49.674494450107005]
Predicting the binding sites of target proteins plays a fundamental role in drug discovery.
Most existing deep-learning methods consider a protein as a 3D image by spatially clustering its atoms into voxels.
This work proposes EquiPocket, an E(3)-equivariant Graph Neural Network (GNN) for binding site prediction.
arXiv Detail & Related papers (2023-02-23T17:18:26Z) - Predicting Molecule-Target Interaction by Learning Biomedical Network
and Molecule Representations [10.128856077021625]
We propose a pseudo-siamese Graph Neural Network method, namely MTINet+, which learns both biomedical network topological and molecule structural/chemical information as representations to predict potential interaction of given molecule and target pair.
In experiments on different molecule-target interaction tasks, MTINet+ significantly outperforms the state-of-the-art baselines.
arXiv Detail & Related papers (2023-02-02T10:00:46Z) - GEM-2: Next Generation Molecular Property Prediction Network with
Many-body and Full-range Interaction Modeling [24.94616336296936]
GEM-2 is a novel method for solving the Schrödinger equation for molecules.
It considers both the long-range and many-body interactions in molecules.
arXiv Detail & Related papers (2022-08-11T15:12:25Z) - Multi-Scale Representation Learning on Proteins [78.31410227443102]
This paper introduces a multi-scale graph construction of a protein -- HoloProt.
The surface captures coarser details of the protein, while the sequence (as the primary component) and the structure capture finer details.
Our graph encoder then learns a multi-scale representation by allowing each level to integrate the encoding from level(s) below with the graph at that level.
arXiv Detail & Related papers (2022-04-04T08:29:17Z) - Message Passing Networks for Molecules with Tetrahedral Chirality [8.391459650489123]
We develop two custom aggregation functions for message passing neural networks to learn properties of molecules with tetrahedral chirality.
Results show modest improvements over a baseline sum aggregator, highlighting opportunities for further architecture development.
arXiv Detail & Related papers (2020-11-24T03:03:09Z) - BERTology Meets Biology: Interpreting Attention in Protein Language
Models [124.8966298974842]
We demonstrate methods for analyzing protein Transformer models through the lens of attention.
We show that attention captures the folding structure of proteins, connecting amino acids that are far apart in the underlying sequence, but spatially close in the three-dimensional structure.
We also present a three-dimensional visualization of the interaction between attention and protein structure.
arXiv Detail & Related papers (2020-06-26T21:50:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.