Persistent spectral based machine learning (PerSpect ML) for drug design
- URL: http://arxiv.org/abs/2002.00582v1
- Date: Mon, 3 Feb 2020 07:14:21 GMT
- Title: Persistent spectral based machine learning (PerSpect ML) for drug design
- Authors: Zhenyu Meng, Kelin Xia
- Abstract summary: We propose persistent spectral based machine learning (PerSpect ML) models for drug design.
We consider 11 persistent spectral variables and use them as features for machine learning models in protein-ligand binding affinity prediction.
To the best of our knowledge, our results on all these databases outperform all existing models.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose persistent spectral based machine learning
(PerSpect ML) models for drug design. Persistent spectral models, including
persistent spectral graph, persistent spectral simplicial complex and
persistent spectral hypergraph, are proposed based on spectral graph theory,
spectral simplicial complex theory and spectral hypergraph theory,
respectively. Unlike all previous spectral models, a filtration
process, as proposed in persistent homology, is introduced to generate
multiscale spectral models. More specifically, from the filtration process, a
series of nested topological representations, i.e., graphs, simplicial
complexes, and hypergraphs, can be systematically generated and their spectral
information can be obtained. Persistent spectral variables are defined as the
function of spectral variables over the filtration value. Mathematically,
persistent multiplicity (of zero eigenvalues) is exactly the persistent Betti
number (or Betti curve). We consider 11 persistent spectral variables and use
them as features for machine learning models in protein-ligand binding
affinity prediction. We systematically test our models on three of the most
commonly used databases: PDBbind-2007, PDBbind-2013, and PDBbind-2016.
To the best of our knowledge, our results on all these databases outperform
all existing models. This demonstrates the power of our PerSpect ML in
molecular data analysis and drug design.
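The core idea of the abstract — compute a spectrum at every filtration value and read the Betti-0 curve off the multiplicity of the zero eigenvalue — can be sketched minimally in Python. This is an illustrative reconstruction, not the authors' code: the distance-based graph filtration and all function names are our assumptions.

```python
import numpy as np

def persistent_spectra(points, filtration_values):
    """At each filtration value f, connect points within distance f
    and return the eigenvalues of the resulting graph Laplacian."""
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    spectra = []
    for f in filtration_values:
        adj = ((dists <= f) & ~np.eye(n, dtype=bool)).astype(float)
        lap = np.diag(adj.sum(axis=1)) - adj  # combinatorial graph Laplacian
        spectra.append(np.linalg.eigvalsh(lap))
    return spectra

def betti0_curve(spectra, tol=1e-8):
    """The multiplicity of the zero eigenvalue at each filtration step
    equals the number of connected components (the Betti-0 curve)."""
    return [int(np.sum(np.abs(ev) < tol)) for ev in spectra]
```

For two well-separated pairs of points, the curve drops from 4 isolated components to 2 pairs and finally to 1 component as the filtration value grows; the nonzero eigenvalues supply the additional spectral variables the paper feeds to the learning models.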
Related papers
- GrassNet: State Space Model Meets Graph Neural Network [57.62885438406724]
Graph State Space Network (GrassNet) is a theoretically grounded graph neural network that provides a simple yet effective scheme for designing arbitrary graph spectral filters.
To the best of our knowledge, our work is the first to employ SSMs for the design of GNN spectral filters.
Extensive experiments on nine public benchmarks reveal that GrassNet achieves superior performance in real-world graph modeling tasks.
arXiv Detail & Related papers (2024-08-16T07:33:58Z)
- Spectral Invariant Learning for Dynamic Graphs under Distribution Shifts [57.19908334882441]
Dynamic graph neural networks (DyGNNs) currently struggle with handling distribution shifts that are inherent in dynamic graphs.
We propose to study distribution shifts on dynamic graphs in the spectral domain for the first time.
arXiv Detail & Related papers (2024-03-08T04:07:23Z)
- Graph Generation via Spectral Diffusion [51.60814773299899]
We present GRASP, a novel graph generative model based on 1) the spectral decomposition of the graph Laplacian matrix and 2) a diffusion process.
Specifically, we propose to use a denoising model to sample eigenvectors and eigenvalues from which we can reconstruct the graph Laplacian and adjacency matrix.
Our permutation invariant model can also handle node features by concatenating them to the eigenvectors of each node.
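The reconstruction step described above can be illustrated with a short sketch (our own Python, not the GRASP implementation; the 0.5 threshold for recovering an unweighted graph is our assumption): given sampled eigenpairs of a graph Laplacian, rebuild L = U diag(w) U^T and read the adjacency matrix off its off-diagonal entries.

```python
import numpy as np

def adjacency_from_eigenpairs(eigvecs, eigvals, threshold=0.5):
    """Rebuild the graph Laplacian L = U diag(w) U^T from (possibly
    denoised) eigenpairs, then recover the adjacency matrix from
    L = D - A: off-diagonal entries of -L, thresholded to {0, 1}."""
    lap = eigvecs @ np.diag(eigvals) @ eigvecs.T
    adj = -lap
    np.fill_diagonal(adj, 0.0)
    return (adj > threshold).astype(float)
```

If the eigenpairs are exact, this round-trips any unweighted graph; a denoising model, as in the paper, would supply approximate eigenpairs instead.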
arXiv Detail & Related papers (2024-02-29T09:26:46Z)
- HoloNets: Spectral Convolutions do extend to Directed Graphs [59.851175771106625]
Conventional wisdom dictates that spectral convolutional networks may only be deployed on undirected graphs.
Here we show this traditional reliance on the graph Fourier transform to be superfluous.
We provide a frequency-response interpretation of newly developed filters, investigate the influence of the basis used to express filters and discuss the interplay with characteristic operators on which networks are based.
arXiv Detail & Related papers (2023-10-03T17:42:09Z)
- Mass Spectra Prediction with Structural Motif-based Graph Neural Networks [21.71309513265843]
MoMS-Net is a system that predicts mass spectra using information derived from structural motifs together with Graph Neural Networks (GNNs).
We have tested our model across diverse mass spectra and have observed its superiority over other existing models.
arXiv Detail & Related papers (2023-06-28T10:33:57Z)
- Specformer: Spectral Graph Neural Networks Meet Transformers [51.644312964537356]
Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions.
We introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain.
By stacking multiple Specformer layers, one can build a powerful spectral GNN.
arXiv Detail & Related papers (2023-03-02T07:36:23Z)
- Ensemble Spectral Prediction (ESP) Model for Metabolite Annotation [10.640447979978436]
A key challenge in metabolomics is annotating measured spectra from a biological sample with chemical identities.
We propose a novel machine learning model, Ensemble Spectral Prediction (ESP), for metabolite annotation.
arXiv Detail & Related papers (2022-03-25T17:05:41Z)
- Implicit Data-Driven Regularization in Deep Neural Networks under SGD [0.0]
We perform a spectral analysis of the large random matrices involved in a trained deep neural network (DNN).
We find that these spectra can be classified into three main types: Marchenko-Pastur spectrum (MP), Marchenko-Pastur spectrum with a few bleeding outliers (MPB), and heavy-tailed spectrum (HT).
arXiv Detail & Related papers (2021-11-26T06:36:16Z)
- Gaussian Processes on Graphs via Spectral Kernel Learning [9.260186030255081]
We propose a graph spectrum-based Gaussian process for prediction of signals defined on nodes of the graph.
We demonstrate the interpretability of the model in synthetic experiments, showing that various ground-truth spectral filters can be accurately recovered.
arXiv Detail & Related papers (2020-06-12T17:51:22Z)
- Spectral Learning on Matrices and Tensors [74.88243719463053]
We show that tensor decomposition can pick up latent effects that are missed by matrix methods.
We also outline computational techniques to design efficient tensor decomposition methods.
arXiv Detail & Related papers (2020-04-16T22:53:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.