Attribute Graphs Underlying Molecular Generative Models: Path to Learning with Limited Data
- URL: http://arxiv.org/abs/2207.07174v2
- Date: Thu, 29 Aug 2024 19:27:49 GMT
- Title: Attribute Graphs Underlying Molecular Generative Models: Path to Learning with Limited Data
- Authors: Samuel C. Hoffman, Payel Das, Karthikeyan Shanmugam, Kahini Wadhawan, Prasanna Sattigeri,
- Abstract summary: We provide an algorithm that relies on perturbation experiments on latent codes of a pre-trained generative autoencoder to uncover an attribute graph.
We show that one can fit an effective graphical model encoding a structural equation model between latent codes (exogenous variables) and attributes (observed variables).
Using a pre-trained generative autoencoder trained on a large dataset of small molecules, we demonstrate that the graphical model can be used to predict a specific property.
- Score: 42.517927809224275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training generative models that capture rich semantics of the data and interpreting the latent representations encoded by such models are important problems in un-/self-supervised learning. In this work, we provide a simple algorithm that relies on perturbation experiments on latent codes of a pre-trained generative autoencoder to uncover the attribute graph implied by the generative model. We perform perturbation experiments to check for the influence of a given latent variable on a subset of attributes. Given this, we show that one can fit an effective graphical model that encodes a structural equation model between latent codes, taken as exogenous variables, and attributes, taken as observed variables. One interesting aspect is that a single latent variable controls multiple overlapping subsets of attributes, unlike conventional approaches that try to impose full independence. Using a pre-trained generative autoencoder trained on a large dataset of small molecules, we demonstrate that the graphical model between various molecular attributes and latent codes learned by our algorithm can be used to predict a specific property for molecules drawn from a different distribution. We compare prediction models trained on feature subsets chosen by simple baselines, as well as by existing causal-discovery and sparse-learning/feature-selection methods, with models trained on the Markov blanket derived by our method. Empirically, the predictor that relies on our Markov blanket attributes is robust to distribution shifts when transferred or fine-tuned with a few samples from the new distribution, especially when training data is limited.
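To make the pipeline concrete, the following is a minimal, hypothetical sketch of the two steps the abstract describes: perturbing individual latent coordinates to build a latent-to-attribute influence graph, then reading off a simple Markov-blanket proxy for a target attribute. The decode and attributes functions are stand-ins (e.g., the pre-trained autoencoder's decoder and RDKit-style property calculators); none of these names come from the authors' code.

```python
import numpy as np

def build_attribute_graph(decode, attributes, z0, eps=0.5, threshold=0.1, n_trials=32):
    """Perturb each latent coordinate of z0 and record which attributes respond.

    Returns a boolean matrix G where G[i, j] = True means latent i influences
    attribute j (a bipartite latent-to-attribute graph). `decode` maps a latent
    vector to a sample; `attributes` maps a sample to a vector of attribute values.
    """
    base = attributes(decode(z0))
    d, k = z0.shape[0], base.shape[0]
    G = np.zeros((d, k), dtype=bool)
    for i in range(d):
        deltas = []
        for _ in range(n_trials):
            z = z0.copy()
            z[i] += eps * np.random.randn()  # perturb a single exogenous latent
            deltas.append(attributes(decode(z)) - base)
        # Mark attribute j as influenced if perturbing z_i moves it noticeably.
        G[i] = np.mean(np.abs(deltas), axis=0) > threshold
    return G

def markov_blanket_proxy(G, target):
    """Attributes that share an influencing latent with `target` serve as a
    simple proxy for its Markov blanket in the implied graphical model."""
    parents = np.where(G[:, target])[0]        # latents that influence the target
    blanket = set()
    for i in parents:
        blanket |= set(np.where(G[i])[0])      # co-children of those latents
    blanket.discard(target)
    return sorted(blanket)
```

A downstream predictor for the target attribute would then be trained only on the features returned by markov_blanket_proxy, which is the subset the abstract reports to be robust under distribution shift.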
Related papers
- Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization than state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z)
- Variational Autoencoding Molecular Graphs with Denoising Diffusion Probabilistic Model [0.0]
We propose a novel deep generative model that incorporates a hierarchical structure into the probabilistic latent vectors.
We demonstrate, through experiments on small datasets of physical properties and activity, that our model can produce effective molecular latent vectors for molecular property prediction.
arXiv Detail & Related papers (2023-07-02T17:29:41Z)
- Learning Sparsity of Representations with Discrete Latent Variables [15.05207849434673]
We propose a sparse deep latent generative model (SDLGM) to explicitly model the degree of sparsity of a representation.
The resulting sparsity is not fixed but adapts to each observation under a pre-defined restriction.
For inference and learning, we develop an amortized variational method based on a Monte Carlo gradient estimator. (A minimal illustrative sketch of one such estimator appears after this entry.)
arXiv Detail & Related papers (2023-04-03T12:47:18Z)
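The Monte Carlo gradient estimator mentioned above is commonly realized for discrete latents with a relaxed Bernoulli (binary Concrete) gate, which keeps gradients reparameterizable. The sketch below is one such realization with a soft sparsity budget; it is an illustrative assumption, not the SDLGM parameterization from the paper.

```python
import torch
import torch.nn.functional as F

def sample_sparse_code(gate_logits, h, tau=0.5, max_active=8):
    """Gate a dense code h with relaxed Bernoulli samples so that sparsity
    adapts per observation under a pre-defined budget.

    gate_logits: [d] logits from an amortized inference network (assumed).
    h:           [d] dense latent code for the same observation.
    """
    u = torch.rand_like(gate_logits)
    noise = torch.log(u) - torch.log1p(-u)              # logistic noise
    gates = torch.sigmoid((gate_logits + noise) / tau)  # relaxed gates in (0, 1)
    # Penalize the expected number of active gates beyond the budget, so the
    # degree of sparsity is learned rather than fixed.
    budget_penalty = F.relu(torch.sigmoid(gate_logits).sum() - max_active)
    return gates * h, budget_penalty
```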
- Dynamic Latent Separation for Deep Learning [67.62190501599176]
A core problem in machine learning is to learn expressive latent variables for model prediction on complex data.
Here, we develop an approach that improves expressiveness, provides partial interpretation, and is not restricted to specific applications.
arXiv Detail & Related papers (2022-10-07T17:56:53Z)
- Learning Sparse Latent Representations for Generator Model [7.467412443287767]
We present a new unsupervised learning method to enforce sparsity on the latent space for the generator model.
Our model consists of only one top-down generator network that maps the latent variable to the observed data.
arXiv Detail & Related papers (2022-09-20T18:58:24Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how combining recent results on equivariant representation learning over structured spaces with a simple use of classical causal-inference results provides an effective practical solution.
We demonstrate how our model can handle more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets that would otherwise require removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Hierarchical Few-Shot Generative Models [18.216729811514718]
We study a latent-variable approach that extends the Neural Statistician to a fully hierarchical model with attention-based point-to-set aggregation. (A minimal sketch of such an aggregation module appears after this entry.)
Our results show that the hierarchical formulation better captures the intrinsic variability within sets in the small-data regime.
arXiv Detail & Related papers (2021-10-23T19:19:39Z)
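As one concrete reading of "attention-based point-to-set aggregation", the minimal module below lets a learned query attend over the points of a set to produce a single set-level code. The paper's full hierarchy and Neural Statistician machinery are omitted; the class and parameter names are illustrative.

```python
import torch
import torch.nn as nn

class AttentionSetPool(nn.Module):
    """Aggregate a set of point embeddings into one set-level embedding."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim))  # learned set-level query
        self.attn = nn.MultiheadAttention(dim, num_heads=1, batch_first=True)

    def forward(self, points):                    # points: [batch, set_size, dim]
        q = self.query.expand(points.size(0), -1, -1)
        pooled, _ = self.attn(q, points, points)  # query attends over the set
        return pooled.squeeze(1)                  # [batch, dim] set-level code
```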
- Improving the Reconstruction of Disentangled Representation Learners via Multi-Stage Modeling [54.94763543386523]
Current autoencoder-based disentangled representation learning methods achieve disentanglement by penalizing the (aggregate) posterior to encourage statistical independence of the latent factors.
We present a multi-stage modeling approach in which the disentangled factors are first learned with a penalty-based disentangled representation learning method.
The resulting low-quality reconstruction is then improved with another deep generative model trained to capture the missing correlated latent variables. (A minimal sketch of this two-stage recipe appears after this entry.)
arXiv Detail & Related papers (2020-10-25T18:51:15Z)
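One way to read the two-stage recipe is sketched below. For simplicity, the second model here fits the stage-1 reconstruction residual, a stand-in for the paper's model of the missing correlated latent variables; the fit/encode/decode calls are assumed interfaces, not a real library API.

```python
def train_two_stage(data, disentangler, refiner):
    """Stage 1 learns disentangled factors; stage 2 repairs reconstruction."""
    # Stage 1: penalty-based disentangled representation learning
    # (e.g., a beta-VAE / total-correlation-style objective).
    disentangler.fit(data)                   # assumed interface
    z = disentangler.encode(data)            # disentangled but lossy codes
    # Stage 2: a second deep generative model learns the correlated structure
    # that stage 1 discarded, improving reconstruction quality.
    residual = data - disentangler.decode(z)
    refiner.fit(inputs=z, targets=residual)  # assumed conditional interface
    return disentangler, refiner
```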
- Goal-directed Generation of Discrete Structures with Conditional Generative Models [85.51463588099556]
We introduce an approach that directly optimizes a reinforcement learning objective, maximizing an expected reward. (A minimal sketch of such an objective appears after this entry.)
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions that evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z)
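Directly optimizing a reinforcement learning objective over discrete structures is typically done with a score-function (REINFORCE) estimator. Below is a minimal sketch, assuming the conditional generative model exposes per-sample sequence log-probabilities; the function and argument names are illustrative.

```python
import torch

def reinforce_loss(log_probs, rewards):
    """Surrogate loss whose gradient ascends E[reward] over sampled structures.

    log_probs: [batch] total log-probability of each sampled discrete structure
               (a molecule or a short Python expression) under the model.
    rewards:   [batch] task reward, e.g. agreement with user-defined properties.
    """
    baseline = rewards.mean()                # simple variance-reduction baseline
    advantage = (rewards - baseline).detach()
    # The gradient of this surrogate is the negative REINFORCE estimate
    # -E[(r - b) * grad log p(x)], so minimizing it maximizes expected reward.
    return -(advantage * log_probs).mean()
```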
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.