Active learning of digenic functions with boolean matrix logic programming
- URL: http://arxiv.org/abs/2408.14487v2
- Date: Sat, 28 Sep 2024 12:39:29 GMT
- Title: Active learning of digenic functions with boolean matrix logic programming
- Authors: Lun Ai, Stephen H. Muggleton, Shi-shun Liang, Geoff S. Baldwin
- Abstract summary: We apply logic-based machine learning techniques to facilitate cellular engineering and drive biological discovery.
The approach learns the intricate genetic interactions within genome-scale metabolic network models (GEMs).
A new system, $BMLP_{active}$, efficiently explores the genomic hypothesis space by guiding informative experimentation.
- Score: 4.762323642506732
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We apply logic-based machine learning techniques to facilitate cellular engineering and drive biological discovery, based on comprehensive databases of metabolic processes called genome-scale metabolic network models (GEMs). Predicted host behaviours are not always correctly described by GEMs. Learning the intricate genetic interactions within GEMs presents computational and empirical challenges. To address these, we describe a novel approach called Boolean Matrix Logic Programming (BMLP) by leveraging boolean matrices to evaluate large logic programs. We introduce a new system, $BMLP_{active}$, which efficiently explores the genomic hypothesis space by guiding informative experimentation through active learning. In contrast to sub-symbolic methods, $BMLP_{active}$ encodes a state-of-the-art GEM of a widely accepted bacterial host in an interpretable and logical representation using datalog logic programs. Notably, $BMLP_{active}$ can successfully learn the interaction between a gene pair with fewer training examples than random experimentation, overcoming the increase in experimental design space. $BMLP_{active}$ enables rapid optimisation of metabolic models and offers a realistic approach to a self-driving lab for microbial engineering.
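The core mechanism named in the abstract, evaluating datalog programs with boolean matrices, can be illustrated with a minimal sketch. The recursive `path/2` predicate and the toy edge facts below are invented for illustration; they are not taken from the paper's GEM encoding.

```python
# Minimal sketch (assumed example): evaluating the recursive datalog program
#   path(X,Z) :- edge(X,Z).
#   path(X,Z) :- edge(X,Y), path(Y,Z).
# as a boolean matrix fixpoint, the evaluation idea behind BMLP.

def bool_matmul(a, b):
    """Boolean matrix product: (a.b)[i][j] = OR_k (a[i][k] AND b[k][j])."""
    n = len(a)
    return [[any(a[i][k] and b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bool_or(a, b):
    n = len(a)
    return [[a[i][j] or b[i][j] for j in range(n)] for i in range(n)]

def transitive_closure(adj):
    """Least fixpoint: repeat closure := closure OR (adj . closure)."""
    closure = [row[:] for row in adj]
    while True:
        step = bool_or(closure, bool_matmul(adj, closure))
        if step == closure:
            return closure
        closure = step

# edge/2 facts over four nodes: 0->1, 1->2, 2->3
edges = [[False, True,  False, False],
         [False, False, True,  False],
         [False, False, False, True ],
         [False, False, False, False]]
paths = transitive_closure(edges)
```

Each boolean matrix multiplication performs one round of rule application over all node pairs at once, which is why this style of evaluation scales to large logic programs.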
Related papers
- Simulating Petri nets with Boolean Matrix Logic Programming [4.762323642506732]
We introduce a novel approach to address the limitations of high-level symbol manipulation.
Within this framework, we propose two novel BMLP algorithms for a class of Petri nets known as elementary nets.
We demonstrate empirically that BMLP algorithms can evaluate these programs 40 times faster than tabled B-Prolog, SWI-Prolog, XSB-Prolog and Clingo.
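The elementary nets mentioned above are one-bounded Petri nets, whose markings are naturally boolean vectors. A minimal sketch of firing a transition under these semantics follows; the place names and the toy metabolic step are invented for illustration and are not the paper's encoding.

```python
# Minimal sketch (assumed example): firing a transition of an elementary
# (one-bounded) Petri net using boolean vector operations.

def fire(marking, pre, post):
    """Fire a transition if enabled: every pre-place is marked and every
    post-place is empty (the contact condition of elementary nets)."""
    enabled = (all(marking[p] for p in pre)
               and not any(marking[p] for p in post))
    if not enabled:
        return None
    new = list(marking)
    for p in pre:
        new[p] = False   # consume tokens from pre-places
    for p in post:
        new[p] = True    # produce tokens in post-places
    return new

# places: 0=substrate, 1=enzyme_bound, 2=product (toy metabolic step)
m0 = [True, False, False]
m1 = fire(m0, pre=[0], post=[1])   # binding step
m2 = fire(m1, pre=[1], post=[2])   # conversion step
```

Because markings are boolean vectors, whole reachability computations reduce to boolean matrix operations of the kind shown earlier.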
arXiv Detail & Related papers (2024-05-18T23:17:00Z) - Boolean matrix logic programming for active learning of gene functions in genome-scale metabolic network models [4.762323642506732]
We seek to apply logic-based machine learning techniques to facilitate cellular engineering and drive biological discovery.
We introduce a new system, $BMLP_{active}$, which efficiently explores the genomic hypothesis space by guiding informative experimentation.
$BMLP_{active}$ can successfully learn the interaction between a gene pair with fewer training examples than random experimentation.
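The claim above, that guided experiments beat random ones, rests on choosing each experiment to split the remaining hypotheses as evenly as possible. A hedged sketch follows; the candidate digenic functions and the knockout-experiment encoding are invented stand-ins, not the paper's hypothesis language.

```python
# Minimal sketch (assumed example): an active-learning loop that picks the
# experiment whose predicted outcomes best split the surviving hypotheses,
# then discards hypotheses inconsistent with the observed outcome.
from itertools import product

# Each hypothesis maps a (geneA_knockout, geneB_knockout) condition to
# predicted growth (True/False). These candidate functions are illustrative.
hypotheses = {
    "A_and_B_essential": lambda a, b: not (a or b),
    "A_essential":       lambda a, b: not a,
    "B_essential":       lambda a, b: not b,
    "redundant_pair":    lambda a, b: not (a and b),
}
experiments = list(product([False, True], repeat=2))

def best_experiment(live, exps):
    """Pick the experiment whose predicted outcomes are most balanced."""
    def imbalance(e):
        grows = sum(1 for h in live.values() if h(*e))
        return abs(grows - (len(live) - grows))
    return min(exps, key=imbalance)

def run(oracle):
    live, queries = dict(hypotheses), 0
    while len(live) > 1:
        e = best_experiment(live, experiments)
        outcome = oracle(*e)
        live = {n: h for n, h in live.items() if h(*e) == outcome}
        queries += 1
    return next(iter(live)), queries

name, n_queries = run(hypotheses["redundant_pair"])
```

With four candidate functions, two well-chosen experiments identify the true one, whereas uninformative experiments (here, the double wild-type and double knockout) eliminate nothing.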
arXiv Detail & Related papers (2024-05-10T09:51:06Z) - Human Comprehensible Active Learning of Genome-Scale Metabolic Networks [7.838090421892651]
A comprehensible machine learning approach that efficiently explores the hypothesis space and guides experimental design is urgently needed.
We introduce a novel machine learning framework, ILP-iML1515, based on Inductive Logic Programming (ILP).
ILP-iML1515 is built on comprehensible logical representations of a genome-scale metabolic model and can update the model by learning new logical structures from auxotrophic mutant trials.
arXiv Detail & Related papers (2023-08-24T12:42:00Z) - Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing stochastic processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Deep Bayesian Active Learning for Accelerating Stochastic Simulation [74.58219903138301]
Interactive Neural Process (INP) is a deep active learning framework for stochastic simulations.
For active learning, we propose a novel acquisition function, Latent Information Gain (LIG), calculated in the latent space of NP based models.
The results demonstrate that STNP outperforms the baselines in the learning setting and that LIG achieves state-of-the-art performance for active learning.
arXiv Detail & Related papers (2021-06-05T01:31:51Z) - Automated Biodesign Engineering by Abductive Meta-Interpretive Learning [8.788941848262786]
We propose an automated biodesign engineering framework empowered by Abductive Meta-Interpretive Learning ($Meta_{Abd}$).
arXiv Detail & Related papers (2021-05-17T12:10:26Z) - Neural Multi-Hop Reasoning With Logical Rules on Biomedical Knowledge Graphs [10.244651735862627]
We conduct an empirical study based on the real-world task of drug repurposing.
We formulate this task as a link prediction problem where both compounds and diseases correspond to entities in a knowledge graph.
We propose a new method, PoLo, that combines policy-guided walks based on reinforcement learning with logical rules.
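The link-prediction formulation above can be made concrete with a single logical rule applied over a toy knowledge graph. This is a simplified stand-in for the rule-guided walks of PoLo; all entities, relations, and the rule itself are invented for illustration.

```python
# Minimal sketch (assumed example): predicting treats(Compound, Disease)
# links on a toy biomedical knowledge graph with one logical rule:
#   treats(C, D) :- binds(C, P), associated_with(P, D).

triples = [
    ("aspirin", "binds", "PTGS2"),
    ("PTGS2", "associated_with", "inflammation"),
    ("drugX", "binds", "PROT9"),
    ("PROT9", "associated_with", "diseaseY"),
]

def predict_treats(kg):
    """Join binds/2 with associated_with/2 on the shared protein entity."""
    binds = {(h, t) for h, r, t in kg if r == "binds"}
    assoc = {(h, t) for h, r, t in kg if r == "associated_with"}
    return {(c, d) for c, p1 in binds for p2, d in assoc if p1 == p2}

predicted = predict_treats(triples)
```

Rule-based predictions of this kind are interpretable by construction: each predicted link comes with the chain of facts that derived it.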
arXiv Detail & Related papers (2021-03-18T16:46:11Z) - Investigating the Scalability and Biological Plausibility of the Activation Relaxation Algorithm [62.997667081978825]
The Activation Relaxation (AR) algorithm provides a simple and robust approach for approximating the backpropagation of error algorithm.
We show that the algorithm can be further simplified and made more biologically plausible by introducing a learnable set of backwards weights.
We also investigate whether another biologically implausible assumption of the original AR algorithm -- the frozen feedforward pass -- can be relaxed without damaging performance.
arXiv Detail & Related papers (2020-10-13T08:02:38Z) - Predictive Coding Approximates Backprop along Arbitrary Computation Graphs [68.8204255655161]
We develop a strategy to translate core machine learning architectures into their predictive coding equivalents.
Our models perform equivalently to backprop on challenging machine learning benchmarks.
Our method raises the potential that standard machine learning algorithms could in principle be directly implemented in neural circuitry.
arXiv Detail & Related papers (2020-06-07T15:35:47Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.