A step toward a reinforcement learning de novo genome assembler
- URL: http://arxiv.org/abs/2102.02649v4
- Date: Thu, 7 Mar 2024 20:47:45 GMT
- Title: A step toward a reinforcement learning de novo genome assembler
- Authors: Kleber Padovani, Roberto Xavier, Rafael Cabral Borges, Andre Carvalho,
Anna Reali, Annie Chateau, Ronnie Alves
- Abstract summary: Machine learning may emerge as an alternative (or complementary) way for developing more accurate and automated assemblers.
This study shed light on the application of machine learning, using reinforcement learning (RL), in genome assembly.
We improved the reward system and optimized the exploration of the state space based on pruning and in collaboration with evolutionary computing.
- Score: 0.4749981032986242
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: De novo genome assembly is a relevant but computationally complex task in
genomics. Although de novo assemblers have been used successfully in several
genomics projects, there is still no 'best assembler', and the choice and setup
of assemblers still rely on bioinformatics experts. Thus, as with other
computationally complex problems, machine learning may emerge as an alternative
(or complementary) way for developing more accurate and automated assemblers.
Reinforcement learning has proven promising for solving complex activities
without supervision - such as games - and there is a pressing need to understand
the limits of this approach to 'real' problems, such as the DNA fragment assembly
(DFA) problem. This
study aimed to shed light on the application of machine learning, using
reinforcement learning (RL), in genome assembly. We expanded upon the sole
previous approach found in the literature to solve this problem by carefully
exploring the learning aspects of the proposed intelligent agent, which uses
the Q-learning algorithm, and we provided insights for the next steps of
automated genome assembly development. We improved the reward system and
optimized the exploration of the state space based on pruning and in
collaboration with evolutionary computing. We tested the new approaches on 23
new, larger environments, all of which are publicly available online. Our results
suggest consistent performance progress; however, we also found limitations,
especially concerning the high dimensionality of state and action spaces.
Finally, we discuss paths for achieving efficient and automated genome assembly
in real scenarios considering successful RL applications - including deep
reinforcement learning.
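The abstract describes the agent only at a high level: tabular Q-learning over a read-layout state space, an improved reward system, pruning, and seeding from evolutionary computing. As a rough illustration of the core idea only, the sketch below runs tabular Q-learning on a toy read-ordering environment in which the reward is the suffix-prefix overlap between consecutive reads. The reads, the overlap-based reward, and the hyperparameters are illustrative assumptions, not the authors' exact formulation, and the pruning and evolutionary components of the paper are not reproduced here.

```python
import random
from collections import defaultdict

def overlap(a, b):
    """Length of the longest suffix of read `a` that is a prefix of read `b`."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

# Toy reads from a made-up sequence; purely illustrative, not data from the paper.
READS = ["ATGGCC", "GGCCTA", "CCTAAC", "TAACGT"]

ALPHA, GAMMA, EPSILON, EPISODES = 0.5, 0.9, 0.2, 2000
Q = defaultdict(float)  # Q[(state, action)]; state = tuple of read indices placed so far

def choose(state, actions):
    """Epsilon-greedy selection of the next read to place."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

for _ in range(EPISODES):
    remaining = list(range(len(READS)))
    state = ()
    while remaining:
        action = choose(state, remaining)
        # Reward: overlap between the previously placed read and the chosen read.
        reward = overlap(READS[state[-1]], READS[action]) if state else 0
        next_state = state + (action,)
        remaining.remove(action)
        best_next = max((Q[(next_state, a)] for a in remaining), default=0.0)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# Greedy rollout of the learned Q-table produces a read ordering (a layout).
order, left = (), list(range(len(READS)))
while left:
    a = max(left, key=lambda i: Q[(order, i)])
    order, left = order + (a,), [i for i in left if i != a]
print("learned read order:", [READS[i] for i in order])
```

A greedy rollout of the learned Q-table yields a read ordering whose adjacent overlaps could then be merged into a draft contig. Even in this toy setting the number of states grows combinatorially with the number of reads, which is exactly the high-dimensionality limitation the abstract points out.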
Related papers
- AlphaEvolve: A coding agent for scientific and algorithmic discovery [63.13852052551106]
We present AlphaEvolve, an evolutionary coding agent that substantially enhances capabilities of state-of-the-art LLMs. AlphaEvolve orchestrates an autonomous pipeline of LLMs, whose task is to improve an algorithm by making direct changes to the code. We demonstrate the broad applicability of this approach by applying it to a number of important computational problems.
arXiv Detail & Related papers (2025-06-16T06:37:18Z)
- Edge-Cloud Collaborative Computing on Distributed Intelligence and Model Optimization: A Survey [59.52058740470727]
Edge-cloud collaborative computing (ECCC) has emerged as a pivotal paradigm for addressing the computational demands of modern intelligent applications. Recent advancements in AI, particularly deep learning and large language models (LLMs), have dramatically enhanced the capabilities of these distributed systems. This survey provides a structured tutorial on fundamental architectures, enabling technologies, and emerging applications.
arXiv Detail & Related papers (2025-05-03T13:55:38Z)
- AIDE: AI-Driven Exploration in the Space of Code [6.401493599308353]
We introduce AI-Driven Exploration (AIDE), a machine learning engineering agent powered by large language models (LLMs).
AIDE frames machine learning engineering as a code optimization problem, and formulates trial-and-error as a tree search in the space of potential solutions.
By strategically reusing and refining promising solutions, AIDE effectively trades computational resources for enhanced performance.
arXiv Detail & Related papers (2025-02-18T18:57:21Z)
- Uncertainty Estimation in Multi-Agent Distributed Learning for AI-Enabled Edge Devices [0.0]
Edge IoT devices have seen a paradigm shift with the introduction of FPGAs and AI accelerators.
This advancement has vastly amplified their computational capabilities, emphasizing the practicality of edge AI.
Our study explores methods that enable distributed data processing through AI-enabled edge devices, enhancing collaborative learning capabilities.
arXiv Detail & Related papers (2024-03-14T07:40:32Z)
- SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning [85.21378553454672]
We develop a library containing a sample-efficient off-policy deep RL method, together with methods for computing rewards and resetting the environment.
We find that our implementation can achieve very efficient learning, acquiring policies for PCB board assembly, cable routing, and object relocation.
These policies achieve perfect or near-perfect success rates, extreme robustness even under perturbations, and exhibit emergent robustness recovery and correction behaviors.
arXiv Detail & Related papers (2024-01-29T10:01:10Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- Reinforcement learning informed evolutionary search for autonomous systems testing [15.210312666486029]
We propose augmenting the evolutionary search (ES) with a reinforcement learning (RL) agent trained using surrogate rewards derived from domain knowledge.
In our approach, known as RIGAA, we first train an RL agent to learn useful constraints of the problem and then use it to produce a certain part of the initial population of the search algorithm.
We evaluate RIGAA on two case studies: maze generation for an autonomous ant robot and road topology generation for an autonomous vehicle lane keeping assist system.
arXiv Detail & Related papers (2023-08-24T13:11:07Z)
- Contribution à l'Optimisation d'un Comportement Collectif pour un Groupe de Robots Autonomes (Contribution to the Optimization of a Collective Behavior for a Group of Autonomous Robots) [0.0]
This thesis studies the domain of collective robotics, and more particularly the optimization problems of multirobot systems.
The first contribution is the use of the Butterfly Optimization Algorithm (BOA) to solve the Unknown Area Exploration problem.
The second contribution is the development of a new simulation framework for benchmarking dynamic incremental problems in robotics.
arXiv Detail & Related papers (2023-06-10T21:49:08Z)
- A study on a Q-Learning algorithm application to a manufacturing assembly problem [0.8937905773981699]
This study focuses on the implementation of a reinforcement learning algorithm in an assembly problem of a given object.
A model-free Q-Learning algorithm is applied, considering the learning of a matrix of Q-values (Q-table) from the successive interactions with the environment.
The optimisation approach achieved very promising results, learning the optimal assembly sequence 98.3% of the time.
arXiv Detail & Related papers (2023-04-17T15:38:34Z)
- Reinforcement Learning for Branch-and-Bound Optimisation using Retrospective Trajectories [72.15369769265398]
Machine learning has emerged as a promising paradigm for branching.
We propose retro branching, a simple yet effective approach to RL for branching.
We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables.
arXiv Detail & Related papers (2022-05-28T06:08:07Z)
- GENEOnet: A new machine learning paradigm based on Group Equivariant Non-Expansive Operators. An application to protein pocket detection [97.5153823429076]
We introduce a new computational paradigm based on Group Equivariant Non-Expansive Operators.
We test our method, called GENEOnet, on a key problem in drug design: detecting pockets on the surface of proteins that can host ligands.
arXiv Detail & Related papers (2022-01-31T11:14:51Z)
- Recent Developments in Program Synthesis with Evolutionary Algorithms [1.8047694351309207]
We identify the relevant evolutionary program synthesis approaches and provide an in-depth analysis of their performance.
The most influential approaches we identify are stack-based, grammar-guided, as well as linear genetic programming.
For future work, we encourage researchers not only to use a program's output to assess the quality of a solution but also to consider the way the solution is reached.
arXiv Detail & Related papers (2021-08-27T11:38:27Z)
- Investigating Bi-Level Optimization for Learning and Vision from a Unified Perspective: A Survey and Beyond [114.39616146985001]
In machine learning and computer vision, despite the different motivations and mechanisms, many complex problems contain a series of closely related subproblems.
In this paper, we first uniformly express these complex learning and vision problems from the perspective of Bi-Level Optimization (BLO).
Then we construct a value-function-based single-level reformulation and establish a unified algorithmic framework to understand and formulate mainstream gradient-based BLO methodologies.
arXiv Detail & Related papers (2021-01-27T16:20:23Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.