What Matters in Reinforcement Learning for Tractography
- URL: http://arxiv.org/abs/2305.09041v2
- Date: Wed, 17 May 2023 19:30:27 GMT
- Title: What Matters in Reinforcement Learning for Tractography
- Authors: Antoine Théberge, Christian Desrosiers, Maxime Descoteaux, Pierre-Marc Jodoin
- Abstract summary: Deep reinforcement learning (RL) has been proposed to learn the tractography procedure and to train agents that reconstruct the structure of the white matter without manually curated reference streamlines.
We thoroughly explore the different components of the proposed framework, such as the choice of RL algorithm, the seeding strategy, the input signal, and the reward function, and shed light on their impact.
We propose a series of recommendations concerning the choice of RL algorithm, the input to the agents, the reward function, and more, to help future work using reinforcement learning for tractography.
- Score: 12.940129711489005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, deep reinforcement learning (RL) has been proposed to learn the
tractography procedure and train agents to reconstruct the structure of the
white matter without manually curated reference streamlines. While the
reported performance was competitive, the proposed framework is complex, and
little is yet known about the role and impact of its multiple parts. In this
work, we thoroughly explore the different components of the proposed framework,
such as the choice of the RL algorithm, seeding strategy, the input signal and
reward function, and shed light on their impact. Approximately 7,400 models
were trained for this work, totalling nearly 41,000 hours of GPU time. Our goal
is to guide researchers eager to explore the possibilities of deep RL for
tractography by exposing what works and what does not with this category of
approaches. As such, we ultimately propose a series of recommendations concerning
the choice of RL algorithm, the input to the agents, the reward function and
more to help future work using reinforcement learning for tractography. We also
release the open-source codebase, trained models, and datasets for users and
researchers wanting to explore reinforcement learning for tractography.
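To make the framework under study concrete, the sketch below shows how tractography can be cast as an RL loop: the agent observes the local diffusion signal at the streamline tip, outputs a unit tracking direction, and is rewarded for aligning with the local fibre orientation and with its previous step. The `fodf_peak` helper, the constants, and the exact reward shape are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of tractography as an RL loop (illustrative, not the
# paper's code). `policy` maps (position, previous direction) to a
# direction; `fodf_peak` is an assumed helper returning the dominant
# fibre direction at a position.
import numpy as np

STEP_SIZE = 0.5   # tracking step in mm (typical value, an assumption)
MAX_STEPS = 200

def reward(action, peak, prev_dir):
    # Favour alignment with the local fODF peak (fibres have no
    # preferred orientation, hence abs) and smooth, forward-moving steps.
    return abs(np.dot(action, peak)) * max(np.dot(action, prev_dir), 0.0)

def track(policy, seed, fodf_peak):
    pos, prev_dir = np.asarray(seed, float), np.array([0.0, 0.0, 1.0])
    streamline, total = [pos.copy()], 0.0
    for _ in range(MAX_STEPS):
        action = policy(pos, prev_dir)
        action = action / np.linalg.norm(action)   # keep it a unit vector
        total += reward(action, fodf_peak(pos), prev_dir)
        pos = pos + STEP_SIZE * action
        streamline.append(pos.copy())
        prev_dir = action
    return np.array(streamline), total
```

Each component the paper ablates maps onto this loop: the RL algorithm trains `policy`, the seeding strategy chooses `seed`, the input signal defines what `policy` observes, and the reward function replaces `reward`.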
Related papers
- RLeXplore: Accelerating Research in Intrinsically-Motivated Reinforcement Learning [50.55776190278426]
Extrinsic rewards can effectively guide reinforcement learning (RL) agents in specific tasks.
We introduce RLeXplore, a unified, highly modularized, and plug-and-play framework offering reliable implementations of eight state-of-the-art intrinsic reward algorithms.
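The usual way such intrinsic reward modules plug into a training loop is as a scaled, decaying bonus added to the task reward; the sketch below shows that generic pattern, not RLeXplore's actual API.

```python
# Generic extrinsic + intrinsic reward mixing (illustrative).
# `intrinsic` stands in for the output of any intrinsic reward module
# (e.g. RND- or ICM-style novelty bonuses).
import numpy as np

def mix_rewards(extrinsic, intrinsic, beta=0.05, step=0, decay=0.999):
    # The bonus is annealed so the task reward dominates once
    # exploration has paid off.
    return np.asarray(extrinsic) + beta * (decay ** step) * np.asarray(intrinsic)
```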
arXiv Detail & Related papers (2024-05-29T22:23:20Z)
- Leveraging Optimal Transport for Enhanced Offline Reinforcement Learning in Surgical Robotic Environments [4.2569494803130565]
We introduce an innovative algorithm designed to assign rewards to offline trajectories, using a small number of high-quality expert demonstrations.
This approach circumvents the need for handcrafted rewards, unlocking the potential to harness vast datasets for policy learning.
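A simplified sketch of the idea: score each offline state by how cheaply it transports onto the expert demonstration states, using plain Sinkhorn iterations. The feature-space cost and per-state reward below are assumptions; the paper's exact formulation may differ.

```python
# Optimal-transport reward relabelling (simplified sketch).
import numpy as np

def ot_rewards(traj, expert, eps=0.1, iters=100):
    # traj: (N, d) offline states; expert: (M, d) expert states.
    cost = np.linalg.norm(traj[:, None] - expert[None, :], axis=-1)
    K = np.exp(-cost / eps)                      # Gibbs kernel
    u = np.ones(len(traj)) / len(traj)           # uniform marginals
    v = np.ones(len(expert)) / len(expert)
    a, b = u.copy(), v.copy()
    for _ in range(iters):                       # Sinkhorn fixed point
        a = u / (K @ b)
        b = v / (K.T @ a)
    plan = a[:, None] * K * b[None, :]           # transport plan
    return -(plan * cost).sum(axis=1)            # per-state reward
```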
arXiv Detail & Related papers (2023-10-13T03:39:15Z)
- Implicit Offline Reinforcement Learning via Supervised Learning [83.8241505499762]
Offline Reinforcement Learning (RL) via Supervised Learning is a simple and effective way to learn robotic skills from a dataset collected by policies of different expertise levels.
We show how implicit models can leverage return information and match or outperform explicit algorithms to acquire robotic skills from fixed datasets.
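In its explicit form, the "via supervised learning" recipe reduces to behaviour cloning conditioned on the return-to-go; the paper's implicit models replace the direct regression below with an energy-based formulation. Dimensions and names here are illustrative.

```python
# Return-conditioned behaviour cloning (explicit variant, illustrative).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 17, 6   # assumed dimensions
policy = nn.Sequential(nn.Linear(OBS_DIM + 1, 256), nn.ReLU(),
                       nn.Linear(256, ACT_DIM))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def train_step(states, actions, returns_to_go):
    # Condition on the observed return; at test time, condition on a
    # high target return to elicit expert-like behaviour.
    inp = torch.cat([states, returns_to_go.unsqueeze(-1)], dim=-1)
    loss = ((policy(inp) - actions) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```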
arXiv Detail & Related papers (2022-10-21T21:59:42Z)
- Reinforcement Learning for Branch-and-Bound Optimisation using Retrospective Trajectories [72.15369769265398]
Machine learning has emerged as a promising paradigm for branching.
We propose retro branching, a simple yet effective approach to RL for branching.
We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables.
arXiv Detail & Related papers (2022-05-28T06:08:07Z)
- Reinforcement Learning on Graph: A Survey [0.3867363075280544]
We provide a comprehensive overview of RL models and graph mining, and generalize these algorithms to Graph Reinforcement Learning (GRL).
We discuss the applications of GRL methods across various domains and summarize the method description, open-source codes, and benchmark datasets of GRL methods.
We propose possible important directions and challenges to be solved in the future.
arXiv Detail & Related papers (2022-04-13T01:25:58Z)
- Jump-Start Reinforcement Learning [68.82380421479675]
We present a meta algorithm that can use offline data, demonstrations, or a pre-existing policy to initialize an RL policy.
In particular, we propose Jump-Start Reinforcement Learning (JSRL), an algorithm that employs two policies to solve tasks.
We show via experiments that JSRL is able to significantly outperform existing imitation and reinforcement learning algorithms.
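The two-policy mechanism is easy to sketch: a guide policy (from demonstrations or a prior policy) drives the first h steps of each episode, the learning policy finishes it, and h is annealed toward zero as the learner improves. The Gym-style environment API below is an assumption, not the paper's code.

```python
# Jump-start rollout sketch: guide for the first h steps, learner after.
def jump_start_rollout(env, guide, learner, h):
    obs, _ = env.reset()
    trajectory, done, t = [], False, 0
    while not done:
        policy = guide if t < h else learner
        action = policy(obs)
        next_obs, rew, term, trunc, _ = env.step(action)
        trajectory.append((obs, action, rew, next_obs))
        obs, done, t = next_obs, term or trunc, t + 1
    return trajectory   # used to update `learner`; shrink h on success
```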
arXiv Detail & Related papers (2022-04-05T17:25:22Z)
- The Challenges of Exploration for Offline Reinforcement Learning [8.484491887821473]
We study the two processes of reinforcement learning: collecting informative experience and inferring optimal behaviour.
The task-agnostic setting for data collection, where the task is not known a priori, is of particular interest.
We use this decoupled framework to strengthen intuitions about exploration and the data prerequisites for effective offline RL.
arXiv Detail & Related papers (2022-01-27T23:59:56Z)
- Information Directed Reward Learning for Reinforcement Learning [64.33774245655401]
We learn a model of the reward function that allows standard RL algorithms to achieve high expected return with as few expert queries as possible.
In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types.
We support our findings with extensive evaluations in multiple environments and with different types of queries.
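One cheap proxy for information-directed selection is to keep an ensemble of reward models and query where they disagree most; this stands in for, and is simpler than, the paper's actual criterion.

```python
# Illustrative query selection via ensemble disagreement.
import numpy as np

def select_query(candidates, reward_ensemble):
    # candidates: (N, d) query features; reward_ensemble: list of callables.
    preds = np.stack([m(candidates) for m in reward_ensemble])  # (E, N)
    return int(np.argmax(preds.std(axis=0)))   # most-disagreed-upon query
```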
arXiv Detail & Related papers (2021-02-24T18:46:42Z)
- Distributed Deep Reinforcement Learning: An Overview [0.0]
In this article, we provide a survey of the role of distributed approaches in deep reinforcement learning (DRL).
We overview the state of the field by studying the key research works that have had a significant impact on how distributed methods can be used in DRL.
Also, we evaluate these methods on different tasks and compare their performance with each other and with single-actor, single-learner agents.
arXiv Detail & Related papers (2020-11-22T13:24:35Z)
- Reannealing of Decaying Exploration Based On Heuristic Measure in Deep Q-Network [82.20059754270302]
We propose an algorithm based on the idea of reannealing that aims to encourage exploration only when it is needed.
We perform an illustrative case study showing that it has potential to both accelerate training and obtain a better policy.
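As a sketch of the idea, a decaying epsilon-greedy schedule can be re-annealed whenever a progress measure stalls; the stall test below (comparing moving-average returns) is an assumed stand-in for the paper's heuristic measure.

```python
# Epsilon decay with heuristic reannealing (illustrative).
def update_epsilon(eps, returns, eps_min=0.05, decay=0.995,
                   reanneal_to=0.5, window=50, tol=1e-3):
    eps = max(eps * decay, eps_min)
    if len(returns) >= 2 * window:
        recent = sum(returns[-window:]) / window
        older = sum(returns[-2 * window:-window]) / window
        if recent - older < tol:   # progress stalled: explore again
            eps = max(eps, reanneal_to)
    return eps
```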
arXiv Detail & Related papers (2020-09-29T20:40:00Z)
- PBCS: Efficient Exploration and Exploitation Using a Synergy between Reinforcement Learning and Motion Planning [8.176152440971897]
"Plan, Backplay, Chain Skills" combines motion planning and reinforcement learning to solve hard exploration environments.
We show that this method outperforms state-of-the-art RL algorithms in 2D maze environments of various sizes.
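The "Backplay" ingredient can be sketched as a reverse curriculum over a trajectory found by the motion planner: episodes start near the goal, and the start point moves backwards as the agent succeeds. The `reset_to` hook is an assumption; standard Gym environments do not expose one.

```python
# Reverse-curriculum sketch for the Backplay phase (illustrative).
def backplay_start(env, planned_states, start_idx):
    env.reset()
    env.reset_to(planned_states[start_idx])   # assumed environment hook
    return env

def advance_curriculum(start_idx, success_rate, threshold=0.8, shift=5):
    # Once the agent reliably reaches the goal from the current start,
    # move the start earlier along the planned trajectory.
    return max(start_idx - shift, 0) if success_rate >= threshold else start_idx
```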
arXiv Detail & Related papers (2020-04-24T11:37:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.