Hint assisted reinforcement learning: an application in radio astronomy
- URL: http://arxiv.org/abs/2301.03933v1
- Date: Tue, 10 Jan 2023 12:24:13 GMT
- Title: Hint assisted reinforcement learning: an application in radio astronomy
- Authors: Sarod Yatawatta
- Abstract summary: We propose to use hints generated by the environment as an aid to the reinforcement learning process, mitigating the complexity of model construction.
Results in several environments show increased sample efficiency when using hints compared to model-free methods.
- Score: 2.4366811507669115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-based reinforcement learning has proven to be more sample efficient than model-free methods. On the other hand, constructing the dynamics model in model-based reinforcement learning adds complexity. Data processing tasks in radio astronomy are one such situation, where the original problem being solved by reinforcement learning is itself the creation of a model. Fortunately, many methods based on heuristics or signal processing already exist to perform the same tasks, and we can leverage them to propose the best action to take, or in other words, to provide a `hint'. We propose to use `hints' generated by the environment as an aid to the reinforcement learning process, mitigating the complexity of model construction. We modify the soft actor-critic algorithm to use hints and use the alternating direction method of multipliers (ADMM) algorithm with inequality constraints to train the agent. Results in several environments show increased sample efficiency when using hints compared to model-free methods.
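The abstract describes hints entering training as inequality constraints handled with ADMM. As a rough, hypothetical illustration only (the paper's exact formulation is not reproduced here), the sketch below shows one way a hint could enter a soft actor-critic style actor loss as the constraint ||a - a_hint||^2 <= eps, enforced with an augmented-Lagrangian multiplier; all names, the penalty form, and the hyperparameters are assumptions.

```python
# Hypothetical sketch: a hint-constrained SAC-style actor update.
# Constraint: ||a - a_hint||^2 <= eps, enforced with an
# augmented-Lagrangian (ADMM-flavoured) multiplier `lam`.
import torch

def hint_constrained_actor_loss(actor, critic, state, hint_action,
                                lam, eps=0.05, rho=10.0, alpha=0.2):
    mean, log_std = actor(state)
    dist = torch.distributions.Normal(mean, log_std.exp())
    action = dist.rsample()                       # reparameterised sample
    log_prob = dist.log_prob(action).sum(-1)
    q_value = critic(state, action)
    # Standard SAC objective: maximise Q plus policy entropy.
    sac_loss = (alpha * log_prob - q_value).mean()
    # Inequality constraint g = ||a - a_hint||^2 - eps <= 0.
    g = ((action - hint_action) ** 2).sum(-1) - eps
    g_plus = torch.clamp(g, min=0.0)              # violation only
    penalty = lam * g_plus + 0.5 * rho * g_plus ** 2
    return sac_loss + penalty.mean(), g.detach()

def update_multiplier(lam, g, rho=10.0):
    # Dual ascent: grow lam while the hint constraint is violated.
    return max(0.0, lam + rho * float(g.mean()))
```

One plausible schedule is to keep eps small early so the agent stays near the heuristic hint, then relax it so the learned policy can depart from the hint once its Q-values justify doing so.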
Related papers
- Model-based Policy Optimization using Symbolic World Model [46.42871544295734]
The application of learning-based control methods in robotics presents significant challenges.
One challenge is that model-free reinforcement learning algorithms use observation data with low sample efficiency.
We suggest approximating transition dynamics with symbolic expressions, which are generated via symbolic regression.
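As a hedged sketch of the general idea (not this paper's pipeline), the snippet below fits a symbolic expression for one next-state dimension from toy transitions using the gplearn library; the toy dynamics, hyperparameters, and choice of library are assumptions.

```python
# Hypothetical sketch: symbolic regression of transition dynamics
# s' = f(s, a) on toy data (assumes gplearn is installed).
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
s = rng.uniform(-1, 1, size=(2000, 2))   # toy state: (position, velocity)
a = rng.uniform(-1, 1, size=(2000, 1))   # toy action
X = np.hstack([s, a])
next_pos = s[:, 0] + 0.1 * s[:, 1]       # ground-truth toy dynamics

# One regressor per next-state dimension; shown for position only.
model = SymbolicRegressor(population_size=500, generations=10,
                          function_set=("add", "sub", "mul"),
                          parsimony_coefficient=0.01, random_state=0)
model.fit(X, next_pos)
print(model._program)                     # e.g. add(X0, mul(0.1, X1))
```

A compact closed-form model like this can then be rolled out cheaply inside a model-based policy-optimization loop.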
arXiv Detail & Related papers (2024-07-18T13:49:21Z)
- RLIF: Interactive Imitation Learning as Reinforcement Learning [56.997263135104504]
We show how off-policy reinforcement learning can enable improved performance under assumptions that are similar but potentially even more practical than those of interactive imitation learning.
Our proposed method uses reinforcement learning with user intervention signals themselves as rewards.
This relaxes the assumption that intervening experts in interactive imitation learning should be near-optimal and enables the algorithm to learn behaviors that improve over a potentially suboptimal human expert.
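A minimal sketch of that relabeling idea, assuming each logged step carries a per-step intervention flag (the field names and the -1 reward scale are hypothetical choices, not the paper's exact setup):

```python
# Simplified sketch of the RLIF idea: the only reward signal is the
# user intervention itself (reward -1 when the expert takes over).
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Transition:
    obs: Any
    action: Any
    reward: float
    next_obs: Any
    done: bool

def relabel_with_interventions(trajectory: List[dict]) -> List[Transition]:
    """Each step is a dict with 'obs', 'action', 'next_obs', 'intervened'."""
    out = []
    for t, step in enumerate(trajectory):
        reward = -1.0 if step["intervened"] else 0.0
        # The agent's segment ends where the human intervenes.
        done = step["intervened"] or t == len(trajectory) - 1
        out.append(Transition(step["obs"], step["action"], reward,
                              step["next_obs"], done))
    return out
```

Any off-policy reinforcement learning algorithm can then train on the relabeled transitions.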
arXiv Detail & Related papers (2023-11-21T21:05:21Z)
- RetICL: Sequential Retrieval of In-Context Examples with Reinforcement Learning [53.52699766206808]
We propose Retrieval for In-Context Learning (RetICL), a learnable method for modeling and optimally selecting examples sequentially for in-context learning.
We evaluate RetICL on math word problem solving and scientific question answering tasks and show that it consistently outperforms or matches learnable baselines.
arXiv Detail & Related papers (2023-05-23T20:15:56Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Compared with the state-of-the-art baselines, our model's network parameters are reduced to only 37%, and the solution gap towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Deep Active Learning with Noise Stability [24.54974925491753]
Uncertainty estimation for unlabeled data is crucial to active learning.
We propose a novel algorithm that leverages noise stability to estimate data uncertainty.
Our method is generally applicable in various tasks, including computer vision, natural language processing, and structural data analysis.
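As a hedged illustration of a noise-stability-style score (a simplification, not the paper's exact algorithm), one can perturb the network weights with small Gaussian noise and rank unlabeled samples by how far their outputs move:

```python
# Hypothetical sketch: uncertainty via noise stability. Samples whose
# outputs change most under small weight perturbations are treated as
# the most uncertain and queried first in active learning.
import copy
import torch

@torch.no_grad()
def noise_stability_scores(model, x, n_draws=8, sigma=0.01):
    base = model(x)
    scores = torch.zeros(x.shape[0])
    for _ in range(n_draws):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            # Noise scaled to each layer's typical weight magnitude.
            p.add_(sigma * p.abs().mean() * torch.randn_like(p))
        scores += (noisy(x) - base).pow(2).sum(dim=-1)
    return scores / n_draws   # higher = less stable = more uncertain

# Usage (hypothetical): label the k least stable pool samples next.
# query_idx = noise_stability_scores(net, pool_x).topk(k).indices
```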
arXiv Detail & Related papers (2022-05-26T13:21:01Z)
- Maximum Entropy Model-based Reinforcement Learning [0.0]
This work connects exploration techniques and model-based reinforcement learning.
We have designed a novel exploration method that takes into account features of the model-based approach.
We also demonstrate through experiments that our method significantly improves the performance of the model-based algorithm Dreamer.
arXiv Detail & Related papers (2021-12-02T13:07:29Z)
- DEALIO: Data-Efficient Adversarial Learning for Imitation from Observation [57.358212277226315]
In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior, without access to the control signals generated by the demonstrator.
Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms.
This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk.
We propose a more data-efficient IfO algorithm.
arXiv Detail & Related papers (2021-03-31T23:46:32Z)
- Sample-efficient reinforcement learning using deep Gaussian processes [18.044018772331636]
Reinforcement learning provides a framework for learning, through trial and error, which actions to take to complete a task.
In model-based reinforcement learning, efficiency is improved by learning to simulate the world dynamics.
We introduce deep Gaussian processes, in which the depth of the composition provides model complexity, while incorporating prior knowledge of the dynamics brings smoothness and structure.
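As a hedged toy illustration of that compositional depth (not the paper's inference scheme), the snippet below draws a function from a two-layer deep GP prior by feeding one GP sample through the kernel of a second; the kernel, lengthscale, and grid are assumptions.

```python
# Toy sketch: sample from a two-layer deep Gaussian process prior.
import numpy as np

def rbf_kernel(x, lengthscale=0.3):
    # Squared-exponential kernel over a 1-D input vector.
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 100)
jitter = 1e-6 * np.eye(100)
# Layer 1: h ~ GP(0, k) evaluated on the grid x.
h = rng.multivariate_normal(np.zeros(100), rbf_kernel(x) + jitter)
# Layer 2: f ~ GP(0, k) evaluated on the warped inputs h, i.e. f(h(x)).
f = rng.multivariate_normal(np.zeros(100), rbf_kernel(h) + jitter)
```

Stacking the layers lets the composed prior express non-stationary behaviour that a single GP with this kernel cannot.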
arXiv Detail & Related papers (2020-11-02T13:37:57Z)
- Model-Agnostic Explanations using Minimal Forcing Subsets [11.420687735660097]
We propose a new model-agnostic algorithm to identify a minimal set of training samples that are indispensable for a given model's decision.
Our algorithm identifies such a set of "indispensable" samples iteratively by solving a constrained optimization problem.
Results show that our algorithm is an effective and easy-to-comprehend tool that helps to better understand local model behavior.
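The paper solves a constrained optimization problem; the snippet below is only a simplified greedy stand-in that conveys the "indispensable samples" idea, with a scikit-learn logistic regression as an assumed stand-in model.

```python
# Simplified greedy stand-in for the minimal-forcing-subset idea: drop
# training points one at a time and keep only those whose removal would
# change the model's decision on the query point.
import numpy as np
from sklearn.linear_model import LogisticRegression

def greedy_forcing_subset(X, y, x_query):
    """Greedy pruning: X (n, d) array, y (n,) labels, x_query (d,) point."""
    target = LogisticRegression().fit(X, y).predict([x_query])[0]
    keep = list(range(len(X)))
    for i in range(len(X)):
        trial = [j for j in keep if j != i]
        if len(set(y[trial])) < 2:
            continue                  # refitting needs both classes present
        pred = LogisticRegression().fit(X[trial], y[trial]).predict([x_query])[0]
        if pred == target:
            keep = trial              # sample i was dispensable
    return keep                       # indices of "indispensable" samples

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1, 1, (20, 2)), rng.normal(-1, 1, (20, 2))])
y = np.array([1] * 20 + [0] * 20)
print(greedy_forcing_subset(X, y, np.array([0.2, 0.4])))
```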
arXiv Detail & Related papers (2020-11-01T22:45:16Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
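As a hedged toy sketch of the pathwise idea (module sizes, the reward function, and the horizon are assumptions): unroll the learned differentiable model for a few steps and backpropagate the imagined return directly into the policy.

```python
# Toy sketch: pathwise policy gradient through a differentiable model.
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
dynamics = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 3))
reward_fn = lambda s, a: -(s ** 2).sum(-1) - 0.1 * (a ** 2).sum(-1)

def pathwise_policy_loss(s0, horizon=5, gamma=0.99):
    s, ret = s0, 0.0
    for t in range(horizon):
        a = policy(s)
        ret = ret + (gamma ** t) * reward_fn(s, a)
        s = dynamics(torch.cat([s, a], dim=-1))  # gradient flows through s
    return -ret.mean()                           # maximise model-based return

opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
loss = pathwise_policy_loss(torch.randn(64, 3))
opt.zero_grad()
loss.backward()
opt.step()
```

Treating the model as a differentiable path, rather than a black-box simulator, is what lets the policy gradient flow through imagined states.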
This list is automatically generated from the titles and abstracts of the papers on this site.