Synthesizing Programmatic Policies with Actor-Critic Algorithms and ReLU
Networks
- URL: http://arxiv.org/abs/2308.02729v1
- Date: Fri, 4 Aug 2023 22:17:32 GMT
- Title: Synthesizing Programmatic Policies with Actor-Critic Algorithms and ReLU
Networks
- Authors: Spyros Orfanos and Levi H. S. Lelis
- Abstract summary: Programmatically Interpretable Reinforcement Learning (PIRL) encodes policies in human-readable computer programs.
In this paper, we show that PIRL-specific algorithms are not needed, depending on the language used to encode the programmatic policies.
We use a connection between ReLU neural networks and oblique decision trees to translate the policy learned with actor-critic algorithms into programmatic policies.
- Score: 20.2777559515384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Programmatically Interpretable Reinforcement Learning (PIRL) encodes policies
in human-readable computer programs. Novel algorithms were recently introduced
with the goal of handling the lack of gradient signal to guide the search in
the space of programmatic policies. Most of such PIRL algorithms first train a
neural policy that is used as an oracle to guide the search in the programmatic
space. In this paper, we show that such PIRL-specific algorithms are not
needed, depending on the language used to encode the programmatic policies.
This is because one can use actor-critic algorithms to directly obtain a
programmatic policy. We use a connection between ReLU neural networks and
oblique decision trees to translate the policy learned with actor-critic
algorithms into programmatic policies. This translation from ReLU networks
allows us to synthesize policies encoded in programs with if-then-else
structures, linear transformations of the input values, and PID operations.
Empirical results on several control problems show that this translation
approach is capable of learning short and effective policies. Moreover, the
translated policies are at least competitive and often far superior to the
policies PIRL algorithms synthesize.
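To make the ReLU-to-program correspondence concrete, here is a minimal sketch (assumptions: one hidden ReLU layer, a scalar action, and illustrative names; this is not the authors' code, and the PID operations mentioned above are not covered). A network of this form is piecewise affine: once the on/off state of every hidden unit is fixed, the output is a single linear expression of the observation, and each unit's pre-activation sign is exactly an oblique if-then-else condition.

```python
import numpy as np

def relu_policy_to_program(W1, b1, W2, b2, names=None):
    """Unfold a one-hidden-layer ReLU policy network into an equivalent
    nested if-then-else program with oblique (linear) conditions.
    Minimal sketch of the ReLU-network / oblique-tree correspondence;
    not the paper's implementation."""
    H, d = W1.shape
    names = names or [f"x{i}" for i in range(d)]

    def linexpr(w, b):
        # Render an affine expression w.x + b as program text.
        terms = " ".join(f"{w[i]:+.3f}*{names[i]}" for i in range(d))
        return f"{terms} {b:+.3f}"

    def build(unit, pattern, indent):
        if unit == H:                      # every unit's on/off state is fixed:
            p = np.array(pattern)          # the network reduces to one affine map
            a = (W2 * p) @ W1              # shape (1, d)
            c = ((W2 * p) @ b1 + b2).item()
            return f"{indent}return {linexpr(a[0], c)}\n"
        # Branch on whether hidden unit `unit` is active (pre-activation > 0).
        cond = linexpr(W1[unit], float(b1[unit]))
        on = build(unit + 1, pattern + [1.0], indent + "    ")
        off = build(unit + 1, pattern + [0.0], indent + "    ")
        return (f"{indent}if {cond} > 0:\n{on}"
                f"{indent}else:\n{off}")

    return "def policy(" + ", ".join(names) + "):\n" + build(0, [], "    ")

# Tiny example: 2 hidden units, 2 observation features, scalar action.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 2)), rng.normal(size=2)
W2, b2 = rng.normal(size=(1, 2)), rng.normal(size=1)
print(relu_policy_to_program(W1, b1, W2, b2))
```

Printing the example yields a short Python policy built only from linear tests and linear returns, i.e., an oblique decision tree; the paper's translation is more general than this single-layer toy.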
Related papers
- Iteratively Refined Behavior Regularization for Offline Reinforcement
Learning [57.10922880400715]
In this paper, we propose a new algorithm that substantially enhances behavior regularization using conservative policy iteration.
By iteratively refining the reference policy used for behavior regularization, the conservative policy update guarantees gradual improvement.
Experimental results on the D4RL benchmark indicate that our method outperforms previous state-of-the-art baselines in most tasks.
arXiv Detail & Related papers (2023-06-09T07:46:24Z)
- Multi-Objective Policy Gradients with Topological Constraints [108.10241442630289]
We present a new policy gradient algorithm for TMDPs, obtained as a simple extension of the proximal policy optimization (PPO) algorithm.
We demonstrate this on a real-world multiple-objective navigation problem with an arbitrary ordering of objectives both in simulation and on a real robot.
arXiv Detail & Related papers (2022-09-15T07:22:58Z) - Programmatic Policy Extraction by Iterative Local Search [0.15229257192293197]
We present a simple and direct approach to extracting a programmatic policy from a pretrained neural policy.
Whether guided by a hand-crafted expert policy or by a learned neural policy, our method discovers simple and interpretable policies that perform almost as well as the original.
arXiv Detail & Related papers (2022-01-18T10:39:40Z)
- Learning Optimal Antenna Tilt Control Policies: A Contextual Linear Bandit Approach [65.27783264330711]
Controlling antenna tilts in cellular networks is imperative to reach an efficient trade-off between network coverage and capacity.
We devise algorithms that learn optimal tilt control policies from existing data.
We show that they can produce an optimal tilt update policy using far fewer data samples than naive or existing rule-based learning algorithms.
arXiv Detail & Related papers (2022-01-06T18:24:30Z)
- Learning Robust Policy against Disturbance in Transition Dynamics via State-Conservative Policy Optimization [63.75188254377202]
Deep reinforcement learning algorithms can perform poorly in real-world tasks due to the discrepancy between source and target environments.
We propose State-Conservative Policy Optimization (SCPO), a novel model-free actor-critic algorithm that learns robust policies without modeling the disturbance in advance.
Experiments in several robot control tasks demonstrate that SCPO learns robust policies against the disturbance in transition dynamics.
arXiv Detail & Related papers (2021-12-20T13:13:05Z)
- Neural Network Compatible Off-Policy Natural Actor-Critic Algorithm [16.115903198836694]
Learning optimal behavior from existing data is one of the most important problems in Reinforcement Learning (RL).
This is known as "off-policy control" in RL, where an agent's objective is to compute an optimal policy based on data obtained from a given policy (known as the behavior policy).
This work proposes an off-policy natural actor-critic algorithm that utilizes state-action distribution correction for handling the off-policy behavior and the natural policy gradient for sample efficiency.
arXiv Detail & Related papers (2021-10-19T14:36:45Z)
- Learning to Synthesize Programs as Interpretable and Generalizable Policies [25.258598215642067]
We present a framework that learns to synthesize a program, which details the procedure to solve a task in a flexible and expressive manner.
Experimental results demonstrate that the proposed framework not only learns to reliably synthesize task-solving programs but also outperforms DRL and program synthesis baselines.
arXiv Detail & Related papers (2021-08-31T07:03:06Z)
- Cautious Policy Programming: Exploiting KL Regularization in Monotonic Policy Improvement for Reinforcement Learning [11.82492300303637]
We propose a novel value-based reinforcement learning (RL) algorithm that can ensure monotonic policy improvement during learning.
We demonstrate that the proposed algorithm can trade off performance and stability in both didactic classic control problems and challenging high-dimensional Atari games.
arXiv Detail & Related papers (2021-07-13T01:03:10Z)
- On-Line Policy Iteration for Infinite Horizon Dynamic Programming [0.0]
We propose an on-line policy iteration (PI) algorithm for finite-state infinite horizon discounted dynamic programming.
The algorithm converges in a finite number of stages to a type of locally optimal policy.
It is also well-suited for on-line PI algorithms with value and policy approximations.
arXiv Detail & Related papers (2021-06-01T19:50:22Z)
- Learning Sampling Policy for Faster Derivative Free Optimization [100.27518340593284]
We propose ZO-RL, a reinforcement learning based zeroth-order (ZO) optimization algorithm that learns the sampling policy for generating perturbations in ZO optimization instead of relying on random sampling.
Our results show that ZO-RL can effectively reduce the variance of the ZO gradient estimate by learning a sampling policy, and converges faster than existing ZO algorithms in different scenarios.
arXiv Detail & Related papers (2021-04-09T14:50:59Z)
- Deep Policy Dynamic Programming for Vehicle Routing Problems [89.96386273895985]
We propose Deep Policy Dynamic Programming (DPDP) to combine the strengths of learned neural heuristics with those of dynamic programming algorithms.
DPDP prioritizes and restricts the DP state space using a policy derived from a deep neural network, which is trained to predict edges from example solutions (a toy restricted-DP sketch follows the list below).
We evaluate our framework on the travelling salesman problem (TSP) and the vehicle routing problem (VRP) and show that the neural policy improves the performance of (restricted) DP algorithms.
arXiv Detail & Related papers (2021-02-23T15:33:57Z)
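As a rough illustration of the restricted-DP idea in the DPDP summary above, the following sketch runs an exact Held-Karp dynamic program for a tiny TSP instance while only allowing each node's highest-scoring outgoing edges. Everything here is illustrative: `restricted_held_karp` is a hypothetical name, and the `heat` matrix is random noise standing in for the neural edge predictions that DPDP actually learns.

```python
import itertools
import numpy as np

def restricted_held_karp(dist, heat, k=4):
    """Exact Held-Karp DP for the TSP, with transitions restricted to each
    node's k highest-'heat' outgoing edges -- a toy stand-in for DPDP's idea
    of pruning the DP state space with neural edge scores."""
    n = len(dist)
    heat = np.array(heat, dtype=float)
    np.fill_diagonal(heat, -np.inf)                 # never keep self-loops
    allowed = [set(np.argsort(-heat[i])[:k]) for i in range(n)]

    # dp[(S, j)] = cheapest path starting at node 0, visiting node set S, ending at j.
    dp = {(1 << 0, 0): 0.0}
    for size in range(2, n + 1):
        for subset in itertools.combinations(range(1, n), size - 1):
            S = (1 << 0) | sum(1 << j for j in subset)
            for j in subset:
                best = float("inf")
                for i in [0, *subset]:
                    if i == j or j not in allowed[i]:
                        continue                     # edge i -> j was pruned
                    prev = dp.get((S ^ (1 << j), i))
                    if prev is not None:
                        best = min(best, prev + dist[i][j])
                if best < float("inf"):
                    dp[(S, j)] = best

    full = (1 << n) - 1
    tours = [dp[(full, j)] + dist[j][0]
             for j in range(1, n) if (full, j) in dp and 0 in allowed[j]]
    return min(tours) if tours else None             # None if pruning was too aggressive

# Usage on a random instance; 'heat' is random here, standing in for learned predictions.
rng = np.random.default_rng(0)
pts = rng.random((9, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
heat = -dist + rng.normal(scale=0.05, size=dist.shape)   # hypothetical edge scores
print(restricted_held_karp(dist, heat, k=5))
```

With k large enough the restriction is vacuous and the DP stays exact; shrinking k cuts the state space at the risk of pruning the optimal tour, which is the trade-off that learned edge scores are meant to make favourable.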