SALSA-RL: Stability Analysis in the Latent Space of Actions for Reinforcement Learning
- URL: http://arxiv.org/abs/2502.15512v1
- Date: Fri, 21 Feb 2025 15:09:39 GMT
- Title: SALSA-RL: Stability Analysis in the Latent Space of Actions for Reinforcement Learning
- Authors: Xuyang Li, Romit Maulik
- Abstract summary: We propose SALSA-RL (Stability Analysis in the Latent Space of Actions), a novel RL framework that models control actions as dynamic, time-dependent variables evolving within a latent space. By employing a pre-trained encoder-decoder and a state-dependent linear system, our approach enables both stability analysis and interpretability.
- Score: 2.7075926292355286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep reinforcement learning (DRL) methods have made significant advances in handling continuous action spaces. However, real-world control systems, especially those requiring precise and reliable performance, often demand formal stability guarantees, and existing DRL approaches typically lack explicit mechanisms to ensure or analyze stability. To address this limitation, we propose SALSA-RL (Stability Analysis in the Latent Space of Actions), a novel RL framework that models control actions as dynamic, time-dependent variables evolving within a latent space. By employing a pre-trained encoder-decoder and a state-dependent linear system, our approach enables both stability analysis and interpretability. We demonstrate that SALSA-RL can be deployed in a non-invasive manner to assess the local stability of actions from pretrained RL agents without compromising performance across diverse benchmark environments. By enabling a more interpretable analysis of action generation, SALSA-RL provides a powerful tool for advancing the design, analysis, and theoretical understanding of RL systems.
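The abstract describes the mechanism only at a high level: actions live in a latent space that evolves under a state-dependent linear system, so local stability can be probed through the spectrum of that system's transition matrix. The sketch below illustrates this kind of check on a toy model; the latent dynamics z_{t+1} = A(s_t) z_t, the stand-in for a learned A, and all dimensions are assumptions, not the authors' implementation.

```python
# Minimal sketch of the kind of analysis the SALSA-RL abstract describes:
# actions evolve in a latent space under a state-dependent linear map,
# z_{t+1} = A(s_t) z_t, so local stability at a state can be probed via
# the spectral radius of A(s_t). All names and shapes are illustrative
# assumptions, not the authors' implementation.
import numpy as np

latent_dim = 4
state_dim = 3
rng = np.random.default_rng(0)

# Hypothetical stand-in for a learned state-dependent matrix A(s):
# a fixed base matrix plus a state-dependent perturbation.
W = rng.normal(scale=0.1, size=(latent_dim, latent_dim, state_dim))
A0 = 0.9 * np.eye(latent_dim)

def A(state: np.ndarray) -> np.ndarray:
    """State-dependent transition matrix of the latent action dynamics."""
    return A0 + W @ state

def spectral_radius(M: np.ndarray) -> float:
    return float(np.max(np.abs(np.linalg.eigvals(M))))

def locally_stable(state: np.ndarray) -> bool:
    # For a frozen state s, the discrete-time system z_{t+1} = A(s) z_t is
    # asymptotically stable iff all eigenvalues of A(s) lie strictly
    # inside the unit circle.
    return spectral_radius(A(state)) < 1.0

state = rng.normal(size=state_dim)
print(f"rho(A(s)) = {spectral_radius(A(state)):.3f}, "
      f"locally stable: {locally_stable(state)}")
```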
Related papers
- Agentic Reinforced Policy Optimization [66.96989268893932]
Large-scale reinforcement learning with verifiable rewards (RLVR) has demonstrated its effectiveness in harnessing the potential of large language models (LLMs) for single-turn reasoning tasks. Current RL algorithms inadequately balance the models' intrinsic long-horizon reasoning capabilities and their proficiency in multi-turn tool interactions. We propose Agentic Reinforced Policy Optimization (ARPO), a novel agentic RL algorithm tailored for training multi-turn LLM-based agents.
arXiv Detail & Related papers (2025-07-26T07:53:11Z)
- RL-DAUNCE: Reinforcement Learning-Driven Data Assimilation with Uncertainty-Aware Constrained Ensembles [1.609702184777697]
We develop RL-DAUNCE, a new RL-based method that enhances data assimilation with physical constraints. First, RL-DAUNCE inherits the computational efficiency of machine learning. Second, RL-DAUNCE emphasizes uncertainty by advancing multiple ensemble members. Third, RL-DAUNCE's ensemble-as-agents design facilitates the enforcement of physical constraints.
arXiv Detail & Related papers (2025-05-08T17:43:35Z)
- Offline Robotic World Model: Learning Robotic Policies without a Physics Simulator [50.191655141020505]
Reinforcement Learning (RL) has demonstrated impressive capabilities in robotic control but remains challenging due to high sample complexity, safety concerns, and the sim-to-real gap.
We introduce Offline Robotic World Model (RWM-O), a model-based approach that explicitly estimates uncertainty to improve policy learning without reliance on a physics simulator.
arXiv Detail & Related papers (2025-04-23T12:58:15Z)
- xSRL: Safety-Aware Explainable Reinforcement Learning -- Safety as a Product of Explainability [8.016667413960995]
We propose xSRL, a framework that integrates both local and global explanations to provide a comprehensive understanding of RL agents' behavior. xSRL also enables developers to identify policy vulnerabilities through adversarial attacks, offering tools to debug and patch agents without retraining. Our experiments and user studies demonstrate xSRL's effectiveness in increasing safety in RL systems, making them more reliable and trustworthy for real-world deployment.
arXiv Detail & Related papers (2024-12-26T18:19:04Z)
- ActSafe: Active Exploration with Safety Constraints for Reinforcement Learning [48.536695794883826]
We present ActSafe, a novel model-based RL algorithm for safe and efficient exploration. We show that ActSafe guarantees safety during learning while also obtaining a near-optimal policy in finite time. In addition, we propose a practical variant of ActSafe that builds on the latest model-based RL advancements.
arXiv Detail & Related papers (2024-10-12T10:46:02Z)
- Symmetric Reinforcement Learning Loss for Robust Learning on Diverse Tasks and Model Scales [13.818149654692863]
Reinforcement learning (RL) training is inherently unstable due to factors such as moving targets and high gradient variance.
In this work, we improve the stability of RL training by adapting reverse cross-entropy (RCE), a technique from supervised learning on noisy data, to define a symmetric RL loss.
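The abstract names the ingredients but not the exact form. As a rough illustration, one standard way to combine forward and reverse cross-entropy into a symmetric loss, borrowing the symmetric cross-entropy construction from noisy-label learning, is sketched below; the mixing weights and the clamp standing in for log(0) are assumptions, not values from the paper.

```python
# Sketch of a symmetric loss built from forward cross-entropy (CE) and
# reverse cross-entropy (RCE), in the spirit the abstract describes.
# The mixing weights alpha/beta and the log_zero clamp are illustrative
# assumptions, not the paper's choices.
import torch
import torch.nn.functional as F

def symmetric_ce_loss(logits, target, alpha=0.5, beta=0.5, log_zero=-4.0):
    """alpha * CE(p, q) + beta * RCE(p, q) with hard targets.

    logits: (batch, num_actions) unnormalized scores defining q
    target: (batch,) integer class/action indices defining one-hot p
    """
    ce = F.cross_entropy(logits, target)           # -sum_k p_k log q_k
    q = F.softmax(logits, dim=-1)
    p = F.one_hot(target, logits.shape[-1]).float()
    # log p is -inf off the target class; clamp it to a finite constant,
    # as is done in symmetric cross-entropy for noisy labels.
    log_p = torch.clamp(torch.log(p), min=log_zero)
    rce = -(q * log_p).sum(dim=-1).mean()          # -sum_k q_k log p_k
    return alpha * ce + beta * rce

logits = torch.randn(8, 4, requires_grad=True)
target = torch.randint(0, 4, (8,))
loss = symmetric_ce_loss(logits, target)
loss.backward()
print(float(loss))
```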
arXiv Detail & Related papers (2024-05-27T19:28:33Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive characterization of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can compromise the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Control invariant set enhanced safe reinforcement learning: improved sampling efficiency, guaranteed stability and robustness [0.0]
This work proposes a novel approach to RL training, called control invariant set (CIS) enhanced RL.
The robustness of the proposed approach is investigated in the presence of uncertainty.
Results show a significant improvement in sampling efficiency during offline training and closed-loop stability guarantee in the online implementation.
arXiv Detail & Related papers (2023-05-24T22:22:19Z)
- Control invariant set enhanced reinforcement learning for process control: improved sampling efficiency and guaranteed stability [0.0]
This work proposes a novel approach to RL training, called control invariant set (CIS) enhanced RL.
The approach consists of two learning stages: offline and online.
The results show a significant improvement in sampling efficiency during offline training and closed-loop stability in the online implementation.
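Neither CIS abstract spells out the online mechanism. One common way such set-based guarantees are realized is an action shield that predicts the successor state and rejects RL actions that would leave the invariant set; the toy sketch below follows that assumption, with the set, the one-step model, and the backup action all as illustrative stand-ins.

```python
# Generic sketch of set-based action shielding, one common realization of
# control-invariant-set safety: before applying the RL action, check that
# the predicted successor state stays inside the CIS, otherwise fall back
# to a known safe action. Set, dynamics, and backup policy are toy stand-ins.
import numpy as np

def in_cis(state: np.ndarray) -> bool:
    # Toy CIS: a box around the origin.
    return bool(np.all(np.abs(state) <= 1.0))

def step_model(state: np.ndarray, action: float) -> np.ndarray:
    # Toy one-step predictor of the plant.
    return 0.95 * state + np.array([0.1, 0.0]) * action

def shielded_action(state, rl_action, backup_action=0.0):
    """Apply the RL action only if it keeps the state in the CIS."""
    if in_cis(step_model(state, rl_action)):
        return rl_action
    return backup_action

s = np.array([0.9, 0.2])
print(shielded_action(s, rl_action=2.0))   # rejected, falls back to 0.0
print(shielded_action(s, rl_action=0.1))   # accepted as-is
```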
arXiv Detail & Related papers (2023-04-11T21:27:36Z)
- Reachability Verification Based Reliability Assessment for Deep Reinforcement Learning Controlled Robotics and Autonomous Systems [17.679681019347065]
Deep Reinforcement Learning (DRL) has achieved impressive performance in robotics and autonomous systems (RAS).
A key challenge to its deployment in real-life operations is the presence of spuriously unsafe DRL policies.
This paper proposes a novel quantitative reliability assessment framework for DRL-controlled RAS.
arXiv Detail & Related papers (2022-10-26T19:25:46Z)
- LCRL: Certified Policy Synthesis via Logically-Constrained Reinforcement Learning [78.2286146954051]
LCRL implements model-free Reinforcement Learning (RL) algorithms over unknown Markov Decision Processes (MDPs).
We present case studies to demonstrate the applicability, ease of use, scalability, and performance of LCRL.
arXiv Detail & Related papers (2022-09-21T13:21:00Z)
- RORL: Robust Offline Reinforcement Learning via Conservative Smoothing [72.8062448549897]
Offline reinforcement learning can exploit the massive amount of offline data for complex decision-making tasks.
Current offline RL algorithms are generally designed to be conservative for value estimation and action selection.
We propose Robust Offline Reinforcement Learning (RORL) with a novel conservative smoothing technique.
arXiv Detail & Related papers (2022-06-06T18:07:41Z)
- KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems [66.9461097311667]
We propose a model-based reinforcement learning framework with formal stability guarantees.
The proposed method learns the system dynamics up to a confidence interval using feature representation.
We show that KCRL is guaranteed to learn a stabilizing policy in a finite number of interactions with the underlying unknown system.
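The abstract does not define the Krasovskii constraint itself, but Krasovskii's method is standard control theory: for dynamics dx/dt = f(x), take the candidate Lyapunov function V(x) = f(x)^T P f(x); V decreases along trajectories wherever J(x)^T P + P J(x) is negative definite, with J the Jacobian of f. Below is a numeric spot-check of that condition on a toy system; f, P, and the sampled states are assumptions.

```python
# Numeric spot-check of Krasovskii's stability condition on a toy system:
# for dx/dt = f(x) and V(x) = f(x)^T P f(x), V decreases wherever
# M(x) = J(x)^T P + P J(x) is negative definite (J = Jacobian of f).
import numpy as np

P = np.eye(2)

def f(x: np.ndarray) -> np.ndarray:
    # Toy closed-loop dynamics with a stabilizing linear part.
    return np.array([-x[0] + 0.1 * np.sin(x[1]), -0.5 * x[1]])

def jacobian(x: np.ndarray) -> np.ndarray:
    return np.array([[-1.0, 0.1 * np.cos(x[1])],
                     [0.0, -0.5]])

def krasovskii_holds(x: np.ndarray) -> bool:
    M = jacobian(x).T @ P + P @ jacobian(x)
    # M is symmetric, so negative definiteness reduces to the largest
    # eigenvalue being strictly negative.
    return bool(np.max(np.linalg.eigvalsh(M)) < 0.0)

rng = np.random.default_rng(1)
samples = rng.uniform(-2.0, 2.0, size=(1000, 2))
print(all(krasovskii_holds(x) for x in samples))
```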
arXiv Detail & Related papers (2022-06-03T17:27:04Z)
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider value-based and policy-gradient Deep Reinforcement Learning (DRL) approaches.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z)
- Dependability Analysis of Deep Reinforcement Learning based Robotics and Autonomous Systems [10.499662874457998]
The black-box nature of Deep Reinforcement Learning (DRL) and the uncertain deployment environments of robotics pose new challenges to its dependability.
In this paper, we define a set of dependability properties in temporal logic and construct a Discrete-Time Markov Chain (DTMC) to model the dynamics of risk/failures of a DRL-driven RAS.
Our experimental results show that the proposed method is effective as a holistic assessment framework, while uncovering conflicts between properties that may require trade-offs during training.
arXiv Detail & Related papers (2021-09-14T08:42:29Z)
- Lyapunov-based uncertainty-aware safe reinforcement learning [0.0]
Reinforcement learning (RL) has shown promising performance in learning optimal policies for a variety of sequential decision-making tasks.
In many real-world RL problems, besides optimizing the main objectives, the agent is expected to satisfy a certain level of safety.
We propose a Lyapunov-based uncertainty-aware safe RL model to address these limitations.
arXiv Detail & Related papers (2021-07-29T13:08:15Z)
- Uncertainty Weighted Actor-Critic for Offline Reinforcement Learning [63.53407136812255]
Offline Reinforcement Learning promises to learn effective policies from previously collected, static datasets without the need for exploration.
Existing Q-learning and actor-critic based off-policy RL algorithms fail when bootstrapping from out-of-distribution (OOD) actions or states.
We propose Uncertainty Weighted Actor-Critic (UWAC), an algorithm that detects OOD state-action pairs and down-weights their contribution in the training objectives accordingly.
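The abstract states the idea (detect OOD pairs and down-weight them in training) without the exact estimator or weighting scheme. The sketch below shows one plausible reading, using dropout-based variance as the uncertainty estimate and inverse-variance weights; both choices are assumptions rather than the paper's exact method.

```python
# Sketch of uncertainty-weighted bootstrapping in the spirit of UWAC:
# estimate epistemic uncertainty of the Q target (here via variance across
# stochastic dropout forward passes) and down-weight uncertain, likely-OOD
# state-action pairs in the critic loss. Estimator and weighting are
# illustrative assumptions.
import torch

def target_q_stats(q_net, next_sa, n_samples=10):
    """Mean and variance of Q targets under dropout-based MC sampling."""
    q_net.train()  # keep dropout active so repeated passes differ
    with torch.no_grad():
        qs = torch.stack([q_net(next_sa).squeeze(-1)
                          for _ in range(n_samples)])
    return qs.mean(0), qs.var(0)

def weighted_critic_loss(q_pred, reward, q_mean, q_var, gamma=0.99, beta=1.0):
    target = reward + gamma * q_mean
    weight = beta / (q_var + 1e-6)      # down-weight uncertain targets
    weight = weight / weight.mean()     # normalize weights within the batch
    return (weight.detach() * (q_pred - target.detach()) ** 2).mean()

q_net = torch.nn.Sequential(torch.nn.Linear(6, 64), torch.nn.ReLU(),
                            torch.nn.Dropout(0.1), torch.nn.Linear(64, 1))
next_sa = torch.randn(32, 6)
q_mean, q_var = target_q_stats(q_net, next_sa)
q_pred = torch.randn(32, requires_grad=True)
loss = weighted_critic_loss(q_pred, torch.randn(32), q_mean, q_var)
loss.backward()
print(float(loss))
```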
arXiv Detail & Related papers (2021-05-17T20:16:46Z)
- Combining Pessimism with Optimism for Robust and Efficient Model-Based Deep Reinforcement Learning [56.17667147101263]
In real-world tasks, reinforcement learning agents encounter situations that are not present during training time.
To ensure reliable performance, the RL agents need to exhibit robustness against worst-case situations.
We propose the Robust Hallucinated Upper-Confidence RL (RH-UCRL) algorithm to provably solve this problem.
arXiv Detail & Related papers (2021-03-18T16:50:17Z)
- Predictive Coding for Locally-Linear Control [92.35650774524399]
High-dimensional observations and unknown dynamics are major challenges when applying optimal control to many real-world decision making tasks.
The Learning Controllable Embedding (LCE) framework addresses these challenges by embedding the observations into a lower dimensional latent space.
We show theoretically that explicit next-observation prediction can be replaced with predictive coding.
arXiv Detail & Related papers (2020-03-02T18:20:41Z)
- Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained proximal policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum, resulting in optimal yet physically feasible robotic control behavior without the need for precise reward function tuning.
arXiv Detail & Related papers (2020-02-22T10:15:53Z)