Shared Control of Holonomic Wheelchairs through Reinforcement Learning
- URL: http://arxiv.org/abs/2507.17055v1
- Date: Tue, 22 Jul 2025 22:31:11 GMT
- Title: Shared Control of Holonomic Wheelchairs through Reinforcement Learning
- Authors: Jannis Bähler, Diego Paez-Granados, Jorge Peña-Queralta
- Abstract summary: State-of-the-art work showed the potential of shared control in improving safety in navigation for non-holonomic robots. We propose a reinforcement learning-based method, which takes a 2D user input and outputs a 3D motion. We show that our method ensures collision-free navigation while smartly orienting the wheelchair and showing better or competitive smoothness.
- Score: 1.4970676989901233
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Smart electric wheelchairs can improve user experience by supporting the driver with shared control. State-of-the-art work showed the potential of shared control in improving safety in navigation for non-holonomic robots. However, for holonomic systems, current approaches often lead to unintuitive behavior for the user and fail to utilize the full potential of omnidirectional driving. Therefore, we propose a reinforcement learning-based method, which takes a 2D user input and outputs a 3D motion while ensuring user comfort and reducing cognitive load on the driver. Our approach is trained in Isaac Gym and tested in simulation in Gazebo. We compare different RL agent architectures and reward functions based on metrics considering cognitive load and user comfort. We show that our method ensures collision-free navigation while smartly orienting the wheelchair and showing better or competitive smoothness compared to a previous non-learning-based method. We further perform a sim-to-real transfer and demonstrate, to the best of our knowledge, the first real-world implementation of RL-based shared control for an omnidirectional mobility platform.
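To make the abstract's input-output mapping concrete, below is a minimal sketch of one plausible policy interface: a small PyTorch MLP that concatenates the 2D user command with a range scan and outputs a normalized 3D holonomic twist (v_x, v_y, yaw rate). The class name, dimensions, and lidar-style observation are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of an RL shared-control policy for a holonomic wheelchair.
# All names and sizes are assumptions; the paper does not specify this code.
import torch
import torch.nn as nn


class SharedControlPolicy(nn.Module):
    """Maps a 2D user command plus range observations to a 3D holonomic twist."""

    def __init__(self, lidar_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + lidar_dim, hidden),  # 2D joystick input + range scan
            nn.ELU(),
            nn.Linear(hidden, hidden),
            nn.ELU(),
            nn.Linear(hidden, 3),              # (v_x, v_y, yaw rate)
        )

    def forward(self, user_cmd: torch.Tensor, ranges: torch.Tensor) -> torch.Tensor:
        obs = torch.cat([user_cmd, ranges], dim=-1)
        return torch.tanh(self.net(obs))       # normalized twist, scaled downstream


# Example step: the driver pushes the joystick forward-left; the policy returns
# a full (v_x, v_y, omega) command that can also reorient the chair.
policy = SharedControlPolicy(lidar_dim=64)
user_cmd = torch.tensor([[0.8, 0.3]])          # 2D user input
ranges = torch.ones(1, 64)                     # placeholder range scan
twist = policy(user_cmd, ranges)               # shape (1, 3)
```

In a shared-control setting like the one described in the abstract, such a scaled twist would be sent to the omnidirectional base, while the training reward would penalize collisions, deviation from the user's intent, and jerky motion.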
Related papers
- A Systematic Study of Multi-Agent Deep Reinforcement Learning for Safe and Robust Autonomous Highway Ramp Entry [0.0]
We study a highway ramp function that controls the vehicle's forward-moving actions to minimize collisions with the stream of highway traffic into which a merging (ego) vehicle enters. We take a game-theoretic multi-agent (MA) approach to this problem and study the use of controllers based on deep reinforcement learning (DRL). The work presented in this paper extends existing work by studying the interaction of more than two vehicles (agents) and does so by systematically expanding the road scene with additional traffic and ego vehicles.
arXiv Detail & Related papers (2024-11-21T21:23:46Z)
- On-Board Vision-Language Models for Personalized Autonomous Vehicle Motion Control: System Design and Real-World Validation [17.085548386025412]
Vision-Language Models (VLMs) offer promising solutions to personalized driving.
We propose a lightweight yet effective on-board VLM framework that provides low-latency personalized driving performance.
Our system has demonstrated the ability to provide safe, comfortable, and personalized driving experiences across various scenarios.
arXiv Detail & Related papers (2024-11-17T23:20:37Z)
- CarDreamer: Open-Source Learning Platform for World Model based Autonomous Driving [25.49856190295859]
World model (WM) based reinforcement learning (RL) has emerged as a promising approach by learning and predicting the complex dynamics of various environments.
There does not exist an accessible platform for training and testing such algorithms in sophisticated driving environments.
We introduce CarDreamer, the first open-source learning platform designed specifically for developing WM based autonomous driving algorithms.
arXiv Detail & Related papers (2024-05-15T05:57:20Z)
- Reinforcement Learning for Versatile, Dynamic, and Robust Bipedal Locomotion Control [106.32794844077534]
This paper presents a study on using deep reinforcement learning to create dynamic locomotion controllers for bipedal robots.
We develop a general control solution that can be used for a range of dynamic bipedal skills, from periodic walking and running to aperiodic jumping and standing.
This work pushes the limits of agility for bipedal robots through extensive real-world experiments.
arXiv Detail & Related papers (2024-01-30T10:48:43Z)
- Learning and Adapting Agile Locomotion Skills by Transferring Experience [71.8926510772552]
We propose a framework for training complex robotic skills by transferring experience from existing controllers to jumpstart learning new tasks.
We show that our method enables learning complex agile jumping behaviors, navigating to goal locations while walking on hind legs, and adapting to new environments.
arXiv Detail & Related papers (2023-04-19T17:37:54Z)
- FastRLAP: A System for Learning High-Speed Driving via Deep RL and Autonomous Practicing [71.76084256567599]
We present a system that enables an autonomous small-scale RC car to drive aggressively from visual observations using reinforcement learning (RL).
Our system, FastRLAP (faster lap), trains autonomously in the real world, without human interventions, and without requiring any simulation or expert demonstrations.
The resulting policies exhibit emergent aggressive driving skills, such as timing braking and acceleration around turns and avoiding areas that impede the robot's motion, approaching the performance of a human driver using a similar first-person interface over the course of training.
arXiv Detail & Related papers (2023-04-19T17:33:47Z)
- Deep Reinforcement Learning-Based Mapless Crowd Navigation with Perceived Risk of the Moving Crowd for Mobile Robots [0.0]
Current state-of-the-art crowd navigation approaches are mainly deep reinforcement learning (DRL)-based.
We propose a method that includes a Collision Probability (CP) in the observation space to give the robot a sense of the level of danger of the moving crowd.
arXiv Detail & Related papers (2023-04-07T11:29:59Z)
- Learning Effect of Lay People in Gesture-Based Locomotion in Virtual Reality [81.5101473684021]
Some of the most promising methods are gesture-based and do not require additional handheld hardware.
Recent work focused mostly on user preference and performance of the different locomotion techniques.
This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR.
arXiv Detail & Related papers (2022-06-16T10:44:16Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- Vision-Based Autonomous Car Racing Using Deep Imitative Reinforcement Learning [13.699336307578488]
The deep imitative reinforcement learning (DIRL) approach achieves agile autonomous racing using visual inputs.
We validate our algorithm both in a high-fidelity driving simulation and on a real-world 1/20-scale RC-car with limited onboard computation.
arXiv Detail & Related papers (2021-07-18T00:00:48Z)
- Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots [121.42930679076574]
We present a model-free reinforcement learning framework for training robust locomotion policies in simulation.
Domain randomization is used to encourage the policies to learn behaviors that are robust across variations in system dynamics; a minimal illustrative sketch of this idea appears after this list.
We demonstrate this on versatile walking behaviors such as tracking a target walking velocity, walking height, and turning yaw.
arXiv Detail & Related papers (2021-03-26T07:14:01Z)
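As referenced in the bipedal locomotion entry above, domain randomization resamples simulator dynamics so a policy cannot overfit to a single model of the system. The sketch below is a rough, self-contained illustration; the parameter names and ranges are made up and not taken from any of the papers listed here.

```python
# Rough sketch of dynamics domain randomization: draw new simulator parameters
# at the start of each training episode. All names and ranges are illustrative.
import random


def sample_dynamics_params(rng: random.Random) -> dict:
    """Return one random draw of dynamics parameters for an episode."""
    return {
        "ground_friction": rng.uniform(0.4, 1.2),       # contact friction coefficient
        "link_mass_scale": rng.uniform(0.8, 1.2),       # +/-20% mass perturbation
        "motor_strength_scale": rng.uniform(0.9, 1.1),  # actuator torque scaling
        "actuation_latency_s": rng.uniform(0.0, 0.03),  # command-to-torque delay
    }


# One draw per episode; the sampled values would be applied to the simulator
# before the rollout starts.
episode_params = sample_dynamics_params(random.Random(42))
print(episode_params)
```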