SocNavGym: A Reinforcement Learning Gym for Social Navigation
- URL: http://arxiv.org/abs/2304.14102v2
- Date: Fri, 7 Jul 2023 04:00:36 GMT
- Title: SocNavGym: A Reinforcement Learning Gym for Social Navigation
- Authors: Aditya Kapoor, Sushant Swamy, Luis Manso and Pilar Bachiller
- Abstract summary: SocNavGym is an advanced simulation environment for social navigation.
It can generate different types of social navigation scenarios.
It can also be configured to work with different hand-crafted and data-driven social reward signals.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is essential for autonomous robots to be socially compliant while
navigating in human-populated environments. Machine Learning and, especially,
Deep Reinforcement Learning have recently gained considerable traction in the
field of Social Navigation. This can be partially attributed to the resulting
policies not being bound by human limitations in terms of code complexity or
the number of variables that are handled. Unfortunately, the lack of safety
guarantees and the large data requirements by DRL algorithms make learning in
the real world unfeasible. To bridge this gap, simulation environments are
frequently used. We propose SocNavGym, an advanced simulation environment for
social navigation that can generate a wide variety of social navigation
scenarios and facilitates the development of intelligent social agents.
SocNavGym is light-weight, fast, easy-to-use, and can be effortlessly
configured to generate different types of social navigation scenarios. It can
also be configured to work with different hand-crafted and data-driven social
reward signals and to yield a variety of evaluation metrics to benchmark
agents' performance. Further, we also provide a case study where a Dueling-DQN
agent is trained to learn social-navigation policies using SocNavGym. The
results provide evidence that SocNavGym can be used to train an agent from
scratch to navigate in simple as well as complex social scenarios. Our
experiments also show that agents trained using the data-driven reward
function display more advanced social compliance than those trained with the
heuristic-based reward function.
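As a rough illustration of how a Gym-style social-navigation environment like this is typically driven, here is a minimal sketch pairing an interaction loop with a small dueling Q-network. The environment id "SocNavGym-v1", the config-file argument, the flat observation vector, and the discrete action space are assumptions made for the sake of the example and may not match SocNavGym's actual interface; the replay buffer, target network, and exploration schedule of a full Dueling-DQN training run are omitted.

```python
# Illustrative sketch only. The env id, config keyword, flat observations and a
# discrete action space are assumptions; they may differ from SocNavGym's real API.
import gym          # gym or gymnasium, depending on the installed version
import socnavgym    # assumed import that registers the environment
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling head: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # state value V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # advantages A(s, a)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        v, a = self.value(h), self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)


env = gym.make("SocNavGym-v1", config="environment_config.yaml")  # hypothetical id/config
obs = env.reset()
q_net = DuelingQNet(obs_dim=len(obs), n_actions=env.action_space.n)

done, episode_return = False, 0.0
while not done:
    with torch.no_grad():
        q = q_net(torch.as_tensor(obs, dtype=torch.float32))
    action = int(q.argmax())                     # greedy; training would add epsilon-greedy
    obs, reward, done, info = env.step(action)   # reward: hand-crafted or data-driven signal
    episode_return += reward
print("episode return:", episode_return)
```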
Related papers
- Online Context Learning for Socially-compliant Navigation [49.609656402450746]
This letter introduces an online context learning method that aims to empower robots to adapt to new social environments online.
Experiments using a community-wide simulator show that our method outperforms the state-of-the-art ones.
arXiv Detail & Related papers (2024-06-17T12:59:13Z)
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z)
- SocialGFs: Learning Social Gradient Fields for Multi-Agent Reinforcement Learning [58.84311336011451]
We propose a novel gradient-based state representation for multi-agent reinforcement learning.
We employ denoising score matching to learn the social gradient fields (SocialGFs) from offline samples.
In practice, we integrate SocialGFs into the widely used multi-agent reinforcement learning algorithms, e.g., MAPPO.
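For readers unfamiliar with the technique named above, the following is a generic denoising score matching objective, not the paper's implementation: the network, noise scale, and toy 2-D "offline samples" are placeholders chosen only to keep the example self-contained.

```python
# Generic denoising score matching (DSM) sketch; placeholders, not SocialGFs code.
import torch
import torch.nn as nn


def dsm_loss(score_net: nn.Module, x: torch.Tensor, sigma: float) -> torch.Tensor:
    """Perturb x with Gaussian noise and regress the score of the perturbed
    distribution, which for this corruption is -(x_noisy - x) / sigma**2."""
    noise = torch.randn_like(x) * sigma
    x_noisy = x + noise
    target = -noise / sigma**2
    pred = score_net(x_noisy)
    return (sigma**2 * (pred - target) ** 2).mean()  # sigma^2 weighting balances noise scales


# Toy usage: 2-D positions as "offline samples", an MLP as the score network.
score_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(score_net.parameters(), lr=1e-3)
samples = torch.randn(256, 2)  # placeholder data

loss = dsm_loss(score_net, samples, sigma=0.1)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```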
arXiv Detail & Related papers (2024-05-03T04:12:19Z)
- Principles and Guidelines for Evaluating Social Robot Navigation Algorithms [44.51586279645062]
Social robot navigation is difficult to evaluate because it involves dynamic human agents and their perceptions of the appropriateness of robot behavior.
Our contributions include (a) a definition of a socially navigating robot as one that respects the principles of safety, comfort, legibility, politeness, social competency, agent understanding, proactivity, and responsiveness to context, (b) guidelines for the use of metrics, development of scenarios, benchmarks, datasets, and simulators to evaluate social navigation, and (c) a social navigation metrics framework to make it easier to compare results from different simulators, robots and datasets.
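To make the kind of standardization meant by point (c) above concrete, here is a minimal per-episode metrics record; the fields are common social-navigation metrics chosen for illustration, not the framework's actual schema.

```python
# Illustrative per-episode metrics container; field names are assumptions.
from dataclasses import dataclass


@dataclass
class EpisodeMetrics:
    success: bool                    # did the robot reach its goal?
    time_to_goal_s: float            # simulated time taken to reach the goal
    path_length_m: float             # distance travelled by the robot
    min_distance_to_human_m: float   # closest approach to any human
    personal_space_violations: int   # steps spent inside a personal-space radius


def success_rate(episodes: list[EpisodeMetrics]) -> float:
    """Fraction of episodes that ended in success."""
    return sum(e.success for e in episodes) / max(len(episodes), 1)
```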
arXiv Detail & Related papers (2023-06-29T07:31:43Z)
- SOCIALGYM 2.0: Simulator for Multi-Agent Social Robot Navigation in Shared Human Spaces [13.116180950665962]
SocialGym 2 is a multi-agent navigation simulator for social robots.
It replicates real-world dynamics in complex environments, including doorways, hallways, intersections, and roundabouts.
SocialGym 2 offers an accessible python interface that integrates with a navigation stack through ROS messaging.
arXiv Detail & Related papers (2023-03-09T21:21:05Z)
- Exploiting Socially-Aware Tasks for Embodied Social Navigation [17.48110264302196]
We propose an end-to-end architecture that exploits Socially-Aware Tasks to inject common-sense social behaviors into a reinforcement learning navigation policy.
To this end, our tasks exploit the notion of immediate and future dangers of collision.
We validate our approach on Gibson4+ and Habitat-Matterport3D datasets.
arXiv Detail & Related papers (2022-12-01T18:52:46Z)
- SoLo T-DIRL: Socially-Aware Dynamic Local Planner based on Trajectory-Ranked Deep Inverse Reinforcement Learning [4.008601554204486]
This work proposes a new framework for a socially-aware dynamic local planner in crowded environments by building on the recently proposed Trajectory-ranked Maximum Entropy Deep Inverse Reinforcement Learning (T-MEDIRL).
To address the social navigation problem, our multi-modal learning planner explicitly incorporates social interaction factors, as well as social-awareness factors, into the T-MEDIRL pipeline to learn a reward function from human demonstrations.
Our evaluation shows that this method can successfully make a robot navigate in a crowded social environment and outperforms state-of-the-art social navigation methods in terms of success rate and navigation time.
arXiv Detail & Related papers (2022-09-16T15:13:33Z)
- Socially Compliant Navigation Dataset (SCAND): A Large-Scale Dataset of Demonstrations for Social Navigation [92.66286342108934]
Social navigation is the capability of an autonomous agent, such as a robot, to navigate in a 'socially compliant' manner in the presence of other intelligent agents such as humans.
Our dataset contains 8.7 hours, 138 trajectories, and 25 miles of socially compliant, human-teleoperated driving demonstrations.
arXiv Detail & Related papers (2022-03-28T19:09:11Z)
- PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception [50.551003004553806]
We create a dataset of physically-grounded abstract social events, PHASE, that resemble a wide range of real-life social interactions.
PHASE is validated with human experiments demonstrating that humans perceive rich interactions in the social events.
As a baseline model, we introduce a Bayesian inverse planning approach, SIMPLE, which outperforms state-of-the-art feed-forward neural networks.
arXiv Detail & Related papers (2021-03-02T18:44:57Z)
- PsiPhi-Learning: Reinforcement Learning with Demonstrations using Successor Features and Inverse Temporal Difference Learning [102.36450942613091]
We propose an inverse reinforcement learning algorithm, called inverse temporal difference learning (ITD).
We show how to seamlessly integrate ITD with learning from online environment interactions, arriving at a novel algorithm for reinforcement learning with demonstrations, called $\Psi\Phi$-learning.
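As a rough sketch of the successor-feature decomposition this builds on, the snippet below expresses Q(s, a) = psi(s, a) · w, where psi accumulates discounted state features phi and w is a reward-weight vector (the quantity ITD recovers from demonstrations). Dimensions and networks are placeholders, not the paper's architecture.

```python
# Successor-feature sketch: Q(s, a) = psi(s, a) . w; placeholders throughout.
import torch
import torch.nn as nn

OBS_DIM, FEATURE_DIM, N_ACTIONS, GAMMA = 16, 8, 4, 0.99

phi = nn.Linear(OBS_DIM, FEATURE_DIM)              # state features phi(s)
psi = nn.Linear(OBS_DIM, FEATURE_DIM * N_ACTIONS)  # successor features psi(s, a)
w = torch.zeros(FEATURE_DIM, requires_grad=True)   # reward weights, e.g. inferred by ITD


def q_values(state: torch.Tensor) -> torch.Tensor:
    """Q(s, .) = psi(s, .) @ w for every action; state has shape (batch, OBS_DIM)."""
    psi_sa = psi(state).view(-1, N_ACTIONS, FEATURE_DIM)
    return psi_sa @ w                               # shape (batch, N_ACTIONS)


def psi_td_target(state, next_state, next_action):
    """Bellman target for the successor features, under the convention
    psi(s, a) = phi(s) + gamma * psi(s', a')."""
    psi_next = psi(next_state).view(-1, N_ACTIONS, FEATURE_DIM)
    chosen = psi_next[torch.arange(len(next_action)), next_action]
    return phi(state) + GAMMA * chosen              # shape (batch, FEATURE_DIM)
```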
arXiv Detail & Related papers (2021-02-24T21:12:09Z)