SANGO: Socially Aware Navigation through Grouped Obstacles
- URL: http://arxiv.org/abs/2411.19497v1
- Date: Fri, 29 Nov 2024 06:29:46 GMT
- Title: SANGO: Socially Aware Navigation through Grouped Obstacles
- Authors: Rahath Malladi, Amol Harsh, Arshia Sangwan, Sunita Chauhan, Sandeep Manjanna
- Abstract summary: This paper introduces SANGO, a novel method that ensures socially appropriate behavior by dynamically grouping obstacles and adhering to social norms. Using deep reinforcement learning, SANGO trains agents to navigate complex environments, leveraging the DBSCAN algorithm for obstacle clustering and Proximal Policy Optimization (PPO) for path planning. The proposed approach improves safety and social compliance by maintaining appropriate distances and reducing collision rates.
- Score: 0.09895793818721334
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper introduces SANGO (Socially Aware Navigation through Grouped Obstacles), a novel method that ensures socially appropriate behavior by dynamically grouping obstacles and adhering to social norms. Using deep reinforcement learning, SANGO trains agents to navigate complex environments leveraging the DBSCAN algorithm for obstacle clustering and Proximal Policy Optimization (PPO) for path planning. The proposed approach improves safety and social compliance by maintaining appropriate distances and reducing collision rates. Extensive experiments conducted in custom simulation environments demonstrate SANGO's superior performance in significantly reducing discomfort (by up to 83.5%), reducing collision rates (by up to 29.4%) and achieving higher successful navigation in dynamic and crowded scenarios. These findings highlight the potential of SANGO for real-world applications, paving the way for advanced socially adept robotic navigation systems.
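The grouping step described in the abstract can be sketched in a few lines: cluster raw obstacle points DBSCAN-style, then summarise each cluster into a compact shape a planner can keep its distance from. The sketch below is a minimal, self-contained illustration under stated assumptions; the point coordinates, `eps`/`min_samples` values, and the (centroid, radius) group summary are illustrative choices, not details taken from the paper, which trains a PPO policy on top of the grouped representation.

```python
import math

def dbscan(points, eps=1.0, min_samples=3):
    """Minimal DBSCAN: label each point with a cluster id, -1 for noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_samples:
            labels[i] = -1  # provisionally noise
            continue
        labels[i] = cluster
        queue = list(nbrs)
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster  # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_samples:  # core point: keep expanding
                queue.extend(j_nbrs)
        cluster += 1
    return labels

def group_obstacles(points, labels):
    """Summarise each cluster as ((cx, cy), radius) -- one grouped
    obstacle a navigation policy could maintain distance from."""
    clusters = {}
    for p, l in zip(points, labels):
        if l >= 0:
            clusters.setdefault(l, []).append(p)
    summaries = []
    for pts in clusters.values():
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        r = max(math.dist((cx, cy), p) for p in pts)
        summaries.append(((cx, cy), r))
    return summaries

# Two tight pedestrian groups plus one stray point (labelled noise, -1).
pts = [(0, 0), (0.5, 0), (0, 0.5),
       (5, 5), (5.5, 5), (5, 5.5),
       (20, 20)]
labels = dbscan(pts, eps=1.0, min_samples=3)
groups = group_obstacles(pts, labels)
```

Treating each cluster as a single disc rather than a set of individual points is the design choice that makes "grouped obstacles" tractable for a policy: the state the agent sees scales with the number of groups, not the number of detected points.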
Related papers
- From Obstacles to Etiquette: Robot Social Navigation with VLM-Informed Path Selection [57.74400052368147]
This paper presents a social robot navigation framework that integrates geometric planning with contextual social reasoning. The system first extracts obstacles and human dynamics to generate geometrically feasible candidate paths, then leverages a fine-tuned vision-language model (VLM) to evaluate these paths. Experiments in four social navigation contexts demonstrate that our method achieves the best overall performance, with the lowest personal space violation duration, the minimal pedestrian-facing time, and no social zone intrusions.
arXiv Detail & Related papers (2026-02-09T18:46:12Z) - Learning to Navigate Socially Through Proactive Risk Perception [28.68878818274302]
We describe our submission to the IROS 2025 RoboSense Challenge Social Navigation Track. This track focuses on developing RGBD-based perception and navigation systems. We introduce a Proactive Risk Perception Module to enhance social navigation performance.
arXiv Detail & Related papers (2025-10-09T07:22:12Z) - SOE: Sample-Efficient Robot Policy Self-Improvement via On-Manifold Exploration [58.05143960563826]
On-Manifold Exploration (SOE) is a framework that enhances policy exploration and improvement in robotic manipulation. SOE learns a compact latent representation of task-relevant factors and constrains exploration to the manifold of valid actions. It can be seamlessly integrated with arbitrary policy models as a plug-in module, augmenting exploration without degrading the base policy performance.
arXiv Detail & Related papers (2025-09-23T17:54:47Z) - Active Test-time Vision-Language Navigation [60.69722522420299]
ATENA is a test-time active learning framework that enables a practical human-robot interaction via episodic feedback on uncertain navigation outcomes. In particular, ATENA learns to increase certainty in successful episodes and decrease it in failed ones, improving uncertainty calibration. In addition, we propose a self-active learning strategy that enables an agent to evaluate its navigation outcomes based on confident predictions.
arXiv Detail & Related papers (2025-06-07T02:24:44Z) - GSON: A Group-based Social Navigation Framework with Large Multimodal Model [9.94576166903495]
This paper introduces GSON, a novel group-based social navigation framework.
GSON uses visual prompting to enable zero-shot extraction of social relationships among pedestrians.
We validate GSON through extensive real-world mobile robot navigation experiments.
arXiv Detail & Related papers (2024-09-26T17:27:15Z) - Disentangling Uncertainty for Safe Social Navigation using Deep Reinforcement Learning [0.4218593777811082]
This work introduces a novel approach that integrates aleatoric, epistemic, and predictive uncertainty estimation into a DRL navigation framework to obtain uncertainty estimates of the policy distribution.
In uncertain decision-making situations, we propose to change the robot's social behavior to conservative collision avoidance.
The results show improved training performance with ODV and dropout in PPO and reveal that the training scenario has an impact on the generalization.
arXiv Detail & Related papers (2024-09-16T18:49:38Z) - SoNIC: Safe Social Navigation with Adaptive Conformal Inference and Constrained Reinforcement Learning [26.554847852013737]
Reinforcement Learning (RL) has enabled social robots to generate trajectories without human-designed rules or interventions.
We propose the first algorithm, SoNIC, that integrates adaptive conformal inference (ACI) with constrained reinforcement learning (CRL) to learn safe policies for social navigation.
Our method outperforms state-of-the-art baselines in terms of both safety and adherence to social norms by a large margin and demonstrates much stronger robustness to out-of-distribution scenarios.
arXiv Detail & Related papers (2024-07-24T17:57:21Z) - Belief Aided Navigation using Bayesian Reinforcement Learning for Avoiding Humans in Blind Spots [0.0]
This study introduces a novel algorithm, BNBRL+, predicated on the partially observable Markov decision process framework to assess risks in unobservable areas.
It integrates the dynamics between the robot, humans, and inferred beliefs to determine the navigation paths and embeds social norms within the reward function.
The model's ability to navigate effectively in spaces with limited visibility and avoid obstacles dynamically can significantly improve the safety and reliability of autonomous vehicles.
arXiv Detail & Related papers (2024-03-15T08:50:39Z) - Robust Driving Policy Learning with Guided Meta Reinforcement Learning [49.860391298275616]
We introduce an efficient method to train diverse driving policies for social vehicles as a single meta-policy.
By randomizing the interaction-based reward functions of social vehicles, we can generate diverse objectives and efficiently train the meta-policy.
We propose a training strategy to enhance the robustness of the ego vehicle's driving policy using the environment where social vehicles are controlled by the learned meta-policy.
arXiv Detail & Related papers (2023-07-19T17:42:36Z) - iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed
Multi-Agent Reinforcement Learning [57.24340061741223]
We introduce a distributed multi-agent reinforcement learning (MARL) algorithm that can predict trajectories and intents in dense and heterogeneous traffic scenarios.
Our approach for intent-aware planning, iPLAN, allows agents to infer nearby drivers' intents solely from their local observations.
arXiv Detail & Related papers (2023-06-09T20:12:02Z) - CCE: Sample Efficient Sparse Reward Policy Learning for Robotic Navigation via Confidence-Controlled Exploration [72.24964965882783]
Confidence-Controlled Exploration (CCE) is designed to enhance the training sample efficiency of reinforcement learning algorithms for sparse reward settings such as robot navigation.
CCE is based on a novel relationship we provide between gradient estimation and policy entropy.
We demonstrate through simulated and real-world experiments that CCE outperforms conventional methods that employ constant trajectory lengths and entropy regularization.
arXiv Detail & Related papers (2023-06-09T18:45:15Z) - Safety-compliant Generative Adversarial Networks for Human Trajectory Forecasting [95.82600221180415]
Human trajectory forecasting in crowds presents the challenges of modelling social interactions and producing collision-free multimodal distributions.
We introduce SGANv2, an improved safety-compliant SGAN architecture equipped with motion-temporal interaction modelling and a transformer-based discriminator design.
arXiv Detail & Related papers (2022-09-25T15:18:56Z) - SoLo T-DIRL: Socially-Aware Dynamic Local Planner based on Trajectory-Ranked Deep Inverse Reinforcement Learning [4.008601554204486]
This work proposes a new framework for a socially-aware dynamic local planner in crowded environments, building on the recently proposed Trajectory-ranked Maximum Entropy Deep Inverse Reinforcement Learning (T-MEDIRL).
To address the social navigation problem, our multi-modal learning planner explicitly incorporates social interaction and social-awareness factors into the T-MEDIRL pipeline to learn a reward function from human demonstrations.
Our evaluation shows that this method can successfully navigate a robot in a crowded social environment and outperforms state-of-the-art social navigation methods in terms of success rate and navigation metrics.
arXiv Detail & Related papers (2022-09-16T15:13:33Z) - Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider a value-based and a policy-gradient Deep Reinforcement Learning (DRL) algorithm.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z) - Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings [129.80279257258098]
Reinforcement learning (RL) in real-world safety-critical target settings like urban driving is hazardous.
We propose a "safety-critical adaptation" task setting: an agent first trains in non-safety-critical "source" environments.
We propose a solution approach, CARL, that builds on the intuition that prior experience in diverse environments equips an agent to estimate risk.
arXiv Detail & Related papers (2020-08-15T01:40:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.