Causality-enhanced Decision-Making for Autonomous Mobile Robots in Dynamic Environments
- URL: http://arxiv.org/abs/2504.11901v2
- Date: Thu, 17 Apr 2025 08:41:44 GMT
- Title: Causality-enhanced Decision-Making for Autonomous Mobile Robots in Dynamic Environments
- Authors: Luca Castri, Gloria Beraldo, Nicola Bellotto
- Abstract summary: We propose a novel causality-based decision-making framework to predict battery usage and human obstructions. We also develop a new Gazebo-based simulator designed to model context-sensitive human-robot spatial interactions. Our findings highlight how causal reasoning enables autonomous robots to operate more efficiently and safely in dynamic environments shared with humans.
- Score: 2.037693212747679
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The growing integration of robots in shared environments -- such as warehouses, shopping centres, and hospitals -- demands a deep understanding of the underlying dynamics and human behaviours, including how, when, and where individuals engage in various activities and interactions. This knowledge goes beyond simple correlation studies and requires a more comprehensive causal analysis. By leveraging causal inference to model cause-and-effect relationships, we can better anticipate critical environmental factors and enable autonomous robots to plan and execute tasks more effectively. To this end, we propose a novel causality-based decision-making framework that reasons over a learned causal model to predict battery usage and human obstructions, understanding how these factors could influence robot task execution. Such a reasoning framework assists the robot in deciding when and how to complete a given task. To achieve this, we also developed PeopleFlow, a new Gazebo-based simulator designed to model context-sensitive human-robot spatial interactions in shared workspaces. PeopleFlow features realistic human and robot trajectories influenced by contextual factors such as time, environment layout, and robot state, and can simulate a large number of agents. While the simulator is general-purpose, in this paper we focus on a warehouse-like environment as a case study, where we conduct an extensive evaluation benchmarking our causal approach against a non-causal baseline. Our findings demonstrate the efficacy of the proposed solutions, highlighting how causal reasoning enables autonomous robots to operate more efficiently and safely in dynamic environments shared with humans.
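The abstract describes reasoning over a learned causal model to decide when and how a task should be executed, trading off predicted battery usage against predicted human obstructions. The following is a minimal illustrative sketch of that decision pattern, not the paper's implementation: all variable names, probabilities, and cost weights are hypothetical placeholders.

```python
# Toy learned causal effects: time of day -> human density -> obstruction
# risk, and chosen time slot -> expected battery drain (e.g. detours around
# people cost extra battery). All numbers are made up for illustration.
P_OBSTRUCTION = {      # P(obstruction | time slot), estimated from data
    "morning": 0.6,
    "midday": 0.8,
    "evening": 0.2,
}
BATTERY_COST = {       # expected battery % consumed per task, per slot
    "morning": 12.0,
    "midday": 15.0,
    "evening": 8.0,
}

def expected_cost(slot, w_obstruction=10.0, w_battery=1.0):
    """Combine predicted obstruction risk and battery usage into one cost."""
    return (w_obstruction * P_OBSTRUCTION[slot]
            + w_battery * BATTERY_COST[slot])

def choose_slot(slots):
    """Pick the execution slot that minimises the expected cost."""
    return min(slots, key=expected_cost)

best = choose_slot(["morning", "midday", "evening"])
print(best)  # → evening
```

Here the causal predictions steer the robot toward the low-traffic evening slot; the actual framework replaces these hand-set tables with quantities predicted by a causal model learned from data.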
Related papers
- Experimental Evaluation of ROS-Causal in Real-World Human-Robot Spatial Interaction Scenarios [3.8625803348911774]
We present an experimental evaluation of ROS-Causal, a ROS-based framework for causal discovery in human-robot spatial interactions.
We show how causal models can be extracted directly onboard by robots during data collection.
The online causal models generated from the simulation are consistent with those from lab experiments.
arXiv Detail & Related papers (2024-06-07T14:20:30Z)
- Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction [9.806227900768926]
We propose to model social motion forecasting in a shared human-robot representation space.
ECHO operates in the aforementioned shared space to predict the future motions of the agents encountered in social scenarios.
We evaluate our model in multi-person and human-robot motion forecasting tasks and obtain state-of-the-art performance by a large margin.
arXiv Detail & Related papers (2024-02-07T11:37:14Z)
- Efficient Causal Discovery for Robotics Applications [2.1244188321694146]
We present a practical demonstration of our approach for fast and accurate causal analysis, known as Filtered PCMCI (F-PCMCI).
The provided application illustrates how our F-PCMCI can accurately and promptly reconstruct the causal model of a human-robot interaction scenario.
arXiv Detail & Related papers (2023-10-23T13:30:07Z)
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
- Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions [5.742409080817885]
We propose an application of causal discovery methods to model human-robot spatial interactions.
New methods and practical solutions are discussed to exploit, for the first time, a state-of-the-art causal discovery algorithm.
arXiv Detail & Related papers (2022-10-29T08:56:48Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together [68.44697646919515]
This paper presents several human-robot systems that utilize spatial computing to enable novel robot use cases.
The combination of spatial computing and egocentric sensing on mixed reality devices enables them to capture and understand human actions and translate these to actions with spatial meaning.
arXiv Detail & Related papers (2022-02-03T10:04:26Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is an approach that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Deployment and Evaluation of a Flexible Human-Robot Collaboration Model Based on AND/OR Graphs in a Manufacturing Environment [2.3848738964230023]
A major bottleneck to effectively deploying collaborative robots in manufacturing industries is the development of task planning algorithms.
A pick-and-place palletization task, which requires the collaboration between humans and robots, is investigated.
The results of this study demonstrate how human-robot collaboration models like the one we propose can leverage the flexibility and the comfort of operators in the workplace.
arXiv Detail & Related papers (2020-07-13T22:05:34Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
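Several of the related papers above (F-PCMCI, ROS-Causal, the dynamic-model discovery work) centre on causal discovery from time-series data of human-robot interactions. As a rough, hedged illustration of one idea behind such pipelines — pruning candidate links before running a full conditional-independence analysis — here is a minimal lagged-correlation prefilter. This is a generic sketch, not the F-PCMCI algorithm; function names and the threshold are hypothetical.

```python
import math

def lagged_corr(x, y, lag):
    """Pearson correlation between x[t-lag] and y[t]."""
    xs, ys = x[:len(x) - lag], y[lag:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = math.sqrt(sum((a - mx) ** 2 for a in xs))
    vy = math.sqrt(sum((b - my) ** 2 for b in ys))
    return cov / (vx * vy) if vx and vy else 0.0

def prefilter_links(series, max_lag=2, threshold=0.3):
    """Keep only candidate causal links whose lagged correlation is
    non-negligible; a full causal discovery step would then test only
    these survivors, which is what makes prefiltering fast."""
    links = []
    for src in series:
        for dst in series:
            if src == dst:
                continue
            for lag in range(1, max_lag + 1):
                if abs(lagged_corr(series[src], series[dst], lag)) >= threshold:
                    links.append((src, dst, lag))
    return links
```

For example, if one signal is a one-step delayed copy of another, `prefilter_links` retains the `(source, target, lag=1)` candidate for the downstream conditional-independence tests.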
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.