Designing Environments Conducive to Interpretable Robot Behavior
- URL: http://arxiv.org/abs/2007.00820v2
- Date: Sun, 2 Aug 2020 16:34:05 GMT
- Title: Designing Environments Conducive to Interpretable Robot Behavior
- Authors: Anagha Kulkarni, Sarath Sreedharan, Sarah Keren, Tathagata
Chakraborti, David Smith and Subbarao Kambhampati
- Abstract summary: We investigate the opportunities and limitations of environment design as a tool to promote a type of interpretable behavior.
We formulate a novel environment design framework that considers design over multiple tasks and over a time horizon.
- Score: 35.95540723324049
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Designing robots capable of generating interpretable behavior is a
prerequisite for achieving effective human-robot collaboration. This means that
the robots need to be capable of generating behavior that aligns with human
expectations and, when required, provide explanations to the humans in the
loop. However, exhibiting such behavior in arbitrary environments could be
quite expensive for robots, and in some cases, the robot may not even be able
to exhibit the expected behavior. Given structured environments (like
warehouses and restaurants), it may be possible to design the environment so as
to boost the interpretability of the robot's behavior or to shape the human's
expectations of the robot's behavior. In this paper, we investigate the
opportunities and limitations of environment design as a tool to promote a type
of interpretable behavior -- known in the literature as explicable behavior. We
formulate a novel environment design framework that considers design over
multiple tasks and over a time horizon. In addition, we explore the
longitudinal aspect of explicable behavior and the trade-off that arises
between the cost of design and the cost of generating explicable behavior over
a time horizon.
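To make the stated trade-off concrete, here is a minimal sketch, assuming a toy model in which a one-time design cost is weighed against per-task explicability costs accumulated over a horizon. All function names, costs, and numbers below are illustrative assumptions, not the paper's actual formulation.

```python
from itertools import combinations

def design_cost(mods):
    """One-time cost of applying a set of environment modifications."""
    return sum(cost for _, cost in mods)

def explicable_cost(task, mods):
    """Cost of the cheapest explicable plan for `task` in the modified
    environment (toy stub: each modification reduces the penalty the
    robot pays for conforming to human expectations)."""
    base, penalty = task
    discount = 0.5 * len(mods)
    return base + max(penalty - discount, 0.0)

def best_design(candidate_mods, tasks, horizon):
    """Exhaustively search subsets of modifications, returning the one
    that minimizes design cost plus explicability cost over the horizon."""
    best, best_total = frozenset(), float("inf")
    for r in range(len(candidate_mods) + 1):
        for mods in combinations(candidate_mods, r):
            total = design_cost(mods) + horizon * sum(
                explicable_cost(task, mods) for task in tasks
            )
            if total < best_total:
                best, best_total = frozenset(mods), total
    return best, best_total

# (base plan cost, explicability penalty) per task -- invented numbers.
mods = [("remove_shelf", 4.0), ("add_signage", 1.5)]
tasks = [(10.0, 3.0), (8.0, 2.0)]
print(best_design(mods, tasks, horizon=1))
print(best_design(mods, tasks, horizon=10))
```

In this toy instance, leaving the environment unchanged is cheapest for a single pass over the tasks, while over ten passes the one-time design cost is amortized and modifying the environment wins, which is the longitudinal trade-off the abstract describes.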
Related papers
- An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions [8.309981857034902]
The aim is to build a robot policy that accounts for uncontrollable human behaviors.
We propose a novel planning framework and build a solver based on AND-OR search.
Preliminary experiments in two domains, one novel and one adapted, demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2024-09-27T08:27:36Z)
- Guessing human intentions to avoid dangerous situations in caregiving robots [1.3546242205182986]
We propose an algorithm that detects risky situations for humans, selecting a robot action that removes the danger in real time.
We use the simulation-based approach to ATM and adopt the 'like-me' policy to assign intentions and actions to people.
The algorithm has been implemented as part of an existing cognitive architecture and tested in simulation scenarios.
arXiv Detail & Related papers (2024-03-24T20:43:29Z)
- Robot Interaction Behavior Generation based on Social Motion Forecasting for Human-Robot Interaction [9.806227900768926]
We propose to model social motion forecasting in a shared human-robot representation space.
ECHO operates in the aforementioned shared space to predict the future motions of the agents encountered in social scenarios.
We evaluate our model in multi-person and human-robot motion forecasting tasks and obtain state-of-the-art performance by a large margin.
arXiv Detail & Related papers (2024-02-07T11:37:14Z)
- Singing the Body Electric: The Impact of Robot Embodiment on User Expectations [7.408858358967414]
Users develop mental models of robots to conceptualize what kinds of interactions they can have with them.
These conceptualizations are often formed before any interaction with the robot and are based only on observing the robot's physical design.
We propose to use multimodal features of robot embodiments to predict what kinds of expectations users will have about a given robot's social and physical capabilities.
arXiv Detail & Related papers (2024-01-13T04:42:48Z)
- Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots [119.55240471433302]
Habitat 3.0 is a simulation platform for studying collaborative human-robot tasks in home environments.
It addresses challenges in modeling complex deformable bodies and diversity in appearance and motion.
Human-in-the-loop infrastructure enables real human interaction with simulated robots via mouse/keyboard or a VR interface.
arXiv Detail & Related papers (2023-10-19T17:29:17Z)
- Towards a Causal Probabilistic Framework for Prediction, Action-Selection & Explanations for Robot Block-Stacking Tasks [4.244706520140677]
Causal models provide a principled framework to encode formal knowledge of the causal relationships that govern the robot's interaction with its environment.
We propose a novel causal probabilistic framework to embed a physics simulation capability into a structural causal model to permit robots to perceive and assess the current state of a block-stacking task.
arXiv Detail & Related papers (2023-08-11T15:58:15Z)
- SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing this counterfactual perturbation, i.e., how much the robot's presence changes what nearby humans would otherwise have done, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space (a toy sketch of this objective appears after this list).
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
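For the SACSoN entry above, the counterfactual-perturbation idea can be illustrated with a small sketch. This is a hypothetical reading of the objective, not the paper's implementation; every function name and number below is invented.

```python
import numpy as np

def counterfactual_perturbation(observed, predicted_without_robot):
    """Mean squared deviation between observed human positions and a
    counterfactual (robot-absent) prediction, both shaped (T, 2)."""
    return float(np.mean(np.sum((observed - predicted_without_robot) ** 2, axis=-1)))

def navigation_loss(goal_cost, observed, counterfactual, weight=1.0):
    """Task cost plus a weighted penalty for disturbing nearby humans."""
    return goal_cost + weight * counterfactual_perturbation(observed, counterfactual)

# Toy usage: a human who swerved around the robot, versus the straight
# path a robot-absent predictor would have forecast for that human.
t = np.linspace(0.0, 1.0, 20)
straight = np.stack([t, np.zeros_like(t)], axis=-1)       # counterfactual path
swerve = np.stack([t, 0.3 * np.sin(np.pi * t)], axis=-1)  # observed path
print(navigation_loss(goal_cost=2.0, observed=swerve, counterfactual=straight))
```

A policy trained against such a penalty is rewarded for reaching its goal while leaving bystanders' trajectories close to what they would have been without the robot.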
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.