Two ways to make your robot proactive: reasoning about human intentions,
or reasoning about possible futures
- URL: http://arxiv.org/abs/2205.05492v1
- Date: Wed, 11 May 2022 13:33:14 GMT
- Title: Two ways to make your robot proactive: reasoning about human intentions,
or reasoning about possible futures
- Authors: Sera Buyukgoz, Jasmin Grosinger, Mohamed Chetouani and Alessandro
Saffiotti
- Abstract summary: We investigate two ways to make robots proactive.
One way is to recognize humans' intentions and to act to fulfill them, like opening the door that you are about to walk through.
The other way is to reason about possible future threats or opportunities and to act to prevent or to foster them.
- Score: 69.03494351066846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robots sharing their space with humans need to be proactive in order to be
helpful. Proactive robots are able to act on their own initiative in an
anticipatory way to benefit humans. In this work, we investigate two ways to
make robots proactive. One way is to recognize humans' intentions and to act to
fulfill them, like opening the door that you are about to walk through. The
other way is to reason about possible future threats or opportunities and to
act to prevent or to foster them, like recommending that you take an umbrella
because rain has been forecast. In this paper, we present approaches to realize these two
types of proactive behavior. We then present an integrated system that can
generate proactive robot behavior by reasoning about both factors: intentions
and predictions. We illustrate our system on a sample use case involving a domestic
robot and a human. We first run this use case with the two separate proactive
systems, intention-based and prediction-based, and then run it with our
integrated system. The results show that the integrated system is able to take
into account a broader range of the aspects needed for proactivity.
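To make the interplay between the two reasoning modes concrete, the following Python sketch is a minimal, hypothetical illustration, not the authors' implementation: the function names, the observation and forecast dictionaries, and the utility values are invented, and the two rules simply mirror the door and umbrella examples from the abstract.

```python
from dataclasses import dataclass


@dataclass
class Candidate:
    action: str     # proactive action the robot could take
    reason: str     # why it was proposed (intention- or prediction-based)
    utility: float  # rough expected benefit to the human


def intention_based(observation: dict) -> list:
    """Recognize the human's current intention and propose actions that help fulfill it."""
    candidates = []
    if observation.get("human_heading_to") == "door":
        candidates.append(Candidate("open_door", "human intends to go through the door", 0.8))
    return candidates


def prediction_based(forecast: dict) -> list:
    """Reason about possible futures and propose actions that prevent threats or foster opportunities."""
    candidates = []
    if forecast.get("rain_probability", 0.0) > 0.5:
        candidates.append(Candidate("suggest_umbrella", "rain is forecast", 0.6))
    return candidates


def integrated(observation: dict, forecast: dict):
    """Merge candidates from both reasoning modes and pick the most beneficial one."""
    candidates = intention_based(observation) + prediction_based(forecast)
    return max(candidates, key=lambda c: c.utility, default=None)


if __name__ == "__main__":
    obs = {"human_heading_to": "door"}
    fc = {"rain_probability": 0.7}
    print(integrated(obs, fc))
    # -> Candidate(action='open_door', reason='human intends to go through the door', utility=0.8)
```

Scoring both candidate sets on a single utility scale is one simple way an integrated reasoner could trade intention fulfillment off against predicted threats or opportunities, in the spirit of the broader considerations the abstract attributes to the integrated system.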
Related papers
- An Epistemic Human-Aware Task Planner which Anticipates Human Beliefs and Decisions [8.309981857034902]
The aim is to build a robot policy that accounts for uncontrollable human behaviors.
We propose a novel planning framework and build a solver based on AND-OR search.
Preliminary experiments in two domains, one novel and one adapted, demonstrate the effectiveness of the framework.
arXiv Detail & Related papers (2024-09-27T08:27:36Z)
- Guessing human intentions to avoid dangerous situations in caregiving robots [1.3546242205182986]
We propose an algorithm that detects risky situations for humans, selecting a robot action that removes the danger in real time.
We use a simulation-based approach to artificial theory of mind (ATM) and adopt the 'like-me' policy to assign intentions and actions to people.
The algorithm has been implemented as part of an existing cognitive architecture and tested in simulation scenarios.
arXiv Detail & Related papers (2024-03-24T20:43:29Z)
- SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing this counterfactual perturbation, that is, the change the robot's presence causes in what nearby humans would otherwise have done, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
arXiv Detail & Related papers (2023-06-02T19:07:52Z)
- Aligning Robot and Human Representations [50.070982136315784]
We argue that current representation learning approaches in robotics should be studied from the perspective of how well they accomplish the objective of representation alignment.
We mathematically define the problem, identify its key desiderata, and situate current methods within this formalism.
arXiv Detail & Related papers (2023-02-03T18:59:55Z)
- Learning Latent Representations to Co-Adapt to Humans [12.71953776723672]
Non-stationary humans are challenging for robot learners.
In this paper we introduce an algorithmic formalism that enables robots to co-adapt alongside dynamic humans.
arXiv Detail & Related papers (2022-12-19T16:19:24Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness, or a lack of it, during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Doing Right by Not Doing Wrong in Human-Robot Collaboration [8.078753289996417]
We propose a novel approach to learning fair and sociable behavior, not by reproducing positive behavior, but rather by avoiding negative behavior.
In this study, we highlight the importance of incorporating sociability in robot manipulation, as well as the need to consider fairness in human-robot interactions.
arXiv Detail & Related papers (2022-02-05T23:05:10Z)
- Probabilistic Human Motion Prediction via A Bayesian Neural Network [71.16277790708529]
In this paper, we propose a probabilistic model for human motion prediction.
Given an observed motion sequence, our model can generate several possible future motions.
We extensively validate our approach on the large-scale benchmark dataset Human3.6M.
arXiv Detail & Related papers (2021-07-14T09:05:33Z)
- Supportive Actions for Manipulation in Human-Robot Coworker Teams [15.978389978586414]
We refer to actions that support interaction by reducing future interference with others as supportive robot actions.
We compare two robot modes in a shared table pick-and-place task: (1) Task-oriented: the robot only takes actions to further its own task objective and (2) Supportive: the robot sometimes prefers supportive actions to task-oriented ones.
Our experiments in simulation, using a simplified human model, reveal that supportive actions reduce the interference between agents, especially in more difficult tasks, but also cause the robot to take longer to complete the task.
arXiv Detail & Related papers (2020-05-02T09:37:10Z)
- Human Grasp Classification for Reactive Human-to-Robot Handovers [50.91803283297065]
We propose an approach for human-to-robot handovers in which the robot meets the human halfway.
We collect a human grasp dataset which covers typical ways of holding objects with various hand shapes and poses.
We present a planning and execution approach that takes the object from the human hand according to the detected grasp and hand position.
arXiv Detail & Related papers (2020-03-12T19:58:03Z)