Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs
- URL: http://arxiv.org/abs/2004.12248v1
- Date: Sat, 25 Apr 2020 23:02:04 GMT
- Title: Joint Inference of States, Robot Knowledge, and Human (False-)Beliefs
- Authors: Tao Yuan, Hangxin Liu, Lifeng Fan, Zilong Zheng, Tao Gao, Yixin Zhu,
Song-Chun Zhu
- Abstract summary: Aiming to understand how human (false-)belief--a core socio-cognitive ability--would affect human interactions with robots, this paper proposes to adopt a graphical model to unify the representation of object states, robot knowledge, and human (false-)beliefs.
An inference algorithm is derived to fuse the individual pg-s from all robots across multi-views into a joint pg, which affords more effective reasoning and inference capability to overcome the errors originating from a single view.
- Score: 90.20235972293801
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aiming to understand how human (false-)belief--a core socio-cognitive
ability--would affect human interactions with robots, this paper proposes to
adopt a graphical model to unify the representation of object states, robot
knowledge, and human (false-)beliefs. Specifically, a parse graph (pg) is
learned from a single-view spatiotemporal parsing by aggregating various object
states over time; such a learned representation is accumulated as the
robot's knowledge. An inference algorithm is derived to fuse individual pg from
all robots across multi-views into a joint pg, which affords more effective
reasoning and inference capability to overcome the errors originating from a
single view. In the experiments, through the joint inference over pg-s, the
system correctly recognizes human (false-)belief in various settings and
achieves better cross-view accuracy on a challenging small object tracking
dataset.
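To make the fusion idea above concrete, the minimal Python sketch below shows how per-view object-state beliefs could be combined into a joint estimate and compared against the states a human last observed. The dictionary-based "parse graphs", the confidence-weighted voting rule, and all object and state names are illustrative assumptions; the paper derives a proper joint-pg inference algorithm, which this only approximates.

```python
from collections import defaultdict

def fuse_parse_graphs(view_pgs):
    """Fuse per-view object-state beliefs into a joint estimate.

    `view_pgs` maps a view id to {object_id: (state, confidence)}.
    Fusion here is simple confidence-weighted voting across views,
    standing in for the paper's joint-pg inference.
    """
    votes = defaultdict(lambda: defaultdict(float))
    for pg in view_pgs.values():
        for obj, (state, conf) in pg.items():
            votes[obj][state] += conf
    # Keep the highest-scoring state per object.
    return {obj: max(scores, key=scores.get) for obj, scores in votes.items()}

def detect_false_beliefs(joint_pg, human_belief):
    """A human holds a false belief about an object when the state they
    last observed differs from the fused estimate of the current state."""
    return {obj: (believed, joint_pg[obj])
            for obj, believed in human_belief.items()
            if obj in joint_pg and believed != joint_pg[obj]}

if __name__ == "__main__":
    # Three robot views of the same scene; robot_C mis-parses the cup.
    views = {
        "robot_A": {"cup": ("inside_box", 0.9), "phone": ("on_table", 0.8)},
        "robot_B": {"cup": ("inside_box", 0.7)},
        "robot_C": {"cup": ("on_table", 0.4), "phone": ("on_table", 0.9)},
    }
    joint = fuse_parse_graphs(views)          # {'cup': 'inside_box', 'phone': 'on_table'}
    human = {"cup": "on_table"}               # the human left before the cup was moved
    print(joint)
    print(detect_false_beliefs(joint, human)) # {'cup': ('on_table', 'inside_box')}
```

In this toy example, the erroneous single view is outvoted by the other two, and the human's out-of-date belief about the cup is flagged as false, mirroring the kind of cross-view error correction the abstract describes.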
Related papers
- Learning Object Properties Using Robot Proprioception via Differentiable Robot-Object Interaction [52.12746368727368]
Differentiable simulation has become a powerful tool for system identification.
Our approach calibrates object properties by using information from the robot, without relying on data from the object itself.
We demonstrate the effectiveness of our method on a low-cost robotic platform.
arXiv Detail & Related papers (2024-10-04T20:48:38Z)
- A Multi-Modal Explainability Approach for Human-Aware Robots in Multi-Party Conversation [39.87346821309096]
We present an addressee estimation model with improved performance in comparison with the previous SOTA.
We also propose several ways to incorporate explainability and transparency in the aforementioned architecture.
arXiv Detail & Related papers (2024-05-20T13:09:32Z)
- Teaching Unknown Objects by Leveraging Human Gaze and Augmented Reality in Human-Robot Interaction [3.1473798197405953]
This dissertation aims to teach a robot unknown objects in the context of Human-Robot Interaction (HRI).
The combination of eye tracking and Augmented Reality created a powerful synergy that empowered the human teacher to communicate with the robot.
The robot's object detection capabilities exhibited comparable performance to state-of-the-art object detectors trained on extensive datasets.
arXiv Detail & Related papers (2023-12-12T11:34:43Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Learning Reward Functions for Robotic Manipulation by Observing Humans [92.30657414416527]
We use unlabeled videos of humans solving a wide range of manipulation tasks to learn a task-agnostic reward function for robotic manipulation policies.
The learned rewards are based on distances to a goal in an embedding space learned using a time-contrastive objective (see the sketch after this list).
arXiv Detail & Related papers (2022-11-16T16:26:48Z)
- Reasoning about Counterfactuals to Improve Human Inverse Reinforcement Learning [5.072077366588174]
Humans naturally infer other agents' beliefs and desires by reasoning about their observable behavior.
We propose to incorporate the learner's current understanding of the robot's decision making into our model of human IRL.
We also propose a novel measure for estimating the difficulty for a human to predict instances of a robot's behavior in unseen environments.
arXiv Detail & Related papers (2022-03-03T17:06:37Z)
- INVIGORATE: Interactive Visual Grounding and Grasping in Clutter [56.00554240240515]
INVIGORATE is a robot system that interacts with humans through natural language and grasps a specified object in clutter.
We train separate neural networks for object detection, for visual grounding, for question generation, and for OBR detection and grasping.
We build a partially observable Markov decision process (POMDP) that integrates the learned neural network modules.
arXiv Detail & Related papers (2021-08-25T07:35:21Z)
- Learning User-Preferred Mappings for Intuitive Robot Control [28.183430654834307]
We propose a method for learning the human's preferred or preconceived mapping from a few robot queries.
We make this approach data-efficient by recognizing that human mappings have strong priors.
Our simulated and experimental results suggest that learning the mapping between inputs and robot actions improves objective and subjective performance.
arXiv Detail & Related papers (2020-07-22T18:54:35Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
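Relating to the "Learning Reward Functions for Robotic Manipulation by Observing Humans" entry above, the sketch below illustrates a reward defined as negative distance to a goal in a learned embedding space. The placeholder encoder, the random projection used in its place, and all shapes are assumptions for illustration and do not reproduce that paper's time-contrastive training or evaluation.

```python
import numpy as np

def embedding_distance_reward(encode, observation, goal_observation):
    """Reward = negative L2 distance between the current observation and the
    goal observation in an embedding space. `encode` stands in for a network
    trained with a time-contrastive objective (temporally close frames map to
    nearby embeddings); here it is just a placeholder callable."""
    z_obs = encode(observation)
    z_goal = encode(goal_observation)
    return -float(np.linalg.norm(z_obs - z_goal))

if __name__ == "__main__":
    # Placeholder "encoder": a fixed random projection of flattened images.
    rng = np.random.default_rng(0)
    projection = rng.normal(size=(32, 64 * 64 * 3))
    encode = lambda img: projection @ img.reshape(-1)

    goal = rng.random((64, 64, 3))
    near_goal = goal + 0.01 * rng.random((64, 64, 3))
    far_from_goal = rng.random((64, 64, 3))

    # Observations closer to the goal receive higher (less negative) reward.
    print(embedding_distance_reward(encode, near_goal, goal))
    print(embedding_distance_reward(encode, far_from_goal, goal))
```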