Knowledge Acquisition and Completion for Long-Term Human-Robot
Interactions using Knowledge Graph Embedding
- URL: http://arxiv.org/abs/2301.06834v1
- Date: Tue, 17 Jan 2023 12:23:40 GMT
- Title: Knowledge Acquisition and Completion for Long-Term Human-Robot
Interactions using Knowledge Graph Embedding
- Authors: E. Bartoli, F. Argenziano, V. Suriani, D. Nardi
- Abstract summary: We propose an architecture that gathers data from users and environments over long runs of continual learning.
We adopt Knowledge Graph Embedding techniques to generalize the acquired information with the goal of incrementally extending the robot's inner representation of the environment.
We evaluate the overall continual learning architecture by measuring the robot's ability to learn entities and relations coming from unknown contexts.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Human-Robot Interaction (HRI) systems, a challenging task is sharing the
representation of the operational environment, fusing symbolic knowledge and
perceptions, between users and robots. With existing HRI pipelines, users can
teach the robot new concepts to extend its knowledge base. Unfortunately, the
data coming from the users are usually not dense enough to build a consistent
representation. Furthermore, existing approaches cannot incrementally build up
their knowledge base, which is essential when robots have to deal with dynamic
contexts. To this end, we propose an architecture that gathers data from users
and environments over long runs of continual learning. We adopt Knowledge Graph
Embedding techniques to generalize the acquired information, with the goal of
incrementally extending the robot's inner representation of the environment. We
evaluate the overall continual learning architecture by measuring the robot's
ability to learn entities and relations coming from unknown contexts through a
series of incremental learning sessions.
Related papers
- One to rule them all: natural language to bind communication, perception and action [0.9302364070735682]
This paper presents an advanced architecture for robotic action planning that integrates communication, perception, and planning with Large Language Models (LLMs).
The Planner Module is the core of the system where LLMs embedded in a modified ReAct framework are employed to interpret and carry out user commands.
The modified ReAct framework further enhances the execution space by providing real-time environmental perception and the outcomes of physical actions.
arXiv Detail & Related papers (2024-11-22T16:05:54Z)
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction (MPI).
Experimental results demonstrate that MPI improves on the previous state of the art by 10% to 64% on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Interactive Continual Learning Architecture for Long-Term Personalization of Home Service Robots [11.648129262452116]
We develop a novel interactive architecture for continual learning of semantic knowledge in a home environment through human-robot interaction.
The architecture builds on core cognitive principles of learning and memory for efficient and real-time learning of new knowledge from humans.
arXiv Detail & Related papers (2024-03-06T04:55:39Z)
- OG-SGG: Ontology-Guided Scene Graph Generation. A Case Study in Transfer Learning for Telepresence Robotics [124.08684545010664]
Scene graph generation from images is a task of great interest to applications such as robotics.
We propose an initial approximation to a framework called Ontology-Guided Scene Graph Generation (OG-SGG).
arXiv Detail & Related papers (2022-02-21T13:23:15Z)
- Human-Robot Collaboration and Machine Learning: A Systematic Review of Recent Research [69.48907856390834]
Human-robot collaboration (HRC) is the field that explores the interaction between a human and a robot.
This paper proposes a thorough literature review of the use of machine learning techniques in the context of HRC.
arXiv Detail & Related papers (2021-10-14T15:14:33Z)
- Low Dimensional State Representation Learning with Robotics Priors in Continuous Action Spaces [8.692025477306212]
Reinforcement learning algorithms have proven to be capable of solving complicated robotics tasks in an end-to-end fashion.
We propose a framework combining the learning of a low-dimensional state representation, from high-dimensional observations coming from the robot's raw sensory readings, with the learning of the optimal policy.
arXiv Detail & Related papers (2021-07-04T15:42:01Z)
- A Road-map to Robot Task Execution with the Functional Object-Oriented Network [77.93376696738409]
The functional object-oriented network (FOON) is a knowledge graph representation for robots.
Taking the form of a bipartite graph, a FOON contains symbolic or high-level information that would be pertinent to a robot's understanding of its environment and tasks.
arXiv Detail & Related papers (2021-06-01T00:43:04Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have proven to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Dynamic Knowledge Graphs as Semantic Memory Model for Industrial Robots [0.7863638253070437]
We present a model for semantic memory that allows machines to collect information and experiences to become more proficient with time.
After a semantic analysis of the data, information is stored in a knowledge graph which is used to comprehend instructions, expressed in natural language, and execute the required tasks in a deterministic manner.
arXiv Detail & Related papers (2021-01-04T17:15:30Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.