Towards Abstract Relational Learning in Human Robot Interaction
- URL: http://arxiv.org/abs/2011.10364v1
- Date: Fri, 20 Nov 2020 12:06:46 GMT
- Title: Towards Abstract Relational Learning in Human Robot Interaction
- Authors: Mohamadreza Faridghasemnia, Daniele Nardi, Alessandro Saffiotti
- Abstract summary: Humans have a rich representation of the entities in their environment.
If robots need to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way.
In this work, we address the problem of how to obtain these representations through human-robot interaction.
- Score: 73.67226556788498
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans have a rich representation of the entities in their environment.
Entities are described by their attributes, and entities that share attributes
are often semantically related. For example, if two books have "Natural
Language Processing" as the value of their "title" attribute, we can expect
that their "topic" attribute will also be equal, namely, "NLP". Humans tend to
generalize such observations, and infer sufficient conditions under which the
"topic" attribute of any entity is "NLP". If robots need to interact
successfully with humans, they need to represent entities, attributes, and
generalizations in a similar way. This results in a contextualized cognitive agent
that can adapt its understanding, where context provides sufficient conditions
for a correct understanding. In this work, we address the problem of how to
obtain these representations through human-robot interaction. We integrate
visual perception and natural language input to incrementally build a semantic
model of the world, and then use inductive reasoning to infer logical rules
that capture generic semantic relations, true in this model. These relations
can be used to enrich the human-robot interaction, to populate a knowledge base
with inferred facts, or to remove uncertainty in the robot's sensory inputs.
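As a rough illustration of the rule-induction step described in the abstract (not the authors' actual pipeline), the sketch below assumes entities are stored as attribute-value dictionaries accumulated from perception and dialogue, and searches for implications of the form attribute=value implies attribute=value that hold without exception in the current world model, echoing the "title" to "topic" example. The entity records and the `induce_rules` helper are illustrative assumptions.

```python
# Minimal sketch of attribute-based rule induction over a semantic world model.
# The entity records and the induce_rules helper are illustrative assumptions,
# not the interface used in the paper.
from itertools import permutations
from typing import Dict, List, Tuple

Entity = Dict[str, str]           # attribute name -> attribute value
Rule = Tuple[str, str, str, str]  # (if_attr, if_val, then_attr, then_val)

def induce_rules(world: List[Entity], min_support: int = 2) -> List[Rule]:
    """Find rules 'if_attr = if_val -> then_attr = then_val' that hold for
    every entity carrying if_attr = if_val, with at least min_support examples."""
    rules: List[Rule] = []
    attrs = {a for e in world for a in e}
    for if_attr, then_attr in permutations(attrs, 2):
        # Group the observed consequent values by antecedent value.
        by_val: Dict[str, set] = {}
        support: Dict[str, int] = {}
        for e in world:
            if if_attr in e and then_attr in e:
                by_val.setdefault(e[if_attr], set()).add(e[then_attr])
                support[e[if_attr]] = support.get(e[if_attr], 0) + 1
        for if_val, then_vals in by_val.items():
            # A rule is kept only if the consequent is unique and well supported.
            if len(then_vals) == 1 and support[if_val] >= min_support:
                rules.append((if_attr, if_val, then_attr, next(iter(then_vals))))
    return rules

if __name__ == "__main__":
    # Facts incrementally asserted from perception and dialogue (toy data).
    world = [
        {"type": "book", "title": "Natural Language Processing", "topic": "NLP"},
        {"type": "book", "title": "Natural Language Processing", "topic": "NLP"},
        {"type": "book", "title": "Pattern Recognition", "topic": "ML"},
    ]
    for rule in induce_rules(world):
        print("if {}={!r} then {}={!r}".format(*rule))
```

On the toy facts above the sketch recovers the "title" implies "topic" generalization from the example; in the paper such regularities would be expressed as logical rules grounded in visual and linguistic input rather than hand-written dictionaries.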
Related papers
- Learning Human-like Representations to Enable Learning Human Values [12.628307026004656]
We argue that representational alignment between humans and AI agents facilitates value alignment.
We focus on ethics as one aspect of value alignment and train ML agents using a variety of methods.
arXiv Detail & Related papers (2023-12-21T18:31:33Z)
- Compositional Zero-Shot Learning for Attribute-Based Object Reference in Human-Robot Interaction [0.0]
Language-enabled robots must be able to comprehend referring expressions to identify a particular object from visual perception.
Visual observations of an object may not be available when it is referred to, and the number of objects and attributes may also be unbounded in open worlds.
We implement an attribute-based zero-shot learning method that uses a list of attributes to perform referring expression comprehension in open worlds.
arXiv Detail & Related papers (2023-12-21T08:29:41Z)
- Enhancing Interpretability and Interactivity in Robot Manipulation: A Neurosymbolic Approach [0.0]
We present a neurosymbolic architecture for coupling language-guided visual reasoning with robot manipulation.
A non-expert human user can prompt the robot using unconstrained natural language, providing a referring expression (REF), a question (VQA) or a grasp action instruction.
We generate a 3D vision-and-language synthetic dataset of tabletop scenes in a simulation environment to train our approach and perform extensive evaluations in both synthetic and real-world scenes.
arXiv Detail & Related papers (2022-10-03T12:21:45Z)
- Context Limitations Make Neural Language Models More Human-Like [32.488137777336036]
We show discrepancies in context access between modern neural language models (LMs) and humans in incremental sentence processing.
An additional context limitation was needed to make LMs better simulate human reading behavior.
Our analyses also showed that human-LM gaps in memory access are associated with specific syntactic constructions.
arXiv Detail & Related papers (2022-05-23T17:01:13Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Emergence of Machine Language: Towards Symbolic Intelligence with Neural Networks [73.94290462239061]
We propose to combine symbolism and connectionism principles by using neural networks to derive a discrete representation.
By designing an interactive environment and task, we demonstrated that machines could generate a spontaneous, flexible, and semantic language.
arXiv Detail & Related papers (2022-01-14T14:54:58Z)
- Exemplars-guided Empathetic Response Generation Controlled by the Elements of Human Communication [88.52901763928045]
We propose an approach that relies on exemplars to cue the generative model on fine stylistic properties that signal empathy to the interlocutor.
We empirically show that these approaches yield significant improvements in empathetic response quality in terms of both automated and human-evaluated metrics.
arXiv Detail & Related papers (2021-06-22T14:02:33Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- A model of interaction semantics [0.0]
I structure the model of interaction semantics similarly to the semantics of a formal language.
I arrive at a model of interaction semantics which, in the sense of the late Ludwig Wittgenstein, can do without a 'mental' mapping from characters to concepts.
arXiv Detail & Related papers (2020-07-13T09:22:59Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)