Bio-inspired robot perception coupled with robot-modeled human perception
- URL: http://arxiv.org/abs/2109.00097v1
- Date: Tue, 31 Aug 2021 22:22:55 GMT
- Title: Bio-inspired robot perception coupled with robot-modeled human perception
- Authors: Tobias Fischer
- Abstract summary: My overarching research goal is to provide robots with perceptual abilities that allow interactions with humans in a human-like manner.
I use the principles of the human visual system to develop new computer vision algorithms.
- Score: 4.534608952448841
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: My overarching research goal is to provide robots with perceptual abilities
that allow interactions with humans in a human-like manner. To develop these
perceptual abilities, I believe that it is useful to study the principles of
the human visual system. I use these principles to develop new computer vision
algorithms and validate their effectiveness in intelligent robotic systems. I
am enthusiastic about this approach as it offers the dual benefit of uncovering
principles inherent in the human visual system, as well as applying these
principles to its artificial counterpart. Fig. 1 contains a depiction of my
research.
Related papers
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with the environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- SACSoN: Scalable Autonomous Control for Social Navigation [62.59274275261392]
We develop methods for training policies for socially unobtrusive navigation.
By minimizing this counterfactual perturbation, we can induce robots to behave in ways that do not alter the natural behavior of humans in the shared space.
We collect a large dataset where an indoor mobile robot interacts with human bystanders.
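The counterfactual idea can be illustrated with a toy sketch (our illustration, not SACSoN's implementation; all names are hypothetical): the perturbation is the deviation between the human trajectory observed with the robot present and a model's prediction of that trajectory had the robot been absent, so an unobtrusive policy drives it toward zero.

```python
import numpy as np

def counterfactual_perturbation(observed_traj, predicted_traj_no_robot):
    """Hypothetical perturbation measure: mean squared deviation between the
    human trajectory observed with the robot present and a prediction of the
    same trajectory without the robot."""
    observed = np.asarray(observed_traj, dtype=float)
    counterfactual = np.asarray(predicted_traj_no_robot, dtype=float)
    return float(np.mean(np.sum((observed - counterfactual) ** 2, axis=-1)))

# A robot that leaves the human's natural path unchanged scores 0;
# a robot that forces a detour scores higher.
natural = [[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]]
detour  = [[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]]
unobtrusive = counterfactual_perturbation(natural, natural)
obtrusive = counterfactual_perturbation(detour, natural)
```

Minimizing this quantity over robot policies would then favor behavior that does not alter the humans' natural motion.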
arXiv Detail & Related papers (2023-06-02T19:07:52Z)
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
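As a rough illustration of this kind of multimodal fusion (a minimal sketch with made-up encoders and dimensions, not the paper's architecture): each modality is embedded separately and the embeddings are concatenated into one feature vector for a downstream manipulation policy.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(raw, weight):
    # Stand-in for a learned per-modality encoder.
    return np.tanh(raw @ weight)

# Hypothetical projection weights: each modality maps to an 8-d embedding.
w_vision = rng.normal(size=(16, 8))  # camera features
w_audio  = rng.normal(size=(12, 8))  # contact-microphone features
w_touch  = rng.normal(size=(10, 8))  # vision-based tactile features

def fuse(vision, audio, touch):
    """Late fusion: concatenate the modality embeddings."""
    return np.concatenate([
        encode(vision, w_vision),
        encode(audio, w_audio),
        encode(touch, w_touch),
    ])

fused = fuse(rng.normal(size=16), rng.normal(size=12), rng.normal(size=10))
```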
arXiv Detail & Related papers (2022-12-07T18:55:53Z)
- Neuroscience-inspired perception-action in robotics: applying active inference for state estimation, control and self-perception [2.1067139116005595]
We discuss how neuroscience findings open up opportunities to improve current estimation and control algorithms in robotics.
This paper summarizes some experiments and lessons learned from developing such a computational model on real embodied platforms.
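A minimal, textbook-style sketch of the active-inference flavor of state estimation (a toy of ours, not the paper's model): the belief about a hidden state descends the gradient of a precision-weighted prediction error, settling between the prior expectation and the noisy observation.

```python
def estimate_state(y, prior, pi_y=1.0, pi_prior=1.0, lr=0.1, steps=200):
    """Gradient descent on a toy free energy
    F = pi_y*(y - mu)**2/2 + pi_prior*(mu - prior)**2/2,
    where pi_y and pi_prior are the precisions (inverse variances)
    of the observation and the prior."""
    mu = prior
    for _ in range(steps):
        dF = -pi_y * (y - mu) + pi_prior * (mu - prior)
        mu -= lr * dF
    return mu

# With equal precisions the belief settles midway between prior (0) and
# observation (2), i.e. at 1.0.
mu = estimate_state(y=2.0, prior=0.0)
```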
arXiv Detail & Related papers (2021-05-10T10:59:38Z)
- Sensorimotor representation learning for an "active self" in robots: A model survey [10.649413494649293]
In humans, these capabilities are thought to be related to our ability to perceive our body in space.
This paper reviews the developmental processes of underlying mechanisms of these abilities.
We propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents.
arXiv Detail & Related papers (2020-11-25T16:31:01Z)
- Enabling the Sense of Self in a Dual-Arm Robot [2.741266294612776]
We present a neural network architecture that enables a dual-arm robot to get a sense of itself in an environment.
We demonstrate experimentally that a robot can distinguish itself with an accuracy of 88.7% on average in cluttered environmental settings.
arXiv Detail & Related papers (2020-11-13T17:25:07Z)
- Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks [83.37025218216888]
We propose a novel explainable AI (XAI) framework for achieving human-like communication in human-robot collaborations.
The robot builds a hierarchical mind model of the human user and generates explanations of its own mind as a form of communications.
Results show that the explanations generated by our approach significantly improve collaboration performance and user perception of the robot.
arXiv Detail & Related papers (2020-07-24T23:35:03Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI)
This article deals with the aspects of modeling commonsense reasoning focusing on such domain as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Learning to Complement Humans [67.38348247794949]
A rising vision for AI in the open world centers on the development of systems that can complement humans for perceptual, diagnostic, and reasoning tasks.
We demonstrate how an end-to-end learning strategy can be harnessed to optimize the combined performance of human-machine teams.
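One simple way to see why optimizing the team rather than the machine alone can pay off (a toy deferral policy of ours, not the paper's end-to-end method): the machine answers only when confident and defers to the human otherwise, so team accuracy can exceed what either achieves alone.

```python
def team_accuracy(cases, confidence_threshold=0.8):
    """Each case is (machine_confidence, machine_correct, human_correct),
    with correctness encoded as 0/1. The machine decides when its confidence
    clears the threshold; otherwise the human decides."""
    correct = 0
    for machine_conf, machine_right, human_right in cases:
        correct += machine_right if machine_conf >= confidence_threshold else human_right
    return correct / len(cases)

cases = [
    (0.95, 1, 0),  # machine sure and right, human would be wrong
    (0.90, 1, 1),
    (0.60, 0, 1),  # machine unsure and wrong -> defer to human
    (0.55, 0, 1),
]
acc = team_accuracy(cases)
```

Here the machine alone scores 0.5 and the human alone 0.75, while the team scores 1.0; an end-to-end approach would additionally learn the model and deferral behavior jointly.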
arXiv Detail & Related papers (2020-05-01T20:00:23Z)
- Robot self/other distinction: active inference meets neural networks learning in a mirror [9.398766540452632]
We present an algorithm that enables a robot to perform non-appearance self-recognition on a mirror.
The algorithm combines active inference, a theoretical model of perception and action in the brain, with neural network learning.
Experimental results on a humanoid robot show the reliability of the algorithm for different initial conditions.
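The core intuition behind non-appearance self-recognition can be sketched in a few lines (a hypothetical toy of ours, not the paper's algorithm): the robot predicts the visual motion its motor commands should cause, and labels motion in the mirror as "self" when it tracks that prediction.

```python
import numpy as np

def is_self(motor_commands, observed_motion, threshold=0.9):
    """Classify observed motion as self-generated if it correlates strongly
    with the motion predicted from the robot's own motor commands.
    (The forward model is the identity here, purely for illustration.)"""
    pred = np.asarray(motor_commands, dtype=float)
    obs = np.asarray(observed_motion, dtype=float)
    corr = np.corrcoef(pred, obs)[0, 1]
    return bool(corr > threshold)

commands = [0.1, -0.3, 0.5, 0.0, -0.2, 0.4]
mirror   = [0.11, -0.28, 0.52, 0.01, -0.19, 0.41]  # tracks the commands
stranger = [0.4, 0.1, -0.3, 0.2, 0.0, -0.5]        # unrelated motion
```

In the active-inference framing, this correlation check is replaced by minimizing the prediction error of a learned forward model.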
arXiv Detail & Related papers (2020-04-11T19:51:47Z)
- A Model of Fast Concept Inference with Object-Factorized Cognitive Programs [3.4763296976688443]
We present an algorithm that emulates the human cognitive processes of object factorization and sub-goaling, achieving human-level inference speed, improved accuracy, and more explainable output.
arXiv Detail & Related papers (2020-02-10T18:48:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.