Revisiting Proprioceptive Sensing for Articulated Object Manipulation
- URL: http://arxiv.org/abs/2305.09584v1
- Date: Tue, 16 May 2023 16:31:10 GMT
- Title: Revisiting Proprioceptive Sensing for Articulated Object Manipulation
- Authors: Thomas Lips, Francis wyffels
- Abstract summary: We create a system that uses proprioceptive sensing to open cabinets with a position-controlled robot and a parallel gripper.
We perform a qualitative evaluation of this system, where we find that slip between the gripper and handle limits the performance.
This poses the question: should we make more use of proprioceptive information during contact in articulated object manipulation systems, or is it not worth the added complexity, and can we manage with vision alone?
- Score: 1.4772379476524404
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robots that assist humans will need to interact with articulated objects such as cabinets or microwaves. Early work on such systems used proprioceptive sensing to estimate joint mechanisms during contact. Nowadays, however, almost all systems rely on vision alone and no longer consider proprioceptive information during contact. We believe that proprioceptive information during contact is a valuable source of information, and we did not find a clear motivation in the literature for abandoning it. Therefore, in this paper, we create a system that, starting from a given grasp, uses proprioceptive sensing to open cabinets with a position-controlled robot and a parallel gripper. We perform a qualitative evaluation of this system and find that slip between the gripper and the handle limits its performance. Nonetheless, the system already performs quite well. This raises the question: should we make more use of proprioceptive information during contact in articulated object manipulation systems, or is it not worth the added complexity, and can we manage with vision alone? We do not have an answer to this question, but we hope to spark some discussion on the matter. The codebase and videos of the system are available at https://tlpss.github.io/revisiting-proprioception-for-articulated-manipulation/.
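To make the idea of "proprioceptive sensing during contact" concrete, below is a minimal sketch (not the paper's implementation) of how a joint mechanism could be estimated purely from the gripper positions reported by a position-controlled arm while it holds a handle. It assumes a revolute (hinged) door swinging in a roughly horizontal plane and uses a simple algebraic (Kasa) circle fit; the function names, the waypoint step size, and the circle-fitting choice are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's code): estimate a revolute (hinge) joint
# from proprioception alone, i.e. from the sequence of gripper (TCP)
# positions recorded while a position-controlled arm holds the handle.
# Assumes the door swings in an approximately horizontal (XY) plane.
import numpy as np


def fit_hinge(tcp_positions: np.ndarray):
    """Fit a circle to TCP positions (N x 3) projected onto the XY plane.

    Algebraic (Kasa) fit: solve [2x 2y 1] @ [a b c]^T = x^2 + y^2 in the
    least-squares sense; the hinge lies at (a, b) and radius = sqrt(c + a^2 + b^2).
    """
    x, y = tcp_positions[:, 0], tcp_positions[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    center = np.array([a, b])
    radius = float(np.sqrt(c + a**2 + b**2))
    return center, radius


def next_waypoint(center, radius, current_xy, step_rad=0.05):
    """Advance the gripper a small angle further along the estimated arc."""
    angle = np.arctan2(current_xy[1] - center[1], current_xy[0] - center[0])
    return center + radius * np.array([np.cos(angle + step_rad), np.sin(angle + step_rad)])


if __name__ == "__main__":
    # Hypothetical data: a noisy partial arc of TCP positions around a hinge
    # at the origin with a 0.5 m door radius, recorded at constant height.
    theta = np.linspace(0.0, 0.4, 20)
    arc = np.column_stack([0.5 * np.cos(theta), 0.5 * np.sin(theta), np.full_like(theta, 0.3)])
    center, radius = fit_hinge(arc + 1e-3 * np.random.randn(*arc.shape))
    print(center, radius)                               # approximately (0, 0) and 0.5
    print(next_waypoint(center, radius, arc[-1, :2]))   # next target along the arc
```

In practice the opening direction (the sign of step_rad) and compensation for gripper-handle slip would have to come from the actual contact behavior; the sketch only illustrates how pose feedback alone can constrain the joint model.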
Related papers
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
- DetGPT: Detect What You Need via Reasoning [33.00345609506097]
We introduce a new paradigm for object detection that we call reasoning-based object detection.
Unlike conventional object detection methods that rely on specific object names, our approach enables users to interact with the system using natural language instructions.
Our proposed method, called DetGPT, leverages state-of-the-art multi-modal models and open-vocabulary object detectors.
arXiv Detail & Related papers (2023-05-23T15:37:28Z)
- Characterizing Manipulation from AI Systems [7.344068411174193]
We build upon prior literature on manipulation from other fields and characterize the space of possible notions of manipulation.
We propose a definition of manipulation based on our characterization.
We also discuss the connections between manipulation and related concepts, such as deception and coercion.
arXiv Detail & Related papers (2023-03-16T15:19:21Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
arXiv Detail & Related papers (2022-12-07T18:55:53Z)
- Robots-Dont-Cry: Understanding Falsely Anthropomorphic Utterances in Dialog Systems [64.10696852552103]
Highly anthropomorphic responses might make users uncomfortable or implicitly deceive them into thinking they are interacting with a human.
We collect human ratings on the feasibility of approximately 900 two-turn dialogs sampled from 9 diverse data sources.
arXiv Detail & Related papers (2022-10-22T12:10:44Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- The R-U-A-Robot Dataset: Helping Avoid Chatbot Deception by Detecting User Questions About Human or Non-Human Identity [41.43519695929595]
We aim to understand how system designers might allow their systems to confirm their non-human identity.
We collect over 2,500 phrasings related to the intent of "Are you a robot?"
We compare classifiers to recognize the intent and discuss the precision/recall and model complexity tradeoffs.
arXiv Detail & Related papers (2021-06-04T20:04:33Z)
- Careful with That! Observation of Human Movements to Estimate Objects Properties [106.925705883949]
We focus on the features of human motor actions that communicate insights on the weight of an object.
Our final goal is to enable a robot to autonomously infer the degree of care required in object handling.
arXiv Detail & Related papers (2021-03-02T08:14:56Z)
- Understanding Contexts Inside Robot and Human Manipulation Tasks through a Vision-Language Model and Ontology System in a Video Stream [4.450615100675747]
We present a vision dataset under a strictly constrained knowledge domain for both robot and human manipulations.
We propose a scheme to generate a combination of visual attentions and an evolving knowledge graph filled with commonsense knowledge.
The proposed scheme allows the robot to mimic human-like intentional behaviors by watching real-time videos.
arXiv Detail & Related papers (2020-03-02T19:48:59Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.