Robotic Grasping of Fully-Occluded Objects using RF Perception
- URL: http://arxiv.org/abs/2012.15436v1
- Date: Thu, 31 Dec 2020 04:01:45 GMT
- Title: Robotic Grasping of Fully-Occluded Objects using RF Perception
- Authors: Tara Boroushaki, Junshan Leng, Ian Clester, Alberto Rodriguez, Fadel
Adib
- Abstract summary: RF-Grasp is a robotic system that can grasp fully-occluded objects in unstructured environments.
RF-Grasp relies on an eye-in-hand camera and batteryless RFID tags attached to objects of interest.
- Score: 18.339320861642722
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We present the design, implementation, and evaluation of RF-Grasp, a robotic
system that can grasp fully-occluded objects in unknown and unstructured
environments. Unlike prior systems that are constrained by the line-of-sight
perception of vision and infrared sensors, RF-Grasp employs RF (Radio
Frequency) perception to identify and locate target objects through occlusions,
and perform efficient exploration and complex manipulation tasks in
non-line-of-sight settings.
RF-Grasp relies on an eye-in-hand camera and batteryless RFID tags attached
to objects of interest. It introduces two main innovations: (1) an RF-visual
servoing controller that uses the RFID's location to selectively explore the
environment and plan an efficient trajectory toward an occluded target, and (2)
an RF-visual deep reinforcement learning network that can learn and execute
efficient, complex policies for decluttering and grasping.
We implemented and evaluated an end-to-end physical prototype of RF-Grasp and
a state-of-the-art baseline. We demonstrate that RF-Grasp improves success rate and
efficiency by up to 40-50% over the baseline in cluttered settings. We also demonstrate
RF-Grasp in novel tasks such as mechanical search of fully-occluded objects behind
obstacles, opening up new possibilities for robotic manipulation.
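The abstract describes the RF-visual servoing controller only at a high level. As a rough sketch of the general idea — steering an eye-in-hand camera toward a target whose 3D location is estimated from its RFID tag, even while the target is not yet visible — the following biases exploration toward the RF-derived location and detours around occlusions. All names (rf_visual_servoing_step, blocked, etc.) are hypothetical, not the paper's code.

```python
import numpy as np

def rf_visual_servoing_step(cam_pos, rfid_pos, blocked, step=0.05, samples=16):
    """One step of a hypothetical RF-biased exploration policy.

    cam_pos:  current eye-in-hand camera position, shape (3,)
    rfid_pos: target's 3D location estimated from its RFID tag, shape (3,)
    blocked:  callable(point) -> True if the point lies inside an obstacle
    Returns the next waypoint for the camera.
    """
    direction = rfid_pos - cam_pos
    dist = np.linalg.norm(direction)
    if dist < step:
        return rfid_pos  # close enough: servo directly onto the target

    candidate = cam_pos + step * direction / dist
    if not blocked(candidate):
        return candidate  # free space: move straight toward the tag

    # Occluded: sample small detours and keep the reachable one that makes
    # the most progress toward the RF-estimated target location.
    best, best_dist = cam_pos, dist
    for _ in range(samples):
        detour = candidate + np.random.normal(scale=step, size=3)
        if blocked(detour):
            continue
        d = np.linalg.norm(rfid_pos - detour)
        if d < best_dist:
            best, best_dist = detour, d
    return best
```

The paper's actual controller and its RL-based decluttering policy are far richer; this only illustrates how an RF location prior can shortcut blind exploration.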
Related papers
- A Cross-Scene Benchmark for Open-World Drone Active Tracking [54.235808061746525]
Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations.
We propose a unified cross-scene cross-domain benchmark for open-world drone active tracking called DAT.
We also propose a reinforcement learning-based drone tracking method called R-VAT.
arXiv Detail & Related papers (2024-12-01T09:37:46Z)
- Renormalized Connection for Scale-preferred Object Detection in Satellite Imagery [51.83786195178233]
We design a Knowledge Discovery Network (KDN) that implements renormalization group theory for efficient feature extraction.
The renormalized connection (RC) on the KDN enables "synergistic focusing" of multi-scale features.
RCs extend the "divide-and-conquer" mechanism of multi-level features in FPN-based detectors to a wide range of scale-preferred tasks.
arXiv Detail & Related papers (2024-09-09T13:56:22Z)
- Multi-Stage Fusion Architecture for Small-Drone Localization and Identification Using Passive RF and EO Imagery: A Case Study [0.1872664641238533]
This work develops a multi-stage fusion architecture using passive radio frequency (RF) and electro-optic (EO) imagery data.
Supervised deep learning based techniques and unsupervised foreground/background separation techniques are explored to cope with challenging environments.
The proposed fusion architecture is tested, and tracking performance is quantified over the operating range.
arXiv Detail & Related papers (2024-03-30T22:53:28Z)
- Radio Frequency Fingerprinting via Deep Learning: Challenges and Opportunities [4.800138615859937]
Radio Frequency Fingerprinting (RFF) techniques promise to authenticate wireless devices at the physical layer based on inherent hardware imperfections introduced during manufacturing.
Recent advances in Machine Learning, particularly in Deep Learning (DL), have improved the ability of RFF systems to extract and learn complex features that make up the device-specific fingerprint.
This paper systematically identifies and analyzes the essential considerations and challenges encountered in the creation of DL-based RFF systems.
arXiv Detail & Related papers (2023-10-25T06:45:49Z)
- RF-Annotate: Automatic RF-Supervised Image Annotation of Common Objects in Context [0.25019493958767397]
Wireless tags are increasingly used to track and identify common items of interest such as retail goods, food, medicine, clothing, books, documents, keys, equipment, and more.
We present RF-Annotate, a pipeline for autonomous pixel-wise image annotation which enables robots to collect labelled visual data of objects of interest as they encounter them within their environment.
arXiv Detail & Related papers (2022-11-16T11:25:38Z)
- GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF [7.47805672405939]
We propose a multiview RGB-based 6-DoF grasp detection network, GraspNeRF, to achieve material-agnostic object grasping in clutter.
Compared to existing NeRF-based 3-DoF grasp detection methods, our system can perform zero-shot NeRF construction with sparse RGB inputs and reliably detect 6-DoF grasps, both in real time.
For training data, we generate a large-scale photorealistic domain-randomized synthetic dataset of grasping in cluttered tabletop scenes.
arXiv Detail & Related papers (2022-10-12T20:31:23Z)
- NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields [54.27264716713327]
We show that a Neural Radiance Fields (NeRF) representation of a scene can be used to train dense object descriptors.
We use an optimized NeRF to extract dense correspondences between multiple views of an object, and then use these correspondences as training data for learning a view-invariant representation of the object.
Dense correspondence models supervised with our method significantly outperform off-the-shelf learned descriptors by 106%.
arXiv Detail & Related papers (2022-03-03T18:49:57Z)
- RF-Net: a Unified Meta-learning Framework for RF-enabled One-shot Human Activity Recognition [9.135311655929366]
Device-free (or contactless) sensing is more sensitive to environment changes than device-based (or wearable) sensing.
Existing solutions to RF-HAR entail a laborious data collection process for adapting to new environments.
We propose RF-Net, a meta-learning based approach to one-shot RF-HAR that reduces the labeling effort needed for environment adaptation to a minimum.
arXiv Detail & Related papers (2021-10-29T01:58:29Z)
- Infrared Small-Dim Target Detection with Transformer under Complex Backgrounds [155.388487263872]
We propose a new infrared small-dim target detection method with the transformer.
We adopt the self-attention mechanism of the transformer to learn the interaction information of image features in a larger range.
We also design a feature enhancement module to learn more features of small-dim targets.
arXiv Detail & Related papers (2021-09-29T12:23:41Z)
- Achieving Real-Time LiDAR 3D Object Detection on a Mobile Device [53.323878851563414]
We propose a compiler-aware unified framework incorporating network enhancement and pruning search with the reinforcement learning techniques.
Specifically, a generator Recurrent Neural Network (RNN) is employed to provide the unified scheme for both network enhancement and pruning search automatically.
The proposed framework achieves real-time 3D object detection on mobile devices with competitive detection performance.
arXiv Detail & Related papers (2020-12-26T19:41:15Z)
- Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
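As context for the uncertainty-aware collision avoidance this last entry describes, one common pattern is to inflate an obstacle's keep-out radius by a multiple of the standard deviation predicted for its future position. The sketch below assumes that pattern and uses invented names (inflated_clearance, obs_mean, obs_cov); it is an illustration, not the paper's formulation.

```python
import numpy as np

def inflated_clearance(waypoints, obs_mean, obs_cov, robot_radius, k=2.0):
    """Check planned waypoints against an obstacle with uncertain future positions.

    waypoints: planned robot positions per step, shape (T, 2)
    obs_mean:  predicted obstacle positions per step, shape (T, 2)
    obs_cov:   predicted position covariances per step (e.g. from an RNN),
               shape (T, 2, 2)
    Returns a boolean array: True where the waypoint keeps k-sigma clearance.
    """
    ok = []
    for p, mu, cov in zip(waypoints, obs_mean, obs_cov):
        # Worst-case standard deviation: sqrt of the largest eigenvalue.
        sigma_max = np.sqrt(np.max(np.linalg.eigvalsh(cov)))
        ok.append(np.linalg.norm(p - mu) > robot_radius + k * sigma_max)
    return np.array(ok)
```

An MPC layer would typically treat a False entry as a violated constraint and re-plan; a larger k trades efficiency for risk aversion.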
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.