F-SIOL-310: A Robotic Dataset and Benchmark for Few-Shot Incremental
Object Learning
- URL: http://arxiv.org/abs/2103.12242v1
- Date: Tue, 23 Mar 2021 00:25:50 GMT
- Authors: Ali Ayub, Alan R. Wagner
- Abstract summary: We present F-SIOL-310 (Few-Shot Incremental Object Learning) for testing few-shot incremental object learning capability for robotic vision.
We also provide benchmarks and evaluations of 8 incremental learning algorithms on F-SIOL-310 for future comparisons.
- Score: 9.89901717499058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has achieved remarkable success in object recognition tasks
through the availability of large scale datasets like ImageNet. However, deep
learning systems suffer from catastrophic forgetting when learning
incrementally without replaying old data. For real-world applications, robots
also need to incrementally learn new objects. Further, since robots have
limited human assistance available, they must learn from only a few examples.
However, very few object recognition datasets and benchmarks exist to test
incremental learning capability for robotic vision. Further, there is no
dataset or benchmark specifically designed for incremental object learning from
a few examples. To fill this gap, we present a new dataset termed F-SIOL-310
(Few-Shot Incremental Object Learning) which is specifically captured for
testing few-shot incremental object learning capability for robotic vision. We
also provide benchmarks and evaluations of 8 incremental learning algorithms on
F-SIOL-310 for future comparisons. Our results demonstrate that the few-shot
incremental object learning problem for robotic vision is far from being
solved.
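The few-shot class-incremental protocol the abstract describes can be sketched as follows. This is an illustrative toy only: synthetic Gaussian features and a simple nearest-class-mean learner stand in for the F-SIOL-310 images and the 8 benchmarked algorithms, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the dataset: 10 hypothetical classes, each a Gaussian
# blob in feature space (real experiments would use image features).
NUM_CLASSES, DIM, K_SHOT = 10, 16, 5
means = rng.normal(size=(NUM_CLASSES, DIM)) * 5.0

def sample(cls, n):
    """Draw n synthetic feature vectors for class `cls`."""
    return means[cls] + rng.normal(size=(n, DIM))

# Nearest-class-mean learner: one prototype per class, so new classes
# can be added incrementally without replaying old data.
prototypes = {}

def learn_increment(cls, examples):
    prototypes[cls] = examples.mean(axis=0)

def predict(x):
    classes = list(prototypes)
    dists = [np.linalg.norm(x - prototypes[c]) for c in classes]
    return classes[int(np.argmin(dists))]

# Few-shot incremental protocol: classes arrive one at a time with only
# K_SHOT examples each; after every increment the model is evaluated on
# ALL classes seen so far, which exposes catastrophic forgetting.
for t in range(NUM_CLASSES):
    learn_increment(t, sample(t, K_SHOT))
    test = [(sample(c, 1)[0], c) for c in range(t + 1) for _ in range(20)]
    acc = np.mean([predict(x) == y for x, y in test])
    print(f"after increment {t + 1}: accuracy on seen classes = {acc:.2f}")
```

A replay-free learner that overwrote a shared representation at each increment would instead show accuracy decaying as classes accumulate, which is the failure mode such benchmarks measure.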
Related papers
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Lifelong Ensemble Learning based on Multiple Representations for
Few-Shot Object Recognition [6.282068591820947]
We present a lifelong ensemble learning approach based on multiple representations to address the few-shot object recognition problem.
To facilitate lifelong learning, each approach is equipped with a memory unit for storing and retrieving object information instantly.
We have performed extensive experiments to assess the performance of the proposed approach in offline and open-ended scenarios.
arXiv Detail & Related papers (2022-05-04T10:29:10Z)
- REGRAD: A Large-Scale Relational Grasp Dataset for Safe and
Object-Specific Robotic Grasping in Clutter [52.117388513480435]
We present a new dataset named REGRAD to support modeling of the relationships between objects and grasps.
Our dataset is collected in both forms of 2D images and 3D point clouds.
Users are free to import their own object models to generate as much data as they want.
arXiv Detail & Related papers (2021-04-29T05:31:21Z)
- Actionable Models: Unsupervised Offline Reinforcement Learning of
Robotic Skills [93.12417203541948]
We propose the objective of learning a functional understanding of the environment by learning to reach any goal state in a given dataset.
We find that our method can operate on high-dimensional camera images and learn a variety of skills on real robots that generalize to previously unseen scenes and objects.
arXiv Detail & Related papers (2021-04-15T20:10:11Z)
- ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition [21.594641488685376]
We present the ORBIT dataset and benchmark, grounded in a real-world application of teachable object recognizers for people who are blind/low vision.
The dataset contains 3,822 videos of 486 objects recorded by people who are blind/low-vision on their mobile phones.
The benchmark reflects a realistic, highly challenging recognition problem, providing a rich playground to drive research in robustness to few-shot, high-variation conditions.
arXiv Detail & Related papers (2021-04-08T15:32:01Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- A Framework for Efficient Robotic Manipulation [79.10407063260473]
We show that, given only 10 demonstrations, a single robotic arm can learn sparse-reward manipulation policies from pixels.
arXiv Detail & Related papers (2020-12-14T22:18:39Z)
- An Overview of Deep Learning Architectures in Few-Shot Learning Domain [0.0]
Few-Shot Learning (of which one-shot learning is the extreme case) is a sub-field of machine learning that aims to create models that can learn the desired objective from only a few examples.
We have reviewed some of the well-known deep learning-based approaches towards few-shot learning.
arXiv Detail & Related papers (2020-08-12T06:58:45Z)
- Visual Imitation Made Easy [102.36509665008732]
We present an alternate interface for imitation that simplifies the data collection process while allowing for easy transfer to robots.
We use commercially available reacher-grabber assistive tools both as a data collection device and as the robot's end-effector.
We experimentally evaluate on two challenging tasks: non-prehensile pushing and prehensile stacking, with 1000 diverse demonstrations for each task.
arXiv Detail & Related papers (2020-08-11T17:58:50Z)
- Tell me what this is: Few-Shot Incremental Object Learning by a Robot [22.387008072671005]
This paper presents a system for incrementally training a robot to recognize different object categories.
The paper uses a recently developed state-of-the-art method for few-shot incremental learning of objects.
arXiv Detail & Related papers (2020-07-15T04:42:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.