The ObjectFolder Benchmark: Multisensory Learning with Neural and Real
Objects
- URL: http://arxiv.org/abs/2306.00956v1
- Date: Thu, 1 Jun 2023 17:51:22 GMT
- Title: The ObjectFolder Benchmark: Multisensory Learning with Neural and Real
Objects
- Authors: Ruohan Gao, Yiming Dou, Hao Li, Tanmay Agarwal, Jeannette Bohg, Yunzhu
Li, Li Fei-Fei, Jiajun Wu
- Abstract summary: We introduce the ObjectFolder Benchmark, a benchmark suite of 10 tasks for multisensory object-centric learning.
We also introduce the ObjectFolder Real dataset, including multisensory measurements for 100 real-world household objects.
- Score: 51.22194706674366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the ObjectFolder Benchmark, a benchmark suite of 10 tasks for
multisensory object-centric learning, centered around object recognition,
reconstruction, and manipulation with sight, sound, and touch. We also
introduce the ObjectFolder Real dataset, including the multisensory
measurements for 100 real-world household objects, building upon a newly
designed pipeline for collecting the 3D meshes, videos, impact sounds, and
tactile readings of real-world objects. We conduct systematic benchmarking on
both the 1,000 multisensory neural objects from ObjectFolder, and the real
multisensory data from ObjectFolder Real. Our results demonstrate the
importance of multisensory perception and reveal the respective roles of
vision, audio, and touch for different object-centric learning tasks. By
publicly releasing our dataset and benchmark suite, we hope to catalyze and
enable new research in multisensory object-centric learning in computer vision,
robotics, and beyond. Project page: https://objectfolder.stanford.edu
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Localizing Active Objects from Egocentric Vision with Symbolic World Knowledge [62.981429762309226]
The ability to actively ground task instructions from an egocentric view is crucial for AI agents to accomplish tasks or assist humans virtually.
We propose to improve phrase grounding models' ability to localize active objects by learning the role of objects undergoing change and extracting them accurately from the instructions.
We evaluate our framework on Ego4D and Epic-Kitchens datasets.
arXiv Detail & Related papers (2023-10-23T16:14:05Z)
- Object Scene Representation Transformer [56.40544849442227]
We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis.
OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods.
It is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder.
arXiv Detail & Related papers (2022-06-14T15:40:47Z)
- Lifelong Ensemble Learning based on Multiple Representations for Few-Shot Object Recognition [6.282068591820947]
We present a lifelong ensemble learning approach based on multiple representations to address the few-shot object recognition problem.
To facilitate lifelong learning, each approach is equipped with a memory unit for storing and retrieving object information instantly.
We have performed extensive sets of experiments to assess the performance of the proposed approach in offline and open-ended scenarios.
arXiv Detail & Related papers (2022-05-04T10:29:10Z)
- ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer [46.24535144252644]
We present ObjectFolder 2.0, a large-scale dataset of common household objects in the form of implicit neural representations.
Our dataset is 10 times larger in the number of objects and orders of magnitude faster in rendering time.
We show that models learned from virtual objects in our dataset successfully transfer to their real-world counterparts.
arXiv Detail & Related papers (2022-04-05T17:55:01Z)
- ObjectFolder: A Dataset of Objects with Implicit Visual, Auditory, and Tactile Representations [52.226947570070784]
We present ObjectFolder, a dataset of 100 objects that addresses both challenges with two key innovations.
First, ObjectFolder encodes the visual, auditory, and tactile sensory data for all objects, enabling a number of multisensory object recognition tasks.
Second, ObjectFolder employs a uniform, object-centric, and implicit representation for each object's visual textures, acoustic simulations, and tactile readings, making the dataset flexible to use and easy to share.
arXiv Detail & Related papers (2021-09-16T14:00:59Z)
- Capturing the objects of vision with neural networks [0.0]
Human visual perception carves a scene at its physical joints, decomposing the world into objects.
Deep neural network (DNN) models of visual object recognition, by contrast, remain largely tethered to the sensory input.
We review related work in both fields and examine how these fields can help each other.
arXiv Detail & Related papers (2021-09-07T21:49:53Z)
- A Transfer Learning Approach to Cross-Modal Object Recognition: From Visual Observation to Robotic Haptic Exploration [13.482253411041292]
We introduce the problem of cross-modal visuo-tactile object recognition with robotic active exploration.
We propose an approach constituted by four steps: finding a visuo-tactile common representation, defining a suitable set of features, transferring the features across the domains, and classifying the objects.
The proposed approach achieves an accuracy of 94.7%, which is comparable with the accuracy of the monomodal case.
arXiv Detail & Related papers (2020-01-18T14:47:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated information and is not responsible for any consequences of its use.