A large scale multi-view RGBD visual affordance learning dataset
- URL: http://arxiv.org/abs/2203.14092v3
- Date: Wed, 13 Sep 2023 01:18:40 GMT
- Title: A large scale multi-view RGBD visual affordance learning dataset
- Authors: Zeyad Khalifa, Syed Afaq Ali Shah
- Abstract summary: We introduce a large scale multi-view RGBD visual affordance learning dataset.
It is the first and largest multi-view RGBD visual affordance learning dataset.
Several state-of-the-art deep learning networks are evaluated on the affordance recognition and segmentation tasks.
- Score: 4.3773754388936625
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The physical and textural attributes of objects have been widely studied for
recognition, detection and segmentation tasks in computer vision. A number of
datasets, such as the large-scale ImageNet, have been proposed for feature learning
using data-hungry deep neural networks and for hand-crafted feature extraction.
To intelligently interact with objects, robots and intelligent machines need
the ability to infer beyond traditional physical/textural attributes and to
understand/learn visual cues, called visual affordances, for affordance
recognition, detection and segmentation. To date, there is no publicly available
large-scale dataset for visual affordance understanding and learning. In this paper,
we introduce a large-scale multi-view RGBD visual affordance learning dataset,
a benchmark of 47,210 RGBD images from 37 object categories, annotated with 15
visual affordance categories. To the best of our knowledge, this is the first
ever and the largest multi-view RGBD visual affordance learning dataset. We
benchmark the proposed dataset for affordance segmentation and recognition
tasks using popular Vision Transformer and Convolutional Neural Networks.
Several state-of-the-art deep learning networks are each evaluated on the
affordance recognition and segmentation tasks. Our experimental results
showcase the challenging nature of the dataset and present definite prospects
for new and robust affordance learning algorithms. The dataset is publicly
available at https://sites.google.com/view/afaqshah/dataset.
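To make the benchmarking setup more concrete, below is a minimal sketch of how a multi-view RGBD affordance segmentation dataset like this one could be loaded in PyTorch. The directory layout, file names, and the `AffordanceRGBD` class are hypothetical; only the use of paired RGB and depth views with per-pixel masks over 15 affordance categories comes from the abstract.

```python
# Hypothetical loader for a multi-view RGBD affordance segmentation benchmark.
# Directory layout and file names are assumptions for illustration only.
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset

NUM_AFFORDANCES = 15               # affordance categories reported in the abstract
NUM_CLASSES = NUM_AFFORDANCES + 1  # plus background for segmentation


class AffordanceRGBD(Dataset):
    """Pairs each RGB view with its depth map and per-pixel affordance mask."""

    def __init__(self, root: str, split: str = "train"):
        # Assumed layout: <root>/<split>/{rgb,depth,affordance}/<frame>.png
        self.rgb_paths = sorted(Path(root, split, "rgb").glob("*.png"))

    def __len__(self):
        return len(self.rgb_paths)

    def __getitem__(self, idx):
        rgb_path = self.rgb_paths[idx]
        depth_path = Path(str(rgb_path).replace("/rgb/", "/depth/"))
        mask_path = Path(str(rgb_path).replace("/rgb/", "/affordance/"))

        rgb = np.asarray(Image.open(rgb_path).convert("RGB"), dtype=np.float32) / 255.0
        depth = np.asarray(Image.open(depth_path), dtype=np.float32)[..., None]
        mask = np.asarray(Image.open(mask_path), dtype=np.int64)  # values in [0, NUM_AFFORDANCES]

        # Stack RGB and depth into a 4-channel (C, H, W) tensor for a segmentation network.
        x = torch.from_numpy(np.concatenate([rgb, depth], axis=-1)).permute(2, 0, 1)
        y = torch.from_numpy(mask)
        return x, y
```

Any off-the-shelf segmentation backbone, whether a CNN or a Vision Transformer as benchmarked in the paper, could then be trained on these tensors with a per-pixel cross-entropy loss over the 16 classes.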
Related papers
- Learning Object-Centric Representation via Reverse Hierarchy Guidance [73.05170419085796]
Object-Centric Learning (OCL) seeks to enable neural networks to identify individual objects in visual scenes.
RHGNet introduces a top-down pathway that operates differently during training and during inference.
Our model achieves state-of-the-art performance on several commonly used datasets.
arXiv Detail & Related papers (2024-05-17T07:48:27Z) - SeeBel: Seeing is Believing [0.9790236766474201]
We propose three visualizations that enable users to compare dataset statistics and AI segmentation performance across all images.
Our project aims to further increase the interpretability of the trained segmentation model by visualizing its image attention weights.
We propose to conduct surveys with real users to study the efficacy of our visualization tool in the computer vision and AI domains.
arXiv Detail & Related papers (2023-12-18T05:11:00Z) - Deep Neural Networks in Video Human Action Recognition: A Review [21.00217656391331]
Video behavior recognition is one of the most foundational tasks of computer vision.
Deep neural networks are built to recognize pixel-level information from inputs such as RGB images, RGB-D images, or optical flow.
Across the work we review, deep neural networks surpass most other techniques in feature learning and extraction tasks.
arXiv Detail & Related papers (2023-05-25T03:54:41Z) - MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic
Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z) - A Variational Graph Autoencoder for Manipulation Action Recognition and
Prediction [1.1816942730023883]
We introduce a deep graph autoencoder to jointly learn recognition and prediction of manipulation tasks from symbolic scene graphs.
Our network has a variational autoencoder structure with two branches: one for identifying the input graph type and one for predicting the future graphs.
We benchmark our new model against different state-of-the-art methods on two different datasets, MANIAC and MSRC-9, and show that our proposed model can achieve better performance.
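As a very rough illustration of the two-branch variational structure described above, the sketch below encodes a fixed-size graph tensor into a latent code and decodes it through a classification branch and a future-graph branch. The plain MLP layers, dimensions, and loss weighting are assumptions for brevity, not the graph-based architecture from the paper.

```python
# Minimal sketch of a two-branch variational autoencoder over graphs, assuming
# fixed-size scene graphs flattened into vectors; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoBranchGraphVAE(nn.Module):
    def __init__(self, graph_dim: int, latent_dim: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(graph_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        # Branch 1: identify the type of the input manipulation graph.
        self.classifier = nn.Linear(latent_dim, num_classes)
        # Branch 2: predict (decode) the future scene graph.
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, graph_dim))

    def forward(self, g):
        h = self.encoder(g)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.classifier(z), self.decoder(z), mu, logvar


def loss_fn(logits, future_pred, labels, future_graph, mu, logvar):
    cls = F.cross_entropy(logits, labels)          # recognition branch
    rec = F.mse_loss(future_pred, future_graph)    # prediction branch
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return cls + rec + kld
```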
arXiv Detail & Related papers (2021-10-25T21:40:42Z) - One-Shot Object Affordance Detection in the Wild [76.46484684007706]
Affordance detection refers to identifying the potential action possibilities of objects in an image.
We devise a One-Shot Affordance Detection Network (OSAD-Net) that estimates the human action purpose and then transfers it to help detect the common affordance from all candidate images.
With complex scenes and rich annotations, our PADv2 dataset can be used as a test bed to benchmark affordance detection methods.
arXiv Detail & Related papers (2021-08-08T14:53:10Z) - Exploiting the relationship between visual and textual features in
social networks for image classification with zero-shot deep learning [0.0]
In this work, we propose a classifier ensemble based on the transferable learning capabilities of the CLIP neural network architecture.
Our experiments, image classification tasks over the labels of the Places dataset, first consider only the visual part.
Taking the texts associated with the images into account can further improve accuracy, depending on the goal.
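As an illustration of the zero-shot, CLIP-based classification this summary refers to, the sketch below scores an image against text prompts built from a handful of scene labels. The checkpoint name, label list, and image path are placeholders; the paper's ensemble and the full Places label set are not reproduced here.

```python
# Rough sketch of CLIP zero-shot classification over scene labels; illustrative only.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical subset of Places-style scene labels used as text prompts.
labels = ["kitchen", "beach", "library", "street"]
prompts = [f"a photo of a {label}" for label in labels]

image = Image.open("example.jpg")  # placeholder path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits_per_image = model(**inputs).logits_per_image  # image-text similarity
probs = logits_per_image.softmax(dim=-1)
print(labels[probs.argmax().item()])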
arXiv Detail & Related papers (2021-07-08T10:54:59Z) - REGRAD: A Large-Scale Relational Grasp Dataset for Safe and
Object-Specific Robotic Grasping in Clutter [52.117388513480435]
We present a new dataset, named REGRAD, to support the modeling of relationships among objects and grasps.
Our dataset is collected in both forms of 2D images and 3D point clouds.
Users are free to import their own object models to generate as much data as they want.
arXiv Detail & Related papers (2021-04-29T05:31:21Z) - 3D AffordanceNet: A Benchmark for Visual Object Affordance Understanding [33.68455617113953]
We present a 3D AffordanceNet dataset, a benchmark of 23k shapes from 23 semantic object categories, annotated with 18 visual affordance categories.
Three state-of-the-art point cloud deep learning networks are evaluated on all tasks.
arXiv Detail & Related papers (2021-03-30T14:46:27Z) - Scaling Up Visual and Vision-Language Representation Learning With Noisy
Text Supervision [57.031588264841]
We leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps.
A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss.
We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme.
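The dual-encoder alignment objective mentioned above is essentially a symmetric contrastive loss over a batch of paired embeddings; a minimal sketch is shown below, with the image and text encoders abstracted away as precomputed embedding tensors (the temperature value is illustrative).

```python
# Minimal sketch of a symmetric contrastive (InfoNCE-style) objective that aligns
# image and text embeddings in a dual-encoder setup; encoders are not shown.
import torch
import torch.nn.functional as F


def contrastive_alignment_loss(img_emb, txt_emb, temperature: float = 0.07):
    # L2-normalize so the dot product is a cosine similarity.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature   # (batch, batch) similarity matrix
    targets = torch.arange(img_emb.size(0))        # matching pairs lie on the diagonal
    # Symmetric loss: image-to-text and text-to-image.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


# Example with random embeddings standing in for encoder outputs.
loss = contrastive_alignment_loss(torch.randn(8, 512), torch.randn(8, 512))
```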
arXiv Detail & Related papers (2021-02-11T10:08:12Z) - Neural Data Server: A Large-Scale Search Engine for Transfer Learning
Data [78.74367441804183]
We introduce Neural Data Server (NDS), a large-scale search engine for finding the most useful transfer learning data for a target domain.
NDS consists of a dataserver that indexes several large, popular image datasets and aims to recommend data to a client.
We show the effectiveness of NDS in various transfer learning scenarios, demonstrating state-of-the-art performance on several target datasets.
arXiv Detail & Related papers (2020-01-09T01:21:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.