ConceptFactory: Facilitate 3D Object Knowledge Annotation with Object Conceptualization
- URL: http://arxiv.org/abs/2411.00448v1
- Date: Fri, 01 Nov 2024 08:50:04 GMT
- Title: ConceptFactory: Facilitate 3D Object Knowledge Annotation with Object Conceptualization
- Authors: Jianhua Sun, Yuxuan Li, Longfei Xu, Nange Wang, Jiude Wei, Yining Zhang, Cewu Lu
- Abstract summary: ConceptFactory aims at promoting machine intelligence to learn comprehensive object knowledge from both vision and robotics aspects.
It consists of two critical parts: ConceptFactory Suite and ConceptFactory Asset.
- Score: 41.54457853741178
- License:
- Abstract: We present ConceptFactory, a novel scope to facilitate more efficient annotation of 3D object knowledge by recognizing 3D objects through generalized concepts (i.e. object conceptualization), aiming at promoting machine intelligence to learn comprehensive object knowledge from both vision and robotics aspects. This idea originates from the findings in human cognition research that the perceptual recognition of objects can be explained as a process of arranging generalized geometric components (e.g. cuboids and cylinders). ConceptFactory consists of two critical parts: i) ConceptFactory Suite, a unified toolbox that adopts Standard Concept Template Library (STL-C) to drive a web-based platform for object conceptualization, and ii) ConceptFactory Asset, a large collection of conceptualized objects acquired using ConceptFactory suite. Our approach enables researchers to effortlessly acquire or customize extensive varieties of object knowledge to comprehensively study different object understanding tasks. We validate our idea on a wide range of benchmark tasks from both vision and robotics aspects with state-of-the-art algorithms, demonstrating the high quality and versatility of annotations provided by our approach. Our website is available at https://apeirony.github.io/ConceptFactory.
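To make the conceptualization idea concrete, the following is a minimal, hypothetical sketch of an object described as an arrangement of generalized geometric components. The class names, fields, and the mug example are illustrative assumptions; they do not reproduce the actual STL-C templates or the ConceptFactory Suite interface.

```python
from dataclasses import dataclass, field

@dataclass
class Primitive:
    """A generalized geometric component (e.g. a cuboid or a cylinder)."""
    kind: str                       # "cuboid", "cylinder", ...
    params: dict                    # shape parameters, e.g. {"radius": 0.04, "height": 0.10}
    pose: tuple = (0.0, 0.0, 0.0)   # placement of the component in the object frame

@dataclass
class ConceptTemplate:
    """An object conceptualized as a named arrangement of primitives."""
    name: str
    parts: list = field(default_factory=list)

    def add_part(self, label: str, primitive: Primitive):
        # Knowledge annotations (part labels, affordances, ...) can attach per component.
        self.parts.append((label, primitive))

# Example: a mug-like object expressed with two generalized components.
mug = ConceptTemplate("mug")
mug.add_part("body", Primitive("cylinder", {"radius": 0.04, "height": 0.10}))
mug.add_part("handle", Primitive("cuboid", {"x": 0.01, "y": 0.03, "z": 0.06},
                                 pose=(0.05, 0.0, 0.05)))

for label, prim in mug.parts:
    print(label, prim.kind, prim.params)
```

Describing an object once at this conceptual level is what lets varied annotations (e.g. part labels or affordances attached to components) be derived for different downstream tasks.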
Related papers
- Discovering Conceptual Knowledge with Analytic Ontology Templates for Articulated Objects [42.9186628100765]
We aim to endow machine intelligence with an analogous capability by operating at the conceptual level.
The AOT-driven approach yields benefits from three key perspectives.
arXiv Detail & Related papers (2024-09-18T04:53:38Z)
- Deep Models for Multi-View 3D Object Recognition: A Review [16.500711021549947]
Multi-view 3D representations for object recognition have thus far demonstrated the most promising results for achieving state-of-the-art performance.
This review paper comprehensively covers recent progress in multi-view 3D object recognition methods for 3D classification and retrieval tasks.
arXiv Detail & Related papers (2024-04-23T16:54:31Z)
- CLiC: Concept Learning in Context [54.81654147248919]
This paper builds upon recent advancements in visual concept learning.
It involves acquiring a visual concept from a source image and subsequently applying it to an object in a target image.
To localize the concept learning, we employ soft masks that contain both the concept within the mask and the surrounding image area.
arXiv Detail & Related papers (2023-11-28T01:33:18Z)
- Beyond Object Recognition: A New Benchmark towards Object Concept Learning [57.94446186103925]
We propose a challenging Object Concept Learning task to push the envelope of object understanding.
It requires machines to reason out object affordances and simultaneously give the reason: what attributes make an object possess these affordances.
By analyzing the causal structure of OCL, we present a baseline, Object Concept Reasoning Network (OCRN).
arXiv Detail & Related papers (2022-12-06T02:11:34Z)
- Learning by Asking Questions for Knowledge-based Novel Object Recognition [64.55573343404572]
In real-world object recognition, there are numerous object classes to be recognized. Conventional image recognition based on supervised learning can only recognize object classes that exist in the training data, and thus has limited applicability in the real world.
Motivated by this, we study a framework for acquiring external knowledge through question generation that would help the model instantly recognize novel objects.
Our pipeline consists of two components: an object recognition module and the Question Generator, which generates knowledge-aware questions to acquire novel knowledge.
arXiv Detail & Related papers (2022-10-12T02:51:58Z)
- Object Recognition as Classification of Visual Properties [5.1652563977194434]
We present an object recognition process based on Ranganathan's four-phased faceted knowledge organization process.
We briefly introduce the ongoing project MultiMedia UKC, whose aim is to build an object recognition resource.
arXiv Detail & Related papers (2021-12-20T13:50:07Z)
- Unsupervised Learning of Compositional Energy Concepts [70.11673173291426]
We propose COMET, which discovers and represents concepts as separate energy functions.
COMET represents both global concepts and objects under a unified framework (a minimal sketch of the energy-composition idea follows this entry).
arXiv Detail & Related papers (2021-11-04T17:46:12Z)
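As a rough illustration of the "separate energy functions" idea summarized above, the sketch below composes per-concept energies by summation and decodes an input by gradient descent on the composed energy. The network sizes, latent handling, and optimization loop are illustrative assumptions, not COMET's actual architecture or training procedure.

```python
import torch
import torch.nn as nn

class ConceptEnergy(nn.Module):
    """One energy function E_k(x, z_k) for a single concept (assumed toy architecture)."""
    def __init__(self, img_dim=64, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(img_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, x, z):
        # Scalar energy of input x under concept latent z.
        return self.net(torch.cat([x, z], dim=-1))

def composed_energy(x, latents, energies):
    # Composition: energies of separate concepts are simply summed.
    return sum(E(x, z) for E, z in zip(energies, latents))

# Decode an input vector by gradient descent on the composed energy.
img_dim, num_concepts = 64, 3
energies = [ConceptEnergy(img_dim) for _ in range(num_concepts)]
latents = [torch.randn(1, 16) for _ in range(num_concepts)]
x = torch.zeros(1, img_dim, requires_grad=True)
optimizer = torch.optim.SGD([x], lr=0.1)
for _ in range(20):
    optimizer.zero_grad()
    composed_energy(x, latents, energies).sum().backward()
    optimizer.step()
print("final composed energy:", composed_energy(x, latents, energies).item())
```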
- ObjectFolder: A Dataset of Objects with Implicit Visual, Auditory, and Tactile Representations [52.226947570070784]
We present ObjectFolder, a dataset of 100 objects that addresses both challenges with two key innovations.
First, ObjectFolder encodes the visual, auditory, and tactile sensory data for all objects, enabling a number of multisensory object recognition tasks.
Second, ObjectFolder employs a uniform, object-centric, and implicit representation for each object's visual textures, acoustic simulations, and tactile readings, making the dataset flexible to use and easy to share.
arXiv Detail & Related papers (2021-09-16T14:00:59Z)
- VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects [19.296344218177534]
The space of 3D articulated objects is exceptionally rich in their myriad semantic categories, diverse shape geometry, and complicated part functionality.
Previous works mostly abstract kinematic structure with estimated joint parameters and part poses as the visual representations for manipulating 3D articulated objects.
We propose object-centric actionable visual priors as a novel perception-interaction handshaking point, where the perception system outputs more actionable guidance than kinematic structure estimation (a minimal interface sketch follows this entry).
arXiv Detail & Related papers (2021-06-28T07:47:31Z)
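To make the contrast with joint-parameter estimation concrete, the sketch below shows one hypothetical form such actionable output could take: per-point actionability scores plus proposed interaction trajectories. The function name, the scoring placeholder, and the straight-line trajectory heuristic are assumptions for illustration, not VAT-Mart's actual model or interface.

```python
import numpy as np

def actionable_visual_priors(point_cloud: np.ndarray, num_waypoints: int = 5):
    """For each point, return an actionability score and a proposed interaction
    trajectory (a short sequence of gripper waypoints). Placeholder logic only."""
    n = point_cloud.shape[0]
    # Placeholder scores: a learned model would predict these from geometry/appearance.
    scores = np.random.rand(n)
    # Placeholder trajectories: straight 5 cm pushes directed away from the centroid.
    centroid = point_cloud.mean(axis=0)
    directions = point_cloud - centroid
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-8
    steps = np.linspace(0.0, 0.05, num_waypoints)
    trajectories = point_cloud[:, None, :] + directions[:, None, :] * steps[None, :, None]
    return scores, trajectories

# Usage: pick the most actionable contact point and inspect its proposed trajectory.
pts = np.random.rand(1024, 3)
scores, trajs = actionable_visual_priors(pts)
best = int(scores.argmax())
print("best contact point:", pts[best], "trajectory shape:", trajs[best].shape)
```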
This list is automatically generated from the titles and abstracts of the papers on this site.