Towers of Babel: Combining Images, Language, and 3D Geometry for
Learning Multimodal Vision
- URL: http://arxiv.org/abs/2108.05863v1
- Date: Thu, 12 Aug 2021 17:16:49 GMT
- Title: Towers of Babel: Combining Images, Language, and 3D Geometry for
Learning Multimodal Vision
- Authors: Xiaoshi Wu, Hadar Averbuch-Elor, Jin Sun and Noah Snavely
- Abstract summary: We present a new, large-scale dataset of landmark photo collections that contains descriptive text in the form of captions and hierarchical category names.
WikiScenes forms a new testbed for multimodal reasoning involving images, text, and 3D geometry.
- Score: 50.07532560364523
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The abundance and richness of Internet photos of landmarks and cities has led
to significant progress in 3D vision over the past two decades, including
automated 3D reconstructions of the world's landmarks from tourist photos.
However, a major source of information available for these 3D-augmented
collections---namely language, e.g., from image captions---has been virtually
untapped. In this work, we present WikiScenes, a new, large-scale dataset of
landmark photo collections that contains descriptive text in the form of
captions and hierarchical category names. WikiScenes forms a new testbed for
multimodal reasoning involving images, text, and 3D geometry. We demonstrate
the utility of WikiScenes for learning semantic concepts over images and 3D
models. Our weakly-supervised framework connects images, 3D structure, and
semantics---utilizing the strong constraints provided by 3D geometry---to
associate semantic concepts to image pixels and 3D points.
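Neither the code nor the dataset tooling is reproduced here; as a minimal sketch of the geometric-consistency idea from the abstract, the Python snippet below uses hypothetical inputs (caption-derived image concepts and SfM point-to-image observations) to vote concept labels onto shared 3D points and then project them back to every image that observes those points.

```python
# Minimal sketch (not the authors' code): propagate weak, caption-derived
# concept labels from images to SfM 3D points and back, using the fact that
# a 3D point observed by many images should carry one consistent concept.
from collections import Counter, defaultdict

# Hypothetical weak supervision: each image is tagged with one concept
# mined from its caption or category name (e.g. "altar", "facade").
image_concepts = {
    "img_001.jpg": "altar",
    "img_002.jpg": "altar",
    "img_003.jpg": "facade",
}

# Hypothetical SfM output: which images observe each 3D point.
observations = {  # point_id -> list of image names that see it
    17: ["img_001.jpg", "img_002.jpg"],
    42: ["img_002.jpg", "img_003.jpg"],
    77: ["img_003.jpg"],
}

def label_points(observations, image_concepts):
    """Assign each 3D point the majority concept among its observing images."""
    point_labels = {}
    for pid, imgs in observations.items():
        votes = Counter(image_concepts[i] for i in imgs if i in image_concepts)
        if votes:
            point_labels[pid] = votes.most_common(1)[0][0]
    return point_labels

def relabel_images(observations, point_labels):
    """Project point labels back: each image collects concepts of visible points."""
    image_votes = defaultdict(Counter)
    for pid, imgs in observations.items():
        if pid in point_labels:
            for i in imgs:
                image_votes[i][point_labels[pid]] += 1
    return {i: c.most_common(1)[0][0] for i, c in image_votes.items()}

point_labels = label_points(observations, image_concepts)
print(point_labels)                       # {17: 'altar', 42: 'altar', 77: 'facade'}
print(relabel_images(observations, point_labels))
```

The actual framework reasons at the pixel level with learned scores rather than single image-level tags; the voting above only illustrates how shared 3D structure enforces consistent semantics across views.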
Related papers
- ImageNet3D: Towards General-Purpose Object-Level 3D Understanding [20.837297477080945]
We present ImageNet3D, a large dataset for general-purpose object-level 3D understanding.
ImageNet3D augments 200 categories from the ImageNet dataset with 2D bounding box, 3D pose, and 3D location annotations, as well as image captions interleaved with 3D information.
Beyond standard classification and pose estimation, we consider two new tasks: probing object-level 3D awareness and open-vocabulary pose estimation.
arXiv Detail & Related papers (2024-06-13T22:44:26Z)
- HaLo-NeRF: Learning Geometry-Guided Semantics for Exploring Unconstrained Photo Collections [19.05215193265488]
We present a localization system that connects neural representations of scenes depicting large-scale landmarks with text describing a semantic region within the scene.
Our approach is built upon the premise that images physically grounded in space can provide a powerful supervision signal for localizing new concepts.
Our results show that HaLo-NeRF can accurately localize a variety of semantic concepts related to architectural landmarks.
arXiv Detail & Related papers (2024-02-14T14:02:04Z)
- Weakly-Supervised 3D Visual Grounding based on Visual Linguistic Alignment [26.858034573776198]
We propose a weakly supervised approach for 3D visual grounding based on Visual Linguistic Alignment.
Our 3D-VLA exploits the superior ability of current large-scale vision-language models to align semantics between text and 2D images.
During inference, the learned text-3D correspondence grounds text queries to the target 3D objects even when no 2D images are available.
arXiv Detail & Related papers (2023-12-15T09:08:14Z)
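As an illustration of what such image-free inference might look like (hypothetical names, not the 3D-VLA code), the sketch below scores precomputed per-object 3D features against a text-query embedding by cosine similarity and returns the best-matching object.

```python
# Hypothetical inference-time grounding sketch (not the 3D-VLA implementation):
# pick the 3D object whose learned feature best matches the text query.
import numpy as np

def ground_query(text_emb, object_feats):
    """Return the index of the 3D object most similar to the text embedding.

    text_emb:     (D,) embedding of the query, from any text encoder.
    object_feats: (K, D) learned features of K candidate 3D objects.
    """
    t = text_emb / np.linalg.norm(text_emb)
    o = object_feats / np.linalg.norm(object_feats, axis=1, keepdims=True)
    scores = o @ t                      # cosine similarity per object
    return int(np.argmax(scores)), scores

# Toy example with random placeholders standing in for learned embeddings.
rng = np.random.default_rng(0)
best, scores = ground_query(rng.normal(size=256), rng.normal(size=(5, 256)))
print(best, scores.round(3))
```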
- TeMO: Towards Text-Driven 3D Stylization for Multi-Object Meshes [67.5351491691866]
We present a novel framework, dubbed TeMO, to parse multi-object 3D scenes and edit their styles.
Our method can synthesize high-quality stylized content and outperform the existing methods over a wide range of multi-object 3D meshes.
arXiv Detail & Related papers (2023-12-07T12:10:05Z)
- Uni3D: Exploring Unified 3D Representation at Scale [66.26710717073372]
We present Uni3D, a 3D foundation model to explore the unified 3D representation at scale.
Uni3D uses a 2D ViT, pretrained end-to-end, to align 3D point cloud features with image-text aligned features.
We show that the strong Uni3D representation also enables applications such as 3D painting and retrieval in the wild.
arXiv Detail & Related papers (2023-10-10T16:49:21Z)
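As a rough illustration of the alignment objective described for Uni3D (not the authors' implementation), the sketch below assumes precomputed, frozen image-text embeddings (e.g., from CLIP) and trains a toy point-cloud encoder with a symmetric InfoNCE-style loss so its outputs land in the same embedding space; the encoder and all shapes are placeholders.

```python
# Hypothetical sketch of aligning point-cloud features to a frozen
# image-text embedding space with a contrastive loss (not Uni3D's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointEncoder(nn.Module):
    """Toy point-cloud encoder: per-point MLP followed by max pooling."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, embed_dim))

    def forward(self, points):                       # points: (B, N, 3)
        return self.mlp(points).max(dim=1).values    # (B, embed_dim)

def clip_style_loss(point_emb, target_emb, temperature=0.07):
    """Symmetric InfoNCE loss between point-cloud and frozen image-text embeddings."""
    p = F.normalize(point_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    logits = p @ t.t() / temperature                 # (B, B) similarity matrix
    labels = torch.arange(p.size(0), device=p.device)
    return (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2

encoder = PointEncoder()
points = torch.randn(4, 1024, 3)        # a batch of 4 toy point clouds
clip_image_emb = torch.randn(4, 512)    # placeholder for frozen CLIP image features
loss = clip_style_loss(encoder(points), clip_image_emb)
loss.backward()
```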
- Lowis3D: Language-Driven Open-World Instance-Level 3D Scene Understanding [57.47315482494805]
Open-world instance-level scene understanding aims to locate and recognize object categories unseen in the annotated dataset.
This task is challenging because the model needs to both localize novel 3D objects and infer their semantic categories.
We propose to harness pre-trained vision-language (VL) foundation models that encode extensive knowledge from image-text pairs to generate captions for 3D scenes.
arXiv Detail & Related papers (2023-08-01T07:50:14Z)
- Multi-CLIP: Contrastive Vision-Language Pre-training for Question Answering tasks in 3D Scenes [68.61199623705096]
Training models to apply common-sense linguistic knowledge and visual concepts from 2D images to 3D scene understanding is a promising direction that researchers have only recently started to explore.
We propose a novel 3D pre-training Vision-Language method, namely Multi-CLIP, that enables a model to learn language-grounded and transferable 3D scene point cloud representations.
arXiv Detail & Related papers (2023-06-04T11:08:53Z)
- PLA: Language-Driven Open-Vocabulary 3D Scene Understanding [57.47315482494805]
Open-vocabulary scene understanding aims to localize and recognize unseen categories beyond the annotated label space.
Recent breakthroughs in 2D open-vocabulary perception have been driven by Internet-scale paired image-text data with rich vocabulary concepts.
We propose to distill knowledge encoded in pre-trained vision-language (VL) foundation models by captioning multi-view images of 3D scenes.
arXiv Detail & Related papers (2022-11-29T15:52:22Z)
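A minimal sketch of that distillation idea (hypothetical helpers, not the PLA implementation): embed one caption per view, then let each 3D point average the caption embeddings of the views in which it is visible.

```python
# Hypothetical sketch of caption-based distillation for 3D points (not PLA's code):
# each 3D point inherits the mean embedding of captions from views that see it.
import numpy as np

def distill_point_features(view_caption_emb, visibility):
    """Average caption embeddings over the views in which each point is visible.

    view_caption_emb: (V, D) caption embedding per view (from any VL captioner).
    visibility:       (V, P) boolean, True if view v observes point p.
    """
    vis = visibility.astype(float)                        # (V, P)
    counts = vis.sum(axis=0, keepdims=True).clip(min=1)   # views per point, (1, P)
    return (vis.T @ view_caption_emb) / counts.T          # (P, D)

# Toy example: 3 views, 4 points, 8-dim caption embeddings.
rng = np.random.default_rng(1)
caption_emb = rng.normal(size=(3, 8))
visible = np.array([[1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1]], dtype=bool)
point_feats = distill_point_features(caption_emb, visible)
print(point_feats.shape)    # (4, 8) -- one language-grounded feature per 3D point
```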
- Disentangling 3D Prototypical Networks For Few-Shot Concept Learning [29.02523358573336]
We present neural architectures that disentangle RGB-D images into objects' shapes and styles and a map of the background scene.
Our networks incorporate architectural biases that reflect the image formation process, 3D geometry of the world scene, and shape-style interplay.
arXiv Detail & Related papers (2020-11-06T14:08:27Z)
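As a toy sketch of the disentanglement idea (not the paper's actual architecture), the snippet below encodes an RGB-D image into separate shape and style codes plus a coarse background map; all layer sizes are placeholders.

```python
# Toy sketch of disentangling RGB-D input into shape, style, and background
# (hypothetical architecture, for illustration only).
import torch
import torch.nn as nn

class DisentanglingEncoder(nn.Module):
    def __init__(self, code_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),   # RGB-D input: 4 channels
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.shape_head = nn.Linear(64, code_dim)    # object geometry code
        self.style_head = nn.Linear(64, code_dim)    # object appearance code
        self.background_head = nn.Conv2d(64, 1, 1)   # coarse background map

    def forward(self, rgbd):                         # rgbd: (B, 4, H, W)
        feats = self.backbone(rgbd)                  # (B, 64, H/4, W/4)
        pooled = feats.mean(dim=(2, 3))              # (B, 64)
        return self.shape_head(pooled), self.style_head(pooled), self.background_head(feats)

shape, style, bg = DisentanglingEncoder()(torch.randn(2, 4, 64, 64))
print(shape.shape, style.shape, bg.shape)            # (2, 64), (2, 64), (2, 1, 16, 16)
```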