Diffusion 3D Features (Diff3F): Decorating Untextured Shapes with Distilled Semantic Features
- URL: http://arxiv.org/abs/2311.17024v2
- Date: Tue, 2 Apr 2024 19:11:35 GMT
- Title: Diffusion 3D Features (Diff3F): Decorating Untextured Shapes with Distilled Semantic Features
- Authors: Niladri Shekhar Dutt, Sanjeev Muralikrishnan, Niloy J. Mitra
- Abstract summary: Diff3F is a class-agnostic feature descriptor for untextured input shapes.
We distill diffusion features from image foundational models onto input shapes.
In the process, we produce (diffusion) features in 2D that we subsequently lift and aggregate on the original surface.
- Score: 27.44390031735071
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Diff3F as a simple, robust, and class-agnostic feature descriptor that can be computed for untextured input shapes (meshes or point clouds). Our method distills diffusion features from image foundational models onto input shapes. Specifically, we use the input shapes to produce depth and normal maps as guidance for conditional image synthesis. In the process, we produce (diffusion) features in 2D that we subsequently lift and aggregate on the original surface. Our key observation is that even if the conditional image generations obtained from multi-view rendering of the input shapes are inconsistent, the associated image features are robust and, hence, can be directly aggregated across views. This produces semantic features on the input shapes, without requiring additional data or training. We perform extensive experiments on multiple benchmarks (SHREC'19, SHREC'20, FAUST, and TOSCA) and demonstrate that our features, being semantic instead of geometric, produce reliable correspondence across both isometric and non-isometrically related shape families. Code is available via the project page at https://diff3f.github.io/
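The core of the method is the lifting step: each surface point is projected into every rendered view, the per-view 2D feature maps are sampled at the projected locations, and the samples are averaged over the views in which the point is visible. Below is a minimal PyTorch sketch of this aggregation, assuming the per-view diffusion features, camera matrices, and rendered depth maps have already been computed; the function name, the clip-space camera convention, and the depth-threshold visibility test are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def lift_and_aggregate(points, feat_maps, cams, depth_maps, eps=1e-2):
    """Average per-view 2D feature maps onto 3D surface points.

    points:     (P, 3) surface samples of the untextured shape
    feat_maps:  list of (C, H, W) per-view feature maps (e.g. diffusion
                features; their extraction is assumed to have happened)
    cams:       list of (4, 4) world-to-clip projection matrices (assumed
                convention: x_clip = cam @ x_world_homogeneous)
    depth_maps: list of (H, W) rendered NDC depths for visibility testing
    """
    P = points.shape[0]
    C = feat_maps[0].shape[0]
    feat_sum = torch.zeros(P, C)
    weight = torch.zeros(P, 1)

    homog = torch.cat([points, torch.ones(P, 1)], dim=1)      # (P, 4)
    for feats, cam, depth in zip(feat_maps, cams, depth_maps):
        clip = homog @ cam.T                                  # (P, 4)
        ndc = clip[:, :3] / clip[:, 3:4]                      # perspective divide
        grid = ndc[:, :2].view(1, P, 1, 2)                    # in [-1, 1]

        # Visibility: a point contributes only if its projected depth
        # matches the depth rendered at that pixel (i.e. not occluded).
        d_ref = F.grid_sample(depth[None, None], grid,
                              align_corners=False)[0, 0, :, 0]
        visible = ((ndc[:, 2] - d_ref).abs() < eps).float()   # (P,)

        # Bilinearly sample the view's feature map at the projections.
        sampled = F.grid_sample(feats[None], grid,
                                align_corners=False)[0, :, :, 0].T  # (P, C)
        feat_sum += sampled * visible[:, None]
        weight += visible[:, None]

    # Per-point semantic descriptor: mean over the views that saw the point.
    return feat_sum / weight.clamp(min=1.0)
```

Note how the abstract's key observation shows up as a plain average: because the features are robust even when the generated images disagree across views, no cross-view consistency optimization is needed before aggregation.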
Related papers
- DiffComplete: Diffusion-based Generative 3D Shape Completion [114.43353365917015]
We introduce a new diffusion-based approach for shape completion on 3D range scans.
We strike a balance between realism, multi-modality, and high fidelity.
DiffComplete sets a new state of the art (SOTA) on two large-scale 3D shape completion benchmarks.
arXiv Detail & Related papers (2023-06-28T16:07:36Z)
- Zero-Shot 3D Shape Correspondence [67.18775201037732]
We propose a novel zero-shot approach to computing correspondences between 3D shapes.
We exploit the exceptional reasoning capabilities of recent foundation models in language and vision.
Our approach produces highly plausible results in a zero-shot manner, especially between strongly non-isometric shapes.
arXiv Detail & Related papers (2023-06-05T21:14:23Z)
- NAISR: A 3D Neural Additive Model for Interpretable Shape Representation [10.284366517948929]
We propose a 3D Neural Additive Model for Interpretable Shape Representation ($\texttt{NAISR}$) for scientific shape discovery.
Our approach captures shape population trends and allows for patient-specific predictions through shape transfer.
Our experiments demonstrate that $\textit{Starman}$ achieves excellent shape reconstruction performance while retaining interpretability.
arXiv Detail & Related papers (2023-03-16T11:18:04Z)
- DiffusionSDF: Conditional Generative Modeling of Signed Distance Functions [42.015077094731815]
DiffusionSDF is a generative model for shape completion, single-view reconstruction, and reconstruction of real-scanned point clouds.
We use neural signed distance functions (SDFs) as our 3D representation to parameterize the geometry of various signals (e.g., point clouds, 2D images) through neural networks.
arXiv Detail & Related papers (2022-11-24T18:59:01Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve shape quality by leveraging cross-view information with a graph convolutional network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- 3D Shape Reconstruction from 2D Images with Disentangled Attribute Flow [61.62796058294777]
Reconstructing 3D shape from a single 2D image is a challenging task.
Most previous methods still struggle to extract semantic attributes for the 3D reconstruction task.
We propose 3DAttriFlow to disentangle and extract semantic attributes through different semantic levels in the input images.
arXiv Detail & Related papers (2022-03-29T02:03:31Z)
- ShapeFormer: Transformer-based Shape Completion via Sparse Representation [41.33457875133559]
We present ShapeFormer, a network that produces a distribution of object completions conditioned on incomplete, and possibly noisy, point clouds.
The resultant distribution can then be sampled to generate likely completions, each exhibiting plausible shape details while being faithful to the input.
arXiv Detail & Related papers (2022-01-25T13:58:30Z)
- Hard Example Generation by Texture Synthesis for Cross-domain Shape Similarity Learning [97.56893524594703]
Image-based 3D shape retrieval (IBSR) aims to find the corresponding 3D shape of a given 2D image from a large 3D shape database.
Metric learning with adaptation techniques seems a natural solution to shape similarity learning.
We develop a geometry-focused multi-view metric learning framework empowered by texture synthesis.
arXiv Detail & Related papers (2020-10-23T08:52:00Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work in 3D object reconstruction on ShapeNet and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.