Deep Active Surface Models
- URL: http://arxiv.org/abs/2011.08826v5
- Date: Mon, 7 Jun 2021 22:19:00 GMT
- Title: Deep Active Surface Models
- Authors: Udaranga Wickramasinghe and Graham Knott and Pascal Fua
- Abstract summary: Active Surface Models have a long history of being useful for modeling complex 3D surfaces, but only Active Contours have been used in conjunction with deep networks.
We introduce layers that implement Active Surface Models and that can be integrated seamlessly into Graph Convolutional Networks to enforce sophisticated smoothness priors.
- Score: 60.027353171412216
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active Surface Models have a long history of being useful to model complex 3D surfaces, but only Active Contours have been used in conjunction with deep networks, and then only to produce the data term as well as meta-parameter maps controlling them. In this paper, we advocate a much tighter integration. We introduce layers that implement them and that can be integrated seamlessly into Graph Convolutional Networks to enforce sophisticated smoothness priors at an acceptable computational cost. We show that the resulting Deep Active Surface Models outperform equivalent architectures that use traditional regularization loss terms to impose smoothness priors, both for 3D surface reconstruction from 2D images and for 3D volume segmentation.
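The core machinery behind such a layer is the classic semi-implicit active-surface update, which smooths a mesh by solving a sparse linear system rather than taking an explicit gradient step. Below is a minimal NumPy/SciPy sketch of one such update using a uniform graph Laplacian; the function names are illustrative, and this is not the paper's actual differentiable layer, which builds its regularization matrix from higher-order surface derivatives and backpropagates through the solve.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def uniform_laplacian(num_verts, faces):
    """Combinatorial graph Laplacian L = D - A from triangle faces."""
    i = faces[:, [0, 1, 2]].ravel()
    j = faces[:, [1, 2, 0]].ravel()
    # Symmetric adjacency built from the three directed edges of each triangle.
    A = sp.coo_matrix((np.ones(len(i)), (i, j)), shape=(num_verts, num_verts))
    A = ((A + A.T) > 0).astype(np.float64)
    D = sp.diags(np.asarray(A.sum(axis=1)).ravel())
    return (D - A).tocsc()

def active_surface_step(verts, faces, lam=1.0):
    """One semi-implicit smoothing step: solve (I + lam * L) V_new = V.

    This is the classic active-surface update; the paper wraps a related
    scheme in a differentiable network layer, which this sketch omits.
    """
    n = verts.shape[0]
    L = uniform_laplacian(n, faces)
    M = sp.identity(n, format="csc") + lam * L
    return np.column_stack([spsolve(M, verts[:, k]) for k in range(3)])

# Example: smooth a single tetrahedron mesh.
verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
smoothed = active_surface_step(verts, faces, lam=0.5)
```

Because the system matrix is (I + lam * L), a large lam yields strong smoothing in a single stable step, which is what makes the semi-implicit scheme attractive inside a network compared with many small explicit gradient steps.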
Related papers
- Space-time 2D Gaussian Splatting for Accurate Surface Reconstruction under Complex Dynamic Scenes [30.32214593068206]
We present a space-time 2D Gaussian Splatting approach to tackle dynamic content and occlusions in complex scenes.
Specifically, to improve geometric quality in dynamic scenes, we learn canonical 2D Gaussian splats and deform them over time.
We also introduce a compositional opacity strategy, which further suppresses spurious surface recovery in occluded areas.
Experiments on real-world sparse-view video datasets and monocular dynamic datasets demonstrate that our reconstructions outperform state-of-the-art methods.
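As a rough illustration of the canonical-plus-deformation idea described above, the sketch below deforms splat centers with a small time-conditioned MLP. All names are hypothetical; the actual method also deforms splat orientation and scale and applies its compositional opacity strategy, none of which is modeled here.

```python
import torch
import torch.nn as nn

class SplatDeformer(nn.Module):
    """Hypothetical sketch: deform canonical splat centers over time."""
    def __init__(self, num_splats, hidden=64):
        super().__init__()
        self.canonical = nn.Parameter(torch.randn(num_splats, 3) * 0.1)
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, t):
        # Broadcast the scalar time to every splat and predict offsets.
        t_col = torch.full((self.canonical.shape[0], 1), float(t))
        offsets = self.mlp(torch.cat([self.canonical, t_col], dim=-1))
        return self.canonical + offsets

deformer = SplatDeformer(num_splats=1024)
centers_at_t = deformer(0.5)  # (1024, 3) deformed splat centers at time t
```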
arXiv Detail & Related papers (2024-09-27T15:50:36Z)
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach which can predict high-quality assets with 512k Gaussians and 21 input images in only 11 GB GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D points and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
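The "explicit geometric relationship" between 3D points and 2D images can be pictured as projecting 3D query points into calibrated image feature maps and sampling features there. The following PyTorch sketch shows only that projection-and-sampling step; it is not GeoLRM's deformable cross-attention, and all names and parameters are assumptions.

```python
import torch
import torch.nn.functional as F

def sample_image_features(points, feats, K, H, W):
    """Project 3D points with a pinhole camera, then bilinearly sample
    a 2D feature map at the projected pixel locations.

    points: (N, 3) in camera coordinates; feats: (C, H, W); K: (3, 3).
    """
    uv = (K @ points.T).T                   # (N, 3) homogeneous pixel coords
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1) * 2 - 1
    grid = grid.view(1, -1, 1, 2)           # (1, N, 1, 2)
    sampled = F.grid_sample(feats[None], grid, align_corners=True)
    return sampled[0, :, :, 0].T            # (N, C) per-point features

H, W = 32, 32
feats = torch.randn(16, H, W)
K = torch.tensor([[30., 0, 16], [0, 30., 16], [0, 0, 1]])
pts = torch.rand(100, 3) + torch.tensor([0., 0., 2.])  # points in front of camera
per_point = sample_image_features(pts, feats, K, H, W)
```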
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies.
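One way to parameterize points without connectivity, in the spirit of this approach, is a cycle-consistent pair of point-wise networks: flatten 3D points to 2D, map them back, and penalize the round-trip error. The sketch below is a deliberately simplified stand-in, not FAM's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: flatten 3D surface points to 2D and map them back,
# training only a round-trip (cycle-consistency) loss, so no mesh
# connectivity or pre-cut seams are ever required.
flatten = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 2))
unflatten = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 3))
opt = torch.optim.Adam(
    list(flatten.parameters()) + list(unflatten.parameters()), lr=1e-3
)

points = torch.randn(2048, 3)  # stand-in for points sampled on a surface
for step in range(200):
    uv = flatten(points)                   # per-point 2D parameterization
    recon = unflatten(uv)                  # cycle back to 3D
    loss = (recon - points).pow(2).mean()  # round-trip reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()
```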
arXiv Detail & Related papers (2024-05-23T14:39:52Z)
- Learning Neural Implicit Representations with Surface Signal Parameterizations [14.835882967340968]
We present a neural network architecture that implicitly encodes the underlying surface parameterization suitable for appearance data.
Our model remains compatible with existing mesh-based digital content with appearance data.
arXiv Detail & Related papers (2022-11-01T15:10:58Z)
- Mesh Convolution with Continuous Filters for 3D Surface Parsing [101.25796935464648]
We propose a series of modular operations for effective geometric feature learning from 3D triangle meshes.
Our mesh convolutions exploit spherical harmonics as orthonormal bases to create continuous convolutional filters.
We further contribute a novel hierarchical neural network for perceptual parsing of 3D surfaces, named PicassoNet++.
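To make "spherical harmonics as orthonormal bases" concrete: a continuous filter can assign each mesh edge a weight by evaluating a small SH expansion at the edge's direction. The SciPy sketch below shows that evaluation; it uses the real part of the complex harmonics for simplicity and is an illustration, not PicassoNet++'s implementation.

```python
import numpy as np
from scipy.special import sph_harm

def sh_filter_weights(directions, coeffs, l_max=2):
    """Evaluate a continuous convolutional filter as a linear combination
    of spherical harmonics of the relative edge direction.

    directions: (N, 3) unit vectors; coeffs: one weight per (l, m) term.
    """
    x, y, z = directions.T
    theta = np.arctan2(y, x) % (2 * np.pi)   # azimuth in [0, 2*pi)
    phi = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle in [0, pi]
    basis = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # Real part of the complex SH as a simple real-valued basis.
            basis.append(sph_harm(m, l, theta, phi).real)
    return np.stack(basis, axis=-1) @ coeffs  # (N,) filter weight per edge

rng = np.random.default_rng(0)
dirs = rng.normal(size=(10, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
n_terms = sum(2 * l + 1 for l in range(3))    # 9 terms for l_max = 2
weights = sh_filter_weights(dirs, rng.normal(size=n_terms))
```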
arXiv Detail & Related papers (2021-12-03T09:16:49Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
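The crossing rule at the heart of marching tetrahedra is compact enough to sketch. The version below handles a single tetrahedron with given SDF values; DMTet predicts the SDF and vertex offsets with a network and differentiates through this extraction, which is omitted here.

```python
import numpy as np

def march_tetrahedron(verts, sdf):
    """Marching-tetrahedra crossing rule for one tetrahedron: on each edge
    whose endpoint SDF signs differ, place a surface vertex at the linear
    zero crossing.
    """
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    crossings = []
    for a, b in edges:
        if sdf[a] * sdf[b] < 0:              # sign change: surface crosses here
            t = sdf[a] / (sdf[a] - sdf[b])   # linear interpolation parameter
            crossings.append(verts[a] + t * (verts[b] - verts[a]))
    return np.array(crossings)  # 3 or 4 points forming a triangle or quad

verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
sdf = np.array([-0.2, 0.3, 0.4, 0.5])        # one vertex inside the surface
surface_pts = march_tetrahedron(verts, sdf)  # 3 crossing points here
```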
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462]
Recent learning approaches that implicitly represent surface geometry have shown impressive results in the problem of multi-view 3D reconstruction.
We tackle these limitations for the specific problem of few-shot full 3D head reconstruction.
We learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
arXiv Detail & Related papers (2021-07-26T23:04:18Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
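Rendering an SDF typically relies on sphere tracing, where the SDF value itself is a safe step size along the ray. A minimal sketch follows, with an analytic sphere standing in for the neural SDF; the paper's contribution is the sparse-octree feature hierarchy that makes evaluating the SDF cheap, which this sketch does not model.

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4, t_max=10.0):
    """Classic sphere tracing: march along the ray by the SDF value and
    stop on a hit (|d| < eps) or when leaving the scene bounds."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if abs(d) < eps:
            return origin + t * direction  # hit point on the surface
        t += d
        if t > t_max:
            break
    return None  # ray missed the surface

unit_sphere = lambda p: np.linalg.norm(p) - 1.0  # analytic SDF stand-in
hit = sphere_trace(unit_sphere, np.array([0., 0., -3.]), np.array([0., 0., 1.]))
# hit ~= [0, 0, -1], the near intersection with the unit sphere
```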
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- The Phong Surface: Efficient 3D Model Fitting using Lifted Optimization [9.619889745900009]
Realtime perceptual and interaction capabilities in mixed reality require a range of 3D tracking problems to be solved at low latency.
We introduce a new surface model: the 'Phong surface'.
Using ideas from computer graphics, the Phong surface describes the same 3D shape as a triangulated mesh model, but with continuous surface normals.
We show that Phong surfaces retain the convergence benefits of smoother surface models, while triangle meshes do not.
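The continuous normals come from the same trick as Phong shading: blend per-vertex normals with barycentric coordinates, so the normal field varies smoothly inside and across triangles. A minimal sketch for a single triangle, with illustrative names:

```python
import numpy as np

def phong_normal(tri_verts, tri_normals, p):
    """Blend per-vertex normals with the barycentric coordinates of p,
    giving a continuously varying normal field (unlike flat face normals)."""
    a, b, c = tri_verts
    # Barycentric coordinates of p via the standard 2x2 solve.
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    u = 1.0 - v - w
    n = u * tri_normals[0] + v * tri_normals[1] + w * tri_normals[2]
    return n / np.linalg.norm(n)

tri = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
normals = np.array([[0., 0, 1], [0.3, 0, 0.95], [0, 0.3, 0.95]])
n = phong_normal(tri, normals, p=np.array([0.2, 0.2, 0.0]))
```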
arXiv Detail & Related papers (2020-07-09T17:10:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.