The Phong Surface: Efficient 3D Model Fitting using Lifted Optimization
- URL: http://arxiv.org/abs/2007.04940v1
- Date: Thu, 9 Jul 2020 17:10:11 GMT
- Title: The Phong Surface: Efficient 3D Model Fitting using Lifted Optimization
- Authors: Jingjing Shen, Thomas J. Cashman, Qi Ye, Tim Hutton, Toby Sharp,
Federica Bogo, Andrew William Fitzgibbon, Jamie Shotton
- Abstract summary: Realtime perceptual and interaction capabilities in mixed reality require a range of 3D tracking problems to be solved at low latency.
We introduce a new surface model: the `Phong surface'.
Using ideas from computer graphics, the Phong surface describes the same 3D shape as a triangulated mesh model, but with continuous surface normals.
We show that Phong surfaces retain the convergence benefits of smoother surface models, while triangle meshes do not.
- Score: 9.619889745900009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Realtime perceptual and interaction capabilities in mixed reality require a
range of 3D tracking problems to be solved at low latency on
resource-constrained hardware such as head-mounted devices. Indeed, for devices
such as HoloLens 2 where the CPU and GPU are left available for applications,
multiple tracking subsystems are required to run on a continuous, real-time
basis while sharing a single Digital Signal Processor. To solve model-fitting
problems for HoloLens 2 hand tracking, where the computational budget is
approximately 100 times smaller than an iPhone 7, we introduce a new surface
model: the `Phong surface'. Using ideas from computer graphics, the Phong
surface describes the same 3D shape as a triangulated mesh model, but with
continuous surface normals which enable the use of lifting-based optimization,
providing significant efficiency gains over ICP-based methods. We show that
Phong surfaces retain the convergence benefits of smoother surface models,
while triangle meshes do not.
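
The computer-graphics idea being borrowed is Phong normal interpolation: normals stored at the mesh vertices are blended with barycentric weights across each triangle, so the normal field varies continuously over the surface even though the geometry is still a plain triangle mesh. The snippet below is a minimal NumPy sketch of that interpolation, with illustrative normals and weights; it is not the paper's implementation.

```python
import numpy as np

def phong_normal(n0, n1, n2, bary):
    """Continuously varying surface normal at a point inside a triangle.

    n0, n1, n2: unit normals stored at the triangle's three vertices
    bary:       barycentric coordinates (w0, w1, w2) of the query point

    A plain triangle mesh has one flat normal per face, so the normal
    field jumps at every edge; Phong interpolation blends the vertex
    normals instead, giving a normal field that varies smoothly across
    and between faces.
    """
    w0, w1, w2 = bary
    n = w0 * np.asarray(n0) + w1 * np.asarray(n1) + w2 * np.asarray(n2)
    return n / np.linalg.norm(n)

# Example: at the triangle's centroid the three vertex normals are
# averaged, rather than snapping to a single flat face normal.
n_a = np.array([0.0, 0.0, 1.0])
n_b = np.array([0.6, 0.0, 0.8])
n_c = np.array([0.0, 0.6, 0.8])
print(phong_normal(n_a, n_b, n_c, bary=(1/3, 1/3, 1/3)))
```
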
Related papers
- Occupancy-Based Dual Contouring [12.944046673902415]
We introduce a dual contouring method that provides state-of-the-art performance for occupancy functions.
Our method is learning-free and carefully designed to maximize the use of GPU parallelization.
arXiv Detail & Related papers (2024-09-20T11:32:21Z)
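
For background on the entry above, the sketch below extracts a boundary polyline from a 2D occupancy grid in the dual (surface-nets) style: one vertex per cell whose corners disagree, and one segment per grid edge whose endpoints disagree. It is a simplified CPU illustration of the general idea, not the paper's learning-free GPU-parallel method; the cell-centre vertex placement and all names are assumptions.

```python
import numpy as np

def dual_contour_2d(occ):
    """Extract a polyline from a 2D occupancy grid, dual-contouring style.

    occ: (nx+1, ny+1) boolean array of occupancy sampled at grid corners.
    Returns (vertices, segments): one vertex per cell whose corners
    disagree, one segment per interior grid edge whose endpoints disagree.
    """
    nx, ny = occ.shape[0] - 1, occ.shape[1] - 1
    cell_vert = {}   # (i, j) cell index -> row index in `verts`
    verts = []

    # One vertex per "mixed" cell; a full dual contouring method would
    # solve for an optimal position, here we simply use the cell centre.
    for i in range(nx):
        for j in range(ny):
            corners = occ[i:i + 2, j:j + 2]
            if corners.any() and not corners.all():
                cell_vert[(i, j)] = len(verts)
                verts.append((i + 0.5, j + 0.5))

    segments = []
    # Horizontal grid edge (i, j)--(i+1, j) is shared by cells (i, j-1) and (i, j).
    for i in range(nx):
        for j in range(1, ny):
            if occ[i, j] != occ[i + 1, j]:
                segments.append((cell_vert[(i, j - 1)], cell_vert[(i, j)]))
    # Vertical grid edge (i, j)--(i, j+1) is shared by cells (i-1, j) and (i, j).
    for i in range(1, nx):
        for j in range(ny):
            if occ[i, j] != occ[i, j + 1]:
                segments.append((cell_vert[(i - 1, j)], cell_vert[(i, j)]))

    return np.array(verts), segments

# Example: occupancy of a disc; the extracted segments trace its boundary.
xs, ys = np.meshgrid(np.arange(9), np.arange(9), indexing="ij")
occ = (xs - 4.0) ** 2 + (ys - 4.0) ** 2 < 9.0
v, s = dual_contour_2d(occ)
print(len(v), "vertices,", len(s), "segments")
```
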
- GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation [65.33726478659304]
We introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach which can predict high-quality assets with 512k Gaussians and 21 input images in only 11 GB GPU memory.
Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images.
GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms.
arXiv Detail & Related papers (2024-06-21T17:49:31Z)
- 3D Face Tracking from 2D Video through Iterative Dense UV to Image Flow [15.479024531161476]
We propose a novel face tracker, FlowFace, that introduces an innovative 2D alignment network for dense per-vertex alignment.
Unlike prior work, FlowFace is trained on high-quality 3D scan annotations rather than weak supervision or synthetic data.
Our method exhibits superior performance on both custom and publicly available benchmarks.
arXiv Detail & Related papers (2024-04-15T14:20:07Z)
- 2D Gaussian Splatting for Geometrically Accurate Radiance Fields [50.056790168812114]
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high quality novel view synthesis and fast rendering speed without baking.
We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images.
We demonstrate that our differentiable terms allow for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering.
arXiv Detail & Related papers (2024-03-26T17:21:24Z)
- HybridNeRF: Efficient Neural Rendering via Adaptive Volumetric Surfaces [71.1071688018433]
Neural radiance fields provide state-of-the-art view synthesis quality but tend to be slow to render.
We propose a method, HybridNeRF, that leverages the strengths of both representations by rendering most objects as surfaces.
We improve error rates by 15-30% while achieving real-time framerates (at least 36 FPS) for virtual-reality resolutions (2K×2K).
arXiv Detail & Related papers (2023-12-05T22:04:49Z)
- EvaSurf: Efficient View-Aware Implicit Textured Surface Reconstruction on Mobile Devices [53.28220984270622]
We present an implicit textured surface reconstruction method on mobile devices.
Our method can reconstruct high-quality appearance and accurate mesh on both synthetic and real-world datasets.
Our method can be trained in just 1-2 hours using a single GPU and run on mobile devices at over 40 FPS (Frames Per Second).
arXiv Detail & Related papers (2023-11-16T11:30:56Z)
- Deep Active Surface Models [60.027353171412216]
Active Surface Models have a long history of being useful to model complex 3D surfaces but only Active Contours have been used in conjunction with deep networks.
We introduce layers that implement Active Surface Models and can be integrated seamlessly into Graph Convolutional Networks to enforce sophisticated smoothness priors.
arXiv Detail & Related papers (2020-11-17T18:48:28Z)
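
To make the smoothness priors mentioned in the entry above concrete, the sketch below applies one explicit uniform-Laplacian smoothing step to mesh vertices, the basic prior that active-surface-style layers impose. The paper's layers instead minimize an active-surface energy semi-implicitly inside the network, which this standalone NumPy function does not reproduce; the function and its parameters are illustrative.

```python
import numpy as np

def laplacian_smooth(vertices, faces, step=0.5):
    """One explicit uniform-Laplacian smoothing step on a triangle mesh.

    vertices: (V, 3) float array of vertex positions
    faces:    (F, 3) int array of triangle vertex indices
    Each vertex moves a fraction `step` towards the mean of its mesh
    neighbours, the basic smoothness prior regularizing a predicted mesh.
    """
    edges = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))   # de-duplicate shared edges

    acc = np.zeros_like(vertices)
    deg = np.zeros(len(vertices))
    for u, v in edges:
        acc[u] += vertices[v]
        acc[v] += vertices[u]
        deg[u] += 1
        deg[v] += 1

    target = acc / np.maximum(deg, 1)[:, None]   # neighbour mean per vertex
    return vertices + step * (target - vertices)

# Example: smoothing a noisy tetrahedron pulls its vertices back together.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(laplacian_smooth(verts + 0.05 * np.random.randn(4, 3), faces))
```
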
- Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z)
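
The refinement loop described in the entry above can be sketched as repeatedly adjusting exactly one 3D box parameter. In the snippet below a greedy oracle with a toy L1 error stands in for the learned policy; the actual method trains the policy with reinforcement learning and a delayed reward, so the parameterisation, step sizes, and error measure here are all assumptions.

```python
import numpy as np

# A 3D box prediction is parameterised by 7 numbers:
# centre (x, y, z), size (w, h, l) and yaw angle.
STEP_SIZES = np.array([0.2, 0.2, 0.2, 0.1, 0.1, 0.1, 0.05])

def greedy_policy(box, target):
    """Stand-in for the learned policy: pick the single parameter whose
    adjustment by one fixed step (up or down) most reduces a toy L1
    error. The actual method learns this choice with RL and a delayed
    reward instead of peeking at the ground truth."""
    best = None
    for i in range(len(box)):
        for sign in (-1.0, 1.0):
            cand = box.copy()
            cand[i] += sign * STEP_SIZES[i]
            err = np.abs(cand - target).sum()
            if best is None or err < best[0]:
                best = (err, i, sign)
    return best[1], best[2]

def refine(box, target, n_steps=30):
    """Axial refinement: change exactly one box parameter per step."""
    box = box.copy()
    for _ in range(n_steps):
        i, sign = greedy_policy(box, target)
        box[i] += sign * STEP_SIZES[i]
    return box

init = np.array([0.0, 0.0, 10.0, 1.5, 1.5, 3.5, 0.0])   # initial prediction
gt   = np.array([0.6, -0.2, 11.0, 1.6, 1.4, 4.0, 0.1])  # ground truth
print(refine(init, gt))
```
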
- HyperFlow: Representing 3D Objects as Surfaces [19.980044265074298]
We present a novel generative model that leverages hypernetworks to create continuous 3D object representations in the form of lightweight surfaces (meshes) directly out of point clouds.
We obtain continuous mesh-based object representations that yield better qualitative results than competing approaches.
arXiv Detail & Related papers (2020-06-15T19:18:02Z)
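
The entry above relies on a hypernetwork, i.e. one network that outputs the weights of another. The sketch below shows only that mechanism, with untrained random weights and made-up sizes: a hypothetical point-cloud embedding is mapped to the parameters of a tiny MLP that maps 2D surface coordinates to 3D points. The paper's full model also involves continuous normalizing flows, which are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target network shape: a tiny MLP mapping 2D surface coordinates (u, v)
# to a 3D point on the object surface.
IN, HID, OUT = 2, 16, 3
N_TARGET_WEIGHTS = IN * HID + HID + HID * OUT + OUT

# Hypernetwork: maps a (hypothetical) point-cloud embedding to the full
# weight vector of the target network. Random weights here; in practice
# both networks would be trained jointly.
EMB = 32
W_hyper = rng.normal(0, 0.1, (EMB, N_TARGET_WEIGHTS))

def target_forward(uv, theta):
    """Run the target MLP whose weights `theta` came from the hypernetwork."""
    i = 0
    W1 = theta[i:i + IN * HID].reshape(IN, HID); i += IN * HID
    b1 = theta[i:i + HID]; i += HID
    W2 = theta[i:i + HID * OUT].reshape(HID, OUT); i += HID * OUT
    b2 = theta[i:i + OUT]
    h = np.tanh(uv @ W1 + b1)
    return h @ W2 + b2

embedding = rng.normal(size=EMB)      # stands in for a point-cloud encoder output
theta = embedding @ W_hyper           # hypernetwork output = target-net weights
uv = rng.uniform(0, 1, size=(5, 2))   # samples on the 2D surface parameterisation
print(target_forward(uv, theta))      # -> five 3D surface points
```
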