EvAC3D: From Event-based Apparent Contours to 3D Models via Continuous
Visual Hulls
- URL: http://arxiv.org/abs/2304.05296v1
- Date: Tue, 11 Apr 2023 15:46:16 GMT
- Title: EvAC3D: From Event-based Apparent Contours to 3D Models via Continuous
Visual Hulls
- Authors: Ziyun Wang, Kenneth Chaney, Kostas Daniilidis
- Abstract summary: 3D reconstruction from multiple views is a successful computer vision field with many deployed applications.
We study the problem of 3D reconstruction from event cameras, motivated by the advantages of event-based cameras in terms of low power and latency.
We propose Apparent Contour Events (ACE), a novel event-based representation that defines the geometry of the apparent contour of an object.
- Score: 46.94040300725127
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: 3D reconstruction from multiple views is a successful computer vision field
with many deployed applications. The state of the art is based on traditional
RGB frames, which enable the optimization of photo-consistency across views. In
this paper, we study the problem of 3D reconstruction from event cameras,
motivated by the advantages of event-based cameras in terms of low power and
latency, as well as by the biological evidence that eyes in nature capture the
same data and still perceive 3D shape well. The foundation of our
hypothesis that 3D reconstruction is feasible using events lies in the
information contained in the occluding contours and in the continuous scene
acquisition with events. We propose Apparent Contour Events (ACE), a novel
event-based representation that defines the geometry of the apparent contour of
an object. We represent ACE by a spatially and temporally continuous implicit
function defined in the event x-y-t space. Furthermore, we design a novel
continuous Voxel Carving algorithm enabled by the high temporal resolution of
the Apparent Contour Events. To evaluate the performance of the method, we
collect MOEC-3D, a 3D event dataset of a set of common real-world objects. We
demonstrate the ability of EvAC3D to reconstruct high-fidelity mesh surfaces
from real event sequences while allowing the refinement of the 3D
reconstruction for each individual event.
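
The per-event carving step is the part of the pipeline that benefits most from the events' microsecond timestamps, and it can be illustrated with a short sketch. The fragment below is a hedged illustration rather than the authors' implementation: `pose_fn`, `backproject_ray`, and `carve_event` are hypothetical names, and the sketch assumes a calibrated pinhole camera, a continuous camera trajectory, and events already classified as Apparent Contour Events; each ACE casts a viewing ray that votes into a voxel count grid, so the reconstruction can be refined one event at a time.

```python
# Minimal, hypothetical sketch of per-event continuous carving (not the
# authors' released code). Assumptions: a calibrated pinhole intrinsic
# matrix K, a continuous trajectory pose_fn(t) -> (R, t_vec) with the
# convention X_cam = R @ X_world + t_vec, and events already classified
# as Apparent Contour Events by an upstream network.
import numpy as np

def backproject_ray(K, R, t_vec, x, y):
    """Camera center and unit viewing-ray direction in world coordinates."""
    center = -R.T @ t_vec                          # camera center in world frame
    d_world = R.T @ (np.linalg.inv(K) @ np.array([x, y, 1.0]))
    return center, d_world / np.linalg.norm(d_world)

def carve_event(grid, grid_origin, voxel_size, K, pose_fn, event, step=0.5):
    """Accumulate a single ACE into a voxel count grid by marching along
    its viewing ray; voxels hit by many contour rays approximate the
    continuous visual-hull surface."""
    x, y, t = event
    R, t_vec = pose_fn(t)                          # pose interpolated at the event time
    center, direction = backproject_ray(K, R, t_vec, x, y)
    shape = np.asarray(grid.shape)
    # Conservative ray length: reach the grid corner, then cross the diagonal.
    max_depth = np.linalg.norm(grid_origin - center) + np.linalg.norm(shape * voxel_size)
    for depth in np.arange(0.0, max_depth, step * voxel_size):
        p = center + depth * direction
        idx = np.floor((p - grid_origin) / voxel_size).astype(int)
        if np.all(idx >= 0) and np.all(idx < shape):
            grid[tuple(idx)] += 1                  # per-event update, no frames needed
```

After all events are processed, a surface could be extracted by thresholding the counts and running marching cubes; the paper's actual carving rule, ACE classifier, and mesh extraction may differ in detail.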
Related papers
- Elite-EvGS: Learning Event-based 3D Gaussian Splatting by Distilling Event-to-Video Priors [8.93657924734248]
Event cameras are bio-inspired sensors that output asynchronous and sparse event streams, instead of fixed frames.
We propose a novel event-based 3DGS framework, named Elite-EvGS.
Our key idea is to distill the prior knowledge from the off-the-shelf event-to-video (E2V) models to effectively reconstruct 3D scenes from events.
arXiv Detail & Related papers (2024-09-20T10:47:52Z) - Dynamic Scene Understanding through Object-Centric Voxelization and Neural Rendering [57.895846642868904]
We present a 3D generative model named DynaVol-S for dynamic scenes that enables object-centric learning.
Object-centric voxelization infers per-object occupancy probabilities at individual spatial locations.
Our approach integrates 2D semantic features to create 3D semantic grids, representing the scene through multiple disentangled voxel grids.
arXiv Detail & Related papers (2024-07-30T15:33:58Z) - Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z) - Zero-Shot Multi-Object Scene Completion [59.325611678171974]
We present a 3D scene completion method that recovers the complete geometry of multiple unseen objects in complex scenes from a single RGB-D image.
Our method outperforms the current state-of-the-art on both synthetic and real-world datasets.
arXiv Detail & Related papers (2024-03-21T17:59:59Z) - Object-Centric Domain Randomization for 3D Shape Reconstruction in the Wild [22.82439286651921]
One of the biggest challenges in single-view 3D shape reconstruction in the wild is the scarcity of <3D shape, 2D image>-paired data from real-world environments.
Inspired by remarkable achievements via domain randomization, we propose ObjectDR which synthesizes such paired data via a random simulation of visual variations in object appearances and backgrounds.
arXiv Detail & Related papers (2024-03-21T16:40:10Z) - 3D-SceneDreamer: Text-Driven 3D-Consistent Scene Generation [51.64796781728106]
We propose a generative refinement network to synthesize new contents with higher quality by exploiting the natural image prior of the 2D diffusion model together with the global 3D information of the current scene.
Our approach supports wide variety of scene generation and arbitrary camera trajectories with improved visual quality and 3D consistency.
arXiv Detail & Related papers (2024-03-14T14:31:22Z) - Exploring Event-based Human Pose Estimation with 3D Event Representations [26.34100847541989]
We introduce two 3D event representations: the Rasterized Event Point Cloud (Ras EPC) and the Decoupled Event Voxel (DEV).
The Ras EPC aggregates events within concise temporal slices at identical positions, preserving their 3D attributes along with statistical information, thereby significantly reducing memory and computational demands.
Our methods are tested on the DHP19 public dataset, MMHPSD dataset, and our EV-3DPW dataset, with further qualitative validation via a derived driving scene dataset EV-JAAD and an outdoor collection vehicle.
arXiv Detail & Related papers (2023-11-08T10:45:09Z) - Voxel-based 3D Detection and Reconstruction of Multiple Objects from a
Single Image [22.037472446683765]
We learn a regular grid of 3D voxel features from the input image which is aligned with 3D scene space via a 3D feature lifting operator.
Based on the 3D voxel features, our novel CenterNet-3D detection head formulates the 3D detection as keypoint detection in the 3D space.
We devise an efficient coarse-to-fine reconstruction module, including coarse-level voxelization and a novel local PCA-SDF shape representation.
arXiv Detail & Related papers (2021-11-04T18:30:37Z) - Learning Canonical 3D Object Representation for Fine-Grained Recognition [77.33501114409036]
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z)