An Efficient End-to-End 3D Model Reconstruction based on Neural
Architecture Search
- URL: http://arxiv.org/abs/2202.13313v2
- Date: Tue, 1 Mar 2022 02:51:51 GMT
- Title: An Efficient End-to-End 3D Model Reconstruction based on Neural
Architecture Search
- Authors: Yongdong Huang, Yuanzhan Li, Xulong Cao, Siyu Zhang, Shen Cai, Ting
Lu, Yuqi Liu
- Abstract summary: We propose an efficient model reconstruction method utilizing neural architecture search (NAS) and binary classification.
Our method achieves significantly higher reconstruction accuracy using fewer network parameters.
- Score: 5.913946292597174
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Using neural networks to represent 3D objects has become popular. However,
many previous works employ neural networks with fixed architecture and size to
represent different 3D objects, which leads to excessive network parameters for
simple objects and limited reconstruction accuracy for complex objects. For
each 3D model, it is desirable to have an end-to-end neural network with as few
parameters as possible to achieve high-fidelity reconstruction. In this paper,
we propose an efficient model reconstruction method utilizing neural
architecture search (NAS) and binary classification. Taking the number of
layers, the number of nodes in each layer, and the activation function of each
layer as the search space, a specific network architecture can be obtained
based on reinforcement learning. Furthermore, to dispense with the
traditional surface reconstruction algorithms (e.g., marching cubes) applied after
network inference, we complete the end-to-end network by classifying binary
voxels. Compared to other signed distance field (SDF) prediction or binary
classification networks, our method achieves significantly higher
reconstruction accuracy using fewer network parameters.
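The abstract specifies the NAS search space (number of layers, nodes per layer, activation per layer) and the binary voxel-classification output, but gives no implementation details. The snippet below is a minimal sketch under those assumptions: it samples one candidate MLP from a hypothetical search space and thresholds its output into binary voxel occupancies. The helper names (`sample_architecture`, `build_network`) and the concrete ranges are illustrative, not taken from the paper, and the reinforcement-learning controller that would actually drive the search is omitted.

```python
# Minimal sketch (not the authors' code): draw one candidate network from a
# search space over depth, layer widths, and per-layer activations, then use it
# as a binary voxel-occupancy classifier. A real NAS loop would let an RL
# controller propose these choices and reward candidates by reconstruction
# accuracy; here we just sample uniformly at random.
import random
import torch
import torch.nn as nn

# Hypothetical search space (the concrete ranges are assumptions, not from the paper).
LAYER_COUNTS = [2, 3, 4, 5, 6]
NODE_COUNTS  = [16, 32, 64, 128]
ACTIVATIONS  = {"relu": nn.ReLU, "tanh": nn.Tanh, "elu": nn.ELU}

def sample_architecture():
    """Draw one architecture description: a (width, activation) pair per layer."""
    depth = random.choice(LAYER_COUNTS)
    return [(random.choice(NODE_COUNTS), random.choice(list(ACTIVATIONS)))
            for _ in range(depth)]

def build_network(arch, in_dim=3):
    """Build an MLP mapping a 3D query point to an occupancy logit."""
    layers, prev = [], in_dim
    for width, act in arch:
        layers += [nn.Linear(prev, width), ACTIVATIONS[act]()]
        prev = width
    layers.append(nn.Linear(prev, 1))  # single logit: occupied vs. empty voxel
    return nn.Sequential(*layers)

# Example: evaluate one sampled candidate on a batch of voxel-center coordinates.
arch = sample_architecture()
net = build_network(arch)
points = torch.rand(1024, 3)                  # query points in [0, 1]^3
occupancy = torch.sigmoid(net(points)) > 0.5  # binary voxels, no marching-cubes step
print(arch, occupancy.float().mean().item())
```

Thresholding the predicted occupancy directly yields binary voxels, which is what lets the end-to-end pipeline skip a separate surface-extraction step such as marching cubes after inference.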
Related papers
- Multi-Objective Neural Architecture Search for In-Memory Computing [0.5892638927736115]
We employ neural architecture search (NAS) to enhance the efficiency of deploying diverse machine learning (ML) tasks on in-memory computing architectures.
Our evaluation of this NAS approach for IMC architecture deployment spans three distinct image classification datasets.
arXiv Detail & Related papers (2024-06-10T19:17:09Z)
- Systematic construction of continuous-time neural networks for linear dynamical systems [0.0]
We discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems.
We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as the solution of a first-order or second-order ordinary differential equation (ODE).
Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm to compute a sparse architecture and network parameters directly from the given linear time-invariant (LTI) system.
arXiv Detail & Related papers (2024-03-24T16:16:41Z)
- SeMLaPS: Real-time Semantic Mapping with Latent Prior Networks and Quasi-Planar Segmentation [53.83313235792596]
We present a new methodology for real-time semantic mapping from RGB-D sequences.
It combines a 2D neural network and a 3D network based on a SLAM system with 3D occupancy mapping.
Our system achieves state-of-the-art semantic mapping quality among 2D-3D network-based systems.
arXiv Detail & Related papers (2023-06-28T22:36:44Z)
- DNArch: Learning Convolutional Neural Architectures by Backpropagation [19.399535453449488]
We present DNArch, a method that jointly learns the weights and the architecture of Convolutional Neural Networks (CNNs) by backpropagation.
In particular, DNArch allows learning (i) the size of convolutional kernels at each layer, (ii) the number of channels at each layer, (iii) the position and values of downsampling layers, and (iv) the depth of the network.
arXiv Detail & Related papers (2023-02-10T17:56:49Z)
- Learnable Triangulation for Deep Learning-based 3D Reconstruction of Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z)
- Unsupervised Learning of 3D Object Categories from Videos in the Wild [75.09720013151247]
We focus on learning a model from multiple views of a large collection of object instances.
We propose a new neural network design, called warp-conditioned ray embedding (WCR), which significantly improves reconstruction.
Our evaluation demonstrates performance improvements over several deep monocular reconstruction baselines on existing benchmarks.
arXiv Detail & Related papers (2021-03-30T17:57:01Z)
- Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.