Neural Architecture Search for Efficient Uncalibrated Deep Photometric
Stereo
- URL: http://arxiv.org/abs/2110.05621v1
- Date: Mon, 11 Oct 2021 21:22:17 GMT
- Title: Neural Architecture Search for Efficient Uncalibrated Deep Photometric
Stereo
- Authors: Francesco Sarno, Suryansh Kumar, Berk Kaya, Zhiwu Huang, Vittorio
Ferrari, Luc Van Gool
- Abstract summary: We leverage differentiable neural architecture search (NAS) strategy to find uncalibrated PS architecture automatically.
Experiments on the DiLiGenT dataset show that the automatically searched neural architectures' performance compares favorably with the state-of-the-art uncalibrated PS methods.
- Score: 105.05232615226602
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present an automated machine learning approach for uncalibrated
photometric stereo (PS). Our work aims at discovering lightweight and
computationally efficient PS neural networks with excellent surface normal
accuracy. Unlike previous uncalibrated deep PS networks, which are handcrafted
and carefully tuned, we leverage differentiable neural architecture search
(NAS) strategy to find uncalibrated PS architecture automatically. We begin by
defining a discrete search space for a light calibration network and a normal
estimation network, respectively. We then perform a continuous relaxation of
this search space and present a gradient-based optimization strategy to find an
efficient light calibration and normal estimation network. Directly applying
the NAS methodology to uncalibrated PS is not straightforward as certain
task-specific constraints must be satisfied, which we impose explicitly.
Moreover, we search for and train the two networks separately to account for
the Generalized Bas-Relief (GBR) ambiguity. Extensive experiments on the
DiLiGenT dataset show that the automatically searched neural architectures'
performance compares favorably with the state-of-the-art uncalibrated PS
methods while having a lower memory footprint.
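The gradient-based search described above follows the usual differentiable-NAS recipe of relaxing a discrete operation choice into a softmax-weighted mixture. The PyTorch sketch below illustrates that general idea only; the candidate operations, channel counts, and the task-specific constraints mentioned in the abstract are hypothetical stand-ins, not the authors' actual search space.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Continuous relaxation of a discrete choice among candidate operations."""

    def __init__(self, channels: int):
        super().__init__()
        # Hypothetical candidate set; the paper's search space for the light
        # calibration and normal estimation networks is not reproduced here.
        self.ops = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),
            nn.Identity(),
        ])
        # Architecture parameters (alpha), optimized by gradient descent
        # alongside the network weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Softmax turns the discrete op choice into a differentiable mixture.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# After the search converges, the operation with the largest alpha is kept,
# yielding a compact discrete architecture to retrain from scratch.
```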
Related papers
- Local Loss Optimization in the Infinite Width: Stable Parameterization of Predictive Coding Networks and Target Propagation [8.35644084613785]
We introduce the maximal update parameterization ($\mu$P) in the infinite-width limit for two representative designs of local targets.
By analyzing deep linear networks, we found that PC's gradients interpolate between first-order and Gauss-Newton-like gradients.
We demonstrate that, in specific standard settings, PC in the infinite-width limit behaves more similarly to the first-order gradient.
arXiv Detail & Related papers (2024-11-04T11:38:27Z) - Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale (see the sketch below).
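A minimal sketch of that iterative soft-shrinkage step, assuming a magnitude-based notion of "unimportant" weights and a hypothetical shrinkage rate; the ISS-P paper's exact threshold schedule may differ.

```python
import torch

def soft_shrink_step(weight: torch.Tensor, sparsity: float, rate: float = 0.1) -> torch.Tensor:
    """Shrink the lowest-magnitude fraction of weights toward zero (in place).

    Unlike hard pruning, the selected weights are only attenuated, so they can
    recover in later iterations; the attenuation is proportional to each
    weight's own magnitude (weight * rate).
    """
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values  # magnitude cut-off
    unimportant = weight.abs() <= threshold
    weight.data[unimportant] *= (1.0 - rate)  # soft shrinkage instead of zeroing
    return weight
```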
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm is able to automatically look for a better initialization with negligible cost.
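As a rough illustration of the kind of quantity involved, the snippet below computes the average cosine similarity between per-sample gradients at initialization. The paper's exact definition of GradCosine, its norm constraint, and the NIO search loop are not reproduced here, so treat the function and its name as assumptions.

```python
import torch
import torch.nn.functional as F

def grad_cosine(model, loss_fn, xs, ys):
    """Average pairwise cosine similarity of per-sample gradient vectors."""
    per_sample_grads = []
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, list(model.parameters()))
        per_sample_grads.append(torch.cat([g.flatten() for g in grads]))
    sims = [F.cosine_similarity(per_sample_grads[i], per_sample_grads[j], dim=0)
            for i in range(len(per_sample_grads))
            for j in range(i + 1, len(per_sample_grads))]
    return torch.stack(sims).mean()
```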
arXiv Detail & Related papers (2022-10-12T06:49:16Z) - Pruning-as-Search: Efficient Neural Architecture Search via Channel
Pruning and Structural Reparameterization [50.50023451369742]
Pruning-as-Search (PaS) is an end-to-end channel pruning method that searches out the desired sub-network automatically and efficiently.
Our proposed architecture outperforms prior art by around 1.0% top-1 accuracy on the ImageNet-1000 classification task.
arXiv Detail & Related papers (2022-06-02T17:58:54Z) - Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z) - CONetV2: Efficient Auto-Channel Size Optimization for CNNs [35.951376988552695]
This work introduces a method that is efficient in computationally constrained environments by examining the micro-search space of channel size.
In tackling channel-size optimization, we design an automated algorithm to extract the dependencies within different connected layers of the network.
We also introduce a novel metric that highly correlates with test accuracy and enables analysis of individual network layers.
arXiv Detail & Related papers (2021-10-13T16:17:19Z) - An optimised deep spiking neural network architecture without gradients [7.183775638408429]
We present an end-to-end trainable modular event-driven neural architecture that uses local synaptic and threshold adaptation rules.
The architecture represents a highly abstracted model of existing Spiking Neural Network (SNN) architectures.
arXiv Detail & Related papers (2021-09-27T05:59:12Z) - Dense for the Price of Sparse: Improved Performance of Sparsely
Initialized Networks via a Subspace Offset [0.0]
We introduce a new 'DCT plus Sparse' layer architecture, which maintains information propagation and trainability even with as little as 0.01% of trainable kernel parameters remaining.
Switching from standard sparse layers to DCT plus Sparse layers does not increase the storage footprint of a network and incurs only a small additional computational overhead.
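A hedged sketch of that layer idea for a fully connected case: the weight is a fixed DCT basis (regenerable on the fly, so it adds nothing to the stored trainable parameters) plus a sparse trainable offset. The linear-layer setting, random sparsity pattern, and zero initialization are assumptions for illustration; the paper applies the idea to convolutional kernels.

```python
import math
import torch
import torch.nn as nn

class DCTPlusSparseLinear(nn.Module):
    """Fixed DCT basis plus a sparse trainable offset (illustrative only)."""

    def __init__(self, features: int, density: float = 0.01):
        super().__init__()
        # Orthonormal DCT-II matrix: fixed, never trained, cheap to regenerate.
        n = torch.arange(features, dtype=torch.float32)
        k = n.unsqueeze(1)
        dct = math.sqrt(2.0 / features) * torch.cos(math.pi * (n + 0.5) * k / features)
        dct[0] /= math.sqrt(2.0)
        self.register_buffer("dct", dct)
        # Sparse mask: only `density` of the entries carry trainable offsets.
        self.register_buffer("mask", (torch.rand(features, features) < density).float())
        self.offset = nn.Parameter(torch.zeros(features, features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weight = self.dct + self.offset * self.mask  # dense basis + sparse update
        return x @ weight.t()
```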
arXiv Detail & Related papers (2021-02-12T00:05:02Z) - MS-RANAS: Multi-Scale Resource-Aware Neural Architecture Search [94.80212602202518]
We propose Multi-Scale Resource-Aware Neural Architecture Search (MS-RANAS).
We employ a one-shot architecture search approach in order to obtain a reduced search cost.
We achieve state-of-the-art results in terms of accuracy-speed trade-off.
arXiv Detail & Related papers (2020-09-29T11:56:01Z)