Noise-Sampling Cross Entropy Loss: Improving Disparity Regression Via Cost Volume Aware Regularizer
- URL: http://arxiv.org/abs/2005.08806v2
- Date: Thu, 28 May 2020 09:59:11 GMT
- Title: Noise-Sampling Cross Entropy Loss: Improving Disparity Regression Via Cost Volume Aware Regularizer
- Authors: Yang Chen, Zongqing Lu, Xuechen Zhang, Lei Chen and Qingmin Liao
- Abstract summary: We propose a noise-sampling cross entropy loss function to regularize the cost volume produced by deep neural networks to be unimodal and coherent.
Experiments validate that the proposed noise-sampling cross entropy loss not only helps neural networks learn a more informative cost volume, but also leads to better stereo matching performance.
- Score: 38.86850327892113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent end-to-end deep neural networks for disparity regression have achieved state-of-the-art performance. However, many well-established properties of disparity estimation are ignored by these deep learning algorithms. In particular, the matching cost volume, one of the most important intermediate results, is treated as an ordinary feature for the subsequent softargmin regression, lacking the explicit constraints imposed by traditional algorithms. In this paper, inspired by the canonical definition of the cost volume, we propose the noise-sampling cross entropy loss function to regularize the cost volume produced by deep neural networks to be unimodal and coherent. Extensive experiments validate that the proposed noise-sampling cross entropy loss not only helps neural networks learn a more informative cost volume, but also leads to better stereo matching performance compared with several representative algorithms.
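The abstract does not spell out the loss itself, so the following NumPy sketch only illustrates the general pattern such cost-volume regularizers build on: supervise the softmax of the negated cost volume with a cross entropy against a unimodal target centred on the ground-truth disparity, alongside the usual softargmin regression. The Gaussian target, the sigma parameter, and all function names are assumptions for illustration; the paper's specific contribution is constructing the target by sampling noise around the ground truth.

```python
import numpy as np

def softmax(x, axis=0):
    x = x - x.max(axis=axis, keepdims=True)   # subtract max for stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def softargmin_disparity(cost_volume):
    """Softargmin regression: expected disparity under softmax(-cost).
    cost_volume: (D, H, W) matching costs, lower cost = better match."""
    D = cost_volume.shape[0]
    prob = softmax(-cost_volume, axis=0)              # (D, H, W)
    disps = np.arange(D).reshape(D, 1, 1)
    return (prob * disps).sum(axis=0)                 # (H, W) disparity map

def unimodal_cross_entropy(cost_volume, gt_disp, sigma=1.0):
    """Cross entropy between softmax(-cost) and a unimodal target
    centred on the ground-truth disparity. NOTE: the fixed Gaussian
    target is an illustrative stand-in; the paper builds its target
    by sampling noise around gt_disp (its 'noise-sampling' scheme)."""
    D = cost_volume.shape[0]
    disps = np.arange(D).reshape(D, 1, 1)
    target = np.exp(-0.5 * ((disps - gt_disp[None]) / sigma) ** 2)
    target /= target.sum(axis=0, keepdims=True)       # normalise over disparities
    log_prob = np.log(softmax(-cost_volume, axis=0) + 1e-12)
    return -(target * log_prob).sum(axis=0).mean()    # mean over pixels

# Example: random cost volume over 64 disparities on a 4x4 image.
rng = np.random.default_rng(0)
cost = rng.random((64, 4, 4))
gt = rng.uniform(0, 63, size=(4, 4))
print(softargmin_disparity(cost).shape, unimodal_cross_entropy(cost, gt))
```

In training, a term like this would be added to the usual regression loss on the softargmin output, pushing the network toward unimodal, coherent cost distributions.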
Related papers
- Concurrent Training and Layer Pruning of Deep Neural Networks [0.0]
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training.
We employ residual connections around nonlinear network sections, which keep information flowing through the network once a nonlinear section is pruned (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-06-06T23:19:57Z)
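A rough illustration of the residual-bypass idea in the entry above; the gate, dimensions, and relevance criterion here are invented for the sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

class ResidualSection:
    """A nonlinear section wrapped in a skip connection. Dropping the
    section (gate = 0) leaves the identity path intact, so downstream
    layers still receive input after pruning."""
    def __init__(self, dim):
        self.W = rng.standard_normal((dim, dim)) * 0.1
        self.gate = 1.0  # set to 0.0 to prune this section

    def forward(self, x):
        h = np.tanh(x @ self.W)          # nonlinear section
        return x + self.gate * h         # residual bypass

x = rng.standard_normal((2, 8))
block = ResidualSection(8)
y_active = block.forward(x)
block.gate = 0.0                          # "prune" the nonlinear section
assert np.allclose(block.forward(x), x)   # identity path survives pruning
```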
- Ultra Low Complexity Deep Learning Based Noise Suppression [3.4373727078460665]
This paper introduces an innovative method for reducing the computational complexity of deep neural networks in real-time speech enhancement on resource-constrained devices.
Our algorithm exhibits 3 to 4 times less computational complexity and memory usage than prior state-of-the-art approaches.
arXiv Detail & Related papers (2023-12-13T13:34:15Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem (a generic unrolling sketch follows this entry).
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
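The "algorithm unfolding" in the REST entry above follows the classic LISTA pattern: a fixed number of shrinkage-thresholding iterations become network layers whose parameters can be learned. Below is a minimal, non-learned NumPy sketch of that pattern; REST's robustness to forward-model mis-specification is not modelled here:

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the l1 norm (shrinkage-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(y, A, n_layers=50, theta=0.1):
    """ISTA unfolded into a fixed number of 'layers'. In a learned
    unrolling, the step size, thresholds, and matrices would become
    trainable parameters; here they are fixed for illustration."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):                 # each iteration = one layer
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * theta)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[rng.choice(60, 5, replace=False)] = 1.0
x_hat = unrolled_ista(A @ x_true, A)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```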
- Quantized Proximal Averaging Network for Analysis Sparse Coding [23.080395291046408]
We unfold an iterative algorithm into a trainable network that facilitates learning the sparsity prior before quantization.
We demonstrate applications to compressed image recovery and magnetic resonance image reconstruction.
arXiv Detail & Related papers (2021-05-13T12:05:35Z)
- Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs (a toy relaxation illustrating the fixed-point idea follows this entry).
arXiv Detail & Related papers (2020-09-11T11:56:34Z)
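A toy version of the fixed-point idea in the Activation Relaxation entry above: run leaky relaxation dynamics on per-layer error units and check that they settle to the exact backpropagation errors. This is a schematic under assumed update rules, not the paper's equations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer tanh network: the backprop errors are the unique fixed
# point of simple leaky relaxation dynamics run at every layer.
W = [rng.standard_normal((4, 4)) * 0.5 for _ in range(3)]
x = rng.standard_normal(4)

acts, pre = [x], []
for Wl in W:                               # forward pass, store pre-activations
    pre.append(acts[-1] @ Wl)
    acts.append(np.tanh(pre[-1]))

grad_out = acts[-1] - np.ones(4)           # error at the output layer

# Relaxation: eps_l <- eps_l + eta * (-eps_l + f'(pre_l) * (W_{l+1} @ eps_{l+1}))
eps = [np.zeros(4) for _ in W]
eps[-1] = (1 - np.tanh(pre[-1]) ** 2) * grad_out
eta = 0.3
for _ in range(200):                       # dynamics settle to equilibrium
    for l in range(len(W) - 2, -1, -1):
        target = (1 - np.tanh(pre[l]) ** 2) * (W[l + 1] @ eps[l + 1])
        eps[l] += eta * (-eps[l] + target)

delta = eps[-1].copy()                     # compare with exact backprop recursion
for l in range(len(W) - 2, -1, -1):
    delta = (1 - np.tanh(pre[l]) ** 2) * (W[l + 1] @ delta)
    assert np.allclose(eps[l], delta, atol=1e-6)
```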
- Efficient and Sparse Neural Networks by Pruning Weights in a Multiobjective Learning Approach [0.0]
We propose a multiobjective perspective on the training of neural networks, treating prediction accuracy and network complexity as two individual objective functions (a scalarised sketch follows this entry).
Preliminary numerical results on exemplary convolutional neural networks confirm that large reductions in the complexity of neural networks with negligible loss of accuracy are possible.
arXiv Detail & Related papers (2020-08-31T13:28:03Z)
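A minimal sketch of the two competing objectives in the multiobjective pruning entry above, using an assumed l1 complexity surrogate on a linear model and a weighted-sum scalarisation; the paper treats the problem as genuinely multiobjective rather than fixing one trade-off weight:

```python
import numpy as np

rng = np.random.default_rng(0)

def objectives(w, X, y):
    """Objective 1: prediction error. Objective 2: an l1 complexity
    surrogate standing in for network size in this linear toy model."""
    return np.mean((X @ w - y) ** 2), np.abs(w).sum()

X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = 2.0
y = X @ w_true

# Weighted-sum scalarisation: minimise err + lam * |w|_1 for several
# trade-off weights, tracing points on the accuracy/complexity front.
for lam in (0.0, 0.1, 1.0):
    w = np.zeros(20)
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(y) + lam * np.sign(w)
        w -= 0.05 * grad                   # (sub)gradient step
    err, size = objectives(w, X, y)
    print(f"lam={lam:4.1f}  error={err:9.5f}  |w|_1={size:6.2f}")
```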
- Robust Processing-In-Memory Neural Networks via Noise-Aware Normalization [26.270754571140735]
PIM accelerators often suffer from intrinsic noise in the physical components.
We propose a noise-agnostic method to achieve robust neural network performance against any noise setting.
arXiv Detail & Related papers (2020-07-07T06:51:28Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) to address the aforementioned problem (a schematic sketch follows this entry).
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
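A schematic of the feature-map-distortion idea behind Disout, assuming a simple additive-noise perturbation; the actual method derives its distortion from the Rademacher-complexity analysis mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_distortion(feat, drop_prob=0.1, alpha=0.5, training=True):
    """Dropout-style regulariser that *perturbs* selected feature map
    elements instead of zeroing them, as dropout would."""
    if not training:
        return feat
    mask = rng.random(feat.shape) < drop_prob          # elements to distort
    noise = alpha * feat.std() * rng.standard_normal(feat.shape)
    return np.where(mask, feat + noise, feat)

feat = rng.standard_normal((2, 8, 4, 4))               # (N, C, H, W) feature maps
out = feature_distortion(feat, drop_prob=0.2)
```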
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based optimization combined with nonconvexity renders learning susceptible to novel problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables (a linear-layer fusion sketch follows this entry).
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
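For intuition about "layer fusion", here is the exact fusion of two purely linear layers; the paper's contribution is the MSE-optimal fusion of layers with nonlinearities in between, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fusing two *linear* layers is exact: (W2 @ W1, W2 @ b1 + b2).
W1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((4, 16)), rng.standard_normal(4)

W_fused = W2 @ W1
b_fused = W2 @ b1 + b2

x = rng.standard_normal(8)
two_layer = W2 @ (W1 @ x + b1) + b2       # original pair of layers
one_layer = W_fused @ x + b_fused          # fused single layer
assert np.allclose(two_layer, one_layer)
```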
This list is automatically generated from the titles and abstracts of the papers on this site.