Noise-Sampling Cross Entropy Loss: Improving Disparity Regression Via Cost Volume Aware Regularizer
- URL: http://arxiv.org/abs/2005.08806v2
- Date: Thu, 28 May 2020 09:59:11 GMT
- Title: Noise-Sampling Cross Entropy Loss: Improving Disparity Regression Via Cost Volume Aware Regularizer
- Authors: Yang Chen, Zongqing Lu, Xuechen Zhang, Lei Chen and Qingmin Liao
- Abstract summary: We propose a noise-sampling cross entropy loss function to regularize the cost volume produced by deep neural networks to be unimodal and coherent.
Experiments validate that the proposed noise-sampling cross entropy loss not only helps neural networks learn a more informative cost volume, but also leads to better stereo matching performance.
- Score: 38.86850327892113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent end-to-end deep neural networks for disparity regression have achieved
state-of-the-art performance. However, many well-acknowledged properties
specific to disparity estimation are omitted in these deep learning algorithms.
In particular, the matching cost volume, one of the most important intermediate
products, is treated as an ordinary feature for the subsequent softargmin
regression, lacking the explicit constraints imposed by traditional algorithms.
In this paper, inspired by the canonical definition of the cost volume, we
propose a noise-sampling cross entropy loss function to regularize the cost
volume produced by deep neural networks to be unimodal and coherent. Extensive
experiments validate that the proposed noise-sampling cross entropy loss not
only helps neural networks learn a more informative cost volume, but also leads
to better stereo matching performance compared with several representative
algorithms.
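Since the abstract describes the regularizer only at a high level, the sketch below illustrates one plausible way to pair a softargmin disparity head with a cross-entropy loss that pushes the cost volume toward a unimodal target distribution peaked at the ground-truth disparity. This is a minimal sketch, not the authors' implementation: the Laplacian-shaped target, the Gaussian noise sampled around the target peak, the smooth-L1 regression term, and all function names and hyperparameters (sigma, noise_std, lam) are assumptions made for illustration.
```python
# Hedged sketch: softargmin disparity regression plus a cross-entropy
# regularizer toward a unimodal, noise-perturbed target distribution.
import torch
import torch.nn.functional as F

def softargmin_disparity(cost_volume):
    """cost_volume: (B, D, H, W) matching costs; lower cost = better match."""
    prob = F.softmax(-cost_volume, dim=1)                       # (B, D, H, W)
    disp_values = torch.arange(cost_volume.size(1), device=cost_volume.device,
                               dtype=cost_volume.dtype).view(1, -1, 1, 1)
    return (prob * disp_values).sum(dim=1)                      # (B, H, W)

def noise_sampling_cross_entropy(cost_volume, gt_disp, sigma=1.0, noise_std=0.5):
    """Cross entropy between the softmax of the negated cost volume and a
    unimodal (assumed Laplacian-shaped) target centered at the ground-truth
    disparity perturbed by sampled Gaussian noise."""
    B, D, H, W = cost_volume.shape
    log_prob = F.log_softmax(-cost_volume, dim=1)               # (B, D, H, W)
    # Sample a small perturbation of the target peak (the "noise sampling").
    noisy_center = gt_disp + noise_std * torch.randn_like(gt_disp)   # (B, H, W)
    disp_values = torch.arange(D, device=cost_volume.device,
                               dtype=cost_volume.dtype).view(1, -1, 1, 1)
    # Unimodal target distribution over disparity candidates.
    target = F.softmax(-torch.abs(disp_values - noisy_center.unsqueeze(1)) / sigma,
                       dim=1)
    return -(target * log_prob).sum(dim=1).mean()

def total_loss(cost_volume, gt_disp, lam=0.1):
    """Standard disparity regression loss plus the cost-volume regularizer;
    the smooth-L1 term and the weight lam are placeholders."""
    pred = softargmin_disparity(cost_volume)
    return F.smooth_l1_loss(pred, gt_disp) + lam * noise_sampling_cross_entropy(cost_volume, gt_disp)
```
In this reading, the cross-entropy term supplies the explicit unimodality constraint on the cost volume that plain softargmin regression lacks, while the sampled noise keeps the target peak from being overly sharp at exactly one disparity bin.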
Related papers
- YOSO: You-Only-Sample-Once via Compressed Sensing for Graph Neural Network Training [9.02251811867533]
YOSO (You-Only-Sample-Once) is an algorithm designed to achieve efficient training while preserving prediction accuracy.
YOSO not only avoids costly computations in traditional compressed sensing (CS) methods, such as orthonormal basis calculations, but also ensures high-probability accuracy retention.
arXiv Detail & Related papers (2024-11-08T16:47:51Z) - Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomaly and missing data constitute a thorny problem in industrial applications.
Deep learning enabled anomaly detection has emerged as a critical direction.
The data collected in edge devices contain user privacy.
arXiv Detail & Related papers (2024-11-06T15:38:31Z) - Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z) - Ultra Low Complexity Deep Learning Based Noise Suppression [3.4373727078460665]
This paper introduces an innovative method for reducing the computational complexity of deep neural networks in real-time speech enhancement on resource-constrained devices.
Our algorithm exhibits 3 to 4 times less computational complexity and memory usage than prior state-of-the-art approaches.
arXiv Detail & Related papers (2023-12-13T13:34:15Z) - Dense-Sparse Deep Convolutional Neural Networks Training for Image Denoising [0.6215404942415159]
Deep learning methods such as the convolutional neural networks have gained prominence in the area of image denoising.
Deep denoising convolutional neural networks use many feed-forward convolution layers with added regularization methods of batch normalization and residual learning to speed up training and improve denoising performance significantly.
In this paper, we show that by applying an enhanced dense-sparse-dense training procedure to deep denoising convolutional neural networks, a comparable denoising performance can be achieved with a significantly reduced number of trainable parameters.
arXiv Detail & Related papers (2021-07-10T15:14:19Z) - Quantized Proximal Averaging Network for Analysis Sparse Coding [23.080395291046408]
We unfold an iterative algorithm into a trainable network that facilitates learning sparsity prior to quantization.
We demonstrate applications to compressed image recovery and magnetic resonance image reconstruction.
arXiv Detail & Related papers (2021-05-13T12:05:35Z) - Activation Relaxation: A Local Dynamical Approximation to Backpropagation in the Brain [62.997667081978825]
Activation Relaxation (AR) is motivated by constructing the backpropagation gradient as the equilibrium point of a dynamical system.
Our algorithm converges rapidly and robustly to the correct backpropagation gradients, requires only a single type of computational unit, and can operate on arbitrary computation graphs.
arXiv Detail & Related papers (2020-09-11T11:56:34Z) - Robust Processing-In-Memory Neural Networks via Noise-Aware Normalization [26.270754571140735]
PIM accelerators often suffer from intrinsic noise in the physical components.
We propose a noise-agnostic method to achieve robust neural network performance against any noise setting.
arXiv Detail & Related papers (2020-07-07T06:51:28Z) - Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.