Binarizing Sparse Convolutional Networks for Efficient Point Cloud
Analysis
- URL: http://arxiv.org/abs/2303.15493v1
- Date: Mon, 27 Mar 2023 13:47:06 GMT
- Title: Binarizing Sparse Convolutional Networks for Efficient Point Cloud
Analysis
- Authors: Xiuwei Xu, Ziwei Wang, Jie Zhou, Jiwen Lu
- Abstract summary: We propose binary sparse convolutional networks called BSC-Net for efficient point cloud analysis.
We employ differentiable search strategies to discover the optimal positions for active site matching in the shifted sparse convolution.
Our BSC-Net achieves significant improvement upon our strong baseline and outperforms the state-of-the-art network binarization methods.
- Score: 93.55896765176414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose binary sparse convolutional networks called BSC-Net
for efficient point cloud analysis. We empirically observe that the sparse
convolution operation causes larger quantization errors than standard
convolution, yet conventional network quantization methods directly binarize
the weights and activations in sparse convolution, resulting in a performance
drop due to the significant quantization loss. In contrast, we search for the
optimal subset of convolution operations that activates the sparse convolution
at various locations to alleviate quantization error, so the performance gap
between real-valued and binary sparse convolutional networks is closed without
complexity overhead. Specifically, we first present the shifted sparse
convolution, which fuses the information in the receptive field for the active
sites that match pre-defined positions. Then we employ differentiable search
strategies to discover the optimal positions for active site matching in the
shifted sparse convolution, significantly alleviating quantization errors for
efficient point cloud analysis. For fair evaluation of the proposed method, we
empirically select recent advances that are beneficial for sparse convolutional
network binarization to construct a strong baseline. The experimental results
on ScanNet and NYU Depth v2 show that our BSC-Net achieves significant
improvement upon our strong baseline and outperforms the state-of-the-art
network binarization methods by a remarkable margin without additional
computation overhead for binarizing sparse convolutional networks.
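The abstract describes the method only at a high level, so the PyTorch sketch below is a hedged illustration of the two ingredients it names: a convolution applied at shifted positions, and a differentiable (softmax-relaxed, DARTS-style) search over candidate shifts. Sparsity is emulated with a dense tensor plus an active-site mask; the class name `ShiftedBinaryConv`, the five candidate offsets, and all initializations are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(x):
    # Straight-through estimator: sign(x) in the forward pass, identity gradient.
    return x + (torch.sign(x) - x).detach()

class ShiftedBinaryConv(nn.Module):
    # Candidate shifts are assumptions; the paper's search over "positions for
    # active site matching" is relaxed here to a softmax over offsets.
    def __init__(self, in_ch, out_ch,
                 shifts=((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1))):
        super().__init__()
        self.shifts = shifts
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, 3, 3) * 0.1)
        self.alpha = nn.Parameter(torch.zeros(len(shifts)))  # search parameters

    def forward(self, x, mask):
        w = binarize(self.weight)             # binary weights
        a = binarize(x) * mask                # binary activations, zero at inactive sites
        probs = F.softmax(self.alpha, dim=0)  # differentiable choice of shift
        out = 0.0
        for p, (dy, dx) in zip(probs, self.shifts):
            shifted = torch.roll(a, shifts=(dy, dx), dims=(2, 3))
            out = out + p * F.conv2d(shifted, w, padding=1)
        return out * mask                     # keep outputs only at active sites

x = torch.randn(1, 4, 16, 16)
mask = (torch.rand(1, 1, 16, 16) > 0.7).float()  # ~30% active sites
y = ShiftedBinaryConv(4, 8)(x, mask)
print(y.shape)  # torch.Size([1, 8, 16, 16])
```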
Related papers
- FOBNN: Fast Oblivious Binarized Neural Network Inference [12.587981899648419]
We develop a fast oblivious binarized neural network inference framework, FOBNN.
Specifically, we customize binarized convolutional neural networks to enhance oblivious inference, design two fast algorithms for binarized convolutions, and optimize network structures experimentally under constrained costs (see the sketch after this entry).
arXiv Detail & Related papers (2024-05-06T03:12:36Z)
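FOBNN's two fast algorithms are not described in this summary, so nothing paper-specific is shown here; the sketch below only illustrates the standard primitive that fast binarized-convolution kernels (oblivious or not) typically build on: with values in {-1, +1} packed into machine words, a dot product reduces to XNOR plus popcount. The packing helper and vector size are illustrative.

```python
import numpy as np

def binary_dot(a_bits: int, w_bits: int, n: int) -> int:
    # a_bits/w_bits: n-bit masks where bit=1 encodes +1 and bit=0 encodes -1.
    matches = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # XNOR over the low n bits
    pop = bin(matches).count("1")                  # positions where signs agree
    return 2 * pop - n                             # = sum of elementwise +/-1 products

rng = np.random.default_rng(0)
a = rng.choice([-1, 1], size=64)                   # +/-1 activations
w = rng.choice([-1, 1], size=64)                   # +/-1 weights
pack = lambda v: sum(1 << i for i, b in enumerate(v) if b > 0)
assert binary_dot(pack(a), pack(w), 64) == int(a @ w)
print(binary_dot(pack(a), pack(w), 64))
```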
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width (see the unfolded-ISTA sketch after this entry).
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
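For context, one unfolded ISTA update for the LASSO problem, minimizing 0.5*||Ax - b||^2 + lam*||x||_1, is sketched below with a smooth surrogate for the soft-thresholding operator. The softplus-based smoothing is one common choice; the exact operator analyzed in the cited paper may differ.

```python
import numpy as np

def smooth_soft_threshold(x, lam, beta=20.0):
    # Softplus smoothing of sign(x) * max(|x| - lam, 0); approaches the
    # standard soft-thresholding operator as beta grows.
    return np.sign(x) * np.log1p(np.exp(beta * (np.abs(x) - lam))) / beta

def ista_layer(x, A, b, lam, step):
    grad = A.T @ (A @ x - b)               # gradient of the data-fidelity term
    return smooth_soft_threshold(x - step * grad, lam)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0                           # 5-sparse ground truth
b = A @ x_true
x = np.zeros(100)
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
for _ in range(300):                       # an unfolded network would learn per-layer params
    x = ista_layer(x, A, b, lam=0.05, step=step)
print(np.round(x[:8], 2))                  # should be close to the sparse ground truth
```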
- Rethinking Spatial Invariance of Convolutional Networks for Object Counting [119.83017534355842]
We use locally connected Gaussian kernels in place of the original convolution filter to estimate the spatial positions in the density map.
Inspired by previous work, we propose a low-rank approximation accompanied by translation invariance to favorably implement the approximation of massive Gaussian convolution.
Our methods significantly outperform other state-of-the-art methods and achieve promising learning of the spatial positions of objects (see the separable-convolution sketch after this entry).
arXiv Detail & Related papers (2022-06-10T17:51:25Z)
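The paper's low-rank approximation is not spelled out in this summary, so the sketch below shows only the textbook rank-1 special case that makes the idea concrete: a 2-D Gaussian kernel is the outer product of two 1-D Gaussians, so a large KxK Gaussian convolution factors into two cheap 1-D passes.

```python
import numpy as np
from scipy.signal import convolve2d

def gauss1d(sigma, radius):
    t = np.arange(-radius, radius + 1)
    g = np.exp(-t**2 / (2 * sigma**2))
    return g / g.sum()

g = gauss1d(sigma=2.0, radius=6)           # 13-tap 1-D Gaussian
K = np.outer(g, g)                         # full 13x13 kernel, rank 1

img = np.random.rand(64, 64)
full = convolve2d(img, K, mode="same")                        # O(K^2) per pixel
sep = convolve2d(convolve2d(img, g[None, :], mode="same"),
                 g[:, None], mode="same")                     # O(2K) per pixel
print(np.allclose(full, sep))  # True (up to floating-point error)
```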
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures (a hedged bit-drop sketch follows this entry).
arXiv Detail & Related papers (2021-09-05T15:15:07Z)
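How DropBits "drops bits instead of neurons" is not specified in this summary; the toy below is one loose interpretation (an assumption, not the paper's formulation) that applies inverted dropout to the bit-planes of a uniformly quantized tensor.

```python
import torch

def quantize_bits(x, n_bits=4):
    # Uniform quantization of x in [0, 1) to n_bits, returned as bit-planes.
    q = torch.clamp((x * (2 ** n_bits)).floor(), 0, 2 ** n_bits - 1).long()
    return torch.stack([(q >> k) & 1 for k in range(n_bits)])  # LSB first

def dropbits(planes, p=0.2, training=True):
    # Randomly zero whole bit-planes (with inverted-dropout rescaling).
    if not training:
        return planes.float()
    keep = (torch.rand(planes.shape[0], *[1] * (planes.dim() - 1)) > p).float()
    return planes.float() * keep / (1 - p)

def dequantize(planes, n_bits=4):
    weights = 2.0 ** torch.arange(n_bits).view(-1, *[1] * (planes.dim() - 1))
    return (planes * weights).sum(dim=0) / (2 ** n_bits)

x = torch.rand(3, 3)
planes = quantize_bits(x)
print(dequantize(dropbits(planes, training=False)))  # plain 4-bit quantization
print(dequantize(dropbits(planes, p=0.25)))          # with random bit drops
```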
- Layer Adaptive Node Selection in Bayesian Neural Networks: Statistical Guarantees and Implementation Details [0.5156484100374059]
Sparse deep neural networks have proven to be efficient for predictive model building in large-scale studies.
We propose a Bayesian sparse solution using spike-and-slab Gaussian priors to allow for node selection during training.
We establish the fundamental result of variational posterior consistency together with the characterization of prior parameters (see the spike-and-slab sketch after this entry).
arXiv Detail & Related papers (2021-08-25T00:48:07Z)
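As background for the summary above, the sketch below simply samples one layer from a spike-and-slab Gaussian prior with per-node inclusion indicators, which is the prior the entry refers to; the paper's variational training procedure is not reproduced.

```python
import numpy as np

def sample_spike_and_slab_layer(n_nodes, n_inputs, pi=0.3, slab_std=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    z = rng.binomial(1, pi, size=n_nodes)            # spike/slab indicator per node
    w = rng.normal(0.0, slab_std, size=(n_nodes, n_inputs))
    return z[:, None] * w, z                         # z=0 zeroes out the whole node

W, z = sample_spike_and_slab_layer(n_nodes=8, n_inputs=5)
print("active nodes:", z.sum(), "of", len(z))
```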
- QuantNet: Learning to Quantize by Learning within Fully Differentiable Framework [32.465949985191635]
This paper proposes a meta-based quantizer named QuantNet, which utilizes a differentiable sub-network to directly binarize the full-precision weights.
Our method not only solves the problem of gradient mismatching, but also reduces the impact of the discretization errors caused by the binarizing operation at deployment (see the meta-quantizer sketch after this entry).
arXiv Detail & Related papers (2020-09-10T01:41:05Z)
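The summary says QuantNet binarizes full-precision weights through a differentiable sub-network; the toy below uses a tiny per-weight MLP with a tanh head as that sub-network, so no sign function or straight-through estimator is needed. The architecture and sizes are assumptions for illustration, not the paper's design.

```python
import torch
import torch.nn as nn

class MetaQuantizer(nn.Module):
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, w):
        # Per-weight transform; tanh pushes outputs toward {-1, +1} while
        # keeping the whole mapping differentiable end to end.
        return torch.tanh(self.net(w.reshape(-1, 1))).reshape(w.shape)

w = torch.randn(4, 4, requires_grad=True)
qw = MetaQuantizer()(w)
qw.sum().backward()                # gradients flow through the quantizer
print(w.grad is not None, qw.abs().mean().item())
```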
- BiDet: An Efficient Binarized Object Detector [96.19708396510894]
We propose a binarized neural network learning method called BiDet for efficient object detection.
Our BiDet fully utilizes the representational capacity of the binary neural networks for object detection by redundancy removal.
Our method outperforms the state-of-the-art binary neural networks by a sizable margin.
arXiv Detail & Related papers (2020-03-09T08:16:16Z)
- ReActNet: Towards Precise Binary Neural Network with Generalized Activation Functions [76.05981545084738]
We propose several ideas for enhancing a binary network to close its accuracy gap from real-valued networks without incurring any additional computational cost.
We first construct a baseline network by modifying and binarizing a compact real-valued network with parameter-free shortcuts.
We show that the proposed ReActNet outperforms all state-of-the-art methods by a large margin (a sketch of the generalized activations follows this entry).
arXiv Detail & Related papers (2020-03-07T02:12:02Z)
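The "generalized activation functions" referenced here are RSign and RPReLU, which add learnable per-channel shifts to binarization and to a PReLU-style activation. The sketch below follows their published form, with shapes and initializations chosen for illustration.

```python
import torch
import torch.nn as nn

class RSign(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(1, channels, 1, 1))  # learnable threshold

    def forward(self, x):
        s = x - self.alpha
        return s + (torch.sign(s) - s).detach()   # STE around the shifted sign

class RPReLU(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, channels, 1, 1))  # input shift
        self.zeta = nn.Parameter(torch.zeros(1, channels, 1, 1))   # output shift
        self.beta = nn.Parameter(torch.full((1, channels, 1, 1), 0.25))  # negative slope

    def forward(self, x):
        s = x - self.gamma
        return torch.where(s > 0, s, self.beta * s) + self.zeta

x = torch.randn(2, 8, 4, 4)
print(RPReLU(8)(RSign(8)(x)).shape)  # torch.Size([2, 8, 4, 4])
```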