Efficient bit encoding of neural networks for Fock states
- URL: http://arxiv.org/abs/2103.08285v2
- Date: Thu, 27 May 2021 09:46:18 GMT
- Title: Efficient bit encoding of neural networks for Fock states
- Authors: Oliver Kästle and Alexander Carmele
- Abstract summary: The complexity of the neural network scales only with the number of bit-encoded neurons rather than the maximum boson number.
In the high occupation regime its information compression efficiency is shown to surpass even maximally optimized density matrix implementations.
- Score: 77.34726150561087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a bit encoding scheme for a highly efficient and scalable
representation of bosonic Fock number states in the restricted Boltzmann
machine neural network architecture. In contrast to common density matrix
implementations, the complexity of the neural network scales only with the
number of bit-encoded neurons rather than the maximum boson number. Crucially,
in the high occupation regime its information compression efficiency is shown
to surpass even maximally optimized density matrix implementations, where a
projector method is used to access the sparsest Hilbert space representation
available.
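The scaling advantage is easy to see in a short sketch (ours, not the authors' code): an occupation number n <= N_max is stored in ceil(log2(N_max + 1)) binary neurons, so the visible layer grows logarithmically with the boson cutoff instead of linearly as in a one-hot or density matrix layout.
```python
import math

def fock_to_bits(n: int, n_max: int) -> list[int]:
    """Encode occupation number n as binary visible-neuron values."""
    width = math.ceil(math.log2(n_max + 1))  # neurons needed for cutoff n_max
    assert 0 <= n <= n_max
    return [(n >> i) & 1 for i in range(width)]  # little-endian bit list

def bits_to_fock(bits: list[int]) -> int:
    """Decode the binary neuron values back to an occupation number."""
    return sum(b << i for i, b in enumerate(bits))

# A two-mode Fock state |n1, n2> with cutoff n_max = 255 needs
# 2 * 8 = 16 visible neurons instead of 2 * 256 one-hot units.
n_max = 255
state = [fock_to_bits(200, n_max), fock_to_bits(3, n_max)]
print(state, [bits_to_fock(b) for b in state])
```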
Related papers
- Masked Wavelet Representation for Compact Neural Radiance Fields [5.279919461008267]
Using a multi-layer perceptron to represent a 3D scene or object requires enormous computational resources and time.
We present a method to reduce the representation size without compromising the advantages of having additional data structures.
With our proposed mask and compression pipeline, we achieved state-of-the-art performance within a memory budget of 2 MB.
arXiv Detail & Related papers (2022-12-18T11:43:32Z)
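A rough illustration of why wavelet masking compresses well, assuming the PyWavelets package; the paper learns a binary mask end-to-end, while this sketch just keeps the top 10% of Haar coefficients as a stand-in.
```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(0)
grid = rng.normal(size=(64, 64))  # stand-in for a learned feature grid

# Wavelet-transform the grid, then zero all but the top-k coefficients;
# hard top-k thresholding here substitutes for the learned mask.
cA, (cH, cV, cD) = pywt.dwt2(grid, 'haar')
coeffs = np.concatenate([c.ravel() for c in (cA, cH, cV, cD)])
k = int(0.1 * coeffs.size)                  # keep 10% of coefficients
thresh = np.sort(np.abs(coeffs))[-k]

def mask(c):
    return np.where(np.abs(c) >= thresh, c, 0.0)

recon = pywt.idwt2((mask(cA), (mask(cH), mask(cV), mask(cD))), 'haar')
print("kept:", k, "mse:", float(np.mean((grid - recon) ** 2)))
```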
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
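A minimal sketch of the core idea, with our own placeholder weights and the hash encoding omitted: the attenuation volume is a function mu(x, y, z) computed by an MLP and queried at arbitrary coordinates rather than stored as voxels.
```python
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(32, 3)) * 0.5, np.zeros(32)
W2, b2 = rng.normal(size=(1, 32)) * 0.5, np.zeros(1)

def attenuation(xyz: np.ndarray) -> np.ndarray:
    """Continuous attenuation field mu(x, y, z) given by a tiny MLP.
    Weights here are random placeholders; NAF trains them so that
    ray integrals of mu reproduce the measured X-ray projections."""
    h = np.maximum(W1 @ xyz + b1, 0.0)  # ReLU hidden layer
    return W2 @ h + b2

# Query the field anywhere in the volume -- no voxel grid is stored.
print(attenuation(np.array([0.1, -0.3, 0.7])))
```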
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
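A toy version of the dictionary idea (our construction, not the paper's training procedure): each feature vector in the grid is replaced by an index into a small codebook, so storage drops from full floats to a few bits per cell.
```python
import numpy as np

rng = np.random.default_rng(2)
grid = rng.normal(size=(4096, 16)).astype(np.float32)     # dense feature grid
codebook = rng.normal(size=(64, 16)).astype(np.float32)   # 64 learned codes

# Replace every feature vector by its nearest codebook entry; the grid
# is then stored as 6-bit indices plus the small codebook. The paper
# learns indices and codes end-to-end through the rendering loss (a
# vector-quantized auto-decoder); nearest-neighbour assignment here is
# only a stand-in for that training loop.
d = ((grid[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
idx = d.argmin(axis=1)                    # 4096 indices, 6 bits each
compressed_bits = grid.shape[0] * 6 + codebook.size * 32
print("compression:", grid.size * 32 / compressed_bits)
```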
- Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through gradient descent.
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
arXiv Detail & Related papers (2022-01-16T07:22:47Z)
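A compact sketch of the multiresolution hash encoding, with our own table sizes and the trilinear corner interpolation omitted.
```python
import numpy as np

LEVELS, TABLE_SIZE, FEAT_DIM, BASE_RES = 4, 2**14, 2, 16
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)
rng = np.random.default_rng(0)
# One small table of trainable feature vectors per resolution level.
tables = rng.normal(scale=1e-2, size=(LEVELS, TABLE_SIZE, FEAT_DIM))

def hash_encode(xyz):
    """Hash the query point's grid cell at each resolution into its
    feature table; the real method also interpolates the 8 corners."""
    feats = []
    for lvl in range(LEVELS):
        res = BASE_RES * 2 ** lvl                       # finer grid per level
        cell = np.floor(np.asarray(xyz) * res).astype(np.uint64)
        h = int(np.bitwise_xor.reduce(cell * PRIMES) % TABLE_SIZE)
        feats.append(tables[lvl, h])
    return np.concatenate(feats)   # input to the small downstream MLP

print(hash_encode([0.25, 0.5, 0.75]).shape)   # (LEVELS * FEAT_DIM,) = (8,)
```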
- Dynamic Neural Representational Decoders for High-Resolution Semantic Segmentation [98.05643473345474]
We propose a novel decoder, termed dynamic neural representational decoder (NRD).
As each location on the encoder's output corresponds to a local patch of the semantic labels, we represent these local patches of labels with compact neural networks.
This neural representation enables our decoder to leverage the smoothness prior in the semantic label space, and thus makes our decoder more efficient.
arXiv Detail & Related papers (2021-07-30T04:50:56Z)
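A hedged sketch of the dynamic-decoder idea: a per-location encoder vector is mapped to the weights of a tiny MLP that renders the corresponding label patch from patch coordinates. The generator G and all sizes below are hypothetical placeholders.
```python
import numpy as np

rng = np.random.default_rng(3)
z = rng.normal(size=64)        # per-location encoder output (stand-in)

# A weight generator maps z to the parameters of a tiny 2 -> H -> C MLP
# that renders the local label patch; a random projection G stands in
# for the learned generator here, and all sizes are hypothetical.
H, C, P = 8, 5, 8              # hidden width, classes, patch side
G = rng.normal(scale=0.1, size=(2 * H + H * C + C, 64))
w = G @ z
W1 = w[:2 * H].reshape(H, 2)
W2 = w[2 * H:2 * H + H * C].reshape(C, H)
b2 = w[2 * H + H * C:]

# Decode one P x P patch by evaluating the tiny MLP on (u, v) coords.
uv = np.stack(np.meshgrid(np.linspace(0, 1, P), np.linspace(0, 1, P)), -1)
hidden = np.maximum(uv.reshape(-1, 2) @ W1.T, 0.0)       # ReLU layer
labels = (hidden @ W2.T + b2).reshape(P, P, C).argmax(-1)
print(labels)                  # per-pixel class predictions for the patch
```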
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
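The decomposition can be written down exactly: a k-bit non-negative integer weight w = sum_i 2^i b_i with b_i in {0, 1} becomes, after substituting b_i = (beta_i + 1)/2, a sum of {-1, +1} branches, w = 0.5 * sum_i 2^i beta_i + (2^k - 1)/2. Below is our check of this general identity, not the paper's exact scheme; each branch dot product maps to XNOR/popcount in hardware.
```python
import numpy as np

def decompose(w_int: np.ndarray, bits: int):
    """Split k-bit non-negative integer weights into k {-1,+1} branches:
    w = 0.5 * sum_i 2^i * beta_i + (2**bits - 1) / 2, beta_i in {-1,+1}."""
    return [2 * ((w_int >> i) & 1) - 1 for i in range(bits)]

bits = 4
rng = np.random.default_rng(4)
w = rng.integers(0, 2 ** bits, size=8)        # quantized integer weights
x = rng.normal(size=8)                        # input activations

branches = decompose(w, bits)
# Each branch is a binary {-1,+1} dot product; the offset term folds the
# {0,1} -> {-1,+1} shift back in.
acc = sum(2 ** i * (x @ b) for i, b in enumerate(branches)) / 2.0
acc += (2 ** bits - 1) / 2.0 * x.sum()
print(np.allclose(acc, x @ w))                # True
```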
- Neural Network Activation Quantization with Bitwise Information Bottlenecks [25.319181120172562]
This paper presents a Bitwise Information Bottleneck approach for quantizing and encoding neural network activations.
By minimizing the quantization rate-distortion of each layer, the neural network with information bottlenecks achieves state-of-the-art accuracy with low-precision activations.
arXiv Detail & Related papers (2020-06-09T12:10:04Z)
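A toy per-layer rate-distortion selection, only a stand-in for the paper's information-bottleneck objective; the uniform quantizer and the cost distortion + lambda * bits are our simplifications.
```python
import numpy as np

def quantize(a, bits):
    """Uniform quantization of activations to the given bitwidth."""
    lo = a.min()
    scale = (2 ** bits - 1) / (a.max() - lo + 1e-8)
    return np.round((a - lo) * scale) / scale + lo

rng = np.random.default_rng(5)
layers = [rng.normal(size=1000) * s for s in (0.5, 1.0, 3.0)]

# Pick each layer's bitwidth by minimizing distortion + lam * bits,
# a crude rate-distortion trade-off per layer.
lam = 1e-3
for i, a in enumerate(layers):
    costs = {b: np.mean((a - quantize(a, b)) ** 2) + lam * b
             for b in range(2, 9)}
    print(f"layer {i}: {min(costs, key=costs.get)} bits")
```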
- Mixed-Precision Quantized Neural Network with Progressively Decreasing Bitwidth For Image Classification and Object Detection [21.48875255723581]
A mixed-precision quantized neural network with progressively decreasing bitwidth is proposed to improve the trade-off between accuracy and compression.
Experiments on typical network architectures and benchmark datasets demonstrate that the proposed method achieves better or comparable results.
arXiv Detail & Related papers (2019-12-29T14:11:33Z)
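A minimal sketch of a progressively decreasing bitwidth schedule, with hypothetical layer sizes and bit assignments; the paper's exact schedules are chosen per architecture.
```python
import numpy as np

def quantize_weights(w, bits):
    """Symmetric uniform quantization to a signed 'bits'-bit grid."""
    q = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / q
    return np.round(w / scale).clip(-q, q) * scale

rng = np.random.default_rng(6)
layers = [rng.normal(size=(64, 64)) for _ in range(4)]
bitwidths = [8, 6, 4, 2]      # progressively decreasing with depth

# Early layers keep high precision (their errors propagate furthest);
# later layers tolerate coarser grids, improving the accuracy/size
# trade-off.
for w, b in zip(layers, bitwidths):
    err = np.mean((w - quantize_weights(w, b)) ** 2)
    print(f"{b}-bit layer: mse {err:.5f}")
```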
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.