Using the Projected Belief Network at High Dimensions
- URL: http://arxiv.org/abs/2204.12922v1
- Date: Mon, 25 Apr 2022 19:54:52 GMT
- Title: Using the Projected Belief Network at High Dimensions
- Authors: Paul M. Baggenstoss
- Abstract summary: The projected belief network (PBN) is a layered generative network (LGN) with a tractable likelihood function.
We apply the discriminatively aligned PBN to classifying and auto-encoding high-dimensional spectrograms of acoustic events.
- Score: 13.554038901140949
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The projected belief network (PBN) is a layered generative network (LGN) with
a tractable likelihood function, and is based on a feed-forward neural network
(FFNN). There are two versions of the PBN: stochastic and deterministic
(D-PBN), and each has theoretical advantages over other LGNs. However,
implementation of the PBN requires an iterative algorithm that includes the
inversion of a symmetric M × M matrix in each layer, where M is the layer
output dimension. This, together with the fact that the network must be
dimension-reducing in every layer, can limit the types of problems to which the PBN
can be applied. In this paper, we describe techniques to avoid or mitigate
these restrictions and use the PBN effectively at high dimension. We apply the
discriminatively aligned PBN (PBN-DA) to classifying and auto-encoding
high-dimensional spectrograms of acoustic events. We also present the
discriminatively aligned D-PBN for the first time.
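To make the per-layer cost concrete, below is a minimal, illustrative sketch of the kind of back-projection step in which a symmetric M × M system appears. It assumes a single linear layer with an invertible activation and a Gaussian maximum-entropy prior, so the back-projection reduces to a minimum-norm solution; the function name, layer sizes, and use of a Cholesky solve are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def backproject_layer(W, b, h, act_inv):
    """Illustrative back-projection through one dimension-reducing layer.

    W       : (M, N) weights with M < N (the layer reduces dimension)
    b       : (M,) bias
    h       : (M,) observed layer output
    act_inv : inverse of the activation function

    With a Gaussian max-entropy prior, the back-projection is the
    minimum-norm x satisfying W x + b = act_inv(h), which requires the
    solution of a symmetric M x M system -- the per-layer cost noted
    in the abstract.
    """
    a = act_inv(h)                 # undo the activation
    G = W @ W.T                    # symmetric M x M Gram matrix
    L = np.linalg.cholesky(G)      # O(M^3) factorization per layer
    lam = np.linalg.solve(L.T, np.linalg.solve(L, a - b))
    return W.T @ lam               # minimum-norm pre-image

# Hypothetical sizes: a 4096 -> 512 layer makes the solve 512 x 512.
rng = np.random.default_rng(0)
N, M = 4096, 512
W = rng.standard_normal((M, N)) / np.sqrt(N)
b = np.zeros(M)
x = rng.standard_normal(N)
h = np.tanh(W @ x + b)
x_hat = backproject_layer(W, b, h, np.arctanh)
print(x_hat.shape)  # (4096,)
```

For priors where no closed form exists, a comparable symmetric M × M system would typically be factored in every iteration of the solver, which is presumably why the abstract singles out this cost at high dimension.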
Related papers
- A Discrete Perspective Towards the Construction of Sparse Probabilistic Boolean Networks [3.807361298718093]
We propose a novel Greedy Entry Removal (GER) algorithm for constructing sparse PBNs.
GER gives the best performance among state-of-the-art sparse PBN construction algorithms.
arXiv Detail & Related papers (2024-07-16T09:50:04Z)
- Projected Belief Networks With Discriminative Alignment for Acoustic Event Classification: Rivaling State of the Art CNNs [6.062751776009752]
The projected belief network (PBN) is a generative network with a tractable likelihood function, based on a feed-forward neural network (FFNN).
The PBN is two networks in one, a FFNN that operates in the forward direction, and a generative network that operates in the backward direction.
This paper provides a comprehensive treatment of PBN, PBN-DA, and PBN-DA-HMM.
arXiv Detail & Related papers (2024-01-20T10:27:04Z)
- Sample Complexity of Neural Policy Mirror Descent for Policy Optimization on Low-Dimensional Manifolds [75.51968172401394]
We study the sample complexity of the neural policy mirror descent (NPMD) algorithm with deep convolutional neural networks (CNN)
In each iteration of NPMD, both the value function and the policy can be well approximated by CNNs.
We show that NPMD can leverage the low-dimensional structure of state space to escape from the curse of dimensionality.
arXiv Detail & Related papers (2023-09-25T07:31:22Z)
- Membrane Potential Batch Normalization for Spiking Neural Networks [26.003193122060697]
Spiking neural networks (SNNs) have gained increasing interest recently.
To train deep models, several effective batch normalization (BN) techniques have been proposed for SNNs.
We propose another BN layer before the firing function to normalize the membrane potential again, called MPBN.
arXiv Detail & Related papers (2023-08-16T13:32:03Z)
- Binary Graph Convolutional Network with Capacity Exploration [58.99478502486377]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node attributes.
Our Bi-GCN can reduce the memory consumption by an average of 31x for both the network parameters and input data, and accelerate the inference speed by an average of 51x.
arXiv Detail & Related papers (2022-10-24T12:05:17Z)
- BN-invariant sharpness regularizes the training model to better generalization [72.97766238317081]
We propose a measure of sharpness, BN-Sharpness, which gives consistent value for equivalent networks under BN.
We use the BN-sharpness to regularize the training and design an algorithm to minimize the new regularized objective.
arXiv Detail & Related papers (2021-01-08T10:23:24Z)
- Joint Deep Reinforcement Learning and Unfolding: Beam Selection and Precoding for mmWave Multiuser MIMO with Lens Arrays [54.43962058166702]
Millimeter wave (mmWave) multiuser multiple-input multiple-output (MU-MIMO) systems with discrete lens arrays (DLA) have received great attention.
In this work, we investigate the joint design of a beam precoding matrix for mmWave MU-MIMO systems with DLA.
arXiv Detail & Related papers (2021-01-05T03:55:04Z)
- MimicNorm: Weight Mean and Last BN Layer Mimic the Dynamic of Batch Normalization [60.36100335878855]
We propose a novel normalization method, named MimicNorm, to improve the convergence and efficiency in network training.
We leverage neural tangent kernel (NTK) theory to prove that our weight mean operation whitens activations and drives the network into the chaotic regime, as a BN layer does.
MimicNorm achieves similar accuracy for various network structures, including ResNets and lightweight networks like ShuffleNet, with a reduction of about 20% memory consumption.
arXiv Detail & Related papers (2020-10-19T07:42:41Z)
- Towards Stabilizing Batch Statistics in Backward Propagation of Batch Normalization [126.6252371899064]
Moving Average Batch Normalization (MABN) is a novel normalization method.
We show that MABN can completely restore the performance of vanilla BN in small batch cases.
Our experiments demonstrate the effectiveness of MABN in multiple computer vision tasks including ImageNet and COCO.
arXiv Detail & Related papers (2020-01-19T14:41:22Z)