The Projected Belief Network Classifier: both Generative and Discriminative
- URL: http://arxiv.org/abs/2008.06434v1
- Date: Fri, 14 Aug 2020 16:00:54 GMT
- Title: The Projected Belief Network Classifier: both Generative and Discriminative
- Authors: Paul M Baggenstoss
- Abstract summary: The projected belief network (PBN) is a layered generative network with tractable likelihood function.
In this paper, a convolutional PBN is constructed that is both fully discriminative and fully generative.
- Score: 13.554038901140949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The projected belief network (PBN) is a layered generative network with
tractable likelihood function, and is based on a feed-forward neural network
(FF-NN). It can therefore share an embodiment with a discriminative classifier
and can inherit the best qualities of both types of network. In this paper, a
convolutional PBN is constructed that is both fully discriminative and fully
generative and is tested on spectrograms of spoken commands. It is shown that
the network displays excellent qualities from either the discriminative or
generative viewpoint. Random data synthesis and visible data reconstruction
from low-dimensional hidden variables are shown, while classifier performance
approaches that of a regularized discriminative network. Combination with a
conventional discriminative CNN is also demonstrated.
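The abstract's central idea, that one feed-forward network can share an embodiment between a discriminative classifier and a generative model, can be illustrated with a small sketch. This is not the PBN algorithm itself (which defines a tractable likelihood via a maximum-entropy projection); here a simple least-squares pseudo-inverse stands in for the backward direction, purely to show how one set of feed-forward weights can serve both a discriminative pass and a reconstruction of visible data from low-dimensional hidden variables. All weights and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward network: visible (8-dim) -> hidden (3-dim) -> class scores (2-dim).
# The same weights serve two roles: forward for discrimination, backward for a
# generative-style reconstruction. The real PBN uses a maximum-entropy projection
# with a tractable likelihood; we substitute a least-squares pseudo-inverse
# purely to illustrate the dual-use idea.
W1 = rng.standard_normal((3, 8))   # visible -> hidden
W2 = rng.standard_normal((2, 3))   # hidden -> class scores

def forward(x):
    """Discriminative direction: visible data -> hidden -> class scores."""
    h = np.maximum(W1 @ x, 0.0)    # ReLU hidden layer
    return h, W2 @ h

def reconstruct(h):
    """Backward direction (sketch): hidden variables -> visible estimate.
    Least-squares back-projection standing in for the PBN's projection."""
    return np.linalg.pinv(W1) @ h

x = rng.standard_normal(8)
h, scores = forward(x)
x_hat = reconstruct(h)

# Since W1 has full row rank (almost surely), W1 @ pinv(W1) is the identity,
# so re-encoding the reconstruction recovers exactly the same hidden variables.
h_again, _ = forward(x_hat)
assert np.allclose(h_again, h)
```

The consistency check at the end mirrors the "visible data reconstruction from low-dimensional hidden variables" quality claimed in the abstract, in the weakest possible sense: the reconstruction is the minimum-norm visible vector compatible with the hidden code.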
Related papers
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Projected Belief Networks With Discriminative Alignment for Acoustic Event Classification: Rivaling State of the Art CNNs [6.062751776009752]
The projected belief network (PBN) is a generative network with a tractable likelihood function, based on a feed-forward neural network (FFNN).
The PBN is two networks in one, a FFNN that operates in the forward direction, and a generative network that operates in the backward direction.
This paper provides a comprehensive treatment of PBN, PBN-DA, and PBN-DA-HMM.
arXiv Detail & Related papers (2024-01-20T10:27:04Z)
- Towards Rigorous Understanding of Neural Networks via Semantics-preserving Transformations [0.0]
We present an approach to the precise and global verification and explanation of Rectifier Neural Networks.
Key to our approach is the symbolic execution of these networks that allows the construction of semantically equivalent Typed Affine Decision Structures.
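The symbolic execution mentioned above rests on the fact that a ReLU (rectifier) network is piecewise affine: once the set of active hidden units is fixed, the whole network collapses to a single affine map on that input region. The toy one-hidden-layer network below (all weights illustrative) verifies this collapse at a sample point; the paper's Typed Affine Decision Structures organize all such regions systematically, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-hidden-layer ReLU network: input (2-dim) -> hidden (4 units) -> output (1-dim).
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)

def net(x):
    """Ordinary forward pass through the ReLU network."""
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def affine_for_pattern(active):
    """Affine map (A, c) that the network equals wherever exactly the
    units in `active` fire: ReLU becomes multiplication by a 0/1 diagonal."""
    D = np.diag(active.astype(float))
    return W2 @ D @ W1, W2 @ D @ b1 + b2

x = rng.standard_normal(2)
active = (W1 @ x + b1) > 0          # activation pattern at x
A, c = affine_for_pattern(active)

# On this activation region, the network IS the affine map A x + c.
assert np.allclose(net(x), A @ x + c)
```

Enumerating such patterns over all inputs is exactly where symbolic execution earns its keep: each feasible pattern yields one affine piece of a semantically equivalent decision structure.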
arXiv Detail & Related papers (2023-01-19T11:35:07Z)
- Multi-Fake Evolutionary Generative Adversarial Networks for Imbalance Hyperspectral Image Classification [7.9067022260826265]
This paper presents a novel multi-fake evolutionary generative adversarial network for handling imbalanced hyperspectral image classification.
Different generative objective losses are considered in the generator network to improve the classification performance of the discriminator network.
The effectiveness of the proposed method has been validated through two hyperspectral spatial-spectral data sets.
arXiv Detail & Related papers (2021-11-07T07:29:24Z)
- Provable Generalization of SGD-trained Neural Networks of Any Width in the Presence of Adversarial Label Noise [85.59576523297568]
We consider a one-hidden-layer leaky ReLU network of arbitrary width trained by gradient descent.
We prove that SGD produces neural networks that have classification accuracy competitive with that of the best halfspace over the distribution.
arXiv Detail & Related papers (2021-01-04T18:32:49Z)
- Provably Training Neural Network Classifiers under Fairness Constraints [70.64045590577318]
We show that overparametrized neural networks can meet the fairness constraints.
A key ingredient in building a fair neural network classifier is establishing a no-regret analysis for neural networks.
arXiv Detail & Related papers (2020-12-30T18:46:50Z)
- Discriminability of Single-Layer Graph Neural Networks [172.5042368548269]
Graph neural networks (GNNs) have exhibited promising performance on a wide range of problems.
We focus on the property of discriminability and establish conditions under which the inclusion of pointwise nonlinearities to a stable graph filter bank leads to an increased discriminative capacity for high-eigenvalue content.
arXiv Detail & Related papers (2020-10-17T18:52:34Z)
- Identity-Based Patterns in Deep Convolutional Networks: Generative Adversarial Phonology and Reduplication [0.0]
We use the ciwGAN architecture (Beguš), in which learning of meaningful representations in speech emerges from a requirement that the CNNs generate informative data.
We propose a technique to wug-test CNNs trained on speech and, based on four generative tests, argue that the network learns to represent an identity-based pattern in its latent space.
arXiv Detail & Related papers (2020-09-13T23:12:49Z)
- ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image Classification [49.87503122462432]
We introduce a novel neural network termed the Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks with different backbones so as to learn features that perform well under both of the aforementioned classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
- Neural Anisotropy Directions [63.627760598441796]
We define neural anisotropy directions (NADs) as the vectors that encapsulate the directional inductive bias of an architecture.
We show that for the CIFAR-10 dataset, NADs characterize the features used by CNNs to discriminate between different classes.
arXiv Detail & Related papers (2020-06-17T08:36:28Z)
- Network Comparison with Interpretable Contrastive Network Representation Learning [44.145644586950574]
We introduce a new analysis approach called contrastive network representation learning (cNRL).
cNRL enables embedding of network nodes into a low-dimensional representation that reveals the uniqueness of one network compared to another.
We demonstrate the effectiveness of i-cNRL, an interpretable variant of cNRL, for network comparison with multiple network models and real-world datasets.
arXiv Detail & Related papers (2020-05-25T21:46:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.