Multi-layered Discriminative Restricted Boltzmann Machine with Untrained Probabilistic Layer
- URL: http://arxiv.org/abs/2210.15434v1
- Date: Thu, 27 Oct 2022 13:56:17 GMT
- Title: Multi-layered Discriminative Restricted Boltzmann Machine with Untrained Probabilistic Layer
- Authors: Yuri Kanno and Muneki Yasuda
- Abstract summary: An extreme learning machine (ELM) is a three-layered feed-forward neural network having untrained parameters.
Inspired by ELM, a probabilistic untrained layer called a probabilistic-ELM layer is proposed.
It is combined with a discriminative restricted Boltzmann machine (DRBM) to solve classification problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: An extreme learning machine (ELM) is a three-layered feed-forward neural
network having untrained parameters, which are randomly determined before
training. Inspired by the idea of ELM, a probabilistic untrained layer called a
probabilistic-ELM (PELM) layer is proposed, and it is combined with a
discriminative restricted Boltzmann machine (DRBM), which is a probabilistic
three-layered neural network for solving classification problems. The proposed
model is obtained by stacking a DRBM on the PELM layer. The resultant model
(i.e., the multi-layered DRBM (MDRBM)) forms a probabilistic four-layered
neural network. In MDRBM, the parameters of the PELM layer can be determined
using a Gaussian-Bernoulli restricted Boltzmann machine. Owing to the PELM
layer, MDRBM acquires strong robustness to input noise, which is one of its
most important advantages. Numerical experiments on several benchmark datasets
(MNIST, Fashion-MNIST, Urban Land Cover, and CIFAR-10) demonstrate that MDRBM
outperforms other existing models, particularly in terms of noise robustness
(in other words, generalization).
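To make the untrained-layer idea concrete, the following is a minimal Python sketch of an ELM-style pipeline: a hidden layer whose weights are drawn once at random and never trained, followed by a trained classifier head. This is an illustration only, not the paper's method; the PELM layer is probabilistic, its parameters can be set with a Gaussian-Bernoulli RBM rather than sampled blindly, and the layer width, toy data, and logistic-regression head below are all assumptions.

```python
# A minimal sketch of an ELM-style pipeline (assumed layer width, toy data,
# and logistic-regression head); NOT the paper's PELM/DRBM construction.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_in, n_hidden = 20, 256

# Untrained layer: weights are drawn once at random and never updated.
W = rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_in, n_hidden))
b = rng.normal(size=n_hidden)
hidden = lambda X: 1.0 / (1.0 + np.exp(-(X @ W + b)))  # sigmoid features

# Toy data: two noisy Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 1.0, (200, n_in)),
               rng.normal(1.0, 1.0, (200, n_in))])
y = np.repeat([0, 1], 200)

# Only the classifier head is trained, as in ELM.
clf = LogisticRegression(max_iter=1000).fit(hidden(X), y)
print("train accuracy:", clf.score(hidden(X), y))
```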
Related papers
- Monotone deep Boltzmann machines [86.50247625239406]
Deep Boltzmann machines (DBMs) are multi-layered probabilistic models governed by a pairwise energy function.
We develop a new class of restricted models, the monotone DBM, which allows for arbitrary self-connections in each layer.
We show that a particular choice of activation yields a fixed-point iteration that gives a variational mean-field solution (see the sketch below).
arXiv Detail & Related papers (2023-07-11T03:02:44Z)
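For context on the fixed-point view in the entry above, here is a generic sketch of damped fixed-point iteration on the standard mean-field equations mu_i = sigmoid(b_i + sum_j J_ij mu_j) for a toy pairwise Boltzmann energy. The couplings, biases, and damping factor are assumed values; it does not use the paper's monotone parameterization, which is what guarantees a unique fixed point.

```python
# Generic damped fixed-point iteration on the standard mean-field equations
# mu_i = sigmoid(b_i + sum_j J_ij mu_j) for a toy pairwise Boltzmann energy.
# The couplings J, biases b, and damping 0.5 are assumed values; the paper's
# monotone parameterization is not used here.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n = 8
J = rng.normal(scale=0.3, size=(n, n))
J = (J + J.T) / 2.0          # symmetric couplings
np.fill_diagonal(J, 0.0)     # no self-coupling in this toy energy
b = rng.normal(size=n)

mu = np.full(n, 0.5)         # initial magnetizations
for _ in range(200):
    mu = 0.5 * mu + 0.5 * sigmoid(b + J @ mu)  # damped update
print("mean-field magnetizations:", np.round(mu, 3))
```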
- Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but predict less accurately.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z)
- Neural Boltzmann Machines [2.179313476241343]
Conditional generative models are capable of using contextual information as input to create new imaginative outputs.
Conditional Restricted Boltzmann Machines (CRBMs) are one class of conditional generative models that have proven to be especially adept at modeling noisy discrete or continuous data.
We generalize CRBMs by replacing each CRBM parameter with its own neural network that is a function of the conditional inputs (a toy sketch follows below).
arXiv Detail & Related papers (2023-05-15T04:03:51Z)
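A hedged toy sketch of the idea in the entry above: each CRBM parameter (visible bias, hidden bias, weight matrix) is produced by its own small network of the conditional input. The network shapes and random untrained weights below are assumptions for illustration, not the authors' architecture.

```python
# Toy sketch: each CRBM parameter is produced by its own small network of the
# conditional input c. Shapes and random (untrained) weights are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_c, n_v, n_h = 5, 6, 4

def make_net(n_out, n_mid=16):
    """A tiny random two-layer map c -> parameter vector (untrained toy)."""
    W1 = rng.normal(scale=0.5, size=(n_c, n_mid))
    W2 = rng.normal(scale=0.5, size=(n_mid, n_out))
    return lambda c: np.tanh(c @ W1) @ W2

bias_v_net = make_net(n_v)        # network for the visible bias
bias_h_net = make_net(n_h)        # network for the hidden bias
weight_net = make_net(n_v * n_h)  # network for the weight matrix

def energy(v, h, c):
    W = weight_net(c).reshape(n_v, n_h)
    return -(v @ W @ h + bias_v_net(c) @ v + bias_h_net(c) @ h)

c = rng.normal(size=n_c)
v = rng.integers(0, 2, n_v).astype(float)
h = rng.integers(0, 2, n_h).astype(float)
print("conditional energy E(v, h | c):", energy(v, h, c))
```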
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper, we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct batch-ensemble stochastic neural networks (BE-SNNs) and overcome the feature collapse problem (see the sketch below).
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset, among others.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
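The batch-ensemble mechanism named above can be sketched as follows, under the usual BatchEnsemble construction in which each member's weight is the shared weight modulated elementwise by a per-member rank-one matrix; the dimensions and random factors here are illustrative assumptions.

```python
# Sketch of the BatchEnsemble construction: each member's weight is the shared
# weight W modulated elementwise by a per-member rank-one matrix. Dimensions
# and random factors are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, n_members = 8, 4, 3

W = rng.normal(scale=0.3, size=(n_in, n_out))      # shared weight
Rin = rng.normal(size=(n_members, n_in))           # input-side factors
Rout = rng.normal(size=(n_members, n_out))         # output-side factors

def member_forward(x, i):
    # Equivalent to x @ (W * np.outer(Rin[i], Rout[i])), computed without
    # materializing the per-member weight matrix.
    return ((x * Rin[i]) @ W) * Rout[i]

x = rng.normal(size=n_in)
outs = np.stack([member_forward(x, i) for i in range(n_members)])
print("ensemble mean output:", outs.mean(axis=0))
```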
- Intelligent Trajectory Design for RIS-NOMA aided Multi-robot Communications [59.34642007625687]
The goal is to maximize the sum-rate over the whole trajectories of a multi-robot system by jointly optimizing the trajectories and NOMA decoding orders of the robots.
An integrated machine learning (ML) scheme is proposed, which combines a long short-term memory (LSTM)-autoregressive integrated moving average (ARIMA) model and a dueling double deep Q-network (D$^3$QN) algorithm.
arXiv Detail & Related papers (2022-05-03T17:14:47Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
Deep convolutional Gaussian mixture models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- A deep learning based surrogate model for stochastic simulators [0.0]
We propose a deep learning-based surrogate model for stochastic simulators.
We utilize the conditional maximum mean discrepancy (CMMD) as the loss function (a generic MMD sketch follows below).
The results obtained indicate excellent performance of the proposed approach.
arXiv Detail & Related papers (2021-10-24T11:38:47Z)
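As background for the CMMD loss named above, here is a minimal sketch of the (unconditional) squared maximum mean discrepancy with an RBF kernel, the building block that CMMD conditions on context; the kernel bandwidth and toy samples are assumptions.

```python
# Minimal biased estimator of squared MMD with an RBF kernel, the building
# block behind CMMD; the bandwidth gamma and toy samples are assumptions.
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    def k(A, B):  # pairwise RBF kernel matrix
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (100, 2))  # samples from one distribution
Y = rng.normal(0.5, 1.0, (100, 2))  # samples from a shifted distribution
print("MMD^2 estimate:", mmd2_rbf(X, Y))
```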
- Barriers and Dynamical Paths in Alternating Gibbs Sampling of Restricted Boltzmann Machines [0.0]
We study the performance of Alternating Gibbs Sampling (AGS) on several analytically tractable models (a minimal AGS sketch follows below).
We show that standard AGS is not more efficient than classical Metropolis-Hastings (MH) sampling of the effective energy landscape.
We illustrate our findings on three datasets: Bars and Stripes and MNIST, well known in machine learning, and the so-called Lattice Proteins.
arXiv Detail & Related papers (2021-07-13T12:07:56Z)
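A minimal sketch of alternating Gibbs sampling for a binary RBM, as studied in the entry above: hidden units are sampled given the visibles, then visibles given the hiddens. The random (untrained) weights and the number of sweeps are illustrative assumptions.

```python
# Alternating Gibbs sampling for a binary RBM: sample h | v, then v | h.
# The weights here are random (untrained) and the sweep count is arbitrary.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_v, n_h = 6, 4
W = rng.normal(scale=0.5, size=(n_v, n_h))
b, c = rng.normal(size=n_v), rng.normal(size=n_h)  # visible/hidden biases

v = rng.integers(0, 2, size=n_v).astype(float)
for _ in range(1000):  # alternating Gibbs sweeps
    h = (rng.random(n_h) < sigmoid(c + v @ W)).astype(float)  # h | v
    v = (rng.random(n_v) < sigmoid(b + W @ h)).astype(float)  # v | h
print("final visible sample:", v)
```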
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution (one plausible form is sketched below).
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
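One plausible form of the GM loss described above, following the common L-GM formulation (squared-distance logits to class means, a multiplicative margin on the true class, and a likelihood regularizer); the margin and regularization weights are assumed hyperparameters, and the paper's exact formulation may differ.

```python
# One plausible GM loss (L-GM style): squared-distance logits to class means,
# a multiplicative margin on the true class, plus a likelihood regularizer.
# alpha and lam are assumed hyperparameters; the paper's exact form may differ.
import numpy as np

def gm_loss(feat, mu, y, alpha=0.3, lam=0.1):
    """feat: (N, D) features; mu: (K, D) class means; y: (N,) labels."""
    idx = np.arange(len(y))
    d = 0.5 * ((feat[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (N, K)
    d_margin = d.copy()
    d_margin[idx, y] *= (1.0 + alpha)          # margin on the true class
    logits = -d_margin
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -logp[idx, y].mean()                  # classification term
    lkd = d[idx, y].mean()                     # likelihood regularization
    return ce + lam * lkd

rng = np.random.default_rng(0)
feat, mu = rng.normal(size=(8, 4)), rng.normal(size=(3, 4))
y = rng.integers(0, 3, size=8)
print("GM loss:", gm_loss(feat, mu, y))
```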
- Highly-scalable stochastic neuron based on Ovonic Threshold Switch (OTS) and its applications in Restricted Boltzmann Machine (RBM) [0.2529563359433233]
We propose a highly scalable neuron device based on an Ovonic Threshold Switch (OTS).
As a candidate for a true random number generator (TRNG), it passes 15 of the 16 tests in the National Institute of Standards and Technology (NIST) Statistical Test Suite.
Image reconstruction is successfully demonstrated using noise-contaminated images, yielding images with the noise removed.
arXiv Detail & Related papers (2020-10-21T13:20:01Z)
- Probabilistic Object Classification using CNN ML-MAP layers [0.0]
We introduce a CNN probabilistic approach based on distributions calculated in the network's Logit layer.
The new approach shows promising performance compared to the standard SoftMax.
arXiv Detail & Related papers (2020-05-29T13:34:15Z)