Semi-supervised learning with Bayesian Confidence Propagation Neural Network
- URL: http://arxiv.org/abs/2106.15546v1
- Date: Tue, 29 Jun 2021 16:29:17 GMT
- Title: Semi-supervised learning with Bayesian Confidence Propagation Neural Network
- Authors: Naresh Balaji Ravichandran, Anders Lansner, Pawel Herman
- Abstract summary: Learning internal representations from data using no or few labels is useful for machine learning research.
Recent work has demonstrated that BCPNN networks can learn useful internal representations from data using local Bayesian-Hebbian learning rules.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning internal representations from data using no or few labels is useful
for machine learning research, as it allows using massive amounts of unlabeled
data. In this work, we use the Bayesian Confidence Propagation Neural Network
(BCPNN) model developed as a biologically plausible model of the cortex. Recent
work has demonstrated that these networks can learn useful internal
representations from data using local Bayesian-Hebbian learning rules. In this
work, we show how such representations can be leveraged in a semi-supervised
setting by introducing and comparing different classifiers. We also evaluate
and compare such networks with other popular semi-supervised classifiers.
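At the core of BCPNN is a local Bayesian-Hebbian rule: units keep exponentially decaying running estimates of activation and co-activation probabilities, and weights and biases are log-probability ratios of those estimates. A minimal numpy sketch of that idea, with the modular hypercolumn structure and the paper's trace time constants simplified away (all names and constants here are illustrative, not the authors' exact formulation):

```python
import numpy as np

def bcpnn_update(x_pre, x_post, p_i, p_j, p_ij, tau=100.0):
    """One Bayesian-Hebbian trace update: exponentially decaying
    estimates of activation and co-activation probabilities."""
    dt = 1.0 / tau
    p_i  += dt * (x_pre - p_i)                     # presynaptic rate estimate
    p_j  += dt * (x_post - p_j)                    # postsynaptic rate estimate
    p_ij += dt * (np.outer(x_pre, x_post) - p_ij)  # co-activation estimate
    return p_i, p_j, p_ij

def bcpnn_weights(p_i, p_j, p_ij, eps=1e-6):
    """Weights and biases are log-probability ratios of the traces."""
    w = np.log((p_ij + eps) / (np.outer(p_i, p_j) + eps))
    b = np.log(p_j + eps)
    return w, b

def bcpnn_forward(x, w, b):
    """Inference: support values followed by a softmax (one 'hypercolumn')."""
    s = b + x @ w
    e = np.exp(s - s.max())
    return e / e.sum()

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
p_i, p_j, p_ij = np.full(n_in, 0.5), np.full(n_out, 0.5), np.full((n_in, n_out), 0.25)
for _ in range(500):                               # drive with random binary activity
    x_pre = (rng.random(n_in) < 0.3).astype(float)
    x_post = (rng.random(n_out) < 0.3).astype(float)
    p_i, p_j, p_ij = bcpnn_update(x_pre, x_post, p_i, p_j, p_ij)
w, b = bcpnn_weights(p_i, p_j, p_ij)
print(bcpnn_forward(rng.random(n_in), w, b))
```

In the semi-supervised setting of the paper, representations learned this way are then fed to a separate classifier trained on the few available labels.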
Related papers
- Linking in Style: Understanding learned features in deep learning models [0.0]
Convolutional neural networks (CNNs) learn abstract features to perform object classification.
We propose an automatic method to visualize and systematically analyze learned features in CNNs.
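The paper's analysis method is its own, but a common starting point for inspecting learned CNN features is capturing intermediate activations with forward hooks; a generic PyTorch sketch (the tiny stand-in model is hypothetical):

```python
import torch
import torch.nn as nn

# A small stand-in CNN; any torchvision model works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

features = {}

def hook(name):
    def fn(module, inputs, output):
        features[name] = output.detach()   # save the activation map
    return fn

# Register hooks on the conv layers we want to inspect.
for i, layer in enumerate(model):
    if isinstance(layer, nn.Conv2d):
        layer.register_forward_hook(hook(f"conv{i}"))

x = torch.randn(1, 3, 32, 32)
model(x)
for name, fmap in features.items():
    print(name, tuple(fmap.shape))   # e.g. conv0 (1, 16, 32, 32)
```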
arXiv Detail & Related papers (2024-09-25T12:28:48Z)
- Fuzzy Convolution Neural Networks for Tabular Data Classification [0.0]
Convolutional neural networks (CNNs) have attracted a great deal of attention due to their remarkable performance in various domains.
In this paper, we propose a novel framework fuzzy convolution neural network (FCNN) tailored specifically for tabular data.
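The exact FCNN design is not given here; one plausible reading is that each tabular feature is expanded into fuzzy membership degrees before convolution. A sketch with Gaussian membership functions (the centers, widths, and the idea of feeding the result to a CNN are illustrative assumptions, not the paper's design):

```python
import numpy as np

def gaussian_memberships(x, centers, sigma=0.2):
    """Map each scalar feature to membership degrees in fuzzy sets
    (e.g. 'low', 'medium', 'high') via Gaussian membership functions."""
    # x: (n_samples, n_features); centers: (n_sets,)
    return np.exp(-((x[..., None] - centers) ** 2) / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.random((4, 5))                      # 4 rows of tabular data, 5 features
centers = np.array([0.0, 0.5, 1.0])         # low / medium / high
M = gaussian_memberships(X, centers)        # (4, 5, 3) fuzzy membership grid
print(M.shape)   # the membership grid can then be fed to a CNN
```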
arXiv Detail & Related papers (2024-06-04T20:33:35Z)
- Context-Specific Refinements of Bayesian Network Classifiers [1.9136291802656262]
We study the relationship between our novel classes of classifiers and Bayesian networks.
We introduce and implement data-driven learning routines for our models.
The study demonstrates that models embedding asymmetric information can enhance classification accuracy.
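For orientation, the symmetric baseline these models refine is the classical Bayesian network classifier with naive conditional independence; a self-contained numpy sketch (the paper's context-specific refinements would replace the symmetric conditional tables below with asymmetric ones):

```python
import numpy as np

def fit_naive_bayes(X, y, n_vals, alpha=1.0):
    """Categorical naive Bayes: P(y) and P(x_k | y) with Laplace smoothing."""
    classes = np.unique(y)
    prior = np.array([(y == c).mean() for c in classes])
    cond = []  # one (n_classes, n_vals) conditional table per feature
    for k in range(X.shape[1]):
        t = np.array([[(X[y == c, k] == v).sum() + alpha for v in range(n_vals)]
                      for c in classes])
        cond.append(t / t.sum(axis=1, keepdims=True))
    return classes, prior, cond

def predict(x, classes, prior, cond):
    logp = np.log(prior) + sum(np.log(cond[k][:, x[k]]) for k in range(len(x)))
    return classes[np.argmax(logp)]

rng = np.random.default_rng(0)
X = rng.integers(0, 3, (200, 4))
y = (X[:, 0] == X[:, 1]).astype(int)
model = fit_naive_bayes(X, y, n_vals=3)
print(predict(np.array([1, 1, 0, 2]), *model))
```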
arXiv Detail & Related papers (2024-05-28T15:50:50Z)
- Spiking neural networks with Hebbian plasticity for unsupervised representation learning [0.0]
We introduce a novel spiking neural network model for learning distributed internal representations from data in an unsupervised procedure.
We incorporate an online correlation-based Hebbian-Bayesian learning and rewiring mechanism, shown previously to perform representation learning, into a spiking neural network.
We show performance close to the non-spiking BCPNN, and competitive with other Hebbian-based spiking networks, on the MNIST and F-MNIST machine learning benchmarks.
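Roughly, the spiking variant low-pass filters spike trains into traces and feeds those traces into the same Bayesian-Hebbian probability estimates as the rate-based BCPNN sketched above; a toy numpy version (spike rates and time constants are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post, T, dt = 10, 5, 2000, 1.0
tau_z, tau_p = 20.0, 1000.0            # spike-trace and probability-trace time constants
z_i, z_j = np.zeros(n_pre), np.zeros(n_post)
p_i, p_j = np.full(n_pre, 0.01), np.full(n_post, 0.01)
p_ij = np.full((n_pre, n_post), 1e-4)

for _ in range(T):
    s_i = (rng.random(n_pre) < 0.02).astype(float)   # Poisson-like input spikes
    s_j = (rng.random(n_post) < 0.02).astype(float)  # Poisson-like output spikes
    z_i += dt / tau_z * (s_i - z_i)                  # low-pass spike traces
    z_j += dt / tau_z * (s_j - z_j)
    p_i += dt / tau_p * (z_i - p_i)                  # slow probability traces
    p_j += dt / tau_p * (z_j - p_j)
    p_ij += dt / tau_p * (np.outer(z_i, z_j) - p_ij)

w = np.log(p_ij / np.outer(p_i, p_j))                # Bayesian-Hebbian weights
print(w.shape, w.mean())
```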
arXiv Detail & Related papers (2023-05-05T22:34:54Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
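A toy sketch of the routing idea, not the published architecture: each token is scored against a learned signature per function module and processed by all modules in proportion to the softmax of those scores (everything below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_tokens, n_funcs = 16, 6, 3
tokens = rng.normal(size=(n_tokens, d))
signatures = rng.normal(size=(n_funcs, d))   # learned "type" vectors (here random)
W = rng.normal(size=(n_funcs, d, d)) * 0.1   # one tiny linear "function" per module

# Soft routing: each token is processed by every function, weighted by
# the softmax compatibility with that function's signature.
scores = tokens @ signatures.T               # (n_tokens, n_funcs)
route = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
outputs = np.einsum('tf,fde,te->td', route, W, tokens) + tokens  # residual
print(outputs.shape)
```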
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Semi-Supervised Learning using Siamese Networks [3.492636597449942]
This work explores a new training method for semi-supervised learning that is based on similarity function learning using a Siamese network.
Confident predictions of unlabeled instances are used as true labels for retraining the Siamese network.
To improve the predictions on unlabeled data, local learning with global consistency is also evaluated.
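Setting the similarity-learning details aside, the confident-pseudo-label retraining loop described here looks roughly like this, with a plain logistic regression standing in for the Siamese model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 2))
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unl = rng.normal(size=(200, 2))

clf = LogisticRegression().fit(X_lab, y_lab)
for _ in range(3):                               # a few self-training rounds
    proba = clf.predict_proba(X_unl)
    conf = proba.max(axis=1) > 0.95              # keep only confident predictions
    X_aug = np.vstack([X_lab, X_unl[conf]])
    y_aug = np.concatenate([y_lab, proba[conf].argmax(axis=1)])
    clf = LogisticRegression().fit(X_aug, y_aug)
print(clf.score(X_unl, (X_unl[:, 0] > 0).astype(int)))
```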
arXiv Detail & Related papers (2021-09-02T09:06:35Z)
- Leveraging Sparse Linear Layers for Debuggable Deep Networks [86.94586860037049]
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
The resulting sparse explanations can help to identify spurious correlations, explain misclassifications, and diagnose model biases in vision and language tasks.
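The recipe is straightforward to sketch: freeze the deep features and fit an L1-penalized linear model on top, so each class depends on only a few features. A scikit-learn sketch with synthetic stand-in features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 64))                  # frozen deep features (stand-in)
y = (feats[:, 3] - feats[:, 10] > 0).astype(int)    # synthetic labels

# The L1 penalty drives most coefficients to exactly zero, so each class
# is explained by a handful of identifiable features.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(feats, y)
active = np.nonzero(probe.coef_[0])[0]
print("features used:", active)                     # should pick out 3 and 10
```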
arXiv Detail & Related papers (2021-05-11T08:15:25Z)
- Deep Archimedean Copulas [98.96141706464425]
ACNet is a novel differentiable neural network architecture that enforces the structural properties of Archimedean copulas.
We show that ACNet is able to both approximate common Archimedean Copulas and generate new copulas which may provide better fits to data.
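An Archimedean copula is built from a single generator, C(u, v) = psi(psi^{-1}(u) + psi^{-1}(v)); ACNet learns psi with a network constrained to have the right monotonicity properties. A sketch using the closed-form Clayton generator in place of a learned one:

```python
import numpy as np

def clayton_generator(t, theta):
    """Archimedean generator psi for the Clayton family."""
    return (1.0 + theta * t) ** (-1.0 / theta)

def clayton_generator_inv(u, theta):
    return (u ** (-theta) - 1.0) / theta

def archimedean_copula(u, v, theta=2.0):
    """C(u, v) = psi(psi^{-1}(u) + psi^{-1}(v))."""
    return clayton_generator(
        clayton_generator_inv(u, theta) + clayton_generator_inv(v, theta), theta)

print(archimedean_copula(0.3, 0.7))                   # joint CDF value
print(archimedean_copula(0.3, 0.7) <= min(0.3, 0.7))  # Frechet upper bound holds
```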
arXiv Detail & Related papers (2020-12-05T22:58:37Z)
- Network Classifiers Based on Social Learning [71.86764107527812]
We propose a new way of combining independently trained classifiers over space and time.
The proposed architecture is able to improve prediction performance over time with unlabeled data.
We show that this strategy results in consistent learning with high probability, and it yields a robust structure against poorly trained classifiers.
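A heavily simplified sketch of the combination-over-space-and-time idea (the actual social learning update and network topology are the paper's; everything below is illustrative): each classifier pools its neighbors' beliefs and then adds its own evidence for the incoming unlabeled sample:

```python
import numpy as np

rng = np.random.default_rng(0)
n_classifiers, n_classes = 5, 3
log_belief = np.zeros((n_classifiers, n_classes))   # running beliefs

def social_update(log_belief, log_likelihoods, A, step=0.5):
    """One step: average neighbors' beliefs (combination over 'space'),
    then add local evidence for the new sample (adaptation over 'time')."""
    pooled = A @ log_belief                          # neighborhood averaging
    updated = pooled + step * log_likelihoods
    return updated - updated.max(axis=1, keepdims=True)  # numerical stability

A = np.full((n_classifiers, n_classifiers), 1.0 / n_classifiers)  # fully connected
for _ in range(50):   # stream of unlabeled samples
    log_likelihoods = rng.normal(size=(n_classifiers, n_classes)) + np.array([1.0, 0, 0])
    log_belief = social_update(log_belief, log_likelihoods, A)
print(log_belief.argmax(axis=1))   # classifiers converge on class 0
```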
arXiv Detail & Related papers (2020-10-23T11:18:20Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
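A simplified, non-variational sketch of the training signal: the latent code must support both reconstruction and prediction of which augmentation was applied (the augmentation set, sizes, and loss weighting are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 16))
dec = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784))
aux = nn.Linear(16, 2)                        # predicts augmentation id (2 kinds)

opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), *aux.parameters()], lr=1e-3)
mse, ce = nn.MSELoss(), nn.CrossEntropyLoss()

x = torch.rand(32, 784)                       # stand-in batch
aug_id = torch.randint(0, 2, (32,))           # which augmentation was used
x_aug = torch.where(aug_id[:, None] == 0, x, 1.0 - x)  # toy "augmentations"

z = enc(x_aug)
loss = mse(dec(z), x_aug) + ce(aux(z), aug_id)  # reconstruction + context prediction
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```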
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
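A toy version of the metric-based region comparison: split query and class-prototype features into region grids, score region pairs by cosine similarity, and aggregate; the per-region similarity map is what makes the prediction inspectable (all shapes here are illustrative):

```python
import numpy as np

def region_similarity(query_regions, class_proto_regions):
    """Cosine similarity per region pair, aggregated into one class score."""
    q = query_regions / np.linalg.norm(query_regions, axis=1, keepdims=True)
    p = class_proto_regions / np.linalg.norm(class_proto_regions, axis=1, keepdims=True)
    sim = q @ p.T                   # (n_query_regions, n_proto_regions)
    return sim.max(axis=1).mean()   # best-matching region, averaged

rng = np.random.default_rng(0)
query = rng.normal(size=(9, 32))                        # 3x3 grid of region features
protos = [rng.normal(size=(9, 32)) for _ in range(5)]   # 5-way few-shot prototypes
scores = [region_similarity(query, p) for p in protos]
print("predicted class:", int(np.argmax(scores)))
```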
arXiv Detail & Related papers (2020-09-08T07:29:05Z)