The ideal data compression and automatic discovery of hidden law using
neural network
- URL: http://arxiv.org/abs/2203.16941v1
- Date: Thu, 31 Mar 2022 10:55:24 GMT
- Title: The ideal data compression and automatic discovery of hidden law using
neural network
- Authors: Taisuke Katayose
- Abstract summary: We consider how the human brain recognizes events and memorizes them.
We reproduce this system of the human brain in a machine learning model with a new autoencoder neural network.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, machine learning using neural networks has developed rapidly, and
many new methods have been proposed. On the other hand, no system with true
versatility has been developed, and there remain many fields in which the
human brain has advantages over machine learning. We considered how the human
brain recognizes and memorizes events, and we succeeded in reproducing this
system of the human brain in a machine learning model with a new autoencoder
neural network (NN). Previous autoencoders have the problem that they cannot
properly define the features of the input data, so the middle layer of the
autoencoder has to be restricted artificially. We solve this problem by
defining a new loss function that reflects the information entropy, which
enables the NN to compress the input data ideally and automatically discover
the hidden law behind the input data set. The loss function used in our NN is
based on the free-energy principle, known as a unified brain theory, and our
study is the first concrete formulation of this principle. The result of this
study can be applied to any kind of data analysis and also to cognitive
science.
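The abstract does not give the explicit form of the entropy-based loss, so the following is only a minimal sketch of the general idea: an autoencoder whose objective adds an information-entropy penalty on the latent code to the usual reconstruction error, so that redundant latent units are pruned automatically instead of by hand-picking the size of the middle layer. The Bernoulli-style entropy estimator, the layer sizes, and the weight `beta` are illustrative assumptions, not the paper's actual formulation.
```python
# Illustrative sketch only (PyTorch). The entropy estimator, layer sizes, and
# the weight `beta` are assumptions for demonstration; the paper's exact
# free-energy-based loss is not specified in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EntropyRegularizedAutoencoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        # A deliberately wide middle layer: the entropy penalty, not a
        # hand-chosen bottleneck, is meant to decide how much of it is used.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim), nn.Sigmoid())
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def latent_entropy(z, eps=1e-6):
    # Treat each latent unit as a Bernoulli variable with mean activation p
    # over the batch and sum the binary entropies; pushing this term down
    # drives units toward being either unused or deterministic, i.e. toward
    # a more compact code.
    p = z.mean(dim=0).clamp(eps, 1.0 - eps)
    return -(p * p.log() + (1.0 - p) * (1.0 - p).log()).sum()

def loss_fn(x, x_hat, z, beta=1e-2):
    # Reconstruction error ("accuracy") plus an entropy penalty on the code
    # ("complexity"): a crude stand-in for a free-energy-style objective.
    return F.mse_loss(x_hat, x) + beta * latent_entropy(z)
```
Training then amounts to minimizing loss_fn over batches; latent units whose entropy contribution collapses to zero can be read off as unused, which is one way an autoencoder could pick its effective code size automatically.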
Related papers
- Machine Unlearning using Forgetting Neural Networks [0.0]
This paper presents a new approach to machine unlearning using forgetting neural networks (FNNs).
FNNs are neural networks with specific forgetting layers that take inspiration from the processes involved when the human brain forgets.
We report our results on the MNIST handwritten digit recognition and fashion datasets.
arXiv Detail & Related papers (2024-10-29T02:52:26Z)
- Verified Neural Compressed Sensing [58.98637799432153]
We develop the first (to the best of our knowledge) provably correct neural networks for a precise computational task.
We show that for modest problem dimensions (up to 50), we can train neural networks that provably recover a sparse vector from linear and binarized linear measurements.
We show that the complexity of the network can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
arXiv Detail & Related papers (2024-05-07T12:20:12Z)
- Toward Neuromic Computing: Neurons as Autoencoders [0.0]
This paper presents the idea that neural backpropagation uses dendritic processing to enable individual neurons to perform autoencoding.
Using a very simple connection weight search and artificial neural network model, the effects of interleaving autoencoding for each neuron in a hidden layer of a feedforward network are explored.
arXiv Detail & Related papers (2024-03-04T18:58:09Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities; a generic code sketch of this idea appears after this list.
Our method consistently enables continual learning for spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Implementing engrams from a machine learning perspective: matching for prediction [0.0]
We propose how we might design a computer system to implement engrams using neural networks.
Building on autoencoders, we propose latent neural spaces as indexes for storing and retrieving information in a compressed format.
We consider how different states in latent neural spaces corresponding to different types of sensory input could be linked by synchronous activation.
arXiv Detail & Related papers (2023-03-01T10:05:40Z)
- Continual Learning with Deep Learning Methods in an Application-Oriented Context [0.0]
An important research area of Artificial Intelligence (AI) deals with the automatic derivation of knowledge from data.
One type of machine learning algorithm that can be categorized as a "deep learning" model is the Deep Neural Network (DNN).
DNNs are affected by a problem that prevents new knowledge from being added to an existing knowledge base.
arXiv Detail & Related papers (2022-07-12T10:13:33Z)
- A Spiking Neural Network based on Neural Manifold for Augmenting Intracortical Brain-Computer Interface Data [5.039813366558306]
Brain-computer interfaces (BCIs) transform neural signals in the brain into instructions to control external devices.
With the advent of advanced machine learning methods, the capability of brain-computer interfaces has been enhanced like never before.
Here, we use spiking neural networks (SNN) as data generators.
arXiv Detail & Related papers (2022-03-26T15:32:31Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
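For the Hebbian orthogonal-projection entry above, the following is a generic NumPy sketch of how Hebbian updates on feedforward weights combined with anti-Hebbian updates on recurrent lateral connections can extract the principal subspace of their inputs (a Foldiak/Oja-style network). It is not the cited paper's method; the sizes, learning rate, and one-step lateral settling are assumptions for illustration.
```python
# Generic Hebbian / anti-Hebbian subspace extraction (NumPy sketch).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, lr = 20, 3, 1e-3

W = 0.1 * rng.normal(size=(n_out, n_in))  # feedforward weights, Hebbian updates
M = np.zeros((n_out, n_out))              # lateral weights, anti-Hebbian updates

# Toy inputs whose variance is concentrated in a 3-dimensional subspace.
basis = rng.normal(size=(3, n_in))
X = rng.normal(size=(5000, 3)) @ basis + 0.05 * rng.normal(size=(5000, n_in))

for _ in range(5):                         # a few passes over the data
    for x in X:
        y = W @ x                          # feedforward drive
        y = y - M @ y                      # one-step lateral inhibition
        # Oja-style Hebbian rule keeps the rows of W bounded while they
        # rotate into the principal subspace of the inputs.
        W += lr * (np.outer(y, x) - np.outer(y, y) @ W)
        # Anti-Hebbian growth of the lateral weights decorrelates the outputs.
        M += lr * np.outer(y, y)
        np.fill_diagonal(M, 0.0)

# After training, the rows of W should approximately span the same
# 3-dimensional subspace as `basis`.
```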
This list is automatically generated from the titles and abstracts of the papers on this site.