Pattern Inversion as a Pattern Recognition Method for Machine Learning
- URL: http://arxiv.org/abs/2108.10242v1
- Date: Sun, 15 Aug 2021 10:25:51 GMT
- Title: Pattern Inversion as a Pattern Recognition Method for Machine Learning
- Authors: Alexei Mikhailov, Mikhail Karavay
- Abstract summary: The paper discusses the use of indexing-based methods for pattern recognition.
It is shown that, for pattern recognition applications, such indexing methods replace the fully inverted files with inverse patterns.
The paper discusses a pattern inversion formalism that makes use of a novel pattern transform and its application to unsupervised instant learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial neural networks rely on large numbers of coefficients whose adjustment takes a great deal of computing power, especially when deep learning networks are employed. However, there exist coefficient-free, extremely fast indexing-based technologies that work, for instance, in Google search engines, in genome sequencing, and elsewhere. The paper discusses the use of indexing-based methods for pattern recognition. It is shown that, for pattern recognition applications, such indexing methods replace the fully inverted files typically employed in search engines with inverse patterns. Not only does such inversion provide automatic feature extraction, which is a distinguishing mark of deep learning, but, unlike deep learning, pattern inversion supports almost instantaneous learning, a consequence of the absence of coefficients. The paper discusses a pattern inversion formalism that makes use of a novel pattern transform and its application to unsupervised instant learning. Examples demonstrate view-angle-independent recognition of three-dimensional objects, such as cars, against arbitrary backgrounds, prediction of the remaining useful life of aircraft engines, and other applications. In conclusion, it is noted that, in neurophysiology, the function of the neocortical mini-column has been widely debated since 1957. This paper hypothesizes that, mathematically, the cortical mini-column can be described as an inverse pattern, which physically serves as a connection multiplier expanding associations of inputs with relevant pattern classes.
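
To make the indexing idea concrete, here is a minimal sketch of a coefficient-free classifier built on an inverted file, in the spirit the abstract describes: each indexed feature votes for the classes it was seen with, and learning is a single pass of index updates. The `InvertedIndexClassifier` name, the toy n-gram feature extractor, and the voting rule are illustrative assumptions, not the authors' pattern-inversion transform.

```python
# Minimal sketch of indexing-based (coefficient-free) pattern recognition.
# The n-gram feature extractor and the voting rule are illustrative
# assumptions, not the paper's actual pattern-inversion formalism.
from collections import Counter, defaultdict

def features(pattern, n=3):
    """Toy feature extractor: overlapping character n-grams of a string."""
    return {pattern[i:i + n] for i in range(len(pattern) - n + 1)}

class InvertedIndexClassifier:
    def __init__(self):
        # Inverted file: feature -> set of class labels it occurred in.
        self.index = defaultdict(set)

    def learn(self, pattern, label):
        # "Instant" learning: one pass of index updates, no coefficients.
        for f in features(pattern):
            self.index[f].add(label)

    def classify(self, pattern):
        # Each feature of the query votes for its associated classes.
        votes = Counter()
        for f in features(pattern):
            for label in self.index.get(f, ()):
                votes[label] += 1
        return votes.most_common(1)[0][0] if votes else None

clf = InvertedIndexClassifier()
clf.learn("the quick brown fox", "animal")
clf.learn("a fast red car on the road", "vehicle")
print(clf.classify("brown fox jumps"))  # -> 'animal'
print(clf.classify("red car"))          # -> 'vehicle'
```

Classification here reduces to set lookups and counting, which is why learning is nearly instantaneous: adding a pattern touches only the index entries of its own features, with no gradient-based coefficient adjustment.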
Related papers
- Scale-Free Image Keypoints Using Differentiable Persistent Homology [3.5263582734967307]
In computer vision, keypoint detection is a fundamental task, with applications spanning from robotics to image retrieval.
This paper introduces a novel approach that leverages Morse theory and persistent homology, powerful tools rooted in algebraic topology.
We propose a novel loss function based on the recent introduction of a notion of subgradient in persistent homology, paving the way toward topological learning.
arXiv Detail & Related papers (2024-06-03T13:27:51Z) - Detecting Moving Objects With Machine Learning [0.0]
This chapter presents a review of the use of machine learning techniques to find moving objects in astronomical imagery.
I discuss various pitfalls with the use of machine learning techniques, including a discussion on the important issue of overfitting.
arXiv Detail & Related papers (2024-05-10T00:13:39Z) - Understanding Activation Patterns in Artificial Neural Networks by
Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
arXiv Detail & Related papers (2023-08-01T22:12:30Z) - Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in weighted graphs in general.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - Mapping of attention mechanisms to a generalized Potts model [50.91742043564049]
We show that training a neural network is exactly equivalent to solving the inverse Potts problem by the so-called pseudo-likelihood method.
We also compute the generalization error of self-attention in a model scenario analytically using the replica method.
arXiv Detail & Related papers (2023-04-14T16:32:56Z) - Learning Single-Index Models with Shallow Neural Networks [43.6480804626033]
We introduce a natural class of shallow neural networks and study its ability to learn single-index models via gradient flow.
We show that the corresponding optimization landscape is benign, which in turn leads to generalization guarantees that match the near-optimal sample complexity of dedicated semi-parametric methods.
arXiv Detail & Related papers (2022-10-27T17:52:58Z) - Measures of Information Reflect Memorization Patterns [53.71420125627608]
We show that the diversity in the activation patterns of different neurons is reflective of model generalization and memorization.
Importantly, we discover that information organization points to the two forms of memorization, even for neural activations computed on unlabelled in-distribution examples.
arXiv Detail & Related papers (2022-10-17T20:15:24Z) - Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z) - A Novel Anomaly Detection Algorithm for Hybrid Production Systems based
on Deep Learning and Timed Automata [73.38551379469533]
DAD (DeepAnomalyDetection) is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata to create a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z) - Automatic Recall Machines: Internal Replay, Continual Learning and the
Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective.
Instead, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z) - Surprisal-Triggered Conditional Computation with Neural Networks [19.55737970532817]
Autoregressive neural network models have been used successfully for sequence generation, feature extraction, and hypothesis scoring.
This paper presents yet another use for these models: allocating more computation to more difficult inputs.
arXiv Detail & Related papers (2020-06-02T14:34:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.