A Perceptron-based Fine Approximation Technique for Linear Separation
- URL: http://arxiv.org/abs/2309.06049v1
- Date: Tue, 12 Sep 2023 08:35:24 GMT
- Title: A Perceptron-based Fine Approximation Technique for Linear Separation
- Authors: Ákos Hajnal
- Abstract summary: This paper presents a novel online learning method that aims at finding a separator hyperplane between data points labelled as either positive or negative.
Weights and biases of artificial neurons can directly be related to hyperplanes in high-dimensional spaces.
The presented method is proven to converge; empirical results show that it can be more efficient than the Perceptron algorithm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel online learning method that aims at finding a
separator hyperplane between data points labelled as either positive or
negative. Since weights and biases of artificial neurons can directly be
related to hyperplanes in high-dimensional spaces, the technique is applicable
to train perceptron-based binary classifiers in machine learning. For large or
imbalanced data sets, analytical or gradient-based solutions can become
prohibitively expensive and impractical, whereas heuristics and approximation
techniques remain applicable. The proposed method is based on the Perceptron
algorithm; however, it tunes neuron weights only to the extent necessary while
searching for the separator hyperplane. Thanks to an appropriate transformation
of the initial data set, neither the data labels nor the bias term need to be
considered, which reduces separability to a one-class classification problem.
The presented method is proven to converge; empirical results show that it can
be more efficient than the Perceptron algorithm, especially when the size of
the data set exceeds the data dimensionality.
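To make the transformation concrete: the standard way to drop both the labels and the bias term is to absorb them into the data points, mapping each pair $(x_i, y_i)$ with $y_i \in \{+1, -1\}$ to $z_i = y_i [x_i, 1]$. A separating hyperplane then exists exactly when some weight vector $w$ satisfies $w \cdot z_i > 0$ for every $i$, which is the one-class form the abstract refers to. The Python sketch below implements this transformation; the minimal-correction update, which moves $w$ only as far as the violated constraint requires, is a hypothetical illustration of tuning weights "only to the necessary extent", not the paper's actual rule.

```python
# Sketch of the setup described in the abstract -- not the paper's exact
# algorithm. Labels and bias are absorbed into the data, turning "find a
# separating hyperplane" into "find w with w @ z > 0 for all z".
import numpy as np

def absorb_labels_and_bias(X, y):
    """Map (x_i, y_i) -> z_i = y_i * [x_i, 1]; separability now means
    some w satisfies w @ z_i > 0 for every i (a one-class condition)."""
    Z = np.hstack([X, np.ones((X.shape[0], 1))])
    return Z * y[:, None]

def fine_perceptron(Z, margin=1e-3, max_epochs=1000):
    """Perceptron-style loop with a minimal correction step (an assumption
    for illustration): on a violation, w moves just far enough to put the
    point at a small positive margin, instead of the fixed update w += z."""
    w = np.zeros(Z.shape[1])
    for _ in range(max_epochs):
        clean_pass = True
        for z in Z:
            s = w @ z
            if s <= 0:                            # constraint w @ z > 0 violated
                w += (margin - s) / (z @ z) * z   # smallest correcting step
                clean_pass = False
        if clean_pass:
            return w                              # every point on the positive side
    return w

# Usage on a trivially separable toy set:
X = np.array([[2.0, 1.0], [1.0, 2.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = fine_perceptron(absorb_labels_and_bias(X, y))
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # -> [ 1.  1. -1. -1.]
```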
Related papers
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method for discovering symmetries via trained neural networks which approximate the input-output mappings of the tasks.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
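A toy numerical illustration of the equivariance condition such symmetry-discovery methods rely on, $f(g \cdot x) = g \cdot f(x)$; the map $f$ below is a deliberately simple placeholder, not LieSD itself:

```python
# Check rotation equivariance f(R @ x) == R @ f(x) numerically.
# f is a stand-in for a trained network's input-output mapping.
import numpy as np

def f(x):
    return 3.0 * x  # isotropic map: commutes with every rotation

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # a 2-D rotation

x = np.random.randn(2)
print(np.linalg.norm(f(R @ x) - R @ f(x)))  # ~0 => rotation-equivariant
```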
- Minimally Supervised Learning using Topological Projections in Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data; a minimal number of available labeled data points are then assigned to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z)
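A hedged sketch of the label-propagation idea in the entry above, using the third-party minisom package; the grid size, training budget, and nearest-BMU fallback are illustrative choices, and the paper's key-BMU selection is not reproduced here.

```python
# Train a SOM on unlabeled data, tag best matching units (BMUs) with the few
# available labels, and classify new samples by their BMU -- an illustrative
# reading of the entry above, not the paper's exact procedure.
import numpy as np
from minisom import MiniSom  # pip install minisom

rng = np.random.default_rng(0)
X_unlabeled = rng.normal(size=(500, 4))   # plentiful unlabeled data
X_labeled = rng.normal(size=(10, 4))      # a minimal labeled subset
y_labeled = rng.integers(0, 2, size=10)

som = MiniSom(8, 8, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X_unlabeled, 5000)       # unsupervised phase

# Each labeled point tags its BMU on the 8x8 grid.
bmu_labels = {som.winner(x): int(c) for x, c in zip(X_labeled, y_labeled)}

def predict(x):
    """Label a sample by its BMU, falling back to the nearest labeled unit."""
    u = som.winner(x)
    if u in bmu_labels:
        return bmu_labels[u]
    nearest = min(bmu_labels, key=lambda v: (v[0]-u[0])**2 + (v[1]-u[1])**2)
    return bmu_labels[nearest]

print(predict(rng.normal(size=4)))
```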
- Learning A Disentangling Representation For PU Learning [18.94726971543125]
We propose to learn a neural network-based data representation using a loss function that can be used to project the unlabeled data into two clusters.
We conduct experiments on simulated PU data that demonstrate the improved performance of our proposed method compared to the current state-of-the-art approaches.
arXiv Detail & Related papers (2023-10-05T18:33:32Z)
- Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution.
arXiv Detail & Related papers (2022-03-08T08:57:43Z)
- Tensor Network Kalman Filtering for Large-Scale LS-SVMs [17.36231167296782]
Least squares support vector machines are used for nonlinear regression and classification.
A framework based on tensor networks and the Kalman filter is presented to alleviate the demanding memory and computational complexities.
Results show that our method can achieve high performance and is particularly useful when alternative methods are computationally infeasible.
arXiv Detail & Related papers (2021-10-26T08:54:03Z)
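For context on the memory bottleneck mentioned above: in the textbook LS-SVM regression formulation (Suykens et al.), the dual solution comes from one dense linear system over all $N$ training points,

$$ \begin{bmatrix} 0 & \mathbf{1}^\top \\ \mathbf{1} & \Omega + \gamma^{-1} I_N \end{bmatrix} \begin{bmatrix} b \\ \boldsymbol{\alpha} \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{y} \end{bmatrix}, \qquad \Omega_{ij} = K(x_i, x_j), $$

so merely storing the kernel matrix $\Omega$ costs $O(N^2)$ memory and a direct solve $O(N^3)$ time. This standard system is shown for context; how the tensor-network Kalman filter avoids forming $\Omega$ explicitly is specific to the paper.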
- Manifold learning-based polynomial chaos expansions for high-dimensional surrogate models [0.0]
We introduce a manifold learning-based method for uncertainty quantification (UQ) in systems describing complex spatiotemporal processes.
The proposed method is able to achieve highly accurate approximations which ultimately lead to the significant acceleration of UQ tasks.
arXiv Detail & Related papers (2021-07-21T00:24:15Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
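For context on the estimation difficulty mentioned above: a classical route to the marginal likelihood is Laplace's method, which expands around a MAP estimate $\theta^*$ of the $D$ parameters,

$$ \log p(\mathcal{D} \mid \mathcal{M}) \approx \log p(\mathcal{D} \mid \theta^*, \mathcal{M}) + \log p(\theta^*) + \frac{D}{2} \log 2\pi - \frac{1}{2} \log \det H, $$

where $H$ is the Hessian of the negative log joint at $\theta^*$. For deep networks, $\log \det H$ ranges over millions of parameters, which is why scalable estimators substitute structured approximations for $H$. The identity is textbook material shown for context, not taken from the paper's abstract.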
- Unlabeled Principal Component Analysis and Matrix Completion [25.663593359761336]
We introduce robust principal component analysis for a data matrix in which the entries of its columns have been corrupted by permutations.
We derive theory and algorithms of similar flavor for UPCA.
Experiments on synthetic data, face images, educational and medical records reveal the potential of our algorithms for applications such as data privatization and record linkage.
arXiv Detail & Related papers (2021-01-23T07:34:48Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
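For context on the regularizer named above: for a projection matrix $W$ with rows $w^i$, the $l_{2,p}$-norm is

$$ \|W\|_{2,p} = \Big( \sum_{i=1}^{d} \|w^i\|_2^p \Big)^{1/p}, \qquad 0 < p \le 1, $$

which drives entire rows of $W$ to zero, so the surviving rows index the selected features. A generic objective of this family, shown only as an illustration (the paper's exact formulation may differ), is

$$ \min_{W} \; \|X - X W W^\top\|_F^2 + \lambda \|W\|_{2,p}^p . $$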
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Linear Tensor Projection Revealing Nonlinearity [0.294944680995069]
Dimensionality reduction is an effective method for learning high-dimensional data.
We propose a method that searches for a subspace that maximizes the prediction accuracy while retaining as much of the original data information as possible.
arXiv Detail & Related papers (2020-07-08T06:10:39Z)