On the Preservation of Spatio-temporal Information in Machine Learning
Applications
- URL: http://arxiv.org/abs/2006.08321v1
- Date: Mon, 15 Jun 2020 12:22:36 GMT
- Title: On the Preservation of Spatio-temporal Information in Machine Learning
Applications
- Authors: Yigit Oktar, Mehmet Turkan
- Abstract summary: In machine learning applications, each data attribute is assumed to be orthogonal to the others.
Shift-invariant $k$-means is proposed in a novel framework with the help of sparse representations.
Experiments suggest that Gabor feature extraction as a simulation of shallow convolutional neural networks provides slightly better performance than convolutional dictionary learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In conventional machine learning applications, each data attribute is assumed
to be orthogonal to the others. Namely, every pair of dimensions is orthogonal,
and thus in-between relations of dimensions are indistinguishable. However, this
is certainly not the case in real-world signals, which
naturally originate from a spatio-temporal configuration. As a result, the
conventional vectorization process disrupts all of the spatio-temporal
information about the order/place of data whether it be $1$D, $2$D, $3$D, or
$4$D. In this paper, the problem of orthogonality is first investigated through
conventional $k$-means of images, where images are to be processed as vectors.
As a solution, shift-invariant $k$-means is proposed in a novel framework with
the help of sparse representations. A generalization of shift-invariant
$k$-means, convolutional dictionary learning, is then utilized as an
unsupervised feature extraction method for classification. Experiments suggest
that Gabor feature extraction, as a simulation of shallow convolutional neural
networks, provides slightly better performance than convolutional
dictionary learning. Many alternatives to convolutional logic are also
discussed for spatio-temporal information preservation, including a
spatio-temporal hypercomplex encoding scheme.
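To make the shift-invariant $k$-means idea in the abstract concrete, here is a minimal, hypothetical NumPy sketch for 1-D signals: each signal is compared to every centroid over all circular shifts, assigned to the nearest, and aligned before the centroid update. The function names and the deterministic farthest-point initialization are illustrative assumptions of this sketch, not the paper's actual formulation, which is built on sparse representations.

```python
import numpy as np

def best_shift_distance(x, c):
    """Euclidean distance between x and centroid c, minimized over circular
    shifts. Returns (distance, shift), where np.roll(c, shift) best matches x."""
    dists = [np.linalg.norm(x - np.roll(c, s)) for s in range(len(c))]
    s = int(np.argmin(dists))
    return dists[s], s

def shift_invariant_kmeans(X, k, n_iter=10):
    """Toy shift-invariant k-means on 1-D signals (rows of X)."""
    X = np.asarray(X, dtype=float)
    # deterministic farthest-point initialization (an assumption of this sketch)
    C = [X[0].copy()]
    for _ in range(1, k):
        d = [min(best_shift_distance(x, c)[0] for c in C) for x in X]
        C.append(X[int(np.argmax(d))].copy())
    C = np.stack(C)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        aligned = [[] for _ in range(k)]
        for i, x in enumerate(X):
            d_s = [best_shift_distance(x, c) for c in C]
            j = int(np.argmin([d for d, _ in d_s]))
            labels[i] = j
            # undo the shift so every member is aligned to centroid j
            aligned[j].append(np.roll(x, -d_s[j][1]))
        for j in range(k):
            if aligned[j]:
                C[j] = np.mean(aligned[j], axis=0)
    return C, labels

# circular shifts of one spike cluster together; flat signals form the other
base = np.array([5., 0, 0, 0, 0, 0, 0, 0])
X = np.stack([np.roll(base, s) for s in range(4)] + [np.ones(8)] * 4)
C, labels = shift_invariant_kmeans(X, k=2)
```

Under vanilla $k$-means the four shifted spikes would be mutually distant vectors; the shift-minimized distance collapses them into a single cluster, which is exactly the spatial information the abstract argues vectorization destroys.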
Related papers
- Sample-Efficient Linear Representation Learning from Non-IID Non-Isotropic Data [4.971690889257356]
We introduce an adaptation of the alternating minimization-descent scheme proposed by Collins et al. and by Nayer and Vaswani.
We show that vanilla alternating minimization-descent fails catastrophically even for i.i.d. but mildly non-isotropic data.
Our analysis unifies and generalizes prior work, and provides a flexible framework for a wider range of applications.
arXiv Detail & Related papers (2023-08-08T17:56:20Z) - Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined heuristics.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z) - Combining Varied Learners for Binary Classification using Stacked
Generalization [3.1871776847712523]
This paper performs binary classification using Stacked Generalization on high dimensional Polycystic Ovary Syndrome dataset.
This paper reports various metrics and points out a subtle transgression found with the Receiver Operating Characteristic curve that was proved to be incorrect.
arXiv Detail & Related papers (2022-02-17T21:47:52Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve the separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - Mitigating Generation Shifts for Generalized Zero-Shot Learning [52.98182124310114]
Generalized Zero-Shot Learning (GZSL) is the task of leveraging semantic information (e.g., attributes) to recognize the seen and unseen samples, where unseen classes are not observable during training.
We propose a novel Generation Shifts Mitigating Flow framework for learning unseen data synthesis efficiently and effectively.
Experimental results demonstrate that GSMFlow achieves state-of-the-art recognition performance in both conventional and generalized zero-shot settings.
arXiv Detail & Related papers (2021-07-07T11:43:59Z) - High-dimensional separability for one- and few-shot learning [58.8599521537]
This work is driven by a practical question: the correction of Artificial Intelligence (AI) errors.
Special external devices, correctors, are developed. They should provide a quick, non-iterative system fix without modification of the legacy AI system.
New multi-correctors of AI systems are presented and illustrated with examples of predicting errors and learning new classes of objects by a deep convolutional neural network.
arXiv Detail & Related papers (2021-06-28T14:58:14Z) - DiGS: Divergence guided shape implicit neural representation for
unoriented point clouds [36.60407995156801]
Shape implicit neural representations (INRs) have recently shown to be effective in shape analysis and reconstruction tasks.
We propose a divergence guided shape representation learning approach that does not require normal vectors as input.
arXiv Detail & Related papers (2021-06-21T02:10:03Z) - A Local Similarity-Preserving Framework for Nonlinear Dimensionality
Reduction with Neural Networks [56.068488417457935]
We propose a novel local nonlinear approach named Vec2vec for general purpose dimensionality reduction.
To train the neural network, we build the neighborhood similarity graph of the data matrix and define the context of data points.
Experiments on data classification and clustering over eight real datasets show that Vec2vec outperforms several classical dimensionality reduction methods under statistical hypothesis testing.
arXiv Detail & Related papers (2021-03-10T23:10:47Z) - Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z) - System Identification Through Lipschitz Regularized Deep Neural Networks [0.4297070083645048]
We use neural networks to learn governing equations from data.
We reconstruct the right-hand side of a system of ODEs $\dot{x}(t) = f(t, x(t))$ directly from observed uniformly time-sampled data.
arXiv Detail & Related papers (2020-09-07T17:52:51Z) - Masking schemes for universal marginalisers [1.0412114420493723]
We consider the effect of structure-agnostic and structure-dependent masking schemes when training a universal marginaliser.
We compare networks trained with different masking schemes in terms of their predictive performance and generalisation properties.
arXiv Detail & Related papers (2020-01-16T15:35:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.