A Primer on the Signature Method in Machine Learning
- URL: http://arxiv.org/abs/1603.03788v2
- Date: Fri, 17 Jan 2025 21:49:02 GMT
- Title: A Primer on the Signature Method in Machine Learning
- Authors: Ilya Chevyrev, Andrey Kormilitzin
- Abstract summary: We provide an introduction to the signature method, focusing on its theoretical properties and machine learning applications.
In the first part, we present the definition and fundamental properties of the signature of a path.
As a sequence of numbers, the signature serves as a compact description (dimension reduction) of a path.
In the second part, we present practical applications of the signature to the area of machine learning.
- Score: 2.3020018305241337
- Abstract: We provide an introduction to the signature method, focusing on its theoretical properties and machine learning applications. Our presentation is divided into two parts. In the first part, we present the definition and fundamental properties of the signature of a path. The signature is a sequence of numbers associated with a path that captures many of its important analytic and geometric properties. As a sequence of numbers, the signature serves as a compact description (dimension reduction) of a path. In presenting its theoretical properties, we assume only familiarity with classical real analysis and integration, and supplement theory with straightforward examples. We also mention several advanced topics, including the role of the signature in rough path theory. In the second part, we present practical applications of the signature to the area of machine learning. The signature method is a non-parametric way of transforming data into a set of features that can be used in machine learning tasks. In this method, data are converted into multi-dimensional paths, by means of embedding algorithms, of which the signature is then computed. We describe this pipeline in detail, making a link with the properties of the signature presented in the first part. We furthermore review some of the developments of the signature method in machine learning and, as an illustrative example, present a detailed application of the method to handwritten digit classification.
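As a quick illustration of the pipeline described in the abstract (not taken from the paper itself), the sketch below computes the truncated signature of a piecewise-linear path up to level 2 in plain NumPy. It relies on two standard facts: a single linear segment with increment delta has signature (1, delta, delta ⊗ delta / 2), and signatures of concatenated segments combine via Chen's identity. In practice one would normally use a dedicated package such as iisignature or esig; the function and variable names below are purely illustrative.

```python
import numpy as np

def signature_level2(path):
    """Truncated signature (levels 1 and 2) of a piecewise-linear path.

    `path` is an array of shape (n_points, d). A linear segment with
    increment delta has signature (1, delta, outer(delta, delta) / 2);
    segment signatures are combined with Chen's identity:
        level1 <- a1 + b1
        level2 <- a2 + b2 + outer(a1, b1)
    """
    path = np.asarray(path, dtype=float)
    d = path.shape[1]
    level1 = np.zeros(d)         # order-1 iterated integrals (the total increment)
    level2 = np.zeros((d, d))    # order-2 iterated integrals

    for delta in np.diff(path, axis=0):
        seg1 = delta
        seg2 = np.outer(delta, delta) / 2.0
        # Chen's identity: combine the signature so far with the segment signature.
        level2 = level2 + seg2 + np.outer(level1, seg1)
        level1 = level1 + seg1

    # Flatten into a fixed-size feature vector of length d + d**2.
    return np.concatenate([level1, level2.ravel()])

# Example: signature features of a short 2-D path.
features = signature_level2([[0.0, 0.0], [1.0, 0.5], [2.0, 0.0]])
print(features)  # length 2 + 4 = 6
```

The resulting fixed-size vector is the kind of non-parametric feature set the abstract describes: it can be fed to any standard classifier, as in the handwritten digit classification example discussed in the paper's second part.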
Related papers
- On expected signatures and signature cumulants in semimartingale models [0.0]
The concept of signatures and expected signatures is vital in data science, especially for sequential data analysis.
A log-transform of (expected) signatures leads to log-signatures (signature cumulants).
arXiv Detail & Related papers (2024-08-09T14:16:21Z)
- Fractional signature: a generalisation of the signature inspired by fractional calculus [0.0]
We propose a novel generalisation of the signature of a path, motivated by fractional calculus.
We also propose another generalisation of the signature, inspired by the previous one, but more convenient to use in machine learning.
arXiv Detail & Related papers (2024-07-24T17:23:14Z)
- Self-Supervised Representation Learning with Spatial-Temporal Consistency for Sign Language Recognition [96.62264528407863]
We propose a self-supervised contrastive learning framework to excavate rich context via spatial-temporal consistency.
Inspired by the complementary property of motion and joint modalities, we first introduce first-order motion information into sign language modeling.
Our method is evaluated with extensive experiments on four public benchmarks, and achieves new state-of-the-art performance with a notable margin.
arXiv Detail & Related papers (2024-06-15T04:50:19Z)
- Multi-Label Knowledge Distillation [86.03990467785312]
We propose a novel multi-label knowledge distillation method.
On one hand, it exploits the informative semantic knowledge from the logits by dividing the multi-label learning problem into a set of binary classification problems.
On the other hand, it enhances the distinctiveness of the learned feature representations by leveraging the structural information of label-wise embeddings.
arXiv Detail & Related papers (2023-08-12T03:19:08Z)
- Weakly Supervised 3D Instance Segmentation without Instance-level Annotations [57.615325809883636]
3D semantic scene understanding tasks have achieved great success with the emergence of deep learning, but often require a huge amount of manually annotated training data.
We propose the first weakly-supervised 3D instance segmentation method that only requires categorical semantic labels as supervision.
By generating pseudo instance labels from categorical semantic labels, our designed approach can also assist existing methods for learning 3D instance segmentation at reduced annotation cost.
arXiv Detail & Related papers (2023-08-03T12:30:52Z)
- Same or Different? Diff-Vectors for Authorship Analysis [78.83284164605473]
In "classic" authorship analysis, a feature vector represents a document, the value of a feature represents (an increasing function of) the relative frequency of the feature in the document, and the class label represents the author of the document.
Our experiments tackle same-author verification, authorship verification, and closed-set authorship attribution; while DVs are naturally geared towards solving the first task, we also provide two novel methods for solving the second and third.
arXiv Detail & Related papers (2023-01-24T08:48:12Z)
- Neural Representation Learning for Scribal Hands of Linear B [23.603494290484086]
We present an investigation into the use of neural feature extraction in performing scribal hand analysis of the Linear B writing system.
We propose learning features using a fully unsupervised neural network that does not require any human annotation.
arXiv Detail & Related papers (2021-07-14T20:33:59Z)
- Unsupervised Deep Learning for Handwritten Page Segmentation [0.0]
We present an unsupervised deep learning method for page segmentation.
A siamese neural network is trained to differentiate between patches using their measurable properties.
Our experiments show that the proposed unsupervised method is as effective as typical supervised methods.
arXiv Detail & Related papers (2021-01-19T07:13:38Z)
- SLADE: A Self-Training Framework For Distance Metric Learning [75.54078592084217]
We present a self-training framework, SLADE, to improve retrieval performance by leveraging additional unlabeled data.
We first train a teacher model on the labeled data and use it to generate pseudo labels for the unlabeled data.
We then train a student model on both labels and pseudo labels to generate final feature embeddings.
arXiv Detail & Related papers (2020-11-20T08:26:10Z)
- FCN+RL: A Fully Convolutional Network followed by Refinement Layers to Offline Handwritten Signature Segmentation [3.3144312096837325]
We propose an approach to locate and extract the pixels of handwritten signatures on identification documents.
The technique is based on a fully convolutional encoder-decoder network combined with a block of refinement layers for the alpha channel of the predicted image.
arXiv Detail & Related papers (2020-05-28T18:47:10Z)
- Transferring Cross-domain Knowledge for Video Sign Language Recognition [103.9216648495958]
Word-level sign language recognition (WSLR) is a fundamental task in sign language interpretation.
We propose a novel method that learns domain-invariant visual concepts and fertilizes WSLR models by transferring knowledge of subtitled news sign to them.
arXiv Detail & Related papers (2020-03-08T03:05:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.