A topological classifier to characterize brain states: When shape
matters more than variance
- URL: http://arxiv.org/abs/2303.04231v1
- Date: Tue, 7 Mar 2023 20:45:15 GMT
- Title: A topological classifier to characterize brain states: When shape
matters more than variance
- Authors: Aina Ferrà, Gloria Cecchini, Fritz-Pere Nobbe Fisas, Carles
Casacuberta, Ignasi Cos
- Abstract summary: Topological data analysis (TDA) is devoted to studying the shape of data clouds by means of persistence descriptors.
We introduce a novel TDA-based classifier that works on the principle of assessing quantifiable changes in topological metrics caused by the addition of new input to a subset of data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the remarkable accuracies attained by machine learning classifiers
when separating complex datasets in a supervised fashion, most of their
operation falls short of providing an informed intuition about the structure
of the data and, more importantly, about the phenomena characterized by the
given datasets. By contrast, topological data analysis (TDA) is devoted to
studying the shape of data clouds by means of persistence descriptors, and it
provides a quantitative characterization of specific topological features of
the dataset under scrutiny.
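For intuition, here is a minimal sketch (not taken from the paper) of computing a persistence descriptor with the open-source gudhi library; the noisy-circle point cloud and the Rips filtration parameters are illustrative assumptions:

```python
import numpy as np
import gudhi  # pip install gudhi

# A noisy circle: its shape is one connected component (H0) and one loop (H1).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 100)
points = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0.0, 0.05, (100, 2))

# Vietoris-Rips filtration: track topological features across length scales.
st = gudhi.RipsComplex(points=points, max_edge_length=2.0) \
          .create_simplex_tree(max_dimension=2)

# Each feature is a (dimension, (birth, death)) pair; long-lived features
# reflect the true shape of the cloud, short-lived ones are noise.
for dim, (birth, death) in st.persistence():
    if death - birth > 0.5:
        print(f"H{dim} feature persists over [{birth:.2f}, {death:.2f}]")
```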
In this article we introduce a novel TDA-based classifier that works on the
principle of assessing quantifiable changes in topological metrics caused by
the addition of new input to a subset of data. We used this classifier with a
high-dimensional electroencephalographic (EEG) dataset recorded from eleven
participants during a decision-making experiment in which three motivational
states were induced through a manipulation of social pressure. After processing
a band-pass filtered version of the EEG signals, we calculated silhouettes from
the persistence diagrams associated with each motivated state and classified
unlabeled signals according to their impact on each reference silhouette. Our
results show that, in addition to providing accuracies within the range of
those of a nearest-neighbour classifier, the TDA classifier provides formal
intuition about the structure of the dataset as well as an estimate of its
intrinsic dimension. To this end, we incorporated dimensionality reduction
methods into our procedure and found that the accuracy of our TDA classifier is
generally not sensitive to explained variance but rather to shape, contrary to
what happens with most machine learning classifiers.
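As a rough sketch of the stated classification principle, one can compare a reference silhouette against the silhouette recomputed after adding the unlabeled sample, then assign the label whose silhouette changes least. This is a hypothetical reconstruction using gudhi; the filtration scale, silhouette resolution, homology dimension, and L2 perturbation score are assumptions, not the authors' exact settings:

```python
import numpy as np
import gudhi  # pip install gudhi
from gudhi.representations import Silhouette

def silhouette_of(points, dim=1, max_edge=2.0, resolution=200):
    """Persistence silhouette (homology dimension `dim`) of a point cloud."""
    st = gudhi.RipsComplex(points=points, max_edge_length=max_edge) \
              .create_simplex_tree(max_dimension=dim + 1)
    st.persistence()  # compute the persistence diagram
    diag = st.persistence_intervals_in_dimension(dim)
    if diag.size:
        diag = diag[np.isfinite(diag[:, 1])]  # drop features that never die
    if diag.size == 0:
        return np.zeros(resolution)
    # A fixed sample range keeps silhouettes of different clouds comparable.
    sil = Silhouette(resolution=resolution, weight=lambda interval: 1.0,
                     sample_range=[0.0, max_edge])
    return sil.fit_transform([diag])[0]

def classify(sample, reference_clouds):
    """Assign a single sample (shape (d,)) to the class whose reference
    silhouette changes least when the sample joins that class's cloud."""
    scores = {}
    for label, cloud in reference_clouds.items():
        base = silhouette_of(cloud)
        augmented = silhouette_of(np.vstack([cloud, sample]))
        scores[label] = np.linalg.norm(augmented - base)  # L2 change in shape
    return min(scores, key=scores.get)
```

In the paper's setting, the reference clouds would be built from the band-pass filtered EEG signals of each motivational state, optionally after dimensionality reduction.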
Related papers
- Analyzing Generative Models by Manifold Entropic Metrics [8.477943884416023]
We introduce a novel set of tractable information-theoretic evaluation metrics.
We compare various normalizing flow architectures and $\beta$-VAEs on the EMNIST dataset.
The most interesting finding of our experiments is a ranking of model architectures and training procedures in terms of their inductive bias to converge to aligned and disentangled representations during training.
arXiv Detail & Related papers (2024-10-25T09:35:00Z)
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Downstream-Pretext Domain Knowledge Traceback for Active Learning [138.02530777915362]
We propose a downstream-pretext domain knowledge traceback (DOKT) method that traces the data interactions of downstream knowledge and pre-training guidance.
DOKT consists of a traceback diversity indicator and a domain-based uncertainty estimator.
Experiments conducted on ten datasets show that our model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2024-07-20T01:34:13Z)
- Fascinating Supervisory Signals and Where to Find Them: Deep Anomaly Detection with Scale Learning [11.245813423781415]
We devise novel data-driven supervision by introducing a characteristic of the data, scale, to serve as labels.
Scales serve as labels attached to transformed representations, thus offering ample labeled data for neural network training.
This paper further proposes a scale learning-based anomaly detection method.
arXiv Detail & Related papers (2023-05-25T14:48:00Z)
- Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z)
- Metric Distribution to Vector: Constructing Data Representation via Broad-Scale Discrepancies [15.40538348604094]
We present a novel embedding strategy named $\mathbf{MetricDistribution2vec}$ to extract distribution characteristics into a vectorial representation of each data instance.
We demonstrate the application and effectiveness of our representation method in supervised prediction tasks on extensive real-world structural graph datasets.
arXiv Detail & Related papers (2022-10-02T03:18:30Z)
- On topological data analysis for structural dynamics: an introduction to persistent homology [0.0]
Topological data analysis quantifies the shape of data over a range of length scales.
Persistent homology is one of its principal tools for doing so.
arXiv Detail & Related papers (2022-09-12T10:39:38Z)
- SLA$^2$P: Self-supervised Anomaly Detection with Adversarial Perturbation [77.71161225100927]
Anomaly detection is a fundamental yet challenging problem in machine learning.
We propose a novel and powerful framework, dubbed SLA$^2$P, for unsupervised anomaly detection.
arXiv Detail & Related papers (2021-11-25T03:53:43Z)
- Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z)
- TELESTO: A Graph Neural Network Model for Anomaly Classification in Cloud Services [77.454688257702]
Machine learning (ML) and artificial intelligence (AI) are applied to IT system operation and maintenance.
One direction aims at the recognition of re-occurring anomaly types to enable remediation automation.
We propose a method that is invariant to dimensionality changes of given data.
arXiv Detail & Related papers (2021-02-25T14:24:49Z)
- Learning from Incomplete Features by Simultaneous Training of Neural Networks and Sparse Coding [24.3769047873156]
This paper addresses the problem of training a classifier on a dataset with incomplete features.
We assume that different subsets of features (random or structured) are available at each data instance.
A new supervised learning method is developed to train a general classifier, using only a subset of features per sample.
arXiv Detail & Related papers (2020-11-28T02:20:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (or of any information it contains) and is not responsible for any consequences of its use.