Classification of Alzheimer's Disease with Deep Learning on Eye-tracking Data
- URL: http://arxiv.org/abs/2309.12574v1
- Date: Fri, 22 Sep 2023 02:02:59 GMT
- Title: Classification of Alzheimer's Disease with Deep Learning on Eye-tracking Data
- Authors: Harshinee Sriram, Cristina Conati, Thalia Field
- Abstract summary: We investigate whether we can improve on existing results by using a Deep-Learning classifier trained end-to-end on raw ET data.
A main challenge in applying VTNet to our target AD classification task is that the available ET data sequences are much longer than those used in the previous confusion detection task.
We show that VTNet outperforms the state-of-the-art approaches in AD classification, providing encouraging evidence of the generality of this model.
- Score: 0.7366405857677227
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing research has shown the potential of classifying Alzheimer's Disease
(AD) from eye-tracking (ET) data with classifiers that rely on task-specific
engineered features. In this paper, we investigate whether we can improve on
existing results by using a Deep-Learning classifier trained end-to-end on raw
ET data. This classifier (VTNet) uses a GRU and a CNN in parallel to leverage
both visual (V) and temporal (T) representations of ET data and was previously
used to detect user confusion while processing visual displays. A main
challenge in applying VTNet to our target AD classification task is that the
available ET data sequences are much longer than those used in the previous
confusion detection task, pushing the limits of what is manageable by
LSTM-based models. We discuss how we address this challenge and show that VTNet
outperforms the state-of-the-art approaches in AD classification, providing
encouraging evidence of the generality of this model to make predictions from
ET data.
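For readers who want a concrete picture of the architecture described above, the following is a minimal sketch of a parallel visual-temporal classifier in the spirit of VTNet: a GRU consumes the raw gaze time series while a small CNN consumes a rendered scanpath image, and the two representations are concatenated for the final AD/control prediction. The layer sizes, the 2D scanpath rendering, and the truncation used to cope with long sequences are illustrative assumptions, not the configuration reported in the paper.

```python
# Hypothetical sketch of a parallel visual (CNN) + temporal (GRU) classifier
# in the spirit of VTNet; layer sizes and preprocessing are assumptions.
import torch
import torch.nn as nn

class VisualTemporalClassifier(nn.Module):
    def __init__(self, n_channels=4, hidden_size=128, max_len=2000, n_classes=2):
        super().__init__()
        self.max_len = max_len  # long ET sequences are truncated to this length
        # Temporal branch: GRU over raw gaze samples (e.g. x, y, pupil size, validity)
        self.gru = nn.GRU(input_size=n_channels, hidden_size=hidden_size, batch_first=True)
        # Visual branch: small CNN over a rendered scanpath image (1 x 64 x 64)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(hidden_size + 32 * 16 * 16, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, sequence, scanpath_image):
        # sequence: (batch, time, n_channels); keep only the last max_len samples
        sequence = sequence[:, -self.max_len:, :]
        _, h_n = self.gru(sequence)          # h_n: (1, batch, hidden_size)
        temporal = h_n[-1]                   # (batch, hidden_size)
        visual = self.cnn(scanpath_image)    # (batch, 32 * 16 * 16)
        return self.head(torch.cat([temporal, visual], dim=1))

model = VisualTemporalClassifier()
logits = model(torch.randn(8, 5000, 4), torch.randn(8, 1, 64, 64))
print(logits.shape)  # torch.Size([8, 2])
```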
Related papers
- Deep evolving semi-supervised anomaly detection [14.027613461156864]
The aim of this paper is to formalise the task of continual semi-supervised anomaly detection (CSAD).
The paper introduces a baseline model of a variational autoencoder (VAE) to work with semi-supervised data along with a continual learning method of deep generative replay with outlier rejection.
arXiv Detail & Related papers (2024-12-01T15:48:37Z)
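As a point of reference for the VAE baseline mentioned in the entry above, the following sketch shows the common pattern of scoring anomalies by the reconstruction error of a variational autoencoder trained on (mostly) normal data. The architecture, input dimensionality, and threshold are assumptions, and the paper's continual-learning machinery (deep generative replay with outlier rejection) is not reproduced here.

```python
# Hypothetical VAE anomaly-scoring baseline; sizes and threshold are assumptions,
# and the paper's continual learning via generative replay is not shown.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.dec(z), mu, logvar

def elbo_loss(x, recon, mu, logvar):
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

def anomaly_score(model, x):
    # Higher reconstruction error -> more anomalous under the learned "normal" model.
    with torch.no_grad():
        recon, _, _ = model(x)
        return ((recon - x) ** 2).mean(dim=1)

# Train on (mostly) normal data, then flag samples whose score exceeds a threshold.
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
normal_batch = torch.rand(64, 784)
for _ in range(5):                      # toy training loop
    recon, mu, logvar = model(normal_batch)
    loss = elbo_loss(normal_batch, recon, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()
scores = anomaly_score(model, torch.rand(8, 784))
is_anomaly = scores > scores.mean() + 2 * scores.std()  # illustrative threshold
```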
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- Defect Classification in Additive Manufacturing Using CNN-Based Vision Processing [76.72662577101988]
This paper examines two scenarios: first, using convolutional neural networks (CNNs) to accurately classify defects in an image dataset from AM and second, applying active learning techniques to the developed classification model.
This allows the construction of a human-in-the-loop mechanism that reduces the amount of labeled training data that must be generated.
arXiv Detail & Related papers (2023-07-14T14:36:58Z)
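To make the active-learning idea in the previous entry concrete, the following is a minimal uncertainty-sampling loop: the current classifier scores the unlabeled pool, the least confident samples are sent to a human annotator, and the model is retrained on the enlarged labeled set. The model (a small MLP standing in for the CNN), the data, and the query budget are placeholders rather than the setup used in the paper.

```python
# Hypothetical uncertainty-sampling active learning loop for a defect classifier;
# the model, data, and query budget are illustrative placeholders.
import torch
import torch.nn as nn

def train(model, x, y, epochs=3):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

def least_confident(model, pool, k):
    # Select the k pool samples whose top predicted probability is lowest.
    with torch.no_grad():
        probs = torch.softmax(model(pool), dim=1)
    confidence, _ = probs.max(dim=1)
    return torch.topk(-confidence, k).indices

# Toy data: flattened 16x16 grayscale patches, defect vs. no-defect.
labeled_x, labeled_y = torch.randn(20, 256), torch.randint(0, 2, (20,))
pool_x = torch.randn(500, 256)
model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))  # MLP stand-in for a CNN

for round_idx in range(5):                      # 5 annotation rounds
    train(model, labeled_x, labeled_y)
    query = least_confident(model, pool_x, k=10)
    # A human annotator would label these; simulated with random labels here.
    new_y = torch.randint(0, 2, (len(query),))
    labeled_x = torch.cat([labeled_x, pool_x[query]])
    labeled_y = torch.cat([labeled_y, new_y])
    keep = torch.ones(len(pool_x), dtype=torch.bool)
    keep[query] = False
    pool_x = pool_x[keep]                       # remove queried samples from the pool
```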
- Transfer Learning for Fine-grained Classification Using Semi-supervised Learning and Visual Transformers [1.694405932826705]
Visual transformers (ViT) have emerged as a powerful tool for image classification.
In this work, we explore Semi-ViT, a ViT model fine-tuned using semi-supervised learning techniques.
Our results demonstrate that Semi-ViT outperforms traditional convolutional neural networks (CNN) and ViTs, even when fine-tuned with limited annotated data.
arXiv Detail & Related papers (2023-05-17T07:51:35Z)
- Unified Visual Relationship Detection with Vision and Language Models [89.77838890788638]
This work focuses on training a single visual relationship detector predicting over the union of label spaces from multiple datasets.
We propose UniVRD, a novel bottom-up method for Unified Visual Relationship Detection by leveraging vision and language models.
Empirical results on both human-object interaction detection and scene-graph generation demonstrate the competitive performance of our model.
arXiv Detail & Related papers (2023-03-16T00:06:28Z)
- Linking data separation, visual separation, and classifier performance using pseudo-labeling by contrastive learning [125.99533416395765]
We argue that the performance of the final classifier depends on the data separation present in the latent space and visual separation present in the projection.
We demonstrate our results by the classification of five real-world challenging image datasets of human intestinal parasites with only 1% supervised samples.
arXiv Detail & Related papers (2023-02-06T10:01:38Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Mutual Information Learned Classifiers: an Information-theoretic Viewpoint of Training Deep Learning Classification Systems [9.660129425150926]
Cross entropy loss can easily lead us to find models which demonstrate severe overfitting behavior.
In this paper, we prove that the existing cross entropy loss minimization for training DNN classifiers essentially learns the conditional entropy of the underlying data distribution.
We propose a mutual information learning framework where we train DNN classifiers via learning the mutual information between the label and input.
arXiv Detail & Related papers (2022-10-03T15:09:19Z)
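A standard identity helps frame the mutual-information entry above: the expected cross-entropy upper-bounds the conditional entropy H(Y|X), so minimizing it only addresses the conditional term of I(Y;X) = H(Y) - H(Y|X); an objective that targets the mutual information also accounts for the label entropy H(Y). This is the textbook decomposition, not necessarily the paper's exact objective.

```latex
% Standard identities relating cross-entropy, conditional entropy, and mutual information:
% the expected cross-entropy equals H(Y|X) plus a KL term, so it upper-bounds H(Y|X).
I(Y;X) = H(Y) - H(Y\mid X)
\qquad
\mathbb{E}_{x}\left[ -\sum_{y} p(y\mid x)\,\log q_\theta(y\mid x) \right]
  = H(Y\mid X) + \mathbb{E}_{x}\left[ D_{\mathrm{KL}}\big(p(\cdot\mid x)\,\Vert\, q_\theta(\cdot\mid x)\big) \right]
  \;\ge\; H(Y\mid X)
```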
- Learning Deep Representations via Contrastive Learning for Instance Retrieval [11.736450745549792]
This paper makes the first attempt to tackle the problem using instance-discrimination-based contrastive learning (CL).
In this work, we approach this problem by exploring the capability of deriving discriminative representations from pre-trained and fine-tuned CL models.
arXiv Detail & Related papers (2022-09-28T04:36:34Z)
- Causal Scene BERT: Improving object detection by searching for challenging groups of data [125.40669814080047]
Computer vision applications rely on learning-based perception modules parameterized with neural networks for tasks like object detection.
These modules frequently have low expected error overall but high error on atypical groups of data due to biases inherent in the training process.
Our main contribution is a pseudo-automatic method to discover such groups in foresight by performing causal interventions on simulated scenes.
arXiv Detail & Related papers (2022-02-08T05:14:16Z)
- Multiple Organ Failure Prediction with Classifier-Guided Generative Adversarial Imputation Networks [4.040013871160853]
Multiple organ failure (MOF) is a severe syndrome with a high mortality rate among Intensive Care Unit (ICU) patients.
Applying machine learning models to electronic health records is a challenge due to the pervasiveness of missing values.
arXiv Detail & Related papers (2021-06-22T15:49:01Z)
- Domain Adaptive Transfer Learning on Visual Attention Aware Data Augmentation for Fine-grained Visual Categorization [3.5788754401889014]
We perform domain adaptive knowledge transfer via fine-tuning on our base network model.
We show competitive improvement on accuracies by using attention-aware data augmentation techniques.
Our method achieves state-of-the-art results in multiple fine-grained classification datasets.
arXiv Detail & Related papers (2020-10-06T22:47:57Z)
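As an illustration of the fine-tuning-based transfer learning described in the last entry, the sketch below loads an ImageNet-pretrained backbone, freezes its early layers, and replaces the classification head for a fine-grained target dataset. The backbone choice, freezing policy, and the standard augmentations standing in for the paper's attention-aware augmentation are assumptions rather than the authors' pipeline.

```python
# Hypothetical transfer-learning-by-fine-tuning sketch; the backbone, freezing
# policy, and augmentations are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision import models, transforms

num_classes = 200                     # e.g. a fine-grained bird dataset
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze everything except the last residual stage; the new head is trainable by default.
for name, param in backbone.named_parameters():
    param.requires_grad = name.startswith("layer4")
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

# Standard augmentation pipeline (applied in the Dataset/DataLoader in practice);
# an attention-aware crop would replace RandomResizedCrop in the paper's setting.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad], lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
```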