Self-Supervised Learning of Gait-Based Biomarkers
- URL: http://arxiv.org/abs/2307.16321v1
- Date: Sun, 30 Jul 2023 21:04:17 GMT
- Title: Self-Supervised Learning of Gait-Based Biomarkers
- Authors: R. James Cotton, J.D. Peiffer, Kunal Shah, Allison DeLillo, Anthony
Cimorelli, Shawana Anarwala, Kayan Abdou, and Tasos Karakostas
- Abstract summary: Markerless motion capture (MMC) is revolutionizing gait analysis in clinical settings by making it more accessible.
In multiple fields ranging from image processing to natural language processing, self-supervised learning (SSL) from large amounts of unannotated data produces very effective representations for downstream tasks.
We find that contrastive learning on unannotated gait data learns a representation that captures clinically meaningful information.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Markerless motion capture (MMC) is revolutionizing gait analysis in clinical
settings by making it more accessible, raising the question of how to extract
the most clinically meaningful information from gait data. In multiple fields
ranging from image processing to natural language processing, self-supervised
learning (SSL) from large amounts of unannotated data produces very effective
representations for downstream tasks. However, there has only been limited use
of SSL to learn effective representations of gait and movement, and it has not
been applied to gait analysis with MMC. One SSL objective that has not been
applied to gait is contrastive learning, which finds representations that place
similar samples closer together in the learned space. If the learned similarity
metric captures clinically meaningful differences, this could produce a useful
representation for many downstream clinical tasks. Contrastive learning can
also be combined with causal masking to predict future timesteps, which is an
appealing SSL objective given the dynamical nature of gait. We applied these
techniques to gait analyses performed with MMC in a rehabilitation hospital
from a diverse clinical population. We find that contrastive learning on
unannotated gait data learns a representation that captures clinically
meaningful information. We probe this learned representation using the
framework of biomarkers and show it holds promise as both a diagnostic and
response biomarker, by showing it can accurately classify diagnosis from gait
and is responsive to inpatient therapy, respectively. We ultimately hope these
learned representations will enable predictive and prognostic gait-based
biomarkers that can facilitate precision rehabilitation through greater use of
MMC to quantify movement in rehabilitation.
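
To make the objective concrete, below is a minimal sketch of an InfoNCE-style loss for predicting future gait timesteps, in the spirit of the contrastive learning with causal masking described in the abstract. The dimensions, temperature, and use of in-batch negatives are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an InfoNCE-style objective for predicting future gait
# timesteps. All shapes and hyperparameters here are illustrative
# assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def info_nce_future_prediction(context, future, temperature=0.1):
    """context: (B, D) causally-masked summaries of timesteps <= t.
    future: (B, D) encoder embeddings of the true timestep t+k.
    Each sequence's own future is the positive; the other sequences
    in the batch serve as negatives."""
    context = F.normalize(context, dim=-1)
    future = F.normalize(future, dim=-1)
    logits = context @ future.T / temperature            # (B, B) similarities
    targets = torch.arange(context.size(0), device=context.device)
    return F.cross_entropy(logits, targets)              # positives on diagonal

# Toy usage with random tensors standing in for a gait encoder's output.
B, D = 32, 128
ctx = torch.randn(B, D)  # e.g., causal-transformer output at step t
fut = torch.randn(B, D)  # e.g., embedding of the observed pose at step t+k
loss = info_nce_future_prediction(ctx, fut)
```

In the paper's biomarker framing, frozen embeddings from such an encoder would then be probed with a simple classifier to test whether they carry diagnostic or treatment-response signal.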
Related papers
- OPTiML: Dense Semantic Invariance Using Optimal Transport for Self-Supervised Medical Image Representation [6.4136876268620115]
Self-supervised learning (SSL) has emerged as a promising technique for medical image analysis due to its ability to learn without annotations.
We introduce OPTiML, a novel SSL framework that employs optimal transport (OT) to capture dense semantic invariance and fine-grained details.
Our empirical results reveal OPTiML's superiority over state-of-the-art methods across all evaluated tasks.
arXiv Detail & Related papers (2024-04-18T02:59:48Z)
- MedFLIP: Medical Vision-and-Language Self-supervised Fast Pre-Training with Masked Autoencoder [26.830574964308962]
We introduce MedFLIP, a Fast Language-Image Pre-training method for Medical analysis.
We explore masked autoencoders (MAEs) for zero-shot learning across domains, which enhances the model's ability to learn from limited data.
Lastly, we validate that using language improves zero-shot performance for medical image analysis.
arXiv Detail & Related papers (2024-03-07T16:11:43Z)
- Overcoming Dimensional Collapse in Self-supervised Contrastive Learning for Medical Image Segmentation [2.6764957223405657]
We investigate the application of contrastive learning to the domain of medical image analysis.
Our findings reveal that MoCo v2, a state-of-the-art contrastive learning method, encounters dimensional collapse when applied to medical images.
To address this, we propose two key contributions: local feature learning and feature decorrelation (a minimal decorrelation sketch appears after this list).
arXiv Detail & Related papers (2024-02-22T15:02:13Z)
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Morphology-Enhanced CAM-Guided SAM for weakly supervised Breast Lesion Segmentation [7.747608350830482]
We present a novel framework for weakly supervised lesion segmentation in early breast ultrasound images.
Our method uses morphological enhancement and class activation map (CAM)-guided localization.
This approach does not require pixel-level annotation, thereby reducing the cost of data annotation.
arXiv Detail & Related papers (2023-11-18T22:06:04Z)
- Self-Verification Improves Few-Shot Clinical Information Extraction [73.6905567014859]
Large language models (LLMs) have shown the potential to accelerate clinical curation via few-shot in-context learning.
However, they still struggle with accuracy and interpretability, especially in mission-critical domains such as health.
Here, we explore a general mitigation framework using self-verification, which leverages the LLM to provide provenance for its own extraction and to check its own outputs (a schematic sketch of this loop appears after this list).
arXiv Detail & Related papers (2023-05-30T22:05:11Z)
- Hierarchical discriminative learning improves visual representations of biomedical microscopy [35.521563469534264]
HiDisc is a data-driven method that implicitly learns features of the underlying cancer diagnosis.
HiDisc pretraining outperforms current state-of-the-art self-supervised pretraining methods for cancer diagnosis and genetic mutation prediction.
arXiv Detail & Related papers (2023-03-02T22:04:42Z)
- Active Learning Enhances Classification of Histopathology Whole Slide Images with Attention-based Multiple Instance Learning [48.02011627390706]
We train an attention-based MIL model and calculate a confidence metric for every image in the dataset to select the most uncertain WSIs for expert annotation.
With a novel attention-guiding loss, this boosts the accuracy of the trained models even when only a few regions are annotated for each class.
In the future, this approach may serve as an important contribution to training MIL models in the clinically relevant context of cancer classification in histopathology.
arXiv Detail & Related papers (2023-03-02T15:18:58Z)
- LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task- and class-incremental learning of diseases addresses the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z)
- Label Cleaning Multiple Instance Learning: Refining Coarse Annotations on Single Whole-Slide Images [83.7047542725469]
Annotating cancerous regions in whole-slide images (WSIs) of pathology samples plays a critical role in clinical diagnosis, biomedical research, and machine learning algorithms development.
We present a method, named Label Cleaning Multiple Instance Learning (LC-MIL), to refine coarse annotations on a single WSI without the need for external training data.
Our experiments on a heterogeneous WSI set with breast cancer lymph node metastasis, liver cancer, and colorectal cancer samples show that LC-MIL significantly refines the coarse annotations, outperforming the state-of-the-art alternatives, even while learning from a single slide.
arXiv Detail & Related papers (2021-09-22T15:06:06Z)
- Uncovering the structure of clinical EEG signals with self-supervised learning [64.4754948595556]
Supervised learning paradigms are often limited by the amount of labeled data that is available.
This phenomenon is particularly problematic for clinically relevant data such as electroencephalography (EEG).
By extracting information from unlabeled data, it might be possible to reach competitive performance with deep neural networks.
arXiv Detail & Related papers (2020-07-31T14:34:47Z)
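
As a concrete illustration of the feature-decorrelation idea in the "Overcoming Dimensional Collapse" entry above, here is a minimal sketch in the style of the well-known Barlow-Twins off-diagonal penalty; it is an assumed stand-in, not that paper's exact loss.

```python
# Minimal feature-decorrelation penalty of the kind used to counter
# dimensional collapse. Barlow-Twins-style recipe; an illustrative
# stand-in, not the cited paper's exact loss.
import torch

def decorrelation_penalty(z, eps=1e-6):
    """z: (N, D) batch of embeddings. Penalizes off-diagonal entries of
    the feature correlation matrix so dimensions stay informative rather
    than collapsing onto a low-dimensional subspace."""
    z = (z - z.mean(dim=0)) / (z.std(dim=0) + eps)  # standardize each dim
    corr = (z.T @ z) / z.size(0)                    # (D, D) correlations
    off_diag = corr - torch.diag(torch.diagonal(corr))
    return (off_diag ** 2).sum()

penalty = decorrelation_penalty(torch.randn(256, 64))
```

Likewise, the self-verification loop from the "Self-Verification Improves Few-Shot Clinical Information Extraction" entry can be sketched as a two-pass extract-then-verify procedure. Everything below is illustrative: `call_llm` is a hypothetical stand-in for any LLM client, and the prompts are not the paper's own.

```python
# Schematic self-verification: extract first, then ask the model to
# ground each item in the source text and discard unsupported items.
# `call_llm` is a hypothetical stub; prompts are illustrative only.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in an actual LLM client here")

def extract_with_verification(note: str, field: str) -> list[str]:
    # Pass 1: extraction (few-shot examples omitted for brevity).
    items = call_llm(
        f"Extract all {field} mentioned in this clinical note, "
        f"one per line:\n{note}"
    ).splitlines()

    # Pass 2: self-verification with provenance. Keep an item only if the
    # model can quote a supporting span that actually occurs in the note.
    verified = []
    for item in items:
        evidence = call_llm(
            f"Quote the exact sentence from the note that supports the "
            f"{field} '{item}', or reply NONE.\nNote:\n{note}"
        ).strip()
        if evidence != "NONE" and evidence in note:
            verified.append(item)
    return verified
```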
This list is automatically generated from the titles and abstracts of the papers on this site.