Time-based Self-supervised Learning for Wireless Capsule Endoscopy
- URL: http://arxiv.org/abs/2204.09773v1
- Date: Wed, 20 Apr 2022 20:31:06 GMT
- Title: Time-based Self-supervised Learning for Wireless Capsule Endoscopy
- Authors: Guillem Pascual, Pablo Laiz, Albert García, Hagen Wenzek, Jordi
Vitrià, Santi Seguí
- Abstract summary: This work proposes using self-supervised learning for wireless endoscopy videos by introducing a custom-tailored method.
We prove that using the inferred inherent structure learned by our method, extracted from the temporal axis, improves the detection rate on several domain-specific applications even under severe imbalance.
- Score: 1.3514953384460016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art machine learning models, and especially deep learning ones,
are significantly data-hungry; they require vast amounts of manually labeled
samples to function correctly. However, in most medical imaging fields,
obtaining said data can be challenging. Not only is the volume of data a
problem, but so is the imbalance within its classes; it is common to have many
more images of healthy patients than of those with pathology. Computer-aided
diagnostic systems suffer from these issues and are usually over-designed to
perform accurately. This work proposes using self-supervised learning
for wireless endoscopy videos by introducing a custom-tailored method that does
not initially need labels or appropriate balance. We prove that using the
inferred inherent structure learned by our method, extracted from the temporal
axis, improves the detection rate on several domain-specific applications even
under severe imbalance.
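The abstract does not spell out the pretext task, but a minimal illustration of the general idea of time-based self-supervision is sketched below: pseudo-labels are derived purely from frame positions along the temporal axis (here, whether two frames lie within a hypothetical proximity window), so no manual annotation or class balance is required. The function name and the window-based labeling rule are assumptions for illustration, not the paper's actual method.

```python
import random

def temporal_pairs(num_frames, window, n_pairs, seed=0):
    """Sample frame-index pairs from a video of `num_frames` frames and
    assign a pseudo-label: 1 if the two frames are within `window` steps
    of each other (temporally close), 0 otherwise. The labels come from
    the temporal axis itself, so no human annotation is needed."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_pairs):
        i = rng.randrange(num_frames)
        j = rng.randrange(num_frames)
        label = 1 if abs(i - j) <= window else 0
        pairs.append(((i, j), label))
    return pairs
```

A model trained to predict these free pseudo-labels from the frame contents learns temporal structure that can then be transferred to downstream, label-scarce detection tasks.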
Related papers
- LEARNER: Learning Granular Labels from Coarse Labels using Contrastive Learning [28.56726678583327]
Can a model trained on multi-patient scans predict subtle changes in an individual patient's scans?
Recent computer vision models struggle to learn fine-grained differences while being trained on data showing larger differences.
We find that models pre-trained on clips from multiple patients can better predict fine-grained differences in scans from a single patient by employing contrastive learning.
arXiv Detail & Related papers (2024-11-02T05:27:52Z) - Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z) - Data Efficient Contrastive Learning in Histopathology using Active Sampling [0.0]
Deep learning algorithms can provide robust quantitative analysis in digital pathology.
These algorithms require large amounts of annotated training data.
Self-supervised methods have been proposed to learn features using ad-hoc pretext tasks.
We propose a new method for actively sampling informative members from the training set using a small proxy network.
arXiv Detail & Related papers (2023-03-28T18:51:22Z) - RadTex: Learning Efficient Radiograph Representations from Text Reports [7.090896766922791]
We build a data-efficient learning framework that utilizes radiology reports to improve medical image classification performance with limited labeled data.
Our model achieves higher classification performance than ImageNet-supervised pretraining when labeled training data is limited.
arXiv Detail & Related papers (2022-08-05T15:06:26Z) - Self-Supervised Learning as a Means To Reduce the Need for Labeled Data
in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z) - When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning
Framework in Classification of Medical Images on Limited Data: A COVID-19
Case Study [77.34726150561087]
The COVID-19 pandemic has spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and verified in analyzing medical images.
arXiv Detail & Related papers (2022-03-24T02:09:41Z) - Relational Subsets Knowledge Distillation for Long-tailed Retinal
Diseases Recognition [65.77962788209103]
We propose class subset learning by dividing the long-tailed data into multiple class subsets according to prior knowledge.
It forces the model to focus on learning the subset-specific knowledge.
The proposed framework proved to be effective for the long-tailed retinal diseases recognition task.
arXiv Detail & Related papers (2021-04-22T13:39:33Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, that is a simple yet effective meta learning machine for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z) - Self-Training with Improved Regularization for Sample-Efficient Chest
X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z) - Additive Angular Margin for Few Shot Learning to Classify Clinical
Endoscopy Images [42.74958357195011]
We propose a few-shot learning approach that requires less training data and can be used to predict label classes of test samples from an unseen dataset.
We compare our approach to the several established methods on a large cohort of multi-center, multi-organ, and multi-modal endoscopy data.
arXiv Detail & Related papers (2020-03-23T00:20:52Z)
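The Select-ProtoNet entry above builds on Prototypical Networks, a standard few-shot classifier: each class prototype is the mean of that class's support embeddings, and a query is assigned to the nearest prototype. The sketch below is a minimal pure-Python illustration of that generic idea (function names and the Euclidean metric are assumptions, not that paper's exact formulation).

```python
import math

def prototypes(support):
    """Compute one prototype per class as the element-wise mean of the
    class's support embeddings. `support` maps label -> list of vectors."""
    protos = {}
    for label, vecs in support.items():
        dim = len(vecs[0])
        protos[label] = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
    return protos

def classify(query, protos):
    """Assign the query embedding to the class whose prototype is
    nearest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(protos, key=lambda lbl: dist(query, protos[lbl]))
```

In a few-shot medical setting, the embeddings would come from a pretrained encoder, so only a handful of labeled support images per class are needed at prediction time.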
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.