Learning Generic Lung Ultrasound Biomarkers for Decoupling Feature
Extraction from Downstream Tasks
- URL: http://arxiv.org/abs/2206.08398v1
- Date: Thu, 16 Jun 2022 18:15:14 GMT
- Title: Learning Generic Lung Ultrasound Biomarkers for Decoupling Feature
Extraction from Downstream Tasks
- Authors: Gautam Rajendrakumar Gare, Tom Fox, Pete Lowery, Kevin Zamora, Hai V.
Tran, Laura Hutchins, David Montgomery, Amita Krishnan, Deva Kannan Ramanan,
Ricardo Luis Rodriguez, Bennett P deBoisblanc, John Michael Galeotti
- Abstract summary: We propose to decouple feature learning from downstream lung ultrasound tasks by introducing an auxiliary pre-task of visual biomarker classification.
We demonstrate that one can learn an informative, concise, and interpretable feature space from ultrasound videos by training models for predicting biomarker labels.
- Score: 0.032270246323516584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contemporary artificial neural networks (ANN) are trained end-to-end, jointly
learning both features and classifiers for the task of interest. Though
enormously effective, this paradigm imposes significant costs in assembling
annotated task-specific datasets and training large-scale networks. We propose
to decouple feature learning from downstream lung ultrasound tasks by
introducing an auxiliary pre-task of visual biomarker classification. We
demonstrate that one can learn an informative, concise, and interpretable
feature space from ultrasound videos by training models for predicting
biomarker labels. Notably, biomarker feature extractors can be trained from
data annotated with weak video-scale supervision. These features can be used by
a variety of downstream Expert models targeted for diverse clinical tasks
(Diagnosis, lung severity, S/F ratio). Crucially, task-specific expert models
are comparable in accuracy to end-to-end models directly trained for such
target tasks, while being significantly lower cost to train.
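To make the decoupling concrete, below is a minimal sketch (not the authors' implementation): a video encoder is trained once on weak, video-level biomarker labels, then frozen so that lightweight expert heads for downstream tasks such as diagnosis or severity scoring can be trained cheaply on its features. The module names (`BiomarkerEncoder`, `ExpertHead`), the toy 3D-conv backbone, and the biomarker/class counts are illustrative assumptions, not details from the paper.

```python
# Sketch of the two-stage "decoupled" pipeline described in the abstract.
# All architectures and label counts are placeholders.
import torch
import torch.nn as nn

class BiomarkerEncoder(nn.Module):
    """Feature extractor trained once on weak, video-level biomarker labels."""
    def __init__(self, num_biomarkers: int = 8, feat_dim: int = 128):
        super().__init__()
        # Toy 3D-conv backbone standing in for whatever video model is actually used.
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        # Multi-label head: one sigmoid output per visual biomarker.
        self.biomarker_head = nn.Linear(feat_dim, num_biomarkers)

    def forward(self, video):                      # video: (B, 1, T, H, W)
        feats = self.backbone(video)
        return feats, self.biomarker_head(feats)

class ExpertHead(nn.Module):
    """Small task-specific model trained on frozen biomarker features."""
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                        nn.Linear(64, num_classes))

    def forward(self, feats):
        return self.classifier(feats)

# --- Stage 1: learn the generic feature space from video-level biomarker labels ---
encoder = BiomarkerEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
videos = torch.randn(4, 1, 16, 64, 64)            # dummy ultrasound clips
biomarker_labels = torch.randint(0, 2, (4, 8)).float()
feats, logits = encoder(videos)
bce(logits, biomarker_labels).backward()
opt.step()

# --- Stage 2: freeze the encoder, train cheap downstream expert heads ---
for p in encoder.parameters():
    p.requires_grad_(False)
diagnosis_expert = ExpertHead(feat_dim=128, num_classes=3)  # hypothetical diagnosis classes
severity_expert = ExpertHead(feat_dim=128, num_classes=4)   # hypothetical severity grades
with torch.no_grad():
    frozen_feats, _ = encoder(videos)
diag_loss = nn.CrossEntropyLoss()(diagnosis_expert(frozen_feats), torch.randint(0, 3, (4,)))
diag_loss.backward()                               # only the expert head receives gradients
sev_logits = severity_expert(frozen_feats)         # a second expert reuses the same frozen features
```

The key point the sketch illustrates is that Stage 1 is paid for once, after which each new clinical task only requires training a small head on the frozen feature space.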
Related papers
- What Matters for Bioacoustic Encoding [34.118070876417065]
We present a large-scale empirical study that covers aspects of bioacoustics relevant to research. We obtain encoders that are state-of-the-art on existing and proposed benchmarks. Specifically, across 26 datasets with tasks including species classification, detection, individual ID, and vocal repertoire discovery, we find that self-supervised pre-training followed by supervised post-training yields the best-performing encoders.
arXiv Detail & Related papers (2025-08-15T23:52:34Z) - Granularity Matters in Long-Tail Learning [62.30734737735273]
We offer a novel perspective on long-tail learning, inspired by an observation: datasets with finer granularity tend to be less affected by data imbalance.
We introduce open-set auxiliary classes that are visually similar to existing ones, aiming to enhance representation learning for both head and tail classes.
To prevent the overwhelming presence of auxiliary classes from disrupting training, we introduce a neighbor-silencing loss.
arXiv Detail & Related papers (2024-10-21T13:06:21Z) - Multi-organ Self-supervised Contrastive Learning for Breast Lesion
Segmentation [0.0]
This paper employs multi-organ datasets for pre-training models tailored to specific organ-related target tasks.
Our target task is breast tumour segmentation in ultrasound images.
Results show that conventional contrastive learning pre-training improves performance compared to supervised baseline approaches.
arXiv Detail & Related papers (2024-02-21T20:29:21Z) - Contrastive Deep Encoding Enables Uncertainty-aware
Machine-learning-assisted Histopathology [6.548275341067594]
Terabytes of training data can be consciously utilized to pre-train deep networks to encode informative representations.
We show that our approach can reach the state-of-the-art (SOTA) for patch-level classification with only 1-10% randomly selected annotations.
arXiv Detail & Related papers (2023-09-13T17:37:19Z) - Clinically Acceptable Segmentation of Organs at Risk in Cervical Cancer
Radiation Treatment from Clinically Available Annotations [0.0]
We present an approach to learn a deep learning model for the automatic segmentation of Organs at Risk (OARs) in cervical cancer radiation treatment.
We employ simple heuristics for automatic data cleaning to minimize data inhomogeneity, label noise, and missing annotations.
We develop a semi-supervised learning approach utilizing a teacher-student setup, annotation imputation, and uncertainty-guided training to learn in presence of missing annotations.
arXiv Detail & Related papers (2023-02-21T13:24:40Z) - Teacher-Student Architecture for Mixed Supervised Lung Tumor
Segmentation [0.4159343412286401]
This paper investigates the use of a teacher-student design to train an automatic model performing pulmonary tumor segmentation on computed tomography images.
Using only a small proportion of semantically labeled data and a large number of bounding box annotated data, we achieved competitive performance using a teacher-student design.
arXiv Detail & Related papers (2021-12-21T22:02:34Z) - Learning Debiased and Disentangled Representations for Semantic
Segmentation [52.35766945827972]
We propose a model-agnostic training scheme for semantic segmentation.
By randomly eliminating certain class information in each training iteration, we effectively reduce feature dependencies among classes.
Models trained with our approach demonstrate strong results on multiple semantic segmentation benchmarks.
arXiv Detail & Related papers (2021-10-31T16:15:09Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, a simple yet effective meta-learning method for few-shot image classification.
arXiv Detail & Related papers (2020-09-02T02:50:30Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its
Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z) - Naive-Student: Leveraging Semi-Supervised Learning in Video Sequences
for Urban Scene Segmentation [57.68890534164427]
In this work, we ask if we may leverage semi-supervised learning in unlabeled video sequences and extra images to improve the performance on urban scene segmentation.
We simply predict pseudo-labels for the unlabeled data and train subsequent models with both human-annotated and pseudo-labeled data.
Our Naive-Student model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results on all three Cityscapes benchmarks (a minimal sketch of this pseudo-labeling recipe appears after this list).
arXiv Detail & Related papers (2020-05-20T18:00:05Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the resulting dataset can significantly improve the ability of the learned FER model.
To keep training on this enlarged dataset tractable, we apply a dataset distillation strategy to compress it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - Self-Training with Improved Regularization for Sample-Efficient Chest
X-Ray Classification [80.00316465793702]
We present a deep learning framework that enables robust modeling in challenging scenarios.
Our results show that using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
arXiv Detail & Related papers (2020-05-03T02:36:00Z) - Semi-supervised few-shot learning for medical image segmentation [21.349705243254423]
Recent attempts to alleviate the need for large annotated datasets have developed training strategies under the few-shot learning paradigm.
We propose a novel few-shot learning framework for semantic segmentation, where unlabeled images are also made available at each episode.
We show that including unlabeled surrogate tasks in the episodic training leads to more powerful feature representations.
arXiv Detail & Related papers (2020-03-18T20:37:18Z)
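Several of the entries above (the teacher-student lung tumor work, Naive-Student, and the chest X-ray self-training paper) rely on the same basic iterative pseudo-labeling loop. The sketch below is a generic, toy illustration of that recipe rather than any one paper's implementation; the tiny model, data shapes, and round/epoch counts are placeholder assumptions.

```python
# Generic iterative pseudo-labeling (teacher-student self-training) sketch.
# The model and data are toy stand-ins, not any of the cited papers' code.
import torch
import torch.nn as nn

def make_model():
    # Tiny segmentation-style model: per-pixel logits over 2 classes.
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 2, 1))

labeled_x = torch.randn(8, 1, 32, 32)
labeled_y = torch.randint(0, 2, (8, 32, 32))
unlabeled_x = torch.randn(16, 1, 32, 32)
ce = nn.CrossEntropyLoss()

teacher = make_model()
for round_idx in range(3):                       # iterative self-training rounds
    # 1. The teacher predicts pseudo-labels for the unlabeled pool.
    with torch.no_grad():
        pseudo_y = teacher(unlabeled_x).argmax(dim=1)
    # 2. A fresh student trains on human labels plus pseudo-labels.
    student = make_model()
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(5):                           # a few toy epochs
        opt.zero_grad()
        loss = ce(student(labeled_x), labeled_y) + ce(student(unlabeled_x), pseudo_y)
        loss.backward()
        opt.step()
    # 3. The student becomes the next round's teacher.
    teacher = student
```

In practice these papers add confidence filtering, stronger augmentation on the student, or uncertainty-guided weighting on top of this loop, but the iterate-predict-retrain structure is the shared core.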
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences arising from its use.