CellTranspose: Few-shot Domain Adaptation for Cellular Instance Segmentation
- URL: http://arxiv.org/abs/2212.14121v1
- Date: Wed, 28 Dec 2022 23:00:50 GMT
- Title: CellTranspose: Few-shot Domain Adaptation for Cellular Instance Segmentation
- Authors: Matthew Keaton, Ram Zaveri, Gianfranco Doretto
- Abstract summary: We address the problem of designing an approach that requires minimal amounts of new annotated data as well as training time.
We do so by designing specialized contrastive losses that leverage the few annotated samples very efficiently.
- Score: 4.38301148531795
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated cellular instance segmentation has been used to accelerate biological research for the past two decades, and recent advancements have produced higher-quality results with less effort from the biologist. Most current endeavors focus on cutting the researcher out of the picture entirely by generating highly generalized models. However, these models invariably fail when faced with novel data distributed differently from the data used for training. Rather than approaching the problem with methods that presume the availability of large amounts of target data and computing power for retraining, in this work we address the even greater challenge of designing an approach that requires minimal amounts of new annotated data as well as training time. We do so by designing specialized contrastive losses that leverage the few annotated samples very efficiently. A large set of results shows that 3 to 5 annotations lead to models whose accuracy: 1) significantly mitigates the effects of covariate shift; 2) matches or surpasses that of other adaptation methods; and 3) even approaches that of methods fully retrained on the target distribution. The adaptation training takes only a few minutes, paving a path towards a balance between model performance, computing requirements, and expert-level annotation needs.
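The specialized contrastive losses themselves are detailed in the paper and are not reproduced here. As a rough, hedged illustration of the general recipe, the sketch below computes a standard supervised contrastive loss over pixel embeddings sampled from the handful of annotated target images; function and argument names are hypothetical, and PyTorch is assumed.

```python
import torch
import torch.nn.functional as F

def fewshot_contrastive_loss(pixel_embeddings, instance_labels, temperature=0.1):
    """Supervised contrastive loss over pixel embeddings sampled from the
    few annotated target images; pixels sharing a label act as positives."""
    z = F.normalize(pixel_embeddings, dim=1)          # (N, D) unit vectors
    sim = z @ z.t() / temperature                     # (N, N) similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))         # drop self-pairs
    pos_mask = (instance_labels.unsqueeze(0) == instance_labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average positive log-likelihood per anchor; non-positive entries are zeroed.
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    per_anchor = -pos_log_prob.sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return per_anchor[pos_mask.any(dim=1)].mean()     # skip anchors w/o positives
```

In this setting the anchors, positives, and negatives would all be drawn from the 3 to 5 annotated target samples, which is consistent with the abstract's claim that the adaptation pass takes only a few minutes.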
Related papers
- Source-Free Test-Time Adaptation For Online Surface-Defect Detection [29.69030283193086]
We propose a novel test-time adaptation approach for surface-defect detection.
It adapts pre-trained models to new domains and classes during inference.
Experiments demonstrate it outperforms state-of-the-art techniques.
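The summary above does not specify the adaptation mechanism. Purely as an illustration of the genre, a common source-free test-time adaptation recipe is entropy minimization over a pre-trained model's predictions during inference (in the style of TENT); this is an assumption for illustration, not the paper's method.

```python
import torch

def entropy_minimization_step(model, batch, optimizer):
    """One online test-time adaptation step: minimize prediction entropy.
    Typically only normalization-layer affine parameters are handed to the
    optimizer, so the pre-trained weights stay mostly frozen (assumption)."""
    logits = model(batch)                              # (B, C, ...) raw scores
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp(min=1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()                             # predictions for this batch
```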
arXiv Detail & Related papers (2024-08-18T14:24:05Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- Ask Your Distribution Shift if Pre-Training is Right for You [74.18516460467019]
In practice, fine-tuning a pre-trained model improves robustness significantly in some cases but not at all in others.
We focus on two possible failure modes of models under distribution shift: poor extrapolation and biases in the training data.
Our study suggests that, as a rule of thumb, pre-training can help mitigate poor extrapolation but not dataset biases.
arXiv Detail & Related papers (2024-02-29T23:46:28Z)
- Few-shot adaptation for morphology-independent cell instance segmentation [3.6064695344878093]
We show how to adapt a cell instance segmentation model to very challenging bacteria datasets.
Our results show a significant boost in accuracy after adaptation.
arXiv Detail & Related papers (2024-02-27T02:54:22Z)
- Informative Data Mining for One-Shot Cross-Domain Semantic Segmentation [84.82153655786183]
We propose a novel framework called Informative Data Mining (IDM) to enable efficient one-shot domain adaptation for semantic segmentation.
IDM provides an uncertainty-based selection criterion to identify the most informative samples, which facilitates quick adaptation and reduces redundant training.
Our approach outperforms existing methods and achieves a new state-of-the-art one-shot performance of 56.7%/55.4% on the GTA5/SYNTHIA to Cityscapes adaptation tasks.
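The exact uncertainty criterion is not given in this summary. As a minimal sketch of one plausible instance, the snippet below ranks unlabeled target images by mean pixel-wise prediction entropy and keeps the top-k most uncertain ones; all names are hypothetical.

```python
import torch

@torch.no_grad()
def select_informative(model, images, k=1):
    """Rank candidate target images by mean prediction entropy, keep top-k."""
    scores = []
    for img in images:                     # each img: (C, H, W)
        logits = model(img.unsqueeze(0))   # (1, num_classes, H, W)
        probs = logits.softmax(dim=1)
        entropy = -(probs * probs.clamp(min=1e-8).log()).sum(dim=1)  # (1, H, W)
        scores.append(entropy.mean().item())
    ranked = sorted(range(len(images)), key=lambda i: scores[i], reverse=True)
    return ranked[:k]                      # indices of the most uncertain images
```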
arXiv Detail & Related papers (2023-09-25T15:56:01Z)
- Intra-class Adaptive Augmentation with Neighbor Correction for Deep Metric Learning [99.14132861655223]
We propose a novel intra-class adaptive augmentation (IAA) framework for deep metric learning.
We reasonably estimate intra-class variations for every class and generate adaptive synthetic samples to support hard samples mining.
Our method significantly outperforms state-of-the-art methods, improving retrieval performance by 3%-6%.
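The blurb does not define how intra-class variations are estimated. As a hedged toy version of the idea, the sketch below estimates a per-class spread in feature space and samples synthetic features around real ones; this is an illustration, not the paper's IAA formulation.

```python
import torch

def synthesize_intra_class(features, labels, num_synthetic=4):
    """Toy intra-class adaptive augmentation in feature space: per class,
    estimate the intra-class std and draw synthetic samples around real ones."""
    synth_feats, synth_labels = [], []
    for c in labels.unique():
        class_feats = features[labels == c]          # (n_c, D)
        if len(class_feats) < 2:
            continue                                 # need >=2 samples to estimate spread
        std = class_feats.std(dim=0, keepdim=True)   # class-adaptive spread
        idx = torch.randint(len(class_feats), (num_synthetic,))
        noise = torch.randn(num_synthetic, features.size(1),
                            device=features.device) * std
        synth_feats.append(class_feats[idx] + noise)
        synth_labels.append(torch.full((num_synthetic,), c.item(), dtype=labels.dtype))
    return torch.cat(synth_feats), torch.cat(synth_labels)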
arXiv Detail & Related papers (2022-11-29T14:52:38Z)
- Robustifying Sentiment Classification by Maximally Exploiting Few Counterfactuals [16.731183915325584]
We propose a novel solution that only requires annotation of a small fraction of the original training data.
We achieve noticeable accuracy improvements by adding only 1% manual counterfactuals.
arXiv Detail & Related papers (2022-10-21T08:30:09Z)
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To combine the power of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
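The summary does not describe how the game is implemented. One standard way to wire a minimax objective between a feature extractor and a head is a gradient-reversal layer, sketched below as a generic illustration rather than the paper's construction.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass, so the head minimizes a loss the feature extractor is pushed to
    maximize -- a minimax game between the two."""
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def adversarial_head_loss(features, head, targets, criterion, lam=1.0):
    # The head minimizes this loss; reversed gradients drive the feature
    # extractor in the opposite direction.
    reversed_feats = GradReverse.apply(features, lam)
    return criterion(head(reversed_feats), targets)
```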
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Exploring Strategies for Generalizable Commonsense Reasoning with Pre-trained Models [62.28551903638434]
We measure the impact of three different adaptation methods on the generalization and accuracy of models.
Experiments with two models show that fine-tuning performs best, by learning both the content and the structure of the task, but suffers from overfitting and limited generalization to novel answers.
We observe that alternative adaptation methods like prefix-tuning have comparable accuracy, but generalize better to unseen answers and are more robust to adversarial splits.
arXiv Detail & Related papers (2021-09-07T03:13:06Z)
- Few Is Enough: Task-Augmented Active Meta-Learning for Brain Cell Classification [8.998976678920236]
We propose a tAsk-auGmented actIve meta-LEarning (AGILE) method to efficiently adapt Deep Neural Networks to new tasks.
AGILE combines a meta-learning algorithm with a novel task augmentation technique which we use to generate an initial adaptive model.
We show that the proposed task-augmented meta-learning framework can learn to classify new cell types after a single gradient step.
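As a hedged sketch of what single-gradient-step adaptation looks like in code, below is a generic MAML-style inner loop; AGILE's actual task augmentation and active selection are not reproduced, and all names are hypothetical.

```python
import torch

def adapt_one_step(model, support_x, support_y, criterion, inner_lr=0.01):
    """Single-gradient-step adaptation to a new cell type (MAML-style inner
    loop). Returns adapted parameters without mutating the meta-model."""
    loss = criterion(model(support_x), support_y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    adapted = {
        name: p - inner_lr * g
        for (name, p), g in zip(model.named_parameters(), grads)
    }
    # Evaluate with torch.func.functional_call(model, adapted, (query_x,)).
    return adapted
```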
arXiv Detail & Related papers (2020-07-09T18:03:12Z)
- An Efficient Method of Training Small Models for Regression Problems with Knowledge Distillation [1.433758865948252]
We propose a new formalism of knowledge distillation for regression problems.
First, we propose a new loss function, the teacher outlier rejection loss, which rejects outliers in the training samples using the teacher model's predictions.
Second, by considering a multi-task network, training of the student model's feature extraction becomes more effective.
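The exact formulation of the teacher outlier rejection loss is in the paper; the sketch below is a hedged guess at the idea named in the summary: drop samples whose labels the teacher itself flags as outliers, while still distilling the teacher's soft regression targets. Names and the margin parameter are illustrative.

```python
import torch

def teacher_outlier_rejection_loss(student_pred, teacher_pred, targets, margin=1.0):
    """Sketch of a distillation loss for regression that rejects label outliers.
    Samples where the teacher disagrees with the ground-truth label by more
    than `margin` are treated as suspect and dropped from the label term."""
    teacher_pred = teacher_pred.detach()              # no gradients into the teacher
    keep = (teacher_pred - targets).abs() < margin    # outlier rejection mask
    label_term = ((student_pred - targets) ** 2)[keep]
    label_loss = label_term.mean() if keep.any() else student_pred.sum() * 0.0
    distill_loss = ((student_pred - teacher_pred) ** 2).mean()  # soft-target term
    return label_loss + distill_loss
```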
arXiv Detail & Related papers (2020-02-28T08:46:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.