CHAMMI: A benchmark for channel-adaptive models in microscopy imaging
- URL: http://arxiv.org/abs/2310.19224v2
- Date: Tue, 16 Jan 2024 18:26:50 GMT
- Title: CHAMMI: A benchmark for channel-adaptive models in microscopy imaging
- Authors: Zitong Chen, Chau Pham, Siqi Wang, Michael Doron, Nikita Moshkov,
Bryan A. Plummer, Juan C. Caicedo
- Abstract summary: We present a benchmark for investigating channel-adaptive models in microscopy imaging.
We find that channel-adaptive models can generalize better to out-of-domain tasks and can be computationally efficient.
- Score: 18.220276947512843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most neural networks assume that input images have a fixed number of channels
(three for RGB images). However, there are many settings where the number of
channels may vary, such as microscopy images where the number of channels
changes depending on instruments and experimental goals. Yet, there has not
been a systematic attempt to create and evaluate neural networks that are
invariant to the number and type of channels. As a result, trained models
remain specific to individual studies and are hardly reusable for other
microscopy settings. In this paper, we present a benchmark for investigating
channel-adaptive models in microscopy imaging, which consists of 1) a dataset
of varied-channel single-cell images, and 2) a biologically relevant evaluation
framework. In addition, we adapted several existing techniques to create
channel-adaptive models and compared their performance on this benchmark to
fixed-channel, baseline models. We find that channel-adaptive models can
generalize better to out-of-domain tasks and can be computationally efficient.
We contribute a curated dataset (https://doi.org/10.5281/zenodo.7988357) and an
evaluation API (https://github.com/broadinstitute/MorphEm.git) to facilitate
objective comparisons in future research and applications.
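The core idea of a channel-adaptive model is that the network's output has a fixed shape regardless of how many channels the input image has. One simple way to achieve this (not the benchmark's specific architectures, just an illustrative sketch) is to embed every channel independently with a shared filter bank and then pool across channels; the function names and filter shapes below are hypothetical:

```python
import numpy as np

def channel_adaptive_features(image, weight):
    """Embed each channel with a shared filter bank, then mean-pool
    across channels so the output size is fixed no matter how many
    channels the input has."""
    # image:  (C, H, W) with any number of channels C
    # weight: (D, H, W) shared "filter bank" producing D features per channel
    per_channel = np.einsum('chw,dhw->cd', image, weight)  # (C, D)
    return per_channel.mean(axis=0)                        # (D,) fixed size

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16, 16))
f3 = channel_adaptive_features(rng.normal(size=(3, 16, 16)), w)  # 3-channel input
f5 = channel_adaptive_features(rng.normal(size=(5, 16, 16)), w)  # 5-channel input
assert f3.shape == f5.shape == (8,)
```

Because the same weights are applied to every channel and the pooling is permutation-invariant, the model is indifferent to both the number and the ordering of input channels, which is the property the benchmark evaluates.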
Related papers
- ChAda-ViT : Channel Adaptive Attention for Joint Representation Learning of Heterogeneous Microscopy Images [2.954116522244175]
We propose ChAda-ViT, a novel Channel Adaptive Vision Transformer architecture.
We also introduce IDRCell100k, a bioimage dataset with a rich set of 79 experiments covering 7 microscope modalities.
Our architecture, trained in a self-supervised manner, outperforms existing approaches in several biologically relevant downstream tasks.
arXiv Detail & Related papers (2023-11-26T10:38:47Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Scale-Equivariant UNet for Histopathology Image Segmentation [1.213915839836187]
Convolutional Neural Networks (CNNs) trained on such images at a given scale fail to generalise to those at different scales.
We propose the Scale-Equivariant UNet (SEUNet) for image segmentation by building on scale-space theory.
arXiv Detail & Related papers (2023-04-10T14:03:08Z)
- ConvTransSeg: A Multi-resolution Convolution-Transformer Network for Medical Image Segmentation [14.485482467748113]
We propose a hybrid encoder-decoder segmentation model (ConvTransSeg).
It consists of a multi-layer CNN encoder for feature learning and a corresponding multi-level Transformer decoder for segmentation prediction.
Our method achieves the best performance in terms of Dice coefficient and average symmetric surface distance measures with low model complexity and memory consumption.
arXiv Detail & Related papers (2022-10-13T14:59:23Z)
- Stacking Ensemble Learning in Deep Domain Adaptation for Ophthalmic Image Classification [61.656149405657246]
Domain adaptation is effective in image classification tasks where obtaining sufficient label data is challenging.
We propose a novel method, named SELDA, for stacking ensemble learning via extending three domain adaptation methods.
The experimental results using Age-Related Eye Disease Study (AREDS) benchmark ophthalmic dataset demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-09-27T14:19:00Z)
- ViViT: A Video Vision Transformer [75.74690759089529]
We present pure-transformer based models for video classification.
Our model extracts spatio-temporal tokens from the input video, which are then encoded by a series of transformer layers.
We show how we can effectively regularise the model during training and leverage pretrained image models to be able to train on comparatively small datasets.
arXiv Detail & Related papers (2021-03-29T15:27:17Z)
- End-to-end learnable EEG channel selection with deep neural networks [72.21556656008156]
We propose a framework to embed the EEG channel selection in the neural network itself.
We deal with the discrete nature of this new optimization problem by employing continuous relaxations of the discrete channel selection parameters.
This generic approach is evaluated on two different EEG tasks.
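A continuous relaxation of discrete channel selection can be illustrated with a temperature-controlled softmax over learnable per-channel logits: at high temperature every channel contributes, and as the temperature approaches zero the weights approach a hard one-hot choice. This is a generic sketch of the relaxation idea, not the paper's exact parameterization, and all names below are illustrative:

```python
import numpy as np

def softmax(x, temperature):
    z = np.exp((x - x.max()) / temperature)  # shift by max for stability
    return z / z.sum()

def select_channels(eeg, logits, temperature):
    """Soft channel selection: a softmax over learnable logits weights
    each channel; as temperature -> 0 the mixture approaches a hard
    one-hot selection, recovering a discrete channel choice."""
    w = softmax(logits, temperature)       # (C,) channel weights
    return np.tensordot(w, eeg, axes=1)    # weighted mix over channels

rng = np.random.default_rng(1)
eeg = rng.normal(size=(4, 100))            # 4 channels, 100 time samples
logits = np.array([0.1, 2.0, -1.0, 0.3])   # channel 1 has the largest logit
soft = select_channels(eeg, logits, temperature=1.0)
hard = select_channels(eeg, logits, temperature=1e-3)
# At a very low temperature the output approaches the argmax channel
assert np.allclose(hard, eeg[1], atol=1e-6)
```

Because the softmax is differentiable, the logits can be trained jointly with the rest of the network by gradient descent, which is what embedding channel selection "in the neural network itself" makes possible.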
arXiv Detail & Related papers (2021-02-11T13:44:07Z)
- Comparisons among different stochastic selection of activation layers for convolutional neural networks for healthcare [77.99636165307996]
We classify biomedical images using ensembles of neural networks.
We select activation functions from the following: ReLU, leaky ReLU, Parametric ReLU, ELU, Adaptive Piecewise Linear Unit, S-Shaped ReLU, Swish, Mish, Mexican Linear Unit, Parametric Deformable Linear Unit, Soft Root Sign.
arXiv Detail & Related papers (2020-11-24T01:53:39Z)
- Robust Retinal Vessel Segmentation from a Data Augmentation Perspective [14.768009562830004]
We propose two new data augmentation modules, namely, channel-wise random Gamma correction and channel-wise random vessel augmentation.
With the additional training samples generated by applying these two modules sequentially, a model could learn more invariant and discriminating features.
Experimental results on both real-world and synthetic datasets demonstrate that our method can improve the performance and robustness of a classic convolutional neural network architecture.
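Channel-wise random gamma correction means drawing an independent gamma exponent for each channel rather than one for the whole image, so the augmentation perturbs color/intensity balance per channel. A minimal sketch, assuming images normalized to [0, 1] and an illustrative gamma range (the paper's exact ranges are not given here):

```python
import numpy as np

def channelwise_random_gamma(image, rng, low=0.5, high=2.0):
    """Apply an independent random gamma correction to each channel.

    image: (C, H, W) array with values in [0, 1].
    low/high: illustrative bounds for the sampled gamma exponents.
    """
    gammas = rng.uniform(low, high, size=image.shape[0])  # one gamma per channel
    return np.stack([ch ** g for ch, g in zip(image, gammas)])

rng = np.random.default_rng(2)
img = rng.uniform(size=(3, 32, 32))
aug = channelwise_random_gamma(img, rng)
assert aug.shape == img.shape
assert np.all((aug >= 0) & (aug <= 1))  # gamma keeps [0, 1] values in [0, 1]
```

Since x**g stays in [0, 1] for x in [0, 1] and g > 0, the augmentation changes per-channel contrast without clipping, which is why sequentially applied modules of this kind can expand the training distribution safely.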
arXiv Detail & Related papers (2020-07-31T07:37:14Z)
- Improving Calibration and Out-of-Distribution Detection in Medical Image Segmentation with Convolutional Neural Networks [8.219843232619551]
Convolutional Neural Networks (CNNs) have been shown to be powerful medical image segmentation models.
We advocate for multi-task learning, i.e., training a single model on several different datasets.
We show that a single CNN not only learns to automatically recognize the context and accurately segment the organ of interest in each context, but also that such a joint model often makes more accurate and better-calibrated predictions.
arXiv Detail & Related papers (2020-04-12T23:42:51Z)
- Channel Interaction Networks for Fine-Grained Image Categorization [61.095320862647476]
Fine-grained image categorization is challenging due to the subtle inter-class differences.
We propose a channel interaction network (CIN), which models the channel-wise interplay both within an image and across images.
Our model can be trained efficiently in an end-to-end fashion without the need for multi-stage training and testing.
arXiv Detail & Related papers (2020-03-11T11:51:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.