A Light-weight Interpretable Compositional Network for Nuclei Detection and Weakly-supervised Segmentation
- URL: http://arxiv.org/abs/2110.13846v1
- Date: Tue, 26 Oct 2021 16:44:08 GMT
- Title: A Light-weight Interpretable Compositional Network for Nuclei Detection and Weakly-supervised Segmentation
- Authors: Yixiao Zhang, Adam Kortylewski, Qing Liu, Seyoun Park, Benjamin Green,
Elizabeth Engle, Guillermo Almodovar, Ryan Walk, Sigfredo Soto-Diaz, Janis
Taube, Alex Szalay, and Alan Yuille
- Abstract summary: Deep neural networks usually require large amounts of annotated data to train their vast numbers of parameters.
We propose to build a data-efficient model, which only requires partial annotation, specifically on isolated nuclei.
- Score: 10.196621315018884
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The field of computational pathology has witnessed great advancements since
deep neural networks came into wide use. These networks usually require large
amounts of annotated data to train their vast numbers of parameters, yet
annotating a large histopathology dataset takes significant effort. We propose
a data-efficient model that requires only partial annotation, specifically on
isolated nuclei rather than on whole slide images. It uses shallow features as
its backbone and is light-weight, so a small amount of data is sufficient for
training. Moreover, it is a generative compositional model, which makes its
predictions interpretable. The proposed method could be an alternative
solution to the data-hunger of deep learning methods.
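As a rough, hypothetical illustration of the detection idea only, the sketch below learns a single nucleus template from a few annotated isolated-nucleus patches and scores candidate locations by normalized cross-correlation. Every function name, threshold, and the flat template matching itself are illustrative stand-ins, not the authors' code; the paper's actual scoring is a generative compositional model over shallow backbone features.

```python
# Hypothetical illustration only: template matching as a stand-in for the
# paper's generative compositional scoring. Names and thresholds are invented.
import numpy as np

def learn_template(patches):
    """Average a few annotated isolated-nucleus patches into one normalized template."""
    t = np.mean(np.asarray(patches, dtype=float), axis=0)
    return (t - t.mean()) / (t.std() + 1e-8)

def detect_nuclei(image, template, threshold=0.5):
    """Score every window by normalized cross-correlation; return centers of peaks."""
    th, tw = template.shape
    H, W = image.shape
    scores = np.full((H - th + 1, W - tw + 1), -1.0)
    for i in range(H - th + 1):
        for j in range(W - tw + 1):
            win = image[i:i + th, j:j + tw].astype(float)
            win = (win - win.mean()) / (win.std() + 1e-8)
            scores[i, j] = float(np.mean(win * template))
    return np.argwhere(scores > threshold) + np.array([th // 2, tw // 2])
```

The interpretability claimed in the abstract comes from the compositional, generative form of the real scoring function, which this flat template matcher only gestures at.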
Related papers
- Residual Random Neural Networks [0.0]
The single-layer feedforward neural network with random weights is a recurring motif in the neural-network literature.
We show that one can obtain good classification results even if the number of hidden neurons has the same order of magnitude as the dimensionality of the data samples (a minimal sketch of the motif follows below).
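The recurring motif itself is easy to sketch: fix a random hidden layer and train only a linear readout by least squares. The toy example below shows the classic random-weights baseline, not the residual variant this paper proposes; sizes and the tanh nonlinearity are illustrative, with hidden width deliberately equal to the input dimensionality.

```python
# Toy version of the classic random-weights baseline: random hidden layer,
# least-squares readout; hidden width equals the input dimensionality.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))                    # 200 samples, 64 dimensions
y = (X @ rng.normal(size=64) > 0).astype(float)   # synthetic binary labels

W = rng.normal(size=(64, 64))                     # fixed random hidden weights
H = np.tanh(X @ W)                                # hidden activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)      # fit only the linear readout
print("train acc:", np.mean((H @ beta > 0.5) == y))
```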
arXiv Detail & Related papers (2024-10-25T22:00:11Z)
- Data Augmentations in Deep Weight Spaces [89.45272760013928]
We introduce a novel augmentation scheme based on the Mixup method (a minimal sketch of the underlying recipe follows below).
We evaluate the performance of these techniques on existing benchmarks as well as new benchmarks we generate.
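Mixup itself is a standard augmentation; the hypothetical sketch below applies it to flattened weight vectors only to show the underlying recipe. The paper's actual scheme is tailored to weight-space architectures and their symmetries, and the alpha value here is illustrative.

```python
# Hypothetical sketch: vanilla Mixup on flattened weight vectors w1, w2 with
# (one-hot) labels y1, y2. This is the base recipe, not the paper's scheme.
import numpy as np

def weight_space_mixup(w1, w2, y1, y2, alpha=0.2, rng=None):
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient ~ Beta(alpha, alpha)
    return lam * w1 + (1 - lam) * w2, lam * y1 + (1 - lam) * y2
```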
arXiv Detail & Related papers (2023-11-15T10:43:13Z)
- Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling [69.60713300418467]
Learning to jump is a general recipe for generative modeling of various types of data.
We demonstrate when learning to jump is expected to perform comparably to learning to denoise, and when it is expected to perform better.
arXiv Detail & Related papers (2023-05-28T05:38:28Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- An Information-Theoretic Analysis of Compute-Optimal Neural Scaling Laws [24.356906682593532]
We study the compute-optimal trade-off between model and training data set sizes for large neural networks.
Our result suggests a linear relation similar to that supported by the empirical analysis of Chinchilla (a toy calculation of the rule of thumb follows below).
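For context, the Chinchilla analysis is commonly summarized by the rule of thumb that, under a FLOP budget C ≈ 6ND, the compute-optimal parameter count N and token count D both scale as C^0.5, at roughly 20 tokens per parameter. The toy calculation below uses only those widely quoted rules of thumb, not results from this paper.

```python
# Toy calculation of the widely quoted Chinchilla-style rule of thumb:
# training FLOPs C ~ 6*N*D, with N_opt and D_opt both scaling as C^0.5
# (about 20 tokens per parameter). These constants are folklore, not this paper's.
import math

def compute_optimal(c_flops, tokens_per_param=20.0):
    n = math.sqrt(c_flops / (6.0 * tokens_per_param))  # N_opt ∝ C^0.5
    return n, tokens_per_param * n                     # (params, tokens)

n, d = compute_optimal(1e21)
print(f"params ≈ {n:.2e}, tokens ≈ {d:.2e}")           # ≈ 2.9e9 params, 5.8e10 tokens
```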
arXiv Detail & Related papers (2022-12-02T18:46:41Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where the task is not purely opaque: something is known about the underlying system.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- Leveraging Sparse Linear Layers for Debuggable Deep Networks [86.94586860037049]
We show how fitting sparse linear models over learned deep feature representations can lead to more debuggable neural networks.
The resulting sparse explanations can help to identify spurious correlations, explain misclassifications, and diagnose model biases in vision and language tasks (a minimal sketch follows below).
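The recipe is concrete enough to sketch: freeze the deep backbone, then refit only the decision layer with an L1 penalty so that few features survive. Below is a minimal, hypothetical stand-in using synthetic features; the shapes and regularization strength are arbitrary.

```python
# Minimal stand-in: L1-regularized logistic regression as the sparse decision
# layer over (here synthetic) frozen deep features. Shapes and C are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 2048))    # stand-in for penultimate-layer features
labels = (feats[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)

clf = LogisticRegression(penalty="l1", solver="saga", C=0.05, max_iter=5000)
clf.fit(feats, labels)

active = np.flatnonzero(clf.coef_[0])   # the few features the model actually uses
print(f"{active.size} of {feats.shape[1]} features kept:", active[:10])
```

Inspecting which of the surviving features fire on a given input is what makes the resulting network easier to debug.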
arXiv Detail & Related papers (2021-05-11T08:15:25Z)
- The Yin-Yang dataset [0.0]
The Yin-Yang dataset was developed for research on biologically plausible error backpropagation and deep learning in spiking neural networks.
It serves as an alternative to classic deep learning datasets, by providing several advantages.
arXiv Detail & Related papers (2021-02-16T15:18:05Z)
- A generic ensemble based deep convolutional neural network for semi-supervised medical image segmentation [7.141405427125369]
We propose a generic semi-supervised learning framework for image segmentation based on a deep convolutional neural network (DCNN).
Our method is able to significantly improve beyond fully supervised model learning by incorporating unlabeled data.
arXiv Detail & Related papers (2020-04-16T23:41:50Z)
- Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks [107.77595511218429]
In this paper, we investigate the empirical Rademacher complexity related to intermediate layers of deep neural networks.
We propose a feature distortion method (Disout) for addressing the aforementioned problem.
The superiority of the proposed feature-map distortion for producing deep neural networks with higher testing performance is analyzed and demonstrated.
arXiv Detail & Related papers (2020-02-23T13:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.