RepMode: Learning to Re-parameterize Diverse Experts for Subcellular
Structure Prediction
- URL: http://arxiv.org/abs/2212.10066v2
- Date: Sat, 25 Mar 2023 06:48:53 GMT
- Title: RepMode: Learning to Re-parameterize Diverse Experts for Subcellular
Structure Prediction
- Authors: Donghao Zhou, Chunbin Gu, Junde Xu, Furui Liu, Qiong Wang, Guangyong
Chen, Pheng-Ann Heng
- Abstract summary: In biological research, fluorescence staining is a key technique to reveal the locations and morphology of subcellular structures.
In this paper, we model it as a deep learning task termed subcellular structure prediction (SSP), aiming to predict the 3D fluorescent images of multiple subcellular structures from a 3D transmitted-light image.
We propose RepMode, a network that dynamically organizes its parameters with task-aware priors to handle specified single-label prediction tasks.
- Score: 54.69195221765405
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In biological research, fluorescence staining is a key technique to reveal
the locations and morphology of subcellular structures. However, it is slow,
expensive, and harmful to cells. In this paper, we model it as a deep learning
task termed subcellular structure prediction (SSP), aiming to predict the 3D
fluorescent images of multiple subcellular structures from a 3D
transmitted-light image. Unfortunately, due to the limitations of current
biotechnology, each image is partially labeled in SSP. Besides, naturally,
subcellular structures vary considerably in size, which causes the multi-scale
issue of SSP. To overcome these challenges, we propose Re-parameterizing
Mixture-of-Diverse-Experts (RepMode), a network that dynamically organizes its
parameters with task-aware priors to handle specified single-label prediction
tasks. In RepMode, the Mixture-of-Diverse-Experts (MoDE) block is designed to
learn the generalized parameters for all tasks, and gating re-parameterization
(GatRep) is performed to generate the specialized parameters for each task, by
which RepMode can maintain a compact practical topology exactly like a plain
network while achieving a powerful theoretical topology. Comprehensive
experiments show that RepMode can achieve state-of-the-art overall performance
in SSP.
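The property that makes gating re-parameterization possible is the linearity of convolution: a gate-weighted mixture of expert convolution outputs equals a single convolution whose kernel is the gate-weighted sum of the expert kernels, so the multi-expert "theoretical" topology collapses into one plain conv at inference. A minimal 1D sketch (the expert count, kernel size, and gate values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: four "expert" 1D conv kernels and task-aware gate weights.
experts = [rng.standard_normal(3) for _ in range(4)]   # four 3-tap kernels
gates = np.array([0.5, 0.2, 0.2, 0.1])                 # task-specific gates

x = rng.standard_normal(16)                            # input signal

# Theoretical topology: run every expert, then mix the outputs.
mixed_outputs = sum(g * np.convolve(x, k, mode="same")
                    for g, k in zip(gates, experts))

# Practical topology: merge kernels first (re-parameterization), run one conv.
merged_kernel = sum(g * k for g, k in zip(gates, experts))
single_output = np.convolve(x, merged_kernel, mode="same")

# Convolution is linear in its kernel, so both paths agree exactly.
assert np.allclose(mixed_outputs, single_output)
```

The same identity holds for 3D convolutions, which is why the merged network keeps the compact runtime cost of a plain network.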
Related papers
- Interpretability in Parameter Space: Minimizing Mechanistic Description Length with Attribution-based Parameter Decomposition [0.0]
We introduce a conceptual foundation for Attribution-based Parameter Decomposition (APD).
APD directly decomposes a neural network's parameters into components that are faithful to the parameters of the original network.
We demonstrate APD's effectiveness by successfully identifying ground truth mechanisms in toy experimental settings.
arXiv Detail & Related papers (2025-01-24T21:31:12Z) - Large Spatial Model: End-to-end Unposed Images to Semantic 3D [79.94479633598102]
Large Spatial Model (LSM) processes unposed RGB images directly into semantic radiance fields.
LSM simultaneously estimates geometry, appearance, and semantics in a single feed-forward operation.
It can generate versatile label maps by interacting with language at novel viewpoints.
arXiv Detail & Related papers (2024-10-24T17:54:42Z) - More Experts Than Galaxies: Conditionally-overlapping Experts With Biologically-Inspired Fixed Routing [5.846028298833611]
Conditionally Overlapping Mixture of ExperTs (COMET) is a general deep learning method that induces a modular, sparse architecture with an exponential number of overlapping experts.
We demonstrate the effectiveness of COMET on a range of tasks, including image classification, language modeling, and regression.
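One way to realize conditionally-overlapping experts with fixed (non-learned) routing is to derive a sparse binary mask over hidden units from a frozen random projection of the input; each distinct mask acts as an "expert", and similar inputs get overlapping masks. A hedged sketch of this idea, not the paper's exact formulation (all names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

d_in, d_hidden, k = 8, 32, 8      # keep k of 32 hidden units active

# Fixed routing matrix: initialized randomly and never trained.
routing = rng.standard_normal((d_in, d_hidden))
W = rng.standard_normal((d_in, d_hidden))  # ordinary learned weights

def sparse_expert_layer(x):
    # Fixed routing scores decide which hidden units fire for this input.
    scores = x @ routing
    mask = np.zeros(d_hidden)
    mask[np.argsort(scores)[-k:]] = 1.0    # top-k, input-conditioned
    # Each distinct mask defines an expert; masks overlap across inputs.
    return np.maximum(x @ W, 0.0) * mask

out = sparse_expert_layer(rng.standard_normal(d_in))
assert (out != 0).sum() <= k
```

Because the routing is fixed, no load-balancing loss or learned gate is needed; sparsity comes for free from the top-k mask.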
arXiv Detail & Related papers (2024-10-10T14:58:18Z) - Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z) - Self-Supervised Representation Learning for Nerve Fiber Distribution
Patterns in 3D-PLI [36.136619420474766]
3D-PLI is a microscopic imaging technique that enables insights into the fine-grained organization of myelinated nerve fibers with high resolution.
Best practices for observer-independent characterization of fiber architecture in 3D-PLI are not yet available.
We propose the application of a fully data-driven approach to characterize nerve fiber architecture in 3D-PLI images using self-supervised representation learning.
arXiv Detail & Related papers (2024-01-30T17:49:53Z) - PC-GANs: Progressive Compensation Generative Adversarial Networks for
Pan-sharpening [50.943080184828524]
We propose a novel two-step model for pan-sharpening that sharpens the MS image through the progressive compensation of the spatial and spectral information.
The whole model is composed of triple GANs, and based on the specific architecture, a joint compensation loss function is designed to enable the triple GANs to be trained simultaneously.
arXiv Detail & Related papers (2022-07-29T03:09:21Z) - RDCNet: Instance segmentation with a minimalist recurrent residual
network [0.14999444543328289]
We propose a minimalist recurrent network called recurrent dilated convolutional network (RDCNet).
RDCNet consists of a shared stacked dilated convolution (sSDC) layer that iteratively refines its output and thereby generates interpretable intermediate predictions.
We demonstrate its versatility on 3 tasks with different imaging modalities: nuclear segmentation of H&E slides, of 3D anisotropic stacks from light-sheet fluorescence microscopy and leaf segmentation of top-view images of plants.
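The core mechanism described above, a single shared stack of dilated convolutions applied iteratively so that each pass refines the previous output, can be sketched in a few lines. This is a hedged 1D illustration of weight-shared recurrent refinement, not RDCNet's actual architecture (kernel values, dilations, and step count are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Shared dilated 1D kernels (dilations 1, 2, 4), reused at every iteration.
kernels = {d: rng.standard_normal(3) * 0.1 for d in (1, 2, 4)}

def dilated_conv(x, k, dilation):
    # Insert zeros between kernel taps to realize the dilation.
    kd = np.zeros(2 * dilation + 1)
    kd[::dilation] = k
    return np.convolve(x, kd, mode="same")

def recurrent_refine(x, steps=3):
    preds = []
    h = x.copy()
    for _ in range(steps):  # the SAME weights are applied every iteration
        h = h + sum(dilated_conv(h, k, d) for d, k in kernels.items())
        preds.append(h.copy())  # interpretable intermediate prediction
    return preds

preds = recurrent_refine(rng.standard_normal(32))
assert len(preds) == 3
```

Sharing one small weight set across iterations is what keeps the parameter count minimal while still giving the network a large effective receptive field.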
arXiv Detail & Related papers (2020-10-02T13:36:45Z) - Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z) - Accurate and Lightweight Image Super-Resolution with Model-Guided Deep
Unfolding Network [63.69237156340457]
We present and advocate an explainable approach toward SISR named model-guided deep unfolding network (MoG-DUN).
MoG-DUN is accurate (producing fewer aliasing artifacts), computationally efficient (with reduced model parameters), and versatile (capable of handling multiple degradations).
The superiority of the proposed MoG-DUN method over existing state-of-the-art image methods including RCAN, SRDNF, and SRFBN is substantiated by extensive experiments on several popular datasets and various degradation scenarios.
arXiv Detail & Related papers (2020-09-14T08:23:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.