Title: μSplit: efficient image decomposition for microscopy data
Authors: Ashesh, Alexander Krull, Moises Di Sante, Francesco Silvio Pasqualini,
Florian Jug
Abstract summary: μSplit is a dedicated approach for trained image decomposition in the context of fluorescence microscopy images.
We introduce lateral contextualization (LC), a novel meta-architecture that enables the memory-efficient incorporation of large image context.
We apply μSplit to five decomposition tasks: one on a synthetic dataset and four derived from real microscopy data.
Abstract: We present μSplit, a dedicated approach for trained image decomposition in the context of fluorescence microscopy images. We find that the best results using regular deep architectures are achieved when large image patches are used during training, making memory consumption the limiting factor to further improving performance. We therefore introduce lateral contextualization (LC), a novel meta-architecture that enables the memory-efficient incorporation of large image context, which we observe is a key ingredient to solving the image decomposition task at hand. We integrate LC with U-Nets, Hierarchical AEs, and Hierarchical VAEs, for which we formulate a modified ELBO loss. Additionally, LC enables training deeper hierarchical models than otherwise possible and, interestingly, helps to reduce tiling artefacts that are otherwise impossible to avoid when using tiled VAE predictions. We apply μSplit to five decomposition tasks: one on a synthetic dataset and four derived from real microscopy data. Our method consistently achieves the best results (an average improvement of 2.25 dB PSNR over the best baseline), while simultaneously requiring considerably less GPU memory. Our code and datasets can be found at https://github.com/juglab/uSplit.
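The central idea of lateral contextualization, as described in the abstract, is to give each level of a hierarchical network a progressively larger field of view around the primary patch, downsampled so that the per-level memory cost stays constant. A minimal sketch of such a multi-scale input pyramid is shown below; the function name, the reflect padding, and the average-pooling downsampler are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def lc_patch_pyramid(img, cy, cx, patch=8, levels=3):
    """Build an LC-style multi-scale input pyramid around (cy, cx).

    Level 0 is the primary patch; each higher level covers a 2x larger
    field of view around the same center, average-pooled back down to
    `patch` x `patch` pixels so the memory cost per level is constant.
    (Illustrative sketch only, not the exact uSplit implementation.)
    """
    pyramid = []
    for k in range(levels):
        half = patch * (2 ** k) // 2
        # reflect-pad so crops near the image border are well defined
        pad = half
        padded = np.pad(img, pad, mode="reflect")
        crop = padded[pad + cy - half: pad + cy + half,
                      pad + cx - half: pad + cx + half]
        f = 2 ** k
        # average-pool the (patch*f x patch*f) crop down to patch x patch
        down = crop.reshape(patch, f, patch, f).mean(axis=(1, 3))
        pyramid.append(down)
    return pyramid
```

At level k the crop covers a 2^k-times larger field of view yet still costs only patch × patch pixels, which is what makes incorporating large image context memory-efficient compared to simply enlarging the training patch.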
Related papers
Serpent: Scalable and Efficient Image Restoration via Multi-scale Structured State Space Models (arXiv, 2024-03-26)
Serpent is an efficient architecture for high-resolution image restoration. We show that Serpent can achieve reconstruction quality on par with state-of-the-art techniques.
You Can Mask More For Extremely Low-Bitrate Image Compression (arXiv, 2023-06-27)
Learned image compression (LIC) methods have experienced significant progress during recent years, but fail to explicitly explore the image structure and texture components crucial for compression. We present DA-Mask, which samples visible patches based on the structure and texture of original images, and propose a simple yet effective masked compression model (MCM), the first framework that unifies LIC and masked image modeling end-to-end for extremely low-bitrate compression.
Beyond Learned Metadata-based Raw Image Reconstruction (arXiv, 2023-06-21)
Raw images have distinct advantages over sRGB images, e.g., linearity and fine-grained quantization levels, but they are not widely adopted by general users due to their substantial storage requirements. We propose a novel framework that learns a compact representation in the latent space, serving as metadata.
Raw Image Reconstruction with Learned Compact Metadata (arXiv, 2023-02-25)
We propose a novel framework that learns a compact representation in the latent space, serving as the metadata, in an end-to-end manner. We show how the proposed raw image compression scheme can adaptively allocate more bits to image regions that are important from a global perspective.
Multi-scale Transformer Network with Edge-aware Pre-training for Cross-Modality MR Image Synthesis (arXiv, 2022-12-02)
Cross-modality magnetic resonance (MR) image synthesis can be used to generate missing modalities from given ones. Existing (supervised learning) methods often require a large number of paired multi-modal data to train an effective synthesis model. We propose a Multi-scale Transformer Network (MT-Net) with edge-aware pre-training for cross-modality MR image synthesis.
Rethinking the Paradigm of Content Constraints in Unpaired Image-to-Image Translation (arXiv, 2022-11-20)
We propose EnCo, a simple but efficient way to maintain content by constraining the representational similarity in the latent space of patch-level features. For the similarity function, we use a simple MSE loss instead of the contrastive loss that is currently widely used in I2I tasks. In addition, we rethink the role played by discriminators in sampling patches and propose a discriminative attention-guided (DAG) patch sampling strategy to replace random sampling.
Joint Super-Resolution and Inverse Tone-Mapping: A Feature Decomposition Aggregation Network and A New Benchmark (arXiv, 2022-07-07)
We propose a lightweight Feature Decomposition Aggregation Network (FDAN) to exploit the potential power of the decomposition mechanism. In particular, we design a Feature Decomposition Block (FDB) that achieves learnable separation of detail and base feature maps. We also collect a large-scale dataset for joint SR-ITM, i.e., SRITM-4K, which provides versatile scenarios for robust model training and evaluation.
Learning strides in convolutional neural networks (arXiv, 2022-02-03)
This work introduces DiffStride, the first downsampling layer with learnable strides. Experiments on audio and image classification show the generality and effectiveness of our solution.
Adaptive Context-Aware Multi-Modal Network for Depth Completion (arXiv, 2020-08-25)
We propose to adopt graph propagation to capture the observed spatial contexts. We then apply an attention mechanism to the propagation, which encourages the network to model contextual information adaptively. Finally, we introduce a symmetric gated fusion strategy to exploit the extracted multi-modal features effectively. Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves state-of-the-art performance on two benchmarks.
Adaptive Fractional Dilated Convolution Network for Image Aesthetics Assessment (arXiv, 2020-04-06)
An adaptive fractional dilated convolution (AFDC) is developed to tackle this issue at the convolutional kernel level. We provide a concise formulation for mini-batch training and utilize a grouping strategy to reduce computational overhead. Our experimental results demonstrate that our proposed method achieves state-of-the-art performance on image aesthetics assessment on the AVA dataset.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.