Q-Net: Query-Informed Few-Shot Medical Image Segmentation
- URL: http://arxiv.org/abs/2208.11451v1
- Date: Wed, 24 Aug 2022 11:36:53 GMT
- Title: Q-Net: Query-Informed Few-Shot Medical Image Segmentation
- Authors: Qianqian Shen, Yanan Li, Jiyong Jin, Bin Liu
- Abstract summary: We propose a Query-informed Meta-FSS approach, which mimics the learning mechanism of an expert clinician.
We build Q-Net based on ADNet, a recently proposed anomaly detection-inspired method.
Q-Net achieves state-of-the-art performance on two widely used datasets.
- Score: 5.615188751640673
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep learning has achieved tremendous success in computer vision, yet
medical image segmentation (MIS) remains a challenge due to the scarcity of
data annotations. Meta-learning techniques for few-shot segmentation (Meta-FSS)
have been widely used to tackle this challenge, but they neglect possible
distribution shifts between the query image and the support set. In contrast,
an experienced clinician can perceive and address such shifts by borrowing
information from the query image, then fine-tuning or calibrating their prior
cognitive model accordingly. Inspired by this, we propose Q-Net, a
Query-informed Meta-FSS approach, which mimics in spirit the learning mechanism
of an expert clinician. We build Q-Net based on ADNet, a recently proposed
anomaly detection-inspired method. Specifically, we add two query-informed
computation modules into ADNet, namely a query-informed threshold adaptation
module and a query-informed prototype refinement module. Combining them with a
dual-path extension of the feature extraction module, Q-Net achieves
state-of-the-art performance on two widely used datasets, which are composed of
abdominal MR images and cardiac MR images, respectively. Our work sheds light
on a novel way to improve Meta-FSS techniques by leveraging query information.
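The two query-informed steps the abstract names (threshold adaptation and prototype refinement on top of a prototype-similarity pipeline) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the statistics-based threshold and the masked-pooling refinement are illustrative assumptions.

```python
import numpy as np

def similarity_map(features, prototype):
    """Per-pixel cosine similarity between query features (H, W, C)
    and a class prototype (C,); higher = more foreground-like."""
    f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    return f @ (prototype / (np.linalg.norm(prototype) + 1e-8))

def query_informed_segment(query_feats, support_prototype, n_refine=3):
    """Hypothetical sketch of the two query-informed steps:
    1) threshold adaptation -- derive the foreground threshold from the
       query's own score statistics rather than a fixed value;
    2) prototype refinement -- re-estimate the prototype from the query
       pixels currently predicted as foreground, then re-segment."""
    prototype = support_prototype
    for _ in range(n_refine):
        scores = similarity_map(query_feats, prototype)
        # query-informed threshold: midpoint between the query's maximum
        # and mean similarity (an illustrative heuristic, not ADNet's)
        thresh = 0.5 * (scores.max() + scores.mean())
        mask = scores > thresh
        if mask.any():
            # refine via masked average pooling over the query itself
            prototype = query_feats[mask].mean(axis=0)
    return mask

# toy usage: an 8x8 feature map whose centre patch shares a direction
rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 8, 16))
feats[2:5, 2:5] += 3.0
proto = feats[2:5, 2:5].reshape(-1, 16).mean(axis=0)
mask = query_informed_segment(feats, proto)
```

Iterating segmentation and prototype re-estimation on the query is what lets the model adapt to a distribution shift the support set alone cannot reveal.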
Related papers
- Hierarchical Modeling for Medical Visual Question Answering with Cross-Attention Fusion [4.821565717653691]
Medical Visual Question Answering (Med-VQA) answers clinical questions using medical images, aiding diagnosis.
This study proposes a HiCA-VQA method, including two modules: Hierarchical Prompting for fine-grained medical questions and Hierarchical Answer Decoders.
Experiments on the Rad-Restruct benchmark demonstrate that the HiCA-VQA framework outperforms existing state-of-the-art methods in answering hierarchical fine-grained questions.
arXiv Detail & Related papers (2025-04-04T03:03:12Z) - Advancing Medical Image Segmentation: Morphology-Driven Learning with Diffusion Transformer [4.672688418357066]
We propose a novel Diffusion Transformer Segmentation (DTS) model for robust segmentation in the presence of noise.
Our model, which analyzes the morphological representation of images, shows better results than the previous models in various medical imaging modalities.
arXiv Detail & Related papers (2024-08-01T07:35:54Z) - A Mutual Inclusion Mechanism for Precise Boundary Segmentation in Medical Images [2.9137615132901704]
We present a novel deep learning-based approach, MIPC-Net, for precise boundary segmentation in medical images.
We introduce the MIPC module, which enhances the focus on channel information when extracting position features.
We also propose the GL-MIPC-Residue, a global residual connection that enhances the integration of the encoder and decoder.
arXiv Detail & Related papers (2024-04-12T02:14:35Z) - Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose DEC-Seg, a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Few-shot Medical Image Segmentation via Cross-Reference Transformer [3.2634122554914]
Few-shot segmentation (FSS) has the potential to address the challenge of scarce annotations by learning new categories from a small number of labeled samples.
We propose a novel self-supervised few-shot medical image segmentation network with a Cross-Reference Transformer.
Experimental results show that the proposed model achieves good results on both CT and MRI datasets.
arXiv Detail & Related papers (2023-04-19T13:05:18Z) - Few Shot Medical Image Segmentation with Cross Attention Transformer [30.54965157877615]
We propose a novel framework for few-shot medical image segmentation, termed CAT-Net.
Our proposed network mines the correlations between the support image and query image, limiting them to focus only on useful foreground information.
We validated the proposed method on three public datasets: Abd-CT, Abd-MRI, and Card-MRI.
arXiv Detail & Related papers (2023-03-24T09:10:14Z) - MedSegDiff-V2: Diffusion based Medical Image Segmentation with Transformer [53.575573940055335]
We propose a novel Transformer-based Diffusion framework, called MedSegDiff-V2.
We verify its effectiveness on 20 medical image segmentation tasks with different image modalities.
arXiv Detail & Related papers (2023-01-19T03:42:36Z) - InDuDoNet+: A Model-Driven Interpretable Dual Domain Network for Metal Artifact Reduction in CT Images [53.4351366246531]
We construct a novel interpretable dual domain network, termed InDuDoNet+, into which the CT imaging process is finely embedded.
We analyze the CT values among different tissues and merge these prior observations into a prior network for InDuDoNet+, which significantly improves its generalization performance.
arXiv Detail & Related papers (2021-12-23T15:52:37Z) - Recurrent Mask Refinement for Few-Shot Medical Image Segmentation [15.775057485500348]
We propose a new framework for few-shot medical image segmentation based on prototypical networks.
Our innovation lies in the design of two key modules, including 1) a context relation encoder (CRE) that uses correlation to capture local relation features between foreground and background regions.
Experiments on two abdomen CT datasets and an abdomen MRI dataset show the proposed method obtains substantial improvement over the state-of-the-art methods.
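The correlation idea behind the CRE can be illustrated with a minimal sketch. The actual encoder is learned; the masked average pooling and cosine correlation below are illustrative assumptions standing in for it.

```python
import numpy as np

def region_prototypes(feats, mask):
    """Masked average pooling: foreground/background prototypes from
    support features (H, W, C) and a binary mask (H, W)."""
    m = mask.astype(bool)
    return feats[m].mean(axis=0), feats[~m].mean(axis=0)

def relation_features(query_feats, fg_proto, bg_proto):
    """Per-pixel cosine correlation of query features with the foreground
    and background prototypes, stacked as a 2-channel relation map."""
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8)
    def corr(p):
        return q @ (p / (np.linalg.norm(p) + 1e-8))
    return np.stack([corr(fg_proto), corr(bg_proto)], axis=-1)  # (H, W, 2)

# toy usage on random 4x4 feature maps
rng = np.random.default_rng(1)
support = rng.normal(size=(4, 4, 8))
support_mask = np.zeros((4, 4))
support_mask[:2] = 1
fg, bg = region_prototypes(support, support_mask)
rel = relation_features(rng.normal(size=(4, 4, 8)), fg, bg)
```

Relating each query pixel to both regions, rather than to the foreground alone, is what lets correlation separate ambiguous boundary pixels.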
arXiv Detail & Related papers (2021-08-02T04:06:12Z) - Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z) - Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.