CDFI: Cross Domain Feature Interaction for Robust Bronchi Lumen Detection
- URL: http://arxiv.org/abs/2304.09115v1
- Date: Tue, 18 Apr 2023 16:28:02 GMT
- Title: CDFI: Cross Domain Feature Interaction for Robust Bronchi Lumen Detection
- Authors: Jiasheng Xu, Tianyi Zhang, Yangqian Wu, Jie Yang, Guang-Zhong Yang, Yun Gu
- Abstract summary: A cross domain feature interaction (CDFI) network is proposed to extract the structural features of lumens.
A Quadruple Feature Constraints (QFC) module is designed to constrain the intrinsic connections among samples of varying imaging quality.
A Guided Feature Fusion (GFF) module is designed to supervise the model for adaptive feature fusion.
- Score: 36.048976865785605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Endobronchial intervention is increasingly used as a minimally invasive means for the treatment of pulmonary diseases. To reduce the difficulty of manipulation in complex airway networks, robust lumen detection is essential for intraoperative guidance. However, existing detection methods are sensitive to the visual artifacts that are inevitable during surgery. In this work, a cross domain feature interaction (CDFI) network is proposed to extract the structural features of lumens, as well as to provide artifact cues that characterize the visual features. To effectively extract the structural and artifact features, the Quadruple Feature Constraints (QFC) module is designed to constrain the intrinsic connections among samples of varying imaging quality. Furthermore, we design a Guided Feature Fusion (GFF) module to supervise the model for adaptive feature fusion based on the type of artifact. Results show that the features extracted by the proposed method preserve the structural information of the lumen in the presence of large visual variations, bringing much-improved lumen detection accuracy.
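The abstract gives only a high-level description of the two modules. As a rough illustration of what an artifact-guided fusion step could look like, the following PyTorch sketch gates structural features with cues from an artifact branch; the module name, tensor shapes, and gating design are assumptions made for illustration, not the authors' implementation of GFF.

```python
# Minimal sketch (not the authors' code): a gated fusion block in the spirit of
# "guided feature fusion", where artifact cues modulate how structural and
# visual features are combined.
import torch
import torch.nn as nn

class GuidedFeatureFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Predict per-channel gates from the artifact-cue features.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, structural: torch.Tensor, artifact: torch.Tensor) -> torch.Tensor:
        # Weight the structural features by how much the artifact cues suggest
        # they can be trusted, then fuse both streams with a 1x1 convolution.
        g = self.gate(artifact)                                   # (B, C, 1, 1), gates in [0, 1]
        fused = torch.cat([structural * g, artifact * (1 - g)], dim=1)
        return self.fuse(fused)                                   # (B, C, H, W)

# Example: fuse two 64-channel feature maps from a detector backbone.
gff = GuidedFeatureFusion(channels=64)
s = torch.randn(2, 64, 32, 32)
a = torch.randn(2, 64, 32, 32)
out = gff(s, a)  # -> torch.Size([2, 64, 32, 32])
```

In the paper, the fusion is additionally supervised so that the weighting adapts to the type of artifact; this sketch omits that supervision.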
Related papers
- PathSegDiff: Pathology Segmentation using Diffusion model representations [63.20694440934692]
We propose PathSegDiff, a novel approach for histopathology image segmentation that leverages Latent Diffusion Models (LDMs) as pre-trained feature extractors.
Our method utilizes a pathology-specific LDM, guided by a self-supervised encoder, to extract rich semantic information from H&E stained histopathology images.
Our experiments demonstrate significant improvements over traditional methods on the BCSS and GlaS datasets.
arXiv Detail & Related papers (2025-04-09T14:58:21Z)
- Multiscale Latent Diffusion Model for Enhanced Feature Extraction from Medical Images [5.395912799904941]
Variations in CT scanner models and acquisition protocols introduce significant variability in the extracted radiomic features.
LTDiff++ is a multiscale latent diffusion model designed to enhance feature extraction in medical imaging.
arXiv Detail & Related papers (2024-10-05T02:13:57Z)
- Unlocking Potential Binders: Multimodal Pretraining DEL-Fusion for Denoising DNA-Encoded Libraries [51.72836644350993]
We propose the Multimodal Pretraining DEL-Fusion model (MPDF).
We develop pretraining tasks applying contrastive objectives between different compound representations and their text descriptions.
We propose a novel DEL-fusion framework that amalgamates compound information at the atomic, submolecular, and molecular levels.
arXiv Detail & Related papers (2024-09-07T17:32:21Z)
- IAFI-FCOS: Intra- and across-layer feature interaction FCOS model for lesion detection of CT images [5.198119863305256]
The multi-scale feature fusion mechanism of most traditional detectors is unable to transmit detail information without loss.
We propose a novel intra- and across-layer feature interaction FCOS model (IAFI-FCOS) with a multi-scale feature fusion mechanism ICAF-FPN.
Our approach has been extensively evaluated on both the private pancreatic lesion dataset and the public DeepLesion dataset.
arXiv Detail & Related papers (2024-09-01T10:58:48Z)
- CathFlow: Self-Supervised Segmentation of Catheters in Interventional Ultrasound Using Optical Flow and Transformers [66.15847237150909]
We introduce a self-supervised deep learning architecture to segment catheters in longitudinal ultrasound images.
The network architecture builds upon AiAReSeg, a segmentation transformer built with the Attention in Attention mechanism.
We validated our model on a test dataset, consisting of unseen synthetic data and images collected from silicon aorta phantoms.
arXiv Detail & Related papers (2024-03-21T15:13:36Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- Introducing Shape Prior Module in Diffusion Model for Medical Image Segmentation [7.7545714516743045]
We propose an end-to-end framework called VerseDiff-UNet, which leverages the denoising diffusion probabilistic model (DDPM)
Our approach integrates the diffusion model into a standard U-shaped architecture.
We evaluate our method on a single dataset of spine images acquired through X-ray imaging.
arXiv Detail & Related papers (2023-09-12T03:05:00Z)
- Convolutional neural network based on sparse graph attention mechanism for MRI super-resolution [0.34410212782758043]
Medical image super-resolution (SR) reconstruction using deep learning techniques can enhance lesion analysis and assist doctors in improving diagnostic efficiency and accuracy.
Existing deep learning-based SR methods rely on convolutional neural networks (CNNs), which inherently limit the expressive capabilities of these models.
We propose an A-network that utilizes multiple convolution operator feature extraction modules (MCO) for extracting image features.
arXiv Detail & Related papers (2023-05-29T06:14:22Z)
- Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have achieved promising reconstruction performance.
We propose an orientation-shared convolution representation strategy to adapt the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z)
- Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation [67.19443246236048]
Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases.
Some small-sized airway branches (e.g., bronchi and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation.
This paper presents an efficient method for airway segmentation, comprising a novel fuzzy attention neural network and a comprehensive loss function.
arXiv Detail & Related papers (2022-09-05T16:38:13Z)