MeVGAN: GAN-based Plugin Model for Video Generation with Applications in
Colonoscopy
- URL: http://arxiv.org/abs/2311.03884v1
- Date: Tue, 7 Nov 2023 10:58:16 GMT
- Title: MeVGAN: GAN-based Plugin Model for Video Generation with Applications in
Colonoscopy
- Authors: Łukasz Struski, Tomasz Urbańczyk, Krzysztof Bucki, Bartłomiej
Cupiał, Aneta Kaczyńska, Przemysław Spurek, Jacek Tabor
- Abstract summary: We propose Memory Efficient Video GAN (MeVGAN), a Generative Adversarial Network (GAN) with a plugin-type architecture.
We use a pre-trained 2D-image GAN to construct respective trajectories in the noise space, so that the trajectory forwarded through the GAN model constructs a real-life video.
We show that MeVGAN can produce good-quality synthetic colonoscopy videos, which could potentially be used in virtual simulators.
- Score: 12.515404169717451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video generation is important, especially in medicine, where much
data comes in this form. However, generating high-resolution video is a very
demanding task for generative models due to its large memory requirements. In this
paper, we propose Memory Efficient Video GAN (MeVGAN) - a Generative
Adversarial Network (GAN) which uses plugin-type architecture. We use a
pre-trained 2D-image GAN and only add a simple neural network to construct
respective trajectories in the noise space, so that the trajectory forwarded
through the GAN model constructs a real-life video. We apply MeVGAN in the task
of generating colonoscopy videos. Colonoscopy is an important medical
procedure, especially beneficial in screening and managing colorectal cancer.
However, because colonoscopy is difficult and time-consuming to learn,
colonoscopy simulators are widely used in educating young colonoscopists. We
show that MeVGAN can produce good-quality synthetic colonoscopy videos, which
could potentially be used in virtual simulators.
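The plugin idea described in the abstract can be sketched as follows: a small trainable network maps one video-level latent to a trajectory of per-frame latents, and a frozen, pre-trained 2D image generator decodes each point on the trajectory into a frame. All shapes, names, and the linear stand-in networks below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, NUM_FRAMES, H, W = 128, 16, 64, 64

# Stand-in for the small plugin network: a fixed random linear map from one
# video-level latent to NUM_FRAMES per-frame latents (learned in the paper).
W_traj = rng.standard_normal((LATENT_DIM, NUM_FRAMES * LATENT_DIM)) * 0.01

def trajectory_net(z):
    """Map a latent (LATENT_DIM,) to a (NUM_FRAMES, LATENT_DIM) trajectory."""
    return (z @ W_traj).reshape(NUM_FRAMES, LATENT_DIM)

# Stand-in for the frozen, pre-trained 2D image GAN generator: its weights
# stay fixed while only the trajectory network would be trained.
W_gen = rng.standard_normal((LATENT_DIM, 3 * H * W)) * 0.01

def image_generator(z):
    """Decode one latent (LATENT_DIM,) into a (3, H, W) image."""
    return np.tanh(z @ W_gen).reshape(3, H, W)

z_video = rng.standard_normal(LATENT_DIM)   # one latent per video
traj = trajectory_net(z_video)              # trajectory in the noise space
video = np.stack([image_generator(z_t) for z_t in traj])
print(video.shape)                          # (16, 3, 64, 64)
```

Because the expensive image generator is reused frame by frame and never updated, only the small trajectory network would need gradients, which is where the memory saving in the title comes from.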
Related papers
- Frontiers in Intelligent Colonoscopy [96.57251132744446]
This study investigates the frontiers of intelligent colonoscopy techniques and their prospective implications for multimodal medical applications.
We assess the current data-centric and model-centric landscapes through four tasks for colonoscopic scene perception.
To embrace the coming multimodal era, we establish three foundational initiatives: a large-scale multimodal instruction tuning dataset ColonINST, a colonoscopy-designed multimodal language model ColonGPT, and a multimodal benchmark.
arXiv Detail & Related papers (2024-10-22T17:57:12Z)
- Self-Prompting Polyp Segmentation in Colonoscopy using Hybrid Yolo-SAM 2 Model [18.61909523131399]
This paper presents a novel approach to polyp segmentation by integrating the Segment Anything Model (SAM 2) with the YOLOv8 model.
Our method leverages YOLOv8's bounding box predictions to autonomously generate input prompts for SAM 2, thereby reducing the need for manual annotations.
We conducted exhaustive tests on five benchmark colonoscopy image datasets and two colonoscopy video datasets, demonstrating that our method exceeds state-of-the-art models in both image and video segmentation tasks.
arXiv Detail & Related papers (2024-09-14T17:11:37Z)
- Endora: Video Generation Models as Endoscopy Simulators [53.72175969751398]
This paper introduces Endora, an innovative approach to generating medical videos that simulate clinical endoscopy scenes.
We also pioneer the first public benchmark for endoscopy simulation with video generation models.
Endora marks a notable breakthrough in the deployment of generative AI for clinical endoscopy research.
arXiv Detail & Related papers (2024-03-17T00:51:59Z)
- REAL-Colon: A dataset for developing real-world AI applications in colonoscopy [1.8590283101866463]
We introduce the REAL-Colon (Real-world multi-center Endoscopy Annotated video Library) dataset.
It is a compilation of 2.7M native video frames from sixty full-resolution, real-world colonoscopy recordings across multiple centers.
The dataset contains 350k bounding-box annotations, each created under the supervision of expert gastroenterologists.
arXiv Detail & Related papers (2024-03-04T16:11:41Z)
- Vivim: a Video Vision Mamba for Medical Video Segmentation [52.11785024350253]
This paper presents a Video Vision Mamba-based framework, dubbed as Vivim, for medical video segmentation tasks.
Our Vivim can effectively compress the long-term representation into sequences at varying scales.
Experiments on thyroid segmentation, breast lesion segmentation in ultrasound videos, and polyp segmentation in colonoscopy videos demonstrate the effectiveness and efficiency of our Vivim.
arXiv Detail & Related papers (2024-01-25T13:27:03Z)
- Customizing General-Purpose Foundation Models for Medical Report Generation [64.31265734687182]
The scarcity of labelled medical image-report pairs presents great challenges in the development of deep and large-scale neural networks.
We propose customizing off-the-shelf general-purpose large-scale pre-trained models, i.e., foundation models (FMs) in computer vision and natural language processing.
arXiv Detail & Related papers (2023-06-09T03:02:36Z)
- Colo-SCRL: Self-Supervised Contrastive Representation Learning for Colonoscopic Video Retrieval [2.868043986903368]
We construct a large-scale colonoscopic dataset named Colo-Pair for medical practice.
Based on this dataset, a simple yet effective training method called Colo-SCRL is proposed for more robust representation learning.
It aims to refine general knowledge from colonoscopies through masked autoencoder-based reconstruction and momentum contrast to improve retrieval performance.
arXiv Detail & Related papers (2023-03-28T01:27:23Z)
- NanoNet: Real-Time Polyp Segmentation in Video Capsule Endoscopy and Colonoscopy [0.6125117548653111]
We propose NanoNet, a novel architecture for the segmentation of video capsule endoscopy and colonoscopy images.
Our proposed architecture allows real-time performance and has higher segmentation accuracy compared to other more complex ones.
arXiv Detail & Related papers (2021-04-22T15:40:28Z)
- Colonoscopy Polyp Detection: Domain Adaptation From Medical Report Images to Real-time Videos [76.37907640271806]
We propose an Image-video-joint polyp detection network (Ivy-Net) to address the domain gap between colonoscopy images from historical medical reports and real-time videos.
Experiments on the collected dataset demonstrate that our Ivy-Net achieves the state-of-the-art result on colonoscopy video.
arXiv Detail & Related papers (2020-12-31T10:33:09Z)
- SAG-GAN: Semi-Supervised Attention-Guided GANs for Data Augmentation on Medical Images [47.35184075381965]
We present a data augmentation method for generating synthetic medical images using cycle-consistent Generative Adversarial Networks (GANs).
The proposed GANs-based model can generate a tumor image from a normal image, and in turn, it can also generate a normal image from a tumor image.
We train classification models using real images with classic data augmentation methods, and classification models using synthetic images.
arXiv Detail & Related papers (2020-11-15T14:01:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.