Compound Figure Separation of Biomedical Images with Side Loss
- URL: http://arxiv.org/abs/2107.08650v1
- Date: Mon, 19 Jul 2021 07:16:32 GMT
- Title: Compound Figure Separation of Biomedical Images with Side Loss
- Authors: Tianyuan Yao, Chang Qu, Quan Liu, Ruining Deng, Yuanhan Tian, Jiachen
Xu, Aadarsh Jha, Shunxing Bao, Mengyang Zhao, Agnes B. Fogo, Bennett
A. Landman, Catie Chang, Haichun Yang, Yuankai Huo
- Abstract summary: In medical image analysis, even unannotated data can be difficult to obtain for individual labs.
We propose a simple compound figure separation (SimCFS) framework that uses weak classification annotations from individual images.
SimCFS achieved a new state-of-the-art performance on the ImageCLEF 2016 Compound Figure Separation Database.
- Score: 7.037505559439388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised learning algorithms (e.g., self-supervised learning,
auto-encoder, contrastive learning) allow deep learning models to learn
effective image representations from large-scale unlabeled data. In medical
image analysis, even unannotated data can be difficult to obtain for individual
labs. Fortunately, national-level efforts have been made to provide efficient
access to obtain biomedical image data from previous scientific publications.
For instance, NIH has launched the Open-i search engine that provides a
large-scale image database with free access. However, the images in scientific
publications consist of a considerable amount of compound figures with
subplots. To extract and curate individual subplots, many different compound
figure separation approaches have been developed, especially with the recent
advances in deep learning. However, previous approaches typically required
resource-intensive bounding-box annotations to train detection models. In this
paper, we propose a simple compound figure separation (SimCFS) framework that
uses weak classification annotations from individual images. Our technical
contribution is three-fold: (1) we introduce a new side loss that is designed
for compound figure separation; (2) we introduce an intra-class image
augmentation method to simulate hard cases; (3) the proposed framework enables
efficient deployment to new classes of images without requiring
resource-intensive bounding-box annotations. In our experiments, SimCFS achieved a new
state-of-the-art performance on the ImageCLEF 2016 Compound Figure Separation
Database. The source code of SimCFS is made publicly available at
https://github.com/hrlblab/ImageSeperation.
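The abstract does not give the exact form of the side loss, but its motivation can be sketched: when cropping subplots out of a compound figure, a box that extends outward past a subplot's true border pulls in pixels from a neighboring subplot, so overshoot should cost more than undershoot. Below is a minimal illustrative sketch of such an asymmetric box-regression penalty; the function name `side_loss`, the `(x1, y1, x2, y2)` box convention, and the `over_weight` factor are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def side_loss(pred_boxes, gt_boxes, over_weight=3.0):
    """Asymmetric box-regression penalty (illustrative sketch only).

    Boxes are (x1, y1, x2, y2). A prediction that extends outward past
    the true subplot border would pull pixels from a neighboring subplot
    into the crop, so outward error is weighted `over_weight` times
    harder than inward (undershoot) error.
    """
    pred = np.asarray(pred_boxes, dtype=float)
    gt = np.asarray(gt_boxes, dtype=float)
    # Signed outward error per side: positive means the predicted box
    # crosses the ground-truth border toward a neighboring subplot.
    err = np.stack([
        gt[:, 0] - pred[:, 0],   # left
        gt[:, 1] - pred[:, 1],   # top
        pred[:, 2] - gt[:, 2],   # right
        pred[:, 3] - gt[:, 3],   # bottom
    ], axis=1)
    weight = np.where(err > 0, over_weight, 1.0)
    return float((weight * np.abs(err)).mean())
```

With `over_weight=3.0`, overshooting the right border by two pixels costs three times as much as undershooting it by the same amount; in practice such a term would be added to a standard detection objective.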
Related papers
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Compound Figure Separation of Biomedical Images: Mining Large Datasets for Self-supervised Learning [12.445324044675116]
We introduce a simulation-based training framework that minimizes the need for resource-intensive bounding-box annotations.
We also propose a new side loss that is optimized for compound figure separation.
This is the first study that evaluates the efficacy of leveraging self-supervised learning with compound image separation.
arXiv Detail & Related papers (2022-08-30T16:02:34Z)
- Self-Supervised Generative Style Transfer for One-Shot Medical Image Segmentation [10.634870214944055]
In medical image segmentation, supervised deep networks' success comes at the cost of requiring abundant labeled data.
We propose a novel volumetric self-supervised learning for data augmentation capable of synthesizing volumetric image-segmentation pairs.
Our approach combines one-shot generative learning with the proposed self-supervised training strategy.
arXiv Detail & Related papers (2021-10-05T15:28:42Z)
- AugNet: End-to-End Unsupervised Visual Representation Learning with Image Augmentation [3.6790362352712873]
We propose AugNet, a new deep learning training paradigm to learn image features from a collection of unlabeled pictures.
Our experiments demonstrate that the method represents images in a low-dimensional space.
Unlike many deep-learning-based image retrieval algorithms, our approach does not require access to external annotated datasets.
arXiv Detail & Related papers (2021-06-11T09:02:30Z)
- Semantic Segmentation with Generative Models: Semi-Supervised Learning and Strong Out-of-Domain Generalization [112.68171734288237]
We propose a novel framework for discriminative pixel-level tasks using a generative model of both images and labels.
We learn a generative adversarial network that captures the joint image-label distribution and is trained efficiently using a large set of unlabeled images.
We demonstrate strong in-domain performance compared to several baselines, and are the first to showcase extreme out-of-domain generalization.
arXiv Detail & Related papers (2021-04-12T21:41:25Z)
- Self-supervised Image-text Pre-training With Mixed Data In Chest X-rays [10.398175542736285]
We introduce an image-text pre-training framework that can learn from mixed data inputs.
We demonstrate the feasibility of pre-training across mixed data inputs.
We also illustrate the benefits of adopting such pre-trained models in 3 chest X-ray applications.
arXiv Detail & Related papers (2021-03-30T01:48:46Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
- Towards Unsupervised Learning for Instrument Segmentation in Robotic Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation where the goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without acquiring expensive annotations.
We test our proposed method on Endovis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.