RLCorrector: Reinforced Proofreading for Connectomics Image Segmentation
- URL: http://arxiv.org/abs/2106.05487v1
- Date: Thu, 10 Jun 2021 04:02:41 GMT
- Title: RLCorrector: Reinforced Proofreading for Connectomics Image Segmentation
- Authors: Khoa Tuan Nguyen, Ganghee Jang and Won-ki Jeong
- Abstract summary: We propose a fully automatic proofreading method based on reinforcement learning.
The main idea is to model the human decision process in proofreading using a reinforcement agent.
We demonstrate the efficacy of the proposed system by comparing it with state-of-the-art proofreading methods.
- Score: 3.21359455541169
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The segmentation of nanoscale electron microscopy (EM) images is crucial but
challenging in connectomics. Recent advances in deep learning have demonstrated
the significant potential of automatic segmentation for tera-scale EM images.
However, none of the existing segmentation methods are error-free, and they
require proofreading, which is typically implemented as an interactive,
semi-automatic process via manual intervention. Herein, we propose a fully
automatic proofreading method based on reinforcement learning. The main idea is
to model the human decision process in proofreading with a reinforcement
learning agent. We systematically design the proposed
system by combining multiple reinforcement learning agents in a hierarchical
manner, where each agent focuses only on a specific task while preserving
dependency between agents. Furthermore, we show that the episodic task setting
of reinforcement learning can efficiently manage a combination of merge and
split errors presented concurrently in the input. We demonstrate the
efficacy of the proposed system by comparing it with state-of-the-art
proofreading methods using various testing examples.
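To make the hierarchical design concrete, the following minimal Python sketch illustrates the control flow suggested by the abstract: a top-level agent classifies each candidate region and delegates the correction to a task-specific agent, all within one episodic pass over the input. The class and function names (DetectionAgent, SplitAgent, MergeAgent, proofread_episode) are illustrative assumptions, not the authors' implementation; in the paper each agent is a trained reinforcement-learning policy operating on EM segmentation features.

```python
# Minimal sketch of a hierarchical proofreading loop (not the authors' code).
# Names and the placeholder policies are assumptions for illustration only.
import random


class DetectionAgent:
    """Top-level agent: decides whether a candidate region is correct,
    over-merged, or over-split (hypothetical interface)."""

    def act(self, region):
        # Placeholder policy; a trained agent would act on learned features.
        return random.choice(["correct", "merge_error", "split_error"])


class SplitAgent:
    """Task-specific agent that proposes a split for an over-merged region."""

    def act(self, region):
        return {"op": "split", "region": region}


class MergeAgent:
    """Task-specific agent that proposes a merge for an over-split region."""

    def act(self, region):
        return {"op": "merge", "region": region}


def proofread_episode(regions, detector, splitter, merger):
    """One episodic pass: the detector routes each candidate region to the
    agent responsible for its error type, so merge and split errors can be
    handled together within a single episode."""
    edits = []
    for region in regions:
        decision = detector.act(region)
        if decision == "merge_error":
            edits.append(splitter.act(region))
        elif decision == "split_error":
            edits.append(merger.act(region))
        # Regions judged "correct" are left untouched.
    return edits


if __name__ == "__main__":
    candidate_regions = [f"region_{i}" for i in range(5)]  # stand-in for EM segments
    print(proofread_episode(candidate_regions,
                            DetectionAgent(), SplitAgent(), MergeAgent()))
```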
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- An Information Compensation Framework for Zero-Shot Skeleton-based Action Recognition [49.45660055499103]
Zero-shot human skeleton-based action recognition aims to construct a model that can recognize actions outside the categories seen during training.
Previous research has focused on aligning sequences' visual and semantic spatial distributions.
We introduce a new loss function sampling method to obtain a tight and robust representation.
arXiv Detail & Related papers (2024-06-02T06:53:01Z)
- DPL: Decoupled Prompt Learning for Vision-Language Models [41.90997623029582]
We propose a new method, Decoupled Prompt Learning, which reformulates the attention in prompt learning to alleviate this problem.
Our approach is flexible for both visual and textual modalities, making it easily extendable to multi-modal prompt learning.
arXiv Detail & Related papers (2023-08-19T15:48:38Z)
- Multi-scale Target-Aware Framework for Constrained Image Splicing Detection and Localization [11.803255600587308]
We propose a multi-scale target-aware framework to couple feature extraction and correlation matching in a unified pipeline.
Our approach effectively promotes collaborative learning of related patches, allowing feature learning and correlation matching to reinforce each other.
Our experiments demonstrate that our model, which uses a unified pipeline, outperforms state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2023-08-18T07:38:30Z)
- Semi-supervised learning made simple with self-supervised clustering [65.98152950607707]
Self-supervised learning models have been shown to learn rich visual representations without requiring human annotations.
We propose a conceptually simple yet empirically powerful approach to turn clustering-based self-supervised methods into semi-supervised learners.
arXiv Detail & Related papers (2023-06-13T01:09:18Z)
- MA2CL: Masked Attentive Contrastive Learning for Multi-Agent Reinforcement Learning [128.19212716007794]
We propose an effective framework called Multi-Agent Masked Attentive Contrastive Learning (MA2CL).
MA2CL encourages the learned representation to be both temporally and agent-level predictive by reconstructing masked agent observations in latent space.
Our method significantly improves the performance and sample efficiency of different MARL algorithms and outperforms other methods in various vision-based and state-based scenarios.
arXiv Detail & Related papers (2023-06-03T05:32:19Z)
- Localized Region Contrast for Enhancing Self-Supervised Learning in Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach involves identifying super-pixels with Felzenszwalb's algorithm and performing local contrastive learning using a novel contrastive sampling loss.
arXiv Detail & Related papers (2023-04-06T22:43:13Z)
- Neural Coreference Resolution based on Reinforcement Learning [53.73316523766183]
Coreference resolution systems need to solve two subtasks.
One is to detect all potential mentions; the other is to learn to link an antecedent to each possible mention.
We propose a reinforcement learning actor-critic-based neural coreference resolution system.
arXiv Detail & Related papers (2022-12-18T07:36:35Z)
- Efficient Self-Supervision using Patch-based Contrastive Learning for Histopathology Image Segmentation [0.456877715768796]
We propose a framework for self-supervised image segmentation using contrastive learning on image patches.
A fully convolutional neural network (FCNN) is trained in a self-supervised manner to discern features in the input images.
The proposed model consists only of a simple FCNN with 10.8k parameters and requires about 5 minutes to converge on the high-resolution microscopy datasets.
arXiv Detail & Related papers (2022-08-23T07:24:47Z)
- OCTAve: 2D en face Optical Coherence Tomography Angiography Vessel Segmentation in Weakly-Supervised Learning with Locality Augmentation [14.322349196837209]
We propose the application of a scribble-based weakly-supervised learning method to automate pixel-level annotation.
The proposed method, called OCTAve, combines weakly-supervised learning on scribble-annotated ground truth with adversarial and novel self-supervised deep supervision.
arXiv Detail & Related papers (2022-07-25T14:40:56Z)
- Weakly Supervised Semantic Segmentation via Alternative Self-Dual Teaching [82.71578668091914]
This paper establishes a compact learning framework that embeds the classification and mask-refinement components into a unified deep model.
We propose a novel alternative self-dual teaching (ASDT) mechanism to encourage high-quality knowledge interaction.
arXiv Detail & Related papers (2021-12-17T11:56:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.