Towards Annotation-free Instance Segmentation and Tracking with
Adversarial Simulations
- URL: http://arxiv.org/abs/2101.00567v2
- Date: Tue, 19 Jan 2021 22:32:33 GMT
- Title: Towards Annotation-free Instance Segmentation and Tracking with
Adversarial Simulations
- Authors: Quan Liu, Isabella M. Gaeta, Mengyang Zhao, Ruining Deng, Aadarsh Jha,
Bryan A. Millis, Anita Mahadevan-Jansen, Matthew J. Tyska, Yuankai Huo
- Abstract summary: In computer vision, producing annotated training data with consistent segmentation and tracking is resource intensive.
Adversarial simulations have provided successful solutions in computer vision, such as training real-world self-driving systems in simulated environments.
This paper proposes an annotation-free synthetic instance segmentation and tracking (ASIST) method with adversarial simulation and single-stage pixel-embedding based learning.
- Score: 5.434831972326107
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The quantitative analysis of microscope videos often requires instance
segmentation and tracking of cellular and subcellular objects. The traditional
method is composed of two stages: (1) performing instance object segmentation
of each frame, and (2) associating objects frame-by-frame. Recently,
pixel-embedding-based deep learning approaches provide single-stage holistic
solutions to tackle instance segmentation and tracking simultaneously. However,
such deep learning methods require consistent annotations not only spatially
(for segmentation), but also temporally (for tracking). In computer vision,
annotated training data with consistent segmentation and tracking is resource
intensive, a burden that is multiplied in microscopy imaging due to (1) dense
objects (e.g., overlapping or touching) and (2) high dynamics (e.g., irregular
motion and mitosis). To alleviate the lack of such annotations in dynamic
scenes, adversarial simulations have provided successful solutions in
computer vision, such as using simulated environments (e.g., computer games) to
train real-world self-driving systems. In this paper, we propose an
annotation-free synthetic instance segmentation and tracking (ASIST) method
with adversarial simulation and single-stage pixel-embedding based learning.
The contribution of this paper is three-fold: (1) the proposed method
aggregates adversarial simulations and single-stage pixel-embedding based deep
learning; (2) the method is assessed with both the cellular (i.e., HeLa cells)
and subcellular (i.e., microvilli) objects; and (3) to the best of our
knowledge, this is the first study to explore annotation-free instance
segmentation and tracking for microscope videos. This ASIST method marks an
important step forward compared with fully supervised approaches.
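To make the single-stage pixel-embedding idea concrete, the sketch below groups foreground pixels into instances by clustering their learned embeddings. This is an illustrative assumption, not the paper's actual algorithm: the greedy seed-based grouping, the function name, and the margin parameter are all hypothetical, and practical methods typically learn embeddings with a discriminative loss and cluster with mean-shift or similar.

```python
import numpy as np

def cluster_embeddings(emb, fg_mask, margin=0.5):
    """Greedily group foreground pixels whose embeddings lie within
    `margin` of a seed pixel's embedding; each group is one instance.

    emb:     (H, W, D) per-pixel embedding map
    fg_mask: (H, W) boolean foreground mask
    returns: (H, W) integer instance-label map (0 = background)
    """
    labels = np.zeros(fg_mask.shape, dtype=int)
    remaining = fg_mask.copy()
    instance_id = 0
    while remaining.any():
        instance_id += 1
        ys, xs = np.nonzero(remaining)
        seed = emb[ys[0], xs[0]]                    # first unassigned pixel
        dist = np.linalg.norm(emb - seed, axis=-1)  # distance to seed embedding
        member = remaining & (dist < margin)        # pixels close to the seed
        labels[member] = instance_id
        remaining &= ~member
    return labels
```

Because the same object keeps a similar embedding from frame to frame, the same clustering space also supports tracking: instances in consecutive frames can be associated by nearest mean embedding, which is what makes the single-stage formulation attractive.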
Related papers
- Appearance-Based Refinement for Object-Centric Motion Segmentation [85.2426540999329]
We introduce an appearance-based refinement method that leverages temporal consistency in video streams to correct inaccurate flow-based proposals.
Our approach involves a sequence-level selection mechanism that identifies accurate flow-predicted masks as exemplars.
Its performance is evaluated on multiple video segmentation benchmarks, including DAVIS, YouTube, SegTrackv2, and FBMS-59.
arXiv Detail & Related papers (2023-12-18T18:59:51Z)
- Self-Supervised Interactive Object Segmentation Through a Singulation-and-Grasping Approach [9.029861710944704]
We propose a robot learning approach to interact with novel objects and collect each object's training label.
The Singulation-and-Grasping (SaG) policy is trained through end-to-end reinforcement learning.
Our system achieves 70% singulation success rate in simulated cluttered scenes.
arXiv Detail & Related papers (2022-07-19T15:01:36Z)
- Discovering Objects that Can Move [55.743225595012966]
We study the problem of object discovery -- separating objects from the background without manual labels.
Existing approaches utilize appearance cues, such as color, texture, and location, to group pixels into object-like regions.
We choose to focus on dynamic objects -- entities that can move independently in the world.
arXiv Detail & Related papers (2022-03-18T21:13:56Z)
- Sim2Real Object-Centric Keypoint Detection and Description [40.58367357980036]
Keypoint detection and description play a central role in computer vision.
We propose the object-centric formulation, which requires further identifying which object each interest point belongs to.
We develop a sim2real contrastive learning mechanism that can generalize the model trained in simulation to real-world applications.
arXiv Detail & Related papers (2022-02-01T15:00:20Z)
- Joint Inductive and Transductive Learning for Video Object Segmentation [107.32760625159301]
Semi-supervised object segmentation is a task of segmenting the target object in a video sequence given only a mask in the first frame.
Most previous best-performing methods adopt matching-based transductive reasoning or online inductive learning.
We propose to integrate transductive and inductive learning into a unified framework that exploits the complementarity between them for accurate and robust video object segmentation.
arXiv Detail & Related papers (2021-08-08T16:25:48Z)
- Learning to Track Instances without Video Annotations [85.9865889886669]
We introduce a novel semi-supervised framework by learning instance tracking networks with only a labeled image dataset and unlabeled video sequences.
We show that even when only trained with images, the learned feature representation is robust to instance appearance variations.
In addition, we integrate this module into single-stage instance segmentation and pose estimation frameworks.
arXiv Detail & Related papers (2021-04-01T06:47:41Z)
- Point-supervised Segmentation of Microscopy Images and Volumes via Objectness Regularization [2.243486411968779]
This work enables the training of semantic segmentation networks on images with only a single point for training per instance.
We achieve competitive results against the state-of-the-art in point-supervised semantic segmentation on challenging datasets in digital pathology.
arXiv Detail & Related papers (2021-03-09T18:40:00Z)
- Learning Monocular Depth in Dynamic Scenes via Instance-Aware Projection Consistency [114.02182755620784]
We present an end-to-end joint training framework that explicitly models 6-DoF motion of multiple dynamic objects, ego-motion and depth in a monocular camera setup without supervision.
Our framework is shown to outperform the state-of-the-art depth and motion estimation methods.
arXiv Detail & Related papers (2021-02-04T14:26:42Z)
- ASIST: Annotation-free synthetic instance segmentation and tracking for microscope video analysis [8.212196747588361]
We propose a novel annotation-free synthetic instance segmentation and tracking (ASIST) algorithm for analyzing microscope videos of sub-cellular microvilli.
From the experimental results, the proposed annotation-free method achieved superior performance compared with supervised learning.
arXiv Detail & Related papers (2020-11-02T14:39:26Z)
- DyStaB: Unsupervised Object Segmentation via Dynamic-Static Bootstrapping [72.84991726271024]
We describe an unsupervised method to detect and segment portions of images of live scenes that are seen moving as a coherent whole.
Our method first partitions the motion field by minimizing the mutual information between segments.
It uses the segments to learn object models that can be used for detection in a static image.
arXiv Detail & Related papers (2020-08-16T22:05:13Z)