GSVNet: Guided Spatially-Varying Convolution for Fast Semantic
Segmentation on Video
- URL: http://arxiv.org/abs/2103.08834v1
- Date: Tue, 16 Mar 2021 03:38:59 GMT
- Title: GSVNet: Guided Spatially-Varying Convolution for Fast Semantic
Segmentation on Video
- Authors: Shih-Po Lee, Si-Cun Chen, Wen-Hsiao Peng
- Abstract summary: We propose a simple yet efficient propagation framework for video segmentation.
We perform lightweight flow estimation in 1/8-downscaled image space for temporal warping in segmentation output space.
We introduce a guided spatially-varying convolution for fusing segmentations derived from the previous and current frames, to mitigate propagation error.
- Score: 10.19019476978683
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses fast semantic segmentation on video. Video
segmentation often calls for real-time, or even faster-than-real-time,
processing. One common recipe for conserving the computation arising from
feature extraction is to propagate the features of a few selected keyframes.
However, recent advances in fast image segmentation make these solutions less
attractive. To leverage fast image segmentation for furthering video
segmentation, we propose a simple yet efficient propagation framework.
Specifically, we perform lightweight flow estimation in 1/8-downscaled image
space for temporal warping in segmentation output space. Moreover, we
introduce a guided spatially-varying convolution for fusing segmentations
derived from the previous and current frames, to mitigate propagation error
and enable lightweight feature extraction on non-keyframes. Experimental
results on Cityscapes and CamVid show that our scheme achieves the
state-of-the-art accuracy-throughput trade-off on video segmentation.
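The abstract describes two mechanisms that combine naturally into one
pipeline: (1) warping the previous frame's segmentation with optical flow
estimated at 1/8 scale, and (2) fusing the warped and current segmentations
with a convolution whose kernels vary per pixel, predicted from a guidance
signal. Below is a minimal PyTorch sketch of such a pipeline. It is an
illustration under assumptions, not the paper's implementation: the names
(warp_segmentation, GuidedSVConv), the kernel-prediction head, the depthwise
application via unfold, and all channel sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def warp_segmentation(seg_prev, flow):
    """Warp previous-frame segmentation logits to the current frame.

    seg_prev: (B, C, H, W) logits from frame t-1.
    flow:     (B, 2, H, W) per-pixel displacement from frame t back to
              frame t-1, already upsampled from the 1/8-scale estimate.
    """
    B, _, H, W = seg_prev.shape
    ys, xs = torch.meshgrid(
        torch.arange(H, device=flow.device, dtype=flow.dtype),
        torch.arange(W, device=flow.device, dtype=flow.dtype),
        indexing="ij",
    )
    # Displaced sampling locations in pixel coordinates.
    grid_x = xs.unsqueeze(0) + flow[:, 0]
    grid_y = ys.unsqueeze(0) + flow[:, 1]
    # grid_sample expects coordinates normalized to [-1, 1], (x, y) order.
    grid = torch.stack(
        (2.0 * grid_x / (W - 1) - 1.0, 2.0 * grid_y / (H - 1) - 1.0), dim=-1
    )
    return F.grid_sample(seg_prev, grid, align_corners=True)


class GuidedSVConv(nn.Module):
    """Spatially-varying fusion of warped and current segmentations
    (a sketch of the idea, not the paper's exact layer).

    A small head predicts one k*k kernel per pixel from a guidance
    feature; the kernel is applied depthwise over the concatenated
    segmentations, and a 1x1 convolution merges them back to C classes.
    """

    def __init__(self, guide_ch, num_classes, k=3):
        super().__init__()
        self.k = k
        self.kernel_head = nn.Conv2d(guide_ch, k * k, 3, padding=1)
        self.merge = nn.Conv2d(2 * num_classes, num_classes, 1)

    def forward(self, seg_warped, seg_curr, guide):
        x = torch.cat([seg_warped, seg_curr], dim=1)      # (B, 2C, H, W)
        B, C2, H, W = x.shape
        # Per-pixel kernels, softmax-normalized over the k*k window.
        kernels = torch.softmax(self.kernel_head(guide), dim=1)
        kernels = kernels.view(B, 1, self.k * self.k, H * W)
        # Gather the k*k neighborhood of every pixel, for every channel.
        patches = F.unfold(x, self.k, padding=self.k // 2)
        patches = patches.view(B, C2, self.k * self.k, H * W)
        out = (patches * kernels).sum(dim=2).view(B, C2, H, W)
        return self.merge(out)


# Usage with toy shapes: 19 Cityscapes classes, quarter-resolution tensors.
seg_prev = torch.randn(1, 19, 256, 512)   # frame t-1 logits
seg_curr = torch.randn(1, 19, 256, 512)   # fast frame-t logits
flow = torch.randn(1, 2, 256, 512)        # flow upsampled from 1/8 scale
guide = torch.randn(1, 32, 256, 512)      # guidance features from frame t

fuse = GuidedSVConv(guide_ch=32, num_classes=19)
fused = fuse(warp_segmentation(seg_prev, flow), seg_curr, guide)
print(fused.shape)  # torch.Size([1, 19, 256, 512])
```

In the paper's setting, the flow would come from a lightweight estimator run
on 1/8-downscaled frames and upsampled before warping; operating on
segmentation outputs rather than deep backbone features is what keeps the
per-frame cost low on non-keyframes.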