PVUW 2025 Challenge Report: Advances in Pixel-level Understanding of Complex Videos in the Wild
- URL: http://arxiv.org/abs/2504.11326v2
- Date: Mon, 21 Apr 2025 07:23:45 GMT
- Title: PVUW 2025 Challenge Report: Advances in Pixel-level Understanding of Complex Videos in the Wild
- Authors: Henghui Ding, Chang Liu, Nikhila Ravi, Shuting He, Yunchao Wei, Song Bai, Philip Torr, Kehuan Song, Xinglin Xie, Kexin Zhang, Licheng Jiao, Lingling Li, Shuyuan Yang, Xuqiang Cao, Linnan Zhao, Jiaxuan Zhao, Fang Liu, Mengjiao Wang, Junpei Zhang, Xu Liu, Yuting Yang, Mengru Ma, Hao Fang, Runmin Cong, Xiankai Lu, Zhiyang Chen, Wei Zhang, Tianming Liang, Haichao Jiang, Wei-Shi Zheng, Jian-Fang Hu, Haobo Yuan, Xiangtai Li, Tao Zhang, Lu Qi, Ming-Hsuan Yang,
- Abstract summary: This report provides a comprehensive overview of the 4th Pixel-level Video Understanding in the Wild (PVUW) Challenge, held in conjunction with CVPR 2025. The challenge features two tracks: MOSE, which focuses on complex scene video object segmentation, and MeViS, which targets motion-guided, language-based video segmentation.
- Score: 164.8093566483583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This report provides a comprehensive overview of the 4th Pixel-level Video Understanding in the Wild (PVUW) Challenge, held in conjunction with CVPR 2025. It summarizes the challenge outcomes, participating methodologies, and future research directions. The challenge features two tracks: MOSE, which focuses on complex scene video object segmentation, and MeViS, which targets motion-guided, language-based video segmentation. Both tracks introduce new, more challenging datasets designed to better reflect real-world scenarios. Through detailed evaluation and analysis, the challenge offers valuable insights into the current state-of-the-art and emerging trends in complex video segmentation. More information can be found on the workshop website: https://pvuw.github.io/.
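Both MOSE and MeViS are typically scored with a DAVIS-style region-and-contour (J&F) metric. The sketch below is an illustrative assumption, not the challenge's official evaluation toolkit: it shows how such a per-frame score can be computed from binary masks, using a crude one-pixel boundary extraction and no boundary tolerance band (the official toolkits dilate boundaries before matching).

```python
# Minimal sketch of a DAVIS-style J&F evaluation for a single frame.
# Assumptions (not taken from the report): binary numpy masks, a crude
# one-pixel boundary obtained by erosion, and exact boundary matching.
import numpy as np
from scipy.ndimage import binary_erosion


def region_similarity(pred: np.ndarray, gt: np.ndarray) -> float:
    """J: intersection-over-union between predicted and ground-truth masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else float(inter / union)


def boundary_f_measure(pred: np.ndarray, gt: np.ndarray) -> float:
    """F: precision/recall F-score on (crude) mask boundaries."""
    pred_b = pred & ~binary_erosion(pred)  # boundary = mask minus its erosion
    gt_b = gt & ~binary_erosion(gt)
    if pred_b.sum() == 0 or gt_b.sum() == 0:
        return float(pred_b.sum() == gt_b.sum())
    precision = (pred_b & gt_b).sum() / pred_b.sum()
    recall = (pred_b & gt_b).sum() / gt_b.sum()
    if precision + recall == 0:
        return 0.0
    return float(2 * precision * recall / (precision + recall))


def j_and_f(pred: np.ndarray, gt: np.ndarray) -> float:
    """J&F: the mean of region similarity and boundary accuracy."""
    return 0.5 * (region_similarity(pred, gt) + boundary_f_measure(pred, gt))


# Toy usage: two overlapping square masks on a 64x64 frame.
gt = np.zeros((64, 64), dtype=bool); gt[10:40, 10:40] = True
pred = np.zeros((64, 64), dtype=bool); pred[12:42, 12:42] = True
print(f"J&F = {j_and_f(pred, gt):.3f}")
```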
Related papers
- AIM 2024 Sparse Neural Rendering Challenge: Methods and Results [64.19942455360068]
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024.
The challenge aims at novel camera-view synthesis of diverse scenes from sparse image observations.
Participants are asked to optimise objective fidelity to the ground-truth images as measured via the Peak Signal-to-Noise Ratio (PSNR) metric.
arXiv Detail & Related papers (2024-09-23T14:17:40Z)
- LSVOS Challenge Report: Large-scale Complex and Long Video Object Segmentation [124.50550604020684]
This paper introduces the 6th Large-scale Video Object Segmentation (LSVOS) challenge, held in conjunction with the ECCV 2024 workshop.
This year's challenge includes two tasks: Video Object Segmentation (VOS) and Referring Video Object Segmentation (RVOS).
This year's challenge attracted 129 registered teams from more than 20 institutes in over 8 countries.
arXiv Detail & Related papers (2024-09-09T17:45:45Z)
- PVUW 2024 Challenge on Complex Video Understanding: Methods and Results [199.5593316907284]
We add two new tracks: a Complex Video Object Segmentation track based on the MOSE dataset and a Motion Expression guided Video Segmentation track based on the MeViS dataset.
In the two new tracks, we provide additional videos and annotations that feature challenging elements.
These new videos, sentences, and annotations enable us to foster the development of a more comprehensive and robust pixel-level understanding of video scenes.
arXiv Detail & Related papers (2024-06-24T17:38:58Z)
- 1st Place Solution for MeViS Track in CVPR 2024 PVUW Workshop: Motion Expression guided Video Segmentation [81.50620771207329]
We investigate the effectiveness of static-dominant data and frame sampling on referring video object segmentation (RVOS).
Our solution achieves a J&F score of 0.5447 in the competition phase and ranks 1st in the MeViS track of the PVUW Challenge.
arXiv Detail & Related papers (2024-06-11T08:05:26Z)
- The Second Place Solution for The 4th Large-scale Video Object Segmentation Challenge--Track 3: Referring Video Object Segmentation [18.630453674396534]
ReferFormer aims to segment the object instances referred to by a language expression across all frames of a given video.
This work proposes several tricks to further boost performance, including cyclical learning rates, a semi-supervised approach, and test-time augmentation at inference.
The improved ReferFormer ranks 2nd in the CVPR 2022 Referring YouTube-VOS Challenge.
arXiv Detail & Related papers (2022-06-24T02:15:06Z)
- ViSeRet: A simple yet effective approach to moment retrieval via fine-grained video segmentation [6.544437737391409]
This paper presents the 1st place solution to the video retrieval track of the ICCV VALUE Challenge 2021.
We present a simple yet effective approach to jointly tackle two video-text retrieval tasks.
We create an ensemble model that achieves the new state-of-the-art performance on all four datasets.
arXiv Detail & Related papers (2021-10-11T10:39:13Z)
- The End-of-End-to-End: A Video Understanding Pentathlon Challenge (2020) [186.7816349401443]
We present a new video understanding pentathlon challenge, an open competition held in conjunction with the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2020.
The objective of the challenge was to explore and evaluate new methods for text-to-video retrieval.
arXiv Detail & Related papers (2020-08-03T09:55:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.