The SkatingVerse Workshop & Challenge: Methods and Results
- URL: http://arxiv.org/abs/2405.17188v1
- Date: Mon, 27 May 2024 14:12:07 GMT
- Title: The SkatingVerse Workshop & Challenge: Methods and Results
- Authors: Jian Zhao, Lei Jin, Jianshu Li, Zheng Zhu, Yinglei Teng, Jiaojiao Zhao, Sadaf Gulshad, Zheng Wang, Bo Zhao, Xiangbo Shu, Yunchao Wei, Xuecheng Nie, Xiaojie Jin, Xiaodan Liang, Shin'ichi Satoh, Yandong Guo, Cewu Lu, Junliang Xing, Jane Shen Shengmei
- Abstract summary: The SkatingVerse Workshop & Challenge aims to encourage research in developing novel and accurate methods for human action understanding.
The dataset used for the SkatingVerse Challenge has been publicly released.
Around 10 participating teams from around the globe competed in the SkatingVerse Challenge.
- Score: 137.81522563074287
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The SkatingVerse Workshop & Challenge aims to encourage research in developing novel and accurate methods for human action understanding. The SkatingVerse dataset used for the SkatingVerse Challenge has been publicly released. The dataset comprises two subsets: the training subset, which consists of 19,993 RGB video sequences, and the testing subset, which consists of 8,586 RGB video sequences. Around 10 participating teams from around the globe competed in the SkatingVerse Challenge. In this paper, we provide a brief summary of the SkatingVerse Workshop & Challenge, including brief introductions to the top three methods. The submission leaderboard will be reopened for researchers who are interested in the human action understanding challenge. The benchmark dataset and other information can be found at: https://skatingverse.github.io/.
Related papers
- AIM 2024 Sparse Neural Rendering Challenge: Methods and Results [64.19942455360068]
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024.
The challenge aims at producing novel camera view synthesis of diverse scenes from sparse image observations.
Participants are asked to optimise objective fidelity to the ground-truth images as measured via the Peak Signal-to-Noise Ratio (PSNR) metric.
arXiv Detail & Related papers (2024-09-23T14:17:40Z) - 3D Pose-Based Temporal Action Segmentation for Figure Skating: A Fine-Grained and Jump Procedure-Aware Annotation Approach [5.453385501324681]
In figure skating, technical judgments are performed by watching skaters' 3D movements, and this part of the judging procedure can be regarded as a Temporal Action Segmentation (TAS) task.
There is a lack of datasets and effective methods for TAS tasks requiring 3D pose data.
In this study, we first created the FS-Jump3D dataset of complex and dynamic figure skating jumps using optical markerless motion capture.
We also propose a new fine-grained figure skating jump TAS dataset annotation method with which TAS models can learn jump procedures.
arXiv Detail & Related papers (2024-08-29T15:42:06Z) - 1st Place Solution to the 1st SkatingVerse Challenge [12.17968838503053]
This paper presents the winning solution for the 1st SkatingVerse Challenge.
We leverage the DINO framework to extract the Region of Interest (ROI) and perform precise cropping of the raw video footage.
By ensembling the prediction results based on logits, our solution attains an impressive leaderboard score of 95.73%.
arXiv Detail & Related papers (2024-04-22T09:50:05Z) - OpenSUN3D: 1st Workshop Challenge on Open-Vocabulary 3D Scene Understanding [96.69806736025248]
This report provides an overview of the challenge hosted at the OpenSUN3D Workshop on Open-Vocabulary 3D Scene Understanding held in conjunction with ICCV 2023.
arXiv Detail & Related papers (2024-02-23T13:39:59Z) - NTIRE 2023 Quality Assessment of Video Enhancement Challenge [97.809937484099]
This paper reports on the NTIRE 2023 Quality Assessment of Video Enhancement Challenge.
The challenge addresses a major problem in the field of video processing, namely, video quality assessment (VQA) for enhanced videos.
The challenge has a total of 167 registered participants.
arXiv Detail & Related papers (2023-07-19T02:33:42Z) - ICDAR 2021 Competition on Scene Video Text Spotting [28.439390836950025]
Scene video text spotting (SVTS) is a very important research topic because of its many real-life applications.
This paper includes dataset descriptions, task definitions, evaluation protocols, and results summaries of the ICDAR 2021 SVTS competition.
arXiv Detail & Related papers (2021-07-26T01:25:57Z) - LID 2020: The Learning from Imperfect Data Challenge Results [242.86700551532272]
The Learning from Imperfect Data (LID) workshop aims to inspire and facilitate research in developing novel approaches.
We organize three challenges to find the state-of-the-art approaches in the weakly supervised learning setting.
This technical report summarizes the highlights from the challenge.
arXiv Detail & Related papers (2020-10-17T13:06:12Z) - The 1st Tiny Object Detection Challenge: Methods and Results [70.00081071453003]
The 1st Tiny Object Detection (TOD) Challenge aims to encourage research in developing novel and accurate methods for tiny object detection in wide-view images.
The TinyPerson dataset was used for the TOD Challenge and is publicly released.
arXiv Detail & Related papers (2020-09-16T07:01:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.