CS-lol: a Dataset of Viewer Comment with Scene in E-sports
Live-streaming
- URL: http://arxiv.org/abs/2301.06876v1
- Date: Tue, 17 Jan 2023 13:34:06 GMT
- Title: CS-lol: a Dataset of Viewer Comment with Scene in E-sports
Live-streaming
- Authors: Junjie H. Xu and Yu Nakano and Lingrong Kong and Kojiro Iizuka
- Abstract summary: Billions of live-streaming viewers share their opinions on scenes they are watching in real-time and interact with the event.
We develop CS-lol, a dataset containing comments from viewers paired with descriptions of game scenes in E-sports live-streaming.
We propose a task, namely viewer comment retrieval, to retrieve the viewer comments for the scene of the live-streaming event.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Billions of live-streaming viewers share their opinions on scenes they are
watching in real time and interact with the event, the commentators, and
other viewers via text comments. It is therefore necessary to explore viewers'
comments together with scenes in E-sports live-streaming events. In this paper, we
develop CS-lol, a new large-scale dataset containing comments from viewers
paired with descriptions of game scenes in E-sports live-streaming. Moreover,
we propose a task, namely viewer comment retrieval, which retrieves the viewer
comments for a given scene of the live-streaming event. Results on a series of
baseline retrieval methods, evaluated with typical IR metrics, show that the
task is challenging. Finally, we release CS-lol and the baseline
implementations to the research community as a resource.
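The viewer comment retrieval task can be framed as classic ad-hoc retrieval: the scene description is the query and the viewer comments are the document collection. The sketch below is illustrative only, not the paper's actual baseline; it scores comments with a plain TF-IDF dot product (a simpler stand-in for IR baselines such as BM25), and the mini-corpus of comments is hypothetical.

```python
import math
from collections import Counter

def tf_idf_scores(query_tokens, docs_tokens):
    """Score each tokenized comment against a scene-description query
    using a simple TF-IDF dot product (a stand-in for standard IR
    baselines such as BM25)."""
    n = len(docs_tokens)
    # document frequency per term
    df = Counter()
    for doc in docs_tokens:
        df.update(set(doc))
    # smoothed inverse document frequency
    idf = {t: math.log((n + 1) / (df[t] + 1)) + 1 for t in df}
    scores = []
    for doc in docs_tokens:
        tf = Counter(doc)
        # terms never seen in the collection contribute zero
        scores.append(sum(tf[t] * idf.get(t, 0.0) for t in query_tokens))
    return scores

# Hypothetical mini-corpus of viewer comments (tokenized by whitespace)
comments = [
    "what a baron steal".split(),
    "that dragon fight was insane".split(),
    "gg well played".split(),
]
scene = "baron nashor stolen by the jungler".split()

scores = tf_idf_scores(scene, comments)
best = max(range(len(comments)), key=scores.__getitem__)  # index 0
```

A real baseline would add tokenization suited to noisy chat text (emotes, abbreviations) and rank the full comment stream within a time window around the scene.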
Related papers
- Deep learning for action spotting in association football videos [64.10841325879996]
The SoccerNet initiative organizes yearly challenges, during which participants from all around the world compete to achieve state-of-the-art performances.
This paper traces the history of action spotting in sports, from the creation of the task back in 2018, to the role it plays today in research and the sports industry.
arXiv Detail & Related papers (2024-10-02T07:56:15Z) - MatchTime: Towards Automatic Soccer Game Commentary Generation [52.431010585268865]
We consider constructing an automatic soccer game commentary model to improve the audiences' viewing experience.
First, observing the prevalent video-text misalignment in existing datasets, we manually annotate timestamps for 49 matches.
Second, we propose a multi-modal temporal alignment pipeline to automatically correct and filter the existing dataset at scale.
Third, based on our curated dataset, we train an automatic commentary generation model, named MatchVoice.
arXiv Detail & Related papers (2024-06-26T17:57:25Z) - Game-MUG: Multimodal Oriented Game Situation Understanding and Commentary Generation Dataset [8.837048597513059]
This paper introduces GAME-MUG, a new multimodal game situation understanding and audience-engaged commentary generation dataset.
Our dataset is collected from 2020-2022 LOL game live streams from YouTube and Twitch, and includes multimodal esports game information, including text, audio, and time-series event logs.
In addition, we also propose a new audience conversation augmented commentary dataset by covering the game situation and audience conversation understanding.
arXiv Detail & Related papers (2024-04-30T00:39:26Z) - SoccerNet-Caption: Dense Video Captioning for Soccer Broadcasts
Commentaries [71.44210436913029]
We propose a novel task of dense video captioning focusing on the generation of textual commentaries anchored with single timestamps.
We present a challenging dataset consisting of almost 37k timestamped commentaries across 715.9 hours of soccer broadcast videos.
arXiv Detail & Related papers (2023-04-10T13:08:03Z) - Commentary Generation from Data Records of Multiplayer Strategy Esports Game [21.133690853111133]
We build large-scale datasets that pair structured data and commentaries from a popular esports game, League of Legends.
We then evaluate Transformer-based models to generate game commentaries from structured data records.
We will release our dataset to boost potential research in the data-to-text generation community.
arXiv Detail & Related papers (2022-12-21T11:23:31Z) - Going for GOAL: A Resource for Grounded Football Commentaries [66.10040637644697]
We present GrOunded footbAlL commentaries (GOAL), a novel dataset of football (or soccer) highlights videos with transcribed live commentaries in English.
We provide state-of-the-art baselines for the following tasks: frame reordering, moment retrieval, live commentary retrieval and play-by-play live commentary generation.
Results show that SOTA models perform reasonably well in most tasks.
arXiv Detail & Related papers (2022-11-08T20:04:27Z) - GOAL: Towards Benchmarking Few-Shot Sports Game Summarization [0.3683202928838613]
We release GOAL, the first English sports game summarization dataset.
There are 103 commentary-news pairs in GOAL, where the average lengths of commentaries and news are 2724.9 and 476.3 words, respectively.
arXiv Detail & Related papers (2022-07-18T14:29:18Z) - Temporally-Aware Feature Pooling for Action Spotting in Soccer
Broadcasts [86.56462654572813]
We focus our analysis on action spotting in soccer broadcasts, which consists of temporally localizing the main actions in a soccer game.
We propose a novel feature pooling method based on NetVLAD, dubbed NetVLAD++, that embeds temporally-aware knowledge.
We train and evaluate our methodology on the recent large-scale dataset SoccerNet-v2, reaching 53.4% Average-mAP for action spotting.
arXiv Detail & Related papers (2021-04-14T11:09:03Z) - SoccerNet-v2: A Dataset and Benchmarks for Holistic Understanding of
Broadcast Soccer Videos [71.72665910128975]
SoccerNet-v2 is a novel large-scale corpus of manual annotations for the SoccerNet video dataset.
We release around 300k annotations within SoccerNet's 500 untrimmed broadcast soccer videos.
We extend current tasks in the realm of soccer to include action spotting and camera shot segmentation with boundary detection.
arXiv Detail & Related papers (2020-11-26T16:10:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.