Subjective and Objective Quality-of-Experience Evaluation Study for Live Video Streaming
- URL: http://arxiv.org/abs/2409.17596v1
- Date: Thu, 26 Sep 2024 07:22:38 GMT
- Title: Subjective and Objective Quality-of-Experience Evaluation Study for Live Video Streaming
- Authors: Zehao Zhu, Wei Sun, Jun Jia, Wei Wu, Sibin Deng, Kai Li, Ying Chen, Xiongkuo Min, Jia Wang, Guangtao Zhai
- Abstract summary: We conduct a comprehensive study of subjective and objective QoE evaluations for live video streaming.
For the subjective QoE study, we introduce the first live video streaming QoE dataset, TaoLive QoE.
A human study was conducted to derive subjective QoE scores of videos in the TaoLive QoE dataset.
We propose an end-to-end QoE evaluation model, Tao-QoE, which integrates multi-scale semantic features and optical flow-based motion features.
- Score: 51.712182539961375
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In recent years, live video streaming has gained widespread popularity across various social media platforms. Quality of experience (QoE), which reflects end-users' satisfaction and overall experience, plays a critical role in helping media service providers optimize large-scale live compression and transmission strategies to achieve a perceptually optimal rate-distortion trade-off. Although many QoE metrics for video-on-demand (VoD) have been proposed, significant challenges remain in developing QoE metrics for live video streaming. To bridge this gap, we conduct a comprehensive study of subjective and objective QoE evaluations for live video streaming. For the subjective QoE study, we introduce the first live video streaming QoE dataset, TaoLive QoE, which consists of $42$ source videos collected from real live broadcasts and $1,155$ corresponding distorted ones degraded by a variety of streaming distortions, including conventional streaming distortions such as compression and stalling, as well as live streaming-specific distortions like frame skipping and variable frame rate. Subsequently, a human study was conducted to derive subjective QoE scores for videos in the TaoLive QoE dataset. For the objective QoE study, we benchmark existing QoE models on the TaoLive QoE dataset as well as publicly available QoE datasets for VoD scenarios, highlighting that current models struggle to accurately assess video QoE, particularly for live content. Hence, we propose an end-to-end QoE evaluation model, Tao-QoE, which integrates multi-scale semantic features and optical flow-based motion features to predict a retrospective QoE score, eliminating reliance on statistical quality of service (QoS) features.
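The abstract does not fix implementation details, but the described design, multi-scale semantic features fused with optical-flow motion features and regressed to a single retrospective score, can be sketched as below. All module names, dimensions, and the fusion-by-concatenation choice are illustrative assumptions, not the paper's actual architecture.
```python
import torch
import torch.nn as nn

class RetrospectiveQoE(nn.Module):
    """Illustrative sketch: fuse multi-scale semantic features with
    optical-flow-based motion features and regress one QoE score.
    Dimensions and fusion choices are assumptions, not the paper's."""

    def __init__(self, sem_dims=(256, 512, 1024), motion_dim=128, hidden=256):
        super().__init__()
        # Project each semantic scale to a common width, then average.
        self.sem_proj = nn.ModuleList(nn.Linear(d, hidden) for d in sem_dims)
        self.motion_proj = nn.Linear(motion_dim, hidden)
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, sem_feats, motion_feat):
        # sem_feats: list of (batch, dim_i) pooled per-scale features
        # motion_feat: (batch, motion_dim) pooled optical-flow statistics
        sem = torch.stack(
            [proj(f) for proj, f in zip(self.sem_proj, sem_feats)]
        ).mean(dim=0)
        fused = torch.cat([sem, self.motion_proj(motion_feat)], dim=-1)
        return self.head(fused).squeeze(-1)  # predicted retrospective QoE

model = RetrospectiveQoE()
sem = [torch.randn(4, d) for d in (256, 512, 1024)]
motion = torch.randn(4, 128)
print(model(sem, motion).shape)  # torch.Size([4])
```
Note that end-to-end as described would also include the feature extractors themselves; random tensors stand in for their outputs here.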
Related papers
- Satellite Streaming Video QoE Prediction: A Real-World Subjective Database and Network-Level Prediction Models [59.061552498630874]
We introduce the LIVE-Viasat Real-World Satellite QoE Database.
This database consists of 179 videos recorded from real-world streaming services affected by various authentic distortion patterns.
We demonstrate the usefulness of this unique new resource by evaluating the efficacy of QoE-prediction models on it.
We also created a new model that maps the network parameters to predicted human perception scores, which can be used by ISPs to optimize the video streaming quality of their networks.
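A minimal sketch of that network-parameters-to-MOS idea, assuming generic QoS features (throughput, RTT, loss rate, stall count) and synthetic scores in place of the LIVE-Viasat data:
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder network parameters per session: throughput (Mbps),
# RTT (ms), loss rate, stall count. Real features come from the ISP side.
X = rng.uniform(size=(179, 4)) * np.array([20.0, 600.0, 0.05, 5.0])
# Placeholder MOS-like targets; in practice these are subjective scores.
y = 5.0 - 2.0 * X[:, 3] / 5.0 - 1.5 * X[:, 2] / 0.05 + rng.normal(0, 0.3, 179)

model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.mean())
```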
arXiv Detail & Related papers (2024-10-17T18:22:50Z)
- CLIPVQA: Video Quality Assessment via CLIP [56.94085651315878]
We propose an efficient CLIP-based Transformer method for the VQA problem (CLIPVQA).
The proposed CLIPVQA achieves new state-of-the-art VQA performance and up to 37% better generalizability than existing benchmark VQA methods.
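A hedged sketch of the general recipe, per-frame CLIP embeddings pooled by a small Transformer and regressed to a quality score, is below; it is not the authors' architecture, and random tensors stand in for real CLIP encoder outputs:
```python
import torch
import torch.nn as nn

class ClipStyleVQA(nn.Module):
    """Sketch: per-frame CLIP-like embeddings -> temporal Transformer ->
    quality score. Width 512 matches CLIP ViT-B/32 but is an assumption;
    real features would come from a CLIP image encoder."""

    def __init__(self, dim=512, heads=8, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True
        )
        self.temporal = nn.TransformerEncoder(layer, num_layers=layers)
        self.head = nn.Linear(dim, 1)

    def forward(self, frame_feats):  # (batch, frames, dim)
        pooled = self.temporal(frame_feats).mean(dim=1)  # temporal pooling
        return self.head(pooled).squeeze(-1)

feats = torch.randn(2, 16, 512)  # stand-in for CLIP features of 16 frames
print(ClipStyleVQA()(feats).shape)  # torch.Size([2])
```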
arXiv Detail & Related papers (2024-07-06T02:32:28Z)
- KVQ: Kwai Video Quality Assessment for Short-form Videos [24.5291786508361]
We establish the first large-scale Kaleidoscope short Video database for Quality assessment, KVQ, which comprises 600 user-uploaded short videos and 3600 processed videos.
We propose the first short-form video quality evaluator, i.e., KSVQE, which enables the quality evaluator to identify the quality-determined semantics with the content understanding of large vision language models.
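One generic way to let content semantics steer quality prediction, offered only as a guess at the mechanism rather than KSVQE itself, is to gate quality-feature channels with a semantic embedding:
```python
import torch
import torch.nn as nn

class SemanticGatedQuality(nn.Module):
    """Sketch: a semantic embedding (e.g., from a vision-language model)
    gates the channels of a low-level quality feature before regression.
    All dimensions are illustrative."""

    def __init__(self, sem_dim=512, qual_dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(sem_dim, qual_dim), nn.Sigmoid())
        self.head = nn.Linear(qual_dim, 1)

    def forward(self, sem, qual):
        return self.head(self.gate(sem) * qual).squeeze(-1)

sem, qual = torch.randn(2, 512), torch.randn(2, 256)
print(SemanticGatedQuality()(sem, qual).shape)  # torch.Size([2])
```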
arXiv Detail & Related papers (2024-02-11T14:37:54Z)
- StableVQA: A Deep No-Reference Quality Assessment Model for Video Stability [56.462032266188785]
Video shakiness is an unpleasant distortion of User Generated Content (UGC) videos, usually caused by an unstable hold of the camera.
We build a new database of 1,952 diversely shaky videos, where each video has a Mean Opinion Score (MOS) on the degree of video stability rated by 34 subjects.
We elaborately design a novel VQA-S model named StableVQA, which consists of three feature extractors to acquire the optical flow, semantic, and blur features respectively, and a regression layer to predict the final stability score.
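The described three-branch layout (optical flow, semantics, blur, then a regression layer) can be sketched as follows; branch dimensions and fusion by concatenation are assumptions, and pre-extracted features stand in for the three extractors:
```python
import torch
import torch.nn as nn

class ThreeBranchStability(nn.Module):
    """Sketch of a StableVQA-like layout: three pre-extracted features
    (optical flow, semantic, blur) concatenated and regressed to one
    stability score. Dimensions are illustrative, not the paper's."""

    def __init__(self, flow_dim=128, sem_dim=512, blur_dim=64, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(flow_dim + sem_dim + blur_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, flow, sem, blur):
        return self.mlp(torch.cat([flow, sem, blur], dim=-1)).squeeze(-1)

flow, sem, blur = torch.randn(2, 128), torch.randn(2, 512), torch.randn(2, 64)
print(ThreeBranchStability()(flow, sem, blur).shape)  # torch.Size([2])
```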
arXiv Detail & Related papers (2023-08-09T12:04:36Z)
- A Brief Survey on Adaptive Video Streaming Quality Assessment [30.253712568568876]
Quality of experience (QoE) assessment for adaptive video streaming plays a significant role in advanced network management systems.
We analyze and compare different variations of objective QoE assessment models with or without using machine learning techniques for adaptive video streaming.
We find that existing video streaming QoE assessment models still have limited performance, which makes them difficult to apply in practical communication systems.
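For context, the handcrafted (non-learning) QoE models covered by such surveys often take a linear QoS-composite form like the one below; the coefficients are arbitrary placeholders, not values from the survey:
```python
def parametric_qoe(avg_bitrate_mbps, stall_seconds, num_switches,
                   a=1.0, b=2.5, c=0.5):
    """Classic handcrafted QoE form: reward bitrate, penalize stalls and
    quality switches. Coefficients a, b, c are placeholders."""
    return a * avg_bitrate_mbps - b * stall_seconds - c * num_switches

print(parametric_qoe(4.0, stall_seconds=1.2, num_switches=3))  # -0.5
```
Machine-learning variants replace the fixed coefficients with a learned mapping from the same QoS inputs to subjective scores.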
arXiv Detail & Related papers (2022-02-25T21:38:14Z)
- Study on the Assessment of the Quality of Experience of Streaming Video [117.44028458220427]
In this paper, the influence of various objective factors on the subjective estimation of the QoE of streaming video is studied.
The paper presents standard and handcrafted features and shows their correlations and significance p-values.
We use the SQoE-III database, so far the largest and most realistic of its kind.
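That style of analysis, per-feature correlation with MOS plus a significance p-value, is straightforward with SciPy; random stand-in data is used below, not SQoE-III:
```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
mos = rng.uniform(1, 5, size=100)                  # stand-in MOS values
features = {
    "bitrate": mos * 2 + rng.normal(0, 1, 100),    # correlated by design
    "stall_ratio": rng.uniform(0, 1, 100),         # pure-noise feature
}
for name, values in features.items():
    r, p = pearsonr(values, mos)                   # correlation and p-value
    print(f"{name}: r={r:.3f}, p={p:.2e}")
```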
arXiv Detail & Related papers (2020-12-08T18:46:09Z)
- Unified Quality Assessment of In-the-Wild Videos with Mixed Datasets Training [20.288424566444224]
We focus on automatically assessing the quality of in-the-wild videos in computer vision applications.
To improve the performance of quality assessment models, we borrow intuitions from human perception.
We propose a mixed datasets training strategy for training a single VQA model with multiple datasets.
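The summary leaves the strategy unspecified; one common way to train a single VQA model across datasets with incompatible MOS scales, offered here only as an illustrative sketch rather than the paper's method, is to alternate batches between datasets and use a scale-free ranking loss within each batch:
```python
import torch
import torch.nn as nn

# Sketch of one mixed-datasets strategy (an assumption, not necessarily the
# paper's): alternate batches from datasets whose MOS scales differ and use
# a pairwise ranking loss, so the scales never need to be aligned.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
rank_loss = nn.MarginRankingLoss(margin=0.1)

datasets = [  # dummy (features, MOS) pairs with different score ranges
    (torch.randn(100, 64), torch.rand(100) * 4 + 1),  # MOS in [1, 5]
    (torch.randn(100, 64), torch.rand(100) * 100),    # MOS in [0, 100]
]
for step in range(200):
    x, y = datasets[step % len(datasets)]   # alternate between datasets
    idx = torch.randperm(len(x))[:16]       # random mini-batch
    pred = model(x[idx]).squeeze(-1)
    # All within-batch pairs: the higher-MOS video should score higher.
    i, j = torch.triu_indices(16, 16, offset=1)
    target = torch.sign(y[idx][i] - y[idx][j])
    loss = rank_loss(pred[i], pred[j], target)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final ranking loss:", float(loss))
```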
arXiv Detail & Related papers (2020-11-09T09:22:57Z)
- UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated Content [59.13821614689478]
Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of content are unpredictable, complicated, and often commingled.
Here we contribute to advancing the problem by conducting a comprehensive evaluation of leading VQA models.
By employing a feature selection strategy on top of leading VQA model features, we extract 60 of the 763 statistical features used by those models to build a fusion-based model, VIDEVAL.
Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models.
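A hedged sketch of that kind of dimensionality reduction, with a simple univariate filter standing in for the paper's selection strategy and random data in place of real VQA features:
```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 763))  # stand-in for the 763 statistical features
y = X[:, :5].sum(axis=1) + rng.normal(0, 0.1, 200)  # synthetic quality score

# Keep the 60 most predictive features; a univariate filter here stands in
# for the (more elaborate) selection strategy used in fusion-based models.
selector = SelectKBest(f_regression, k=60).fit(X, y)
X_small = selector.transform(X)
print(X_small.shape)  # (200, 60)
```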
arXiv Detail & Related papers (2020-05-29T00:39:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.