Study on the Assessment of the Quality of Experience of Streaming Video
- URL: http://arxiv.org/abs/2012.04623v1
- Date: Tue, 8 Dec 2020 18:46:09 GMT
- Title: Study on the Assessment of the Quality of Experience of Streaming Video
- Authors: Aleksandr Ivchenko, Pavel Kononyuk, Alexander Dvorkovich, Liubov Antiufrieva
- Abstract summary: In this paper, the influence of various objective factors on the subjective estimation of the QoE of streaming video is studied.
The paper presents standard and handcrafted features and reports their correlations and p-values of significance.
We use the SQoE-III database, so far the largest and most realistic of its kind.
- Score: 117.44028458220427
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic adaptive streaming over HTTP underlies most multimedia
services; however, the nature of this technology further complicates the
assessment of QoE (Quality of Experience). In this paper, the influence of
various objective factors on the subjective estimation of the QoE of streaming
video is studied. The paper presents standard and handcrafted features and
reports their correlations and p-values of significance. VQA (Video Quality
Assessment) models based on regression and gradient boosting are proposed,
with SRCC reaching up to 0.9647 on the validation subsample. The proposed
regression models are suited to practical applications (both with and without
a reference video); the Gradient Boosting Regressor model is promising for
further improvement of the quality estimation model. We use the SQoE-III
database, so far the largest and most realistic of its kind. The VQA models
are available at https://github.com/AleksandrIvchenko/QoE-assesment
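As a rough illustration of the modeling approach the abstract describes, the following sketch fits a gradient-boosting regressor on synthetic stand-in features and evaluates it with SRCC. The feature set, data, and hyperparameters here are hypothetical assumptions, not the authors' actual SQoE-III pipeline.

```python
# Hedged sketch: gradient-boosting QoE regression scored with SRCC.
# Features are toy stand-ins (e.g. bitrate, stalling, frame-quality proxies),
# not the paper's actual standard/handcrafted feature set.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n = 300
X = rng.uniform(size=(n, 3))
# Synthetic mean-opinion scores: monotone in the features plus mild noise.
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + 1.0 * X[:, 2] + 0.1 * rng.normal(size=n)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)

# SRCC: Spearman rank correlation between predicted and subjective scores.
srcc, _ = spearmanr(model.predict(X_va), y_va)
print(f"validation SRCC = {srcc:.4f}")
```

SRCC is the natural metric here because it measures monotonic agreement between predicted and subjective scores without assuming a linear relationship between them.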
Related papers
- Subjective and Objective Quality-of-Experience Evaluation Study for Live Video Streaming [51.712182539961375]
We conduct a comprehensive study of subjective and objective QoE evaluations for live video streaming.
For the subjective QoE study, we introduce the first live video streaming QoE dataset, TaoLive QoE.
A human study was conducted to derive subjective QoE scores of videos in the TaoLive QoE dataset.
We propose an end-to-end QoE evaluation model, Tao-QoE, which integrates multi-scale semantic features and optical flow-based motion features.
arXiv Detail & Related papers (2024-09-26T07:22:38Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA)
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Enhancing Blind Video Quality Assessment with Rich Quality-aware Features [79.18772373737724]
We present a simple but effective method to enhance blind video quality assessment (BVQA) models for social media videos.
We explore rich quality-aware features from pre-trained blind image quality assessment (BIQA) and BVQA models as auxiliary features.
Experimental results demonstrate that the proposed model achieves the best performance on three public social media VQA datasets.
arXiv Detail & Related papers (2024-05-14T16:32:11Z) - Ada-DQA: Adaptive Diverse Quality-aware Feature Acquisition for Video
Quality Assessment [25.5501280406614]
Video quality assessment (VQA) has attracted growing attention in recent years.
The great expense of annotating large-scale VQA datasets has become the main obstacle for current deep-learning methods.
An Adaptive Diverse Quality-aware feature Acquisition (Ada-DQA) framework is proposed to capture desired quality-related features.
arXiv Detail & Related papers (2023-08-01T16:04:42Z) - CONVIQT: Contrastive Video Quality Estimator [63.749184706461826]
Perceptual video quality assessment (VQA) is an integral component of many streaming and video sharing platforms.
Here we consider the problem of learning perceptually relevant video quality representations in a self-supervised manner.
Our results indicate that compelling representations with perceptual bearing can be obtained using self-supervised learning.
arXiv Detail & Related papers (2022-06-29T15:22:01Z) - A Deep Learning based No-reference Quality Assessment Model for UGC
Videos [44.00578772367465]
Previous video quality assessment (VQA) studies either use the image recognition model or the image quality assessment (IQA) models to extract frame-level features of videos for quality regression.
We propose a very simple but effective VQA model, which trains an end-to-end spatial feature extraction network to learn the quality-aware spatial feature representation from raw pixels of the video frames.
With the better quality-aware features, we only use a simple multilayer perceptron (MLP) network to regress them into chunk-level quality scores, and then a temporal average pooling strategy is adopted to obtain the video-level quality score.
arXiv Detail & Related papers (2022-04-29T12:45:21Z) - A Brief Survey on Adaptive Video Streaming Quality Assessment [30.253712568568876]
Quality of experience (QoE) assessment for adaptive video streaming plays a significant role in advanced network management systems.
We analyze and compare different variations of objective QoE assessment models with or without using machine learning techniques for adaptive video streaming.
We find that existing video streaming QoE assessment models still have limited performance, which makes them difficult to apply in practical communication systems.
arXiv Detail & Related papers (2022-02-25T21:38:14Z) - UGC-VQA: Benchmarking Blind Video Quality Assessment for User Generated
Content [59.13821614689478]
Blind quality prediction of in-the-wild videos is quite challenging, since the quality degradations of content are unpredictable, complicated, and often commingled.
Here we contribute to advancing the problem by conducting a comprehensive evaluation of leading VQA models.
By employing a feature selection strategy on top of leading VQA model features, we are able to extract 60 of the 763 statistical features used by the leading models.
Our experimental results show that VIDEVAL achieves state-of-the-art performance at considerably lower computational cost than other leading models.
arXiv Detail & Related papers (2020-05-29T00:39:20Z)
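The feature-selection step mentioned in the UGC-VQA entry (reducing 763 statistical features to a compact subset) could be sketched along these lines. The data, feature counts, and the random-forest-based selector below are illustrative assumptions, not the VIDEVAL method itself.

```python
# Hedged sketch of model-based feature selection: keep only the top-k
# features by importance. Counts and data are synthetic, not VIDEVAL's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel

rng = np.random.default_rng(1)
n_samples, n_features, n_keep = 200, 100, 10

X = rng.normal(size=(n_samples, n_features))
# Only the first few features actually drive the synthetic quality score.
y = X[:, 0] + 0.5 * X[:, 1] - 0.7 * X[:, 2] + 0.05 * rng.normal(size=n_samples)

selector = SelectFromModel(
    RandomForestRegressor(n_estimators=100, random_state=0),
    max_features=n_keep,
    threshold=-np.inf,  # disable the threshold so exactly n_keep features survive
)
selector.fit(X, y)
kept = selector.get_support()
print(f"kept {int(kept.sum())} of {n_features} features")
```

Selecting a small, informative subset in this style is what lets a model retain accuracy while cutting the computational cost of feature extraction, which is the trade-off the UGC-VQA entry highlights.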
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.