Video traffic identification with novel feature extraction and selection method
- URL: http://arxiv.org/abs/2303.03949v1
- Date: Mon, 6 Mar 2023 14:26:24 GMT
- Title: Video traffic identification with novel feature extraction and selection method
- Authors: Licheng Zhang, Shuaili Liu, Qingsheng Yang, Zhongfeng Qu, Lizhi Peng
- Abstract summary: This study proposes to extract video-related features to construct a large-scale feature set to identify video traffic.
Second, to reduce the cost of video traffic identification and select an effective feature subset, the current research proposes an adaptive distribution distance-based feature selection (ADDFS) method.
Experimental results suggest that the proposed method can achieve high identification performance for video scene traffic and cloud game video traffic identification.
- Score: 1.7709344190822938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the rapid rise of video applications has led to an explosion of Internet video traffic, posing severe challenges to network management. Effectively identifying and managing video traffic has therefore become an urgent problem. However, existing video traffic feature extraction methods mainly target traditional packet- and flow-level features, and identification accuracy remains low. In addition, video traffic identification often suffers from high data dimensionality, which calls for an effective approach to select the most relevant features for the identification task. Although numerous studies have used feature selection to improve identification performance, no feature selection research has focused on measuring the distance between feature distributions that do not overlap or overlap only slightly. First, this study proposes to extract video-related features to construct a large-scale feature set for identifying video traffic. Second, to reduce the cost of video traffic identification and select an effective feature subset, it proposes an adaptive distribution distance-based feature selection (ADDFS) method, which uses the Wasserstein distance to measure the distance between feature distributions. To test the effectiveness of the proposal, we collected video traffic from different platforms in a campus network environment and conducted experiments on these data sets. Experimental results suggest that the proposed method achieves high identification performance for both video scene traffic and cloud game video traffic. Lastly, a comparison of ADDFS with other feature selection methods shows that ADDFS is a practical feature selection technique not only for video traffic identification but also for general classification tasks.
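The abstract describes ADDFS only at a high level. The snippet below is a minimal sketch of the core idea, not the authors' implementation: score each feature by the Wasserstein distance between its class-conditional distributions (computed here with SciPy's 1-D `wasserstein_distance`) and keep the features whose distributions are well separated. The mean-score cutoff used as the "adaptive" threshold is an assumption made purely for illustration.

```python
# Illustrative sketch of distribution-distance-based feature selection
# (an assumed simplification of ADDFS, not the paper's exact algorithm).
import numpy as np
from itertools import combinations
from scipy.stats import wasserstein_distance

def wasserstein_feature_scores(X, y):
    """Score each feature by the mean pairwise Wasserstein distance
    between its class-conditional empirical distributions."""
    classes = np.unique(y)
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        scores[j] = np.mean([
            wasserstein_distance(X[y == a, j], X[y == b, j])
            for a, b in combinations(classes, 2)
        ])
    return scores

def select_features(X, y):
    """Keep features whose separation score exceeds the average score
    (a stand-in for the paper's adaptive threshold)."""
    scores = wasserstein_feature_scores(X, y)
    return np.where(scores > scores.mean())[0]

# Toy example: feature 0 separates the two classes, feature 1 is pure noise.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (200, 2)),
               rng.normal([4.0, 0.0], 1.0, (200, 2))])
y = np.repeat([0, 1], 200)
print(select_features(X, y))  # expected output: [0]
```

In the paper's setting, such scores would be computed over the extracted video-related traffic features, and the selected subset would feed a downstream classifier.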
Related papers
- HAVANA: Hierarchical stochastic neighbor embedding for Accelerated Video ANnotAtions [59.71751978599567]
This paper presents a novel annotation pipeline that uses pre-extracted features and dimensionality reduction to accelerate the temporal video annotation process.
We demonstrate significant improvements in annotation effort compared to traditional linear methods, achieving more than a 10x reduction in clicks required for annotating over 12 hours of video.
arXiv Detail & Related papers (2024-09-16T18:15:38Z)
- EMDFNet: Efficient Multi-scale and Diverse Feature Network for Traffic Sign Detection [11.525603303355268]
The detection of small objects, particularly traffic signs, is a critical subtask within object detection and autonomous driving.
Motivated by these challenges, we propose a novel object detection network named Efficient Multi-scale and Diverse Feature Network (EMDFNet).
EMDFNet integrates an Augmented Shortcut Module and an Efficient Hybrid to address the aforementioned issues simultaneously.
arXiv Detail & Related papers (2024-08-26T11:26:27Z)
- Causal Video Summarizer for Video Exploration [74.27487067877047]
Causal Video Summarizer (CVS) is proposed to capture the interactive information between the video and query.
Based on the evaluation of the existing multi-modal video summarization dataset, experimental results show that the proposed approach is effective.
arXiv Detail & Related papers (2023-07-04T22:52:16Z)
- Few-shot Action Recognition via Intra- and Inter-Video Information Maximization [28.31541961943443]
We propose a novel framework, Video Information Maximization (VIM), for few-shot action recognition.
VIM is equipped with an adaptive spatial-temporal video sampler and a temporal action alignment model.
VIM acts to maximize the distinctiveness of video information from limited video data.
arXiv Detail & Related papers (2023-05-10T13:05:43Z)
- A novel efficient Multi-view traffic-related object detection framework [17.50049841016045]
We propose a novel traffic-related framework named CEVAS to achieve efficient object detection using multi-view video data.
Results show that our framework significantly reduces response latency while achieving the same detection accuracy as the state-of-the-art methods.
arXiv Detail & Related papers (2023-02-23T06:42:37Z)
- Task-adaptive Spatial-Temporal Video Sampler for Few-shot Action Recognition [25.888314212797436]
We propose a novel video frame sampler for few-shot action recognition.
Task-specific spatial-temporal frame sampling is achieved via a temporal selector (TS) and a spatial amplifier (SA).
Experiments show a significant boost on various benchmarks including long-term videos.
arXiv Detail & Related papers (2022-07-20T09:04:12Z)
- Action Keypoint Network for Efficient Video Recognition [63.48422805355741]
This paper proposes to integrate temporal and spatial selection into an Action Keypoint Network (AK-Net).
AK-Net selects some informative points scattered in arbitrary-shaped regions as a set of action keypoints and then transforms the video recognition into point cloud classification.
Experimental results show that AK-Net can consistently improve the efficiency and performance of baseline methods on several video recognition benchmarks.
arXiv Detail & Related papers (2022-01-17T09:35:34Z)
- Video Salient Object Detection via Contrastive Features and Attention Modules [106.33219760012048]
We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
arXiv Detail & Related papers (2021-11-03T17:40:32Z)
- Few-Shot Video Object Detection [70.43402912344327]
We introduce Few-Shot Video Object Detection (FSVOD) with three important contributions.
FSVOD-500 comprises 500 classes with class-balanced videos in each category for few-shot learning.
Our TPN and TMN+ are jointly and end-to-end trained.
arXiv Detail & Related papers (2021-04-30T07:38:04Z)
- Multi-Density Attention Network for Loop Filtering in Video Compression [9.322800480045336]
We propose an on-line scaling-based multi-density attention network for loop filtering in video compression.
Experimental results show that 10.18% bit-rate reduction at the same video quality can be achieved over the latest Versatile Video Coding (VVC) standard.
arXiv Detail & Related papers (2021-04-08T05:46:38Z)
- Gabriella: An Online System for Real-Time Activity Detection in Untrimmed Security Videos [72.50607929306058]
We propose a real-time online system to perform activity detection on untrimmed security videos.
The proposed method consists of three stages: tubelet extraction, activity classification and online tubelet merging.
We demonstrate the effectiveness of the proposed approach in terms of speed (100 fps) and performance with state-of-the-art results.
arXiv Detail & Related papers (2020-04-23T22:20:10Z)