Human-Machine Collaborative Video Coding Through Cuboidal Partitioning
- URL: http://arxiv.org/abs/2102.01307v1
- Date: Tue, 2 Feb 2021 04:44:45 GMT
- Title: Human-Machine Collaborative Video Coding Through Cuboidal Partitioning
- Authors: Ashek Ahmmed, Manoranjan Paul, Manzur Murshed, and David Taubman
- Abstract summary: We propose a video coding framework that leverages the commonality between human vision and machine vision applications using cuboids.
Cuboids, estimated rectangular regions over a video frame, are computationally efficient, have a compact representation, and are object centric.
Herein, cuboidal feature descriptors are extracted from the current frame and then employed to accomplish a machine vision task in the form of object detection.
- Score: 26.70051123157869
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Video coding algorithms encode and decode an entire video frame,
while feature coding techniques preserve and communicate only the most
critical information needed for a given application. This is because video
coding targets human perception, while feature coding aims at machine vision
tasks. Recently, attempts have been made to bridge the gap between these two
domains. In this work, we propose a video coding framework that leverages the
commonality between human vision and machine vision applications using
cuboids. Cuboids, estimated rectangular regions over a video frame, are
computationally efficient, have a compact representation, and are object
centric. Such properties have already been shown to add value to traditional
video coding systems. Herein, cuboidal feature descriptors are extracted from
the current frame and then employed to accomplish a machine vision task in
the form of object detection. Experimental results show that a trained
classifier yields superior average precision when equipped with a cuboidal
feature-oriented representation of the current test frame. Additionally, this
representation costs 7% less in bit rate if the captured frames need to be
communicated to a receiver.
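The abstract does not spell out how cuboids are estimated. As a minimal sketch of one plausible top-down scheme, the following splits a frame recursively into axis-aligned rectangles wherever a split best reduces squared intensity error; the split criterion, `min_gain` threshold, and depth limit are illustrative assumptions, not the paper's actual procedure.

```python
import numpy as np

def split_gain(block, axis):
    """Best single split of `block` along `axis`, scored by how much it
    reduces the summed squared error around each side's mean intensity."""
    sse = ((block - block.mean()) ** 2).sum()
    best_gain, best_k = 0.0, None
    for k in range(1, block.shape[axis]):
        a, b = np.split(block, [k], axis=axis)
        child = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
        if sse - child > best_gain:
            best_gain, best_k = sse - child, k
    return best_gain, best_k

def cuboid_partition(frame, min_gain=50.0, depth=6):
    """Recursively split a grayscale frame into rectangular regions
    ("cuboids"); returns one (y, x, h, w, mean) descriptor per region."""
    def rec(y, x, block, d):
        gains = [split_gain(block, ax) for ax in (0, 1)]
        ax = int(gains[1][0] > gains[0][0])       # split along the better axis
        gain, k = gains[ax]
        if d == 0 or k is None or gain < min_gain:
            return [(y, x, block.shape[0], block.shape[1], float(block.mean()))]
        a, b = np.split(block, [k], axis=ax)
        dy, dx = (k, 0) if ax == 0 else (0, k)
        return rec(y, x, a, d - 1) + rec(y + dy, x + dx, b, d - 1)
    return rec(0, 0, frame.astype(np.float64), depth)

frame = np.random.rand(64, 64) * 255              # stand-in for a video frame
print(len(cuboid_partition(frame)), "cuboid descriptors")
```

Each returned tuple is a region's geometry plus its mean intensity: roughly the kind of compact, region-level descriptor the abstract alludes to.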
Related papers
- High-Efficiency Neural Video Compression via Hierarchical Predictive Learning [27.41398149573729]
Enhanced Deep Hierarchical Video Compression (DHVC 2.0) introduces superior compression performance and impressive complexity efficiency.
Uses hierarchical predictive coding to transform each video frame into multiscale representations.
Supports transmission-friendly progressive decoding, making it particularly advantageous for networked video applications in the presence of packet loss.
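DHVC's transforms are learned end to end; purely to illustrate the hierarchical-predictive idea (code a coarse base, then per scale only the residual against an upsampled cross-scale prediction), here is a hand-rolled numpy sketch. The average-pool and nearest-neighbour operators are stand-in assumptions.

```python
import numpy as np

def downsample(x):
    # 2x2 average pooling
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # nearest-neighbour 2x upsampling serves as the cross-scale prediction
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def hierarchical_encode(frame, levels=3):
    """Multiscale representation: a coarse base plus, per finer scale,
    the residual against a prediction upsampled from the scale below."""
    pyramid = [frame]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    residuals = [fine - upsample(coarse)[:fine.shape[0], :fine.shape[1]]
                 for fine, coarse in zip(pyramid[:-1], pyramid[1:])]
    return pyramid[-1], residuals[::-1]   # base, then coarse-to-fine residuals

def hierarchical_decode(base, residuals):
    x = base
    for r in residuals:
        x = upsample(x)[:r.shape[0], :r.shape[1]] + r
    return x

frame = np.random.rand(64, 64)
base, res = hierarchical_encode(frame)
assert np.allclose(hierarchical_decode(base, res), frame)
```

Progressive decoding falls out of the structure: a receiver can stop after any prefix of the residual layers and still display a coarse frame, which is what makes this style attractive under packet loss.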
arXiv Detail & Related papers (2024-10-03T15:40:58Z)
- Neuromorphic Synergy for Video Binarization [54.195375576583864]
Bimodal objects serve as a visual form to embed information that can be easily recognized by vision systems.
Neuromorphic cameras offer new capabilities for alleviating motion blur, but it is non-trivial to first de-blur and then binarize the images in real time.
We propose an event-based binary reconstruction method that leverages the prior knowledge of the bimodal target's properties to perform inference independently in both event space and image space.
We also develop an efficient integration method to propagate this binary image to high frame rate binary video.
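The method itself infers in both event space and image space with a learned prior; the toy below conveys only the propagation step in spirit: carrying a binary keyframe forward by integrating event polarities and re-thresholding. The event format, contrast constant, and threshold here are all hypothetical.

```python
import numpy as np

def propagate_binary(binary_key, event_batches, threshold=0.0, contrast=0.2):
    """Hypothetical sketch: carry a binary keyframe forward by accumulating
    event polarities into a per-pixel log-intensity offset and re-thresholding.
    `event_batches` holds one list of (t, y, x, polarity) tuples per output frame."""
    log_offset = np.zeros(binary_key.shape, dtype=np.float64)
    frames = []
    for batch in event_batches:
        for _, y, x, pol in batch:
            log_offset[y, x] += contrast * (1 if pol > 0 else -1)
        # pixels start at +/-1 per the keyframe; events push them across zero
        frames.append((np.where(binary_key, 1.0, -1.0) + log_offset) > threshold)
    return frames

key = np.zeros((4, 4), dtype=bool)
events = [[(0.001, 1, 1, +1)] * 6]   # six brightness-up events at pixel (1, 1)
out = propagate_binary(key, events)
print(out[0][1, 1])                  # True: the events flipped that pixel
```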
arXiv Detail & Related papers (2024-02-20T01:43:51Z)
- VNVC: A Versatile Neural Video Coding Framework for Efficient Human-Machine Vision [59.632286735304156]
It is more efficient to enhance/analyze the coded representations directly without decoding them into pixels.
We propose a versatile neural video coding (VNVC) framework, which targets learning compact representations to support both reconstruction and direct enhancement/analysis.
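A minimal sketch of the "enhance/analyze without decoding" idea: one shared latent feeds either a pixel decoder or a task head, so analysis can bypass pixel reconstruction entirely. The layer shapes and heads below are invented for illustration and are not VNVC's actual architecture.

```python
import torch
import torch.nn as nn

class TinyVNVC(nn.Module):
    """Toy model: one compact latent serves both reconstruction and analysis."""
    def __init__(self, channels=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(            # frame -> compact latent
            nn.Conv2d(3, channels, 4, stride=4), nn.ReLU())
        self.decoder = nn.Sequential(            # latent -> pixels (optional path)
            nn.ConvTranspose2d(channels, 3, 4, stride=4))
        self.task_head = nn.Sequential(          # latent -> labels, no pixels needed
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, num_classes))

    def forward(self, frame, analyze_only=True):
        z = self.encoder(frame)
        return self.task_head(z) if analyze_only else self.decoder(z)

model = TinyVNVC()
frame = torch.randn(1, 3, 64, 64)
print(model(frame).shape)                       # [1, 10], straight from the latent
print(model(frame, analyze_only=False).shape)   # [1, 3, 64, 64], pixel path
```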
arXiv Detail & Related papers (2023-06-19T03:04:57Z)
- Saliency-Driven Versatile Video Coding for Neural Object Detection [7.367608892486084]
We propose a saliency-driven coding framework for the video coding for machines task using the latest video coding standard Versatile Video Coding (VVC).
To determine the salient regions before encoding, we employ the real-time-capable object detection network You Only Look Once (YOLO) in combination with a novel decision criterion.
We find that, compared to the reference VVC with a constant quality, up to 29% of bit rate can be saved at the same detection accuracy at the decoder side by applying the proposed saliency-driven framework.
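One way to picture the saliency-driven control loop (the QP values, CTU size, and box format below are assumptions, not the paper's settings): detector boxes select the coding-tree units that receive a finer quantization parameter than the background.

```python
import numpy as np

def saliency_qp_map(frame_hw, boxes, ctu=128, qp_salient=27, qp_background=39):
    """Illustrative sketch: build a per-CTU quantization-parameter map that
    spends bits on detected objects. `boxes` are (x0, y0, x1, y1) pixel
    rectangles, e.g. from a YOLO detector."""
    h, w = frame_hw
    rows, cols = -(-h // ctu), -(-w // ctu)        # ceiling division
    qp = np.full((rows, cols), qp_background, dtype=int)
    for x0, y0, x1, y1 in boxes:
        r0, r1 = y0 // ctu, (y1 - 1) // ctu
        c0, c1 = x0 // ctu, (x1 - 1) // ctu
        qp[r0:r1 + 1, c0:c1 + 1] = qp_salient      # finer quantization on objects
    return qp

print(saliency_qp_map((720, 1280), [(100, 50, 400, 300)]))
```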
arXiv Detail & Related papers (2022-03-11T14:27:43Z)
- A Coding Framework and Benchmark towards Low-Bitrate Video Understanding [63.05385140193666]
We propose a traditional-neural mixed coding framework that takes advantage of both traditional codecs and neural networks (NNs).
The framework is optimized by ensuring that a transportation-efficient semantic representation of the video is preserved.
We build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach.
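As a loose, assumption-laden sketch of such a traditional-neural split: a conventional path carries a heavily downsampled, coarsely quantized signal, while a learned module (stubbed here with naive upsampling) would restore a representation good enough for downstream understanding.

```python
import numpy as np

def mixed_codec(frame, scale=4, q=16):
    """Toy pipeline (all details assumed): the 'traditional' side transports a
    cheap low-bitrate proxy; a trained network would replace `neural_restore`."""
    small = frame[::scale, ::scale]          # downsample on the traditional side
    coded = np.round(small / q) * q          # coarse quantization ~ low bitrate
    def neural_restore(x):                   # stand-in for the learned module
        return np.repeat(np.repeat(x, scale, axis=0), scale, axis=1)
    return neural_restore(coded)

frame = np.random.rand(64, 64) * 255
print(mixed_codec(frame).shape)              # (64, 64) restored for analysis
```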
arXiv Detail & Related papers (2022-02-06T16:29:15Z)
- Video Exploration via Video-Specific Autoencoders [60.256055890647595]
We present video-specific autoencoders that enable human-controllable video exploration.
We observe that a simple autoencoder trained on multiple frames of a specific video enables one to perform a large variety of video processing and editing tasks.
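A sketch of that observation, with an invented architecture and training schedule: overfit a small autoencoder to the frames of one clip, after which its latent space supports exploration and editing of that specific video.

```python
import torch
import torch.nn as nn

# Toy video-specific autoencoder; sizes and schedule are illustrative guesses.
ae = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 8, 4, stride=2, padding=1), nn.ReLU(),    # compact latent
    nn.ConvTranspose2d(8, 32, 4, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
)

frames = torch.rand(16, 3, 64, 64)      # stand-in for the frames of ONE video
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(100):                    # overfitting to this clip is the point
    loss = nn.functional.mse_loss(ae(frames), frames)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("reconstruction MSE:", float(loss))
```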
arXiv Detail & Related papers (2021-03-31T17:56:13Z)
- An Emerging Coding Paradigm VCM: A Scalable Coding Approach Beyond Feature and Signal [99.49099501559652]
Video Coding for Machines (VCM) aims to bridge the gap between visual feature compression and classical video coding.
We employ a conditional deep generative network to reconstruct video frames with the guidance of learned motion patterns.
By learning to extract sparse motion patterns via a predictive model, the network elegantly leverages the feature representation to generate the appearance of to-be-coded frames.
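The paper's model is a learned conditional generator; the numpy toy below conveys only what a sparse motion pattern buys: shifting a handful of blocks of the previous frame yields a coarse prediction of the next frame for a generator to refine. The block size and motion format are assumptions.

```python
import numpy as np

def warp_with_sparse_motion(prev, motion, block=16):
    """Shift each coded block of the previous frame by its single motion
    vector; blocks without a coded vector stay empty in this toy."""
    h, w = prev.shape
    pred = np.zeros_like(prev)
    for (by, bx), (dy, dx) in motion.items():        # one vector per block
        ys, xs = by * block, bx * block
        sy = np.clip(ys - dy, 0, h - block)
        sx = np.clip(xs - dx, 0, w - block)
        pred[ys:ys + block, xs:xs + block] = prev[sy:sy + block, sx:sx + block]
    return pred

prev = np.random.rand(64, 64)
motion = {(0, 0): (2, 3), (1, 1): (-1, 0)}           # sparse: only two blocks coded
print(warp_with_sparse_motion(prev, motion).shape)
```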
arXiv Detail & Related papers (2020-01-09T14:18:18Z)
- Towards Coding for Human and Machine Vision: A Scalable Image Coding Approach [104.02201472370801]
We come up with a novel image coding framework by leveraging both the compressive and the generative models.
By introducing advanced generative models, we train a flexible network to reconstruct images from compact feature representations and the reference pixels.
Experimental results demonstrate the superiority of our framework in both human visual quality and facial landmark detection.
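A toy of the scalable-layering idea only (the descriptor, layer sizes, and sampling pattern are all invented): a tiny feature base layer for machine analysis, plus sparse reference pixels that a generative decoder would combine with it for human viewing.

```python
import numpy as np

def encode_scalable(img, feat_dim=32):
    """Toy scalable bitstream: a compact base layer for machine tasks and a
    sparse pixel enhancement layer; a generative model would reconstruct the
    full image from base features plus these reference pixels."""
    base = np.histogram(img, bins=feat_dim, range=(0, 255))[0]  # compact descriptor
    enhancement = img[::8, ::8]                                 # sparse reference pixels
    return base, enhancement

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
base, enh = encode_scalable(img)
print(base.shape, enh.shape)   # (32,) machine layer, (8, 8) human layer
```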
arXiv Detail & Related papers (2020-01-09T10:37:17Z)