MVFlow: Deep Optical Flow Estimation of Compressed Videos with Motion
Vector Prior
- URL: http://arxiv.org/abs/2308.01568v2
- Date: Fri, 4 Aug 2023 04:18:59 GMT
- Title: MVFlow: Deep Optical Flow Estimation of Compressed Videos with Motion
Vector Prior
- Authors: Shili Zhou, Xuhao Jiang, Weimin Tan, Ruian He and Bo Yan
- Abstract summary: We propose an optical flow model, MVFlow, which uses motion vectors to improve the speed and accuracy of optical flow estimation for compressed videos.
The experimental results demonstrate the superiority of our proposed MVFlow, which can reduce the AEPE by 1.09 compared to existing models or save 52% time to achieve similar accuracy.
- Score: 16.633665275166706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, many deep learning-based methods have been proposed to
tackle the problem of optical flow estimation and achieved promising results.
However, they hardly consider that most videos are compressed and thus ignore
the pre-computed information in compressed video streams. Motion vectors, one
component of the compression information, record the motion of the video frames. They can
be directly extracted from the compression code stream without computational
cost and serve as a solid prior for optical flow estimation. Therefore, we
propose an optical flow model, MVFlow, which uses motion vectors to improve the
speed and accuracy of optical flow estimation for compressed videos. In detail,
MVFlow includes a key Motion-Vector Converting Module, which ensures that the
motion vectors can be transformed into the same domain as optical flow and then
be fully utilized by the flow estimation module. Meanwhile, we construct four
optical flow datasets for compressed videos containing frames and motion
vectors in pairs. The experimental results demonstrate the superiority of our
proposed MVFlow, which can reduce the AEPE by 1.09 compared to existing models
or save 52% time to achieve similar accuracy to existing models.
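As a rough illustration of the motion-vector prior described above (not the paper's learned Motion-Vector Converting Module), the sketch below rasterizes block-level motion vectors, such as those a decoder like FFmpeg can export from an H.264/HEVC stream, into a dense per-pixel flow-like field that could accompany a frame pair as input to a flow network. The record layout, field names, and the helper function are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical record layout for one exported motion vector, loosely modeled on
# FFmpeg's AVMotionVector side data: block size (w, h), block center in the
# destination frame (dst_x, dst_y), and a sub-pel integer displacement.
MV_DTYPE = np.dtype([
    ("w", np.int32), ("h", np.int32),
    ("dst_x", np.int32), ("dst_y", np.int32),
    ("motion_x", np.int32), ("motion_y", np.int32),
    ("motion_scale", np.int32),  # sub-pel precision, e.g. 4 for quarter-pel
])

def mvs_to_dense_flow(mvs: np.ndarray, height: int, width: int) -> np.ndarray:
    """Rasterize block motion vectors into a dense (H, W, 2) flow-like prior.

    Pixels not covered by any motion vector (e.g. intra-coded blocks) stay at
    zero; a learned component such as MVFlow's converting module would refine
    and fill such regions.
    """
    flow = np.zeros((height, width, 2), dtype=np.float32)
    for mv in mvs:
        # Convert the integer displacement to pixels using the sub-pel scale.
        dx = mv["motion_x"] / max(int(mv["motion_scale"]), 1)
        dy = mv["motion_y"] / max(int(mv["motion_scale"]), 1)
        # Paint the displacement over the block's footprint in the destination frame.
        x0 = int(np.clip(mv["dst_x"] - mv["w"] // 2, 0, width))
        y0 = int(np.clip(mv["dst_y"] - mv["h"] // 2, 0, height))
        x1 = int(np.clip(x0 + mv["w"], 0, width))
        y1 = int(np.clip(y0 + mv["h"], 0, height))
        flow[y0:y1, x0:x1, 0] = dx
        flow[y0:y1, x0:x1, 1] = dy
    return flow

# Example: one 16x16 block centered at (24, 40) moving 2.5 px right and 1 px down.
example = np.array([(16, 16, 24, 40, 10, 4, 4)], dtype=MV_DTYPE)
prior = mvs_to_dense_flow(example, height=64, width=64)
print(prior.shape, prior[40, 24])  # (64, 64, 2) [2.5 1.]
```

The point of the sketch is only that such a prior comes essentially for free from the decoder; how it is converted and fused with the frames is what MVFlow learns end-to-end.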
Related papers
- Generalizable Implicit Motion Modeling for Video Frame Interpolation [51.966062283735596]
Motion is critical in flow-based Video Frame Interpolation (VFI).
We introduce Generalizable Implicit Motion Modeling (GIMM), a novel and effective approach to motion modeling for VFI.
Our GIMM can be easily integrated with existing flow-based VFI works by supplying accurately modeled motion.
arXiv Detail & Related papers (2024-07-11T17:13:15Z)
- MemFlow: Optical Flow Estimation and Prediction with Memory [54.22820729477756]
We present MemFlow, a real-time method for optical flow estimation and prediction with memory.
Our method enables memory read-out and update modules for aggregating historical motion information in real-time.
Our approach seamlessly extends to the future prediction of optical flow based on past observations.
arXiv Detail & Related papers (2024-04-07T04:56:58Z)
- FlowVid: Taming Imperfect Optical Flows for Consistent Video-to-Video Synthesis [66.2611385251157]
Diffusion models have transformed the image-to-image (I2I) synthesis and are now permeating into videos.
This paper proposes a consistent V2V synthesis framework by jointly leveraging spatial conditions and temporal optical flow clues within the source video.
arXiv Detail & Related papers (2023-12-29T16:57:12Z)
- Offline and Online Optical Flow Enhancement for Deep Video Compression [14.445058335559994]
Motion information is represented as optical flows in most of the existing deep video compression networks.
We conduct experiments on a state-of-the-art deep video compression scheme, DCVC.
arXiv Detail & Related papers (2023-07-11T07:52:06Z)
- Towards Anytime Optical Flow Estimation with Event Cameras [35.685866753715416]
Event cameras are capable of responding to log-brightness changes in microseconds.
Existing datasets collected via event cameras provide limited frame rate optical flow ground truth.
We propose EVA-Flow, an EVent-based Anytime Flow estimation network to produce high-frame-rate event optical flow.
arXiv Detail & Related papers (2023-07-11T06:15:12Z)
- VideoFlow: Exploiting Temporal Cues for Multi-frame Optical Flow Estimation [61.660040308290796]
VideoFlow is a novel optical flow estimation framework for videos.
We first propose a TRi-frame Optical Flow (TROF) module that estimates bi-directional optical flows for the center frame in a three-frame manner.
With the iterative flow estimation refinement, the information fused in individual TROFs can be propagated into the whole sequence via MOP.
arXiv Detail & Related papers (2023-03-15T03:14:30Z)
- Versatile Learned Video Compression [26.976302025254043]
We propose a versatile learned video compression (VLVC) framework that uses one model to support all possible prediction modes.
Specifically, to realize versatile compression, we first build a motion compensation module that applies multiple 3D motion vector fields.
We show that the flow prediction module can largely reduce the transmission cost of voxel flows.
arXiv Detail & Related papers (2021-11-05T10:50:37Z)
- SCFlow: Optical Flow Estimation for Spiking Camera [50.770803466875364]
Spiking camera has enormous potential in real applications, especially for motion estimation in high-speed scenes.
Optical flow estimation has achieved remarkable success in image-based and event-based vision, but existing methods cannot be directly applied to the spike stream from a spiking camera.
This paper presents, SCFlow, a novel deep learning pipeline for optical flow estimation for spiking camera.
arXiv Detail & Related papers (2021-10-08T06:16:45Z)
- FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.