EV-MGRFlowNet: Motion-Guided Recurrent Network for Unsupervised
Event-based Optical Flow with Hybrid Motion-Compensation Loss
- URL: http://arxiv.org/abs/2305.07853v1
- Date: Sat, 13 May 2023 07:08:48 GMT
- Title: EV-MGRFlowNet: Motion-Guided Recurrent Network for Unsupervised
Event-based Optical Flow with Hybrid Motion-Compensation Loss
- Authors: Hao Zhuang, Xinjie Huang, Kuanxu Hou, Delei Kong, Chenming Hu, Zheng
Fang
- Abstract summary: Event cameras offer promising properties, such as high temporal resolution and high dynamic range.
Currently, most existing event-based works use deep learning to estimate optical flow.
We propose EV-MGRFlowNet, an unsupervised event-based optical flow estimation pipeline.
- Score: 4.266841662194981
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras offer promising properties, such as high temporal resolution
and high dynamic range. These benefits have been applied to many machine
vision tasks, especially optical flow estimation. Currently, most existing
event-based works use deep learning to estimate optical flow. However, their
networks have not fully exploited prior hidden states and motion flows.
Additionally, their supervision strategy has not fully leveraged the geometric
constraints of event data to unlock the potential of networks. In this paper,
we propose EV-MGRFlowNet, an unsupervised event-based optical flow estimation
pipeline with motion-guided recurrent networks using a hybrid
motion-compensation loss. First, we propose a feature-enhanced recurrent
encoder network (FERE-Net) which fully utilizes prior hidden states to obtain
multi-level motion features. Then, we propose a flow-guided decoder network
(FGD-Net) to integrate prior motion flows. Finally, we design a hybrid
motion-compensation loss (HMC-Loss) to strengthen geometric constraints for the
more accurate alignment of events. Experimental results show that our method
outperforms the current state-of-the-art (SOTA) method on the MVSEC dataset,
with an average reduction of approximately 22.71% in average endpoint error
(AEE). To our knowledge, our method ranks first among unsupervised
learning-based methods.
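As background for the motion-compensation idea and the AEE figure quoted above, here is a minimal NumPy sketch (not the authors' code): events are warped to a reference time by a candidate flow field, accumulated into an image of warped events, and the negative variance of that image serves as an unsupervised contrast loss (a correct flow sharpens the image); AEE is the mean Euclidean distance between predicted and ground-truth flow vectors. The function names and the nearest-pixel accumulation are illustrative assumptions.

```python
import numpy as np

def warp_events(xs, ys, ts, flow, t_ref=0.0):
    """Warp events (x, y, t) to a reference time t_ref using a
    per-pixel flow field of shape (H, W, 2) in pixels per unit time."""
    u = flow[ys, xs, 0]
    v = flow[ys, xs, 1]
    xw = xs + (t_ref - ts) * u
    yw = ys + (t_ref - ts) * v
    return xw, yw

def contrast_loss(xw, yw, shape):
    """Accumulate warped events into an image and return the negative
    variance: a good flow collapses events from the same edge onto the
    same pixels, maximizing contrast (variance)."""
    h, w = shape
    iwe = np.zeros((h, w))
    xi = np.clip(np.round(xw).astype(int), 0, w - 1)
    yi = np.clip(np.round(yw).astype(int), 0, h - 1)
    np.add.at(iwe, (yi, xi), 1.0)  # unbuffered accumulation per event
    return -np.var(iwe)

def aee(flow_pred, flow_gt):
    """Average endpoint error: mean Euclidean distance between
    predicted and ground-truth flow vectors."""
    return np.mean(np.linalg.norm(flow_pred - flow_gt, axis=-1))
```

For events generated by an edge moving at constant velocity, warping with the true flow gives a strictly lower (more negative) contrast loss than warping with zero flow, which is the geometric constraint a motion-compensation loss exploits.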
Related papers
- Optimal OnTheFly Feedback Control of Event Sensors [0.14999444543328289]
Event-based vision sensors produce an asynchronous stream of events which are triggered when pixel intensity variation exceeds a threshold.
We propose an approach for dynamic feedback control of activation thresholds, in which a controller network analyzes the past emitted events.
We demonstrate that our approach outperforms both fixed and randomly-varying threshold schemes by 6-12% in terms of LPIPS perceptual image dissimilarity metric.
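A learned controller network is beyond a short snippet, but the feedback idea can be illustrated with a toy proportional rule (an assumption for illustration, not the paper's method): raise a pixel's contrast threshold when it emits events above a target rate, and lower it otherwise.

```python
import numpy as np

def update_thresholds(thresh, event_counts, target_rate, gain=0.1,
                      t_min=0.05, t_max=1.0):
    """Proportional feedback on per-pixel contrast thresholds: pixels
    firing above the target rate get a higher threshold (fewer events),
    pixels below it get a lower one. tanh bounds the step size, and the
    result is clipped to the sensor's valid threshold range."""
    error = event_counts - target_rate
    thresh = thresh * (1.0 + gain * np.tanh(error))
    return np.clip(thresh, t_min, t_max)
```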
arXiv Detail & Related papers (2024-08-23T10:49:16Z)
- OFMPNet: Deep End-to-End Model for Occupancy and Flow Prediction in Urban Environment [0.0]
We introduce an end-to-end neural network methodology designed to predict the future behaviors of all dynamic objects in the environment.
We propose a novel time-weighted motion flow loss, whose application has shown a substantial decrease in end-point error.
arXiv Detail & Related papers (2024-04-02T19:37:58Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- EM-driven unsupervised learning for efficient motion segmentation [3.5232234532568376]
This paper presents a CNN-based fully unsupervised method for motion segmentation from optical flow.
We use the Expectation-Maximization (EM) framework to leverage the loss function and the training procedure of our motion segmentation neural network.
Our method outperforms comparable unsupervised methods and is very efficient.
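The EM loop for clustering flow vectors into motion models can be sketched as follows. This is a generic EM with constant-motion models and isotropic Gaussian likelihoods, an illustrative stand-in rather than the paper's exact formulation; the deterministic initialization is also an assumption.

```python
import numpy as np

def em_motion_segmentation(flow, k=2, iters=20, sigma=1.0):
    """Cluster per-pixel flow vectors into k constant-motion models
    with EM. flow: (N, 2) array. Returns (responsibilities, models)."""
    n = flow.shape[0]
    # Deterministic init: pick k evenly spaced flow vectors as models
    models = flow[np.linspace(0, n - 1, k).astype(int)]
    for _ in range(iters):
        # E-step: soft-assign each flow vector to each motion model
        d2 = ((flow[:, None, :] - models[None, :, :]) ** 2).sum(-1)
        logp = -d2 / (2 * sigma ** 2)
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate each model as the responsibility-weighted mean
        models = (r.T @ flow) / r.sum(axis=0)[:, None]
    return r, models
```

On flow fields with two well-separated rigid motions, the recovered models converge to the two dominant velocities and the responsibilities give a soft segmentation mask.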
arXiv Detail & Related papers (2022-01-06T14:35:45Z)
- MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints [70.76761166614511]
We present a novel self-supervised algorithm named MotionHint for monocular visual odometry (VO).
Our MotionHint algorithm can be easily applied to existing open-sourced state-of-the-art SSM-VO systems.
arXiv Detail & Related papers (2021-09-14T15:35:08Z)
- Energy-Efficient Model Compression and Splitting for Collaborative Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and $CO_2$ emission compared to the considered baselines.
arXiv Detail & Related papers (2021-06-02T07:36:27Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often occupy a large number of parameters and require heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- Feature Flow: In-network Feature Flow Estimation for Video Object Detection [56.80974623192569]
Optical flow is widely used in computer vision tasks to provide pixel-level motion information.
A common approach is to forward optical flow to a neural network and fine-tune this network on the task dataset.
We propose a novel network (IFF-Net) with an In-network Feature Flow estimation module for video object detection.
arXiv Detail & Related papers (2020-09-21T07:55:50Z)
- Implicit Euler ODE Networks for Single-Image Dehazing [33.34490764631837]
We propose an efficient end-to-end multi-level implicit network (MI-Net) for the single image dehazing problem.
Our method outperforms existing methods and achieves the state-of-the-art performance.
arXiv Detail & Related papers (2020-07-13T15:27:33Z)
- Cascade Network with Guided Loss and Hybrid Attention for Two-view Geometry [32.52184271700281]
We propose a Guided Loss to establish the direct negative correlation between the loss and Fn-measure.
We then propose a hybrid attention block to extract features.
Experiments have shown that our network achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-07-11T07:44:04Z)
- What Matters in Unsupervised Optical Flow [51.45112526506455]
We compare and analyze a set of key components in unsupervised optical flow.
We construct a number of novel improvements to unsupervised flow models.
We present a new unsupervised flow technique that significantly outperforms the previous state-of-the-art.
arXiv Detail & Related papers (2020-06-08T19:36:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.