Optical Flow Based Motion Detection for Autonomous Driving
- URL: http://arxiv.org/abs/2203.11693v1
- Date: Thu, 3 Mar 2022 03:24:14 GMT
- Title: Optical Flow Based Motion Detection for Autonomous Driving
- Authors: Ka Man Lo
- Abstract summary: We train a neural network model to classify the motion status using optical flow field information as the input.
The experiments achieve high accuracy, showing that our idea is viable and promising.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Motion detection is a fundamental but challenging task for autonomous
driving. In scenes such as highways in particular, remote objects must be given
extra attention for better control decisions. Aiming at distant vehicles, we
train a neural network model to classify the motion status using optical flow
field information as the input. The experiments achieve high accuracy, showing
that our idea is viable and promising. The trained model also achieves
acceptable performance for nearby vehicles. Our work is implemented in PyTorch.
Open tools including nuScenes, FastFlowNet and RAFT are used.
Visualization videos are available at
https://www.youtube.com/playlist?list=PLVVrWgq4OrlBnRebmkGZO1iDHEksMHKGk .
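As a rough illustration of the setup described above, the following PyTorch sketch classifies a two-channel optical-flow patch cropped around a vehicle as static or moving. The architecture, patch size, and class layout are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class FlowMotionClassifier(nn.Module):
    """Toy CNN that classifies a 2-channel optical-flow patch
    (u, v components) as static vs. moving. Hypothetical architecture."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, flow_patch: torch.Tensor) -> torch.Tensor:
        # flow_patch: (B, 2, H, W) optical flow cropped around a vehicle box
        x = self.features(flow_patch).flatten(1)
        return self.head(x)

model = FlowMotionClassifier()
patch = torch.randn(4, 2, 32, 32)  # dummy flow patches; real ones would come
                                   # from FastFlowNet/RAFT on nuScenes frames
logits = model(patch)              # (4, 2) static/moving scores
```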
Related papers
- MOFA-Video: Controllable Image Animation via Generative Motion Field Adaptions in Frozen Image-to-Video Diffusion Model [78.11258752076046]
MOFA-Video is an advanced controllable image animation method that generates video from the given image using various additional controllable signals.
We design several domain-aware motion field adapters to control the generated motions in the video generation pipeline.
After training, the MOFA-Adapters in different domains can also work together for more controllable video generation.
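As a loose sketch of the adapter idea, the snippet below encodes a dense 2D motion field into a residual that conditions features of a frozen backbone; the injection scheme, shapes, and module names are assumptions, not MOFA-Video's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionFieldAdapter(nn.Module):
    """Illustrative adapter: encodes a dense 2D motion field into a residual
    added to features of a frozen generator block. Hypothetical design."""
    def __init__(self, feat_channels):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(2, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, feat_channels, 3, padding=1),
        )

    def forward(self, frozen_feat, motion_field):
        # Resize the (B, 2, H, W) motion field to the feature resolution,
        # then add its encoding to the frozen features.
        mf = F.interpolate(motion_field, size=frozen_feat.shape[-2:],
                           mode="bilinear", align_corners=False)
        return frozen_feat + self.encode(mf)

out = MotionFieldAdapter(320)(torch.randn(1, 320, 16, 16),
                              torch.randn(1, 2, 64, 64))
```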
arXiv Detail & Related papers (2024-05-30T16:22:22Z)
- Safe Navigation: Training Autonomous Vehicles using Deep Reinforcement Learning in CARLA [0.0]
The goal of this project is to train autonomous vehicles to navigate uncertain environments using deep reinforcement learning techniques.
The simulator provides a realistic and urban environment for training and testing self-driving models.
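A minimal sketch of the kind of deep-RL update such a project might use, here a single DQN-style temporal-difference step; the state/action dimensions and environment interface are hypothetical placeholders, not the CARLA API.

```python
import random
import torch
import torch.nn as nn

# Hypothetical setup: states are feature vectors from the simulator and
# actions are discrete (e.g. steer left/right, accelerate, brake).
STATE_DIM, NUM_ACTIONS, GAMMA = 64, 4, 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                      nn.Linear(128, NUM_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(state, action, reward, next_state, done):
    """One temporal-difference update on a single transition."""
    q = q_net(state)[action]
    with torch.no_grad():
        target = reward + GAMMA * q_net(next_state).max() * (1.0 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Dummy transition standing in for one simulator step.
td_update(torch.randn(STATE_DIM), random.randrange(NUM_ACTIONS),
          torch.tensor(1.0), torch.randn(STATE_DIM), torch.tensor(0.0))
```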
arXiv Detail & Related papers (2023-10-23T04:23:07Z)
- Follow Anything: Open-set detection, tracking, and following in real-time [89.83421771766682]
We present a robotic system to detect, track, and follow any object in real-time.
Our approach, dubbed "follow anything" (FAn), is an open-vocabulary and multimodal model.
FAn can be deployed on a laptop with a lightweight (6-8 GB) graphics card, achieving a throughput of 6-20 frames per second.
arXiv Detail & Related papers (2023-08-10T17:57:06Z)
- Linking vision and motion for self-supervised object-centric perception [16.821130222597155]
Object-centric representations enable autonomous driving algorithms to reason about interactions between many independent agents and scene features.
Traditionally these representations have been obtained via supervised learning, but this decouples perception from the downstream driving task and could harm generalization.
We adapt a self-supervised object-centric vision model to perform object decomposition using only RGB video and the pose of the vehicle as inputs.
arXiv Detail & Related papers (2023-07-14T04:21:05Z)
- Policy Pre-training for End-to-end Autonomous Driving via Self-supervised Geometric Modeling [96.31941517446859]
We propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving.
We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos.
In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input.
In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only.
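A simplified sketch of the photometric-error idea behind this pipeline: warp the source frame into the target view using predicted depth and relative pose, then compare. The code assumes a pinhole camera and omits PPGeo's actual networks and loss refinements.

```python
import torch
import torch.nn.functional as F

def inverse_warp(src, depth, pose, K):
    """Warp the source frame into the target view using the target frame's
    predicted depth and the relative pose (target -> source). Simplified
    pinhole-camera sketch, not PPGeo's actual implementation."""
    B, _, H, W = src.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).float()  # (3, H, W)
    cam = torch.linalg.inv(K) @ pix.reshape(3, -1)               # camera rays
    cam = cam.unsqueeze(0) * depth.reshape(B, 1, -1)             # 3D points
    R, t = pose[:, :3, :3], pose[:, :3, 3:]
    proj = K @ (R @ cam + t)                                     # reproject
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    u = uv[:, 0] / (W - 1) * 2 - 1                # normalize to [-1, 1]
    v = uv[:, 1] / (H - 1) * 2 - 1
    grid = torch.stack([u, v], -1).reshape(B, H, W, 2)
    return F.grid_sample(src, grid, align_corners=True)

# Dummy inputs standing in for network predictions on two video frames.
B, H, W = 1, 32, 32
K = torch.eye(3); K[0, 0] = K[1, 1] = 30.0; K[0, 2], K[1, 2] = W / 2, H / 2
tgt, src = torch.rand(B, 3, H, W), torch.rand(B, 3, H, W)
depth = torch.ones(B, 1, H, W)        # predicted depth of the target frame
pose = torch.eye(4).unsqueeze(0)      # predicted relative camera pose
photometric_loss = (inverse_warp(src, depth, pose, K) - tgt).abs().mean()
```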
arXiv Detail & Related papers (2023-01-03T08:52:49Z)
- Self-Supervised Moving Vehicle Detection from Audio-Visual Cues [29.06503735149157]
We propose a self-supervised approach that leverages audio-visual cues to detect moving vehicles in videos.
Our approach employs contrastive learning to localize vehicles in images, using corresponding pairs of images and recorded audio.
We show that our model can be used as a teacher to supervise an audio-only detection model.
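A generic contrastive objective in the spirit of this approach, here a symmetric InfoNCE over corresponding image/audio embeddings; the paper's actual model contrasts spatial image features with audio for localization, so this is only a sketch.

```python
import torch
import torch.nn.functional as F

def info_nce(img_emb, aud_emb, temperature=0.07):
    """Symmetric InfoNCE: matching image/audio pairs sit on the diagonal
    of the similarity matrix and are pulled together; all other pairs in
    the batch act as negatives."""
    img = F.normalize(img_emb, dim=1)
    aud = F.normalize(aud_emb, dim=1)
    logits = img @ aud.t() / temperature          # (B, B) similarity matrix
    labels = torch.arange(img.size(0))            # diagonal = positives
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.t(), labels)) / 2

loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))  # dummy embeddings
```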
arXiv Detail & Related papers (2022-01-30T09:52:14Z)
- Real Time Monocular Vehicle Velocity Estimation using Synthetic Data [78.85123603488664]
We look at the problem of estimating the velocity of road vehicles from a camera mounted on a moving car.
We propose a two-step approach where first an off-the-shelf tracker is used to extract vehicle bounding boxes and then a small neural network is used to regress the vehicle velocity.
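A minimal sketch of the second step, assuming the regressor consumes simple hand-picked features of a tracked box (center, size, and frame-to-frame deltas); the paper's actual inputs and network may differ.

```python
import torch
import torch.nn as nn

# Small MLP that regresses a scalar vehicle velocity from features of a
# tracked bounding box. The feature layout below is an assumption.
regressor = nn.Sequential(nn.Linear(8, 64), nn.ReLU(),
                          nn.Linear(64, 64), nn.ReLU(),
                          nn.Linear(64, 1))  # velocity in m/s

# Dummy batch: [cx, cy, w, h, dcx, dcy, dw, dh] per tracked vehicle.
box_features = torch.randn(16, 8)
velocity = regressor(box_features)  # (16, 1) predicted velocities
```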
arXiv Detail & Related papers (2021-09-16T13:10:27Z)
- YOLOP: You Only Look Once for Panoptic Driving Perception [21.802146960999394]
We present a panoptic driving perception network (YOLOP) to perform traffic object detection, drivable area segmentation and lane detection simultaneously.
It is composed of one encoder for feature extraction and three decoders to handle the specific tasks.
Our model performs extremely well on the challenging BDD100K dataset, achieving state-of-the-art on all three tasks in terms of accuracy and speed.
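A toy version of the shared-encoder/three-decoder layout: one feature extractor feeding a detection head, a drivable-area head, and a lane head. The real YOLOP uses a CSPDarknet backbone and proper detection heads; everything here is reduced to placeholder convolutions.

```python
import torch
import torch.nn as nn

class TinyPanopticNet(nn.Module):
    """Simplified shared-encoder / three-decoder layout in the spirit of
    YOLOP. All layers are toy stand-ins for the real architecture."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.detect_head = nn.Conv2d(64, 5, 1)  # box (4) + objectness (1)
        self.area_head = nn.Conv2d(64, 1, 1)    # drivable-area mask logits
        self.lane_head = nn.Conv2d(64, 1, 1)    # lane-line mask logits

    def forward(self, x):
        feat = self.encoder(x)  # shared features for all three tasks
        return (self.detect_head(feat), self.area_head(feat),
                self.lane_head(feat))

det, area, lane = TinyPanopticNet()(torch.randn(1, 3, 64, 64))
```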
arXiv Detail & Related papers (2021-08-25T14:19:42Z)
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004]
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
arXiv Detail & Related papers (2021-04-30T15:30:05Z)
- VM-MODNet: Vehicle Motion aware Moving Object Detection for Autonomous Driving [3.6550372593827887]
Moving object Detection (MOD) is a critical task in autonomous driving.
We leverage the vehicle motion information and feed it into the model to provide an adaptation mechanism based on ego-motion.
The proposed model using the Vehicle Motion Tensor (VMT) achieves an absolute improvement of 5.6% in mIoU over the baseline architecture.
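A minimal sketch of feeding ego-motion into a segmentation head by broadcasting it spatially and concatenating it with image features; the actual Vehicle Motion Tensor construction in VM-MODNet differs.

```python
import torch
import torch.nn as nn

class MotionAwareHead(nn.Module):
    """Toy fusion head: broadcast 6-DoF ego-motion to a spatial tensor and
    concatenate it with image features before predicting a moving mask."""
    def __init__(self, feat_ch=64, motion_dim=6):
        super().__init__()
        self.fuse = nn.Conv2d(feat_ch + motion_dim, 1, 1)  # mask logits

    def forward(self, feat, ego_motion):
        B, _, H, W = feat.shape
        motion_map = ego_motion.view(B, -1, 1, 1).expand(
            B, ego_motion.size(1), H, W)
        return self.fuse(torch.cat([feat, motion_map], dim=1))

mask_logits = MotionAwareHead()(torch.randn(2, 64, 16, 16),
                                torch.randn(2, 6))  # dummy ego-motion
```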
arXiv Detail & Related papers (2021-04-22T10:46:55Z)
- Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry [55.11913183006984]
We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
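A small sketch of the self-supervision signal, assuming the steering label is derived from the yaw change between consecutive visual-odometry poses and regressed from image features; both the label mapping and the regressor are illustrative.

```python
import math
import torch
import torch.nn as nn

def yaw_change(pose_a, pose_b):
    """Yaw difference between two consecutive camera poses (4x4 matrices),
    used here as a self-supervised steering label. Extracting yaw from the
    relative rotation and mapping it to steering is a simplification."""
    rel = torch.linalg.inv(pose_a) @ pose_b
    return math.atan2(rel[1, 0].item(), rel[0, 0].item())

# Dummy poses standing in for visual-odometry output on two frames.
pose_a, pose_b = torch.eye(4), torch.eye(4)
label = torch.tensor([[yaw_change(pose_a, pose_b)]])

# A toy regressor maps an image feature vector to the steering label.
steering_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))
loss = nn.functional.mse_loss(steering_net(torch.randn(1, 128)), label)
```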
arXiv Detail & Related papers (2021-03-20T16:29:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.