A 2.5D Vehicle Odometry Estimation for Vision Applications
- URL: http://arxiv.org/abs/2105.02679v1
- Date: Thu, 6 May 2021 14:01:46 GMT
- Title: A 2.5D Vehicle Odometry Estimation for Vision Applications
- Authors: Paul Moran, Leroy-Francisco Periera, Anbuchezhiyan Selvaraju, Tejash Prakash, Pantelis Ermilios, John McDonald, Jonathan Horgan, Ciarán Eising
- Abstract summary: We describe a set of steps to combine a planar odometry based on wheel sensors with a suspension model based on linear suspension sensors.
The aim is to determine a more accurate estimate of the camera pose.
- Score: 0.2069421313572092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a method to estimate the pose of a sensor mounted on a
vehicle as the vehicle moves through the world, an important topic for
autonomous driving systems. Based on a set of commonly deployed vehicular
odometric sensors, with outputs available on automotive communication buses
(e.g. CAN or FlexRay), we describe a set of steps to combine a planar odometry
based on wheel sensors with a suspension model based on linear suspension
sensors. The aim is to determine a more accurate estimate of the camera pose.
We outline its usage for applications in both visualisation and computer
vision.
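Below is a minimal sketch of the idea: dead-reckon a planar pose from wheel speed and yaw rate, recover roll, pitch, and bounce from four suspension-height sensors via a small-angle plane fit, and compose the result with the camera's fixed mounting offset. The function names, sensor layout, and sign conventions are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def planar_odometry(x, y, yaw, v, yaw_rate, dt):
    """Dead-reckon the planar vehicle pose from speed and yaw rate."""
    yaw = yaw + yaw_rate * dt
    x = x + v * dt * np.cos(yaw)
    y = y + v * dt * np.sin(yaw)
    return x, y, yaw

def suspension_attitude(heights, half_track, half_wheelbase):
    """Small-angle plane fit over four suspension-height sensors
    (front-left, front-right, rear-left, rear-right), giving body
    roll, pitch, and bounce relative to the road plane."""
    fl, fr, rl, rr = heights
    bounce = np.mean(heights)
    roll = ((fl + rl) - (fr + rr)) / (4.0 * half_track)
    pitch = ((rl + rr) - (fl + fr)) / (4.0 * half_wheelbase)
    return roll, pitch, bounce

def camera_pose(x, y, yaw, roll, pitch, z, cam_offset):
    """Compose the 2.5D body pose with the camera's rigid mounting
    offset (a 3-vector in the body frame), returning (R, t)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    # Z-Y-X (yaw-pitch-roll) rotation of the vehicle body.
    R = (np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
         @ np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
         @ np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]]))
    t = np.array([x, y, z]) + R @ cam_offset
    return R, t
```

In practice the wheel-speed and yaw-rate signals would be read from the CAN or FlexRay bus at a fixed rate, and the suspension-derived attitude would typically be filtered before being applied to the camera extrinsics.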
Related papers
- Can you see me now? Blind spot estimation for autonomous vehicles using
scenario-based simulation with random reference sensors [5.910402196056647]
A Monte Carlo-based reference sensor simulation enables us to accurately estimate blind spot size as a metric of coverage.
Our method leverages point clouds from LiDAR sensors or camera depth images from high-fidelity simulations of target scenarios to provide accurate and actionable visibility estimates.
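A toy Monte Carlo sketch of this idea on a 2D occupancy grid is shown below: sample random reference points around the vehicle, ray-march each one toward the sensor, and report the occluded fraction as the blind-spot estimate. The grid representation, step size, and sampling scheme are simplifications assumed here, not the paper's high-fidelity simulation pipeline.

```python
import numpy as np

def blind_spot_fraction(occupied, sensor_origin, n_samples=10000,
                        radius=20.0, step=0.25, seed=0):
    """Fraction of random ground points within `radius` of the vehicle
    that are occluded from the sensor. `occupied` is a set of occupied
    (i, j) grid cells at 1 m resolution."""
    rng = np.random.default_rng(seed)
    # Uniform samples in a disc around the vehicle.
    r = radius * np.sqrt(rng.random(n_samples))
    theta = 2.0 * np.pi * rng.random(n_samples)
    pts = np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)
    blocked = 0
    for p in pts:
        d = np.linalg.norm(p - sensor_origin)
        direction = (p - sensor_origin) / max(d, 1e-9)
        # March along the ray; the sample is occluded if any cell
        # between the sensor and the sample is occupied.
        for s in np.arange(step, d, step):
            cell = tuple(np.floor(sensor_origin + s * direction).astype(int))
            if cell in occupied:
                blocked += 1
                break
    return blocked / n_samples

# Example: a single obstacle cell directly in front of the sensor.
print(blind_spot_fraction({(3, 0)}, np.array([0.0, 0.0])))
```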
arXiv Detail & Related papers (2024-02-01T10:14:53Z)
- Applications of Computer Vision in Autonomous Vehicles: Methods, Challenges and Future Directions [2.693342141713236]
This paper reviews publications on computer vision and autonomous driving published during the last ten years.
In particular, we first investigate the development of autonomous driving systems and summarize the systems developed by major automotive manufacturers in different countries.
Then, a comprehensive overview of computer vision applications for autonomous driving, such as depth estimation, object detection, lane detection, and traffic sign recognition, is given.
arXiv Detail & Related papers (2023-11-15T16:41:18Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- HUM3DIL: Semi-supervised Multi-modal 3D Human Pose Estimation for Autonomous Driving [95.42203932627102]
3D human pose estimation is an emerging technology that can enable autonomous vehicles to perceive and understand the subtle and complex behaviors of pedestrians.
Specifically, we embed LiDAR points into pixel-aligned multi-modal features, which we pass through a sequence of Transformer refinement stages.
Our method makes efficient use of these complementary signals in a semi-supervised fashion and outperforms existing methods by a large margin.
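As a rough sketch of what "pixel-aligned" means here (our simplified reading, with the Transformer stages omitted): project each LiDAR point into the image, gather the image feature vector at that pixel, and concatenate it with the 3D point.

```python
import numpy as np

def pixel_aligned_features(points, feat_map, K):
    """points: Nx3 LiDAR points in the camera frame; feat_map: HxWxC
    image features; K: 3x3 camera intrinsics. Returns Nx(3+C)."""
    pts = points[points[:, 2] > 0.1]     # keep points in front of the camera
    uv = (K @ pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]          # pinhole projection to pixels
    h, w, _ = feat_map.shape
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
    img_feat = feat_map[v, u]            # nearest-neighbour feature gather
    return np.concatenate([pts, img_feat], axis=1)
```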
arXiv Detail & Related papers (2022-12-15T11:15:14Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
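A hedged sketch of the underlying idea: project lidar points of a known class (e.g. road) into the image with candidate extrinsics and score how many land on matching pixels of the segmentation mask; the coarse initial pose is then refined by maximising this score. The projection model and scoring function below are illustrative, not the paper's exact objective.

```python
import numpy as np

def project(points, K, R, t):
    """Pinhole projection of Nx3 lidar points with extrinsics (R, t)."""
    cam = (R @ points.T).T + t
    cam = cam[cam[:, 2] > 0.1]                 # points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

def alignment_score(points, mask, K, R, t):
    """Fraction of projected points landing on the target-class mask."""
    uv = np.round(project(points, K, R, t)).astype(int)
    h, w = mask.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    uv = uv[ok]
    if len(uv) == 0:
        return 0.0
    return float(mask[uv[:, 1], uv[:, 0]].mean())
```

A simple refinement loop would perturb (R, t) around the coarse initial pose and keep the candidate with the highest score.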
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Rope3D: The Roadside Perception Dataset for Autonomous Driving and Monocular 3D Object Detection Task [48.555440807415664]
We present Rope3D, the first high-diversity, challenging roadside-perception 3D dataset, captured from a novel viewpoint.
The dataset consists of 50k images and over 1.5M 3D objects in various scenes.
We propose to leverage geometry constraints to resolve the inherent ambiguities caused by the various sensors and viewpoints.
arXiv Detail & Related papers (2022-03-25T12:13:23Z)
- 2.5D Vehicle Odometry Estimation [0.2302750678082437]
It is well understood that in ADAS applications, a good estimate of the pose of the vehicle is required.
This paper proposes a metaphorically named 2.5D odometry, whereby the planar odometry derived from the yaw rate sensor is augmented by a linear model of suspension.
We show, by experimental results with a DGPS/IMU reference, that this model provides highly accurate odometry estimates, compared with existing methods.
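For reference, a standard discrete-time form of the yaw-rate-based planar odometry that the suspension model then augments (our notation, not necessarily the paper's):

```latex
\[
\begin{aligned}
\theta_{k+1} &= \theta_k + \dot{\psi}_k\,\Delta t,\\
x_{k+1} &= x_k + v_k\,\Delta t\,\cos\theta_{k+1},\\
y_{k+1} &= y_k + v_k\,\Delta t\,\sin\theta_{k+1},
\end{aligned}
\]
```

where $v_k$ is the wheel-speed-derived velocity and $\dot{\psi}_k$ the measured yaw rate; the linear suspension model adds roll, pitch, and height on top of this planar estimate.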
arXiv Detail & Related papers (2021-11-16T11:54:34Z)
- Single View Physical Distance Estimation using Human Pose [18.9877515094788]
We propose a fully automated system that simultaneously estimates the camera intrinsics, the ground plane, and physical distances between people from a single RGB image or video.
The proposed approach enables existing camera systems to measure physical distances without needing a dedicated calibration process or range sensors.
We augment the publicly available MEVA dataset with additional distance annotations, resulting in MEVADA, the first evaluation benchmark for the pose-based auto-calibration and distance estimation problem.
arXiv Detail & Related papers (2021-06-18T19:50:40Z)
- Self-Supervised Steering Angle Prediction for Vehicle Control Using Visual Odometry [55.11913183006984]
We show how a model can be trained to control a vehicle's trajectory using camera poses estimated through visual odometry methods.
We propose a scalable framework that leverages trajectory information from several different runs using a camera setup placed at the front of a car.
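One plausible way to turn such visual-odometry poses into steering labels (an illustrative sketch using a kinematic bicycle model; the wheelbase and pose format are assumptions, not the paper's pipeline):

```python
import numpy as np

def steering_from_poses(poses, wheelbase=2.7):
    """poses: list of (x, y, yaw) vehicle poses along a run. Returns a
    kinematic-bicycle-model steering angle per consecutive pose pair."""
    angles = []
    for (x0, y0, yaw0), (x1, y1, yaw1) in zip(poses, poses[1:]):
        ds = np.hypot(x1 - x0, y1 - y0)                     # arc length
        dyaw = (yaw1 - yaw0 + np.pi) % (2 * np.pi) - np.pi  # wrapped heading change
        curvature = dyaw / ds if ds > 1e-6 else 0.0
        angles.append(np.arctan(wheelbase * curvature))     # bicycle model
    return angles
```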
arXiv Detail & Related papers (2021-03-20T16:29:01Z)
- Ego-motion and Surrounding Vehicle State Estimation Using a Monocular Camera [11.29865843123467]
We propose a novel machine learning method to estimate ego-motion and surrounding vehicle state using a single monocular camera.
Our approach is based on a combination of three deep neural networks to estimate the 3D vehicle bounding box, depth, and optical flow from a sequence of images.
arXiv Detail & Related papers (2020-05-04T16:41:38Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach estimates the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.