Towards Accurate Ground Plane Normal Estimation from Ego-Motion
- URL: http://arxiv.org/abs/2212.04224v1
- Date: Thu, 8 Dec 2022 12:06:36 GMT
- Title: Towards Accurate Ground Plane Normal Estimation from Ego-Motion
- Authors: Jiaxin Zhang, Wei Sui, Qian Zhang, Tao Chen and Cong Yang
- Abstract summary: We introduce a novel approach for ground plane normal estimation of wheeled vehicles.
In practice, the ground plane changes dynamically due to braking and unstable road surfaces.
Our proposed method uses only odometry as input and estimates accurate ground plane normal vectors in real time.
- Score: 10.22470503842832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a novel approach for ground plane normal
estimation of wheeled vehicles. In practice, the ground plane changes
dynamically due to braking and unstable road surfaces. As a result, the vehicle
pose, especially the pitch angle, oscillates to an extent ranging from subtle to
obvious. Thus, estimating the ground plane normal is meaningful, since it can be
encoded to improve the robustness of various autonomous driving tasks (e.g., 3D
object detection, road surface reconstruction, and trajectory planning). Our
proposed method uses only odometry as input and estimates accurate ground plane
normal vectors in real time. In particular, it fully exploits the underlying
connection between the ego pose odometry (ego-motion) and the nearby ground
plane. Built on that, an Invariant Extended Kalman Filter (IEKF) is designed to
estimate the normal vector in the sensor's coordinate frame. Thus, our proposed
method is simple yet efficient and supports both camera- and inertial-based
odometry algorithms. Its usability and the marked improvement in robustness are
validated through multiple experiments on public datasets. For instance, we
achieve state-of-the-art accuracy on the KITTI dataset with an estimated vector
error of 0.39°. Our code is available at github.com/manymuch/ground_normal_filter.
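The core idea can be illustrated with a short sketch: read the ground orientation out of each ego pose reported by odometry and smooth it over time. The snippet below is only a simplified stand-in for the paper's IEKF; the camera-to-world rotations `R_wc`, the z-up world convention, and the exponential smoother are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normal_from_pose(R_wc, up_world=np.array([0.0, 0.0, 1.0])):
    """Rotate an assumed world up-vector into the sensor frame.

    R_wc: 3x3 rotation from the camera/sensor frame to the world frame,
    taken from any odometry source (visual or inertial).
    """
    n_cam = R_wc.T @ up_world
    return n_cam / np.linalg.norm(n_cam)

def filter_normals(rotations, alpha=0.9):
    """Smooth per-frame normals with a simple exponential filter.

    This replaces the paper's Invariant Extended Kalman Filter with the
    simplest possible smoother, just to show the pipeline shape: the
    filtered normal tracks slow pitch/roll changes while suppressing
    frame-to-frame oscillation caused by braking and rough road.
    """
    n_est = None
    filtered = []
    for R_wc in rotations:
        n_obs = normal_from_pose(R_wc)
        n_est = n_obs if n_est is None else alpha * n_est + (1.0 - alpha) * n_obs
        n_est = n_est / np.linalg.norm(n_est)
        filtered.append(n_est)
    return filtered
```

Feeding a sequence of odometry rotations through such a filter yields one unit normal per frame in the sensor frame, which is the quantity that can then be encoded into downstream tasks such as 3D object detection.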
Related papers
- Automatic dimensionality reduction of Twin-in-the-Loop Observers [1.6877390079162282]
This paper aims to find a procedure to tune the high-complexity observer by lowering its dimensionality.
The strategies have been validated for speed and yaw-rate estimation on real-world data.
arXiv Detail & Related papers (2024-01-18T10:14:21Z)
- NeuralGF: Unsupervised Point Normal Estimation by Learning Neural Gradient Function [55.86697795177619]
Normal estimation for 3D point clouds is a fundamental task in 3D geometry processing.
We introduce a new paradigm for learning neural gradient functions, which encourages the neural network to fit the input point clouds.
Our excellent results on widely used benchmarks demonstrate that our method can learn more accurate normals for both unoriented and oriented normal estimation tasks (a sketch of the gradient-to-normal step follows below).
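As a loose illustration of the gradient-to-normal step (a generic sketch assuming a small PyTorch MLP; the network, its size, and the omitted training loop are illustrative and not NeuralGF's actual design):

```python
import torch

class ImplicitField(torch.nn.Module):
    """Tiny MLP mapping a 3D point to a scalar field value f(x)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(3, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def normals_from_gradient(model, points):
    """Estimate normals as the normalised spatial gradient of the field.

    points: (N, 3) tensor of query points. Fitting the field to the input
    cloud is omitted; this only shows how a normal is read off via autograd.
    """
    pts = points.clone().requires_grad_(True)
    f = model(pts).sum()  # sum => one backward pass yields per-point gradients
    (grad,) = torch.autograd.grad(f, pts)
    return torch.nn.functional.normalize(grad, dim=-1)
```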
arXiv Detail & Related papers (2023-11-01T09:25:29Z)
- Neural Gradient Learning and Optimization for Oriented Point Normal Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability for local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z)
- Vanishing Point Estimation in Uncalibrated Images with Prior Gravity Direction [82.72686460985297]
We tackle the problem of estimating a Manhattan frame.
We derive two new 2-line solvers, one of which does not suffer from singularities affecting existing solvers.
We also design a new non-minimal method, running on an arbitrary number of lines, to boost the performance in local optimization.
arXiv Detail & Related papers (2023-08-21T13:03:25Z)
- Normal Transformer: Extracting Surface Geometry from LiDAR Points Enhanced by Visual Semantics [6.516912796655748]
This paper presents a technique for estimating surface normals from 3D point clouds and 2D colour images.
We have developed a transformer neural network that learns to utilise the hybrid information of visual semantics and 3D geometric data.
arXiv Detail & Related papers (2022-11-19T03:55:09Z)
- Ground Plane Matters: Picking Up Ground Plane Prior in Monocular 3D Object Detection [92.75961303269548]
The ground plane prior is a very informative geometry clue in monocular 3D object detection (M3OD).
We propose a Ground Plane Enhanced Network (GPENet) which resolves both issues at one go.
Our GPENet can outperform other methods and achieve state-of-the-art performance, demonstrating the effectiveness and superiority of the proposed approach.
arXiv Detail & Related papers (2022-11-03T02:21:35Z)
- Uncertainty-Aware Camera Pose Estimation from Points and Lines [101.03675842534415]
Perspective-n-Point-and-Line (PnPL) aims at fast, accurate, and robust camera localization with respect to a 3D model from 2D-3D feature coordinates.
arXiv Detail & Related papers (2021-07-08T15:19:36Z)
- Accurate and Robust Scale Recovery for Monocular Visual Odometry Based on Plane Geometry [7.169216737580712]
We develop a lightweight scale recovery framework leveraging an accurate and robust estimation of the ground plane.
Experiments on the KITTI dataset show that the proposed framework achieves state-of-the-art accuracy in terms of translation errors.
Due to the lightweight design, our framework also runs at a high frequency of 20 Hz on the dataset (a minimal sketch of the camera-height-based scale recovery idea follows below).
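As a rough illustration of the standard camera-height trick for recovering metric scale from an estimated ground plane (a generic sketch, not necessarily this paper's exact formulation; the plane-fitting routine, the choice of road-level points, and the `camera_height` value are assumptions):

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane fit n·x + d = 0 to roughly road-level 3D points.

    points: (N, 3) array of triangulated points below the camera,
    expressed in the (scale-ambiguous) camera frame.
    """
    centroid = points.mean(axis=0)
    # Normal = right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = -normal @ centroid
    return normal, d

def recover_scale(points, camera_height=1.65):
    """Scale factor that maps the up-to-scale reconstruction to metres.

    camera_height is the known mounting height above the road (an assumed
    value; e.g. roughly 1.65 m for the KITTI setup).
    """
    normal, d = fit_ground_plane(points)
    # Distance from the camera origin (0, 0, 0) to the fitted plane.
    dist_to_plane = abs(d) / np.linalg.norm(normal)
    return camera_height / dist_to_plane
```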
arXiv Detail & Related papers (2021-01-15T07:21:24Z)
- Train in Germany, Test in The USA: Making 3D Object Detectors Generalize [59.455225176042404]
Deep learning has substantially improved the 3D object detection accuracy for LiDAR and stereo camera data alike.
Most datasets for autonomous driving are collected within a narrow subset of cities within one country.
In this paper we consider the task of adapting 3D object detectors from one dataset to another.
arXiv Detail & Related papers (2020-05-17T00:56:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.