Particle Filter SLAM for Vehicle Localization
- URL: http://arxiv.org/abs/2402.07429v2
- Date: Tue, 20 Feb 2024 02:42:33 GMT
- Title: Particle Filter SLAM for Vehicle Localization
- Authors: Tianrui Liu, Changxin Xu, Yuxin Qiao, Chufeng Jiang, Jiqiang Yu
- Abstract summary: We address the challenges of SLAM by adopting the Particle Filter SLAM method.
Our approach leverages encoder data and fiber optic gyro (FOG) information to enable precise estimation of vehicle motion.
The integration of these data streams culminates in the establishment of a Particle Filter SLAM framework.
- Score: 2.45723043286596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simultaneous Localization and Mapping (SLAM) presents a formidable challenge
in robotics, involving the dynamic construction of a map while concurrently
determining the precise location of the robotic agent within an unfamiliar
environment. This intricate task is further compounded by the inherent
"chicken-and-egg" dilemma, where accurate mapping relies on a dependable
estimation of the robot's location, and vice versa. Moreover, the computational
intensity of SLAM adds an additional layer of complexity, making it a crucial
yet demanding topic in the field. In our research, we address the challenges of
SLAM by adopting the Particle Filter SLAM method. Our approach leverages
encoder data and fiber optic gyro (FOG) information to enable precise
estimation of vehicle motion, while lidar technology contributes to
environmental perception by providing detailed insights into surrounding
obstacles. The integration of these data streams culminates in the
establishment of a Particle Filter SLAM framework, representing a key endeavor
in this paper to effectively navigate and overcome the complexities associated
with simultaneous localization and mapping in robotic systems.
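
Below is a minimal sketch of the particle-filter SLAM loop the abstract describes: particles are propagated with a motion model driven by encoder speed and FOG yaw rate, weighted by how well each particle's lidar scan matches a log-odds occupancy grid, and resampled when the effective sample size collapses. The class, noise scales, grid parameters, and map-correlation scoring are illustrative assumptions, not the authors' implementation.

```python
# Minimal particle-filter SLAM sketch (NumPy only). All names, noise scales,
# and grid parameters are illustrative assumptions, not the paper's code.
import numpy as np

class ParticleFilterSLAM:
    def __init__(self, n_particles=100, grid_size=500, resolution=0.5):
        self.n = n_particles
        self.states = np.zeros((n_particles, 3))       # x, y, yaw per particle
        self.weights = np.full(n_particles, 1.0 / n_particles)
        self.grid = np.zeros((grid_size, grid_size))   # log-odds occupancy map
        self.resolution = resolution                   # meters per cell
        self.origin = grid_size // 2                   # map origin at grid center

    def predict(self, v_encoder, yaw_rate_fog, dt):
        """Propagate particles with encoder speed and FOG yaw rate plus noise."""
        noise = np.random.normal(0.0, [0.05, 0.05, 0.01], (self.n, 3))
        yaw = self.states[:, 2]
        self.states[:, 0] += v_encoder * dt * np.cos(yaw) + noise[:, 0]
        self.states[:, 1] += v_encoder * dt * np.sin(yaw) + noise[:, 1]
        self.states[:, 2] += yaw_rate_fog * dt + noise[:, 2]

    def _scan_to_cells(self, scan_xy, pose):
        """Project lidar endpoints (sensor frame) into grid cells for one pose."""
        x, y, yaw = pose
        c, s = np.cos(yaw), np.sin(yaw)
        pts = scan_xy @ np.array([[c, -s], [s, c]]).T + np.array([x, y])
        cells = (pts / self.resolution).astype(int) + self.origin
        return np.clip(cells, 0, self.grid.shape[0] - 1)

    def update(self, scan_xy):
        """Weight particles by map correlation of the scan with the grid."""
        scores = np.empty(self.n)
        for i in range(self.n):
            cells = self._scan_to_cells(scan_xy, self.states[i])
            scores[i] = self.grid[cells[:, 0], cells[:, 1]].sum()
        self.weights = np.exp(scores - scores.max())   # numerically stable
        self.weights /= self.weights.sum()

    def update_map(self, scan_xy, hit_logodds=0.9):
        """Grow the map from the highest-weight particle's pose (endpoints only)."""
        cells = self._scan_to_cells(scan_xy, self.states[np.argmax(self.weights)])
        self.grid[cells[:, 0], cells[:, 1]] += hit_logodds

    def resample(self):
        """Systematic resampling when the effective sample size collapses."""
        n_eff = 1.0 / np.sum(self.weights ** 2)
        if n_eff < self.n / 2:
            u = (np.arange(self.n) + np.random.rand()) / self.n
            idx = np.minimum(np.searchsorted(np.cumsum(self.weights), u), self.n - 1)
            self.states = self.states[idx]
            self.weights.fill(1.0 / self.n)
```

A per-scan step would then run predict, update, resample, and update_map in sequence, with the pose estimate taken as the weighted mean of the particle set.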
Related papers
- Real-time Spatial-temporal Traversability Assessment via Feature-based Sparse Gaussian Process [14.428139979659395]
Terrain analysis is critical for the practical application of ground mobile robots in real-world tasks.
We propose a novel spatial-temporal traversability assessment method, which aims to enable autonomous robots to navigate through complex terrains.
We develop an autonomous navigation framework integrated with the traversability map and validate it with a differential driven vehicle in complex outdoor environments.
arXiv Detail & Related papers (2025-03-06T06:26:57Z) - Kriformer: A Novel Spatiotemporal Kriging Approach Based on Graph Transformers [5.4381914710364665]
This study addresses the challenges posed by sparse sensor deployment and unreliable data by framing the problem as an environmental data estimation task.
A graph transformer based model, Kriformer, estimates data at locations without sensors by mining spatial and temporal correlations, even with limited resources.
arXiv Detail & Related papers (2024-09-23T11:01:18Z) - Deep Attention Driven Reinforcement Learning (DAD-RL) for Autonomous Decision-Making in Dynamic Environment [2.3575550107698016]
We introduce an AV-centric spatio-temporal attention encoding (STAE) mechanism for learning dynamic interactions with different surrounding vehicles.
To understand map and route context, we employ a context encoder to extract context maps.
The resulting model is trained using the Soft Actor Critic (SAC) algorithm.
arXiv Detail & Related papers (2024-07-12T02:34:44Z) - Outlier-Robust Long-Term Robotic Mapping Leveraging Ground Segmentation [1.7948767405202701]
I propose a robust long-term robotic mapping system that works out of the box.
I propose (i) fast and robust ground segmentation to reject outlier measurements,
and (ii) outlier-robust registration with ground segmentation that tolerates the presence of gross outliers.
arXiv Detail & Related papers (2024-05-18T04:56:15Z) - Improved LiDAR Odometry and Mapping using Deep Semantic Segmentation and
Novel Outliers Detection [1.0334138809056097]
We propose a novel framework for real-time LiDAR odometry and mapping based on the LOAM architecture for fast-moving platforms.
Our framework utilizes semantic information produced by a deep learning model to improve point-to-line and point-to-plane matching.
We study the effect of improving the matching process on the robustness of LiDAR odometry against high speed motion.
arXiv Detail & Related papers (2024-03-05T16:53:24Z) - Efficient Real-time Smoke Filtration with 3D LiDAR for Search and Rescue
with Autonomous Heterogeneous Robotic Systems [56.838297900091426]
Smoke and dust degrade the performance of any mobile robotic platform that relies on onboard perception systems.
This paper proposes a novel modular filtration pipeline based on intensity and spatial information.
arXiv Detail & Related papers (2023-08-14T16:48:57Z) - UnLoc: A Universal Localization Method for Autonomous Vehicles using
LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z) - Model-free Motion Planning of Autonomous Agents for Complex Tasks in
Partially Observable Environments [3.7660066212240753]
Motion planning of autonomous agents in partially known environments is a challenging problem.
This paper proposes a model-free reinforcement learning approach to address this problem.
We show that our proposed method effectively addresses environment, action, and observation uncertainties.
arXiv Detail & Related papers (2023-04-30T19:57:39Z) - Multimodal Dataset from Harsh Sub-Terranean Environment with Aerosol
Particles for Frontier Exploration [55.41644538483948]
This paper introduces a multimodal dataset from the harsh and unstructured underground environment with aerosol particles.
It contains synchronized raw data measurements from all onboard sensors in Robot Operating System (ROS) format.
The focus of this paper is not only to capture both temporal and spatial data diversity but also to present the impact of harsh conditions on the captured data.
arXiv Detail & Related papers (2023-04-27T20:21:18Z) - Large-scale Autonomous Flight with Real-time Semantic SLAM under Dense
Forest Canopy [48.51396198176273]
We propose an integrated system that can perform large-scale autonomous flights and real-time semantic mapping in challenging under-canopy environments.
We detect and model tree trunks and ground planes from LiDAR data, which are associated across scans and used to constrain robot poses as well as tree trunk models.
A drift-compensation mechanism is designed to minimize the odometry drift using semantic SLAM outputs in real time, while maintaining planner optimality and controller stability.
arXiv Detail & Related papers (2021-09-14T07:24:53Z) - Attention-based Neural Network for Driving Environment Complexity
Perception [123.93460670568554]
This paper proposes a novel attention-based neural network model to predict the complexity level of the surrounding driving environment.
It consists of a Yolo-v3 object detection algorithm, a heat map generation algorithm, CNN-based feature extractors, and attention-based feature extractors.
The proposed attention-based network achieves 91.22% average classification accuracy in classifying the surrounding environment complexity.
arXiv Detail & Related papers (2021-06-21T17:27:11Z) - Risk-Averse MPC via Visual-Inertial Input and Recurrent Networks for
Online Collision Avoidance [95.86944752753564]
We propose an online path planning architecture that extends the model predictive control (MPC) formulation to consider future location uncertainties.
Our algorithm combines an object detection pipeline with a recurrent neural network (RNN) which infers the covariance of state estimates.
The robustness of our method is validated on complex quadruped robot dynamics, and the approach can be generally applied to most robotic platforms.
arXiv Detail & Related papers (2020-07-28T07:34:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.