Geo-locating Road Objects using Inverse Haversine Formula with NVIDIA
Driveworks
- URL: http://arxiv.org/abs/2401.07582v1
- Date: Mon, 15 Jan 2024 10:38:07 GMT
- Title: Geo-locating Road Objects using Inverse Haversine Formula with NVIDIA
Driveworks
- Authors: Mamoona Birkhez Shami, Gabriel Kiss, Trond Arve Haakonsen, Frank
Lindseth
- Abstract summary: This paper introduces a methodology to geolocate road objects using a monocular camera.
We use the Centimeter Positioning Service (CPOS) and the inverse Haversine formula to geo-locate road objects accurately.
- Score: 0.7428236410246181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Geolocation is integral to the seamless functioning of autonomous vehicles
and advanced traffic monitoring infrastructures. This paper introduces a
methodology to geolocate road objects using a monocular camera, leveraging the
NVIDIA DriveWorks platform. We use the Centimeter Positioning Service (CPOS)
and the inverse Haversine formula to geo-locate road objects accurately. The
real-time algorithm processing capability of the NVIDIA DriveWorks platform
enables instantaneous object recognition and spatial localization for Advanced
Driver Assistance Systems (ADAS) and autonomous driving platforms. We present a
measurement pipeline suitable for autonomous driving (AD) platforms and provide
detailed guidelines for calibrating cameras using NVIDIA DriveWorks.
Experiments were carried out to validate the accuracy of the proposed method
for geolocating targets in both controlled and dynamic settings. We show that
our approach can locate targets with less than 1m error when the AD platform is
stationary and less than 4m error at higher speeds (i.e. up to 60km/h) within a
15m radius.
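The inverse Haversine (destination-point) formula named in the abstract takes a known position (here, the vehicle's CPOS-derived coordinates) plus a bearing and range to a detected object, and returns the object's latitude and longitude. A minimal sketch of that calculation, assuming a mean Earth radius of 6371 km; function name and signature are illustrative, not the paper's API:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius (assumed value; the paper may use another)

def inverse_haversine(lat_deg, lon_deg, bearing_deg, distance_m):
    """Destination point from a start point, a bearing (degrees clockwise
    from north), and a distance in meters, on a spherical Earth model."""
    lat1 = math.radians(lat_deg)
    lon1 = math.radians(lon_deg)
    theta = math.radians(bearing_deg)
    delta = distance_m / EARTH_RADIUS_M  # angular distance in radians

    lat2 = math.asin(math.sin(lat1) * math.cos(delta)
                     + math.cos(lat1) * math.sin(delta) * math.cos(theta))
    lon2 = lon1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(lat1),
                             math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)
```

In a pipeline like the one described, the bearing and range to the object would come from the calibrated monocular camera's detection geometry; sub-meter output accuracy then depends chiefly on the quality of the CPOS fix and the range estimate.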
Related papers
- Spatial Retrieval Augmented Autonomous Driving [81.39665750557526]
Existing autonomous driving systems rely on onboard sensors for environmental perception.
We propose the spatial retrieval paradigm, introducing offline retrieved geographic images as an additional input.
We will open-source dataset curation code, data, and benchmarks for further study of this new autonomous driving paradigm.
arXiv Detail & Related papers (2025-12-07T14:40:49Z)
- MobileGeo: Exploring Hierarchical Knowledge Distillation for Resource-Efficient Cross-view Drone Geo-Localization [47.16612614191333]
Cross-view geo-localization enables drone localization by matching aerial images to geo-tagged satellite databases.
MobileGeo is a mobile-friendly framework designed for efficient on-device CVGL.
MobileGeo runs at 251.5 FPS on an NVIDIA AGX Orin edge device, demonstrating its practical viability for real-time on-device drone geo-localization.
arXiv Detail & Related papers (2025-10-26T08:47:20Z)
- Geo-ORBIT: A Federated Digital Twin Framework for Scene-Adaptive Lane Geometry Detection [17.09138102827048]
Geo-ORBIT is a unified framework that combines real-time lane detection, DT synchronization, and federated meta-learning.
We extend this model through Meta-GeoLane, which learns to personalize detection parameters for local entities.
Our system is integrated with CARLA and SUMO to create a high-fidelity DT that renders highway scenarios and captures traffic flows in real-time.
arXiv Detail & Related papers (2025-07-11T16:45:59Z)
- Pole-based Vehicle Localization with Vector Maps: A Camera-LiDAR Comparative Study [6.300346102366891]
In road environments, much common street furniture, such as traffic signs, traffic lights, and street lights, takes the form of poles.
This paper introduces a real-time method for camera-based pole detection using a lightweight neural network trained on automatically annotated images.
The results highlight the high accuracy of the vision-based approach in open road conditions.
arXiv Detail & Related papers (2024-12-11T09:05:05Z)
- Neural Semantic Map-Learning for Autonomous Vehicles [85.8425492858912]
We present a mapping system that fuses local submaps gathered from a fleet of vehicles at a central instance to produce a coherent map of the road environment.
Our method jointly aligns and merges the noisy and incomplete local submaps using a scene-specific Neural Signed Distance Field.
We leverage memory-efficient sparse feature-grids to scale to large areas and introduce a confidence score to model uncertainty in scene reconstruction.
arXiv Detail & Related papers (2024-10-10T10:10:03Z)
- Robust Vehicle Localization and Tracking in Rain using Street Maps [2.2651698012357473]
We propose a novel approach for vehicle localization that uses street network based map information to correct drifting odometry estimates.
Specifically, our approach is a flexible fusion algorithm that integrates intermittent GPS, drifting IMU and VO estimates.
We robustly evaluate our proposed approach on four geographically diverse datasets from different countries.
arXiv Detail & Related papers (2024-09-02T08:15:12Z)
- Leveraging GNSS and Onboard Visual Data from Consumer Vehicles for Robust Road Network Estimation [18.236615392921273]
This paper addresses the challenge of road graph construction for autonomous vehicles.
We propose using global navigation satellite system (GNSS) traces and basic image data acquired from these standard sensors in consumer vehicles.
We exploit the spatial information in the data by framing the problem as a road centerline semantic segmentation task using a convolutional neural network.
arXiv Detail & Related papers (2024-08-03T02:57:37Z)
- RSRD: A Road Surface Reconstruction Dataset and Benchmark for Safe and
Comfortable Autonomous Driving [67.09546127265034]
Road surface reconstruction helps to enhance the analysis and prediction of vehicle responses for motion planning and control systems.
We introduce the Road Surface Reconstruction dataset, a real-world, high-resolution, and high-precision dataset collected with a specialized platform in diverse driving conditions.
It covers common road types containing approximately 16,000 pairs of stereo images, original point clouds, and ground-truth depth/disparity maps.
arXiv Detail & Related papers (2023-10-03T17:59:32Z)
- EAutoDet: Efficient Architecture Search for Object Detection [110.99532343155073]
EAutoDet framework can discover practical backbone and FPN architectures for object detection in 1.4 GPU-days.
We propose a kernel reusing technique by sharing the weights of candidate operations on one edge and consolidating them into one convolution.
In particular, the discovered architectures surpass state-of-the-art object detection NAS methods and achieve 40.1 mAP with 120 FPS and 49.2 mAP with 41.3 FPS on COCO test-dev set.
arXiv Detail & Related papers (2022-03-21T05:56:12Z)
- Continuous Self-Localization on Aerial Images Using Visual and Lidar
Sensors [25.87104194833264]
We propose a novel method for geo-tracking in outdoor environments by registering a vehicle's sensor information with aerial imagery of an unseen target region.
We train a model in a metric learning setting to extract visual features from ground and aerial images.
Our method is the first to utilize on-board cameras in an end-to-end differentiable model for metric self-localization on unseen orthophotos.
arXiv Detail & Related papers (2022-03-07T12:25:44Z)
- ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints [94.60414567852536]
Long-range navigation requires both planning and reasoning about local traversability.
We propose a learning-based approach that integrates learning and planning.
ViKiNG can leverage its image-based learned controller and goal-directed heuristic to navigate to goals up to 3 kilometers away.
arXiv Detail & Related papers (2022-02-23T02:14:23Z)
- Workshop on Autonomous Driving at CVPR 2021: Technical Report for
Streaming Perception Challenge [57.647371468876116]
We introduce our real-time 2D object detection system for the realistic autonomous driving scenario.
Our detector is built on a newly designed YOLO model, called YOLOX.
On the Argoverse-HD dataset, our system achieves 41.0 streaming AP, surpassing second place by 7.8/6.1 on the detection-only track and full track, respectively.
arXiv Detail & Related papers (2021-07-27T06:36:06Z)
- CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object
Tracking [9.62721286522053]
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association.
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
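The greedy association step CFTrack describes can be illustrated generically: sort all track-detection pairs by center distance and match each pair whose members are still unmatched. This is a minimal sketch of that generic technique, not the authors' implementation; the names and the distance gate are illustrative assumptions:

```python
import math

def greedy_associate(tracks, detections, max_dist):
    """Greedy nearest-center association between existing tracks and new
    detections, each given as (x, y) centers. Returns (track_idx, det_idx)
    pairs, matching the globally closest unmatched pair first."""
    pairs = [(math.hypot(t[0] - d[0], t[1] - d[1]), ti, di)
             for ti, t in enumerate(tracks)
             for di, d in enumerate(detections)]
    pairs.sort()  # smallest center distance first
    used_tracks, used_dets, matches = set(), set(), []
    for dist, ti, di in pairs:
        if dist > max_dist:
            break  # remaining pairs are all farther than the gate
        if ti in used_tracks or di in used_dets:
            continue  # one of the two is already matched
        matches.append((ti, di))
        used_tracks.add(ti)
        used_dets.add(di)
    return matches
```

Greedy matching trades the optimality of Hungarian assignment for simplicity and speed, which is often acceptable when detections are well separated.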
arXiv Detail & Related papers (2021-07-11T23:56:53Z)
- Embedded Vision for Self-Driving on Forest Roads [0.0]
AMTU is a robotic system designed to autonomously navigate off-road terrain and inspect whether any deforestation or damage has occurred along the tracked route.
AMTU's core component is its embedded vision module, optimized for real-time environment perception.
We show experimental results on the test track of our research facility.
arXiv Detail & Related papers (2021-05-27T09:05:08Z)
- Learning to Localize Using a LiDAR Intensity Map [87.04427452634445]
We propose a real-time, calibration-agnostic and effective localization system for self-driving cars.
Our method learns to embed the online LiDAR sweeps and intensity map into a joint deep embedding space.
Our full system can operate in real-time at 15Hz while achieving centimeter level accuracy across different LiDAR sensors and environments.
arXiv Detail & Related papers (2020-12-20T11:56:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.