Distance Estimation in Outdoor Driving Environments Using Phase-only Correlation Method with Event Cameras
- URL: http://arxiv.org/abs/2505.17582v1
- Date: Fri, 23 May 2025 07:44:33 GMT
- Title: Distance Estimation in Outdoor Driving Environments Using Phase-only Correlation Method with Event Cameras
- Authors: Masataka Kobayashi, Shintaro Shiba, Quan Kong, Norimasa Kobori, Tsukasa Shimizu, Shan Lu, Takaya Yamazato
- Abstract summary: We present a method for distance estimation using a monocular event camera and a roadside LED bar. The proposed approach achieves over 90% success rate with less than 0.5-meter error for distances ranging from 20 to 60 meters. Future work includes extending this method to full position estimation by leveraging infrastructure such as smart poles equipped with LEDs.
- Score: 5.690128924544198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing adoption of autonomous driving, the advancement of sensor technology is crucial for ensuring safety and reliable operation. Sensor fusion techniques that combine multiple sensors such as LiDAR, radar, and cameras have proven effective, but the integration of multiple devices increases both hardware complexity and cost. Therefore, developing a single sensor capable of performing multiple roles is highly desirable for cost-efficient and scalable autonomous driving systems. Event cameras have emerged as a promising solution due to their unique characteristics, including high dynamic range, low latency, and high temporal resolution. These features enable them to perform well in challenging lighting conditions, such as low-light or backlit environments. Moreover, their ability to detect fine-grained motion events makes them suitable for applications like pedestrian detection and vehicle-to-infrastructure communication via visible light. In this study, we present a method for distance estimation using a monocular event camera and a roadside LED bar. By applying a phase-only correlation technique to the event data, we achieve sub-pixel precision in detecting the spatial shift between two light sources. This enables accurate triangulation-based distance estimation without requiring stereo vision. Field experiments conducted in outdoor driving scenarios demonstrated that the proposed approach achieves over 90% success rate with less than 0.5-meter error for distances ranging from 20 to 60 meters. Future work includes extending this method to full position estimation by leveraging infrastructure such as smart poles equipped with LEDs, enabling event-camera-based vehicles to determine their own position in real time. This advancement could significantly enhance navigation accuracy, route optimization, and integration into intelligent transportation systems.
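To make the two core steps concrete, here is a minimal sketch of generic phase-only correlation (POC) for sub-pixel shift estimation between two 1-D intensity profiles, followed by the pinhole triangulation the abstract describes. It is an illustration under stated assumptions, not the authors' implementation: the synthetic Gaussian LED profiles, the parabolic peak refinement, and the baseline and focal-length values are all placeholders.

```python
# Sketch: phase-only correlation (POC) for sub-pixel disparity between two
# light-source profiles, then pinhole triangulation. All values are
# illustrative assumptions, not the paper's parameters.
import numpy as np

def poc_shift(f: np.ndarray, g: np.ndarray) -> float:
    """Sub-pixel shift of g relative to f via phase-only correlation."""
    F, G = np.fft.fft(f), np.fft.fft(g)
    cross = np.conj(F) * G                       # cross-power spectrum
    r = np.fft.ifft(cross / (np.abs(cross) + 1e-12)).real  # POC surface
    n = len(r)
    k = int(np.argmax(r))                        # integer-pixel peak
    a, b, c = r[(k - 1) % n], r[k], r[(k + 1) % n]
    delta = 0.5 * (a - c) / (a - 2 * b + c)      # parabolic sub-pixel refinement
    shift = k + delta
    return shift - n if shift > n / 2 else shift  # unwrap to signed shift

def distance_from_disparity(disparity_px, baseline_m, focal_px):
    """Pinhole triangulation: Z = f * B / d (assumed camera model)."""
    return focal_px * baseline_m / abs(disparity_px)

# Synthetic stand-ins for the image-plane profiles of the two LED sources.
x = np.arange(256, dtype=float)
led_a = np.exp(-((x - 100.0) / 5.0) ** 2)
led_b = np.exp(-((x - 115.3) / 5.0) ** 2)        # ~15.3 px apart

d = poc_shift(led_a, led_b)
z = distance_from_disparity(d, baseline_m=1.0, focal_px=800.0)
print(f"disparity ~ {d:.2f} px, distance ~ {z:.1f} m")
```

Under these assumed values (1 m LED baseline, 800 px focal length), a 15.3 px disparity maps to roughly 52 m, inside the paper's reported 20-60 m range. The sub-pixel precision of the POC peak matters most at the far end, where a single pixel of disparity spans several meters of distance.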
Related papers
- Human-Robot Navigation using Event-based Cameras and Reinforcement Learning [1.7614751781649955]
This work introduces a robot navigation controller that combines event cameras and other sensors with reinforcement learning to enable real-time human-centered navigation and obstacle avoidance. Unlike conventional image-based controllers, which operate at fixed rates and suffer from motion blur and latency, this approach leverages the asynchronous nature of event cameras to process visual information over flexible time intervals.
arXiv Detail & Related papers (2025-06-12T15:03:08Z)
- Planar Velocity Estimation for Fast-Moving Mobile Robots Using Event-Based Optical Flow [1.4447019135112429]
We introduce an approach to velocity estimation that is decoupled from wheel-to-surface traction assumptions. The proposed method is evaluated through in-field experiments on a 1:10 scale autonomous racing platform.
arXiv Detail & Related papers (2025-05-16T11:00:33Z)
- Application of YOLOv8 in monocular downward multiple Car Target detection [0.0]
This paper presents an improved autonomous target detection network based on YOLOv8. The proposed approach achieves highly efficient and precise detection of multi-scale, small, and remote objects. Experimental results demonstrate that the enhanced model can effectively detect both large and small objects with a detection accuracy of 65%.
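The paper's architectural changes are not reproduced in this listing, but for orientation, the following is a minimal sketch of the stock YOLOv8 baseline via the ultralytics package; the checkpoint name and image path are illustrative placeholders.

```python
# Baseline sketch only: stock YOLOv8 inference with the ultralytics API.
# "yolov8n.pt" and "downward_view.jpg" are hypothetical placeholders; the
# paper's improved network is not public.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # smallest pretrained checkpoint
results = model("downward_view.jpg")  # hypothetical monocular downward frame

for result in results:
    for box in result.boxes:
        label = model.names[int(box.cls)]   # class id -> name
        print(label, float(box.conf), box.xyxy.tolist())  # score, xyxy bbox
```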
arXiv Detail & Related papers (2025-05-15T06:58:45Z)
- Vision-Based Detection of Uncooperative Targets and Components on Small Satellites [6.999319023465766]
Space debris and inactive satellites pose a threat to the safety and integrity of operational spacecraft.
Recent advancements in computer vision models can be used to improve upon existing methods for tracking such uncooperative targets.
This paper introduces an autonomous detection model designed to identify and monitor these objects using machine learning and computer vision.
arXiv Detail & Related papers (2024-08-22T02:48:13Z)
- Deep Learning-Based Robust Multi-Object Tracking via Fusion of mmWave Radar and Camera Sensors [6.166992288822812]
Multi-Object Tracking plays a critical role in ensuring safer and more efficient navigation through complex traffic scenarios.
This paper presents a novel deep learning-based method that integrates radar and camera data to enhance the accuracy and robustness of Multi-Object Tracking in autonomous driving systems.
arXiv Detail & Related papers (2024-07-10T21:09:09Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work studies the current state of camera- and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Drone Detection and Tracking in Real-Time by Fusion of Different Sensing Modalities [66.4525391417921]
We design and evaluate a multi-sensor drone detection system.
Our solution additionally integrates a fish-eye camera to monitor a wider part of the sky and steer the other cameras towards objects of interest.
The thermal camera is shown to be a feasible solution, performing as well as the video camera even though the camera employed here has a lower resolution.
arXiv Detail & Related papers (2022-07-05T10:00:58Z)
- Lasers to Events: Automatic Extrinsic Calibration of Lidars and Event Cameras [67.84498757689776]
This paper presents the first direct calibration method between event cameras and lidars.
It removes dependencies on frame-based camera intermediaries and on highly accurate hand measurements.
arXiv Detail & Related papers (2022-07-03T11:05:45Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
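As a rough illustration of attention-based fusion, the sketch below flattens image and LiDAR feature maps into tokens and mixes them with a single self-attention layer, in the spirit of (but far smaller than) the TransFuser design; all shapes and layer sizes are arbitrary assumptions.

```python
# Sketch: cross-modal token mixing with self-attention, TransFuser-style.
# Dimensions are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class TinySensorFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        # Self-attention over the concatenated token set lets image tokens
        # attend to LiDAR tokens and vice versa.
        self.encoder = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)

    def forward(self, img_feat, lidar_feat):
        # img_feat, lidar_feat: (B, C, H, W) maps from two backbones.
        img_tok = img_feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
        lidar_tok = lidar_feat.flatten(2).transpose(1, 2)  # (B, H*W, C)
        fused = self.encoder(torch.cat([img_tok, lidar_tok], dim=1))
        n_img = img_tok.size(1)
        return fused[:, :n_img], fused[:, n_img:]  # per-sensor token sets

fusion = TinySensorFusion()
img_out, lidar_out = fusion(torch.randn(2, 128, 8, 8),
                            torch.randn(2, 128, 8, 8))
print(img_out.shape, lidar_out.shape)  # (2, 64, 128) each
```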
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
- Holistic Grid Fusion Based Stop Line Estimation [5.5476621209686225]
Knowing in advance where to stop at an intersection is essential for controlling the vehicle's longitudinal velocity.
Most existing methods in the literature rely solely on cameras to detect stop lines, which is typically insufficient in terms of detection range.
We propose a method that takes advantage of fused multi-sensory data including stereo camera and lidar as input and utilizes a carefully designed convolutional neural network architecture to detect stop lines.
arXiv Detail & Related papers (2020-09-18T21:29:06Z)
- LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE provides the research community with a means for fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)