LiDAR Point Cloud Colourisation Using Multi-Camera Fusion and Low-Light Image Enhancement
- URL: http://arxiv.org/abs/2509.25859v1
- Date: Tue, 30 Sep 2025 06:56:11 GMT
- Title: LiDAR Point Cloud Colourisation Using Multi-Camera Fusion and Low-Light Image Enhancement
- Authors: Pasindu Ranasinghe, Dibyayan Patra, Bikram Banerjee, Simit Raval
- Abstract summary: This study introduces a novel, hardware-agnostic methodology that generates colourised point clouds from mechanical LiDAR using multiple camera inputs. The primary innovation lies in its robustness under low-light conditions, achieved through the integration of a low-light image enhancement module. The algorithm was tested using a Velodyne Puck Hi-Res LiDAR and a four-camera configuration.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, the fusion of camera data with LiDAR measurements has emerged as a powerful approach to enhance spatial understanding. This study introduces a novel, hardware-agnostic methodology that generates colourised point clouds from mechanical LiDAR using multiple camera inputs, providing complete 360-degree coverage. The primary innovation lies in its robustness under low-light conditions, achieved through the integration of a low-light image enhancement module within the fusion pipeline. The system requires initial calibration to determine intrinsic camera parameters, followed by automatic computation of the geometric transformation between the LiDAR and cameras, removing the need for specialised calibration targets and streamlining the setup. The data processing framework uses colour correction to ensure uniformity across camera feeds before fusion. The algorithm was tested using a Velodyne Puck Hi-Res LiDAR and a four-camera configuration. The optimised software achieved real-time performance and reliable colourisation even under very low illumination, successfully recovering scene details that would otherwise remain undetectable.
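To make the fusion step concrete, below is a minimal sketch of the projection-and-sampling core of such a pipeline, assuming known per-camera intrinsics and LiDAR-to-camera extrinsics. The function and variable names are illustrative, not taken from the paper; in the paper's pipeline each image would additionally pass through low-light enhancement and colour correction before colours are sampled.

```python
import numpy as np

def colourise_points(points_lidar, images, intrinsics, extrinsics):
    """Assign an RGB colour to each LiDAR point by projecting it into
    whichever camera sees it; points visible in no camera stay grey.

    points_lidar : (N, 3) points in the LiDAR frame
    images       : list of (H, W, 3) uint8 arrays, one per camera
    intrinsics   : list of (3, 3) camera matrices K
    extrinsics   : list of (4, 4) LiDAR-to-camera transforms T
    """
    n = points_lidar.shape[0]
    colours = np.full((n, 3), 128, dtype=np.uint8)       # default: grey
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords

    for img, K, T in zip(images, intrinsics, extrinsics):
        cam = (T @ pts_h.T).T[:, :3]                     # LiDAR -> camera frame
        uv = (K @ cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]                      # perspective divide
        h, w = img.shape[:2]
        valid = ((cam[:, 2] > 0.1)                       # in front of camera
                 & (uv[:, 0] >= 0) & (uv[:, 0] < w)
                 & (uv[:, 1] >= 0) & (uv[:, 1] < h))
        u = uv[valid, 0].astype(int)
        v = uv[valid, 1].astype(int)
        colours[valid] = img[v, u]                       # nearest-pixel lookup
    return colours
```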
Related papers
- Unifying Color and Lightness Correction with View-Adaptive Curve Adjustment for Robust 3D Novel View Synthesis [73.27997579020233]
We propose Luminance-GS++, a 3DGS-based framework for robust NVS under diverse illumination conditions. Our method combines a globally view-adaptive lightness adjustment with a local pixel-wise residual refinement for precise color correction.
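As a rough illustration of the kind of adjustment this describes (a global, view-level lightness curve followed by a local pixel-wise residual), here is a toy sketch; the gamma-style curve form, the parameters, and all names are placeholders, not Luminance-GS++'s actual design.

```python
import numpy as np

def correct_view(image, gamma, gain, residual):
    """Toy stand-in: apply a global per-view tone curve, then add a
    local pixel-wise residual for fine colour correction."""
    img = image.astype(np.float32) / 255.0          # to [0, 1]
    curved = gain * np.power(img, gamma)            # global view-adaptive curve
    refined = np.clip(curved + residual, 0.0, 1.0)  # pixel-wise residual refinement
    return (refined * 255.0).astype(np.uint8)
```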
arXiv Detail & Related papers (2026-02-20T16:20:50Z)
- DST-Calib: A Dual-Path, Self-Supervised, Target-Free LiDAR-Camera Extrinsic Calibration Network [57.22935789233992]
This article presents the first self-supervised LiDAR-camera extrinsic calibration network that operates in an online fashion. The proposed method significantly outperforms existing approaches in terms of generalizability.
arXiv Detail & Related papers (2026-01-03T13:57:01Z)
- From Cheap to Pro: A Learning-based Adaptive Camera Parameter Network for Professional-Style Imaging [0.07829352305480283]
ACamera-Net is a lightweight and scene-adaptive camera parameter adjustment network. It predicts optimal exposure and white balance from RAW inputs. It consistently enhances image quality and stabilizes perception outputs.
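A minimal sketch of how predicted exposure and white-balance values might be applied to a linear RAW frame; the function and parameter names below are placeholders rather than ACamera-Net's actual interface.

```python
import numpy as np

def apply_camera_params(raw_rgb, exposure_gain, wb_gains):
    """Apply a predicted exposure multiplier and per-channel white-balance
    gains (R, G, B) to a linear RAW image in [0, 1]."""
    out = raw_rgb * exposure_gain * np.asarray(wb_gains)  # exposure, then WB
    return np.clip(out, 0.0, 1.0)
```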
arXiv Detail & Related papers (2025-10-23T13:35:17Z)
- SAIGFormer: A Spatially-Adaptive Illumination-Guided Network for Low-Light Image Enhancement [58.79901582809091]
Recent Transformer-based low-light enhancement methods have made promising progress in recovering global illumination. We present a Spatially-Adaptive Illumination-Guided Transformer framework that enables accurate illumination restoration.
arXiv Detail & Related papers (2025-07-21T11:38:56Z)
- Robust LiDAR-Camera Calibration with 2D Gaussian Splatting [0.3281128493853064]
A critical and initial step in integrating the LiDAR and camera data is the calibration of the LiDAR-camera system. Most existing calibration methods rely on auxiliary target objects, which often involve complex manual operations. We propose a calibration method that estimates LiDAR-camera extrinsic parameters using geometric constraints.
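For context, the generic correspondence-based formulation of this problem can be sketched with OpenCV's PnP solver: given matched 3D LiDAR points and 2D pixels, recover the extrinsic rotation and translation. This illustrates what is being estimated, not the paper's 2D-Gaussian-splatting method.

```python
import cv2
import numpy as np

def estimate_extrinsics(lidar_pts, image_pts, K, dist=None):
    """Estimate LiDAR-to-camera rotation R and translation t from matched
    3D-2D correspondences via PnP with RANSAC outlier rejection.

    lidar_pts : (N, 3) points in the LiDAR frame
    image_pts : (N, 2) matched pixel coordinates
    K         : (3, 3) camera intrinsic matrix
    """
    dist = np.zeros(5) if dist is None else dist
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        lidar_pts.astype(np.float32), image_pts.astype(np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> 3x3 matrix
    return R, tvec, inliers
```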
arXiv Detail & Related papers (2025-04-01T08:19:26Z)
- CalibRefine: Deep Learning-Based Online Automatic Targetless LiDAR-Camera Calibration with Iterative and Attention-Driven Post-Refinement [7.736775961390864]
CalibRefine is a fully automatic, targetless, and online calibration framework. It directly processes raw LiDAR point clouds and camera images. Our results show that robust object-level feature matching, combined with iterative refinement and self-supervised attention-based refinement, enables reliable sensor alignment.
arXiv Detail & Related papers (2025-02-24T20:53:42Z)
- Discovering an Image-Adaptive Coordinate System for Photography Processing [51.164345878060956]
We propose a novel algorithm, IAC, to learn an image-adaptive coordinate system in the RGB color space before performing curve operations. This end-to-end trainable approach enables us to efficiently adjust images with a jointly learned image-adaptive coordinate system and curves.
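A toy sketch of the underlying idea: rotate RGB values into a learned basis, apply a curve along each new axis, and map back. The 3x3 basis and gamma-form curves below are stand-ins for what IAC would predict per image.

```python
import numpy as np

def adjust_in_learned_frame(image, basis, curve_exponents):
    """Apply per-axis curves in a learned colour coordinate system.

    basis           : (3, 3) invertible matrix defining the adapted frame
    curve_exponents : (3,) gamma-style exponent per adapted axis
    """
    img = image.astype(np.float32) / 255.0
    coords = img.reshape(-1, 3) @ basis.T               # into the learned frame
    curved = np.clip(coords, 0.0, 1.0) ** curve_exponents
    out = curved @ np.linalg.inv(basis).T               # back to RGB
    return (np.clip(out, 0.0, 1.0).reshape(img.shape) * 255).astype(np.uint8)
```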
arXiv Detail & Related papers (2025-01-11T06:20:07Z)
- YOCO: You Only Calibrate Once for Accurate Extrinsic Parameter in LiDAR-Camera Systems [0.5999777817331317]
In a multi-sensor fusion system composed of cameras and LiDAR, precise extrinsic calibration contributes to the system's long-term stability and accurate perception of the environment.
This paper proposes a novel fully automatic extrinsic calibration method for LiDAR-camera systems that circumvents the need for corresponding point registration.
arXiv Detail & Related papers (2024-07-25T13:44:49Z)
- LCE-Calib: Automatic LiDAR-Frame/Event Camera Extrinsic Calibration With A Globally Optimal Solution [10.117923901732743]
The combination of LiDARs and cameras enables a mobile robot to perceive environments with multi-modal data.
Traditional frame cameras are sensitive to changing illumination conditions, motivating us to introduce novel event cameras.
This paper proposes an automatic checkerboard-based approach to calibrate extrinsics between a LiDAR and a frame/event camera.
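A minimal sketch of the standard checkerboard step such methods build on, using OpenCV: detect the board, refine corners, and recover the board's pose relative to the camera. Board dimensions and square size are example values; aligning the recovered board plane with its counterpart in the LiDAR cloud is the part each method solves differently.

```python
import cv2
import numpy as np

def checkerboard_pose(gray, K, dist, board=(9, 6), square=0.025):
    """Detect a checkerboard and return its pose (rvec, tvec) in the
    camera frame, or None if the board is not found."""
    found, corners = cv2.findChessboardCorners(gray, board)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    # 3D corner grid on the board plane (z = 0), in metres
    obj = np.zeros((board[0] * board[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    ok, rvec, tvec = cv2.solvePnP(obj, corners, K, dist)
    return rvec, tvec
```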
arXiv Detail & Related papers (2023-03-17T08:07:56Z)
- Gait Recognition in Large-scale Free Environment via Single LiDAR [35.684257181154905]
LiDAR's ability to capture depth makes it pivotal for robotic perception and holds promise for real-world gait recognition.
We present the Hierarchical Multi-representation Feature Interaction Network (HMRNet) for robust gait recognition.
To facilitate LiDAR-based gait recognition research, we introduce FreeGait, a comprehensive gait dataset from large-scale, unconstrained settings.
arXiv Detail & Related papers (2022-11-22T16:05:58Z)
- LIF-Seg: LiDAR and Camera Image Fusion for 3D LiDAR Semantic Segmentation [78.74202673902303]
We propose a coarse-to-fine LiDAR and camera fusion-based network (termed LIF-Seg) for LiDAR segmentation.
The proposed method fully utilizes the contextual information of images and introduces a simple but effective early-fusion strategy.
The cooperation of these two components yields effective camera-LiDAR fusion.
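A generic early-fusion step in this spirit can be sketched as projecting each point onto the image feature map and concatenating the gathered features with the point features; shapes and names below are illustrative, not LIF-Seg's actual module.

```python
import numpy as np

def early_fuse(point_feats, points_cam, image_feats, K):
    """Concatenate image features with point features via projection.

    point_feats : (N, C1) per-point features
    points_cam  : (N, 3) points already in the camera frame
    image_feats : (H, W, C2) image feature map
    K           : (3, 3) intrinsics matching the feature-map resolution
    """
    uv = (K @ points_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                   # perspective divide
    h, w = image_feats.shape[:2]
    u = np.clip(uv[:, 0].astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].astype(int), 0, h - 1)
    return np.concatenate([point_feats, image_feats[v, u]], axis=1)
```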
arXiv Detail & Related papers (2021-08-17T08:53:11Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques are able to estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.