LiDAR-based Person Re-identification
- URL: http://arxiv.org/abs/2312.03033v2
- Date: Mon, 11 Dec 2023 14:36:18 GMT
- Title: LiDAR-based Person Re-identification
- Authors: Wenxuan Guo, Zhiyu Pan, Yingping Liang, Ziheng Xi, Zhicheng Zhong,
Jianjiang Feng, Jie Zhou
- Abstract summary: We propose a LiDAR-based ReID framework, ReID3D, that utilizes a pre-training strategy to retrieve features of 3D body shape.
To the best of our knowledge, we are the first to propose a solution for LiDAR-based ReID.
- Score: 29.694346498355443
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Camera-based person re-identification (ReID) systems have been widely applied
in the field of public security. However, cameras often lack the perception of
3D morphological information of humans and are susceptible to various
limitations, such as inadequate illumination, complex backgrounds, and personal
privacy. In this paper, we propose a LiDAR-based ReID framework, ReID3D, that
utilizes a pre-training strategy to retrieve features of 3D body shape and
introduces a Graph-based Complementary Enhancement Encoder for extracting
comprehensive features. Due to the lack of LiDAR datasets, we build LReID, the
first LiDAR-based person ReID dataset, which is collected in several outdoor
scenes with variations in natural conditions. Additionally, we introduce
LReID-sync, a simulated pedestrian dataset designed for pre-training encoders
with tasks of point cloud completion and shape parameter learning. Extensive
experiments on LReID show that ReID3D achieves exceptional performance with a
rank-1 accuracy of 94.0%, highlighting the significant potential of LiDAR in
addressing person ReID tasks. To the best of our knowledge, we are the first to
propose a solution for LiDAR-based ReID. The code and datasets will be released
soon.
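As a rough illustration of the kind of pipeline the abstract describes, the sketch below shows a minimal point-cloud person ReID model in PyTorch: a shared-MLP point encoder produces a per-person embedding that is trained with an identity-classification loss and later used for retrieval. The encoder design, feature dimensions, and loss setup here are illustrative assumptions only; they do not reproduce ReID3D's Graph-based Complementary Enhancement Encoder or its completion and shape-parameter pre-training.

```python
# Illustrative sketch only (not the authors' released code): a minimal
# point-cloud person ReID pipeline in PyTorch. All module names and sizes
# are assumptions made for this example.
import torch
import torch.nn as nn

class PointEncoder(nn.Module):
    """Shared-MLP point encoder with global max pooling (PointNet-style)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.BatchNorm1d(128), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )

    def forward(self, points):                  # points: (B, N, 3)
        x = self.mlp(points.transpose(1, 2))    # (B, feat_dim, N)
        return x.max(dim=2).values              # (B, feat_dim) global descriptor

class ReIDModel(nn.Module):
    """Encoder plus identity classifier; the embedding is used for retrieval."""
    def __init__(self, num_ids, feat_dim=256):
        super().__init__()
        self.encoder = PointEncoder(feat_dim)
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, points):
        emb = self.encoder(points)
        return emb, self.classifier(emb)

# Toy training step with cross-entropy on identity labels.
model = ReIDModel(num_ids=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
points = torch.randn(8, 1024, 3)                # batch of 8 clouds, 1024 points each
labels = torch.randint(0, 100, (8,))
emb, logits = model(points)
loss = nn.functional.cross_entropy(logits, labels)
opt.zero_grad()
loss.backward()
opt.step()
```

At test time, ranking would typically compare query and gallery embeddings (e.g., by cosine similarity) rather than using the classifier logits.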
Related papers
- Multi-Modal Data-Efficient 3D Scene Understanding for Autonomous Driving [58.16024314532443]
We introduce LaserMix++, a framework that integrates laser beam manipulations from disparate LiDAR scans and incorporates LiDAR-camera correspondences to assist data-efficient learning.
Results demonstrate that LaserMix++ outperforms fully supervised alternatives, achieving comparable accuracy with five times fewer annotations.
This substantial advancement underscores the potential of semi-supervised approaches in reducing the reliance on extensive labeled data in LiDAR-based 3D scene understanding systems.
arXiv Detail & Related papers (2024-05-08T17:59:53Z)
- Just Add $100 More: Augmenting NeRF-based Pseudo-LiDAR Point Cloud for Resolving Class-imbalance Problem [12.26293873825084]
We propose to leverage pseudo-LiDAR point clouds generated from videos capturing a surround view of miniatures or real-world objects of minor classes.
Our method, called Pseudo Ground Truth Augmentation (PGT-Aug), consists of three main steps: (i) volumetric 3D instance reconstruction using a 2D-to-3D view synthesis model, (ii) object-level domain alignment with LiDAR intensity estimation, and (iii) a hybrid context-aware placement method from ground and map information.
arXiv Detail & Related papers (2024-03-18T08:50:04Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- LidarGait: Benchmarking 3D Gait Recognition with Point Clouds [18.22238384814974]
This work explores precise 3D gait features from point clouds and proposes a simple yet efficient 3D gait recognition framework, termed LidarGait.
Our proposed approach projects sparse point clouds into depth maps to learn representations that preserve 3D geometric information (an illustrative projection sketch appears after this list).
Due to the lack of point cloud datasets, we built the first large-scale LiDAR-based gait recognition dataset, SUSTech1K.
arXiv Detail & Related papers (2022-11-19T06:23:08Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- LiDARCap: Long-range Marker-less 3D Human Motion Capture with LiDAR Point Clouds [58.402752909624716]
Existing motion capture datasets are largely short-range and cannot yet meet the needs of long-range applications.
We propose LiDARHuman26M, a new human motion capture dataset captured by LiDAR at a much longer range to overcome this limitation.
Our dataset also includes the ground truth human motions acquired by the IMU system and the synchronous RGB images.
arXiv Detail & Related papers (2022-03-28T12:52:45Z)
- Learning to Drop Points for LiDAR Scan Synthesis [5.132259673802809]
Generative modeling of 3D scenes is a crucial topic for helping mobile robots cope with unreliable observations.
Most existing studies on point clouds have focused on small and uniform-density data.
The 3D LiDAR point clouds widely used on mobile robots are non-trivial to handle because of their large number of points and varying density.
This paper proposes a novel framework based on generative adversarial networks to synthesize realistic LiDAR data as an improved 2D representation.
arXiv Detail & Related papers (2021-02-23T21:53:14Z)
- Unsupervised Pre-training for Person Re-identification [90.98552221699508]
We present "LUPerson", a large-scale unlabeled person re-identification (Re-ID) dataset.
We make the first attempt at unsupervised pre-training to improve the generalization ability of learned person Re-ID feature representations.
arXiv Detail & Related papers (2020-12-07T14:48:26Z)
- SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks [81.64530401885476]
We propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties.
Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns.
We evaluate our method's performances on two large-scale datasets, i.e., KITTI and Apollo-SouthBay.
arXiv Detail & Related papers (2020-10-19T09:23:39Z)
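The LidarGait entry above describes projecting sparse LiDAR point clouds into depth maps. As a rough, illustrative sketch (not code from that paper), the snippet below shows one common way to perform such a projection: a spherical (range-image) mapping of points onto a fixed-size depth map. The field-of-view bounds and image resolution are assumed values, not taken from the paper.

```python
# Illustrative sketch only: spherical projection of a LiDAR point cloud into
# a depth (range) image. All parameters here are assumptions for illustration.
import numpy as np

def project_to_depth_map(points, h=64, w=512,
                         fov_up_deg=15.0, fov_down_deg=-25.0):
    """points: (N, 3) array of x, y, z in the sensor frame -> (h, w) depth map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                           # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))   # elevation angle

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    u = 0.5 * (1.0 - yaw / np.pi) * w                # column from azimuth
    v = (fov_up - pitch) / (fov_up - fov_down) * h   # row from elevation

    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    depth_map = np.zeros((h, w), dtype=np.float32)
    # Keep the nearest return per pixel: write far points first, near points last.
    order = np.argsort(-depth)
    depth_map[v[order], u[order]] = depth[order]
    return depth_map

cloud = np.random.uniform(-10, 10, size=(2048, 3))   # toy point cloud
print(project_to_depth_map(cloud).shape)             # (64, 512)
```

The resulting 2D depth map can then be fed to an ordinary image backbone, which is the general idea behind projection-based approaches to point-cloud recognition.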