Nighttime Person Re-Identification via Collaborative Enhancement Network with Multi-domain Learning
- URL: http://arxiv.org/abs/2312.16246v2
- Date: Sat, 04 Jan 2025 03:50:09 GMT
- Title: Nighttime Person Re-Identification via Collaborative Enhancement Network with Multi-domain Learning
- Authors: Andong Lu, Chenglong Li, Tianrui Zha, Jin Tang, Xiaofeng Wang, Bin Luo
- Abstract summary: We propose a novel Collaborative Enhancement Network, called CENet, which performs multilevel feature interactions in a parallel framework for nighttime person ReID.
In particular, the parallel structure of CENet not only avoids the impact of relit-image quality on ReID performance, but also allows us to mine the collaborative relations between the image relighting and person ReID tasks.
- Score: 24.13081086915467
- Abstract: Prevalent nighttime person re-identification (ReID) methods typically combine image relighting and ReID networks sequentially. However, their recognition accuracy is limited by the quality of the relit images and by insufficient collaboration between the image relighting and ReID tasks. To handle these problems, we propose a novel Collaborative Enhancement Network, called CENet, which performs multilevel feature interactions in a parallel framework for nighttime person ReID. In particular, the parallel structure of CENet not only avoids the impact of relit-image quality on ReID performance, but also allows us to mine the collaborative relations between the image relighting and person ReID tasks. To this end, we integrate multilevel feature interactions in CENet: we first share a Transformer encoder to build low-level feature interaction, and then perform feature distillation that transfers high-level features from image relighting to ReID, thereby alleviating the severe image degradation caused by nighttime scenarios while avoiding the influence of the relit images themselves. In addition, existing real-world nighttime person ReID datasets are small, while large-scale synthetic ones exhibit substantial domain gaps with real-world data. To leverage both small-scale real-world and large-scale synthetic training data, we develop a multi-domain learning algorithm that alternately uses both kinds of data to reduce the inter-domain difference during training. Extensive experiments on two real nighttime datasets, Night600 and RGBNT201_rgb, and a synthetic nighttime ReID dataset validate the effectiveness of CENet. We release the code and synthetic dataset at: https://github.com/Alexadlu/CENet.
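The abstract's parallel design can be pictured concretely. Below is a minimal PyTorch sketch, assuming illustrative module names (CENetSketch, shared_encoder, relight_branch, reid_branch) and shapes that are not from the paper: a shared Transformer encoder supplies the low-level feature interaction, two parallel branches keep the tasks decoupled (so ReID never consumes the relit image itself), and a distillation loss pulls ReID features toward the relighting branch's high-level features.
```python
# Illustrative sketch only; module names, depths, and losses are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CENetSketch(nn.Module):
    """Parallel relighting + ReID with a shared encoder (illustrative only)."""
    def __init__(self, dim=256, num_ids=751):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        # Low-level feature interaction: one encoder shared by both tasks.
        self.shared_encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Parallel task-specific branches (no sequential dependency).
        self.relight_branch = nn.TransformerEncoder(layer, num_layers=2)
        self.reid_branch = nn.TransformerEncoder(layer, num_layers=2)
        self.relight_head = nn.Linear(dim, 16 * 16 * 3)  # tokens -> RGB patches
        self.id_head = nn.Linear(dim, num_ids)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        shared = self.shared_encoder(tokens)
        f_relight = self.relight_branch(shared)  # high-level relighting features
        f_reid = self.reid_branch(shared)        # high-level ReID features
        # Feature distillation: pull ReID features toward the (detached)
        # relighting features, transferring degradation cues without ever
        # feeding the relit image into the ReID branch.
        loss_distill = F.mse_loss(f_reid, f_relight.detach())
        logits = self.id_head(f_reid.mean(dim=1))
        relit_patches = self.relight_head(f_relight)  # target of the relighting loss
        return logits, relit_patches, loss_distill
```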
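The multi-domain learning described in the abstract alternates between the two data sources. Here is one plausible alternating loop, assuming hypothetical loaders real_loader / synth_loader and a model with the interface from the sketch above; the paper's actual loss composition and scheduling may differ.
```python
# Sketch of alternating multi-domain training; all names are assumptions.
from itertools import cycle

import torch.nn.functional as F

def train_multi_domain(model, real_loader, synth_loader, optimizer, epochs=10):
    # The synthetic set is much larger, so cycle it while stepping through
    # the small real-world set; each optimizer step alternates domains.
    synth_iter = cycle(synth_loader)
    for _ in range(epochs):
        for real_batch in real_loader:
            for images, labels in (real_batch, next(synth_iter)):
                logits, _, loss_distill = model(images)
                loss = F.cross_entropy(logits, labels) + loss_distill
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
```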
Related papers
- RHRSegNet: Relighting High-Resolution Night-Time Semantic Segmentation [0.0]
Nighttime semantic segmentation is a crucial computer-vision task, focused on accurately classifying and segmenting objects under low-light conditions.
We propose RHRSegNet, which places a relighting model in front of a High-Resolution Network (HRNet) for semantic segmentation.
Our proposed model improves HRNet segmentation performance by 5% on low-light and nighttime images.
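For contrast with CENet's parallel design, the sequential relight-then-segment arrangement this summary describes can be sketched as a simple composition; RelightThenSegment and both sub-modules are placeholders, not RHRSegNet's actual components.
```python
# Generic sequential composition; not the paper's implementation.
import torch.nn as nn

class RelightThenSegment(nn.Module):
    def __init__(self, relight_net: nn.Module, hrnet: nn.Module):
        super().__init__()
        self.relight_net = relight_net  # maps a dark image to a relit image
        self.hrnet = hrnet              # any HRNet-style segmentation network

    def forward(self, x):
        relit = self.relight_net(x)     # enhance the low-light input first
        return self.hrnet(relit)        # then segment the relit image
```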
arXiv Detail & Related papers (2024-07-08T15:07:09Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but its trainable parameters also let it adapt to low-light images across different illumination ranges.
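As a rough illustration of the decoupling idea only (the paper's exact HVI transform and trainable density function differ, so every formula below is an assumption), one can split an RGB image into an intensity map and a learnably scaled chromatic component.
```python
# Toy brightness/color decoupling; NOT the paper's HVI formulas.
import math

import torch
import torch.nn as nn

class TrainableColorDecoupling(nn.Module):
    """Toy decoupling of brightness from color with one learnable parameter."""
    def __init__(self):
        super().__init__()
        self.density_k = nn.Parameter(torch.ones(1))  # learnable; illustrative only

    def forward(self, rgb, eps=1e-6):                  # rgb: (B, 3, H, W) in [0, 1]
        intensity, _ = rgb.max(dim=1, keepdim=True)    # brightness channel
        # Learnably attenuate the chromatic residual in dark regions so color
        # stays stable while intensity is enhanced separately.
        scale = torch.sin(math.pi * intensity / 2).clamp_min(eps) ** self.density_k
        chroma = (rgb - intensity) * scale             # decoupled color component
        return intensity, chroma

    def inverse(self, intensity, chroma, eps=1e-6):    # map enhanced planes back to RGB
        scale = torch.sin(math.pi * intensity / 2).clamp_min(eps) ** self.density_k
        return chroma / scale + intensity
```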
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Exposure Bracketing Is All You Need For A High-Quality Image [50.822601495422916]
Multi-exposure images are complementary in denoising, deblurring, high dynamic range imaging, and super-resolution.
In this work, we propose to utilize exposure bracketing photography to obtain a high-quality image by unifying these tasks.
In particular, a temporally modulated recurrent network (TMRNet) and a self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Similarity Min-Max: Zero-Shot Day-Night Domain Adaptation [52.923298434948606]
Low-light conditions not only hamper human visual experience but also degrade the model's performance on downstream vision tasks.
This paper tackles a more complicated scenario with broader applicability, i.e., zero-shot day-night domain adaptation.
We propose a similarity min-max paradigm that considers image darkening and model adaptation under a unified framework.
arXiv Detail & Related papers (2023-07-17T18:50:15Z)
- LEDNet: Joint Low-light Enhancement and Deblurring in the Dark [100.24389251273611]
We present the first large-scale dataset for joint low-light enhancement and deblurring.
LOL-Blur contains 12,000 low-blur/normal-sharp pairs with diverse darkness and motion blurs in different scenarios.
We also present an effective network, named LEDNet, to perform joint low-light enhancement and deblurring.
arXiv Detail & Related papers (2022-02-07T17:44:05Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of the environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our method surpasses the state of the art by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
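The two-step pipeline described here (estimate the illumination degradation first, then refine content) can be sketched as a simple composition; both sub-networks and the Retinex-style division are assumptions for illustration, not the paper's actual blocks.
```python
# Generic degrade-then-refine composition; not the paper's implementation.
import torch.nn as nn

class DegradeThenRefine(nn.Module):
    def __init__(self, degrade_net: nn.Module, refine_net: nn.Module):
        super().__init__()
        self.degrade_net = degrade_net  # step 1: estimate illumination degradation
        self.refine_net = refine_net    # step 2: recover detail and diffuse color

    def forward(self, low):
        degradation = self.degrade_net(low)   # simulated illumination distortion
        coarse = low / (degradation + 1e-6)   # undo it (Retinex-style assumption)
        return self.refine_net(coarse)        # refine remaining color/content loss
```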
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global content consistency.
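The "dense combinations of dilated convolutions" phrase suggests a block in which branches with increasing dilation rates are chained and densely fused to enlarge the effective receptive field. Below is a generic sketch of that pattern, with branch widths and dilation rates chosen for illustration rather than taken from the paper.
```python
# Generic dense-dilated block; rates and fusion are illustrative assumptions.
import torch
import torch.nn as nn

class DenseDilatedBlock(nn.Module):
    def __init__(self, channels=64, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        feats = []
        prev = x
        for branch in self.branches:
            prev = torch.relu(branch(prev))  # chain rates: each sees the last output
            feats.append(prev)
        return x + self.fuse(torch.cat(feats, dim=1))  # dense fusion + residual
```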
arXiv Detail & Related papers (2020-02-07T03:45:25Z)