Illumination adaptive person reid based on teacher-student model and
adversarial training
- URL: http://arxiv.org/abs/2002.01625v3
- Date: Tue, 26 May 2020 10:20:21 GMT
- Title: Illumination adaptive person reid based on teacher-student model and
adversarial training
- Authors: Ziyue Zhang, Richard YD Xu, Shuai Jiang, Yang Li, Congzhentao Huang,
Chen Deng
- Abstract summary: We propose a Two-Stream Network that can separate ReID features from lighting features to enhance ReID performance.
Our algorithm outperforms other state-of-the-art works and is particularly potent in handling images under extremely low light.
- Score: 11.307571732296513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing works in Person Re-identification (ReID) focus on settings
where illumination either is kept the same or has very little fluctuation.
However, the changes in the illumination degree may affect the robustness of a
ReID algorithm significantly. To address this problem, we propose a Two-Stream
Network that can separate ReID features from lighting features to enhance ReID
performance. Its innovations are threefold: (1) A discriminative entropy loss
to ensure the ReID features contain no lighting information. (2) A ReID Teacher
model trained by images under "neutral" lighting conditions to guide ReID
classification. (3) An illumination Teacher model trained by the differences
between the illumination-adjusted and original images to guide illumination
classification. We construct two augmented datasets by applying a set of
predefined synthetic lighting changes to two of the most popular ReID
benchmarks: Market1501 and DukeMTMC-ReID. Experiments demonstrate that our
algorithm outperforms other state-of-the-art works and is particularly potent
in handling images under extremely low light.
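The abstract describes the method only at a high level. The PyTorch sketch below shows one plausible reading of the two-stream idea, the discriminative entropy loss, the teacher-style soft guidance, and a toy lighting augmentation; all module names, dimensions, loss forms and the gamma-based synthetic_lighting helper are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming a shared backbone with two feature streams.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamReID(nn.Module):
    """Shared backbone followed by separate ReID and illumination streams."""

    def __init__(self, feat_dim=512, num_ids=751, num_illum=5):
        super().__init__()
        # Stand-in backbone; the paper would use a CNN such as ResNet-50.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.reid_stream = nn.Linear(feat_dim, feat_dim)    # identity-related features
        self.illum_stream = nn.Linear(feat_dim, feat_dim)   # lighting-related features
        self.id_classifier = nn.Linear(feat_dim, num_ids)
        self.illum_classifier = nn.Linear(feat_dim, num_illum)

    def forward(self, x):
        shared = self.backbone(x)
        return self.reid_stream(shared), self.illum_stream(shared)


def discriminative_entropy_loss(illum_logits_on_reid):
    """One reading of innovation (1): push the illumination classifier toward a
    uniform (maximum-entropy) prediction on ReID features, so the ReID stream
    carries no lighting information."""
    probs = F.softmax(illum_logits_on_reid, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    return -entropy  # minimising this maximises the entropy


def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Generic soft-label guidance from a frozen teacher; the same form could
    serve both the ReID teacher (neutral lighting) and the illumination teacher
    (trained on adjusted-minus-original image differences)."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)


def synthetic_lighting(images, gamma=2.2):
    """Toy stand-in for the predefined lighting changes used to augment
    Market1501 / DukeMTMC-ReID: a simple gamma curve on [0, 1] images."""
    return images.clamp(0.0, 1.0) ** gamma


if __name__ == "__main__":
    model = TwoStreamReID()
    x = torch.rand(8, 3, 256, 128)              # a batch of person crops
    dark = synthetic_lighting(x, gamma=2.5)     # synthetically darkened copies
    f_reid, f_illum = model(dark)
    ent = discriminative_entropy_loss(model.illum_classifier(f_reid))
    print(float(ent))
```

How these pieces are weighted and alternated during the adversarial training mentioned in the title is specified in the paper, not in this sketch.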
Related papers
- ALEN: A Dual-Approach for Uniform and Non-Uniform Low-Light Image Enhancement [6.191556429706728]
Inadequate illumination can lead to significant information loss and poor image quality, impacting various applications such as surveillance.
Current enhancement techniques often use specific datasets to enhance low-light images, but they still struggle to adapt to diverse real-world conditions.
The Adaptive Light Enhancement Network (ALEN) is introduced, whose main approach is the use of a classification mechanism to determine whether local or global illumination enhancement is required.
arXiv Detail & Related papers (2024-07-29T05:19:23Z)
- CodeEnhance: A Codebook-Driven Approach for Low-Light Image Enhancement [97.95330185793358]
Low-light image enhancement (LLIE) aims to improve low-illumination images.
Existing methods face two challenges: uncertainty in restoration from diverse brightness degradations and loss of texture and color information.
We propose a novel enhancement approach, CodeEnhance, by leveraging quantized priors and image refinement.
arXiv Detail & Related papers (2024-04-08T07:34:39Z)
- Zero-Reference Low-Light Enhancement via Physical Quadruple Priors [58.77377454210244]
We propose a new zero-reference low-light enhancement framework trainable solely with normal light images.
This framework is able to restore our illumination-invariant prior back to images, automatically achieving low-light enhancement.
arXiv Detail & Related papers (2024-03-19T17:36:28Z)
- ClassLIE: Structure- and Illumination-Adaptive Classification for Low-Light Image Enhancement [17.51201873607536]
This paper proposes a novel framework, called ClassLIE, that combines the strengths of CNNs and transformers.
It classifies and adaptively learns the structural and illumination information from the low-light images in a holistic and regional manner.
Experiments on five benchmark datasets consistently show our ClassLIE achieves new state-of-the-art performance.
arXiv Detail & Related papers (2023-12-20T18:43:20Z)
- Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement [96.09255345336639]
We formulate a principled One-stage Retinex-based Framework (ORF) to enhance low-light images.
ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image.
Our algorithm, Retinexformer, significantly outperforms state-of-the-art methods on thirteen benchmarks.
arXiv Detail & Related papers (2023-03-12T16:54:08Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light ones in the inverse process, with unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against the SOTAs.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Intrinsic Image Transfer for Illumination Manipulation [1.2387676601792899]
This paper presents a novel intrinsic image transfer (IIT) algorithm for illumination manipulation.
It creates a local image translation between two illumination surfaces.
We illustrate that all losses can be reduced without the necessity of taking an intrinsic image decomposition.
arXiv Detail & Related papers (2021-07-01T19:12:24Z)
- Physically Inspired Dense Fusion Networks for Relighting [45.66699760138863]
We propose a model which enriches neural networks with physical insight.
Our method generates the relighted image with new illumination settings via two different strategies.
We show that our proposal can outperform many state-of-the-art methods in terms of well-known fidelity metrics and perceptual loss.
arXiv Detail & Related papers (2021-05-05T17:33:45Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the SOTA by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Unsupervised Low-light Image Enhancement with Decoupled Networks [103.74355338972123]
We learn a two-stage GAN-based framework to enhance the real-world low-light images in a fully unsupervised fashion.
Our proposed method outperforms the state-of-the-art unsupervised image enhancement methods in terms of both illumination enhancement and noise reduction.
arXiv Detail & Related papers (2020-05-06T13:37:08Z)