Spectral Aware Softmax for Visible-Infrared Person Re-Identification
- URL: http://arxiv.org/abs/2302.01512v1
- Date: Fri, 3 Feb 2023 02:57:18 GMT
- Title: Spectral Aware Softmax for Visible-Infrared Person Re-Identification
- Authors: Lei Tan, Pingyang Dai, Qixiang Ye, Mingliang Xu, Yongjian Wu, Rongrong Ji
- Abstract summary: Visible-infrared person re-identification (VI-ReID) aims to match specific pedestrian images from different modalities.
Existing methods still follow the softmax loss training paradigm, which is widely used in single-modality classification tasks.
We propose the spectral-aware softmax (SA-Softmax) loss, which fully explores the embedding space using modality information.
- Score: 123.69049942659285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visible-infrared person re-identification (VI-ReID) aims to match specific pedestrian images across different modalities. Despite the extra modality discrepancy, existing methods still follow the softmax-loss training paradigm widely used in single-modality classification tasks. The softmax loss lacks an explicit penalty for the apparent modality gap, which limits the performance upper bound of the VI-ReID task. In this paper, we propose the spectral-aware softmax (SA-Softmax) loss, which fully explores the embedding space with modality information and has clear interpretability. Specifically, SA-Softmax adopts an asynchronous optimization strategy based on modality prototypes instead of the synchronous optimization based on identity prototypes used by the original softmax loss. To encourage high overlap between the two modalities, SA-Softmax optimizes each sample against the prototype from the other spectrum. Based on observation and analysis of SA-Softmax, we further modify it with a Feature Mask and an Absolute-Similarity Term to alleviate ambiguous optimization during model training. Extensive experiments on RegDB and SYSU-MM01 demonstrate the superior performance of SA-Softmax over state-of-the-art methods in this cross-modality setting.
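To make the asynchronous, cross-spectrum optimization concrete, the PyTorch-style sketch below classifies each embedding against per-modality identity prototypes and draws the logits from the prototypes of the opposite spectrum. It is a minimal illustration of the idea described in the abstract, not the authors' implementation: the prototype parameterization, the scale factor, and the omission of the Feature Mask and Absolute-Similarity Term are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossSpectrumSoftmax(nn.Module):
    """Minimal sketch of the SA-Softmax idea: each embedding is classified
    against per-modality identity prototypes, and the logits come from the
    prototypes of the *other* spectrum, so gradient descent pulls visible
    and infrared features toward a shared region of the embedding space.
    The Feature Mask and Absolute-Similarity Term from the paper are
    omitted; all names and the scale value are illustrative assumptions."""

    def __init__(self, feat_dim: int, num_ids: int, scale: float = 16.0):
        super().__init__()
        # One prototype per identity and per modality (index 0 = visible, 1 = infrared).
        self.prototypes = nn.Parameter(torch.randn(2, num_ids, feat_dim))
        self.scale = scale

    def forward(self, feats: torch.Tensor, labels: torch.Tensor,
                modality: torch.Tensor) -> torch.Tensor:
        # feats: (B, D) embeddings, labels: (B,) identity ids, modality: (B,) in {0, 1}.
        feats = F.normalize(feats, dim=1)
        protos = F.normalize(self.prototypes, dim=2)   # (2, C, D)
        cross_protos = protos[1 - modality]            # (B, C, D), prototypes of the opposite spectrum
        logits = torch.einsum('bd,bcd->bc', feats, cross_protos)
        return F.cross_entropy(self.scale * logits, labels)
```

In use, a training step would compute loss = CrossSpectrumSoftmax(feat_dim, num_ids)(embeddings, labels, modality) alongside whatever other objectives the VI-ReID pipeline already employs.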
Related papers
- Softmax-free Linear Transformers [90.83157268265654]
Vision transformers (ViTs) have pushed the state-of-the-art for visual perception tasks.
Existing attempts to linearize self-attention are either theoretically flawed or empirically ineffective for visual recognition.
We propose a family of Softmax-Free Transformers (SOFT).
arXiv Detail & Related papers (2022-07-05T03:08:27Z)
- Sparse-softmax: A Simpler and Faster Alternative Softmax Transformation [2.3813678058429626]
The softmax function is widely used in artificial neural networks for multiclass classification problems.
In this paper, we provide an empirical study of a simple and concise softmax variant, namely sparse-softmax, to alleviate the problems that traditional softmax encounters in high-dimensional classification.
arXiv Detail & Related papers (2021-12-23T09:53:38Z)
- Real Additive Margin Softmax for Speaker Verification [14.226089039985151]
We show that the AM-Softmax loss does not implement real max-margin training.
We present a Real AM-Softmax loss that involves a true margin function in softmax training.
arXiv Detail & Related papers (2021-10-18T09:11:14Z)
- Breaking the Softmax Bottleneck for Sequential Recommender Systems with Dropout and Decoupling [0.0]
We show that there are more aspects to the Softmax bottleneck in SBRSs.
We propose a simple yet effective method, Dropout and Decoupling (D&D), to alleviate these problems.
Our method significantly improves the accuracy of a variety of Softmax-based SBRS algorithms.
arXiv Detail & Related papers (2021-10-11T16:52:23Z)
- Exploring Alternatives to Softmax Function [0.5924831288313849]
We investigate Taylor softmax, SM-softmax and our proposed SM-Taylor softmax as alternatives to softmax function.
Our experiments for the image classification task on different datasets reveal that there is always a configuration of the SM-Taylor softmax function that outperforms the normal softmax function.
arXiv Detail & Related papers (2020-11-23T16:50:18Z)
- Optimal Approximation -- Smoothness Tradeoffs for Soft-Max Functions [73.33961743410876]
A soft-max function has two main efficiency measures: approximation and smoothness.
We identify the optimal approximation-smoothness tradeoffs for different measures of approximation and smoothness.
This leads to novel soft-max functions, each of which is optimal for a different application.
arXiv Detail & Related papers (2020-10-22T05:19:58Z)
- Balanced Meta-Softmax for Long-Tailed Visual Recognition [46.215759445665434]
We show that the Softmax function, though used in most classification tasks, gives a biased gradient estimation under the long-tailed setup.
This paper presents Balanced Softmax, an elegant unbiased extension of Softmax that accommodates the label distribution shift between training and testing (a minimal sketch of this idea appears after this list).
In our experiments, we demonstrate that Balanced Meta-Softmax outperforms state-of-the-art long-tailed classification solutions on both visual recognition and instance segmentation tasks.
arXiv Detail & Related papers (2020-07-21T12:05:00Z)
- Loss Function Search for Face Recognition [75.79325080027908]
We develop a reward-guided search method to automatically obtain the best candidate.
Experimental results on a variety of face recognition benchmarks have demonstrated the effectiveness of our method.
arXiv Detail & Related papers (2020-07-10T03:40:10Z)
- Taming GANs with Lookahead-Minmax [63.90038365274479]
Experimental results on MNIST, SVHN, CIFAR-10, and ImageNet demonstrate a clear advantage of combining Lookahead-minmax with Adam or extragradient.
Using 30-fold fewer parameters and 16-fold smaller minibatches, we outperform the reported performance of the class-dependent BigGAN on CIFAR-10, obtaining an FID of 12.19 without using class labels.
arXiv Detail & Related papers (2020-06-25T17:13:23Z)
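As a side note on the Balanced Meta-Softmax entry above, the core Balanced Softmax adjustment is simple enough to sketch: the per-class training frequencies are folded into the logits before the usual cross-entropy, compensating for the long-tailed label distribution seen during training. This is a minimal sketch of that idea under assumed tensor names, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def balanced_softmax_loss(logits: torch.Tensor,
                          labels: torch.Tensor,
                          class_counts: torch.Tensor) -> torch.Tensor:
    """Balanced Softmax sketch: shift each logit by the log of its class
    frequency so the cross-entropy loss becomes unbiased under the shift
    from a long-tailed training distribution to a balanced test set."""
    adjusted = logits + torch.log(class_counts.float()).unsqueeze(0)  # (B, C)
    return F.cross_entropy(adjusted, labels)
```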