Cluster-level Feature Alignment for Person Re-identification
- URL: http://arxiv.org/abs/2008.06810v1
- Date: Sat, 15 Aug 2020 23:47:47 GMT
- Title: Cluster-level Feature Alignment for Person Re-identification
- Authors: Qiuyu Chen, Wei Zhang, Jianping Fan
- Abstract summary: This paper probes another feature alignment modality, namely cluster-level feature alignment across the whole dataset.
We propose an anchor loss and investigate many variants of cluster-level feature alignment, which consist of iterative aggregation and alignment from an overview of the dataset.
- Score: 16.01713931617725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instance-level alignment is widely exploited for person re-identification,
e.g. spatial alignment, latent semantic alignment and triplet alignment. This
paper probes another feature alignment modality, namely cluster-level feature
alignment across the whole dataset, where the model can see not only the
sampled images in the local mini-batch but also the global feature distribution
of the whole dataset via distilled anchors. Towards this aim, we propose an
anchor loss and investigate many variants of cluster-level feature alignment,
which consist of iterative aggregation and alignment from an overview of the
dataset. Our extensive experiments demonstrate that our methods provide
consistent and significant performance improvement with little additional
training effort after the saturation of traditional training. From both
theoretical and experimental perspectives, our proposed methods result in more
stable and guided optimization towards better representation and
generalization for a well-aligned embedding.
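The abstract does not give the exact form of the anchor loss, but the recipe it describes (distill per-identity anchors over the whole dataset, then align mini-batch features to them) admits a minimal sketch. The squared-L2 pull, the L2 normalization, and the periodic re-aggregation below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def aggregate_anchors(model, loader, num_ids, dim, device="cpu"):
    """Pass over the whole dataset and distill one anchor per identity."""
    anchors = torch.zeros(num_ids, dim, device=device)
    counts = torch.zeros(num_ids, device=device)
    for images, pids in loader:
        pids = pids.to(device)
        feats = F.normalize(model(images.to(device)), dim=1)
        anchors.index_add_(0, pids, feats)
        counts.index_add_(0, pids, torch.ones_like(pids, dtype=torch.float))
    return F.normalize(anchors / counts.clamp(min=1).unsqueeze(1), dim=1)

def anchor_loss(feats, pids, anchors):
    """Pull each mini-batch embedding toward its identity's global anchor."""
    feats = F.normalize(feats, dim=1)
    return ((feats - anchors[pids]) ** 2).sum(dim=1).mean()
```

Training would then alternate the two phases suggested by "iterative aggregation and alignment": refresh the anchors with aggregate_anchors every few epochs, and add anchor_loss to the usual identity-classification or triplet objective in between.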
Related papers
- GCC: Generative Calibration Clustering [55.44944397168619]
We propose a novel Generative Calibration Clustering (GCC) method to incorporate feature learning and augmentation into the clustering procedure.
First, we develop a discriminative feature alignment mechanism to discover the intrinsic relationship between real and generated samples.
Second, we design a self-supervised metric learning scheme to generate more reliable cluster assignments.
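The summary does not specify the alignment objective; a common stand-in for aligning real and generated feature distributions is first- and second-moment matching, sketched below. This is an assumption for illustration, not necessarily GCC's actual mechanism.

```python
import torch

def moment_alignment_loss(real_feats, gen_feats):
    """Match the mean and covariance of real vs. generated feature batches."""
    mu_r, mu_g = real_feats.mean(0), gen_feats.mean(0)
    cov_r = (real_feats - mu_r).T @ (real_feats - mu_r) / (real_feats.size(0) - 1)
    cov_g = (gen_feats - mu_g).T @ (gen_feats - mu_g) / (gen_feats.size(0) - 1)
    return (mu_r - mu_g).pow(2).sum() + (cov_r - cov_g).pow(2).sum()
```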
arXiv Detail & Related papers (2024-04-14T01:51:11Z) - Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers dimensionality reduction (DR) and clustering as special cases and allows addressing them jointly within a single optimization problem.
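As a concrete (hypothetical) instance of the idea, one can couple a dataset to a handful of low-dimensional prototypes with Gromov-Wasserstein using the POT library; the prototype positions would normally be optimized as well, which is omitted here, and the exact distributional-reduction objective may differ.

```python
import numpy as np
import ot  # POT: pip install pot

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))   # high-dimensional data
Z = rng.normal(size=(5, 2))      # a few low-dimensional prototypes

C1, C2 = ot.dist(X, X), ot.dist(Z, Z)    # intra-space distance matrices
p, q = ot.unif(len(X)), ot.unif(len(Z))  # uniform weights

T = ot.gromov.gromov_wasserstein(C1, C2, p, q, loss_fun="square_loss")
clusters = T.argmax(axis=1)      # each point's most strongly coupled prototype
```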
arXiv Detail & Related papers (2024-02-03T19:00:19Z) - One for all: A novel Dual-space Co-training baseline for Large-scale Multi-View Clustering [42.92751228313385]
We propose a novel multi-view clustering model, named Dual-space Co-training Large-scale Multi-view Clustering (DSCMC).
The main objective of our approach is to enhance clustering performance by leveraging co-training in two distinct spaces.
Our algorithm has approximately linear computational complexity, which makes it practical for large-scale datasets.
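As a heavily simplified illustration of co-training two spaces toward a shared cluster assignment (the names and losses below are mine, not DSCMC's formulation), each view is projected linearly into a cluster-logit space and the two spaces supervise each other:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoSpaceCoTrainer(nn.Module):
    """Hypothetical co-training sketch: two views, one shared label space."""
    def __init__(self, d1, d2, k):
        super().__init__()
        self.proj1 = nn.Linear(d1, k)  # view 1 -> k cluster logits
        self.proj2 = nn.Linear(d2, k)  # view 2 -> k cluster logits

    def forward(self, x1, x2):
        log_p1 = F.log_softmax(self.proj1(x1), dim=1)
        log_p2 = F.log_softmax(self.proj2(x2), dim=1)
        # symmetric co-training: each space's assignment teaches the other
        loss = F.kl_div(log_p1, log_p2.exp().detach(), reduction="batchmean") \
             + F.kl_div(log_p2, log_p1.exp().detach(), reduction="batchmean")
        return loss, log_p1.exp(), log_p2.exp()
```

Linear projections keep the per-sample cost constant, consistent with the approximately linear complexity claimed above.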
arXiv Detail & Related papers (2024-01-28T16:30:13Z) - Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
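A minimal consistency-regularization sketch (the paper's exact pairing and losses may differ): predictions on a weakly and a strongly augmented copy of the same unlabelled target image are pushed to agree.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, weak_imgs, strong_imgs):
    """KL between predictions on two augmented views of the same images."""
    with torch.no_grad():
        p_weak = F.softmax(model(weak_imgs), dim=1)        # pseudo-target
    log_p_strong = F.log_softmax(model(strong_imgs), dim=1)
    return F.kl_div(log_p_strong, p_weak, reduction="batchmean")
```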
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - Multi-View Clustering via Semi-non-negative Tensor Factorization [120.87318230985653]
We develop a novel multi-view clustering method based on semi-non-negative tensor factorization (Semi-NTF).
Our model directly considers the between-view relationship and exploits the between-view complementary information.
In addition, we provide an optimization algorithm for the proposed method and prove mathematically that the algorithm always converges to a stationary KKT point.
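For intuition, here is the two-way (matrix) special case, semi-NMF X ≈ F Gᵀ with G ≥ 0 and F unconstrained, using the classic multiplicative updates of Ding et al.; the paper's Semi-NTF extends this to tensors, which this sketch does not attempt.

```python
import numpy as np

def semi_nmf(X, k, iters=200, eps=1e-9):
    """Semi-NMF: X (n x m) ~= F (n x k) @ G.T (k x m), with G >= 0."""
    rng = np.random.default_rng(0)
    G = np.abs(rng.normal(size=(X.shape[1], k)))
    pos = lambda A: (np.abs(A) + A) / 2  # elementwise positive part
    neg = lambda A: (np.abs(A) - A) / 2  # elementwise negative part
    for _ in range(iters):
        F = X @ G @ np.linalg.pinv(G.T @ G)   # closed-form, unconstrained
        XtF, FtF = X.T @ F, F.T @ F
        G *= np.sqrt((pos(XtF) + G @ neg(FtF)) /
                     (neg(XtF) + G @ pos(FtF) + eps))
    return F, G
```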
arXiv Detail & Related papers (2023-03-29T14:54:19Z) - Hub-VAE: Unsupervised Hub-based Regularization of Variational Autoencoders [11.252245456934348]
We propose an unsupervised, data-driven regularization of the latent space with a mixture of hub-based priors and a hub-based contrastive loss.
Our algorithm achieves superior cluster separability in the embedding space, and accurate data reconstruction and generation.
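The summary leaves hub selection unspecified; a standard definition of hubness (how often a point appears in other points' k-nearest-neighbor lists) can be computed as below, and the top-scoring latent points could then anchor the prior and the contrastive loss. This is an illustrative reading, not necessarily Hub-VAE's procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def top_hubs(Z, k=10, n_hubs=20):
    """Return indices of the n_hubs points with the highest k-occurrence."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(Z)
    _, idx = nn.kneighbors(Z)             # first neighbor is the point itself
    counts = np.bincount(idx[:, 1:].ravel(), minlength=len(Z))
    return np.argsort(counts)[::-1][:n_hubs]
```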
arXiv Detail & Related papers (2022-11-18T19:12:15Z) - Rethinking Clustering-Based Pseudo-Labeling for Unsupervised Meta-Learning [146.11600461034746]
CACTUs, a method for unsupervised meta-learning, is a clustering-based approach with pseudo-labeling.
This approach is model-agnostic and can be combined with supervised algorithms to learn from unlabeled data.
We prove that the core reason for its limited gains is the lack of a clustering-friendly property in the embedding space.
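The clustering-based pseudo-labeling step is straightforward to sketch: cluster unlabeled embeddings and treat cluster ids as labels from which few-shot episodes are sampled (the hyperparameters below are illustrative).

```python
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label(embeddings, n_clusters=50, seed=0):
    """k-means pseudo-labels over unlabeled embeddings, one id per example."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(embeddings)

# N-way K-shot episodes are then sampled from the pseudo-classes exactly as
# they would be from ground-truth classes.
```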
arXiv Detail & Related papers (2022-09-27T19:04:36Z) - Deterministic Decoupling of Global Features and its Application to Data Analysis [0.0]
We propose a new formalism based on defining transformations on submanifolds.
Through these transformations we define a normalization that, as we demonstrate, allows for decoupling differentiable features.
We apply this method, in the original data domain and at the output of a filter bank, to regression and classification problems based on global descriptors.
arXiv Detail & Related papers (2022-07-05T15:54:39Z) - Transductive Few-Shot Learning: Clustering is All You Need? [31.21306826132773]
We investigate a general formulation for transductive few-shot learning, which integrates prototype-based objectives.
We find that our method yields competitive performance, in terms of accuracy and optimization, while scaling up to large problems.
Surprisingly, this general model already achieves competitive performance in comparison to the state of the art.
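A minimal version of the prototype-based transductive idea (a soft k-means over the query set, seeded by the support set; the paper's exact objective may differ):

```python
import torch

def transductive_prototypes(support, sup_labels, query, n_way,
                            steps=10, tau=10.0):
    """Refine class prototypes with soft assignments of unlabeled queries."""
    protos = torch.stack([support[sup_labels == c].mean(0)
                          for c in range(n_way)])
    for _ in range(steps):
        soft = (-torch.cdist(query, protos) * tau).softmax(dim=1)  # (Q, n_way)
        for c in range(n_way):
            sup_c = support[sup_labels == c]
            num = sup_c.sum(0) + (soft[:, c:c + 1] * query).sum(0)
            den = sup_c.size(0) + soft[:, c].sum()
            protos[c] = num / den
    return protos  # classify queries by nearest prototype
```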
arXiv Detail & Related papers (2021-06-16T16:14:01Z) - You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data subjected to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, the proposed model, TCC, is trained end-to-end, requiring no alternating steps.
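The reparametrization of categorical assignment variables is typically done with Gumbel-softmax, which keeps discrete cluster assignments differentiable; below is a sketch of that mechanism and of a cluster-level representation built from it (shapes are illustrative).

```python
import torch
import torch.nn.functional as F

logits = torch.randn(32, 10, requires_grad=True)  # per-sample cluster logits
assign = F.gumbel_softmax(logits, tau=0.5)        # soft, differentiable assignment

# cluster-level representation: confidence-weighted average of instance features
feats = torch.randn(32, 128)
cluster_reps = assign.T @ feats / assign.sum(0, keepdim=True).T.clamp(min=1e-6)
```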
arXiv Detail & Related papers (2021-06-03T14:59:59Z) - Latent Space Regularization for Unsupervised Domain Adaptation in Semantic Segmentation [14.050836886292869]
We introduce feature-level space-shaping regularization strategies to reduce the domain discrepancy in semantic segmentation.
We verify the effectiveness of such methods in the autonomous driving setting.
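One plausible space-shaping regularizer (the summary does not name the exact strategies) pulls per-class feature centroids of the two domains together:

```python
import torch

def centroid_alignment(src_feats, src_labels, tgt_feats, tgt_pseudo, n_classes):
    """Squared distance between source and target per-class feature centroids.
    Target labels are pseudo-labels; this is an illustrative regularizer."""
    loss = src_feats.new_zeros(())
    for c in range(n_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_pseudo == c]
        if len(s) and len(t):
            loss = loss + (s.mean(0) - t.mean(0)).pow(2).sum()
    return loss / n_classes
```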
arXiv Detail & Related papers (2021-04-06T16:07:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.