Exploring Transferable Homogeneous Groups for Compositional Zero-Shot Learning
- URL: http://arxiv.org/abs/2501.10695v2
- Date: Sat, 25 Jan 2025 14:02:46 GMT
- Title: Exploring Transferable Homogeneous Groups for Compositional Zero-Shot Learning
- Authors: Zhijie Rao, Jingcai Guo, Miaoge Li, Yang Chen
- Abstract summary: Homogeneous Group Representation Learning (HGRL) is a new perspective that formulates state (object) representation learning as multiple homogeneous sub-group representation learning tasks.
Our method integrates three core components designed to simultaneously enhance both the visual and prompt representation capabilities of the model.
- Score: 10.687828416652929
- Abstract: Conditional dependency presents one of the trickiest problems in Compositional Zero-Shot Learning, leading to significant property variations of the same state (object) across different objects (states). To address this problem, existing approaches often adopt either all-to-one or one-to-one representation paradigms. However, these extremes create an imbalance in the seesaw between transferability and discriminability, favoring one at the expense of the other. Comparatively, humans are adept at analogizing and reasoning in a hierarchical clustering manner, intuitively grouping categories with similar properties to form cohesive concepts. Motivated by this, we propose Homogeneous Group Representation Learning (HGRL), a new perspective that formulates state (object) representation learning as multiple homogeneous sub-group representation learning tasks. HGRL seeks to balance semantic transferability and discriminability by adaptively discovering and aggregating categories with shared properties and learning distributed group centers that retain group-specific discriminative features. Our method integrates three core components designed to simultaneously enhance both the visual and prompt representation capabilities of the model. Extensive experiments on three benchmark datasets validate the effectiveness of our method.
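To make the group-center idea concrete, here is a minimal sketch assuming a simple k-means grouping over class embeddings. It is an illustration under our own assumptions, not the paper's implementation; the names assign_groups and GroupCenterHead are hypothetical.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch only (not the authors' code): discover homogeneous
# groups of state/object classes by clustering their embeddings, then keep
# one learnable center per group as a shared, transferable prototype.

def assign_groups(class_embs: torch.Tensor, num_groups: int, iters: int = 10):
    """Plain k-means over class embeddings; returns group ids and centers."""
    centers = class_embs[torch.randperm(class_embs.size(0))[:num_groups]].clone()
    for _ in range(iters):
        assign = torch.cdist(class_embs, centers).argmin(dim=1)  # group id per class
        for g in range(num_groups):
            members = class_embs[assign == g]
            if members.numel() > 0:
                centers[g] = members.mean(dim=0)
    return assign, centers

class GroupCenterHead(torch.nn.Module):
    """Scores image features against learnable group centers."""
    def __init__(self, init_centers: torch.Tensor):
        super().__init__()
        # Centers start at the discovered group means and are refined by training.
        self.centers = torch.nn.Parameter(init_centers.clone())

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between each image feature and each group center.
        return F.normalize(feats, dim=-1) @ F.normalize(self.centers, dim=-1).T
```

Under this reading, a class is recognized via its group's center rather than a single global prototype (all-to-one) or a per-composition prototype (one-to-one), which is one way to see the transferability/discriminability trade-off the abstract describes.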
Related papers
- Task Groupings Regularization: Data-Free Meta-Learning with Heterogeneous Pre-trained Models [83.02797560769285]
Data-Free Meta-Learning (DFML) aims to derive knowledge from a collection of pre-trained models without accessing their original data.
Current methods often overlook the heterogeneity among pre-trained models, which leads to performance degradation due to task conflicts.
arXiv Detail & Related papers (2024-05-26T13:11:55Z) - Separating common from salient patterns with Contrastive Representation Learning [2.250968907999846]
Contrastive Analysis aims to separate the factors of variation that are common across two datasets from those salient to only one of them.
Current models based on Variational Auto-Encoders have shown poor performance in learning semantically-expressive representations.
We propose to leverage the ability of Contrastive Learning to learn semantically expressive representations well adapted for Contrastive Analysis.
arXiv Detail & Related papers (2024-02-19T08:17:13Z) - A Probabilistic Model Behind Self-Supervised Learning [53.64989127914936]
In self-supervised learning (SSL), representations are learned via an auxiliary task without annotated labels.
We present a generative latent variable model for self-supervised learning.
We show that several families of discriminative SSL, including contrastive methods, induce a comparable distribution over representations.
arXiv Detail & Related papers (2024-02-02T13:31:17Z) - Unified Multi-View Orthonormal Non-Negative Graph Based Clustering Framework [74.25493157757943]
We formulate a novel clustering model, which exploits the non-negative feature property and incorporates the multi-view information into a unified joint learning framework.
We also explore, for the first time, the multi-model non-negative graph-based approach to clustering data based on deep features.
arXiv Detail & Related papers (2022-11-03T08:18:27Z) - Heterogeneous Target Speech Separation [52.05046029743995]
We introduce a new paradigm for single-channel target source separation where the sources of interest can be distinguished using non-mutually exclusive concepts.
Our proposed heterogeneous separation framework can seamlessly leverage datasets with large distribution shifts.
arXiv Detail & Related papers (2022-04-07T17:14:20Z) - Cluster-based Contrastive Disentangling for Generalized Zero-Shot Learning [25.92340532509084]
Generalized Zero-Shot Learning (GZSL) aims to recognize both seen and unseen classes while training only on the seen classes.
We propose a Cluster-based Contrastive Disentangling (CCD) method to improve GZSL by alleviating the semantic gap and domain shift problems.
arXiv Detail & Related papers (2022-03-05T02:50:12Z) - Group-disentangled Representation Learning with Weakly-Supervised Regularization [13.311886256230814]
GroupVAE is a simple yet effective Kullback-Leibler divergence-based regularization to enforce consistent and disentangled representations.
We demonstrate that learning group-disentangled representations improves performance on downstream tasks, including fair classification and 3D shape-related tasks such as reconstruction, classification, and transfer learning.
arXiv Detail & Related papers (2021-10-23T10:01:05Z) - You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data subjected to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps (see the sketch after this list).
arXiv Detail & Related papers (2021-06-03T14:59:59Z) - Representation Learning for Clustering via Building Consensus [3.7434090710577608]
We propose Consensus Clustering using Unsupervised Representation Learning (ConCURL).
ConCURL improves clustering performance over state-of-the-art methods on four out of five image datasets.
We extend the evaluation procedure for clustering to reflect the challenges in real world clustering tasks.
arXiv Detail & Related papers (2021-05-04T05:04:03Z) - Deep Clustering by Semantic Contrastive Learning [67.28140787010447]
We introduce a novel variant called Semantic Contrastive Learning (SCL).
It explores the characteristics of both conventional contrastive learning and deep clustering.
It can amplify the strengths of contrastive learning and deep clustering in a unified approach.
arXiv Detail & Related papers (2021-03-03T20:20:48Z) - Clustering based Contrastive Learning for Improving Face Representations [34.75646290505793]
We present Clustering-based Contrastive Learning (CCL), a new clustering-based representation learning approach.
CCL uses labels obtained from clustering along with video constraints to learn discriminative face features.
arXiv Detail & Related papers (2020-04-05T13:11:44Z)
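As referenced in the "You Never Cluster Alone" entry above, one standard way to train categorical cluster assignments end-to-end is the Gumbel-softmax relaxation. The sketch below is our own minimal illustration of that generic technique, not the TCC implementation; SoftClusterHead and its parameters are hypothetical.

```python
import torch
import torch.nn.functional as F

# Minimal illustration of reparametrized cluster assignments
# (generic Gumbel-softmax relaxation; not the TCC code).

class SoftClusterHead(torch.nn.Module):
    def __init__(self, dim: int, num_clusters: int, tau: float = 0.5):
        super().__init__()
        self.logits = torch.nn.Linear(dim, num_clusters)
        self.tau = tau  # relaxation temperature

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Differentiable sample of a relaxed one-hot cluster assignment,
        # so gradients flow through the sampling step (no alternating updates).
        return F.gumbel_softmax(self.logits(feats), tau=self.tau, hard=False)

head = SoftClusterHead(dim=128, num_clusters=10)
feats = torch.randn(32, 128)       # stand-in for encoder features
assign = head(feats)               # (32, 10); each row sums to 1
cluster_reps = assign.T @ feats    # soft, cluster-level representations
```

Because the relaxed assignment is differentiable, the encoder and the cluster head can be optimized jointly with ordinary backpropagation, which is the general point the TCC summary makes about avoiding alternating optimization.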