DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors
- URL: http://arxiv.org/abs/2006.05895v2
- Date: Mon, 29 Jun 2020 23:23:12 GMT
- Title: DisCont: Self-Supervised Visual Attribute Disentanglement using Context Vectors
- Authors: Sarthak Bhagat, Vishaal Udandarao, Shagun Uppal
- Abstract summary: We propose a self-supervised framework DisCont to disentangle multiple attributes by exploiting the structural inductive biases within images.
Motivated by the recent surge in contrastive learning paradigms, our model bridges the gap between self-supervised contrastive learning algorithms and unsupervised disentanglement.
- Score: 6.385006149689549
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disentangling the underlying feature attributes within an image with no prior
supervision is a challenging task. Models that can disentangle attributes well
provide greater interpretability and control. In this paper, we propose a
self-supervised framework DisCont to disentangle multiple attributes by
exploiting the structural inductive biases within images. Motivated by the
recent surge in contrastive learning paradigms, our model bridges the gap
between self-supervised contrastive learning algorithms and unsupervised
disentanglement. We evaluate the efficacy of our approach, both qualitatively
and quantitatively, on four benchmark datasets.
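For concreteness, here is a minimal sketch of the kind of objective the abstract gestures at: the latent code is split into attribute-specific chunks, and an NT-Xent-style contrastive loss is applied per chunk across two augmented views of the same batch. The function name, chunking scheme, and loss details are illustrative assumptions, not DisCont's published implementation.

```python
# Illustrative sketch only: a per-attribute-chunk contrastive loss in the
# spirit of the abstract (latent split into attribute chunks, NT-Xent-style
# loss per chunk across two augmented views). Names and the chunking scheme
# are assumptions, not DisCont's actual code.
import torch
import torch.nn.functional as F

def chunk_contrastive_loss(z1, z2, num_chunks=4, temperature=0.5):
    """z1, z2: (batch, dim) latents of two augmented views of the same batch."""
    loss = 0.0
    for c1, c2 in zip(z1.chunk(num_chunks, dim=1), z2.chunk(num_chunks, dim=1)):
        c1 = F.normalize(c1, dim=1)
        c2 = F.normalize(c2, dim=1)
        logits = c1 @ c2.t() / temperature          # (batch, batch) similarities
        labels = torch.arange(c1.size(0), device=c1.device)
        # Matching chunks of the two views are positives; all others negatives.
        loss = loss + F.cross_entropy(logits, labels)
    return loss / num_chunks

# Usage: z1, z2 = encoder(aug(x)), encoder(aug(x)); loss = chunk_contrastive_loss(z1, z2)
```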
Related papers
- Disentangled and Self-Explainable Node Representation Learning [1.4002424249260854]
We introduce DiSeNE, a framework that generates self-explainable embeddings in an unsupervised manner.
Our method employs disentangled representation learning to produce dimension-wise interpretable embeddings.
We formalize novel desiderata for disentangled and interpretable embeddings, which drive our new objective functions.
arXiv Detail & Related papers (2024-10-28T13:58:52Z)
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- CustomContrast: A Multilevel Contrastive Perspective For Subject-Driven Text-to-Image Customization [27.114395240088562]
We argue that an ideal subject representation can be achieved by a cross-differential perspective, i.e., decoupling subject-intrinsic attributes from irrelevant attributes via contrastive learning.
Specifically, we propose CustomContrast, a novel framework that includes a Multilevel Contrastive Learning paradigm and a Multimodal Feature Injection (MFI).
Extensive experiments show the effectiveness of CustomContrast in subject similarity and text controllability.
arXiv Detail & Related papers (2024-09-09T13:39:47Z)
- Attribute-Aware Deep Hashing with Self-Consistency for Large-Scale Fine-Grained Image Retrieval [65.43522019468976]
We propose attribute-aware hashing networks with self-consistency for generating attribute-aware hash codes.
We develop an encoder-decoder network trained on a reconstruction task to distill high-level attribute-specific vectors without supervision.
Our models are equipped with a feature decorrelation constraint on these attribute vectors to strengthen their representational ability.
arXiv Detail & Related papers (2023-11-21T08:20:38Z)
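The "feature decorrelation constraint" in the entry above is only named, not specified. A common generic realization penalizes the off-diagonal cross-correlation between distinct attribute vectors; the sketch below shows that form under assumed tensor shapes, and is not the paper's exact loss.

```python
# Generic decorrelation penalty: push the cross-correlation between distinct
# attribute vectors toward zero. An illustrative stand-in for the paper's
# "feature decorrelation constraint", not its exact formulation.
import torch

def decorrelation_penalty(attr_vecs):
    """attr_vecs: (batch, num_attrs, dim) attribute-specific vectors."""
    v = attr_vecs - attr_vecs.mean(dim=0, keepdim=True)   # center over the batch
    v = torch.nn.functional.normalize(v, dim=2)
    # (num_attrs, num_attrs) mean cosine similarity between attribute vectors
    sim = torch.einsum('bad,bcd->ac', v, v) / v.size(0)
    off_diag = sim - torch.diag_embed(torch.diagonal(sim))
    return off_diag.pow(2).sum()
```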
- Self-Supervised Consistent Quantization for Fully Unsupervised Image Retrieval [17.422973861218182]
Unsupervised image retrieval aims to learn an efficient retrieval system without expensive data annotations.
A recent advance proposes deep fully unsupervised image retrieval, which trains a deep model from scratch to jointly optimize visual features and quantization codes.
We propose a novel self-supervised consistent quantization approach to deep fully unsupervised image retrieval, which consists of part consistent quantization and global consistent quantization.
arXiv Detail & Related papers (2022-06-20T14:39:59Z)
- Translational Concept Embedding for Generalized Compositional Zero-shot Learning [73.60639796305415]
Generalized compositional zero-shot learning aims to learn composed concepts of attribute-object pairs in a zero-shot fashion.
This paper introduces a new approach, termed translational concept embedding, to solve these two difficulties in a unified framework.
arXiv Detail & Related papers (2021-12-20T21:27:51Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene on, and show that it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)
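The PPI summary above names a salience mapping module and proactive interventions without detail. A generic way to realize that idea is to mask out the most salient pixels (found via input gradients) to synthesize an "intervened" view; the sketch below illustrates that assumption-based reading, not the paper's actual PPI procedure.

```python
# Illustrative sketch of the saliency-masking idea the summary describes:
# identify salient pixels via input gradients, mask them out, and treat the
# masked image as a synthetic "intervened" sample. A generic stand-in, not
# the paper's actual PPI procedure.
import torch

def intervened_view(model, x, y, mask_frac=0.1):
    """x: (batch, C, H, W) images; y: (batch,) labels. Returns masked images."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits.gather(1, y[:, None]).sum().backward()      # gradient of true-class score
    sal = x.grad.abs().sum(dim=1)                      # (batch, H, W) saliency
    k = int(mask_frac * sal[0].numel())
    thresh = sal.flatten(1).topk(k, dim=1).values[:, -1]   # per-image threshold
    mask = (sal < thresh[:, None, None]).float()       # keep only non-salient pixels
    return (x * mask.unsqueeze(1)).detach()
```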
- Face Anti-Spoofing Via Disentangled Representation Learning [90.90512800361742]
Face anti-spoofing is crucial to the security of face recognition systems.
We propose a novel perspective of face anti-spoofing that disentangles the liveness features and content features from images.
arXiv Detail & Related papers (2020-08-19T03:54:23Z)
- Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)
- Unsupervised Discovery, Control, and Disentanglement of Semantic Attributes with Applications to Anomaly Detection [15.817227809141116]
We focus on unsupervised generative representations that discover latent factors controlling image semantic attributes.
For (a), we propose a network architecture that exploits the combination of multiscale generative models with mutual information (MI).
For (b), we derive an analytical result (Lemma 1) that brings clarity to two related but distinct concepts.
arXiv Detail & Related papers (2020-02-25T20:50:47Z)