CLAWS: Contrastive Learning with hard Attention and Weak Supervision
- URL: http://arxiv.org/abs/2112.00847v1
- Date: Wed, 1 Dec 2021 21:45:58 GMT
- Title: CLAWS: Contrastive Learning with hard Attention and Weak Supervision
- Authors: Jansel Herrera-Gerena, Ramakrishnan Sundareswaran, John Just, Matthew
Darr, Ali Jannesari
- Abstract summary: We present CLAWS, an annotation-efficient learning framework, addressing the problem of manually labeling large-scale agricultural datasets.
CLAWS uses a network backbone inspired by SimCLR and weak supervision to investigate the effect of contrastive learning within class clusters.
We compare results between a supervised SimCLR and CLAWS using an agricultural dataset with 227,060 samples consisting of 11 different crop classes.
- Score: 1.1619569706231647
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Learning effective visual representations without human supervision is a
long-standing problem in computer vision. Recent advances in self-supervised
learning algorithms have utilized contrastive learning, with methods such as
SimCLR, which applies a composition of augmentations to an image to produce two
views and minimizes a contrastive loss between them. In this paper, we present
CLAWS, an annotation-efficient learning framework, addressing the problem of
manually labeling large-scale agricultural datasets along with potential
applications such as anomaly detection and plant growth analytics. CLAWS uses a
network backbone inspired by SimCLR and weak supervision to investigate the
effect of contrastive learning within class clusters. In addition, we apply a
hard attention mask to the cropped input image before maximizing agreement
between the image pairs using a contrastive loss function. This mask forces the
network to focus on pertinent object features and ignore background features.
We compare results between a supervised SimCLR and CLAWS using an agricultural
dataset with 227,060 samples consisting of 11 different crop classes. Our
experiments and extensive evaluations show that CLAWS achieves a competitive
NMI score of 0.7325. Furthermore, CLAWS produces low-dimensional
representations of very large datasets with minimal parameter tuning, forming
well-defined clusters that lend themselves to efficient, transparent, and
highly interpretable clustering methods such as Gaussian Mixture Models.
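The following is a minimal PyTorch sketch of the masked contrastive step described above: a binary hard attention mask is applied to both augmented crops before encoding, and agreement between the masked pair is maximized with a SimCLR-style NT-Xent loss. The encoder, the source of the mask, and the hyperparameters are illustrative assumptions, not the authors' released implementation.
```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive (NT-Xent) loss between two batches of embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2N, d)
    sim = z @ z.t() / temperature                      # scaled cosine similarities
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(self_mask, float('-inf'))         # exclude self-similarity
    # The positive for the i-th view in z1 is the i-th view in z2, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def masked_contrastive_step(encoder, view1, view2, attention_mask):
    """Zero out background pixels with a binary hard attention mask of shape (N, H, W),
    then maximize agreement between the masked augmented crops."""
    m = attention_mask.unsqueeze(1).float()            # (N, 1, H, W), broadcast over channels
    z1 = encoder(view1 * m)
    z2 = encoder(view2 * m)
    return nt_xent_loss(z1, z2)
```
This sketch assumes a single mask is valid for both crops; in practice the mask would likely be derived per view (e.g. from the crop geometry), a detail the abstract does not specify.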
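The clustering evaluation mentioned in the last sentence can be sketched with scikit-learn: fit a Gaussian Mixture Model with one component per crop class on the learned representations and score it against ground-truth labels with NMI. Only the class count (11) and the NMI metric come from the abstract; the embeddings below are random placeholders.
```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import normalized_mutual_info_score

def cluster_and_score(embeddings: np.ndarray, labels: np.ndarray, n_classes: int = 11) -> float:
    """Fit a GMM with one component per crop class and report NMI against ground truth."""
    gmm = GaussianMixture(n_components=n_classes, covariance_type="full", random_state=0)
    cluster_ids = gmm.fit_predict(embeddings)
    return normalized_mutual_info_score(labels, cluster_ids)

# Placeholder data standing in for the learned low-dimensional representations
# of the 227,060-sample, 11-class agricultural dataset described in the paper.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 32))
lab = rng.integers(0, 11, size=1000)
print(f"NMI: {cluster_and_score(emb, lab):.4f}")
```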
Related papers
- Dual Advancement of Representation Learning and Clustering for Sparse and Noisy Images [14.836487514037994]
Sparse and noisy images (SNIs) pose significant challenges for effective representation learning and clustering.
We propose Dual Advancement of Representation Learning and Clustering (DARLC) to enhance the representations derived from masked image modeling.
Our framework offers a comprehensive approach that improves the learning of representations by enhancing their local perceptibility, distinctiveness, and the understanding of relational semantics.
arXiv Detail & Related papers (2024-09-03T10:52:27Z) - Bayesian Learning-driven Prototypical Contrastive Loss for Class-Incremental Learning [42.14439854721613]
We propose a prototypical network with a Bayesian learning-driven contrastive loss (BLCL) tailored specifically for class-incremental learning scenarios.
Our approach dynamically adapts the balance between the cross-entropy and contrastive loss functions with a Bayesian learning technique.
arXiv Detail & Related papers (2024-05-17T19:49:02Z) - CUCL: Codebook for Unsupervised Continual Learning [129.91731617718781]
The focus of this study is on Unsupervised Continual Learning (UCL), as it presents an alternative to Supervised Continual Learning.
We propose a method named Codebook for Unsupervised Continual Learning (CUCL), which encourages the model to learn discriminative features that complete the class boundary.
Our method significantly boosts the performance of supervised and unsupervised methods.
arXiv Detail & Related papers (2023-11-25T03:08:50Z) - CLC: Cluster Assignment via Contrastive Representation Learning [9.631532215759256]
We propose Contrastive Learning-based Clustering (CLC), which uses contrastive learning to directly learn cluster assignment.
We achieve 53.4% accuracy on the full ImageNet dataset and outperform existing methods by large margins.
arXiv Detail & Related papers (2023-06-08T07:15:13Z) - Non-Contrastive Learning Meets Language-Image Pre-Training [145.6671909437841]
We study the validity of non-contrastive language-image pre-training (nCLIP).
We introduce xCLIP, a multi-tasking framework combining CLIP and nCLIP, and show that nCLIP aids CLIP in enhancing feature semantics.
arXiv Detail & Related papers (2022-10-17T17:57:46Z) - ACTIVE:Augmentation-Free Graph Contrastive Learning for Partial Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z) - Weakly Supervised Contrastive Learning [68.47096022526927]
We introduce a weakly supervised contrastive learning framework (WCL) to tackle this issue.
WCL achieves 65% and 72% ImageNet Top-1 Accuracy using ResNet50, which is even higher than SimCLRv2 with ResNet101.
arXiv Detail & Related papers (2021-10-10T12:03:52Z) - A Contrastive Learning Approach to Auroral Identification and Classification [0.8399688944263843]
We present a novel application of unsupervised learning to the task of auroral image classification.
We modify and adapt the Simple framework for Contrastive Learning of Representations (SimCLR) algorithm to learn representations of auroral images.
Our approach exceeds an established threshold for operational purposes, demonstrating readiness for deployment and utilization.
arXiv Detail & Related papers (2021-09-28T17:51:25Z) - Clustering by Maximizing Mutual Information Across Views [62.21716612888669]
We propose a novel framework for image clustering that incorporates joint representation learning and clustering.
Our method significantly outperforms state-of-the-art single-stage clustering methods across a variety of image datasets.
arXiv Detail & Related papers (2021-07-24T15:36:49Z) - Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
arXiv Detail & Related papers (2021-04-18T07:47:10Z) - Contrastive Learning based Hybrid Networks for Long-Tailed Image Classification [31.647639786095993]
We propose a novel hybrid network structure composed of a supervised contrastive loss to learn image representations and a cross-entropy loss to learn classifiers.
Experiments on three long-tailed classification datasets demonstrate the advantage of the proposed contrastive learning based hybrid networks in long-tailed classification.
arXiv Detail & Related papers (2021-03-26T05:22:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.