Joint Contrastive Learning with Infinite Possibilities
- URL: http://arxiv.org/abs/2009.14776v2
- Date: Sat, 10 Oct 2020 13:27:10 GMT
- Title: Joint Contrastive Learning with Infinite Possibilities
- Authors: Qi Cai and Yu Wang and Yingwei Pan and Ting Yao and Tao Mei
- Abstract summary: This paper explores useful modifications of the recent development in contrastive learning via novel probabilistic modeling.
We derive a particular form of contrastive loss named Joint Contrastive Learning (JCL).
- Score: 114.45811348666898
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper explores useful modifications to recent developments in
contrastive learning via novel probabilistic modeling. We derive a particular
form of contrastive loss named Joint Contrastive Learning (JCL). JCL implicitly
involves the simultaneous learning of an infinite number of query-key pairs,
which poses tighter constraints when searching for invariant features. We
derive an upper bound on this formulation that admits analytical solutions in
an end-to-end training manner. While JCL is practically effective in numerous
computer vision applications, we also theoretically unveil the mechanisms that
govern its behavior. We demonstrate that the proposed formulation harbors an
innate tendency toward similarity within each instance-specific class, and
therefore remains advantageous when searching for discriminative features among
distinct instances. We evaluate these proposals on multiple benchmarks,
demonstrating considerable improvements over existing algorithms. Code is
publicly available at: https://github.com/caiqi/Joint-Contrastive-Learning.
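For intuition about the "analytical solutions" mentioned above: if each query's positive keys are modeled as a Gaussian population, the expected InfoNCE loss admits a closed-form Jensen-style upper bound through the Gaussian moment-generating function. The sketch below is a minimal PyTorch rendering of such a bound, not the authors' released implementation; the diagonal-covariance estimate, the weight `lam`, and the tensor shapes are illustrative assumptions.

```python
import torch

def jcl_style_loss(q, pos_keys, neg_keys, tau=0.2, lam=1.0):
    """q: (B, D) queries; pos_keys: (B, M, D) augmented positives per query;
    neg_keys: (N, D) negatives. All vectors are assumed L2-normalized."""
    mu = pos_keys.mean(dim=1)                             # (B, D) positive mean
    var = pos_keys.var(dim=1, unbiased=False)             # (B, D) diagonal covariance
    pos_logit = (q * mu).sum(-1) / tau                    # q . mu / tau
    var_term = (q.pow(2) * var).sum(-1) / (2 * tau ** 2)  # q^T Sigma q / (2 tau^2)
    neg_logits = q @ neg_keys.t() / tau                   # (B, N)
    # Gaussian MGF folds the whole positive population into the partition
    # function: E[exp(q.k / tau)] = exp(q.mu / tau + q^T Sigma q / (2 tau^2)).
    logits = torch.cat([(pos_logit + lam * var_term).unsqueeze(1), neg_logits], dim=1)
    return (-pos_logit + torch.logsumexp(logits, dim=1)).mean()
```

Folding the positives' variance into the partition function penalizes spread among an instance's keys, which is consistent with the stated preference for similarity within each instance-specific class.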
Related papers
- Binary Code Similarity Detection via Graph Contrastive Learning on Intermediate Representations [52.34030226129628]
Binary Code Similarity Detection (BCSD) plays a crucial role in numerous fields, including vulnerability detection, malware analysis, and code reuse identification.
In this paper, we propose IRBinDiff, which mitigates compilation differences by leveraging LLVM-IR with higher-level semantic abstraction.
Our extensive experiments, conducted under varied compilation settings, demonstrate that IRBinDiff outperforms other leading BCSD methods in both one-to-one comparison and one-to-many search scenarios.
arXiv Detail & Related papers (2024-10-24T09:09:20Z)
- ParaICL: Towards Robust Parallel In-Context Learning [74.38022919598443]
Large language models (LLMs) have become the norm in natural language processing.
Few-shot in-context learning (ICL) is sensitive to the choice of demonstration examples.
We propose a novel method named parallel in-context learning (ParaICL).
arXiv Detail & Related papers (2024-03-31T05:56:15Z)
- Learning impartial policies for sequential counterfactual explanations using Deep Reinforcement Learning [0.0]
Recently, Reinforcement Learning (RL) methods have been proposed that learn policies for discovering sequential counterfactuals (SCFs), thereby enhancing scalability.
In this work, we identify shortcomings in existing methods that can result in policies with undesired properties, such as a bias towards specific actions.
To mitigate this effect, we propose using the classifier's output probabilities to create a more informative reward (a sketch of this idea follows the entry).
arXiv Detail & Related papers (2023-11-01T13:50:47Z)
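A minimal sketch of the reward-shaping idea above, assuming a scikit-learn-style classifier exposing `predict_proba`; the function name and the per-action cost are hypothetical, not the paper's formulation.

```python
def shaped_reward(clf, state, next_state, target_class, action_cost=0.01):
    """clf: any classifier exposing predict_proba; states are 1-D feature vectors."""
    p_before = clf.predict_proba(state.reshape(1, -1))[0, target_class]
    p_after = clf.predict_proba(next_state.reshape(1, -1))[0, target_class]
    # Dense feedback: reward the gain in target-class probability rather than
    # a sparse 0/1 signal when the decision finally flips; the small per-action
    # cost encourages short counterfactual sequences.
    return (p_after - p_before) - action_cost
```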
- Synergies between Disentanglement and Sparsity: Generalization and Identifiability in Multi-Task Learning [79.83792914684985]
We prove a new identifiability result that provides conditions under which maximally sparse base-predictors yield disentangled representations.
Motivated by this theoretical result, we propose a practical approach for learning disentangled representations based on a sparsity-promoting bi-level optimization problem (sketched after this entry).
arXiv Detail & Related papers (2022-11-26T21:02:09Z)
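An illustrative inner step for the bi-level idea above: fit an L1-regularized (sparsity-promoting) linear head on frozen shared features, with the outer loop (not shown) updating the shared encoder. The names, the regression loss, and the hyperparameters are assumptions for illustration, not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

def fit_sparse_head(z, y, l1=0.01, steps=200, lr=0.1):
    """Inner problem: L1-regularized linear head on frozen features z: (N, D)."""
    w = torch.zeros(z.size(1), requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(z.detach() @ w, y) + l1 * w.abs().sum()
        loss.backward()
        opt.step()
    # Near-zero entries of w select a sparse subset of latent factors; the
    # outer loop would update the encoder that produces z.
    return w.detach()
```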
- Consistent Multiclass Algorithms for Complex Metrics and Constraints [38.6998359999636]
This setting includes many common performance metrics, such as the multiclass G-mean and the micro F1-measure (computed in the example after this entry).
We give a general framework for consistent algorithms for such complex design goals.
Experiments on a variety of multiclass classification tasks and fairness-constrained problems show that our algorithms compare favorably to the state-of-the-art baselines.
arXiv Detail & Related papers (2022-10-18T09:09:29Z)
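For concreteness, the two metrics named above can be computed from a multiclass confusion matrix as follows. This is a worked example of the standard definitions, not code from the paper.

```python
import numpy as np

def multiclass_g_mean(C):
    """Geometric mean of per-class recalls; C[i, j] counts true class i predicted as j."""
    recalls = np.diag(C) / C.sum(axis=1)
    return recalls.prod() ** (1.0 / len(recalls))

def micro_f1(C):
    """For single-label multiclass predictions, micro-precision and micro-recall
    both reduce to trace(C) / sum(C), so micro-F1 equals accuracy."""
    return np.trace(C) / C.sum()

C = np.array([[8, 2, 0],
              [1, 6, 3],
              [0, 1, 9]])
print(multiclass_g_mean(C), micro_f1(C))
```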
- Counterfactual Explanations Using Optimization With Constraint Learning [0.0]
We propose a generic and flexible approach to counterfactual explanations using optimization with constraint learning (CE-OCL).
Specifically, we discuss how an optimization-with-constraint-learning framework can be leveraged to generate counterfactual explanations.
We also propose two novel modeling approaches to address data manifold closeness and diversity, two key criteria for practical counterfactual explanations (both are sketched after this entry).
arXiv Detail & Related papers (2022-09-22T13:27:21Z)
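The two criteria can be written as simple scoring functions. The formulations below (nearest-training-point distance for manifold closeness, mean pairwise distance for diversity) are illustrative stand-ins, not CE-OCL's actual constraints.

```python
import numpy as np

def manifold_closeness(x_cf, X_train):
    """Smaller is better: distance from a counterfactual to the nearest training point."""
    return np.linalg.norm(X_train - x_cf, axis=1).min()

def diversity(cf_set):
    """Larger is better: mean pairwise distance among K counterfactuals, shape (K, D)."""
    d = np.linalg.norm(cf_set[:, None, :] - cf_set[None, :, :], axis=-1)
    k = len(cf_set)
    return d.sum() / (k * (k - 1))  # mean over ordered pairs; the diagonal is zero
```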
- Contrastive Learning with Adversarial Examples [79.39156814887133]
Contrastive learning (CL) is a popular technique for self-supervised learning (SSL) of visual representations.
This paper introduces a new family of adversarial examples for contrastive learning and uses them to define a new adversarial training algorithm for SSL, denoted CLAE (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-10-22T20:45:10Z)
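A minimal FGSM-style sketch of the idea: perturb one augmented view so as to maximize a standard InfoNCE loss, then also train on the perturbed pair. This is CLAE-like in spirit only; the paper's exact attack and training algorithm may differ.

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                           # (B, B) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)

def adversarial_view(encoder, x1, x2, eps=0.03):
    """One FGSM step on view x1 that increases the contrastive loss."""
    x1 = x1.clone().requires_grad_(True)
    info_nce(encoder(x1), encoder(x2)).backward()
    with torch.no_grad():
        x_adv = (x1 + eps * x1.grad.sign()).clamp(0, 1)  # stay in image range
    return x_adv.detach()  # caller zeroes encoder grads, then trains on (x_adv, x2)
```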
- K-Shot Contrastive Learning of Visual Features with Multiple Instance Augmentations [67.46036826589467]
$K$-Shot Contrastive Learning is proposed to investigate sample variations within individual instances.
It aims to combine the advantages of inter-instance discrimination, learning discriminative features that distinguish between different instances, with the intra-instance variations captured by multiple augmentations of each instance (see the sketch below).
Experiment results demonstrate that the proposed $K$-shot contrastive learning achieves superior performance to state-of-the-art unsupervised methods.
arXiv Detail & Related papers (2020-07-27T04:56:41Z)
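A sketch of the $K$-shot idea under assumed details: each query is contrasted against $K$ independently augmented keys of its own instance, averaging one InfoNCE term per shot. Function names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def k_shot_loss(q, keys, neg, tau=0.2):
    """q: (B, D) queries; keys: (B, K, D) augmented positives; neg: (N, D)
    negatives. All vectors are assumed L2-normalized."""
    pos = torch.einsum('bd,bkd->bk', q, keys) / tau  # (B, K) positive logits
    neg_logits = q @ neg.t() / tau                   # (B, N) negative logits
    target = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    losses = [
        F.cross_entropy(torch.cat([pos[:, k:k + 1], neg_logits], dim=1), target)
        for k in range(keys.size(1))                 # one InfoNCE term per shot
    ]
    return torch.stack(losses).mean()
```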
This list is automatically generated from the titles and abstracts of the papers on this site.