Mitigating the Effect of Incidental Correlations on Part-based Learning
- URL: http://arxiv.org/abs/2310.00377v1
- Date: Sat, 30 Sep 2023 13:44:48 GMT
- Title: Mitigating the Effect of Incidental Correlations on Part-based Learning
- Authors: Gaurav Bhatt, Deepayan Das, Leonid Sigal, Vineeth N Balasubramanian
- Abstract summary: Part-based representations could be more interpretable and generalize better with limited data.
We present two innovative regularization methods for part-based representations.
We exhibit state-of-the-art (SoTA) performance on few-shot learning tasks on benchmark datasets.
- Score: 50.682498099720114
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intelligent systems possess a crucial characteristic of breaking complicated
problems into smaller reusable components or parts and adjusting to new tasks
using these part representations. However, current part-learners encounter
difficulties in dealing with incidental correlations resulting from the limited
observations of objects that may appear only in specific arrangements or with
specific backgrounds. These incidental correlations may have a detrimental
impact on the generalization and interpretability of learned part
representations. This study asserts that part-based representations could be
more interpretable and generalize better with limited data, employing two
innovative regularization methods. The first regularization separates the
generative processes of foreground and background information via a unique
mixture-of-parts formulation. Structural constraints are imposed on the parts
using a weakly-supervised loss, guaranteeing that the mixture-of-parts for
foreground and background entails soft, object-agnostic masks. The second
regularization assumes the form of a distillation loss, ensuring the invariance
of the learned parts to the incidental background correlations. Furthermore, we
incorporate sparse and orthogonal constraints to facilitate learning
high-quality part representations. By reducing the impact of incidental
background correlations on the learned parts, we exhibit state-of-the-art
(SoTA) performance on few-shot learning tasks on benchmark datasets, including
MiniImagenet, TieredImageNet, and FC100. We also demonstrate that the
part-based representations acquired through our approach generalize better than
existing techniques, even under domain shifts of the background and common data
corruption on the ImageNet-9 dataset. The implementation is available on
GitHub: https://github.com/GauravBh1010tt/DPViT.git
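As a rough illustration of the two kinds of regularization described in the abstract, here is a minimal PyTorch-style sketch, not the DPViT implementation: a sparsity-plus-orthogonality penalty on a bank of part prototypes, and a distillation-style loss that keeps part activations invariant to background perturbations. The function names, tensor shapes, and loss weights are illustrative assumptions.

```python
# Minimal sketch (not the DPViT implementation) of the regularizers described above.
import torch
import torch.nn.functional as F


def part_regularizer(parts: torch.Tensor, sparsity_weight: float = 1e-3,
                     ortho_weight: float = 1e-2) -> torch.Tensor:
    """Sparsity + orthogonality penalty on a bank of part prototypes.

    parts: (K, D) matrix of K part vectors of dimension D.
    """
    # L1 penalty encourages each part to use few feature dimensions (sparsity).
    sparsity = parts.abs().mean()
    # Push the Gram matrix of normalized parts toward the identity (orthogonality),
    # so that different parts capture distinct factors.
    normed = F.normalize(parts, dim=1)
    gram = normed @ normed.t()
    ortho = (gram - torch.eye(parts.size(0), device=parts.device)).pow(2).mean()
    return sparsity_weight * sparsity + ortho_weight * ortho


def background_invariance_loss(parts_clean: torch.Tensor,
                               parts_perturbed: torch.Tensor) -> torch.Tensor:
    """Distillation-style loss: part assignments should not change when only the
    background is perturbed. Both inputs are (B, K) part-activation logits for the
    same images, with and without a background perturbation."""
    teacher = F.softmax(parts_clean.detach(), dim=-1)   # clean view acts as teacher
    student = F.log_softmax(parts_perturbed, dim=-1)    # perturbed view is the student
    return F.kl_div(student, teacher, reduction="batchmean")
```

In training, such terms would typically be added to the main task loss with small weights; the paper's actual formulation additionally couples them with the weakly-supervised mixture-of-parts masks.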
Related papers
- Unsupervised Part Discovery via Dual Representation Alignment [31.100169532078095]
Object parts serve as crucial intermediate representations in various downstream tasks.
Previous research has established that Vision Transformer can learn instance-level attention without labels.
In this paper, we achieve unsupervised part-specific attention learning using a novel paradigm.
arXiv Detail & Related papers (2024-08-15T12:11:20Z) - Hierarchical Visual Primitive Experts for Compositional Zero-Shot Learning [52.506434446439776]
Compositional zero-shot learning (CZSL) aims to recognize compositions with prior knowledge of known primitives (attribute and object).
We propose a simple and scalable framework called Composition Transformer (CoT) to address these issues.
Our method achieves SoTA performance on several benchmarks, including MIT-States, C-GQA, and VAW-CZSL.
arXiv Detail & Related papers (2023-08-08T03:24:21Z) - Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z) - On Feature Learning in the Presence of Spurious Correlations [45.86963293019703]
We show that the quality of learned feature representations is greatly affected by design decisions beyond the learning method itself.
We significantly improve upon the best results reported in the literature on the popular Waterbirds, CelebA hair color prediction, and WILDS-FMOW problems.
arXiv Detail & Related papers (2022-10-20T16:10:28Z) - Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information [77.19830787312743]
In real-world reinforcement learning applications, the learner's observation space is ubiquitously high-dimensional, containing both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component.
arXiv Detail & Related papers (2022-06-09T05:19:32Z) - Contextual Model Aggregation for Fast and Robust Federated Learning in Edge Computing [88.76112371510999]
Federated learning is a prime candidate for distributed machine learning at the network edge.
Existing algorithms face issues with slow convergence and/or robustness of performance.
We propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction.
arXiv Detail & Related papers (2022-03-23T21:42:31Z) - Unsupervised Part Discovery from Contrastive Reconstruction [90.88501867321573]
The goal of self-supervised visual representation learning is to learn strong, transferable image representations.
We propose an unsupervised approach to object part discovery and segmentation.
Our method yields semantic parts consistent across fine-grained but visually distinct categories.
arXiv Detail & Related papers (2021-11-11T17:59:42Z) - Dynamic Feature Regularized Loss for Weakly Supervised Semantic Segmentation [37.43674181562307]
We propose a new regularized loss which utilizes both shallow and deep features that are dynamically updated.
Our approach achieves new state-of-the-art performance, outperforming other approaches by a significant margin of more than 6% mIoU.
arXiv Detail & Related papers (2021-08-03T05:11:00Z) - Joint learning of variational representations and solvers for inverse problems with partially-observed data [13.984814587222811]
In this paper, we design an end-to-end framework that allows learning actual variational frameworks for inverse problems in a supervised setting.
The variational cost and the gradient-based solver are both stated as neural networks, using automatic differentiation for the latter.
This leads to a data-driven discovery of variational models; a minimal sketch of the idea follows after this list.
arXiv Detail & Related papers (2020-06-05T19:53:34Z)
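To make the last idea above concrete, the following is a minimal, hypothetical sketch (not the authors' code) of a learned variational cost paired with an unrolled gradient-descent solver whose updates come from automatic differentiation. The names `LearnedVariationalCost` and `solve`, the masked-observation setup, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch: a learned variational cost and an unrolled gradient-descent
# solver whose steps are obtained via autodiff, so the solver stays differentiable.
import torch
import torch.nn as nn


class LearnedVariationalCost(nn.Module):
    """Variational cost U_theta(x, y): masked data fidelity plus a learned prior."""

    def __init__(self, dim: int):
        super().__init__()
        self.prior = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, x, y, mask):
        fidelity = ((mask * (x - y)) ** 2).sum(dim=-1)         # fit observed entries only
        return (fidelity + self.prior(x).squeeze(-1)).mean()   # plus learned regularizer


def solve(cost, y, mask, steps=10, lr=0.1):
    """Unrolled gradient descent on the learned cost; create_graph=True keeps the
    iterations differentiable, so supervision on the output trains the cost itself."""
    x = y.detach().clone().requires_grad_(True)
    for _ in range(steps):
        grad, = torch.autograd.grad(cost(x, y, mask), x, create_graph=True)
        x = x - lr * grad
    return x


# Toy supervised use: recover x_true from partial observations y = mask * x_true.
dim = 16
cost = LearnedVariationalCost(dim)
opt = torch.optim.Adam(cost.parameters(), lr=1e-3)
x_true = torch.randn(32, dim)
mask = (torch.rand(32, dim) > 0.5).float()
x_hat = solve(cost, mask * x_true, mask)
((x_hat - x_true) ** 2).mean().backward()  # gradients flow into the cost network
opt.step()
```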
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.