A brief review of contrastive learning applied to astrophysics
- URL: http://arxiv.org/abs/2306.05528v1
- Date: Thu, 8 Jun 2023 19:56:32 GMT
- Title: A brief review of contrastive learning applied to astrophysics
- Authors: Marc Huertas-Company, Regina Sarmiento, Johan Knapen
- Abstract summary: Contrastive Learning is a self-supervised machine learning algorithm that extracts informative measurements from multi-dimensional datasets.
This paper briefly summarizes the main concepts behind contrastive learning and reviews the first promising applications to astronomy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable tools to extract patterns from high-dimensionality spaces are
becoming more necessary as astronomical datasets increase both in volume and
complexity. Contrastive Learning is a self-supervised machine learning
algorithm that extracts informative measurements from multi-dimensional
datasets; it has become increasingly popular in the computer vision and
machine learning communities in recent years. To do so, it maximizes the
agreement between the information extracted from augmented versions of the same
input data, making the final representation invariant to the applied
transformations. Contrastive Learning is particularly useful in astronomy for
removing known instrumental effects and for performing supervised
classifications and regressions with a limited amount of available labels,
showing a promising avenue towards Foundation Models. This short review
paper briefly summarizes the main concepts behind contrastive learning and
reviews the first promising applications to astronomy. We include some
practical recommendations on which applications are particularly attractive for
contrastive learning.
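The agreement-maximization objective described in the abstract is, in SimCLR-style frameworks, typically the normalized-temperature cross-entropy (NT-Xent) loss. A minimal NumPy sketch, assuming two (N, d) embedding matrices of augmented views of the same N inputs (function and variable names are illustrative):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss: each sample's positive is the embedding of
    its other augmented view; all remaining samples act as negatives."""
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)        # (2N, d) stacked views
    sim = z @ z.T / temperature                  # pairwise similarity logits
    np.fill_diagonal(sim, -np.inf)               # exclude self-similarity
    n = z1.shape[0]
    # index of the positive pair: view i in z1 matches view i in z2
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Minimizing this loss pulls the two views of each input together in embedding space while pushing apart views of different inputs, which is what makes the learned representation invariant to the applied transformations.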
Related papers
- Mutual Information guided Visual Contrastive Learning [8.058961687401627]
We investigate the potential of selecting training data based on their mutual information computed from real-world distributions.
We evaluate the proposed mutual-information-informed data augmentation method on several benchmarks across multiple state-of-the-art representation learning frameworks.
arXiv Detail & Related papers (2025-10-26T20:43:29Z)
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimate the impact of individual training samples on model parameters without retraining.
This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning).
We introduce the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
arXiv Detail & Related papers (2025-07-31T05:34:27Z)
- CSTA: Spatial-Temporal Causal Adaptive Learning for Exemplar-Free Video Class-Incremental Learning [62.69917996026769]
A class-incremental learning task requires learning and preserving both spatial appearance and temporal action involvement.
We propose a framework that equips separate adapters to learn new class patterns, accommodating the incremental information requirements unique to each class.
A causal compensation mechanism is proposed to reduce conflicts between the different types of information during increment and memorization.
arXiv Detail & Related papers (2025-01-13T11:34:55Z)
- Universal Time-Series Representation Learning: A Survey [14.340399848964662]
Time-series data exists in every corner of real-world systems and services.
Deep learning has demonstrated remarkable performance in extracting hidden patterns and features from time-series data.
arXiv Detail & Related papers (2024-01-08T08:00:04Z)
- Time Series Contrastive Learning with Information-Aware Augmentations [57.45139904366001]
A key component of contrastive learning is to select appropriate augmentations imposing some priors to construct feasible positive samples.
How to find the desired augmentations of time series data that are meaningful for given contrastive learning tasks and datasets remains an open question.
We propose a new contrastive learning approach with information-aware augmentations, InfoTS, that adaptively selects optimal augmentations for time series representation learning.
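InfoTS's adaptive, learned selection is beyond a short snippet, but the raw ingredient it selects among, a pool of candidate time-series augmentations, can be sketched as follows (a minimal illustration with assumed augmentation choices, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three common candidate augmentations for time series of shape (batch, length)
def jitter(x, sigma=0.05):
    """Add small Gaussian noise to every time step."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scaling(x, sigma=0.1):
    """Rescale each whole series by a random factor near 1."""
    return x * rng.normal(1.0, sigma, size=(x.shape[0], 1))

def permute_segments(x, n_segments=4):
    """Cut each series into segments and shuffle their order."""
    segments = np.array_split(x, n_segments, axis=1)
    order = rng.permutation(len(segments))
    return np.concatenate([segments[i] for i in order], axis=1)

# One toy series; each augmentation yields a candidate positive view
x = np.sin(np.linspace(0, 8 * np.pi, 256))[None, :]
views = [aug(x) for aug in (jitter, scaling, permute_segments)]
```

Which of these views is "meaningful" depends on the task (e.g., segment permutation destroys trend information), which is exactly the selection problem the paper addresses.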
arXiv Detail & Related papers (2023-03-21T15:02:50Z)
- Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z)
- Metric Learning as a Service with Covariance Embedding [7.5989847759545155]
Metric learning aims to maximize intra-class similarity and minimize inter-class similarity.
Existing models mainly rely on distance measures to obtain a separable embedding space.
We argue that to enable metric learning as a service for high-performance deep learning applications, we should also wisely deal with inter-class relationships.
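The distance-based objective that this entry says existing models rely on is commonly implemented as a triplet loss. A minimal sketch (illustrative, not this paper's covariance-embedding method):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Pull same-class pairs together and push different-class pairs apart:
    require d(anchor, positive) + margin <= d(anchor, negative)."""
    d_pos = np.linalg.norm(anchor - positive, axis=1)  # intra-class distance
    d_neg = np.linalg.norm(anchor - negative, axis=1)  # inter-class distance
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

The loss is zero once every negative is at least `margin` farther from the anchor than its positive, which yields the separable embedding space the entry refers to.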
arXiv Detail & Related papers (2022-11-28T10:10:59Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge [101.27154181792567]
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Sample-Efficient Reinforcement Learning in the Presence of Exogenous Information [77.19830787312743]
In real-world reinforcement learning applications, the learner's observation space is ubiquitously high-dimensional, containing both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component.
arXiv Detail & Related papers (2022-06-09T05:19:32Z)
- MetAug: Contrastive Learning via Meta Feature Augmentation [28.708395209321846]
We argue that contrastive learning heavily relies on informative features, or "hard" (positive or negative) features.
The key challenge toward exploring such features is that the source multi-view data is generated by applying random data augmentations.
We propose to directly augment the features in latent space, thereby learning discriminative representations without a large amount of input data.
arXiv Detail & Related papers (2022-03-10T02:35:39Z)
- Vertical Machine Unlearning: Selectively Removing Sensitive Information From Latent Feature Space [21.8933559159369]
We investigate a vertical unlearning mode, aiming at removing only sensitive information from latent feature space.
We introduce intuitive and formal definitions for this unlearning and show its relationship with existing horizontal unlearning.
We propose an approximation with an upper bound to estimate it, with rigorous theoretical analysis.
arXiv Detail & Related papers (2022-02-27T05:25:15Z)
- What Makes Good Contrastive Learning on Small-Scale Wearable-based Tasks? [59.51457877578138]
We study contrastive learning on the wearable-based activity recognition task.
This paper presents an open-source PyTorch library, CL-HAR, which can serve as a practical tool for researchers.
arXiv Detail & Related papers (2022-02-12T06:10:15Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Multi-Pretext Attention Network for Few-shot Learning with Self-supervision [37.6064643502453]
We propose a novel augmentation-free method for self-supervised learning, which does not rely on any auxiliary sample.
Besides, we propose the Multi-pretext Attention Network (MAN), which exploits a specific attention mechanism to combine the traditional augmentation-based methods and our GC.
We evaluate our MAN extensively on miniImageNet and tieredImageNet datasets and the results demonstrate that the proposed method outperforms the state-of-the-art (SOTA) relevant methods.
arXiv Detail & Related papers (2021-03-10T10:48:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.