Non-contrastive representation learning for intervals from well logs
- URL: http://arxiv.org/abs/2209.14750v3
- Date: Fri, 10 Nov 2023 08:49:47 GMT
- Title: Non-contrastive representation learning for intervals from well logs
- Authors: Alexander Marusov, Alexey Zaytsev
- Abstract summary: The representation learning problem in the oil & gas industry aims to construct a model that provides a representation based on logging data for a well interval.
One of the possible approaches is self-supervised learning (SSL).
We are the first to introduce non-contrastive SSL for well-logging data.
- Score: 58.70164460091879
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The representation learning problem in the oil & gas industry aims to
construct a model that provides a representation based on logging data for a
well interval. Previous attempts are mainly supervised and focus on the
similarity task, which estimates the closeness between intervals. We aim to
build informative representations without using supervised (labelled) data.
One possible approach is self-supervised learning (SSL), which, in contrast
to the supervised paradigm, requires few or no labels for the data.
Nowadays, most SSL approaches are either contrastive or non-contrastive.
Contrastive methods pull representations of similar (positive) objects closer
together while pushing apart those of different (negative) ones. Because
positive and negative pairs can be marked incorrectly, these methods can yield
inferior performance. Non-contrastive methods do not rely on such labelling and
are widespread in computer vision. They learn from pairs of similar objects
alone, which are easier to identify in logging data.
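To make the distinction concrete, below is a minimal sketch of a non-contrastive objective in the style of Barlow Twins, written in PyTorch. It needs only two embedded views of the same object (a positive pair) and no negatives; the function name and the lambda weight are illustrative choices, not the paper's exact configuration.

```python
import torch

def barlow_twins_loss(z_a, z_b, lambd=5e-3, eps=1e-9):
    """Non-contrastive objective: only a positive pair (embeddings of two
    augmented views of the same object) is needed; no negatives are mined."""
    n = z_a.shape[0]
    # Standardise each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(dim=0)) / (z_a.std(dim=0) + eps)
    z_b = (z_b - z_b.mean(dim=0)) / (z_b.std(dim=0) + eps)
    # Cross-correlation matrix between the two views, shape (d, d).
    c = (z_a.T @ z_b) / n
    # Diagonal -> 1: representations should be invariant to augmentation.
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    # Off-diagonal -> 0: decorrelate embedding dimensions.
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```

The diagonal term enforces invariance to augmentation, while the off-diagonal term decorrelates embedding dimensions, which is what prevents representation collapse without any negative pairs.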
We are the first to introduce non-contrastive SSL for well-logging data. In
particular, we exploit the Bootstrap Your Own Latent (BYOL) and Barlow Twins
methods, which avoid negative pairs and focus only on matching positive pairs.
The crucial part of these methods is the augmentation strategy. Our
augmentation strategies and our adaptation of BYOL and Barlow Twins together
allow us to achieve superior quality on clustering and, in most cases, the best
performance on different classification tasks. Our results demonstrate the
usefulness of the proposed non-contrastive self-supervised approaches for
representation learning in general and interval similarity in particular.
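The abstract does not spell out the augmentation strategy, so the following is only a hedged sketch of what augmentations for 1-D well-log intervals might look like. The transform choices (additive Gaussian noise and a random shift along depth) and their magnitudes are assumptions for illustration, not the authors' recipe.

```python
import torch

def augment_interval(x, noise_std=0.01, max_shift=8):
    """Produce one augmented view of a well-log interval.

    x: tensor of shape (n_logs, depth_steps), e.g. several log curves
    sampled along depth. Both transforms below are hypothetical.
    """
    # Additive Gaussian noise on the measurements.
    x = x + noise_std * torch.randn_like(x)
    # Small random shift along the depth axis (circular for simplicity).
    shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(x, shifts=shift, dims=-1)

# Two independently augmented views of the same interval form the
# positive pair that a BYOL- or Barlow Twins-style model tries to match.
interval = torch.randn(7, 128)  # e.g. 7 log curves x 128 depth samples
view_a, view_b = augment_interval(interval), augment_interval(interval)
```

Two such views would then be fed to the two branches of BYOL (online and target networks) or Barlow Twins, with the loss computed only over matching positive pairs.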
Related papers
- Regularized Contrastive Partial Multi-view Outlier Detection [76.77036536484114]
We propose a novel method named Regularized Contrastive Partial Multi-view Outlier Detection (RCPMOD)
In this framework, we utilize contrastive learning to learn view-consistent information and distinguish outliers by the degree of consistency.
Experimental results on four benchmark datasets demonstrate that our proposed approach could outperform state-of-the-art competitors.
arXiv Detail & Related papers (2024-08-02T14:34:27Z)
- A Closer Look at Benchmarking Self-Supervised Pre-training with Image Classification [51.35500308126506]
Self-supervised learning (SSL) is a machine learning approach where the data itself provides supervision, eliminating the need for external labels.
We study how classification-based evaluation protocols for SSL correlate and how well they predict downstream performance on different dataset types.
arXiv Detail & Related papers (2024-07-16T23:17:36Z)
- Prototypical Contrastive Learning through Alignment and Uniformity for Recommendation [6.790779112538357]
We present Prototypical contrastive learning through Alignment and Uniformity for recommendation.
Specifically, we first propose prototypes as a latent space to ensure consistency across different augmentations from the origin graph.
The absence of explicit negatives means that directly optimizing the consistency loss between instance and prototype could easily result in dimensional collapse issues.
arXiv Detail & Related papers (2024-02-03T08:19:26Z)
- Semantic Positive Pairs for Enhancing Visual Representation Learning of Instance Discrimination methods [4.680881326162484]
Self-supervised learning algorithms (SSL) based on instance discrimination have shown promising results.
We propose an approach to identify those images with similar semantic content and treat them as positive instances.
We run experiments on three benchmark datasets: ImageNet, STL-10 and CIFAR-10 with different instance discrimination SSL approaches.
arXiv Detail & Related papers (2023-06-28T11:47:08Z)
- Revisiting Contrastive Methods for Unsupervised Learning of Visual Representations [78.12377360145078]
Contrastive self-supervised learning has outperformed supervised pretraining on many downstream tasks like segmentation and object detection.
In this paper, we first study how biases in the dataset affect existing methods.
We show that current contrastive approaches work surprisingly well across: (i) object- versus scene-centric, (ii) uniform versus long-tailed and (iii) general versus domain-specific datasets.
arXiv Detail & Related papers (2021-06-10T17:59:13Z)
- Understanding self-supervised Learning Dynamics without Contrastive Pairs [72.1743263777693]
Contrastive approaches to self-supervised learning (SSL) learn representations by minimizing the distance between two augmented views of the same data point.
Recent methods such as BYOL and SimSiam show remarkable performance without negative pairs.
We study the nonlinear learning dynamics of non-contrastive SSL in simple linear networks.
arXiv Detail & Related papers (2021-02-12T22:57:28Z)
- Unsupervised Feature Learning by Cross-Level Instance-Group Discrimination [68.83098015578874]
We integrate between-instance similarity into contrastive learning, not directly by instance grouping, but by cross-level discrimination.
CLD (cross-level discrimination) effectively brings unsupervised learning closer to natural data and real-world applications.
It sets a new state-of-the-art on self-supervision, semi-supervision, and transfer learning benchmarks, and beats MoCo v2 and SimCLR on every reported metric.
arXiv Detail & Related papers (2020-08-09T21:13:13Z)