Raman Spectrum Matching with Contrastive Representation Learning
- URL: http://arxiv.org/abs/2202.12549v1
- Date: Fri, 25 Feb 2022 08:32:27 GMT
- Title: Raman Spectrum Matching with Contrastive Representation Learning
- Authors: Bo Li, Mikkel N. Schmidt, Tommy S. Alstrøm
- Abstract summary: We propose a new machine learning technique for Raman spectrum matching, based on contrastive representation learning.
Our approach significantly improves or is on par with the state of the art in prediction accuracy.
- Score: 7.070018798821577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Raman spectroscopy is an effective, low-cost, non-intrusive technique often
used for chemical identification. Typical approaches are based on matching
observations to a reference database, which requires careful preprocessing, or
supervised machine learning, which requires a fairly large number of training
observations from each class. We propose a new machine learning technique for
Raman spectrum matching, based on contrastive representation learning, that
requires no preprocessing and works with as little as a single reference
spectrum from each class. On three datasets we demonstrate that our approach
significantly improves or is on par with the state of the art in prediction
accuracy, and we show how to compute conformal prediction sets with specified
frequentist coverage. Based on our findings, we believe contrastive
representation learning is a promising alternative to existing methods for
Raman spectrum matching.
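The matching scheme the abstract describes can be sketched in a few lines: embed each spectrum with a learned encoder, then assign the class whose single reference embedding is most similar to the query. The sketch below is a minimal illustration, not the paper's implementation; the random projection stands in for the trained contrastive encoder, and the 256-channel input size is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder for a trained contrastive encoder: a fixed random
# projection from 256 spectral channels to an 8-d embedding.
PROJ = rng.standard_normal((256, 8))

def embed(spectrum):
    # A real system would run the trained network here; this linear
    # projection is only a stand-in with the same input/output shape.
    z = spectrum @ PROJ
    return z / np.linalg.norm(z)

def match(query, references):
    # Classify by cosine similarity to a single reference embedding per
    # class (embeddings are unit-norm, so the dot product is the cosine).
    q = embed(query)
    return max(references, key=lambda label: q @ embed(references[label]))
```

Because classification reduces to nearest-reference lookup in embedding space, adding a new class only requires embedding one reference spectrum, which is what lets the method work from a single example per class.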
Related papers
- Benchmarking Deep Learning Models for Raman Spectroscopy Across Open-Source Datasets [0.0]
This study presents one of the first systematic benchmarks comparing three or more published Raman-specific deep learning classifiers across multiple open-source Raman datasets.
We report classification accuracies and macro-averaged F1 scores to provide a fair and reproducible comparison of deep learning models for Raman spectra based classification.
arXiv Detail & Related papers (2026-01-22T16:54:53Z) - A Self-supervised Learning Method for Raman Spectroscopy based on Masked Autoencoders [3.9517125314802306]
We propose a self-supervised learning paradigm for Raman spectroscopy based on a Masked AutoEncoder, termed SMAE.
SMAE does not require any spectral annotations during pre-training. By randomly masking and then reconstructing the spectral information, the model learns essential spectral features.
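The mask-then-reconstruct pre-training objective described above can be illustrated with a short sketch. This is a generic masked-autoencoder step, not SMAE's actual code; the mask ratio and channel count are illustrative assumptions.

```python
import numpy as np

def mask_spectrum(spectrum, mask_ratio=0.5, rng=None):
    # Randomly hide a fraction of spectral channels; returns the masked
    # spectrum and a boolean mask (True = channel was hidden).
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(spectrum.shape) < mask_ratio
    masked = np.where(mask, 0.0, spectrum)
    return masked, mask

def reconstruction_loss(pred, target, mask):
    # MSE computed only over the hidden channels, as in masked
    # autoencoder pre-training: the model is scored on what it
    # could not see, so it must infer structure from context.
    return float(np.mean((pred[mask] - target[mask]) ** 2))
```

No labels appear anywhere in this objective, which is why such pre-training can use unannotated spectra.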
arXiv Detail & Related papers (2025-04-21T10:44:06Z) - Probably Approximately Precision and Recall Learning [60.00180898830079]
A key challenge in machine learning is the prevalence of one-sided feedback.
We introduce a Probably Approximately Correct (PAC) framework in which hypotheses are set functions that map each input to a set of labels.
We develop new algorithms that learn from positive data alone, achieving optimal sample complexity in the realizable case.
arXiv Detail & Related papers (2024-11-20T04:21:07Z) - Two Is Better Than One: Aligned Representation Pairs for Anomaly Detection [56.57122939745213]
Anomaly detection focuses on identifying samples that deviate from the norm.
Recent self-supervised methods have successfully learned such representations by employing prior knowledge about anomalies to create synthetic outliers during training.
We address this limitation with our new approach Con$$, which leverages prior knowledge about symmetries in normal samples to observe the data in different contexts.
arXiv Detail & Related papers (2024-05-29T07:59:06Z) - Balanced Data, Imbalanced Spectra: Unveiling Class Disparities with Spectral Imbalance [11.924440950433658]
We introduce the concept of spectral imbalance in features as a potential source for class disparities.
We derive exact expressions for the per-class error in a high-dimensional mixture model setting.
We study this phenomenon in 11 different state-of-the-art pretrained encoders.
arXiv Detail & Related papers (2024-02-18T23:59:54Z) - DiffSpectralNet : Unveiling the Potential of Diffusion Models for
Hyperspectral Image Classification [6.521187080027966]
We propose a new network called DiffSpectralNet, which combines diffusion and transformer techniques.
First, we use an unsupervised learning framework based on the diffusion model to extract both high-level and low-level spectral-spatial features.
The diffusion method is capable of extracting diverse and meaningful spectral-spatial features, leading to improved HSI classification.
arXiv Detail & Related papers (2023-10-29T15:26:37Z) - Hodge-Aware Contrastive Learning [101.56637264703058]
Simplicial complexes prove effective in modeling data with multiway dependencies.
We develop a contrastive self-supervised learning approach for processing simplicial data.
arXiv Detail & Related papers (2023-09-14T00:40:07Z) - Matched Machine Learning: A Generalized Framework for Treatment Effect Inference With Learned Metrics [87.05961347040237]
We introduce Matched Machine Learning, a framework that combines the flexibility of machine learning black boxes with the interpretability of matching.
Our framework uses machine learning to learn an optimal metric for matching units and estimating outcomes.
We show empirically that instances of Matched Machine Learning perform on par with black-box machine learning methods and better than existing matching methods for similar problems.
arXiv Detail & Related papers (2023-04-03T19:32:30Z) - Spectrum-BERT: Pre-training of Deep Bidirectional Transformers for Spectral Classification of Chinese Liquors [0.0]
We propose a pre-training method of deep bidirectional transformers for spectral classification of Chinese liquors, abbreviated as Spectrum-BERT.
We carefully design two pre-training tasks, Next Curve Prediction (NCP) and Masked Curve Model (MCM), so that the model can effectively utilize unlabeled samples.
In the comparative experiments, the proposed Spectrum-BERT significantly outperforms the baselines in multiple metrics.
arXiv Detail & Related papers (2022-10-22T13:11:25Z) - Resolving label uncertainty with implicit posterior models [71.62113762278963]
We propose a method for jointly inferring labels across a collection of data samples.
By implicitly assuming the existence of a generative model for which a differentiable predictor is the posterior, we derive a training objective that allows learning under weak beliefs.
arXiv Detail & Related papers (2022-02-28T18:09:44Z) - Generalizing Face Forgery Detection with High-frequency Features [63.33397573649408]
Current CNN-based detectors tend to overfit to method-specific color textures and thus fail to generalize.
We propose to utilize the high-frequency noises for face forgery detection.
The first is the multi-scale high-frequency feature extraction module that extracts high-frequency noises at multiple scales.
The second is the residual-guided spatial attention module that guides the low-level RGB feature extractor to concentrate more on forgery traces from a new perspective.
arXiv Detail & Related papers (2021-03-23T08:19:21Z) - Spectral Analysis Network for Deep Representation Learning and Image Clustering [53.415803942270685]
This paper proposes a new network structure for unsupervised deep representation learning based on spectral analysis.
It can identify local similarities among images at the patch level and is thus more robust against occlusion.
It can learn more clustering-friendly representations and is capable of revealing the deep correlations among data samples.
arXiv Detail & Related papers (2020-09-11T05:07:15Z) - CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [77.28192419848901]
We propose a simple yet effective method named contrasting shifted instances (CSI).
In addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself.
Our experiments demonstrate the superiority of our method under various novelty detection scenarios.
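The core idea of CSI, treating a distributionally shifted version of a sample as a negative rather than a positive, can be sketched with a toy contrastive loss. This is an illustrative simplification (one positive and one shifted negative, unit-norm embeddings assumed), not the paper's training code.

```python
import numpy as np

def csi_loss(anchor, positive, shifted, temperature=0.1):
    # Toy CSI-style contrastive loss: the mild augmentation is the
    # positive, while the distributionally shifted view of the same
    # sample is pushed away as a negative. All inputs are assumed
    # to be unit-norm embedding vectors.
    sims = np.array([anchor @ positive, anchor @ shifted]) / temperature
    # Softmax cross-entropy with index 0 (the positive) as the target.
    return float(-sims[0] + np.log(np.sum(np.exp(sims))))
```

The loss is small when the anchor stays close to its mild augmentation and far from its shifted view, which is the signal later used to flag novel inputs.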
arXiv Detail & Related papers (2020-07-16T08:32:56Z) - Robust Classification of High-Dimensional Spectroscopy Data Using Deep Learning and Data Synthesis [0.5801044612920815]
A novel application of a locally-connected neural network (NN) for the binary classification of spectroscopy data is proposed.
A two-step classification process is presented as an alternative to the binary and one-class classification paradigms.
arXiv Detail & Related papers (2020-03-26T11:33:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.