Self-Supervised Learning from Noisy and Incomplete Data
- URL: http://arxiv.org/abs/2601.03244v1
- Date: Tue, 06 Jan 2026 18:40:50 GMT
- Title: Self-Supervised Learning from Noisy and Incomplete Data
- Authors: Julián Tachella, Mike Davies
- Abstract summary: Problems in science and engineering involve inferring a signal from noisy and/or incomplete observations. Recent data-driven methods often offer better solutions by directly learning a solver from examples of ground-truth signals and associated observations. Self-supervised learning methods offer a promising alternative by learning a solver from measurement data alone.
- Score: 11.852526434070839
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many important problems in science and engineering involve inferring a signal from noisy and/or incomplete observations, where the observation process is known. Historically, this problem has been tackled using hand-crafted regularization (e.g., sparsity, total-variation) to obtain meaningful estimates. Recent data-driven methods often offer better solutions by directly learning a solver from examples of ground-truth signals and associated observations. However, in many real-world applications, obtaining ground-truth references for training is expensive or impossible. Self-supervised learning methods offer a promising alternative by learning a solver from measurement data alone, bypassing the need for ground-truth references. This manuscript provides a comprehensive summary of different self-supervised methods for inverse problems, with a special emphasis on their theoretical underpinnings, and presents practical applications in imaging inverse problems.
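The idea of learning a solver from measurement data alone can be made concrete with a small sketch. The loss below combines measurement consistency with an equivariance term in the spirit of equivariant imaging; the toy inpainting operator, the shift transform, and the naive solver are illustrative assumptions, not the manuscript's actual formulation.

```python
import numpy as np

# Toy inpainting problem: the forward operator A masks out every other pixel.
mask = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=float)

def A(x):
    """Known measurement operator: keep only the observed entries."""
    return mask * x

def shift(x, k=1):
    """Transformation (cyclic shift) assumed to leave the signal
    distribution invariant."""
    return np.roll(x, k)

def self_supervised_loss(f, y):
    """Loss computed from the measurements y alone (no ground truth).

    f : candidate reconstruction map.
    y : observed measurements, y = A(x) for some unknown signal x.
    """
    x_hat = f(y)
    # 1) Measurement consistency: the reconstruction must reproduce y.
    mc = np.sum((A(x_hat) - y) ** 2)
    # 2) Equivariance: reconstructing a shifted signal from its own
    #    measurements should return that shifted signal; this probes
    #    the part of the signal that A never observes (its nullspace).
    x_shift = shift(x_hat)
    eq = np.sum((f(A(x_shift)) - x_shift) ** 2)
    return mc + eq

# A naive "solver" that copies the measurements back is perfectly
# measurement-consistent but badly violates equivariance.
f_naive = lambda y: y
y = A(np.arange(1.0, 9.0))
loss = self_supervised_loss(f_naive, y)
```

A solver trained to drive this loss to zero must fill in the masked entries consistently with the assumed invariance, which is what allows training without ground-truth references.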
Related papers
- Learning to reconstruct from saturated data: audio declipping and high-dynamic range imaging [15.223658462501893]
This work extends self-supervised learning to the non-linear problem of recovering audio and images from clipped measurements. We provide sufficient conditions for learning to reconstruct from saturated signals alone, together with a self-supervised loss. Experiments on both audio and image data show that the proposed approach is almost as effective as fully supervised approaches.
arXiv Detail & Related papers (2026-02-25T10:37:14Z)
- What Really Matters for Learning-based LiDAR-Camera Calibration [50.2608502974106]
This paper revisits the development of learning-based LiDAR-Camera calibration. We identify critical limitations of regression-based methods and of the widely used data generation pipeline. We also investigate how the input data format and preprocessing operations impact network performance.
arXiv Detail & Related papers (2025-01-28T14:12:32Z)
- RECOVAR: Representation Covariances on Deep Latent Spaces for Seismic Event Detection [0.0]
We develop an unsupervised method that learns to detect earthquakes from raw waveforms.
The performance is comparable to, and in some cases better than, that of state-of-the-art supervised methods.
The approach has the potential to be useful for time series datasets from other domains.
arXiv Detail & Related papers (2024-07-25T21:33:54Z)
- Towards Effective Evaluations and Comparisons for LLM Unlearning Methods [97.2995389188179]
This paper seeks to refine the evaluation of machine unlearning for large language models. It addresses two key challenges -- the robustness of evaluation metrics and the trade-offs between competing goals.
arXiv Detail & Related papers (2024-06-13T14:41:00Z)
- Learned reconstruction methods for inverse problems: sample error estimates [0.8702432681310401]
This dissertation addresses the generalization properties of learned reconstruction methods, specifically by performing their sample error analysis.
A rather general strategy is proposed, whose assumptions are met for a large class of inverse problems and learned methods.
arXiv Detail & Related papers (2023-12-21T17:56:19Z)
- Scale-Equivariant Imaging: Self-Supervised Learning for Image Super-Resolution and Deblurring [9.587978273085296]
Self-supervised methods have recently proved to be nearly as effective as supervised ones in various imaging inverse problems. We propose scale-equivariant imaging, a new self-supervised approach that leverages the fact that many image distributions are approximately scale-invariant. We demonstrate through a series of experiments on real datasets that the proposed method outperforms other self-supervised approaches.
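The scale-invariance idea can be sketched in a few lines. The decimation operator, the rescaling transform, and the naive nearest-neighbour upsampler below are illustrative assumptions, not the paper's model: the point is only that a rescaled reconstruction is treated as another plausible image that the solver must also recover.

```python
import numpy as np

def A(x):
    """Forward operator for 2x super-resolution: decimation by 2."""
    return x[::2]

def rescale(x):
    """Scale transform: view the image at half resolution. Approximate
    scale invariance assumes the result still looks like a natural image."""
    return x[::2]

def scale_ei_loss(f, y):
    """Self-supervised loss in the spirit of scale-equivariant imaging."""
    x_hat = f(y)
    # Measurement consistency: downsampling the reconstruction gives y.
    mc = np.sum((A(x_hat) - y) ** 2)
    # Scale equivariance: the rescaled reconstruction is a new "clean"
    # image, so f must also recover it from its own measurements.
    x_small = rescale(x_hat)
    eq = np.sum((f(A(x_small)) - x_small) ** 2)
    return mc + eq

# Naive solver: nearest-neighbour upsampling of the measurements.
f_nn = lambda y: np.repeat(y, 2)
y = np.array([1.0, 2.0, 3.0, 4.0])
loss = scale_ei_loss(f_nn, y)
```

Nearest-neighbour upsampling is perfectly measurement-consistent yet incurs a nonzero equivariance penalty, illustrating how the scale term supplies the supervision that measurement consistency alone cannot.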
arXiv Detail & Related papers (2023-12-18T14:30:54Z)
- Re-Evaluating LiDAR Scene Flow for Autonomous Driving [80.37947791534985]
Popular benchmarks for self-supervised LiDAR scene flow have unrealistic rates of dynamic motion, unrealistic correspondences, and unrealistic sampling patterns.
We evaluate a suite of top methods on a suite of real-world datasets.
We show that despite the emphasis placed on learning, most performance gains are caused by pre- and post-processing steps.
arXiv Detail & Related papers (2023-04-04T22:45:50Z)
- IQ-Learn: Inverse soft-Q Learning for Imitation [95.06031307730245]
Imitation learning from a small amount of expert data can be challenging in high-dimensional environments with complex dynamics.
Behavioral cloning is widely used due to its simplicity of implementation and stable convergence.
We introduce a method for dynamics-aware IL which avoids adversarial training by learning a single Q-function.
arXiv Detail & Related papers (2021-06-23T03:43:10Z)
- Seeing Differently, Acting Similarly: Imitation Learning with Heterogeneous Observations [126.78199124026398]
In many real-world imitation learning tasks, the demonstrator and the learner have to act in different but full observation spaces.
In this work, we model the above learning problem as Heterogeneous Observations Imitation Learning (HOIL).
We propose the Importance Weighting with REjection (IWRE) algorithm based on the techniques of importance-weighting, learning with rejection, and active querying to solve the key challenge of occupancy measure matching.
arXiv Detail & Related papers (2021-06-17T05:44:04Z)
- Teaching Key Machine Learning Principles Using Anti-learning Datasets [0.0]
We advocate the teaching of alternative methods of generalising to the best possible solution.
Students can achieve a deeper understanding of the importance of validation on data excluded from the training process.
arXiv Detail & Related papers (2020-11-16T05:43:40Z)
- Overcoming the curse of dimensionality with Laplacian regularization in semi-supervised learning [80.20302993614594]
We provide a statistical analysis to overcome drawbacks of Laplacian regularization.
We unveil a large body of spectral filtering methods that exhibit desirable behaviors.
We provide realistic computational guidelines in order to make our method usable with large amounts of data.
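As a generic illustration of Laplacian regularization in semi-supervised learning (a plain graph-Laplacian penalty, not the spectral filtering estimators analysed in the paper), a few labels can be propagated over a graph by solving a small linear system:

```python
import numpy as np

# Toy graph: 5 nodes on a chain; only the endpoints are labeled.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0        # unit edge weights
L = np.diag(W.sum(axis=1)) - W              # combinatorial graph Laplacian

y = np.array([-1.0, 0.0, 0.0, 0.0, 1.0])    # labels (0 = unlabeled)
S = np.diag([1.0, 0.0, 0.0, 0.0, 1.0])      # selects the labeled nodes
lam = 0.1                                   # regularization strength

# Minimize ||S (f - y)||^2 + lam * f^T L f, whose normal equations are
# (S + lam * L) f = S y.
f = np.linalg.solve(S + lam * L, S @ y)
# f interpolates smoothly from -1 to +1 along the chain.
```

The Laplacian term penalizes label differences across edges, so unlabeled nodes inherit values from their labeled neighbours; the chain example yields a monotone interpolation between the two endpoint labels.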
arXiv Detail & Related papers (2020-09-09T14:28:54Z)
- A Review of Meta-level Learning in the Context of Multi-component, Multi-level Evolving Prediction Systems [6.810856082577402]
The exponential growth of the volume, variety and velocity of data is raising the need for automated or semi-automated ways to extract useful patterns from it.
Finding the most appropriate mapping of learning methods to a given problem requires deep expert knowledge and extensive computational resources.
There is a need for an intelligent recommendation engine that can advise which learning algorithm is best suited to a given dataset.
arXiv Detail & Related papers (2020-07-17T14:14:37Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Self-trained Deep Ordinal Regression for End-to-End Video Anomaly Detection [114.9714355807607]
We show that applying self-trained deep ordinal regression to video anomaly detection overcomes two key limitations of existing methods.
We devise an end-to-end trainable video anomaly detection approach that enables joint representation learning and anomaly scoring without manually labeled normal/abnormal data.
arXiv Detail & Related papers (2020-03-15T08:44:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.