Prevention of Overfitting on Mesh-Structured Data Regressions with a Modified Laplace Operator
- URL: http://arxiv.org/abs/2507.06631v1
- Date: Wed, 09 Jul 2025 07:57:52 GMT
- Title: Prevention of Overfitting on Mesh-Structured Data Regressions with a Modified Laplace Operator
- Authors: Enda D. V. Bigarella
- Abstract summary: This document reports on a method for detecting and preventing overfitting on data regressions, herein applied to mesh-like data structures. The mesh structure allows for the straightforward computation of the Laplace-operator second-order derivatives in a finite-difference fashion for noiseless data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This document reports on a method for detecting and preventing overfitting on data regressions, herein applied to mesh-like data structures. The mesh structure allows for the straightforward computation of the Laplace-operator second-order derivatives in a finite-difference fashion for noiseless data. Derivatives of the training data are computed on the original training mesh to serve as a true label of the entropy of the training data. Derivatives of the trained data are computed on a staggered mesh to identify oscillations in the interior of the original training mesh cells. The loss of the Laplace-operator derivatives is used for hyperparameter optimisation, achieving a reduction of unwanted oscillation through the minimisation of the entropy of the trained model. In this setup, testing does not require the splitting of points from the training data, and training is thus directly performed on all available training points. The Laplace operator applied to the trained data on a staggered mesh serves as a surrogate testing metric based on diffusion properties.
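The staggered-mesh Laplace test described in the abstract can be sketched in one dimension. This is a hedged illustration only: the mesh sizes, the toy "trained model" functions, and the name `surrogate_test_loss` are assumptions for demonstration, not the paper's implementation.

```python
import numpy as np

# Hypothetical 1-D sketch of the staggered-mesh Laplace test: compute the
# training data's Laplacian on the original mesh as an entropy label, then
# evaluate the trained model's Laplacian on a staggered (midpoint) mesh to
# expose oscillations inside the original mesh cells.

def laplacian_1d(y, h):
    """Central-difference second derivative on a uniform 1-D mesh."""
    return (y[:-2] - 2.0 * y[1:-1] + y[2:]) / h**2

h = 0.05
x = np.arange(0.0, 1.0 + h / 2, h)    # original training mesh (21 points)
y_train = np.sin(2.0 * np.pi * x)     # noiseless training data

# "True" entropy label: Laplacian of the training data on the original mesh
lap_train = laplacian_1d(y_train, h)

# Staggered mesh at the cell midpoints, where interior oscillations of an
# overfitted model become visible
x_stag = 0.5 * (x[:-1] + x[1:])

def surrogate_test_loss(model, lap_train, x_stag, h):
    """Excess mean curvature of the trained model on the staggered mesh,
    relative to the training data; usable for hyperparameter optimisation
    without splitting a test set off the training points."""
    lap_model = laplacian_1d(model(x_stag), h)
    return np.mean(np.abs(lap_model)) - np.mean(np.abs(lap_train))

smooth = lambda t: np.sin(2.0 * np.pi * t)                    # well-fitted model
wiggly = lambda t: (np.sin(2.0 * np.pi * t)
                    + 0.2 * np.sin(50.0 * np.pi * t))         # overfitted model

print(surrogate_test_loss(smooth, lap_train, x_stag, h))  # close to zero
print(surrogate_test_loss(wiggly, lap_train, x_stag, h))  # large: oscillation detected
```

The high-frequency component of `wiggly` is nearly invisible at the original mesh nodes but produces a large finite-difference Laplacian on the staggered mesh, which is exactly the diffusion-based surrogate testing signal the abstract describes.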
Related papers
- Elliptic Loss Regularization [24.24785205800212]
We propose a technique for enforcing a level of smoothness in the mapping between the data input space and the loss value. We specify the level of regularity by requiring that the loss of the network satisfies an elliptic operator over the data domain.
arXiv Detail & Related papers (2025-03-04T00:08:08Z) - DispFormer: Pretrained Transformer for Flexible Dispersion Curve Inversion from Global Synthesis to Regional Applications [59.488352977043974]
This study proposes DispFormer, a transformer-based neural network for inverting the $v_s$ profile from Rayleigh-wave phase and group dispersion curves. Results indicate that zero-shot DispFormer, even without any labeled data, produces inversion profiles that match well with the ground truth.
arXiv Detail & Related papers (2025-01-08T09:08:24Z) - Capturing the Temporal Dependence of Training Data Influence [100.91355498124527]
We formalize the concept of trajectory-specific leave-one-out influence, which quantifies the impact of removing a data point during training. We propose data value embedding, a novel technique enabling efficient approximation of trajectory-specific LOO. As data value embedding captures training data ordering, it offers valuable insights into model training dynamics.
arXiv Detail & Related papers (2024-12-12T18:28:55Z) - Derivative-based regularization for regression [3.0408645115035036]
We introduce a novel approach to regularization in multivariable regression problems.
Our regularizer, called DLoss, penalises differences between the model's derivatives and derivatives of the data generating function as estimated from the training data.
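The DLoss idea summarised above can be sketched with finite differences. This is a hedged toy example, not the paper's code: the function names, the quadratic data, and the midpoint evaluation are illustrative assumptions.

```python
import numpy as np

# Hypothetical DLoss-style regulariser: penalise the gap between the model's
# first derivative and a finite-difference estimate of the derivative of the
# data-generating function, computed from the training data itself.

def dloss(model_dy, data_x, data_y):
    """Mean squared gap between model derivatives and data-estimated ones."""
    fd = np.diff(data_y) / np.diff(data_x)        # finite-difference estimate
    mid = 0.5 * (data_x[:-1] + data_x[1:])        # FD estimates live at midpoints
    return np.mean((model_dy(mid) - fd) ** 2)

x = np.linspace(0.0, 1.0, 11)
y = x ** 2                                        # data from f(x) = x^2, f'(x) = 2x

good = lambda t: 2.0 * t                          # matches the data derivative
bad = lambda t: np.zeros_like(t)                  # ignores the data's trend

print(dloss(good, x, y))   # near zero: derivatives match
print(dloss(bad, x, y))    # large: derivative mismatch penalised
```

In practice such a term would be added to the ordinary data-fit loss with a weighting coefficient, steering training toward models whose derivatives agree with those estimated from the training data.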
arXiv Detail & Related papers (2024-05-01T14:57:59Z) - Mendata: A Framework to Purify Manipulated Training Data [12.406255198638064]
We propose Mendata, a framework to purify manipulated training data.
Mendata perturbs the training inputs so that they retain their utility but are distributed similarly to the reference data.
We demonstrate the effectiveness of Mendata by applying it to defeat state-of-the-art data poisoning and data tracing techniques.
arXiv Detail & Related papers (2023-12-03T04:40:08Z) - Reconstructing Training Data from Model Gradient, Provably [68.21082086264555]
We reconstruct the training samples from a single gradient query at a randomly chosen parameter value.
As a provable attack that reveals sensitive training data, our findings suggest potential severe threats to privacy.
arXiv Detail & Related papers (2022-12-07T15:32:22Z) - Robust Training under Label Noise by Over-parameterization [41.03008228953627]
We propose a principled approach for robust training of over-parameterized deep networks in classification tasks where a proportion of training labels are corrupted.
The main idea is yet very simple: label noise is sparse and incoherent with the network learned from clean data, so we model the noise and learn to separate it from the data.
Remarkably, when trained using such a simple method in practice, we demonstrate state-of-the-art test accuracy against label noise on a variety of real datasets.
arXiv Detail & Related papers (2022-02-28T18:50:10Z) - DIVA: Dataset Derivative of a Learning Task [108.18912044384213]
We present a method to compute the derivative of a learning task with respect to a dataset.
A learning task is a function from a training set to the validation error, which can be represented by a trained deep neural network (DNN).
The "dataset derivative" is a linear operator, computed around the trained model, that indicates how perturbing the weight of each training sample affects the validation error.
arXiv Detail & Related papers (2021-11-18T16:33:12Z) - Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train inference from inputs containing missing values without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z) - Adversarial Training for EM Classification Networks [0.0]
We present a novel variant of Domain Adversarial Networks.
New loss functions are defined for both forks of the DANN network.
It is possible to extend the concept of 'domain' to include arbitrary user-defined labels.
arXiv Detail & Related papers (2020-11-20T20:11:58Z) - Predicting Training Time Without Training [120.92623395389255]
We tackle the problem of predicting the number of optimization steps that a pre-trained deep network needs to converge to a given value of the loss function.
We leverage the fact that the training dynamics of a deep network during fine-tuning are well approximated by those of a linearized model.
We are able to predict the time it takes to fine-tune a model to a given loss without having to perform any training.
arXiv Detail & Related papers (2020-08-28T04:29:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.