Advancing underwater acoustic target recognition via adaptive data
pruning and smoothness-inducing regularization
- URL: http://arxiv.org/abs/2304.11907v1
- Date: Mon, 24 Apr 2023 08:30:41 GMT
- Title: Advancing underwater acoustic target recognition via adaptive data
pruning and smoothness-inducing regularization
- Authors: Yuan Xie, Tianyu Chen and Ji Xu
- Abstract summary: We propose a strategy based on cross-entropy to prune excessively similar segments in training data.
We generate noisy samples and apply smoothness-inducing regularization based on KL divergence to mitigate overfitting.
- Score: 27.039672355700198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater acoustic recognition for ship-radiated signals has high
practical value because it can identify non-line-of-sight targets.
However, due to the difficulty of data acquisition, the collected signals are
scarce in quantity and mainly composed of mechanical periodic noise. In our
experiments, we observe that the repeatability of periodic signals leads to a
double-descent phenomenon, indicating a significant local bias toward
repeated samples. To address this issue, we propose a strategy based on
cross-entropy to prune excessively similar segments in training data.
Furthermore, to compensate for the reduction of training data, we generate
noisy samples and apply smoothness-inducing regularization based on KL
divergence to mitigate overfitting. Experiments show that our proposed data
pruning and regularization strategy can bring stable benefits and our framework
significantly outperforms the state-of-the-art in low-resource scenarios.
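The two strategies in the abstract can be sketched compactly. The paper does not spell out its exact similarity criterion or noise model here, so the segment representation (normalized distributions), the pairwise cross-entropy pruning rule, and the symmetric KL smoothness term below are illustrative assumptions rather than the authors' implementation:

```python
import math

def cross_entropy(p, q, eps=1e-12):
    """Cross-entropy H(p, q) between two discrete distributions."""
    return -sum(pi * math.log(max(qi, eps)) for pi, qi in zip(p, q))

def prune_similar_segments(segments, threshold):
    """Greedily keep a segment only if it is sufficiently dissimilar
    (cross-entropy above `threshold`) from every segment kept so far.
    Near-duplicate periodic segments are pruned."""
    kept = []
    for seg in segments:
        if all(cross_entropy(seg, k) > threshold for k in kept):
            kept.append(seg)
    return kept

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions."""
    return sum(pi * math.log(max(pi, eps) / max(qi, eps)) for pi, qi in zip(p, q))

def smoothness_loss(pred_clean, pred_noisy):
    """Smoothness-inducing term: symmetric KL between the model's
    predictions on a clean sample and on its noise-perturbed copy."""
    return kl_divergence(pred_clean, pred_noisy) + kl_divergence(pred_noisy, pred_clean)
```

In training, `smoothness_loss` would be added (with a weight) to the classification loss, encouraging predictions to stay stable under the generated noisy samples.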
Related papers
- Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection [82.43311784594384]
Real-world datasets contain not only noisy labels but also class imbalance.
We propose a simple yet effective method to address noisy labels in imbalanced datasets.
arXiv Detail & Related papers (2024-02-17T10:34:53Z)
- Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [58.20016784231991]
Diffusion models operate over a sequence of timesteps rather than the instantaneous input-output relationships of previous settings.
We present Diffusion-TracIn, which incorporates these temporal dynamics, and observe that samples' loss gradient norms are highly dependent on timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z)
- Debias the Training of Diffusion Models [53.49637348771626]
We provide theoretical evidence that the prevailing practice of using a constant loss weight strategy in diffusion models leads to biased estimation during the training phase.
We propose an elegant and effective weighting strategy grounded in the theoretically unbiased principle.
These analyses are expected to advance our understanding and demystify the inner workings of diffusion models.
arXiv Detail & Related papers (2023-10-12T16:04:41Z)
- Consistent Signal Reconstruction from Streaming Multivariate Time Series [5.448070998907116]
We formalize for the first time the concept of consistent signal reconstruction from streaming time-series data.
Our method achieves a favorable error-rate decay with the sampling rate compared to a similar but non-consistent reconstruction.
arXiv Detail & Related papers (2023-08-23T22:50:52Z)
- Underwater Acoustic Target Recognition based on Smoothness-inducing Regularization and Spectrogram-based Data Augmentation [21.327653766608805]
Insufficient data can hinder the ability of recognition systems to support complex modeling.
We propose two strategies to enhance the generalization ability of models in the case of limited data.
arXiv Detail & Related papers (2023-06-12T08:26:47Z)
- Novel features for the detection of bearing faults in railway vehicles [88.89591720652352]
We introduce Mel-Frequency Cepstral Coefficients (MFCCs) and features extracted from the Amplitude Modulation Spectrogram (AMS) as features for the detection of bearing faults.
arXiv Detail & Related papers (2023-04-14T10:09:50Z)
- Per-Example Gradient Regularization Improves Learning Signals from Noisy Data [25.646054298195434]
Empirical evidence suggests that the gradient regularization technique can significantly enhance the robustness of deep learning models against noisy perturbations.
We present a theoretical analysis that demonstrates its effectiveness in improving both test error and robustness against noise perturbations.
Our analysis reveals that PEGR penalizes the variance of pattern learning, thus effectively suppressing the memorization of noises from the training data.
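As a minimal, hypothetical instantiation of the PEGR idea, logistic regression admits a closed-form per-example gradient, so the penalty can be written directly; the paper's deep-network setting would instead compute per-example gradients via autodiff:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def pegr_loss(w, xs, ys, lam):
    """Logistic loss plus a per-example gradient-norm penalty (PEGR sketch).

    For logistic regression the per-example gradient w.r.t. w is
    (sigmoid(w.x) - y) * x, so its squared norm is (p - y)^2 * ||x||^2."""
    n = len(xs)
    data_loss = 0.0
    penalty = 0.0
    for x, y in zip(xs, ys):
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        data_loss += -(y * math.log(p) + (1 - y) * math.log(1 - p))
        penalty += (p - y) ** 2 * sum(xi * xi for xi in x)
    return data_loss / n + lam * penalty / n
```

Setting `lam=0` recovers the plain logistic loss; increasing `lam` penalizes examples whose individual gradients are large, which is how memorization of noisy samples gets suppressed.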
arXiv Detail & Related papers (2023-03-31T10:08:23Z)
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Weak-signal extraction enabled by deep-neural-network denoising of diffraction data [26.36525764239897]
We show how data can be denoised via a deep convolutional neural network.
We demonstrate that weak signals stemming from charge ordering, insignificant in the noisy data, become visible and accurate in the denoised data.
arXiv Detail & Related papers (2022-09-19T14:43:01Z)
- Consistency Regularization Can Improve Robustness to Label Noise [4.340338299803562]
This paper empirically studies the relevance of consistency regularization for training-time robustness to noisy labels.
We show that a simple loss function that encourages consistency improves the robustness of the models to label noise.
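One common instantiation of such a consistency loss (an assumption here, not necessarily the paper's exact form) penalizes the squared difference between predictions on two augmented views of the same input:

```python
def consistency_loss(p_view1, p_view2):
    """Squared-error consistency between class-probability predictions
    on two augmented views of the same input."""
    return sum((a - b) ** 2 for a, b in zip(p_view1, p_view2))
```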
arXiv Detail & Related papers (2021-10-04T08:15:08Z)
- Salvage Reusable Samples from Noisy Data for Robust Learning [70.48919625304]
We propose a reusable sample selection and correction approach, termed as CRSSC, for coping with label noise in training deep FG models with web images.
Our key idea is to additionally identify and correct reusable samples, and then leverage them together with clean examples to update the networks.
arXiv Detail & Related papers (2020-08-06T02:07:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.