Advancing underwater acoustic target recognition via adaptive data pruning and smoothness-inducing regularization
- URL: http://arxiv.org/abs/2304.11907v1
- Date: Mon, 24 Apr 2023 08:30:41 GMT
- Title: Advancing underwater acoustic target recognition via adaptive data pruning and smoothness-inducing regularization
- Authors: Yuan Xie, Tianyu Chen and Ji Xu
- Abstract summary: We propose a strategy based on cross-entropy to prune excessively similar segments in training data.
We generate noisy samples and apply smoothness-inducing regularization based on KL divergence to mitigate overfitting.
- Score: 27.039672355700198
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater acoustic recognition for ship-radiated signals has high practical
application value due to the ability to recognize non-line-of-sight targets.
However, due to the difficulty of data acquisition, the collected signals are
scarce in quantity and mainly composed of mechanical periodic noise. According
to the experiments, we observe that the repeatability of periodic signals leads
to a double-descent phenomenon, which indicates a significant local bias toward
repeated samples. To address this issue, we propose a strategy based on
cross-entropy to prune excessively similar segments in training data.
Furthermore, to compensate for the reduction of training data, we generate
noisy samples and apply smoothness-inducing regularization based on KL
divergence to mitigate overfitting. Experiments show that our proposed data
pruning and regularization strategy can bring stable benefits and our framework
significantly outperforms the state-of-the-art in low-resource scenarios.
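The abstract describes two concrete components: a cross-entropy-based criterion for pruning excessively similar training segments, and a smoothness-inducing KL-divergence penalty computed between predictions on clean and noise-perturbed inputs. The sketch below is a minimal illustration of both ideas, assuming spectrogram-like segment features, additive Gaussian input noise, and a symmetric KL penalty; the function names, threshold, noise level, and weighting are illustrative assumptions rather than the authors' exact formulation.

```python
# Minimal PyTorch sketch of (1) pruning near-duplicate segments and
# (2) smoothness-inducing KL regularization on noisy copies of the input.
# Names and hyperparameters are illustrative assumptions, not the authors'
# exact implementation.
import torch
import torch.nn.functional as F

def prune_similar_segments(features, keep_threshold=0.05):
    """Drop segments whose normalized feature distribution is nearly
    identical to an already-kept segment (cross-entropy-style criterion).

    features: (N, D) tensor of non-negative segment features
              (e.g. averaged spectrogram bins).
    Returns the indices of the segments to keep.
    """
    probs = features.clamp_min(1e-8)
    probs = probs / probs.sum(dim=1, keepdim=True)   # treat each segment as a distribution
    kept = []
    for i in range(probs.size(0)):
        duplicate = False
        for j in kept:
            # cross-entropy H(p_i, p_j) minus entropy H(p_i) equals KL(p_i || p_j);
            # a tiny value means segment i adds little new information
            divergence = (probs[i] * (probs[i].log() - probs[j].log())).sum()
            if divergence < keep_threshold:
                duplicate = True
                break
        if not duplicate:
            kept.append(i)
    return torch.tensor(kept)

def smoothness_loss(model, x, y, sigma=0.05, lambda_kl=1.0):
    """Cross-entropy on clean inputs plus a symmetric KL penalty that keeps
    predictions stable under additive Gaussian noise."""
    logits_clean = model(x)
    ce = F.cross_entropy(logits_clean, y)

    x_noisy = x + sigma * torch.randn_like(x)        # generated noisy sample
    logits_noisy = model(x_noisy)

    p = F.log_softmax(logits_clean, dim=1)
    q = F.log_softmax(logits_noisy, dim=1)
    kl = 0.5 * (F.kl_div(q, p, log_target=True, reduction="batchmean")
                + F.kl_div(p, q, log_target=True, reduction="batchmean"))
    return ce + lambda_kl * kl
```

In practice the pruning threshold and the noise level would need tuning per dataset, and the pruning pass could plausibly be restricted to segments drawn from the same recording or class.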
Related papers
- Learning with Imbalanced Noisy Data by Preventing Bias in Sample Selection [82.43311784594384]
Real-world datasets contain not only noisy labels but also class imbalance.
We propose a simple yet effective method to address noisy labels in imbalanced datasets.
arXiv Detail & Related papers (2024-02-17T10:34:53Z)
- Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps instead of instantaneous input-output relationships in previous contexts.
We present Diffusion-TracIn, which incorporates these temporal dynamics, and observe that samples' loss gradient norms are highly dependent on the timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z)
- Consistent Signal Reconstruction from Streaming Multivariate Time Series [5.448070998907116]
We formalize for the first time the concept of consistent signal reconstruction from streaming time-series data.
Our method achieves a favorable error-rate decay with the sampling rate compared to a similar but non-consistent reconstruction.
arXiv Detail & Related papers (2023-08-23T22:50:52Z)
- Seismic Data Interpolation via Denoising Diffusion Implicit Models with Coherence-corrected Resampling [7.755439545030289]
Deep learning models such as U-Net often underperform when the training and test missing patterns do not match.
We propose a novel framework built upon multi-modal diffusion models.
In the inference phase, we introduce the denoising diffusion implicit model to reduce the number of sampling steps.
To enhance the coherence and continuity between the revealed traces and the missing traces, we propose two strategies.
arXiv Detail & Related papers (2023-07-09T16:37:47Z)
- Underwater Acoustic Target Recognition based on Smoothness-inducing Regularization and Spectrogram-based Data Augmentation [21.327653766608805]
Insufficient data can hinder the ability of recognition systems to support complex modeling.
We propose two strategies to enhance the generalization ability of models in the case of limited data.
arXiv Detail & Related papers (2023-06-12T08:26:47Z)
- Novel features for the detection of bearing faults in railway vehicles [88.89591720652352]
We introduce Mel-Frequency Cepstral Coefficients (MFCCs) and features extracted from the Amplitude Modulation Spectrogram (AMS) as features for the detection of bearing faults (see the sketch after this entry).
arXiv Detail & Related papers (2023-04-14T10:09:50Z)
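The entry above mentions MFCC and Amplitude Modulation Spectrogram features. As a hedged illustration only, the snippet below shows one common way to compute MFCC summary features with librosa; the file path, `n_mfcc`, and the mean/std pooling are assumptions and not taken from that paper, and the AMS features are not reproduced here.

```python
# Hedged sketch: extracting MFCC summary features from an acoustic/vibration
# recording with librosa. File path and parameters are illustrative assumptions.
import librosa
import numpy as np

def extract_mfcc(path="recording.wav", n_mfcc=13):
    y, sr = librosa.load(path, sr=None)                       # keep the native sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)    # shape: (n_mfcc, frames)
    # Summarize each coefficient over time so a classifier receives a fixed-size vector
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```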
- Per-Example Gradient Regularization Improves Learning Signals from Noisy Data [25.646054298195434]
Empirical evidence suggests that gradient regularization can significantly enhance the robustness of deep learning models against noisy perturbations.
We present a theoretical analysis that demonstrates its effectiveness in improving both test error and robustness against noise perturbations.
Our analysis reveals that PEGR penalizes the variance of pattern learning, thus effectively suppressing the memorization of noise in the training data (see the sketch after this entry).
arXiv Detail & Related papers (2023-03-31T10:08:23Z)
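As a rough, hedged illustration of the mechanism summarized above, the sketch below adds the average squared norm of per-example loss gradients as a penalty term; the naive per-sample loop, the penalty weight `lam`, and the loss choice are illustrative assumptions, not that paper's exact procedure.

```python
# Naive sketch of per-example gradient regularization (PEGR): penalize the
# average squared norm of each example's loss gradient w.r.t. the parameters.
# The per-sample loop is explicit but slow; weight and loss are assumptions.
import torch
import torch.nn.functional as F

def pegr_loss(model, x, y, lam=0.1):
    logits = model(x)
    base = F.cross_entropy(logits, y)

    params = [p for p in model.parameters() if p.requires_grad]
    penalty = 0.0
    for i in range(x.size(0)):                       # one example at a time
        li = F.cross_entropy(model(x[i:i + 1]), y[i:i + 1])
        grads = torch.autograd.grad(li, params, create_graph=True)
        penalty = penalty + sum(g.pow(2).sum() for g in grads)
    penalty = penalty / x.size(0)

    return base + lam * penalty
```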
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose a model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the Reweighted Score function, ReScore.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Weak-signal extraction enabled by deep-neural-network denoising of diffraction data [26.36525764239897]
We show how data can be denoised via a deep convolutional neural network.
We demonstrate that weak signals stemming from charge ordering, insignificant in the noisy data, become visible and accurate in the denoised data.
arXiv Detail & Related papers (2022-09-19T14:43:01Z)
- Disentangled Representation Learning for RF Fingerprint Extraction under Unknown Channel Statistics [77.13542705329328]
We propose a framework of disentangled representation learning (DRL) that first learns to factor the input signals into a device-relevant component and a device-irrelevant component via adversarial learning.
The implicit data augmentation in the proposed framework imposes a regularization on the RFF extractor to avoid the possible overfitting of device-irrelevant channel statistics.
Experiments validate that the proposed approach, referred to as DR-RFF, outperforms conventional methods in terms of generalizability to unknown complicated propagation environments.
arXiv Detail & Related papers (2022-08-04T15:46:48Z)
- Salvage Reusable Samples from Noisy Data for Robust Learning [70.48919625304]
We propose a reusable sample selection and correction approach, termed CRSSC, for coping with label noise in training deep FG models with web images.
Our key idea is to additionally identify and correct reusable samples, and then leverage them together with clean examples to update the networks.
arXiv Detail & Related papers (2020-08-06T02:07:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.