A noise-robust acoustic method for recognizing foraging activities of grazing cattle
- URL: http://arxiv.org/abs/2304.14824v3
- Date: Wed, 10 Jul 2024 12:37:50 GMT
- Title: A noise-robust acoustic method for recognizing foraging activities of grazing cattle
- Authors: Luciano S. Martinez-Rau, José O. Chelotti, Mariano Ferrero, Julio R. Galli, Santiago A. Utsumi, Alejandra M. Planisich, H. Leonardo Rufiner, Leonardo L. Giovanini
- Abstract summary: We present the operating principle and generalization capability of an acoustic method called Noise-Robust Foraging Activity Recognizer (NRFAR).
In noiseless conditions, NRFAR reached an average balanced accuracy of 86.4%, outperforming two previous acoustic methods by more than 7.5%.
NRFAR has been shown to be effective in harsh free-ranging environments and could be used as a reliable solution to improve pasture management and monitor the health and welfare of dairy cows.
- Score: 35.21388806827219
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Farmers must continuously improve their livestock production systems to remain competitive in the growing dairy market. Precision livestock farming technologies provide individualized monitoring of animals on commercial farms, optimizing livestock production. Continuous acoustic monitoring is a widely accepted sensing technique used to estimate the daily rumination and grazing time budget of free-ranging cattle. However, typical environmental and natural noises on pastures noticeably affect the performance, limiting the practical application of current acoustic methods. In this study, we present the operating principle and generalization capability of an acoustic method called Noise-Robust Foraging Activity Recognizer (NRFAR). The proposed method determines foraging activity bouts by analyzing fixed-length segments of identified jaw movement events produced during grazing and rumination. The additive noise robustness of the NRFAR was evaluated for several signal-to-noise ratios using stationary Gaussian white noise and four different nonstationary natural noise sources. In noiseless conditions, NRFAR reached an average balanced accuracy of 86.4%, outperforming two previous acoustic methods by more than 7.5%. Furthermore, NRFAR performed better than previous acoustic methods in 77 of 80 evaluated noisy scenarios (53 cases with p<0.05). NRFAR has been shown to be effective in harsh free-ranging environments and could be used as a reliable solution to improve pasture management and monitor the health and welfare of dairy cows. The instrumentation and computational algorithms presented in this publication are protected by a pending patent application: AR P20220100910. Web demo available at: https://sinc.unl.edu.ar/web-demo/nrfar
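As a concrete illustration of the robustness protocol described in the abstract, the sketch below additively mixes a noise source into a clean recording at a target SNR. It is not the authors' code; the sampling rate, signals, and function name are placeholders.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Additively mix `noise` into `clean` after scaling it to a target SNR (dB)."""
    # Tile or truncate the noise so it covers the whole clean signal.
    if len(noise) < len(clean):
        noise = np.tile(noise, int(np.ceil(len(clean) / len(noise))))
    noise = noise[:len(clean)]

    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    # Gain chosen so that 10*log10(clean_power / (gain**2 * noise_power)) == snr_db.
    gain = np.sqrt(clean_power / (noise_power * 10.0 ** (snr_db / 10.0)))
    return clean + gain * noise

# Example: corrupt a (placeholder) 5-second recording with white noise at 0 dB SNR.
fs = 22050                                   # hypothetical sampling rate
clean = np.random.randn(5 * fs)              # stand-in for a clean grazing/rumination signal
noisy = mix_at_snr(clean, np.random.randn(5 * fs), snr_db=0.0)
```

The four nonstationary natural noise sources mentioned in the abstract would simply replace the white-noise array.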
Related papers
- Noise-Robustness Through Noise: Asymmetric LoRA Adaption with Poisoning Expert [7.501033048686552]
Current fine-tuning methods for adapting pre-trained language models to downstream tasks are susceptible to interference from noisy data. We propose a noise-robust adaptation method via asymmetric LoRA poisoning experts (LoPE). LoPE achieves strong performance and robustness purely through low-cost noise injection, which completely eliminates the requirement of data cleaning.
arXiv Detail & Related papers (2025-05-29T10:35:07Z)
- Noise Augmented Fine Tuning for Mitigating Hallucinations in Large Language Models [1.0579965347526206]
Large language models (LLMs) often produce inaccurate or misleading content, known as hallucinations.
Noise-Augmented Fine-Tuning (NoiseFiT) is a novel framework that leverages adaptive noise injection to enhance model robustness.
NoiseFiT selectively perturbs layers identified as either high-SNR (more robust) or low-SNR (potentially under-regularized) using dynamically scaled Gaussian noise.
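As a rough illustration of that idea (not the paper's implementation), the sketch below perturbs a layer's hidden states with Gaussian noise scaled by the per-token standard deviation; the `alpha` hyperparameter and the layer-selection step are assumptions.

```python
import torch

def inject_scaled_noise(hidden: torch.Tensor, alpha: float = 0.1) -> torch.Tensor:
    """Add zero-mean Gaussian noise scaled by each token's hidden-state std.

    `alpha` is a hypothetical scale; NoiseFiT's dynamic scaling rule may differ.
    """
    std = hidden.detach().std(dim=-1, keepdim=True)   # per-token scale estimate
    return hidden + alpha * std * torch.randn_like(hidden)

# In practice this would be applied only to layers flagged by an SNR criterion,
# e.g. via forward hooks on the selected transformer blocks during fine-tuning.
```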
arXiv Detail & Related papers (2025-04-04T09:27:19Z)
- Dataset Distillers Are Good Label Denoisers In the Wild [16.626153947696743]
We propose a novel approach that leverages dataset distillation for noise removal.
This method avoids the feedback loop common in existing techniques and enhances training efficiency.
We rigorously evaluate three representative dataset distillation methods (DATM, DANCE, and RCIG) under various noise conditions.
arXiv Detail & Related papers (2024-11-18T06:26:41Z)
- Towards Robust Transcription: Exploring Noise Injection Strategies for Training Data Augmentation [55.752737615873464]
This study investigates the impact of white noise at various Signal-to-Noise Ratio (SNR) levels on state-of-the-art APT models.
We hope this research provides valuable insights as preliminary work toward developing transcription models that maintain consistent performance across a range of acoustic conditions.
arXiv Detail & Related papers (2024-10-18T02:31:36Z)
- SoftPatch: Unsupervised Anomaly Detection with Noisy Data [67.38948127630644]
This paper considers label-level noise in image sensory anomaly detection for the first time.
We propose a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level.
Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset.
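A simplified sketch of that general recipe, patch-level denoising before building the memory bank, is shown below. The hard filtering and hyperparameters are placeholders standing in for the paper's actual patch-level denoising, not a reproduction of SoftPatch.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def build_denoised_memory(train_patches: np.ndarray, k: int = 5,
                          drop_frac: float = 0.05) -> np.ndarray:
    """Drop the most outlying training patches before building the memory bank.

    Patches whose distance to their k-th nearest neighbour is unusually large are
    treated as contaminated and excluded; `k` and `drop_frac` are placeholders.
    """
    nn = NearestNeighbors(n_neighbors=k + 1).fit(train_patches)
    dist, _ = nn.kneighbors(train_patches)          # column 0 is the patch itself
    outlier_score = dist[:, -1]
    keep = outlier_score <= np.quantile(outlier_score, 1.0 - drop_frac)
    return train_patches[keep]

def image_anomaly_score(test_patches: np.ndarray, memory: np.ndarray) -> float:
    """Score an image by the largest nearest-neighbour distance of its patches."""
    nn = NearestNeighbors(n_neighbors=1).fit(memory)
    dist, _ = nn.kneighbors(test_patches)
    return float(dist.max())
```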
arXiv Detail & Related papers (2024-03-21T08:49:34Z)
- Comparative Study on the Effects of Noise in ML-Based Anxiety Detection [0.0]
We study how noise impacts model performance and how to develop models that are robust to noisy, real-world conditions.
We compare the effect of various intensities of noise on machine learning models classifying levels of physiological arousal.
arXiv Detail & Related papers (2023-06-01T19:52:24Z)
- Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples seen in previous methods.
arXiv Detail & Related papers (2023-02-19T15:24:37Z)
- C2N: Practical Generative Noise Modeling for Real-World Denoising [53.96391787869974]
We introduce a Clean-to-Noisy image generation framework, namely C2N, to imitate complex real-world noise without using paired examples.
We construct the noise generator in C2N to reflect each component of real-world noise characteristics, allowing it to accurately express a wide range of noise.
arXiv Detail & Related papers (2022-02-19T05:53:46Z)
- Noise Stability Regularization for Improving BERT Fine-tuning [94.80511419444723]
Fine-tuning pre-trained language models such as BERT has become a common practice dominating leaderboards across various NLP tasks.
We introduce a novel and effective regularization method to improve fine-tuning on NLP tasks, referred to as Layer-wise Noise Stability Regularization (LNSR).
We experimentally confirm that well-performing models show a low sensitivity to noise and fine-tuning with LNSR exhibits clearly higher generalizability and stability.
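The quantity being regularized is a layer's sensitivity to injected noise. A generic sketch of such a penalty follows (not necessarily LNSR's exact formulation); `sigma`, the chosen layer, and the loss weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def noise_stability_penalty(layer: torch.nn.Module, hidden: torch.Tensor,
                            sigma: float = 0.01) -> torch.Tensor:
    """Penalize how much a layer's output changes when its input is perturbed
    by Gaussian noise; `sigma` is a hypothetical noise scale."""
    clean_out = layer(hidden)
    noisy_out = layer(hidden + sigma * torch.randn_like(hidden))
    return F.mse_loss(noisy_out, clean_out)

# total_loss = task_loss + lambda_reg * noise_stability_penalty(some_layer, hidden_states)
```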
arXiv Detail & Related papers (2021-07-10T13:19:04Z)
- Dynamic Layer Customization for Noise Robust Speech Emotion Recognition in Heterogeneous Condition Training [16.807298318504156]
We show that we can improve performance by dynamically routing samples to specialized feature encoders for each noise condition.
We extend these improvements to the multimodal setting by dynamically routing samples to maintain temporal ordering.
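A toy sketch of condition-based routing is given below; the number of noise conditions, the encoder sizes, and the hard routing decision are hypothetical and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class NoiseConditionedRouter(nn.Module):
    """Route each sample to the encoder specialized for its predicted noise condition."""

    def __init__(self, in_dim: int, feat_dim: int, n_conditions: int = 4):
        super().__init__()
        self.condition_clf = nn.Linear(in_dim, n_conditions)  # noise-condition predictor
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
            for _ in range(n_conditions)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        cond = self.condition_clf(x).argmax(dim=-1)            # hard routing decision
        feats = torch.stack([enc(x) for enc in self.encoders], dim=1)
        return feats[torch.arange(x.size(0)), cond]            # output of the routed encoder
```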
arXiv Detail & Related papers (2020-10-21T18:07:32Z)
- SERIL: Noise Adaptive Speech Enhancement using Regularization-based Incremental Learning [36.24803486242198]
Adaptation to a new environment may lead to catastrophic forgetting of the previously learned environments.
In this paper, we propose a regularization-based incremental learning SE (SERIL) strategy.
With a regularization constraint, the parameters are updated to the new noise environment while retaining the knowledge of the previous noise environments.
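One common instance of such a regularization constraint is a quadratic penalty that keeps the updated parameters close to those learned on earlier noise environments, weighted by a per-parameter importance estimate. The EWC-style sketch below (hypothetical names and weighting) illustrates that family of penalties rather than SERIL's exact loss.

```python
import torch

def retention_penalty(model: torch.nn.Module, old_params: dict, importance: dict,
                      lam: float = 1.0) -> torch.Tensor:
    """Quadratic penalty discouraging drift from parameters learned on previous
    noise environments; `old_params`/`importance` map parameter names to tensors."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if name in old_params:
            penalty = penalty + (importance[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty

# total_loss = enhancement_loss + retention_penalty(se_model, prev_params, fisher_diag)
```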
arXiv Detail & Related papers (2020-05-24T14:49:10Z)