Tool Wear Prediction in CNC Turning Operations using Ultrasonic Microphone Arrays and CNNs
- URL: http://arxiv.org/abs/2406.08957v1
- Date: Thu, 13 Jun 2024 09:36:13 GMT
- Title: Tool Wear Prediction in CNC Turning Operations using Ultrasonic Microphone Arrays and CNNs
- Authors: Jan Steckel, Arne Aerts, Erik Verreycken, Dennis Laurijssen, Walter Daems
- Abstract summary: This paper introduces a novel method for predicting tool wear in CNC turning operations, combining ultrasonic microphone arrays and convolutional neural networks (CNNs).
Our results demonstrate the potential gained by integrating advanced ultrasonic sensors with deep learning for accurate predictive maintenance.
- Score: 4.0884398391117704
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper introduces a novel method for predicting tool wear in CNC turning operations, combining ultrasonic microphone arrays and convolutional neural networks (CNNs). High-frequency acoustic emissions between 0 kHz and 60 kHz are enhanced using beamforming techniques to improve the signal-to-noise ratio. The processed acoustic data is then analyzed by a CNN, which predicts the Remaining Useful Life (RUL) of cutting tools. Trained on data from 350 workpieces machined with a single carbide insert, the model can accurately predict the RUL of the carbide insert. Our results demonstrate the potential gained by integrating advanced ultrasonic sensors with deep learning for accurate predictive maintenance tasks in CNC machining.
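As a rough illustration of the kind of pipeline the abstract describes, the sketch below combines frequency-domain delay-and-sum beamforming with a small CNN that regresses remaining useful life from a spectrogram. The array geometry, sampling rate, network layout, and all variable names are assumptions for demonstration, not the authors' implementation.

```python
# Illustrative sketch only: beamform a microphone-array recording, compute a
# log-spectrogram, and regress RUL with a small CNN. All parameters are assumed.
import numpy as np
import torch
import torch.nn as nn

FS = 192_000                  # sampling rate in Hz (assumption)
C = 343.0                     # speed of sound in air [m/s]
N_MICS, SPACING = 8, 0.004    # uniform linear array geometry (assumption)

def delay_and_sum(signals: np.ndarray, angle_rad: float) -> np.ndarray:
    """Frequency-domain delay-and-sum beamformer for a uniform linear array.
    signals: (n_mics, n_samples) time-domain microphone signals."""
    n_mics, n = signals.shape
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    spectra = np.fft.rfft(signals, axis=1)
    # Per-microphone delays for a plane wave arriving from `angle_rad`.
    delays = np.arange(n_mics) * SPACING * np.sin(angle_rad) / C
    phase = np.exp(-2j * np.pi * freqs[None, :] * delays[:, None])
    return np.fft.irfft((spectra * phase).mean(axis=0), n=n)

def log_spectrogram(x: np.ndarray, n_fft=1024, hop=256) -> torch.Tensor:
    """STFT magnitude in dB, shaped (1, freq_bins, frames) for the CNN."""
    win = np.hanning(n_fft)
    frames = [np.fft.rfft(win * x[i:i + n_fft])
              for i in range(0, len(x) - n_fft, hop)]
    mag = np.abs(np.stack(frames, axis=1)) + 1e-10
    return torch.tensor(20 * np.log10(mag), dtype=torch.float32)[None]

class RulCNN(nn.Module):
    """Small CNN that regresses remaining useful life from a spectrogram."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)  # predicted RUL (e.g. remaining workpieces)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Toy end-to-end pass on random data standing in for real recordings.
mics = np.random.randn(N_MICS, FS // 10)
enhanced = delay_and_sum(mics, angle_rad=np.deg2rad(30.0))
rul = RulCNN()(log_spectrogram(enhanced)[None])   # shape (1, 1)
```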
Related papers
- On-Chip Learning with Memristor-Based Neural Networks: Assessing Accuracy and Efficiency Under Device Variations, Conductance Errors, and Input Noise [0.0]
This paper presents a memristor-based compute-in-memory hardware accelerator for on-chip training and inference.
The hardware, consisting of 30 memristors and 4 neurons, utilizes three different M-SDC structures with tungsten, chromium, and carbon media to perform binary image classification tasks.
arXiv Detail & Related papers (2024-08-26T23:10:01Z)
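The following toy simulation illustrates the compute-in-memory idea behind the memristor entry above: weights are stored as differential conductance pairs, and a crossbar read is perturbed by conductance errors and input noise. The conductance range and array dimensions are illustrative assumptions, not the paper's M-SDC devices.

```python
# Toy compute-in-memory simulation (not the paper's M-SDC hardware): a noisy
# crossbar performs the matrix-vector product via Ohm's law and current summation.
import numpy as np

rng = np.random.default_rng(0)
G_MIN, G_MAX = 1e-6, 1e-4                        # conductance range in siemens (illustrative)
weights = rng.uniform(-1, 1, size=(4, 30))       # 4 neurons, 30 inputs (illustrative)

# Map signed weights onto a differential pair of conductances (G+ minus G-).
g_pos = G_MIN + (G_MAX - G_MIN) * np.clip(weights, 0, None)
g_neg = G_MIN + (G_MAX - G_MIN) * np.clip(-weights, 0, None)

def crossbar_forward(x, conductance_error=0.05, input_noise=0.02):
    """One noisy crossbar read with device and input non-idealities."""
    x_noisy = x + rng.normal(0, input_noise, size=x.shape)          # noisy input voltages
    gp = g_pos * (1 + rng.normal(0, conductance_error, g_pos.shape))
    gn = g_neg * (1 + rng.normal(0, conductance_error, g_neg.shape))
    currents = (gp - gn) @ x_noisy                                  # summed column currents
    return (currents > 0).astype(int)                               # binary neuron outputs

x = rng.uniform(0, 1, size=30)                   # input pattern (e.g. binarized pixels)
print(crossbar_forward(x))
```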
- Effects of Dataset Sampling Rate for Noise Cancellation through Deep Learning [1.024113475677323]
This research explores the use of deep neural networks (DNNs) as a superior alternative to traditional noise cancellation techniques.
The ConvTasNET network was trained on datasets such as WHAM!, LibriMix, and the MS-2023 DNS Challenge.
Models trained at higher sampling rates (48 kHz) provided much better evaluation metrics against Total Harmonic Distortion (THD) and Quality Prediction For Generative Neural Speech Codecs (WARP-Q) values.
arXiv Detail & Related papers (2024-05-30T16:20:44Z)
- Self-Supervised Pretraining Improves Performance and Inference Efficiency in Multiple Lung Ultrasound Interpretation Tasks [65.23740556896654]
We investigated whether self-supervised pretraining could produce a neural network feature extractor applicable to multiple classification tasks in lung ultrasound analysis.
When fine-tuning on three lung ultrasound tasks, pretrained models resulted in an improvement of the average across-task area under the receiver operating characteristic curve (AUC) by 0.032 and 0.061 on local and external test sets respectively.
arXiv Detail & Related papers (2023-09-05T21:36:42Z)
- Digital noise spectroscopy with a quantum sensor [57.53000001488777]
We introduce and experimentally demonstrate a quantum sensing protocol to sample and reconstruct the auto-correlation of a noise process.
The Walsh noise spectroscopy method exploits simple sequences of spin-flip pulses to generate a complete basis of digital filters.
We experimentally reconstruct the auto-correlation function of the effective magnetic field produced by the nuclear-spin bath on the electronic spin of a single nitrogen-vacancy center in diamond.
arXiv Detail & Related papers (2022-12-19T02:19:35Z)
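A classical toy illustration of the digital-filter idea in the entry above: an autocorrelation sampled at 2^k points is expanded in a complete Walsh (Hadamard) basis and recovered exactly from those coefficients. This is only a numerical stand-in, not the NV-center pulse protocol.

```python
# Toy Walsh-basis reconstruction: expand a sampled autocorrelation in Hadamard
# (Walsh) sequences and rebuild it from the coefficients. Classical stand-in only.
import numpy as np
from scipy.linalg import hadamard

N = 64                                                        # number of digital filters (2^k)
t = np.arange(N)
acf_true = np.exp(-t / 10.0) * np.cos(2 * np.pi * t / 16)     # toy autocorrelation

H = hadamard(N)                 # rows are +-1 Walsh/Hadamard sequences (digital filters)
coeffs = H @ acf_true / N       # "measurements" under each filter
acf_rec = H.T @ coeffs          # reconstruction from the complete orthogonal basis

assert np.allclose(acf_rec, acf_true)
```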
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder based on hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
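A minimal sketch of the neural attenuation field idea from the NAF entry above: an MLP maps 3D coordinates to non-negative attenuation values. A fixed Fourier-feature encoding is used here in place of the paper's learned hash encoder, and all layer sizes are assumptions.

```python
# Minimal neural attenuation field sketch: MLP over encoded 3D coordinates.
# A Fourier-feature encoding stands in for NAF's learned hash encoder.
import torch
import torch.nn as nn

class FourierEncoding(nn.Module):
    def __init__(self, n_freqs=8):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * torch.pi)

    def forward(self, xyz):                        # xyz: (batch, 3) in [-1, 1]
        proj = xyz[..., None] * self.freqs         # (batch, 3, n_freqs)
        enc = torch.cat([proj.sin(), proj.cos()], dim=-1)
        return enc.flatten(start_dim=1)            # (batch, 3 * 2 * n_freqs)

class AttenuationField(nn.Module):
    def __init__(self, n_freqs=8, hidden=64):
        super().__init__()
        self.enc = FourierEncoding(n_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())   # attenuation is non-negative

    def forward(self, xyz):
        return self.mlp(self.enc(xyz))

# Training would integrate predicted attenuation along X-ray paths and compare
# against measured projections; here we only evaluate the field at random points.
field = AttenuationField()
mu = field(torch.rand(1024, 3) * 2 - 1)            # (1024, 1) attenuation values
```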
- Neural Clamping: Joint Input Perturbation and Temperature Scaling for Neural Network Calibration [62.4971588282174]
We propose a new post-processing calibration method called Neural Clamping.
Our empirical results show that Neural Clamping significantly outperforms state-of-the-art post-processing calibration methods.
arXiv Detail & Related papers (2022-09-23T14:18:39Z)
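A minimal sketch in the spirit of Neural Clamping (entry above): a learnable additive input perturbation and a temperature are fit on a held-out calibration set while the classifier stays frozen. The optimizer, step count, and toy model are assumptions, not the paper's exact procedure.

```python
# Post-hoc calibration sketch: jointly learn an input perturbation `delta` and a
# temperature `T` on calibration data, with the trained classifier frozen.
import torch
import torch.nn as nn

def calibrate(model: nn.Module, calib_loader, input_shape, steps=100, lr=1e-2):
    model.eval()
    delta = torch.zeros(1, *input_shape, requires_grad=True)    # learnable input perturbation
    log_t = torch.zeros(1, requires_grad=True)                   # log-temperature
    opt = torch.optim.Adam([delta, log_t], lr=lr)
    nll = nn.CrossEntropyLoss()
    for _ in range(steps):
        for x, y in calib_loader:
            logits = model(x + delta) / log_t.exp()              # perturb input, scale logits
            loss = nll(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return delta.detach(), log_t.exp().item()

# Toy demo; at test time, calibrated confidences are softmax(model(x + delta) / T).
toy_model = nn.Sequential(nn.Flatten(), nn.Linear(8, 3))
toy_data = [(torch.randn(4, 2, 4), torch.randint(0, 3, (4,))) for _ in range(2)]
delta, temperature = calibrate(toy_model, toy_data, input_shape=(2, 4), steps=5)
```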
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples and generalizes well to a wider audience.
We rely on more elementary methods, such as random bounds on a signal, and aim to show the power these methods can carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
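An illustrative sketch of the augmentation strategy in the EMG entry above: training features are replicated with additive Gaussian noise before fitting a decision-forest classifier. The feature dimension, noise scale, and six-gesture labels are assumptions.

```python
# Augment a small EMG feature dataset with Gaussian noise, then fit a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))                  # 120 samples, 16 EMG features (toy)
y = rng.integers(0, 6, size=120)                # six hand-gesture classes

def augment_with_noise(X, y, copies=5, sigma=0.1):
    """Replicate each sample `copies` times with additive Gaussian noise."""
    X_aug = np.concatenate([X] + [X + rng.normal(0, sigma, X.shape) for _ in range(copies)])
    y_aug = np.concatenate([y] * (copies + 1))
    return X_aug, y_aug

X_aug, y_aug = augment_with_noise(X, y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, y_aug)
print(clf.score(X, y))                          # resubstitution score on the clean data
```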
- A Novel Approach For Analysis of Distributed Acoustic Sensing System Based on Deep Transfer Learning [0.0]
Convolutional neural networks are highly capable tools for extracting spatial information.
Long short-term memory (LSTM) is an effective instrument for processing sequential data.
The VGG-16 architecture in our framework achieves 100% classification accuracy across 50 training runs.
arXiv Detail & Related papers (2022-06-24T19:56:01Z)
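A transfer-learning sketch related to the DAS entry above: a pretrained VGG-16 backbone is frozen and its final layer replaced for a small number of event classes. The class count and the way a DAS waterfall segment is fed to the network are assumptions.

```python
# Transfer learning with a frozen VGG-16 backbone and a new classification head.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 3                                    # e.g. event classes along the fiber (assumed)
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
for p in vgg.features.parameters():              # freeze the pretrained convolutional layers
    p.requires_grad = False
vgg.classifier[6] = nn.Linear(4096, n_classes)   # replace the final fully connected layer

# A DAS waterfall segment (channels x time), replicated to 3 channels to fit VGG input.
x = torch.randn(1, 1, 224, 224).repeat(1, 3, 1, 1)
logits = vgg(x)                                  # (1, n_classes)
```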
- Improving Generalization of Deep Neural Network Acoustic Models with Length Perturbation and N-best Based Label Smoothing [49.82147684491619]
We introduce two techniques to improve generalization of deep neural network (DNN) acoustic models for automatic speech recognition (ASR).
Length perturbation is a data augmentation algorithm that randomly drops and inserts frames of an utterance to alter the length of the speech feature sequence.
N-best based label smoothing randomly injects noise to ground truth labels during training in order to avoid overfitting, where the noisy labels are generated from n-best hypotheses.
arXiv Detail & Related papers (2022-03-29T01:40:22Z)
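A sketch of the length perturbation idea from the entry above: frames of a speech feature sequence are randomly dropped and inserted to alter its length. The rates and the repeat-previous-frame insertion policy are assumptions, not the paper's exact recipe.

```python
# Length perturbation sketch: randomly drop and insert frames of a feature sequence.
import numpy as np

def length_perturb(feats: np.ndarray, drop_rate=0.1, insert_rate=0.1, rng=None):
    """feats: (n_frames, feat_dim) feature sequence, e.g. log-mel filterbanks."""
    rng = rng or np.random.default_rng()
    out = []
    for frame in feats:
        if rng.random() < drop_rate:          # randomly drop this frame
            continue
        out.append(frame)
        if rng.random() < insert_rate:        # randomly insert a repeated frame
            out.append(frame.copy())
    return np.asarray(out) if out else feats[:1]

utt = np.random.randn(300, 80)                # toy 3 s utterance of 80-dim features
print(length_perturb(utt).shape)              # length now varies around 300 frames
```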
- Machining Cycle Time Prediction: Data-driven Modelling of Machine Tool Feedrate Behavior with Neural Networks [0.34998703934432673]
This paper presents a data-driven feedrate and machining cycle time prediction method by building a neural network model for each machine tool axis.
Validation trials using a representative industrial thin wall structure component on a commercial machining centre showed that this method estimated the machining time with more than 90% accuracy.
arXiv Detail & Related papers (2021-06-18T08:29:00Z)
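A sketch of the per-axis modelling idea from the machining cycle time entry above: one small network per machine-tool axis predicts the achieved feedrate for each toolpath segment, and cycle time accumulates as segment length divided by the limiting feedrate. The features, network sizes, and the minimum-over-axes assumption are illustrative, not the paper's model.

```python
# Per-axis feedrate prediction and cycle time accumulation (illustrative sketch).
import torch
import torch.nn as nn

class AxisFeedrateNet(nn.Module):
    """Maps segment features (commanded feedrate, segment length, direction change, ...)
    to the feedrate the axis actually achieves."""
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus())   # predicted feedrate stays positive

    def forward(self, x):
        return self.net(x)

axes = {name: AxisFeedrateNet() for name in ("x", "y", "z")}

def predict_cycle_time(segments: torch.Tensor, lengths: torch.Tensor) -> torch.Tensor:
    """segments: (n_segments, n_features); lengths: (n_segments,) in mm.
    Assumes the slowest axis limits each segment (minimum feedrate across axes)."""
    feeds = torch.stack([net(segments).squeeze(-1) for net in axes.values()])  # (3, n)
    limiting = feeds.min(dim=0).values + 1e-6                                  # mm/min
    return (lengths / limiting).sum() * 60.0                                   # seconds

segs = torch.rand(50, 4)
t = predict_cycle_time(segs, lengths=torch.rand(50) * 5.0)
```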
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.