Deep Learning Based Tool Wear Estimation Considering Cutting Conditions
- URL: http://arxiv.org/abs/2407.01199v1
- Date: Mon, 1 Jul 2024 11:48:33 GMT
- Title: Deep Learning Based Tool Wear Estimation Considering Cutting Conditions
- Authors: Zongshuo Li, Markus Meurer, Thomas Bergs
- Abstract summary: We propose a deep learning approach based on a convolutional neural network that incorporates cutting conditions as extra model inputs.
We evaluate the model's performance in terms of tool wear estimation accuracy and its transferability to new fixed or variable cutting parameters.
- Score: 0.18206461789819073
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Tool wear conditions impact the final quality of the workpiece. In this study, we propose a deep learning approach based on a convolutional neural network that incorporates cutting conditions as extra model inputs, aiming to improve tool wear estimation accuracy and fulfill industrial demands for zero-shot transferability. Through a series of milling experiments under various cutting parameters, we evaluate the model's performance in terms of tool wear estimation accuracy and its transferability to new fixed or variable cutting parameters. The results consistently highlight our approach's advantage over conventional models that omit cutting conditions, maintaining superior performance irrespective of the stability of the wear development or the limitation of the training dataset. This finding underscores its potential applicability in industrial scenarios.
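The abstract describes a convolutional network that takes cutting conditions as extra model inputs alongside the process signal. The following is a minimal illustrative sketch of that idea only, not the authors' actual architecture: a tiny 1-D convolution extracts a signal feature, which is then concatenated with a hypothetical cutting-condition vector before a linear wear estimate. All weights and values are placeholders.

```python
# Sketch (not the paper's model): 1-D conv feature + cutting conditions
# concatenated into a final linear head for a tool-wear estimate.

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def mean_pool(xs):
    return sum(xs) / len(xs)

def estimate_wear(signal, conditions, conv_kernel, head_weights, bias):
    # Convolve the process signal, apply ReLU, pool to a single feature.
    feat = mean_pool(relu(conv1d(signal, conv_kernel)))
    # Concatenate the pooled feature with the cutting conditions
    # (e.g. cutting speed, feed, depth of cut) as extra model inputs.
    inputs = [feat] + list(conditions)
    return sum(w * x for w, x in zip(head_weights, inputs)) + bias

# Hypothetical example: 8-sample signal, 3 cutting parameters.
signal = [0.1, 0.4, 0.35, 0.5, 0.45, 0.6, 0.55, 0.7]
conditions = [120.0, 0.1, 2.0]       # vc [m/min], f [mm/rev], ap [mm]
kernel = [0.25, 0.5, 0.25]           # illustrative smoothing kernel
weights = [1.0, 0.001, 0.5, 0.02]    # one weight per concatenated input
wear = estimate_wear(signal, conditions, kernel, weights, bias=0.01)
```

A trained model would learn the kernel and head weights from data; the point here is only that the condition vector enters the model after the convolutional feature extraction.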
Related papers
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Deep Learning Approach for Enhanced Transferability and Learning Capacity in Tool Wear Estimation [0.18206461789819073]
A deep learning approach is proposed for estimating tool wear while considering cutting parameters.
Results indicate that the proposed method outperforms conventional methods in terms of both transferability and rapid learning capabilities.
arXiv Detail & Related papers (2024-07-01T11:49:10Z)
- Low-rank finetuning for LLMs: A fairness perspective [54.13240282850982]
Low-rank approximation techniques have become the de facto standard for fine-tuning Large Language Models.
This paper investigates the effectiveness of these methods in capturing the shift of fine-tuning datasets from the initial pre-trained data distribution.
We show that low-rank fine-tuning inadvertently preserves undesirable biases and toxic behaviors.
arXiv Detail & Related papers (2024-05-28T20:43:53Z)
- On the Calibration of Large Language Models and Alignment [63.605099174744865]
Confidence calibration serves as a crucial tool for gauging the reliability of deep models.
We conduct a systematic examination of the calibration of aligned language models throughout the entire construction process.
Our work sheds light on whether popular LLMs are well-calibrated and how the training process influences model calibration.
arXiv Detail & Related papers (2023-11-22T08:57:55Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the absolute performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Making informed decisions in cutting tool maintenance in milling: A KNN based model agnostic approach [0.0]
This research paper presents a KNN-based white-box model, which allows us to examine how the model performs its classification and how it prioritizes the different features.
This approach helps identify why a tool is in a certain condition and allows the manufacturer to make an informed decision about the tool's maintenance.
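The interpretability claim above rests on the fact that a KNN prediction can be traced back to concrete training examples. A minimal sketch under assumed, hypothetical features (vibration RMS and spindle current, not taken from the paper) could look like this:

```python
# Sketch of a KNN-style white-box classifier for tool condition:
# the prediction is the majority label among the k nearest training
# points, so each decision remains inspectable.
from collections import Counter

def knn_predict(query, training_data, k=3):
    """Return (label, neighbors) so the decision stays traceable."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(training_data, key=lambda item: dist(query, item[0]))
    neighbors = ranked[:k]
    label = Counter(lbl for _, lbl in neighbors).most_common(1)[0][0]
    return label, neighbors

# Illustrative features: (vibration RMS, spindle current).
data = [
    ((0.20, 1.0), "healthy"), ((0.25, 1.1), "healthy"),
    ((0.80, 2.0), "worn"), ((0.90, 2.2), "worn"), ((0.85, 2.1), "worn"),
]
label, neighbors = knn_predict((0.82, 2.05), data, k=3)
```

Returning the neighbors alongside the label is what makes the model "white box": a maintenance engineer can see exactly which historical cuts drove the classification.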
arXiv Detail & Related papers (2023-10-23T07:02:30Z) - Sharing Information Between Machine Tools to Improve Surface Finish
Forecasting [0.0]
The authors propose a Bayesian hierarchical model to predict surface-roughness measurements for a turning machining process.
The hierarchical model is compared to multiple independent Bayesian linear regression models to showcase the benefits of partial pooling in a machining setting.
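The benefit of partial pooling mentioned above can be illustrated with a simple shrinkage estimator (not the authors' full Bayesian hierarchical model): each machine's mean estimate is pulled toward the global mean, with machines that have few observations shrunk more. The variance values are illustrative assumptions.

```python
# Sketch of partial pooling: precision-weighted compromise between
# each machine's own mean and the global mean. tau2 is an assumed
# between-machine variance, sigma2 an assumed within-machine noise.

def partially_pooled_means(groups, tau2=0.5, sigma2=1.0):
    all_obs = [x for g in groups for x in g]
    global_mean = sum(all_obs) / len(all_obs)
    pooled = []
    for g in groups:
        n = len(g)
        group_mean = sum(g) / n
        # More data per machine -> more weight on that machine's mean.
        w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
        pooled.append(w * group_mean + (1 - w) * global_mean)
    return pooled

# Hypothetical roughness readings from three machines (uneven data).
groups = [[1.02, 1.10, 1.06], [1.55, 1.49], [2.10]]
pooled = partially_pooled_means(groups)
```

The machine with a single observation is shrunk furthest toward the global mean, which is exactly the regularizing behavior that makes hierarchical models outperform independent per-machine regressions when data are scarce.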
arXiv Detail & Related papers (2023-10-09T15:44:35Z) - Learning Objective-Specific Active Learning Strategies with Attentive
Neural Processes [72.75421975804132]
Learning Active Learning (LAL) proposes learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z) - SynBench: Task-Agnostic Benchmarking of Pretrained Representations using
Synthetic Data [78.21197488065177]
Recent success in fine-tuning large models, pretrained on broad data at scale, on downstream tasks has led to a significant paradigm shift in deep learning.
This paper proposes a new task-agnostic framework, SynBench, to measure the quality of pretrained representations using synthetic data.
arXiv Detail & Related papers (2022-10-06T15:25:00Z) - Diagnostic Tool for Out-of-Sample Model Evaluation [12.44615656370048]
We consider the use of a finite calibration data set to characterize the future, out-of-sample losses of a model.
We propose a simple model diagnostic tool that provides finite-sample guarantees under weak assumptions.
arXiv Detail & Related papers (2022-06-22T11:13:18Z) - Post-hoc Models for Performance Estimation of Machine Learning Inference [22.977047604404884]
Estimating how well a machine learning model performs during inference is critical in a variety of scenarios.
We systematically generalize performance estimation to a diverse set of metrics and scenarios.
We find that proposed post-hoc models consistently outperform the standard confidence baselines.
arXiv Detail & Related papers (2021-10-06T02:20:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.