Machine Learning to Estimate Gross Loss of Jewelry for Wax Patterns
- URL: http://arxiv.org/abs/2301.02872v1
- Date: Sat, 7 Jan 2023 15:09:51 GMT
- Title: Machine Learning to Estimate Gross Loss of Jewelry for Wax Patterns
- Authors: Mihir Jain, Kashish Jain and Sandip Mane
- Abstract summary: The gross loss is estimated before manufacturing to calculate the wax weight of the pattern that would be investment cast to make multiple identical pieces of jewellery.
In this paper, the authors found a way to use Machine Learning in the jewellery industry to estimate this crucial Gross Loss.
- Score: 3.123682649279259
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In mass manufacturing of jewellery, the gross loss is estimated before
manufacturing to calculate the wax weight of the pattern that would be
investment cast to make multiple identical pieces of jewellery. Machine
learning is a technology, part of AI, that helps create a model with
decision-making capabilities based on a large set of user-defined data. In this
paper, the authors found a way to use machine learning in the jewellery
industry to estimate this crucial gross loss. Using a small data set of
manufactured rings and regression analysis, it was found that the estimation
error could potentially be reduced from ±2-3 to ±0.5 by applying ML
algorithms to historic data and attributes collected from the CAD file during
the design phase itself. To evaluate the approach's viability, additional study
must be undertaken with a larger data set.
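The regression setup the abstract describes — predicting gross loss from CAD-derived attributes plus historic production data — can be sketched as follows. This is a minimal illustration, not the authors' model: the attribute names (surface area, volume, prong count) and every numeric value are hypothetical.

```python
import numpy as np

# Hypothetical CAD-derived attributes for a handful of rings:
# columns = surface area (mm^2), volume (mm^3), prong count.
X = np.array([
    [310.0,  95.0, 4],
    [420.0, 130.0, 6],
    [275.0,  80.0, 4],
    [510.0, 160.0, 8],
    [390.0, 120.0, 6],
])
# Historic gross-loss figures for those rings (illustrative values only).
y = np.array([2.1, 2.8, 1.9, 3.4, 2.6])

# Fit a linear model y ~ X @ b + c via ordinary least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])  # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_gross_loss(features):
    """Estimate gross loss for a new design from its CAD attributes."""
    return float(np.append(features, 1.0) @ coef)
```

In practice the paper's point is that such a model, fitted on historic data, can be queried at design time — before any wax pattern is made — because every input comes straight from the CAD file.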
Related papers
- RESTOR: Knowledge Recovery through Machine Unlearning [71.75834077528305]
Large language models trained on web-scale corpora can memorize undesirable datapoints.
Many machine unlearning methods have been proposed that aim to 'erase' these datapoints from trained models.
We propose the RESTOR framework for machine unlearning, characterized along several dimensions.
arXiv Detail & Related papers (2024-10-31T20:54:35Z) - Attribute-to-Delete: Machine Unlearning via Datamodel Matching [65.13151619119782]
Machine unlearning -- efficiently removing a small "forget set" of training data from a pre-trained machine learning model -- has recently attracted interest.
Recent research shows that machine unlearning techniques do not hold up in such a challenging setting.
arXiv Detail & Related papers (2024-10-30T17:20:10Z) - How to unlearn a learned Machine Learning model? [0.0]
I will present an elegant algorithm for unlearning a machine learning model and visualize its abilities.
I will elucidate the underlying mathematical theory and establish specific metrics to evaluate both the unlearned model's performance on desired data and its level of ignorance regarding unwanted data.
arXiv Detail & Related papers (2024-10-13T17:38:09Z) - The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z) - Loss-Free Machine Unlearning [51.34904967046097]
We present a machine unlearning approach that is both retraining- and label-free.
Retraining-free approaches often utilise Fisher information, which is derived from the loss and requires labelled data which may not be available.
We present an extension to the Selective Synaptic Dampening algorithm, substituting the diagonal of the Fisher information matrix for the gradient of the l2 norm of the model output to approximate sensitivity.
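The substitution this summary describes — using the gradient of the l2 norm of the model output as a label-free sensitivity proxy in place of the Fisher diagonal — can be sketched on a toy linear model. Everything here is an assumption for illustration: the model, the threshold, and the dampening factor are hypothetical and not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # toy linear "model": f(x) = W @ x

def output_norm_grad(W, x):
    """Gradient of ||W x||_2^2 with respect to W: 2 (W x) x^T.
    Needs no labels or loss, only the model output."""
    return 2.0 * np.outer(W @ x, x)

# Average squared gradients over a retained batch -> per-parameter
# importance, playing the role the Fisher diagonal usually plays.
X = rng.normal(size=(16, 4))
importance = np.mean([output_norm_grad(W, x) ** 2 for x in X], axis=0)

# Dampen parameters disproportionately important for the forget data
# (here, a single sample); threshold and shrink factor are hypothetical.
forget_importance = output_norm_grad(W, X[0]) ** 2
mask = forget_importance > 5.0 * importance
W_unlearned = np.where(mask, W * 0.1, W)
```

The appeal of the gradient-of-output-norm proxy is exactly what the summary states: it requires neither retraining nor labels, both of which the Fisher-based alternatives assume.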
arXiv Detail & Related papers (2024-02-29T16:15:34Z) - Parcel loss prediction in last-mile delivery: deep and non-deep approaches with insights from Explainable AI [1.104960878651584]
We propose two machine learning approaches, namely, Data Balance with Supervised Learning (DBSL) and Deep Hybrid Ensemble Learning (DHEL).
The practical implication of such predictions is their value in aiding e-commerce retailers in optimizing insurance-related decision-making policies.
arXiv Detail & Related papers (2023-10-25T12:46:34Z) - Machine Unlearning for Causal Inference [0.6621714555125157]
It is important to enable the model to forget some of its learning/captured information about a given user (machine unlearning).
This paper introduces the concept of machine unlearning for causal inference, particularly propensity score matching and treatment effect estimation.
The dataset used in the study is the Lalonde dataset, a widely used dataset for evaluating the effectiveness of job training programs.
arXiv Detail & Related papers (2023-08-24T17:27:01Z) - SSSE: Efficiently Erasing Samples from Trained Machine Learning Models [103.43466657962242]
We propose an efficient and effective algorithm, SSSE, for samples erasure.
In certain cases SSSE can erase samples almost as well as the optimal, yet impractical, gold standard of training a new model from scratch with only the permitted data.
arXiv Detail & Related papers (2021-07-08T14:17:24Z) - Automated Agriculture Commodity Price Prediction System with Machine Learning Techniques [0.8998318101090188]
We propose a web-based automated system to predict agriculture commodity price.
The optimal algorithm, an LSTM model with an average mean-square error of 0.304, was selected as the prediction engine.
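The supervised framing behind such a forecaster — sliding windows of past prices mapped to the next price, scored by mean-square error — can be sketched with a naive persistence baseline. The prices and window length below are hypothetical, and the LSTM itself is not reproduced here.

```python
import numpy as np

# Hypothetical daily commodity prices (illustrative values only).
prices = np.array([2.10, 2.15, 2.08, 2.20, 2.25, 2.18, 2.30, 2.28])

def make_windows(series, lookback=3):
    """Sliding windows of past prices paired with next-price targets:
    the supervised framing an LSTM forecaster would train on."""
    X = np.array([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X, y

X, y = make_windows(prices)

# Persistence baseline: predict that tomorrow's price equals today's.
preds = X[:, -1]
mse = float(np.mean((preds - y) ** 2))
```

Any learned model would need to beat this baseline's MSE on held-out data to justify its selection as the prediction engine.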
arXiv Detail & Related papers (2021-06-24T03:10:25Z) - A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data efficiently, with comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.