Estimating productivity gains in digital automation
- URL: http://arxiv.org/abs/2210.01252v1
- Date: Mon, 3 Oct 2022 22:11:42 GMT
- Title: Estimating productivity gains in digital automation
- Authors: Mauricio Jacobo-Romero, Danilo S. Carvalho and André Freitas
- Abstract summary: This paper proposes a novel productivity estimation model to evaluate the effects of adopting Artificial Intelligence (AI) components in a production chain.
We provide (i) theoretical and empirical evidence to explain Solow's dichotomy; (ii) a data-driven model to estimate and assess productivity variations; (iii) a methodology underpinned by process-mining datasets to determine the business process (BP) and productivity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper proposes a novel productivity estimation model to evaluate the
effects of adopting Artificial Intelligence (AI) components in a production
chain. Our model provides evidence to address the AI version of Solow's
Paradox. We provide (i) theoretical and empirical evidence to explain Solow's
dichotomy; (ii) a data-driven model to estimate and assess productivity
variations; (iii) a methodology underpinned by process-mining datasets to
determine the business process (BP) and productivity; (iv) a set of computer
simulation parameters; and (v) an empirical analysis of labour distribution.
Together, these results support our view that the AI Solow paradox is a
consequence of metric mismeasurement.
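The productivity variation the abstract refers to can be illustrated with a minimal sketch: productivity as an output/labour ratio computed from a process-mining-style event log. All field names and figures below are illustrative assumptions, not the paper's actual model or data:

```python
from datetime import datetime

# Hypothetical event log in a process-mining style: one record per activity,
# with a case id, activity name, and completion timestamp.
event_log = [
    {"case": "c1", "activity": "start",  "time": datetime(2022, 1, 1, 9, 0)},
    {"case": "c1", "activity": "finish", "time": datetime(2022, 1, 1, 9, 30)},
    {"case": "c2", "activity": "start",  "time": datetime(2022, 1, 1, 9, 10)},
    {"case": "c2", "activity": "finish", "time": datetime(2022, 1, 1, 10, 10)},
]

def productivity(log, labour_hours):
    """Completed cases per labour hour: a simple output/input ratio."""
    finished = {e["case"] for e in log if e["activity"] == "finish"}
    return len(finished) / labour_hours

# Comparing this ratio before and after an AI component is introduced gives
# the kind of productivity variation the model is meant to estimate.
baseline = productivity(event_log, labour_hours=2.0)  # 2 cases / 2 h = 1.0
```

Mismeasurement in the paradox sense would then amount to computing this ratio with an output or labour metric that fails to capture what the AI component actually changed.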
Related papers
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimate the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). We introduce the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
arXiv Detail & Related papers (2025-07-31T05:34:27Z)
- Explainable Artificial Intelligence for identifying profitability predictors in Financial Statements [0.7067443325368975]
We apply Machine Learning techniques to raw financial statements data taken from AIDA, a Database comprising Italian listed companies' data from 2013 to 2022.
We present a comparative study of different models and following the European AI regulations, we complement our analysis by applying explainability techniques to the proposed models.
arXiv Detail & Related papers (2025-01-29T14:33:23Z)
- Explainable Artificial Intelligence for Dependent Features: Additive Effects of Collinearity [0.0]
We propose an Additive Effects of Collinearity (AEC) as a novel XAI method to consider the collinearity issue.
The proposed method is implemented on simulated and real data to validate its efficiency compared with a state-of-the-art XAI method.
arXiv Detail & Related papers (2024-10-30T07:00:30Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models [89.88010750772413]
Synthetic data has been proposed as a solution to the scarcity of high-quality data for training large language models (LLMs).
Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws.
Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.
arXiv Detail & Related papers (2024-06-18T08:38:59Z)
- Sparse Attention-driven Quality Prediction for Production Process Optimization in Digital Twins [53.70191138561039]
We propose to deploy a digital twin of the production line by encoding its operational logic in a data-driven approach.
We adopt a quality prediction model for production process based on self-attention-enabled temporal convolutional neural networks.
Our operation experiments on a specific tobacco shredding line demonstrate that the proposed digital twin-based production process optimization method fosters seamless integration between virtual and real production lines.
arXiv Detail & Related papers (2024-05-20T09:28:23Z)
- The Artificial Neural Twin -- Process Optimization and Continual Learning in Distributed Process Chains [3.79770624632814]
We propose the Artificial Neural Twin, which combines concepts from model predictive control, deep learning, and sensor networks.
Our approach introduces differentiable data fusion to estimate the state of distributed process steps.
By treating the interconnected process steps as a quasi neural-network, we can backpropagate loss gradients for process optimization or model fine-tuning to process parameters.
arXiv Detail & Related papers (2024-03-27T08:34:39Z)
- FIMBA: Evaluating the Robustness of AI in Genomics via Feature Importance Adversarial Attacks [0.0]
This paper demonstrates the vulnerability of AI models often utilized in downstream tasks on recognized public genomics datasets.
We undermine model robustness by deploying an attack that focuses on input transformation while mimicking the real data and confusing the model decision-making.
Our empirical findings unequivocally demonstrate a decline in model performance, underscored by diminished accuracy and an upswing in false positives and false negatives.
arXiv Detail & Related papers (2024-01-19T12:04:31Z)
- QualEval: Qualitative Evaluation for Model Improvement [82.73561470966658]
We propose QualEval, which augments quantitative scalar metrics with automated qualitative evaluation as a vehicle for model improvement.
QualEval uses a powerful LLM reasoner and our novel flexible linear programming solver to generate human-readable insights.
We demonstrate that leveraging its insights improves the performance of the Llama 2 model by up to 15 percentage points.
arXiv Detail & Related papers (2023-11-06T00:21:44Z)
- Interpretability and causal discovery of the machine learning models to predict the production of CBM wells after hydraulic fracturing [0.5512295869673146]
A novel methodology is proposed to discover the latent causality from observed data.
Based on the theory of causal discovery, a causal graph is derived with explicit input, output, treatment and confounding variables.
SHAP is employed to analyze the influence of the factors on the production capability, which indirectly interprets the machine learning models.
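SHAP-style attribution rests on Shapley values; how a factor's influence is apportioned can be shown with an exact computation over a toy additive model. Everything here (factor names, contributions) is illustrative, not the paper's data:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    """Exact Shapley values for a value function over feature subsets."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        rest = [g for g in features if g != f]
        for k in range(n):  # subsets of the other features, by size
            for S in combinations(rest, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[f] += weight * (value(set(S) | {f}) - value(set(S)))
    return phi

# Toy additive "production capability" model: each factor contributes a
# fixed amount, so the Shapley values recover those contributions exactly.
contrib = {"pressure": 2.0, "rate": 3.0, "depth": 1.0}
v = lambda S: sum(contrib[f] for f in S)
phi = shapley_values(list(contrib), v)  # phi["rate"] == 3.0
```

The exact computation is exponential in the number of features; SHAP's practical value lies in approximating these quantities efficiently for real models.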
arXiv Detail & Related papers (2022-12-21T02:06:26Z)
- Extending Process Discovery with Model Complexity Optimization and Cyclic States Identification: Application to Healthcare Processes [62.997667081978825]
The paper presents an approach to process mining providing semi-automatic support to model optimization.
A model simplification approach is proposed, which essentially abstracts the raw model at the desired granularity.
We aim to demonstrate the capabilities of the technological solution using three datasets from different applications in the healthcare domain.
arXiv Detail & Related papers (2022-06-10T16:20:59Z)
- Learnability of Competitive Threshold Models [11.005966612053262]
We study the learnability of the competitive threshold model from a theoretical perspective.
We demonstrate how competitive threshold models can be seamlessly simulated by artificial neural networks.
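A linear threshold node of the kind such models build on can be written directly as a step-activation unit, which is the sense in which neural networks can simulate them. A minimal sketch with illustrative weights (not the paper's construction):

```python
def threshold_unit(weights, threshold, active):
    """A linear threshold node: it fires iff the weighted sum of its
    active in-neighbours reaches the threshold, i.e. a step-activation
    neuron over a binary input vector."""
    total = sum(w for node, w in weights.items() if node in active)
    return total >= threshold

# Toy influence network: one node influenced by neighbours a, b, c.
weights = {"a": 0.5, "b": 0.3, "c": 0.4}

threshold_unit(weights, 0.7, active={"a", "b"})  # 0.8 >= 0.7 -> fires
threshold_unit(weights, 0.7, active={"b"})       # 0.3 <  0.7 -> stays inactive
```

Stacking such units, with one layer per diffusion step, yields the network-level simulation the abstract alludes to.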
arXiv Detail & Related papers (2022-05-08T01:11:51Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z)