Causal Machine Learning in IoT-based Engineering Problems: A Tool Comparison in the Case of Household Energy Consumption
- URL: http://arxiv.org/abs/2505.12147v3
- Date: Mon, 30 Jun 2025 20:10:28 GMT
- Title: Causal Machine Learning in IoT-based Engineering Problems: A Tool Comparison in the Case of Household Energy Consumption
- Authors: Nikolaos-Lysias Kosioris, Sotirios Nikoletseas, Gavrilis Filios, Stefanos Panagiotou
- Abstract summary: Two prevalent tools based on Causal Machine Learning methods are compared. The operation of the tools is demonstrated by examining their response to 18 queries. Results were encouraging and may easily be extended to other domains.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid increase in computing power and the ability to store Big Data have enabled Machine Learning predictions in a wide variety of domains. In many cases, however, existing Machine Learning tools are considered insufficient or incorrect, since they exploit only probabilistic dependencies rather than inference logic. Causal Machine Learning methods aim to close this gap. In this paper, two prevalent tools based on Causal Machine Learning methods are compared, together with their mathematical underpinnings. The operation of the tools is demonstrated by examining their responses to 18 queries based on the IDEAL Household Energy Dataset, published by the University of Edinburgh. First, the causal-relations assumptions that justify this approach had to be evaluated; this was done using preexisting scientific knowledge of the domain and the tools' built-in validation facilities. The results were encouraging and may easily be extended to other domains.
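The abstract does not name the two tools being compared. As a hedged illustration of the kind of causal query and built-in validation it describes, the sketch below uses a DoWhy-style workflow; the column names (indoor_temp, occupancy, energy_kwh) and file name are hypothetical stand-ins for variables in the IDEAL Household Energy Dataset, not details taken from the paper.

```python
# Illustrative sketch only: the paper does not name its tools, and the
# column names below are hypothetical stand-ins for IDEAL dataset variables.
import pandas as pd
from dowhy import CausalModel

df = pd.read_csv("ideal_household_sample.csv")  # hypothetical extract of the dataset

# Pose a causal query: effect of indoor temperature on energy consumption,
# adjusting for occupancy as an assumed common cause.
model = CausalModel(
    data=df,
    treatment="indoor_temp",
    outcome="energy_kwh",
    common_causes=["occupancy"],
)

# Identify the estimand and estimate the effect via backdoor adjustment.
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")

# Built-in refutation check, analogous to the validation step mentioned above:
# replacing the treatment with a placebo should drive the estimate towards zero.
refutation = model.refute_estimate(
    estimand, estimate, method_name="placebo_treatment_refuter"
)
print(estimate.value)
print(refutation)
```

A refutation that leaves the estimate essentially unchanged under the placebo would cast doubt on the assumed causal structure, which is the spirit of the validation step the abstract describes.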
Related papers
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimate the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). We introduce the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
arXiv Detail & Related papers (2025-07-31T05:34:27Z)
- Robust Counterfactual Explanations under Model Multiplicity Using Multi-Objective Optimization [0.8702432681310401]
Counterfactual explanation (CE) is not robust when there are multiple machine-learning models with similar accuracy. In this paper, we propose robust CEs that introduce a new viewpoint, and a method that uses multi-objective optimization to generate them. We believe that this research can serve as a valuable foundation for various fields, including explainability in machine learning, decision-making, and action planning based on machine learning.
arXiv Detail & Related papers (2025-01-10T08:57:50Z)
- Coupling Machine Learning with Ontology for Robotics Applications [0.0]
The lack of prior knowledge in dynamic scenarios is without doubt a major barrier to scalable machine intelligence.
My view of the interaction between the two tiers of intelligence is based on the idea that when knowledge is not readily available at the knowledge-base tier, more knowledge can be extracted from the other tier.
arXiv Detail & Related papers (2024-06-08T23:38:03Z)
- A Survey on Brain-Inspired Deep Learning via Predictive Coding [85.93245078403875]
Predictive coding (PC) has shown promising performance in machine intelligence tasks. PC can model information processing in different brain areas, and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z)
- Learnability with Time-Sharing Computational Resource Concerns [65.268245109828]
We present a theoretical framework that takes into account the influence of computational resources in learning theory.
This framework can be naturally applied to stream learning where the incoming data streams can be potentially endless.
It may also provide a theoretical perspective for the design of intelligent supercomputing operating systems.
arXiv Detail & Related papers (2023-05-03T15:54:23Z)
- The Right Tool for the Job: Open-Source Auditing Tools in Machine Learning [0.0]
In recent years, discussions about fairness in machine learning, AI ethics and algorithm audits have increased.
Many open-source auditing tools are available, but users aren't always aware of the tools, what they are useful for, or how to access them.
This paper aims to reinforce the urgent need to actually use these tools and provides motivations for doing so.
arXiv Detail & Related papers (2022-06-20T15:20:26Z)
- Automated Machine Learning Techniques for Data Streams [91.3755431537592]
This paper surveys the state-of-the-art open-source AutoML tools, applies them to data collected from streams, and measures how their performance changes over time.
The results show that off-the-shelf AutoML tools can provide satisfactory results but in the presence of concept drift, detection or adaptation techniques have to be applied to maintain the predictive accuracy over time.
arXiv Detail & Related papers (2021-06-14T11:42:46Z)
- A Survey on Extraction of Causal Relations from Natural Language Text [9.317718453037667]
Cause-effect relations appear frequently in text, and curating cause-effect relations from text helps in building causal networks for predictive tasks.
Existing causality extraction techniques include knowledge-based, statistical machine learning (ML)-based, and deep learning-based approaches.
arXiv Detail & Related papers (2021-01-16T10:49:39Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data efficiently, with comparable performance.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)