Importance measures derived from random forests: characterisation and
extension
- URL: http://arxiv.org/abs/2106.09473v2
- Date: Mon, 21 Jun 2021 08:15:29 GMT
- Authors: Antonio Sutera
- Abstract summary: This thesis aims at improving the interpretability of models built by a specific family of machine learning algorithms.
Several mechanisms have been proposed to interpret these models, and throughout this thesis we aim to improve their understanding.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, new technologies, and especially artificial intelligence, are
increasingly established in our society. Big data analysis and machine learning,
two sub-fields of artificial intelligence, are at the core of many recent
breakthroughs in many application fields (e.g., medicine, communication,
finance), including some that are strongly related to our day-to-day life
(e.g., social networks, computers, smartphones). In machine learning,
significant improvements are usually achieved at the price of increasing
computational complexity and thanks to bigger datasets. Currently, cutting-edge
models built by the most advanced machine learning algorithms are typically both
very efficient and profitable, but also extremely complex. Their complexity is
such that these models are commonly seen as black boxes providing a prediction
or a decision that cannot be interpreted or justified. Nevertheless, whether
these models are used autonomously or as simple decision-support tools, they are
already deployed in machine learning applications where health and human life
are at stake. It is therefore an obvious necessity not to blindly believe
everything coming out of those models without a detailed understanding of their
predictions or decisions. Accordingly, this thesis aims at improving the
interpretability of models built by a specific family of machine learning
algorithms, the so-called tree-based methods. Several mechanisms have been
proposed to interpret these models, and throughout this thesis we aim to improve
their understanding, study their properties, and define their limitations.
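As a concrete illustration of the thesis topic, the sketch below (not taken from the thesis itself) computes two widely used importance measures derived from random forests with scikit-learn: mean decrease in impurity (MDI) and permutation importance. The dataset and all parameter values are illustrative assumptions.

```python
# A minimal sketch of two common random-forest importance measures:
# mean decrease in impurity (MDI) and permutation importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 5 informative features out of 10 (illustrative setup).
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=5, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X, y)

# MDI: impurity reduction accumulated over all splits on each feature,
# averaged over the trees and normalised to sum to 1.
mdi = forest.feature_importances_

# Permutation importance: drop in score when one feature is shuffled.
perm = permutation_importance(forest, X, y, n_repeats=10, random_state=0)

for i, (m, p) in enumerate(zip(mdi, perm.importances_mean)):
    print(f"feature {i}: MDI={m:.3f}  permutation={p:.3f}")
```

MDI is computed for free during training but is known to be biased toward high-cardinality features, while permutation importance requires extra model evaluations but measures the effect of a feature on actual predictive performance; contrasting such measures is precisely the kind of characterisation the thesis addresses.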
Related papers
- Large Language Models for Scientific Synthesis, Inference and
Explanation [56.41963802804953]
We show how large language models can perform scientific synthesis, inference, and explanation.
We show that the large language model can augment this "knowledge" by synthesizing from the scientific literature.
This approach has the further advantage that the large language model can explain the machine learning system's predictions.
arXiv Detail & Related papers (2023-10-12T02:17:59Z) - Brain-Inspired Computational Intelligence via Predictive Coding [89.6335791546526]
Predictive coding (PC) has shown promising performance in machine intelligence tasks.
PC can model information processing in different brain areas and can be used in cognitive control and robotics.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - Designing Explainable Predictive Machine Learning Artifacts: Methodology
and Practical Demonstration [0.0]
Decision-makers from companies across various industries are still largely reluctant to employ applications based on modern machine learning algorithms.
We ascribe this issue to the widely held view of advanced machine learning algorithms as "black boxes".
We develop a methodology which unifies methodological knowledge from design science research and predictive analytics with state-of-the-art approaches to explainable artificial intelligence.
arXiv Detail & Related papers (2023-06-20T15:11:26Z) - Frugal Machine Learning [7.460473725109103]
This paper investigates frugal learning, which aims to build the most accurate possible models using the least amount of resources.
The most promising algorithms are then assessed in a real-world scenario by implementing them in a smartwatch and letting them learn activity recognition models on the watch itself.
arXiv Detail & Related papers (2021-11-05T21:27:55Z) - From Machine Learning to Robotics: Challenges and Opportunities for
Embodied Intelligence [113.06484656032978]
The article argues that embodied intelligence is a key driver for the advancement of machine learning technology.
We highlight challenges and opportunities specific to embodied intelligence.
We propose research directions which may significantly advance the state-of-the-art in robot learning.
arXiv Detail & Related papers (2021-10-28T16:04:01Z) - Ten Quick Tips for Deep Learning in Biology [116.78436313026478]
Machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling.
Deep learning has become its own subfield of machine learning.
In the context of biological research, deep learning has been increasingly used to derive novel insights from high-dimensional biological data.
arXiv Detail & Related papers (2021-05-29T21:02:44Z) - Individual Explanations in Machine Learning Models: A Survey for
Practitioners [69.02688684221265]
The use of sophisticated statistical models that influence decisions in domains of high societal relevance is on the rise.
Many governments, institutions, and companies are reluctant to adopt them, as their output is often difficult to explain in human-interpretable ways.
Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models.
arXiv Detail & Related papers (2021-04-09T01:46:34Z) - Knowledge as Invariance -- History and Perspectives of
Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models on exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z) - Principles and Practice of Explainable Machine Learning [12.47276164048813]
This report focuses on data-driven methods -- machine learning (ML) and pattern recognition models in particular.
With the increasing prevalence and complexity of methods, business stakeholders at the very least have a growing number of concerns about the drawbacks of models.
We have undertaken a survey to help industry practitioners understand the field of explainable machine learning better.
arXiv Detail & Related papers (2020-09-18T14:50:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.