Function Trees: Transparent Machine Learning
- URL: http://arxiv.org/abs/2403.13141v1
- Date: Tue, 19 Mar 2024 20:23:31 GMT
- Title: Function Trees: Transparent Machine Learning
- Authors: Jerome H. Friedman
- Abstract summary: Knowing the global properties of such functions can help in understanding the system that produced the data.
A function tree is constructed that can be used to rapidly identify and compute all of the function's main and interaction effects.
- Score: 1.3597551064547502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The output of a machine learning algorithm can usually be represented by one or more multivariate functions of its input variables. Knowing the global properties of such functions can help in understanding the system that produced the data as well as interpreting and explaining corresponding model predictions. A method is presented for representing a general multivariate function as a tree of simpler functions. This tree exposes the global internal structure of the function by uncovering and describing the combined joint influences of subsets of its input variables. Given the inputs and corresponding function values, a function tree is constructed that can be used to rapidly identify and compute all of the function's main and interaction effects up to high order. Interaction effects involving up to four variables are graphically visualized.
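The abstract describes decomposing a function into main and interaction effects. As a rough illustration of that idea (this is a generic partial-dependence-style sketch, not the paper's function-tree algorithm; the function `f` and all names here are hypothetical), a centered main effect of one input can be estimated by clamping that input and averaging the function over the remaining inputs:

```python
import numpy as np

# Illustrative sketch, NOT the paper's method: estimating a centered
# main effect of a black-box function f by clamping one feature and
# averaging over the others, in the spirit of a functional ANOVA
# decomposition into main and interaction effects.

def f(x):
    # Hypothetical target: main effects in x0 and x1 plus an x0*x1 interaction.
    return 2.0 * x[:, 0] + x[:, 1] ** 2 + x[:, 0] * x[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(10_000, 3))  # assumed input distribution
grand_mean = f(X).mean()

def main_effect(j, grid):
    """Partial-dependence-style main effect of feature j, centered."""
    out = []
    for v in grid:
        Xv = X.copy()
        Xv[:, j] = v          # clamp feature j, average over the rest
        out.append(f(Xv).mean())
    return np.array(out) - grand_mean

grid = np.linspace(-1, 1, 5)
print(main_effect(0, grid))   # roughly linear, about 2*v at each grid point v
```

Higher-order interaction effects would subtract the lower-order terms from analogous joint averages; the paper's contribution is organizing such effects into a tree so they can be identified and computed rapidly.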
Related papers
- Compositional learning of functions in humans and machines [23.583544271543033]
We develop a function learning paradigm to explore the capacity of humans and neural network models in learning and reasoning with compositional functions.
Our findings indicate that humans can make zero-shot generalizations on novel visual function compositions across interaction conditions.
A comparison with a neural network model on the same task reveals that, through the meta-learning for compositionality (MLC) approach, a standard sequence-to-sequence Transformer can mimic human generalization patterns in composing functions.
arXiv Detail & Related papers (2024-03-18T19:22:53Z) - FIND: A Function Description Benchmark for Evaluating Interpretability Methods [86.80718559904854]
This paper introduces FIND (Function INterpretation and Description), a benchmark suite for evaluating automated interpretability methods.
FIND contains functions that resemble components of trained neural networks, and accompanying descriptions of the kind we seek to generate.
We evaluate methods that use pretrained language models to produce descriptions of function behavior in natural language and code.
arXiv Detail & Related papers (2023-09-07T17:47:26Z) - Interpretability with full complexity by constraining feature information [1.52292571922932]
Interpretability is a pressing issue for machine learning.
We approach interpretability from a new angle: constrain the information about the features without restricting the complexity of the model.
We develop a framework for extracting insight from the spectrum of approximate models.
arXiv Detail & Related papers (2022-11-30T18:59:01Z) - Unifying local and global model explanations by functional decomposition of low dimensional structures [0.0]
We consider a global explanation of a regression or classification function by decomposing it into the sum of main components and interaction components.
Here, q denotes the highest order of interaction retained in the decomposition.
arXiv Detail & Related papers (2022-08-12T07:38:53Z) - Neural Network Approximation of Refinable Functions [8.323468006516018]
We show that refinable functions are approximated by the outputs of deep ReLU networks of fixed width, with accuracy that improves exponentially in the depth.
Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design.
arXiv Detail & Related papers (2021-07-28T06:45:36Z) - Learning Aggregation Functions [78.47770735205134]
We introduce LAF (Learning Aggregation Functions), a learnable aggregator for sets of arbitrary cardinality.
We report experiments on semi-synthetic and real data showing that LAF outperforms state-of-the-art sum- and max-decomposition architectures.
arXiv Detail & Related papers (2020-12-15T18:28:53Z) - Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z) - Measure Inducing Classification and Regression Trees for Functional Data [0.0]
We propose a tree-based algorithm for classification and regression problems in the context of functional data analysis.
This is achieved by learning a weighted functional $L^2$ space by means of constrained convex optimization.
arXiv Detail & Related papers (2020-10-30T18:49:53Z) - UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z) - From Sets to Multisets: Provable Variational Inference for Probabilistic Integer Submodular Models [82.95892656532696]
Submodular functions have been studied extensively in machine learning and data mining.
In this work, we propose a continuous DR-submodular extension for integer submodular functions.
We formulate a new probabilistic model which is defined through integer submodular functions.
arXiv Detail & Related papers (2020-06-01T22:20:45Z) - Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data.
arXiv Detail & Related papers (2020-05-14T00:45:23Z)
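The last entry uses influence functions to surface influential training examples. As a minimal sketch of the classical formulation (an assumed ridge-regression setup, not the paper's code), the influence of a training point on a test loss is `-grad_test^T H^{-1} grad_train`, where `H` is the Hessian of the training objective:

```python
import numpy as np

# Minimal sketch, assuming a ridge-regularized linear regression:
# classical influence of each training example on the loss at a test
# point, I(z, z_test) = -grad_test^T H^{-1} grad_train.

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true + 0.1 * rng.normal(size=n)

lam = 1e-2
# Hessian of the mean squared loss plus ridge term, and the closed-form fit.
H = X.T @ X / n + lam * np.eye(d)
theta = np.linalg.solve(H, X.T @ y / n)

def grad_loss(x, y_):
    """Gradient of the squared loss 0.5*(x @ theta - y)**2 w.r.t. theta."""
    return (x @ theta - y_) * x

x_test, y_test = X[0], y[0]
influences = np.array([
    -grad_loss(x_test, y_test) @ np.linalg.solve(H, grad_loss(X[i], y[i]))
    for i in range(n)
])

# Training examples with the largest |influence| are natural candidates
# for inspection as potential data artifacts.
top = np.argsort(-np.abs(influences))[:5]
```

In practice, for deep models the Hessian inverse is approximated (e.g. with conjugate gradients or stochastic estimation) rather than solved exactly as above.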
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.