Physics-Inspired Interpretability Of Machine Learning Models
- URL: http://arxiv.org/abs/2304.02381v1
- Date: Wed, 5 Apr 2023 11:35:17 GMT
- Title: Physics-Inspired Interpretability Of Machine Learning Models
- Authors: Maximilian P Niroomand, David J Wales
- Abstract summary: The ability to explain decisions made by machine learning models remains one of the most significant hurdles towards widespread adoption of AI.
We propose a novel approach to identify relevant features of the input data, inspired by methods from the energy landscapes field.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to explain decisions made by machine learning models remains one
of the most significant hurdles towards widespread adoption of AI in highly
sensitive areas such as medicine, cybersecurity or autonomous driving. Great
interest exists in understanding which features of the input data prompt model
decision making. In this contribution, we propose a novel approach to identify
relevant features of the input data, inspired by methods from the energy
landscapes field, developed in the physical sciences. By identifying conserved
weights within groups of minima of the loss landscapes, we can identify the
drivers of model decision making. Analogues to this idea exist in the molecular
sciences, where coordinate invariants or order parameters are employed to
identify critical features of a molecule. However, no such approach exists for
machine learning loss landscapes. We will demonstrate the applicability of
energy landscape methods to machine learning models and give examples, both
synthetic and from the real world, for how these methods can help to make
models more interpretable.
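The core idea of the abstract can be illustrated with a minimal, hypothetical sketch (this is not the authors' code; the model, data, and the conservation criterion below are all illustrative assumptions): train a simple model from many random starting points to collect a group of minima of the loss landscape, then look for weights that are conserved, i.e. nearly identical, across those minima relative to their spread.

```python
# Hypothetical sketch of the conserved-weight idea: weights that vary little
# across a group of loss-landscape minima, relative to their mean magnitude,
# are flagged as candidate drivers of model decisions. All names, data, and
# thresholds here are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only feature 0 is informative; features 1 and 2 are noise.
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(float)

def train_logreg(X, y, seed, steps=400, lr=0.3):
    """Plain gradient descent on the logistic loss from a random start."""
    w = np.random.default_rng(seed).normal(size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Collect a group of minima by restarting the optimization from many seeds.
minima = np.stack([train_logreg(X, y, s) for s in range(20)])

# "Conservation" score per weight: mean magnitude across minima divided by
# the spread across minima. A high ratio marks a conserved, decision-driving
# weight; noise weights hover near zero with comparatively larger spread.
spread = minima.std(axis=0)
ratio = np.abs(minima.mean(axis=0)) / (spread + 1e-9)
print("conservation ratio per weight:", np.round(ratio, 2))
```

In this toy setting the informative feature should receive by far the highest conservation ratio; for deep networks the paper's energy-landscape machinery is needed to enumerate and group the minima in the first place.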
Related papers
- Learning Low-Dimensional Strain Models of Soft Robots by Looking at the Evolution of Their Shape with Application to Model-Based Control [2.058941610795796]
This paper introduces a streamlined method for learning low-dimensional, physics-based models.
We validate our approach through simulations with various planar soft manipulators.
Thanks to the capability of the method of generating physically compatible models, the learned models can be straightforwardly combined with model-based control policies.
arXiv Detail & Related papers (2024-10-31T18:37:22Z)
- Fairness Implications of Heterogeneous Treatment Effect Estimation with Machine Learning Methods in Policy-making [0.0]
We argue that standard AI Fairness approaches for predictive machine learning are not suitable for all causal machine learning applications.
We further argue that policy-making is best seen as a joint decision in which the causal machine learning model usually has only indirect power.
arXiv Detail & Related papers (2023-09-02T03:06:14Z)
- Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z)
- Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z)
- Model-agnostic multi-objective approach for the evolutionary discovery of mathematical models [55.41644538483948]
In modern data science, it is often more important to understand the properties of a model and which of its parts could be replaced to obtain better results.
We use multi-objective evolutionary optimization for composite data-driven model learning to obtain the algorithm's desired properties.
arXiv Detail & Related papers (2021-07-07T11:17:09Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Physics-Integrated Variational Autoencoders for Robust and Interpretable Generative Modeling [86.9726984929758]
We focus on the integration of incomplete physics models into deep generative models.
We propose a VAE architecture in which a part of the latent space is grounded by physics.
We demonstrate generative performance improvements over a set of synthetic and real-world datasets.
arXiv Detail & Related papers (2021-02-25T20:28:52Z)
- Choice modelling in the age of machine learning -- discussion paper [0.27998963147546135]
Cross-pollination of machine learning models, techniques and practices could help overcome problems and limitations encountered in the current theory-driven paradigm.
Despite the potential benefits of using the advances of machine learning to improve choice modelling practices, the choice modelling field has been hesitant to embrace machine learning.
arXiv Detail & Related papers (2021-01-28T11:57:08Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models to exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- A Weighted Solution to SVM Actionability and Interpretability [0.0]
Actionability is as important as the interpretability or explainability of machine learning models, and it remains an ongoing and important research topic.
This paper finds a solution to the question of actionability on both linear and non-linear SVM models.
arXiv Detail & Related papers (2020-12-06T20:35:25Z)
- Using machine-learning modelling to understand macroscopic dynamics in a system of coupled maps [0.0]
We consider, as a case study, the macroscopic motion emerging from a system of globally coupled maps.
We build a coarse-grained Markov process for the macroscopic dynamics both with a machine learning approach and with a direct numerical computation of the transition probability of the coarse-grained process.
We are able to infer important information about the effective dimension of the attractor, the persistence of memory effects and the multi-scale structure of the dynamics.
arXiv Detail & Related papers (2020-11-08T15:38:12Z)
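The direct-counting construction mentioned in the coupled-maps entry above can be sketched in a few lines (a hypothetical illustration, not the authors' code; the map, coupling strength, and bin count are assumed parameters): simulate globally coupled maps, coarse-grain a macroscopic observable into bins, and estimate the transition matrix of the resulting Markov process by counting bin-to-bin transitions.

```python
# Minimal sketch of direct estimation of a coarse-grained Markov process for
# the macroscopic dynamics of globally coupled maps. All parameters (map,
# coupling eps, number of bins K) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, T, eps, a = 500, 5000, 0.1, 1.9     # units, time steps, coupling, map parameter

f = lambda x: 1.0 - a * x**2           # single-unit quadratic (logistic-type) map
x = rng.uniform(-1, 1, size=N)
m = np.empty(T)
for t in range(T):
    fx = f(x)
    x = (1 - eps) * fx + eps * fx.mean()   # global (mean-field) coupling
    m[t] = x.mean()                        # macroscopic observable

# Coarse-grain the macroscopic trajectory into K discrete states.
K = 10
edges = np.linspace(m.min(), m.max(), K + 1)[1:-1]
states = np.digitize(m, edges)

# Count observed transitions and normalize each row into probabilities.
P = np.zeros((K, K))
for s, s_next in zip(states[:-1], states[1:]):
    P[s, s_next] += 1
P /= np.maximum(P.sum(axis=1, keepdims=True), 1)
print("row sums:", np.round(P.sum(axis=1), 2))
```

Each visited row of `P` sums to one; spectral properties of `P` then give access to quantities such as relaxation rates and memory effects of the coarse-grained dynamics.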
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.