Constrained Generalized Additive 2 Model with Consideration of
High-Order Interactions
- URL: http://arxiv.org/abs/2106.02836v1
- Date: Sat, 5 Jun 2021 08:31:20 GMT
- Title: Constrained Generalized Additive 2 Model with Consideration of
High-Order Interactions
- Authors: Akihisa Watanabe, Michiya Kuramata, Kaito Majima, Haruka Kiyohara,
Kensho Kondo, Kazuhide Nakata
- Abstract summary: We propose CGA2M+, which is based on the Generalized Additive 2 Model (GA2M) and differs from it in two major ways: the introduction of monotonicity constraints and of a higher-order interaction term.
- Score: 0.39146761527401425
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, machine learning and AI have been introduced in many
industrial fields. In fields such as finance, medicine, and autonomous driving,
where the inference results of a model may have serious consequences, high
interpretability as well as prediction accuracy is required. In this study, we
propose CGA2M+, which is based on the Generalized Additive 2 Model (GA2M) and
differs from it in two major ways. The first is the introduction of
monotonicity. Imposing monotonicity on some functions based on an analyst's
knowledge is expected to improve not only interpretability but also
generalization performance. The second is the introduction of a higher-order
term: given that GA2M considers only second-order interactions, we aim to
balance interpretability and prediction accuracy by introducing a higher-order
term that can capture higher-order interactions. In this way, we can improve
prediction performance without compromising interpretability by suitably
designing the training procedure. Numerical experiments showed that the proposed model has
high predictive performance and interpretability. Furthermore, we confirmed
that generalization performance is improved by introducing monotonicity.
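For readers who want a concrete picture, the additive structure described above can be sketched in standard GA2M notation; the abstract does not give the exact parameterization, so the higher-order term f_high below is a generic placeholder:

```latex
% Sketch of the CGA2M+ additive structure (standard GA2M notation; the
% exact form of the higher-order term is not specified in the abstract).
\hat{y}(x) = \beta_0
  + \sum_{i} f_i(x_i)                    % univariate shape functions
  + \sum_{i < j} f_{ij}(x_i, x_j)        % pairwise interactions (GA2M)
  + f_{\mathrm{high}}(x_1, \dots, x_p)   % proposed higher-order term
```

Monotonicity can likewise be made concrete. One common way to constrain tree-based shape functions, shown below as a minimal sketch rather than the authors' implementation, is LightGBM's per-feature monotone_constraints parameter; the synthetic data and constraint signs are illustrative assumptions:

```python
# Minimal sketch: per-feature monotonicity constraints on boosted trees.
# The data, feature roles, and hyperparameters are assumptions for
# illustration, not the experimental setup of the paper.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
# Ground truth: increasing in x0, non-monotone in x1, decreasing in x2.
y = 2 * X[:, 0] + np.sin(6 * X[:, 1]) - X[:, 2] + rng.normal(0.0, 0.1, 500)

# +1 = non-decreasing, 0 = unconstrained, -1 = non-increasing (per feature).
model = lgb.LGBMRegressor(monotone_constraints=[1, 0, -1], n_estimators=200)
model.fit(X, y)
print(model.predict(X[:5]))
```

As the abstract notes, such constraints encode analyst knowledge directly in the fitted functions, which can act as a regularizer and thus improve generalization as well as interpretability.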
Related papers
- Zero-Shot Embeddings Inform Learning and Forgetting with Vision-Language Encoders [6.7181844004432385]
The Inter-Intra Modal Measure (IIMM) functions as a strong predictor of performance changes with fine-tuning.
Fine-tuning on tasks with higher IIMM scores produces greater in-domain performance gains but also induces more severe out-of-domain performance degradation.
With only a single forward pass of the target data, practitioners can leverage this key insight to evaluate the degree to which a model can be expected to improve following fine-tuning.
arXiv Detail & Related papers (2024-07-22T15:35:09Z) - Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach [54.429396802848224]
This paper proposes an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases.
For interpretability, the model achieves the target-driven motion prediction by estimating the spatial distribution of long-term destinations.
Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable.
arXiv Detail & Related papers (2024-03-10T04:16:04Z) - Human Trajectory Forecasting with Explainable Behavioral Uncertainty [63.62824628085961]
Human trajectory forecasting helps to understand and predict human behaviors, enabling applications from social robots to self-driving cars.
Model-free methods offer superior prediction accuracy but lack explainability, while model-based methods provide explainability but predict less accurately.
We show that BNSP-SFM achieves up to a 50% improvement in prediction accuracy, compared with 11 state-of-the-art methods.
arXiv Detail & Related papers (2023-07-04T16:45:21Z) - Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z) - Explaining Language Models' Predictions with High-Impact Concepts [11.47612457613113]
We propose a complete framework for extending concept-based interpretability methods to NLP.
We optimize for features whose existence causes the output predictions to change substantially.
Our method achieves superior results on predictive impact, usability, and faithfulness compared to the baselines.
arXiv Detail & Related papers (2023-05-03T14:48:27Z) - On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes factorizing the data-generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero- and few-shot adaptation in low-data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z) - An Interpretable Probabilistic Model for Short-Term Solar Power Forecasting Using Natural Gradient Boosting [0.0]
We propose a two-stage probabilistic forecasting framework able to generate highly accurate, reliable, and sharp forecasts.
The framework offers full transparency on both the point forecasts and the prediction intervals (PIs).
To highlight the performance and the applicability of the proposed framework, real data from two PV parks located in Southern Germany are employed.
arXiv Detail & Related papers (2021-08-05T12:59:38Z) - On the Lack of Robust Interpretability of Neural Text Classifiers [14.685352584216757]
We assess the robustness of interpretations of neural text classifiers based on pretrained Transformer encoders.
Both tests show surprising deviations from expected behavior, raising questions about the extent of insights that practitioners may draw from interpretations.
arXiv Detail & Related papers (2021-06-08T18:31:02Z) - Counterfactual Maximum Likelihood Estimation for Training Deep Networks [83.44219640437657]
Deep learning models are prone to learning spurious correlations that should not be used as predictive clues.
We propose a causality-based training framework to reduce the spurious correlations caused by observable confounders.
We conduct experiments on two real-world tasks: Natural Language Inference (NLI) and Image Captioning.
arXiv Detail & Related papers (2021-06-07T17:47:16Z) - Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z) - A Locally Adaptive Interpretable Regression [7.4267694612331905]
Linear regression is one of the most interpretable prediction models.
In this work, we introduce a locally adaptive interpretable regression (LoAIR).
Our model achieves comparable or better predictive performance than the other state-of-the-art baselines.
arXiv Detail & Related papers (2020-05-07T09:26:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.